Hacker News | new | past | comments | ask | show | jobs | submit | login | mark_round's comments

You're about the 5th person now in as many days who has recommended Elixir when I mentioned I was building a project in Ruby. I'll definitely have to check it out for my next project (whatever that may be!)

Can you expand on why you found it so appealing, or any "holy crap, this is awesome" things I should look at first?


Not the guy, but I used Rails at my old job for a year and a half, and in some personal projects. I looked into Elixir (and Phoenix) during that time, and Phoenix felt like it was designed for more modern websites, whereas RoR was built for older ones and tries to adapt to handle modern needs. When you want to do something more responsive, Elixir feels designed for it, while in Rails it feels like you're doing something unorthodox or added as an afterthought. Obviously this isn't quite accurate, but it's the vibe I got.

Elixir is also a very cool language in a lot of ways. I wouldn't go all in on Elixir/Phoenix, but that's because there's not a huge demand for it, at least where I reside. I would 100% consider it for some smaller projects though, if I had to choose between that and Rails, and I wouldn't mind having to get more comfortable with Elixir.

Edit: I haven't used Rails 8, and haven't followed the ecosystem since a bit before, so not sure how this feels nowadays. I *really* enjoy Rails backend though, but the frontend stuff never quite clicked.


Counterpoint on the "going all-in": we have a 7-year-old Elixir/Phoenix project that currently sits at ~100K LOC and I couldn't be happier.

It has been absolutely wonderful building this with Elixir/Phoenix. Obviously any codebase in any language can become a tangled mess, but in 7 years we have never felt the language or framework were in our way.

On the contrary: I think Elixir (and Phoenix) have enabled us to build things in a simple and elegant way that would have taken more code, more infrastructure, and more maintenance in other languages/frameworks.


I think the OP's point was the job market. I.e. you probably aren't hiring for that role.

Not OP, but I made the move from Ruby/Rails to Elixir years ago, so I'll try to answer from my perspective.

Elixir is a functional programming language based on the "BEAM", the Erlang VM. We'll get back to the BEAM in a moment, but first: the functional programming aspect. That definitely took getting used to. I remember being _very_ confused in the first few weeks. Not because of the syntax (Elixir is quite Ruby-esque) but because of the "flow" of code.

However, when it clicked, it was immediately clear how easy it becomes to write elegant and maintainable code. There is no global state in Elixir, and using macros for meta-programming is generally discouraged. That means it becomes very easy to reason about a module/function: some data comes in, a function does something with that data, and some data comes out. If you need to do more things to the data, then you chain multiple functions in a "pipe", just like how you chain multiple bash tools on the command line.
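For example, a pipeline in the spirit of that shell analogy might look like this (a trivial sketch; the `String` functions are from Elixir's standard library):

```elixir
# Each step takes the previous step's output as its first argument,
# much like piping commands together in a shell.
result =
  "  hello world  "
  |> String.trim()
  |> String.upcase()
  |> String.split()

# result is now ["HELLO", "WORLD"]
```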

The Phoenix framework applies this concept to the web, and it works very well, because if you think about it: a browser opening a web page is just some data coming in (an HTTP GET request), you do something with that data (render an HTML page, fetch something from your database, ...) and you return the result (in this case as an HTTP response). So the flow of a web request, and your controllers in general, becomes very easy to reason about and understand.
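A hypothetical sketch of that flow in plain Elixir (not real Phoenix code — Phoenix's `conn` struct and Plug pipeline work on the same principle, but the names below are made up for illustration):

```elixir
# A request flows through small functions, each returning an updated
# copy of the "conn" map; nothing mutates shared state along the way.
fetch_user = fn conn -> Map.put(conn, :user, %{name: "Ada"}) end
render_page = fn conn -> Map.put(conn, :resp_body, "Hello, #{conn.user.name}!") end

conn =
  %{method: "GET", path: "/"}
  |> fetch_user.()
  |> render_page.()

# conn.resp_body is now "Hello, Ada!"
```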

Coming back to the BEAM, the Erlang VM was originally written for large scale (as in, country size) telephony systems by Ericsson. The general idea is that everything in the BEAM is a "process", and the BEAM manages processes and their dependencies/relationships for you. So your database connection pool is actually a bunch of BEAM processes. Multi-threading is built-in and doesn't need any setup or configuration. You don't need Redis for caching, you just have a BEAM process that holds some cache in-memory. A websocket connection between a user and your application gets a separate process. Clustering multiple web servers together is built into the BEAM, so you don't need a complex clustering layer.
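As a tiny illustration of the "a cache is just a process" idea, Elixir's built-in `Agent` wraps a BEAM process holding state:

```elixir
# Start a process whose state is an empty map, then write to and
# read from it. No Redis, no locks: the process serialises access
# to its own state via message passing.
{:ok, cache} = Agent.start_link(fn -> %{} end)
Agent.update(cache, fn state -> Map.put(state, :greeting, "hello") end)
value = Agent.get(cache, fn state -> Map.get(state, :greeting) end)

# value is now "hello"
```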

The nice thing is that Elixir and Phoenix abstract most of this away from you (although it's very easy to work with that lower layer if you want to), but you still get all the benefits of the BEAM.


Something I never quite understood: the difference between a BEAM process and an operating system process. The OS has launched one (in theory) BEAM Erlang VM runtime process with N threads; are we saying "process" here to emulate the OS process model internally within the BEAM OS process, when really we're talking about threads? Or a mix of threads and other processes? I'm imagining the latter, even across the network, but am I at least on the right track here?

A BEAM process is not an OS thread. The way I understand it, a BEAM process is just a very small memory space with its own heap/stack, and a message system for communication between BEAM processes.

The BEAM itself runs multiple OS threads (it can use all cores of the CPU if so desired), and the BEAM scheduler gives chunks of processing time to each BEAM process.

This gives you parallel processing out of the box, and because of the networking capabilities of the BEAM, also allows you to scale out over multiple machines in a way that's transparent to BEAM processes.
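A minimal sketch of that out-of-the-box parallelism, using `Task.async_stream` from Elixir's standard library (each item is handled in its own lightweight BEAM process):

```elixir
# Square each number in a separate BEAM process; the BEAM scheduler
# spreads those processes across the available OS threads, and
# results come back in input order by default.
squares =
  1..4
  |> Task.async_stream(fn n -> n * n end)
  |> Enum.map(fn {:ok, result} -> result end)

# squares is now [1, 4, 9, 16]
```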


When I first started out with Elixir, it was more the overall architecture that first sold it to me. It is remarkably robust: my impression is that you can more or less yank RAM modules out of the server while it is running, and the last thing to crash will be Elixir. And it is absolutely top in class when it comes to parallel processing and scalability. Not only in how it does it internally, but also in how it abstracts this in a way that just makes sense when you are working with it.

When it comes to web development specifically, what really got me hooked was LiveView from the Phoenix framework. It keeps a persistent WebSocket connection to the client, which it uses to push DOM updates directly. Instead of the usual request/response cycle on the client side, the server holds the state and just pushes the diff to the browser. It just made so much sense.
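The model can be illustrated in plain Elixir (a toy sketch, not real LiveView code — actual LiveView diffs template slots, but the "server holds the state, send only what changed" idea is the same):

```elixir
# Server-side state and a render function producing named view parts.
render = fn state -> %{count: "Count: #{state.count}", title: "Counter"} end

old_state = %{count: 1}
new_state = %{count: 2}

# Diff the two rendered views; only the changed parts would be
# pushed down the WebSocket to patch the DOM.
diff =
  for {key, value} <- render.(new_state),
      render.(old_state)[key] != value,
      into: %{},
      do: {key, value}

# diff is now %{count: "Count: 2"} — the unchanged title is omitted
```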


I've put together a number of resources here: https://elixirisallyouneed.dev

Great site. Thanks.

I am/was a huge Ruby fanboy, and I used Rails a lot and loved it (though had some criticisms around too much "magic"). I made the jump to Elixir/Phoenix around 8 years ago, and have loved it. Phoenix to me basically "fixed" all the things I didn't like about Rails (basically opacity and hard-to-find-where-it's-happening stuff due to someone metaprogramming aggressively). I will admit that I've been a functional programming fan for a very long time too. I always write my Ruby code in a functional style unless there's a good reason not to (which is increasingly rare).

I still love and use Ruby a ton for scripting, and still reach for Sinatra for super simple server needs, but Phoenix is my go-to stack these days.

I've also found the Elixir community to be amazing in the same ways the Ruby community is/was. It's not all roses; for example, there aren't as many libraries out there. Distribution is also not awesome, so for example I currently use Ruby or Rust when writing CLIs. But for anything distributed (especially web) Phoenix is amazing.

This is a self plug, but I did a conference talk introducing Ruby veterans to Elixir/Phoenix some years ago. It's probably aged a bit, but should still be pretty accurate. https://www.youtube.com/watch?v=uPWMBDTPMkQ

The original conference talk is here (https://www.youtube.com/watch?v=sSoz7q37KGE), though the made-for-youtube version above is better because it's slightly updated, and I didn't run out of time :-)


The "one-person framework" thing is a big draw. I'm amazed at how productive I was in it, and it's not just at the code level. Even though I've been doing sysadmin/devops/architect work for over 25 years now, it's just so damn nice now not to have to think about e.g. standing up a HA PostgreSQL cluster or Redis and deployment is largely a solved problem.

> not to have to think about e.g. standing up a HA PostgreSQL cluster or Redis

I don't understand...Rails does not replace a HA PostgreSQL cluster or Redis, they are orthogonal. Why would you not have to think about them?


Author of the article here (hi! Anxiously watching my Grafana stack right now...)

I've only just noticed that on the Rails homepage, and while I acknowledge everyone's chasing that sweet sweet AI hype, I gotta say that's... disappointing[1]. The reason I fell in love with Ruby (and by extension, Rails) is because it enabled me as a human to express myself through code. Not to become a glorified janitor for an LLM.

[1]=Well, I had a stronger response initially but I toned it down a bit for here...


Definitely. It really makes me wish it was getting more attention - and I know I'm late to the party having only picked it back up over a year after Rails 8 was released! It's just such a smooth experience and I haven't found anything like it that compares.

The thing that really impresses me is how it's become a "one person framework"[1] and thanks also to the "batteries included" approach, you can run everything with zero external service dependencies. I have no problem with managing other services like a cache or DB, but it's just so damn nice to be able to focus on the code and not have to context switch!

[1]=Tons of posts and presentations I'm discovering now referring to that. EG https://mileswoodroffe.com/articles/rails-the-one-person-fra...


Author here, thanks for posting this! Any questions, comments or "You're wrong and this is why" let me know :) I do find myself wondering about the future of Rails (and I guess the wider Ruby ecosystem) though. I'm definitely in the "you can prise it from my cold, dead hands" camp but after years of watching them both slide down developer surveys it does make me concerned.

I'm kinda attached to "odd" outsider technologies like the Amiga and BeOS (which does make me wonder if there's a common thread there) so am used to seeing old packages and documentation gradually fade away but that's clearly not something that points to a sustainable future.

There's enough of the core components still active and after 20-odd years you could just say "it's done" (as I allude to in the Wrap Up) but I do wonder how many here would start a new project on Rails or make a Ruby platform a critical part of a new start-up?


If you'd like to experiment with running your own AS in private address space, connecting to a friendly network of geeks over wireguard tunnels, check out DN42 https://dn42.dev/Home.

It's a great way to explore routing technologies and safely experiment with your own AS, running the same protocols as the "real" Internet, just in private space.

If you do get set up, give me a shout (https://markround.com/dn42), I'd be happy to peer with you if you want to expand beyond the big "autopeer" networks :)


This is really an amazing resource. If you don't know BGP and how to grok ASes, you aren't a fully actualized IP networking human.


This phrasing made me envision a future where I have 90% android replacement parts, and I actually need to know.


There's a sci-fi story in there about an android manual injecting routes to get around a failed limb.

It's been fun explaining to our cloud engineers that BGP is pretty useful in AWS. Most had never touched it after they got their CCNP/CCIE. My networking cred went up a bit.


Author here, hi! Was just venting last night, but that's a very good point, I'll update it later with your correction :)


You should make it about CT logs. I believe you need to compromise at least three of them.


That was what I was thinking of (but worded it badly in the middle of my rant!)

If I wanted to intercept all your traffic to any external endpoint without detection I would have to compromise the exact CA that signed your certificates each time, because it would be a clear sign of concern if e.g. Comodo started issuing certificates for Google. Although of course as long as a CA is in my trust bundle then the traffic could be intercepted, it's just that the CT logs would make it very clear that something bad had happened.


The whole point of the logs is that they're tamper-evident. If you think the certificate you've seen wasn't logged you can show proof. If you think the logs tell you something different from everybody else you can prove that too.

It is striking that we don't see that. We reliably see people saying "obviously" the Mossad or the NSA are snooping, but they haven't shown any evidence that there's tampering.


> We reliably see people saying "obviously" the Mossad or the NSA are snooping but they haven't shown any evidence that there's tampering

Why would they use the one approach that leaves a verifiable trace? That'd be foolish.

- They can intercept everything in the comfort of Cloudflare's datacenters

- They can "politely" ask Cloudflare, AWS, Google cloud, etc. to send them a copy of the private keys for certificates that have already been issued

- They either have a backdoor, or have the capability to add a backdoor in the hardware that generates those keys in the first place, should more convenient forms of access fail.


> Why would they use the one approach that leaves a verifiable trace?

It is NSA practice to avoid targets knowing for sure what happened. However their colleagues at outfits like Russia's GRU have no compunctions about being seen and yet likewise there's no indication they're tampering either.

Although Cloudflare are huge, a lot of transactions you might be interested in don't go through Cloudflare.

> the hardware that generates those keys in the first place

That's literally any general purpose computer. So this ends up as the usual godhood claim, oh, they're omniscient. Woo, ineffable. No action is appropriate.


That's the most naive take I've read online this year.

So your stance is that spy agencies aren't spying on us because if they were, we'd know about it?


Your "I bet they're God" stance is even more naive. They're not God, they've got a finite budget both in financial terms and in terms of what will be tolerated politically.

Of course spooks expend resources to spy on people, but that's an expenditure from their finite budget. If it costs $1 to snoop every HTTP request a US citizen makes in a year, that's inconsequential so an NSA project to trawl every such request gets green lit because why not. If it costs $1000 now there's pressure to cut that, because it'll be hundreds of billions of dollars to snoop every US citizen.

That's why it matters that these logs are tamper-evident. One of the easiest ways to cheaply snoop would be to be able to impersonate any server at your whim, and we see that actually nope, that would be very expensive, so that's not a thing they seem to do.


That's never been my stance because there's a difference between mass surveillance and targeted surveillance. If you understood that then you wouldn't be getting lost and making silly references to "God".

I don't believe that the NSA is omniscient. I believe they have 95% of data on 95% of the population through mass surveillance, and 99.9% of data on 99.9% of people of interest through targeted surveillance.

You think abusing public CAs for mass surveillance is a genius idea, and that its lack of real-world abuse proves that mass surveillance just doesn't happen - full stop.

Unfortunately you fail to consider that if they tried to do this just once, they would be detected immediately, offending CAs would be quickly removed from every OS and browser on the planet, the trust in our digital infrastructure would be eroded, impacting the economy, and it would likely all be in exchange for nothing.

On the other hand if you're trying to target someone then what's the point of using an attack that immediately tips off your target, that requires them to be on a network path that you control, and that's trivially defeated if they simply use a VPN or any sort of application-layer encryption, like Signal? There is none.


> They either have a backdoor, or have the capability to add a backdoor in the hardware that generates those keys in the first place

> That's never been my stance

It took you about a day to go from being absolutely sure of a thing, to absolutely sure you've never believed that thing.


The first quote was about them having nearly unlimited power for targeted surveillance and the second was about not having such power for mass surveillance. You keep confusing them.

Just stick to your original claim that I responded to - I addressed it in the second half of my previous comment which you glossed over.


There's no "nearly" in your statement. "a backdoor, or have the capability to add a backdoor in the hardware that generates those keys" is the same God powers claim again. If you now want to water it down with enough caveats it's nothing, this reminds me of how people go from "In lab conditions we can do a timing attack on the electronics from a FIDO key" to imagining that outfits like this just routinely bypass FIDO and so it's worthless.

It's very difficult and expensive to attack our encryption technologies, and so it's correspondingly rare. We are, in fact, winning this particular race.

Encryption actually works not because surveillance is now utterly impossible but because it's expensive. How you went from my pointing out that there's no evidence of this mass surveillance to the idea that I'm claiming these outfits don't conduct targeted surveillance at all I cannot imagine.


> How you went from [...] to the idea that I'm claiming these outfits don't conduct targeted surveillance at all

Again, I didn't. You concluded that the lack of evidence of public CA abuse indicates lack of surveillance, full stop, as if that's the only viable way of conducting surveillance. Here's a reminder:

> It is striking that we don't see that. We reliably see people saying "obviously" the Mossad or the NSA are snooping but they haven't shown any evidence that there's tampering

That's a reasonable observation with an unsupported and faulty conclusion. It doesn't even matter whether you meant mass surveillance (preceding context) or targeted surveillance here because the conclusion is bunk either way. I discussed that earlier but you keep glossing over it in favor of these absurd tangents.


> It is striking that we don't see that

It probably just means they are asking the providers to hand over the data, no need to perform active attacks.


There is also Spectranet[1] and clones for the Sinclair Spectrum, which allows for a much richer Internet-connected experience. It can load and boot remote programs from a server which allows you to get quite creative and produce sites like my TNFS server[2]. You can also try it out from an emulated Spectrum in a web browser at https://jsspeccy.markround.com if you don't have the original hardware lying around to see the sort of stuff you can build!

There's also Telnet clients so you can access old-school BBSes, and a variety of interesting "bridges" that grant access to Gopher or even parse websites. Quite amazing to access the modern Internet on an 8-bit machine from the early 80s that originally loaded games from cassette tape :)

[1]=https://www.bytedelight.com/?page_id=3515

[2]=https://tnfs.markround.com


Once you have telnet you can just get an SDF account and do anything you want with a Unix shell. And, if you fire up Emacs, you are god. IRC, email, Jabber, Mastodon, Gopher, Gemini, a calculator, a Lisp environment, playing ZMachine games with Malyon (which spawns full v5 and v8 games, unlike the Speccy, which could only handle v3 ones)...


It's a lot of fun, still! I touched on it in my "Amiga Systems Programming in 2023" post[1] which had some discussion here[2]. In the few years since then there's been lots of development still across the whole scene. OS4 is largely stagnating (although I still fire up my X5000 whenever I have a chance) but the classic 68k scene is positively thriving.

Lots of great software & homebrew games, and the hardware options now are just amazing. There's FPGA, emulation, PiStorm accelerators, Vampire, re-amiga... and only this month, Hyperion released an updated OS3.2[3].

It was (and is) such a versatile, forward-thinking platform and I still very much enjoy seeing how far the community can take it.

[1]=https://www.markround.com/blog/2023/08/30/amiga-systems-prog...

[2]=https://news.ycombinator.com/item?id=37389376

[3]=https://www.hyperion-entertainment.com


A tangent I know, but looking at those old screenshots really made me miss that era of OS X. The first versions of Aqua with pinstripes were a bit busy for my liking, but by the Mountain Lion time frame it was just lovely. Actual buttons! Soft gradients! Icons that had colour!


I am still very sad that the point we started getting high-DPI displays everywhere was about the same time we decided to throw away rich icons and detail in UIs in favour of abstract line art and white-on-white windows.

Maybe it was on purpose? Those fancy textures and icons are probably a lot more expensive to produce when they have to look good with 4x the pixels.

iOS 4 on an iPhone 4 and OS X whatever-it-was that was on the initial retina MacBook Pros are still very clear in my memory. Everything looked so good it made you want to use the device just for the hell of it.


It’s because the higher the resolution, the worse those kinds of design effects look. It’s why they’re not much used in print design and look quite tacky when they are.

At low resolutions you need quite heavy-handed effects to provide enough contrast between elements, but on better displays you can be much more subtle.

It’s also why fonts like Verdana, which were designed to be legible on low resolution displays, don’t look great in print and aren’t used much on retina interfaces.


The font point aside, which I do agree with, the rest of your comment sounds very subjective to me.

I too prefer more distinction between different UI elements than is fashionable in recent years - and, make no mistake, that’s all it is: fashion - and don’t see why higher resolutions preclude that. That’s not to say we have to ape what was done 10 or 15 years ago, but we can certainly take things in a more interesting and usable direction than we’ve chosen to do since around 2013.

I find myself clicking the wrong window by mistake a lot more frequently than I did back in the day due, I think, to current design trends.


I don't understand why the effects would look worse at higher resolution, or how they add contrast. The tacky part I do understand, as well as the point about screen fonts like Verdana.

To choose a relevant counter-example: the Macintosh System Software prior to version 7 was also very flat. System 7 to 7.5.5 introduced depth in a subtle and limited manner. It was only around System 7.6 that they started being heavy-handed, something that I always attributed to following the trends in other operating systems.


It’s because at higher resolutions you can see the flaws more easily.

They’d have to be implemented perfectly every time, otherwise the whole thing becomes a mess. Not everyone will bother to do this.

Also, often when creating designs, things look better the more you take away rather than add.


There are a couple of places where macOS still has Aqua-style icons. Not sure I should name them in case someone sees this and removes them, but... eh... set up a VPN but leave the credentials blank so you're prompted when you connect: that dialog has a beautiful icon.

It looks _just fine_ on a Retina display.

When Retina displays were introduced with the iPhone 4, gel-style iOS also looked just fine.

In print, we're interacting with paper and a fake reflective style looks odd. On a computer, we're interacting with glass and something reflective or high-detail feels very suitable. It matches the look to the medium.


> the point we started getting high-DPI displays everywhere was about the same time we decided to throw away rich icons and detail in UIs in favour of abstract line art and white-on-white windows.

I might have an alternative explanation.

I often think about something I saw, a long time ago, on one of those print magazines about house decoration, which also featured sample house blueprints. That particular issue had a blueprint for a house which would be built on a terrain which already had a large boulder. Instead of removing the boulder, the house was built around it; it became part of the house, and guided its layout.

In the same way, the restrictions we had back then (lower display resolutions, reduced color palette, pointing device optional) helped guide the UI design. Once these restrictions were lifted, we lost that guidance.


> Maybe it was on purpose? Those fancy textures and icons are probably a lot more expensive to produce when they have to look good with 4x the pixels.

That's an interesting observation. If it was indeed on purpose, I wonder whether they were weighting it based on the effort on Apple's designers/developers/battery usage or the effort it would have drawn from 3rd party developers.


The stark whiteness of "light mode" colors that've become standard since the rise of flat UI is, I believe, a greatly under-credited cause of the increased popularity of dark mode. Modern light mode UI is not fun to look at even at relatively low screen brightness, whereas the middle grays it replaced were reasonably easy on the eyes even at high brightness.


I've also noticed that as screens got larger, screen real estate got cheaper, so UI design doesn't require as much effort, and it shows.


Nah, it's because of mobile.

All flat boxes is easier to do with 1,000+ different screen resolutions.


Ah yes, when it hit, one of the first things I did was to replace as much of the Denim and "AquaFresh Stripes" as possible with a flat gray texture.

https://forums.macnn.com/showthread.php?t=26300


Long live Snow Leopard! It made my mac fly. A whole release dedicated to making Leopard better. It was amazing, peak macOS.


100% agree; if I could revive it to run it on modern arm hardware I would in a heartbeat.


Leopard was my first Mac OS, and Snow Leopard (obviously) my second, and boy was it great. I miss it so much...


I run an iMac G4 with 10.5 as a home music player. The strange thing is that it feels so easy to use. All the ingredients are the same in modern macOS but the feel is very different.

It’s hard to say why. Clarity in the UI is a big one (placement and interaction, not the theme, ie what we’d call UX today). But the look of the UI (colour, depth) really adds something too. Seeing a blue gel button sparks a sense of joy.


I disdain the modern UI, especially how it treats the scrollbar. On MacOS, even with "Always Show Scrollbar" turned on, applications and web pages try their worst to hide scrollbars or make them unclickable for the users. Check the webpage of ChatGPT for example.

I don't know who the hell had the original idea to do that, but I'll curse in my head for eternity.


Not to mention the macOS scrollbars are ugly as sin now.


Yeah, but I'm mostly focused on the functionality: it's too narrow and sometimes not clickable when you are just off by a bit. The Windows one is ugly AF too but at least it's wider.


> by the Mountain Lion time frame it was just lovely. Actual buttons! Soft gradients! Icons that had colour!

You may be thinking of Tiger, because Apple already started removing color from Finder icons and such in Leopard.

Leopard also introduced a transparent menu bar and 3D Dock.


flat UI was the triumph of mediocre repeatability over humane user interaction


If only there was theming available to recreate those old formatting and styles.


Copland, the failed OS that NeXT was acquired to replace, had themes.

https://lowendmac.com/2005/apples-copland-project



Mac OS 8.5 and above technically had theming support as well (presumably salvaged from Copland), but Apple removed the themes from the final version of 8.5 and never released any of them. I'm not sure many 3rd-party ones were made either; as another commenter notes, Kaleidoscope was already fairly established as the theming tool for classic Mac OS, and worked with older releases like System 7.


For me, seeing old OSes always reminds me of the bad stuff: slow CPUs, slow networking, slow disks, limited functionality.

Maybe I'm a bit too negative but for example when people romanticise stuff from the middle ages I can't help but think of how it must have smelled.


Those who romanticize the past tend to highlight the best points and gloss over the low points, which is likely better than dismissing it altogether.

It's also worth noting that some points mentioned either didn't matter as much, or aren't true in an absolute sense. Slow networking wasn't as much of an issue since computers as a whole didn't have the capacity to handle huge amounts of data, while limited functionality depends on the software being used. On the last point, I find a lot of modern consumer applications far more limiting than older consumer applications.


Slow networking? Most people’s networking hardware is still only as fast as the best PowerMac you could buy over 20 years ago. Only in the last few years has 2.5GbE become noticeably common.


20 years ago most people were on ISDN.


And two years ago I was still on a 500Mbps download coax connection with my ISP, that has no bearing on the network hardware in my LAN.


To me, Finder often seems slower now with SSDs and Apple silicon than it was with spinning drives and PPC. And the Mac boots slower!

Apple's software today is poorly optimized. They're depending on hardware to do all the work.


OS X on a Power Mac G4 Quicksilver is a far better user experience in terms of responsiveness and consistency than Raspberry Pi OS on a Raspberry Pi 3, even though the Pi benchmarks faster.

