As a long-time Erlang developer I was surprised when I first heard of Elixir, and even more surprised at how nice it was.
That said, I think Elixir is very un-Erlangish. It’s very much Ruby ported to a new platform: it’s heavy on metaprogramming with macros and on using a giant solve-all-your-problems web framework (Phoenix). Compared to Ruby on Rails, though, it makes a lot of good arguments. It’s faster, mix is a better package manager, and the BEAM is reliable at small to medium scale.
Of course, I recently posted about how I was moving away from the BEAM to Rust. I think Elixir has reached peak adoption, and I think the days of large frameworks like Rails and Phoenix that ask you to program in a DSL are sunsetting in favor of client-side frameworks and smaller web services built on things like Go’s net/http.
I also think the reliability of the BEAM is now happening at the level of orchestration, with containers and serverless. The BEAM itself doesn’t fit well into the cattle model of reliability, and it has scalability issues when you reach a certain cluster size.
Erlang is also quite slow at processing data and performing computations. What it does provide is reliable latency under load as you scale, which lets you grow your business quickly without buying more hardware or performing software miracles.
I’m more optimistic about Rust because it provides the reliability I’ve come to expect from Erlang. We’ve been able to scale Rust to handle the same load as our Erlang cluster with a more manageable code base, lower average client latency, and lower memory usage on the same number of machines.
I also know that people like to treat dynamic vs. static typing as a matter of taste, but in my experience dynamic typing only works until you reach a certain level of scale. I’ve watched several Python and Erlang codebases turn into incomprehensible messes because of this. This is why we’re seeing things like TypeScript and mypy become requirements rather than nice-to-haves. So I’d rather just bite the bullet and get a really nice static type system up front.
I'd say the opposite. Core libraries have reached stability and engineering hours are switching to making easy what classic imperative/OOP paradigms have trouble doing:
- Dynamic web development without giant JavaScript dependencies (LiveView, Drab, etc.)
- Scalable distributed systems without giant teams (Firenest, Phoenix Channels/PubSub)
- Concurrency and data infrastructure (Flow, GenStage, OTP)
- Reactive event-driven systems: OTP and the Actor model make event-sourcing easier than you'll see anywhere else.
I like both Rust and Elixir, and I'm not sure I would use Rust for everything.
Case in point: I'm working on a fairly simple app. It runs continuously in the background; every minute it downloads one file, and every second it downloads a different file. These files are JSON, get parsed into domain structs, analyzed a bit, and occasionally persisted to a DB.
I wrote this app in Rust first, as a side project to better understand Rust. The time I was seeing to parse a 50KB JSON file, turn parts of it into a map of structs, compare that map to the previously downloaded map, and log certain differences was maybe 3-5ms (compiled with --release).
I recently re-wrote that app in Elixir, and the time it took to do all that was... 3-5ms. If I were doing crazy calculations I'm sure Rust would be faster, but I was actually surprised that in this real-world workload they were comparable in speed. And that's without dealing with a DB. Once you pull a DB in, the negligible processing time could be dwarfed by it anyway.
On top of that, the whole idea of starting up several processes to download each of the different JSON files on their own loop, and having them monitored in the supervision tree, was a big win in Elixir, compared to spawning a couple of threads and hoping they don't panic.
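To make the supervision-tree point concrete, here is a minimal sketch of what that shape looks like in Elixir. All names (`Poller`, `:minute_feed`, `:second_feed`) are hypothetical, and the download work is stubbed out with a no-op function; the point is that each loop is its own supervised process, restarted independently if it crashes.

```elixir
defmodule Poller do
  # A minimal periodic worker: calls `fun` every `interval_ms` milliseconds.
  # If `fun` crashes, only this process dies, and the supervisor restarts it.
  use GenServer

  def start_link({name, interval_ms, fun}) do
    GenServer.start_link(__MODULE__, {interval_ms, fun}, name: name)
  end

  @impl true
  def init({interval_ms, fun}) do
    # Kick off the first tick immediately.
    send(self(), :tick)
    {:ok, {interval_ms, fun}}
  end

  @impl true
  def handle_info(:tick, {interval_ms, fun} = state) do
    fun.()
    Process.send_after(self(), :tick, interval_ms)
    {:noreply, state}
  end
end

# One supervised worker per feed; a crash in one loop never takes
# down the other, and :one_for_one restarts only the crashed child.
children = [
  Supervisor.child_spec({Poller, {:minute_feed, 60_000, fn -> :ok end}}, id: :minute),
  Supervisor.child_spec({Poller, {:second_feed, 1_000, fn -> :ok end}}, id: :second)
]

{:ok, sup} = Supervisor.start_link(children, strategy: :one_for_one)
```

Compared to hand-rolled threads, the restart and monitoring behavior here comes for free from OTP rather than from code you write.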
We replaced a little over 80k lines of Erlang with around 50k lines of Rust (not all at once, of course). The biggest change was that memory usage went way down. Erlang likes to run on nodes with huge amounts of RAM for process mailboxes. We cut down our peak usage by almost 30%. That’s huge. Latency is also significantly better for the percentiles we’re looking at. The best part is that the code base is way easier to manage, and it’s much easier to keep adding to the code base without causing regressions.
Being a rewrite means that you understand the requirements well, a factor which might play a big part in the performance story, regardless of language/platform.
> using a giant solve all your problems web framework (Phoenix)
Other people have said the same thing, but I want to echo the point that Phoenix doesn't need to be used as a large framework. I work on a large, production Elixir app, and all of our business logic is implemented as separate apps that don't even include Phoenix as a dependency. They're just structs, modules, functions, and supervision trees.
We then depend on those apps from our various frontends (web, api), which is where the Phoenix integration happens, and at that point it's more or less just template rendering and routing.
Phoenix is really a very small framework, and its codebase can be easily understood in just a couple of days. Phoenix being "large" is a common (but unfounded) criticism that seems to stem from Elixir's superficial resemblance to Ruby.
I mean, I read the codebase, which is why I'm saying it's small and easily understood. Sure, it has some specific things to know, and some ("magical") helpers like automatic view functions. But it's no more its own platform than any other micro-framework.
I'd be interested to know what you believe the ups and downs are, aside from speed.
Criticizing Phoenix for being the Rails of the Elixir ecosystem is particularly ironic given Phoenix's creation and design was heavily motivated by incorporating the lessons from Rails' successes and failures.
Phoenix has less 'magic', for one. While that sometimes does mean there's a bit more boilerplate, I think it strikes a nice balance, and the 1.4 release actually removed some magic.
Another one is the ORM, Ecto. It sticks much closer to SQL and I strongly prefer it over ActiveRecord. Ecto also includes the concept of changesets, which is a wonderful way to deal with validating user data and definitely an improvement over the Rails approach.
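For readers unfamiliar with changesets, here is a small sketch of the idea. This assumes the `ecto` package is available as a dependency (it is not part of Elixir's stdlib), and uses a schemaless changeset so no database or schema module is needed; the field names and params are made up for illustration.

```elixir
# A changeset pipelines casting and validation over untrusted input,
# accumulating errors instead of raising.
import Ecto.Changeset

types = %{email: :string, age: :integer}
params = %{"email" => "user@example.com", "age" => "not a number"}

changeset =
  {%{}, types}
  |> cast(params, Map.keys(types))
  |> validate_required([:email])

# "not a number" fails the :integer cast, so the changeset is invalid
# and carries a structured error for the :age field.
changeset.valid?
```

The nice property is that validation is explicit data transformation at the boundary, rather than callbacks attached to a model object as in ActiveRecord.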
And finally, while I wouldn't necessarily call it a 'failure' on Rails' part, the channels functionality in Phoenix is awesome for the more real-time apps we have these days, and Phoenix is generally quite a bit faster than Rails.
The optional-parentheses syntax is pretty similar and arguably a bit unfortunate, though wider adoption of the code formatter seems to largely correct this. It's nice in the console to type a bit less, but I can't imagine the utility of this in code that any other person is going to read.
Oftentimes, when writing a macro or a function that should appear as though it's part of a DSL, the advantage of omitting parentheses can shine. That's why parentheses are optional with a lot of the Ecto macros.
Optional parentheses are pretty much a requirement considering how much of Elixir is actually 'just' macros. I'm very happy that the formatter adds them for me as much as possible though, because I'm also not a fan of how far Ruby takes it.
As a long-time Emacs user, I recently bit the bullet and started working with VSCode (with vscode-elixir and ElixirLS installed).
...Gods, what a relief. The automatic formatting on save, especially, is just a lifesaver. At a certain point I would not mind having one formal syntax enforced, like Go does.
Ecto is practically the only library I use where I don't pay attention to parentheses. And the formatter has me covered afterwards.
> I think Elixir has reached peak adoption, and I think the days of large frameworks like Rails and Phoenix that ask you to program in a DSL are sunsetting in favor of client side frameworks and smaller web services built off of thinks like Go’s net/http.
Is this just a hunch or do you have numbers to back it up?
edit: This is the second Elixir/Erlang thread you have attempted to hijack by spreading FUD about them and pitching Rust as an alternative. You come across as a Rust shill.
You don't have to use Phoenix... I'm currently doing a plug-only deploy and it feels great.
> I’m more optimistic about Rust because it provides the reliability I’ve come to expect from Erlang.
I'm very curious about Rust myself. How does it protect you against weird things happening, like errors in drivers or dropped packets, for example?
Phoenix "is", fundamentally, just Phoenix.Endpoint, Phoenix.Router and Phoenix.Socket. In other words, it's a slightly fancier version of Plug that manages its own Cowboy instance and supports abstracted websocket message delivery.
Everything else is up to you; you can use whatever you like. The `mix phx.new` task generates a rather "complete" skeleton, but that's just a suggestion; none of it is required to make Phoenix work.
The other stuff in that Phoenix skeleton isn't so much "a framework" (in the sense of e.g. Ruby on Rails's ActionPack, where you don't tend to see people using one of its components without the others, because they're all their own weird shape and only intuitively fit together with their "own kind"). Instead, the Phoenix skeleton stuff is just a set of common libraries that all get used regularly outside of Phoenix, and happen to each suit a need in web dev.
Or, to put that another way: Ruby on Rails is like GNOME (a bunch of libraries written by one group that—even if they are theoretically componentized and able to be used separately—nobody actually uses standalone if they're not making a "GNOME application"); Phoenix is like LXDE (a putting-together of a bunch of regular libraries that already existed to solve a problem.)
> The BEAM itself doesn’t fit well into the cattle model of reliability
One day I hope people will realize that a running BEAM process isn't a thing to put in a container; it is, itself, a hypervisor. You could have a Docker execution driver that deploys Erlang "containers" as relups, and that would make perfect semantic sense. (It just wouldn't be sandboxed in any way, so it's a security no-no for now.)
Picture a k8s cluster containing a mixture of Windows and Linux hypervisor nodes. A Windows-ABI container would get deployed to a Windows node; a Linux-ABI container would get deployed to a Linux node. The Windows "nodes" might actually be VMs running on Linux; the Linux "nodes" might actually be VMs running on Windows. Doesn't really matter, right?
Well, a BEAM-ABI container should get deployed to a BEAM node. That could, theoretically, be an http://erlangonxen.org/ instance; but it's probably a BEAM VM running on Linux. No difference, really. It's still a hypervisor addressable by the cluster.
> it has scalability issues when you reach a certain cluster size
The erldist protocol is made for distribution sets, not clusters. A distribution set is a fixed set of named nodes with named roles. Clustering is supposed to happen outside of the distribution protocol. (See WhatsApp's scalability talk where they talk about "meta-distribution" by connecting entire distribution-sets to one-another to form clusters of dist-sets.)
Effectively, ERTS is built to be used in RAID1+0 configurations: within each dist-set, you have a named master and a named hot-standby and some other named accessory nodes; and then you horizontally scale by partitioning your data across clones of this dist-set. (This is, specifically, how Mnesia is meant to be used. You don't have a 1000-node Mnesia cluster; you have a cluster of 250 4-Mnesia-node dist-sets, with data routers in front.)
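The "data routers in front" part can be sketched in a few lines. This is illustrative only: the node names and the number of dist-sets are made up, and a real router would also handle failover within a set. The key idea is that routing uses a stable hash over a fixed list of small named groups, so no node ever needs a full-mesh connection to the whole cluster.

```elixir
defmodule ShardRouter do
  # Hypothetical topology: four dist-sets, each a master plus hot-standby.
  # Clustering happens here, above the distribution protocol, not inside it.
  @dist_sets [
    [:"db1a@host1", :"db1b@host2"],
    [:"db2a@host3", :"db2b@host4"],
    [:"db3a@host5", :"db3b@host6"],
    [:"db4a@host7", :"db4b@host8"]
  ]

  # :erlang.phash2/2 gives a stable hash in 0..n-1, so the same key
  # always routes to the same dist-set no matter which router computes it.
  def dist_set_for(key) do
    Enum.at(@dist_sets, :erlang.phash2(key, length(@dist_sets)))
  end
end
```

Scaling out then means cloning another dist-set and growing the list, rather than growing one giant erldist mesh.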
That said, you can ignore/replace erldist and just do clustering of ERTS nodes, if you like. Like in Lasp (https://lasp-lang.readme.io/).
> in my experience dynamic typing only works until you reach a certain level of scale
Erlang (and Elixir) were created specifically for writing the code that takes untyped data (e.g. binary packets) apart, assigning it temporarily into data structures so it can be analyzed and pattern matched, to try to figure out what its type should be.
There's no real way to get away from this; it's just an infinite regress. Use a typed RPC protocol like gRPC? You'll need to write the gRPC client/server and protobuf parser that your typed language uses. What language is best to write those parts in? Probably not one where everything has to already use static types. Parsing in e.g. C++ or Java is ugly. (Look at the internals of LLVM.) Erlang is honestly the best language I can think of to write those things in—not necessarily in the median case (Haskell's parser-combinators are simple if your use-case suits them), but in the general case, where there's no guarantee that the message grammar you need to parse is context-free or in any way well thought out.
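To show why Erlang/Elixir is so pleasant for taking untyped bytes apart, here is a tiny parser for a hypothetical tag-length-value wire format (the format and the `Frame` struct are invented for illustration). The binary pattern match in the function head is the entire parser: sizes, endianness, and the leftover bytes all fall out of the match.

```elixir
defmodule Frame do
  # Hypothetical wire format: 1-byte tag, 2-byte big-endian length,
  # then `length` bytes of payload, then whatever comes next.
  defstruct [:tag, :payload]

  # The pattern match does the parsing: if the binary doesn't have
  # exactly this shape, the clause simply doesn't match.
  def parse(<<tag::8, len::16-big, payload::binary-size(len), rest::binary>>) do
    {:ok, %Frame{tag: tag, payload: payload}, rest}
  end

  # Truncated or malformed input falls through to a tagged error,
  # with no exceptions and no manual bounds checks.
  def parse(_other), do: {:error, :malformed}
end
```

The result of a successful parse is a typed-looking struct you can pattern-match on downstream, which is exactly the untyped-boundary-to-typed-core handoff described above.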
In my ideal world, there is a system architecture that consists of two languages—one that deals with untyped data and IO, and another one that deals with only statically-typed data, and is probably functional-with-exceptions. These two languages would be learned together. You'd write your code in an actor model, where the "outside" of an actor—the place where its ABI interfaces to the outside world—is defined in the untyped language; and then, once you've massaged the received message into the typed language, you'd write—inline—code in the typed language that handles that now-typed struct, doing something with it, and eventually returning a typed result from that sorta-like-asm{}-in-C scope. Then the untyped language would take over again, translating the response from the inner scope into a binary to go out through the actor's ABI.
I'm basically describing how Erlang works if you write the core of all your logic as NIFs in Rust/Haskell/etc. It's just clumsy, because there's no "first class" static language that Erlang embeds inline, rather making you build separate dynamic-library files using traditional tooling and then load them.
Rust sucks especially for the problems that Elixir is good at. Maybe when writing some system tools I'd consider Rust, but so far it has been an absolute pain to use. I can't even get the code to compile for hours, while in Elixir I've gone and done the same thing in a flash with almost zero errors. Almost like Haskell's "if it compiles, it works". No need to attack Elixir with Rust everywhere.