
A Comparison between Rust and Erlang

Posted by Krishna Kumar Thokala, reviewed by Sergio De Simone, on Mar 13, 2018. Estimated reading time: 23 minutes.

Key Takeaways

  • Erlang provides lightweight processes, immutability, distribution with location transparency, message passing, supervision behaviors and many other high-level, dynamic features that make it great for fault-tolerant, highly available, and scalable systems.
  • Unfortunately, Erlang is less than optimal at doing low-level work such as XML parsing, since dealing with anything that comes from outside the Erlang VM is tedious.
  • For this kind of use cases, one could be tempted to consider a different language. In particular, Rust has recently come to the foreground due to its hybrid feature set, which makes similar promises to Erlang’s in many aspects, with the added benefit of low level performance and safety.
  • While Rust and Erlang take completely different approaches on many key aspects of language design, including memory management, mutation, sharing, etc., where they deeply differ is at the Erlang BEAM level. The BEAM provides essential support for fault-tolerance, scalability and other foundational features of Erlang, which are not present in Rust.
  • So, although Rust cannot be seen as a replacement for Erlang, it could make sense to mix both languages in the same project to leverage their strengths.

In my two-year-long journey as a programmer on a telecom network simulator, I have used Erlang for many CPU-intensive applications by leveraging its concurrent, fault-tolerant, and distributed computing features.

Erlang, being a high-level, dynamic, and functional language, provides lightweight processes, immutability, distribution with location transparency, message passing, supervision behaviours, and more. Unfortunately, it is less than optimal at doing low-level work and is clearly not meant for that. A telling use case is XML parsing, which Erlang is not really good at: XML stanzas have to be read from the command line or from the network, and anything coming from outside the Erlang VM into it is tedious to work with. For this kind of use case, one could be tempted to consider a different language. In particular, Rust has recently come to the foreground due to its hybrid feature set, which makes similar promises to Erlang’s in many aspects, with the added benefit of low-level performance and safety.

Rust compiles to a binary and runs directly on your hardware, just like a C/C++ program would. How is it different from C/C++ then? A lot. According to its motto: “Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety”.

This article will focus on a comparison between Erlang and Rust, highlighting their similarities and differences, and may be interesting to both Erlang developers looking into Rust and Rust developers looking into Erlang. A final section will detail more about each of the language capabilities and shortcomings.

Immutability

Erlang: Variables are immutable in Erlang and once bound cannot be mutated nor can they be rebound to a different value. 

Rust: Variables in Rust are also immutable by default, but can easily be made mutable by adding the mut keyword. Rust also introduces the concepts of ownership and borrowing to manage memory allocation efficiently. For example, string literals are stored in the executable, heap-allocated strings are moved when assigned to another variable, and primitive data types such as integers (i32, i64, u32, …) and floats (f32, f64) are stored directly on the stack.
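
These defaults can be sketched in a few lines (the variable and function names are illustrative, not from any particular API):

```rust
// Returns the moved string's length and the sum of the copied integers,
// demonstrating move semantics for String vs. Copy semantics for i32.
fn move_and_copy() -> (usize, i32) {
    let s1 = String::from("hello");
    let s2 = s1;        // the String is moved; `s1` is no longer usable
    let n1 = 42;
    let n2 = n1;        // i32 is Copy, so `n1` remains valid
    (s2.len(), n1 + n2)
}

fn main() {
    let mut count = 0;  // `mut` opts into mutation; plain `let` is immutable
    count += 1;
    let (len, sum) = move_and_copy();
    println!("{count} {len} {sum}");
}
```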

Pattern Matching

Erlang: The beauty of Erlang code conciseness comes from its pattern matching capabilities, which are available everywhere — on function names, number of parameters and parameters themselves, when using case statements, and when using the = symbol.

Rust: In a let binding, the = symbol can be used for binding as well as for pattern matching. Apart from this, Rust’s match is similar to the case statement in Erlang and the switch statement in most other languages, in that it pattern-matches against multiple cases and branches to the matching one. Function/method overloading is not built into Rust, but is possible using traits. Irrefutable patterns match anything and always succeed; e.g., in let x = 5; x is always bound to the value 5. On the contrary, refutable patterns may fail to match in some instances; e.g., if let Some(x) = somevalue matches only when somevalue is a Some variant, not None. Irrefutable patterns can be used directly inside a let binding, whereas refutable patterns should be used inside if let, while let, or match constructs.
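
A small sketch of both pattern styles (the function is hypothetical, written for illustration):

```rust
// An exhaustive `match`, roughly mirroring an Erlang `case ... of` expression.
fn describe(n: Option<i32>) -> String {
    match n {
        Some(0) => "zero".to_string(),
        Some(x) if x < 0 => "negative".to_string(), // arm with a guard
        Some(x) => format!("positive: {x}"),
        None => "nothing".to_string(),              // match must cover all cases
    }
}

fn main() {
    let x = 5; // irrefutable: `x` always binds
    if let Some(y) = Some(x) {
        // refutable: this branch runs only when the pattern matches
        println!("{}", describe(Some(y)));
    }
}
```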

Looping

Erlang: Looping in Erlang is done either by recursion or by list comprehensions. 

Rust: Loops in Rust go the usual way in imperative languages, with basic looping constructs like for, while, and loop. Apart from these, iterators also exist.
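
For illustration, here is the same sum written with two of these constructs (function names are made up):

```rust
// Sums 1..=n with a `for` loop over a range.
fn sum_for(n: u32) -> u32 {
    let mut total = 0;
    for i in 1..=n {
        total += i;
    }
    total
}

// Sums 1..=n with `loop`; note that `break` can return a value.
fn sum_loop(n: u32) -> u32 {
    let (mut total, mut i) = (0, 1);
    loop {
        if i > n {
            break total; // the loop evaluates to `total`
        }
        total += i;
        i += 1;
    }
}

fn main() {
    println!("{} {}", sum_for(10), sum_loop(10));
}
```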

Closures and Anonymous functions

Erlang: Erlang has anonymous functions that are declared by enclosing their body within the fun and end keywords. All anonymous functions are closures that capture the current context and can be passed to processes on the same node or on other connected nodes. Anonymous functions add great value to Erlang’s distribution mechanism.

Rust: Rust also supports closures with anonymous functions. These also capture the environment and can be executed elsewhere (in a different method or thread context). Anonymous functions can be stored inside a variable and can be passed as parameters for functions and across threads.
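
A brief sketch of both uses; the helper function and names are illustrative:

```rust
use std::thread;

// A helper that accepts any closure implementing Fn(i32, i32) -> i32.
fn apply<F: Fn(i32, i32) -> i32>(f: F, a: i32, b: i32) -> i32 {
    f(a, b)
}

fn main() {
    let offset = 10;
    // The closure captures `offset` from its environment.
    let add_offset = |a: i32, b: i32| a + b + offset;
    assert_eq!(apply(add_offset, 2, 3), 15);

    let greeting = String::from("hello");
    // `move` transfers ownership of captured variables into the closure,
    // which allows it to run on another thread.
    let handle = thread::spawn(move || format!("{greeting} from a thread"));
    println!("{}", handle.join().unwrap());
}
```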

Lists and tuples

Erlang: Lists are dynamic, singly linked lists that can store any Erlang data type as an element. List elements cannot be accessed by index but must be traversed from the beginning every time (unlike arrays in Rust). Tuples are fixed-size and cannot be altered at runtime. Both can be pattern matched.

Rust: Similar to lists in Erlang, Rust has vectors and arrays. Arrays are fixed-size and can be used when the number of elements is known at compile time. Vectors (Vec) are growable, contiguous buffers used when the size changes dynamically; the standard library also provides VecDeque, a double-ended queue that can grow efficiently at both ends, and LinkedList for cases closer to Erlang’s lists. Rust also has tuples, which cannot be altered at runtime and can be used where a function needs to return multiple values. Tuples can also be pattern matched.
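
A compact illustration of these types (the `min_max` helper is hypothetical):

```rust
use std::collections::VecDeque;

// Returns multiple values via a tuple, a common Rust idiom.
fn min_max(values: &[i32]) -> (i32, i32) {
    let min = *values.iter().min().unwrap();
    let max = *values.iter().max().unwrap();
    (min, max)
}

fn main() {
    let arr = [1, 2, 3];          // fixed-size array, length known at compile time
    let mut v = vec![10, 20];     // Vec: growable, contiguous storage
    v.push(30);

    let mut dq: VecDeque<i32> = VecDeque::new(); // grows at both ends
    dq.push_back(1);
    dq.push_front(0);

    // Tuples can be destructured by pattern matching.
    let (lo, hi) = min_max(&v);
    println!("{arr:?} {v:?} {dq:?} {lo} {hi}");
}
```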

Iterators

Erlang: Iterators in Erlang are used with lists. The lists module provides various iteration mechanisms like map, filter, zip, drop etc. Besides this, Erlang also supports list comprehensions that take a generator which is a list and can be used to perform an action on each element of the list based on a predicate. The result is a list again. 

Rust: Vectors, double-ended queues, and arrays can be consumed by iterators. Iterators in Rust are lazy by default: unless there is a consumer at the end, the source is not consumed. Iterators provide a more natural way of consuming any list-like data type than traditional looping constructs such as for loops, as they never run out of bounds.
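
An iterator pipeline comparable to the Erlang list comprehension `[X*X || X <- L, X rem 2 =:= 0]` might look like this (function name is illustrative):

```rust
// Iterators are lazy: nothing runs until a consumer like `collect` is called.
fn squares_of_evens(input: &[i32]) -> Vec<i32> {
    input
        .iter()
        .filter(|&&x| x % 2 == 0) // keep the even numbers
        .map(|&x| x * x)          // square each one
        .collect()                // consume the iterator into a Vec
}

fn main() {
    println!("{:?}", squares_of_evens(&[1, 2, 3, 4]));
}
```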

Records and Maps

Erlang: Records are fixed-size structs defined at compile time whereas maps are dynamic and their structure can be declared/modified during runtime. Maps are similar to hashmaps in other languages which are used as a key value store.

Rust: Rust supports structs that are declared at compile time. Structs cannot be modified at runtime, e.g., you cannot add/remove members. Since Rust is a low-level language, structs can store references. References need lifetime parameters to prevent any dangling references. Rust has a standard collections library that supports many other data structures like maps, sets, sequences, etc. All these data structures can also be iterated on lazily.
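
As a sketch, a struct plays the role of an Erlang record and a HashMap the role of an Erlang map (the `Server` type is invented for illustration):

```rust
use std::collections::HashMap;

// A struct's shape is fixed at compile time, like an Erlang record.
struct Server {
    name: String,
    port: u16,
}

fn default_server() -> Server {
    Server { name: "web".to_string(), port: 8080 }
}

fn main() {
    let s = default_server();

    // A HashMap is dynamic at runtime, like an Erlang map:
    // keys can be inserted and removed as the program runs.
    let mut config: HashMap<&str, u16> = HashMap::new();
    config.insert("port", s.port);
    config.insert("workers", 4);

    println!("{} listens on {:?}", s.name, config.get("port"));
}
```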

Strings, Binaries and Bitstrings

Erlang: Strings in Erlang are just lists that store the character code of each character in a singly linked list. Therefore, prepending to a string is always cheaper than appending to its end. Binaries are special in Erlang in that they are a contiguous sequence of bytes (groups of 8 bits). Bitstrings are a generalization of binaries that store bit sequences of arbitrary sizes, such as a three-bit sequence or a four-bit sequence; their length need not be a multiple of 8. Strings, binaries, and bitstrings support higher-level convenience syntax that makes pattern matching easy. Hence, if you are doing network programming, packing and unpacking a network protocol packet is straightforward.

Rust: There are two kinds of strings in Rust. String literals (&str) are stored directly in the executable itself, neither on the heap nor on the stack, and are immutable. Strings (String) can be of dynamic size; in that case the character data is stored on the heap, with a reference kept on the stack. Strings known at compile time are stored as literals, whereas strings built at runtime live on the heap. In this way Rust identifies the memory allocation strategy at compile time and applies it at runtime.
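
The distinction in a few lines (the `shout` function is made up for illustration):

```rust
// `&str` literals live in the compiled binary; `String` is heap-allocated
// and growable, with its handle (pointer, length, capacity) on the stack.
fn shout(s: &str) -> String {
    let mut owned = String::from(s); // copies the data onto the heap
    owned.push('!');                 // growth is possible because it is owned
    owned.to_uppercase()
}

fn main() {
    let literal: &str = "hello"; // immutable, baked into the executable
    println!("{}", shout(literal));
}
```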

Lifetimes

Erlang: Variables are only bound within the scope of a function and freed at its end by a garbage collector specific to the current process. Hence, each variable has the same lifetime as the function in which it is used. This implies a program should be modularized into functions as much as possible to make efficient use of memory. Additionally, you can trigger garbage collection explicitly with erlang:garbage_collect() when needed.

Rust: Rust has no garbage collection; it manages memory using lifetimes. Each variable inside a scope (delimited by curly braces or the body of a function) is assigned a new lifetime if it is not borrowed/referenced from the parent. A variable’s lifetime does not end at the end of its scope if the variable is borrowed; it ends only at the end of the parent scope. Hence, the lifetime of every variable is managed either by the current scope or by the parent scope, and the compiler makes sure of this. Rust invisibly injects code during compilation to drop the value associated with a variable when that variable’s lifetime ends. This approach avoids the cost of running a garbage collector to figure out which variables can be freed. Rust provides fine-grained control over memory by managing lifetimes within a function: unlike Erlang functions, which trigger garbage collection at the end of a function, in Rust you can divide your code into multiple scopes using {} and the compiler will place the drop code at the end of each scope.
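
Scope-driven cleanup can be observed by logging from a Drop implementation; the `Tracked` type and `drop_order` function below are invented for this sketch:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A value that records its name into a shared log when it is dropped.
struct Tracked {
    name: &'static str,
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for Tracked {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

// Shows that values are dropped at the end of their scope, innermost first.
fn drop_order() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _outer = Tracked { name: "outer", log: Rc::clone(&log) };
        {
            let _inner = Tracked { name: "inner", log: Rc::clone(&log) };
        } // compiler-inserted drop for `_inner` runs here
    } // compiler-inserted drop for `_outer` runs here
    let order = log.borrow().clone();
    order
}

fn main() {
    println!("{:?}", drop_order());
}
```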

Variable binding, ownership and borrowing

Erlang: Erlang has a simple binding approach. Any occurrence of a variable is bound to the right-hand value if it was previously unbound; otherwise it is pattern matched. Any type in Erlang can be bound to a variable. Variables are only bound within the function context where they appear and are freed by a garbage collector specific to the current process when no longer used. Ownership of data is not transferable to a different variable: if another variable within the same function context wants ownership of the same data, it has to clone the data. This is in accordance with Erlang’s philosophy of sharing nothing, and it makes it possible to safely send a closure using a cloned value to a different node or process without data races. In Erlang there are no references and hence no borrowing. All data is allocated on the heap.

Rust: Ownership and borrowing are two powerful concepts that make Rust unique among mainstream languages. They are the very reason why Rust is considered a low-level, data-race-free language that grants memory safety without requiring a garbage collector, which in turn ensures minimal runtime overhead. Ownership of data is exclusive to a variable, meaning no other variable can share ownership of that data. Ownership is transferred to a different variable, if needed, on assignment, and the old variable is no longer valid. Ownership is also transferred if the variable is passed as an argument to a function. This kind of operation is called a move, because ownership of the data is moved. Ownership makes it possible to manage memory efficiently.

Ownership rules: Each value has exactly one owner at a time. The value is dropped when the owner goes out of scope.

A borrow happens when a value’s ownership is temporarily lent by the variable that owns it to a function or another variable, either mutably or immutably. Ownership is returned once the borrow goes out of scope, be it a function or a {}-delimited block. During a borrow, the parent function/scope has no ownership of the variable until the borrowing function/scope ends.

Borrow rules: There can be any number of immutable references to a variable, but only one mutable reference at a time within a scope. Additionally, mutable and immutable references cannot coexist within a scope.
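
A minimal sketch of the borrow rules in action (the `read_then_write` helper is illustrative):

```rust
// Borrows the vector immutably to read it, then mutably to modify it.
// The immutable borrow ends before the mutable one begins, so this compiles.
fn read_then_write(data: &mut Vec<i32>) -> i32 {
    let sum: i32 = data.iter().sum(); // immutable borrow, ends after this line
    data.push(sum);                   // exclusive mutable borrow
    *data.last().unwrap()
}

fn main() {
    let mut data = vec![1, 2, 3];
    println!("{}", read_then_write(&mut data)); // prints 6
    println!("{data:?}");
}
```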

Reference Counting

Reference counting is used to track the use of a variable by other processes/threads. The reference count is incremented when a new process/thread gets hold of the variable and decremented when a process/thread exits. The value is dropped when the count reaches zero.

Erlang: When data is passed across multiple processes in Erlang, the data is passed via a message. This means it is copied to the other process’ heap and not reference-counted. The data copied within a process is garbage-collected by a per-process garbage collector at the end of its lifetime. However, binaries larger than 64 bytes are reference-counted when passed across Erlang processes.

Rust: Whenever data is shared across threads, the data is not copied, for efficiency; it is instead wrapped in a reference counter. Shared mutable access from multiple threads additionally requires a mutex for synchronization, while references to immutable data need no lock. All related checks are done at compile time and help prevent data races in Rust.
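
A single-threaded sketch of reference counting with Rc (the `rc_counts` helper is invented for illustration; the multi-threaded equivalent uses Arc):

```rust
use std::rc::Rc;

// Returns the strong counts before, during, and after a clone.
fn rc_counts() -> (usize, usize, usize) {
    let data = Rc::new(vec![1, 2, 3]);
    let before = Rc::strong_count(&data);
    let shared = Rc::clone(&data); // increments the count; no deep copy
    let during = Rc::strong_count(&data);
    drop(shared);                  // decrements the count
    let after = Rc::strong_count(&data);
    (before, during, after)       // the value is freed when the count hits zero
}

fn main() {
    println!("{:?}", rc_counts());
}
```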

Message Passing

Erlang: Message passing in Erlang is asynchronous. If a process sends a message to another process, the message is copied into the other process’ mailbox if the lock is immediately available; otherwise it is copied to a heap fragment from which the receiving process will get it at a later point in time. This enables truly asynchronous and data-race-free behaviour, albeit at the cost of duplicating the same message in the other process’ heap.

Rust: Rust has channels. Channels are like a stream of water flowing between two points: if something is placed on the stream, it flows down to the other end. Whenever a Rust channel is created, a transmitter and a receiver handle are created. The transmitter handle is used to put messages on the channel and the receiver handle to read them. Once the transmitter puts a value on the channel, ownership is transferred to the channel, and when another thread reads from the channel, ownership is transferred to that thread. When using channels, the principle of ownership is still preserved and there is exactly one owner for every value; the value is dropped when its final owner goes out of scope.
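
A short sketch using the standard mpsc channel (the `collect_messages` function is illustrative):

```rust
use std::sync::mpsc;
use std::thread;

// Sends values through an mpsc channel; ownership of each String moves
// from the sender thread into the channel and then to the receiver.
fn collect_messages() -> Vec<String> {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        for word in ["hello", "from", "rust"] {
            tx.send(word.to_string()).unwrap(); // ownership moves into the channel
        }
    }); // `tx` is dropped when the thread ends, closing the channel
    let received: Vec<String> = rx.iter().collect(); // drains until the channel closes
    handle.join().unwrap();
    received
}

fn main() {
    println!("{:?}", collect_messages());
}
```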

Shared mutation

Erlang: Sharing is a sin in Erlang; however, Erlang allows controlled mutation using Erlang Term Storage (ETS). An ETS table can be shared across multiple processes, and writes to it are synchronized internally to prevent races. ETS can be tuned to give either high read concurrency or high write concurrency. Each table is owned by a process, and when the owning process exits (and no heir is configured), the whole table is deleted.

Rust: Being a low-level language, Rust provides a way to share and mutate resources. By combining reference counting with a mutex, access to a resource can be synchronized for mutation from multiple threads. When all the threads sharing a resource have exited, the resource is dropped by the last one to release it. This provides a clean and efficient way of sharing, mutating, and destroying resources.
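
A sketch of this pattern with Arc and Mutex (the function and its parameters are illustrative):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Several threads increment a shared counter; Arc provides shared ownership
// via reference counting, and Mutex synchronizes the mutation.
fn parallel_count(threads: usize, increments: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..threads {
        let counter = Arc::clone(&counter); // bump the reference count
        handles.push(thread::spawn(move || {
            for _ in 0..increments {
                *counter.lock().unwrap() += 1; // exclusive access inside the lock
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total // the counter is dropped when the last Arc clone goes away
}

fn main() {
    println!("{}", parallel_count(10, 1000));
}
```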

Behaviors

Erlang: Behaviours are formalizations of common patterns. The idea is to divide the code for a process into a generic part (a behaviour module) and a specific part (a callback module). You only need to implement a few callbacks and call a specific API to use a behaviour. There are various standard behaviours such as gen_server, gen_fsm, and supervisor. For example, if you want a standalone process that runs all the time like a server, listening to asynchronous and synchronous calls or messages, you can implement the gen_server behaviour. It is also possible to implement custom behaviours.

Rust: If you have a group of methods that are common across multiple data types, they can be declared as a trait. Traits are Rust’s version of interfaces, and they are extensible. Traits obviate the need for traditional method overloading and provide a simple scheme for operator overloading.
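
A minimal sketch: the trait names the required method, and each type supplies its own implementation, loosely analogous to a behaviour's callback module (all names below are invented):

```rust
// The trait declares the "callback" each implementor must provide.
trait Greeter {
    fn greet(&self) -> String;
}

struct English;
struct French;

impl Greeter for English {
    fn greet(&self) -> String {
        "hello".to_string()
    }
}

impl Greeter for French {
    fn greet(&self) -> String {
        "bonjour".to_string()
    }
}

// Generic code works against the trait, not the concrete type.
fn announce(g: &dyn Greeter) -> String {
    format!("{}!", g.greet())
}

fn main() {
    println!("{} {}", announce(&English), announce(&French));
}
```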

Memory allocation

Erlang: Variables are dynamically and strongly typed in Erlang: no type declarations are required, and type errors are caught at runtime, with minimal implicit type conversion. Variables are allocated dynamically on the heap of the underlying OS threads while the program runs and are deallocated by garbage collection.

Rust: Rust is statically and strictly typed, with type inference. Static means the Rust compiler checks types during compilation to prevent type errors at runtime. Some types are inferred during compilation; for example, when a variable already declared as a String is assigned to a new variable, the new variable’s type need not be declared explicitly, as the compiler infers it. Rust memory allocation is very efficient and fast, as the compiler strives to identify which variables can be allocated on the stack and which on the heap. Unlike Erlang, Rust largely uses the stack to allocate all data types whose size is known at compile time, while dynamic data types such as Strings, Vectors, etc., are allocated on the heap at runtime.

Scalability, fault-tolerance, distribution

The BEAM is a unique feature of Erlang. Scalability, fault-tolerance, distribution, concurrency, etc. are all possible because of the way the BEAM is built.

How does Erlang scale? Unlike native operating-system threads, the BEAM supports lightweight processes (green threads) that are multiplexed onto very few native OS threads. A million or more Erlang processes can be spawned from a single native OS thread. This is made possible by allocating a big heap chunk to a native thread and sharing it across multiple Erlang processes; each Erlang process gets a piece of it to store all of its variables. Since a process’ initial heap can be as little as 233 words, fitting a million processes on the heap of a native OS thread is perfectly possible. Furthermore, thanks to Erlang’s built-in asynchronous message passing, communication between processes is hardly a bottleneck. A process never blocks to send a message to another process: it either acquires a lock on the other process’ mailbox and puts the message in it directly, or it puts the message in a separate heap fragment and attaches that fragment to the other process’ heap. The Erlang VM also has built-in distribution features that can run processes and interact with them in a location-transparent way across machines.

How does concurrency work in Erlang? Native OS threads are scheduled by the OS scheduler, and on Linux, for example, scheduling efficiency goes down as the number of threads grows. Erlang’s BEAM, however, spawns and manages many green threads on each native OS thread. Each process is given 2000 reductions by default (every operation in Erlang has a reduction cost, where one reduction is roughly equivalent to one small function call) and is allowed to run until its allocated reductions are exhausted, at which point it is preempted. On preemption, the next Erlang process in the run queue is scheduled to run. This is how each Erlang process is scheduled.

How does memory management work at the BEAM level? As mentioned, the heap of a native OS thread is shared across multiple Erlang processes. Whenever an Erlang process wants more memory, it looks for available memory on the native OS thread heap and grabs it if available. Otherwise, depending on the requested data type, a specific memory allocator service tries to get a chunk of memory from the OS using malloc or mmap. The BEAM puts a great deal of effort into efficiently utilizing this chunk of memory across multiple processes by dividing it into multiple carriers (containers for memory blocks, managed by an allocator), and each Erlang process is served from the right carrier. Depending on the need of the hour, such as reading a huge chunk of XML stanzas from a network socket, the BEAM dynamically figures out how much memory to allocate, how many carriers to split it into, how many carriers to keep on hold after they are freed by a GC cycle, and so on. Free blocks of memory are coalesced almost instantaneously after deallocation, so that the next allocation is faster.

How does Erlang garbage collection work? Erlang offers a per-process garbage collector that uses a generational mark-and-sweep garbage collection algorithm. Together with Erlang’s built-in no-sharing approach, garbage collecting one process does not interfere with other processes in any way. Each process has a young heap and an old heap. The young heap is garbage collected more frequently. If some data survives two consecutive young GC cycles, it is moved to the old heap. The old heap is GC’ed only after a specified size is reached.

How does Erlang fault tolerance work? Erlang considers failures inevitable and tries to be ready to handle them. Any regular Erlang application is expected to follow a supervision hierarchy, in which each Erlang process is monitored by a supervisor. The supervisor is responsible for restarting the worker processes under its control based on the type of failure. Supervisors can also be configured with a restart strategy based on the type of workers they monitor, for example one-for-one (only the exited worker is restarted) or one-for-all (all workers are restarted if one exits). The BEAM provides links to propagate exit signals between processes, and monitors to observe them; both work within a single BEAM VM and also propagate, location-transparently, across distributed BEAM VMs. The BEAM can also load code dynamically on one VM or on all VMs at a time, taking care of loading the code changes in memory and applying them. Some extra effort is needed to tell the BEAM about the order of loading modules, state management, etc., to prevent processes from ending up in undetermined states.

Contrary to Erlang, Rust does most of its work when compiling your program and very little at runtime. As most systems programming languages lack memory safety at runtime, Rust tries to ensure that once your code compiles, it runs without problems. While the BEAM ensures memory safety with runtime machinery that can get quite complex, Rust does it at compile time.

Rust core language features aim to be as concise as possible. One example: Rust used to have lightweight green threads (similar to Erlang processes) in nightly builds. At some point, that feature was consciously removed, as it was not deemed a common requirement for every application and it came with a runtime cost. Instead, that feature can be provided through a crate when required. Although Erlang can also import external libraries, its core features like green threading are embedded into the VM and cannot be turned off or swapped with native threading. Nevertheless, the Erlang VM does very efficient green threading that has been proven for decades, and turning it off would not be a common requirement for people who opted to use Erlang.

How does Rust scale? Limits to scaling are generally defined by the availability of communication and distribution mechanisms. As to communication mechanisms, it is debatable whether Erlang’s model, based on message passing and per-process garbage collection and ETS, is more efficient than Rust’s channels with single ownership and shared mutation. 

In Erlang, any message sent to another process is copied. The garbage collector does the heavy lifting of cleaning the copies up in both the sending and the receiving process. On the other hand, Rust channels are multiple-producer, single-consumer. This means that when a message is sent to a consumer, it is not copied over; its ownership is transferred to the consumer, and the compiler inserts drop code at the end of the consumer’s scope to reclaim the value. Sending the same message to multiple consumers is possible by cloning it for each channel. Rust’s ownership model combined with predictable memory cleanup might be better than Erlang’s garbage collection in certain scenarios.

Another important aspect of communication is shared mutation. Theoretically, Erlang's ETS is similar to Rust's shared mutation used together with mutex and reference counting. But while Rust has a very granular unit of mutation, which is as small as a Rust variable, the unit of mutation in Erlang’s ETS resides at ETS table level. Another big difference here is Rust’s lack of a built-in distribution mechanism.

How does concurrency work in Rust? Rust threads are native by default. The OS manages them using its own scheduling mechanisms, so scheduling is a property of the OS, not of the language. Native OS threads bring a significant performance boost in their interaction with OS libraries for networking, file IO, crypto, etc. Alternatively, you could use a green-threading or coroutine library that comes with its own scheduler, although no stable crate exists as of this writing. Rayon is a data-parallelism library that implements a work-stealing algorithm to balance load across native threads.

How does memory management work in Rust? As discussed, Rust does a lot of static analysis using the concepts of ownership and lifetimes to identify which variables can be allocated on the stack and which on the heap. One thing Rust does well here is trying to allocate as much data as possible on the stack instead of the heap, which improves memory read/write speeds to a large extent.

How does garbage collection work? As explained above, Rust determines the lifetime of each variable at compile time. Additionally, most of the variables Rust uses tend to live on the stack, which is even faster to manage. In Erlang, the garbage collector has to run periodically to look for unused data on the entire heap and then deallocate it. This gets even harder in languages where unrestricted shared referencing is allowed, like Java. Predictability of garbage collection duration is hard to achieve in such languages, with Java being less predictable than Erlang, and Rust more predictable than Erlang.

How does fault tolerance work? Rust itself doesn’t have built-in mechanisms to identify and recover from runtime failures. Rust does provide basic error handling through the Result and Option types, but that can never guarantee handling every unexpected scenario unless a runtime fault-management framework is also embedded in the language. Erlang has the upper hand here, providing at least five nines of uptime consistently through its supervisor framework and hot code loading. Rust still needs to do some work to get there.
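
Rust's explicit error handling can be sketched as follows; the function and inputs are illustrative, and the `?` operator propagates failures to the caller much as an Erlang {error, Reason} tuple would be returned:

```rust
// Result makes failure explicit in the return type, a compile-time-checked
// analogue of Erlang's {ok, Value} | {error, Reason} convention.
fn parse_and_double(input: &str) -> Result<i32, std::num::ParseIntError> {
    let n: i32 = input.trim().parse()?; // on failure, returns the error to the caller
    Ok(n * 2)
}

fn main() {
    match parse_and_double("21") {
        Ok(v) => println!("ok: {v}"),
        Err(e) => println!("error: {e}"),
    }
}
```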

Conclusion

Both Erlang and Rust are strong in their respective fields. Erlang has been around for a very long time and has proved to be a strong and industry-ready ecosystem in terms of scalability, concurrency, distribution, and fault tolerance. Rust has its own defining features, like running at a low level with high-level language features that leverage native performance, safe programming, and common features like concurrency support and provisions for error handling.

In my opinion, an interesting option to consider for some really complex use cases where all of the above features are needed is to use Rust in conjunction with Erlang, as a shared library or through natively implemented functions (NIFs). All the number crunching, IO operations, and OS calls can be offloaded to Rust and the results synced back to the Erlang VM. There are projects that aim to make this easy.

Is Rust a replacement for Erlang? I would say it is not. The Erlang BEAM has provided great scalability, concurrency, distribution, and fault tolerance for decades. Erlang has always tried to abstract many common concerns by handling them through the BEAM, so the programmer does not need to think about them specifically and can focus on the problem at hand. On the contrary, with Rust there are many options available through community-created crates, but as a programmer I need to mix them in the right way. Another big challenge with Rust is its steep learning curve. It is definitely a bigger leap for people who are just beginning or are coming from dynamic programming languages. In short, the two languages are meant for different audiences and address different sets of problems, but it can be good to combine the best each of them has to offer when it makes sense.

About the Author

Krishna Kumar Thokala is currently working at Thoughtworks as an application developer. Earlier, he worked with Erlang on a telecom network simulator for quite some time as a developer, and built a configuration management system in Erlang using YANG modeling over NETCONF as an architect. Besides building software systems, robotics, electronics, and industrial automation are his hobby areas. You can follow him on Medium, LinkedIn, and Twitter.


microservices by Sandro Manke

Am I right to conclude, that in a microservice-kubernetes world, where individual containers can crash and reboot quickly and scaling out to many cores is discouraged anyway, there is little advantage for Erlang, or would that be something challenged?

Re: microservices by Cameron Purdy

You would be right to conclude that people sometimes use containers in a manner that solves some of the same problems that people solve with Erlang.

The "cost" difference, though, is still 4-6 orders of magnitude between an Erlang process (4+ processes per KB) and an OS container (often tens or hundreds of megabytes each).

Re: microservices by Sandro Manke

I am all in for the BEAM and the great isolation, introspection, and everything else that comes with it, yet the question was more or less: if your runtime already gives you most of those advantages (aside from the cost difference), how does the BEAM really fit in? It seems to me that people have a hard time managing even one way of doing things, but this is more or less two, and putting the BEAM in containers just makes everybody very, very sad; I have seen this happen way too often.

We could argue: simply use the BEAM, since the isolation of containers isn't really any better anyway, but then people would have to do Erlang, or Elixir for that matter, and those have downsides too.

When you put Erlang into this whole microservice concept of one Docker container per microservice, suddenly there isn't really a microservice per container anymore.

Re: microservices by Krishna Kumar Thokala

Comparing an infrastructure management tool with a language runtime looks odd to me.
That said, one option could be to write an application and scale it horizontally by throwing more resources at it, taking care of coordinating state across containers; the advantage is no single point of failure, the disadvantages are latency and high resource consumption.

The other option could be to write a highly concurrent Erlang application that leverages all the CPU cores without scaling it horizontally; the advantage is fewer nodes whose state needs coordinating, the disadvantage is that a BEAM crash is a single point of failure.

I would say the choice comes down to what you want to trade off: the effort of writing highly concurrent (which should be easier) and fault-tolerant code in Erlang, versus the resource consumption of horizontal scaling and accepting a single point of failure.

One more influential factor is the type of problem you are solving: if a network split would have a huge impact on your application (say, a group chat service), you probably want to scale vertically as much as possible.

IMHO, no language is ubiquitous enough to satisfy every real-world problem. What Erlang brings to the table is better concurrency with efficient scheduling, fault tolerance with low downtime, near-perfect distribution (meshed by default) for location transparency, Mnesia, etc.

Yes, if you have opted to build your application entirely in Erlang or Elixir, that has downsides too. For example, for interacting with OS-level stuff or doing some number crunching it isn't the best choice available, but it can be complemented with native support for languages like Rust/C/C++.

Technically, it feels weird to me to actually dockerize an Erlang node: you end up with one epmd daemon started alongside every container, which is a waste of resources. If the nodes were started on the host, they would all share one epmd. Containerizing doesn't look like a great solution here, except that containers can be orchestrated easily.

Hope this makes some sense.
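The "complement with native support" route mentioned above usually means a NIF; in Rust, the rustler crate wraps the Erlang NIF API. Below is a hypothetical sketch of just the native compute core such a NIF might expose (the rustler glue is omitted, and the `dot_product` name is an assumption for this example, not something from the article), with a plain `main` standing in for the Erlang caller:

```rust
// Hypothetical compute kernel that an Erlang NIF wrapper (e.g. via the
// rustler crate, not shown here) could call into. `extern "C"` gives it
// a C-compatible ABI; a real build would also export the symbol with a
// `no_mangle` attribute and link it as a dynamic library.
pub unsafe extern "C" fn dot_product(a: *const f64, b: *const f64, len: usize) -> f64 {
    // Safety: the caller must pass valid pointers to `len` f64s each;
    // on the BEAM side the NIF glue would build these from binaries.
    let a = std::slice::from_raw_parts(a, len);
    let b = std::slice::from_raw_parts(b, len);
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn main() {
    // Plain Rust stand-in for the Erlang caller, just to exercise it.
    let a = [1.0, 2.0, 3.0];
    let b = [4.0, 5.0, 6.0];
    let r = unsafe { dot_product(a.as_ptr(), b.as_ptr(), a.len()) };
    println!("{}", r); // prints 32
}
```

One caveat worth knowing: a NIF runs on a BEAM scheduler thread, so a long-running native call like this can block scheduling, which is exactly the kind of concern the BEAM otherwise hides from you.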

Re: microservices by Sandro Manke

It's funny because, in the end, you come to the same conclusion I do, which is all I was really trying to say: it seems really weird, and doesn't mix well, to put the BEAM in containers. While it's possible, that doesn't mean it's generally a good idea.

Still, I didn't try to compare those at all; I was merely asking what place Erlang has in a microservice/Kubernetes world.

Also, I would like to note that I consider horizontal scaling with Erlang very much possible, in a couple of different ways. It won't solve the OS-level stuff, but there are enough other languages that are bad at that too. Scaling horizontally is just painful and introduces a crapload of problems, but I would argue that's no better in other worlds. The quickest link I could find along those lines is learnyousomeerlang.com/distribunomicon.

Re: microservices by aadi immidisetti

To answer your questions real quick, I think I need to paste here a few points from one of the discussions that is closely related to this.

“Computers are fast, and you are not Google”
“Solve your business problems first”
“Industry aren't really engineers, but more scriptures following the herd”

“A lot of the cloud computing products are simply re-implementations of what was created in the Erlang/BEAM platform, but more mainstream languages. IMO it's cheaper to invest in learning Erlang or Elixir than investing in AWS/K8s/etc.”

“Here's the path that leads to K8s too early.
1. We think we need microservices
2. Look how much it will cost if we run ALL OF THESE microservices on Heroku
3. We should run it ourselves, let's use K8s”

“Elixir/Erlang are basically a DSL for building distributed systems. It doesn't remove all of the complications of that task, but gives you excellent, battle-tested, and non-proprietary tools to solve them.”

“One of the big "Elixir" perks is that it bypasses this conversation and lets you run a collection of small applications under a single monolith within the same runtime...efficiently. So you can build smaller services...like a monolith...with separate dependency trees...without needing to run a cluster of multiple nodes...and still just deploy to Heroku (or Gigalixir).
Removes a lot of over-architectural hand-wringing so you can focus on getting your business problem out the door, but will still allow you to separate things early enough that you don't have to worry about long-term code-entanglement. And when you "need" to scale, clustering is already built in without needing to create API frontends for each application. It solves a combination of so many short term and long term issues at the same time.”

“Using Erlang You do get a lot of cool things with clustered nodes though (Node monitors are terrific) and tools like Observer and Wobserver have facilities for taking advantage of your network topology to give you more information”


Since you have already mentioned quite a few good things about Kubernetes, let me list some of its downsides too.

- Additional logging infrastructure is required (e.g. an ELK stack for distributed logging, Graphite and Grafana for metrics, service discovery).
- Increased configuration management and a dedicated build and delivery pipeline; installing and operating Kubernetes is not for the faint of heart.
- Most communication has to happen over the network through a dedicated mechanism (like REST calls or messaging), and this can bring a serious performance hit due to extra HTTP metadata and network overhead.
- Maintaining your Kubernetes cluster might be a bit complicated.
- Using Kubernetes is relatively white-box, i.e. you really need to know what's going on under the covers to a degree, especially if you're not using GKE.
- The reality is often less exciting and involves pulling and pushing gigabytes of “layers”: images that are “insecure by default”, are difficult to adjust and configure, and, when you're done with them, are left without being updated for months, because you don't want to touch them; if one breaks you'll waste hours fixing it.

Everything has its pros and cons. Kubernetes might let you accomplish great things easily in distributed computing, as was achieved in danielfm.me/posts/five-months-of-kubernetes.html, but that same post also mentions that “Kubernetes does not offer a clean solution for a number of problems you might face”.
