
Unique Resiliency of the Erlang VM, the BEAM and Erlang OTP



Irina Guberman demonstrates how unique features of the BEAM in combination with Erlang OTP can take a company's servers to the next level of resiliency and robustness.


Irina Guberman is Principal Product Architect at Xaptum.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.


Guberman: My name is Irina Guberman, I'm a principal product architect at a really cool company. It's a startup, we don't have clients yet, but it's a really cool company in the IoT space. We are building our own internet for things, and we use primarily Erlang and C++ in my company. I'm hoping that everybody in the audience is a professional Erlang developer. Is that true? Not everybody, but enough people. No worries, my talk is not targeted at senior Erlang developers, it's actually a very friendly talk that is targeted at everyone, so hopefully everyone will enjoy it one way or the other. We are going to talk about the BEAM, which is the Erlang virtual machine, its quite unique and amazing features, and compare it to the JVM as we go. The differences are quite drastic, and that's what I wanted to point out in this talk.

Erlang, or probably any language on the BEAM, could be a perfect example of an implementation of the Actor Model. You probably won't find a better example, and if you think Akka is one, it's not, and we'll prove it in this talk; in my opinion, this is the best implementation of the Actor Model. What we can do on the Erlang virtual machine, the BEAM, is run hundreds, thousands, and millions of processes simultaneously. The actual system limit is about 134 million, which is quite a bit.

They share nothing, all variables in Erlang are immutable, and they communicate through message passing. Every process has a mailbox, and if another process wants to talk to it, it will send a message to that mailbox. That's a very general picture. This talk will be mostly a demo, because I feel talk is cheap, and I did a lot of that and nothing worked, so it's going to be mostly a demo; hopefully you'll be impressed.
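As a minimal sketch of that model (the module and function names here are my own, not from the talk's repo), spawning an isolated process and talking to it through its mailbox looks like this:

```erlang
-module(mailbox_demo).
-export([start/0, pinger/0]).

%% Spawn an isolated process; all we share with it is its Pid.
start() ->
    Pid = spawn(?MODULE, pinger, []),
    Pid ! {ping, self()},            % drop a message into its mailbox
    receive
        pong -> ok                   % block until the reply arrives
    after 1000 ->
        timeout
    end.

pinger() ->
    receive
        {ping, From} -> From ! pong  % reply to whoever pinged us
    end.
```

No memory is shared here: the only way the two processes interact is by copying messages into each other's mailboxes.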

A Hypothetical Server

We'll start with a hypothetical server, Erlang has production-grade framework called OTP. That's the idiomatic way to program in Erlang for your production system, but this is going to be a very simple server that's not OTP. It's just basic Erlang, even if you have never seen Erlang before you will probably understand what the code means, it's very simple, very basic, not something I would recommend doing in production, but for the demo purposes, I felt it was perfect.

There is a repo on GitHub that you could play with if it piques your interest. I'll just briefly explain what the code does; even if you don't understand Erlang, I'll be nice about it. Normally you'd have some kind of accept_loop, and a big server that accepts requests. Think of a ring buffer: accept_loop will be receiving requests from outer space, and then processes do something about them. Think of a web server: you get a request, and then you process it, and a lot of times you have a limited amount of time to process it. The accept loop looks like that.

receive is the keyword in Erlang: if a process wants to receive a message from its mailbox - maybe somebody sent one there, maybe not - receive is a blocking call, and it will be waiting for a message. The one I'm showing here is a very simple flavor of receive that just receives any message in its mailbox. This piece of code is actually not used in the demo, because I'm not simulating sending messages from outer space, so let's skip that part. We'll just be dumping a number of requests into the server, and I will be using 10,000. I hope you guys find that number impressive enough; I'll be sending 10,000 requests at the same time.

This is sequential code, it's like a for loop in Erlang. There's no for loop in Erlang, there's no while loop; this is a substitution for it. It uses lists:seq, and then we'll be sending the request we generate, just an integer for the purposes of the demo, to a dispatch method.

What we pass to it is the request itself that we get from outer space, and self(). self() is a function call that returns the process ID of the current process; that's how a process can obtain its own process ID. It passes the request and the process ID of that big acceptor loop to whoever is going to be processing.
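In code, that request-dumping loop is roughly this (a sketch; dispatch/2 stands in for the repo's dispatch function):

```erlang
%% Fire off N simulated requests; each "request" is just its sequence number.
run(N) ->
    AcceptorPid = self(),                    % the big acceptor loop's own Pid
    lists:foreach(fun(Seq) ->
                      dispatch(Seq, AcceptorPid)
                  end,
                  lists:seq(1, N)).
```

lists:seq(1, N) builds the list of request numbers, and lists:foreach plays the role of the missing for loop.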

What does dispatch do? Dispatch doesn't process the request yet, it just spawns a process that will get it started. I chose to have two processes per request just for the purposes of this demo; you will see why in a second, maybe not in a second, maybe throughout the demo. For every request that we get into our acceptor loop, we're going to be spawning a process. I'll be running 10,000, but you could do a million when there is more time, on your own, so you can run this accept with a million simultaneous requests, for instance.

We spawn a process, and the method that will be executed by that process is kick_off_request_handler. That first process we create is like a supervisor, and that guy will actually create the RequestHandler. There will be two: one is the supervisor, and it's not idiomatic to have one supervisor per RequestHandler, but I'm just doing it for this demo, you'll see a little later why, hopefully. It kicks off a RequestHandler; when we spawn a process, it returns the process ID of the created process. It will do a little bit of housekeeping: for the demo, we will be showing the duration of the request handling. It keeps the start time, and then, after that, it just sits there in receive. It's a blocking call waiting for the response from the handler, and we did pass self to the handler, so it knows who to send the response back to.

It sits in the receive, and when it gets the response, it will do this: it will capture the end time, calculate the duration, spit out the information for us, and send the duration upstream to our acceptor loop. Basically, it will inform the acceptor loop that, "Yes, this process is finished," so the acceptor loop can write it off and we don't have to wait for things; it's for the demo purpose, basically. Or it might get some unexpected message, and then it will send an error.
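Putting those pieces together, the per-request supervisor might look roughly like this (my reconstruction, not the repo's exact code; handle_request/2 is a hypothetical worker function):

```erlang
%% Spawned once per request by dispatch/2; times the handler and reports upstream.
kick_off_request_handler(Request, AcceptorPid) ->
    Start = erlang:monotonic_time(millisecond),
    Self = self(),                                     % capture before spawning
    HandlerPid = spawn(fun() -> handle_request(Request, Self) end),
    receive
        {response, HandlerPid, _Result} ->             % selective receive
            Duration = erlang:monotonic_time(millisecond) - Start,
            io:format("~p [~p]~n", [Request, Duration]),
            AcceptorPid ! {done, Request, Duration};
        Unexpected ->
            AcceptorPid ! {error, Request, Unexpected}
    end.
```

Note that self() is captured into Self before the spawn; inside the spawned fun, self() would be the handler's own Pid, not the supervisor's.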

Let's do the actual demo. I'm going to start an Erlang node with seven schedulers in it. I have eight cores on my laptop, and I only want to use seven so I can still get back to the Keynote and do things, because I'm going to be doing some processor-intensive stuff. Normally the BEAM starts with as many schedulers as there are cores available - we'll be talking about schedulers in just a sec - but I'm trying to keep things a little safe for the presentation's sake.

The +S flag means number of schedulers, seven, and I told it that my code is in that source directory. I called the server Juggler just because it will be juggling thousands of processes, and it's also a cool and easy name to remember. We compile it: I started an Erlang node, which is a process itself, it has a process ID. We compiled our Juggler server, and we're going to run it with 10,000.

I did not make any graphs and any of that, you'll be seeing a lot of stuff being spit out on the screen, and I did it on purpose so you can actually appreciate the massiveness of what's going on on that server, and I didn't feel like graphs are doing that. You'll be like, "Ok, it's a graph," and you'll tune out, you won't be able to tune out with what you're going to be seeing on the screen in just a second.

What you're seeing is, I basically print out the sequence number of each request that came in, and then in the square brackets is the number of milliseconds it took to execute. I actually skipped a little bit of the code, let's just go back: I forgot to show you what the request handler actually does. It does some rubbish: it basically just counts to 1,000, and on each iteration does a case statement and a binary:copy, then C+1, some math, addition. It's just to simulate a production-like system. Sorry, that's where my imagination took me, not too far. It's pretty intensive in its processing for the simulation purposes.

Back to the demo, what we just saw, we just spit out 10,000 processes and we saw the execution times in the bracket are pretty good, they're around 200 milliseconds. That's just the first part of it.

Server with a Bug

The second part: we're going to introduce a bug into our system, because with whatever I just showed you, you might be like, "Yes, ok, whatever, ten thousand processes, we can allocate enough, properly run multiple threads, and measure what we're doing. We can totally do the same thing on the JVM." Now, we're going to introduce a really weird, obscure bug into our web server: there will be an infinite_loop every so often, every 10th request. I'm just going to add this little snippet of code, and if this is the tenth request, it will be affected by that bug, and instead of finishing the request processing, it will go into an infinite_loop. infinite_loop is just going to process forever.

To do this in my code I just have to switch the BUG_MOD to 10, so every 10th process will be an infinite_loop. If you think [inaudible 00:13:46] there is a GitHub repo you could try. Now, you see every tenth request is going to be an infinite_loop; the stars are infinite_loops. Everything else is still executing as normal. Now, 1,000 of the requests will be infinite_loops, and the rest of them will be executing and finishing, just slower.
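The injected bug amounts to a guard on the handler, something like this (BUG_MOD and the function names are my guesses at the repo's code, not verbatim):

```erlang
-define(BUG_MOD, 10).

handle_request(Seq, _ReplyTo) when Seq rem ?BUG_MOD =:= 0 ->
    infinite_loop();                        % every 10th request never finishes
handle_request(Seq, ReplyTo) ->
    ReplyTo ! {response, self(), do_work(Seq)}.

%% Tail-recursive busy loop: never returns, but the scheduler still
%% preempts it after its reduction budget, so other processes keep running.
infinite_loop() ->
    infinite_loop().
```

The important point is that infinite_loop/0 is ordinary Erlang code, so the scheduler can preempt it; it wastes CPU, but it cannot monopolize a scheduler thread.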

You can't avoid that, because we have only that much CPU power in my laptop, but they're still ok, they're still finishing up. You see a whole bunch of them finishing up in one second, two seconds, not as good as before, because the stupid infinite_loops are taking up a ton of CPU here. In top, the BEAM, my node, takes pretty much most of the seven cores I gave it. At the end we have 9,000 finished and 1,000 running forever, and they're still running on that node. My poor laptop, it's still bombarded.

I can run it again if you guys want me to; it will complete another 10,000 requests and then introduce another 1,000 infinite_loops. The whole thing will be a little slower, but my web server is still ok. I'm still handling the requests, and my developer team is still sleeping - it's the middle of the night - they're still sleeping. They're not alarmed, unless the alarm is like, "Hey, your requests are not 200 milliseconds, they're much slower," that could be an alarm. Other than that, things are still happening, the system didn't crash.

Let's move on and explain how this is possible in Erlang, how it's possible on the BEAM actually. It's possible with any language on the BEAM, whether it's Erlang, Elixir, or LFE, Lisp Flavored Erlang, it doesn't matter what language, it matters what virtual machine it runs on. If you write Erlang on JVM, I won't be able to do this demo, I'm sorry.

BEAM Scheduler

A couple of things make this possible. First of all, it's the BEAM scheduler. There are two distinct types of schedulers we could think of: a cooperative scheduler or a preemptive scheduler. Most modern operating systems run preemptive schedulers, so that a process gets cut off at some point and processes don't interfere with each other. It is not the most efficient scheduling model, but it's very safe. Let's guess: what is the BEAM scheduler, cooperative or preemptive? From what we've seen - you don't think all these processes were actively running the whole time; not all 10,000, only seven of them really could be running simultaneously, but they all appeared to be running simultaneously. What do you think, is the Erlang scheduler cooperative or preemptive?

Participant 1: I think preemptive.

Guberman: You're correct, but it's also cooperative. It makes perfect sense what you answered; it would be really weird if you said it is cooperative, because it doesn't look like it. It is cooperative at the C level - Erlang is written in C - and unfortunately, I won't have the time to go into the details of what exactly could be happening at the C level that shouldn't be interrupted; restoring that state would be very expensive and inefficient. At the C level Erlang actually is cooperative, and at the Erlang level it is preemptive.

Every Erlang process gets 2,000 reductions; it's preemptive by means of reduction counting. After that, it gets preempted and another process gets a chance. There are things like priorities, but we won't talk about them, they're rarely used. You can mess things up a little more with priorities, so we won't talk about them.
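You can watch reduction counting at work from the Erlang shell; erlang:process_info/2 exposes each process's running reduction count (a small illustration of mine, not from the talk):

```erlang
%% In the Erlang shell: compare the reduction count before and after some work.
{reductions, R1} = erlang:process_info(self(), reductions),
_ = lists:sum(lists:seq(1, 100000)),          % roughly one reduction per call
{reductions, R2} = erlang:process_info(self(), reductions),
io:format("burned ~p reductions~n", [R2 - R1]).
```

Every time a process exhausts its budget of reductions, the scheduler parks it and runs another one, which is why even the infinite loops couldn't starve the other 9,000 processes.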

A reduction is roughly one function call. We don't have for loops, we don't have while loops in Erlang, everything is a recursion, so counting function calls is pretty safe. I think there is more to it, but let's just keep it simple: it's roughly a function call. A word of caution for those who actually do use Erlang, or those of you who start using it: BIFs, which are built-in functions - there are some functions in Erlang that are written directly in C for performance reasons. The built-in functions are the ones that come with OTP, with Erlang, and NIFs are the ones you can write.

You can write your own C functions and actually run them inline, so if you have performance concerns about Erlang, say for doing some interesting calculations, you can do it in a NIF. But you have to be careful that that NIF only executes for a very short amount of time, so it doesn't take down your system. It's not in the scope of this conversation, but I feel like I have to warn you that there are some caveats.

The BEAM scheduler is both cooperative and preemptive, so it's actually the best of both worlds. For performance reasons, it's cooperative at the C level, so really expensive things won't be interrupted at a really bad time, which would be costly.

BEAM Memory Model

The next important construct we have to look at is how the BEAM memory model is designed. Every single process, every one of those tiny processes, has four memory spaces. There's the PCB, the process control block; that one is static, it doesn't change throughout the lifetime of the process. Then there are the heap and stack; while you see two memory spaces here, it's actually just one. You'll have one heap allocated, and the stack will start at the top and the heap at the bottom. The stack will grow towards lower memory addresses, and the heap will grow towards higher memory addresses; it's like a trick on pointers.

If they haven't met, there's free space, and once they meet, we have something familiar to JVM people: garbage collection kicks off. It will allocate a new heap and move everything there; it will grow the heap. When I say heap, it will basically grow the space for both stack and heap. One important distinction: this is all in a tiny, tiny space, it's all at the level of that one little process.

There are some nice functions that can help us analyze the memory space of a process. The hipe_bifs module has nice functions you might want to try at home: show_heap, which takes a process ID, show_stack, and show_pcb. We can now quickly run show_pcb just to show you one little thing. I'm going to start a process that actually does nothing. I think I'm going to kill that node, because it's still running those stupid infinite_loops, and start a new one.

P is the Pid of the process I will be spawning, and that process will just sleep. It doesn't do anything, but it doesn't matter what I spawn; the process will be allocated the same way every time unless I ask for more. By default, the same amount of memory will be allocated to that process.

Then we can analyze the memory space of the process by calling this function. Let's look at this interesting one, the heap size. This is the heap size that my process got at the very beginning, when it was first allocated: 233 words. It's 4-byte words on 32-bit systems and 8-byte words on 64-bit systems. That's the size of the heap that was allocated to my process. You can imagine that it's totally fine to create millions of them in my system, it's tiny.
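On current OTP releases hipe_bifs may not be available (HiPE was dropped from OTP), but erlang:process_info/2 shows the same initial heap figure - a quick sketch you can paste into the shell:

```erlang
%% Spawn an idle process and inspect its freshly allocated heap.
P = spawn(fun() -> receive stop -> ok end end),
{heap_size, Words} = erlang:process_info(P, heap_size),
io:format("initial heap: ~p words~n", [Words]),   % 233 words by default
P ! stop.
```

At 8 bytes per word on a 64-bit system, that initial heap is under 2 KB, which is why spawning millions of processes is cheap.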

Let's go back to the Keynote. Here is a visual: this is the running Erlang system, those ten thousand processes. Each one has its own memory space, its own heap, its own PCB, and we can compare it to the JVM. I'm very sorry about this very simplified picture, but it's correct; I hope the people who are specialists in the JVM space agree with it. It has one big heap, and all the threads share it. You see the huge difference? This is basically the key difference between the Erlang VM and the JVM.

Garbage Collection: Is That A Thing In Erlang?

When you have Akka and you think it's the same thing as Erlang, it is not. This is the key difference: those threads are sharing the same heap. Do you think stop-the-world garbage collection is a problem in Erlang? No, it's not a problem. Garbage collection happens at the level of the process, so even if it's slow or inefficient or whatever, it's fine. It's not stop-the-world like it is in Java. I am actually not going to make a big, strong point that garbage collection is such a huge problem.

If you hire a few geniuses, like we did at one of my companies, you can tune the hell out of the Java garbage collector. We did not have stop-the-world problems, we didn't have those crazy garbage collection pauses. We tuned that garbage collector so well, the settings were so good, that we didn't have that problem, so I would not make this a huge point of this presentation; you can fine-tune it so it's going to be fine. Garbage collection in the wrong hands can be a really big problem on the JVM; in the right hands it won't be. Garbage collection in Erlang is just not a thing, even if you give it to somebody who has no idea what they're doing.


Demo three is the last one, and that's going to be the killer one, I promise. In demo three we're going to add a tiny little bit of code to make things much more production grade. We're going to modify our receive in that supervisor process - if you still remember what it was, it's the guy that was spawned by the acceptor loop and that will be spawning our handler. We're going to show you a slightly different flavor of the receive construct: the previous one was just a selective receive that expects a specific message in its mailbox, and this receive will also have a timeout.

Let's say we have a server that's serving targeted advertising. Whatever bidding platform you're on could have different requirements; for whatever reason, your requests will not be relevant after a certain amount of time. You don't need to finish that request processing if it didn't complete in time; it's very often a requirement that your request finishes in a specific amount of time, and after that, it's irrelevant. My infinite loops will never finish, and who cares?

Some of the processes were actually doing a good job, they're just too slow for whatever reason. I don't need them either; they're irrelevant after a specific amount of time. In my particular case I'll make it generous, I'll make it five seconds. If you're bidding for targeted advertising space, it's 200 milliseconds or something; depending on the requirements, that could be your timeout. I'll make it five seconds.

If I did not get a response from the request processor within five seconds, I'm going to kill it; that's how you do it in Erlang. You call exit on the HandlerPid, and the timeout there is just the reason I give it, but it doesn't matter: I'm going to kill it, it's not going to exist. My infinite_loops won't exist, they're gone. Then I'm going to tell the AcceptorPid upstream that this guy is down, so it can account for it for the purposes of this demo.
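That timeout flavor of receive looks roughly like this (my sketch of the supervisor's wait, with hedged names):

```erlang
receive
    {response, HandlerPid, Result} ->
        AcceptorPid ! {done, Request, Result}
after 5000 ->
    %% Nothing arrived within 5 seconds: kill the stuck handler.
    %% It isn't trapping exits, so exit/2 terminates it immediately.
    exit(HandlerPid, timeout),
    AcceptorPid ! {killed, Request}
end.
```

The after clause fires only if no matching message arrives within the timeout, which is exactly the "write it off and move on" behavior the demo relies on.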

Let's go back to the demo. I made a separate module for this, killer_juggler, so I don't have any demo problems. If anybody decides to look at my repo after this talk, there are two modules, but they're almost the same; I just wanted to play it safe. It's all the same: we have 10,000 processes - I could have a million, but we don't want to do that right now - and 1,000 out of those 10,000 will be spawning infinite_loops. We won't be waiting for anything that runs longer than five seconds.

We still see the same things as before, 10,000. Some processes finish, and some processes get into an infinite_loop. If you see those Xs, those are the processes that were killed: everything that runs too long, more than five seconds, is just killed. Those infinite_loops are just killed. That's those supervisors saying, "Hey, the RequestHandler I spawned is not doing what I expected, so goodbye." Yes, all the processes that didn't finish on time are killed.

The 9,000 processes that I did expect to finish did finish, because I gave them a generous timeout. If I made it two seconds, some of them, even the ones that didn't run into an infinite_loop, would actually also be killed, because, let's say, I don't want to wait for them for more than two seconds, that's all.

Now my infinite_loops are killed, my CPU looks fine, and the 9,000 processes that did not experience that bug just came to completion. I don't know if you guys are impressed or not, but if you're programming in Akka, in Scala, or Java, you cannot do this. If you think you can, I challenge you to try and talk to me after that; maybe I'm wrong. Maybe there is a way to do it, but I believe it's not possible with the way the JVM is designed. If you disagree with me, show me an example.

One thing: don't use an infinite_loop that has something like Thread.sleep in it. That will work, except in production you'll never have a bug where the infinite loop conveniently has Thread.sleep in it, so it won't hurt anyone. I did have a situation once where I had that argument and the person wrote that call: it was Thread.sleep in the infinite_loop, and they're like, "Yes, I totally can kill it." Yes, of course, if it's sleeping, or even if your thread is checking whether it's been interrupted, there is a way. Threads are interruptible in Java if you put specific code in.

We actually instrumented one of our DSLs that was given to the business people. They were the ones who introduced infinite_loops at one of the companies I worked for, and they would submit an infinite_loop into production without testing, because they were like, "Oh, your servers are down. They're not working, so I'm just going to submit my code." The servers are spinning because you introduced an infinite_loop, so your thing is not executing. We would instrument their code so that every for loop, every while loop, or every piece of recursive code would have that interruption check. Because I knew Erlang, I could think of such a trick for Java; that trick helped us guard their code, but you're not going to instrument all of your developers' code. It works in just very special cases.

You can Google why you cannot kill a thread in Java; there is a reason, it is inherently unsafe, and you might want to look up why it's inherently unsafe and all that. It's not the point of this talk, but as I claimed, you can't kill a thread that's actually active in Java.

There are a couple of other differences I wanted to discuss between the JVM and the BEAM; they're not as critical. I don't want us to think I just want to talk about how awesome the BEAM is. Actually, the JVM is awesome for certain use cases, while the BEAM is awesome for other use cases, and it's very important to understand which one to pick, because there are situations where I recommend people use the JVM. Erlang has all-immutable variables; it's great for large-scale distributed systems, but there are, of course, certain drawbacks to that: with immutable variables, certain large-array processing won't be as efficient as it would be on the JVM. The JVM has demonstrated amazing performance. It's important to understand all these key differences and pick the one that fits your project goals.

Another interesting difference between the Java VM and the Erlang VM is the fact that Erlang is a register-based VM and Java is stack-based, and those have their own pluses and minuses. It's also important to understand, and to measure, certain things. If you need performance, if you have a financial server that's doing a lot of calculations and math, I may not recommend Erlang to you. But it all depends on what's more important to you. If it's resilience, if you're sending a satellite into space, or if you want something that's very resilient and doesn't ever break or die, Erlang is probably your best choice, because you can actually set it up so it will stay guarded, it will never go down.

There are servers written in Erlang that run for years. That may not be so important to you; maybe how fast it executes certain mathematical calculations is more important. As for this difference between stack-based and register-based: stack-based is much simpler, every instruction takes its operands off the stack. PUSH 20, then PUSH 7, then ADD takes the 20 and the 7, and you pop the result, so to do an addition you have four commands. On a register-based VM you will actually have only one command, but it's going to be a larger, more expressive command, because it will actually say, "Add register one to register two, and put the result in register three." It's fewer instructions, but they're a little more elaborate, a little more difficult to execute.
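As an illustration, here is the same addition in generic bytecode (my own notation, not actual JVM or BEAM instructions):

```
; stack-based: four short instructions
PUSH 20
PUSH 7
ADD             ; pops 20 and 7, pushes 27
POP  result

; register-based: one wider instruction
ADD r3, r1, r2  ; r3 := r1 + r2
```

The stack machine dispatches four opcodes; the register machine dispatches one, but each instruction must encode its operand registers, so the instruction itself is bigger and costlier to decode.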

Performance Survey On Stack-Based And Register-Based Virtual Machines

There are some papers that are actually neutral with respect to JVM versus BEAM. There's a paper, "A Performance Survey on Stack-based and Register-based Virtual Machines". They created hypothetical virtual machines that are the same in every aspect, except one is register-based and one is stack-based. The stack-based one was called Conceptum, and the register-based one was called Inertia. Inertia spends about 66% less time in instruction dispatch, but on the other hand, it's slower in instruction fetch. If you're interested in getting a better understanding of this, the article is at the end, in the references.

Based on the test results, the stack-based virtual machine performed better for arithmetic operations, while the register-based virtual machine was better for recursion and memory operations. I would say a typical business-like server will have way more of those than actual arithmetic calculations. It's very important to understand these differences before you choose your language, or actually choose your VM, because I don't think the languages are that important; it's the VM they run on top of that makes the huge difference.

Questions and Answers

Participant 2: From what I've read, a few years ago, Facebook for Messenger switched over from Erlang to Java. They quoted not finding developers as their biggest problem. Is this something you have had experience with?

Guberman: One of the managers I had had this approach: "I'm not looking for Erlang developers, I'm looking for polyglots." He would look at a resume, and if it's a person who knows Haskell, who plays with this and that, or a person who knows Scala and Akka, he would just hire them and give them two weeks to work through the Erlang book by themselves, and they would become pretty decent Erlang developers at some point.

The reason why there are fewer Erlang developers is because it becomes a self-perpetuating problem. People might be saying, "Oh, Erlang is awesome, but hey, I'm not going to make money, so I'm not doing it." It becomes self-perpetuating; that's the point of my talk. I want people to be aware of how awesome Erlang is, and they might realize that they won't need such a huge team. If you're choosing the wrong tool, you will need a much larger team to accomplish what you need to accomplish, because it's so much harder. If you're trying to drive a screw with a hammer, it's very difficult.

You need more people, and sometimes very smart people, to do a large-scale distributed system in Java. They'll do it - we hired some physics PhDs, brilliant dudes who sat together and figured out how to do it. It's possible to build such a system, but it's so much harder. If you have Erlang, which is a perfect tool for specific things, you hire a couple of people who are average but know it more or less, or have read the book and a few other things, and they'll do a much better job, because it's a suitable tool.

The whole point of my talk right now is that I'm so upset that this is such a self-perpetuating problem. It's a great question, and a very unfortunate one. I've been doing Erlang professionally since 2013, I don't have a problem finding jobs, and I personally do not experience that problem, but maybe it's because I live in Chicago and we're ahead of everyone in the world - just kidding.

If it's a startup, you're better off with Erlang, for instance, because you just need to get stuff done very fast, and you can with Erlang, it's an extremely efficient language. WhatsApp used Erlang, and we don't know if they would have been so successful if they had picked something else, because they were under a huge crunch to quickly develop such a system; it's not that easy. You decide. This is a super good question if you're a manager making these kinds of decisions. It is a tough decision in that respect, but I think choosing the right tool for the job is most important; that's my philosophy, basically.

Participant 3: You love the Erlang VM, the BEAM. What's your opinion on other languages that have chosen to adopt BEAM? Erlang alongside Elixir.

Guberman: I use Elixir a lot. For web servers, like REST servers, I always use Elixir's Phoenix framework, because that thing is a monster. I really love how I impress people with how fast I get things done, and it's not me, it's Phoenix, it has your back. They have an amazing community, too. I personally like Erlang more than Elixir for numerous reasons, but that's just my personal choice. The Elixir community is amazing; if you have a problem and you talk to them, they'll respond immediately. One time they did not respond for two days and I got a little upset; it turned out there was some huge Elixir conference going on and they were just busy, but it was just two days, usually they respond to me.

An amazing community, and they keep on developing new things, so I love Elixir and I love the Phoenix framework. I've used it on three different projects so far, and every time I really impress everyone. I'm like, "It's not me, it's Phoenix." Nobody cares, I did it. In one month I did what other teams were trying to get done in two years. That's crazy - maybe they were too slow, but hey. With Elixir and the Phoenix framework, if you need a REST server, go for it, you'll be the hero, I promise.

Participant 4: You've told us a little bit about how good Erlang is, but I don't believe that any one system is perfect for everything, so where should I not use Erlang or the BEAM? Let's say BEAM instead.

Guberman: If it's not a large-scale distributed system but it's more like a financial kind of system, for instance, I wouldn't use it; the same if you need to do a lot of cryptographic operations. There are some financial systems, like trading systems, that have amazing libraries written in Java; some of them run on one server and make a ton of money. I would not recommend Erlang for that kind of system; it's for large-scale distributed systems. If you get thousands of requests per second, if you're running 1,500 servers, that's where Erlang shines. If your requests are more long-running and they're doing some crunching or mathematical computations, don't do machine learning on Erlang, that's just not suitable.

If you decide you need mathematical performance, you can hook native implemented functions, NIFs, into your Erlang programs. That's not something you want to overdo - you don't want your entire server consisting of NIFs, though there is an argument for it - so you could use Erlang for the distribution and have all the math in NIFs. It's unsafe, but then there are dirty schedulers, which could be a conversation for another presentation; it's beyond the scope here.

If you need very fast mathematical computation and it's basically all your server does, you probably don't want Erlang, unless there's a large distributed system on top of that, then maybe you do.

Participant 5: You talk about Erlang but then you mentioned you spend most of your time with Elixir. If you actually want to start playing around, should you start with pure Erlang, or Elixir?

Guberman: Actually, I don't spend most of my time on Elixir; I pick Elixir when I need to create a REST server, and I don't spend most of my time doing REST servers, I've only done three of them. To be honest with you, I would start with Erlang, because that's me. A lot of people start with Elixir and love it, and the community is very supportive, so maybe I would even say start with Elixir, because of how supportive the community is. It's just more modern, a lot of people just like it more, it just feels better. Honestly, it doesn't matter, whatever feels better to you; it's like your taste, but the support will probably be better. It's community supported, so whatever you pick is fine.

At the end I'll share the slides; there are references to all the literature I went through. The main one, actually, is the BEAM book: a lot of very interesting information about the Erlang VM is in the BEAM book. It's in progress, it's on GitHub, it's not complete, but it's got a lot of very good information, so I strongly recommend checking it out. I based a lot of my talk on it, and these are the others. I didn't get to touch on the Dalvik virtual machine, Android's VM. They wrote their own VM, and it's register-based, so you might want to take a look at that.




Recorded at:

Jun 25, 2019