Bio: Steve Vinoski is a member of technical staff at Verivue, a startup in Westford, MA, USA. He was previously chief architect and Fellow at IONA Technologies (now part of Progress Software) for a decade, and prior to that held various software and hardware engineering positions at Hewlett-Packard, Apollo Computer, and Texas Instruments.
The Erlang Factory is an event that focuses on Erlang, the computer language that was designed to support distributed, fault-tolerant, soft-realtime applications with requirements for high availability and high concurrency. The main part of the Factory is the conference, a two-day collection of focused subject tracks with an enormous opportunity to meet the best minds in Erlang and network with experts in all its uses and applications.
For the last 3 years I’ve worked for a startup called Verivue. Verivue is a media distribution company, and we’re working on a hardware and software solution for delivery of video content, to be used by cable operators, telcos, and also on the Internet for CDNs (content distribution networks), caches, and things of that nature. We’re able to deliver video for Internet consumption in the browser or for video-on-demand consumption at home on your television. That’s what I’ve been doing for 3 years now. Prior to that I was Chief Engineer and Chief Architect at IONA Technologies, which was the CORBA company; it no longer exists, having been bought by Progress Software. I kind of did an industry switch: I was doing middleware for 17 years, then I switched into the media distribution industry.
That’s a long answer. Working on CORBA for many years, there was a certain way of doing things, and it was all centered around remote procedure call. CORBA was nominally about objects, distributed objects, but fundamentally you’re doing remote procedure call. We did a pretty reasonable job in the Object Management Group of working on specifications. There was a core team of people that worked on the core ORB. The parts of the specification, even though they came from different companies, worked together pretty well. The spec itself was pretty solid. It was complicated, but solid.
I think it was in the year 2000 that Mark Baker, who was also involved in distributed objects in the ’90s, pointed me to Roy Fielding’s REST thesis. I read it, said "That’s interesting!" and put it away. Then I would take it out and read it again, and it started clicking. You have this decade or more of experience working with RPC-based notions and CORBA, and you work for a company that is building CORBA systems. So you’re almost trying to unlearn that by reading things like REST, which takes a very different approach. It took a couple of years before it really started to sink in, and then I realized "Oh, this is very important! I should pick my head up and pay more attention to this."
That was after the tech bubble, and IONA had changed quite a bit, downsized and things like that. My role changed to be the head of innovation. As an innovator, I had to look at what was coming down the pike and what sort of changes were coming. REST was one of these things that just kept picking up steam, and I could understand why: I had read the thesis so many times by then that I fully understood what was behind it, how it differed from the CORBA-based approach, and how it would scale where CORBA could never scale to that level. I tried to push REST within IONA and didn’t get a lot of traction on that.
At the same time, in terms of innovation, I was looking at how we built our middleware products at IONA and trying to come up with better ways of doing that. Primarily we were using Java and C++ for our products. Those are expensive to use; they’re complicated. You write a lot of code that’s accidental complexity: it’s not really that you are solving the problem, you are just filling in the gaps around the problem you are solving. I’ve always been an aficionado of programming languages, always looking at different languages, and I found Erlang.
I started looking at it, and I almost couldn’t believe that it had all the features of middleware systems, things that we had spent years trying to build, built into the language; in terms of OTP it was kind of all there, plus things that we had never even gotten to were also there. I started trying to convince management that we should at least start using it for services. We wouldn’t necessarily be giving Erlang-based SDKs to our customers, but we could give them services, implemented in Erlang, that they would just run in their network. I thought that would be good, but it got a cold reception.
By then, between REST and Erlang, I’d learnt so much about other ways of doing things. The CORBA way and the C++ way were things I’d done for years, but now I was able to almost step outside myself and say "There were good things about that, but here are all the things that weren’t so good." At that point I just decided to move along to this new opportunity.
3. For people familiar with CORBA or other integration platforms, can you tell us what’s interesting about Erlang, and what answers we find in Erlang that we wouldn’t find in those technologies (in words they understand)?
The first thing to know about CORBA is that it’s about multiple languages and integration of disparate systems. You have a system written in language A and another one written in language B; the developers of these systems never communicated with each other before, and now they have to integrate. CORBA is about putting a layer over both of them and then having a common denominator between the systems, if you will, to allow these things to communicate.
That part is pretty good. You had IDL, the interface definition language, in CORBA, which could be mapped to different programming languages. The ones the OMG supported were primarily C++, C, and Java, and there were Smalltalk, Perl, Python, and a number of other languages, but the primary ones were C++ and Java. It’s all about integration, whereas Erlang, just the core itself, is Erlang to Erlang. If you want a distributed system written in Erlang, the communication is basically taken care of for you. There is no marshalling that you are aware of, because you are sending Erlang terms between the nodes.
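That "no visible marshalling" point can be seen directly in the shell: Erlang exposes the external term format, the same encoding distribution uses on the wire, through `term_to_binary/1` and `binary_to_term/1`. A minimal sketch, my own example rather than anything from the interview:

```erlang
%% Any Erlang term can be serialized to the external term format and back;
%% sends between distributed nodes do this for you behind the scenes.
Term = {user, 42, [a, b, c]},
Bin  = term_to_binary(Term),   % a binary in the external term format
Term = binary_to_term(Bin).    % round-trips to an identical term
```

In a distributed system you never call these yourself for ordinary messaging: `Pid ! Term` to a process on another node performs the encoding and decoding transparently.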
It becomes a different kind of problem, but if you left it at that, the two systems would be very different. Erlang itself is almost like a middleware domain-specific language. It’s a language for writing middleware-ish kinds of things, because it bubbles the network right up to your programming level. It bubbles the event handling up to your level, and you can wait on a number of events, either from the network or from other parts of your application, transparently with a single model.
It doesn’t force your hand. A lot of frameworks force you down a particular path, but this is like a bunch of building blocks that you can put together as you need to for your application. When you have access to TCP as easily as you do in Erlang, and when you have the bit syntax, another part of Erlang that lets you decode network packets instantly (you just take them apart bit by bit, or byte range by byte range, into named fields basically), then this integration problem gets a lot easier.
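As a small illustration of the bit syntax (my own sketch, not from the interview), here is how you might pull the first byte of an IPv4 header apart into named bit fields:

```erlang
-module(bits).
-export([decode/1]).

%% The first byte of an IPv4 header holds a 4-bit version number
%% and a 4-bit header length; the bit syntax names each field.
decode(<<Version:4, IHL:4, Rest/binary>>) ->
    {Version, IHL, Rest}.
```

Calling `bits:decode(<<16#45, 0, 0>>)` yields `{4, 5, <<0,0>>}`: version 4, header length 5, with the remaining bytes untouched.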
In CORBA you’d have, like I said, system A and system B; you force them both into this common language, which is IDL-based, and then you have the primitives that CORBA gives you, which are largely synchronous, RPC-style communication. In Erlang, you can adapt to whatever system you need. If you are dealing with system A over there, your Erlang system becomes almost an extension of system A instead of forcing it to be something different, because you can adapt to whatever messages it sends and to whatever transport it uses (TCP or UDP or whatever) in a very straightforward manner. The primary difference is that Erlang’s adaptability is far superior.
In terms of concurrency in Erlang: when I first started to learn Erlang, I said "Let’s see how many Erlang processes I can create on my laptop," and it was something like a million and a half, just insanely ridiculous; if you are used to something like pthreads or Java threads, you can’t come anywhere close to that. I thought that was really something: you could create all these processes, so you could do all kinds of stuff with more threads of control than you could in these other languages. But that’s not really the point.
The point is that you’re dealing with coordination. You have systems that have to be coordinated; they are receiving events and they are dispatching events, where an event is a network message or something of that nature. You want to have these processes around not just because they’re easy to create, but because they give you flexibility in how you’re going to handle certain situations. You might have a situation where you want to spawn a process just to mimic an asynchronous style of communication.
You let that process handle the communication while this other process goes off and does something else. In Erlang you just don’t even think about it much, you just do it. It reminds me of a quote from Joe Armstrong: what if a language had artificial limits on the number of objects you could create? Say in Java you could create only 500 objects and then the system started acting strangely. Nobody would accept that, but in languages that don’t deal with threads and processes the way Erlang does, you do have those artificial limitations: you can only create a certain number of threads before the system goes belly up. In Erlang you can literally have hundreds of thousands of processes running concurrently, doing real stuff, and you don’t have to worry too much about it. That’s concurrency.
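The spawn-to-stay-asynchronous pattern he describes might look like this minimal sketch (the module and function names are mine, purely illustrative):

```erlang
-module(async).
-export([request/2]).

%% Spawn a worker to run a potentially slow computation; the caller
%% continues immediately and picks up the reply as an ordinary message.
request(Fun, Arg) ->
    Caller = self(),
    spawn(fun() -> Caller ! {result, Fun(Arg)} end),
    ok.
```

The caller later does `receive {result, R} -> R end` whenever it is ready for the answer, so a synchronous function is turned into an asynchronous exchange with three lines of code.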
Then there’s reliability: Erlang has this "let-it-crash" philosophy. I think it was a paper by Nyström that compared the "let-it-crash" philosophy to error handling in languages like Java and C++, and he found (I think) that about 30% of the code was devoted to error handling. If 30% of the code is error handling, there are going to be a certain number of bugs in that code, so error handling is actually adding bugs to your code. It’s accidental complexity that makes the code less stable to begin with. The Erlang philosophy is "Just don’t do that. Don’t even have that code; just keel over, and let someone else, at a point where the error can be handled, deal with it."
Now you might say "We’re just moving the error handling," but you’re really not, because this someone else is typically a supervisor process that just watches your processes, and when one stops, it restarts it. By doing that, you’re throwing away a certain amount of state, you’re throwing away expectations. Say process A is talking to process B, and A has sent a message to B and is expecting some response; if B dies, then A is probably monitoring B, because this message exchange depends on the lifetime of B.
A sees that B is gone, and it can react and do whatever it needs to do. Whereas if B tries to handle the error itself and goes off doing error handling, meanwhile other processes might be sending to it, and those messages queue up in its mailbox and use up memory; in some cases trying to handle all these errors adds problems instead of fixing them. By crashing and letting the process restart, you are starting from scratch again, and it’s just much cleaner.
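In OTP that someone else is a supervisor behaviour, but the idea can be sketched in a few lines without it (my own toy example, assuming nothing beyond the standard library): link to the child, trap exits, and restart the child whenever it dies.

```erlang
-module(restarter).
-export([start/1]).

%% A toy supervisor: whenever the child exits, start a fresh one,
%% throwing away whatever broken state the old one had.
start(ChildFun) ->
    spawn(fun() -> supervise(ChildFun) end).

supervise(ChildFun) ->
    process_flag(trap_exit, true),
    Pid = spawn_link(ChildFun),
    receive
        {'EXIT', Pid, _Reason} -> supervise(ChildFun)
    end.
```

A real system would use the `supervisor` behaviour instead, which adds restart strategies, rate limiting, and supervision trees on top of exactly this mechanism.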
Yes. Erlang has these decoders that are just part of the system. When you open a TCP socket, for example, you can tell it that you want it to do packet decoding; you can say {packet, http} and it will read HTTP headers off the socket. When it’s done, it will give you a response that says "end of headers," and then you can flip the socket out of that mode into a normal data mode and read the body of the message. You can combine that with a mode called "active once" (a TCP socket can be active, passive, or active once).
What active means is that as messages come in on the socket, they are sent straight to the owning process, queued in that process’s message queue. If it’s a passive socket, the process has to actively go and receive messages off the socket; otherwise they get queued up TCP-wise: the TCP window will close and the other side will stop sending, so they are queued essentially in TCP. Active once is in the middle.
It’s like saying "I want this socket to deliver into my event queue, but only for one message. After one message, put it in passive mode." That lets the TCP window work and block the sender if necessary, but it lets you avoid overwhelming the receiver with messages. It’s a nice happy medium that most people use. Your typical loop for receiving and acting on messages is to put the socket into active once with, in this case, packet http, read one header, and if it’s the end of headers, go into data mode.
If there are more headers coming, you do it again: active once, packet http, and it will receive the next header. What it gives you is a tuple that describes the header: the name of the header and the value of the header, all parsed for you. All of that is actually done in C code inside the Erlang VM, so it’s quite efficient. It’s the best of all worlds.
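Putting {packet, http} and {active, once} together, the header-reading loop he describes might be sketched like this (my own illustration of the pattern, not code from the interview):

```erlang
-module(http_hdrs).
-export([read_headers/1]).

%% Read one HTTP message's request line and headers off Sock, one
%% packet at a time, then flip the socket into raw binary mode so
%% the body can be read as plain data.
read_headers(Sock) ->
    inet:setopts(Sock, [{packet, http}, {active, once}]),
    receive
        {http, Sock, {http_request, _Method, _Uri, _Version}} ->
            read_headers(Sock);
        {http, Sock, {http_header, _, Name, _, Value}} ->
            %% Each header arrives already parsed into name and value.
            io:format("~p: ~p~n", [Name, Value]),
            read_headers(Sock);
        {http, Sock, http_eoh} ->
            %% End of headers: switch to data mode for the body.
            inet:setopts(Sock, [{packet, raw}, binary]),
            ok
    end.
```

Each pass through the loop re-arms the socket with {active, once}, so exactly one parsed packet lands in the mailbox at a time, which is the flow-control property described above.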
The thing about REST, if you’ve read Fielding’s thesis, is how well laid out the arguments are. Think of all my CORBA experience: you are dealing with a number of people, you are in the OMG, you have people with a lot of industry experience, people that are more managerial than technical and vice versa, and all these people come together, share their experiences, and try to build these standards based on those experiences.
Certain people have certain ways they like to do things, and sometimes the standards reflect the personalities involved more than the technical details. In the case of Fielding’s thesis, this is Fielding working on his own thesis, certainly getting influenced by advisors and other people around him, but it’s his own work, and he’s able to sit and just work through it. The thesis is about software architecture, networked software architecture; REST is a piece of it. It has a very methodical way of working through architectures in terms of constraints that you apply to achieve certain properties of the system.
If you think back to what CORBA was, it was like "I’ve got a product that does this, and this company has a product that does that, and we want to have a standard that can cover both of them, so we put together a standard that lets these things fall under that umbrella." It’s not so much architecture in the Fielding sense, because you are not thinking about constraints on the system, and you’re not thinking about the properties you are trying to induce with the constraints; it’s more of a politics and market thing.
A huge realization for me was that here I’d been working on all these distributed architectures, implicitly knowing certain things that you do in distributed systems and things that you don’t do, but never having seen it laid out as clearly as Fielding’s thesis does. Once I saw that, as I studied REST more and more, I realized that some of the things we’d done in CORBA could never scale to the same scale, web scale, that REST does. Even IDL itself: the whole basis of CORBA is that you’re defining these distributed objects.
So you are writing IDL to describe the operations the objects have, what parameters the operations take, what the operations return, how those operations are grouped into interfaces, and the inheritance between interfaces, whereas with REST you’re dealing with a uniform interface: everything on the network has the same interface. Just the fact that you are writing IDL means you are effectively writing a protocol, a new protocol for this object. Anything talking to the object has to learn this new protocol, has to know the protocol, and that is a huge scalability loss, whereas with Fielding’s approach everything has the same interface, and you know what to do because it’s inherent in the system. Everybody knows the interface already. That alone makes it more scalable.
Everyone says "Aren’t you just shifting the problem to the data?" but the data is going to be a problem no matter what. The way I see it, in a CORBA system you have interface issues plus data issues, whereas in REST you’ve made one of them constant by having a uniform interface. You still have data to describe and interchange, but even there, in CORBA you are dealing with a single form of data, which is IDL-described data, whereas in REST you have all the media types, MIME types, and you can choose the one that makes sense for you and interoperate that way. As long as the client and the server understand the same types, they’re happy; a lot more flexibility.
In Fielding’s thesis it’s all in there: there are constraints. Client-server is a constraint, the uniform interface is a constraint, and then there are some sub-constraints of that. There is statelessness, and the properties are things like visibility and manageability. Take representational state transfer, which is the basis of the name REST: what you are doing is saying "I’m going to send a representation of my state to someone who asks me for my state."
I’m sending some representation of that state, and the request they send me and the response I give back are self-contained. The property that yields is that now you can have an intermediary. Whether the originator sends this thing directly to me or it comes through a proxy, it doesn’t matter, because the request is going to be the same either way, and the same for the response. The response can go back through a proxy or a cache or any other intermediary to the original caller and it’s still the same; nothing necessarily has to happen to it. That gives visibility to things travelling through the system. It lets you do things like caching, which is very important to something as large-scale as the web.
That’s one example. Just by choosing to send a representation the way it’s sent, and to have statelessness and self-contained requests, you gain visibility and monitorability (if you want to call it that) of the exchanges. Contrast that with CORBA: because you defined this protocol in IDL, it’s essentially a custom protocol; the caller has to know the custom protocol and the receiver has to know the custom protocol. If you stuck something in the middle, it couldn’t really know what’s going on, because there is not enough information in the message to tell it what the message is or what the types are. The messages really aren’t self-describing; the ends are expected to know what’s in the messages just based on the identifier for the operation.
Something in the middle can’t cache values; it can’t know that values are a certain type or what form they have or anything like that. People have done caching in CORBA, but it’s always extremely painful, because you have to inject all this type information for the interfaces you want to use into your cache. Your cache becomes this conglomerate of knowledge about these custom protocols, and if you try to use it with something it doesn’t know about, it can’t do anything. It’s very similar to imagining that on the web your browser had to be changed every time you visited a new website. Of course, that would never fly.
Part of the architecture is that the system is built to be more about GET than it is about POST. For another type of system, where you are posting all the time or doing a lot of modification of that nature, you wouldn’t necessarily want the same constraints that REST uses. REST is an example of an architecture with a certain set of constraints. There is nothing to say that you can’t use the same principles that Fielding described in his thesis to define a differently named architecture. REST is an architectural style; that’s its name. You could define a different style with different constraints and give it a different name. REST isn’t the answer to everything; it’s the answer for the class of systems that fall under its constraints.
I’ve been looking at Erlang for about 4 years, and using it for real work for over 3 years now. I learnt things the hard way along the way. First of all, you should subscribe to the erlang-questions mailing list, because the community itself is really pleasant. There are so many people that are just very helpful. People come to the list and ask some of the most basic questions, and they are never sent away with "Go read the manual" or something less nice. The community itself is very helpful, so don’t be afraid to ask questions.
But in terms of advice: if you are dealing with a multilanguage environment (which you will be, unless you are starting something from scratch), just be aware that, between two languages, the more dominant one language is, the more code there is in it, the more the other language is going to get blamed for all kinds of things, whether it deserves the blame or not. There is a lot of fear; I think a lot of people are afraid of new languages. They don’t want to learn new languages, so when you introduce a new language, they’re a bit apprehensive about using it. That’s something to take into account. You have to almost be willing to bend over backwards in that situation, because if there is something to be done that involves both languages, you’re going to have to adapt to the other language instead of the other way around.
But what you’ll find in using Erlang is that you are a lot more productive; there is less code to write. Learn the libraries: just like in Java, there is a lot of stuff available to you. In Erlang, there are all these OTP libraries that can do some pretty amazing things that you can take advantage of. Get Joe Armstrong’s book, and get Francesco Cesarini and Simon Thompson’s book; read them, understand them, and learn the libraries described in them. Once you know the libraries, you’re going to be very productive. Other than that, the things that you’ve heard about Erlang are generally true. It does have this really nice concurrency model and coordination capability, and it has strong, practical reliability. You can use it to build prototypes that you can actually ship.
In a lot of languages you build something quick and dirty, you give it to a customer, and it might work for a demo, but it’s never going to work in production, whereas with Erlang you can build something that’s "a prototype" and it can actually run in production. The stuff about the ability to load code live into a system is true, as is the tracing that you can do on a live system. Tracing is built into the system; you can enable it very selectively, on certain functions or modules, whatever you need to see. You can turn that on and (we’ve actually done this) trace a customer’s live system; if there is a problem, you can find it, fix the code, and actually load the new code live into the system, fixing a live system on the fly.
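The selective tracing he mentions is available through the standard dbg module; here is a minimal shell sketch (my own example, tracing a stdlib function rather than any real application code):

```erlang
%% In the Erlang shell of a live node:
dbg:tracer().                  % start the default tracer process
dbg:p(all, call).              % enable call tracing in all processes
dbg:tp(lists, reverse, 1, []). % trace pattern: calls to lists:reverse/1
lists:reverse([1, 2, 3]).      % this call now appears in the trace output
dbg:stop().                    % turn tracing off again
```

On a production node you would narrow `dbg:p/2` to specific processes and use match specifications in `dbg:tp/4` to limit the volume of trace output.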
The final piece of advice is: don’t get hung up on the syntax. The syntax isn’t Java or C or C++. That doesn’t mean it’s bad; it’s just different, and it’s actually quite elegant when you get used to it. It should take you only a day or two to get used to it.
I’ve read Stu Halloway’s Clojure book, which is really nice. I had never met Stuart before; I met him recently at another conference, and he knows this stuff very well. When you read his book, you can tell he’s coming from a Java background and is targeting the book at Java people. I understood from talking to him that some Lisp people were saying he should have focused more on the Lisp aspects, because if you are a Lisp person wanting to come to Clojure, this book isn’t necessarily the right book for you. I think you can’t really do both; that would be hard to pull off, and given his background he chose the right direction. But one of the things he focuses on quite a bit is Lisp macros.
The macros in Lisp are what set it apart from most other languages. In fact, I’ve been trying to figure out how to get some of that, because Erlang has some macro-ish features, plus the abstract form and parse transforms, and I’ve been exploring those a little more. Stuart’s book is really nice. I also read a book on the way here called Let Over Lambda, written by Doug Hoyte, and it’s all about Lisp macros. It was pure coincidence, because I actually bought the Hoyte book before I got Stu’s book. That book is all about Lisp macros written in a certain way, and not everybody agrees it’s the right way to write them, but he shows these amazing things that you can do with macros once you understand how to do them and once you choose a certain way of doing them.
It’s not a book you read lightly; it’s best to know Common Lisp going in. There is a lot of studying you can do in that book, because the macros are quite involved, but I think it’s really well written. It’s pretty self-explanatory in the sense that, if you know Lisp, you’re going to understand what he’s talking about. Those are the two I’ve read most recently.
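For comparison, the Erlang macro-ish features mentioned above are far simpler than Lisp macros: -define gives textual substitution closer to C’s preprocessor, while the deeper metaprogramming happens through parse transforms on the abstract form. A tiny sketch of ordinary -define macros (my own example):

```erlang
-module(macros).
-export([area/1]).

%% -define gives simple token substitution; ?NAME expands the macro.
-define(PI, 3.14159).
-define(SQUARE(X), ((X) * (X))).

area(R) -> ?PI * ?SQUARE(R).
```

Parse transforms go much further: they let a module rewrite the compiler’s abstract syntax tree of other modules, which is the closest Erlang comes to Lisp-style macro power.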