
Bryan Cantrill on Containers, Linux, Triton and Illumos, Virtualization, Node.js
Interview with Bryan Cantrill by Chris Swan on Dec 20, 2015

Bio Bryan Cantrill is the CTO at Joyent, where he oversees worldwide development of the SmartOS and SmartDataCenter platforms. Prior to joining Joyent, Bryan served as a Distinguished Engineer at Sun Microsystems, where he spent over a decade working on system software, from the guts of the kernel to client code on the browser. In particular, he co-designed and implemented DTrace.


   

1. [...] How did you first get into all this in the first place?

Chris' full question: Hi, this is Chris Swan, one of the cloud editors at InfoQ. I am at QCon San Francisco 2015 with Bryan Cantrill. Bryan yesterday did a fantastic presentation on debugging containers and DTrace and all of that good stuff, so that's not one of the things I'm going to talk to him about today. But one thing you mentioned along the way, Bryan, was your Mum saying when you were a kid, "You're never going to be able to sit in front of a computer all day", and you said of course you did. But what were your computers growing up? How did you first get into all this in the first place?

I fear it's a very typical kind of experience for someone my age growing up during the personal computing revolution. Maybe it was slightly atypical in that I didn't have an Apple IIe or Commodore PET, but rather we had an IBM PC XT. My father, who's a physician, had a very ill-conceived business plan to sell the software that he had written for his own emergency room to other emergency rooms around the country. Of course, he did this before he had actually done the math on the number of emergency rooms in the country that would buy the software. So he bought the personal computer for his company, and we had, I believe, the second IBM PC XT to be sold in the state of Colorado, which was a very big deal at the time. A ten-megabyte hard drive at a time when basically no machines had hard drives; it was all 5 1/4" floppies. I was programming BASIC and definitely enjoyed it. We had a Mannesmann Tally printer and that AMTEC amber monitor; it's kind of like mother's milk talking about this. The first thing you see when you come out of the womb is your mother, and those first monitors feel like that; I still love that AMTEC amber. Those were my first machines, with 256K of memory, which in 1983 was a lot; this was just after the famous "640K ought to be enough for anybody" line.

And I got to use the computer growing up, got to start messing around with it and programming it, but I had really decided that I was not interested in programming. I'd done a lot of programming, and like many young teenage males I was convinced I knew everything: I had done programming, I knew programming, I wasn't interested in programming. When I went to actual university, I was convinced I wanted to go into economics; I was going to be an economics concentrator. I'd taken some economics in high school, but what it really boiled down to was that I'd never taken serious economics or serious computer science. I took those two things side by side in the first semester of my first year of university, and there was really no comparison between computer science and economics. With computer science, I had no idea there was actually a real discipline, that we could actually reason about programs, that this was more than just programming. Thinking about computation and the way we compute is more than just writing programs. What I knew how to do was write programs, but I knew nothing about computer science. That began my love affair with computer science, which hasn't really ended. For many years I thought I'd made this hard turn in my life from economics to computer science, but looking back on it, I now realize that what I've always been interested in is systems. It's just that software systems are these terrific systems to work on, because they are synthetic and mutable and we can reason about them; we don't just make some hypothesis we can't explore, we can go explore hypotheses very quickly. I am a software systems guy pretty much to the marrow.

Chris: It sounds like you’ve almost found religion or philosophy inside of computer science.

It is that way; I found purpose at a very deep level, I think. When I discovered computer science I felt very lucky that I discovered it when I did, because I could very easily not have taken a computer science course until my senior year, or maybe not have taken one at all, and I don't know what would have happened, because it does speak to me so deeply, at such a profound level, and as I've become older I have come to believe in that even more strongly. The incredible thing about what we do is that we are able to create this mathematical machine. It's funny that I want to disagree with this even as I say it, although it's tautologically true: correct software doesn't wear out. Correct software is timeless. People say, "Yes, but software is never correct"; that's not true, there is lots of correct software. Every day our lives silently come into contact with correct, perfectly working software.

And it's amazing to me that we are able to do these things that are transient, because software is so ethereal, and yet persistent in a way that only mathematical knowledge is persistent. So it is almost finding philosophy, actually. When I first came out to Sun Microsystems after having graduated from school, I was working with Jeff Bonwick, who went on to invent ZFS, a great engineer and the reason I came to Sun. I'd been at Sun maybe a month and we were debugging a problem together in Jeff's office, with the editor up in one window and the debugger up in the other, and as we are looking at it he steps back and says: "Does it bother you that none of this actually exists?" "Come again? What do you mean it doesn't exist?" "It doesn't actually exist. We are looking at representations of knowledge, at the orientation of magnetic particles on plastic somewhere that are pulled into a capacitor that is being refreshed every few tens of nanoseconds, and we are looking at the representation of that and inferring a system that doesn't actually exist. It doesn't exist any more than 'you' and 'me' exist; those are abstractions that we've invented. Does that bother you at all?" And I am thinking, "This guy is mentally ill; it actually bothers me that I may have come to spend my career with someone who has a mental illness." But Jeff's right; it is amazing. It is all abstractions, towers and towers of abstraction, with no analogy in the human experience, and I think we are still coming to grips with what software means.

   

2. You ended up becoming a kernel developer. How did you change from your early PC experience and DOS and all of that to C and the Unix philosophy? What was the journey you took?

I actually started as a kernel developer; I haven't moved on, that's my problem. I was never an application developer per se. I was a know-it-all who didn't know very much, an eighteen-year-old male whose ego was summarily slaughtered when I arrived at university, as happens to so many people. In computer science I discovered the operating systems course, fell in love with operating systems, and loved kernel development. Then I spent two summers working for a company called QNX, a real-time operating systems company; it was devoured by Harman and then Blackberry, so it's sitting in the belly of the Blackberry beast, and at some point we will cut open the carcass of Blackberry and QNX will pop out. Actually, it's pretty intact there: great company, great operating system, great technology. But I always had the idea that I was doing OS kernel development now because I didn't yet fully understand it, that it wasn't yet done, and when the kernel was done I'd go on to the next thing, whatever that was. That was twenty-some years ago and it's just not done yet; that's the problem. We're just not done with the abstractions that are in the kernel. So it's not that I ended up there, it's that I started there. And I do believe strongly that that layer of abstraction, the operating system, has a unique kind of power in the stack of abstraction, because it is the lowest piece of software; beneath it is firmware and then hardware. As a result, it's the operating system that really creates a reality, and you have a unique power in the OS to create abstractions that you can't create elsewhere.
And I still love creating those abstractions. I still believe in that, and we are not done yet; any time you think we are done, the needs change or shift and all of a sudden there is a whole new need. That's certainly apropos of containers: containers are an OS kernel phenomenon, a kernel abstraction, and they should be. They are for us, not so much for Linux, and we can talk about that. These are OS-level abstractions, and I still love creating them. I kind of think that at some point I will be done with those and I'll go off to do something else, but it's not out of me yet; it's still in the blood.

   

3. Do you think things like unikernels can potentially get us there?

Now you're trolling me. I treat the breathless proponents of unikernels like a thirteen-year-old who has been caught smoking a cigarette: if you've been caught running a unikernel, I'm going to put you in your bedroom and you are going to smoke the entire pack; you are going to run your entire infrastructure on unikernels until you can explain why unikernels are a very, very bad idea. Unikernels are essentially a regurgitation of a very bad idea from the mid-nineties for which there has been generational amnesia. This is the nice thing: you can always go back fifteen years and resurrect ideas, good and bad, because everyone has forgotten them.

And they have resurrected the idea of the Exokernel: the idea that application developers actually want to be kernel developers, and that that's how we can empower our applications. Which is an insane idea, I mean actually insane, that you would want to have your application be the operating system, with no protection boundary within it. You are now that lowest layer of abstraction, and that lowest layer is on the one hand very empowering, but on the other hand it's ugly; the things we need to do in the operating system are things applications shouldn't be burdened with. That's one element of it, but the much more pernicious aspect of unikernels is that an operating system that is an application can't have the facilities we need out of a modern operating system, in particular to understand pathological behavior. So what happens when your unikernel goes sideways and starts misbehaving in any of the ways that things can misbehave?

If it starts consuming too much memory, how do you even know that your app is consuming too much memory? Your app and the operating system are now the same thing. Your app will presumably degrade in performance; how are you going to figure out what's going on? "I'm going to ssh in." No, you're not; you can't even create a process. Why? Because you're the operating system, and the operating system creates processes, it doesn't execute them. So the idea is in-kernel SSH? Oh my God. The abstraction of an operating system kernel has been a very productive one; it's the one we gravitated to for a reason, not because we didn't have other ideas. Do I sound skeptical? I don't want to be cynical, and if someone wants to go deploy on unikernels, I honor their spirit; I think it's good to experiment with ideas, even ones I think are obviously bad, and maybe some good comes of it in some dimension. But when I compare running a unikernel to running a container on the metal, I've got no idea why you'd choose the unikernel; it just makes absolutely no sense to me.

Chris: You touched upon an "us and them" situation between what you're doing with operating systems and what's happening with Linux. I guess it relates to Linux branded zones and everything that you've been doing with Triton, so tell me more about that.

I believe in OS development; I believe in the OS axis of innovation, and I am not the only person who believes in that. The system that we developed, SmartOS, which is a derivative of Illumos, is absolutely an operating system built on that belief in innovation in the operating system. I don't know that I would really put Linux in that category. The way the Linux kernel is developed is just very, very different, and in particular there is a real reluctance to take large change, even if that large change is fundamentally sound. Look at the initial reaction to ZFS years ago: that it was a rampant layering violation. Well, OK, it's true that ZFS did away with a bunch of layers, but they were layers that needed to be done away with; that's its power, not its peril.

And there is a fear of the unknown there. You can look across the board, and it's not just file systems: in observability, in debuggability, in containers, in service management, Linux lags on all of these things. It's got ubiquity, which is fine, but that doesn't mean there isn't room for other people who believe in OS innovation, and I do believe in OS innovation. So we've got our system, SmartOS, but we also recognize that not everyone wants to run their app on SmartOS. Even though it's Unix and very similar, it's not Linux, and now with the differences between Linux and Unix we have to re-enact the Unix wars. Maybe we should do a puppet show, a Unix wars puppet show; what do you think about that?

Chris: Can we get Tanenbaum?

We should, and we could do marionettes, or sock puppets, maybe with some performance artists; I just feel that the Unix wars are like the Peloponnesian Wars, and maybe we could sing epic war poetry. The Unix wars have been lost to history, but suffice it to say it was a bunch of very small differences that got blown very much out of proportion, because AT&T did a bad thing, basically, a long, long time ago; the original sin of Unix is AT&T trying to proprietarize it. We are Unix but we are not Linux, and the reality is that, the lack of kernel innovation aside, Linux binaries are the de facto standard for applications. So what we decided to do was resurrect technology that had been around at Sun, which is to implement a Linux system call table on top of the SmartOS kernel. You switch that system call table in whenever you switch in a process that is a Linux branded process, so that process runs against the Linux system call table, and when it is off CPU you switch back to the native system call table. Why did we do that? Because with the rise of containers, popularized by Docker, we saw the ability to bring our container technology, zones, to a much broader audience. Zones, originally developed by Sun back in the day, allow for true OS virtualization. By true OS virtualization I mean that when you are in a zone it looks and feels and smells like a virtual machine, but it's not; it's a virtual operating system on one OS kernel.
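The mechanism described above, swapping the system call table as processes are scheduled on and off CPU, can be sketched as a toy userland model. Everything here is invented for illustration (the table names, the `Process` class, the `syscall` dispatcher); the real LX-brand support lives in the SmartOS kernel and looks nothing like this.

```python
# Toy model of per-process system call table switching, as in
# SmartOS "LX branded" zones. Purely illustrative: real brand
# support lives in the kernel, and these names are invented.

def native_open(path):
    return f"native open({path})"

def lx_open(path):
    # Emulate Linux open(2) semantics on top of the native kernel.
    return f"lx-brand open({path}) -> translated to native"

# One syscall table per "brand"; the kernel picks a table based on
# the brand of the process currently on CPU.
NATIVE_TABLE = {"open": native_open}
LX_TABLE = {"open": lx_open}

class Process:
    def __init__(self, name, brand):
        self.name = name
        self.brand = brand  # "native" or "lx"

def syscall(proc, name, *args):
    # The effective table follows the brand of the on-CPU process,
    # so unmodified Linux binaries see Linux syscall semantics.
    table = LX_TABLE if proc.brand == "lx" else NATIVE_TABLE
    return table[name](*args)

smartos_proc = Process("svc.startd", "native")
linux_proc = Process("nginx", "lx")

print(syscall(smartos_proc, "open", "/etc/passwd"))
print(syscall(linux_proc, "open", "/etc/passwd"))
```

The point of the model is only that the dispatch decision is per-process, not per-machine: both brands run side by side on one kernel.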

They are lightweight, they're fast: it's a container. In the Linux space, though, there is no notion of a container down-stack; there is no notion of a container in the Linux kernel. It's actually namespaces and cgroups, and namespaces are these diagonally crossed entities that are not very coherent; it's not the way it was done in jails, it's not the way it was done in zones. What we fundamentally wanted to do is run containers on the metal; that's what we at Joyent have always believed in, and I believed it for years prior. The Linux substrate doesn't allow you to do that securely, because the namespaces take these diagonal cross-hatches and there is so much surface area that a Linux "container" has so many ways of polluting the machine on which it resides that you can't reliably put these things side by side. I mean, Docker just adopted user namespaces, user namespaces being the ability for the same uid to be in two separate containers and not actually be the same user. If you don't have user namespaces, you don't have anything; don't talk user namespaces unless you have user namespaces. And user namespaces are a relatively recent thing, and Docker just announced that they are using them. So this is not even nascent; there isn't even a word for that. We want the ability, and we think people want the ability, to have a bulletproof substrate on which they can take a container and run it on the metal; it can't be a Linux kernel, though. What we could do is run our kernel and put a Linux system call table on top, which was daunting, probably even crazy, but what we found is that the reason it was possible at all is that the Linux binary interface has really slowed down.
Back in the day, Linux was much more cavalier about breaking apps, but Torvalds, because it does emanate from Torvalds, discovered late in life a kind of binary religion, and like many who discover religion late in life, he's become a total evangelical: a binary compatibility evangelical. He is much more strident about maintaining binary compatibility with old Linux binaries than we in Solaris, frankly, ever were. On the one hand that may be bad for the ability to evolve the Linux kernel; from my perspective it's great, because it means that the Linux target is no longer moving, and if you can convincingly execute that Linux system call table, if you can convincingly make available all those Linux APIs and ABIs, you're done. So we resurrected this work.
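Bryan's earlier point, that a Linux "container" is not a first-class kernel object but a composition of namespaces plus a cgroup, can be made concrete with a small sketch. The `CLONE_*` values below match the flag constants in Linux's `<sched.h>`; the "runtime" that composes them is hypothetical.

```python
# A Linux "container" is not a kernel object: it is whatever set of
# namespaces (plus a cgroup) a runtime chooses to compose. The CLONE_*
# values below match Linux's <sched.h>; the "runtime" is hypothetical.

CLONE_NEWNS   = 0x00020000  # mount namespace
CLONE_NEWUTS  = 0x04000000  # hostname / domain name
CLONE_NEWIPC  = 0x08000000  # System V IPC, POSIX message queues
CLONE_NEWUSER = 0x10000000  # uid/gid mappings (the late arrival)
CLONE_NEWPID  = 0x20000000  # pid numbering
CLONE_NEWNET  = 0x40000000  # network stack

def container_flags(namespaces):
    """Fold a chosen subset of namespaces into one clone(2)-style mask."""
    mask = 0
    for ns in namespaces:
        mask |= ns
    return mask

# A typical runtime picks most of them, but nothing forces it to,
# which is why "container" on Linux is a convention, not a guarantee.
typical = container_flags([CLONE_NEWNS, CLONE_NEWUTS, CLONE_NEWIPC,
                           CLONE_NEWPID, CLONE_NEWNET])

# Without CLONE_NEWUSER, root inside the "container" is root on the host.
print(f"mask=0x{typical:08x}, user ns included: {bool(typical & CLONE_NEWUSER)}")
```

This is exactly the "diagonal cross-hatch" complaint: each namespace is opt-in and independent, so two "containers" on the same host need not even agree on what a container is.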

Actually, it's funny: we didn't resurrect this work, a member of the community resurrected it, a year and a half ago now, and found that it actually still worked really well, so we decided to take it to its logical conclusion, which is what we did. Over the course of 2014 we finished that technology and deployed it into production in 2015, and it's now amazing: you are able to run Linux apps at on-the-metal speed in a container, on either the Joyent cloud or one's own infrastructure. That's what Triton is. The reason it is named Triton, by the way, is that when we were first floating this idea by folks who understood containers, like Solomon Hykes at Docker, they were saying: wait a minute, so I can run a Linux binary in a SmartOS zone? This is the best of both worlds: I get the binaries that I want to run, but in this bulletproof abstraction that's actually been designed properly by someone with a developed frontal lobe, not that I have an opinion about this. It's the best of both worlds, and we wanted a mythological creature that captured two worlds. Triton is the son of Poseidon, who lives in the sea but has legs; half ocean, half land, hence the name. It's been a lot of fun to see that take off and see people able to deploy their Linux infrastructure but still get all the advantages of running it in a container on the metal.

   

4. You commented yesterday about the horrors of hardware trust anchoring. Tell me more about that journey, what you found, and how it relates to what you do with Triton, or not.

The horrors of what? Hardware trust anchors?

Chris: VT-x, the Intel virtualization extensions for the hypervisor.

Oh, yes, sorry: the horrors of implementing hardware-based virtualization. What does that entail? It entails that it's amazing that anything ever works at all. Think of what's happening from a microprocessor perspective: the microprocessor has taken the entire supervisor surface area and virtualized it, and now you are running software that believes it's on hardware but actually isn't, and it occasionally traps out to the hypervisor operating system for assistance. You look at the level of communication between those two entities, and it's very low level; it's just amazing that it all works, given how intricate it is. And when it doesn't work quickly, it's not really a surprise. People complain, "Oh, my I/O performance is terrible." Of course your I/O performance is terrible; do you have any idea how far away you are from the actual hardware when you do one of these things? When you trap into the operating system to do I/O, you are not in the operating system at all; you're in a body of software that's being run by the actual operating system. That body of software needs to talk to hardware which is not hardware at all but virtual hardware, and that virtual hardware then talks to the operating system beneath it. It's just this amazing layer cake.

And again, it's a marvel that it works, especially if you go to nested virtualization, where the head begins to spin; it is a marvel that it works, and it's not at all a surprise that it doesn't perform very well. Actually, I was at QCon last week, and I think people don't understand how involved these abstractions are. You just think, "Oh, it's virtual hardware, that's easy, there is no real cost to doing this", and one of the presenters had decided to run QEMU inside of a Docker container on top of a VM. So it was QEMU, which does hardware virtualization, in a container, which is OS virtualization, running on a VM, which is hardware virtualized, running on an actual operating system. Of course, that QEMU is not hardware-accelerated at all, and I am thinking, "Oh my god, how does this not perform terribly?" But he said it with such cheerful optimism that I thought, "I don't know, maybe they figured something out." It turns out it performs really poorly.

I was like, of course it performs poorly, and what was a minute became 45 minutes. They were saying, "We are going to see if we can make that a little bit better", and I was like, "No, you can't, because it's the abstraction that is the problem." These abstractions are really tough to make perform, and Intel did an amazing job making it perform well when you minimize the layers of interaction; the fact that hardware virtualization works as well as it does is a modern miracle. But there is no way that it is the right answer [inaudible]; it just is not.
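The cost of the layer cake Bryan describes compounds multiplicatively, which is why "a minute became 45 minutes" is unsurprising. The per-layer slowdown factors below are invented for illustration (chosen to land at that 45x scale), not measurements of any real system.

```python
# Hypothetical per-layer slowdown factors for an I/O-bound workload.
# These numbers are invented for illustration, not measurements.
layers = {
    "bare metal":              1.0,
    "hardware VM (trap cost)": 1.5,
    "OS container":            1.0,   # on the metal: no extra trap layer
    "unaccelerated QEMU":      30.0,  # software emulation dominates
}

def total_slowdown(stack):
    """Overheads compound multiplicatively across the stack."""
    total = 1.0
    for layer in stack:
        total *= layers[layer]
    return total

# Unaccelerated QEMU in a container on a hardware VM:
nested = total_slowdown(["hardware VM (trap cost)", "OS container",
                         "unaccelerated QEMU"])
# A container run directly on the metal:
on_metal = total_slowdown(["OS container"])

print(f"nested stack: {nested:.0f}x slower; container on metal: {on_metal:.0f}x")
```

The only real claim here is structural: each software layer multiplies, rather than adds to, the cost of the layers beneath it, so stacking emulation on virtualization blows up quickly while a container on the metal adds essentially nothing.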

Chris: Returning for a moment to ZFS, and also DTrace, and their absence from the land of Linux: if you talk to people in the Linux community, they kind of blame the CDDL, the fact that Sun was a relatively benevolent dictator and Oracle basically wasn't. How much truth is there to that? We seem to have gotten past it with ZFS on Linux.

Well, I think that shows how much truth there is to it. It was very surprising, I have to say. When we open-sourced DTrace and ZFS and so on in 2005, I was actually with Sun's counsel at the time, and I asked, "How long do you think it will be before DTrace shows up in Linux?" We had guesses, and I think the quickest guess was two months: some Danish master's student is going to get it working, the Danish master's student being a metaphor for someone who is bright but underemployed, not to offend the Danes. So it was going to show up in two months, and the guesses ranged from two months to two years. Never in my wildest dreams did I think that ten-plus years later I would still be fielding this question, because it never occurred to me that they wouldn't take it because of licensing. We assumed they would take it despite the licensing; we had no idea they were going to let the CDDL stop them. I can understand why you don't want NVidia or something proprietary to be in a GPL system, but this is open source, for crying out loud.

And the idea that these inside-baseball differences between the two licenses matter, when you are in a totally gray area of the law anyway: what is a derived work? We don't have a concise answer for what a derived work is in software, and nobody really wants one, because everyone is worried they'll get the wrong answer, whatever the wrong answer is for them. So I think it's ridiculous; it's a very obvious and transparent excuse for something that does make a lot of sense, one of the oldest ideas around here in Silicon Valley: NIH, not invented here. The not-invented-here syndrome is where, when something has been invented outside your walls, you reject it because it wasn't invented here. That's what this is actually about, that's what the rejection of all that stuff is about, and ZFS on Linux shows the licensing story to be effectively a myth. The much larger issue is not the licensing; it's the kind of change that Linux wants to adopt. Linux wants to adopt small incremental changes that can happen one drop at a time; the idea of taking 30,000 lines of code at once, that is just impossible, that can't exist, because that's too complicated.

Well, it might not be, actually; it may be something that was really well thought out over a long period of time and has its most concise explanation in those 30,000 lines. That is the case with ZFS, and I think cooler heads will prevail with ZFS and DTrace; Oracle did a bunch of ports of DTrace to Linux. I feel bad, because you've got this entire community that doesn't know that clean water is a thing. I've seen Brendan Gregg's performance-tools anti-patterns, where performance tools fall down, and the TL;DR on that is basically "Linux doesn't have DTrace". I wouldn't say it's upsetting; I just pity the folks who know what they don't have. The ones that don't know what they don't have are kind of oblivious, but there are plenty of people who know that what they have is not state of the art, that it's not acceptable, that they should have something better, and that something better is available: it's open source, the technology exists to get there, it's just the will that is missing. I think that is an unfortunate state of affairs for those folks using Linux.

   

5. Given how little I expect just about every Netflix application programmer cares about Linux and what's underneath it, especially now that they have moved to containers, why do you reckon they have not just gone to an Illumos derivative like yours?

Oh, why not go, especially to Triton, where it's Linux on top? I think that is just fear of the unknown; it's that simple. No one wants to say this, but people don't want to be among the few, they want to be among the many; there is great comfort in crowds. For the same reason no one got fired for buying IBM in the 60s and no one got fired for buying Microsoft in the 90s, no one is getting fired today for deploying on AWS, and no one is going to get fired for deploying on a Linux distro. That's a very powerful emotional draw, but in the fullness of time, and we are already seeing this, more people will realize: "Wait a minute, if this thing can run all my binaries and I can run it on the metal, why would I not just do that?"

I think there will be people, and there are people. Technological change, in my experience, happens much more slowly than you think it should for a long period of time, and then happens much faster than you ever thought possible. We saw this with containers. With containers I was in this weird future state: we knew for years that containers were the only right way to build things, but everyone else seemed happy to run VMs. We thought, I guess it's us or something, I don't know. But the more we built out, the more convinced we became that it wasn't us, that something had to change, and then it did; it was just hard to figure out what the catalyst would be, and it was Docker. And when it accelerated, it accelerated way faster than I thought possible. If you had told me two years ago that there was going to be a container summit, container conference, container camp, container this, container that, container camp London, that there was going to be a container track at QCon, I would have said, "You are putting me on, that's impossible."

If you had told me that two years ago, 24 months ago: no way. But here we are. I think that sometimes people overly infer from the present and think that change doesn't happen, and yet change happens all the time. That's part of the reason why history is important to me when I present; on the one hand people make fun of me for it, but on the other hand I know that young and old really appreciate understanding the history of technology better, because history tells you not just where you've come from, but where you could possibly go.

If you'll permit a brief tangent: I was listening to what is a great podcast, Invisibilia, a terrific podcast, and they were doing a show on computers and humans merging, effectively: wearable computing, how computing changes the human experience, how computing is going to change what it means to be a human. They were interviewing a bunch of breathless, the-singularity-is-near kind of people, talking about how this is going to change everything, and it's a little bit exasperating, because the arrogance to think that the computer changed humanity more than the train, more than the telegraph, more than the television, more than the radio, more than the airplane, more than the automobile, more than the combine: the sheer arrogance of that, to me, is such willful disregard for history. Yes, computing is an incredibly important technological change, partly because it unlocks software and so on, it's very important, but let's keep our pants on around here and keep some perspective. Computing changes lives, but the idea that computing is going to change what it means to be human more than those other inventions, innovations that were frankly more profound in terms of affecting more people faster? People think the 20th and 21st centuries are a period of great change; I will see you and raise you the 19th century, which entered agrarian and exited in 1900 with a country connected by rail, with aircraft right around the corner. The 19th century was a period of enormous technological change. So this is the long way of saying that I think things change more than people think they will, and when I look at how things are deployed today, I don't get too caught up in it. If it works for people, great; I am not going to talk you out of it. I am going to continue to take the path that I feel is the right path, and that means right now innovating in the OS.
And fortunately there are people who agree with me; fortunately I am not totally crazy, at least not yet. There are enough people developing and deploying on the technology that I think I am not completely insane.


6. We've spent a fair amount of time talking about systems and systems programming, but Joyent has also been involved with applications, particularly Node, and you wrote very eloquently a little while ago about the io.js fork and why that was actually a good thing. But people grumble about V8 and the dependency there, and the fact that it was never really developed for the server side. How do you see that whole thing playing out?

It was obviously a very educational experience, that whole thing. I think the problem we ran into is that if your open source project is so popular that multiple companies are building a business around the technology, it needs to be in a foundation, basically; it needs to be with a neutral third party. Because the reality is that no matter how above board your stewardship is, no matter how hard you try to do right by the community, those commercial interests are just too great; they will rip you apart. And we’ve seen this before: we’ve seen it with Hadoop, we’ve seen it with Docker, we’ve seen it with Node, we’ve seen it with others, where you’ve got corporate stewards that are not necessarily doing the wrong thing. We saw that with Node, and we realized it had to go to a foundation.

In terms of where that is going, some of these things were a little bit invented, in terms of "Oh, you know, we are on this old version of V8, we should be on a much newer version of V8." Alright, so they upgrade V8 and of course everything breaks; well, that’s why we were on the old version, etc. And it’s true that V8, a terrifically innovative JavaScript engine at the time, wasn’t really designed with Node’s use case in mind. It was not really designed for modules to have binary stability, for example, and it’s unfortunate that that hasn’t been embraced as much by the community as we would like.

But from Node’s perspective the good news is that we clearly realized these efforts had to be merged. On the one hand, I am forkaphilic: I believe in the power of forking, I think it’s important, and if I am to be totally honest about it, it’s important for exactly the reason the io.js fork ended up being important, in that it did allow a lot of people to experiment with different ideas, allowed us to come to a middle ground, and allowed us as a community, as a Node community now, to get to a better place.
That, of course, means our role in the Node community has changed, obviously, but we still use Node; we are still big believers in Node. For me, Node and JavaScript, paradoxically, still hit a bit of a sweet spot, in that JavaScript is a dynamic language flexible enough to allow for different approaches. Of course, JavaScript is obviously way too flexible; there is a difference between flexibility and lawlessness, and it seems that JavaScript crosses into lawlessness way too frequently. It seems that JavaScript could do some things to help you out a lot more. But one of the big differences between Node and Go, for example, is that you don’t hear people complain that in Node they are writing the same code over and over again, because Node and JavaScript are much more flexible about what you are able to go do. So I think it’s still viable and it’s still important to us; we still build things in Node. I think Go is another language that represents the Zeitgeist, Rust is also very interesting, and presumably there are other languages to come. Node is important to us, but obviously our role has changed with the fork and the reunification.

Chris: Great. I think we are running out of time, unfortunately, but it’s been fantastic having you drop by and talk to us. Thank you very much, Bryan.

Thank you very much. Great.

InfoQ.com and all content copyright © 2006-2016 C4Media Inc. InfoQ.com hosted at Contegix, the best ISP we've ever worked with.