Bio: Tim Bray managed the Oxford English Dictionary project at the University of Waterloo in 1987-1989, co-founded Open Text Corporation (Nasdaq: OTEX) in 1989, and co-invented XML in 1996-98. Currently, he serves as Director of Web Technologies at Sun Microsystems, co-chairs the IETF "Atompub" working group, and publishes a popular weblog (http://www.tbray.org/).
1. So we are here with Tim Bray at Canada on Rails. It was a very pleasant surprise to see you here. I want to talk to you about dynamic languages and why you are interested in Ruby. Why don't you start by telling us a little bit about why you are interested in Ruby these days?
I'm an old Perl hacker and I've been using dynamic languages for many years; I go back before Perl, to the days of Awk and things like that. My own blog, which I built as software from scratch by hand, is in fact a Perl application. Like many web-oriented hackers I've kind of done everything in Perl because you get things done that way, but at the same time I hated it because of its write-only aspects; it was gross. It doesn't really push you in the right direction, but you get a lot of work done and it's fast. When things like Python and Ruby started to show up over the horizon, the notion that you could have something that's dynamic and gives you the quick reward, and at the same time is clean and object-oriented, was very appealing to me. Like most people, I didn't start taking Ruby very seriously until Rails suddenly became hot and sexy and all the rage.
Sun is a big company, and I think that to a certain extent Rails is probably seen as falling into the LAMP basket (LAMP stands for Linux, Apache, MySQL, and Perl, PHP, or Python). I think that perception of the world is actually not entirely accurate. It's the whole notion that you have web applications that are quickly and easily constructed based on the use of dynamic languages, good database-mapping frameworks, and things like that. When it comes to that whole ecosystem, my opinion is that we are vendors of computer systems, and a lot of people are writing very cool code based on those kinds of technologies, so we ought to be falling over ourselves to sell them computers to run it on, even though it's not Java.
Absolutely. I think we have more than that. We have the responsibility to try and expand the Java platform. Right now, if you took all the developers in the world and put them into baskets based on their primary working environment, I think the Java basket is probably the biggest. There are many applications where Java is really an appropriate tool, and there's also the case where a lot of people make their living not doing greenfield apps, but expanding, extending, and enhancing the existing application inventory, which is largely in Java and on the JVM. So we totally can't take our eye off the Java ball, but that doesn't mean that Ruby and PHP are things that are going to go away; they are going to grow and that's great, and I think we should be friends with all of them.
Right. I love JRuby. I love Jython. I love Caucho, the guys that got PHP running on the JVM, for a bunch of reasons. The first one is all those millions of developers living on the Java platform. Right now, if they are on the Java platform, that pretty much means they have to program in the Java language. That's silly. The Java platform has three parts: the language, the JVM, and the APIs. And of those, the language is the one that's most easily replaceable. So I think that just because you're on the Java platform, you shouldn't be restricted in your choice of languages; I want to empower the Java developers in the world with access to things like Rails, PHP, and so on and so forth. One good way to do that is through projects like JRuby, which is actually making pretty satisfactory progress. But there's another good reason to support JRuby, which is this: if you look at Moore's law, it's no longer making the CPUs faster, it's making them wider. We're shipping our Niagara chips, and IBM and AMD and Intel are all heading in the same direction, where instead of one, two, or four cores running insanely hot at 4 gigahertz, you're going to have things that have 48 cores running plausibly cool at one gigahertz and giving immensely more throughput. But to get the mileage out of that, you're going to have to warm up to concurrency; you're going to have to warm up to highly-threaded execution. Right now, I'd say that's a weak spot not just in Ruby but pretty much across the entire spectrum of dynamic languages, and it's totally a strong spot for Java. So it might be the case that you could take some existing applications in things like Ruby and, by running them on the JVM, get a lot more performance. We're not there yet and it's a little speculative, but it's certainly a very promising direction.
You can do it right now. You take JRuby and start IRB on the JVM, and the thing is, you have the whole universe of Java APIs, so you can type in Swing commands and have a window pop up on your screen and things just work. It really ought to be quite a bit like Ruby as it is now. The problem is that some substantial proportion of the Ruby libraries are in C, not in Ruby, and obviously there are issues in getting those running on the JVM; but to that extent, the whole pure-Ruby universe should run pretty smoothly once we get the infrastructure in place.
6. The Java community is interested in having all these communities running on the JVM, but how can we do that? How can we make it a strategic advantage for them to be running on the JVM, no matter what language they are working in? What are we offering them?
It depends: who are you offering it to? For the existing Java community it's obvious what you are offering: you're offering them Ruby; that's not hard to understand. So for the existing Ruby community, what would you offer them? There are two obvious things: one is this potentially better threading; the second is all those APIs. The universe of Java APIs is big; it's even bigger than CPAN, if you look at it closely. And a lot of them are good. I've done a lot of web programming: I've used the web APIs in Python, I've used a lot of APIs in Perl, I've used the web APIs in Java, and the web APIs in Java are just a lot better. I would like to program in Ruby but with the Java APIs, in certain circumstances.
7. So at the level of higher-level application contracts: you've seen enough of Rails to understand that there are Ruby-specific idioms and programming practices that are an essential part of what makes Rails programming so special. How does that translate onto other platforms once you have Rails running on JRuby?
I don't think there's anything about being on the Java platform that would get in the way of the Rails worldview, that I can see anyhow. And I totally agree that along with bringing Rails onto the platform, it's also the case that the Python people and the Java people and the .NET people are furiously trying to learn the cultural and worldview lessons from Rails and get that goodness into their own environments. But having said that, I don't really see a difference.
8. Getting into Rails programming is a clean break; it represents a very "out-there" kind of move. Do you think that Java programmers can make that move smoothly once they are able to use these idioms and programming practices on their own platform?
Yes. In fact, if you look at the history of the community of Ruby developers, a whole lot of them are Java escapees, so we've got proof that people can pick up the culture and so on. But I'm not arguing that Ruby should migrate to Java, or that Ruby development should necessarily be done on the JVM. I think that's an interesting option that's going to be useful in certain slots, but I'm not saying it's universally the future.
Absolutely. Take configuration: the whole thing about Rails is convention over configuration, and we try to avoid configuration to the maximal extent. I think you'll find broad agreement across the community that Java EE in particular had a design centered around very flexible configuration, and it's starting to become apparent that that's not a win. In many cases, trying to avoid configuration is a win, and that is a lesson everybody is trying to learn, including the Java EE people. The fact that the configuration files are in XML is totally irrelevant. The problem isn't XML; it's the assumption that to use it you basically have to invent your world for each new application, whereas Rails gives you a pre-packaged road where you change as little as possible and everything will run great, just great.
I've looked at Groovy and it's definitely cool. It has some nice things about it. I kind of like it. What it doesn't seem to have is a community, at this point in time. The last time I checked, it still didn't have a stable syntax. Excuse me, but I don't really see how you're going to really hit the big time when you don't have a stable syntax! Once you have a stable syntax, you might be able to develop community and accomplish new things. So, I certainly haven't written off Groovy, but I don't see it as a strategic player in the dynamic languages world right at this point.
That's a big question and nobody knows the answer. I was talking with Guido van Rossum at some point and he was saying: "When I write a computer program and I have a variable x in it, well, in my mind I know what its type is." So in fact there aren't really any untyped languages; both Ruby and Python are very strongly typed languages, they're just dynamically typed.
That's exactly right. The advantages of dynamic typing, in terms of avoiding silly bookkeeping code, being able to do the kind of introspection you can do, class re-jiggering, and all the things you can do in Ruby and so on, are huge. On the other hand, there are advantages to static typing too. One of them is that in a statically typed language like Java, writing IDEs is a lot easier, because the IDE knows what everything is and it gives you immensely more help in terms of refactoring. Look at things like IntelliJ, NetBeans, and Eclipse, which are way ahead of anything available for the dynamic languages. I tend to think that the barrier might be overcome in time. When you're writing greenfield code and you're typing a variable for the first time, the IDE cannot know anything about it. But most of the time, you're not. You can say: I've got a big Rails project with 120 different files; I would think that the IDE should be able to look at it and deduce that this variable is being used to hold these kinds of responders, and then react appropriately.
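Ruby itself makes the point easy to see: types live in the values, not the variables, and the "class re-jiggering" mentioned above is just a matter of re-opening a class at runtime. A minimal sketch in plain Ruby:

```ruby
# Ruby is dynamically but strongly typed: a variable can hold anything,
# yet mixing incompatible types raises an error instead of silently coercing.
x = 42           # x holds an Integer for now
x = "forty-two"  # and a String a moment later

begin
  "1" + 1        # strong typing: no implicit String/Integer coercion
rescue TypeError => e
  puts "raises #{e.class}"
end

# Introspection and class re-jiggering: re-open a core class at runtime.
class String
  def shout
    upcase + "!"
  end
end

puts "hello".shout                # => HELLO!
puts "hello".respond_to?(:shout)  # => true
```

No bookkeeping declarations were needed anywhere, but the `"1" + 1` failure shows the type information is still there and enforced at runtime.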
I think that 5 years from now the impact of Ruby might be bigger than the impact of Rails. Rails is cool, but there's a lot more in the world than web-centric applications, and while I think Rails is clearly the leader in that space at the moment, Ruby is such a well-designed, beautiful, pleasant language that it has the potential to play a major role in whatever comes next. But I'm not finished on static and dynamic typing yet, okay? The other big advantage of static typing is performance. Recent JVMs are just insanely, mind-bogglingly fast. And they do all kinds of sleazy tricks based totally on knowing everything. You know, when you do a method dispatch in Java, the JVM knows so much about where you're calling from and where you're calling to that there are no lookups or dispatch tables, just binary offsets: here's the address of the argument, and I can just use it without looking it up. Dynamic typing is wonderful, but every time you send a message in a dynamically typed environment, you have to ask: "Does this object have this method? And if it doesn't, does its parent?"
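That lookup is observable from Ruby itself: a method send walks the receiver's ancestor chain at call time, which is also why re-opening a class changes the behavior of objects that already exist. A small illustration (the class names here are invented for the example):

```ruby
class Animal
  def speak
    "..."
  end
end

class Dog < Animal
  # no speak defined here; a send falls through to Animal#speak
end

# This is the chain the interpreter walks on every dynamic dispatch:
p Dog.ancestors   # Dog, then Animal, then Object and friends

dog = Dog.new
p dog.speak       # found one step up the chain, on Animal

class Dog         # re-open the class at runtime
  def speak
    "woof"
  end
end

p dog.speak       # => "woof" -- same object, new lookup result
```

A static compiler could have bound `dog.speak` to a fixed offset once; the dynamic runtime has to be prepared for the answer to change between two calls on the same object.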
14. Speaking of performance, some Smalltalk advocates are keen on reminding Java people that they had just-in-time compilation and really fast virtual machines way before Java did. So does the future look good on that front for Ruby?
I would say that, just like the IDE barrier, there's the potential to overcome this and make the cost of dynamic typing, if not vanish, at least become much lower in a production environment. Everybody agrees that there are substantial benefits to dynamic typing, to the extent that we can decrease the cost of it. But right at the moment there are these two big advantages of static typing: IDE support and runtime performance. Let's not pretend they don't exist; they may be surmountable, but I certainly wouldn't go so far as to say that dynamic typing is entirely the future.
We may be. Right now, the people who build compilers, IDEs, and runtimes do kind of live on two incommensurable planets, so there's at least a substantial cultural divide. It may be the case that down the road it's a false dichotomy. It feels pretty real now, though.
I totally agree that XML has no typing built in. I think that attempts to establish typing via schemas have been troubled and often have had something of a negative return on investment. Some of the elaboration and stupidity that has been going on in the web services space is a wrong-headed attempt to assume that you can automatically process things based on declared types, and furthermore that you can do object-oriented subclassing and derivation on types, which seems unsupported by the evidence out there. In terms of XML and typing, there was another candidate for the universal data exchange format, and that was ASN.1, which is very little used these days; but when you got an ASN.1 stream, all you got was types. If you unpacked an ASN.1 stream, it would say, "Here's a 23-character string, and here's a 64-bit double-precision floating point." When you unpack an XML object, it says: "Here's something named label, here's something named price." So it would appear, on the evidence, that in terms of actual utility in the marketplace it is more important to know what something is called than what its actual data type is.
I think there is a plausible parallel there.
I put that up as a feature on my blog today, actually. It was such a great graphic that I featured it. So here's the problem: we have a radically heterogeneous computing environment. There are different operating systems, different languages, different databases, different computer architectures, and that's not going to go away. The IT profession has struggled for decades, literally decades, with how to build applications that work across this heterogeneous network and accomplish useful things, and by and large it has done a pretty bad job. CORBA was sort of a sad story. Microsoft DCOM was understood by only 8 people in the world. And then, all of a sudden, about 10-12 years ago, there was this application that worked across heterogeneous networks, had high performance, had extremely broad scaling, apparently ignored networking issues, and worked great: that was the World Wide Web. The world, not being stupid, said maybe there's something we can learn from that. The thing about the web is that if you look at it, it has no object models and it has no APIs. It's just protocols all the way down. Some of the protocols are loose and sloppy, like HTML, and some of them are extremely rigorous, like TCP/IP. But if you look at the stack, there are no APIs; there are protocols all the way down. I think the thing you take away from that is that this is the way to build heterogeneous network applications. Another thing we learned from the web is that simple message exchange patterns are better; I mean, HTTP has one message exchange pattern: I send you a message, you send me a message, and the conversation is over. And it turns out to have incredibly good characteristics and so on. Now, the other thing that came along around the same time was XML, and it provided a convenient lingua franca to put in the messages you're going to send back and forth.
The basic take-away is: let's adopt the architectural pattern of the web by specifying interfaces in terms of message exchange patterns, let's make those message exchange patterns simple, and let's try to make statelessness possible and easy, because that's on the true path to scaling. I think that idea has legs; it's really the only way forward. The fact is that 10 years from now there are still going to be Rails apps here and Java apps there, and they're going to have to talk to each other. The only way to do that is by sending messages back and forth. Somebody said, let's standardize that, and that led us down this insane trail to the disaster that is WS-*. If you look at WS-*, there are these huge universal schemas comprising thousands of pages of specifications, mostly cooked up in back rooms at IBM and Microsoft. Many of them are still unstable years into the project, and they are based on XML Schema and WSDL, which are two of the ugliest, most broken, and most irritating specifications in the history of the universe. I just totally don't believe you can build a generic, world-changing infrastructure for the whole software development ecosystem based on broken specifications at the bottom level. So those guys have gone off the rails! [LOL] But there's something to be saved: the whole notion of what David was talking about yesterday, a low-rent REST approach where you address things by URI, you send messages and you get messages back, and you have a large number of nouns and a small number of verbs in the system; that has legs.
19. The cynic would say that it is in the best interest of vendors to let this situation persist and let the WS-Death Star continue being built out, because of the perceived complexity, all the architecture that goes into it, and all the vendor benefits that can be claimed in order to support all these specifications. The longer this goes on, the more they can leverage that for marketing!
I absolutely agree. I'm extremely cynical, and I think that take is 100% correct. Specifically, I think Microsoft has a huge win here, because the actual messages going back and forth are so complicated that you can't ask people to understand them, so you just do everything in Visual Studio and it will make the problem go away. On the IBM side, the pitch is that this is so complicated your programmers can't handle it, so you have to bring in IBM Global Services to build the system. I'm cynical, and I totally believe that both of those strategies are in play. I think, however, that while these people are building their cloud castles in the air, the real programmers of the world are just going ahead and doing the right thing. Look at Amazon.com. They are doing multiple tens of millions of web-services transactions a day through their web services API; they offered a SOAP interface and they also offered a simple plain-old-XML interface, and 90% of the traffic is going through the simple XML interface. I think it's necessary to do what David is doing and what I'm doing: make funny faces, point fingers at those guys, laugh at them. But I don't think they're going to do too much damage.
I think the wider vision for Service Oriented Architecture is mostly bullshit, in the form in which it has been enunciated by the big vendors and by Gartner and so on. There's so much high-velocity arm-waving going on that I don't know what you mean. I totally understand how to set up a service where something listens on a well-known port, and I send messages in and I get messages back, and we agree on what the messages are, and I think there are some unrealized opportunities around building tooling to support programmers in doing that. But the larger vision of SOA? That and three bucks will get you a coffee.
21. Railsers, Restafarians, people that believe in REST architectures: there seems to be a growing community of people who think it's bullshit, who are cynical about vendor motives. You have a long history in the software field; you have seen things come and go. Does this movement have legs? Where do you see it going?
Yes. Amazon and Google are doing it today. I'm spending half of my time co-chairing the Atom working group. Atom is a data format that tries to clean up the RSS mess, but it's also a publishing protocol. It uses HTTP plus the absolute minimum amount of wrapping to make it super easy to create and update content on the web, a totally REST-style approach. I think this is going to change the world, and very quickly.
There are 9 incompatible versions of RSS out there, and unfortunately a lot of the people in the field don't like each other, so there's very little prospect that they're actually going to get together and unify them. That means all sorts of unnecessary irritation for software that has to deal with all these versions. There are also a lot of small inconsistencies with RSS: you can't use relative URIs, you can't use internationalized URIs, there's a lack of clarity about whether you can have one or multiple enclosures... There's a bunch of little irritating problems.
Atom fixes those problems. We now have half a decade of experience with RSS; we know where the problems are, so Atom looks very much like RSS. In fact, it is very close to RSS 2.0, but with the irritating pain points fixed and a clean, very well-engineered specification that is implementor-friendly.
I'm personally unconvinced by the semantic web vision. Tim Berners-Lee is a smart guy, and he's a friend of mine, and we've spent a lot of time talking about this. Those guys have been grinding away on semantic web theory for a long time now, since about 1997, and I'm unaware of any actual, interesting working software. The RDF data format has benefits that, as far as I can tell, are only theoretical. The Atom working group did actually consider whether we should do an RDF flavor and decided not to.
So there is the Atom data format, and then there is the Atom protocol. Right now in the blogging space there are things like the Blogger protocol and the MetaWeblog protocol, which are based on XML-RPC and are really bad, incomplete, and non-interoperable. The Atom publishing protocol is a very thin layer on top of HTTP, such that extremely dumb devices should be able to be primary authors of web content using this API. The idea is that once you have a device or a computer program that has the protocol in it, you should be able to talk to any web publishing service. Most of the interesting things on the web right now have a lot to do with people: posting, creating, adding things, and that's harder than it should be. Most blog authoring environments suck! They are really horrible, and if we can get the protocol in place to reduce the friction, maybe we can see some progress in making that easier.
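In code terms, that "very thin layer on top of HTTP" amounts to POSTing an Atom entry document to a collection URI. Here is a sketch using Ruby's standard library; the collection URL is a made-up example (a real one comes from the server's service document), and the request is only built here, not sent:

```ruby
require "net/http"
require "uri"

# Hypothetical collection URI; a real service advertises its collection
# in an Atom service document.
collection = URI("http://blog.example.com/collection/entries")

entry = <<~ATOM
  <?xml version="1.0" encoding="utf-8"?>
  <entry xmlns="http://www.w3.org/2005/Atom">
    <title>First Post</title>
    <author><name>Tim</name></author>
    <content type="text">Hello, world.</content>
  </entry>
ATOM

# Creating a member is just an HTTP POST of an Atom entry to the collection;
# a conforming server answers 201 Created plus a Location header naming
# the new resource, which you can then GET, PUT, or DELETE.
req = Net::HTTP::Post.new(collection)
req["Content-Type"] = "application/atom+xml;type=entry"
req.body = entry

puts req.method            # => POST
puts req["Content-Type"]
```

That is essentially the whole authoring interface: plain HTTP verbs against URIs, with Atom entries as the payload, which is why even a very dumb device can implement it.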
I don't know. Speaking as one of the people who cooked up XML: we thought we knew what XML was going to be used for, and we were really, really wrong. I bet you that Matz is kind of dumbfounded about what Ruby is being used for, too. I'm not going to be so silly as to make predictions about where the publishing protocol is going to be used, except that I'm pretty convinced it's going to become ubiquitous.
Well, that's a good question. I think that the vast majority of operations on the web right now are GETs, you know, information fetches, and a vanishingly small but important proportion are updates. To the extent that you can model your system in what I call the web style, where you have a lot of nouns, a small number of verbs, and you clearly distinguish between idempotent, safe operations and things that change state, you can really go a long way with that pure web-style, REST approach. If you really need deeply reliable, complex transactions with multi-phase commit and all that stuff, I'm not sure the Internet is a very good channel for doing that. I think you'd really like to do that kind of thing behind the firewall in an enterprise environment. If you're actually going to operate across a heterogeneous, large-scale network, it would really be much better if you could possibly do things in the web style, which is: "I'm going to send you a package of information with everything that's needed to accomplish this transaction, and you're going to come back to me and say whether that worked or not," which is reliable. Where things start to get weird is where you have to have intermediaries, multiple machines involved in executing some transaction. The WS-* stack does have a bunch of proposed, unstable, in-motion drafts for doing reliable transactions and so on across the network, but they are very highly unproven in practice. At the moment, if I were actually writing a high-volume transaction processing system, I would probably prefer to do it the old-fashioned way, by talking to Oracle or, you know, however you do it. Let's not kid ourselves that the WS-* guys are actually offering real solutions in the space of transaction processing; right now, today, it's all theory.
28. One of the biggest tenets of DHH and the Rails community is that frameworks are extracted. If we look at where WS-* has gone wrong, it would seem that maybe they don't get that; they just don't understand that the way to write useful working software is to extract it from working solutions. Is there a way that we can move forward with this extracted-frameworks mentality in the wider community, not just in Ruby?
Yeah, and this is the old syndrome Joel Spolsky named "architecture astronauts." There's a certain class of people who want to build big complicated systems from scratch, and it doesn't work! It's never worked! There is the famous Gall's Law, which states that whenever you find a complex system that works, it will be found to have evolved from a simple system that worked. I've done a lot of work in standards over the years, and a lot of very big, complex, ambitious standards have been built and then ignored. If you look at XML, which is one that caught on, the reason it worked was that we had ten years of experience with a prior standard called SGML. We found out over the course of history what parts worked, so we extracted the 10% that worked. Atom is the same thing: we've got years of experience now with RSS, and we're extracting the parts that work. I totally agree with the Rails community in that respect: you cannot build large, ambitious infrastructures on the basis of theory and expect good results.
Larry Wall, the inventor of Perl, often said that it's good for programmers to be lazy and impatient. And I'm lazy and impatient. I've always hated the traditional big-system model of software development where you go off and write a spec for 3 months, then design for 3 more months. Agile appealed to me instantly. I would say, looking back over the 20 years I've been doing this, that the two most significant developments in IT are Object-Orientation and Test-Driven Development, and I think that TDD is more important.
I agree, and I think that's a problem. I think the ones that are going to win are those that hit the 80/20 point. Hitting the 80/20 point is a very central concept. If you select 10-20 big IT technologies going back over the decades, some that worked and some that didn't, and you look at them and ask what the crucial success criteria were, what are the things a new technology has to have in order to go over the top and be important? You can say "architectural elegance, good implementation, the 80/20 point, investor backing, there's a whole ... of this as well." If you look at it carefully, it turns out that hitting the 80/20 point is the single most important success criterion for a new technology. Look at the ones that changed the world. SQL, for example: SQL is this big sprawling monster now, but when it came along it was lean and mean and ignored three quarters of the problem. Java is this big empire now, but it was a small language when it came along. The personal computer: when it came along, its operating system was a joke and its hardware architecture was a joke. The web! There had been hypertext systems before that, like Xanadu, Dexter, etc., and they had really grand architectural notions of what a hyperlink should be and the metadata it should carry. Tim Berners-Lee came along and said: "NO, NO, NO." If links break, well, that's it; that's architected into the system. Hitting the 80/20 point is crucially important, and our profession has not done a good job of learning that.
I'm impressed by the quality of the presentations. I think the Rails community is interesting because Rails is interesting; it's also provoking new work, like that talk by Dave Astels on BDD (Behavior-Driven Development). That was an astoundingly good talk. That guy has clearly thought more about testing and QA and issues like that than almost anybody. There's a lot of leading-edge work that happens to be coalescing around the Rails community, and I think that's good stuff.
Well, that's my job.