00:21:10 video length
Bio: Stuart Williams is a Consulting Architect in the SpringSource Division of VMware. With over a decade of application development experience, and as an active contributor to open source projects at Apache and elsewhere, Stuart has practical, frontline knowledge about building enterprise-class applications, distributed systems and automating deployments.
Tech Mesh, the alternative programming conference, focuses on promoting useful non-mainstream technologies to the software industry. The underlying theme is "the right tool for the job", as opposed to automatically choosing the tool at hand. By bringing together users and inventors of different languages and technologies (new and old), speakers will get the opportunity to inspire, to share experience, and to increase insight.
Hi, I’m a Consulting Architect on the SpringSource vFabric consulting team at VMware, and I mostly work with customers on enterprise applications.
Vert.x is a new application framework, or platform, that takes the event-loop programming model and applies it to Java and a number of other programming languages that run on the JVM.
Yes, when Tim [Fox] founded the project it was called Node.x. Tim was inspired by the way that the Node programming model worked.
Yes, absolutely, for applications that use the event loop. The event loop is a way of doing non-blocking I/O (NIO in Java parlance), allowing your application to scale more effectively by offloading I/O activities to native functions in the operating system.
So the java.nio packages implement some of the key functionality, and we use the Netty framework. Netty builds utility classes and functions on top of NIO which make it much easier to work with that programming model, and we build on top of that to reduce the effort required to build applications even further.
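The event-loop model being described can be sketched in a few lines of plain Java. This is purely illustrative (no vert.x or Netty; the class and method names are invented for this sketch): one thread drains a queue of ready events and invokes the handler registered for each, which is why no handler may block.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal, illustrative event loop: one thread, a queue of ready events,
// and handlers that run to completion without blocking.
public class EventLoopSketch {
    private final Queue<Runnable> events = new ArrayDeque<>();

    // Queue a handler to run on the loop's single thread.
    public void dispatch(Runnable handler) {
        events.add(handler);
    }

    // Drain the queue; each handler must return quickly and never block.
    public void run() {
        Runnable next;
        while ((next = events.poll()) != null) {
            next.run();
        }
    }

    public static void main(String[] args) {
        EventLoopSketch loop = new EventLoopSketch();
        StringBuilder log = new StringBuilder();
        loop.dispatch(() -> log.append("connect;"));
        loop.dispatch(() -> log.append("data;"));
        loop.run();
        System.out.println(log); // handlers ran in order, on one thread
    }
}
```

Real frameworks add the crucial piece this sketch omits: the operating system (via NIO/Netty) is what puts "ready" events onto the queue.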
Yes, I’ve been using Groovy for a few years and I’ve tracked the development of that language, and I think the future of the JVM is very much a polyglot, multi-language future. It’s clear from the popularity of Scala, Groovy and other language implementations that Java developers and enterprise developers are taking this very seriously. Some of the functionality that has been missing from the Java language itself has been slow to come, like closures; we know that’s coming in Java 8, but some of that functionality is already available in other languages, so part of the trend is about being able to use new language features. But yes, absolutely it’s interesting; it was interesting for me, and it also makes vert.x a pretty unique platform in letting you program in multiple languages in the same application.
I don’t know; I really like Groovy, but the language I still use most in day-to-day life is probably Java. We are working on two new language implementations at the moment, Scala and Clojure. I’ve got most of the Scala API implemented so far; I haven’t really started with Clojure yet. It’s extensible in that regard, so we can add support for any language, really, any language on the JVM. I think Groovy offers some additional conciseness that makes it really easy to write code and be expressive really quickly.
The boilerplate. I don’t think you should underestimate the amount of overhead it adds to your development schedule to have to write getters and setters on every object when you need to build a data structure for some kind of Bean, and Groovy takes care of that. Closures are implemented across the whole language in a number of different places, and I use them to pass functional-style behaviours into arrays, objects and so on, and obviously that fits really well with what we do in vert.x with handlers.
So primarily it is the NIO.2 features in Java 7, which added some things to the file system API, and there is a slightly different implementation underneath. Other than that, vert.x could probably run on Java 6 if we took some of that functionality out, but Java 6 is end of life. When the project started it was a green-field project, so I think it just made sense to choose Java 7.
The vert.x core has a number of capabilities. We mentioned starting servers and clients, but there are also some additional components. There is an EventBus which automatically connects to any other vert.x instances that you start, so you get very flat, predictable horizontal scale, with a message-queue system built straight into the application. There is also a file system API which gives you the same asynchronous event-handling capabilities, in the same patterns that we use elsewhere to handle HTTP and other functions of the server, straight onto the file system. That means we can do non-blocking activity all the way through an application built with vert.x, which is obviously important because you must not block the event loop.
If you block the event loop then you stop the thread returning to do useful work elsewhere, and typically these event-loop, non-blocking applications use far fewer threads than a conventional application container. So we might have hundreds of threads running in a conventional container but maybe ten or twenty in a non-blocking container; Node, famously, is single-threaded in that regard.
Absolutely, and it doesn’t prevent you from using many more threads; it just means you can use them more efficiently. Rather than having CPU cores sitting and waiting for activities to happen somewhere else in the system, because they are blocked on a disk write or a database call, we can return that core and start using it for something useful.
The typical pattern is to implement a callback, and this is what the underlying structure in Java is doing: you select the selection keys, you get an event from a selector, and then you can work with the events. If it’s a socket, then you listen on a server socket, you get an event telling you there has been a connect, and you create a new socket channel. Because you place a request to be notified when something has happened (it’s not a queue), you can go off and do other work until that event arrives, and then you go back and pick the data up or respond in some way. For newer drivers where there is an async implementation we can use them straight away, but for things like JDBC there is no non-blocking implementation that I’m aware of, so we have to handle that slightly differently in vert.x. Rather than having one event loop, one thread pool engaged in that activity, we have a second pool of threads which we call the worker pool.
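The register-then-react pattern described here is visible directly in the JDK's NIO classes. A self-contained sketch, using a `Pipe` in place of a network socket so it needs no ports (illustrative only, not vert.x code): we register interest in a read event, do whatever we like until the selector reports the event, then pick the data up.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorSketch {
    // Register interest in OP_READ, then react when the selector fires.
    public static String readViaSelector() throws IOException {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        try (Selector selector = Selector.open()) {
            pipe.source().register(selector, SelectionKey.OP_READ);

            // Simulate I/O arriving from elsewhere in the system.
            pipe.sink().write(ByteBuffer.wrap("hi".getBytes()));

            selector.select(); // returns once a registered event is ready
            SelectionKey key = selector.selectedKeys().iterator().next();

            ByteBuffer buf = ByteBuffer.allocate(16);
            ((Pipe.SourceChannel) key.channel()).read(buf);
            buf.flip();
            return new String(buf.array(), 0, buf.limit());
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readViaSelector()); // prints "hi"
    }
}
```

With a real server socket the first event would be an accept rather than a read, as described above, but the select-and-dispatch shape is the same.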
The worker pool typically has more threads in it, but it still listens on the EventBus and it still has all the same capabilities. We separate them out so that you can make a decision to put your blocking activity in a pool with more threads available to it, in the knowledge that the rest of your application can remain async. So we have a pragmatic view of how to handle this, and it also means we have access to the rest of the Java ecosystem in terms of libraries, because a significant number of those are not non-blocking implementations.
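The two-pool split being described can be sketched with plain java.util.concurrent (the class and the `executeBlocking` method here are invented for illustration, not the vert.x API): blocking work runs on a larger worker pool, and its result is handed back to the single event-loop thread.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WorkerPoolSketch {
    // One event-loop thread plus a larger pool for blocking calls.
    private final ExecutorService eventLoop = Executors.newSingleThreadExecutor();
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Run a blocking task on the worker pool; complete the result back
    // on the event-loop thread so non-blocking code never waits.
    public <T> Future<T> executeBlocking(Callable<T> blocking) {
        CompletableFuture<T> result = new CompletableFuture<>();
        workers.submit(() -> {
            try {
                T value = blocking.call(); // e.g. a JDBC query
                eventLoop.submit(() -> result.complete(value));
            } catch (Exception e) {
                result.completeExceptionally(e);
            }
        });
        return result;
    }

    public void shutdown() {
        eventLoop.shutdown();
        workers.shutdown();
    }

    public static void main(String[] args) throws Exception {
        WorkerPoolSketch pools = new WorkerPoolSketch();
        Future<String> f = pools.executeBlocking(() -> {
            Thread.sleep(50); // stand-in for a blocking database call
            return "row";
        });
        System.out.println(f.get());
        pools.shutdown();
    }
}
```

The event loop stays responsive because the sleep (standing in for JDBC) only ever ties up a worker thread.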
Werner: Isn’t that generally a problem with these non-blocking systems, as with Node.js and other systems like Ruby with EventMachine, that everything has to be written to this non-blocking standard, otherwise you can’t benefit from it?
Yes, and it’s certainly the case that we would strongly encourage people to try to build their application like that if they are using vert.x, because of the benefits of using your resources more efficiently. But we recognize that until the rest of the world has finished writing non-blocking drivers for things, we do have to interact with them at some point.
So things like MongoDB have a non-blocking implementation, and I think Redis has one too. There are implementations with message queuing where we can do efficient asynchronous activity, and maybe use something like a message broker, RabbitMQ or something, to offload the blocking work to consumers sitting on the other side of the queue. So we’re not preventing anything from happening, and we’re not requiring you to be incredibly strict, because we recognize that the world is not there yet in that regard.
Yes, and I think there will probably always be plenty of implementations of I/O activity and services that never change. HTTP is not really designed for asynchronous activity, for example, and that is why we have things like WebSockets, where the protocol is launched off the back of an HTTP connection. So there will always be some, and I think we just have to work with that; it’s not necessarily a problem, it’s just a fact of life.
Yes, actually I’m really interested in Akka, and as I’ve been investigating Scala for the Scala implementation in vert.x, one of the things we looked at was how best to handle handlers. Do we do the same thing as we’ve done in Groovy, which is passing closures, or Scala functions, or can we use actors? I was advised at a conference recently by one of the Typesafe guys to look at maybe Akka integration instead of plain actors, so that’s one of the things I’m doing at the moment. I haven’t made enough progress to make any pronouncements about whether we’ll go ahead with it or not, but yes, absolutely, I’m really interested in that; I think it’s fascinating.
Werner: This is the right conference to find the right people, and some of the creators of Akka are actually here, so it’s a great place to chat. To get back to one aspect of vert.x: you mentioned the EventBus, and in the talk you mentioned Zookeeper. How do these things tie together?
So we use Hazelcast, not Zookeeper, though I think I did mention Zookeeper as well. We use Hazelcast as a distributed cache with event listeners on the cache mechanism, and this means we don’t have to implement membership ourselves. Implementing membership in a robust way is always a problem for applications; that is why Zookeeper and JGroups are so popular. Tim devised the way to use Hazelcast: the EventBus uses Hazelcast to keep track of which members are in the cluster and which members are listening on which addresses, and caches those locally, so it’s kind of a lazy lookup. But the EventBus itself is written using the vert.x NetServer and NetClient mechanism, so when messages need to be sent from one node to another, vert.x actually uses its own infrastructure to do that. It keeps track of membership using Hazelcast, but when data is exchanged it uses the minimal protocol that Tim built for communicating data between nodes.
One of the things that I thought was interesting about Rich Hickey’s keynote yesterday is that he was talking about applications and systems being oriented around composable services and messaging. That really rang a bell for me, because I think vert.x is very similar to that model: the Verticles and modules are the units of code, the services, that interact with each other over the EventBus and exchange information using messages. This means we can write quite distinct units of code, scale them independently of each other to manage performance issues and so on, and get the loose coupling that is so important for building applications that scale horizontally.
On the vert.x object there is an eventBus method that gives you access to the EventBus; it’s just like calling the method to initialize an HTTP server, so it’s already there. There are two mechanisms for sending messages: you can send a point-to-point message to an address, and if there is more than one listener on that address they round-robin to resolve where the message lands; or you can publish to an address, and then all listeners get a copy of the message. And of course you can listen as well: you can register an event handler on an address, and you can specify that you are only going to listen on a local address, or you can specify an address that could be clustered.
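The send/publish distinction can be sketched with a tiny in-memory bus (purely illustrative; the real vert.x EventBus API differs, typed messages and clustering included): send round-robins across the handlers on an address, while publish delivers a copy to every handler.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy in-memory event bus illustrating send vs. publish semantics.
public class MiniEventBus {
    private final Map<String, List<Consumer<String>>> handlers = new HashMap<>();
    private final Map<String, Integer> cursor = new HashMap<>();

    public void registerHandler(String address, Consumer<String> handler) {
        handlers.computeIfAbsent(address, a -> new ArrayList<>()).add(handler);
    }

    // Point-to-point: exactly one handler receives it, chosen round-robin.
    public void send(String address, String message) {
        List<Consumer<String>> list = handlers.getOrDefault(address, List.of());
        if (list.isEmpty()) return;
        int i = cursor.merge(address, 1, Integer::sum) % list.size();
        list.get(i).accept(message);
    }

    // Publish: every handler on the address receives a copy.
    public void publish(String address, String message) {
        handlers.getOrDefault(address, List.of()).forEach(h -> h.accept(message));
    }

    public static void main(String[] args) {
        MiniEventBus bus = new MiniEventBus();
        bus.registerHandler("news", m -> System.out.println("A got " + m));
        bus.registerHandler("news", m -> System.out.println("B got " + m));
        bus.send("news", "hello");     // exactly one of A or B prints
        bus.publish("news", "to all"); // both print
    }
}
```

The local-versus-clustered address choice discussed next is about scope: the same two operations, but with handlers confined to one JVM or spread across nodes.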
So, obviously, for a local address: if you were building a module that only communicated on the EventBus but transformed HTTP request data into a rendered view, sending back a template, perhaps Mustache or something, we could listen only on a local address. That would guarantee that all of the activity occurred in the same JVM, which means we don’t go across the network and don’t incur additional I/O costs for processing a template lookup. So there is a use-case differentiation between when you want to listen on a local address and when you want to listen on a potentially clustered address.
I think it’s fair to say that Tim started with the Node.js model, but since then it has evolved quite some way beyond that. Tim has great experience building messaging systems, brokers, queues and so on, and obviously this has influenced his architectural choices here in adding the EventBus, which I think is one of the things that really differentiates us from Node.
The website is http://vertx.io ; we have a very active mailing list (a Google Group) and a very active, enthusiastic community, and everybody is very welcome to come and check out the project.
Werner: Well thank you very much Stuart!