
Interview with Tim Fox About Vert.x 3, the Original Reactive, Microservice Toolkit for the JVM


Vert.x is a reactive microservices toolkit for the JVM that provides an asynchronous, scalable, concurrent services development model. It supports polyglot language development with first-class support for JavaScript, Ruby, Groovy, Scala, and of course Java.

InfoQ caught up with Tim Fox, the creator of Vert.x and its lead architect, to get his thoughts on Vert.x in general and the upcoming Vert.x 3 release.

Tim explains how Vert.x compares to Java EE, Spring, and Akka, and why Vert.x is a good fit for microservices and reactive development.

InfoQ: What is Vert.x and why would someone pick it instead of a traditional Java stack like Servlets, Java EE, or Spring?

Tim: Vert.x is a toolkit for writing polyglot reactive applications on the JVM. Unlike the traditional stacks, it's been designed from day one with microservices in mind, and it's also been designed with scalability in mind, so it's almost completely non-blocking (of OS threads).

This is critical for many modern applications that need to handle a lot of concurrency - e.g. process a lot of messages or events, or handle a lot of connections.

Also, unlike the traditional Java stacks, Vert.x supports languages other than Java - e.g. JS, Ruby, and Groovy - so we don't force you to always use Java; you can use the best language for the job at hand or for the skill-set of your team.

Another important point is that Vert.x is a toolkit, not a container or "framework". That means you can use it within your existing application to give it the Vert.x super powers. For example you can use it within your Spring application, and many users do.

InfoQ: What are your thoughts on Spring? Spring Reactor? Spring Boot? Node.js?

Tim: I have a lot of respect for Spring. They have done an amazing job in creating a rich ecosystem, and there are components that do just about everything under the sun. For me, though, the main advantage of Vert.x is that many of the Spring APIs are blocking, and that's going to limit scalability; also, Spring is largely Java only, not polyglot like Vert.x.

But I'd rather not see Vert.x as a competitor to Spring. Vert.x is unlikely to ever contain as many bits and pieces as the Spring ecosystem. Let's not forget that Vert.x is just a library and you can use it along with Spring in the same application. What I hope to see over time is more components in the Spring ecosystem becoming non blocking so they can be more successfully used in scalable apps.

We're already seeing parts of the Spring ecosystem (like Project Reactor) taking a non blocking, event driven approach, so this is promising.

Note: Project Reactor from SpringSource/VMware came out right after Tim Fox left VMware and joined Red Hat. Project Reactor is a competitor to Vert.x and is similar in focus and style.

InfoQ: What are your thoughts on Java EE? What are your thoughts on application servers?

Tim: Java EE was originally designed with a very different development and deployment model to what is required for modern applications today. Java EE app servers were all about having a monolithic server which sat somewhere on the network and into which you deployed your application packaged as a jar or an ear. This is pretty much the opposite of a microservices model.

Moreover most Java EE APIs are inherently synchronous, so most Java EE app servers have to scale by adding thread pools, as so many things are blocking on I/O (remote JDBC calls, JTA calls, JNDI lookups; even JMS has a lot of synchronous parts). As we know, adding thread pools doesn't get you too far in terms of scalability. So really Java EE is crippled by design and is never going to be a good choice for applications that need a lot of concurrency.

Vert.x, in many ways, was a reaction against Java EE. To be fair, in the last few years there have been movements to make Java EE easier to use, and I see valiant efforts by some to repackage Java EE as a model that is suitable for reactive microservices. But to me this seems to be a huge uphill struggle - Java EE was never designed with that model in mind, and forcing it into that would require such deep changes in the Java EE APIs that you might as well throw it away and start again. Which is exactly what we did in creating Vert.x.

InfoQ: What are your thoughts on Scala and Akka?

Tim: I respect and appreciate the power of Scala, and those who can program in it. To me the major problem Scala has is that it's too hard and tries to do too many things, which means it will never be truly mainstream like Java. But if you have a super intelligent dev team who can handle it then Scala may be a great choice.

Akka is a great system, and I have a lot of respect for it too. In some areas it takes a similar approach to Vert.x - e.g. Vert.x has an "actor-like" approach to concurrency and we try to avoid shared mutable data. Also, both Vert.x and Akka interoperate using the new Reactive Streams "standard". Both Vert.x and Akka are riding the same Zeitgeist (if that makes sense!) to a certain extent.

I don't see Vert.x vs Akka as an "either-or" situation. I think it will be common to see installations that use both Vert.x and Akka happily talking to each other in a nice scalable reactive way.

InfoQ: How does Vert.x performance compare to Node.js performance?

Tim: Well, you can see that for yourself in the TechEmpower Benchmarks.

InfoQ: Given the direction of the industry towards microservices and reactive architectures, do you feel vindicated in the visionary direction you took with Vert.x?

Tim: I guess so. When I started Vert.x (or Node.x as it was originally called) back in 2011, it was very much a reaction against the complexity of app server based applications.

From day one Vert.x has always been about writing your code as self contained services, in whatever language you want and running them wherever you want without having to first have an "infrastructure" or "appserver" pre-deployed there.

We were certainly one of the first projects to really push a microservices model of application development and deployment, and it's great to see now that this is becoming very popular, so yes, I suppose I do feel vindicated.

There's another side to this though, and I guess it comes with the territory when an idea becomes more mainstream - and that's that everyone wants to jump on the meme and declare themselves as microservices. We see this now with various traditional platforms which were designed to be monolithic now zipping themselves up in a jar and adding a main class and declaring themselves as microservices, or "reactive" because they contain a few async APIs.

Other than microservices, a key feature of Vert.x has always been that it's non blocking. And that's all about being able to scale your application to deal with a lot of concurrency using a minimal number of threads. A lot of users are now realizing this is important.

Many modern apps are crunching a lot of data, processing a lot of messages and events, or handling a lot of connections; you just can't do this effectively with thread pools and blocking (OS thread) implementations. Event driven for scalability is a big part of "reactive", and it's great to see that reactive is also becoming mainstream - for the last two years running, reactive systems have won the JAX Innovation Awards (last year Vert.x, this year Akka).

A lot of users now get this, and non blocking now seems far more mainstream than it did some years back so I guess this is a vindication of the approach too.

InfoQ: What are the major differences between Vert.x 2 and Vert.x 3?

Tim: We've spent a lot of time in Vert.x 3 making things simpler to use.

In some ways Vert.x 2 was quite container-like, but in Vert.x 3 we've removed a lot of that and Vert.x 3 is truly embeddable. This is why we go to pains in the Vert.x 3 docs to say Vert.x is not a framework or a container.

We've also simplified the classloader model and have a simple flat model now by default (i.e. no extra classloaders). This fits much better into the world where you want to just write your microservice as a simple main class, use the parts you want, and go.

Vert.x 3 also has built in support for RxJava - we provide Rx-ified versions of all our APIs. So if you don't like a callback based approach (similar to the one you get in Node.js, for example), which can sometimes be hard to reason about, especially if you're trying to co-ordinate multiple streams of data, then you can use the Rx API, which allows you to combine and transform the streams using functional-style operations.
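
As an illustration of the difference in style, here is a minimal sketch using plain RxJava 1.x operators; the observable names and the zip logic are illustrative assumptions, not the actual Vert.x Rx-ified API (whose methods return Observables that plug into these same operators).

    import rx.Observable;

    public class RxStyleExample {
        public static void main(String[] args) {
            // Two asynchronous result streams. In a Vert.x application these would
            // come from the Rx-ified APIs (io.vertx.rxjava.*) rather than Observable.just.
            Observable<String> userName = Observable.just("alice");   // e.g. the result of a DB lookup
            Observable<Integer> orderCount = Observable.just(3);      // e.g. the result of an HTTP call

            // Combine the two streams with a functional operator instead of nesting callbacks.
            Observable.zip(userName, orderCount,
                    (name, count) -> name + " has " + count + " orders")
                .subscribe(System.out::println);
        }
    }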

We're also looking into an experimental new feature for Vert.x which allows you to write your application in a classic synchronous style, but where it doesn't actually block any OS thread. The idea is that you get the scalability advantages of not blocking OS threads without the callback hell of programming against asynchronous APIs - i.e., have your cake and eat it. We think this could be a killer feature, if we get it right.

Another really key feature in Vert.x 3 that we're very excited about is Vert.x-Web - this is a toolkit for writing modern web applications with Vert.x.

Vert.x-Web contains all the parts you need to make sophisticated modern, scalable, web applications, and of course you can use it from any of the languages that Vert.x supports. It contains all the things you'd expect - cookies and session handling, pluggable auth, templating, websockets, support for SockJS, content negotiation and many, many more features.

It's a great fit for whatever kind of web application you're writing, whether it's a 'traditional' server-rendered web application, an HTTP/REST microservice, or a client-rendered web application.
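
For a flavour of the API, here is a minimal sketch of a Vert.x-Web HTTP endpoint; it assumes the vertx-web dependency is on the classpath, and the route path and port are arbitrary choices for the example.

    import io.vertx.core.Vertx;
    import io.vertx.ext.web.Router;

    public class WebExample {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            Router router = Router.router(vertx);

            // A simple route; cookies, sessions, auth, templating and so on are added
            // as further handlers on the same router.
            router.get("/hello").handler(ctx ->
                ctx.response().putHeader("content-type", "text/plain").end("Hello from Vert.x-Web"));

            // The router is just a request handler for a plain Vert.x HTTP server.
            vertx.createHttpServer().requestHandler(router::accept).listen(8080);
        }
    }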

You can explore the new, work-in-progress Vert.x 3 website, which has a lot of information on the various parts.

InfoQ: What are your thoughts on the reactor pattern? How do you think Vert.x fits into this space?

Tim: Vert.x uses a variation on the reactor pattern which we call "multi-reactor". So instead of having just one event loop we have multiple event loops, but we make guarantees that any specific handler will always be invoked by the same event loop. This means you can write your code as single threaded (not having to worry about synchronized, volatile etc) but still have it scale easily.

So you get the benefits of the reactor model, but unlike pure reactor implementations (like Node.js), we scale more easily over the cores of your server without having to deploy many server instances.
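
As an illustrative sketch of the multi-reactor model, the following deploys several instances of one verticle so the work is spread over multiple event loops; the verticle, port, and instance count are assumptions for the example.

    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.DeploymentOptions;
    import io.vertx.core.Vertx;

    public class MultiReactorExample {

        // Each instance of this verticle is bound to a single event loop, and all of its
        // handlers are always invoked on that same event loop, so the code inside can be
        // written as if it were single threaded.
        public static class EchoVerticle extends AbstractVerticle {
            @Override
            public void start() {
                vertx.createHttpServer()
                    .requestHandler(req -> req.response().end("handled on " + Thread.currentThread().getName()))
                    .listen(8080);
            }
        }

        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            // Deploy several instances so connections are spread over multiple event loops (cores).
            vertx.deployVerticle(EchoVerticle.class.getName(),
                new DeploymentOptions().setInstances(4));
        }
    }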

InfoQ: How important do you think runtime metrics are to modern development? What does Vert.x 3 support to make gathering metrics easier?

Tim: Runtime metrics are very important so you know what is going on in Vert.x. Vert.x 3 provides a metrics SPI where you can plug in a provider to gather metrics for Vert.x. We have an out of the box metrics implementation that uses DropWizard Metrics and another one in the works that uses Hawkular.
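
A minimal sketch of enabling the DropWizard-based implementation when creating the Vertx instance; it assumes the vertx-dropwizard-metrics jar is on the classpath.

    import io.vertx.core.Vertx;
    import io.vertx.core.VertxOptions;
    import io.vertx.ext.dropwizard.DropwizardMetricsOptions;

    public class MetricsExample {
        public static void main(String[] args) {
            // Enable the DropWizard metrics SPI implementation when creating the Vertx instance.
            Vertx vertx = Vertx.vertx(new VertxOptions()
                .setMetricsOptions(new DropwizardMetricsOptions()
                    .setJmxEnabled(true)   // optionally expose the collected metrics over JMX
                    .setEnabled(true)));

            // From here on, event bus, HTTP server/client and pool metrics are gathered automatically.
        }
    }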

InfoQ: Performance: I started using Vert.x when I saw the TechEmpower Benchmarks and saw Vert.x was very dominant. It was very often the fastest, and if not the fastest in some tests, then it was in the top three. Lately, I don't see it competing in the benchmarks and I assume that is because of the focus on Vert.x 3. How is Vert.x 3 performance?

Tim: We withdrew the more recent results because they were testing against older versions of Vert.x so it wasn't really representative of the latest version, and we just didn't have the time or resources to keep the benchmarks up to date with our small permanent team.

Vert.x 3 has not been performance tuned yet, but once we have done this and get 3.0 out, we can spend some time bringing the benchmarks up to date and publishing the results.

InfoQ: Vert.x is polyglot. Which developers from which languages are the most annoying? What percentage of each language does the Vert.x community consist of? How big is the Vert.x community?

Tim: In my view every language community has both annoying developers and some very helpful and knowledgeable ones too.

How big is the Vert.x community? That's pretty hard to measure, but we have very active Google Groups, and we're one of the most starred Java projects on GitHub. We also have a lot of companies using us in production, both big names and smaller ones (you can see some of these on the website), although it's not easy for us to count them all as we don't (yet) offer any commercial support.

InfoQ: What was done in Vert.x 3 to make Vert.x development easier?

Tim: One thing we identified in Vert.x 2 was that we did some things in a very Vert.x-specific way that could seem quite alien and tricky to some developers - especially those coming from a "traditional" Java background, e.g. those used to Maven-based projects and packaging. For example we had our own module system with our own descriptor that you wouldn't find anywhere else. Also, Vert.x 2 had a somewhat complex classloader model which made things a bit tricky to run easily in IDEs.

In Vert.x 3 we decided to stop pushing against the grain and instead to do things in a way most Java devs would expect - so we dropped the module system and simplified the classloader model (by default it's a flat classloader model now). Now Vert.x components are packaged as standard jars that you can handle like any other dependency from Maven or Bintray. The flat classloader model has made things easier to run and debug in IDEs - just create a main and off you go. All of this has contributed to giving Vert.x 3 a much easier and simpler developer experience.
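
For example, a minimal sketch of the "just create a main and off you go" style; the class names and the periodic timer are arbitrary choices for illustration.

    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.Vertx;

    public class Main {

        public static class HelloVerticle extends AbstractVerticle {
            @Override
            public void start() {
                // A periodic timer, just to show the verticle doing something.
                vertx.setPeriodic(1000, id ->
                    System.out.println("tick on " + Thread.currentThread().getName()));
            }
        }

        public static void main(String[] args) {
            // No container, no module descriptor: Vert.x is just another jar on the classpath.
            Vertx vertx = Vertx.vertx();
            vertx.deployVerticle(new HelloVerticle());
        }
    }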

InfoQ: What sort of support does Vert.x have for MongoDB, MySQL and PostgreSQL? Why is async important for databases?

Tim: Vert.x 3 is just a library, so you can use it with any other Java library you like, including any database client. However most Java database clients tend to be blocking, so you have to be careful about not blocking a Vert.x event loop.

Vert.x 3 provides an asynchronous JDBC client which basically wraps the JDBC interface, calls it using a thread pool, and provides an async interface to the user, so the user doesn't have to worry about wrapping it themselves. Clearly the client is still blocking on internal calls to JDBC, but there's nothing much we can do about that as JDBC is inherently synchronous and calls will typically block on network I/O. In an ideal world Oracle will bring out an official asynchronous JDBC API and driver vendors will write non blocking versions of drivers, but the traditional RDBMS vendors seem slow on the uptake.
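
A minimal sketch of the callback style this gives you, assuming the vertx-jdbc-client dependency and an in-memory HSQLDB; the JDBC URL, driver class, and query are placeholders.

    import io.vertx.core.Vertx;
    import io.vertx.core.json.JsonObject;
    import io.vertx.ext.jdbc.JDBCClient;

    public class JdbcExample {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();

            // The client runs the blocking JDBC calls on an internal worker pool and
            // reports results back asynchronously, so the event loop is never blocked.
            JDBCClient client = JDBCClient.createShared(vertx, new JsonObject()
                .put("url", "jdbc:hsqldb:mem:test")
                .put("driver_class", "org.hsqldb.jdbcDriver"));

            client.getConnection(res -> {
                if (res.succeeded()) {
                    res.result().query("SELECT 1 FROM INFORMATION_SCHEMA.SYSTEM_USERS", qr -> {
                        if (qr.succeeded()) {
                            System.out.println("rows: " + qr.result().getNumRows());
                        }
                        res.result().close();
                    });
                } else {
                    res.cause().printStackTrace();
                }
            });
        }
    }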

The NoSQL vendors seem a lot quicker in understanding that non blocking is important; for example MongoDB have brought out a fully async client in Mongo 3.0, which we use in Vert.x 3. There's also a new Mongo Rx client which looks pretty cool.
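
Vert.x 3's MongoDB client sits on top of that async driver; here is a minimal sketch, assuming the vertx-mongo-client dependency and a local MongoDB, with arbitrary database and collection names.

    import io.vertx.core.Vertx;
    import io.vertx.core.json.JsonObject;
    import io.vertx.ext.mongo.MongoClient;

    public class MongoExample {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            MongoClient mongo = MongoClient.createShared(vertx,
                new JsonObject().put("db_name", "demo"));

            // Fully non blocking: the handler is called back on the event loop when the query completes.
            mongo.find("products", new JsonObject().put("type", "book"), res -> {
                if (res.succeeded()) {
                    res.result().forEach(doc -> System.out.println(doc.encodePrettily()));
                } else {
                    res.cause().printStackTrace();
                }
            });
        }
    }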

So there are a growing number of options for non blocking data access, and I expect the number to grow over time (maybe even the traditional SQL vendors will catch up!) as db vendors realise there is demand.

Why is non blocking database access important? Again it's all about scaling your application with a lot of concurrency.

Let's say your application needs to execute database queries against remote databases using the JDBC API. Bear in mind, this might not be one database; it could be 100 databases located on different servers. Let's say each query takes one second on average to return a result. And let's say you have a thread pool to execute these requests with a maximum size of 200 (which seems like a reasonable number).

Doing the maths, you can easily see that you can never do more than 200 requests per second. So that's the bottleneck in your system. The remote servers might easily be able to cope with more load, but because you have a max of 200 threads you can never go faster than that. Of course if your DB servers are truly maxed out already then using async on the client is not going to improve throughput! But in the case where they have spare capacity, it's the blocking model that's limiting your overall throughput.
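
Spelling that arithmetic out, with the illustrative numbers above:

    max throughput = pool size / average latency = 200 threads / 1 second per query = 200 queries per second

A non blocking client, by contrast, is limited by the capacity of the databases and the network rather than by the size of a thread pool.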

InfoQ: How important is Java 8 to Vert.x 3? Do you think lambda expressions make Vert.x more appealing than using anonymous inner classes?

Tim: Java 8 is very important to Vert.x 3. Probably the two most important features for us are lambdas and Nashorn. Lambdas make programming against event style APIs much nicer, and we use the Nashorn JavaScript engine in our JavaScript implementation.
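
For comparison, here is the same (hypothetical) Vert.x request handler written both ways; the ports and greeting text are arbitrary.

    import io.vertx.core.Handler;
    import io.vertx.core.Vertx;
    import io.vertx.core.http.HttpServerRequest;

    public class LambdaExample {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();

            // Pre-Java 8 style: the handler as an anonymous inner class.
            vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
                @Override
                public void handle(HttpServerRequest request) {
                    request.response().end("Hello from an inner class");
                }
            }).listen(8080);

            // Java 8 style: the same handler as a lambda.
            vertx.createHttpServer().requestHandler(request ->
                request.response().end("Hello from a lambda")).listen(8081);
        }
    }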


About the Interviewee

Tim Fox has been a software engineer for 18 years and has been very involved in the open source community for the last 8 years. Tim works at Red Hat and is the creator and project lead for Vert.x - the reactive, polyglot application platform. He has also worked on JBoss HornetQ, which is now core to WildFly (formerly known as JBoss Application Server), and on RabbitMQ at SpringSource.

About the Interviewer

Rick Hightower has been using Vert.x for a while. Rick built QBit, a REST/WebSocket/high-speed queuing microservice lib for Java, on top of Vert.x. He was originally interested in Vert.x due to its speed and the need to build a set of microservices for 100 million users on a small budget. He has used Vert.x with several clients to great effect. QBit is heavily inspired by Vert.x. Rick also wrote Boon, a high-speed JSON parser written in Java. Rick writes about reactive microservices quite a bit.

Community comments

  • Use cases?

    by Mac Noodle,

    Is this useful for standard "business applications" where we need data to be "reliable" and have things like "2 phase commit"? This is one reason I have not moved to something like MongoDB yet. Business applications typically must have data be "right" right away - not eventually or sometimes not.
