
Eberhard Wolff on the Death of Java Application Servers


1. My name is Steffen Opel. I am here at GOTO Berlin 2014 with Eberhard Wolff. Eberhard, could you introduce yourself to the InfoQ audience?

Yes, sure. My name is Eberhard Wolff, I am a freelance consultant and trainer here in Germany. I am also a Java Champion and the leader of the Technology Advisory Board for adesso AG, an IT consultancy here in Germany.

   

2. You recently asserted that “Java application servers are dead”, which triggered a lot of discussion in the Java community. Can you give us a brief overview of what you mean here?

Right. It is important to point out that the main problems that I have with Java application servers are the deployment model and the monitoring model. So basically, what I am trying to say is that having an additional entity like an application server adds some complexity to deployment and monitoring. And with more recent approaches, where you put parts of the functionality of an application server into your application, it is much easier to deploy your application. So, if you look at it, even the script to automate the installation of a simple application server such as Tomcat is quite considerable, and if you have the opportunity to not do that and also have an easier deployment model, not having to package everything up as a WAR or an EAR file, then you should probably do that and use a different approach. So, I think the way into the future is to have a deployment where you have a JAR file and just start the JAR file as an application. And I see quite a few technologies doing that, like Dropwizard for example, or Vert.x, or the Play framework, and finally Spring Boot, if you are looking at the Spring ecosystem.
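
To make the "just run a JAR" model concrete, here is a minimal sketch in the Spring Boot style mentioned above; the package, class and endpoint names are illustrative and not taken from the interview:

    // Minimal sketch of the embedded-server model, assuming Spring Boot
    // is on the classpath; all names are illustrative.
    package com.example.demo;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @RestController
    public class DemoApplication {

        @RequestMapping("/hello")
        public String hello() {
            return "Hello from an embedded server";
        }

        public static void main(String[] args) {
            // Starts an embedded servlet container; no separately
            // installed application server and no WAR file is involved.
            SpringApplication.run(DemoApplication.class, args);
        }
    }

Packaged with the Spring Boot build plugin, this becomes a single executable JAR that is started with java -jar, which is exactly the deployment model described above.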

   

3. What are the drawbacks of deployment formats like WAR or EAR in comparison to that?

That is a good question. I think one of the problems is that they are actually proprietary. Proprietary in the sense that they only support Java applications. If you look at IT infrastructures out there, and also at your average Java application, this is just one part of the whole infrastructure. So usually there is also a web server involved, like Apache HTTPd, you would have database systems, you would have all kinds of firewalling software, log systems and all these kinds of things, and the way to install those is usually by some kind of deployment automation. So, you would have a package manager, for example, if you are in the Linux world, you would have tools like Chef or Puppet, and the WAR files and the EAR files are just for Java applications, so they cannot be used for anything else. In some regards, there are some really huge drawbacks from that. So, for example, it is not possible to say: “This is a WAR file that runs only on this application server”, even though it is often the case that the application is not really portable, but you cannot express that. You can also not express dependencies between this WAR file and the rest of the IT infrastructure – so, for example, which database is used and so on. So if you want to install all of your system, including the database, including the web server, including everything else, you would need to look at different ways anyway, so the question is: “Why wouldn’t you use this approach, like a package manager, for example? Why wouldn’t you use that to also deploy your Java application?” Then it is much easier to have a large JAR file that brings all the infrastructure with it than to have one package for the application server, some way of tweaking the configuration of that application server, and then having again a WAR file or an EAR file that you would install on that application server. So that is basically the problem.

   

4. The dependencies are not easily manageable – it is kind of a cyclic dependency. Is that what you are saying?

It is also about using the standard tools. So, if you have an Ops environment and they do more than just Java applications, they are probably used to some kind of deb files or RPMs for Linux, but maybe they do not even know what to do with WAR files and EAR files and instead of educating them, it is probably better to just give them what they want.

   

5. One of your observations is that production deployments in application servers tend to be just one app per app server. Is this caused by the problems that you laid out, or does it have other causes as well?

One of the reasons why I think this is somewhat commonplace is the better isolation that you can get that way. So if you have, for example, a batch application and a web application running on the same application server, there is a problem, because as soon as the batch runs it will eat up all the resources and you cannot isolate that resource consumption from the web application. Also, there are other problems. So, if one application on the application server fills up the file system by writing huge amounts of log information, it will crash the whole application server and all of the applications. So they are really not isolated. So if you want an environment where you really separate those applications and make them independent from one another, so that crashing one does not mean that the other one has crashed, you will have just one application on the application server and you will separate them using operating system processes or even virtualization, virtual machines. So that is one of the reasons. Having said that, of course, if you do not have these requirements, it is perfectly possible to deploy multiple applications on one application server, and that might be a sensible approach if you do not have these requirements in terms of fault tolerance and availability. Also, to sort of conclude, if you have a highly scalable application, you will even have multiple application server instances in a cluster providing one application, to do the scaling. So in that scenario, it is not just one application per application server, it is actually one application per N instances of your app server running in a cluster.

   

6. The app server provides certain infrastructure capabilities like monitoring the apps – that is something people perceive as an important aspect. Don’t we lose that if we drop the application server?

That is again a very good question. I think one of the value propositions of the application servers is usually that it is a platform for professional operations. So for deployment – and we already spoke about that subject – what you are asking about now is monitoring. So, first of all, let me observe that monitoring non-Java applications is actually possible and there is a tool stack to do that. So, there are tools like Graphite, Nagios or Icinga that are used nowadays to monitor applications, and there are also elaborate infrastructures to deal with log files, like the ELK stack for example, or Splunk, if you are looking at commercial solutions. So, generally speaking, to monitor an application, at least outside the Java world, no application server is needed. So, what you could do is try to use that approach also for Java applications. So you could have an interface for your Java application to work with Graphite, Metrics or these other tools, or you could provide through HTTP some JSON information that gives you an idea about the status the application is in. And again, this is something that is sort of more standards compliant, in the sense that it is the way that other people outside the Java world also do the monitoring of their applications, and it fits into the standard tool stacks that you have there. Of course, application servers do provide those functionalities, but other approaches such as Dropwizard or Spring Boot also support these kinds of monitoring in an application that you would deliver as a JAR file.
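
As an illustration of the "JSON status over HTTP" idea, here is a hedged sketch that uses only the JDK's built-in HttpServer; a real project would more likely rely on Dropwizard Metrics or Spring Boot's monitoring endpoints, and the reported values are made up for the example:

    // Sketch of exposing application status as JSON over HTTP using the
    // JDK's com.sun.net.httpserver package; values are illustrative.
    import com.sun.net.httpserver.HttpServer;

    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class HealthEndpoint {

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
            server.createContext("/health", exchange -> {
                // Hand-rolled JSON keeps the sketch dependency-free.
                String body = "{\"status\":\"UP\",\"freeMemoryBytes\":"
                        + Runtime.getRuntime().freeMemory() + "}";
                byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, bytes.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(bytes);
                }
            });
            server.start();
            // A monitoring system such as Nagios or Icinga can now poll
            // http://localhost:8081/health like any other HTTP check.
        }
    }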

   

7. So, is all I need to do now just to drop the application server and deploy the app in a different environment, or does this also require, or at least suggest, refactoring the architecture of the applications, for example towards microservices?

My point is that the deployment and the monitoring model is something that you need to consider or reconsider. If you are using a microservices approach, it is probably even more interesting to not use application servers because this added complexity is just there in so many more places because there are many more small services. But, there is no reason why you could not do a monolithic deployment using this approach. So, it is perfectly fine to use it basically for any kind of application, not just microservices.

   

8. Applications do not need to change, but they might eventually. What are the alternative technologies you see emerging to replace the application server concept?

Generally speaking, those are technologies where you put your application in and the result is a JAR file that you can run on your Java VM, including monitoring, including the deployment – which is quite easy because you just need to copy over the JAR file – and including also some means to do HTTP, for example. As I said, some technologies that do that are Spring Boot for the Spring ecosystem, and there is Dropwizard. Dropwizard is somewhat interesting because they are using JAX-RS for JSON REST web services, so it is actually a part of the Java EE standard that is used here. And frameworks like Play or Vert.x also support these kinds of approaches. As we are speaking, I am not aware of a Java EE implementation that would support this model, but it would be possible, because it is just a different way of deployment, it is not about the APIs or the features that are offered. So it is just a different way of monitoring, a different way of deployment. It is not about the APIs or the programming model that is used. And that is also why you do not need to re-architect your application, if you do not want to.
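
To illustrate the JAX-RS point, here is a hedged sketch of the kind of resource class a Dropwizard application registers; the resource name, path and payload are invented for the example:

    // Sketch of a JAX-RS resource in the style Dropwizard builds on;
    // names and payload are illustrative.
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/greetings")
    @Produces(MediaType.APPLICATION_JSON)
    public class GreetingResource {

        // With Jackson on the classpath, as in a typical Dropwizard
        // setup, the returned object is serialized to JSON automatically.
        @GET
        public Greeting get() {
            return new Greeting("hello");
        }

        public static class Greeting {
            public final String message;

            public Greeting(String message) {
                this.message = message;
            }
        }
    }

The annotations themselves come from the Java EE standard, which is the point made above: the programming model stays familiar even though there is no application server deployment.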

   

9. Do you think this is simply a technology shift or does it also require cultural change within the organizations, for example towards continuous delivery? Is this a requirement for dropping the app server model?

I do not think so. If you have simpler deployment, that is beneficial for continuous delivery, because you do more deployments and it is better if you have a simpler tool stack. There is one catch, however: if you are used to using an application server, there is sort of a transition cost, because the model in which you operate your application is obviously changing. So, if you already have application servers in place and you are operating those, my recommendation is not to just throw them away because they supposedly do not make any sense. My recommendation would be to look at those tools and see whether they fit, and then to figure out how much effort you need to invest to make that migration. And the migration will not just be about the development, it will also be about operations. So if you are fine with your application servers as you are using them right now, or if you estimate that the cost of transitioning to this model is really huge, then you should probably stick to the model that you already have. So, I think that is an important thing: it is not just about whether it is the better technology. It is also about the cost of adopting that technology and making it work. In this particular case, it changes the processes and operations quite considerably. So, the question is whether you want to do that or not.

Steffen: You say there are two driving forces for considering this: one is the pain with operating application servers, so if it gets too high you would consider switching, and the other might be to adopt more agile processes or different language stacks or something like that.

Yes. So it is easier to do continuous delivery, it is easier to do microservices, and it is also just easier for developers, because they can just run the application straight out of the IDE and debug into it. It is also easier to test, because you do not need to start your application server – you can just do it inside your test: start the application and then have Selenium tests, for example, running in the same JVM quite easily. So, all this becomes less complex, so I would argue that, in terms of complexity, this approach is much easier and, for that reason, it is something that you should definitely consider. But, as I said, there is some cost involved here.
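
A hedged sketch of what testing in the same JVM can look like, reusing the hypothetical DemoApplication from the earlier Spring Boot sketch together with plain JUnit 4 and the JDK's HttpURLConnection; port 8080 is Spring Boot's default:

    // Integration-style test that boots the application inside the test
    // JVM; no separately installed application server is required.
    package com.example.demo;

    import static org.junit.Assert.assertEquals;

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;
    import org.springframework.boot.SpringApplication;
    import org.springframework.context.ConfigurableApplicationContext;

    public class DemoApplicationTest {

        private static ConfigurableApplicationContext context;

        @BeforeClass
        public static void startApplication() {
            // Start the whole application, embedded server included.
            context = SpringApplication.run(DemoApplication.class);
        }

        @AfterClass
        public static void stopApplication() {
            context.close();
        }

        @Test
        public void helloEndpointResponds() throws Exception {
            HttpURLConnection connection = (HttpURLConnection)
                    new URL("http://localhost:8080/hello").openConnection();
            assertEquals(200, connection.getResponseCode());
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()))) {
                assertEquals("Hello from an embedded server", reader.readLine());
            }
        }
    }

The same approach works with a Selenium WebDriver pointed at that local URL instead of the raw HttpURLConnection.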

   

10. [...] Do you also see that in enterprises already, do you see the demise of the app server already on the horizon in enterprise scenarios?

Steffen's full question: That kind of sounds like something that, as usual, has probably emerged from the startup culture; it sounds like a disruptive move in architectural patterns that smaller companies can adopt more easily because they can adapt to changes faster. Do you also see that in enterprises already, do you see the demise of the app server already on the horizon in enterprise scenarios?

I do not see a technical reason why enterprises would not adopt that. I think quite a few are looking at that, and if you look at the Spring stack – that is actually quite huge in enterprises. Having said that, the decision towards using an application server is sometimes a strategic decision that is made at a management level. And I am not sure whether, as we are speaking right now, you will get buy-in from the management level to shift away from that technology. That is not a technical reason. It is just a reason that is about choosing a strategic platform, standardization and all these kinds of things that are there in the enterprise, and those are obviously hurdles that you need to overcome to do this. And I think this will be one of the reasons why it is not going to be adopted. So, back when I was doing a lot of Spring, it was actually exactly this that worked in favor of the adoption of Spring, because you could easily combine Spring with those application servers and still get a very, very nice programming model. This is different, because it basically says you need to shift away from the application server, from your strategic platform, and this is of course a much harder move than just using a different programming model.

   

11. You would not recommend adopting this for new projects by default? You would rather still evaluate the value proposition against the strategic platform of the enterprise?

In my opinion, from a technical perspective, there is little reason why you would not do that. What I am trying to say is that the shift in operations procedures, and also the strategic alignment that some companies have, will be an issue here. And you have to be aware of that. I mean, if your project ends up producing some JAR file and you present it to Operations and Operations says: “Well, this is nice, but we need to run it on an application server”, you do have an issue. And the question is what you do then.

Great. Thank you for taking the time to explain this.

Thank you for having me.

Jan 18, 2015
