00:12:31 video length
Bio Paul Cowan is a Software Engineer working on the Resin application server at Caucho, focusing on dependency injection, concurrency, high-speed messaging, and distributed caching. Prior to joining Caucho, Paul worked at NAVTEQ, where he was a senior architect in the Traffic.com division. He has 11 years of experience with Java and application servers, starting with NetDynamics.
JavaOne is all about exploring Java and related technology. From special exhibitions to demos and partner offerings to community networking opportunities, you'll find an endless source of inspiration as you explore JavaOne 2011. JavaOne offers the best Java-focused content, training and networking.
I got started about 11 years ago. I was weaned on Java and started with NetDynamics, which some people may remember was one of the first JEE servers to come out. I’m primarily a backend software developer and have been doing threading, concurrency, and caching for the last few years. Recently at Caucho I’m mostly working on our health monitoring system, and I get involved in some of the CDI implementation or web servers - pretty much anything we need to work on at Caucho in terms of the Resin application server.
We see Resin as an elastic JEE application server layer. We are not a Platform-as-a-Service and we’re not a Software-as-a-Service; we’re a Platform-as-a-Service or Software-as-a-Service infrastructure vendor. We don’t sell the service or provide it to you; we just sell you the software and you build your Cloud on it.
We have what we call a Triad hub-and-spoke architecture; the Triad being three servers that are your primary servers that are in constant communication with each other. That forms the hub, and then n number of elastic spoke servers that you can add and remove, as needed, to handle the load.
What we found was that in most cases it ends up being easier to have these three primary servers with known IP addresses, without auto-discovery, because auto-discovery creates a lot of problems in EC2 environments. The hub architecture of the three primaries, which are responsible for caching the data and maintaining the knowledge of the architecture of the Cloud, eliminates a lot of issues that you get with a true peer-to-peer network.
Our messaging is based on what we call HMTP, which is our Hessian network protocol over the WebSocket standard. For example, with distributed sessions we will usually only update your session when the data changes and, even then, we’re hashing the session, and the Triad master will have a hash of the current data. The spoke servers will say, "Here is my hash. Do I have the latest data?" If the hashes match, then you don’t need to send very much data.
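The hash-comparison idea can be sketched in a few lines of Java. This is a hypothetical illustration, not Resin’s actual API: the class and method names are made up, and the digest choice (SHA-256) is an assumption.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

// Hypothetical sketch of hash-based session sync between a spoke and the Triad master.
public class SessionSync {
    // Digest the serialized session so servers can compare state cheaply.
    static byte[] hash(byte[] sessionData) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(sessionData);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    // The spoke sends its hash; the master replies with data only on mismatch.
    static byte[] fetchIfStale(byte[] spokeHash, byte[] masterHash, byte[] masterData) {
        if (Arrays.equals(spokeHash, masterHash)) {
            return null; // hashes match: the spoke already has the latest session
        }
        return masterData; // mismatch: ship the full session data
    }
}
```

Comparing a few dozen digest bytes instead of the whole serialized session is what keeps the replication traffic low when sessions rarely change.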
Resin is really simple and it’s primarily just a single JAR file with a configuration file. It’s the same configuration file for the Triad as it is for the web tier, any of the cluster tiers, the spokes. The three Triad servers are spun up and they are usually static on different machines, and the elastic servers are spun up just with a single command line. It’s really easy to bring them up and take them down as you need to.
It’s based on our HMTP messaging system, which I mentioned before. That forms the basis for our distributed cache, and our distributed versioning is based on Git. We have an internal version of the Git library, and when you push a WAR - an application - to one of the Triad servers, we use Git internally to push it out to all the other servers. Sessions in our Cloud are usually tied to the Git version of the application, so you can have a number of sessions running off one version of the application, push a new one, and Resin will keep the old version up until all its sessions are over; new sessions coming in will get the new version of the application.
Yes, that’s how you upgrade. Alternatively, you can bring a server down, put the application on it, and bring it back up. That’s an option; the Cloud is auto-sensing and will move the load around. The other alternative is to have a set of servers that you upgrade and then basically flick a switch on your load balancer and move your load to those servers.
Yes. It’s pretty low, because you only need to replicate it to the server that is handling that session. We’re using sticky sessions, with a lease. So the same session will go to the same server and it will push its session update to the Triad. Now, that’s configurable whether you want it to push after every update or if you want to push it after a time period. The hashing of the sessions and comparison with the master and the spoke servers is what cuts down on the network traffic.
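A minimal sketch of sticky-session routing with a lease, under assumptions: the class, the hash-based server pick, and the lease renewal rule are hypothetical, not how Resin’s load balancer is actually written.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a session is leased to one server; while the lease holds,
// every request for that session goes to the same server.
public class StickyRouter {
    private record Lease(String server, long expiresAt) {}

    private final Map<String, Lease> leases = new ConcurrentHashMap<>();
    private final String[] servers;
    private final long leaseMillis;

    StickyRouter(long leaseMillis, String... servers) {
        this.servers = servers;
        this.leaseMillis = leaseMillis;
    }

    // Route a session: renew the lease if it is still held, otherwise pick a server.
    String route(String sessionId, long now) {
        Lease lease = leases.get(sessionId);
        if (lease == null || lease.expiresAt() < now) {
            // expired or new session: assign a server (here by simple hash)
            String server = servers[Math.floorMod(sessionId.hashCode(), servers.length)];
            lease = new Lease(server, now + leaseMillis);
        } else {
            lease = new Lease(lease.server(), now + leaseMillis); // renew the lease
        }
        leases.put(sessionId, lease);
        return lease.server();
    }
}
```

Because the same server owns a session for the life of the lease, only that server needs to push session updates to the Triad, which is what keeps the replication cost low.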
Resin actually always runs as two processes, and the Watchdog is the secondary process that controls Resin. When you start up a Resin server, you are actually starting up the Watchdog, and the Watchdog starts up Resin for you. It’s a peer process, but it’s actually the parent. The Watchdog monitors the health of the Resin system and automatically restarts it if there is an issue. One of the advantages here, particularly in the Cloud environment, is that as you are spinning elastic servers up and down, you can sometimes lose track of your servers. The Watchdog system maintains the health of the servers, brings them back up, notices if there is an issue, and then reports on it. It’s really part of our health system: because it’s an external process it can detect when Resin is having issues, so it triggers health reports.
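The parent-process pattern can be sketched as a supervision loop. This is a generic illustration, not Resin’s Watchdog code: the class name, the `server.jar` placeholder, and the restart policy are assumptions.

```java
// Hypothetical sketch of a watchdog parent process: launch the server as a child,
// wait for it to die, and restart it unless it shut down cleanly.
public class Watchdog {
    // Decide whether a dead server process should be restarted.
    static boolean shouldRestart(int exitCode) {
        return exitCode != 0; // clean shutdown (0) means stop; anything else restarts
    }

    // Supervise: relaunch the command until it exits cleanly.
    static void supervise(String... command) throws Exception {
        while (true) {
            Process server = new ProcessBuilder(command).inheritIO().start();
            int exit = server.waitFor();      // block until the child process dies
            if (!shouldRestart(exit)) {
                return;                       // clean shutdown: stop watching
            }
            // a real watchdog would emit a health report here, not just a log line
            System.err.println("server exited with " + exit + ", restarting");
            Thread.sleep(2000);               // brief back-off before restarting
        }
    }

    public static void main(String[] args) throws Exception {
        supervise("java", "-jar", "server.jar"); // placeholder command
    }
}
```

Keeping the monitor outside the JVM it watches is the key design choice: the watchdog survives whatever kills the server, so it can both restart it and record what happened.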
Primarily we have two kinds of reports. A Watchdog report is triggered when the server restarts: in the 10 minutes (it’s configurable) leading up to a restart, we will produce a report from the data we were tracking at that time. The second report is similar. It’s produced as a PDF and contains a thread dump, a heap dump, a stack dump, and a JMX dump - we go through all of JMX and dump out the attributes and the values. If you want, we can do profiling; we have our own internal native profiling library that can profile for a period of time. We are also monitoring lots of attributes in your system, like CPU usage, memory usage, and file descriptors, and we graph that for you. We call it a snapshot report: a PDF containing a snapshot of your system at the time.
That’s right, we do. The health system is very configurable, and we can configure it to do profiling for any period of time in response to a certain issue that it notices.
It’s similar, and really in the market we’re positioned between Tomcat and WebLogic. We’re giving you more of an enterprise-quality application server - elastic, with a nice health system and a nice administration system - but much more lightweight. Our download is only 23 MB and we have about a 6-second startup time, so we’re well-positioned in the market to take advantage of customers that don’t need the heavyweight full JEE stack but want something a little more enterprise-quality than Tomcat.
Yes, JEE 6 Web Profile.
A lot of our health system, and the anomaly detection is a good example, rose organically out of our need to support customers. The anomaly detection feature came about as the result of a customer who was having an issue with thread spikes. We knew that something was happening at that time - we could see it in the graphs we were producing - but we couldn’t tell what; we needed a snapshot at that moment. The anomaly detection feature will monitor virtually any statistic you want in your JMX, the number of threads being an easy one to track. If it sees an unusual spike in the number of threads, for example, it will trigger one of our snapshot reports. That’s really invaluable for support, because we’re detecting that something unusual is happening at that time and getting a snapshot of the system that you can use for debugging.
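One common way to detect this kind of spike is to compare each new sample against the recent baseline. The sketch below is a generic illustration of that idea, not Resin’s detector: the class, the sliding-window size, and the standard-deviation threshold are all assumptions.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: flag a sample as a spike when it sits far above the
// mean of a sliding window of recent samples (e.g. a JMX thread count).
public class SpikeDetector {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private final double sigmas; // how many standard deviations count as "unusual"

    SpikeDetector(int windowSize, double sigmas) {
        this.windowSize = windowSize;
        this.sigmas = sigmas;
    }

    // Feed one sample; returns true when it spikes above the recent baseline.
    boolean sample(double value) {
        boolean spike = false;
        if (window.size() == windowSize) {
            double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0);
            double var = window.stream().mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0);
            double std = Math.sqrt(var);
            // guard against a zero std when the baseline is perfectly flat
            spike = value > mean + sigmas * Math.max(std, 1e-9);
            window.removeFirst(); // slide the window forward
        }
        window.addLast(value);
        return spike; // a real system would trigger a snapshot report here
    }
}
```

In a deployment like the one described, the detector would be fed from a JMX attribute poll, and a `true` result is what would kick off the snapshot report.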