Adrian Colyer on AspectJ, tc Server and dm Server


1. My name is Ryan Slobojan and I'm here with Adrian Colyer, the CTO at SpringSource. Adrian, what's the current state of AspectJ development and what are the future plans for major releases?

So, AspectJ; yes! Obviously, the first thing to say is that, personally, my involvement with it is less than it used to be, so the credit for everything I'm about to tell you really belongs to a guy called Andy Clement, who worked with me for many years on the project. It's been a really interesting time for Andy and our team there, working on the languages and compilers. We have done quite a bunch of work recently in AspectJ, and in particular on the AspectJ Development Tools that sit on top of it. One of the big drivers for that, frankly, is that increasingly we're using AspectJ within our own products and our own projects.

One of the things that uses it extensively is actually Spring Roo, which is using AspectJ to help generate what we call intertype declarations that give you transparent persistence and things from JPA. So it's used very heavily in those environments, which means that a lot of people are getting AspectJ built into projects that previously didn't use it, and they don't even necessarily know they are getting it. There's been a lot of work going on in making sure that the model has really good coverage and is very efficient, particularly for intertype declarations, which historically were the less used feature of AspectJ. Compared to the pointcut and advice model, which is very well known, the intertype declaration side of AspectJ - very powerful! - has been less widely used, I would say, up to this point, so that's had a lot of attention. There has been some great work in terms of reducing load-time weaving overhead - the time it takes to do that - getting compile times down, etc. The performance numbers are looking really good now, so there's a bunch of maturity getting into the compiler and the tools, which in the last year have quietly come on a long way, so that's great!
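
To make that concrete, here is a minimal sketch of an intertype declaration; the Account class and the auditTrail/audit members are hypothetical names for illustration, not anything from Spring Roo itself. The aspect adds a field and a method to Account from outside the class, which is the same mechanism Roo uses to mix in its generated persistence members:

    // Hypothetical domain class; the aspect below adds members to it.
    class Account {
        private String owner;
    }

    // Minimal intertype declaration sketch: AuditSupport declares a new
    // field and a new method on Account. Once the aspect is woven in,
    // callers see audit(..) as an ordinary method on Account.
    public aspect AuditSupport {

        // New field on Account, visible only within this aspect.
        private StringBuilder Account.auditTrail = new StringBuilder();

        // New method on Account; inside it, 'this' is the Account instance.
        public void Account.audit(String event) {
            this.auditTrail.append(event).append('\n');
        }
    }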

One of the really interesting things we've been doing there is using what we learnt from the AspectJ compiler and language, and the way we had to extend both the JDT compiler and the JDT UI pieces that sit on top of it to give AJDT. We've been taking that same knowledge and looking at how to do the same in the Groovy world. One of the very interesting things that Andy's been doing is asking "How can we actually get the Groovy compiler to work really well inside Eclipse, based on all those same AspectJ ideas and principles?" That's turning out incredibly well. We have an incremental Groovy compiler running inside Eclipse, based on the JDT, that we'll be showing off at SpringOne this week, and it's generating a lot of excitement, using all the same techniques.

And in the same way that with AJDT we'd learnt how to use aspects to actually weave the JDT itself and get the right experience, we are reapplying that idea to the Groovy user interface as well. So there is a lot going on, a lot of innovation in that team, and lots of people really interested in what they are doing, just to see "Hey! If you want to do Scala, for example, can we learn from what you've done here, and can we generalize this to other languages?" So there's quite a flurry of innovation there!

Over the coming year or so, expect to see no major changes to the AspectJ language itself - there's no real need for that at this point. We are continuing work on performance and maturity, moving those forward, keeping up to date with the latest underlying JDT compiler levels, etc., and improving the AJDT experience where we can. A lot of time is also going into making the Groovy compiler, and the Groovy experience inside Eclipse built on it, as good as we possibly can. That's a lot of what that team is going to be doing over the next year or so.

2. SpringSource currently has two major server offerings - one is tc Server and one is dm Server. How are these two related?

Absolutely, great question! We started with dm Server - that's the server we had first - and, as I'm sure many of you are aware, that's our OSGi-based server platform. The thing we were really after there was twofold: yes, we built the server using OSGi, which gives it some advantages in terms of small footprint, adaptability, etc., but secondly and most importantly, I think, we actually exposed OSGi as a programming model for users to write their applications against and deploy onto the server. That has been the focus of the dm Server project; it's about saying "These Enterprise applications are getting pretty darn big, pretty complicated. How can we bring modularity into that in a structured way?"

We started with a project called Spring Dynamic Modules that gave a Spring-based programming model for using OSGi and dividing up an application into those modules - a great, principled way of doing it. Then we said "OK, what am I going to deploy this to?" Yes, perhaps to a raw OSGi service platform, but there was a lot of bleeding - shall we say - on the Spring OSGi mailing list as people really tried to make the Enterprise use cases in OSGi work. To fully enable that model, we built the dm Server and we made those cases super slick and easy.
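
To give a flavour of that programming model, here is a minimal Spring Dynamic Modules configuration sketch; the bean names and com.example interfaces are hypothetical. One bean is published into the OSGi service registry, and a service from another bundle is consumed as if it were a local bean:

    <!-- Minimal Spring Dynamic Modules sketch; the bean names and
         com.example interfaces are hypothetical. -->
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:osgi="http://www.springframework.org/schema/osgi"
           xsi:schemaLocation="
             http://www.springframework.org/schema/beans
             http://www.springframework.org/schema/beans/spring-beans.xsd
             http://www.springframework.org/schema/osgi
             http://www.springframework.org/schema/osgi/spring-osgi.xsd">

        <!-- An ordinary Spring bean, implemented in this bundle. -->
        <bean id="orderService" class="com.example.orders.OrderServiceImpl"/>

        <!-- Publish it into the OSGi service registry under its interface. -->
        <osgi:service ref="orderService"
                      interface="com.example.orders.OrderService"/>

        <!-- Consume a service published by some other bundle. -->
        <osgi:reference id="inventoryService"
                        interface="com.example.inventory.InventoryService"/>
    </beans>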

The dm Server is all about Enterprise Java application workloads, in a modular fashion, taking advantage of OSGi. That's great, and it hits a number of cases where people have very large teams coalescing on a single WAR file that is getting too big and out of control - there are lots of reasons why they want to move that way.

There's also - any time you look at this industry - a whole spectrum of people, from the leading-edge technology adopters, through the early majority, and on to the late majority. There is a whole other big section of the market who frankly are perfectly happy building WAR files. They like them. They want to deploy onto Tomcat. They know it, they trust it. They don't want to change that right now. But what they were saying is "We like Tomcat, we're using it, we'd like to move more of our workload onto Tomcat and simplify. The thing that's stopping us is basically that, when we try to scale up in a production environment, we haven't got the operational sophistication around it that we want."

And so with tc Server we said "OK, what we'll do is take Tomcat as you know it, without messing with it, basically" - and that's a big part of the value proposition of tc Server: it really is the Tomcat you know - "and we will put around it all the necessary things such that you can scale it up and still manage it, get the metrics, get the right diagnostics, etc." That's really been the focus. The thrust of tc Server is, "All right. You just want to do WAR files. You know that model. You like that deployment. You want to scale it in production." That's the tc Server offering.

The dm Server offering is for teams that say "I'm building something larger. It's more sophisticated. I've got big teams, big applications. I need to structure it differently." It's really for people who want to move beyond traditional WAR file packaging. You can deploy WAR files onto dm Server, but you get the real benefits of the platform as you start taking advantage of the OSGi features.

What you'll see us looking at and thinking about - and it's a natural question for us over the next year or so - is "How do we make a smooth, natural path for somebody who's on tc Server, getting some of the goodies around that, deploying a WAR file, and now that the team is growing and the project is growing, they want to incrementally start taking advantage of some of the modularity features of the dm Server - how do we make a really smooth road map for those people to move up the scale?" Those are some of the things we'll be looking at and thinking about.

3. What's new in dm Server 2.0?

dm Server 2.0 has been a long while in incubation now, so I'm excited that it is finally reaching the end of that phase. For those of you watching this, we've just put out Milestone 5, which is probably the last planned milestone - maybe there will be an M6, but we are really close to a release candidate now, so dm Server 2.0 is really getting there. A bunch of goodies have gone into it. We've learned a lot since the 1.0 release. One of the things I'm quite excited about is the way we've improved the provisioning system.

One of the key insights there is something we call plans: the ability to deploy to the server a plan file, which is a description of the artifacts and the versions or version ranges that you want installed in order to support some application. It is actually a big area of debate and interest in the OSGi community and in the Enterprise Expert Group: what is an application under OSGi? A straight OSGi service platform just knows about one flat bundle space. The notion that some group of bundles together comprises an application, and this other set some other application - interestingly, there is no structure for that in OSGi.
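
As a rough illustration of the idea, a plan file is essentially an XML list of artifacts with versions or version ranges. Treat this as an indicative sketch rather than the exact dm Server 2.0 schema, and the artifact names as hypothetical:

    <plan name="com.example.shop" version="1.0.0">
        <!-- Bundles to provision, with acceptable version ranges. -->
        <artifact type="bundle" name="com.example.shop.domain"
                  version="[1.0.0, 2.0.0)"/>
        <artifact type="bundle" name="com.example.shop.web"
                  version="[1.0.0, 2.0.0)"/>
        <!-- Non-bundle artifacts, such as a properties file, can be
             listed too (see the discussion below). -->
        <artifact type="properties" name="com.example.shop.config"
                  version="1.0.0"/>
    </plan>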

We have in the 1.0 product, and we carried it through to 2.0, a concept called a PAR file, which gives you that notion of an application as something you can package together, that has a lifecycle you can move it through together - you can install it, start it, stop it, and see it from the management perspective. That's a very important thing to bring in, and it provides a runtime scope as well, for services, for Hibernate, and all the rest of it. A lot of things need the concept of an application because, amongst other things, it's tied to the context class loader, and Enterprise libraries need that to make things work.
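
For comparison, a PAR is simply an archive of the application's bundles whose manifest identifies it as one application. A minimal sketch, assuming dm Server's Application-* manifest headers and using hypothetical names:

    Manifest-Version: 1.0
    Application-SymbolicName: com.example.shop
    Application-Version: 1.0.0
    Application-Name: Example Shop Application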

So we had that notion, we had the PAR file, and we were saying, "I want this notion of an application, but I don't want to have to package everything into one big archive of archives - essentially something like an EAR file, really. Can we get away from that?" And so we have this much more flexible model of the plan file, which lists what you need and can provision it in, and coupled to this is a more sophisticated provisioning system.

A plan can contain bundles, but also, for example, just straight properties files, which you can put in a repository and provision in. That's neat: if you provision a properties file into the server, it ends up as a dictionary in what's called the OSGi Configuration Admin service, from where, via the normal Spring property placeholder mechanisms, those values can be sucked out and injected into your application. What's neat about the Config Admin stuff is that, like the rest of OSGi, it has a dynamic model about it: there's a model for how, when you update these configuration values, everybody gets told about it, so you can change them over time. So that's kind of neat - you can deploy properties files.
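
To show what that looks like at the Configuration Admin level, here is a minimal plain-OSGi sketch - the PID and property key are hypothetical, and this is standard OSGi compendium API rather than anything dm Server-specific. Config Admin calls updated(..) with the current dictionary, and again whenever the configuration changes:

    import java.util.Dictionary;
    import java.util.Hashtable;

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.Constants;
    import org.osgi.service.cm.ConfigurationException;
    import org.osgi.service.cm.ManagedService;

    // Minimal Configuration Admin sketch: a ManagedService registered
    // under a hypothetical PID. Config Admin invokes updated(..) with the
    // current dictionary, and again each time the configuration changes -
    // the dynamic model described above.
    public class ConfigListenerActivator implements BundleActivator, ManagedService {

        public void start(BundleContext context) {
            Dictionary<String, Object> props = new Hashtable<String, Object>();
            props.put(Constants.SERVICE_PID, "com.example.shop.settings");
            context.registerService(ManagedService.class.getName(), this, props);
        }

        public void stop(BundleContext context) {
            // Registrations are cleaned up automatically when the bundle stops.
        }

        public void updated(Dictionary properties) throws ConfigurationException {
            if (properties == null) {
                return; // No configuration bound to our PID yet.
            }
            // A hypothetical key, e.g. from a provisioned properties file.
            Object timeout = properties.get("checkout.timeout");
            System.out.println("Reconfigured checkout.timeout = " + timeout);
        }
    }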

We are looking at other kinds of artifacts we can put in there - obviously you can deploy WAR files straight in. And beyond the kinds of artifacts we can provision, we've changed the way repositories actually work. In the 1.0 product you had a local repository, basically on disk; in the 2.0 line you can now have any combination you like of local and remote repositories, and you connect them up in a chain. This enables models where, for example, the developer has a local repository on their machine; maybe the project team you're working in sets up a shared repository that goes in the chain beyond the developer's one; maybe, as you go through into actual production, you have an approved repository of the libraries that have been through the organization's process and been put into a repository that the dm Server can provision from; even, all the way out, if you want to, the SpringSource Enterprise Bundle Repository running out in the Cloud. You can connect to that and provision from it as well, if you like.
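
Purely as a hypothetical sketch of the chaining idea (the actual property names in dm Server's repository configuration may well differ), such a chain could be expressed along these lines:

    # Hypothetical sketch only; dm Server's real configuration keys may
    # differ. Two repositories are defined, then ordered into a chain.
    local.type=watched
    local.watchDirectory=repository/usr

    team.type=remote
    team.uri=http://repo.example.com/team-repository

    # Search order: the developer's local repository first, then the team's.
    chain=local,team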

So, there are much more sophisticated ways of provisioning - of basically getting your workload into the server. The idea really is that, from the very kernel itself (which is another thing we've done - we've got a stricter model around the kernel of the dm Server, with the subsystems that provide the middleware characteristics on top), as you provision you say "These are the bundles, this is the application I want you to run", and from that dm Server can figure out which middleware capabilities and which other libraries it needs to suck in and provision to make it work. As you put more load onto the platform, it tries to minimize that footprint all the time - that's one of the things it's doing there. So: plans, repositories, properties files - all big features.

RFC 66 is another big thing we've been working on. So, alongside the release of the server itself, there's been a bunch of accompanying standards work. RFC 66 is the web container spec for OSGi.

It describes a standard model by which you can deploy web applications into an OSGi service platform, and we are writing the reference implementation for it. It uses an embedded Tomcat inside, which is quite neat - we've made Tomcat work in embedded form in OSGi, which is a very nice piece of work done there by a guy called Mark Thomas. So we have that, and we're moving towards a standard way of doing it. Related to that, in the web arena, something I'm pretty excited about - and this is the last key feature I'll highlight for now - is something called slices. Some people imagined you could already do this, and in fact you can't.
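
In that model, a web application is just an ordinary bundle whose manifest carries web-specific headers; the key one from the RFC 66 work is Web-ContextPath, which maps the bundle into the server's URL space. A minimal sketch with a hypothetical bundle name:

    Manifest-Version: 1.0
    Bundle-ManifestVersion: 2
    Bundle-SymbolicName: com.example.shop.web
    Bundle-Version: 1.0.0
    Web-ContextPath: /shop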

The interesting thing is that when we've got a big application, we want to break it up into pieces, we want to make it more modular, and the question is "What's the first way I go about doing it? What's the primary decomposition, the most interesting one?" What we were able to support in the 1.0 line - the prior state of the art in the OSGi world - was typically a horizontal decomposition: you would have the web layer in a bundle, beneath that you could have the service layer, and maybe you could partition that up, and you'd have some domain model bundles, etc. You could do all that, and that's fine, but it turns out the most interesting decomposition, and the one that best aligns with teams, is a vertical one, by business function. You want the slices of the application, end to end, to be the primary decomposition mechanism. That was actually gated on the fact that you couldn't easily compose a single web layer, sharing a single servlet context, from multiple bundles - there was no great way to do that.

That's really what the slices technology allows you to do: take vertical slices of your application, your end-to-end business functions, and decompose that way, including giving you a modular web layer. There's a lot of neat stuff gone into that model, but when you use it, it's one of those it-just-works things. It looks very natural. It looks like Spring MVC, composed together. It's dynamic: you can add and remove the web bundles, and they come and go from the menus and screens and the accompanying tag library stuff. That's a really interesting piece of work, and we will definitely be taking it further forward beyond 2.0 as well.

4. Can you describe the cloning feature, which was in early versions of 2.0, in a little more detail?

The first thing is the background problem that cloning was intended to solve. There are a number of issues you run into as you push further in your OSGi adventures, and one of them is something we described as pinning. It works as follows: it is entirely possible in an OSGi service platform to have multiple versions of many bundles installed concurrently. That is something it is designed to support out of the box, and it works beautifully.

It's still the case, though, that any one of those bundles can, at any one point in time, have only a single wiring. In other words, I could have multiple versions of some library, but anybody who depends on that library can only be wired to one version of it at a point in time. With bundles like the Spring Framework, which tend to link in a whole bunch of other third-party things (because that's the nature of what Spring does), what can happen is this pinning effect.

Take for example Spring 2.5.6: the first application wants to use Spring 2.5.6 with a given version of Hibernate (you know, 3.2 something), and that's fine - it's the first one in, so the Spring 2.5.6 bundle (there are several bundles, but I'm simplifying) gets wired to the Hibernate 3.2.6 bundle and all is good in the world. Along comes a second application that says "I want to use Spring 2.5.6 as well, but I want to use Hibernate 3.3 something." Now you've got a problem, because not only does the user application have to talk to Hibernate, but Spring does too, and 2.5.6 is already wired to a different version of Hibernate - and remember, it can't be wired to both at the same time.

You can get multiple versions in consistent class spaces where those graphs don't overlap. As soon as they overlap, you have this pinning issue and you can't support that, which reduces the flexibility in the combinations of libraries you can put into place. There are also some other issues around, for example, bundles that have statics in them, and whether you are going to get your own copy of the static or a single system-wide shared one, etc. What cloning was all about was saying "OK, when you've got one of these applications - a PAR file or, later, the plan files - and we detect that you are getting into a provisioning scenario where one of these pinning problems is going to occur, what we'll basically do is automatically try to break the problem for you. We'll recognize that the Spring 2.5.6 you want to use with Hibernate 3.3 is already pinned to an earlier version of Hibernate, we will transparently clone that Spring 2.5.6 bundle - making a whole new bundle that's free to be wired in a different way - wire it to 3.3, and off you go! We'll let you run with that."
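
In manifest terms the conflict looks roughly like this. The snippets below are illustrative and heavily simplified - one import per bundle, hypothetical application names, and the # lines are annotations for readability (real manifests have no comment syntax):

    # The (simplified) Spring bundle: its import range admits either
    # Hibernate line, but the framework can wire it to only ONE at a time.
    Bundle-SymbolicName: org.springframework.orm
    Bundle-Version: 2.5.6
    Import-Package: org.hibernate;version="[3.2,3.4)"

    # Application A arrives first and establishes the 3.2.x wiring:
    Bundle-SymbolicName: com.example.app-a
    Import-Package: org.hibernate;version="[3.2,3.3)",
     org.springframework.orm;version="2.5.6"

    # Application B needs 3.3.x through the SAME Spring bundle, which is
    # already pinned to 3.2.x - so it cannot be resolved consistently:
    Bundle-SymbolicName: com.example.app-b
    Import-Package: org.hibernate;version="[3.3,3.4)",
     org.springframework.orm;version="2.5.6"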

That was the automatic cloning model. There was also the ability to manually request a private clone of certain libraries. We went a long way down this road - we implemented the whole thing, we made it work, and when it worked it was fantastic, because you could just deploy an application without caring about any of the dependencies, and the whole system came up, and it was great. What we actually found was that, although it was fantastic when it worked, it was sometimes very hard to predict exactly what was going to happen when you deployed a certain set of bundles. You know: "What is going to get cloned here? What's going to get sucked in? What's the world I'm going to end up with?" When you start connecting your server up to larger and larger repositories, with more and more bundles in them, the potential for this to get quite interesting grows. The issue was that we just didn't feel the user's understanding of what they were going to end up with was easy to obtain.

We wanted a very simple, predictable, easy-to-understand model. So although we made it work, and we spent several man-months investing in this, and some pretty sophisticated stuff was done, in the end we said "No". When we really tried it, when we sat down with it: "This is not the right user model, because people aren't going to be able to debug, understand, and analyze what's going on to the extent we want them to be able to. We want a simple, predictable system."

And so we backed off from cloning and took another approach, which we call regions. This is based on some ongoing spec work in the OSGi Alliance around a feature called nested frameworks. What it basically allows you to do is have an OSGi service platform and, within that, launch a child OSGi service platform that has a principled way of sharing some things from the parent - sharing services, sharing types, etc. What we do at the moment is create two regions in the server. We have the kernel region - all of the core dm Server bundles are in there, and dm Server's own use of Spring is in there - and that is all completely isolated from anything users might want to do in the user region, which is the second region we have.

That means, for example, that the choices we make in the server about the version of Spring we are using, and how that's wired, have no impact at all on what you can do in your application. In dm Server 1.0 that wasn't the case: basically everything was bound to the same version of Spring, more or less. The region split solves that problem and gives you the freedom in user space to pick any wiring you like.

In dm Server 2.0 it is one kernel region and one user region. That means it's still possible, inside my user region space, to get back into one of these pinning conflicts, and if you need to deal with that, you can yourself include the libraries that would be in the pinning position inside what we call a scoped plan file. (You end up with a copy of them, basically, shielded off in the same way.) So you can get around it that way. That works; it's perhaps not as great as we'd like it to be longer term, so certainly beyond 2.0 we will be looking at "Can we allow multiple user regions - you can carve it up as you like - and have them running side by side, free from all those constraints?" One step at a time; we think we've got a nice, good, simple model.
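
Building on the earlier plan sketch, a scoped plan (again indicative rather than exact schema, with hypothetical names) is the shape this workaround takes: scoped="true" gives the listed artifacts, including a private copy of the library that would otherwise be pinned, their own shielded space:

    <plan name="com.example.app-b" version="1.0.0" scoped="true">
        <artifact type="bundle" name="com.example.app-b.web"
                  version="[1.0.0, 2.0.0)"/>
        <!-- The library in the pinning position travels inside the scope. -->
        <artifact type="bundle" name="org.springframework.orm"
                  version="2.5.6"/>
    </plan>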

It's easy to understand what's going on, it's very predictable, and it's very easy to diagnose. It solves the vast majority of the cases. We've got two-odd years of user feedback now, from Spring Dynamic Modules and then into the dm Server, so we understand what cases people are hitting day in and day out, and what we've done in 2.0 will solve the vast majority of them. And with the multi-region stuff, as we look at that in the future - the multiple user regions - we hope to catch the remaining corner cases that we aren't squaring off now. So that's where we're going, and why we're going that way.

5. It was pointed out that Scrum is now being used in dm Server development. How has that worked out and what was the previous process?

We've been using Scrum for quite some time now - I think we're currently in our 12th sprint, and the team is using three-week sprints, so we've got quite good experience. It's been working very well for us. One of the things that I think is important when you're a smaller company, moving fast, doing new things, is that even if you wanted to sit and sketch out a roadmap for 12 or 18 months - "this is where we're going to go" - well, stuff happens (you learn, the markets, …) and you have to be able to react and respond to that. So basically having a three-week planning cycle where we can tack, change direction, and feel our way has been very important.

I've actually been playing the product owner role - to some degree a success, I think. The teams are frustrated that half the time we close a sprint I'm somewhere else in the world, but certainly when I'm available it's been quite interesting. We have used this ability to slightly tack and change the features we're after on the majority, I would say, of the sprint turnarounds: "Actually, now we need to go this way a little bit; we've got some new data in." That's been incredibly valuable, and we are certainly learning to calibrate much better how much work we can take on and what velocity we can move at.

In that sense it gives us more flexibility, and in another sense we're getting more and more predictable about what we can achieve in a certain time. So that has worked out very well for us on that project, and its use is now spreading across other projects inside SpringSource as well. Some of the Hyperic team are using it; the Spring Insight stuff that we hear about at this show has in fact been developed that way, on very aggressive one-week sprints, and that's worked brilliantly as well, so its use is spreading. Prior to Scrum it was probably a much more traditional kind of approach: we would figure out roughly the set of milestones we wanted to go for and work out approximately the functionality we'd have in each milestone, but then we might go maybe two months before there was any kind of release out. That's just a bit too long to keep tight control over the project and retain flexibility. Scrum has been a good move for us. We're still learning; we're not there yet, and we're in fact right now experimenting with reducing the sprint size, bringing it down to two weeks - but I certainly think it's been a good move for us.

Feb 05, 2010
