
Upcoming changes for the OSGi enterprise spec

   

1. Hi, I am here at EclipseCon 2013 with David Bosschaert. David is a principal engineer at Red Hat and one of the OSGi Enterprise Expert Group leads. So, David, Cloud computing is growing in popularity in the world outside of the OSGi space; how is OSGi evolving to meet the needs of future computing architectures?

There is some really interesting work going on in the Enterprise Expert Group at the moment around Cloud. We are working on a couple of RFCs, which are basically predecessors of OSGi specifications. One of them is looking at a REST API to manage OSGi frameworks; REST is the protocol of choice for Cloud usage, and this basically brings that protocol to OSGi management. Another very interesting specification that we are currently working on is called OSGi Cloud Ecosystems. It basically looks at how you can create an application that is built up of a number of Cloud nodes that work together to provide one logical function. The nodes can be made up of different OSGi frameworks that each fulfil a different purpose and together provide this one function. That gives you really interesting capabilities in terms of scaling particular components of your application independently of others, and it all works really nicely with the OSGi services model, which is already very well capable of dealing with the dynamic situations you will encounter in Cloud environments.

   

2. And does this build on top of the existing remote services architecture and administration?

Yes. It builds on top of the existing Remote Services and the existing OSGi services model, and it adds new features on top of that, primarily aimed at discovery and providing metadata. So you can find out what's there and the particular type of Cloud infrastructure it runs on, and if things die, you can find out about that. That information empowers you to deal with, and take advantage of, the scalability and dynamicity of the Cloud.

Alex: And so the idea would be that if one particular node or one particular service failed in one environment, then you could bring it up somewhere else.

Right. Or you could add more capacity if the load on a particular machine gets over certain thresholds, or if your Cloud fails, you can move the whole thing to another Cloud and your users can continue working.

   

3. Is this something that just Red Hat is pushing, or are there other people involved in the Enterprise Expert Group?

Red Hat is definitely one of the drivers, and Paremus is another driver that is highly active in it. We've got a number of invited researchers working on it, and the other companies who are active in the Expert Group, like IBM, TIBCO and Oracle, all provide feedback and input to this work as well.

Alex: And so, I guess that is one of the things about the Eclipse ecosystem and the OSGi ecosystem, which have a lot of shared roots: you have a lot of different companies working together on standardization that everyone can implement.

Right. So, this is the thing: OSGi standards are typically free to be implemented by anyone. The OSGi Alliance is a very nice organization in the sense that it's not dominated by a single company; we always try to do things democratically, we try to find consensus, and we've been very successful at that.

Alex: And I guess that helps avoid vendor lock-in for specific products.

That is the thing you get with specifications, and we pride ourselves on providing specifications that are really quite portable. There are many implementations of the OSGi specs, and it has been proven in the past that people can easily switch from one to another if they decide to do so. And there might be reasons for that outside of the pure API; maybe a certain implementation works better in a particular environment, for example.

   

4. One of the things that is happening outside of the OSGi space is really the increased use of annotations: the JMS 2 spec is largely an annotation-driven mechanism, and the Eclipse 4 platform is largely going annotation-based for its hooks. How is OSGi responding to that?

Right. So, we did add some annotation support to the Declarative Services specification in the last release. It's a little bit limited, but it's at least a start, and there are more annotations for Declarative Services currently in the pipeline. But probably more interestingly for people who really like to work with annotations in the Java EE world, we are currently working on a specification to bring CDI, which is purely annotation based, to OSGi.
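To make that concrete, here is a minimal sketch of what a component looks like with the standard Declarative Services annotations from org.osgi.service.component.annotations. The component and its LogService reference are purely illustrative, not something discussed in the interview.

    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;
    import org.osgi.service.log.LogService;

    // A minimal Declarative Services component. The annotations are processed at
    // build time to generate the component description; the SCR runtime then
    // creates the component and injects the LogService when one is available.
    @Component
    public class GreeterComponent {

        private LogService log;

        // Bind method for the LogService reference, called by the SCR runtime.
        @Reference
        void setLog(LogService log) {
            this.log = log;
        }

        @Activate
        void activate() {
            log.log(LogService.LOG_INFO, "Greeter component activated");
        }
    }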

   

5. And CDI stands for?

Contexts and Dependency Injection. It's a JCP specification, a Java EE specification, and it's highly active and popular in the Java EE world. People really like the annotations that come with it, where you can basically create beans, consume beans and wire the system up that way. What we are currently working on in the Enterprise Expert Group is an RFC where we are basically taking those annotations and bringing them to OSGi, so that people can use that same annotation-based programming model in an OSGi context.
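As a rough illustration of the programming model he describes, a CDI-style bean looks something like the sketch below; the annotations come from the standard javax.inject and javax.enterprise packages, while the bean and its dependency are hypothetical. How such beans get wired to the OSGi service registry is exactly what the RFC is defining.

    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;

    // Illustrative collaborator interface (would normally live in its own file).
    interface PaymentProcessor {
        void charge(String item);
    }

    // A CDI-style bean: the container creates it, manages its scope and injects
    // its dependencies based purely on annotations.
    @ApplicationScoped
    public class OrderService {

        @Inject
        private PaymentProcessor payments;

        public void placeOrder(String item) {
            payments.charge(item);
        }
    }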

Alex: I guess this is similar to things like Google Guice and some of the Spring stuff, with @Inject and @PostConstruct and so on.

Yes. And some of the annotations are even shared across those, because the @Inject annotation is a very general annotation in Java. This particular work is based on the CDI specification. There is also work going on around EJBs, which is based on the Java EE EJB specifications; that also supports annotations, and it will hopefully be supported in OSGi in the next release.

   

6. Do you think we will see the ability for OSGi frameworks to directly host EJBs in the future?

Yes. That is currently the plan and that is what we are working on; it's another RFC that is actively being developed at the moment.

   

7. We can already do this today with the Servlet specification, with HttpService I guess. But that has had some new features added to it recently to support new contexts; do you have the background on that?

Right. Today, or in the most recent specifications, there were two ways in which Servlets were supported. There was the HttpService, which is basically an OSGi-specific programming model for working with Servlets; it's actually quite popular and it's very simple. And there is another specification around web applications that allows people to deploy WAR files effectively in an OSGi context, so there are different ways of working. The HttpService specification, the OSGi-specific one, has been around for many years and is currently being updated. It was basically not up to date: it was based on an older version of the Servlet specification, and it also doesn't include some of the more modern thinking around how services are used in OSGi. So, what we are doing at the moment with HttpService, and Adobe is driving that work, is modernizing the Servlet API used, so that it works with more modern Servlet containers; you can now use Servlet filters, for example, error pages and things like that. But we are also adding a new programming model that is based on the whiteboard pattern. OSGi uses the whiteboard pattern quite extensively; it basically means that if you provide a particular capability, you register it as a service in the service registry. In this case it would mean that if you provide a Servlet, you register that Servlet in the OSGi service registry, and the HttpService implementation will find it there and make it available on your website. So, that's a new programming model, a new feature that is being worked on in the context of the HttpService specification.
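A minimal sketch of the whiteboard idea he describes: rather than calling HttpService.registerServlet, the bundle simply publishes its Servlet as a service and lets the Http implementation pick it up. The service property name used here follows the later Http Whiteboard convention and should be treated as illustrative, since the specification was still being worked on at the time of this interview.

    import java.io.IOException;
    import java.util.Hashtable;
    import javax.servlet.Servlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // Whiteboard-style registration: the servlet goes into the service registry
    // with a property describing the URL pattern it wants to serve.
    public class WhiteboardActivator implements BundleActivator {

        public void start(BundleContext context) {
            Hashtable<String, Object> props = new Hashtable<String, Object>();
            props.put("osgi.http.whiteboard.servlet.pattern", "/hello");  // illustrative property name

            context.registerService(Servlet.class, new HttpServlet() {
                @Override
                protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                        throws IOException {
                    resp.getWriter().println("Hello from the whiteboard");
                }
            }, props);
        }

        public void stop(BundleContext context) {
            // Services registered by this bundle are unregistered automatically on stop.
        }
    }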

   

8. Now, you mentioned REST as well, which is of course used by Servlets and so on, but I think you are implying that REST in this case is used for something that can monitor the state of the framework. Can you say something about that, and also about things like the data transfer objects, the way that one would interrogate remote OSGi VMs?

There have been many different approaches to managing and monitoring a framework in the past. There is JMX, there is DMT Admin, and now there is REST as a way of doing this. They all basically achieve the same thing, but through different means, whichever means is the most appropriate for that particular technology. JMX is great for Java, but doesn't work that well in the Cloud, because the remote JMX connector doesn't always go through all of the Cloud firewalls. So, in the context of Cloud it's much more appropriate to use REST; REST is the protocol of choice for Cloud deployments and for accessing them. So, we started on that work and it's quite far advanced; it's available in an early access draft if people want to look at it. But basically, the problem with all these different approaches is that if you want to make all of your specifications manageable through all of these management approaches, there is a combinatorial explosion of work that needs to be done, and we found that it was very hard to maintain all these management technologies, especially since the individual specs were also progressing.

So, what we basically did to address that was come up with a management-technology-neutral API, or mechanism, that provides metadata about the particular components, and we started with the framework. This is what we call a "data transfer object". You can obtain data transfer objects about various things in the framework; you can obtain one, for example, that represents a bundle. This data transfer object is an object that only contains data, there is no behavior; it's like a struct in C, if you want to compare it with something. But it basically means that you can make a very easy mapping to various management technologies, you can even potentially write a mechanical mapping, which means that keeping all these management technologies up to date becomes a much more manageable task. So, that's why we started on these data transfer objects. Currently the focus is on the core, but the idea is that once we have a clear idea of how these things are going to work, all the compendium and enterprise specifications are going to define their own data transfer objects, so that you can manage them through whichever technology supports these objects.
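To make the "struct in C" comparison concrete, a data transfer object in this sense is simply a class with public data fields and no behavior. The sketch below mirrors the shape of the bundle DTO being worked on at the time; take the exact field set as illustrative.

    // A data transfer object: public fields only, no methods, no behavior.
    // Because it is pure data, it maps trivially onto JSON for REST, onto JMX
    // open types, or onto a DMT subtree.
    public class BundleDTO {
        public long   id;
        public long   lastModified;
        public int    state;
        public String symbolicName;
        public String version;
    }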

Alex: And then I guess the way that those data transfer objects are rendered to support a REST-based interface or a JMX-based interface is really just translating data, rather than plugging in.

Yes, exactly, and it can be mechanical in some approaches. The REST spec is the first spec that actually starts off with these DTOs; it doesn't support any other approach, it just supports the DTO approach, and the intention is that the other management specs will be updated to support this DTO approach at some point in time as well.
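A hedged sketch of what such a mechanical mapping could look like: because a DTO exposes only public fields, a generic renderer can walk them reflectively and emit, say, JSON, without knowing anything about the specific DTO type. This is only an illustration of the idea, not code from any of the specifications.

    import java.lang.reflect.Field;

    // Generic, "mechanical" rendering of a DTO-style object to a JSON-like string
    // by reflecting over its public fields. Real mappings would also handle nesting,
    // arrays and escaping; this sketch only covers flat numeric and string fields.
    public class DtoRenderer {

        public static String toJson(Object dto) throws IllegalAccessException {
            StringBuilder sb = new StringBuilder("{");
            Field[] fields = dto.getClass().getFields();  // public fields only
            for (int i = 0; i < fields.length; i++) {
                Object value = fields[i].get(dto);
                sb.append('"').append(fields[i].getName()).append("\":");
                if (value instanceof Number) {
                    sb.append(value);
                } else {
                    sb.append('"').append(value).append('"');
                }
                if (i < fields.length - 1) {
                    sb.append(',');
                }
            }
            return sb.append('}').toString();
        }
    }

Applied to the BundleDTO sketch above, this would yield a flat JSON object containing the bundle's id, state, symbolic name and version.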

   

9. Now, at EclipseCon you've also been talking about other features that are coming in with the Enterprise specification, things like different scopes, service scopes and so on. Could you explain a little bit about the idea behind service scopes?

Right. So, in OSGi we like to do a lot of things through the service registry. The service registry is really a nice way of finding and publishing objects without knowing in advance what these objects are, so it has proven to be very, very useful over the years. But the number of interaction models it supports is a little bit limited, in the sense that you have two models today. One is that if you register a service object in the service registry, that is the object that people will find, effectively a singleton; although you can register multiple instances, each of them is basically shared across all users. The second model is where you actually give each consuming bundle a separate instance; that's what we call the service factory. So, those have been there for a long time, but they didn't fit very well with some of the requirements of the EJB work and the CDI work, where you, for example, have session-based objects: if you have a stateful session bean, you want state to be associated with that particular object. In that case you really want a new instance to be given to you every time you look one up in the service registry. So, that is the model that was not yet supported by the OSGi core, and there is an RFC currently underway to make it supported. It's going to be the EJB and CDI RFCs that are the prime users of that functionality, but it will be available in the core for anyone who wants to use it as well.
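For reference, the second model he mentions, one instance per consuming bundle, is expressed with a ServiceFactory roughly as shown below; the Counter service is made up for the example. The per-lookup scope being discussed in the RFC would add a third variant, where every lookup, not just every bundle, gets a fresh instance.

    import org.osgi.framework.Bundle;
    import org.osgi.framework.ServiceFactory;
    import org.osgi.framework.ServiceRegistration;

    // Illustrative service interface (would normally live in its own file).
    interface Counter {
        int next();
    }

    // The existing "one instance per consuming bundle" model: the framework calls
    // getService the first time each bundle uses the service, so every bundle
    // gets its own Counter with its own state.
    public class CounterServiceFactory implements ServiceFactory<Counter> {

        public Counter getService(Bundle bundle, ServiceRegistration<Counter> registration) {
            return new Counter() {
                private int count;
                public int next() {
                    return ++count;
                }
            };
        }

        public void ungetService(Bundle bundle, ServiceRegistration<Counter> registration,
                                 Counter service) {
            // Nothing to clean up in this sketch.
        }
    }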

   

10. One of the other things that is coming up in the Enterprise spec is portable contracts. What are they and why are they useful?

This is a fairly complicated topic, but let me try to explain it briefly. Basically, in OSGi we like to version things, and we like semantic versioning; it gives a lot of benefits if it's applied correctly, and we especially like to apply it to Java packages. There are rules associated with how you increase version numbers depending on how things are changed. This is really very useful and very powerful, but not everybody in the world uses semantic versioning, and we especially had issues when we wanted to write portable bundles that used packages that came from other specification entities, like for example the JCP. Just to give an example: if you want to write a bundle that consumes the Servlet API of the Servlet specification 3.0, there was no clear definition of what the package version of that API would be, and different application server vendors implemented that in different ways. There was no portable, uniform, agreed way of doing this, simply because the JCP has its own way of applying versions, and it's just a little bit different. So the portable contracts work basically aims at making it possible to write portable bundles that use these APIs that come from other specification entities, or other groups with a versioning mechanism that doesn't conform to semantic versioning, while still knowing how to import those with a particular version in a portable way in OSGi. So, that's really what it tries to address: it brings more portability to consumers of technologies that come from outside of OSGi in an OSGi context.
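To illustrate the shape this takes, the contract ends up being expressed as manifest headers rather than Java code. The sketch below follows the osgi.contract capability approach; the exact names and values are illustrative. The provider of the API, for example an application server, declares the contract alongside its package exports:

    Provide-Capability: osgi.contract;
     osgi.contract=JavaServlet; version:Version="3.0";
     uses:="javax.servlet,javax.servlet.http"
    Export-Package: javax.servlet, javax.servlet.http

A portable consuming bundle then imports the packages without a version and requires the contract instead, so it no longer depends on how any particular vendor versioned the packages:

    Import-Package: javax.servlet, javax.servlet.http
    Require-Capability: osgi.contract;
     filter:="(&(osgi.contract=JavaServlet)(version=3.0))"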

   

11. What is happening in the future, then, with the Enterprise OSGi spec? What's the timeline for the next release?

Right. So, from a high level we focus on three things: we focus on Cloud, we focus on Java EE integration and we focus on updating some existing specifications. I mentioned many of them already; another piece that is currently on the table and being updated is Blueprint, where there are some additions to be made, and that is currently being worked on. The next Enterprise release is planned for the beginning of 2014. There will definitely be some of this work in it, although it's unclear, it's not certain, whether all of the specifications currently being worked on are going to be in there. Certainly, HttpService I expect will be in there, CDI I hope will be in there; some other specifications may not be completely ready, and we might release those a year later. We typically have a release train going at the beginning of the year, and we have been doing releases every second year up until now, but it could happen that we do a release in 2014 and maybe another one in 2015.

Alex: David, thank you very much.

Thank you.

May 30, 2013
