Why Do We Need Distributed OSGi?

Posted by Eric Newcomer on Feb 23, 2009

As we reach a key milestone in the Distributed OSGi project, it seems like a good time to review what’s been done so far, to identify the remaining steps, and to talk about why we are doing this in the first place.

In November we published an update to the early release drafts of the design documents (Requests for Comment or RFCs in OSGi terminology) for the upcoming 4.2 release of the OSGi Specification. This month we released at Apache CXF the reference implementation source code for one of the important new designs for this release, RFC 119, Distributed OSGi.

The Distributed OSGi project was started as part of the next release of the OSGi Specification because the current release has been successful in the embedded space and is starting to be adopted in the enterprise space. For example, the OSGi framework is behind the Eclipse plug-in system, and all application server vendors and most ESB vendors have endorsed OSGi as well.

The OSGi Alliance hosted a public workshop in September 2006 to further investigate requirements for a possible enterprise edition (Peter Kriens wrote an excellent background blog entry about it). The current release of the OSGi Specification has since become part of Java SE, included via JSR 291, and the question confronting those of us who attended the workshop was whether the OSGi Specification should also become an alternative to Java EE, and if so, what requirements would need to be satisfied. One of the key requirements was the ability for OSGi services to invoke services running in other JVMs, and to support enterprise application topologies for availability, reliability, and scalability. (The current OSGi Specification defines the behavior of service invocations in a single JVM only. See Peter’s excellent workshop summary entry for more details.)

Work formally began in January 2007 with the first Enterprise Expert Group meeting. Distributed OSGi remained among the top requirements ratified at that session. At first we often heard the criticism that we were "reinventing the wheel" or "creating another CORBA," but this criticism was based on a misunderstanding. The early draft design document (RFC 119) and the RI code at Apache CXF should help clarify that we aren’t doing either. We are simply extending the OSGi framework to configure existing distributed computing software systems. We use the term "distribution software," or DSW, in RFC 119 as a generic reference to any type of protocol and data format system capable of remote service invocation. Here "remote" means in another JVM or address space.

Some suggested that we choose one particular type of distribution software and standardize on that. An advantage would be the ability to exploit protocol-specific features, such as serializing executable code, but this would reduce choice and create a potential lock-in situation. Instead, we defined a general configuration mechanism that can be used with any distributed computing software system. We also tried not to prevent the use of features like serializing executable code - in other words, you should still be able to do that if you want to, but it’s not standardized because it’s specific to a single type of distribution software.
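As a rough sketch of what this configuration mechanism looks like in practice: a bundle registers an ordinary OSGi service and adds service properties that tell whatever distribution software is installed to expose it remotely. The property names below follow the RFC 119 early draft as implemented by the Apache CXF reference implementation and may change before the final specification; the service name and endpoint address are hypothetical. The registration call itself is shown in a comment, since it requires a running OSGi framework.

```java
import java.util.Hashtable;

public class RemoteServiceConfig {

    // Build the service properties that mark a service as remotable.
    // Property names follow the RFC 119 early draft (they may change
    // in the final R4.2 specification).
    static Hashtable<String, Object> remoteProperties() {
        Hashtable<String, Object> props = new Hashtable<>();
        // Ask the distribution software to expose all of the
        // service's interfaces remotely.
        props.put("osgi.remote.interfaces", "*");
        // Hint which distribution-software configuration to use;
        // "pojo" is the type used by the Apache CXF RI.
        props.put("osgi.remote.configuration.type", "pojo");
        // Endpoint address for the CXF distribution software
        // (hypothetical URL for illustration).
        props.put("osgi.remote.configuration.pojo.address",
                  "http://localhost:9090/greeter");
        return props;
    }

    public static void main(String[] args) {
        // Inside a running OSGi framework the registration would be:
        //   bundleContext.registerService(
        //       GreeterService.class.getName(), new GreeterImpl(), props);
        // The framework sees a normal local service; the distribution
        // software notices the properties and creates the remote endpoint.
        Hashtable<String, Object> props = remoteProperties();
        System.out.println(props.get("osgi.remote.interfaces"));
    }
}
```

The key design point is that nothing in the service implementation itself changes: the same registration without these properties yields a plain local service, and different distribution software can interpret the same properties in its own way.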

In addition to the reference implementation at Apache CXF, the Eclipse ECF project and Paremus’s Infiniflow product intend to implement the design, and we have heard that the Eclipse Riena project is also considering it. So hopefully we are on the right track with Distributed OSGi. But we are still very interested in feedback, and there is time now to change what actually goes into the specification. The Distributed OSGi design also includes a discovery service and an SCA metadata extension for configuring multiple distributed software system components. Neither of these is yet publicly available, but both should be soon.

To explain where we are in the process, it’s helpful to give a little background on how the OSGi Alliance works. Its process is very similar to the Java Community Process. In fact, the OSGi specification started life as JSR 8, and basically still represents the evolution of that original JSR effort. The OSGi process starts with Request for Proposal documents (RFPs) that detail requirements. Once an RFP is approved, one or more Requests for Comment (RFCs) are created with designs that meet the requirements. After an RFC is approved, the specifications are updated to include the design. The RFPs and RFCs are both products of the expert groups, although they tend to be led by individuals or small teams within the group.

When it gets to the specification part of the process, however, the OSGi Alliance is unique because it pays Peter Kriens to do the writing. This is great because Peter has been with the OSGi effort since the beginning, and he ensures the quality and consistency of the specification. It also removes a political issue that other consortia typically face when they "pass the pen" to one or more members (typically representing vendors that compete with one or more other members).

The current version of the reference implementation was done to prove the design described in RFC 119, and to allow the RFC to pass EG voting. In the specification phase we expect further discussion on the design as it gets incorporated into the specification, which may result in further changes to the RI.

The OSGi expert groups are now starting to work with Peter on the updated specification for R4.2, which is scheduled for publication in preliminary form in March or April, and in final form in June. A lot more information about the upcoming release will be available at OSGi DevCon, held in conjunction with EclipseCon in Santa Clara, March 23-26.

The other major parts of the upcoming release include various extensions to the core framework, the Spring-derived Blueprint Service component model for developing OSGi services, and various bits of Java EE mapped to OSGi bundles (JTA, JDBC, JPA, JNDI, JMX, JAAS, and Web Apps). The Java EE mapping is not as far along as the enhancements to the core, the Spring/Blueprint, or Distributed OSGi work, but a preview is expected to be published along with the R4.2 final release.

The past two years have resulted in the Distributed OSGi requirements and design documented in the early release draft and illustrated in the reference implementation code at Apache CXF. This is one of the significant new features of the upcoming OSGi Specification R4.2 enterprise release, due out in mid-2009. Together with extensions to the OSGi core framework, the Spring-derived Blueprint Service component model, and mapping of key Java EE technologies, the upcoming release represents a major step forward for the OSGi specification and community.


well-trodden path by Gerald Loeffler

hi,

my understanding of distributed OSGi is that it is the appropriation of extremely well-understood concepts in distributed computing to the OSGi component model and platform:
- meta-data to denote services as remotable
- in keeping with the OSGi spirit remote services are described via Java interfaces
- registry and discovery of remote services
- as-good-as-possible location transparency for the developer. there is even a remote exception for one case where this abstraction doesn't hold.
- re-use of existing remoting protocols and technologies (WS, CORBA, RMI, ...), also in the area of service registries and discovery (UDDI, LDAP, ...)
- the use of SCA-like intents (quite a lot feels like SCA anyway) to match what services offer and clients need

all this is, as i said, perfectly well-established. it's "just" the introduction of these concepts into the OSGi technology space. and that's probably a good thing for a standard, although it makes one wonder how many reincarnations of the same ideas we will keep seeing.

the problems, as before, will be in interoperability and performance.

or am i missing anything fundamental here? is there some revolutionary innovation lurking somewhere?

just trying to understand it as much as possible,
thanks,
gerald

www.gerald-loeffler.net

Re: well-trodden path by Eric Newcomer

Gerald,

Yes, your understanding is correct. The main reason for it is to allow an OSGi service to communicate with another OSGi service in another JVM. The existing specification defines service-oriented behavior only for services running within a single JVM. Enterprise applications often require designs that span JVMs/address spaces - load balancing, failover, scalability. Although we have not explicitly defined how to achieve any of these, our intention was to define how existing technologies that support such topologies can be integrated with an OSGi framework in a standard way.

Just as we are not defining anything new, we are also not solving the problems of existing technologies. Interoperability and performance challenges remain as they did before D-OSGi. But as a developer of a distributed enterprise system you will be able to get the benefits of OSGi in a standard way.

Eric

Belorussian translation by Eric Newcomer

I'm pleased and flattered to point to a Belorussian translation of this article:

www.designcontest.com/show/newcomer-distributed...
