Setting out for Service Component Architecture

| Posted by Henning Blohm on Oct 18, 2007. Estimated reading time: 9 minutes |

Quite a number of bloggers have been wondering about the Service Component Architecture (SCA) standardization effort. 

SCA's pick-and-choose specification style makes it easy to get lost in the SCA universe. Because there is little experience with using SCA in the community, many areas that deserve detailed specification are still under investigation or have not even been touched yet.

At first, readers might easily be misled into believing that SCA is (yet another) revolution in Java land. This is wrong on two counts. Firstly, although the Java-oriented work attracts most of the attention, SCA is not only about Java land: there are specifications for C++, COBOL, PHP, and BPEL. Secondly, and this is what we want to focus on, SCA is not primarily about replacing existing environments (such as Java EE and OSGi) but about creating an infrastructure in which applications can cross the boundaries between the different programming models in these environments. The details of how SCA will integrate with existing technologies are the missing pieces in the catalogue of published SCA specifications. There is simply still a lot of work ahead to figure out the tedious details of integration at all layers with these environments.

Technology integration is hard. No single interesting technology should be limited in its use. And yet, SCA is all about cross-technology integration. 

SCA looks promising. Very interesting prototypes have been shown on various occasions, including public conferences:

  • Oracle developers have demonstrated composition of BPEL processes with proprietary mediation components, workflow components, and event agents (see here)
  • SAP developers have shown composition of Java EE components with BPEL processes within a Java EE application package and domain-level assembly over Java EE applications (see JavaOne 2007 session here)
  • The Apache Tuscany project can run compositions that are a mix of scripting language components and Java components
  • The Fabric3 implementation of SCA shows how to assemble service networks over a distributed runtime environment, implemented by a variety of programming technologies.

Let's try to summarize what we can learn from these examples:

SCA is an enhancement to frameworks that offer programming models for components and connectivity abstractions. Those frameworks may be standard offerings, but may also be proprietary technologies, such as Remote Function Calls (RFC) for an SAP system, proprietary mediation or scripting components, SQL stored procedures, etc. SCA defines an assembly language that may be integrated into such frameworks in order to realize a number of benefits. We will discuss various benefits in detail. Here are the claims we will make:

  • SCA can be supported in conjunction with existing technologies. That will likely be its primary use-case.
  • SCA's fundamental value lies in providing the foundation for cross-technology programming model integration, distributed deployment, and assembly.
  • SCA will allow implementers to provide proprietary technologies in a consistent and recognizable way – which is good for both developers and vendors.

Integration with Existing Environments

If anything, SCA is about integration with existing technologies. This is not about re-using one or the other specification when designing SCA. It's the other way around: specifications will describe how to integrate deeply with SCA models. You can see that when browsing the SCA white papers or when following the prototype efforts (see above). The idea here is that wherever a specific technology excels, SCA can increase its value even further by scaling out its use.

Integration of existing technologies may happen in different ways and at several layers. For a scripting language, an implementation type definition is a natural choice. For service invocation technologies, such as messaging or remoting protocols, bindings are the natural point of integration. For runtime environments such as Java EE that provide a deployment model, a component model, and maybe more, integration may happen on more than one layer.
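As an illustrative sketch of those two layers in the SCA 1.0 assembly language (all class names and URIs below are invented): a component names its implementation type, while its services carry bindings.

```xml
<!-- Hypothetical composite. implementation.java is the integration point
     for the implementation technology; binding.sca and binding.ws are the
     integration points for invocation technologies. -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="AccountComposite">
  <component name="AccountComponent">
    <implementation.java class="com.example.AccountServiceImpl"/>
    <service name="AccountService">
      <!-- default binding, left to the runtime -->
      <binding.sca/>
      <!-- additionally exposed as a web service -->
      <binding.ws uri="http://localhost:8080/AccountService"/>
    </service>
  </component>
</composite>
```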

A deep integration of SCA assembly with a given environment reduces nasty model frictions introduced by abstractions that try to generically wrap any sort of runtimes into one common "higher" runtime model. 

For example, there was a time when it seemed like a good idea to abstract all services via WSDL interfaces and implement service invocation in some generic, XML-oriented, WS-*-capable runtime. While that seemed like a good idea from high above, it looks much less attractive from a service developer's and service consumer's perspective: wherever you are, you have to pay the impedance mismatch tax of non-integration by converting to and from a different technology - including naming, transaction handling, and security.

In contrast, an SCA integration will try to provide an interpretation of native artifacts so that they can be referred to in assembly definitions right away and only need to be modified when previously unavailable features are needed.

Cross-Technology Programming Model Integration

SCA introduces the abstract concept of an implementation type. An implementation type describes the shape of a component from an SCA assembly perspective. In other words, it says what service endpoints a component offers, what references it makes, and what configuration properties can be specified for that given component. In that sense an implementation type provides a technology independent representation of component implementations.
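As a sketch of that shape (component, class, and property names below are invented for illustration), the three aspects show up directly in a composite definition:

```xml
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0" name="OrderComposite">
  <component name="OrderComponent">
    <!-- the concrete implementation type, here implementation.java -->
    <implementation.java class="com.example.OrderServiceImpl"/>
    <!-- a service endpoint the component offers -->
    <service name="OrderService"/>
    <!-- a reference it makes to another component's service -->
    <reference name="creditCheck" target="CreditCheckComponent"/>
    <!-- a configurable property -->
    <property name="currency">EUR</property>
  </component>
</composite>
```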

That sounds a little like science fiction, and we have heard about such things before. However, SCA does not attempt to capture all aspects of a component and its interactions in its own language. For example, SCA does not define its own interface description language but instead relies on Java and WSDL. Other interface languages may be supported by implementers as needed. In the same spirit, while SCA defines a policy framework, it re-uses WS-Policy definitions where applicable.
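For example, one and the same service can be typed either way (the interface names below are invented):

```xml
<!-- a service typed by a Java interface... -->
<service name="QuoteService">
  <interface.java interface="com.example.QuoteService"/>
</service>

<!-- ...or the same service typed by a WSDL portType -->
<service name="QuoteService">
  <interface.wsdl
      interface="http://example.com/quote#wsdl.interface(QuoteService)"/>
</service>
```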

Once you have an implementation type, say foo, you can use SCA assembly to define how to combine components of type foo with, say, BPEL processes, Java POJOs, or EJB session beans - whatever your environment of choice may support.

From a vendor perspective that means that SCA lowers the marginal cost of providing an implementation or binding technology to its users. For users it means that SCA reduces the marginal cost of making use of an implementation or binding technology.

In the case of Java EE we actually did a study at SAP. We integrated an SCA runtime and a BPEL engine with our Java EE 5 environment (SAP NetWeaver, that is) and got a seamless integration of BPEL with the Java EE component and life cycle model. Let's see what that gives us: local BPEL-to-Java (and vice versa) invocations are indeed local (albeit not pass-by-reference), since we had sufficient application-local assembly meta-data. In particular, a BPEL process can invoke a session bean via an SCA wire and update persistent data using the Java Persistence API (JPA) within the same transaction - without compromising information hiding by exposing a web service for an interface that should be local. The SCA wire, in this case, would have one end, the BPEL component's side, defined by a WSDL interface, and the other end, the session bean's side, defined by its Java business interface.
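A minimal sketch of such a wire, assuming a hypothetical implementation.ejb type along the lines of the SCA Java EE integration work (all component, process, and bean names are invented):

```xml
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           xmlns:op="http://example.com/order"
           name="OrderProcessComposite">
  <!-- the BPEL side of the wire, typed by the process's WSDL interface -->
  <component name="OrderProcess">
    <implementation.bpel process="op:OrderProcess"/>
    <reference name="billing" target="BillingComponent"/>
  </component>
  <!-- the session bean side, typed by its Java business interface -->
  <component name="BillingComponent">
    <implementation.ejb ejb-link="billing.jar#BillingBean"/>
  </component>
</composite>
```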

Taking it from another side: when offering support for an orchestration language like BPEL, the need arises to re-use existing assets as seamlessly as possible. SCA helps here by allowing BPEL to be used locally, almost "in-place".

While it would not be reasonable to expect a similar integration of C++ code with Java (but... who knows), there are a lot of programming models with an Enterprise Service Bus (ESB) or Enterprise Application Integration (EAI) heritage that can be integrated along the same lines as BPEL.

Distributed Deployments and Assembly

While SCA wisely does not describe a particular deployment format, it does define a few aspects around deployment. In particular it defines the concept of a "contribution to the SCA domain". This is another key concept of SCA.

The moment we can talk about a contribution (think: a deployable) we can talk about assembly beyond the single contribution, which is exactly what domain-level assembly in SCA is about. The domain is visualized as a composite that includes composites from contributions. That is, we get a method of expressing assembly relationships across contributions, using the same assembly language we used to compose locally.
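For illustration, a contribution in SCA 1.0 carries its metadata in a META-INF/sca-contribution.xml file inside the deployable archive (the namespaces and composite name below are invented):

```xml
<contribution xmlns="http://www.osoa.org/xmlns/sca/1.0"
              xmlns:acme="http://acme.com/orders">
  <!-- the composite this archive contributes to domain-level assembly -->
  <deployable composite="acme:OrderComposite"/>
  <!-- artifacts this contribution uses from, and offers to, others -->
  <import namespace="http://acme.com/billing"/>
  <export namespace="http://acme.com/orders"/>
</contribution>
```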

Distributed assembly, as enabled by the domain concept, is the logical counterpart of cross-technology integration at the programming level. In reality, business applications have to integrate across application packages and often across systems of such significantly different technology that a programming-level integration is not reasonable.

Fortunately, a domain may span more than one system and interconnect several systems. In that sense, domain-level assembly provides a connectivity abstraction that moves the configuration of physical endpoints from individual system-to-system settings into the definition of a compound domain construct.

It is not only about the abstraction of endpoint addressing within a domain. In addition, the assembly information may stay silent about the actual transport protocol to use and, depending on the heterogeneity of the domain, leave that decision to domain administrators or even the runtime implementation.

From an Enterprise Service Bus (ESB) perspective this speaks to today's tendency of "moving the ESB capabilities to the edges of the ESB". That is, programming model integration (see above) allows us to implement integration functionality in a free mix with business application logic, while the domain abstracts the ESB topology details - in other words, programming on the service bus.

Proprietary Technologies and SCA

An important argument was made above: lowering the marginal cost of new programming models for providers and users. It's a simple win-win situation.

Vendors traditionally hesitate to introduce new programming models because of the additional effort involved in making them accessible to developers and tools. It is not uncommon to see new deployment models, management tools, and tool suites brought forward to introduce a new programming language such as BPEL. Is that justified?

Similarly, why should users be happy to be confronted with the necessity of learning more than the bare minimum that makes their developer lives easier?

Speaking of proprietary technologies, this means that vendors can use SCA to provide access to new or proprietary technologies faster and with less effort. Users should expect a lower barrier to entry for domain-specific technologies.


What you should take away from this article is that SCA is primarily not an attempt to replace or revolutionize your favorite technology. It adds an abstraction of assembly that you can use as needed.


Is it about SOA? If we say SOA is about abstracting connectivity details, being able to juggle varieties of transport protocols and programming models for integration and application development, then SCA is about simplifying SOA development.

Now that the effort has moved to OASIS under the name Open Composite Services Architecture (OpenCSA), SCA's development will continue in public. Stay tuned!



IBM's SCA deserves a mention by Justin Wood

IBM's Websphere Integration Developer product is a really strong SCA implementation.

SCA & SDO by Alvaro Gareppe

Very illustrative article! The SCA specification brings to SOA a common invocation model, allowing all kinds of invocation methods. A very important addition in this regard is SDO. SDO works with SCA as a common data model. The SDO specification gives SOA the ability to handle business objects as the data outside the components.

I have been developing with WebSphere Integration Developer (IBM software that implements the SCA & SDO specifications) and WebSphere Process Server since last year, and the implementation provided by these tools (especially WID, as an IDE) is very powerful and allows an independent implementation of the service components and the assembly of the components.

Little summary… SCA gives SOA a technology-independent implementation, and especially decouples the components from each other.

C++ & Java in SCA by Patrick Leonard

Henning, this is a nice overview, very helpful.

I wanted to respond to your comment about C++ / Java integration. Rogue Wave's HydraSCA actually does host both Java and C++ components in the same runtime and they can communicate in-memory (without web services) for higher performance, or of course with web services if you prefer. It's JNI under the covers, but the developer just sees SCA components.

Re: IBM's SCA deserves a mention by Johan Eltes

IBM did coin the acronym and pioneered the architecture approach. They should get credit for that. And for initiating OSOA. But it isn't an implementation of OSOA SCA. As an example, Integration Developer / WebSphere Process Server does not support dependency injection, which is a critical feature for keeping SCA non-intrusive to service component business logic.

Re: IBM's SCA deserves a mention by Alvaro Gareppe

If I'm not getting it all wrong, by "support of dependency injection" you mean accessing another service using the @Reference annotation instead of using a lookup function...

Of course it is important that every implementation of SCA implements every aspect of the specification... and I agree with Johan that it should be like that.

On the other hand... the point of dependency injection is to keep track of "what services are used by a component", mostly for tracking the impact of changes... I think that all of that is possible with the assembly diagrams provided by WID. With this diagram I know, if I make a change, which components would be affected and which components will have to be retested.

But, again, it is true that every implementation of SCA should follow the specification exactly... this will be important if, in the future, "they" want the specification to become a standard.

Re: IBM's SCA deserves a mention by Johan Eltes

Do you really need annotations when a .componentType file is provided for the implementation class? Annotations will require an import statement that will require SCA libraries on your classpath, even though you may want to reuse the class in a non-SCA set-up (e.g. in a pure Spring environment).

Re: SCA & SDO by PJ Murray

In addition to providing a data object, SDO also provides a common data access API - currently in Java and C++, but with additional languages coming.

Re: IBM's SCA deserves a mention by Henning Blohm

Justin, sorry for not mentioning IBM's product. I should have. SCA has evolved quite a bit and many integration aspects have been added, so I was focused on what's going on right now.

Thanks, Henning

Ps.: Sorry for the late reply as well. I didn't check for a few days and didn't get any notice (or didn't notice) that there were comments.

Re: C++ & Java in SCA by Henning Blohm


that sounds very interesting. Can you provide a pointer to more background information?


Re: SCA & SDO by Henning Blohm

Right. SDO is an important utility in SCA, but SCA does not depend on SDO as a DTO implementation. That said, during our implementation work we found SDO to be very useful.


Re: C++ & Java in SCA by Patrick Leonard

Sure, it's in our product called HydraSCA. This is fairly high level; I can get you more technical info if you like.



To annotate or not to annotate.... by Mike Edwards

Ah, whether to annotate or whether to include metadata in separate files is a long-running debate. There are definitely some developers who prefer to keep everything relevant to a code module inside the code module. Annotations provide a standard means to do this in Java. Agreed, use of the annotations does tie the code to the annotation libraries - but as you say, the code does not require the annotations to be read in order to work.

In SCA it is also possible to keep the metadata outside the code modules, using things like componentType files.

Equally important, the assembler of an SCA application can override some aspects of the metadata, should that be necessary when composing the application.

So, I think SCA provides a useful level of flexibility in the creation of components and in their assembly into a larger application.

As for the question of injection - it is a style of programming which aims to eliminate the use of technical APIs - all that the component developer gets to worry about are business interfaces - the ones that are offered by the component and the ones that are used by the component. Of course, SCA gets to have it both ways - there ARE APIs which allow the programmer to go fetch reference proxies, if that style of programming better suits the requirements.

Yours, Mike.
