The SCA Debate
The Service Component Architecture (SCA) was originally created by a group of vendors, including IBM, Oracle, BEA, and SAP, and was handed over to OASIS in March 2007. SCA defines a programming and assembly model for developing and composing services within a Service-oriented Architecture. Services, or components, may be developed in Java or any other programming language that supports the SCA programming model, i.e., any language for which a binding has been specified in the SCA specs.
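To make the assembly model concrete, here is a minimal sketch of an SCDL composite, assuming the OSOA SCA 1.0 namespace; all component, service and class names are illustrative:

```xml
<!-- A minimal SCA composite (SCDL). All names are illustrative. -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="OrderProcessing">

  <!-- A service exposed by the composite, bound to standard Web services -->
  <service name="OrderService" promote="OrderComponent">
    <binding.ws uri="http://example.com/services/Order"/>
  </service>

  <!-- A component implemented in Java; the implementation type is pluggable -->
  <component name="OrderComponent">
    <implementation.java class="com.example.OrderServiceImpl"/>
    <!-- An outbound dependency ("reference") wired at assembly time -->
    <reference name="creditCheck" target="CreditComponent"/>
  </component>

  <component name="CreditComponent">
    <implementation.java class="com.example.CreditCheckImpl"/>
  </component>
</composite>
```

The components, their implementations, and the wires between them are declared in SCDL rather than in code, which is what allows a second implementation language to be slotted in without touching the assembly.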
The SCA programming model has been defined by Microsoft's competitors, who control the specifications and whose main focus, according to Chappell, is on portability of code rather than interoperability. The only .NET language supported by SCA is C++, which does not play an important role in the .NET world. Even if Microsoft were to design a C# or VB.NET binding, nothing would be gained with regard to portability, because both languages are bound to the homogeneous Microsoft platform. In addition, Microsoft already provides an analogous programming model: Windows Communication Foundation.
First, it's important to understand that SCA is purely about portability--it has nothing to do with interoperability. To connect applications across vendor boundaries, SCA relies on standard Web services, adding nothing extra. [...] and so Microsoft not supporting SCA will in no way affect anyone's ability to connect applications running on different vendor platforms.
The assembly model, which is defined by the Service Component Definition Language (SCDL), does not add to interoperability either:
The language just doesn't define much. And since all of the components in a single SCDL-defined composite must run on the same vendor's infrastructure, Microsoft's lack of support doesn't affect anyone's ability to define SCA composites that include both, say, Java and .NET components. This wouldn't be possible even if Microsoft did support SCDL.
SCA's focus on portability is the main reason why neither Microsoft nor any customer would benefit from Microsoft embracing SCA:
Given the competitive realities, Microsoft supporting SCA today is about as likely as an embrace of EJB would have been a decade ago. Yet even if the company wanted to, there's not much there for Microsoft to embrace. Given SCA's complete focus on portability rather than interoperability, the set of programming languages it supports, and the minimalist nature of SCDL, Microsoft's support of this emerging technology would provide almost no benefit to customers.
Stefan Tilkov agrees and even poses the question "whether the whole thing is worth the effort after considering his [David Chappell] arguments". In his follow-up, Stefan states that "Interoperability clearly tops portability" and he is doubtful about the success of SCA:
To me, a portable, cross-platform assembly model and programming model has no chance to succeed — there’s just too much agreement requirement for our industry. [...] it seems there was no clear vendor advantage to CORBA, either … which somehow did not make MSFT ever join it, either.
William Vambenepe, in contrast, sees value in the assembly model from a systems-management perspective:
it’s a machine readable description of the logic of the composite application, at a useful level of granularity for application and service management. This is something I can use in my application infrastructure to better understand relationships and dependencies. It brings the concepts of the application world to a higher level of abstraction (than servlets, beans, rows etc), one in which I can more realistically automate tasks such as policy propagation, fail-over automation, impact analysis, etc.
In this regard, Microsoft might very well benefit from supporting SCA, according to Vambenepe. He thinks that a Microsoft effort to support SCA would make it a lot easier for him "and all the management vendors, to efficiently manage composite applications that have components running on both Microsoft and Oracle, for example". Don Box is asking for arguments in favour of SCA and is not convinced by Vambenepe's arguments.
It will be very interesting to see how SCA will influence the SOA market and how Microsoft will finally respond to the debate. The SCA Interview with SCA standards members and users, published on InfoQ today, addresses some of the issues of the SCA debate and gives further insight and understanding of SCA's role and future.
SCA assemblies are the "last mile" of interoperability
I think this point that you make: “The assembly model, which is defined by the Service Component Definition Language (SCDL), does not add to interoperability either” is, IMHO, incorrect. It was made by Dave Chappell as well.
The Web Services standard stack does not support an “interoperable” assembly mechanism. As a matter of fact, if you are adopting the WS-I Basic Profile, you are forbidden to use “outbound” operations, so your services cannot expose “references” (using SCA lingo). The only assemblies Web Services can support are of the client/server type (this is a tenet if you build a SOA with WS only).
Further, if you want your service to participate in multiple assemblies simultaneously, you have to code an assembly context management layer between your SOAP stack and your business logic. This layer is thin and of low value for an infrastructure software vendor like Microsoft, but it has huge value for a customer building connected systems at the presentation, process and information layers.
SCA is two distinct pieces, IBM should probably have separated them more clearly, but Assemblies are the “last mile” of interoperability, while the programming model is purely about portability.
At the end of the day it is inconsequential whether Microsoft participates in SCA or not, because someone, if not Microsoft, will build this thin layer on top of WCF to enable the creation of SCA components running in the CLR. Incidentally, SCA is offering a distributed CLR system for any language, not just Microsoft's CLR languages. Being able to assemble Java, BPEL, PHP and C++ components is a powerful value proposition for customers.
This is how I compare a SCA world with the "interoperable" world of .Net
SCA world                 | Microsoft world
--------------------------|-------------------------
Any language supported    | MS languages supported
Distributed CLR           | Local CLR
Peer-to-peer assemblies   | Binary assemblies
Homogeneous progr. model  | Homogeneous progr. model
Putting it that way, I don't see what the folks in Redmond have left to lose in adopting SCA's assembly model, unless it is an ego thing.
Re: SCA Debate
Although Jean-Jacques does not like the CORBA analogy, I agree with Stefan that SCA will require the same level of agreement among vendors as CORBA did (and failed to gain).
I'm thinking of a service landscape instead of a system or assembly landscape. All management information, along with policy enforcement and all other IT management data, should be included in a Service Registry. The Service Repository (included in the Registry/Repository solution) contains standardized policies, contracts, schemas and interfaces, which are associated by metadata stored within the registry. Let's simply use the standards that already exist and (more or less) solve interoperability issues. IT management data does not have to (indeed MUST NOT) be interoperable; thus these data might very well be consumed and stored in a proprietary way. The basis for this data will be the standardized WSDLs, XSDs, etc. Assemblies might, or should, be realized by composite services. The key component of a SOA is the service! We have to get rid of the traditional system/application/module/assembly metaphor.
Re: SCA Debate
JJ, I'm sorry, but your .NET/SCA comparison just isn't correct. There's nothing even vaguely like the CLR in SCA, for example, whether local or distributed. You might be right that some SCA vendors will provide a way to include WCF components in their composites, but it's important to understand that these extensions will be completely proprietary--the SCA specs don't define how to do this. Nor could Microsoft support SCA in this way except by building extensions that are specific to each vendor's SCA runtime--there is no SCA standard for how components interact within a domain, and a composite (today, at least) must exist within a single domain.
Satadru seems to imply that SCA will allow composites to be assembled across different technology stacks, i.e., SCA runtimes from different vendors. This is a common belief, but it's not correct. This restriction is the primary reason why I've argued that SCA doesn't have much to do with SOA: I can only create composites when all of the components are within the same vendor's domain. To me, this kind of single-vendor limitation is antithetical to SOA.
Also, Satadru correctly points out that SCA allows creating runtimes that can automatically choose bindings and make some QOS decisions. But the only reason to define a standard for this is to allow portability (or interoperability, I suppose, although SCA doesn't address this). Any vendor could build this functionality on their own if they chose, avoiding the entanglements of standards committees. What SCA adds--and it's a useful thing--is the ability to have some portability of skills and code across vendor implementations. That's why it's important.
Re: SCA Debate
The ability to create SCA composites with components running on different SCA containers is nice, but the lack of it is not as restrictive as you might think it is. It's quite unlikely that SCA vendors will only build single-technology component implementations (the C&I models as per the spec) - look at the Apache Tuscany 1.0 implementation - they already support Java and BPEL for C&I, and they also provide an 'SCA native' extension to the runtime that supports C++ and scripting-language-based component implementations. So as long as I can break my application down into a *number* of composites, each of which supports an appropriate stack, it should not be a problem. If I have Java, C++ and BPEL as well as COBOL assets and I need to construct a composite application, I will need to build two composites - one with the Java, C++ and BPEL components running on a runtime à la Tuscany, and the other composite containing the COBOL component (as long as I can find an SCA runtime that supports COBOL). The composites can interoperate via published services and references, which ultimately will end up using interoperable protocols such as WS, so I don't see this as a big restriction unless I'm missing something.
You also make the point that a vendor could build functionality to hide all the QoS and low-level binding/communication infrastructure details from the developer, but that in my mind is the biggest danger of vendor lock-in - need evidence? This is exactly what BEA Workshop tried to do in the Java world five years back, albeit via proprietary constructs such as controls, and it did not really go down well in the wider Java community, though I strongly suspect that model did help seed some of the ideas that we see in SCA today.
Re: SCA Debate
And I think we agree on the second point. The value of a portability standard, such as SCA, is to reduce vendor lock-in.
Re: SCA Debate
I think we agree on the CLR aspects of SCA. SCA enables multiple processes to communicate concurrently, something that is not supported by Microsoft's CLR. You can of course implement it by hand using WCF or .Net Remoting, but you do not have any "out-of-the-box" mechanism to "assemble" a complex set of concurrent processes in .Net.
Now you say:"Satadru seems to imply that SCA will allow composites to be assembled across different technology stacks, i.e., SCA runtimes from different vendors. This is a common belief, but it's not correct. This restriction is the primary reason why I've argued that SCA doesn't have much to do with SOA: I can only create composites when all of the components are within the same vendor's domain."
Again, I am not sure this is correct: the assembly definition can (and must) be deployed to all components; provided that the bindings enable interoperability between each vendor's infrastructure, the component implementation has enough information to configure its endpoints (both services and references). I have created SCA’s metamodel in case you want to take a look at it: www.wsper.org/sca10.png
We both had this discussion in 2004 at the Design Review meeting of WCF: SOA requires an architecture where peer services exchange messages. Today people can only build their SOA in a client/server mode. Service enablement is client/server; even primitive SOA-based business process execution schemes constrain process "steps" to "invoke" a service. Languages like Erlang show the direction in which things will evolve and, thank god, SCA is enabling a "true" service-oriented architecture; nothing but something like SCA is capable of delivering SOA today.
SCA and SDO are together the last mile of interoperability. SCA enables the configuration of endpoints in an interoperable way, which is impossible to do today. SDO solves another massive problem: how does a service consumer communicate the changes it has made to an XML document? Without SDO (or DataSet in .Net - but DataSet is limited to an ER model, which greatly limits its applicability to arbitrary XML Schemas), people tend to surface the update logic in the service interface, which is a massive anti-pattern for SOA.
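Roughly, a serialized SDO datagraph carries a change summary alongside the current data, so the provider can see what the consumer modified without any update operations in the interface; this is a sketch only, and the payload element and attribute names are illustrative:

```xml
<!-- Sketch of a serialized SDO datagraph. The changeSummary records
     the old values of modified objects, so the provider can compute
     the delta from the current state below. Names are illustrative. -->
<sdo:datagraph xmlns:sdo="commonj.sdo">
  <changeSummary>
    <!-- old state of the changed object -->
    <customer sdo:ref="#/customer" name="Old Name"/>
  </changeSummary>
  <!-- current (modified) state sent back by the consumer -->
  <customer name="New Name"/>
</sdo:datagraph>
```

The service interface then only needs a generic "submit document" operation; the update logic stays out of the contract.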
So I am sorry, Dave, but you are fighting the very necessary interoperability that would make SOA work a lot better. You are keeping everybody behind with the argument that you are making. I was the one in the SDO working group to request that we create a compatibility mode with the DataSet. I don't know what happened but this is better than nothing. I can tell you that everyone in the working group voted for this idea. People want to interoperate with Microsoft.
In an ideal world, Microsoft would realize that customers NEED SCA and SDO and deliver what their customers need (I am living this as we speak, as I participate in a project where a .Net consumer needs to communicate with a Java service, and an “interoperable” DataSet or SDO would be a great thing to have; I estimate that we have to add 20% to the project because we don’t have it).
I think it is time to focus on "getting the job done" when it comes to interoperability instead of muddying the water; nobody wins “when interoperability is not good enough”. 2007 saw great progress with WSDL 2.0, WS-TX, WS-ReliableMessaging, WS-Policy 1.5, WS-BPEL 2.0 and BPEL4People 1.0; let’s work to finish the job and support SCA and SDO as industry standards.
Re: SCA Debate
- The SCA assembly model has nothing to do with interoperability. Composites in different domains (e.g., running on different vendors' SCA runtimes) typically communicate using standard Web services--SCA adds nothing at all here. In fact, composites communicating across domain boundaries don't even know that they're talking to another SCA-based application. All they see is an ordinary Web service, one that could just as well be implemented using WCF.
- Don't confuse SCA and SDO--they're independent technologies. Unlike SCA, SDO really does have some interoperability aspects, and it's possible that customers might benefit if Microsoft chose to support it. But neither of these technologies requires the other.
- I'm absolutely supportive of SCA. As I've often said in my blog and elsewhere, I think it has the potential to be a quite useful technology. I even invested the time to write the SCA tutorial I linked to in an earlier comment, with the goal of helping people better understand this new standard. Yet none of this changes the reality that, given its current definition, customers would gain little from Microsoft's support of SCA. I'm afraid your arguments are rooted in misunderstandings about what SCA really provides.
Re: SCA Debate
I am not sure why you are using such tactics as saying "Don't confuse SCA and SDO". Shall I remind you that I was involved in the first public draft of the specifications, and my text above is unambiguous? SCA and SDO are independent and serve very different purposes, yet together they represent the last steps necessary for true interoperability.
I would like to explain one more time where interoperability plays within SCA.
Here are a couple of facts:
a) From both the WS-I Basic Profile 1.0 and 1.1: "R2303 A DESCRIPTION MUST NOT use Solicit-Response and Notification type operations in a wsdl:portType definition"
b) from the WSDL 2.0 Schema (or WSDL 1.1, for that matter):
If you prefer a UML notation: www.wsper.org/wsdl20.png
The schema or the diagram shows that “a service must have at least one end-point”.
WSDL was designed to "describe" a service from its own point of view (message types, interface and message listeners). This works in a client/server mode because the provider does not need to know anything about the consumer. However, if a provider does have outbound operations (notification and solicit-response, or out and out-in using the 2.0 vocabulary), you are kind of stuck.
The reason I say “kind of” is because WSDL 2.0 allows you to associate multiple endpoints with different bindings, which are themselves associated with specific operations of the interface. So technically I could express that, but in the end, if I use the endpoints/bindings for that purpose, I lose the “Service” concept. I say I “lose” the service concept because now I am going to have to create a different service element for each assembly in which “my service” participates. I am not sure this is what the authors of WSDL 2.0 had in mind. No matter how you look at it, as soon as you are using peer services rather than client/server ones, you need to distinguish between a service and an assembly of services; you can’t have both in one concept - at least I don’t know how to do that.
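For concreteness, the two outbound operation types look like this in WSDL 1.1 terms (operation and message names are illustrative); R2303 rules both patterns out:

```xml
<!-- Outbound operation types in WSDL 1.1; names are illustrative.
     WS-I Basic Profile R2303 forbids both in a portType. -->
<wsdl:portType name="PriceWatcher">
  <!-- Notification ("out"): the service sends, no reply expected -->
  <wsdl:operation name="priceChanged">
    <wsdl:output message="tns:PriceChangedMsg"/>
  </wsdl:operation>
  <!-- Solicit-response ("out-in"): the service sends first,
       then expects a reply from the consumer -->
  <wsdl:operation name="requestQuote">
    <wsdl:output message="tns:QuoteRequestMsg"/>
    <wsdl:input message="tns:QuoteResponseMsg"/>
  </wsdl:operation>
</wsdl:portType>
```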
Let’s imagine a scenario where I need to assemble arbitrary services which have any number of in-out and out-in operations. How do I do that in an "interoperable" way? I.e., a .Net service and a Java service each need to have some information about where to send the "out-in" and "out" operations. Right?
There are only two solutions to correct this defect:
a) Remove outbound operations from WSDL (since they are forbidden by WS-I, why are they even there in 2.0?)
b) Externalize the endpoint definitions like SCA is doing it today.
In other words, the <endpoint> element of WSDL needs to be defined at assembly time. Service containers such as WCF already support multiple endpoints "per service", so that's fairly trivial to do. Once the endpoints are defined at assembly time, you can of course "inject" the inbound counterpart into the outbound operation. This means that an outbound operation of a service may target two different endpoints as the service participates in two different assemblies. This is what an SCA assembly definition is doing. You just need to “distribute” an SCA definition to all the services involved in the assembly, and they can configure themselves to receive messages from other participants in the assembly.
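As a sketch (the URI and all names are illustrative), externalizing the endpoint in SCDL means the target of the outbound operation is supplied to the reference when the composite is assembled, not hard-coded in the service:

```xml
<!-- A component as wired in one particular assembly: the endpoint of
     the outbound "rateFeed" reference is set here, at assembly time.
     Another composite could wire the same component to a different URI. -->
<component name="QuoteComponent">
  <implementation.java class="com.example.QuoteServiceImpl"/>
  <reference name="rateFeed">
    <binding.ws uri="http://partner.example.com/RateFeed"/>
  </reference>
</component>
```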
If you look on this UML diagram: www.wsper.org/sca10.png you can see that SCA enables a separate binding between reference and service. This means that the "service" can have access to the endpoint definition that is the target of a given outbound operation. Of course you could have a different endpoint for each operation.
This is 100% interoperability; again, if your .Net service does not know anything about outbound references, well, you are going to have to do that by hand.
I regret that the SCA assembly model is bundled with a larger component model, because this gives some people excuses to discard it completely and leaves end users like me to pay the price of not having this technology. The good news is that the Java community understands this problem and has lined up to solve it. Now, if you want to advise Microsoft to be left behind - we live in a free society and you are entitled to express any opinion you want.
Re: SCA Debate
Actually, even Microsoft recognizes the importance of an assembly model. This is what Dino Chiesa said last June about SCA:
"Some of SCA goes beyond the basic communication plumbing, and attempts to address the larger and more complex issues surrounding the description, modeling, assembly and management of distributed systems. These are hard problems; Microsoft has been working in this area for many years, and steadily delivering infrastructure to address customer needs. For example, is shipping technology. In the .NET Framework 3.5, we'll deliver integration between WCF and WF, and the version of Visual Studio mated to .NET 3.5, Visual Studio 2008, will provide modeling tools to truly democratize the construction and deployment of conversational service-oriented applications. System Center 2007, currently shipping, enables management of distributed applications. We expect to continue to steadily deliver evolutionary innovations in these areas over time."