1. So, Kevin, can you introduce yourself and tell us a little bit about what you are doing?
I am a solutions architect in the developer platform evangelism team. For the last 3 years I have worked with Java ISVs on interoperability between .NET and Java, on migration, and on adoption of the .NET platform generally alongside Java solutions. Prior to that I was a J2EE architect, so my roots are in Java and, more recently, in .NET.
2. Is the Java and .NET war over?
Yes. First of all, I don't think it's a war; there are lots of reasons for using different technologies in the different stacks. There's some really good stuff there, and there's a lot of history around J2EE: it's been around for a long time and has done a lot of work around enterprise-wide, server-side functionality. Even if you buy into one platform being ultimately better than another, the reality of the day is that lots of organizations are going to have both anyway, because of acquisitions, different project teams choosing one or the other, the skill sets that are available, things like that. I think the .NET platform is getting considerably better, and in a lot of ways it is superior to the J2EE platform, but there's just a huge install base out there, and lots of people are going to choose their platform for whatever reasons, non-technical or otherwise. I don't focus on winning an all-or-nothing war; I focus on what's the best thing to use with different parts of the solution.
3. Once you decide to do Java and .NET, what are the approaches for interoperability?
Probably the key one is web services. One of the problems we had 5 years ago was that we still didn't have a lot of good mechanisms: CORBA was the 'de facto' move towards a standards-based way of doing interoperability, but it wasn't as widely adopted, and there were still issues with the implementations and with the availability of CORBA on different platforms. One of the things we did was create this web services mechanism, which gives us interoperable protocols like HTTP, interoperable data formats like XML, and ways to describe them through XSDs, XML schemas. The broadest, most uniform cross-platform interoperability mechanism is web services. Where it gets interesting is when we start adding capabilities above that, beyond just exchanging data and running our protocols: we want security, reliability, transactionality, a whole bunch of operational characteristics. That's where cross-platform interoperability starts getting interesting; we have different implementations in different stacks, and we have emerging standards in that regard. In terms of the technology we have to support it, web services does interoperate. The .NET stack has had web services capabilities right out of the box since 1.0, and it's quite easy for people to create web services with the .NET SDK and with Visual Studio.
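As a rough sketch of what the Java side of that story can look like, here is a minimal JAX-WS service whose generated WSDL a .NET client could consume; the class name, address, and return value are placeholder assumptions.

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// A minimal JAX-WS service; the WSDL it publishes at ?wsdl is what a .NET
// client would point its proxy generator at.
@WebService
public class PriceService {

    public double getPrice(String sku) {
        return 9.99; // placeholder logic for the sketch
    }

    public static void main(String[] args) {
        // Publishes the service and its WSDL at this assumed local address.
        Endpoint.publish("http://localhost:8080/price", new PriceService());
    }
}
```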
We can take a web service that somebody else produces in Java, consume the WSDL, the Web Services Description Language, generate a .NET proxy, and, for the most part, within the confines of the standardized data types within the schemas, have a pretty good interoperability story between those platforms. When we start looking at some of the more complicated operational characteristics like security, transactionality, and interaction through addressing, and being able to define a policy so that we know what kind of security mechanisms we have to use to interoperate with a particular service, that's when we started adding capabilities on top of the base stack. We have something called Web Services Enhancements, a free additional set of frameworks that sits on top of .NET; it is also a plugin that fits into Visual Studio, so you can do all the web services enhancements and the WS-* work, in terms of configuring the policy, within Visual Studio; there's a config file and things that support that. There's a coding-based approach, but there's also configuration. Web Services Enhancements, what we call WSE, has been around for 2-3 years and is at version 3.0 now. The next version of those enhanced web services capabilities is actually going to be supported in something called Windows Communication Foundation, which is part of the .NET 3.0 set of frameworks to be released later this year; that's the next generation of capability we have to support all those WS-* protocols, and it supports even more than the WSE package does. Those are the technologies we have available and the tooling available for that.
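The same proxy-from-WSDL pattern works in the other direction. A hedged Java sketch of consuming someone else's WSDL: the URL, namespace, and the OrderService interface are assumptions, with OrderService standing in for an interface that wsimport would generate from the WSDL.

```java
import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class ConsumeOrderService {
    public static void main(String[] args) throws Exception {
        // WSDL location and service QName are assumed examples.
        URL wsdl = new URL("http://partner.example.com/OrderService?wsdl");
        QName serviceName = new QName("http://partner.example.com/orders", "OrderService");

        Service service = Service.create(wsdl, serviceName);

        // OrderService is the wsimport-generated interface; a dynamic proxy
        // is bound to it at runtime.
        OrderService proxy = service.getPort(OrderService.class);
        proxy.submitOrder("42"); // the call crosses platforms over SOAP/HTTP
    }
}
```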
First of all, security is a big area that's important to a lot of people. Today, without using anything above the base platform, you can do transport-based security: SSL over HTTP. Where that breaks down is that it's only a point-to-point solution, so we need something that can survive multiple hops through orchestrations of web service calls. That really means we need message-based security, so we have WS-Security as one of the core standards there; it supports being able to digitally sign and encrypt the payloads of those messages. There are different token types we can use, with asymmetric-algorithm keys, to support encryption. So WS-Security is a key one that a lot of platforms are supporting today.
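To make the message-level idea concrete, here is a hedged Java sketch using the SAAJ API that attaches a WS-Security header carrying a UsernameToken to a SOAP message. The credentials are placeholders, and a real deployment would add digital signing and encryption on top of this.

```java
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPHeaderElement;
import javax.xml.soap.SOAPMessage;

public class UsernameTokenSketch {
    // Standard OASIS WS-Security (wsse) namespace.
    private static final String WSSE =
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

    public static void main(String[] args) throws Exception {
        SOAPMessage msg = MessageFactory.newInstance().createMessage();
        SOAPHeader header = msg.getSOAPHeader();

        // The wsse:Security header travels with the message across every hop,
        // unlike transport-level SSL which only protects one hop.
        SOAPHeaderElement security =
            header.addHeaderElement(new QName(WSSE, "Security", "wsse"));
        SOAPElement token = security.addChildElement("UsernameToken", "wsse");
        token.addChildElement("Username", "wsse").addTextNode("alice");   // placeholder
        token.addChildElement("Password", "wsse").addTextNode("secret");  // placeholder

        msg.writeTo(System.out);
    }
}
```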
In terms of other protocols that are important, there are things like reliable messaging, which is a way of doing asynchronous message patterns and getting some reliability and guarantees around sending a web service call and getting a response, with 'once and only once' delivery. There's addressing, which has to do with having indirection and routing within the web services, so you can pass endpoints around and decide how to call back to yourself through multiple hops. WS-Security works pretty reliably, and if you use standard token types like X.509 certificates or username tokens, a lot of the platforms support fairly robust digital signing and encryption. One of the things emerging through the standards, too, is related security protocols like WS-Trust, which is about being able to stand up a third-party trusted security token service: you can have a number of parties interacting together, with a third party that generates standardized, interoperable security tokens, so it's not just a point-to-point digital signing and encryption scenario any more; there are a number of collaborating parties sharing tokens and doing it in an efficient way for secure conversations.
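As a small illustration of the addressing piece, JAX-WS can switch WS-Addressing on per proxy; this fragment reuses the assumed `service` and `OrderService` artifacts from the WSDL-consumption sketch above.

```java
import javax.xml.ws.soap.AddressingFeature;

// With the feature enabled, wsa:To / wsa:MessageID / wsa:ReplyTo headers are
// added to each outgoing message, which is what supports indirection and
// multi-hop callbacks.
OrderService port = service.getPort(OrderService.class, new AddressingFeature(true));
```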
REST is interesting because what it hinges on is basically a simple web protocol: requesting a payload and delivering a payload through GET and PUT HTTP messages. It's well known and it works well. It concentrates on the XML envelope, or whatever it is you are actually getting or putting back. For simple scenarios within a trust context, where you don't care about security, indirection, or some of those operational semantics on top of what you are doing, REST is probably a decent approach. And again, from a programming model point of view, it's actually simpler than relying on WSDLs and SOAP and things like that; really it just hinges on XML schemas. Where it breaks down is where you actually want some of those additional operational characteristics, and you can't do that with REST: because you are just using the HTTP protocol, you don't have a place to insert headers that specify what the security tokens are and things like that. That's when you have to get into some of the more complicated WS-* support and the technologies that we have now.
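A RESTful call really is just the plain web machinery. A minimal Java sketch (the URL is an assumed example) is an HTTP GET asking for an XML representation, with no WSDL or SOAP envelope involved.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestGetSketch {
    public static void main(String[] args) throws Exception {
        // Assumed resource URL; GET retrieves the representation, PUT would replace it.
        URL url = new URL("http://example.com/customers/42");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/xml");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // the XML payload, as-is
            }
        }
    }
}
```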
There are some classic ones that have to do with any new technology. Sometimes when people start embracing web services they want to have web services everywhere, and they don't realize that there are a few common idioms that are really useful to consider with web services: you need coarse-grained, document-oriented calls, and chatty interfaces are not going to work so well with web services. The other thing is that there are a number of protocols we have talked about, WS-Security and so on, and again there is a temptation to want to use those things, because if you're doing web services it must mean you're doing all this other stuff. I would caution people to actually examine what they really need to do; there's probably a non-trivial number of cases where transport-based security would be just fine, or, to your earlier question, a RESTful web service might actually be sufficient.
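To illustrate the coarse-grained point, compare a chatty interface with a document-oriented one; these interfaces and types are purely hypothetical.

```java
// Chatty interface: several network round trips just to build one order.
interface ChattyOrderService {
    void startOrder(String orderId);
    void addLine(String orderId, String sku, int quantity);
    void submit(String orderId);
}

// Coarse-grained, document-oriented interface: one call carries the whole order.
interface DocumentOrderService {
    OrderConfirmation submitOrder(OrderDocument order);
}

// Placeholder document types, just to keep the sketch self-contained.
class OrderDocument { /* customer, order lines, totals ... */ }
class OrderConfirmation { /* confirmation number, status ... */ }
```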
The key is understanding the platforms you need to interoperate with and the semantics you actually care about. The other thing that's interesting is that there can be an aversion to using web services, and a move to other techniques, because of the perception that the performance is not going to be good: I'm exchanging this large XML envelope over the wire, so that can't be good. But I would advise anybody to do some testing right off the bat to see what the performance actually is, because a lot of the time the content going over the wire is not the biggest performance concern in the application anyway; there's a lot of activity going on to generate the data, use it, transform it, and add security to it, rather than just sending the stuff over the wire. Before passing on the web services approach, there are lots of reasons to do some testing and identify what the performance is really going to be.
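That "test it first" advice can be as simple as a crude timing loop around the call; this fragment reuses the assumed `proxy` from the earlier WSDL-consumption sketch.

```java
// Rough first-pass measurement: time a batch of end-to-end calls before
// assuming the XML on the wire is the bottleneck.
long start = System.nanoTime();
for (int i = 0; i < 100; i++) {
    proxy.submitOrder("42");
}
long elapsedMs = (System.nanoTime() - start) / 1_000_000;
System.out.println("100 calls took " + elapsedMs + " ms");
```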
7. What are some other interoperability approaches? Tell us about bridging.
There are some other techniques; as I mentioned earlier, we had things like CORBA-based techniques before. Clearly the platforms have some specific techniques of their own: Java uses things like RMI as a distribution technique, .NET uses .NET Remoting and even COM. In terms of cross-platform interoperability there is a class of technologies which I would call bridging technologies, which bridge those kinds of technologies together. There are some commercial and open source providers of these bridging toolkits, which basically do things like bridge IIOP from the Java side onto .NET Remoting on our side. What it does is actually run over an efficient protocol like TCP with binary serialization, transforming the serialization format from one side to the other. You can run this in a way that, to the code using it, looks like RMI-based distribution or .NET Remoting, but it's going cross-platform now. The good thing about that is the performance is usually as good as or better than the native techniques, so if performance is a problem, in some cases this could be a solution, although, again, note my cautions about assuming performance is going to be bad. The other thing is that it's a very natural programming model.
What happens here is that the other platform's classes that you interact with get proxied directly into your space. Unlike a web service call, where you create a proxy of a web service, which is really an interface on top of some other code, these bridging technologies proxy the actual code. The coding you do on the Java side will be exactly the same: the same types, the same methods, the same data types, everything very natural. You can open a C# file, look at the code, and it looks exactly like the code you were using within the Java application. That's great for Java developers; it's easy to approach, and you don't have to jump into the web services ocean, because there's a lot of stuff that comes with web services. The downside is that it's tightly coupled: any time you make a change to those Java classes you have to regenerate that proxy, whereas the web services technique provides loose coupling and some encapsulation from the actual implementation, which is a good thing. The other thing is that this actually requires an additional runtime in some cases, and if you go with a commercial provider, that's extra stuff on top of the infrastructure you already have. But it's worth considering as an alternative, for sure.
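For a sense of what gets proxied, here is the kind of Java RMI remote interface (a hypothetical one) that a bridging toolkit would surface on the .NET side with the same methods and data types.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// The C# caller sees this same contract proxied into its own space; changing
// it means regenerating the proxy on the other side, which is the tight
// coupling mentioned above.
public interface InventoryService extends Remote {
    int getStockLevel(String sku) throws RemoteException;
    void reserve(String sku, int quantity) throws RemoteException;
}
```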
The best case for those is something like intra-solution interoperability: the scenario where you've got an existing J2EE application server and you want to build a Windows Presentation Foundation rich client, all deployed together as one solution. This is not necessarily going over the web; it can be within an enterprise. This is the case where you've got a tightly coupled scenario already: when you go to make business logic changes on the Java server side, you are probably going to make changes on the client side too. It's part of one solution; it's not solutions talking to each other. That's the case where you're not giving up a whole lot by using a bridging approach, because you already have tightly coupled semantics. Within solutions, these are good approaches.
There are some other solutions, which I would characterize as resource-based interoperability: EAI (Enterprise Application Integration) hubs like BizTalk, Tibco, Vitria, and webMethods, tools like that, and even just message queues like MQSeries, MSMQ, Sonic, and other JMS-based providers. There's a good case to be made if a solution already includes those, because a lot of Java solutions may end up going with a bus-oriented architecture; maybe they already had that as a technique for integrating the parts of their own solution. Many of those providers, those implementations, offer .NET adaptors that you can integrate into the message bus. MQSeries does this, Sonic does this: you can basically take a .NET component, use it within your .NET application, and receive and post messages to the queue, the exact same implementation of the queue. You've got an indirect interoperability technique going on there. If you want a JMS-specific type of API, or if the provider you have doesn't support a .NET client natively, you can actually use the bridging technologies to build a bridge between the JMS client from that implementation and .NET as well.
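On the messaging side, the Java half of that indirect technique is just standard JMS. A minimal sketch (the JNDI names are assumptions that depend on the provider) posts a message that the provider's .NET adaptor, or a bridged .NET client, could pick up from the same queue.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class PostOrderMessage {
    public static void main(String[] args) throws Exception {
        // JNDI names are placeholder assumptions; they depend on the JMS provider.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/Orders");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            // The .NET side reads the same queue through the provider's .NET adaptor.
            producer.send(session.createTextMessage("<order id=\"42\"/>"));
        } finally {
            connection.close();
        }
    }
}
```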
There are some great opportunities there. If you look at the more hub-oriented products like BizTalk, they operate on the basis of posting and receiving messages in an asynchronous, durable way, very much like a message queue, but there's an orchestration component inside, plus auditing, logging, and transformation capabilities. That's a similar situation: either there will be a native adaptor on that hub that allows you, from a .NET point of view, to post a message into it, or there may be a web services interface; a lot of them support that now, so you post your message either in a web services way or a RESTful way. And if there isn't anything like that, again, you can use those bridging techniques. These are all very similar techniques, but they hinge on the idea of taking the adaptors or interface methods you've got and either using the .NET ones that are supplied or building one yourself, using either the web services or the bridging technique.
There are two reasons for that: one is that web services is the 'de facto' way of doing interoperability across many different platforms today. We can have solutions which operate well between Java and .NET, but if you talk about mainframes and other kinds of systems, web services is much more the complete, platform-neutral approach, heavily invested in by many companies around standards, working on standard protocols, message formats, and so on. Some of the other techniques are there, but this is much more ubiquitous as a standard technique.
What Microsoft is doing is hosting these things called plug-fests. On a once-a-quarter basis we invite each of the major companies (Oracle, IBM, Sun, etc.) to bring the folks working on the implementation of their WS-* standards into the lab (we have a lab across the street where we invite them in). Everybody gets set up to run a basic set of tests against all those protocols, and there's a lot of work that goes on even within that context to understand what's working today, what's not, what I need to do, what I can tweak so that now it's working. So basically we, in conjunction with a lot of those other guys, are doing a lot of work to figure out where we interoperate with them, where they interoperate with us, and what we need to do differently. We do this on a quarterly basis and we publish the results. We're doing this to try and make sure that we know where we interoperate well with those platforms on all those protocols, where we don't, and what we need to do to make sure it's going to work.
That's what I would characterize as one of the kinds of bridging; if you look at some of the commercial vendors of the bridging technologies, they support that type of mechanism. It's an in-proc channel type, shared memory or something like that. At the end of the day I would characterize bridging, as opposed to web services or another technique, as a particular channel type and a particular serialization format, and in practice it's a really efficient one, granted, because what it does is co-host those VMs together. But in terms of what you end up doing to develop and program, they look very much the same, because you're still proxying the other platform's classes into your space and running them in very efficient ways, so the interactions are pretty performant.
12. But it's web services; isn't it supposed to 'just work'?
In terms of basic data exchange it works pretty well. I think where there's still a lot of room for differentiation, and some work to do, is in some of the higher-order operational semantics, and everybody has their different flavour of standards that they want to support: IBM was pushing WS-Reliability as opposed to WS-ReliableMessaging, and some people are really focusing on WS-Discovery as opposed to something else. When you look at basic things like WS-Security, transactionality, messaging, and even indirect routing, I think there's a lot of synergy in all those protocols; once it gets beyond there, everybody has an idea of the things they want to push and what they think is important. But that's great; that's the way the business is. We all believe that what we are doing is what people need: enterprise-level interoperability and orchestration and everything else. We are working behind the scenes with a lot of these folks too, not just in the plug-fests; we have regular meetings with the folks at Oracle and Sun around the standards groups. We're doing a lot of work to push our message forward, as everybody else is, and I think over time everybody is agreeing on more and more sets of protocols. But there's still lots of work there; it's not all a done deal at this point. And there are lots of configuration details: which particular hashing algorithm do I use for my asymmetric-key encryption? How do I support identity? Is it SAML? Is it SAML 1.1? Is it the SAML protocol? There's lots of variation still within the details of those protocols.
13. Any final words about Interop?
I think we've come a long way. Three or four years ago, when we started hatching this web services SOA story, we had basic interop, but we had a lot of challenges then, even on data interop, and I think we've solved a lot of those. More and more people are adopting web services now as a technique, and we have great tooling around that. WCF is a great new version of that, in terms of the platform and what we are doing within Visual Studio to support it. There's a whole set of interoperable bindings that come right out of the box with WCF, so it makes interoperability much more approachable and easy. What it means, too, is that there are a lot of great opportunities that are going to be really doable around combining technologies. You asked a question about 'the wars' earlier, and one of the things this is going to help with is that we have fewer of those wars, because it's not an all-or-nothing decision. I can decide to use Office because Office is on every desktop in my organization, but I have 5 years' worth of intellectual property embedded in that J2EE application I bought. I don't want to go up to somebody and say, "Oh, by the way, you have to change all that and rip it out." I want the ability to combine these things. All this means there's a much better ability to do that, and it's going to work a lot better; more opportunities, basically, to combine things.