31:19 video length
Bio: Andrew has overall responsibility for the OMG's technology adoption process, and also chairs the Architecture Board, the group of distinguished technical contributors from OMG member organizations which oversees the technical consistency of the OMG's specifications. Andrew is always happy to answer questions about OMG's specifications or process.
The Code Generation conference is the leading event on the practical applications of Model-Driven Software Development (MDSD). It offers a high quality learning experience by combining discussions and debates with a diverse range of practical tutorials and workshops. It will also give you the opportunity to share experiences and knowledge with fellow developers.
Hi. My name is Andrew Watson, as you said. I'm Technical Director at OMG, the Object Management Group. We had the initials long before teenagers started saying to each other, "OMG! That's so cool"; and in fact we had the OMG domain name, omg.com, for many, many years; we sold it for a lot of money to a company that wanted to use it for a gossip site, a film gossip site. But yes, OMG, the Object Management Group, has been doing standards for more than 20 years.
First of all, we're a not-for-profit; so yes, we took a few thousand from the people we sold the domain to and used it to offset the membership fees.
We've been going, as I've said, for more than 20 years. A lot of people will remember us for Corba, which was an object-based middleware; and Corba is still going; it's still embedded in telephone exchanges and network management systems all over the world. People say that Corba is over, that it's had its day; actually that's not true; like a lot of mature technologies, it's the kind of thing that's running your world, running your telephones, but you may not know it.
More recently, we've done other work on middleware: a middleware specification called DDS, the Data Distribution Service, that's being used in things like high-frequency trading applications and stock exchanges; that's a publish/subscribe middleware system. But from the point of view of the Code Generation conference here this week, I would have thought the things that are mostly of interest are OMG's modeling specifications. We're the home of UML, the Unified Modeling Language, and we're the home of other modeling languages to do with business process modeling, like BPMN and SBVR, the Semantics of Business Vocabulary and Rules.
Basically if there’s modeling standardization, visual modeling standardization going on in the industry today, it’s a fair bet we’re involved.
It's taken hold in lots of places, but in particular in embedded. We produced a specification some years ago, a stripped-down version of Corba for real-time and embedded systems. 20 years ago, a big computer system that filled a room had to be networked to other IT systems to move information around, to print your gas bill or manage the telephone network. Well, today the handheld phone that you've got in your pocket has, of course, got more CPU power than that big IT system from years ago; and today there are probably two or three CPUs inside your phone.
So we're starting to see embedded systems, quite small embedded systems, that have multiple CPUs inside them; they need to be linked together, networked together. So literally, for some applications like software-defined radios, we're seeing handheld applications that have Corba-based networks inside them, never mind using something like Corba to link to other applications. You're seeing it inside, for instance, automotive applications; you're seeing 60 - 100 CPUs in a family car talking to each other over an embedded network inside the car; and Corba is finding its way into all sorts of embedded applications like this, where distribution is part of the embedded application.
So Corba - the Common Object Request Broker Architecture - is about linking encapsulated entities, objects, together. You know, computer science, like everything that has science in the name, is not really a science; but of the unifying principles that it does have, perhaps the most important one is encapsulation. If you're building big, complex systems - and by big I mean complicated, not necessarily physically big - then in order to understand how they work, design them and modularize the design problem, you use the encapsulation principle: you embed the complexity inside encapsulated entities that have as simple an interface as possible. It just so happens that 15 or 20 years ago these got a fashionable name; we started calling them objects. It's a principle that had been around for years - David Parnas' information hiding principle was at least another 10 or 15 years older than objects - but 20 years ago we started calling these things objects, and Corba became the standard middleware of choice, a way of linking them together.
So yes, Corba middleware is about linking together encapsulated things and it happens that we call those encapsulated things objects.
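The encapsulation principle described here can be sketched in a few lines of Python. This is an illustrative toy, not anything from the Corba specification; the class, its methods and its calibration logic are all invented for the example:

```python
# Toy illustration of encapsulation: complexity hidden behind a simple interface.
class TemperatureSensor:
    """Encapsulates calibration and smoothing details behind one method."""

    def __init__(self):
        self._raw_readings = []          # internal state, hidden from callers
        self._calibration_offset = -0.5  # invented calibration detail

    def record(self, raw_value):
        self._raw_readings.append(raw_value)

    def current_temperature(self):
        # The simple external interface: callers never see the smoothing
        # or calibration logic, only the final answer.
        window = self._raw_readings[-3:]
        return sum(window) / len(window) + self._calibration_offset


sensor = TemperatureSensor()
for reading in (20.0, 21.0, 22.0):
    sensor.record(reading)
print(sensor.current_temperature())  # 20.5
```

Callers interact only with `record` and `current_temperature`; the smoothing window and calibration offset could change completely without touching any client code, which is exactly the point of hiding complexity behind a simple interface.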
Absolutely, and this is one of the reasons why Corba and other middleware is necessary. You could do the same job by tying your solution to one particular programming language or one particular embedded operating system, but it's in the nature of these problems that we're tying together systems written in different programming languages using different operating systems, sometimes because we have different parts of the system which are of different ages; so the older parts were written using whatever programming language was fashionable 10 years ago - I guess that would be C++ - and the modern parts are written using whatever language is fashionable today - it might be Java. And I speak of fashion because, you know, to some extent it is about fashion rather than actual capability.
So you need a common infrastructure, a common software bus, to link together these bits that use different kinds of technology, and that's exactly what Corba was designed to do from the beginning: to be independent of programming language, independent of operating system, independent of particular vendors and their particular technology choices, and yet allow vendors and users to integrate systems using all these different technologies with each other.
DDS stands for Data Distribution Service, and it uses a different paradigm from Corba. Corba is based on this idea of objects being linked together and communicating with each other; DDS is a publish/subscribe system, where each entity on the network that has something to tell the rest of the world, as it were, publishes that information as a packet of data, and anything that's interested in a particular kind of information can register an interest and say, "I'm interested in temperatures that are coming from the temperature sensor on the bonnet of the car, or radar tracks that are being tracked by the radar in an air traffic control system," or whatever it might be.
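The publish/subscribe paradigm itself can be sketched very compactly. The following is a toy illustration of the idea, not the standardized DDS API; the `Bus` class and the topic names are invented for the example:

```python
from collections import defaultdict

# Minimal topic-based publish/subscribe sketch (not the real DDS API):
# publishers write data samples to named topics; subscribers register an
# interest in a topic and receive every sample published to it.
class Bus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, sample):
        # Deliver the sample to everyone who registered an interest.
        for callback in self._subscribers[topic]:
            callback(sample)


bus = Bus()
received = []
bus.subscribe("bonnet/temperature", received.append)
bus.publish("bonnet/temperature", 87.5)  # delivered to the subscriber
bus.publish("radar/track", (12.0, 4.2))  # no subscriber, silently dropped
print(received)  # [87.5]
```

The key contrast with the object paradigm is that publisher and subscriber never address each other directly; they are decoupled by the topic, so either side can be added or removed without the other knowing.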
So DDS came originally from the military world, particularly the naval military, where the different weapon systems and sensors on a ship need to be integrated together and need to share information; but it is now increasingly being used in places like financial services systems, where people are doing high-frequency share trading and so on, and in a host of other applications where the timeliness, the real-time nature, of the information is crucial. DDS has built into it, at a very low, very fundamental level, a notion of quality of service, so that when you're providing information - or, more particularly, when you're consuming information - you can specify not only what information you're interested in, but also things like timeliness: a 5-second-old data point in your radar tracking system may not be of any interest, because once it's 5 seconds out of date, it's more useful to get a new one than to look at an old one.
DDS has this notion of quality of service and timeliness of information built deeply into its model of the world, and it is really taking off in a number of real-time applications because of that.
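The timeliness idea described above can be sketched as follows. This is only an illustration of the concept of discarding stale data; real DDS quality-of-service policies are far richer, and the function and sample names here are invented:

```python
import time

# Sketch of a timeliness-style quality-of-service rule: the consumer declares
# the maximum age of data it will accept, and older samples are discarded
# rather than delivered. Illustrative only -- not the real DDS QoS API.
class TimestampedSample:
    def __init__(self, value, timestamp):
        self.value = value
        self.timestamp = timestamp

def fresh_samples(samples, max_age_seconds, now=None):
    """Return only the samples no older than max_age_seconds."""
    now = time.time() if now is None else now
    return [s for s in samples if now - s.timestamp <= max_age_seconds]


now = 1000.0  # fixed "current time" so the example is deterministic
samples = [
    TimestampedSample("old radar track", timestamp=now - 6.0),    # 6 s old
    TimestampedSample("fresh radar track", timestamp=now - 1.0),  # 1 s old
]
usable = fresh_samples(samples, max_age_seconds=5.0, now=now)
print([s.value for s in usable])  # ['fresh radar track']
```

The 6-second-old track is dropped, matching the radar example in the interview: past the deadline it is more useful to wait for a new sample than to act on a stale one.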
For all middleware, really, you end up with two sets of interfaces to the middleware; and as you said, one of them is the transport protocol, the way the information is moved around on the wire. When you're linking together subsystems built by different manufacturers, this is usually the crucial interface, because if you buy one part of your system from vendor A and a different part from vendor B, and plug a network cable between them, it's the protocol over that cable, over the wire, that each needs to understand and implement correctly in order for the information to flow.
So to give a specific example of that: the UK Ministry of Defence here has recently published a specification for what it calls a Generic Vehicle Architecture, GVA. GVA is a specification for how information will be moved around between different subsystems inside a jeep, a truck or a tank, and it uses exactly what we've just been talking about - a wire protocol, the DDS wire protocol - and says, "If you're going to publish information within this vehicle, this is the protocol that you must use."
For portability of applications between different implementations of the middleware, the other kind of interface becomes important, and that is the portability interface. From the programmer's point of view, that's the set of APIs you must use in order to get information out of the application that you as a programmer are writing and into the middleware implementation - DDS or Corba, the same principles apply - so that the middleware implementation can then send the information over the network to somewhere else. So these two kinds of interfaces, the portability interface and the interoperability interface, are the two important ones in specifying a middleware, and in both Corba and DDS, OMG has specified both of them.
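The distinction between the two interfaces can be illustrated with a toy example. The byte layout below is invented for the sketch (it stands in for a standardized wire protocol such as the one GVA mandates); the `Middleware` class stands in for the programmer-facing API:

```python
import struct

# Toy illustration of the two middleware interfaces.
# Interoperability interface: the agreed bytes-on-the-wire layout. Two
# independent implementations that both honor this layout can exchange data
# over a network cable, regardless of how each is built internally.
WIRE_FORMAT = "!Id"  # network byte order: 4-byte topic id, 8-byte float value

def pack_sample(topic_id, value):
    return struct.pack(WIRE_FORMAT, topic_id, value)

def unpack_sample(payload):
    return struct.unpack(WIRE_FORMAT, payload)

class Middleware:
    """Portability interface: the API an application programmer codes to.
    Swapping in another vendor's implementation with the same publish()
    signature leaves the application source unchanged."""

    def publish(self, topic_id, value):
        # A real implementation would send these bytes over the network.
        return pack_sample(topic_id, value)


# "Vendor A" publishes; "vendor B" decodes the very same bytes.
payload = Middleware().publish(7, 21.5)
print(unpack_sample(payload))  # (7, 21.5)
```

The wire format matters for interoperability between vendors; the `publish` signature matters for portability of application code between vendors. Standardizing both, as the interview notes OMG did for Corba and DDS, covers both concerns.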
And just last week I was in Washington - OMG had one of its quarterly meetings there - and we were very happy to see six different DDS vendors demonstrating that their implementations were compatible at the wire interoperability interface level, because they could literally plug them together. We had a demonstration with six separate screens, and I tell you, if you held your hand up to the screen you could see the bones through it, there was so much light bouncing around that room, with six separate data projectors throwing up images from six separate implementations of DDS; but they were all sharing the same information - you could see exactly the same picture on each of the six screens as the data was shared between these implementations. So in our world, the world of standards, what OMG does in publishing a specification is very important, but it doesn't have any value unless the vendors really do carry through and implement the interoperability that we're talking about.
I’m happy to say that that is happening very successfully.
Absolutely, UML, Unified Modeling Language was first adopted by OMG as a standard in about 1997. We’re now at UML version 2.4 with version 2.5 coming along quite soon. Lots of people are putting a lot of effort into maintaining the UML specification and they’re doing it because it’s in the interest of OMG's member companies to do that; they wouldn't be devoting all this time and energy to maintaining and improving UML if it wasn’t commercially relevant to what they’re doing. UML is easily the dominant modeling language for software developers in the world today; or certainly the dominant standard modeling language and certainly the most widely used visual modeling language on the planet.
That's a very good question, and it's one that has engaged lots of people in the software industry over the last few years. If you have to model a particular application or a particular application domain, there is a trade-off between using standards and customizing the modeling language to your particular problem, and you run into the usual problem of comparing apples and oranges.
For a particular application domain you could, of course, build a modeling language that was completely tailored to that domain and faithfully rendered every last detail of it; and from the point of view of code generation, which is the subject of this conference here this week, that's a major advantage, because the more information you can get into the model, and the more precisely the model is tailored to your specific domain, the closer you'll get to generating 100% of the application from the model.
On the other hand, if you design your own modeling language for every application problem, it’s likely you’re going to have to do a lot of your own tool making; you’re going to have to create the tools that understand this model and convert it into code or transform the models into whatever other artifacts you need to use.
So at one end of the spectrum, we have the 100% domain-specific custom modeling language designed for your application, and with that, a lot of work to build the tooling.
At the other end of the spectrum, there are completely standardized modeling languages, and UML is probably the preeminent example of this. We publish the UML specification; it incorporates something like 13 sublanguages for modeling different aspects of the static and dynamic behavior of software; but because it is a standard, it has to be designed to cope with all the diversity of software development.
The good news is that that means you've got lots of tooling available - lots of very capable modeling tools you can buy from vendors who are all fighting it out in the marketplace, competing with each other, making sure their tools are good and competitive. But it probably means that if you use pure UML with no customization, you are that much further away from what your particular application domain is, and therefore doing things like code generation - again, the subject of this week's conference - is a little harder.
So in between these two extremes, we have a sort of a cross-over between fully standard language and domain specific languages and in practice, it’s in this middle area that everybody lives and works.
Now, you can get to a customized language for your particular domain by taking UML and customizing it. We have a standard mechanism for this in UML called profiles. OMG has defined - I can't even count them off the top of my head - probably at least a dozen standard profiles for different domains; so there's a standard profile for telecoms; there's a standard profile for service-oriented architecture and Cloud applications; there's a standard profile even for doing non-software jobs like systems engineering. What they do is take standard UML and customize it in limited ways; the advantage is that you can then use standard UML tools, but, as I say, you've moved closer to your particular application domain.
The other way of getting a domain-specific language is to start with a completely blank sheet of paper, design your own domain-specific language, and then go out and use tooling that helps you implement the necessary transforms from your model to code or whatever it is. But here's the thing: when people do that, they very often end up designing a language that looks a little bit like UML; so even users who don't use any part of the standard UML tooling often informally use UML syntax, because everybody understands what the individual diagrams in UML do and how they work. I smile wryly at the one or two people who take a very combative line and let you know, "We're not using UML, we're designing our own language, because it's so much better" - and then they design a language that looks a little bit like UML. I'm very happy that we're providing value for them, even though they don't recognize that they've borrowed what's in UML.
Everything that OMG does, we hand out free - free both in the sense of free speech and in the sense of free beer. Free in the sense of free beer: you can download our specifications for no money. Free in the sense of free speech: you can use them, customize them and borrow ideas from them without having to pay anybody any licensing fees. And I'm very happy that even the people who are quite adamant that they're not using UML - they're using a domain-specific language that they designed for their problem - often borrow bits of UML. I'm very happy that we could help them.
They customize in small ways, and that can mean restricting a UML feature you don't need, adding something that is domain-specific, or changing the semantics of UML in particular ways. Basically, it's amazing how much human beings are visual animals; the great thing about UML is that it provides pictures and a standard syntax for what those pictures mean. Some of the UML profiles actually customize the underlying meaning of those pictures in quite radical ways; but provided the pictorial syntax still has some connection to UML, you can use a UML tool to draw your diagrams and really customize the underlying meaning of the pictures quite a lot for your particular application domain.
Yes, and they do; there's a standard profile definition mechanism, and I've got to emphasize that UML profiles are being used not just in the software industry. To take a couple of examples: systems engineering is the science of designing and specifying large, complex systems that include hardware, software, people, even things like fluid flow; systems engineers, for instance, are the kind of people who design oil refineries - huge, complex pieces of machinery for moving and refining large quantities of liquids.
The standard visual modeling language used in the systems engineering field today is a thing called SysML, which was devised by OMG in collaboration with INCOSE, the systems engineering professional body, and it is a UML profile; it uses standard UML diagram types and adds a couple of new diagram types. The people who use SysML are not software people at all, but they're using UML tools, profiled using the SysML profile, to make them relevant to their area of interest.
Another example is a thing called UPDM, which is a UML profile for MODAF and DoDAF. MODAF and DoDAF are enterprise architecture frameworks used in the military world: DoDAF is the U.S. Department of Defense Architecture Framework; MODAF is the UK Ministry of Defence Architecture Framework. They're used for managing the procurement of large systems by the military. UPDM, a visual syntax built by profiling UML, is used by people who don't have anything to do with software - they're planning the procurement of aircraft and ships over a 30-year time frame - and profiling UML is a very powerful way to adapt the UML syntax for many different application domains.
The OMG divides its activities into two broad groups: platform activities and domain activities, and up until now we've talked about platform activities - these are the standards that apply, we believe, very broadly across many different application areas. UML tooling is used by people building embedded systems, big network systems, financial systems - all kinds of software. Similarly, middleware is used very broadly across many different application areas.
The other side of OMG is the domain technology committee, which works on domain-specific interoperability specifications that are customized to solve the problems of one particular application domain. Healthcare, for instance, is one. We work closely with an organization called HL7, which has lots of expertise in the healthcare domain. In our collaboration with HL7, we provide some healthcare expertise but also lots of modeling expertise; the net result is that between us we build interoperability specifications targeted at healthcare problems, like the exchange of medical records.
Another example: we have a finance task force, again in our domain technology committee, that's looking at interoperability specifications for the finance industry. Right now - as everybody knows, if they've looked at their savings account or their pension recently, or worried about their mortgage being in negative equity - the financial calamity of 2007-2008 was partly caused, many people now believe, by a lack of regulation in the finance industry; so the pendulum is swinging back the other way, and the regulators in both Europe and the U.S. are looking at increasing financial regulation.
OMG is working, through our finance domain task force, with both the regulators and the banks on both sides of the Atlantic to help provide those regulations in a precise form, again using modeling technology, rather than regulations being handed out as a big stack of paper written in natural language - in English or German or whatever. We hope - and many people in the finance industry agree with us - that in future the regulators will be able to provide the regulations written in a precise form, a model, which will remove, or certainly reduce, the uncertainty about what the regulations actually mean.
And then when it comes to the reporting of their activities from the finance houses back to regulators, you will be able to send the data back essentially as instances of these models.
If the regulators are going to be able to do anything with the avalanche of data that will come back as a result of these enhanced reporting requirements, there have to be precise definitions of the data - not a 300-page natural-language document sent back that no one's going to have time to read. So for the regulators and the financial institutions to be able to handle this new regulatory regime, they have to be more precise about what the regulations mean and what the data they hand back means. Our finance domain task force is working on that problem, and we ran a workshop just a couple of weeks ago in New York where we had almost 200 people in the room - from big finance houses, big banks, regulators - discussing exactly these problems. One result is specifications from OMG like the Financial Industry Business Ontology, a set of precisely defined terms that can be used to define finance products. So yes, that touches on a couple of the areas in which our domain technology committee is working; we have a number of very interesting vertical interoperability activities, but what underlies all of what OMG does is precisely specifying the interoperability between separate components in large, complex systems.
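The idea of reporting data back "as instances of these models" can be sketched with a precisely defined record type. This is a toy illustration of the concept only; the record name, fields and validation rule are all invented, not taken from any actual regulatory model:

```python
from dataclasses import dataclass

# Toy sketch of machine-readable regulatory reporting: instead of a
# natural-language document, the regulator publishes a precise record
# definition and firms submit instances of it. All names here are invented.
@dataclass(frozen=True)
class ExposureReport:
    firm_id: str
    counterparty_id: str
    notional_eur: float

    def __post_init__(self):
        # The model itself can encode rules the regulation states, so an
        # invalid report is rejected at the point of creation.
        if self.notional_eur < 0:
            raise ValueError("notional must be non-negative")


report = ExposureReport(firm_id="BANK-1", counterparty_id="BANK-2",
                        notional_eur=2_500_000.0)
print(report.notional_eur)  # 2500000.0
```

Because every submitted report is an instance of the shared definition, the regulator can process the incoming data mechanically, which is exactly what a 300-page prose document does not allow.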
Yes, it's fascinating. Being the technical director, I have to try to keep track of all of these, and you know, there's an old joke in English that an expert is somebody who knows more and more about less and less, until he knows everything about nothing. I have the opposite problem: I don't work in any detail in any of these areas, but I have to keep track of what OMG is doing in a huge number of areas; so I find I know less and less about more and more, until I know nothing about everything.
I can give you the three-paragraph summary of what OMG is doing in any of these areas, but if you want to dive deeper into any of them, I would very quickly send you to OMG's experts in each of the topics I've talked about, who can give you a lot more information. But it's a fascinating organization to work with and to work for, because there are so many different, as you say, immediately relevant problems that we're coming up with standard solutions to help address.
And so let me close with the final question, the ultimately relevant question for our InfoQ audience: how will the OMG deal with the coming decline of object orientation, which is under threat from many different paradigms? What will you do? Will you rename yourself? Will you come up with other standards? Do you think object orientation is going away or not?
At a fundamental level, no, because - as I mentioned at the beginning, and to bring the conversation full circle - the fundamental approach that information technologists have for dealing with complexity, for building and specifying large systems, is divide and conquer. And let's face it, large software systems are the most complex things ever created by man: a large application with hundreds of thousands of lines of code is by any measure more complicated than the space shuttle, a jumbo jet or a big ocean liner. You break a big problem into smaller problems; you encapsulate each smaller problem, keeping the complexity inside; and you define a simpler external interface that the subsystem uses to communicate with other subsystems. Now, you can call that principle any number of things - you can call it information hiding, you can call it encapsulation, you could call it object-based design. The name object-based design may go away; people may start calling these things something else - components, modules, any number of different names - but they're all fundamentally names for the same thing, the same underlying concept.
OMG doesn't have a monopoly on working on that - it's been around since the dawn of the computer age and will continue to be around for the rest of the computer age - but it does mean that what OMG is doing will continue to be relevant, because it's about helping designers manage the complexity of building large, complex systems, whether in healthcare, finance, embedded systems or whatever it is.
The object word may go away but what we’re doing will remain relevant for the foreseeable future.