Book Excerpt and Interview: Dynamic SOA and BPM: Best Practices for Business Process Management and SOA Agility
A new book by Marc Fiammante, "Dynamic SOA and BPM: Best Practices for Business Process Management and SOA Agility", describes how to build flexible SOA/BPM systems with an approach that is based on many years of practical experience obtained during dozens of enterprise SOA implementations.
In his book Marc leads his readers through several major SOA/BPM implementation steps, including:
- Streamlining the Enterprise Architecture for Dynamic BPM and SOA, answering the questions:
- “Where should my enterprise be flexible? Which of the enterprise objectives requires accrued variability?
- What is the business priority of the enterprise that drives change? In what sequence can I expect changes? What conditions will evolve first?
- What is the benefit/risk ratio of implementing that variability? Is there an internal sponsor?”
- Implementing Dynamic Enterprise Information, discussing the impact of enterprise information changes on SOA implementations.
- “Even though coping with information changes is business as usual in the IT world, we strive to limit the impact of an information change, particularly if that information is carried by service interfaces and handled in business processes across the enterprise.”
- Implementing Variable Services, discussing service implementation patterns
- “Implementing services variability is essential to limit the impact of changes in chains of consumers and providers of services.”
- Implementing Dynamic Business Processes, discussing how to build an agile enterprise through automation of end-to-end enterprise processes based on services.
- “Enterprises must respond quickly and efficiently to shifting market requirements, regulations, and customer needs. In a competitive market, they must look at new products”
The book also contains a lot of information on IBM's SOA/BPM tools, which can be used for implementing these steps.
IBM Press provided InfoQ readers with an excerpt from the book describing techniques for ensuring process variability.
InfoQ spoke with Marc Fiammante to understand in more detail the motivation and ideas behind this book:
InfoQ: Despite many publications stating otherwise, in your book you consider both BPM and SOA to be closely related and seem to imply that both should be tackled together. Would you consider implementing SOA without BPM and vice versa?
Even though one may start SOA and BPM approaches in isolation, let me address the premise for stating that SOA and BPM are better together.
First, I do not consider SOA to be Client/Server or Object-Oriented (OO) on Web Services, but a business contractual approach between two business parties. This approach lets each party retain ownership of how it implements its part of the contract while the contract ties them together.
When processes are designed without this ownership and there is freedom of implementation, all of the parties’ implementations get exposed globally. The consequence is that multiple parties or business owners end up controlling parts of a common model with different life cycles. Each change request from one party on process elements it would consider private requires a lot of negotiation with the other parties, whereas a contractual approach would limit the impact to the interface. I often use a train analogy: does the train engineer care about the menu in the restaurant wagon? Rather, he cares only that the wagons follow, and he leaves control of what happens in each wagon to the persons responsible for it.
There is an implicit or abstract process realized by the connection of the wagons, and a flexible process requires the capability of replacing one wagon with another without affecting the overall train.
The connections between the wagons are implemented as flexible business contracts that will be implemented as flexible services.
Returning to BPM and SOA, we have to differentiate the high-level end-to-end model from what happens privately behind each business contract that represents a service; the model must leave flexibility for the latter.
Another consideration is the cost of process delivery and test. This cost is proportional to the cyclomatic complexity of the process model. My experience shows that a change in one given process model will incur a test effort of approximately one-half of a person-day multiplied by the cyclomatic complexity, even if the change addresses only a small part of the process. To reduce the cost of change, the processes must be modularized. But there still is a need to connect the process modules or components together. This connection must not propagate the changes, and the natural approach is then to define flexible services as the link.
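The rule of thumb above can be sketched as a simple calculation; the class and method names are illustrative, and the half-person-day coefficient is the figure quoted in the text:

```java
// Sketch of the test-effort rule of thumb from the text:
// effort ≈ 0.5 person-day × cyclomatic complexity of the process model,
// regardless of how small the change is.
public class ProcessChangeCost {
    static final double PERSON_DAYS_PER_COMPLEXITY_UNIT = 0.5;

    // Estimated regression-test effort, in person-days, for a change
    // anywhere in a process model with the given cyclomatic complexity.
    public static double testEffort(int cyclomaticComplexity) {
        return PERSON_DAYS_PER_COMPLEXITY_UNIT * cyclomaticComplexity;
    }
}
```

This is also why modularization pays off: a monolithic model of complexity 20 costs about 10 person-days to retest per change, while the same logic split into four modules of complexity 5 costs about 2.5 person-days when only one module changes.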
InfoQ: Proponents of guerilla SOA consider a global SOA overhaul to be a recipe for disaster. Would you consider a “project-based” SOA implementation to be a reasonable first step or do you think that the true value of SOA can be achieved only on the enterprise level?
I view SOA as an enabler for flexibility and I state to my customers: “if you don’t need it, don’t use it”. One way to consider it is: one wouldn’t put expansion joints everywhere in a building, but would rather limit these flexible connections to the places where one expects the connected parties to vary. A reasonable first step can even be a very small first project consisting of a single service. I have two precise examples in mind: the first one is from a car manufacturer who evaluated the ROI of one “Bill of Material Service” to be several million Euros in the first year because of the savings on warranty costs if failures could be related to parts origin. The second one is a messenger company for which the service was about creating parcel labels with the appropriate variations for domestic or international shipping and any country variation.
To identify the places where SOA has value, we usually perform an assessment based on the OSIMM maturity model, and using the same model we can create a vision and a roadmap to establish where SOA would have value. The second phase starts small to establish business value and feasibility. The stabilization phase then looks at other opportunities for SOA based on identified business value.
InfoQ: When describing approaches for Dynamic enterprise information you are suggesting usage of “xsd:any” and/or name/value pairs for the parts of information that might vary. Such an approach assumes that a custom marshalling is going to be done for at least parts of the payload. Would it be easier to implement a custom marshalling for the whole document instead? Can you suggest any other XML design approaches?
Since we usually recommend the WS-I Basic Profile for interoperability and the standard J2EE JAX-WS (previously JAX-RPC), we follow the implications of such standards, including the marshalling and unmarshalling of JavaBeans. An article from Sun clearly states that xsd:any will be handled as a generic element rather than mapped to a typed JavaBean.
I am not, however, recommending xsd:any as the approach to take, but the fact is that industry standards such as OAGIS use it for all of their extensibility, and customers also use it. My preference goes to the Characteristic Value pattern that the TeleManagement Forum uses, and its XML equivalent.
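A minimal sketch of the characteristic/value style of extensibility mentioned here, in which variable attributes travel as name/value pairs instead of new schema elements; the class and method names are illustrative, not a TeleManagement Forum artifact:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of name/value (characteristic/value) extensibility:
// adding a new characteristic does not change the service interface,
// only the data carried through it.
public class ServiceCharacteristics {
    private final Map<String, String> characteristics = new HashMap<>();

    public void set(String name, String value) {
        characteristics.put(name, value);
    }

    public String get(String name) {
        return characteristics.get(name);
    }
}
```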
On the usage of “xsd:any” to allow flexibility, let me give you the example of one of my large customers. This customer uses 3 WSDLs for the same service:
1. a client WSDL;
2. an ESB- and registry-exposed WSDL with xsd:any for broader compatibility; and
3. a provider-side WSDL.
All 3 WSDLs are able to carry compatible XML payloads. This approach reduces the impact of a change on the provider side and allows previous versions of the services to remain compatible with evolutions of the provider side.
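A hypothetical sketch of what the extension point in the ESB-exposed schema (WSDL 2 above) might look like; the element names are invented for illustration:

```xml
<!-- Hypothetical ESB-side schema fragment: clients compiled against
     this schema keep working when the provider adds new elements,
     because additions land in the xsd:any extension point. -->
<xsd:element name="CustomerOrder">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="orderId" type="xsd:string"/>
      <!-- provider-side additions are carried here -->
      <xsd:any namespace="##other" processContents="lax"
               minOccurs="0" maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
```

With `processContents="lax"`, a validating parser checks the extension content only if it can find a declaration for it, which is what keeps old clients compatible with newer provider payloads.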
InfoQ: Following the W3C Web Services Activity Statement, you write that Web services provide a standard means of interoperating between different software applications, running on a variety of platforms and/or frameworks. But Web services do not specify the actual business data exchange, thus providing virtually no support for semantic data exchange. Do you believe that interoperability and loose coupling can be achieved without semantic data exchange?
I do believe that the true value of Web Services comes from an agreement on the business information that is carried by the services. The semantics can then be made explicit with standards such as SAWSDL, or they are implicit because there is an industry agreement on a common and flexible information model used for integration purposes: the “Data Sharing” space, as named by the US Federal Data Reference Model.
So, in summary, loose coupling can be achieved if a contractual approach that also addresses the business payload is followed, but it does not require formal semantic capture using RDF or OWL.
InfoQ: In your book you write that “Semantically, a Web Service is just a parametric state transition request on a target resource”. Does this mean that you consider Web Services and REST semantically equivalent? What, in your mind, is the difference between the two?
I wrote this in the context of RESTful Web Services, and the question is: using a RESTful service, can I always find a way to achieve what I could do with more classical Web Services? My position is yes; I think that semantic equivalence can always be found between a Web Services operation, an event, an action, and a state transition of a resource, provided that the resource is clearly identified.
As an example, a transferMoney operation should rather be expressed as the creation of a MoneyTransfer resource and its internal state transition from submitted to completed.
Extending this approach to what industry standards are doing, Web Services operations are in the majority of cases a verb and a noun, with two transfer objects for the request and the response. I will take OAGIS 9.3, which uses that verb+noun approach for its Web Services (http://www.oagi.org). The following list contains the 13 verbs that OAGIS uses to act on its entities (nouns). Next to each OAGIS verb I have placed the appropriate REST verb and, if required, the additional resource object needed to carry the state:
- Acknowledge (PUT an acknowledgment object)
- Cancel (DELETE)
- Change (POST)
- Confirm (PUT a confirmation object)
- Get (GET)
- Load (PUT)
- Notify (PUT a notification object)
- Post (PUT)
- Process (PUT of an order for the corresponding entity)
- Respond (PUT a response object)
- Show (GET)
- Sync (POST an update to a non-owner of the data)
- Update (POST to the owner of the data)
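The verb pairing above can be captured as a simple lookup table; this is a sketch of the mapping as given in the text, with an illustrative class name, not an official OAGIS artifact:

```java
import java.util.Map;

// Lookup table for the OAGIS-verb-to-REST-verb pairing listed above.
// Where the text pairs a verb with an additional resource object
// (acknowledgment, confirmation, ...), only the HTTP verb is kept here.
public class OagisRestMapping {
    public static final Map<String, String> VERB_TO_REST = Map.ofEntries(
        Map.entry("Acknowledge", "PUT"),
        Map.entry("Cancel", "DELETE"),
        Map.entry("Change", "POST"),
        Map.entry("Confirm", "PUT"),
        Map.entry("Get", "GET"),
        Map.entry("Load", "PUT"),
        Map.entry("Notify", "PUT"),
        Map.entry("Post", "PUT"),
        Map.entry("Process", "PUT"),
        Map.entry("Respond", "PUT"),
        Map.entry("Show", "GET"),
        Map.entry("Sync", "POST"),
        Map.entry("Update", "POST"));
}
```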
For reference, here are the nouns that OAGIS uses, corresponding to a variety of business objects:
ActualLedAllocateResource, AllocateResource, BOM, BudgetLedger, CarrierRoute, Catalog, ChartOfAccounts, ConfirmWIP, CostingActivity, Credit, CreditStatus, CreditTransfer, CreditTransferIST, CurrencyExchangeRate, CustomerPartyMaster, DebitTransfer, DebitTransferIST, DispatchList, EmployeeWorkSchedule, EmployeeWorkTime, EngineeringChangeOrder, EngineeringWorkDocument, Field, InspectDelivery, InventoryBalance, InventoryConsumption, InventoryCount, Invoice, InvoiceLedgerEntry, IssueInventory, ItemMaster, JournalEntry, Location, LocationService, MaintenanceOrder, MatchDocument, MergeWIP, MoveInventory, MoveWIP, OnlineOrder, OnlineSession, Operation, Opportunity, PartyMaster, Payable, PaymentStatus, PaymentStatusIST, Personnel, PickList, PlanningSchedule, PriceList, ProductAvailability, ProductionOrder, ProductionPerformance, ProductionSchedule, ProjectAccounting, ProjectMaster, PurchaseOrder, Quote, Receivable, ReceiveDelivery, ReceiveItem, RecoverWIP, RemittanceAdvice, RequireProduct, Requisition, RFQ, RiskControlLibrary, Routing, SalesLead, SalesOrder, SequenceSchedule, Shipment, ShipmentSchedule, ShipmentUnit, SplitWIP, SupplierPartyMaster, UOMGroup, WIPStatus.
As referenced by the article, the transfer can be any negotiated type, whether XML, JSON, or XHTML. My reason for singling out JSON is that it shows an interesting variability approach to the information it carries. The article states, “Using MIME types and the HTTP Accept header is a mechanism known as content negotiation, which lets clients choose which data format is right for them and minimizes data coupling between the service and the applications that use it.”
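A minimal sketch of the server side of that content negotiation, assuming an illustrative set of supported media types and ignoring q-values for brevity:

```java
import java.util.List;

// Minimal content-negotiation sketch: pick the first media type from
// the client's Accept header that the service supports, falling back
// to XML. The supported list and the fallback choice are assumptions.
public class ContentNegotiation {
    static final List<String> SUPPORTED =
        List.of("application/xml", "application/json", "application/xhtml+xml");

    public static String negotiate(String acceptHeader) {
        for (String candidate : acceptHeader.split(",")) {
            // strip parameters such as ";q=0.9"
            String type = candidate.split(";")[0].trim();
            if (SUPPORTED.contains(type)) {
                return type;
            }
        }
        return "application/xml"; // default representation
    }
}
```

A client sending `Accept: application/json` would get JSON back, while an unrecognized type falls through to the XML default; this is exactly the decoupling the quoted article describes, since the payload format is negotiated rather than hard-wired into the interface.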
InfoQ: According to your book, “Commonly, processes are modeled with a process modeling standard (such as XPDL or WSBPEL)”. Do you consider these to be modeling languages or execution ones?
Even though WS-BPEL is an execution language for processes and XPDL more of an interchange format, they have both been used to support process model interchange; however, neither of them defines a visual notation as BPMN does. This gap has been identified by the BPMN 2.0 standards team, and I strongly hope we will have a converged notation, exchange model, and execution model in the near future.
InfoQ: When talking about service routing and business processes you discuss business rules for routing and externalized routing based on policies. Is there also a place for dynamic routing using a service registry? Can you compare these routing approaches?
With WebSphere Fabric we do store the routing policies in the registry, and as a consequence there is a place for dynamic routing support with registries. We need, however, to differentiate where the routing decision takes place from where the policy is stored. Usually the endpoints and policies from the registry are cached in the ESB performing the routing, following common caching-efficiency patterns, and the routing evaluation is performed in the bus on the fly, not in the registry, which would require a remote interaction.
Efficient content-based routing requires first a mapping of the context and the content to an agreed structure with semantics, usually accessible through an SBVR-like human-readable language (http://www.omg.org/spec/SBVR/1.0/). The performance implications of doing such content or context analysis require local processing with an in-memory resolution of the policies or rules.
You can see in the following real policy example that the elements of the policy used for routing can be understood by a business person but, under the covers, have an explicit link to the service request content (Product, Triple Play) or context (Channel, Web).
For the ActivateConnection Business Service:
Product is Triple Play AND Channel is Web AND Elite Status is Gold AND
Role is Self Service OR Customer is Residential
Then use the Activate Fiber To Home Process
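A minimal in-memory evaluation of a policy like the one above might look like the sketch below. All names are illustrative, the fallback process name is invented, and the grouping of the final OR clause is an assumption about the policy's intended precedence:

```java
// Hypothetical in-memory evaluation of the routing policy above.
// Assumes the OR clause groups as (Role is Self Service OR Customer is
// Residential); the default process name is invented for illustration.
public class ActivateConnectionRouting {
    public static String route(String product, String channel,
                               String eliteStatus, String role,
                               String customer) {
        boolean matches =
            "Triple Play".equals(product)
            && "Web".equals(channel)
            && "Gold".equals(eliteStatus)
            && ("Self Service".equals(role)
                || "Residential".equals(customer));
        return matches ? "Activate Fiber To Home Process"
                       : "Default Activation Process";
    }
}
```

Keeping the evaluation in memory in the bus, as described above, avoids a remote registry interaction on every request; the registry remains the system of record for the policy, not the place where it is evaluated.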
InfoQ: In your book you raise an important issue: limiting the amount of data owned by a business process. Can you give a more specific recommendation on the topic?
Early process modeling approaches differentiated the process flow from the data flow. The introduction of Web Services and WS-BPEL led to a merger of the control information and the rest of the payload. Even though this merger theoretically allows any information to be used in a process decision, my experience in business process analysis shows that this is rarely the case. My recommendation is to perform business information modeling at the same time as process modeling. Then, clearly identify the control elements of the information and create specific services that explicitly expose only these elements.
Then, identify the services that are relevant for the “Information as a Service” domains: services that first implement the CRUD operations for the rest of the payload, including the non-control part. In addition to the CRUD interfaces, further operations can encapsulate analysis of the information and expose the result of that analysis as a simple decision. The process can use this result to perform more intelligent actions without having to carry the information itself. Then, before triggering a process component, the caller should use the “Information as a Service” interface to persist the bulk of the information and trigger the process using only the minimum control elements. If the process requires an interaction with a target system that needs more than the control information, a mediation in the bus can be used, outside of the process, to complement the required information before reaching the application.
Thus, if the analysis of the payload changes, that change is encapsulated outside of the process and does not affect its life cycle.
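The recommendation above resembles what is often called a claim-check style of interaction: persist the bulk payload through the "Information as a Service" interface and start the process with only the control elements plus a reference. A minimal sketch, with all class, method, and message names invented for illustration:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative claim-check sketch: the full payload is persisted via an
// "Information as a Service" stand-in, and the process is triggered with
// only a control element and a reference key. All names are hypothetical.
public class ClaimCheckSketch {
    // Stand-in for the Information-as-a-Service persistence layer.
    static final Map<String, String> informationStore = new ConcurrentHashMap<>();

    // Persist the bulk payload and return a key the process can carry.
    public static String persistPayload(String fullPayload) {
        String key = UUID.randomUUID().toString();
        informationStore.put(key, fullPayload);
        return key;
    }

    // The process receives control data only; a mediation in the bus can
    // rehydrate the payload from the key before reaching the target system.
    public static String triggerProcess(String controlElement, String payloadKey) {
        return "started[" + controlElement + ", ref=" + payloadKey + "]";
    }
}
```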
InfoQ: In your book you define the difference between opportunistic - event driven approaches and deterministic - services approaches as: “if the issuer of the event message expects a specific action as a consequence of this event this is a strict service oriented architecture (SOA) interaction… if the issuer of the message does not expect a specific action as the result of the event message, but if listeners examine the message and apply some rules to make out if they have to do something about it, then we are in a true event driven architecture (EDA)”. On the other hand, what you describe as EDA is very close to the business rules/policies based routing described earlier in the book. So is there a significant architecture difference between SOA and EDA?
Events can indeed trigger services and in that respect there is continuity between SOA and EDA.
In an SOA, the requester of the service directly triggers the provider of the service even if there is some rules resolution in the middle to select the appropriate provider. In an event-driven architecture the source of the event is not usually the requester of a service, but there will be “observers” that watch for the events and which can become the requesters of services, optionally through the use of rules. In EDA the observer is the service requester; in SOA the event is a request and there is no need for observers.
This is why I call it opportunistic: the observers decide whether they handle the event and trigger a service, while the initial source of the event is not the service requester. The initial source cannot expect a particular handling, as it has no control over the chain of observers.
The usually recognized design pattern for EDA is the Observer pattern from the Gang of Four (GoF) patterns, while the usual design pattern for SOA is the Bridge pattern. Of course, you can connect the design patterns to combine SOA and EDA.
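The EDA side of this distinction can be sketched with a minimal GoF Observer implementation: the event source only publishes and expects no particular handling, while each observer decides opportunistically whether the event warrants becoming a service requester. Class and method names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal Observer-pattern sketch of the EDA interaction described above.
// The publisher has no knowledge of, or expectations about, its observers.
public class EventChannel {
    private final List<Consumer<String>> observers = new ArrayList<>();

    public void subscribe(Consumer<String> observer) {
        observers.add(observer);
    }

    // The event source only publishes; whether any observer reacts
    // (e.g., by requesting a service) is entirely up to the observers.
    public void publish(String event) {
        for (Consumer<String> observer : observers) {
            observer.accept(event);
        }
    }
}
```

In a combined SOA/EDA design, an observer subscribed to this channel would apply its rules to each event and, when they match, turn around and act as the requester of a service.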
InfoQ: One of the tenets of SOA is the assumption that services are stateless. Stateless means a lot of different things to different people. When you are writing about state and state management in your book, which state are you referring to? Conversation state? Execution state? Something else?
I would call this the “adaptation state”. To get the applications exposed with the appropriate granularity, you need a granularity-matchmaking adaptation layer as close as possible to the target application. The stateless service exposed as the reusable service is the result of that adaptation; no state is exposed from that service.
InfoQ: When discussing service life-cycle management you discuss usage of WebSphere Service Registry and Repository, but not Rational Asset Manager. Do you consider that the Registry and the Repository serve the same purpose? Do you see usage of both tools?
Yes, I see usage of both tools, and I would even complement them with the change and configuration management product, Tivoli Change and Configuration Management Database (CCMDB).
In development, a software project includes assets that address domains outside of the registry and repository: project plans, documentation and developer guides, images and presentations, test plans, use cases, and requirements; really, anything that is related to the project. In the Service Registry and Repository, only the elements that relate to services at design time, code time, and run time are usually relevant. In the CCMDB you will find additional information about the software components, such as middleware versions, hardware levels, and configurations.