Case Study: Composite Applications at Safeco
Safeco is an insurance company headquartered in Seattle that provides auto, homeowners, and small-business policies. Safeco prides itself on providing personalized service and support when a customer files a claim. Safeco relies on a large national network of independent agents and brokers to help customers identify their best possible coverage. Agent surveys have consistently positioned our web-based sales-and-service platform as a leader in the industry. This platform was developed on .Net, with legacy IMS applications in the back end.
In early 2006, Safeco initiated the development of a Service Oriented Architecture to support the business in two strategic areas: new product development and business process improvements. Supporting these teams from an IT perspective is challenging because new products, solutions, and improvements are often specified without consideration for department or system boundaries. Furthermore, we needed to significantly improve our response time for delivering solutions in order to meet market and financial goals, while lowering our implementation costs.
An Introduction to Service Oriented Architecture
Service Oriented Architecture is a good fit for these goals of transformation and agility because SOA offers a new reusability model that enables the composition of new solutions by reusing and extending existing assets. The performance, scalability, reliability, security, and interoperability achieved today by distributed technologies, and by web services technologies in particular, enable the reuse of IT assets wherever the cost of operating them is lowest. Traditionally, IT has spent a large portion of its budget replicating assets (data or code) and then synchronizing the copies each time a change occurs. The new paradigm amounts to a normalization of the information system: a logical normalization, not a physical one. It is achieved with a new kind of software agent, the service, which provides access to specific data or business functionality.
Yet SOA goes well beyond the concept of services: it is supported by a complete application model, a composite application model, in which (Figure 1):
- All traditional operations are formal activities with well-defined boundaries, be they user interactions, service invocations, or entire processes.
- Activities become loosely coupled, i.e., they do not run within the same technology or call stack, under the same security authority, or with a dedicated connection between them.
- New composition technologies enable a combination of services to collaboratively perform complex business functions as part of enterprise business processes.
Figure 1. In SOA, traditional aspects of an application become loosely coupled
The fundamental paradigm shift in Service Oriented Architecture involves transforming existing applications into "systems of record" wrapped with a service interface that implements activities of type Act, Record, Inform, and Compute. The key success factor of this transformation is to externalize the state of business process instances from the content of the business objects captured in the systems of record. Services are generally "context" independent, i.e., they have as little knowledge as possible about why the consumer is invoking them at a particular moment in time. The role of a service implementation is to enforce the system of record's integrity, almost as a pure data access layer.
The context of usage of these services to achieve a particular goal is managed by the business process tier (Figure 2). Most often, a Service Oriented Architecture would rely on a business process engine to perform this function.
Once the tiers of an application model are loosely coupled, it becomes easier to deploy a business activity monitoring (BAM) infrastructure, which monitors events (the occurrence of a particular state) across the flow of messages. Some BAM infrastructures may even help correlate occurrences across a complex flow of messages (complex event processing).
Figure 2. SOA Application Model
In the context of this project, Safeco has chosen Microsoft Windows Communication Foundation for the Enterprise Services tier, IBM WebSphere Process Server for the process tier and ASP.Net for the presentation tier (Figure 2).
WCF is one of the best service containers to date. Microsoft was the first to provide a service container with a technology-agnostic programming model: the same code written in .Net can be deployed on various distributed technologies, not just web services. WCF is complemented by the Service Factory, a wizard that enables developers to easily create a complete service project, starting either from a contract definition (WSDL-first) or from a set of classes that will expose a series of operations (classes-first). The Java community later followed WCF's lead and developed a new component model (SCA) in which, similarly, any Java code can be invoked using a variety of distributed technologies chosen at deployment time.
SCA augments this programming model with an "assembly mechanism" that allows an integration developer to rapidly assemble web services and heterogeneous components (written in Java, C++, BPEL, etc.) following the dependency injection pattern. SCA's assembly mechanism provides a viable alternative to using a registry at runtime to route calls to a logical endpoint rather than a physical one. The middleware technology for any given assembly is chosen at assembly time, rather than at component development time. IBM WebSphere Process Server is a BPEL-based engine which supports an early draft of the SCA specification. Process Server is not a "business savvy" business process engine; it is rather a process-centric integration platform that can consume business-level process definitions modeled in WebSphere Business Modeler. As such, it enables the assembly of complex solutions, typically implemented by integration developers rather than business analysts.
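A minimal SCA assembly descriptor gives a feel for this mechanism. The sketch below is illustrative only: the composite, component, and endpoint names are hypothetical, and the syntax follows the early SCA 1.0 drafts.

```xml
<!-- Hypothetical SCA composite: a BPEL process component wired to an
     external web service. The binding is selected here, at assembly time,
     not in the component's code. -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0" name="MvrSolution">
  <component name="MvrProcess">
    <implementation.bpel process="mvr:MvrReconciliation"/>
    <reference name="policyService"/>
  </component>
  <!-- Promote the component reference and bind it to a web service endpoint -->
  <reference name="policyService" promote="MvrProcess/policyService">
    <binding.ws uri="http://services.example.com/PolicyService"/>
  </reference>
</composite>
```

Swapping binding.ws for another binding would redirect the same component to a different middleware technology without touching its implementation.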
ASP.Net was chosen at the presentation tier level mostly because of the large community of developers that Safeco has in this technology. Other choices might influence our standard presentation technology in the near future. However, we expect that our Service Oriented Architecture will always give us the opportunity to choose the most appropriate delivery mechanism to support any given user interaction.
Safeco's SOA Center of Excellence
We created a SOA Center of Excellence as part of the Enterprise Architecture group, with domain and solution architects as well as developers, QA engineers, and business analysts coming from our delivery organization. This model was chosen to keep our trained resources within the same group so they can execute a series of projects together. Over time, we expect the Center of Excellence model to be replaced by small SOA groups across the organization that bring expertise to any given project.
Figure 3. Safeco's SOA Center of Excellence
A small center of excellence will, however, remain in place to support methodologies, standards, and governance processes, and to manage the service registry. The main goal of this core group is to establish design-time best practices that maximize the reusability of services.
Problem Domain
All "Quote & Issue" and "Renewal" processes require that we match a customer's declaration of incidents (tickets, accidents, etc.) with his or her Motor Vehicle Registry (MVR) record. Because of the cost incurred in obtaining these records, we actually order MVR records fairly late in the process. To compound the problem, some states are capable of providing a record in "real time", while others provide a nightly batch feed containing almost all the records ordered on a given day. The goal of this project was to create an enterprise component that can process all matching requests by comparing an MVR record to a policy record. In the future, this component will be used in all Quote & Issue and Renewal processes.
We were also tasked with automating the manual verification process for batch states, which is not supported by our Q&I Sales and Service platform. Today, the MVR reconciliation performed by the Verification Unit is a manual process involving several screen navigations across separate systems. In addition, the manual process may create multiple endorsements on a policy depending on the order in which MVR records are worked, for example when two drivers on the same policy require updates to MVR information.
Figure 4. The current MVR verification process for batch states
In this process, customers sign up for a policy with the help of a Safeco agent. If the customer has records in a batch state, an MVR record is ordered as the policy is issued, based on the information provided by the customer. Within a couple of days, the record is sent to us in a batch file, and a customer service representative (CSR) manually reviews the record and the policy information to decide whether the policy needs to be re-priced. If so, an updated policy is sent to the agent and the customer. About 35% of MVR records match the policy information. The matching logic is complex and regulated by the individual states, and training a CSR on all of these business rules is expensive.
The complexity of this logic often creates quality issues. Furthermore, policies may have more than one driver. In the past this created multiple updates to the policy and possibly several re-pricings or referrals to an underwriter. One of the key requirements in the design of the matching service was to support multiple drivers on the same policy.
Solution Overview
This project was a good fit for SOA. It would enable us to develop an enterprise-class matching service reusing some of the logic already implemented in one of our legacy applications. A solution based on a process engine would be a great way to rapidly develop the automation infrastructure around the MVR reconciliation process. Our solution aimed to update the policy records directly when a match was found and, otherwise, to present our CSRs with a matching report covering all drivers of a policy. The CSR would ultimately use a legacy application to re-price these policies.
Figure 5. The intended MVR Verification Process
Drill Down: MVR Reconciliation Solution
Our SOA business analyst received three months of training prior to this project, during which he and others developed a working knowledge of IBM's process tools (WebSphere Business Modeler and WebSphere Integration Developer). He developed an As-Is model as well as a To-Be model. Once we were satisfied with the To-Be model, the business analyst performed a simulation of the To-Be process, assuming a 35% record-matching rate (matches needing no further verification) and a lower verification time based on the matching report delivered directly by the matching service (compared to raw MVR records, which are often cryptic and whose codes vary by state).
From this analysis we identified the need to develop a human task that presents a work item to our CSRs in the form of a verification report. From the verification report, CSRs would have the information necessary to update our policy system.
The To-Be business process was imported to create the initial BPEL implementation of the process.
Two main services participated in the solution: the MVR record matching service and the Policy service. The solution consumed the getPolicy, updatePolicy and matchMVRPolicy operations.
Figure 6. The MVR Solution Technical Architecture
Each day an MVR file is uploaded into one of our legacy systems (Figure 6). Every morning, WebSphere Message Broker processes this file and invokes our Process Server instance to create a business process instance for each policy identified by an MVR record. All records that correspond to different drivers of the same policy are merged into that policy's process instance. Once the file is processed, each process instance is triggered to invoke the getPolicy operation, followed by the matchMVRPolicy operation. At this point the MVR records are part of the process instance context and do not need to be stored elsewhere. If there is a match, the BPEL implementation invokes the updatePolicy operation and terminates the process instance. Otherwise a work item is instantiated and waits in the process instance context until a user claims the task and completes it.
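The per-policy logic of this flow can be sketched in plain Java. This is not production code from the solution: the record shape, method names, and the trivial matching rule are hypothetical stand-ins for the state-regulated logic behind matchMVRPolicy.

```java
import java.util.*;

// Illustrative sketch of the per-policy merge-and-match logic performed by
// the BPEL process; names and the matching rule are hypothetical.
public class MvrReconciliationSketch {

    // One MVR record for one driver on one policy
    record MvrRecord(String policyNumber, String driverId, List<String> incidents) {}

    // Group the nightly batch so all drivers of one policy are handled
    // together, mirroring the correlation-based merge in the process instance.
    static Map<String, List<MvrRecord>> mergeByPolicy(List<MvrRecord> batch) {
        Map<String, List<MvrRecord>> byPolicy = new LinkedHashMap<>();
        for (MvrRecord r : batch) {
            byPolicy.computeIfAbsent(r.policyNumber(), k -> new ArrayList<>()).add(r);
        }
        return byPolicy;
    }

    // Stand-in for matchMVRPolicy: the policy "matches" only if every driver's
    // declared incidents equal the MVR incidents. The real rules are far more
    // complex and vary by state.
    static boolean matches(Map<String, List<String>> declaredByDriver,
                           List<MvrRecord> policyRecords) {
        for (MvrRecord r : policyRecords) {
            List<String> declared = declaredByDriver.getOrDefault(r.driverId(), List.of());
            if (!r.incidents().equals(declared)) {
                return false; // mismatch: route to a CSR work item
            }
        }
        return true; // full match: updatePolicy and terminate the instance
    }
}
```

Grouping first and deciding once per policy is what prevents the multiple endorsements and repeated re-pricings of the manual process.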
Drill Down: Process Modeling, Simulation, Execution
In the past, business process engine vendors and business process standards such as BPEL or BPML have touted the ability for business users to model a process that can be deployed directly to an engine for execution.
Our experience is that this transition is not as seamless as some have claimed, although we did not have any particular level of expectation as we started our SOA initiative.
Figure 7. The model-driven advantage
We have found that some processes support this paradigm better than others. Because of the nature of the MVR process (driver merge, specific business logic for service invocations, etc.), the business view and the resulting BPEL diverged significantly. We have worked on other business processes where the BPEL generation from the business view is much more straightforward. As a rule of thumb, wherever there is system invocation logic embedded in the BPEL implementation, we expect the divergence to be high.
Drill Down: MVR Record merging
It was key in this project that records belonging to different drivers of the same policy be processed together to avoid multiple updates to the policy and notifications to the agent and customer.
This was implemented directly in the BPEL definition which supports the MVR solution.
Figure 8. BPEL Implementation of the MVR record merge
The first activity initiates a business process instance when the Process Server receives a new MVR record. As part of the receive in the processMotorVehicleReport operation, we establish the correlation set necessary to direct other MVR records with the same policy number to this new process instance.
The BPEL continues with a while-loop which receives the other messages correlated to this process instance or times out once the batch file processing is complete. Once all records have been received they are processed.
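A skeletal BPEL fragment illustrates this receive-then-collect pattern. It is a sketch only: element names, properties, and the timeout value are hypothetical, and WebSphere Process Server's actual dialect differs in detail.

```xml
<!-- Sketch of the correlation-based merge; names are hypothetical -->
<correlationSets>
  <!-- Correlate all messages carrying the same policy number -->
  <correlationSet name="PolicyCorrelation" properties="mvr:policyNumber"/>
</correlationSets>
<sequence>
  <!-- The first record creates the instance and initiates the correlation set -->
  <receive operation="processMotorVehicleReport" createInstance="yes"
           variable="mvrRecord">
    <correlations>
      <correlation set="PolicyCorrelation" initiate="yes"/>
    </correlations>
  </receive>
  <!-- Collect further records for the same policy, or time out once the
       batch file has been fully processed -->
  <while condition="$moreRecordsExpected">
    <pick>
      <onMessage operation="processMotorVehicleReport" variable="nextRecord">
        <correlations>
          <correlation set="PolicyCorrelation"/>
        </correlations>
        <empty/> <!-- merge nextRecord into the instance context -->
      </onMessage>
      <onAlarm for="'PT1H'">
        <empty/> <!-- batch window closed: proceed to matching -->
      </onAlarm>
    </pick>
  </while>
</sequence>
```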
Drill Down: Microsoft / IBM interoperability
Because we chose a heterogeneous platform, we had to tackle interoperability head-on. Our presentation tier, built in ASP.Net, invokes Process Server services to claim human tasks, and Process Server invokes WCF services as part of the process implementation (Figure 6).
We would describe the interoperability as quite good, as long as you know which settings of each stack can interoperate with the other. At present, WebSphere Process Server can only interoperate with WCF's basicHttpBinding, which is based on:
- HTTP 1.1
- WSDL 1.1
- SOAP 1.1
- WSS SOAP Message Security 1.0
- WSS SOAP Message Security Username Token Profile 1.0 & X.509 Token Profile 1.0
This means that the most advanced features of WCF, such as WS-Policy or SOAP 1.2, cannot be leveraged.
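In practice this means constraining the WCF endpoints to basicHttpBinding. A minimal web.config fragment shows the shape of such an endpoint (service and contract names are hypothetical):

```xml
<!-- Hypothetical web.config fragment: expose the matching service over
     basicHttpBinding so WebSphere Process Server can consume it -->
<system.serviceModel>
  <services>
    <service name="Safeco.Mvr.MatchingService">
      <endpoint address=""
                binding="basicHttpBinding"
                contract="Safeco.Mvr.IMatchingService" />
    </service>
  </services>
</system.serviceModel>
```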
We also encountered problems related to the way each product handles namespaces. WCF adopts a service view, in which each service artifact lives in a service-specific namespace, while Process Server adopts a business-object view layered on top of the service interface. So if a policy schema is common to a Policy service and a Matching service, that schema must be shared at the WSDL level, because there is a single policy schema within the business process definition. We had to tweak the WCF-generated namespaces to make the resulting WSDL consumable by Process Server.
This is a general issue with web services technologies, since there is no standard "business object model" representing the resources exchanged as part of service invocations. Each vendor is free to adopt schemes that best fit its product line or customer base.
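One way to satisfy the shared-schema constraint is to factor the Policy schema into its own namespace and import it from each service's WSDL, so that Process Server sees a single Policy business object. The namespaces and file names below are hypothetical:

```xml
<!-- In each service's WSDL (Policy service and Matching service alike),
     import the one shared Policy schema rather than redefining it -->
<wsdl:types>
  <xsd:schema targetNamespace="http://safeco.example/mvr/service">
    <xsd:import namespace="http://safeco.example/schemas/policy"
                schemaLocation="Policy.xsd"/>
  </xsd:schema>
</wsdl:types>
```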
Drill Down: Security Interoperability
It is beyond the scope of this paper to describe Safeco's security infrastructure. In terms of interoperability, we have explored several combinations of security mechanisms to support authentication, integrity and confidentiality capabilities.
Web Services Security is based on the concept of message-based security: the SOAP envelope of the message is sent in the clear, but the SOAP body is transmitted in encrypted form.
On the WCF side, message-based security can be accomplished in a number of ways; however, there are many variations in how this is supported by various environments. Some of the more common forms are:
- Certificate Authentication: in addition to providing security via encryption, WCF supports using the client certificate as a method of identifying the client.
- Windows Authentication: accomplished by effectively passing a "WindowsPrincipal" across the service boundary. WCF-based services can then impersonate the client so that they operate within the privilege context of the client's identity.
- UserName Token Authentication: "UserName Tokens" provide a mechanism for transmitting the client's identity to the server when the client's identity does not correspond to a valid Windows Identity within the service environment.
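As an illustration, a UserName token secured at the message level can be expressed as a binding configuration along these lines (the binding name is hypothetical, and Message mode additionally requires a service certificate configured in the service's credentials behavior):

```xml
<!-- Hypothetical binding configuration: client authenticates with a
     UserName token; the message is encrypted using the service certificate -->
<basicHttpBinding>
  <binding name="secureMvrBinding">
    <security mode="Message">
      <message clientCredentialType="UserName" />
    </security>
  </binding>
</basicHttpBinding>
```

Because this is pure configuration, the credential mechanism can be changed at deployment time without touching service code, which is how we achieved the configurability described above.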
Remarkably, in our project, we were able to achieve 100% configurability of the security mechanisms, i.e. there are no code dependencies on the mechanisms we use. The security settings can be configured at deployment time.
Interoperability for both digital signature and encryption was achieved between Process Server and WCF.
We ran into a small issue resulting from non-normative errata that OASIS published but not everyone followed. Microsoft did not implement the errata; IBM did.
As a result, the request message contains the following key identifier namespace:
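Per the OASIS X.509 Token Profile 1.0 errata, this is presumably the KeyIdentifier ValueType URI (shown here as a reconstructed sketch):

```xml
<wsse:KeyIdentifier
    ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3SubjectKeyIdentifier"/>
```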
The v3 in X509v3SubjectKeyIdentifier was introduced by the errata and is not supported by Microsoft. The fix is either to enter the namespace value manually or to upgrade to the latest version of WebSphere, where it seems IBM reverted its implementation of the non-normative errata.
The learning curve is steep. It is not so much any given aspect that is difficult to learn; rather, it is the sheer number of technologies and tools one must be familiar with to execute the complete project.
We requested the help of two consultants, one from IBM and one from Microsoft to execute this first project. This turned out to be extremely helpful for resolving interoperability, configuration and deployment issues.
From our perspective, it seems unrealistic to expect business users to engage directly in modeling activities. An experienced business analyst can easily translate business requirements into business process models, but such a model is generally not deployable to the process engine as-is: an integration developer needs to be involved to make the process executable. Initially, SOA projects should be treated as traditional implementation projects. Over time, there is value in providing an as-deployed model to the business team. When we are in a position to add operational metrics to this model, we expect its value to become high enough to convince the business team to engage in modeling activities and establish a closed-loop, model-driven delivery process.
This project demonstrated most of the benefits SOA can bring to an organization.
First, by exposing it as a service using web services technologies, we were able to reuse legacy code that would otherwise have been impossible to reuse. After we deployed our solution to production, the Q&I team that owns this code made some changes to the implementation of the Matching Service. These modifications were immediately available to the MVR solution, which would not have been the case if the code had been replicated.
Second, web services technologies enable interoperability, which in turn is a key success factor when building composite applications. We have not experienced significant difficulties in this area, and we expect the degree of interoperability to increase with support for SOAP 1.2, WS-Transaction, WS-ReliableMessaging, and other standards we did not need for this project.
Third, we were able to deliver a complex solution integrating more than five systems in less than eight weeks with a team of four developers, two QA engineers, and two architects. The production infrastructure, including security, was built as a separate project with fewer resources.
Fourth, the process implementation required fewer than 20 lines of code, written for a special mapping capability. The assembly between the process component, the services, and the human CSR task was achieved with SCA. These capabilities demonstrate that a model-driven approach is effective for implementing real-world solutions, eliminating over 90% of the code at the process level. Had we coded this process implementation in a traditional application model (e.g., J2EE or .Net) without an orchestration engine, it would have taken several thousand lines of code. Furthermore, such code is stateful, which would have made it harder to debug or change later on.
Overall Safeco has been successful in deploying its first iteration of a service oriented, process centric, model driven application model.
Our current SOA infrastructure is incomplete. The next phase will continue building our capabilities in terms of Business Activity Monitoring, Registry and Management & Monitoring.
We have achieved strong buy-in from our business customers, who appreciate the lower cost of delivery, the speed with which we deliver, and our ability to change a solution once it is in production. We have established a pipeline of projects that can be addressed with SOA. Governance and methodology are key to reusability and speed of delivery, and we will keep reinforcing them as part of an ongoing effort to reach the highest level of maturity.
The key benefit of Service Oriented Architecture is its ability to create IT assets that can be reused and extended when building new solutions. Future projects capable of reusing these assets will be that much easier, faster, and cheaper to build, while limiting the need for integration. As such, SOA is a key enabler of transformation, better business/IT alignment, and agility. Utilizing SOA, Safeco IT was able to deliver the solution in a very short development cycle, meeting market and financial goals while lowering software development costs.
Java != SCA and JBI was before WCF
SCA was developed by BEA and IBM and only recently handed over to an open standards body (OASIS). There is only one open implementation available, and it is still in its infancy.
Microsoft was rather late, introducing WCF with .NET 3.0 and there is still much uncertainty and doubt around this framework.
Each of these specifications has its merits and drawbacks, but in your case, JBI would have been worth a look, as your infrastructure is missing exactly those things that JBI already provides by default ("Future Directions"). Maybe you should have also invited a consultant from Sun (I don't work for them, just like open and proven standards).
Re: Java != SCA and JBI was before WCF
JBI is merely a belated attempt to standardize the integration infrastructure of the integration platforms of the '90s (java.sun.com/integration/pa1/docs/introduction/...). It is based on a very old "hub & spoke" pattern and requires that the NMR be at the center of the universe. It does not even support composition between JBI infrastructures! So if by any chance two JBI infrastructures made it into your organization, you are on your own to make sure that a binding component in one can talk to a binding component in another, or to leverage a service engine from one in the other.
It is misleading to associate JBI with SOA. Can I use a JBI infrastructure to expose services? Yes, just like a gazillion technologies that act as a service container. Should I construct my SOA infrastructure on JBI? Hell no.
Dimitar Bakardzhiev Mar 29, 2015