Bridging Subsidiaries With the Cloud to Create a Global API
In the modern business world many organisations have developed a global footprint by acquiring other companies to create a presence in different territories. Sometimes these companies are kept almost completely separate; at other times there is a significant level of integration between the businesses. One of the biggest challenges in this space is how to integrate these companies to present a single global view of the entire group, so that customers and partners find it simpler to integrate with the organisation.
In this article we will discuss a fictitious sample based on a real-world scenario and look at some of the typical challenges and good practices that you should implement to be successful with this type of solution.
In this example we are looking at a company called Acme Employee Assistance Group (referred to from now on as Acme). Acme has a large number of local businesses around the world and a growth strategy that involves acquiring businesses in new countries. Its core business is support services for companies whose employees are travelling around the world. Acme has contracts with companies based in many countries, and these contracts are managed locally by the Acme business in that territory. Each country has specific regulatory requirements, which means it would not be easy to create one global application to manage every customer, and there is a significant legacy IT investment in most of the Acme businesses.
The big challenge comes when an employee of one of Acme’s customers is travelling and needs assistance. They will contact the local Acme office and then that office will provide the support required to help the employee. This means that the local Acme office needs to be able to access the systems in the employee’s home country.
The following diagram illustrates how this might work using the example of a UK based employee who seeks assistance in Australia.
While the initial example might not sound overly complex, if you multiply this by the number of businesses Acme has around the world it can quickly become a spaghetti mess.
Traditional Solution Model
When you first consider the requirements of this solution, as an Integration Architect you will probably remember the old spaghetti integration diagrams you used to see within the enterprise before there was a centralised integration capability. This problem domain is an extension of that old challenge, but on a global scale.
The below diagram shows how you might see this.
New Solution Model
In the modern cloud era you now have some architectural options which weren't really available in the same way before. Using the cloud as your hub, with a Platform as a Service (PaaS) based messaging system at the core of the architecture, means you can now focus on the challenge of connecting each business to the cloud rather than directly to each other.
The below diagram illustrates this hub and spoke pattern but at a global level.
There are still big hurdles to overcome in this kind of project, but with a cloud provider like Windows Azure it is possible to connect your data centre to a cloud messaging system in a very short space of time and with minimal infrastructure requirements. This creates the capability to pass messages around the entire organisation, and the question then becomes how best to expose that messaging capability to customers and partners.
To do this we have three key levels in this architecture. Firstly we have a local business Enterprise Application Integration (EAI) capability which is able to connect to the cloud messaging system and integrate with the line-of-business applications.
In the middle there is the Integration Platform which consists of the messaging capability and an EAI capability for any third party systems we may need to integrate with.
Finally at the public facing end we have an API and user interface which are exposed to those who need to integrate with Acme. The below diagram illustrates this.
You can see in this capability diagram that the interfaces exposed are a REST API for application integration and a website for any low-tech partners who need their users to manually view data.
In the next diagram you can see how the capabilities illustrated above relate to physical systems within each business. Each business has an integration product capable of connecting to the messaging system as well as integrating with line-of-business applications. Examples of these integration systems would be BizTalk or WebSphere.
At this point hopefully you can clearly see a potential architecture which is capable of delivering on this global hybrid integration pattern. The project will still have many challenges, but the new solution approaches offered by the cloud mean we can tackle this in a different way to what we could in the past.
We will now look at some of the key things you would need to think about and, from my experience, some of the practices you will need to adhere to in order to be successful with this kind of project.
Considerations & Good Practices
As part of this article I wanted to put forward some of my thoughts on the key considerations and good practices which relate to delivering this kind of solution successfully. Some of them are specific to the cloud and some are more general to any integration solution. To break up the considerations and practices into areas, we will look at them first from an organisational perspective and then from a design perspective.
There are other areas you might also consider, such as the delivery and operational aspects of the solution, but we will leave those for another time.
This section discusses some of the key organisational aspects. This can be a big topic, however, so I will keep it short in order to focus on the more technical areas later.
Common Data Model
In order to present a common API for the organisation there needs to be a common, business-agnostic data model. If the organisation doesn't already have one, this could be one of the hardest parts of the project. A common data model means parties outside the organisation get a common representation of an entity regardless of the business it came from. This common data model can be represented as canonical messages.
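To make the idea concrete, here is a minimal sketch of what a canonical message might look like on the wire. All of the field names (`employeeId`, `homeCountry`, and so on) are illustrative assumptions, not Acme's real model:

```python
import json

# A sketch of a canonical "employee" message in the common data model.
# Field names are hypothetical; the point is a business-agnostic shape.
canonical_employee = {
    "schemaVersion": "1.0",
    "employeeId": "GB-12345",
    "homeCountry": "GB",  # ISO 3166-1 alpha-2, business agnostic
    "name": {"given": "Jane", "family": "Smith"},
    "employer": {"contractId": "C-9876", "name": "Example Ltd"},
}

# Canonical messages travel as JSON between businesses.
wire_message = json.dumps(canonical_employee)
round_tripped = json.loads(wire_message)
```

Every local business produces and consumes this shape, regardless of how its own line-of-business systems store the data.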
Local business deals with local problems
Each business is going to have their own unique challenges and they should be handled by each business which will have specialisation in their own domain. Following on from the above point about the common data model, one of the challenges for a local business will be how to map their own data to the common data format.
It will be difficult to see every scenario up front, and some scenarios may be so different in each business that they are not worth implementing. In these exception cases an advisor could simply telephone the appropriate business.
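The mapping responsibility described above might look something like the following sketch, where a hypothetical UK business's local record shape (`emp_no`, `first_name`, `surname` are made-up field names) is translated to the common format by that business's own EAI layer:

```python
def to_canonical(local_record: dict) -> dict:
    """Map a (hypothetical) UK line-of-business record to the common model."""
    return {
        "schemaVersion": "1.0",
        "employeeId": f"GB-{local_record['emp_no']}",
        "homeCountry": "GB",
        "name": {
            "given": local_record["first_name"],
            "family": local_record["surname"],
        },
    }

# A record as the local legacy system might hold it.
uk_record = {"emp_no": 12345, "first_name": "Jane", "surname": "Smith"}
canonical = to_canonical(uk_record)
print(canonical["employeeId"])  # GB-12345
```

The key design point is that this function lives at the edge, inside the local business, so no other business needs to know the legacy field names.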
Running Costs Shared & Local
The cost model for this project can be quite an interesting one. The local business EAI costs are going to be fairly similar to their normal costs; they may need to scale some systems, but these are costs they already understand how to deal with. The new costs, which they probably don't yet understand well, are around the REST API and the messaging system in the cloud. You would have the challenge of working out how to break down the costs for potential cross-charging across businesses. In this case the organisation is likely to have a Windows Azure Enterprise Agreement, so the costs would be fairly low anyway.
In reality the messaging and web role costs are so low that, unless you have really significant load, it will often cost more to work out how to break up and manage the cross-country charging than it would to actually run the system. In this case, from experience, I would estimate the expected running costs based on your expected volumes and then split the money evenly. Unless one business is used much more than the others under high load, the running costs are probably minimal.
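The even-split approach is trivial to implement, which is precisely its appeal compared with per-business metering. The figures below are entirely made up for illustration:

```python
def even_split(estimated_monthly_cost: float, business_count: int) -> float:
    """Split an estimated monthly running cost evenly across businesses."""
    return round(estimated_monthly_cost / business_count, 2)

# e.g. an assumed $1200/month shared platform cost across 8 local businesses
share_per_business = even_split(1200.00, 8)
```

Compare the engineering effort of this one-liner with building and operating per-business usage metering, and the article's point about cross-charging overhead becomes clear.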
Skills & Experience
One of the keys to success in this project is to have people in your team with experience of integration. Often organisations fall into the trap of thinking any developer can solve integration problems but in a large complex project like this specialist skills and experience will be really valuable.
Some of the design focused considerations are discussed below.
Protocol & Messaging Channel
In this architecture we have chosen to use Windows Azure Service Bus as the central messaging hub. This gives us the benefit of a messaging system that supports a number of different protocols and means we will be able to connect to it with a wide range of different technologies. In addition to the libraries on the Windows Azure website for the main programming languages, Windows Azure Service Bus also supports REST and AMQP.
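As a sketch of the REST option, the snippet below composes (but does not send) the HTTP request you would use to post a message to a Service Bus queue. The namespace and queue names are made up; the `BrokerProperties` header is how broker-level properties such as the label travel on the REST API:

```python
# Hypothetical namespace and queue names for illustration.
namespace = "acme-hub"
queue = "employee-requests"

# Service Bus REST API: POST a message to the queue's /messages endpoint.
send_url = f"https://{namespace}.servicebus.windows.net/{queue}/messages"

headers = {
    "Authorization": "<SAS token goes here>",  # see the security section
    "Content-Type": "application/json",
    # Broker properties (e.g. Label, SessionId) travel as a JSON header.
    "BrokerProperties": '{"Label":"employee-lookup"}',
}
```

Because any HTTP-capable platform can issue this request, even local businesses without a full integration product can participate.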
Keep EAI to the edges
One of the lower-level considerations for this cloud/hybrid architecture is where to do the EAI. The EAI should be done as much as possible in the local businesses and not in the cloud integration platform. This usually works well in businesses with a good level of IT support and existing integration platforms; in lower-tech businesses there are other options, such as putting a BizTalk instance in the cloud to deal with their EAI challenges.
REST API entry point
In this architecture we chose to expose a resource-based model and a REST API to the client applications. The REST API decouples the clients from the messaging system to a degree, giving them a simpler resource model, but it also allows us to include some logic in the cloud to support features the API might require, and gives us a place to centrally control access to resources.
Message and API versioning is likely to be an important consideration in this solution. A project like this one would probably have lots of changes to data throughout the life cycle of the project. We should be able to use the normal API and message versioning techniques with this architecture.
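One common versioning technique is to carry an explicit version field in each canonical message and normalise older shapes at the consuming edge. The sketch below assumes a hypothetical `schemaVersion` field and an invented 1.1 change:

```python
def parse_employee(message: dict) -> dict:
    """Normalise versioned canonical messages into one internal shape."""
    version = message.get("schemaVersion", "1.0")
    if version == "1.0":
        return {"employeeId": message["employeeId"]}
    if version == "1.1":
        # Hypothetical: 1.1 renamed the field; we normalise it away here.
        return {"employeeId": message["employee_id"]}
    raise ValueError(f"unsupported schemaVersion {version}")

v10 = parse_employee({"schemaVersion": "1.0", "employeeId": "GB-1"})
v11 = parse_employee({"schemaVersion": "1.1", "employee_id": "GB-2"})
```

Keeping handlers for every version still in flight lets businesses upgrade at their own pace, which matters when each local business has its own release cycle.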
Using asynchronous patterns where possible will help with a number of areas of the solution including performance and scalability. In practice I have found many projects try to avoid using asynchronous messaging but it’s important in this case.
Azure Service Bus Namespaces
Using a single service bus namespace for the cloud based messaging will help Acme to simplify the management of the messaging. This will mean there is one central place where the queue and subscriptions will be held.
Using the new Windows Azure Service Bus paired namespace features should also help to give you an improved availability story.
Messages coming in and out of the API can follow the normal REST patterns, but messages going through the Windows Azure Service Bus need to be in a format compatible with all of the Acme local businesses. Ideally Acme would use JSON messages to limit the payload size, but this also depends on the capabilities of the end applications.
In businesses where Acme has BizTalk, there are a number of community articles which show how you can decode a JSON message for use inside BizTalk.
Azure Service Bus Security
Windows Azure Service Bus uses Windows Azure Active Directory Access Control (ACS) or Shared Access Secrets to protect access to queues and topics and to control the permissions you have on them. This level of security is useful for centrally configuring the access rights each local business is allowed.
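For the Shared Access Secret option, a client proves its rights by presenting a Shared Access Signature (SAS) token: an HMAC-SHA256 over the URL-encoded resource URI and an expiry time. The sketch below builds one with only the standard library; the namespace, key name, and key are placeholders:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def make_sas_token(resource_uri: str, key_name: str, key: str,
                   ttl_seconds: int = 3600) -> str:
    """Build a Service Bus SAS token.

    Format: SharedAccessSignature sr=<uri>&sig=<sig>&se=<expiry>&skn=<name>
    where sig = Base64(HMAC-SHA256(key, url-encoded-uri + "\n" + expiry)).
    """
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    to_sign = (encoded_uri + "\n" + expiry).encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), to_sign, hashlib.sha256).digest()
    )
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature.decode())}"
        f"&se={expiry}&skn={key_name}"
    )

token = make_sas_token("https://acme-hub.servicebus.windows.net/",
                       "RootManageSharedAccessKey", "placeholder-secret")
```

Because each local business can be issued its own key scoped to its own queues and subscriptions, the hub gets the centrally configured access rights described above.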
Queue and Topic Setup
The queue and topic setup for this solution is likely to depend on the messaging patterns which you want to implement. In this solution it’s likely that we would use topics quite a lot to allow the subscription rules to be used to determine which message goes to which local business.
The patterns we used most were a routed RPC pattern, in which a topic routes a message to a subscription and the local business sends its response to a session-aware response queue, and a one-way routing pattern.
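The routed RPC pattern hinges on correlation: the request carries a session id, and the responder copies it onto the reply so the caller can pick its own response off the shared response queue. The following in-memory simulation (using Python's `queue` module as a stand-in for the Service Bus queues) shows the shape of the exchange:

```python
import queue
import uuid

# Stand-in for the session-aware response queue in Service Bus.
response_queue: "queue.Queue[dict]" = queue.Queue()

def local_business_handler(request: dict) -> None:
    """Simulated local business: process a request, reply on the queue."""
    response_queue.put({
        "sessionId": request["sessionId"],  # echoed for correlation
        "body": {"status": "ok"},
    })

# Caller side: stamp a session id, send, then wait for the matching reply.
session_id = str(uuid.uuid4())
local_business_handler({"sessionId": session_id,
                        "body": {"employeeId": "GB-12345"}})
reply = response_queue.get(timeout=1)
```

In the real system the session id maps to the Service Bus message `SessionId`, which lets the broker hand each caller only the replies belonging to its session.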
Message Routing Rules
Most of the routing rules in this architecture are based on a context property which indicates which country or local business the employee belongs to. Because of the nature of the business use cases we would always know this information from the initial conversation between the employee seeking help and the advisor.
This context property is then added to messages and in Windows Azure Service Bus we can use a simple subscription filter based on this property to route messages to the right local business.
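Conceptually the broker evaluates a predicate such as `HomeCountry = 'GB'` against each message's properties; in Service Bus this is expressed as a SQL-style subscription filter. The property name below is a made-up example:

```python
def matches(properties: dict, country: str) -> bool:
    """Toy equivalent of a subscription filter like HomeCountry = '<X>'."""
    return properties.get("HomeCountry") == country

# Context properties stamped onto a message by the API layer.
message_properties = {"HomeCountry": "GB", "MessageType": "AssistanceRequest"}

routed_to_uk = matches(message_properties, "GB")
routed_to_au = matches(message_properties, "AU")
```

The important architectural point is that routing lives in broker configuration, not in code: adding a new country means adding a subscription and a filter, with no changes to the existing businesses.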
In this architecture one of the common data-related problems is cross-reference data: mapping codes from one business to another. In this scenario the best approach was to create a centralised master data list for each of the reference data fields and then, following the earlier design principle of pushing EAI to the edges, leave each business to deal with mapping the central values to their own business-specific ones.
If possible, the use of industry-standard codes can really help here; for example, using ISO country codes to identify a country would be a good choice for the business-agnostic country code.
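The split of responsibilities might look like this sketch: the hub owns the master list (here, ISO 3166-1 alpha-2 codes), while each business owns only its own local-to-master mapping. The legacy codes shown are invented:

```python
# Owned centrally by the integration platform.
MASTER_COUNTRIES = {"GB": "United Kingdom", "AU": "Australia"}

# Owned by one local business's EAI layer: its legacy codes -> master codes.
LOCAL_TO_MASTER = {
    "UK1": "GB",   # hypothetical legacy code
    "AUS": "AU",
}

def to_master_country(local_code: str) -> str:
    """Translate a business-specific code to the central master value."""
    master = LOCAL_TO_MASTER.get(local_code)
    if master is None or master not in MASTER_COUNTRIES:
        raise KeyError(f"no master mapping for local code {local_code!r}")
    return master
```

Each business can change its legacy codes without touching the hub, and the hub can add countries without touching any business's mapping table.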
Again like the common data model this area could be one of the more difficult ones to figure out.
Structured & Unstructured Data
When defining the common data model, it's often difficult to get agreement on all of the fields. One recommendation I would make is to consider how the data will be used. Some data will be obvious entities, such as a name and address, whose structure is well understood; other data will be attributes which are key to decision-making points in a process. These need to be well defined and easy for a developer to identify in a message so they can be used in the system.

Other data, however, may be important for a user to read but may not need any structure beyond what is required to display it. In this case it might be worth having an unstructured part of the data model where a local business can populate any data it wishes or that is relevant to it. This can also be a useful way of surfacing business-specific additional information which varies by local business.
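A message combining both halves might look like the sketch below, where the structured fields drive routing and process decisions and a hypothetical `additionalInfo` section is display-only free-form content:

```python
message = {
    # Structured: well-defined fields a developer can rely on.
    "employeeId": "GB-12345",
    "homeCountry": "GB",
    # Unstructured: each business populates whatever is relevant to it;
    # the UI simply renders these key/value pairs for the advisor.
    "additionalInfo": {
        "localPolicyNotes": "Covered under plan B",
        "officeHours": "09:00-17:30 GMT",
    },
}

display_only = message["additionalInfo"]
```

This keeps the hard negotiation over the data model focused on the fields that genuinely need agreement, while giving each business an escape hatch for the rest.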
When using a queue-based messaging system it's important to consider message size. Initially we had some concerns about the message size limit, but in reality the restriction helped us focus our messages and ensured we were not producing large response messages full of unused data, which had often been a problem in the past. Using JSON will also help a lot with keeping the payload size down.
There are techniques you can use with sessions to handle larger messages, but in our case we have so far treated the limit as a constraint which forces us to design small, effective messages that perform specific tasks. With the scale-out possibilities of the architecture it would also be possible to send multiple parallel messages for different information if we wanted to aggregate data into a larger resource to give back to the client.
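The parallel-aggregation idea can be sketched as follows; the two `fetch_*` functions are stand-ins for separate small request/response exchanges through the messaging system, executed concurrently and merged into one resource for the client:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_profile(employee_id: str) -> dict:
    """Stand-in for one small request/response exchange."""
    return {"employeeId": employee_id, "name": "Jane Smith"}

def fetch_entitlements(employee_id: str) -> dict:
    """Stand-in for a second, independent exchange."""
    return {"employeeId": employee_id, "entitlements": ["medical", "travel"]}

def aggregate(employee_id: str) -> dict:
    """Run both exchanges in parallel and merge into one client resource."""
    with ThreadPoolExecutor() as pool:
        profile = pool.submit(fetch_profile, employee_id)
        entitlements = pool.submit(fetch_entitlements, employee_id)
        return {**profile.result(), **entitlements.result()}

resource = aggregate("GB-12345")
```

Each individual message stays comfortably under the broker's size limit, and the latency cost of the extra round trips is hidden by running them concurrently.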
I hope this article shows how the cloud and cloud-based messaging can offer ways to deliver projects that were often too complicated in the past. The key thing in this project is that the infrastructure requirements have changed, or been removed, to such a degree that you could proof-of-concept this kind of project very quickly and at low cost, whereas in the past the infrastructure requirements would often add so much cost and complexity that it was difficult to get the project off the ground.
It’s important to remember though that some of the challenges would not change at all. It’s still a complicated integration project but hopefully some of my thoughts above on the things to think about and the practices to follow will help you to be successful.
About the Author
Michael Stephenson is an independent integration and cloud specialist based in the UK. He is primarily focused on integration technologies in the Microsoft platform, such as BizTalk and Windows Azure. Michael has many years of technical leadership and coaching experience and has worked with customers to deliver a number of complex real-world hybrid integration solutions. Michael recently pioneered the BizTalk Maturity Assessment, is a regular blogger, and can be found on Twitter or LinkedIn.