
Thinking Outside-In: How APIs Fulfill the Original Promise of Service-Oriented Architecture

In the early 2000s, service-oriented architecture, or SOA for short, was on the rise, fueled by the advent and popularity of Web Services. The intent of SOA was ambitious: to decompose business systems into granular, interoperable, and reusable services that could be easily combined and mashed into new processes and workflows regardless of the underlying technology. As with any good technology hype, SOA entailed a raft of new standards, products, and ambitious "early adopters". Today, 15 years after the first hype, not many people talk about SOA anymore; it has long been superseded by mobile, cloud, digital channels, and the Internet of Things at the forefront of the CIO's mind. I experienced this recently in a board meeting with a client whose CIO was considering the integration and infrastructure model for a US$200M IT platform overhaul. When one board member proposed SOA running on top of a 'service bus' as the way forward, another quickly swept it aside as 'old fashioned and outdated', declaring that 'digital APIs' are the new black. In this article I will argue not only that SOA and APIs are founded on the same principles, but that APIs are in fact an evolution of what SOA promised to deliver: lightweight, reusable business services that are easily developed, deployed, and rewired to adapt to changing requirements. Before I can prove that point, however, let's begin with a stringent definition of the two terms.

The origins of SOA: the promise of better business technology

SOA emerged in the early 2000s as a departure from the classic 'every application is a monolith' paradigm: it decomposed and structured technology capabilities into a set of business services that are well defined, reusable, and easy for anyone to consume regardless of the underlying technology. For instance, when deploying a new payroll system and process, SOA enabled businesses to integrate the new payroll process into existing HR systems and processes by wrapping the payroll system in a 'business service' that existing HR systems could access and invoke. The big change, however, was that consuming systems no longer needed to know about the underlying application stack or vendor of the payroll system, or to adopt any vendor-specific protocols. Instead, SOA enabled systems to communicate using open standards such as Web Services or REST, which back then was a big change for large, proprietary, legacy enterprise applications with no means of interoperability.

In addition to easier and better interoperability, SOA, at the highest level, promised three key benefits to businesses:

  • Increased agility by providing the ability to orchestrate and change workflows across many different business services, as opposed to building isolated business processes in existing systems that are costly or hard to change
  • Improved reuse and less functional duplication by exposing functionality that already exists within legacy systems for consumption by new applications and business services
  • Greater flexibility to effect change by consciously focusing on decoupling business services, enabling systems to be changed or transitioned without impacting the overall architecture or reliability of operations

Decoupling was a critical enabler of SOA. The term stems from the software engineering discipline and refers to the level and severity of impact a change in one system has on its dependent systems. The higher or tighter the coupling, the more cumbersome and expensive it is to change things. The lower or looser the coupling, the easier it is to change a process or data set within one system without impacting the systems it integrates with.
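
To make coupling concrete, here is a minimal TypeScript sketch (all names and the endpoint are hypothetical, not from any particular product). The HR code depends only on a narrow PayrollService contract, so the payroll vendor behind it can be swapped without touching the consumer:

    // A narrow, technology-neutral contract: the only thing HR code sees.
    interface PayrollService {
      getNetSalary(employeeId: string): Promise<number>;
    }

    // One possible implementation wrapping a (hypothetical) vendor endpoint.
    class VendorPayroll implements PayrollService {
      async getNetSalary(employeeId: string): Promise<number> {
        // Vendor-specific protocols and formats stay hidden in here.
        const res = await fetch(`https://payroll.example.com/employees/${employeeId}/salary`);
        const body = await res.json() as { net: number };
        return body.net;
      }
    }

    // Loosely coupled consumer: written against the contract, not the vendor.
    async function printPayslip(payroll: PayrollService, employeeId: string) {
      console.log(`Net salary: ${await payroll.getNetSalary(employeeId)}`);
    }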

Now, after more than a decade of efforts, both large and small, to implement SOA in practice, it is questionable whether SOA achieved what it set out to do. Many large enterprise platforms are still expensive to maintain, many enterprises still struggle with complex legacy technology impeding their time to market, and mega projects struggle to tame the scope of software monoliths. The fact is that today we have better and more widespread adoption of standards, which in theory should enable better interoperability and easier reuse. With all the best intentions and investments in place, how is it that business technology has not improved remarkably?

The burden of SOA and Web Services in practice

When referring to SOA in this article, it is important to distinguish between SOA and Web Services and not to confuse the two. We regard SOA as an architectural style in which business capabilities are decomposed into reusable services, whereas Web Services are a technical implementation, a set of technical standards and protocols that can enable SOA. That said, since SOA became a well-known concept, Web Services have often been treated as synonymous with how to build SOA-style IT systems. The next section therefore discusses the two concepts closely together, acknowledging how they were applied in practice.

Whilst SOA itself does not specify how a service-oriented architecture should be implemented [1], many adopters used Web Services to implement it. There are three reasons why Web Services as an implementation of SOA did not deliver fully on its promise: over-engineering, a complex value proposition, and the myth of the canonical information model.

Over-engineering refers to an unintended consequence of the attempt to tackle complex business problems: complex solutions. Despite the intent to remain business focused and technology neutral, Web Services were born in the age of enterprise frameworks such as J2EE (Java 2 Enterprise Edition), building in layers of architectural abstraction to deliver flexible and agile services. SOAP, the lingua franca for exchanging data between Web Services, was initially short for 'Simple Object Access Protocol' (later rebadged 'Service Oriented Architecture Protocol' by some to emphasise its affiliation with SOA). It evolved from a relatively simple XML-based protocol for remote procedure calls (RPC) into an entire stack of standards and specifications layered on top of each other: message formats, exchange patterns, so-called transport protocol bindings, message processing models, and extensibility of the protocol itself, all wrapped in a verbosity of angle brackets. And SOAP was just one of many standards; on top of it came a myriad of others such as WSDL, XSD, UDDI, and MTOM, each with their own versions and compatibility layers. All this made Web Services incredibly complex to understand, implement, and test. Too much time was spent on the tool itself as opposed to using the tool to solve business problems. Paradoxically, Web Services, in pursuit of simple and nimble business services, became so complex that the effort collapsed under its own weight.

The complex value proposition refers to the fact that SOA offered decision makers a very rich but complicated pitch. Proponents of SOA, the so-called SOA 'evangelists', promised that SOA and Web Services could solve almost any problem under the sun, from inflexible business processes and slow time to market to overburdened core banking transformations suffering from scope creep. "Let's do SOA to fix that" almost became the de facto response of IT executives faced with new, complex problems. To business people, SOA was pitched over and over as the right solution for the job in many different contexts. The constant repositioning of SOA also sparked misconceptions of what SOA really was: was it a product, a new middleware platform, or a standard? So when a project didn't deliver, it was very easy to blame the approach rather than the execution.

The myth of the canonical information model refers to the emphasis in SOA on building a single, interoperable taxonomy for exchanging data between business services and their underlying systems. When Web Services were used to realize SOA, this process would typically result in a set of XML schemas used to validate and format messages as they flowed from service to service. Whilst this is definitely considered best practice from an information management perspective, it can very quickly turn into "boiling the ocean": creating a comprehensive enterprise-wide information model that caters for all possible (and impossible) use cases of many different stakeholders and business applications. For instance, how many definitions of a purchase order might you find in a large enterprise across sales, marketing, manufacturing, and IT, let alone accounting? The key driver of complexity in large shared information models is the many layers of interpretation that different people, departments, and applications apply on top of data models in order to produce meaningful knowledge within their respective contexts. Because everyone must agree on a single model, it becomes very challenging to move quickly towards the future state, as valuable time is spent in 'analysis paralysis' with very little value added. These pitfalls have impacted enterprise architecture programs for years.
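
A small TypeScript sketch of the problem (the types are invented for illustration): context-specific models stay small and fit for purpose, whereas a canonical model must absorb every department's fields and soon makes everything optional for everyone:

    // Two context-specific views of a 'purchase order': small and fit for purpose.
    interface SalesPurchaseOrder { orderId: string; customerId: string; totalAmount: number; }
    interface WarehousePurchaseOrder { orderId: string; skus: string[]; shelfLocation: string; }

    // The canonical-model temptation: one type that must satisfy every
    // department, so almost every field becomes optional and every consumer
    // has to understand (or ignore) all of it.
    interface CanonicalPurchaseOrder {
      orderId: string;
      customerId?: string;       // sales
      totalAmount?: number;      // sales, accounting
      skus?: string[];           // warehouse
      shelfLocation?: string;    // warehouse
      taxJurisdiction?: string;  // accounting
      campaignCode?: string;     // marketing
      // ...one entry per stakeholder, growing with every workshop
    }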

While SOA adopters were struggling to succeed with Web Services, REST (Representational State Transfer), a more lightweight software architectural style, steadily gained ground. Building on the existing HTTP standards, REST provided a simple, stateless mechanism for exchanging any kind of structured data over a distributed network, but unlike Web Services it did not prescribe any particular standards beyond HTTP itself. Thanks to its growing popularity with developers, REST became a credible alternative to Web Services and laid the foundation for today's APIs.

APIs: a poster child of the smartphone age

The term API, or Application Programming Interface, is an old concept describing how software written in different languages, or running on different computers on a network, can interact through a common protocol. For instance, programs written in Visual Basic and C# can integrate using the .NET Framework as the API, which determines things such as data types, inputs, outputs, and operations. Similarly, the Windows operating system provides the Win32 API, which almost all programs, regardless of the underlying programming language, rely on to draw windows on the screen, open network connections, and spawn processes in the background. Web Services also provide a form of API, in that applications can interact over the network through an XML-based protocol.

In the age of the smartphone, APIs mostly refer to services running on a remote server that retrieve, store, and process data used by a mobile app or website. On paper this is not very different from SOA and the use of Web Services for distributed computing; in practice, however, there are some differences, which stem in particular from the increased popularity of REST:

The first key difference is the shift from services to resource thinking, sometimes referred to by the REST community as Resource Oriented Architecture (see previous InfoQ articles, including 'ROA: The Rest of REST' by Brian Sletten and 'Is REST the future for SOA?' by Boris Lublinsky). Just as the WWW is a web of interlinked documents, applications become a web of interlinked resources referring to each other through URLs. Applications tap into these resources and utilise new functionality through standard HTTP requests, no different from how web browsers download documents and images (that is, web resources) via hyperlinks. Whereas service-oriented architectures have typically reflected a neat, pyramid-like, decomposable hierarchy of services, in practice more often than not running on the same middleware, resource-oriented architectures are distributed and decentralised in nature, treating clients and servers as one and the same thing.
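
As a minimal illustration of resource thinking, consider the following TypeScript sketch (the URLs and fields are hypothetical, and it assumes an ES module context so top-level await is available). Resources live at URLs, link to each other, and are manipulated with nothing more than standard HTTP verbs:

    const base = "https://api.example.com";

    // Read a resource, exactly as a browser fetches a document.
    const order = await fetch(`${base}/orders/42`).then(r => r.json());

    // Follow a link to a related resource: a web of interlinked resources.
    const customer = await fetch(order.customerUrl).then(r => r.json());

    // Create a new resource under a collection with a standard verb.
    await fetch(`${base}/orders`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ customerId: customer.id, skus: ["A-100"] }),
    });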

The second key difference is the use of plain old web standards such as HTTP(S) and JavaScript, deliberately avoiding the complexity of lengthy XML standards. Compared to the almost cathedral-like layers of abstraction in contemporary Web Services standards, APIs take a rather minimalist approach, using the protocols and languages of the web browser in pursuit of simplicity and reduced overhead. This is particularly important in a market where the consumer of an API is more likely to be a smartphone with limited bandwidth and battery capacity than a multi-core desktop computer with endless processing power. That said, it is important to note that HTTP is not a necessity for APIs and REST. For instance, CoAP (the Constrained Application Protocol, RFC 7252), a network protocol for small, low-powered devices on the "Internet of Things", is based on REST but does not use HTTP as the underlying transport protocol. Likewise, API platform vendors such as 3scale have developed API technologies that use WebSockets for interactive, fully persistent client-server API communication instead of stateless HTTP.

The third key difference is deliberate statelessness. A stateless service treats each request as an isolated transaction with no knowledge of preceding requests, which is how REST and the HTTP protocol work. Stateless is the opposite of stateful services, where the service itself must keep track of all its clients/consumers across interactions, resulting in much greater complexity. SOA enthusiasts often promoted stateless service design as a sound architectural principle for architects to consider in their blueprints. However, SOA-related Web Service standards such as WS-Transaction and WS-AtomicTransaction were still introduced to handle transactional state across services, both for atomic transactions and for so-called long-running processes. These standards were critical for SOA to fit into corporate enterprise environments governed by rich middleware platforms where transactions are key, but they also signalled the departure from the elegance and simplicity originally touted by SOA. APIs, on the other hand, are deliberately stateless, partly due to the nature of the underlying protocol (HTTP) and partly due to their inherent simplicity. As a result, API designers need not worry about transaction management across API calls, but it also leaves the API consumer, i.e. the app developer, with more work to manage error handling, rollback, and clean-up on the client side. It is worth noting, though, that efforts have been made in the REST community to make REST stateful and transactional [2].
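
The sketch below shows what statelessness can look like on the server side, using Node's built-in http module (the port and payload are arbitrary, chosen for illustration). Everything needed to answer a request travels with the request itself, so no session store is consulted and any instance of the service can serve any call:

    import { createServer } from "node:http";

    createServer((req, res) => {
      // All context arrives with the request, e.g. a bearer token.
      const token = req.headers.authorization?.replace("Bearer ", "");
      if (!token) {
        res.writeHead(401).end();
        return;
      }
      // The response is derived purely from this request:
      // no memory of earlier calls, no per-client state.
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ ok: true, path: req.url }));
    }).listen(3000);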

Outside of the pure technology reasons, APIs have gained traction due to their inherent focus on simple, practical deployment. This, in turn, made it easier for technology leaders to convince their bosses that APIs were worth the investment, simply because it was easy to deliver tangible results very quickly. The API deployment model, that is, where and how APIs are deployed, executed, and accessed by consumers, is often referred to as "microservices" [3]: decomposing the business workflow into a set of extremely fine-grained services, each of which does one thing and does it well. A microservice is typically no bigger than 100-1000 lines of code, beyond which it is time to split it into two separate services. Here, the quick-witted reader would ask: doesn't that sound exactly like SOA? Conceptually yes, but microservices differ in that they are tied to a set of practical deployment principles, which SOA would never be in its effort to remain agnostic of any particular type of technology. The key deployment principles are design outside-in, lightweight tooling, test automation, programmable infrastructure, self-scalability, and multi-channel.

Design outside-in means that APIs are designed from day one as if they were a commercial offering to third-party applications. For instance, if you are writing a microservice API for a legacy mainframe-based warehouse management application, you would write the API so that it hides the complexity of the underlying mainframe technology and provides a simple, clean layer of objects for dealing with stock keeping units, inventory, shelf locations, and so on, exactly as you would if you offered the API as a commercial product, i.e. a managed warehouse solution, to a customer. Not only does this encourage developers to strive for a clean, simple design; it also opens the strategic possibility of monetising products and services that were previously considered internal support functions. For instance, a consumer products retailer with a considerable warehouse and logistics capability, until now used only to serve its own stores, may suddenly decide to offer a managed warehouse and third-party logistics solution to other businesses. Having simple, clean, and well-documented APIs makes this strategy very achievable, because the new customer base can quickly integrate existing systems with the mainframe backend. In addition, charging customers for the use of the new warehouse solution becomes trivial, because accounts management and billing were embedded in the API design from day one. API versioning means that new functionality can be deployed gradually and gracefully without interfering with customers depending on existing APIs.
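
A hypothetical TypeScript sketch of what designing outside-in might look like for the warehouse example (all names are illustrative, not a real product's API): the public surface deals in clean domain objects, while the mainframe plumbing stays private:

    // The outside-in surface: clean domain objects, no mainframe concepts.
    interface StockItem { sku: string; quantity: number; shelf: string; }

    interface WarehouseApi {
      getStock(sku: string): Promise<StockItem>;
      reserve(sku: string, quantity: number): Promise<void>;
    }

    class MainframeWarehouse implements WarehouseApi {
      async getStock(sku: string): Promise<StockItem> {
        // In reality this might drive a terminal session or a COBOL
        // transaction; consumers of the API never see any of that.
        const raw = await this.callLegacyTransaction("INQ", sku);
        return { sku, quantity: Number(raw.qty), shelf: raw.loc };
      }

      async reserve(sku: string, quantity: number): Promise<void> {
        await this.callLegacyTransaction("RSV", sku, String(quantity));
      }

      private async callLegacyTransaction(...args: string[]): Promise<any> {
        // Legacy plumbing elided; stubbed out for illustration.
        return { qty: "0", loc: "A1" };
      }
    }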

Lightweight tooling refers to the use of simple, lightweight development tools and runtime platforms. Whereas SOA in practice was often promoted through monolithic middleware platforms, typically based on Java or .NET with an integrated development environment bolted on top, APIs have emerged out of the need for "hacker friendly" command line utilities and text editors. Developers have favoured JSON, a JavaScript-based notation for describing data structures, over XML due to its readability and the limited need for marshalling data from objects to angle brackets on the API client side. Similarly, alternative languages and runtimes such as Node.js (server-side JavaScript) have gained ground on the server side due to their minimalism, speed, and strong open source community backing, enabling developers to personalise their workflow and automate deployment. This has particularly resonated with technology start-ups, where new platforms are often adopted to differentiate in the market and traditional enterprise platforms tend to be avoided unless absolutely necessary.
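
The marshalling point is easy to see in a few lines of TypeScript (the payload is invented): JSON maps directly onto the language's own data structures, with no schema compilers or binding layers between the wire format and the code:

    // Wire format in, plain object out: no generated stubs, no angle brackets.
    const wire = '{"sku":"A-100","qty":3,"tags":["fragile","priority"]}';

    const item = JSON.parse(wire);      // parse the message
    item.qty += 1;                      // work with it as ordinary data
    const reply = JSON.stringify(item); // and serialise it straight back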

Test automation is the use of scripts to automate unit, functional, and integration testing every time a change is made to the API code base. Whilst this is by no means a new phenomenon or specific to APIs, APIs enable fast testing because of the focus on rapid deployment. Whereas testing of complex middleware stacks can be challenging to automate, due to the need to tear down, reconfigure, and bring up multiple tiers at the same time, the lightweight architecture of microservices coupled with container-based infrastructure makes testing of APIs a lot easier, and integration defects can often be discovered early.
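
As a sketch of what such an automated check might look like, here is a small integration test using Node's built-in test runner (node:test, available from Node 18 onwards); the endpoint and expected payload are hypothetical. A CI pipeline would run this on every commit against a freshly started container:

    import { test } from "node:test";
    import assert from "node:assert";

    test("GET /orders/42 returns the order as JSON", async () => {
      // Assumes the API under test has been started locally, e.g. in a container.
      const res = await fetch("http://localhost:3000/orders/42");
      assert.strictEqual(res.status, 200);

      const body = await res.json();
      assert.strictEqual(body.orderId, "42");
    });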

Programmable infrastructure is the application of DevOps practices to deploy and operate API services and infrastructure. DevOps applies agile development techniques to systems and network operations. For instance, instead of splitting development and operations into two separate teams or departments, DevOps stresses the need for operations people to work as part of a development team in order to increase collaboration. This is particularly useful for firms where APIs evolve rapidly and new versions are deployed weekly or even daily. Achieving and sustaining efficient DevOps requires significant automation of infrastructure services, using tools such as Docker, Chef, and Ansible to deploy and manage releases to the cloud, automate configuration management, and reduce effort spent on manual administration tasks.

Self-scalability refers to the ability of microservices to scale out automatically, on the fly, without waiting for IT to provision an entirely new virtual server, allocate storage, install the operating system, and so on. Since microservices are often deployed on lightweight container technology (such as Docker), an instance of a microservice can automatically spawn hundreds or thousands of new instances of itself during peak demand (and scale down again afterwards). This is very different from the "classic" enterprise IT environment, where enterprise services and dependent business applications are deployed on large hypervisors, which take a long time to prepare, provision, and spin up. The key difference lies in the ability of container technology to deploy a new container image in milliseconds, in combination with the deliberately stateless nature of microservices. With no shared state between service instances, new instances can be dynamically created, provisioned, and shut down, all transparently to the API consumer.

Multi-channel is the principle of designing all APIs for consumption by multiple devices and end users, providing a seamless experience regardless of the client. A commonly used technique for developing reusable APIs for multiple channels is to adopt standards from mobile development, such as JSON [4] (JavaScript Object Notation) for messages and OAuth [5] for security, which tie in well with native mobile/tablet applications and web-based applications. The low overhead of these standards means they fit well with devices where battery life and bandwidth are scarce.
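
A minimal sketch of a multi-channel call (the URL is hypothetical): the same JSON endpoint, secured with an OAuth 2.0 bearer token, is consumed identically from a native app, a single-page web app, or another server:

    async function getProfile(accessToken: string) {
      const res = await fetch("https://api.example.com/me", {
        headers: {
          Authorization: `Bearer ${accessToken}`, // OAuth 2.0 bearer token
          Accept: "application/json",             // lightweight JSON payload
        },
      });
      if (!res.ok) throw new Error(`Request failed: ${res.status}`);
      return res.json();
    }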

All of the above principles are key to why APIs have seen such wide disruption and adoption. SOA, by contrast, is far less tied to its physical and practical manifestation. It is therefore probably more correct to view APIs as a practical application of, and extension to, SOA, retrofitted to the age of multi-channel retail and mobile devices.

How APIs deliver business value

Companies invest in APIs not only for their technical elegance, but because they offer improved speed to market for new products and services. In the following sections we explore this in detail.

Firstly, speed to market stems from the fact that a small group of developers can build a prototype API and deploy it on a cloud service within days. Depending on the complexity of the underlying system or problem the API encapsulates, it is very easy to build a prototype application or web page that invokes the API and mashes it up with data from other services, as the sketch below illustrates. As long as the API contract (that is, the collection of input and output parameters, names, and data types used to access the service) remains stable, a development team can gradually deploy changes to improve the service without impacting the overlying applications and service consumers. This model of decoupling application dependencies through simple interfaces fits very well with agile delivery teams that release new software early and often, as they can iterate and deliver new features provided that the API remains the same. Rapid prototyping of enterprise IT services suddenly becomes very real.
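
The sketch below shows the kind of mash-up prototype this enables, assuming two hypothetical services (a store API of our own and a third-party weather API): a few lines of TypeScript combine them into something demonstrably new:

    // Combine our store API with a third-party weather service to show
    // today's forecast next to each store. Both endpoints are invented.
    async function storesWithWeather() {
      const stores = await fetch("https://api.example.com/stores").then(r => r.json());

      return Promise.all(
        stores.map(async (store: { id: string; city: string }) => {
          const weather = await fetch(
            `https://weather.example.org/today?city=${encodeURIComponent(store.city)}`
          ).then(r => r.json());
          return { ...store, forecast: weather.summary };
        }),
      );
    }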

Secondly, speed to market stems from testability and automated quality assurance. Whilst SOA conceptually provided a great opportunity to automate testing through continuous integration practices, the influence of heavyweight ESB platforms made it very cumbersome to set up, test, and tear down an entire ESB environment without considerable effort and resources. The use of lightweight containers and low-overhead runtimes for deploying and running APIs means that they are very easy and fast to deploy and script. Every time a change is made to the source code, multiple integration tests can run instantly, with errors reported within minutes rather than at the end of a six-month development cycle.

Finally, speed to market stems from simplicity, enabling changes to be implemented quickly. Where SOA through Web Services evolved into a web of abstractions supported by very complex, integrated tools, API developers often promote the use of text files and minimalist technologies to develop and deploy services. Changing a data type does not require new schemas to be generated, shared, or parsed; it is just a change in a text file. As a result, developers are more efficient, which lowers the cost of delivery and, in turn, attracts new investment.

A key observation is that API efforts are often funded and driven outside the traditional IT organisation. Similar to how mobile apps have grown as "skunkworks" projects emerging out of the marketing department, many companies build APIs as part of a strategy to launch a completely new product or spin-off. By being an integral part of a business model, APIs attract funding and capital that traditional business cases for SOA struggled to find. A particular pattern I have observed is the strategy of taking an in-house back-end function, such as billing and payments, and turning it into a revenue stream by encapsulating the function in a simple API and reselling it to other businesses on an as-a-service basis. For example, a large retailer had its own PCI-DSS [6] compliant payments switch, which it used solely for its own stores. It successfully transformed this switch from a cost centre into a new revenue source by 'wrapping' the switch in a new public payments API offered to small retailers, at a lower rate than the existing payment gateway providers, a market largely dominated by banks.

What it takes to deliver APIs: implications for delivery organisations

A theme that appears multiple times in this article is that APIs differ from SOA and other enterprise IT approaches in the way they are delivered. Agile, self-organising teams with development, infrastructure, and automation skills are often seen as the answer, but there is much more to it. The transition to an API-enabled enterprise requires large changes to the traditional way of planning, building, and running IT; we call it the two-speed operating model for IT.

With APIs and other digital initiatives growing out of multi-channel marketing strategies and skunkworks projects, sometimes with only limited involvement from corporate IT, there is an increasing gap between what digital and IT delivery organisations require to operate. Digital focuses on quickly developing and delivering new digital offerings and APIs to the market, whereas corporate IT focuses on keeping the lights on. Digital requires new technology capabilities that can be too complex or costly to deliver on in-house or managed-service infrastructure: on-demand cloud infrastructure that allows new virtual machines to be provisioned within seconds, software-defined networks, and virtual load balancers with horizontal and vertical auto-scaling in line with demand. Similarly, digital delivery emphasises different skills, with designers, developers, and DevOps engineers working in closely knit agile teams to deliver new services, whereas many traditional IT organisations operate on a plan/build/run basis, relying on operational, service management, and sourcing experts to excel.

Whilst digital can often inject innovation, both capabilities are required for companies to be successful. Commoditised IT such as email, desktops, or IP telephony has no need for digital delivery or DevOps, because its application and use are well tested; it is part and parcel of a robust corporate function. On the other hand, companies need to build digital capabilities to differentiate in the market and remain relevant to consumers, and a stable IT core is a prerequisite for digital to experiment, fail fast, and succeed. This notion of a two-speed IT operating model becomes the fabric on which APIs are developed, because APIs rely on services from both worlds. Digital capabilities deliver the service front-end and deployment environment for APIs in order to quickly deliver and evolve a product, while IT delivery capabilities are critical to keeping the lights on for the core business and running existing and legacy business applications. Digital and IT delivery organisations complement each other in large organisations that need both new digital capabilities and stable core IT operations to succeed.

I have seen this modus operandi work particularly well for a very large global retailer that was looking to build new multi-channel retail capabilities. The existing IT organisation had a reputation for running projects slowly and at high cost, which led business executives to source new capabilities externally. Whilst new digital initiatives were implemented faster and cheaper this way, it also meant that the retailer depended almost entirely on other companies to deliver and run its future core processes, a notable risk for a company with a digital agenda. We therefore explored the idea of a new organisational blueprint in which technology was delivered at two different speeds within the company. The first speed was IT, reporting to the CIO, focused on existing business and legacy applications that were key to keeping the lights on. The second speed was a new digital delivery group, reporting to the chief digital officer, focused entirely on building new API-enabled applications. Each group operated independently with its own technology stack, delivery methodology, and leadership, but with the shared purpose of delivering technology that enabled the retailer to differentiate in the marketplace.

APIs for the future - beyond mobile and channels

Moving forward, there are three key roles for APIs beyond the typical areas of mobile and channels: moving to the core of the enterprise, nanoservices [7], and connecting to the Internet of Things.

Firstly, moving to the core of the enterprise means challenging the stronghold of dominant enterprise applications and demonstrating that large monolithic platforms need APIs in order to adapt to changing business requirements and new channels. Regardless of what enterprise software vendors may claim, what we consider 'core platforms', such as transactional banking platforms in financial services, policy/claims/billing platforms in insurance, and merchandising platforms in retail, are all more or less monolithic in nature: they depend on single, monolithic data stores, centralised processes to complete transactions, and complex interfaces to integrate. To truly embrace digital at the core of how enterprises work, core platforms will have to become much more distributed and horizontally scalable, similar to how channel platforms and B2B APIs work today. APIs are a means of making that happen, by designing core banking, insurance, and retail platforms on the principles described in this article: designing outside-in, lightweight tooling, automation, and so on.

Second, we can expect to see the rise of "nanoservices": very small microservices, often deployed on unikernel-based systems such as MirageOS [8], providing improved horizontal scalability. Unikernel systems deploy an application on top of a single-address-space machine image tailored to the exact requirements of that application, leaving out unnecessary code and functionality. The resulting image (for deployment on a hypervisor) is very nimble, making it very fast to deploy and scale. Consider the layers in a typical deployment today: a service written in Java runs on the Java virtual machine inside a Docker container, which runs on a general-purpose Linux kernel with virtual memory management, which in turn runs on bare metal or in yet another virtual machine. Unikernels do away with this proliferation of abstraction upon abstraction. The idea is that current technology has too many layers in place, creating too much overhead and complexity; instead, a service is deployed as a single application on top of a minimal kernel containing only the exact code needed to make it run (e.g. a TCP/IP stack, threading, and Unicode support). Nanoservices become very fast to scale out, although the underlying technology is still considered experimental. Nanoservices are also expected to become popular in the age of cyber crime, due to the drastically reduced attack surface that comes from stripping the userland and kernel down to the bare minimum.

Thirdly, we can expect APIs to have a big impact on the growing popularity of the Internet of Things (IoT). Whilst IoT startups constantly build new innovations, there are few or no standards in place to ensure IoT devices and wearables can communicate. Enter APIs: building a standard API for IoT devices to interact and exchange data over a network is clearly the next use case for microservices. The real challenge then becomes building an API platform that can run on distributed, low-powered devices and securely route packets between different types of IoT gear, taking into account the mobile and fragile nature of IoT-based networks (outside, on the go, or in a mesh layout across a hospital) and integrating equipment and technologies from many different IoT vendors.

 


[1] As per the OASIS definition of SOA: OASIS, Reference Model for Service Oriented Architecture 1.0, OASIS Standard, 12 October 2006, http://docs.oasis-open.org/soa-rm/v1.0/soa-rm.pdf

[2] See Pardon, G. and Pautasso, C. (2014), Atomic Distributed Transactions: a RESTful Design, submission for WS-REST 2014, the Fifth International Workshop on Web APIs and RESTful Design at WWW2014, 7 April 2014, Seoul, Korea, retrieved online.

[3] For a stringent definition of microservices as an architectural style, see.

[4] JSON is short for JavaScript Object Notation, a way of describing objects and data structures using JavaScript syntax. It is often favoured by developers because of its simplicity (the grammar can be described in a single state machine diagram) and its readability for humans.

[5] OAuth is an open protocol for secure authorisation used by Microsoft, Google, Facebook, Twitter, and many others. The protocol is developed and maintained by the OAuth working group, which is part of the IETF.

[6] PCI-DSS, the Payment Card Industry Data Security Standard, is an information security standard for organisations that handle payment card information.

[7] We have previously covered nanoservices, fine-grained microservices (10-100 lines of code) deployed in lightweight unikernel containers to improve scalability and promote decoupling. Read Mark Little: Microservices? What about Nanoservices? for details.

[8] See the MirageOS web site.

About the Author

Anders Jensen-Waud is based in Sydney, Australia, and works as a management consultant with Strategy& (part of the PwC network of firms), a global strategy consulting firm. His work experience includes digital strategy and enterprise architecture and their application in the banking, insurance, and energy sectors. Recently, Anders was the lead architect for a core insurance platform digitisation program with a large commercial insurer. He holds an MSc degree in Business Administration and Computer Science from Copenhagen Business School. Outside of work, Anders is a regular contributor to open source software, including FreeBSD and the effort to port the .NET Framework and Roslyn compiler suite to FreeBSD. In his spare time he enjoys spending time with his wife and two sons and long distance running. He can be reached at anders@jensenwaud.com (private) and anders.jensen-waud@strategyand.au.pwc.com (work).


Community comments

  • Common Information Model

    by Jean-Jacques Dubray,

    Anders,

    I won't comment much on your article; that'd be a waste of time. I am certain anyone reading the "nanoservice" part would start having significant doubts about the rest of the content, or lack thereof.

    I invite you to read an article I have submitted for publication on SAM, a new pattern that decouples APIs from the front-end.

    The problem with the recommendations that you make (focus on APIs / ditch the service layer underneath) is that you end up building "vertical slices". Many companies (for instance CBA in Sydney, a PwC customer) have tried that pattern before, and it is deadly. That is the core problem of API-based OmniChannel architectures; that problem alone will kill your Digital Transformation, because unlike SOA, it denormalises the business logic.

    I do want to address your point about the Common Information Model. Your claims are justified; however, throwing the baby out with the API water is not the right thing to do. What has always been missing for the CIM pattern to work is a tool. I built that tool; it comes as a free and open source Eclipse plugin.

    If you want to significantly lower the cost and risks of digital transformations, you actually need:
    a) a CIM-based service layer underneath that provides intentional and consistent access to the systems of record. Otherwise you end up having no consistency, or replicating consistency logic in every vertical slice
    b) to decouple APIs from the front-end, i.e. drop MVC. When you start building APIs to support "screens" you quickly end up adopting the vertical slice pattern, unwittingly, and once you do that, you are dead.

  • Re: Common Information Model

    by Florian Sommer,

    I don't really get this whole "Vertical Slices" discussion; maybe someone can clarify where my mental model of all this is flawed, or whether I just misinterpret the terminology:

    As I understand the thesis: a per-view endpoint leads to vertical slices. Isn't that dependent on the implementation of my API? I mean, if I am exposing my system's internal details through my API, then yes, I'll have one "vertical slice" per view...

    Maybe we should talk more about a concrete example to avoid pointless discussions based on homonyms (maybe I just misunderstand what is meant with "vertical slice" and that is causing my headache):

    Let's say I am called to build an API on top of an old-school JEE7 web app: There is an @Entity "Customer" class, a "DBCustomerRepo" EJB. And now I'll produce a JAX-RS class for ".../customer/{id}". This class is calling the Repo to receive a Customer Entity - and marshalling this entity directly to the JSON or XML Response.

    Of course that's bad. Every time I change my Entity / Database Layer, all my clients might break. Vice versa: if I optimize my API for one view, I need to optimize my Entity for this view. When my view changes, I need to change my Database. It's one "vertical slice": View, API 'Endpoint', Repo, Entity, all in a 1-to-1 fashion *nightmare*

    But all real Services which I know have some mechanism to decouple the API layer from the backend layers. For example: The JAX-RS Endpoint calls several Repos to aggregate customer data from several Entities (customer, address, payment-history, whatever) and delegates that to a Mapper which maps those data to a CustomerDTO which is then marshalled - or the mapper uses XMLAdapters instead of an DTO Class or it uses Jackson Nodes or whatever. The point is that the API structure is decoupled from the underlying business logics and database layers. The Customer Resource and the Customer Entity are sharing a name and some fields, but both can evolve independently. The API Layer is "Thin" and just like the UI layer it is an anti pattern to contain too much business logic.

    Thus, even if you produce exactly one Jax-RS Endpoint per view which is highly optimized for this view (e.g. one CustomerMobileDTO, one CustomerSPAFrontendDTO, etc.), it's not a true "vertical slice" anymore. "All you need to do" if the Customer-View needs an additional field 'x' on the customer representation is to adjust the Mapper to map field 'x' (and the CustomerDTO, if existing, to contain field x).

    Of course this can still be a lot of work for more complex scenarios than 'adding field x'. But if you do the 'orchestration' on the client side you also need to adjust the mapping.

    I am considering the 'internal project' scenario, where you are using your own API, for example for a single page app or a mobile client etc. In other scenarios no one should come up with the idea of "per view" APIs: if it is a public API with potentially hundreds or thousands of customers, I don't think anyone would try a "per view" API. I mean, the idea of producing an optimized endpoint for every one of your 900 consumers is... a bit too ambitious ^^

    --
    SO... sorry I needed that many words but I really thought a lot about it and I just can't get my head around this implication of "Per-View API -> Vertical Slices". Only if the assumption is that the API is not encapsulating the implementation details.

    If I'm overlooking the obvious please forgive me; I'm still a student and still learning.

  • Re: Common Information Model

    by Jean-Jacques Dubray,

    Unfortunately, your model does not take into account "team ownership" in large IT organizations: who owns the aggregation of "customer data from several Entities"? What if a business team has decided to go international and now, just for that product in your organization, all addresses need to have a country, while the other products don't need to capture the country of the customer?

    When you have a small startup building a monolith, the vertical slices start appearing later, when you are afraid to break things: yes, you can change the aggregator and risk breaking support when you launch this new marketing feature.

    It's just pure dependency management, and unfortunately MVC and patterns like BFF make it more efficient/less risky to adopt a vertical slice pattern rather than producing shared assets. The rationale goes: if you go through the effort of building a view-specific endpoint, with just a few more lines of code (that you will copy-paste into the next vertical slice) you'll reach the database.

    At first everything will feel good, and your manager will be happy because you just lowered the risks and costs of delivery. The problem will appear much later, when you have a gazillion vertical slices.

  • Resources vs. methods

    by Greg Brown,

    Hi Anders,

    I was a REST advocate for many years, but I eventually got sick of trying to model everything in terms of resources. Sometimes it is just more convenient to think in terms of methods, especially when defining an API.

    I created this open-source project as an attempt to combine the flexibility of SOAP with the simplicity of REST:

    github.com/gk-brown/HTTP-RPC

    Like SOAP, it allows callers to define their own verbs (i.e. methods), but like REST, it uses human-readable URLs and JSON rather than complex XML messages and descriptors. It is also inherently stateless, but can be used in a stateful manner if desired.

    Further, it is extremely lightweight. The server JAR file is only 54K, and has zero dependencies aside from a Java 8 VM and a servlet container. Similarly lightweight client libraries for iOS and Android are also provided.

    If you have a chance to check it out, I'd be interested to know what you think.

    Greg
