
Beyond SOA: A New Enterprise Architecture Framework for Dynamic Business Applications

Posted by Vasile Buciuman-Coman, Michael Chervenic on Mar 28, 2008 |

Part I - Even with All the Requirements and a Best Fit Design, Your Architecture is Very Likely to Fail and Here is Why...

Education, in present management schools, trains operators of corporations. There is almost no attention to designing corporations... Almost never has anyone intentionally and thoughtfully designed an organization to achieve planned growth and stability.
        -Jay W. Forrester, Designing the Future (1998)

Introduction

In a paper called “The Dynamic Business Applications Imperative,” John R. Rymer, a senior analyst with Forrester, points to a fundamental shortcoming of today’s applications:

Today's applications force people to figure out how to map isolated pools of information and functions to their tasks and processes, and they force IT pros to spend too much budget to keep up with evolving markets, policies, regulations, and business models.

IT's primary goal during the next five years should be to invent a new generation of enterprise software that adapts to the business and its work and evolves with it.

Forrester calls this new generation Dynamic Business Applications, emphasizing close alignment with business processes and work (design for people) and adaptability to business change (build for change). At this stage, the requirements for Dynamic Business Applications are clearer than the design practices needed to create them. But the tools are at hand, and pioneers in service-oriented architecture (SOA), business process management (BPM), and business rules — including independent software vendors (ISVs) — have begun showing us the way. The time to start on this journey is now.

In this two-part article we are going to take a holistic view of the development of these Dynamic Business Applications (DBAs), from both an architecture and a methodology perspective. Our goal is to derive how an application should be built to easily adapt to business changes and other required modifications. With the focus of 21st-century enterprises on flexibility, DBAs become the breakthrough necessary to make business and IT successful in the decades to come.

Fig 1. Flexibility and Efficiency – the two main drivers for the 21st century enterprise

What Do We Mean By Dynamic?

In software engineering, many frameworks or products claim to be adaptive. A robust definition of how systems change – their dynamics – is necessary before trying to understand how well a solution can adapt to change.

Early object-oriented methodologies recognized[1] that system analysis, in order to be neutral, has to be based on two types of real-world requirements:

  • Real-world entities—collecting information about real-world entities and the relationship between them allows an analyst to start with an objective view of system structure instead of a technical, subjective view
  • Real-world events—system behavior is driven solely by the occurrence of events that change the states of real-world entities 

Within this context, for every system analyzed, we can always identify one or a few entities that are most important. For each entity, there is a triad of associated elements: events, states, and lifecycle. Each event represents a change in state, and the ordered sum of all normal entity states represents a lifecycle. However, there is a clear distinction between events that trigger a change in state as part of the normal flow and events that trigger state changes but are not part of the normal flow. For instance, a possible set of events expected when a product order is submitted includes payment processing and order delivery. When a customer changes the order or when the business changes a price, we cannot consider these actions part of the normal flow, so they are not associated with the entity (such as order) lifecycle. Instances of the core entity lifecycles alone define what the system will most likely process during normal operations. All other event types, like changes or intermediary steps, are processed differently.

This scenario is familiar to most engineers: a system model has a given core entity structure with a given set of events that form a lifecycle. The system model is clear and easy to follow for both the analyst and the system designer. Modeling tools, like the Finite State Machine, Entity Relationship Diagrams, Entity State Transition Diagrams, and Data Flow Diagrams, have been refined for almost two decades to support this approach. This is how software with millions of lines of code has been written for complex systems like the Airbus A380 or the F-22, the most advanced fighter plane in the world. Virtualization of the entity lifecycle through Object Flow Diagrams, a basic model that captures both events and state transitions, is critical in this model. In this case, the architecture can be considered static because the entire system state can be determined at any point in time.
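The lifecycle-as-state-machine idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the "Order" entity, its states, and its normal-flow events are assumptions chosen for the example. Note how anything outside the declared normal flow (say, a price change) is rejected as a lifecycle step, matching the article's distinction between normal-flow events and change events.

```python
# A minimal finite-state-machine sketch of an entity lifecycle.
# Entity name, states, and events are illustrative assumptions.

class OrderLifecycle:
    # Normal-flow transitions: event -> (required current state, next state)
    NORMAL_FLOW = {
        "submit":  ("created",   "submitted"),
        "pay":     ("submitted", "paid"),
        "ship":    ("paid",      "shipped"),
        "deliver": ("shipped",   "delivered"),
    }

    def __init__(self):
        self.state = "created"
        self.history = []

    def handle(self, event):
        """Apply a normal-flow event; anything else is a change, not a lifecycle step."""
        if event not in self.NORMAL_FLOW:
            raise ValueError(f"'{event}' is not part of the normal flow")
        required, nxt = self.NORMAL_FLOW[event]
        if self.state != required:
            raise ValueError(f"event '{event}' not valid in state '{self.state}'")
        self.history.append((event, self.state, nxt))
        self.state = nxt

order = OrderLifecycle()
for e in ["submit", "pay", "ship", "deliver"]:
    order.handle(e)
print(order.state)  # delivered
```

Because every transition is declared up front, the entire system state is knowable at any moment, which is exactly what makes this architecture static in the article's sense.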

 

Fig 2. Event-model, state changes, and lifecycle are at the center of normal operations

This relationship between normal events, states, lifecycle, and the differentiation of normal events from other types of events is essential to understanding the proposed framework for dynamic operations. As James Martin and James Odell wrote a long time ago in their book “Object-Oriented Analysis and Design,” analysts, designers, and implementers should all use the same system model. Analysts think with data flow diagrams, designers think with structured charts, and programmers think with Java and SQL. In the context of data flow, an analyst identifies object types and thinks about events that change the state of objects. The same understanding is used by the end users. They should also think in terms of object types, events, changes to the state of the objects, and business rules that trigger and control events. Martin and Odell highlight the importance of Object-Flow Diagrams for system designers: “Event schemas are appropriate for describing processes in terms of events, triggers, conditions, and operations. However, expressing large complicated processes in this way may not be appropriate. Often a system area is too vast or intricate to express the dynamics of events and triggers. Perhaps, in addition, only a high level of understanding is necessary. This is particularly true of strategic-level planning. In situations such as these, an object-flow diagram is useful. Object-flow diagrams (OFDs) are similar to data-flow diagrams (DFDs), because they depict activities interfacing with other activities. In DFDs, the interface passes data. In OO techniques, we do not want to be limited to data passing. Instead, the diagram should represent any kind of thing that passes from one activity to another: whether it be orders, parts, finished goods, designs, services, hardware, software--or data. In short, the OFD indicates the objects that are produced and the activities that produce and exchange them.”

Business analysts supplement the OO methodology with Value Stream Mapping to capture information flows associated with business operations. Value Stream Mapping originated at Toyota and is closely associated with Lean Manufacturing. The U.S. Environmental Protection Agency defines Value Stream Mapping as “a Lean process-mapping method for understanding the sequence of activities and information flows used to produce a product or deliver a service.” The key words here are “product” and “service”: they show the unifying role the right information flow plays in the overall enterprise.

When the two concepts of Object-Flow Diagrams and Value Stream Mapping are merged, the result is a framework foundation that fully represents the entire scope of enterprise operations and can be easily translated into OO concepts (Fig. 2).

However, many systems are not static, because their changing behavior cannot be captured by a fixed set of relationships and events. In fact, they operate in an unknowable[2] future, and traditional tools for capturing their dynamics do not apply. All business applications and most real-world systems fall into this category. Based on their interaction with external systems, these systems process three types of changes:

  • Normal operations – the normal sequences of events that form the main entity or entities' lifecycles. With each event, the entity changes its state, a process that is normally easy to define. Within normal operating behavior, there are elements of the operating context that are not supposed to change. For instance, when a customer orders a product, during the entire process of taking the order, preparing the order, and delivering the order, attributes like price, composition, and delivery type are not expected to change.
  • Internal changes – As mentioned above, during the core entity lifecycle, certain context elements are not supposed to change. In the real world this does not always hold true, as management decisions or other factors may force context elements to change. We call changes to internal system attributes internal changes.
  • External or environmental changes – No matter how much a business would like to believe that it “owns” a customer, the customer always retains some freedom to change his or her mind. It is very unlikely that a system can always have a “fixed” contract with external systems like customers or suppliers. However, normal operations are most likely designed around that “fixed” set of rules. We call changes initiated outside of the system external changes.

All three types of changes must be addressed to successfully model a real-world system. This model is orders of magnitude more complex than the model for a static architecture. From the normal-operations perspective, both internal and external systems operate completely independently of the main information flow. Internal and external systems even have their own operations - management has its own decision cycle and customers have their own operating environment - represented by their own information flows. These three systems operate in three parallel universes as far as information flows are concerned. The only solution for handling these three unsynchronized information flows is to implement separate, full change management for each of them. This complex interaction is captured in Figure 3, the dynamic operations diagram.
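One way to picture "separate, full change management for each information flow" is to give each flow its own queue of pending changes, reconciled with the normal-operations context at defined points. This is a sketch under assumptions: the `ChangeManager` class and the concrete changes (a management price cut, a customer delivery upgrade) are hypothetical names invented for illustration.

```python
# Sketch: normal operations plus two independent change-management flows.
# Class and change names are illustrative assumptions.

from collections import deque

class ChangeManager:
    """Each information flow gets its own change queue, processed
    independently of the normal-operations event stream."""
    def __init__(self, name):
        self.name = name
        self.pending = deque()

    def submit(self, change):
        self.pending.append(change)

    def apply_all(self, context):
        # Reconcile pending changes with the operating context
        while self.pending:
            context.update(self.pending.popleft())

internal = ChangeManager("internal")   # management decisions
external = ChangeManager("external")   # customer-initiated changes

order_context = {"price": 100, "delivery": "standard"}
internal.submit({"price": 90})            # management lowers the price
external.submit({"delivery": "express"})  # customer upgrades delivery

# Normal operations proceed; each flow is reconciled on its own schedule.
internal.apply_all(order_context)
external.apply_all(order_context)
print(order_context)  # {'price': 90, 'delivery': 'express'}
```

The point of the separation is that neither queue blocks normal operations or the other queue: the three flows stay unsynchronized, as the article describes, and meet only at reconciliation points.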

Services provided by a cruise operator to its customers demonstrate dynamic operations. When ordering, a customer makes decisions about the services he plans to participate in while on the cruise. However, not only may he change his mind before and during the cruise, but the cruise operator may also need to change those services, either to take advantage of new opportunities or to respond to unforeseen events. With a system designed using current approaches, most of these changes are handled manually at great operational cost. With a system designed on the principles of Dynamic Business Application architecture, changes from customers and from internal management decisions are handled by two change-management subsystems that are fully integrated with normal operations. This will not only lower the cost, but also help increase the quality of service and even optimize operations to increase profits. Being totally generic, such a system can also reuse standard components. In the end, it is a win-win-win situation for everyone involved: business, IT, and the customer.

 

Fig 3. Dynamic Operations have three dimensions – normal events, internal controls, and external feedback

One of the goals of a Dynamic Business Application is to make it easy to design and implement software to support the dynamic systems common to all businesses. In our experience, the technical approach to building a DBA is most effective when coupled with a new approach to framework-first engineering.

Designing Systems for the Enterprise Requires a Framework-First Approach

Building dynamic business applications is more difficult than it looks. Engineering, including software engineering, uses a century-old framework-first approach. When a bridge, a plane, or even a software application is designed, the same approach is used: a design team collects the requirements and then applies a well-defined set of framework steps to design and build the system. Existing frameworks have proven very successful for designing bridges and other engineered systems, but have been hit-or-miss propositions when it comes to software for business systems. There is a fundamental difference between these two categories of systems. In the case of bridges and planes, the end result is always a system that has a static architecture: when a change is applied, like increasing the load on a bridge or the speed of a plane, the entire system is likely to be redesigned from scratch. Unfortunately, software is too often designed with a static architecture as well: every time an unexpected change is introduced into operations, it is very likely that everything is stopped and the system is replaced with a new one that has the added functionality. Because business operates in a continuously changing environment and is more dependent than ever on IT systems, this stop-and-go solution is unacceptable. The cost of continuously upgrading and integrating new systems into the enterprise infrastructure has reached unsustainable levels.

Reliance on business stakeholders' input for all of our requirements is the first problem. There is no true “framework first” approach to software design that accounts for dynamic operations when requirements are considered. Even the Carnegie Mellon Software Engineering Institute states that architecture is driven by scenarios created from stakeholders' input: “The elicitation of a software-intensive system's business goals is an integral part of standard architecture design and analysis methods. Business goals are the engines that drive the methods, through their realization as scenarios....An ideal design method or process must take into account many stakeholders and many forces.”

As engineers, how do we know when we have the right scenario or if we have the right stakeholder when we collect system requirements? How do we account for future changes to the business, changes that are normally unknowable to any stakeholder involved?

In a presentation made a few years ago, “Designing the Future,” Jay Forrester, an MIT professor and the father of system dynamics, pointed to this problem as a fundamental limitation of how we approach designing systems for the enterprise:

“A fundamental difference exists between an enterprise operator and an enterprise designer. To illustrate, consider the two most important people in successful operation of an airplane. One is the airplane designer and the other is the airplane pilot. The designer creates an airplane that ordinary pilots can fly successfully. Is not the usual manager more a pilot than a designer? A manager runs an organization, just as a pilot runs an airplane. Success of a pilot depends on an aircraft designer who created a successful airplane. On the other hand, who designed the corporation that a manager runs? Almost never has anyone intentionally and thoughtfully designed an organization to achieve planned growth and stability. Education, in present management schools, trains operators of corporations. There is almost no attention to designing corporations.”

Forrester’s observation further invalidates the current enterprise architecture approach. Stakeholders are the wrong category to use as a primary source of knowledge when we need to architect an enterprise system. They are the “users” of the enterprise, not the “designers.” When a plane is designed, aircraft designers are very unlikely to ask pilots or passengers about the plane's architecture. Nevertheless, there is no school, engineering or MBA, that has enterprise design in its curriculum.

So, going back to enterprise architecture, there is a fundamental disconnect. “Users” of the enterprise architecture are business stakeholders, most likely business graduates familiar with business dynamics but not with the engineering approach. Designers and builders of enterprise systems, the software engineers, are familiar with static frameworks but not with dynamic frameworks. In fact, there is no framework that accounts for the dynamics of the business. This is the hole that needs to be filled in terms of an Enterprise Architecture Framework. Such a framework would not only describe how the enterprise is “designed,” but would also define the development roadmap and the main components used to build systems to support the enterprise.

Fig 4. Business vs. Engineering approach to the enterprise architecture

We propose a fundamental change to how we architect and design information systems, in a way that does not require business stakeholders as the primary input. We recommend utilizing a framework centered on the business entity lifecycle and event model as the primary sources of input into the architecture. Business scenarios are used only as a post-architecture fine-tuning exercise, instead of as the main driver.

This framework-first approach accounts for the dynamics of the business from the beginning, rather than as an afterthought. It implies that the designed enterprise application is dynamically adaptive, instead of statically adaptive as implemented by today’s methodologies. This cuts the cost of developing and maintaining software by orders of magnitude and cuts into the over 70% of IT spend that is directly linked to changing and maintaining existing systems.

Fig 5. Framework-Based Approach to software solution development

The later we push our intuition, experience, and skills into the design process, the less likely it is that costly errors will be made. Requirement changes due to stakeholder input can be processed within the already existing solution framework, reducing risk and delays.

A Standish Group report showed only a 3% success rate for projects over $10 million. Industry consultant and Harvard Business School professor Andrew McAfee showed that organizations deploying costly technology such as ERP, CRM, supply-chain management, e-commerce, and other enterprise applications are achieving success rates anywhere from 25 percent to 70 percent. McAfee concluded that "the problems faced were not separate things, they are all examples of the same thing, which is basically an effort to change business processes using IT." Lately, these percentages have improved, but IT still lacks a clear path for complex projects. A framework-first approach capable of integrating business changes into IT implementations and providing a clear translation between business and technology could raise the success rate to close to 100 percent. Architecting complex systems could take days instead of months or years.

Server Architecture for Dynamic Operations – Forget the SOA, Welcome to the Assembly Line for Information

In the early 1990s, events played a central role in almost every book about object-oriented methodologies. With the arrival of GUI-based operating systems like Windows, platforms for GUI development relied on complex event models to design and build standalone applications. However, in a client-server environment, processing events on the server side was always based on a much simpler model.

When Web-based technologies like J2EE and .NET started to replace traditional client-server applications, both the client and the server went through fundamental transformations. A rich OS event model on the client side was replaced by a Web browser and a primitive scripting language. On the server side, event processing was replaced by a stateless, flat, and static architecture, very similar to the way Web pages are delivered to Web browsers.

In order to meet real-world requirements that call for a stateful, hierarchical, dynamic, and distributed architecture, the design has to rely heavily on databases to store a wide variety of dynamic information. However, relational databases are by definition excellent at storing relationships between data that is not likely to change. Storing stateful information that is dynamic, distributed, and hierarchical requires a different infrastructure, one that is not relational-database centric.

Web-based technologies introduce other challenges to supporting real-world systems. When J2EE application servers are used in a distributed environment, one of the biggest advantages of using Java as a programming language is completely eliminated. Garbage collection in Java Virtual Machines was never designed to automatically remove from memory objects that are exchanged between multiple instances. In this case, the architect and the designer have to orchestrate the entire object lifecycle, regardless of the programming language's capabilities. Even when information is stored in a database, purging the data becomes a very difficult task.

As we have seen earlier, once we are able to separate normal operations from changes, the architecture can rely solely on events and lifecycles for design and implementation. Designing systems around a lifecycle has been done for more than a century: it is called the assembly line.

Before the assembly line was introduced at the beginning of the 20th century, manufacturing was done more or less the way today's Service-Oriented Architecture (SOA) processes information. Each call into an SOA service is normally treated as a stateless call. To account for the history of previous calls, each service has to fully implement how to handle the internal state of the system. When requirements change, almost all services have to change their individual implementations, with significant rewrites to code at great cost.
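The contrast between the two styles can be sketched side by side. This is an illustrative simplification, not the authors' design: the service and station functions, and the tiny in-memory "database," are assumptions. In the stateless style, every call must reconstruct context and every service owns transition logic; in the assembly-line style, the work item carries its own state and each station performs one step.

```python
# Contrast sketch: stateless SOA-style call vs. assembly-line pipeline.
# All names and the in-memory DB are illustrative assumptions.

# Stateless style: every call reloads context from storage,
# and the service itself must know the transition rules.
DB = {}

def apply_event(state, event):
    transitions = {("created", "pay"): "paid", ("paid", "ship"): "shipped"}
    return transitions.get((state, event), state)

def stateless_service(order_id, event):
    state = DB.get(order_id, "created")  # reconstruct history on each call
    state = apply_event(state, event)    # service owns state handling
    DB[order_id] = state
    return state

# Assembly-line style: the work item moves through stations in a fixed
# order, carrying its state with it; each station does exactly one step.
def station_pay(item):
    item["state"] = "paid"
    return item

def station_ship(item):
    item["state"] = "shipped"
    return item

line = [station_pay, station_ship]
item = {"order_id": 1, "state": "created"}
for station in line:
    item = station(item)
print(item["state"])  # shipped
```

When requirements change in the assembly-line style, stations can be added, removed, or reordered in `line` without rewriting the stations themselves, which is the adaptability argument the article makes against per-service state handling.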


Fig 6. Server Architecture is based on Event Model      

The information architecture that we propose is centered on the event model and lifecycle, and can be used as a two-way translation platform between business requirements and system architecture. The event model is extended on the next level by four fundamental models: stateful, distributed, hierarchical, and dynamic. All five can be easily interpreted, by either the business or technologists, as basic requirements and architectural components:

  • Event Model/Lifecycle – this is the model around which all other models are built. For business users it is the value stream, a sequence of processes that reflects the product/service lifecycle and creates value for customers. For technologists, the same series of events reflects the change in state of the object representing the main business entity. As a result, the event model acts as a decoupling element, making the design and the implementation a much easier task. The event model also plays another important role. Because the other four models are built around the event model, it also acts as an integration platform. In fact, the event model is the only way to create components that implement changes, are hierarchical, and are distributed.
  • Stateful Model – for business users this model represents the enterprise dashboard, capturing in its entirety the status of current operations. For technologists it is the overall state of the system, which is the sum of current states for all lifecycle instances, together with their changes.
  • Distributed Model – for business users this model captures various organizations that control the main product/service during its lifecycle. For technologists, the main entity can be controlled only based on the distributed model.
  • Hierarchical Model – all businesses have a hierarchical structure that represents the management layers. Any system design should reflect control hierarchy in the architecture. The same way a VP of Sales cannot give a command to a CEO, a component that is lower in rank cannot send an execution “command” to a component on a higher level.
  • Dynamic Model – the most important model after the event model. Business users use this model to capture all the changes their operations have to deal with on a daily basis. As mentioned before, there are two types of changes: external, like customer input, and internal, like management decisions. For technologists, the dynamic model translates into a plug-in architecture that matches various events, lifecycles, and types of changes.
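The Dynamic Model's "plug-in architecture that matches various events, lifecycles, and types of changes" can be sketched as a handler registry. This is a minimal illustration under assumptions: the registry keys (entity type, change type) and the two handlers are hypothetical, not part of the proposed framework's actual API.

```python
# Sketch of the dynamic model as a plug-in registry: handlers for
# (entity type, change type) pairs are registered without touching
# the normal-operations code. All names are illustrative assumptions.

PLUGINS = {}

def plugin(entity_type, change_type):
    """Decorator that registers a change handler for a given pair."""
    def register(fn):
        PLUGINS[(entity_type, change_type)] = fn
        return fn
    return register

@plugin("order", "external")
def customer_amends_order(entity, change):
    # Customer-initiated change: merge the amendment into the entity
    entity.update(change)
    return entity

@plugin("order", "internal")
def management_reprices(entity, change):
    # Management decision: only the price attribute is affected
    entity["price"] = change["price"]
    return entity

def dispatch(entity_type, change_type, entity, change):
    handler = PLUGINS.get((entity_type, change_type))
    if handler is None:
        raise LookupError(f"no plug-in for {(entity_type, change_type)}")
    return handler(entity, change)

order = {"price": 100, "items": ["cabin"]}
dispatch("order", "internal", order, {"price": 80})
print(order["price"])  # 80
```

Supporting a new kind of change then means registering one more plug-in rather than modifying the event-processing core, which is the "build for change" property the five models aim at.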

These five models can be used not only for initial design, but also when requirements change throughout the system lifecycle. Together they form the missing framework steps for adaptive business system design. The five models eliminate the need for use cases as the primary input to an enterprise architecture. They can be translated into a clear and comprehensive set of engineering questions, similar to those used in the design of a bridge or a plane.

Fig 7. The Adaptive Architecture is a result of an Information Architecture based on five models

The five models are used to define a generic architecture for Dynamic Business Applications that revolves around changes and the assembly line for information. The most important subsystems are: the Static Model, the Change Management, the Virtual Object to Assemble, and the Event Processing. On the next detail level, we have two more subsystems: System Command & Control and Persistence.

This architecture also solves a long-standing quest to achieve a generic solution for the transactional workflow metaphor, work carried out by Jim Gray[3] over many decades. Because the entire design is centered on the execution of a single event, its processing can implement not only “move to the next assembly step” but also “move to the previous assembly step.” The implementation decides which “direction” to go based on user input. By “linking” multiple steps together, a transactional workflow undo can be implemented using the same architecture.
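The forward/backward idea can be made concrete by pairing each assembly step with a compensating action, so a chain of executed steps can be rolled back in reverse order. This is a sketch, not the architecture described in the article: the `Step`/`Workflow` classes and the charge/reserve steps are hypothetical names chosen for illustration.

```python
# Sketch of "move to the next / previous assembly step": each step
# pairs a do() with an undo(), and executed steps are tracked so the
# workflow can be unwound. All names are illustrative assumptions.

class Step:
    def __init__(self, name, do, undo):
        self.name, self.do, self.undo = name, do, undo

class Workflow:
    def __init__(self, steps):
        self.steps = steps
        self.done = []              # executed steps, in order

    def forward(self, ctx):
        step = self.steps[len(self.done)]
        step.do(ctx)
        self.done.append(step)

    def backward(self, ctx):
        step = self.done.pop()      # undo the most recent step
        step.undo(ctx)

ctx = {"charged": 0, "reserved": False}
wf = Workflow([
    Step("charge",
         lambda c: c.update(charged=100),
         lambda c: c.update(charged=0)),
    Step("reserve",
         lambda c: c.update(reserved=True),
         lambda c: c.update(reserved=False)),
])
wf.forward(ctx)   # charge
wf.forward(ctx)   # reserve
wf.backward(ctx)  # undo reserve only
print(ctx)  # {'charged': 100, 'reserved': False}
```

Linking steps this way is essentially the saga/compensation pattern: an undo is not a database rollback but the execution of the same event machinery in the opposite direction.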

One of the key elements of a Dynamic Business Application is event processing. Using a new theory of information for adaptive systems, each event can be “executed” using one generic component structure. The components use declarative programming to embed business logic and to call workflow engines, schedulers, and business rules engines. This implementation not only dramatically speeds up the development of an adaptive system, but also makes later changes very easy to handle and reduces the need to maintain complex integrations.
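A toy version of "one generic component structure" executing declaratively described events might look like the following. The rule names, the event dictionary shape, and the executor are all assumptions made for illustration; in a real system the `RULES` table would be a business rules engine and the executor would delegate to workflow engines and schedulers.

```python
# Sketch of a single generic event executor: the event carries a
# declarative description (rules to check, attributes to set), and one
# component executes every event the same way. Names are assumptions.

RULES = {
    "payment_ok": lambda ctx: ctx.get("amount", 0) > 0,
}

def execute(event, ctx):
    """Generic executor: evaluate the event's declared rules, then apply its effects."""
    for rule_name in event.get("rules", []):
        if not RULES[rule_name](ctx):
            return ("rejected", rule_name)
    ctx.update(event.get("sets", {}))
    return ("applied", event["name"])

# The business logic lives in data, not in per-event code:
pay_event = {"name": "pay", "rules": ["payment_ok"], "sets": {"state": "paid"}}

ctx = {"amount": 50}
status, _ = execute(pay_event, ctx)
print(status, ctx["state"])  # applied paid
```

Because the executor never changes, adding or altering an event is a data change, which is what makes later modifications cheap in this style.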

We suggest creating a physical model that provides control, operations, and environmental-change handling around the event (operations) and the event lifecycle. The Lifecycle Controller manages the assembly of information for discrete events. A change-management function directs the execution of internal and external changes to the standard event model and to individual events.

Conclusion

We have discussed how the evolution of flat, stateless, static, client-server, Web-based solutions has contributed to the disconnect between IT architecture and the real world of hierarchical, stateful, dynamic, distributed business. We also discussed how traditional engineering approaches do not support the development of adaptive systems capable of supporting the dynamics of business. We have shown that a possible solution to both of these problems can be found in a new model-driven architectural approach.

The second part of this article will describe a possible architecture for Dynamic Business Applications as well as a case study, introducing an actual implementation of our concepts.

References

[1] Yourdon Systems Method: Model-Driven Systems Development. Yourdon Press, 1993.

[2] Eric D. Beinhocker, The Origin of Wealth. HBS Press, 2006. Beinhocker, a McKinsey & Company senior advisor, argues that the traditional view of economics as a static, equilibrium-balanced system is undergoing a radical rethinking involving a multitude of disciplines. In the resulting "complexity economics," the economy is viewed as a highly dynamic and constantly evolving system that is all but impossible to predict; the book deals with how companies can set strategy when the future is unknowable.

[3] Mark Whitehorn, "Interview with Jim Gray," The Register: http://www.regdeveloper.co.uk/2006/05/30/jim_gray/


Wow - A lot to think about here... by Robert Gersna

Looking forward to your possible architecture and a case study with implementations...

The difference between iterative and incremental & Dynamic Business Applications by ashraf galal

The article stated, “For instance, a possible set of events expected when a product order is submitted includes payment processing and order delivery.
When a customer changes the order or when the business changes a price, we cannot consider these actions part of the normal flow, so they are not associated with the entity (such as order) lifecycle. “
We usually allow the customer to change the order after it has been placed, up to the return of the received products.
This is considered an event that triggers a change in state as part of the normal flow.
The second example, "business changes a price," is not considered part of the order's normal flow, but it is considered part of the system's normal flow.
We incorporated such changes into the system following business rules that
can be applied to any order in a dynamic manner.
Also, the Dynamic Business Application framework is very similar to the iterative and incremental architecture defined by the Rational Unified Process (RUP), which supports dynamic change in business requirements during the development lifecycle, from the beginning of the project. I appreciate your comments.
Thank you

Re: The difference between iterative and incremental & Dynamic Business by Vasile Coman

There is a fundamental difference between the iterative and incremental architecture defined by RUP and this framework. While RUP describes how changes in requirements are implemented during the analysis/design/coding phases, this framework shows how changes to the normal business operations implemented by the system are handled automatically at runtime (this is done by including change-management modules built around events as part of the basic system architecture, and by making a clear distinction between use cases that describe normal operations and use cases that capture changes to normal operations). As mentioned in the article, the second part will describe how this framework is applied in practice, including the distinction between how the system handles changes to requirements during coding and changes to normal operations at runtime.

SOA Mischaracterized? by John Quinn

Most SOAs have an orchestration engine, which essentially plays the assembly line role when a message must be routed between multiple services in a specific order. Is this the kind of thing you had in mind? Perhaps I should just wait for part 2.

Where can I find Part II? by Robin Meteor

Where can I find Part II?

InfoQ.com and all content copyright © 2006-2014 C4Media Inc.