
Building Applications, the Workflow Way

by Boris Lublinsky on Jun 10, 2009

 

David Chappell starts his new article "The Workflow Way" by discussing what it means to write great server-side software:

Everybody who writes code wants to build great software. If that software is a server application, part of being great is scaling well, handling large loads without consuming too many resources. A great application should also be as easy as possible to understand, both for its creators and for the people who maintain it. Achieving both of these goals isn’t easy. Approaches that help applications scale tend to break them apart, dividing their logic into separate chunks that can be hard to understand. Yet writing unified logic that lives in a single executable can make scaling the application all but impossible.

He then discusses different approaches to achieving such an implementation:

 

  • The simplest implementation is a unified application that runs in a single process on a single machine. Such an application typically has to:
    • Maintain state (in Chappell's example, a simple string variable).
    • Get input from the outside world, such as by receiving a request from a client. A simple application could just read from the console, while a more common example might receive an HTTP request from a Web browser or a SOAP message from another application.
    • Send output to the outside world. Depending on how it’s built, the application might do this via HTTP, a SOAP message, by writing to the console, or in some other way.
    • Provide alternate paths through logic by using control flow statements such as if/else and while.
    • Do work by executing appropriate code at each point in the application.
    This simple approach has several advantages:
    • ... the logic can be implemented in a straightforward, unified way. This helps people who work with the code understand it, and it also makes the allowed order of events explicit.
    • ... working with the application’s state is easy. That state is held in the process’s memory, and since the process runs continuously until its work is done, the state is always available.
    The main limitation of this approach:
    When the application needs to wait for input, whether from a user at the console, a Web services client, or something else, it will typically just block. Both the thread and the process it’s using will be held until the input arrives, however long that takes. Since threads and processes are relatively scarce resources, applications that hold on to either one when they’re just waiting for input don’t scale very well.
  • An application that shuts down while waiting for input and restarts when that input arrives. In this case the application contains the same logic as before, but it’s now broken into separate chunks. When the client’s first request is received, the appropriate chunk is loaded and executed. Once this request has been handled and a response sent back, this chunk can be unloaded - nothing need remain in memory. When the client’s second request arrives, the chunk that handles it is loaded and executed. Such implementations are typical for Web applications, where a particular page serves a specific request and the application then waits for the next one. The advantages of such an architecture are:
    • This approach doesn’t waste resources, since the application isn’t holding on to a thread or a process when it doesn’t need them.
    • ... lets the application run in different processes on different machines at different times. Rather than being locked to a single system... the application can instead be executed on one of several available machines
    These advantages come with the price of additional complexity:
    • ... the various chunks of code must somehow share state. Because each chunk is loaded on demand, executed, then shut down, this state must be stored externally, such as in a database or another persistence store.
    • ... the code no longer provides a unified view of the program’s overall logic... control flow isn’t evident. In fact, the chunk of code that handles the client’s second request might need to begin by checking that the first request has already been done. For an application that implements any kind of significant business process, understanding and correctly implementing the control flow across various chunks can be challenging.
  • A workflow-based application. A workflow-based application does the same things as an ordinary application: maintaining state, interacting with the outside world, controlling execution flow, and performing the application’s work. In a workflow, however, all these things are done by activities. These activities correspond functionally to various parts of a typical program, but rather than using built-in language elements to coordinate activities’ execution, as a traditional program does, execution of activities in a workflow is coordinated by the workflow runtime, which knows how to run activities. The advantages of such an architecture are:
    • ... the workflow way gives the developer a unified control flow. Just as in the simple case, the program’s main logic is defined in one coherent stream. This makes it easier to understand... The workflow itself expresses the allowed control flow.
    • ... the workflow doesn’t hang around in memory blocking a thread and using up a process while it’s waiting for input. Another advantage is that a persisted workflow can potentially be re-loaded on a machine other than the one it was originally running on. Because of this, different parts of the workflow might end up running on different systems
    Additional advantages of the workflow approach described by David include coordination of parallel work, higher level reuse (activity level reuse), process execution visibility/tracking, etc.
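The contrast between the approaches above can be sketched with a minimal, hypothetical workflow runtime: activities are plain callables, the workflow declares its control flow as data, and the runtime - rather than language constructs like if/while - coordinates execution and can persist the instance between inputs. All names here are illustrative and are not part of Windows WF.

```python
import json

# Hypothetical activities: named units of work that read and update state.
def greet(state):
    state["message"] = f"Hello, {state['name']}"

def shout(state):
    state["message"] = state["message"].upper()

class WorkflowRuntime:
    """Toy runtime: runs activities in their declared order and can
    persist a workflow instance's state between steps."""

    def __init__(self, activities):
        self.activities = activities  # control flow as data, not code

    def start(self, state):
        return {"state": state, "next": 0}  # a fresh workflow instance

    def run_step(self, instance):
        # The runtime, not the program text, decides what runs next.
        self.activities[instance["next"]](instance["state"])
        instance["next"] += 1
        return instance["next"] < len(self.activities)  # more work left?

    def persist(self, instance):
        return json.dumps(instance)  # could go to a database instead

    def restore(self, blob):
        return json.loads(blob)  # possibly on a different machine

runtime = WorkflowRuntime([greet, shout])
instance = runtime.start({"name": "Ada"})
runtime.run_step(instance)

# Simulate waiting for input: persist, drop from memory, restore, resume.
blob = runtime.persist(instance)
restored = runtime.restore(blob)
runtime.run_step(restored)
print(restored["state"]["message"])  # -> HELLO, ADA
```

Because the instance is just serializable data plus a position, nothing needs to stay in memory - or even on the same machine - while the workflow waits for its next input.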

The rest of David’s article describes the specifics of the Windows WF implementation, its usage scenarios, and its integration with other .NET technologies, including WCF, Dublin, and ASP.NET. He also outlines new WF features introduced in .NET 4.

Although David’s article describes how Windows WF can be used for building workflow applications, in the words of Tom Baeyens:

... [it] explains the essence of workflow and BPM engines... BPM engines are different from plain programs like Java, C, Cobol etc in 2 key aspects:
  • The runtime state is persistable. At any point during execution of a process, the process execution can be interrupted and stored. Later the execution state can be retrieved from persistent storage and continued.
  • Graphical representation. The second aspect where BPM processes differ from plain programs is that BPM processes are aimed to be represented graphically with boxes and arrows.
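Baeyens's first aspect - persistable runtime state - can be illustrated with a process that keeps an explicit program counter. Execution can be interrupted at any wait point, stored, and later retrieved and continued, without holding a thread or process in the meantime. The process and its field names below are invented for illustration.

```python
import json

# A hypothetical order process with an explicit, persistable program
# counter -- the moral equivalent of a BPM engine's execution state.
def order_process(execution, signal=None):
    """Advance the process; return True when it is waiting for input."""
    if execution["position"] == "start":
        execution["order"] = {"item": "book", "approved": False}
        execution["position"] = "await_approval"
        return True  # blocked: waiting for an approval signal
    if execution["position"] == "await_approval":
        execution["order"]["approved"] = (signal == "approve")
        execution["position"] = "done"
    return False

execution = {"position": "start"}
order_process(execution)              # runs until it needs input

# Interrupt: store the execution state; no thread or process is held.
stored = json.dumps(execution)

# Later (maybe days later, maybe on another machine): retrieve and continue.
resumed = json.loads(stored)
order_process(resumed, signal="approve")
print(resumed["position"], resumed["order"]["approved"])  # -> done True
```

A real engine persists this state transactionally and correlates the incoming signal to the right process instance, but the principle is the same: execution state is data, not a live call stack.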

The article is a great read for everyone who wants to understand how workflow engines work and which applications are appropriate for workflow usage.

the BPM way by Hendri Thijs

I thought the "workflow" way had to do with different scoping. Whereas a traditional application has a scope of different subsets of business processes, a BPM system explicitly scopes (sub)processes as a whole. In doing so, changes in and optimizations of these business processes can be done in (near) real time.
