
Designing and Developing Cross-Cutting Features

Every developer has had to integrate with another system, API or component at one point or another. And, often, we are tasked with a business feature that must span systems. If you've been on a project like this, or have one in the pipeline, this article will provide strategies to handle the change. It will also guide your thinking about separating system boundaries and what that means for your technical design.

This type of project can generally be classified into one of two categories: integration projects and cross-cutting projects. In this article we will address the differences between the two, explain why you should care, and spend a good amount of time discussing strategies to attack these problems.

As an early note, the logical framework here is not about Aspect-Oriented Programming (AOP), where the term cross-cutting is part of the everyday vocabulary. AOP is a different technique at a different layer than what this article intends.

First, let's address what makes a cross-cutting project different from an integration project: an integration project can be just about anything - integrate your app with a vendor's app, integrate an external API or just integrate data from some back-end process. For this article we are primarily interested in the integration of shared business logic and data across systems. Integration projects are the workhorses of business [Diagram 1]. But they aren't cross-cutting projects.

[Diagram 1. Integration Project Approaches are not shared across platforms]


Cross-cutting projects are slippery beasts. So, let's try to define them in context:

Cross-cutting projects have the same functionality, ideally shared, across two separate code bases [Diagram 2]. Additionally, acknowledging this difference from integration projects provides the opportunity to address other concerns that slice across an application stack, like performance, manageability and security. Later in the article we will look at why these concerns are easier to address in cross-cutting projects than in integration projects.

[Diagram 2. Cross-Cutting Features Across Applications Are Horizontal]


Right about now, you want to solve this problem using some technology or pattern, a one-size-fits-all approach - but first, wait for the example.

Example: Web Analytics Tracking Tags across Platforms; e.g. Google Analytics

What is interesting here is not calling a web analytics service. That's cake - drop the tag on the page. What makes this interesting, and cross-cutting, is deciding which tracking tags to put on the pages, which is a totally different ball game from what those tags in turn call. In our case, we have Platform A running in parallel to Platform B.

Case Study Platforms

These two platforms jointly serve approximately 400 ecommerce websites based on the same product catalog. Platform A is the incumbent, a battle-worn ASP.NET WebForms application that is tightly coupled to stored procedures and SQL Server. This platform is not going away anytime soon.

Platform B is the startup. This platform is loosely based on ASP.NET MVC and has a radically different architecture. It is loosely coupled in most regards - it uses a guided navigation index instead of a database, it refreshes its product store via message-queue-based transactions instead of SQL replication, and its front end is a complete departure from WebForms, with a custom-built templating engine.

In short, the only similarity between the two is that they are both .NET.

The sites running on each platform are also different. Platform A will host sites 0-350 and Platform B will host sites 350+ and all new sites. Occasionally, a site will be hosted on both as an A/B test. For example, BlamoSports.com will run on Platform A as site id 236 and another version of the same site will run on Platform B as site id 403. In this way, the two platforms occasionally run the same site, but the great majority of the time they are running different sites.

[Diagram 3. Case Study Platforms]


Do you still have the same answer? I bet you do and I bet it’s "use a service", or some variation of that. Right? If so, then yes, that’s a great answer and a route the beaten and bruised team has taken before - but that alone does not solve our cross-cutting problems.

A service in our case study provides an endpoint to the bulk of the shared functionality, but we'll soon see that's not the end of the implementation. After all, we could just spin up a new assembly to share across web apps and deploy that to the farm(s), since both stacks run .NET 3.5/4.0. Each platform still has to gather data, maybe do a little transforming of said data, find some common types and negotiate some sort of page-level hooks. All of this is required before a service is even called. Therefore, services alone will not solve the problem. So, what to do?

Consider the Long Term for the Short Term

A decision has to be made. It always comes down to something as axiomatic as this, and there is no way to avoid it. Consider that at a macro level we must choose between improving our system by evolving our platform architecture, or by simply improving how we add features within our existing architecture.

[Diagram 3a] shows our current platforms. Platform A is much more tightly coupled, and since Platform B moves on a similar business-feature prioritization schedule as A, it shares some of those characteristics. In our case study we are closer to a tightly coupled design than a loosely coupled one. Ideally we'd like to move horizontally and keep evolving our platform to be more loosely coupled, for the myriad reasons we won't even try to get into here.

[Diagram 3a. Evolving the Architecture]


It's not easy to move in the direction of loose coupling though, especially for anything larger than a small system. It costs money and resources, often beyond the budget for a given feature. When this is the case it's often better to move vertically by improving what you have and keeping the architecture. This is demonstrated in [Diagram 3b]. Cross-cutting techniques are most often used in this domain - improving without significantly altering the architecture.

[Diagram 3b. Improving the existing architecture]


Feature Team

If at all possible, prefer a feature team to knock out a cross-cutting project. A feature team that is responsible and empowered to make changes across application platforms has a much greater chance of success than one whose hands are tied in bureaucracy (layers of scrum masters, committees, possessive managers, architects). This team will most likely need to touch data sources, service layers, infrastructure pieces, components and UIs. The size of your organization and the complexity of your systems will drive this decision.

Another approach is to use component teams. This simply won't scale in most environments, and it really ties the hands of the project owner of the cross-cutting feature. Major coordination will be needed between lots of teams, personalities and schedules to make these kinds of cross-cutting changes. If at all possible, run with the feature team - this is what they are for.

The last approach isn't really an "approach". It's managing by accident, where developers of each application just do it on their own. For instance, developers on Platform A say "sure, we can build this feature in and here is our timeframe" and developers on Platform B give a similar response. They do not agree to coordinate timelines or technical structure. This will not result in a cross-cutting feature, and integration costs will be high, with repeated work and stumbles.

A Priori Knowledge of the Systems

You don't have to be completely knowledgeable about each individual system before starting, but you will need some up-front knowledge. If you're practicing your trade in some flavor of an agile environment, create some tasks around investigation and get going; then take a few steps back, collect some thoughts and see if a more cohesive plan can be created. By now the major pieces should be a little less foggy and the team will have a feel for what they're getting into.

Next, take the level of effort for changing Platform A on a regular project and call it E. So a single-platform change costs E x 1.

Unfortunately, changing two platforms in a cross-cutting fashion is not E x 2. It's probably more like E x N, where N is a multiplier of effort. There are different ways to calculate N, but that's a different story and may just add too much speculation to timelines while ignoring the experience of the team. Just know that this type of change may be far from linear. That is the point to understand.

Choose an Attack Vector

A software system is not 2-dimensional, as in it’s not flat - there are many ways to attack the project.

The Pacific Naval battles during WWII included ship, submarine, air and ground fighting. When the Americans identified an enemy ship, maybe one without many escorts, they had options on how to engage that vessel. They could send over a few destroyers and start shelling, they could launch some torpedoes from underwater or strafe and bomb from the air. The enemy was not 2-dimensional so it was necessary to choose an attack vector.

Software systems are similar in that they, too, offer many attack vectors. For instance, in our case, do we want to introduce analytics into the application domain at the page level in WebForms and the controller layer in MVC? Do we want to move the functionality higher up the stack, maybe into some kind of base-class hierarchies? Or how about slamming in everything we need in some kind of post-render processing? There are many options, and each project and team will have a variation of responses.

With so many options it's important to identify what the attack vector is and be able to describe it - at the very least, to be able to describe the mental model of the solution to peers. The attack vector is basically a framework of constraints - in technical terms, the architecture.

To integrate the analytics and tracking tags described, our attack vector was page_render - and we decided that each platform would be responsible for building a context object that is passed into one source.

[Diagram 4. Platform Independent Context Objects]


Triangulate

Once you have the attack vector it's important to guard against other parts of the system dirtying up your model (Evans, 2001). This requires you to set up some other baselines that give your work a fixed point of reference. What does this mean?

Again, back to pushing web analytics into two code bases - we need to identify, in each platform, exactly where in the client a reference to the analytics tag-generating service will be constructed and used. We also need to understand how to create the context objects to pass to this service. Now that we know these two requirements (points in space), we can measure the angle to the actual cross-cutting change (tracking tags) and calculate how far we are from being able to accomplish this goal. If the triangulation model doesn't work for you, just think about it as focus, focus, focus on the attack vector by analyzing the other parts surrounding the change.

Vector Alternatives

Choosing an attack vector and triangulating isn’t the only way to address a cross-cutting project - it’s just one approach that works well because of the purposeful contrast it creates. It may not work in your situation because of limited ability to create a feature team, starkly different technology stacks (Platform A versus Platform B) or lack of engineering experience. Another approach is to utilize the domain knowledge that your teams already have to generate a deeper dive analysis.

A deeper dive analysis equates to general up-front analysis of both systems, done by members of the feature team and coordinated by a technical lead or project manager. In this way, n members from the feature team analyze and inspect Platform A alongside Platform A engineers while the other n members analyze Platform B with Platform B engineers. This is a classic analysis approach. The feature team then correlates the results, and decisions are reached on the main hooks into each application and what the shared code/service/library will look like.

The end result should look the same, only the approach differs.

A Proxy

In our case study, both platforms are running on the Microsoft stack with different versions of ASP.NET. That makes things noticeably easier, as now we can deploy a shared .NET assembly that contains an interface and a proxy object. The proxy points to a Tracker Service, which is indeed the service we said was still required at the start of this article. The proxy provides an identical interface for both platforms to work with, and an interface deployed in the assembly hides the internals of the proxy well.
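To make that shape concrete, here is a rough sketch of what the shared assembly could look like. The names (ITrackerService, TrackerServiceProxy), the endpoint handling and the use of WebClient with JavaScriptSerializer are illustrative assumptions, not the production code:

using System;
using System.Collections.Generic;
using System.Net;
using System.Web.Script.Serialization;

// Shared contract that both platforms compile against.
public interface ITrackerService
{
    // Takes the platform-built context document (JSON) and returns
    // the fully-qualified tracking tags ready to drop into the page.
    IList<string> GetTrackingTags(string contextJson);
}

// Proxy that hides the Tracker Service details from both platforms.
public class TrackerServiceProxy : ITrackerService
{
    private readonly Uri _endpoint;

    public TrackerServiceProxy(Uri endpoint)
    {
        _endpoint = endpoint;
    }

    public IList<string> GetTrackingTags(string contextJson)
    {
        // Post the context document to the Tracker Service and
        // deserialize the returned collection of tags.
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            string response = client.UploadString(_endpoint, contextJson);
            return new JavaScriptSerializer().Deserialize<List<string>>(response);
        }
    }
}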

But why this shared proxy object concept, and not just sharing the endpoints directly? It's a matter of philosophy and preference (again, a framework of constraints). We want to keep the cross-cutting change isolated in a nice slice. If we start exposing too many endpoints and URIs directly to the clients it begins to break down the model, because really, at all costs, we want to preserve the integrity of the design so we can get some miles out of it.

[Diagram 5. Shared Proxy]


Building the Context

How the context is built depends on each platform, as it will be noticeably different. For instance, Platform A (ASP.NET WebForms) has a totally different set of business logic and code, so extracting a site id (e.g. site_id) requires calling up to a Page base class. In Platform B (ASP.NET MVC), by contrast, the site_id is available in the execution context. Thus, building the context is platform specific, but that's probably not surprising.
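As a sketch of that difference, the hypothetical SiteBasePage class and the MVC route value names below stand in for whatever each platform really exposes; only the common TrackerContext shape matters:

using System;
using System.Web.Mvc;
using System.Web.UI;

// The common shape both platforms agree to fill in.
public class TrackerContext
{
    public int SiteId { get; set; }
    public string PageType { get; set; }
}

// Hypothetical base page contract for Platform A; the real base class differs.
public abstract class SiteBasePage : Page
{
    public abstract int SiteId { get; }
    public abstract string PageType { get; }
}

// Platform A (WebForms): the site id has to come from the Page base class.
public static class WebFormsContextBuilder
{
    public static TrackerContext Build(SiteBasePage page)
    {
        return new TrackerContext
        {
            SiteId = page.SiteId,
            PageType = page.PageType
        };
    }
}

// Platform B (MVC): the site id is already in the execution context.
public static class MvcContextBuilder
{
    public static TrackerContext Build(ControllerContext context)
    {
        return new TrackerContext
        {
            SiteId = Convert.ToInt32(context.RouteData.Values["siteId"]),
            PageType = (string)context.RouteData.Values["pageType"]
        };
    }
}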

We chose to represent our context object consistently as JSON. If this were 10 years ago we would have used XML but, as always, times change. JSON means that we don't have to worry about serialization of different types per platform. For instance, a serialized object developed in Platform A may have a different structure or field naming than one from Platform B, which would make the approach inconsistent. Instead, each context builder simply agrees to build an object to a shared, agreed-upon standard. A notation format like JSON is easier to transfer than serialized types, even if just in terms of describing the design.
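Continuing the sketch, turning the shared context into the agreed-upon document is a one-liner on either platform; JavaScriptSerializer ships with ASP.NET and Json.NET would work just as well (the field names here are illustrative):

// Uses the TrackerContext type and System.Web.Script.Serialization from the sketches above.
var context = new TrackerContext { SiteId = 403, PageType = "order_complete" };

// Both platforms emit the same document, however the data was gathered.
string contextJson = new JavaScriptSerializer().Serialize(context);

// contextJson now reads something like:
// {"SiteId":403,"PageType":"order_complete"}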

Service

The service is ho-hum and rather irrelevant to the discussion. Both Platform A and Platform B will use the same service. Suffice it to say, it receives a context object represented as JSON, does a bunch of work, and returns a collection of fully-qualified tracking tags that the caller then inserts into its HTML rendering process. See [Listing 1] for an example of what these tags look like in the wild.

[Listing 1. Example of tags]

// Google Analytics (async) tags as returned by the service
var _gaq = _gaq || [];
_gaq.push(
    ["aaa._setAccount", "UA-13963949-9"],
    ["aaa._setCustomVar", 1, "psid", "10005", 1],
    ["aaa._trackPageview", _gaContentOutput]
);

// A third-party tracking pixel URL
http://sometracker.com/8999/order_complete/id/888333_8999

Render

Rendering the HTML to the browser is similar to building the local context document (we know it's a document now - not just an object, as when we started): separate platforms will render the output tags from the service differently, and it's not important to our analysis which technique is used.

For instance, Platform A is ASP.NET WebForms, so we can render all the tags to a particular server-side control in the pre-render event. Not the prettiest, but no harm, no foul. Platform B is ASP.NET MVC, where pre-render doesn't quite fit the scheme; instead, we output the tags as part of our view without much ceremony. There are many approaches on any platform - what's important to take away isn't the final-mile details but observing and acknowledging the hooks that make cross-cutting features possible.
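As a hedged sketch of those two hooks, building on the earlier sketches - the placeholder control, the _trackerService field and the ViewData key are assumptions for illustration, not the real platform code:

// Platform A (WebForms): inside a page deriving from the hypothetical SiteBasePage.
// trackingTagPlaceholder is a PlaceHolder declared in the .aspx;
// _trackerService is an ITrackerService field on the page.
protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);

    string contextJson = new JavaScriptSerializer().Serialize(WebFormsContextBuilder.Build(this));

    foreach (string tag in _trackerService.GetTrackingTags(contextJson))
    {
        trackingTagPlaceholder.Controls.Add(new LiteralControl(tag));
    }
}

// Platform B (MVC): inside a controller action, hand the tags to the view,
// which writes them out verbatim through the templating engine.
ViewData["TrackingTags"] = _trackerService.GetTrackingTags(contextJson);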

Don’t start on the wire (service)

In our problem solution we now have code in both platforms that creates an agreed-upon document described with JSON. Each platform makes use of a shared proxy assembly that receives the context document. That proxy knows how to talk to a real service somewhere. Most likely that service is somewhere over the wire. But it doesn't have to start like that. In fact, unless you have masochistic tendencies, do not start on the wire.

Calling remote services, even cute and simple REST services, is not how to start a cross-cutting project. You can eventually build up a suite of functional tests that exercises the real service, but wait until you have your footing - or, even better, until you feel like you're climbing this mountain knowing your carabiner and harness are fully functional. Until then, make use of your favorite IoC/DI (inversion of control / dependency injection) framework.

If you don't have an IoC/DI framework, you can still swap implementations manually via the interface by using a simple factory pattern. This may be all that is needed.

The shared proxy is now going to help with more than just abstracting out the service details - we have the interface to swap out implementations. And not just for traditional unit tests where we are isolating dependencies: we can use our DI container to do end-to-end functional tests with a different implementation - easily - on either platform, or both. Performing small regressions of the system without involving the network or remote services is a boon for productivity. Of course, someone should still make sure the service endpoint itself is running and tested.
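Here is a minimal sketch of staying off the wire, assuming the ITrackerService and TrackerServiceProxy from earlier. The fake implementation, the hand-rolled factory and the placeholder URL are illustrative; an IoC/DI container would do the same wiring declaratively:

using System;
using System.Collections.Generic;

// A fake that lets either platform render pages end-to-end without the network.
public class FakeTrackerService : ITrackerService
{
    public IList<string> GetTrackingTags(string contextJson)
    {
        // Predictable output makes local functional tests repeatable.
        return new List<string> { "<!-- fake tracking tag -->" };
    }
}

public static class TrackerServiceFactory
{
    // Flip this (or let your IoC/DI container decide) during development and testing.
    public static bool UseFake { get; set; }

    public static ITrackerService Create()
    {
        return UseFake
            ? (ITrackerService)new FakeTrackerService()
            : new TrackerServiceProxy(new Uri("http://tracker.example.com/tags")); // placeholder URL
    }
}

Wiring every page and controller through the factory (or the container) means the switch from fake to real service is a single configuration change rather than a code change on either platform.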

Summary

We've covered the differences between a cross-cutting project and an integration project and demarcated the two. We've also examined a case study in building and developing a cross-cutting feature across two different platforms. This has demonstrated key techniques you can employ when entering a cross-cutting project: identify attack vectors, find commonalities, acknowledge what is different, build towards a common interface, use a document format to exchange information and isolate remote services during development by using an IoC/DI container. Following these techniques, the next time you're presented with what looks like an integration project you will be able to identify whether it's really cross-cutting and provide extra value.

About the Author

Stephen Rylander is a technical lead with Morningstar focusing on web platforms on a global scale. He focuses on technical design, high-quality code, teams and automation. Stephen has worked on four different public web platforms handling millions of daily page views - from finance to ecommerce to music and entertainment. He is adept at working with globally distributed teams and has accepted that enjoying software development more than most people is just something he'll have to live with. He is also married, loves to cook and has two beautiful children.
