A Fusion of Proven Ideas: A Look Behind S#arp Architecture

Writing applications for the web is painful. From organizing and testing AJAX scripts to simulating state in a stateless environment, programming web-based applications is a practice which demands careful attention in all stages of planning and development. To further exacerbate the issue, web developers also face typical development struggles such as tackling the object/relational impedance mismatch, selecting the most appropriate set of tools – amongst the myriad of available options – to improve productivity, and infusing the appropriate architecture into a project to bring a solution to production quickly while ensuring the long-term maintainability of the code.

Recent, and still evolving, technologies and techniques are progressively addressing these development challenges, but none of them acts as a silver bullet on its own. By leveraging the strengths of various, carefully selected technologies and techniques, however, we are able to realize tremendous gains in productivity and maintainability without sacrificing quality. This article highlights a number of maturing trends in web development, their benefit in delivering value to the client, and their use within S#arp Architecture, a framework based on ASP.NET MVC which attempts to leverage the best of these techniques and technologies.

Trends Having Achieved Critical Mass

If there is one word that can describe the software development industry, it’s “change.” Our industry is quite young and very much in its infancy when compared to more established disciplines such as civil engineering. The most tangible side effect of this growth period that we are going through is the amount of change which the industry is experiencing, and will likely continue to experience for some time.

An example of such volatility is the rapidity with which project methodologies rise to stardom and fade into oblivion as painful experiments in management gone awry. Another is the rise and fall of techniques and technologies as newer alternatives usurp the benefits of the former. Take the Model View Presenter (MVP) pattern within the world of ASP.NET, for instance. This design pattern was a technique leveraged to introduce greater testability into the world of ASP.NET, but at the cost of added complexity. Recently, Microsoft has introduced ASP.NET MVC as an alternative to ASP.NET, one which achieves a level of testability not previously available. Consequently, for testing controller logic that would often have ended up in the code-behind of an ASP.NET page, ASP.NET MVC has made the MVP pattern obsolete within .NET web development. This is not to say that the principles behind MVP are now invalid, only that a technology has emerged which better exemplifies MVP’s goals of testability with a proper separation of concerns.

So while the software industry continues to experience dramatic change, there are particular trends and ideals which have become foundational in the development and delivery of high quality, maintainable projects. While the implementations of these ideas may change over time, the ideas themselves represent strong underpinnings of successful software that will have a lasting impact on software development. What follows is a brief review of some of these ideals which have achieved a critical mass of acceptance within the development community and which will, accordingly, leave their mark on the future of software development.

Abstracted Infrastructure

It wasn’t too long ago that I used to dread writing CRUD functionality for a new object with the same amount of ominous anticipation as having to repaint my house. It would be a long exercise in redundant tedium filled with many touch ups to mistakes made. From writing stored procedures and ADO.NET adapters to testing fragile JavaScript validation, I would find most of my day filled with wiring up infrastructural details and hoping that I would not have to touch the code again once it was written.

A paradigm shift has matured over the past decade wherein infrastructural details such as these are seen as a menial task better left to dedicated utilities. The challenge has been finding the right tools for the job to facilitate the availability of the infrastructural functionality while allowing the software to be ignorant of the implementation details. NHibernate is a good example of such a tool. NHibernate handles persisting plain .NET objects to and from an underlying relational database. It does so in such a manner as to leave the objects themselves ignorant of how the persistence actually occurs while addressing the object/relational impedance mismatch. Furthermore, it does so without requiring a single line of ADO.NET or stored procedure coding. While NHibernate is a wonderful tool, it more importantly represents the realization of a lofty goal, providing a solid solution to hiding tedious and fragile infrastructural functionality which used to consume a significant portion of development activity.
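
To make this persistence ignorance concrete, here is a minimal sketch; the Customer class, the sessionFactory, and the existence of a corresponding mapping (whether XML or Fluent NHibernate) are illustrative assumptions rather than code taken from S#arp Architecture itself.

using NHibernate;

// A plain .NET object; note that it contains no persistence logic whatsoever.
public class Customer {
   public virtual int Id { get; protected set; }
   public virtual string Name { get; set; }
}

public class CustomerPersistenceExample {
   private readonly ISessionFactory sessionFactory;

   public CustomerPersistenceExample(ISessionFactory sessionFactory) {
      this.sessionFactory = sessionFactory;
   }

   public void SaveAndReload() {
      using (ISession session = sessionFactory.OpenSession())
      using (ITransaction tx = session.BeginTransaction()) {
         // NHibernate derives the INSERT from mapping metadata; no ADO.NET
         // or stored procedure code appears anywhere in the application.
         var customer = new Customer { Name = "Acme, Inc." };
         session.Save(customer);
         tx.Commit();

         // Retrieval is equally ignorant of the underlying SQL.
         Customer reloaded = session.Get<Customer>(customer.Id);
      }
   }
}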

As time progresses, the software industry will surely be introduced to an even greater number of maturing tools and techniques which abstract an increasing number of tedious infrastructural concerns, relegating them to an afterthought of development. For example, with the maturation of NHibernate, additional add-ons, such as Fluent NHibernate with its auto mapping capabilities, have already come along to further lessen the burden of data access management. As mirrored in the prophetic writings of Douglas Hofstadter in Gödel, Escher, Bach, the introduction of appropriate abstractions is a natural progression of software development and should be encouraged, accordingly.
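
As a sketch of how far this can go, the following Fluent NHibernate configuration auto maps the entities of an assembly by convention; the SQLite configuration and the Customer type reuse the assumptions from the previous snippet, and the exact namespaces may vary between Fluent NHibernate versions.

using FluentNHibernate.Automapping;
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;

public static class SessionFactoryBuilder {
   public static ISessionFactory Build() {
      // Auto mapping inspects the entities in the given assembly and generates
      // the NHibernate mappings by convention; no mapping XML is hand written.
      return Fluently.Configure()
         .Database(SQLiteConfiguration.Standard.InMemory())
         .Mappings(m => m.AutoMappings.Add(AutoMap.AssemblyOf<Customer>()))
         .BuildSessionFactory();
   }
}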

Loose Coupling

A common bane of legacy software is tight coupling. (I use the word “legacy” here in an unfairly demeaning way to describe software that you may have unwillingly adopted from another developer, or even from yourself from years earlier!) Examples of tight coupling include bi-directional dependencies between two objects, objects which have concrete dependencies on services such as data access objects, and unit tests which cannot test the behavior of service-dependent code without having all of the services online and available. Tight coupling leads to fragile code, code which is difficult to test, and code which makes every developer run and hide when faced with the task of making modifications. In light of this, it has become almost axiomatic that a key to successful software is the characteristic of loose coupling.

Wikipedia aptly describes loose coupling as “a resilient relationship between two or more systems.” Accordingly, a positive side effect of loose coupling is that you should be able to change one side of a programmatic relationship without adversely affecting the other. To illustrate, there is no other design pattern, in my opinion, that better exemplifies the ideal of loose coupling than Separated Interface, which embodies the Dependency Inversion Principle (not to be confused with Dependency Injection). The technique is often used for cleanly separating a data access layer from a domain layer.

For example, assume that a controller or application service, within an MVC application, needs to communicate with a data access repository object to retrieve a number of items from the database. (The repository in this instance is the “service.”) To resolve this requirement, the simplest solution is to have the controller create a new instance of the repository itself. In other words, the controller is creating a concrete dependency to the repository; e.g., via the new keyword. Unfortunately, this approach brings with it a number of ill effects of tight coupling:

  • It’s difficult to unit test the controller without having a live database to support the repository’s queries. This live database requirement also leads to fragile unit tests whenever data is left in a modified state by a previously run test. When testing controller logic, you’re primarily interested in verifying the behavior of the controller, not whether or not the repository it depends on can successfully communicate with the database. Additionally, when testing against a “live service” (in this case, a repository communicating with a live database), unit test performance slows to a crawl; consequently, developers stop running the unit tests and quality suffers.
  • It’s difficult to swap out the implementation details of the service – the repository – without also modifying the controller which instantiates the service. Assume that you’d like to replace a repository based on, e.g., ADO.NET with one backed by a web service. Having a concrete dependency on the former makes it more difficult to switch to the latter without making a number of modifications to the controller which instantiates and uses it. In many cases, this can lead to shotgun surgery while introducing the change – another smell indicative of a problem.
  • It’s unclear just how many service dependencies the controller actually has. In other words, if the controller is invoking the creation of a number of service dependencies, it’s difficult for a developer to ascertain the logical boundary, or scope of responsibility, of that controller. Alternatively, if a controller’s dependencies were passed to it via its constructor, it would be simpler for a developer to understand the overall responsibility scope of the controller.

Alternatively, the controller could be given its service dependency as a parameter to its constructor. A key improvement, while doing so, is that the controller should only be aware of the interface of the service dependency, rather than the concrete implementation itself. To further illustrate, compare the following two code snippets of a controller within the context of an ASP.NET MVC application.

The following controller creates its service dependency, the CustomerRepository, directly and accordingly has a concrete dependency on it:

public class CustomerController {

   public CustomerController() {}

   public ActionResult Index() {
      CustomerRepository customerRepository = new CustomerRepository();
      return View(customerRepository.GetAll());
   }
}

Conversely, the following controller is given its service dependency as an interface parameter to its constructor. It is said to be loosely coupled to its service dependency.

public class CustomerController {

   private readonly ICustomerRepository customerRepository;

   public CustomerController(ICustomerRepository customerRepository) {
      this.customerRepository = customerRepository;
   }

   public ActionResult Index() {
      return View(customerRepository.GetAll());
   }
}

When compared to the drawbacks of tight coupling, this clean separation brings with it a number of benefits:

  • The domain layer remains in ignorant bliss of how to create a repository and any implementation details of the repository, outside of its publicly exposed interface. Consequently, it's easier to switch out the data-access implementation details (e.g., from ADO.NET to a web service) without having to modify the controller itself. This assumes that the replacement implements the same interface as the former.
  • Having dependencies on interfaces, instead of on concrete implementations, makes it easier to inject a test double repository while unit testing (see the sketch following this list). This keeps unit tests blazing fast, eliminates the need to maintain test data in the database, and places the focus on testing the behavior of the controller rather than on its integration with the database.
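
As a rough sketch of that second point, the repository interface and a hand-rolled stub make it possible to exercise the controller’s Index action without a database; in practice a mocking library such as Rhino Mocks or Moq might be used instead. The test below assumes NUnit, reuses the loosely coupled CustomerController and the Customer class from the earlier snippets, and the ICustomerRepository definition shown here is illustrative.

using System.Collections.Generic;
using System.Web.Mvc;
using NUnit.Framework;

public interface ICustomerRepository {
   IList<Customer> GetAll();
}

// A hand-rolled test double standing in for the database-backed repository.
public class StubCustomerRepository : ICustomerRepository {
   public IList<Customer> GetAll() {
      return new List<Customer> { new Customer { Name = "Acme, Inc." } };
   }
}

[TestFixture]
public class CustomerControllerTests {
   [Test]
   public void Index_ReturnsViewContainingAllCustomers() {
      var controller = new CustomerController(new StubCustomerRepository());

      // No database is involved; the stub supplies the data and the test
      // verifies only the controller's behavior.
      var result = (ViewResult) controller.Index();
      var model = (IList<Customer>) result.ViewData.Model;

      Assert.AreEqual(1, model.Count);
   }
}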

A subject not detailed here is the dependency injection necessary to support a separated interface and other loose coupling techniques. Further details on this subject are discussed within the article Dependency Injection for Loose Coupling. (Note that the “mock” object described in the article is actually a “stub.” This and other “test double” nomenclature is described by Martin Fowler in Mocks Aren’t Stubs.) Finally, a useful read which details moving towards the separated interface design pattern is Refactoring Service Dependencies to Separated Interface.

Test-Driven Development

Simply put, test-driven development (TDD) delivers higher quality, more maintainable software, and a simpler overall design, than developing without it. By no means is test-driven development just a passing fad in development techniques; it is a technique which is increasingly gaining acceptance as a pivotal facet of successful software development and one which will be here for the long haul as our industry matures.

The basic idea behind TDD is to begin the development effort with a question, asked of the system under development. For example, if you’re developing a banking application, you may want to ask the system if it is capable of successfully handling a deposit from a customer. The key is that the question is asked before the behavior itself is implemented. A benefit of this is that it places focus on the desired behavior of the system before the system is actually written.

To illustrate, the general coding steps, while following the guidelines of test-driven development, are as follows (a brief example follows the list):

  1. Write your test as if the target objects and desired behavior already existed.
  2. Compile the solution and see the compilation break.
  3. Write just enough code to get it to compile.
  4. Run the unit test and see it fail.
  5. Write just enough code to get the unit test to pass.
  6. Refactor if necessary!
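
Applying these steps to the earlier banking example, a first pass might look like the following sketch; the Account class and the use of NUnit are illustrative assumptions. The test is written first (steps 1 through 4) and only then is Account fleshed out just enough to make it pass (step 5).

using NUnit.Framework;

[TestFixture]
public class AccountTests {
   [Test]
   public void Deposit_IncreasesBalanceByTheDepositedAmount() {
      // Step 1: the test is written as if Account and Deposit already existed.
      var account = new Account();
      account.Deposit(100m);
      Assert.AreEqual(100m, account.Balance);
   }
}

// Step 5: just enough production code to make the test pass; refactor afterwards.
public class Account {
   public decimal Balance { get; private set; }

   public void Deposit(decimal amount) {
      Balance += amount;
   }
}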

While test-driven development is here to stay, exactly how it is used in daily development efforts is still evolving. For instance, a recent trend in test-driven development is known as Behavior-Driven Development; this approach attempts to bridge the gap “between the technical language in which the code is written and the domain language spoken by the business.” In other words, behavior-driven development infuses TDD with Domain-Driven Design (or the other way around if you prefer), discussed next.

Domain-Driven Design

For the last trend which has achieved critical mass, I feel compelled to include Domain-Driven Design (DDD). This approach to software development places focus squarely on the domain and domain logic, rather than on the technologies and relational database model needed to support the technical solution. As with behavior-driven development, DDD proposes a number of techniques and patterns to better align the language and efforts of the development team with those of the client. Ideally, a client should be able to read the domain layer of a DDD application and see a strong reflection of their own business represented within the coding logic itself.

In my own experiences with various approaches, I see domain-driven design as a natural evolution of earlier programmatic approaches, such as model-driven development in which the data within the database, and its corresponding model, is seen as the core of the application and everything else is done simply to manipulate that data. (Both Castle ActiveRecord and the ADO.NET Entity Framework are good examples of solid model-driven design utilities.) Conversely, with DDD, the database is seen as an infrastructural detail necessary to support the domain and associated logic. In fact, just as there is no spoon, there is no database in DDD. Obviously, there is a database, but the point is that the domain strives to remain in blissful “persistence ignorance” with respect to how the underlying mechanisms of data storage and retrieval are implemented.

But domain-driven design is much more than just separating the concern of data persistence from the domain. A major tenet of DDD is placing the behavior of the domain within the domain objects themselves. For example, instead of having a separate CustomerAccountLogic class to determine if a CustomerAccount is up to date on payments, you would simply ask the CustomerAccount itself for this information. In this way, the behavior of the domain is embedded into the model itself.
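
A minimal sketch of this idea, using hypothetical names and an invented 30-day payment rule purely for illustration:

using System;

// The behavior lives on the domain object itself...
public class CustomerAccount {
   public DateTime? LastPaymentReceivedOn { get; set; }

   public bool IsCurrentOnPayments() {
      // Illustrative rule: a payment within the last 30 days keeps the account current.
      return LastPaymentReceivedOn.HasValue &&
             LastPaymentReceivedOn.Value >= DateTime.Today.AddDays(-30);
   }
}

// ...rather than in a separate, procedural companion class such as:
//
// public class CustomerAccountLogic {
//    public bool IsCurrentOnPayments(CustomerAccount account) { ... }
// }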

The above is a very small sampling of ideas from the quickly maturing DDD approach to software development. For more information on domain-driven design, read Domain-Driven Design Quickly, which is a concise summary of Eric Evans’ classic book, Domain-Driven Design.

Fusing these ideas with S#arp Architecture

While every project has unique needs, and no framework provides a one-size-fits-all solution, there are certain challenges and opportunities which present themselves during the development of almost every web-based application. But with so many options available, it’s difficult for a developer to decide which set of tools and techniques is appropriate for a given project and for cleanly addressing the most commonly faced development challenges. For instance, if you’re looking for a .NET dependency injection tool – or Inversion of Control (IoC) container – you can take your pick from Spring.NET (which is much more than simply an IoC utility), Unity, Castle Windsor, Ninject, StructureMap, and many others. And that’s just the IoC selection! To make the matter even more difficult, it is a challenging endeavor to then determine the most appropriate amount of judicious planning to put into an architecture which leverages the benefits of the selected tools and techniques without stifling flexibility.

What’s currently lacking, at least in the world of .NET web development, is a common architecture and foundation for application development which combines best-of-breed technologies and techniques based on proven practices, while taking into account the availability of high quality tools developed by the open source community. S#arp Architecture is a response to this need. The open source S#arp Architecture attempts to leverage the proven practices described in this article, with carefully selected tools, to increase development productivity while enabling a high level of quality and maintainability.

A sampling of the tools and techniques leveraged by S#arp Architecture is as follows:

  • The Separated Interface pattern, in conjunction with the Dependency Injection pattern, for removing concrete dependencies on a data access layer from the domain and controller logic;
  • The Repository pattern, for encapsulating data access concerns within discrete classes adhering to the Single Responsibility Principle;
  • The Model-View-Controller pattern, realized with ASP.NET MVC, for introducing a clean separation of concerns between the view and controller logic;
  • NHibernate, and its Fluent NHibernate extension, for removing the need to develop and maintain low level data storage and retrieval coding while keeping the domain in blissful ignorance of the persistence mechanism;
  • Common Service Locator, with a default of Castle Windsor, to provide a loosely coupled means of interacting with the developer’s preferred IoC container (see the sketch following this list);
  • An in-memory SQLite database for running behavior-driven tests quickly, as opposed to tests that emphasize integration with a persistent database;
  • Visual Studio Templates and T4 Toolbox to generate project infrastructure and common CRUD scaffolding for each domain object to remove hours of redundant and tedious coding.
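
To illustrate the Common Service Locator item above, the following sketch registers a repository with Castle Windsor and then exposes the container through the service locator abstraction. The NHibernateCustomerRepository class is hypothetical, it reuses the ICustomerRepository and Customer types from the earlier sketches, and the WindsorServiceLocator adapter namespace may differ depending on the Common Service Locator adapter package in use.

using System.Collections.Generic;
using Castle.MicroKernel.Registration;
using Castle.Windsor;
using CommonServiceLocator.WindsorAdapter;
using Microsoft.Practices.ServiceLocation;

// A hypothetical NHibernate-backed implementation of the repository interface
// shown earlier; the actual query is elided in this sketch.
public class NHibernateCustomerRepository : ICustomerRepository {
   public IList<Customer> GetAll() {
      throw new System.NotImplementedException();
   }
}

public static class IoCBootstrapper {
   public static void Initialize() {
      // Register the concrete repository against its separated interface.
      IWindsorContainer container = new WindsorContainer();
      container.Register(
         Component.For<ICustomerRepository>().ImplementedBy<NHibernateCustomerRepository>());

      // Expose the container through the Common Service Locator abstraction so that
      // consuming code asks for interfaces without referencing Windsor directly.
      ServiceLocator.SetLocatorProvider(() => new WindsorServiceLocator(container));
   }
}

// Consuming code then resolves dependencies through the abstraction:
// ICustomerRepository repository = ServiceLocator.Current.GetInstance<ICustomerRepository>();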

This sampling serves to demonstrate that while there is no singular silver bullet, immense value can be found from selecting solid development practices paired with the appropriate tools.

A Domain-Driven Architecture to Bring it all Together

I feel that a key idea encapsulated within S#arp Architecture is the technique of inverting the relationship between the domain and data access layers. In typical application architectures, especially those which adhere to the recommendations of Microsoft, the flow of dependencies begins with the presentation layer, which depends on the business layer, which in turn depends on the data layer. While this is an oversimplification of the implementation details, it is generally suggested that the data layer be the bottom layer which everything else depends on; this is a very model-driven design.

While a model-driven approach certainly has its benefits, and is very appropriate in many situations, it is not in line with the domain-driven goals that this article has presented. Having the domain depend directly on the data layer also frequently introduces a bi-directional dependency between the domain objects and the data access code. A former professor of mine, Dr. Lang, said that we won’t be experts in our field until we’ve made every mistake possible. While I’m still working on becoming an expert, I have learned that a bi-directional dependency between domain objects and data access code is rife with trouble. (This was a hard-learned lesson that took me one step closer to becoming an expert.)

So how do we get through this conundrum and come up with a clean design for reflecting these ideas? The solution is to cleanly separate the various application concerns and to invert the relationship between the domain and data access layers, using separated interfaces, defined in the domain layer, for the data access layer to implement. In this way, all layers of the application can depend on only the interfaces of the data access (or other external service) layer while remaining ignorant of the implementation details. This leads to a design which is more loosely coupled, easier to unit test, and more stable during the maintenance phases of project development.

The diagram below illustrates the architecture advocated by S#arp Architecture, exemplifying the technique of inverting the traditional dependencies between the domain and data access layers using the separated interface pattern. Each box represents a separate, physical assembly, with the direction of dependencies indicated accordingly.

[Diagram: the S#arp Architecture layers, shown as separate assemblies, with the data layer depending on the core domain layer]

Note that in the diagram, the data layer, which defines the repositories’ concrete implementation details, depends on the core domain layer. The core layer, in addition to defining the domain model and logic, also defines the repository interfaces, which may be leveraged by various other layers, such as the application services layer to communicate with the repositories while remaining isolated from the implementation details. Some would suggest that the presentation layer – YourProject.Web in the above diagram – should not have a direct dependency on the domain layer. Alternatively, a data transfer object (DTO) layer could be introduced to better isolate the domain layer from the presentation layer when data is handed to the view for presentation.
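
In code, this inversion might be sketched along the following lines, using hypothetical YourProject namespaces mirroring the diagram; the interface lives in the core assembly while its data-access implementation lives in the data assembly, which references the core.

using System.Collections.Generic;

// YourProject.Core assembly: the domain model plus the repository interfaces it exposes.
namespace YourProject.Core {
   public class Customer {
      public virtual string Name { get; set; }
   }

   public interface ICustomerRepository {
      IList<Customer> GetAll();
   }
}

// YourProject.Data assembly: references YourProject.Core and implements its interfaces.
namespace YourProject.Data {
   using YourProject.Core;

   public class CustomerRepository : ICustomerRepository {
      public IList<Customer> GetAll() {
         // The NHibernate-backed query would live here; it is elided in this sketch.
         throw new System.NotImplementedException();
      }
   }
}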

So What?

The bottom line is that our job as software delivery specialists is to deliver a solution which meets the needs of the client in a timely fashion while adhering to a high level of quality and maintainability. With decades of proven practices learned before us, there is no need to reinvent the wheel, within individual applications, with respect to tried and true design patterns and the implementation of basic tools to address our most frequent development challenges, such as data persistence and unit testing. The real challenge lies in putting forth the right amount of judicious planning and prefactoring to improve productivity while not stifling our ability to flexibly meet unexpected needs in creative ways. I hope that the techniques and tools described in this article, along with their assemblage within S#arp Architecture, serve to illustrate that while there are a thousand ways to skin a cat, it often makes sense to have a solid starting point built upon the wisdom and lessons learned of the software giants who have come before us.

To Learn More

S#arp Architecture has been in active development for close to a year, evolving to become an increasingly simpler and more powerful architectural foundation for the rapid development of solid, domain-driven applications. You can download the release candidate of S#arp Architecture from http://code.google.com/p/sharp-architecture. The 1.0 GA release is anticipated to follow closely the release of ASP.NET MVC 1.0. Your input and experiences are most welcome at the S#arp Architecture discussion forums.

About the author

http://devlicio.us

Billy McCafferty is a long time developer and a hopeless romantic when it comes to writing beautiful software. Billy currently leads a double life between helping to run a small training and consulting company known as Codai (which will be getting a new website very very soon) and filling the role of lead developer and architect with Parsons Brinckerhoff. After Billy gets his life back – which should be after the release of S#arp Architecture 1.0 – expect to see him soon at ALT.NET and other development conferences.
