Domain-Driven Design in an Evolving Architecture


Domain driven design can be most readily applied to stable domains in which the key activity is for developers to capture and model what is in users' heads. But it becomes more challenging when the domain itself is in a state of flux and development. This is common in Agile projects, and happens also when the business itself is trying to evolve. This article examines how we used DDD in the context of a two-year programme of work to rethink and rebuild guardian.co.uk. We show how we ensured the evolving perceptions of our end-users were reflected in the software architecture, and how we implemented that architecture to ensure future changes. We provide details of important project processes and of specific evolutionary steps in the model.

Top level headings:

  1. Background to the programme
  2. Starting out with DDD
  3. Processes for DDD in a growing programme
  4. Evolving the domain model
  5. Evolution at the code level
  6. Some final lessons of DDD in an evolving architecture
  7. Appendix: A concrete example

1. Background to the programme

Guardian.co.uk has a long history of news, comment and features, and currently receives over 18 million unique users and 180 million page impressions a month. For most of this period the site ran on its original, pre-Java technology, but in February 2006 a major programme of work began to move it onto a more modern platform. The earliest phase debuted in November 2006 with the launch of the new-look Travel site, a new front page followed in May 2007, and more has followed since. Although just a handful of people started on the work that February, the team later peaked at 104.

However, the reasons behind the programme were much more than just wanting a fresh look. Of core importance was that many years of experience had taught us there were better ways to structure our content, better ways to commercialise our content, and better ways of developing the software behind it.

In essence, the way we thought about our work had progressed beyond what our software could handle. This is why DDD was so valuable to us.

We will look briefly at two areas where conceptual mismatches in our legacy software were holding us back: first, problems for our internal users, and then problems for our developers. These are the kinds of things that DDD helps us resolve.

1.1. Problems for our internal users

Journalism is an old profession with an established training, qualification and career structure, yet new, journalistically-trained editorial staff could not join us and work effectively with our web tools, even several months after arrival. To be a very effective user it was not sufficient to understand the key concepts of our CMS and website; you also had to understand how they were implemented.

For example, the concept of caching content (which should be a technical optimisation strictly internal to the system) was exposed to editorial staff; they needed to place content in the cache to ensure it was launched, and needed to understand the cache workflow to diagnose and resolve problems with the CMS tools. This was clearly an unreasonable requirement to place on them.

1.2. Problems for developers

Conceptual mismatches manifested themselves on the technical side, too. For example, one of the CMS concepts was an "artifact", which was sufficiently core that all developers worked with artifacts every day. One of the team once admitted that it was a full nine months before he realised that these "artifacts" were in fact just web pages. Cryptic language and the software that had grown up around it had obscured the real nature of his work.

As another example, it was particularly time-consuming to generate an RSS feed from our content. Even though each section's front page contained an obvious list of primary content plus accompanying furniture, the underlying software did not distinguish between the two. Thus the logic to extract an RSS feed from a page was fuzzy logic of the sort "get each item on the page, and if its geometry is approximately half way across, and if it's of longer-than-average length, then it's probably the primary content, so we can extract the links and make them into a feed."

It had become clear to us that a divergence between how people thought about their work (launches, web pages, RSS feeds) and how it was implemented (caching workflows, "artifacts", fuzzy logic) was having a tangible and costly impact on our effectiveness.

2. Starting out with DDD

This section sets the scene for our use of DDD: why we chose it, its place in the system's architecture, and our initial domain model. In later sections we'll look at how we spread the initial domain knowledge to an expanded team, how we evolved the model, and how we evolved our coding techniques around this.

2.1. Choosing DDD

The first aspect of DDD that appealed was that of the ubiquitous language, and embedding users' own concepts directly in the code. This clearly addressed our issue of conceptual mismatches described above. That alone was a valuable insight, but by itself perhaps not much more than "object orientedness done properly".

What took it further was the technical language and concepts that DDD brought with it: entities, value objects, services, repositories, and so on. This ensured that in taking on a very large-scale project our large development team had the chance to develop consistently -- essential to maintain quality over the long term. Even when our lower level code atrophied, as we will show later, that common technical language enabled us to bring everyone back together and improve code quality.

2.2. Embedding the domain model into the system

This section shows DDD's place in the overall system's architecture.

Our system is built up of three main components: our user facing website rendering application; our tools application which faces the editors and is used to create and manage content; and our feeds application which routes data in and out of the system. Each of these applications is a Java web application constructed using Spring and Hibernate, with Velocity as our templating language.

We can view these applications as having the following layout:

The Hibernate layer provides data access, using EHCache as Hibernate’s second level cache. The Model layer contains the domain objects and repositories, with services residing in the next layer above. Above this we have our Velocity template layer which provides page rendering logic. Finally, the topmost layer contains the controllers that act as entry points into the application.

Looking at this layered architecture for the application it is tempting to think that the model is simply one self-contained layer of our application. This idea is broadly true, but there are some subtle differences between the model layer and the other layers: because we are using domain driven design we require a ubiquitous language, used not only by people when talking about the domain but also everywhere in the application. The model layer exists not only to isolate business logic from rendering logic but also to provide the vocabulary for the other layers to use.

Also, the model layer can be built as a standalone unit of code and exported as a JAR into many applications that are dependent on it. This is not true of any of the other layers. This has an implication for building and releasing applications: Changing code in the model layer in our infrastructure must be a global change across all of our applications. We can change a Velocity template in our front-end website and only have to deploy the front-end application, not the admin application or the feeds. If we change the logic of an object in our domain model we must roll out all of our applications that are dependent on the model as we only have (and want) one view of the domain.

There is a danger that this approach to domain modelling could lead to a monolithic model that becomes expensive to change if the business domain is very large. We are aware of this, and as our domain is constantly growing we need to ensure that this layer does not become too unwieldy. Currently this is not causing us a problem, even though the domain layer is quite large and complex. Working in an Agile environment we look to perform full production rollouts of all our applications every two weeks anyway. However, we are constantly monitoring the cost of change of the code in this layer. If it begins to rise to an unacceptable level we will probably have to look at splitting the single model into multiple smaller models and providing adaption layers between each sub-model. We did not do this at the start of the project though, favouring the simplicity of a single model over the more complex dependency management issues that we would have to resolve with multiple models.

2.3. Early domain modelling

Very early on in the project, long before anyone reached for a computer keyboard and started working on code, we had decided that we would co-locate our developers, QAs, BAs and business people in the same room for the duration of the project. At this stage we had a small team of business and technical people, and we required only a modest first release. This ensured our model and our process were both fairly simple.

Our first objective was to get as clear an understanding of what our editors (a key constituent of our business representatives) were expecting for the first few iterations of the project. We sat as a whole team with the editors and talked through their ideas, allowing ideas to be questioned and clarified by representatives of the various functions until we thought that we had a reasonable understanding, in English, of what our editors needed.

Our editors decided that their highest priority at this early point in the project was for the system to be able to produce web pages that could display articles, together with a system for categorising those articles.

Their initial requirements can be summarised as follows.

  • We should only be able to associate one article with any given URL.
  • We would need to be able to change the way that the resulting page was rendered by selecting a different template.
  • We need to group our content into broad sections for management, e.g. news, sport, travel.
  • The system must be able to display a page containing links to all articles in a given categorisation.

Our editors required a very flexible approach to categorisation of articles. The approach they designed was based around the use of keywords. Each keyword defined a subject that content could be about. Each article could be associated with many keywords, as an article could be about many topics.

Our site has many editors, each owning different sections of the content. Each section required its own navigation and ownership of its own keywords.

From the language used by the editors it seemed that we were introducing a few key entities into our domain:

  • Page The owner of the URL. Responsible for selecting a template to render the content.
  • Template A layout for a page, which might be changed at any time. The technical people implemented a template as a Velocity file on the disc.
  • Section A very broad categorisation of pages. Sections each have an editor and a common look and feel for pages within them. Examples of sections are News, Travel, and Business.
  • Keyword A way of describing a subject that exists within a section. Used to categorise articles, keywords drive automatic navigation. They will be associated with a page so that an automatic page of all articles about a given subject can be produced.
  • Article A piece of textual content that we can deliver to a user.
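
For readers who prefer code to prose, here is a minimal sketch of how these first entities might look in Java. The names, relationships and method signatures are illustrative only; they are not the actual production classes.

// Illustrative sketch of the initial entities -- simplified, not the production code.
import java.util.ArrayList;
import java.util.List;

class Section {
    private final String name;                     // e.g. "News", "Travel", "Business"
    Section(String name) { this.name = name; }
}

class Keyword {
    private final String subject;                  // the subject this keyword describes
    private final Section section;                 // each keyword is owned by one section
    Keyword(String subject, Section section) { this.subject = subject; this.section = section; }
    Section getSection() { return section; }
}

class Template {
    private final String velocityFile;             // implemented as a Velocity file on disc
    Template(String velocityFile) { this.velocityFile = velocityFile; }
}

class Article {
    private final List<Keyword> keywords = new ArrayList<Keyword>();  // an article can be about many subjects
    void addKeyword(Keyword keyword) { keywords.add(keyword); }
    List<Keyword> getKeywords() { return keywords; }
}

class Page {
    private final String url;                      // exactly one article is associated with a given URL
    private Template template;                     // swapping the template changes how the page renders
    private final Article article;                 // the page's core content
    Page(String url, Template template, Article article) {
        this.url = url; this.template = template; this.article = article;
    }
    void changeTemplate(Template template) { this.template = template; }
    Section getSection() {                         // derived from the first keyword applied to the article
        return article.getKeywords().get(0).getSection();
    }
}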

After we had extracted this information we began modelling the domain. A decision that was made early on in the project was that the editors owned the domain model and were responsible, with help from the technical team, for designing it. This was quite a shift for our editors who were not used to this form of technical design. We found that by having workshops featuring the editors, a few developers and the technical architects we were able to sketch out and evolve the domain model using simple, low-tech approaches. An area of the model would be discussed and candidate solutions drawn, using felt pens, file cards and Blu-Tak. Each candidate model would be discussed, with the technical team advising the editors of the implications of each refinement in the design.

Although quite slow at first the process was fun. The editors found it very hands on; they were able to scribble things out and move objects around and then get immediate feedback from the developers as to whether the resulting model would meet their requirements. The technical people were all quite surprised and pleased with how quickly everyone became proficient in the process and all felt confident that the resulting system was going to satisfy their clients.

It was also interesting to watch the domain language evolve. Sometimes the object that was going to become the Article was spoken about as a "story". Obviously we cannot have a ubiquitous language that features multiple names for the same entity, so this was a problem. It was our editors who spotted that they were not using a common language to describe things and they resolved to call the object an article. After that, any time anyone said "story" someone would say "Don’t you mean Article?". This constant communal process of correction is a powerful force when working to design a ubiquitous language.

The resulting model that our editors initially designed looked like this:

[The relationship between a page and its section is derived from the keywords applied to the article that is the page’s core content. The section of the page is given by the section of the first keyword applied to the article.]

As not all the team had been involved in all stages of the process we presented the model to them and hung a representation of it on the wall. The developers then began the Agile development build, and as the editors were co-located with the developers, BAs and QAs, any questions about the model and its intentions could be answered "straight from the horse’s mouth" at any point during the development.

After a few iterations the system began to take shape and we had built tools to create and manage Keywords, Articles and Pages. The editors were able to use them as they were built and to advise on changes. It was generally felt that this simple core model was working and could go on to form the basis of the first release of the site.

3. Processes for DDD in a growing programme

After the initial release our project team grew with both technical people and business representatives, and we intended the domain model to evolve. It was clear that we needed an organised way to introduce newcomers to the domain model and to manage the system's evolution.

3.1. Inductions for new staff

DDD is a core part of the induction process. Non-technical staff join the project throughout its life because the programme of work is sweeping across various editorial areas in turn, and we bring in the section editors when the time is right. Technical staff join the project throughout its life simply because we are continually hiring new staff.

Our induction process includes a DDD session for each of these two audiences, and though the detail varies, the high-level agenda covers the same two areas: what DDD is and why it's important; and specific areas of the domain model itself.

Important things we stress when describing DDD are:

  • The domain model is very much owned by the business representatives. This is about extracting concepts from the heads of the business representatives and embedding them into the software, not taking ideas from the software and trying to train the business representatives.
  • The technical team is a key stakeholder. We will still argue hard around specific details.

It's important to cover specific areas of the domain model itself because this gives the inductees a real handle on the specific issues in the project. There are several dozen objects in our domain model, so we only focus on a few of the high level and more obvious ones, which in our case means the various kinds of content and the keyword concepts discussed in this article. We do three things here:

  • We draw the concepts and relationships on a whiteboard, and so we can provide a tangible demonstration of how the system-so-far works.
  • We make sure an editor is on hand to explain a lot of the domain model, to emphasise the fact that it's not owned by the technical team.
  • We explain some of the historical changes we've made to get to this point, so inductees can understand (a) this is not set in stone but is changeable, and (b) what kind of role they can play in forthcoming planning conversations to develop the model further.

3.2. DDD in planning

Induction is essential, but that knowledge is only exercised practically when we start to plan each iteration.

3.2.1. Using the ubiquitous language

The common language DDD forces allows business people, technical staff, and designers to meet round the same table to plan and prioritise specific tasks. This means more meetings are relevant to the business people; they get closer to the technical people and understand the technical process better. One colleague developed her role from editorial assistant on the project to key decision-maker; she commented that by being in iteration kick-off meetings she got to see for herself how technical staff estimated and (often passionately) assessed tasks, and she came to appreciate much more the balance between features and effort. If she had not shared a language with the technical team she would not have been in that meeting and that understanding would not have been acquired.

There are two important principles we use when using DDD in the planning phase:

  1. The domain model is owned by the business; and
  2. There needs to be an authoritative business source for the domain model.

Business ownership of the domain model is explained during inductions, but it only comes into play here. It means that the technical team's key role is to listen and understand, not to explain what is and isn't possible. Requirements extraction involves mapping the conceptual domain model to concrete feature requirements, and challenging and querying the business representative where the two do not match. Where there is a mismatch, either the domain model needs to change or the feature requirement needs to be addressed at a higher level ("What do you want to achieve with this feature?").

The authoritative business source for the domain model is explicitly needed because of the nature of our organisation. We are building a single software platform that needs to meet the demands of many editorial teams who don't necessarily see the world in the same way. The Guardian does not operate the "command and control" structure that many companies do; rather, editorial desks are given a lot of freedom and a licence to develop their own site sections and audiences in the way they see fit. Consequently different editors have slightly different understandings and perspectives on the domain model, and this has the potential to undermine a single ubiquitous language. Our solution is to identify and embed business representatives who have a responsibility across all editorial desks. For us, this is the Production team, the people who deal with the day-to-day mechanics of building sections, specifying layouts, etc. They are the super-users desk editors rely on for expert tools advice, and so they are the people the technical team trust to be the holders of the domain model and to ensure consistency across the very large body of software. They are certainly not the only business representatives, but they are the ones who remain consistently embedded with the technical people.

3.2.2. Problems planning with DDD

Unfortunately, we have found particular challenges with applying DDD in the planning process, and particularly in an Agile environment where planning is continual. These problems are:

  1. The nature of writing software to a new and uncertain business model;
  2. Being tied to an old model;
  3. Business people going native.

We discuss these in turn...

When Eric Evans writes about creating the domain model the perspective is that the business representatives have a model in their heads and this needs to be extracted; even if their model isn't explicit they do understand the core concepts and by-and-large these can be explained to technical people. However in our case we were changing our model -- indeed, changing our business -- without knowing the exact details of what we were moving to. (We will see specific examples of this shortly.) Certain ideas were obvious or had been established early on (e.g. we would have articles, and keywords) but many were not (there was some resistance to introducing the idea of pages; how keywords related to each other was entirely up for grabs). Our textbook provided no guide to resolving these issues. However the principles of Agile development did:

  • Build the simplest thing. Though we couldn't settle on all details early on we usually understood enough to build the next piece of useful functionality.
  • Release frequently. By releasing this functionality we were able to see how it worked in the field. Further tweaks and evolutionary steps became most apparent from this (and inevitably they were not always what we expected).
  • Minimise the cost of change. With these tweaks and evolutionary steps inevitable it was essential to reduce the cost of change. For us this included automated build process, automated testing, and so on.
  • Refactor often. After several steps of evolution we would see technical debt accumulate and this needed to be addressed.

 

Related to this was the second problem: having too many mental ties to our old model. For example, our legacy system required editors and production staff to lay out pages individually, while a vision for the new system offered automatic page generation based on keywords. In this new world a page on, say, Guantánamo Bay would appear without any human intervention, simply by virtue of the fact that much content would be given the Guantánamo Bay keyword. However, that turned out to be an overly mechanistic vision held only by the technical team, who had hoped to cut down manual labour and the constant curation of all the pages. By contrast the editorial staff valued highly the human insight they brought to the process of not just reporting but also presenting the news; to them, individual layout was essential to highlight the most important stories (rather than just the latest ones) and to treat different subjects with a different approach and sensitivity (e.g. 9/11 versus Web 2.0 coverage).

There is no one-size-fits-all solution to this kind of problem, but we found two keys to success: to focus on business issues, not technical issues; and to be mindful of the phrase "creative conflict". In the example here there was a difference of opinion, but by both parties expressing their motivation in business terms we were working on the same playing field. The solution was a creative one, born of understanding everyone's motivations, and therefore addressing everyone's concerns. In this case we built a number of templates the editors could select from and switch between, each with a different feel, impact, etc. Additionally, a key area of each template allowed manually selected stories to be displayed, with the rest of the page being automated content (taking care not to duplicate anything); this manual area could be switched off at any time if curation became onerous, making the page fully automatic.

The third challenge we found was with business people going native, which is to say they could become so deeply embedded in the technology, and so caught up in its nuances, that they could forget what it might be like for an internal user new to the system. There are danger signs when a business representative finds it hard to communicate how things work to their peers, or specifies features which are of limited value. In the first edition of Extreme Programming Explained Kent Beck says an on-site customer can be secured by stressing that interaction with the technical team will never take more than a moderate percentage of their day. But when working with a team of several dozen developers, BAs and QAs, we found even three full-time business representatives were sometimes insufficient. With business people spending so much time with technical people, losing touch with their peers can be a real problem. These are human problems with human solutions: provide personal backup and support, rotate new business people into the team (probably starting out assisting and building up to a key decision-making role), allow representatives time to return to their core jobs for, say, a day a week, and so on. In fact, this has an additional advantage: it gives more business representatives exposure to the software development, and so spreads skills and experience.

4. Evolving the domain model

In this section we look at how the model evolved in later stages of the programme.

4.1. Evolutionary step 1: Beyond articles

Shortly after the initial release the editors required the system to be capable of dealing with more than one type of content, beyond an Article. Although this was no surprise to us we explicitly decided against thinking about this too much while we built the first version of the model.

This is a key point: Rather than try to architect the whole system up front we focussed on the whole team gaining a good understanding of the model and the modelling process in small manageable chunks. It is not a mistake to have to change your model later as understanding increases or changes. This approach is compatible with the coding principle of YAGNI (You Ain’t Gonna Need It) as it stops developers introducing extra complexity and therefore stops bugs creeping in. It also allows the whole team time to gain a shared understanding of the system in bite sized chunks. We regard producing a working bug-free system today as more important than producing a beautiful, all encompassing model tomorrow.

The new types of content that our editors required in the next iterations were Audio and Video. Our technical team sat with our editors and went through the domain modelling process again. From talking to our editors it was clear that Audio and Video were similar to Articles: it should be possible to place a Video or Audio on a Page. Only one piece of content was allowed per page. Video and Audio could be categorised by Keywords. Keywords could belong in Sections. The editors also stated that in future iterations they would be adding more types of content and felt that this was the time to understand how we would evolve the model of content types over time.

It was clear to our developers that our editors wanted to introduce two new terms into our language explicitly: Audio and Video. It was also clear that Audio, Video and Article all had something in common: they were all types of Content. Our editors were not familiar with the concept of inheritance, so the technical team taught them about it, enabling them to express the model correctly as they saw it.

There was a clear lesson here: by using agile development techniques to break the process of software development into small chunks we were also able to smooth off the learning curve for our business people. They were able to increase their understanding of the domain-modelling process over time rather than spend a large amount of time up front learning all of the components of object oriented design.

Here is the new model that our editors designed with the new content types added.

This single evolution to the model is the result of a number of smaller evolutions to our ubiquitous language. We now have three extra words: Audio, Video and Content; our editors have learned about inheritance and can use it in future iterations of the model; and we have a future expansion strategy for adding new content types and have made this simple for our editors. If the editors require a new content type and that new content type is to have broadly the same relationship with Pages and Keywords as our existing content types then the editors can ask the development team to produce a new type of Content. By generating the model gradually we are improving our efficiency as a team as our editors no longer have to go through a lengthy domain modelling process to add new types of Content.
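
In code, this evolution amounts to introducing a shared supertype. The following is a minimal sketch under the same caveat as before: illustrative names only (Keyword as in the earlier sketch), not the production classes.

// Illustrative sketch of the Content hierarchy -- simplified, not the production code.
import java.util.ArrayList;
import java.util.List;

abstract class Content {
    private final List<Keyword> keywords = new ArrayList<Keyword>();   // all content can be categorised by keywords
    void addKeyword(Keyword keyword) { keywords.add(keyword); }
    List<Keyword> getKeywords() { return keywords; }
}

class Article extends Content {
    private final String body;
    Article(String body) { this.body = body; }
}

class Video extends Content {
    private final int durationInSeconds;
    Video(int durationInSeconds) { this.durationInSeconds = durationInSeconds; }
}

class Audio extends Content {
    private final int durationInSeconds;
    Audio(int durationInSeconds) { this.durationInSeconds = durationInSeconds; }
}

A Page now holds a reference to a Content rather than specifically to an Article, so adding a further content type means little more than adding another subclass.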

4.2. Evolutionary step 2: Series and Tone

As our model was extended to include more types of Content, that content needed to be categorised more flexibly. We began adding extra types of metadata to our domain model, but it was not exactly clear where the editors’ final intentions lay. However, this didn't worry us too much because we used the same approach to modelling metadata as we did with content, breaking down the requirements into manageable chunks and adding each into our domain.

The first metadata type that our editors wanted to add was the concept of a Series. A Series is a grouping of related Content that has an implicit date-based order. We have many examples of Series in our newspapers and needed to translate this concept for the web.

Our initial thoughts about this were quite simple. We would add Series as a domain object that could be associated with Content and Page. This object would be used to aggregate the content that was associated with the series. If a reader visited a piece of Content and that content was in a Series we would be able to link from that Page to the previous and next Content within the Series. We would also be able to link to and generate the Series index Page which would show all Content in the Series.

Here is the model for series that our editors designed:

Meanwhile, in another part of the forest, our editors were thinking about some more metadata that they would like to associate with Content. Keywords describe what a piece of content is about; the editors also required that the system dealt with content differently depending on the Tone of the Content. Examples of different Tones are reviews, obituaries, reader offers, and letters. By introducing Tone we could show this to readers, and allow them to find similar content (other obituaries, reviews, etc). This felt like a different kind of relationship from Keyword or Series. Like Series, a Tone could be attached to a piece of Content and have a relationship with a Page.

Here is the model for Tone that our editors designed:

Upon completion of development we had a system that could categorise content by Keyword, Series or Tone. However, the editors had some concerns about the amount of technical work that was required to reach this point. They brought these concerns up with the technical team when we next evolved the model, and were able to suggest solutions.

4.3. Evolutionary step 3: Refactoring the metadata

The next evolution to our model that our editors wanted to introduce followed on in a similar vein to the addition of Series and Tone. Our editors wanted to add the concept of Content having a Contributor. A Contributor is someone who creates content, and might be a writer for articles or a producer for videos. Like Series, a Contributor can have a Page on the system which will automatically aggregate all of the Content that they have produced.

The editors also saw another problem. They felt that with the introduction of both Series and Tone they had to specify a large amount of very similar detail to the developers. They had to ask for a tool to be built to create a Series and another tool built to create a Tone. They had to specify how these objects related to Content and Page. Each time they found that they were specifying very similar development tasks for both types of domain objects; it was time-consuming and repetitive. The editors became even more concerned with the addition of Contributor into the mix, and with yet more metadata types likely to follow. It looked like yet again they would have to specify and manage a large amount of expensive development work, all of which was very similar.

This was clearly becoming a problem. It seemed that our editors had spotted something wrong with our model that the developers had not. Why was it so expensive to add new metadata objects? Why were they having to specify the same work over and over again? A question asked by our editors was "Is this just 'how software development works', or is there a problem with the model?" The technical team felt that the editors were onto something: the developers were obviously not seeing the domain in the same way as the editors. We held another domain modelling session with the editors to try and identify the problem.

In the meeting our editors suggested that all of the existing types of metadata could in fact be derived from the same base idea. All of the metadata objects (Keyword, Series, Tone and Contributor) could have a many-to-many relationship with Content and all required their own Page. (In previous versions of the model we were having to derive the relationship between the objects and Page.) We refactored the model to introduce a new superclass called Tag and subclassed the other metadata. The editors loved the use of the technical term "superclass" and declared that this whole refactoring was to be called "Super-Tag", although the name eventually came back down to earth.

With the introduction of Tags adding Contributor and other expected new metadata types was simple, as we would be able to leverage the existing tools functionality and frameworks.
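
In code, the Super-Tag refactoring is again a small change: a common supertype for the metadata, each instance owning its aggregation Page. A sketch with illustrative names only:

// Illustrative sketch of the Tag refactoring -- not the production classes.
abstract class Tag {
    private final String name;
    private final Page page;                        // every tag has its own automatically aggregated page
    Tag(String name, Page page) { this.name = name; this.page = page; }
    String getName() { return name; }
    Page getPage() { return page; }
}

class Keyword extends Tag {
    Keyword(String name, Page page) { super(name, page); }
}

class Series extends Tag {                          // implies a date-based ordering of its content
    Series(String name, Page page) { super(name, page); }
}

class Tone extends Tag {                            // e.g. review, obituary, reader offer, letter
    Tone(String name, Page page) { super(name, page); }
}

class Contributor extends Tag {                     // the writer or producer of a piece of content
    Contributor(String name, Page page) { super(name, page); }
}

Content then carries a single many-to-many association with Tag (the getTags() method seen later in this article), and each new metadata type reuses the same tools and relationships instead of requiring its own.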

Our revised model now looked like this:

It was fascinating to find our business representatives looking at the development process and the domain model in this way, and an excellent example of the ability of domain driven design to promote a shared understanding that works in both directions: we were discovering that our technical teams had a good and consistent understanding of the business problems that we were trying to solve and, almost as an unexpected bonus, the business representatives were able to "see inside" the development process and change it to better suit their needs. Editors were now not only capable of translating their requirements into a domain model, but also designing and overseeing refactorings of the domain model to ensure that it was kept up to date with our current understanding of the business problems.

The editors' ability to plan refactorings of the domain model and execute them successfully is one key to our success with domain driven design at guardian.co.uk.

5. Evolution at the code level

Previously we looked at evolutionary aspects of the domain model. But DDD has an impact at the code level too, and evolving business demands meant there were changes there as well. We look at some of these now.

5.1. Structuring the model

When structuring a domain model the first thing to identify is the aggregates that occur within the domain. An aggregate can be thought of as a collection of related objects that have references between each other. These objects should not directly reference objects in other aggregates; that is the job of the aggregate root. 

Looking at the examples of our model as defined above we can start to see flavours of objects forming. We have the Page and Template objects which act together to give our web pages a URL and a look and feel. As the entry point into our system is the Page it felt obvious that the Page was the aggregate root here.

We also have an aggregate with Content as the aggregate root. We have seen that Content has subtypes of Article, Video, Audio etc, and we regard each of these as a sub-aggregate of content with the core Content object as its aggregate root.

We can also see another aggregate forming. This is the collection of metadata objects: Tag, Series, Tone, etc. These form the tag aggregate with Tag as its aggregate root.

The Java programming language provides an ideal way to model these aggregates. We can use Java packages to model each aggregate, and standard POJOs to model each domain object. Domain objects that are not the aggregate root and are only used within the aggregate can have package scoped constructors so that they cannot be constructed outside of the aggregate.
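
As a small illustration of that point (the PageLayout class here is hypothetical, invented only for the example): a supporting object in the page aggregate can be given a package-scoped constructor, so only code in the same package as the aggregate root can create it.

// com/gu/r2/model/page/Page.java -- illustrative only.
package com.gu.r2.model.page;

public class Page {                                 // aggregate root: public, usable from anywhere

    private final PageLayout layout;

    public Page() {
        this.layout = new PageLayout();             // allowed: same package as PageLayout
    }
}

class PageLayout {                                  // supporting object, invisible outside the aggregate

    PageLayout() {                                  // package-scoped constructor
    }
}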

The package structure for the above model looks like this (“r2” is the name of our suite of applications):

 com.gu.r2.model.page 
com.gu.r2.model.tag
com.gu.r2.model.content
com.gu.r2.model.content.article
com.gu.r2.model.content.video
com.gu.r2.model.content.audio

We have broken down the content aggregate into sub-packages because the content objects tend to have many aggregate-specific supporting classes (not shown on our simplified diagrams here). All of the tag-based objects tend to be much simpler so we have kept them in the same package rather than introduce extra complexity.

However, we have come to realise that the above package structure could cause us problems later and intend to change it. The problem can be illustrated by looking at a sample of the package structure from our front-end application to see how we are structuring our controllers:

 com.gu.r2.frontend.controller.page
com.gu.r2.frontend.controller.article

Here we see that our codebase is starting to fragment. We have extracted all of our aggregates into packages but we do not have a single package that contains every object that is related to that aggregate. This means that we could have difficulties resolving dependencies if we wished to break up the application in the future due to the domain becoming too large to manage as a single unit. This is not really causing us problems at this point but we are going to refactor our application such that we do not have as many cross-package dependencies. An improved structure would be:

 com.gu.r2.page.model   (domain objects in the page aggregate)
com.gu.r2.page.controller (controllers providing access to aggregate)
com.gu.r2.content.article.model
com.gu.r2.content.article.controller
...
etc

We do not have any enforcement of domain driven design principles in our codebase other than convention. It would be possible to create annotations or marker interfaces to mark aggregate roots in an attempt to really lock down development in the model packages, reducing the chances of developers making mistakes in modelling. But instead of these mechanical enforcements we rely on more human techniques such as pair programming and test driven development to ensure that standard conventions are followed across the codebase. If we do spot something that violates our design principles (which is fairly rare) then we will talk to the developers and ask them to refine the design. We much prefer this lightweight approach as it leads to less clutter in the codebase, and improved code simplicity and readability. It also means our developers learn why certain things are structured the way they are, rather than doing things simply because they are forced to.
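
For completeness, this is the kind of mechanical enforcement we have chosen not to adopt. A marker annotation for aggregate roots might look like the sketch below; we show it only to illustrate the option, not because it exists in our codebase.

// Hypothetical -- we rely on convention, pairing and TDD instead of markers like this.
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface AggregateRoot {
}

// Usage would be: @AggregateRoot public class Page { ... }
// A test or build-time check could then assert that code outside an aggregate's
// package only holds references to types carrying this annotation.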

5.2. Evolution of the core DDD concepts

An application built following the principles of domain driven design will feature four broad types of objects: entities, value objects, repositories and services. In this section we will look at examples of these from our application.

5.2.1. Entities

Entities are objects that exist within an aggregate and have identity. Not all entities are aggregate roots but only entities can be aggregate roots. 

The concept of an entity is one that developers, particularly those that use relational databases, are quite familiar with. However, we found that this seemingly well-understood concept could cause some confusion.

The confusion seemed to be related in part to our use of Hibernate to persist our entities. As we are using Hibernate we generally model our entities as simple POJOs. Each entity has properties that can be accessed with setter and getter methods. Each property is mapped in an XML file defining how it should be persisted in the database. In order to create a new persisted entity the developer needed to create a database table for storage, create the appropriate Hibernate mapping file and create a domain object with the relevant properties. Because the developers spent so much time working on the persistence mechanism, they sometimes seemed to feel that the purpose of the entity object was simply persistence of data rather than the execution of business logic. When they then came to implement business logic they tended to implement it in service objects rather than in the entity objects themselves.

An example of this type of mistake can be seen in this (simplified) code snippet. We have a simple entity object to represent a football match:

public class FootballMatch extends IdBasedDomainObject {

    private final FootballTeam homeTeam;
    private final FootballTeam awayTeam;
    private int homeTeamScore;
    private int awayTeamScore;

    FootballMatch(FootballTeam homeTeam, FootballTeam awayTeam) {
        this.homeTeam = homeTeam;
        this.awayTeam = awayTeam;
    }

    public FootballTeam getHomeTeam() {
        return homeTeam;
    }

    public FootballTeam getAwayTeam() {
        return awayTeam;
    }

    public int getHomeTeamScore() {
        return homeTeamScore;
    }

    public int getAwayTeamScore() {
        return awayTeamScore;
    }

    public void setHomeTeamScore(int score) {
        this.homeTeamScore = score;
    }

    public void setAwayTeamScore(int score) {
        this.awayTeamScore = score;
    }
}

This entity object uses FootballTeam entities to model the teams, and looks like the type of object that any Java developer using Hibernate will be familiar with. Each property of this entity is persisted in the database, and although that detail is not really important from the perspective of domain driven design, our developers were elevating persisted properties to a higher status than they deserved. This can be seen when we try to work out from a FootballMatch object which team won the game. The sort of thing that our developers were doing was to create another so-called domain object that looked like this:

public class FootballMatchSummary {

    public FootballTeam getWinningTeam(FootballMatch footballMatch) {
        if (footballMatch.getHomeTeamScore() > footballMatch.getAwayTeamScore()) {
            return footballMatch.getHomeTeam();
        }
        return footballMatch.getAwayTeam();
    }
}

A moment's thought should suggest that something has gone wrong. We have created a new class called FootballMatchSummary which exists in our domain model but does not mean anything to the business. It seems to be acting as a service for the FootballMatch object, providing functionality that really should be on the FootballMatch domain object. What seemed to be causing the confusion was that the developers were viewing the purpose of the FootballMatch entity object as simply reflecting the information persisted in the database, not answering all of the business questions. Our developers were thinking of the entity as an entity in a traditional ORM sense rather than as a business-owned and business-defined domain object.

This sort of reluctance to place business logic in the domain objects can lead to a rather anaemic domain model and a proliferation of confusing service objects if left unchecked -- as we will see in a moment. As a team we now take a critical look at any service objects that are created to see whether they actually contain business logic. We also have a strict rule that developers cannot create new object types in the model that do not mean anything to the business.
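
To make the intended shape concrete, here is a sketch (not the production code) of the same question answered by the entity itself, which removes the need for FootballMatchSummary altogether:

// On the FootballMatch entity itself -- sketch only.
public FootballTeam getWinningTeam() {
    // (A drawn match would need its own handling; it is omitted here for brevity,
    // just as it was in the original summary object.)
    if (homeTeamScore > awayTeamScore) {
        return homeTeam;
    }
    return awayTeam;
}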

We were also confused by entity objects in another way at the start of the project, and again this confusion was related to persistence. In our application most of our entities are related to content and most of them are persisted. There are times, however, when an entity is not persisted but is created by a factory or repository at run time.

A good example of this is "tag combiner pages". We persist a representation of all pages created by editors in the database, but we can also automatically generate pages that aggregate content from a combination of tags, such as USA + Economics or Technology + China. Because the total number of possible tag combinations is astronomical we cannot possibly persist all of these pages, yet the system must still be able to generate them. When rendering tag combiner pages we must instantiate new non-persisted instances of the Page class at run time. Early on in the project we had a tendency to regard these non-persisted objects as something different from a "real" persisted domain object, and were not as thorough in our modelling of them. In fact, these automatically generated entities are no different from persisted entities from the viewpoint of the business, and therefore from the viewpoint of domain driven design. They have an equally defined meaning to the business whether or not they are persisted, and they are therefore simply domain objects; there is no concept of "real" or "not-as-real" domain objects.
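
To illustrate the idea (all of the class, method and constructor names here are hypothetical, invented for the example), such a page is simply assembled at request time from the content matching the tag combination, and thrown away afterwards:

// Hypothetical sketch: building a transient Page for a tag combiner request.
// Assumes a ContentRepository query and a Page constructor that accepts a content list.
import java.util.List;

public class TagCombinerPageFactory {

    private final ContentRepository contentRepository;
    private final Template defaultTemplate;

    public TagCombinerPageFactory(ContentRepository contentRepository, Template defaultTemplate) {
        this.contentRepository = contentRepository;
        this.defaultTemplate = defaultTemplate;
    }

    public Page createPage(Tag first, Tag second) {
        List<Content> content = contentRepository.getContentWithAllTags(first, second);
        String url = "/combiner/" + first.getName() + "+" + second.getName();
        return new Page(url, defaultTemplate, content);   // built on the fly, never persisted
    }
}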

5.2.2. Value objects

Value objects are properties of entities that do not have a natural identity that means anything within the domain, but which do express a concept that has meaning within the domain. These objects are important as they add clarity to the ubiquitous language.

An example of the clarifying abilities of value objects can be seen by looking at our Page class in more detail. Any Page on our system has two possible URLs. One URL is the public facing URL that readers type into their web browsers to access content. The other URL is the internal URL that the content lives on when served directly from our application server. Our web servers look at any incoming URL that has been requested by a user and translate it into an internal URL on the appropriate backend CMS server.

A simplistic view of these two possible URLs would be to model them both as string objects on the Page class:

 public String getUrl(); 
public String getCmsUrl();

However, this is not particularly expressive. It is difficult to know exactly what these methods will return by looking at their signatures other than the fact that they return strings. Also, imagine the case where we want to load a page from a data access object based on its URL. We may have a method signature that looks like:

public Page loadPage(String url);

Which URL is required here? The public facing one or the CMS URL? It is impossible to tell without inspecting the code for the method. It is also difficult to have a conversation with the business when talking about URLs for pages. Which one do we mean? There is no object in our model that represents each type of URL, hence there is no term in our vocabulary.

There is more trouble brewing here. We may have differing validation rules for internal and external URLs, and wish to execute different operations on them. How can we encapsulate this logic correctly if we do not have anywhere to put it? Logic that manipulates URLs certainly does not belong on Page, and we don’t want to introduce more needless service objects.

The evolution suggested by domain driven design is that we model these value objects explicitly. We should create simple wrapper classes that represent and type these values. If we do so, the signatures on Page now look like this:

 public Url getUrl();
public CmsPath getCmsPath();

We can now pass CmsPath or Url objects around in the application safely, and have a conversation with our business representatives about this code in a language that they will understand.
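
A minimal sketch of such a value object (illustrative, not the production class): it is immutable, carries its own validation, and compares by value rather than by identity.

// Illustrative value object for a public-facing URL.
public final class Url {

    private final String value;

    public Url(String value) {
        if (value == null || value.length() == 0) {           // validation rules live with the value
            throw new IllegalArgumentException("A URL must have a value");
        }
        this.value = value;
    }

    @Override
    public boolean equals(Object other) {
        return other instanceof Url && value.equals(((Url) other).value);
    }

    @Override
    public int hashCode() {
        return value.hashCode();
    }

    @Override
    public String toString() {
        return value;
    }
}

A signature such as public Page loadPage(Url url) then says unambiguously which kind of URL it expects, and an equivalent CmsPath class gives internal paths a home of their own.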

5.2.3. Repositories 

Repositories are objects that exist within an aggregate to provide access to instances of that aggregate’s root object while abstracting away any persistence mechanisms. These objects are asked business questions and respond with domain objects.

It is tempting to think of repositories as technical objects similar to data access objects, with functionality related to database persistence rather than business objects that exist within the domain. But repositories are domain objects: they answer business questions. A repository is also always associated with an aggregate and returns instances of its aggregate root. If we require a Page object we will go to the PageRepository. If we require a List of Page objects that answers a specific business problem we will also go to the PageRepository.

We found that a good way of thinking about repositories is to see them as facades onto a collection of data access objects. They then become the point of integration between the business questions that need asking about a particular aggregate and the data access objects that provide the low level functionality.

Here we can see this in action by taking a small sample of code from our page repository:

private final PageDAO<Page> pageDAO;
private final EditorialPagesInThisSectionDAO pagesInThisSectionDAO;   // not used in this excerpt
private final PagesRelatedBySectionDAO pagesRelatedBySectionDAO;

public PageRepository(PageDAO<Page> pageDAO,
                      EditorialPagesInThisSectionDAO pagesInThisSectionDAO,
                      PagesRelatedBySectionDAO pagesRelatedBySectionDAO) {
    this.pageDAO = pageDAO;
    this.pagesInThisSectionDAO = pagesInThisSectionDAO;
    this.pagesRelatedBySectionDAO = pagesRelatedBySectionDAO;
}

public List<Page> getAudioPagesForPodcastSeriesOrderedByPublicationDate(Series series, int maxNumberOfPages) {
    return pageDAO.getAudioPagesForPodcastSeriesOrderedByPublicationDate(series, maxNumberOfPages);
}

public List<Page> getLatestPagesForSection(Section section, int maxResults) {
    return pagesRelatedBySectionDAO.getLatestPagesForSection(section, maxResults);
}

Our repository contains business questions: get Pages for a specific Series of Podcasts ordered by publication date; get the latest Pages for a specific Section. We can see the business domain language in use here. This is not merely a data access object; it is a domain object in its own right, in the same way that a Page or Article is a domain object.

It took us a while to realise that regarding repositories as domain objects could help us overcome technical problems with the implementation of our domain model. In our model we can see that Tag and Content have a bidirectional many-to-many relationship. We are using Hibernate as our ORM tool so we mapped this such that Tag had the following method:

 public List<Content> getContent();

And Content had the following method:

 public List<Tag>  getTags(); 

Although this implementation was the correct expression of the model as our editors saw it we had built ourselves a problem. It was possible for developers to write code like this:

if (someTag.getContent().size() == 0) {
    // ... do some stuff
}

The problem here is that if the tag in question is one with a large volume of content ("News", for example) we could end up loading hundreds of thousands of content items into memory just to see whether a tag had any content. This obviously caused huge performance and stability issues on the site.

As we evolved our model and understanding of domain driven design we realised that sometimes we had to be pragmatic: certain traversals of the model could be dangerous and should be avoided. In this case we used a repository to answer the questions in a safe manner, sacrificing some small areas of clarity and purity of the model for performance and stability of the system.
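
For example (a sketch with hypothetical method and DAO names), the dangerous traversal above can be replaced by a question on a repository, which the database can answer with a cheap count query instead of loading the content itself:

// Hypothetical sketch: ask whether a tag has content without loading the content.
public class ContentRepository {

    private final ContentDAO contentDAO;

    public ContentRepository(ContentDAO contentDAO) {
        this.contentDAO = contentDAO;
    }

    public boolean hasContentFor(Tag tag) {
        return contentDAO.countContentWithTag(tag) > 0;     // delegates to a count query
    }
}

Calling code then reads if (contentRepository.hasContentFor(someTag)) { ... }, keeping the intent clear without dragging hundreds of thousands of content items into memory.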

5.2.4. Services

Services are objects that manage the execution of business problems by orchestrating the interaction of domain objects. Our understanding of services is something that has evolved the most as our processes have evolved.

The primary problem is that it is quite easy for developers to create services that really should not exist; they either end up containing domain logic that should exist in domain objects or they actually represent missing domain objects that have not been created as part of the model.

Early on in the project we started to find services cropping up with names like ArticleService. What is this? We have a domain object called Article; what is the purpose of an article service? On inspection of the code we found that this class seemed to be following a similar pattern to the FootballMatchSummary object discussed above, containing domain logic that really belonged on the core domain object.

In order to address this behaviour we performed a code review of all services in the application and executed refactorings to move the logic into the appropriate domain objects. We also instigated a new rule: any service object must have a verb in its name. This simple rule stops developers from creating a class like ArticleService; we instead create classes like ArticlePublishingService and ArticleDeletionService. Moving to this simple naming convention has certainly helped us move domain logic into the right place, but regular code reviews of services are still required to ensure that we are keeping on track and modelling our domain as close to the view of the business as is practical.
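
As an illustration of the convention (again with hypothetical names and collaborators), a verb-named service orchestrates domain objects but keeps the business rules on those objects:

// Hypothetical sketch: orchestration only, no domain logic of its own.
public class ArticlePublishingService {

    private final PageRepository pageRepository;

    public ArticlePublishingService(PageRepository pageRepository) {
        this.pageRepository = pageRepository;
    }

    public Page publish(Article article, Url url, Template template) {
        Page page = new Page(url, template, article);   // the entities enforce their own invariants
        pageRepository.save(page);                      // persistence stays behind the repository
        return page;
    }
}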

6. Some final lessons of DDD in an evolving architecture

Despite the challenges we have found significant advantages in using DDD in an evolving and Agile environment, and we learnt these lessons among others:

  • You don't have to understand the whole of the domain to add business value. You don’t even need a full knowledge of domain driven design. All members of the team can reach a shared understanding of as much of the model as they need at any time.
  • It is possible (even essential) to evolve the model and the process over time and to correct previous mistakes as our shared understanding improves.

The full domain model for our system is very much larger than the simplified version described here, and is constantly evolving as our business expands. In the dynamic world of a large scale website innovation is always happening; we always want to stay ahead of the game and break new ground, and it is sometimes difficult for us to get the model exactly right first time. Indeed, our business representatives often wish to experiment with new ideas and approaches. Some will bear fruit and others will not be successful. It is the ability of the business to incrementally extend an existing domain model -- and even to refactor it when it no longer meets their needs -- that provides the foundation for much of the innovation that occurs while developing guardian.co.uk.

7. Appendix: A concrete example

To see how our domain model yields real-world results, here is an example, starting with a single piece of content...

8. About the authors

Nik Silver is Head of Software Development at Guardian News & Media. He introduced Agile software development to the company in 2003 and is responsible for software development, front-end development and quality assurance. Nik occasionally writes about Guardian's technical work on blogs.guardian.co.uk/inside, and about wider software issues at his own site, niksilver.com.

Matthew Wall is Software Architect at Guardian News & Media, specialising in developing large scale web applications in an Agile environment. His primary concern at the moment is the development of the next generation web platform for guardian.co.uk. He has given various talks on this and related subjects at JAOO, ServerSide, QCon, XTech and OpenTech.


Community comments

  • Great DDD article!

    by Michael Cohen,


    Wow! This was a great, pragmatic article on DDD. Thanks to Nik and Matthew for writing this.



    I would be interested to know if there were any interesting integration issues with legacy systems that had to be dealt with when creating the domain model. I’m also curious how the controllers worked with the domain model in tandem with the Ehcache.



    It would be great to see more DDD articles like this from InfoQ :).

  • I would like to offer a purely technical .NET 3.5/LINQ/NHibernate transform

    by Damon Wilder Carr,


    Is that something if you were credited we could use? Would anyone be interested ( I do realize many readers could do this just as easily as I could).



    You hit a sweet spot of 'not too light, not too technical' that serves an area we have not prepared any material for (we do very well on the other ends as those are so much easier!).



    You also have that 'immediately recognizable thing' we can all see in people who are facing the same challenges, and your comments make you a stand-out as what we assert a leader in this area must be: no compromise in either technical, communication or abstract design mastery. It's a tall order to fill. I'd love your specific ideas on how you see this getting easier moving forward, and the role DSLs will play (more interestingly, the role of different TYPES of DSLs targeted to different groups).



    This is just so well done!



    I will work up the working example and all the content as well for your review. My email is damon at agilefactor dot com.



    Kind Regards,

    Damon Wilder Carr

    blog.domaindotnet.com

  • Colour modelling

    by Hugh Beveridge,


    Thanks for the article. I am interested to know whether you used colour-modelling techniques to produce your domain model. Judging by the images provided, it seems you didn't.


    As DDD is becoming more prominent I have yet to come across any DDD material that mentions the powerful colour-modelling techniques and Archetypal Domain Shape (or ADS, also known as the "Domain Neutral Component"). The ADS and colour object archetypes give any object modelling effort a significant head start and boost the quality of the domain model, through better flexibility, decoupling and identification of object responsibilities. The techniques evolved out of work by the gifted object modeller Peter Coad while working with Jeff De Luca and others (and a set of colour post-it notes) on a large project in Singapore in the '90s.


    That project also spawned Feature Driven Development. FDD process 1 has always been: Develop an Overall Model, which has in turn built up a lot of best-practice around approaches to modelling workshops (Jeff De Luca's training courses are priceless).

    I wonder whether the DDD world will catch up with these best-practice techniques? In my experience once you start using colours there is no turning back.

    For those interested, here are some links to more info:


    Here's an introduction to colour archetypes

    There is a lot of information about domain modelling at the FDD site

    PDF versions of the ADS are available on Jeff De Luca's website



    Hugh Beveridge

  • Re: Great DDD article!

    by Matthew Wall,


    With regard to your question about legacy systems:


    guardian.co.uk has been live on the internet for around 10 years and, until recently, we have had a huge legacy system running in our production environment. The legacy system is very different to the new system: it is implemented in templates, Tcl scripts and Perl, and has no strong domain model. Also, all of our content is stored in the legacy database. The legacy system was difficult to test and maintain, and the problems caused by the lack of a strong model in the system led us towards domain driven design in the new system.



    Our approach to the project was one of migration. In each phase we would identify a set of content and functionality in the legacy system that we needed to migrate into the new one. We would create a domain model for this new functionality and migrate the data and functionality from the old stack into the new. This meant that for the most part we could view the new system as a greenfield project; the integration between the systems was minimal.

    There were a few interesting cases, however. Both systems needed to be able to link to each other, so we had to introduce the concept of Tag into our legacy system and LegacyArticle into our new system. It was quite simple to join the two systems together with simple mapping tables mapping legacy concepts to our new Tags.

    Also, most of the language used by the business around the legacy system was quite technical and based around the names of database tables (CARTICLE, CARTIFACT, etc.). This meant that there wasn't much chance of collision between the newly forming domain language and the legacy system. There were a few concepts that existed in both, however; for example, the concept of a Section. This term was also used in the legacy system but had a different meaning. We got round this potential confusion by simply prefixing any terms that existed in both languages with "Legacy" when referring to the legacy concepts.
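    Purely as an illustration, the kind of lookup those mapping tables support could be sketched as below. The class, table and column names are hypothetical, not the actual guardian.co.uk schema:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Hypothetical sketch: resolve a legacy section id to the id of a Tag in the new
    // model via a simple mapping table. All names are illustrative only.
    public class LegacySectionTagMapping {

        private final Connection connection;

        public LegacySectionTagMapping(Connection connection) {
            this.connection = connection;
        }

        public Long tagIdForLegacySection(long legacySectionId) throws SQLException {
            String sql = "SELECT tag_id FROM legacy_section_tag_mapping WHERE legacy_section_id = ?";
            try (PreparedStatement stmt = connection.prepareStatement(sql)) {
                stmt.setLong(1, legacySectionId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getLong("tag_id") : null;
                }
            }
        }
    }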



    You ask an interesting question about controllers and Ehcache. I don't really think that core objects like controllers or caching frameworks are domain objects in their own right; they feel more like essential infrastructure objects than objects that are owned by the business. However, domain driven design leaves a footprint across the whole application. The primary entry point into our application is a user requesting a Page. The primary controller that serves this request is the PageController. This feels right, as Page is an aggregate root. The PageController is not a domain object in its own right, but its place in the application and its name are determined by the domain.
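    As a minimal, framework-agnostic sketch of that shape (PageRepository, PageNotFoundException and the method names here are assumptions for illustration, not the actual code):

    // Illustrative sketch: the controller is named after the Page aggregate root and
    // simply delegates to a repository; it is infrastructure, but shaped by the domain.
    interface PageRepository {
        Page findByPath(String path);
    }

    class Page {
        // aggregate root for a single page of the site (details elided)
    }

    class PageNotFoundException extends RuntimeException {
        PageNotFoundException(String path) {
            super("No page found at " + path);
        }
    }

    public class PageController {

        private final PageRepository pageRepository;

        public PageController(PageRepository pageRepository) {
            this.pageRepository = pageRepository;
        }

        public Page handleRequest(String path) {
            Page page = pageRepository.findByPath(path);
            if (page == null) {
                throw new PageNotFoundException(path);
            }
            return page; // handed to the view layer (e.g. a template) for rendering
        }
    }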



    Caching is similar. It is simple for us to calculate when any domain entity has been modified by an editor so we place all of our domain objects in a single cache region (called the DomainObjectCacheRegion) and issue appropriate cache clearing messages when entities are modified. The more complex case is repository queries that cannot easily be de-cached, such as "Get me the related content for this particular piece of content". In this case we store each of the queries in their own cache region. Each cache region tends to contain a single type of domain object, such as lists of pages, articles, job information etc. So, while we don't regard the caching framework itself as part of the business domain, the structure of the cache itself is organised around the domain model.
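    A minimal sketch of that cache organisation, assuming the Ehcache 2.x API and with region names and sizes invented for illustration:

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    // Illustrative sketch: one region for domain entities, plus a separate region for a
    // repository query type that has to be flushed wholesale when content changes.
    public class DomainCacheRegions {

        private final Cache domainObjects;
        private final Cache relatedContentQueries;

        public DomainCacheRegions(CacheManager manager) {
            // args: name, maxElementsInMemory, overflowToDisk, eternal, timeToLive, timeToIdle
            this.domainObjects = new Cache("DomainObjectCacheRegion", 10000, false, false, 300, 300);
            this.relatedContentQueries = new Cache("RelatedContentQueryRegion", 1000, false, false, 300, 300);
            manager.addCache(domainObjects);
            manager.addCache(relatedContentQueries);
        }

        public void cacheEntity(String id, Object entity) {
            domainObjects.put(new Element(id, entity));
        }

        // Called when an editor modifies an entity: evict it, and flush the query region
        // because we cannot easily tell which cached queries included it.
        public void entityModified(String id) {
            domainObjects.remove(id);
            relatedContentQueries.removeAll();
        }
    }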

  • Re: Colour modelling

    by Matthew Wall,



    We didn't explicitly use colour modelling techniques when designing our domain model. In fact, we didn't actually use UML. We used a very low-tech approach of simple hand-drawn diagrams, file cards and Blu-Tack. The reason for this is that to us the most important output from any domain modelling session is a shared understanding of the model between both our business representatives and our technical team. Our business representatives are not technical, and while they can now understand the cut-down UML syntax that we use in our modelling sessions they don't have the time or headspace to go any further.



    We're also not really interested in generating code from our domain model representations. This leads interestingly into the feature driven development question. We don't use feature driven development per se, but the definitions of the processes involved in feature driven development look quite similar to the agile techniques that we use here at guardian.co.uk. We only feed units of functionality that add business value into our development iterations. These feel like features to me. We inspect our code and model and refactor them at the start of each iteration in a way similar to that suggested by feature driven development. Because we are always working on small, incremental, cohesive improvements each domain modelling session is small and simple (often quite informal), as the size of the changes is small. We don't find that our lightweight approaches are causing us any problems. It is simple for us all to gain a shared understanding of each iteration of the model. We are able to deliver functionality according to the model, on time and on budget, in a way that our business representatives can understand, without complicating our modelling process.

  • Re: I would like to offer a purely technical .NET 3.5/LINQ/NHibernate transform

    by Nik Silver,


    Damon, regarding content on InfoQ, you'll need to contact the folks there. I can address your technical questions though...



    In terms of complexity for the future, I would predict both a positive and a negative.



    The positive is that this programme of work is a particularly large effort, and one we all hope we won't have to repeat for a very long time. So there's a positive that the domain model will be much more stable and much more embedded in people's minds. Stability should help make for a less complex working life, and there may be areas where we are able to refactor the domain model, and so make it less complex.



    On the other hand I do worry that less intense work on the domain model will lead to an atrophy of rigour. For example, if you're no longer defining aggregate roots so often, will you forget lots of good practices and lessons? I don't know, but I think it's a danger. I'm unsure if the positive will outweigh the negative. We'll see.



    Regarding DSLs, there are two things that approach that.



    One is in our automated functional test suite which uses Selenium's Java client driver. This has been of some value, but not as much as we'd have hoped. That's primarily because of the problems inherent in testing a constantly-changing GUI, and also partly (but less so) because in my view Java is too powerful to be used as a DSL -- you can get people doing clever things which others don't understand.
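    To give a flavour of what "Java used like a DSL" means here, a hypothetical wrapper in the style of such a test suite might look like the sketch below. The FrontPage class and its methods are invented for illustration; only the Selenium calls are real API:

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;

    // Illustrative sketch: a thin, domain-flavoured wrapper over Selenium's Java client
    // driver so that tests read more like statements about the site than about the browser.
    public class FrontPage {

        private final Selenium selenium;

        public FrontPage(Selenium selenium) {
            this.selenium = selenium;
        }

        public FrontPage open() {
            selenium.open("/");
            selenium.waitForPageToLoad("30000");
            return this;
        }

        public boolean showsLinkTo(String sectionName) {
            return selenium.isTextPresent(sectionName);
        }

        public static void main(String[] args) {
            Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://www.guardian.co.uk/");
            selenium.start();
            System.out.println("Front page links to Travel: " + new FrontPage(selenium).open().showsLinkTo("Travel"));
            selenium.stop();
        }
    }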



    The second is Apache Velocity, which provides a very simple templating language for the front-end developers. It's pitched at pretty much the right level: flexible enough to be useful, but not so flexible that you end up doing incomprehensible things. When things do need to be complex we shift them back into the application code. There are still problems here around lack of tool support (refactoring and code analysis don't exist for Velocity in the same way as they do for Java). But that's another story altogether.



    Hope that provides some answers.

  • Re: I would like to offer a purely technical .NET 3.5/LINQ/NHibernate transform

    by Damon Wilder Carr,


    Much appreciate your feedback. Your answers show your deep practical experience base. I do have some concerns that I was not clear in my questions.



    You said when describing a DSL:

    ".... primarily because of the problems inherent in testing a constantly-changing GUI..."



    So my comments are about what you do before testing. You're listing a problem I had thought mostly solved. For example, we assume, as a 'critical success factor', quite literally the existence of MVP/MVC or another design that moves any 'reuse potential' code out of the view implementations.



    In fact, this is our first assessment made when working with others. Short of this baseline, we assert forces will drive the solution into very bad places and the goal of everyone (which you cover in this article) would be nearly impossible, at least where you have complex views (say an Ajax-driven web app in one view and mobile apps for another over the same domain).





    Are you saying THIS IS NOT a prerequisite in your opinion? We would love your thoughts on alternatives, as this is a tangible pain-point.





    See footnote (1) for our detailed assumptions and practices around this, and how we (and I know many others) are supporting this via continuous integration with the assumed design in place.



    Next is your statement:
    "...in my view Java is too powerful to be used as a DSL -- you can get people doing clever things which others don't understand..."

    I do not follow. How do you associate this with DSL implementation? Indeed, how could any language be 'too powerful' for DSL realization?



    In my experience and opinion, you are only limited when faced with languages that are not powerful enough; being lucky enough to have too much power is nothing new. We have always had under-skilled developers going nuts and abusing the power of languages.



    OO Multiple Inheritance / Fragile Base Class, for example...



    Most would agree that C# was not a candidate language for internal DSL creation until the 3.0 release.



    As an example, the addition of extension methods allows fluent APIs to evolve even around others' APIs, without source modification (what Fowler calls 'Expression Builder' in his upcoming DSL book).



    C# 3.0 also provides another core enabler for DSLs in LINQ, as we can now manipulate expression trees at run-time (well, we always could, but it is now within the grasp of far more people than before, even if it is still non-trivial).



    Would you agree that DSLs will meet their true potential when they are fully integrated (graphical as well as textual) in implementation and are fully realized for use outside the development team?



    This is the 'how are we going to be tactical in iteration and discovery yet execute on strategic items not relevant to current iterations' question.



    This has been a core concern of ours since around 2000 as an explicit concern, and likely an implicit one for well before that.



    "...I do worry that less intense work on the domain model will lead to (lack of I assume you mean) rigour."



    We all walk this high wire.





    "...if you're no longer defining aggregate roots so often, will you forget lots of good practices and lessons? I don't know, but I think it's a danger"





    Again, this is elaborating the idea above, no? Your example is a good one. We must balance the two perspectives, and most (many) we have found COMPLETELY overemphasize the arguably much easier aspects of the code and short-term iterations.

    Kind Regards,

    Damon Wilder Carr






    blog.domaindotnet.com

    (1) Continuous integration as a prerequisite to Agile and domain-driven success







    • Doing this first teaches an organization what to expect

    • If it fails, Agile would likely have been a vastly larger failure

      • The best will learn from failure and keep going



    • Ability for ‘all the time' knowledge of the operational state of your assets

      • Not about 'does it build', except in early stages, but about 'does it prove itself as valid', now and in the future, assuming fundamental change is happening around and within it.

      • Extends TDD with explicit mandates for mock frameworks solving the dependency issues.



    • Your MVC/MVP work allows associated service layers to participate in the above

    • Although many have solutions that do some of the above, we have not found a reasonable solution for true ‘regressions without mocks driving real view implementation on the C.I. server'.

    • We must offer transparency and no barriers to understanding the work at the level anyone desires. This is an optional (but very value-adding) step of treating your code as a data warehouse/repository for understanding the metrics you care about.

    • We cannot control the context of execution of the C.I. server so we are driven to add the condition of 'proof driven mocks'






    We ask developers to think about how they will prove their code works now and in the future via automated means. Nothing new, but the 'prove' aspect means they are held only to their code and their designs; mocks will handle dependencies. This eliminates most arguments against it.



    We also expect many metrics from the C.I. server, and value-added services like build versioning, automatic packaging with archiving, and alerts with filters targeted to the concerns of each alert receiver (no detailed alerts to the CTO, but always to the developers, etc.).



    This is a main reason we are effective in fundamental change. We are now operating from facts of our code ecosystem.

  • Re: I would like to offer a purely technical .NET 3.5/LINQ/NHibernate transform

    by Nik Silver,


    Damon, acceptance criteria are agreed before development starts, but it's a by-product of the Agile process that changes will occur after development starts, including changes to the acceptance criteria. So late front-end changes do occur frequently, as do changes to previously-established features.



    On your other point, I agree that Java isn't really a DSL, even if we only use a controlled subset. This is why I described it as one of "two things that approach that". It's not a DSL, but it approaches that. It's too powerful in my opinion because its users (when it's used like a DSL) do not have advanced Java skills, yet it can be refactored and rewritten so that you need some reasonably advanced Java skills to understand it.

  • Modelling

    by Neil Murphy,


    20+ years ago I developed domain models using ERDs with business users, who took to using them like a duck to water. I developed systems with users using RAD methods. Amazing how much is forgotten and then trotted out as something amazingly new.

  • DSL + Code Generation + CI

    by Raghavendra Keshavamurthy,


    The main focus of this exercise seems to have been to get the business users to own the domain model and to share a common ubiquitous language between the technical and the business folks. These are good achievements, no doubt, but from a development point of view I think the full power of MDD is realized only when you have a DSL, code generation templates and CI processes to go along with the model. In fact, MDD without these does not differ significantly from developing systems using ERDs and RAD, as mentioned by Neil Murphy. For example, how would you ensure that your domain model and code are in sync? The real goal would be to allow the business users to control not just the domain model but the entire business side of the application as a whole.
    Would be interested in your views on this.

  • Re: DSL + Code Generation + CI

    by Nik Silver,


    Raghavendra, we haven't seriously looked at creating a DSL (other than in Velocity) or code generation templates (though we do have a CI pipeline). I think it may be something we look at in the future when we better understand our pain points and opportunities to optimise our work. However, at the stage the work is at now, the scope of development is too wide to introduce those kinds of things for a particular aspect of the application and be sure the cost will be outweighed by the benefit.



    We don't have a very big problem with (your example) ensuring the domain model and code are in sync. It's a pretty simple relationship, and we have a general understanding that no-one messes with the model layer without an explicit business need backed up by completed analysis. Obviously that kind of implicit social contract can go wrong (as in the example of the FootballMatchSummary) but it's picked up through pair programming and moving developers across the different streams of work which ensures further peer review.



    As for business users controlling the entire business side of the domain, I think there are two requirements for that which we don't have today. One is the single authoritative voice of decision-making. As described in Section 3.2.1 the Production team play this role significantly, but there are still a lot of individuals who have a voice. All this is captured by our business analysts, and therefore they are the ones who might be users of the DSL, not the business users directly. Another requirement is business users who have the time to learn and maintain the DSL code. I suspect this responsibility would take editors, production staff, sales people, etc, too far away from their day-to-day jobs.



    None of this is saying a DSL, code generation templates, etc, wouldn't be appropriate, only that I think they're not appropriate for the organisational environment as it is today. That environment will change, the balance of our work will change, and people's roles and responsibilities will evolve. The tools you outline are likely to be seriously considered in the near future.

  • Re: I would like to offer a purely technical .NET 3.5/LINQ/NHibernate transform

    by Damon Wilder Carr,


    I see where you're coming from. Thanks for the feedback. It's much appreciated.

    Damon

  • Re: Modelling

    by Damon Wilder Carr,


    ERD != Domain Model

    Damon

  • Great

    by Cheung Ryan,


    Great article, this really helped me understand DDD. Thanks for sharing!

  • Services

    by Normen Müller,


    Hi, in section "2.2. Embedding the domain model into the system" right beneath the graphics you state:


    The Model layer contains the domain objects and repositories, with services residing in the next layer above.


    The "next layer above" is not depicted within the graphics, right? It would be in between "Web Site" and "Digital Content Platform", correct?

    Best, /nm
