
Automated Builds: How to Get Started


The first part of this series discussed some of the benefits of automating your build and deployment processes. There are many reasons you may want to do this - to allow your developers to focus on core business instead of administration, to reduce the potential for human error, to reduce the time spent on deployment, and a variety of others. Whatever your motivations are, automating your build process is always the right answer.

In this article, we will take a common example of a corporate web application for a fictional financial institution, and walk through fully automating their build process.

Case Description

Our company is 3rd National Bank, a local financial institution. Our online banking application consists of the front-end web application (ASP.NET); a RESTful service (WebAPI) for connecting from mobile applications; a series of internal services (WCF) which use a traditional domain-driven design to separate out business logic, domain objects, and database access (Entity Framework); and a SQL Server database.

The software team uses Mercurial as their source control system, and delivers features regularly, using a feature-branch strategy - a branch is created for each new feature or bug, and once tested, the code is merged into the main line for release.

Currently all of the build and deployment steps are done manually by the software team, causing developers to spend several hours every week maintaining their repositories and servers instead of writing code. We’re trying to change that, and automate as much of the process as possible.

Build Scripts

Build scripts are the first step toward automating your build. These scripts come in all shapes and sizes – they can be shell or batch scripts, XML-based, or written in a custom or an existing programming language; they can be auto-generated or hand-coded; or they can be totally hidden inside an IDE. Your process may even combine multiple techniques. In this example, we'll use NAnt as our build script engine.

In our environment, we have separate Visual Studio solutions for the front-end web application, the external service, and the internal service application, plus a database solution for the SQL database. We'll create a single master build script file, which looks something like this:

<project name="3rd National Bank" default="build">
   <target name="build">
     <call target="createStagingEnvironment" />
     <nant buildfile="BankDB/" target="build" />
     <nant buildfile="ServiceLayer/" target="build" />
     <nant buildfile="OnlineBanking/" target="build" />
     <nant buildfile="ExternalServices/" target="build" />
   </target>
</project>

This script doesn't do any compilation itself - it simply calls into each of the four solutions. Each solution gets its own build file, which contains all the code required to compile and prepare its part of the application.

Now let's take a look at a build script for one of these solutions. Each solution follows the same basic steps: prepare, compile, and stage. Here is a basic build script for the ServiceLayer solution - the syntax is pretty straightforward:

<project name="ServiceLayer">
   <property name="msbuildExe" value="c:\windows\\framework\v4.0.30319\msbuild.exe" />
   <target name="build">
     <call target="prepare" />
     <call target="compile" />
     <call target="stage" />
   </target>
   <target name="prepare">
     <!-- Implementation omitted -->
   </target>
   <target name="compile">
     <exec program="${msbuildExe}">
       <arg value="ServiceLayer.sln" />
       <arg value="/p:Configuration=Release" />
       <arg value="/t:rebuild" />
     </exec>
   </target>
   <target name="stage">
     <copy todir="../deploy/BankWcf">
       <fileset>
         <include name="WcfServices/**/*.svc" />
         <include name="WcfServices/**/*.asax" />
         <include name="WcfServices/**/*.config" />
         <include name="WcfServices/**/*.dll" />
       </fileset>
     </copy>
   </target>
</project>

The preparation steps may involve building an AssemblyInfo file, or rebuilding proxies, or any number of other things. The compilation step in this case is simply calling MSBuild, the build engine Visual Studio uses to compile a solution. Finally, after everything builds successfully, the last step copies out the appropriate files into a staging area, to be picked up later.
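As one sketch of what a prepare step might contain, NAnt's built-in asminfo task can regenerate an AssemblyInfo file before each compile. The version number and output path below are illustrative assumptions, not values from the original project:

```xml
<target name="prepare">
  <!-- Regenerate AssemblyInfo.cs so every build carries a consistent
       version stamp. Version and path are illustrative assumptions. -->
  <asminfo output="Properties/AssemblyInfo.cs" language="CSharp">
    <attributes>
      <attribute type="System.Reflection.AssemblyTitleAttribute" value="ServiceLayer" />
      <attribute type="System.Reflection.AssemblyVersionAttribute" value="1.0.0.0" />
    </attributes>
  </asminfo>
</target>
```

Fully qualified attribute type names avoid the need for a separate namespace import list in the task.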

We do the same thing for the other three solutions, modifying them appropriately based on the different project types.
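For instance, the web application's stage target would filter web content rather than service files. Something like the following sketch, where the file patterns are assumptions based on a typical ASP.NET project layout:

```xml
<target name="stage">
  <!-- Copy only deployable web content into the staging area;
       source files stay behind. Patterns are assumptions. -->
  <copy todir="../deploy/BankWeb">
    <fileset>
      <include name="OnlineBanking/**/*.aspx" />
      <include name="OnlineBanking/**/*.asax" />
      <include name="OnlineBanking/**/*.config" />
      <include name="OnlineBanking/**/bin/**/*.dll" />
      <include name="OnlineBanking/**/*.css" />
      <include name="OnlineBanking/**/*.js" />
    </fileset>
  </copy>
</target>
```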

Writing your build scripts is just like writing any other kind of code – there are endless ways of accomplishing the same result. You can use the command-line compiler executables directly instead of MSBuild. You can build the projects individually rather than building the full solution. You can use MSDeploy to stage out or deploy your application instead of defining a filter and copying files. In the end, it's all about what you're comfortable with. As long as your scripts produce consistent output, there's no wrong way to write them.
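As one illustration of that flexibility, the stage step could shell out to MSDeploy instead of defining a filtered copy. This is a sketch only - the install path and content paths are hypothetical, and msdeploy.exe must be present on the build machine:

```xml
<target name="stage">
  <!-- Sync the built service into the staging area with MSDeploy
       rather than a filtered copy. Paths are hypothetical. -->
  <exec program="C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe">
    <arg value="-verb:sync" />
    <arg value="-source:contentPath=C:\build\ServiceLayer\WcfServices" />
    <arg value="-dest:contentPath=C:\build\deploy\BankWcf" />
  </exec>
</target>
```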

Continuous Integration

Now that we have build scripts, we need something that will call them. We could run our build scripts from the command line – but since we're trying to automate everything, we need a machine to run the scripts when appropriate. This is where continuous integration comes in.

Let's use TeamCity, a product by JetBrains, for our CI. It has a very reasonable pricing model, and offers a fantastic user experience for setting up projects. After an easy installation on our new build server, we're ready to get started.

In TeamCity, your first step is setting up a project. The project consists of a name, along with a collection of build configurations. Let’s create a project called “3rd National Bank”.

We’re going to want to set up a build template, which will represent the settings used for the mainline as well as any branches we want to put under CI. We’ll set up our version control settings, selecting Mercurial as our source code repo, the default branch, credentials, and a place to check the files out to. Next is a build step, selecting NAnt and our master build file. If we have a unit test project, we’ll simply add another build step to run NUnit or MSTest or whatever we’re using. Finally, we’ll select a build trigger tied to our version control, which means the build will run every time someone pushes code to the main repository.

There are lots of other useful things you can do in TeamCity, like defining failure conditions based on build output, dependencies on other builds, and custom parameters, which you can explore as you need. But we’ve got all we need for a basic build now.

Let’s create a new build configuration from this template, called “Main Line”. This will represent the top of the version control tree, where stable production-ready code lives. Since we also have feature branches out there, we can create as many more build configurations from the template as we need, one for each feature, and we should only need to make minor tweaks to the source control settings. We now have not only our mainline, but every open feature building automatically upon code check-in, all in just a couple of minutes.

When a feature is done and merged into the mainline for release, the build configuration for that branch can simply be deleted.


Deployment

Now that our CI system has built our code, run our tests, and staged out a release, we can talk about deployment. Like anything else, there are many different strategies you can use to deploy your applications. Here are a few basic strategies you may want to use to deploy a web application in IIS:

  • Simply back up your existing applications and copy the new code on top. You never have to worry about touching your configuration.
  • Copy your code out to a brand new versioned directory on your web server. You can do this in advance. When you are ready, re-point IIS to the new directory. You can take the extra step of having a "staging" web application in IIS that points to the new version, against which you can run some preliminary tests before making the switch.
  • Don’t hide the versions; include them in your URL. The root application redirects to the newest version using a simple config setting or IIS setting, while the previous version (say, v3.4) stays alive - only customers entering through the root application see the new one, so existing sessions aren’t interrupted. After an hour or so, when all the sessions on v3.4 are gone, you can safely remove it from IIS.
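For the versioned-directory strategy, the re-point itself can be a one-liner using IIS's appcmd tool. The site name, application path, and folder below are hypothetical examples:

```shell
rem Re-point the BankWeb application at the newly deployed versioned folder.
rem Site name, application path, and folder are hypothetical examples.
%windir%\system32\inetsrv\appcmd.exe set vdir "Default Web Site/BankWeb/" /physicalPath:"C:\www\BankWeb\v3.5"
```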

Your team can determine what's best for you - it depends on your organization's policy toward outage windows and uptime requirements, as well as your database design strategy. For our example, we'll assume we have a one-hour weekly outage window, so we'll pick the simple file-copy strategy, which gives us time to back up the code and database, deploy them, and test everything before turning things back on.

Your CI system has staged out your release, so it's simply a matter of getting these files out to your production servers and deploying your database changes. If you have file system access to your build and web servers from your desktop, copying the files can be as simple as executing a batch file that looks like:

rem Back up the current production applications
robocopy \\web1\www\BankWeb \\web1\Backups\BankWeb\%DATETIME% /E
robocopy \\web1\www\BankRest \\web1\Backups\BankRest\%DATETIME% /E
robocopy \\svc1\www\BankWcf \\svc1\Backups\BankWcf\%DATETIME% /E

rem Copy the staged build from the build server into production
robocopy \\build1\BankWeb\deploy \\web1\www\BankWeb /E
robocopy \\build1\BankRest\deploy \\web1\www\BankRest /E
robocopy \\build1\BankWcf\deploy \\svc1\www\BankWcf /E

If you don't have full file system access, you'll need to find a more creative way to deploy your files. Of course, you can Remote Desktop into your server or an intermediary server to execute a batch file or manually copy files, but remember we're trying to automate this, so the fewer steps, the better. Ideally, you'll have a trusted system in the middle, which has the ability to deploy files to the web servers after it authenticates you. DubDubDeploy is one option that copies files from a trusted server over HTTP to allow you to deploy without access to the web server's file system.

Deploying a database can be done many ways, again depending on your organization. In this example, we are using a database project, so it is as simple as executing a single command, which takes the project, automatically compares it to your production database, and executes the change script, along with any custom SQL you’ve written to build seed data. If you're comfortable letting the system do this on its own, it's just a matter of executing a command:

msbuild.exe /target:Deploy /p:TargetDatabase=3rdNational;TargetConnectionString="Server=db1;Integrated Security=True";Configuration=Release BankDb.sqlproj

Of course, you can execute this any number of ways - you can add it to your NAnt script as a target, or add it to TeamCity, or run it manually, or put it in a deployment batch file.
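Wrapped as a NAnt target, that same deployment command might look like the sketch below, assuming an msbuildExe property like the one defined in the ServiceLayer build script:

```xml
<target name="deployDatabase">
  <!-- Deploy schema changes by diffing the database project against
       production, as described above. Assumes the msbuildExe property
       from the solution build scripts is defined here as well. -->
  <exec program="${msbuildExe}">
    <arg value="BankDb.sqlproj" />
    <arg value="/target:Deploy" />
    <arg value='/p:TargetDatabase=3rdNational;TargetConnectionString="Server=db1;Integrated Security=True";Configuration=Release' />
  </exec>
</target>
```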


Conclusion

We've come a long way: when we started, building, running tests, staging, backing up, and deploying our code were all manual tasks. Now there are scripts that compile our code, a system that continuously and consistently executes those scripts and runs our unit tests, and a simple, repeatable deployment task.

If you don't have the time to set up everything at once, you don't have to. You can do this a little at a time, and still benefit from each step. For example, you can probably put together a basic build script for your entire set of applications in less than an hour. Maybe another hour to test and debug it to ensure it builds things the same way you're used to. Now, without even adding CI or other automation, you've made it easier to build and stage your app, so next time you deploy manually, you don't even have to open your IDE. Maybe you'll have time next week or next month to create a simple CI project, which you can improve the following month. Before you know it, you'll have a fully automated process.

I've specifically used .NET, NAnt, and TeamCity for these examples, but the fundamental principles can be applied to anything. Whatever operating system, programming languages, server technologies, source control strategies, and team structure you have, automation is possible, affordable, and well worth the effort.

About the Author

Joe Enos is a software engineer and entrepreneur, with 10 years’ experience working in .NET software environments. His primary focus is automation and process improvement, both inside and outside the software world. He has spoken at software events across the United States about build automation for small software development teams, introducing the topics of build scripts and continuous integration.

His company’s first software product, DubDubDeploy, was recently released - the first in a series of products to help software teams manage their build and deployment process. His team is currently working on fully automating the .NET build cycle.
