Database Continuous Delivery

A fast-moving world: Agile, DevOps & Automation

Business needs are the most significant driver of change: doing more with less, and delivering it sooner, is what differentiates leading, successful companies from the rest.

When a competitor delivers relevant features faster, and with better quality, than you do, you are eventually going to lose market share. 'Agile development' was born from the need to move more quickly, deal with ever-changing requirements, and assure the best quality possible, usually with too few resources.

You just can't wait six months for the next roll-out or release; the waterfall methodology's big-release concept doesn't cut it anymore.

Agility is what is expected from technology companies and IT divisions.

The next natural step is linking development with operations, which has given rise to 'DevOps'.

To effectively master Agile sprint deployments and to practice DevOps, one needs to be able to implement deployment and process automation. Otherwise, deployments and releases require manual steps and processes, which are not accurately repeatable, are prone to human error, and cannot be performed at high frequency.

Continuous Integration, Continuous Delivery, and Continuous Deployment are the common principles and practices for structuring this automation and for setting ground rules for the many participants in the development, build, test, and release process.

These principles are not new, but they are gaining traction and adoption as they prove their benefits, just like Agile development did some years ago.

Continuous Integration, Delivery & Deployment

As a set of principles and practices, Continuous Integration, Continuous Delivery, and Continuous Deployment are not a case of 'one size fits all.' It is important to understand that every company has its own unique challenges, and these practices should be tuned to fit its organizational structure, culture, and processes.

Continuous Integration

Continuous Integration is designed to streamline development and prevent integration problems.

This goal is usually achieved with the help of build servers. These servers receive code changes from the version control repository, automatically build them, and run unit tests to verify the changes, giving developers quick feedback. Tests might run periodically, or even after each change is committed (checked in), preventing, or at least quickly flagging, code changes that break other code or fail the tests.
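
A minimal sketch of what such a build server does on each commit, in Python; the build and test commands ('./build.sh', './run_unit_tests.sh') are placeholders for whatever your project actually uses.

    import subprocess
    import sys

    def run(cmd):
        """Run a shell command; return True if it exits cleanly."""
        print(">>", " ".join(cmd))
        return subprocess.run(cmd).returncode == 0

    def alert(stage):
        """Quick feedback: tell the committer which stage broke."""
        print("ALERT: %s failed -- notifying the committer" % stage, file=sys.stderr)
        return False

    def on_commit():
        """Triggered by the version control repository on every check-in."""
        if not run(["git", "pull", "--ff-only"]):
            return alert("fetching changes")
        if not run(["./build.sh"]):                # project-specific build
            return alert("build")
        if not run(["./run_unit_tests.sh"]):       # fast unit-test suite
            return alert("unit tests")
        print("OK: the change integrates cleanly")
        return True

    if __name__ == "__main__":
        sys.exit(0 if on_commit() else 1)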

In addition to code-centric unit tests that assure code completeness, running integration tests or application-level regression tests helps complete the picture and assure quality.

Quick feedback on integration problems, together with automated tests that assure quality, gives high visibility into overall development and saves time locating problems, cutting development and integration time while ensuring higher quality.

Continuous Delivery

Continuous Delivery is the next automation step after Continuous Integration. While striving to become efficient, lean, and more agile, we can plan to make each change "releasable", so that we always have a tested build ready for deployment.

Moving changes between the different lifecycle stages should happen automatically; the overall process looks like this:

Checking in changes in development → building the deploy package → running unit tests → moving the changes to the testing and later the staging environment → running acceptance tests.

In case of a failure, we get an automated alert about the problem, go back to development, and restart the cycle.
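
The stage sequence can be sketched as a simple ordered pipeline; the step bodies below are placeholders standing in for real build, deploy, and test commands.

    # Each stage must pass before the change moves on; any failure raises an
    # alert and sends the change back to development.
    PIPELINE = [
        ("build deploy package", lambda: True),   # placeholder steps
        ("run unit tests",       lambda: True),
        ("deploy to testing",    lambda: True),
        ("deploy to staging",    lambda: True),
        ("run acceptance tests", lambda: True),
    ]

    def run_pipeline():
        for name, step in PIPELINE:
            if not step():
                print("ALERT: '%s' failed -- back to development" % name)
                return False
        print("Tested build ready; production release awaits the button")
        return True

    run_pipeline()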

Once the process completes, a fully tested application is available to be released to production at the click of a button. The actual deployment to production is triggered manually, followed by a rerun of the regression tests.

As all changes have been tested and accounted for, and deployment between the earlier lifecycle stages has also been exercised, the actual deployment to production becomes much easier and carries significantly less risk.

Following Continuous Delivery practices means we always have a releasable version at hand, enabling timely releases based on business decisions, time-to-market considerations, and so on.

Continuous Deployment

Continuous Deployment takes the next step: unlike Continuous Delivery, it pushes changes to production automatically and runs the concluding set of tests there.

Leveraging Continuous Deployment in a SaaS-type application or product (like Facebook or Amazon) makes a lot of sense: the company can throttle traffic to a new feature, run A/B tests to evaluate changes, run the old release side by side with the new one, and overall measure and manage changes with confidence.

Continuous Deployment can be risky, as we remove the human factor from the approval button. Even though it makes sense in the scenarios described above, it doesn't always make sense from a business perspective (I might be wrong, but I can't imagine a developer at a bank pushing changes that go automatically to production without someone approving them before they actually go live).
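
As an illustration of the traffic throttling and side-by-side releases mentioned above, here is a hypothetical gradual rollout in Python; the fractions and the error threshold are invented numbers, not a prescription.

    import random

    def choose_release(canary_fraction):
        """Route a request to the new release with the given probability."""
        return "new" if random.random() < canary_fraction else "old"

    def next_fraction(canary_fraction, error_rate, threshold=0.01):
        """Widen the canary slice while errors stay healthy; otherwise
        roll the new release back to zero traffic."""
        if error_rate > threshold:
            return 0.0                      # automatic rollback
        return min(1.0, canary_fraction * 2)

    print(choose_release(0.05))             # ~5% of requests hit the new release
    print(next_fraction(0.05, 0.002))       # healthy: widen to 0.1
    print(next_fraction(0.10, 0.05))        # unhealthy: back to 0.0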

Measuring Continuous Processes Success

Success from continuous processes is usually clear and centers on these areas:

  1. More rapid changes – the ability to react more quickly
  2. Fewer changes backed out – higher code quality, quicker time to market
  3. More stable releases – fewer defects reaching end customers
  4. Better collaboration between development and operations (DevOps)

By automating "everything" and moving tested, focused updates "upstream", we get better service, happier customers, and a better bottom line.
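
Two of the measures above can be computed directly from a deployment log. A small sketch with made-up data; the log format is an assumption.

    from datetime import date

    # Hypothetical deployment log: when we shipped and whether it was backed out.
    deployments = [
        {"day": date(2015, 3, 2),  "backed_out": False},
        {"day": date(2015, 3, 9),  "backed_out": True},
        {"day": date(2015, 3, 16), "backed_out": False},
        {"day": date(2015, 3, 23), "backed_out": False},
    ]

    weeks = (deployments[-1]["day"] - deployments[0]["day"]).days / 7
    frequency = len(deployments) / weeks                     # releases per week
    backed_out = sum(d["backed_out"] for d in deployments) / len(deployments)

    print("releases/week: %.1f, backed out: %.0f%%" % (frequency, backed_out * 100))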

Safe Database Continuous Delivery

Dealing with database deployments is tricky. Unlike other software components, a database is not a collection of files; it is the container of our most valued asset – the business data – which must be preserved. It holds all application content, customer transactions, and so on. To promote database changes, transition code must be developed: scripts that handle the database schema structure (table structure), database code (procedures, functions, etc.), and content used by the application (metadata, lookup content, or parameter tables).
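
A toy illustration of such transition code in Python, with SQLite standing in for any database: ordered scripts upgrade the schema and lookup content from one version to the next while the stored data is preserved. The table names and version scheme are illustrative.

    import sqlite3

    # Ordered transition scripts: schema structure and application content.
    MIGRATIONS = {
        1: "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",
        2: "ALTER TABLE customers ADD COLUMN email TEXT",
        3: "CREATE TABLE status_lookup (code TEXT PRIMARY KEY, label TEXT)",
        4: "INSERT INTO status_lookup VALUES ('A', 'Active'), ('C', 'Closed')",
    }

    def current_version(conn):
        return conn.execute("PRAGMA user_version").fetchone()[0]

    def upgrade(conn, target):
        """Apply each pending transition script in order; existing rows
        (the business data) are kept, never dropped and recreated."""
        for v in range(current_version(conn) + 1, target + 1):
            conn.execute(MIGRATIONS[v])
            conn.execute("PRAGMA user_version = %d" % v)
        conn.commit()

    conn = sqlite3.connect(":memory:")
    upgrade(conn, 4)
    print(current_version(conn))   # -> 4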

Achieving automation by scripting database object change-scripts into traditional version control is limited, inflexible, and disconnected from the database itself; the scripts may no longer reflect reality and can miss updates made to the target environment by conflicting changes. Automating 'compare & sync' tools is likewise risky. The two concepts are incompatible, as each is unaware of the other.

A simplified automation process is based on a Build Once Deploy Many approach and looks like the following:

Check in changes to version control → build → deploy to test → run unit tests → deploy to the next level → run more tests → … → deploy to UAT → test → deploy to production

One Build step and many Deploy & Test steps – hence the name Build Once Deploy Many.
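
For ordinary binaries the approach can be sketched as one immutable artifact promoted unchanged through every environment; the environment names and steps below are illustrative.

    def build(commit):
        """The single Build step: produce one immutable package."""
        return {"commit": commit, "package": "app-%s.tar.gz" % commit}

    def deploy_and_test(package, env, tests):
        """A Deploy & Test step; a placeholder for real deploy/test commands."""
        print("deploying %s to %s, running %s" % (package["package"], env, tests))
        return True

    artifact = build("a1b2c3")                      # built exactly once
    for env, tests in [("test", "unit tests"),
                       ("QA", "integration tests"),
                       ("UAT", "acceptance tests"),
                       ("production", "smoke tests")]:
        if not deploy_and_test(artifact, env, tests):
            break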

While this works for native code binaries, it is not the case for database deployments: you can't copy & replace; you must transform the previous version into the new version while keeping the business data stored in the database.

There are many scenarios in which the target environment changes after the scripts were created and before they are run: a critical fix made in the database out of process, or parallel work by another team, for example. Each breaks expected results and causes problems for the Build Once Deploy Many approach.

[Build Once Deploy Many fails for Database Deployments]

Database Automation – Build & Deploy on Demand

With the build-and-deploy-on-demand approach, the delta script that upgrades the database from the current version to the next is generated when needed, ensuring that the up-to-date state of the target is validated and taken into account.

After a successful upgrade to pre-production, the script is saved and reused, so the production upgrade is based on the script already tested in the pre-production run.
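
A much-simplified sketch of the idea: the delta script is generated against the actual, inspected state of the target at deploy time, and the script that passes in pre-production is the one saved for production. Real tools handle far more than missing columns; this only illustrates the flow.

    def generate_delta(source_schema, target_schema):
        """Produce the statements that bring the target up to the source state."""
        delta = []
        for table, cols in source_schema.items():
            missing = set(cols) - set(target_schema.get(table, []))
            for col in sorted(missing):
                delta.append("ALTER TABLE %s ADD COLUMN %s" % (table, col))
        return delta

    source  = {"customers": ["id", "name", "email"]}
    preprod = {"customers": ["id", "name"]}        # inspected at deploy time
    script = generate_delta(source, preprod)       # generated on demand
    print(script)        # ['ALTER TABLE customers ADD COLUMN email']
    # Once this script succeeds in pre-production, the same saved script
    # is replayed against production.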

[Build & Deploy on Demand] 

Confidence with Automation

Without confidence in the automation, no one will ever use it. Lacking notification of conflicts in database code between environments, DBAs and developers find it difficult to rely on a script generated by a simple compare & sync method.

The same issues of parallel development, sandboxes, and branches were solved many years ago in file-based version control by notification on the check-in event. When there is a conflict, the developer receives an alert that the code being checked in was changed by someone else while it was checked out. The developer merges the changes on the local PC and then checks in the merged code.

A database is different: it is not stored on the developer's PC; the code exists in every environment, and anyone with sufficient permissions can modify it. The merge should therefore be resolved when generating the delta script, using the baseline approach (the same approach modern file-based version control tools use).
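
A sketch of baseline-aware analysis in Python: comparing the source, the target, and the shared baseline distinguishes a change that is safe to promote from one that must be protected or merged. Object definitions are reduced to plain strings here.

    def classify(baseline, source, target):
        """Decide what to do with one database object during deployment."""
        if source == target:
            return "no action: environments already match"
        if target == baseline:
            return "safe to deploy: only the source changed"
        if source == baseline:
            return "protect target: the change (e.g. a hot fix) exists only there"
        return "conflict: both sides changed -- merge before deploying"

    print(classify("v1", "v2", "v1"))   # safe to deploy
    print(classify("v1", "v1", "v3"))   # protect target
    print(classify("v1", "v2", "v3"))   # conflict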

[Baseline Impact Analysis]

Checking the boxes

Realizing that database continuous processes require robust database version control, safe deployment automation, a clear work process, and automation of that process helps us define the kind of solution to look for when implementing Continuous Delivery.

The following are a few examples of these challenges and how they are solved with the help of DBmaestro.

  1. Enforced database version control ensures that all database changes follow a mandatory documentation process, so we always know who did what, when, and why – a basic building block for the rest of the process.
  2. Task-based development assists in associating each introduced change with a change request, trouble ticket, or work item. Later deployments can rely on this information to decide which changes need to ship as part of the pending release and which have been postponed.
  3. Baseline-aware deployment, connected to version control and performing an impact analysis on introduced changes, helps us safely understand which changes should be deployed. More importantly, it makes clear which changes should be discarded, so as not to override important changes already deployed by other teams, or hot fixes made to production.
  4. Automation interfaces (web services, a command-line API, etc.) are a must in order to create a harmonious process that deals with database changes as part of the whole picture, tightly coupled with the delivery of Java or .Net code changes. Being able to raise red flags and "stop the line" automatically when required is mandatory for continuous processes. The last thing we want is to blindly push changes to production; an alert that something violates our safety assumptions can make the difference between sleeping tight and having nightmares. A sketch of such an interface follows this list.
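
A sketch of such an automation interface from the pipeline's side; 'db-deploy-cli' and its flags are invented here for illustration and are not a real DBmaestro API.

    import subprocess
    import sys

    # Call the (hypothetical) database deployment tool through its
    # command-line interface as one step of the overall delivery pipeline.
    result = subprocess.run(
        ["db-deploy-cli", "--env", "staging", "--package", "release-42"],
        capture_output=True, text=True)

    if result.returncode != 0:
        # Red flag raised: stop the line instead of blindly pushing onward.
        print("Deployment halted:", result.stderr, file=sys.stderr)
        sys.exit(1)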

Summary

The database creates a real challenge for automation, and hence for participation in continuous processes. Scripting database object change-scripts into traditional version control, or using 'compare & sync' tools, is respectively an inefficient or a risky thing to automate, as each concept is unaware of the other. A better solution must be implemented, in the shape of Continuous Delivery and DevOps for the database.

Database Continuous Delivery should follow the proven best practices of change management: enforcing a single change process over the database and dealing with deployment conflicts to eliminate the risk of code overrides, cross-updates, and code merges, while plugging into the rest of the release process.

About the Author

Yaniv Yehuda is the Co-Founder and CTO of DBmaestro, an enterprise software development company focusing on database development and deployment technologies. Yaniv is also the Co-Founder and head of development at Extreme Technology, an IT service provider for the Israeli market. Yaniv was a captain in Mamram, the Israel Defense Forces' computer center, where he served as a software engineering manager.
