Datical Adds Continuous Delivery Capabilities for the Database

Database release automation provider Datical has announced Datical 5, a new version that provides a central command centre with enhanced database release visibility, allowing developers to manage and rework database code changes as they would application code changes, without going through a manual process. Security for database credentials and auditability have also been enhanced.

Developers and data professionals can access a collection of dashboards illustrating the status and velocity of database releases, as well as a management interface for tracking database changes across the release pipeline. The impact of changes can be simulated before they are deployed to an environment, while audit and compliance rules are enforced.

The new release addresses the challenges associated with iterative database development and allows developers to check in an updated version of database code that needs rework. Reworking database changes today is challenging since, the instant a change is applied, data is added, modified, reorganised or deleted. To rework a change, database administrators must manually undo the previous change, which adds work and presents a potential constraint in the application release process.

Datical 5 automates the backing-out of unwanted database changes and the application of updated changes.

Managing database environment credentials and access can be a high-risk process as a result of a lack of standardisation and unified control. A new centralised credential store provides a standard approach to managing database credentials across the enterprise. It also controls stakeholder access to the centralised management interface, improving data security, and helps to organise users and enforce privileges for viewing deployment status or making database changes.
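Datical has not published the internals of its credential store, but a minimal sketch of the general idea, with an entirely hypothetical CredentialStore API, might look like the following: deployment jobs request credentials by environment at deploy time, and access is checked centrally per user and action.

    # Hypothetical sketch of centralised credential handling; nothing
    # here reflects Datical's actual implementation.

    class CredentialStore:
        def __init__(self):
            self._secrets = {}   # environment -> (db user, db password)
            self._grants = set() # (user, environment, action)

        def add_secret(self, environment, db_user, db_password):
            self._secrets[environment] = (db_user, db_password)

        def grant(self, user, environment, action):
            self._grants.add((user, environment, action))

        def credentials_for(self, user, environment, action="deploy"):
            # Access control is enforced centrally, per user and action.
            if (user, environment, action) not in self._grants:
                raise PermissionError(f"{user} may not {action} on {environment}")
            return self._secrets[environment]

    store = CredentialStore()
    store.add_secret("production", "app_db", "s3cret")
    store.grant("release-bot", "production", "deploy")

    # The pipeline looks credentials up at deploy time; no passwords in scripts.
    user, password = store.credentials_for("release-bot", "production")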

As Eduardo Piairo wrote in his article, Why and How Database Changes Should be Included in the Deployment Pipeline:

Databases and applications should coexist in the same deployment pipeline. In other words, they should share a single development lifecycle. Database development should not be handled differently from application development.

Piairo advises using a code-driven approach to database deployments:

When all database changes are described with scripts, source control can be applied and the database development process can include continuous integration and continuous delivery activities, namely taking part in the deployment pipeline.
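As a concrete illustration of a scripted change under source control, a migration file might look like the minimal sketch below. The file name, change id and SQL are illustrative only, not the format of Datical or any specific tool; the key point is that the change and its rollback travel together as an ordinary, versionable text file.

    # migrations/0003_add_customer_email.py
    # A scripted database change kept under source control. Pairing the
    # change with its rollback is what makes later automation possible.

    CHANGE_ID = "0003_add_customer_email"
    AUTHOR = "jane.doe"

    APPLY_SQL = """
    ALTER TABLE customer ADD COLUMN email VARCHAR(255);
    """

    ROLLBACK_SQL = """
    ALTER TABLE customer DROP COLUMN email;
    """

Because each change is a plain text file, it can be diffed, reviewed and versioned exactly like application code.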

Piairo also describes the steps to take when automating database deployments:

The first step is to describe the database change using a script. Normally this step is done by a human. The next step is to build and validate an artefact containing the database changes. This step should be fully automated. In other words, we need to use scripts for building the package, running unit/integration tests, and sending the package to an artefact management system and/or to a deployment server/service. Normally, build servers already include these steps by default. The last step would be to automate the deployment of the database changes in the target server/database. As in the previous step, you can use the deployment server built-in options or build your own customised scripts.
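A minimal sketch of the automated steps Piairo lists might look like the following, assuming migration scripts live in a migrations/ directory, a pytest suite validates them against a disposable database, and "deploy-tool" stands in for whatever release automation CLI is in use; all of these names are assumptions, not Datical's tooling.

    import hashlib
    import subprocess
    import zipfile
    from pathlib import Path

    def build_artifact(migrations_dir: Path, out: Path) -> str:
        """Package the scripted changes into a single versioned artefact."""
        with zipfile.ZipFile(out, "w") as archive:
            for script in sorted(migrations_dir.glob("*.py")):
                archive.write(script, script.name)
        # A digest identifies the same artefact all the way to production.
        return hashlib.sha256(out.read_bytes()).hexdigest()

    def run_tests() -> None:
        """Validate the changes, e.g. against a disposable test database."""
        subprocess.run(["pytest", "tests/"], check=True)

    def deploy(artifact: Path, target: str) -> None:
        """Hand the artefact to the deployment server or scripts."""
        # 'deploy-tool' is a placeholder for the release automation CLI in use.
        subprocess.run(["deploy-tool", "--target", target, str(artifact)], check=True)

    if __name__ == "__main__":
        artifact = Path("db-changes.zip")
        digest = build_artifact(Path("migrations"), artifact)
        run_tests()
        deploy(artifact, "staging")
        print(f"deployed {artifact} ({digest[:12]}) to staging")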

In his blog post, What's Wrong With Using Jenkins to Push Database Changes?, Sanjay Challa, senior product marketing manager at Datical, explains:

Database changes can certainly be committed to source code control. These scripts can be checked out of source code control and packaged into an artefact with a CI tool. The resulting artefact could be pushed to an artefact repository, from where a release automation tool could be used to deploy to environments along the release pipeline, all the way to production.

But Challa goes on to advise caution with this approach:

It's necessary to recall that databases have state. Consequently, database changes must be carefully managed because you do not want to corrupt their state. While it's possible to simply replace an older version of an application by overwriting it with an updated version, the same simply isn't true for the database. A bad database release can result in irrecoverable data loss for the organisation or a substantial outage for an application. It's simply not possible to 'blow away' an old version of the database and overwrite it with an updated version as is the case with most application executables.

Fundamentally, relying solely on build, configuration, and release automation tools for managing database deployments puts data at risk. Given the consequences of a bad change, database changes are often handled in a separate, manual process. Organisations are left living with the lower application release velocity and lower code quality that are inherent with a manual database change process. In the end, if relying solely on build and release automation tools, application and database changes will never flow through the release pipeline at the same pace.

Challa explains that a number of features are required to run database changes through the same single, unified release pipeline through which application code flows. It's common for a database change to require rework and, because the database retains state, reworking the change takes more effort than for any other type of code. Typically, a DBA must manually revert a database environment before a developer can rework a change. Without manually undoing the change that needs rework, a roll-forward fix is applied in lower-level environments while a different change is applied to higher-level environments that were never exposed to the original change. This breaks the fundamental DevOps concept of 'build once, deploy often', as the deployment to higher environments is inconsistent with the deployment to lower environments.

When developers can treat database code just like application code and check an updated version of a database change into source code control, the right tooling can intelligently sanitise lower-level environments where the older version of the change was applied, apply the updated version there, and apply only the updated change to higher-level environments that were never exposed to the old version. This reduces the risk of damaging the database state, eliminates the manual effort, and allows a consistent artefact to be deployed through the pipeline, as sketched below.
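A minimal sketch of that rework logic, with hypothetical Change and Environment types and simple per-environment version tracking (nothing here reflects Datical's internals), could look like this:

    from dataclasses import dataclass, field

    @dataclass
    class Change:
        id: str
        version: int
        apply_sql: str
        rollback_sql: str

    @dataclass
    class Environment:
        name: str
        applied: dict = field(default_factory=dict)  # change id -> version

        def run_sql(self, sql: str) -> None:
            print(f"[{self.name}] {sql.strip()}")    # stand-in for a real connection

    def deploy(env: Environment, change: Change) -> None:
        seen = env.applied.get(change.id)
        if seen == change.version:
            return                              # already current; nothing to do
        if seen is not None:
            # Sanitise: undo the old version before applying the update.
            # This sketch reuses the new change's rollback; a real tool
            # records the rollback of the version actually applied.
            env.run_sql(change.rollback_sql)
        env.run_sql(change.apply_sql)           # apply the updated version
        env.applied[change.id] = change.version

    # dev saw v1 of the change; staging never did
    dev = Environment("dev", applied={"0003_add_customer_email": 1})
    staging = Environment("staging")
    v2 = Change("0003_add_customer_email", 2,
                "ALTER TABLE customer ADD COLUMN email VARCHAR(320);",
                "ALTER TABLE customer DROP COLUMN email;")

    for env in (dev, staging):
        deploy(env, v2)   # dev is rolled back then updated; staging just applies v2

The same artefact flows to every environment; the tool's record of what each environment has already seen determines whether a rollback happens first.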

Challa goes on to explain that the validation of database code is typically a completely manual process that doesn't pass through the kind of automated tests we expect in a continuous integration or delivery pipeline. Datical have addressed this constraint with their Dynamic Rules Engine, which allows organisations to codify standards and best practices and automate the validation of database code. It is an object-based rules engine, as opposed to a simple regular-expression engine, so functional rules (such as limiting the total number of indices per table) can be expressed. This eliminates the manual effort and provides fast feedback on database changes submitted to source code control. Datical's Change Management Simulator builds on this by constructing an in-memory model of the target database, applying the proposed changes to the model, and verifying that the final state of the model meets expectations.
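The sketch below gives a flavour of what an object-based rule offers over a regular expression, using a hypothetical schema model rather than Datical's actual rules engine: the rule inspects modelled tables, so counting indices is trivial, and the simulator applies proposed changes to the in-memory model before any real deployment.

    from dataclasses import dataclass, field

    @dataclass
    class Table:
        name: str
        indexes: list = field(default_factory=list)

    @dataclass
    class DatabaseModel:
        tables: dict = field(default_factory=dict)

    def max_indexes_per_table(model: DatabaseModel, limit: int = 5) -> list:
        """A functional rule over the modelled schema; counting indices is
        simple here but near-impossible with a regex over raw SQL text."""
        return [f"{t.name} has {len(t.indexes)} indexes (limit {limit})"
                for t in model.tables.values() if len(t.indexes) > limit]

    def simulate(model: DatabaseModel, changes) -> list:
        """Apply proposed changes to the in-memory model, then run the rules."""
        for change in changes:
            change(model)                     # each change mutates the model
        return max_indexes_per_table(model)

    model = DatabaseModel(tables={
        "customer": Table("customer", ["ix1", "ix2", "ix3", "ix4", "ix5"]),
    })
    violations = simulate(model, [lambda m: m.tables["customer"].indexes.append("ix6")])
    print(violations)   # the sixth index trips the rule before any real deployment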

Datical 5 is currently available in beta and will be generally available in Q2 2018.
