
SQL Server Source Control and Deployment with Visual Studio


This article introduces the SQL Server Data Tools (SSDT) feature of Visual Studio 2013, which is used to manage databases using a source-code-first methodology. Generally speaking, all modifications to tables, views, and other objects are made entirely through Visual Studio as source code. The source code is then compared to the live database and a deployment script is generated.

Prerequisites

  • SQL Server 2005 or later
  • Visual Studio 2013

If you are using SQL Server 2014, make sure you have applied the SSDT updates. These can be found in Visual Studio’s Extensions and Updates dialog.

Reverse Engineering a Database

SQL Server Data Tools can reverse engineer an existing database into an empty project using the Import command. Note that the project must be completely empty; once you start adding items, the Import command is no longer available for databases.

To begin, create a new SQL Server Database project. Then right-click on the project in the Solution Explorer and choose Import > Database.

Database projects do not enforce any particular folder structure. Unlike C#, which uses folders as the default namespace, SSDT will happily allow you to place an object for one schema in a folder intended for another. That said, the import tool can recommend some folder structures as a starting point. The options are:

  • None
  • Schema
  • Object Type
  • Schema/Object Type

Even for small databases I recommend using the Schema/Object Type layout. If you use Schema alone, you will find yourself constantly opening files just to see what type of object they contain. And if you use Object Type alone, developers will be encouraged to drop everything into the dbo schema.

Database Settings

Common database settings are found on the Project Settings tab. If you don’t find a setting you need, you can add it using normal SQL; you can see an example of this under the Storage heading below.

Debug Options

Before we leave the project settings window, you will want to set up your debug database. This is the database that will be automatically deployed to every time the application is run. If you specify a database name that doesn’t exist, the database will automatically be created the first time the project is run.

Generally speaking, you will want to use the “Drop objects in target but not in project” option. Without it, your debug database will tend to accumulate objects that you created and later discarded during development.

There is one final step for setting up the database for automatic deployment: open the properties for the Solution and indicate that the database project is a dependency of the startup application.

Storage

If you need additional or specialized filegroups you can use the built-in object templates. These will create scripts such as this:

ALTER DATABASE [$(DatabaseName)]
ADD FILEGROUP [FileGroup1]

Note the use of the SQLCMD variable [$(DatabaseName)]. This comes in handy when you want to deploy multiple copies of the same database project to one server.
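For example, you could add a file to the new filegroup with a script along these lines. This is a sketch, not a definitive pattern: the logical and physical file names are illustrative, and it assumes the DefaultDataPath variable that the publish engine supplies.

ALTER DATABASE [$(DatabaseName)]
ADD FILE
(
    NAME = [FileGroup1_Data],  -- illustrative logical file name
    FILENAME = '$(DefaultDataPath)$(DatabaseName)_FileGroup1.ndf'
)
TO FILEGROUP [FileGroup1]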

Security

A common mistake when setting up a new database is to grant everyone full access from the beginning, with the assumption that you will lock it down later. Security settings can be surprisingly subtle, especially when using advanced features, so it is preferable to create user accounts with no rights from the beginning, then grant them access to specific objects and features as needed.

Schema

The import process places schema definitions in the Security folder rather than in the folders named for the schemas. Again, folder locations are not important and you may move the schema definitions if you so choose.

Logins and Users

Though logins are technically a server object rather than a database object, you can include them in the project. If a login already exists at deployment time, the deployment will simply skip over it.

When you create a login, do not include “USE master”; the deployment tool will handle that for you.

By default, newly created logins and users will not have the ability to connect to the server. This means you will need to add a “GRANT CONNECT TO [userName]” line to your script.
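A minimal sketch of such a script, with an illustrative name and a placeholder password (in a real project you may prefer separate files for the login and the user):

CREATE LOGIN [AppUser] WITH PASSWORD = 'ChangeMe!1'  -- illustrative name, placeholder password
GO
CREATE USER [AppUser] FOR LOGIN [AppUser]
GO
GRANT CONNECT TO [AppUser]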

Other Security Objects

Other security objects, such as roles, asymmetric keys, and certificates, can be created in a like fashion.

Creating Tables

To create a new table, right-click on the folder you wish to place it in and then click Add > Table. Enter both the schema and table name separated by a dot. You can then start filling out the columns using either the designer or the SQL editor.
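For example, entering dbo.Employee creates a file you can flesh out like this (the columns shown are illustrative):

CREATE TABLE [dbo].[Employee]
(
    [EmployeeKey] INT NOT NULL IDENTITY PRIMARY KEY,
    [FirstName] NVARCHAR(50) NOT NULL,
    [LastName] NVARCHAR(50) NOT NULL
)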

Indexes

Indexes can be added as part of the table definition or as stand-alone commands in the same file. In theory you could also put each index in its own file, but that tends to lead to unnecessary clutter.

When adding indexes as separate statements you will need to use the batch separator GO between each statement.
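For example, stand-alone indexes at the end of the table’s file might look like this, continuing the illustrative Employee table from above (index names and columns are likewise illustrative):

GO
CREATE INDEX [IX_Employee_LastName] ON [dbo].[Employee] ([LastName])
GO
CREATE INDEX [IX_Employee_FirstName] ON [dbo].[Employee] ([FirstName])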

Documentation

In Visual Studio 2013, there is no excuse for not documenting your tables and columns. Table-level descriptions have to be set in the Properties pane. For columns you can also use the Properties pane, but most people will be happier showing Description as a column in the table designer.

This is not a new feature for SQL Server, which has long supported the ability to document tables and columns. What the designer does is replace the verbose sp_addextendedproperty syntax:

EXEC sys.sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'The employee key, which is called empId in some older tables.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE', @level1name = N'Employee',
    @level2type = N'COLUMN', @level2name = N'EmployeeKey'

Deployment

Automatic Deployment while Debugging

To enable automatic deployment, you first need to edit the Debug tab in the database project settings. Usually you will want to change the connection string. Automatically recreating the database isn’t a bad idea if it is small and you have good data population scripts. Otherwise, you’ll want to tell it to drop objects not under source control; this prevents you from accidentally using objects that may not be available in your production environment.

Next you’ll need to mark the database project as a dependency of the startup project. You should do this in the Solution properties dialog, not as a project reference.

Publish

Right-clicking on a database project will give you access to the publication dialog. After entering the connection string for your target database, you’ll want to check the Advanced tab for applicable settings. For example, for a production deployment you’ll probably want to trigger a backup, while for a development or staging server it is important to drop objects that are not in the project. Read these settings carefully; there is a lot to consider.


The next thing you should do is save the publish profile. This creates a .publish.xml file that you will use on a regular basis. To load it, simply double-click on it in the Solution Explorer pane.

If you are deploying to your local machine, you can go ahead and press Publish. If deploying to a shared environment, use the Generate Script option instead. Most of the time you won’t want to review the script itself; instead, press View Preview in the Data Tools Operations pane to see a summary of what the tool intends to do.

If you are happy with the pending operations, re-run the publish profile, this time choosing the Publish option.

Modifying Deployment Scripts

There will be times when you need to manually alter the publication script. Usually this happens when making changes to a table that may result in data loss, for example making a column non-nullable or changing its data type.

You can’t change deployment scripts directly in Visual Studio. Instead, you’ll need to copy them into SQL Server Management Studio. Make sure you enable SQLCMD Mode from the Query menu, as the script relies on SQLCMD settings such as the database name.

Another reason to modify a deployment script is to break it into smaller pieces. Say, for example, you need to add several indexes to a large table in addition to some other changes. Since building the indexes may be very time consuming, you may decide to delay the index changes until after hours but still run the rest of the script.

Refactoring Logs

SQL Server Data Tools maintains a refactoring log that reduces the need to manually edit deployment scripts. Unfortunately, this feature is easy to accidentally circumvent. If you rename a column from the designer window, the change is recorded in the log. But if you instead edit the raw SQL that defines the table, the log isn’t updated.

You won’t notice the problem right away, but when you go to deploy the database it will want to drop and re-add the column instead of performing the rename you intended. If this happens, you’ll need to either roll back your changes and redo them “the right way”, meaning through the table designer, or manually edit the deployment script.

Creating Views, Functions, and Stored Procedures

If your application is ORM-based you can probably skip this section.

Basics

As with tables, there are no rules about where you put views, functions, and stored procedures. When using the Schema/Object Type folder structure, the default is to have one folder for each of those three categories. Another option is to follow the SQL Server Management Studio pattern and break them down into fine-grained subfolders.

There are two ways to add new items to the project. You can start with an item template, simply filling in the blanks such as the parameter list or view body.

Alternately, you can first create the object in SQL Server Management Studio. I’ll often do this when I’m not sure what I want and need to edit it in a tight loop. Once I’m confident in what I have, I’ll save it as a SQL file and then import it into my project.

Adding Security

Assuming the database is properly locked down, you’ll need to grant permissions on various objects to users. While there are numerous ways to do this, the approach I recommend is to put the GRANT statements in the same file as the view, function, or stored procedure they apply to. This allows you to see at a glance whether the users you think can execute a procedure actually can.
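For example, a stored procedure file might end with its grants. This sketch reuses the illustrative Employee table and AppUser account from earlier; the procedure name is likewise illustrative:

CREATE PROCEDURE [dbo].[GetEmployee]
    @EmployeeKey INT
AS
    SELECT [EmployeeKey], [FirstName], [LastName]
    FROM [dbo].[Employee]
    WHERE [EmployeeKey] = @EmployeeKey
GO
GRANT EXECUTE ON [dbo].[GetEmployee] TO [AppUser]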

Pre- and Post-Deployment Scripts

SQL Server Data Tools allows you to create a single pre-deployment script and a single post-deployment script. These are not one-time migration scripts; they will run each and every time a deployment is performed.

By convention these scripts are named Script.PreDeployment.sql and Script.PostDeployment.sql. They are not true SQL scripts, but rather a variant known as SQLCMD, which is what allows them to reference other SQL scripts.

This is important because it is really easy to clutter your pre- and post-deployment scripts. To avoid that clutter, I recommend treating each script as a table of contents, with each set of operations stored in a separate script file. Here is a typical example of a pre-deployment script:

PRINT N'Enabling CLR';
EXEC sp_configure 'clr enabled' , '1';
RECONFIGURE;
:r ".\OneTimeScripts\Migrate records to not use customer type 6.sql"

And the matching post-deployment script:

:r .\Data\UserType.sql
:r .\Data\CustomerType.sql
:r .\Data\AccountLevel.sql

Timing

Pre-deployment scripts are run after the database is created, but before any objects are added to it. This allows you to modify server- and database-wide settings that aren’t exposed directly by SSDT.

As the name implies, post-deployment happens last, after all of the database objects have been updated.

Writing One-Time Scripts

Writing one-time scripts is somewhat difficult. First of all, SSDT has no concept of a one-time script. Since it doesn’t know whether or not a script has been run before, the best you can do is surround your scripts with if-statements that make the determination, as shown below.
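For example, the migration script referenced in the pre-deployment example above might guard itself like this. The table, column, and replacement value are all illustrative assumptions:

IF EXISTS (SELECT * FROM dbo.Customer WHERE CustomerType = 6)
BEGIN
    PRINT N'Migrating records away from customer type 6';
    -- Illustrative: reassigning the obsolete type makes the script a no-op on later runs
    UPDATE dbo.Customer SET CustomerType = 1 WHERE CustomerType = 6;
END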

The other method for dealing with one-time scripts is to manually modify the deployment script. This gives you a lot of flexibility but isn’t repeatable; you’ll have to reapply the same changes for each environment.

Patterns for Populating Tables

Another feature lacking in SSDT is the ability to populate tables. Ideally, lookup tables would be stored in source control just like other database objects. But since that isn’t offered, we have to make do with alternative approaches.

One such approach is the MERGE statement. First you declare a temporary table or table variable that mirrors your destination table, then populate it with the data you want the actual table to contain. Finally, you use the MERGE statement to perform the necessary inserts, updates, and deletes.
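A minimal sketch of this pattern for the CustomerType lookup table referenced in the post-deployment example above, shown here with a table variable (the keys and values are illustrative):

DECLARE @CustomerType TABLE
(
    CustomerTypeKey INT NOT NULL PRIMARY KEY,
    CustomerTypeName VARCHAR(50) NOT NULL
);

INSERT INTO @CustomerType
VALUES (1, 'Retail'),
       (2, 'Wholesale');

MERGE INTO dbo.CustomerType AS target
USING @CustomerType AS source
    ON target.CustomerTypeKey = source.CustomerTypeKey
WHEN MATCHED THEN
    UPDATE SET CustomerTypeName = source.CustomerTypeName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerTypeKey, CustomerTypeName)
    VALUES (source.CustomerTypeKey, source.CustomerTypeName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;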

Another approach is to write an insert/update procedure and call it once for each row the destination table should contain. This allows for more complex logic, but doesn’t lend itself to deleting obsolete records.
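A sketch of that pattern, with the procedure stored as a normal project object and the calls placed in the post-deployment script (all names are illustrative):

CREATE PROCEDURE [dbo].[UpsertCustomerType]
    @CustomerTypeKey INT,
    @CustomerTypeName VARCHAR(50)
AS
IF EXISTS (SELECT * FROM dbo.CustomerType WHERE CustomerTypeKey = @CustomerTypeKey)
    UPDATE dbo.CustomerType
    SET CustomerTypeName = @CustomerTypeName
    WHERE CustomerTypeKey = @CustomerTypeKey;
ELSE
    INSERT INTO dbo.CustomerType (CustomerTypeKey, CustomerTypeName)
    VALUES (@CustomerTypeKey, @CustomerTypeName);

The post-deployment script would then contain one call per row:

EXEC dbo.UpsertCustomerType 1, 'Retail';
EXEC dbo.UpsertCustomerType 2, 'Wholesale';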

Linked Databases

For large systems, using schemas as namespaces is often not enough. For various reasons, including deployment cycles, security, performance, and disaster recovery, objects are often distributed across multiple databases or even multiple servers.

As long as you plan accordingly, SSDT can handle this. The major limitation to watch out for is circular references. One SSDT-managed database can reference another, even on a different server, so long as you can establish a clear compilation order. In this sense it is no different than a .NET project.

A typical scenario is a new database project referencing a legacy database. To begin, script the old database and import it into an empty project. Note that you don’t have to manage the legacy database using SQL Server Data Tools; you just need the metadata it represents. If you want, you can import only the objects your new database depends on.

Next, you need to add a database reference. This is where you have to make an important decision as to whether the older database will reside on the same server or not. Changing your mind later is possible, but tedious and error prone.

The database variable, and the optional server variable, take the place of the actual names in your user-defined functions and stored procedures. Make sure you use the pattern [$(variable)]; otherwise you may get compiler errors. When you publish, you will be asked to fill these variables in with real values.
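For example, assuming a database variable named LegacyDb and illustrative object names, a view in the new database might reference the legacy database like this:

CREATE VIEW [dbo].[LegacyCustomers]
AS
SELECT CustomerKey, CustomerName
FROM [$(LegacyDb)].[dbo].[Customer]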

If you press the Load Values button, these variables will be filled in using the defaults defined at the project level. To save time in future deployments, you can save these values as part of the publish profile.

SQL CLR

Though frowned upon in the past, there are times when using C# code in SQL Server makes a lot of sense. There are scenarios where you will see better performance from JIT-compiled .NET code than from interpreted T-SQL. And for some data types, such as geometry and geography, you don’t actually have a choice.

A major problem with SQL CLR is deployment. Normally, deploying a .NET assembly to SQL Server involves manually copying the DLLs somewhere and then manually registering them one at a time. SSDT’s publishing tool eliminates this issue by encoding the assembly as a SQL statement that is inlined with the rest of the deployment script.

External Projects and Assemblies

By convention, assemblies added to an SSDT project are first placed in a project folder called “Assemblies”. This isn’t required, but it does make it easier to manage DLLs. If you follow this advice, do this step first.

In either case, you can then add a reference to the project or assembly using the normal .NET Add Reference dialog.

The final step is to expose the types and functions you want to use to SQL. The templates for this can be found in the SQL CLR folder of the Add Items dialog. This is no different from registering SQL CLR types and functions manually.
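The resulting registration is ordinary T-SQL. A sketch, where the assembly, class, and method names are illustrative assumptions:

CREATE FUNCTION [dbo].[RegexIsMatch]
(
    @input NVARCHAR(MAX),
    @pattern NVARCHAR(4000)
)
RETURNS BIT
AS EXTERNAL NAME [MySqlClrLibrary].[UserDefinedFunctions].[RegexIsMatch]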

Internal Projects

An internal project is a C# project that lives inside the database project. It is automatically built and deployed with the database, so you don’t have to worry about managing references and assemblies. You don’t get access to the CLR version of static code analysis, but otherwise it acts just like any other C# project.

A useful feature of internal projects is that you don’t need to explicitly register your functions with SQL. The compiler sees the SqlFunction attribute and automatically generates the correct SQL prototype in the project’s default schema.

Recompilation Issues

When using a CLR project, internal or external, recompilation can become an issue, especially if you are using a SQL CLR function as part of a computed column.

Essentially the problem is that Visual Studio will often recompile a C# project even if nothing changed. Since the new build has a new hash code, the publication tool thinks it is a new version and redeploys the whole assembly. In the aforementioned computed-column scenario, this requires recalculating every row in the affected tables.

If this becomes a problem, move all of your SQL CLR code into an external project. Then copy a compiled version of the project into the Assemblies folder as if it were a third-party library. Do note that there is a risk here that you will forget to update the compiled assembly when making changes.

Full Text Search

SQL Server Data Tools has only partial support for Full Text Search. It handles creating and managing full text catalogs well enough, and you can certainly add full text indexes to any table that needs them. The problem you can run into is that it only supports empty stop lists. Since the contents of a stop list are considered data, not schema, SSDT doesn’t know how to keep the list up to date.

The work-around is to manually populate the stop list using post-deployment scripts, much as you would any other lookup table. Once the list is created, the publication tool won’t alter it.
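A post-deployment sketch, assuming a stop list named ProductStopList that was created empty by SSDT (the word and the language LCID are illustrative):

IF NOT EXISTS (SELECT *
               FROM sys.fulltext_stopwords w
               INNER JOIN sys.fulltext_stoplists l ON l.stoplist_id = w.stoplist_id
               WHERE l.name = 'ProductStopList' AND w.stopword = N'widget')
    ALTER FULLTEXT STOPLIST [ProductStopList] ADD N'widget' LANGUAGE 1033;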

Version Numbers

SQL Server doesn’t have a strong concept of version numbers. In theory you can enter a version number that is compiled into a data-tier application (dacpac). In practice this doesn’t actually work: there isn’t a good way to interrogate the value, and you can only auto-increment dacpac versions using post-build scripts.

One option you may consider is creating an internal project with a single function called dbo.GetDatabaseVersion that reads the version number from the C# assembly. If you use wildcards in your assembly version, the number will be automatically incremented for you.

Another option is to create dbo.GetDatabaseVersion as a T-SQL function and give it a hard-coded value that you manually increment as needed.
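A sketch of that hard-coded variant (the version string is illustrative):

CREATE FUNCTION [dbo].[GetDatabaseVersion]()
RETURNS VARCHAR(20)
AS
BEGIN
    -- Manually increment this value as part of each release
    RETURN '1.015';
END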

Brett Gerhardi suggests that we instead use an extended property for the database version. For example:

EXEC sp_addextendedproperty
    @name = 'DbVersion',
    @value = '1.015',
    @level0type = NULL, @level0name = NULL,
    @level1type = NULL, @level1name = NULL,
    @level2type = NULL, @level2name = NULL

This would have to be manually updated unless you add a pre-build script.

Continuous Integration

In theory, you can automate database deployment using SqlPackage.exe; Anuj Chaudhary has a blog post titled SqlPackage.exe - Automating SSDT Deployment that walks you through the process. In practice, you will run into occasions where the changes are too complex for SSDT to handle without manual intervention.
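The basic shape of such an automated step is a single command that applies a dacpac using a saved publish profile; the file names here are illustrative:

SqlPackage.exe /Action:Publish /SourceFile:"MyDatabase.dacpac" /Profile:"MyDatabase.publish.xml"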

About the Author

Jonathan Allen has been writing news reports for InfoQ since 2006 and is currently the lead editor for the .NET queue. If you are interested in writing news or educational articles for InfoQ, please contact him at jonathan@infoq.com.
