
AWS Adds Native SQL Server Backups to Database-as-a-Service

by Richard Seroter on Aug 08, 2016

AWS released an update to its Relational Database Service (RDS) that lets users take advantage of SQL Server's native backup and restore functionality. AWS beat Microsoft Azure to market with this capability, which simplifies database migration and disaster recovery scenarios.

Until now, RDS (which in addition to SQL Server supports MySQL, PostgreSQL, and Oracle database engines) offered only storage-volume, or instance-level, backups. These AWS-specific snapshots back up all databases on the server and support point-in-time recovery. In a blog post about native backup support in RDS for SQL Server, Amazon's Jeff Barr explained how this feature makes RDS more attractive to SQL Server DBAs.

SQL Server native backups include all database objects: tables, indexes, stored procedures and triggers. These backups are commonly used to migrate databases between different SQL Server instances running on-premises or in the cloud. They can be used for data ingestion, disaster recovery, and so forth. The native backups also simplify the process of importing data and schemas from on-premises SQL Server instances, and will be easy for SQL Server DBAs to understand and use.

Native backups require three things to work: an Amazon S3 bucket to store the backup files, an AWS Identity and Access Management (IAM) role with access to that bucket, and the "SQLSERVER_BACKUP_RESTORE" option added to the RDS instance's configuration (called an option group) for SQL Server. The coupling with Amazon S3 gives users a durable off-box storage option and, according to AWS, makes coordination with on-premises databases possible.

Those backups can be restored to an on-premises copy of SQL Server or to another RDS-powered SQL Server instance. You can also copy backups of your on-premises databases to S3 and then restore them to an RDS SQL Server instance.
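In practice, a backup or restore is kicked off from a SQL connection using stored procedures that RDS provides in the msdb database. A minimal sketch (database name, bucket, and key are placeholders you would replace with your own):

```sql
-- Back up a single database to an S3 object (bucket and key are placeholders).
exec msdb.dbo.rds_backup_database
    @source_db_name = 'mydb',
    @s3_arn_to_backup_to = 'arn:aws:s3:::my-backup-bucket/mydb.bak',
    @overwrite_S3_backup_file = 1;

-- Later, restore that .bak file into a database on another RDS instance.
exec msdb.dbo.rds_restore_database
    @restore_db_name = 'mydb',
    @s3_arn_to_restore_from = 'arn:aws:s3:::my-backup-bucket/mydb.bak';
```

The IAM role attached to the option group is what authorizes these calls to read and write the S3 bucket; no AWS credentials appear in the SQL itself.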

RDS for SQL Server provides full database backups only, not incremental ones, and these backups can be encrypted through integration with the AWS Key Management Service (KMS).
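Encryption is requested per backup by passing a KMS key ARN to the same stored procedure. A sketch, with the key ARN and account ID as placeholders:

```sql
-- Encrypted backup: supply the KMS master key ARN (placeholder) with the S3 target.
exec msdb.dbo.rds_backup_database
    @source_db_name = 'mydb',
    @s3_arn_to_backup_to = 'arn:aws:s3:::my-backup-bucket/mydb-encrypted.bak',
    @kms_master_key_arn = 'arn:aws:kms:us-east-1:123456789012:key/your-key-id';
```

The same key must be supplied again when restoring an encrypted .bak file.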

Native backups use the standard SQL Server .bak file format. AWS says that .bak files are "heavily optimized" and represent the "fastest way to backup and restore databases." The post called out a series of usage scenarios for native backup and restore:

  • Easily migrate databases to Amazon RDS.
  • Easily move databases between Amazon RDS SQL Server DB instances.
  • Easily import and export data.
  • Easily migrate schemas, stored procedures, triggers and other database code.
  • Easily change your storage type or storage capacity.
  • Backup and restore single databases, instead of entire DB instances.
  • Create copies of databases for testing, training, and demonstrations.
  • Store and transfer backup files into and out of Amazon RDS through Amazon S3, giving you an added layer of protection for disaster recovery.
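Because backup and restore requests run asynchronously as tasks on the instance, progress can be checked with another RDS-provided procedure (database name is a placeholder):

```sql
-- Poll the status of backup/restore tasks for a given database.
exec msdb.dbo.rds_task_status @db_name = 'mydb';
```

This returns one row per task, including its lifecycle state and percent complete, which is how a DBA confirms a backup has finished before copying or restoring the file.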

For DBAs who can’t use native backup and restore, AWS prepared documentation that points out a handful of cloud-friendly ways to import and export SQL Server data.

One person who was excited about this new functionality was Brent Ozar, a popular SQL Server expert and founder of Brent Ozar Unlimited, a boutique firm that specializes in SQL Server optimization. In a blog post entitled "HOLY COW. Amazon RDS SQL Server Just Changed Everything," Ozar calls this a "really, really, really big deal."

Until now, the single biggest problem has been that both Azure SQL DB and Amazon RDS SQL Server don’t give you access to backup files. If you wanted to get your data out, you were hassling with things like import/export wizards, BCP, or sync apps.

Microsoft’s Azure SQL Database offering doesn’t yet offer this feature, and Ozar hopes that Amazon’s progress with SQL Server will force Microsoft to quickly catch up. In his blog post, Ozar spells out specific use cases for native backup and restore.

“I’m on-premises, and I want to use the cloud as DR.” Just keep taking your full backups as normal, but use a tool like Cloudberry Drive to automatically sync them to Amazon S3. When disaster strikes (or preferably, when you want to test and document this process long before disaster strikes), spin up an Amazon RDS SQL Server instance and restore your backups. Presto, you’re back in business. (I’m glossing over all the parts about setting up web and app servers, but that’s a developer/devops/sysadmin problem, right?)

“I have big databases, and I want to experiment with the cloud, but I can’t upload fast.” Ship your USB hard drive to Amazon with your backups, they’ll copy ’em into S3, and then you can spin up RDS instances. Got more data? Check out Amazon Snowball.

“I’m using the cloud, and I want cross-provider DR.” Run your primary SQL Server in Amazon RDS, schedule regular backups to Amazon S3, and then use a cross-provider file sync tool or roll your own service to push those backup files from Amazon S3 over to Azure or Google Drive. When disaster strikes at Amazon (or if you just want to bail out of Amazon and switch cloud providers), just restore that backup somewhere else. Same thing if you want to refresh a local dev or reporting server, too.

“I’m using the cloud, but I might outgrow Platform-as-a-Service.” PaaS makes management dramatically easier, but both Amazon and Azure set limits on how large your databases can get. Putting your database in Amazon RDS or Azure SQL DB is basically a bet that your data will grow more slowly than their database size limits. If you bet wrong – which is a great thing because your data skyrocketed, usually indicating that you’re in the money – you have an easy transition into IaaS (self-managed SQL Server in the cloud) rather than the painful hell of dealing with data exports.

There is no additional charge for this RDS feature, although users will incur the regular charges for Amazon S3 storage. AWS made the feature available for every edition of SQL Server supported on RDS, and it works with both single availability zone and multi availability zone deployment options. As for limitations, AWS doesn't let users back up a database larger than 1 TB or restore a database larger than 4 TB. Users also can't restore a backup to the instance that created it; backups can only be restored to a brand new database instance.
