
Scheduled backup of Azure SQL Virtual Machine data

14 Oct 2013 · CPOL · 5 min read
Using the SQL Server Agent to back up databases in virtual machines to an Azure blob container.

Introduction

This article is a short "how to" that demonstrates backing up databases running inside a SQL Server 2012 Virtual Machine on the Azure platform to an Azure blob container.

Image 1

Background

For various reasons, including security and privacy, we have had to host some recent multi-tenant applications in separate MS SQL databases. The databases themselves are hosted in multiple SQL 2012 data-servers, running on multiple different Virtual Machines. While there are a number of good database backup providers on the market, none had a pricing model that suited our particular configuration, so we had to put something together ourselves. This article is the result; hopefully it can be of benefit to someone else!

The method detailed here is based on the "BACKUP TO URL" feature that is outlined here.

The setup

We have one core database that contains a table listing each separate client database; this acts as our starting point. We get a list of these databases, and back up each one in turn. You don't need this, of course; you can simply enumerate all the databases on the data-server if you wish. This is simply our approach. To do the actual backup, we use the enhanced "BACKUP" SQL command that allows the use of a URL as a target device.

The target information required for the remote backup "device" is as follows:

http[s]://ACCOUNTNAME.Blob.core.windows.net/<CONTAINER>/<FILENAME.bak> 

The relevant parts are:

(1) You need to enter your ACCOUNTNAME on Azure
(2) You give a container name (we called ours "SQLBackup")
(3) You give a filename (we created one dynamically)
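The three parts above can be seen assembled in a small Python sketch; the account, container, and file names here are placeholder examples, not real resources:

```python
def backup_url(account: str, container: str, filename: str) -> str:
    """Build the target URL for BACKUP TO URL from its three parts."""
    return f"https://{account}.blob.core.windows.net/{container}/{filename}"

# Example with placeholder names:
print(backup_url("myaccount", "sqlbackup", "MyDatabase.bak"))
# → https://myaccount.blob.core.windows.net/sqlbackup/MyDatabase.bak
```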

The full SQL is simple:

SQL
BACKUP DATABASE AdventureWorks2012 TO URL = 'https://ACCOUNTNAME.blob.core.windows.net/mycontainer/AdventureWorks2012_1.bak' WITH CREDENTIAL = 'mycredential', STATS = 5; 

Apart from the items (1..3) above, you are providing a "CREDENTIAL" to the command. The credential holds your ACCOUNTNAME on Azure plus your blob storage SHARED KEY.

To get both, go to your Azure dashboard, select your container storage, and click "manage keys" at the bottom of the screen.

Image 2

Image 3

Once you have the keys, you then need to get your blob container name – this should be the same as your storage account name but if not it can be found at the bottom of the blob dashboard:

Image 4


To create a credential called, for example, “MyCred”, you need to run this piece of SQL (note that the IDENTITY should be your Azure storage account name):

IF NOT EXISTS (SELECT * FROM sys.credentials WHERE credential_identity = 'MyCred')
    CREATE CREDENTIAL MyCred
    WITH IDENTITY = 'ACCOUNTNAME',        -- your Azure storage account name
         SECRET = '[YOUR SECRET KEY]';


The credential “MyCred” is now available for use.


To be able to run a test, we need to set up a folder in your blob container – we called ours “SQLBackup” – name yours whatever you wish.


We should have enough now to be able to run a test. Let's assume the following:

Database name = “MyDatabase”
Credential = “MyCred”
Storage Container = “MyAzureStorage”
Storage folder within the container = “SQLBackup”
Secret = “ABC123”


Our test command is therefore as follows:

BACKUP DATABASE MyDatabase TO URL = 'http://MyAzureStorage.blob.core.windows.net/sqlbackup/MyDatabase.bak' WITH CREDENTIAL = 'MyCred', COMPRESSION, STATS = 5;

If we run that, we should see a result something like this:


Processed 384 pages for database 'MyDatabase', file 'MyDatabase' on file 1.

100 percent processed.

Processed 2 pages for database 'MyDatabase', file 'MyDatabase_log' on file 1.

BACKUP DATABASE successfully processed 386 pages in 0.173 seconds (17.397 MB/sec).
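As a quick sanity check on those numbers: a SQL Server data page is 8 KB, so the reported page count and elapsed time line up with the quoted MB/sec figure (a sketch in Python; the log's own rounding differs very slightly):

```python
# 386 pages at 8 KB each, transferred in 0.173 seconds
pages, seconds = 386, 0.173
megabytes = pages * 8192 / (1024 * 1024)   # ~3.02 MB of data pages
mb_per_sec = megabytes / seconds
print(round(mb_per_sec, 2))                # ≈ 17.43, close to the reported 17.397
```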


We can confirm the data has been transferred correctly by looking in our Azure blob container:


Image 5

Image 6

Wrapping it all together

Ok, here’s the pain removal part… let’s put together a stored procedure that iterates through each database and backs it up. The logic is as follows:

(1) Create a temp table to store a list of databases (delete it first if it exists)

SQL
create procedure BackupToAzure as
begin
    -- drop the temp table if it already exists
    IF EXISTS (SELECT * FROM tempdb.dbo.sysobjects
               WHERE ID = OBJECT_ID('tempdb.dbo.#TablesToBackup'))
    BEGIN
        DROP TABLE #TablesToBackup
    END

    -- create the temp table
    CREATE TABLE #TablesToBackup (DBName nvarchar(100))


(2) Select the table data and insert it into the temp table. In our case, we are storing the database name along with the data-server name, separated by a colon, so we use the “CharIndex” function to split out the part we want first:

SQL
    -- populate the temp table with the client database names
    insert into #TablesToBackup
    select SUBSTRING(LinkName, CHARINDEX(':', LinkName) + 1, 100) as DBaseName
    from ClientDatabases
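The CHARINDEX/SUBSTRING split above can be illustrated in Python; the "server:database" link values here are made-up examples of the format stored in our ClientDatabases table:

```python
def database_name(link: str) -> str:
    """Return the part after the colon, mirroring
    SUBSTRING(LinkName, CHARINDEX(':', LinkName) + 1, 100):
    if there is no colon, the whole string comes back unchanged."""
    return link.split(":", 1)[-1]

print(database_name("SQLVM01:ClientA_DB"))   # → ClientA_DB
print(database_name("NoColonDB"))            # → NoColonDB
```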

(3) We then use a table cursor to iterate through the temp table, building an execution string as we go, made up of the URL, filename, etc.

SQL
    declare @C cursor
    set @C = cursor for
        select * from #TablesToBackup
    declare @F nvarchar(100)
    open @C
    while 0 = 0
    begin
        fetch next from @C into @F
        if not (@@FETCH_STATUS = 0)
            break

        DECLARE @pathName NVARCHAR(512)
        DECLARE @CMD NVARCHAR(300)
        DECLARE @URL NVARCHAR(300)
        DECLARE @Break NVARCHAR(5)
        DECLARE @FileDate NVARCHAR(20)

        SET QUOTED_IDENTIFIER OFF

        -- build the target URL: base path + database name + "_" + yyyymmdd + ".bak"
        Set @URL = 'https://MyAzureStorage.blob.core.windows.net/sqlbackup/'
        Set @Break = '_'
        Set @FileDate = Convert(varchar(8), GETDATE(), 112) + '.bak'
        Set @Cmd = @URL + @F + @Break + @FileDate

        BACKUP DATABASE @F
        TO URL = @CMD
        WITH FORMAT, CREDENTIAL = 'MyCred', COMPRESSION, STATS = 5

        SET QUOTED_IDENTIFIER ON
    end
    close @C
    deallocate @C

    DROP TABLE #TablesToBackup
end
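The file-naming logic in the loop above is worth spelling out: CONVERT style 112 formats the date as yyyymmdd, so each run produces one dated .bak per database. A Python sketch of the same construction (the storage URL is the placeholder from the examples above):

```python
from datetime import date

def backup_target(db_name: str, day: date,
                  base: str = "https://MyAzureStorage.blob.core.windows.net/sqlbackup/") -> str:
    """Mirror @URL + @F + '_' + CONVERT(varchar(8), GETDATE(), 112) + '.bak'."""
    return f"{base}{db_name}_{day.strftime('%Y%m%d')}.bak"

print(backup_target("MyDatabase", date(2013, 10, 14)))
# → https://MyAzureStorage.blob.core.windows.net/sqlbackup/MyDatabase_20131014.bak
```

One run per day therefore overwrites nothing from previous days, though a second run on the same day reuses the same blob name (the WITH FORMAT option overwrites it).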

OK, almost there. If we run the script, it creates the stored procedure. When we execute the stored procedure, it iterates successfully through each database and backs them up in turn.

Image 7


That’s great; the only thing is we don’t want to have to go in manually and run the stored procedure ourselves, so we create a scheduled task to carry it out for us. In MS Management Studio, select SQL Server Agent, right-click, and create a new job.


Image 8

Give the job a name..

Image 9

And add a new “Step”
Image 10

It is important to select the database that the stored procedure is located in, otherwise you may get an error. In the “Command” box, enter your “EXEC” command with the name of your stored procedure.

Finally, create a new SCHEDULE item and fill in the timing details you want…
Image 11

The final thing is to be able to view what’s going on. For this, we need access to the job history, which is accessed with a right-click on the job task.
Image 12


The history gives information on the health of the process

Image 13

The only thing remaining is to back up the backup! For the moment we are testing out Redgate’s solution to back up the *blob container* that contains all of our client data; I may report on that as we see long-term results.

As with all my articles, I ask that you please take a second to rate it so I can track what people are interested in!

Many thanks and happy coding.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)