Azure Pricing Demystified: Azure SQL

This is the inaugural entry in a series of explainers on Azure pricing, which is one of the most confusing and daunting aspects of considering a move to the cloud.

Azure SQL

This is your basic managed SQL Server in the cloud. You don’t have to deal with backups, performance tuning, maintenance, replication, or scaling. These things are all handled by Microsoft for a fee that is higher than running SQL Server in a VM yourself.

Be aware that Azure SQL does not have complete feature parity with full SQL Server, and while that gap is getting smaller all the time, some complex applications may have trouble porting.

How do I choose the tier I need?

There are a number of levers you can pull in selecting a pricing tier to meet your needs:

  • Throughput
  • Database size
  • Number of databases
  • Backup retention time

By considering these details you can make an informed decision on what tier you need, ranging from an economical $5 per month Basic database to the absolutely insane $16k per month Premium P15.

Throughput

The main driver in picking a tier is the required throughput. Pricing tiers are broken down according to a made-up metric called a DTU, or Database Transaction Unit.

You can think of a DTU as “one query per second,” where the query is roughly the size and shape of a typical web application query.

“Aha!” you might think. “I’ll just batch my updates into one enormous statement that does 1000 inserts in one query!”

Nope! Remember, it’s equivalent to a “typical” web application query, and it actually captures a blend of the IO, RAM, and CPU used during processing. So while that giant query might be a single transaction, it uses the IO of 1,000 queries and would bump up against the tier’s maximum throughput limits. Depending on the tier, that query could take multiple seconds to complete.
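
To make that concrete, here’s a sketch of that kind of batched statement, against a made-up table:

    -- Hypothetical table, purely for illustration.
    CREATE TABLE dbo.Orders (
        Id    INT IDENTITY PRIMARY KEY,
        Total MONEY NOT NULL
    );

    -- One statement that inserts 1,000 rows. It counts as a single query,
    -- but it performs roughly the IO of 1,000 individual inserts, and that
    -- IO is what gets measured against the DTU ceiling.
    INSERT INTO dbo.Orders (Total)
    SELECT CAST(9.99 AS MONEY)
    FROM (SELECT TOP (1000) 1 AS n
          FROM sys.all_objects AS a
          CROSS JOIN sys.all_objects AS b) AS nums;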

So in general, you’ll want to consider how many queries per second your system needs to support to meet user requirements. But if your application’s queries are heavier than a “typical” web application query, you’ll need proportionally more DTUs.

You can read all about DTUs in much more confusing detail here: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-what-is-a-dtu
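
If you’d rather measure than guess, Azure SQL exposes a DMV, sys.dm_db_resource_stats, that reports the current database’s resource consumption in 15-second slices (retained for roughly an hour). A quick way to eyeball recent utilization:

    -- Recent resource use for the current database, sampled every
    -- 15 seconds and retained for about an hour.
    SELECT TOP (20)
           end_time,
           avg_cpu_percent,
           avg_data_io_percent,
           avg_log_write_percent
    FROM sys.dm_db_resource_stats
    ORDER BY end_time DESC;

If any of those percentages are regularly pegged near 100, you’re bumping against your tier’s limits and queries are queuing behind them.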

Database size

Next up, consider database size. This is the physical size of the database on disk.

I’d expect the vast majority of real-world applications will get by just fine at the Standard level, which gives you up to 250GB. If you need more, you’ll need to be on Premium.
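
If you’re not sure where a database falls, you can check its current size on disk from inside the database itself. reserved_page_count counts 8KB pages, so the arithmetic below converts to GB:

    -- Approximate size on disk of the current database, in GB.
    SELECT SUM(reserved_page_count) * 8.0 / 1024 / 1024 AS size_gb
    FROM sys.dm_db_partition_stats;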

Number of Databases

OK, so my system has 12 databases, and they have different throughput needs at different times. Any one of them might spike up to 100 DTUs, but rarely at the same time. Is there a way to share DTUs across a bunch of databases?

Great question! And yes! It sounds like you’d be interested in an elastic database pool.

In the elastic pool offering, you choose a tier by considering the maximum DTUs a single database will require, as well as the maximum DTUs the entire pool may need. Throughput and disk usage are then shared across the pool, with each individual database capped at a per-database maximum.

The number of databases allowed per pool is also a factor, but the limits are so generous that they only matter if you are running a SaaS product where each of your customers has their own physical database. Even the Basic plan in the elastic tier gives you 100 databases per pool, which is more than enough for most common scenarios.

Whoa, this is weird: the Standard 50 tier has 50 DTUs shared across the pool, but allows 100 DTUs per database. What’s up with that? How can a single database use more than the total allowed in the pool?

Another great question. Thanks for paying attention!

My understanding is that the shared DTU figure for the pool is an average of utilization across the databases, while the max DTU per database is a hard cap at the database level. So, for example, a single database in your pool could spike up to 100 DTUs for a few moments, as long as the average usage across all the databases for that billing hour stays under 50.
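
If you want to sanity-check how your pool is actually behaving, the logical server’s master database exposes a catalog view with pool-level utilization history. A minimal sketch (the pool name here is a placeholder):

    -- Run against the logical server's master database.
    -- 'my-pool' is a placeholder pool name.
    SELECT TOP (20)
           end_time,
           elastic_pool_name,
           avg_cpu_percent,
           avg_data_io_percent,
           avg_log_write_percent,
           avg_storage_percent
    FROM sys.elastic_pool_resource_stats
    WHERE elastic_pool_name = 'my-pool'
    ORDER BY end_time DESC;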

Backup Retention Time

Another thing to consider is the length of time you want backups to persist.

Each level offers Point-in-Time Recovery down to the millisecond, but how far back you can recover depends on whether you’re in Basic, Standard, or Premium.

  • Basic: 7 days
  • Standard: 14 days
  • Premium: 35 days

Choosing a service tier

Here are some questions you can ask yourself to guide you toward the right tier:

Do I have more than 4-5 databases?

An elastic database pool may be a good choice for you.

How big are my databases?

For the single database offering, size on disk immediately buckets you into Basic, Standard, or Premium.

Less than 2GB? You can use Basic. Less than 250GB? You can use Standard. Less than 500GB? You can use Premium, up to P6. Less than 1TB? You’ll need at least a P11, at about $7k/mo.

Do I care about point in time recovery more than a week old?

Likewise, this can drive your decision between the Basic, Standard, and Premium tiers.

How much throughput do I need?

Once you’re bucketed into Basic, Standard, or Premium, you should consider how active your database will be. The Basic tier’s 5 DTUs are fine for demo applications and prototypes, but will struggle with anything more than a development load.

Chatty applications like content management systems are going to want closer to an S3. Batch job processing can also require additional horsepower to complete in a reasonable time.

Once you get to about the middle of the Premium tier (and running bills in the multiple thousands for a single database), I start to wonder if you’re not better off running SQL Server in a VM. You’ll have to manage operating system updates, software updates, backups, and tuning yourself, but you’re obviously deeply invested in SQL Server technology and it would be prudent to have staff skilled in those areas.

I’m a cheapskate, what else can I do?

There are some other approaches you can take to save on hosting costs.

One I’ve written about before is to bundle all your applications into one database using SQL Server schemas.

So, for example, I run multiple demo and hobby projects that all use the same database, but each works under a schema unique to that application. Instead of just using dbo, I have podcasts, gastracker, efh, and other acronyms.
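
A minimal sketch of that layout, using the schema names above (the table definitions are made up for illustration):

    -- One schema per application, all in a single database.
    CREATE SCHEMA podcasts;
    GO
    CREATE SCHEMA gastracker;
    GO

    -- Each app's tables live under its own schema instead of dbo.
    CREATE TABLE podcasts.Episodes (
        Id    INT IDENTITY PRIMARY KEY,
        Title NVARCHAR(200) NOT NULL
    );

    CREATE TABLE gastracker.FillUps (
        Id      INT IDENTITY PRIMARY KEY,
        Gallons DECIMAL(6, 3) NOT NULL
    );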

Another trick you could try is automated scaling.

If your database is mostly used for occasional reporting but has a huge spike in throughput overnight when batch jobs run against it, you could use the Azure command line tools to automatically scale it up and down on a schedule. So during the day it hangs out at a little S0, but at 11PM you have it scale up to S3, and then back down when the jobs finish.
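
If you’d rather drive it from SQL itself, the tier change can also be issued as plain T-SQL against the logical server’s master database. A minimal sketch, with a made-up database name (the operation is asynchronous, so the database stays online while the change completes):

    -- Run against the logical server's master database.
    -- [myappdb] is a hypothetical database name.

    -- 11 PM: scale up before the batch jobs start.
    ALTER DATABASE [myappdb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');

    -- Later: scale back down once the jobs finish.
    ALTER DATABASE [myappdb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S0');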

You only pay S3 rates for the hours it was scaled up to S3. For some workloads this makes a lot of sense.

Conclusion

I hope this helps you understand how Azure prices database tiers so you can make a good decision. I know it’s pretty confusing, and it’s annoying to have to figure out all these concepts just to pick a tier.

I really like Azure SQL for most workloads, and would prefer it over running SQL Server in a VM whenever possible. It’s nice to focus on your business and your application instead of ancillary topics like operating system patches.

Good luck!