
Where traditional SQL Server best practices fall short

Author

Josh Dreyfuss

5 minutes
January 21, 2026

Issues with managing the cloud block storage underlying SQL Server keep popping up, even among experienced teams. Storage costs are spiraling, and traditional best practices weren't built for a modern cloud environment. The cloud makes it easy to add another disk, but cost and performance problems are rarely about having too few disks.

Fortunately, most SQL Server storage issues in clouds like Azure don't come from bad technology choices; they come from outdated practices and misalignment. In our upcoming webinar on January 29 at 9:30AM PT, we will explore the storage underlying SQL Server in Azure, where teams make suboptimal choices today, and how to move forward in an easier, more efficient way.

Ahead of the webinar, I want to share some of what we will be discussing about SQL Server best practices and how teams operate today.

SQL teams must balance three competing factors

SQL Server teams are juggling multiple factors at any given time and need to decide which to weigh most heavily in any given situation. Teams are constantly balancing:

  • Uptime (availability, failover safety, maintenance windows)

  • Performance (latency, throughput, tail behavior)

  • Cost / Rightsizing (avoiding overprovisioning and optimizing disks when needed)

Sometimes, these factors are at odds with each other. For example, from a storage perspective, uptime benefits from having more than enough disk space, so a disk never runs out of capacity and causes downtime. From a cost and rightsizing perspective, however, building in a lot of extra capacity is a significant drain on spend. And while expanding and adding new disks is fairly easy, shrinking disks is impossible to do natively in the cloud, so teams often end up stuck with overprovisioned volumes.

How teams work around Azure limitations for SQL Server today

In Azure, disks are purchasable in a 2x model: each available disk size is double the previous option (e.g., 128GB, 256GB, 512GB, 1024GB, and so on). This can lead to significant overprovisioning and cost for teams using one disk per VM. One extremely common workaround is to spin up multiple small disks and attach them to a single SQL Server, rather than spin up one large disk.
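To make the cost mechanics concrete, here is a minimal Python sketch of the doubling ladder described above. The tier sizes reflect the 2x model; the helper names and the 530 GB example are ours, and real pricing varies by disk type and region.

```python
# Illustrative only: rounds a capacity requirement up to the next
# tier in Azure's doubling size ladder and reports the
# overprovisioned remainder. Tier sizes follow the 2x model above.
AZURE_TIER_SIZES_GB = [128, 256, 512, 1024, 2048, 4096]

def next_tier(required_gb: int) -> int:
    """Smallest available tier that fits the requirement."""
    for size in AZURE_TIER_SIZES_GB:
        if size >= required_gb:
            return size
    raise ValueError(f"{required_gb} GB exceeds the largest tier modeled here")

def waste_report(required_gb: int) -> None:
    tier = next_tier(required_gb)
    unused = tier - required_gb
    print(f"need {required_gb} GB -> buy {tier} GB tier "
          f"({unused} GB / {unused / tier:.0%} unused)")

# A database that needs just over a tier boundary pays for almost
# double the capacity it actually uses:
waste_report(530)   # need 530 GB -> buy 1024 GB tier (494 GB / 48% unused)
```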


At first glance, this makes sense as a response to cloud pricing and traditional best practice guidance:

  • Smaller disks appear cheaper per GB

  • Increasing the column count (striping across disks, for example via Storage Spaces) promises aggregated IOPS and throughput

  • Aligns with traditional SAN-era thinking that “more disks = more performance”

  • Encouraged by historical SQL Server guidance

Operationally, it's sensible as well: it makes incremental growth easier to control, and it feels modular and flexible.

While functional, this approach leaves storage unoptimized and drives significant wasted spend on unused space.

The unintended consequences of multiple smaller disks

Using multiple small disks actually ends up hurting teams for several reasons.

Performance fragmentation

SQL Server does not distribute I/O evenly by default. A common misconception is that presenting multiple disks automatically results in balanced, striped I/O. In reality, even when total IOPS appears healthy, uneven I/O patterns can lead to higher tail latency.

When multiple disks are mounted to a single location for SQL Server data or log files, SQL Server does not inherently stripe I/O across them. To benefit from the combined performance of multiple disks in Azure, the storage layer must be explicitly configured (for example, with the correct column count). Without this, performance remains limited by individual disk capabilities rather than aggregate throughput.
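As a rough illustration of why healthy-looking aggregate IOPS can still underdeliver, consider the back-of-the-envelope Python sketch below. The per-disk IOPS figure and the 70/10/10/10 workload split are hypothetical placeholders, not measurements:

```python
# Illustrative only: with no striping, the busiest disk sets the pace.
# Per-disk limits vary by SKU; these numbers are placeholders.
PER_DISK_IOPS = 1100                    # e.g. four small premium disks
io_share = [0.70, 0.10, 0.10, 0.10]     # skewed placement, no striping

# Each disk can serve PER_DISK_IOPS, but with the workload mix above
# the first disk saturates long before the others contribute theirs.
max_workload = min(PER_DISK_IOPS / share for share in io_share)
print(f"nominal aggregate: {PER_DISK_IOPS * len(io_share):,.0f} IOPS")
print(f"achievable before the hot disk saturates: {max_workload:,.0f} IOPS")
# nominal aggregate: 4,400 IOPS
# achievable before the hot disk saturates: 1,571 IOPS
```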

VM-level bottlenecks make disk math irrelevant

Another factor to consider is that the VM itself is a bottleneck. There are caps on IOPS and throughput at the VM level that cannot be circumvented by adding more disks. Knowing where the true bottlenecks are keeps teams from spinning their wheels in the wrong places.
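A quick sanity check makes the point. In this sketch, the disk and VM figures are hypothetical placeholders (check the Azure documentation for the SKUs you actually run), but the min() is the part that matters:

```python
# Illustrative only: adding disks cannot push past the VM's own cap.
disk_iops = [5000, 5000, 5000, 5000]   # e.g. four P30-class disks
vm_uncached_iops_cap = 12_800          # cap for a mid-size VM SKU

effective = min(sum(disk_iops), vm_uncached_iops_cap)
print(f"disks promise {sum(disk_iops):,} IOPS, "
      f"VM delivers at most {effective:,}")
# disks promise 20,000 IOPS, VM delivers at most 12,800
```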

Operational complexity becomes the hidden tax

A final unintended consequence of using multiple smaller disks is an operational one. With more disks, there is simply more for the DBA to manually maintain and keep an eye on, which eats up engineering time and effort. It is also easy to lose track of whether each disk is actually attached and mounted or just sitting idle.

The path forward: autonomous SQL Server storage rightsizing

To set up SQL Server storage in a modern Azure environment, teams need to think differently and embrace new ways of operating. One such method is to leverage autoscaling technology to autonomously rightsize your SQL Server storage on a continuous basis, ensuring that you always have the right disk size, performance, and amount of storage for your data.
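For a sense of what continuous rightsizing means mechanically, here is a conceptual Python sketch of such a control loop. It is not the product we will demo in the webinar; get_volume_stats and resize_volume are hypothetical stand-ins for whatever telemetry and storage APIs a real implementation would use:

```python
# Conceptual sketch of a continuous rightsizing loop.
import time

TARGET_FREE = 0.20        # keep roughly 20% headroom
CHECK_INTERVAL_S = 300

def rightsize_forever(get_volume_stats, resize_volume):
    while True:
        used_gb, size_gb = get_volume_stats()
        free = 1 - used_gb / size_gb
        if free < TARGET_FREE:
            # expand before the disk fills; growth is the easy direction
            resize_volume(int(size_gb * 1.25))
        elif free > 2 * TARGET_FREE:
            # shrink back toward the target; this is the step native
            # cloud disks cannot do, and where an autoscaling layer helps
            resize_volume(int(used_gb / (1 - TARGET_FREE)) + 1)
        time.sleep(CHECK_INTERVAL_S)
```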

To learn more about autoscaling, where traditional SQL Server best practices fall short, and how modern teams are addressing these issues, check out our upcoming webinar.
