Azure Ultra Disk Storage Configuration for SAP HANA

Ultra Disk

Microsoft will soon introduce a new Azure storage type called Azure Ultra Disk whose capabilities are no longer bound to the disk size. You can define these capabilities independently:
  • Disk size ranging from 4 GiB to 65,536 GiB
  • IOPS ranging from 100 to 160,000 (the maximum also depends on the VM type)
  • Storage throughput from 300 MB/sec to 2,000 MB/sec
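
These ranges lend themselves to a quick sanity check before you provision a disk. Below is a minimal Python sketch using only the limits quoted above; the additional per-VM IOPS and throughput caps mentioned above are not modeled, and the validate_ultra_disk helper is purely illustrative:

    # Minimal sanity check against the Ultra Disk ranges quoted above.
    ULTRA_DISK_LIMITS = {
        "size_gib": (4, 65_536),
        "iops": (100, 160_000),
        "throughput_mbps": (300, 2_000),
    }

    def validate_ultra_disk(size_gib, iops, throughput_mbps):
        """Return a list of violations; an empty list means the values are in range."""
        requested = {"size_gib": size_gib, "iops": iops, "throughput_mbps": throughput_mbps}
        problems = []
        for name, (low, high) in ULTRA_DISK_LIMITS.items():
            if not low <= requested[name] <= high:
                problems.append(f"{name}={requested[name]} outside {low}..{high}")
        return problems

    # Example: a 512 GiB disk with 20,000 IOPS and 500 MB/sec throughput.
    print(validate_ultra_disk(512, 20_000, 500) or "within Ultra Disk limits")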

Ultra Disk allows you to define a single disk that fulfills your size, IOPS, and disk throughput requirements. You can also run a mixed configuration, limiting Ultra Disk to the performance-critical /hana/data and /hana/log volumes while covering the other volumes with Azure Premium Storage; a sketch of such a layout follows below. Ultra Disk also offers better read latency than Premium Storage, which is advantageous when you want to reduce the HANA startup time and the subsequent load of the data into memory.
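As an illustration only, the mapping below shows one way such a mixed layout could look. The exact set of non-critical volumes is an assumption (here /hana/shared and /usr/sap); UltraSSD_LRS and Premium_LRS are the SKU names Azure uses for these disk types:

    # Illustrative mixed layout: Ultra Disk for the performance-critical
    # volumes, Premium Storage for the rest. /usr/sap is an assumed example
    # of an "other" volume; adjust to your own landscape.
    hana_volume_layout = {
        "/hana/data":   "UltraSSD_LRS",  # performance critical
        "/hana/log":    "UltraSSD_LRS",  # performance critical
        "/hana/shared": "Premium_LRS",
        "/usr/sap":     "Premium_LRS",
    }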

Production-recommended storage solution with a pure Ultra Disk configuration

The main advantage of Azure Ultra Disk is that the values for IOPS and throughput can be adapted without shutting down the VM or halting the workload applied to the system; a sketch of such an online adjustment follows below. However, storage snapshots are not available with Ultra Disk storage, which blocks the use of VM snapshots with Azure Backup services. In this configuration you can also keep the /hana/data and /hana/log volumes on the same disk.
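A minimal sketch of that online adjustment, assuming the azure-mgmt-compute and azure-identity Python packages; the subscription ID, resource group, and disk name are placeholders, and the disk_iops_read_write / disk_mbps_read_write field names are my reading of how the SDK maps the diskIOPSReadWrite / diskMBpsReadWrite REST properties:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Raise the performance targets of an attached Ultra Disk in place;
    # no VM shutdown or workload interruption is required.
    poller = client.disks.begin_update(
        "hana-rg",        # hypothetical resource group
        "hana-log-disk",  # hypothetical disk name
        {
            "disk_iops_read_write": 30_000,
            "disk_mbps_read_write": 1_000,
        },
    )
    updated = poller.result()
    print(updated.disk_iops_read_write, updated.disk_mbps_read_write)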

NFS v4.1 volumes on Azure NetApp Files

Azure NetApp Files offers native NFS shares that can be used for the /hana/shared, /hana/data, and /hana/log volumes. When basing these shares on Azure NetApp Files, the NFS v4.1 protocol is required; NFS v3 is not supported for the HANA-related volumes.

If you are considering Azure NetApp Files for SAP NetWeaver and SAP HANA, you should also know the following:

  • The minimum capacity pool is 4 TiB

  • The minimum volume size is 100 GiB

  • Azure NetApp Files and VMs, where Azure NetApp Files volumes will be mounted, must be in the same Azure virtual network or in peered virtual networks in the same region.

  • The selected virtual network must have a subnet delegated to Azure NetApp Files.

  • The throughput of an Azure NetApp Files volume is a function of the volume quota and the service level, as documented in Azure NetApp Files. When sizing the HANA Azure NetApp volumes, make sure the resulting throughput meets the HANA system requirements (see the sizing sketch later in this article).

  • Azure NetApp Files provides an export policy that lets you control the allowed clients and the access type (e.g., read & write or read-only).

  • Azure NetApp Files isn't zone aware and can't yet be deployed in all Availability Zones in an Azure region, so be wary of the potential latency implications in some Azure regions.

  • Deploy the VMs in close proximity to the Azure NetApp Files storage; low latency is very important for SAP HANA workloads. Work with your Microsoft representative to ensure that the VMs and the Azure NetApp Files volumes are deployed in close proximity.

  • The user ID for sidadm and the group ID for sapsys on the VMs must match the configuration in Azure NetApp Files.

Sizing for HANA database on Azure NetApp Files

When you design the infrastructure for SAP HANA in Azure, you should also be aware of the following minimum storage requirements from SAP, which translate into minimum throughput characteristics:

  • Read/write on /hana/log of 250 MB/sec with 1 MB I/O sizes
  • Read activity of at least 400 MB/sec for /hana/data with 16 MB and 64 MB I/O sizes
  • Write activity of at least 250 MB/sec for /hana/data with 16 MB and 64 MB I/O sizes

The Azure NetApp Files throughput limits per 1 TiB of volume quota are:

  • Premium Storage tier: 64 MiB/s
  • Ultra Storage tier: 128 MiB/s
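
Combining SAP's minimum throughput figures with these per-TiB limits gives the smallest volume quota that satisfies each volume's requirement. The sketch below does this arithmetic; it treats the MB/sec requirements as MiB/sec, which slightly overstates the requirement and therefore errs on the safe side:

    import math

    # Throughput ceiling per TiB of volume quota for the two service levels above.
    ANF_MIBS_PER_TIB = {"Premium": 64, "Ultra": 128}

    def min_quota_tib(required_mibs, service_level):
        """Smallest whole-TiB quota whose throughput ceiling meets the requirement."""
        return math.ceil(required_mibs / ANF_MIBS_PER_TIB[service_level])

    # SAP's minimum throughput characteristics quoted above (MB/sec treated
    # as MiB/sec, a slightly conservative simplification).
    for volume, required in [("/hana/log", 250), ("/hana/data", 400)]:
        for level in ANF_MIBS_PER_TIB:
            print(f"{volume}: at least {min_quota_tib(required, level)} TiB quota on {level}")

For example, the 400 MB/sec read requirement for /hana/data needs roughly a 7 TiB quota on the Premium tier but only about 4 TiB on Ultra.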

Note: You can also resize Azure NetApp Files volumes dynamically, without unmounting the volumes, stopping the VMs, or stopping SAP HANA. This gives you the flexibility to match your application's expected and unforeseen throughput demands; a sketch follows below.
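
A minimal sketch of such an online resize, assuming the azure-mgmt-netapp and azure-identity Python packages; all resource names are placeholders, and usage_threshold as the byte-denominated volume quota field is my reading of the SDK's volume model:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.netapp import NetAppManagementClient

    client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Grow the volume quota to 6 TiB online; because throughput scales with
    # quota and service level, this also raises the volume's throughput ceiling.
    poller = client.volumes.begin_update(
        "hana-rg",           # hypothetical resource group
        "hana-anf-account",  # hypothetical NetApp account
        "hana-pool",         # hypothetical capacity pool
        "hana-data-vol",     # hypothetical volume
        {"usage_threshold": 6 * 1024**4},  # quota in bytes
    )
    print(poller.result().usage_threshold)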



