Planning for Implementing SAP Solutions on Azure (part 4 of 5)

 



To read part 1 please click here
To read part 2 please click here
To read part 3 please click here
To read part 5 please click here



Ultra SSD

Microsoft is introducing a new Azure storage type called Azure Ultra SSD. Its main differentiator from the other Azure storage types is that its disk capabilities are not bound to the disk size. You can define the following capabilities for an Ultra SSD disk independently:
  • Size of a disk ranging from 4 GiB to 65,536 GiB
  • IOPS range from 100 IOPS to 160K IOPS (maximum depends on the VM SKU)
  • Storage throughput from 300 MB/sec to 2,000 MB/sec 

You can attach both Ultra SSD and Premium Storage disks to the same Azure VM. This lets you restrict the use of Ultra SSD to the performance-critical /hana/data and /hana/log volumes while implementing the other volumes with Premium Storage.
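
As a rough illustration, the sketch below uses the Azure Python SDK (azure-mgmt-compute) to provision an Ultra SSD data disk whose IOPS and throughput are chosen independently of its size, for example for /hana/log. The subscription, resource group, names, and performance values are assumptions, and the exact property names can differ slightly between SDK versions; the VM the disk is attached to must also have Ultra SSD support enabled.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Illustrative values only - subscription, resource group, region, and zone are assumptions.
credential = DefaultAzureCredential()
compute_client = ComputeManagementClient(credential, "<subscription-id>")

# With Ultra SSD, size, IOPS, and MB/sec are configured independently of each other.
poller = compute_client.disks.begin_create_or_update(
    "rg-sap-hana",                      # hypothetical resource group
    "hana-log-ultra-disk",              # hypothetical disk name
    {
        "location": "westeurope",
        "zones": ["1"],                 # Ultra SSD disks are zonal
        "sku": {"name": "UltraSSD_LRS"},
        "creation_data": {"create_option": "Empty"},
        "disk_size_gb": 512,            # 4 GiB - 65,536 GiB
        "disk_iops_read_write": 20000,  # 100 - 160K IOPS (maximum depends on the VM SKU)
        "disk_m_bps_read_write": 1000,  # 300 - 2,000 MB/sec
    },
)
print(poller.result().id)
```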

SAP HANA Dynamic Tiering 2.0

SAP HANA Dynamic Tiering 2.0 (DT 2.0) provides the ability to offload less frequently accessed data from memory into extended storage. It isn't supported by SAP BW or S/4HANA and is mainly used for native HANA applications.
The following mandatory requirements must be met to ensure supportability of DT 2.0 on Azure VMs:
  • DT 2.0 must be installed on a dedicated Azure VM and must not run on the same VM where SAP HANA runs.
  • Both the SAP HANA VM and the DT 2.0 VM must be deployed with Azure Accelerated Networking enabled.
  • SAP HANA and DT 2.0 must be deployed within the same Azure VNet.
  • The storage type for the DT 2.0 VM must be Azure Premium Storage.
  • Multiple Azure disks must be attached to the DT 2.0 VM.
  • It's necessary to create a striped volume across the Azure disks.
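
As a sketch of the last two requirements, the snippet below stripes several attached Premium Storage data disks into a single logical volume with LVM. The device names, volume names, and stripe size are assumptions that depend on the VM; in practice this is usually done with shell commands or an automation tool rather than Python.

```python
import subprocess

# Hypothetical device names of the Premium Storage data disks attached to the DT 2.0 VM.
devices = ["/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf"]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["pvcreate", *devices])           # initialize the disks for LVM
run(["vgcreate", "vg_dt", *devices])  # one volume group spanning all disks
# Stripe across all four disks (-i 4) with a 256 KiB stripe size (-I 256).
run(["lvcreate", "-i", "4", "-I", "256", "-l", "100%FREE", "-n", "lv_dt", "vg_dt"])
run(["mkfs.xfs", "/dev/vg_dt/lv_dt"]) # file system for the DT 2.0 data volume
```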

Because DT 2.0 is installed on a dedicated VM, the network throughput between the DT 2.0 VM and the SAP HANA VM must be at least 10 Gbps. Hence, it is necessary to place all the VMs within the same Azure VNet and to enable Azure Accelerated Networking.
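
A minimal sketch of enabling Accelerated Networking on a NIC with the Azure Python SDK (azure-mgmt-network) could look as follows. The resource group and NIC names are assumptions, and the setting can only be applied to VM sizes that support it, typically while the VM is deallocated.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

# Hypothetical resource group and NIC of the DT 2.0 (or SAP HANA) VM.
nic = network_client.network_interfaces.get("rg-sap-hana", "dt20-vm-nic")
nic.enable_accelerated_networking = True

poller = network_client.network_interfaces.begin_create_or_update(
    "rg-sap-hana", "dt20-vm-nic", nic
)
print(poller.result().enable_accelerated_networking)
```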

As with SAP HANA scale-out, the /hana/shared directory must be shared between the SAP HANA VM and the DT 2.0 VM, and it is recommended to use the same architecture as for SAP HANA scale-out, which relies on dedicated VMs acting as a highly available NFS server. The customer can decide whether HA is mandatory or whether a dedicated VM with enough storage capacity to act as a backup server is sufficient.

SQL Server

For SQL Server deployments on all SAP-certified VM types (except A-Series VMs), the tempdb data and log files can be placed on the non-persisted D:\ drive. This placement also provides better I/O latency and throughput.

If you place the tempdb data files and log file into a folder on the D:\ drive, you must make sure that the folder still exists after a VM restart, because the D:\ drive is not persisted. In addition, if the SQL Server service runs in the user context of a non-administrator Windows user, you should assign that user the 'Perform volume maintenance tasks' right.
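
Because the D:\ drive is wiped when the VM is redeployed, a common pattern is a small startup task that recreates the tempdb folder before the SQL Server service starts. The folder path and service name below are assumptions; on Windows this is more often done with a PowerShell or scheduled-task script, shown here only as a Python sketch.

```python
import os
import subprocess

# Hypothetical tempdb location on the non-persisted D:\ drive.
TEMPDB_DIR = r"D:\SQLTEMP"

# Recreate the folder if the D:\ drive was re-initialized after a restart or redeployment.
os.makedirs(TEMPDB_DIR, exist_ok=True)

# Start SQL Server only after the tempdb folder exists again
# (the service should be set to manual start for this pattern to work).
subprocess.run(["net", "start", "MSSQLSERVER"], check=True)
```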

Database Compression

It is recommended to compress the database before uploading it to Azure, for the following reasons:
  • The amount of data to be uploaded is lower.
  • The compression itself runs faster, assuming that stronger hardware with more CPUs, higher I/O bandwidth, or lower I/O latency can be used on-premises.
  • Smaller database sizes might lead to less costs for disk allocation.
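
As a sketch of what applying compression before the upload could look like, the snippet below rebuilds the indexes of one table with page compression through pyodbc. In an SAP system this is normally driven through the SAP/DBA tooling; the connection string and table name here are purely illustrative.

```python
import pyodbc

# Hypothetical connection string and table - adjust to the source system.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=onprem-sql;DATABASE=PRD;Trusted_Connection=yes;"
)
conn.autocommit = True  # index rebuilds are not wrapped in an explicit transaction here

cur = conn.cursor()
# Rebuild all indexes of the table with PAGE compression to shrink the data volume.
cur.execute("ALTER INDEX ALL ON [prd].[BigTable] REBUILD WITH (DATA_COMPRESSION = PAGE);")
cur.close()
conn.close()
```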

Storing Database Files Directly on Azure Blob Storage

SQL Server 2014 and later can store database files directly on Azure Blob Storage, without the 'wrapper' of a VHD around them. This enables scenarios where you can overcome the IOPS limits that would otherwise be enforced by the limited number of disks that can be mounted to some smaller VM types. If you want to deploy an SAP SQL Server database this way instead of 'wrapping' it into VHDs, consider the following:
  • The storage account used should be in the same Azure region as the VM that SQL Server is running in.
  • Traffic against the storage blobs representing the SQL Server data and log files is accounted against the VM's network bandwidth for the specific VM type, rather than against the VM's storage I/O quota.
  • By pushing the file I/O through the network quota, you may largely strand the storage I/O quota while using only part of the VM's overall network bandwidth.
  • The IOPS and I/O throughput targets that Azure Premium Storage defines for the different disk sizes no longer apply, even if the blobs holding the SQL Server data and log files are created on Azure Premium Storage.
  • Host-based caching as available for Azure Premium Storage disks is not available when placing SQL Server data files directly on Azure blobs.
  • Azure Write Accelerator can't be used on M-series VMs to support sub-millisecond writes against the SQL Server transaction log files.  
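
For completeness, a minimal sketch of what such a deployment looks like at the T-SQL level is shown below, run here through pyodbc. The storage account, container URL, SAS token, server, and database names are purely illustrative assumptions.

```python
import pyodbc

# Hypothetical container URL for the blob-backed database files.
CONTAINER_URL = "https://mystorageacct.blob.core.windows.net/sqldata"

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlvm01;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # CREATE DATABASE cannot run inside a user transaction
)
cur = conn.cursor()

# The credential name must match the container URL; the secret is a SAS token for that container.
cur.execute(f"""
CREATE CREDENTIAL [{CONTAINER_URL}]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<sas-token>';
""")

# Data and log files are created directly as blobs instead of VHD-based disks.
cur.execute(f"""
CREATE DATABASE SAPDB
ON (NAME = SAPDB_data, FILENAME = '{CONTAINER_URL}/SAPDB_data.mdf')
LOG ON (NAME = SAPDB_log, FILENAME = '{CONTAINER_URL}/SAPDB_log.ldf');
""")

cur.close()
conn.close()
```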

For production systems, it is recommended to avoid this configuration and instead place the SQL Server data and log files on Azure Premium Storage disks rather than directly on Azure blobs.
