Planning for Implementing SAP Solutions on Azure (part 4 of 5)
Ultra SSD
- Size of a disk ranging from 4 GiB to 65,536 GiB
- IOPS from 100 to 160,000 (the maximum depends on the VM SKU)
- Storage throughput from 300 MB/sec to 2,000 MB/sec
You can attach both Ultra SSD and Premium Storage disks to the same Azure VM. This lets you reserve Ultra SSD for the performance-critical /hana/data and /hana/log volumes and implement the other volumes with Premium Storage.
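When provisioning an Ultra SSD, capacity, IOPS, and throughput are configured independently within the ranges listed above. A minimal sketch of a pre-deployment sanity check against those documented ranges (the function name is illustrative; the additional per-VM-SKU maximums are not modeled here):

```python
def validate_ultra_disk(size_gib: int, iops: int, mbps: int) -> list[str]:
    """Check a proposed Ultra SSD configuration against the documented
    service ranges. Per-VM-SKU maximums apply on top of these ranges
    and are not checked here."""
    problems = []
    if not 4 <= size_gib <= 65_536:
        problems.append(f"size {size_gib} GiB outside 4-65,536 GiB")
    if not 100 <= iops <= 160_000:
        problems.append(f"{iops} IOPS outside 100-160,000")
    if not 300 <= mbps <= 2_000:
        problems.append(f"{mbps} MB/sec outside 300-2,000 MB/sec")
    return problems

# A 1 TiB disk with 20,000 IOPS and 1,000 MB/sec passes all three checks.
print(validate_ultra_disk(1024, 20_000, 1_000))  # → []
```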
SAP HANA Dynamic Tiering 2.0
- DT 2.0 must be installed on a dedicated Azure VM and should not run on the same VM where SAP HANA runs.
- The SAP HANA and DT 2.0 VMs must be deployed with Azure Accelerated Networking enabled.
- SAP HANA and DT 2.0 must be deployed within the same Azure VNet.
- Storage type for DT 2.0 VM must be Azure Premium Storage.
- Multiple Azure disks must be attached to the DT 2.0 VM.
- It's necessary to create a striped volume across the Azure disks.
Because DT 2.0 must be installed on a dedicated VM, a network throughput of at least 10 Gbps is required between the DT 2.0 VM and the SAP HANA VM. Hence, all VMs must be placed within the same Azure VNet and Azure Accelerated Networking must be enabled.
For SAP HANA scale-out, the /hana/shared directory must be shared between the SAP HANA VM and the DT 2.0 VM. It is recommended to use the same architecture as for SAP HANA scale-out, which relies on dedicated VMs acting as a highly available NFS server. The customer can decide whether high availability is mandatory or whether a dedicated VM with enough storage capacity to act as a backup server is sufficient.
SQL Server
If you place tempdb data files and the log file into a folder on the D: drive, make sure that the folder still exists after a VM restart. If the SQL Server service runs in the context of a user that is not a Windows Administrator, you must also grant that user the "Perform volume maintenance tasks" right.
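Because the D: drive on an Azure VM is ephemeral, a folder created there does not survive a redeploy. A minimal sketch of a script that could run at VM startup (for example via a scheduled task) to recreate the folder before the SQL Server service starts; the path D:\SQLTEMP is a hypothetical example, not from the source:

```python
import os

def ensure_tempdb_folder(path: str = r"D:\SQLTEMP") -> str:
    """Create the tempdb folder if it is missing.

    Idempotent: does nothing if the folder already exists, so it is
    safe to run on every boot. The default path is an example only.
    """
    os.makedirs(path, exist_ok=True)
    return path
```

Granting the SQL Server service account the "Perform volume maintenance tasks" right still has to be done separately (for example via the Local Security Policy), since that is a Windows privilege, not a filesystem property.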
Database Compression
- The amount of data to be uploaded is lower.
- The compression itself completes faster, assuming you can use stronger hardware with more CPUs, higher I/O bandwidth, or lower I/O latency on-premises.
- Smaller database sizes might lead to lower costs for disk allocation.
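The upload-time benefit is simple bandwidth arithmetic. A rough sketch (the function name and the example figures are illustrative, not from the source):

```python
def upload_hours(db_size_gb: float, bandwidth_mbps: float,
                 compression_ratio: float = 1.0) -> float:
    """Rough upload time in hours for a database of db_size_gb gigabytes.

    compression_ratio is uncompressed/compressed size, e.g. 4.0 means
    the compressed payload is a quarter of the original size.
    bandwidth_mbps is the usable bandwidth in megabits per second.
    """
    payload_bits = db_size_gb / compression_ratio * 8 * 1000 ** 3
    seconds = payload_bits / (bandwidth_mbps * 1000 ** 2)
    return seconds / 3600

# Example: a 2 TB database over a 500 Mbps link takes roughly 8.9 hours
# uncompressed, but about 2.2 hours at a 4:1 compression ratio.
print(upload_hours(2000, 500))       # ≈ 8.9
print(upload_hours(2000, 500, 4.0))  # ≈ 2.2
```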
Storing Database Files Directly on Azure Blob Storage
- The storage account used should be in the same Azure region as the VM running SQL Server.
- Traffic against the storage blobs holding the SQL Server data and log files counts against the VM's network bandwidth rather than against the VM's storage I/O quota.
- By pushing file I/O through the network quota, you may leave the storage quota largely unused while consuming only part of the VM's overall network bandwidth.
- The IOPS and I/O throughput targets that Azure Premium Storage defines for the different disk sizes no longer apply, even if the blobs holding the SQL Server data and log files reside on Azure Premium Storage.
- Host-based caching as available for Azure Premium Storage disks is not available when placing SQL Server data files directly on Azure blobs.
- Azure Write Accelerator can't be used on M-series VMs to support sub-millisecond writes against the SQL Server transaction log files.
It is recommended that production systems avoid this configuration and instead place SQL Server data and log files on Azure Premium Storage VHDs rather than directly on Azure blobs.