Foundations of Azure for SAP Workloads

 



Azure for SAP Workloads

With Microsoft Azure, you can run SAP applications across development, test, and production scenarios while taking advantage of Azure's scalability, flexibility, and cost savings. Through the expanded partnership between Microsoft and SAP, you are fully supported on the platform. Azure is unique for SAP HANA because it can host scenarios that demand very large amounts of memory and CPU, including offerings that run on customer-dedicated bare-metal hardware.

SAP and Microsoft have a strong partnership and a long history of working together, to the mutual benefit of their customers. Microsoft continually updates the platform and provides new certification details to SAP, ensuring that Microsoft Azure is the best platform on which to run your SAP workloads.

Azure Virtual Machines (VMs)

Azure VMs are the primary Infrastructure as a Service (IaaS) compute offering in Azure. Compared with the other compute services, Azure VMs give you the greatest degree of control over the configuration of the virtual machine and its operating system. The VMs you provision come in specific sizes that are grouped into categories, including the following (a short sketch after the list shows one way to enumerate these sizes programmatically):
  • General purpose (including the B, Dsv3, Dv3, Dsv2, Av2, and DC sizes) providing balanced CPU-to-memory ratio, ideal for testing and development, small to medium databases, and low to medium traffic web servers. 

  • Compute optimized (including the Fsv2 size) providing high CPU-to-memory ratio, good for medium traffic web servers, network appliances, batch processes, and application servers.

  • Memory optimized (including the Esv3, Ev3, M, GS, G, DSv2, and Dv2 sizes) providing a high memory-to-CPU ratio, great for relational database servers, medium to large caches, and in-memory analytics.

  • Storage optimized (including the Lsv2, and Ls sizes) providing high disk throughput and IO, ideal for big data, SQL, NoSQL databases, data warehousing and large transactional databases.

  • GPU (including the NV, NVv2, NC, NCv2, NCv3, ND, and NDv2 sizes) providing specialized virtual machines targeted at heavy graphics rendering and video editing, as well as model training and inferencing (ND) with deep learning, available with single or multiple GPUs.

  • High performance compute (including the H size) providing the fastest and most powerful CPU virtual machines in Azure, with optional high-throughput network interfaces (RDMA).
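
As a hedged illustration (not an official procedure), the following sketch uses the Azure SDK for Python (the azure-identity and azure-mgmt-compute packages) to enumerate the VM sizes available in a region and filter for the memory-optimized M family often used for SAP HANA. The subscription ID and region are placeholders you would replace with your own values.

```python
# Sketch: list VM sizes in a region and pick out the memory-optimized M family.
# Assumes azure-identity and azure-mgmt-compute are installed and that
# DefaultAzureCredential can authenticate (for example via an Azure CLI login).
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
compute_client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

for size in compute_client.virtual_machine_sizes.list(location="westeurope"):
    if size.name.startswith("Standard_M"):  # M family: memory optimized
        print(f"{size.name}: {size.number_of_cores} vCPUs, "
              f"{size.memory_in_mb / 1024:.0f} GiB RAM, "
              f"max data disks: {size.max_data_disk_count}")
```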

There is a limit on the throughput and Input/Output Operations Per Second (IOPS) that an individual disk supports. With standard HDD storage, for example, you should expect roughly 60 MBps of throughput and about 500 IOPS per disk. If you need to increase per-volume performance beyond these limits, you can do so by creating multiple-disk (striped) volumes, as illustrated below.
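
As a rough, back-of-the-envelope illustration of that multiple-disk approach, the snippet below multiplies the per-disk caps by the number of disks in a striped volume. The per-disk figures are assumptions for the example, and the VM size's own disk limits may still be the effective ceiling.

```python
# Back-of-the-envelope estimate of a striped volume built from standard HDD
# data disks. Per-disk caps below are assumptions for illustration; the VM
# size's own disk throughput limits may cap the result before the disks do.
DISK_IOPS = 500   # assumed per-disk IOPS cap
DISK_MBPS = 60    # assumed per-disk throughput cap

def striped_volume_limits(disk_count):
    """Upper-bound (IOPS, MBps) for a volume striped across disk_count disks."""
    return disk_count * DISK_IOPS, disk_count * DISK_MBPS

iops, mbps = striped_volume_limits(4)
print(f"4-disk striped volume: up to ~{iops} IOPS and ~{mbps} MBps")
```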

Constrained vCPU capable VM sizes

Azure offers certain VM sizes that let you constrain the VM vCPU count to reduce the cost of software licensing while maintaining the same memory, storage, and I/O bandwidth. The vCPU count can be constrained to one half or one quarter of the original VM size, and these VM sizes carry a suffix that specifies the number of active vCPUs, which makes them easier to identify.

The licensing fees charged for SQL Server or Oracle are constrained to the new vCPU count, and other products are charged based on the new vCPU count. This results in a 50% to 75% increase in the ratio of the VM specs to active (billable) vCPUs. These VM sizes allow customer workloads to use the same memory, storage, and I/O bandwidth while optimizing their software licensing cost.
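
To make the naming convention concrete, here is a small, self-contained sketch that parses the active vCPU count out of a few constrained-vCPU size names (the listed names are examples, not a full catalogue) and reports the ratio of parent vCPUs to billable vCPUs.

```python
# Illustration of the constrained-vCPU naming convention: the number after the
# hyphen is the active (billable) vCPU count, e.g. Standard_M64-32ms exposes
# 32 billable vCPUs on the 64-vCPU parent size. Example names only.
import re

examples = ["Standard_M64-32ms", "Standard_M128-64ms", "Standard_E32-16s_v3"]

for name in examples:
    parent_vcpus, active_vcpus = map(int, re.search(r"(\d+)-(\d+)", name).groups())
    print(f"{name}: {active_vcpus} billable vCPUs, "
          f"{parent_vcpus // active_vcpus}x the memory/storage/I/O per billable vCPU")
```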

Network bandwidth allocation

The network bandwidth allocated to each virtual machine is metered on the egress (outbound) traffic from the virtual machine. All network traffic leaving the virtual machine is counted towards the allocated limit, regardless of destination. 

The expected outbound throughput and the maximum number of network interfaces depend entirely on the VM size (the sketch after the list below shows one way to inspect per-size capabilities). The throughput limit applies to the virtual machine itself and is unaffected by the following factors:

  • Number of network interfaces: the bandwidth limit is cumulative across all outbound traffic from the virtual machine, regardless of how many network interfaces it has.
  • Accelerated networking: although the feature can help you reach the published limit, it does not change the limit.
  • Traffic destination: all destinations count towards the outbound limit.
  • Protocol: all outbound traffic, over all protocols, counts towards the limit.
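
One way to see the size-dependent networking characteristics is to query the per-SKU capability metadata with the Azure SDK for Python. The sketch below is hedged: the resource SKUs API is real, but the exact capability names returned (for example for NIC count or accelerated networking support) vary by SKU, and the documented egress Mbps figure for each size lives in the VM size documentation rather than being guaranteed to appear in this list. The subscription ID, region, and size name are placeholders.

```python
# Sketch: dump the capability name/value pairs Azure publishes for one VM SKU.
# Capability names vary; inspect the output to find NIC count, accelerated
# networking support, and similar size-dependent attributes.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute_client = ComputeManagementClient(DefaultAzureCredential(), "<your-subscription-id>")

for sku in compute_client.resource_skus.list(filter="location eq 'westeurope'"):
    if sku.resource_type == "virtualMachines" and sku.name == "Standard_E64s_v3":
        for cap in sku.capabilities:
            print(f"{cap.name} = {cap.value}")
        break
```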

Data flows

The Azure networking stack maintains state for each direction of a TCP/UDP connection in data structures called 'flows'. A typical TCP/UDP connection has two flows created: one for the inbound direction and another for the outbound direction. Data transfer between endpoints requires the creation of several flows in addition to those that carry the data itself; examples include the flows created for DNS resolution and for load balancer health probes.

The Azure networking stack supports 250K total network flows with good performance for VMs with 8 or more CPU cores, and 100K total flows for VMs with fewer than 8 CPU cores. Beyond these limits, network performance degrades gracefully for additional flows, up to a hard limit of 1M total flows (500K inbound and 500K outbound), after which additional flows are dropped.
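
There is no direct guest-OS counter for the platform's flow table, but as a rough proxy you can watch how many TCP/UDP sockets a VM holds open. The sketch below uses the third-party psutil package; socket count understates the true flow count (the fabric also tracks flows for DNS lookups, health probes, and each direction separately), so treat it only as a trend indicator.

```python
# Rough proxy for flow usage: count open TCP/UDP sockets on this VM with psutil.
# This understates the platform's actual flow count but helps spot workloads
# drifting toward the documented per-VM flow limits.
from collections import Counter
import psutil

conns = psutil.net_connections(kind="inet")  # IPv4/IPv6, TCP and UDP
by_status = Counter(c.status for c in conns)

print(f"Open inet sockets: {len(conns)}")
for status, count in by_status.most_common():
    print(f"  {status}: {count}")
```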

Like other cloud-based services, Azure VMs are more agile than on-premises virtual machines. You can provision them and scale them vertically on an as-needed basis, without investing in dedicated hardware, and you benefit from the pricing model that applies to Azure VMs. A running Azure VM is billed for compute on a per-second basis, with the price determined by its size, its operating system, and any licensed software installed on it. Because a running virtual machine requires the allocation of Azure compute resources, you should change its state to Stopped (Deallocated) whenever you are not using it to avoid the corresponding charges.
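
As a hedged sketch of that last point, the snippet below deallocates an idle VM with the Azure SDK for Python so that compute billing stops; the resource group, VM name, and subscription ID are placeholders. Note that powering a VM off without deallocating it leaves it in the Stopped state, which keeps the compute allocation and therefore the compute charges.

```python
# Sketch: deallocate an idle VM so compute billing stops. Managed disks remain
# and continue to incur storage charges. Names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute_client = ComputeManagementClient(DefaultAzureCredential(), "<your-subscription-id>")

poller = compute_client.virtual_machines.begin_deallocate("sap-demo-rg", "sap-app-vm01")
poller.wait()  # block until the VM reaches the Stopped (Deallocated) state
print("VM deallocated: compute billing stops; attached disks still bill for storage.")
```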
