Very Large Database (VLDB) Migration to Azure (part 1)

 




Source System Optimizations

For the Source Export of VLDB systems, you have to perform the following tasks:
  • Purge Technical Tables and Unnecessary Data.

  • Separate the R3load processes from the DBMS server to maximize export performance.

  • Run R3load on fast, recent Intel-based servers rather than old UNIX hardware. A two-socket commodity Intel server with 128 GB of RAM costs little compared with the days or weeks of tuning, optimization, or consulting time it can save.

  • Use a 10 Gb high-speed network, ideally with minimal network hops between the source DB server and the Intel R3load servers.

  • Physical servers are highly recommended for the R3load export servers; at some customer sites, virtualized R3load servers did not offer good performance or reliability at extremely high network throughput.

  • Sequence the largest tables at the start of Orderby.txt.

  • Configure Semi-parallel Export/Import using Signal Files. 

  • Unsorted exports are beneficial for large exports of larger tables, but reviewing their net impact is highly recommended: importing unsorted exports into a database that has a clustered index on the primary key will be slower.

  • Configure Jumbo Frames between source DB server and Intel R3load servers.

  • Adjust memory settings on the source database server to optimize for sequential read/export tasks.
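The "semi-parallel export/import using signal files" step above can be sketched as a simple polling loop. The assumption here is that the export side drops a `<package>.SGN` signal file when a package finishes exporting, and the import side waits for it; the function name and paths are illustrative, not part of the SAP tooling itself.

```shell
# Minimal sketch of the import side of a signal-file-driven
# semi-parallel export/import (names and paths are illustrative).
wait_for_signal() {
  sgn_dir=$1
  pkg=$2
  # Poll until the signal file for this package appears.
  while [ ! -f "$sgn_dir/$pkg.SGN" ]; do
    sleep 5
  done
  echo "$pkg export complete - import can start"
}
```

In a real migration the migration monitor handles this coordination; the sketch only shows the principle of starting each package's import as soon as its export has signalled completion.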

Network Upload Optimizations

Jumbo Frames are Ethernet frames larger than the default 1,500 bytes; a typical size is 9,000 bytes. Additional networking features such as Receive Side Scaling (RSS) can be switched on or configured to distribute network processing across multiple processors. Running R3load servers on VMware has proven to make network tuning for Jumbo Frames and RSS more complex, and it is not recommended unless expert skills are available.
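On a Linux R3load server, the two settings above can be applied roughly as follows (the interface name `eth0` and the queue count are assumptions; on Windows the equivalents are the adapter's Jumbo Packet property and `Enable-NetAdapterRss` in PowerShell). This is configuration, so it must match the MTU supported end-to-end on the network path.

```shell
# Raise the MTU to 9000 bytes to enable Jumbo Frames
# (every switch and NIC on the path must support this).
ip link set dev eth0 mtu 9000

# Verify the change took effect.
ip link show dev eth0

# Spread receive processing across multiple queues/CPUs.
ethtool -L eth0 combined 8
```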

You can upload R3load dump files in either of two ways:

  1. Copy from on-premises R3load export servers to Azure blob storage over the public Internet with AzCopy.
  2. Copy from on-premises R3load export servers to an Azure VM or blob storage via a dedicated ExpressRoute connection using AzCopy, Robocopy, or a similar tool.
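For option 1, an upload with AzCopy (v10) might look like the following sketch; the export path, storage account, container, and SAS token are all placeholders you would substitute for your own environment.

```shell
# Recursively upload the R3load export directory to blob storage.
# <storageaccount>, <container> and <sas-token> are placeholders.
azcopy copy "/export/ABAP" \
  "https://<storageaccount>.blob.core.windows.net/<container>?<sas-token>" \
  --recursive
```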

Target System Optimizations

  1. Use latest possible OS with latest patches.
  2. Use latest possible DB with latest patches.
  3. Use latest possible SAP Kernel with latest patches (e.g. Upgrade from 7.45 kernel to 7.49 or 7.53).
  4. Use the largest available Azure VM; it can be resized to a smaller VM after the import process.
  5. Create multiple transaction log files, with the first transaction log file on the local non-persistent SSD; additional files can be created on P50 disks.
  6. SQL Server Max Degree of Parallelism (MAXDOP) is usually set to 1; certain index build operations on specific tables benefit from a higher MAXDOP.
  7. Accelerated Networking is mandatory for DB and R3load servers.
  8. Limit, via Resource Governor, the maximum memory a single SQL Server query can request; this prevents index build operations from requesting very large memory grants.
  9. Secondary indexes for very large tables can be removed from the STR file and built ONLINE with scripts after the main portion of the import has finished and while post-processing tasks such as configuring STMS are running.
  10. Customers who use SQL Server TDE should pre-create the database and transaction log files, then enable TDE before starting the import. The overhead of importing into a TDE database is relatively low.
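Points 6, 8, and 9 above can be sketched as sqlcmd calls against the target instance. The server name, database name, workload-group memory percentage, and the index/table names are illustrative assumptions, not values from the original text.

```shell
# 6. Set instance-wide MAXDOP to 1 for the import phase.
sqlcmd -S <server> -Q "EXEC sp_configure 'max degree of parallelism', 1; RECONFIGURE;"

# 8. Cap the memory grant a single query can request (default workload
#    group; 25 percent is an illustrative value).
sqlcmd -S <server> -Q "ALTER WORKLOAD GROUP [default] WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 25); ALTER RESOURCE GOVERNOR RECONFIGURE;"

# 9. Rebuild a removed secondary index ONLINE after the main import,
#    optionally with a higher MAXDOP for this one operation
#    (index and table names are hypothetical).
sqlcmd -S <server> -d <SID> -Q "CREATE INDEX [VBAK~Z01] ON VBAK (ERDAT) WITH (ONLINE = ON, MAXDOP = 8);"
```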
















