Understanding End-to-End Machine Learning Process (Part 5 of 5)

To read part 1, please click here
To read part 2, please click here
To read part 3, please click here
To read part 4, please click here

Deploying Models

This step is also known as inferencing, or scoring a model. The deployment and operation of an ML pipeline become visible once the model is run against live data in production, which is generally done to gain deeper insight and gather data for improving the model continuously. Tracking the model's performance over time provides the feedback needed to keep improving it. There are two main architectures for ML-scoring pipelines:
  • Batch scoring using pipelines- It's an offline process in which you evaluate an ML model against a batch of data. The result of this scoring is usually not time-critical, and the data to be scored is typically far larger than the model itself.

  • Real-time scoring using a container-based web service endpoint- It's a technique in which you score single data inputs (common in stream processing for scoring single events in real time), which makes the task highly time-critical; execution is blocked until the resulting score is computed (see the sketch after this list).
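As a rough illustration of the real-time pattern, the sketch below shows a minimal scoring entry script. The init()/run() split mirrors the entry-script convention used by container-based endpoints such as Azure ML's; the model file name model.pkl and the request schema are assumptions made for this example.

```python
# score.py - minimal sketch of a real-time scoring entry script.
# Assumptions: the trained model was saved with joblib as "model.pkl" and
# requests arrive as JSON of the form {"data": [[...feature row...], ...]}.
import json

import joblib
import numpy as np

model = None

def init():
    """Load the model once, when the container / web service starts."""
    global model
    model = joblib.load("model.pkl")

def run(raw_request: str) -> str:
    """Score one request; the caller blocks until the score is returned."""
    payload = json.loads(raw_request)
    features = np.array(payload["data"])
    predictions = model.predict(features)
    return json.dumps({"predictions": predictions.tolist()})
```

For batch scoring, the same model would instead be loaded inside an offline pipeline step that reads an entire dataset, calls predict on it, and writes the results back to storage.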

Developing & Operating Enterprise-Grade ML Solutions

In order to operationalize ML projects, we have to use automated pipelines and development-operations (DevOps) methodologies such as Continuous Integration (CI) and Continuous Delivery/Continuous Deployment (CD); applied to ML, this practice is known as MLOps. The two automated pipelines are:
  • Training pipeline- This one covers loading datasets, transforming the data, training the model, and registering the final model; it can be triggered by changes in the datasets or by data drift detected in a deployed model (see the sketch after this list).

  • Deployment pipeline- This one covers loading models from the registry, creating and deploying Docker images, creating and deploying operational scripts, and the final deployment of the model to the target environment; it can be triggered by any new version of an ML model.
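To make the training pipeline more concrete, here is a minimal sketch of its stages in plain Python with scikit-learn. The dataset path, the "label" column, the model type, and the local model_registry/ folder (standing in for a real model registry) are assumptions for illustration; in practice each stage would typically be a separate, version-controlled pipeline step.

```python
# train_pipeline.py - minimal sketch of a training pipeline:
# load data -> transform -> train -> register (version) the final model.
# Assumptions: a CSV dataset with a "label" column and a local
# "model_registry/" folder standing in for a real model registry.
from pathlib import Path

import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def run_training(dataset_path: str, registry_dir: str = "model_registry") -> Path:
    # Load the dataset.
    data = pd.read_csv(dataset_path)
    X, y = data.drop(columns=["label"]), data["label"]

    # Transformation and model training combined in one scikit-learn pipeline.
    model = Pipeline([
        ("scale", StandardScaler()),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X, y)

    # "Register" the final model by writing a new versioned artifact.
    registry = Path(registry_dir)
    registry.mkdir(exist_ok=True)
    version = len(list(registry.glob("model_v*.pkl"))) + 1
    artifact = registry / f"model_v{version}.pkl"
    joblib.dump(model, artifact)
    return artifact
```

The deployment pipeline would then pick up the newest artifact from the registry, package it together with a scoring script (such as score.py above) into a Docker image, and roll that image out to the target environment.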

Now that we have these pipelines, we can use Azure DevOps (and other tooling) to create a life cycle for our ML projects consisting of the following parts:

  • Creating or retraining a model- Here, training pipelines are used to create or retrain our model while version-controlling the pipelines and the code.

  • Deploying the model & creating scoring files & dependencies- Here, we can use a deployment pipeline to deploy a specific model version while version-controlling the pipeline and the code.

  • Creating an audit trail- Through CI/CD pipelines and version control, we can create an audit trail for all assets while ensuring integrity and compliance.

  • Monitoring the model in production- Here, we can monitor performance and possible data drift, which might automatically trigger retraining of the model (see the drift-check sketch after this list).
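As one example of the monitoring step, the sketch below compares live feature distributions against the training baseline with a two-sample Kolmogorov-Smirnov test from scipy and flags drift that could trigger the training pipeline again. The 0.05 threshold and the "retrain on any drifted feature" policy are illustrative assumptions, not a prescribed method.

```python
# drift_check.py - minimal sketch of data-drift monitoring.
# Assumption: baseline (training) and live (production) data share the same
# numeric feature columns; the 0.05 p-value threshold is illustrative.
import pandas as pd
from scipy.stats import ks_2samp

def detect_drift(baseline: pd.DataFrame, live: pd.DataFrame, alpha: float = 0.05) -> dict:
    """Return a per-feature drift flag using a two-sample KS test."""
    drifted = {}
    for column in baseline.columns:
        statistic, p_value = ks_2samp(baseline[column], live[column])
        drifted[column] = p_value < alpha  # True = distribution shift detected
    return drifted

def should_retrain(drift_flags: dict) -> bool:
    """Simple policy: retrain if any feature shows drift."""
    return any(drift_flags.values())
```

In a CI/CD setup, a scheduled job could run this check against recent production data and, when should_retrain returns True, queue the training pipeline described above.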
