Implementing MLOps for Training Custom Models with Azure AI Document Intelligence

To address the challenges of effectively maintaining custom models in Azure AI Document Intelligence, this article explores how to adapt MLOps concepts into your delivery strategy. The goal is to provide guidance on ensuring that custom document analysis models are not only accurate but remain effective for users throughout their lifecycle.

 

Key Challenges

 

Balancing model accuracy with operational scalability

Ensuring that custom models remain accurate over time while also remaining scalable and efficient in operation is challenging. Teams are required to collect and process many documents while exploring techniques to improve accuracy. The operational overhead of managing document storage, running pre-processing flows, training models, and ensuring they are efficiently deployed without affecting the user experience requires a meticulous approach.

 

Implementing strategies for model improvement

Finding the right approach to model retraining presents many complexities. Teams must establish seamless mechanisms for monitoring, collecting feedback, and recreating training data to retrain a model. While automation enables best practices and faster delivery, it also opens the door for invalid feedback or malicious actors to affect a model's analysis. Manual processes, in turn, must be carefully managed so they do not compromise the integrity of a model.

 

Managing multiple models and versions

Teams building custom models with Azure AI Document Intelligence will create multiple variants. This complicates future maintenance: identifying which models exist and whether they are still relevant or in use becomes difficult. Teams need strategies to manage models effectively, provide rollback capabilities, and minimize disruption to end users during updates.

 

Recommendations

As a team adopting MLOps practices while utilizing Azure AI Document Intelligence to build custom models for document analysis, you should:

 

  • Adopt MLOps practices to streamline the end-to-end lifecycle management of custom models. Utilize automation pipelines to perform model training, evaluation, and deployment to reduce manual errors. Adopt a semantic versioning strategy to improve the management of models. Leverage Azure Blob Storage for document collection to organize and manage training data efficiently.
  • Establish continuous feedback and retraining mechanisms. Embed feedback mechanisms into intelligent applications to allow users to submit corrections and inform retraining activities. Establish a human-in-the-loop to oversee user feedback and submit recommendations for retraining. Utilize established automation pipelines that can be triggered by reviewers to roll out changes continuously without further manual intervention.
  • Enhance model deployment strategies to minimize user impact when running multiple model versions. Utilize approaches such as blue/green deployments and feature flags to manage model updates, enabling gradual rollout with rollback capabilities. Implement monitoring to track model performance and usage and to identify potential issues early. Use this data to inform decisions regarding model retraining.

 

Challenge Overview

With ever-growing demand for AI integration in SaaS products, leveraging Azure AI services to enhance the user experience is no longer just an advantage; it's a necessity. Azure AI Document Intelligence provides a powerful platform for extracting valuable data from a variety of document types, transforming manual, time-consuming tasks into automated, efficient processes. However, the challenges engineering teams face when integrating this service and building accurate custom models mirror the complexities any data science team faces when building custom machine learning models.

 

This is leading engineering teams to ask, “How do we effectively implement continuous improvement to custom document analysis models?”

 

This article focuses on adapting the concepts of MLOps to custom models created within Azure AI Document Intelligence. The goal is to provide you with guidance to ensure that models are not only accurate but remain effective for the users consuming them throughout their lifecycle.

 

What is MLOps?

MLOps represents the blend of machine learning with DevOps practices, aiming to streamline the lifecycle of ML projects. At its core, MLOps is about enhancing efficiency and reliability in deploying ML models, taking advantage of automation, team collaboration, continuous integration (CI), continuous deployment (CD), testing, and monitoring of changes.

 

Diagram representing the lifecycle of MLOps: collecting data, processing it, training a model, validating, packaging, and deploying, completing the cycle with monitoring and feedback

 

Implementing these practices is important because:

 

  • Automation is crucial in reducing manual errors and increasing efficiency. It allows for the automatic training, evaluation, and deployment of models. This minimizes the need for human intervention and speeds up iteration cycles.
  • Collaboration among team members is essential to ensure that models are not only accurate but also deployable and maintainable in production environments.
  • Transparency in the processes ensures that all stakeholders have visibility into the model's performance, the changes being made, and the impact the changes make. This is critical for maintaining trust and for continuous improvement.

 

To understand more about MLOps, dive deeper into our Microsoft Learn training paths.

 

Applying MLOps to Azure AI Document Intelligence

Building and deploying a custom model in Azure AI Document Intelligence doesn't require a deep understanding of machine learning. However, the process presents the same challenges that implementing MLOps resolves in a machine learning model's lifecycle.

 

This approach provides a transformative strategy that ensures the seamless integration of AI into document processing workflows and enhances the efficiency and accuracy of models over time. Let's delve into best practices for preparing custom models in Azure AI Document Intelligence for production, highlighting the implementation of MLOps to achieve operational excellence and scalability.

 

Technical diagram demonstrating the application of MLOps to the custom model creation process in Azure AI Document Intelligence

 

Collecting and processing documents for custom models in Azure AI Document Intelligence

The foundation of creating an accurate custom model in Azure AI Document Intelligence with MLOps starts with the collection and processing of relevant documents of a given type. This step is critical as the quality and diversity of the content in the documents directly impact the model’s performance.

 

To improve how you collect and process documents, consider the following:

 

  • Diverse sources of the same document type: Aim to gather documents from a variety of sources of the same type, e.g., an invoice, but also consider documents with varied layouts, e.g., tables that span multiple pages, signatures in different locations, and both handwritten and digitized content. Ensure you have enough documents to provide variety to the model, but not so many that you lose quality and introduce noise.
  • Storing your training data: Storing your model's training documents in Azure Blob Storage eases model creation, enabling you to use both the Azure AI Document Intelligence Studio and the APIs via code. This will be important for automating retraining later (see the sketch after this list). Track the documents used in your model so that you can manage which ones are included in the trained model.
  • Implement pre-processing: Before feeding documents into your custom model, apply pre-processing techniques to standardize your document formats, e.g., de-skewing scanned documents and ensuring correct page ordering. Using the Azure AI Document Intelligence Studio, analyze and accurately label the key details you want to extract from your documents. High-quality labeling is crucial for training successful models.
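
With the labeled documents in a Blob Storage container, the model build itself can be scripted, which becomes the foundation for automated retraining later. The following is a minimal sketch using the azure-ai-formrecognizer Python SDK; the endpoint, key, container SAS URL, and model ID are hypothetical placeholders to be replaced with your own values.

```python
from azure.ai.formrecognizer import (
    DocumentModelAdministrationClient,
    ModelBuildMode,
)
from azure.core.credentials import AzureKeyCredential

# Hypothetical values -- substitute your own resource details.
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-api-key>"
training_container_sas_url = "<sas-url-to-blob-container-with-labeled-docs>"

admin_client = DocumentModelAdministrationClient(
    endpoint, AzureKeyCredential(key)
)

# Build a custom template model from the labeled documents in Blob Storage.
poller = admin_client.begin_build_document_model(
    ModelBuildMode.TEMPLATE,
    blob_container_url=training_container_sas_url,
    model_id="invoice-extractor-1.0.0",  # semantic version in the model ID
    description="Invoice extraction model trained on the v1 document set",
)
model = poller.result()

print(f"Built model: {model.model_id}")
for name, doc_type in model.doc_types.items():
    print(f"Doc type '{name}' fields: {list(doc_type.field_schema.keys())}")
```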

 

With your documents ready, we can now start considering how we build, manage, and deploy our models to customers.

 

Continuous integration and deployment of Document Intelligence models

Continuous integration and continuous deployment (CI/CD) practices are central to MLOps, enabling teams to integrate changes, automate testing, and deploy models more reliably and quickly. To apply these MLOps practices in Azure AI Document Intelligence, let’s explore some important factors that apply to the training of custom models once we have collected and pre-processed our documents.

 

Model versioning

Model versioning is the cornerstone of effective MLOps practices. Versioning models allows teams to track, manage, and roll back models to previous states. This ensures that only the best-performing versions are available in a production environment.

 

For Azure AI Document Intelligence, consider using semantic versioning in the model ID. When implementing semantic versioning, collaborate across your team to establish what you consider a major, minor, or patch change. This ensures that everyone understands the scope of deploying new model versions and eases the identification of changes between models.

 

As an example of implementing semantic versioning in Azure AI Document Intelligence model IDs, consider the following scheme (a code sketch follows the list):

 

  • Major: Breaking changes in the document template or field labels which invalidates previous models.
  • Minor: New training data based on the current document template.
  • Patch: Fixes to previously processed documents, such as correcting pre-processing steps, e.g., de-skewing, page ordering, and labeling.
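
A lightweight way to enforce such a convention is to encode the version in the model ID and bump it programmatically. Below is a minimal sketch; the invoice-extractor naming scheme is hypothetical, so adapt the prefix and rules to your own strategy.

```python
import re

MODEL_ID_PATTERN = re.compile(
    r"^(?P<name>[a-z0-9-]+)-(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)$"
)

def bump_model_id(model_id: str, change: str) -> str:
    """Return the next model ID for a 'major', 'minor', or 'patch' change."""
    match = MODEL_ID_PATTERN.match(model_id)
    if not match:
        raise ValueError(f"Model ID '{model_id}' does not follow <name>-<semver>")
    name = match["name"]
    major, minor, patch = int(match["major"]), int(match["minor"]), int(match["patch"])
    if change == "major":    # e.g., breaking template or field label changes
        major, minor, patch = major + 1, 0, 0
    elif change == "minor":  # e.g., new training data, same template
        minor, patch = minor + 1, 0
    elif change == "patch":  # e.g., corrected pre-processing or labeling
        patch += 1
    else:
        raise ValueError(f"Unknown change type: {change}")
    return f"{name}-{major}.{minor}.{patch}"

assert bump_model_id("invoice-extractor-1.2.3", "minor") == "invoice-extractor-1.3.0"
```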

 

With a versioning strategy established, we can start to train and register our models with Azure AI Document Intelligence.

 

Testing model changes

Testing changes is a crucial aspect of MLOps, ensuring that updates maintain or improve performance without introducing regressions. A comprehensive testing strategy should encompass several layers, from data validation to performance testing.

 

Consider the following for effectively testing model changes:

 

  • Establish a baseline for quality and performance: With each training of a model, it is critical to have an established baseline to compare against, both for the accuracy of the results and for the time taken to analyze the documents. Define a fixed test set of analyzed documents and their expected results to compare against for consistency across subsequent changes.
  • Running automated tests to validate changes: With an established set of test data, it is important to monitor for significant changes in the accuracy of new model versions compared to the previously established results. Performing this manually can be cumbersome when working with a large test set. Consider implementing an automated process, run in a pipeline, that calls the Azure AI Document Intelligence APIs to analyze the test documents with a new model and compares the results (see the sketch after this list).
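
As a minimal sketch of such an automated check, the following analyzes a set of baseline documents with a candidate model and flags fields whose extracted values differ from expected values. The expected_results structure, file paths, and model ID are hypothetical placeholders for your own test set.

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # hypothetical
key = "<your-api-key>"
candidate_model_id = "invoice-extractor-1.3.0"

# Hypothetical baseline: document path -> expected extracted field values.
expected_results = {
    "tests/invoice-001.pdf": {"InvoiceTotal": "1,250.00", "VendorName": "Contoso"},
}

client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))

failures = []
for path, expected_fields in expected_results.items():
    with open(path, "rb") as f:
        result = client.begin_analyze_document(candidate_model_id, document=f).result()
    document = result.documents[0]
    for field_name, expected_value in expected_fields.items():
        field = document.fields.get(field_name)
        actual = field.content if field else None
        if actual != expected_value:
            failures.append((path, field_name, expected_value, actual))

for path, field_name, expected, actual in failures:
    print(f"{path}: {field_name} expected '{expected}', got '{actual}'")

# Fail the pipeline run if any field regressed.
raise SystemExit(1 if failures else 0)
```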

 

For more information on using the SDKs for Azure AI Document Intelligence, explore our GitHub samples that demonstrate their usage.

 

Deploying custom Azure AI Document Intelligence models

When you’re ready for your customers to start consuming your model, establishing a deployment strategy is critical to minimizing downtime and ensuring a smooth user experience during updates.

 

For applications integrating with Azure AI Document Intelligence, implementing an API gateway allows you to roll out changes while minimizing application updates. This proxy allows you to establish model deployment strategies such as the following (a routing sketch follows the list):

 

  • Blue/green deployments: This strategy involves deploying the new (green) version alongside the current (blue) version within an identical environment. After testing and validating with a select user group, traffic can gradually shift from the blue model to the green model. This approach minimizes downtime and risk by allowing an instant rollback if issues arise.
  • Feature flags: Implementing feature flags allows you to dynamically toggle between model versions without redeploying changes. This is particularly useful for A/B testing, as well as for gradually introducing new features or models to users.
  • API versioning: With a proxy in front of your Azure AI Document Intelligence resource, you can run multiple models simultaneously while you phase them out and update client applications. Consider providing an endpoint for a model with API versioning that matches your deployed model versions.
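
To illustrate, here is a minimal sketch of gateway-side routing logic that combines a feature flag with a percentage-based traffic split between two model versions. The configuration shape and model IDs are hypothetical; in practice this logic would live in your API gateway or proxy layer, with values loaded from a feature-flag service.

```python
import random

# Hypothetical routing configuration, e.g., loaded from a feature-flag service.
ROUTING_CONFIG = {
    "blue_model_id": "invoice-extractor-1.2.3",   # current production model
    "green_model_id": "invoice-extractor-1.3.0",  # candidate model
    "green_enabled": True,                        # feature flag
    "green_traffic_percent": 10,                  # gradual rollout percentage
}

def select_model_id(config: dict) -> str:
    """Pick the model version to serve for an incoming analyze request."""
    if config["green_enabled"] and random.uniform(0, 100) < config["green_traffic_percent"]:
        return config["green_model_id"]
    return config["blue_model_id"]

# Each request resolves to a model ID; rollback is a config change, not a redeploy.
model_id = select_model_id(ROUTING_CONFIG)
print(f"Routing request to model: {model_id}")
```

With this shape, rolling back is simply disabling the flag or setting the traffic percentage to zero, rather than redeploying the application.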

 

To learn more about deployment strategies in general, including blue/green deployments and feature flags, the Microsoft Learn path for DevOps provides a comprehensive overview.

 

MLOps techniques for gathering user feedback for Document Intelligence model retraining

Enhancing the performance and accuracy of custom models is important to maintaining long-term value for customers. An effective way to achieve this is through the monitoring stage of the Document Intelligence model's lifecycle as defined by MLOps.

 

Let’s explore approaches to monitoring via feedback from users and leveraging it for efficient model retraining.

 

User Feedback Loops

User feedback loops are a critical step in the iterative improvement of custom models. Alongside the expected performance and usage monitoring, user feedback loops enable the collection of real-world insights. User feedback provides insight into how the model is performing in scenarios you may not otherwise be able to test.

 

Implementing a robust user feedback loop involves:

 

  • Direct integration into client applications: It is important to embed a feedback mechanism directly into the applications that take advantage of Azure AI Document Intelligence models. Use intuitive UI elements that encourage users to report inaccuracies or provide suggestions without disrupting their work. An implementation can start with a simple form that allows users to correct the extracted data and submit back. For more interactive feedback, implement a UI that mimics the capabilities of Azure AI Document Intelligence Studio for users to analyze documents and re-label them using the existing fields of the model.
  • Streamlining feedback collection: The process of submitting user feedback should be as seamless as possible. It is important to collect structured feedback that you can easily convert to the format required for retraining (a minimal schema sketch follows this list). Consider mapping user feedback to the existing schemas from Azure AI Document Intelligence to shorten the lifecycle. You must also securely store the analyzed documents and OCR analysis results to enable effective retraining with the user feedback.
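
As a minimal sketch of structured feedback, the following dataclass captures a user correction in a shape that can later be converted to label files for retraining. The field names, blob URL, and model ID are hypothetical and should be aligned with your own model's schema and storage layout.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class FieldCorrection:
    field_name: str       # label name in the model's schema, e.g., "InvoiceTotal"
    extracted_value: str  # what the model returned
    corrected_value: str  # what the user says it should be

@dataclass
class FeedbackRecord:
    document_blob_url: str  # where the analyzed document is securely stored
    model_id: str           # model version that produced the result
    corrections: list[FieldCorrection] = field(default_factory=list)
    submitted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = FeedbackRecord(
    document_blob_url="https://<account>.blob.core.windows.net/feedback/invoice-001.pdf",
    model_id="invoice-extractor-1.3.0",
    corrections=[FieldCorrection("InvoiceTotal", "1,250.00", "1,350.00")],
)

# Persist as JSON for the review queue and later conversion to label files.
print(json.dumps(asdict(record), indent=2))
```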

 

The Azure AI Document Intelligence custom template user feedback loop sample provides a demonstration of this approach within a Python Jupyter notebook. Implementing the more interactive approach, this sample shows how a user could interact with a document to provide corrections for incorrect or missing information extracted by a model.

 

Human-in-the-loop Reviews

While user feedback can be integrated directly via automated retraining, human oversight is crucial to prevent invalid feedback or malicious use from being introduced. A human-in-the-loop review ensures that feedback is accurately interpreted and applied to the model.

 

When implementing a strategy for feedback reviews, consider:

 

  • Establishing a review panel: Gather a group of subject matter experts, e.g., the team responsible for model creation, to review feedback from users on a regular basis. Implement alerts for user submissions to ensure that feedback is actioned. The review panel should be empowered to follow the MLOps process, integrate feedback, test, and rollout changes.
  • Routine quality assurance: Implement routine checks on the submitted feedback queue to ensure that the feedback is relevant and is being correctly used to inform model retraining efforts. It is also important to review feedback that has previously been integrated into the model for continued relevance.

 

Efficient Model Retraining

To capitalize on the insights from user feedback and human-in-the-loop reviews, it is important to establish an effective model retraining process. Here is where automation plays a key role in making this viable at scale by:

 

  • Triggering automated CI/CD pipelines: Model retraining should be integrated into a workflow that can be triggered by an event, such as a schedule or the approval of a user feedback review. The workflow should automate the guidance provided in this article, including model versioning, testing, and deployment (see the sketch below). This ensures that model updates carrying the latest feedback-driven improvements are readily available and deployed seamlessly.
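
As a sketch of what such a pipeline step might orchestrate, the following ties the earlier pieces together: build the new model version from the feedback-updated training data, gate on the automated tests, and either promote or discard the candidate. All configuration values are hypothetical, and the test step is a placeholder for the regression check sketched earlier.

```python
from azure.ai.formrecognizer import DocumentModelAdministrationClient, ModelBuildMode
from azure.core.credentials import AzureKeyCredential

# Hypothetical configuration supplied by the pipeline (e.g., via environment variables).
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-api-key>"
training_container_sas_url = "<sas-url-to-updated-training-container>"
current_model_id = "invoice-extractor-1.2.3"
new_model_id = "invoice-extractor-1.3.0"  # e.g., from bump_model_id(current, "minor")

admin_client = DocumentModelAdministrationClient(endpoint, AzureKeyCredential(key))

# 1. Retrain: build the new model version from the feedback-updated training data.
poller = admin_client.begin_build_document_model(
    ModelBuildMode.TEMPLATE,
    blob_container_url=training_container_sas_url,
    model_id=new_model_id,
    description=f"Retrained from {current_model_id} with reviewed user feedback",
)
model = poller.result()

# 2. Test: run the baseline test suite against the new model (sketched earlier).
tests_passed = True  # placeholder for the automated comparison step

# 3. Deploy or discard: update gateway routing only if the tests passed.
if tests_passed:
    print(f"Promote {model.model_id} via gateway routing (blue/green or feature flag)")
else:
    admin_client.delete_document_model(model.model_id)  # discard the failed candidate
    print(f"Tests failed; {current_model_id} remains in production")
```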

 

Conclusion

As the demand for intelligent AI applications grows, teams must establish best practices for production readiness. The challenges and complexities of implementing effective strategies for model improvement highlight the need for a robust, iterative approach. Applying the principles of MLOps enhances the longevity of custom models in Azure AI Document Intelligence.

 

By adopting these MLOps practices, teams can leverage a well-defined framework to deliver reliable AI solutions that meet their customers' ever-evolving expectations.

 
