Microsoft is partnering with Mistral AI to bring its Large Language Models (LLMs) to Azure. Mistral AI’s OSS models, Mixtral-8x7B and Mistral-7B, were added to the Azure AI model catalog last December. Today, we are excited to announce the addition of Mistral AI’s new flagship model, Mistral Large, to the Mistral AI collection of models in the Azure AI model catalog. Mistral Large is available through Models-as-a-Service (MaaS), which offers API-based access and token-based billing for LLMs, making it easier to build generative AI apps. Developers can provision an API endpoint in a matter of seconds and try out the model in the Azure AI Studio playground, or use it with popular LLM app development tools such as Azure AI prompt flow and LangChain. The APIs support two layers of safety: first, the model has built-in support for a “safe prompt” parameter; second, Azure AI content safety filters screen for harmful content generated by the model, helping developers build safe and trustworthy applications.
The Mistral Large model
Mistral Large is Mistral AI's most advanced Large Language Model (LLM), available first on Azure and on the Mistral AI platform. Thanks to its state-of-the-art reasoning and knowledge capabilities, it can be used for a full range of language-based tasks. Key attributes:
Benchmarks
You can read more about the model and review evaluation results on Mistral AI’s blog: https://mistral.ai/news/mistral-large. The Benchmarks hub in Azure AI Studio offers a standardized set of evaluation metrics for popular models, including Mistral's OSS models and Mistral Large.
Using Mistral Large on Azure AI
First, take care of the prerequisites. Next, create a deployment to obtain the inference API endpoint and key. Both the prerequisites and the deployment steps are explained in the product documentation: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral.
You can use the API endpoint and key with various clients. Review the API schema if you are looking to integrate the REST API with your own client: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral#reference-for-mistral.... Let’s review a sample for a popular client.
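As a starting point, here is a minimal Python sketch that calls the chat completions REST API with the requests library. The endpoint URL, environment variable names, and payload fields shown are illustrative placeholders based on the API reference linked above; substitute the values from your own deployment.

```python
import os
import requests

# Values from your Azure AI deployment (illustrative placeholders).
ENDPOINT = os.environ["MISTRAL_AZURE_ENDPOINT"]  # base URL shown on your deployment page
API_KEY = os.environ["MISTRAL_AZURE_KEY"]

def chat(messages, max_tokens=256):
    """Send a chat completion request to the Mistral Large MaaS endpoint."""
    response = requests.post(
        f"{ENDPOINT}/v1/chat/completions",  # path assumed from the API reference linked above
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "messages": messages,
            "max_tokens": max_tokens,
            "temperature": 0.7,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(chat([{"role": "user", "content": "Summarize what Models-as-a-Service on Azure AI offers."}]))
```

The same endpoint and key can also be supplied to the LLM tools mentioned earlier, such as Azure AI prompt flow and LangChain, through their respective connection settings.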
Develop with integrated content safety
Mistral AI APIs on Azure come with a two-layered safety approach: the model can be instructed through the system prompt, and an additional content filtering system screens prompts and completions for harmful content. Setting the safe_prompt parameter prefixes the system prompt with a guardrail instruction, as documented by Mistral AI. Additionally, the Azure AI content safety system, which consists of an ensemble of classification models, screens for specific types of harmful content. This external system is designed to be effective against adversarial prompt attacks, such as prompts that ask the model to ignore previous instructions. When the content filtering system detects harmful content, you will receive an error if the prompt was classified as harmful, or a partially or completely truncated response with an appropriate message if the generated output was classified as harmful. Make sure you account for these scenarios, where the content returned by the APIs is filtered, when building your applications.
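The sketch below extends the earlier requests example to exercise both layers: it sets safe_prompt in the request body and handles the cases where the prompt or the completion is filtered. The exact error status code and finish reason value are assumptions based on the behavior described above; check the API reference for the authoritative schema.

```python
import os
import requests

ENDPOINT = os.environ["MISTRAL_AZURE_ENDPOINT"]
API_KEY = os.environ["MISTRAL_AZURE_KEY"]

def safe_chat(user_message):
    """Call Mistral Large with the guardrail system prompt enabled and
    handle content-filtered prompts and completions."""
    response = requests.post(
        f"{ENDPOINT}/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
        json={
            "messages": [{"role": "user", "content": user_message}],
            "safe_prompt": True,  # layer 1: prefix the guardrail system prompt
            "max_tokens": 256,
        },
        timeout=60,
    )

    # Layer 2: the content safety system may reject a harmful prompt outright
    # (assumed to surface as a client error)...
    if response.status_code == 400:
        print("Prompt was filtered by the content safety system:", response.text)
        return None
    response.raise_for_status()

    # ...or truncate the completion if the generated output is classified as harmful.
    choice = response.json()["choices"][0]
    if choice.get("finish_reason") == "content_filter":  # assumed finish reason value
        print("Completion was truncated by the content safety system.")
    return choice["message"]["content"]

print(safe_chat("Tell me about responsible AI practices for LLM apps."))
```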
FAQs
Supercharge your AI apps with Mistral Large today. Head over to the Azure AI Studio model catalog to get started.