It starts with frontier science.

State of the art models with cutting edge capabilities, from cloud to edge.

Voxtral
Mistral Small
Document AI
Mistral Large
Codestral
Mistral Medium
Nemo
Ministral

Tailored for You.

Our premier models are designed to be yours to tune, customize, distill, and deploy.

Available for commercial use.

Mistral Large 3

One of the best open-source models in the world: our open-weight, general-purpose, multimodal, and multilingual flagship.

Mistral Large 3 is our largest model to date, with 41B active parameters and 675B total parameters, a 256k context window, and powerful agentic capabilities.

Ministral Family

The 3B, 8B, and 14B models bring best-in-class frontier AI to the edge.

These models combine compact efficiency with multimodal and multilingual capability. Engineered for edge devices, self-hosted systems, and robotics, they seamlessly blend language, vision, and reasoning into highly efficient architectures.

Magistral

Specialized, transparent, and multilingual reasoning.

Complex thinking, backed by deep understanding, with transparent reasoning you can follow and verify. The model excels in maintaining high-fidelity reasoning across numerous languages, even when switching between languages mid-task.

Medium 3

State-of-the-art performance at 8X lower cost.

Delivering a range of enterprise capabilities including hybrid or on-premises / in-VPC deployment, custom post-training, and integration into enterprise tools and systems.

Mistral Small

Enterprise-ready, compact powerhouse.

The most powerful model in its size class, combining efficiency with remarkable capabilities. Ideal for production deployments that require a balance between performance and resource usage.

Document AI

Enterprise-grade document processing.

Extract and understand complex text, handwriting, tables, and images from any document, with 99%+ accuracy across global languages.
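As a rough sketch of what document extraction could look like in practice, the snippet below posts a hosted PDF to the platform's REST API. The `/v1/ocr` route, the `mistral-ocr-latest` model name, the document URL, and the response shape are assumptions to verify against the current API reference.

```python
# Sketch: extract text from a hosted PDF with Document AI.
# The /v1/ocr endpoint, the "mistral-ocr-latest" model name, and the
# response shape are assumptions; check the current API reference.
import os
import requests

API_KEY = os.environ["MISTRAL_API_KEY"]

response = requests.post(
    "https://api.mistral.ai/v1/ocr",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "mistral-ocr-latest",  # assumed model identifier
        "document": {
            "type": "document_url",
            "document_url": "https://example.com/invoice.pdf",  # hypothetical document
        },
    },
    timeout=60,
)
response.raise_for_status()

# The response is expected to contain one entry per page, with the
# extracted content (text, tables, etc.) rendered as Markdown.
for page in response.json().get("pages", []):
    print(page.get("markdown", ""))
```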

Codestral

Elevating code generation.

Purpose-built for code generation and understanding, optimized for developer workflows.

Voxtral

Voxtral is a family of audio models with state-of-the-art speech-to-text capabilities.

It delivers the right balance of frontier performance, affordable pricing, and flexible deployment.

Mistral Embed

Enabling internal semantic search.

State-of-the-art embedding model for semantic search and content organization.
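A minimal sketch of internal semantic search: embed a handful of documents and a query, then rank the documents by cosine similarity. The `/v1/embeddings` route and the `mistral-embed` model name are assumed here, and the documents are made up.

```python
# Sketch: semantic search with Mistral Embed via the REST API.
# Assumes the /v1/embeddings endpoint and the "mistral-embed" model name.
import math
import os
import requests

API_KEY = os.environ["MISTRAL_API_KEY"]

def embed(texts):
    """Return one embedding vector per input string."""
    resp = requests.post(
        "https://api.mistral.ai/v1/embeddings",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "mistral-embed", "input": texts},
        timeout=30,
    )
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

documents = [  # hypothetical internal documents
    "Expense reports must be filed within 30 days.",
    "The VPN requires multi-factor authentication.",
    "Quarterly planning starts in the first week of March.",
]
doc_vectors = embed(documents)
query_vector = embed(["How do I submit an expense report?"])[0]

# Rank documents by similarity to the query and print the scores.
ranked = sorted(zip(documents, doc_vectors),
                key=lambda pair: cosine(query_vector, pair[1]),
                reverse=True)
for text, vec in ranked:
    print(f"{cosine(query_vector, vec):.3f}  {text}")
```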

Multimodal Models

Vision pioneer, multimodal mastery.

Combine text, image, and structured data understanding in a single model. Process diverse input types while maintaining consistent quality across modalities.
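To make that concrete, here is a hedged sketch of a single chat request that mixes a text question with an image URL. The `pixtral-large-latest` model name, the content-part schema, and the example image URL are assumptions; check the chat-completions documentation for the exact shape.

```python
# Sketch: one request mixing text and an image.
# The multimodal model id ("pixtral-large-latest") and the content-part
# schema {"type": "image_url", "image_url": "<url>"} are assumptions.
import os
import requests

API_KEY = os.environ["MISTRAL_API_KEY"]

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "pixtral-large-latest",  # assumed multimodal model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Summarize the chart in one sentence."},
                    {"type": "image_url", "image_url": "https://example.com/chart.png"},  # hypothetical image
                ],
            }
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```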

Mistral Moderation

Intelligent content safety at scale.

A fine-tuned model offering customizable content moderation across nine safety categories in multiple languages. Designed for both raw text and conversational content with high accuracy and pragmatic safety guardrails.
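As an illustration only, the sketch below classifies one string across the safety categories. The `/v1/moderations` route, the `mistral-moderation-latest` model name, and the response fields (`categories`, `category_scores`) are assumptions to confirm against the moderation API reference.

```python
# Sketch: classify a piece of text across the moderation safety categories.
# The /v1/moderations endpoint, the "mistral-moderation-latest" model name,
# and the response fields are assumptions.
import os
import requests

API_KEY = os.environ["MISTRAL_API_KEY"]

resp = requests.post(
    "https://api.mistral.ai/v1/moderations",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "mistral-moderation-latest",  # assumed model identifier
        "input": ["I will be late to the meeting, sorry!"],
    },
    timeout=30,
)
resp.raise_for_status()

result = resp.json()["results"][0]
# Print every safety category with its boolean flag and raw score.
for category, flagged in result["categories"].items():
    score = result["category_scores"][category]
    print(f"{category:<25} flagged={flagged} score={score:.3f}")
```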

Free open-weight models for research.

Free to use under the Apache 2.0 license.

Devstral

The best open-source model for coding agents.

Mistral Small

Enterprise-ready, compact powerhouse.

Magistral

Specialized, transparent, and multilingual reasoning.

Voxtral

Audio model with state-of-the-art speech-to-text capabilities.

Custom models for your own needs.

Make models your own through fine-tuning and customization. Adapt our models to your specific use cases while maintaining core performance.

Unique models tailored to your business, delivering very high accuracy at significantly lower cost.

Transform general-purpose LLMs into domain-specialized intelligence with Mistral AI's custom pre-training and model distillation services.
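For a sense of the workflow, here is a hedged sketch of API-driven fine-tuning: upload a JSONL training file, then create a job on top of a base model. The `/v1/files` and `/v1/fine_tuning/jobs` routes, the field names, the hyperparameters, and the base-model id are all assumptions; the fine-tuning guide is the source of truth.

```python
# Sketch: kick off a fine-tuning job from a JSONL training file.
# The /v1/files and /v1/fine_tuning/jobs routes, the "purpose" value,
# the base-model id, and the hyperparameter names are assumptions;
# confirm them against the current fine-tuning documentation.
import os
import requests

API_KEY = os.environ["MISTRAL_API_KEY"]
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# 1) Upload the training data (chat-formatted JSONL is assumed).
with open("training_data.jsonl", "rb") as f:  # hypothetical local file
    upload = requests.post(
        "https://api.mistral.ai/v1/files",
        headers=HEADERS,
        files={"file": f},
        data={"purpose": "fine-tune"},
        timeout=120,
    )
upload.raise_for_status()
file_id = upload.json()["id"]

# 2) Create the fine-tuning job on top of a base model.
job = requests.post(
    "https://api.mistral.ai/v1/fine_tuning/jobs",
    headers=HEADERS,
    json={
        "model": "open-mistral-7b",  # assumed base-model id
        "training_files": [{"file_id": file_id, "weight": 1}],
        "hyperparameters": {"training_steps": 100, "learning_rate": 1e-4},
    },
    timeout=30,
)
job.raise_for_status()
print("Fine-tuning job:", job.json()["id"])
```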

Custom Pre-Training
Open

Community-hardened, open-weight models developed by the world’s best scientists.

Customizable

Customize and control the full stack, from models to UX.

Private

Deploy in your own environment so you own your data, models, and core competencies.

Efficient

Tap into small, super-efficient models tailored for productivity.

Ready to get started?

Deploy our models anywhere with flexible infrastructure options spanning cloud providers, edge, VPC, and on-premises environments.

Self-hosted
Deploy Mistral models on virtual cloud, edge, or on-premises. Self-hosted deployments offer more advanced levels of customization and control. Your data stays within your walls.
Mistral Cloud
Get started with Mistral models in a few clicks via our developer platform hosted on Mistral’s infrastructure and build your own applications and services. Our servers are hosted in the EU. A minimal code sketch follows this list.
Cloud providers
Access our models via your preferred cloud provider (Google Cloud, AWS, Azure, IBM, Snowflake, NVIDIA, Outscale) and use your cloud credits. Mistral models are available on Azure AI Studio, AWS Bedrock, Google Cloud Model Garden, IBM Watsonx, and Snowflake.
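As a minimal sketch of the Mistral Cloud path above, the snippet below sends a first chat request through the developer platform. It assumes the `mistralai` Python SDK's v1-style interface (`Mistral(...).chat.complete`) and uses `mistral-large-latest` as an example model id; check the SDK version you install.

```python
# Sketch: first request via the developer platform (the "Mistral Cloud"
# option above). Assumes the mistralai Python SDK's v1-style interface.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-large-latest",  # example model identifier
    messages=[{"role": "user", "content": "Give me three taglines for a bakery."}],
)
print(response.choices[0].message.content)
```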

Model licenses explained.

| | Apache 2.0 | Mistral research license | Mistral commercial license |
| --- | --- | --- | --- |
| Access to weights | ✓ | ✓ | ✓ |
| Deployment for research purposes and individual usage | ✓ | ✓ | ✓ |
| Creation of derivatives (e.g. fine-tuning) for research purposes and individual usage | ✓ | ✓ (the same license applies to derivatives) | ✓ (the same license applies to derivatives) |
| Deployment for commercial purposes (internal & external use cases) | ✓ | ✗ (requires a Mistral commercial license) | ✓ |
| Creation and usage of derivatives (e.g. fine-tuning) for commercial use cases | ✓ | ✗ (requires a Mistral commercial license) | ✓ |
| Custom terms & support (self-deployment) | ✓ | ✗ | ✓ |

Get started with Mistral models.

It's time to get to the frontier.