Get started with Mistral models in a few clicks via our developer platform hosted on Mistral’s infrastructure and build your own applications and services. Our servers are hosted in the EU.
We release the world’s most capable open models, enabling frontier AI innovation.
Our portable developer platform serves our open and optimized models for building fast and intelligent applications. Get started for free!
State of the art models across a variety of sizes, available to experiment under the Mistral Research License and take to production with the Commercial License.
Top-tier reasoning for high-complexity tasks and sophisticated problems.
Vision-capable large model with frontier reasoning capabilities.
Enterprise-grade small model.
State-of-the-art Mistral model trained specifically for code tasks.
Our most powerful edge model. Successor to Mistral 7B.
Sets the benchmark in commonsense, reasoning, and function-calling in the sub-10B category.
Our most efficient edge model.
The most capable in its category, ideal for low-power, low-latency on-device computing and edge use cases.
State-of-the-art semantic embedding model for extracting representations of text extracts.
A classifier service for text content moderation.
For more details on the various pricing options, check out our pricing page: See pricing.
Free to use under the Apache 2.0 license.
Vision-capable small model.
Variant of Mistral-7B, optimized for solving advanced mathematics problems.
A Mamba2 language model designed for coding tasks.
A state-of-the-art 12B small model built in collaboration with NVIDIA.
Mixtral 8x22B set a new standard for performance and efficiency, with only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. It is natively capable of function calling, which enables application development and tech stack modernisation at scale.
A high-quality sparse mixture of experts (SMoE) with open weights. Matches or outperforms GPT-3.5 on most standard benchmarks, particularly in multilingual capabilities and code.
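Both Mixtral models owe their cost profile to sparse Mixture-of-Experts routing: each token is processed by only a couple of the experts, so the active parameter count (12.9B of 45B for 8x7B, 39B of 141B for 8x22B) is what drives per-token compute. Below is a minimal, illustrative sketch of top-2 routing with toy dimensions; it is not Mixtral’s actual implementation, only a demonstration of the idea.

```python
# Illustrative sketch of top-2 routing in a sparse Mixture-of-Experts (SMoE) layer.
# The dimensions are toy values, not Mixtral's configuration; the point is that each
# token activates only 2 of the 8 expert feed-forward networks, so per-token compute
# tracks the "active" parameter count rather than the total.
import numpy as np

n_experts, top_k = 8, 2
d_model, d_ff = 64, 256  # toy sizes

rng = np.random.default_rng(0)
experts_w1 = rng.standard_normal((n_experts, d_model, d_ff)) * 0.02
experts_w2 = rng.standard_normal((n_experts, d_ff, d_model)) * 0.02
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a single token vector x of shape (d_model,) to its top-2 experts."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]                        # 2 highest-scoring experts
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over the selected experts
    out = np.zeros_like(x)
    for g, e in zip(gates, top):
        out += g * (np.maximum(x @ experts_w1[e], 0.0) @ experts_w2[e])
    return out

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)  # (64,)

total = experts_w1.size + experts_w2.size
active = top_k * (experts_w1[0].size + experts_w2[0].size)
print(f"fraction of expert parameters active per token: {active / total:.2f}")  # 0.25
```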
The first Mistral model, engineered for superior performance and efficiency. The model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost.
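As an illustration of the sliding window attention idea mentioned above, the sketch below builds a causal attention mask in which each position may only attend to the previous few tokens. The window size and the mask construction are simplified stand-ins, not Mistral 7B’s actual implementation.

```python
# Illustrative sketch of a causal sliding-window attention mask (window size is a toy value).
# Each query position i may attend only to key positions j with i - window < j <= i,
# so attention cost per token is bounded by the window rather than the full sequence length.
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Return a (seq_len, seq_len) boolean mask; True means attention is allowed."""
    i = np.arange(seq_len)[:, None]  # query positions
    j = np.arange(seq_len)[None, :]  # key positions
    return (j <= i) & (j > i - window)

print(sliding_window_mask(seq_len=8, window=3).astype(int))
# The last row shows that token 7 attends only to tokens 5, 6 and 7,
# regardless of how long the sequence grows.
```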
These models also have commercial licenses for business purposes: Explore commercial licenses.
We allow you to fine-tune our models in an easy, effective and cost-efficient way, so you can use smaller, better-suited models to solve your specific use cases. Fine-tuning can be done with our open-source fine-tuning code as well as on La Plateforme with our efficient Fine-tuning API.
Leverage Mistral’s unique expertise in training models by using our highly efficient fine-tuning service to specialize both our open-source and commercial models.
Benefit from Mistral fine-tuning code to perform fine-tuning on Mistral open-source models on your own.
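As a rough illustration of what submitting a job to the Fine-tuning API can look like, the snippet below uploads a JSONL training file and creates a fine-tuning job over HTTP. The endpoint paths, payload fields and hyperparameter names shown here are assumptions made for illustration; the La Plateforme fine-tuning documentation remains the authoritative reference.

```python
# Hedged sketch of submitting a fine-tuning job to La Plateforme over HTTP.
# Endpoint paths, payload fields and hyperparameter names are assumptions made
# for illustration; consult the official fine-tuning docs for the exact API.
import os
import requests

API_KEY = os.environ["MISTRAL_API_KEY"]
BASE_URL = "https://api.mistral.ai/v1"
headers = {"Authorization": f"Bearer {API_KEY}"}

# 1. Upload the training data (a JSONL file of chat-formatted examples).
with open("training_data.jsonl", "rb") as f:
    upload = requests.post(
        f"{BASE_URL}/files",
        headers=headers,
        files={"file": ("training_data.jsonl", f)},
        data={"purpose": "fine-tune"},
    )
upload.raise_for_status()
training_file_id = upload.json()["id"]

# 2. Create a fine-tuning job on an open model (hyperparameters are illustrative).
job = requests.post(
    f"{BASE_URL}/fine_tuning/jobs",
    headers=headers,
    json={
        "model": "open-mistral-7b",
        "training_files": [training_file_id],
        "hyperparameters": {"training_steps": 100, "learning_rate": 1e-4},
    },
)
job.raise_for_status()
print(job.json())  # job id and status, which can then be polled
```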
Model | API Name | Description | Input (/M tokens) | Output (/M tokens) |
---|---|---|---|---|
Mistral Large 24.11 | mistral-large-latest | Top-tier reasoning for high-complexity tasks and sophisticated problems. | $2 | $6 |
Pixtral Large | pixtral-large-latest | Vision-capable large model with frontier reasoning capabilities. | $2 | $6 |
Mistral Small 24.09 | mistral-small-latest | Cost-efficient, fast, and reliable option for use cases such as translation, summarization, and sentiment analysis. | $0.2 | $0.6 |
Codestral | codestral-latest | State-of-the-art Mistral model trained specifically for code tasks. | $0.2 | $0.6 |
Ministral 8B 24.10 | ministral-8b-latest | Powerful model for on-device use cases. | $0.1 | $0.1 |
Ministral 3B 24.10 | ministral-3b-latest | Most efficient edge model. | $0.04 | $0.04 |
Mistral Embed | mistral-embed | State-of-the-art semantic embedding model for extracting representations of text extracts. | $0.1 | |
Mistral Moderation 24.11 | mistral-moderation-latest | A classifier service for text content moderation. | $0.1 | |
Model | API Name | Description | Input (/M tokens) | Output (/M tokens) |
---|---|---|---|---|
Mistral Large 24.11 | mistral-large-latest | Top-tier reasoning for high-complexity tasks and sophisticated problems. | 1.8€ | 5.4€ |
Pixtral Large | pixtral-large-latest | Vision-capable large model with frontier reasoning capabilities. | 1.8€ | 5.4€ |
Mistral Small 24.09 | mistral-small-latest | Cost-efficient, fast, and reliable option for use cases such as translation, summarization, and sentiment analysis. | 0.18€ | 0.54€ |
Codestral | codestral-latest | State-of-the-art Mistral model trained specifically for code tasks. | 0.18€ | 0.54€ |
Ministral 8B 24.10 | ministral-8b-latest | Powerful model for on-device use cases. | 0.09€ | 0.09€ |
Ministral 3B 24.10 | ministral-3b-latest | Most efficient edge model. | 0.04€ | 0.04€ |
Mistral Embed | mistral-embed | State-of-the-art semantic embedding model for extracting representations of text extracts. | 0.09€ | |
Mistral Moderation 24.11 | mistral-moderation-latest | A classifier service for text content moderation. | 0.09€ | |
Models used with the Batch API cost 50% less than the prices shown above.
Free models

Model | API Name | Description | Input (/M tokens) | Output (/M tokens) |
---|---|---|---|---|
Pixtral 12B | pixtral-12b | Vision-capable small model. | $0.15 | $0.15 |
Mistral NeMo | mistral-nemo | A state-of-the-art 12B small model built in collaboration with NVIDIA. | $0.15 | $0.15 |
Mistral 7B | open-mistral-7b | A 7B transformer model, fast-deployed and easily customisable. | $0.25 | $0.25 |
Mixtral 8x7B | open-mixtral-8x7b | An 8x7B sparse Mixture-of-Experts (SMoE). Uses 12.9B active parameters out of 45B total. | $0.7 | $0.7 |
Mixtral 8x22B | open-mixtral-8x22b | Mixtral 8x22B is currently the most performant open model: an 8x22B sparse Mixture-of-Experts (SMoE) that uses only 39B active parameters out of 141B. | $2 | $6 |
Model | API Name | Description | Input (/M tokens) | Output (/M tokens) |
---|---|---|---|---|
Pixtral 12B | pixtral-12b | Vision-capable small model. | 0.13€ | 0.13€ |
Mistral NeMo | mistral-nemo | A state-of-the-art 12B small model built in collaboration with NVIDIA. | 0.13€ | 0.13€ |
Mistral 7B | open-mistral-7b | A 7B transformer model, fast-deployed and easily customisable. | 0.2€ | 0.2€ |
Mixtral 8x7B | open-mixtral-8x7b | An 8x7B sparse Mixture-of-Experts (SMoE). Uses 12.9B active parameters out of 45B total. | 0.65€ | 0.65€ |
Mixtral 8x22B | open-mixtral-8x22b | Mixtral 8x22B is currently the most performant open model: an 8x22B sparse Mixture-of-Experts (SMoE) that uses only 39B active parameters out of 141B. | 1.9€ | 5.6€ |
Model | One-off training (/M tokens) | Storage | Input (/M tokens) | Output (/M tokens) |
---|---|---|---|---|
Mistral NeMo | $1 | $2 per month per model | $0.15 | $0.15 |
Mistral Large 24.11 | $9 | $4 per month per model | $2 | $6 |
Mistral Small | $3 | $2 per month per model | $0.2 | $0.6 |
Codestral | $3 | $2 per month per model | $0.2 | $0.6 |
Model | One-off training (/M tokens) | Storage | Input (/M tokens) | Output (/M tokens) |
---|---|---|---|---|
Mistral NeMo | 0.9€ | 1.8€ per month per model | 0.13€ | 0.13€ |
Mistral Large 24.11 | 8.2€ | 3.8€ per month per model | 1.8€ | 5.4€ |
Mistral Small | 2.7€ | 1.8€ per month per model | 0.18€ | 0.54€ |
Codestral | 2.7€ | 1.8€ per month per model | 0.18€ | 0.54€ |
Tokens are numerical representations of words or parts of words. On average, one token is roughly equivalent to 4 characters or 0.75 words in English.
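As a rough back-of-the-envelope illustration, that heuristic can be combined with the per-million-token prices listed above to estimate what a request costs. The snippet below is a sketch only: the prompt, the assumed 200 output tokens and the use of Mistral Small prices are illustrative, and actual token counts depend on the tokenizer.

```python
# Back-of-the-envelope request cost, using the ~4 characters-per-token heuristic above.
# Real token counts depend on the tokenizer, so treat this as an approximation only.
def estimate_tokens(text: str) -> float:
    return len(text) / 4  # roughly 4 characters per token in English

def estimate_cost(input_tokens: float, output_tokens: float,
                  input_price_per_m: float, output_price_per_m: float,
                  batch_api: bool = False) -> float:
    cost = (input_tokens / 1e6) * input_price_per_m + (output_tokens / 1e6) * output_price_per_m
    return cost * 0.5 if batch_api else cost  # Batch API prices are 50% lower

prompt = "Who is the most renowned French painter?"
# Mistral Small 24.09 prices from the table above: $0.2 input / $0.6 output per M tokens.
print(f"${estimate_cost(estimate_tokens(prompt), 200, 0.2, 0.6):.6f}")
```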
Mistral AI provides a fine-tuning API through La Plateforme, making it easy to fine-tune our open-source and commercial models. There are three costs related to fine-tuning: a one-off training cost billed per million tokens of training data, a monthly storage fee per fine-tuned model, and the usual inference cost on input and output tokens, as illustrated in the sketch below.
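Putting the three components together, a hypothetical bill for a fine-tuned model can be sketched as follows, using the Mistral NeMo prices from the USD fine-tuning table above; the training and traffic volumes are invented for illustration.

```python
# The three fine-tuning cost components, using the Mistral NeMo USD prices from the
# fine-tuning table above. The usage volumes are hypothetical, for illustration only.
TRAINING_PER_M_TOKENS = 1.00   # one-off training cost per M tokens of training data
STORAGE_PER_MONTH = 2.00       # per fine-tuned model, per month
INPUT_PER_M_TOKENS = 0.15      # inference, per M input tokens
OUTPUT_PER_M_TOKENS = 0.15     # inference, per M output tokens

training_tokens_m = 10   # hypothetical: 10M tokens of training data
monthly_input_m = 50     # hypothetical: 50M input tokens per month
monthly_output_m = 20    # hypothetical: 20M output tokens per month

one_off = training_tokens_m * TRAINING_PER_M_TOKENS
recurring = (STORAGE_PER_MONTH
             + monthly_input_m * INPUT_PER_M_TOKENS
             + monthly_output_m * OUTPUT_PER_M_TOKENS)
print(f"one-off training: ${one_off:.2f}, recurring per month: ${recurring:.2f}")
# -> one-off training: $10.00, recurring per month: $12.50
```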
Access our latest products via our developer platform, hosted in Europe
import os

from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-tiny"

client = MistralClient(api_key=api_key)
messages = [
    ChatMessage(role="user",
                content="Who is the most renowned French painter?")
]

# Send the chat completion request and print the model's reply.
chat_response = client.chat(model=model, messages=messages)
print(chat_response.choices[0].message.content)
La Plateforme is developers’ preferred way to access all Mistral AI’s models, hosted and served on Mistral’s infrastructure in Europe.
We distribute two categories of models:
Usage | Apache 2.0 | Mistral Research License | Mistral Commercial License |
---|---|---|---|
Access to weights | ✅ | ✅ | ✅ |
Deployment for research purposes and individual usage | ✅ | ✅ | ✅ |
Creation of derivatives (e.g. fine-tuning) for research purposes and individual usage | ✅ | ✅ The same license applies to derivatives | ✅ The same license applies to derivatives |
Deployment for commercial purposes (internal & external use cases) | ✅ | ❌ Requires Mistral Commercial License | ✅ |
Creation and usage of derivatives (e.g. fine-tuning) for commercial use cases | ✅ | ❌ Requires Mistral Commercial License | ✅ |
Custom terms & support (self-deployment) | ❌ | ❌ | ✅ |