Get started with Mistral models in a few clicks via our developer platform hosted on Mistral’s infrastructure and build your own applications and services. Our servers are hosted in the EU.
We release the world’s most capable open models, enabling frontier AI innovation.
Our portable developer platform serves our open and optimized models for building fast and intelligent applications. We offer flexible access options!
A state-of-the-art 12B small model built in collaboration with NVIDIA.
Top-tier reasoning for high-complexity tasks, for your most sophisticated needs.
State-of-the-art Mistral model trained specifically for code tasks.
State-of-the-art semantic model for extracting representations of text extracts.
A Mamba2 language model designed for coding tasks.
Variant of Mistral-7B, optimized for solving advanced mathematics problems.
State-of-the-art semantic model for extracting representations of text extracts.
Access our latest products via our developer platform, hosted in Europe.
import os

from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-tiny"
client = MistralClient(api_key=api_key)

messages = [
    ChatMessage(role="user",
                content="Who is the most renowned French painter?")
]

# Send the chat completion request and print the model's reply
chat_response = client.chat(model=model, messages=messages)
print(chat_response.choices[0].message.content)
La Plateforme is developers’ preferred way to access all Mistral AI’s models. Hosted and served on Mistral AI’s infrastructure, in Europe.
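Responses can also be streamed token by token. The snippet below is a minimal sketch, assuming the `chat_stream` method of the same legacy `mistralai` Python client shown above; the prompt is just an example.

import os

from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

client = MistralClient(api_key=os.environ["MISTRAL_API_KEY"])

# Stream the response instead of waiting for the full completion.
for chunk in client.chat_stream(
    model="mistral-tiny",
    messages=[ChatMessage(role="user",
                          content="Who is the most renowned French painter?")],
):
    # Each chunk carries an incremental piece of the reply.
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")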
We allow you to fine-tune our models in an easy, effective and cost-efficient way, so you can use smaller, better-suited models to solve your specific use cases. Fine-tuning can be done with our open-source fine-tuning code as well as on La Plateforme with our efficient Fine-tuning API.
Benefit from Mistral fine-tuning code to perform fine-tuning on Mistral open-source models on your own.
Leverage Mistral’s unique expertise in training models by using our highly efficient fine-tuning service to specialize both our open-source and commercial models.
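As a sketch of the Fine-tuning API flow on La Plateforme, the snippet below uploads a training file and creates a job with the legacy `mistralai` Python client used above. The `files.create` / `jobs.create` calls and the `TrainingParameters` import reflect that client version; the file path and hyperparameter values are illustrative placeholders, not recommendations.

import os

from mistralai.client import MistralClient
from mistralai.models.jobs import TrainingParameters

client = MistralClient(api_key=os.environ["MISTRAL_API_KEY"])

# Upload a JSONL training file (path and contents are placeholders).
with open("training_data.jsonl", "rb") as f:
    training_file = client.files.create(file=("training_data.jsonl", f))

# Create a fine-tuning job on an open model; hyperparameters are illustrative only.
job = client.jobs.create(
    model="open-mistral-7b",
    training_files=[training_file.id],
    hyperparameters=TrainingParameters(
        training_steps=10,
        learning_rate=1e-4,
    ),
)
print(job.id, job.status)

Once the job has finished, the resulting fine-tuned model can be called through the same chat API as any other model.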
We distribute two categories of models:
| Apache 2.0 | Mistral Research License | Mistral Commercial License |
---|---|---|---|
Access to weights | ✅ | ✅ | ✅ |
Deployment for research purposes and individual usage | ✅ | ✅ | ✅ |
Creation of derivatives (e.g. fine-tuning) for research purposes and individual usage | ✅ | ✅ The same license applies to derivatives | ✅ The same license applies to derivatives |
Deployment for commercial purposes (internal & external use cases) | ✅ | ❌ Requires Mistral Commercial License | ✅ |
Creation and usage of derivatives (e.g. fine-tuning) for commercial use cases | ✅ | ❌ Requires Mistral Commercial License | ✅ |
Custom terms & support (self-deployment) | ❌ | ❌ | ✅ |
Model | API Name | Description | Input | Output |
---|---|---|---|---|
Mistral Nemo | open-mistral-nemo-2407 | Mistral Nemo is a state-of-the-art 12B model developed with NVIDIA. | $0.3 /1M tokens | $0.3 /1M tokens |
Mistral Large 2 | mistral-large-2407 | Top-tier reasoning for high-complexity tasks, for your most sophisticated needs. | $3 /1M tokens | $9 /1M tokens |
Model | API Name | Description | Input | Output |
---|---|---|---|---|
Mistral Nemo | open-mistral-nemo-2407 | Mistral Nemo is a state-of-the-art 12B model developed with NVIDIA. | 0.27€ /1M tokens | 0.27€ /1M tokens |
Mistral Large 2 | mistral-large-2407 | Top-tier reasoning for high-complexity tasks, for your most sophisticated needs. | 2.7€ /1M tokens | 8.2€ /1M tokens |
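Per-request spend follows directly from the per-token prices above. Below is a minimal sketch of the arithmetic, using the USD prices for Mistral Large 2; the token counts are made-up placeholders.

# Prices per 1M tokens for Mistral Large 2 (USD, from the table above).
PRICE_INPUT_PER_M = 3.0
PRICE_OUTPUT_PER_M = 9.0

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single request in USD."""
    return (input_tokens / 1_000_000) * PRICE_INPUT_PER_M + (
        output_tokens / 1_000_000
    ) * PRICE_OUTPUT_PER_M

# Example: a 2,000-token prompt with a 500-token answer costs $0.0105.
print(f"${request_cost(2_000, 500):.4f}")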
Model | API Name | Description | Input | Output |
---|---|---|---|---|
Codestral | codestral-2405 | State-of-the-art Mistral model trained specifically for code tasks. | $1 /1M tokens | $3 /1M tokens |
Mistral Embed | mistral-embed | State-of-the-art semantic model for extracting representations of text extracts. | $0.1 /1M tokens | |
Model | API Name | Description | Input | Output |
---|---|---|---|---|
Codestral | codestral-2405 | State-of-the-art Mistral model trained specifically for code tasks. | 0.9€ /1M tokens | 2.8€ /1M tokens |
Mistral Embed | mistral-embed | State-of-the-art semantic model for extracting representations of text extracts. | 0.1€ /1M tokens | |
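For the Mistral Embed rows above, here is a minimal sketch of an embeddings call, assuming the `embeddings` method of the same legacy `mistralai` Python client used earlier; the input strings are placeholders.

import os

from mistralai.client import MistralClient

client = MistralClient(api_key=os.environ["MISTRAL_API_KEY"])

# Embed a small batch of text extracts with mistral-embed.
response = client.embeddings(
    model="mistral-embed",
    input=[
        "Mistral models are served from Europe.",
        "La Plateforme exposes a simple API.",
    ],
)

# One embedding vector per input string.
for item in response.data:
    print(len(item.embedding))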
Model | One-off training | Storage | Input | Output |
---|---|---|---|---|
Mistral Nemo | $1 /1M tokens | $2 per month per model | $0.3 /1M tokens | $0.3 /1M tokens |
Codestral | $3 /1M tokens | $2 per month per model | $1 /1M tokens | $3 /1M tokens |
Mistral Large 2 | $9 /1M tokens | $4 per month per model | $3 /1M tokens | $9 /1M tokens |
Model | One-off training | Storage | Input | Output |
---|---|---|---|---|
Mistral Nemo | 0.9€ /1M tokens | 1.9€ per month per model | 0.27€ /1M tokens | 0.27€ /1M tokens |
Codestral | 2.7€ /1M tokens | 1.9€ per month per model | 0.9€ /1M tokens | 2.8€ /1M tokens |
Mistral Large 2 | 8.2€ /1M tokens | 3.8€ per month per model | 2.7€ /1M tokens | 8.2€ /1M tokens |
Model | API Name | Description | Input | Output |
---|---|---|---|---|
Mistral 7B | open-mistral-7b | A 7B transformer model, fast-deployed and easily customisable. | $0.25 /1M tokens | $0.25 /1M tokens |
Mixtral 8x7B | open-mixtral-8x7b | A sparse Mixture-of-Experts (SMoE) model built from 7B experts. Uses 12.9B active parameters out of 45B total. | $0.7 /1M tokens | $0.7 /1M tokens
Mixtral 8x22B | open-mixtral-8x22b | Mixtral 8x22B is currently the most performant open model. A sparse Mixture-of-Experts (SMoE) model built from 22B experts. Uses only 39B active parameters out of 141B total. | $2 /1M tokens | $6 /1M tokens
Mistral Small | mistral-small-latest | Cost-efficient reasoning, optimised for high volume use cases that require low latency. | $1 /1M tokens | $3 /1M tokens |
Mistral Medium | mistral-medium-latest | Our first commercial model. | $2.75 /1M tokens | $8.1 /1M tokens |
Model | API Name | Description | Input | Output |
---|---|---|---|---|
Mistral 7B | open-mistral-7b | A 7B transformer model, fast-deployed and easily customisable. | 0.2€ /1M tokens | 0.2€ /1M tokens |
Mixtral 8x7B | open-mixtral-8x7b | A sparse Mixture-of-Experts (SMoE) model built from 7B experts. Uses 12.9B active parameters out of 45B total. | 0.65€ /1M tokens | 0.65€ /1M tokens
Mixtral 8x22B | open-mixtral-8x22b | Mixtral 8x22B is currently the most performant open model. A sparse Mixture-of-Experts (SMoE) model built from 22B experts. Uses only 39B active parameters out of 141B total. | 1.9€ /1M tokens | 5.6€ /1M tokens
Mistral Small | mistral-small-latest | Cost-efficient reasoning, optimised for high volume use cases that require low latency. | 0.9€ /1M tokens | 2.8€ /1M tokens |
Mistral Medium | mistral-medium-latest | Our first commercial model. | 2.5€ /1M tokens | 7.5€ /1M tokens |
Mistral AI provides a fine-tuning API through La Plateforme, making it easy to fine-tune our open-source and commercial models. There are three costs related to fine-tuning: a one-off training cost, a monthly storage cost per fine-tuned model, and the usual per-token inference costs for input and output.
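As a back-of-the-envelope sketch of how these three components add up, the snippet below uses the USD prices for Mistral Nemo from the fine-tuning table above; the token volumes and storage duration are made-up placeholders.

# Fine-tuning cost estimate for Mistral Nemo (USD prices from the table above).
TRAINING_PER_M = 1.0      # one-off training, per 1M training tokens
STORAGE_PER_MONTH = 2.0   # per month, per fine-tuned model
INPUT_PER_M = 0.3         # inference, per 1M input tokens
OUTPUT_PER_M = 0.3        # inference, per 1M output tokens

# Placeholder workload: 5M training tokens, model kept for 3 months,
# then 20M input and 10M output tokens of inference.
training_cost = 5 * TRAINING_PER_M
storage_cost = 3 * STORAGE_PER_MONTH
inference_cost = 20 * INPUT_PER_M + 10 * OUTPUT_PER_M

print(training_cost + storage_cost + inference_cost)  # 5 + 6 + 9 = 20.0 USD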