Mistral technology

AI models

We have shipped the most capable open models to accelerate AI innovation, and we drive the field forward with frontier models.

Developer platform

Our portable developer platform serves our open and optimized models for building fast and intelligent applications. We offer flexible access options!

AI models

Open

We’re committed to empowering the AI community with open technology. Our open models set the bar for efficiency and are available for free, under a fully permissive license.

Mistral 7B

Our very first model: a 7B transformer, fast to deploy and easily customisable. Small, yet very powerful for a variety of use cases.

  • English and code
  • 8k context window

Mixtral 8x7B

Currently the best open model. A sparse mixture-of-experts (SMoE) model built from 7B experts. It uses 12B active parameters out of 45B total (an illustrative sketch of sparse routing follows the list below).

  • Fluent in English, French, Italian, German, Spanish, and strong in code
  • 32k context window
  • Apache 2.0 License
  • Concise, useful, unopinionated, with fully modular moderation control
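To make the sparse mixture-of-experts idea concrete, here is a purely illustrative sketch of a top-2 routing layer. It is not Mixtral's actual code, and the dimensions below are invented for the example; the point is that only 2 of the 8 expert feed-forward blocks run for each token, which is why the active parameter count is far smaller than the total.

import torch
import torch.nn as nn

class Top2MoELayer(nn.Module):
    """Toy sparse mixture-of-experts layer: 8 experts, 2 active per token."""

    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, d_model)
        probs = self.router(x).softmax(dim=-1)          # routing probabilities per token
        top_p, top_i = probs.topk(self.top_k, dim=-1)   # keep only the 2 best-scoring experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_i[:, slot] == e              # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += top_p[mask, slot, None] * expert(x[mask])
        return out                                      # only 2 of 8 expert FFNs ran per token

layer = Top2MoELayer()
print(layer(torch.randn(4, 512)).shape)                 # torch.Size([4, 512])

Because the router is itself only a small linear layer, choosing experts is cheap compared to the savings from skipping the six unused expert blocks per token.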

Optimized

Our optimized commercial models are designed for performance and are available via our flexible deployment options.

Mistral Small

Cost-efficient reasoning for low-latency workloads.

Mistral Large

Top-tier reasoning for high-complexity tasks.

Mistral Embed

State-of-the-art semantic model for extracting representations of text extracts.
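As a minimal sketch of how a text extract can be turned into an embedding on the developer platform, using the same mistralai Python client (v0.x) shown later on this page (the example sentences are arbitrary):

import os

from mistralai.client import MistralClient

client = MistralClient(api_key=os.environ["MISTRAL_API_KEY"])

# Request embeddings for two text extracts; each entry in `data` carries a
# vector representation usable for semantic search or retrieval augmentation.
response = client.embeddings(
    model="mistral-embed",
    input=["Mistral 7B is a 7B transformer model.",
           "Mixtral 8x7B is a sparse mixture-of-experts model."],
)
print(len(response.data), len(response.data[0].embedding))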

  • Fluent in English, French, Italian, German, Spanish, and strong in code
  • Context window of 32k tokens, with excellent recall for retrieval augmentation
  • Native function calling capabilities and JSON outputs (see the sketch after this list)
  • Concise, useful, unopinionated, with fully modular moderation control
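The list above mentions native function calling and JSON outputs. Here is a minimal sketch of the JSON output mode, again assuming the mistralai Python client (v0.x): the response_format parameter asks the chat completions endpoint for a strictly JSON-formatted answer, and support may depend on the model and client version.

import os

from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

client = MistralClient(api_key=os.environ["MISTRAL_API_KEY"])

# Ask for a strictly JSON-formatted answer; response_format switches the
# chat completions endpoint into JSON output mode.
response = client.chat(
    model="mistral-large-latest",
    messages=[ChatMessage(role="user",
                          content="List three French painters as a JSON object.")],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)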

Performance first

We’re constantly innovating to provide the most capable and efficient models.

State-of-the-art technology

Mistral ranks second among all models generally available through an API, and provides top-tier reasoning capabilities.

MMLU

Measured independently

Our technology is regularly compared to the competition by independent entities, with very favorable results.

LMSys Chatbot Arena

Mistral Medium ranks second among all LLMs, according to human preferences.

Artificial Analysis

Mixtral 8x7B sets the highest bar in performance-cost efficiency.

Deploy anywhere

La Plateforme

Get started with Mistral models in a few clicks via our developer platform, hosted on Mistral’s infrastructure, and build your own applications and services. Our servers are hosted in the EU.

Cloud platforms

Access our models via your preferred cloud provider and use your cloud credits. Our open models are currently available via our cloud partners (GCP, AWS, Azure, NVIDIA).
Mistral Large is available on Azure AI.

Self-deployment

Deploy Mistral models on a virtual cloud or on-premises. Self-deployment offers more advanced levels of customisation and control, and your data stays within your walls. Try deploying our open models, and contact our team to deploy our optimized models the same way.
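As one possible route, here is a minimal self-deployment sketch using Hugging Face Transformers, assuming the open Mistral 7B Instruct weights published on the Hugging Face Hub (mistralai/Mistral-7B-Instruct-v0.2) and a machine with a suitable GPU:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the open-weights instruct checkpoint from the Hugging Face Hub.
model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Format a chat prompt with the model's chat template and generate a reply.
messages = [{"role": "user", "content": "Who is the most renowned French painter?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The same approach applies to the other open models.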

La Plateforme

Access our latest products via our developer platform, hosted in Europe

Built for developers

La Plateforme is developers’ preferred way to access all Mistral AI’s models. Hosted and served on Mistral AI infrastructure, in Europe.

  • Our best models at the best price: get access to our models at an unmatched price/performance point
  • Guides & community: use our guides and community forums to build your own applications and services
  • Secure by design: your data is encrypted at rest (AES256) and in transit (TLS 1.2+); our servers are in the EU
import os

from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-tiny"

client = MistralClient(api_key=api_key)

messages = [
    ChatMessage(role="user",
                content="Who is the most renowned French painter?")
]

# Send the chat completion request and print the model's reply.
chat_response = client.chat(model=model, messages=messages)
print(chat_response.choices[0].message.content)