![](/images/news/mistral-nemo/mistral-nemo.png)
Mistral NeMo
- July 18, 2024
- By the Mistral AI team
Mistral NeMo: our new best small model. A state-of-the-art 12B model with 128k context length, built in collaboration with NVIDIA, and released under the Apache 2.0 license.