- November 7, 2024
- Mistral AI team
This could’ve been a tweet, but… While the AI developer community has faced several API price hikes in the past few weeks, we’re working to keep bringing frontier AI to you at affordable price points. To that end, we’re introducing the batch API, available today on La Plateforme.
The batch API offers a more efficient way to process high-volume requests to Mistral models, at half the cost of synchronous API calls. If you’re building AI applications where you prioritize volume of data over immediate responses, the batch API can be an ideal solution. You simply upload your batch file, and once the requests have been processed, download and use the output file. For detailed instructions, check out our batch API documentation.
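As a rough illustration of the workflow, a batch file is typically a JSONL document with one request per line. The field names below (`custom_id`, `body`) and the upload/poll steps described in the comments follow the common batch-file convention and are assumptions for this sketch; consult the batch API documentation for the exact schema and endpoints.

```python
import json

# Hypothetical sketch: build a JSONL batch file where each line is one
# chat-completion request. Field names follow the common batch convention;
# see the batch API documentation for the authoritative schema.
reviews = [
    "The delivery was fast and the product works great.",
    "Support never answered my ticket.",
]

lines = []
for i, review in enumerate(reviews):
    lines.append(json.dumps({
        "custom_id": str(i),  # lets you match each output back to its input
        "body": {
            "messages": [
                {"role": "user",
                 "content": f"Classify the sentiment of this review: {review}"},
            ],
        },
    }))

with open("batch_input.jsonl", "w") as f:
    f.write("\n".join(lines) + "\n")

# Next steps (not shown): upload batch_input.jsonl via the files endpoint,
# create a batch job referencing the uploaded file, poll until the job
# completes, then download and parse the output file.
```

Because each line carries its own `custom_id`, results can be processed in any order once the output file is downloaded.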
Popular applications for the batch API include customer feedback and sentiment analysis, document summarization and translation in bulk, vector embedding to prepare search indexes, and data labeling.
The batch API is available for all models served on La Plateforme, and is coming soon to our cloud provider partners. Usage is limited to 1 million ongoing requests per workspace.
Please be sure to let us know what you think, and contact us for custom or private deployments.