Mistral: Mixtral 8x22B Instruct

Detailed specifications for implementing Mistral: Mixtral 8x22B Instruct in your RAG applications.

Model Overview

Released: April 17, 2024

Mixtral 8x22B Instruct is Mistral's official instruct fine-tuned version of Mixtral 8x22B. As a sparse mixture-of-experts model, it activates 39 billion parameters per token out of a total of 141 billion, which gives it a strong performance-to-cost ratio. Key features include:

  • Advanced capabilities in mathematics, coding, and logical reasoning.
  • A 64k-token context window (65,536 tokens), large enough to hold long documents or many retrieved passages in a single prompt.
  • Multilingual fluency in English, French, Italian, German, and Spanish.

For detailed performance benchmarks, see Mistral's official launch announcement.
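As a quick orientation before the specifications, the sketch below sends a single chat completion to the model. It is a minimal sketch assuming an OpenAI-compatible endpoint: the base URL, the API-key environment variable, and the model identifier `mistralai/mixtral-8x22b-instruct` are placeholder assumptions, so substitute whatever your provider actually exposes.

```python
import os
from openai import OpenAI

# Assumed OpenAI-compatible endpoint; both values below are placeholders.
client = OpenAI(
    base_url="https://api.example.com/v1",  # substitute your provider's base URL
    api_key=os.environ["API_KEY"],          # substitute your key variable
)

response = client.chat.completions.create(
    model="mistralai/mixtral-8x22b-instruct",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Summarize mixture-of-experts trade-offs."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```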

Architecture

Modality: text->text
Tokenizer: Mistral

Pricing

Prompt: $0.000002 per token ($2 per 1M tokens)
Completion: $0.000006 per token ($6 per 1M tokens)
Image: $0 (no per-image charge)
Request: $0 (no per-request charge)
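Those rates make per-request cost a simple multiplication. The helper below is a minimal sketch assuming the listed rates are USD per token; the constants are copied from the table above rather than fetched from any API.

```python
PROMPT_RATE = 0.000002      # USD per prompt token (from the table above)
COMPLETION_RATE = 0.000006  # USD per completion token (from the table above)

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-token rates."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# Example: a 2,000-token prompt with a 500-token completion costs
# 2000 * 0.000002 + 500 * 0.000006 = $0.007.
print(f"${estimate_cost(2000, 500):.6f}")
```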

Provider Details

Context Length: 65,536 tokens
Max Completion: 0 tokens (a listed value of 0 typically indicates no explicit completion cap)
Moderation: Not Enabled
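When stuffing retrieved documents into the 65,536-token window, it helps to budget tokens before sending a request. The check below is a rough sketch: the four-characters-per-token heuristic is an assumption standing in for Mistral's actual tokenizer, so use your provider's tokenizer for exact counts.

```python
CONTEXT_LENGTH = 65_536  # tokens, from the provider details above

def rough_token_count(text: str) -> int:
    """Crude estimate: ~4 characters per token (a heuristic, not Mistral's tokenizer)."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, completion_budget: int = 1024) -> bool:
    """True if the prompt plus a reserved completion budget fits the 64k window."""
    return rough_token_count(prompt) + completion_budget <= CONTEXT_LENGTH

# Example usage with a placeholder prompt string.
print(fits_in_context("..." * 10_000))  # 30,000 chars ~ 7,500 tokens -> True
```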

Ready to implement Mistral: Mixtral 8x22B Instruct?

Start building powerful RAG applications with our flexible pricing plans and developer-friendly API.
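As a concrete starting point, here is a minimal sketch of the prompt-stuffing RAG pattern this page has in mind. The `CORPUS`, `retrieve`, and `build_rag_prompt` names are hypothetical illustrations, and the word-overlap retriever is a toy stand-in for a real vector-store lookup.

```python
from typing import List

# Toy corpus; facts taken from the model overview above.
CORPUS = [
    "Mixtral 8x22B activates 39 billion of its 141 billion parameters per token.",
    "The model is fluent in English, French, Italian, German, and Spanish.",
    "Its context window is 64k (65,536) tokens.",
]

def retrieve(query: str, k: int = 2) -> List[str]:
    """Toy retriever: rank chunks by word overlap with the query.
    Replace with a real vector-store lookup in production."""
    words = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda c: -len(words & set(c.lower().split())))
    return ranked[:k]

def build_rag_prompt(query: str) -> str:
    """Concatenate retrieved context ahead of the question (prompt stuffing)."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(retrieve(query)))
    return (
        "Answer using only the context below. Cite sources by number.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

print(build_rag_prompt("How many parameters are active per token?"))
```

The assembled prompt can then be sent as the user message in the chat-completion call sketched earlier on this page.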