Mistral: Mistral 7B Instruct v0.3

Detailed specifications for implementing Mistral: Mistral 7B Instruct v0.3 in your RAG applications.

Model Overview

Released: May 27, 2024

Mistral 7B Instruct v0.3 is a 7.3 billion parameter instruction-tuned model optimized for speed and extended context length. It builds on Mistral 7B Instruct v0.2 with the following enhancements:

  • Expanded Vocabulary: the vocabulary is extended to 32,768 tokens, improving language understanding and generation.
  • v3 Tokenizer Support: the model is fully compatible with the v3 tokenizer.
  • Function Calling Capability: the model supports function calling, enabling more dynamic and interactive applications. Support may vary by provider; a usage sketch follows this list.
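
The exact function-calling interface depends on the provider. Below is a minimal sketch assuming an OpenAI-compatible chat completions endpoint; the base URL, environment variable names, model identifier, and the search_docs tool are illustrative placeholders, not values confirmed by this page.

```python
import json
import os

from openai import OpenAI  # any OpenAI-compatible client works here

# Hypothetical endpoint and credentials -- substitute your provider's values.
client = OpenAI(
    base_url=os.environ.get("PROVIDER_BASE_URL", "https://example.com/v1"),
    api_key=os.environ["PROVIDER_API_KEY"],
)

# Describe a tool the model may call; the schema follows the common
# OpenAI-style "tools" format that many providers accept.
tools = [{
    "type": "function",
    "function": {
        "name": "search_docs",
        "description": "Search the document index for passages relevant to a query.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="mistral-7b-instruct-v0.3",  # placeholder: use your provider's exact model ID
    messages=[{"role": "user", "content": "Find our refund policy."}],
    tools=tools,
)

# If the model decided to call the tool, the arguments arrive as a JSON string.
calls = response.choices[0].message.tool_calls
if calls:
    args = json.loads(calls[0].function.arguments)
    print("Model requested search_docs with:", args)
else:
    print(response.choices[0].message.content)
```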

Together, these changes make the model well suited to demanding, production-grade environments.

Architecture

Modality: text->text
Tokenizer: Mistral

Pricing

Operation     Rate (USD)
Prompt        $0.00000003 per token
Completion    $0.000000055 per token
Image         $0
Request       $0
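
As a rough illustration of the rates above (read here as USD per token for prompt and completion), the sketch below estimates the cost of a single request. The token counts are made-up example values.

```python
# Per-token rates from the pricing table above (assumed USD per token).
PROMPT_RATE = 0.00000003       # $0.03 per 1M prompt tokens
COMPLETION_RATE = 0.000000055  # $0.055 per 1M completion tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Rough cost estimate for a single request."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# Example: a RAG prompt with 6,000 retrieved + instruction tokens and a 500-token answer.
print(f"${estimate_cost(6_000, 500):.6f}")  # roughly $0.0002 for this request
```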

Provider Details

Context Length: 32,768 tokens
Max Completion: 4,096 tokens
Moderation: Not Enabled
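
A RAG pipeline has to budget retrieved passages so that the prompt plus the reserved completion stays inside the 32,768-token window. The sketch below uses a crude 4-characters-per-token approximation as a placeholder; a production pipeline should count tokens with the actual v3 tokenizer.

```python
# Rough context budgeting for RAG: the 32,768-token window must hold the
# system prompt, the user question, and retrieved passages, while leaving
# room for the 4,096-token completion cap.

CONTEXT_WINDOW = 32_768
MAX_COMPLETION = 4_096

def approx_tokens(text: str) -> int:
    """Crude token estimate (~4 characters per token); swap in the real
    v3 tokenizer for accurate accounting."""
    return max(1, len(text) // 4)

def fit_passages(question: str, system_prompt: str, passages: list[str]) -> list[str]:
    """Keep adding retrieved passages until the prompt budget is exhausted."""
    budget = CONTEXT_WINDOW - MAX_COMPLETION
    used = approx_tokens(system_prompt) + approx_tokens(question)
    kept = []
    for passage in passages:
        cost = approx_tokens(passage)
        if used + cost > budget:
            break
        kept.append(passage)
        used += cost
    return kept

docs = fit_passages(
    "What is our refund policy?",
    "Answer using only the provided context.",
    ["passage one ...", "passage two ...", "passage three ..."],
)
print(f"Kept {len(docs)} passages within the prompt budget.")
```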

Ready to implement Mistral: Mistral 7B Instruct v0.3?

Start building powerful RAG applications with our flexible pricing plans and developer-friendly API.
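
As a starting point, here is a minimal sketch of a RAG-style request against an OpenAI-compatible chat completions endpoint. The base URL, environment variable names, model identifier, and example passages are placeholders rather than values taken from this page.

```python
import os

from openai import OpenAI  # any OpenAI-compatible client

client = OpenAI(
    base_url=os.environ.get("PROVIDER_BASE_URL", "https://example.com/v1"),
    api_key=os.environ["PROVIDER_API_KEY"],
)

# Retrieved chunks would normally come from your vector store.
retrieved = [
    "Refunds are issued within 14 days of purchase.",
    "Digital goods are refundable only if not downloaded.",
]
context = "\n\n".join(retrieved)

response = client.chat.completions.create(
    model="mistral-7b-instruct-v0.3",  # placeholder: use your provider's exact model ID
    max_tokens=512,                    # stays well under the 4,096-token completion cap
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: What is the refund policy?"},
    ],
)
print(response.choices[0].message.content)
```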