Mistral: Mixtral 8x7B Instruct (nitro)
Detailed specifications for implementing Mistral: Mixtral 8x7B Instruct (nitro) in your RAG applications.
Model Overview
Released: December 10, 2023
Mixtral 8x7B Instruct is a Sparse Mixture of Experts (MoE) generative model developed by Mistral AI and fine-tuned for chat and instruction-based applications. Each layer combines 8 expert feed-forward networks, for a total of 47 billion parameters, and a router activates only a subset of those experts per token. The result is strong performance and versatility on sophisticated conversational and instructional tasks.
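To make the routing idea concrete, here is a toy sketch of top-2 expert selection, the scheme described in the Mixtral paper: every token is scored against all 8 experts, but only the 2 best-scoring feed-forward networks actually run. This is illustrative NumPy, not Mistral's implementation.

```python
import numpy as np

def top2_moe_layer(x, gate_w, experts):
    """Toy top-2 Mixture-of-Experts routing for a single token vector x.

    gate_w: (d_model, n_experts) gating weights; experts: list of callables.
    Illustrative only -- not Mistral's actual implementation.
    """
    logits = x @ gate_w              # score each expert for this token
    top2 = np.argsort(logits)[-2:]   # indices of the 2 best-scoring experts
    weights = np.exp(logits[top2])
    weights /= weights.sum()         # softmax over the selected pair
    # Only the 2 chosen feed-forward experts run; the other 6 stay idle,
    # which is why far fewer than the full 47B parameters are active per token.
    return sum(w * experts[i](x) for w, i in zip(weights, top2))

# Example: 8 random linear "experts" over a 16-dim toy embedding
rng = np.random.default_rng(0)
d = 16
experts = [lambda x, W=rng.normal(size=(d, d)): x @ W for _ in range(8)]
gate_w = rng.normal(size=(d, 8))
y = top2_moe_layer(rng.normal(size=d), gate_w, experts)
print(y.shape)  # (16,)
```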
Architecture
- Modality: text->text
- Tokenizer: Mistral
Pricing
| Operation | Rate (USD) |
|---|---|
| Prompt | $0.00000054 per token |
| Completion | $0.00000054 per token |
| Image | $0 per image |
| Request | $0 per request |
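Given the per-token rates above, a small helper makes it easy to estimate the cost of a request. The rates are taken from the table and assumed to be USD per token.

```python
# Rates from the pricing table above, assumed to be USD per token.
PROMPT_RATE = 0.00000054
COMPLETION_RATE = 0.00000054

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed per-token rates."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# A 2,000-token RAG prompt with a 500-token answer:
print(f"${estimate_cost(2000, 500):.6f}")  # $0.001350
```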
Provider Details
- Context Length: 32,768 tokens
- Max Completion: 0 tokens
- Moderation: Not Enabled
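With a 32,768-token context window, a RAG pipeline has to budget how many retrieved chunks it packs into each prompt. The sketch below greedily keeps the highest-ranked chunks that fit; it uses a crude characters-divided-by-4 token estimate, which is an assumption, so use the actual Mistral tokenizer for exact counts.

```python
CONTEXT_LENGTH = 32_768  # from the provider details above

def rough_token_count(text: str) -> int:
    """Crude estimate (~4 characters per token); use the Mistral
    tokenizer for exact counts."""
    return max(1, len(text) // 4)

def fit_chunks(system_prompt: str, question: str, chunks: list[str],
               reserve_for_answer: int = 1024) -> list[str]:
    """Greedily keep retrieved chunks (highest-ranked first) until the
    estimated prompt would exceed the context window, leaving room
    for the model's completion."""
    budget = CONTEXT_LENGTH - reserve_for_answer
    used = rough_token_count(system_prompt) + rough_token_count(question)
    kept = []
    for chunk in chunks:  # assumes chunks are sorted by relevance
        cost = rough_token_count(chunk)
        if used + cost > budget:
            break
        kept.append(chunk)
        used += cost
    return kept
```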
Ready to implement Mistral: Mixtral 8x7B Instruct (nitro)?
Start building powerful RAG applications with our flexible pricing plans and developer-friendly API.
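As a starting point, a minimal RAG-style request against an OpenAI-compatible chat-completions endpoint might look like the following. The endpoint URL, model identifier, and API_KEY environment variable are placeholders; substitute the values from your provider's documentation.

```python
import os
import requests

# Placeholder endpoint and model slug -- replace with your provider's values.
API_URL = "https://api.example.com/v1/chat/completions"
MODEL = "mistralai/mixtral-8x7b-instruct:nitro"

def ask(question: str, context: str) -> str:
    """Send one RAG-style request: retrieved context plus the user question."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
        json={
            "model": MODEL,
            "messages": [
                {"role": "system",
                 "content": f"Answer using only this context:\n{context}"},
                {"role": "user", "content": question},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```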