Mistral: Mixtral 8x7B Instruct

Detailed specifications for implementing Mistral: Mixtral 8x7B Instruct in your RAG applications.

Model Overview

Released: December 10, 2023

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts (SMoE) model from Mistral AI, fine-tuned for chat and instruction-following applications. Each layer contains 8 expert feed-forward networks, and a learned router activates 2 of them per token, so the model holds roughly 47 billion parameters in total while using only about 13 billion per token at inference time. This design delivers strong performance and versatility across a wide range of conversational and instructional tasks at a fraction of the compute cost of a comparably sized dense model.
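To make the routing concrete, here is a minimal toy sketch of top-2 expert gating in plain NumPy. The dimensions, weights, and single-matrix "experts" are illustrative only, not the model's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 8   # experts per layer, as in Mixtral 8x7B
TOP_K = 2       # experts activated per token
D_MODEL = 16    # toy hidden size for illustration

# Toy "expert" feed-forward networks: one weight matrix each.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.1 for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.1

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-2 experts and mix the outputs."""
    logits = x @ router_w              # router scores, shape (N_EXPERTS,)
    top = np.argsort(logits)[-TOP_K:]  # indices of the 2 highest-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()           # softmax over the selected experts only
    # Only the chosen experts run; in the real model this is why roughly
    # 13B of the 47B parameters are active per token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
print(moe_layer(token).shape)  # (16,)
```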

Architecture

Modality: text->text
Tokenizer: Mistral

Pricing

Prompt: $0.00000024 per token
Completion: $0.00000024 per token
Image: $0
Request: $0
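At these per-token rates, per-call cost is simple arithmetic. A short sketch, assuming the rates above are USD per token:

```python
# Rates from the pricing section above, in USD per token.
PROMPT_RATE = 0.00000024
COMPLETION_RATE = 0.00000024

def request_cost_usd(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of a single call at the listed per-token rates."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# Example: a RAG call with a 3,000-token prompt and a 500-token answer.
print(f"${request_cost_usd(3_000, 500):.6f}")  # $0.000840
```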

Provider Details

Context Length: 32,768 tokens
Max Completion: 4,096 tokens
Moderation: Not enabled
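For RAG prompts, retrieved chunks must fit within the 32,768-token context window while leaving room for the completion. The sketch below packs chunks under that budget; `estimate_tokens` is a rough 4-characters-per-token heuristic, not the actual Mistral tokenizer, so swap in a real tokenizer for production use:

```python
CONTEXT_LIMIT = 32_768      # model context window, in tokens
COMPLETION_BUDGET = 4_096   # reserve the full max-completion allowance

def estimate_tokens(text: str) -> int:
    # Crude heuristic (~4 characters per token); replace with a real tokenizer.
    return max(1, len(text) // 4)

def pack_context(system_prompt: str, question: str, chunks: list[str]) -> list[str]:
    """Keep adding retrieved chunks until the prompt token budget is exhausted."""
    budget = CONTEXT_LIMIT - COMPLETION_BUDGET
    budget -= estimate_tokens(system_prompt) + estimate_tokens(question)
    packed = []
    for chunk in chunks:  # assumed sorted by relevance, best first
        cost = estimate_tokens(chunk)
        if cost > budget:
            break
        packed.append(chunk)
        budget -= cost
    return packed
```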

Ready to implement Mistral: Mixtral 8x7B Instruct?

Start building powerful RAG applications with our flexible pricing plans and developer-friendly API.
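As a starting point, here is a minimal sketch of a chat completion call against an OpenAI-compatible endpoint. The base URL, model slug, and header format are assumptions; substitute the values from your provider dashboard:

```python
import requests

# Hypothetical endpoint and model slug; replace with your provider's values.
API_URL = "https://api.example.com/v1/chat/completions"  # placeholder
API_KEY = "YOUR_API_KEY"

payload = {
    "model": "mistralai/mixtral-8x7b-instruct",  # assumed slug; check your catalog
    "messages": [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": "Context:\n...retrieved chunks...\n\nQuestion: ..."},
    ],
    "max_tokens": 1024,  # must stay within the 4,096-token completion cap
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```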