Mistral: Mistral Nemo

Detailed specifications for implementing Mistral: Mistral Nemo in your RAG applications.

Model Overview

Released: July 19, 2024

Mistral Nemo is a 12-billion-parameter model developed by Mistral in collaboration with NVIDIA, with a 128,000-token context length. It is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi, and it supports function calling. The model is released under the permissive Apache 2.0 license, making it accessible to both developers and researchers.
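Because the model supports function calling, a request can include tool definitions alongside the chat messages. A minimal sketch, assuming an OpenAI-compatible chat-completions payload; the model identifier, the `get_weather` tool, and its parameters are illustrative assumptions, not part of this spec:

```python
# Hedged sketch: a function-calling request payload for Mistral Nemo,
# assuming an OpenAI-compatible chat-completions schema.
# The get_weather tool and its fields are hypothetical examples.
payload = {
    "model": "mistralai/mistral-nemo",  # assumed identifier
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string"}
                    },
                    "required": ["city"],
                },
            },
        }
    ],
}
```

The model may then respond with a structured tool call naming `get_weather`, which your application executes before returning the result in a follow-up message.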

Architecture

Modality: text -> text
Tokenizer: Mistral

Pricing

Operation     Rate (USD)
Prompt        $0.00000015 per token
Completion    $0.00000015 per token
Image         $0
Request       $0
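At these rates, the cost of a call is linear in the token counts. A minimal sketch of the arithmetic, using the per-token rates from the table above (the token counts in the example are hypothetical):

```python
# Per-token rates from the pricing table above (USD).
PROMPT_RATE = 0.00000015
COMPLETION_RATE = 0.00000015

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# e.g. a 10,000-token prompt with a 1,000-token completion:
cost = estimate_cost(10_000, 1_000)  # ~0.00165 USD
```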

Provider Details

Context Length: 128,000 tokens
Max Completion: 0 tokens
Moderation: Not Enabled
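For RAG, the 128,000-token context leaves ample room for retrieved passages, but prompts still need to be packed within that limit. A minimal sketch of assembling a prompt from retrieved chunks under a token budget, assuming a rough 4-characters-per-token heuristic; the helper names and the headroom figure are illustrative, and real usage should count tokens with the model's actual Mistral tokenizer:

```python
CONTEXT_LENGTH = 128_000  # tokens, per the provider details above

def rough_token_count(text: str) -> int:
    """Crude heuristic: ~4 characters per token. For accurate packing,
    count with the model's actual (Mistral) tokenizer instead."""
    return max(1, len(text) // 4)

def build_rag_prompt(question: str, chunks: list[str],
                     budget: int = CONTEXT_LENGTH - 4_000) -> str:
    """Pack retrieved chunks into the prompt until the token budget
    (context length minus headroom for the completion) is spent."""
    parts = [f"Question: {question}\n\nContext:"]
    used = rough_token_count(parts[0])
    for chunk in chunks:
        cost = rough_token_count(chunk)
        if used + cost > budget:
            break  # drop remaining chunks rather than overflow the context
        parts.append(chunk)
        used += cost
    return "\n\n".join(parts)
```

The `- 4_000` headroom is an arbitrary placeholder for the completion you expect back; tune it to your application.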

Ready to implement Mistral: Mistral Nemo?

Start building powerful RAG applications with our flexible pricing plans and developer-friendly API.