Mistral: Codestral Mamba
Detailed specifications for implementing Mistral: Codestral Mamba in your RAG applications.
Model Overview
Released: July 19, 2024
Introducing a cutting-edge 7.3-billion-parameter Mamba-based model, purpose-built for code generation and reasoning tasks. Unlike transformer architectures, its Mamba design offers linear-time inference, allowing it to process theoretically unbounded sequence lengths, and it supports an expansive 256k-token context window for complex, long-context tasks. Optimized for speed, it delivers rapid responses, making it well suited to boosting coding productivity.
In benchmarks, it performs on par with state-of-the-art transformer models on both code-related and reasoning tasks, offering strong accuracy and efficiency. It is released under the Apache 2.0 license, granting users the freedom to use, modify, and distribute it without restriction. Whether you're a developer, researcher, or enthusiast, this model is designed to elevate your workflow.
Architecture
- Modality
- text → text
- Tokenizer
- Mistral
Pricing
| Operation | Rate (USD) |
|---|---|
| Prompt | $0.00000025 / token |
| Completion | $0.00000025 / token |
| Image | $0 |
| Request | $0 |
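To put the per-token rates above in concrete terms, here is a minimal cost-estimation sketch. It assumes the prompt and completion rates are quoted in USD per token (as the table suggests) and that the image and request surcharges are zero; the function name is illustrative, not part of any API.

```python
# Per-token rates from the pricing table above (assumed USD per token).
PROMPT_RATE = 0.00000025      # USD per prompt token
COMPLETION_RATE = 0.00000025  # USD per completion token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# A 10,000-token prompt with a 2,000-token completion:
print(f"${estimate_cost(10_000, 2_000):.4f}")  # $0.0030
```

At these rates, even a request that fills most of the 256k context costs only a few cents, which is part of what makes the model practical for long-context RAG.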
Provider Details
- Context Length
- 256,000 tokens
- Max Completion
- No fixed limit listed (reported as 0 tokens)
- Moderation
- Not Enabled
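For a RAG application, a typical pattern is to place retrieved chunks ahead of the user's question in an OpenAI-style chat-completion payload. The sketch below builds such a payload; the model identifier `mistralai/codestral-mamba`, the system-prompt wording, and the helper name are assumptions for illustration — check your provider's documentation for the exact model id and endpoint.

```python
CONTEXT_LENGTH = 256_000  # token context window, per the provider details above

def build_request(question: str, retrieved_chunks: list[str],
                  max_tokens: int = 1024) -> dict:
    """Assemble an OpenAI-style chat-completion payload that places
    retrieved context ahead of the user question (a basic RAG pattern)."""
    context = "\n\n".join(retrieved_chunks)
    return {
        "model": "mistralai/codestral-mamba",  # assumed model id
        "max_tokens": max_tokens,
        "messages": [
            {"role": "system",
             "content": "Answer using only the provided context.\n\n" + context},
            {"role": "user", "content": question},
        ],
    }

payload = build_request("How do I paginate results?",
                        ["Docs chunk 1", "Docs chunk 2"])
print(payload["model"])
```

This payload can then be sent to any OpenAI-compatible chat-completions endpoint; the large context window means many retrieved chunks can usually be included without aggressive truncation.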
Ready to implement Mistral: Codestral Mamba?
Start building powerful RAG applications with our flexible pricing plans and developer-friendly API.