AionLabs Models
Explore the AionLabs language and embedding models available through our OpenAI Assistants API-compatible service.
AionLabs: Aion-1.0
- Context Length:
- 131,072 tokens
- Architecture:
- text->text
- Max Output:
- 32,768 tokens
Pricing:
Aion-1.0 is a multi-model system designed for high performance across a range of tasks, including reasoning and coding. It is built on DeepSeek-R1 and augmented with additional models and techniques such as Tree of Thoughts (ToT) and Mixture of Experts (MoE). It is AionLabs' most powerful reasoning model.
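Since the service is OpenAI API-compatible, a request can be sketched as a standard chat-completions payload. The sketch below is a minimal illustration: the model identifier is a hypothetical placeholder (check the service documentation for the actual value), and the cap reflects the 32,768-token max output listed above.

```python
import json

def build_chat_request(prompt: str, max_tokens: int = 32768) -> dict:
    """Build an OpenAI-style chat-completions payload for Aion-1.0.

    Aion-1.0 accepts up to 131,072 tokens of context and emits at most
    32,768 output tokens, so max_tokens is clamped to that ceiling.
    """
    return {
        "model": "aion-labs/aion-1.0",  # hypothetical identifier, not confirmed
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": min(max_tokens, 32768),  # model's documented output limit
    }

payload = build_chat_request("Summarize the Tree of Thoughts technique.")
print(json.dumps(payload, indent=2))
```

The same payload shape works for the other models on this page; only the model identifier and token limits change.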
AionLabs: Aion-1.0-Mini
- Context Length:
- 131,072 tokens
- Architecture:
- text->text
- Max Output:
- 32,768 tokens
Pricing:
Aion-1.0-Mini is a 32B-parameter model distilled from DeepSeek-R1, designed for strong performance in reasoning domains such as mathematics, coding, and logic. It is a modified variant of a FuseAI model that outperforms R1-Distill-Qwen-32B and R1-Distill-Llama-70B; benchmark results, independently replicated for verification, are available on its Hugging Face page.
AionLabs: Aion-RP 1.0 (8B)
- Context Length:
- 32,768 tokens
- Architecture:
- text->text
- Max Output:
- 32,768 tokens
Pricing:
Aion-RP-Llama-3.1-8B ranks highest in the character-evaluation portion of the RPBench-Auto benchmark, a roleplaying-specific variant of Arena-Hard-Auto in which LLMs evaluate each other's responses. It is a fine-tuned base model rather than an instruct model, designed to produce more natural and varied writing.
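Because Aion-RP is a base model rather than an instruct model, a raw text-completion prompt is a more natural fit than a chat template. The sketch below illustrates one way to assemble such a payload under stated assumptions: the model identifier, the `User:`/`Narrator:` turn markers, and the stop sequence are all hypothetical placeholders, not confirmed by this page.

```python
def build_completion_request(character_card: str, dialogue: str,
                             max_tokens: int = 512) -> dict:
    """Sketch an OpenAI-style text-completion payload for a base model.

    A base model continues raw text, so the roleplay context is passed as
    a single prompt string rather than as structured chat messages.
    """
    prompt = character_card.rstrip() + "\n\n" + dialogue
    return {
        "model": "aion-labs/aion-rp-llama-3.1-8b",  # hypothetical identifier
        "prompt": prompt,
        "max_tokens": min(max_tokens, 32768),  # model's documented output limit
        "stop": ["\nUser:"],  # illustrative: cut generation at the next user turn
    }

req = build_completion_request(
    "Mira is a sharp-tongued tavern keeper.",  # example character card
    "User: Any rooms free tonight?\nMira:",
)
```

The stop sequence keeps the model from writing the user's next turn itself, a common pattern when prompting base models for dialogue.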
Ready to build with AionLabs?
Start using these powerful models in your applications with our flexible pricing plans.