ArliAI Models
Explore the ArliAI language and embedding models available through our OpenAI Assistants API-compatible service.
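Because the service is OpenAI-compatible, requests follow the familiar OpenAI request shape. A minimal sketch of building a chat-style request body is below; note that the base URL, endpoint style, and exact model ID are illustrative assumptions, not documented values, so check your dashboard for the real ones:

```python
import json

# Assumption: placeholder endpoint and model ID -- the real values may differ.
BASE_URL = "https://api.example.com/v1"
MODEL_ID = "arliai/qwq-32b-arliai-rpr-v1"

def build_chat_request(messages, max_tokens=512):
    """Build an OpenAI-style chat request body for the ArliAI service."""
    return {
        "model": MODEL_ID,
        "messages": messages,
        "max_tokens": max_tokens,
    }

payload = build_chat_request(
    [{"role": "user", "content": "Introduce yourself in character."}]
)
# This dict would be POSTed as JSON with your API key in the
# Authorization header; here we just print it.
print(json.dumps(payload, indent=2))
```

Any OpenAI-compatible client library should also work by pointing its base URL at the service.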
ArliAI: QwQ 32B RpR v1 (free)
- Context Length: 32,768 tokens
- Architecture: text->text
QwQ-32B-ArliAI-RpR-v1 is a 32B-parameter model fine-tuned from Qwen/QwQ-32B on a curated creative-writing and roleplay dataset originally developed for the RPMax series. It is designed to maintain coherence and reasoning across long multi-turn conversations by introducing explicit reasoning steps per dialogue turn, generated and refined with the base model itself.
The model was trained with RS-QLORA+ at an 8K sequence length and supports context windows up to 128K (with practical performance around 32K). It is optimized for creative roleplay and dialogue generation, with an emphasis on minimizing cross-context repetition while preserving stylistic diversity.
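Given the gap between the advertised 128K window and the practical ~32K limit, long roleplay sessions benefit from trimming old turns before each request. A rough sketch, using a crude 4-characters-per-token estimate rather than the model's real tokenizer:

```python
# Practical context guard: drop the oldest turns until the estimated
# token count fits within the ~32K window noted above.
PRACTICAL_CONTEXT = 32_768

def estimate_tokens(text):
    # Crude heuristic (~4 chars per token); not the model's tokenizer.
    return max(1, len(text) // 4)

def trim_history(messages, reserve_for_output=1024):
    """Keep the newest messages whose estimated tokens fit the budget."""
    budget = PRACTICAL_CONTEXT - reserve_for_output
    kept, total = [], 0
    # Walk newest-to-oldest, keeping turns while they still fit.
    for msg in reversed(messages):
        cost = estimate_tokens(msg["content"])
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))
```

For production use, swap the heuristic for a real tokenizer count so the estimate matches what the server actually bills against the window.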
ArliAI: QwQ 32B RpR v1
- Context Length: 32,768 tokens
- Architecture: text->text
- Max Output: 32,768 tokens
QwQ-32B-ArliAI-RpR-v1 is a 32B-parameter model fine-tuned from Qwen/QwQ-32B on a curated creative-writing and roleplay dataset originally developed for the RPMax series. It is designed to maintain coherence and reasoning across long multi-turn conversations by introducing explicit reasoning steps per dialogue turn, generated and refined with the base model itself.
The model was trained with RS-QLORA+ at an 8K sequence length and supports context windows up to 128K (with practical performance around 32K). It is optimized for creative roleplay and dialogue generation, with an emphasis on minimizing cross-context repetition while preserving stylistic diversity.
Ready to build with ArliAI?
Start using these powerful models in your applications with our flexible pricing plans.