The Model Tier Selector offers developers four levels of high-performance model pools: Lite, Efficient, Performance, and Auto. Each tier strikes a distinct balance between capability and credit consumption. This allows you to precisely match the right level of AI power to your specific R&D scenario and task complexity.
Just as smart vehicles offer driving modes such as Eco, Comfort, and Sport to adapt to different road conditions, the Model Tier Selector lets you seamlessly shift the balance between cost, efficiency, and output quality. This ensures every task runs in its optimal mode: predictable in cost and reliable in results.
Choosing the Right Model Tier
Qoder’s Model Tier Selector includes four preset tiers, each optimized for a specific trade-off between performance and cost.
You can choose the appropriate tier based on the following guidance.
Auto (Smart Routing)
- Description: Powered by Qoder’s adaptive optimization algorithm, Auto intelligently selects the optimal model for each task and scenario, dynamically balancing system load to deliver consistent performance and stability.
- Recommended for: Most everyday development scenarios. Ideal as the default option.
- Credit usage: ~1.0×, saving around 10% compared to the Performance tier.
Performance
- Description: Uses the best available model to ensure peak output quality and performance.
- Recommended for: Complex or high-stakes tasks such as core feature implementation, system architecture design, deep debugging, and code refactoring.
- Credit usage: ~1.1×; higher credit usage because it prioritizes maximum output quality.
Efficient
- Description: Selects highly cost-effective models that maintain core quality while significantly reducing credit consumption.
- Recommended for: Routine development tasks such as code generation, unit test creation, daily Q&A, or design documentation.
- Credit usage: ~0.3×, saving over 75% compared to the Performance tier.
Lite
- Description: Provides access to a basic model.
- Recommended for: Basic R&D work such as simple logic implementation and quick Q&A.
- Credit usage: 0× (Free).
- Limitations: Does not support multimodal input and may experience delays during peak hours.
Credit Consumption Comparison
The table below compares the Credit cost of each tier when performing AI coding tasks of similar complexity, using a specific example task.
- Credit Consumption Multiplier: Represents the rate at which Credits are consumed by each tier to complete the same task.
- Single Example Task Consumption: Simulates the average Credit amount required to complete a moderately complex AI coding task in each tier.
| Tier | Credit Consumption Multiplier | Single Example Task Consumption |
|---|---|---|
| Auto | ~1.0x | 10 Credits |
| Performance | ~1.1x | 11 Credits |
| Efficient | ~0.3x | 3 Credits |
| Lite | Free | 0 Credits |
Note: Due to variations in tasks and codebases, actual consumption multipliers may differ.
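For a rough sense of how the multipliers translate into Credit costs, here is a minimal sketch. The tier names and multipliers mirror the table above; the 10-Credit baseline and the `estimate_credits` helper are hypothetical, for illustration only, and do not reflect actual billing rules.

```python
# Hypothetical illustration: scaling an example task's baseline cost by the
# tier multipliers from the table above. Not an actual billing formula.

TIER_MULTIPLIERS = {
    "Auto": 1.0,         # ~1.0x
    "Performance": 1.1,  # ~1.1x
    "Efficient": 0.3,    # ~0.3x
    "Lite": 0.0,         # currently free
}

def estimate_credits(baseline_credits: float, tier: str) -> float:
    """Estimate a task's Credit cost on a given tier relative to the ~1.0x baseline."""
    return baseline_credits * TIER_MULTIPLIERS[tier]

baseline = 10  # example: a moderately complex task at the Auto (~1.0x) baseline
for tier in TIER_MULTIPLIERS:
    print(f"{tier}: ~{estimate_credits(baseline, tier):g} Credits")
# Auto: ~10, Performance: ~11, Efficient: ~3, Lite: ~0
```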
How to Switch Between Model Tiers
- Open the Model Selector: In the AI Chat panel, click the “Select Model” dropdown in the input box. It displays your current tier by default (e.g., “Auto”).
- Choose a Model Tier: Available options are Auto, Performance, Efficient, and Lite.
- Apply Instantly: The selected model tier takes effect immediately for all subsequent messages in the current conversation. Your chat context remains intact, so there is no need to restart.
Note: Only the Lite tier will be available if you run out of Credits. Upgrade or acquire more Credits to unlock other tiers.
FAQ
- Can I switch tiers within the same conversation?
  Yes. You can switch tiers at any time using the “Select Model” dropdown in the AI Chat panel. The new selection takes effect instantly for all subsequent messages, allowing you to adjust dynamically as your task evolves.
- How is credit consumption calculated for each tier?
  Your Credit consumption is still determined by the number of tokens processed per request and the per-token rate of the model you’re using. For a comparison of Credit usage across model tiers on the same task, see the examples in this guide and the rough sketch after this FAQ. For full billing details, please refer to our official documentation.
- Is the Lite tier completely free? Are there any limitations?
  Yes, the Lite tier is currently free, but responses may be slower during high-traffic periods, and it does not yet support multimodal input.
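As mentioned in the FAQ above, per-request consumption comes from the number of tokens processed and the model’s per-token rate. The sketch below illustrates only that relationship; the rate values are invented placeholders, not Qoder’s actual pricing.

```python
# Hypothetical sketch of the token-based view: Credits per request is roughly
# the number of tokens processed times the per-token rate of the selected model.
# The per-token rates below are invented placeholders, not real pricing.

def estimate_request_credits(tokens_processed: int, per_token_rate: float) -> float:
    return tokens_processed * per_token_rate

# The same 20,000-token request under two illustrative per-token rates:
print(f"{estimate_request_credits(20_000, 0.0005):g} Credits")   # 10 Credits
print(f"{estimate_request_credits(20_000, 0.00015):g} Credits")  # 3 Credits
```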