Supported Models
Foundation models supported on the Decisional platform
Model | Provider | Best for |
---|---|---|
auto | Decisional | Automatically selects the most appropriate model based on query requirements |
claude-3.5-sonnet | Anthropic | Conversational tasks, summarization, and natural language generation; scenarios requiring balanced performance and clarity. |
llama-v3-70b | Meta | Fastest model, with strong performance for high-volume queries; ideal for applications needing lower latency and an open-source foundation (hosted on Groq). |
llama-4-scout | Meta | Efficient processing while maintaining strong performance; ideal for resource-constrained deployments where quality cannot be compromised. |
llama-4-maverick | Meta | High-performance model that outperforms GPT-4o and Gemini 2.0 on various benchmarks; best for complex tasks requiring advanced capabilities, with 400B total parameters (17B active). Requires more than a single GPU. |
gemini-2.5 | Google | Long-context processing with multimodal capabilities; excels at complex reasoning tasks with a massive 1 million token context window (expanding to 2 million). Particularly strong at mathematical reasoning, scientific problem-solving, and advanced coding applications where deep reasoning is required. |
o1 | OpenAI | Deep reasoning for highly complex workflows or queries; when combined with `advanced_reasoning`, it delivers the most thorough analysis. |
o3-mini | OpenAI | Lightweight, cost-effective model for simpler queries; good for prototyping or quick interactions where complexity is minimal. |
gpt-4.1 | OpenAI | Simple tasks requiring basic language understanding; strong general-purpose performance. |
Decisional offers multiple AI models with varying capabilities and performance profiles. You can rely on the default `auto` setting or explicitly choose a model to tailor performance, speed, and cost to your requirements.
Default
- Let Decisional automatically choose the best-suited model based on your query. This is recommended if you’re unsure which model fits your use case or want a balanced approach without manually tuning.
Warning
Combining reasoning models with advanced reasoning (`model = "o1"`, `advanced_reasoning = true`) can significantly slow down response times. Use this combination only when you need the deepest analysis possible.
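As a minimal sketch of what the warning describes, the payload below pairs the `o1` model with the `advanced_reasoning` flag. Only the `model` and `advanced_reasoning` parameter names come from this page; the overall payload shape is an assumption.

```python
import json

# Hypothetical query payload: o1 plus advanced reasoning is the
# slowest but most thorough combination described above.
payload = {
    "query": "Summarize the key risks in the attached contracts.",
    "model": "o1",                # deep-reasoning model
    "advanced_reasoning": True,   # enables the deepest analysis
}

print(json.dumps(payload, indent=2))
```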
Selecting a Model
You can specify which model to use when making a query by including the `model` parameter in your request:
Query API Example
Workflow API Example
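As a minimal sketch of such a request (the endpoint URL, authentication header, and helper function are hypothetical; only the `model` parameter is documented on this page):

```python
import json
import urllib.request

# Hypothetical endpoint; substitute the real Decisional API URL.
API_URL = "https://api.decisional.example/v1/query"

def build_query(question: str, model: str = "auto") -> dict:
    """Build a query payload; `model` selects one of the supported models."""
    return {"query": question, "model": model}

payload = build_query("What drove Q3 revenue growth?", model="claude-3.5-sonnet")

# Sending the request would look roughly like this (not executed here):
# req = urllib.request.Request(
#     API_URL,
#     data=json.dumps(payload).encode(),
#     headers={"Authorization": "Bearer <API_KEY>",
#              "Content-Type": "application/json"},
# )
# resp = urllib.request.urlopen(req)

print(json.dumps(payload))
```

Omitting `model` (or passing `"auto"`) leaves model selection to Decisional, as described under Default above.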