Supported Models
Foundation models supported on the Decisional platform
Model | Provider | Best for |
---|---|---|
`auto` | Decisional | Automatically selects the most appropriate model based on query requirements. |
`claude-3.5-sonnet` | Anthropic | Conversational tasks, summarization, and natural language generation. Scenarios requiring balanced performance and clarity. |
`llama-v3-70b` | Meta | Fastest model, with strong performance for high-volume queries. Ideal for applications needing lower latency and an open-source foundation (hosted on Groq). |
`o1` | OpenAI | Deep reasoning for highly complex workflows or queries. Combined with `advanced_reasoning`, it delivers the most thorough analysis. |
`o3-mini` | OpenAI | Lightweight, cost-effective model for simpler queries. Good for prototyping or quick interactions where complexity is minimal. |
`gpt-4o` | OpenAI | Simple tasks requiring basic language understanding. Strong general-purpose performance. |
Decisional offers multiple AI models with varying capabilities and performance profiles. You can rely on the default `auto` setting or explicitly choose a model to tailor performance, speed, and cost to your requirements.
Default: `auto`
Let Decisional automatically choose the best-suited model based on your query. This is recommended if you're unsure which model fits your use case or want a balanced approach without manual tuning.
Warning
Combining reasoning models with advanced reasoning (`model = "o1"`, `advanced_reasoning = true`) can significantly slow down response times. Use this combination only when you need the deepest analysis possible.
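As a rough illustration of the combination above, here is a minimal sketch of building a request body with both settings enabled. The field names follow the parameters named in this page (`model`, `advanced_reasoning`); the `query` field and the overall payload shape are assumptions, not confirmed Decisional API details.

```python
import json

# Hypothetical request payload: "query" and the payload shape are assumptions.
payload = {
    "model": "o1",               # deep-reasoning model
    "advanced_reasoning": True,  # most thorough, but slowest, analysis
    "query": "Compare Q3 revenue drivers across all regions.",
}

# Serialize the body as it might be sent to the Query API.
body = json.dumps(payload)
print(body)
```

Expect noticeably longer response times with this configuration, so reserve it for queries where depth matters more than latency.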
Selecting a Model
You can specify which model to use when making a query by including the `model` parameter in your request:
Query API Example
Workflow API Example
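The original code for these tabs is not shown here; as a stand-in, the following is a minimal Python sketch of a Query API call using only the standard library. The endpoint URL, header names, and payload schema are illustrative assumptions, not documented Decisional endpoints.

```python
import json
import urllib.request

# Hypothetical payload: only "model" is documented on this page.
payload = {
    "model": "llama-v3-70b",  # explicit choice; omit or use "auto" for the default
    "query": "Summarize last week's support tickets.",
}

# Placeholder URL and auth header; substitute your real endpoint and key.
req = urllib.request.Request(
    "https://api.decisional.example/v1/query",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <YOUR_API_KEY>",
    },
    method="POST",
)

# urllib.request.urlopen(req) would send the request; it is left unsent here.
print(req.get_method(), req.get_full_url())
```

A workflow request would presumably carry the same `model` parameter in its own payload; consult the platform's API reference for the exact endpoints and fields.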