| Model | Provider | Best for |
| --- | --- | --- |
| auto | Decisional | Automatically selects the most appropriate model based on query requirements. |
| claude-3.5-sonnet | Anthropic | Conversational tasks, summarization, and natural language generation; scenarios requiring balanced performance and clarity. |
| llama-v3-70b | Meta | Fastest model, with strong performance for high-volume queries; ideal for applications needing low latency and an open-source foundation (hosted on Groq). |
| llama-4-scout | Meta | Efficient processing while maintaining strong performance; ideal for deployments where resources are limited but quality can't be compromised. |
| llama-4-maverick | Meta | High-performance model that outperforms GPT-4o and Gemini 2.0 on various benchmarks; best for complex tasks requiring advanced capabilities, with 400B total parameters (17B active). Requires more than a single GPU. |
| gemini-2.5 | Google | Long-context processing with multimodal capabilities; excels at complex reasoning tasks with a 1 million token context window (expanding to 2 million). Particularly strong at mathematical reasoning, scientific problem-solving, and advanced coding where deep reasoning is required. |
| o1 | OpenAI | Deep reasoning for highly complex workflows or queries; combined with advanced_reasoning, it delivers the most thorough analysis. |
| o3-mini | OpenAI | Lightweight, cost-effective model for simpler queries; good for prototyping or quick interactions where complexity is minimal. |
| gpt-4.1 | OpenAI | Simple tasks requiring basic language understanding; strong general-purpose performance. |

Decisional offers multiple AI models with varying capabilities and performance profiles. You can rely on the default auto setting or explicitly choose a model to tailor performance, speed, and cost to your requirements.

Default

  • Let Decisional automatically choose the best-suited model based on your query. This is recommended if you're unsure which model fits your use case, or if you want a balanced approach without manual tuning.

Warning

Combining reasoning models with advanced reasoning (model = "o1", advanced_reasoning = true) can significantly slow down response times. Use this combination only when you need the deepest analysis possible.

Selecting a Model

You can specify which model to use when making a query by including the model parameter in your request:

Query API Example

curl -X POST https://api.getdecisional.ai/api/v1/knowledge-engines/{id}/query \
-H 'Authorization: Bearer YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
  "query": "Complex analysis question?",
  "advanced_reasoning": true,
  "model": "llama-v3-70b"
}'
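The same request can be assembled in Python. This is a minimal sketch: the endpoint path, header names, and body fields are taken from the curl example above, while the `build_query_request` helper and its defaults are illustrative conveniences, not part of an official SDK.

```python
import json

API_BASE = "https://api.getdecisional.ai/api/v1"

def build_query_request(engine_id, query, model="auto",
                        advanced_reasoning=False, api_key="YOUR_API_KEY"):
    """Assemble the URL, headers, and JSON body for a knowledge-engine query.

    Mirrors the curl example; returning the pieces separately makes the
    payload easy to inspect before sending it with your HTTP client.
    """
    url = f"{API_BASE}/knowledge-engines/{engine_id}/query"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "query": query,
        "advanced_reasoning": advanced_reasoning,
        "model": model,
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_query_request(
    "engine-123",  # hypothetical knowledge-engine id
    "Complex analysis question?",
    model="llama-v3-70b",
    advanced_reasoning=True,
)
# To send: requests.post(url, headers=headers, data=payload)
```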

Workflow API Example

curl -X POST https://api.getdecisional.ai/api/v1/workflows \
-H 'Authorization: Bearer YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{
  "value": {
    "name": "Analysis Workflow",
    "query": "Comprehensive analysis of revenue growth over past 5 years?",
    "advanced_reasoning": true,
    "model": "o1"
  }
}'
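Note that the Workflow API nests its fields under a top-level value object, unlike the flat Query API body. A minimal Python sketch of building that body (field names follow the curl example above; the helper name is illustrative):

```python
import json

def build_workflow_body(name, query, model="auto", advanced_reasoning=False):
    """Build the JSON body for POST /api/v1/workflows.

    All workflow fields sit under a top-level "value" object,
    unlike the flat body used by the Query API.
    """
    return json.dumps({
        "value": {
            "name": name,
            "query": query,
            "advanced_reasoning": advanced_reasoning,
            "model": model,
        }
    })

body = build_workflow_body(
    "Analysis Workflow",
    "Comprehensive analysis of revenue growth over past 5 years?",
    model="o1",
    advanced_reasoning=True,  # slowest but most thorough; see the warning above
)
```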