Documentation Index
Fetch the complete documentation index at: https://docs.getdecisional.ai/llms.txt
Use this file to discover all available pages before exploring further.
Query a knowledge engine with a natural language question. This endpoint uses Server-Sent Events (SSE) to stream the response back to the client.
query: The natural language query to ask the knowledge engine
name: A name for this query workflow
Context ID used to maintain the conversation thread in chat-based workflows
advanced_reasoning: Flag to enable advanced reasoning for the query
model: The model to use for the query
Example Request
{
  "query": "What were the company's revenue figures for 2023?",
  "name": "Revenue Analysis",
  "advanced_reasoning": true,
  "model": "llama-v3-70b"
}
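As a sketch, a client might build and serialize this request body in Python like so (the field names come from the example above; how the body is sent depends on your HTTP client):

```python
import json

# Request body matching the example above. The field names are taken
# from the documented example request.
payload = {
    "query": "What were the company's revenue figures for 2023?",
    "name": "Revenue Analysis",
    "advanced_reasoning": True,
    "model": "llama-v3-70b",
}

# Serialize to a JSON string ready to send as the request body.
body = json.dumps(payload)
```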
The response is streamed using Server-Sent Events (SSE) with the following event types:
message: Contains intermediate response chunks as they are generated
error: Contains error messages if something goes wrong
done: Final event containing the complete response with metadata
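To illustrate how these event types arrive on the wire, here is a minimal sketch of an SSE line parser. It assumes each event is one `event:` line followed by one `data:` line, as in the example stream below; a production client should use a dedicated SSE library that handles multi-line data fields and reconnection.

```python
import json

def parse_sse(lines):
    """Yield (event, data) pairs from raw SSE lines.

    Minimal sketch: assumes one 'event:' line followed by one
    'data:' line per event; blank lines separate events.
    """
    event = None
    for line in lines:
        line = line.strip()
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            yield event, line[len("data:"):].strip()
        # blank lines just separate events; nothing to do

# Feed it a small sample stream shaped like the example below.
stream = [
    "event: message",
    'data: "Based on the available information, "',
    "",
    "event: done",
    'data: {"status": "processed"}',
]
events = list(parse_sse(stream))
```

A real client would dispatch on the event name: append `message` chunks to the displayed text, surface `error` data to the user, and parse the `done` data as JSON.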
Example Response Stream
// Intermediate messages
event: message
data: "Based on the available information, "

event: message
data: "the company's revenue in 2023 was "

event: message
data: "$1.2 billion, representing a 15% increase from 2022."

// Final response with complete result
event: done
data: {
  "id": "wf_abc123xyz789",
  "name": "Revenue Analysis",
  "query": "What were the company's revenue figures for 2023?",
  "type": "query",
  "knowledge_engine_id": "kng_abc123xyz789",
  "status": "processed",
  "response": "Based on the available information, the company's revenue in 2023 was $1.2 billion, representing a 15% increase from 2022.",
  "citations": [
    {
      "text": "In fiscal year 2023, total revenue reached $1.2B, up 15% YoY",
      "source": "Annual Report 2023",
      "page": 45
    }
  ],
  "created_at": 1679644800
}
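Once the done event arrives, its data is a JSON object like the one above. A client might extract the final text and render the citations along these lines (a sketch; the field names are taken from the example payload):

```python
import json

# A trimmed-down done payload with the fields shown in the example.
done_data = json.loads("""
{
  "status": "processed",
  "response": "...the company's revenue in 2023 was $1.2 billion...",
  "citations": [
    {"text": "In fiscal year 2023, total revenue reached $1.2B, up 15% YoY",
     "source": "Annual Report 2023",
     "page": 45}
  ]
}
""")

def format_citation(c):
    # Render a citation as "Source, p. N" from the documented fields.
    return f"{c['source']}, p. {c['page']}"

refs = [format_citation(c) for c in done_data["citations"]]
```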
Notes
The streaming response allows for real-time display of the AI’s response as it’s being generated
The final done event includes the complete response along with metadata and citations
If an error occurs, the stream will emit an error event and close the connection
Clients should handle connection closure appropriately using the close event
The streaming response requires a client that supports Server-Sent Events (SSE). Most modern browsers and HTTP clients support this feature.
Response Object
id: Unique identifier for the workflow
name: The name given to this query workflow
query: The original query that was asked
type: Type of workflow (always “query”)
knowledge_engine_id: ID of the knowledge engine that was queried
status: Status of the workflow (“processed” when complete)
response: The complete response text
citations: Array of citations supporting the response
citations[].text: The relevant text from the source
citations[].source: Name of the source document
citations[].page: Page number in the source document
created_at: Unix timestamp when the workflow was created
Basic authentication header of the form Basic <encoded-value> , where <encoded-value> is the base64-encoded string username:password .
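For example, the header value for the credentials username / password can be computed like this:

```python
import base64

def basic_auth_header(username, password):
    # base64-encode "username:password" per the Basic scheme
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

header = basic_auth_header("username", "password")
# header == "Basic dXNlcm5hbWU6cGFzc3dvcmQ="
```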
The natural language query
Enable advanced reasoning
Available options: auto, claude-4.5-sonnet, llama-v3-70b, llama-4-scout, llama-4-maverick, gemini-2.5, gpt-5.4, claude-4.6-haiku, gpt-4.1
Context ID for maintaining conversation history in chat mode
Streaming response using Server-Sent Events
The response is of type string .