POST /api/v1/knowledge-engines/{id}/query
Query Knowledge Engine
curl --request POST \
  --url https://api.getdecisional.ai/api/v1/knowledge-engines/{id}/query \
  --header 'Content-Type: application/json' \
  --data '{
  "query": "<string>",
  "name": "<string>",
  "advanced_reasoning": true,
  "model": "llama-v3-70b",
  "context_id": "<string>"
}'
Query a knowledge engine with a natural language question. This endpoint uses Server-Sent Events (SSE) to stream the response back to the client.

Request Format

query (string, required)
The natural language query to ask the knowledge engine

name (string, required)
A name for this query workflow

context_id (string, optional)
Context ID used in thread mode for chat-based workflows

advanced_reasoning (boolean, optional)
Flag to enable advanced reasoning for the query

model (string, optional)
The model to use for the query

Example Request

{
  "query": "What were the company's revenue figures for 2023?",
  "name": "Revenue Analysis",
  "advanced_reasoning": true,
  "model": "llama-v3-70b"
}

Response Format

The response is streamed using Server-Sent Events (SSE) with the following event types:
  • message: Contains intermediate response chunks as they are generated
  • error: Contains error messages if something goes wrong
  • done: Final event containing the complete response with metadata
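The event framing above can be split into (event, data) pairs with a few lines of Python. This is an illustrative sketch, not part of the API client library; it assumes the minimal framing shown in the example below (an optional `event:` line, one or more `data:` lines, events separated by blank lines).

```python
def parse_sse(raw: str):
    """Parse a Server-Sent Events payload into (event, data) pairs."""
    events = []
    event, data_lines = "message", []  # SSE defaults the event type to "message"
    for line in raw.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            # A blank line terminates the current event.
            events.append((event, "\n".join(data_lines)))
            event, data_lines = "message", []
    if data_lines:  # flush a final event with no trailing blank line
        events.append((event, "\n".join(data_lines)))
    return events
```

Feeding the example stream below through this helper yields one pair per `message` event plus a final `("done", ...)` pair whose data is the complete JSON result.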

Example Response Stream

// Intermediate messages
event: message
data: "Based on the available information, "
event: message
data: "the company's revenue in 2023 was "
event: message
data: "$1.2 billion, representing a 15% increase from 2022."
// Final response with complete result
event: done
data: {
  "id": "wf_abc123xyz789",
  "name": "Revenue Analysis",
  "query": "What were the company's revenue figures for 2023?",
  "type": "query",
  "knowledge_engine_id": "kng_abc123xyz789",
  "status": "processed",
  "response": "Based on the available information, the company's revenue in 2023 was $1.2 billion, representing a 15% increase from 2022.",
  "citations": [
    {
      "text": "In fiscal year 2023, total revenue reached $1.2B, up 15% YoY",
      "source": "Annual Report 2023",
      "page": 45
    }
  ],
  "created_at": 1679644800
}

Notes

  • The streaming response allows for real-time display of the AI’s response as it’s being generated
  • The final done event includes the complete response along with metadata and citations
  • If an error occurs, the stream will emit an error event and close the connection
  • Clients should handle connection closure appropriately using the close event
The streaming response requires a client that supports Server-Sent Events (SSE). Most modern browsers and HTTP clients support this feature.
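The notes above translate into a small dispatch loop. The sketch below is illustrative (the function name `consume_stream` is not part of any SDK) and assumes each event's data arrives on a single `data:` line; `lines` can be any iterable of decoded text lines, such as `resp.iter_lines(decode_unicode=True)` on a streaming HTTP response.

```python
import json

def consume_stream(lines):
    """Process SSE lines from the query endpoint.

    Returns (chunks, final), where `chunks` are the intermediate
    `message` payloads and `final` is the parsed `done` object
    (or None if the stream ended before a `done` event).
    Raises on an `error` event, mirroring the notes above.
    """
    chunks, final, event = [], None, "message"
    for line in lines:
        if line.startswith("event:"):
            event = line.split(":", 1)[1].strip()
        elif line.startswith("data:"):
            data = line.split(":", 1)[1].strip()
            if event == "message":
                chunks.append(data.strip('"'))  # display incrementally in a real client
            elif event == "error":
                raise RuntimeError(f"stream error: {data}")
            elif event == "done":
                final = json.loads(data)  # complete response with metadata
    return chunks, final
```

A real client would print each chunk as it arrives rather than collecting them; separating the loop from the HTTP transport keeps the dispatch logic easy to test.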

Response Object

id (string)
Unique identifier for the workflow

name (string)
Name of the workflow

query (string)
The original query that was asked

type (string)
Type of workflow (always “query”)

knowledge_engine_id (string)
ID of the knowledge engine that was queried

status (string)
Status of the workflow (“processed” when complete)

response (string)
The complete response text

citations (array)
Array of citations supporting the response

created_at (number)
Unix timestamp when the workflow was created
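Assuming the fields above are exhaustive, the `done` payload can be modeled with a small dataclass. The class itself is an illustrative sketch (not provided by the API); the field names mirror the response object exactly.

```python
import json
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class QueryWorkflow:
    """Illustrative model of the final `done` event payload."""
    id: str
    name: str
    query: str
    type: str                  # always "query" for this endpoint
    knowledge_engine_id: str
    status: str                # "processed" when complete
    response: str
    citations: List[Dict[str, Any]] = field(default_factory=list)
    created_at: int = 0        # Unix timestamp

    @classmethod
    def from_json(cls, data: str) -> "QueryWorkflow":
        # The `done` event's data string parses directly into the model.
        return cls(**json.loads(data))
```

Typed access to `citations` and `status` makes downstream handling (rendering sources, checking completion) less error-prone than raw dict lookups.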

Path Parameters

id (string, required)
Knowledge engine ID

Body

application/json

Response

200
text/event-stream

Streaming response using Server-Sent Events

The response is of type string.