AI Request Contract
Manual supplemental document. This file is not generated by
scripts/doc-scanner.js.
Purpose
This document describes the request data built by the SheetNext front end and the response format that the configured AI_URL endpoint must accept.
It does not define your internal server implementation.
The exact client behavior is implemented in src/core/AI/AI.js.
Client Configuration
Configure the AI endpoint when creating SheetNext:
const SN = new SheetNext(container, {
AI_URL: 'https://your-server.example.com/api/ai',
AI_TOKEN: 'optional-token'
})
- AI_URL: required if you want to use the built-in AI runtime
- AI_TOKEN: optional; when present, the browser sends Authorization: Bearer <token>
What the Front End Sends
The front end sends an HTTP POST request to AI_URL with:
- Content-Type: application/json
- Authorization: Bearer <token> (optional)
Request body:
{
"messages": [
{
"role": "system",
"content": "System prompt text"
},
{
"role": "user",
"content": "User instruction"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "exampleTool",
"description": "Tool description",
"parameters": {
"type": "object",
"properties": {}
}
}
}
],
"isUserStart": true
}
Request Field Meaning
messages
This is the main Chat Completions-style message array built by the client.
The front end assembles it from:
- a system prompt
- previous conversation history
- optional attachment messages
- a final system snapshot describing current workbook context
Each message uses the usual chat structure:
{
"role": "user",
"content": "plain text"
}
For attachment cases, content can also be an array of parts:
{
"role": "user",
"content": [
{
"type": "text",
"text": "User uploaded attachments:"
},
{
"type": "image_url",
"image_url": {
"url": "data:image/png;base64,..."
}
}
]
}
So from the server perspective:
- messages should be accepted as-is
- content may be either a string or a multimodal content array
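If your model provider only accepts string content, the server may want to flatten multimodal content before forwarding. The helper below is a sketch under that assumption; contentToText and the "[image attachment]" placeholder are illustrative names, not part of the contract.

```javascript
// Sketch: flatten a message's content to plain text for providers
// that only accept string content. Text parts are kept; image parts
// are replaced with a placeholder so the surrounding text survives.
function contentToText(content) {
  if (typeof content === 'string') return content;
  return content
    .map((part) => {
      if (part.type === 'text') return part.text;
      if (part.type === 'image_url') return '[image attachment]';
      return '';
    })
    .join('\n');
}
```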
tools
This is the built-in tool definition array generated by SheetNext and sent together with the messages.
The server should accept it as-is.
If your model provider supports tool/function calling, you can forward these definitions directly or map them to your provider’s equivalent format.
isUserStart
Boolean flag:
- true: this request starts a new user turn
- false: this request is a follow-up turn in the current workflow
The front end sends it for server-side flow control if needed.
What the Server Must Accept
At minimum, the configured AI_URL endpoint should accept:
{
"messages": "Chat Completions-style message array",
"tools": "SheetNext tool definitions",
"isUserStart": true
}
In practical terms:
- messages: required
- tools: required by the current client contract
- isUserStart: required by the current client contract
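A server can enforce these required fields with a small guard before doing any model work. This is a sketch; validateAiRequest and its error messages are illustrative, not part of the contract.

```javascript
// Sketch: reject request bodies that don't match the client contract
// before forwarding anything to the model service.
function validateAiRequest(body) {
  if (!Array.isArray(body.messages) || body.messages.length === 0) {
    throw new Error('messages must be a non-empty array');
  }
  if (!Array.isArray(body.tools)) {
    throw new Error('tools must be an array');
  }
  if (typeof body.isUserStart !== 'boolean') {
    throw new Error('isUserStart must be a boolean');
  }
  return body;
}
```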
What the Front End Accepts Back
The current client expects a streaming response and parses it as SSE-style data: lines.
Recommended response header:
Content-Type: text/event-stream
Stream example:
data: {"type":"text","delta":"Hello"}
data: {"type":"text","delta":" world"}
data: [DONE]
Each event should be one data: ... line followed by a blank line.
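On the server side, this wire format can be produced with small helpers like the ones below. This is a sketch; sseChunk and sseDone are hypothetical names, not part of the contract.

```javascript
// Sketch: serialize one response chunk as an SSE event — a single
// "data: ..." line followed by a blank line.
function sseChunk(obj) {
  return 'data: ' + JSON.stringify(obj) + '\n\n';
}

// Sketch: the terminal sentinel that ends the stream.
function sseDone() {
  return 'data: [DONE]\n\n';
}
```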
Supported Response Chunks
text
Append assistant text to the UI.
{
"type": "text",
"delta": "Hello"
}
tool_call
Incrementally stream a tool call. arguments may be partial.
{
"type": "tool_call",
"tool_call": {
"index": 0,
"id": "call_1",
"type": "function",
"function": {
"name": "setCellValue",
"arguments": "{\"range\":\"A1\""
}
}
}
tool_call_complete
Provide the completed tool call payload.
{
"type": "tool_call_complete",
"tool_call": {
"index": 0,
"id": "call_1",
"type": "function",
"function": {
"name": "setCellValue",
"arguments": "{\"range\":\"A1\",\"value\":123}"
}
}
}
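If your model provider streams tool-call arguments incrementally, the server can accumulate the partial fragments per index and emit tool_call_complete once the call is finished. The sketch below assumes Chat Completions-style deltas; accumulateToolCall is an illustrative helper, not part of the contract.

```javascript
// Sketch: merge one incremental tool_call chunk into a map of pending
// calls keyed by index, concatenating partial `arguments` strings.
function accumulateToolCall(pending, chunk) {
  const call = chunk.tool_call;
  const current = pending[call.index] || {
    index: call.index,
    id: call.id,
    type: 'function',
    function: { name: '', arguments: '' }
  };
  if (call.id) current.id = call.id;
  if (call.function && call.function.name) {
    current.function.name = call.function.name;
  }
  if (call.function && call.function.arguments) {
    current.function.arguments += call.function.arguments;
  }
  pending[call.index] = current;
  return pending;
}
```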
usage
Optional usage information.
{
"type": "usage",
"usage": {
"input_tokens": 1200,
"output_tokens": 260,
"total_tokens": 1460
}
}
web_search
Optional status chunk recognized by the current client.
{
"type": "web_search",
"status": "in_progress"
}
or:
{
"type": "web_search",
"status": "completed"
}
error
Structured stream error.
{
"error": {
"message": "Model request failed"
}
}
Error Handling
- non-2xx HTTP responses are treated as request failures
- a 2xx response can still fail by streaming an error payload
- plain text or HTML error bodies are shown as error content
- unknown chunk types are ignored by the current client
Recommended Server Strategy
The simplest integration is:
- receive the JSON body from SheetNext
- read messages, tools, and isUserStart
- call your model service
- convert the model output into the SSE chunk format above
- stream it back to the browser
As long as the server accepts the request body above and returns the chunk format above, the current SheetNext AI front end can work with it.
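The steps above can be sketched as a single function. This is illustrative only: streamAiResponse is a hypothetical name, callModel stands in for your model service, and its return shape (an array of { text } events) is an assumption.

```javascript
// Sketch: take the parsed request body, call a model service, and
// emit the SSE chunk format through a write callback (for example
// res.write on a Node response with Content-Type: text/event-stream).
function streamAiResponse(body, callModel, write) {
  const events = callModel(body.messages, body.tools); // assumed shape: [{ text: '...' }]
  for (const ev of events) {
    if (ev.text) {
      write('data: ' + JSON.stringify({ type: 'text', delta: ev.text }) + '\n\n');
    }
  }
  write('data: [DONE]\n\n');
}
```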