# AI Request Contract

> Manual supplemental document. This file is not generated by `scripts/doc-scanner.js`.

## Purpose

This document describes the request payload that the SheetNext front end sends to the configured `AI_URL` endpoint and the response format that the endpoint must return.

It does not define your internal server implementation.

The exact client behavior is implemented in `src/core/AI/AI.js`.

## Client Configuration

Configure the AI endpoint when creating `SheetNext`:

```js
const SN = new SheetNext(container, {
  AI_URL: 'https://your-server.example.com/api/ai',
  AI_TOKEN: 'optional-token'
})
```

- `AI_URL`: required if you want to use the built-in AI runtime
- `AI_TOKEN`: optional; when present, the browser sends `Authorization: Bearer <token>`

## What the Front End Sends

The front end sends an HTTP `POST` request to `AI_URL` with:

- `Content-Type: application/json`
- optional `Authorization: Bearer <token>`

Request body:

```json
{
  "messages": [
    {
      "role": "system",
      "content": "System prompt text"
    },
    {
      "role": "user",
      "content": "User instruction"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "exampleTool",
        "description": "Tool description",
        "parameters": {
          "type": "object",
          "properties": {}
        }
      }
    }
  ],
  "isUserStart": true
}
```

## Request Field Meaning

### `messages`

This is the main Chat Completions-style message array built by the client.

The front end assembles it from:

- a system prompt
- previous conversation history
- optional attachment messages
- a final system snapshot describing current workbook context

Each message uses the usual chat structure:

```json
{
  "role": "user",
  "content": "plain text"
}
```

For attachment cases, `content` can also be an array of parts:

```json
{
  "role": "user",
  "content": [
    {
      "type": "text",
      "text": "User uploaded attachments:"
    },
    {
      "type": "image_url",
      "image_url": {
        "url": "data:image/png;base64,..."
      }
    }
  ]
}
```

So from the server perspective:

- `messages` should be accepted as-is
- `content` may be either a string or a multimodal content array
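Because `content` can take either shape, a server that only needs plain text has to normalize it. The following is a minimal sketch of such a normalizer; `extractText` is a hypothetical helper name, not part of SheetNext:

```javascript
// Normalize a message's `content` field to plain text.
// Accepts either a plain string or a multimodal parts array;
// non-text parts (e.g. image_url) are dropped.
function extractText(content) {
  if (typeof content === 'string') return content;
  if (Array.isArray(content)) {
    return content
      .filter((part) => part.type === 'text')
      .map((part) => part.text)
      .join('\n');
  }
  return '';
}
```

If you forward `messages` to a multimodal-capable provider, skip this step and pass `content` through unchanged.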

### `tools`

This is the built-in tool definition array generated by SheetNext and sent together with the messages.

The server should accept it as-is.

If your model provider supports tool/function calling, you can forward these definitions directly or map them to your provider's equivalent format.

### `isUserStart`

Boolean flag:

- `true`: this request starts a new user turn
- `false`: this request is a follow-up turn in the current workflow

The front end always sends it; the server can use it for flow control if needed.

## What the Server Must Accept

At minimum, the configured `AI_URL` endpoint should accept:

```json
{
  "messages": "Chat Completions-style message array",
  "tools": "SheetNext tool definitions",
  "isUserStart": true
}
```

In practical terms:

- `messages`: required
- `tools`: required by the current client contract
- `isUserStart`: required by the current client contract
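A server can enforce these three requirements with a small validation step before calling the model. This is a sketch; `validateAiRequest` is a hypothetical helper name, not part of SheetNext:

```javascript
// Validate the request body the SheetNext front end sends:
// `messages` and `tools` must be arrays, `isUserStart` a boolean.
function validateAiRequest(body) {
  if (!body || typeof body !== 'object') return false;
  if (!Array.isArray(body.messages)) return false;
  if (!Array.isArray(body.tools)) return false;
  if (typeof body.isUserStart !== 'boolean') return false;
  return true;
}
```

Rejecting invalid bodies with a non-2xx status is enough: the client treats non-2xx responses as request failures (see Error Handling below).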

## What the Front End Accepts Back

The current client expects a streaming response and parses it as SSE-style `data:` lines.

Recommended response header:

```text
Content-Type: text/event-stream
```

Stream example:

```text
data: {"type":"text","delta":"Hello"}

data: {"type":"text","delta":" world"}

data: [DONE]
```

Each event should be one `data: ...` line followed by a blank line.
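The framing above is simple to produce on the server. A minimal sketch of an event formatter (assuming a Node.js server; `sseEvent` is a hypothetical helper name):

```javascript
// Format one SSE event as the client expects:
// a single `data: ...` line followed by a blank line.
function sseEvent(payload) {
  const data = typeof payload === 'string' ? payload : JSON.stringify(payload);
  return `data: ${data}\n\n`;
}

// Usage with a Node.js HTTP response object:
//   res.write(sseEvent({ type: 'text', delta: 'Hello' }));
//   res.write(sseEvent('[DONE]'));
```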

## Supported Response Chunks

### `text`

Append assistant text to the UI.

```json
{
  "type": "text",
  "delta": "Hello"
}
```

### `tool_call`

Incrementally stream a tool call. `arguments` may be partial.

```json
{
  "type": "tool_call",
  "tool_call": {
    "index": 0,
    "id": "call_1",
    "type": "function",
    "function": {
      "name": "setCellValue",
      "arguments": "{\"range\":\"A1\""
    }
  }
}
```

### `tool_call_complete`

Provide the completed tool call payload.

```json
{
  "type": "tool_call_complete",
  "tool_call": {
    "index": 0,
    "id": "call_1",
    "type": "function",
    "function": {
      "name": "setCellValue",
      "arguments": "{\"range\":\"A1\",\"value\":123}"
    }
  }
}
```
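If your model provider streams tool calls in fragments, the server can accumulate them by `index` and concatenate the partial `arguments` strings before emitting `tool_call_complete`. A sketch of one way to do this; `ToolCallBuffer` is a hypothetical helper, not part of SheetNext:

```javascript
// Accumulate streamed tool-call fragments into complete calls,
// keyed by `index`, concatenating partial `arguments` strings.
class ToolCallBuffer {
  constructor() {
    this.calls = new Map();
  }

  // Add one (possibly partial) tool_call fragment.
  add(toolCall) {
    const existing = this.calls.get(toolCall.index);
    if (!existing) {
      this.calls.set(toolCall.index, {
        id: toolCall.id,
        name: toolCall.function?.name || '',
        arguments: toolCall.function?.arguments || ''
      });
      return;
    }
    if (toolCall.function?.name) existing.name = toolCall.function.name;
    existing.arguments += toolCall.function?.arguments || '';
  }

  // Build the completed payload for a `tool_call_complete` chunk.
  complete(index) {
    const call = this.calls.get(index);
    return {
      index,
      id: call.id,
      type: 'function',
      function: { name: call.name, arguments: call.arguments }
    };
  }
}
```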

### `usage`

Optional usage information.

```json
{
  "type": "usage",
  "usage": {
    "input_tokens": 1200,
    "output_tokens": 260,
    "total_tokens": 1460
  }
}
```

### `web_search`

Optional status chunk recognized by the current client.

```json
{
  "type": "web_search",
  "status": "in_progress"
}
```

or:

```json
{
  "type": "web_search",
  "status": "completed"
}
```

### `error`

Structured stream error.

```json
{
  "error": {
    "message": "Model request failed"
  }
}
```

## Error Handling

- non-2xx HTTP responses are treated as request failures
- a 2xx response can still fail by streaming an `error` payload
- plain text or HTML error bodies are shown as error content
- unknown chunk types are ignored by the current client

## Recommended Server Strategy

The simplest integration is:

1. receive the JSON body from SheetNext
2. read `messages`, `tools`, and `isUserStart`
3. call your model service
4. convert the model output into the SSE chunk format above
5. stream it back to the browser

As long as the server accepts the request body above and returns the chunk format above, the current SheetNext AI front end can work with it.
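Step 4 is usually the only non-trivial part. As one illustration, assuming your provider streams OpenAI-style Chat Completions deltas (an assumption; adapt the field paths to your provider), the conversion can be sketched as a pure mapping function (`toSheetNextChunk` is a hypothetical helper name):

```javascript
// Map one Chat Completions streaming chunk to the SheetNext
// chunk format. Returns null when the chunk carries nothing relevant.
function toSheetNextChunk(chunk) {
  const delta = chunk.choices?.[0]?.delta || {};
  if (typeof delta.content === 'string' && delta.content.length > 0) {
    return { type: 'text', delta: delta.content };
  }
  if (Array.isArray(delta.tool_calls) && delta.tool_calls.length > 0) {
    return { type: 'tool_call', tool_call: delta.tool_calls[0] };
  }
  if (chunk.usage) {
    return { type: 'usage', usage: chunk.usage };
  }
  return null;
}
```

Each non-null result would then be written as one `data:` line, followed by `data: [DONE]` when the model stream ends.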
