Meilisearch’s chat completions endpoint works as a built-in RAG (Retrieval Augmented Generation) system: for each user message, Meilisearch searches your indexes, then passes the retrieved documents to the LLM to generate a grounded response. This guide shows you how to build a multi-turn chat interface on top of this. Make sure you have completed the setup guide before continuing.
In code examples, replace WORKSPACE_NAME with the name of your workspace. On Meilisearch Cloud, the default workspace name is cloud.

Streaming is required

All requests to the chat completions endpoint must include "stream": true. Non-streaming (stream: false) is not yet supported and returns a 501 Not Implemented error.
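Because a non-streaming request fails with a 501, it can help to validate the payload client-side before sending it. A minimal sketch (the `assertStreaming` helper name and its error message are illustrative, not part of any Meilisearch SDK):

```javascript
// Guard against accidentally sending a non-streaming request,
// which the endpoint rejects with 501 Not Implemented.
function assertStreaming(body) {
  if (body.stream !== true) {
    throw new Error('Meilisearch chat completions require "stream": true');
  }
  return body;
}

// A payload with stream: true passes the guard unchanged.
const payload = assertStreaming({
  model: 'PROVIDER_MODEL_UID',
  stream: true,
  messages: [{ role: 'user', content: 'Hello' }],
});
```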

Send a streaming request

Send a POST request to /chats/{workspace}/chat/completions with stream: true:
curl -N \
  -X POST 'MEILISEARCH_URL/chats/WORKSPACE_NAME/chat/completions' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "model": "PROVIDER_MODEL_UID",
    "stream": true,
    "messages": [
      {
        "role": "user",
        "content": "What movies are about artificial intelligence?"
      }
    ]
  }'
This basic request works and the LLM will search your indexes and generate an answer. However, without Meilisearch tools, you get no visibility into what is being searched and no way to maintain conversation context across follow-up questions. The next section explains how to address this.
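The response streams back as server-sent events, one `data:` line per chunk, terminated by `data: [DONE]`. As a rough sketch, assuming the OpenAI-style chunk shape used throughout this guide, extracting the answer tokens from a block of SSE text looks like this:

```javascript
// Concatenate the answer tokens from a block of SSE text.
// Each event is a line of the form `data: {...}`; the stream
// ends with the literal event `data: [DONE]`.
function extractContent(sseText) {
  let answer = '';
  for (const line of sseText.split('\n')) {
    if (!line.startsWith('data: ') || line === 'data: [DONE]') continue;
    const chunk = JSON.parse(line.slice('data: '.length));
    const delta = chunk.choices?.[0]?.delta;
    if (delta?.content) answer += delta.content;
  }
  return answer;
}
```

The complete example later in this guide does the same thing incrementally, buffering partial lines as chunks arrive over the network.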

Meilisearch tools

Meilisearch provides three special tools that improve the chat experience. Declare them in the tools array of your request and Meilisearch intercepts them — they are never forwarded to the LLM provider.
These tool definitions must include the exact parameter schemas below. Missing or incorrect parameters will prevent the tools from working.
  • _meiliSearchProgress: reports what searches are being performed in real time
  • _meiliSearchSources: returns the documents the LLM used to formulate its answer
  • _meiliAppendConversationMessage: asks the client to append internal tool calls and results to the conversation history
The call_id field links _meiliSearchProgress and _meiliSearchSources events together: both carry the same call_id, allowing you to associate a search query with the documents it returned.

_meiliAppendConversationMessage is the key to multi-turn conversations. Since the endpoint is stateless, Meilisearch uses this tool to expose the internal search tool calls and their results back to the client. You must push these messages into your messages array before the next request, or the LLM will lose the context from previous searches and produce lower-quality answers.
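One way to exploit the shared call_id is to key each search by it and attach the matching sources when they arrive. A sketch, assuming event objects shaped like the parameter schemas (the internal function name in the usage example is illustrative):

```javascript
// Pair each search's progress event with the sources event that
// shares its call_id, so the UI can show the query alongside the
// documents it returned.
function pairByCallId(progressEvents, sourcesEvents) {
  const searches = new Map();
  for (const p of progressEvents) {
    searches.set(p.call_id, {
      // function_parameters is a JSON-encoded string, e.g. { q, index_uid }
      query: JSON.parse(p.function_parameters),
      documents: [],
    });
  }
  for (const s of sourcesEvents) {
    const search = searches.get(s.call_id);
    if (search) search.documents = s.documents;
  }
  return searches;
}
```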

Tool schemas

All three tools use the same function-definition shape. The full schema for _meiliSearchProgress, for example (the other two appear in the complete example below):

{
  "type": "function",
  "function": {
    "name": "_meiliSearchProgress",
    "description": "Provides information about the current Meilisearch search operation",
    "parameters": {
      "type": "object",
      "properties": {
        "call_id": {
          "type": "string",
          "description": "The call ID to track the sources of the search"
        },
        "function_name": {
          "type": "string",
          "description": "The name of the function being executed"
        },
        "function_parameters": {
          "type": "string",
          "description": "The parameters of the function being executed, encoded in JSON"
        }
      },
      "required": ["call_id", "function_name", "function_parameters"],
      "additionalProperties": false
    },
    "strict": true
  }
}

Complete example: progress, sources, and history

The following example combines all three tools and demonstrates the full recommended usage: streaming progress, displaying sources, and maintaining conversation history for multi-turn questions.
JavaScript (Fetch)
const MEILISEARCH_TOOLS = [
  {
    type: 'function',
    function: {
      name: '_meiliSearchProgress',
      description: 'Provides information about the current Meilisearch search operation',
      parameters: {
        type: 'object',
        properties: {
          call_id: { type: 'string' },
          function_name: { type: 'string' },
          function_parameters: { type: 'string' },
        },
        required: ['call_id', 'function_name', 'function_parameters'],
        additionalProperties: false,
      },
      strict: true,
    },
  },
  {
    type: 'function',
    function: {
      name: '_meiliSearchSources',
      description: 'Provides sources of the search',
      parameters: {
        type: 'object',
        properties: {
          call_id: { type: 'string' },
          documents: { type: 'array', items: { type: 'object' } },
        },
        required: ['call_id', 'documents'],
        additionalProperties: false,
      },
      strict: true,
    },
  },
  {
    type: 'function',
    function: {
      name: '_meiliAppendConversationMessage',
      description: 'Append a new message to the conversation based on what happened internally',
      parameters: {
        type: 'object',
        properties: {
          role: { type: 'string' },
          content: { type: 'string' },
          tool_calls: { type: ['array', 'null'] },
          tool_call_id: { type: ['string', 'null'] },
        },
        required: ['role', 'content', 'tool_calls', 'tool_call_id'],
        additionalProperties: false,
      },
      strict: true,
    },
  },
];

const messages = [];

async function chat(userMessage) {
  messages.push({ role: 'user', content: userMessage });

  const response = await fetch('MEILISEARCH_URL/chats/WORKSPACE_NAME/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: 'Bearer MEILISEARCH_KEY',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'PROVIDER_MODEL_UID',
      stream: true,
      messages,
      tools: MEILISEARCH_TOOLS,
    }),
  });

  const reader = response.body?.getReader();
  if (!reader) throw new Error('No readable stream on response');
  const decoder = new TextDecoder();
  let answer = '';
  let buffer = '';
  const pendingToolCalls = {};

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? ''; // retain any incomplete trailing line

    for (const line of lines) {
      if (!line.startsWith('data: ') || line === 'data: [DONE]') continue;

      const chunk = JSON.parse(line.slice(6));
      const delta = chunk.choices[0]?.delta;

      // Accumulate answer tokens
      if (delta?.content) {
        answer += delta.content;
        process.stdout.write(delta.content);
      }

      // Accumulate tool call arguments (they may arrive in multiple chunks)
      for (const toolCall of delta?.tool_calls ?? []) {
        if (toolCall.id) {
          pendingToolCalls[toolCall.id] = { name: toolCall.function.name, args: '' };
        }
        const pending = toolCall.id
          ? pendingToolCalls[toolCall.id]
          : Object.values(pendingToolCalls).at(-1);
        if (pending && toolCall.function?.arguments) {
          pending.args += toolCall.function.arguments;
        }
      }
    }
  }

  // Process completed tool calls
  for (const call of Object.values(pendingToolCalls)) {
    const args = JSON.parse(call.args);

    if (call.name === '_meiliSearchProgress') {
      // Show real-time search progress in the UI
      const params = JSON.parse(args.function_parameters);
      console.log(`Searched "${params.q}" in index "${params.index_uid}"`);
    }

    if (call.name === '_meiliSearchSources') {
      // Display source documents alongside the answer
      console.log('Sources used:', args.documents);
    }

    if (call.name === '_meiliAppendConversationMessage') {
      // Append internal search context to maintain quality in follow-up questions
      messages.push(args);
    }
  }

  messages.push({ role: 'assistant', content: answer });
}

// First question
await chat('What movies are about artificial intelligence?');
// Follow-up — the agent uses the search context from the previous turn
await chat('Which one has the best reviews?');

Troubleshooting

Empty reply from server (curl error 52)

Causes:
  • Chat completions feature not enabled
  • Missing authentication in requests
Solution:
  1. Enable the feature (see setup guide)
  2. Include the Authorization header in all requests

“Invalid API key” error

Cause: using the wrong type of API key
Solution:
  • Use the “Default Chat API Key”
  • Do not use search or admin API keys for chat endpoints
  • Find your chat key with the list keys endpoint

“Socket connection closed unexpectedly”

Cause: usually the LLM provider API key is missing or invalid in the workspace settings
Solution:
  1. Check workspace configuration:
    curl \
      -X GET 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \
      -H "Authorization: Bearer MEILISEARCH_KEY"
    
  2. Update with a valid API key:
    curl \
      -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \
      -H "Authorization: Bearer MEILISEARCH_KEY" \
      -H "Content-Type: application/json" \
      --data-binary '{ "apiKey": "your-valid-api-key" }'
    

No search progress visible

Cause: the _meiliSearchProgress tool is not declared in the request. The search still runs and the LLM still answers, but you receive no visibility into what is being searched.
Solution: add all three Meilisearch tools to your request as shown in the complete example.
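A quick client-side check can catch a missing tool declaration before the request is sent. A sketch (the `missingTools` helper is illustrative):

```javascript
// The three tool names Meilisearch intercepts.
const REQUIRED_TOOLS = [
  '_meiliSearchProgress',
  '_meiliSearchSources',
  '_meiliAppendConversationMessage',
];

// Return the names of any Meilisearch tools absent from the
// request's tools array.
function missingTools(tools = []) {
  const declared = new Set(tools.map((t) => t.function?.name));
  return REQUIRED_TOOLS.filter((name) => !declared.has(name));
}
```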

Next steps

One-shot summarization

Generate single AI answers without conversation history.

Stream chat responses

Handle streaming responses for a real-time experience.

Display source documents

Show users which documents were used to generate responses.

Configure guardrails

Restrict AI responses to topics covered by your data.

Chat completions API reference

Full reference for the chat completions endpoint.

Reduce hallucination

Techniques to keep AI responses grounded in your data.