Tool Use
OpenAI-compatible function calling across all Gateway providers
The Gateway provides OpenAI-compatible tool use across every supported provider. You define functions; the model decides when to call them; your code executes them and returns results; the model continues the conversation.
Drop-in for Vercel AI SDK
The Gateway is designed to plug into `@ai-sdk/openai` with a custom `baseURL`. It adheres to strict OpenAI chat-completions semantics for tools — including streaming deltas, parallel calls, and the `role: "tool"` round-trip.
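A minimal sketch of that wiring, assuming the AI SDK 4.x `tool()` helper (Zod `parameters`, `maxSteps`); the Gateway URL, API key, and `get_weather` tool are placeholders, not part of the Gateway itself:

```ts
import { createOpenAI } from "@ai-sdk/openai";
import { generateText, tool } from "ai";
import { z } from "zod";

// Point the standard OpenAI provider at the Gateway.
// baseURL and apiKey are placeholders for your own deployment.
const gateway = createOpenAI({
  baseURL: "https://gateway.example.com/v1",
  apiKey: process.env.GATEWAY_API_KEY,
});

const { text } = await generateText({
  model: gateway("gpt-4o"), // any Gateway-routed model id
  tools: {
    // Hypothetical tool; the SDK executes it and feeds the result back.
    get_weather: tool({
      description: "Get the current weather for a city",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, tempC: 21 }), // stubbed result
    }),
  },
  maxSteps: 2, // allow the follow-up turn after the tool result
  prompt: "What is the weather in Paris?",
});

console.log(text);
```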
Defining tools
Constraints
- Up to 128 tools per request.
- Tool names must match `^[a-zA-Z0-9_-]{1,64}$`.
- `parameters` is a JSON Schema with `type: "object"` at the root.
- Tool names within a single request must be unique.
- `strict: true` on a function definition is supported for OpenAI (passthrough) and validated server-side for other providers on the Gateway.
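For illustration, a raw chat-completions `tools` entry that satisfies these constraints; `get_weather` and its schema are hypothetical:

```ts
// Hypothetical tool definition in OpenAI chat-completions format.
// The name matches ^[a-zA-Z0-9_-]{1,64}$ and parameters is a JSON Schema
// rooted at type: "object".
const tools = [
  {
    type: "function" as const,
    function: {
      name: "get_weather",
      description: "Get the current weather for a city",
      strict: true, // passthrough on OpenAI, validated server-side elsewhere
      parameters: {
        type: "object",
        properties: {
          city: { type: "string", description: "City name, e.g. Paris" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["city", "unit"],
        additionalProperties: false, // required by strict mode
      },
    },
  },
];
```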
tool_choice values
| Value | Behavior |
|---|---|
"auto" | Model decides whether to call a tool (default when tools are present). |
"none" | Model must not call any tool. |
"required" | Model must call at least one tool. |
{type:"function",function:{name:"X"}} | Force the model to call the named tool. |
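As a sketch (the Gateway URL and model id are placeholders), forcing the hypothetical `get_weather` tool defined above:

```ts
// Force get_weather regardless of the prompt; use "auto" (or omit
// tool_choice) to let the model decide, "none" to forbid tool calls,
// or "required" to demand at least one.
const res = await fetch("https://gateway.example.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.GATEWAY_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [{ role: "user", content: "What is the weather in Paris?" }],
    tools,
    tool_choice: { type: "function", function: { name: "get_weather" } },
  }),
});
```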
Tool-call response
When the model decides to call a tool, `finish_reason` is `"tool_calls"` and the assistant message contains a `tool_calls` array:
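An illustrative assistant turn (the field values are invented for the example, not actual Gateway output):

```ts
// Illustrative assistant message when finish_reason is "tool_calls".
const assistantMessage = {
  role: "assistant" as const,
  content: null,
  tool_calls: [
    {
      id: "call_9f2c1e7a2b", // stable for the rest of this request
      type: "function" as const,
      function: {
        name: "get_weather",
        arguments: '{"city":"Paris","unit":"celsius"}', // JSON-encoded string
      },
    },
  ],
};

// arguments arrives as a string, not an object -- parse before executing.
const args = JSON.parse(assistantMessage.tool_calls[0].function.arguments);
```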
- `arguments` is always a JSON-encoded string — parse it before executing.
- `tool_calls[].id` is stable across the request. For providers that don't emit IDs natively (Gemini), the Gateway synthesizes `call_<uuid>`. For Anthropic's `toolu_XXX` IDs, the Gateway prefixes with `call_` for OpenAI-client compatibility.
Parallel tool calls
If the model emits multiple `tool_calls` in one response, execute them concurrently and return one `role: "tool"` message per call. Claude and OpenAI emit parallel calls natively; Gemini emits multiple `functionCall` parts per response.
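A sketch of concurrent execution, continuing from the `assistantMessage` above; `runTool` is a hypothetical dispatcher to your own implementations:

```ts
// Hypothetical dispatcher mapping a tool name and parsed arguments
// to your own implementation.
async function runTool(name: string, args: unknown): Promise<unknown> {
  if (name === "get_weather") return { tempC: 21 }; // stubbed result
  throw new Error(`unknown tool: ${name}`);
}

// Run every tool call concurrently and build one role:"tool" message per call.
const toolMessages = await Promise.all(
  assistantMessage.tool_calls.map(async (call) => ({
    role: "tool" as const,
    tool_call_id: call.id,
    content: JSON.stringify(
      await runTool(call.function.name, JSON.parse(call.function.arguments))
    ),
  }))
);
```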
Continuing the conversation
Send the tool result back as a `role: "tool"` message, alongside the prior assistant turn:
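A sketch of the follow-up request, reusing the placeholder Gateway URL, `assistantMessage`, `toolMessages`, and `tools` from the sketches above:

```ts
// Append the assistant turn that contained the tool calls plus one
// role:"tool" message per call, then ask the Gateway to continue.
const followUp = await fetch("https://gateway.example.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.GATEWAY_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [
      { role: "user", content: "What is the weather in Paris?" },
      assistantMessage, // the turn that emitted tool_calls
      ...toolMessages,  // one role:"tool" reply per tool_call_id
    ],
    tools, // keep the definitions available for further calls
  }),
});

const completion = await followUp.json();
console.log(completion.choices[0].message.content);
```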
- The `tool_call_id` must match an `id` the assistant emitted earlier in this request.
- `content` is always a string (typically a JSON-stringified tool result). If your tool errors, pass a plain-text error string — the Gateway does not require valid JSON.
- Tool result size cap: 256 KB. Larger payloads are truncated with a visible `…[truncated by gateway: tool result exceeded 256KB]` suffix.
Unsupported model + tools
Some models don't support tool calling. Requesting tools on them returns HTTP 400 with `code: "tool_unsupported_for_model"`:
- `deepseek-reasoner`, `deepseek-r1` — DeepSeek's reasoning variants do not expose tools. Use `deepseek-chat` instead.
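For illustration, the 400 body follows the standard OpenAI envelope; only the `code` value is fixed, the other field text here is an assumption:

```ts
// Illustrative error body for tools + deepseek-reasoner. The message,
// type, and param values are assumptions about exact wording.
const unsupportedToolError = {
  error: {
    message: "deepseek-reasoner does not support tool calling; use deepseek-chat.",
    type: "invalid_request_error",
    param: "tools",
    code: "tool_unsupported_for_model",
  },
};
```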
Errors
All tool-use errors follow the OpenAI envelope shape. See Errors for the full code list. The most common:
| Code | When |
|---|---|
| `tool_schema_invalid` | `parameters` is not a valid JSON Schema object. |
| `tool_choice_invalid` | `tool_choice` references a function name not in `tools[]`. |
| `tool_call_id_mismatch` | A `role: "tool"` message has a `tool_call_id` that wasn't emitted. |
| `tool_unsupported_for_model` | Model doesn't support tools (e.g. `deepseek-reasoner`). |
| `tool_call_invalid_arguments` | With `strict: true`, the model emitted arguments that failed schema validation. |
| `tool_provider_error` | Upstream provider returned an error mid-stream. |
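A sketch of client-side handling against this code list, assuming the fetch-style `res` from the earlier examples; which codes you treat as retryable is your own policy:

```ts
// Branch on the error code from the OpenAI-style envelope.
if (!res.ok) {
  const { error } = await res.json();
  switch (error.code) {
    case "tool_unsupported_for_model":
      // re-issue against a tool-capable model (e.g. deepseek-chat)
      break;
    case "tool_call_invalid_arguments":
      // strict:true schema validation failed; adjust the schema or retry
      break;
    default:
      throw new Error(`${error.type}: ${error.message} (${error.code})`);
  }
}
```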
See also
- Streaming — full SSE grammar for tool-call deltas.
- Caching with tools — response cache bypass + `cache_control` passthrough.
- Provider compatibility — matrix of what each provider supports.