Concurred API
Gateway API

Tool Use

OpenAI-compatible function calling across all Gateway providers

The Gateway supports OpenAI-compatible tool use across every supported provider. You define functions; the model decides when to call them; your code executes them and returns results; the model continues the conversation.

Drop-in for Vercel AI SDK

The Gateway is designed to plug into @ai-sdk/openai with a custom baseURL. It meets strict OpenAI chat-completions semantics for tools — including streaming deltas, parallel calls, and the role:"tool" round-trip.
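For example, a minimal sketch using the Vercel AI SDK. The Gateway base URL and API-key variable are placeholders, and the tool syntax assumes an AI SDK 4-style generateText/tool API:

import { createOpenAI } from "@ai-sdk/openai";
import { generateText, tool } from "ai";
import { z } from "zod";

// Point the OpenAI-compatible client at the Gateway (URL and key are placeholders).
const gateway = createOpenAI({
  baseURL: "https://your-gateway.example.com/v1",
  apiKey: process.env.GATEWAY_API_KEY,
});

const { text } = await generateText({
  model: gateway("claude"),
  prompt: "What's the weather in Paris?",
  tools: {
    get_weather: tool({
      description: "Get current weather for a city.",
      parameters: z.object({ city: z.string().describe("City name") }),
      // Your own implementation; the SDK sends the result back as a role:"tool" message.
      execute: async ({ city }) => ({ city, temp_c: 14, condition: "cloudy" }),
    }),
  },
  maxSteps: 2, // allow one tool round-trip before the final answer
});

console.log(text);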

Defining tools

{
  "model": "claude",
  "messages": [
    { "role": "user", "content": "What's the weather in Paris?" }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather for a city.",
        "parameters": {
          "type": "object",
          "properties": {
            "city": { "type": "string", "description": "City name" }
          },
          "required": ["city"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
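Over raw HTTP, the same request can be sent as in the sketch below. The base URL is a placeholder; the /v1/chat/completions path and bearer auth follow the usual OpenAI convention:

// POST the request body shown above to the Gateway's chat-completions endpoint.
const res = await fetch("https://your-gateway.example.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.GATEWAY_API_KEY}`,
  },
  body: JSON.stringify({
    model: "claude",
    messages: [{ role: "user", content: "What's the weather in Paris?" }],
    tools: [/* the tool definitions shown above */],
    tool_choice: "auto",
  }),
});
const completion = await res.json();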

Constraints

  • Up to 128 tools per request.
  • Tool names must match ^[a-zA-Z0-9_-]{1,64}$.
  • parameters is a JSON Schema with type:"object" at the root.
  • Tool names within a single request must be unique.
  • strict: true on a function definition is passed through to OpenAI and, for other providers, validated server-side by the Gateway. A client-side pre-check of these constraints is sketched below.
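A minimal pre-flight check, assuming a ToolDef shape that mirrors the request format above (this is illustrative, not the Gateway's own validation logic):

type ToolDef = {
  type: "function";
  function: {
    name: string;
    description?: string;
    parameters: Record<string, unknown>;
    strict?: boolean;
  };
};

// Check the constraints listed above before sending the request.
function validateTools(tools: ToolDef[]): void {
  if (tools.length > 128) throw new Error("At most 128 tools per request");
  const seen = new Set<string>();
  for (const t of tools) {
    const { name, parameters } = t.function;
    if (!/^[a-zA-Z0-9_-]{1,64}$/.test(name)) throw new Error(`Invalid tool name: ${name}`);
    if (seen.has(name)) throw new Error(`Duplicate tool name: ${name}`);
    seen.add(name);
    if (parameters.type !== "object") {
      throw new Error(`parameters must be a JSON Schema with root type "object" (${name})`);
    }
  }
}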

tool_choice values

  • "auto": Model decides whether to call a tool (default when tools are present).
  • "none": Model must not call any tool.
  • "required": Model must call at least one tool.
  • {type:"function",function:{name:"X"}}: Force the model to call the named tool.

Tool-call response

When the model decides to call a tool, finish_reason is "tool_calls" and the assistant message contains a tool_calls array:

{
  "id": "chatcmpl-abc",
  "object": "chat.completion",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": null,
      "tool_calls": [
        {
          "id": "call_abc123",
          "type": "function",
          "function": {
            "name": "get_weather",
            "arguments": "{\"city\":\"Paris\"}"
          }
        }
      ]
    },
    "finish_reason": "tool_calls"
  }],
  "usage": {
    "prompt_tokens": 312,
    "completion_tokens": 48,
    "total_tokens": 360,
    "prompt_tokens_details": { "cached_tokens": 0 }
  }
}
  • arguments is always a JSON-encoded string — parse it before executing.
  • tool_calls[].id is stable across the request. For providers that don't emit IDs natively (Gemini), the Gateway synthesizes call_<uuid>. For Anthropic's toolu_XXX IDs, the Gateway prefixes with call_ for OpenAI-client compatibility.
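Because arguments arrives as a string, a handler typically parses it and dispatches on the function name. A minimal sketch (the toolImpls map and its get_weather implementation are hypothetical):

type ToolCall = {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
};

// Hypothetical local tool implementations, keyed by tool name.
const toolImpls: Record<string, (args: any) => Promise<unknown>> = {
  get_weather: async ({ city }) => ({ city, temp_c: 14, condition: "cloudy" }),
};

// Parse the JSON-encoded arguments string, then execute the named tool.
async function runToolCall(call: ToolCall): Promise<unknown> {
  const impl = toolImpls[call.function.name];
  if (!impl) throw new Error(`No implementation for tool ${call.function.name}`);
  const args = JSON.parse(call.function.arguments);
  return impl(args);
}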

Parallel tool calls

If the model emits multiple tool_calls in one response, execute them concurrently and return one role:"tool" message per call. Claude and OpenAI emit parallel calls natively; Gemini emits multiple functionCall parts per response.
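A sketch of concurrent execution, reusing the ToolCall type and runToolCall helper from the previous example; each result becomes one role:"tool" message keyed by the call's id:

// Execute all calls concurrently and build one role:"tool" message per call.
async function runToolCalls(calls: ToolCall[]) {
  return Promise.all(
    calls.map(async (call) => ({
      role: "tool" as const,
      tool_call_id: call.id,
      content: JSON.stringify(await runToolCall(call)),
    }))
  );
}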

Continuing the conversation

Send the tool result back as a role:"tool" message, alongside the prior assistant turn:

{
  "model": "claude",
  "messages": [
    { "role": "user", "content": "What's the weather in Paris?" },
    {
      "role": "assistant",
      "content": null,
      "tool_calls": [
        {
          "id": "call_abc123",
          "type": "function",
          "function": { "name": "get_weather", "arguments": "{\"city\":\"Paris\"}" }
        }
      ]
    },
    {
      "role": "tool",
      "tool_call_id": "call_abc123",
      "content": "{\"temp_c\": 14, \"condition\": \"cloudy\"}"
    }
  ],
  "tools": [ /* same tool defs as before */ ]
}
  • The tool_call_id must match an id the assistant emitted earlier in this request.
  • content is always a string (typically a JSON-stringified tool result). If your tool errors, pass a plain-text error string — the Gateway does not require valid JSON.
  • Tool result size cap: 256 KB. Larger payloads are truncated with a visible …[truncated by gateway: tool result exceeded 256KB] suffix.
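Programmatically, the round-trip amounts to appending the assistant turn and the tool messages, then re-sending. A sketch continuing the helpers above (the base URL and key are placeholders):

// Re-send the conversation with the assistant's tool_calls turn and the tool results appended.
async function continueWithToolResults(
  priorMessages: unknown[],
  assistantTurn: { role: "assistant"; content: string | null; tool_calls: ToolCall[] },
  tools: unknown[]
) {
  const toolMessages = await runToolCalls(assistantTurn.tool_calls);
  const res = await fetch("https://your-gateway.example.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.GATEWAY_API_KEY}`,
    },
    body: JSON.stringify({
      model: "claude",
      messages: [...priorMessages, assistantTurn, ...toolMessages],
      tools, // same tool definitions as before
    }),
  });
  return res.json();
}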

Unsupported model + tools

Some models don't support tool calling. Requesting tools on them returns HTTP 400 with code: "tool_unsupported_for_model":

  • deepseek-reasoner, deepseek-r1 — DeepSeek's reasoning variants do not expose tools. Use deepseek-chat instead.

Errors

All tool-use errors follow the OpenAI envelope shape. See Errors for the full code list. The most common:

  • tool_schema_invalid: parameters is not a valid JSON Schema object.
  • tool_choice_invalid: tool_choice references a function name that is not in tools[].
  • tool_call_id_mismatch: A role:"tool" message has a tool_call_id that the assistant did not emit earlier in the request.
  • tool_unsupported_for_model: The model doesn't support tools (e.g. deepseek-reasoner).
  • tool_call_invalid_arguments: With strict:true, the model emitted arguments that failed schema validation.
  • tool_provider_error: The upstream provider returned an error mid-stream.
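For example, a failed request can be branched on error.code. The field names follow the standard OpenAI error envelope ({ "error": { "message", "type", "param", "code" } }); res is a fetch Response like the ones in the sketches above, checked before reading the success body:

// Route on the Gateway's error code after a non-2xx response.
if (!res.ok) {
  const { error } = await res.json();
  if (error.code === "tool_unsupported_for_model") {
    // Fall back to a tool-capable model, e.g. deepseek-chat.
  } else if (error.code === "tool_call_id_mismatch") {
    throw new Error(`Tool result references an unknown call id: ${error.message}`);
  } else {
    throw new Error(`${error.code}: ${error.message}`);
  }
}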
