Models

configure models

Models are configured per agent using a model URI. Open an agent, then set the Default model field in the agent editor. Providers must be configured first; see Providers.

model URI format

{scheme}://{provider}/{model}?{params}
  • scheme can be llm (text), vlm (vision), or temb (text embeddings).
  • provider must match a configured provider name (for example openai or groq).
  • model is the provider's model identifier.
  • params is an optional query string of settings forwarded to the provider.
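
To make the decomposition concrete, here is a minimal Python sketch of how a model URI splits into these parts. It uses only the standard library and illustrates the format above; it is not the tool's actual parser:

from urllib.parse import urlsplit, parse_qs

uri = "llm://groq/llama-3.1-70b?temperature=0.2"
parts = urlsplit(uri)
scheme = parts.scheme            # "llm"
provider = parts.netloc          # "groq"
model = parts.path.lstrip("/")   # "llama-3.1-70b"
params = parse_qs(parts.query)   # {"temperature": ["0.2"]}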

supported query params

Chat Completions (openai providers)

OpenAI-compatible chat completion providers support these query parameters on the model URI:

  • frequency_penalty (float, -2.0 to 2.0)
  • presence_penalty (float, -2.0 to 2.0)
  • max_completion_tokens (int, > 0)
  • max_tokens (int, > 0)
  • temperature (float, 0.0 to 2.0)
  • parallel_tool_calls (bool, default true)
  • service_tier (auto, default, flex, scale, priority)
  • verbosity (low, medium, high)
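
Multiple parameters combine with & in the usual query string way. The values below are illustrative placeholders, assuming a chat completion provider named openai is configured:

llm://openai/gpt-4o?temperature=0.7&max_completion_tokens=2048&parallel_tool_calls=false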

Responses API (open_responses providers)

Responses providers support these query parameters on the model URI:

  • max_output_tokens (int, > 0)
  • temperature (float, 0.0 to 2.0)
  • top_p (float, 0.0 to 1.0)
  • parallel_tool_calls (bool, default true)
  • reasoning_effort (string, optional)
  • include_reasoning_encrypted (bool, default true)
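
As an illustrative example, assuming a Responses-capable provider named openai is configured (the values are placeholders, not recommendations):

llm://openai/gpt-5-2025-08-07?reasoning_effort=medium&max_output_tokens=4096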

examples

  • llm://openai/gpt-4o
  • llm://groq/llama-3.1-70b?temperature=0.2
  • llm://openai/gpt-5-2025-08-07?reasoning_effort=high