Shroud supported models

Shroud validates the model field (from the JSON body and/or the X-Shroud-Model header) against a per-provider allowlist. The lists below match the deployed Shroud configuration in the monorepo:

shroud/config/providers/<provider>.toml[provider.models].allowed

If you add or rename models in production, update those TOML files first, then refresh this page.
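For reference, an allowlist entry in those TOML files looks roughly like this. This is an illustrative fragment built from the key path above; the exact surrounding schema in the real files may differ:

```toml
# shroud/config/providers/openai.toml (illustrative fragment)
[provider.models]
allowed = [
  "gpt-4o",
  "gpt-4o-mini",
]
```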


How to use these IDs

  • Set X-Shroud-Provider to the provider name (see Shroud guide).
  • Pass the model as model in the JSON body (OpenAI-style chat) or via the X-Shroud-Model header, using the exact strings in the tables below.
  • For google / gemini, IDs are the short Gemini model names (e.g. gemini-2.5-pro).
  • For openrouter, the allowlist is empty in config, meaning any OpenRouter model slug is accepted (e.g. anthropic/claude-3.5-sonnet). OpenRouter’s catalog is authoritative.
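Putting the bullets above together, here is a request sketch in Python. The endpoint URL is a placeholder for your Shroud deployment; only the header names and body shape come from this page:

```python
import json

# Placeholder URL; point this at your actual Shroud deployment.
SHROUD_URL = "https://shroud.example.com/v1/chat/completions"

headers = {
    "Content-Type": "application/json",
    "X-Shroud-Provider": "openai",   # provider name, as in the tables below
    # Alternatively, the model can go in an X-Shroud-Model header
    # instead of the JSON body:
    # "X-Shroud-Model": "gpt-4o-mini",
}

body = {
    "model": "gpt-4o-mini",          # exact string from the OpenAI table
    "messages": [{"role": "user", "content": "Hello"}],
}

payload = json.dumps(body)
# then e.g. POST payload to SHROUD_URL with your HTTP client of choice
```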

OpenAI (X-Shroud-Provider: openai)

Model ID
gpt-4o
gpt-4o-mini
gpt-4.1
gpt-4.1-mini
gpt-4.1-nano
o1
o3
o3-mini
o4-mini

Anthropic (X-Shroud-Provider: anthropic)

Model ID
claude-sonnet-4-5-20250929
claude-haiku-4-5-20251001
claude-opus-4-6

Google Gemini (X-Shroud-Provider: google or gemini)

Model ID
gemini-2.5-pro
gemini-2.5-flash (default; called out in Shroud provider config comments)
gemini-2.0-flash (still on the allowlist; Google may deprecate or return errors for some accounts)

Mistral (X-Shroud-Provider: mistral)

Model ID
mistral-large-latest
mistral-medium-latest
mistral-small-latest
codestral-latest

Cohere (X-Shroud-Provider: cohere)

Model ID
command-r-plus
command-r
command-light

OpenRouter (X-Shroud-Provider: openrouter)

OpenRouter is itself a model routing gateway — it maintains its own catalog of available models and handles model resolution and validation on its backend. Because of this, Shroud’s config uses an empty allowlist for OpenRouter: there is no static list to maintain on the Shroud side. Any model slug that OpenRouter supports is accepted (for example openai/gpt-4o, anthropic/claude-sonnet-4, google/gemini-2.5-pro). See OpenRouter models for the full catalog.


LLM Token Billing (Stripe AI Gateway)

When your org has LLM Token Billing enabled, Shroud can route traffic through the Stripe AI Gateway instead of your provider API key. You still send the same X-Shroud-Provider (openai, anthropic, google, mistral, cohere, etc.).

Shroud rewrites the request body so Stripe receives provider/model (e.g. openai/gpt-4o-mini, google/gemini-2.5-pro) when the body’s model has no /. If you already pass a qualified id (contains /), it is left unchanged.
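The rewrite rule can be sketched as a one-line helper. This is a hypothetical function mirroring the behavior described above, not Shroud's actual code:

```python
def qualify_model(provider: str, model: str) -> str:
    """Prefix provider/ when the model id has no '/', as Shroud does
    before forwarding to the Stripe AI Gateway; already-qualified ids
    pass through unchanged."""
    return model if "/" in model else f"{provider}/{model}"

# qualify_model("openai", "gpt-4o-mini")   -> "openai/gpt-4o-mini"
# qualify_model("google", "openai/gpt-4o") -> "openai/gpt-4o" (unchanged)
```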

The stripe provider entry in Shroud also uses an empty model allowlist; gateway-side availability follows Stripe’s documentation, not the fixed tables above.


Policy allowlists (shroud_config)

Per-agent allowed_models / denied_models in shroud_config should use the same strings you send in requests (after any Stripe rewrite, the logical model is still the one you chose in your client). See Shroud — per-agent configuration.


Errors

If the model is not on the provider's allowlist, Shroud rejects the request with an error indicating that the model is not permitted for that provider. Use an ID from the tables above (or a valid OpenRouter slug when using OpenRouter).
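To avoid that round trip, you can pre-validate client-side against the same tables. The set below simply mirrors the OpenAI table on this page and must be kept in sync with the TOML config:

```python
# Mirrors the OpenAI table above; keep in sync with the TOML allowlist.
OPENAI_ALLOWED = {
    "gpt-4o", "gpt-4o-mini",
    "gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano",
    "o1", "o3", "o3-mini", "o4-mini",
}

def check_openai_model(model: str) -> None:
    """Raise before sending a request Shroud would reject anyway."""
    if model not in OPENAI_ALLOWED:
        raise ValueError(
            f"model {model!r} is not permitted on provider 'openai'"
        )
```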