AI Assistant

Setup

Configure AI Assistant by adding your AI provider API keys and customizing assistant behavior.

AI Assistant requires an API key from a supported AI provider. This page covers the administrator setup process.

Requirements

  • Administrator access to your Directus instance
  • API key from at least one supported provider: OpenAI, Anthropic, or Google (see below)
  • Users must have App access; public (non-authenticated) and API-only users cannot use AI Assistant
  • AI Assistant can be disabled at the infrastructure level using the AI_ENABLED environment variable
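As a sketch of the infrastructure-level switch (assuming a Docker Compose deployment; adapt to however you set Directus environment variables):

```shell
# In docker-compose.yml, add to the Directus service's environment block:
#   environment:
#     AI_ENABLED: "false"

# Or, for a directly-run instance, set the variable before starting Directus:
export AI_ENABLED=false
npx directus start
```

With `AI_ENABLED` set to `false`, AI Assistant is unavailable regardless of the settings configured in the admin panel.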

Alternatively, you can use an OpenAI-compatible provider like Ollama or LM Studio for self-hosted models.

Note that all users of AI Assistant share a single API key from your configured provider, so usage limits and costs are shared across all users. Monitor usage and costs in your provider's dashboard.

Get an API Key

You'll need an API key from at least one provider. Choose based on which models you want to use.

Configure Providers in Directus

Go to Settings → AI in the Directus admin panel.

AI Settings page in Directus

Enter Your API Keys

Add your API key for one or more providers:

  • OpenAI API Key - Enables GPT-4 and GPT-5 models
  • Anthropic API Key - Enables Claude models
  • Google API Key - Enables Gemini models

API keys are encrypted at rest in the database and masked in the UI.

Configure Allowed Models

For each provider, you can restrict which models are available to users. Use the Allowed Models dropdown next to each API key field to select the models users can choose from.

  • If no models are selected, no models from that provider will be available
  • You can add custom model IDs by typing them and pressing Enter (useful when new models are released)

This is useful for:

  • Controlling costs by limiting access to expensive models
  • Ensuring compliance by only allowing approved models
  • Simplifying the user experience by reducing model choices

Save Settings

Click Save to apply your changes. AI Assistant is now available to all users with App access.

OpenAI-Compatible Providers

In addition to the built-in providers, Directus supports any OpenAI-compatible API endpoint. This allows you to use self-hosted models, alternative providers, or private deployments.

For best results, use built-in cloud providers. Local models vary significantly in their tool-calling capabilities and may produce inconsistent results. If using OpenAI-compatible providers, we recommend cloud-hosted frontier models over locally-run models on personal hardware.

OpenAI Compatible Provider Settings

Configuration

In Settings → AI, scroll to the OpenAI-Compatible section and configure:

  • Provider Name - Display name shown in the model selector (e.g., "Ollama", "LM Studio")
  • Base URL - The API endpoint URL (required). Must be OpenAI-compatible.
  • API Key - Authentication key, if required by your provider
  • Custom Headers - Additional HTTP headers for authentication or configuration
  • Models - List of models available from this provider
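Before saving, it can help to confirm that the Base URL actually speaks the OpenAI API. A quick sketch using curl, assuming the provider implements the standard /v1/models listing endpoint (the URL below is Ollama's default; substitute your own endpoint and key):

```shell
# List available models from an OpenAI-compatible endpoint.
# Ollama serves its OpenAI-compatible API under /v1 by default.
curl -s http://localhost:11434/v1/models \
  -H "Authorization: Bearer $API_KEY"   # omit the header if your provider needs no key
```

A JSON response listing model IDs indicates the endpoint is compatible; a connection error usually means a wrong host, port, or path.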

Model Configuration

For each model, you can specify:

  • Model ID - The model identifier used in API requests
  • Display Name - Human-readable name shown in the UI
  • Context Window - Maximum input tokens (default: 128,000)
  • Max Output - Maximum output tokens (default: 16,000)
  • Supports Attachments - Whether the model can process images/files
  • Supports Reasoning - Whether the model has chain-of-thought capabilities
  • Provider Options - JSON object for model-specific parameters

The Provider Options field allows you to pass provider-specific parameters to the AI SDK. This is useful for enabling features like extended thinking or custom sampling parameters. See the Vercel AI SDK documentation for details.

Example Configurations
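As an illustrative sketch of an Ollama provider entry (the field names below mirror the tables above but are hypothetical; the actual storage format in Directus may differ):

```json
{
  "providerName": "Ollama",
  "baseUrl": "http://localhost:11434/v1",
  "models": [
    {
      "modelId": "llama3.1:8b",
      "displayName": "Llama 3.1 8B (local)",
      "contextWindow": 128000,
      "maxOutput": 16000,
      "supportsAttachments": false,
      "supportsReasoning": false,
      "providerOptions": {}
    }
  ]
}
```

For LM Studio, the same shape applies with its default base URL (typically http://localhost:1234/v1) and the model IDs it reports.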

Custom System Prompt

Optionally customize how the AI assistant behaves in Settings → AI → Custom System Prompt.

The default system prompt provides the AI with helpful instructions on how to interact with Directus and is tuned to provide good results.

If you choose to customize the system prompt, it's recommended to use the following template as a starting point:

Leave blank to use the default behavior.

Managing Costs

AI Assistant uses your own AI provider API keys. Every message and tool call costs money. Be mindful of usage, especially with larger models. You are responsible for the costs of your usage.

Tips for controlling costs:

  • Use faster, cheaper models (GPT-5 Nano, Claude Haiku 4.5, Gemini 2.5 Flash) for simple tasks
  • Use Allowed Models to restrict access to expensive models
  • Disable unused tools - disabled tools are not loaded into context, reducing token usage
  • Set spending limits in your provider dashboard
  • Consider self-hosted models via OpenAI-compatible providers for cost control

Next Steps

User Guide

Learn how users interact with AI Assistant.

Available Tools

See what actions the AI can perform.

Tips & Best Practices

Get the most out of AI Assistant.

Security

Access control and data protection.
