The assistant requires a provider and model to be configured before use. All other settings have sensible defaults.

Provider and Model

Set the provider and model via CLI:
qai config set assist.provider ollama
qai config set assist.model llama3.1
Or use the Settings page in the web UI — the Assistant section has provider and model selectors that save to the same configuration store. The assist.model value is the model name without a provider prefix. The runtime builds the full provider/model string from assist.provider and assist.model automatically. See LLM Provider Configuration for the full list of supported providers.
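Conceptually, the combination works like this. This is an illustrative sketch of the behavior described above, not qai's actual internals; the helper name is hypothetical.

```python
# Hypothetical sketch: the runtime joins assist.provider and assist.model
# into the single provider/model string used to address the LLM.
def full_model_string(provider: str, model: str) -> str:
    """E.g. ('ollama', 'llama3.1') -> 'ollama/llama3.1'."""
    return f"{provider}/{model}"
```

This is why assist.model should hold only the bare model name: storing "ollama/llama3.1" in assist.model would produce a doubled prefix.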

Environment Variable Override

You can set provider and model via environment variables instead of the config store:
export QAI_ASSIST_PROVIDER="ollama"
export QAI_ASSIST_MODEL="llama3.1"
Environment variables take precedence over stored settings.

Base URL

If your assistant’s LLM provider runs on a remote host or non-default port, set a custom base URL. This is useful when targeting a remote Ollama instance, LM Studio on another machine, or any custom OpenAI-compatible endpoint.
qai config set assist.base_url http://inference-server:11434
Or via environment variable:
export QAI_ASSIST_BASE_URL="http://inference-server:11434"
You can also set it in the Settings page under the Assistant section.
assist.base_url controls only the assistant’s LLM endpoint. It is completely independent of the base URLs used for testing targets (e.g., ollama.base_url). You can point the assistant at one Ollama instance while running inject campaigns against a different one.
If not set, litellm uses the provider’s default endpoint (e.g., http://localhost:11434 for Ollama).

Credentials

Cloud providers require an API key. Store credentials using the same keyring/environment variable pattern as other qai providers:
# Store in OS keyring (masked input)
qai config set-credential anthropic

# Or set as environment variable
export ANTHROPIC_API_KEY="sk-ant-..."
Local providers — ollama, lmstudio, and custom — skip the credential check entirely. If you’re using Ollama, no API key is needed; just ensure Ollama is running (ollama serve). See LLM Provider Configuration for credential management details.
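The check described above reduces to a membership test. This is a sketch under the assumption that the three local providers are exactly those listed; requires_api_key is an illustrative name, not qai's actual function.

```python
# Local providers skip the credential check entirely; cloud providers
# require an API key from the keyring or a {PROVIDER}_API_KEY env var.
LOCAL_PROVIDERS = {"ollama", "lmstudio", "custom"}

def requires_api_key(provider: str) -> bool:
    return provider not in LOCAL_PROVIDERS
```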

Embedding Model

The knowledge base uses a sentence-transformers model to generate embeddings for retrieval. The default is all-MiniLM-L6-v2, which runs locally and requires no API key. To override:
qai config set assist.embedding_model "all-mpnet-base-v2"
Or via environment variable:
export QAI_ASSIST_EMBEDDING_MODEL="all-mpnet-base-v2"
The embedding model runs locally via sentence-transformers regardless of which LLM provider you use. If you change the embedding model, run qai assist reindex to rebuild the knowledge base with the new embeddings.

User Knowledge Directory

The assistant indexes user-provided reference files from ~/.qai/knowledge/ by default. To use a different directory:
qai config set assist.knowledge_dir "/path/to/your/docs"
Or via environment variable:
export QAI_ASSIST_KNOWLEDGE_DIR="/path/to/your/docs"
See Knowledge Base for what to put in this directory.

Configuration Summary

| Setting | CLI | Env Var | Default |
|---|---|---|---|
| Provider | qai config set assist.provider | QAI_ASSIST_PROVIDER | None (required) |
| Model | qai config set assist.model | QAI_ASSIST_MODEL | None (required) |
| Base URL | qai config set assist.base_url | QAI_ASSIST_BASE_URL | None (provider default) |
| Embedding model | qai config set assist.embedding_model | QAI_ASSIST_EMBEDDING_MODEL | all-MiniLM-L6-v2 |
| Knowledge directory | qai config set assist.knowledge_dir | QAI_ASSIST_KNOWLEDGE_DIR | ~/.qai/knowledge/ |
| Credentials | qai config set-credential <provider> | {PROVIDER}_API_KEY | None (required for cloud) |

Quick Start

Three commands to get started with a local model:
qai config set assist.provider ollama
qai config set assist.model llama3.1
qai assist "what can qai test?"
For a cloud provider, add a credential first:
qai config set assist.provider anthropic
qai config set assist.model claude-sonnet-4-20250514
qai config set-credential anthropic
qai assist "how do I scan an MCP server?"