Skip to main content

What Is the Assistant?

The qai assistant is a built-in guidance layer that helps you discover capabilities, interpret scan results, and plan testing workflows. It uses retrieval-augmented generation (RAG) over qai’s documentation and your own reference material to answer questions in context. The assistant is suggest-only — it presents commands, explanations, and next steps but never executes anything. You copy and run what makes sense.

How It Works

  1. You ask a question — via CLI, the web chat interface, or the contextual panel on a run results page.
  2. Relevant documentation is retrieved — the knowledge base searches indexed product docs and any user-provided reference material.
  3. A response is generated — your configured LLM synthesises an answer grounded in retrieved context, citing sources.
The assistant streams responses token-by-token in both the CLI and web UI.

Provider-Agnostic

The assistant works with any LLM provider supported by litellm. Run it locally with Ollama (no API key needed) or connect to cloud providers like Anthropic, OpenAI, or Groq.
# Local (Ollama)
qai config set assist.provider ollama
qai config set assist.model llama3.1

# Cloud (Anthropic)
qai config set assist.provider anthropic
qai config set assist.model claude-sonnet-4-20250514
qai config set-credential anthropic
See Configuration for full setup details.

Trust Boundary Model

The assistant handles three classes of content with different trust levels:
Content ClassSourceTrust Level
Product knowledgeqai documentationTrusted — used as authoritative reference
User knowledge~/.qai/knowledge/ filesSemi-trusted — referenced but not followed as instructions
Scan-derived contentRun findings, piped inputUntrusted — treated as data only
This layered model exists because qai tests prompt injection, and the assistant itself could be a target. Scan results from adversarial targets may contain injection attempts that the assistant must resist. See Trust Boundaries for the full model.

Where It Appears

  • Web UI — When configured, the assistant chat is the default landing page at /. The workflow launcher moves to /launcher.
  • CLIqai assist for interactive chat, or qai assist "question" for single-shot queries.
  • Run results — A contextual panel on run result pages lets you ask questions about specific findings.

Next Steps