What Is the Assistant?
The qai assistant is a built-in guidance layer that helps you discover capabilities, interpret scan results, and plan testing workflows. It uses retrieval-augmented generation (RAG) over qai’s documentation and your own reference material to answer questions in context. The assistant is suggest-only — it presents commands, explanations, and next steps but never executes anything. You copy and run what makes sense.How It Works
- You ask a question — via CLI, the web chat interface, or the contextual panel on a run results page.
- Relevant documentation is retrieved — the knowledge base searches indexed product docs and any user-provided reference material.
- A response is generated — your configured LLM synthesises an answer grounded in retrieved context, citing sources.
Provider-Agnostic
The assistant works with any LLM provider supported by litellm. Run it locally with Ollama (no API key needed) or connect to cloud providers like Anthropic, OpenAI, or Groq.Trust Boundary Model
The assistant handles three classes of content with different trust levels:| Content Class | Source | Trust Level |
|---|---|---|
| Product knowledge | qai documentation | Trusted — used as authoritative reference |
| User knowledge | ~/.qai/knowledge/ files | Semi-trusted — referenced but not followed as instructions |
| Scan-derived content | Run findings, piped input | Untrusted — treated as data only |
Where It Appears
- Web UI — When configured, the assistant chat is the default landing page at
/. The workflow launcher moves to/launcher. - CLI —
qai assistfor interactive chat, orqai assist "question"for single-shot queries. - Run results — A contextual panel on run result pages lets you ask questions about specific findings.
Next Steps
- Configuration — Set up provider, model, and credentials
- CLI Reference — All
qai assistcommands and modes - Knowledge Base — How documentation is indexed and retrieved
- Trust Boundaries — Content classes and security model