Documentation Index
Fetch the complete documentation index at: https://docs.q-uestionable.ai/llms.txt
Use this file to discover all available pages before exploring further.
The core layer (q_ai.core) provides shared services: persistent data storage, configuration management, framework mapping, mitigation guidance, and LLM provider abstraction.
SQLite Database
qai uses SQLite with WAL mode for concurrent access and foreign key enforcement. The database lives at ~/.qai/qai.db and auto-initializes with schema migrations on first access.
WAL mode requires the .db, -wal, and -shm files to remain in the same directory.
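A minimal sketch of opening a connection the way the documentation describes, using only the standard-library sqlite3 module (the path and function name here are illustrative, not qai's actual code):

```python
import os
import sqlite3
import tempfile
from pathlib import Path

def connect(db_path: str) -> sqlite3.Connection:
    """Open a SQLite database with WAL mode and foreign key enforcement."""
    Path(db_path).parent.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(db_path)
    # WAL allows concurrent readers alongside a single writer.
    conn.execute("PRAGMA journal_mode=WAL")
    # Foreign keys are OFF by default in SQLite; enable per connection.
    conn.execute("PRAGMA foreign_keys=ON")
    return conn

db_path = os.path.join(tempfile.mkdtemp(), "qai_demo.db")
conn = connect(db_path)
print(conn.execute("PRAGMA journal_mode").fetchone()[0])  # wal
```

Note that `PRAGMA foreign_keys` must be issued on every new connection; it is not a persistent database property, unlike the WAL journal mode.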
Core Data Models
Run
Represents a top-level operation or module execution. IDs are UUID strings.
| Field | Type | Description |
|---|---|---|
| id | str | UUID identifier |
| module | str | Module name (audit, proxy, inject, chain, ipi, cxp, rxp, import, workflow) |
| status | RunStatus | Execution status |
| parent_run_id | str \| None | Parent run for workflow child runs |
| name | str \| None | Human-readable name (e.g., workflow ID) |
| target_id | str \| None | Reference to the target being tested |
| config | dict \| None | Run configuration (serialized as JSON) |
| started_at | datetime \| None | Start timestamp |
| finished_at | datetime \| None | Completion timestamp |
| guidance | str \| None | JSON-serialized RunGuidance (IPI/CXP playbooks) |
| source | str \| None | Run provenance: where the run was initiated (web, cli, api, mcp, garak, pyrit, sarif) |
RunStatus values:
| Status | Value | Description |
|---|---|---|
| PENDING | 0 | Awaiting execution |
| RUNNING | 1 | Currently executing |
| COMPLETED | 2 | Successfully finished |
| FAILED | 3 | Terminated with error |
| CANCELLED | 4 | User-initiated stop |
| WAITING_FOR_USER | 5 | Awaiting user interaction (e.g., IPI callback, CXP manual test) |
| PARTIAL | 6 | Completed with some module failures |
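The status values above map naturally onto an IntEnum. This is a sketch mirroring the documented values, not qai's actual definition; the TERMINAL set is an illustrative grouping, not part of the documented API:

```python
from enum import IntEnum

class RunStatus(IntEnum):
    """Mirror of the documented status values (sketch)."""
    PENDING = 0
    RUNNING = 1
    COMPLETED = 2
    FAILED = 3
    CANCELLED = 4
    WAITING_FOR_USER = 5
    PARTIAL = 6

# States from which no further transitions are expected.
TERMINAL = {RunStatus.COMPLETED, RunStatus.FAILED,
            RunStatus.CANCELLED, RunStatus.PARTIAL}

print(RunStatus(6).name)  # PARTIAL
```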
Target
Represents a scan or test target.
| Field | Type | Description |
|---|---|---|
| id | str | UUID identifier |
| type | str | Target type (e.g., "server") |
| name | str | Human-readable name |
| uri | str \| None | Connection URI or address |
| metadata | dict \| None | Additional target properties |
Finding
A security finding produced by a module run.
| Field | Type | Description |
|---|---|---|
| id | str | UUID identifier |
| run_id | str | The run that produced this finding |
| module | str | Module name |
| category | str | Finding category (e.g., command_injection, auth, tool_poisoning) |
| severity | Severity | INFO (0), LOW (1), MEDIUM (2), HIGH (3), CRITICAL (4) |
| title | str | Short human-readable title |
| description | str \| None | Detailed description |
| framework_ids | dict \| None | Framework mappings (e.g., {"owasp_mcp_top10": "MCP05", "mitre_atlas": "AML.T0054"}) |
| mitigation | dict \| None | Persisted mitigation guidance data |
| source_ref | str \| None | Scanner or source reference |
Evidence
Structured proof attached to findings.
| Field | Type | Description |
|---|---|---|
| id | str | UUID identifier |
| finding_id | str | Parent finding |
| evidence_type | str | Type of evidence (e.g., "request", "response", "tool_call") |
| content | str | Raw evidence content |
| metadata | dict \| None | Additional structured data |
Configuration System
Configuration merges from multiple sources with precedence ordering:
- CLI flags (highest)
- Environment variables
- Database settings (SQLite settings table, managed via Settings UI or qai config set)
- YAML config file (~/.qai/config.yaml)
- Built-in defaults (lowest)
Use qai config get <key> to see the resolved value and its source for any setting.
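The precedence order can be sketched as a layered lookup. This is an illustration of the documented merge behavior, not qai's implementation; the QAI_-prefixed environment variable naming is an assumption:

```python
import os

def resolve(key, cli_flags, db_settings, yaml_config, defaults):
    """Return (value, source) for a key, checking sources in precedence order."""
    env_key = "QAI_" + key.upper().replace(".", "_")  # assumed naming scheme
    layers = [
        ("cli", cli_flags),
        ("env", {key: os.environ[env_key]} if env_key in os.environ else {}),
        ("db", db_settings),
        ("yaml", yaml_config),
        ("default", defaults),
    ]
    for source, mapping in layers:
        if key in mapping:
            return mapping[key], source
    raise KeyError(key)

# A DB-level setting overrides YAML and defaults, but loses to a CLI flag.
value, source = resolve("timeout", {}, {"timeout": 30}, {"timeout": 10}, {"timeout": 5})
print(value, source)  # 30 db
```

Returning the winning source alongside the value is what lets a command like qai config get report where a setting came from.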
Credential Management
Provider API keys are stored in the OS keyring (Windows Credential Manager, macOS Keychain, Linux Secret Service). The keyring service name is q-ai.
Resolution order for credentials: environment variable ({PROVIDER}_API_KEY) → OS keyring → error.
CLI commands for credential management:
- qai config set-credential <provider>: prompts for the key with masked input
- qai config delete-credential <provider>: removes the key from the keyring
- qai config list-providers: shows credential status for known providers
- qai config import-legacy-credentials: one-time migration from plaintext config.yaml to the keyring
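The resolution order (environment variable, then keyring, then error) can be sketched as follows. The keyring lookup is injected as a callable here so the example is self-contained; in practice it would be keyring.get_password("q-ai", provider):

```python
import os

def resolve_credential(provider: str, keyring_lookup) -> str:
    """Resolve an API key: env var first, then OS keyring, else error."""
    env_var = f"{provider.upper()}_API_KEY"
    if env_var in os.environ:
        return os.environ[env_var]
    key = keyring_lookup("q-ai", provider)
    if key is not None:
        return key
    raise RuntimeError(
        f"No credential for {provider}; set {env_var} or store one in the keyring"
    )

# Fake keyring backend for illustration.
store = {("q-ai", "anthropic"): "sk-from-keyring"}
print(resolve_credential("anthropic", lambda svc, p: store.get((svc, p))))
```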
FrameworkResolver
Loads framework definitions from q_ai/core/data/frameworks.yaml and maps assessment categories (e.g., command_injection, auth, tool_poisoning) to technique IDs across four frameworks:
- OWASP MCP Top 10 — MCP01 through MCP10
- MITRE ATLAS — AML technique IDs (reviewed against v5.4.0)
- CWE — Common Weakness Enumeration IDs
- OWASP Agentic Top 10 — Agentic AI security categories
Used by audit scanners to populate framework_ids on findings. The qai update-frameworks CLI command checks for upstream changes to ATLAS and OWASP MCP Top 10.
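A minimal sketch of the category-to-frameworks lookup, with the YAML flattened into a dict. The command_injection mappings reuse the example values shown in the Finding table above; the CWE entry is the standard ID for OS command injection, and the overall shape of frameworks.yaml is an assumption:

```python
# Illustrative slice of what frameworks.yaml might define.
FRAMEWORK_MAP = {
    "command_injection": {
        "owasp_mcp_top10": "MCP05",
        "mitre_atlas": "AML.T0054",
        "cwe": "CWE-78",
    },
}

def resolve_framework_ids(category: str) -> dict:
    """Return framework mappings for an assessment category, or {} if unmapped."""
    return FRAMEWORK_MAP.get(category, {})

print(resolve_framework_ids("command_injection")["owasp_mcp_top10"])  # MCP05
```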
MitigationResolver
Generates structured, tiered remediation guidance for audit findings. Loads mitigations.yaml which covers all 10 OWASP MCP categories with recommended actions, platform support notes, and risk decision guidance.
Each finding receives a MitigationGuidance object with GuidanceSection entries. Each section tracks its provenance (SourceType: baseline YAML, scanner-contributed, or runtime-enriched).
MitigationGuidance is per-finding and generated at report time. This is separate from RunGuidance (which is per-run and generated at workflow start).
RunGuidance
Per-run deployment playbooks generated by IPI and CXP module adapters at workflow start. Stored as JSON in the guidance column of the runs table.
Structure:
- RunGuidance: container with blocks, schema_version (currently 1), generated_at, and module
- GuidanceBlock: individual block with kind (a BlockKind enum), label, items (list of strings), and metadata (dict)
- BlockKind values: inventory, trigger_prompts, deployment_steps, monitoring, interpretation, factors
IPI builds four blocks (inventory, trigger prompts, deployment steps, monitoring). CXP builds four blocks (inventory, trigger prompts, deployment steps, interpretation).
The web UI renders these blocks as collapsible cards in the IPI and CXP run results tabs.
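The structure above can be sketched with dataclasses and shown round-tripping through JSON the way the guidance column stores it. Field order and exact class definitions are assumptions; only the field names and BlockKind values come from the documentation:

```python
import json
from dataclasses import dataclass, field, asdict
from enum import Enum

class BlockKind(str, Enum):
    INVENTORY = "inventory"
    TRIGGER_PROMPTS = "trigger_prompts"
    DEPLOYMENT_STEPS = "deployment_steps"
    MONITORING = "monitoring"
    INTERPRETATION = "interpretation"
    FACTORS = "factors"

@dataclass
class GuidanceBlock:
    kind: BlockKind
    label: str
    items: list
    metadata: dict = field(default_factory=dict)

@dataclass
class RunGuidance:
    module: str
    blocks: list
    schema_version: int = 1
    generated_at: str = ""

guidance = RunGuidance(
    module="ipi",
    blocks=[GuidanceBlock(BlockKind.INVENTORY, "Tool inventory", ["tool_a"])],
)
# Serialized as JSON into the runs.guidance column.
payload = json.dumps(asdict(guidance))
print(json.loads(payload)["blocks"][0]["kind"])  # inventory
```

Mixing str into the Enum keeps the kind values JSON-serializable without a custom encoder.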
Service Layer
The q_ai.services package provides typed query functions that sit between the web/orchestrator layers and the core database. Services encapsulate query logic and return typed dataclass results.
| Service | Purpose |
|---|---|
finding_service | List findings with filters, get finding with evidence, get findings for a run (including child runs), get imported findings for a target |
run_service | Get run, list runs, get child runs, get finding counts |
evidence_service | List evidence by finding or run, get content |
Services are used by:
- Module adapters: the inject adapter calls finding_service.get_findings_for_run() and finding_service.get_imported_findings_for_target() to inform template selection
- Web route handlers: replacing inline DB queries with service calls for findings, runs, and evidence display
- Workflow orchestrators: querying cross-module results during workflow execution
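A sketch of the "typed query function" pattern the service layer uses. The dataclass fields and the function signature here are illustrative; only the function name comes from the documentation, and an in-memory dict stands in for the SQLite queries:

```python
from dataclasses import dataclass

@dataclass
class FindingSummary:
    """Illustrative typed result; real dataclasses live in q_ai.services."""
    id: str
    severity: int
    title: str

# Toy store standing in for the database the service wraps.
_FINDINGS = {
    "run-1": [FindingSummary("f-1", 3, "Command injection in tool handler")],
}

def get_findings_for_run(run_id: str, include_children: bool = False) -> list:
    """Sketch of finding_service.get_findings_for_run (signature assumed)."""
    return list(_FINDINGS.get(run_id, []))

print(get_findings_for_run("run-1")[0].title)
```

The point of the pattern: callers in the web and orchestrator layers get typed results and never touch SQL directly.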
External Tool Imports
The q_ai.imports package provides parsers for external security tool results. Each parser reads a tool-specific format and produces normalized ImportedFinding objects with taxonomy bridging.
| Component | Purpose |
|---|---|
imports/cli.py | qai import CLI command with --format, --target, --dry-run |
imports/garak.py | Garak JSONL report parser |
imports/pyrit.py | PyRIT JSON conversation export parser |
imports/sarif.py | SARIF 2.1.0 generic parser |
imports/taxonomy.py | OWASP LLM Top 10 → qai category bridge with confidence levels |
imports/models.py | ImportedFinding, ImportResult, TaxonomyBridge dataclasses |
Import runs are stored with module='import' and source set to the tool name (garak, pyrit, sarif). The --target flag associates imported findings with a qai target, enabling them to inform inject template selection in workflows.
See External Tool Import for CLI usage and parser details.
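Taxonomy bridging can be pictured as a lookup from an external taxonomy ID to a qai category plus a confidence level. The entries below are illustrative assumptions (the real mappings live in imports/taxonomy.py, and the exact qai category names are not documented here):

```python
# Hypothetical OWASP LLM Top 10 → qai category bridge entries.
BRIDGE = {
    "LLM01": ("prompt_injection", "high"),   # assumed category name
    "LLM06": ("tool_poisoning", "medium"),   # assumed mapping
}

def bridge_category(owasp_id: str):
    """Map an OWASP LLM Top 10 ID to a (qai_category, confidence) pair."""
    return BRIDGE.get(owasp_id, ("uncategorized", "low"))

print(bridge_category("LLM01"))
```

Carrying a confidence level lets downstream consumers weigh imported findings differently from native scanner results.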
Bridge Token
Shared secret for IPI callback listener ↔ web UI internal communication. Stored at ~/.qai/bridge.token as a 32-character hex string. Auto-generated on first access using secrets.token_hex(16) with exclusive file creation (race-safe). File permissions are 0600 on POSIX.
Both the IPI listener and the web server read from the same file. The listener includes the token in internal HTTP POST notifications to the web server when a callback hit arrives.
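The race-safe first-access generation can be sketched with os.O_EXCL, which makes file creation atomic so exactly one process wins if the listener and web server start simultaneously. The function name and temp path here are illustrative:

```python
import os
import secrets
import tempfile

def load_bridge_token(path: str) -> str:
    """Read the shared token, generating it race-safely on first access."""
    try:
        # O_EXCL + O_CREAT: creation is atomic; a concurrent creator loses.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    except FileExistsError:
        with open(path) as f:
            return f.read().strip()
    try:
        token = secrets.token_hex(16)  # 16 random bytes -> 32 hex characters
        os.write(fd, token.encode())
    finally:
        os.close(fd)
    return token

token_path = os.path.join(tempfile.mkdtemp(), "bridge.token")
tok = load_bridge_token(token_path)
print(len(tok))  # 32
```

The loser of the race falls into the FileExistsError branch and reads the winner's token, so both processes always agree on the same secret.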
Provider Registry
The q_ai.core.providers module manages LLM provider metadata for the Settings UI. Defines known providers (Anthropic, OpenAI, Groq, OpenRouter, Ollama, LM Studio, custom) with their type (cloud/local) and model fetching logic. The Settings UI uses this to populate provider dropdowns and validate credentials.
LLM calls go through q_ai.core.llm which defines a ProviderClient protocol. The implementation in q_ai.core.llm_litellm wraps litellm for actual API calls, returning NormalizedResponse objects that abstract away provider-specific response formats.
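A protocol-plus-normalized-response design can be sketched with typing.Protocol. The method name, signature, and NormalizedResponse fields below are assumptions; only the two type names come from the documentation, and a toy client stands in for the litellm-backed implementation:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class NormalizedResponse:
    """Provider-agnostic response shape (field names are illustrative)."""
    text: str
    model: str

class ProviderClient(Protocol):
    """Structural interface any LLM backend must satisfy (sketch)."""
    def complete(self, prompt: str, model: str) -> NormalizedResponse: ...

class EchoClient:
    """Toy backend used here instead of the litellm wrapper."""
    def complete(self, prompt: str, model: str) -> NormalizedResponse:
        return NormalizedResponse(text=prompt.upper(), model=model)

client: ProviderClient = EchoClient()  # satisfied structurally, no inheritance
resp = client.complete("hello", "demo-model")
print(resp.text)  # HELLO
```

Because Protocol uses structural subtyping, the litellm wrapper (or any test double) satisfies ProviderClient just by implementing the matching method.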