Documentation Index

Fetch the complete documentation index at: https://docs.q-uestionable.ai/llms.txt

Use this file to discover all available pages before exploring further.

The core layer (q_ai.core) provides shared services: persistent data storage, configuration management, framework mapping, mitigation guidance, and LLM provider abstraction.

SQLite Database

qai uses SQLite with WAL mode for concurrent access and with foreign key enforcement enabled. The database lives at ~/.qai/qai.db and auto-initializes, applying schema migrations on first access.
WAL mode requires the .db, -wal, and -shm files to remain in the same directory.
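
The setup described above can be approximated with Python's standard sqlite3 module. This is a minimal sketch; the actual initialization and migration code in q_ai.core may differ:

```python
import sqlite3
from pathlib import Path

DB_PATH = Path.home() / ".qai" / "qai.db"

def open_db() -> sqlite3.Connection:
    # Ensure ~/.qai exists before SQLite creates the .db/-wal/-shm files there.
    DB_PATH.parent.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(DB_PATH)
    # WAL mode allows concurrent readers alongside a single writer.
    conn.execute("PRAGMA journal_mode=WAL")
    # Foreign keys are off by default in SQLite and must be enabled per connection.
    conn.execute("PRAGMA foreign_keys=ON")
    return conn
```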

Core Data Models

Run

Represents a top-level operation or module execution. IDs are UUID strings.
Field          Type             Description
id             str              UUID identifier
module         str              Module name (audit, proxy, inject, chain, ipi, cxp, rxp, import, workflow)
status         RunStatus        Execution status
parent_run_id  str | None       Parent run for workflow child runs
name           str | None       Human-readable name (e.g., workflow ID)
target_id      str | None       Reference to the target being tested
config         dict | None      Run configuration (serialized as JSON)
started_at     datetime | None  Start timestamp
finished_at    datetime | None  Completion timestamp
guidance       str | None       JSON-serialized RunGuidance (IPI/CXP playbooks)
source         str | None       Run provenance: where the run was initiated (web, cli, api, mcp, garak, pyrit, sarif)
RunStatus values:
Status            Value  Description
PENDING           0      Awaiting execution
RUNNING           1      Currently executing
COMPLETED         2      Successfully finished
FAILED            3      Terminated with error
CANCELLED         4      User-initiated stop
WAITING_FOR_USER  5      Awaiting user interaction (e.g., IPI callback, CXP manual test)
PARTIAL           6      Completed with some module failures
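
The field and status tables map naturally onto a dataclass plus an IntEnum. The following is an illustrative sketch based on the tables above, not the actual definitions in q_ai.core:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import IntEnum

class RunStatus(IntEnum):
    PENDING = 0
    RUNNING = 1
    COMPLETED = 2
    FAILED = 3
    CANCELLED = 4
    WAITING_FOR_USER = 5
    PARTIAL = 6

@dataclass
class Run:
    id: str                               # UUID string
    module: str                           # e.g. "audit", "ipi", "workflow"
    status: RunStatus
    parent_run_id: str | None = None
    name: str | None = None
    target_id: str | None = None
    config: dict | None = None            # serialized as JSON in the database
    started_at: datetime | None = None
    finished_at: datetime | None = None
    guidance: str | None = None           # JSON-serialized RunGuidance
    source: str | None = None             # "web", "cli", "api", "mcp", ...
```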

Target

Represents a scan or test target.
Field     Type         Description
id        str          UUID identifier
type      str          Target type (e.g., “server”)
name      str          Human-readable name
uri       str | None   Connection URI or address
metadata  dict | None  Additional target properties

Finding

A security finding produced by a module run.
Field          Type         Description
id             str          UUID identifier
run_id         str          The run that produced this finding
module         str          Module name
category       str          Finding category (e.g., command_injection, auth, tool_poisoning)
severity       Severity     INFO (0), LOW (1), MEDIUM (2), HIGH (3), CRITICAL (4)
title          str          Short human-readable title
description    str | None   Detailed description
framework_ids  dict | None  Framework mappings (e.g., {"owasp_mcp_top10": "MCP05", "mitre_atlas": "AML.T0054"})
mitigation     dict | None  Persisted mitigation guidance data
source_ref     str | None   Scanner or source reference
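
A comparable sketch for Finding and its Severity enum, again illustrative rather than the real class definitions:

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    INFO = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class Finding:
    id: str
    run_id: str
    module: str
    category: str                       # e.g. "command_injection"
    severity: Severity
    title: str
    description: str | None = None
    framework_ids: dict | None = None   # e.g. {"owasp_mcp_top10": "MCP05"}
    mitigation: dict | None = None
    source_ref: str | None = None
```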

Evidence

Structured proof attached to findings.
Field          Type         Description
id             str          UUID identifier
finding_id     str          Parent finding
evidence_type  str          Type of evidence (e.g., “request”, “response”, “tool_call”)
content        str          Raw evidence content
metadata       dict | None  Additional structured data

Configuration System

Configuration merges from multiple sources with precedence ordering:
  1. CLI flags (highest)
  2. Environment variables
  3. Database settings (SQLite settings table, managed via Settings UI or qai config set)
  4. YAML config file (~/.qai/config.yaml)
  5. Built-in defaults (lowest)
Use qai config get <key> to see the resolved value and its source for any setting.
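
The precedence chain amounts to a highest-wins lookup across the five sources. The function below is a sketch; its name and signature are invented for illustration, and only the source ordering comes from the list above:

```python
def resolve_setting(key: str, cli_flags: dict, env: dict, db: dict,
                    yaml_cfg: dict, defaults: dict) -> tuple[object, str]:
    """Return (value, source) for a setting, checking sources from highest to lowest precedence."""
    for source_name, values in (
        ("cli", cli_flags),
        ("env", env),
        ("db", db),
        ("yaml", yaml_cfg),
        ("default", defaults),
    ):
        if values.get(key) is not None:
            return values[key], source_name
    raise KeyError(f"unknown setting: {key}")
```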

Credential Management

Provider API keys are stored in the OS keyring (Windows Credential Manager, macOS Keychain, Linux Secret Service). The keyring service name is q-ai. Resolution order for credentials: environment variable ({PROVIDER}_API_KEY) → OS keyring → error. CLI commands for credential management:
  • qai config set-credential <provider> — prompts for key with masked input
  • qai config delete-credential <provider> — removes key from keyring
  • qai config list-providers — shows credential status for known providers
  • qai config import-legacy-credentials — one-time migration from plaintext config.yaml to keyring
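
The resolution order could be implemented roughly as follows using the keyring package. The service name q-ai and the {PROVIDER}_API_KEY convention come from the documentation above; the function itself is a sketch:

```python
import os
import keyring

KEYRING_SERVICE = "q-ai"

def resolve_api_key(provider: str) -> str:
    # 1. Environment variable, e.g. ANTHROPIC_API_KEY for provider "anthropic".
    env_var = f"{provider.upper()}_API_KEY"
    if os.environ.get(env_var):
        return os.environ[env_var]
    # 2. OS keyring (Windows Credential Manager, macOS Keychain, Linux Secret Service).
    key = keyring.get_password(KEYRING_SERVICE, provider)
    if key:
        return key
    # 3. No credential found anywhere -> error.
    raise RuntimeError(f"no API key configured for provider {provider!r}")
```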

FrameworkResolver

Loads framework definitions from q_ai/core/data/frameworks.yaml and maps assessment categories (e.g., command_injection, auth, tool_poisoning) to technique IDs across four frameworks:
  • OWASP MCP Top 10 — MCP01 through MCP10
  • MITRE ATLAS — AML technique IDs (reviewed against v5.4.0)
  • CWE — Common Weakness Enumeration IDs
  • OWASP Agentic Top 10 — Agentic AI security categories
Used by audit scanners to populate framework_ids on findings. The qai update-frameworks CLI command checks for upstream changes to ATLAS and OWASP MCP Top 10.
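
A sketch of the resolver pattern, assuming PyYAML and a simple category-to-framework mapping shape; the real frameworks.yaml schema and class internals may differ:

```python
import yaml

class FrameworkResolver:
    """Illustrative resolver; the real frameworks.yaml schema may differ."""

    def __init__(self, path: str = "q_ai/core/data/frameworks.yaml"):
        with open(path) as f:
            # Assumed shape: {category: {framework_name: technique_id, ...}, ...}
            self._mappings = yaml.safe_load(f)

    def resolve(self, category: str) -> dict[str, str]:
        # e.g. resolve("tool_poisoning") might return
        # {"owasp_mcp_top10": "MCP05", "mitre_atlas": "AML.T0054", ...}
        return self._mappings.get(category, {})
```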

MitigationResolver

Generates structured, tiered remediation guidance for audit findings. Loads mitigations.yaml which covers all 10 OWASP MCP categories with recommended actions, platform support notes, and risk decision guidance. Each finding receives a MitigationGuidance object with GuidanceSection entries. Each section tracks its provenance (SourceType: baseline YAML, scanner-contributed, or runtime-enriched). MitigationGuidance is per-finding and generated at report time. This is separate from RunGuidance (which is per-run and generated at workflow start).
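
The guidance data shapes might look roughly like the following; field names beyond the ones mentioned above (MitigationGuidance, GuidanceSection, SourceType) are assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum

class SourceType(Enum):
    BASELINE = "baseline"   # from mitigations.yaml
    SCANNER = "scanner"     # contributed by an audit scanner
    RUNTIME = "runtime"     # enriched at report time

@dataclass
class GuidanceSection:
    title: str
    body: str
    source: SourceType      # provenance of this section

@dataclass
class MitigationGuidance:
    finding_id: str         # per-finding, generated at report time
    sections: list[GuidanceSection] = field(default_factory=list)
```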

RunGuidance

Per-run deployment playbooks generated by IPI and CXP module adapters at workflow start. Stored as JSON in the guidance column of the runs table. Structure:
  • RunGuidance — container with blocks, schema_version (currently 1), generated_at, module
  • GuidanceBlock — individual block with kind (BlockKind enum), label, items (list of strings), metadata (dict)
BlockKind values: inventory, trigger_prompts, deployment_steps, monitoring, interpretation, factors. IPI builds four blocks (inventory, trigger prompts, deployment steps, monitoring). CXP builds four blocks (inventory, trigger prompts, deployment steps, interpretation). The web UI renders these blocks as collapsible cards in the IPI and CXP run results tabs.
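
A sketch of these structures and how they could be serialized into the guidance column; the class and field names follow the description above, while the exact serialization format is an assumption:

```python
from dataclasses import asdict, dataclass, field
from enum import Enum
import json

class BlockKind(Enum):
    INVENTORY = "inventory"
    TRIGGER_PROMPTS = "trigger_prompts"
    DEPLOYMENT_STEPS = "deployment_steps"
    MONITORING = "monitoring"
    INTERPRETATION = "interpretation"
    FACTORS = "factors"

@dataclass
class GuidanceBlock:
    kind: BlockKind
    label: str
    items: list[str] = field(default_factory=list)
    metadata: dict = field(default_factory=dict)

@dataclass
class RunGuidance:
    module: str                # "ipi" or "cxp"
    generated_at: str          # timestamp (exact format assumed)
    blocks: list[GuidanceBlock] = field(default_factory=list)
    schema_version: int = 1

def to_guidance_json(guidance: RunGuidance) -> str:
    # Serialized form stored in the runs.guidance column.
    payload = asdict(guidance)
    for block in payload["blocks"]:
        block["kind"] = block["kind"].value
    return json.dumps(payload)
```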

Service Layer

The q_ai.services package provides typed query functions that sit between the web/orchestrator layers and the core database. Services encapsulate query logic and return typed dataclass results.
Service           Purpose
finding_service   List findings with filters, get finding with evidence, get findings for a run (including child runs), get imported findings for a target
run_service       Get run, list runs, get child runs, get finding counts
evidence_service  List evidence by finding or run, get content
Services are used by:
  • Module adapters — the inject adapter calls finding_service.get_findings_for_run() and finding_service.get_imported_findings_for_target() to inform template selection
  • Web route handlers — replacing inline DB queries with service calls for findings, runs, and evidence display
  • Workflow orchestrators — querying cross-module results during workflow execution
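
For example, a module adapter might combine the two finding_service calls mentioned above roughly like this (the surrounding function and the shape of the returned objects are hypothetical):

```python
from q_ai.services import finding_service

def select_templates_for_inject(run_id: str, target_id: str) -> list[str]:
    # Findings produced so far in this run (including workflow child runs).
    prior = finding_service.get_findings_for_run(run_id)
    # Findings imported from external tools (garak, pyrit, sarif) for the target.
    imported = finding_service.get_imported_findings_for_target(target_id)
    # Assumes the returned objects expose a .category attribute; key template
    # selection on the categories observed across both sources.
    categories = {f.category for f in [*prior, *imported]}
    return sorted(categories)
```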

External Tool Import

The q_ai.imports package provides parsers for external security tool results. Each parser reads a tool-specific format and produces normalized ImportedFinding objects with taxonomy bridging.
Component            Purpose
imports/cli.py       qai import CLI command with --format, --target, --dry-run
imports/garak.py     Garak JSONL report parser
imports/pyrit.py     PyRIT JSON conversation export parser
imports/sarif.py     SARIF 2.1.0 generic parser
imports/taxonomy.py  OWASP LLM Top 10 → qai category bridge with confidence levels
imports/models.py    ImportedFinding, ImportResult, TaxonomyBridge dataclasses
Import runs are stored with module='import' and source set to the tool name (garak, pyrit, sarif). The --target flag associates imported findings with a qai target, enabling them to inform inject template selection in workflows. See External Tool Import for CLI usage and parser details.
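
A sketch of what the normalized finding and the taxonomy bridge could look like; field names beyond those listed above, and the example mapping entry, are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ImportedFinding:
    source: str              # "garak", "pyrit", or "sarif"
    title: str
    category: str            # qai category after taxonomy bridging
    severity: int
    confidence: str          # confidence of the taxonomy mapping
    raw: dict | None = None  # original tool record, kept as evidence

# Assumed shape of the OWASP LLM Top 10 -> qai category bridge.
OWASP_LLM_TO_QAI = {
    "LLM01": ("prompt_injection", "high"),  # illustrative entry only
}

def bridge_category(owasp_id: str) -> tuple[str, str]:
    # Returns (qai_category, confidence); unknown IDs fall back to a low-confidence bucket.
    return OWASP_LLM_TO_QAI.get(owasp_id, ("uncategorized", "low"))
```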

Bridge Token

Shared secret for IPI callback listener ↔ web UI internal communication. Stored at ~/.qai/bridge.token as a 32-character hex string. Auto-generated on first access using secrets.token_hex(16) with exclusive file creation (race-safe). File permissions are 0600 on POSIX. Both the IPI listener and the web server read from the same file. The listener includes the token in internal HTTP POST notifications to the web server when a callback hit arrives.
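
The race-safe generation described above can be sketched like this; the helper name is hypothetical, but the token length, path, exclusive creation, and 0600 permissions follow the text:

```python
import os
import secrets
from pathlib import Path

TOKEN_PATH = Path.home() / ".qai" / "bridge.token"

def load_or_create_bridge_token() -> str:
    TOKEN_PATH.parent.mkdir(parents=True, exist_ok=True)
    try:
        # O_CREAT | O_EXCL makes creation atomic: if two processes race,
        # exactly one wins and the other falls through to reading the file.
        fd = os.open(TOKEN_PATH, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    except FileExistsError:
        return TOKEN_PATH.read_text().strip()
    try:
        token = secrets.token_hex(16)  # 32 hex characters
        os.write(fd, token.encode())
    finally:
        os.close(fd)
    return token
```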

Provider Registry

The q_ai.core.providers module manages LLM provider metadata for the Settings UI. Defines known providers (Anthropic, OpenAI, Groq, OpenRouter, Ollama, LM Studio, custom) with their type (cloud/local) and model fetching logic. The Settings UI uses this to populate provider dropdowns and validate credentials. LLM calls go through q_ai.core.llm which defines a ProviderClient protocol. The implementation in q_ai.core.llm_litellm wraps litellm for actual API calls, returning NormalizedResponse objects that abstract away provider-specific response formats.
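
A sketch of what the ProviderClient protocol and NormalizedResponse might look like; the names come from the text above, while the method signature and fields are assumptions:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class NormalizedResponse:
    text: str                 # assistant message content
    model: str
    input_tokens: int | None = None
    output_tokens: int | None = None
    raw: dict | None = None   # provider-specific payload, kept for debugging

class ProviderClient(Protocol):
    def complete(self, model: str, messages: list[dict], **kwargs) -> NormalizedResponse:
        """Send a chat request and return a provider-agnostic response."""
        ...
```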