The core/ package provides types and utilities shared across all modules. All modules import from core rather than defining their own base types.

File layout

```
src/q_ai/core/
├── __init__.py
├── db.py              # SQLite connection manager, shared CRUD, schema migration
├── schema.py          # DDL for shared tables, PRAGMA user_version migration runner
├── models.py          # Shared data models (Run, Target, Finding, Evidence, Severity, RunStatus)
├── config.py          # Config file loader, credential management, settings precedence
├── frameworks.py      # FrameworkResolver — maps categories to OWASP/MITRE/CWE IDs
├── llm.py             # ProviderClient protocol, litellm integration, NormalizedResponse
├── data/
│   └── frameworks.yaml  # Framework mapping data
└── cli/
    ├── config.py      # qai config get/set/set-credential/list-providers
    ├── findings.py    # qai findings list
    ├── runs.py        # qai runs list
    └── targets.py     # qai targets list/add
```

Database

The database is SQLite in WAL mode, stored at ~/.qai/qai.db. The schema is versioned via PRAGMA user_version, and pending migrations run automatically on connect.
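The connect-time migration flow can be sketched as follows. This is a minimal illustration, not the actual db.py implementation; the `MIGRATIONS` registry and `connect` signature are assumptions.

```python
import sqlite3

# Hypothetical migration registry: entry i upgrades user_version i -> i + 1.
MIGRATIONS = [
    "CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)",
]

def connect(path: str = ":memory:") -> sqlite3.Connection:
    """Open the database, enable WAL, and apply any pending migrations."""
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")  # no-op for in-memory databases
    version = conn.execute("PRAGMA user_version").fetchone()[0]
    for i in range(version, len(MIGRATIONS)):
        conn.execute(MIGRATIONS[i])
        conn.execute(f"PRAGMA user_version = {i + 1}")
    conn.commit()
    return conn
```

Tracking the version in `PRAGMA user_version` keeps migration state inside the database file itself, so a fresh connection can always tell which DDL still needs to run.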

Shared tables

| Table | Purpose |
| --- | --- |
| runs | Workflow and module execution records with parent/child lineage |
| targets | Registered scan targets (MCP servers, AI pipelines) |
| findings | Security findings with severity, category, OWASP mapping |
| evidence | Supporting evidence attached to findings |
| settings | Key-value configuration store |

Module-specific tables

| Table | Module |
| --- | --- |
| audit_scans | audit |
| inject_results | inject |
| proxy_sessions | proxy |
| chain_executions, chain_step_outputs | chain |
| ipi_hits, ipi_payloads | ipi |
| cxp_test_results | cxp |
| rxp_validations | rxp |

Key types

Severity

Integer enum with five CVSS-aligned levels:
| Level | Usage |
| --- | --- |
| CRITICAL | Remote code execution, full compromise |
| HIGH | Data exfiltration, privilege escalation |
| MEDIUM | Information disclosure, partial access |
| LOW | Minor issues, defense-in-depth gaps |
| INFO | Informational observations, best-practice notes |
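A plausible sketch of the enum follows. The five levels come from the table above; the specific integer values are an assumption (the source only states that the enum is integer-backed), chosen so that comparisons order findings by severity.

```python
from enum import IntEnum

class Severity(IntEnum):
    """CVSS-aligned severity levels; integer values are assumed, not sourced."""
    INFO = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4
```

Because `IntEnum` members compare as integers, findings can be sorted or threshold-filtered directly (e.g., keep everything `>= Severity.MEDIUM`).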

RunStatus

Integer enum tracking run lifecycle:
| Status | Value | Description |
| --- | --- | --- |
| PENDING | 0 | Created, not yet started |
| RUNNING | 1 | Actively executing |
| COMPLETED | 2 | Finished successfully |
| FAILED | 3 | Finished with error |
| CANCELLED | 4 | Cancelled by user |
| WAITING_FOR_USER | 5 | Paused, awaiting human input |
| PARTIAL | 6 | Some steps completed, others failed |
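The table above translates directly into an `IntEnum` (a sketch; values are as documented):

```python
from enum import IntEnum

class RunStatus(IntEnum):
    """Run lifecycle states, stored as integers in the runs table."""
    PENDING = 0
    RUNNING = 1
    COMPLETED = 2
    FAILED = 3
    CANCELLED = 4
    WAITING_FOR_USER = 5
    PARTIAL = 6
```

Storing the integer value in SQLite and rehydrating with `RunStatus(value)` keeps the database schema free of string constants.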

Finding

Dataclass representing a single security finding:
| Field | Type | Description |
| --- | --- | --- |
| rule_id | str | Unique check identifier (e.g., MCP05-001) |
| category | str | Internal taxonomy category |
| title | str | Short human-readable title |
| description | str | Detailed vulnerability description |
| severity | Severity | CVSS-aligned severity level |
| evidence | str | Raw evidence supporting the finding |
| remediation | str | Recommended fix or mitigation |
| metadata | dict[str, Any] | Additional context |
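A minimal sketch of the dataclass, assuming `metadata` defaults to an empty dict (the default is an assumption; the field list and types are from the table above). A stand-in `Severity` is defined inline to keep the sketch self-contained.

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Any

class Severity(IntEnum):  # stand-in for q_ai.core.models.Severity
    INFO = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class Finding:
    """A single security finding produced by a module."""
    rule_id: str
    category: str
    title: str
    description: str
    severity: Severity
    evidence: str
    remediation: str
    metadata: dict[str, Any] = field(default_factory=dict)
```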

Configuration

Credential management

API keys are stored via the keyring library in the OS-native secret store. Resolution order: environment variable → OS keyring → error (no plaintext fallback).
```python
from q_ai.core.config import get_credential, set_credential

set_credential("anthropic", "sk-ant-...")
key = get_credential("anthropic")  # Returns str or None
```

Settings precedence

For non-secret settings, the resolution chain is: CLI flag → environment variable → DB setting → config file → built-in default.
```python
from q_ai.core.config import resolve

value, source = resolve("default_model")
# source: "cli", "env", "db", "file", or "default"
```
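The precedence chain amounts to walking the sources in order and reporting the first hit. A simplified sketch follows; the real `resolve` reads live sources rather than taking them as parameters, so this signature is purely illustrative.

```python
def resolve(key, cli_args=None, env=None, db=None, file_cfg=None, defaults=None):
    """Return (value, source) from the highest-precedence source holding key."""
    for source, mapping in (
        ("cli", cli_args),
        ("env", env),
        ("db", db),
        ("file", file_cfg),
    ):
        if mapping and key in mapping:
            return mapping[key], source
    return (defaults or {}).get(key), "default"
```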

Framework resolver

FrameworkResolver loads data/frameworks.yaml and maps internal finding categories to external framework IDs (OWASP MCP Top 10, OWASP Agentic, MITRE ATLAS, CWE).
```python
from q_ai.core.frameworks import FrameworkResolver

resolver = FrameworkResolver()
ids = resolver.resolve("command_injection")
# {"owasp_mcp": "MCP05", "cwe": "CWE-78", ...}
```

LLM provider client

q_ai.core.llm provides the ProviderClient protocol backed by litellm, supporting 100+ providers. All provider responses are normalized to NormalizedResponse before scoring or analysis; raw provider responses are preserved separately for evidence and research.

Model strings use the provider/model format (e.g., anthropic/claude-sonnet-4-20250514, ollama/llama3). Bare strings fall back to anthropic/ for backward compatibility. Tool calling is a hard prerequisite for inject campaigns: the client fails fast if the selected model doesn't support tool use.
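The model-string fallback and the tool-calling precheck can be sketched as below. Function names are hypothetical, and the capability check is injected as a plain callable (in practice it might wrap a litellm capability query) so the sketch has no external dependency.

```python
def normalize_model(model: str) -> str:
    """Qualify bare model names with the anthropic/ provider prefix."""
    return model if "/" in model else f"anthropic/{model}"

def require_tool_support(model: str, supports_tools) -> None:
    """Fail fast when the selected model cannot do tool calling.

    `supports_tools` is a stand-in capability check, injected here to keep
    the sketch dependency-free.
    """
    if not supports_tools(normalize_model(model)):
        raise ValueError(f"{model} does not support tool calling")
```

Failing at client construction, before any prompts are sent, keeps an inject campaign from silently running against a model that can never trigger the tool-use paths under test.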