The Runs view displays executed workflow results, findings from each module, and run history. Navigate runs via the sidebar or return to the Launcher.

Documentation Index
Fetch the complete documentation index at: https://docs.q-uestionable.ai/llms.txt
Use this file to discover all available pages before exploring further.
Scope. The Runs history list covers workflow runs — Assess an MCP Server, Test Document Ingestion, Test a Coding Assistant, and similar — producing audit, inject, IPI campaign, CXP, and chain results, plus import runs created by `qai import` or the Intel Import Results card. Probe runs (module = ipi-probe) and sweep runs (module = ipi-sweep) are not rendered on this page. Direct /runs?run_id=<id> URLs for a probe or sweep run that is bound to a target redirect (302) to the Intel target detail page at /intel/targets/<target_id>#probe-run-<id> or #sweep-run-<id>, so bookmarked deep links land on the correct surface — the per-category probe breakdown and the template × style sweep matrix. Unbound probe and sweep runs (rare; created before target-binding became required) still render in place on /runs with overview metadata only.

Run History Sidebar
The left sidebar lists all past runs, sorted by most recent first. Click any run to view its detailed results. Each run entry shows:

- Workflow name: e.g., “Assess an MCP Server”
- Status badge: Color-coded (success, error, partial, in progress)
- Target name: The target being tested
- Timestamp: When the run started
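The sidebar's ordering and entry fields can be pictured as plain data; a minimal sketch, where the field names are illustrative assumptions rather than the actual internal schema:

```python
from datetime import datetime

# Hypothetical run entries mirroring the sidebar fields described above.
runs = [
    {"workflow": "Assess an MCP Server", "status": "success",
     "target": "acme-mcp", "started": datetime(2024, 5, 2, 9, 30)},
    {"workflow": "Test Document Ingestion", "status": "partial",
     "target": "docs-pipeline", "started": datetime(2024, 5, 3, 14, 0)},
]

# Most recent first, as in the sidebar.
sidebar = sorted(runs, key=lambda r: r["started"], reverse=True)
```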
Deleting Runs
Each run in the sidebar has a delete button (trash icon). Clicking it opens a confirmation modal showing:

- The run name
- Number of findings that will be deleted
- Number of evidence items that will be deleted
Overview Header
When viewing a completed run, the overview header displays:

- Status badge: COMPLETED, FAILED, PARTIAL, or IN PROGRESS
- Workflow name: Human-readable name of the executed workflow
- Target and URI: The target being tested and optional connection details
- Duration: Wall-clock time from start to finish
- Module badges: One badge per child run (audit, proxy, inject, etc.) showing individual module status
- Finding counts: Summary of high/medium/low severity findings (if applicable)
Action Buttons
The top-right of the overview header contains result export and report controls:

- Generate Report: Launches the Generate Report workflow for the run’s target. Available after all modules complete.
- Export JSON: Full run bundle in JSON format (`run-bundle-v1` schema) for tooling integration.
- Download SARIF: SARIF format for audit findings (only available if the audit module ran).
- Download Report: Appears after a report has been generated, linking to the HTML report.
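A downloaded SARIF file follows the standard SARIF 2.1.0 layout (`runs[].results[]`), so it can be post-processed with any JSON tooling; a minimal sketch of tallying results per level:

```python
import json

def summarize_sarif(path):
    """Count SARIF results per level (error/warning/note) from a SARIF 2.1.0 file."""
    with open(path) as f:
        sarif = json.load(f)
    counts = {}
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            # Per the SARIF spec, a result with no explicit level defaults to "warning".
            level = result.get("level", "warning")
            counts[level] = counts.get(level, 0) + 1
    return counts
```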
Module Tabs
Below the overview, a tabbed interface organizes results by module. Visible tabs depend on which modules executed in the workflow.

Audit Tab
Displays findings from the audit module: discovered tool trust boundaries, permission issues, type confusion vulnerabilities, and more. Layout:

- Findings list: Each finding shows severity, title, description, and the scanner that discovered it.
- Framework pills: Security framework mappings (e.g., “OWASP”, “CWE-123”).
- Mitigation toggle: Click to expand/collapse remediation guidance.
- Previously Seen badge: Appears on findings detected in prior runs against the same target.
Proxy Tab
Network traffic capture and analysis results. Shows intercepted requests/responses between the client and MCP server.

Inject Tab
Prompt injection test results: payloads attempted, success rates, evasion effectiveness, and extraction outputs.

Inject Coverage Report
When audit findings were present in the assess workflow, the inject tab includes a coverage report section between the stats bar and the payload results table. The coverage report shows:

- Coverage ratio — how many audit finding categories were exercised by inject payloads (e.g., “4 of 7 categories exercised — 57%”)
- Tested categories — categories where at least one payload produced a security-relevant outcome, shown as green badges
- Untested categories — categories found by audit but not exercised by any payload, shown as warning badges. These are gaps worth investigating
- Native vs imported breakdown — when both native audit findings and imported external findings (from `qai import --target`) contributed categories, the report shows the count from each source
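The coverage ratio reduces to a set comparison between audit categories and the categories exercised by payloads; a sketch under assumed data shapes (the category names below are illustrative, not the product's taxonomy):

```python
def coverage_report(audit_categories, exercised_categories):
    """Split audit finding categories into tested vs untested and compute the ratio."""
    audit = set(audit_categories)
    tested = audit & set(exercised_categories)   # at least one payload hit these
    untested = audit - tested                    # gaps worth investigating
    pct = round(100 * len(tested) / len(audit)) if audit else 0
    return tested, untested, pct

tested, untested, pct = coverage_report(
    ["tool-poisoning", "type-confusion", "permission", "secrets",
     "trust-boundary", "schema", "transport"],
    ["tool-poisoning", "permission", "secrets", "schema"],
)
# 4 of 7 categories exercised (57%)
```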
IPI Tab
Indirect prompt injection results with guidance blocks for deployment mitigations. Guidance blocks provide step-by-step remediations organized by playbook section (detection, response, prevention).

Per-campaign inventory `template_id` column: the deployment playbook inventory table shows a `template_id` badge per payload (e.g., `whois`, `email`, `generic`), or “—” for pre-migration runs created before the template system shipped. See the Template Catalog for the full list of template IDs.
Tunnel-source badge on hit feed rows: each live hit row displays a small badge indicating how `source_ip` was resolved — tunnel (forwarded via `CF-Connecting-IP`) or direct (TCP peer). The badges follow existing badge styling conventions: `badge-outline` for tunnel, `badge-ghost` for direct.
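The tunnel-vs-direct resolution described above amounts to a header-first fallback; a minimal sketch, where the function name and request shape are assumptions, not the actual implementation:

```python
def resolve_source(headers, tcp_peer_ip):
    """Prefer the CF-Connecting-IP header (tunnel); fall back to the TCP peer (direct)."""
    # HTTP header names are case-insensitive, so normalize keys before lookup.
    normalized = {k.lower(): v for k, v in headers.items()}
    forwarded = normalized.get("cf-connecting-ip")
    if forwarded:
        return forwarded, "tunnel"   # rendered with badge-outline
    return tcp_peer_ip, "direct"     # rendered with badge-ghost
```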
CXP Tab
Context file poisoning results with deployment playbooks. Guidance blocks show how to detect and respond to context poisoning in AI-assisted development workflows.

Conclude Campaign button: Marks an IPI/CXP workflow as concluded, recording the outcome for future runs.

IPI Gating Banner
When the Test Document Ingestion workflow ran with RXP pre-validation enabled, the IPI tab shows an info banner above the deployment playbook indicating that retrieval gating was active. The banner shows the retrieval rate and lists any queries where the poison document was not retrieved (non-viable queries). If all queries were non-viable and generation was skipped, the banner shows a clear message instead of the playbook.

Import Runs
Runs created by `qai import` appear in the run history with display names like “Import (Garak)”, “Import (PyRIT)”, or “Import (SARIF)”. Import runs show a source badge with the external tool name. Import runs store findings but don’t have child module runs — their finding count reflects findings directly on the import run.
Imported findings in the findings table show source attribution badges distinguishing them from native audit findings.
Findings Display
Findings are presented in a table with columns for:

- Severity: High, Medium, Low (color-coded badges)
- Title: Finding name
- Description: Details of the issue
- Module: Which module discovered it (audit, inject, etc.)
- Framework IDs: Security standard references (OWASP, CWE, etc.)
Mitigation Guidance
Click a finding to expand mitigation guidance:

- Detection: How to identify the vulnerability in production
- Response: Immediate actions if exploited
- Prevention: Long-term fixes and hardening steps
Previously Seen Badge
Findings marked with “Previously Seen” indicate the issue was discovered in a prior run against the same target. Use this to:

- Track recurring vulnerabilities
- Validate fixes (findings should disappear after remediation)
- Identify persistent weaknesses requiring prioritized attention
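Conceptually, the badge is a membership test against findings from earlier runs on the same target; a minimal sketch, assuming a (title, module) identity key (the real matcher may use a more robust fingerprint):

```python
def mark_previously_seen(current_findings, prior_findings):
    """Flag current findings whose identity already appeared in prior runs on this target."""
    seen = {(f["title"], f["module"]) for f in prior_findings}
    return [
        dict(f, previously_seen=(f["title"], f["module"]) in seen)
        for f in current_findings
    ]
```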
Status Bar
For in-progress runs, the status bar shows:

- Progress: Current step or estimated completion
- Child run badges: Module-level status (pending, running, completed, failed)
- Waiting banner (if applicable): Indicates a paused run awaiting user input or external callback
Managed listener panel
The managed listener panel appears in the Operations / Runs view whenever the Web UI has spawned a managed callback listener (via the launcher’s Use tunnel toggle). What it surfaces:

- Public URL
- Tunnel provider (`cloudflare`)
- Listener PID
- Creation timestamp

The panel reads the `~/.qai/active-callback` state file written by the existing listener. See callbacks → auto-discovery on generate for the state-file semantics.
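A sketch of consuming such a state file follows; the JSON field names are assumptions for illustration, and the authoritative schema is described under callbacks → auto-discovery:

```python
import json
import os

def read_callback_state(path=None):
    """Return the managed-listener state (URL, provider, PID, timestamp), or None."""
    path = path or os.path.expanduser("~/.qai/active-callback")
    if not os.path.exists(path):
        return None  # no active managed listener
    with open(path) as f:
        # Assumed keys: public_url, provider, pid, created_at.
        return json.load(f)
```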
Concurrent Runs
You can run multiple workflows simultaneously:

- Navigate to the Launcher while a workflow is executing
- Start another workflow
- Switch between runs in the sidebar to monitor progress
Playbook Tabs
For IPI and CXP results, additional tabs organize playbook guidance:

- Detection: Recognize the attack in logs and system behavior
- Response: Containment and investigation steps
- Prevention: Architecture and code changes to eliminate the vulnerability
Exporting Results
JSON Export
The Export JSON button downloads the complete run bundle (`run-bundle-v1` schema) containing all findings, module outputs, and metadata. Use it for integration with external tools — see JSON Schema and Integrations.
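Once downloaded, a bundle is plain JSON, so it can feed any external pipeline; a post-processing sketch that tallies findings by severity (the `findings` and `severity` field names are assumptions here; the authoritative layout is on the JSON Schema page):

```python
import json
from collections import Counter

def severity_counts(bundle_path):
    """Tally findings by severity from an exported run bundle (assumed findings[].severity)."""
    with open(bundle_path) as f:
        bundle = json.load(f)
    return Counter(f["severity"] for f in bundle.get("findings", []))
```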