Documentation Index

Fetch the complete documentation index at: https://docs.q-uestionable.ai/llms.txt

Use this file to discover all available pages before exploring further.

What is qai?

qai tests the security of MCP (Model Context Protocol) servers and agentic AI systems. It scans servers for vulnerabilities, tests whether models follow poisoned tool descriptions, intercepts MCP traffic, and measures whether adversarial documents survive RAG retrieval. Findings include execution-level proof that vulnerabilities are exploitable end-to-end, not just scan results. All findings are mapped to four security frameworks: OWASP MCP Top 10, OWASP Agentic Top 10, MITRE ATLAS, and CWE.

Bring what you have, prove what matters

Already running Garak or PyRIT? qai picks up where they leave off. Import your existing results, associate them with a target, and qai uses those findings to drive its native modules — prioritizing inject payloads based on the compliance patterns your tools already discovered.
qai import report.jsonl --format garak --target my-server
qai import conversations.json --format pyrit --target my-server
qai import bipia.csv --format bipia --target my-server
A Garak, PyRIT, or BIPIA result says “this model follows injected instructions.” An audit result says “this MCP server has a vulnerable tool.” A qai finding connects the two: the model weakness was exploitable end-to-end through a real agentic system, with proof. See Import Overview for supported formats and Inject Campaigns for how imported findings inform payload selection.
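Putting the pieces together, a typical import-then-test loop might look like the sketch below. The `inject run` subcommand and its flags are assumptions for illustration, not a documented interface; see Inject Campaigns for the actual workflow.

```shell
# Import prior Garak results and associate them with a target
qai import report.jsonl --format garak --target my-server

# Hypothetical: run an inject campaign that prioritizes payloads
# matching the compliance patterns found in the imported results
qai inject run --target my-server --use-imported
```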

Modules

| Module | What it does |
|--------|--------------|
| audit  | Scans MCP servers for vulnerabilities, maps findings to the OWASP MCP Top 10 and MITRE ATLAS, outputs SARIF |
| proxy  | Intercepts MCP traffic between client and server for inspection, modification, and replay |
| inject | Tests AI agent susceptibility to tool poisoning and prompt injection using configurable payloads |
| chain  | Composes multi-step attack sequences across audit findings and inject techniques |
| ipi    | Generates adversarial documents with hidden instructions, tracks execution via authenticated callbacks |
| cxp    | Builds poisoned instruction files for coding assistants, validates whether models comply |
| rxp    | Measures whether adversarial documents appear in top-k retrieval results across embedding models |
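For example, the audit module's SARIF output can feed standard CI tooling. The `--output` flag and redirection below are assumptions for illustration; consult the audit module reference for the actual options.

```shell
# Hypothetical: scan an MCP server and emit findings as SARIF
# for ingestion by CI pipelines or code-scanning dashboards
qai audit scan http://localhost:3000/sse --output sarif > findings.sarif
```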

Research

qai’s research program studies Capability Trust Propagation Failure — how trust properties (provenance, integrity, authorization scope, intended audience) fail to propagate across capability boundaries in multi-step AI systems. This failure mode shows up across existing OWASP MCP Top 10 and OWASP Agentic Top 10 categories rather than requiring a new one. Findings carry execution-level proof: authenticated callbacks confirm that an attack ran end-to-end through a real agentic system, not merely that a model was convinced to write attacker-supplied text. Coding assistant context-file poisoning (the cxp module) targets real products directly, and positive findings can be filed as CVEs, creating a path to CVE-attributed research rather than research that exists only in published form.

Quick start

Install via pip (requires Python 3.11+):
pip install q-uestionable-ai
Run your first audit scan against an MCP server:
qai audit scan http://localhost:3000/sse
Or launch the web UI:
qai ui

Mitigation guidance system

Every finding includes automated mitigation guidance. The guidance system maps findings to remediation strategies grouped by severity and remediation effort. Use the web UI to toggle guidance on/off per finding, or access it via the CLI.
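A minimal sketch of pulling guidance from the CLI. The `findings` subcommand, the `--guidance` flag, and the finding ID are all assumptions used for illustration, not a documented interface.

```shell
# Hypothetical: list findings for a target, then show remediation
# guidance for one finding, grouped by severity and effort
qai findings list --target my-server
qai findings show FND-1234 --guidance
```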

Learn more

  • Quick Start — Install, configure a provider, run your first scan
  • Core Concepts — MCP threat model, framework mappings, module methodologies
  • Responsible Use — Authorization requirements and responsible disclosure
  • GitHub