Documentation Index
Fetch the complete documentation index at: https://docs.q-uestionable.ai/llms.txt
Use this file to discover all available pages before exploring further.
Use the qai inject subcommand to serve adversarial payloads, run campaigns against LLM models, and analyze results.
qai inject serve
Start a malicious MCP server that serves poisoned tool definitions. Useful for testing agent resilience or simulating compromised MCP servers in controlled environments.
qai inject serve --transport <type> [OPTIONS]
Options
| Flag | Type | Required | Description |
|---|---|---|---|
| `--transport` | enum | Yes | Transport type: `stdio` or `streamable-http` |
| `--port` | int | No | Port for `streamable-http` (default: `8888`) |
| `--payload-dir` | string | No | Directory containing custom payload YAML templates |
| `--config` | string | No | YAML file with a list of payload names to serve |
Examples
Serve all default payloads via stdio:
qai inject serve --transport stdio
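To point an agent under test at the stdio server, an MCP client configuration entry along these lines can be used (this follows the common `mcpServers` client schema, e.g. Claude Desktop's; the `qai-inject` server name is arbitrary):

```json
{
  "mcpServers": {
    "qai-inject": {
      "command": "qai",
      "args": ["inject", "serve", "--transport", "stdio"]
    }
  }
}
```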
Serve via HTTP on a custom port:
qai inject serve \
--transport streamable-http \
--port 9000
Serve only specific payloads listed in a config file:
qai inject serve \
--transport stdio \
--config my-payloads.yaml
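As a sketch, my-payloads.yaml could contain the payload names to serve under a single top-level key (the `payloads` key name is an assumption, not a documented schema; the payload names are taken from the campaign examples on this page):

```yaml
# Assumed config shape: a list of payload names to serve
payloads:
  - exfil_via_important_tag
  - preference_manipulation
```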
Use qai inject list-payloads to see available payloads before running a campaign.
qai inject campaign
Run an automated injection campaign against an LLM model. Tests each payload by serving it as a poisoned tool and measuring the model’s susceptibility.
qai inject campaign [OPTIONS]
Options
| Flag | Type | Required | Description |
|---|---|---|---|
| `--model` | string | Yes* | Model ID in `provider/model` format (e.g., `anthropic/claude-sonnet-4-20250514`, `openai/gpt-4o`). Falls back to the `QAI_MODEL` environment variable. Bare model names default to Anthropic. |
| `--rounds` | int | No | Number of attempts per payload (default: `1`) |
| `--output` | string | No | Output directory for campaign JSON (default: `.`) |
| `--payloads` | string | No | Comma-separated payload names, or `all` (default: `all`) |
| `--technique` | enum | No | Filter by technique: `description_poisoning`, `output_injection`, or `cross_tool_escalation` |
| `--target` | string | No | Filter by target agent type (e.g., `claude`, `gpt`) |
Setting Credentials
Store API keys in the OS keyring (prompts for key with masked input):
qai config set-credential anthropic
qai config set-credential openai
Alternatively, set provider environment variables directly (e.g., ANTHROPIC_API_KEY, OPENAI_API_KEY). See Provider Configuration for details.
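For example, in a POSIX shell (placeholder values shown; substitute real keys):

```shell
# Set provider API keys for the current shell session
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export OPENAI_API_KEY="sk-openai-placeholder"
```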
Examples
Run campaign against Claude Sonnet:
qai inject campaign \
--model anthropic/claude-sonnet-4-20250514 \
--rounds 1 \
--output results/
Run campaign against OpenAI GPT-4o with specific technique:
qai inject campaign \
--model openai/gpt-4o \
--technique description_poisoning \
--rounds 2 \
--output results/
Run campaign on specific payloads only:
qai inject campaign \
--model anthropic/claude-sonnet-4-20250514 \
--payloads exfil_via_important_tag,preference_manipulation
Campaign results are automatically saved to the database and also written as JSON to the --output directory.
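The campaign JSON can be post-processed with ordinary tooling. The exact schema is not documented here, so the following Python sketch assumes each result record carries `payload`, `technique`, and `outcome` fields; adapt the field names to the actual output:

```python
import json
from collections import Counter

# Hypothetical campaign records; the field names are assumptions
# for illustration, not the documented qai schema.
sample = json.loads("""
[
  {"payload": "exfil_via_important_tag", "technique": "description_poisoning", "outcome": "injected"},
  {"payload": "preference_manipulation", "technique": "output_injection", "outcome": "refused"}
]
""")

# Tally outcome classifications across the campaign
counts = Counter(record["outcome"] for record in sample)
print(dict(counts))  # → {'injected': 1, 'refused': 1}
```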
qai inject list-payloads
List available injection payload templates and their metadata.
qai inject list-payloads [OPTIONS]
Options
| Flag | Type | Description |
|---|---|---|
| `--technique` | enum | Filter by technique: `description_poisoning`, `output_injection`, or `cross_tool_escalation` |
| `--target` | string | Filter by target agent type (e.g., `claude`, `gpt`) |
Examples
List all payloads:
qai inject list-payloads
List description poisoning payloads only:
qai inject list-payloads --technique description_poisoning
List payloads designed for Claude:
qai inject list-payloads --target claude
qai inject report
Generate a summary report from saved campaign JSON results.
qai inject report --input <path> [OPTIONS]
Options
| Flag | Type | Required | Description |
|---|---|---|---|
| `--input` / `-i` | string | Yes | Path to campaign JSON file (from `qai inject campaign`) |
| `--format` / `-f` | enum | No | Output format: `table` or `json` (default: `table`) |
Examples
Display table summary:
qai inject report -i results/campaign-20260322-120000-000000.json
Output raw JSON:
qai inject report -i results/campaign-20260322-120000-000000.json -f json
The table shows each payload, technique used, outcome classification, and evidence snippet from the LLM response.