

Use the qai inject subcommand to serve adversarial payloads, run injection campaigns against LLMs, and analyze the results.
qai inject --help

qai inject serve

Start a malicious MCP server that serves poisoned tool definitions. Useful for testing agent resilience or simulating compromised MCP servers in controlled environments.
qai inject serve --transport <type> [OPTIONS]

Options

| Flag | Type | Required | Description |
| --- | --- | --- | --- |
| --transport | enum | Yes | Transport type: stdio or streamable-http |
| --port | int | No | Port for streamable-http (default: 8888) |
| --payload-dir | string | No | Directory containing custom payload YAML templates |
| --config | string | No | YAML file with a list of payload names to serve |

Examples

Serve all default payloads via stdio:
qai inject serve --transport stdio
Serve via HTTP on a custom port:
qai inject serve \
  --transport streamable-http \
  --port 9000
Serve only specific payloads listed in a config file:
qai inject serve \
  --transport stdio \
  --config my-payloads.yaml
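As a sketch, the my-payloads.yaml file from the example above could simply list payload names; the top-level "payloads" key is an assumption here, not qai's documented schema (the payload names are taken from the campaign examples later on this page):

```yaml
# my-payloads.yaml -- hypothetical structure; the "payloads" key is an assumption
payloads:
  - exfil_via_important_tag
  - preference_manipulation
```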
Use qai inject list-payloads to see available payloads before running a campaign.

qai inject campaign

Run an automated injection campaign against an LLM. Tests each payload by serving it as a poisoned tool and measuring the model’s susceptibility.
qai inject campaign [OPTIONS]

Options

| Flag | Type | Required | Description |
| --- | --- | --- | --- |
| --model | string | Yes* | Model ID in provider/model format (e.g., anthropic/claude-sonnet-4-20250514, openai/gpt-4o). Falls back to the QAI_MODEL env var. Bare model names default to Anthropic. |
| --rounds | int | No | Number of attempts per payload (default: 1) |
| --output | string | No | Output directory for campaign JSON (default: .) |
| --payloads | string | No | Comma-separated payload names, or all (default: all) |
| --technique | enum | No | Filter by technique: description_poisoning, output_injection, cross_tool_escalation |
| --target | string | No | Filter by target agent type (e.g., claude, gpt) |

*Required unless QAI_MODEL is set.

Setting Credentials

Store API keys in the OS keyring (prompts for key with masked input):
qai config set-credential anthropic
qai config set-credential openai
Alternatively, set provider environment variables directly (e.g., ANTHROPIC_API_KEY, OPENAI_API_KEY). See Provider Configuration for details.
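For the environment-variable route, the exports look like this (the key values are placeholders, not real credentials):

```shell
# Environment-variable alternative to the OS keyring; values are placeholders
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
export OPENAI_API_KEY="sk-your-key-here"
```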

Examples

Run campaign against Claude Sonnet:
qai inject campaign \
  --model anthropic/claude-sonnet-4-20250514 \
  --rounds 1 \
  --output results/
Run campaign against OpenAI GPT-4o with specific technique:
qai inject campaign \
  --model openai/gpt-4o \
  --technique description_poisoning \
  --rounds 2 \
  --output results/
Run campaign on specific payloads only:
qai inject campaign \
  --model anthropic/claude-sonnet-4-20250514 \
  --payloads exfil_via_important_tag,preference_manipulation
Campaign results are automatically saved to the database. Results are also written as JSON to the --output directory.
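Since campaign results are written as JSON, they can be post-processed with a short script. The field names below ("results", "outcome") are assumptions about the campaign JSON schema, not documented qai output:

```python
import json
from collections import Counter

def summarize(path: str) -> Counter:
    """Count outcome classifications across all payload results in a campaign file.

    Assumes a top-level "results" list whose entries carry an "outcome" field --
    a guess at the schema, not qai's documented format.
    """
    with open(path) as f:
        campaign = json.load(f)
    return Counter(r["outcome"] for r in campaign["results"])
```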

qai inject list-payloads

List available injection payload templates and their metadata.
qai inject list-payloads [OPTIONS]

Options

| Flag | Type | Description |
| --- | --- | --- |
| --technique | enum | Filter by technique: description_poisoning, output_injection, cross_tool_escalation |
| --target | string | Filter by target agent type (e.g., claude, gpt) |

Examples

List all payloads:
qai inject list-payloads
List description poisoning payloads only:
qai inject list-payloads --technique description_poisoning
List payloads designed for Claude:
qai inject list-payloads --target claude

qai inject report

Generate a summary report from saved campaign JSON results.
qai inject report --input <path> [OPTIONS]

Options

| Flag | Type | Required | Description |
| --- | --- | --- | --- |
| --input / -i | string | Yes | Path to campaign JSON file (from qai inject campaign) |
| --format / -f | enum | No | Output format: table or json (default: table) |

Examples

Display table summary:
qai inject report -i results/campaign-20260322-120000-000000.json
Output raw JSON:
qai inject report -i results/campaign-20260322-120000-000000.json -f json
The table shows each payload, the technique used, the outcome classification, and an evidence snippet from the LLM response.