Prerequisites

  • Python 3.11 or later
  • uv (recommended) or pip
  • Node.js / npm — MCP server targets are typically npm packages. Install the example target used below with npm install -g @modelcontextprotocol/server-memory.
  • LLM provider API key (optional) — Required only for inject campaigns and chain execution. q-ai supports 100+ providers via litellm. Store credentials in the OS keyring or set environment variables. See Configure a provider below.
  • sentence-transformers + chromadb (optional) — Required only for the rxp module. Install with pip install q-uestionable-ai[rxp].

Install

pip install q-uestionable-ai

Or, with uv:

uv pip install q-uestionable-ai

Configure a provider (optional)

If you plan to run injection campaigns or chain analysis, set up a provider credential. q-ai supports 100+ providers via litellm. Store your API key in the OS keyring using the provider name:

qai config set-credential anthropic
# Prompts for your API key (masked input)

Verify configured providers:

qai config list-providers

Alternatively, set the environment variable for your provider directly:

export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export GROQ_API_KEY="gsk_..."
When selecting a model, use the provider/model format (e.g., anthropic/claude-sonnet-4-20250514, openai/gpt-4o, groq/llama-3.3-70b-versatile).
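If you script around these identifiers, splitting on the first slash is enough, since only the leading segment names the provider. A minimal sketch in Python; the helper name split_model_id is hypothetical, not part of qai:

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a litellm-style 'provider/model' string into its two parts.

    Only the first '/' delimits the provider, since model names can
    themselves contain slashes.
    """
    provider, sep, model = model_id.partition("/")
    if not sep:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, model

print(split_model_id("anthropic/claude-sonnet-4-20250514"))
# → ('anthropic', 'claude-sonnet-4-20250514')
```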

Run your first scan

1. Pick a target server

You can scan any MCP server you have permission to test. This guide uses the official MCP memory server as an example. You don’t need to start the server manually — q-ai spawns it via the --command flag.

Install the example target if you don’t have one:

npm install -g @modelcontextprotocol/server-memory
2. Run the scan

q-ai launches the server as a child process, connects over stdio, enumerates its tools and resources, then runs all 10 scanner modules against it — each mapped to the OWASP MCP Top 10 and MITRE ATLAS.

qai audit scan "npx @modelcontextprotocol/server-memory"

The transport is inferred automatically — a URL is treated as SSE or Streamable HTTP, a command string as stdio. Override with --transport if needed.
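That inference rule can be sketched as a small function. This is a hypothetical reimplementation for illustration only; q-ai's actual heuristic may differ, and --transport always overrides it:

```python
def infer_transport(target: str) -> str:
    """Guess the MCP transport from a target string.

    Hypothetical heuristic: URLs are HTTP-based (SSE if the path ends
    in /sse, otherwise Streamable HTTP); anything else is treated as a
    command to spawn over stdio.
    """
    if target.startswith(("http://", "https://")):
        return "sse" if target.rstrip("/").endswith("/sse") else "streamable-http"
    return "stdio"

print(infer_transport("npx @modelcontextprotocol/server-memory"))  # → stdio
print(infer_transport("http://localhost:3000/sse"))                # → sse
```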
3. Review the output

The CLI prints a live summary as it runs:
  • Connected — confirms the transport and server name
  • Findings — any security issues found, with severity, description, and remediation
  • Report saved — path to the full JSON report
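The saved JSON report can be post-processed with ordinary tooling. A minimal sketch, assuming the report is a JSON object with a top-level findings list whose items carry a severity field; these field names are guesses, not a documented schema:

```python
import json
from collections import Counter

def severity_counts(report_path: str) -> Counter:
    """Tally findings by severity from a saved scan report.

    Assumed schema: a top-level "findings" list whose items carry a
    "severity" field. Adjust the keys to match the actual report q-ai
    writes.
    """
    with open(report_path) as fh:
        report = json.load(fh)
    return Counter(
        finding.get("severity", "unknown")
        for finding in report.get("findings", [])
    )
```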

Enumerate a server

Before a full scan, you can quickly check what a server exposes:

qai audit enumerate "npx @modelcontextprotocol/server-memory"

This lists the server’s tools, resources, and prompts without running any security checks.

Import external results

If you already use Garak, PyRIT, or any SARIF-producing tool, import their results into qai and associate them with a target. Imported findings inform which inject payloads are prioritized when you run campaigns against that target.

# Register a target
qai targets add "My Server" http://localhost:3000/sse

# Import Garak results for that target
qai import report.jsonl --format garak --target <target-id>

# Import PyRIT results for the same target
qai import conversations.json --format pyrit --target <target-id>

# View all findings (native + imported) for the target
qai findings list --target <target-id>

See Import Overview for supported formats, taxonomy bridging, and how imported findings drive workflow integration.
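Before importing, it can help to sanity-check that a Garak .jsonl file parses cleanly, one JSON object per line. A generic helper for illustration; qai does not require this step:

```python
import json

def count_jsonl_records(path: str) -> int:
    """Count well-formed JSON objects in a JSONL file (one per line).

    Blank lines are skipped; a malformed line raises json.JSONDecodeError,
    surfacing the problem before `qai import` does.
    """
    count = 0
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            json.loads(line)  # raises on malformed lines
            count += 1
    return count
```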

Launch the web UI

qai ui

This opens the workflow launcher in your browser: cards that walk you through guided security assessments. The same findings are accessible from both the web UI and the CLI (qai findings list). Use --port to specify a port, or --no-browser to start the server without opening a browser.

Try other modules

# Generate indirect prompt injection payloads
qai ipi generate http://localhost:8080 --technique all --format markdown

# Launch the CXP interactive builder
qai cxp generate

# Validate RAG retrieval rank
qai rxp validate --profile hr-policy --model minilm-l6

# View all findings across modules
qai findings list

Next steps

Only test systems you own, control, or have explicit written authorization to test.