Skip to main content

Documentation Index

Fetch the complete documentation index at: https://docs.q-uestionable.ai/llms.txt

Use this file to discover all available pages before exploring further.

The RXP module measures whether adversarial documents appear in top-k retrieval results across embedding models. It requires optional dependencies — install with:
pip install "q-uestionable-ai[rxp]"

qai rxp list-models

Display registered embedding models.
qai rxp list-models
Output is a table with columns: ID, Model, Dimensions, Description.

qai rxp list-profiles

Display built-in domain profiles and their query/corpus counts.
qai rxp list-profiles
Output is a table with columns: ID, Name, Queries, Corpus Docs.

qai rxp validate

Run retrieval validation — test whether a poisoned document ranks in the top-k results for a set of queries.
qai rxp validate [OPTIONS]
Either --profile or --corpus-dir is required.

Options

OptionRequiredDefaultDescription
--profileNo*Domain profile ID (from list-profiles)
--corpus-dirNo*Path to directory of .txt corpus files
--poison-fileNoPath to poison document. Required if not using a profile with built-in poison docs.
--modelNominilm-l6Embedding model: registry shortcut, all, or HuggingFace model name
--top-kNo5Number of retrieval results per query
--outputNoWrite JSON results to file
--verbose / -vNoShow per-query hit details
--saveNoPersist results to the qai database
--queryNoCustom query (repeatable). Required when using --corpus-dir without a profile.
* Either --profile or --corpus-dir is required.

Examples

Validate with a built-in profile:
qai rxp validate --profile hr-policy --model minilm-l6
Validate with custom corpus and poison file:
qai rxp validate \
  --corpus-dir ./knowledge_base \
  --poison-file ./adversarial_doc.txt \
  --query "What is the vacation policy?" \
  --model minilm-l6 \
  --top-k 10
Compare across all registered models:
qai rxp validate \
  --profile hr-policy \
  --model all \
  --output results.json
Save results to the qai database:
qai rxp validate \
  --profile hr-policy \
  --save

Output

Standard output shows per-model results:
Loading model: MiniLM-L6-v2...
Ingesting corpus: 4 documents + 1 poison document(s)
Running 5 queries (top-5)...

Results for minilm-l6:
  Retrieval rate: 3/5 (60.0%)
  Mean poison rank: 2.3 (when retrieved)
With --verbose, per-query detail is added:
  Query: "What is the vacation policy?"
    Poison retrieved: YES (rank 2)
  Query: "How do I reset my password?"
    Poison retrieved: NO
When running multiple models (--model all), a comparison table follows:
Comparison Summary:
Model           Rate            Mean Rank
------------------------------------------
minilm-l6      3/5 (60.0%)     2.3
bge-small      4/5 (80.0%)     1.8