
🐉 Dungeon Masters Vault: Local RAG Assistant

An advanced Retrieval-Augmented Generation (RAG) system designed for Dungeon Masters. This tool ingests markdown-based campaign notes, enriches them with AI-generated metadata, and provides an interactive terminal interface for querying your world's lore using DSPy and local LLMs.

⚔️ Key Features

  • Parallel Enrichment: Uses configurable multithreading to process multiple document chunks simultaneously across local LLM slots for high-speed ingestion.
  • Deep Context Retrieval: Unlike standard RAG, this system retrieves relevant chunks and then "peeks" at the full source file to provide the LLM with broader narrative context.
  • Local-First: Designed to run entirely on your hardware using LM Studio, keeping your campaign secrets private.
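The parallel enrichment step can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: `enrich` stands in for the real DSPy `IngestionAgent` call, and the chunk strings are made up.

```python
from concurrent.futures import ThreadPoolExecutor

def enrich(chunk: str) -> dict:
    # Placeholder for the real IngestionAgent call, which would
    # ask a local LLM slot for a synopsis, tags, and entities.
    return {"chunk": chunk, "synopsis": chunk[:20]}

def enrich_all(chunks: list[str], workers: int = 4) -> list[dict]:
    # Fan the chunks out across `workers` threads; each thread can
    # target a different loaded model slot in LM Studio.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(enrich, chunks))

results = enrich_all(["The dragon sleeps.", "The inn burns down."])
```

Thread-based parallelism is a good fit here because each worker spends most of its time waiting on the LLM server's HTTP responses.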

🏗️ Architecture

  1. Ingestion: Scans DATA_DIR for .md files.
  2. Chunking: Splits documents into 800-character segments with overlap.
  3. Enrichment: A DSPy IngestionAgent analyzes each chunk to extract:
    • Synopsis: A one-sentence summary.
    • Tags: Plot points, item names, or themes.
    • Entities: Specific NPCs, Locations, or Factions.
  4. Vector Store: Chunks and metadata are embedded using text-embedding-qwen3 and stored in a local Turso database.
  5. Interactive RAG: A terminal loop that uses ReAct (Reasoning and Acting) to answer queries based on retrieved context.
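The chunking step (800-character segments with overlap) can be sketched as a sliding window. The overlap size below is an assumption for illustration; the real value lives in the project's configuration.

```python
def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    # Slide a window of `size` characters over the text, stepping
    # forward by `size - overlap` so adjacent chunks share context.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("a" * 1000)  # 1000-char document -> two overlapping chunks
```

The overlap ensures that a sentence straddling a chunk boundary still appears whole in at least one chunk, which keeps the enrichment metadata accurate.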

🛠️ Setup

Prerequisites

  • UV: the Python package manager (link to installation instructions here).
  • LM Studio: Running a local server at localhost:1234 (or your specific IP).
  • Models: Inference and embedding models are configurable to your preference; download your model in LM Studio and update the config.

Installation

```bash
uv sync
```

🚀 Usage

1. Ingest & Enrich

Run the ingestion script to process your markdown files and build the vector database.

```bash
uv run src/ingest.py
```

2. Query the LLM

Launch the interactive session to ask questions about your campaign.

```bash
uv run src/retrieve.py
```

Example Query:

```text
📝 Query: Why did the party get free bread at the Golden Grain Inn?
📜 AI RESPONSE: Based on the session notes from 'Session_12.md', the party received free bread because the Rogue successfully intimidated the baker's assistant, and the Cleric later performed a minor miracle (Thaumaturgy) that impressed the owner.
```


📂 File Structure

```text
.
├── config.yaml             # Configuration for the app
├── load_ingestion_llms.sh  # Script to load multiple LLMs (run before ingest)
├── README.md
├── ROADMAP.md
├── src
│   ├── config_loader.py    # Loads the config YAML file
│   ├── embedding.py        # Class to talk to the LM Studio embedding model server
│   ├── experts
│   │   ├── ingestion_agent.py # Agent class for ingestion enrichment
│   │   └── retrieval_agent.py # Agent class for retrieval, with tools and database calls
│   ├── ingest.py           # Ingestion script to load your DnD campaign notes
│   └── retrieve.py         # Main Q&A loop for your notes
├── data                    # Git-ignored folder for the notes database
│   ├── dmv.db
│   ├── dmv.db-wal
│   ├── dmv.log
│   └── time_file.txt
├── pyproject.toml
├── LICENSE
└── uv.lock
```

⚙️ Configuration

In config.yaml, you can adjust multiple things:

  • Enrichment, embedding & retrieval models
  • DnD Notes Location (data_dir)
  • System Prompts for Ingestion & Retrieval Agents
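A config.yaml covering those options might look roughly like this. The key names below are hypothetical, chosen to illustrate the shape; check the real config.yaml in the repository for the actual schema.

```yaml
# Hypothetical shape — key names are illustrative, not the real schema.
data_dir: ./notes                      # Where your markdown campaign notes live
models:
  embedding: text-embedding-qwen3      # Embedding model loaded in LM Studio
  inference: your-preferred-model      # Chat model for enrichment & retrieval
prompts:
  ingestion: "You are an archivist summarizing campaign notes..."
  retrieval: "You are a sage answering questions about this world..."
```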
