the idea here is to drop the vectors and semantic search, in favour of an optimised knowledge base and llm tool calling.

the current implementation loads the closest chunks based on semantic similarity.

what if. 
  we ingest and enrich with a focus on tagging entities (knowing our qa will be around entities)
  we transform, grouping all entity-related information together
  we load that grouped information out into toon files.
  we give the agent a tool to load 1 or more toon files based on the entities in the question.
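a minimal sketch of the transform/load/tool steps above. everything here is hypothetical: the `Note` shape assumes entity tagging already happened upstream during enrichment, and the files are written as plain text with a `.toon` extension (actual toon serialization left as a placeholder):

```python
from collections import defaultdict
from pathlib import Path

# hypothetical shape produced by enrichment: (note text, [tagged entity names])
Note = tuple[str, list[str]]

def group_by_entity(notes: list[Note]) -> dict[str, list[str]]:
    """transform step: collect every note that mentions an entity."""
    grouped: dict[str, list[str]] = defaultdict(list)
    for text, entities in notes:
        for entity in entities:
            grouped[entity].append(text)
    return dict(grouped)

def entity_path(entity: str, out_dir: Path) -> Path:
    return out_dir / f"{entity.lower().replace(' ', '_')}.toon"

def write_entity_files(grouped: dict[str, list[str]], out_dir: Path) -> None:
    """load step: one file per entity.
    plain text shown here; swap in real toon serialization."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for entity, texts in grouped.items():
        entity_path(entity, out_dir).write_text("\n".join(texts))

def load_entity_files(entities: list[str], out_dir: Path) -> str:
    """the agent-facing tool: pull 1+ entity files into context."""
    parts = []
    for entity in entities:
        path = entity_path(entity, out_dir)
        if path.exists():
            parts.append(path.read_text())
    return "\n\n".join(parts)
```

the nice property: the agent only pulls the entities named in the question, so irrelevant notes never enter the context.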

the context window for modern llms is big enough to fit the entire campaign notes, but we still risk poisoning or confusing the model if we fill the context window with irrelevant notes.
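how the tool might be exposed to the agent — a sketch assuming an openai-style function-calling schema; the tool name and description are made up here:

```python
# hypothetical tool definition the agent sees; it decides which entities
# to request based on the names mentioned in the question
load_entity_notes_tool = {
    "type": "function",
    "function": {
        "name": "load_entity_notes",
        "description": (
            "load the grouped campaign notes for one or more named entities "
            "(characters, places, factions) mentioned in the question."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "entities": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "entity names to load notes for",
                }
            },
            "required": ["entities"],
        },
    },
}
```

this keeps the selection logic in the llm itself rather than in an embedding index: the model picks entities, we hand back exactly those files.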

also wondering if we should give the enrichment step the full file rather than chunks? worth experimenting...
