From 64ac61fa57d1dbca1f7a6dab21b2f5d773e70a20 Mon Sep 17 00:00:00 2001
From: Jake Pullen
Date: Sun, 1 Mar 2026 09:43:56 +0000
Subject: [PATCH] =?UTF-8?q?docs:=20=F0=9F=93=9C=20Updated=20TODO?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 TODO | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/TODO b/TODO
index 8dc84f1..d2c1d61 100644
--- a/TODO
+++ b/TODO
@@ -1,10 +1,21 @@
-Test new embeddings
-Benchmark / rate embeddings & vectors
+context engineering,
 - only include vector hits that are x distance?
-Is RAG still the "thing"?
 - What is the cutting edge
- - "Context Engineering" is the current evolution, although GraphRaG has been a thing?
- - Context Engineering seems to be finding the balance of how to provide just the right amount of context to get best results.
- Too little context and the llm doesnt have enough info to give an accurate answer
- Too much conflicting context (poison)
- too much context (confusion)
+AI in the middle - make the ai generate the string for vector search
+
+instruction tuned embeddings?
+
+entity chunking & re-ranking
+
+breadth vs depth = separate workflows
+
+examples into prompts & better prompts
+
+common model attributes - temp & top-k
+
+QA specific embedding models?
+
+
+
+Evaluation metrics, how good is it doing?
+rate my response!?
\ No newline at end of file
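The first TODO item kept in the new file ("only include vector hits that are x distance?") can be sketched as a post-retrieval distance cutoff. This is a minimal illustration only, assuming cosine distance over pre-computed embeddings; the function names, the 0.35 threshold, and the toy vectors are hypothetical and not part of this repo:

```python
import math


def cosine_distance(a, b):
    # Cosine distance = 1 - cosine similarity; 0.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)


def filter_hits(query_vec, hits, max_distance=0.35):
    """Keep only vector hits within max_distance of the query.

    hits is a list of (chunk_text, embedding) pairs. max_distance is the
    tunable "x distance" from the TODO: too loose and irrelevant chunks
    poison the context, too tight and the LLM is starved of information.
    """
    scored = [(cosine_distance(query_vec, vec), text) for text, vec in hits]
    scored.sort()  # nearest first
    return [(dist, text) for dist, text in scored if dist <= max_distance]


query = [1.0, 0.0]
hits = [
    ("close match", [0.9, 0.1]),
    ("off topic", [0.0, 1.0]),  # orthogonal to the query, distance 1.0
]
print(filter_hits(query, hits))  # only "close match" survives the cutoff
```

In practice the cutoff interacts with the embedding model and the distance metric in use, so it would need to be tuned against whatever evaluation metric the "how good is it doing?" item ends up defining.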