Glad it helped. The “rewrite the claim 5 ways” trick is exactly what I use for RAG too, but make them hard rewrites (change structure, order, and formality, not just synonyms) so retrieval stays stable when the user asks the same thing in different shapes.

On hallucination checking: it works if you force the checker to act like a verifier, not a debate bot. My rule is simple: no quoted evidence span from the retrieved context = “Not enough info”. I run a strict JSON judge that only outputs SUPPORTED / CONTRADICTED / NEI plus the exact quote it used, so you can log it as a TrustStack-style receipt.

And on “not enough info” being the hardest: agreed, most models are trained to fill the silence. The workaround is to manufacture NEI mechanically instead of hoping the model learns humility by accident. Take a supported claim, then nudge it over the line by adding a number or timeframe, swapping “can” → “will” or “some” → “most”, or injecting a “because”. Your verifier will correctly label those as NEI because the evidence isn’t there. That gives you a clean dataset that trains restraint in RAG: answer when grounded, otherwise say “not enough info” and ask for more context.

Liam @RFTSystems
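
P.S. In case it's useful, here's a minimal sketch of the gate I put on the judge's output. The function name and JSON shape are my own assumptions, not a fixed schema; the point is just the rule: any verdict without a verbatim quote from the retrieved context gets downgraded to NEI.

```python
import json

VERDICTS = {"SUPPORTED", "CONTRADICTED", "NEI"}

def enforce_receipt(raw_judge_output: str, context: str) -> dict:
    """Parse the judge's JSON and enforce:
    no verbatim evidence span from the context -> NEI."""
    try:
        out = json.loads(raw_judge_output)
    except json.JSONDecodeError:
        # judge didn't produce valid JSON: treat as no evidence
        return {"verdict": "NEI", "quote": None}
    verdict = out.get("verdict")
    quote = out.get("quote")
    if verdict not in VERDICTS:
        return {"verdict": "NEI", "quote": None}
    if verdict in {"SUPPORTED", "CONTRADICTED"}:
        # the quote must be an exact substring of the retrieved context
        if not quote or quote not in context:
            return {"verdict": "NEI", "quote": None}
    return {"verdict": verdict, "quote": quote}
```

Strict substring matching is deliberate: if the judge paraphrases its “evidence”, that's not a receipt you can log.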
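
And a toy version of the mechanical NEI nudges, assuming you start from a claim your verifier already marks SUPPORTED. The specific timeframe and the filler cause are placeholders I made up; swap in whatever perturbations fit your domain.

```python
def nudge_over_the_line(claim: str) -> list[str]:
    """Turn a supported claim into NEI candidates by
    strengthening it past what the evidence says."""
    variants = []
    # hedged modal -> absolute modal
    if " can " in claim:
        variants.append(claim.replace(" can ", " will ", 1))
    # weak quantifier -> strong quantifier
    if " some " in claim:
        variants.append(claim.replace(" some ", " most ", 1))
    # add a number/timeframe the evidence never mentions
    variants.append(claim.rstrip(".") + " within 6 months.")
    # inject an unsupported causal link (placeholder cause)
    variants.append(claim.rstrip(".") + " because of caching.")
    return variants
```

Each variant then goes back through the verifier; since the added specifics have no evidence span in the context, they should land as NEI, which is exactly the label you want in the training set.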