A field report from rolling out retrieval-augmented generation in production, including cache bugs, bad embeddings, and how we fixed them.
When we first rolled out a RAG-based assistant for our internal SRE team, nothing in the vendor docs really prepared us for the messy parts.
The first painful incident happened on a Monday morning. A runbook query returned an outdated PostgreSQL failover procedure because of a cache bug: cached retrieval results were never invalidated when the runbook was updated.
Two weeks later, we saw a spike in “no relevant context found” errors during incident calls. The vector DB was healthy; the problem turned out to be bad embeddings that matched nothing at query time.
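Bad embeddings often fail silently: a wrong-dimension, zero, or NaN-filled vector matches nothing in cosine space and only surfaces as “no relevant context found” much later. A cheap sanity check before indexing or querying, sketched with hypothetical names and an assumed dimension, catches these early:

```python
import math

EXPECTED_DIM = 1536  # assumption: set to whatever your embedding model emits

def validate_embedding(vec: list[float], expected_dim: int = EXPECTED_DIM) -> list[float]:
    """Reject vectors that are the wrong shape or degenerate.

    A zero or non-finite vector has no meaningful direction in
    cosine space, so it silently retrieves nothing useful.
    """
    if len(vec) != expected_dim:
        raise ValueError(f"embedding dim {len(vec)} != expected {expected_dim}")
    norm = math.sqrt(sum(x * x for x in vec))
    if not math.isfinite(norm) or norm < 1e-6:
        raise ValueError("degenerate embedding (zero or non-finite norm)")
    return vec
```

Running this at write time rather than read time means a bad batch from the embedding service fails loudly in the ingestion pipeline instead of during an incident call.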
The marketing pages sold RAG as magic. In reality it behaves more like a database: if you don’t design for drift, invalidation, and observability, it will betray you at the worst moment.
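Treating retrieval like a database means instrumenting it like one. A minimal observability sketch (hypothetical names; the similarity floor is an assumption you would tune to your corpus) records the top-hit similarity of each query and flags a window whose median drops, which is what drift tends to look like from the outside:

```python
from collections import deque
from statistics import median

class RetrievalMonitor:
    """Track top-hit similarity scores over a sliding window."""

    def __init__(self, window: int = 100, floor: float = 0.5):
        self.scores = deque(maxlen=window)  # keeps only the last `window` scores
        self.floor = floor                  # assumption: tune per corpus

    def record(self, top_score: float) -> None:
        """Call once per query with the best similarity score returned."""
        self.scores.append(top_score)

    def degraded(self) -> bool:
        """True when median relevance over the window falls below the floor.

        Requires a minimum sample count so a single cold-start query
        does not page anyone.
        """
        return len(self.scores) >= 10 and median(self.scores) < self.floor
```

Wiring `degraded()` into an existing alerting path turns “RAG quietly got worse” into a signal you see before an incident call does.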