The need for a knowledge base
Let's face it: keeping a complex codebase in your head is a nightmare. As developers, we constantly wrestle with context, and our LLMs often hit the same wall. They generate code incredibly fast, but this speed quickly overwhelms their token budgets and our ability to keep up. We end up with code we barely understand and a hefty "knowledge debt." What we desperately need is a knowledge base, a crystal-clear map of our architecture, conventions, and all those hidden gotchas. The problem? Building and, more importantly, maintaining living documentation is a human struggle. It's time-consuming, goes stale fast, and is usually the first thing to get sidelined. Sure, we can even get an LLM to kickstart this knowledge base, giving us a starting point. But the game-changer is how we ask the LLM to maintain it.
Making a knowledge partner
The solution isn't a new human workflow; it's a core directive we embed into our LLM interactions. We give our AI a mission: "When you're working on code or answering questions, run through this specific cycle every single time." This turns our LLM into a self-improving, deeply knowledgeable partner that constantly updates and perfects our project's documentation.
Here's the detailed playbook. Think of it as writing the operating manual for your AI assistant, often kept in a simple instruction file like Claude.md:
1. READ
"Before you even think about generating code or an answer, thoroughly review the provided Knowledge Base markdown files. Your job is to pull out all the relevant architecture details, coding conventions, past decisions, known issues, and conceptual models that apply to the current task. Prioritise this context from our Knowledge Base over any general information you might have."
2. VERIFY
"Okay, you've got a proposed solution or some generated code. Now, actively check it against the actual codebase. Don't try to re-index everything; make targeted checks instead, reading only the files, signatures, and interfaces your solution touches to confirm that the Knowledge Base context still matches reality."
3. IMPLEMENT
"Great. With your understanding from the READ phase and the confidence from your VERIFY checks, it's time to finalise. Provide the requested code, solution, or answer now. Make sure it's precise, accurate, and fully incorporates all that specific context from the Knowledge Base and your verification findings."
4. LEARN
"You've just completed a task. Now, reflect: What new insights did you gain? Any patterns emerge? Did you clarify any conventions or stumble upon specific edge cases? Your mission is to write these learnings directly into our Knowledge Base. Format them as markdown additions or updates to existing files, providing clear file paths (e.g. architecture/new-service.md, gotchas/api-limit.md). You are actively documenting what you've learned for everyone's future benefit."
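Put together, the four directives above could live in a Claude.md at the repository root. The sketch below is illustrative only: the `knowledge/` directory name and the exact wording are assumptions, so adapt them to whatever layout your project already uses:

```markdown
# Knowledge Base protocol

On every task, run this cycle:

1. READ — Review the markdown files under `knowledge/` before
   answering. Prioritise that context over general knowledge.
2. VERIFY — Check your proposed solution against the actual source
   files it touches; do not re-index the whole codebase.
3. IMPLEMENT — Produce the final code or answer, fully consistent
   with the Knowledge Base and your verification findings.
4. LEARN — Write new insights back into `knowledge/` as markdown
   additions or updates, giving clear file paths for each change.
```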
This continuous "Learn" phase is absolutely transformative. Your LLM updates crucial files, adding to architecture/deps.md or recording a new issue in gotchas.md, ensuring your knowledge base is always current and useful. And yes, you are completely in control here. Your instruction file, like Claude.md, explicitly directs the LLM on how to structure and update these markdown files.
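To make the Learn phase concrete, here is the kind of entry the LLM might append to gotchas/api-limit.md after a task. The API behaviour and module names are invented for illustration; only the file path comes from the playbook above:

```markdown
# Gotcha: third-party API rate limit

- The billing API returns HTTP 429 when requests arrive in bursts;
  sustained batches of calls will intermittently fail.
- Mitigation: queue requests and back off exponentially on 429s.
- Discovered while implementing the invoice-export feature.
```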
Ultimately, this iterative process elevates your LLM from a cool code generator to an indispensable development partner. It skilfully manages token limits, develops a truly deep understanding of your specific codebase, and evolves right alongside your project. The result? Reduced token costs, no more full codebase re-indexes, and up-to-date human-readable documentation that dramatically boosts everyone's understanding.
This isn't just smarter prompting; it's shifting from trial-and-error to a truly intelligent, knowledge-centric partnership.