Progressive Disclosure: The secret to taming AI context (without losing your mind)
The first time I heard the term Progressive Disclosure, I was driving. I was listening to Maximiliano Contieri's talk at the latest Nerdearla, and as soon as he said those words, something clicked in my head.
At the time, I had been fascinated by the concept of Smart Brevity (precise, to-the-point communication), but this was different. It was a technical and conceptual strategy for becoming better at building with Artificial Intelligence.
As a frontend developer, constantly dealing with UI/UX, the term felt strangely familiar. In interfaces, it means not overwhelming the user and showing only the information they need at each step. But what happens when we apply this to Large Language Models (LLMs)? Magic.
A few days after that talk, a teammate showed me Engram, a tool that promised to solve one of the biggest problems we face today with AI. When I looked into how it worked under the hood, I realized it used exactly this concept. So I decided to dig deeper into why Progressive Disclosure is the technique every developer should master today.
The new bottleneck: Context management
Maxi Contieri explained it with a brilliant analogy in his talk: 30 years ago, programmers fought with RAM, dealing with pointers and the Garbage Collector. Today, our job is to manage AI context.
The naive approach to giving an LLM "memory" is to search for a keyword and dump the entire file into the prompt. If we throw a huge wall of text at it, three things happen:
- Costs skyrocket (you're paying for thousands of unnecessary tokens).
- The model slows down.
- The AI starts hallucinating (it suffers from the Lost in the Middle problem: information buried in the middle of a long context window tends to get ignored or forgotten).
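To make the problem concrete, here is a minimal sketch of that naive approach: grep for a keyword, then dump every matching file into the prompt wholesale. The file names and the rough "4 characters per token" estimate are illustrative, not from any specific tool.

```python
import tempfile
from pathlib import Path

def naive_context(query: str, notes_dir: str) -> str:
    """Grep for a keyword and inject every matching file, in full."""
    chunks = []
    for path in sorted(Path(notes_dir).glob("*.md")):
        text = path.read_text()
        if query.lower() in text.lower():
            chunks.append(text)  # the ENTIRE file goes in, relevant or not
    return "\n\n".join(chunks)

# Demo: two notes; only one mentions "auth", but that whole file is injected.
with tempfile.TemporaryDirectory() as d:
    Path(d, "auth.md").write_text("auth flow\n" + "filler line\n" * 500)
    Path(d, "deploy.md").write_text("deploy steps\n")
    ctx = naive_context("auth", d)
    # Rough rule of thumb: ~4 characters per token.
    print(len(ctx) // 4, "~ tokens injected for a single keyword hit")
```

One keyword hit is enough to drag thousands of tokens of filler into the context window, which is exactly the cost/latency/hallucination trifecta described above.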
That's where Progressive Disclosure comes to the rescue.
The debate on X: Obsidian vs. Engram
This problem came up recently in a debate on X (Twitter), where several developers, including Gentleman Programming, were discussing knowledge management tools.
Many Obsidian-based plugins tend to run a basic grep, grab the entire Markdown files that match, and inject everything into the AI, consuming context extremely fast. In contrast, the alternative that shines is Engram.
Why is Engram more efficient? Because it uses pure search engineering combined with Progressive Disclosure:
- FTS5 (Full-Text Search 5): A super lightweight SQLite module for blazing-fast local text search.
- BM25: A ranking algorithm that goes beyond keyword matching; it scores each document's real relevance (weighting how often a term appears against how rare it is across the corpus), filtering out noise.
- Progressive Disclosure: Instead of giving the LLM the entire document, the system first shows it a summarized "menu" (metadata or short descriptions). The AI agent reads that menu and decides which specific memory block needs to be expanded to solve the task.
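The three pieces above compose into a "menu first, expand later" flow. Here is a minimal sketch in Python; the table name, columns, and sample data are invented for illustration (this is not Engram's actual schema), and it assumes SQLite was built with FTS5, which standard Python builds normally include.

```python
import sqlite3

# An in-memory knowledge base with a lightweight "menu" (title + summary)
# and a heavy "body" that is only expanded on demand.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE memories USING fts5(title, summary, body)")
db.executemany(
    "INSERT INTO memories VALUES (?, ?, ?)",
    [
        ("auth-flow", "How login tokens are refreshed", "Full JWT refresh write-up..."),
        ("db-schema", "Tables and relations of the main DB", "Full schema write-up..."),
        ("deploy", "Steps of the CI/CD pipeline", "Full deployment write-up..."),
    ],
)

def search_menu(query: str, limit: int = 3):
    # Step 1: return only the menu (id, title, summary), ranked by BM25.
    # In FTS5, a lower bm25() score means a more relevant document.
    return db.execute(
        "SELECT rowid, title, summary FROM memories "
        "WHERE memories MATCH ? ORDER BY bm25(memories) LIMIT ?",
        (query, limit),
    ).fetchall()

def expand(rowid: int) -> str:
    # Step 2: the agent picks one entry from the menu, and only then
    # does the full memory block enter the context window.
    return db.execute(
        "SELECT body FROM memories WHERE rowid = ?", (rowid,)
    ).fetchone()[0]

menu = search_menu("login tokens")  # the agent sees summaries, not bodies
detail = expand(menu[0][0])         # expand only the chosen block
```

The key design choice is that `search_menu` never touches the `body` column: the LLM pays tokens for a handful of summaries, decides which one matters, and expands exactly one block.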
Goodbye to "Waterfall 2.0" and endless documentation
These days, it's common to see repositories flooded with Markdown documentation autogenerated by AI agents. It's massive, endless, and frankly, not very human.
Applying Progressive Disclosure to the way we document saves us. Instead of creating a monolithic wall of text, documentation should start from lightweight index files and reserve the deep detail for the moments when it is strictly necessary to consult it.
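In practice, that can be as simple as a short index file whose one-line entries point to the detailed documents, so an agent (or a human) only opens what the task requires. A hypothetical layout (the file names are invented for illustration):

```markdown
<!-- docs/INDEX.md — the lightweight "menu" that gets read first -->
# Project docs index

- [auth.md](./auth.md) — How login and token refresh work (read before touching /login)
- [db-schema.md](./db-schema.md) — Tables, relations, and migration rules
- [deploy.md](./deploy.md) — CI/CD pipeline, environments, rollback steps
```

Each detailed file is loaded only when its one-line summary says it is relevant: the same menu-then-expand move, applied to documentation instead of memory.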
This connects with another fundamental warning from Contieri about the trend of Spec-Driven Development taken to the extreme. There's a current fantasy in the industry: write a gigantic product document, hand it to an AI agent, go to sleep, and wake up to a finished system.
That used to be called Waterfall development, and we already know it fails. Software development is still iterative and incremental. We can't blindly trust AI to understand massive requirements all at once. We need short cycles: review the code (instead of playing blindly in the terminal), validate security, and always keep a human in the loop.
Conclusion
Whether we're deciding how our AI reads our knowledge bases or how we structure the "Skills" of our agents, the rule is clear: less is more.
Progressive Disclosure is not just a technical trick to save tokens. It's a philosophy that forces us to break down complex problems into digestible steps, keeping control over our code and ensuring that, no matter how autonomous AI becomes, we remain the architects of the solutions.