DOCTIO
Who Steers the Steering Docs?
Cursor and Kiro have dominated the conversation around development in the past few months; the coding world even coined a special term for their potential misapplication: "vibe-coding." But these platforms are "quality-in, quality-out" systems. Working with them requires a different approach from traditional development, especially in greenfield environments.
We can use `.cursorrules` and project markdown files to steer this process somewhat, reining in AI stochasticity and, with it, hallucination, but this only pushes the problem up a single degree of abstraction. I realized in the course of development that what was needed was a meta layer: hierarchical, ordinal contexts for the agent to follow, giving it a better sense of space, scale, and arrangement.
So I built a vertical slice of a long-held concept, "Reko," in a single day by answering that question with a protocol I call DOCTIO. DOCTIO is a governance layer for steering documentation: a platform-agnostic workflow that not only gives the agent the "Five Ws" it needs to do its best work, but also dictates their priority.
DOCTIO enforces a strict hierarchy of needs that the AI must respect (sketched in code after this list):
- Domain (What is this app, what does it do? Why should it be made?)
- Objects (Who are the players? What are their contracts?)
- Code Ethos (How should this be built?)
- Tests (What is success? More on this in a moment.)
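
To make the ordering concrete, here is a minimal Python sketch of the hierarchy as ordinal priorities. The layer names mirror the list above; everything else (the `DoctioLayer` enum, the `winner` helper) is hypothetical illustration, not part of any DOCTIO tooling.

```python
from enum import IntEnum

class DoctioLayer(IntEnum):
    """Ordinal DOCTIO layers: lower value = higher authority."""
    DOMAIN = 1          # what the app is, what it does, why it should exist
    OBJECTS = 2         # who the players are and what their contracts look like
    CODE_ETHOS = 3      # how this should be built
    TESTS = 4           # what success is
    IMPLEMENTATION = 5  # only reached once layers 1-4 are fleshed out

def winner(a: DoctioLayer, b: DoctioLayer) -> DoctioLayer:
    """When two layers conflict, the higher-priority (lower ordinal) layer wins."""
    return min(a, b)

# Example: implementation drifts from the Domain doc -> Domain wins.
assert winner(DoctioLayer.IMPLEMENTATION, DoctioLayer.DOMAIN) is DoctioLayer.DOMAIN
```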
These four layers are fleshed out completely before finally moving to the fifth step: Implementation. You read that right: this is pure TDD, especially from the agent's perspective. We define what is acceptable, and what acceptance looks like, before EVER beginning implementation.
All of this information is passed in through markdown documents that consume only a fraction of the model's context window, yet during implementation the agent has the context necessary for deterministic, standardized, internally consistent output. MCP use, naturally, only enhances the effect and the ease.
Optimization, the final, ever-evolving step, is handled through two specific types of documents (see the sketch after this list):
- The Change Order: A formal, discrete, single-responsibility feature request
- The Instructional Order: A change to the foundational documents themselves (leaving MVP, new ethical considerations, etc.)
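
As a rough illustration, the two order types might be modeled like this. The names (`OrderKind`, `Order`, the example fields) are mine, chosen for the sketch, not prescribed by DOCTIO.

```python
from dataclasses import dataclass, field
from enum import Enum

class OrderKind(Enum):
    CHANGE = "change-order"                # discrete, single-responsibility feature request
    INSTRUCTIONAL = "instructional-order"  # amends the foundational DOCTIO documents

@dataclass
class Order:
    kind: OrderKind
    title: str
    rationale: str
    touches_layers: list[str] = field(default_factory=list)

# A feature request stays a Change Order...
co = Order(OrderKind.CHANGE, "Add export-to-CSV", "Users need offline reports", ["Tests"])

# ...while leaving MVP scope or adding an ethical constraint is an Instructional Order.
io = Order(OrderKind.INSTRUCTIONAL, "Post-MVP data-retention policy",
           "New compliance requirement", ["Domain", "Code Ethos"])
```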
With these, the realities of development - that no plan survives first contact with the enemy - are accepted and integrated into the process itself. Where traditional steering docs capture basic rules, DOCTIO provides the process for evolution. If the implementation conflicts with the Domain, the agent knows exactly which one wins. It steers "vibe-coding" into deterministic engineering.
One more word about testing in DOCTIO: testing here is more than a step in the process, and there is more to how the protocol applies tests than meets the eye. In traditional TDD, we write binary pass/fail tests. This brings attention to the failing test (Good!), but it still needs a human, with context, to resolve it (Bad - at least for autonomous agents). Binary failures can only tell the AI "No." They don't tell it "Which way?"
To solve this, DOCTIO establishes a "behavioral corridor" using a method I call "ABA Triangulation." This approach applies the principles of Control Theory and Boundary Value Analysis to AI generation.
By defining three different input/output sets for each test, we give the AI a complete picture of the playing field. Every test requires three anchors:
- Anchor A (The Ceiling): The clear upper-limit boundary.
- Anchor B (The Target): The original, binary test (The Expectation).
- Anchor A' (The Floor): The clear lower-limit boundary.
With these points set, we aren't just checking for bugs; we are creating a feedback loop for autonomous testing. We give the agent vector-based feedback: the directionality and magnitude necessary to self-correct without human intervention. In essence, it’s like calibrating a thermostat with an upper limit, the desired temperature, and a lower limit. If the temperature is off, you know immediately whether to heat or cool with a single point of data.
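
Here is a minimal sketch of what an ABA corridor check for a single numeric metric could look like. The names (`aba_check`, `CorridorResult`) are hypothetical, chosen for illustration; the point is the vector-based feedback: a direction and a magnitude rather than a bare pass/fail.

```python
from dataclasses import dataclass

@dataclass
class CorridorResult:
    passed: bool
    direction: int    # -1 = overshoot (come down), +1 = undershoot (go up), 0 = in corridor
    magnitude: float  # distance from the violated boundary (or drift from target when inside)

def aba_check(observed: float, floor: float, target: float, ceiling: float) -> CorridorResult:
    """Evaluate a value against the A'/B/A corridor and return vector-based feedback."""
    if observed > ceiling:                      # above Anchor A (The Ceiling)
        return CorridorResult(False, -1, observed - ceiling)
    if observed < floor:                        # below Anchor A' (The Floor)
        return CorridorResult(False, +1, floor - observed)
    return CorridorResult(True, 0, abs(observed - target))  # inside: drift from Anchor B

# Thermostat analogy: floor 19 °C, target 21 °C, ceiling 23 °C.
print(aba_check(24.5, floor=19, target=21, ceiling=23))
# CorridorResult(passed=False, direction=-1, magnitude=1.5) -> "cool down by 1.5"
```

An agent reading that result knows not just that the check failed, but which way to move and by how much.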
This logic rolls right into the Code Ethos step of DOCTIO, preparing the ground for the actual Tests step that immediately follows. There, describing how you want these tests (and any failures) handled gives you fine-grained control over how autonomous you want your testing - and fixing - to be.
Key Takeaway
I design governance systems for AI-assisted development, thinking realistically about how teams will use them.