
Everyone Was Talking About Context Engineering. Nobody Had Solved Governance.
MICA Series, Part 5 of 5

Glossary: terms used in this article
- MICA (Memory Invocation & Context Archive): A governance schema for AI context management. Defines how context should be structured, trusted, scored, and handed off across sessions.
- Fail-Closed Gate: An admission rule that excludes a context item if it fails a required threshold, regardless of its score on other dimensions. No exceptions. Introduced in v0.1.7.
- README-as-Protocol: The pattern in which an AI session's natural behavior of reading the README first is formalized as the primary invocation mechanism. No installation required. Introduced in v0.1.8.
- Invocation Protocol: The schema-level declaration of how a MICA archive reaches an AI session, and how the session confirms it was loaded. Formalized as a required field in v0.1.8.
- Session Report Format: The structured opening report the model must produce at session start to confirm the archive was loaded. Required in v0.1.8.
- Design Invariant Entry: A structured governance rule with identity, rule text, and severity. Replaced plain string invariants in v0.1.8.
- Self-Test Policy: Machine-evaluable checks that validate the archive against the real project state: file existence, hash integrity, and README sync. Required in v0.1.8.
- Playbook: The operator-facing discipline layer that sits outside the schema. The schema enforces structure; the Playbook enforces judgment.
- Context Engineering: The practice of shaping what the model sees, in what order, with what boundaries, and under what assumptions. Not just what you ask it, but what it actually knows at runtime.
- CTX: A collection-first context packaging approach that gathers relevant workspace material and delivers it to the model. In this article, CTX represents the collection layer of context engineering, answering "What does the AI see?"
1. What Parts 1 Through 4 Actually Established
The first four parts of this series already narrowed the problem considerably.
- Part 1 defined the failure mode. The issue was not that long-running AI work needed "more prompt." The issue was that a model can only act on what it actually knows right now, and most project context systems still treat that as a document problem instead of a governance problem.
- Part 2 established the first hard boundary. A schema can exist, and the model can still have no reliable way to know it exists. That is the difference between a document and a constraint.
- Part 3 moved governance into the schema. Provenance, deviations, and semantic rules could no longer remain outside the system in READMEs, comments, or team habits. They had to become machine-readable structure.
- Part 4 made that structure operative. The model already treated the README as its natural entry surface. Once that behavior was declared as an invocation protocol, the schema stopped being a passive archive and became a runtime contract.
That progression defines what MICA is actually trying to solve.
It is not trying to be "better prompting."
It is not trying to be "more retrieval."
It is trying to answer a narrower question:
How does governed context reach the model, under declared rules, with confirmable load and auditable change?
That is the bridge into the broader landscape.
2. Context Engineering Was Never Just Prompting

One clarification matters before going further.
Context engineering is not just prompt writing.
At the broadest level, it is the practice of shaping what the model sees, in what order, with what boundaries, and under what assumptions. Prompts are one part of that. Retrieval is another. File selection, memory handoff, system instructions, and workspace state all belong to the same larger question:
What does the model actually know right now?
By that definition, MICA is part of context engineering.
Not because it retrieves context.
Not because it packs more tokens into a window.
But because it governs which context is allowed to shape the session, under what trust conditions, and with what record when those conditions are tested.
That distinction matters, because most of the field has focused on a different layer.
3. The Conversation Was Already Happening
Context engineering is not a new idea, and MICA does not claim to have invented the conversation.
The term was amplified by Andrej Karpathy and others, but the underlying practice of designing what the model sees, not just what you ask it, had already been emerging in serious AI work.
Collection-first tools already existed. CTX is a useful example of that layer: it gathers relevant workspace material and delivers it to the model without manual copy-paste. It answers an important question well:
What does the AI see?
At the same time, some of the sharper practitioner writing was already moving beyond collection alone. One such example was an OpenAI Developer Community post by Serge Liatko, "Prompt Engineering Is Dead, and Context Engineering Is Already Obsolete: Why the Future Is Automated Workflow Architecture with LLMs."
The value of that piece was not the author's status, but the precision of the problem it named: manually maintained context eventually reaches its ceiling.
Once system state changes faster than humans can keep context aligned, the real question is no longer just how to collect context, but how to automate its ownership, maintenance, and validation as the system evolves.
That was an important move forward.
But one layer still remained underdefined.
4. The Missing Layer Was Already Visible

The same missing layer had already shown up elsewhere.
In Your Agentic Stack Has Two Layers. It Needs Three, I argued that the usual stack had matured around two strong layers:
- MCP / tool calls: how the agent talks to systems
- agent skills: what the agent can do
But something was still missing above both:
- the layer that decides whether the agent should do it, under what constraints, and toward what end
That is the layer of intent, authority, and governance.
The same problem appears in context engineering.
A context pipeline can be excellent at retrieval and still be weak at governance. It can gather the right files, summarize the right notes, and deliver the right-looking material, and still fail to answer:
- what is authoritative?
- what is provisional?
- what must never be violated?
- what changed since last time?
- how does the session prove it loaded the governed archive at all?
Those are not collection questions.
They are governance questions.
5. Where CTX Stops and MICA Begins

CTX solves the collection problem.
It answers:
What does the AI see?
That is a necessary layer. Without it, context management collapses back into manual copy-paste, repeated explanation, and fragile session startup.
But it does not answer the next set of questions:
- Where did this context item come from, and can that claim be verified?
- What happens when a file changes between sessions?
- Which constraints are non-negotiable, and what is the consequence when they are violated?
- Who approved the last change to the archive, and can that decision be audited?
- When the session begins, how does the system confirm the archive was actually loaded?
Those are not retrieval questions.
They are governance questions.
That is the narrow but consequential difference.
CTX collects context and delivers it.
MICA governs what trust that context carries, what invariants it must not violate, what happens when it changes, and how the session proves that the governed archive was actually loaded.
One answers:
What does the AI see?
The other answers:
Under what rules does the AI operate, and what is the record when those rules are tested?
They are different layers. Neither replaces the other.
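The load-confirmation question from the list above can be made concrete in a few lines. Here is a hypothetical sketch of such a check; the field names and the pass/warn vocabulary are illustrative assumptions, not the published Session Report Format:

```python
# Hypothetical sketch of invocation confirmation: the session's opening
# report must acknowledge the governed archive before work proceeds.
# Field names below are assumptions, not the actual MICA report schema.

REQUIRED_FIELDS = {"archive_version", "invariants_loaded", "self_test_status"}

def confirm_load(report: dict) -> bool:
    """Return True only if the opening report proves the archive loaded."""
    if not REQUIRED_FIELDS <= report.keys():   # missing fields = no proof
        return False
    return report["self_test_status"] in {"pass", "warn"}

# A report that names the archive and its self-test outcome counts as proof;
# a bare greeting does not.
```

The point of the sketch is the asymmetry: absence of confirmation is treated as failure, never as silent success.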
6. The Part That Was Still Open

A recurring theme in serious context-engineering discussion is that context cannot remain a hand-curated artifact forever. It has to become a function of system state.
But that still leaves one question open:
Who owns the specification for each step's input, and how is it versioned, tested, and audited as requirements shift?
MICA is a concrete, working answer to that gap.
Not the only possible answer. Not necessarily the final one. But a real one.
Its claim is not that context engineering needed to be invented.
Its claim is that context engineering still needed a governance layer with at least four properties:
- machine-addressable invariants
- versioned and auditable change records
- self-tests against the real project state
- declared invocation with confirmable session load
That is the layer MICA was built to supply.
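The four properties above can be read as a data shape. A minimal sketch of that shape, assuming illustrative field and type names rather than the published v0.1.8.1 schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four governance properties as structure.
# All names here are illustrative assumptions, not the MICA schema itself.

@dataclass
class Invariant:
    id: str          # machine-addressable identity
    rule: str        # the rule text
    severity: str    # e.g. "critical" vs. "advisory"

@dataclass
class ChangeRecord:
    version: str     # archive version the change landed in
    summary: str     # what changed
    approved_by: str # who approved it, so the change is auditable

@dataclass
class Archive:
    invariants: list[Invariant] = field(default_factory=list)
    changelog: list[ChangeRecord] = field(default_factory=list)
    self_tests: list[str] = field(default_factory=list)  # checks run against real state
    invocation: str = "README"                           # declared entry surface

archive = Archive(
    invariants=[Invariant("INV-001", "Never commit secrets", "critical")],
    changelog=[ChangeRecord("v0.1.8", "Invariants became structured entries", "operator")],
    self_tests=["file_existence", "hash_integrity", "readme_sync"],
)
```

Each field maps to one bullet: invariants are addressable, changes are recorded, self-tests are named, and invocation is declared rather than assumed.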
7. What Governance Actually Means Here
Governance is an overloaded word. In this context it means something specific.
Provenance
Every context item must declare where it came from in a way that can be checked.
Auditability
Changes to the archive are recorded when they happen, not reconstructed later from memory.
Invariant enforcement
Constraints are not vague README prose. They are structured entries with identity and severity.
Self-testing
The archive is checked against the real project state, not only against its own internal shape.
Invocation confirmation
The model does not silently ignore the archive. Session start requires a structured acknowledgment that the governed archive was loaded.
None of these are abstract principles in MICA.
They are structural requirements in a running schema.
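Of those five requirements, self-testing is the easiest to sketch concretely. A minimal pass over declared file hashes could look like this, assuming SHA-256 hashes recorded per file; the function name and message formats are illustrative, not the schema's Self-Test Policy:

```python
import hashlib
from pathlib import Path

def self_test(declared: dict[str, str]) -> list[str]:
    """Check the archive against the real project state.

    `declared` maps file paths to SHA-256 hashes recorded in the archive.
    Returns a list of failure messages; an empty list means all checks
    passed. A sketch only; the real policy shape is defined by the schema.
    """
    failures = []
    for path, expected in declared.items():
        p = Path(path)
        if not p.exists():                          # file-existence check
            failures.append(f"missing: {path}")
            continue
        actual = hashlib.sha256(p.read_bytes()).hexdigest()
        if actual != expected:                      # hash-integrity check (drift)
            failures.append(f"drift: {path}")
    return failures
```

Note what this checks: not whether the archive is internally well-formed, but whether the project it describes still matches it.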
8. What This Is Not
It is worth being precise about the boundary.
MICA does not generate context automatically from the codebase. That is not its job. Collection-first systems already exist, and they are valuable. MICA governs what happens once context has been identified.
MICA does not replace human judgment. A schema can require structure, audit trail, drift response, and self-tests. It cannot eliminate operator discipline. That is why the boundary between schema and playbook matters.
MICA is also not a finished system. Parts 1 through 4 of this series were explicit about what each version got wrong, what each version corrected, and what remained unresolved.
That design history is part of the claim, not an embarrassment to it.
9. The Actual Claim

The actual claim is not that MICA solves all of context engineering.
It is this:
A small operation can already run a governed AI context system (verifiable provenance, a deviation audit trail, structured invariants, self-testing, declared invocation) without waiting for future tooling that does not yet exist.
That claim is demonstrated by the design history already covered in this series:
- v0.1.0 made scoring implementable
- v0.1.5 brought governance structure into the schema
- v0.1.7 made scoring fail-closed
- v0.1.8 made invocation declared and confirmable
- v0.1.8.1 clarified the remaining runtime ambiguities
And in practice, governance at runtime can look as concrete as this:
If a critical check fails, the session does not proceed. That is what governance looks like when it becomes operative.
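The fail-closed gate from v0.1.7 is the simplest illustration of that behavior. A hypothetical sketch, with dimension names, floors, and weights chosen for illustration rather than taken from the schema:

```python
# Hypothetical sketch of a fail-closed admission gate. An item that fails
# any *required* threshold is excluded outright, no matter how well it
# scores elsewhere. Names and numbers below are illustrative assumptions.

REQUIRED = {"provenance": 0.5}                    # dimensions with a hard floor
WEIGHTS = {"provenance": 0.4, "relevance": 0.6}   # used only after the gate

def admit(scores: dict[str, float]) -> bool:
    # Fail-closed: any required dimension below its floor excludes the item.
    for dim, floor in REQUIRED.items():
        if scores.get(dim, 0.0) < floor:
            return False
    # Only then does the weighted score matter.
    total = sum(w * scores.get(d, 0.0) for d, w in WEIGHTS.items())
    return total >= 0.5
```

The design choice is the order of checks: a high relevance score cannot rescue an item whose provenance failed, which is exactly the "no exceptions" behavior the gate was introduced to enforce.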
What can be said now is narrower and more solid: the gap is real, collection-first systems solve one side of it, and MICA addresses the governance side. The conversation about what comes after context engineering was already happening; MICA is one concrete answer to that part of it.
10. What Comes Next

Part 4 ended with a specific question:
Where does MICA sit in the context engineering landscape that already existed around it?
This post is the answer.
It sits inside context engineering β but not at the collection layer.
It does not compete with retrieval-first systems by trying to collect more files, pack more tokens, or automate more handoff. It begins after that layer. Its job is to govern what enters the session, what remains authoritative, what drift means, what violations matter, and how the model proves it actually loaded the governed archive at all.
That is why the answer is narrower than most people expect, and more specific than most framings allow.
MICA is not "the future of all context engineering."
It is a governance answer to the part of context engineering that collection alone does not solve.
Part 6 will move back down from landscape to concrete operation.
It will show what MICA looks like in an actual project context: what a session opening report looks like, what a deviation log entry looks like in practice, and what happens when a self-test flags drift.
After that comes the harder question: what remains unresolved.
The series continues only where there is something concrete to specify, test, or correct.

Named decision from this post: Governance is not a layer you add after context engineering works. It is the layer that makes context engineering trustworthy: by declaring what is authoritative, recording what changes, and confirming what the AI actually loaded.
MICA is part of the Flamehaven governance-first AI systems practice. Schema, technical report, and production instance: flamehaven.space. Open-source tooling: AI-SLOP-Detector. All schema references follow the v0.1.8.1 Universal standard unless a specific earlier version is named.
Previous in the MICA Series: The Model Already Read the README. MICA v0.1.8 Made It a Protocol.