Writing Hub
AI governance essays, reasoning systems notes, experiment logs, and technical writing across BioAI and engineering practice.
FLAMEHAVEN FileSearch: Why This RAG Engine Feels Different from the Usual Stack
A technical look at FLAMEHAVEN FileSearch: BM25+RRF hybrid retrieval, chunk-addressable indexing, deterministic DSP vectors, and the trade-offs behind a lower-overhead self-hosted RAG engine.

The Next AI Moat May Not Be the Harness Alone: A Mathematically Governed Self-Calibrating Code-Review Layer
As AI harness patterns normalize, differentiation is shifting toward governed self-calibration and implementation fidelity. This piece explores how history-driven, bounded adaptation creates a new layer of defensible AI infrastructure — one that turns local code evolution into a competitive moat.

My AI Maintainer Kept Making Wrong Calls. So I Made It Report Its State Before Touching Anything.
Part 6 moves from landscape to operation. This is what MICA looks like when it is actually running inside a real maintenance workflow — session report, self-test, drift, invariants, and operator judgment.

The Harness Is the Product: What the Claude Code Leak Actually Revealed About AI Agent Architecture
The Claude Code leak exposed more than source. It revealed that modern AI agent performance depends heavily on the harness around the model.

The Stake Was Governance Outside the Schema. MICA v0.1.5 Pulled It In
v0.1.0 through v0.1.4 made the schema more implementable. v0.1.5 was the first version to ask a different question — what if governance itself belongs inside the schema? Here is what that looked like, and what it still could not do.

The Schema Existed. The Model Had No Way to Know.
v0.0.1 proved that context could be structured. It did not prove that the structure could govern what shaped the session. Three failures — and why only one made the others meaningless.

Prompt, Pray & Push: Why Your AI Agent Keeps Failing You
The one concept that turns expensive spaghetti into great agentic engineering.

The Pull Request Illusion: How AI Is Hollowing Out Software’s Last Line of Defense
GitHub Just Added a Switch to Turn Off Pull Requests. That’s Not a Feature. It’s a Warning.

AI Agents Are Poisoning Your Codebase From the Inside
Explore how AI-generated code can silently degrade software quality through weakened tests, rising code churn, and duplication—and how teams can prevent it with better governance.

Why I Stopped Treating Complexity as a Bug
On intent, governance, and why “clean code” heuristics fail in AI-generated systems

I’m Not Building AI Demos. I’m Building AI Audits (ASDP + Slop Gates)
Learn how ASDP and AI Slop Gates turn AI trust into auditable evidence, with CI/CD checks, drift policies, and governance artifacts that block weak, narrative-driven systems.

The Real Risk in the Age of AI Coding Isn’t Bugs
Is your AI code production-ready or just 'AI Slop'? Learn how to detect convincingly empty code, measure Logic Density (LDR), and stop 'Vibe Coding' from becoming hidden technical debt.