Writing Hub
AI governance essays, reasoning systems notes, experiment logs, and technical writing across BioAI and engineering practice.
Project Topics

The Alchemy of Ego: How AI Turns Unfinished Thought Into Fluent Certainty
A personal essay on how AI can turn unfinished thoughts into fluent certainty, why internal coherence is not external proof, and why falsifiability, failure conditions, and visible execution matter in AI-assisted thinking.

How Do You Trust the AI Auditor? STEM-AI v1.1.2 and Memory-Contracted Bio-AI Audits
STEM-AI v1.1.2 binds a bio/medical AI repository audit to a machine-checkable memory contract, then demonstrates it on a real open-source bioinformatics repository.

The Difference Between a Harness and a Leash
A practical essay on why most AI 'harnesses' are still leashes: guides shape behavior, but only justified external measurement creates a real governance boundary.

The $100 Million Blind Spot: What No-Code Healthcare Builders Still Don't See
An analysis of how no-code and AI-generated healthcare apps create regulatory liability when patient data flows are deployed without prior mapping, auditability, or compliance architecture.

FLAMEHAVEN FileSearch: Why This RAG Engine Feels Different from the Usual Stack
A technical look at FLAMEHAVEN FileSearch: BM25+RRF hybrid retrieval, chunk-addressable indexing, deterministic DSP vectors, and the trade-offs behind a lower-overhead self-hosted RAG engine.

The Next AI Moat May Not Be the Harness Alone: A Mathematically Governed Self-Calibrating Code-Review Layer
As AI harness patterns normalize, differentiation is shifting toward governed self-calibration and implementation fidelity. This piece explores how history-driven, bounded adaptation creates a new layer of defensible AI infrastructure — one that turns local code evolution into a competitive moat.

My AI Maintainer Kept Making Wrong Calls. So I Made It Report Its State Before Touching Anything.
Part 6 moves from landscape to operation. This is what MICA looks like when it is actually running inside a real maintenance workflow — session report, self-test, drift, invariants, and operator judgment.

The Harness Is the Product: What the Claude Code Leak Actually Revealed About AI Agent Architecture
The Claude Code leak exposed more than source. It revealed that modern AI agent performance depends heavily on the harness around the model.

After Auditing 10 Bio-AI Repositories, I Think We're Scaling the Wrong Layer
After auditing 10 open-source Bio-AI repositories, one pattern stood out: the field is scaling packaging faster than verification. Here is what that gap actually costs.

I Audited 10 Open-Source Bio-AI Repos. Most Could Produce Outputs. Few Could Establish Trust.
I audited 10 prominent open-source Bio-AI repositories. Most could produce outputs. Very few could establish what those outputs meant.

Everyone Was Talking About Context Engineering. Nobody Had Solved Governance.

Bio-AI Repository Audit 2026: A Technical Report on 10 Open-Source Systems
We audited 10 prominent open-source Bio-AI repositories using code inspection and STEM-AI trust scoring. 8 of 10 scored T0: trust not established. Here is what the code actually shows.