Writing Hub
AI governance essays, reasoning systems notes, experiment logs, and technical writing across BioAI and engineering practice.
Project Topics

The Alchemy of Ego - How AI Turns Unfinished Thought Into Fluent Certainty
A personal essay on how AI can turn unfinished thoughts into fluent certainty, why internal coherence is not external proof, and why falsifiability, failure conditions, and visible execution matter in AI-assisted thinking.

Each /slop Is a Calibration Signal — AI-SLOP Detector v3.6.0 and the Claude Code Skill
Every /slop invocation is recorded to a project-scoped history. After 10 re-scanned files, a bounded self-calibration step adjusts detection weights for your codebase. Here is the mechanism, the data, and what actually shipped in v3.6.0.
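The bounded adjustment described above can be sketched roughly as follows. This is a minimal illustration, not the detector's actual code: the function name, the precision-based update rule, the step size, and the clamp bounds are all assumptions for the example.

```python
# Hypothetical sketch of bounded self-calibration: nudge a detection rule's
# weight toward its observed precision over re-scanned files, clamped to a
# hard range so local history can tune but never dominate the detector.

def calibrate(weight: float, confirmed: int, dismissed: int,
              step: float = 0.05, lo: float = 0.5, hi: float = 1.5) -> float:
    """Adjust one rule's weight from its confirmed/dismissed finding counts."""
    total = confirmed + dismissed
    if total < 10:            # wait for enough re-scanned evidence
        return weight
    precision = confirmed / total
    # Target 2 * precision, so a rule that is right half the time keeps
    # weight 1.0; take a small step toward it, then clamp to [lo, hi].
    target = 2 * precision
    weight += step * (target - weight)
    return max(lo, min(hi, weight))
```

The clamp is the "bounded" part: even a run of lucky or unlucky findings can only move a weight within a fixed envelope.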

How Do You Trust the AI Auditor? STEM-AI v1.1.2 and Memory-Contracted Bio-AI Audits
STEM-AI v1.1.2 binds a bio/medical AI repository audit to a machine-checkable memory contract, then demonstrates it on a real open-source bioinformatics repository.

When an AI Pipeline Passes — But One Path Still Must Be Held: EXP-034
EXP-034 tested whether a method-locked Bio-AI governance pipeline could survive modal expansion, AlphaFold EBI observer wiring, and AG-live measurement without breaking its PASS/BLOCK judgment baseline.

The Sheepwave Has a New Shape: OpenMythos and the Rise of Architecture Hype
A technical-opinion essay on OpenMythos, Claude Mythos, README-driven AI hype, and why architecture claims need source-level verification before becoming public belief.

The Difference Between a Harness and a Leash
A practical essay on why most AI 'harnesses' are still leashes: guides shape behavior, but only justified external measurement creates a real governance boundary.

OpenMythos v0.5.0 Code Review - Audit Report
OpenMythos collected thousands of GitHub stars and dominated AI discourse for a week. This is what happens when you actually read the code — and why the people who do always arrive too late to matter.

FLAMEHAVEN FileSearch: Why This RAG Engine Feels Different from the Usual Stack
A technical look at FLAMEHAVEN FileSearch: BM25+RRF hybrid retrieval, chunk-addressable indexing, deterministic DSP vectors, and the trade-offs behind a lower-overhead self-hosted RAG engine.
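For readers unfamiliar with the RRF half of that hybrid, here is a minimal sketch of Reciprocal Rank Fusion in general, not FLAMEHAVEN's implementation; the function name, the doc-id lists, and the conventional constant k = 60 are illustrative assumptions.

```python
# Reciprocal Rank Fusion: merge several ranked lists (e.g. BM25 and dense
# vector results) by scoring each document as sum(1 / (k + rank)) across
# the lists it appears in, then sorting by that fused score.

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked lists of doc ids into one list, best fused score first."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):  # ranks are 1-based
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["a", "b", "c"]   # lexical ranking
dense = ["b", "c", "a"]  # vector ranking
# "b" places well in both lists, so it tops the fused order.
```

Because RRF only consumes ranks, not raw scores, it needs no score normalization between the lexical and vector retrievers, which is a large part of its appeal in hybrid stacks.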

The Next AI Moat May Not Be the Harness Alone: A Mathematically Governed Self-Calibrating Code-Review Layer
As AI harness patterns normalize, differentiation is shifting toward governed self-calibration and implementation fidelity. This piece explores how history-driven, bounded adaptation creates a new layer of defensible AI infrastructure — one that turns local code evolution into a competitive moat.

It Gets Smarter Every Scan: AI-SLOP Detector v3.5.0 and the Self-Calibration Loop
AI-built apps are starting to fail in public. Not every failure is static-analysis territory, but many share the same upstream condition: plausible-looking code passing review without carrying enough real logic. AI-SLOP Detector v3.5.0 adds a self-calibration loop to reduce that gap.

Can AI Review Physics? Yes — That Is Why We Built SPAR
SPAR is a deterministic framework for claim-aware review: checking whether an output deserves the claim attached to it.

Bridging the Gap: From AI Slop to Mathematical Governance
A mathematical framework for detecting AI-generated code slop using AST distributions, Jensen-Shannon divergence, and geometric governance gates.
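The divergence measure named above can be sketched as follows. This is a generic Jensen-Shannon implementation over node-type frequency dictionaries; the node names, the reference profile, and the idea of comparing a file against a corpus baseline are illustrative assumptions, not the framework's actual data.

```python
import math

# Jensen-Shannon divergence between two AST node-type distributions:
# JSD(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), with M = (P + Q) / 2.
# With base-2 logs, JSD is symmetric and bounded in [0, 1].

def js_divergence(p: dict[str, float], q: dict[str, float]) -> float:
    """JSD between two probability distributions given as {event: prob}."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}

    def kl(a: dict[str, float], b: dict[str, float]) -> float:
        # KL(A || B); zero-probability events in A contribute nothing.
        return sum(a.get(k, 0.0) * math.log2(a.get(k, 0.0) / b[k])
                   for k in keys if a.get(k, 0.0) > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical example: a file's AST node-type mix vs. a reference profile.
reference = {"FunctionDef": 0.2, "If": 0.3, "Call": 0.5}
observed  = {"FunctionDef": 0.1, "If": 0.2, "Call": 0.7}
```

A governance gate would then compare the divergence against a threshold: identical distributions score 0, fully disjoint ones score 1, and where the cutoff sits between them is a policy choice.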
Showing page 1 of 6 · 68 matching posts