Writing Hub
AI governance essays, reasoning systems notes, experiment logs, and technical writing across BioAI and engineering practice.

The Alchemy of Ego: How AI Turns Unfinished Thought Into Fluent Certainty
A personal essay on why internal coherence is not external proof, and why falsifiability, failure conditions, and visible execution matter in AI-assisted thinking.

Each /slop Is a Calibration Signal — AI-SLOP Detector v3.6.0 and the Claude Code Skill
Every /slop invocation is recorded in a project-scoped history. After 10 re-scanned files, bounded self-calibration adjusts detection weights for your codebase. Here is the mechanism, the data, and what actually shipped in v3.6.0.

The Difference Between a Harness and a Leash
A practical essay on why most AI 'harnesses' are still leashes: guides shape behavior, but only justified external measurement creates a real governance boundary.

The $100 Million Blind Spot: What No-Code Healthcare Builders Still Don't See
An analysis of how no-code and AI-generated healthcare apps create regulatory liability when patient data flows are deployed without prior mapping, auditability, or compliance architecture.

The Next AI Moat May Not Be the Harness Alone: A Mathematically Governed Self-Calibrating Code-Review Layer
As AI harness patterns normalize, differentiation is shifting toward governed self-calibration and implementation fidelity. This piece explores how history-driven, bounded adaptation creates a new layer of defensible AI infrastructure — one that turns local code evolution into a competitive moat.

The Harness Is the Product: What the Claude Code Leak Actually Revealed About AI Agent Architecture
The Claude Code leak exposed more than source. It revealed that modern AI agent performance depends heavily on the harness around the model.

The Centaur’s Equation: Why the Stubborn Expert Wins in the Era of Infinite AI
Why Evaluation Ownership Is the Ultimate Defensive Asset in the AGI Economy

What Anthropic’s 81k Survey Reveals About What the AI Market Still Gets Wrong
Users Don’t Want Faster AI — They Want AI That Helps Them Live Better Without Losing Their Humanity.

The Repo Is Right There. Why Are You Checking Their CV?
In 2026, AI researchers and engineers use the same words to mean opposite things. This is not a communication problem. It is an incentive problem with a vocabulary leak, and it is where most AI projects actually fail.

What AI Changed About Research Code — and What It Didn’t
The old bottleneck was writing the code. The new bottleneck is proving that the code still means what the theory meant.

Is MCP Really Dead? A History of AI Hype — Told Through the Rise and Fall of a Protocol
Sometimes a protocol doesn’t die; it just stops being interesting. A forensic look at MCP, OpenClaw, and the psychology of AI hype cycles.

The Pull Request Illusion: How AI Is Hollowing Out Software’s Last Line of Defense
GitHub Just Added a Switch to Turn Off Pull Requests. That’s Not a Feature. It’s a Warning.