Writing Hub
AI governance essays, reasoning systems notes, experiment logs, and technical writing across Bio-AI and engineering practice.

The Alchemy of Ego: How AI Turns Unfinished Thought Into Fluent Certainty
A personal essay on how AI can turn unfinished thoughts into fluent certainty, why internal coherence is not external proof, and why falsifiability, failure conditions, and visible execution matter in AI-assisted thinking.

How Do You Trust the AI Auditor? STEM-AI v1.1.2 and Memory-Contracted Bio-AI Audits
STEM-AI v1.1.2 binds a bio/medical AI repository audit to a machine-checkable memory contract, then demonstrates the approach on a real open-source bioinformatics repository.

Can AI Review Physics? Yes — That Is Why We Built SPAR
SPAR is a deterministic framework for claim-aware review: checking whether an output deserves the claim attached to it.

Bridging the Gap: From AI Slop to Mathematical Governance
A mathematical framework for detecting AI-generated code slop using AST distributions, Jensen-Shannon divergence, and geometric governance gates.

I Audited 10 Open-Source Bio-AI Repos. Most Could Produce Outputs. Few Could Establish Trust.
I audited 10 highly visible open-source Bio-AI repositories. Most could produce outputs. Very few could establish what those outputs meant.

Bio-AI Repository Audit 2026: A Technical Report on 10 Open-Source Systems
We audited 10 prominent open-source Bio-AI repositories using code inspection and STEM-AI trust scoring. Eight of the ten scored T0: trust not established. Here is what the code actually shows.

Medical AI Repositories Need More Than Benchmarks. We Built STEM-AI to Audit Trust
STEM-AI is a governance audit framework for public medical AI repositories. It scores README integrity, cross-platform consistency, and code infrastructure — because benchmarks alone don't tell you whether a Bio-AI tool is safe to trust.

The Repo Is Right There. Why Are You Checking Their CV?
In 2026, AI researchers and engineers use the same words to mean opposite things. This is not a communication problem. It is an incentive problem with a vocabulary leak, and it is where most AI projects actually fail.

I Built an Ecosystem of 46 AI-Assisted Repos. Then I Realized It Might Be Eating Itself.
An ecosystem of 46 AI-assisted repos can become a closed loop. This article explores structural blind spots, self-validating toolchains, and the need for external validators to create intentional friction.

How Do You Know When Your Entire AI Pipeline Is Wrong — Not Just One Model? (EXP-033)
EXP-033 shows how to validate an entire AI pipeline, not just one model, using five-gate checkpoints, reproducible PASS/BLOCK parity, AlphaGenome on/off testing, and fully traceable governance decisions.

What AI Changed About Research Code — and What It Didn’t
The old bottleneck was writing the code. The new bottleneck is proving that the code still means what the theory meant.

What an AI Reasoning Engine Built for Alzheimer's Metabolic Research: A Code Walkthrough
A code walkthrough of an AI reasoning engine for Alzheimer’s metabolic research, showing how literature ingestion, causal inference, and executable biomarker scaffolds generate falsifiable pre-validation hypotheses.