Writing Hub
AI governance essays, reasoning systems notes, experiment logs, and technical writing across Bio-AI and engineering practice.

How Do You Trust the AI Auditor? STEM-AI v1.1.2 and Memory-Contracted Bio-AI Audits
STEM-AI v1.1.2 binds a bio/medical AI repository audit to a machine-checkable memory contract, then demonstrates it on a real open-source bioinformatics repository.

When an AI Pipeline Passes — But One Path Still Must Be Held: EXP-034
EXP-034 tested whether a method-locked Bio-AI governance pipeline could survive modal expansion, AlphaFold EBI observer wiring, and AG-live measurement without breaking its PASS/BLOCK judgment baseline.

The $100 Million Blind Spot: What No-Code Healthcare Builders Still Don't See
An analysis of how no-code and AI-generated healthcare apps create regulatory liability when patient data flows are deployed without prior mapping, auditability, or compliance architecture.

I Audited 10 Open-Source Bio-AI Repos. Most Could Produce Outputs. Few Could Establish Trust.
I audited 10 highly visible open-source Bio-AI repositories. Most could produce outputs. Very few could establish what those outputs meant.

Bio-AI Repository Audit 2026: A Technical Report on 10 Open-Source Systems
We audited 10 prominent open-source Bio-AI repositories using code inspection and STEM-AI trust scoring. 8 of 10 scored T0: trust not established. Here is what the code actually shows.

Medical AI Repositories Need More Than Benchmarks. We Built STEM-AI to Audit Trust.
STEM-AI is a governance audit framework for public medical AI repositories. It scores README integrity, cross-platform consistency, and code infrastructure, because benchmarks alone don't tell you whether a Bio-AI tool is safe to trust.

How do you know when your entire AI pipeline is wrong — not just one model? (EXP-033)
EXP-033 shows how to validate an entire AI pipeline, not just one model, using five-gate checkpoints, reproducible PASS/BLOCK parity, AlphaGenome on/off testing, and fully traceable governance decisions.

What an AI Reasoning Engine Built for Alzheimer's Metabolic Research: A Code Walkthrough
A code walkthrough of an AI reasoning engine for Alzheimer’s metabolic research, showing how literature ingestion, causal inference, and executable biomarker scaffolds generate falsifiable pre-validation hypotheses.

Chaos Engineering for AI: Validating a Fail-Closed Pipeline with Fake Data and Math
A case study in AI governance showing how synthetic invalid inputs, structural disagreement, SIDRCE ethics checks, and end-to-end reliability scoring triggered a safe BLOCK verdict in a biomedical pipeline.

From 97% Model Accuracy to 74% Clinical Reliability: Building RSN-NNSL-GATE-001
Learn how RSN-NNSL-GATE-001 turns high model accuracy into system-level clinical reliability by blocking unsafe AI pipeline decisions, measuring end-to-end risk, and enforcing fail-closed governance.

When Adding Chai-1 and Boltz-2 Exposed Hidden Model Disagreement (Trinity Protocol Part 2)
See how adding Chai-1 and Boltz-2 to an AlphaFold workflow exposed hidden model disagreement, increased drift, and revealed why failed convergence can be the most valuable signal in computational biology.

Orchestrating AlphaFold 3 & 2 with Python: Handling AI Hallucinations Using the Adapter Pattern (Trinity Protocol Part 1)
Learn how to orchestrate AlphaFold 3 and AlphaFold 2 with Python using the Adapter Pattern to detect AI hallucinations, measure structural drift, and improve protein prediction reliability.