Writing Hub
AI governance essays, reasoning-systems notes, experiment logs, and technical writing across Bio-AI and engineering practice.
Project Topics

Beyond Repo Scanning: How AIRI Expanded the Risk Vocabulary in STEM BIO-AI 1.7.x
How STEM BIO-AI uses the MIT AI Risk Repository as a governed local risk-vocabulary layer without replacing deterministic repository scanning.

When Control Becomes Authority: Calibration Governance in STEM BIO-AI 1.7.x
Why STEM BIO-AI treats calibration as governed policy instead of a free-form score-tuning console for bio and medical AI repository audits.

From Score to Workflow: Turning STEM BIO-AI Into a Local Audit System
Bio/medical AI trust should not collapse into one score. STEM BIO-AI v1.6.2 shows how deterministic auditing, evidence-led diagnostics, regulatory traceability, and bounded AI advisory can become an inspectable local workflow.

How Do You Trust the AI Auditor? STEM-AI v1.1.2 and Memory-Contracted Bio-AI Audits
STEM-AI v1.1.2 binds a bio/medical AI repository audit to a machine-checkable memory contract, then demonstrates it on a real open-source bioinformatics repository.

How Auditing 10 Bio-AI Repositories Shaped STEM-AI
After auditing 10 open-source Bio-AI repositories, we found blind spots in STEM-AI and expanded it from text-only review to code-aware trust evaluation.

After Auditing 10 Bio-AI Repositories, I Think We're Scaling the Wrong Layer
After auditing 10 open-source Bio-AI repositories, one pattern stood out: the field is scaling packaging faster than verification. Here is what that gap actually costs.

Medical AI Repositories Need More Than Benchmarks. We Built STEM-AI to Audit Trust
STEM-AI is a governance audit framework for public medical AI repositories. It scores README integrity, cross-platform consistency, and code infrastructure, because benchmarks alone don't tell you whether a bio-AI tool is safe to trust.