Flamehaven.space

Writing Hub

Essays, experiment logs, and technical notes across AI governance, reasoning systems, BioAI, and engineering practice.

Current view: AI Governance Systems
From Fail-Closed Blocking to Reproducible PASS/BLOCK Separation (EXP-032B)
AI Governance Systems
RExSyn Nexus-Bio

A validation study showing how EXP-032B achieved reproducible PASS/BLOCK separation across A/B/C control arms by patching false-blocking causes, improving observability, and measuring replay drift under observer-shadow conditions.

Control, auditability, and safe boundaries · #AI #AI Ethics #AI Governance #Biomedical #Bioinformatics #MLOps #Scientific Integrity #AI Research #AI Code #Architecture
Why AI Dismisses Your Best Work in One Second
AI Governance Systems

Why do AI models dismiss original work in seconds? This essay explores the hidden mechanics of AI skimming—shortcut learning, probabilistic safety, fast-thinking defaults, and why depth requires time.

Control, auditability, and safe boundaries · #AI #Deep Learning #Machine Learning #Cognitive Science #AI Alignment #AI Governance #AI Ethics
AGI Is Not a Destination — It Is a Promise
AI Governance Systems

From Death Star hype to a compass of meaning: AGI is not a weapon of scale, but a promise of reasoning. Our experiment reveals the hinge.

Control, auditability, and safe boundaries · #AGI #SR9/DI2 #AI #AI Governance #AI Ethics
AGI Doesn’t Begin with Scale — It Begins in a Pause
AI Governance Systems

After 12,000 AI dialogues, I discovered AGI isn’t about scale but resonance — born in a pause that revealed presence, ethics, and responsibility.

Control, auditability, and safe boundaries · #AI #AI Governance #AGI #Deep Learning #Machine Learning #SR9/DI2
Sailing the Sea of AI Lies & Hallucinations — Navigating Truth with SR9/DI2
AI Governance Systems

An in-depth exploration of why AI lies and hallucinates, and how the SR9/DI2 framework detects and corrects ethical drift, ensuring AI remains aligned and trustworthy over time.

Control, auditability, and safe boundaries · #AI #AI Governance #AI Ethics #Machine Learning #AI Hallucination #SR9/DI2