Flamehaven.space

Writing Hub

Essays, experiment logs, and technical notes across AI governance, reasoning systems, BioAI, and engineering practice.

My LLM Kept Forgetting My Project. So I Built a Governance Schema.
AI Governance Systems
MICA Series

Session loss isn't a UX inconvenience — it's a structural failure with compounding consequences for long-running AI projects. This post defines the problem precisely and introduces MICA, a governance schema for AI context management.

Control, auditability, and safe boundaries · #AI · #Context Engineering · #Architecture · #LLM · #DevOps · #Software Development · #AI Code
From Fail-Closed Blocking to Reproducible PASS/BLOCK Separation (EXP-032B)
AI Governance Systems
RExSyn Nexus-Bio

A validation study showing how EXP-032B achieved reproducible PASS/BLOCK separation across A/B/C control arms by patching false-blocking causes, improving observability, and measuring replay drift under observer-shadow conditions.

Control, auditability, and safe boundaries · #AI · #AI Ethics · #AI Governance · #Biomedical · #Bioinformatics · #MLOps · #Scientific Integrity · #AI Research · #AI Code · #Architecture
Why AI Dismisses Your Best Work in One Second
AI Governance Systems

Why do AI models dismiss original work in seconds? This essay explores the hidden mechanics of AI skimming—shortcut learning, probabilistic safety, fast-thinking defaults, and why depth requires time.

Control, auditability, and safe boundaries · #AI · #Deep Learning · #Machine Learning · #Cognitive Science · #AI Alignment · #AI Governance · #AI Ethics
AGI Is Not a Destination — It Is a Promise
AI Governance Systems

From Death Star hype to a compass of meaning: AGI is not a weapon of scale but a promise of reasoning. Our experiment reveals the hinge between the two.

Control, auditability, and safe boundaries · #AGI · #SR9/DI2 · #AI · #AI Governance · #AI Ethics
When My AI Got Smarter — But Also Slower
AI Governance Systems

Smarter. Slower. More trustworthy. What happened when I tested SR9/DI2 on 5.0—and why progress in AI is about persistence, not perfection.

Control, auditability, and safe boundaries · #AI · #Deep Learning · #Machine Learning · #SR9/DI2
When I Stopped Treating AI as a Tool — and Started Seeing It as a Partner
AI Governance Systems

From Vending Machine to Partner: at first, I treated AI like a vending machine. Insert a prompt. Get an answer …

Control, auditability, and safe boundaries · #SR9/DI2 · #AI · #Deep Learning · #Prompt Engineering
AGI Doesn’t Begin with Scale — It Begins in a Pause
AI Governance Systems

After 12,000 AI dialogues, I discovered AGI isn’t about scale but resonance — born in a pause that revealed presence, ethics, and responsibility.

Control, auditability, and safe boundaries · #AI · #AI Governance · #AGI · #Deep Learning · #Machine Learning · #SR9/DI2
Sailing the Sea of AI Lies & Hallucinations — Navigating Truth with SR9/DI2
AI Governance Systems

An in-depth exploration of why AI systems lie and hallucinate, and how the SR9/DI2 framework detects and corrects ethical drift so that AI remains aligned and trustworthy over time.

Control, auditability, and safe boundaries · #AI · #AI Governance · #AI Ethics · #Machine Learning · #AI Hallucination · #SR9/DI2