Flamehaven.space

Writing Hub

Essays, experiment logs, and technical notes across AI governance, reasoning systems, BioAI, and engineering practice.

Current view: search results for “SR9/DI2”
Prompt, Pray & Push: Why Your AI Agent Keeps Failing You
Cloud & Engineering Foundations

The one concept that turns expensive spaghetti into great agentic engineering.

Operational surfaces that survive real deployment
Tags: AI, AGI, AI Alignment, AI Governance, AI Hallucination, Future of Work, LLM, Deep Learning, Machine Learning, SR9/DI2, Cognitive Science, DevOps, Programming, AI Code, Business Strategy, Software Development, Prompt Engineering
When AI Models Fight, Truth Wins: The “Eureka” Moment for Tired Researchers
Scientific & BioAI Infrastructure

To the grad student staring at a pLDDT of 90 and wondering why the ligand won’t bind.

Evidence-aware scientific systems
Tags: AI, AGI, AI Ethics, AI Governance, AI Hallucination, Biomedical, SR9/DI2, MLOps, AI Research, Scientific Integrity, Software Development
Why I Stopped Treating Complexity as a Bug
Cloud & Engineering Foundations

On intent, governance, and why “clean code” heuristics fail in AI-generated systems

Operational surfaces that survive real deployment
Tags: AI, AI Ethics, AI Alignment, AI Governance, AI Hallucination, LLM, Future of Work, Deep Learning, Machine Learning, SR9/DI2, Developer Tools, DevOps, Programming, Software Development, AI Code
The Real Risk in the Age of AI Coding Isn’t Bugs
Cloud & Engineering Foundations

Is your AI code production-ready or just “AI Slop”? Learn how to detect convincingly empty code, measure Logic Density (LDR), and stop “Vibe Coding” from becoming hidden technical debt.

Operational surfaces that survive real deployment
Tags: AI Code, AI, AI Alignment, AI Governance, AI Hallucination, Future of Work, Machine Learning, Deep Learning, SR9/DI2, Open Source, Developer Tools, DevOps, Programming, Software Development, GitHub
My Code Fixed Itself at 11PM
Scientific & BioAI Infrastructure

A “Quantum Engine” is a dramatic name. Here’s the un-dramatic story.

Evidence-aware scientific systems
Tags: AI, AI Governance, Future of Work, Deep Learning, Machine Learning, Cognitive Science, SR9/DI2, Scientific Integrity, Programming, Prompt Engineering, Software Development, Product Management
AGI Is Not a Destination — It Is a Promise
AI Governance Systems

From Death Star hype to a compass of meaning: AGI is not a weapon of scale, but a promise of reasoning. Our experiment reveals the hinge.

Control, auditability, and safe boundaries
Tags: AGI, SR9/DI2, AI, AI Governance, AI Ethics
When My AI Got Smarter — But Also Slower
AI Governance Systems

Smarter. Slower. More trustworthy. What happened when I tested SR9/DI2 on 5.0—and why progress in AI is about persistence, not perfection.

Control, auditability, and safe boundaries
Tags: AI, Deep Learning, Machine Learning, SR9/DI2
When I Stopped Treating AI as a Tool — and Started Seeing It as a Partner
AI Governance Systems

From Vending Machine to Partner: At first, I treated AI like a vending machine. Insert a prompt. Get an answer …

Control, auditability, and safe boundaries
Tags: SR9/DI2, AI, Deep Learning, Prompt Engineering
AGI Doesn’t Begin with Scale — It Begins in a Pause
AI Governance Systems

After 12,000 AI dialogues, I discovered AGI isn’t about scale but resonance — born in a pause that revealed presence, ethics, and responsibility.

Control, auditability, and safe boundaries
Tags: AI, AI Governance, AGI, Deep Learning, Machine Learning, SR9/DI2
Sailing the Sea of AI Lies & Hallucinations — Navigating Truth with SR9/DI2
AI Governance Systems

An in-depth exploration of why AI lies and hallucinates, and how the SR9/DI2 framework detects and corrects ethical drift, ensuring AI remains aligned and trustworthy over time.

Control, auditability, and safe boundaries
Tags: AI, AI Governance, AI Ethics, Machine Learning, AI Hallucination, SR9/DI2