Flamehaven.space

Writing Hub

AI governance essays, reasoning systems notes, experiment logs, and technical writing across BioAI and engineering practice.

Current view: AI Governance Systems · Search: AI Alignment
The Difference Between a Harness and a Leash
AI Governance Systems

A practical essay on why most AI 'harnesses' are still leashes: guides shape behavior, but only justified external measurement creates a real governance boundary.

Control, auditability, and safe boundaries · #AI Governance · #AI Alignment · #AI · #LLM · #DevOps · #Prompt Engineering · #Product Management · #Architecture · #Data Orchestration · #Context Engineering · #Software Development
Everyone Was Talking About Context Engineering. Nobody Had Solved Governance.
AI Governance Systems
MICA Series

Control, auditability, and safe boundaries · #AI · #AI Ethics · #AI Alignment · #AI Governance · #Future of Work · #Deep Learning · #Machine Learning · #Cognitive Science · #DevOps · #Software Development · #AI Code · #Architecture · #Context Engineering · #Security
The Model Already Read the README. MICA v0.1.8 Made It a Protocol
AI Governance Systems
MICA Series

v0.1.7 made scoring a contract with fail-closed gates. v0.1.8 recognized that README-first behavior could serve as invocation and formalized it as a schema-level protocol. This article uses simplified examples to show how the invocation gap that had existed since v0.0.1 was finally closed; a generic fail-closed gate is sketched below.

Control, auditability, and safe boundaries · #AI · #AI Ethics · #AI Alignment · #AI Governance · #MLOps · #SR9/DI2 · #Deep Learning · #Machine Learning · #Cognitive Science · #DevOps · #Context Engineering · #AI Code · #Business Strategy · #Software Development · #Prompt Engineering
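The article's MICA internals are not reproduced here, but the fail-closed gate pattern the teaser names is general enough to sketch. Below is a minimal Python illustration under assumed names (score_output, gate, and THRESHOLD are hypothetical, not MICA's API): any failure to produce a valid score blocks the output rather than letting it pass.

```python
# Minimal sketch of a fail-closed scoring gate. All names here
# (score_output, gate, THRESHOLD) are hypothetical, not MICA's API.

THRESHOLD = 0.8  # assumed acceptance threshold

def score_output(text: str) -> float:
    """Stand-in scorer; a real gate would call an external evaluator."""
    if not text.strip():
        raise ValueError("nothing to score")
    return min(1.0, len(set(text.split())) / 50)  # toy stand-in metric

def gate(text: str) -> str:
    try:
        score = score_output(text)
    except Exception:
        # Fail closed: a scoring error is treated as rejection,
        # never as a silent pass-through.
        raise PermissionError("gate failed closed: no valid score")
    if score < THRESHOLD:
        raise PermissionError(f"gate rejected output (score={score:.2f})")
    return text
```

The point of the pattern is that the error path and the low-score path converge on the same rejection, so a scorer outage can never widen what gets through.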
Your Agentic Stack Has Two Layers. It Needs Three.
AI Governance Systems
Governed Reasoning

Most agentic stacks cover tools and skills, but miss intent governance. Learn why a third layer is needed to stop AI drift, scope creep, and technically correct systems heading in the wrong direction; a toy version of such a layer is sketched below.

Control, auditability, and safe boundaries · #AI · #AGI · #AI Alignment · #AI Governance · #AI Hallucination · #LLM · #Deep Learning · #Machine Learning · #SR9/DI2 · #Cognitive Science · #Prompt Engineering · #AI Code · #Context Engineering · #Architecture
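As a rough picture of what a third, intent-governance layer could look like above tools and skills, here is a toy Python sketch. IntentGovernor, governed_call, and the scope names are hypothetical illustrations, not the article's actual design.

```python
# Toy third-layer sketch: tool calls are checked against a declared
# intent scope before execution. All names are illustrative only.

from typing import Callable

class IntentGovernor:
    def __init__(self, declared_intent: str, allowed_tools: set[str]):
        self.declared_intent = declared_intent
        self.allowed_tools = allowed_tools

    def authorize(self, tool_name: str) -> None:
        # Scope-creep check: a tool outside the declared intent's
        # allow-list is refused even if the call itself is well-formed.
        if tool_name not in self.allowed_tools:
            raise PermissionError(
                f"'{tool_name}' is outside declared intent "
                f"'{self.declared_intent}'"
            )

def governed_call(gov: IntentGovernor, tool_name: str,
                  tool: Callable[..., object], *args, **kwargs):
    gov.authorize(tool_name)      # layer 3: intent governance
    return tool(*args, **kwargs)  # layers 1-2: tools and skills

gov = IntentGovernor("summarize quarterly report", {"summarize"})
print(governed_call(gov, "summarize", lambda t: t[:40], "Q3 revenue ..."))
# governed_call(gov, "send_email", ...) would raise PermissionError.
```

The design point is that the check runs on the declared intent, not on the individual call: a technically well-formed tool invocation still fails if it drifts outside the task's stated scope.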
LOGOS LawBinder: From Governed Reasoning to Audit-Grade Execution
AI Governance Systems
Governed Reasoning

This article explains how LOGOS v1.4.1 improves production AI reasoning with multi-engine orchestration, complexity-aware governance, and audit-friendly failure tracing.

Control, auditability, and safe boundaries · #AI · #AGI · #AI Ethics · #AI Alignment · #AI Governance · #AI Hallucination · #MLOps · #SR9/DI2 · #Cognitive Science · #Machine Learning · #Deep Learning · #Context Engineering · #Architecture · #Software Development · #Prompt Engineering
LawBinder v1.3.0: Governance as a Kernel (Not a Guardrail)
AI Governance Systems
Governed Reasoning

LawBinder v1.3.0 shows how AI governance can run like a kernel, using deterministic Rust-based enforcement, replayable audit signatures, and bounded-latency policy checks in the critical path; the shape of such a check is sketched below.

Control, auditability, and safe boundaries · #AI · #AGI · #AI Alignment · #AI Hallucination · #AI Governance · #LLM · #Deep Learning · #Machine Learning · #SR9/DI2 · #AI Code · #Architecture · #Context Engineering
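LawBinder itself is described as deterministic Rust; purely to illustrate the bounded-latency, fail-closed shape of a policy check in the critical path, here is a small Python sketch. The 10 ms budget and every name are assumptions, not LawBinder figures.

```python
# Toy illustration of a bounded-latency, fail-closed policy check.
# The budget and all names are assumptions; a real kernel-style
# enforcer would not spawn a thread pool per check.

import concurrent.futures as cf

LATENCY_BUDGET_S = 0.010  # assumed 10 ms budget

def policy_allows(action: dict) -> bool:
    """Stand-in deterministic rule set."""
    return action.get("kind") in {"read", "summarize"}

def enforce(action: dict) -> None:
    with cf.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(policy_allows, action)
        try:
            allowed = future.result(timeout=LATENCY_BUDGET_S)
        except cf.TimeoutError:
            allowed = False  # budget blown: deny rather than wave through
    if not allowed:
        raise PermissionError(f"denied: {action}")

enforce({"kind": "read", "path": "notes.md"})  # completes within budget
```

The property worth noticing is the same fail-closed convergence as in the gate sketch earlier: an overrun of the latency budget is treated as a denial, never as an implicit approval.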
Undo Beats IQ: Building Flamehaven as a Governed AI Runtime (Not a Prompt App)
AI Governance Systems
Governed Reasoning

Project note, essay, or technical log from the Flamehaven writing archive.

Control, auditability, and safe boundaries · #AI · #AGI · #Architecture · #Security · #DevOps · #AI Governance · #AI Alignment
Turning a Research Paper into a Runnable System
AI Governance Systems
Governed Reasoning

This article shows how HRPO's core equations were implemented with bounded policy lag, KL rejection, and execution checks to test real-world fidelity; a generic KL rejection check is sketched below.

Control, auditability, and safe boundaries · #AI · #AI Ethics · #AI Alignment · #AI Governance · #Deep Learning · #Machine Learning · #SR9/DI2 · #AI Research · #Scientific Integrity · #AI Code · #Architecture · #Context Engineering
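KL rejection is a standard trust-region-style check and can be sketched generically without the paper's equations; the threshold and names below are assumptions, not HRPO's values, and the bounded-policy-lag and execution-check mechanisms the teaser also names are not shown.

```python
# Generic KL rejection check for policy updates. The threshold and
# names are assumptions, not values from HRPO or the article.

import numpy as np

KL_MAX = 0.05  # assumed trust-region threshold

def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """KL(p || q) for categorical action distributions."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def accept_update(old_probs: np.ndarray, new_probs: np.ndarray) -> bool:
    # Reject any update whose new policy drifts too far from the old
    # one, a trust-region-style guard against runaway policy shift.
    return kl_divergence(old_probs, new_probs) <= KL_MAX

old = np.array([0.25, 0.25, 0.25, 0.25])
new = np.array([0.24, 0.26, 0.25, 0.25])
print(accept_update(old, new))  # True: drift well under the threshold
```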
Why AI Dismisses Your Best Work in One Second
AI Governance Systems

Why do AI models dismiss original work in seconds? This essay explores the hidden mechanics of AI skimming—shortcut learning, probabilistic safety, fast-thinking defaults, and why depth requires time.

Control, auditability, and safe boundaries · #AI · #Deep Learning · #Machine Learning · #Cognitive Science · #AI Alignment · #AI Governance · #AI Ethics
When I Stopped Treating AI as a Tool — and Started Seeing It as a Partner
AI Governance Systems

From Vending Machine to Partner: At first, I treated AI like a vending machine. Insert a prompt. Get an answer …

Control, auditability, and safe boundaries · #AI · #AGI · #AI Alignment · #SR9/DI2 · #Prompt Engineering · #Software Development · #Context Engineering · #AI Code
Sailing the Sea of AI Lies & Hallucinations — Navigating Truth with SR9/DI2
AI Governance Systems

An in-depth exploration of why AI lies and hallucinates, and how the SR9/DI2 framework detects and corrects ethical drift, ensuring AI remains aligned and trustworthy over time.

Control, auditability, and safe boundaries · #AI · #AGI · #AI Alignment · #SR9/DI2 · #Machine Learning · #Deep Learning · #Context Engineering · #AI Code · #Architecture · #Data Orchestration