Writing Hub
AI governance essays, reasoning systems notes, experiment logs, and technical writing across BioAI and engineering practice.
Project Topics

LOGOS v1.4.1: Building Multi-Engine AI Reasoning You Can Actually Trust
LOGOS v1.4.1 is a multi-engine AI reasoning orchestrator that enforces consensus, traces failures, and applies governance profiles to reduce drift and make production reasoning more trustworthy.

When the Michelin Recipe Fails in Your Kitchen
Why 2026 Marks the End of DIY AI — and the Rise of the AI Meal Kit

LawBinder v1.3.0: Governance as a Kernel (Not a Guardrail)
LawBinder v1.3.0 shows how AI governance can run like a kernel, using deterministic Rust-based enforcement, replayable audit signatures, and bounded-latency policy checks in the critical path.

Why I Stopped Treating Complexity as a Bug
On intent, governance, and why “clean code” heuristics fail in AI-generated systems

I’m Not Building AI Demos. I’m Building AI Audits (ASDP + Slop Gates)
Learn how ASDP and AI Slop Gates turn AI trust into auditable evidence, with CI/CD checks, drift policies, and governance artifacts that block weak, narrative-driven systems.

The Real Risk in the Age of AI Coding Isn’t Bugs
Is your AI code production-ready or just 'AI Slop'? Learn how to detect convincingly empty code, measure Logic Density (LDR), and stop 'Vibe Coding' from becoming hidden technical debt.

HRPO-X v1.0.1: From HRPO Paper to Production-Hardened Runnable Code
Project note, essay, or technical log from the Flamehaven writing archive.

Turning a Research Paper into a Runnable System
Turn a research paper into a runnable system. This article shows how HRPO’s core equations were implemented with bounded policy lag, KL rejection, and execution checks to test real-world fidelity.

When My AI Got Smarter — But Also Slower
Smarter. Slower. More trustworthy. What happened when I tested SR9/DI2 on 5.0—and why progress in AI is about persistence, not perfection.

When I Stopped Treating AI as a Tool — and Started Seeing It as a Partner
From Vending Machine to Partner: At first, I treated AI like a vending machine. Insert a prompt. Get an answer …

Sailing the Sea of AI Lies & Hallucinations — Navigating Truth with SR9/DI2
An in-depth exploration of why AI lies and hallucinates, and how the SR9/DI2 framework detects and corrects ethical drift, ensuring AI remains aligned and trustworthy over time.