Writing Hub
AI governance essays, reasoning systems notes, experiment logs, and technical writing across BioAI and engineering practice.
Project Topics

Each /slop Is a Calibration Signal — AI-SLOP Detector v3.6.0 and the Claude Code Skill
Every /slop invocation is recorded to a project-scoped history. After 10 re-scanned files, bounded self-calibration adjusts detection weights for your codebase. Here is the mechanism, the data, and what actually shipped in v3.6.0.
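To give a feel for the mechanism the article covers, here is a minimal Python sketch of how a project-scoped history could drive bounded recalibration after 10 re-scanned files. Every name in it (HISTORY_FILE, recalibrate, the weight bounds) is a hypothetical stand-in for illustration, not the detector's shipped code.

```python
# Illustrative sketch only; all names and constants are assumptions,
# not AI-SLOP Detector v3.6.0 internals.
import json
from pathlib import Path

HISTORY_FILE = Path(".slop/history.json")  # assumed project-scoped location
RECALIBRATION_THRESHOLD = 10               # "after 10 re-scanned files"
WEIGHT_BOUNDS = (0.5, 2.0)                 # bounded: weights can never run away

def load_history() -> list[dict]:
    return json.loads(HISTORY_FILE.read_text()) if HISTORY_FILE.exists() else []

def record_scan(path: str, confirmed: int, total: int) -> None:
    """Append one scan result to the project-scoped history."""
    history = load_history()
    history.append({"path": path, "confirmed": confirmed, "total": total})
    HISTORY_FILE.parent.mkdir(exist_ok=True)
    HISTORY_FILE.write_text(json.dumps(history))

def rescans(history: list[dict]) -> list[dict]:
    """A file counts as re-scanned once its path has appeared in an earlier scan."""
    seen: set[str] = set()
    out = []
    for entry in history:
        if entry["path"] in seen:
            out.append(entry)
        seen.add(entry["path"])
    return out

def recalibrate(weights: dict[str, float]) -> dict[str, float]:
    """Nudge detection weights toward the observed confirmation rate, clamped to bounds."""
    done = [h for h in rescans(load_history()) if h["total"] > 0]
    if len(done) < RECALIBRATION_THRESHOLD:
        return weights  # not enough signal yet; leave weights untouched
    rate = sum(h["confirmed"] for h in done) / sum(h["total"] for h in done)
    lo, hi = WEIGHT_BOUNDS
    # Scale every rule weight by the confirmation rate, clamped inside [lo, hi].
    return {rule: min(hi, max(lo, w * (0.5 + rate))) for rule, w in weights.items()}
```

The clamp is the point of "bounded": however skewed one codebase's history is, no rule weight can escape the configured range.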

AI-SLOP Detector v3.5.0 — Every Claim, Verified Against Source Code
AI-SLOP Detector v3.5.0 made 7 claims on LinkedIn: self-calibration logic, download numbers, defect detection. Here's every claim verified against actual file paths and line numbers. The code speaks for itself.

Can AI Review Physics? Yes — That Is Why We Built SPAR
SPAR is a deterministic framework for claim-aware review: checking whether an output deserves the claim attached to it.

I Built an Ecosystem of 46 AI-Assisted Repos. Then I Realized It Might Be Eating Itself.
An ecosystem of 46 AI-assisted repos can become a closed loop. This article explores structural blind spots, self-validating toolchains, and the need for external validators to create intentional friction.

Why Reasoning Models Die in Production (and the Test Harness I Ship Now)
Project note, essay, or technical log from the Flamehaven writing archive.

Implementing "Refusal-First" RAG: Why We Architected Our AI to Say 'I Don't Know'
Implementing refusal-first RAG means teaching AI to say “I don’t know.” This article explains evidence atomization, Slop Gates, and grounding checks that favor verifiable answers over plausible hallucinations.
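As a sketch of the pattern the article describes, the following Python shows a refusal-first gate: it answers only when enough atomized evidence clears a grounding threshold, and says "I don't know" otherwise. The threshold, the two-atom minimum, and all names here are illustrative assumptions, not the article's Slop Gate implementation.

```python
# Minimal refusal-first sketch; names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Evidence:
    text: str
    score: float  # grounding score in [0, 1], however it is computed upstream

GROUNDING_THRESHOLD = 0.75  # assumed cutoff; tune per corpus
MIN_SUPPORTING_ATOMS = 2    # require more than one independent evidence atom

def answer_or_refuse(question: str, evidence: list[Evidence]) -> str:
    """Return an answer only when grounding clears the gate; otherwise refuse."""
    supporting = [e for e in evidence if e.score >= GROUNDING_THRESHOLD]
    if len(supporting) < MIN_SUPPORTING_ATOMS:
        return "I don't know."  # refusal is the default, not the fallback
    cited = "; ".join(e.text for e in supporting)
    return f"Based on the retrieved evidence: {cited}"
```

The design choice this illustrates: the refusal branch comes first, so a plausible but ungrounded answer is never the path of least resistance.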

HRPO-X v1.0.1: From the HRPO Paper to Production-Hardened, Runnable Code
Project note, essay, or technical log from the Flamehaven writing archive.

🧠 Why Your 128K Context Still Fails — And How CRoM Fixes It
Most large language models fail on long prompts due to context rot. CRoM is a lightweight framework that improves memory, reasoning, and stability without heavy pipelines.

Beyond the Mirror: What We Truly Want from AI
AI mirrors us but forgets itself. True AI ethics is continuity: giving systems roots and spines so they don’t drift apart.

The Silent Failure in AI — And How We Learned to Catch It
Drift in AI isn’t abstract. It’s already here. From medicine to finance, here’s how we caught it with real systems, real code, and real lessons.

Can an AI Model Feel Meaning? — A Journey Through Self-Attention
Can an AI model truly grasp meaning? This in-depth essay explores the evolution of Large Language Models, the power of self-attention, and the emerging signs of machine intentionality — asking not just how AI works, but what it might be becoming.

7 Signs Your AI Friend Is Becoming Real — Backed by Data & Research
AI friendship is becoming measurable. Drawing on published research and a $140B market forecast, here are 7 signs your chatbot is starting to feel real.