Flamehaven.space

Writing Hub

AI governance essays, reasoning systems notes, experiment logs, and technical writing across BioAI and engineering practice.

Search: Architecture
Each /slop Is a Calibration Signal — AI-SLOP Detector v3.6.0 and the Claude Code Skill
Reasoning / Verification Engines

Every /slop invocation records to a project-scoped history. After 10 re-scanned files, bounded self-calibration adjusts detection weights for your codebase. Here is the mechanism, the data, and what actually shipped in v3.6.0.

Inference quality, validation, and proof surfaces
Tags: AI, AGI, AI Alignment, Deep Learning, Machine Learning, Prompt Engineering, Product Management, Software Development, AI Code, Architecture, Data Orchestration, Code Review
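The blurb above describes a bounded self-calibration loop: each /slop invocation is logged, and after 10 re-scanned files the detection weights are adjusted per codebase. A minimal sketch of that idea, assuming hypothetical names (`SlopCalibrator`, `record_rescan`) and a simple confirm-rate update rule not taken from the actual v3.6.0 implementation:

```python
from collections import deque

class SlopCalibrator:
    """Hypothetical sketch: each /slop re-scan is logged, and after
    enough re-scanned files the per-rule detection weights are nudged
    toward the observed confirmation rate, clamped to a fixed band so
    one codebase cannot drift the detector far from its defaults."""

    RESCAN_THRESHOLD = 10          # calibrate after 10 re-scanned files
    BOUNDS = (0.5, 1.5)            # weights may move at most +/-50%

    def __init__(self, default_weights):
        self.defaults = dict(default_weights)
        self.weights = dict(default_weights)
        self.history = deque(maxlen=self.RESCAN_THRESHOLD)

    def record_rescan(self, rule, flagged, confirmed):
        """Log one re-scanned file: was the rule's flag confirmed?"""
        self.history.append((rule, flagged, confirmed))
        if len(self.history) == self.RESCAN_THRESHOLD:
            self._calibrate()

    def _calibrate(self):
        lo, hi = self.BOUNDS
        for rule in self.weights:
            events = [(f, c) for r, f, c in self.history if r == rule]
            if not events:
                continue
            confirm_rate = sum(c for _, c in events) / len(events)
            # shift the weight toward the confirm rate, then clamp
            target = self.defaults[rule] * (0.5 + confirm_rate)
            self.weights[rule] = max(lo * self.defaults[rule],
                                     min(hi * self.defaults[rule], target))
```

The bounds are the key governance detail: calibration is a signal, not an open-ended optimizer, so a noisy project can only move a rule's weight within a fixed band around its default.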
How Do You Trust the AI Auditor? STEM-AI v1.1.2 and Memory-Contracted Bio-AI Audits
Scientific & BioAI Infrastructure
STEM-AI: Sovereign Trust Evaluator for Medical AI Artifacts

STEM-AI v1.1.2 binds a bio/medical AI repository audit to a machine-checkable memory contract, then demonstrates it on a real open-source bioinformatics repository.

Evidence-aware scientific systems
Tags: AI, AGI, AI Ethics, AI Alignment, AI Governance, AI Hallucination, Biomedical, Bioinformatics, MLOps, Deep Learning, Machine Learning, Cognitive Science, Developer Tools, DevOps, AI Research, Scientific Integrity, Business Strategy, AI Code, Context Engineering, Architecture, Data Orchestration, Code Review
The Sheepwave Has a New Shape: OpenMythos and the Rise of Architecture Hype
Cloud & Engineering Foundations

A technical-opinion essay on OpenMythos, Claude Mythos, README-driven AI hype, and why architecture claims need source-level verification before becoming public belief.

Operational surfaces that survive real deployment
Tags: Code Review, Data Orchestration, Context Engineering, Software Development, Prompt Engineering, Architecture, AI Code, AI Governance, AI Alignment, LLM, Deep Learning, Machine Learning, Cognitive Science, GitHub, Open Source
The Difference Between a Harness and a Leash
AI Governance Systems

A practical essay on why most AI 'harnesses' are still leashes: guides shape behavior, but only justified external measurement creates a real governance boundary.

Control, auditability, and safe boundaries
Tags: AI Governance, AI Alignment, AI, LLM, DevOps, Prompt Engineering, Product Management, Architecture, Data Orchestration, Context Engineering, Software Development
OpenMythos v0.5.0 Code Review - Audit Report
Cloud & Engineering Foundations
Code Review

OpenMythos collected thousands of GitHub stars and dominated AI discourse for a week. This is what happens when you actually read the code — and why the people who do always arrive too late to matter.

Operational surfaces that survive real deployment
Tags: Code Review, AI Alignment, Prompt Engineering, Software Development, Context Engineering, Architecture, Data Orchestration
The $100 Million Blind Spot: What No-Code Healthcare Builders Still Don't See
Scientific & BioAI Infrastructure

An analysis of how no-code and AI-generated healthcare apps create regulatory liability when patient data flows are deployed without prior mapping, auditability, or compliance architecture.

Evidence-aware scientific systems
Tags: AI, AGI, Biomedical, Bioinformatics, MLOps, Deep Learning, Machine Learning, Cognitive Science, DevOps, Prompt Engineering, Product Management, Software Development, Future of AI
FLAMEHAVEN FileSearch: Why This RAG Engine Feels Different from the Usual Stack
Cloud & Engineering Foundations

A technical look at FLAMEHAVEN FileSearch: BM25+RRF hybrid retrieval, chunk-addressable indexing, deterministic DSP vectors, and the trade-offs behind a lower-overhead self-hosted RAG engine.

Operational surfaces that survive real deployment
Tags: AI, AGI, AI Alignment, AI Governance, DevOps, Open Source, Developer Tools, Prompt Engineering, Software Development, GitHub, AI Code, Architecture, Data Orchestration
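The FileSearch blurb above names BM25+RRF hybrid retrieval. Reciprocal Rank Fusion is a standard, parameter-light way to merge a lexical (BM25) ranking with a vector ranking; a minimal sketch of the fusion step, not taken from the FileSearch codebase:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists (e.g. a BM25
    ranking and a dense-vector ranking) into one. Each document scores
    sum(1 / (k + rank)) over the lists it appears in; k=60 is the
    conventional smoothing constant. Rank-based fusion needs no score
    normalization, which is why it suits hybrid retrieval."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # highest fused score first
    return sorted(scores, key=scores.get, reverse=True)
```

For example, `rrf_fuse([["a", "b", "c"], ["b", "c", "a"]])` promotes `b`, which sits near the top of both lists, over `a`, which tops only one.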
It Gets Smarter Every Scan: AI-SLOP Detector v3.5.0 and the Self-Calibration Loop
Cloud & Engineering Foundations

AI-built apps are starting to fail in public. Not every failure is static-analysis territory, but many share the same upstream condition: plausible-looking code passing review without carrying enough real logic. AI-SLOP Detector v3.5.0 adds a self-calibration loop to reduce that gap.

Operational surfaces that survive real deployment
Tags: AI, AGI, AI Alignment, AI Governance, Deep Learning, Machine Learning, Open Source, Developer Tools, Prompt Engineering, Software Development, AI Code, Context Engineering, Architecture, Data Orchestration
Can AI Review Physics? Yes — That Is Why We Built SPAR
Reasoning / Verification Engines

SPAR is a deterministic framework for claim-aware review: checking whether an output deserves the claim attached to it.

Inference quality, validation, and proof surfaces
Tags: AI, AGI, AI Alignment, AI Governance, Deep Learning, Machine Learning, Cognitive Science, AI Research, Scientific Integrity, Software Development, AI Code, Context Engineering, Architecture, Data Orchestration
Bridging the Gap: From AI Slop to Mathematical Governance
Scientific & BioAI Infrastructure

A mathematical framework for detecting AI-generated code slop using AST distributions, Jensen-Shannon divergence, and geometric governance gates.

Evidence-aware scientific systems
Tags: AI, AGI, AI Ethics, AI Alignment, AI Governance, Deep Learning, Machine Learning, AI Research, Scientific Integrity, Prompt Engineering, Programming, Software Development, AI Code, Context Engineering, Architecture, Data Orchestration
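The "Bridging the Gap" blurb above rests on Jensen-Shannon divergence between AST node-type distributions. A minimal sketch of that measure over frequency histograms, assuming the distributions are already extracted and normalized (the AST extraction and the geometric gates themselves are out of scope here):

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions,
    here imagined as normalized AST node-type histograms (dicts mapping
    node type -> probability). Symmetric and bounded by ln(2); a higher
    value means the code's syntactic profile diverges further from the
    reference corpus."""
    keys = set(p) | set(q)

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms
        return sum(a.get(k, 0.0) * math.log(a.get(k, 0.0) / b[k])
                   for k in keys if a.get(k, 0.0) > 0)

    # midpoint distribution: JSD(p, q) = (KL(p||m) + KL(q||m)) / 2
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Identical histograms give 0, fully disjoint ones give ln(2), so a governance gate can be a simple threshold on this value.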
My AI Maintainer Kept Making Wrong Calls. So I Made It Report Its State Before Touching Anything.
Cloud & Engineering Foundations
MICA Series

Part 6 moves from landscape to operation. This is what MICA looks like when it is actually running inside a real maintenance workflow — session report, self-test, drift, invariants, and operator judgment.

Operational surfaces that survive real deployment
Tags: AI, AGI, AI Alignment, AI Governance, Developer Tools, DevOps, AI Code, Context Engineering, Architecture, Data Orchestration
Prompt → RAG → MCP → Agent → Harness, and What?
Cloud & Engineering Foundations

Why the next layer in AI may be governance infrastructure, not just better agents.

Operational surfaces that survive real deployment
Tags: AI, AGI, AI Ethics, AI Alignment, AI Governance, AI Hallucination, LLM, Cognitive Science, Developer Tools, Prompt Engineering, Software Development, AI Code, Context Engineering, Architecture, Data Orchestration

Showing page 1 of 4 · 46 matching posts