Flamehaven.space

Writing Hub

AI governance essays, reasoning systems notes, experiment logs, and technical writing across BioAI and engineering practice.

Current view: search results for "Security"
The Harness Is the Product: What the Claude Code Leak Actually Revealed About AI Agent Architecture
Cloud & Engineering Foundations

The Claude Code leak exposed more than source code. It revealed that modern AI agent performance depends heavily on the harness around the model.

Operational surfaces that survive real deployment
#AI #AGI #AI Alignment #AI Governance #LLM #Deep Learning #Machine Learning #DevOps #Prompt Engineering #Software Development #Product Management #AI Code #Context Engineering #Architecture #Security #Data Orchestration
I Audited 10 Open-Source Bio-AI Repos. Most Could Produce Outputs. Few Could Establish Trust.
Scientific & BioAI Infrastructure

I audited 10 visible repositories. Most could produce outputs. Very few could establish what those outputs meant.

Evidence-aware scientific systems
#AI #AI Ethics #AI Alignment #AI Governance #Biomedical #Bioinformatics #Future of Work #LLM #Open Source #DevOps #Scientific Integrity #Prompt Engineering #GitHub #AI Code #Context Engineering #Architecture #Security #AI Research
Everyone Was Talking About Context Engineering. Nobody Had Solved Governance.
AI Governance Systems
MICA Series

Control, auditability, and safe boundaries
#AI #AI Ethics #AI Alignment #AI Governance #Future of Work #Deep Learning #Machine Learning #Cognitive Science #DevOps #Software Development #AI Code #Architecture #Context Engineering #Security
Bio-AI Repository Audit 2026: A Technical Report on 10 Open-Source Systems
Scientific & BioAI Infrastructure

We audited 10 prominent open-source Bio-AI repositories using code inspection and STEM-AI trust scoring. Eight of the ten scored T0: trust not established. Here is what the code actually shows.

Evidence-aware scientific systems
#AI #AGI #AI Alignment #AI Governance #Biomedical #Bioinformatics #MLOps #Deep Learning #Machine Learning #DevOps #AI Research #Scientific Integrity #Software Development #AI Code #Context Engineering #Architecture #Security
Medical AI Repositories Need More Than Benchmarks. We Built STEM-AI to Audit Trust
Scientific & BioAI Infrastructure
STEM-AI: Sovereign Trust Evaluator for Medical AI Artifacts

STEM-AI is a governance audit framework for public medical AI repositories. It scores README integrity, cross-platform consistency, and code infrastructure — because benchmarks alone don't tell you if a bio-AI tool is safe to trust.

Evidence-aware scientific systems
#AI #AI Ethics #AI Alignment #AI Governance #Biomedical #Bioinformatics #LLM #Cognitive Science #AI Research #Scientific Integrity #Software Development #Architecture #Context Engineering #Security
I Built an Ecosystem of 46 AI-Assisted Repos. Then I Realized It Might Be Eating Itself.
Reasoning / Verification Engines
Governed Reasoning

An ecosystem of 46 AI-assisted repos can become a closed loop. This article explores structural blind spots, self-validating toolchains, and the need for external validators to create intentional friction.

Inference quality, validation, and proof surfaces
#AI #AGI #AI Ethics #AI Alignment #AI Governance #AI Hallucination #MLOps #Machine Learning #Deep Learning #SR9/DI2 #Cognitive Science #Scientific Integrity #AI Research #Software Development #Business Strategy #Security #Architecture #Context Engineering #AI Code
Implementing "Refusal-First" RAG: Why We Architected Our AI to Say 'I Don't Know'
Reasoning / Verification Engines
Governed Reasoning

Implementing refusal-first RAG means teaching AI to say “I don’t know.” This article explains evidence atomization, Slop Gates, and grounding checks that favor verifiable answers over plausible hallucinations.

Inference quality, validation, and proof surfaces
#AI #AGI #AI Alignment #AI Governance #AI Hallucination #MLOps #Machine Learning #Deep Learning #SR9/DI2 #Cognitive Science #Security #Architecture #Context Engineering
Undo Beats IQ: Building Flamehaven as a Governed AI Runtime (Not a Prompt App)
AI Governance Systems
Governed Reasoning

Project note, essay, or technical log from the Flamehaven writing archive.

Control, auditability, and safe boundaries
#AI #AGI #Architecture #Security #DevOps #AI Governance #AI Alignment
Built a SaaS in 30 Minutes? When “No-Code Hype” Meets the Operational Wall
Cloud & Engineering Foundations

Where no-code hype hits the operational wall: auth, billing, security, and cost.

Operational surfaces that survive real deployment
#DevOps #Developer Tools #AI #Future of Work #LLM #Programming #Prompt Engineering #Software Development #Product Management