Flamehaven.space

Writing Hub

AI governance essays, reasoning systems notes, experiment logs, and technical writing across BioAI and engineering practice.

Current view · Search: Cognitive Science
How Do You Trust the AI Auditor? STEM-AI v1.1.2 and Memory-Contracted Bio-AI Audits
Scientific & BioAI Infrastructure
STEM-AI: Sovereign Trust Evaluator for Medical AI Artifacts

STEM-AI v1.1.2 binds a bio/medical AI repository audit to a machine-checkable memory contract, then demonstrates it on a real open-source bioinformatics repository.

Evidence-aware scientific systems
#AI · #AGI · #AI Ethics · #AI Alignment · #AI Governance · #AI Hallucination · #Biomedical · #Bioinformatics · #MLOps · #Deep Learning · #Machine Learning · #Cognitive Science · #Developer Tools · #DevOps · #AI Research · #Scientific Integrity · #Business Strategy · #AI Code · #Context Engineering · #Architecture · #Data Orchestration · #Code Review
When an AI Pipeline Passes — But One Path Still Must Be Held: EXP-034
Scientific & BioAI Infrastructure
RExSyn Nexus-Bio

EXP-034 tested whether a method-locked Bio-AI governance pipeline could survive modal expansion, AlphaFold EBI observer wiring, and AG-live measurement without breaking its PASS/BLOCK judgment baseline.

Evidence-aware scientific systems
#AI · #AGI · #AI Ethics · #AI Governance · #AI Alignment · #AI Hallucination · #Biomedical · #Bioinformatics · #SR9/DI2 · #Machine Learning · #Deep Learning · #Cognitive Science · #Data Orchestration · #Code Review
The Sheepwave Has a New Shape: OpenMythos and the Rise of Architecture Hype
Cloud & Engineering Foundations

A technical-opinion essay on OpenMythos, Claude Mythos, README-driven AI hype, and why architecture claims need source-level verification before becoming public belief.

Operational surfaces that survive real deployment
#Code Review · #Data Orchestration · #Context Engineering · #Software Development · #Prompt Engineering · #Architecture · #AI Code · #AI Governance · #AI Alignment · #LLM · #Deep Learning · #Machine Learning · #Cognitive Science · #GitHub · #Open Source
The $100 Million Blind Spot: What No-Code Healthcare Builders Still Don't See
Scientific & BioAI Infrastructure

An analysis of how no-code and AI-generated healthcare apps create regulatory liability when patient data flows are deployed without prior mapping, auditability, or compliance architecture.

Evidence-aware scientific systems
#AI · #AGI · #Biomedical · #Bioinformatics · #MLOps · #Deep Learning · #Machine Learning · #Cognitive Science · #DevOps · #Prompt Engineering · #Product Management · #Software Development · #Future of AI
Can AI Review Physics? Yes — That Is Why We Built SPAR
Reasoning / Verification Engines

SPAR is a deterministic framework for claim-aware review: it checks whether an output actually deserves the claim attached to it.

Inference quality, validation, and proof surfaces
#AI · #AGI · #AI Alignment · #AI Governance · #Deep Learning · #Machine Learning · #Cognitive Science · #AI Research · #Scientific Integrity · #Software Development · #AI Code · #Context Engineering · #Architecture · #Data Orchestration
Prompt → RAG → MCP → Agent → Harness, and What?
Cloud & Engineering Foundations

Why the next layer in AI may be governance infrastructure, not just better agents.

Operational surfaces that survive real deployment
#AI · #AGI · #AI Ethics · #AI Alignment · #AI Governance · #AI Hallucination · #LLM · #Cognitive Science · #Developer Tools · #Prompt Engineering · #Software Development · #AI Code · #Context Engineering · #Architecture · #Data Orchestration
After Auditing 10 Bio-AI Repositories, I Think We're Scaling the Wrong Layer
AI Governance Systems
STEM-AI: Sovereign Trust Evaluator for Medical AI Artifacts

After auditing 10 open-source Bio-AI repositories, one pattern stood out: the field is scaling packaging faster than verification. Here is what that gap actually costs.

Control, auditability, and safe boundaries
#AI · #AGI · #AI Ethics · #AI Governance · #MLOps · #Cognitive Science · #Open Source · #DevOps · #AI Code · #Architecture · #GitHub · #Software Development
Everyone Was Talking About Context Engineering. Nobody Had Solved Governance.
AI Governance Systems
MICA Series

Control, auditability, and safe boundaries
#AI · #AI Ethics · #AI Alignment · #AI Governance · #Future of Work · #Deep Learning · #Machine Learning · #Cognitive Science · #DevOps · #Software Development · #AI Code · #Architecture · #Context Engineering · #Security
The Model Already Read the README. MICA v0.1.8 Made It a Protocol
AI Governance Systems
MICA Series

v0.1.7 made scoring a contract with fail-closed gates. v0.1.8 recognized that README-first behavior could serve as invocation and formalized it as a schema-level protocol. This article uses simplified examples to show how the invocation gap that had existed since v0.0.1 was finally closed.

Control, auditability, and safe boundaries
#AI · #AI Ethics · #AI Alignment · #AI Governance · #MLOps · #SR9/DI2 · #Deep Learning · #Machine Learning · #Cognitive Science · #DevOps · #Context Engineering · #AI Code · #Business Strategy · #Software Development · #Prompt Engineering
Medical AI Repositories Need More Than Benchmarks. We Built STEM-AI to Audit Trust
Scientific & BioAI Infrastructure
STEM-AI: Sovereign Trust Evaluator for Medical AI Artifacts

STEM-AI is a governance audit framework for public medical AI repositories. It scores README integrity, cross-platform consistency, and code infrastructure, because benchmarks alone don't tell you whether a bio-AI tool is safe to trust.

Evidence-aware scientific systems
#AI · #AI Ethics · #AI Alignment · #AI Governance · #Biomedical · #Bioinformatics · #LLM · #Cognitive Science · #AI Research · #Scientific Integrity · #Software Development · #Architecture · #Context Engineering · #Security
The Schema Existed. The Model Had No Way to Know.
Cloud & Engineering Foundations
MICA Series

v0.0.1 proved that context could be structured. It did not prove that the structure could govern what shaped the session. Three failures, and why only one made the others meaningless.

Operational surfaces that survive real deployment
#AI · #AI Alignment · #AI Governance · #Deep Learning · #Machine Learning · #SR9/DI2 · #Cognitive Science · #DevOps · #Developer Tools · #AI Code · #Context Engineering · #Architecture
95% of AI Businesses Will Die. Here’s How to Not Be One of Them.
Cloud & Engineering Foundations

What the data, a founder’s confession, and 70 years of tech history tell us about who actually survives.

Operational surfaces that survive real deployment
#AI · #AGI · #AI Ethics · #AI Alignment · #Future of Work · #LLM · #Deep Learning · #Machine Learning · #Cognitive Science · #Developer Tools · #AI Code · #Startups · #Software Development · #Prompt Engineering

Showing page 1 of 3 · 32 matching posts