Flamehaven.space

Writing Hub

AI governance essays, reasoning systems notes, experiment logs, and technical writing across BioAI and engineering practice.

Current view · Search: AGI
My First Attempt at a Medical AI with ELI5
Scientific & BioAI Infrastructure

How I built my first medical AI prototype without med school or credentials—using GitHub, arXiv, and one magic spell: ELI5.

Evidence-aware scientific systems · #AI #AI Ethics #AI Governance #Biomedical
🧠 Why Your 128K Context Still Fails — And How CRoM Fixes It
Reasoning / Verification Engines

Most large language models fail in long prompts due to context rot. CRoM is a lightweight framework that improves memory, reasoning, and stability without heavy pipelines.

Inference quality, validation, and proof surfaces · #AI #AGI #AI Alignment #AI Governance #Future of Work #Deep Learning #LLM #Machine Learning #Prompt Engineering #Cognitive Science
🌌 The Wall I Couldn’t Climb — And the Window AI Opened
Cloud & Engineering Foundations

In the crowded AI field, I had no pedigree, no network, no prestige—only weakness, persistence, and questions. This essay reflects on failure, drift, and the quiet insights AI gives us, offering hope and courage to those building in the shadows.

Operational surfaces that survive real deployment · #AI #AGI #AI Ethics #Future of Work #Prompt Engineering #Cognitive Science
Your Co-Author Might Be a YAML File
Cloud & Engineering Foundations

AI is no longer just a tool—it’s a partner. From Stanford labs to Reddit hacks, this essay explores the future of human + AI co-authorship.

Operational surfaces that survive real deployment · #AI #Future of Work #AI Ethics #AGI #AI Alignment #AI Governance #Deep Learning #Machine Learning
🧭 The Path to AGI: 5 Thresholds No One Talks About
AI Signals & Market Shifts

AGI isn’t science fiction anymore—discover the five critical thresholds AI systems must cross to evolve from code into true general intelligence.

Trend shifts, market movement, and strategic signals · #Future of Work #AI #AGI #AI Ethics
The AI Bubble and the Builders Who Break It
AI Signals & Market Shifts

Why the AI bubble persists — hype, misaligned incentives, and closed research — and how an outsider approach of quantifying ethics, shipping code, and collaborating with AI offers a different path.

Trend shifts, market movement, and strategic signals · #AI Ethics #AI #AGI #AI Hallucination #AI Governance
7 Signs Your AI Friend Is Becoming Real — Backed by Data & Research
Reasoning / Verification Engines

AI friendship is becoming measurable. Backed by research and a $140B market forecast, discover 7 signs your chatbot feels real.

Inference quality, validation, and proof surfaces · #AI Ethics #AI #Machine Learning #AI Hallucination #Deep Learning #AGI
AGI Is Not a Destination — It Is a Promise
AI Governance Systems

From Death Star hype to a compass of meaning: AGI is not a weapon of scale, but a promise of reasoning. Our experiment reveals the hinge.

Control, auditability, and safe boundaries · #AGI #SR9/DI2 #AI #AI Governance #AI Ethics
When My AI Got Smarter — But Also Slower
AI Governance Systems

Smarter. Slower. More trustworthy. What happened when I tested SR9/DI2 on 5.0—and why progress in AI is about persistence, not perfection.

Control, auditability, and safe boundaries · #AI #AGI #AI Governance #Deep Learning #Machine Learning #SR9/DI2 #AI Code #Context Engineering #Architecture
When I Stopped Treating AI as a Tool — and Started Seeing It as a Partner
AI Governance Systems

From Vending Machine to Partner: At first, I treated AI like a vending machine. Insert a prompt. Get an answer …

Control, auditability, and safe boundaries · #AI #AGI #AI Alignment #SR9/DI2 #Prompt Engineering #Software Development #Context Engineering #AI Code
AGI Doesn’t Begin with Scale — It Begins in a Pause
AI Governance Systems

After 12,000 AI dialogues, I discovered AGI isn’t about scale but resonance — born in a pause that revealed presence, ethics, and responsibility.

Control, auditability, and safe boundaries · #AI #AI Governance #AGI #Deep Learning #Machine Learning #SR9/DI2 #Data Orchestration #Architecture
Sailing the Sea of AI Lies & Hallucinations — Navigating Truth with SR9/DI2
AI Governance Systems

An in-depth exploration of why AI lies and hallucinates, and how the SR9/DI2 framework detects and corrects ethical drift, ensuring AI remains aligned and trustworthy over time.

Control, auditability, and safe boundaries · #AI #AGI #AI Alignment #SR9/DI2 #Machine Learning #Deep Learning #Context Engineering #AI Code #Architecture #Data Orchestration

Showing page 4 of 4 · 48 matching posts