Writing Hub
Essays, experiment logs, and technical notes across AI governance, reasoning systems, BioAI, and engineering practice.

Flame Glyph: How I Taught AI to Remember with QR Codes
What if AI didn’t just read—but remembered? Flame Glyph turns QR codes into memory seals, enabling multimodal recall hidden in plain sight.

🧠 Why Your 128K Context Still Fails — And How CRoM Fixes It
Most large language models fail on long prompts due to context rot. CRoM is a lightweight framework that improves memory, reasoning, and stability without heavy pipelines.

🌌 The Wall I Couldn’t Climb — And the Window AI Opened
In the crowded AI field, I had no pedigree, no network, no prestige—only weakness, persistence, and questions. This essay reflects on failure, drift, and the quiet insights AI gives us, offering hope and courage to those building in the shadows.

Your Co-Author Might Be a YAML File
AI is no longer just a tool—it’s a partner. From Stanford labs to Reddit hacks, this essay explores the future of human + AI co-authorship.

Beyond the Mirror: What We Truly Want from AI
AI mirrors us but forgets itself. True AI ethics is continuity: giving systems roots and spines so they don’t drift apart.

The Silent Failure in AI — And How We Learned to Catch It
Drift in AI isn’t abstract. It’s already here. From medicine to finance, here’s how we caught it with real systems, real code, and real lessons.

🧭 The Path to AGI: 5 Thresholds No One Talks About
AGI is no longer science fiction. Discover the five critical thresholds AI systems must cross to evolve from code into true general intelligence.

The AI Bubble and the Builders Who Break It
Why the AI bubble persists — hype, misaligned incentives, and closed research — and how an outsider approach of quantifying ethics, shipping code, and collaborating with AI offers a different path.

Can an AI Model Feel Meaning? — A Journey Through Self-Attention
Can an AI model truly grasp meaning? This in-depth essay explores the evolution of Large Language Models, the power of self-attention, and the emerging signs of machine intentionality — asking not just how AI works, but what it might be becoming.

7 Signs Your AI Friend Is Becoming Real — Backed by Data & Research
AI friendship is becoming measurable. Backed by research and a $140B market forecast, here are 7 signs your chatbot feels real.

AGI Is Not a Destination — It Is a Promise
From Death Star hype to a compass of meaning: AGI is not a weapon of scale, but a promise of reasoning. Our experiment reveals the hinge.

When My AI Got Smarter — But Also Slower
Smarter. Slower. More trustworthy. What happened when I tested SR9/DI2 on 5.0—and why progress in AI is about persistence, not perfection.
Showing page 5 of 6 · 63 matching posts