AI-SLOP Detector v2.7.0 — Why We Built a Linter We Actually Use
This Is What “Passing Lint” Looks Like
And yes — it can still be slop.
"""
Enterprise-grade data processing pipeline with fault-tolerant
error handling and production-ready scalability features.
"""
import torch
import numpy as np
from typing import Optional, Dict, List
def process_data(data: Optional[Dict[str, List[float]]]) -> Dict:
"""Production-ready scalable data processing."""
if data is None:
return {}
return {k: v for k, v in data.items()}
Zero lint errors.
But:
- `torch` / `numpy` are never used
- 13 lines of docstring for 3 lines of logic (4.3× inflation)
- “enterprise-grade”, “production-ready” — no structural evidence
- The function is a dictionary identity wrapped in confident language
Traditional linters check form. This tool checks substance.
Why This Became Non-Optional
In complex systems — especially regulated or audit-sensitive ones — “code that looks correct” isn't just annoying.
It’s risky and expensive.
When AI becomes a primary production input, the failure mode isn’t syntax.
It’s substance deficit:
- unused architectural weight
- decorative abstractions
- evidence-free maturity claims
That’s the class of problem this tool targets.
What v2.7.0 Actually Fixes
Previously, the VS Code extension surfaced only a portion of what the CLI detected.
Now it surfaces ~95% of that analysis directly inside the editor.
The goal wasn’t new rules. It was alignment.
What’s Now Visible in VS Code
# 1️⃣ Docstring Inflation (line-level)
```
Docstring inflation: process_data (13 doc / 3 impl = 4.3x)
```
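As a rough sketch of the idea (not the shipped implementation, whose counting rules may differ), a doc-to-implementation line ratio can be computed per function with the standard `ast` module:

```python
import ast

def docstring_inflation(source: str) -> list[tuple[str, float]]:
    """Return (function name, doc/impl line ratio) pairs.

    Illustrative sketch only; the real detector's heuristics may differ.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node)
            if not doc:
                continue
            doc_lines = len(doc.splitlines())
            # Implementation = body statements after the docstring expression
            impl = node.body[1:]
            impl_lines = sum(s.end_lineno - s.lineno + 1 for s in impl)
            if impl_lines:
                findings.append((node.name, doc_lines / impl_lines))
    return findings

sample = (
    "def process_data(data):\n"
    '    """Line one.\n'
    "    Line two.\n"
    "    Line three.\n"
    "    Line four.\n"
    '    """\n'
    "    if data is None:\n"
    "        return {}\n"
    "    return dict(data)\n"
)
print(docstring_inflation(sample))  # [('process_data', 1.3333333333333333)]
```

A ratio above some threshold (the 4.3x above, for instance) is what gets surfaced as a diagnostic.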
# 2️⃣ Evidence-Based Claim Validation
"production-ready" claim lacks evidence: tests, logging, error handling
# 3️⃣ Hallucinated Dependencies
```
Hallucinated dependency: torch
```
Imports that exist — but serve no verified purpose.
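The underlying check is straightforward to sketch with `ast`: collect every imported top-level name, collect every name the module actually reads, and report the difference. This is a minimal illustration, not the extension's actual code:

```python
import ast

def unused_imports(source: str) -> set[str]:
    """Return imported top-level names never referenced in the module."""
    imported, used = set(), set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # `import numpy as np` binds `np`; `import a.b` binds `a`
                imported.add(alias.asname or alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.add(alias.asname or alias.name)
        elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
            used.add(node.id)
    return imported - used

module_src = "import torch\nimport numpy as np\nx = np.zeros(3)\n"
print(unused_imports(module_src))  # {'torch'}
```

Run against the `process_data` example at the top of this post, this flags both `torch` and `numpy`.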
# 4️⃣ Pattern Suggestions
Warnings now include actionable fix guidance.
# 5️⃣ 1500ms Debounced Analysis
Lint-on-type runs after you pause typing — not on every keystroke.
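The debounce pattern itself is simple: each keystroke cancels the pending analysis and schedules a fresh one. A generic Python sketch (the extension is not written this way, and the 0.05 s delay here is shortened for the demo):

```python
import threading

class Debouncer:
    """Run a callback only after `delay` seconds of inactivity."""

    def __init__(self, delay: float, callback):
        self.delay = delay
        self.callback = callback
        self._timer = None

    def trigger(self, *args):
        if self._timer is not None:
            self._timer.cancel()  # discard the previously scheduled run
        self._timer = threading.Timer(self.delay, self.callback, args)
        self._timer.start()

# Rapid "keystrokes": only the final state triggers an analysis run.
results = []
d = Debouncer(0.05, results.append)
for text in ("p", "pr", "print"):
    d.trigger(text)
d._timer.join()  # wait for the surviving timer (demo only)
print(results)  # ['print']
```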
# 6️⃣ LDR (Logic Density Ratio) in Status Bar
LDR approximates the ratio of executable logic to structural scaffolding.
It’s not perfect. But it’s a useful early signal.
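One way to approximate such a ratio, as a hedged sketch rather than the tool's actual formula, is to classify AST statements as logic or scaffolding and divide. The node taxonomy below is an illustrative assumption:

```python
import ast

# Illustrative split; the shipped LDR taxonomy may differ.
LOGIC_NODES = (ast.If, ast.For, ast.While, ast.Return, ast.Assign,
               ast.AugAssign, ast.Expr, ast.Raise, ast.Try)
SCAFFOLD_NODES = (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef,
                  ast.Import, ast.ImportFrom)

def logic_density_ratio(source: str) -> float:
    """Ratio of logic statements to all counted statements (0.0 to 1.0)."""
    logic = scaffold = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Expr) and isinstance(node.value, ast.Constant):
            scaffold += 1  # bare constant expression: docstring-like
        elif isinstance(node, LOGIC_NODES):
            logic += 1
        elif isinstance(node, SCAFFOLD_NODES):
            scaffold += 1
    total = logic + scaffold
    return logic / total if total else 0.0

src = (
    "import torch\n"
    "def f(x):\n"
    '    """Doc."""\n'
    "    if x:\n"
    "        return x\n"
    "    return 0\n"
)
print(logic_density_ratio(src))  # 0.5
```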
Coverage Change
| Diagnostic          | v2.6.3    | v2.7.0           |
| ------------------- | --------- | ---------------- |
| Pattern issues      | ✔️        | ✔️ + suggestions |
| Docstring inflation | ❌        | ✔️               |
| Evidence validation | ❌        | ✔️               |
| Hallucinated deps   | ❌        | ✔️               |
| LDR status          | ❌        | ✔️               |
| Lint-on-type        | Immediate | 1500ms debounce  |
Coverage: ~40% → ~95%.
If we stop using it, we’ll stop updating it.
This project isn’t driven by marketing cycles.
We don’t update it to announce something. We update it because our own systems demand it.
If a new module exposes a failure mode, the detector evolves.
If it slows down our workflow, we fix it.
We ship what we use. And if someday we stop using it, we’ll stop updating it.
Until then, it stays part of our gate.
Install
```bash
# CLI
pip install ai-slop-detector
slop-detector myfile.py
```

```bash
# VS Code
ext install Flamehaven.vscode-slop-detector
```
Marketplace: https://marketplace.visualstudio.com/items?itemName=Flamehaven.vscode-slop-detector