
AI-SLOP Detector v3.5.0 — Every Claim, Verified Against Source Code
AI-SLOP Detector v3.5.0 made seven claims on LinkedIn: self-calibration logic, download numbers, defect detection. Here's every claim verified against actual file paths and line numbers. The code speaks for itself.

I published a LinkedIn post about AI-SLOP Detector's self-calibration system and download numbers. Someone asked the reasonable question: "Can you actually back that up?"
Yes. Here's the source.
This isn't a feature announcement. It's a line-by-line audit of seven claims against the actual codebase. Every VERDICT links to a real file and real line numbers. The repo is public — go check it yourself.
What was claimed
| Claim | Verdict |
| --- | --- |
| Every scan is recorded | ✅ TRUE |
| Repeat scans become calibration signal | ✅ TRUE |
| Updates only when signal is strong enough | ✅ TRUE |
| Visible policy artifact (`.slopconfig.yaml`) | ✅ TRUE |
| Explicit numeric limits govern calibration | ✅ TRUE |
| Detects empty/stub/phantom/disconnected code | ✅ TRUE |
| ~1.4K downloads last week | ✅ TRUE |
All seven. No fabrications. No inflated numbers. Here's the proof.
Claim 1: "Every scan is recorded"
Source:
src/slop_detector/history.py, lines 116–180

Auto-invoked on every CLI run. The only opt-out is `--no-history`. Each scan writes to SQLite at `~/.slop-detector/history.db` and stores:

- `deficit_score`, `ldr_score`, `inflation_score`, `ddc_usage_ratio`
- `n_critical_patterns`, `fired_rules`
- `git_commit`, `git_branch`, `project_id`

Schema is now at v5, auto-migrated on startup through every release from v2.9.0 to v3.5.0.
VERDICT: TRUE. The `record()` call is real. The schema is versioned. The behavior is not optional.
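The recording path can be pictured with a minimal sketch. The schema below is a simplified stand-in (the real v5 table in `history.py` carries the full column set listed above), and `record_scan` is a hypothetical name, not the project's API:

```python
import sqlite3

# Simplified stand-in for the history schema; the real v5 table in
# src/slop_detector/history.py stores the full column set listed above.
SCHEMA = """
CREATE TABLE IF NOT EXISTS scans (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    file_path TEXT NOT NULL,
    deficit_score REAL,
    ldr_score REAL,
    n_critical_patterns INTEGER,
    git_commit TEXT,
    scanned_at TEXT DEFAULT CURRENT_TIMESTAMP
)
"""

def record_scan(conn, file_path, deficit_score, ldr_score,
                n_critical_patterns, git_commit):
    """Append one scan row; every CLI run ends in an insert like this."""
    conn.execute(SCHEMA)
    conn.execute(
        "INSERT INTO scans (file_path, deficit_score, ldr_score, "
        "n_critical_patterns, git_commit) VALUES (?, ?, ?, ?, ?)",
        (file_path, deficit_score, ldr_score, n_critical_patterns, git_commit),
    )
    conn.commit()
```

An in-memory database behaves the same as the on-disk `~/.slop-detector/history.db`, which makes the append-only behavior easy to test.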
Claim 2: "Every re-scan becomes signal"
Source:
src/slop_detector/history.py, lines 221–246
Source:
src/slop_detector/ml/self_calibrator.py, lines 301–309

Single-scan files produce no calibration events. Only repeat scans generate `improvement` or `fp_candidate` labels. The threshold is hardcoded in SQL, not assumed.
VERDICT: TRUE. The repeat-scan requirement is enforced at the query level, not in documentation.
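A sketch of what query-level enforcement looks like. The table layout and the `improvement` / `fp_candidate` labeling rule here are illustrative, not the project's actual SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scans (file_path TEXT, deficit_score REAL, ts INTEGER)")
conn.executemany("INSERT INTO scans VALUES (?, ?, ?)", [
    ("a.py", 0.80, 1),  # scanned twice: eligible to produce an event
    ("a.py", 0.30, 2),
    ("b.py", 0.50, 1),  # scanned once: filtered out by HAVING, no event
])

# The repeat-scan requirement lives in the query: single-scan files never
# survive the HAVING clause, so they cannot become calibration signal.
REPEATS = """
SELECT file_path, MIN(ts) AS first_ts, MAX(ts) AS last_ts
FROM scans
GROUP BY file_path
HAVING COUNT(*) >= 2
"""

def calibration_events(conn):
    events = []
    for path, first_ts, last_ts in conn.execute(REPEATS):
        get = "SELECT deficit_score FROM scans WHERE file_path = ? AND ts = ?"
        first = conn.execute(get, (path, first_ts)).fetchone()[0]
        last = conn.execute(get, (path, last_ts)).fetchone()[0]
        # Illustrative labels: score fell on re-scan -> code improved;
        # otherwise the original finding looks like a false positive.
        events.append((path, "improvement" if last < first else "fp_candidate"))
    return events
```

Here `b.py` never reaches the labeling step at all, which is the point: no documentation promise, just a `HAVING COUNT(*) >= 2` that single-scan files cannot pass.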
Claim 3: "Updates only when the signal is strong enough"
Source:
src/slop_detector/ml/self_calibrator.py, lines 37–54 (constants) and 251–262 (enforcement)

Gate 1 is a confidence gap check (line 251). Gate 2 is a score delta check (line 262).
Two independent guards. Both must pass before any weight update applies.
VERDICT: TRUE. Ambiguous signal is rejected twice before touching configuration.
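In sketch form, with hypothetical constant values (the real numbers sit at lines 37–54 of `self_calibrator.py`; `should_apply` is an invented name):

```python
# Hypothetical values in the named-constant style the module uses; the real
# numbers live in src/slop_detector/ml/self_calibrator.py, lines 37-54.
MIN_CONFIDENCE_GAP = 0.15  # Gate 1: winner must beat the runner-up by this margin
MIN_SCORE_DELTA = 0.02     # Gate 2: gain over current weights must exceed noise

def should_apply(best, runner_up, current):
    """Both independent gates must pass before any weight update applies."""
    gate1 = (best - runner_up) >= MIN_CONFIDENCE_GAP  # confidence gap check
    gate2 = (best - current) >= MIN_SCORE_DELTA       # score delta check
    return gate1 and gate2
```

Failing either gate leaves the configuration untouched, which is the "rejected twice" behavior the verdict describes.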
Claim 4: "Leaves behind a visible policy every time it changes"
Source:
src/slop_detector/ml/self_calibrator.py, docstring lines 17–18

When `--apply-calibration` is passed and status == "ok", optimal weights are written to `.slopconfig.yaml`. Plain-text YAML. Human-readable. Git-versionable. Every calibration change is a diff.
VERDICT: TRUE. The policy artifact is explicit. You can `git blame` it.
Claim 5: "Explicit limits govern calibration"
Source:
src/slop_detector/ml/self_calibrator.py, lines 37–54

No ML model. No learned bounds. Every constraint is a named constant with a comment explaining why it exists. The calibration space is a bounded grid, not an open optimization landscape.
VERDICT: TRUE. Every limit is auditable. Nothing is opaque.
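What "bounded grid" means in practice can be sketched like this; the bounds, step, weight names, and the comments on them are invented for illustration, not the project's actual limits:

```python
from itertools import product

# Invented bounds in the same named-constant style; the real limits are in
# self_calibrator.py. Each weight only ever takes values from a fixed grid.
WEIGHT_MIN = 0.5    # floor: a metric can be down-weighted, never silenced
WEIGHT_MAX = 1.5    # ceiling: no single metric may dominate the score
WEIGHT_STEP = 0.25  # resolution: keeps the search space small and enumerable
WEIGHT_NAMES = ("ldr", "inflation", "ddc")

def candidate_grid():
    steps, w = [], WEIGHT_MIN
    while w <= WEIGHT_MAX + 1e-9:
        steps.append(round(w, 2))
        w += WEIGHT_STEP
    # A Cartesian product over fixed steps: finite, enumerable, auditable.
    return [dict(zip(WEIGHT_NAMES, combo))
            for combo in product(steps, repeat=len(WEIGHT_NAMES))]
```

With these toy numbers the entire search space is 125 candidates; you can print every one, which is exactly what an open-ended optimizer does not let you do.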
Claim 6: "Detects empty implementations, phantom dependencies, disconnected pipelines"
These are the three canonical defect patterns AI code generation produces at scale. Each has a dedicated module.
| Defect class | Implementation |
| --- | --- |
| Empty/stub functions | `src/slop_detector/metrics/ldr.py` — LDRCalculator detects `pass`, `...`, `raise NotImplementedError`, TODO |
| Phantom/unused imports | `src/slop_detector/metrics/hallucination_deps.py` — AST-based import vs usage analysis via HallucinatedDependency dataclass |
| Disconnected pipelines | `src/slop_detector/metrics/ddc.py` — DDC (Declared Dependency Completeness) usage ratio |
| Function clone clusters | `src/slop_detector/patterns/python_advanced.py` — Jensen-Shannon Divergence on 30-dim AST histograms, JSD < 0.05 = clone |
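The phantom-import check is the easiest of these to sketch. This is not the module's code; it is a minimal stdlib-`ast` version of the same import-vs-usage idea, and `unused_imports` is a made-up name:

```python
import ast

def unused_imports(source: str) -> list[str]:
    """Names that are imported but never referenced anywhere in the module."""
    tree = ast.parse(source)
    imported, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # "import a.b" binds the top-level name "a"
                imported.add((alias.asname or alias.name).split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.add(alias.asname or alias.name)
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return sorted(imported - used)
```

Comparing the set of bound import names against the set of names actually referenced is the whole trick; anything left over is a phantom dependency.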
The clone detection is worth noting. JSD on AST histograms catches structural duplication that string similarity misses entirely. LLMs produce a lot of this — same function logic, slightly renamed.
VERDICT: TRUE. Each defect class has a named module with a working implementation.
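A toy version of the clone check shows why it works. The histogram here runs over all node types rather than the module's fixed 30 dimensions, but the JSD math and the < 0.05 threshold are the ones the table quotes:

```python
import ast
import math
from collections import Counter

def ast_histogram(source: str) -> dict[str, float]:
    """Normalized count of AST node types (a stand-in for the 30-dim version)."""
    counts = Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))
    total = sum(counts.values())
    return {name: c / total for name, c in counts.items()}

def jsd(p: dict, q: dict) -> float:
    """Jensen-Shannon divergence (base 2) between two normalized histograms."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    def kl(a):
        return sum(a[k] * math.log2(a[k] / m[k]) for k in keys if a.get(k, 0.0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Same logic, every identifier renamed: string similarity struggles,
# but the node-type histograms are identical, so the divergence is zero.
f1 = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
f2 = "def sum_all(items):\n    acc = 0\n    for i in items:\n        acc += i\n    return acc\n"
```

The renamed pair lands well under the 0.05 clone threshold, while a structurally different snippet does not; that is the structural duplication the paragraph above says string similarity misses.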
Claim 7: "~1.4K downloads in the past week"
Source: pypistats.org API (`mirrors=false`), queried 2026-04-15

"~1.4K" is within 0.5% of 1,407. With mirrors excluded, mirror and bot traffic is stripped; these are real install invocations.
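For reproducibility, here is how the weekly figure can be recomputed from a pypistats-style payload. The daily numbers below are fabricated (chosen only so the sample sums to the reported 1,407), the package name is a placeholder, and a real check should query the live API:

```python
import json

# Fabricated daily rows in the shape pypistats.org returns for
# /api/packages/<package>/overall?mirrors=false ("without_mirrors" rows).
# These download counts are illustrative, chosen to sum to 1,407.
SAMPLE = json.dumps({
    "data": [
        {"category": "without_mirrors", "date": f"2026-04-{day:02d}", "downloads": n}
        for day, n in zip(range(9, 16), [180, 210, 195, 220, 205, 198, 199])
    ],
    "package": "ai-slop-detector",  # placeholder, not necessarily the PyPI name
    "type": "overall_downloads",
})

def weekly_downloads(payload: str, last_n_days: int = 7) -> int:
    """Sum the most recent week of mirror-free download counts."""
    rows = [r for r in json.loads(payload)["data"]
            if r["category"] == "without_mirrors"]
    rows.sort(key=lambda r: r["date"])
    return sum(r["downloads"] for r in rows[-last_n_days:])
```

Swapping `SAMPLE` for the live API response makes the check independently repeatable by any reader.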
VERDICT: TRUE. Verified against pypistats in real time. The number is not rounded up.
Why this format exists
Most open-source project posts make claims. Few back them up with file paths and line numbers.
That gap is the same problem AI-SLOP Detector is built to close. AI-generated code makes claims too — functions that look complete, imports that look used, pipelines that look connected. Static analysis finds the gap between what the code says and what it does.
This post applies the same standard to the project's own marketing copy. If a claim can be verified, it should be. If it can't, it shouldn't be made.
The codebase is public: github.com/flamehaven01/AI-SLOP-Detector
Pull requests welcome. Audits welcome more.
Verified by static code analysis + pypistats API, 2026-04-15