Why AI-Generated Code Often Looks "Complete" — but Isn't — and Why I Built AI-SLOP Detector
A few days ago, I published a repository called "HRPO-X v1.0.1 — Hybrid Reasoning Policy Optimization Framework." I genuinely believed it was solid work:
- Paper-inspired architecture
- Clean folder structure
- Configs in place
- Interfaces and classes defined
- Even internal audit checks passing

Then I saw this comment: "AI detected — an AI code dump empty of implementation."
At first, I ignored it. Then I re-read the code. They were right.
Why this matters
The issue wasn't intent or effort. It was density.
AI tools are great at producing structurally correct artifacts:
- Proper folder hierarchies
- Configuration files
- Class and interface definitions
- Clean pipelines and entry points
Most linters, CI checks, and even internal audits focus on exactly these signals.
But AI often fails at something more subtle:
- Meaningful implementation and logic
You end up with code that has:
- Empty functions
- Minimal logic
- Documentation that outweighs implementation
That's what I call AI Slop.
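To make that concrete, here is a made-up example (not from HRPO-X) of the kind of code I mean: it has type hints, docstrings, and a clean class shape, so every structural check passes — yet it does nothing.

```python
class PolicyOptimizer:
    """Hybrid reasoning policy optimizer.

    A hypothetical illustration of "AI slop": well-documented,
    type-hinted, structurally valid -- and entirely hollow.
    """

    def optimize(self, trajectories: list) -> dict:
        """Run the full hybrid optimization loop over trajectories."""
        # TODO: implement core logic
        return {}

    def evaluate(self, policy) -> float:
        """Evaluate policy quality with internal audit checks."""
        raise NotImplementedError
```

A linter, a type checker, and CI will all pass this file, which is exactly why structure alone is a misleading signal.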
Why existing tools miss it
Traditional tools ask:
- Does it compile?
- Is the structure valid?
They rarely ask:
- How much real logic is here?
- Is the documentation proportional to the code?

That gap is where AI-generated slop thrives.
So I Built AI-SLOP Detector
I built it to measure the gap between appearance and substance. It statically analyzes Python code using signals like:
- Logic Density Ratio (LDR)
- Buzzword Inflation
- Unused dependencies (DDC)
- Common AI-generated patterns

These are combined into a single Deficit Score (0–100) that reflects how hollow a codebase might be.
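As a sketch of the Logic Density Ratio idea — my own assumption of how such a signal could be computed, not the tool's actual formula — one can walk a Python AST and compare substantive statements against filler like docstrings, `pass`, and bare `...`:

```python
import ast
import textwrap

def logic_density(source: str) -> float:
    """Rough Logic Density Ratio: substantive statements / all statements.

    A sketch only -- AI-SLOP Detector's real formula may differ.
    Docstrings, `pass`, and bare `...` count as filler.
    """
    tree = ast.parse(source)
    total = 0
    substantive = 0
    for node in ast.walk(tree):
        if isinstance(node, ast.stmt):
            total += 1
            is_filler = isinstance(node, ast.Pass) or (
                isinstance(node, ast.Expr)
                and isinstance(node.value, ast.Constant)
                and isinstance(node.value.value, (str, type(Ellipsis)))
            )
            if not is_filler:
                substantive += 1
    return substantive / total if total else 0.0

hollow = textwrap.dedent('''
    def train(self, data):
        """Train the hybrid reasoning policy end to end."""
        pass
''')
dense = textwrap.dedent('''
    def mean(xs):
        total = sum(xs)
        return total / len(xs)
''')
print(logic_density(hollow) < logic_density(dense))  # prints True
```

A single ratio like this is crude on its own, which is why combining several signals into one composite score makes sense.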
Why this is needed
This tool isn't about blaming:
- AI
- No-code or low-code developers
It's for anyone who has looked at a repository and thought: "This looks impressive… but something feels off."
AI-SLOP Detector gives language and metrics to that intuition. It helps reviewers, educators, and teams explain why a codebase feels wrong — even when everything appears structurally correct.
A final note
This project came from embarrassment, frustration, and curiosity — but it led to a clearer understanding of a growing problem in the AI era.
If this resonates with your experience reviewing AI-generated code, I'd love to hear how you've been dealing with it.