Relying entirely on AI coding is a trap.
And it's the worst kind of trap: the one you don't realize you're in until it's too late (and costly).
I am coming out the other side of a project that's been running on AI-only code for a few weeks now.
Here is what I found, and how you can avoid the same pitfalls.
Code entropy is real, gradual, and sneaky
With AI I never start from a blank project, and this time was no different.
Always provide manually implemented code for the AI to copy, otherwise you will lose control from the start.
Let's call this "Entropy 0". You coded it, you know it, it works.
Now AI enters the loop. And it's fast, and (mostly) clean.
Even if slowly, entropy starts to build up.
For any pattern missing from the codebase (or that the AI simply did not find or follow), a new extraneous (and likely wrong) pattern forms.
The more AI owns the code, the more bad patterns will proliferate.
After a few weeks of no manual cleanup, refactoring, or proper review, the code deteriorates.
That's exactly what I saw: large chunks of code that are hard to understand, and many unnecessary APIs.
The cost of AI entropy
At this point, there is no way to "go back", only forward. Which means two costly options:
- Remove entire parts of the code, and rewrite manually
- Try to understand the AI code, and refactor where needed
Both of these solutions require high coding expertise.
The cost of the above fixes depends on the state of your project, how long AI owned it, and how much expertise you have in writing code.
I would not recommend prompting the AI harder to fix the code.
I tried that as well. You end up chasing the next minor bug. As bugs become more scattered and subtle, the product deteriorates and the code becomes more and more slop.
A few preemptive fixes
The question becomes: how do you avoid this situation?
First "obvious" and "easy" solution is proper code reviews of what AI wrote.
Theoretically, this should prevent any bad pattern from sneaking in the code, keeping entropy in check π€
This solution risks to slow down the AI implementation to the point of making manual coding faster:
- Writing the prompt
- Waiting for the AI plan
- Reviewing the plan
- Waiting for the plan to be implemented
- Reviewing the code
- Manually fixing the code
"Soft" reviews (i.e. skimming the code here and there) may not be enough, AI mistakes slip in unnoticed in the cracks π¬
Automated solution
The second option is to tighten automatic checks, making them even more strict.
I mean, extremely strict.
Basic Effect, linting, and strict typing did not prevent slop in the AI code.
I am therefore moving to an even stricter setup, work in progress right now, based on the following:
- Explicitly typed Effect code and patterns, with proper code reviews
- Effect LSP
- Strict linting
- Custom lint rules
First, making Effect code more stringent (e.g. more branded types, explicit type annotations).
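To illustrate the branded-types idea, here is a minimal sketch in plain TypeScript (no Effect dependency; Effect ships its own Brand module for this). The names `UserId`, `OrderId`, and `makeUserId` are illustrative, not from my codebase:

```typescript
// Branding attaches a phantom tag to a base type, so two values that are
// both strings at runtime become non-interchangeable at compile time.
type Brand<T, B extends string> = T & { readonly __brand: B };

type UserId = Brand<string, "UserId">;
type OrderId = Brand<string, "OrderId">;

// Constructors are the only sanctioned way to produce a branded value.
const makeUserId = (s: string): UserId => s as UserId;
const makeOrderId = (s: string): OrderId => s as OrderId;

function getUser(id: UserId): string {
  return `user:${id}`;
}

console.log(getUser(makeUserId("u_123")));
// getUser("u_123");            // type error: plain string is not a UserId
// getUser(makeOrderId("o_1")); // type error: OrderId is not a UserId
```

The point for AI-generated code: the compiler, not a reviewer, rejects any patch that passes a raw string (or the wrong ID) where a branded type is expected.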
Second, extending the linting rules with custom-implemented ones (Oxlint JS plugins).
For example, adding rules that stop the AI from using code that would normally be valid, but that I don't trust the AI with (e.g. useEffect).
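As a sketch of what such a rule looks like, here is an ESLint-style rule (the shape that Oxlint's JS plugins follow) banning `useEffect` calls. The rule name and the stubbed context are my own illustration; a real plugin would be registered in the linter config rather than invoked by hand:

```typescript
// Hypothetical custom rule: report every call to an identifier named "useEffect",
// steering the AI toward the project's approved data-flow patterns instead.
const noUseEffect = {
  meta: {
    messages: { banned: "useEffect is not allowed; use the approved patterns instead." },
  },
  create(context: { report: (d: { node: unknown; messageId: string }) => void }) {
    return {
      CallExpression(node: { callee?: { type: string; name?: string } }) {
        if (node.callee?.type === "Identifier" && node.callee.name === "useEffect") {
          context.report({ node, messageId: "banned" });
        }
      },
    };
  },
};

// Tiny manual check: feed the visitor a fake AST node and a stub context.
const reports: string[] = [];
const visitor = noUseEffect.create({ report: (d) => reports.push(d.messageId) });
visitor.CallExpression({ callee: { type: "Identifier", name: "useEffect" } });
console.log(reports.length); // 1
```

Because the rule runs on every lint pass, the ban is enforced automatically on AI output with no reviewer in the loop.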
By adding more and more automatic checks, eventually the AI will have no option but to write correct and clean code (hopefully; I will report back soon).
The other part of this endeavour is making the core libraries and APIs even more type-safe.
That's also work in progress, with a few updates and improvements on the way.
See you next time!
