My AI workflow is changing again, and yours should too 👇
In the span of 6 months:
- AI mostly as smart auto-suggestions in IDE
- AI good for a few obvious contained/repetitive tasks
- AI main driver of specific set of tasks
To today, where multiple AIs implement multiple large features in parallel 🤯
And, again, `effect` shines. Here is my unbreakable AI + `effect` setup as of today 👇
Is it Effect, or just the AI?
First, the obvious question:
Is it really `effect` that makes the difference, or does AI just work anyway?
There are plenty of anecdotes arguing either side.
But a few realities make `effect` a solid choice:
- AI thrives with guardrails, and `effect` is full of them (types)
- AI needs feedback to verify its changes, and with `effect` "if it compiles, it works" (again, types)
- AI reasons better with explicit code patterns, and `effect` forces you to handle everything explicitly
These (and more) theoretically favour `effect` with AI. Easier to plan, easier to implement, easier to verify ✔️
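To make the "handle everything explicitly" point concrete, here is a dependency-free sketch in the same spirit: failures live in the return type as a discriminated union. This is a simplified stand-in, not `effect`'s actual `Effect<A, E, R>` type, and `parsePort` with its validation rules is invented for illustration:

```typescript
// A failure is part of the return type, so the compiler itself becomes the feedback loop.
// `parsePort` and its rules are hypothetical examples.
type Result<A, E> = { ok: true; value: A } | { ok: false; error: E }

type ParseError = { tag: "ParseError"; input: string }

function parsePort(input: string): Result<number, ParseError> {
  const n = Number(input)
  return Number.isInteger(n) && n > 0 && n < 65536
    ? { ok: true, value: n }
    : { ok: false, error: { tag: "ParseError", input } }
}

// A caller cannot read `.value` without first narrowing on `.ok`
const r = parsePort("8080")
const port = r.ok ? r.value : 3000
```

An AI that forgets the error branch gets an immediate type error, which is exactly the kind of feedback loop the points above describe.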
Repository setup
Here is what you need:
- A monorepo with a proper TypeScript and linting setup (I use `oxlint`)
- Strict types at the core (database and data modelling)
- A `verify` command for the AI to check the implementation
- A local clone of the `effect` repo
I still believe the initial setup is up to you (monorepo, TypeScript, linting) 🏗️
A few types/schemas sit at the core of any project. In my case, I have a file with all the schemas for the database tables, data models, and API endpoints.
This file must be as strict as possible, and always in good shape 🧱
Same for your database schema (I use Drizzle). Don't let AI blindly mess with these core components: always check changes to these files!
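For illustration only, here is a sketch of what a slice of such a core file can look like, with branded IDs and exhaustive unions so nothing stays implicit. All names here are hypothetical, and the real database side would live in Drizzle:

```typescript
// Sketch of a strict core schema module (all names hypothetical).
// Branded IDs prevent mixing up a UserId with a ProjectId at compile time.
type UserId = string & { readonly __brand: "UserId" }
type ProjectId = string & { readonly __brand: "ProjectId" }

// Exhaustive union instead of a loose string
type ProjectStatus = "draft" | "active" | "archived"

interface ProjectRow {
  readonly id: ProjectId
  readonly ownerId: UserId
  readonly status: ProjectStatus
  readonly createdAt: Date
}

// Constructing a row requires the branded types explicitly
const project: ProjectRow = {
  id: "p_1" as ProjectId,
  ownerId: "u_1" as UserId,
  status: "active",
  createdAt: new Date(),
}

// Exhaustiveness check: adding a status variant breaks this switch at compile time
function statusLabel(s: ProjectStatus): string {
  switch (s) {
    case "draft": return "Draft"
    case "active": return "Active"
    case "archived": return "Archived"
  }
}
```

The stricter this file is, the more any AI mistake elsewhere surfaces as a type error instead of a runtime surprise.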
Inside `AGENTS.md`/`CLAUDE.md` I tell the AI to run `pnpm run verify` to check changes, and iterate if any error appears:
```json
{
  "name": "my-project",
  "private": true,
  "scripts": {
    "dev": "pnpm --parallel -r dev",
    "build": "pnpm -r build",
    "typecheck": "pnpm -r typecheck",
    "format": "oxfmt",
    "lint:all": "oxlint",
    "lint": "oxlint --quiet",
    "verify": "pnpm run format && pnpm run typecheck && pnpm run lint:all",
    "knip": "knip"
  },
  "packageManager": "[email protected]",
  "devDependencies": {
    "@types/node": "^25.0.1",
    "knip": "^5.73.3",
    "oxfmt": "^0.16.0",
    "oxlint": "^1.31.0",
    "typescript": "^5.9.3"
  }
}
```

Finally, inside `.agents` (added to `.gitignore`) I have a clone of the main `effect` repo, and I inform the AI of that:
```markdown
### Effect

You can inspect the official `effect` repo inside [.agents/effect](./.agents/effect/) to learn how to use all the `effect` APIs.

### Example workflow

A full-stack change usually requires:

- Updating the database schema
- Updating the API signature
- Updating the backend API implementation
- Updating the frontend API requests and layout

Run `pnpm run verify` to verify each change, which runs formatting and checks for type errors, linting errors, and warnings.
```

My orchestration workflow
With all of this in place, changes are safe enough to just let the AI roll ⚡️
Not just one AI anymore, but multiple AIs in parallel.
Tools for orchestrating multiple AIs are coming (e.g. Codex), but multiple terminals work just as well.
For a large change, here is the workflow I use (successfully):
- Run a `plan` mode session describing the changes in detail, and ask specifically to break all the work down into smaller tasks (no implementation)
- The plan generates the tasks as multiple `.md` files, with dependencies between each other
- Spin up multiple terminals, and run `plan` mode for all the AIs that can safely run in parallel, pointing each to its respective `.md` task
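The dependency part matters: only tasks whose dependencies are done can safely run in parallel. A minimal sketch of that scheduling logic (the task file names are hypothetical):

```typescript
// Sketch: given task files with dependencies, compute batches that can run in parallel.
// Each batch only starts after every task it depends on has completed.
type Task = { file: string; deps: string[] }

function parallelBatches(tasks: Task[]): string[][] {
  const done = new Set<string>()
  const batches: string[][] = []
  let remaining = [...tasks]
  while (remaining.length > 0) {
    // Tasks whose dependencies are all satisfied can run now, in parallel
    const ready = remaining.filter((t) => t.deps.every((d) => done.has(d)))
    if (ready.length === 0) throw new Error("dependency cycle between tasks")
    batches.push(ready.map((t) => t.file))
    for (const t of ready) done.add(t.file)
    remaining = remaining.filter((t) => !ready.includes(t))
  }
  return batches
}

// Hypothetical task files mirroring the full-stack workflow above
const batches = parallelBatches([
  { file: "01-db-schema.md", deps: [] },
  { file: "02-api-signature.md", deps: ["01-db-schema.md"] },
  { file: "03-backend.md", deps: ["02-api-signature.md"] },
  { file: "04-frontend.md", deps: ["02-api-signature.md"] },
])
// → [["01-db-schema.md"], ["02-api-signature.md"], ["03-backend.md", "04-frontend.md"]]
```

In practice I do this scheduling by hand when launching terminals, but the mental model is exactly this batching.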
Then sit back and watch. A few points to notice:
- Make the plan as detailed as necessary, with project-specific terms the AI can search for
- Point to as many files as possible if you know where the change must occur
- Always run `plan` mode for mid/large-size changes
Refinements
With the verify loop and the types in place, the AI should reach a final working (and solid) state.
Run the app now, and do some manual code, UI, and API testing.
First, make sure the core schemas and database are not compromised (e.g. `as` assertions, wrong schema relations/types, etc.).
Second, skim the code to verify that the core patterns are maintained (for example, in my codebase I want `xstate` over `useState`, all explicit typing, and such).
Third, run the app and check the UI and UX, and spin up a few more AIs to fix minor details (no need for plan mode).
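The pattern skim from the second step can be partially automated. A toy sketch of a source check with invented rules (a real setup would encode these as lint rules instead):

```typescript
// Sketch: flag banned patterns in a source string (rules are hypothetical examples).
const bannedPatterns: [RegExp, string][] = [
  [/\buseState\b/, "use xstate machines instead of useState"],
  [/\bany\b/, "avoid `any`; type everything explicitly"],
]

function violations(source: string): string[] {
  return bannedPatterns
    .filter(([re]) => re.test(source))
    .map(([, message]) => message)
}

const found = violations("const [x, setX] = useState(0)")
// → ["use xstate machines instead of useState"]
```

It is crude (regexes over an AST), but as a quick post-run sanity check it catches the most common drift before a human review.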
Make sure to `/clear` the context between smaller tasks. The leaner, the better.
I have a few large codebases running `effect`.
I recently implemented a few big migrations (e.g. a +6,636 -14,482 PR). It was all orchestrated by the workflow above. And with `effect`, it all worked on the first try!
Now is the time to jump into `effect`. I've been repeating this for months (years?) now.
But at this point there is no choice: as AI gets better, `effect` codebases will auto-implement themselves 🤯
See you next 👋
