I am working on an AI app, aka an app that calls some AI model 🙋🏼♀️
All my apps being effect-first, I installed `@effect/ai` to give it a try.

It's a lesson in what it means for a program to be truly composable, and why it matters 👇
## Installation
You notice how things compose right away during installation.

The core package `@effect/ai` contains generic code shared by all AI models. Then you install another package specific to your provider (e.g. `@effect/ai-openai`):

```sh
pnpm add @effect/ai @effect/ai-openai
```
The core package contains generic services like `Model`, `Prompt`, and `Response`. Provider-specific packages contain services like `OpenAiClient`, `GoogleClient`, and `AnthropicClient`.
## AI service
I implemented my own `ai.ts` service that exports all the functions that call an external AI.
You start by extracting a "generic" `LanguageModel`, creating a "generic" `Prompt`, and using the model to call the AI (`generateObject` in the example, with `Schema` validation included).

Notice how it's all generic `@effect/ai`: there is no detail about which specific AI model to use 👀
```ts
import { Effect } from "effect";
import { LanguageModel, Prompt } from "@effect/ai";
// FeedbackPromptSchema: the app's Schema for structured feedback
// (defined elsewhere, import path assumed)
import { FeedbackPromptSchema } from "./schema";

export class Ai extends Effect.Service<Ai>()("Ai", {
  effect: Effect.gen(function* () {
    // Depend on the *generic* LanguageModel service
    const model = yield* LanguageModel.LanguageModel;

    const systemPrompt = Prompt.make([
      {
        role: "system",
        content: "You are an AI for language learning",
      },
    ]);

    return {
      getFeedback: ({
        english,
        japanese,
      }: {
        english: string;
        japanese: string;
      }) =>
        // Structured output, validated against the Schema
        model.generateObject({
          schema: FeedbackPromptSchema,
          prompt: Prompt.merge(
            systemPrompt,
            Prompt.make(`Check this translation: ${english}\n${japanese}`)
          ),
        }),
    };
  }),
}) {}
```
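As a minimal usage sketch (the `program` and the example strings are mine, not from the app), this is how a consumer would call the service. It cannot run yet: nothing has provided a concrete `LanguageModel`:

```ts
// Hypothetical consumer: requires Ai, which itself still requires
// a LanguageModel that no layer has provided at this point
const program = Effect.gen(function* () {
  const ai = yield* Ai;
  return yield* ai.getFeedback({
    english: "Good morning",
    japanese: "おはようございます",
  });
});
```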
## Composing AI
Let's get specific now, without touching our `Ai` service at all.

To provide a specific AI model you create a `Layer` with `Config` (e.g. environment variables):
```ts
import { Config } from "effect";
import { OpenAiClient } from "@effect/ai-openai";

const OpenAi = OpenAiClient.layerConfig({
  apiKey: Config.redacted("OPENAI_API_KEY"),
});
```
Since the AI SDK is going to make an HTTP request, the `OpenAi` layer has a dependency on `HttpClient` (`Layer.Layer<OpenAiClient.OpenAiClient, ConfigError, HttpClient>`). Note also `ConfigError` in the error channel: building the layer fails if `OPENAI_API_KEY` is missing:
```ts
import { Config, Layer } from "effect";
import { FetchHttpClient } from "@effect/platform";
import { OpenAiClient } from "@effect/ai-openai";

const OpenAi = OpenAiClient.layerConfig({
  apiKey: Config.redacted("OPENAI_API_KEY"),
}).pipe(
  // Satisfy the HttpClient dependency with the fetch-based client
  Layer.provide(FetchHttpClient.layer)
);
```
This is the first layer of composition: providing `FetchHttpClient.layer` removes the `HttpClient` requirement, leaving `Layer.Layer<OpenAiClient.OpenAiClient, ConfigError, never>`.
## Choosing a model
`OpenAi` is (again) a generic layer for all OpenAI models. Now we need to choose a specific one.

That's where we define a layer for a `LanguageModel`:
```ts
import { OpenAiLanguageModel } from "@effect/ai-openai";

const Gpt5nano = OpenAiLanguageModel.model("gpt-5-nano");
```
Now `Gpt5nano` has a dependency on `OpenAiClient` (`Model<"openai", LanguageModel.LanguageModel, OpenAiClient.OpenAiClient>`).
Now we compose everything together:
```ts
import { Config, Layer } from "effect";
import { FetchHttpClient } from "@effect/platform";
import { OpenAiClient, OpenAiLanguageModel } from "@effect/ai-openai";

const OpenAi = OpenAiClient.layerConfig({
  apiKey: Config.redacted("OPENAI_API_KEY"),
}).pipe(Layer.provide(FetchHttpClient.layer));

const Gpt5nano = OpenAiLanguageModel.model("gpt-5-nano").pipe(
  // Satisfy the OpenAiClient dependency
  Layer.provide(OpenAi)
);
```
At this point `Gpt5nano` is a layer that provides a valid `LanguageModel` (`Layer.Layer<LanguageModel.LanguageModel | ProviderName, ConfigError, never>`).
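As a quick sketch of what this unlocks (the `demo` program is mine, and I am assuming a `generateText` method next to the `generateObject` used above): any effect that requires a `LanguageModel` can now run by providing `Gpt5nano`:

```ts
// Hypothetical demo: depends only on the generic LanguageModel...
const demo = Effect.gen(function* () {
  const model = yield* LanguageModel.LanguageModel;
  // generateText is assumed here; the app itself uses generateObject
  return yield* model.generateText({ prompt: "Say hi in Japanese" });
});

// ...and runs once the fully composed OpenAI layer is provided
Effect.runPromise(demo.pipe(Effect.provide(Gpt5nano)));
```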
## Putting it all together
Final step: provide a valid instance of `LanguageModel` to the `Ai` service.
```ts
import { Effect } from "effect";
import { LanguageModel, Prompt } from "@effect/ai";
import { FeedbackPromptSchema } from "./schema";
// Gpt5nano is the layer composed in the previous snippet

export class Ai extends Effect.Service<Ai>()("Ai", {
  dependencies: [Gpt5nano], // 👈 Provided!
  effect: Effect.gen(function* () {
    const model = yield* LanguageModel.LanguageModel; // 👈 Dependency
    const systemPrompt = Prompt.make([
      {
        role: "system",
        content: "You are an AI for language learning",
      },
    ]);
    return {
      getFeedback: ({
        english,
        japanese,
      }: {
        english: string;
        japanese: string;
      }) =>
        model.generateObject({
          schema: FeedbackPromptSchema,
          prompt: Prompt.merge(
            systemPrompt,
            Prompt.make(`Check this translation: ${english}\n${japanese}`)
          ),
        }),
    };
  }),
}) {}
```
Done! The `Ai` service abstracts away the details of the API:

- `HttpClient` provided to `OpenAiClient`
- `OpenAiClient` provided to `LanguageModel`
- `LanguageModel` provided to `Ai`
That's composability. Now you get to mix and match AI providers and AI models, without changing a single line of the actual `Ai` implementation (`Model` and `Prompt`):
```ts
import { Config, Layer } from "effect";
import { FetchHttpClient } from "@effect/platform";
import { OpenAiClient, OpenAiLanguageModel } from "@effect/ai-openai";
import { GoogleClient, GoogleLanguageModel } from "@effect/ai-google";

const OpenAi = OpenAiClient.layerConfig({
  apiKey: Config.redacted("OPENAI_API_KEY"),
}).pipe(Layer.provide(FetchHttpClient.layer));

// Multiple OpenAi models, same generic client
const Gpt5nano = OpenAiLanguageModel.model("gpt-5-nano").pipe(
  Layer.provide(OpenAi)
);
const Gpt5mini = OpenAiLanguageModel.model("gpt-5-mini").pipe(
  Layer.provide(OpenAi)
);

// Another provider client...
const Gemini = GoogleClient.layerConfig({
  apiKey: Config.redacted("GEMINI_API_KEY"),
}).pipe(Layer.provide(FetchHttpClient.layer));

// ...with its own models
const Gemini25FlashLite = GoogleLanguageModel.model(
  "gemini-2.5-flash-lite"
).pipe(Layer.provide(Gemini));
```
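Since layers are plain values, you can even pick one at startup; a hypothetical sketch (the `AI_PROVIDER` flag is my own invention, not part of the app):

```ts
// Hypothetical: choose the model layer from an environment flag
const ModelLayer =
  process.env.AI_PROVIDER === "google" ? Gemini25FlashLite : Gpt5nano;
```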
Change one line of code, and you can try another model:
```ts
export class Ai extends Effect.Service<Ai>()("Ai", {
  dependencies: [Gemini25FlashLite], // 👈 Change here, you are done!
  // ...
}) {}
```
This is the real power of `effect` (not just in the AI SDK, but everywhere you use `Layer`).
The amount of composability in the @EffectTS_ AI package is insane, pure magic 🪄
I am moving faster and faster. I keep finding new ways to tame the AI into writing the right code, and the result is an immense speed-up, even for complex tasks (`effect`, `xstate`).
See you next 👋