When developing or testing generative AI applications, having a system that allows you to switch between different AI APIs depending on the use case can significantly improve efficiency.
In this article, I’ll walk you through how to build a unified wrapper function in TypeScript that lets you switch between multiple AI APIs with a single interface. We’ll also implement logic to fetch data from a webpage, convert it to plain text, and then return it in a structured JSON format.
Price Comparison of Major AI APIs
As generative AI becomes more widely used in application development and business process automation, APIs like OpenAI (ChatGPT), Anthropic Claude, and Google Gemini have become the go-to tools for developers.
While I won’t go into feature-by-feature comparisons in this article, I will say this:
Gemini is by far the most cost-effective in terms of pricing.
In my experience, when working within the standard usage range, the quality difference between models is minimal. For that reason, I tend to prefer Gemini for cost-conscious use cases. (As of May 2025)
| Model/API | Input (1M tokens) | Output (1M tokens) |
|---|---|---|
| OpenAI GPT-3.5 Turbo | $0.50 | $1.50 |
| OpenAI GPT-4o | $2.50 * | $10 |
| Claude 3.5 Haiku | $0.80 * | $4 |
| Claude 3.7 Sonnet | $3 * | $15 |
| Gemini 1.5 Flash | $0.075 | $0.30 |
| Gemini 2.5 Flash | $0.15 | $0.60 (without reasoning) / $3.50 (with reasoning) |
Gemini also offers a free tier; it comes with usage limits, but allows up to 1,500 requests per day.
※ Some OpenAI and Claude models support prompt caching, which can reduce costs.
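To make the table concrete, here's a small sketch that estimates the cost of a single request from the per-million-token rates above. The rate table and helper name are my own illustration; the prices are the ones listed in the comparison.

```typescript
// Rough per-request cost estimate from per-million-token rates.
// Rates are USD per 1M tokens, taken from the comparison table above.
type Rates = { input: number; output: number };

const RATES: Record<string, Rates> = {
  "gpt-4o": { input: 2.5, output: 10 },
  "claude-3-7-sonnet": { input: 3, output: 15 },
  "gemini-1.5-flash": { input: 0.075, output: 0.3 },
};

function estimateCostUSD(
  model: string,
  inputTokens: number,
  outputTokens: number
): number {
  const r = RATES[model];
  if (!r) throw new Error(`Unknown model: ${model}`);
  return (inputTokens * r.input + outputTokens * r.output) / 1_000_000;
}

// e.g. a 2,000-token prompt with a 500-token reply:
//   gpt-4o:           (2000 * 2.5 + 500 * 10) / 1e6    = $0.01
//   gemini-1.5-flash: (2000 * 0.075 + 500 * 0.3) / 1e6 = $0.0003
```

Run the numbers for your own typical prompt sizes; at standard usage the gap between GPT-4o and Gemini 1.5 Flash is over 30x.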
Project Structure & Wrapper Design
We'll structure the API handlers like this:
```
src/ai/
├── index.ts   // Unified API wrapper
├── openai.ts  // OpenAI handler
├── claude.ts  // Claude handler
└── gemini.ts  // Gemini handler
```
index.ts – Unified Wrapper
```typescript
import { ZodTypeAny } from "zod";
import { fetchParsedClaude } from "./claude";
import { fetchParsedGemini } from "./gemini";
import { fetchParsedOpenAiGPT4o } from "./openai";

// ---- Type Definitions ---- //
export type Option = {
  max_tokens?: number;
};

type OpenAIArgs = {
  ai: "openai";
  messages: any[];
  schema: ZodTypeAny;
  option?: Option;
};

type ClaudeArgs = {
  ai: "claude";
  messages: any[];
  schema: ZodTypeAny;
  option?: Option;
};

type GeminiArgs = {
  ai: "gemini";
  messages: any[];
  schema: ZodTypeAny;
  option?: Option;
};

type AIArgs = OpenAIArgs | ClaudeArgs | GeminiArgs;

// ---- Function Overloads (Optional) ---- //
export function fetchParsedWithAi(args: OpenAIArgs): Promise<any>;
export function fetchParsedWithAi(args: ClaudeArgs): Promise<any>;
export function fetchParsedWithAi(args: GeminiArgs): Promise<any>;

// ---- Implementation ---- //
export async function fetchParsedWithAi(args: AIArgs): Promise<any> {
  if (args.ai === "openai") {
    return fetchParsedOpenAiGPT4o(args.messages, args.schema, args.option);
  } else if (args.ai === "claude") {
    return fetchParsedClaude(args.messages, args.schema, args.option);
  } else if (args.ai === "gemini") {
    return fetchParsedGemini(args.messages, args.schema, args.option);
  } else {
    throw new Error("Unknown AI type");
  }
}
```
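As a design note: the final `else` only fails at runtime. Because `AIArgs` is a discriminated union, you can instead use TypeScript's `never` trick to catch an unhandled provider at compile time. Below is a standalone sketch of the pattern (the `routeLabel` function is my own illustration, not part of the wrapper):

```typescript
type AIProvider = "openai" | "claude" | "gemini";

function routeLabel(ai: AIProvider): string {
  switch (ai) {
    case "openai":
      return "OpenAI handler";
    case "claude":
      return "Claude handler";
    case "gemini":
      return "Gemini handler";
    default: {
      // If a new provider is added to AIProvider but not handled above,
      // this assignment stops compiling instead of failing at runtime.
      const _exhaustive: never = ai;
      throw new Error(`Unknown AI type: ${_exhaustive}`);
    }
  }
}
```

Adding, say, `"mistral"` to `AIProvider` would then surface as a type error here rather than a surprise `Unknown AI type` in production.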
Example Implementations for Each AI
Be sure to create an account on each AI provider's site and obtain the necessary API keys to set in `process.env` for each handler.
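One common way to do this locally is the `dotenv` package: keep the keys in a git-ignored `.env` file and load it once at startup. A sketch, assuming the variable names used by the handlers below:

```typescript
// .env (git-ignored):
//   OPENAI_API_KEY=sk-...
//   CLAUDE_API_KEY=sk-ant-...
//   GEMINI_API_KEY=...

// Load the file into process.env before any handler reads it:
import "dotenv/config";

// Optionally fail fast on missing keys instead of erroring mid-request:
for (const key of ["OPENAI_API_KEY", "CLAUDE_API_KEY", "GEMINI_API_KEY"]) {
  if (!process.env[key]) {
    console.warn(`Missing ${key} - the corresponding handler will fail.`);
  }
}
```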
Claude (Anthropic Claude 3)
```typescript
import Anthropic from "@anthropic-ai/sdk";
import { ZodArray, ZodTypeAny } from "zod";
import { Option } from "./";

export const fetchParsedClaude = async (
  messages: any[],
  schema: ZodTypeAny,
  option?: Option
) => {
  const anthropic = new Anthropic({
    apiKey: process.env.CLAUDE_API_KEY,
  });
  const completion = await anthropic.messages.create({
    model: "claude-3-7-sonnet-20250219", // Adjust the model as needed
    max_tokens: 1024,
    temperature: 0,
    messages: messages,
    ...option,
  });
  const block = completion.content.find((c) => c.type === "text") as
    | { type: "text"; text: string }
    | undefined;
  if (!block) {
    throw new Error("The response from Claude does not contain a block of type text");
  }
  const raw = block.text.trim();
  if (!raw) {
    throw new Error("The response from Claude is empty");
  }
  // Check whether the schema expects an array
  const expectsArray = schema instanceof ZodArray;
  let firstBraceIndex: number;
  let lastBraceIndex: number;
  if (expectsArray) {
    firstBraceIndex = raw.indexOf("[");
    lastBraceIndex = raw.lastIndexOf("]");
  } else {
    firstBraceIndex = raw.indexOf("{");
    lastBraceIndex = raw.lastIndexOf("}");
  }
  if (firstBraceIndex === -1 || lastBraceIndex === -1) {
    throw new Error("The expected JSON format could not be found");
  }
  const jsonString = raw.slice(firstBraceIndex, lastBraceIndex + 1);
  let parsed;
  try {
    parsed = JSON.parse(jsonString);
  } catch (e) {
    console.error("Failed to parse JSON:\n", jsonString);
    throw e;
  }
  return schema.parse(parsed);
};
```
OpenAI (GPT-4o)
```typescript
import OpenAI from "openai";
import { zodResponseFormat } from "openai/helpers/zod";
import { z, ZodArray, ZodTypeAny } from "zod";
import { Option } from "./";

export const fetchParsedOpenAiGPT4o = async (
  messages: any[],
  schema: ZodTypeAny,
  option?: Option
) => {
  const openai = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  });
  // Name used for the response_format wrapper
  const finalFormatName = "data";
  const isArraySchema = schema instanceof ZodArray;
  // Structured outputs require a top-level object, so wrap array schemas
  const fixedSchema = isArraySchema
    ? z.object({ [finalFormatName]: schema })
    : schema;
  const completion = await openai.beta.chat.completions.parse({
    model: "gpt-4o-2024-08-06", // Adjust the model as needed
    messages,
    response_format: zodResponseFormat(fixedSchema, finalFormatName),
    ...option,
  });
  const parsed = completion.choices[0]?.message?.parsed;
  if (!parsed) {
    throw new Error("The response from OpenAI is empty");
  }
  // If the schema was an array, unwrap the object and return just the array
  return isArraySchema ? parsed[finalFormatName] : parsed;
};
```
Gemini (Google)
```typescript
import OpenAI from "openai";
import { ZodArray, ZodTypeAny } from "zod";
import { Option } from "./";

export const fetchParsedGemini = async (
  messages: any[],
  schema: ZodTypeAny,
  option?: Option
) => {
  const openai = new OpenAI({
    apiKey: process.env.GEMINI_API_KEY,
    baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
  });
  const response = await openai.chat.completions.create({
    model: "gemini-2.0-flash", // Adjust the model as needed
    messages,
    temperature: 0,
    max_tokens: 1024,
    ...option,
  });
  const raw = response.choices[0]?.message?.content?.trim();
  if (!raw) {
    throw new Error("The response from Gemini is empty");
  }
  // Check whether the schema expects an array
  const expectsArray = schema instanceof ZodArray;
  let firstBraceIndex: number;
  let lastBraceIndex: number;
  if (expectsArray) {
    firstBraceIndex = raw.indexOf("[");
    lastBraceIndex = raw.lastIndexOf("]");
  } else {
    firstBraceIndex = raw.indexOf("{");
    lastBraceIndex = raw.lastIndexOf("}");
  }
  if (firstBraceIndex === -1 || lastBraceIndex === -1) {
    throw new Error("The expected JSON format could not be found");
  }
  const jsonString = raw.slice(firstBraceIndex, lastBraceIndex + 1);
  let parsed;
  try {
    parsed = JSON.parse(jsonString);
  } catch (e) {
    console.error("Failed to parse JSON:\n", jsonString);
    throw e;
  }
  return schema.parse(parsed);
};
```
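The Claude and Gemini handlers duplicate the same brace-scanning logic for pulling JSON out of free-form model text. One way to remove the duplication is a small shared helper; this is a refactoring sketch, and `extractJsonString` is my own name:

```typescript
// Extract the first {...} or [...] span from a model's raw text reply.
// The caller decides expectsArray (e.g. via `schema instanceof ZodArray`).
function extractJsonString(raw: string, expectsArray: boolean): string {
  const open = expectsArray ? "[" : "{";
  const close = expectsArray ? "]" : "}";
  const first = raw.indexOf(open);
  const last = raw.lastIndexOf(close);
  if (first === -1 || last === -1) {
    throw new Error("The expected JSON format could not be found");
  }
  return raw.slice(first, last + 1);
}
```

Both handlers could then end with `schema.parse(JSON.parse(extractJsonString(raw, schema instanceof ZodArray)))`, keeping the error messages and parsing in one place.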
Example Usage (run.ts)
Let's create `src/run.ts` and implement the logic that calls the AI.
```typescript
import axios from "axios";
import { z } from "zod";
import { fetchParsedWithAi } from "./ai";

// Prompt generation (modify as needed)
const buildPrompt = (text: string) => `
Please summarize the following text and return it in the following JSON format:
{
  "summary": "<summary of the text>"
}
<<<
${text}
>>>
`;

// Define the JSON schema
const DataSchema = z.object({
  summary: z.string(),
});
type Data = z.infer<typeof DataSchema>;

// Main process
async function run() {
  const url = "https://xxx.example.com"; // ← The URL of the website you want to load
  const { data } = await axios.get(`https://r.jina.ai/${url}`);
  const prompt = buildPrompt(data);
  const response: Data = await fetchParsedWithAi({
    ai: "openai", // ← Change to the AI you want to use
    messages: [{ role: "user", content: prompt }],
    schema: DataSchema,
  });
  console.log(JSON.stringify(response, null, 2));
}

run();
```
Bonus: What is r.jina.ai?
r.jina.ai is a free proxy API that converts any webpage into LLM-friendly plain text. It's incredibly useful when you want to summarize or extract insights from a web page without doing your own scraping.
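The pattern is just URL prefixing: prepend `https://r.jina.ai/` to the target URL and fetch it like any other page. A minimal sketch (the example URL is a placeholder):

```typescript
// Prefix any URL with the reader endpoint to get plain text back.
const toReaderUrl = (url: string) => `https://r.jina.ai/${url}`;

async function fetchAsText(url: string): Promise<string> {
  const res = await fetch(toReaderUrl(url)); // global fetch, Node 18+
  if (!res.ok) throw new Error(`Reader request failed: ${res.status}`);
  return res.text();
}

// e.g. fetchAsText("https://example.com").then(console.log);
```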
Conclusion
Creating a unified wrapper function for switching between generative AI APIs enables smooth experimentation and integration. Whether you’re optimizing for cost, performance, or output structure, this setup allows you to test and adapt quickly.
Try integrating this TypeScript × Zod × AI SDK pattern into your next project!