# Test Any Public API in 2 Minutes
No Postman collection. No complex config. Just your AI and one sentence.
Tell your AI what you want. One sentence is enough.
## Real example: GitHub repos
Inside a Glubean project, open your AI assistant and type:
```
please create github repo list tests in the explore folder
```

That’s it. The AI reads `package.json`, the SDK types, and your project layout, then generates a working test file:
```ts
import { test } from "@glubean/sdk";

export const listUserRepos = test(
  { id: "github-list-repos", name: "GET GitHub List Repos", tags: ["explore"] },
  async (ctx) => {
    const username = ctx.vars.require("GITHUB_USERNAME");
    const token = ctx.vars.get("GITHUB_TOKEN");

    const headers: Record<string, string> = {
      "Accept": "application/vnd.github+json",
      "X-GitHub-Api-Version": "2022-11-28",
    };
    if (token) {
      headers["Authorization"] = `Bearer ${token}`;
    }

    const res = await ctx.http.get(
      `https://api.github.com/users/${username}/repos?per_page=5&sort=updated`,
      { headers },
    );
    const data = await res.json();

    ctx.expect(res.status).toBe(200);
    ctx.expect(Array.isArray(data)).toBe(true);
    ctx.expect(data.length).toBeGreaterThan(0);

    const summary = data.map((repo: Record<string, unknown>) => ({
      name: repo.name,
      stars: repo.stargazers_count,
      language: repo.language,
      updated_at: repo.updated_at,
    }));
    ctx.log("Repos", summary);
  },
);
```

Click ▶ in the gutter. Done.
Notice what the AI figured out on its own — no hand-holding needed:
- `ctx.vars.require("GITHUB_USERNAME")` for runtime config
- `ctx.vars.get("GITHUB_TOKEN")` as optional (higher rate limits, not required)
- Proper GitHub API headers and versioning
- Status + type + length assertions
- Structured logging with `ctx.log`
Not perfect — and that’s fine. The AI used `ctx.vars.get` for the token, but `GITHUB_TOKEN` is a secret — it should be `ctx.secrets.require("GITHUB_TOKEN")` (or `ctx.secrets.get` if optional). You’ll also need to add `GITHUB_USERNAME` to your `.env` file and `GITHUB_TOKEN` to `.env.secrets`. You can ask the AI to fix these directly — and from there, it uses the corrected version as a template when generating more tests.
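The `require` vs `get` distinction is worth internalizing: `require` fails fast when a value is missing, while `get` stays optional. As a rough sketch of those semantics in plain TypeScript (the `makeVars` helper below is invented for illustration and is not part of `@glubean/sdk`):

```typescript
// Illustrative sketch only: mimics the require()/get() lookup semantics
// described above. makeVars is a made-up helper, not the Glubean SDK.
type Store = Record<string, string | undefined>;

function makeVars(store: Store) {
  return {
    // get(): optional lookup — returns undefined when the key is absent
    get: (key: string): string | undefined => store[key],
    // require(): mandatory lookup — throws when the key is absent,
    // so a missing GITHUB_USERNAME fails the test immediately
    require: (key: string): string => {
      const value = store[key];
      if (value === undefined) {
        throw new Error(`Missing required variable: ${key}`);
      }
      return value;
    },
  };
}

const vars = makeVars({ GITHUB_USERNAME: "octocat" });
console.log(vars.require("GITHUB_USERNAME")); // → octocat
console.log(vars.get("GITHUB_TOKEN"));        // → undefined
```

The same pattern applies to secrets: a token the test cannot run without belongs behind a `require`-style lookup, so the failure is a clear "missing secret" message rather than a confusing 401 later.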
One prerequisite: a Glubean project. Run `npx @glubean/cli init` first so the AI can see `package.json`, the `@glubean/sdk` dependency, and your `tests/` / `explore/` layout. Without this context, it generates generic Node/Jest-style code instead of Glubean tests.
## Try it with any API
The prompt doesn’t need to be fancy. Just say what you want:
- “create tests for the JSONPlaceholder API — create a post, fetch it, delete it”
- “test the OpenWeatherMap current weather endpoint”
- “hit the Stripe prices list API, use secrets for the key”
- “test HackerNews — fetch top 5 story IDs, then fetch each title”
And if you have any document that describes your API — a markdown file, a plain text spec, a JSON example, internal wiki notes, even a Slack message with endpoints listed — just drop it into the project and point the AI at it. There’s no required format. If it describes request structure, the AI can generate tests from it.
## Multi-step workflows
For API flows that span multiple calls — create, verify, update, cleanup — ask for a builder pattern test:
```
create a checkout flow test: create a cart, add an item, complete checkout, then clean up
```

The AI generates a step chain:
```ts
import { test } from "@glubean/sdk";
import { http } from "./configure.ts";

export const checkout = test("checkout-flow")
  .meta({ tags: ["e2e"] })
  .step("create cart", async ({ expect }) => {
    const cart = await http.post("carts").json<{ id: string }>();
    expect(cart.id).toBeDefined();
    return { cartId: cart.id };
  })
  .step("add item", async ({ expect }, { cartId }) => {
    await http.post(`carts/${cartId}/items`, {
      json: { productId: "product-123" },
    });
    const cart = await http.get(`carts/${cartId}`).json<{ items: unknown[] }>();
    expect(cart.items).toHaveLength(1);
    return { cartId };
  })
  .step("checkout", async ({ expect }, { cartId }) => {
    const order = await http
      .post(`carts/${cartId}/checkout`)
      .json<{ status: string }>();
    expect(order.status).toBe("completed");
    return { cartId };
  })
  .teardown(async (_ctx, state) => {
    if (state?.cartId) await http.delete(`carts/${state.cartId}`);
  });
```

Each step receives state from the previous step, and `.teardown()` runs even if a step fails — so test data never leaks.
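Conceptually, "state flows forward and teardown always runs" is just a try/finally around a loop. Here is a toy runner in plain TypeScript that illustrates the semantics — this is not the Glubean SDK's actual implementation, and all names in it are invented:

```typescript
// Toy step runner illustrating the semantics described above — NOT the
// Glubean SDK. Each step gets the state returned by the previous step,
// and teardown runs even when a step throws.
type Step<S> = { name: string; run: (state: S) => Promise<S> | S };

async function runSteps<S>(
  initial: S,
  steps: Step<S>[],
  teardown: (state: S) => Promise<void> | void,
): Promise<S> {
  let state = initial;
  try {
    for (const step of steps) {
      state = await step.run(state); // the next step sees the returned state
    }
    return state;
  } finally {
    await teardown(state); // always runs — success or failure
  }
}

// Usage: the second step throws, but teardown still sees cartId and cleans up.
(async () => {
  const cleaned: string[] = [];
  try {
    await runSteps<{ cartId?: string }>(
      {},
      [
        { name: "create cart", run: () => ({ cartId: "c-1" }) },
        { name: "add item", run: () => { throw new Error("boom"); } },
      ],
      (s) => { if (s.cartId) cleaned.push(`deleted ${s.cartId}`); },
    );
  } catch {
    // the step's error still propagates after teardown
  }
  console.log(cleaned); // → [ 'deleted c-1' ]
})();
```

Because cleanup lives in `finally`, a failing assertion mid-flow cannot skip it — which is exactly why carts created by a broken run don't pile up in your test environment.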
## Level up: add context for better results
The examples above work with zero setup — great for exploring public APIs. For your own APIs, adding context dramatically improves accuracy:
- OpenAPI spec in `context/` — AI knows your exact routes and response shapes, no guessing
- Skill file — AI follows your SDK conventions, auth patterns, and assertion style
- MCP tools — AI runs tests and fixes failures automatically, in the same chat turn
With all three, generated tests run on first try instead of needing 2-3 rounds of manual fixes.
→ Read the full AI Context Guide for setup instructions.
## From exploration to CI
When you’re happy with a test:
- Move the file from `explore/` to `tests/`.
- Add deeper assertions or schema validation if needed.
- Run in CI with `glubean run tests/` — same file, zero migration.
The file you explored with today catches regressions in CI tomorrow.
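If your CI runs on GitHub Actions, the whole pipeline can be a few lines. A sketch, assuming a Node project and that the CLI resolves via `npx glubean` once `@glubean/cli` is installed — check your project's actual binary name before copying:

```yaml
# Hypothetical GitHub Actions workflow; job and step names are illustrative.
name: api-tests
on: [push]
jobs:
  glubean:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci                    # installs @glubean/sdk and the CLI
      - run: npx glubean run tests/    # same command you use locally
```

A nonzero exit from the test run fails the job, so a regression in any promoted test blocks the pipeline.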
## What’s next?
- AI Context Guide — set up OpenAPI specs, skills, and MCP for maximum generation accuracy
- Quick Start — install the extension and create your first project
- Migrate from Postman/OpenAPI — convert existing collections with AI