Convert Your Existing APIs to Glubean Tests
You do not need to rewrite everything by hand.
Most teams convert 80-90% of their existing API collections with AI assistance. Manual effort goes only into flow design and edge cases.
If your first reaction after the demo is:
“We have hundreds of Postman requests. Are we supposed to rewrite all of this one by one?”
The short answer is: no.
You must run migration inside a Glubean project.
Do not ask AI to convert Postman/OpenAPI files in a random folder. Initialize a real Glubean project first:
```bash
npx @glubean/cli init
```

The AI assistant needs to see `package.json`, the `@glubean/sdk` dependency, and your `tests/` conventions. Without this context, it will generate generic Node/Jest-style code instead of Glubean SDK tests.
What this migration is really about
Migration is not a text-format conversion problem. It is a test architecture problem:
- Keep request definitions you already have.
- Let AI generate Glubean test code from those sources.
- Make a few structural decisions once (auth, grouping, assertion depth).
- Reuse that pattern across the rest of the collection.
Once the pattern is locked, conversion speed increases dramatically.
Supported source formats
| Format | How to export |
|---|---|
| Postman | Collection → Export → Collection v2.1 (JSON) |
| Apifox | Project → Export → OpenAPI 3.0 or Postman format |
| REST Client | Use existing .http / .rest files directly |
| Swagger / OpenAPI | openapi.yaml or openapi.json |
| RapidAPI | Export as Postman Collection or copy requests as cURL |
| cURL commands | Paste directly into AI prompt |
| Anything else | Markdown, plain text, JSON examples, wiki notes — if it describes request structure, it works |
The table above lists common formats, but there’s no strict requirement. If you have a README with endpoint descriptions, a Confluence page, a JSON request/response sample, or even a Slack thread listing your endpoints — drop it into the project and point the AI at it. As long as the document describes URLs, methods, headers, or payloads, AI can generate tests from it.
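Before involving AI at all, it helps to size the job. The sketch below walks a Postman Collection v2.1 export and lists every request it contains, folders included. The inline `sample` collection is hypothetical; in practice you would `JSON.parse` the exported file instead.

```typescript
// Sketch: enumerate requests in a Postman Collection v2.1 export
// so you can size a migration before handing it to AI.
interface PostmanItem {
  name: string;
  item?: PostmanItem[]; // folders contain nested items
  request?: { method: string; url: string | { raw: string } };
}

function listRequests(items: PostmanItem[], prefix = ""): string[] {
  return items.flatMap((it) => {
    if (it.item) return listRequests(it.item, `${prefix}${it.name}/`); // recurse into folders
    if (!it.request) return [];
    const url =
      typeof it.request.url === "string" ? it.request.url : it.request.url.raw;
    return [`${it.request.method} ${url} (${prefix}${it.name})`];
  });
}

// Hypothetical stand-in for JSON.parse(readFileSync("collection.json", "utf8")).item
const sample: PostmanItem[] = [
  {
    name: "Users",
    item: [
      { name: "List users", request: { method: "GET", url: { raw: "{{baseUrl}}/users" } } },
      { name: "Create user", request: { method: "POST", url: { raw: "{{baseUrl}}/users" } } },
    ],
  },
];

console.log(listRequests(sample).join("\n"));
```

A request count per folder is also a useful input to the feasibility scan described later: it tells you which folders are big enough to batch-convert and which are one-offs.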
Swagger / OpenAPI users — this is also a “generate new tests” workflow.
If you are not migrating old collections and only have an OpenAPI spec, this guide still applies end-to-end. Treat your spec as the source of truth and ask AI to generate fresh Glubean tests by tag or operationId.
This document covers both migration and greenfield test generation.
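For the spec-driven path, the "one file per tag" plan can be previewed mechanically before any generation. This is a rough sketch that groups operations by their first tag; the inline spec fragment is hypothetical, and a real spec would be loaded from your `openapi.json`.

```typescript
// Sketch: group OpenAPI operations by first tag to plan one test file per tag.
type Operation = { operationId?: string; tags?: string[] };
type Spec = { paths: Record<string, Record<string, Operation>> };

const HTTP_METHODS = new Set(["get", "post", "put", "patch", "delete"]);

function groupByTag(spec: Spec): Record<string, string[]> {
  const groups: Record<string, string[]> = {};
  for (const [path, ops] of Object.entries(spec.paths)) {
    for (const [method, op] of Object.entries(ops)) {
      if (!HTTP_METHODS.has(method)) continue; // skip parameters, summary, etc.
      const tag = op.tags?.[0] ?? "untagged";
      const id = op.operationId ?? `${method.toUpperCase()} ${path}`;
      (groups[tag] ??= []).push(id);
    }
  }
  return groups;
}

// Hypothetical spec fragment; in practice: JSON.parse(readFileSync("openapi.json", "utf8"))
const spec: Spec = {
  paths: {
    "/users": {
      get: { operationId: "listUsers", tags: ["users"] },
      post: { operationId: "createUser", tags: ["users"] },
    },
    "/orders": { get: { operationId: "listOrders", tags: ["orders"] } },
  },
};

console.log(groupByTag(spec));
```

Each key in the result maps naturally onto one `tests/<tag>.test.ts` file, and each `operationId` onto one test ID.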
What converts well automatically
AI assistants can reliably convert:
- HTTP methods and URLs → `ctx.http.get/post/put/patch/delete(...)`
- Query parameters → `searchParams`
- Request headers and JSON bodies
- Basic status assertions → `ctx.expect(res.status).toBe(...)`
- Environment placeholders → `ctx.vars.require(...)`
- Sensitive values → `ctx.secrets.require(...)`
- OpenAPI `operationId` → test IDs
- OpenAPI tags → test grouping (typically one file per tag/folder)
What still needs your decisions
AI cannot infer business intent from export files alone. You still need to decide:
- Granularity — independent tests vs multi-step flow tests
- Auth model — static token vs dynamic login setup
- Secrets boundary — which values must be `ctx.secrets`
- Assertion depth — status-only vs key fields vs schema validation
- File layout — one file per resource/tag vs merged files
These five choices determine 80% of migration quality.
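As a concrete illustration of the secrets-boundary decision, here is a naive name-based heuristic for sorting exported variable names into config vs secrets. The patterns are assumptions, not a Glubean feature — extend them with your own conventions and have a human review the result.

```typescript
// Sketch: first-pass classification of exported variable names.
// Anything matching a "secret-looking" pattern goes to ctx.secrets;
// the rest stays plain ctx.vars config.
const SECRET_PATTERNS = [/token/i, /secret/i, /passw(or)?d/i, /api[-_]?key/i, /credential/i];

function classifyVars(names: string[]): { vars: string[]; secrets: string[] } {
  const secrets = names.filter((n) => SECRET_PATTERNS.some((p) => p.test(n)));
  return { secrets, vars: names.filter((n) => !secrets.includes(n)) };
}

console.log(classifyVars(["BASE_URL", "API_KEY", "ADMIN_PASSWORD", "TENANT_ID"]));
```

A heuristic like this is only a starting point: it cannot catch secrets with innocuous names, which is exactly the kind of ambiguity the AI should pause and ask about.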
For large collections: use Plan mode first.
If your source has more than ~20 requests, start in Plan mode (Cursor: ⌘⇧P → “Plan mode”, or type @plan in chat). This forces the AI to produce a structured proposal — file layout, auth strategy, variable mapping, assertion depth — before writing a single line of code.
Without Plan mode, AI tends to start generating immediately and make early assumptions that cascade into dozens of broken tests. Plan mode is cheap (read-only, no code changes) and catches misunderstandings before they multiply.
AI must ask, not guess.
When migrating large collections, AI will inevitably encounter ambiguity: which variables are secrets vs config, how to handle complex pre-request scripts, whether to split flows into separate tests, etc.
The correct behavior is to stop and ask you. If your AI assistant is looping on errors instead of asking a clarifying question, explicitly tell it:
```
Stop. Do not guess. List what you're unsure about and ask me before continuing.
```

Common decision points where AI should pause:
- Authentication — static token vs. login flow vs. OAuth refresh
- Variable classification — which values go in `.env` vs `.env.secrets`
- Pre-request scripts — complex signing, encryption, or chained calls that have no direct equivalent
- Test granularity — independent tests vs multi-step builder flows
- Assertion depth — status-only vs. key fields vs. full schema validation
Model selection matters (a lot)
Use different model tiers at different migration stages:
| Stage | Recommended model type | Why |
|---|---|---|
| First 1-3 flows (pattern design) | More capable model | Better at multi-step state flow and auth reasoning |
| Bulk conversion after pattern is stable | Faster/cheaper model | High throughput once rules are clear |
| Final review and risk scan | More capable model | Better at catching hidden state coupling and missing assertions |
Practical strategy:
- Start with a stronger model for architecture decisions.
- Freeze your conventions (`configure.ts`, naming, auth helper, assertions).
- Switch to a faster model for batch conversion.
- Use a stronger model again for final review.
This gives you both quality and speed.
Recommended migration path
Set up and feasibility scan
Make sure you are in a Glubean project root (created by npx @glubean/cli init).
Drop your source file (Postman JSON, OpenAPI spec, or .http file) into the project folder.
Then run a scan-only round first — do not ask AI to generate files yet:
```
You are in a Glubean project (Node + @glubean/sdk).
Before generating any files, do a feasibility scan only.
Please:
1) Read project context first:
   - package.json
   - tests/ and explore/ structure
   - existing configure.ts (if any)
   - @glubean/sdk usage patterns in this repo
2) Read source API assets:
   - [Postman/OpenAPI/REST files path]
3) Output a feasibility report (no code generation yet):
   - What can be converted automatically
   - What requires manual decisions
   - Risks / unknowns / blockers
   - Proposed file plan (which .test.ts files to create)
   - Proposed auth strategy and assertion depth
   - A small sample (one endpoint/flow) plan only
4) If anything is ambiguous (auth model, secret classification,
   pre-request script logic), list it as an open question —
   do NOT guess and continue.
5) Wait for my approval before writing any files.
```

Review the report. Fix any misunderstandings before proceeding.
Convert one representative flow first
After approving the plan:
```
Approved. Now generate files according to the plan.
Start with one representative flow first, then stop for review.
```

Pick a flow that includes auth, a create/update action, a verification, and cleanup. This becomes your reference pattern.
Lock your house style
Decide and document:
- `configure.ts` structure
- Status/assertion policy
- When to use builder steps vs single tests
- Teardown requirements for data-creating tests
Batch-convert by folder or tag
Convert one Postman folder or OpenAPI tag group at a time. Reuse the pattern from step 2.
Run and fix flagged cases
```bash
npm test
# or
glubean run tests/
```

Handle only failures and TODO-marked script logic.
Mapping guide
| Source concept | Glubean equivalent |
|---|---|
| Postman env variable | `ctx.vars.require("VAR_NAME")` |
| Postman secret variable | `ctx.secrets.require("SECRET_NAME")` |
| Postman `pm.collectionVariables.set(k, v)` | Return `{ key: value }` from a step; receive in next step’s state |
| Postman `pm.test("name", fn)` | `ctx.expect(...)` or `ctx.assert(...)` |
| Postman pre-request auth step | `.setup(async (ctx) => { ... })` |
| Postman folder | One `*.test.ts` file (or grouped exports) |
| OpenAPI `operationId` | `test("operation-id", ...)` or `test({ id: "operation-id" }, ...)` |
| REST Client `###` block | One test export |
| cURL header/body | `headers` / `json` in `ctx.http` options |
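To make the cURL row concrete, here is a toy mapping from a simple cURL command onto `headers`/`json`-style request options. It handles only `-X`, `-H`, and `-d` with single-quoted values — real cURL syntax is far richer, so treat this as illustration, not a full parser.

```typescript
// Sketch: extract method, URL, headers, and JSON body from a simple
// single-quoted cURL command, mirroring the ctx.http options shape.
function parseCurl(cmd: string) {
  // -X wins; otherwise a -d body implies POST, else GET (curl's default)
  const method =
    /-X\s+(\w+)/.exec(cmd)?.[1] ?? (/\s-d\s/.test(cmd) ? "POST" : "GET");
  const url = /'(https?:\/\/[^']+)'/.exec(cmd)?.[1];
  const headers: Record<string, string> = {};
  for (const m of cmd.matchAll(/-H\s+'([^':]+):\s*([^']*)'/g)) {
    headers[m[1]] = m[2];
  }
  const body = /-d\s+'([^']*)'/.exec(cmd)?.[1];
  return { method, url, headers, json: body ? JSON.parse(body) : undefined };
}

console.log(
  parseCurl(
    `curl -X POST 'https://api.example.com/users' -H 'Content-Type: application/json' -d '{"name":"Ada"}'`
  )
);
```

In practice you rarely need this yourself — pasting the cURL command directly into the AI prompt is the supported path — but it shows how little structure separates a cURL one-liner from a test's request options.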
Prompt templates by source format
Use the feasibility scan (step 1 above) first, then follow up with one of these:
Postman / Apifox
```
I have a Postman Collection at [path].
Please convert it into Glubean tests using @glubean/sdk.
Requirements:
- Group by Postman folder, one file per folder under tests/
- Use ctx.vars.require("BASE_URL") for base URL
- Sensitive keys [API_KEY, PASSWORD, TOKEN] must use ctx.secrets.require(...)
- Use builder flow (.setup/.step/.teardown) for stateful sequences
- Use single test(...) for isolated endpoints
- Assertions: status + key fields present
- Create a shared tests/configure.ts with configure({ vars, secrets, http })
- If any script logic is not safely convertible, add a TODO comment
Return:
1) tests/configure.ts
2) generated test files
3) a short "manual follow-up" list
```

configure.ts baseline
```typescript
import { configure } from "@glubean/sdk";

export const { vars, secrets, http } = configure({
  vars: { baseUrl: "BASE_URL" },
  secrets: { apiToken: "API_TOKEN" },
  http: {
    prefixUrl: "BASE_URL",
    headers: { Authorization: "Bearer {{API_TOKEN}}" },
  },
});
```

Then import and use in any test file:
```typescript
import { test } from "@glubean/sdk";
import { http } from "./configure.ts";

export const listUsers = test("list-users", async (ctx) => {
  const data = await http.get("users").json();
  ctx.expect(data.length).toBeGreaterThan(0);
});
```

Quality checklist
- `ctx.vars.require(...)` is used for non-sensitive runtime config
- `ctx.secrets.require(...)` is used for tokens, keys, passwords
- Stateful flows use builder (`.setup()` → `.step()` → `.teardown()`)
- Data-creating tests include cleanup paths
- Assertions are not status-only for critical endpoints
- Any unconverted dynamic script logic is explicitly marked with TODO
Known limits
These often require manual touch-ups:
- Complex Postman pre-request scripts (custom signing/encryption)
- Dynamic test scripts with branching business logic
- Implicit state dependencies across folders/collections
- APIs with undocumented response variance
That is normal. Migration succeeds when 80-90% is automated and the remaining 10-20% is clear and reviewable.
What’s next?
Ready to start? Set up your project first, then come back here with your export file.
- Quick Start — install the extension and run `npx @glubean/cli init`
- AI Integration — tips for better AI-generated tests
- Workflow: Explore vs Tests — start in `explore/`, promote to `tests/`