Migrate Existing API Assets into Glubean Tests
You do not need to rewrite your suite by hand.
But you also should not treat migration as a blind format conversion job.
The right workflow is:
- start inside a real Glubean project
- use the `glubean` skill
- scan the source assets first
- lock one good pattern
- batch-convert from that pattern
That works for Postman, Apifox, OpenAPI, .http files, cURL snippets, and legacy test code.
Run migration inside a real Glubean project.
Do not ask AI to convert collections or specs in a random folder. Start with:
```bash
npx @glubean/cli init
```

The agent needs to see `package.json`, `@glubean/sdk`, your `tests/` layout, and any existing config helpers. Without that context it will default to generic Node or Jest-style output instead of Glubean patterns.
Use the Glubean skill for migration
Do not approach migration with a generic “convert this collection” prompt.
Install and use the Glubean skill first:
```bash
npx glubean config mcp
npx skills add glubean/skill
```

Then invoke the skill in chat when your editor supports it:

```
/glubean help me migrate these API assets into Glubean tests
```

Why this matters:
- the skill tells the agent to read project structure first
- it routes migration work through a migration-specific workflow instead of generic code generation
- it nudges the agent toward `configure()`, builder flows, types, schemas, and CI-safe structure
- it is much more likely to stop and ask when auth or secret mapping is unclear
If your AI tool does not support slash commands, still mention the skill explicitly in the prompt:
```
Use the glubean skill and follow a phased migration workflow.
```

What migration really means
Migration is not “translate this JSON into code.”
It is a test architecture task:
- keep the useful request definitions you already have
- classify which requests are isolated vs stateful
- choose one auth and config approach
- decide file grouping and assertion depth
- convert one representative slice first
- then reuse that pattern across the rest of the suite
If you skip those decisions, bulk conversion usually creates a large pile of weak or broken tests.
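The isolated-vs-stateful decision above can be made mechanical. The sketch below is illustrative only: the `SourceRequest` shape and the `classify` helper are hypothetical, not part of the Glubean SDK. The idea is that a request is stateful if it reads a variable another request writes, or writes state itself.

```typescript
// Hypothetical helper: classify source requests as isolated vs stateful.
// "uses" are variables read (e.g. from {{userId}} placeholders),
// "sets" are variables written by test or pre-request scripts.
interface SourceRequest {
  name: string;
  uses: string[];
  sets: string[];
}

function classify(requests: SourceRequest[]): Map<string, "isolated" | "stateful"> {
  const written = new Set(requests.flatMap((r) => r.sets));
  const result = new Map<string, "isolated" | "stateful">();
  for (const r of requests) {
    // Stateful if it depends on suite-written state, or produces state itself.
    const dependsOnSuite = r.uses.some((v) => written.has(v));
    result.set(r.name, (dependsOnSuite || r.sets.length > 0) ? "stateful" : "isolated");
  }
  return result;
}

const plan = classify([
  { name: "create-user", uses: [], sets: ["userId"] },
  { name: "get-user", uses: ["userId"], sets: [] },
  { name: "health-check", uses: [], sets: [] },
]);
// create-user and get-user belong together in one builder flow;
// health-check stays an isolated test.
```

Requests that classify as stateful are candidates for one builder flow; isolated ones become separate test exports.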
Supported source formats
| Source | Typical input |
|---|---|
| Postman | Collection export (v2.1 JSON) |
| Apifox | OpenAPI export or Postman export |
| Swagger / OpenAPI | openapi.yaml or openapi.json |
| REST Client | .http or .rest files |
| cURL | copied commands |
| RapidAPI | Postman export or cURL |
| Legacy test code | Jest, Supertest, pytest, Playwright API tests, old helper-based suites |
| Docs / notes | Markdown, wiki pages, request samples, README notes |
These sources are all useful, but in different ways:
- Postman and `.http` files are strong request seeds
- OpenAPI is strong for grouping and schema hints
- legacy test code is strong for business intent and assertion history
- docs and notes are strong for explaining why the API behaves a certain way
OpenAPI-only users can follow this guide too.
If you are not “migrating” old tests and only have an API spec, the same workflow applies: scan first, generate one representative slice, then batch by tag or resource.
What AI can do well
AI is good at converting:
- methods, URLs, headers, query params, and bodies
- placeholders into `vars` and `secrets`
- source grouping hints into candidate file plans
- old assertions into Glubean assertion syntax
- repeated setup into shared `configure()` clients or helper flows
AI is not good at guessing:
- auth strategy
- which values are secrets vs public config
- whether two requests belong in one workflow or separate tests
- whether status-only assertions are enough
- what custom pre-request scripts were really doing
Those are the points where the agent should stop and ask.
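The secrets-vs-public-config question in particular should never be decided silently. A name-based heuristic like the hypothetical one below can propose a classification, but anything ambiguous must go back to the user; the function name and patterns here are illustrative, not a Glubean API.

```typescript
// Heuristic sketch only: guess whether a source variable is a secret
// from its name, and refuse to guess when the name is ambiguous.
type Guess = "secret" | "public" | "ask-user";

function classifyVariable(name: string): Guess {
  const n = name.toLowerCase();
  if (/(token|secret|password|passwd|api[_-]?key|private)/.test(n)) return "secret";
  if (/(url|host|base|env|version|timeout)/.test(n)) return "public";
  return "ask-user"; // surface to the user instead of deciding silently
}

classifyVariable("API_TOKEN");   // "secret"  → belongs in .env.secrets
classifyVariable("BASE_URL");    // "public"  → belongs in .env
classifyVariable("TENANT_CODE"); // "ask-user"
```

The third case is the important one: an agent that returns "ask-user" instead of guessing is doing exactly what this section asks for.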
Guide the migration phase by phase
Your guidance to the agent should be explicit and phased:
- use the `glubean` skill
- ask for a scan-only migration phase
- confirm auth before any code — this is the most common migration failure
- lock minimal project shape (config, vars/secrets, grouping)
- build one representative slice to validate that shape
- freeze reusable style from the slice, then batch-convert
- run and tighten the migrated suite
The first prompt should produce a plan, not files.
Confirm auth before writing any test code.
Source assets almost never describe auth correctly. Postman inherits from folder-level settings, Apifox uses globals, OpenAPI securitySchemes rarely match runtime behavior, and .http files hardcode expired tokens. Ask the agent to present its auth evidence and proposed strategy, then confirm or correct before code generation starts.
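"Auth evidence" can be as simple as a walk over the collection that records every level where auth is declared. The sketch below assumes the Postman v2.1 shape (collection-level `auth`, nested `item` arrays, request-level `auth`); the helper itself is hypothetical and the collection is inlined for illustration.

```typescript
// Sketch: surface where a Postman v2.1 collection declares auth, so
// folder-level overrides are not silently lost during conversion.
type Auth = { type: string };
type Item = { name: string; auth?: Auth; item?: Item[]; request?: { auth?: Auth } };
type Collection = { auth?: Auth; item: Item[] };

function collectAuthEvidence(col: Collection): string[] {
  const evidence: string[] = [];
  if (col.auth) evidence.push(`collection: ${col.auth.type}`);
  const walk = (items: Item[], prefix: string) => {
    for (const it of items) {
      if (it.auth) evidence.push(`folder ${prefix}${it.name}: ${it.auth.type}`);
      if (it.request?.auth) evidence.push(`request ${prefix}${it.name}: ${it.request.auth.type}`);
      if (it.item) walk(it.item, `${prefix}${it.name}/`);
    }
  };
  walk(col.item, "");
  return evidence;
}

const evidence = collectAuthEvidence({
  auth: { type: "bearer" },
  item: [
    {
      name: "admin",
      auth: { type: "apikey" }, // folder override — easy to miss in bulk conversion
      item: [{ name: "delete-user", request: {} }],
    },
  ],
});
// → ["collection: bearer", "folder admin: apikey"]
```

A report like this makes the conflict visible (bearer at the collection, API key on one folder), which is exactly the moment to confirm the strategy with the user.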
For larger suites, insist on a read-only planning round first.
If the source has more than about 20 requests, or it mixes auth flows and write paths, the first step should be a scan-only report. Do not let the agent start by generating 30 files.
Recommended migration path
Phase 0: Set up the skill and gather sources
Install the skill and MCP if you have not already:
```bash
npx glubean config mcp
npx skills add glubean/skill
```

Tell the agent where your source assets are — do not copy them into the Glubean project:
```
/glubean I want to migrate existing API assets into Glubean tests.
Use a phased workflow and do not generate files yet.
Sources:
- Postman collection: /path/to/collection.json
- OpenAPI spec: ../api/openapi.yaml
- Legacy tests: ../backend/tests/api/
```

The agent will read from those paths directly. If you have multiple sources, list them all — the agent should cross-check instead of trusting one export.
Phase 1: Run a feasibility scan only
Start with a scan-only prompt:
```
/glubean do a migration feasibility scan only. Do not generate files yet.
Read the Glubean project first (package.json, GLUBEAN.md, tests/, config/).
Then read these source assets (do not copy them into the project):
- [paths to Postman / Apifox / OpenAPI / .http / legacy tests]
Return:
1) source inventory
2) what can be converted safely
3) what needs manual decisions
4) open questions and blockers
5) proposed file plan
6) proposed auth + vars/secrets mapping
7) one representative slice to implement first
If auth, secret classification, or pre-request logic is unclear, stop and ask.
```

Review this report before approving code generation.
Phase 2: Confirm auth and lock minimal project shape
Auth is the single most common migration failure. Before approving any code generation, confirm:
- Auth strategy: bearer token, API key, OAuth, cookie, or none
- Secret classification: what goes in `.env.secrets` vs `.env`
- Auth plugin: whether `@glubean/auth` is needed or plain `configure()` headers are enough
If the agent says “I’ll use what Postman has,” push back — Postman env variables and pre-request scripts do not translate directly.
After auth is confirmed, lock the remaining decisions that pollute everything if wrong:
- shared `configure()` location and base URL
- public vars vs secrets naming
- file grouping: by resource, workflow, or tag
Leave assertion depth, builder boundaries, naming details, and type/schema extraction for after the representative slice.
```
/glubean I've reviewed the scan. Here is the auth and project shape:
- Auth: [bearer / API key / OAuth / none]
- Secrets: [list what goes in .env.secrets]
- Config: [shared client location]
- Grouping: [by resource / workflow / tag]
Lock this shape and implement only the representative slice.
```

Phase 3: Build one representative slice
The slice validates whether the locked shape actually works in a real flow. Pick one with real complexity:
- auth if the suite needs it
- at least one write if the API has writes
- verification after the mutation
- cleanup if the flow creates data
Review before expanding. If the shape is wrong, fix it now — before it spreads to every file.
Phase 4: Freeze reusable style, then batch-convert
Once the slice is approved, freeze the remaining conventions from what worked:
- assertion depth: status-only vs key fields vs schema
- when to use builder flows vs independent tests
- teardown policy for write tests
- test ID naming conventions
- when to extract to `types/` or `schemas/`
Then batch-convert incrementally:
- one Postman folder at a time
- one OpenAPI tag at a time
- one legacy test module at a time
Avoid all-at-once migration unless the suite is very small.
```
/glubean the representative slice is approved.
Freeze the migration style from it, then convert the next group only: [folder/tag/module].
Stop after that group and summarize manual follow-up items.
```

Phase 5: Run, fix, and tighten
Run the migrated files and fix:
- setup or teardown problems
- duplicated helper logic
- weak status-only assertions
- TODO-marked manual follow-up items
```
/glubean run and review the migrated files for this phase.
Fix real failures, tighten weak assertions, and keep unresolved logic as explicit TODOs.
```

How to map old assets to Glubean
| Source concept | Glubean target |
|---|---|
| base URL env var | `configure({ http: { prefixUrl: "{{BASE_URL}}" } })` |
| secret token | `configure({ secrets: { token: "{{API_TOKEN}}" } })` or auth plugin config |
| repeated authenticated requests | shared configured client such as `api` |
| source script stores IDs | builder state between steps |
| source folder or tag | candidate file grouping |
| one request block | one test export |
| one workflow with setup and cleanup | builder flow |
| repeated body or response shape | extract to `types/` or `schemas/` |
Postman
Translate carefully:
- env variables -> `vars` or `secrets`
- folder structure -> grouping hints only
- `pm.test(...)` -> Glubean assertions
- collection or local variables -> builder state or shared setup
- pre-request scripts -> shared auth/setup or manual follow-up

Do not mechanically port `pm.*` helpers line by line.
Apifox
Apifox usually works best when exported as:
- OpenAPI for schema-aware planning
- Postman when example requests are clearer than the spec
If both exist, the agent should compare them instead of trusting one blindly.
OpenAPI / Swagger
OpenAPI is useful for:
- operation grouping
- operation IDs
- request and response shape
- deprecated endpoint filtering
It is weaker for runtime auth behavior and business-rule assertions, so those usually need repo context or user input.
REST Client and cURL
Treat these as strong request seeds:
- preserve request shape
- improve assertion depth
- merge duplicated setup into shared config
- convert ad-hoc examples into stable regression tests
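Preserving the request shape from a `.http` file is mostly a parsing problem. The sketch below is deliberately minimal and hypothetical (no body handling, single request per block); it only shows the kind of seed a converter extracts before assertions are added.

```typescript
// Sketch: extract the request shape from a REST Client .http block
// so it can seed a Glubean test.
function parseHttpBlock(block: string) {
  const lines = block.trim().split("\n");
  const [method, url] = lines[0].split(" ");
  const headers: Record<string, string> = {};
  for (const line of lines.slice(1)) {
    if (!line.trim()) break; // blank line ends headers; a body would follow
    const i = line.indexOf(":");
    headers[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return { method, url, headers };
}

const seed = parseHttpBlock(`GET https://api.example.com/users
Accept: application/json
Authorization: Bearer {{API_TOKEN}}`);
// → { method: "GET", url: "https://api.example.com/users", headers: { ... } }
```

Note that the `{{API_TOKEN}}` placeholder survives as-is; classifying it as a secret is a separate, user-confirmed step.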
Legacy test code
This is often the best source for real test intent.
Keep:
- scenarios
- business assertions
- setup and teardown meaning
Do not keep:
- framework-specific fixtures
- custom client wrappers unless they encode essential business behavior
- assertion syntax that hides the real expectation
Prompt templates
Use the feasibility scan first. Then use a focused follow-up prompt with the skill.
Postman / Apifox
```
/glubean I have a Postman or Apifox export at [path].
Use the approved migration plan.
Implement only one group first.
Requirements:
- create or reuse a shared configured client
- classify secrets vs vars explicitly
- use builder flows only for stateful sequences
- keep isolated requests as separate tests
- add status + key field assertions
- mark unclear script behavior with TODO instead of guessing
Return:
1) shared config changes
2) one migrated test file
3) short manual follow-up items
```

Shared client baseline
Prefer a configured client over ad-hoc request setup in every file.
```typescript
// config/api.ts
import { configure } from "@glubean/sdk";

export const { http: api, vars, secrets } = configure({
  vars: { baseUrl: "{{BASE_URL}}" },
  secrets: { apiToken: "{{API_TOKEN}}" },
  http: {
    prefixUrl: "{{BASE_URL}}",
    headers: {
      Authorization: "Bearer {{API_TOKEN}}",
      Accept: "application/json",
    },
  },
});
```

Then use the shared client in tests:
```typescript
import { test } from "@glubean/sdk";
import { api } from "../config/api";

export const listUsers = test("list-users", async (ctx) => {
  const res = await api.get("users");
  ctx.expect(res.status).toBe(200);
});
```

Quality checklist
- the user was told to use the `glubean` skill, not a generic prompt
- migration happened inside a real Glubean project
- the first round was a scan, not bulk file generation
- auth was explicitly confirmed before any code was generated
- minimal project shape (config, vars/secrets, grouping) was locked before the slice
- one representative slice was reviewed before expansion
- reusable style was frozen from the slice before batch work
- stateful flows use builders only when needed
- write tests include reliable cleanup
- critical endpoints have more than status-only assertions
- unclear script logic is marked and isolated instead of guessed
Known limits
These usually need manual follow-up:
- custom signing or encryption scripts
- token refresh or browser-based login flows
- implicit state shared across folders or files
- poorly documented business rules
- conflicting evidence between collections, specs, and runtime behavior
That is normal. Success means the migration becomes structured, reviewable, and consistent, not that every source detail is translated literally.
What next?
- Quick Start — initialize the project and install the extension
- Generate with AI — prompt more effectively
- Writing Tests — learn the test shapes and project layout