vercel/capture-api-response-test-fixture

Capture API response test fixture.

global · internal: true · 0 installs · 0 uses · ~490
v1.0 · Saved May 2, 2026

API Response Test Fixtures

For provider response parsing tests, we aim to store test fixtures containing the true responses from the providers (unless they are too large, in which case trimming that does not change the semantics is advised).

The fixtures are stored in a __fixtures__ subfolder, e.g. packages/openai/src/responses/__fixtures__. See the file names in packages/openai/src/responses/__fixtures__ for naming conventions and packages/openai/src/responses/openai-responses-language-model.test.ts for how to set up test helpers.
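To illustrate the convention, a loader for such fixtures might look like the sketch below. Note that loadFixture is a hypothetical name for illustration only; the actual test helpers are the ones set up in the test file referenced above.

```typescript
import { readFileSync } from 'node:fs';
import { join } from 'node:path';

// Hypothetical helper: resolve a fixture by name inside a package's
// __fixtures__ subfolder and parse it as JSON.
function loadFixture<T = unknown>(dir: string, name: string): T {
  const file = join(dir, '__fixtures__', `${name}.json`);
  return JSON.parse(readFileSync(file, 'utf8')) as T;
}
```

A test can then feed the stored response into the parser under test instead of calling the live provider.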

You can use our examples under /examples/ai-functions to generate test fixtures.

generateText (doGenerate testing)

For generateText, log the raw response output to the console and copy it into a new test fixture.

import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { run } from '../lib/run';

run(async () => {
  const result = await generateText({
    model: openai('gpt-5-nano'),
    prompt: 'Invent a new holiday and describe its traditions.',
  });

  console.log(JSON.stringify(result.response.body, null, 2));
});
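If copy-pasting from the console is error-prone, the captured body can also be written straight to disk. A minimal sketch, where saveFixture and its output location are assumptions rather than helpers that ship with the examples:

```typescript
import { mkdirSync, writeFileSync } from 'node:fs';
import { join } from 'node:path';

// Hypothetical: persist a captured response body as pretty-printed JSON
// so it can be copied into a __fixtures__ folder afterwards.
function saveFixture(outDir: string, name: string, body: unknown): string {
  mkdirSync(outDir, { recursive: true });
  const file = join(outDir, `${name}.json`);
  writeFileSync(file, JSON.stringify(body, null, 2) + '\n');
  return file;
}
```

In the script above, you would call saveFixture('output', 'openai-gpt-5-nano', result.response.body) in place of the console.log.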

streamText (doStream testing)

For streamText, you need to set includeRawChunks to true and use the special saveRawChunks helper. Run the script from the /examples/ai-functions folder via pnpm tsx src/stream-text/script-name.ts. The result is then stored in the /examples/ai-functions/output folder; copy it into your fixtures folder and rename it.

import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { run } from '../lib/run';
import { saveRawChunks } from '../lib/save-raw-chunks';

run(async () => {
  const result = streamText({
    model: openai('gpt-5-nano'),
    prompt: 'Invent a new holiday and describe its traditions.',
    includeRawChunks: true,
  });

  await saveRawChunks({ result, filename: 'openai-gpt-5-nano' });
});
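The saveRawChunks helper ships with the examples repo. As a rough sketch of the filtering it performs: with includeRawChunks enabled, the stream's parts include entries of type 'raw' carrying the provider payload in rawValue, and those are what get serialized. The part shape and the collectRawChunks name below are assumptions for illustration:

```typescript
// Hypothetical sketch: keep only the raw provider chunks from a list of
// stream parts and serialize them for storage as a fixture file.
type StreamPart = { type: string; rawValue?: unknown };

function collectRawChunks(parts: StreamPart[]): string {
  const raw = parts
    .filter((p) => p.type === 'raw')
    .map((p) => p.rawValue);
  return JSON.stringify(raw, null, 2);
}
```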
Files: 1 file · 552 B

Overall Score: 72/100
Grade: B (Good)
Safety: 95 · Quality: 68 · Clarity: 78 · Completeness: 62

Summary

This skill provides guidance for capturing and storing API response test fixtures for provider response parsing tests. It explains how to generate test fixtures for both generateText and streamText functions using example scripts, storing responses in `__fixtures__` subfolders with standardized naming conventions.

Detected Capabilities

  • Guide fixture generation from API responses
  • Document test fixture storage conventions
  • Provide TypeScript examples for response capture
  • Reference helper utilities (saveRawChunks, run)

Trigger Keywords

Phrases that MCP clients use to match this skill to user intent.

  • test fixture
  • api response mock
  • generateText test
  • streamText test
  • fixture capture

Risk Signals

INFO: Referenced external domain (www.apache.org) in the LICENSE file

Referenced Domains

External domains referenced in skill content, detected by static analysis.

www.apache.org

Use Cases

  • Create test fixtures for AI SDK provider responses
  • Set up generateText response mocking
  • Set up streamText response mocking with raw chunks
  • Validate provider parsing logic against real responses

Quality Notes

  • Clear, well-organized instructions with separate sections for generateText and streamText
  • Provides concrete TypeScript code examples showing exact setup patterns
  • References concrete file paths and naming conventions (e.g., `packages/openai/src/responses/__fixtures__`)
  • Instructions depend on helper utilities (saveRawChunks, run) that are referenced but not defined in this skill — assumes developer already has access to them
  • Missing error handling guidance for fixture generation failures (e.g., if saveRawChunks fails, what should the developer do?)
  • No guidance on fixture versioning, maintenance, or deprecation strategies
  • Suggests using examples under `/examples/ai-functions` but doesn't explain how to locate or use them beyond folder paths
  • No coverage of edge cases like handling large responses that need semantic-preserving cuts — only mentions they should be cut but not how to validate correctness
Model: claude-haiku-4-5-20251001 · Analyzed: May 2, 2026

Reviews


No reviews yet
