
# API Response Test Fixtures


For provider response parsing tests, we aim to store test fixtures containing the true responses from the providers (unless they are too large, in which case trimming that does not change the semantics is advised).

The fixtures are stored in a `__fixtures__` subfolder, e.g. `packages/openai/src/responses/__fixtures__`. See the file names in `packages/openai/src/responses/__fixtures__` for naming conventions and `packages/openai/src/responses/openai-responses-language-model.test.ts` for how to set up test helpers.

You can use our examples under `/examples/ai-functions` to generate test fixtures.
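As a rough illustration of how a stored fixture ends up being consumed, the sketch below loads a JSON fixture from a `__fixtures__`-style folder. The `loadFixture` helper name, the fixture file name, and the response shape are all hypothetical; the real test helpers live in the test file referenced above.

```ts
// Hypothetical sketch: loading a stored JSON fixture in a test.
// `loadFixture`, the fixture name, and the body shape are illustrative only.
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

function loadFixture(dir: string, name: string): unknown {
  return JSON.parse(fs.readFileSync(path.join(dir, `${name}.json`), 'utf8'));
}

// Simulate a __fixtures__ folder with one captured response body.
const fixturesDir = fs.mkdtempSync(path.join(os.tmpdir(), '__fixtures__-'));
fs.writeFileSync(
  path.join(fixturesDir, 'sample-response.json'),
  JSON.stringify({ id: 'resp_123', output: [] }, null, 2),
);

const fixture = loadFixture(fixturesDir, 'sample-response') as { id: string };
console.log(fixture.id); // → resp_123
```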

## generateText (doGenerate testing)


For `generateText`, log the raw response body to the console and copy it into a new test fixture.

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { run } from '../lib/run';

run(async () => {
  const result = await generateText({
    model: openai('gpt-5-nano'),
    prompt: 'Invent a new holiday and describe its traditions.',
  });

  console.log(JSON.stringify(result.response.body, null, 2));
});
```
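If copy-pasting from the console is error-prone, the script can also write the logged body straight to a fixture file. This is a hedged sketch, not part of the example scripts: the `responseBody` object stands in for `result.response.body`, and the output path is illustrative.

```ts
// Hedged alternative to copy-pasting console output: write the body
// directly to a fixture file. Paths and body shape are illustrative.
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

// Stand-in for `result.response.body` from the generateText call above.
const responseBody = {
  id: 'resp_abc',
  model: 'gpt-5-nano',
  output: [{ type: 'message', content: [{ type: 'output_text', text: 'Hi' }] }],
};

const fixturesDir = fs.mkdtempSync(path.join(os.tmpdir(), 'fixtures-'));
const fixturePath = path.join(fixturesDir, 'openai-generate-text.json');
fs.writeFileSync(fixturePath, JSON.stringify(responseBody, null, 2));

console.log(fs.existsSync(fixturePath)); // → true
```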

## streamText (doStream testing)


For `streamText`, you need to set `includeRawChunks` to `true` and use the special `saveRawChunks` helper. Run the script from the `/examples/ai-functions` folder via `pnpm tsx src/stream-text/script-name.ts`. The result is then stored in the `/examples/ai-functions/output` folder. You can copy it to your fixtures folder and rename it.

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { run } from '../lib/run';
import { saveRawChunks } from '../lib/save-raw-chunks';

run(async () => {
  const result = streamText({
    model: openai('gpt-5-nano'),
    prompt: 'Invent a new holiday and describe its traditions.',
    includeRawChunks: true,
  });

  await saveRawChunks({ result, filename: 'openai-gpt-5-nano' });
});
```
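To make the mechanism concrete, here is a minimal sketch of what a helper like `saveRawChunks` might do: drain a stream of raw chunks and persist them under a filename for later use as a fixture. The one-JSON-chunk-per-line format, the helper signature, and the fake chunk shapes below are assumptions for illustration, not the actual implementation in `/examples/ai-functions`.

```ts
// Sketch of a saveRawChunks-style helper. The serialization format and
// signature are assumptions; the real helper may differ.
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

async function saveChunksSketch(
  chunks: AsyncIterable<unknown>,
  filename: string,
): Promise<string> {
  const outDir = fs.mkdtempSync(path.join(os.tmpdir(), 'output-'));
  const outPath = path.join(outDir, `${filename}.txt`);
  const lines: string[] = [];
  for await (const chunk of chunks) {
    lines.push(JSON.stringify(chunk)); // one raw chunk per line
  }
  fs.writeFileSync(outPath, lines.join('\n'));
  return outPath;
}

// Fake raw chunks standing in for the streamText raw output.
async function* fakeChunks() {
  yield { type: 'response.created' };
  yield { type: 'response.output_text.delta', delta: 'Hello' };
  yield { type: 'response.completed' };
}

const savedPath = await saveChunksSketch(fakeChunks(), 'openai-gpt-5-nano');
console.log(fs.readFileSync(savedPath, 'utf8').split('\n').length); // → 3
```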