Generation arguments.
- Prompt file name.
- `variables?: Record<string, string | number | boolean>` - Variables to interpolate.
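The variable interpolation might look like the following sketch. The `interpolate` helper and the `{{name}}` placeholder syntax are assumptions for illustration, not the library's actual implementation:

```typescript
// Hypothetical sketch: substitute {{key}} placeholders in a prompt file's
// text. Placeholder syntax and helper name are assumptions, not the real API.
type PromptVariables = Record<string, string | number | boolean>;

function interpolate(template: string, variables: PromptVariables): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key: string) =>
    // Unknown keys are left intact rather than replaced with "undefined".
    key in variables ? String(variables[key]) : match,
  );
}

// Example: numbers and booleans are stringified on substitution.
const prompt = interpolate("Summarize {{topic}} in {{count}} sentences.", {
  topic: "streams",
  count: 2,
});
```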
- `activeTools?: (keyof Tools)[]` - Limit which tools are active without changing types.
- `experimental_transform?: StreamTextTransform<Tools> | StreamTextTransform<Tools>[]` - Stream transformation (e.g., `smoothStream()`).
- `maxSteps?: number` - Maximum number of automatic tool-execution rounds (multi-step).
- `onChunk?: StreamTextOnChunkCallback<Tools>` - Callback for each stream chunk.
- `onError?: StreamTextOnErrorCallback` - Callback when the stream errors. Called after internal tracing records the error.
- `onFinish?: StreamTextOnFinishCallback<Tools>` - Callback when the stream finishes. Called after internal tracing records the result.
- `onStepFinish?: StreamTextOnStepFinishCallback<Tools>` - Callback after each step completes.
- `output?: Output` - Structured output specification (e.g., `Output.object({ schema })`).
- `prepareStep?: PrepareStepFunction<Tools>` - Customize each step before execution.
- `stopWhen?: StopCondition<Tools> | StopCondition<Tools>[]` - Custom stop conditions for multi-step execution.
- `toolChoice?: ToolChoice<Tools>` - Tool choice strategy: `'auto'`, `'none'`, `'required'`, or a specific tool.
- `tools?: Tools` - Tools the model can call.
Returns an AI SDK stream result with `textStream`, `fullStream`, and metadata promises.
Use an LLM to stream text generation. This function is a wrapper over the AI SDK's `streamText`. The prompt file sets `model`, `messages`, `temperature`, `maxTokens`, and `providerOptions`; additional AI SDK options (`tools`, `onChunk`, `onFinish`, `onError`, etc.) can be passed through.
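The composition described above can be sketched as follows. Here `streamText` is a local stub standing in for the AI SDK export, and `loadPromptFile` and `streamGeneration` are invented names for illustration, not the library's real exports:

```typescript
// Settings a prompt file contributes, per the docs: model, messages,
// temperature, maxTokens, providerOptions.
interface PromptFileSettings {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  temperature?: number;
  maxTokens?: number;
  providerOptions?: Record<string, unknown>;
}

// Stub for the AI SDK's streamText: echoes its options so the
// composition is observable. The real function returns a stream result.
function streamText(options: Record<string, unknown>): Record<string, unknown> {
  return options;
}

// Stub for loading (and interpolating) a prompt file by name.
function loadPromptFile(name: string): PromptFileSettings {
  return {
    model: "example-model",
    messages: [{ role: "user", content: `prompt loaded from ${name}` }],
    temperature: 0.2,
    maxTokens: 256,
  };
}

// The wrapper: the prompt file supplies the base settings, and any extra
// AI SDK options (tools, toolChoice, callbacks, ...) are layered on top.
function streamGeneration(
  promptName: string,
  extra: Record<string, unknown> = {},
): Record<string, unknown> {
  return streamText({ ...loadPromptFile(promptName), ...extra });
}

const result = streamGeneration("summarize", { toolChoice: "auto" });
```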