    Function streamText

• Stream text generation from an LLM.

      This function is a wrapper over the AI SDK's streamText. The prompt file sets model, messages, temperature, maxTokens, and providerOptions. Additional AI SDK options (tools, onChunk, onFinish, onError, etc.) can be passed through.

      Type Parameters

      • Tools extends ToolSet = ToolSet
      • Output extends AIOutput<unknown, unknown> = AIOutput<unknown, unknown>

      Parameters

      • args: { prompt: string; variables?: Record<string, string | number | boolean> } & Partial<
            Omit<CallSettings, "maxOutputTokens">
        > & {
            activeTools?: (keyof Tools)[];
            experimental_transform?:
                | StreamTextTransform<Tools>
                | StreamTextTransform<Tools>[];
            maxSteps?: number;
            onChunk?: StreamTextOnChunkCallback<Tools>;
            onError?: StreamTextOnErrorCallback;
            onFinish?: StreamTextOnFinishCallback<Tools>;
            onStepFinish?: StreamTextOnStepFinishCallback<Tools>;
            output?: Output;
            prepareStep?: PrepareStepFunction<Tools>;
            stopWhen?: StopCondition<Tools> | StopCondition<Tools>[];
            toolChoice?: ToolChoice<Tools>;
            tools?: Tools;
        }

        Generation arguments.

        • prompt: string

          Prompt file name.

        • Optional variables?: Record<string, string | number | boolean>

          Variables to interpolate.

        • Optional activeTools?: (keyof Tools)[]

          Limit which tools are active without changing types.

        • Optional experimental_transform?: StreamTextTransform<Tools> | StreamTextTransform<Tools>[]

          Stream transformation (e.g., smoothStream()).

        • Optional maxSteps?: number

          Maximum number of automatic tool execution rounds (multi-step).

        • Optional onChunk?: StreamTextOnChunkCallback<Tools>

          Callback for each stream chunk.

        • Optional onError?: StreamTextOnErrorCallback

          Callback when stream errors. Called after internal tracing records the error.

        • Optional onFinish?: StreamTextOnFinishCallback<Tools>

          Callback when stream finishes. Called after internal tracing records the result.

        • Optional onStepFinish?: StreamTextOnStepFinishCallback<Tools>

          Callback after each step completes.

        • Optional output?: Output

          Structured output specification (e.g., Output.object({ schema })).

        • Optional prepareStep?: PrepareStepFunction<Tools>

          Customize each step before execution.

        • Optional stopWhen?: StopCondition<Tools> | StopCondition<Tools>[]

          Custom stop conditions for multi-step execution.

        • Optional toolChoice?: ToolChoice<Tools>

          Tool choice strategy: 'auto', 'none', 'required', or a specific tool.

        • Optional tools?: Tools

          Tools the model can call.

      Returns StreamTextResult<Tools, Output>

      AI SDK stream result with textStream, fullStream, and metadata promises.
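The call-and-consume pattern can be sketched with a mock. `mockStreamText` below is a stand-in for the wrapper (the real function resolves a prompt file and a configured model); the prompt file name `greeting` and the chunking behavior are illustrative assumptions, not the library's actual output.

```typescript
// Mock of the wrapper's streaming shape, so the consumption
// pattern (onChunk callback + async iteration over textStream)
// can be shown without a live model. Hypothetical stand-in only.

type StreamResult = {
  textStream: AsyncIterable<string>;
};

function mockStreamText(args: {
  prompt: string; // prompt file name
  variables?: Record<string, string | number | boolean>;
  onChunk?: (event: { chunk: string }) => void;
}): StreamResult {
  // Pretend the prompt file produced these chunks after interpolation.
  const parts = ["Hello, ", String(args.variables?.name ?? "world"), "!"];
  return {
    textStream: (async function* () {
      for (const chunk of parts) {
        args.onChunk?.({ chunk }); // per-chunk side effects (e.g. logging)
        yield chunk;
      }
    })(),
  };
}

async function main(): Promise<string> {
  let text = "";
  const result = mockStreamText({
    prompt: "greeting", // hypothetical prompt file
    variables: { name: "Ada" },
    onChunk: ({ chunk }) => process.stdout.write(chunk),
  });
  // Consume the stream incrementally, as with the real textStream.
  for await (const part of result.textStream) {
    text += part;
  }
  return text;
}
```

With the real wrapper, `result` additionally exposes `fullStream` and metadata promises (usage, finish reason), which resolve once the stream completes.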