
    Function streamText

    • Stream text generation from an LLM.

      This function is a wrapper around the AI SDK's streamText. The prompt file sets model, messages, temperature, maxTokens, and providerOptions. Additional AI SDK options (tools, onChunk, onFinish, onError, etc.) can be passed through.

      Type Parameters

      • Tools extends ToolSet = ToolSet
      • Output extends AIOutput<unknown, unknown> = AIOutput<unknown, unknown>

      Parameters

      • args: { prompt: string; variables?: Record<string, string | number | boolean> } & Omit<
            StreamTextAiSdkOptions<Tools, Output>,
            "onFinish"
        > & { onFinish?: WrappedStreamTextOnFinishCallback<Tools> }

        Generation arguments.

        • prompt: string

          Prompt file name.

        • Optional variables?: Record<string, string | number | boolean>

          Variables to interpolate.


        • onChunk

          Optional callback invoked for each stream chunk.

        • onFinish

          Optional callback invoked when the stream finishes; receives a WrappedStreamTextOnFinishEvent, whose cost field is optional.

        • onError

          Optional callback invoked when the stream errors.

      Returns StreamTextResult<Tools, Output>

      AI SDK stream result with textStream, fullStream, and metadata promises.
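
      The variables parameter above fills placeholders in the prompt file before the wrapped streamText call runs. A minimal sketch of that interpolation step, assuming a {{name}} placeholder syntax (the actual placeholder format used by the prompt files is not specified here, and interpolate is a hypothetical helper, not part of this API):

      ```typescript
      // Hypothetical sketch of prompt-variable interpolation.
      // The {{name}} syntax is an assumption, not this library's documented format.
      type PromptVariables = Record<string, string | number | boolean>;

      function interpolate(template: string, variables: PromptVariables = {}): string {
        // Replace each {{key}} with the stringified variable value;
        // leave unknown placeholders untouched.
        return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
          key in variables ? String(variables[key]) : match,
        );
      }

      const promptBody = "Summarize {{topic}} in {{count}} sentences.";
      const resolved = interpolate(promptBody, { topic: "streaming APIs", count: 2 });
      console.log(resolved); // "Summarize streaming APIs in 2 sentences."
      ```

      The resolved string would then be passed as the messages/prompt content to the underlying AI SDK streamText call, alongside any passed-through options.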