openai_completion
Generate text using OpenAI's GPT models or compatible APIs (Azure OpenAI, local models).
Overview
This step provides access to OpenAI's chat models, including GPT-4, GPT-4 Turbo, and GPT-3.5, via the chat completions API, which is now the standard interface for all OpenAI models. You can customize system prompts, control generation parameters, enable JSON mode for structured outputs, and track token usage. By overriding `base_url`, the step also works with OpenAI-compatible APIs such as Azure OpenAI or locally hosted models. Typical uses include text generation, code assistance, data extraction, analysis, translation, and creative tasks.
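Under the hood, the step issues a POST to the chat completions endpoint. The sketch below assembles such a request without sending it; the function name `build_chat_request` and its defaults are illustrative, not part of the step's API, and the JSON-serialization fallback mirrors the documented behavior when `input_from` is omitted.

```python
import json

def build_chat_request(api_key, model, user_content, system=None,
                       temperature=None, max_tokens=None,
                       base_url="https://api.openai.com/v1"):
    """Assemble a chat-completions request like the one this step sends."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    # Non-string input is serialized to JSON, as when input_from is omitted.
    if not isinstance(user_content, str):
        user_content = json.dumps(user_content)
    messages.append({"role": "user", "content": user_content})

    body = {"model": model, "messages": messages}
    if temperature is not None:
        body["temperature"] = temperature
    if max_tokens is not None:
        body["max_tokens"] = max_tokens

    headers = {
        "Authorization": f"Bearer {api_key}",  # api_key sent as a Bearer token
        "Content-Type": "application/json",
    }
    return f"{base_url}/chat/completions", headers, body
```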
Quick Start
steps:
- type: openai_completion
  api_key: ${env:openai_api_key}
  model: gpt-4o-mini
Configuration
| Parameter | Type | Required | Description |
|---|---|---|---|
| api_key | string | Yes | OpenAI-compatible API key sent as a Bearer token. |
| model | string | Yes | Chat model identifier (for example `gpt-4o-mini`). |
| input_from | string | No | Dot path selecting the user message content. When omitted, the entire event is serialized to JSON. |
| input_key | string | No | Deprecated. Use `input_from` instead. |
| system | string | No | Static system prompt text used when `system_key` does not resolve. |
| system_key | string | No | Dot path in the event whose value overrides the static `system` prompt when present. |
| output_to | string | No | Event key where the primary model response (first choice content) is stored. Default: `"openai"` |
| output_key | string | No | Deprecated. Use `output_to` instead. |
| include_usage | boolean | No | When true, token usage statistics are saved under `<output_to>_usage`. Default: `true` |
| temperature | number | No | Sampling temperature (the API supports 0.0-2.0). Lower values produce more deterministic output. |
| max_tokens | integer | No | Maximum number of tokens the model may generate in the response. |
| base_url | string | No | Base API URL for the OpenAI-compatible endpoint. Override when routing through a proxy or an alternative provider. Default: `"https://api.openai.com/v1"` |
| raw_on_error | boolean | No | If true, store the raw response body under `<output_to>_raw` on JSON parse failure. Default: `false` |
| swallow_on_error | boolean | No | If true, leave the event unchanged on errors (no injection). Default: `false` |
| extra_headers | object | No | Dict of additional headers merged into the request (does not remove `Authorization` unless overwritten). |
| response_format | object | No | Optional dict passed as `response_format` (for JSON mode, JSON schema, etc.). |
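Several parameters (`input_from`, `system_key`, `output_to`) are dot paths into the event. The exact resolution rules depend on the pipeline implementation; a plausible sketch of lookup and write, with hypothetical helper names `get_path` and `set_path`:

```python
def get_path(event, path):
    """Resolve a dot path like 'article.content' against a nested dict."""
    cur = event
    for part in path.split("."):
        if not isinstance(cur, dict) or part not in cur:
            return None  # path does not resolve
        cur = cur[part]
    return cur

def set_path(event, path, value):
    """Write a value at a dot path, creating intermediate dicts as needed."""
    parts = path.split(".")
    cur = event
    for part in parts[:-1]:
        cur = cur.setdefault(part, {})
    cur[parts[-1]] = value
```

With `input_from: article.content` and `output_to: article.summary`, the step reads the nested `content` field and writes the response next to it.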
Examples
Basic GPT-4 completion
Simple text generation with GPT-4
type: openai_completion
api_key: ${env:openai_api_key}
model: gpt-4-turbo-preview
input_from: user_prompt
output_to: ai_response
max_tokens: 500
Code assistant with system prompt
Specialized assistant for generating code examples
type: openai_completion
api_key: ${env:openai_api_key}
model: gpt-4-turbo-preview
system: You are an expert Python developer. Provide clean, well-commented code with explanations.
input_from: coding_question
output_to: code_solution
temperature: 0.3
max_tokens: 1500
Structured JSON extraction
Force model to output valid JSON for data extraction
type: openai_completion
api_key: ${env:openai_api_key}
model: gpt-4-turbo-preview
system: Extract the person's name, email, and phone number from the text. Return as JSON with keys: name, email, phone.
input_from: message_text
output_to: contact_info
response_format:
type: json_object
temperature: 0.0
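With `response_format: {type: json_object}`, the model's reply is a JSON string, but the step stores it as text under `output_to`. A downstream consumer might parse it like this; the function name and the fallback-key convention (`_raw`, echoing the step's `raw_on_error` behavior) are illustrative assumptions:

```python
import json

def parse_model_json(content, output_to="contact_info", raw_on_error=False):
    """Parse a JSON-mode model reply; optionally keep the raw text on failure."""
    result = {}
    try:
        result[output_to] = json.loads(content)
    except json.JSONDecodeError:
        if raw_on_error:
            # Preserve the unparseable text for debugging instead of dropping it.
            result[output_to + "_raw"] = content
    return result
```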
Cost-effective with GPT-3.5
Use GPT-3.5 Turbo for high-volume, lower-cost tasks
type: openai_completion
api_key: ${env:openai_api_key}
model: gpt-3.5-turbo
system: Summarize the following text in 2-3 sentences.
input_from: article.content
output_to: article.summary
temperature: 0.5
max_tokens: 200
include_usage: true
Azure OpenAI integration
Use Azure OpenAI Service with custom endpoint
type: openai_completion
api_key: ${env:azure_openai_key}
model: gpt-4
base_url: https://your-resource.openai.azure.com/openai/deployments/gpt-4
input_from: query
output_to: response
max_tokens: 1000
Advanced Options
These options are available on all steps for error handling and retry logic:
| Parameter | Type | Default | Description |
|---|---|---|---|
retries | integer | 0 | Number of retry attempts (0-10) |
backoff_seconds | number | 0 | Backoff (seconds) applied between retry attempts |
retry_propagate | boolean | false | If True, raise last exception after exhausting retries; otherwise swallow. |
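The retry semantics above can be sketched as follows; this is a minimal illustration of the documented behavior (fixed backoff between attempts, swallow-or-raise after exhaustion), not the pipeline's actual implementation:

```python
import time

def run_with_retries(step_fn, event, retries=0, backoff_seconds=0.0,
                     retry_propagate=False):
    """Run a step, retrying up to `retries` extra times with a fixed backoff."""
    last_exc = None
    for attempt in range(retries + 1):
        try:
            return step_fn(event)
        except Exception as exc:
            last_exc = exc
            if attempt < retries and backoff_seconds:
                time.sleep(backoff_seconds)  # fixed delay between attempts
    if retry_propagate:
        raise last_exc          # surface the last failure
    return event                # otherwise swallow: event passes through unchanged
```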