# anthropic_chat

Generate text completions using Anthropic's Claude AI models via the Messages API.

## Overview

This step integrates with Anthropic's Claude family of models (including Claude 3.5 Sonnet, Claude 3 Opus, and Claude 3 Haiku) for natural language generation, analysis, and transformation. You can provide static or dynamic system prompts to guide the AI's behavior, control generation parameters such as temperature and token limits, and access usage statistics to monitor costs. Input can come from a specific field or from the entire event, and the AI's response is injected into the event for downstream processing. Ideal for content generation, summarization, analysis, question answering, and creative tasks.
## Quick Start

```yaml
steps:
  - type: anthropic_chat
    api_key: ${env:anthropic_api_key}
    model: claude-3-5-sonnet-20240620
```

## Configuration
| Parameter | Type | Required | Description |
|---|---|---|---|
| api_key | string | Yes | Anthropic API key placed in the 'x-api-key' header for every request. |
| model | string | Yes | Claude model identifier. |
| input_from | string | No | Dot path selecting event content for the user message. When omitted, the entire event is JSON-serialized and used as the prompt. |
| input_key | string | No | DEPRECATED: use 'input_from' instead. Dot path selecting event content for the user message. |
| system | string | No | Static system prompt string, applied when 'system_key' does not resolve to a value. |
| system_key | string | No | Dot path in the event whose value overrides 'system' when present. |
| output_to | string | No | Event key that receives the first text block from the Claude response. Default: "anthropic" |
| output_key | string | No | DEPRECATED: use 'output_to' instead. Event key for the response. |
| include_usage | boolean | No | When true, usage statistics are saved under '<output_to>_usage'. Default: true |
| max_tokens | integer | No | Maximum number of tokens Anthropic should generate. Defaults to 1024 when not provided. |
| temperature | number | No | Sampling temperature (0.0-1.0). Lower values produce more deterministic output. |
| top_p | number | No | Nucleus sampling probability cutoff (0-1). Lower values limit the candidate token pool. |
| top_k | integer | No | Top-K sampling cutoff defining how many candidate tokens are considered at each step. |
| stop_sequences | list | No | List of strings that immediately stop generation when encountered. |
| base_url | string | No | Base URL for the Anthropic API. Override when routing through a proxy or gateway. Default: "https://api.anthropic.com" |
| anthropic_version | string | No | Value for the 'anthropic-version' header; controls API contract versioning. Default: "2023-06-01" |
| raw_on_error | boolean | No | When true, preserves the raw response text under '<output_to>_raw' if JSON parsing fails. Default: true |
| swallow_on_error | boolean | No | When true, leaves the event unchanged on HTTP or parsing errors (no error payload injected). Default: false |
| extra_headers | map | No | Additional HTTP headers merged into each request without replacing the required defaults (x-api-key, anthropic-version, Content-Type, Accept, User-Agent). |
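To illustrate what the step writes back, here is a hypothetical sketch: given an input event containing only `{"user_prompt": "Summarize our Q3 results."}` and a step configured with `input_from: user_prompt` and all other options left at their defaults, the event afterwards might look like the following. The response text and token counts are placeholders, and the shape of the usage object assumes it mirrors the Messages API's `input_tokens`/`output_tokens` fields:

```json
{
  "user_prompt": "Summarize our Q3 results.",
  "anthropic": "Q3 revenue grew 12% quarter over quarter, driven by enterprise renewals.",
  "anthropic_usage": {
    "input_tokens": 14,
    "output_tokens": 21
  }
}
```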
## Examples

### Simple text generation

Get a completion from Claude with minimal configuration.

```yaml
type: anthropic_chat
api_key: ${env:anthropic_api_key}
model: claude-3-5-sonnet-20241022
input_from: user_prompt
output_to: ai_response
max_tokens: 1024
```
### Customer support with system prompt

Use Claude as a customer support assistant with a specific personality.

```yaml
type: anthropic_chat
api_key: ${env:anthropic_api_key}
model: claude-3-5-sonnet-20241022
system: You are a helpful customer support assistant for a SaaS company. Be friendly, professional, and concise. Always offer to escalate complex issues.
input_from: customer_message
output_to: support_response
temperature: 0.7
max_tokens: 500
```
### Dynamic system prompts from data

Customize AI behavior per event using dynamic system prompts.

```yaml
type: anthropic_chat
api_key: ${env:anthropic_api_key}
model: claude-3-5-sonnet-20241022
system_key: conversation.system_instruction
input_from: conversation.user_message
output_to: conversation.ai_reply
temperature: 0.8
max_tokens: 2048
include_usage: true
```
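For the step above to pick up a dynamic prompt, each event needs a populated `conversation` object. A minimal, hypothetical event:

```json
{
  "conversation": {
    "system_instruction": "You are a terse assistant. Answer in one sentence.",
    "user_message": "What is nucleus sampling?"
  }
}
```

When `conversation.system_instruction` is missing from an event, the step falls back to the static `system` value if one is configured. Because `include_usage` is true here, usage statistics are saved under `<output_to>_usage`, presumably `conversation.ai_reply_usage` in this case.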
### Precise responses with low temperature

Use low temperature for consistent, deterministic outputs.

```yaml
type: anthropic_chat
api_key: ${env:anthropic_api_key}
model: claude-3-5-sonnet-20241022
system: Extract key information and respond in JSON format.
input_from: document.text
output_to: extracted_data
temperature: 0.0
max_tokens: 1500
```
### Long-form content with Claude Opus

Use Claude 3 Opus for complex reasoning and longer outputs.

```yaml
type: anthropic_chat
api_key: ${env:anthropic_api_key}
model: claude-3-opus-20240229
system: You are an expert technical writer. Create detailed, well-structured documentation.
input_from: requirements
output_to: documentation
temperature: 0.6
max_tokens: 4096
include_usage: true
```
## Advanced Options

These options are available on all steps for error handling and retry logic:

| Parameter | Type | Default | Description |
|---|---|---|---|
| retries | integer | 0 | Number of retry attempts (0-10). |
| backoff_seconds | number | 0 | Backoff (seconds) applied between retry attempts. |
| retry_propagate | boolean | false | If true, raise the last exception after exhausting retries; otherwise swallow it. |
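Putting these together, here is a sketch of a step that retries transient API failures and lets the event pass through untouched if every attempt fails; the retry values are arbitrary:

```yaml
type: anthropic_chat
api_key: ${env:anthropic_api_key}
model: claude-3-5-sonnet-20241022
input_from: user_prompt
output_to: ai_response
retries: 3
backoff_seconds: 2
retry_propagate: false
swallow_on_error: true
```

With `swallow_on_error: true` and `retry_propagate: false`, a persistent failure neither injects an error payload nor raises an exception, so downstream steps see the original event unchanged.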