AI workflows as code.
Deployed in minutes.

Build, deploy, and scale your AI workflows with YAML and Python – no infrastructure required.

Workflows as Code

No more drag-and-drop. Script and version-control your workflows

Deploy Instantly

No infra setup, no queues to babysit. Scale instantly with no hassle

Transparent Pricing

Start for free, then pay monthly for just the credits you need

Drag-and-drop tools aren't production-grade

Zapier and n8n are great for prototyping, but they're not built for developers. No version control, no proper environments, and deploying changes feels like clicking through a maze. You need workflows as code.

No Version Control

Impossible to set up proper environments or track changes. Your workflows live in a UI, not your repo

Not Developer-First

Drag-and-drop is great for demos, but limited customization means you hit walls fast

Complex Deployment

Clicking through UIs to deploy changes. No CI/CD, no simple deployment workflow

A typical AI workflow in ETLR:

This example shows how ETLR handles a common scenario: receiving webhook data, enriching it, and forwarding it to external services.

1. Receive webhook events
   Accept HTTP webhooks from any source - APIs, databases, or third-party services.

2. Add metadata automatically
   Enrich your data with timestamps, unique IDs, and tracking information.

3. Transform with custom code
   Run your Python functions to normalize, validate, or transform data exactly how you need it.

4. Send to external services
   Forward processed data to databases, APIs, or notification services with built-in retry logic.

workflow:
  name: "add_timestamp_normalise_and_post_user"
  input:
    type: http_webhook
  steps:
    - type: add_timestamp
      format: ISO-8601
      field: timestamp
    - type: python_function
      code: |
        def process(event):
            event['full_name'] = f"{event['first_name']} {event['last_name']}"
            return event
      handler: process
    - type: http_request
      url: "https://example.org/users"
      headers: 
        x-api-key: ${env:API_KEY}
      method: POST

Real-time health monitoring:

This example demonstrates a cron workflow that monitors etlr.io every minute and sends Discord alerts for any non-200 responses.

1. Schedule health checks
   Run every minute with start_now enabled to begin monitoring immediately.

2. Ping the endpoint
   Make an HTTP call with status tracking and a 5-second timeout, injecting the response into state.

3. Filter non-200 responses
   Only continue the workflow when the status code is not 200.

4. Alert via Discord
   Send a webhook notification with status code and latency details.

workflow:
  name: "etlr_healthcheck"
  input:
    type: cron
    cron: '*/1 * * * *'
    start_now: true
  steps:
    - type: http_call
      url: https://etlr.io
      include_status: true
      inject: http
      timeout: 5
    - type: filter
      groups:
        - conditions:
            - field: http.status
              op: ne
              value: 200
    - type: discord_webhook
      webhook_url: ${env:WEBHOOK_URL}
      content_template: >-
        Warning: etlr.io ping status=${http.status}
        latency=${http.duration_ms}ms
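
For reference, the same check-filter-alert logic can be sketched in plain Python. This is only an illustration of what each step does, not how ETLR runs it; the URL and timeout come from the workflow above.

import time
import urllib.error
import urllib.request

def check(url="https://etlr.io", timeout=5):
    # http_call step: ping the endpoint and track status plus latency.
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:
        status = exc.code
    except urllib.error.URLError:
        status = 0  # unreachable: treat as a failure
    duration_ms = int((time.monotonic() - start) * 1000)
    return status, duration_ms

status, duration_ms = check()

# filter step: only continue when the status code is not 200.
if status != 200:
    # discord_webhook step: the rendered content_template.
    print(f"Warning: etlr.io ping status={status} latency={duration_ms}ms")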

Simple, transparent pricing

Our pricing is straightforward: credit-based monthly billing.

Free

£0/month
  • ✓ 100 credits included
  • ✓ All integrations
  • ✓ Custom Python integrations
  • ✓ Community support

Enterprise

Custom
  • ✓ Unlimited credits
  • ✓ All integrations
  • ✓ Custom Python integrations
  • ✓ Dedicated support
  • ✓ SLA guarantees
  • ✓ Bespoke integrations

How Credits Work

Credit Usage

  • 1 credit per execution flow (all steps)
  • Credits don't roll over to the next month

Example Execution Flow

Stripe Webhook → Normalize (Python) → Enrich Timestamp → Write to S3

Total: 1 credit
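
As a rough worked example (assuming every trigger of a workflow counts as one execution flow, including runs the filter step stops early):

# Rough credit maths under the assumptions above.
free_credits = 100                   # Free plan allowance per month
cost_per_run = 1                     # one credit covers every step in a flow

print(free_credits // cost_per_run)  # -> 100 runs of the Stripe flow per month
print(24 * 60 * cost_per_run)        # -> 1440 credits per day for the
                                     #    every-minute health check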