This document covers deploying the ctrl-exec dispatcher and agent as Alpine Linux Docker containers. The application has no awareness of containers - the differences from a bare-metal installation are in how services are started, how configuration is persisted, and how pairing is performed between containers.

For bare-metal or VM installation see INSTALL.md.

Overview

dispatcher container
Runs ctrl-exec-api in the foreground. Exposes port 7445 (API) and optionally 7444 (pairing, only when pairing mode is active). The CA, registry, and ctrl-exec cert are stored on a named volume so they persist across container restarts and image rebuilds.
agent container
Runs ctrl-exec-agent serve in the foreground. Exposes port 7443. Agent cert and config are stored on a named volume. The container pairs with the dispatcher on first start and then serves normally on subsequent starts.

Volumes hold all state. Containers are otherwise stateless and can be rebuilt from the image without loss of pairing or configuration.

The dispatcher and agent each have their own Dockerfile and entrypoint script. Keeping them separate means either can be built and deployed independently. Both can be run together using the full-stack compose.yml.

The build directory should contain:

compose.yml
Dockerfile.dispatcher
Dockerfile.agent
dispatcher-entrypoint.sh
agent-entrypoint.sh
ctrl-exec-<version>.tar.gz    ← download from https://ctrl-exec.io/downloads/

Dispatcher Container

Dockerfile.dispatcher

Before building, download the ctrl-exec release tarball from https://ctrl-exec.io/downloads/ and place it in the same directory as Dockerfile.dispatcher. The Dockerfile extracts it at build time.

FROM alpine:3.21

WORKDIR /opt/ctrl-exec

COPY ctrl-exec-*.tar.gz .

RUN apk add --no-cache bash openssl perl perl-libwww perl-io-socket-ssl perl-json perl-lwp-protocol-https \
    && VERSION=$(ls ctrl-exec-*.tar.gz | sed 's/ctrl-exec-//;s/\.tar\.gz//') \
    && tar xzf ctrl-exec-${VERSION}.tar.gz --strip-components=1 \
    && ./install.sh --dispatcher \
    && ./install.sh --api \
    && rm ctrl-exec-${VERSION}.tar.gz

COPY dispatcher-entrypoint.sh /dispatcher-entrypoint.sh
RUN chmod 755 /dispatcher-entrypoint.sh

EXPOSE 7444 7445

ENTRYPOINT ["/dispatcher-entrypoint.sh"]

install.sh detects Alpine and verifies Perl module availability. The apk add step ensures bash and openssl are present — install.sh requires bash and checks for openssl before proceeding.

Entrypoint script

dispatcher-entrypoint.sh:

#!/bin/sh
set -e

CONF_DIR=/etc/ctrl-exec
CONF_FILE="$CONF_DIR/ctrl-exec.conf"

# --- First-start initialisation ---
# Each block is guarded so restarts are safe and operator-provided
# files on the volume are never overwritten.

# Generate ctrl-exec.conf from environment variables if not present.
# Mount a config file at /etc/ctrl-exec/ctrl-exec.conf to override entirely.
if [ ! -f "$CONF_FILE" ]; then
    echo "[entrypoint] Generating ctrl-exec.conf..."
    cat > "$CONF_FILE" << EOF
# ctrl-exec dispatcher configuration
# Generated by entrypoint on first start.
# To customise, edit this file on the dispatcher-data volume,
# or mount a config file at $CONF_FILE before starting.

cert = $CONF_DIR/ctrl-exec.crt
key  = $CONF_DIR/ctrl-exec.key
ca   = $CONF_DIR/ca.crt

api_bind         = ${API_BIND:-127.0.0.1}
api_port         = ${API_PORT:-7445}
api_auth_default = ${API_AUTH_DEFAULT:-deny}
EOF
    # Only write read_timeout if set - otherwise ctrl-exec uses its own default.
    if [ -n "$READ_TIMEOUT" ]; then
        echo "read_timeout = $READ_TIMEOUT" >> "$CONF_FILE"
    fi
    # Only reference auth_hook if a hook file is already present on the volume.
    if [ -f "$CONF_DIR/auth-hook" ]; then
        echo "auth_hook = $CONF_DIR/auth-hook" >> "$CONF_FILE"
    fi
    echo "[entrypoint] ctrl-exec.conf written."
fi

# Initialise CA and dispatcher cert if not present.
if [ ! -f "$CONF_DIR/ca.crt" ]; then
    echo "[entrypoint] First start: initialising CA..."
    ctrl-exec-dispatcher setup-ca
    ctrl-exec-dispatcher setup-ctrl-exec
    echo "[entrypoint] CA and ctrl-exec cert created."
fi

# Copy the example auth hook for reference if not already present.
# The active hook must be placed at $CONF_DIR/auth-hook by the operator,
# or mounted from the host. Without an active hook, api_auth_default governs.
if [ ! -f "$CONF_DIR/auth-hook.example" ]; then
    cp /opt/ctrl-exec/etc/auth-hook.example "$CONF_DIR/auth-hook.example"
    echo "[entrypoint] Example auth hook copied to $CONF_DIR/auth-hook.example"
fi

echo "[entrypoint] Starting ctrl-exec-api..."
exec ctrl-exec-api

The entrypoint uses exec so ctrl-exec-api runs as PID 1 and receives signals directly from Docker.
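One way to confirm this is to inspect the container's process table (the exact process name shown depends on how ctrl-exec-api is implemented):

```shell
# PID 1 should be the ctrl-exec-api process, not a shell wrapper
docker exec dispatcher ps -o pid,comm
```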

Each first-start block is guarded — restarts are safe and any file already present on the volume is never overwritten. Config generated on first start persists on the dispatcher-data volume and is reused on subsequent starts.

Dispatcher Configuration

The dispatcher is configured through environment variables in compose.yml. On first start, the entrypoint generates ctrl-exec.conf from these values and writes it to the dispatcher-data volume. On subsequent starts the existing config file is used unchanged.

To reconfigure, either edit ctrl-exec.conf directly on the volume, or remove it and let the entrypoint regenerate it from updated env vars on the next start.
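The regeneration path might look like this (paths per the entrypoint above; container name per the compose examples in this document):

```shell
# Remove the generated config so the entrypoint regenerates it on next start
docker exec dispatcher rm /etc/ctrl-exec/ctrl-exec.conf

# Recreate the container so updated environment variables take effect
docker compose up -d --force-recreate dispatcher
```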

API_BIND
Network address the API server binds to. Default: 127.0.0.1 (localhost only, not reachable from outside the container). Set to 0.0.0.0 to accept connections on all interfaces — required when accessing the API from the host or external clients.

API_BIND: 0.0.0.0

API_PORT
Port the API server listens on. Default: 7445.
API_AUTH_DEFAULT
Fallback behaviour when no real policy hook is active — that is, when no hook is present, or when the installed hook is the example hook that permits all requests. deny rejects all requests until a real policy hook is in place (default, recommended for production). allow permits all requests — suitable for isolated networks or initial testing.

API_AUTH_DEFAULT: allow

READ_TIMEOUT
Maximum seconds to wait for a script to complete. Omit to use the ctrl-exec default (60 seconds).

READ_TIMEOUT: 120

Auth hook

The auth hook controls who is permitted to run scripts and ping agents. When no hook is present, api_auth_default governs — deny rejects all requests, allow permits all requests.

On first start the entrypoint copies the example hook to /etc/ctrl-exec/auth-hook.example on the volume for reference. It is not installed as the active hook — the operator must place a real policy hook at /etc/ctrl-exec/auth-hook when ready.

To install a hook from the host:

services:
  dispatcher:
    volumes:
      - dispatcher-data:/etc/ctrl-exec
      - dispatcher-registry:/var/lib/ctrl-exec
      - ./auth-hook:/etc/ctrl-exec/auth-hook:ro

See INSTALL.md for the full auth hook interface and examples.
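Alternatively, a hook can be copied onto the volume of a running container rather than host-mounted (a sketch; adjust the hook filename and permissions to your policy):

```shell
# Copy a policy hook into place and make it executable
docker cp auth-hook dispatcher:/etc/ctrl-exec/auth-hook
docker exec dispatcher chmod 750 /etc/ctrl-exec/auth-hook
```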

compose.yml (dispatcher only)

services:
  dispatcher:
    build:
      context: .
      dockerfile: Dockerfile.dispatcher
    container_name: dispatcher
    restart: unless-stopped
    environment:
      API_BIND: 127.0.0.1       # set to 0.0.0.0 for external access
      API_PORT: 7445
      API_AUTH_DEFAULT: deny    # set to allow for isolated networks
      # READ_TIMEOUT: 60
    ports:
      - "7444:7444"   # pairing - expose only when actively pairing
      - "7445:7445"   # API
    volumes:
      - dispatcher-data:/etc/ctrl-exec
      - dispatcher-registry:/var/lib/ctrl-exec
      # - ./auth-hook:/etc/ctrl-exec/auth-hook:ro   # optional host-mounted hook

volumes:
  dispatcher-data:
  dispatcher-registry:

Port 7444 is the pairing port. If agents are on the same Docker network and pairing is done container-to-container, 7444 does not need to be published to the host. Publish it only when pairing agents on external hosts.
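When pairing agents on external hosts, a quick reachability check from the agent host can save debugging time (nc is an example tool; any TCP probe works):

```shell
# From the external agent host: confirm the pairing port is reachable
# while pairing mode is active on the dispatcher
nc -zv <dispatcher-host> 7444
```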

Agent Container

A containerised agent is useful for testing and proof-of-concept work. In production, agents typically run directly on the VM or host they manage — either installed via install.sh --agent on bare metal, or added to an existing container that already runs the workload the agent needs to reach. A standalone agent container makes most sense when the agent's scripts make calls to other services in the same Docker network.

Dockerfile.agent

The same release tarball used for the dispatcher build is used here. Ensure it is present in the build directory before building.

FROM alpine:3.21

WORKDIR /opt/ctrl-exec

COPY ctrl-exec-*.tar.gz .
RUN apk add --no-cache bash openssl perl perl-io-socket-ssl perl-json \
    && VERSION=$(ls ctrl-exec-*.tar.gz | sed 's/ctrl-exec-//;s/\.tar\.gz//') \
    && tar xzf ctrl-exec-${VERSION}.tar.gz --strip-components=1 \
    && ./install.sh --agent \
    && rm ctrl-exec-${VERSION}.tar.gz

COPY agent-entrypoint.sh /agent-entrypoint.sh
RUN chmod 755 /agent-entrypoint.sh

EXPOSE 7443

ENTRYPOINT ["/agent-entrypoint.sh"]

install.sh detects Alpine and verifies Perl module availability. The apk add step ensures bash and openssl are present — install.sh requires bash and checks for openssl before proceeding. The agent does not need perl-libwww.

Entrypoint script

agent-entrypoint.sh:

#!/bin/sh
set -e

CONF_DIR=/etc/ctrl-exec-agent
CERT="$CONF_DIR/agent.crt"

# DISPATCHER_HOST must be set in the container environment or compose file.
if [ -z "$DISPATCHER_HOST" ]; then
    echo "[entrypoint] ERROR: DISPATCHER_HOST environment variable not set." >&2
    exit 1
fi

# First start: no cert means we have not paired yet.
# Request pairing and then exit. The operator approves on the dispatcher,
# then the container is restarted to begin serving.
if [ ! -f "$CERT" ]; then
    echo "[entrypoint] No cert found - requesting pairing with $DISPATCHER_HOST..."
    ctrl-exec-agent request-pairing --dispatcher "$DISPATCHER_HOST"
    echo "[entrypoint] Pairing request sent. Approve on the dispatcher, then restart this container."
    exit 0
fi

echo "[entrypoint] Cert found - starting ctrl-exec-agent serve..."
exec ctrl-exec-agent serve

The two-phase entrypoint separates the pairing request (a one-shot operation that exits) from normal service operation. The container exits after requesting pairing - with restart: on-failure, Docker does not restart a container that exits 0, so it stays stopped until the operator approves the request and restarts it.

For automated or orchestrated deployments, --background mode can be used instead. This prints the reqid to stdout and exits immediately, leaving a background process to wait for approval:

ctrl-exec-agent request-pairing --dispatcher "$DISPATCHER_HOST" --background

The reqid can be captured by the orchestration layer and passed to ctrl-exec-dispatcher approve <reqid> on the dispatcher. See the Orchestrated pairing section in REFERENCE.md for the full flow.
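A minimal orchestration sketch, assuming the reqid is the only line printed to stdout:

```shell
# Request pairing in background mode and capture the reqid
# (bypasses the normal entrypoint for a one-shot pairing request)
REQID=$(docker compose run --rm --entrypoint ctrl-exec-agent agent \
    request-pairing --dispatcher dispatcher --background)

# Approve on the dispatcher, then start the agent normally
docker exec dispatcher ctrl-exec-dispatcher approve "$REQID"
docker compose up -d agent
```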

compose.yml (agent only)

For testing or deployments where the dispatcher runs elsewhere, the agent can be run independently. Set DISPATCHER_HOST to the hostname or IP of the external dispatcher.

services:
  agent:
    build:
      context: .
      dockerfile: Dockerfile.agent
    container_name: agent
    restart: on-failure
    environment:
      DISPATCHER_HOST: <dispatcher-hostname-or-ip>
    ports:
      - "7443:7443"
    volumes:
      - agent-data:/etc/ctrl-exec-agent
      - agent-scripts:/opt/ctrl-exec-scripts
    networks:
      - ctrl-exec-net-agent

volumes:
  agent-data:
  agent-scripts:

networks:
  ctrl-exec-net-agent:

compose.yml (full stack)

Both services in a single compose. depends_on is an implementation choice — the dispatcher functions without agents, and the agent connects to the dispatcher independently at pairing time. Omit or adjust depends_on to suit your startup sequencing requirements.

services:
  dispatcher:
    build:
      context: .
      dockerfile: Dockerfile.dispatcher
    container_name: dispatcher
    restart: unless-stopped
    environment:
      API_BIND: 127.0.0.1
      API_PORT: 7445
      API_AUTH_DEFAULT: deny
    ports:
      - "7445:7445"
    volumes:
      - dispatcher-data:/etc/ctrl-exec
      - dispatcher-registry:/var/lib/ctrl-exec
    networks:
      - ctrl-exec-net

  agent:
    build:
      context: .
      dockerfile: Dockerfile.agent
    container_name: agent
    restart: on-failure
    environment:
      DISPATCHER_HOST: dispatcher   # Docker DNS resolves service name
    ports:
      - "7443:7443"
    volumes:
      - agent-data:/etc/ctrl-exec-agent
      - agent-scripts:/opt/ctrl-exec-scripts
    networks:
      - ctrl-exec-net

volumes:
  dispatcher-data:
  dispatcher-registry:
  agent-data:
  agent-scripts:

networks:
  ctrl-exec-net:

DISPATCHER_HOST: dispatcher uses Docker's internal DNS to resolve the dispatcher service by name. No IP addresses required.

Networking

Container-to-container communication works reliably in two configurations:

  • Shared network — both containers on the same Docker network. Use Docker DNS (service name) or container IP directly. This is the recommended approach for full-stack compose deployments.
  • Different hosts — dispatcher and agent on separate machines, each with ports published to the host. The agent connects to the dispatcher's host IP and published port. This is the intended production pattern.

Same-host, different networks (container → host → container) is not reliable with default Docker networking. Docker's NAT and iptables rules do not consistently route traffic from one bridge network to a published port on another. If the dispatcher and agent must run on the same host in separate compose projects, either connect them to a shared network explicitly:

docker network connect <dispatcher-network> agent

Or set DISPATCHER_HOST to the dispatcher container's IP on the shared network rather than the host IP.
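The dispatcher container's IP can be looked up with docker inspect (a sketch; the network names reported depend on your compose project):

```shell
# Print the dispatcher's IP address on each network it is attached to
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' dispatcher
```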

This limitation does not affect real deployments where the dispatcher and agents run on separate hosts.

Pairing Workflow in Docker

Pairing between containers follows the same protocol as bare-metal but uses docker exec instead of separate terminal sessions.

Step 1 - Start the dispatcher

docker compose up -d dispatcher

Wait for the first-start initialisation to complete:

docker logs dispatcher
# [entrypoint] First start: initialising CA...
# [entrypoint] CA and ctrl-exec cert created.
# [entrypoint] Starting ctrl-exec-api...

Step 2 - Start pairing mode on the dispatcher

docker exec -it dispatcher ctrl-exec-dispatcher pairing-mode

This blocks, waiting for requests. Leave it running.

Step 3 - Start the agent container

In another terminal:

docker compose up agent

The entrypoint finds no cert and sends a pairing request:

[entrypoint] No cert found - requesting pairing with dispatcher...

The agent container exits after sending the request. This is expected.

Step 4 - Approve the request

Back in the pairing mode terminal, the request appears:

Pairing request from agent (172.18.0.3) - ID: 00c9845e0001
  Received: 2026-03-06T12:00:00Z
Accept, Deny, or Skip? [a/d/s]:

Type a to approve.

Alternatively, from a third terminal without interactive pairing mode:

docker exec dispatcher ctrl-exec-dispatcher list-requests
docker exec dispatcher ctrl-exec-dispatcher approve <reqid>

Step 5 - Restart the agent

docker compose up -d agent

The entrypoint finds the cert this time and starts serving:

[entrypoint] Cert found - starting ctrl-exec-agent serve...

Step 6 - Verify

docker exec dispatcher ctrl-exec-dispatcher ping agent
docker exec dispatcher ctrl-exec-dispatcher run agent check-disk

Agent Scripts

Scripts run by the agent are stored in the agent-scripts volume. To add a script to a running agent container:

# Copy script into the volume via the running container
docker cp check-disk.sh agent:/opt/ctrl-exec-scripts/
docker exec agent chmod 750 /opt/ctrl-exec-scripts/check-disk.sh
docker exec agent chown root:ctrl-exec-agent /opt/ctrl-exec-scripts/check-disk.sh

# Add to allowlist
docker exec agent sh -c \
    'echo "check-disk = /opt/ctrl-exec-scripts/check-disk.sh" \
    >> /etc/ctrl-exec-agent/scripts.conf'

# Reload config without restart
docker exec agent kill -HUP 1

Alternatively, pre-populate scripts in the Dockerfile or bake them into the image for immutable deployments. For mutable agent fleets, maintain scripts on a separate volume that is populated by a provisioning step before the agent starts.
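A provisioning step might populate the scripts volume from the host with a throwaway container (a sketch; assumes scripts live in ./scripts on the host, and note that compose-managed volume names may carry a project prefix):

```shell
# Copy host scripts onto the agent-scripts volume before the agent starts
docker run --rm \
    -v agent-scripts:/opt/ctrl-exec-scripts \
    -v "$PWD/scripts:/src:ro" \
    alpine:3.21 \
    sh -c 'cp /src/*.sh /opt/ctrl-exec-scripts/ && chmod 750 /opt/ctrl-exec-scripts/*.sh'
```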

Encrypted Credentials

Container volumes hold the CA key, ctrl-exec cert, and agent cert. For production deployments these should be protected at rest.

Docker secrets

Docker Swarm supports secrets natively. For standalone Docker, a common pattern is to populate the config volume from a secrets manager at container start via the entrypoint script:

#!/bin/sh
# Retrieve CA key from secrets manager before starting
if [ ! -f /etc/ctrl-exec/ca.key ]; then
    echo "[entrypoint] Fetching CA key from secrets manager..."
    # Example: AWS Secrets Manager
    aws secretsmanager get-secret-value \
        --secret-id ctrl-exec/ca-key \
        --query SecretString \
        --output text > /etc/ctrl-exec/ca.key
    chmod 600 /etc/ctrl-exec/ca.key
fi

The specific mechanism depends on the secrets manager available in the deployment environment (Vault, AWS Secrets Manager, Azure Key Vault, etc.).

Encrypted volumes

LUKS-encrypted volumes or encrypted filesystem layers (dm-crypt) can protect volume contents at rest. This is a host-level concern - the container has no awareness of whether its volume is encrypted.

For cloud deployments, encrypted EBS volumes (AWS), managed disks (Azure), or persistent disks (GCP) provide at-rest encryption with minimal operational overhead. Enable encryption when creating the volume and the platform handles key management.

Minimum viable protection

At minimum, ensure:

  • The host running the ctrl-exec container has restricted access
  • The dispatcher-data volume is not world-readable
  • The CA key is backed up to encrypted offline storage immediately after first-start initialisation

# Backup CA key after first start
docker exec dispatcher cat /etc/ctrl-exec/ca.key \
    | gpg --symmetric --cipher-algo AES256 \
    > ca.key.gpg
# Store ca.key.gpg in offline/offsite storage
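Restoring from the encrypted backup reverses the process (a sketch):

```shell
# Restore the CA key into the dispatcher's config volume
gpg --decrypt ca.key.gpg | docker exec -i dispatcher \
    sh -c 'cat > /etc/ctrl-exec/ca.key && chmod 600 /etc/ctrl-exec/ca.key'
```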

Multiple Agents

To run multiple agent containers, give each a unique name and volume:

services:
  agent-db:
    build:
      context: .
      dockerfile: Dockerfile.agent
    environment:
      DISPATCHER_HOST: dispatcher
    volumes:
      - agent-db-data:/etc/ctrl-exec-agent
      - agent-db-scripts:/opt/ctrl-exec-scripts
    networks:
      - ctrl-exec-net

  agent-web:
    build:
      context: .
      dockerfile: Dockerfile.agent
    environment:
      DISPATCHER_HOST: dispatcher
    volumes:
      - agent-web-data:/etc/ctrl-exec-agent
      - agent-web-scripts:/opt/ctrl-exec-scripts
    networks:
      - ctrl-exec-net

Each agent pairs independently and appears separately in ctrl-exec-dispatcher list-agents. The agent hostname in the registry is taken from the hostname reported in the pairing request - set hostname in each container to something meaningful:

agent-db:
  hostname: agent-db

Agent Tags

Tags let you label agents with arbitrary key/value metadata - for example, environment, role, or location. They are returned in the /capabilities response and propagate through API discovery, making it straightforward to filter or identify agents without maintaining separate inventory.

For container deployments, add a [tags] section to agent.conf on the config volume, or inject it via the entrypoint before starting the agent:

# In agent-entrypoint.sh, before exec ctrl-exec-agent serve
cat >> /etc/ctrl-exec-agent/agent.conf <<EOF

[tags]
env  = ${AGENT_ENV:-production}
role = ${AGENT_ROLE:-agent}
EOF

Then pass values via container environment:

agent-db:
  hostname: agent-db
  environment:
    DISPATCHER_HOST: dispatcher
    AGENT_ENV: production
    AGENT_ROLE: database

Tags appear in API discovery responses:

{
  "agent-db": {
    "host": "agent-db",
    "status": "ok",
    "tags": { "env": "production", "role": "database" },
    "scripts": [...]
  }
}

An agent with no [tags] section returns "tags": {}. Tag values are plain strings; no reserved keys.
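Because tags propagate through API discovery, responses can be filtered client-side - for example with jq (the endpoint path and TLS options here are assumptions; adjust to your API setup):

```shell
# List only agents tagged role=database from the discovery response
curl -sk https://localhost:7445/capabilities \
    | jq 'with_entries(select(.value.tags.role == "database"))'
```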

Rebuilding Images

Because all state is on volumes, images can be rebuilt without losing pairing or configuration:

docker compose build dispatcher
docker compose up -d dispatcher
# Volumes are reattached - CA and registry are intact

If a volume is deleted (e.g. docker compose down -v), the dispatcher loses its CA and all agents must be re-paired. The CA backup covers this case.
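The volumes themselves can also be backed up with a throwaway container, covering the registry as well as the CA (a sketch; compose-managed volume names may carry a project prefix):

```shell
# Archive the dispatcher volumes to tarballs on the host
docker run --rm -v dispatcher-data:/data -v "$PWD:/backup" alpine:3.21 \
    tar czf /backup/dispatcher-data.tar.gz -C /data .
docker run --rm -v dispatcher-registry:/data -v "$PWD:/backup" alpine:3.21 \
    tar czf /backup/dispatcher-registry.tar.gz -C /data .
```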