Run with Docker Compose

For more information about using Docker, see the Docker Docs. Make sure that you are using the latest version of Docker. The versions provided via apt and yum may be outdated and cause errors.

We gather Telemetry data in the Percona packages and Docker images.

Review Get more help for ways that we can work with you.

The guide shows you how to deploy a three-node Percona XtraDB Cluster 8.4 using Docker Compose. You generate SSL certificates on the first node and copy them to the other two nodes to enable secure communication.

Percona XtraDB Cluster with Docker

The following procedure is for evaluation and testing only. Do not use these instructions in a production environment because the MySQL certificates generated here are self-signed. In production, generate and store proper certificates and configure storage, security, backup, and monitoring.

High availability and placement: This guide runs all three nodes on a single host. That setup is not highly available. If the host reboots or the disk fails, the whole cluster is down and data can be lost. For production, the three nodes must run on different physical machines (or distinct failure domains), with scheduling such as Docker Swarm or Kubernetes and anti-affinity so one failure does not take out the cluster. A three-node cluster on one laptop is a complex way to lose data when the SSD fails.

Prerequisites

  • Docker and Docker Compose installed (or Podman 4.1 or later and Podman Compose; see Appendix: Podman Alternative)

  • At least 3 GB of memory per container (minimum). Percona XtraDB Cluster is memory-hungry; without enough RAM, the OS OOM killer may terminate the MySQL process (often pxc1) during startup or during the initial state snapshot transfer (SST), and the cluster can fail or restart in a loop. Ensure the host has enough free RAM for three nodes plus overhead (for example, at least 10 GB total). The example Compose file sets a 3 GB limit per container; if the host has more RAM, increase the limit accordingly.

  • Familiarity with Docker volumes and networks
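As a quick sanity check for the memory prerequisite above, you can read MemTotal from /proc/meminfo on a Linux host (a minimal sketch; the 10 GB threshold follows the example given earlier):

```shell
# Warn when the host has less than ~10 GB of total RAM (Linux only).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
if [ "$total_kb" -lt $((10 * 1024 * 1024)) ]; then
  echo "Warning: less than 10 GB RAM; the OOM killer may terminate cluster nodes."
else
  echo "RAM check passed: $((total_kb / 1024 / 1024)) GB total."
fi
```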

Directory Structure

Create a directory structure to organize your configuration files, TLS/SSL certificates, and Docker Compose setup. A clear structure keeps your deployment clean and easy to manage.

Run the following commands to create the directory structure:

mkdir -p pxc-cluster/{certs,conf.d,init}
cd pxc-cluster

Expected result: No output. The current directory is pxc-cluster with subdirectories certs, conf.d, and init.

After running these commands, your working directory (pxc-cluster/) will contain:

PXC cluster directories


Configuration Files

  1. Create conf.d/custom.cnf with minimal SSL settings:

    [mysqld]
    ssl-ca=/etc/mysql/certs/ca.pem
    ssl-cert=/etc/mysql/certs/server-cert.pem
    ssl-key=/etc/mysql/certs/server-key.pem
    
  2. Passwords: Docker Secrets (recommended) or .env

    Using a .env file is simple but insecure for secrets: values are visible in docker inspect and process listings. Prefer Docker Secrets (or Podman secrets) so passwords are mounted as files rather than passed as environment variables.

    Option A — Docker Secrets (recommended): Create two secret files in the project root. Each file must contain only the password with no newline at the end. Example: generate strong passwords and write them in one step:

    openssl rand -base64 24 | tr -d '\n' > mysql_root_password
    openssl rand -base64 24 | tr -d '\n' > xtrabackup_password
    chmod 600 mysql_root_password xtrabackup_password
    

    Example contents (do not use these; generate your own):

      • mysql_root_password — one line, the MySQL root password (for example, K8xq2mNp9vLb3wR7yT4cF6hJ1sD0zA5uE).
      • xtrabackup_password — one line, the XtraBackup replication password (for example, P2nM8kQ4vB7xW1jL9rT3cY6fH0sD5zA8uE).

    Add mysql_root_password and xtrabackup_password to your .gitignore. The Compose file below references these files via secrets: and uses MYSQL_ROOT_PASSWORD_FILE and XTRABACKUP_PASSWORD_FILE so the container reads from the mounted files. The Percona XtraDB Cluster 8.4 image supports these _FILE variables; if your image does not, use Option B.

    Option B — .env (quick local test only): Create a file named .env in the directory root with MYSQL_ROOT_PASSWORD=<password> and XTRABACKUP_PASSWORD=<password>. Add .env to .gitignore. In the Compose file, replace the secrets block and MYSQL_ROOT_PASSWORD_FILE / XTRABACKUP_PASSWORD_FILE with MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} and XTRABACKUP_PASSWORD=${XTRABACKUP_PASSWORD}.

SSL Certificate Generation

  1. Copy the SSL certificate generation script. Save the script as init/create-ssl-certs.sh:

    #!/bin/bash
    set -e
    CERT_DIR=./certs
    mkdir -p "$CERT_DIR"
    cd "$CERT_DIR"
    # Generate CA key and certificate
    openssl genrsa -out ca-key.pem 2048
    openssl req -new -x509 -nodes -days 3650 \
        -key ca-key.pem \
        -subj "/C=XX/ST=State/L=City/O=Organization/CN=RootCA" \
        -out ca.pem
    # Generate server key and CSR
    openssl req -newkey rsa:2048 -nodes \
        -keyout server-key.pem \
        -subj "/C=XX/ST=State/L=City/O=Organization/CN=pxc-node" \
        -out server-req.pem
    # Sign the server certificate with the CA
    openssl x509 -req -in server-req.pem -days 3650 \
        -CA ca.pem -CAkey ca-key.pem -set_serial 01 \
        -out server-cert.pem
    # Restrict permissions
    chmod 600 *.pem
    
  2. Make the script executable:

    chmod +x init/create-ssl-certs.sh
    

    Expected result: No output.

  3. Run the script to create the certs:

    ./init/create-ssl-certs.sh
    

    Expected result: No output. The script creates ca-key.pem, ca.pem, server-key.pem, server-req.pem, and server-cert.pem in certs/.

    If you need to regenerate certificates, delete the certs/ directory first, then run the script again. Running the script twice without removing existing files can fail or overwrite keys.

    Note: The certificates expire in 10 years. For production environments, implement certificate rotation and expiration monitoring.

  4. Copy Certificates to All Nodes (only for multi-host)

    All three nodes in the cluster must use the same set of SSL certificates. The docker-compose.yml in the guide mounts the same ./certs directory for all three services (pxc1, pxc2, pxc3). For a single-host deployment, you do not need separate certificate directories; the certs/ directory you created in step 3 is used by all nodes.

    If you deploy on separate machines instead of one host, copy the certificates to each machine so each has its own certs/ (or equivalent path) for its Compose project. From node 1, run:

    scp -r ./certs/ user@node2-host:/path/to/pxc-cluster/certs
    scp -r ./certs/ user@node3-host:/path/to/pxc-cluster/certs
    

    Expected result: scp prints a progress line for each file copied to each host; a zero exit status after the transfer indicates success.

    Ensure each host’s Compose file mounts that host’s certificate directory. For multi-host, see also Multi-host deployment below for firewall, name resolution, and time synchronization.

Multi-host deployment

If you run each node on a separate machine, configure the following in addition to copying certificates.

Firewall: Allow cluster and client traffic between the hosts. Open these ports on each host for the other hosts’ IPs (or subnet):

  • 3306 (MySQL client)
  • 4567 (Galera replication)
  • 4568 (Galera incremental state transfer)
  • 4444 (Percona XtraBackup snapshot transfer)

Name resolution and CLUSTER_JOIN: Containers on one host must reach containers on other hosts by hostname or IP. The example Compose file uses CLUSTER_JOIN=pxc1, which works only when all containers run on the same host (the name pxc1 resolves inside that host’s network). On separate hosts, the host running pxc2 cannot resolve the hostname pxc1 (that container name exists only on the host running pxc1). For multi-host, set CLUSTER_JOIN to the IP address or FQDN of the host that runs the bootstrap node (pxc1), not the container name. For example, on the host that runs pxc2 and pxc3, use CLUSTER_JOIN=<IP_OF_NODE_1> or CLUSTER_JOIN=<FQDN_OF_NODE_1> (replace with the actual IP or FQDN of the machine where pxc1 runs). Add DNS records or /etc/hosts entries on each host so that the value you use in CLUSTER_JOIN resolves to the correct machine.
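For example, static /etc/hosts entries on each host might look like the following (the hostnames and 192.0.2.x addresses are placeholders; substitute your own):

```
# /etc/hosts on every host in the cluster
192.0.2.10  pxc1-host
192.0.2.11  pxc2-host
192.0.2.12  pxc3-host
```

With entries like these, the hosts running pxc2 and pxc3 would set CLUSTER_JOIN=pxc1-host.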

Time synchronization: PXC and Galera require consistent time across nodes. Synchronize the clock on every host with NTP (or chrony, systemd-timesyncd). Skew between hosts can cause replication issues and cluster instability. Ensure NTP is enabled and running before starting the cluster.

Docker Compose Setup

  1. Create docker-compose.yml:

    The example below uses Docker Secrets (step 2 Option A). If you chose the .env option (Option B), remove the top-level secrets: block and each service’s secrets: list, and set MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} and XTRABACKUP_PASSWORD=${XTRABACKUP_PASSWORD} instead of the _FILE variables.

    Warning: If the host runs out of memory, the cluster can fail. Ensure the host has at least enough free RAM for three nodes plus overhead (for example, 10 GB total). The example below sets a 3 GB memory limit per container (minimum); the daemon will not start a container when the host cannot satisfy the limit. If the host has more RAM, increase the limit (for example, 4G per container for a 12 GB+ host).

    secrets:
      mysql_root_password:
        file: ./mysql_root_password
      xtrabackup_password:
        file: ./xtrabackup_password
    services:
      pxc1:
        image: percona/percona-xtradb-cluster:8.4
        container_name: pxc1
        environment:
          - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password
          - CLUSTER_NAME=pxc-cluster
          - XTRABACKUP_PASSWORD_FILE=/run/secrets/xtrabackup_password
        secrets:
          - mysql_root_password
          - xtrabackup_password
        volumes:
          - ./certs:/etc/mysql/certs:ro,Z
          - ./conf.d:/etc/percona-xtradb-cluster.conf.d:ro,Z
          - ./data-pxc1:/var/lib/mysql:Z
        networks:
          - pxcnet
        ports:
          - "3306:3306"
        command: ["--wsrep-new-cluster"]
        deploy:
          resources:
            limits:
              memory: 3G
        healthcheck:
          test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
          interval: 10s
          timeout: 5s
          retries: 5
      pxc2:
        image: percona/percona-xtradb-cluster:8.4
        container_name: pxc2
        environment:
          - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password
          - CLUSTER_NAME=pxc-cluster
          - CLUSTER_JOIN=pxc1
          - XTRABACKUP_PASSWORD_FILE=/run/secrets/xtrabackup_password
        secrets:
          - mysql_root_password
          - xtrabackup_password
        volumes:
          - ./certs:/etc/mysql/certs:ro,Z
          - ./conf.d:/etc/percona-xtradb-cluster.conf.d:ro,Z
          - ./data-pxc2:/var/lib/mysql:Z
        networks:
          - pxcnet
        depends_on:
          pxc1:
            condition: service_healthy
        deploy:
          resources:
            limits:
              memory: 3G
        healthcheck:
          test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
          interval: 10s
          timeout: 5s
          retries: 5
      pxc3:
        image: percona/percona-xtradb-cluster:8.4
        container_name: pxc3
        environment:
          - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password
          - CLUSTER_NAME=pxc-cluster
          - CLUSTER_JOIN=pxc1
          - XTRABACKUP_PASSWORD_FILE=/run/secrets/xtrabackup_password
        secrets:
          - mysql_root_password
          - xtrabackup_password
        volumes:
          - ./certs:/etc/mysql/certs:ro,Z
          - ./conf.d:/etc/percona-xtradb-cluster.conf.d:ro,Z
          - ./data-pxc3:/var/lib/mysql:Z
        networks:
          - pxcnet
        depends_on:
          pxc1:
            condition: service_healthy
        deploy:
          resources:
            limits:
              memory: 3G
        healthcheck:
          test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
          interval: 10s
          timeout: 5s
          retries: 5
    networks:
      pxcnet:
        driver: bridge
    

    Volume persistence: The example uses bind mounts (./data-pxc1, etc.) so data lives on the host and survives docker compose down. For production, consider named volumes so the runtime manages storage and data is not tied to a single host path. Without either bind mounts or named volumes, database data is ephemeral and is lost when containers are removed.
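A named-volume variant for one service might look like this (a sketch; pxc1-data is a hypothetical volume name, and the same pattern applies to pxc2 and pxc3):

```yaml
services:
  pxc1:
    volumes:
      - ./certs:/etc/mysql/certs:ro,Z
      - ./conf.d:/etc/percona-xtradb-cluster.conf.d:ro,Z
      - pxc1-data:/var/lib/mysql    # named volume managed by the runtime
volumes:
  pxc1-data:
```

Note that with named volumes, docker compose down -v does delete the database data.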

    SELinux and rootless: The example uses :Z (or :z) on all volume mounts so the runtime relabels them for the container. That is required when running rootless (Podman or Docker rootless) or when SELinux is enabled; otherwise the container cannot read certs/, conf.d/, or the data directories and will fail at startup. The guide does not assume a root daemon; the YAML is written for rootless and SELinux. On systems without SELinux, you can remove ,Z from the volume suffixes if desired.

Deployment

  1. Start the Cluster

    The Compose file sets depends_on: pxc1: condition: service_healthy for pxc2 and pxc3, so they start only after pxc1 passes its healthcheck. That avoids a race where the join nodes start before the bootstrap node is ready and crash or loop. You can run docker compose up -d once (Compose will start pxc1, wait for healthy, then start pxc2 and pxc3), or start pxc1 first and then the others as below.

    Start node 1 to initialize the cluster:

    docker compose up -d pxc1
    
    Expected result: A line such as [+] Running 1/1 - Container pxc1 Started (or similar). The container pxc1 is running.

    (With Podman: podman compose up -d pxc1 or podman-compose up -d pxc1.)

    Wait for pxc1 to be fully healthy before starting pxc2 and pxc3. The bootstrap node must be ready and accepting connections, or the other nodes may fail to join. You can use a wait loop:

    until docker exec pxc1 mysqladmin ping -h localhost &>/dev/null; do
      echo "Waiting for pxc1..."
      sleep 2
    done
    

    Expected result: The loop prints “Waiting for pxc1…” until the node responds, then exits with no further output.

    Then, start the remaining nodes:

    docker compose up -d pxc2 pxc3
    
    Expected result: Lines such as [+] Running 2/2 - Container pxc2 Started - Container pxc3 Started (or similar). Both containers are running.

    (With Podman: podman compose up -d pxc2 pxc3 or podman-compose up -d pxc2 pxc3.)

Validation

  1. Validate the Cluster

    Check the status of each node by running the following commands from the host. If you use Docker Secrets, pass the password from the file: -p$(cat mysql_root_password). If you use .env, use -p${MYSQL_ROOT_PASSWORD}. First, verify the cluster size and node status:

    docker exec pxc1 mysql -uroot -p$(cat mysql_root_password) -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
    docker exec pxc2 mysql -uroot -p$(cat mysql_root_password) -e "SHOW STATUS LIKE 'wsrep_cluster_status';"
    

    Expected result: For wsrep_cluster_size, a row with value 3. Expect wsrep_cluster_status to be Primary. If the value is non-Primary, the node is isolated.

    Then verify additional cluster health indicators on any node (for example, pxc1). Expect wsrep_ready and wsrep_connected to be ON, and wsrep_local_state_comment to be Synced:

    docker exec pxc1 mysql -uroot -p$(cat mysql_root_password) -e "SHOW STATUS LIKE 'wsrep_ready';"
    docker exec pxc1 mysql -uroot -p$(cat mysql_root_password) -e "SHOW STATUS LIKE 'wsrep_connected';"
    docker exec pxc1 mysql -uroot -p$(cat mysql_root_password) -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"
    

    Expected result: wsrep_ready = ON, wsrep_connected = ON, wsrep_local_state_comment = Synced.

    (With Podman: use podman exec instead of docker exec.)

You should see all three nodes joined and synchronized, with the health indicators above showing the expected values.

Backup

Even for testing, back up data before major changes or shutdown. Example: dump all databases from one node (pxc1) to a file on the host. Use the root password from your secret file or .env:

docker exec pxc1 mysqldump -uroot -p$(cat mysql_root_password) --all-databases > backup.sql

Expected result: The file backup.sql is created in the current directory with SQL statements for all databases. If you use .env, substitute -p${MYSQL_ROOT_PASSWORD}. With Podman, use podman exec instead of docker exec.

Troubleshooting

Viewing logs for debugging:

docker compose logs -f pxc1
docker compose logs -f pxc2
docker compose logs -f pxc3

Run one of the commands above to stream that container’s logs (stdout and stderr). The -f option follows the log output; omit -f for a one-off dump. Expected result: Log lines from MySQL and PXC until you press Ctrl+C. With Podman, use podman compose logs -f pxc1 (and so on).

  • Containers exit or fail to start: Check logs with docker compose logs pxc1 (and pxc2, pxc3). Ensure the bootstrap node (pxc1) is healthy before starting pxc2 and pxc3.

  • Cluster size stays at 1: Start pxc1 first, wait for its healthcheck to pass, then start pxc2 and pxc3. If the nodes start too soon, they may fail to join; restart pxc2 and pxc3 after pxc1 is up.

  • Permission denied on certs or config: Ensure certs/ and conf.d/ are readable by the container. On systems with SELinux enabled (Docker or Podman), add :Z or :z to all volume mounts (see the SELinux note in Docker Compose Setup and Appendix: Podman Alternative).

  • Nodes cannot reach each other: On a single host, verify all containers are on the same network (docker network inspect pxc-cluster_pxcnet or equivalent). On multiple hosts, see Multi-host deployment for firewall rules, name resolution (DNS or /etc/hosts), and NTP.

Cleanup and Shutdown

To stop the cluster and remove containers (data in data-pxc1/, data-pxc2/, data-pxc3/ is preserved):

docker compose down

Expected result: Containers pxc1, pxc2, and pxc3 are stopped and removed. Project network is removed. Data in data-pxc1/, data-pxc2/, and data-pxc3/ remains on the host.

To stop and remove containers and volumes (the -v option deletes all database data):

docker compose down -v

Expected result: Containers and any named volumes are removed. Bind-mounted data directories (data-pxc1/, etc.) are not removed by this command.

Note: The Compose file in the guide uses bind mounts (./data-pxc1, etc.), not named volumes, so -v removes only any named volumes if present. To fully reset, remove the data directories manually: rm -rf data-pxc1 data-pxc2 data-pxc3 (only when the containers are stopped).

Appendix: Podman Alternative

Podman can serve as an alternative to Docker because it runs the same container images. However, Podman is not fully compatible with Docker Compose: depending on your environment, you may need the built-in podman compose subcommand, the separate podman-compose tool, or a configured Docker Compose compatibility layer.

Podman also uses a different architecture (for example, pods and rootless containers), so networking, volume mounts, and service behavior may differ from Docker Compose.

The deployment is designed and tested with Docker Compose. Podman support is not guaranteed. If you use Podman, verify that the cluster operates correctly before using it beyond testing or experimentation.

You can use the same Compose file and workflow with Podman. Use Podman 4.1 or later; the built-in podman compose subcommand requires 4.1 or newer. With older Podman, use the separate podman-compose tool. Then:

  • Prerequisites: Podman and Podman Compose (for example, pip install podman-compose or your distro’s package) if you are not using built-in podman compose. For rootless Podman, ensure the user has enough resources (for example, sysctl user.max_user_namespaces and subuid/subgid ranges).

  • Commands: Replace docker compose with podman compose or podman-compose, and docker exec with podman exec. Examples:

  • Start node 1: podman compose up -d pxc1 (or podman-compose up -d pxc1)

  • Start other nodes: podman compose up -d pxc2 pxc3

  • Validate: podman exec pxc1 mysql -uroot -p$(cat mysql_root_password) -e "SHOW STATUS LIKE 'wsrep_cluster_size';" (or -p${MYSQL_ROOT_PASSWORD} if using .env)

  • Directory structure: The same layout (pxc-cluster/ with certs/, conf.d/, and init/) works with Podman. Run podman compose from the project directory (for example, pxc-cluster/) so the relative volume paths in the Compose file resolve correctly.

  • Compose file: The docker-compose.yml in the guide works as-is with Podman; the bridge network and volume mounts are supported. For rootless Podman, see the volume mount options below.

Handling rootless permissions (most important for Podman)

In Docker, the daemon runs as root and can override file permissions. In Podman, if you run as a normal user, the MySQL process inside the container (usually UID 1001 or 999) may not have permission to read the files you created on the host. Ensure the files are readable by the container. On systems with SELinux, use the :Z (private) or :z (shared) mount option so Podman relabels the volume for the container. In your docker-compose.yml, use:

volumes:
  - ./certs:/etc/mysql/certs:ro,Z
  - ./conf.d:/etc/percona-xtradb-cluster.conf.d:ro,Z

Apply the same volume options to every service (pxc1, pxc2, pxc3). Without :Z or :z, the container may fail to read certs or config when running rootless on SELinux.

podman-compose vs docker-compose

  • If you use podman-compose (the Python tool): podman-compose reads the directory structure and either the secret files or .env the same way Docker does.

  • If you use docker-compose with the Podman socket: this setup is often more stable for PXC. Set DOCKER_HOST to point at the Podman socket (for example, unix:///run/user/$(id -u)/podman/podman.sock) so docker compose talks to Podman.

Network considerations

PXC uses specific ports for cluster communication (4567, 4568, 4444). Podman uses different networking (netavark or CNI). If nodes cannot find each other:

  • Keep using a named network in the Compose file (for example, pxcnet) so Podman can resolve container names (pxc1, pxc2, pxc3).

  • If problems persist, try setting network_mode: slirp4netns on the services, or run the stack rootful to use the default bridge.