27 January 2019

Integrating Patroni into Debian

Patroni is a PostgreSQL high availability solution with a focus on containers and Kubernetes. Until recently, the available Debian packages had to be configured manually and did not integrate well with the rest of the distribution. For the upcoming Debian 10 “Buster” release, the Patroni packages have been integrated into Debian’s standard PostgreSQL framework by credativ. They now allow for an easy setup of Patroni clusters on Debian or Ubuntu.

Patroni employs a “Distributed Consensus Store” (DCS) like etcd, Consul or ZooKeeper in order to reliably run a leader election and orchestrate automatic failover. It further allows for scheduled switchovers and easy cluster-wide configuration changes. Finally, it provides a REST interface that can be used together with HAProxy in order to build a load-balancing solution. Due to these advantages, Patroni has gradually replaced Pacemaker as the go-to open-source project for PostgreSQL high availability.
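
As an illustration of the REST interface, HAProxy can use it as a health check so that client connections are only routed to the current leader, which answers with HTTP status 200. The following haproxy.cfg snippet is a minimal sketch along the lines of the example configuration shipped with Patroni; the frontend port, the listen name and the backend host names are placeholders:

listen postgres-cluster
    bind *:5000
    option httpchk
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    # Patroni's REST API (default port 8008) returns 200 only on the leader
    server pg1 pg1:5432 maxconn 100 check port 8008
    server pg2 pg2:5432 maxconn 100 check port 8008
    server pg3 pg3:5432 maxconn 100 check port 8008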

However, many of our customers run PostgreSQL on Debian or Ubuntu systems, and so far Patroni did not integrate well with those. For example, it did not use the postgresql-common framework, and its instances were not shown in the pg_lsclusters output as usual.

Integration into Debian

In collaboration with Patroni lead developer Alexander Kukushkin from Zalando, the Debian Patroni package has, over the last months, been integrated into the postgresql-common framework to a large extent. This required changes both in Patroni itself and additional programs in the Debian package. The current version 1.5.5 of Patroni contains all these changes and is now available in Debian “Buster” (testing), so Patroni clusters can be set up there.

The packages are also available on apt.postgresql.org and thus installable on Debian 9 “Stretch” and Ubuntu 18.04 “Bionic Beaver” LTS for any PostgreSQL version from 9.4 to 11.

The most important part of the integration is the automatic generation of a suitable Patroni configuration with the pg_createconfig_patroni command. It is invoked similarly to pg_createcluster, with the desired PostgreSQL major version and the instance name as parameters:

pg_createconfig_patroni 11 test

This invocation creates the file /etc/patroni/11-test.yml, using the DCS configuration from /etc/patroni/dcs.yml, which has to be adjusted according to the local setup. The rest of the configuration is taken from the template /etc/patroni/config.yml.in, which is usable as is but can be customized by the user as needed. Afterwards, the Patroni instance is started via systemd, similar to regular PostgreSQL instances:

systemctl start patroni@11-test
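
The contents of /etc/patroni/dcs.yml depend on the chosen DCS. For a single etcd server it could look like the following minimal sketch, where the address is only a placeholder for the local setup:

etcd:
  host: 10.0.3.2:2379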

A simple three-node Patroni cluster can be created and started with the following few commands, where pg1, pg2 and pg3 are assumed to be hostnames and the local file dcs.yml contains the DCS configuration:

for i in pg1 pg2 pg3; do ssh $i 'apt -y install postgresql-common'; done
for i in pg1 pg2 pg3; do ssh $i 'sed -i "s/^#create_main_cluster = true/create_main_cluster = false/" /etc/postgresql-common/createcluster.conf'; done
for i in pg1 pg2 pg3; do ssh $i 'apt -y install patroni postgresql'; done
for i in pg1 pg2 pg3; do scp ./dcs.yml $i:/etc/patroni; done
for i in pg1 pg2 pg3; do ssh $i 'pg_createconfig_patroni 11 test && systemctl start patroni@11-test'; done

Afterwards, you can get the state of the Patroni cluster via

ssh pg1 'patronictl -c /etc/patroni/11-test.yml list'
+---------+--------+------------+--------+---------+----+-----------+
| Cluster | Member |    Host    |  Role  |  State  | TL | Lag in MB |
+---------+--------+------------+--------+---------+----+-----------+
| 11-test |  pg1   | 10.0.3.111 | Leader | running |  1 |           |
| 11-test |  pg2   | 10.0.3.41  |        | stopped |    |   unknown |
| 11-test |  pg3   | 10.0.3.46  |        | stopped |    |   unknown |
+---------+--------+------------+--------+---------+----+-----------+

Leader election has taken place and pg1 has become the primary. It created its instance with the Debian-specific pg_createcluster_patroni program, which runs pg_createcluster in the background. The two other nodes then clone from the leader using the pg_clonecluster_patroni program, which sets up an instance via pg_createcluster and then runs pg_basebackup from the primary (see the sketch after the following table). After that, all nodes are up and running:

+---------+--------+------------+--------+---------+----+-----------+
| Cluster | Member |    Host    |  Role  |  State  | TL | Lag in MB |
+---------+--------+------------+--------+---------+----+-----------+
| 11-test |  pg1   | 10.0.3.111 | Leader | running |  1 |         0 |
| 11-test |  pg2   | 10.0.3.41  |        | running |  1 |         0 |
| 11-test |  pg3   | 10.0.3.46  |        | running |  1 |         0 |
+---------+--------+------------+--------+---------+----+-----------+
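
For illustration, the cloning performed by pg_clonecluster_patroni corresponds roughly to the following manual steps. This is a simplified sketch rather than the actual program; the replication user and the leader address are assumptions based on the example above:

# create the instance layout via postgresql-common, then replace the data directory
pg_createcluster 11 test
rm -rf /var/lib/postgresql/11/test/*
# stream a base backup from the current leader (host and user are assumptions)
pg_basebackup -h 10.0.3.111 -U replicator -D /var/lib/postgresql/11/test -X stream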

The well-known Debian postgresql-common commands work as well:

ssh pg1 'pg_lsclusters'
Ver Cluster Port Status Owner    Data directory                 Log file
11  test    5432 online postgres /var/lib/postgresql/11/test    /var/log/postgresql/postgresql-11-test.log

Failover Behaviour

If the primary is shut down abruptly, its leader token will expire after a while, and Patroni will eventually initiate a failover and a new leader election:

+---------+--------+-----------+------+---------+----+-----------+
| Cluster | Member |    Host   | Role |  State  | TL | Lag in MB |
+---------+--------+-----------+------+---------+----+-----------+
| 11-test |  pg2   | 10.0.3.41 |      | running |  1 |         0 |
| 11-test |  pg3   | 10.0.3.46 |      | running |  1 |         0 |
+---------+--------+-----------+------+---------+----+-----------+
[...]
+---------+--------+-----------+--------+---------+----+-----------+
| Cluster | Member |    Host   |  Role  |  State  | TL | Lag in MB |
+---------+--------+-----------+--------+---------+----+-----------+
| 11-test |  pg2   | 10.0.3.41 | Leader | running |  2 |         0 |
| 11-test |  pg3   | 10.0.3.46 |        | running |  1 |         0 |
+---------+--------+-----------+--------+---------+----+-----------+
[...]
+---------+--------+-----------+--------+---------+----+-----------+
| Cluster | Member |    Host   |  Role  |  State  | TL | Lag in MB |
+---------+--------+-----------+--------+---------+----+-----------+
| 11-test |  pg2   | 10.0.3.41 | Leader | running |  2 |         0 |
| 11-test |  pg3   | 10.0.3.46 |        | running |  2 |         0 |
+---------+--------+-----------+--------+---------+----+-----------+
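
How long “a while” is depends on Patroni's cluster-wide timing parameters, which can be changed at runtime via patronictl edit-config. The values below are the Patroni defaults and are shown only as a sketch:

ttl: 30            # leader key expires if not renewed within 30 seconds
loop_wait: 10      # seconds between Patroni housekeeping loops
retry_timeout: 10  # timeout for DCS and PostgreSQL operations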

The old primary will rejoin the cluster as a standby once it is restarted:

+---------+--------+------------+--------+---------+----+-----------+
| Cluster | Member |    Host    |  Role  |  State  | TL | Lag in MB |
+---------+--------+------------+--------+---------+----+-----------+
| 11-test |  pg1   | 10.0.3.111 |        | running |    |   unknown |
| 11-test |  pg2   | 10.0.3.41  | Leader | running |  2 |         0 |
| 11-test |  pg3   | 10.0.3.46  |        | running |  2 |         0 |
+---------+--------+------------+--------+---------+----+-----------+
[...]
+---------+--------+------------+--------+---------+----+-----------+
| Cluster | Member |    Host    |  Role  |  State  | TL | Lag in MB |
+---------+--------+------------+--------+---------+----+-----------+
| 11-test |  pg1   | 10.0.3.111 |        | running |  2 |         0 |
| 11-test |  pg2   | 10.0.3.41  | Leader | running |  2 |         0 |
| 11-test |  pg3   | 10.0.3.46  |        | running |  2 |         0 |
+---------+--------+------------+--------+---------+----+-----------+

If a clean rejoin is not possible due to additional transactions on the old timeline, the old primary gets re-cloned from the current leader. If the data set is too large for a quick re-clone, pg_rewind can be used instead. In this case, a password needs to be set for the postgres user, and regular database connections (as opposed to replication connections) need to be allowed between the cluster nodes.
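
In terms of the Patroni configuration, this means enabling use_pg_rewind and providing superuser credentials. The following YAML snippet is only a sketch; the password is a placeholder, and the template shipped in the Debian package may already contain parts of it:

postgresql:
  use_pg_rewind: true
  authentication:
    superuser:
      username: postgres
      password: secret   # placeholder; required so pg_rewind can connect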

Creation of Additional Instances

It is also possible to create further clusters with pg_createconfig_patroni. One can either assign a PostgreSQL port explicitly via the --port option, or let pg_createconfig_patroni assign the next free port, as known from pg_createcluster (see the example after the following listing):

for i in pg1 pg2 pg3; do ssh $i 'pg_createconfig_patroni 11 test2 && systemctl start patroni@11-test2'; done
ssh pg1 'patronictl -c /etc/patroni/11-test2.yml list'
+----------+--------+-----------------+--------+---------+----+-----------+
| Cluster  | Member |       Host      |  Role  |  State  | TL | Lag in MB |
+----------+--------+-----------------+--------+---------+----+-----------+
| 11-test2 |  pg1   | 10.0.3.111:5433 | Leader | running |  1 |         0 |
| 11-test2 |  pg2   |  10.0.3.41:5433 |        | running |  1 |         0 |
| 11-test2 |  pg3   |  10.0.3.46:5433 |        | running |  1 |         0 |
+----------+--------+-----------------+--------+---------+----+-----------+
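
To pick the port explicitly instead, the --port option mentioned above can be passed along, for example as follows; the cluster name and port number are arbitrary here, and the exact placement of the option may differ:

for i in pg1 pg2 pg3; do ssh $i 'pg_createconfig_patroni 11 test3 --port 5434 && systemctl start patroni@11-test3'; done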

Ansible Playbook

In order to easily deploy a three-node Patroni cluster, we have created an Ansible playbook on GitHub. It automates the installation and configuration of PostgreSQL and Patroni on the three nodes, as well as of the DCS server on a fourth node.
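
A hypothetical inventory for such a playbook could group the hosts as follows; the group names and host names are placeholders, the actual names are documented in the repository:

# hosts (hypothetical inventory)
[postgresql]
pg1
pg2
pg3

[dcs]
dcs1

The playbook would then be run with ansible-playbook -i hosts against this inventory to set up the whole cluster in one go.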

Questions and Help

Do you have any questions or need help? Feel free to write to info@credativ.com.

Categories: HowTos PostgreSQL®
Tags: Ansible Debian Patroni PostgreSQL®

About the author

Michael Banck

Michael Banck has been an employee of credativ GmbH since 2009, a member of the Debian project since 2001, and is also active in other open source projects. As a member of credativ's database team, he has in recent years supported and advised various customers on solving problems with and the day-to-day operation of PostgreSQL®, as well as on the introduction of high-availability solutions for databases.
