H.2. pg_cluster

This section describes how to deploy pg_cluster using Ansible-based automation tools. The text below presents examples of the commands to be entered in the terminal to prepare an SSH session, check that the Ansible settings are correct, and start the playbook. The admin_user account is used as an example user. When running the commands in the Customer’s environment, this user must be replaced with an account that has passwordless SSH access to all servers (virtual machines) listed in the my_inventory file, as well as access to privileged mode (root). As a result of the playbook run, a Tantor Certified cluster managed via Patroni will be deployed on the servers listed in the my_inventory file.

H.2.1. About pg_cluster

Version: 1.5

GitHub

H.2.2. Architecture

Figure H.1. Architecture

H.2.3. Requirements

The playbook requires the following component versions to be installed:

  • Ansible >= 2.9.10

  • Python3 (with pip module) >= 3.10.0

  • psycopg2 >= 2.5.1 (it’s recommended to install via pip)

  • packaging >= 24 (it’s recommended to install via pip; see the example after this list)
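
The pip-installable components listed above can be set up, for example, as follows (a minimal sketch; pip3 availability is assumed, and building psycopg2 from source additionally requires the libpq development headers):

python3 -m pip install "psycopg2>=2.5.1" "packaging>=24"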

H.2.4. Usage (based on OS Astra Linux 1.7)

  1. Create an admin_user user (executed on each node from the inventory file):

    sudo adduser admin_user
    
  2. Allow the admin_user user to switch to privileged mode (root) without entering a password (executed on each node from the inventory file):

    sudo tee -a /etc/sudoers > /dev/null <<EOF
    admin_user ALL=(ALL)       NOPASSWD: ALL
    EOF
    
  3. Check the availability of certified repositories in the system.
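
    For example, the repositories currently configured for apt can be listed as follows (one possible check, not prescribed by the original instructions):

    grep -rh '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null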

  4. Mount the transferred ISO image in the /mnt/iso directory (executed on each node from the inventory file and on the node from which ansible-playbook will be launched).
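
    For example, the ISO image can be mounted via the loop device (the path to the image below is a placeholder for the actual location of the transferred ISO):

    sudo mkdir -p /mnt/iso
    sudo mount -o loop <path to ISO image> /mnt/iso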

  5. Connect the repository from the mounted ISO to the system (executed on each node from the inventory file and on the node from which ansible-playbook will be launched).

    echo 'deb [trusted=yes] file:///mnt/iso/repo 1.8_x86-64 main' | \
    sudo tee /etc/apt/sources.list.d/tantorlabs.list
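
    After connecting the repository, refresh the apt package index so that its packages become visible to apt (a standard step assumed here):

    sudo apt update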
    
  6. Install ansible (executed on the node from which ansible-playbook will be launched).

    sudo apt install ansible-tantor-all
    
  7. Copy the test case files from the mounted ISO image to the /opt/pg_cluster directory (executed on the node from which ansible-playbook will be launched):

    sudo cp -r /mnt/iso/pg_cluster /opt
    cd /opt/pg_cluster/
    sudo chmod +w /opt/pg_cluster/inventory/my_inventory
    sudo chmod +w /opt/pg_cluster/inventory/group_vars/keepalived.yml
    
  8. Fill in the inventory configuration file /opt/pg_cluster/inventory/my_inventory containing a list of hosts and their IP addresses (executed on the node from which ansible-playbook will be launched).
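
    A minimal illustration of the Ansible INI inventory format is shown below; the group name, host aliases and addresses here are placeholders, and the groups already defined in the shipped my_inventory file should be kept:

    [<group_name>]
    node1 ansible_host=<IP address of the first node>
    node2 ansible_host=<IP address of the second node>
    node3 ansible_host=<IP address of the third node>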

  9. In the inventory/group_vars/keepalived.yml file, change the value of the cluster_vip_1 variable to the IP that will be used by keepalived for the allocated virtual address — a single entry point to a DBMS cluster (executed on the node from which ansible-playbook will be launched):

    sed -i \
      's/cluster_vip_1: "xxx\.xxx\.xxx\.xxx"/cluster_vip_1: "<cluster IP address>"/' \
      /opt/pg_cluster/inventory/group_vars/keepalived.yml
    
  10. Generate SSH keys and upload to cluster nodes from the inventory configuration file (executed on the node from which ansible-playbook will be launched):

    ssh-keygen -t rsa -b 4096
    ssh-copy-id admin_user@<IP address of the first node>
    ssh-copy-id admin_user@<IP address of the second node>
    ssh-copy-id admin_user@<IP address of the third node>
    
  11. Test SSH connection on behalf of admin_user (executed on the node from which ansible-playbook will be launched):

    ssh admin_user@<IP address of the first node>
    ssh admin_user@<IP address of the second node>
    ssh admin_user@<IP address of the third node>
    
  12. Make sure that all servers are available for SSH connection with the required user:

    /opt/tantor/usr/bin/ansible all -i inventory/my_inventory \
      -m ansible.builtin.setup -a "filter=ansible_hostname" -u admin_user
    

    The result of the command above will be a response from each of the available servers in the following format:

    <hostname_from_inventory_file> | SUCCESS => {
        "ansible_facts": {
            "ansible_hostname": "<device_hostname>",
            "discovered_interpreter_python": "/usr/bin/<host_python_version>"
        },
        "changed": false
    }
    

    This output for each server described in the my_inventory file means a successful SSH connection to it. If the response from any server differs from the template above, check whether it is possible to connect to it via a key as the user passed with the -u flag. If the connection can only be made with password entry (without keys), add the -kK flags to the command and enter the password for the SSH connection (-k flag) and for switching the user to privileged mode (root) (-K flag).
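
    For example, the same availability check with password prompts (the -kK flags mentioned above) looks like this:

    /opt/tantor/usr/bin/ansible all -i inventory/my_inventory \
      -m ansible.builtin.setup -a "filter=ansible_hostname" -u admin_user -kK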

    Pay attention to the value of the ansible_hostname variable in the command output. If the value is localhost or localhost.localdomain, check the /etc/hosts file of the machines with incorrect output. Ensure that the real device hostname is set before localhost on the line containing 127.0.0.1.
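
    For example, with a real device hostname of node1, the 127.0.0.1 line in /etc/hosts could look like this (node1 is a placeholder for the actual hostname):

    127.0.0.1   node1 localhost localhost.localdomain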

  13. Launch the ansible-playbook to install the TantorDB database (executed on the node from which ansible-playbook will be launched):

    /opt/tantor/usr/bin/ansible-playbook -i inventory/my_inventory \
      -u admin_user -e "postgresql_vendor=tantordb edition=certified major_version=15" pg-cluster.yaml
    

H.2.5. Launch Features

The playbook allows the pg_data, pg_wal and pg_log directories to be separated. If WAL logs need to be placed in a separate directory, make the following changes to the inventory/group_vars/patroni.yml file:

  • uncomment the patroni_pg_wal_dir variable and specify in it the directory for WAL logs;

  • for the patroni_bootstrap_initdb variable, add the waldir parameter and check that it refers to the patroni_pg_wal_dir variable;

  • for the selected replica creation method (patroni_pg_basebackup by default), add the waldir parameter with the bulk_wal_dir value.

If the log files also need to be placed in a separate directory, uncomment the patroni_pg_pg_log_dir variable and specify in it the directory for log files.

H.2.6. Playbook launch

One of the playbook tasks is executed on the same node from which Ansible is launched (the control server). If the user under which Ansible is run does not have passwordless access to privileged mode (root) on this server, add the -K flag to the launch command and enter the password when prompted.
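
For example, this is the same launch command as in the Usage section above, with the -K flag appended:

/opt/tantor/usr/bin/ansible-playbook -i inventory/my_inventory \
  -u admin_user -e "postgresql_vendor=tantordb edition=certified major_version=15" \
  pg-cluster.yaml -K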

By default, the playbook does not attempt to connect to Tantor repositories and requires the following packages to be available within the system:

  • etcd-tantor-all

  • python3-tantor-all

  • patroni-tantor-all

  • pg_configurator-tantor-all

  • haproxy-tantor-all

  • keepalived-tantor-all

  • pgbouncer-tantor-all

  • wal-g-tantor-all

  • tantor DBMS

Pay attention to the last item in the list above: the Tantor DBMS package must match the environment passed at playbook launch. For example, if you want to install the tantor-certified-server-15 DBMS using the command ansible-playbook -i inventory/my_inventory -u admin_user -e "postgresql_vendor=tantordb edition=certified major_version=15" pg-cluster.yaml -K, make sure that the tantor-certified-server-15 package is available in your local repository.
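
Availability of the required package in the connected repositories can be verified, for example, with a standard apt query (one possible check, not prescribed by the original instructions):

apt-cache policy tantor-certified-server-15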

H.2.7. Usage examples

Below are some common commands for working with the software products included in the pg_cluster solution. Note that the commands and their results may differ depending on the software versions used.

H.2.7.1. Work with etcd:

# on NODE_1
e_host=(
  /opt/tantor/usr/bin/etcdctl
  --endpoints=https://<HOST_1_IP>:2379,https://<HOST_2_IP>:2379,https://<HOST_N_IP>:2379
  --cacert=/opt/tantor/etc/patroni/ca.pem
  --cert=/opt/tantor/etc/patroni/<NODE1_HOSTNAME>.pem  
  --key=/opt/tantor/etc/patroni/<NODE1_HOSTNAME>-key.pem
)

# list etcd members
ETCDCTL_API=3 "${e_host[@]}" member list --debug

# check version
ETCDCTL_API=3  "${e_host[@]}" version

# get key value ("main" is "patroni_scope")
ETCDCTL_API=3  "${e_host[@]}" get /service/main/config

# cleanup patroni cluster configuration
ETCDCTL_API=3  "${e_host[@]}" del /service/main --prefix

H.2.7.2. Manually create a user:

# create user
su - postgres -c "psql -A -t -d postgres -c \"CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'repuserpasswd'\""
# check user
su - postgres -c "psql -A -t -d postgres -c \"select * from pg_roles where rolname = 'replicator'\""

H.2.7.3. Manage Patroni Cluster

Patroni includes a command called patronictl which can be used to control the cluster. Let's check the status of the cluster:

root@node1:~# patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml list
+ Cluster: main (7351350415269982209) --+---------+-----------+----+-----------+
| Member  | Host            | Role    | State     | TL | Lag in MB |
+---------+-----------------+---------+-----------+----+-----------+
| node1   | xxx.xxx.xxx.xxx | Leader  | running   |  1 |           |
| node2   | yyy.yyy.yyy.yyy | Replica | streaming |  1 |         0 |
| node3   | zzz.zzz.zzz.zzz | Replica | streaming |  1 |         0 |
+---------+-----------------+---------+-----------+----+-----------+

patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml edit-config should be used only to manage global cluster configuration. It should not contain any node-specific settings like connect_address, listen, data_dir and so on.

Update DCS pg_hba settings:

cat > pg_hba.conf << EOL
host replication replicator 0.0.0.0/0 md5
local all all  trust
host all all 127.0.0.1/32 trust
host all all localhost trust
EOL

cat pg_hba.conf | jq -R -s 'split("\n") | .[0:-1] | {"postgresql": {"pg_hba": .}}' | \
patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml edit-config --apply - --force main

patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml show-config

Change postgresql.conf settings:

cat > postgresql.conf << EOL
"postgresql": {
"parameters": {
    "max_connections" : 101
}
}
EOL

cat postgresql.conf | patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml edit-config --apply - --force main
patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml list
patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml restart main

Make switchover:

root@node1:~# patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml switchover
Current cluster topology
+ Cluster: main (7351350415269982209) --+---------+-----------+----+-----------+
| Member| Host            | Role    | State     | TL | Lag in MB |
+-------+-----------------+---------+-----------+----+-----------+
| node1 | xxx.xxx.xxx.xxx | Leader  | running   |  1 |           |
| node2 | yyy.yyy.yyy.yyy | Replica | streaming |  1 |         0 |
| node3 | zzz.zzz.zzz.zzz | Replica | streaming |  1 |         0 |
+-------+-----------------+---------+-----------+----+-----------+
Primary [node1]:
Candidate ['node2', 'node3'] []: node2
When should the switchover take place (e.g. 2024-04-02T13:51 )  [now]:
Are you sure you want to switchover cluster main, demoting current leader node1? [y/N]: y
2024-04-02 12:51:28.04774 Successfully switched over to "node2"
+ Cluster: main (7351350415269982209) --+---------+-----------+----+-----------+
| Member| Host            | Role    | State     | TL | Lag in MB |
+-------+-----------------+---------+-----------+----+-----------+
| node1 | xxx.xxx.xxx.xxx | Replica | streaming |  2 |         0 |
| node2 | yyy.yyy.yyy.yyy | Leader  | running   |  2 |           |
| node3 | zzz.zzz.zzz.zzz | Replica | streaming |  2 |         0 |
+-------+-----------------+---------+-----------+----+-----------+

Switch to Asynchronous mode (default mode):

cat > postgresql.conf << EOL
"postgresql": {
"parameters": {
    "synchronous_commit" : "local"
}
}
"synchronous_mode": false
"synchronous_mode_strict": false
EOL
cat postgresql.conf | patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml edit-config --apply - --force main

Switch to Synchronous mode:

cat > postgresql.conf << EOL
"postgresql": {
"parameters": {
    "synchronous_commit" : "remote_apply"
}
}
"synchronous_mode": true
"synchronous_mode_strict": true
EOL
cat postgresql.conf | patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml edit-config --apply - --force main

Figure H.2. synchronous_commit

Reinitialize a failed node when the output of patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml list reports that the node is in a failed state:

patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml list
>>
root@node1:~# patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml list
+ Cluster: main (7351350415269982209) --+------------+-----------+----+-----------+
| Member  | Host            | Role    | State        | TL | Lag in MB |
+---------+-----------------+---------+--------------+----+-----------+
| node1   | xxx.xxx.xxx.xxx | Leader  | running      |  1 |           |
| node2   | yyy.yyy.yyy.yyy | Replica | streaming    |  1 |         0 |
| node3   | zzz.zzz.zzz.zzz | Replica | start failed |  1 |         0 |
+---------+-----------------+---------+--------------+----+-----------+

The failed node can be reinitialized to rejoin the cluster using:

patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml reinit node3
>>
root@node1:~# patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml list
+ Cluster: main (7351350415269982209) --+---------+-----------+----+-----------+
| Member  | Host            | Role    | State     | TL | Lag in MB |
+---------+-----------------+---------+-----------+----+-----------+
| node1   | xxx.xxx.xxx.xxx | Leader  | running   |  1 |           |
| node2   | yyy.yyy.yyy.yyy | Replica | streaming |  1 |         0 |
| node3   | zzz.zzz.zzz.zzz | Replica | streaming |  1 |         0 |
+---------+-----------------+---------+-----------+----+-----------+

H.2.8. Cluster test

After successful cluster deployment:

# on deployment node run test, the test will take about 5 minutes
# please use the latest possible version of python3
# please run commands from the pg_cluster folder
python3 tools/pg_cluster_backend/pg_cluster_backend.py --operations=10000

To emulate deadlocks, change the test.accounts parameter from 100 to 10 in tools/pg_cluster_backend/conf/pg_cluster_backend.conf.
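
For example, the parameter could be changed as follows (the exact spelling of the "test.accounts = 100" line in the configuration file is an assumption):

sed -i 's/test.accounts = 100/test.accounts = 10/' \
  tools/pg_cluster_backend/conf/pg_cluster_backend.conf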

While the test is running, you should perform the following actions on the cluster:

# on NODE_1
patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml list
patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml restart main
shutdown -r now

# on NODE_2
patronictl -c /opt/tantor/etc/patroni/<NODE2_HOSTNAME>.yml list
patronictl -c /opt/tantor/etc/patroni/<NODE2_HOSTNAME>.yml restart main
shutdown -r now

# on NODE_3
patronictl -c /opt/tantor/etc/patroni/<NODE3_HOSTNAME>.yml list
patronictl -c /opt/tantor/etc/patroni/<NODE3_HOSTNAME>.yml restart main
shutdown -r now

# on NODE_1
patronictl -c /opt/tantor/etc/patroni/<NODE1_HOSTNAME>.yml switchover

# on primary node
su - postgres -c "psql -A -t -d test_db -c \"
    select pg_terminate_backend(pid)
    from pg_stat_activity
    where application_name = 'pg_cluster_backend'\""

# on NODE_1
systemctl stop patroni

# on NODE_2
systemctl stop patroni

# on NODE_1
systemctl start patroni

# on NODE_2
systemctl start patroni

# restart all nodes in random order

After completing these steps, the test backend should continue working.
Check how many transactions were lost on switchover with asynchronous replication:

SELECT
    sum(balance)::numeric - -- result balance
    ((select count(1) from public.accounts) * 100 + 10000) -- where "--operations=10000"
FROM public.accounts;
-- positive value means lost transactions
-- negative value means successfully committed transactions in which the backend received an exception