0s autopkgtest [11:29:30]: starting date and time: 2024-11-13 11:29:30+0000
0s autopkgtest [11:29:30]: git checkout: 6f3be7a8 Fix armhf LXD image generation for plucky
0s autopkgtest [11:29:30]: host juju-7f2275-prod-proposed-migration-environment-15; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.zs1u99xi/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:python3-defaults,src:python3-stdlib-extensions --apt-upgrade patroni --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 '--env=ADT_TEST_TRIGGERS=python3-defaults/3.12.7-1 python3-stdlib-extensions/3.12.7-1' -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest-s390x --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-15@bos03-s390x-7.secgroup --name adt-plucky-s390x-patroni-20241113-112929-juju-7f2275-prod-proposed-migration-environment-15-8a0ff9a5-55d3-48c1-a06d-798d1a04feec --image adt/ubuntu-plucky-s390x-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-15 --net-id=net_prod-proposed-migration-s390x -e TERM=linux -e ''"'"'http_proxy=http://squid.internal:3128'"'"'' -e ''"'"'https_proxy=http://squid.internal:3128'"'"'' -e ''"'"'no_proxy=127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,keyserver.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,10.24.0.0/24,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com'"'"'' --mirror=http://ftpmaster.internal/ubuntu/
194s autopkgtest [11:32:44]: testbed dpkg architecture: s390x
195s autopkgtest [11:32:45]: testbed apt version: 2.9.8
195s autopkgtest [11:32:45]: @@@@@@@@@@@@@@@@@@@@ test bed setup
196s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease [73.9 kB]
196s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse Sources [15.3 kB]
196s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/restricted Sources [7016 B]
196s Get:4 http://ftpmaster.internal/ubuntu plucky-proposed/main Sources [76.4 kB]
196s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/universe Sources [849 kB]
197s Get:6 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x Packages [85.8 kB]
197s Get:7 http://ftpmaster.internal/ubuntu plucky-proposed/universe s390x Packages [565 kB]
197s Get:8 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse s390x Packages [16.6 kB]
197s Fetched 1689 kB in 1s (2076 kB/s)
197s Reading package lists...
200s Reading package lists...
200s Building dependency tree...
200s Reading state information...
200s Calculating upgrade...
200s The following NEW packages will be installed:
200s   python3.13-gdbm
200s The following packages will be upgraded:
200s   libgpgme11t64 libpython3-stdlib python3 python3-gdbm python3-minimal
201s 5 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
201s Need to get 252 kB of archives.
201s After this operation, 98.3 kB of additional disk space will be used.
201s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3-minimal s390x 3.12.7-1 [27.4 kB]
201s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3 s390x 3.12.7-1 [24.0 kB]
201s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libpython3-stdlib s390x 3.12.7-1 [10.0 kB]
201s Get:4 http://ftpmaster.internal/ubuntu plucky/main s390x python3.13-gdbm s390x 3.13.0-2 [31.0 kB]
201s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3-gdbm s390x 3.12.7-1 [8642 B]
201s Get:6 http://ftpmaster.internal/ubuntu plucky/main s390x libgpgme11t64 s390x 1.23.2-5ubuntu4 [151 kB]
201s Fetched 252 kB in 0s (612 kB/s)
201s (Reading database ... 55510 files and directories currently installed.)
201s Preparing to unpack .../python3-minimal_3.12.7-1_s390x.deb ...
201s Unpacking python3-minimal (3.12.7-1) over (3.12.6-0ubuntu1) ...
201s Setting up python3-minimal (3.12.7-1) ...
201s (Reading database ... 55510 files and directories currently installed.)
201s Preparing to unpack .../python3_3.12.7-1_s390x.deb ...
202s Unpacking python3 (3.12.7-1) over (3.12.6-0ubuntu1) ...
202s Preparing to unpack .../libpython3-stdlib_3.12.7-1_s390x.deb ...
202s Unpacking libpython3-stdlib:s390x (3.12.7-1) over (3.12.6-0ubuntu1) ...
202s Selecting previously unselected package python3.13-gdbm.
202s Preparing to unpack .../python3.13-gdbm_3.13.0-2_s390x.deb ...
202s Unpacking python3.13-gdbm (3.13.0-2) ...
202s Preparing to unpack .../python3-gdbm_3.12.7-1_s390x.deb ...
202s Unpacking python3-gdbm:s390x (3.12.7-1) over (3.12.6-1ubuntu1) ...
202s Preparing to unpack .../libgpgme11t64_1.23.2-5ubuntu4_s390x.deb ...
202s Unpacking libgpgme11t64:s390x (1.23.2-5ubuntu4) over (1.18.0-4.1ubuntu4) ...
202s Setting up libgpgme11t64:s390x (1.23.2-5ubuntu4) ...
202s Setting up python3.13-gdbm (3.13.0-2) ...
202s Setting up libpython3-stdlib:s390x (3.12.7-1) ...
202s Setting up python3 (3.12.7-1) ...
202s Setting up python3-gdbm:s390x (3.12.7-1) ...
202s Processing triggers for man-db (2.12.1-3) ...
203s Processing triggers for libc-bin (2.40-1ubuntu3) ...
203s Reading package lists...
203s Building dependency tree...
203s Reading state information...
203s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
203s Hit:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease
204s Hit:2 http://ftpmaster.internal/ubuntu plucky InRelease
204s Hit:3 http://ftpmaster.internal/ubuntu plucky-updates InRelease
204s Hit:4 http://ftpmaster.internal/ubuntu plucky-security InRelease
205s Reading package lists...
205s Reading package lists...
205s Building dependency tree...
205s Reading state information...
205s Calculating upgrade...
205s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
205s Reading package lists...
206s Building dependency tree...
206s Reading state information...
206s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
212s autopkgtest [11:33:02]: testbed running kernel: Linux 6.11.0-8-generic #8-Ubuntu SMP Mon Sep 16 12:49:35 UTC 2024
212s autopkgtest [11:33:02]: @@@@@@@@@@@@@@@@@@@@ apt-source patroni
215s Get:1 http://ftpmaster.internal/ubuntu plucky/universe patroni 3.3.1-1 (dsc) [2851 B]
215s Get:2 http://ftpmaster.internal/ubuntu plucky/universe patroni 3.3.1-1 (tar) [1150 kB]
215s Get:3 http://ftpmaster.internal/ubuntu plucky/universe patroni 3.3.1-1 (diff) [23.1 kB]
215s gpgv: Signature made Tue Jul 2 12:54:38 2024 UTC
215s gpgv: using RSA key 9CA877749FAB2E4FA96862ECDC686A27B43481B0
215s gpgv: Can't check signature: No public key
215s dpkg-source: warning: cannot verify inline signature for ./patroni_3.3.1-1.dsc: no acceptable signature found
215s autopkgtest [11:33:05]: testing package patroni version 3.3.1-1
216s autopkgtest [11:33:06]: build not needed
216s autopkgtest [11:33:06]: test acceptance-etcd3: preparing testbed
222s Reading package lists...
222s Building dependency tree...
222s Reading state information...
222s Starting pkgProblemResolver with broken count: 0
222s Starting 2 pkgProblemResolver with broken count: 0
222s Done
222s The following additional packages will be installed:
222s   etcd-server fonts-font-awesome fonts-lato libio-pty-perl libipc-run-perl
222s   libjs-jquery libjs-sphinxdoc libjs-underscore libjson-perl libpq5
222s   libtime-duration-perl libtimedate-perl libxslt1.1 moreutils patroni
222s   patroni-doc postgresql postgresql-16 postgresql-client-16
222s   postgresql-client-common postgresql-common python3-behave python3-cdiff
222s   python3-click python3-colorama python3-coverage python3-dateutil
222s   python3-dnspython python3-etcd python3-parse python3-parse-type
222s   python3-prettytable python3-psutil python3-psycopg2 python3-six
222s   python3-wcwidth python3-ydiff sphinx-rtd-theme-common ssl-cert
222s Suggested packages:
222s   etcd-client vip-manager haproxy postgresql-doc postgresql-doc-16
222s   python-coverage-doc python3-trio python3-aioquic python3-h2 python3-httpx
222s   python3-httpcore etcd python-psycopg2-doc
222s Recommended packages:
222s   javascript-common libjson-xs-perl
222s The following NEW packages will be installed:
222s   autopkgtest-satdep etcd-server fonts-font-awesome fonts-lato libio-pty-perl
222s   libipc-run-perl libjs-jquery libjs-sphinxdoc libjs-underscore libjson-perl
222s   libpq5 libtime-duration-perl libtimedate-perl libxslt1.1 moreutils patroni
222s   patroni-doc postgresql postgresql-16 postgresql-client-16
222s   postgresql-client-common postgresql-common python3-behave python3-cdiff
222s   python3-click python3-colorama python3-coverage python3-dateutil
222s   python3-dnspython python3-etcd python3-parse python3-parse-type
222s   python3-prettytable python3-psutil python3-psycopg2 python3-six
222s   python3-wcwidth python3-ydiff sphinx-rtd-theme-common ssl-cert
222s 0 upgraded, 40 newly installed, 0 to remove and 0 not upgraded.
222s Need to get 36.2 MB/36.2 MB of archives.
222s After this operation, 127 MB of additional disk space will be used.
222s Get:1 /tmp/autopkgtest.FwqS2V/1-autopkgtest-satdep.deb autopkgtest-satdep s390x 0 [764 B]
222s Get:2 http://ftpmaster.internal/ubuntu plucky/main s390x fonts-lato all 2.015-1 [2781 kB]
223s Get:3 http://ftpmaster.internal/ubuntu plucky/main s390x libjson-perl all 4.10000-1 [81.9 kB]
223s Get:4 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-client-common all 262 [36.7 kB]
223s Get:5 http://ftpmaster.internal/ubuntu plucky/main s390x ssl-cert all 1.1.2ubuntu2 [18.0 kB]
223s Get:6 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-common all 262 [162 kB]
223s Get:7 http://ftpmaster.internal/ubuntu plucky/universe s390x etcd-server s390x 3.5.15-7 [10.9 MB]
224s Get:8 http://ftpmaster.internal/ubuntu plucky/main s390x fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB]
224s Get:9 http://ftpmaster.internal/ubuntu plucky/main s390x libio-pty-perl s390x 1:1.20-1build3 [31.6 kB]
224s Get:10 http://ftpmaster.internal/ubuntu plucky/main s390x libipc-run-perl all 20231003.0-2 [91.5 kB]
224s Get:11 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB]
224s Get:12 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB]
224s Get:13 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-sphinxdoc all 7.4.7-4 [158 kB]
224s Get:14 http://ftpmaster.internal/ubuntu plucky/main s390x libpq5 s390x 17.0-1 [252 kB]
224s Get:15 http://ftpmaster.internal/ubuntu plucky/main s390x libtime-duration-perl all 1.21-2 [12.3 kB]
224s Get:16 http://ftpmaster.internal/ubuntu plucky/main s390x libtimedate-perl all 2.3300-2 [34.0 kB]
224s Get:17 http://ftpmaster.internal/ubuntu plucky/main s390x libxslt1.1 s390x 1.1.39-0exp1ubuntu1 [169 kB]
224s Get:18 http://ftpmaster.internal/ubuntu plucky/universe s390x moreutils s390x 0.69-1 [57.4 kB]
224s Get:19 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-ydiff all 1.3-1 [18.4 kB]
224s Get:20 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-cdiff all 1.3-1 [1770 B]
224s Get:21 http://ftpmaster.internal/ubuntu plucky/main s390x python3-colorama all 0.4.6-4 [32.1 kB]
224s Get:22 http://ftpmaster.internal/ubuntu plucky/main s390x python3-click all 8.1.7-2 [79.5 kB]
224s Get:23 http://ftpmaster.internal/ubuntu plucky/main s390x python3-six all 1.16.0-7 [13.1 kB]
224s Get:24 http://ftpmaster.internal/ubuntu plucky/main s390x python3-dateutil all 2.9.0-2 [80.3 kB]
224s Get:25 http://ftpmaster.internal/ubuntu plucky/main s390x python3-wcwidth all 0.2.13+dfsg1-1 [26.3 kB]
224s Get:26 http://ftpmaster.internal/ubuntu plucky/main s390x python3-prettytable all 3.10.1-1 [34.0 kB]
224s Get:27 http://ftpmaster.internal/ubuntu plucky/main s390x python3-psutil s390x 5.9.8-2build2 [195 kB]
224s Get:28 http://ftpmaster.internal/ubuntu plucky/main s390x python3-psycopg2 s390x 2.9.9-2 [132 kB]
224s Get:29 http://ftpmaster.internal/ubuntu plucky/main s390x python3-dnspython all 2.6.1-1ubuntu1 [163 kB]
224s Get:30 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-etcd all 0.4.5-4 [31.9 kB]
224s Get:31 http://ftpmaster.internal/ubuntu plucky/universe s390x patroni all 3.3.1-1 [264 kB]
224s Get:32 http://ftpmaster.internal/ubuntu plucky/main s390x sphinx-rtd-theme-common all 3.0.1+dfsg-1 [1012 kB]
224s Get:33 http://ftpmaster.internal/ubuntu plucky/universe s390x patroni-doc all 3.3.1-1 [497 kB]
224s Get:34 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-client-16 s390x 16.4-3 [1294 kB]
224s Get:35 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-16 s390x 16.4-3 [16.3 MB]
226s Get:36 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql all 16+262 [11.8 kB]
226s Get:37 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-parse all 1.20.2-1 [27.0 kB]
226s Get:38 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-parse-type all 0.6.4-1 [23.4 kB]
226s Get:39 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-behave all 1.2.6-6 [98.6 kB]
226s Get:40 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-coverage s390x 7.4.4+dfsg1-0ubuntu2 [147 kB]
226s Preconfiguring packages ...
226s Fetched 36.2 MB in 4s (9498 kB/s)
227s Selecting previously unselected package fonts-lato.
227s (Reading database ... 55517 files and directories currently installed.)
227s Preparing to unpack .../00-fonts-lato_2.015-1_all.deb ...
227s Unpacking fonts-lato (2.015-1) ...
227s Selecting previously unselected package libjson-perl.
227s Preparing to unpack .../01-libjson-perl_4.10000-1_all.deb ...
227s Unpacking libjson-perl (4.10000-1) ...
227s Selecting previously unselected package postgresql-client-common.
227s Preparing to unpack .../02-postgresql-client-common_262_all.deb ...
227s Unpacking postgresql-client-common (262) ...
227s Selecting previously unselected package ssl-cert.
227s Preparing to unpack .../03-ssl-cert_1.1.2ubuntu2_all.deb ...
227s Unpacking ssl-cert (1.1.2ubuntu2) ...
227s Selecting previously unselected package postgresql-common.
227s Preparing to unpack .../04-postgresql-common_262_all.deb ...
227s Adding 'diversion of /usr/bin/pg_config to /usr/bin/pg_config.libpq-dev by postgresql-common'
227s Unpacking postgresql-common (262) ...
227s Selecting previously unselected package etcd-server.
227s Preparing to unpack .../05-etcd-server_3.5.15-7_s390x.deb ...
227s Unpacking etcd-server (3.5.15-7) ...
227s Selecting previously unselected package fonts-font-awesome.
227s Preparing to unpack .../06-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ...
227s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
227s Selecting previously unselected package libio-pty-perl.
227s Preparing to unpack .../07-libio-pty-perl_1%3a1.20-1build3_s390x.deb ...
227s Unpacking libio-pty-perl (1:1.20-1build3) ...
227s Selecting previously unselected package libipc-run-perl.
227s Preparing to unpack .../08-libipc-run-perl_20231003.0-2_all.deb ...
227s Unpacking libipc-run-perl (20231003.0-2) ...
227s Selecting previously unselected package libjs-jquery.
227s Preparing to unpack .../09-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ...
227s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
227s Selecting previously unselected package libjs-underscore.
227s Preparing to unpack .../10-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ...
227s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
227s Selecting previously unselected package libjs-sphinxdoc.
227s Preparing to unpack .../11-libjs-sphinxdoc_7.4.7-4_all.deb ...
227s Unpacking libjs-sphinxdoc (7.4.7-4) ...
227s Selecting previously unselected package libpq5:s390x.
227s Preparing to unpack .../12-libpq5_17.0-1_s390x.deb ...
227s Unpacking libpq5:s390x (17.0-1) ...
227s Selecting previously unselected package libtime-duration-perl.
227s Preparing to unpack .../13-libtime-duration-perl_1.21-2_all.deb ...
227s Unpacking libtime-duration-perl (1.21-2) ...
227s Selecting previously unselected package libtimedate-perl.
227s Preparing to unpack .../14-libtimedate-perl_2.3300-2_all.deb ...
227s Unpacking libtimedate-perl (2.3300-2) ...
227s Selecting previously unselected package libxslt1.1:s390x.
227s Preparing to unpack .../15-libxslt1.1_1.1.39-0exp1ubuntu1_s390x.deb ...
227s Unpacking libxslt1.1:s390x (1.1.39-0exp1ubuntu1) ...
227s Selecting previously unselected package moreutils.
227s Preparing to unpack .../16-moreutils_0.69-1_s390x.deb ...
227s Unpacking moreutils (0.69-1) ...
227s Selecting previously unselected package python3-ydiff.
227s Preparing to unpack .../17-python3-ydiff_1.3-1_all.deb ...
227s Unpacking python3-ydiff (1.3-1) ...
227s Selecting previously unselected package python3-cdiff.
227s Preparing to unpack .../18-python3-cdiff_1.3-1_all.deb ...
227s Unpacking python3-cdiff (1.3-1) ...
227s Selecting previously unselected package python3-colorama.
227s Preparing to unpack .../19-python3-colorama_0.4.6-4_all.deb ...
227s Unpacking python3-colorama (0.4.6-4) ...
227s Selecting previously unselected package python3-click.
227s Preparing to unpack .../20-python3-click_8.1.7-2_all.deb ...
227s Unpacking python3-click (8.1.7-2) ...
227s Selecting previously unselected package python3-six.
227s Preparing to unpack .../21-python3-six_1.16.0-7_all.deb ...
227s Unpacking python3-six (1.16.0-7) ...
227s Selecting previously unselected package python3-dateutil.
227s Preparing to unpack .../22-python3-dateutil_2.9.0-2_all.deb ...
227s Unpacking python3-dateutil (2.9.0-2) ...
227s Selecting previously unselected package python3-wcwidth.
227s Preparing to unpack .../23-python3-wcwidth_0.2.13+dfsg1-1_all.deb ...
227s Unpacking python3-wcwidth (0.2.13+dfsg1-1) ...
227s Selecting previously unselected package python3-prettytable.
227s Preparing to unpack .../24-python3-prettytable_3.10.1-1_all.deb ...
227s Unpacking python3-prettytable (3.10.1-1) ...
227s Selecting previously unselected package python3-psutil.
227s Preparing to unpack .../25-python3-psutil_5.9.8-2build2_s390x.deb ...
227s Unpacking python3-psutil (5.9.8-2build2) ...
227s Selecting previously unselected package python3-psycopg2.
227s Preparing to unpack .../26-python3-psycopg2_2.9.9-2_s390x.deb ...
227s Unpacking python3-psycopg2 (2.9.9-2) ...
227s Selecting previously unselected package python3-dnspython.
227s Preparing to unpack .../27-python3-dnspython_2.6.1-1ubuntu1_all.deb ...
227s Unpacking python3-dnspython (2.6.1-1ubuntu1) ...
228s Selecting previously unselected package python3-etcd.
228s Preparing to unpack .../28-python3-etcd_0.4.5-4_all.deb ...
228s Unpacking python3-etcd (0.4.5-4) ...
228s Selecting previously unselected package patroni.
228s Preparing to unpack .../29-patroni_3.3.1-1_all.deb ...
228s Unpacking patroni (3.3.1-1) ...
228s Selecting previously unselected package sphinx-rtd-theme-common.
228s Preparing to unpack .../30-sphinx-rtd-theme-common_3.0.1+dfsg-1_all.deb ...
228s Unpacking sphinx-rtd-theme-common (3.0.1+dfsg-1) ...
228s Selecting previously unselected package patroni-doc.
228s Preparing to unpack .../31-patroni-doc_3.3.1-1_all.deb ...
228s Unpacking patroni-doc (3.3.1-1) ...
228s Selecting previously unselected package postgresql-client-16.
228s Preparing to unpack .../32-postgresql-client-16_16.4-3_s390x.deb ...
228s Unpacking postgresql-client-16 (16.4-3) ...
228s Selecting previously unselected package postgresql-16.
228s Preparing to unpack .../33-postgresql-16_16.4-3_s390x.deb ...
228s Unpacking postgresql-16 (16.4-3) ...
228s Selecting previously unselected package postgresql.
228s Preparing to unpack .../34-postgresql_16+262_all.deb ...
228s Unpacking postgresql (16+262) ...
228s Selecting previously unselected package python3-parse.
228s Preparing to unpack .../35-python3-parse_1.20.2-1_all.deb ...
228s Unpacking python3-parse (1.20.2-1) ...
228s Selecting previously unselected package python3-parse-type.
228s Preparing to unpack .../36-python3-parse-type_0.6.4-1_all.deb ...
228s Unpacking python3-parse-type (0.6.4-1) ...
228s Selecting previously unselected package python3-behave.
228s Preparing to unpack .../37-python3-behave_1.2.6-6_all.deb ...
228s Unpacking python3-behave (1.2.6-6) ...
228s Selecting previously unselected package python3-coverage.
228s Preparing to unpack .../38-python3-coverage_7.4.4+dfsg1-0ubuntu2_s390x.deb ...
228s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
228s Selecting previously unselected package autopkgtest-satdep.
228s Preparing to unpack .../39-1-autopkgtest-satdep.deb ...
228s Unpacking autopkgtest-satdep (0) ...
228s Setting up postgresql-client-common (262) ...
228s Setting up fonts-lato (2.015-1) ...
228s Setting up libio-pty-perl (1:1.20-1build3) ...
228s Setting up python3-colorama (0.4.6-4) ...
228s Setting up python3-ydiff (1.3-1) ...
228s Setting up libpq5:s390x (17.0-1) ...
228s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
228s Setting up python3-click (8.1.7-2) ...
230s Setting up python3-psutil (5.9.8-2build2) ...
230s Setting up python3-six (1.16.0-7) ...
230s Setting up python3-wcwidth (0.2.13+dfsg1-1) ...
230s Setting up ssl-cert (1.1.2ubuntu2) ...
230s Created symlink '/etc/systemd/system/multi-user.target.wants/ssl-cert.service' → '/usr/lib/systemd/system/ssl-cert.service'.
230s Setting up python3-psycopg2 (2.9.9-2) ...
230s Setting up libipc-run-perl (20231003.0-2) ...
230s Setting up libtime-duration-perl (1.21-2) ...
230s Setting up libtimedate-perl (2.3300-2) ...
230s Setting up python3-dnspython (2.6.1-1ubuntu1) ...
230s Setting up python3-parse (1.20.2-1) ...
230s Setting up libjson-perl (4.10000-1) ...
230s Setting up libxslt1.1:s390x (1.1.39-0exp1ubuntu1) ...
230s Setting up python3-dateutil (2.9.0-2) ...
230s Setting up etcd-server (3.5.15-7) ...
231s info: Selecting UID from range 100 to 999 ...
231s
231s info: Selecting GID from range 100 to 999 ...
231s info: Adding system user `etcd' (UID 107) ...
231s info: Adding new group `etcd' (GID 111) ...
231s info: Adding new user `etcd' (UID 107) with group `etcd' ...
231s info: Creating home directory `/var/lib/etcd/' ...
231s Created symlink '/etc/systemd/system/etcd2.service' → '/usr/lib/systemd/system/etcd.service'.
231s Created symlink '/etc/systemd/system/multi-user.target.wants/etcd.service' → '/usr/lib/systemd/system/etcd.service'.
231s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
231s Setting up python3-prettytable (3.10.1-1) ...
232s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
232s Setting up sphinx-rtd-theme-common (3.0.1+dfsg-1) ...
232s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
232s Setting up moreutils (0.69-1) ...
232s Setting up python3-etcd (0.4.5-4) ...
232s Setting up postgresql-client-16 (16.4-3) ...
232s update-alternatives: using /usr/share/postgresql/16/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode
232s Setting up python3-cdiff (1.3-1) ...
232s Setting up python3-parse-type (0.6.4-1) ...
232s Setting up postgresql-common (262) ...
232s
232s Creating config file /etc/postgresql-common/createcluster.conf with new version
232s Building PostgreSQL dictionaries from installed myspell/hunspell packages...
232s Removing obsolete dictionary files:
233s Created symlink '/etc/systemd/system/multi-user.target.wants/postgresql.service' → '/usr/lib/systemd/system/postgresql.service'.
233s Setting up libjs-sphinxdoc (7.4.7-4) ...
233s Setting up python3-behave (1.2.6-6) ...
234s /usr/lib/python3/dist-packages/behave/formatter/ansi_escapes.py:57: SyntaxWarning: invalid escape sequence '\['
234s   _ANSI_ESCAPE_PATTERN = re.compile(u"\x1b\[\d+[mA]", re.UNICODE)
234s /usr/lib/python3/dist-packages/behave/matchers.py:267: SyntaxWarning: invalid escape sequence '\d'
234s   """Registers a custom type that will be available to "parse"
234s Setting up patroni (3.3.1-1) ...
234s Created symlink '/etc/systemd/system/multi-user.target.wants/patroni.service' → '/usr/lib/systemd/system/patroni.service'.
234s Setting up postgresql-16 (16.4-3) ...
235s Creating new PostgreSQL cluster 16/main ...
235s /usr/lib/postgresql/16/bin/initdb -D /var/lib/postgresql/16/main --auth-local peer --auth-host scram-sha-256 --no-instructions
235s The files belonging to this database system will be owned by user "postgres".
235s This user must also own the server process.
235s
235s The database cluster will be initialized with locale "C.UTF-8".
235s The default database encoding has accordingly been set to "UTF8".
235s The default text search configuration will be set to "english".
235s
235s Data page checksums are disabled.
235s
235s fixing permissions on existing directory /var/lib/postgresql/16/main ... ok
235s creating subdirectories ... ok
235s selecting dynamic shared memory implementation ... posix
235s selecting default max_connections ... 100
235s selecting default shared_buffers ... 128MB
235s selecting default time zone ... Etc/UTC
235s creating configuration files ... ok
235s running bootstrap script ... ok
235s performing post-bootstrap initialization ... ok
235s syncing data to disk ... ok
238s Setting up patroni-doc (3.3.1-1) ...
238s Setting up postgresql (16+262) ...
238s Setting up autopkgtest-satdep (0) ...
238s Processing triggers for man-db (2.12.1-3) ...
239s Processing triggers for libc-bin (2.40-1ubuntu3) ...
242s (Reading database ... 58728 files and directories currently installed.)
242s Removing autopkgtest-satdep (0) ...
242s autopkgtest [11:33:32]: test acceptance-etcd3: debian/tests/acceptance etcd3
242s autopkgtest [11:33:32]: test acceptance-etcd3: [-----------------------
243s dpkg-architecture: warning: cannot determine CC system type, falling back to default (native compilation)
243s ++ ls -1r /usr/lib/postgresql/
243s + for PG_VERSION in $(ls -1r /usr/lib/postgresql/)
243s + '[' 16 == 10 -o 16 == 11 ']'
243s + echo '### PostgreSQL 16 acceptance-etcd3 ###'
243s + bash -c 'set -o pipefail; ETCD_UNSUPPORTED_ARCH=s390x DCS=etcd3 PATH=/usr/lib/postgresql/16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin behave | ts'
243s ### PostgreSQL 16 acceptance-etcd3 ###
244s Nov 13 11:33:33 Feature: basic replication # features/basic_replication.feature:1
244s Nov 13 11:33:33   We should check that the basic bootstrapping, replication and failover works.
244s Nov 13 11:33:33   Scenario: check replication of a single table # features/basic_replication.feature:4
244s Nov 13 11:33:33     Given I start postgres0 # features/steps/basic_replication.py:8
246s Nov 13 11:33:36     Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
247s Nov 13 11:33:37     And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
247s Nov 13 11:33:37     When I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "synchronous_mode": true} # features/steps/patroni_api.py:71
247s Nov 13 11:33:37     Then I receive a response code 200 # features/steps/patroni_api.py:98
247s Nov 13 11:33:37     When I start postgres1 # features/steps/basic_replication.py:8
250s Nov 13 11:33:40     And I configure and start postgres2 with a tag replicatefrom postgres0 # features/steps/cascading_replication.py:7
253s Nov 13 11:33:43     And "sync" key in DCS has leader=postgres0 after 20 seconds # features/steps/cascading_replication.py:23
253s Nov 13 11:33:43     And I add the table foo to postgres0 # features/steps/basic_replication.py:54
253s Nov 13 11:33:43     Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
254s Nov 13 11:33:44     Then table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93
258s Nov 13 11:33:48
258s Nov 13 11:33:48   Scenario: check restart of sync replica # features/basic_replication.feature:17
258s Nov 13 11:33:48     Given I shut down postgres2 # features/steps/basic_replication.py:29
259s Nov 13 11:33:49     Then "sync" key in DCS has sync_standby=postgres1 after 5 seconds # features/steps/cascading_replication.py:23
259s Nov 13 11:33:49     When I start postgres2 # features/steps/basic_replication.py:8
262s Nov 13 11:33:52     And I shut down postgres1 # features/steps/basic_replication.py:29
265s Nov 13 11:33:55     Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23
266s Nov 13 11:33:56     When I start postgres1 # features/steps/basic_replication.py:8
269s Nov 13 11:33:59     Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
269s Nov 13 11:33:59     And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
270s Nov 13 11:33:59     And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142
270s Nov 13 11:34:00
270s Nov 13 11:34:00   Scenario: check stuck sync replica # features/basic_replication.feature:28
270s Nov 13 11:34:00     Given I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": true, "maximum_lag_on_syncnode": 15000000, "postgresql": {"parameters": {"synchronous_commit": "remote_apply"}}} # features/steps/patroni_api.py:71
270s Nov 13 11:34:00     Then I receive a response code 200 # features/steps/patroni_api.py:98
270s Nov 13 11:34:00     And I create table on postgres0 # features/steps/basic_replication.py:73
270s Nov 13 11:34:00     And table mytest is present on postgres1 after 2 seconds # features/steps/basic_replication.py:93
271s Nov 13 11:34:01     And table mytest is present on postgres2 after 2 seconds # features/steps/basic_replication.py:93
271s Nov 13 11:34:01     When I pause wal replay on postgres2 # features/steps/basic_replication.py:64
271s Nov 13 11:34:01     And I load data on postgres0 # features/steps/basic_replication.py:84
271s Nov 13 11:34:01     Then "sync" key in DCS has sync_standby=postgres1 after 15 seconds # features/steps/cascading_replication.py:23
275s Nov 13 11:34:05     And I resume wal replay on postgres2 # features/steps/basic_replication.py:64
275s Nov 13 11:34:05     And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
275s Nov 13 11:34:05     And Status code on GET http://127.0.0.1:8010/async is 200 after 3 seconds # features/steps/patroni_api.py:142
275s Nov 13 11:34:05     When I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": null, "maximum_lag_on_syncnode": -1, "postgresql": {"parameters": {"synchronous_commit": "on"}}} # features/steps/patroni_api.py:71
275s Nov 13 11:34:05     Then I receive a response code 200 # features/steps/patroni_api.py:98
275s Nov 13 11:34:05     And I drop table on postgres0 # features/steps/basic_replication.py:73
275s Nov 13 11:34:05
275s Nov 13 11:34:05   Scenario: check multi sync replication # features/basic_replication.feature:44
275s Nov 13 11:34:05     Given I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 2} # features/steps/patroni_api.py:71
275s Nov 13 11:34:05     Then I receive a response code 200 # features/steps/patroni_api.py:98
275s Nov 13 11:34:05     Then "sync" key in DCS has sync_standby=postgres1,postgres2 after 10 seconds # features/steps/cascading_replication.py:23
279s Nov 13 11:34:09     And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
279s Nov 13 11:34:09     And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
279s Nov 13 11:34:09     When I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 1} # features/steps/patroni_api.py:71
279s Nov 13 11:34:09     Then I receive a response code 200 # features/steps/patroni_api.py:98
279s Nov 13 11:34:09     And I shut down postgres1 # features/steps/basic_replication.py:29
282s Nov 13 11:34:12     Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23
283s Nov 13 11:34:13     When I start postgres1 # features/steps/basic_replication.py:8
287s Nov 13 11:34:16     Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
288s Nov 13 11:34:17     And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
288s Nov 13 11:34:18     And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142
288s Nov 13 11:34:18
288s Nov 13 11:34:18   Scenario: check the basic failover in synchronous mode # features/basic_replication.feature:59
288s Nov 13 11:34:18     Given I run patronictl.py pause batman # features/steps/patroni_api.py:86
290s Nov 13 11:34:19     Then I receive a response returncode 0 # features/steps/patroni_api.py:98
290s Nov 13 11:34:19     When I sleep for 2 seconds # features/steps/patroni_api.py:39
291s Nov 13 11:34:21     And I shut down postgres0 # features/steps/basic_replication.py:29
292s Nov 13 11:34:22     And I run patronictl.py resume batman # features/steps/patroni_api.py:86
294s Nov 13 11:34:24     Then I receive a response returncode 0 # features/steps/patroni_api.py:98
294s Nov 13 11:34:24     And postgres2 role is the primary after 24 seconds # features/steps/basic_replication.py:105
312s Nov 13 11:34:42     And Response on GET http://127.0.0.1:8010/history contains recovery after 10 seconds # features/steps/patroni_api.py:156
314s Nov 13 11:34:44     And
there is a postgres2_cb.log with "on_role_change master batman" in postgres2 data directory # features/steps/cascading_replication.py:12 314s Nov 13 11:34:44 When I issue a PATCH request to http://127.0.0.1:8010/config with {"synchronous_mode": null, "master_start_timeout": 0} # features/steps/patroni_api.py:71 314s Nov 13 11:34:44 Then I receive a response code 200 # features/steps/patroni_api.py:98 314s Nov 13 11:34:44 When I add the table bar to postgres2 # features/steps/basic_replication.py:54 314s Nov 13 11:34:44 Then table bar is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 317s Nov 13 11:34:47 And Response on GET http://127.0.0.1:8010/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156 317s Nov 13 11:34:47 317s Nov 13 11:34:47 Scenario: check rejoin of the former primary with pg_rewind # features/basic_replication.feature:75 317s Nov 13 11:34:47 Given I add the table splitbrain to postgres0 # features/steps/basic_replication.py:54 317s Nov 13 11:34:47 And I start postgres0 # features/steps/basic_replication.py:8 317s Nov 13 11:34:47 Then postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 321s Nov 13 11:34:51 When I add the table buz to postgres2 # features/steps/basic_replication.py:54 321s Nov 13 11:34:51 Then table buz is present on postgres0 after 20 seconds # features/steps/basic_replication.py:93 324s Nov 13 11:34:54 324s Nov 13 11:34:54 @reject-duplicate-name 324s Nov 13 11:34:54 Scenario: check graceful rejection when two nodes have the same name # features/basic_replication.feature:83 324s Nov 13 11:34:54 Given I start duplicate postgres0 on port 8011 # features/steps/basic_replication.py:13 326s Nov 13 11:34:56 Then there is one of ["Can't start; there is already a node named 'postgres0' running"] CRITICAL in the dup-postgres0 patroni log after 5 seconds # features/steps/basic_replication.py:121 330s Nov 13 11:35:00 330s Nov 13 11:35:00 
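The scenarios above drive Patroni's dynamic configuration through `PATCH /config` requests against the REST API on 127.0.0.1:8008. A minimal sketch of issuing such a request with only the standard library, assuming a running Patroni cluster on that address (the payload below is copied from the "check stuck sync replica" scenario; `build_config_patch` is a hypothetical helper, not part of Patroni):

```python
# Sketch: build the kind of PATCH /config request the test steps above issue.
# Assumes a Patroni REST API at 127.0.0.1:8008; the helper name is ours.
import json
import urllib.request


def build_config_patch(url: str, changes: dict) -> urllib.request.Request:
    """Build a PATCH request merging `changes` into Patroni's dynamic config."""
    return urllib.request.Request(
        url,
        data=json.dumps(changes).encode(),
        headers={"Content-Type": "application/json"},
        method="PATCH",
    )


# Same payload as the "check stuck sync replica" scenario.
req = build_config_patch(
    "http://127.0.0.1:8008/config",
    {"pause": True, "maximum_lag_on_syncnode": 15000000,
     "postgresql": {"parameters": {"synchronous_commit": "remote_apply"}}},
)
# With a live cluster, urllib.request.urlopen(req) should answer HTTP 200,
# matching the "Then I receive a response code 200" steps in the log.
```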
Feature: cascading replication # features/cascading_replication.feature:1
330s Nov 13 11:35:00 We should check that patroni can do base backup and streaming from the replica
330s Nov 13 11:35:00 Scenario: check a base backup and streaming replication from a replica # features/cascading_replication.feature:4
330s Nov 13 11:35:00 Given I start postgres0 # features/steps/basic_replication.py:8
333s Nov 13 11:35:03 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
334s Nov 13 11:35:04 And I configure and start postgres1 with a tag clonefrom true # features/steps/cascading_replication.py:7
337s Nov 13 11:35:07 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
338s Nov 13 11:35:08 And I create label with "postgres0" in postgres0 data directory # features/steps/cascading_replication.py:18
338s Nov 13 11:35:08 And I create label with "postgres1" in postgres1 data directory # features/steps/cascading_replication.py:18
338s Nov 13 11:35:08 And "members/postgres1" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23
338s Nov 13 11:35:08 And I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7
342s Nov 13 11:35:11 Then replication works from postgres0 to postgres2 after 30 seconds # features/steps/basic_replication.py:112
343s Nov 13 11:35:13 And there is a label with "postgres1" in postgres2 data directory # features/steps/cascading_replication.py:12
349s Nov 13 11:35:19
349s SKIP FEATURE citus: Citus extenstion isn't available
349s SKIP Scenario check that worker cluster is registered in the coordinator: Citus extenstion isn't available
349s SKIP Scenario coordinator failover updates pg_dist_node: Citus extenstion isn't available
349s SKIP Scenario worker switchover doesn't break client queries on the coordinator: Citus extenstion isn't available
349s SKIP Scenario worker primary restart doesn't break client queries on the coordinator: Citus extenstion isn't available
349s SKIP Scenario check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node: Citus extenstion isn't available
349s Nov 13 11:35:19 Feature: citus # features/citus.feature:1
349s Nov 13 11:35:19 We should check that coordinator discovers and registers workers and clients don't have errors when worker cluster switches over
349s Nov 13 11:35:19 Scenario: check that worker cluster is registered in the coordinator # features/citus.feature:4
349s Nov 13 11:35:19 Given I start postgres0 in citus group 0 # None
349s Nov 13 11:35:19 And I start postgres2 in citus group 1 # None
349s Nov 13 11:35:19 Then postgres0 is a leader in a group 0 after 10 seconds # None
349s Nov 13 11:35:19 And postgres2 is a leader in a group 1 after 10 seconds # None
349s Nov 13 11:35:19 When I start postgres1 in citus group 0 # None
349s Nov 13 11:35:19 And I start postgres3 in citus group 1 # None
349s Nov 13 11:35:19 Then replication works from postgres0 to postgres1 after 15 seconds # None
349s Nov 13 11:35:19 Then replication works from postgres2 to postgres3 after 15 seconds # None
349s Nov 13 11:35:19 And postgres0 is registered in the postgres0 as the primary in group 0 after 5 seconds # None
349s Nov 13 11:35:19 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
349s Nov 13 11:35:19
349s Nov 13 11:35:19 Scenario: coordinator failover updates pg_dist_node # features/citus.feature:16
349s Nov 13 11:35:19 Given I run patronictl.py failover batman --group 0 --candidate postgres1 --force # None
349s Nov 13 11:35:19 Then postgres1 role is the primary after 10 seconds # None
349s Nov 13 11:35:19 And "members/postgres0" key in a group 0 in DCS has state=running after 15 seconds # None
349s Nov 13 11:35:19 And replication works from postgres1 to postgres0 after 15 seconds # None
349s Nov 13 11:35:19 And postgres1 is registered in the postgres2 as the primary in group 0 after 5 seconds # None
349s Nov 13 11:35:19 And "sync" key in a group 0 in DCS has sync_standby=postgres0 after 15 seconds # None
349s Nov 13 11:35:19 When I run patronictl.py switchover batman --group 0 --candidate postgres0 --force # None
349s Nov 13 11:35:19 Then postgres0 role is the primary after 10 seconds # None
349s Nov 13 11:35:19 And replication works from postgres0 to postgres1 after 15 seconds # None
349s Nov 13 11:35:19 And postgres0 is registered in the postgres2 as the primary in group 0 after 5 seconds # None
349s Nov 13 11:35:19 And "sync" key in a group 0 in DCS has sync_standby=postgres1 after 15 seconds # None
349s Nov 13 11:35:19
349s Nov 13 11:35:19 Scenario: worker switchover doesn't break client queries on the coordinator # features/citus.feature:29
349s Nov 13 11:35:19 Given I create a distributed table on postgres0 # None
349s Nov 13 11:35:19 And I start a thread inserting data on postgres0 # None
349s Nov 13 11:35:19 When I run patronictl.py switchover batman --group 1 --force # None
349s Nov 13 11:35:19 Then I receive a response returncode 0 # None
349s Nov 13 11:35:19 And postgres3 role is the primary after 10 seconds # None
349s Nov 13 11:35:19 And "members/postgres2" key in a group 1 in DCS has state=running after 15 seconds # None
349s Nov 13 11:35:19 And replication works from postgres3 to postgres2 after 15 seconds # None
349s Nov 13 11:35:19 And postgres3 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
349s Nov 13 11:35:19 And "sync" key in a group 1 in DCS has sync_standby=postgres2 after 15 seconds # None
349s Nov 13 11:35:19 And a thread is still alive # None
349s Nov 13 11:35:19 When I run patronictl.py switchover batman --group 1 --force # None
349s Nov 13 11:35:19 Then I receive a response returncode 0 # None
349s Nov 13 11:35:19 And postgres2 role is the primary after 10 seconds # None
349s Nov 13 11:35:19 And replication works from postgres2 to postgres3 after 15 seconds # None
349s Nov 13 11:35:19 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
349s Nov 13 11:35:19 And "sync" key in a group 1 in DCS has sync_standby=postgres3 after 15 seconds # None
349s Nov 13 11:35:19 And a thread is still alive # None
349s Nov 13 11:35:19 When I stop a thread # None
349s Nov 13 11:35:19 Then a distributed table on postgres0 has expected rows # None
349s Nov 13 11:35:19
349s Nov 13 11:35:19 Scenario: worker primary restart doesn't break client queries on the coordinator # features/citus.feature:50
349s Nov 13 11:35:19 Given I cleanup a distributed table on postgres0 # None
349s Nov 13 11:35:19 And I start a thread inserting data on postgres0 # None
349s Nov 13 11:35:19 When I run patronictl.py restart batman postgres2 --group 1 --force # None
349s Nov 13 11:35:19 Then I receive a response returncode 0 # None
349s Nov 13 11:35:19 And postgres2 role is the primary after 10 seconds # None
349s Nov 13 11:35:19 And replication works from postgres2 to postgres3 after 15 seconds # None
349s Nov 13 11:35:19 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
349s Nov 13 11:35:19 And a thread is still alive # None
349s Nov 13 11:35:19 When I stop a thread # None
349s Nov 13 11:35:19 Then a distributed table on postgres0 has expected rows # None
349s Nov 13 11:35:19
349s Nov 13 11:35:19 Scenario: check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node # features/citus.feature:62
349s Nov 13 11:35:19 Given I start postgres4 in citus group 2 # None
349s Nov 13 11:35:19 Then postgres4 is a leader in a group 2 after 10 seconds # None
349s Nov 13 11:35:19 And "members/postgres4" key in a group 2 in DCS has role=master after 3 seconds # None
349s Nov 13 11:35:19 When I run patronictl.py edit-config batman --group 2 -s ttl=20 --force # None
349s Nov 13 11:35:19 Then I receive a response returncode 0 # None
349s Nov 13 11:35:19 And I receive a response output "+ttl: 20" # None
349s Nov 13 11:35:19 Then postgres4 is registered in the postgres2 as the primary in group 2 after 5 seconds # None
349s Nov 13 11:35:19 When I shut down postgres4 # None
349s Nov 13 11:35:19 Then there is a transaction in progress on postgres0 changing pg_dist_node after 5 seconds # None
349s Nov 13 11:35:19 When I run patronictl.py restart batman postgres2 --group 1 --force # None
349s Nov 13 11:35:19 Then a transaction finishes in 20 seconds # None
349s Nov 13 11:35:19
349s Nov 13 11:35:19 Feature: custom bootstrap # features/custom_bootstrap.feature:1
349s Nov 13 11:35:19 We should check that patroni can bootstrap a new cluster from a backup
349s Nov 13 11:35:19 Scenario: clone existing cluster using pg_basebackup # features/custom_bootstrap.feature:4
349s Nov 13 11:35:19 Given I start postgres0 # features/steps/basic_replication.py:8
353s Nov 13 11:35:23 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
354s Nov 13 11:35:24 When I add the table foo to postgres0 # features/steps/basic_replication.py:54
354s Nov 13 11:35:24 And I start postgres1 in a cluster batman1 as a clone of postgres0 # features/steps/custom_bootstrap.py:6
358s Nov 13 11:35:28 Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16
359s Nov 13 11:35:29 Then table foo is present on postgres1 after 10 seconds # features/steps/basic_replication.py:93
359s Nov 13 11:35:29
359s Nov 13 11:35:29 Scenario: make a backup and do a restore into a new cluster # features/custom_bootstrap.feature:12
359s Nov 13 11:35:29 Given I add the table bar to postgres1 # features/steps/basic_replication.py:54
359s Nov 13 11:35:29 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25
359s Nov 13 11:35:29 When I start postgres2 in a cluster batman2 from backup # features/steps/custom_bootstrap.py:11
363s Nov 13 11:35:33 Then postgres2 is a leader of
batman2 after 30 seconds # features/steps/custom_bootstrap.py:16
363s Nov 13 11:35:33 And table bar is present on postgres2 after 10 seconds # features/steps/basic_replication.py:93
369s Nov 13 11:35:39
369s Nov 13 11:35:39 Feature: dcs failsafe mode # features/dcs_failsafe_mode.feature:1
369s Nov 13 11:35:39 We should check the basic dcs failsafe mode functioning
369s Nov 13 11:35:39 Scenario: check failsafe mode can be successfully enabled # features/dcs_failsafe_mode.feature:4
369s Nov 13 11:35:39 Given I start postgres0 # features/steps/basic_replication.py:8
372s Nov 13 11:35:42 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
373s Nov 13 11:35:43 Then "config" key in DCS has ttl=30 after 10 seconds # features/steps/cascading_replication.py:23
373s Nov 13 11:35:43 When I issue a PATCH request to http://127.0.0.1:8008/config with {"loop_wait": 2, "ttl": 20, "retry_timeout": 3, "failsafe_mode": true} # features/steps/patroni_api.py:71
373s Nov 13 11:35:43 Then I receive a response code 200 # features/steps/patroni_api.py:98
373s Nov 13 11:35:43 And Response on GET http://127.0.0.1:8008/failsafe contains postgres0 after 10 seconds # features/steps/patroni_api.py:156
375s Nov 13 11:35:44 When I issue a GET request to http://127.0.0.1:8008/failsafe # features/steps/patroni_api.py:61
375s Nov 13 11:35:45 Then I receive a response code 200 # features/steps/patroni_api.py:98
375s Nov 13 11:35:45 And I receive a response postgres0 http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:98
375s Nov 13 11:35:45 When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}},"slots":{"dcs_slot_1": null,"postgres0":null}} # features/steps/patroni_api.py:71
375s Nov 13 11:35:45 Then I receive a response code 200 # features/steps/patroni_api.py:98
375s Nov 13 11:35:45 When I issue a PATCH request to http://127.0.0.1:8008/config with {"slots": {"dcs_slot_0": {"type": "logical", "database": "postgres", "plugin": "test_decoding"}}} # features/steps/patroni_api.py:71
375s Nov 13 11:35:45 Then I receive a response code 200 # features/steps/patroni_api.py:98
375s Nov 13 11:35:45
375s Nov 13 11:35:45 @dcs-failsafe
375s Nov 13 11:35:45 Scenario: check one-node cluster is functioning while DCS is down # features/dcs_failsafe_mode.feature:20
375s Nov 13 11:35:45 Given DCS is down # None
375s Nov 13 11:35:45 Then Response on GET http://127.0.0.1:8008/primary contains failsafe_mode_is_active after 12 seconds # None
375s Nov 13 11:35:45 And postgres0 role is the primary after 10 seconds # None
375s Nov 13 11:35:45
375s Nov 13 11:35:45 @dcs-failsafe
375s Nov 13 11:35:45 Scenario: check new replica isn't promoted when leader is down and DCS is up # features/dcs_failsafe_mode.feature:26
375s Nov 13 11:35:45 Given DCS is up # None
375s Nov 13 11:35:45 When I do a backup of postgres0 # None
375s Nov 13 11:35:45 And I shut down postgres0 # None
375s Nov 13 11:35:45 When I start postgres1 in a cluster batman from backup with no_leader # None
375s Nov 13 11:35:45 Then postgres1 role is the replica after 12 seconds # None
375s Nov 13 11:35:45
375s Nov 13 11:35:45 Scenario: check leader and replica are both in /failsafe key after leader is back # features/dcs_failsafe_mode.feature:33
375s Nov 13 11:35:45 Given I start postgres0 # features/steps/basic_replication.py:8
375s Nov 13 11:35:45 And I start postgres1 # features/steps/basic_replication.py:8
375s SKIP Scenario check one-node cluster is functioning while DCS is down: it is not possible to control state of etcd3 from tests
375s SKIP Scenario check new replica isn't promoted when leader is down and DCS is up: it is not possible to control state of etcd3 from tests
378s Nov 13 11:35:48 Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
378s Nov 13 11:35:48 And "members/postgres1" key in DCS has state=running after 2 seconds # features/steps/cascading_replication.py:23
379s Nov 13 11:35:49 And Response on GET http://127.0.0.1:8009/failsafe contains postgres1 after 10 seconds # features/steps/patroni_api.py:156
379s Nov 13 11:35:49 When I issue a GET request to http://127.0.0.1:8009/failsafe # features/steps/patroni_api.py:61
379s Nov 13 11:35:49 SKIP Scenario check leader and replica are functioning while DCS is down: it is not possible to control state of etcd3 from tests
379s SKIP Scenario check primary is demoted when one replica is shut down and DCS is down: it is not possible to control state of etcd3 from tests
379s SKIP Scenario check known replica is promoted when leader is down and DCS is up: it is not possible to control state of etcd3 from tests
379s SKIP Scenario scale to three-node cluster: it is not possible to control state of etcd3 from tests
379s SKIP Scenario make sure permanent slots exist on replicas: it is not possible to control state of etcd3 from tests
379s SKIP Scenario check three-node cluster is functioning while DCS is down: it is not possible to control state of etcd3 from tests
379s SKIP Scenario check that permanent slots are in sync between nodes while DCS is down: it is not possible to control state of etcd3 from tests
379s Then I receive a response code 200 # features/steps/patroni_api.py:98
379s Nov 13 11:35:49 And I receive a response postgres0 http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:98
379s Nov 13 11:35:49 And I receive a response postgres1 http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:98
379s Nov 13 11:35:49
379s Nov 13 11:35:49 @dcs-failsafe @slot-advance
379s Nov 13 11:35:49 Scenario: check leader and replica are functioning while DCS is down # features/dcs_failsafe_mode.feature:46
379s Nov 13 11:35:49 Given I get all changes from physical slot dcs_slot_1 on postgres0 # None
379s Nov 13 11:35:49 Then physical slot dcs_slot_1 is in sync between postgres0 and postgres1 after 10 seconds # None
379s Nov 13 11:35:49 And logical slot dcs_slot_0 is in sync between postgres0 and postgres1 after 10 seconds # None
379s Nov 13 11:35:49 And DCS is down # None
379s Nov 13 11:35:49 Then Response on GET http://127.0.0.1:8008/primary contains failsafe_mode_is_active after 12 seconds # None
379s Nov 13 11:35:49 Then postgres0 role is the primary after 10 seconds # None
379s Nov 13 11:35:49 And postgres1 role is the replica after 2 seconds # None
379s Nov 13 11:35:49 And replication works from postgres0 to postgres1 after 10 seconds # None
379s Nov 13 11:35:49 When I get all changes from logical slot dcs_slot_0 on postgres0 # None
379s Nov 13 11:35:49 And I get all changes from physical slot dcs_slot_1 on postgres0 # None
379s Nov 13 11:35:49 Then logical slot dcs_slot_0 is in sync between postgres0 and postgres1 after 20 seconds # None
379s Nov 13 11:35:49 And physical slot dcs_slot_1 is in sync between postgres0 and postgres1 after 10 seconds # None
379s Nov 13 11:35:49
379s Nov 13 11:35:49 @dcs-failsafe
379s Nov 13 11:35:49 Scenario: check primary is demoted when one replica is shut down and DCS is down # features/dcs_failsafe_mode.feature:61
379s Nov 13 11:35:49 Given DCS is down # None
379s Nov 13 11:35:49 And I kill postgres1 # None
379s Nov 13 11:35:49 And I kill postmaster on postgres1 # None
379s Nov 13 11:35:49 Then postgres0 role is the replica after 12 seconds # None
379s Nov 13 11:35:49
379s Nov 13 11:35:49 @dcs-failsafe
379s Nov 13 11:35:49 Scenario: check known replica is promoted when leader is down and DCS is up # features/dcs_failsafe_mode.feature:68
379s Nov 13 11:35:49 Given I kill postgres0 # None
379s Nov 13 11:35:49 And I shut down postmaster on postgres0 # None
379s Nov 13 11:35:49 And DCS is up # None
379s Nov 13 11:35:49 When I start postgres1 # None
379s Nov 13 11:35:49 Then "members/postgres1" key in DCS has state=running after 10 seconds # None
379s Nov 13 11:35:49 And postgres1 role is the primary after 25 seconds # None
379s Nov 13 11:35:49
379s Nov 13 11:35:49 @dcs-failsafe
379s Nov 13 11:35:49 Scenario: scale to three-node cluster # features/dcs_failsafe_mode.feature:77
379s Nov 13 11:35:49 Given I start postgres0 # None
379s Nov 13 11:35:49 And I start postgres2 # None
379s Nov 13 11:35:49 Then "members/postgres2" key in DCS has state=running after 10 seconds # None
379s Nov 13 11:35:49 And "members/postgres0" key in DCS has state=running after 20 seconds # None
379s Nov 13 11:35:49 And Response on GET http://127.0.0.1:8008/failsafe contains postgres2 after 10 seconds # None
379s Nov 13 11:35:49 And replication works from postgres1 to postgres0 after 10 seconds # None
379s Nov 13 11:35:49 And replication works from postgres1 to postgres2 after 10 seconds # None
379s Nov 13 11:35:49
379s Nov 13 11:35:49 @dcs-failsafe @slot-advance
379s Nov 13 11:35:49 Scenario: make sure permanent slots exist on replicas # features/dcs_failsafe_mode.feature:88
379s Nov 13 11:35:49 Given I issue a PATCH request to http://127.0.0.1:8009/config with {"slots":{"dcs_slot_0":null,"dcs_slot_2":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # None
379s Nov 13 11:35:49 Then logical slot dcs_slot_2 is in sync between postgres1 and postgres0 after 20 seconds # None
379s Nov 13 11:35:49 And logical slot dcs_slot_2 is in sync between postgres1 and postgres2 after 20 seconds # None
379s Nov 13 11:35:49 When I get all changes from physical slot dcs_slot_1 on postgres1 # None
379s Nov 13 11:35:49 Then physical slot dcs_slot_1 is in sync between postgres1 and postgres0 after 10 seconds # None
379s Nov 13 11:35:49 And physical slot dcs_slot_1 is in sync between postgres1 and postgres2 after 10 seconds # None
379s Nov 13 11:35:49 And physical slot postgres0 is in sync between postgres1 and postgres2 after 10 seconds # None
379s Nov 13 11:35:49
379s Nov 13 11:35:49 @dcs-failsafe
379s Nov 13 11:35:49 Scenario: check three-node cluster is functioning while DCS is down # features/dcs_failsafe_mode.feature:98
379s Nov 13 11:35:49 Given DCS is down # None
379s Nov 13 11:35:49 Then Response on GET http://127.0.0.1:8009/primary contains failsafe_mode_is_active after 12 seconds # None
379s Nov 13 11:35:49 Then postgres1 role is the primary after 10 seconds # None
379s Nov 13 11:35:49 And postgres0 role is the replica after 2 seconds # None
379s Nov 13 11:35:49 And postgres2 role is the replica after 2 seconds # None
383s Nov 13 11:35:53
383s Nov 13 11:35:53 @dcs-failsafe @slot-advance
383s Nov 13 11:35:53 Scenario: check that permanent slots are in sync between nodes while DCS is down # features/dcs_failsafe_mode.feature:107
383s Nov 13 11:35:53 Given replication works from postgres1 to postgres0 after 10 seconds # None
383s Nov 13 11:35:53 And replication works from postgres1 to postgres2 after 10 seconds # None
383s Nov 13 11:35:53 When I get all changes from logical slot dcs_slot_2 on postgres1 # None
383s Nov 13 11:35:53 And I get all changes from physical slot dcs_slot_1 on postgres1 # None
383s Nov 13 11:35:53 Then logical slot dcs_slot_2 is in sync between postgres1 and postgres0 after 20 seconds # None
383s Nov 13 11:35:53 And logical slot dcs_slot_2 is in sync between postgres1 and postgres2 after 20 seconds # None
383s Nov 13 11:35:53 And physical slot dcs_slot_1 is in sync between postgres1 and postgres0 after 10 seconds # None
383s Nov 13 11:35:53 And physical slot dcs_slot_1 is in sync between postgres1 and postgres2 after 10 seconds # None
383s Nov 13 11:35:53 And physical slot postgres0 is in sync between postgres1 and postgres2 after 10 seconds # None
383s Nov 13 11:35:53
383s Nov 13 11:35:53 Feature: ignored slots # features/ignored_slots.feature:1
383s Nov 13 11:35:53
383s Nov 13 11:35:53 Scenario: check ignored slots aren't removed on failover/switchover # features/ignored_slots.feature:2
383s Nov 13 11:35:53 Given I start postgres1 # features/steps/basic_replication.py:8
386s Nov 13 11:35:56 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
386s
Nov 13 11:35:56 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
386s Nov 13 11:35:56 When I issue a PATCH request to http://127.0.0.1:8009/config with {"ignore_slots": [{"name": "unmanaged_slot_0", "database": "postgres", "plugin": "test_decoding", "type": "logical"}, {"name": "unmanaged_slot_1", "database": "postgres", "plugin": "test_decoding"}, {"name": "unmanaged_slot_2", "database": "postgres"}, {"name": "unmanaged_slot_3"}], "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71
386s Nov 13 11:35:56 Then I receive a response code 200 # features/steps/patroni_api.py:98
386s Nov 13 11:35:56 And Response on GET http://127.0.0.1:8009/config contains ignore_slots after 10 seconds # features/steps/patroni_api.py:156
387s Nov 13 11:35:57 When I shut down postgres1 # features/steps/basic_replication.py:29
389s Nov 13 11:35:59 And I start postgres1 # features/steps/basic_replication.py:8
392s Nov 13 11:36:02 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
393s Nov 13 11:36:03 And "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23
394s Nov 13 11:36:04 And postgres1 role is the primary after 20 seconds # features/steps/basic_replication.py:105
394s Nov 13 11:36:04 When I create a logical replication slot unmanaged_slot_0 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
394s Nov 13 11:36:04 And I create a logical replication slot unmanaged_slot_1 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
394s Nov 13 11:36:04 And I create a logical replication slot unmanaged_slot_2 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
394s Nov 13 11:36:04 And I create a logical replication slot unmanaged_slot_3 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
394s Nov 13 11:36:04 And I create a logical replication slot dummy_slot on postgres1 with the test_decoding plugin # features/steps/slots.py:8
394s Nov 13 11:36:04 Then postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
394s Nov 13 11:36:04 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
394s Nov 13 11:36:04 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
394s Nov 13 11:36:04 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
394s Nov 13 11:36:04 When I start postgres0 # features/steps/basic_replication.py:8
397s Nov 13 11:36:07 Then "members/postgres0" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23
398s Nov 13 11:36:08 And postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105
398s Nov 13 11:36:08 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112
403s Nov 13 11:36:13 When I shut down postgres1 # features/steps/basic_replication.py:29
405s Nov 13 11:36:15 Then "members/postgres0" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23
406s Nov 13 11:36:16 When I start postgres1 # features/steps/basic_replication.py:8
409s Nov 13 11:36:19 Then postgres1 role is the secondary after 20 seconds # features/steps/basic_replication.py:105
409s Nov 13 11:36:19 And "members/postgres1" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23
410s Nov 13 11:36:20 And I sleep for 2 seconds # features/steps/patroni_api.py:39
412s Nov 13 11:36:22 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
412s Nov 13 11:36:22 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
412s Nov 13 11:36:22 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
412s Nov 13 11:36:22 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
412s Nov 13 11:36:22 And postgres1 does not have a replication slot named dummy_slot # features/steps/slots.py:40
412s Nov 13 11:36:22 When I shut down postgres0 # features/steps/basic_replication.py:29
414s Nov 13 11:36:24 Then "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23
415s Nov 13 11:36:25 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
415s Nov 13 11:36:25 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
415s Nov 13 11:36:25 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
415s Nov 13 11:36:25 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
417s Nov 13 11:36:27
417s Nov 13 11:36:27 Feature: nostream node # features/nostream_node.feature:1
417s Nov 13 11:36:27
417s Nov 13 11:36:27 Scenario: check nostream node is recovering from archive # features/nostream_node.feature:3
417s Nov 13 11:36:27 When I start postgres0 # features/steps/basic_replication.py:8
420s Nov 13 11:36:30 And I configure and start postgres1 with a tag nostream true # features/steps/cascading_replication.py:7
423s Nov 13 11:36:33 Then "members/postgres1" key in DCS has replication_state=in archive recovery after 10 seconds # features/steps/cascading_replication.py:23
425s Nov 13 11:36:34 And replication works from postgres0 to postgres1 after 30 seconds # features/steps/basic_replication.py:112
428s Nov 13 11:36:38
428s Nov 13 11:36:38 @slot-advance
428s Nov 13 11:36:38 Scenario: check permanent logical replication slots are not copied # features/nostream_node.feature:10
428s Nov 13 11:36:38 When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}}, "slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71
429s Nov 13 11:36:39 Then I receive a response code 200 # features/steps/patroni_api.py:98
429s Nov 13 11:36:39 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86
431s Nov 13 11:36:41 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19
432s Nov 13 11:36:42 When I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7
435s Nov 13 11:36:45 Then "members/postgres2" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23
443s Nov 13 11:36:53 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40
443s Nov 13 11:36:53 And postgres2 does not have a replication slot named test_logical # features/steps/slots.py:40
449s Nov 13 11:36:59
449s Nov 13 11:36:59 Feature: patroni api # features/patroni_api.feature:1
449s Nov 13 11:36:59 We should check that patroni correctly responds to valid and not-valid API requests.
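Several scenarios above configure permanent replication slots through the `"slots"` key of the dynamic configuration: a logical slot definition carries `"database"` and `"plugin"` (as in `test_logical` and `dcs_slot_0`/`dcs_slot_2`), while a `null` entry declares a physical slot (as in `dcs_slot_1` and `postgres0`). A minimal sketch of that payload shape, using a hypothetical `classify_slot` helper for illustration (not Patroni code):

```python
# Sketch: classify the slot definitions used in the PATCH /config payloads above.
# `classify_slot` is our own illustrative helper, not part of Patroni.
def classify_slot(name, definition):
    """Return 'physical' or 'logical' for a slot entry from a "slots" payload."""
    if definition is None or definition.get("type", "physical") == "physical":
        # A null/bare entry declares a permanent physical slot.
        return "physical"
    if definition.get("type") == "logical":
        # Logical slots need a database and an output plugin.
        missing = {"database", "plugin"} - definition.keys()
        if missing:
            raise ValueError(f"logical slot {name!r} missing {sorted(missing)}")
        return "logical"
    raise ValueError(f"unknown slot type for {name!r}")


# The "slots" payload from the nostream scenario plus a physical entry
# like the dcs failsafe scenarios use.
slots = {
    "test_logical": {"type": "logical", "database": "postgres",
                     "plugin": "test_decoding"},
    "dcs_slot_1": None,
}
kinds = {name: classify_slot(name, d) for name, d in slots.items()}
```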
449s Nov 13 11:36:59 Scenario: check API requests on a stand-alone server # features/patroni_api.feature:4 449s Nov 13 11:36:59 Given I start postgres0 # features/steps/basic_replication.py:8 452s Nov 13 11:37:02 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 453s Nov 13 11:37:03 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 453s Nov 13 11:37:03 Then I receive a response code 200 # features/steps/patroni_api.py:98 453s Nov 13 11:37:03 And I receive a response state running # features/steps/patroni_api.py:98 453s Nov 13 11:37:03 And I receive a response role master # features/steps/patroni_api.py:98 453s Nov 13 11:37:03 When I issue a GET request to http://127.0.0.1:8008/standby_leader # features/steps/patroni_api.py:61 453s Nov 13 11:37:03 Then I receive a response code 503 # features/steps/patroni_api.py:98 453s Nov 13 11:37:03 When I issue a GET request to http://127.0.0.1:8008/health # features/steps/patroni_api.py:61 453s Nov 13 11:37:03 Then I receive a response code 200 # features/steps/patroni_api.py:98 453s Nov 13 11:37:03 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 453s Nov 13 11:37:03 Then I receive a response code 503 # features/steps/patroni_api.py:98 453s Nov 13 11:37:03 When I issue a POST request to http://127.0.0.1:8008/reinitialize with {"force": true} # features/steps/patroni_api.py:71 453s Nov 13 11:37:03 Then I receive a response code 503 # features/steps/patroni_api.py:98 453s Nov 13 11:37:03 And I receive a response text I am the leader, can not reinitialize # features/steps/patroni_api.py:98 453s Nov 13 11:37:03 When I run patronictl.py switchover batman --master postgres0 --force # features/steps/patroni_api.py:86 455s Nov 13 11:37:05 Then I receive a response returncode 1 # features/steps/patroni_api.py:98 455s Nov 13 11:37:05 And I receive a response output "Error: No candidates found to switchover to" # 
features/steps/patroni_api.py:98 455s Nov 13 11:37:05 When I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0"} # features/steps/patroni_api.py:71 455s Nov 13 11:37:05 Then I receive a response code 412 # features/steps/patroni_api.py:98 455s Nov 13 11:37:05 And I receive a response text switchover is not possible: cluster does not have members except leader # features/steps/patroni_api.py:98 455s Nov 13 11:37:05 When I issue an empty POST request to http://127.0.0.1:8008/failover # features/steps/patroni_api.py:66 455s Nov 13 11:37:05 Then I receive a response code 400 # features/steps/patroni_api.py:98 455s Nov 13 11:37:05 When I issue a POST request to http://127.0.0.1:8008/failover with {"foo": "bar"} # features/steps/patroni_api.py:71 455s Nov 13 11:37:05 Then I receive a response code 400 # features/steps/patroni_api.py:98 455s Nov 13 11:37:05 And I receive a response text "Failover could be performed only to a specific candidate" # features/steps/patroni_api.py:98 455s Nov 13 11:37:05 455s Nov 13 11:37:05 Scenario: check local configuration reload # features/patroni_api.feature:32 455s Nov 13 11:37:05 Given I add tag new_tag new_value to postgres0 config # features/steps/patroni_api.py:137 455s Nov 13 11:37:05 And I issue an empty POST request to http://127.0.0.1:8008/reload # features/steps/patroni_api.py:66 455s Nov 13 11:37:05 Then I receive a response code 202 # features/steps/patroni_api.py:98 455s Nov 13 11:37:05 455s Nov 13 11:37:05 Scenario: check dynamic configuration change via DCS # features/patroni_api.feature:37 455s Nov 13 11:37:05 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "postgresql": {"parameters": {"max_connections": "101"}}} # features/steps/patroni_api.py:71 455s Nov 13 11:37:05 Then I receive a response code 200 # features/steps/patroni_api.py:98 455s Nov 13 11:37:05 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 11 seconds # 
features/steps/patroni_api.py:156 458s Nov 13 11:37:08 When I issue a GET request to http://127.0.0.1:8008/config # features/steps/patroni_api.py:61 458s Nov 13 11:37:08 Then I receive a response code 200 # features/steps/patroni_api.py:98 458s Nov 13 11:37:08 And I receive a response ttl 20 # features/steps/patroni_api.py:98 458s Nov 13 11:37:08 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61 458s Nov 13 11:37:08 Then I receive a response code 200 # features/steps/patroni_api.py:98 458s Nov 13 11:37:08 And I receive a response tags {'new_tag': 'new_value'} # features/steps/patroni_api.py:98 458s Nov 13 11:37:08 And I sleep for 4 seconds # features/steps/patroni_api.py:39 462s Nov 13 11:37:12 462s Nov 13 11:37:12 Scenario: check the scheduled restart # features/patroni_api.feature:49 462s Nov 13 11:37:12 Given I run patronictl.py edit-config -p 'superuser_reserved_connections=6' --force batman # features/steps/patroni_api.py:86 464s Nov 13 11:37:14 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 464s Nov 13 11:37:14 And I receive a response output "+ superuser_reserved_connections: 6" # features/steps/patroni_api.py:98 464s Nov 13 11:37:14 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 5 seconds # features/steps/patroni_api.py:156 464s Nov 13 11:37:14 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"role": "replica"} # features/steps/patroni_api.py:124 464s Nov 13 11:37:14 Then I receive a response code 202 # features/steps/patroni_api.py:98 464s Nov 13 11:37:14 And I sleep for 8 seconds # features/steps/patroni_api.py:39 472s Nov 13 11:37:22 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 10 seconds # features/steps/patroni_api.py:156 472s Nov 13 11:37:22 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"restart_pending": "True"} # features/steps/patroni_api.py:124 
472s Nov 13 11:37:22 Then I receive a response code 202 # features/steps/patroni_api.py:98 472s Nov 13 11:37:22 And Response on GET http://127.0.0.1:8008/patroni does not contain pending_restart after 10 seconds # features/steps/patroni_api.py:171 479s Nov 13 11:37:29 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 480s Nov 13 11:37:30 480s Nov 13 11:37:30 Scenario: check API requests for the primary-replica pair in the pause mode # features/patroni_api.feature:63 480s Nov 13 11:37:30 Given I start postgres1 # features/steps/basic_replication.py:8 483s Nov 13 11:37:33 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 484s Nov 13 11:37:34 When I run patronictl.py pause batman # features/steps/patroni_api.py:86 486s Nov 13 11:37:36 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 486s Nov 13 11:37:36 When I kill postmaster on postgres1 # features/steps/basic_replication.py:44 486s Nov 13 11:37:36 waiting for server to shut down.... 
done 486s Nov 13 11:37:36 server stopped 486s Nov 13 11:37:36 And I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 486s Nov 13 11:37:36 Then I receive a response code 503 # features/steps/patroni_api.py:98 486s Nov 13 11:37:36 And "members/postgres1" key in DCS has state=stopped after 10 seconds # features/steps/cascading_replication.py:23 488s Nov 13 11:37:38 When I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86 491s Nov 13 11:37:41 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 491s Nov 13 11:37:41 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 492s Nov 13 11:37:42 And I sleep for 2 seconds # features/steps/patroni_api.py:39 494s Nov 13 11:37:44 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 494s Nov 13 11:37:44 Then I receive a response code 200 # features/steps/patroni_api.py:98 494s Nov 13 11:37:44 And I receive a response state running # features/steps/patroni_api.py:98 494s Nov 13 11:37:44 And I receive a response role replica # features/steps/patroni_api.py:98 494s Nov 13 11:37:44 When I run patronictl.py reinit batman postgres1 --force --wait # features/steps/patroni_api.py:86 498s Nov 13 11:37:48 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 498s Nov 13 11:37:48 And I receive a response output "Success: reinitialize for member postgres1" # features/steps/patroni_api.py:98 498s Nov 13 11:37:48 And postgres1 role is the secondary after 30 seconds # features/steps/basic_replication.py:105 499s Nov 13 11:37:49 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 499s Nov 13 11:37:49 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 501s Nov 13 11:37:51 Then I receive a response returncode 0 # 
features/steps/patroni_api.py:98 501s Nov 13 11:37:51 And I receive a response output "Success: restart on member postgres0" # features/steps/patroni_api.py:98 501s Nov 13 11:37:51 And postgres0 role is the primary after 5 seconds # features/steps/basic_replication.py:105 502s Nov 13 11:37:52 502s Nov 13 11:37:52 Scenario: check the switchover via the API in the pause mode # features/patroni_api.feature:90 502s Nov 13 11:37:52 Given I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0", "candidate": "postgres1"} # features/steps/patroni_api.py:71 504s Nov 13 11:37:54 Then I receive a response code 200 # features/steps/patroni_api.py:98 504s Nov 13 11:37:54 And postgres1 is a leader after 5 seconds # features/steps/patroni_api.py:29 504s Nov 13 11:37:54 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 504s Nov 13 11:37:54 And postgres0 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 509s Nov 13 11:37:59 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112 510s Nov 13 11:37:59 And "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 510s Nov 13 11:37:59 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61 510s Nov 13 11:38:00 Then I receive a response code 503 # features/steps/patroni_api.py:98 510s Nov 13 11:38:00 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 510s Nov 13 11:38:00 Then I receive a response code 200 # features/steps/patroni_api.py:98 510s Nov 13 11:38:00 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 510s Nov 13 11:38:00 Then I receive a response code 200 # features/steps/patroni_api.py:98 510s Nov 13 11:38:00 When I issue a GET request to http://127.0.0.1:8009/replica # 
features/steps/patroni_api.py:61 510s Nov 13 11:38:00 Then I receive a response code 503 # features/steps/patroni_api.py:98 510s Nov 13 11:38:00 510s Nov 13 11:38:00 Scenario: check the scheduled switchover # features/patroni_api.feature:107 510s Nov 13 11:38:00 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117 512s Nov 13 11:38:02 Then I receive a response returncode 1 # features/steps/patroni_api.py:98 512s Nov 13 11:38:02 And I receive a response output "Can't schedule switchover in the paused state" # features/steps/patroni_api.py:98 512s Nov 13 11:38:02 When I run patronictl.py resume batman # features/steps/patroni_api.py:86 514s Nov 13 11:38:04 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 514s Nov 13 11:38:04 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117 516s Nov 13 11:38:05 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 516s Nov 13 11:38:05 And postgres0 is a leader after 20 seconds # features/steps/patroni_api.py:29 526s Nov 13 11:38:15 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 526s Nov 13 11:38:15 And postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 529s Nov 13 11:38:19 And replication works from postgres0 to postgres1 after 25 seconds # features/steps/basic_replication.py:112 529s Nov 13 11:38:19 And "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 530s Nov 13 11:38:20 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61 530s Nov 13 11:38:20 Then I receive a response code 200 # features/steps/patroni_api.py:98 530s Nov 13 11:38:20 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 530s Nov 13 11:38:20 Then I receive a response code 
503 # features/steps/patroni_api.py:98 530s Nov 13 11:38:20 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 530s Nov 13 11:38:20 Then I receive a response code 503 # features/steps/patroni_api.py:98 530s Nov 13 11:38:20 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 530s Nov 13 11:38:20 Then I receive a response code 200 # features/steps/patroni_api.py:98 534s Nov 13 11:38:24 534s Nov 13 11:38:24 Feature: permanent slots # features/permanent_slots.feature:1 534s Nov 13 11:38:24 534s Nov 13 11:38:24 Scenario: check that physical permanent slots are created # features/permanent_slots.feature:2 534s Nov 13 11:38:24 Given I start postgres0 # features/steps/basic_replication.py:8 537s Nov 13 11:38:27 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 538s Nov 13 11:38:28 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 538s Nov 13 11:38:28 When I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_physical":0,"postgres0":0,"postgres1":0,"postgres3":0},"postgresql":{"parameters":{"wal_level":"logical"}}} # features/steps/patroni_api.py:71 538s Nov 13 11:38:28 Then I receive a response code 200 # features/steps/patroni_api.py:98 538s Nov 13 11:38:28 And Response on GET http://127.0.0.1:8008/config contains slots after 10 seconds # features/steps/patroni_api.py:156 538s Nov 13 11:38:28 When I start postgres1 # features/steps/basic_replication.py:8 541s Nov 13 11:38:31 And I start postgres2 # features/steps/basic_replication.py:8 544s Nov 13 11:38:34 And I configure and start postgres3 with a tag replicatefrom postgres2 # features/steps/cascading_replication.py:7 547s Nov 13 11:38:37 Then postgres0 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80 547s Nov 13 11:38:37 And postgres0 has a physical replication slot named 
postgres1 after 10 seconds # features/steps/slots.py:80 547s Nov 13 11:38:37 And postgres0 has a physical replication slot named postgres2 after 10 seconds # features/steps/slots.py:80 547s Nov 13 11:38:37 And postgres2 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80 547s Nov 13 11:38:37 547s Nov 13 11:38:37 @slot-advance 547s Nov 13 11:38:37 Scenario: check that logical permanent slots are created # features/permanent_slots.feature:18 547s Nov 13 11:38:37 Given I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 550s Nov 13 11:38:40 And I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71 550s Nov 13 11:38:40 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 551s Nov 13 11:38:41 551s Nov 13 11:38:41 @slot-advance 551s Nov 13 11:38:41 Scenario: check that permanent slots are created on replicas # features/permanent_slots.feature:24 551s Nov 13 11:38:41 Given postgres1 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 556s Nov 13 11:38:46 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 556s Nov 13 11:38:46 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 557s Nov 13 11:38:47 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51 558s Nov 13 11:38:48 And postgres1 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80 558s Nov 13 11:38:48 And postgres2 has a physical replication slot named test_physical after 2 seconds # 
features/steps/slots.py:80 558s Nov 13 11:38:48 And postgres3 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80 558s Nov 13 11:38:48 558s Nov 13 11:38:48 @slot-advance 558s Nov 13 11:38:48 Scenario: check permanent physical slots that match with member names # features/permanent_slots.feature:34 558s Nov 13 11:38:48 Given postgres0 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 558s Nov 13 11:38:48 And postgres1 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80 558s Nov 13 11:38:48 And postgres1 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 558s Nov 13 11:38:48 And postgres2 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80 558s Nov 13 11:38:48 And postgres2 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 558s Nov 13 11:38:48 And postgres2 has a physical replication slot named postgres1 after 2 seconds # features/steps/slots.py:80 558s Nov 13 11:38:48 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40 558s Nov 13 11:38:48 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40 558s Nov 13 11:38:48 558s Nov 13 11:38:48 @slot-advance 558s Nov 13 11:38:48 Scenario: check that permanent slots are advanced on replicas # features/permanent_slots.feature:45 558s Nov 13 11:38:48 Given I add the table replicate_me to postgres0 # features/steps/basic_replication.py:54 558s Nov 13 11:38:48 When I get all changes from logical slot test_logical on postgres0 # features/steps/slots.py:70 558s Nov 13 11:38:48 And I get all changes from physical slot test_physical on postgres0 # features/steps/slots.py:75 558s Nov 13 11:38:48 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # 
features/steps/slots.py:51 560s Nov 13 11:38:50 And Physical slot test_physical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 560s Nov 13 11:38:50 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 560s Nov 13 11:38:50 And Physical slot test_physical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 560s Nov 13 11:38:50 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51 560s Nov 13 11:38:50 And Physical slot test_physical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51 560s Nov 13 11:38:50 And Physical slot postgres1 is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 560s Nov 13 11:38:50 And Physical slot postgres3 is in sync between postgres2 and postgres0 after 20 seconds # features/steps/slots.py:51 560s Nov 13 11:38:50 And Physical slot postgres3 is in sync between postgres2 and postgres1 after 10 seconds # features/steps/slots.py:51 560s Nov 13 11:38:50 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40 560s Nov 13 11:38:50 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40 560s Nov 13 11:38:50 560s Nov 13 11:38:50 @slot-advance 560s Nov 13 11:38:50 Scenario: check that only permanent slots are written to the /status key # features/permanent_slots.feature:62 560s Nov 13 11:38:50 Given "status" key in DCS has test_physical in slots # features/steps/slots.py:96 560s Nov 13 11:38:50 And "status" key in DCS has postgres0 in slots # features/steps/slots.py:96 560s Nov 13 11:38:50 And "status" key in DCS has postgres1 in slots # features/steps/slots.py:96 560s Nov 13 11:38:50 And "status" key in DCS does not have postgres2 in slots # features/steps/slots.py:102 560s Nov 13 11:38:50 And "status" 
key in DCS has postgres3 in slots # features/steps/slots.py:96 560s Nov 13 11:38:50 560s Nov 13 11:38:50 Scenario: check permanent physical replication slot after failover # features/permanent_slots.feature:69 560s Nov 13 11:38:50 Given I shut down postgres3 # features/steps/basic_replication.py:29 561s Nov 13 11:38:51 And I shut down postgres2 # features/steps/basic_replication.py:29 562s Nov 13 11:38:52 And I shut down postgres0 # features/steps/basic_replication.py:29 564s Nov 13 11:38:54 Then postgres1 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80 564s Nov 13 11:38:54 And postgres1 has a physical replication slot named postgres0 after 10 seconds # features/steps/slots.py:80 564s Nov 13 11:38:54 And postgres1 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80 566s Nov 13 11:38:56 566s Nov 13 11:38:56 Feature: priority replication # features/priority_failover.feature:1 566s Nov 13 11:38:56 We should check that we can give nodes priority during failover 566s Nov 13 11:38:56 Scenario: check failover priority 0 prevents leaderships # features/priority_failover.feature:4 566s Nov 13 11:38:56 Given I configure and start postgres0 with a tag failover_priority 1 # features/steps/cascading_replication.py:7 569s Nov 13 11:38:59 And I configure and start postgres1 with a tag failover_priority 0 # features/steps/cascading_replication.py:7 572s Nov 13 11:39:02 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 573s Nov 13 11:39:03 When I shut down postgres0 # features/steps/basic_replication.py:29 575s Nov 13 11:39:05 And there is one of ["following a different leader because I am not allowed to promote"] INFO in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121 578s Nov 13 11:39:07 Then postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 578s Nov 13 
11:39:07 When I start postgres0 # features/steps/basic_replication.py:8 581s Nov 13 11:39:10 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 581s Nov 13 11:39:11 581s Nov 13 11:39:11 Scenario: check higher failover priority is respected # features/priority_failover.feature:14 581s Nov 13 11:39:11 Given I configure and start postgres2 with a tag failover_priority 1 # features/steps/cascading_replication.py:7 584s Nov 13 11:39:14 And I configure and start postgres3 with a tag failover_priority 2 # features/steps/cascading_replication.py:7 588s Nov 13 11:39:17 Then replication works from postgres0 to postgres2 after 20 seconds # features/steps/basic_replication.py:112 589s Nov 13 11:39:19 And replication works from postgres0 to postgres3 after 20 seconds # features/steps/basic_replication.py:112 594s Nov 13 11:39:24 When I shut down postgres0 # features/steps/basic_replication.py:29 596s Nov 13 11:39:26 Then postgres3 role is the primary after 10 seconds # features/steps/basic_replication.py:105 596s Nov 13 11:39:26 And there is one of ["postgres3 has equally tolerable WAL position and priority 2, while this node has priority 1","Wal position of postgres3 is ahead of my wal position"] INFO in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121 596s Nov 13 11:39:26 596s Nov 13 11:39:26 Scenario: check conflicting configuration handling # features/priority_failover.feature:23 596s Nov 13 11:39:26 When I set nofailover tag in postgres2 config # features/steps/patroni_api.py:131 596s Nov 13 11:39:26 And I issue an empty POST request to http://127.0.0.1:8010/reload # features/steps/patroni_api.py:66 596s Nov 13 11:39:26 Then I receive a response code 202 # features/steps/patroni_api.py:98 596s Nov 13 11:39:26 And there is one of ["Conflicting configuration between nofailover: True and failover_priority: 1. 
Defaulting to nofailover: True"] WARNING in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121 597s Nov 13 11:39:27 And "members/postgres2" key in DCS has tags={'failover_priority': '1', 'nofailover': True} after 10 seconds # features/steps/cascading_replication.py:23 598s Nov 13 11:39:28 When I issue a POST request to http://127.0.0.1:8010/failover with {"candidate": "postgres2"} # features/steps/patroni_api.py:71 598s Nov 13 11:39:28 Then I receive a response code 412 # features/steps/patroni_api.py:98 598s Nov 13 11:39:28 And I receive a response text "failover is not possible: no good candidates have been found" # features/steps/patroni_api.py:98 598s Nov 13 11:39:28 When I reset nofailover tag in postgres1 config # features/steps/patroni_api.py:131 598s Nov 13 11:39:28 And I issue an empty POST request to http://127.0.0.1:8009/reload # features/steps/patroni_api.py:66 598s Nov 13 11:39:28 Then I receive a response code 202 # features/steps/patroni_api.py:98 598s Nov 13 11:39:28 And there is one of ["Conflicting configuration between nofailover: False and failover_priority: 0. 
Defaulting to nofailover: False"] WARNING in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121 600s Nov 13 11:39:30 And "members/postgres1" key in DCS has tags={'failover_priority': '0', 'nofailover': False} after 10 seconds # features/steps/cascading_replication.py:23 601s Nov 13 11:39:31 And I issue a POST request to http://127.0.0.1:8009/failover with {"candidate": "postgres1"} # features/steps/patroni_api.py:71 604s Nov 13 11:39:34 Then I receive a response code 200 # features/steps/patroni_api.py:98 604s Nov 13 11:39:34 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 608s Nov 13 11:39:38 608s Nov 13 11:39:38 Feature: recovery # features/recovery.feature:1 608s Nov 13 11:39:38 We want to check that crashed postgres is started back 608s Nov 13 11:39:38 Scenario: check that timeline is not incremented when primary is started after crash # features/recovery.feature:4 608s Nov 13 11:39:38 Given I start postgres0 # features/steps/basic_replication.py:8 611s Nov 13 11:39:41 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 612s Nov 13 11:39:42 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 612s Nov 13 11:39:42 When I start postgres1 # features/steps/basic_replication.py:8 615s Nov 13 11:39:45 And I add the table foo to postgres0 # features/steps/basic_replication.py:54 615s Nov 13 11:39:45 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 616s Nov 13 11:39:46 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44 616s Nov 13 11:39:46 waiting for server to shut down.... 
done 616s Nov 13 11:39:46 server stopped 616s Nov 13 11:39:46 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 618s Nov 13 11:39:48 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 618s Nov 13 11:39:48 Then I receive a response code 200 # features/steps/patroni_api.py:98 618s Nov 13 11:39:48 And I receive a response role master # features/steps/patroni_api.py:98 618s Nov 13 11:39:48 And I receive a response timeline 1 # features/steps/patroni_api.py:98 618s Nov 13 11:39:48 And "members/postgres0" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23 619s Nov 13 11:39:49 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112 620s Nov 13 11:39:50 620s Nov 13 11:39:50 Scenario: check immediate failover when master_start_timeout=0 # features/recovery.feature:20 620s Nov 13 11:39:50 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"master_start_timeout": 0} # features/steps/patroni_api.py:71 620s Nov 13 11:39:50 Then I receive a response code 200 # features/steps/patroni_api.py:98 620s Nov 13 11:39:50 And Response on GET http://127.0.0.1:8008/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156 620s Nov 13 11:39:50 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44 621s Nov 13 11:39:51 waiting for server to shut down.... 
done
621s Nov 13 11:39:51 server stopped
621s Nov 13 11:39:51 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
626s Nov 13 11:39:56 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105
629s Nov 13 11:39:59
629s Nov 13 11:39:59 Feature: standby cluster # features/standby_cluster.feature:1
629s Nov 13 11:39:59
629s Nov 13 11:39:59 Scenario: prepare the cluster with logical slots # features/standby_cluster.feature:2
629s Nov 13 11:39:59 Given I start postgres1 # features/steps/basic_replication.py:8
632s Nov 13 11:40:02 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
632s Nov 13 11:40:02 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
632s Nov 13 11:40:02 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"pm_1": {"type": "physical"}}, "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71
632s Nov 13 11:40:02 Then I receive a response code 200 # features/steps/patroni_api.py:98
632s Nov 13 11:40:02 And Response on GET http://127.0.0.1:8009/config contains slots after 10 seconds # features/steps/patroni_api.py:156
632s Nov 13 11:40:02 And I sleep for 3 seconds # features/steps/patroni_api.py:39
635s Nov 13 11:40:05 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"test_logical": {"type": "logical", "database": "postgres", "plugin": "test_decoding"}}} # features/steps/patroni_api.py:71
635s Nov 13 11:40:05 Then I receive a response code 200 # features/steps/patroni_api.py:98
635s Nov 13 11:40:05 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25
635s Nov 13 11:40:05 When I start postgres0 # features/steps/basic_replication.py:8
638s Nov 13 11:40:08 Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
639s Nov 13 11:40:09 And replication works from postgres1 to postgres0 after 15 seconds # features/steps/basic_replication.py:112
644s Nov 13 11:40:14 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61
644s Nov 13 11:40:14 Then I receive a response code 200 # features/steps/patroni_api.py:98
644s Nov 13 11:40:14 And I receive a response replication_state streaming # features/steps/patroni_api.py:98
644s Nov 13 11:40:14 And "members/postgres0" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23
646s Nov 13 11:40:16
646s Nov 13 11:40:16 @slot-advance
646s Nov 13 11:40:16 Scenario: check permanent logical slots are synced to the replica # features/standby_cluster.feature:22
646s Nov 13 11:40:16 Given I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86
648s Nov 13 11:40:18 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
654s Nov 13 11:40:24
654s Nov 13 11:40:24 Scenario: Detach exiting node from the cluster # features/standby_cluster.feature:26
654s Nov 13 11:40:24 When I shut down postgres1 # features/steps/basic_replication.py:29
656s Nov 13 11:40:26 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
656s Nov 13 11:40:26 And "members/postgres0" key in DCS has role=master after 5 seconds # features/steps/cascading_replication.py:23
657s Nov 13 11:40:27 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61
658s Nov 13 11:40:27 Then I receive a response code 200 # features/steps/patroni_api.py:98
658s Nov 13 11:40:28
658s Nov 13 11:40:28 Scenario: check replication of a single table in a standby cluster # features/standby_cluster.feature:33
658s Nov 13 11:40:28 Given I start postgres1 in a standby cluster batman1 as a clone of postgres0 # features/steps/standby_cluster.py:23
661s Nov 13 11:40:31 Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16
662s Nov 13 11:40:32 When I add the table foo to postgres0 # features/steps/basic_replication.py:54
662s Nov 13 11:40:32 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
662s Nov 13 11:40:32 When I issue a GET request to http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:61
662s Nov 13 11:40:32 Then I receive a response code 200 # features/steps/patroni_api.py:98
662s Nov 13 11:40:32 And I receive a response replication_state streaming # features/steps/patroni_api.py:98
662s Nov 13 11:40:32 And I sleep for 3 seconds # features/steps/patroni_api.py:39
665s Nov 13 11:40:35 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
665s Nov 13 11:40:35 Then I receive a response code 503 # features/steps/patroni_api.py:98
665s Nov 13 11:40:35 When I issue a GET request to http://127.0.0.1:8009/standby_leader # features/steps/patroni_api.py:61
665s Nov 13 11:40:35 Then I receive a response code 200 # features/steps/patroni_api.py:98
665s Nov 13 11:40:35 And I receive a response role standby_leader # features/steps/patroni_api.py:98
665s Nov 13 11:40:35 And there is a postgres1_cb.log with "on_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12
665s Nov 13 11:40:35 When I start postgres2 in a cluster batman1 # features/steps/standby_cluster.py:12
668s Nov 13 11:40:38 Then postgres2 role is the replica after 24 seconds # features/steps/basic_replication.py:105
668s Nov 13 11:40:38 And postgres2 is replicating from postgres1 after 10 seconds # features/steps/standby_cluster.py:52
668s Nov 13 11:40:38 And table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93
668s Nov 13 11:40:38 When I issue a GET request to http://127.0.0.1:8010/patroni # features/steps/patroni_api.py:61
668s Nov 13 11:40:38 Then I receive a response code 200 # features/steps/patroni_api.py:98
668s Nov 13 11:40:38 And I receive a response replication_state streaming # features/steps/patroni_api.py:98
668s Nov 13 11:40:38 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40
668s Nov 13 11:40:38
668s Nov 13 11:40:38 Scenario: check switchover # features/standby_cluster.feature:57
668s Nov 13 11:40:38 Given I run patronictl.py switchover batman1 --force # features/steps/patroni_api.py:86
672s Nov 13 11:40:42 Then Status code on GET http://127.0.0.1:8010/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142
672s Nov 13 11:40:42 And postgres1 is replicating from postgres2 after 32 seconds # features/steps/standby_cluster.py:52
674s Nov 13 11:40:44 And there is a postgres2_cb.log with "on_start replica batman1\non_role_change standby_leader batman1" in postgres2 data directory # features/steps/cascading_replication.py:12
674s Nov 13 11:40:44
674s Nov 13 11:40:44 Scenario: check failover # features/standby_cluster.feature:63
674s Nov 13 11:40:44 When I kill postgres2 # features/steps/basic_replication.py:34
675s Nov 13 11:40:45 And I kill postmaster on postgres2 # features/steps/basic_replication.py:44
675s Nov 13 11:40:45 waiting for server to shut down.... done
675s Nov 13 11:40:45 server stopped
675s Nov 13 11:40:45 Then postgres1 is replicating from postgres0 after 32 seconds # features/steps/standby_cluster.py:52
694s Nov 13 11:41:04 And Status code on GET http://127.0.0.1:8009/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142
694s Nov 13 11:41:04 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
694s Nov 13 11:41:04 Then I receive a response code 503 # features/steps/patroni_api.py:98
694s Nov 13 11:41:04 And I receive a response role standby_leader # features/steps/patroni_api.py:98
694s Nov 13 11:41:04 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112
695s Nov 13 11:41:05 And there is a postgres1_cb.log with "on_role_change replica batman1\non_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12
699s Nov 13 11:41:09
699s Nov 13 11:41:09 Feature: watchdog # features/watchdog.feature:1
699s Nov 13 11:41:09 Verify that watchdog gets pinged and triggered under appropriate circumstances.
699s Nov 13 11:41:09 Scenario: watchdog is opened and pinged # features/watchdog.feature:4
699s Nov 13 11:41:09 Given I start postgres0 with watchdog # features/steps/watchdog.py:16
702s Nov 13 11:41:12 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
702s Nov 13 11:41:12 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
702s Nov 13 11:41:12 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21
703s Nov 13 11:41:13 And postgres0 watchdog has a 15 second timeout # features/steps/watchdog.py:34
703s Nov 13 11:41:13
703s Nov 13 11:41:13 Scenario: watchdog is reconfigured after global ttl changed # features/watchdog.feature:11
703s Nov 13 11:41:13 Given I run patronictl.py edit-config batman -s ttl=30 --force # features/steps/patroni_api.py:86
705s Nov 13 11:41:15 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
705s Nov 13 11:41:15 And I receive a response output "+ttl: 30" # features/steps/patroni_api.py:98
705s Nov 13 11:41:15 When I sleep for 4 seconds # features/steps/patroni_api.py:39
709s Nov 13 11:41:19 Then postgres0 watchdog has a 25 second timeout # features/steps/watchdog.py:34
709s Nov 13 11:41:19
709s Nov 13 11:41:19 Scenario: watchdog is disabled during pause # features/watchdog.feature:18
709s Nov 13 11:41:19 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86
711s Nov 13 11:41:21 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
711s Nov 13 11:41:21 When I sleep for 2 seconds # features/steps/patroni_api.py:39
713s Nov 13 11:41:23 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29
713s Nov 13 11:41:23
713s Nov 13 11:41:23 Scenario: watchdog is opened and pinged after resume # features/watchdog.feature:24
713s Nov 13 11:41:23 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39
713s Nov 13 11:41:23 And I run patronictl.py resume batman # features/steps/patroni_api.py:86
715s Nov 13 11:41:25 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
715s Nov 13 11:41:25 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21
716s Nov 13 11:41:26
716s Nov 13 11:41:26 Scenario: watchdog is disabled when shutting down # features/watchdog.feature:30
716s Nov 13 11:41:26 Given I shut down postgres0 # features/steps/basic_replication.py:29
718s Nov 13 11:41:28 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29
718s Nov 13 11:41:28
718s Nov 13 11:41:28 Scenario: watchdog is triggered if patroni stops responding # features/watchdog.feature:34
718s Nov 13 11:41:28 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39
718s Nov 13 11:41:28 And I start postgres0 with watchdog # features/steps/watchdog.py:16
721s Nov 13 11:41:31 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
722s Nov 13 11:41:32 When postgres0 hangs for 30 seconds # features/steps/watchdog.py:52
722s Nov 13 11:41:32 Then postgres0 watchdog is triggered after 30 seconds # features/steps/watchdog.py:44
749s Nov 13 11:41:59
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.4860.XILESIJx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.4905.XdgHRvcx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.4940.XeXiEmNx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5015.XQNnQKOx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5061.XOkWntKx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5134.XIztGKex
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5183.XIuIyLux
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5186.XEICmlEx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5277.XizxaUdx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5385.XWJWTYkx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5395.XmwWqbrx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5443.XFvgqHOx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5490.XTeCtKsx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5640.XoSFsuOx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5687.XQKbWyNx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5742.XnzrPXsx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5825.XTiGLMMx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5875.XsJHHIUx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.5978.XJFbQwax
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6032.XZboEYHx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6094.XnKZNpVx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6194.XlJVewkx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6291.XzWgHHBx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6334.XYvXziFx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6399.XKzkIenx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6432.XSICFlqx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6606.XACVNYSx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6657.XbkGamWx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6673.XIljfCyx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6711.XDTzHHkx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6758.XgzAGHNx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6763.XMPvFmZx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6799.XSBfIpnx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.6841.XGvCdpSx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.7003.Xswlnpdx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.7005.XnOsBIix
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.7011.XWICnaIx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.7143.XOLeYxsx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.7189.XDIubRfx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.7232.XvkeYHdx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.7284.XtVROyDx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.7326.XXTUfjZx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.7540.XooYOFtx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.7583.XdgbVcWx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.7654.XLRkWmZx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.7737.XHUjlmSx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.7790.XQOnwjdx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.8137.XhUbkhFx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.8180.XIofIgwx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.8315.XpvAzSTx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.8377.XHWEnQAx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.8460.XIRXiaRx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.8559.XTkvOxVx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.8669.XGZIqxax
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.8803.XhVSXvKx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.8846.XAFqKGrx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.8848.XMqCDsrx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.8851.XltmpnHx
749s Nov 13 11:41:59 Combined data file .coverage.autopkgtest.8863.XowNmUPx
752s Nov 13 11:42:02 Name Stmts Miss Cover
752s Nov 13 11:42:02 -------------------------------------------------------------------------------------------------------------
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/OpenSSL/SSL.py 1099 597 46%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/OpenSSL/__init__.py 4 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/OpenSSL/_util.py 41 14 66%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/OpenSSL/crypto.py 1082 842 22%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/OpenSSL/version.py 10 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/_distutils_hack/__init__.py 101 96 5%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/__about__.py 5 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/__init__.py 3 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/exceptions.py 26 5 81%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/__init__.py 2 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/_oid.py 126 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/__init__.py 0 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/__init__.py 0 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/_conditional.py 50 23 54%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/binding.py 62 12 81%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/__init__.py 0 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_asymmetric.py 6 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_cipheralgorithm.py 17 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_serialization.py 79 35 56%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/__init__.py 0 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dh.py 47 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dsa.py 55 5 91%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ec.py 164 17 90%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed448.py 45 12 73%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed25519.py 43 12 72%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/padding.py 55 23 58%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/rsa.py 90 38 58%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/types.py 19 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/utils.py 14 5 64%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x448.py 43 12 72%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x25519.py 41 12 71%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/__init__.py 4 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/algorithms.py 129 35 73%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/base.py 140 81 42%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/modes.py 139 58 58%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/constant_time.py 6 3 50%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/hashes.py 127 20 84%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/__init__.py 5 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/base.py 7 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/ssh.py 758 602 21%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/utils.py 77 29 62%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/x509/__init__.py 70 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/x509/base.py 487 229 53%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/x509/certificate_transparency.py 42 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/x509/extensions.py 1038 569 45%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/x509/general_name.py 166 94 43%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/x509/name.py 232 141 39%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/x509/oid.py 3 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/cryptography/x509/verification.py 10 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dateutil/__init__.py 13 4 69%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dateutil/_common.py 25 15 40%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dateutil/_version.py 11 2 82%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dateutil/parser/__init__.py 33 4 88%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dateutil/parser/_parser.py 813 436 46%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dateutil/parser/isoparser.py 185 150 19%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dateutil/relativedelta.py 241 206 15%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dateutil/tz/__init__.py 4 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dateutil/tz/_common.py 161 121 25%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dateutil/tz/_factories.py 49 21 57%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dateutil/tz/tz.py 800 626 22%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dateutil/tz/win.py 153 149 3%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/__init__.py 3 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/_asyncbackend.py 14 6 57%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/_ddr.py 105 86 18%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/_features.py 44 7 84%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/_immutable_ctx.py 40 5 88%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/asyncbackend.py 44 32 27%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/asyncquery.py 277 242 13%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/edns.py 270 161 40%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/entropy.py 80 49 39%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/enum.py 72 46 36%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/exception.py 60 33 45%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/flags.py 41 14 66%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/grange.py 34 30 12%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/immutable.py 41 30 27%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/inet.py 80 65 19%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/ipv4.py 27 20 26%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/ipv6.py 115 100 13%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/message.py 809 662 18%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/name.py 620 427 31%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/nameserver.py 101 54 47%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/node.py 118 71 40%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/opcode.py 31 7 77%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/query.py 536 462 14%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/quic/__init__.py 26 23 12%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rcode.py 69 13 81%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rdata.py 377 269 29%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rdataclass.py 44 9 80%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rdataset.py 193 133 31%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rdatatype.py 214 25 88%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/OPT.py 34 19 44%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/SOA.py 41 26 37%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/TSIG.py 58 42 28%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/ZONEMD.py 43 27 37%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/__init__.py 2 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rdtypes/__init__.py 2 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rdtypes/svcbbase.py 397 261 34%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rdtypes/util.py 191 154 19%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/renderer.py 152 118 22%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/resolver.py 899 719 20%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/reversename.py 33 24 27%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/rrset.py 78 56 28%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/serial.py 93 79 15%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/set.py 149 108 28%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/tokenizer.py 335 279 17%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/transaction.py 271 203 25%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/tsig.py 177 122 31%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/ttl.py 45 38 16%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/version.py 7 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/wire.py 64 42 34%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/xfr.py 148 126 15%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/zone.py 508 383 25%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/zonefile.py 429 380 11%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/dns/zonetypes.py 15 2 87%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/etcd/__init__.py 125 63 50%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/etcd/client.py 380 256 33%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/etcd/lock.py 125 103 18%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/idna/__init__.py 4 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/idna/core.py 292 257 12%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/idna/idnadata.py 4 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/idna/intranges.py 30 24 20%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/idna/package_data.py 1 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/__init__.py 13 2 85%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/__main__.py 199 62 69%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/api.py 770 286 63%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/async_executor.py 96 15 84%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/collections.py 56 6 89%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/config.py 371 94 75%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/config_generator.py 212 159 25%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/daemon.py 76 3 96%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/dcs/__init__.py 646 78 88%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/dcs/etcd3.py 679 124 82%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/dcs/etcd.py 603 253 58%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/dynamic_loader.py 35 7 80%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/exceptions.py 16 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/file_perm.py 43 8 81%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/global_config.py 81 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/ha.py 1244 362 71%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/log.py 219 69 68%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/__init__.py 821 168 80%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/available_parameters/__init__.py 21 1 95%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/bootstrap.py 252 62 75%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/callback_executor.py 55 8 85%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/cancellable.py 104 41 61%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/config.py 813 216 73%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/connection.py 75 1 99%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/misc.py 41 8 80%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/mpp/__init__.py 89 11 88%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py 170 85 50%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/rewind.py 416 166 60%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/slots.py 334 37 89%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/sync.py 130 19 85%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/postgresql/validator.py 157 23 85%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/psycopg.py 42 16 62%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/request.py 62 7 89%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/tags.py 38 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/utils.py 350 104 70%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/validator.py 301 208 31%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/version.py 1 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/watchdog/__init__.py 2 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/watchdog/base.py 203 42 79%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/patroni/watchdog/linux.py 135 35 74%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/psutil/__init__.py 951 629 34%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/psutil/_common.py 424 212 50%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/psutil/_compat.py 302 263 13%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/psutil/_pslinux.py 1251 924 26%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/psutil/_psposix.py 96 38 60%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/psycopg2/__init__.py 19 3 84%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/psycopg2/_json.py 64 27 58%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/psycopg2/_range.py 269 172 36%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/psycopg2/errors.py 3 2 33%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/psycopg2/extensions.py 91 25 73%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/six.py 504 250 50%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/__init__.py 50 14 72%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/_base_connection.py 70 52 26%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/_collections.py 234 123 47%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/_request_methods.py 53 23 57%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/_version.py 2 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/connection.py 324 99 69%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/connectionpool.py 347 124 64%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/contrib/__init__.py 0 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/contrib/pyopenssl.py 257 96 63%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/exceptions.py 115 32 72%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/fields.py 92 73 21%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/filepost.py 37 24 35%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/poolmanager.py 233 85 64%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/response.py 562 274 51%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/util/__init__.py 10 0 100%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/util/connection.py 66 42 36%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/util/proxy.py 13 6 54%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/util/request.py 104 49 53%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/util/response.py 32 15 53%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/util/retry.py 173 49 72%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/util/ssl_.py 177 78 56%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/util/ssl_match_hostname.py 66 54 18%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/util/ssltransport.py 160 112 30%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/util/timeout.py 71 14 80%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/util/url.py 205 72 65%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/util/util.py 26 10 62%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/urllib3/util/wait.py 49 18 63%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/__init__.py 165 109 34%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/composer.py 92 17 82%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/constructor.py 479 276 42%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/cyaml.py 46 24 48%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/dumper.py 23 12 48%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/emitter.py 838 769 8%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/error.py 58 42 28%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/events.py 61 6 90%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/loader.py 47 24 49%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/nodes.py 29 7 76%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/parser.py 352 198 44%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/reader.py 122 34 72%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/representer.py 248 176 29%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/resolver.py 135 76 44%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/scanner.py 758 437 42%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/serializer.py 85 70 18%
752s Nov 13 11:42:02 /usr/lib/python3/dist-packages/yaml/tokens.py 76 17 78%
752s Nov 13 11:42:02 patroni/__init__.py 13 2 85%
752s Nov 13 11:42:02 patroni/__main__.py 199 199 0%
752s Nov 13 11:42:02 patroni/api.py 770 770 0%
752s Nov 13 11:42:02 patroni/async_executor.py 96 69 28%
752s Nov 13 11:42:02 patroni/collections.py 56 15 73%
752s Nov 13 11:42:02 patroni/config.py 371 196 47%
752s Nov 13 11:42:02 patroni/config_generator.py 212 212 0%
752s Nov 13 11:42:02 patroni/ctl.py 936 411 56%
752s Nov 13 11:42:02 patroni/daemon.py 76 76 0%
752s Nov 13 11:42:02 patroni/dcs/__init__.py 646 269 58%
752s Nov 13 11:42:02 patroni/dcs/consul.py 485 485 0%
752s Nov 13 11:42:02 patroni/dcs/etcd3.py 679 346 49%
752s Nov 13 11:42:02 patroni/dcs/etcd.py 603 277 54%
752s Nov 13 11:42:02 patroni/dcs/exhibitor.py 61 61 0%
752s Nov 13 11:42:02 patroni/dcs/kubernetes.py 938 938 0%
752s Nov 13 11:42:02 patroni/dcs/raft.py 319 319 0%
752s Nov 13 11:42:02 patroni/dcs/zookeeper.py 288 288 0%
752s Nov 13 11:42:02 patroni/dynamic_loader.py 35 7 80%
752s Nov 13 11:42:02 patroni/exceptions.py 16 1 94%
752s Nov 13 11:42:02 patroni/file_perm.py 43 15 65%
752s Nov 13 11:42:02 patroni/global_config.py 81 18 78%
752s Nov 13 11:42:02 patroni/ha.py 1244 1244 0%
752s Nov 13 11:42:02 patroni/log.py 219 173 21%
752s Nov 13 11:42:02 patroni/postgresql/__init__.py 821 651 21%
752s Nov 13 11:42:02 patroni/postgresql/available_parameters/__init__.py 21 1 95%
752s Nov 13 11:42:02 patroni/postgresql/bootstrap.py 252 222 12%
752s Nov 13 11:42:02 patroni/postgresql/callback_executor.py 55 34 38%
752s Nov 13 11:42:02 patroni/postgresql/cancellable.py 104 84 19%
752s Nov 13 11:42:02 patroni/postgresql/config.py 813 698 14%
752s Nov 13 11:42:02 patroni/postgresql/connection.py 75 50 33%
752s Nov 13 11:42:02 patroni/postgresql/misc.py 41 29 29%
752s Nov 13 11:42:02 patroni/postgresql/mpp/__init__.py 89 21 76%
752s Nov 13 11:42:02 patroni/postgresql/mpp/citus.py 259 259 0%
752s Nov 13 11:42:02 patroni/postgresql/postmaster.py 170 139 18%
752s Nov 13 11:42:02 patroni/postgresql/rewind.py 416 416 0%
752s Nov 13 11:42:02 patroni/postgresql/slots.py 334 285 15%
752s Nov 13 11:42:02 patroni/postgresql/sync.py 130 96 26%
752s Nov 13 11:42:02 patroni/postgresql/validator.py 157 52 67%
752s Nov 13 11:42:02 patroni/psycopg.py 42 28 33%
752s Nov 13 11:42:02 patroni/raft_controller.py 22 22 0%
752s Nov 13 11:42:02 patroni/request.py 62 6 90%
752s Nov 13 11:42:02 patroni/scripts/__init__.py 0 0 100%
752s Nov 13 11:42:02 patroni/scripts/aws.py 59 59 0%
752s Nov 13 11:42:02 patroni/scripts/barman/__init__.py 0 0 100%
752s Nov 13 11:42:02 patroni/scripts/barman/cli.py 51 51 0%
752s Nov 13 11:42:02 patroni/scripts/barman/config_switch.py 51 51 0%
752s Nov 13 11:42:02 patroni/scripts/barman/recover.py 37 37 0%
752s Nov 13 11:42:02 patroni/scripts/barman/utils.py 94 94 0%
752s Nov 13 11:42:02 patroni/scripts/wale_restore.py 207 207 0%
752s Nov 13 11:42:02 patroni/tags.py 38 11 71%
752s Nov 13 11:42:02 patroni/utils.py 350 177 49%
752s Nov 13 11:42:02 patroni/validator.py
301 215 29% 752s Nov 13 11:42:02 patroni/version.py 1 0 100% 752s Nov 13 11:42:02 patroni/watchdog/__init__.py 2 2 0% 752s Nov 13 11:42:02 patroni/watchdog/base.py 203 203 0% 752s Nov 13 11:42:02 patroni/watchdog/linux.py 135 135 0% 752s Nov 13 11:42:02 ------------------------------------------------------------------------------------------------------------- 752s Nov 13 11:42:02 TOTAL 53739 32236 40% 752s Nov 13 11:42:02 12 features passed, 0 failed, 1 skipped 752s Nov 13 11:42:02 46 scenarios passed, 0 failed, 14 skipped 752s Nov 13 11:42:02 466 steps passed, 0 failed, 119 skipped, 0 undefined 752s Nov 13 11:42:02 Took 7m37.597s 752s + echo '### End 16 acceptance-etcd3 ###' 752s + rm -f '/tmp/pgpass?' 752s ### End 16 acceptance-etcd3 ### 752s ++ id -u 752s + '[' 1000 -eq 0 ']' 752s autopkgtest [11:42:02]: test acceptance-etcd3: -----------------------] 753s autopkgtest [11:42:03]: test acceptance-etcd3: - - - - - - - - - - results - - - - - - - - - - 753s acceptance-etcd3 PASS 753s autopkgtest [11:42:03]: test acceptance-etcd-basic: preparing testbed 862s autopkgtest [11:43:52]: testbed dpkg architecture: s390x 862s autopkgtest [11:43:52]: testbed apt version: 2.9.8 862s autopkgtest [11:43:52]: @@@@@@@@@@@@@@@@@@@@ test bed setup 863s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease [73.9 kB] 863s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/main Sources [76.4 kB] 863s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse Sources [15.3 kB] 863s Get:4 http://ftpmaster.internal/ubuntu plucky-proposed/restricted Sources [7016 B] 863s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/universe Sources [849 kB] 863s Get:6 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x Packages [85.8 kB] 863s Get:7 http://ftpmaster.internal/ubuntu plucky-proposed/universe s390x Packages [565 kB] 863s Get:8 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse s390x Packages [16.6 kB] 863s Fetched 1689 kB in 1s (2247 kB/s) 
863s Reading package lists... 865s Reading package lists... 865s Building dependency tree... 865s Reading state information... 865s Calculating upgrade... 866s The following NEW packages will be installed: 866s python3.13-gdbm 866s The following packages will be upgraded: 866s libgpgme11t64 libpython3-stdlib python3 python3-gdbm python3-minimal 866s 5 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. 866s Need to get 252 kB of archives. 866s After this operation, 98.3 kB of additional disk space will be used. 866s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3-minimal s390x 3.12.7-1 [27.4 kB] 866s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3 s390x 3.12.7-1 [24.0 kB] 866s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libpython3-stdlib s390x 3.12.7-1 [10.0 kB] 866s Get:4 http://ftpmaster.internal/ubuntu plucky/main s390x python3.13-gdbm s390x 3.13.0-2 [31.0 kB] 866s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3-gdbm s390x 3.12.7-1 [8642 B] 866s Get:6 http://ftpmaster.internal/ubuntu plucky/main s390x libgpgme11t64 s390x 1.23.2-5ubuntu4 [151 kB] 866s Fetched 252 kB in 0s (632 kB/s) 866s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 55510 files and directories currently installed.) 866s Preparing to unpack .../python3-minimal_3.12.7-1_s390x.deb ... 866s Unpacking python3-minimal (3.12.7-1) over (3.12.6-0ubuntu1) ... 
866s Setting up python3-minimal (3.12.7-1) ... 866s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 55510 files and directories currently installed.) 866s Preparing to unpack .../python3_3.12.7-1_s390x.deb ... 867s Unpacking python3 (3.12.7-1) over (3.12.6-0ubuntu1) ... 867s Preparing to unpack .../libpython3-stdlib_3.12.7-1_s390x.deb ... 867s Unpacking libpython3-stdlib:s390x (3.12.7-1) over (3.12.6-0ubuntu1) ... 867s Selecting previously unselected package python3.13-gdbm. 867s Preparing to unpack .../python3.13-gdbm_3.13.0-2_s390x.deb ... 867s Unpacking python3.13-gdbm (3.13.0-2) ... 867s Preparing to unpack .../python3-gdbm_3.12.7-1_s390x.deb ... 867s Unpacking python3-gdbm:s390x (3.12.7-1) over (3.12.6-1ubuntu1) ... 867s Preparing to unpack .../libgpgme11t64_1.23.2-5ubuntu4_s390x.deb ... 867s Unpacking libgpgme11t64:s390x (1.23.2-5ubuntu4) over (1.18.0-4.1ubuntu4) ... 867s Setting up libgpgme11t64:s390x (1.23.2-5ubuntu4) ... 867s Setting up python3.13-gdbm (3.13.0-2) ... 867s Setting up libpython3-stdlib:s390x (3.12.7-1) ... 867s Setting up python3 (3.12.7-1) ... 867s Setting up python3-gdbm:s390x (3.12.7-1) ... 867s Processing triggers for man-db (2.12.1-3) ... 867s Processing triggers for libc-bin (2.40-1ubuntu3) ... 868s Reading package lists... 868s Building dependency tree... 868s Reading state information... 868s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
868s Hit:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease 868s Hit:2 http://ftpmaster.internal/ubuntu plucky InRelease 868s Hit:3 http://ftpmaster.internal/ubuntu plucky-updates InRelease 868s Hit:4 http://ftpmaster.internal/ubuntu plucky-security InRelease 869s Reading package lists... 869s Reading package lists... 869s Building dependency tree... 869s Reading state information... 870s Calculating upgrade... 870s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 870s Reading package lists... 870s Building dependency tree... 870s Reading state information... 870s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 874s Reading package lists... 874s Building dependency tree... 874s Reading state information... 874s Starting pkgProblemResolver with broken count: 0 874s Starting 2 pkgProblemResolver with broken count: 0 874s Done 874s The following additional packages will be installed: 874s etcd-server fonts-font-awesome fonts-lato libio-pty-perl libipc-run-perl 874s libjs-jquery libjs-sphinxdoc libjs-underscore libjson-perl libpq5 874s libtime-duration-perl libtimedate-perl libxslt1.1 moreutils patroni 874s patroni-doc postgresql postgresql-16 postgresql-client-16 874s postgresql-client-common postgresql-common python3-behave python3-cdiff 874s python3-click python3-colorama python3-coverage python3-dateutil 874s python3-dnspython python3-etcd python3-parse python3-parse-type 874s python3-prettytable python3-psutil python3-psycopg2 python3-six 874s python3-wcwidth python3-ydiff sphinx-rtd-theme-common ssl-cert 874s Suggested packages: 874s etcd-client vip-manager haproxy postgresql-doc postgresql-doc-16 874s python-coverage-doc python3-trio python3-aioquic python3-h2 python3-httpx 874s python3-httpcore etcd python-psycopg2-doc 874s Recommended packages: 874s javascript-common libjson-xs-perl 875s The following NEW packages will be installed: 875s autopkgtest-satdep etcd-server fonts-font-awesome fonts-lato libio-pty-perl 875s 
libipc-run-perl libjs-jquery libjs-sphinxdoc libjs-underscore libjson-perl 875s libpq5 libtime-duration-perl libtimedate-perl libxslt1.1 moreutils patroni 875s patroni-doc postgresql postgresql-16 postgresql-client-16 875s postgresql-client-common postgresql-common python3-behave python3-cdiff 875s python3-click python3-colorama python3-coverage python3-dateutil 875s python3-dnspython python3-etcd python3-parse python3-parse-type 875s python3-prettytable python3-psutil python3-psycopg2 python3-six 875s python3-wcwidth python3-ydiff sphinx-rtd-theme-common ssl-cert 875s 0 upgraded, 40 newly installed, 0 to remove and 0 not upgraded. 875s Need to get 36.2 MB/36.2 MB of archives. 875s After this operation, 127 MB of additional disk space will be used. 875s Get:1 /tmp/autopkgtest.FwqS2V/2-autopkgtest-satdep.deb autopkgtest-satdep s390x 0 [772 B] 875s Get:2 http://ftpmaster.internal/ubuntu plucky/main s390x fonts-lato all 2.015-1 [2781 kB] 875s Get:3 http://ftpmaster.internal/ubuntu plucky/main s390x libjson-perl all 4.10000-1 [81.9 kB] 875s Get:4 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-client-common all 262 [36.7 kB] 875s Get:5 http://ftpmaster.internal/ubuntu plucky/main s390x ssl-cert all 1.1.2ubuntu2 [18.0 kB] 875s Get:6 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-common all 262 [162 kB] 875s Get:7 http://ftpmaster.internal/ubuntu plucky/universe s390x etcd-server s390x 3.5.15-7 [10.9 MB] 876s Get:8 http://ftpmaster.internal/ubuntu plucky/main s390x fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB] 876s Get:9 http://ftpmaster.internal/ubuntu plucky/main s390x libio-pty-perl s390x 1:1.20-1build3 [31.6 kB] 876s Get:10 http://ftpmaster.internal/ubuntu plucky/main s390x libipc-run-perl all 20231003.0-2 [91.5 kB] 876s Get:11 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB] 876s Get:12 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-underscore all 
1.13.4~dfsg+~1.11.4-3 [118 kB] 876s Get:13 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-sphinxdoc all 7.4.7-4 [158 kB] 876s Get:14 http://ftpmaster.internal/ubuntu plucky/main s390x libpq5 s390x 17.0-1 [252 kB] 876s Get:15 http://ftpmaster.internal/ubuntu plucky/main s390x libtime-duration-perl all 1.21-2 [12.3 kB] 876s Get:16 http://ftpmaster.internal/ubuntu plucky/main s390x libtimedate-perl all 2.3300-2 [34.0 kB] 876s Get:17 http://ftpmaster.internal/ubuntu plucky/main s390x libxslt1.1 s390x 1.1.39-0exp1ubuntu1 [169 kB] 876s Get:18 http://ftpmaster.internal/ubuntu plucky/universe s390x moreutils s390x 0.69-1 [57.4 kB] 876s Get:19 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-ydiff all 1.3-1 [18.4 kB] 876s Get:20 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-cdiff all 1.3-1 [1770 B] 876s Get:21 http://ftpmaster.internal/ubuntu plucky/main s390x python3-colorama all 0.4.6-4 [32.1 kB] 876s Get:22 http://ftpmaster.internal/ubuntu plucky/main s390x python3-click all 8.1.7-2 [79.5 kB] 876s Get:23 http://ftpmaster.internal/ubuntu plucky/main s390x python3-six all 1.16.0-7 [13.1 kB] 876s Get:24 http://ftpmaster.internal/ubuntu plucky/main s390x python3-dateutil all 2.9.0-2 [80.3 kB] 876s Get:25 http://ftpmaster.internal/ubuntu plucky/main s390x python3-wcwidth all 0.2.13+dfsg1-1 [26.3 kB] 876s Get:26 http://ftpmaster.internal/ubuntu plucky/main s390x python3-prettytable all 3.10.1-1 [34.0 kB] 876s Get:27 http://ftpmaster.internal/ubuntu plucky/main s390x python3-psutil s390x 5.9.8-2build2 [195 kB] 876s Get:28 http://ftpmaster.internal/ubuntu plucky/main s390x python3-psycopg2 s390x 2.9.9-2 [132 kB] 876s Get:29 http://ftpmaster.internal/ubuntu plucky/main s390x python3-dnspython all 2.6.1-1ubuntu1 [163 kB] 876s Get:30 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-etcd all 0.4.5-4 [31.9 kB] 876s Get:31 http://ftpmaster.internal/ubuntu plucky/universe s390x patroni all 3.3.1-1 [264 kB] 876s Get:32 
http://ftpmaster.internal/ubuntu plucky/main s390x sphinx-rtd-theme-common all 3.0.1+dfsg-1 [1012 kB] 876s Get:33 http://ftpmaster.internal/ubuntu plucky/universe s390x patroni-doc all 3.3.1-1 [497 kB] 876s Get:34 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-client-16 s390x 16.4-3 [1294 kB] 876s Get:35 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-16 s390x 16.4-3 [16.3 MB] 876s Get:36 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql all 16+262 [11.8 kB] 876s Get:37 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-parse all 1.20.2-1 [27.0 kB] 876s Get:38 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-parse-type all 0.6.4-1 [23.4 kB] 876s Get:39 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-behave all 1.2.6-6 [98.6 kB] 876s Get:40 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-coverage s390x 7.4.4+dfsg1-0ubuntu2 [147 kB] 877s Preconfiguring packages ... 877s Fetched 36.2 MB in 2s (19.2 MB/s) 877s Selecting previously unselected package fonts-lato. 877s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 55517 files and directories currently installed.) 877s Preparing to unpack .../00-fonts-lato_2.015-1_all.deb ... 877s Unpacking fonts-lato (2.015-1) ... 877s Selecting previously unselected package libjson-perl. 877s Preparing to unpack .../01-libjson-perl_4.10000-1_all.deb ... 877s Unpacking libjson-perl (4.10000-1) ... 
877s Selecting previously unselected package postgresql-client-common. 877s Preparing to unpack .../02-postgresql-client-common_262_all.deb ... 877s Unpacking postgresql-client-common (262) ... 877s Selecting previously unselected package ssl-cert. 877s Preparing to unpack .../03-ssl-cert_1.1.2ubuntu2_all.deb ... 877s Unpacking ssl-cert (1.1.2ubuntu2) ... 877s Selecting previously unselected package postgresql-common. 877s Preparing to unpack .../04-postgresql-common_262_all.deb ... 877s Adding 'diversion of /usr/bin/pg_config to /usr/bin/pg_config.libpq-dev by postgresql-common' 877s Unpacking postgresql-common (262) ... 877s Selecting previously unselected package etcd-server. 877s Preparing to unpack .../05-etcd-server_3.5.15-7_s390x.deb ... 877s Unpacking etcd-server (3.5.15-7) ... 877s Selecting previously unselected package fonts-font-awesome. 877s Preparing to unpack .../06-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ... 877s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 877s Selecting previously unselected package libio-pty-perl. 877s Preparing to unpack .../07-libio-pty-perl_1%3a1.20-1build3_s390x.deb ... 877s Unpacking libio-pty-perl (1:1.20-1build3) ... 877s Selecting previously unselected package libipc-run-perl. 877s Preparing to unpack .../08-libipc-run-perl_20231003.0-2_all.deb ... 877s Unpacking libipc-run-perl (20231003.0-2) ... 877s Selecting previously unselected package libjs-jquery. 877s Preparing to unpack .../09-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 877s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 877s Selecting previously unselected package libjs-underscore. 877s Preparing to unpack .../10-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 877s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 877s Selecting previously unselected package libjs-sphinxdoc. 877s Preparing to unpack .../11-libjs-sphinxdoc_7.4.7-4_all.deb ... 877s Unpacking libjs-sphinxdoc (7.4.7-4) ... 
877s Selecting previously unselected package libpq5:s390x. 877s Preparing to unpack .../12-libpq5_17.0-1_s390x.deb ... 877s Unpacking libpq5:s390x (17.0-1) ... 877s Selecting previously unselected package libtime-duration-perl. 877s Preparing to unpack .../13-libtime-duration-perl_1.21-2_all.deb ... 877s Unpacking libtime-duration-perl (1.21-2) ... 877s Selecting previously unselected package libtimedate-perl. 877s Preparing to unpack .../14-libtimedate-perl_2.3300-2_all.deb ... 877s Unpacking libtimedate-perl (2.3300-2) ... 877s Selecting previously unselected package libxslt1.1:s390x. 877s Preparing to unpack .../15-libxslt1.1_1.1.39-0exp1ubuntu1_s390x.deb ... 877s Unpacking libxslt1.1:s390x (1.1.39-0exp1ubuntu1) ... 877s Selecting previously unselected package moreutils. 877s Preparing to unpack .../16-moreutils_0.69-1_s390x.deb ... 877s Unpacking moreutils (0.69-1) ... 877s Selecting previously unselected package python3-ydiff. 877s Preparing to unpack .../17-python3-ydiff_1.3-1_all.deb ... 877s Unpacking python3-ydiff (1.3-1) ... 877s Selecting previously unselected package python3-cdiff. 877s Preparing to unpack .../18-python3-cdiff_1.3-1_all.deb ... 877s Unpacking python3-cdiff (1.3-1) ... 877s Selecting previously unselected package python3-colorama. 877s Preparing to unpack .../19-python3-colorama_0.4.6-4_all.deb ... 877s Unpacking python3-colorama (0.4.6-4) ... 877s Selecting previously unselected package python3-click. 877s Preparing to unpack .../20-python3-click_8.1.7-2_all.deb ... 877s Unpacking python3-click (8.1.7-2) ... 877s Selecting previously unselected package python3-six. 877s Preparing to unpack .../21-python3-six_1.16.0-7_all.deb ... 877s Unpacking python3-six (1.16.0-7) ... 877s Selecting previously unselected package python3-dateutil. 877s Preparing to unpack .../22-python3-dateutil_2.9.0-2_all.deb ... 877s Unpacking python3-dateutil (2.9.0-2) ... 877s Selecting previously unselected package python3-wcwidth. 
877s Preparing to unpack .../23-python3-wcwidth_0.2.13+dfsg1-1_all.deb ... 877s Unpacking python3-wcwidth (0.2.13+dfsg1-1) ... 877s Selecting previously unselected package python3-prettytable. 877s Preparing to unpack .../24-python3-prettytable_3.10.1-1_all.deb ... 877s Unpacking python3-prettytable (3.10.1-1) ... 877s Selecting previously unselected package python3-psutil. 877s Preparing to unpack .../25-python3-psutil_5.9.8-2build2_s390x.deb ... 877s Unpacking python3-psutil (5.9.8-2build2) ... 878s Selecting previously unselected package python3-psycopg2. 878s Preparing to unpack .../26-python3-psycopg2_2.9.9-2_s390x.deb ... 878s Unpacking python3-psycopg2 (2.9.9-2) ... 878s Selecting previously unselected package python3-dnspython. 878s Preparing to unpack .../27-python3-dnspython_2.6.1-1ubuntu1_all.deb ... 878s Unpacking python3-dnspython (2.6.1-1ubuntu1) ... 878s Selecting previously unselected package python3-etcd. 878s Preparing to unpack .../28-python3-etcd_0.4.5-4_all.deb ... 878s Unpacking python3-etcd (0.4.5-4) ... 878s Selecting previously unselected package patroni. 878s Preparing to unpack .../29-patroni_3.3.1-1_all.deb ... 878s Unpacking patroni (3.3.1-1) ... 878s Selecting previously unselected package sphinx-rtd-theme-common. 878s Preparing to unpack .../30-sphinx-rtd-theme-common_3.0.1+dfsg-1_all.deb ... 878s Unpacking sphinx-rtd-theme-common (3.0.1+dfsg-1) ... 878s Selecting previously unselected package patroni-doc. 878s Preparing to unpack .../31-patroni-doc_3.3.1-1_all.deb ... 878s Unpacking patroni-doc (3.3.1-1) ... 878s Selecting previously unselected package postgresql-client-16. 878s Preparing to unpack .../32-postgresql-client-16_16.4-3_s390x.deb ... 878s Unpacking postgresql-client-16 (16.4-3) ... 878s Selecting previously unselected package postgresql-16. 878s Preparing to unpack .../33-postgresql-16_16.4-3_s390x.deb ... 878s Unpacking postgresql-16 (16.4-3) ... 878s Selecting previously unselected package postgresql. 
878s Preparing to unpack .../34-postgresql_16+262_all.deb ... 878s Unpacking postgresql (16+262) ... 878s Selecting previously unselected package python3-parse. 878s Preparing to unpack .../35-python3-parse_1.20.2-1_all.deb ... 878s Unpacking python3-parse (1.20.2-1) ... 878s Selecting previously unselected package python3-parse-type. 878s Preparing to unpack .../36-python3-parse-type_0.6.4-1_all.deb ... 878s Unpacking python3-parse-type (0.6.4-1) ... 878s Selecting previously unselected package python3-behave. 878s Preparing to unpack .../37-python3-behave_1.2.6-6_all.deb ... 878s Unpacking python3-behave (1.2.6-6) ... 878s Selecting previously unselected package python3-coverage. 878s Preparing to unpack .../38-python3-coverage_7.4.4+dfsg1-0ubuntu2_s390x.deb ... 878s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 878s Selecting previously unselected package autopkgtest-satdep. 878s Preparing to unpack .../39-2-autopkgtest-satdep.deb ... 878s Unpacking autopkgtest-satdep (0) ... 878s Setting up postgresql-client-common (262) ... 878s Setting up fonts-lato (2.015-1) ... 878s Setting up libio-pty-perl (1:1.20-1build3) ... 878s Setting up python3-colorama (0.4.6-4) ... 878s Setting up python3-ydiff (1.3-1) ... 878s Setting up libpq5:s390x (17.0-1) ... 878s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 878s Setting up python3-click (8.1.7-2) ... 878s Setting up python3-psutil (5.9.8-2build2) ... 879s Setting up python3-six (1.16.0-7) ... 879s Setting up python3-wcwidth (0.2.13+dfsg1-1) ... 879s Setting up ssl-cert (1.1.2ubuntu2) ... 880s Created symlink '/etc/systemd/system/multi-user.target.wants/ssl-cert.service' → '/usr/lib/systemd/system/ssl-cert.service'. 880s Setting up python3-psycopg2 (2.9.9-2) ... 880s Setting up libipc-run-perl (20231003.0-2) ... 880s Setting up libtime-duration-perl (1.21-2) ... 880s Setting up libtimedate-perl (2.3300-2) ... 880s Setting up python3-dnspython (2.6.1-1ubuntu1) ... 880s Setting up python3-parse (1.20.2-1) ... 
880s Setting up libjson-perl (4.10000-1) ... 880s Setting up libxslt1.1:s390x (1.1.39-0exp1ubuntu1) ... 880s Setting up python3-dateutil (2.9.0-2) ... 880s Setting up etcd-server (3.5.15-7) ... 881s info: Selecting UID from range 100 to 999 ... 881s 881s info: Selecting GID from range 100 to 999 ... 881s info: Adding system user `etcd' (UID 107) ... 881s info: Adding new group `etcd' (GID 111) ... 881s info: Adding new user `etcd' (UID 107) with group `etcd' ... 881s info: Creating home directory `/var/lib/etcd/' ... 881s Created symlink '/etc/systemd/system/etcd2.service' → '/usr/lib/systemd/system/etcd.service'. 881s Created symlink '/etc/systemd/system/multi-user.target.wants/etcd.service' → '/usr/lib/systemd/system/etcd.service'. 882s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 882s Setting up python3-prettytable (3.10.1-1) ... 882s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 882s Setting up sphinx-rtd-theme-common (3.0.1+dfsg-1) ... 882s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 882s Setting up moreutils (0.69-1) ... 882s Setting up python3-etcd (0.4.5-4) ... 882s Setting up postgresql-client-16 (16.4-3) ... 883s update-alternatives: using /usr/share/postgresql/16/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode 883s Setting up python3-cdiff (1.3-1) ... 883s Setting up python3-parse-type (0.6.4-1) ... 883s Setting up postgresql-common (262) ... 883s 883s Creating config file /etc/postgresql-common/createcluster.conf with new version 883s Building PostgreSQL dictionaries from installed myspell/hunspell packages... 883s Removing obsolete dictionary files: 883s Created symlink '/etc/systemd/system/multi-user.target.wants/postgresql.service' → '/usr/lib/systemd/system/postgresql.service'. 884s Setting up libjs-sphinxdoc (7.4.7-4) ... 884s Setting up python3-behave (1.2.6-6) ... 
884s /usr/lib/python3/dist-packages/behave/formatter/ansi_escapes.py:57: SyntaxWarning: invalid escape sequence '\[' 884s _ANSI_ESCAPE_PATTERN = re.compile(u"\x1b\[\d+[mA]", re.UNICODE) 884s /usr/lib/python3/dist-packages/behave/matchers.py:267: SyntaxWarning: invalid escape sequence '\d' 884s """Registers a custom type that will be available to "parse" 884s Setting up patroni (3.3.1-1) ... 884s Created symlink '/etc/systemd/system/multi-user.target.wants/patroni.service' → '/usr/lib/systemd/system/patroni.service'. 885s Setting up postgresql-16 (16.4-3) ... 885s Creating new PostgreSQL cluster 16/main ... 885s /usr/lib/postgresql/16/bin/initdb -D /var/lib/postgresql/16/main --auth-local peer --auth-host scram-sha-256 --no-instructions 885s The files belonging to this database system will be owned by user "postgres". 885s This user must also own the server process. 885s 885s The database cluster will be initialized with locale "C.UTF-8". 885s The default database encoding has accordingly been set to "UTF8". 885s The default text search configuration will be set to "english". 885s 885s Data page checksums are disabled. 885s 885s fixing permissions on existing directory /var/lib/postgresql/16/main ... ok 885s creating subdirectories ... ok 885s selecting dynamic shared memory implementation ... posix 885s selecting default max_connections ... 100 885s selecting default shared_buffers ... 128MB 885s selecting default time zone ... Etc/UTC 885s creating configuration files ... ok 885s running bootstrap script ... ok 885s performing post-bootstrap initialization ... ok 885s syncing data to disk ... ok 888s Setting up patroni-doc (3.3.1-1) ... 888s Setting up postgresql (16+262) ... 888s Setting up autopkgtest-satdep (0) ... 888s Processing triggers for man-db (2.12.1-3) ... 889s Processing triggers for libc-bin (2.40-1ubuntu3) ... 892s (Reading database ... 58728 files and directories currently installed.) 892s Removing autopkgtest-satdep (0) ... 
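The cluster creation above invokes initdb with --auth-local peer --auth-host scram-sha-256. Per the documented semantics of those flags, they choose the authentication methods written into the generated pg_hba.conf; roughly (an illustrative sketch of the documented behaviour, not copied from this run), the resulting entries look like:

```
# pg_hba.conf entries as produced by --auth-local peer --auth-host scram-sha-256
# (illustrative; the generated file also contains comments and replication lines)
local   all             all                                     peer
host    all             all             127.0.0.1/32            scram-sha-256
host    all             all             ::1/128                 scram-sha-256
```

This matches what the acceptance tests need: local maintenance access via peer, and password (SCRAM) authentication for the TCP connections Patroni and the replicas make.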
893s autopkgtest [11:44:23]: test acceptance-etcd-basic: debian/tests/acceptance etcd features/basic_replication.feature
893s autopkgtest [11:44:23]: test acceptance-etcd-basic: [-----------------------
894s dpkg-architecture: warning: cannot determine CC system type, falling back to default (native compilation)
894s ○ etcd.service - etcd - highly-available key value store
894s      Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; preset: enabled)
894s      Active: inactive (dead) since Wed 2024-11-13 11:44:24 UTC; 8ms ago
894s    Duration: 12.254s
894s  Invocation: 7d3fd57601dc4e05be5706b9f9d9476b
894s        Docs: https://etcd.io/docs
894s              man:etcd
894s     Process: 2549 ExecStart=/usr/bin/etcd $DAEMON_ARGS (code=killed, signal=TERM)
894s    Main PID: 2549 (code=killed, signal=TERM)
894s    Mem peak: 7M
894s         CPU: 72ms
894s 
894s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"info","ts":"2024-11-13T11:44:24.788208Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"autopkgtest","data-dir":"/var/lib/etcd/default","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
894s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"warn","ts":"2024-11-13T11:44:24.788304Z","caller":"embed/serve.go:161","msg":"stopping insecure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
894s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"warn","ts":"2024-11-13T11:44:24.788484Z","caller":"embed/serve.go:163","msg":"stopped insecure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
894s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"info","ts":"2024-11-13T11:44:24.788500Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8e9e05c52164694d","current-leader-member-id":"8e9e05c52164694d"}
894s Nov 13 11:44:24 autopkgtest systemd[1]: Stopping etcd.service - etcd - highly-available key value store...
894s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"info","ts":"2024-11-13T11:44:24.790775Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"127.0.0.1:2380"}
894s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"info","ts":"2024-11-13T11:44:24.790847Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"127.0.0.1:2380"}
894s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"info","ts":"2024-11-13T11:44:24.790854Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"autopkgtest","data-dir":"/var/lib/etcd/default","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
894s Nov 13 11:44:24 autopkgtest systemd[1]: etcd.service: Deactivated successfully.
894s Nov 13 11:44:24 autopkgtest systemd[1]: Stopped etcd.service - etcd - highly-available key value store.
894s ++ ls -1r /usr/lib/postgresql/
894s + for PG_VERSION in $(ls -1r /usr/lib/postgresql/)
894s + '[' 16 == 10 -o 16 == 11 ']'
894s + echo '### PostgreSQL 16 acceptance-etcd features/basic_replication.feature ###'
894s ### PostgreSQL 16 acceptance-etcd features/basic_replication.feature ###
894s + su postgres -p -c 'set -o pipefail; ETCD_UNSUPPORTED_ARCH=s390x DCS=etcd PATH=/usr/lib/postgresql/16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin behave features/basic_replication.feature | ts'
896s Nov 13 11:44:26 Feature: basic replication # features/basic_replication.feature:1
896s Nov 13 11:44:26   We should check that the basic bootstrapping, replication and failover works.
896s Nov 13 11:44:26   Scenario: check replication of a single table  # features/basic_replication.feature:4
896s Nov 13 11:44:26     Given I start postgres0                      # features/steps/basic_replication.py:8
899s Nov 13 11:44:29     Then postgres0 is a leader after 10 seconds  # features/steps/patroni_api.py:29
899s Nov 13 11:44:29     And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
899s Nov 13 11:44:29     When I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "synchronous_mode": true} # features/steps/patroni_api.py:71
899s Nov 13 11:44:29     Then I receive a response code 200           # features/steps/patroni_api.py:98
899s Nov 13 11:44:29     When I start postgres1                       # features/steps/basic_replication.py:8
902s Nov 13 11:44:32     And I configure and start postgres2 with a tag replicatefrom postgres0 # features/steps/cascading_replication.py:7
905s Nov 13 11:44:35     And "sync" key in DCS has leader=postgres0 after 20 seconds # features/steps/cascading_replication.py:23
905s Nov 13 11:44:35     And I add the table foo to postgres0         # features/steps/basic_replication.py:54
905s Nov 13 11:44:35     Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
906s Nov 13 11:44:36     Then table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93
910s Nov 13 11:44:40 
910s Nov 13 11:44:40   Scenario: check restart of sync replica       # features/basic_replication.feature:17
910s Nov 13 11:44:40     Given I shut down postgres2                  # features/steps/basic_replication.py:29
911s Nov 13 11:44:41     Then "sync" key in DCS has sync_standby=postgres1 after 5 seconds # features/steps/cascading_replication.py:23
911s Nov 13 11:44:41     When I start postgres2                       # features/steps/basic_replication.py:8
914s Nov 13 11:44:44     And I shut down postgres1                    # features/steps/basic_replication.py:29
917s Nov 13 11:44:47     Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23
918s Nov 13 11:44:48     When I start postgres1                       # features/steps/basic_replication.py:8
921s Nov 13 11:44:51     Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
921s Nov 13 11:44:51     And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
921s Nov 13 11:44:51     And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142
921s Nov 13 11:44:51 
921s Nov 13 11:44:51   Scenario: check stuck sync replica            # features/basic_replication.feature:28
921s Nov 13 11:44:51     Given I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": true, "maximum_lag_on_syncnode": 15000000, "postgresql": {"parameters": {"synchronous_commit": "remote_apply"}}} # features/steps/patroni_api.py:71
921s Nov 13 11:44:51     Then I receive a response code 200           # features/steps/patroni_api.py:98
921s Nov 13 11:44:51     And I create table on postgres0              # features/steps/basic_replication.py:73
921s Nov 13 11:44:51     And table mytest is present on postgres1 after 2 seconds # features/steps/basic_replication.py:93
922s Nov 13 11:44:52     And table mytest is present on postgres2 after 2 seconds # features/steps/basic_replication.py:93
922s Nov 13 11:44:52     When I pause wal replay on postgres2         # features/steps/basic_replication.py:64
922s Nov 13 11:44:52     And I load data on postgres0                 # features/steps/basic_replication.py:84
923s Nov 13 11:44:53     Then "sync" key in DCS has sync_standby=postgres1 after 15 seconds # features/steps/cascading_replication.py:23
926s Nov 13 11:44:56     And I resume wal replay on postgres2         # features/steps/basic_replication.py:64
926s Nov 13 11:44:56     And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
927s Nov 13 11:44:57     And Status code on GET http://127.0.0.1:8010/async is 200 after 3 seconds # features/steps/patroni_api.py:142
927s 
Nov 13 11:44:57 When I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": null, "maximum_lag_on_syncnode": -1, "postgresql": {"parameters": {"synchronous_commit": "on"}}} # features/steps/patroni_api.py:71 927s Nov 13 11:44:57 Then I receive a response code 200 # features/steps/patroni_api.py:98 927s Nov 13 11:44:57 And I drop table on postgres0 # features/steps/basic_replication.py:73 927s Nov 13 11:44:57 927s Nov 13 11:44:57 Scenario: check multi sync replication # features/basic_replication.feature:44 927s Nov 13 11:44:57 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 2} # features/steps/patroni_api.py:71 927s Nov 13 11:44:57 Then I receive a response code 200 # features/steps/patroni_api.py:98 927s Nov 13 11:44:57 Then "sync" key in DCS has sync_standby=postgres1,postgres2 after 10 seconds # features/steps/cascading_replication.py:23 931s Nov 13 11:45:01 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 931s Nov 13 11:45:01 And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 931s Nov 13 11:45:01 When I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 1} # features/steps/patroni_api.py:71 931s Nov 13 11:45:01 Then I receive a response code 200 # features/steps/patroni_api.py:98 931s Nov 13 11:45:01 And I shut down postgres1 # features/steps/basic_replication.py:29 934s Nov 13 11:45:04 Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23 935s Nov 13 11:45:05 When I start postgres1 # features/steps/basic_replication.py:8 938s Nov 13 11:45:08 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 938s Nov 13 11:45:08 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 938s Nov 
13 11:45:08 And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142 938s Nov 13 11:45:08 938s Nov 13 11:45:08 Scenario: check the basic failover in synchronous mode # features/basic_replication.feature:59 938s Nov 13 11:45:08 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86 940s Nov 13 11:45:10 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 940s Nov 13 11:45:10 When I sleep for 2 seconds # features/steps/patroni_api.py:39 942s Nov 13 11:45:12 And I shut down postgres0 # features/steps/basic_replication.py:29 943s Nov 13 11:45:13 And I run patronictl.py resume batman # features/steps/patroni_api.py:86 945s Nov 13 11:45:15 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 945s Nov 13 11:45:15 And postgres2 role is the primary after 24 seconds # features/steps/basic_replication.py:105 964s Nov 13 11:45:34 And Response on GET http://127.0.0.1:8010/history contains recovery after 10 seconds # features/steps/patroni_api.py:156 966s Nov 13 11:45:36 And there is a postgres2_cb.log with "on_role_change master batman" in postgres2 data directory # features/steps/cascading_replication.py:12 966s Nov 13 11:45:36 When I issue a PATCH request to http://127.0.0.1:8010/config with {"synchronous_mode": null, "master_start_timeout": 0} # features/steps/patroni_api.py:71 966s Nov 13 11:45:36 Then I receive a response code 200 # features/steps/patroni_api.py:98 966s Nov 13 11:45:36 When I add the table bar to postgres2 # features/steps/basic_replication.py:54 966s Nov 13 11:45:36 Then table bar is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 969s Nov 13 11:45:39 And Response on GET http://127.0.0.1:8010/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156 969s Nov 13 11:45:39 969s Nov 13 11:45:39 Scenario: check rejoin of the former primary with pg_rewind # 
features/basic_replication.feature:75 969s Nov 13 11:45:39 Given I add the table splitbrain to postgres0 # features/steps/basic_replication.py:54 969s Nov 13 11:45:39 And I start postgres0 # features/steps/basic_replication.py:8 969s Nov 13 11:45:39 Then postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 975s Nov 13 11:45:45 When I add the table buz to postgres2 # features/steps/basic_replication.py:54 975s Nov 13 11:45:45 Then table buz is present on postgres0 after 20 seconds # features/steps/basic_replication.py:93 975s Nov 13 11:45:45 975s Nov 13 11:45:45 @reject-duplicate-name 975s Nov 13 11:45:45 Scenario: check graceful rejection when two nodes have the same name # features/basic_replication.feature:83 975s Nov 13 11:45:45 Given I start duplicate postgres0 on port 8011 # features/steps/basic_replication.py:13 977s Nov 13 11:45:47 Then there is one of ["Can't start; there is already a node named 'postgres0' running"] CRITICAL in the dup-postgres0 patroni log after 5 seconds # features/steps/basic_replication.py:121 981s Nov 13 11:45:51 982s Failed to get list of machines from http://127.0.0.1:2379/v2: MaxRetryError("HTTPConnectionPool(host='127.0.0.1', port=2379): Max retries exceeded with url: /v2/machines (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))") 982s Failed to get list of machines from http://[::1]:2379/v2: MaxRetryError("HTTPConnectionPool(host='::1', port=2379): Max retries exceeded with url: /v2/machines (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))") 982s Nov 13 11:45:52 Combined data file .coverage.autopkgtest.4743.XPbIMrNx 982s Nov 13 11:45:52 Combined data file .coverage.autopkgtest.4786.XkiuryCx 982s Nov 13 11:45:52 Combined data file .coverage.autopkgtest.4825.XfiOwjnx 982s Nov 13 11:45:52 Combined data file .coverage.autopkgtest.4895.XVjffBhx 982s Nov 13 11:45:52 Combined data file 
.coverage.autopkgtest.4938.XSZwnzlx 982s Nov 13 11:45:52 Combined data file .coverage.autopkgtest.5014.XAIjqpOx 982s Nov 13 11:45:52 Combined data file .coverage.autopkgtest.5062.XbKijQux 982s Nov 13 11:45:52 Combined data file .coverage.autopkgtest.5065.XEAWeOKx 982s Nov 13 11:45:52 Combined data file .coverage.autopkgtest.5151.XAbPwaqx 982s Nov 13 11:45:52 Combined data file .coverage.autopkgtest.5243.XUeSJdcx 985s Nov 13 11:45:55 Name Stmts Miss Cover 985s Nov 13 11:45:55 ------------------------------------------------------------------------------------------------------------- 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/OpenSSL/SSL.py 1099 603 45% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/OpenSSL/__init__.py 4 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/OpenSSL/_util.py 41 14 66% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/OpenSSL/crypto.py 1082 842 22% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/OpenSSL/version.py 10 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/_distutils_hack/__init__.py 101 96 5% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/__about__.py 5 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/__init__.py 3 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/exceptions.py 26 5 81% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/__init__.py 2 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/_oid.py 126 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/__init__.py 0 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/__init__.py 0 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/_conditional.py 50 23 54% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/binding.py 62 12 81% 985s Nov 13 11:45:55 
/usr/lib/python3/dist-packages/cryptography/hazmat/primitives/__init__.py 0 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_asymmetric.py 6 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_cipheralgorithm.py 17 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_serialization.py 79 35 56% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/__init__.py 0 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dh.py 47 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dsa.py 55 5 91% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ec.py 164 17 90% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed448.py 45 12 73% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed25519.py 43 12 72% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/padding.py 55 23 58% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/rsa.py 90 38 58% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/types.py 19 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/utils.py 14 5 64% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x448.py 43 12 72% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x25519.py 41 12 71% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/__init__.py 4 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/algorithms.py 129 35 73% 985s 
Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/base.py 140 81 42% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/modes.py 139 58 58% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/constant_time.py 6 3 50% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/hashes.py 127 20 84% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/__init__.py 5 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/base.py 7 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/ssh.py 758 602 21% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/utils.py 77 29 62% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/x509/__init__.py 70 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/x509/base.py 487 229 53% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/x509/certificate_transparency.py 42 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/x509/extensions.py 1038 569 45% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/x509/general_name.py 166 94 43% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/x509/name.py 232 141 39% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/x509/oid.py 3 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/cryptography/x509/verification.py 10 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dateutil/__init__.py 13 4 69% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dateutil/_common.py 25 15 40% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dateutil/_version.py 11 2 82% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dateutil/parser/__init__.py 33 4 88% 985s Nov 13 11:45:55 
/usr/lib/python3/dist-packages/dateutil/parser/_parser.py 813 688 15% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dateutil/parser/isoparser.py 185 150 19% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dateutil/relativedelta.py 241 206 15% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dateutil/tz/__init__.py 4 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dateutil/tz/_common.py 161 124 23% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dateutil/tz/_factories.py 49 21 57% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dateutil/tz/tz.py 800 629 21% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dateutil/tz/win.py 153 149 3% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/__init__.py 3 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/_asyncbackend.py 14 6 57% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/_ddr.py 105 86 18% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/_features.py 44 7 84% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/_immutable_ctx.py 40 5 88% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/asyncbackend.py 44 32 27% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/asyncquery.py 277 242 13% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/edns.py 270 161 40% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/entropy.py 80 49 39% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/enum.py 72 46 36% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/exception.py 60 33 45% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/flags.py 41 14 66% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/grange.py 34 30 12% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/immutable.py 41 30 27% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/inet.py 80 65 19% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/ipv4.py 27 20 26% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/ipv6.py 115 100 13% 
985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/message.py 809 662 18% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/name.py 620 427 31% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/nameserver.py 101 54 47% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/node.py 118 71 40% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/opcode.py 31 7 77% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/query.py 536 462 14% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/quic/__init__.py 26 23 12% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rcode.py 69 13 81% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rdata.py 377 269 29% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rdataclass.py 44 9 80% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rdataset.py 193 133 31% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rdatatype.py 214 25 88% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/OPT.py 34 19 44% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/SOA.py 41 26 37% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/TSIG.py 58 42 28% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/ZONEMD.py 43 27 37% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/__init__.py 2 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rdtypes/__init__.py 2 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rdtypes/svcbbase.py 397 261 34% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rdtypes/util.py 191 154 19% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/renderer.py 152 118 22% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/resolver.py 899 719 20% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/reversename.py 33 24 27% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/rrset.py 78 56 28% 985s Nov 13 11:45:55 
/usr/lib/python3/dist-packages/dns/serial.py 93 79 15% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/set.py 149 108 28% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/tokenizer.py 335 279 17% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/transaction.py 271 203 25% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/tsig.py 177 122 31% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/ttl.py 45 38 16% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/version.py 7 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/wire.py 64 42 34% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/xfr.py 148 126 15% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/zone.py 508 383 25% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/zonefile.py 429 380 11% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/dns/zonetypes.py 15 2 87% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/etcd/__init__.py 125 27 78% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/etcd/client.py 380 195 49% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/etcd/lock.py 125 103 18% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/idna/__init__.py 4 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/idna/core.py 292 257 12% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/idna/idnadata.py 4 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/idna/intranges.py 30 24 20% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/idna/package_data.py 1 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/__init__.py 13 2 85% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/__main__.py 199 67 66% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/api.py 770 429 44% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/async_executor.py 96 19 80% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/collections.py 56 6 89% 985s Nov 13 11:45:55 
/usr/lib/python3/dist-packages/patroni/config.py 371 110 70% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/config_generator.py 212 159 25% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/daemon.py 76 6 92% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/dcs/__init__.py 646 149 77% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/dcs/etcd.py 603 180 70% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/dynamic_loader.py 35 7 80% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/exceptions.py 16 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/file_perm.py 43 9 79% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/global_config.py 81 4 95% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/ha.py 1244 617 50% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/log.py 219 71 68% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/postgresql/__init__.py 821 239 71% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/postgresql/available_parameters/__init__.py 21 1 95% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/postgresql/bootstrap.py 252 91 64% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/postgresql/callback_executor.py 55 8 85% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/postgresql/cancellable.py 104 41 61% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/postgresql/config.py 813 256 69% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/postgresql/connection.py 75 7 91% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/postgresql/misc.py 41 13 68% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/postgresql/mpp/__init__.py 89 12 87% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py 170 92 46% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/postgresql/rewind.py 416 200 52% 985s Nov 13 11:45:55 
/usr/lib/python3/dist-packages/patroni/postgresql/slots.py 334 174 48% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/postgresql/sync.py 130 19 85% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/postgresql/validator.py 157 23 85% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/psycopg.py 42 16 62% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/request.py 62 7 89% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/tags.py 38 5 87% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/utils.py 350 140 60% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/validator.py 301 211 30% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/version.py 1 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/watchdog/__init__.py 2 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/watchdog/base.py 203 49 76% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/patroni/watchdog/linux.py 135 50 63% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/psutil/__init__.py 951 636 33% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/psutil/_common.py 424 212 50% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/psutil/_compat.py 302 264 13% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/psutil/_pslinux.py 1251 936 25% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/psutil/_psposix.py 96 41 57% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/psycopg2/__init__.py 19 3 84% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/psycopg2/_json.py 64 27 58% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/psycopg2/_range.py 269 172 36% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/psycopg2/errors.py 3 2 33% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/psycopg2/extensions.py 91 25 73% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/six.py 504 250 50% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/__init__.py 50 14 72% 985s Nov 13 
11:45:55 /usr/lib/python3/dist-packages/urllib3/_base_connection.py 70 52 26% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/_collections.py 234 100 57% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/_request_methods.py 53 11 79% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/_version.py 2 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/connection.py 324 100 69% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/connectionpool.py 347 130 63% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/contrib/__init__.py 0 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/contrib/pyopenssl.py 257 98 62% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/exceptions.py 115 37 68% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/fields.py 92 73 21% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/filepost.py 37 24 35% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/poolmanager.py 233 85 64% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/response.py 562 318 43% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/util/__init__.py 10 0 100% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/util/connection.py 66 42 36% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/util/proxy.py 13 6 54% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/util/request.py 104 49 53% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/util/response.py 32 17 47% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/util/retry.py 173 55 68% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/util/ssl_.py 177 78 56% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/util/ssl_match_hostname.py 66 54 18% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/util/ssltransport.py 160 112 30% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/util/timeout.py 71 14 80% 985s Nov 
13 11:45:55 /usr/lib/python3/dist-packages/urllib3/util/url.py 205 68 67% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/util/util.py 26 10 62% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/urllib3/util/wait.py 49 18 63% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/__init__.py 165 109 34% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/composer.py 92 17 82% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/constructor.py 479 276 42% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/cyaml.py 46 24 48% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/dumper.py 23 12 48% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/emitter.py 838 769 8% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/error.py 58 42 28% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/events.py 61 6 90% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/loader.py 47 24 49% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/nodes.py 29 7 76% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/parser.py 352 198 44% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/reader.py 122 34 72% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/representer.py 248 176 29% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/resolver.py 135 76 44% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/scanner.py 758 437 42% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/serializer.py 85 70 18% 985s Nov 13 11:45:55 /usr/lib/python3/dist-packages/yaml/tokens.py 76 17 78% 985s Nov 13 11:45:55 patroni/__init__.py 13 2 85% 985s Nov 13 11:45:55 patroni/__main__.py 199 199 0% 985s Nov 13 11:45:55 patroni/api.py 770 770 0% 985s Nov 13 11:45:55 patroni/async_executor.py 96 69 28% 985s Nov 13 11:45:55 patroni/collections.py 56 15 73% 985s Nov 13 11:45:55 patroni/config.py 371 196 47% 985s Nov 13 11:45:55 patroni/config_generator.py 212 212 0% 985s Nov 13 11:45:55 patroni/ctl.py 936 663 29% 
985s Nov 13 11:45:55 patroni/daemon.py 76 76 0% 985s Nov 13 11:45:55 patroni/dcs/__init__.py 646 308 52% 985s Nov 13 11:45:55 patroni/dcs/consul.py 485 485 0% 985s Nov 13 11:45:55 patroni/dcs/etcd3.py 679 679 0% 985s Nov 13 11:45:55 patroni/dcs/etcd.py 603 232 62% 985s Nov 13 11:45:55 patroni/dcs/exhibitor.py 61 61 0% 985s Nov 13 11:45:55 patroni/dcs/kubernetes.py 938 938 0% 985s Nov 13 11:45:55 patroni/dcs/raft.py 319 319 0% 985s Nov 13 11:45:55 patroni/dcs/zookeeper.py 288 288 0% 985s Nov 13 11:45:55 patroni/dynamic_loader.py 35 7 80% 985s Nov 13 11:45:55 patroni/exceptions.py 16 1 94% 985s Nov 13 11:45:55 patroni/file_perm.py 43 15 65% 985s Nov 13 11:45:55 patroni/global_config.py 81 23 72% 985s Nov 13 11:45:55 patroni/ha.py 1244 1244 0% 985s Nov 13 11:45:55 patroni/log.py 219 173 21% 985s Nov 13 11:45:55 patroni/postgresql/__init__.py 821 651 21% 985s Nov 13 11:45:55 patroni/postgresql/available_parameters/__init__.py 21 3 86% 985s Nov 13 11:45:55 patroni/postgresql/bootstrap.py 252 222 12% 985s Nov 13 11:45:55 patroni/postgresql/callback_executor.py 55 34 38% 985s Nov 13 11:45:55 patroni/postgresql/cancellable.py 104 84 19% 985s Nov 13 11:45:55 patroni/postgresql/config.py 813 698 14% 985s Nov 13 11:45:55 patroni/postgresql/connection.py 75 50 33% 985s Nov 13 11:45:55 patroni/postgresql/misc.py 41 29 29% 985s Nov 13 11:45:55 patroni/postgresql/mpp/__init__.py 89 21 76% 985s Nov 13 11:45:55 patroni/postgresql/mpp/citus.py 259 259 0% 985s Nov 13 11:45:55 patroni/postgresql/postmaster.py 170 139 18% 985s Nov 13 11:45:55 patroni/postgresql/rewind.py 416 416 0% 985s Nov 13 11:45:55 patroni/postgresql/slots.py 334 285 15% 985s Nov 13 11:45:55 patroni/postgresql/sync.py 130 96 26% 985s Nov 13 11:45:55 patroni/postgresql/validator.py 157 52 67% 985s Nov 13 11:45:55 patroni/psycopg.py 42 28 33% 985s Nov 13 11:45:55 patroni/raft_controller.py 22 22 0% 985s Nov 13 11:45:55 patroni/request.py 62 6 90% 985s Nov 13 11:45:55 patroni/scripts/__init__.py 0 0 100% 985s Nov 13 
11:45:55 patroni/scripts/aws.py 59 59 0% 985s Nov 13 11:45:55 patroni/scripts/barman/__init__.py 0 0 100% 985s Nov 13 11:45:55 patroni/scripts/barman/cli.py 51 51 0% 985s Nov 13 11:45:55 patroni/scripts/barman/config_switch.py 51 51 0% 985s Nov 13 11:45:55 patroni/scripts/barman/recover.py 37 37 0% 985s Nov 13 11:45:55 patroni/scripts/barman/utils.py 94 94 0% 985s Nov 13 11:45:55 patroni/scripts/wale_restore.py 207 207 0% 985s Nov 13 11:45:55 patroni/tags.py 38 15 61% 985s Nov 13 11:45:55 patroni/utils.py 350 246 30% 985s Nov 13 11:45:55 patroni/validator.py 301 215 29% 985s Nov 13 11:45:55 patroni/version.py 1 0 100% 985s Nov 13 11:45:55 patroni/watchdog/__init__.py 2 2 0% 985s Nov 13 11:45:55 patroni/watchdog/base.py 203 203 0% 985s Nov 13 11:45:55 patroni/watchdog/linux.py 135 135 0% 985s Nov 13 11:45:55 ------------------------------------------------------------------------------------------------------------- 985s Nov 13 11:45:55 TOTAL 53060 33815 36% 985s Nov 13 11:45:55 1 feature passed, 0 failed, 0 skipped 985s Nov 13 11:45:55 7 scenarios passed, 0 failed, 0 skipped 985s Nov 13 11:45:55 68 steps passed, 0 failed, 0 skipped, 0 undefined 985s Nov 13 11:45:55 Took 1m21.190s 985s ### End 16 acceptance-etcd features/basic_replication.feature ### 985s + echo '### End 16 acceptance-etcd features/basic_replication.feature ###' 985s + rm -f '/tmp/pgpass?' 985s ++ id -u 985s + '[' 0 -eq 0 ']' 985s + '[' -x /etc/init.d/zookeeper ']' 985s autopkgtest [11:45:55]: test acceptance-etcd-basic: -----------------------] 986s acceptance-etcd-basic PASS 986s autopkgtest [11:45:56]: test acceptance-etcd-basic: - - - - - - - - - - results - - - - - - - - - - 986s autopkgtest [11:45:56]: test acceptance-etcd: preparing testbed 987s Reading package lists... 987s Building dependency tree... 987s Reading state information... 
988s Starting pkgProblemResolver with broken count: 0 988s Starting 2 pkgProblemResolver with broken count: 0 988s Done 988s The following NEW packages will be installed: 988s autopkgtest-satdep 988s 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. 988s Need to get 0 B/768 B of archives. 988s After this operation, 0 B of additional disk space will be used. 988s Get:1 /tmp/autopkgtest.FwqS2V/3-autopkgtest-satdep.deb autopkgtest-satdep s390x 0 [768 B] 988s Selecting previously unselected package autopkgtest-satdep. 988s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58728 files and directories currently installed.) 988s Preparing to unpack .../3-autopkgtest-satdep.deb ... 988s Unpacking autopkgtest-satdep (0) ... 988s Setting up autopkgtest-satdep (0) ... 990s (Reading database ... 58728 files and directories currently installed.) 990s Removing autopkgtest-satdep (0) ... 
990s autopkgtest [11:46:00]: test acceptance-etcd: debian/tests/acceptance etcd
990s autopkgtest [11:46:00]: test acceptance-etcd: [-----------------------
991s dpkg-architecture: warning: cannot determine CC system type, falling back to default (native compilation)
991s ○ etcd.service - etcd - highly-available key value store
991s      Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; preset: enabled)
991s      Active: inactive (dead) since Wed 2024-11-13 11:44:24 UTC; 1min 36s ago
991s    Duration: 12.254s
991s  Invocation: 7d3fd57601dc4e05be5706b9f9d9476b
991s        Docs: https://etcd.io/docs
991s              man:etcd
991s     Process: 2549 ExecStart=/usr/bin/etcd $DAEMON_ARGS (code=killed, signal=TERM)
991s    Main PID: 2549 (code=killed, signal=TERM)
991s    Mem peak: 7M
991s         CPU: 72ms
991s
991s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"info","ts":"2024-11-13T11:44:24.788208Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"autopkgtest","data-dir":"/var/lib/etcd/default","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
991s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"warn","ts":"2024-11-13T11:44:24.788304Z","caller":"embed/serve.go:161","msg":"stopping insecure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
991s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"warn","ts":"2024-11-13T11:44:24.788484Z","caller":"embed/serve.go:163","msg":"stopped insecure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
991s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"info","ts":"2024-11-13T11:44:24.788500Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8e9e05c52164694d","current-leader-member-id":"8e9e05c52164694d"}
991s Nov 13 11:44:24 autopkgtest systemd[1]: Stopping etcd.service - etcd - highly-available key value store...
991s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"info","ts":"2024-11-13T11:44:24.790775Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"127.0.0.1:2380"}
991s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"info","ts":"2024-11-13T11:44:24.790847Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"127.0.0.1:2380"}
991s Nov 13 11:44:24 autopkgtest etcd[2549]: {"level":"info","ts":"2024-11-13T11:44:24.790854Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"autopkgtest","data-dir":"/var/lib/etcd/default","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
991s Nov 13 11:44:24 autopkgtest systemd[1]: etcd.service: Deactivated successfully.
991s Nov 13 11:44:24 autopkgtest systemd[1]: Stopped etcd.service - etcd - highly-available key value store.
991s ++ ls -1r /usr/lib/postgresql/
991s + for PG_VERSION in $(ls -1r /usr/lib/postgresql/)
991s + '[' 16 == 10 -o 16 == 11 ']'
991s + echo '### PostgreSQL 16 acceptance-etcd ###'
991s + su postgres -p -c 'set -o pipefail; ETCD_UNSUPPORTED_ARCH=s390x DCS=etcd PATH=/usr/lib/postgresql/16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin behave | ts'
991s ### PostgreSQL 16 acceptance-etcd ###
993s Nov 13 11:46:03 Feature: basic replication # features/basic_replication.feature:1
993s Nov 13 11:46:03   We should check that the basic bootstrapping, replication and failover works.
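Most of the behave steps in this run follow the pattern "<condition> after N seconds": the step repeatedly polls the Patroni REST API (or the DCS) and passes as soon as the condition holds, failing only when the whole timeout is spent. A minimal sketch of such a retry loop — the helper name and the stub condition are illustrative, not Patroni's actual step code:

```python
import time

def wait_for(condition, timeout, interval=1.0):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Mirrors behave steps of the form "... after N seconds": success as
    soon as the condition holds, failure only once the deadline passes.
    `condition` is any zero-argument callable returning a bool.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one final check at the deadline

# Stub standing in for e.g. a GET on a node's REST API to check its role:
attempts = []
def leader_elected():
    attempts.append(1)
    return len(attempts) >= 3  # pretend the leader appears on the third poll

assert wait_for(leader_elected, timeout=10, interval=0.01) is True
```

In the real steps the condition would be an HTTP GET against an endpoint such as `http://127.0.0.1:8008/primary`, checking the status code or response body.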
993s Nov 13 11:46:03   Scenario: check replication of a single table # features/basic_replication.feature:4
993s Nov 13 11:46:03     Given I start postgres0 # features/steps/basic_replication.py:8
996s Nov 13 11:46:06     Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
996s Nov 13 11:46:06     And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
996s Nov 13 11:46:06     When I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "synchronous_mode": true} # features/steps/patroni_api.py:71
996s Nov 13 11:46:06     Then I receive a response code 200 # features/steps/patroni_api.py:98
996s Nov 13 11:46:06     When I start postgres1 # features/steps/basic_replication.py:8
999s Nov 13 11:46:09     And I configure and start postgres2 with a tag replicatefrom postgres0 # features/steps/cascading_replication.py:7
1002s Nov 13 11:46:12     And "sync" key in DCS has leader=postgres0 after 20 seconds # features/steps/cascading_replication.py:23
1002s Nov 13 11:46:12     And I add the table foo to postgres0 # features/steps/basic_replication.py:54
1002s Nov 13 11:46:12     Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
1003s Nov 13 11:46:13     Then table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93
1007s Nov 13 11:46:17
1007s Nov 13 11:46:17   Scenario: check restart of sync replica # features/basic_replication.feature:17
1007s Nov 13 11:46:17     Given I shut down postgres2 # features/steps/basic_replication.py:29
1008s Nov 13 11:46:18     Then "sync" key in DCS has sync_standby=postgres1 after 5 seconds # features/steps/cascading_replication.py:23
1008s Nov 13 11:46:18     When I start postgres2 # features/steps/basic_replication.py:8
1011s Nov 13 11:46:21     And I shut down postgres1 # features/steps/basic_replication.py:29
1014s Nov 13 11:46:24     Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23
1015s Nov 13 11:46:25     When I start postgres1 # features/steps/basic_replication.py:8
1018s Nov 13 11:46:28     Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
1019s Nov 13 11:46:29     And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
1019s Nov 13 11:46:29     And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142
1019s Nov 13 11:46:29
1019s Nov 13 11:46:29   Scenario: check stuck sync replica # features/basic_replication.feature:28
1019s Nov 13 11:46:29     Given I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": true, "maximum_lag_on_syncnode": 15000000, "postgresql": {"parameters": {"synchronous_commit": "remote_apply"}}} # features/steps/patroni_api.py:71
1019s Nov 13 11:46:29     Then I receive a response code 200 # features/steps/patroni_api.py:98
1019s Nov 13 11:46:29     And I create table on postgres0 # features/steps/basic_replication.py:73
1019s Nov 13 11:46:29     And table mytest is present on postgres1 after 2 seconds # features/steps/basic_replication.py:93
1020s Nov 13 11:46:30     And table mytest is present on postgres2 after 2 seconds # features/steps/basic_replication.py:93
1020s Nov 13 11:46:30     When I pause wal replay on postgres2 # features/steps/basic_replication.py:64
1020s Nov 13 11:46:30     And I load data on postgres0 # features/steps/basic_replication.py:84
1020s Nov 13 11:46:30     Then "sync" key in DCS has sync_standby=postgres1 after 15 seconds # features/steps/cascading_replication.py:23
1023s Nov 13 11:46:33     And I resume wal replay on postgres2 # features/steps/basic_replication.py:64
1023s Nov 13 11:46:33     And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
1025s Nov 13 11:46:35     And Status code on GET http://127.0.0.1:8010/async is 200 after 3 seconds # features/steps/patroni_api.py:142
1025s Nov 13 11:46:35     When I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": null, "maximum_lag_on_syncnode": -1, "postgresql": {"parameters": {"synchronous_commit": "on"}}} # features/steps/patroni_api.py:71
1025s Nov 13 11:46:35     Then I receive a response code 200 # features/steps/patroni_api.py:98
1025s Nov 13 11:46:35     And I drop table on postgres0 # features/steps/basic_replication.py:73
1025s Nov 13 11:46:35
1025s Nov 13 11:46:35   Scenario: check multi sync replication # features/basic_replication.feature:44
1025s Nov 13 11:46:35     Given I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 2} # features/steps/patroni_api.py:71
1025s Nov 13 11:46:35     Then I receive a response code 200 # features/steps/patroni_api.py:98
1025s Nov 13 11:46:35     Then "sync" key in DCS has sync_standby=postgres1,postgres2 after 10 seconds # features/steps/cascading_replication.py:23
1029s Nov 13 11:46:39     And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
1029s Nov 13 11:46:39     And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
1029s Nov 13 11:46:39     When I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 1} # features/steps/patroni_api.py:71
1029s Nov 13 11:46:39     Then I receive a response code 200 # features/steps/patroni_api.py:98
1029s Nov 13 11:46:39     And I shut down postgres1 # features/steps/basic_replication.py:29
1032s Nov 13 11:46:42     Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23
1033s Nov 13 11:46:43     When I start postgres1 # features/steps/basic_replication.py:8
1036s Nov 13 11:46:46     Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
1036s Nov 13 11:46:46     And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
1036s Nov 13 11:46:46     And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142
1036s Nov 13 11:46:46
1036s Nov 13 11:46:46   Scenario: check the basic failover in synchronous mode # features/basic_replication.feature:59
1036s Nov 13 11:46:46     Given I run patronictl.py pause batman # features/steps/patroni_api.py:86
1038s Nov 13 11:46:48     Then I receive a response returncode 0 # features/steps/patroni_api.py:98
1038s Nov 13 11:46:48     When I sleep for 2 seconds # features/steps/patroni_api.py:39
1040s Nov 13 11:46:50     And I shut down postgres0 # features/steps/basic_replication.py:29
1041s Nov 13 11:46:51     And I run patronictl.py resume batman # features/steps/patroni_api.py:86
1043s Nov 13 11:46:53     Then I receive a response returncode 0 # features/steps/patroni_api.py:98
1043s Nov 13 11:46:53     And postgres2 role is the primary after 24 seconds # features/steps/basic_replication.py:105
1062s Nov 13 11:47:12     And Response on GET http://127.0.0.1:8010/history contains recovery after 10 seconds # features/steps/patroni_api.py:156
1065s Nov 13 11:47:15     And there is a postgres2_cb.log with "on_role_change master batman" in postgres2 data directory # features/steps/cascading_replication.py:12
1065s Nov 13 11:47:15     When I issue a PATCH request to http://127.0.0.1:8010/config with {"synchronous_mode": null, "master_start_timeout": 0} # features/steps/patroni_api.py:71
1065s Nov 13 11:47:15     Then I receive a response code 200 # features/steps/patroni_api.py:98
1065s Nov 13 11:47:15     When I add the table bar to postgres2 # features/steps/basic_replication.py:54
1065s Nov 13 11:47:15     Then table bar is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
1067s Nov 13 11:47:17     And Response on GET http://127.0.0.1:8010/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156
1067s Nov 13 11:47:17
1067s Nov 13 11:47:17   Scenario: check rejoin of the former primary with pg_rewind # features/basic_replication.feature:75
1067s Nov 13 11:47:17     Given I add the table splitbrain to postgres0 # features/steps/basic_replication.py:54
1067s Nov 13 11:47:17     And I start postgres0 # features/steps/basic_replication.py:8
1067s Nov 13 11:47:17     Then postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105
1074s Nov 13 11:47:24     When I add the table buz to postgres2 # features/steps/basic_replication.py:54
1074s Nov 13 11:47:24     Then table buz is present on postgres0 after 20 seconds # features/steps/basic_replication.py:93
1074s Nov 13 11:47:24
1074s Nov 13 11:47:24   @reject-duplicate-name
1074s Nov 13 11:47:24   Scenario: check graceful rejection when two nodes have the same name # features/basic_replication.feature:83
1074s Nov 13 11:47:24     Given I start duplicate postgres0 on port 8011 # features/steps/basic_replication.py:13
1077s Nov 13 11:47:27     Then there is one of ["Can't start; there is already a node named 'postgres0' running"] CRITICAL in the dup-postgres0 patroni log after 5 seconds # features/steps/basic_replication.py:121
1081s Nov 13 11:47:31
1081s Nov 13 11:47:31 Feature: cascading replication # features/cascading_replication.feature:1
1081s Nov 13 11:47:31   We should check that patroni can do base backup and streaming from the replica
1081s Nov 13 11:47:31   Scenario: check a base backup and streaming replication from a replica # features/cascading_replication.feature:4
1081s Nov 13 11:47:31     Given I start postgres0 # features/steps/basic_replication.py:8
1084s Nov 13 11:47:34     And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
1085s Nov 13 11:47:35     And I configure and start postgres1 with a tag clonefrom true # features/steps/cascading_replication.py:7
1088s Nov 13 11:47:38     And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
1093s Nov 13 11:47:43     And I create label with "postgres0" in postgres0 data directory # features/steps/cascading_replication.py:18
1093s Nov 13 11:47:43     And I create label with "postgres1" in postgres1 data directory # features/steps/cascading_replication.py:18
1093s Nov 13 11:47:43     And "members/postgres1" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23
1093s Nov 13 11:47:43     And I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7
1096s Nov 13 11:47:46     Then replication works from postgres0 to postgres2 after 30 seconds # features/steps/basic_replication.py:112
1101s Nov 13 11:47:51     And there is a label with "postgres1" in postgres2 data directory # features/steps/cascading_replication.py:12
1108s Nov 13 11:47:57
1108s SKIP FEATURE citus: Citus extenstion isn't available
1108s SKIP Scenario check that worker cluster is registered in the coordinator: Citus extenstion isn't available
1108s SKIP Scenario coordinator failover updates pg_dist_node: Citus extenstion isn't available
1108s Nov 13 11:47:57 Feature: citus # features/citus.feature:1
1108s SKIP Scenario worker switchover doesn't break client queries on the coordinator: Citus extenstion isn't available
1108s SKIP Scenario worker primary restart doesn't break client queries on the coordinator: Citus extenstion isn't available
1108s SKIP Scenario check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node: Citus extenstion isn't available
1108s Nov 13 11:47:57   We should check that coordinator discovers and registers workers and clients don't have errors when worker cluster switches over
1108s Nov 13 11:47:57   Scenario: check that worker cluster is registered in the coordinator # features/citus.feature:4
1108s Nov 13 11:47:57     Given I start postgres0 in citus group 0 # None
1108s Nov 13 11:47:57     And I start postgres2 in citus group 1 # None
1108s Nov 13 11:47:57     Then postgres0 is a leader in a group 0 after 10 seconds # None
1108s Nov 13 11:47:57     And postgres2 is a leader in a group 1 after 10 seconds # None
1108s Nov 13 11:47:57     When I start postgres1 in citus group 0 # None
1108s Nov 13 11:47:57     And I start postgres3 in citus group 1 # None
1108s Nov 13 11:47:57     Then replication works from postgres0 to postgres1 after 15 seconds # None
1108s Nov 13 11:47:57     Then replication works from postgres2 to postgres3 after 15 seconds # None
1108s Nov 13 11:47:57     And postgres0 is registered in the postgres0 as the primary in group 0 after 5 seconds # None
1108s Nov 13 11:47:57     And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
1108s Nov 13 11:47:57
1108s Nov 13 11:47:57   Scenario: coordinator failover updates pg_dist_node # features/citus.feature:16
1108s Nov 13 11:47:57     Given I run patronictl.py failover batman --group 0 --candidate postgres1 --force # None
1108s Nov 13 11:47:57     Then postgres1 role is the primary after 10 seconds # None
1108s Nov 13 11:47:57     And "members/postgres0" key in a group 0 in DCS has state=running after 15 seconds # None
1108s Nov 13 11:47:57     And replication works from postgres1 to postgres0 after 15 seconds # None
1108s Nov 13 11:47:57     And postgres1 is registered in the postgres2 as the primary in group 0 after 5 seconds # None
1108s Nov 13 11:47:57     And "sync" key in a group 0 in DCS has sync_standby=postgres0 after 15 seconds # None
1108s Nov 13 11:47:57     When I run patronictl.py switchover batman --group 0 --candidate postgres0 --force # None
1108s Nov 13 11:47:57     Then postgres0 role is the primary after 10 seconds # None
1108s Nov 13 11:47:57     And replication works from postgres0 to postgres1 after 15 seconds # None
1108s Nov 13 11:47:57     And postgres0 is registered in the postgres2 as the primary in group 0 after 5 seconds # None
1108s Nov 13 11:47:57     And "sync" key in a group 0 in DCS has sync_standby=postgres1 after 15 seconds # None
1108s Nov 13 11:47:57
1108s Nov 13 11:47:57   Scenario: worker switchover doesn't break client queries on the coordinator # features/citus.feature:29
1108s Nov 13 11:47:57     Given I create a distributed table on postgres0 # None
1108s Nov 13 11:47:57     And I start a thread inserting data on postgres0 # None
1108s Nov 13 11:47:57     When I run patronictl.py switchover batman --group 1 --force # None
1108s Nov 13 11:47:57     Then I receive a response returncode 0 # None
1108s Nov 13 11:47:57     And postgres3 role is the primary after 10 seconds # None
1108s Nov 13 11:47:57     And "members/postgres2" key in a group 1 in DCS has state=running after 15 seconds # None
1108s Nov 13 11:47:57     And replication works from postgres3 to postgres2 after 15 seconds # None
1108s Nov 13 11:47:57     And postgres3 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
1108s Nov 13 11:47:57     And "sync" key in a group 1 in DCS has sync_standby=postgres2 after 15 seconds # None
1108s Nov 13 11:47:57     And a thread is still alive # None
1108s Nov 13 11:47:57     When I run patronictl.py switchover batman --group 1 --force # None
1108s Nov 13 11:47:57     Then I receive a response returncode 0 # None
1108s Nov 13 11:47:57     And postgres2 role is the primary after 10 seconds # None
1108s Nov 13 11:47:57     And replication works from postgres2 to postgres3 after 15 seconds # None
1108s Nov 13 11:47:57     And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
1108s Nov 13 11:47:57     And "sync" key in a group 1 in DCS has sync_standby=postgres3 after 15 seconds # None
1108s Nov 13 11:47:57     And a thread is still alive # None
1108s Nov 13 11:47:57     When I stop a thread # None
1108s Nov 13 11:47:57     Then a distributed table on postgres0 has expected rows # None
1108s Nov 13 11:47:57
1108s Nov 13 11:47:57   Scenario: worker primary restart doesn't break client queries on the coordinator # features/citus.feature:50
1108s Nov 13 11:47:57     Given I cleanup a distributed table on postgres0 # None
1108s Nov 13 11:47:57     And I start a thread inserting data on postgres0 # None
1108s Nov 13 11:47:57     When I run patronictl.py restart batman postgres2 --group 1 --force # None
1108s Nov 13 11:47:57     Then I receive a response returncode 0 # None
1108s Nov 13 11:47:57     And postgres2 role is the primary after 10 seconds # None
1108s Nov 13 11:47:57     And replication works from postgres2 to postgres3 after 15 seconds # None
1108s Nov 13 11:47:57     And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
1108s Nov 13 11:47:57     And a thread is still alive # None
1108s Nov 13 11:47:57     When I stop a thread # None
1108s Nov 13 11:47:57     Then a distributed table on postgres0 has expected rows # None
1108s Nov 13 11:47:57
1108s Nov 13 11:47:57   Scenario: check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node # features/citus.feature:62
1108s Nov 13 11:47:57     Given I start postgres4 in citus group 2 # None
1108s Nov 13 11:47:57     Then postgres4 is a leader in a group 2 after 10 seconds # None
1108s Nov 13 11:47:57     And "members/postgres4" key in a group 2 in DCS has role=master after 3 seconds # None
1108s Nov 13 11:47:57     When I run patronictl.py edit-config batman --group 2 -s ttl=20 --force # None
1108s Nov 13 11:47:57     Then I receive a response returncode 0 # None
1108s Nov 13 11:47:57     And I receive a response output "+ttl: 20" # None
1108s Nov 13 11:47:57     Then postgres4 is registered in the postgres2 as the primary in group 2 after 5 seconds # None
1108s Nov 13 11:47:57     When I shut down postgres4 # None
1108s Nov 13 11:47:57     Then there is a transaction in progress on postgres0 changing pg_dist_node after 5 seconds # None
1108s Nov 13 11:47:57     When I run patronictl.py restart batman postgres2 --group 1 --force # None
1108s Nov 13 11:47:57     Then a transaction finishes in 20 seconds # None
1108s Nov 13 11:47:57
1108s Nov 13 11:47:57 Feature: custom bootstrap # features/custom_bootstrap.feature:1
1108s Nov 13 11:47:57   We should check that patroni can bootstrap a new cluster from a backup
1108s Nov 13 11:47:57   Scenario: clone existing cluster using pg_basebackup # features/custom_bootstrap.feature:4
1108s Nov 13 11:47:57     Given I start postgres0 # features/steps/basic_replication.py:8
1111s Nov 13 11:48:00     Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
1111s Nov 13 11:48:00     When I add the table foo to postgres0 # features/steps/basic_replication.py:54
1111s Nov 13 11:48:00     And I start postgres1 in a cluster batman1 as a clone of postgres0 # features/steps/custom_bootstrap.py:6
1115s Nov 13 11:48:05     Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16
1116s Nov 13 11:48:06     Then table foo is present on postgres1 after 10 seconds # features/steps/basic_replication.py:93
1116s Nov 13 11:48:06
1116s Nov 13 11:48:06   Scenario: make a backup and do a restore into a new cluster # features/custom_bootstrap.feature:12
1116s Nov 13 11:48:06     Given I add the table bar to postgres1 # features/steps/basic_replication.py:54
1116s Nov 13 11:48:06     And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25
1116s Nov 13 11:48:06     When I start postgres2 in a cluster batman2 from backup # features/steps/custom_bootstrap.py:11
1120s Nov 13 11:48:10     Then postgres2 is a leader of batman2 after 30 seconds # features/steps/custom_bootstrap.py:16
1121s Nov 13 11:48:11     And table bar is present on postgres2 after 10 seconds # features/steps/basic_replication.py:93
1127s Nov 13 11:48:17
1127s Nov 13 11:48:17 Feature: dcs failsafe mode # features/dcs_failsafe_mode.feature:1
1127s Nov 13 11:48:17   We should check the basic dcs failsafe mode functioning
1127s Nov 13 11:48:17   Scenario: check failsafe mode can be successfully enabled # features/dcs_failsafe_mode.feature:4
1127s Nov 13 11:48:17     Given I start postgres0 # features/steps/basic_replication.py:8
1130s Nov 13 11:48:20     And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
1130s Nov 13 11:48:20     Then "config" key in DCS has ttl=30 after 10 seconds # features/steps/cascading_replication.py:23
1130s Nov 13 11:48:20     When I issue a PATCH request to http://127.0.0.1:8008/config with {"loop_wait": 2, "ttl": 20, "retry_timeout": 3, "failsafe_mode": true} # features/steps/patroni_api.py:71
1130s Nov 13 11:48:20     Then I receive a response code 200 # features/steps/patroni_api.py:98
1130s Nov 13 11:48:20     And Response on GET http://127.0.0.1:8008/failsafe contains postgres0 after 10 seconds # features/steps/patroni_api.py:156
1130s Nov 13 11:48:20     When I issue a GET request to http://127.0.0.1:8008/failsafe # features/steps/patroni_api.py:61
1130s Nov 13 11:48:20     Then I receive a response code 200 # features/steps/patroni_api.py:98
1130s Nov 13 11:48:20     And I receive a response postgres0 http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:98
1130s Nov 13 11:48:20     When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}},"slots":{"dcs_slot_1": null,"postgres0":null}} # features/steps/patroni_api.py:71
1130s Nov 13 11:48:20     Then I receive a response code 200 # features/steps/patroni_api.py:98
1130s Nov 13 11:48:20     When I issue a PATCH request to http://127.0.0.1:8008/config with {"slots": {"dcs_slot_0": {"type": "logical", "database": "postgres", "plugin": "test_decoding"}}} # features/steps/patroni_api.py:71
1130s Nov 13 11:48:20     Then I receive a response code 200 # features/steps/patroni_api.py:98
1130s Nov 13 11:48:20
1130s Nov 13 11:48:20   @dcs-failsafe
1130s Nov 13 11:48:20   Scenario: check one-node cluster is functioning while DCS is down # features/dcs_failsafe_mode.feature:20
1130s Nov 13 11:48:20     Given DCS is down # features/steps/dcs_failsafe_mode.py:4
1130s Nov 13 11:48:20     Then Response on GET http://127.0.0.1:8008/primary contains failsafe_mode_is_active after 12 seconds # features/steps/patroni_api.py:156
1137s Nov 13 11:48:27     And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
1137s Nov 13 11:48:27
1137s Nov 13 11:48:27   @dcs-failsafe
1137s Nov 13 11:48:27   Scenario: check new replica isn't promoted when leader is down and DCS is up # features/dcs_failsafe_mode.feature:26
1137s Nov 13 11:48:27     Given DCS is up # features/steps/dcs_failsafe_mode.py:9
1137s Nov 13 11:48:27     When I do a backup of postgres0 # features/steps/custom_bootstrap.py:25
1137s Nov 13 11:48:27     And I shut down postgres0 # features/steps/basic_replication.py:29
1139s Nov 13 11:48:29     When I start postgres1 in a cluster batman from backup with no_leader # features/steps/dcs_failsafe_mode.py:14
1142s Nov 13 11:48:32     Then postgres1 role is the replica after 12 seconds # features/steps/basic_replication.py:105
1142s Nov 13 11:48:32
1142s Nov 13 11:48:32   Scenario: check leader and replica are both in /failsafe key after leader is back # features/dcs_failsafe_mode.feature:33
1142s Nov 13 11:48:32     Given I start postgres0 # features/steps/basic_replication.py:8
1145s Nov 13 11:48:35     And I start postgres1 # features/steps/basic_replication.py:8
1145s Nov 13 11:48:35     Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
1146s Nov 13 11:48:36     And "members/postgres1" key in DCS has state=running after 2 seconds # features/steps/cascading_replication.py:23
1146s Nov 13 11:48:36     And Response on GET http://127.0.0.1:8009/failsafe contains postgres1 after 10 seconds # features/steps/patroni_api.py:156
1147s Nov 13 11:48:37     When I issue a GET request to http://127.0.0.1:8009/failsafe # features/steps/patroni_api.py:61
1147s Nov 13 11:48:37     Then I receive a response code 200 # features/steps/patroni_api.py:98
1147s Nov 13 11:48:37     And I receive a response postgres0 http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:98
1147s Nov 13 11:48:37     And I receive a response postgres1 http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:98
1147s Nov 13 11:48:37
1147s Nov 13 11:48:37
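The failsafe scenarios above reconfigure the running cluster by PATCHing a JSON document to Patroni's `/config` REST endpoint. The shape of such a request can be sketched with the standard library alone; the helper name below is made up, the payload is the one from the log, and nothing is actually sent since that would require a live Patroni listening on 127.0.0.1:8008:

```python
import json
import urllib.request

def build_config_patch(base_url, changes):
    """Build (but do not send) a PATCH request for Patroni's /config
    endpoint, as in the step:
      I issue a PATCH request to http://127.0.0.1:8008/config
        with {"loop_wait": 2, "ttl": 20, "retry_timeout": 3, "failsafe_mode": true}
    """
    return urllib.request.Request(
        f"{base_url}/config",
        data=json.dumps(changes).encode(),
        method="PATCH",
        headers={"Content-Type": "application/json"},
    )

req = build_config_patch(
    "http://127.0.0.1:8008",
    {"loop_wait": 2, "ttl": 20, "retry_timeout": 3, "failsafe_mode": True},
)
assert req.get_method() == "PATCH"
assert json.loads(req.data)["failsafe_mode"] is True
# Sending it would be urllib.request.urlopen(req), against a running Patroni.
```

Keys set to `null` in these payloads (e.g. `"synchronous_mode": null` earlier in the log) remove the corresponding setting from the dynamic configuration rather than setting it to a value.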
@dcs-failsafe @slot-advance
1147s Nov 13 11:48:37   Scenario: check leader and replica are functioning while DCS is down # features/dcs_failsafe_mode.feature:46
1147s Nov 13 11:48:37     Given I get all changes from physical slot dcs_slot_1 on postgres0 # features/steps/slots.py:75
1147s Nov 13 11:48:37     Then physical slot dcs_slot_1 is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
1148s Nov 13 11:48:38     And logical slot dcs_slot_0 is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
1151s Nov 13 11:48:41     And DCS is down # features/steps/dcs_failsafe_mode.py:4
1151s Nov 13 11:48:41     Then Response on GET http://127.0.0.1:8008/primary contains failsafe_mode_is_active after 12 seconds # features/steps/patroni_api.py:156
1158s Nov 13 11:48:48     Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
1158s Nov 13 11:48:48     And postgres1 role is the replica after 2 seconds # features/steps/basic_replication.py:105
1158s Nov 13 11:48:48     And replication works from postgres0 to postgres1 after 10 seconds # features/steps/basic_replication.py:112
1158s Nov 13 11:48:48     When I get all changes from logical slot dcs_slot_0 on postgres0 # features/steps/slots.py:70
1158s Nov 13 11:48:48     And I get all changes from physical slot dcs_slot_1 on postgres0 # features/steps/slots.py:75
1158s Nov 13 11:48:48     Then logical slot dcs_slot_0 is in sync between postgres0 and postgres1 after 20 seconds # features/steps/slots.py:51
1162s Nov 13 11:48:52     And physical slot dcs_slot_1 is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
1162s Nov 13 11:48:52
1162s Nov 13 11:48:52   @dcs-failsafe
1162s Nov 13 11:48:52   Scenario: check primary is demoted when one replica is shut down and DCS is down # features/dcs_failsafe_mode.feature:61
1162s Nov 13 11:48:52     Given DCS is down # features/steps/dcs_failsafe_mode.py:4
1162s Nov 13 11:48:52     And I kill postgres1 # features/steps/basic_replication.py:34
1163s Nov 13 11:48:53     And I kill postmaster on postgres1 # features/steps/basic_replication.py:44
1163s Nov 13 11:48:53 waiting for server to shut down.... done
1163s Nov 13 11:48:53 server stopped
1163s Nov 13 11:48:53     Then postgres0 role is the replica after 12 seconds # features/steps/basic_replication.py:105
1165s Nov 13 11:48:55
1165s Nov 13 11:48:55   @dcs-failsafe
1165s Nov 13 11:48:55   Scenario: check known replica is promoted when leader is down and DCS is up # features/dcs_failsafe_mode.feature:68
1165s Nov 13 11:48:55     Given I kill postgres0 # features/steps/basic_replication.py:34
1166s Nov 13 11:48:56     And I shut down postmaster on postgres0 # features/steps/basic_replication.py:39
1166s Nov 13 11:48:56 waiting for server to shut down.... done
1166s Nov 13 11:48:56 server stopped
1166s Nov 13 11:48:56     And DCS is up # features/steps/dcs_failsafe_mode.py:9
1166s Nov 13 11:48:56     When I start postgres1 # features/steps/basic_replication.py:8
1169s Nov 13 11:48:59     Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
1169s Nov 13 11:48:59     And postgres1 role is the primary after 25 seconds # features/steps/basic_replication.py:105
1171s Nov 13 11:49:01
1171s Nov 13 11:49:01   @dcs-failsafe
1171s Nov 13 11:49:01   Scenario: scale to three-node cluster # features/dcs_failsafe_mode.feature:77
1171s Nov 13 11:49:01     Given I start postgres0 # features/steps/basic_replication.py:8
1174s Nov 13 11:49:04     And I start postgres2 # features/steps/basic_replication.py:8
1177s Nov 13 11:49:07     Then "members/postgres2" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
1178s Nov 13 11:49:08     And "members/postgres0" key in DCS has state=running after 20 seconds # features/steps/cascading_replication.py:23
1178s Nov 13 11:49:08     And Response on GET http://127.0.0.1:8008/failsafe contains postgres2 after 10 seconds # features/steps/patroni_api.py:156
1178s Nov 13 11:49:08     And replication works from postgres1 to postgres0 after 10 seconds # features/steps/basic_replication.py:112
1179s Nov 13 11:49:09     And replication works from postgres1 to postgres2 after 10 seconds # features/steps/basic_replication.py:112
1180s Nov 13 11:49:10
1180s Nov 13 11:49:10   @dcs-failsafe @slot-advance
1180s Nov 13 11:49:10   Scenario: make sure permanent slots exist on replicas # features/dcs_failsafe_mode.feature:88
1180s Nov 13 11:49:10     Given I issue a PATCH request to http://127.0.0.1:8009/config with {"slots":{"dcs_slot_0":null,"dcs_slot_2":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71
1180s Nov 13 11:49:10     Then logical slot dcs_slot_2 is in sync between postgres1 and postgres0 after 20 seconds # features/steps/slots.py:51
1186s Nov 13 11:49:16     And logical slot dcs_slot_2 is in sync between postgres1 and postgres2 after 20 seconds # features/steps/slots.py:51
1187s Nov 13 11:49:17     When I get all changes from physical slot dcs_slot_1 on postgres1 # features/steps/slots.py:75
1187s Nov 13 11:49:17     Then physical slot dcs_slot_1 is in sync between postgres1 and postgres0 after 10 seconds # features/steps/slots.py:51
1188s Nov 13 11:49:18     And physical slot dcs_slot_1 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51
1188s Nov 13 11:49:18     And physical slot postgres0 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51
1188s Nov 13 11:49:18
1188s Nov 13 11:49:18   @dcs-failsafe
1188s Nov 13 11:49:18   Scenario: check three-node cluster is functioning while DCS is down # features/dcs_failsafe_mode.feature:98
1188s Nov 13 11:49:18     Given DCS is down # features/steps/dcs_failsafe_mode.py:4
1188s Nov 13 11:49:18     Then Response on GET http://127.0.0.1:8009/primary contains failsafe_mode_is_active after 12 seconds # features/steps/patroni_api.py:156
1196s Nov 13 11:49:26     Then postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105
1196s Nov 13 11:49:26     And postgres0 role is the replica after 2 seconds # features/steps/basic_replication.py:105
1196s Nov 13 11:49:26     And postgres2 role is the replica after 2 seconds # features/steps/basic_replication.py:105
1196s Nov 13 11:49:26
1196s Nov 13 11:49:26   @dcs-failsafe @slot-advance
1196s Nov 13 11:49:26   Scenario: check that permanent slots are in sync between nodes while DCS is down # features/dcs_failsafe_mode.feature:107
1196s Nov 13 11:49:26     Given replication works from postgres1 to postgres0 after 10 seconds # features/steps/basic_replication.py:112
1196s Nov 13 11:49:26     And replication works from postgres1 to postgres2 after 10 seconds # features/steps/basic_replication.py:112
1197s Nov 13 11:49:27     When I get all changes from logical slot dcs_slot_2 on postgres1 # features/steps/slots.py:70
1197s Nov 13 11:49:27     And I get all changes from physical slot dcs_slot_1 on postgres1 # features/steps/slots.py:75
1197s Nov 13 11:49:27     Then logical slot dcs_slot_2 is in sync between postgres1 and postgres0 after 20 seconds # features/steps/slots.py:51
1198s Nov 13 11:49:28     And logical slot dcs_slot_2 is in sync between postgres1 and postgres2 after 20 seconds # features/steps/slots.py:51
1198s Nov 13 11:49:28     And physical slot dcs_slot_1 is in sync between postgres1 and postgres0 after 10 seconds # features/steps/slots.py:51
1198s Nov 13 11:49:28     And physical slot dcs_slot_1 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51
1198s Nov 13 11:49:28     And physical slot postgres0 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51
1202s Nov 13 11:49:32
1202s Nov 13 11:49:32 Feature: ignored slots # features/ignored_slots.feature:1
1202s Nov 13 11:49:32
1202s Nov 13 11:49:32   Scenario: check ignored slots aren't removed on failover/switchover # features/ignored_slots.feature:2
1202s Nov 13 11:49:32     Given I start postgres1 # features/steps/basic_replication.py:8
1205s Nov 13 11:49:35     Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
1205s Nov 13 11:49:35     And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
1205s Nov 13 11:49:35     When I issue a PATCH request to http://127.0.0.1:8009/config with {"ignore_slots": [{"name": "unmanaged_slot_0", "database": "postgres", "plugin": "test_decoding", "type": "logical"}, {"name": "unmanaged_slot_1", "database": "postgres", "plugin": "test_decoding"}, {"name": "unmanaged_slot_2", "database": "postgres"}, {"name": "unmanaged_slot_3"}], "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71
1205s Nov 13 11:49:35     Then I receive a response code 200 # features/steps/patroni_api.py:98
1205s Nov 13 11:49:35     And Response on GET http://127.0.0.1:8009/config contains ignore_slots after 10 seconds # features/steps/patroni_api.py:156
1205s Nov 13 11:49:35     When I shut down postgres1 # features/steps/basic_replication.py:29
1207s Nov 13 11:49:37     And I start postgres1 # features/steps/basic_replication.py:8
1210s Nov 13 11:49:40     Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
1210s Nov 13 11:49:40     And "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23
1211s Nov 13 11:49:41     And postgres1 role is the primary after 20 seconds # features/steps/basic_replication.py:105
1211s Nov 13 11:49:41     When I create a logical replication slot unmanaged_slot_0 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
1211s Nov 13 11:49:41     And I create a logical replication slot unmanaged_slot_1 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
1211s Nov 13 11:49:41     And I create a logical replication slot unmanaged_slot_2 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
1211s Nov 13 11:49:41     And I create a logical
replication slot unmanaged_slot_3 on postgres1 with the test_decoding plugin # features/steps/slots.py:8 1211s Nov 13 11:49:41 And I create a logical replication slot dummy_slot on postgres1 with the test_decoding plugin # features/steps/slots.py:8 1211s Nov 13 11:49:41 Then postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1211s Nov 13 11:49:41 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1211s Nov 13 11:49:41 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1211s Nov 13 11:49:41 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1211s Nov 13 11:49:41 When I start postgres0 # features/steps/basic_replication.py:8 1214s Nov 13 11:49:44 Then "members/postgres0" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23 1215s Nov 13 11:49:45 And postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 1215s Nov 13 11:49:45 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112 1216s Nov 13 11:49:46 When I shut down postgres1 # features/steps/basic_replication.py:29 1218s Nov 13 11:49:48 Then "members/postgres0" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23 1219s Nov 13 11:49:49 When I start postgres1 # features/steps/basic_replication.py:8 1222s Nov 13 11:49:52 Then postgres1 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 1222s Nov 13 11:49:52 And "members/postgres1" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23 1223s Nov 13 11:49:53 And I sleep for 2 seconds # 
features/steps/patroni_api.py:39 1225s Nov 13 11:49:55 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1225s Nov 13 11:49:55 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1225s Nov 13 11:49:55 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1225s Nov 13 11:49:55 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1225s Nov 13 11:49:55 And postgres1 does not have a replication slot named dummy_slot # features/steps/slots.py:40 1225s Nov 13 11:49:55 When I shut down postgres0 # features/steps/basic_replication.py:29 1227s Nov 13 11:49:57 Then "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23 1228s Nov 13 11:49:58 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1228s Nov 13 11:49:58 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1228s Nov 13 11:49:58 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1228s Nov 13 11:49:58 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1230s Nov 13 11:50:00 1230s Nov 13 11:50:00 Feature: nostream node # features/nostream_node.feature:1 1230s Nov 13 11:50:00 1230s Nov 13 11:50:00 Scenario: check nostream node is recovering from archive # features/nostream_node.feature:3 1230s Nov 13 11:50:00 When I start postgres0 # 
features/steps/basic_replication.py:8 1233s Nov 13 11:50:03 And I configure and start postgres1 with a tag nostream true # features/steps/cascading_replication.py:7 1236s Nov 13 11:50:06 Then "members/postgres1" key in DCS has replication_state=in archive recovery after 10 seconds # features/steps/cascading_replication.py:23 1237s Nov 13 11:50:07 And replication works from postgres0 to postgres1 after 30 seconds # features/steps/basic_replication.py:112 1241s Nov 13 11:50:11 1241s Nov 13 11:50:11 @slot-advance 1241s Nov 13 11:50:11 Scenario: check permanent logical replication slots are not copied # features/nostream_node.feature:10 1241s Nov 13 11:50:11 When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}}, "slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71 1241s Nov 13 11:50:11 Then I receive a response code 200 # features/steps/patroni_api.py:98 1241s Nov 13 11:50:11 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 1244s Nov 13 11:50:14 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 1245s Nov 13 11:50:15 When I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7 1248s Nov 13 11:50:18 Then "members/postgres2" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23 1254s Nov 13 11:50:24 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40 1254s Nov 13 11:50:24 And postgres2 does not have a replication slot named test_logical # features/steps/slots.py:40 1260s Nov 13 11:50:30 1260s Nov 13 11:50:30 Feature: patroni api # features/patroni_api.feature:1 1260s Nov 13 11:50:30 We should check that patroni correctly responds to valid and not-valid 
API requests. 1260s Nov 13 11:50:30 Scenario: check API requests on a stand-alone server # features/patroni_api.feature:4 1260s Nov 13 11:50:30 Given I start postgres0 # features/steps/basic_replication.py:8 1263s Nov 13 11:50:33 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 1263s Nov 13 11:50:33 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 1263s Nov 13 11:50:33 Then I receive a response code 200 # features/steps/patroni_api.py:98 1263s Nov 13 11:50:33 And I receive a response state running # features/steps/patroni_api.py:98 1263s Nov 13 11:50:33 And I receive a response role master # features/steps/patroni_api.py:98 1263s Nov 13 11:50:33 When I issue a GET request to http://127.0.0.1:8008/standby_leader # features/steps/patroni_api.py:61 1263s Nov 13 11:50:33 Then I receive a response code 503 # features/steps/patroni_api.py:98 1263s Nov 13 11:50:33 When I issue a GET request to http://127.0.0.1:8008/health # features/steps/patroni_api.py:61 1263s Nov 13 11:50:33 Then I receive a response code 200 # features/steps/patroni_api.py:98 1263s Nov 13 11:50:33 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 1263s Nov 13 11:50:33 Then I receive a response code 503 # features/steps/patroni_api.py:98 1263s Nov 13 11:50:33 When I issue a POST request to http://127.0.0.1:8008/reinitialize with {"force": true} # features/steps/patroni_api.py:71 1263s Nov 13 11:50:33 Then I receive a response code 503 # features/steps/patroni_api.py:98 1263s Nov 13 11:50:33 And I receive a response text I am the leader, can not reinitialize # features/steps/patroni_api.py:98 1263s Nov 13 11:50:33 When I run patronictl.py switchover batman --master postgres0 --force # features/steps/patroni_api.py:86 1265s Nov 13 11:50:35 Then I receive a response returncode 1 # features/steps/patroni_api.py:98 1265s Nov 13 11:50:35 And I receive a response output "Error: No candidates found 
to switchover to" # features/steps/patroni_api.py:98 1265s Nov 13 11:50:35 When I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0"} # features/steps/patroni_api.py:71 1265s Nov 13 11:50:35 Then I receive a response code 412 # features/steps/patroni_api.py:98 1265s Nov 13 11:50:35 And I receive a response text switchover is not possible: cluster does not have members except leader # features/steps/patroni_api.py:98 1265s Nov 13 11:50:35 When I issue an empty POST request to http://127.0.0.1:8008/failover # features/steps/patroni_api.py:66 1265s Nov 13 11:50:35 Then I receive a response code 400 # features/steps/patroni_api.py:98 1265s Nov 13 11:50:35 When I issue a POST request to http://127.0.0.1:8008/failover with {"foo": "bar"} # features/steps/patroni_api.py:71 1265s Nov 13 11:50:35 Then I receive a response code 400 # features/steps/patroni_api.py:98 1265s Nov 13 11:50:35 And I receive a response text "Failover could be performed only to a specific candidate" # features/steps/patroni_api.py:98 1265s Nov 13 11:50:35 1265s Nov 13 11:50:35 Scenario: check local configuration reload # features/patroni_api.feature:32 1265s Nov 13 11:50:35 Given I add tag new_tag new_value to postgres0 config # features/steps/patroni_api.py:137 1265s Nov 13 11:50:35 And I issue an empty POST request to http://127.0.0.1:8008/reload # features/steps/patroni_api.py:66 1265s Nov 13 11:50:35 Then I receive a response code 202 # features/steps/patroni_api.py:98 1265s Nov 13 11:50:35 1265s Nov 13 11:50:35 Scenario: check dynamic configuration change via DCS # features/patroni_api.feature:37 1265s Nov 13 11:50:35 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "postgresql": {"parameters": {"max_connections": "101"}}} # features/steps/patroni_api.py:71 1265s Nov 13 11:50:35 Then I receive a response code 200 # features/steps/patroni_api.py:98 1265s Nov 13 11:50:35 And Response on GET http://127.0.0.1:8008/patroni contains 
pending_restart after 11 seconds # features/steps/patroni_api.py:156 1267s Nov 13 11:50:37 When I issue a GET request to http://127.0.0.1:8008/config # features/steps/patroni_api.py:61 1267s Nov 13 11:50:37 Then I receive a response code 200 # features/steps/patroni_api.py:98 1267s Nov 13 11:50:37 And I receive a response ttl 20 # features/steps/patroni_api.py:98 1267s Nov 13 11:50:37 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61 1267s Nov 13 11:50:37 Then I receive a response code 200 # features/steps/patroni_api.py:98 1267s Nov 13 11:50:37 And I receive a response tags {'new_tag': 'new_value'} # features/steps/patroni_api.py:98 1267s Nov 13 11:50:37 And I sleep for 4 seconds # features/steps/patroni_api.py:39 1271s Nov 13 11:50:41 1271s Nov 13 11:50:41 Scenario: check the scheduled restart # features/patroni_api.feature:49 1271s Nov 13 11:50:41 Given I run patronictl.py edit-config -p 'superuser_reserved_connections=6' --force batman # features/steps/patroni_api.py:86 1273s Nov 13 11:50:43 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 1273s Nov 13 11:50:43 And I receive a response output "+ superuser_reserved_connections: 6" # features/steps/patroni_api.py:98 1273s Nov 13 11:50:43 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 5 seconds # features/steps/patroni_api.py:156 1273s Nov 13 11:50:43 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"role": "replica"} # features/steps/patroni_api.py:124 1273s Nov 13 11:50:43 Then I receive a response code 202 # features/steps/patroni_api.py:98 1273s Nov 13 11:50:43 And I sleep for 8 seconds # features/steps/patroni_api.py:39 1281s Nov 13 11:50:51 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 10 seconds # features/steps/patroni_api.py:156 1281s Nov 13 11:50:51 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with 
{"restart_pending": "True"} # features/steps/patroni_api.py:124 1281s Nov 13 11:50:51 Then I receive a response code 202 # features/steps/patroni_api.py:98 1281s Nov 13 11:50:51 And Response on GET http://127.0.0.1:8008/patroni does not contain pending_restart after 10 seconds # features/steps/patroni_api.py:171 1288s Nov 13 11:50:58 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 1289s Nov 13 11:50:59 1289s Nov 13 11:50:59 Scenario: check API requests for the primary-replica pair in the pause mode # features/patroni_api.feature:63 1289s Nov 13 11:50:59 Given I start postgres1 # features/steps/basic_replication.py:8 1292s Nov 13 11:51:02 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 1293s Nov 13 11:51:03 When I run patronictl.py pause batman # features/steps/patroni_api.py:86 1295s Nov 13 11:51:05 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 1295s Nov 13 11:51:05 When I kill postmaster on postgres1 # features/steps/basic_replication.py:44 1295s Nov 13 11:51:05 waiting for server to shut down.... 
done 1295s Nov 13 11:51:05 server stopped 1295s Nov 13 11:51:05 And I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 1295s Nov 13 11:51:05 Then I receive a response code 503 # features/steps/patroni_api.py:98 1295s Nov 13 11:51:05 And "members/postgres1" key in DCS has state=stopped after 10 seconds # features/steps/cascading_replication.py:23 1296s Nov 13 11:51:06 When I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86 1299s Nov 13 11:51:09 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 1299s Nov 13 11:51:09 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 1300s Nov 13 11:51:10 And I sleep for 2 seconds # features/steps/patroni_api.py:39 1303s Nov 13 11:51:12 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 1303s Nov 13 11:51:12 Then I receive a response code 200 # features/steps/patroni_api.py:98 1303s Nov 13 11:51:12 And I receive a response state running # features/steps/patroni_api.py:98 1303s Nov 13 11:51:12 And I receive a response role replica # features/steps/patroni_api.py:98 1303s Nov 13 11:51:12 When I run patronictl.py reinit batman postgres1 --force --wait # features/steps/patroni_api.py:86 1306s Nov 13 11:51:16 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 1306s Nov 13 11:51:16 And I receive a response output "Success: reinitialize for member postgres1" # features/steps/patroni_api.py:98 1306s Nov 13 11:51:16 And postgres1 role is the secondary after 30 seconds # features/steps/basic_replication.py:105 1307s Nov 13 11:51:17 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 1307s Nov 13 11:51:17 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 1310s Nov 13 11:51:20 Then I receive a response returncode 0 # 
features/steps/patroni_api.py:98 1310s Nov 13 11:51:20 And I receive a response output "Success: restart on member postgres0" # features/steps/patroni_api.py:98 1310s Nov 13 11:51:20 And postgres0 role is the primary after 5 seconds # features/steps/basic_replication.py:105 1311s Nov 13 11:51:21 1311s Nov 13 11:51:21 Scenario: check the switchover via the API in the pause mode # features/patroni_api.feature:90 1311s Nov 13 11:51:21 Given I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0", "candidate": "postgres1"} # features/steps/patroni_api.py:71 1313s Nov 13 11:51:23 Then I receive a response code 200 # features/steps/patroni_api.py:98 1313s Nov 13 11:51:23 And postgres1 is a leader after 5 seconds # features/steps/patroni_api.py:29 1313s Nov 13 11:51:23 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 1313s Nov 13 11:51:23 And postgres0 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 1318s Nov 13 11:51:28 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112 1319s Nov 13 11:51:28 And "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 1319s Nov 13 11:51:28 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61 1319s Nov 13 11:51:29 Then I receive a response code 503 # features/steps/patroni_api.py:98 1319s Nov 13 11:51:29 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 1319s Nov 13 11:51:29 Then I receive a response code 200 # features/steps/patroni_api.py:98 1319s Nov 13 11:51:29 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 1319s Nov 13 11:51:29 Then I receive a response code 200 # features/steps/patroni_api.py:98 1319s Nov 13 11:51:29 When I issue a GET request to http://127.0.0.1:8009/replica # 
features/steps/patroni_api.py:61 1319s Nov 13 11:51:29 Then I receive a response code 503 # features/steps/patroni_api.py:98 1319s Nov 13 11:51:29 1319s Nov 13 11:51:29 Scenario: check the scheduled switchover # features/patroni_api.feature:107 1319s Nov 13 11:51:29 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117 1320s Nov 13 11:51:30 Then I receive a response returncode 1 # features/steps/patroni_api.py:98 1320s Nov 13 11:51:30 And I receive a response output "Can't schedule switchover in the paused state" # features/steps/patroni_api.py:98 1320s Nov 13 11:51:30 When I run patronictl.py resume batman # features/steps/patroni_api.py:86 1322s Nov 13 11:51:32 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 1322s Nov 13 11:51:32 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117 1325s Nov 13 11:51:34 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 1325s Nov 13 11:51:34 And postgres0 is a leader after 20 seconds # features/steps/patroni_api.py:29 1335s Nov 13 11:51:45 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 1335s Nov 13 11:51:45 And postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 1337s Nov 13 11:51:47 And replication works from postgres0 to postgres1 after 25 seconds # features/steps/basic_replication.py:112 1342s Nov 13 11:51:52 And "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 1342s Nov 13 11:51:52 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61 1342s Nov 13 11:51:52 Then I receive a response code 200 # features/steps/patroni_api.py:98 1342s Nov 13 11:51:52 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 1342s Nov 13 11:51:52 Then I 
receive a response code 503 # features/steps/patroni_api.py:98 1342s Nov 13 11:51:52 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 1342s Nov 13 11:51:52 Then I receive a response code 503 # features/steps/patroni_api.py:98 1342s Nov 13 11:51:52 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 1342s Nov 13 11:51:52 Then I receive a response code 200 # features/steps/patroni_api.py:98 1346s Nov 13 11:51:56 1346s Nov 13 11:51:56 Feature: permanent slots # features/permanent_slots.feature:1 1346s Nov 13 11:51:56 1346s Nov 13 11:51:56 Scenario: check that physical permanent slots are created # features/permanent_slots.feature:2 1346s Nov 13 11:51:56 Given I start postgres0 # features/steps/basic_replication.py:8 1349s Nov 13 11:51:59 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 1349s Nov 13 11:51:59 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 1349s Nov 13 11:51:59 When I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_physical":0,"postgres0":0,"postgres1":0,"postgres3":0},"postgresql":{"parameters":{"wal_level":"logical"}}} # features/steps/patroni_api.py:71 1349s Nov 13 11:51:59 Then I receive a response code 200 # features/steps/patroni_api.py:98 1349s Nov 13 11:51:59 And Response on GET http://127.0.0.1:8008/config contains slots after 10 seconds # features/steps/patroni_api.py:156 1349s Nov 13 11:51:59 When I start postgres1 # features/steps/basic_replication.py:8 1352s Nov 13 11:52:02 And I start postgres2 # features/steps/basic_replication.py:8 1355s Nov 13 11:52:05 And I configure and start postgres3 with a tag replicatefrom postgres2 # features/steps/cascading_replication.py:7 1359s Nov 13 11:52:08 Then postgres0 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80 1359s Nov 13 11:52:08 And postgres0 
has a physical replication slot named postgres1 after 10 seconds # features/steps/slots.py:80 1359s Nov 13 11:52:08 And postgres0 has a physical replication slot named postgres2 after 10 seconds # features/steps/slots.py:80 1359s Nov 13 11:52:08 And postgres2 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80 1359s Nov 13 11:52:08 1359s Nov 13 11:52:08 @slot-advance 1359s Nov 13 11:52:08 Scenario: check that logical permanent slots are created # features/permanent_slots.feature:18 1359s Nov 13 11:52:08 Given I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 1362s Nov 13 11:52:12 And I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71 1362s Nov 13 11:52:12 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 1363s Nov 13 11:52:13 1363s Nov 13 11:52:13 @slot-advance 1363s Nov 13 11:52:13 Scenario: check that permanent slots are created on replicas # features/permanent_slots.feature:24 1363s Nov 13 11:52:13 Given postgres1 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 1367s Nov 13 11:52:17 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 1367s Nov 13 11:52:17 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 1368s Nov 13 11:52:18 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51 1369s Nov 13 11:52:19 And postgres1 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80 1369s Nov 13 11:52:19 And postgres2 has a physical replication slot named 
test_physical after 2 seconds # features/steps/slots.py:80 1369s Nov 13 11:52:19 And postgres3 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80 1369s Nov 13 11:52:19 1369s Nov 13 11:52:19 @slot-advance 1369s Nov 13 11:52:19 Scenario: check permanent physical slots that match with member names # features/permanent_slots.feature:34 1369s Nov 13 11:52:19 Given postgres0 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 1369s Nov 13 11:52:19 And postgres1 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80 1369s Nov 13 11:52:19 And postgres1 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 1369s Nov 13 11:52:19 And postgres2 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80 1369s Nov 13 11:52:19 And postgres2 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 1369s Nov 13 11:52:19 And postgres2 has a physical replication slot named postgres1 after 2 seconds # features/steps/slots.py:80 1369s Nov 13 11:52:19 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40 1369s Nov 13 11:52:19 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40 1369s Nov 13 11:52:19 1369s Nov 13 11:52:19 @slot-advance 1369s Nov 13 11:52:19 Scenario: check that permanent slots are advanced on replicas # features/permanent_slots.feature:45 1369s Nov 13 11:52:19 Given I add the table replicate_me to postgres0 # features/steps/basic_replication.py:54 1369s Nov 13 11:52:19 When I get all changes from logical slot test_logical on postgres0 # features/steps/slots.py:70 1369s Nov 13 11:52:19 And I get all changes from physical slot test_physical on postgres0 # features/steps/slots.py:75 1369s Nov 13 11:52:19 Then Logical slot test_logical is in sync between postgres0 and 
postgres1 after 10 seconds # features/steps/slots.py:51
1370s Nov 13 11:52:20 And Physical slot test_physical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
1370s Nov 13 11:52:20 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
1370s Nov 13 11:52:20 And Physical slot test_physical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
1370s Nov 13 11:52:20 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51
1370s Nov 13 11:52:20 And Physical slot test_physical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51
1370s Nov 13 11:52:20 And Physical slot postgres1 is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
1370s Nov 13 11:52:20 And Physical slot postgres3 is in sync between postgres2 and postgres0 after 20 seconds # features/steps/slots.py:51
1372s Nov 13 11:52:22 And Physical slot postgres3 is in sync between postgres2 and postgres1 after 10 seconds # features/steps/slots.py:51
1372s Nov 13 11:52:22 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40
1372s Nov 13 11:52:22 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40
1372s Nov 13 11:52:22
1372s Nov 13 11:52:22 @slot-advance
1372s Nov 13 11:52:22 Scenario: check that only permanent slots are written to the /status key # features/permanent_slots.feature:62
1372s Nov 13 11:52:22 Given "status" key in DCS has test_physical in slots # features/steps/slots.py:96
1372s Nov 13 11:52:22 And "status" key in DCS has postgres0 in slots # features/steps/slots.py:96
1372s Nov 13 11:52:22 And "status" key in DCS has postgres1 in slots # features/steps/slots.py:96
1372s Nov 13 11:52:22 And "status" key in DCS does not have postgres2 in slots # features/steps/slots.py:102
1372s Nov 13 11:52:22 And "status" key in DCS has postgres3 in slots # features/steps/slots.py:96
1372s Nov 13 11:52:22
1372s Nov 13 11:52:22 Scenario: check permanent physical replication slot after failover # features/permanent_slots.feature:69
1372s Nov 13 11:52:22 Given I shut down postgres3 # features/steps/basic_replication.py:29
1373s Nov 13 11:52:23 And I shut down postgres2 # features/steps/basic_replication.py:29
1375s Nov 13 11:52:24 And I shut down postgres0 # features/steps/basic_replication.py:29
1376s Nov 13 11:52:26 Then postgres1 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80
1376s Nov 13 11:52:26 And postgres1 has a physical replication slot named postgres0 after 10 seconds # features/steps/slots.py:80
1376s Nov 13 11:52:26 And postgres1 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80
1378s Nov 13 11:52:28
1378s Nov 13 11:52:28 Feature: priority replication # features/priority_failover.feature:1
1378s Nov 13 11:52:28 We should check that we can give nodes priority during failover
1378s Nov 13 11:52:28 Scenario: check failover priority 0 prevents leaderships # features/priority_failover.feature:4
1378s Nov 13 11:52:28 Given I configure and start postgres0 with a tag failover_priority 1 # features/steps/cascading_replication.py:7
1381s Nov 13 11:52:31 And I configure and start postgres1 with a tag failover_priority 0 # features/steps/cascading_replication.py:7
1384s Nov 13 11:52:34 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
1389s Nov 13 11:52:39 When I shut down postgres0 # features/steps/basic_replication.py:29
1391s Nov 13 11:52:41 And there is one of ["following a different leader because I am not allowed to promote"] INFO in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121
1393s Nov 13 11:52:43 Then postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105
1393s Nov 13 11:52:43 When I start postgres0 # features/steps/basic_replication.py:8
1396s Nov 13 11:52:46 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
1397s Nov 13 11:52:47
1397s Nov 13 11:52:47 Scenario: check higher failover priority is respected # features/priority_failover.feature:14
1397s Nov 13 11:52:47 Given I configure and start postgres2 with a tag failover_priority 1 # features/steps/cascading_replication.py:7
1400s Nov 13 11:52:50 And I configure and start postgres3 with a tag failover_priority 2 # features/steps/cascading_replication.py:7
1403s Nov 13 11:52:53 Then replication works from postgres0 to postgres2 after 20 seconds # features/steps/basic_replication.py:112
1404s Nov 13 11:52:54 And replication works from postgres0 to postgres3 after 20 seconds # features/steps/basic_replication.py:112
1408s Nov 13 11:52:58 When I shut down postgres0 # features/steps/basic_replication.py:29
1410s Nov 13 11:53:00 Then postgres3 role is the primary after 10 seconds # features/steps/basic_replication.py:105
1411s Nov 13 11:53:01 And there is one of ["postgres3 has equally tolerable WAL position and priority 2, while this node has priority 1","Wal position of postgres3 is ahead of my wal position"] INFO in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121
1411s Nov 13 11:53:01
1411s Nov 13 11:53:01 Scenario: check conflicting configuration handling # features/priority_failover.feature:23
1411s Nov 13 11:53:01 When I set nofailover tag in postgres2 config # features/steps/patroni_api.py:131
1412s Nov 13 11:53:01 And I issue an empty POST request to http://127.0.0.1:8010/reload # features/steps/patroni_api.py:66
1412s Nov 13 11:53:02 Then I receive a response code 202 # features/steps/patroni_api.py:98
1412s Nov 13 11:53:02 And there is one of ["Conflicting configuration between nofailover: True and failover_priority: 1. Defaulting to nofailover: True"] WARNING in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121
1413s Nov 13 11:53:03 And "members/postgres2" key in DCS has tags={'failover_priority': '1', 'nofailover': True} after 10 seconds # features/steps/cascading_replication.py:23
1414s Nov 13 11:53:04 When I issue a POST request to http://127.0.0.1:8010/failover with {"candidate": "postgres2"} # features/steps/patroni_api.py:71
1414s Nov 13 11:53:04 Then I receive a response code 412 # features/steps/patroni_api.py:98
1414s Nov 13 11:53:04 And I receive a response text "failover is not possible: no good candidates have been found" # features/steps/patroni_api.py:98
1414s Nov 13 11:53:04 When I reset nofailover tag in postgres1 config # features/steps/patroni_api.py:131
1414s Nov 13 11:53:04 And I issue an empty POST request to http://127.0.0.1:8009/reload # features/steps/patroni_api.py:66
1414s Nov 13 11:53:04 Then I receive a response code 202 # features/steps/patroni_api.py:98
1414s Nov 13 11:53:04 And there is one of ["Conflicting configuration between nofailover: False and failover_priority: 0. Defaulting to nofailover: False"] WARNING in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121
1416s Nov 13 11:53:06 And "members/postgres1" key in DCS has tags={'failover_priority': '0', 'nofailover': False} after 10 seconds # features/steps/cascading_replication.py:23
1417s Nov 13 11:53:07 And I issue a POST request to http://127.0.0.1:8009/failover with {"candidate": "postgres1"} # features/steps/patroni_api.py:71
1419s Nov 13 11:53:09 Then I receive a response code 200 # features/steps/patroni_api.py:98
1419s Nov 13 11:53:09 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105
1423s Nov 13 11:53:13
1423s Nov 13 11:53:13 Feature: recovery # features/recovery.feature:1
1423s Nov 13 11:53:13 We want to check that crashed postgres is started back
1423s Nov 13 11:53:13 Scenario: check that timeline is not incremented when primary is started after crash # features/recovery.feature:4
1423s Nov 13 11:53:13 Given I start postgres0 # features/steps/basic_replication.py:8
1426s Nov 13 11:53:16 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
1427s Nov 13 11:53:17 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
1427s Nov 13 11:53:17 When I start postgres1 # features/steps/basic_replication.py:8
1430s Nov 13 11:53:20 And I add the table foo to postgres0 # features/steps/basic_replication.py:54
1430s Nov 13 11:53:20 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
1435s Nov 13 11:53:25 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44
1435s Nov 13 11:53:25 waiting for server to shut down....
done
1435s Nov 13 11:53:25 server stopped
1435s Nov 13 11:53:25 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
1437s Nov 13 11:53:27 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61
1437s Nov 13 11:53:27 Then I receive a response code 200 # features/steps/patroni_api.py:98
1437s Nov 13 11:53:27 And I receive a response role master # features/steps/patroni_api.py:98
1437s Nov 13 11:53:27 And I receive a response timeline 1 # features/steps/patroni_api.py:98
1437s Nov 13 11:53:27 And "members/postgres0" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23
1438s Nov 13 11:53:28 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112
1440s Nov 13 11:53:30
1440s Nov 13 11:53:30 Scenario: check immediate failover when master_start_timeout=0 # features/recovery.feature:20
1440s Nov 13 11:53:30 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"master_start_timeout": 0} # features/steps/patroni_api.py:71
1440s Nov 13 11:53:30 Then I receive a response code 200 # features/steps/patroni_api.py:98
1440s Nov 13 11:53:30 And Response on GET http://127.0.0.1:8008/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156
1440s Nov 13 11:53:30 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44
1440s Nov 13 11:53:30 waiting for server to shut down....
done
1440s Nov 13 11:53:30 server stopped
1440s Nov 13 11:53:30 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
1442s Nov 13 11:53:32 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105
1447s Nov 13 11:53:37
1447s Nov 13 11:53:37 Feature: standby cluster # features/standby_cluster.feature:1
1447s Nov 13 11:53:37
1447s Nov 13 11:53:37 Scenario: prepare the cluster with logical slots # features/standby_cluster.feature:2
1447s Nov 13 11:53:37 Given I start postgres1 # features/steps/basic_replication.py:8
1450s Nov 13 11:53:40 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
1450s Nov 13 11:53:40 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
1450s Nov 13 11:53:40 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"pm_1": {"type": "physical"}}, "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71
1450s Nov 13 11:53:40 Then I receive a response code 200 # features/steps/patroni_api.py:98
1450s Nov 13 11:53:40 And Response on GET http://127.0.0.1:8009/config contains slots after 10 seconds # features/steps/patroni_api.py:156
1450s Nov 13 11:53:40 And I sleep for 3 seconds # features/steps/patroni_api.py:39
1453s Nov 13 11:53:43 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"test_logical": {"type": "logical", "database": "postgres", "plugin": "test_decoding"}}} # features/steps/patroni_api.py:71
1453s Nov 13 11:53:43 Then I receive a response code 200 # features/steps/patroni_api.py:98
1453s Nov 13 11:53:43 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25
1453s Nov 13 11:53:43 When I start postgres0 # features/steps/basic_replication.py:8
1456s Nov 13 11:53:46 Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
1457s Nov 13 11:53:47
And replication works from postgres1 to postgres0 after 15 seconds # features/steps/basic_replication.py:112
1458s Nov 13 11:53:48 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61
1458s Nov 13 11:53:48 Then I receive a response code 200 # features/steps/patroni_api.py:98
1458s Nov 13 11:53:48 And I receive a response replication_state streaming # features/steps/patroni_api.py:98
1458s Nov 13 11:53:48 And "members/postgres0" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23
1458s Nov 13 11:53:48
1458s Nov 13 11:53:48 @slot-advance
1458s Nov 13 11:53:48 Scenario: check permanent logical slots are synced to the replica # features/standby_cluster.feature:22
1458s Nov 13 11:53:48 Given I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86
1461s Nov 13 11:53:51 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
1466s Nov 13 11:53:56
1466s Nov 13 11:53:56 Scenario: Detach exiting node from the cluster # features/standby_cluster.feature:26
1466s Nov 13 11:53:56 When I shut down postgres1 # features/steps/basic_replication.py:29
1468s Nov 13 11:53:58 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
1468s Nov 13 11:53:58 And "members/postgres0" key in DCS has role=master after 5 seconds # features/steps/cascading_replication.py:23
1470s Nov 13 11:53:59 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61
1470s Nov 13 11:53:59 Then I receive a response code 200 # features/steps/patroni_api.py:98
1470s Nov 13 11:53:59
1470s Nov 13 11:53:59 Scenario: check replication of a single table in a standby cluster # features/standby_cluster.feature:33
1470s Nov 13 11:53:59 Given I start postgres1 in a standby cluster batman1 as a clone of postgres0 # features/steps/standby_cluster.py:23
1473s Nov 13 11:54:02 Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16
1474s Nov 13 11:54:03 When I add the table foo to postgres0 # features/steps/basic_replication.py:54
1474s Nov 13 11:54:03 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
1474s Nov 13 11:54:03 When I issue a GET request to http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:61
1474s Nov 13 11:54:04 Then I receive a response code 200 # features/steps/patroni_api.py:98
1474s Nov 13 11:54:04 And I receive a response replication_state streaming # features/steps/patroni_api.py:98
1474s Nov 13 11:54:04 And I sleep for 3 seconds # features/steps/patroni_api.py:39
1477s Nov 13 11:54:07 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
1477s Nov 13 11:54:07 Then I receive a response code 503 # features/steps/patroni_api.py:98
1477s Nov 13 11:54:07 When I issue a GET request to http://127.0.0.1:8009/standby_leader # features/steps/patroni_api.py:61
1477s Nov 13 11:54:07 Then I receive a response code 200 # features/steps/patroni_api.py:98
1477s Nov 13 11:54:07 And I receive a response role standby_leader # features/steps/patroni_api.py:98
1477s Nov 13 11:54:07 And there is a postgres1_cb.log with "on_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12
1477s Nov 13 11:54:07 When I start postgres2 in a cluster batman1 # features/steps/standby_cluster.py:12
1480s Nov 13 11:54:10 Then postgres2 role is the replica after 24 seconds # features/steps/basic_replication.py:105
1480s Nov 13 11:54:10 And postgres2 is replicating from postgres1 after 10 seconds # features/steps/standby_cluster.py:52
1480s Nov 13 11:54:10 And table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93
1480s Nov 13 11:54:10 When I issue a GET request to http://127.0.0.1:8010/patroni # features/steps/patroni_api.py:61
1480s Nov 13 11:54:10 Then I receive a response code 200 # features/steps/patroni_api.py:98
1480s Nov 13 11:54:10 And I receive a response replication_state streaming # features/steps/patroni_api.py:98
1480s Nov 13 11:54:10 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40
1480s Nov 13 11:54:10
1480s Nov 13 11:54:10 Scenario: check switchover # features/standby_cluster.feature:57
1480s Nov 13 11:54:10 Given I run patronictl.py switchover batman1 --force # features/steps/patroni_api.py:86
1484s Nov 13 11:54:14 Then Status code on GET http://127.0.0.1:8010/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142
1484s Nov 13 11:54:14 And postgres1 is replicating from postgres2 after 32 seconds # features/steps/standby_cluster.py:52
1486s Nov 13 11:54:16 And there is a postgres2_cb.log with "on_start replica batman1\non_role_change standby_leader batman1" in postgres2 data directory # features/steps/cascading_replication.py:12
1486s Nov 13 11:54:16
1486s Nov 13 11:54:16 Scenario: check failover # features/standby_cluster.feature:63
1486s Nov 13 11:54:16 When I kill postgres2 # features/steps/basic_replication.py:34
1487s Nov 13 11:54:17 And I kill postmaster on postgres2 # features/steps/basic_replication.py:44
1487s Nov 13 11:54:17 waiting for server to shut down....
done
1487s Nov 13 11:54:17 server stopped
1487s Nov 13 11:54:17 Then postgres1 is replicating from postgres0 after 32 seconds # features/steps/standby_cluster.py:52
1506s Nov 13 11:54:36 And Status code on GET http://127.0.0.1:8009/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142
1506s Nov 13 11:54:36 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
1506s Nov 13 11:54:36 Then I receive a response code 503 # features/steps/patroni_api.py:98
1506s Nov 13 11:54:36 And I receive a response role standby_leader # features/steps/patroni_api.py:98
1506s Nov 13 11:54:36 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112
1507s Nov 13 11:54:37 And there is a postgres1_cb.log with "on_role_change replica batman1\non_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12
1511s Nov 13 11:54:41
1511s Nov 13 11:54:41 Feature: watchdog # features/watchdog.feature:1
1511s Nov 13 11:54:41 Verify that watchdog gets pinged and triggered under appropriate circumstances.
1511s Nov 13 11:54:41 Scenario: watchdog is opened and pinged # features/watchdog.feature:4
1511s Nov 13 11:54:41 Given I start postgres0 with watchdog # features/steps/watchdog.py:16
1515s Nov 13 11:54:44 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
1515s Nov 13 11:54:44 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
1515s Nov 13 11:54:44 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21
1515s Nov 13 11:54:45 And postgres0 watchdog has a 15 second timeout # features/steps/watchdog.py:34
1515s Nov 13 11:54:45
1515s Nov 13 11:54:45 Scenario: watchdog is reconfigured after global ttl changed # features/watchdog.feature:11
1515s Nov 13 11:54:45 Given I run patronictl.py edit-config batman -s ttl=30 --force # features/steps/patroni_api.py:86
1517s Nov 13 11:54:47 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
1517s Nov 13 11:54:47 And I receive a response output "+ttl: 30" # features/steps/patroni_api.py:98
1517s Nov 13 11:54:47 When I sleep for 4 seconds # features/steps/patroni_api.py:39
1521s Nov 13 11:54:51 Then postgres0 watchdog has a 25 second timeout # features/steps/watchdog.py:34
1521s Nov 13 11:54:51
1521s Nov 13 11:54:51 Scenario: watchdog is disabled during pause # features/watchdog.feature:18
1521s Nov 13 11:54:51 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86
1523s Nov 13 11:54:53 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
1523s Nov 13 11:54:53 When I sleep for 2 seconds # features/steps/patroni_api.py:39
1525s Nov 13 11:54:55 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29
1525s Nov 13 11:54:55
1525s Nov 13 11:54:55 Scenario: watchdog is opened and pinged after resume # features/watchdog.feature:24
1525s Nov 13 11:54:55 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39
1525s Nov 13 11:54:55 And I run
patronictl.py resume batman # features/steps/patroni_api.py:86
1527s Nov 13 11:54:57 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
1527s Nov 13 11:54:57 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21
1528s Nov 13 11:54:58
1528s Nov 13 11:54:58 Scenario: watchdog is disabled when shutting down # features/watchdog.feature:30
1528s Nov 13 11:54:58 Given I shut down postgres0 # features/steps/basic_replication.py:29
1530s Nov 13 11:55:00 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29
1530s Nov 13 11:55:00
1530s Nov 13 11:55:00 Scenario: watchdog is triggered if patroni stops responding # features/watchdog.feature:34
1530s Nov 13 11:55:00 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39
1530s Nov 13 11:55:00 And I start postgres0 with watchdog # features/steps/watchdog.py:16
1533s Nov 13 11:55:03 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
1534s Nov 13 11:55:04 When postgres0 hangs for 30 seconds # features/steps/watchdog.py:52
1534s Nov 13 11:55:04 Then postgres0 watchdog is triggered after 30 seconds # features/steps/watchdog.py:44
1557s Nov 13 11:55:27
1558s Failed to get list of machines from http://127.0.0.1:2379/v2: MaxRetryError("HTTPConnectionPool(host='127.0.0.1', port=2379): Max retries exceeded with url: /v2/machines (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))")
1558s Failed to get list of machines from http://[::1]:2379/v2: MaxRetryError("HTTPConnectionPool(host='::1', port=2379): Max retries exceeded with url: /v2/machines (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))")
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.10006.XXAKSKdx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.10048.XeCnwxbx
1558s Nov 13 11:55:28 Combined data file
.coverage.autopkgtest.10050.XBzCHqTx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.10053.XdUPcICx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.10064.XPIoykcx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.5551.XTlhPkSx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.5594.XPAwinnx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.5633.XqoTlXcx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.5704.XrPGoMmx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.5749.XQsHqKOx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.5821.XrQzKzRx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.5869.XXhlhKCx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.5872.XkHozDLx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.5970.XwcWsRux
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.6065.XfFwImfx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.6072.XBgVsxkx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.6116.XkGdUGqx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.6179.XDWkHAox
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.6341.XdqzCpyx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.6385.XsPPZZQx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.6439.XzPstuTx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.6525.XbwWVCEx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.6834.XRvicbEx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.6907.XjLeiKax
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.6963.XfIoWnUx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7200.XTAQKWdx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7251.XNGTRifx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7314.XsVrNDSx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7402.XmKMPVTx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7498.XgxwdlJx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7540.XIYrkGnx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7601.XJromupx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7641.XxUqiRfx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7809.XfdgegAx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7857.XYSymzsx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7872.XgjsGvOx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7910.XZKaNUWx
1558s Nov 13 11:55:28 Skipping duplicate data .coverage.autopkgtest.7956.XaxXvhZx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7961.XYjCOPux
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.7997.XWirLCwx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.8038.XWFMpRfx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.8202.XieIZdXx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.8204.XHswoBjx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.8209.XAoVszGx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.8357.XRWyZULx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.8402.XuWskUnx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.8442.XrgKNHWx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.8493.XAhEndnx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.8544.XOoGYbrx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.8755.XNrVPTDx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.8797.XKxTroRx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.8878.XlAqasrx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.8958.XKMAbRIx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.9000.XUZCXDDx
1558s Nov 13 11:55:28
Combined data file .coverage.autopkgtest.9358.XOITKevx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.9403.XsuPZDwx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.9549.XHnqSTqx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.9611.XTIbFCLx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.9662.XVcMNSwx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.9770.XnWBpHPx
1558s Nov 13 11:55:28 Combined data file .coverage.autopkgtest.9877.XjEvLcbx
1561s Nov 13 11:55:30 Name Stmts Miss Cover
1561s Nov 13 11:55:30 -------------------------------------------------------------------------------------------------------------
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/OpenSSL/SSL.py 1099 597 46%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/OpenSSL/__init__.py 4 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/OpenSSL/_util.py 41 14 66%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/OpenSSL/crypto.py 1082 842 22%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/OpenSSL/version.py 10 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/_distutils_hack/__init__.py 101 96 5%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/__about__.py 5 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/__init__.py 3 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/exceptions.py 26 5 81%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/__init__.py 2 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/_oid.py 126 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/__init__.py 0 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/__init__.py 0 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/_conditional.py 50 23 54%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/binding.py 62 12 81%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/__init__.py 0 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_asymmetric.py 6 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_cipheralgorithm.py 17 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_serialization.py 79 35 56%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/__init__.py 0 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dh.py 47 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dsa.py 55 5 91%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ec.py 164 17 90%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed448.py 45 12 73%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed25519.py 43 12 72%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/padding.py 55 23 58%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/rsa.py 90 38 58%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/types.py 19 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/utils.py 14 5 64%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x448.py 43 12 72%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x25519.py 41 12 71%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/__init__.py 4 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/algorithms.py 129 35 73%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/base.py 140 81 42%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/modes.py 139 58 58%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/constant_time.py 6 3 50%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/hashes.py 127 20 84%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/__init__.py 5 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/base.py 7 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/ssh.py 758 602 21%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/utils.py 77 29 62%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/x509/__init__.py 70 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/x509/base.py 487 229 53%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/x509/certificate_transparency.py 42 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/x509/extensions.py 1038 569 45%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/x509/general_name.py 166 94 43%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/x509/name.py 232 141 39%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/x509/oid.py 3 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/cryptography/x509/verification.py 10 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dateutil/__init__.py 13 4 69%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dateutil/_common.py 25 15 40%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dateutil/_version.py 11 2 82%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dateutil/parser/__init__.py 33 4 88%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dateutil/parser/_parser.py 813 436 46%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dateutil/parser/isoparser.py 185 150 19%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dateutil/relativedelta.py 241 206 15%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dateutil/tz/__init__.py 4 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dateutil/tz/_common.py 161 121 25%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dateutil/tz/_factories.py 49 21 57%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dateutil/tz/tz.py 800 626 22%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dateutil/tz/win.py 153 149 3%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/__init__.py 3 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/_asyncbackend.py 14 6 57%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/_ddr.py 105 86 18%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/_features.py 44 7 84%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/_immutable_ctx.py 40 5 88%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/asyncbackend.py 44 32 27%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/asyncquery.py 277 242 13%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/edns.py 270 161 40%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/entropy.py 80 49 39%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/enum.py 72 46 36%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/exception.py 60 33 45%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/flags.py 41 14 66%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/grange.py 34 30 12%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/immutable.py 41 30 27%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/inet.py 80 65 19%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/ipv4.py 27 20 26%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/ipv6.py 115 100 13%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/message.py 809 662 18%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/name.py 620 427 31%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/nameserver.py 101 54 47%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/node.py 118 71 40%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/opcode.py 31 7 77%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/query.py 536 462 14%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/quic/__init__.py 26 23 12%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rcode.py 69 13 81%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rdata.py 377 269 29%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rdataclass.py 44 9 80%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rdataset.py 193 133 31%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rdatatype.py 214 25 88%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/OPT.py 34 19 44%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/SOA.py 41 26 37%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/TSIG.py 58 42 28%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/ZONEMD.py 43 27 37%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/__init__.py 2 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rdtypes/__init__.py 2 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rdtypes/svcbbase.py 397 261 34%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rdtypes/util.py 191 154 19%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/renderer.py 152 118 22%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/resolver.py 899 719 20%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/reversename.py 33 24 27%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/rrset.py 78 56 28%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/serial.py 93 79 15%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/set.py 149 108 28%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/tokenizer.py 335 279 17%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/transaction.py 271 203 25%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/tsig.py 177 122 31%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/ttl.py 45 38 16%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/version.py 7 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/wire.py 64 42 34%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/xfr.py 148 126 15%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/zone.py 508 383 25%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/zonefile.py 429 380 11%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/dns/zonetypes.py 15 2 87%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/etcd/__init__.py 125 24 81%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/etcd/client.py 380 192 49%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/etcd/lock.py 125 103 18%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/idna/__init__.py 4 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/idna/core.py 292 257 12%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/idna/idnadata.py 4 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/idna/intranges.py 30 24 20%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/idna/package_data.py 1 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/__init__.py 13 2 85%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/__main__.py 199 63 68%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/api.py 770 279 64%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/async_executor.py 96 15 84%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/collections.py 56 6 89%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/config.py 371 94 75%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/config_generator.py 212 159 25%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/daemon.py 76 3 96%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/dcs/__init__.py 646 77 88%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/dcs/etcd.py 603 119 80%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/dynamic_loader.py 35 7 80%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/exceptions.py 16 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/file_perm.py 43 8 81%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/global_config.py 81 0 100%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/ha.py 1244 319 74%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/log.py 219 69 68%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/postgresql/__init__.py 821 173 79%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/postgresql/available_parameters/__init__.py 21 1 95%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/postgresql/bootstrap.py 252 62 75%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/postgresql/callback_executor.py 55 8 85%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/postgresql/cancellable.py 104 41 61%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/postgresql/config.py 813 216 73%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/postgresql/connection.py 75 1 99%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/postgresql/misc.py 41 8 80%
1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/postgresql/mpp/__init__.py 89 11 88%
1561s Nov 13 11:55:30
/usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py 170 85 50% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/postgresql/rewind.py 416 163 61% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/postgresql/slots.py 334 34 90% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/postgresql/sync.py 130 19 85% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/postgresql/validator.py 157 23 85% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/psycopg.py 42 16 62% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/request.py 62 6 90% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/tags.py 38 0 100% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/utils.py 350 120 66% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/validator.py 301 208 31% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/version.py 1 0 100% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/watchdog/__init__.py 2 0 100% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/watchdog/base.py 203 42 79% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/patroni/watchdog/linux.py 135 35 74% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/psutil/__init__.py 951 629 34% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/psutil/_common.py 424 212 50% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/psutil/_compat.py 302 263 13% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/psutil/_pslinux.py 1251 924 26% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/psutil/_psposix.py 96 38 60% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/psycopg2/__init__.py 19 3 84% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/psycopg2/_json.py 64 27 58% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/psycopg2/_range.py 269 172 36% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/psycopg2/errors.py 3 2 33% 1561s Nov 13 11:55:30 
/usr/lib/python3/dist-packages/psycopg2/extensions.py 91 25 73% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/six.py 504 250 50% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/__init__.py 50 14 72% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/_base_connection.py 70 52 26% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/_collections.py 234 100 57% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/_request_methods.py 53 9 83% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/_version.py 2 0 100% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/connection.py 324 99 69% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/connectionpool.py 347 120 65% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/contrib/__init__.py 0 0 100% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/contrib/pyopenssl.py 257 96 63% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/exceptions.py 115 37 68% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/fields.py 92 73 21% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/filepost.py 37 24 35% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/poolmanager.py 233 85 64% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/response.py 562 310 45% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/util/__init__.py 10 0 100% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/util/connection.py 66 42 36% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/util/proxy.py 13 6 54% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/util/request.py 104 49 53% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/util/response.py 32 17 47% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/util/retry.py 173 47 73% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/util/ssl_.py 177 78 56% 1561s Nov 13 11:55:30 
/usr/lib/python3/dist-packages/urllib3/util/ssl_match_hostname.py 66 54 18% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/util/ssltransport.py 160 112 30% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/util/timeout.py 71 14 80% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/util/url.py 205 68 67% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/util/util.py 26 10 62% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/urllib3/util/wait.py 49 18 63% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/__init__.py 165 109 34% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/composer.py 92 17 82% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/constructor.py 479 276 42% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/cyaml.py 46 24 48% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/dumper.py 23 12 48% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/emitter.py 838 769 8% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/error.py 58 42 28% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/events.py 61 6 90% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/loader.py 47 24 49% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/nodes.py 29 7 76% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/parser.py 352 198 44% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/reader.py 122 34 72% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/representer.py 248 176 29% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/resolver.py 135 76 44% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/scanner.py 758 437 42% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/serializer.py 85 70 18% 1561s Nov 13 11:55:30 /usr/lib/python3/dist-packages/yaml/tokens.py 76 17 78% 1561s Nov 13 11:55:30 patroni/__init__.py 13 2 85% 1561s Nov 13 11:55:30 patroni/__main__.py 199 199 0% 1561s Nov 13 11:55:30 
patroni/api.py 770 770 0% 1561s Nov 13 11:55:30 patroni/async_executor.py 96 69 28% 1561s Nov 13 11:55:30 patroni/collections.py 56 15 73% 1561s Nov 13 11:55:30 patroni/config.py 371 196 47% 1561s Nov 13 11:55:30 patroni/config_generator.py 212 212 0% 1561s Nov 13 11:55:30 patroni/ctl.py 936 411 56% 1561s Nov 13 11:55:30 patroni/daemon.py 76 76 0% 1561s Nov 13 11:55:30 patroni/dcs/__init__.py 646 270 58% 1561s Nov 13 11:55:30 patroni/dcs/consul.py 485 485 0% 1561s Nov 13 11:55:30 patroni/dcs/etcd3.py 679 679 0% 1561s Nov 13 11:55:30 patroni/dcs/etcd.py 603 224 63% 1561s Nov 13 11:55:30 patroni/dcs/exhibitor.py 61 61 0% 1561s Nov 13 11:55:30 patroni/dcs/kubernetes.py 938 938 0% 1561s Nov 13 11:55:30 patroni/dcs/raft.py 319 319 0% 1561s Nov 13 11:55:30 patroni/dcs/zookeeper.py 288 288 0% 1561s Nov 13 11:55:30 patroni/dynamic_loader.py 35 7 80% 1561s Nov 13 11:55:30 patroni/exceptions.py 16 1 94% 1561s Nov 13 11:55:30 patroni/file_perm.py 43 15 65% 1561s Nov 13 11:55:30 patroni/global_config.py 81 18 78% 1561s Nov 13 11:55:30 patroni/ha.py 1244 1244 0% 1561s Nov 13 11:55:30 patroni/log.py 219 173 21% 1561s Nov 13 11:55:30 patroni/postgresql/__init__.py 821 651 21% 1561s Nov 13 11:55:30 patroni/postgresql/available_parameters/__init__.py 21 3 86% 1561s Nov 13 11:55:30 patroni/postgresql/bootstrap.py 252 222 12% 1561s Nov 13 11:55:30 patroni/postgresql/callback_executor.py 55 34 38% 1561s Nov 13 11:55:30 patroni/postgresql/cancellable.py 104 84 19% 1561s Nov 13 11:55:30 patroni/postgresql/config.py 813 698 14% 1561s Nov 13 11:55:30 patroni/postgresql/connection.py 75 50 33% 1561s Nov 13 11:55:30 patroni/postgresql/misc.py 41 29 29% 1561s Nov 13 11:55:30 patroni/postgresql/mpp/__init__.py 89 21 76% 1561s Nov 13 11:55:30 patroni/postgresql/mpp/citus.py 259 259 0% 1561s Nov 13 11:55:30 patroni/postgresql/postmaster.py 170 139 18% 1561s Nov 13 11:55:30 patroni/postgresql/rewind.py 416 416 0% 1561s Nov 13 11:55:30 patroni/postgresql/slots.py 334 285 15% 1561s Nov 13 11:55:30 
patroni/postgresql/sync.py 130 96 26% 1561s Nov 13 11:55:30 patroni/postgresql/validator.py 157 52 67% 1561s Nov 13 11:55:30 patroni/psycopg.py 42 28 33% 1561s Nov 13 11:55:30 patroni/raft_controller.py 22 22 0% 1561s Nov 13 11:55:30 patroni/request.py 62 6 90% 1561s Nov 13 11:55:30 patroni/scripts/__init__.py 0 0 100% 1561s Nov 13 11:55:30 patroni/scripts/aws.py 59 59 0% 1561s Nov 13 11:55:30 patroni/scripts/barman/__init__.py 0 0 100% 1561s Nov 13 11:55:30 patroni/scripts/barman/cli.py 51 51 0% 1561s Nov 13 11:55:30 patroni/scripts/barman/config_switch.py 51 51 0% 1561s Nov 13 11:55:30 patroni/scripts/barman/recover.py 37 37 0% 1561s Nov 13 11:55:30 patroni/scripts/barman/utils.py 94 94 0% 1561s Nov 13 11:55:30 patroni/scripts/wale_restore.py 207 207 0% 1561s Nov 13 11:55:30 patroni/tags.py 38 11 71% 1561s Nov 13 11:55:30 patroni/utils.py 350 196 44% 1561s Nov 13 11:55:30 patroni/validator.py 301 215 29% 1561s Nov 13 11:55:30 patroni/version.py 1 0 100% 1561s Nov 13 11:55:30 patroni/watchdog/__init__.py 2 2 0% 1561s Nov 13 11:55:30 patroni/watchdog/base.py 203 203 0% 1561s Nov 13 11:55:30 patroni/watchdog/linux.py 135 135 0% 1561s Nov 13 11:55:30 ------------------------------------------------------------------------------------------------------------- 1561s Nov 13 11:55:30 TOTAL 53060 32137 39% 1561s Nov 13 11:55:31 12 features passed, 0 failed, 1 skipped 1561s Nov 13 11:55:31 55 scenarios passed, 0 failed, 5 skipped 1561s Nov 13 11:55:31 524 steps passed, 0 failed, 61 skipped, 0 undefined 1561s Nov 13 11:55:31 Took 8m35.695s 1561s ### End 16 acceptance-etcd ### 1561s + echo '### End 16 acceptance-etcd ###' 1561s + rm -f '/tmp/pgpass?' 
1561s ++ id -u
1561s + '[' 0 -eq 0 ']'
1561s + '[' -x /etc/init.d/zookeeper ']'
1561s autopkgtest [11:55:31]: test acceptance-etcd: -----------------------]
1562s autopkgtest [11:55:32]: test acceptance-etcd: - - - - - - - - - - results - - - - - - - - - -
1562s acceptance-etcd PASS
1562s autopkgtest [11:55:32]: test acceptance-zookeeper: preparing testbed
1683s autopkgtest [11:57:33]: testbed dpkg architecture: s390x
1683s autopkgtest [11:57:33]: testbed apt version: 2.9.8
1683s autopkgtest [11:57:33]: @@@@@@@@@@@@@@@@@@@@ test bed setup
1684s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease [73.9 kB]
1684s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/universe Sources [849 kB]
1684s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/main Sources [76.4 kB]
1684s Get:4 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse Sources [15.3 kB]
1684s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/restricted Sources [7016 B]
1684s Get:6 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x Packages [85.8 kB]
1684s Get:7 http://ftpmaster.internal/ubuntu plucky-proposed/universe s390x Packages [565 kB]
1684s Get:8 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse s390x Packages [16.6 kB]
1684s Fetched 1689 kB in 1s (2249 kB/s)
1684s Reading package lists...
1686s Reading package lists...
1686s Building dependency tree...
1686s Reading state information...
1686s Calculating upgrade...
1687s The following NEW packages will be installed:
1687s python3.13-gdbm
1687s The following packages will be upgraded:
1687s libgpgme11t64 libpython3-stdlib python3 python3-gdbm python3-minimal
1687s 5 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1687s Need to get 252 kB of archives.
1687s After this operation, 98.3 kB of additional disk space will be used.
1687s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3-minimal s390x 3.12.7-1 [27.4 kB]
1687s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3 s390x 3.12.7-1 [24.0 kB]
1687s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libpython3-stdlib s390x 3.12.7-1 [10.0 kB]
1687s Get:4 http://ftpmaster.internal/ubuntu plucky/main s390x python3.13-gdbm s390x 3.13.0-2 [31.0 kB]
1687s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3-gdbm s390x 3.12.7-1 [8642 B]
1687s Get:6 http://ftpmaster.internal/ubuntu plucky/main s390x libgpgme11t64 s390x 1.23.2-5ubuntu4 [151 kB]
1687s Fetched 252 kB in 0s (608 kB/s)
1687s (Reading database ... 55510 files and directories currently installed.)
1687s Preparing to unpack .../python3-minimal_3.12.7-1_s390x.deb ...
1687s Unpacking python3-minimal (3.12.7-1) over (3.12.6-0ubuntu1) ...
1687s Setting up python3-minimal (3.12.7-1) ...
1688s (Reading database ... 55510 files and directories currently installed.)
1688s Preparing to unpack .../python3_3.12.7-1_s390x.deb ...
1688s Unpacking python3 (3.12.7-1) over (3.12.6-0ubuntu1) ...
1688s Preparing to unpack .../libpython3-stdlib_3.12.7-1_s390x.deb ...
1688s Unpacking libpython3-stdlib:s390x (3.12.7-1) over (3.12.6-0ubuntu1) ...
1688s Selecting previously unselected package python3.13-gdbm.
1688s Preparing to unpack .../python3.13-gdbm_3.13.0-2_s390x.deb ...
1688s Unpacking python3.13-gdbm (3.13.0-2) ...
1688s Preparing to unpack .../python3-gdbm_3.12.7-1_s390x.deb ...
1688s Unpacking python3-gdbm:s390x (3.12.7-1) over (3.12.6-1ubuntu1) ...
1688s Preparing to unpack .../libgpgme11t64_1.23.2-5ubuntu4_s390x.deb ...
1688s Unpacking libgpgme11t64:s390x (1.23.2-5ubuntu4) over (1.18.0-4.1ubuntu4) ...
1688s Setting up libgpgme11t64:s390x (1.23.2-5ubuntu4) ...
1688s Setting up python3.13-gdbm (3.13.0-2) ...
1688s Setting up libpython3-stdlib:s390x (3.12.7-1) ...
1688s Setting up python3 (3.12.7-1) ...
1688s Setting up python3-gdbm:s390x (3.12.7-1) ...
1688s Processing triggers for man-db (2.12.1-3) ...
1688s Processing triggers for libc-bin (2.40-1ubuntu3) ...
1689s Reading package lists...
1689s Building dependency tree...
1689s Reading state information...
1689s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1689s Hit:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease
1689s Hit:2 http://ftpmaster.internal/ubuntu plucky InRelease
1689s Hit:3 http://ftpmaster.internal/ubuntu plucky-updates InRelease
1689s Hit:4 http://ftpmaster.internal/ubuntu plucky-security InRelease
1690s Reading package lists...
1690s Reading package lists...
1690s Building dependency tree...
1690s Reading state information...
1690s Calculating upgrade...
1691s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1691s Reading package lists...
1691s Building dependency tree...
1691s Reading state information...
1691s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1698s Reading package lists...
1698s Building dependency tree...
1698s Reading state information...
1698s Starting pkgProblemResolver with broken count: 0
1698s Starting 2 pkgProblemResolver with broken count: 0
1698s Done
1698s The following additional packages will be installed:
1698s adwaita-icon-theme at-spi2-common ca-certificates-java
1698s dconf-gsettings-backend dconf-service default-jre default-jre-headless
1698s fontconfig fontconfig-config fonts-dejavu-core fonts-dejavu-mono
1698s fonts-font-awesome fonts-lato gtk-update-icon-cache hicolor-icon-theme
1698s humanity-icon-theme java-common junit4 libactivation-java libapache-pom-java
1698s libapr1t64 libasm-java libasound2-data libasound2t64
1698s libatinject-jsr330-api-java libatk-bridge2.0-0t64 libatk1.0-0t64
1698s libatspi2.0-0t64 libavahi-client3 libavahi-common-data libavahi-common3
1698s libcairo-gobject2 libcairo2 libcares2 libcolord2 libcommons-cli-java
1698s libcommons-io-java libcommons-logging-java libcommons-parent-java
1698s libcups2t64 libdatrie1 libdconf1 libdeflate0 libdrm-amdgpu1 libdrm-radeon1
1698s libdropwizard-metrics-java libeclipse-jdt-core-compiler-batch-java
1698s libeclipse-jdt-core-java libel-api-java libepoxy0 liberror-prone-java
1698s libev4t64 libfindbugs-annotations-java libfontconfig1 libfreetype6 libgbm1
1698s libgdk-pixbuf-2.0-0 libgdk-pixbuf2.0-common libgif7 libgl1 libgl1-mesa-dri
1698s libglapi-mesa libglvnd0 libglx-mesa0 libglx0 libgraphite2-3 libgtk-3-0t64
1698s libgtk-3-common libguava-java libhamcrest-java libharfbuzz0b libio-pty-perl
1698s libipc-run-perl libjackson2-annotations-java libjackson2-core-java
1698s libjackson2-databind-java libjaxb-api-java libjbig0 libjctools-java
1698s libjetty9-extra-java libjetty9-java libjffi-java libjffi-jni
1698s libjnr-constants-java libjnr-enxio-java libjnr-ffi-java libjnr-posix-java
1698s libjnr-unixsocket-java libjnr-x86asm-java libjpeg-turbo8 libjpeg8
1698s libjs-jquery libjs-sphinxdoc libjs-underscore libjson-perl libjsp-api-java
1698s libjsr305-java liblcms2-2 liblog4j1.2-java libmail-java libnetty-java
1698s libnetty-tcnative-java libnetty-tcnative-jni libpango-1.0-0
1698s libpangocairo-1.0-0 libpangoft2-1.0-0 libpcsclite1 libpixman-1-0 libpq5
1698s libservlet-api-java libsharpyuv0 libslf4j-java libsnappy-java libsnappy-jni
1698s libsnappy1v5 libspring-beans-java libspring-core-java
1698s libtaglibs-standard-impl-java libtaglibs-standard-spec-java libthai-data
1698s libthai0 libtiff6 libtime-duration-perl libtimedate-perl libtomcat9-java
1698s libvulkan1 libwayland-client0 libwayland-cursor0 libwayland-egl1
1698s libwayland-server0 libwebp7 libwebsocket-api-java libx11-xcb1 libxcb-dri2-0
1698s libxcb-dri3-0 libxcb-glx0 libxcb-present0 libxcb-randr0 libxcb-render0
1698s libxcb-shm0 libxcb-sync1 libxcb-xfixes0 libxcomposite1 libxcursor1
1698s libxdamage1 libxfixes3 libxi6 libxinerama1 libxrandr2 libxrender1
1698s libxshmfence1 libxslt1.1 libxtst6 libxxf86vm1 libzookeeper-java
1698s mesa-libgallium moreutils openjdk-21-jre openjdk-21-jre-headless patroni
1698s patroni-doc postgresql postgresql-16 postgresql-client-16
1698s postgresql-client-common postgresql-common python3-behave python3-cdiff
1698s python3-click python3-colorama python3-coverage python3-dateutil
1698s python3-dnspython python3-eventlet python3-gevent python3-greenlet
1698s python3-kazoo python3-kerberos python3-parse python3-parse-type
1698s python3-prettytable python3-psutil python3-psycopg2 python3-pure-sasl
1698s python3-six python3-wcwidth python3-ydiff python3-zope.event
1698s python3-zope.interface sphinx-rtd-theme-common ssl-cert ubuntu-mono
1698s x11-common zookeeper zookeeperd
1698s Suggested packages:
1698s adwaita-icon-theme-legacy alsa-utils libasound2-plugins
1698s libatinject-jsr330-api-java-doc colord libavalon-framework-java
1698s libexcalibur-logkit-java cups-common gvfs libjackson2-annotations-java-doc
1698s jetty9 libjnr-ffi-java-doc libjnr-posix-java-doc libjsr305-java-doc
1698s liblcms2-utils liblog4j1.2-java-doc libbcpkix-java libcompress-lzf-java
1698s libjzlib-java liblog4j2-java libprotobuf-java pcscd libcglib-java
1698s libyaml-snake-java libaspectj-java libcommons-collections3-java tomcat9
1698s libzookeeper-java-doc libnss-mdns fonts-dejavu-extra fonts-ipafont-gothic
1698s fonts-ipafont-mincho fonts-wqy-microhei | fonts-wqy-zenhei fonts-indic
1698s vip-manager haproxy postgresql-doc postgresql-doc-16 python-coverage-doc
1698s python3-trio python3-aioquic python3-h2 python3-httpx python3-httpcore
1698s python-eventlet-doc python-gevent-doc python-greenlet-dev
1698s python-greenlet-doc python-kazoo-doc python-psycopg2-doc
1698s Recommended packages:
1698s librsvg2-common alsa-ucm-conf alsa-topology-conf at-spi2-core
1698s libgdk-pixbuf2.0-bin libgl1-amber-dri libgtk-3-bin javascript-common
1698s libjson-xs-perl mesa-vulkan-drivers | vulkan-icd libatk-wrapper-java-jni
1698s fonts-dejavu-extra
1698s The following NEW packages will be installed:
1698s adwaita-icon-theme at-spi2-common autopkgtest-satdep ca-certificates-java
1698s dconf-gsettings-backend dconf-service default-jre default-jre-headless
1698s fontconfig fontconfig-config fonts-dejavu-core fonts-dejavu-mono
1698s fonts-font-awesome fonts-lato gtk-update-icon-cache hicolor-icon-theme
1698s humanity-icon-theme java-common junit4 libactivation-java libapache-pom-java
1698s libapr1t64 libasm-java libasound2-data libasound2t64
1698s libatinject-jsr330-api-java libatk-bridge2.0-0t64 libatk1.0-0t64
1698s libatspi2.0-0t64 libavahi-client3 libavahi-common-data libavahi-common3
1698s libcairo-gobject2 libcairo2 libcares2 libcolord2 libcommons-cli-java
1698s libcommons-io-java libcommons-logging-java libcommons-parent-java
1698s libcups2t64 libdatrie1 libdconf1 libdeflate0 libdrm-amdgpu1 libdrm-radeon1
1698s libdropwizard-metrics-java libeclipse-jdt-core-compiler-batch-java
1698s libeclipse-jdt-core-java libel-api-java libepoxy0 liberror-prone-java
1698s libev4t64 libfindbugs-annotations-java libfontconfig1 libfreetype6 libgbm1
1698s libgdk-pixbuf-2.0-0 libgdk-pixbuf2.0-common libgif7 libgl1 libgl1-mesa-dri
1698s libglapi-mesa libglvnd0 libglx-mesa0 libglx0 libgraphite2-3 libgtk-3-0t64
1698s libgtk-3-common libguava-java libhamcrest-java libharfbuzz0b libio-pty-perl
1698s libipc-run-perl libjackson2-annotations-java libjackson2-core-java
1698s libjackson2-databind-java libjaxb-api-java libjbig0 libjctools-java
1698s libjetty9-extra-java libjetty9-java libjffi-java libjffi-jni
1698s libjnr-constants-java libjnr-enxio-java libjnr-ffi-java libjnr-posix-java
1698s libjnr-unixsocket-java libjnr-x86asm-java libjpeg-turbo8 libjpeg8
1698s libjs-jquery libjs-sphinxdoc libjs-underscore libjson-perl libjsp-api-java
1698s libjsr305-java liblcms2-2 liblog4j1.2-java libmail-java libnetty-java
1698s libnetty-tcnative-java libnetty-tcnative-jni libpango-1.0-0
1698s libpangocairo-1.0-0 libpangoft2-1.0-0 libpcsclite1 libpixman-1-0 libpq5
1698s libservlet-api-java libsharpyuv0 libslf4j-java libsnappy-java libsnappy-jni
1698s libsnappy1v5 libspring-beans-java libspring-core-java
1698s libtaglibs-standard-impl-java libtaglibs-standard-spec-java libthai-data
1698s libthai0 libtiff6 libtime-duration-perl libtimedate-perl libtomcat9-java
1698s libvulkan1 libwayland-client0 libwayland-cursor0 libwayland-egl1
1698s libwayland-server0 libwebp7 libwebsocket-api-java libx11-xcb1 libxcb-dri2-0
1698s libxcb-dri3-0 libxcb-glx0 libxcb-present0 libxcb-randr0 libxcb-render0
1698s libxcb-shm0 libxcb-sync1 libxcb-xfixes0 libxcomposite1 libxcursor1
1698s libxdamage1 libxfixes3 libxi6 libxinerama1 libxrandr2 libxrender1
1698s libxshmfence1 libxslt1.1 libxtst6 libxxf86vm1 libzookeeper-java
1698s mesa-libgallium moreutils openjdk-21-jre openjdk-21-jre-headless patroni
1698s patroni-doc postgresql postgresql-16 postgresql-client-16
1698s postgresql-client-common postgresql-common python3-behave python3-cdiff
1698s python3-click python3-colorama python3-coverage python3-dateutil
1698s python3-dnspython python3-eventlet python3-gevent python3-greenlet
1698s python3-kazoo python3-kerberos python3-parse python3-parse-type
1698s python3-prettytable python3-psutil python3-psycopg2 python3-pure-sasl
1698s python3-six python3-wcwidth python3-ydiff python3-zope.event
1698s python3-zope.interface sphinx-rtd-theme-common ssl-cert ubuntu-mono
1698s x11-common zookeeper zookeeperd
1698s 0 upgraded, 196 newly installed, 0 to remove and 0 not upgraded.
1698s Need to get 129 MB/129 MB of archives.
1698s After this operation, 441 MB of additional disk space will be used.
1698s Get:1 /tmp/autopkgtest.FwqS2V/4-autopkgtest-satdep.deb autopkgtest-satdep s390x 0 [764 B]
1698s Get:2 http://ftpmaster.internal/ubuntu plucky/main s390x fonts-lato all 2.015-1 [2781 kB]
1699s Get:3 http://ftpmaster.internal/ubuntu plucky/main s390x libjson-perl all 4.10000-1 [81.9 kB]
1699s Get:4 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-client-common all 262 [36.7 kB]
1699s Get:5 http://ftpmaster.internal/ubuntu plucky/main s390x ssl-cert all 1.1.2ubuntu2 [18.0 kB]
1699s Get:6 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-common all 262 [162 kB]
1699s Get:7 http://ftpmaster.internal/ubuntu plucky/main s390x ca-certificates-java all 20240118 [11.6 kB]
1699s Get:8 http://ftpmaster.internal/ubuntu plucky/main s390x java-common all 0.76 [6852 B]
1699s Get:9 http://ftpmaster.internal/ubuntu plucky/main s390x liblcms2-2 s390x 2.16-2 [175 kB]
1699s Get:10 http://ftpmaster.internal/ubuntu plucky/main s390x libjpeg-turbo8 s390x 2.1.5-2ubuntu2 [150 kB]
1699s Get:11 http://ftpmaster.internal/ubuntu plucky/main s390x libjpeg8 s390x 8c-2ubuntu11 [2146 B]
1699s Get:12 http://ftpmaster.internal/ubuntu plucky/main s390x libpcsclite1 s390x 2.3.0-1 [24.0 kB]
1699s Get:13 http://ftpmaster.internal/ubuntu plucky/main s390x openjdk-21-jre-headless s390x 21.0.5+11-1 [43.8 MB]
1700s Get:14 http://ftpmaster.internal/ubuntu plucky/main s390x default-jre-headless s390x 2:1.21-76 [3182 B]
1700s Get:15 http://ftpmaster.internal/ubuntu plucky/main s390x libgdk-pixbuf2.0-common all 2.42.12+dfsg-1 [7888 B]
1700s Get:16 http://ftpmaster.internal/ubuntu plucky/main s390x libdeflate0 s390x 1.22-1 [46.1 kB]
1700s Get:17 http://ftpmaster.internal/ubuntu plucky/main s390x libjbig0 s390x 2.1-6.1ubuntu2 [33.1 kB]
1700s Get:18 http://ftpmaster.internal/ubuntu plucky/main s390x libsharpyuv0 s390x 1.4.0-0.1 [16.2 kB]
1700s Get:19 http://ftpmaster.internal/ubuntu plucky/main s390x libwebp7 s390x 1.4.0-0.1 [204 kB]
1700s Get:20 http://ftpmaster.internal/ubuntu plucky/main s390x libtiff6 s390x 4.5.1+git230720-4ubuntu4 [217 kB]
1700s Get:21 http://ftpmaster.internal/ubuntu plucky/main s390x libgdk-pixbuf-2.0-0 s390x 2.42.12+dfsg-1 [152 kB]
1700s Get:22 http://ftpmaster.internal/ubuntu plucky/main s390x gtk-update-icon-cache s390x 4.16.5+ds-1 [52.0 kB]
1700s Get:23 http://ftpmaster.internal/ubuntu plucky/main s390x hicolor-icon-theme all 0.18-1 [13.5 kB]
1700s Get:24 http://ftpmaster.internal/ubuntu plucky/main s390x humanity-icon-theme all 0.6.16 [1282 kB]
1700s Get:25 http://ftpmaster.internal/ubuntu plucky/main s390x ubuntu-mono all 24.04-0ubuntu1 [151 kB]
1700s Get:26 http://ftpmaster.internal/ubuntu plucky/main s390x adwaita-icon-theme all 47.0-2 [525 kB]
1700s Get:27 http://ftpmaster.internal/ubuntu plucky/main s390x at-spi2-common all 2.54.0-1 [8774 B]
1700s Get:28 http://ftpmaster.internal/ubuntu plucky/main s390x libatk1.0-0t64 s390x 2.54.0-1 [54.7 kB]
1700s Get:29 http://ftpmaster.internal/ubuntu plucky/main s390x libxi6 s390x 2:1.8.2-1 [35.4 kB]
1700s Get:30 http://ftpmaster.internal/ubuntu plucky/main s390x libatspi2.0-0t64 s390x 2.54.0-1 [79.8 kB]
1700s Get:31 http://ftpmaster.internal/ubuntu plucky/main s390x libatk-bridge2.0-0t64 s390x 2.54.0-1 [66.4 kB]
1700s Get:32 http://ftpmaster.internal/ubuntu plucky/main s390x libfreetype6 s390x 2.13.3+dfsg-1 [431 kB]
1700s Get:33 http://ftpmaster.internal/ubuntu plucky/main s390x fonts-dejavu-mono all 2.37-8 [502 kB]
1700s Get:34 http://ftpmaster.internal/ubuntu plucky/main s390x fonts-dejavu-core all 2.37-8 [835 kB]
1700s Get:35 http://ftpmaster.internal/ubuntu plucky/main s390x fontconfig-config s390x 2.15.0-1.1ubuntu2 [37.4 kB]
1700s Get:36 http://ftpmaster.internal/ubuntu plucky/main s390x libfontconfig1 s390x 2.15.0-1.1ubuntu2 [150 kB]
1700s Get:37 http://ftpmaster.internal/ubuntu plucky/main s390x libpixman-1-0 s390x 0.44.0-3 [201 kB]
1700s Get:38 http://ftpmaster.internal/ubuntu plucky/main s390x libxcb-render0 s390x 1.17.0-2 [17.0 kB]
1700s Get:39 http://ftpmaster.internal/ubuntu plucky/main s390x libxcb-shm0 s390x 1.17.0-2 [5862 B]
1700s Get:40 http://ftpmaster.internal/ubuntu plucky/main s390x libxrender1 s390x 1:0.9.10-1.1build1 [20.4 kB]
1700s Get:41 http://ftpmaster.internal/ubuntu plucky/main s390x libcairo2 s390x 1.18.2-2 [580 kB]
1700s Get:42 http://ftpmaster.internal/ubuntu plucky/main s390x libcairo-gobject2 s390x 1.18.2-2 [127 kB]
1700s Get:43 http://ftpmaster.internal/ubuntu plucky/main s390x libcolord2 s390x 1.4.7-1build2 [151 kB]
1700s Get:44 http://ftpmaster.internal/ubuntu plucky/main s390x libavahi-common-data s390x 0.8-13ubuntu6 [29.7 kB]
1700s Get:45 http://ftpmaster.internal/ubuntu plucky/main s390x libavahi-common3 s390x 0.8-13ubuntu6 [24.1 kB]
1700s Get:46 http://ftpmaster.internal/ubuntu plucky/main s390x libavahi-client3 s390x 0.8-13ubuntu6 [27.2 kB]
1700s Get:47 http://ftpmaster.internal/ubuntu plucky/main s390x libcups2t64 s390x 2.4.10-1ubuntu2 [281 kB]
1700s Get:48 http://ftpmaster.internal/ubuntu plucky/main s390x libepoxy0 s390x 1.5.10-2 [222 kB]
1700s Get:49 http://ftpmaster.internal/ubuntu plucky/main s390x libgraphite2-3 s390x 1.3.14-2ubuntu1 [79.8 kB]
1700s Get:50 http://ftpmaster.internal/ubuntu plucky/main s390x libharfbuzz0b s390x 10.0.1-1 [536 kB]
1700s Get:51 http://ftpmaster.internal/ubuntu plucky/main s390x fontconfig s390x 2.15.0-1.1ubuntu2 [191 kB]
1700s Get:52 http://ftpmaster.internal/ubuntu plucky/main s390x libthai-data all 0.1.29-2build1 [158 kB]
1700s Get:53 http://ftpmaster.internal/ubuntu plucky/main s390x libdatrie1 s390x 0.2.13-3build1 [20.6 kB]
1700s Get:54 http://ftpmaster.internal/ubuntu plucky/main s390x libthai0 s390x 0.1.29-2build1 [20.7 kB]
1700s Get:55 http://ftpmaster.internal/ubuntu plucky/main s390x libpango-1.0-0 s390x 1.54.0+ds-3 [249 kB]
1700s Get:56 http://ftpmaster.internal/ubuntu plucky/main s390x libpangoft2-1.0-0 s390x 1.54.0+ds-3 [49.5 kB]
1700s Get:57 http://ftpmaster.internal/ubuntu plucky/main s390x libpangocairo-1.0-0 s390x 1.54.0+ds-3 [28.0 kB]
1700s Get:58 http://ftpmaster.internal/ubuntu plucky/main s390x libwayland-client0 s390x 1.23.0-1 [27.6 kB]
1700s Get:59 http://ftpmaster.internal/ubuntu plucky/main s390x libwayland-cursor0 s390x 1.23.0-1 [11.5 kB]
1700s Get:60 http://ftpmaster.internal/ubuntu plucky/main s390x libwayland-egl1 s390x 1.23.0-1 [5584 B]
1700s Get:61 http://ftpmaster.internal/ubuntu plucky/main s390x libxcomposite1 s390x 1:0.4.6-1 [6588 B]
1700s Get:62 http://ftpmaster.internal/ubuntu plucky/main s390x libxfixes3 s390x 1:6.0.0-2build1 [11.3 kB]
1700s Get:63 http://ftpmaster.internal/ubuntu plucky/main s390x libxcursor1 s390x 1:1.2.2-1 [22.7 kB]
1700s Get:64 http://ftpmaster.internal/ubuntu plucky/main s390x libxdamage1 s390x 1:1.1.6-1build1 [6156 B]
1700s Get:65 http://ftpmaster.internal/ubuntu plucky/main s390x libxinerama1 s390x 2:1.1.4-3build1 [6476 B]
1700s Get:66 http://ftpmaster.internal/ubuntu plucky/main s390x libxrandr2 s390x 2:1.5.4-1 [20.8 kB]
1700s Get:67 http://ftpmaster.internal/ubuntu plucky/main s390x libdconf1 s390x 0.40.0-4build2 [40.3 kB]
1700s Get:68 http://ftpmaster.internal/ubuntu plucky/main s390x dconf-service s390x 0.40.0-4build2 [28.6 kB]
1700s Get:69 http://ftpmaster.internal/ubuntu plucky/main s390x dconf-gsettings-backend s390x 0.40.0-4build2 [23.2 kB]
1701s Get:70
http://ftpmaster.internal/ubuntu plucky/main s390x libgtk-3-common all 3.24.43-3ubuntu2 [1202 kB] 1701s Get:71 http://ftpmaster.internal/ubuntu plucky/main s390x libgtk-3-0t64 s390x 3.24.43-3ubuntu2 [2934 kB] 1701s Get:72 http://ftpmaster.internal/ubuntu plucky/main s390x libglvnd0 s390x 1.7.0-1build1 [110 kB] 1701s Get:73 http://ftpmaster.internal/ubuntu plucky/main s390x libglapi-mesa s390x 24.2.3-1ubuntu1 [67.8 kB] 1701s Get:74 http://ftpmaster.internal/ubuntu plucky/main s390x libx11-xcb1 s390x 2:1.8.10-2 [7954 B] 1701s Get:75 http://ftpmaster.internal/ubuntu plucky/main s390x libxcb-dri2-0 s390x 1.17.0-2 [7448 B] 1701s Get:76 http://ftpmaster.internal/ubuntu plucky/main s390x libxcb-dri3-0 s390x 1.17.0-2 [7616 B] 1701s Get:77 http://ftpmaster.internal/ubuntu plucky/main s390x libxcb-glx0 s390x 1.17.0-2 [26.0 kB] 1701s Get:78 http://ftpmaster.internal/ubuntu plucky/main s390x libxcb-present0 s390x 1.17.0-2 [6244 B] 1701s Get:79 http://ftpmaster.internal/ubuntu plucky/main s390x libxcb-randr0 s390x 1.17.0-2 [19.2 kB] 1701s Get:80 http://ftpmaster.internal/ubuntu plucky/main s390x libxcb-sync1 s390x 1.17.0-2 [9488 B] 1701s Get:81 http://ftpmaster.internal/ubuntu plucky/main s390x libxcb-xfixes0 s390x 1.17.0-2 [10.5 kB] 1701s Get:82 http://ftpmaster.internal/ubuntu plucky/main s390x libxshmfence1 s390x 1.3-1build5 [4772 B] 1701s Get:83 http://ftpmaster.internal/ubuntu plucky/main s390x libxxf86vm1 s390x 1:1.1.4-1build4 [9630 B] 1701s Get:84 http://ftpmaster.internal/ubuntu plucky/main s390x libdrm-amdgpu1 s390x 2.4.123-1 [21.2 kB] 1701s Get:85 http://ftpmaster.internal/ubuntu plucky/main s390x libdrm-radeon1 s390x 2.4.123-1 [22.4 kB] 1701s Get:86 http://ftpmaster.internal/ubuntu plucky/main s390x mesa-libgallium s390x 24.2.3-1ubuntu1 [7709 kB] 1701s Get:87 http://ftpmaster.internal/ubuntu plucky/main s390x libvulkan1 s390x 1.3.296.0-1 [143 kB] 1701s Get:88 http://ftpmaster.internal/ubuntu plucky/main s390x libwayland-server0 s390x 1.23.0-1 [36.5 kB] 1701s Get:89 
http://ftpmaster.internal/ubuntu plucky/main s390x libgbm1 s390x 24.2.3-1ubuntu1 [33.7 kB] 1701s Get:90 http://ftpmaster.internal/ubuntu plucky/main s390x libgl1-mesa-dri s390x 24.2.3-1ubuntu1 [34.4 kB] 1701s Get:91 http://ftpmaster.internal/ubuntu plucky/main s390x libglx-mesa0 s390x 24.2.3-1ubuntu1 [175 kB] 1701s Get:92 http://ftpmaster.internal/ubuntu plucky/main s390x libglx0 s390x 1.7.0-1build1 [32.2 kB] 1701s Get:93 http://ftpmaster.internal/ubuntu plucky/main s390x libgl1 s390x 1.7.0-1build1 [142 kB] 1701s Get:94 http://ftpmaster.internal/ubuntu plucky/main s390x libasound2-data all 1.2.12-1 [21.0 kB] 1701s Get:95 http://ftpmaster.internal/ubuntu plucky/main s390x libasound2t64 s390x 1.2.12-1 [408 kB] 1701s Get:96 http://ftpmaster.internal/ubuntu plucky/main s390x libgif7 s390x 5.2.2-1ubuntu1 [38.0 kB] 1701s Get:97 http://ftpmaster.internal/ubuntu plucky/main s390x x11-common all 1:7.7+23ubuntu3 [21.7 kB] 1701s Get:98 http://ftpmaster.internal/ubuntu plucky/main s390x libxtst6 s390x 2:1.2.3-1.1build1 [13.4 kB] 1701s Get:99 http://ftpmaster.internal/ubuntu plucky/main s390x openjdk-21-jre s390x 21.0.5+11-1 [235 kB] 1701s Get:100 http://ftpmaster.internal/ubuntu plucky/main s390x default-jre s390x 2:1.21-76 [920 B] 1701s Get:101 http://ftpmaster.internal/ubuntu plucky/universe s390x libhamcrest-java all 2.2-2 [117 kB] 1701s Get:102 http://ftpmaster.internal/ubuntu plucky/universe s390x junit4 all 4.13.2-5 [348 kB] 1701s Get:103 http://ftpmaster.internal/ubuntu plucky/universe s390x libcommons-cli-java all 1.6.0-1 [59.9 kB] 1701s Get:104 http://ftpmaster.internal/ubuntu plucky/universe s390x libapache-pom-java all 33-2 [5874 B] 1701s Get:105 http://ftpmaster.internal/ubuntu plucky/universe s390x libcommons-parent-java all 56-1 [10.7 kB] 1701s Get:106 http://ftpmaster.internal/ubuntu plucky/universe s390x libcommons-io-java all 2.17.0-1 [457 kB] 1702s Get:107 http://ftpmaster.internal/ubuntu plucky/universe s390x libdropwizard-metrics-java all 3.2.6-1 [240 kB] 
1702s Get:108 http://ftpmaster.internal/ubuntu plucky/universe s390x libfindbugs-annotations-java all 3.1.0~preview2-4 [48.9 kB]
1702s Get:109 http://ftpmaster.internal/ubuntu plucky/universe s390x libatinject-jsr330-api-java all 1.0+ds1-5 [5348 B]
1702s Get:110 http://ftpmaster.internal/ubuntu plucky/universe s390x liberror-prone-java all 2.18.0-1 [22.5 kB]
1702s Get:111 http://ftpmaster.internal/ubuntu plucky/universe s390x libjsr305-java all 0.1~+svn49-11 [27.0 kB]
1702s Get:112 http://ftpmaster.internal/ubuntu plucky/universe s390x libguava-java all 32.0.1-1 [2692 kB]
1702s Get:113 http://ftpmaster.internal/ubuntu plucky/universe s390x libjackson2-annotations-java all 2.14.0-1 [64.7 kB]
1702s Get:114 http://ftpmaster.internal/ubuntu plucky/universe s390x libjackson2-core-java all 2.14.1-1 [432 kB]
1702s Get:115 http://ftpmaster.internal/ubuntu plucky/universe s390x libjackson2-databind-java all 2.14.0-1 [1531 kB]
1702s Get:116 http://ftpmaster.internal/ubuntu plucky/universe s390x libasm-java all 9.7.1-1 [388 kB]
1702s Get:117 http://ftpmaster.internal/ubuntu plucky/universe s390x libel-api-java all 3.0.0-3 [64.9 kB]
1702s Get:118 http://ftpmaster.internal/ubuntu plucky/universe s390x libjsp-api-java all 2.3.4-3 [53.7 kB]
1702s Get:119 http://ftpmaster.internal/ubuntu plucky/universe s390x libservlet-api-java all 4.0.1-2 [81.0 kB]
1702s Get:120 http://ftpmaster.internal/ubuntu plucky/universe s390x libwebsocket-api-java all 1.1-2 [40.1 kB]
1702s Get:121 http://ftpmaster.internal/ubuntu plucky/universe s390x libjetty9-java all 9.4.56-1 [2790 kB]
1702s Get:122 http://ftpmaster.internal/ubuntu plucky/universe s390x libjnr-constants-java all 0.10.4-2 [1397 kB]
1702s Get:123 http://ftpmaster.internal/ubuntu plucky/universe s390x libjffi-jni s390x 1.3.13+ds-1 [30.7 kB]
1702s Get:124 http://ftpmaster.internal/ubuntu plucky/universe s390x libjffi-java all 1.3.13+ds-1 [112 kB]
1702s Get:125 http://ftpmaster.internal/ubuntu plucky/universe s390x libjnr-x86asm-java all 1.0.2-5.1 [207 kB]
1702s Get:126 http://ftpmaster.internal/ubuntu plucky/universe s390x libjnr-ffi-java all 2.2.15-2 [627 kB]
1702s Get:127 http://ftpmaster.internal/ubuntu plucky/universe s390x libjnr-enxio-java all 0.32.16-1 [33.7 kB]
1702s Get:128 http://ftpmaster.internal/ubuntu plucky/universe s390x libjnr-posix-java all 3.1.18-1 [267 kB]
1702s Get:129 http://ftpmaster.internal/ubuntu plucky/universe s390x libjnr-unixsocket-java all 0.38.21-2 [46.9 kB]
1702s Get:130 http://ftpmaster.internal/ubuntu plucky/universe s390x libactivation-java all 1.2.0-2 [84.7 kB]
1702s Get:131 http://ftpmaster.internal/ubuntu plucky/universe s390x libmail-java all 1.6.5-3 [681 kB]
1702s Get:132 http://ftpmaster.internal/ubuntu plucky/universe s390x libcommons-logging-java all 1.3.0-1ubuntu1 [63.8 kB]
1702s Get:133 http://ftpmaster.internal/ubuntu plucky/universe s390x libjaxb-api-java all 2.3.1-1 [119 kB]
1702s Get:134 http://ftpmaster.internal/ubuntu plucky/universe s390x libspring-core-java all 4.3.30-2 [1015 kB]
1702s Get:135 http://ftpmaster.internal/ubuntu plucky/universe s390x libspring-beans-java all 4.3.30-2 [675 kB]
1702s Get:136 http://ftpmaster.internal/ubuntu plucky/universe s390x libtaglibs-standard-spec-java all 1.2.5-3 [35.2 kB]
1702s Get:137 http://ftpmaster.internal/ubuntu plucky/universe s390x libtaglibs-standard-impl-java all 1.2.5-3 [182 kB]
1702s Get:138 http://ftpmaster.internal/ubuntu plucky/universe s390x libeclipse-jdt-core-compiler-batch-java all 3.35.0+eclipse4.29-2 [2933 kB]
1703s Get:139 http://ftpmaster.internal/ubuntu plucky/universe s390x libeclipse-jdt-core-java all 3.35.0+eclipse4.29-2 [3831 kB]
1703s Get:140 http://ftpmaster.internal/ubuntu plucky/universe s390x libtomcat9-java all 9.0.70-2ubuntu1.1 [6161 kB]
1703s Get:141 http://ftpmaster.internal/ubuntu plucky/universe s390x libjetty9-extra-java all 9.4.56-1 [1199 kB]
1703s Get:142 http://ftpmaster.internal/ubuntu plucky/universe s390x libjctools-java all 2.0.2-1 [188 kB]
1703s Get:143 http://ftpmaster.internal/ubuntu plucky/universe s390x libnetty-java all 1:4.1.48-10 [3628 kB]
1704s Get:144 http://ftpmaster.internal/ubuntu plucky/universe s390x libslf4j-java all 1.7.32-1 [141 kB]
1704s Get:145 http://ftpmaster.internal/ubuntu plucky/main s390x libsnappy1v5 s390x 1.2.1-1 [33.0 kB]
1704s Get:146 http://ftpmaster.internal/ubuntu plucky/universe s390x libsnappy-jni s390x 1.1.10.5-2 [6716 B]
1704s Get:147 http://ftpmaster.internal/ubuntu plucky/universe s390x libsnappy-java all 1.1.10.5-2 [83.7 kB]
1704s Get:148 http://ftpmaster.internal/ubuntu plucky/main s390x libapr1t64 s390x 1.7.2-3.2ubuntu1 [114 kB]
1704s Get:149 http://ftpmaster.internal/ubuntu plucky/universe s390x libnetty-tcnative-jni s390x 2.0.28-1build4 [36.8 kB]
1704s Get:150 http://ftpmaster.internal/ubuntu plucky/universe s390x libnetty-tcnative-java all 2.0.28-1build4 [24.8 kB]
1704s Get:151 http://ftpmaster.internal/ubuntu plucky/universe s390x liblog4j1.2-java all 1.2.17-11 [439 kB]
1704s Get:152 http://ftpmaster.internal/ubuntu plucky/universe s390x libzookeeper-java all 3.9.2-2 [1885 kB]
1704s Get:153 http://ftpmaster.internal/ubuntu plucky/universe s390x zookeeper all 3.9.2-2 [57.8 kB]
1704s Get:154 http://ftpmaster.internal/ubuntu plucky/universe s390x zookeeperd all 3.9.2-2 [6036 B]
1704s Get:155 http://ftpmaster.internal/ubuntu plucky/main s390x fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB]
1704s Get:156 http://ftpmaster.internal/ubuntu plucky/main s390x libcares2 s390x 1.34.2-1 [96.8 kB]
1704s Get:157 http://ftpmaster.internal/ubuntu plucky/universe s390x libev4t64 s390x 1:4.33-2.1build1 [32.0 kB]
1704s Get:158 http://ftpmaster.internal/ubuntu plucky/main s390x libio-pty-perl s390x 1:1.20-1build3 [31.6 kB]
1704s Get:159 http://ftpmaster.internal/ubuntu plucky/main s390x libipc-run-perl all 20231003.0-2 [91.5 kB]
1704s Get:160 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB]
1704s Get:161 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB]
1704s Get:162 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-sphinxdoc all 7.4.7-4 [158 kB]
1704s Get:163 http://ftpmaster.internal/ubuntu plucky/main s390x libpq5 s390x 17.0-1 [252 kB]
1704s Get:164 http://ftpmaster.internal/ubuntu plucky/main s390x libtime-duration-perl all 1.21-2 [12.3 kB]
1704s Get:165 http://ftpmaster.internal/ubuntu plucky/main s390x libtimedate-perl all 2.3300-2 [34.0 kB]
1704s Get:166 http://ftpmaster.internal/ubuntu plucky/main s390x libxslt1.1 s390x 1.1.39-0exp1ubuntu1 [169 kB]
1704s Get:167 http://ftpmaster.internal/ubuntu plucky/universe s390x moreutils s390x 0.69-1 [57.4 kB]
1704s Get:168 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-ydiff all 1.3-1 [18.4 kB]
1704s Get:169 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-cdiff all 1.3-1 [1770 B]
1704s Get:170 http://ftpmaster.internal/ubuntu plucky/main s390x python3-colorama all 0.4.6-4 [32.1 kB]
1704s Get:171 http://ftpmaster.internal/ubuntu plucky/main s390x python3-click all 8.1.7-2 [79.5 kB]
1704s Get:172 http://ftpmaster.internal/ubuntu plucky/main s390x python3-six all 1.16.0-7 [13.1 kB]
1704s Get:173 http://ftpmaster.internal/ubuntu plucky/main s390x python3-dateutil all 2.9.0-2 [80.3 kB]
1704s Get:174 http://ftpmaster.internal/ubuntu plucky/main s390x python3-wcwidth all 0.2.13+dfsg1-1 [26.3 kB]
1704s Get:175 http://ftpmaster.internal/ubuntu plucky/main s390x python3-prettytable all 3.10.1-1 [34.0 kB]
1704s Get:176 http://ftpmaster.internal/ubuntu plucky/main s390x python3-psutil s390x 5.9.8-2build2 [195 kB]
1704s Get:177 http://ftpmaster.internal/ubuntu plucky/main s390x python3-psycopg2 s390x 2.9.9-2 [132 kB]
1704s Get:178 http://ftpmaster.internal/ubuntu plucky/main s390x python3-greenlet s390x 3.0.3-0ubuntu6 [156 kB]
1704s Get:179 http://ftpmaster.internal/ubuntu plucky/main s390x python3-dnspython all 2.6.1-1ubuntu1 [163 kB]
1704s Get:180 http://ftpmaster.internal/ubuntu plucky/main s390x python3-eventlet all 0.36.1-0ubuntu1 [274 kB]
1704s Get:181 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-zope.event all 5.0-0.1 [7512 B]
1704s Get:182 http://ftpmaster.internal/ubuntu plucky/main s390x python3-zope.interface s390x 7.1.1-1 [140 kB]
1704s Get:183 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-gevent s390x 24.2.1-1 [835 kB]
1704s Get:184 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-kerberos s390x 1.1.14-3.1build9 [21.4 kB]
1704s Get:185 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-pure-sasl all 0.5.1+dfsg1-4 [11.4 kB]
1704s Get:186 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-kazoo all 2.9.0-2 [103 kB]
1704s Get:187 http://ftpmaster.internal/ubuntu plucky/universe s390x patroni all 3.3.1-1 [264 kB]
1704s Get:188 http://ftpmaster.internal/ubuntu plucky/main s390x sphinx-rtd-theme-common all 3.0.1+dfsg-1 [1012 kB]
1704s Get:189 http://ftpmaster.internal/ubuntu plucky/universe s390x patroni-doc all 3.3.1-1 [497 kB]
1704s Get:190 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-client-16 s390x 16.4-3 [1294 kB]
1705s Get:191 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-16 s390x 16.4-3 [16.3 MB]
1705s Get:192 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql all 16+262 [11.8 kB]
1705s Get:193 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-parse all 1.20.2-1 [27.0 kB]
1705s Get:194 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-parse-type all 0.6.4-1 [23.4 kB]
1705s Get:195 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-behave all 1.2.6-6 [98.6 kB]
1706s Get:196 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-coverage s390x 7.4.4+dfsg1-0ubuntu2 [147 kB]
1706s Preconfiguring packages ...
1706s Fetched 129 MB in 7s (17.6 MB/s)
1706s Selecting previously unselected package fonts-lato.
1706s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 55517 files and directories currently installed.)
1706s Preparing to unpack .../000-fonts-lato_2.015-1_all.deb ...
1706s Unpacking fonts-lato (2.015-1) ...
1706s Selecting previously unselected package libjson-perl.
1706s Preparing to unpack .../001-libjson-perl_4.10000-1_all.deb ...
1706s Unpacking libjson-perl (4.10000-1) ...
1706s Selecting previously unselected package postgresql-client-common.
1706s Preparing to unpack .../002-postgresql-client-common_262_all.deb ...
1706s Unpacking postgresql-client-common (262) ...
1706s Selecting previously unselected package ssl-cert.
1706s Preparing to unpack .../003-ssl-cert_1.1.2ubuntu2_all.deb ...
1706s Unpacking ssl-cert (1.1.2ubuntu2) ...
1706s Selecting previously unselected package postgresql-common.
1706s Preparing to unpack .../004-postgresql-common_262_all.deb ...
1706s Adding 'diversion of /usr/bin/pg_config to /usr/bin/pg_config.libpq-dev by postgresql-common'
1706s Unpacking postgresql-common (262) ...
1706s Selecting previously unselected package ca-certificates-java.
1706s Preparing to unpack .../005-ca-certificates-java_20240118_all.deb ...
1706s Unpacking ca-certificates-java (20240118) ...
1707s Selecting previously unselected package java-common.
1707s Preparing to unpack .../006-java-common_0.76_all.deb ...
1707s Unpacking java-common (0.76) ...
1707s Selecting previously unselected package liblcms2-2:s390x.
1707s Preparing to unpack .../007-liblcms2-2_2.16-2_s390x.deb ... 1707s Unpacking liblcms2-2:s390x (2.16-2) ... 1707s Selecting previously unselected package libjpeg-turbo8:s390x. 1707s Preparing to unpack .../008-libjpeg-turbo8_2.1.5-2ubuntu2_s390x.deb ... 1707s Unpacking libjpeg-turbo8:s390x (2.1.5-2ubuntu2) ... 1707s Selecting previously unselected package libjpeg8:s390x. 1707s Preparing to unpack .../009-libjpeg8_8c-2ubuntu11_s390x.deb ... 1707s Unpacking libjpeg8:s390x (8c-2ubuntu11) ... 1707s Selecting previously unselected package libpcsclite1:s390x. 1707s Preparing to unpack .../010-libpcsclite1_2.3.0-1_s390x.deb ... 1707s Unpacking libpcsclite1:s390x (2.3.0-1) ... 1707s Selecting previously unselected package openjdk-21-jre-headless:s390x. 1707s Preparing to unpack .../011-openjdk-21-jre-headless_21.0.5+11-1_s390x.deb ... 1707s Unpacking openjdk-21-jre-headless:s390x (21.0.5+11-1) ... 1707s Selecting previously unselected package default-jre-headless. 1707s Preparing to unpack .../012-default-jre-headless_2%3a1.21-76_s390x.deb ... 1707s Unpacking default-jre-headless (2:1.21-76) ... 1707s Selecting previously unselected package libgdk-pixbuf2.0-common. 1707s Preparing to unpack .../013-libgdk-pixbuf2.0-common_2.42.12+dfsg-1_all.deb ... 1707s Unpacking libgdk-pixbuf2.0-common (2.42.12+dfsg-1) ... 1707s Selecting previously unselected package libdeflate0:s390x. 1707s Preparing to unpack .../014-libdeflate0_1.22-1_s390x.deb ... 1707s Unpacking libdeflate0:s390x (1.22-1) ... 1707s Selecting previously unselected package libjbig0:s390x. 1707s Preparing to unpack .../015-libjbig0_2.1-6.1ubuntu2_s390x.deb ... 1707s Unpacking libjbig0:s390x (2.1-6.1ubuntu2) ... 1707s Selecting previously unselected package libsharpyuv0:s390x. 1707s Preparing to unpack .../016-libsharpyuv0_1.4.0-0.1_s390x.deb ... 1707s Unpacking libsharpyuv0:s390x (1.4.0-0.1) ... 1707s Selecting previously unselected package libwebp7:s390x. 
1707s Preparing to unpack .../017-libwebp7_1.4.0-0.1_s390x.deb ... 1707s Unpacking libwebp7:s390x (1.4.0-0.1) ... 1707s Selecting previously unselected package libtiff6:s390x. 1707s Preparing to unpack .../018-libtiff6_4.5.1+git230720-4ubuntu4_s390x.deb ... 1707s Unpacking libtiff6:s390x (4.5.1+git230720-4ubuntu4) ... 1707s Selecting previously unselected package libgdk-pixbuf-2.0-0:s390x. 1707s Preparing to unpack .../019-libgdk-pixbuf-2.0-0_2.42.12+dfsg-1_s390x.deb ... 1707s Unpacking libgdk-pixbuf-2.0-0:s390x (2.42.12+dfsg-1) ... 1707s Selecting previously unselected package gtk-update-icon-cache. 1707s Preparing to unpack .../020-gtk-update-icon-cache_4.16.5+ds-1_s390x.deb ... 1707s No diversion 'diversion of /usr/sbin/update-icon-caches to /usr/sbin/update-icon-caches.gtk2 by libgtk-3-bin', none removed. 1707s No diversion 'diversion of /usr/share/man/man8/update-icon-caches.8.gz to /usr/share/man/man8/update-icon-caches.gtk2.8.gz by libgtk-3-bin', none removed. 1707s Unpacking gtk-update-icon-cache (4.16.5+ds-1) ... 1707s Selecting previously unselected package hicolor-icon-theme. 1707s Preparing to unpack .../021-hicolor-icon-theme_0.18-1_all.deb ... 1707s Unpacking hicolor-icon-theme (0.18-1) ... 1708s Selecting previously unselected package humanity-icon-theme. 1708s Preparing to unpack .../022-humanity-icon-theme_0.6.16_all.deb ... 1708s Unpacking humanity-icon-theme (0.6.16) ... 1708s Selecting previously unselected package ubuntu-mono. 1708s Preparing to unpack .../023-ubuntu-mono_24.04-0ubuntu1_all.deb ... 1708s Unpacking ubuntu-mono (24.04-0ubuntu1) ... 1708s Selecting previously unselected package adwaita-icon-theme. 1708s Preparing to unpack .../024-adwaita-icon-theme_47.0-2_all.deb ... 1708s Unpacking adwaita-icon-theme (47.0-2) ... 1708s Selecting previously unselected package at-spi2-common. 1708s Preparing to unpack .../025-at-spi2-common_2.54.0-1_all.deb ... 1708s Unpacking at-spi2-common (2.54.0-1) ... 
1708s Selecting previously unselected package libatk1.0-0t64:s390x. 1708s Preparing to unpack .../026-libatk1.0-0t64_2.54.0-1_s390x.deb ... 1708s Unpacking libatk1.0-0t64:s390x (2.54.0-1) ... 1708s Selecting previously unselected package libxi6:s390x. 1708s Preparing to unpack .../027-libxi6_2%3a1.8.2-1_s390x.deb ... 1708s Unpacking libxi6:s390x (2:1.8.2-1) ... 1708s Selecting previously unselected package libatspi2.0-0t64:s390x. 1708s Preparing to unpack .../028-libatspi2.0-0t64_2.54.0-1_s390x.deb ... 1708s Unpacking libatspi2.0-0t64:s390x (2.54.0-1) ... 1708s Selecting previously unselected package libatk-bridge2.0-0t64:s390x. 1708s Preparing to unpack .../029-libatk-bridge2.0-0t64_2.54.0-1_s390x.deb ... 1708s Unpacking libatk-bridge2.0-0t64:s390x (2.54.0-1) ... 1708s Selecting previously unselected package libfreetype6:s390x. 1708s Preparing to unpack .../030-libfreetype6_2.13.3+dfsg-1_s390x.deb ... 1708s Unpacking libfreetype6:s390x (2.13.3+dfsg-1) ... 1708s Selecting previously unselected package fonts-dejavu-mono. 1708s Preparing to unpack .../031-fonts-dejavu-mono_2.37-8_all.deb ... 1708s Unpacking fonts-dejavu-mono (2.37-8) ... 1708s Selecting previously unselected package fonts-dejavu-core. 1708s Preparing to unpack .../032-fonts-dejavu-core_2.37-8_all.deb ... 1708s Unpacking fonts-dejavu-core (2.37-8) ... 1708s Selecting previously unselected package fontconfig-config. 1708s Preparing to unpack .../033-fontconfig-config_2.15.0-1.1ubuntu2_s390x.deb ... 1708s Unpacking fontconfig-config (2.15.0-1.1ubuntu2) ... 1708s Selecting previously unselected package libfontconfig1:s390x. 1708s Preparing to unpack .../034-libfontconfig1_2.15.0-1.1ubuntu2_s390x.deb ... 1708s Unpacking libfontconfig1:s390x (2.15.0-1.1ubuntu2) ... 1708s Selecting previously unselected package libpixman-1-0:s390x. 1708s Preparing to unpack .../035-libpixman-1-0_0.44.0-3_s390x.deb ... 1708s Unpacking libpixman-1-0:s390x (0.44.0-3) ... 
1708s Selecting previously unselected package libxcb-render0:s390x. 1708s Preparing to unpack .../036-libxcb-render0_1.17.0-2_s390x.deb ... 1708s Unpacking libxcb-render0:s390x (1.17.0-2) ... 1708s Selecting previously unselected package libxcb-shm0:s390x. 1708s Preparing to unpack .../037-libxcb-shm0_1.17.0-2_s390x.deb ... 1708s Unpacking libxcb-shm0:s390x (1.17.0-2) ... 1708s Selecting previously unselected package libxrender1:s390x. 1708s Preparing to unpack .../038-libxrender1_1%3a0.9.10-1.1build1_s390x.deb ... 1708s Unpacking libxrender1:s390x (1:0.9.10-1.1build1) ... 1708s Selecting previously unselected package libcairo2:s390x. 1708s Preparing to unpack .../039-libcairo2_1.18.2-2_s390x.deb ... 1708s Unpacking libcairo2:s390x (1.18.2-2) ... 1709s Selecting previously unselected package libcairo-gobject2:s390x. 1709s Preparing to unpack .../040-libcairo-gobject2_1.18.2-2_s390x.deb ... 1709s Unpacking libcairo-gobject2:s390x (1.18.2-2) ... 1709s Selecting previously unselected package libcolord2:s390x. 1709s Preparing to unpack .../041-libcolord2_1.4.7-1build2_s390x.deb ... 1709s Unpacking libcolord2:s390x (1.4.7-1build2) ... 1709s Selecting previously unselected package libavahi-common-data:s390x. 1709s Preparing to unpack .../042-libavahi-common-data_0.8-13ubuntu6_s390x.deb ... 1709s Unpacking libavahi-common-data:s390x (0.8-13ubuntu6) ... 1709s Selecting previously unselected package libavahi-common3:s390x. 1709s Preparing to unpack .../043-libavahi-common3_0.8-13ubuntu6_s390x.deb ... 1709s Unpacking libavahi-common3:s390x (0.8-13ubuntu6) ... 1709s Selecting previously unselected package libavahi-client3:s390x. 1709s Preparing to unpack .../044-libavahi-client3_0.8-13ubuntu6_s390x.deb ... 1709s Unpacking libavahi-client3:s390x (0.8-13ubuntu6) ... 1709s Selecting previously unselected package libcups2t64:s390x. 1709s Preparing to unpack .../045-libcups2t64_2.4.10-1ubuntu2_s390x.deb ... 1709s Unpacking libcups2t64:s390x (2.4.10-1ubuntu2) ... 
1709s Selecting previously unselected package libepoxy0:s390x. 1709s Preparing to unpack .../046-libepoxy0_1.5.10-2_s390x.deb ... 1709s Unpacking libepoxy0:s390x (1.5.10-2) ... 1709s Selecting previously unselected package libgraphite2-3:s390x. 1709s Preparing to unpack .../047-libgraphite2-3_1.3.14-2ubuntu1_s390x.deb ... 1709s Unpacking libgraphite2-3:s390x (1.3.14-2ubuntu1) ... 1709s Selecting previously unselected package libharfbuzz0b:s390x. 1709s Preparing to unpack .../048-libharfbuzz0b_10.0.1-1_s390x.deb ... 1709s Unpacking libharfbuzz0b:s390x (10.0.1-1) ... 1709s Selecting previously unselected package fontconfig. 1709s Preparing to unpack .../049-fontconfig_2.15.0-1.1ubuntu2_s390x.deb ... 1709s Unpacking fontconfig (2.15.0-1.1ubuntu2) ... 1709s Selecting previously unselected package libthai-data. 1709s Preparing to unpack .../050-libthai-data_0.1.29-2build1_all.deb ... 1709s Unpacking libthai-data (0.1.29-2build1) ... 1709s Selecting previously unselected package libdatrie1:s390x. 1709s Preparing to unpack .../051-libdatrie1_0.2.13-3build1_s390x.deb ... 1709s Unpacking libdatrie1:s390x (0.2.13-3build1) ... 1709s Selecting previously unselected package libthai0:s390x. 1709s Preparing to unpack .../052-libthai0_0.1.29-2build1_s390x.deb ... 1709s Unpacking libthai0:s390x (0.1.29-2build1) ... 1709s Selecting previously unselected package libpango-1.0-0:s390x. 1709s Preparing to unpack .../053-libpango-1.0-0_1.54.0+ds-3_s390x.deb ... 1709s Unpacking libpango-1.0-0:s390x (1.54.0+ds-3) ... 1709s Selecting previously unselected package libpangoft2-1.0-0:s390x. 1709s Preparing to unpack .../054-libpangoft2-1.0-0_1.54.0+ds-3_s390x.deb ... 1709s Unpacking libpangoft2-1.0-0:s390x (1.54.0+ds-3) ... 1709s Selecting previously unselected package libpangocairo-1.0-0:s390x. 1709s Preparing to unpack .../055-libpangocairo-1.0-0_1.54.0+ds-3_s390x.deb ... 1709s Unpacking libpangocairo-1.0-0:s390x (1.54.0+ds-3) ... 
1709s Selecting previously unselected package libwayland-client0:s390x. 1709s Preparing to unpack .../056-libwayland-client0_1.23.0-1_s390x.deb ... 1709s Unpacking libwayland-client0:s390x (1.23.0-1) ... 1709s Selecting previously unselected package libwayland-cursor0:s390x. 1709s Preparing to unpack .../057-libwayland-cursor0_1.23.0-1_s390x.deb ... 1709s Unpacking libwayland-cursor0:s390x (1.23.0-1) ... 1709s Selecting previously unselected package libwayland-egl1:s390x. 1709s Preparing to unpack .../058-libwayland-egl1_1.23.0-1_s390x.deb ... 1709s Unpacking libwayland-egl1:s390x (1.23.0-1) ... 1709s Selecting previously unselected package libxcomposite1:s390x. 1709s Preparing to unpack .../059-libxcomposite1_1%3a0.4.6-1_s390x.deb ... 1709s Unpacking libxcomposite1:s390x (1:0.4.6-1) ... 1709s Selecting previously unselected package libxfixes3:s390x. 1709s Preparing to unpack .../060-libxfixes3_1%3a6.0.0-2build1_s390x.deb ... 1709s Unpacking libxfixes3:s390x (1:6.0.0-2build1) ... 1709s Selecting previously unselected package libxcursor1:s390x. 1709s Preparing to unpack .../061-libxcursor1_1%3a1.2.2-1_s390x.deb ... 1709s Unpacking libxcursor1:s390x (1:1.2.2-1) ... 1709s Selecting previously unselected package libxdamage1:s390x. 1709s Preparing to unpack .../062-libxdamage1_1%3a1.1.6-1build1_s390x.deb ... 1709s Unpacking libxdamage1:s390x (1:1.1.6-1build1) ... 1709s Selecting previously unselected package libxinerama1:s390x. 1709s Preparing to unpack .../063-libxinerama1_2%3a1.1.4-3build1_s390x.deb ... 1709s Unpacking libxinerama1:s390x (2:1.1.4-3build1) ... 1709s Selecting previously unselected package libxrandr2:s390x. 1709s Preparing to unpack .../064-libxrandr2_2%3a1.5.4-1_s390x.deb ... 1709s Unpacking libxrandr2:s390x (2:1.5.4-1) ... 1709s Selecting previously unselected package libdconf1:s390x. 1709s Preparing to unpack .../065-libdconf1_0.40.0-4build2_s390x.deb ... 1709s Unpacking libdconf1:s390x (0.40.0-4build2) ... 
1709s Selecting previously unselected package dconf-service. 1709s Preparing to unpack .../066-dconf-service_0.40.0-4build2_s390x.deb ... 1709s Unpacking dconf-service (0.40.0-4build2) ... 1709s Selecting previously unselected package dconf-gsettings-backend:s390x. 1709s Preparing to unpack .../067-dconf-gsettings-backend_0.40.0-4build2_s390x.deb ... 1709s Unpacking dconf-gsettings-backend:s390x (0.40.0-4build2) ... 1709s Selecting previously unselected package libgtk-3-common. 1709s Preparing to unpack .../068-libgtk-3-common_3.24.43-3ubuntu2_all.deb ... 1709s Unpacking libgtk-3-common (3.24.43-3ubuntu2) ... 1709s Selecting previously unselected package libgtk-3-0t64:s390x. 1709s Preparing to unpack .../069-libgtk-3-0t64_3.24.43-3ubuntu2_s390x.deb ... 1709s Unpacking libgtk-3-0t64:s390x (3.24.43-3ubuntu2) ... 1709s Selecting previously unselected package libglvnd0:s390x. 1709s Preparing to unpack .../070-libglvnd0_1.7.0-1build1_s390x.deb ... 1709s Unpacking libglvnd0:s390x (1.7.0-1build1) ... 1709s Selecting previously unselected package libglapi-mesa:s390x. 1709s Preparing to unpack .../071-libglapi-mesa_24.2.3-1ubuntu1_s390x.deb ... 1709s Unpacking libglapi-mesa:s390x (24.2.3-1ubuntu1) ... 1709s Selecting previously unselected package libx11-xcb1:s390x. 1709s Preparing to unpack .../072-libx11-xcb1_2%3a1.8.10-2_s390x.deb ... 1709s Unpacking libx11-xcb1:s390x (2:1.8.10-2) ... 1709s Selecting previously unselected package libxcb-dri2-0:s390x. 1709s Preparing to unpack .../073-libxcb-dri2-0_1.17.0-2_s390x.deb ... 1709s Unpacking libxcb-dri2-0:s390x (1.17.0-2) ... 1709s Selecting previously unselected package libxcb-dri3-0:s390x. 1709s Preparing to unpack .../074-libxcb-dri3-0_1.17.0-2_s390x.deb ... 1709s Unpacking libxcb-dri3-0:s390x (1.17.0-2) ... 1709s Selecting previously unselected package libxcb-glx0:s390x. 1709s Preparing to unpack .../075-libxcb-glx0_1.17.0-2_s390x.deb ... 1709s Unpacking libxcb-glx0:s390x (1.17.0-2) ... 
1709s Selecting previously unselected package libxcb-present0:s390x. 1709s Preparing to unpack .../076-libxcb-present0_1.17.0-2_s390x.deb ... 1709s Unpacking libxcb-present0:s390x (1.17.0-2) ... 1709s Selecting previously unselected package libxcb-randr0:s390x. 1709s Preparing to unpack .../077-libxcb-randr0_1.17.0-2_s390x.deb ... 1709s Unpacking libxcb-randr0:s390x (1.17.0-2) ... 1709s Selecting previously unselected package libxcb-sync1:s390x. 1709s Preparing to unpack .../078-libxcb-sync1_1.17.0-2_s390x.deb ... 1709s Unpacking libxcb-sync1:s390x (1.17.0-2) ... 1709s Selecting previously unselected package libxcb-xfixes0:s390x. 1709s Preparing to unpack .../079-libxcb-xfixes0_1.17.0-2_s390x.deb ... 1709s Unpacking libxcb-xfixes0:s390x (1.17.0-2) ... 1709s Selecting previously unselected package libxshmfence1:s390x. 1709s Preparing to unpack .../080-libxshmfence1_1.3-1build5_s390x.deb ... 1709s Unpacking libxshmfence1:s390x (1.3-1build5) ... 1709s Selecting previously unselected package libxxf86vm1:s390x. 1709s Preparing to unpack .../081-libxxf86vm1_1%3a1.1.4-1build4_s390x.deb ... 1709s Unpacking libxxf86vm1:s390x (1:1.1.4-1build4) ... 1709s Selecting previously unselected package libdrm-amdgpu1:s390x. 1709s Preparing to unpack .../082-libdrm-amdgpu1_2.4.123-1_s390x.deb ... 1709s Unpacking libdrm-amdgpu1:s390x (2.4.123-1) ... 1709s Selecting previously unselected package libdrm-radeon1:s390x. 1709s Preparing to unpack .../083-libdrm-radeon1_2.4.123-1_s390x.deb ... 1709s Unpacking libdrm-radeon1:s390x (2.4.123-1) ... 1709s Selecting previously unselected package mesa-libgallium:s390x. 1709s Preparing to unpack .../084-mesa-libgallium_24.2.3-1ubuntu1_s390x.deb ... 1709s Unpacking mesa-libgallium:s390x (24.2.3-1ubuntu1) ... 1709s Selecting previously unselected package libvulkan1:s390x. 1709s Preparing to unpack .../085-libvulkan1_1.3.296.0-1_s390x.deb ... 1709s Unpacking libvulkan1:s390x (1.3.296.0-1) ... 
1709s Selecting previously unselected package libwayland-server0:s390x.
1709s Preparing to unpack .../086-libwayland-server0_1.23.0-1_s390x.deb ...
1709s Unpacking libwayland-server0:s390x (1.23.0-1) ...
1709s Selecting previously unselected package libgbm1:s390x.
1709s Preparing to unpack .../087-libgbm1_24.2.3-1ubuntu1_s390x.deb ...
1709s Unpacking libgbm1:s390x (24.2.3-1ubuntu1) ...
1709s Selecting previously unselected package libgl1-mesa-dri:s390x.
1709s Preparing to unpack .../088-libgl1-mesa-dri_24.2.3-1ubuntu1_s390x.deb ...
1709s Unpacking libgl1-mesa-dri:s390x (24.2.3-1ubuntu1) ...
1709s Selecting previously unselected package libglx-mesa0:s390x.
1709s Preparing to unpack .../089-libglx-mesa0_24.2.3-1ubuntu1_s390x.deb ...
1709s Unpacking libglx-mesa0:s390x (24.2.3-1ubuntu1) ...
1709s Selecting previously unselected package libglx0:s390x.
1709s Preparing to unpack .../090-libglx0_1.7.0-1build1_s390x.deb ...
1709s Unpacking libglx0:s390x (1.7.0-1build1) ...
1709s Selecting previously unselected package libgl1:s390x.
1709s Preparing to unpack .../091-libgl1_1.7.0-1build1_s390x.deb ...
1709s Unpacking libgl1:s390x (1.7.0-1build1) ...
1709s Selecting previously unselected package libasound2-data.
1709s Preparing to unpack .../092-libasound2-data_1.2.12-1_all.deb ...
1709s Unpacking libasound2-data (1.2.12-1) ...
1709s Selecting previously unselected package libasound2t64:s390x.
1709s Preparing to unpack .../093-libasound2t64_1.2.12-1_s390x.deb ...
1709s Unpacking libasound2t64:s390x (1.2.12-1) ...
1709s Selecting previously unselected package libgif7:s390x.
1709s Preparing to unpack .../094-libgif7_5.2.2-1ubuntu1_s390x.deb ...
1709s Unpacking libgif7:s390x (5.2.2-1ubuntu1) ...
1709s Selecting previously unselected package x11-common.
1709s Preparing to unpack .../095-x11-common_1%3a7.7+23ubuntu3_all.deb ...
1709s Unpacking x11-common (1:7.7+23ubuntu3) ...
1709s Selecting previously unselected package libxtst6:s390x.
1709s Preparing to unpack .../096-libxtst6_2%3a1.2.3-1.1build1_s390x.deb ...
1709s Unpacking libxtst6:s390x (2:1.2.3-1.1build1) ...
1709s Selecting previously unselected package openjdk-21-jre:s390x.
1709s Preparing to unpack .../097-openjdk-21-jre_21.0.5+11-1_s390x.deb ...
1709s Unpacking openjdk-21-jre:s390x (21.0.5+11-1) ...
1710s Selecting previously unselected package default-jre.
1710s Preparing to unpack .../098-default-jre_2%3a1.21-76_s390x.deb ...
1710s Unpacking default-jre (2:1.21-76) ...
1710s Selecting previously unselected package libhamcrest-java.
1710s Preparing to unpack .../099-libhamcrest-java_2.2-2_all.deb ...
1710s Unpacking libhamcrest-java (2.2-2) ...
1710s Selecting previously unselected package junit4.
1710s Preparing to unpack .../100-junit4_4.13.2-5_all.deb ...
1710s Unpacking junit4 (4.13.2-5) ...
1710s Selecting previously unselected package libcommons-cli-java.
1710s Preparing to unpack .../101-libcommons-cli-java_1.6.0-1_all.deb ...
1710s Unpacking libcommons-cli-java (1.6.0-1) ...
1710s Selecting previously unselected package libapache-pom-java.
1710s Preparing to unpack .../102-libapache-pom-java_33-2_all.deb ...
1710s Unpacking libapache-pom-java (33-2) ...
1710s Selecting previously unselected package libcommons-parent-java.
1710s Preparing to unpack .../103-libcommons-parent-java_56-1_all.deb ...
1710s Unpacking libcommons-parent-java (56-1) ...
1710s Selecting previously unselected package libcommons-io-java.
1710s Preparing to unpack .../104-libcommons-io-java_2.17.0-1_all.deb ...
1710s Unpacking libcommons-io-java (2.17.0-1) ...
1710s Selecting previously unselected package libdropwizard-metrics-java.
1710s Preparing to unpack .../105-libdropwizard-metrics-java_3.2.6-1_all.deb ...
1710s Unpacking libdropwizard-metrics-java (3.2.6-1) ...
1710s Selecting previously unselected package libfindbugs-annotations-java.
1710s Preparing to unpack .../106-libfindbugs-annotations-java_3.1.0~preview2-4_all.deb ...
1710s Unpacking libfindbugs-annotations-java (3.1.0~preview2-4) ...
1710s Selecting previously unselected package libatinject-jsr330-api-java.
1710s Preparing to unpack .../107-libatinject-jsr330-api-java_1.0+ds1-5_all.deb ...
1710s Unpacking libatinject-jsr330-api-java (1.0+ds1-5) ...
1710s Selecting previously unselected package liberror-prone-java.
1710s Preparing to unpack .../108-liberror-prone-java_2.18.0-1_all.deb ...
1710s Unpacking liberror-prone-java (2.18.0-1) ...
1710s Selecting previously unselected package libjsr305-java.
1710s Preparing to unpack .../109-libjsr305-java_0.1~+svn49-11_all.deb ...
1710s Unpacking libjsr305-java (0.1~+svn49-11) ...
1710s Selecting previously unselected package libguava-java.
1710s Preparing to unpack .../110-libguava-java_32.0.1-1_all.deb ...
1710s Unpacking libguava-java (32.0.1-1) ...
1710s Selecting previously unselected package libjackson2-annotations-java.
1710s Preparing to unpack .../111-libjackson2-annotations-java_2.14.0-1_all.deb ...
1710s Unpacking libjackson2-annotations-java (2.14.0-1) ...
1710s Selecting previously unselected package libjackson2-core-java.
1710s Preparing to unpack .../112-libjackson2-core-java_2.14.1-1_all.deb ...
1710s Unpacking libjackson2-core-java (2.14.1-1) ...
1710s Selecting previously unselected package libjackson2-databind-java.
1710s Preparing to unpack .../113-libjackson2-databind-java_2.14.0-1_all.deb ...
1710s Unpacking libjackson2-databind-java (2.14.0-1) ...
1710s Selecting previously unselected package libasm-java.
1710s Preparing to unpack .../114-libasm-java_9.7.1-1_all.deb ...
1710s Unpacking libasm-java (9.7.1-1) ...
1710s Selecting previously unselected package libel-api-java.
1710s Preparing to unpack .../115-libel-api-java_3.0.0-3_all.deb ...
1710s Unpacking libel-api-java (3.0.0-3) ...
1710s Selecting previously unselected package libjsp-api-java.
1710s Preparing to unpack .../116-libjsp-api-java_2.3.4-3_all.deb ...
1710s Unpacking libjsp-api-java (2.3.4-3) ...
1710s Selecting previously unselected package libservlet-api-java.
1710s Preparing to unpack .../117-libservlet-api-java_4.0.1-2_all.deb ...
1710s Unpacking libservlet-api-java (4.0.1-2) ...
1710s Selecting previously unselected package libwebsocket-api-java.
1710s Preparing to unpack .../118-libwebsocket-api-java_1.1-2_all.deb ...
1710s Unpacking libwebsocket-api-java (1.1-2) ...
1710s Selecting previously unselected package libjetty9-java.
1710s Preparing to unpack .../119-libjetty9-java_9.4.56-1_all.deb ...
1710s Unpacking libjetty9-java (9.4.56-1) ...
1710s Selecting previously unselected package libjnr-constants-java.
1710s Preparing to unpack .../120-libjnr-constants-java_0.10.4-2_all.deb ...
1710s Unpacking libjnr-constants-java (0.10.4-2) ...
1710s Selecting previously unselected package libjffi-jni:s390x.
1710s Preparing to unpack .../121-libjffi-jni_1.3.13+ds-1_s390x.deb ...
1710s Unpacking libjffi-jni:s390x (1.3.13+ds-1) ...
1710s Selecting previously unselected package libjffi-java.
1710s Preparing to unpack .../122-libjffi-java_1.3.13+ds-1_all.deb ...
1710s Unpacking libjffi-java (1.3.13+ds-1) ...
1710s Selecting previously unselected package libjnr-x86asm-java.
1710s Preparing to unpack .../123-libjnr-x86asm-java_1.0.2-5.1_all.deb ...
1710s Unpacking libjnr-x86asm-java (1.0.2-5.1) ...
1710s Selecting previously unselected package libjnr-ffi-java.
1710s Preparing to unpack .../124-libjnr-ffi-java_2.2.15-2_all.deb ...
1710s Unpacking libjnr-ffi-java (2.2.15-2) ...
1710s Selecting previously unselected package libjnr-enxio-java.
1710s Preparing to unpack .../125-libjnr-enxio-java_0.32.16-1_all.deb ...
1710s Unpacking libjnr-enxio-java (0.32.16-1) ...
1710s Selecting previously unselected package libjnr-posix-java.
1710s Preparing to unpack .../126-libjnr-posix-java_3.1.18-1_all.deb ...
1710s Unpacking libjnr-posix-java (3.1.18-1) ...
1710s Selecting previously unselected package libjnr-unixsocket-java.
1710s Preparing to unpack .../127-libjnr-unixsocket-java_0.38.21-2_all.deb ...
1710s Unpacking libjnr-unixsocket-java (0.38.21-2) ...
1710s Selecting previously unselected package libactivation-java.
1710s Preparing to unpack .../128-libactivation-java_1.2.0-2_all.deb ...
1710s Unpacking libactivation-java (1.2.0-2) ...
1710s Selecting previously unselected package libmail-java.
1710s Preparing to unpack .../129-libmail-java_1.6.5-3_all.deb ...
1710s Unpacking libmail-java (1.6.5-3) ...
1710s Selecting previously unselected package libcommons-logging-java.
1710s Preparing to unpack .../130-libcommons-logging-java_1.3.0-1ubuntu1_all.deb ...
1710s Unpacking libcommons-logging-java (1.3.0-1ubuntu1) ...
1710s Selecting previously unselected package libjaxb-api-java.
1710s Preparing to unpack .../131-libjaxb-api-java_2.3.1-1_all.deb ...
1710s Unpacking libjaxb-api-java (2.3.1-1) ...
1710s Selecting previously unselected package libspring-core-java.
1710s Preparing to unpack .../132-libspring-core-java_4.3.30-2_all.deb ...
1710s Unpacking libspring-core-java (4.3.30-2) ...
1710s Selecting previously unselected package libspring-beans-java.
1710s Preparing to unpack .../133-libspring-beans-java_4.3.30-2_all.deb ...
1710s Unpacking libspring-beans-java (4.3.30-2) ...
1710s Selecting previously unselected package libtaglibs-standard-spec-java.
1710s Preparing to unpack .../134-libtaglibs-standard-spec-java_1.2.5-3_all.deb ...
1710s Unpacking libtaglibs-standard-spec-java (1.2.5-3) ...
1710s Selecting previously unselected package libtaglibs-standard-impl-java.
1710s Preparing to unpack .../135-libtaglibs-standard-impl-java_1.2.5-3_all.deb ...
1710s Unpacking libtaglibs-standard-impl-java (1.2.5-3) ...
1710s Selecting previously unselected package libeclipse-jdt-core-compiler-batch-java.
1710s Preparing to unpack .../136-libeclipse-jdt-core-compiler-batch-java_3.35.0+eclipse4.29-2_all.deb ...
1710s Unpacking libeclipse-jdt-core-compiler-batch-java (3.35.0+eclipse4.29-2) ...
1710s Selecting previously unselected package libeclipse-jdt-core-java.
1710s Preparing to unpack .../137-libeclipse-jdt-core-java_3.35.0+eclipse4.29-2_all.deb ...
1710s Unpacking libeclipse-jdt-core-java (3.35.0+eclipse4.29-2) ...
1710s Selecting previously unselected package libtomcat9-java.
1710s Preparing to unpack .../138-libtomcat9-java_9.0.70-2ubuntu1.1_all.deb ...
1710s Unpacking libtomcat9-java (9.0.70-2ubuntu1.1) ...
1710s Selecting previously unselected package libjetty9-extra-java.
1710s Preparing to unpack .../139-libjetty9-extra-java_9.4.56-1_all.deb ...
1710s Unpacking libjetty9-extra-java (9.4.56-1) ...
1710s Selecting previously unselected package libjctools-java.
1710s Preparing to unpack .../140-libjctools-java_2.0.2-1_all.deb ...
1710s Unpacking libjctools-java (2.0.2-1) ...
1710s Selecting previously unselected package libnetty-java.
1710s Preparing to unpack .../141-libnetty-java_1%3a4.1.48-10_all.deb ...
1710s Unpacking libnetty-java (1:4.1.48-10) ...
1710s Selecting previously unselected package libslf4j-java.
1710s Preparing to unpack .../142-libslf4j-java_1.7.32-1_all.deb ...
1710s Unpacking libslf4j-java (1.7.32-1) ...
1710s Selecting previously unselected package libsnappy1v5:s390x.
1710s Preparing to unpack .../143-libsnappy1v5_1.2.1-1_s390x.deb ...
1710s Unpacking libsnappy1v5:s390x (1.2.1-1) ...
1710s Selecting previously unselected package libsnappy-jni.
1710s Preparing to unpack .../144-libsnappy-jni_1.1.10.5-2_s390x.deb ...
1710s Unpacking libsnappy-jni (1.1.10.5-2) ...
1710s Selecting previously unselected package libsnappy-java.
1710s Preparing to unpack .../145-libsnappy-java_1.1.10.5-2_all.deb ...
1710s Unpacking libsnappy-java (1.1.10.5-2) ...
1710s Selecting previously unselected package libapr1t64:s390x.
1710s Preparing to unpack .../146-libapr1t64_1.7.2-3.2ubuntu1_s390x.deb ...
1710s Unpacking libapr1t64:s390x (1.7.2-3.2ubuntu1) ...
1710s Selecting previously unselected package libnetty-tcnative-jni.
1710s Preparing to unpack .../147-libnetty-tcnative-jni_2.0.28-1build4_s390x.deb ...
1710s Unpacking libnetty-tcnative-jni (2.0.28-1build4) ...
1710s Selecting previously unselected package libnetty-tcnative-java.
1710s Preparing to unpack .../148-libnetty-tcnative-java_2.0.28-1build4_all.deb ...
1710s Unpacking libnetty-tcnative-java (2.0.28-1build4) ...
1710s Selecting previously unselected package liblog4j1.2-java.
1710s Preparing to unpack .../149-liblog4j1.2-java_1.2.17-11_all.deb ...
1710s Unpacking liblog4j1.2-java (1.2.17-11) ...
1710s Selecting previously unselected package libzookeeper-java.
1710s Preparing to unpack .../150-libzookeeper-java_3.9.2-2_all.deb ...
1710s Unpacking libzookeeper-java (3.9.2-2) ...
1710s Selecting previously unselected package zookeeper.
1710s Preparing to unpack .../151-zookeeper_3.9.2-2_all.deb ...
1710s Unpacking zookeeper (3.9.2-2) ...
1710s Selecting previously unselected package zookeeperd.
1710s Preparing to unpack .../152-zookeeperd_3.9.2-2_all.deb ...
1710s Unpacking zookeeperd (3.9.2-2) ...
1710s Selecting previously unselected package fonts-font-awesome.
1710s Preparing to unpack .../153-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ...
1710s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
1710s Selecting previously unselected package libcares2:s390x.
1710s Preparing to unpack .../154-libcares2_1.34.2-1_s390x.deb ...
1710s Unpacking libcares2:s390x (1.34.2-1) ...
1711s Selecting previously unselected package libev4t64:s390x.
1711s Preparing to unpack .../155-libev4t64_1%3a4.33-2.1build1_s390x.deb ...
1711s Unpacking libev4t64:s390x (1:4.33-2.1build1) ...
1711s Selecting previously unselected package libio-pty-perl.
1711s Preparing to unpack .../156-libio-pty-perl_1%3a1.20-1build3_s390x.deb ...
1711s Unpacking libio-pty-perl (1:1.20-1build3) ...
1711s Selecting previously unselected package libipc-run-perl.
1711s Preparing to unpack .../157-libipc-run-perl_20231003.0-2_all.deb ...
1711s Unpacking libipc-run-perl (20231003.0-2) ...
1711s Selecting previously unselected package libjs-jquery.
1711s Preparing to unpack .../158-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ...
1711s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
1711s Selecting previously unselected package libjs-underscore.
1711s Preparing to unpack .../159-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ...
1711s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
1711s Selecting previously unselected package libjs-sphinxdoc.
1711s Preparing to unpack .../160-libjs-sphinxdoc_7.4.7-4_all.deb ...
1711s Unpacking libjs-sphinxdoc (7.4.7-4) ...
1711s Selecting previously unselected package libpq5:s390x.
1711s Preparing to unpack .../161-libpq5_17.0-1_s390x.deb ...
1711s Unpacking libpq5:s390x (17.0-1) ...
1711s Selecting previously unselected package libtime-duration-perl.
1711s Preparing to unpack .../162-libtime-duration-perl_1.21-2_all.deb ...
1711s Unpacking libtime-duration-perl (1.21-2) ...
1711s Selecting previously unselected package libtimedate-perl.
1711s Preparing to unpack .../163-libtimedate-perl_2.3300-2_all.deb ...
1711s Unpacking libtimedate-perl (2.3300-2) ...
1711s Selecting previously unselected package libxslt1.1:s390x.
1711s Preparing to unpack .../164-libxslt1.1_1.1.39-0exp1ubuntu1_s390x.deb ...
1711s Unpacking libxslt1.1:s390x (1.1.39-0exp1ubuntu1) ...
1711s Selecting previously unselected package moreutils.
1711s Preparing to unpack .../165-moreutils_0.69-1_s390x.deb ...
1711s Unpacking moreutils (0.69-1) ...
1711s Selecting previously unselected package python3-ydiff.
1711s Preparing to unpack .../166-python3-ydiff_1.3-1_all.deb ...
1711s Unpacking python3-ydiff (1.3-1) ...
1711s Selecting previously unselected package python3-cdiff.
1711s Preparing to unpack .../167-python3-cdiff_1.3-1_all.deb ...
1711s Unpacking python3-cdiff (1.3-1) ...
1711s Selecting previously unselected package python3-colorama.
1711s Preparing to unpack .../168-python3-colorama_0.4.6-4_all.deb ...
1711s Unpacking python3-colorama (0.4.6-4) ...
1711s Selecting previously unselected package python3-click.
1711s Preparing to unpack .../169-python3-click_8.1.7-2_all.deb ...
1711s Unpacking python3-click (8.1.7-2) ...
1711s Selecting previously unselected package python3-six.
1711s Preparing to unpack .../170-python3-six_1.16.0-7_all.deb ...
1711s Unpacking python3-six (1.16.0-7) ...
1711s Selecting previously unselected package python3-dateutil.
1711s Preparing to unpack .../171-python3-dateutil_2.9.0-2_all.deb ...
1711s Unpacking python3-dateutil (2.9.0-2) ...
1711s Selecting previously unselected package python3-wcwidth.
1711s Preparing to unpack .../172-python3-wcwidth_0.2.13+dfsg1-1_all.deb ...
1711s Unpacking python3-wcwidth (0.2.13+dfsg1-1) ...
1711s Selecting previously unselected package python3-prettytable.
1711s Preparing to unpack .../173-python3-prettytable_3.10.1-1_all.deb ...
1711s Unpacking python3-prettytable (3.10.1-1) ...
1711s Selecting previously unselected package python3-psutil.
1711s Preparing to unpack .../174-python3-psutil_5.9.8-2build2_s390x.deb ...
1711s Unpacking python3-psutil (5.9.8-2build2) ...
1711s Selecting previously unselected package python3-psycopg2.
1711s Preparing to unpack .../175-python3-psycopg2_2.9.9-2_s390x.deb ...
1711s Unpacking python3-psycopg2 (2.9.9-2) ...
1711s Selecting previously unselected package python3-greenlet.
1711s Preparing to unpack .../176-python3-greenlet_3.0.3-0ubuntu6_s390x.deb ...
1711s Unpacking python3-greenlet (3.0.3-0ubuntu6) ...
1711s Selecting previously unselected package python3-dnspython.
1711s Preparing to unpack .../177-python3-dnspython_2.6.1-1ubuntu1_all.deb ...
1711s Unpacking python3-dnspython (2.6.1-1ubuntu1) ...
1711s Selecting previously unselected package python3-eventlet.
1711s Preparing to unpack .../178-python3-eventlet_0.36.1-0ubuntu1_all.deb ...
1711s Unpacking python3-eventlet (0.36.1-0ubuntu1) ...
1711s Selecting previously unselected package python3-zope.event.
1711s Preparing to unpack .../179-python3-zope.event_5.0-0.1_all.deb ...
1711s Unpacking python3-zope.event (5.0-0.1) ...
1711s Selecting previously unselected package python3-zope.interface.
1711s Preparing to unpack .../180-python3-zope.interface_7.1.1-1_s390x.deb ...
1711s Unpacking python3-zope.interface (7.1.1-1) ...
1711s Selecting previously unselected package python3-gevent.
1711s Preparing to unpack .../181-python3-gevent_24.2.1-1_s390x.deb ...
1711s Unpacking python3-gevent (24.2.1-1) ...
1711s Selecting previously unselected package python3-kerberos.
1711s Preparing to unpack .../182-python3-kerberos_1.1.14-3.1build9_s390x.deb ...
1711s Unpacking python3-kerberos (1.1.14-3.1build9) ...
1711s Selecting previously unselected package python3-pure-sasl.
1711s Preparing to unpack .../183-python3-pure-sasl_0.5.1+dfsg1-4_all.deb ...
1711s Unpacking python3-pure-sasl (0.5.1+dfsg1-4) ...
1711s Selecting previously unselected package python3-kazoo.
1711s Preparing to unpack .../184-python3-kazoo_2.9.0-2_all.deb ...
1711s Unpacking python3-kazoo (2.9.0-2) ...
1711s Selecting previously unselected package patroni.
1711s Preparing to unpack .../185-patroni_3.3.1-1_all.deb ...
1711s Unpacking patroni (3.3.1-1) ...
1711s Selecting previously unselected package sphinx-rtd-theme-common.
1711s Preparing to unpack .../186-sphinx-rtd-theme-common_3.0.1+dfsg-1_all.deb ...
1711s Unpacking sphinx-rtd-theme-common (3.0.1+dfsg-1) ...
1711s Selecting previously unselected package patroni-doc.
1711s Preparing to unpack .../187-patroni-doc_3.3.1-1_all.deb ...
1711s Unpacking patroni-doc (3.3.1-1) ...
1711s Selecting previously unselected package postgresql-client-16.
1711s Preparing to unpack .../188-postgresql-client-16_16.4-3_s390x.deb ...
1711s Unpacking postgresql-client-16 (16.4-3) ...
1711s Selecting previously unselected package postgresql-16.
1711s Preparing to unpack .../189-postgresql-16_16.4-3_s390x.deb ...
1711s Unpacking postgresql-16 (16.4-3) ...
1712s Selecting previously unselected package postgresql.
1712s Preparing to unpack .../190-postgresql_16+262_all.deb ...
1712s Unpacking postgresql (16+262) ...
1712s Selecting previously unselected package python3-parse.
1712s Preparing to unpack .../191-python3-parse_1.20.2-1_all.deb ...
1712s Unpacking python3-parse (1.20.2-1) ...
1712s Selecting previously unselected package python3-parse-type.
1712s Preparing to unpack .../192-python3-parse-type_0.6.4-1_all.deb ...
1712s Unpacking python3-parse-type (0.6.4-1) ...
1712s Selecting previously unselected package python3-behave.
1712s Preparing to unpack .../193-python3-behave_1.2.6-6_all.deb ...
1712s Unpacking python3-behave (1.2.6-6) ...
1712s Selecting previously unselected package python3-coverage.
1712s Preparing to unpack .../194-python3-coverage_7.4.4+dfsg1-0ubuntu2_s390x.deb ...
1712s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
1712s Selecting previously unselected package autopkgtest-satdep.
1712s Preparing to unpack .../195-4-autopkgtest-satdep.deb ...
1712s Unpacking autopkgtest-satdep (0) ...
1712s Setting up postgresql-client-common (262) ...
1712s Setting up libgraphite2-3:s390x (1.3.14-2ubuntu1) ...
1712s Setting up libxcb-dri3-0:s390x (1.17.0-2) ...
1712s Setting up liblcms2-2:s390x (2.16-2) ...
1712s Setting up libtaglibs-standard-spec-java (1.2.5-3) ...
1712s Setting up libpixman-1-0:s390x (0.44.0-3) ...
1712s Setting up libev4t64:s390x (1:4.33-2.1build1) ...
1712s Setting up libjackson2-annotations-java (2.14.0-1) ...
1712s Setting up libsharpyuv0:s390x (1.4.0-0.1) ...
1712s Setting up libwayland-server0:s390x (1.23.0-1) ...
1712s Setting up libx11-xcb1:s390x (2:1.8.10-2) ...
1712s Setting up libslf4j-java (1.7.32-1) ...
1712s Setting up fonts-lato (2.015-1) ...
1712s Setting up libeclipse-jdt-core-compiler-batch-java (3.35.0+eclipse4.29-2) ...
1712s Setting up libxdamage1:s390x (1:1.1.6-1build1) ...
1712s Setting up libxcb-xfixes0:s390x (1.17.0-2) ...
1712s Setting up libjsr305-java (0.1~+svn49-11) ...
1712s Setting up hicolor-icon-theme (0.18-1) ...
1712s Setting up libxi6:s390x (2:1.8.2-1) ...
1712s Setting up java-common (0.76) ...
1712s Setting up libxrender1:s390x (1:0.9.10-1.1build1) ...
1712s Setting up libdatrie1:s390x (0.2.13-3build1) ...
1712s Setting up libcommons-cli-java (1.6.0-1) ...
1712s Setting up libio-pty-perl (1:1.20-1build3) ...
1712s Setting up python3-colorama (0.4.6-4) ...
1712s Setting up libxcb-render0:s390x (1.17.0-2) ...
1712s Setting up python3-zope.event (5.0-0.1) ...
1712s Setting up python3-zope.interface (7.1.1-1) ...
1712s Setting up libdrm-radeon1:s390x (2.4.123-1) ...
1712s Setting up libglvnd0:s390x (1.7.0-1build1) ...
1712s Setting up libxcb-glx0:s390x (1.17.0-2) ...
1712s Setting up libgdk-pixbuf2.0-common (2.42.12+dfsg-1) ...
1712s Setting up python3-ydiff (1.3-1) ...
1712s Setting up libasm-java (9.7.1-1) ...
1712s Setting up x11-common (1:7.7+23ubuntu3) ...
1713s Setting up libpq5:s390x (17.0-1) ...
1713s Setting up libdeflate0:s390x (1.22-1) ...
1713s Setting up python3-kerberos (1.1.14-3.1build9) ...
1713s Setting up liblog4j1.2-java (1.2.17-11) ...
1713s Setting up libel-api-java (3.0.0-3) ...
1713s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
1713s Setting up libxcb-shm0:s390x (1.17.0-2) ...
1713s Setting up python3-click (8.1.7-2) ...
1713s Setting up libjnr-x86asm-java (1.0.2-5.1) ...
1713s Setting up libjbig0:s390x (2.1-6.1ubuntu2) ...
1713s Setting up libcolord2:s390x (1.4.7-1build2) ...
1713s Setting up python3-psutil (5.9.8-2build2) ...
1713s Setting up libeclipse-jdt-core-java (3.35.0+eclipse4.29-2) ...
1713s Setting up libxxf86vm1:s390x (1:1.1.4-1build4) ...
1713s Setting up libsnappy1v5:s390x (1.2.1-1) ...
1713s Setting up libxcb-present0:s390x (1.17.0-2) ...
1713s Setting up libtaglibs-standard-impl-java (1.2.5-3) ...
1713s Setting up libdconf1:s390x (0.40.0-4build2) ...
1713s Setting up libjctools-java (2.0.2-1) ...
1713s Setting up libdropwizard-metrics-java (3.2.6-1) ...
1713s Setting up python3-six (1.16.0-7) ...
1713s Setting up libasound2-data (1.2.12-1) ...
1714s Setting up libasound2t64:s390x (1.2.12-1) ...
1714s Setting up python3-wcwidth (0.2.13+dfsg1-1) ...
1714s Setting up libfreetype6:s390x (2.13.3+dfsg-1) ...
1714s Setting up libfindbugs-annotations-java (3.1.0~preview2-4) ...
1714s Setting up libepoxy0:s390x (1.5.10-2) ...
1714s Setting up ssl-cert (1.1.2ubuntu2) ...
1714s Created symlink '/etc/systemd/system/multi-user.target.wants/ssl-cert.service' → '/usr/lib/systemd/system/ssl-cert.service'.
1714s Setting up libxfixes3:s390x (1:6.0.0-2build1) ...
1714s Setting up libxcb-sync1:s390x (1.17.0-2) ...
1714s Setting up libapache-pom-java (33-2) ...
1714s Setting up libavahi-common-data:s390x (0.8-13ubuntu6) ...
1714s Setting up libatinject-jsr330-api-java (1.0+ds1-5) ...
1714s Setting up libatspi2.0-0t64:s390x (2.54.0-1) ...
1714s Setting up libwebsocket-api-java (1.1-2) ...
1714s Setting up python3-greenlet (3.0.3-0ubuntu6) ...
1715s Setting up libxinerama1:s390x (2:1.1.4-3build1) ...
1715s Setting up fonts-dejavu-mono (2.37-8) ...
1715s Setting up libcares2:s390x (1.34.2-1) ...
1715s Setting up libxrandr2:s390x (2:1.5.4-1) ...
1715s Setting up python3-psycopg2 (2.9.9-2) ...
1715s Setting up fonts-dejavu-core (2.37-8) ...
1715s Setting up libipc-run-perl (20231003.0-2) ...
1715s Setting up libpcsclite1:s390x (2.3.0-1) ...
1715s Setting up libjpeg-turbo8:s390x (2.1.5-2ubuntu2) ...
1715s Setting up libactivation-java (1.2.0-2) ...
1715s Setting up libtomcat9-java (9.0.70-2ubuntu1.1) ...
1715s Setting up libhamcrest-java (2.2-2) ...
1715s Setting up libglapi-mesa:s390x (24.2.3-1ubuntu1) ...
1715s Setting up libjsp-api-java (2.3.4-3) ...
1715s Setting up libvulkan1:s390x (1.3.296.0-1) ...
1715s Setting up libtime-duration-perl (1.21-2) ...
1715s Setting up libwebp7:s390x (1.4.0-0.1) ...
1715s Setting up libtimedate-perl (2.3300-2) ...
1715s Setting up libxcb-dri2-0:s390x (1.17.0-2) ...
1715s Setting up libgif7:s390x (5.2.2-1ubuntu1) ...
1715s Setting up libxshmfence1:s390x (1.3-1build5) ...
1715s Setting up libmail-java (1.6.5-3) ...
1715s Setting up at-spi2-common (2.54.0-1) ...
1715s Setting up python3-dnspython (2.6.1-1ubuntu1) ...
1715s Setting up libnetty-java (1:4.1.48-10) ...
1715s Setting up libxcb-randr0:s390x (1.17.0-2) ...
1715s Setting up python3-parse (1.20.2-1) ...
1715s Setting up libapr1t64:s390x (1.7.2-3.2ubuntu1) ...
1715s Setting up libjson-perl (4.10000-1) ...
1715s Setting up libxslt1.1:s390x (1.1.39-0exp1ubuntu1) ...
1715s Setting up libservlet-api-java (4.0.1-2) ...
1715s Setting up libjackson2-core-java (2.14.1-1) ...
1715s Setting up libharfbuzz0b:s390x (10.0.1-1) ...
1715s Setting up libthai-data (0.1.29-2build1) ...
1715s Setting up python3-dateutil (2.9.0-2) ...
1715s Setting up libjffi-jni:s390x (1.3.13+ds-1) ...
1715s Setting up libwayland-egl1:s390x (1.23.0-1) ...
1715s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
1715s Setting up ca-certificates-java (20240118) ...
1715s No JRE found. Skipping Java certificates setup.
1715s Setting up python3-prettytable (3.10.1-1) ...
1715s Setting up libsnappy-jni (1.1.10.5-2) ...
1715s Setting up libxcomposite1:s390x (1:0.4.6-1) ...
1715s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
1715s Setting up sphinx-rtd-theme-common (3.0.1+dfsg-1) ...
1715s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
1715s Setting up libdrm-amdgpu1:s390x (2.4.123-1) ...
1715s Setting up libjnr-constants-java (0.10.4-2) ...
1715s Setting up libwayland-client0:s390x (1.23.0-1) ...
1715s Setting up libjpeg8:s390x (8c-2ubuntu11) ...
1715s Setting up libjaxb-api-java (2.3.1-1) ...
1715s Setting up libjffi-java (1.3.13+ds-1) ...
1715s Setting up mesa-libgallium:s390x (24.2.3-1ubuntu1) ...
1715s Setting up libjetty9-java (9.4.56-1) ...
1715s Setting up moreutils (0.69-1) ...
1715s Setting up libatk1.0-0t64:s390x (2.54.0-1) ...
1715s Setting up openjdk-21-jre-headless:s390x (21.0.5+11-1) ...
1715s update-alternatives: using /usr/lib/jvm/java-21-openjdk-s390x/bin/java to provide /usr/bin/java (java) in auto mode
1715s update-alternatives: using /usr/lib/jvm/java-21-openjdk-s390x/bin/jpackage to provide /usr/bin/jpackage (jpackage) in auto mode
1715s update-alternatives: using /usr/lib/jvm/java-21-openjdk-s390x/bin/keytool to provide /usr/bin/keytool (keytool) in auto mode
1715s update-alternatives: using /usr/lib/jvm/java-21-openjdk-s390x/bin/rmiregistry to provide /usr/bin/rmiregistry (rmiregistry) in auto mode
1715s update-alternatives: using /usr/lib/jvm/java-21-openjdk-s390x/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode
1715s Setting up python3-pure-sasl (0.5.1+dfsg1-4) ...
1715s Setting up libgbm1:s390x (24.2.3-1ubuntu1) ...
1715s Setting up fontconfig-config (2.15.0-1.1ubuntu2) ...
1715s Setting up libxtst6:s390x (2:1.2.3-1.1build1) ...
1715s Setting up libxcursor1:s390x (1:1.2.2-1) ...
1715s Setting up postgresql-client-16 (16.4-3) ...
1716s update-alternatives: using /usr/share/postgresql/16/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode
1716s Setting up python3-cdiff (1.3-1) ...
1716s Setting up libgl1-mesa-dri:s390x (24.2.3-1ubuntu1) ...
1716s Setting up libcommons-parent-java (56-1) ...
1716s Setting up libavahi-common3:s390x (0.8-13ubuntu6) ...
1716s Setting up libcommons-logging-java (1.3.0-1ubuntu1) ...
1716s Setting up dconf-service (0.40.0-4build2) ...
1716s Setting up python3-gevent (24.2.1-1) ...
1716s Setting up libjackson2-databind-java (2.14.0-1) ...
1716s Setting up libthai0:s390x (0.1.29-2build1) ...
1716s Setting up python3-parse-type (0.6.4-1) ...
1716s Setting up python3-eventlet (0.36.1-0ubuntu1) ...
1716s Setting up libnetty-tcnative-jni (2.0.28-1build4) ...
1716s Setting up python3-kazoo (2.9.0-2) ...
1717s Setting up postgresql-common (262) ...
1717s
1717s Creating config file /etc/postgresql-common/createcluster.conf with new version
1717s Building PostgreSQL dictionaries from installed myspell/hunspell packages...
1717s Removing obsolete dictionary files:
1718s Created symlink '/etc/systemd/system/multi-user.target.wants/postgresql.service' → '/usr/lib/systemd/system/postgresql.service'.
1718s Setting up libjs-sphinxdoc (7.4.7-4) ...
1718s Setting up libtiff6:s390x (4.5.1+git230720-4ubuntu4) ...
1718s Setting up libwayland-cursor0:s390x (1.23.0-1) ...
1718s Setting up libgdk-pixbuf-2.0-0:s390x (2.42.12+dfsg-1) ...
1718s Setting up python3-behave (1.2.6-6) ...
1718s /usr/lib/python3/dist-packages/behave/formatter/ansi_escapes.py:57: SyntaxWarning: invalid escape sequence '\['
1718s   _ANSI_ESCAPE_PATTERN = re.compile(u"\x1b\[\d+[mA]", re.UNICODE)
1718s /usr/lib/python3/dist-packages/behave/matchers.py:267: SyntaxWarning: invalid escape sequence '\d'
1718s   """Registers a custom type that will be available to "parse"
1718s Setting up libsnappy-java (1.1.10.5-2) ...
1718s Setting up libfontconfig1:s390x (2.15.0-1.1ubuntu2) ...
1718s Setting up patroni (3.3.1-1) ...
1718s Created symlink '/etc/systemd/system/multi-user.target.wants/patroni.service' → '/usr/lib/systemd/system/patroni.service'.
1719s Setting up libavahi-client3:s390x (0.8-13ubuntu6) ...
1719s Setting up libjnr-ffi-java (2.2.15-2) ...
1719s Setting up libatk-bridge2.0-0t64:s390x (2.54.0-1) ...
1719s Setting up gtk-update-icon-cache (4.16.5+ds-1) ...
1719s Setting up fontconfig (2.15.0-1.1ubuntu2) ...
1721s Regenerating fonts cache... done.
1721s Setting up libglx-mesa0:s390x (24.2.3-1ubuntu1) ...
1721s Setting up postgresql-16 (16.4-3) ...
1721s Creating new PostgreSQL cluster 16/main ...
1721s /usr/lib/postgresql/16/bin/initdb -D /var/lib/postgresql/16/main --auth-local peer --auth-host scram-sha-256 --no-instructions
1721s The files belonging to this database system will be owned by user "postgres".
1721s This user must also own the server process.
1721s
1721s The database cluster will be initialized with locale "C.UTF-8".
1721s The default database encoding has accordingly been set to "UTF8".
1721s The default text search configuration will be set to "english".
1721s
1721s Data page checksums are disabled.
1721s
1721s fixing permissions on existing directory /var/lib/postgresql/16/main ... ok
1721s creating subdirectories ... ok
1721s selecting dynamic shared memory implementation ... posix
1721s selecting default max_connections ... 100
1721s selecting default shared_buffers ... 128MB
1721s selecting default time zone ... Etc/UTC
1721s creating configuration files ... ok
1721s running bootstrap script ... ok
1721s performing post-bootstrap initialization ... ok
1721s syncing data to disk ... ok
1724s Setting up libglx0:s390x (1.7.0-1build1) ...
1724s Setting up libspring-core-java (4.3.30-2) ...
1724s Setting up dconf-gsettings-backend:s390x (0.40.0-4build2) ...
1724s Setting up libcommons-io-java (2.17.0-1) ...
1724s Setting up patroni-doc (3.3.1-1) ...
1724s Setting up libpango-1.0-0:s390x (1.54.0+ds-3) ...
1724s Setting up libcairo2:s390x (1.18.2-2) ...
1724s Setting up libjnr-enxio-java (0.32.16-1) ...
1724s Setting up libgl1:s390x (1.7.0-1build1) ...
1724s Setting up libcairo-gobject2:s390x (1.18.2-2) ...
1724s Setting up postgresql (16+262) ...
1724s Setting up libpangoft2-1.0-0:s390x (1.54.0+ds-3) ...
1724s Setting up libcups2t64:s390x (2.4.10-1ubuntu2) ...
1724s Setting up libgtk-3-common (3.24.43-3ubuntu2) ...
1724s Setting up libjnr-posix-java (3.1.18-1) ...
1724s Setting up libpangocairo-1.0-0:s390x (1.54.0+ds-3) ...
1724s Setting up libspring-beans-java (4.3.30-2) ...
1724s Setting up libjnr-unixsocket-java (0.38.21-2) ...
1724s Setting up libjetty9-extra-java (9.4.56-1) ... 1724s Setting up libguava-java (32.0.1-1) ... 1724s Setting up adwaita-icon-theme (47.0-2) ... 1724s update-alternatives: using /usr/share/icons/Adwaita/cursor.theme to provide /usr/share/icons/default/index.theme (x-cursor-theme) in auto mode 1724s Setting up liberror-prone-java (2.18.0-1) ... 1724s Setting up humanity-icon-theme (0.6.16) ... 1724s Setting up ubuntu-mono (24.04-0ubuntu1) ... 1724s Processing triggers for man-db (2.12.1-3) ... 1725s Processing triggers for libglib2.0-0t64:s390x (2.82.1-0ubuntu1) ... 1725s Setting up libgtk-3-0t64:s390x (3.24.43-3ubuntu2) ... 1725s Processing triggers for libc-bin (2.40-1ubuntu3) ... 1725s Processing triggers for ca-certificates-java (20240118) ... 1726s Adding debian:ACCVRAIZ1.pem 1726s Adding debian:AC_RAIZ_FNMT-RCM.pem 1726s Adding debian:AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem 1726s Adding debian:ANF_Secure_Server_Root_CA.pem 1726s Adding debian:Actalis_Authentication_Root_CA.pem 1726s Adding debian:AffirmTrust_Commercial.pem 1726s Adding debian:AffirmTrust_Networking.pem 1726s Adding debian:AffirmTrust_Premium.pem 1726s Adding debian:AffirmTrust_Premium_ECC.pem 1726s Adding debian:Amazon_Root_CA_1.pem 1726s Adding debian:Amazon_Root_CA_2.pem 1726s Adding debian:Amazon_Root_CA_3.pem 1726s Adding debian:Amazon_Root_CA_4.pem 1726s Adding debian:Atos_TrustedRoot_2011.pem 1726s Adding debian:Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem 1726s Adding debian:Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem 1726s Adding debian:Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem 1726s Adding debian:BJCA_Global_Root_CA1.pem 1726s Adding debian:BJCA_Global_Root_CA2.pem 1726s Adding debian:Baltimore_CyberTrust_Root.pem 1726s Adding debian:Buypass_Class_2_Root_CA.pem 1726s Adding debian:Buypass_Class_3_Root_CA.pem 1726s Adding debian:CA_Disig_Root_R2.pem 1726s Adding debian:CFCA_EV_ROOT.pem 1726s Adding debian:COMODO_Certification_Authority.pem 1726s Adding 
debian:COMODO_ECC_Certification_Authority.pem 1726s Adding debian:COMODO_RSA_Certification_Authority.pem 1726s Adding debian:Certainly_Root_E1.pem 1726s Adding debian:Certainly_Root_R1.pem 1726s Adding debian:Certigna.pem 1726s Adding debian:Certigna_Root_CA.pem 1726s Adding debian:Certum_EC-384_CA.pem 1726s Adding debian:Certum_Trusted_Network_CA.pem 1726s Adding debian:Certum_Trusted_Network_CA_2.pem 1726s Adding debian:Certum_Trusted_Root_CA.pem 1726s Adding debian:CommScope_Public_Trust_ECC_Root-01.pem 1726s Adding debian:CommScope_Public_Trust_ECC_Root-02.pem 1726s Adding debian:CommScope_Public_Trust_RSA_Root-01.pem 1726s Adding debian:CommScope_Public_Trust_RSA_Root-02.pem 1726s Adding debian:Comodo_AAA_Services_root.pem 1726s Adding debian:D-TRUST_BR_Root_CA_1_2020.pem 1726s Adding debian:D-TRUST_EV_Root_CA_1_2020.pem 1726s Adding debian:D-TRUST_Root_Class_3_CA_2_2009.pem 1726s Adding debian:D-TRUST_Root_Class_3_CA_2_EV_2009.pem 1726s Adding debian:DigiCert_Assured_ID_Root_CA.pem 1726s Adding debian:DigiCert_Assured_ID_Root_G2.pem 1726s Adding debian:DigiCert_Assured_ID_Root_G3.pem 1726s Adding debian:DigiCert_Global_Root_CA.pem 1726s Adding debian:DigiCert_Global_Root_G2.pem 1726s Adding debian:DigiCert_Global_Root_G3.pem 1726s Adding debian:DigiCert_High_Assurance_EV_Root_CA.pem 1726s Adding debian:DigiCert_TLS_ECC_P384_Root_G5.pem 1726s Adding debian:DigiCert_TLS_RSA4096_Root_G5.pem 1726s Adding debian:DigiCert_Trusted_Root_G4.pem 1726s Adding debian:Entrust.net_Premium_2048_Secure_Server_CA.pem 1726s Adding debian:Entrust_Root_Certification_Authority.pem 1726s Adding debian:Entrust_Root_Certification_Authority_-_EC1.pem 1726s Adding debian:Entrust_Root_Certification_Authority_-_G2.pem 1726s Adding debian:Entrust_Root_Certification_Authority_-_G4.pem 1726s Adding debian:GDCA_TrustAUTH_R5_ROOT.pem 1726s Adding debian:GLOBALTRUST_2020.pem 1726s Adding debian:GTS_Root_R1.pem 1726s Adding debian:GTS_Root_R2.pem 1726s Adding debian:GTS_Root_R3.pem 1726s 
Adding debian:GTS_Root_R4.pem 1726s Adding debian:GlobalSign_ECC_Root_CA_-_R4.pem 1726s Adding debian:GlobalSign_ECC_Root_CA_-_R5.pem 1726s Adding debian:GlobalSign_Root_CA.pem 1726s Adding debian:GlobalSign_Root_CA_-_R3.pem 1726s Adding debian:GlobalSign_Root_CA_-_R6.pem 1726s Adding debian:GlobalSign_Root_E46.pem 1726s Adding debian:GlobalSign_Root_R46.pem 1726s Adding debian:Go_Daddy_Class_2_CA.pem 1726s Adding debian:Go_Daddy_Root_Certificate_Authority_-_G2.pem 1726s Adding debian:HARICA_TLS_ECC_Root_CA_2021.pem 1726s Adding debian:HARICA_TLS_RSA_Root_CA_2021.pem 1726s Adding debian:Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem 1726s Adding debian:Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem 1726s Adding debian:HiPKI_Root_CA_-_G1.pem 1726s Adding debian:Hongkong_Post_Root_CA_3.pem 1726s Adding debian:ISRG_Root_X1.pem 1726s Adding debian:ISRG_Root_X2.pem 1726s Adding debian:IdenTrust_Commercial_Root_CA_1.pem 1726s Adding debian:IdenTrust_Public_Sector_Root_CA_1.pem 1726s Adding debian:Izenpe.com.pem 1726s Adding debian:Microsec_e-Szigno_Root_CA_2009.pem 1726s Adding debian:Microsoft_ECC_Root_Certificate_Authority_2017.pem 1726s Adding debian:Microsoft_RSA_Root_Certificate_Authority_2017.pem 1726s Adding debian:NAVER_Global_Root_Certification_Authority.pem 1726s Adding debian:NetLock_Arany_=Class_Gold=_Főtanúsítvány.pem 1726s Adding debian:OISTE_WISeKey_Global_Root_GB_CA.pem 1726s Adding debian:OISTE_WISeKey_Global_Root_GC_CA.pem 1726s Adding debian:QuoVadis_Root_CA_1_G3.pem 1726s Adding debian:QuoVadis_Root_CA_2.pem 1726s Adding debian:QuoVadis_Root_CA_2_G3.pem 1726s Adding debian:QuoVadis_Root_CA_3.pem 1726s Adding debian:QuoVadis_Root_CA_3_G3.pem 1726s Adding debian:SSL.com_EV_Root_Certification_Authority_ECC.pem 1726s Adding debian:SSL.com_EV_Root_Certification_Authority_RSA_R2.pem 1726s Adding debian:SSL.com_Root_Certification_Authority_ECC.pem 1726s Adding debian:SSL.com_Root_Certification_Authority_RSA.pem 1726s Adding 
debian:SSL.com_TLS_ECC_Root_CA_2022.pem 1726s Adding debian:SSL.com_TLS_RSA_Root_CA_2022.pem 1726s Adding debian:SZAFIR_ROOT_CA2.pem 1726s Adding debian:Sectigo_Public_Server_Authentication_Root_E46.pem 1726s Adding debian:Sectigo_Public_Server_Authentication_Root_R46.pem 1726s Adding debian:SecureSign_RootCA11.pem 1726s Adding debian:SecureTrust_CA.pem 1726s Adding debian:Secure_Global_CA.pem 1726s Adding debian:Security_Communication_ECC_RootCA1.pem 1726s Adding debian:Security_Communication_RootCA2.pem 1726s Adding debian:Security_Communication_RootCA3.pem 1726s Adding debian:Security_Communication_Root_CA.pem 1726s Adding debian:Starfield_Class_2_CA.pem 1726s Adding debian:Starfield_Root_Certificate_Authority_-_G2.pem 1726s Adding debian:Starfield_Services_Root_Certificate_Authority_-_G2.pem 1726s Adding debian:SwissSign_Gold_CA_-_G2.pem 1726s Adding debian:SwissSign_Silver_CA_-_G2.pem 1726s Adding debian:T-TeleSec_GlobalRoot_Class_2.pem 1726s Adding debian:T-TeleSec_GlobalRoot_Class_3.pem 1726s Adding debian:TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem 1726s Adding debian:TWCA_Global_Root_CA.pem 1726s Adding debian:TWCA_Root_Certification_Authority.pem 1726s Adding debian:TeliaSonera_Root_CA_v1.pem 1726s Adding debian:Telia_Root_CA_v2.pem 1726s Adding debian:TrustAsia_Global_Root_CA_G3.pem 1726s Adding debian:TrustAsia_Global_Root_CA_G4.pem 1726s Adding debian:Trustwave_Global_Certification_Authority.pem 1726s Adding debian:Trustwave_Global_ECC_P256_Certification_Authority.pem 1726s Adding debian:Trustwave_Global_ECC_P384_Certification_Authority.pem 1726s Adding debian:TunTrust_Root_CA.pem 1726s Adding debian:UCA_Extended_Validation_Root.pem 1726s Adding debian:UCA_Global_G2_Root.pem 1726s Adding debian:USERTrust_ECC_Certification_Authority.pem 1726s Adding debian:USERTrust_RSA_Certification_Authority.pem 1726s Adding debian:XRamp_Global_CA_Root.pem 1726s Adding debian:certSIGN_ROOT_CA.pem 1726s Adding debian:certSIGN_Root_CA_G2.pem 1726s Adding 
debian:e-Szigno_Root_CA_2017.pem 1726s Adding debian:ePKI_Root_Certification_Authority.pem 1726s Adding debian:emSign_ECC_Root_CA_-_C3.pem 1726s Adding debian:emSign_ECC_Root_CA_-_G3.pem 1726s Adding debian:emSign_Root_CA_-_C1.pem 1726s Adding debian:emSign_Root_CA_-_G1.pem 1726s Adding debian:vTrus_ECC_Root_CA.pem 1726s Adding debian:vTrus_Root_CA.pem 1726s done. 1726s Setting up openjdk-21-jre:s390x (21.0.5+11-1) ... 1726s Setting up junit4 (4.13.2-5) ... 1726s Setting up default-jre-headless (2:1.21-76) ... 1726s Setting up default-jre (2:1.21-76) ... 1726s Setting up libnetty-tcnative-java (2.0.28-1build4) ... 1726s Setting up libzookeeper-java (3.9.2-2) ... 1726s Setting up zookeeper (3.9.2-2) ... 1726s warn: The home directory `/var/lib/zookeeper' already exists. Not touching this directory. 1726s warn: Warning: The home directory `/var/lib/zookeeper' does not belong to the user you are currently creating. 1726s update-alternatives: using /etc/zookeeper/conf_example to provide /etc/zookeeper/conf (zookeeper-conf) in auto mode 1726s Setting up zookeeperd (3.9.2-2) ... 1726s Setting up autopkgtest-satdep (0) ... 1731s (Reading database ... 75650 files and directories currently installed.) 1731s Removing autopkgtest-satdep (0) ... 
1733s autopkgtest [11:58:23]: test acceptance-zookeeper: debian/tests/acceptance zookeeper "-e dcs_failsafe_mode"
1733s autopkgtest [11:58:23]: test acceptance-zookeeper: [-----------------------
1738s dpkg-architecture: warning: cannot determine CC system type, falling back to default (native compilation)
1738s ++ ls -1r /usr/lib/postgresql/
1738s + for PG_VERSION in $(ls -1r /usr/lib/postgresql/)
1738s + '[' 16 == 10 -o 16 == 11 ']'
1738s + echo '### PostgreSQL 16 acceptance-zookeeper -e dcs_failsafe_mode ###'
1738s ### PostgreSQL 16 acceptance-zookeeper -e dcs_failsafe_mode ###
1738s + su postgres -p -c 'set -o pipefail; ETCD_UNSUPPORTED_ARCH=s390x DCS=zookeeper PATH=/usr/lib/postgresql/16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin behave -e dcs_failsafe_mode | ts'
1739s Nov 13 11:58:29 Feature: basic replication # features/basic_replication.feature:1
1739s Nov 13 11:58:29 We should check that the basic bootstrapping, replication and failover works.
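The acceptance runner above pipes behave output through ts(1), so from here on most lines carry two prefixes: the autopkgtest elapsed-seconds counter ("1739s") and a wall-clock stamp ("Nov 13 11:58:29"). A minimal sketch of a parser for these prefixed lines, useful when post-processing such a log (the regex and field names are illustrative, not part of the test suite):

```python
import re

# Matches the ts(1)-stamped behave lines in this log; lines without the
# wall-clock stamp (e.g. "1845s SKIP ...") intentionally return None.
LINE_RE = re.compile(
    r"^(?P<elapsed>\d+)s (?P<month>\w{3}) (?P<day>\d+) "
    r"(?P<time>\d{2}:\d{2}:\d{2}) (?P<message>.*)$"
)

def parse_line(line: str):
    """Split a log line into elapsed time, wall-clock stamp, and message."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

sample = "1739s Nov 13 11:58:29 Feature: basic replication # features/basic_replication.feature:1"
print(parse_line(sample)["elapsed"])  # elapsed seconds since test start
```

Grouping entries by the `elapsed` field is a quick way to spot which behave steps dominate the runtime.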
1739s Nov 13 11:58:29 Scenario: check replication of a single table # features/basic_replication.feature:4 1739s Nov 13 11:58:29 Given I start postgres0 # features/steps/basic_replication.py:8 1742s Nov 13 11:58:32 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 1743s Nov 13 11:58:33 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 1743s Nov 13 11:58:33 When I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "synchronous_mode": true} # features/steps/patroni_api.py:71 1743s Nov 13 11:58:33 Then I receive a response code 200 # features/steps/patroni_api.py:98 1743s Nov 13 11:58:33 When I start postgres1 # features/steps/basic_replication.py:8 1746s Nov 13 11:58:36 And I configure and start postgres2 with a tag replicatefrom postgres0 # features/steps/cascading_replication.py:7 1749s Nov 13 11:58:39 And "sync" key in DCS has leader=postgres0 after 20 seconds # features/steps/cascading_replication.py:23 1749s Nov 13 11:58:39 And I add the table foo to postgres0 # features/steps/basic_replication.py:54 1749s Nov 13 11:58:39 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 1750s Nov 13 11:58:40 Then table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93 1750s Nov 13 11:58:40 1750s Nov 13 11:58:40 Scenario: check restart of sync replica # features/basic_replication.feature:17 1750s Nov 13 11:58:40 Given I shut down postgres2 # features/steps/basic_replication.py:29 1751s Nov 13 11:58:41 Then "sync" key in DCS has sync_standby=postgres1 after 5 seconds # features/steps/cascading_replication.py:23 1751s Nov 13 11:58:41 When I start postgres2 # features/steps/basic_replication.py:8 1753s Nov 13 11:58:43 And I shut down postgres1 # features/steps/basic_replication.py:29 1756s Nov 13 11:58:46 Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # 
features/steps/cascading_replication.py:23 1757s Nov 13 11:58:47 When I start postgres1 # features/steps/basic_replication.py:8 1759s Nov 13 11:58:49 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 1760s Nov 13 11:58:50 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1760s Nov 13 11:58:50 And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142 1760s Nov 13 11:58:50 1760s Nov 13 11:58:50 Scenario: check stuck sync replica # features/basic_replication.feature:28 1760s Nov 13 11:58:50 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": true, "maximum_lag_on_syncnode": 15000000, "postgresql": {"parameters": {"synchronous_commit": "remote_apply"}}} # features/steps/patroni_api.py:71 1760s Nov 13 11:58:50 Then I receive a response code 200 # features/steps/patroni_api.py:98 1760s Nov 13 11:58:50 And I create table on postgres0 # features/steps/basic_replication.py:73 1760s Nov 13 11:58:50 And table mytest is present on postgres1 after 2 seconds # features/steps/basic_replication.py:93 1761s Nov 13 11:58:51 And table mytest is present on postgres2 after 2 seconds # features/steps/basic_replication.py:93 1761s Nov 13 11:58:51 When I pause wal replay on postgres2 # features/steps/basic_replication.py:64 1761s Nov 13 11:58:51 And I load data on postgres0 # features/steps/basic_replication.py:84 1761s Nov 13 11:58:51 Then "sync" key in DCS has sync_standby=postgres1 after 15 seconds # features/steps/cascading_replication.py:23 1764s Nov 13 11:58:54 And I resume wal replay on postgres2 # features/steps/basic_replication.py:64 1764s Nov 13 11:58:54 And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1765s Nov 13 11:58:55 And Status code on GET http://127.0.0.1:8010/async is 200 after 3 seconds # 
features/steps/patroni_api.py:142 1765s Nov 13 11:58:55 When I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": null, "maximum_lag_on_syncnode": -1, "postgresql": {"parameters": {"synchronous_commit": "on"}}} # features/steps/patroni_api.py:71 1766s Nov 13 11:58:55 Then I receive a response code 200 # features/steps/patroni_api.py:98 1766s Nov 13 11:58:55 And I drop table on postgres0 # features/steps/basic_replication.py:73 1766s Nov 13 11:58:55 1766s Nov 13 11:58:55 Scenario: check multi sync replication # features/basic_replication.feature:44 1766s Nov 13 11:58:55 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 2} # features/steps/patroni_api.py:71 1766s Nov 13 11:58:56 Then I receive a response code 200 # features/steps/patroni_api.py:98 1766s Nov 13 11:58:56 Then "sync" key in DCS has sync_standby=postgres1,postgres2 after 10 seconds # features/steps/cascading_replication.py:23 1770s Nov 13 11:59:00 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1770s Nov 13 11:59:00 And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1770s Nov 13 11:59:00 When I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 1} # features/steps/patroni_api.py:71 1770s Nov 13 11:59:00 Then I receive a response code 200 # features/steps/patroni_api.py:98 1770s Nov 13 11:59:00 And I shut down postgres1 # features/steps/basic_replication.py:29 1773s Nov 13 11:59:03 Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23 1774s Nov 13 11:59:04 When I start postgres1 # features/steps/basic_replication.py:8 1776s Nov 13 11:59:06 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 1777s Nov 13 11:59:07 And Status code on GET http://127.0.0.1:8010/sync is 200 after 
3 seconds # features/steps/patroni_api.py:142 1777s Nov 13 11:59:07 And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142 1777s Nov 13 11:59:07 1777s Nov 13 11:59:07 Scenario: check the basic failover in synchronous mode # features/basic_replication.feature:59 1777s Nov 13 11:59:07 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86 1778s Nov 13 11:59:08 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 1778s Nov 13 11:59:08 When I sleep for 2 seconds # features/steps/patroni_api.py:39 1780s Nov 13 11:59:10 And I shut down postgres0 # features/steps/basic_replication.py:29 1781s Nov 13 11:59:11 And I run patronictl.py resume batman # features/steps/patroni_api.py:86 1782s Nov 13 11:59:12 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 1782s Nov 13 11:59:12 And postgres2 role is the primary after 24 seconds # features/steps/basic_replication.py:105 1802s Nov 13 11:59:31 And Response on GET http://127.0.0.1:8010/history contains recovery after 10 seconds # features/steps/patroni_api.py:156 1805s Nov 13 11:59:35 And there is a postgres2_cb.log with "on_role_change master batman" in postgres2 data directory # features/steps/cascading_replication.py:12 1805s Nov 13 11:59:35 When I issue a PATCH request to http://127.0.0.1:8010/config with {"synchronous_mode": null, "master_start_timeout": 0} # features/steps/patroni_api.py:71 1805s Nov 13 11:59:35 Then I receive a response code 200 # features/steps/patroni_api.py:98 1805s Nov 13 11:59:35 When I add the table bar to postgres2 # features/steps/basic_replication.py:54 1805s Nov 13 11:59:35 Then table bar is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 1808s Nov 13 11:59:38 And Response on GET http://127.0.0.1:8010/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156 1808s Nov 13 11:59:38 1808s Nov 13 11:59:38 Scenario: check 
rejoin of the former primary with pg_rewind # features/basic_replication.feature:75 1808s Nov 13 11:59:38 Given I add the table splitbrain to postgres0 # features/steps/basic_replication.py:54 1808s Nov 13 11:59:38 And I start postgres0 # features/steps/basic_replication.py:8 1808s Nov 13 11:59:38 Then postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 1814s Nov 13 11:59:44 When I add the table buz to postgres2 # features/steps/basic_replication.py:54 1814s Nov 13 11:59:44 Then table buz is present on postgres0 after 20 seconds # features/steps/basic_replication.py:93 1814s Nov 13 11:59:44 1814s Nov 13 11:59:44 @reject-duplicate-name 1814s Nov 13 11:59:44 Scenario: check graceful rejection when two nodes have the same name # features/basic_replication.feature:83 1814s Nov 13 11:59:44 Given I start duplicate postgres0 on port 8011 # features/steps/basic_replication.py:13 1816s Nov 13 11:59:46 Then there is one of ["Can't start; there is already a node named 'postgres0' running"] CRITICAL in the dup-postgres0 patroni log after 5 seconds # features/steps/basic_replication.py:121 1820s Nov 13 11:59:50 1820s Nov 13 11:59:50 Feature: cascading replication # features/cascading_replication.feature:1 1820s Nov 13 11:59:50 We should check that patroni can do base backup and streaming from the replica 1820s Nov 13 11:59:50 Scenario: check a base backup and streaming replication from a replica # features/cascading_replication.feature:4 1820s Nov 13 11:59:50 Given I start postgres0 # features/steps/basic_replication.py:8 1823s Nov 13 11:59:53 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 1823s Nov 13 11:59:53 And I configure and start postgres1 with a tag clonefrom true # features/steps/cascading_replication.py:7 1826s Nov 13 11:59:56 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 1831s Nov 13 12:00:01 And I create label with "postgres0" in 
postgres0 data directory # features/steps/cascading_replication.py:18
1831s Nov 13 12:00:01 And I create label with "postgres1" in postgres1 data directory # features/steps/cascading_replication.py:18
1831s Nov 13 12:00:01 And "members/postgres1" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23
1831s Nov 13 12:00:01 And I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7
1834s Nov 13 12:00:04 Then replication works from postgres0 to postgres2 after 30 seconds # features/steps/basic_replication.py:112
1839s Nov 13 12:00:09 And there is a label with "postgres1" in postgres2 data directory # features/steps/cascading_replication.py:12
1845s Nov 13 12:00:15
1845s SKIP FEATURE citus: Citus extension isn't available
1845s SKIP Scenario check that worker cluster is registered in the coordinator: Citus extension isn't available
1845s SKIP Scenario coordinator failover updates pg_dist_node: Citus extension isn't available
1845s SKIP Scenario worker switchover doesn't break client queries on the coordinator: Citus extension isn't available
1845s SKIP Scenario worker primary restart doesn't break client queries on the coordinator: Citus extension isn't available
1845s SKIP Scenario check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node: Citus extension isn't available
1845s Nov 13 12:00:15 Feature: citus # features/citus.feature:1
1845s Nov 13 12:00:15 We should check that coordinator discovers and registers workers and clients don't have errors when worker cluster switches over
1845s Nov 13 12:00:15 Scenario: check that worker cluster is registered in the coordinator # features/citus.feature:4
1845s Nov 13 12:00:15 Given I start postgres0 in citus group 0 # None
1845s Nov 13 12:00:15 And I start postgres2 in citus group 1 # None
1845s Nov 13 12:00:15 Then postgres0 is a leader in a group 0 after 10 seconds # None
1845s Nov 13 12:00:15 And postgres2 is a leader in a group 1 after 10 seconds # None 1845s Nov 13 12:00:15 When I start postgres1 in citus group 0 # None 1845s Nov 13 12:00:15 And I start postgres3 in citus group 1 # None 1845s Nov 13 12:00:15 Then replication works from postgres0 to postgres1 after 15 seconds # None 1845s Nov 13 12:00:15 Then replication works from postgres2 to postgres3 after 15 seconds # None 1845s Nov 13 12:00:15 And postgres0 is registered in the postgres0 as the primary in group 0 after 5 seconds # None 1845s Nov 13 12:00:15 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 1845s Nov 13 12:00:15 1845s Nov 13 12:00:15 Scenario: coordinator failover updates pg_dist_node # features/citus.feature:16 1845s Nov 13 12:00:15 Given I run patronictl.py failover batman --group 0 --candidate postgres1 --force # None 1845s Nov 13 12:00:15 Then postgres1 role is the primary after 10 seconds # None 1845s Nov 13 12:00:15 And "members/postgres0" key in a group 0 in DCS has state=running after 15 seconds # None 1845s Nov 13 12:00:15 And replication works from postgres1 to postgres0 after 15 seconds # None 1845s Nov 13 12:00:15 And postgres1 is registered in the postgres2 as the primary in group 0 after 5 seconds # None 1845s Nov 13 12:00:15 And "sync" key in a group 0 in DCS has sync_standby=postgres0 after 15 seconds # None 1845s Nov 13 12:00:15 When I run patronictl.py switchover batman --group 0 --candidate postgres0 --force # None 1845s Nov 13 12:00:15 Then postgres0 role is the primary after 10 seconds # None 1845s Nov 13 12:00:15 And replication works from postgres0 to postgres1 after 15 seconds # None 1845s Nov 13 12:00:15 And postgres0 is registered in the postgres2 as the primary in group 0 after 5 seconds # None 1845s Nov 13 12:00:15 And "sync" key in a group 0 in DCS has sync_standby=postgres1 after 15 seconds # None 1845s Nov 13 12:00:15 1845s Nov 13 12:00:15 Scenario: worker switchover doesn't break 
client queries on the coordinator # features/citus.feature:29 1845s Nov 13 12:00:15 Given I create a distributed table on postgres0 # None 1845s Nov 13 12:00:15 And I start a thread inserting data on postgres0 # None 1845s Nov 13 12:00:15 When I run patronictl.py switchover batman --group 1 --force # None 1845s Nov 13 12:00:15 Then I receive a response returncode 0 # None 1845s Nov 13 12:00:15 And postgres3 role is the primary after 10 seconds # None 1845s Nov 13 12:00:15 And "members/postgres2" key in a group 1 in DCS has state=running after 15 seconds # None 1845s Nov 13 12:00:15 And replication works from postgres3 to postgres2 after 15 seconds # None 1845s Nov 13 12:00:15 And postgres3 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 1845s Nov 13 12:00:15 And "sync" key in a group 1 in DCS has sync_standby=postgres2 after 15 seconds # None 1845s Nov 13 12:00:15 And a thread is still alive # None 1845s Nov 13 12:00:15 When I run patronictl.py switchover batman --group 1 --force # None 1845s Nov 13 12:00:15 Then I receive a response returncode 0 # None 1845s Nov 13 12:00:15 And postgres2 role is the primary after 10 seconds # None 1845s Nov 13 12:00:15 And replication works from postgres2 to postgres3 after 15 seconds # None 1845s Nov 13 12:00:15 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 1845s Nov 13 12:00:15 And "sync" key in a group 1 in DCS has sync_standby=postgres3 after 15 seconds # None 1845s Nov 13 12:00:15 And a thread is still alive # None 1845s Nov 13 12:00:15 When I stop a thread # None 1845s Nov 13 12:00:15 Then a distributed table on postgres0 has expected rows # None 1845s Nov 13 12:00:15 1845s Nov 13 12:00:15 Scenario: worker primary restart doesn't break client queries on the coordinator # features/citus.feature:50 1845s Nov 13 12:00:15 Given I cleanup a distributed table on postgres0 # None 1845s Nov 13 12:00:15 And I start a thread inserting data on postgres0 # 
None 1845s Nov 13 12:00:15 When I run patronictl.py restart batman postgres2 --group 1 --force # None 1845s Nov 13 12:00:15 Then I receive a response returncode 0 # None 1845s Nov 13 12:00:15 And postgres2 role is the primary after 10 seconds # None 1845s Nov 13 12:00:15 And replication works from postgres2 to postgres3 after 15 seconds # None 1845s Nov 13 12:00:15 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 1845s Nov 13 12:00:15 And a thread is still alive # None 1845s Nov 13 12:00:15 When I stop a thread # None 1845s Nov 13 12:00:15 Then a distributed table on postgres0 has expected rows # None 1845s Nov 13 12:00:15 1845s Nov 13 12:00:15 Scenario: check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node # features/citus.feature:62 1845s Nov 13 12:00:15 Given I start postgres4 in citus group 2 # None 1845s Nov 13 12:00:15 Then postgres4 is a leader in a group 2 after 10 seconds # None 1845s Nov 13 12:00:15 And "members/postgres4" key in a group 2 in DCS has role=master after 3 seconds # None 1845s Nov 13 12:00:15 When I run patronictl.py edit-config batman --group 2 -s ttl=20 --force # None 1845s Nov 13 12:00:15 Then I receive a response returncode 0 # None 1845s Nov 13 12:00:15 And I receive a response output "+ttl: 20" # None 1845s Nov 13 12:00:15 Then postgres4 is registered in the postgres2 as the primary in group 2 after 5 seconds # None 1845s Nov 13 12:00:15 When I shut down postgres4 # None 1845s Nov 13 12:00:15 Then there is a transaction in progress on postgres0 changing pg_dist_node after 5 seconds # None 1845s Nov 13 12:00:15 When I run patronictl.py restart batman postgres2 --group 1 --force # None 1845s Nov 13 12:00:15 Then a transaction finishes in 20 seconds # None 1845s Nov 13 12:00:15 1845s Nov 13 12:00:15 Feature: custom bootstrap # features/custom_bootstrap.feature:1 1845s Nov 13 12:00:15 We should check that patroni can bootstrap a new cluster 
from a backup
1845s Nov 13 12:00:15 Scenario: clone existing cluster using pg_basebackup # features/custom_bootstrap.feature:4
1845s Nov 13 12:00:15 Given I start postgres0 # features/steps/basic_replication.py:8
1848s Nov 13 12:00:18 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
1848s Nov 13 12:00:18 When I add the table foo to postgres0 # features/steps/basic_replication.py:54
1848s Nov 13 12:00:18 And I start postgres1 in a cluster batman1 as a clone of postgres0 # features/steps/custom_bootstrap.py:6
1851s Nov 13 12:00:21 Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16
1852s Nov 13 12:00:22 Then table foo is present on postgres1 after 10 seconds # features/steps/basic_replication.py:93
1852s Nov 13 12:00:22
1852s Nov 13 12:00:22 Scenario: make a backup and do a restore into a new cluster # features/custom_bootstrap.feature:12
1852s Nov 13 12:00:22 Given I add the table bar to postgres1 # features/steps/basic_replication.py:54
1852s Nov 13 12:00:22 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25
1853s Nov 13 12:00:23 When I start postgres2 in a cluster batman2 from backup # features/steps/custom_bootstrap.py:11
1857s Nov 13 12:00:27 Then postgres2 is a leader of batman2 after 30 seconds # features/steps/custom_bootstrap.py:16
1857s Nov 13 12:00:27 And table bar is present on postgres2 after 10 seconds # features/steps/basic_replication.py:93
1863s Nov 13 12:00:33
1863s Nov 13 12:00:33 Feature: ignored slots # features/ignored_slots.feature:1
1863s Nov 13 12:00:33
1863s Nov 13 12:00:33 Scenario: check ignored slots aren't removed on failover/switchover # features/ignored_slots.feature:2
1863s Nov 13 12:00:33 Given I start postgres1 # features/steps/basic_replication.py:8
1866s Nov 13 12:00:36 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
1866s Nov 13 12:00:36 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
1866s Nov 13 12:00:36 When I issue a PATCH request to http://127.0.0.1:8009/config with {"ignore_slots": [{"name": "unmanaged_slot_0", "database": "postgres", "plugin": "test_decoding", "type": "logical"}, {"name": "unmanaged_slot_1", "database": "postgres", "plugin": "test_decoding"}, {"name": "unmanaged_slot_2", "database": "postgres"}, {"name": "unmanaged_slot_3"}], "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71
1866s Nov 13 12:00:36 Then I receive a response code 200 # features/steps/patroni_api.py:98
1866s Nov 13 12:00:36 And Response on GET http://127.0.0.1:8009/config contains ignore_slots after 10 seconds # features/steps/patroni_api.py:156
1866s Nov 13 12:00:36 When I shut down postgres1 # features/steps/basic_replication.py:29
1868s Nov 13 12:00:38 And I start postgres1 # features/steps/basic_replication.py:8
1870s Nov 13 12:00:40 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
1871s Nov 13 12:00:41 And "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23
1872s Nov 13 12:00:42 And postgres1 role is the primary after 20 seconds # features/steps/basic_replication.py:105
1872s Nov 13 12:00:42 When I create a logical replication slot unmanaged_slot_0 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
1872s Nov 13 12:00:42 And I create a logical replication slot unmanaged_slot_1 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
1872s Nov 13 12:00:42 And I create a logical replication slot unmanaged_slot_2 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
1872s Nov 13 12:00:42 And I create a logical replication slot unmanaged_slot_3 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
1872s Nov 13 12:00:42 And I create a logical replication slot dummy_slot on postgres1 with the test_decoding plugin # features/steps/slots.py:8
1872s Nov 13 12:00:42 Then postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
1872s Nov 13 12:00:42 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
1872s Nov 13 12:00:42 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
1872s Nov 13 12:00:42 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
1872s Nov 13 12:00:42 When I start postgres0 # features/steps/basic_replication.py:8
1875s Nov 13 12:00:45 Then "members/postgres0" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23
1875s Nov 13 12:00:45 And postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105
1875s Nov 13 12:00:45 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112
1876s Nov 13 12:00:46 When I shut down postgres1 # features/steps/basic_replication.py:29
1878s Nov 13 12:00:48 Then "members/postgres0" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23
1879s Nov 13 12:00:49 When I start postgres1 # features/steps/basic_replication.py:8
1881s Nov 13 12:00:51 Then postgres1 role is the secondary after 20 seconds # features/steps/basic_replication.py:105
1881s Nov 13 12:00:51 And "members/postgres1" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23
1882s Nov 13 12:00:52 And I sleep for 2 seconds # features/steps/patroni_api.py:39
1884s Nov 13 12:00:54 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
1884s Nov 13 12:00:54 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
1884s Nov 13 12:00:54 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
1884s Nov 13 12:00:54 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
1884s Nov 13 12:00:54 And postgres1 does not have a replication slot named dummy_slot # features/steps/slots.py:40
1884s Nov 13 12:00:54 When I shut down postgres0 # features/steps/basic_replication.py:29
1886s Nov 13 12:00:56 Then "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23
1887s Nov 13 12:00:57 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
1887s Nov 13 12:00:57 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
1887s Nov 13 12:00:57 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
1887s Nov 13 12:00:57 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
1889s Nov 13 12:00:59
1889s Nov 13 12:00:59 Feature: nostream node # features/nostream_node.feature:1
1889s Nov 13 12:00:59
1889s Nov 13 12:00:59 Scenario: check nostream node is recovering from archive # features/nostream_node.feature:3
1889s Nov 13 12:00:59 When I start postgres0 # features/steps/basic_replication.py:8
1892s Nov 13 12:01:02 And I configure and start postgres1 with a tag nostream true # features/steps/cascading_replication.py:7
1895s Nov 13 12:01:05 Then "members/postgres1" key in DCS has replication_state=in archive recovery after 10 seconds # features/steps/cascading_replication.py:23
1895s Nov 13 12:01:05 And replication works from postgres0 to postgres1 after 30 seconds # features/steps/basic_replication.py:112
1900s Nov 13 12:01:10
1900s Nov 13 12:01:10 @slot-advance
1900s Nov 13 12:01:10 Scenario: check permanent logical replication slots are not copied # features/nostream_node.feature:10
1900s Nov 13 12:01:10 When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}}, "slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71
1900s Nov 13 12:01:10 Then I receive a response code 200 # features/steps/patroni_api.py:98
1900s Nov 13 12:01:10 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86
1902s Nov 13 12:01:12 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19
1903s Nov 13 12:01:13 When I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7
1906s Nov 13 12:01:16 Then "members/postgres2" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23
1913s Nov 13 12:01:23 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40
1913s Nov 13 12:01:23 And postgres2 does not have a replication slot named test_logical # features/steps/slots.py:40
1918s Nov 13 12:01:28
1918s Nov 13 12:01:28 Feature: patroni api # features/patroni_api.feature:1
1918s Nov 13 12:01:28 We should check that patroni correctly responds to valid and not-valid API requests.
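[Editor's note: the feature steps above drive Patroni's REST API with JSON PATCH requests against the `/config` endpoint on ports 8008/8009. As a minimal sketch of what such a step boils down to (endpoint and payload taken from the log; the request object is only built here, not sent, and the helper name is illustrative):]

```python
import json
import urllib.request

def build_config_patch(base_url: str, config: dict) -> urllib.request.Request:
    """Build a PATCH request against Patroni's /config endpoint,
    mirroring the 'I issue a PATCH request to .../config' steps."""
    return urllib.request.Request(
        f"{base_url}/config",
        data=json.dumps(config).encode(),
        headers={"Content-Type": "application/json"},
        method="PATCH",
    )

# Same dynamic-configuration change as the nostream_node scenario above.
req = build_config_patch(
    "http://127.0.0.1:8008",
    {"postgresql": {"parameters": {"wal_level": "logical"}}},
)
print(req.get_method(), req.full_url)  # PATCH http://127.0.0.1:8008/config
```

Sending the request (e.g. via `urllib.request.urlopen(req)`) would return code 200 on success, which is what the "Then I receive a response code 200" steps assert.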
1918s Nov 13 12:01:28 Scenario: check API requests on a stand-alone server # features/patroni_api.feature:4
1918s Nov 13 12:01:28 Given I start postgres0 # features/steps/basic_replication.py:8
1921s Nov 13 12:01:31 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
1921s Nov 13 12:01:31 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61
1921s Nov 13 12:01:31 Then I receive a response code 200 # features/steps/patroni_api.py:98
1921s Nov 13 12:01:31 And I receive a response state running # features/steps/patroni_api.py:98
1921s Nov 13 12:01:31 And I receive a response role master # features/steps/patroni_api.py:98
1921s Nov 13 12:01:31 When I issue a GET request to http://127.0.0.1:8008/standby_leader # features/steps/patroni_api.py:61
1921s Nov 13 12:01:31 Then I receive a response code 503 # features/steps/patroni_api.py:98
1921s Nov 13 12:01:31 When I issue a GET request to http://127.0.0.1:8008/health # features/steps/patroni_api.py:61
1921s Nov 13 12:01:31 Then I receive a response code 200 # features/steps/patroni_api.py:98
1921s Nov 13 12:01:31 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61
1921s Nov 13 12:01:31 Then I receive a response code 503 # features/steps/patroni_api.py:98
1921s Nov 13 12:01:31 When I issue a POST request to http://127.0.0.1:8008/reinitialize with {"force": true} # features/steps/patroni_api.py:71
1921s Nov 13 12:01:31 Then I receive a response code 503 # features/steps/patroni_api.py:98
1921s Nov 13 12:01:31 And I receive a response text I am the leader, can not reinitialize # features/steps/patroni_api.py:98
1921s Nov 13 12:01:31 When I run patronictl.py switchover batman --master postgres0 --force # features/steps/patroni_api.py:86
1923s Nov 13 12:01:33 Then I receive a response returncode 1 # features/steps/patroni_api.py:98
1923s Nov 13 12:01:33 And I receive a response output "Error: No candidates found to switchover to" # features/steps/patroni_api.py:98
1923s Nov 13 12:01:33 When I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0"} # features/steps/patroni_api.py:71
1923s Nov 13 12:01:33 Then I receive a response code 412 # features/steps/patroni_api.py:98
1923s Nov 13 12:01:33 And I receive a response text switchover is not possible: cluster does not have members except leader # features/steps/patroni_api.py:98
1923s Nov 13 12:01:33 When I issue an empty POST request to http://127.0.0.1:8008/failover # features/steps/patroni_api.py:66
1923s Nov 13 12:01:33 Then I receive a response code 400 # features/steps/patroni_api.py:98
1923s Nov 13 12:01:33 When I issue a POST request to http://127.0.0.1:8008/failover with {"foo": "bar"} # features/steps/patroni_api.py:71
1923s Nov 13 12:01:33 Then I receive a response code 400 # features/steps/patroni_api.py:98
1923s Nov 13 12:01:33 And I receive a response text "Failover could be performed only to a specific candidate" # features/steps/patroni_api.py:98
1923s Nov 13 12:01:33
1923s Nov 13 12:01:33 Scenario: check local configuration reload # features/patroni_api.feature:32
1923s Nov 13 12:01:33 Given I add tag new_tag new_value to postgres0 config # features/steps/patroni_api.py:137
1923s Nov 13 12:01:33 And I issue an empty POST request to http://127.0.0.1:8008/reload # features/steps/patroni_api.py:66
1923s Nov 13 12:01:33 Then I receive a response code 202 # features/steps/patroni_api.py:98
1923s Nov 13 12:01:33
1923s Nov 13 12:01:33 Scenario: check dynamic configuration change via DCS # features/patroni_api.feature:37
1923s Nov 13 12:01:33 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "postgresql": {"parameters": {"max_connections": "101"}}} # features/steps/patroni_api.py:71
1923s Nov 13 12:01:33 Then I receive a response code 200 # features/steps/patroni_api.py:98
1923s Nov 13 12:01:33 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 11 seconds # features/steps/patroni_api.py:156
1925s Nov 13 12:01:35 When I issue a GET request to http://127.0.0.1:8008/config # features/steps/patroni_api.py:61
1925s Nov 13 12:01:35 Then I receive a response code 200 # features/steps/patroni_api.py:98
1925s Nov 13 12:01:35 And I receive a response ttl 20 # features/steps/patroni_api.py:98
1925s Nov 13 12:01:35 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61
1925s Nov 13 12:01:35 Then I receive a response code 200 # features/steps/patroni_api.py:98
1925s Nov 13 12:01:35 And I receive a response tags {'new_tag': 'new_value'} # features/steps/patroni_api.py:98
1925s Nov 13 12:01:35 And I sleep for 4 seconds # features/steps/patroni_api.py:39
1929s Nov 13 12:01:39
1929s Nov 13 12:01:39 Scenario: check the scheduled restart # features/patroni_api.feature:49
1929s Nov 13 12:01:39 Given I run patronictl.py edit-config -p 'superuser_reserved_connections=6' --force batman # features/steps/patroni_api.py:86
1931s Nov 13 12:01:40 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
1931s Nov 13 12:01:40 And I receive a response output "+ superuser_reserved_connections: 6" # features/steps/patroni_api.py:98
1931s Nov 13 12:01:40 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 5 seconds # features/steps/patroni_api.py:156
1931s Nov 13 12:01:41 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"role": "replica"} # features/steps/patroni_api.py:124
1931s Nov 13 12:01:41 Then I receive a response code 202 # features/steps/patroni_api.py:98
1931s Nov 13 12:01:41 And I sleep for 8 seconds # features/steps/patroni_api.py:39
1939s Nov 13 12:01:49 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 10 seconds # features/steps/patroni_api.py:156
1939s Nov 13 12:01:49 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"restart_pending": "True"} # features/steps/patroni_api.py:124
1939s Nov 13 12:01:49 Then I receive a response code 202 # features/steps/patroni_api.py:98
1939s Nov 13 12:01:49 And Response on GET http://127.0.0.1:8008/patroni does not contain pending_restart after 10 seconds # features/steps/patroni_api.py:171
1946s Nov 13 12:01:56 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
1947s Nov 13 12:01:57
1947s Nov 13 12:01:57 Scenario: check API requests for the primary-replica pair in the pause mode # features/patroni_api.feature:63
1947s Nov 13 12:01:57 Given I start postgres1 # features/steps/basic_replication.py:8
1950s Nov 13 12:02:00 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
1951s Nov 13 12:02:01 When I run patronictl.py pause batman # features/steps/patroni_api.py:86
1952s Nov 13 12:02:02 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
1952s Nov 13 12:02:02 When I kill postmaster on postgres1 # features/steps/basic_replication.py:44
1952s Nov 13 12:02:02 waiting for server to shut down....
done
1952s Nov 13 12:02:02 server stopped
1952s Nov 13 12:02:02 And I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61
1952s Nov 13 12:02:02 Then I receive a response code 503 # features/steps/patroni_api.py:98
1952s Nov 13 12:02:02 And "members/postgres1" key in DCS has state=stopped after 10 seconds # features/steps/cascading_replication.py:23
1953s Nov 13 12:02:03 When I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86
1956s Nov 13 12:02:06 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
1956s Nov 13 12:02:06 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
1957s Nov 13 12:02:07 And I sleep for 2 seconds # features/steps/patroni_api.py:39
1959s Nov 13 12:02:09 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61
1959s Nov 13 12:02:09 Then I receive a response code 200 # features/steps/patroni_api.py:98
1959s Nov 13 12:02:09 And I receive a response state running # features/steps/patroni_api.py:98
1959s Nov 13 12:02:09 And I receive a response role replica # features/steps/patroni_api.py:98
1959s Nov 13 12:02:09 When I run patronictl.py reinit batman postgres1 --force --wait # features/steps/patroni_api.py:86
1962s Nov 13 12:02:12 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
1962s Nov 13 12:02:12 And I receive a response output "Success: reinitialize for member postgres1" # features/steps/patroni_api.py:98
1962s Nov 13 12:02:12 And postgres1 role is the secondary after 30 seconds # features/steps/basic_replication.py:105
1963s Nov 13 12:02:13 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
1963s Nov 13 12:02:13 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86
1966s Nov 13 12:02:16 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
1966s Nov 13 12:02:16 And I receive a response output "Success: restart on member postgres0" # features/steps/patroni_api.py:98
1966s Nov 13 12:02:16 And postgres0 role is the primary after 5 seconds # features/steps/basic_replication.py:105
1967s Nov 13 12:02:17
1967s Nov 13 12:02:17 Scenario: check the switchover via the API in the pause mode # features/patroni_api.feature:90
1967s Nov 13 12:02:17 Given I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0", "candidate": "postgres1"} # features/steps/patroni_api.py:71
1969s Nov 13 12:02:19 Then I receive a response code 200 # features/steps/patroni_api.py:98
1969s Nov 13 12:02:19 And postgres1 is a leader after 5 seconds # features/steps/patroni_api.py:29
1969s Nov 13 12:02:19 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105
1969s Nov 13 12:02:19 And postgres0 role is the secondary after 10 seconds # features/steps/basic_replication.py:105
1971s Nov 13 12:02:21 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112
1971s Nov 13 12:02:21 And "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
1973s Nov 13 12:02:23 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61
1973s Nov 13 12:02:23 Then I receive a response code 503 # features/steps/patroni_api.py:98
1973s Nov 13 12:02:23 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61
1973s Nov 13 12:02:23 Then I receive a response code 200 # features/steps/patroni_api.py:98
1973s Nov 13 12:02:23 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
1973s Nov 13 12:02:23 Then I receive a response code 200 # features/steps/patroni_api.py:98
1973s Nov 13 12:02:23 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61
1973s Nov 13 12:02:23 Then I receive a response code 503 # features/steps/patroni_api.py:98
1973s Nov 13 12:02:23
1973s Nov 13 12:02:23 Scenario: check the scheduled switchover # features/patroni_api.feature:107
1973s Nov 13 12:02:23 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117
1975s Nov 13 12:02:25 Then I receive a response returncode 1 # features/steps/patroni_api.py:98
1975s Nov 13 12:02:25 And I receive a response output "Can't schedule switchover in the paused state" # features/steps/patroni_api.py:98
1975s Nov 13 12:02:25 When I run patronictl.py resume batman # features/steps/patroni_api.py:86
1976s Nov 13 12:02:26 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
1976s Nov 13 12:02:26 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117
1977s Nov 13 12:02:27 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
1977s Nov 13 12:02:27 And postgres0 is a leader after 20 seconds # features/steps/patroni_api.py:29
1988s Nov 13 12:02:37 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
1989s Nov 13 12:02:38 And postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105
1991s Nov 13 12:02:41 And replication works from postgres0 to postgres1 after 25 seconds # features/steps/basic_replication.py:112
1991s Nov 13 12:02:41 And "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
1992s Nov 13 12:02:42 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61
1992s Nov 13 12:02:42 Then I receive a response code 200 # features/steps/patroni_api.py:98
1992s Nov 13 12:02:42 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61
1992s Nov 13 12:02:42 Then I receive a response code 503 # features/steps/patroni_api.py:98
1992s Nov 13 12:02:42 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
1992s Nov 13 12:02:42 Then I receive a response code 503 # features/steps/patroni_api.py:98
1992s Nov 13 12:02:42 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61
1992s Nov 13 12:02:42 Then I receive a response code 200 # features/steps/patroni_api.py:98
1996s Nov 13 12:02:46
1996s Nov 13 12:02:46 Feature: permanent slots # features/permanent_slots.feature:1
1996s Nov 13 12:02:46
1996s Nov 13 12:02:46 Scenario: check that physical permanent slots are created # features/permanent_slots.feature:2
1996s Nov 13 12:02:46 Given I start postgres0 # features/steps/basic_replication.py:8
1999s Nov 13 12:02:49 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
1999s Nov 13 12:02:49 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
1999s Nov 13 12:02:49 When I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_physical":0,"postgres0":0,"postgres1":0,"postgres3":0},"postgresql":{"parameters":{"wal_level":"logical"}}} # features/steps/patroni_api.py:71
1999s Nov 13 12:02:49 Then I receive a response code 200 # features/steps/patroni_api.py:98
1999s Nov 13 12:02:49 And Response on GET http://127.0.0.1:8008/config contains slots after 10 seconds # features/steps/patroni_api.py:156
1999s Nov 13 12:02:49 When I start postgres1 # features/steps/basic_replication.py:8
2002s Nov 13 12:02:52 And I start postgres2 # features/steps/basic_replication.py:8
2005s Nov 13 12:02:55 And I configure and start postgres3 with a tag replicatefrom postgres2 # features/steps/cascading_replication.py:7
2008s Nov 13 12:02:58 Then postgres0 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80
2008s Nov 13 12:02:58 And postgres0 has a physical replication slot named postgres1 after 10 seconds # features/steps/slots.py:80
2008s Nov 13 12:02:58 And postgres0 has a physical replication slot named postgres2 after 10 seconds # features/steps/slots.py:80
2008s Nov 13 12:02:58 And postgres2 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80
2008s Nov 13 12:02:58
2008s Nov 13 12:02:58 @slot-advance
2008s Nov 13 12:02:58 Scenario: check that logical permanent slots are created # features/permanent_slots.feature:18
2008s Nov 13 12:02:58 Given I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86
2011s Nov 13 12:03:01 And I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71
2011s Nov 13 12:03:01 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19
2012s Nov 13 12:03:02
2012s Nov 13 12:03:02 @slot-advance
2012s Nov 13 12:03:02 Scenario: check that permanent slots are created on replicas # features/permanent_slots.feature:24
2012s Nov 13 12:03:02 Given postgres1 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19
2019s Nov 13 12:03:09 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
2019s Nov 13 12:03:09 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
2020s Nov 13 12:03:10 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51
2021s Nov 13 12:03:11 And postgres1 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80
2021s Nov 13 12:03:11 And postgres2 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80
2021s Nov 13 12:03:11 And postgres3 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80
2021s Nov 13 12:03:11
2021s Nov 13 12:03:11 @slot-advance
2021s Nov 13 12:03:11 Scenario: check permanent physical slots that match with member names # features/permanent_slots.feature:34
2021s Nov 13 12:03:11 Given postgres0 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80
2021s Nov 13 12:03:11 And postgres1 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80
2021s Nov 13 12:03:11 And postgres1 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80
2021s Nov 13 12:03:11 And postgres2 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80
2021s Nov 13 12:03:11 And postgres2 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80
2021s Nov 13 12:03:11 And postgres2 has a physical replication slot named postgres1 after 2 seconds # features/steps/slots.py:80
2021s Nov 13 12:03:11 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40
2021s Nov 13 12:03:11 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40
2021s Nov 13 12:03:11
2021s Nov 13 12:03:11 @slot-advance
2021s Nov 13 12:03:11 Scenario: check that permanent slots are advanced on replicas # features/permanent_slots.feature:45
2021s Nov 13 12:03:11 Given I add the table replicate_me to postgres0 # features/steps/basic_replication.py:54
2021s Nov 13 12:03:11 When I get all changes from logical slot test_logical on postgres0 # features/steps/slots.py:70
2021s Nov 13 12:03:11 And I get all changes from physical slot test_physical on postgres0 # features/steps/slots.py:75
2021s Nov 13 12:03:11 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
2025s Nov 13 12:03:15 And Physical slot test_physical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
2025s Nov 13 12:03:15 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
2025s Nov 13 12:03:15 And Physical slot test_physical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
2025s Nov 13 12:03:15 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51
2025s Nov 13 12:03:15 And Physical slot test_physical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51
2025s Nov 13 12:03:15 And Physical slot postgres1 is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
2025s Nov 13 12:03:15 And Physical slot postgres3 is in sync between postgres2 and postgres0 after 20 seconds # features/steps/slots.py:51
2025s Nov 13 12:03:15 And Physical slot postgres3 is in sync between postgres2 and postgres1 after 10 seconds # features/steps/slots.py:51
2025s Nov 13 12:03:15 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40
2025s Nov 13 12:03:15 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40
2025s Nov 13 12:03:15
2025s Nov 13 12:03:15 @slot-advance
2025s Nov 13 12:03:15 Scenario: check that only permanent slots are written to the /status key # features/permanent_slots.feature:62
2025s Nov 13 12:03:15 Given "status" key in DCS has test_physical in slots # features/steps/slots.py:96
2025s Nov 13 12:03:15 And "status" key in DCS has postgres0 in slots # features/steps/slots.py:96
2025s Nov 13 12:03:15 And "status" key in DCS has postgres1 in slots # features/steps/slots.py:96
2025s Nov 13 12:03:15 And "status" key in DCS does not have postgres2 in slots # features/steps/slots.py:102
2025s Nov 13 12:03:15 And "status" key in DCS has postgres3 in slots # features/steps/slots.py:96
2025s Nov 13 12:03:15
2025s Nov 13 12:03:15 Scenario: check permanent physical replication slot after failover # features/permanent_slots.feature:69
2025s Nov 13 12:03:15 Given I shut down postgres3 # features/steps/basic_replication.py:29
2026s Nov 13 12:03:16 And I shut down postgres2 # features/steps/basic_replication.py:29
2027s Nov 13 12:03:17 And I shut down postgres0 # features/steps/basic_replication.py:29
2029s Nov 13 12:03:19 Then postgres1 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80
2029s Nov 13 12:03:19 And postgres1 has a physical replication slot named postgres0 after 10 seconds # features/steps/slots.py:80
2029s Nov 13 12:03:19 And postgres1 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80
2031s Nov 13 12:03:21
2031s Nov 13 12:03:21 Feature: priority replication # features/priority_failover.feature:1
2031s Nov 13 12:03:21 We should check that we can give nodes priority during failover
2031s Nov 13 12:03:21 Scenario: check failover priority 0 prevents leaderships # features/priority_failover.feature:4
2031s Nov 13 12:03:21 Given I configure and start postgres0 with a tag failover_priority 1 # features/steps/cascading_replication.py:7
2034s Nov 13 12:03:24 And I configure and start postgres1 with a tag failover_priority 0 # features/steps/cascading_replication.py:7
2037s Nov 13 12:03:27 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
2042s Nov 13 12:03:32 When I shut down postgres0 # features/steps/basic_replication.py:29
2044s Nov 13 12:03:34 And there is one of ["following a different leader because I am not allowed to promote"] INFO in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121
2046s Nov 13 12:03:36 Then postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105
2046s Nov 13 12:03:36 When I start postgres0 # features/steps/basic_replication.py:8
2048s Nov 13 12:03:38 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2050s Nov 13 12:03:40
2050s Nov 13 12:03:40 Scenario: check higher failover priority is respected # features/priority_failover.feature:14
2050s Nov 13 12:03:40 Given I configure and start postgres2 with a tag failover_priority 1 # features/steps/cascading_replication.py:7
2054s Nov 13 12:03:43 And I configure and start postgres3 with a tag failover_priority 2 # features/steps/cascading_replication.py:7
2057s Nov 13 12:03:47 Then replication works from postgres0 to postgres2 after 20 seconds # features/steps/basic_replication.py:112
2058s Nov 13 12:03:48 And replication works from postgres0 to postgres3 after 20 seconds # features/steps/basic_replication.py:112
2062s Nov 13 12:03:52 When I shut down postgres0 # features/steps/basic_replication.py:29
2064s Nov 13 12:03:54 Then postgres3 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2064s Nov 13 12:03:54 And there is one of ["postgres3 has equally tolerable WAL position and priority 2, while this node has priority 1","Wal position of postgres3 is ahead of my wal position"] INFO in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121
2064s Nov 13 12:03:54
2064s Nov 13 12:03:54 Scenario: check conflicting configuration handling # features/priority_failover.feature:23
2064s Nov 13 12:03:54 When I set nofailover tag in postgres2 config # features/steps/patroni_api.py:131
2064s Nov 13 12:03:54 And I issue an empty POST request to http://127.0.0.1:8010/reload # features/steps/patroni_api.py:66
2064s Nov 13 12:03:54 Then I receive a response code 202 # features/steps/patroni_api.py:98
2064s Nov 13 12:03:54 And there is one of ["Conflicting configuration between nofailover: True and failover_priority: 1. Defaulting to nofailover: True"] WARNING in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121
2066s Nov 13 12:03:56 And "members/postgres2" key in DCS has tags={'failover_priority': '1', 'nofailover': True} after 10 seconds # features/steps/cascading_replication.py:23
2067s Nov 13 12:03:57 When I issue a POST request to http://127.0.0.1:8010/failover with {"candidate": "postgres2"} # features/steps/patroni_api.py:71
2067s Nov 13 12:03:57 Then I receive a response code 412 # features/steps/patroni_api.py:98
2067s Nov 13 12:03:57 And I receive a response text "failover is not possible: no good candidates have been found" # features/steps/patroni_api.py:98
2067s Nov 13 12:03:57 When I reset nofailover tag in postgres1 config # features/steps/patroni_api.py:131
2067s Nov 13 12:03:57 And I issue an empty POST request to http://127.0.0.1:8009/reload # features/steps/patroni_api.py:66
2067s Nov 13 12:03:57 Then I receive a response code 202 # features/steps/patroni_api.py:98
2067s Nov 13 12:03:57 And there is one of ["Conflicting configuration between nofailover: False and failover_priority: 0. Defaulting to nofailover: False"] WARNING in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121
2067s Nov 13 12:03:57 And "members/postgres1" key in DCS has tags={'failover_priority': '0', 'nofailover': False} after 10 seconds # features/steps/cascading_replication.py:23
2069s Nov 13 12:03:59 And I issue a POST request to http://127.0.0.1:8009/failover with {"candidate": "postgres1"} # features/steps/patroni_api.py:71
2071s Nov 13 12:04:01 Then I receive a response code 200 # features/steps/patroni_api.py:98
2071s Nov 13 12:04:01 Assertion Failed: status code 503 != 200, response: Failover failed
2071s Nov 13 12:04:01
2076s Nov 13 12:04:05 And postgres1 role is the primary after 10 seconds # None
2076s Nov 13 12:04:05
2076s Nov 13 12:04:05 Feature: recovery # features/recovery.feature:1
2076s Nov 13 12:04:05 We want to check that crashed postgres is started back
2076s Nov 13 12:04:05 Scenario: check that timeline is not incremented when primary is started after crash # features/recovery.feature:4
2076s Nov 13 12:04:05 Given I start postgres0 # features/steps/basic_replication.py:8
2078s Nov 13 12:04:08 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
2078s Nov 13 12:04:08 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
2078s Nov 13 12:04:08 When I start postgres1 # features/steps/basic_replication.py:8
2081s Nov 13 12:04:11 And I add the table foo to postgres0 # features/steps/basic_replication.py:54
2081s Nov 13 12:04:11 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
2086s Nov 13 12:04:16 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44
2086s Nov 13 12:04:16 waiting for server to shut down....
done
2086s Nov 13 12:04:16 server stopped
2086s Nov 13 12:04:16 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2088s Nov 13 12:04:18 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61
2088s Nov 13 12:04:18 Then I receive a response code 200 # features/steps/patroni_api.py:98
2088s Nov 13 12:04:18 And I receive a response role master # features/steps/patroni_api.py:98
2088s Nov 13 12:04:18 And I receive a response timeline 1 # features/steps/patroni_api.py:98
2088s Nov 13 12:04:18 And "members/postgres0" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23
2089s Nov 13 12:04:19 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112
2091s Nov 13 12:04:21
2091s Nov 13 12:04:21 Scenario: check immediate failover when master_start_timeout=0 # features/recovery.feature:20
2091s Nov 13 12:04:21 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"master_start_timeout": 0} # features/steps/patroni_api.py:71
2092s Nov 13 12:04:21 Then I receive a response code 200 # features/steps/patroni_api.py:98
2092s Nov 13 12:04:21 And Response on GET http://127.0.0.1:8008/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156
2092s Nov 13 12:04:22 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44
2092s Nov 13 12:04:22 waiting for server to shut down....
done
2092s Nov 13 12:04:22 server stopped
2092s Nov 13 12:04:22 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
2094s Nov 13 12:04:24 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2098s Nov 13 12:04:28
2098s Nov 13 12:04:28 Feature: standby cluster # features/standby_cluster.feature:1
2098s Nov 13 12:04:28
2098s Nov 13 12:04:28 Scenario: prepare the cluster with logical slots # features/standby_cluster.feature:2
2098s Nov 13 12:04:28 Given I start postgres1 # features/steps/basic_replication.py:8
2102s Nov 13 12:04:32 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
2102s Nov 13 12:04:32 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
2102s Nov 13 12:04:32 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"pm_1": {"type": "physical"}}, "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71
2102s Nov 13 12:04:32 Then I receive a response code 200 # features/steps/patroni_api.py:98
2102s Nov 13 12:04:32 And Response on GET http://127.0.0.1:8009/config contains slots after 10 seconds # features/steps/patroni_api.py:156
2102s Nov 13 12:04:32 And I sleep for 3 seconds # features/steps/patroni_api.py:39
2105s Nov 13 12:04:35 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"test_logical": {"type": "logical", "database": "postgres", "plugin": "test_decoding"}}} # features/steps/patroni_api.py:71
2105s Nov 13 12:04:35 Then I receive a response code 200 # features/steps/patroni_api.py:98
2105s Nov 13 12:04:35 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25
2105s Nov 13 12:04:35 When I start postgres0 # features/steps/basic_replication.py:8
2107s Nov 13 12:04:37 Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
2108s Nov 13 12:04:38
And replication works from postgres1 to postgres0 after 15 seconds # features/steps/basic_replication.py:112
2109s Nov 13 12:04:39 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61
2109s Nov 13 12:04:39 Then I receive a response code 200 # features/steps/patroni_api.py:98
2109s Nov 13 12:04:39 And I receive a response replication_state streaming # features/steps/patroni_api.py:98
2109s Nov 13 12:04:39 And "members/postgres0" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23
2110s Nov 13 12:04:40
2110s Nov 13 12:04:40 @slot-advance
2110s Nov 13 12:04:40 Scenario: check permanent logical slots are synced to the replica # features/standby_cluster.feature:22
2110s Nov 13 12:04:40 Given I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86
2112s Nov 13 12:04:42 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
2120s Nov 13 12:04:49
2120s Nov 13 12:04:49 Scenario: Detach exiting node from the cluster # features/standby_cluster.feature:26
2120s Nov 13 12:04:49 When I shut down postgres1 # features/steps/basic_replication.py:29
2121s Nov 13 12:04:51 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
2121s Nov 13 12:04:51 And "members/postgres0" key in DCS has role=master after 5 seconds # features/steps/cascading_replication.py:23
2122s Nov 13 12:04:52 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61
2122s Nov 13 12:04:52 Then I receive a response code 200 # features/steps/patroni_api.py:98
2122s Nov 13 12:04:52
2122s Nov 13 12:04:52 Scenario: check replication of a single table in a standby cluster # features/standby_cluster.feature:33
2122s Nov 13 12:04:52 Given I start postgres1 in a standby cluster batman1 as a clone of postgres0 # features/steps/standby_cluster.py:23
2126s Nov 13 12:04:55 Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16
2126s Nov 13 12:04:55 When I add the table foo to postgres0 # features/steps/basic_replication.py:54
2126s Nov 13 12:04:55 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
2126s Nov 13 12:04:56 When I issue a GET request to http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:61
2126s Nov 13 12:04:56 Then I receive a response code 200 # features/steps/patroni_api.py:98
2126s Nov 13 12:04:56 And I receive a response replication_state streaming # features/steps/patroni_api.py:98
2126s Nov 13 12:04:56 And I sleep for 3 seconds # features/steps/patroni_api.py:39
2129s Nov 13 12:04:59 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
2129s Nov 13 12:04:59 Then I receive a response code 503 # features/steps/patroni_api.py:98
2129s Nov 13 12:04:59 When I issue a GET request to http://127.0.0.1:8009/standby_leader # features/steps/patroni_api.py:61
2129s Nov 13 12:04:59 Then I receive a response code 200 # features/steps/patroni_api.py:98
2129s Nov 13 12:04:59 And I receive a response role standby_leader # features/steps/patroni_api.py:98
2129s Nov 13 12:04:59 And there is a postgres1_cb.log with "on_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12
2129s Nov 13 12:04:59 When I start postgres2 in a cluster batman1 # features/steps/standby_cluster.py:12
2132s Nov 13 12:05:02 Then postgres2 role is the replica after 24 seconds # features/steps/basic_replication.py:105
2132s Nov 13 12:05:02 And postgres2 is replicating from postgres1 after 10 seconds # features/steps/standby_cluster.py:52
2137s Nov 13 12:05:07 And table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93
2137s Nov 13 12:05:07 When I issue a GET request to http://127.0.0.1:8010/patroni # features/steps/patroni_api.py:61
2137s Nov 13 12:05:07 Then I receive a response code 200 # features/steps/patroni_api.py:98
2137s Nov 13 12:05:07 And I receive a response replication_state streaming # features/steps/patroni_api.py:98
2137s Nov 13 12:05:07 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40
2137s Nov 13 12:05:07
2137s Nov 13 12:05:07 Scenario: check switchover # features/standby_cluster.feature:57
2137s Nov 13 12:05:07 Given I run patronictl.py switchover batman1 --force # features/steps/patroni_api.py:86
2140s Nov 13 12:05:10 Then Status code on GET http://127.0.0.1:8010/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142
2140s Nov 13 12:05:10 And postgres1 is replicating from postgres2 after 32 seconds # features/steps/standby_cluster.py:52
2142s Nov 13 12:05:12 And there is a postgres2_cb.log with "on_start replica batman1\non_role_change standby_leader batman1" in postgres2 data directory # features/steps/cascading_replication.py:12
2142s Nov 13 12:05:12
2142s Nov 13 12:05:12 Scenario: check failover # features/standby_cluster.feature:63
2142s Nov 13 12:05:12 When I kill postgres2 # features/steps/basic_replication.py:34
2143s Nov 13 12:05:13 And I kill postmaster on postgres2 # features/steps/basic_replication.py:44
2143s Nov 13 12:05:13 waiting for server to shut down....
done
2143s Nov 13 12:05:13 server stopped
2143s Nov 13 12:05:13 Then postgres1 is replicating from postgres0 after 32 seconds # features/steps/standby_cluster.py:52
2163s Nov 13 12:05:33 And Status code on GET http://127.0.0.1:8009/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142
2163s Nov 13 12:05:33 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
2164s Nov 13 12:05:33 Then I receive a response code 503 # features/steps/patroni_api.py:98
2164s Nov 13 12:05:33 And I receive a response role standby_leader # features/steps/patroni_api.py:98
2164s Nov 13 12:05:33 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112
2165s Nov 13 12:05:35 And there is a postgres1_cb.log with "on_role_change replica batman1\non_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12
2169s Nov 13 12:05:39
2169s Nov 13 12:05:39 Feature: watchdog # features/watchdog.feature:1
2169s Nov 13 12:05:39 Verify that watchdog gets pinged and triggered under appropriate circumstances.
2169s Nov 13 12:05:39 Scenario: watchdog is opened and pinged # features/watchdog.feature:4
2169s Nov 13 12:05:39 Given I start postgres0 with watchdog # features/steps/watchdog.py:16
2171s Nov 13 12:05:41 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
2172s Nov 13 12:05:42 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2172s Nov 13 12:05:42 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21
2173s Nov 13 12:05:43 And postgres0 watchdog has a 15 second timeout # features/steps/watchdog.py:34
2173s Nov 13 12:05:43
2173s Nov 13 12:05:43 Scenario: watchdog is reconfigured after global ttl changed # features/watchdog.feature:11
2173s Nov 13 12:05:43 Given I run patronictl.py edit-config batman -s ttl=30 --force # features/steps/patroni_api.py:86
2174s Nov 13 12:05:44 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
2174s Nov 13 12:05:44 And I receive a response output "+ttl: 30" # features/steps/patroni_api.py:98
2174s Nov 13 12:05:44 When I sleep for 4 seconds # features/steps/patroni_api.py:39
2178s Nov 13 12:05:48 Then postgres0 watchdog has a 25 second timeout # features/steps/watchdog.py:34
2178s Nov 13 12:05:48
2178s Nov 13 12:05:48 Scenario: watchdog is disabled during pause # features/watchdog.feature:18
2178s Nov 13 12:05:48 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86
2180s Nov 13 12:05:50 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
2180s Nov 13 12:05:50 When I sleep for 2 seconds # features/steps/patroni_api.py:39
2182s Nov 13 12:05:52 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29
2182s Nov 13 12:05:52
2182s Nov 13 12:05:52 Scenario: watchdog is opened and pinged after resume # features/watchdog.feature:24
2182s Nov 13 12:05:52 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39
2182s Nov 13 12:05:52 And I run patronictl.py resume batman # features/steps/patroni_api.py:86
2183s Nov 13 12:05:53 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
2183s Nov 13 12:05:53 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21
2184s Nov 13 12:05:54
2184s Nov 13 12:05:54 Scenario: watchdog is disabled when shutting down # features/watchdog.feature:30
2184s Nov 13 12:05:54 Given I shut down postgres0 # features/steps/basic_replication.py:29
2186s Nov 13 12:05:56 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29
2186s Nov 13 12:05:56
2186s Nov 13 12:05:56 Scenario: watchdog is triggered if patroni stops responding # features/watchdog.feature:34
2186s Nov 13 12:05:56 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39
2186s Nov 13 12:05:56 And I start postgres0 with watchdog # features/steps/watchdog.py:16
2188s Nov 13 12:05:58 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2191s Nov 13 12:06:00 When postgres0 hangs for 30 seconds # features/steps/watchdog.py:52
2191s Nov 13 12:06:00 Then postgres0 watchdog is triggered after 30 seconds # features/steps/watchdog.py:44
2217s Nov 13 12:06:27
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.10026.XslikRJx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.10070.XFRkYhqx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.10224.XouPibex
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.10287.XReTysYx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.10353.XLGXVdPx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.10456.XcuAQipx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.10596.XbjdhDHx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.10727.XyMNwTVx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.10774.XtZMjgBx
2217s Nov 13 12:06:27 Combined data file
.coverage.autopkgtest.10781.XKbAsnRx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.10786.XvDGXDCx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.10802.XmwwPukx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.6946.XYBHqhJx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.6995.XBPfnhax
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7047.XrukOHux
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7100.XrFugYpx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7147.XFxkYnAx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7220.XdxrSeIx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7269.XsFROZcx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7274.XYXntGNx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7373.XORoPPZx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7471.XeHXMcfx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7483.XcPKrEGx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7527.XURTusEx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7593.XjnOhvrx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7770.XoFAtoax
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7816.XiDfxKdx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7872.XaAkYENx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.7966.XVQiwcBx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8022.XEVZeSYx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8085.XsOadIgx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8178.XgPrEnrx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8281.XvkFWqox
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8325.XejKeSHx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8394.XShyhhdx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8425.XKdAAVSx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8554.XEzUDvXx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8604.XUyCSkvx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8624.XtOKSipx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8663.XHaGOVfx
2217s Nov 13 12:06:27 Skipping duplicate data .coverage.autopkgtest.8716.XfNUUlcx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8723.XyOZMcux
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8762.XpVkRyxx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8807.XzUdaogx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8937.XjZiKGLx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8941.XDWpJZIx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.8949.XDhZcVnx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.9091.XYzZmYAx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.9138.XCcnGUdx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.9185.XHvCqcTx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.9232.XhnzaBUx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.9277.XQQJGArx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.9470.XLDHUnwx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.9514.XYMEsSox
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.9601.XMHiutPx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.9688.XuYyFEqx
2217s Nov 13 12:06:27 Combined data file .coverage.autopkgtest.9741.XrEAbuux
2219s Nov 13 12:06:29 Name Stmts Miss Cover
2219s Nov 13 12:06:29 --------------------------------------------------------------------------------------------------------
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/_distutils_hack/__init__.py 101 96 5%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/dateutil/__init__.py
13 4 69%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/dateutil/_common.py 25 15 40%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/dateutil/_version.py 11 2 82%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/dateutil/parser/__init__.py 33 4 88%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/dateutil/parser/_parser.py 813 436 46%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/dateutil/parser/isoparser.py 185 150 19%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/dateutil/relativedelta.py 241 206 15%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/dateutil/tz/__init__.py 4 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/dateutil/tz/_common.py 161 121 25%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/dateutil/tz/_factories.py 49 21 57%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/dateutil/tz/tz.py 800 626 22%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/dateutil/tz/win.py 153 149 3%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/__init__.py 1 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/client.py 629 266 58%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/exceptions.py 110 1 99%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/handlers/__init__.py 0 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/handlers/threading.py 94 15 84%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/handlers/utils.py 222 75 66%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/hosts.py 18 4 78%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/loggingsupport.py 1 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/protocol/__init__.py 0 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/protocol/connection.py 485 176 64%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/protocol/paths.py 33 8 76%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/protocol/serialization.py 316 111 65%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/protocol/states.py 49 9 82%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/python2atexit.py 32 19 41%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/recipe/__init__.py 0 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/recipe/barrier.py 97 80 18%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/recipe/counter.py 49 36 27%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/recipe/election.py 16 10 38%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/recipe/lease.py 54 36 33%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/recipe/lock.py 295 242 18%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/recipe/partitioner.py 155 120 23%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/recipe/party.py 62 43 31%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/recipe/queue.py 157 126 20%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/recipe/watchers.py 172 138 20%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/retry.py 60 9 85%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/security.py 58 35 40%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/kazoo/version.py 1 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/__init__.py 13 2 85%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/__main__.py 199 63 68%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/api.py 770 285 63%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/async_executor.py 96 15 84%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/collections.py 56 6 89%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/config.py 371 92 75%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/config_generator.py 212 159 25%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/daemon.py 76 3 96%
2219s Nov 13 12:06:29
/usr/lib/python3/dist-packages/patroni/dcs/__init__.py 646 91 86%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/dcs/zookeeper.py 288 67 77%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/dynamic_loader.py 35 7 80%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/exceptions.py 16 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/file_perm.py 43 8 81%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/global_config.py 81 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/ha.py 1244 373 70%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/log.py 219 67 69%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/__init__.py 821 179 78%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/available_parameters/__init__.py 21 1 95%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/bootstrap.py 252 62 75%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/callback_executor.py 55 8 85%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/cancellable.py 104 41 61%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/config.py 813 216 73%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/connection.py 75 1 99%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/misc.py 41 8 80%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/mpp/__init__.py 89 11 88%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py 170 85 50%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/rewind.py 416 167 60%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/slots.py 334 34 90%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/sync.py 130 19 85%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/postgresql/validator.py 157 23 85%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/psycopg.py 42 16 62%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/request.py 62 7 89%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/tags.py 38 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/utils.py 350 123 65%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/validator.py 301 208 31%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/version.py 1 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/watchdog/__init__.py 2 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/watchdog/base.py 203 46 77%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/patroni/watchdog/linux.py 135 35 74%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/psutil/__init__.py 951 629 34%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/psutil/_common.py 424 212 50%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/psutil/_compat.py 302 263 13%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/psutil/_pslinux.py 1251 924 26%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/psutil/_psposix.py 96 38 60%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/psycopg2/__init__.py 19 3 84%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/psycopg2/_json.py 64 27 58%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/psycopg2/_range.py 269 172 36%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/psycopg2/errors.py 3 2 33%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/psycopg2/extensions.py 91 25 73%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/puresasl/__init__.py 21 2 90%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/puresasl/client.py 71 47 34%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/puresasl/mechanisms.py 363 263 28%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/six.py 504 249 51%
2219s Nov 13 12:06:29
/usr/lib/python3/dist-packages/urllib3/__init__.py 50 14 72%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/_base_connection.py 70 52 26%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/_collections.py 234 128 45%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/_request_methods.py 53 23 57%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/_version.py 2 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/connection.py 324 110 66%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/connectionpool.py 347 136 61%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/exceptions.py 115 37 68%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/fields.py 92 73 21%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/filepost.py 37 24 35%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/poolmanager.py 233 88 62%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/response.py 562 334 41%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/util/__init__.py 10 0 100%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/util/connection.py 66 9 86%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/util/proxy.py 13 6 54%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/util/request.py 104 52 50%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/util/response.py 32 17 47%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/util/retry.py 173 52 70%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/util/ssl_.py 177 75 58%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/util/ssl_match_hostname.py 66 54 18%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/util/ssltransport.py 160 112 30%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/util/timeout.py 71 19 73%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/util/url.py 205 78 62%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/util/util.py 26 18 31%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/urllib3/util/wait.py 49 38 22%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/__init__.py 165 109 34%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/composer.py 92 17 82%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/constructor.py 479 276 42%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/cyaml.py 46 24 48%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/dumper.py 23 12 48%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/emitter.py 838 769 8%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/error.py 58 42 28%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/events.py 61 6 90%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/loader.py 47 24 49%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/nodes.py 29 7 76%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/parser.py 352 180 49%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/reader.py 122 30 75%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/representer.py 248 176 29%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/resolver.py 135 76 44%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/scanner.py 758 415 45%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/serializer.py 85 70 18%
2219s Nov 13 12:06:29 /usr/lib/python3/dist-packages/yaml/tokens.py 76 17 78%
2219s Nov 13 12:06:29 patroni/__init__.py 13 2 85%
2219s Nov 13 12:06:29 patroni/__main__.py 199 199 0%
2219s Nov 13 12:06:29 patroni/api.py 770 770 0%
2219s Nov 13 12:06:29 patroni/async_executor.py 96 69 28%
2219s Nov 13 12:06:29 patroni/collections.py 56 15 73%
2219s Nov 13 12:06:29 patroni/config.py 371 194 48%
2219s Nov 13 12:06:29 patroni/config_generator.py 212 212 0%
2219s Nov 13 12:06:29 patroni/ctl.py 936 411 56%
2219s Nov 13 12:06:29 patroni/daemon.py 76 76 0%
2219s Nov 13 12:06:29 patroni/dcs/__init__.py 646 271 58%
2219s Nov 13 12:06:29 patroni/dcs/consul.py 485 485 0%
2219s Nov 13 12:06:29 patroni/dcs/etcd3.py 679 679 0%
2219s Nov 13 12:06:29 patroni/dcs/etcd.py 603 603 0%
2219s Nov 13 12:06:29 patroni/dcs/exhibitor.py 61 61 0%
2219s Nov 13 12:06:29 patroni/dcs/kubernetes.py 938 938 0%
2219s Nov 13 12:06:29 patroni/dcs/raft.py 319 319 0%
2219s Nov 13 12:06:29 patroni/dcs/zookeeper.py 288 152 47%
2219s Nov 13 12:06:29 patroni/dynamic_loader.py 35 7 80%
2219s Nov 13 12:06:29 patroni/exceptions.py 16 1 94%
2219s Nov 13 12:06:29 patroni/file_perm.py 43 15 65%
2219s Nov 13 12:06:29 patroni/global_config.py 81 18 78%
2219s Nov 13 12:06:29 patroni/ha.py 1244 1244 0%
2219s Nov 13 12:06:29 patroni/log.py 219 173 21%
2219s Nov 13 12:06:29 patroni/postgresql/__init__.py 821 651 21%
2219s Nov 13 12:06:29 patroni/postgresql/available_parameters/__init__.py 21 3 86%
2219s Nov 13 12:06:29 patroni/postgresql/bootstrap.py 252 222 12%
2219s Nov 13 12:06:29 patroni/postgresql/callback_executor.py 55 34 38%
2219s Nov 13 12:06:29 patroni/postgresql/cancellable.py 104 84 19%
2219s Nov 13 12:06:29 patroni/postgresql/config.py 813 698 14%
2219s Nov 13 12:06:29 patroni/postgresql/connection.py 75 50 33%
2219s Nov 13 12:06:29 patroni/postgresql/misc.py 41 29 29%
2219s Nov 13 12:06:29 patroni/postgresql/mpp/__init__.py 89 21 76%
2219s Nov 13 12:06:29 patroni/postgresql/mpp/citus.py 259 259 0%
2219s Nov 13 12:06:29 patroni/postgresql/postmaster.py 170 139 18%
2219s Nov 13 12:06:29 patroni/postgresql/rewind.py 416 416 0%
2219s Nov 13 12:06:29 patroni/postgresql/slots.py 334 285 15%
2219s Nov 13 12:06:29 patroni/postgresql/sync.py 130 96 26%
2219s Nov 13 12:06:29 patroni/postgresql/validator.py 157 52 67%
2219s Nov 13 12:06:29 patroni/psycopg.py 42 28 33%
2219s Nov 13 12:06:29 patroni/raft_controller.py 22 22 0%
2219s Nov 13 12:06:29 patroni/request.py 62 6 90%
2219s Nov 13 12:06:29 patroni/scripts/__init__.py 0 0 100%
2219s Nov 13 12:06:29
patroni/scripts/aws.py 59 59 0%
2219s Nov 13 12:06:29 patroni/scripts/barman/__init__.py 0 0 100%
2219s Nov 13 12:06:29 patroni/scripts/barman/cli.py 51 51 0%
2219s Nov 13 12:06:29 patroni/scripts/barman/config_switch.py 51 51 0%
2219s Nov 13 12:06:29 patroni/scripts/barman/recover.py 37 37 0%
2219s Nov 13 12:06:29 patroni/scripts/barman/utils.py 94 94 0%
2219s Nov 13 12:06:29 patroni/scripts/wale_restore.py 207 207 0%
2219s Nov 13 12:06:29 patroni/tags.py 38 11 71%
2219s Nov 13 12:06:29 patroni/utils.py 350 228 35%
2219s Nov 13 12:06:29 patroni/validator.py 301 215 29%
2219s Nov 13 12:06:29 patroni/version.py 1 0 100%
2219s Nov 13 12:06:29 patroni/watchdog/__init__.py 2 2 0%
2219s Nov 13 12:06:29 patroni/watchdog/base.py 203 203 0%
2219s Nov 13 12:06:29 patroni/watchdog/linux.py 135 135 0%
2219s Nov 13 12:06:29 --------------------------------------------------------------------------------------------------------
2219s Nov 13 12:06:29 TOTAL 39824 23874 40%
2219s Nov 13 12:06:29
2219s Nov 13 12:06:29 Failing scenarios:
2219s Nov 13 12:06:29 features/priority_failover.feature:23 check conflicting configuration handling
2219s Nov 13 12:06:29
2219s Nov 13 12:06:29 10 features passed, 1 failed, 1 skipped
2219s Nov 13 12:06:29 43 scenarios passed, 1 failed, 5 skipped
2219s Nov 13 12:06:29 442 steps passed, 1 failed, 62 skipped, 0 undefined
2219s Nov 13 12:06:29 Took 7m15.161s
2219s features/output/priority_replication_failed/patroni_postgres0.log:
2219s + for file in features/output/*_failed/*
2219s + case $file in
2219s + echo features/output/priority_replication_failed/patroni_postgres0.log:
2219s + cat features/output/priority_replication_failed/patroni_postgres0.log
2219s 2024-11-13 12:03:23,024 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:650 - _connect]: Connecting to localhost(127.0.0.1):2181, use_ssl: False
2219s 2024-11-13 12:03:23,028 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:532 - _session_callback]: Zookeeper connection
established, state: CONNECTED 2219s 2024-11-13 12:03:23,035 WARNING [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:622 - _connect_attempt]: Connection dropped: socket connection error: Bad file descriptor 2219s 2024-11-13 12:03:23,036 WARNING [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:626 - _connect_attempt]: Transition to CONNECTING 2219s 2024-11-13 12:03:23,036 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:543 - _session_callback]: Zookeeper connection lost 2219s 2024-11-13 12:03:23,036 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:650 - _connect]: Connecting to localhost(::1):2181, use_ssl: False 2219s 2024-11-13 12:03:23,036 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/config.py:1224 - reload_config]: No PostgreSQL configuration items changed, nothing to reload. 2219s 2024-11-13 12:03:23,038 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:532 - _session_callback]: Zookeeper connection established, state: CONNECTED 2219s 2024-11-13 12:03:23,051 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: None; I am postgres0 2219s 2024-11-13 12:03:23,054 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: trying to bootstrap a new cluster 2219s The files belonging to this database system will be owned by user "postgres". 2219s This user must also own the server process. 2219s 2219s The database cluster will be initialized with locale "C.UTF-8". 2219s The default text search configuration will be set to "english". 2219s 2219s Data page checksums are enabled. 2219s 2219s creating directory /tmp/autopkgtest.FwqS2V/build.hfu/src/data/postgres0 ... ok 2219s creating subdirectories ... ok 2219s selecting dynamic shared memory implementation ... posix 2219s selecting default max_connections ... 100 2219s selecting default shared_buffers ... 128MB 2219s selecting default time zone ... UTC 2219s creating configuration files ... 
ok 2219s running bootstrap script ... ok 2219s performing post-bootstrap initialization ... ok 2219s syncing data to disk ... ok 2219s 2219s Success. You can now start the database server using: 2219s 2219s pg_ctl -D /tmp/autopkgtest.FwqS2V/build.hfu/src/data/postgres0 -l logfile start 2219s 2219s 2024-11-13 12:03:23.882 UTC [9494] DEBUG: registering background worker "logical replication launcher" 2219s 2024-11-13 12:03:23.883 UTC [9494] DEBUG: mmap(8388608) with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory 2219s 2024-11-13 12:03:23.886 UTC [9494] LOG: redirecting log output to logging collector process 2219s 2024-11-13 12:03:23.886 UTC [9494] HINT: Future log output will appear in directory "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/priority_replication". 2219s 2024-11-13 12:03:23,893 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py:249 - start]: postmaster pid=9494 2219s /tmp:5382 - rejecting connections 2219s /tmp:5382 - accepting connections 2219s 2024-11-13 12:03:23,910 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/connection.py:53 - get]: establishing a new patroni heartbeat connection to postgres 2219s 2024-11-13 12:03:23,914 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: running post_bootstrap 2219s ?column? 2219s ---------- 2219s 1 2219s (1 row) 2219s 2219s 2024-11-13 12:03:23,931 WARNING [/usr/lib/python3/dist-packages/patroni/watchdog/base.py:143 - _activate]: Could not activate Linux watchdog device: Can't open watchdog device: [Errno 2] No such file or directory: '/dev/watchdog' 2219s 2024-11-13 12:03:23,942 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: initialized a new cluster 2219s 2024-11-13 12:03:25,945 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. 
I am (postgres0), the leader with the lock 2219s 2024-11-13 12:03:27,947 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres0), the leader with the lock 2219s 2024-11-13 12:03:29,946 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres0), the leader with the lock 2219s 2024-11-13 12:03:31,935 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres0), the leader with the lock 2219s 2024-11-13 12:03:34,073 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:617 - _connect_attempt]: Closing connection to localhost:2181 2219s 2024-11-13 12:03:34,073 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:537 - _session_callback]: Zookeeper session closed, state: CLOSED 2219s 2024-11-13 12:03:34,074 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:650 - _connect]: Connecting to localhost(127.0.0.1):2181, use_ssl: False 2219s 2024-11-13 12:03:34,075 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:532 - _session_callback]: Zookeeper connection established, state: CONNECTED 2219s 2024-11-13 12:03:34,076 WARNING [/usr/lib/python3/dist-packages/patroni/dcs/zookeeper.py:352 - touch_member]: Recreating the member ZNode due to ownership mismatch 2219s 2024-11-13 12:03:38,139 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:650 - _connect]: Connecting to localhost(127.0.0.1):2181, use_ssl: False 2219s 2024-11-13 12:03:38,143 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:532 - _session_callback]: Zookeeper connection established, state: CONNECTED 2219s 2024-11-13 12:03:38,178 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/config.py:1224 - reload_config]: No PostgreSQL configuration items changed, nothing to reload. 2219s 2024-11-13 12:03:38,194 WARNING [/usr/lib/python3/dist-packages/patroni/postgresql/__init__.py:1019 - is_healthy]: Postgresql is not running. 
2219s 2024-11-13 12:03:38,194 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: None; I am postgres0
2219s 2024-11-13 12:03:38,196 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:549 - recover]: pg_controldata:
2219s pg_control version number: 1300
2219s Catalog version number: 202307071
2219s Database system identifier: 7436733309391054094
2219s Database cluster state: shut down
2219s pg_control last modified: Wed Nov 13 12:03:33 2024
2219s Latest checkpoint location: 0/4000028
2219s Latest checkpoint's REDO location: 0/4000028
2219s Latest checkpoint's REDO WAL file: 000000010000000000000004
2219s Latest checkpoint's TimeLineID: 1
2219s Latest checkpoint's PrevTimeLineID: 1
2219s Latest checkpoint's full_page_writes: on
2219s Latest checkpoint's NextXID: 0:739
2219s Latest checkpoint's NextOID: 16389
2219s Latest checkpoint's NextMultiXactId: 1
2219s Latest checkpoint's NextMultiOffset: 0
2219s Latest checkpoint's oldestXID: 723
2219s Latest checkpoint's oldestXID's DB: 1
2219s Latest checkpoint's oldestActiveXID: 0
2219s Latest checkpoint's oldestMultiXid: 1
2219s Latest checkpoint's oldestMulti's DB: 1
2219s Latest checkpoint's oldestCommitTsXid: 0
2219s Latest checkpoint's newestCommitTsXid: 0
2219s Time of latest checkpoint: Wed Nov 13 12:03:33 2024
2219s Fake LSN counter for unlogged rels: 0/3E8
2219s Minimum recovery ending location: 0/0
2219s Min recovery ending loc's timeline: 0
2219s Backup start location: 0/0
2219s Backup end location: 0/0
2219s End-of-backup record required: no
2219s wal_level setting: replica
2219s wal_log_hints setting: on
2219s max_connections setting: 100
2219s max_worker_processes setting: 8
2219s max_wal_senders setting: 10
2219s max_prepared_xacts setting: 0
2219s max_locks_per_xact setting: 64
2219s track_commit_timestamp setting: off
2219s Maximum data alignment: 8
2219s Database block size: 8192
2219s Blocks per segment of large relation: 131072
2219s WAL block size: 8192
2219s Bytes per WAL segment: 16777216
2219s Maximum length of identifiers: 64
2219s Maximum columns in an index: 32
2219s Maximum size of a TOAST chunk: 1996
2219s Size of a large-object chunk: 2048
2219s Date/time type storage: 64-bit integers
2219s Float8 argument passing: by value
2219s Data page checksum version: 1
2219s Mock authentication nonce: e70cf1097252b579f6e095f88831ae80545225c490a12aa2933346a409a35423
2219s
2219s 2024-11-13 12:03:38,205 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: None; I am postgres0
2219s 2024-11-13 12:03:38,205 WARNING [/usr/lib/python3/dist-packages/patroni/dcs/zookeeper.py:352 - touch_member]: Recreating the member ZNode due to ownership mismatch
2219s + for file in features/output/*_failed/*
2219s + case $file in
2219s + echo features/output/priority_replication_failed/patroni_postgres1.log:
2219s + cat features/output/priority_replication_failed/patroni_postgres1.log
2219s 2024-11-13 12:03:38,208 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: starting as a secondary
2219s 2024-11-13 12:03:38.513 UTC [9620] DEBUG: registering background worker "logical replication launcher"
2219s 2024-11-13 12:03:38.515 UTC [9620] DEBUG: mmap(8388608) with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory
2219s 2024-11-13 12:03:38.519 UTC [9620] LOG: redirecting log output to logging collector process
2219s 2024-11-13 12:03:38.519 UTC [9620] HINT: Future log output will appear in directory "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/priority_replication".
2219s 2024-11-13 12:03:38,546 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py:249 - start]: postmaster pid=9620
2219s /tmp:5382 - rejecting connections
2219s /tmp:5382 - rejecting connections
2219s /tmp:5382 - accepting connections
2219s 2024-11-13 12:03:39,572 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/connection.py:53 - get]: establishing a new patroni heartbeat connection to postgres
2219s 2024-11-13 12:03:39,580 WARNING [/usr/lib/python3/dist-packages/patroni/watchdog/base.py:143 - _activate]: Could not activate Linux watchdog device: Can't open watchdog device: [Errno 2] No such file or directory: '/dev/watchdog'
2219s 2024-11-13 12:03:39,582 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/slots.py:341 - _drop_incorrect_slots]: Dropped unknown replication slot 'postgres1'
2219s 2024-11-13 12:03:39,586 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: promoted self to leader by acquiring session lock
2219s server promoting
2219s 2024-11-13 12:03:40,622 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres0), the leader with the lock
2219s 2024-11-13 12:03:42,614 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres0), the leader with the lock
2219s 2024-11-13 12:03:44,604 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres0), the leader with the lock
2219s 2024-11-13 12:03:46,623 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres0), the leader with the lock
2219s 2024-11-13 12:03:48,611 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres0), the leader with the lock
2219s 2024-11-13 12:03:50,604 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres0), the leader with the lock
2219s 2024-11-13 12:03:53,245 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:617 - _connect_attempt]: Closing connection to localhost:2181
2219s 2024-11-13 12:03:53,245 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:537 - _session_callback]: Zookeeper session closed, state: CLOSED
2219s 2024-11-13 12:03:53,247 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:650 - _connect]: Connecting to localhost(127.0.0.1):2181, use_ssl: False
2219s 2024-11-13 12:03:53,249 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:532 - _session_callback]: Zookeeper connection established, state: CONNECTED
2219s 2024-11-13 12:03:53,249 WARNING [/usr/lib/python3/dist-packages/patroni/dcs/zookeeper.py:352 - touch_member]: Recreating the member ZNode due to ownership mismatch
2219s features/output/priority_replication_failed/patroni_postgres1.log:
2219s + for file in features/output/*_failed/*
2219s + case $file in
2219s + echo features/output/priority_replication_failed/patroni_postgres2.log:
2219s + cat features/output/priority_replication_failed/patroni_postgres2.log
2219s + for file in features/output/*_failed/*
2219s + case $file in
2219s + echo features/output/priority_replication_failed/patroni_postgres3.log:
2219s + cat features/output/priority_replication_failed/patroni_postgres3.log
2219s 2024-11-13 12:03:26,035 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:650 - _connect]: Connecting to localhost(127.0.0.1):2181, use_ssl: False
2219s 2024-11-13 12:03:26,041 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:532 - _session_callback]: Zookeeper connection established, state: CONNECTED
2219s 2024-11-13 12:03:26,064 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/config.py:1224 - reload_config]: No PostgreSQL configuration items changed, nothing to reload.
2219s 2024-11-13 12:03:26,079 WARNING [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:622 - _connect_attempt]: Connection dropped: socket connection error: Invalid file descriptor: -1
2219s 2024-11-13 12:03:26,079 WARNING [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:626 - _connect_attempt]: Transition to CONNECTING
2219s 2024-11-13 12:03:26,079 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:543 - _session_callback]: Zookeeper connection lost
2219s 2024-11-13 12:03:26,080 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:650 - _connect]: Connecting to localhost(::1):2181, use_ssl: False
2219s 2024-11-13 12:03:26,082 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:532 - _session_callback]: Zookeeper connection established, state: CONNECTED
2219s 2024-11-13 12:03:26,167 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres0; I am postgres1
2219s 2024-11-13 12:03:26,169 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: trying to bootstrap from leader 'postgres0'
2219s 2024-11-13 12:03:26,178 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres0; I am postgres1
2219s 2024-11-13 12:03:26,179 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: bootstrap from leader 'postgres0' in progress
2219s 2024-11-13 12:03:26,391 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/bootstrap.py:279 - create_replica]: replica has been created using basebackup
2219s 2024-11-13 12:03:26,392 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:425 - clone]: bootstrapped from leader 'postgres0'
2219s 2024-11-13 12:03:26.649 UTC [9535] DEBUG: registering background worker "logical replication launcher"
2219s 2024-11-13 12:03:26.651 UTC [9535] DEBUG: mmap(8388608) with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory
2219s 2024-11-13 12:03:26.654 UTC [9535] LOG: redirecting log output to logging collector process
2219s 2024-11-13 12:03:26.654 UTC [9535] HINT: Future log output will appear in directory "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/priority_replication".
2219s 2024-11-13 12:03:26,662 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py:249 - start]: postmaster pid=9535
2219s /tmp:5383 - rejecting connections
2219s /tmp:5383 - rejecting connections
2219s /tmp:5383 - accepting connections
2219s 2024-11-13 12:03:27,688 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres0; I am postgres1
2219s 2024-11-13 12:03:27,688 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/connection.py:53 - get]: establishing a new patroni heartbeat connection to postgres
2219s 2024-11-13 12:03:27,699 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:29,689 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:31,689 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:33,691 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres0)
2219s server signaled
2219s 2024-11-13 12:03:36,102 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: following a different leader because I am not allowed to promote
2219s 2024-11-13 12:03:36,104 WARNING [/usr/lib/python3/dist-packages/patroni/__main__.py:181 - schedule_next_run]: Loop time exceeded, rescheduling immediately.
2219s 2024-11-13 12:03:38,119 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: following a different leader because I am not allowed to promote
2219s 2024-11-13 12:03:38,121 WARNING [/usr/lib/python3/dist-packages/patroni/__main__.py:181 - schedule_next_run]: Loop time exceeded, rescheduling immediately.
2219s 2024-11-13 12:03:40,135 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:187 - _get_local_timeline_lsn]: Local timeline=1 lsn=0/40000A0
2219s server signaled
2219s 2024-11-13 12:03:40,146 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: following a different leader because I am not allowed to promote
2219s 2024-11-13 12:03:40,148 WARNING [/usr/lib/python3/dist-packages/patroni/__main__.py:181 - schedule_next_run]: Loop time exceeded, rescheduling immediately.
2219s 2024-11-13 12:03:40,151 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres0; I am postgres1
2219s 2024-11-13 12:03:40,153 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:187 - _get_local_timeline_lsn]: Local timeline=1 lsn=0/40000A0
2219s 2024-11-13 12:03:40,156 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:42,151 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres0; I am postgres1
2219s 2024-11-13 12:03:42,156 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:187 - _get_local_timeline_lsn]: Local timeline=2 lsn=0/4000180
2219s 2024-11-13 12:03:42,196 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:245 - _check_timeline_and_lsn]: primary_timeline=2
2219s 2024-11-13 12:03:42,201 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:44,161 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:46,154 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:48,187 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:50,159 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:52,161 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:55,279 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:187 - _get_local_timeline_lsn]: Local timeline=2 lsn=0/90000A0
2219s 2024-11-13 12:03:55,308 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:245 - _check_timeline_and_lsn]: primary_timeline=3
2219s 2024-11-13 12:03:55,309 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:207 - _log_primary_history]: primary: history=1	0/40000A0	no recovery target specified
2219s 2	0/90000A0	no recovery target specified
2219s server signaled
2219s 2024-11-13 12:03:55,336 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: following a different leader because I am not allowed to promote
2219s 2024-11-13 12:03:55,337 WARNING [/usr/lib/python3/dist-packages/patroni/__main__.py:181 - schedule_next_run]: Loop time exceeded, rescheduling immediately.
2219s 2024-11-13 12:03:55,361 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres3; I am postgres1
2219s 2024-11-13 12:03:55,367 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:187 - _get_local_timeline_lsn]: Local timeline=2 lsn=0/90000A0
2219s 2024-11-13 12:03:55,415 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:245 - _check_timeline_and_lsn]: primary_timeline=3
2219s 2024-11-13 12:03:55,415 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:207 - _log_primary_history]: primary: history=1	0/40000A0	no recovery target specified
2219s 2	0/90000A0	no recovery target specified
2219s 2024-11-13 12:03:55,426 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres3)
2219s 2024-11-13 12:03:57,361 WARNING [/usr/lib/python3/dist-packages/patroni/config.py:837 - _validate_failover_tags]: Conflicting configuration between nofailover: False and failover_priority: 0. Defaulting to nofailover: False
2219s 2024-11-13 12:03:57,364 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/config.py:1201 - reload_config]: Reloading PostgreSQL configuration.
2219s server signaled
2219s 2024-11-13 12:03:58,380 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres3)
2219s 2024-11-13 12:03:59,356 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres3)
2219s 2024-11-13 12:03:59,406 INFO [/usr/lib/python3/dist-packages/patroni/api.py:1098 - do_POST_failover]: received failover request with leader=None candidate=postgres1 scheduled_at=None
2219s 2024-11-13 12:03:59,434 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/connection.py:53 - get]: establishing a new patroni restapi connection to postgres
2219s 2024-11-13 12:03:59,477 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:900 - fetch_node_status]: Got response from postgres1 https://127.0.0.1:8009/patroni: {"state": "running", "postmaster_start_time": "2024-11-13 12:03:26.656909+00:00", "role": "replica", "server_version": 160004, "xlog": {"received_location": 150995328, "replayed_location": 150995328, "replayed_timestamp": "2024-11-13 12:03:48.100986+00:00", "paused": false}, "timeline": 3, "replication_state": "streaming", "dcs_last_seen": 1731499439, "tags": {"failover_priority": "0", "nofailover": false}, "database_system_identifier": "7436733309391054094", "patroni": {"version": "3.3.1", "scope": "batman", "name": "postgres1"}}
2219s 2024-11-13 12:03:59,491 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres1), a secondary, and following a leader (postgres3)
2219s 2024-11-13 12:04:01,500 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:1381 - process_unhealthy_cluster]: Cleaning up failover key after acquiring leader lock...
2219s features/output/priority_replication_failed/patroni_postgres2.log:
2219s 2024-11-13 12:03:42,178 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:650 - _connect]: Connecting to localhost(::1):2181, use_ssl: False
2219s 2024-11-13 12:03:42,185 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:532 - _session_callback]: Zookeeper connection established, state: CONNECTED
2219s 2024-11-13 12:03:42,233 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/config.py:1224 - reload_config]: No PostgreSQL configuration items changed, nothing to reload.
2219s 2024-11-13 12:03:42,242 WARNING [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:622 - _connect_attempt]: Connection dropped: socket connection error: Invalid file descriptor: -1
2219s 2024-11-13 12:03:42,242 WARNING [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:626 - _connect_attempt]: Transition to CONNECTING
2219s 2024-11-13 12:03:42,242 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:543 - _session_callback]: Zookeeper connection lost
2219s 2024-11-13 12:03:42,243 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:650 - _connect]: Connecting to localhost(127.0.0.1):2181, use_ssl: False
2219s 2024-11-13 12:03:42,244 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:532 - _session_callback]: Zookeeper connection established, state: CONNECTED
2219s 2024-11-13 12:03:42,315 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres0; I am postgres2
2219s 2024-11-13 12:03:42,317 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: trying to bootstrap from leader 'postgres0'
2219s 2024-11-13 12:03:42,330 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres0; I am postgres2
2219s 2024-11-13 12:03:42,331 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: bootstrap from leader 'postgres0' in progress
2219s 2024-11-13 12:03:42,581 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/bootstrap.py:279 - create_replica]: replica has been created using basebackup
2219s 2024-11-13 12:03:42,582 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:425 - clone]: bootstrapped from leader 'postgres0'
2219s 2024-11-13 12:03:42.850 UTC [9712] DEBUG: registering background worker "logical replication launcher"
2219s 2024-11-13 12:03:42.852 UTC [9712] DEBUG: mmap(8388608) with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory
2219s 2024-11-13 12:03:42.855 UTC [9712] LOG: redirecting log output to logging collector process
2219s 2024-11-13 12:03:42.855 UTC [9712] HINT: Future log output will appear in directory "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/priority_replication".
2219s 2024-11-13 12:03:42,865 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py:249 - start]: postmaster pid=9712
2219s /tmp:5384 - rejecting connections
2219s /tmp:5384 - rejecting connections
2219s /tmp:5384 - accepting connections
2219s 2024-11-13 12:03:43,914 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres0; I am postgres2
2219s 2024-11-13 12:03:43,915 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/connection.py:53 - get]: establishing a new patroni heartbeat connection to postgres
2219s 2024-11-13 12:03:43,929 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres2), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:45,905 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres2), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:47,905 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres2), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:49,907 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres2), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:51,906 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres2), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:53,293 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/connection.py:53 - get]: establishing a new patroni restapi connection to postgres
2219s 2024-11-13 12:03:53,300 WARNING [/usr/lib/python3/dist-packages/patroni/ha.py:903 - fetch_node_status]: Request failed to postgres0: GET https://127.0.0.1:8008/patroni (HTTPSConnectionPool(host='127.0.0.1', port=8008): Max retries exceeded with url: /patroni (Caused by ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))))
2219s 2024-11-13 12:03:53,326 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:900 - fetch_node_status]: Got response from postgres3 https://127.0.0.1:8011/patroni: {"state": "running", "postmaster_start_time": "2024-11-13 12:03:45.867354+00:00", "role": "replica", "server_version": 160004, "xlog": {"received_location": 150995104, "replayed_location": 150995104, "replayed_timestamp": "2024-11-13 12:03:48.100986+00:00", "paused": false}, "timeline": 2, "replication_state": "in archive recovery", "cluster_unlocked": true, "dcs_last_seen": 1731499433, "tags": {"failover_priority": "2"}, "database_system_identifier": "7436733309391054094", "patroni": {"version": "3.3.1", "scope": "batman", "name": "postgres3"}}
2219s 2024-11-13 12:03:53,327 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:1025 - _is_healthiest_node]: postgres3 has equally tolerable WAL position and priority 2, while this node has priority 1
2219s server signaled
2219s 2024-11-13 12:03:53,348 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: following a different leader because i am not the healthiest node
2219s 2024-11-13 12:03:55,280 WARNING [/usr/lib/python3/dist-packages/patroni/config.py:837 - _validate_failover_tags]: Conflicting configuration between nofailover: True and failover_priority: 1. Defaulting to nofailover: True
2219s 2024-11-13 12:03:55,284 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/config.py:1201 - reload_config]: Reloading PostgreSQL configuration.
2219s server signaled
2219s 2024-11-13 12:03:56,295 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres3; I am postgres2
2219s 2024-11-13 12:03:56,298 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:187 - _get_local_timeline_lsn]: Local timeline=2 lsn=0/90000A0
2219s 2024-11-13 12:03:56,326 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:245 - _check_timeline_and_lsn]: primary_timeline=3
2219s 2024-11-13 12:03:56,327 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:207 - _log_primary_history]: primary: history=1	0/40000A0	no recovery target specified
2219s 2	0/90000A0	no recovery target specified
2219s server signaled
2219s 2024-11-13 12:03:56,341 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres2), a secondary, and following a leader (postgres3)
2219s 2024-11-13 12:03:57,237 INFO [/usr/lib/python3/dist-packages/patroni/api.py:1098 - do_POST_failover]: received failover request with leader=None candidate=postgres2 scheduled_at=None
2219s 2024-11-13 12:03:57,253 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres3; I am postgres2
2219s 2024-11-13 12:03:57,256 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:187 - _get_local_timeline_lsn]: Local timeline=3 lsn=0/9000180
2219s 2024-11-13 12:03:57,281 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:245 - _check_timeline_and_lsn]: primary_timeline=3
2219s 2024-11-13 12:03:57,285 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres2), a secondary, and following a leader (postgres3)
2219s 2024-11-13 12:03:57,296 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:900 - fetch_node_status]: Got response from postgres2 https://127.0.0.1:8010/patroni: {"state": "running", "postmaster_start_time": "2024-11-13 12:03:42.858125+00:00", "role": "replica", "server_version": 160004, "xlog": {"received_location": 150995328, "replayed_location": 150995328, "replayed_timestamp": "2024-11-13 12:03:48.100986+00:00", "paused": false}, "timeline": 3, "replication_state": "streaming", "dcs_last_seen": 1731499437, "tags": {"failover_priority": "1", "nofailover": true}, "database_system_identifier": "7436733309391054094", "patroni": {"version": "3.3.1", "scope": "batman", "name": "postgres2"}}
2219s 2024-11-13 12:03:59,253 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres2), a secondary, and following a leader (postgres3)
2219s 2024-11-13 12:04:01,261 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres2), a secondary, and following a leader (postgres3)
2219s features/output/priority_replication_failed/patroni_postgres3.log:
2219s 2024-11-13 12:03:45,184 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:650 - _connect]: Connecting to localhost(::1):2181, use_ssl: False
2219s 2024-11-13 12:03:45,188 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:532 - _session_callback]: Zookeeper connection established, state: CONNECTED
2219s 2024-11-13 12:03:45,217 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/config.py:1224 - reload_config]: No PostgreSQL configuration items changed, nothing to reload.
2219s 2024-11-13 12:03:45,222 WARNING [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:622 - _connect_attempt]: Connection dropped: socket connection error: Invalid file descriptor: -1
2219s 2024-11-13 12:03:45,222 WARNING [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:626 - _connect_attempt]: Transition to CONNECTING
2219s 2024-11-13 12:03:45,222 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:543 - _session_callback]: Zookeeper connection lost
2219s 2024-11-13 12:03:45,222 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:650 - _connect]: Connecting to localhost(127.0.0.1):2181, use_ssl: False
2219s 2024-11-13 12:03:45,246 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:532 - _session_callback]: Zookeeper connection established, state: CONNECTED
2219s 2024-11-13 12:03:45,297 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres0; I am postgres3
2219s 2024-11-13 12:03:45,300 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: trying to bootstrap from leader 'postgres0'
2219s 2024-11-13 12:03:45,311 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres0; I am postgres3
2219s 2024-11-13 12:03:45,314 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: bootstrap from leader 'postgres0' in progress
2219s 2024-11-13 12:03:45,584 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/bootstrap.py:279 - create_replica]: replica has been created using basebackup
2219s 2024-11-13 12:03:45,585 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:425 - clone]: bootstrapped from leader 'postgres0'
2219s 2024-11-13 12:03:45.856 UTC [9760] DEBUG: registering background worker "logical replication launcher"
2219s 2024-11-13 12:03:45.858 UTC [9760] DEBUG: mmap(8388608) with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory
2219s 2024-11-13 12:03:45.861 UTC [9760] LOG: redirecting log output to logging collector process
2219s 2024-11-13 12:03:45.861 UTC [9760] HINT: Future log output will appear in directory "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/priority_replication".
2219s 2024-11-13 12:03:45,867 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py:249 - start]: postmaster pid=9760
2219s /tmp:5385 - rejecting connections
2219s /tmp:5385 - rejecting connections
2219s /tmp:5385 - accepting connections
2219s 2024-11-13 12:03:46,893 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres0; I am postgres3
2219s 2024-11-13 12:03:46,893 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/connection.py:53 - get]: establishing a new patroni heartbeat connection to postgres
2219s 2024-11-13 12:03:46,904 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres3), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:48,900 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres3), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:50,895 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres3), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:52,897 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres3), a secondary, and following a leader (postgres0)
2219s 2024-11-13 12:03:53,285 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/connection.py:53 - get]: establishing a new patroni restapi connection to postgres
2219s 2024-11-13 12:03:53,299 WARNING [/usr/lib/python3/dist-packages/patroni/ha.py:903 - fetch_node_status]: Request failed to postgres0: GET https://127.0.0.1:8008/patroni (HTTPSConnectionPool(host='127.0.0.1', port=8008): Max retries exceeded with url: /patroni (Caused by ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))))
2219s 2024-11-13 12:03:53,340 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:900 - fetch_node_status]: Got response from postgres2 https://127.0.0.1:8010/patroni: {"state": "running", "postmaster_start_time": "2024-11-13 12:03:42.858125+00:00", "role": "replica", "server_version": 160004, "xlog": {"received_location": 150995104, "replayed_location": 150995104, "replayed_timestamp": "2024-11-13 12:03:48.100986+00:00", "paused": false}, "timeline": 2, "replication_state": "in archive recovery", "cluster_unlocked": true, "dcs_last_seen": 1731499433, "tags": {"failover_priority": "1"}, "database_system_identifier": "7436733309391054094", "patroni": {"version": "3.3.1", "scope": "batman", "name": "postgres2"}}
2219s 2024-11-13 12:03:53,352 WARNING [/usr/lib/python3/dist-packages/patroni/watchdog/base.py:143 - _activate]: Could not activate Linux watchdog device: Can't open watchdog device: [Errno 2] No such file or directory: '/dev/watchdog'
2219s 2024-11-13 12:03:53,354 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: promoted self to leader by acquiring session lock
2219s server promoting
2219s 2024-11-13 12:03:53,357 DEBUG [/usr/lib/python3/dist-packages/patroni/postgresql/__init__.py:1216 - promote]: CallbackExecutor.call(['/usr/bin/python3', 'features/callback2.py', 'postgres3', '5385', on_role_change, 'master', 'batman'])
2219s 2024-11-13 12:03:54,403 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres3), the leader with the lock
2219s 2024-11-13 12:03:56,418 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres3), the leader with the lock
2219s 2024-11-13 12:03:58,386 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: no action. I am (postgres3), the leader with the lock
2219s 2024-11-13 12:04:00,447 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:900 - fetch_node_status]: Got response from postgres1 https://127.0.0.1:8009/patroni: {"state": "running", "postmaster_start_time": "2024-11-13 12:03:26.656909+00:00", "role": "replica", "server_version": 160004, "xlog": {"received_location": 150995328, "replayed_location": 150995328, "replayed_timestamp": "2024-11-13 12:03:48.100986+00:00", "paused": false}, "timeline": 3, "replication_state": "streaming", "dcs_last_seen": 1731499439, "tags": {"failover_priority": "0", "nofailover": false}, "database_system_identifier": "7436733309391054094", "patroni": {"version": "3.3.1", "scope": "batman", "name": "postgres1"}}
2219s 2024-11-13 12:04:00,386 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres3; I am postgres3
2219s 2024-11-13 12:04:00,449 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: manual failover: demoting myself
2219s 2024-11-13 12:04:00,450 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:1235 - demote]: Demoting self (graceful)
2219s 2024-11-13 12:04:01,481 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:617 - _connect_attempt]: Closing connection to localhost:2181
2219s 2024-11-13 12:04:01,481 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:537 - _session_callback]: Zookeeper session closed, state: CLOSED
2219s 2024-11-13 12:04:01,485 INFO [/usr/lib/python3/dist-packages/kazoo/protocol/connection.py:650 - _connect]: Connecting to localhost(127.0.0.1):2181, use_ssl: False
2219s 2024-11-13 12:04:01,489 INFO [/usr/lib/python3/dist-packages/kazoo/client.py:532 - _session_callback]: Zookeeper connection established, state: CONNECTED
2219s 2024-11-13 12:04:01,492 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:1212 - release_leader_key_voluntarily]: Leader key released
2219s 2024-11-13 12:04:01,509 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: None; I am postgres3
2219s + for file in features/output/*_failed/*
2219s + case $file in
2219s + echo features/output/priority_replication_failed/postgres0.csv:
2219s + cat features/output/priority_replication_failed/postgres0.csv
2219s 2024-11-13 12:04:01,509 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:1647 - handle_long_action_in_progress]: not healthy enough for leader race
2219s 2024-11-13 12:04:01,509 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: manual failover: demote in progress
2219s 2024-11-13 12:04:03,488 INFO [/usr/lib/python3/dist-packages/patroni/ha.py:321 - has_lock]: Lock owner: postgres1; I am postgres3
2219s 2024-11-13 12:04:03,489 INFO [/usr/lib/python3/dist-packages/patroni/__main__.py:201 - _run_cycle]: manual failover: demote in progress
2219s 2024-11-13 12:04:03,500 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/rewind.py:187 - _get_local_timeline_lsn]: Local timeline=3 lsn=0/A000028
2219s 2024-11-13 12:04:03,501 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/connection.py:152 - close]: closed patroni connections to postgres
2219s 2024-11-13 12:04:03,814 INFO [/usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py:249 - start]: postmaster pid=10005
2219s 2024-11-13 12:04:03.815 UTC [10005] DEBUG: registering background worker "logical replication launcher"
2219s 2024-11-13 12:04:03.816 UTC [10005] DEBUG: mmap(8388608) with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory
2219s /tmp:5385 - no response
2219s 2024-11-13 12:04:03.828 UTC [10005] LOG: redirecting log output to logging collector process
2219s 2024-11-13 12:04:03.828 UTC [10005] HINT: Future log output will appear in directory "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/priority_replication".
2219s features/output/priority_replication_failed/postgres0.csv:
2219s 2024-11-13 12:03:23.886 UTC,,,9494,,6734958b.2516,1,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"ending log output to stderr",,"Future log output will go to log destination ""csvlog"".",,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:23.886 UTC,,,9494,,6734958b.2516,2,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"starting PostgreSQL 16.4 (Ubuntu 16.4-3) on s390x-ibm-linux-gnu, compiled by gcc (Ubuntu 14.2.0-7ubuntu1) 14.2.0, 64-bit",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:23.886 UTC,,,9494,,6734958b.2516,3,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"listening on IPv4 address ""127.0.0.1"", port 5382",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:23.887 UTC,,,9494,,6734958b.2516,4,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"listening on Unix socket ""/tmp/.s.PGSQL.5382""",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,1,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"database system was shut down at 2024-11-13 12:03:23 UTC",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,2,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint record is at 0/1732590",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,3,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"redo record is at 0/1732590; shutdown true",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,4,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"next transaction ID: 731; next OID: 13623",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,5,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"next MultiXactId: 1; next MultiXactOffset: 0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,6,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"oldest unfrozen transaction ID: 723, in database 1",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,7,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"oldest MultiXactId: 1, in database 1",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,8,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"commit timestamp Xid oldest/newest: 0/0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,9,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"transaction ID wrap limit is 2147484370, limited by database with OID 1",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,10,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"MultiXactId wrap limit is 2147483648, limited by database with OID 1",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,11,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"starting up replication slots",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,12,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,13,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"MultiXactId wrap limit is 2147483648, limited by database with OID 1",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.894 UTC,,,9498,,6734958b.251a,14,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"MultiXact member stop limit is now 4294914944 based on MultiXact 1",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:23.897 UTC,"postgres","postgres",9500,"[local]",6734958b.251c,1,"",2024-11-13 12:03:23 UTC,,0,FATAL,57P03,"the database system is starting up",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:23.899 UTC,,,9494,,6734958b.2516,5,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"starting background worker process ""logical replication launcher""",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:23.899 UTC,,,9494,,6734958b.2516,6,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"database system is ready to accept connections",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:23.900 UTC,,,9502,,6734958b.251e,1,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"autovacuum launcher started",,,,,,,,,"","autovacuum launcher",,0
2219s 2024-11-13 12:03:23.901 UTC,,,9505,,6734958b.2521,1,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"logical replication launcher started",,,,,,,,,"","logical replication launcher",,0
2219s 2024-11-13 12:03:23.912 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,1,"idle",2024-11-13 12:03:23 UTC,3/3,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), (SELECT pg_catalog.json_agg(s.*) FROM (SELECT slot_name, slot_type as type, datoid::bigint, plugin, catalog_xmin, pg_catalog.pg_wal_lsn_diff(confirmed_flush_lsn, '0/0')::bigint AS confirmed_flush_lsn, pg_catalog.pg_wal_lsn_diff(restart_lsn, '0/0')::bigint AS restart_lsn FROM pg_catalog.pg_get_replication_slots()) AS s), 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:23.915 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,2,"idle",2024-11-13 12:03:23 UTC,3/4,0,LOG,00000,"statement: SET log_statement TO none",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:23.917 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,3,"idle",2024-11-13 12:03:23 UTC,3/12,0,LOG,00000,"statement: RESET pg_stat_statements.track_utility",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:23.923 UTC,"postgres","postgres",9510,"[local]",6734958b.2526,1,"idle",2024-11-13 12:03:23 UTC,4/2,0,LOG,00000,"statement: SELECT 1",,,,,,,,,"psql","client backend",,0
2219s 2024-11-13 12:03:23.924 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,4,"idle",2024-11-13 12:03:23 UTC,3/13,0,LOG,00000,"statement: SET log_statement TO none",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:23.926 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,5,"idle",2024-11-13 12:03:23 UTC,3/21,0,LOG,00000,"statement: RESET pg_stat_statements.track_utility",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:23.926 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,6,"idle",2024-11-13 12:03:23 UTC,3/22,0,LOG,00000,"statement: SET log_statement TO none",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:23.927 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,7,"idle",2024-11-13 12:03:23 UTC,3/30,0,LOG,00000,"statement: RESET pg_stat_statements.track_utility",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:23.927 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,8,"idle",2024-11-13 12:03:23 UTC,3/31,0,LOG,00000,"statement: DO $$
2219s BEGIN
2219s SET local synchronous_commit = 'local';
2219s GRANT EXECUTE ON function pg_catalog.pg_ls_dir(text, boolean, boolean) TO ""rewind_user"";
2219s END;$$",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:23.927 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,9,"idle",2024-11-13 12:03:23 UTC,3/32,0,LOG,00000,"statement: DO $$
2219s BEGIN
2219s SET local synchronous_commit = 'local';
2219s GRANT EXECUTE ON function pg_catalog.pg_stat_file(text, boolean) TO ""rewind_user"";
2219s END;$$",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:23.928 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,10,"idle",2024-11-13 12:03:23 UTC,3/33,0,LOG,00000,"statement: DO $$
2219s BEGIN
2219s SET local synchronous_commit = 'local';
2219s GRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text) TO ""rewind_user"";
2219s END;$$",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:23.928 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,11,"idle",2024-11-13 12:03:23 UTC,3/34,0,LOG,00000,"statement: DO $$
2219s BEGIN
2219s SET local synchronous_commit = 'local';
2219s GRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text, bigint, bigint, boolean) TO ""rewind_user"";
2219s END;$$",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:23.931 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,12,"idle",2024-11-13 12:03:23 UTC,3/35,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), (SELECT pg_catalog.json_agg(s.*) FROM (SELECT slot_name, slot_type as type, datoid::bigint, plugin, catalog_xmin, pg_catalog.pg_wal_lsn_diff(confirmed_flush_lsn, '0/0')::bigint AS confirmed_flush_lsn, pg_catalog.pg_wal_lsn_diff(restart_lsn, '0/0')::bigint AS restart_lsn FROM pg_catalog.pg_get_replication_slots()) AS s), 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:24.874 UTC,"postgres","postgres",9513,"127.0.0.1:40952",6734958c.2529,1,"idle",2024-11-13 12:03:24 UTC,4/4,0,LOG,00000,"statement: SELECT 1",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:24.874 UTC,"postgres","postgres",9513,"127.0.0.1:40952",6734958c.2529,2,"idle",2024-11-13 12:03:24 UTC,4/5,0,LOG,00000,"statement: SET synchronous_commit TO 'local'",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:25.941 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,13,"idle",2024-11-13 12:03:23 UTC,3/36,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:25.944 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,14,"idle",2024-11-13 12:03:23 UTC,3/37,0,LOG,00000,"statement: SELECT slot_name, slot_type, pg_catalog.pg_wal_lsn_diff(restart_lsn, '0/0')::bigint, plugin, database, datoid, catalog_xmin, pg_catalog.pg_wal_lsn_diff(confirmed_flush_lsn, '0/0')::bigint FROM pg_catalog.pg_replication_slots WHERE NOT temporary",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:26.191 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,1,"idle",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"received replication command: SHOW data_directory_mode",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.191 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,2,"idle",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"received replication command: SHOW wal_segment_size",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.191 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,3,"idle",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.191 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,4,"idle",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"received replication command: BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, CHECKPOINT 'fast', WAIT 0, MANIFEST 'yes', TARGET 'client')",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.198 UTC,,,9496,,6734958b.2518,1,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"checkpoint starting: immediate force wait",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.198 UTC,,,9496,,6734958b.2518,2,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"performing replication slot checkpoint",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.201 UTC,,,9496,,6734958b.2518,3,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=1 file=pg_multixact/offsets/0000 time=0.428 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.202 UTC,,,9496,,6734958b.2518,4,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=2 file=global/2677 time=0.426 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.202 UTC,,,9496,,6734958b.2518,5,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=3 file=global/1260 time=0.044 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.202 UTC,,,9496,,6734958b.2518,6,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=4 file=global/1260_vm time=0.188 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.202 UTC,,,9496,,6734958b.2518,7,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=5 file=global/1214 time=0.034 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.202 UTC,,,9496,,6734958b.2518,8,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=6 file=base/5/1255_vm time=0.021 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.202 UTC,,,9496,,6734958b.2518,9,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=7 file=global/2676 time=0.026 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.202 UTC,,,9496,,6734958b.2518,10,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=8 file=global/1233 time=0.022 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.202 UTC,,,9496,,6734958b.2518,11,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=9 file=base/5/2691 time=0.017 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.202 UTC,,,9496,,6734958b.2518,12,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=10 file=global/1232 time=0.021 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.202 UTC,,,9496,,6734958b.2518,13,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=11 file=pg_xact/0000 time=0.219 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.203 UTC,,,9496,,6734958b.2518,14,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=12 file=base/5/2690 time=0.190 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.203 UTC,,,9496,,6734958b.2518,15,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=13 file=base/5/1255 time=0.032 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.218 UTC,,,9496,,6734958b.2518,16,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"checkpoint complete: wrote 18 buffers (14.1%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.002 s, total=0.021 s; sync files=13, longest=0.001 s, average=0.001 s; distance=9014 kB, estimate=9014 kB; lsn=0/2000060, redo lsn=0/2000028",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:26.218 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,5,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"file ""postmaster.pid"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.218 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,6,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_dynshmem"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.218 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,7,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"file ""pg_internal.init"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.218 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,8,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_replslot"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.221 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,9,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"file ""pg_internal.init"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.223 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,10,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_snapshots"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.223 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,11,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_stat_tmp"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.223 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,12,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_subtrans"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.223 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,13,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"file ""postmaster.opts"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.223 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,14,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_notify"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.223 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,15,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_serial"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.223 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,16,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"file ""postmaster.pid"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.223 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,17,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_dynshmem"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.223 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,18,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"file ""pg_internal.init"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.225 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,19,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_replslot"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.232 UTC,,,9504,,6734958b.2520,1,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"archived write-ahead log file ""000000010000000000000001""",,,,,,,,,"","archiver",,0
2219s 2024-11-13 12:03:26.239 UTC,"replicator","",9526,"127.0.0.1:40974",6734958e.2536,1,"idle",2024-11-13 12:03:26 UTC,6/0,0,DEBUG,00000,"received replication command: SHOW data_directory_mode",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.239 UTC,"replicator","",9526,"127.0.0.1:40974",6734958e.2536,2,"idle",2024-11-13 12:03:26 UTC,6/0,0,DEBUG,00000,"received replication command: CREATE_REPLICATION_SLOT ""pg_basebackup_9526"" TEMPORARY PHYSICAL ( RESERVE_WAL)",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.241 UTC,"replicator","",9526,"127.0.0.1:40974",6734958e.2536,3,"idle",2024-11-13 12:03:26 UTC,6/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.241 UTC,"replicator","",9526,"127.0.0.1:40974",6734958e.2536,4,"idle",2024-11-13 12:03:26 UTC,6/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""pg_basebackup_9526"" 0/2000000 TIMELINE 1",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.241 UTC,"replicator","",9526,"127.0.0.1:40974",6734958e.2536,5,"streaming 0/20000D8",2024-11-13 12:03:26 UTC,6/0,0,DEBUG,00000,"""pg_basebackup"" has now caught up with upstream server",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.268 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,20,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"file ""pg_internal.init"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.280 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,21,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_snapshots"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.280 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,22,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_stat_tmp"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.280 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,23,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_subtrans"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.280 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,24,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"file ""postmaster.opts"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.280 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,25,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_notify"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.280 UTC,"replicator","",9523,"127.0.0.1:40964",6734958e.2533,26,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:26 UTC,5/0,0,DEBUG,00000,"contents of directory ""pg_serial"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.310 UTC,"replicator","",9526,"127.0.0.1:40974",6734958e.2536,6,"idle",2024-11-13 12:03:26 UTC,6/0,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"pg_basebackup","walsender",,0
2219s 2024-11-13 12:03:26.330 UTC,,,9504,,6734958b.2520,2,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"archived write-ahead log file ""000000010000000000000002""",,,,,,,,,"","archiver",,0
2219s 2024-11-13 12:03:26.358 UTC,,,9504,,6734958b.2520,3,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"archived write-ahead log file ""000000010000000000000002.00000028.backup""",,,,,,,,,"","archiver",,0
2219s 2024-11-13 12:03:27.058 UTC,"replicator","",9554,"127.0.0.1:40976",6734958f.2552,1,"idle",2024-11-13 12:03:27 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"postgres1","walsender",,0
2219s 2024-11-13 12:03:27.058 UTC,"replicator","",9554,"127.0.0.1:40976",6734958f.2552,2,"idle",2024-11-13 12:03:27 UTC,5/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""postgres1"" 0/3000000 TIMELINE 1",,,,,,,,,"postgres1","walsender",,0
2219s 2024-11-13 12:03:27.058 UTC,"replicator","",9554,"127.0.0.1:40976",6734958f.2552,3,"START_REPLICATION",2024-11-13 12:03:27 UTC,5/0,0,ERROR,42704,"replication slot ""postgres1"" does not exist",,,,,,"START_REPLICATION SLOT ""postgres1"" 0/3000000 TIMELINE 1",,,"postgres1","walsender",,0
2219s 2024-11-13 12:03:27.267 UTC,"replicator","",9560,"127.0.0.1:40992",6734958f.2558,1,"idle",2024-11-13 12:03:27 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"postgres1","walsender",,0
2219s 2024-11-13 12:03:27.268 UTC,"replicator","",9560,"127.0.0.1:40992",6734958f.2558,2,"idle",2024-11-13 12:03:27 UTC,5/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""postgres1"" 0/3000000 TIMELINE 1",,,,,,,,,"postgres1","walsender",,0
2219s 2024-11-13 12:03:27.268 UTC,"replicator","",9560,"127.0.0.1:40992",6734958f.2558,3,"START_REPLICATION",2024-11-13 12:03:27 UTC,5/0,0,ERROR,42704,"replication slot ""postgres1"" does not exist",,,,,,"START_REPLICATION SLOT ""postgres1"" 0/3000000 TIMELINE 1",,,"postgres1","walsender",,0
2219s 2024-11-13 12:03:27.912 UTC,"postgres","postgres",9513,"127.0.0.1:40952",6734958c.2529,3,"idle",2024-11-13 12:03:24 UTC,4/6,0,LOG,00000,"statement: CREATE TABLE public.test_1731499407_9117775()",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:27.927 UTC,"postgres","postgres",9513,"127.0.0.1:40952",6734958c.2529,4,"idle",2024-11-13 12:03:24 UTC,4/7,0,LOG,00000,"statement: SHOW server_version_num",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:27.928 UTC,"postgres","postgres",9513,"127.0.0.1:40952",6734958c.2529,5,"idle",2024-11-13 12:03:24 UTC,4/8,0,LOG,00000,"statement: SELECT pg_switch_wal()",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:27.935 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,15,"idle",2024-11-13 12:03:23 UTC,3/38,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:27.941 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,16,"idle",2024-11-13 12:03:23 UTC,3/39,0,LOG,00000,"statement: SELECT pg_catalog.pg_create_physical_replication_slot('postgres1', true) WHERE NOT EXISTS (SELECT 1 FROM pg_catalog.pg_replication_slots WHERE slot_type = 'physical' AND slot_name = 'postgres1')",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13
12:03:27.969 UTC,,,9504,,6734958b.2520,4,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"archived write-ahead log file ""000000010000000000000003""",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:03:29.938 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,17,"idle",2024-11-13 12:03:23 UTC,3/40,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:29.943 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,18,"idle",2024-11-13 12:03:23 UTC,3/41,0,LOG,00000,"statement: SELECT slot_name, slot_type, pg_catalog.pg_wal_lsn_diff(restart_lsn, '0/0')::bigint, plugin, database, datoid, catalog_xmin, pg_catalog.pg_wal_lsn_diff(confirmed_flush_lsn, '0/0')::bigint FROM pg_catalog.pg_replication_slots WHERE NOT temporary",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:31.934 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,19,"idle",2024-11-13 12:03:23 UTC,3/42,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE 
pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:32.448 UTC,"replicator","",9579,"127.0.0.1:39226",67349594.256b,1,"idle",2024-11-13 12:03:32 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:32.448 UTC,"replicator","",9579,"127.0.0.1:39226",67349594.256b,2,"idle",2024-11-13 12:03:32 UTC,5/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""postgres1"" 0/4000000 TIMELINE 1",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:32.448 UTC,"replicator","",9579,"127.0.0.1:39226",67349594.256b,3,"START_REPLICATION",2024-11-13 12:03:32 UTC,5/0,0,DEBUG,00000,"""postgres1"" has now caught up with upstream server",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:32.448 UTC,"replicator","",9579,"127.0.0.1:39226",67349594.256b,4,"START_REPLICATION",2024-11-13 12:03:32 UTC,5/0,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:33.054 UTC,,,9494,,6734958b.2516,7,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"received fast shutdown request",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:33.055 UTC,,,9494,,6734958b.2516,8,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"aborting any active transactions",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:33.055 UTC,,,9502,,6734958b.251e,2,,2024-11-13 12:03:23 UTC,1/0,0,DEBUG,00000,"autovacuum launcher shutting 
down",,,,,,,,,"","autovacuum launcher",,0 2219s 2024-11-13 12:03:33.056 UTC,"postgres","postgres",9513,"127.0.0.1:40952",6734958c.2529,6,"idle",2024-11-13 12:03:24 UTC,4/0,0,FATAL,57P01,"terminating connection due to administrator command",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:33.057 UTC,,,9505,,6734958b.2521,2,,2024-11-13 12:03:23 UTC,2/0,0,DEBUG,00000,"logical replication launcher shutting down",,,,,,,,,"","logical replication launcher",,0 2219s 2024-11-13 12:03:33.058 UTC,"postgres","postgres",9507,"[local]",6734958b.2523,20,"idle",2024-11-13 12:03:23 UTC,3/0,0,FATAL,57P01,"terminating connection due to administrator command",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:33.059 UTC,,,9494,,6734958b.2516,9,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"background worker ""logical replication launcher"" (PID 9505) exited with exit code 1",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:33.061 UTC,,,9496,,6734958b.2518,17,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"shutting down",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.075 UTC,,,9496,,6734958b.2518,18,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"checkpoint starting: shutdown immediate",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.075 UTC,,,9496,,6734958b.2518,19,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"performing replication slot checkpoint",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.080 UTC,,,9496,,6734958b.2518,20,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=1 file=base/5/2703 time=1.816 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.080 UTC,,,9496,,6734958b.2518,21,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=2 file=base/5/1259 time=0.062 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.080 UTC,,,9496,,6734958b.2518,22,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=3 file=base/5/2608_fsm time=0.062 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.080 
UTC,,,9496,,6734958b.2518,23,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=4 file=base/5/2673 time=0.028 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.080 UTC,,,9496,,6734958b.2518,24,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=5 file=base/5/2663 time=0.242 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.081 UTC,,,9496,,6734958b.2518,25,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=6 file=base/5/1247_vm time=0.214 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.081 UTC,,,9496,,6734958b.2518,26,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=7 file=base/5/1247 time=0.054 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.081 UTC,,,9496,,6734958b.2518,27,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=8 file=base/5/1249_vm time=0.212 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.081 UTC,,,9496,,6734958b.2518,28,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=9 file=base/5/2659 time=0.059 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.081 UTC,,,9496,,6734958b.2518,29,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=10 file=base/5/2704 time=0.048 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.081 UTC,,,9496,,6734958b.2518,30,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=11 file=base/5/2608 time=0.022 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.081 UTC,,,9496,,6734958b.2518,31,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=12 file=base/5/2608_vm time=0.043 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.081 UTC,,,9496,,6734958b.2518,32,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=13 file=base/5/3455 time=0.202 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.082 UTC,,,9496,,6734958b.2518,33,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=14 file=base/5/2674 
time=0.203 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.082 UTC,,,9496,,6734958b.2518,34,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=15 file=base/5/16386 time=0.065 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.082 UTC,,,9496,,6734958b.2518,35,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=16 file=base/5/1249 time=0.202 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.082 UTC,,,9496,,6734958b.2518,36,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=17 file=base/5/2658 time=0.041 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.082 UTC,,,9496,,6734958b.2518,37,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=18 file=pg_xact/0000 time=0.236 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.082 UTC,,,9496,,6734958b.2518,38,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=19 file=base/5/1259_vm time=0.297 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.083 UTC,,,9496,,6734958b.2518,39,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"checkpoint sync: number=20 file=base/5/2662 time=0.051 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.107 UTC,,,9496,,6734958b.2518,40,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"checkpoint complete: wrote 16 buffers (12.5%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.005 s, total=0.035 s; sync files=20, longest=0.002 s, average=0.001 s; distance=32768 kB, estimate=32768 kB; lsn=0/4000028, redo lsn=0/4000028",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:33.109 UTC,,,9504,,6734958b.2520,5,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"archiver process shutting down",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:03:33.125 UTC,,,9494,,6734958b.2516,10,,2024-11-13 12:03:23 UTC,,0,LOG,00000,"database system is shut down",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:33.129 UTC,,,9495,,6734958b.2517,1,,2024-11-13 12:03:23 UTC,,0,DEBUG,00000,"logger shutting 
down",,,,,,,,,"","logger",,0 2219s 2024-11-13 12:03:38.519 UTC,,,9620,,6734959a.2594,1,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"ending log output to stderr",,"Future log output will go to log destination ""csvlog"".",,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:38.519 UTC,,,9620,,6734959a.2594,2,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"starting PostgreSQL 16.4 (Ubuntu 16.4-3) on s390x-ibm-linux-gnu, compiled by gcc (Ubuntu 14.2.0-7ubuntu1) 14.2.0, 64-bit",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:38.519 UTC,,,9620,,6734959a.2594,3,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"listening on IPv4 address ""127.0.0.1"", port 5382",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:38.522 UTC,,,9620,,6734959a.2594,4,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"listening on Unix socket ""/tmp/.s.PGSQL.5382""",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:38.536 UTC,,,9624,,6734959a.2598,1,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"database system was shut down at 2024-11-13 12:03:33 UTC",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.552 UTC,"postgres","postgres",9628,"[local]",6734959a.259c,1,"",2024-11-13 12:03:38 UTC,,0,FATAL,57P03,"the database system is starting up",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:38.558 UTC,"postgres","postgres",9630,"[local]",6734959a.259e,1,"",2024-11-13 12:03:38 UTC,,0,FATAL,57P03,"the database system is starting up",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:38.634 UTC,,,9624,,6734959a.2598,2,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"entering standby mode",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.731 UTC,,,9624,,6734959a.2598,3,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint record is at 0/4000028",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.731 UTC,,,9624,,6734959a.2598,4,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"redo record is at 0/4000028; shutdown true",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.731 UTC,,,9624,,6734959a.2598,5,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"next transaction ID: 
739; next OID: 16389",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.731 UTC,,,9624,,6734959a.2598,6,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"next MultiXactId: 1; next MultiXactOffset: 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.731 UTC,,,9624,,6734959a.2598,7,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"oldest unfrozen transaction ID: 723, in database 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.731 UTC,,,9624,,6734959a.2598,8,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"oldest MultiXactId: 1, in database 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.731 UTC,,,9624,,6734959a.2598,9,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"commit timestamp Xid oldest/newest: 0/0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.732 UTC,,,9624,,6734959a.2598,10,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"transaction ID wrap limit is 2147484370, limited by database with OID 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.732 UTC,,,9624,,6734959a.2598,11,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"MultiXactId wrap limit is 2147483648, limited by database with OID 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.732 UTC,,,9624,,6734959a.2598,12,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"starting up replication slots",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.732 UTC,,,9624,,6734959a.2598,13,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"restoring replication slot from ""pg_replslot/postgres1/state""",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.733 UTC,,,9624,,6734959a.2598,14,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.734 UTC,,,9624,,6734959a.2598,15,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"resetting unlogged relations: cleanup 1 init 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.734 UTC,,,9624,,6734959a.2598,16,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"initializing for hot standby",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.734 
UTC,,,9624,,6734959a.2598,17,,2024-11-13 12:03:38 UTC,1/0,0,DEBUG,00000,"recovery snapshots are now enabled",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.734 UTC,,,9624,,6734959a.2598,18,,2024-11-13 12:03:38 UTC,1/0,0,LOG,00000,"consistent recovery state reached at 0/40000A0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.734 UTC,,,9624,,6734959a.2598,19,,2024-11-13 12:03:38 UTC,1/0,0,LOG,00000,"invalid record length at 0/40000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.734 UTC,,,9620,,6734959a.2594,5,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"database system is ready to accept read-only connections",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:38.930 UTC,,,9624,,6734959a.2598,20,,2024-11-13 12:03:38 UTC,1/0,0,DEBUG,00000,"invalid record length at 0/40000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:38.965 UTC,"postgres","postgres",9639,"127.0.0.1:37162",6734959a.25a7,1,"idle",2024-11-13 12:03:38 UTC,2/2,0,LOG,00000,"statement: SELECT 1",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:38.966 UTC,"postgres","postgres",9639,"127.0.0.1:37162",6734959a.25a7,2,"idle",2024-11-13 12:03:38 UTC,2/3,0,LOG,00000,"statement: SET synchronous_commit TO 'local'",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:38.966 UTC,"postgres","postgres",9639,"127.0.0.1:37162",6734959a.25a7,3,"idle",2024-11-13 12:03:38 UTC,2/4,0,LOG,00000,"statement: SELECT pg_is_in_recovery()",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:39.025 UTC,,,9624,,6734959a.2598,21,,2024-11-13 12:03:38 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/40000B8",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:39.574 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,1,"idle",2024-11-13 12:03:39 UTC,3/3,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE 
WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:39.580 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,2,"idle",2024-11-13 12:03:39 UTC,3/4,0,LOG,00000,"statement: SELECT slot_name, slot_type, pg_catalog.pg_wal_lsn_diff(restart_lsn, '0/0')::bigint, plugin, database, datoid, catalog_xmin, pg_catalog.pg_wal_lsn_diff(confirmed_flush_lsn, '0/0')::bigint FROM pg_catalog.pg_replication_slots WHERE NOT temporary",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:39.581 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,3,"idle",2024-11-13 12:03:39 UTC,3/5,0,LOG,00000,"statement: WITH slots AS (SELECT slot_name, active FROM pg_catalog.pg_replication_slots WHERE slot_name = 'postgres1'), dropped AS (SELECT pg_catalog.pg_drop_replication_slot(slot_name), true AS dropped FROM slots WHERE not active) SELECT active, COALESCE(dropped, false) FROM slots FULL OUTER JOIN dropped ON true",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:39.582 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,4,"SELECT",2024-11-13 12:03:39 UTC,3/5,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:39.584 UTC,"replicator","",9645,"[local]",6734959b.25ad,1,"idle",2024-11-13 12:03:39 UTC,4/0,0,DEBUG,00000,"received replication command: 
IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:39.681 UTC,,,9624,,6734959a.2598,22,,2024-11-13 12:03:38 UTC,1/0,0,DEBUG,00000,"invalid record length at 0/40000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:39.681 UTC,,,9624,,6734959a.2598,23,,2024-11-13 12:03:38 UTC,1/0,0,LOG,00000,"received promote request",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:39.681 UTC,,,9624,,6734959a.2598,24,,2024-11-13 12:03:38 UTC,1/0,0,LOG,00000,"redo is not required",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:39.780 UTC,,,9624,,6734959a.2598,25,,2024-11-13 12:03:38 UTC,1/0,0,DEBUG,00000,"resetting unlogged relations: cleanup 0 init 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:39.883 UTC,,,9624,,6734959a.2598,26,,2024-11-13 12:03:38 UTC,1/0,0,LOG,00000,"selected new timeline ID: 2",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:39.896 UTC,,,9624,,6734959a.2598,27,,2024-11-13 12:03:38 UTC,1/0,0,DEBUG,58P01,"could not remove file ""pg_wal/000000020000000000000004"": No such file or directory",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:39.969 UTC,"postgres","postgres",9639,"127.0.0.1:37162",6734959a.25a7,4,"idle",2024-11-13 12:03:38 UTC,2/5,0,LOG,00000,"statement: SELECT pg_is_in_recovery()",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:40.002 UTC,,,9624,,6734959a.2598,28,,2024-11-13 12:03:38 UTC,1/0,0,LOG,00000,"archive recovery complete",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:40.002 UTC,,,9624,,6734959a.2598,29,,2024-11-13 12:03:38 UTC,1/0,0,DEBUG,00000,"MultiXactId wrap limit is 2147483648, limited by database with OID 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:40.002 UTC,,,9624,,6734959a.2598,30,,2024-11-13 12:03:38 UTC,1/0,0,DEBUG,00000,"MultiXact member stop limit is now 4294914944 based on MultiXact 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:40.006 UTC,,,9622,,6734959a.2596,1,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"checkpoint starting: force",,,,,,,,,"","checkpointer",,0 2219s 
2024-11-13 12:03:40.006 UTC,,,9622,,6734959a.2596,2,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"performing replication slot checkpoint",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:40.007 UTC,,,9620,,6734959a.2594,6,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"starting background worker process ""logical replication launcher""",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:40.007 UTC,,,9658,,6734959c.25ba,1,,2024-11-13 12:03:40 UTC,,0,DEBUG,00000,"autovacuum launcher started",,,,,,,,,"","autovacuum launcher",,0 2219s 2024-11-13 12:03:40.008 UTC,,,9620,,6734959a.2594,7,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"database system is ready to accept connections",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:40.008 UTC,,,9660,,6734959c.25bc,1,,2024-11-13 12:03:40 UTC,,0,DEBUG,00000,"logical replication launcher started",,,,,,,,,"","logical replication launcher",,0 2219s 2024-11-13 12:03:40.010 UTC,,,9622,,6734959a.2596,3,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=1 file=pg_multixact/offsets/0000 time=0.180 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:40.010 UTC,,,9622,,6734959a.2596,4,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=2 file=pg_xact/0000 time=0.164 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:40.011 UTC,,,9622,,6734959a.2596,5,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"checkpoint complete: wrote 3 buffers (2.3%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.006 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB; lsn=0/4000108, redo lsn=0/40000D0",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:40.035 UTC,,,9659,,6734959c.25bb,1,,2024-11-13 12:03:40 UTC,,0,DEBUG,00000,"archived write-ahead log file ""00000002.history""",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:03:40.068 UTC,,,9659,,6734959c.25bb,2,,2024-11-13 12:03:40 UTC,,0,DEBUG,00000,"archived write-ahead log file 
""000000010000000000000004.partial""",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:03:40.274 UTC,"replicator","",9673,"127.0.0.1:37164",6734959c.25c9,1,"idle",2024-11-13 12:03:40 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:40.274 UTC,"replicator","",9673,"127.0.0.1:37164",6734959c.25c9,2,"idle",2024-11-13 12:03:40 UTC,5/0,0,DEBUG,00000,"received replication command: TIMELINE_HISTORY 2",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:40.276 UTC,"replicator","",9673,"127.0.0.1:37164",6734959c.25c9,3,"idle",2024-11-13 12:03:40 UTC,5/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""postgres1"" 0/4000000 TIMELINE 1",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:40.276 UTC,"replicator","",9673,"127.0.0.1:37164",6734959c.25c9,4,"START_REPLICATION",2024-11-13 12:03:40 UTC,5/0,0,ERROR,42704,"replication slot ""postgres1"" does not exist",,,,,,"START_REPLICATION SLOT ""postgres1"" 0/4000000 TIMELINE 1",,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:40.609 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,5,"idle",2024-11-13 12:03:39 UTC,3/6,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni 
heartbeat","client backend",,0 2219s 2024-11-13 12:03:40.616 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,6,"idle",2024-11-13 12:03:39 UTC,3/7,0,LOG,00000,"statement: SELECT pg_catalog.pg_create_physical_replication_slot('postgres1', true) WHERE NOT EXISTS (SELECT 1 FROM pg_catalog.pg_replication_slots WHERE slot_type = 'physical' AND slot_name = 'postgres1')",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:40.702 UTC,"replicator","",9687,"127.0.0.1:37176",6734959c.25d7,1,"idle",2024-11-13 12:03:40 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:40.703 UTC,"replicator","",9687,"127.0.0.1:37176",6734959c.25d7,2,"idle",2024-11-13 12:03:40 UTC,5/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""postgres1"" 0/4000000 TIMELINE 2",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:40.703 UTC,"replicator","",9687,"127.0.0.1:37176",6734959c.25d7,3,"streaming 0/4000180",2024-11-13 12:03:40 UTC,5/0,0,DEBUG,00000,"""postgres1"" has now caught up with upstream server",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:40.705 UTC,"replicator","",9687,"127.0.0.1:37176",6734959c.25d7,4,"streaming 0/4000180",2024-11-13 12:03:40 UTC,5/0,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:40.969 UTC,"postgres","postgres",9639,"127.0.0.1:37162",6734959a.25a7,5,"idle",2024-11-13 12:03:38 UTC,2/6,0,LOG,00000,"statement: SELECT pg_is_in_recovery()",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:42.176 UTC,"rewind_user","postgres",9690,"127.0.0.1:37192",6734959e.25da,1,"idle",2024-11-13 12:03:42 UTC,6/2,0,LOG,00000,"statement: SELECT pg_catalog.pg_is_in_recovery()",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:42.196 UTC,"replicator","",9694,"127.0.0.1:37208",6734959e.25de,1,"idle",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"received replication command: 
IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:42.345 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,1,"idle",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"received replication command: SHOW data_directory_mode",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.345 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,2,"idle",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"received replication command: SHOW wal_segment_size",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.345 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,3,"idle",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.346 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,4,"idle",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"received replication command: BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, CHECKPOINT 'fast', WAIT 0, MANIFEST 'yes', TARGET 'client')",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.360 UTC,,,9622,,6734959a.2596,6,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"checkpoint starting: immediate force wait",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:42.360 UTC,,,9622,,6734959a.2596,7,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"performing replication slot checkpoint",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:42.401 UTC,,,9659,,6734959c.25bb,3,,2024-11-13 12:03:40 UTC,,0,DEBUG,00000,"archived write-ahead log file ""000000020000000000000004""",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:03:42.406 UTC,,,9622,,6734959a.2596,8,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.046 s; sync files=0, longest=0.000 s, average=0.000 s; distance=16383 kB, estimate=16383 kB; lsn=0/5000060, redo lsn=0/5000028",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:42.406 
UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,5,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"file ""postmaster.pid"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.406 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,6,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of directory ""pg_dynshmem"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.406 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,7,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"file ""pg_internal.init"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.406 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,8,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of directory ""pg_replslot"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.408 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,9,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"file ""pg_internal.init"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.409 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,10,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of directory ""pg_snapshots"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.409 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,11,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of directory ""pg_stat_tmp"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.409 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,12,"sending backup 
""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of directory ""pg_subtrans"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.409 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,13,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"file ""postmaster.opts"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.409 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,14,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of directory ""pg_notify"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.409 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,15,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of directory ""pg_serial"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.409 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,16,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"file ""postmaster.pid"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.411 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,17,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of directory ""pg_dynshmem"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.411 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,18,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"file ""pg_internal.init"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.412 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,19,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of 
directory ""pg_replslot"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.424 UTC,"replicator","",9703,"127.0.0.1:37216",6734959e.25e7,1,"idle",2024-11-13 12:03:42 UTC,7/0,0,DEBUG,00000,"received replication command: SHOW data_directory_mode",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.424 UTC,"replicator","",9703,"127.0.0.1:37216",6734959e.25e7,2,"idle",2024-11-13 12:03:42 UTC,7/0,0,DEBUG,00000,"received replication command: CREATE_REPLICATION_SLOT ""pg_basebackup_9703"" TEMPORARY PHYSICAL ( RESERVE_WAL)",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.435 UTC,"replicator","",9703,"127.0.0.1:37216",6734959e.25e7,3,"idle",2024-11-13 12:03:42 UTC,7/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.436 UTC,"replicator","",9703,"127.0.0.1:37216",6734959e.25e7,4,"idle",2024-11-13 12:03:42 UTC,7/0,0,DEBUG,00000,"received replication command: TIMELINE_HISTORY 2",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.445 UTC,"replicator","",9703,"127.0.0.1:37216",6734959e.25e7,5,"idle",2024-11-13 12:03:42 UTC,7/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""pg_basebackup_9703"" 0/5000000 TIMELINE 2",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.445 UTC,"replicator","",9703,"127.0.0.1:37216",6734959e.25e7,6,"streaming 0/50000D8",2024-11-13 12:03:42 UTC,7/0,0,DEBUG,00000,"""pg_basebackup"" has now caught up with upstream server",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.453 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,20,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"file ""pg_internal.init"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.468 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,21,"sending backup ""pg_basebackup base 
backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of directory ""pg_snapshots"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.468 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,22,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of directory ""pg_stat_tmp"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.468 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,23,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of directory ""pg_subtrans"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.468 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,24,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"file ""postmaster.opts"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.468 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,25,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of directory ""pg_notify"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.468 UTC,"replicator","",9700,"127.0.0.1:37210",6734959e.25e4,26,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:42 UTC,6/0,0,DEBUG,00000,"contents of directory ""pg_serial"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.507 UTC,"replicator","",9703,"127.0.0.1:37216",6734959e.25e7,7,"idle",2024-11-13 12:03:42 UTC,7/0,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:42.528 UTC,,,9659,,6734959c.25bb,4,,2024-11-13 12:03:40 UTC,,0,DEBUG,00000,"archived write-ahead log file ""000000020000000000000005""",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:03:42.559 
UTC,,,9659,,6734959c.25bb,5,,2024-11-13 12:03:40 UTC,,0,DEBUG,00000,"archived write-ahead log file ""000000020000000000000005.00000028.backup""",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:03:42.605 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,7,"idle",2024-11-13 12:03:39 UTC,3/8,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:42.609 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,8,"idle",2024-11-13 12:03:39 UTC,3/9,0,LOG,00000,"statement: SELECT slot_name, slot_type, pg_catalog.pg_wal_lsn_diff(restart_lsn, '0/0')::bigint, plugin, database, datoid, catalog_xmin, pg_catalog.pg_wal_lsn_diff(confirmed_flush_lsn, '0/0')::bigint FROM pg_catalog.pg_replication_slots WHERE NOT temporary",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:42.609 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,9,"idle",2024-11-13 12:03:39 UTC,3/10,0,LOG,00000,"statement: SELECT pg_catalog.pg_create_physical_replication_slot('postgres2', true) WHERE NOT EXISTS (SELECT 1 FROM pg_catalog.pg_replication_slots WHERE slot_type = 'physical' AND slot_name = 'postgres2')",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:43.281 
UTC,"replicator","",9733,"127.0.0.1:37230",6734959f.2605,1,"idle",2024-11-13 12:03:43 UTC,6/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"postgres2","walsender",,0 2219s 2024-11-13 12:03:43.281 UTC,"replicator","",9733,"127.0.0.1:37230",6734959f.2605,2,"idle",2024-11-13 12:03:43 UTC,6/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""postgres2"" 0/6000000 TIMELINE 2",,,,,,,,,"postgres2","walsender",,0 2219s 2024-11-13 12:03:43.281 UTC,"replicator","",9733,"127.0.0.1:37230",6734959f.2605,3,"START_REPLICATION",2024-11-13 12:03:43 UTC,6/0,0,DEBUG,00000,"""postgres2"" has now caught up with upstream server",,,,,,,,,"postgres2","walsender",,0 2219s 2024-11-13 12:03:43.281 UTC,"replicator","",9733,"127.0.0.1:37230",6734959f.2605,4,"START_REPLICATION",2024-11-13 12:03:43 UTC,6/0,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"postgres2","walsender",,0 2219s 2024-11-13 12:03:44.603 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,10,"idle",2024-11-13 12:03:39 UTC,3/11,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:44.603 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,11,"idle",2024-11-13 
12:03:39 UTC,3/12,0,LOG,00000,"statement: SELECT slot_name, slot_type, pg_catalog.pg_wal_lsn_diff(restart_lsn, '0/0')::bigint, plugin, database, datoid, catalog_xmin, pg_catalog.pg_wal_lsn_diff(confirmed_flush_lsn, '0/0')::bigint FROM pg_catalog.pg_replication_slots WHERE NOT temporary",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:45.325 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,1,"idle",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"received replication command: SHOW data_directory_mode",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.326 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,2,"idle",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"received replication command: SHOW wal_segment_size",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.326 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,3,"idle",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.326 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,4,"idle",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"received replication command: BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, CHECKPOINT 'fast', WAIT 0, MANIFEST 'yes', TARGET 'client')",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.326 UTC,,,9622,,6734959a.2596,9,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"checkpoint starting: immediate force wait",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:45.326 UTC,,,9622,,6734959a.2596,10,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"performing replication slot checkpoint",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:45.365 UTC,,,9622,,6734959a.2596,11,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.040 s; sync files=0, longest=0.000 s, average=0.000 s; distance=16384 kB, 
estimate=16384 kB; lsn=0/6000060, redo lsn=0/6000028",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:45.365 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,5,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"file ""postmaster.pid"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.365 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,6,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_dynshmem"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.365 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,7,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"file ""pg_internal.init"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.366 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,8,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_replslot"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.367 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,9,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"file ""pg_internal.init"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.368 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,10,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_snapshots"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.368 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,11,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_stat_tmp"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 
2024-11-13 12:03:45.368 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,12,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_subtrans"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.368 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,13,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"file ""postmaster.opts"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.368 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,14,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_notify"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.368 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,15,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_serial"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.368 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,16,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"file ""postmaster.pid"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.368 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,17,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_dynshmem"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.369 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,18,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"file ""pg_internal.init"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.370 
UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,19,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_replslot"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.386 UTC,"replicator","",9751,"127.0.0.1:37248",673495a1.2617,1,"idle",2024-11-13 12:03:45 UTC,8/0,0,DEBUG,00000,"received replication command: SHOW data_directory_mode",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.386 UTC,"replicator","",9751,"127.0.0.1:37248",673495a1.2617,2,"idle",2024-11-13 12:03:45 UTC,8/0,0,DEBUG,00000,"received replication command: CREATE_REPLICATION_SLOT ""pg_basebackup_9751"" TEMPORARY PHYSICAL ( RESERVE_WAL)",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.391 UTC,"replicator","",9751,"127.0.0.1:37248",673495a1.2617,3,"idle",2024-11-13 12:03:45 UTC,8/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.396 UTC,"replicator","",9751,"127.0.0.1:37248",673495a1.2617,4,"idle",2024-11-13 12:03:45 UTC,8/0,0,DEBUG,00000,"received replication command: TIMELINE_HISTORY 2",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.396 UTC,"replicator","",9751,"127.0.0.1:37248",673495a1.2617,5,"idle",2024-11-13 12:03:45 UTC,8/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""pg_basebackup_9751"" 0/6000000 TIMELINE 2",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.402 UTC,"replicator","",9751,"127.0.0.1:37248",673495a1.2617,6,"streaming 0/60000D8",2024-11-13 12:03:45 UTC,8/0,0,DEBUG,00000,"""pg_basebackup"" has now caught up with upstream server",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.422 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,20,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"file ""pg_internal.init"" excluded from 
backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.434 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,21,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_snapshots"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s + for file in features/output/*_failed/* 2219s + case $file in 2219s + echo features/output/priority_replication_failed/postgres0.log: 2219s + cat features/output/priority_replication_failed/postgres0.log 2219s 2024-11-13 12:03:45.434 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,22,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_stat_tmp"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.434 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,23,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_subtrans"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.434 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,24,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"file ""postmaster.opts"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.434 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,25,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_notify"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.434 UTC,"replicator","",9750,"127.0.0.1:37234",673495a1.2616,26,"sending backup ""pg_basebackup base backup""",2024-11-13 12:03:45 UTC,7/0,0,DEBUG,00000,"contents of directory ""pg_serial"" excluded from backup",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.461 
UTC,"replicator","",9751,"127.0.0.1:37248",673495a1.2617,7,"idle",2024-11-13 12:03:45 UTC,8/0,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"pg_basebackup","walsender",,0 2219s 2024-11-13 12:03:45.513 UTC,,,9659,,6734959c.25bb,6,,2024-11-13 12:03:40 UTC,,0,DEBUG,00000,"archived write-ahead log file ""000000020000000000000006""",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:03:45.541 UTC,,,9659,,6734959c.25bb,7,,2024-11-13 12:03:40 UTC,,0,DEBUG,00000,"archived write-ahead log file ""000000020000000000000006.00000028.backup""",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:03:46.312 UTC,"replicator","",9781,"127.0.0.1:37256",673495a2.2635,1,"idle",2024-11-13 12:03:46 UTC,7/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"postgres3","walsender",,0 2219s 2024-11-13 12:03:46.312 UTC,"replicator","",9781,"127.0.0.1:37256",673495a2.2635,2,"idle",2024-11-13 12:03:46 UTC,7/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""postgres3"" 0/7000000 TIMELINE 2",,,,,,,,,"postgres3","walsender",,0 2219s 2024-11-13 12:03:46.312 UTC,"replicator","",9781,"127.0.0.1:37256",673495a2.2635,3,"START_REPLICATION",2024-11-13 12:03:46 UTC,7/0,0,ERROR,42704,"replication slot ""postgres3"" does not exist",,,,,,"START_REPLICATION SLOT ""postgres3"" 0/7000000 TIMELINE 2",,,"postgres3","walsender",,0 2219s 2024-11-13 12:03:46.536 UTC,"replicator","",9787,"127.0.0.1:37260",673495a2.263b,1,"idle",2024-11-13 12:03:46 UTC,7/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"postgres3","walsender",,0 2219s 2024-11-13 12:03:46.536 UTC,"replicator","",9787,"127.0.0.1:37260",673495a2.263b,2,"idle",2024-11-13 12:03:46 UTC,7/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""postgres3"" 0/7000000 TIMELINE 2",,,,,,,,,"postgres3","walsender",,0 2219s 2024-11-13 12:03:46.536 UTC,"replicator","",9787,"127.0.0.1:37260",673495a2.263b,3,"START_REPLICATION",2024-11-13 12:03:46 UTC,7/0,0,ERROR,42704,"replication 
slot ""postgres3"" does not exist",,,,,,"START_REPLICATION SLOT ""postgres3"" 0/7000000 TIMELINE 2",,,"postgres3","walsender",,0 2219s 2024-11-13 12:03:46.616 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,12,"idle",2024-11-13 12:03:39 UTC,3/13,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:46.618 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,13,"idle",2024-11-13 12:03:39 UTC,3/14,0,LOG,00000,"statement: SELECT pg_catalog.pg_create_physical_replication_slot('postgres3', true) WHERE NOT EXISTS (SELECT 1 FROM pg_catalog.pg_replication_slots WHERE slot_type = 'physical' AND slot_name = 'postgres3')",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:47.061 UTC,"postgres","postgres",9639,"127.0.0.1:37162",6734959a.25a7,6,"idle",2024-11-13 12:03:38 UTC,2/7,0,LOG,00000,"statement: CREATE TABLE public.test_1731499427_0613313()",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:47.079 UTC,"postgres","postgres",9639,"127.0.0.1:37162",6734959a.25a7,7,"idle",2024-11-13 12:03:38 UTC,2/8,0,LOG,00000,"statement: SHOW server_version_num",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:47.079 
UTC,"postgres","postgres",9639,"127.0.0.1:37162",6734959a.25a7,8,"idle",2024-11-13 12:03:38 UTC,2/9,0,LOG,00000,"statement: SELECT pg_switch_wal()",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:47.138 UTC,,,9659,,6734959c.25bb,8,,2024-11-13 12:03:40 UTC,,0,DEBUG,00000,"archived write-ahead log file ""000000020000000000000007""",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:03:48.100 UTC,"postgres","postgres",9639,"127.0.0.1:37162",6734959a.25a7,9,"idle",2024-11-13 12:03:38 UTC,2/10,0,LOG,00000,"statement: CREATE TABLE public.test_1731499428_10031()",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:48.124 UTC,"postgres","postgres",9639,"127.0.0.1:37162",6734959a.25a7,10,"idle",2024-11-13 12:03:38 UTC,2/11,0,LOG,00000,"statement: SHOW server_version_num",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:48.124 UTC,"postgres","postgres",9639,"127.0.0.1:37162",6734959a.25a7,11,"idle",2024-11-13 12:03:38 UTC,2/12,0,LOG,00000,"statement: SELECT pg_switch_wal()",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:48.183 UTC,,,9659,,6734959c.25bb,9,,2024-11-13 12:03:40 UTC,,0,DEBUG,00000,"archived write-ahead log file ""000000020000000000000008""",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:03:48.607 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,14,"idle",2024-11-13 12:03:39 UTC,3/15,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, 
conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:48.609 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,15,"idle",2024-11-13 12:03:39 UTC,3/16,0,LOG,00000,"statement: SELECT slot_name, slot_type, pg_catalog.pg_wal_lsn_diff(restart_lsn, '0/0')::bigint, plugin, database, datoid, catalog_xmin, pg_catalog.pg_wal_lsn_diff(confirmed_flush_lsn, '0/0')::bigint FROM pg_catalog.pg_replication_slots WHERE NOT temporary",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s + for file in features/output/*_failed/* 2219s + case $file in 2219s + echo features/output/priority_replication_failed/postgres0.yml: 2219s + cat features/output/priority_replication_failed/postgres0.yml 2219s + for file in features/output/*_failed/* 2219s + case $file in 2219s + echo features/output/priority_replication_failed/postgres1.csv: 2219s + cat features/output/priority_replication_failed/postgres1.csv 2219s 2024-11-13 12:03:50.603 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,16,"idle",2024-11-13 12:03:39 UTC,3/17,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client 
backend",,0 2219s 2024-11-13 12:03:51.748 UTC,"replicator","",9810,"127.0.0.1:37734",673495a7.2652,1,"idle",2024-11-13 12:03:51 UTC,7/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"postgres3","walsender",,0 2219s 2024-11-13 12:03:51.748 UTC,"replicator","",9810,"127.0.0.1:37734",673495a7.2652,2,"idle",2024-11-13 12:03:51 UTC,7/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""postgres3"" 0/9000000 TIMELINE 2",,,,,,,,,"postgres3","walsender",,0 2219s 2024-11-13 12:03:51.748 UTC,"replicator","",9810,"127.0.0.1:37734",673495a7.2652,3,"START_REPLICATION",2024-11-13 12:03:51 UTC,7/0,0,DEBUG,00000,"""postgres3"" has now caught up with upstream server",,,,,,,,,"postgres3","walsender",,0 2219s 2024-11-13 12:03:51.748 UTC,"replicator","",9810,"127.0.0.1:37734",673495a7.2652,4,"START_REPLICATION",2024-11-13 12:03:51 UTC,7/0,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"postgres3","walsender",,0 2219s 2024-11-13 12:03:52.223 UTC,,,9620,,6734959a.2594,8,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"received fast shutdown request",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:52.228 UTC,,,9620,,6734959a.2594,9,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"aborting any active transactions",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:52.228 UTC,"postgres","postgres",9642,"[local]",6734959b.25aa,17,"idle",2024-11-13 12:03:39 UTC,3/0,0,FATAL,57P01,"terminating connection due to administrator command",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:52.231 UTC,,,9660,,6734959c.25bc,2,,2024-11-13 12:03:40 UTC,4/0,0,DEBUG,00000,"logical replication launcher shutting down",,,,,,,,,"","logical replication launcher",,0 2219s 2024-11-13 12:03:52.231 UTC,"postgres","postgres",9639,"127.0.0.1:37162",6734959a.25a7,12,"idle",2024-11-13 12:03:38 UTC,2/0,0,FATAL,57P01,"terminating connection due to administrator command",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:52.234 
UTC,,,9620,,6734959a.2594,10,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"background worker ""logical replication launcher"" (PID 9660) exited with exit code 1",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:52.234 UTC,,,9658,,6734959c.25ba,2,,2024-11-13 12:03:40 UTC,1/0,0,DEBUG,00000,"autovacuum launcher shutting down",,,,,,,,,"","autovacuum launcher",,0 2219s 2024-11-13 12:03:52.237 UTC,,,9622,,6734959a.2596,12,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"shutting down",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.248 UTC,,,9622,,6734959a.2596,13,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"checkpoint starting: shutdown immediate",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.248 UTC,,,9622,,6734959a.2596,14,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"performing replication slot checkpoint",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,15,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=1 file=base/5/2703 time=0.665 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,16,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=2 file=base/5/1259 time=0.044 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,17,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=3 file=base/5/2673 time=0.024 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,18,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=4 file=base/5/1249_fsm time=0.026 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,19,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=5 file=base/5/2663 time=0.030 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,20,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=6 file=base/5/1247 time=0.022 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 
UTC,,,9622,,6734959a.2596,21,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=7 file=base/5/1249_vm time=0.024 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,22,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=8 file=base/5/2659 time=0.021 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,23,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=9 file=base/5/2704 time=0.032 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,24,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=10 file=base/5/2608 time=0.032 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,25,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=11 file=base/5/16392 time=0.032 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,26,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=12 file=base/5/2608_vm time=0.018 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,27,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=13 file=base/5/3455 time=0.022 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,28,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=14 file=base/5/2674 time=0.019 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,29,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=15 file=base/5/1249 time=0.022 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.257 UTC,,,9622,,6734959a.2596,30,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=16 file=base/5/16389 time=0.031 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.258 UTC,,,9622,,6734959a.2596,31,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=17 file=base/5/2658 
time=0.020 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.258 UTC,,,9622,,6734959a.2596,32,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=18 file=pg_xact/0000 time=0.208 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.258 UTC,,,9622,,6734959a.2596,33,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"checkpoint sync: number=19 file=base/5/2662 time=0.025 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.281 UTC,,,9622,,6734959a.2596,34,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"checkpoint complete: wrote 21 buffers (16.4%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.002 s, total=0.034 s; sync files=19, longest=0.001 s, average=0.001 s; distance=49152 kB, estimate=49152 kB; lsn=0/9000028, redo lsn=0/9000028",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:52.283 UTC,,,9659,,6734959c.25bb,10,,2024-11-13 12:03:40 UTC,,0,DEBUG,00000,"archiver process shutting down",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:03:52.313 UTC,,,9620,,6734959a.2594,11,,2024-11-13 12:03:38 UTC,,0,LOG,00000,"database system is shut down",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:52.315 UTC,,,9621,,6734959a.2595,1,,2024-11-13 12:03:38 UTC,,0,DEBUG,00000,"logger shutting down",,,,,,,,,"","logger",,0 2219s features/output/priority_replication_failed/postgres0.log: 2219s 2024-11-13 12:03:23.886 UTC [9494] LOG: ending log output to stderr 2219s 2024-11-13 12:03:23.886 UTC [9494] HINT: Future log output will go to log destination "csvlog". 2219s 2024-11-13 12:03:33.129 UTC [9495] DEBUG: logger shutting down 2219s 2024-11-13 12:03:38.519 UTC [9620] LOG: ending log output to stderr 2219s 2024-11-13 12:03:38.519 UTC [9620] HINT: Future log output will go to log destination "csvlog". 
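The tracebacks that follow come from features/archive-restore.py, the helper that the postgres0.yml dump below wires in as both `archive_command` and `restore_command`. PostgreSQL routinely probes a `restore_command` for files that may not exist yet (timeline history files such as `00000002.history`, or segments not yet archived), and treats any nonzero exit status as "not available" - so these FileNotFoundError tracebacks are expected probe failures, not test bugs. A minimal sketch of such a helper, assuming hypothetical argument handling (the real test script's internals, including its line 21, may differ):

```python
"""Sketch of a WAL archive/restore helper in the spirit of
features/archive-restore.py; the flags mirror the --mode/--dirname/
--filename/--pathname arguments visible in the config dump below."""
import argparse
import os
import shutil


def main() -> int:
    parser = argparse.ArgumentParser()
    parser.add_argument('--mode', required=True, choices=('archive', 'restore'))
    parser.add_argument('--dirname', required=True)   # WAL archive directory
    parser.add_argument('--filename', required=True)  # %f: bare WAL file name
    parser.add_argument('--pathname', required=True)  # %p: path given by PostgreSQL
    args = parser.parse_args()

    full_filename = os.path.join(args.dirname, args.filename)
    if args.mode == 'archive':
        # archive_command: copy the finished WAL segment into the archive.
        os.makedirs(args.dirname, exist_ok=True)
        shutil.copy(args.pathname, full_filename)
    else:
        # restore_command: copy the requested file out of the archive.
        # PostgreSQL probes for files that may not exist, so a missing
        # file must produce a nonzero exit status (here: return 1), not
        # an unhandled traceback.
        if not os.path.exists(full_filename):
            return 1
        shutil.copy(full_filename, args.pathname)
    return 0
```

A real script would finish with `sys.exit(main())`; the exit status is what PostgreSQL inspects to decide whether the requested WAL file was restored.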
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000002.history'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000010000000000000004'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000002.history'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000010000000000000004'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000002.history'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000010000000000000004'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000010000000000000004'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000002.history'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000001.history'
2219s 2024-11-13 12:03:52.315 UTC [9621] DEBUG:  logger shutting down
2219s features/output/priority_replication_failed/postgres0.yml:
2219s bootstrap:
2219s   dcs:
2219s     loop_wait: 2
2219s     maximum_lag_on_failover: 1048576
2219s     postgresql:
2219s       parameters:
2219s         archive_command: /usr/bin/python3 /tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py
2219s           --mode archive --dirname /tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive
2219s           --filename %f --pathname %p
2219s         archive_mode: 'on'
2219s         restore_command: /usr/bin/python3 /tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py
2219s           --mode restore --dirname /tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive
2219s           --filename %f --pathname %p
2219s         wal_keep_segments: 100
2219s       pg_hba:
2219s       - host replication replicator 127.0.0.1/32 md5
2219s       - host all all 0.0.0.0/0 md5
2219s       use_pg_rewind: true
2219s     retry_timeout: 10
2219s     ttl: 30
2219s   initdb:
2219s   - encoding: UTF8
2219s   - data-checksums
2219s   - auth: md5
2219s   - auth-host: md5
2219s   post_bootstrap: psql -w -c "SELECT 1"
2219s log:
2219s   format: '%(asctime)s %(levelname)s [%(pathname)s:%(lineno)d - %(funcName)s]: %(message)s'
2219s   loggers:
2219s     patroni.postgresql.callback_executor: DEBUG
2219s name: postgres0
2219s postgresql:
2219s   authentication:
2219s     replication:
2219s       password: rep-pass
2219s       sslcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       sslkey: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key
2219s       sslmode: verify-ca
2219s       sslrootcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       username: replicator
2219s     rewind:
2219s       password: rewind_password
2219s       sslcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       sslkey: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key
2219s       sslmode: verify-ca
2219s       sslrootcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       username: rewind_user
2219s     superuser:
2219s       password: patroni
2219s       sslcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       sslkey: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key
2219s       sslmode: verify-ca
2219s       sslrootcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       username: postgres
2219s   basebackup:
2219s   - checkpoint: fast
2219s   callbacks:
2219s     on_role_change: /usr/bin/python3 features/callback2.py postgres0 5382
2219s   connect_address: 127.0.0.1:5382
2219s   data_dir: /tmp/autopkgtest.FwqS2V/build.hfu/src/data/postgres0
2219s   listen: 127.0.0.1:5382
2219s   parameters:
2219s     log_destination: csvlog
2219s     log_directory: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/priority_replication
2219s     log_filename: postgres0.log
2219s     log_min_messages: debug1
2219s     log_statement: all
2219s     logging_collector: 'on'
2219s     shared_buffers: 1MB
2219s     ssl: 'on'
2219s     ssl_ca_file: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s     ssl_cert_file: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s     ssl_key_file: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key
2219s     unix_socket_directories: /tmp
2219s   pg_hba:
2219s   - local all all trust
2219s   - local replication all trust
2219s   - hostssl replication replicator all md5 clientcert=verify-ca
2219s   - hostssl all all all md5 clientcert=verify-ca
2219s   pgpass: /tmp/pgpass_postgres0
2219s   use_unix_socket: true
2219s   use_unix_socket_repl: true
2219s restapi:
2219s   connect_address: 127.0.0.1:8008
2219s   listen: 127.0.0.1:8008
2219s scope: batman
2219s tags:
2219s   clonefrom: false
2219s   failover_priority: '1'
2219s   noloadbalance: false
2219s   nostream: false
2219s   nosync: false
2219s features/output/priority_replication_failed/postgres1.csv:
2219s 2024-11-13 12:03:26.654 UTC,,,9535,,6734958e.253f,1,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"ending log output to stderr",,"Future log output will go to log destination ""csvlog"".",,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:26.654 UTC,,,9535,,6734958e.253f,2,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"starting PostgreSQL 16.4 (Ubuntu 16.4-3) on s390x-ibm-linux-gnu, compiled by gcc (Ubuntu 14.2.0-7ubuntu1) 14.2.0, 64-bit",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:26.654 UTC,,,9535,,6734958e.253f,3,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"listening on IPv4 address ""127.0.0.1"", port 5383",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:26.655 UTC,,,9535,,6734958e.253f,4,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"listening on Unix socket ""/tmp/.s.PGSQL.5383""",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:26.662 UTC,,,9539,,6734958e.2543,1,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"database system was interrupted; last known up at 2024-11-13 12:03:26 UTC",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:26.666 UTC,"postgres","postgres",9541,"[local]",6734958e.2545,1,"",2024-11-13
12:03:26 UTC,,0,FATAL,57P03,"the database system is starting up",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:26.671 UTC,"postgres","postgres",9543,"[local]",6734958e.2547,1,"",2024-11-13 12:03:26 UTC,,0,FATAL,57P03,"the database system is starting up",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:26.811 UTC,,,9539,,6734958e.2543,2,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"entering standby mode",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.811 UTC,,,9539,,6734958e.2543,3,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"backup time 2024-11-13 12:03:26 UTC in file ""backup_label""",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.811 UTC,,,9539,,6734958e.2543,4,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"backup label pg_basebackup base backup in file ""backup_label""",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.811 UTC,,,9539,,6734958e.2543,5,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"backup timeline 1 in file ""backup_label""",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.811 UTC,,,9539,,6734958e.2543,6,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"starting backup recovery with redo LSN 0/2000028, checkpoint LSN 0/2000060, on timeline ID 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.842 UTC,,,9539,,6734958e.2543,7,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"restored log file ""000000010000000000000002"" from archive",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.857 UTC,,,9539,,6734958e.2543,8,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"got WAL segment from archive",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.857 UTC,,,9539,,6734958e.2543,9,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint record is at 0/2000060",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.857 UTC,,,9539,,6734958e.2543,10,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"redo record is at 0/2000028; shutdown false",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.857 UTC,,,9539,,6734958e.2543,11,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"next transaction ID: 738; next OID: 
24576",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.857 UTC,,,9539,,6734958e.2543,12,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"next MultiXactId: 1; next MultiXactOffset: 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.857 UTC,,,9539,,6734958e.2543,13,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"oldest unfrozen transaction ID: 723, in database 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.857 UTC,,,9539,,6734958e.2543,14,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"oldest MultiXactId: 1, in database 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.857 UTC,,,9539,,6734958e.2543,15,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"commit timestamp Xid oldest/newest: 0/0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.857 UTC,,,9539,,6734958e.2543,16,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"transaction ID wrap limit is 2147484370, limited by database with OID 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.857 UTC,,,9539,,6734958e.2543,17,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"MultiXactId wrap limit is 2147483648, limited by database with OID 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.857 UTC,,,9539,,6734958e.2543,18,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"starting up replication slots",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.857 UTC,,,9539,,6734958e.2543,19,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.858 UTC,,,9539,,6734958e.2543,20,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"resetting unlogged relations: cleanup 1 init 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.859 UTC,,,9539,,6734958e.2543,21,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"initializing for hot standby",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.859 UTC,,,9539,,6734958e.2543,22,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"redo starts at 0/2000028",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.859 UTC,,,9539,,6734958e.2543,23,,2024-11-13 12:03:26 
UTC,1/0,0,DEBUG,00000,"recovery snapshots are now enabled",,,,,"WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 738 latestCompletedXid 737 oldestRunningXid 738",,,,"","startup",,0 2219s 2024-11-13 12:03:26.896 UTC,"postgres","postgres",9550,"127.0.0.1:54968",6734958e.254e,1,"",2024-11-13 12:03:26 UTC,,0,FATAL,57P03,"the database system is not yet accepting connections","Consistent recovery state has not been yet reached.",,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:26.952 UTC,,,9539,,6734958e.2543,24,,2024-11-13 12:03:26 UTC,1/0,0,DEBUG,00000,"end of backup record reached",,,,,"WAL redo at 0/20000D8 for XLOG/BACKUP_END: 0/2000028",,,,"","startup",,0 2219s 2024-11-13 12:03:26.953 UTC,,,9539,,6734958e.2543,25,,2024-11-13 12:03:26 UTC,1/0,0,DEBUG,00000,"end of backup reached",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.953 UTC,,,9539,,6734958e.2543,26,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"completed backup recovery with redo LSN 0/2000028 and end LSN 0/2000100",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.953 UTC,,,9539,,6734958e.2543,27,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"consistent recovery state reached at 0/2000100",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:26.953 UTC,,,9535,,6734958e.253f,5,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"database system is ready to accept read-only connections",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:27.058 UTC,,,9553,,6734958f.2551,1,,2024-11-13 12:03:27 UTC,,0,FATAL,08P01,"could not start WAL streaming: ERROR: replication slot ""postgres1"" does not exist",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:27.268 UTC,,,9559,,6734958f.2557,1,,2024-11-13 12:03:27 UTC,,0,FATAL,08P01,"could not start WAL streaming: ERROR: replication slot ""postgres1"" does not exist",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:27.362 UTC,,,9539,,6734958e.2543,28,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/3000018",,,,,,,,,"","startup",,0 2219s 2024-11-13 
12:03:27.690 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,1,"idle",2024-11-13 12:03:27 UTC,2/3,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:27.691 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,2,"idle",2024-11-13 12:03:27 UTC,2/4,0,LOG,00000,"statement: SELECT name, setting, unit, vartype, context, sourcefile FROM pg_catalog.pg_settings WHERE pg_catalog.lower(name) = ANY(ARRAY['archive_cleanup_command','primary_conninfo','primary_slot_name','promote_trigger_file','recovery_end_command','recovery_min_apply_delay','recovery_target','recovery_target_lsn','recovery_target_name','recovery_target_time','recovery_target_timeline','recovery_target_xid','restore_command'])",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:27.693 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,3,"idle",2024-11-13 12:03:27 UTC,2/5,0,LOG,00000,"statement: SELECT slot_name, slot_type, pg_catalog.pg_wal_lsn_diff(restart_lsn, '0/0')::bigint, plugin, database, datoid, catalog_xmin, pg_catalog.pg_wal_lsn_diff(confirmed_flush_lsn, '0/0')::bigint FROM pg_catalog.pg_replication_slots WHERE NOT temporary",,,,,,,,,"Patroni heartbeat","client 
backend",,0 2219s 2024-11-13 12:03:27.696 UTC,"replicator","",9568,"[local]",6734958f.2560,1,"idle",2024-11-13 12:03:27 UTC,3/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:27.910 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,1,"idle",2024-11-13 12:03:27 UTC,3/3,0,LOG,00000,"statement: SELECT 1",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:27.911 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,2,"idle",2024-11-13 12:03:27 UTC,3/4,0,LOG,00000,"statement: SET synchronous_commit TO 'local'",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:27.937 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,3,"idle",2024-11-13 12:03:27 UTC,3/5,0,LOG,00000,"statement: SELECT 1 FROM public.test_1731499407_9117775",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:27.937 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,4,"SELECT",2024-11-13 12:03:27 UTC,3/5,0,DEBUG,42P01,"relation ""public.test_1731499407_9117775"" does not exist",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:27.937 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,5,"SELECT",2024-11-13 12:03:27 UTC,3/5,0,ERROR,42P01,"relation ""public.test_1731499407_9117775"" does not exist",,,,,,"SELECT 1 FROM public.test_1731499407_9117775",15,,"","client backend",,0 2219s 2024-11-13 12:03:28.941 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,6,"idle",2024-11-13 12:03:27 UTC,3/6,0,LOG,00000,"statement: SELECT 1 FROM public.test_1731499407_9117775",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:28.941 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,7,"SELECT",2024-11-13 12:03:27 UTC,3/6,0,DEBUG,42P01,"relation ""public.test_1731499407_9117775"" does not exist",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:28.941 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,8,"SELECT",2024-11-13 12:03:27 
UTC,3/6,0,ERROR,42P01,"relation ""public.test_1731499407_9117775"" does not exist",,,,,,"SELECT 1 FROM public.test_1731499407_9117775",15,,"","client backend",,0 2219s 2024-11-13 12:03:29.688 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,4,"idle",2024-11-13 12:03:27 UTC,2/6,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:29.942 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,9,"idle",2024-11-13 12:03:27 UTC,3/7,0,LOG,00000,"statement: SELECT 1 FROM public.test_1731499407_9117775",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:29.942 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,10,"SELECT",2024-11-13 12:03:27 UTC,3/7,0,DEBUG,42P01,"relation ""public.test_1731499407_9117775"" does not exist",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:29.942 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,11,"SELECT",2024-11-13 12:03:27 UTC,3/7,0,ERROR,42P01,"relation ""public.test_1731499407_9117775"" does not exist",,,,,,"SELECT 1 FROM public.test_1731499407_9117775",15,,"","client backend",,0 2219s 2024-11-13 12:03:30.943 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,12,"idle",2024-11-13 
12:03:27 UTC,3/8,0,LOG,00000,"statement: SELECT 1 FROM public.test_1731499407_9117775",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:30.943 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,13,"SELECT",2024-11-13 12:03:27 UTC,3/8,0,DEBUG,42P01,"relation ""public.test_1731499407_9117775"" does not exist",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:30.943 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,14,"SELECT",2024-11-13 12:03:27 UTC,3/8,0,ERROR,42P01,"relation ""public.test_1731499407_9117775"" does not exist",,,,,,"SELECT 1 FROM public.test_1731499407_9117775",15,,"","client backend",,0 2219s 2024-11-13 12:03:31.688 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,5,"idle",2024-11-13 12:03:27 UTC,2/7,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:31.943 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,15,"idle",2024-11-13 12:03:27 UTC,3/9,0,LOG,00000,"statement: SELECT 1 FROM public.test_1731499407_9117775",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:31.943 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,16,"SELECT",2024-11-13 12:03:27 UTC,3/9,0,DEBUG,42P01,"relation 
""public.test_1731499407_9117775"" does not exist",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:31.943 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,17,"SELECT",2024-11-13 12:03:27 UTC,3/9,0,ERROR,42P01,"relation ""public.test_1731499407_9117775"" does not exist",,,,,,"SELECT 1 FROM public.test_1731499407_9117775",15,,"","client backend",,0 2219s 2024-11-13 12:03:32.194 UTC,,,9539,,6734958e.2543,29,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"restored log file ""000000010000000000000003"" from archive",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:32.204 UTC,,,9539,,6734958e.2543,30,,2024-11-13 12:03:26 UTC,1/0,0,DEBUG,00000,"got WAL segment from archive",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:32.448 UTC,,,9578,,67349594.256a,1,,2024-11-13 12:03:32 UTC,,0,LOG,00000,"started streaming WAL from primary at 0/4000000 on timeline 1",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:32.944 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,18,"idle",2024-11-13 12:03:27 UTC,3/10,0,LOG,00000,"statement: SELECT 1 FROM public.test_1731499407_9117775",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:33.122 UTC,,,9539,,6734958e.2543,31,,2024-11-13 12:03:26 UTC,1/0,0,DEBUG,00000,"transaction ID wrap limit is 2147484370, limited by database with OID 1",,,,,"WAL redo at 0/4000028 for XLOG/CHECKPOINT_SHUTDOWN: redo 0/4000028; tli 1; prev tli 1; fpw true; xid 0:739; oid 16389; multi 1; offset 0; oldest xid 723 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown",,,,"","startup",,0 2219s 2024-11-13 12:03:33.122 UTC,,,9578,,67349594.256a,2,,2024-11-13 12:03:32 UTC,,0,LOG,00000,"replication terminated by primary server","End of WAL reached on timeline 1 at 0/40000A0.",,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:33.122 UTC,,,9578,,67349594.256a,3,,2024-11-13 12:03:32 UTC,,0,FATAL,08006,"could not send end-of-streaming message to primary: SSL connection has been closed 
unexpectedly 2219s no COPY in progress",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:33.224 UTC,,,9539,,6734958e.2543,32,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/40000B8",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:33.689 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,6,"idle",2024-11-13 12:03:27 UTC,2/8,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:34.079 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,7,"idle",2024-11-13 12:03:27 UTC,2/9,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, 
conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:36.092 UTC,,,9535,,6734958e.253f,6,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"received SIGHUP, reloading configuration files",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:36.093 UTC,,,9535,,6734958e.253f,7,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"parameter ""primary_conninfo"" removed from configuration file, reset to default",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:36.093 UTC,,,9535,,6734958e.253f,8,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"parameter ""primary_slot_name"" removed from configuration file, reset to default",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:36.102 UTC,"replicator","",9596,"[local]",67349598.257c,1,"idle",2024-11-13 12:03:36 UTC,4/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:36.108 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,8,"idle",2024-11-13 12:03:27 UTC,2/10,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:36.196 
UTC,,,9539,,6734958e.2543,33,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"invalid record length at 0/40000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:36.292 UTC,,,9539,,6734958e.2543,34,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/40000B8",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:36.948 UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,19,"idle",2024-11-13 12:03:27 UTC,3/11,0,LOG,00000,"statement: SELECT pg_is_in_recovery()",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:38.116 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,9,"idle",2024-11-13 12:03:27 UTC,2/11,0,LOG,00000,"statement: SELECT name, setting, unit, vartype, context, sourcefile FROM pg_catalog.pg_settings WHERE pg_catalog.lower(name) = ANY(ARRAY['archive_cleanup_command','primary_conninfo','primary_slot_name','promote_trigger_file','recovery_end_command','recovery_min_apply_delay','recovery_target','recovery_target_lsn','recovery_target_name','recovery_target_time','recovery_target_timeline','recovery_target_xid','restore_command'])",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:38.119 UTC,"replicator","",9602,"[local]",6734959a.2582,1,"idle",2024-11-13 12:03:38 UTC,4/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:38.124 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,10,"idle",2024-11-13 12:03:27 UTC,2/12,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), 
'0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:40.135 UTC,"replicator","",9665,"[local]",6734959c.25c1,1,"idle",2024-11-13 12:03:40 UTC,4/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:40.139 UTC,,,9535,,6734958e.253f,9,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"received SIGHUP, reloading configuration files",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:40.140 UTC,,,9535,,6734958e.253f,10,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"parameter ""primary_conninfo"" changed to ""user=replicator passfile=/tmp/pgpass_postgres1 host=127.0.0.1 port=5382 sslmode=verify-ca sslcert=/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt sslkey=/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key sslrootcert=/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt application_name=postgres1 gssencmode=prefer channel_binding=prefer""",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:40.140 UTC,,,9535,,6734958e.253f,11,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"parameter ""primary_slot_name"" changed to ""postgres1""",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:40.151 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,11,"idle",2024-11-13 12:03:27 UTC,2/13,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), 
'0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:40.153 UTC,"replicator","",9671,"[local]",6734959c.25c7,1,"idle",2024-11-13 12:03:40 UTC,4/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:40.153 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,12,"idle",2024-11-13 12:03:27 UTC,2/14,0,LOG,00000,"statement: SELECT name, setting, unit, vartype, context, sourcefile FROM pg_catalog.pg_settings WHERE pg_catalog.lower(name) = ANY(ARRAY['archive_cleanup_command','primary_conninfo','primary_slot_name','promote_trigger_file','recovery_end_command','recovery_min_apply_delay','recovery_target','recovery_target_lsn','recovery_target_name','recovery_target_time','recovery_target_timeline','recovery_target_xid','restore_command'])",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:40.259 UTC,,,9539,,6734958e.2543,35,,2024-11-13 12:03:26 UTC,1/0,0,DEBUG,00000,"invalid record length at 0/40000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:40.274 UTC,,,9672,,6734959c.25c8,1,,2024-11-13 12:03:40 UTC,,0,LOG,00000,"fetching timeline history file for timeline 2 from primary server",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:40.276 UTC,,,9672,,6734959c.25c8,2,,2024-11-13 12:03:40 UTC,,0,FATAL,08P01,"could not start WAL streaming: ERROR: replication slot ""postgres1"" does not exist",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:40.302 UTC,,,9539,,6734958e.2543,36,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"restored log file 
""00000002.history"" from archive",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:40.438 UTC,,,9539,,6734958e.2543,37,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"restored log file ""00000002.history"" from archive",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:40.440 UTC,,,9539,,6734958e.2543,38,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"new target timeline is 2",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:40.676 UTC,,,9539,,6734958e.2543,39,,2024-11-13 12:03:26 UTC,1/0,0,DEBUG,00000,"invalid record length at 0/40000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:40.703 UTC,,,9686,,6734959c.25d6,1,,2024-11-13 12:03:40 UTC,,0,LOG,00000,"started streaming WAL from primary at 0/4000000 on timeline 2",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:42.152 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,13,"idle",2024-11-13 12:03:27 UTC,2/15,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:42.154 UTC,"replicator","",9689,"[local]",6734959e.25d9,1,"idle",2024-11-13 12:03:42 UTC,4/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:42.506 
UTC,,,9539,,6734958e.2543,40,,2024-11-13 12:03:26 UTC,1/0,0,DEBUG,00000,"saw end-of-backup record for backup starting at 0/5000028, waiting for 0/0",,,,,"WAL redo at 0/50000D8 for XLOG/BACKUP_END: 0/5000028",,,,"","startup",,0 2219s 2024-11-13 12:03:44.158 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,14,"idle",2024-11-13 12:03:27 UTC,2/16,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:45.457 UTC,,,9539,,6734958e.2543,41,,2024-11-13 12:03:26 UTC,1/0,0,DEBUG,00000,"saw end-of-backup record for backup starting at 0/6000028, waiting for 0/0",,,,,"WAL redo at 0/60000D8 for XLOG/BACKUP_END: 0/6000028",,,,"","startup",,0 2219s 2024-11-13 12:03:46.152 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,15,"idle",2024-11-13 12:03:27 UTC,2/17,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, 
pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:48.181 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,16,"idle",2024-11-13 12:03:27 UTC,2/18,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:50.154 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,17,"idle",2024-11-13 12:03:27 UTC,2/19,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, 
pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:52.158 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,18,"idle",2024-11-13 12:03:27 UTC,2/20,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:52.304 UTC,,,9539,,6734958e.2543,42,,2024-11-13 12:03:26 UTC,1/0,0,DEBUG,00000,"transaction ID wrap limit is 2147484370, limited by database with OID 1",,,,,"WAL redo at 0/9000028 for XLOG/CHECKPOINT_SHUTDOWN: redo 0/9000028; tli 2; prev tli 2; fpw true; xid 0:741; oid 16395; multi 1; offset 0; oldest xid 723 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown",,,,"","startup",,0 2219s 2024-11-13 12:03:52.306 UTC,,,9686,,6734959c.25d6,2,,2024-11-13 12:03:40 UTC,,0,LOG,00000,"replication terminated by primary 
server","End of WAL reached on timeline 2 at 0/90000A0.",,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:52.306 UTC,,,9686,,6734959c.25d6,3,,2024-11-13 12:03:40 UTC,,0,FATAL,08006,"could not send end-of-streaming message to primary: SSL connection has been closed unexpectedly 2219s no COPY in progress",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:52.560 UTC,,,9539,,6734958e.2543,43,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"invalid record length at 0/90000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:52.562 UTC,,,9822,,673495a8.265e,1,,2024-11-13 12:03:52 UTC,,0,FATAL,08006,"could not connect to the primary server: connection to server at ""127.0.0.1"", port 5382 failed: Connection refused 2219s Is the server running on that host and accepting TCP/IP connections?",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:52.665 UTC,,,9539,,6734958e.2543,44,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/90000B8",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:53.258 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,19,"idle",2024-11-13 12:03:27 UTC,2/21,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 
2024-11-13 12:03:55.278 UTC,"replicator","",9888,"[local]",673495ab.26a0,1,"idle",2024-11-13 12:03:55 UTC,4/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:55.313 UTC,,,9535,,6734958e.253f,12,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"received SIGHUP, reloading configuration files",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:55.314 UTC,,,9535,,6734958e.253f,13,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"parameter ""primary_conninfo"" changed to ""user=replicator passfile=/tmp/pgpass_postgres1 host=127.0.0.1 port=5385 sslmode=verify-ca sslcert=/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt sslkey=/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key sslrootcert=/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt application_name=postgres1 gssencmode=prefer channel_binding=prefer""",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:55.321 UTC,"replicator","",9899,"[local]",673495ab.26ab,1,"idle",2024-11-13 12:03:55 UTC,4/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:55.362 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,20,"idle",2024-11-13 12:03:27 UTC,2/22,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', 
'', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:55.367 UTC,"replicator","",9900,"[local]",673495ab.26ac,1,"idle",2024-11-13 12:03:55 UTC,4/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:55.416 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,21,"idle",2024-11-13 12:03:27 UTC,2/23,0,LOG,00000,"statement: SELECT name, setting, unit, vartype, context, sourcefile FROM pg_catalog.pg_settings WHERE pg_catalog.lower(name) = ANY(ARRAY['archive_cleanup_command','primary_conninfo','primary_slot_name','promote_trigger_file','recovery_end_command','recovery_min_apply_delay','recovery_target','recovery_target_lsn','recovery_target_name','recovery_target_time','recovery_target_timeline','recovery_target_xid','restore_command'])",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:55.426 UTC,"replicator","",9903,"[local]",673495ab.26af,1,"idle",2024-11-13 12:03:55 UTC,4/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:55.453 UTC,,,9539,,6734958e.2543,45,,2024-11-13 12:03:26 UTC,1/0,0,DEBUG,00000,"invalid record length at 0/90000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:55.465 UTC,,,9908,,673495ab.26b4,1,,2024-11-13 12:03:55 UTC,,0,LOG,00000,"fetching timeline history file for timeline 3 from primary server",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:55.471 UTC,,,9908,,673495ab.26b4,2,,2024-11-13 12:03:55 UTC,,0,LOG,00000,"started streaming WAL from primary at 0/9000000 on timeline 2",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:55.471 UTC,,,9908,,673495ab.26b4,3,,2024-11-13 12:03:55 UTC,,0,LOG,00000,"replication terminated by primary server","End of WAL reached on timeline 2 at 0/90000A0.",,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:55.471 UTC,,,9908,,673495ab.26b4,4,,2024-11-13 12:03:55 
UTC,,0,DEBUG,00000,"walreceiver ended streaming and awaits new instructions",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:55.471 UTC,,,9908,,673495ab.26b4,5,,2024-11-13 12:03:55 UTC,,0,FATAL,57P01,"terminating walreceiver process due to administrator command",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:55.496 UTC,,,9539,,6734958e.2543,46,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"restored log file ""00000003.history"" from archive",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:55.610 UTC,,,9539,,6734958e.2543,47,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"restored log file ""00000003.history"" from archive",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:55.616 UTC,,,9539,,6734958e.2543,48,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"new target timeline is 3",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:55.821 UTC,,,9539,,6734958e.2543,49,,2024-11-13 12:03:26 UTC,1/0,0,DEBUG,00000,"invalid record length at 0/90000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:55.836 UTC,,,9928,,673495ab.26c8,1,,2024-11-13 12:03:55 UTC,,0,LOG,00000,"started streaming WAL from primary at 0/9000000 on timeline 3",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:03:57.362 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,22,"idle",2024-11-13 12:03:27 UTC,2/24,0,LOG,00000,"statement: SELECT name, setting, unit, vartype, context, sourcefile FROM pg_catalog.pg_settings WHERE pg_catalog.lower(name) = ANY(ARRAY['wal_level','max_connections','max_wal_senders','max_prepared_transactions','max_locks_per_transaction','track_commit_timestamp','max_replication_slots','max_worker_processes','archive_command','archive_mode','log_destination','log_directory','log_filename','log_min_messages','log_statement','logging_collector','shared_buffers','ssl','ssl_ca_file','ssl_cert_file','ssl_key_file','unix_socket_directories','cluster_name','listen_addresses','port','wal_keep_size'])",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:57.366 
UTC,,,9535,,6734958e.253f,14,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"received SIGHUP, reloading configuration files",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:58.368 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,23,"idle",2024-11-13 12:03:27 UTC,2/25,0,LOG,00000,"statement: SELECT name, pg_catalog.current_setting(name), unit, vartype FROM pg_catalog.pg_settings WHERE pg_catalog.lower(name) != ALL(ARRAY['archive_cleanup_command','pause_at_recovery_target','primary_conninfo','primary_slot_name','promote_trigger_file','recovery_end_command','recovery_min_apply_delay','recovery_target','recovery_target_action','recovery_target_inclusive','recovery_target_lsn','recovery_target_name','recovery_target_time','recovery_target_timeline','recovery_target_xid','restore_command','standby_mode','trigger_file','hot_standby','wal_log_hints']) AND pending_restart",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:58.376 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,24,"idle",2024-11-13 12:03:27 UTC,2/26,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:59.349 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,25,"idle",2024-11-13 
12:03:27 UTC,2/27,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s + for file in features/output/*_failed/* 2219s + case $file in 2219s + echo features/output/priority_replication_failed/postgres1.log: 2219s + cat features/output/priority_replication_failed/postgres1.log 2219s 2024-11-13 12:03:59.438 UTC,"postgres","postgres",9961,"[local]",673495af.26e9,1,"idle",2024-11-13 12:03:59 UTC,4/11,0,LOG,00000,"statement: SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), 
pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri",,,,,,,,,"Patroni restapi","client backend",,0 2219s 2024-11-13 12:03:59.444 UTC,"replicator","",9962,"[local]",673495af.26ea,1,"idle",2024-11-13 12:03:59 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:59.490 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,26,"idle",2024-11-13 12:03:27 UTC,2/28,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:04:00.397 UTC,"postgres","postgres",9961,"[local]",673495af.26e9,2,"idle",2024-11-13 12:03:59 UTC,4/12,0,LOG,00000,"statement: SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE 
pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri",,,,,,,,,"Patroni restapi","client backend",,0 2219s 2024-11-13 12:04:00.599 UTC,,,9539,,6734958e.2543,50,,2024-11-13 12:03:26 UTC,1/0,0,DEBUG,00000,"transaction ID wrap limit is 2147484370, limited by database with OID 1",,,,,"WAL redo at 0/A000028 for XLOG/CHECKPOINT_SHUTDOWN: redo 0/A000028; tli 3; prev tli 3; fpw true; xid 0:741; oid 16395; multi 1; offset 0; oldest xid 723 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown",,,,"","startup",,0 2219s 2024-11-13 12:04:00.599 UTC,,,9928,,673495ab.26c8,2,,2024-11-13 12:03:55 UTC,,0,LOG,00000,"replication terminated by primary server","End of WAL reached on timeline 3 at 0/A0000A0.",,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:04:00.599 UTC,,,9928,,673495ab.26c8,3,,2024-11-13 12:03:55 UTC,,0,FATAL,08006,"could not send end-of-streaming message to primary: SSL connection has been closed unexpectedly 2219s no COPY in progress",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:04:00.798 UTC,,,9539,,6734958e.2543,51,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"invalid record length at 0/A0000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:00.801 
UTC,,,9981,,673495b0.26fd,1,,2024-11-13 12:04:00 UTC,,0,FATAL,08006,"could not connect to the primary server: connection to server at ""127.0.0.1"", port 5385 failed: Connection refused 2219s Is the server running on that host and accepting TCP/IP connections?",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:04:00.904 UTC,,,9539,,6734958e.2543,52,,2024-11-13 12:03:26 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/A0000B8",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:01.497 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,27,"idle",2024-11-13 12:03:27 UTC,2/29,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:04:01.505 UTC,"replicator","",9997,"[local]",673495b1.270d,1,"idle",2024-11-13 12:04:01 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:04:01.893 UTC,,,9535,,6734958e.253f,15,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"received fast shutdown request",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:04:01.896 UTC,,,9535,,6734958e.253f,16,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"aborting any active transactions",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:04:01.896 
UTC,"postgres","postgres",9569,"127.0.0.1:54978",6734958f.2561,20,"idle",2024-11-13 12:03:27 UTC,3/0,0,FATAL,57P01,"terminating connection due to administrator command",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:04:01.899 UTC,"postgres","postgres",9566,"[local]",6734958f.255e,28,"idle",2024-11-13 12:03:27 UTC,2/0,0,FATAL,57P01,"terminating connection due to administrator command",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:04:01.902 UTC,"postgres","postgres",9961,"[local]",673495af.26e9,3,"idle",2024-11-13 12:03:59 UTC,4/0,0,FATAL,57P01,"terminating connection due to administrator command",,,,,,,,,"Patroni restapi","client backend",,0 2219s 2024-11-13 12:04:01.905 UTC,,,9537,,6734958e.2541,1,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"shutting down",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.905 UTC,,,9537,,6734958e.2541,2,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"restartpoint starting: shutdown immediate",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.905 UTC,,,9537,,6734958e.2541,3,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"performing replication slot checkpoint",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.910 UTC,,,9537,,6734958e.2541,4,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=1 file=base/5/2703 time=0.268 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.912 UTC,,,9537,,6734958e.2541,5,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=2 file=base/5/1259 time=1.793 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.912 UTC,,,9537,,6734958e.2541,6,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=3 file=base/5/2608_fsm time=0.359 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.913 UTC,,,9537,,6734958e.2541,7,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=4 file=base/5/2673 time=0.465 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.914 UTC,,,9537,,6734958e.2541,8,,2024-11-13 12:03:26 
UTC,,0,DEBUG,00000,"checkpoint sync: number=5 file=base/5/1249_fsm time=1.348 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.915 UTC,,,9537,,6734958e.2541,9,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=6 file=base/5/2663 time=0.050 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.915 UTC,,,9537,,6734958e.2541,10,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=7 file=base/5/1247_vm time=0.235 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.915 UTC,,,9537,,6734958e.2541,11,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=8 file=base/5/1247 time=0.248 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.915 UTC,,,9537,,6734958e.2541,12,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=9 file=base/5/1249_vm time=0.201 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.915 UTC,,,9537,,6734958e.2541,13,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=10 file=base/5/2659 time=0.058 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.916 UTC,,,9537,,6734958e.2541,14,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=11 file=base/5/2704 time=0.234 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.916 UTC,,,9537,,6734958e.2541,15,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=12 file=base/5/2608 time=0.271 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.916 UTC,,,9537,,6734958e.2541,16,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=13 file=base/5/16392 time=0.061 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.916 UTC,,,9537,,6734958e.2541,17,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=14 file=base/5/2608_vm time=0.248 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.916 UTC,,,9537,,6734958e.2541,18,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=15 file=base/5/3455 time=0.235 ms",,,,,,,,,"","checkpointer",,0 
2219s 2024-11-13 12:04:01.917 UTC,,,9537,,6734958e.2541,19,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=16 file=base/5/2674 time=0.255 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.917 UTC,,,9537,,6734958e.2541,20,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=17 file=base/5/16386 time=0.093 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.917 UTC,,,9537,,6734958e.2541,21,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=18 file=base/5/1249 time=0.248 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.917 UTC,,,9537,,6734958e.2541,22,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=19 file=base/5/16389 time=0.065 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.917 UTC,,,9537,,6734958e.2541,23,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=20 file=base/5/2658 time=0.055 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.917 UTC,,,9537,,6734958e.2541,24,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=21 file=pg_xact/0000 time=0.199 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.918 UTC,,,9537,,6734958e.2541,25,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=22 file=base/5/1259_vm time=0.228 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.918 UTC,,,9537,,6734958e.2541,26,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"checkpoint sync: number=23 file=base/5/2662 time=0.075 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.920 UTC,,,9537,,6734958e.2541,27,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"restartpoint complete: wrote 9 buffers (7.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.008 s, total=0.015 s; sync files=23, longest=0.002 s, average=0.001 s; distance=131072 kB, estimate=131072 kB; lsn=0/A000028, redo lsn=0/A000028",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.920 UTC,,,9537,,6734958e.2541,28,,2024-11-13 12:03:26 
UTC,,0,LOG,00000,"recovery restart point at 0/A000028","Last completed transaction was at log time 2024-11-13 12:03:48.100986+00.",,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:01.923 UTC,,,9535,,6734958e.253f,17,,2024-11-13 12:03:26 UTC,,0,LOG,00000,"database system is shut down",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:04:01.925 UTC,,,9536,,6734958e.2540,1,,2024-11-13 12:03:26 UTC,,0,DEBUG,00000,"logger shutting down",,,,,,,,,"","logger",,0 2219s features/output/priority_replication_failed/postgres1.log: 2219s 2024-11-13 12:03:26.654 UTC [9535] LOG: ending log output to stderr 2219s 2024-11-13 12:03:26.654 UTC [9535] HINT: Future log output will go to log destination "csvlog". 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000002.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000010000000000000003' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, 
args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000010000000000000003' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000002.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000010000000000000003' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No 
such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000002.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000010000000000000004' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000010000000000000004' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000002.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File 
"/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000010000000000000004' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000002.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000010000000000000004' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: 
'/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000004' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000010000000000000004' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", 
line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000009' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000009' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: 
'/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000004.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000030000000000000009' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000009' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build+ for file in features/output/*_failed/* 2219s + case $file in 2219s + echo features/output/priority_replication_failed/postgres1.yml: 2219s + cat features/output/priority_replication_failed/postgres1.yml 2219s + for file in features/output/*_failed/* 2219s + case $file in 2219s + echo features/output/priority_replication_failed/postgres2.csv: 2219s + cat features/output/priority_replication_failed/postgres2.csv 2219s .hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in 
copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000004.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003000000000000000A' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000004.history' 2219s 2024-11-13 12:04:01.925 UTC [9536] DEBUG: logger shutting down 2219s features/output/priority_replication_failed/postgres1.yml: 2219s bootstrap: 2219s dcs: 2219s loop_wait: 2 2219s maximum_lag_on_failover: 1048576 2219s postgresql: 2219s parameters: 2219s archive_command: /usr/bin/python3 /tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py 2219s --mode archive --dirname /tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive 2219s --filename %f --pathname %p 2219s archive_mode: 'on' 2219s restore_command: /usr/bin/python3 /tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py 2219s --mode restore --dirname 
/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive 2219s --filename %f --pathname %p 2219s wal_keep_segments: 100 2219s pg_hba: 2219s - host replication replicator 127.0.0.1/32 md5 2219s - host all all 0.0.0.0/0 md5 2219s use_pg_rewind: true 2219s retry_timeout: 10 2219s ttl: 30 2219s initdb: 2219s - encoding: UTF8 2219s - data-checksums 2219s - auth: md5 2219s - auth-host: md5 2219s post_bootstrap: psql -w -c "SELECT 1" 2219s log: 2219s format: '%(asctime)s %(levelname)s [%(pathname)s:%(lineno)d - %(funcName)s]: %(message)s' 2219s loggers: 2219s patroni.postgresql.callback_executor: DEBUG 2219s name: postgres1 2219s postgresql: 2219s authentication: 2219s replication: 2219s password: rep-pass 2219s sslcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt 2219s sslkey: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key 2219s sslmode: verify-ca 2219s sslrootcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt 2219s username: replicator 2219s rewind: 2219s password: rewind_password 2219s sslcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt 2219s sslkey: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key 2219s sslmode: verify-ca 2219s sslrootcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt 2219s username: rewind_user 2219s superuser: 2219s password: patroni 2219s sslcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt 2219s sslkey: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key 2219s sslmode: verify-ca 2219s sslrootcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt 2219s username: postgres 2219s basebackup: 2219s - checkpoint: fast 2219s callbacks: 2219s on_role_change: /usr/bin/python3 features/callback2.py postgres1 5383 2219s connect_address: 127.0.0.1:5383 2219s data_dir: /tmp/autopkgtest.FwqS2V/build.hfu/src/data/postgres1 2219s listen: 127.0.0.1:5383 2219s parameters: 2219s log_destination: 
csvlog 2219s log_directory: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/priority_replication 2219s log_filename: postgres1.log 2219s log_min_messages: debug1 2219s log_statement: all 2219s logging_collector: 'on' 2219s shared_buffers: 1MB 2219s ssl: 'on' 2219s ssl_ca_file: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt 2219s ssl_cert_file: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt 2219s ssl_key_file: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key 2219s unix_socket_directories: /tmp 2219s pg_hba: 2219s - local all all trust 2219s - local replication all trust 2219s - hostssl replication replicator all md5 clientcert=verify-ca 2219s - hostssl all all all md5 clientcert=verify-ca 2219s pgpass: /tmp/pgpass_postgres1 2219s use_unix_socket: true 2219s use_unix_socket_repl: true 2219s restapi: 2219s connect_address: 127.0.0.1:8009 2219s listen: 127.0.0.1:8009 2219s scope: batman 2219s tags: 2219s clonefrom: false 2219s failover_priority: '0' 2219s nofailover: false 2219s noloadbalance: false 2219s nostream: false 2219s nosync: false 2219s features/output/priority_replication_failed/postgres2.csv: 2219s 2024-11-13 12:03:42.855 UTC,,,9712,,6734959e.25f0,1,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"ending log output to stderr",,"Future log output will go to log destination ""csvlog"".",,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:42.855 UTC,,,9712,,6734959e.25f0,2,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"starting PostgreSQL 16.4 (Ubuntu 16.4-3) on s390x-ibm-linux-gnu, compiled by gcc (Ubuntu 14.2.0-7ubuntu1) 14.2.0, 64-bit",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:42.855 UTC,,,9712,,6734959e.25f0,3,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"listening on IPv4 address ""127.0.0.1"", port 5384",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:42.857 UTC,,,9712,,6734959e.25f0,4,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"listening on Unix socket ""/tmp/.s.PGSQL.5384""",,,,,,,,,"","postmaster",,0 2219s 
2024-11-13 12:03:42.865 UTC,,,9716,,6734959e.25f4,1,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"database system was interrupted; last known up at 2024-11-13 12:03:42 UTC",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:42.869 UTC,"postgres","postgres",9718,"[local]",6734959e.25f6,1,"",2024-11-13 12:03:42 UTC,,0,FATAL,57P03,"the database system is starting up",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:42.873 UTC,"postgres","postgres",9720,"[local]",6734959e.25f8,1,"",2024-11-13 12:03:42 UTC,,0,FATAL,57P03,"the database system is starting up",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:42.995 UTC,,,9716,,6734959e.25f4,2,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"entering standby mode",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:42.995 UTC,,,9716,,6734959e.25f4,3,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"backup time 2024-11-13 12:03:42 UTC in file ""backup_label""",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:42.995 UTC,,,9716,,6734959e.25f4,4,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"backup label pg_basebackup base backup in file ""backup_label""",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:42.995 UTC,,,9716,,6734959e.25f4,5,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"backup timeline 2 in file ""backup_label""",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:42.995 UTC,,,9716,,6734959e.25f4,6,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"starting backup recovery with redo LSN 0/5000028, checkpoint LSN 0/5000060, on timeline ID 2",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:42.997 UTC,"postgres","postgres",9723,"127.0.0.1:43610",6734959e.25fb,1,"",2024-11-13 12:03:42 UTC,,0,FATAL,57P03,"the database system is starting up",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:43.021 UTC,,,9716,,6734959e.25f4,7,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"restored log file ""00000002.history"" from archive",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.055 UTC,,,9716,,6734959e.25f4,8,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"restored log file 
""000000020000000000000005"" from archive",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.065 UTC,,,9716,,6734959e.25f4,9,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"got WAL segment from archive",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.065 UTC,,,9716,,6734959e.25f4,10,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint record is at 0/5000060",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.065 UTC,,,9716,,6734959e.25f4,11,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"redo record is at 0/5000028; shutdown false",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.065 UTC,,,9716,,6734959e.25f4,12,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"next transaction ID: 739; next OID: 16389",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.065 UTC,,,9716,,6734959e.25f4,13,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"next MultiXactId: 1; next MultiXactOffset: 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.065 UTC,,,9716,,6734959e.25f4,14,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"oldest unfrozen transaction ID: 723, in database 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.065 UTC,,,9716,,6734959e.25f4,15,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"oldest MultiXactId: 1, in database 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.065 UTC,,,9716,,6734959e.25f4,16,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"commit timestamp Xid oldest/newest: 0/0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.066 UTC,,,9716,,6734959e.25f4,17,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"transaction ID wrap limit is 2147484370, limited by database with OID 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.066 UTC,,,9716,,6734959e.25f4,18,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"MultiXactId wrap limit is 2147483648, limited by database with OID 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.066 UTC,,,9716,,6734959e.25f4,19,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"starting up replication slots",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.066 
UTC,,,9716,,6734959e.25f4,20,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.067 UTC,,,9716,,6734959e.25f4,21,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"resetting unlogged relations: cleanup 1 init 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.067 UTC,,,9716,,6734959e.25f4,22,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"initializing for hot standby",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.067 UTC,,,9716,,6734959e.25f4,23,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"redo starts at 0/5000028",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.067 UTC,,,9716,,6734959e.25f4,24,,2024-11-13 12:03:42 UTC,1/0,0,DEBUG,00000,"recovery snapshots are now enabled",,,,,"WAL redo at 0/5000028 for Standby/RUNNING_XACTS: nextXid 739 latestCompletedXid 738 oldestRunningXid 739",,,,"","startup",,0 2219s 2024-11-13 12:03:43.167 UTC,,,9716,,6734959e.25f4,25,,2024-11-13 12:03:42 UTC,1/0,0,DEBUG,00000,"end of backup record reached",,,,,"WAL redo at 0/50000D8 for XLOG/BACKUP_END: 0/5000028",,,,"","startup",,0 2219s 2024-11-13 12:03:43.167 UTC,,,9716,,6734959e.25f4,26,,2024-11-13 12:03:42 UTC,1/0,0,DEBUG,00000,"end of backup reached",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.168 UTC,,,9716,,6734959e.25f4,27,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"completed backup recovery with redo LSN 0/5000028 and end LSN 0/5000100",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.168 UTC,,,9716,,6734959e.25f4,28,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"consistent recovery state reached at 0/5000100",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:03:43.168 UTC,,,9712,,6734959e.25f0,5,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"database system is ready to accept read-only connections",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:03:43.281 UTC,,,9732,,6734959f.2604,1,,2024-11-13 12:03:43 UTC,,0,LOG,00000,"started streaming WAL from primary at 0/6000000 on timeline 2",,,,,,,,,"","walreceiver",,0 2219s 
2024-11-13 12:03:43.918 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,1,"idle",2024-11-13 12:03:43 UTC,2/3,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:43.920 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,2,"idle",2024-11-13 12:03:43 UTC,2/4,0,LOG,00000,"statement: SELECT name, setting, unit, vartype, context, sourcefile FROM pg_catalog.pg_settings WHERE pg_catalog.lower(name) = ANY(ARRAY['archive_cleanup_command','primary_conninfo','primary_slot_name','promote_trigger_file','recovery_end_command','recovery_min_apply_delay','recovery_target','recovery_target_lsn','recovery_target_name','recovery_target_time','recovery_target_timeline','recovery_target_xid','restore_command'])",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:43.922 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,3,"idle",2024-11-13 12:03:43 UTC,2/5,0,LOG,00000,"statement: SELECT slot_name, slot_type, pg_catalog.pg_wal_lsn_diff(restart_lsn, '0/0')::bigint, plugin, database, datoid, catalog_xmin, pg_catalog.pg_wal_lsn_diff(confirmed_flush_lsn, '0/0')::bigint FROM pg_catalog.pg_replication_slots WHERE NOT temporary",,,,,,,,,"Patroni 
heartbeat","client backend",,0
2219s 2024-11-13 12:03:43.925 UTC,"replicator","",9739,"[local]",6734959f.260b,1,"idle",2024-11-13 12:03:43 UTC,3/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0
2219s 2024-11-13 12:03:44.011 UTC,"postgres","postgres",9740,"127.0.0.1:43620",6734959f.260c,1,"idle",2024-11-13 12:03:43 UTC,3/3,0,LOG,00000,"statement: SELECT 1",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:44.011 UTC,"postgres","postgres",9740,"127.0.0.1:43620",6734959f.260c,2,"idle",2024-11-13 12:03:43 UTC,3/4,0,LOG,00000,"statement: SET synchronous_commit TO 'local'",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:45.457 UTC,,,9716,,6734959e.25f4,29,,2024-11-13 12:03:42 UTC,1/0,0,DEBUG,00000,"saw end-of-backup record for backup starting at 0/6000028, waiting for 0/0",,,,,"WAL redo at 0/60000D8 for XLOG/BACKUP_END: 0/6000028",,,,"","startup",,0
2219s 2024-11-13 12:03:45.902 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,4,"idle",2024-11-13 12:03:43 UTC,2/6,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:47.098 UTC,"postgres","postgres",9740,"127.0.0.1:43620",6734959f.260c,3,"idle",2024-11-13 12:03:43 UTC,3/5,0,LOG,00000,"statement: SELECT 1 FROM public.test_1731499427_0613313",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:47.098 UTC,"postgres","postgres",9740,"127.0.0.1:43620",6734959f.260c,4,"SELECT",2024-11-13 12:03:43 UTC,3/5,0,DEBUG,42P01,"relation ""public.test_1731499427_0613313"" does not exist",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:47.098 UTC,"postgres","postgres",9740,"127.0.0.1:43620",6734959f.260c,5,"SELECT",2024-11-13 12:03:43 UTC,3/5,0,ERROR,42P01,"relation ""public.test_1731499427_0613313"" does not exist",,,,,,"SELECT 1 FROM public.test_1731499427_0613313",15,,"","client backend",,0
2219s 2024-11-13 12:03:47.903 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,5,"idle",2024-11-13 12:03:43 UTC,2/7,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:48.099 UTC,"postgres","postgres",9740,"127.0.0.1:43620",6734959f.260c,6,"idle",2024-11-13 12:03:43 UTC,3/6,0,LOG,00000,"statement: SELECT 1 FROM public.test_1731499427_0613313",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:49.904 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,6,"idle",2024-11-13 12:03:43 UTC,2/8,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:51.904 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,7,"idle",2024-11-13 12:03:43 UTC,2/9,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:52.309 UTC,,,9732,,6734959f.2604,2,,2024-11-13 12:03:43 UTC,,0,LOG,00000,"replication terminated by primary server","End of WAL reached on timeline 2 at 0/90000A0.",,,,,,,,"","walreceiver",,0
2219s 2024-11-13 12:03:52.309 UTC,,,9732,,6734959f.2604,3,,2024-11-13 12:03:43 UTC,,0,FATAL,08006,"could not send end-of-streaming message to primary: SSL connection has been closed unexpectedly
2219s no COPY in progress",,,,,,,,,"","walreceiver",,0
2219s 2024-11-13 12:03:52.581 UTC,,,9716,,6734959e.25f4,30,,2024-11-13 12:03:42 UTC,1/0,0,DEBUG,00000,"transaction ID wrap limit is 2147484370, limited by database with OID 1",,,,,"WAL redo at 0/9000028 for XLOG/CHECKPOINT_SHUTDOWN: redo 0/9000028; tli 2; prev tli 2; fpw true; xid 0:741; oid 16395; multi 1; offset 0; oldest xid 723 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown",,,,"","startup",,0
2219s 2024-11-13 12:03:52.581 UTC,,,9716,,6734959e.25f4,31,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"invalid record length at 0/90000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:52.583 UTC,,,9825,,673495a8.2661,1,,2024-11-13 12:03:52 UTC,,0,FATAL,08006,"could not connect to the primary server: connection to server at ""127.0.0.1"", port 5382 failed: Connection refused
2219s Is the server running on that host and accepting TCP/IP connections?",,,,,,,,,"","walreceiver",,0
2219s 2024-11-13 12:03:52.683 UTC,,,9716,,6734959e.25f4,32,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/90000B8",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:52.783 UTC,,,9716,,6734959e.25f4,33,,2024-11-13 12:03:42 UTC,1/0,0,DEBUG,00000,"invalid record length at 0/90000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:52.785 UTC,,,9830,,673495a8.2666,1,,2024-11-13 12:03:52 UTC,,0,FATAL,08006,"could not connect to the primary server: connection to server at ""127.0.0.1"", port 5382 failed: Connection refused
2219s Is the server running on that host and accepting TCP/IP connections?",,,,,,,,,"","walreceiver",,0
2219s 2024-11-13 12:03:52.885 UTC,,,9716,,6734959e.25f4,34,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/90000B8",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:53.254 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,8,"idle",2024-11-13 12:03:43 UTC,2/10,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:53.301 UTC,"postgres","postgres",9854,"[local]",673495a9.267e,1,"idle",2024-11-13 12:03:53 UTC,4/2,0,LOG,00000,"statement: SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri",,,,,,,,,"Patroni restapi","client backend",,0
2219s 2024-11-13 12:03:53.307 UTC,"replicator","",9856,"[local]",673495a9.2680,1,"idle",2024-11-13 12:03:53 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0
2219s 2024-11-13 12:03:53.334 UTC,,,9712,,6734959e.25f0,6,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"received SIGHUP, reloading configuration files",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:53.334 UTC,,,9712,,6734959e.25f0,7,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"parameter ""primary_conninfo"" removed from configuration file, reset to default",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:53.334 UTC,,,9712,,6734959e.25f0,8,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"parameter ""primary_slot_name"" removed from configuration file, reset to default",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:53.345 UTC,"replicator","",9862,"[local]",673495a9.2686,1,"idle",2024-11-13 12:03:53 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0
2219s 2024-11-13 12:03:53.439 UTC,,,9716,,6734959e.25f4,35,,2024-11-13 12:03:42 UTC,1/0,0,DEBUG,00000,"invalid record length at 0/90000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:53.537 UTC,,,9716,,6734959e.25f4,36,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/90000B8",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:55.282 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,9,"idle",2024-11-13 12:03:43 UTC,2/11,0,LOG,00000,"statement: SELECT name, setting, unit, vartype, context, sourcefile FROM pg_catalog.pg_settings WHERE pg_catalog.lower(name) = ANY(ARRAY['wal_level','max_connections','max_wal_senders','max_prepared_transactions','max_locks_per_transaction','track_commit_timestamp','max_replication_slots','max_worker_processes','archive_command','archive_mode','log_destination','log_directory','log_filename','log_min_messages','log_statement','logging_collector','shared_buffers','ssl','ssl_ca_file','ssl_cert_file','ssl_key_file','unix_socket_directories','cluster_name','listen_addresses','port','wal_keep_size'])",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:55.286 UTC,,,9712,,6734959e.25f0,9,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"received SIGHUP, reloading configuration files",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:55.419 UTC,,,9716,,6734959e.25f4,37,,2024-11-13 12:03:42 UTC,1/0,0,DEBUG,00000,"invalid record length at 0/90000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:55.445 UTC,,,9716,,6734959e.25f4,38,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"restored log file ""00000003.history"" from archive",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:55.576 UTC,,,9716,,6734959e.25f4,39,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"restored log file ""00000003.history"" from archive",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:55.577 UTC,,,9716,,6734959e.25f4,40,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"new target timeline is 3",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:55.772 UTC,,,9716,,6734959e.25f4,41,,2024-11-13 12:03:42 UTC,1/0,0,DEBUG,00000,"invalid record length at 0/90000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:55.882 UTC,,,9716,,6734959e.25f4,42,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/90000B8",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:56.288 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,10,"idle",2024-11-13 12:03:43 UTC,2/12,0,LOG,00000,"statement: SELECT name, pg_catalog.current_setting(name), unit, vartype FROM pg_catalog.pg_settings WHERE pg_catalog.lower(name) != ALL(ARRAY['archive_cleanup_command','pause_at_recovery_target','primary_conninfo','primary_slot_name','promote_trigger_file','recovery_end_command','recovery_min_apply_delay','recovery_target','recovery_target_action','recovery_target_inclusive','recovery_target_lsn','recovery_target_name','recovery_target_time','recovery_target_timeline','recovery_target_xid','restore_command','standby_mode','trigger_file','hot_standby','wal_log_hints']) AND pending_restart",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:56.295 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,11,"idle",2024-11-13 12:03:43 UTC,2/13,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:56.298 UTC,"replicator","",9930,"[local]",673495ac.26ca,1,"idle",2024-11-13 12:03:56 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0
2219s 2024-11-13 12:03:56.327 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,12,"idle",2024-11-13 12:03:43 UTC,2/14,0,LOG,00000,"statement: SELECT name, setting, unit, vartype, context, sourcefile FROM pg_catalog.pg_settings WHERE pg_catalog.lower(name) = ANY(ARRAY['archive_cleanup_command','primary_conninfo','primary_slot_name','promote_trigger_file','recovery_end_command','recovery_min_apply_delay','recovery_target','recovery_target_lsn','recovery_target_name','recovery_target_time','recovery_target_timeline','recovery_target_xid','restore_command'])",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:56.331 UTC,,,9712,,6734959e.25f0,10,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"received SIGHUP, reloading configuration files",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:56.332 UTC,,,9712,,6734959e.25f0,11,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"parameter ""primary_conninfo"" changed to ""user=replicator passfile=/tmp/pgpass_postgres2 host=127.0.0.1 port=5385 sslmode=verify-ca sslcert=/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt sslkey=/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key sslrootcert=/tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt application_name=postgres2 gssencmode=prefer channel_binding=prefer""",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:56.332 UTC,,,9712,,6734959e.25f0,12,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"parameter ""primary_slot_name"" changed to ""postgres2""",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:56.338 UTC,"replicator","",9938,"[local]",673495ac.26d2,1,"idle",2024-11-13 12:03:56 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0
2219s 2024-11-13 12:03:56.593 UTC,,,9716,,6734959e.25f4,43,,2024-11-13 12:03:42 UTC,1/0,0,DEBUG,00000,"invalid record length at 0/90000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:56.607 UTC,,,9941,,673495ac.26d5,1,,2024-11-13 12:03:56 UTC,,0,LOG,00000,"started streaming WAL from primary at 0/9000000 on timeline 3",,,,,,,,,"","walreceiver",,0
2219s 2024-11-13 12:03:57.251 UTC,"postgres","postgres",9854,"[local]",673495a9.267e,2,"idle",2024-11-13 12:03:53 UTC,4/3,0,LOG,00000,"statement: SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri",,,,,,,,,"Patroni restapi","client backend",,0
2219s 2024-11-13 12:03:57.254 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,13,"idle",2024-11-13 12:03:43 UTC,2/15,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:57.255 UTC,"replicator","",9949,"[local]",673495ad.26dd,1,"idle",2024-11-13 12:03:57 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0
2219s 2024-11-13 12:03:57.256 UTC,"replicator","",9950,"[local]",673495ad.26de,1,"idle",2024-11-13 12:03:57 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0
2219s 2024-11-13 12:03:57.282 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,14,"idle",2024-11-13 12:03:43 UTC,2/16,0,LOG,00000,"statement: SELECT name, setting, unit, vartype, context, sourcefile FROM pg_catalog.pg_settings WHERE pg_catalog.lower(name) = ANY(ARRAY['archive_cleanup_command','primary_conninfo','primary_slot_name','promote_trigger_file','recovery_end_command','recovery_min_apply_delay','recovery_target','recovery_target_lsn','recovery_target_name','recovery_target_time','recovery_target_timeline','recovery_target_xid','restore_command'])",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:59.252 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,15,"idle",2024-11-13 12:03:43 UTC,2/17,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:04:00.575 UTC,,,9716,,6734959e.25f4,44,,2024-11-13 12:03:42 UTC,1/0,0,DEBUG,00000,"transaction ID wrap limit is 2147484370, limited by database with OID 1",,,,,"WAL redo at 0/A000028 for XLOG/CHECKPOINT_SHUTDOWN: redo 0/A000028; tli 3; prev tli 3; fpw true; xid 0:741; oid 16395; multi 1; offset 0; oldest xid 723 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown",,,,"","startup",,0
2219s 2024-11-13 12:04:00.575 UTC,,,9941,,673495ac.26d5,2,,2024-11-13 12:03:56 UTC,,0,LOG,00000,"replication terminated by primary server","End of WAL reached on timeline 3 at 0/A0000A0.",,,,,,,,"","walreceiver",,0
2219s 2024-11-13 12:04:00.576 UTC,,,9941,,673495ac.26d5,3,,2024-11-13 12:03:56 UTC,,0,FATAL,08006,"could not send end-of-streaming message to primary: SSL connection has been closed unexpectedly
2219s no COPY in progress",,,,,,,,,"","walreceiver",,0
2219s 2024-11-13 12:04:00.680 UTC,,,9716,,6734959e.25f4,45,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/A0000B8",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:04:01.253 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,16,"idle",2024-11-13 12:03:43 UTC,2/18,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:04:01.257 UTC,"replicator","",9984,"[local]",673495b1.2700,1,"idle",2024-11-13 12:04:01 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0
2219s 2024-11-13 12:04:01.431 UTC,,,9716,,6734959e.25f4,46,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"invalid record length at 0/A0000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:04:01.433 UTC,,,9987,,673495b1.2703,1,,2024-11-13 12:04:01 UTC,,0,FATAL,08006,"could not connect to the primary server: connection to server at ""127.0.0.1"", port 5385 failed: Connection refused
2219s Is the server running on that host and accepting TCP/IP connections?",,,,,,,,,"","walreceiver",,0
2219s 2024-11-13 12:04:01.499 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,17,"idle",2024-11-13 12:03:43 UTC,2/19,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:04:01.549 UTC,,,9716,,6734959e.25f4,47,,2024-11-13 12:03:42 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/A0000B8",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:04:03.507 UTC,"replicator","",10004,"[local]",673495b3.2714,1,"idle",2024-11-13 12:04:03 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0
2219s 2024-11-13 12:04:03.748 UTC,,,9712,,6734959e.25f0,13,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"received fast shutdown request",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:04:03.750 UTC,,,9712,,6734959e.25f0,14,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"aborting any active transactions",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:04:03.750 UTC,"postgres","postgres",9854,"[local]",673495a9.267e,3,"idle",2024-11-13 12:03:53 UTC,4/0,0,FATAL,57P01,"terminating connection due to administrator command",,,,,,,,,"Patroni restapi","client backend",,0
2219s 2024-11-13 12:04:03.755 UTC,"postgres","postgres",9737,"[local]",6734959f.2609,18,"idle",2024-11-13 12:03:43 UTC,2/0,0,FATAL,57P01,"terminating connection due to administrator command",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:04:03.756 UTC,"postgres","postgres",9740,"127.0.0.1:43620",6734959f.260c,7,"idle",2024-11-13 12:03:43 UTC,3/0,0,FATAL,57P01,"terminating connection due to administrator command",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:04:03.760 UTC,,,9714,,6734959e.25f2,1,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"shutting down",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.760 UTC,,,9714,,6734959e.25f2,2,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"restartpoint starting: shutdown immediate",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.760 UTC,,,9714,,6734959e.25f2,3,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"performing replication slot checkpoint",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.768 UTC,,,9714,,6734959e.25f2,4,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=1 file=base/5/2703 time=0.242 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.769 UTC,,,9714,,6734959e.25f2,5,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=2 file=base/5/1259 time=0.510 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.769 UTC,,,9714,,6734959e.25f2,6,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=3 file=base/5/2673 time=0.244 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.769 UTC,,,9714,,6734959e.25f2,7,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=4 file=base/5/1249_fsm time=0.264 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.770 UTC,,,9714,,6734959e.25f2,8,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=5 file=base/5/2663 time=0.259 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.770 UTC,,,9714,,6734959e.25f2,9,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=6 file=base/5/1247 time=0.251 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.770 UTC,,,9714,,6734959e.25f2,10,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=7 file=base/5/1249_vm time=0.186 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.770 UTC,,,9714,,6734959e.25f2,11,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=8 file=base/5/2659 time=0.023 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.770 UTC,,,9714,,6734959e.25f2,12,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=9 file=base/5/2704 time=0.189 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.770 UTC,,,9714,,6734959e.25f2,13,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=10 file=base/5/2608 time=0.021 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.770 UTC,,,9714,,6734959e.25f2,14,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=11 file=base/5/16392 time=0.031 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.770 UTC,,,9714,,6734959e.25f2,15,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=12 file=base/5/2608_vm time=0.193 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.771 UTC,,,9714,,6734959e.25f2,16,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=13 file=base/5/3455 time=0.195 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.771 UTC,,,9714,,6734959e.25f2,17,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=14 file=base/5/2674 time=0.213 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.771 UTC,,,9714,,6734959e.25f2,18,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=15 file=base/5/1249 time=0.227 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.771 UTC,,,9714,,6734959e.25f2,19,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=16 file=base/5/16389 time=0.026 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.771 UTC,,,9714,,6734959e.25f2,20,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=17 file=base/5/2658 time=0.028 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.771 UTC,,,9714,,6734959e.25f2,21,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=18 file=pg_xact/0000 time=0.189 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.771 UTC,,,9714,,6734959e.25f2,22,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"checkpoint sync: number=19 file=base/5/2662 time=0.024 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.772 UTC,,,9714,,6734959e.25f2,23,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"restartpoint complete: wrote 8 buffers (6.2%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.004 s, total=0.012 s; sync files=19, longest=0.001 s, average=0.001 s; distance=81920 kB, estimate=81920 kB; lsn=0/A000028, redo lsn=0/A000028",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.772 UTC,,,9714,,6734959e.25f2,24,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"recovery restart point at 0/A000028","Last completed transaction was at log time 2024-11-13 12:03:48.100986+00.",,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:04:03.774 UTC,,,9712,,6734959e.25f0,15,,2024-11-13 12:03:42 UTC,,0,LOG,00000,"database system is shut down",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:04:03.776 UTC,,,9713,,6734959e.25f1,1,,2024-11-13 12:03:42 UTC,,0,DEBUG,00000,"logger shutting down",,,,,,,,,"","logger",,0
2219s features/output/priority_replication_failed/postgres2.log:
2219s + for file in features/output/*_failed/*
2219s + case $file in
2219s + echo features/output/priority_replication_failed/postgres2.log:
2219s + cat features/output/priority_replication_failed/postgres2.log
2219s + for file in features/output/*_failed/*
2219s + case $file in
2219s + echo features/output/priority_replication_failed/postgres2.yml:
2219s + cat features/output/priority_replication_failed/postgres2.yml
2219s + for file in features/output/*_failed/*
2219s + case $file in
2219s + echo features/output/priority_replication_failed/postgres3.csv:
2219s + cat features/output/priority_replication_failed/postgres3.csv
2219s 2024-11-13 12:03:42.855 UTC [9712] LOG:  ending log output to stderr
2219s 2024-11-13 12:03:42.855 UTC [9712] HINT:  Future log output will go to log destination "csvlog".
2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000006' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000006' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 
2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000009' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000009' 2219s Traceback (most recent call last): 
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003.history'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000009'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003.history'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000009'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000004.history'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000030000000000000009'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000009'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000004.history'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000030000000000000009'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000009'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000004.history'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003000000000000000A'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000004.history'
2219s 2024-11-13 12:04:03.776 UTC [9713] DEBUG:  logger shutting down
2219s features/output/priority_replication_failed/postgres2.yml:
2219s bootstrap:
2219s   dcs:
2219s     loop_wait: 2
2219s     maximum_lag_on_failover: 1048576
2219s     postgresql:
2219s       parameters:
2219s         archive_command: /usr/bin/python3 /tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py
2219s           --mode archive --dirname /tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive
2219s           --filename %f --pathname %p
2219s         archive_mode: 'on'
2219s         restore_command: /usr/bin/python3 /tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py
2219s           --mode restore --dirname
/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive
2219s           --filename %f --pathname %p
2219s         wal_keep_segments: 100
2219s       pg_hba:
2219s       - host replication replicator 127.0.0.1/32 md5
2219s       - host all all 0.0.0.0/0 md5
2219s       use_pg_rewind: true
2219s     retry_timeout: 10
2219s     ttl: 30
2219s   initdb:
2219s   - encoding: UTF8
2219s   - data-checksums
2219s   - auth: md5
2219s   - auth-host: md5
2219s   post_bootstrap: psql -w -c "SELECT 1"
2219s log:
2219s   format: '%(asctime)s %(levelname)s [%(pathname)s:%(lineno)d - %(funcName)s]: %(message)s'
2219s   loggers:
2219s     patroni.postgresql.callback_executor: DEBUG
2219s name: postgres2
2219s postgresql:
2219s   authentication:
2219s     replication:
2219s       password: rep-pass
2219s       sslcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       sslkey: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key
2219s       sslmode: verify-ca
2219s       sslrootcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       username: replicator
2219s     rewind:
2219s       password: rewind_password
2219s       sslcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       sslkey: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key
2219s       sslmode: verify-ca
2219s       sslrootcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       username: rewind_user
2219s     superuser:
2219s       password: patroni
2219s       sslcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       sslkey: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key
2219s       sslmode: verify-ca
2219s       sslrootcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       username: postgres
2219s   basebackup:
2219s   - checkpoint: fast
2219s   callbacks:
2219s     on_role_change: /usr/bin/python3 features/callback2.py postgres2 5384
2219s   connect_address: 127.0.0.1:5384
2219s   data_dir: /tmp/autopkgtest.FwqS2V/build.hfu/src/data/postgres2
2219s   listen: 127.0.0.1:5384
2219s   parameters:
2219s     log_destination: csvlog
2219s     log_directory: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/priority_replication
2219s     log_filename: postgres2.log
2219s     log_min_messages: debug1
2219s     log_statement: all
2219s     logging_collector: 'on'
2219s     shared_buffers: 1MB
2219s     ssl: 'on'
2219s     ssl_ca_file: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s     ssl_cert_file: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s     ssl_key_file: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key
2219s     unix_socket_directories: /tmp
2219s   pg_hba:
2219s   - local all all trust
2219s   - local replication all trust
2219s   - hostssl replication replicator all md5 clientcert=verify-ca
2219s   - hostssl all all all md5 clientcert=verify-ca
2219s   pgpass: /tmp/pgpass_postgres2
2219s   use_unix_socket: true
2219s   use_unix_socket_repl: true
2219s restapi:
2219s   connect_address: 127.0.0.1:8010
2219s   listen: 127.0.0.1:8010
2219s scope: batman
2219s tags:
2219s   clonefrom: false
2219s   failover_priority: '1'
2219s   nofailover: true
2219s   noloadbalance: false
2219s   nostream: false
2219s   nosync: false
2219s features/output/priority_replication_failed/postgres3.csv:
2219s 2024-11-13 12:03:45.861 UTC,,,9760,,673495a1.2620,1,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"ending log output to stderr",,"Future log output will go to log destination ""csvlog"".",,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:45.861 UTC,,,9760,,673495a1.2620,2,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"starting PostgreSQL 16.4 (Ubuntu 16.4-3) on s390x-ibm-linux-gnu, compiled by gcc (Ubuntu 14.2.0-7ubuntu1) 14.2.0, 64-bit",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:45.861 UTC,,,9760,,673495a1.2620,3,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"listening on IPv4 address ""127.0.0.1"", port 5385",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:45.866 UTC,,,9760,,673495a1.2620,4,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"listening on Unix socket ""/tmp/.s.PGSQL.5385""",,,,,,,,,"","postmaster",,0
2219s
2024-11-13 12:03:45.869 UTC,,,9765,,673495a1.2625,1,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"database system was interrupted; last known up at 2024-11-13 12:03:45 UTC",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:45.871 UTC,"postgres","postgres",9766,"[local]",673495a1.2626,1,"",2024-11-13 12:03:45 UTC,,0,FATAL,57P03,"the database system is starting up",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:45.876 UTC,"postgres","postgres",9768,"[local]",673495a1.2628,1,"",2024-11-13 12:03:45 UTC,,0,FATAL,57P03,"the database system is starting up",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:46.021 UTC,,,9765,,673495a1.2625,2,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"entering standby mode",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.021 UTC,,,9765,,673495a1.2625,3,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"backup time 2024-11-13 12:03:45 UTC in file ""backup_label""",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.021 UTC,,,9765,,673495a1.2625,4,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"backup label pg_basebackup base backup in file ""backup_label""",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.021 UTC,,,9765,,673495a1.2625,5,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"backup timeline 2 in file ""backup_label""",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.021 UTC,,,9765,,673495a1.2625,6,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"starting backup recovery with redo LSN 0/6000028, checkpoint LSN 0/6000060, on timeline ID 2",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.047 UTC,"postgres","postgres",9773,"127.0.0.1:59572",673495a2.262d,1,"",2024-11-13 12:03:46 UTC,,0,FATAL,57P03,"the database system is starting up",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:46.052 UTC,,,9765,,673495a1.2625,7,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"restored log file ""00000002.history"" from archive",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.087 UTC,,,9765,,673495a1.2625,8,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"restored log file ""000000020000000000000006"" from archive",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.097 UTC,,,9765,,673495a1.2625,9,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"got WAL segment from archive",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.097 UTC,,,9765,,673495a1.2625,10,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint record is at 0/6000060",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.097 UTC,,,9765,,673495a1.2625,11,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"redo record is at 0/6000028; shutdown false",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.097 UTC,,,9765,,673495a1.2625,12,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"next transaction ID: 739; next OID: 16389",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.097 UTC,,,9765,,673495a1.2625,13,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"next MultiXactId: 1; next MultiXactOffset: 0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.097 UTC,,,9765,,673495a1.2625,14,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"oldest unfrozen transaction ID: 723, in database 1",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.097 UTC,,,9765,,673495a1.2625,15,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"oldest MultiXactId: 1, in database 1",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.097 UTC,,,9765,,673495a1.2625,16,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"commit timestamp Xid oldest/newest: 0/0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.097 UTC,,,9765,,673495a1.2625,17,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"transaction ID wrap limit is 2147484370, limited by database with OID 1",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.097 UTC,,,9765,,673495a1.2625,18,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"MultiXactId wrap limit is 2147483648, limited by database with OID 1",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.097 UTC,,,9765,,673495a1.2625,19,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"starting up replication slots",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.097 UTC,,,9765,,673495a1.2625,20,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.098 UTC,,,9765,,673495a1.2625,21,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"resetting unlogged relations: cleanup 1 init 0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.098 UTC,,,9765,,673495a1.2625,22,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"initializing for hot standby",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.098 UTC,,,9765,,673495a1.2625,23,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"redo starts at 0/6000028",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.098 UTC,,,9765,,673495a1.2625,24,,2024-11-13 12:03:45 UTC,1/0,0,DEBUG,00000,"recovery snapshots are now enabled",,,,,"WAL redo at 0/6000028 for Standby/RUNNING_XACTS: nextXid 739 latestCompletedXid 738 oldestRunningXid 739",,,,"","startup",,0
2219s 2024-11-13 12:03:46.199 UTC,,,9765,,673495a1.2625,25,,2024-11-13 12:03:45 UTC,1/0,0,DEBUG,00000,"end of backup record reached",,,,,"WAL redo at 0/60000D8 for XLOG/BACKUP_END: 0/6000028",,,,"","startup",,0
2219s 2024-11-13 12:03:46.199 UTC,,,9765,,673495a1.2625,26,,2024-11-13 12:03:45 UTC,1/0,0,DEBUG,00000,"end of backup reached",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.200 UTC,,,9765,,673495a1.2625,27,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"completed backup recovery with redo LSN 0/6000028 and end LSN 0/6000100",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.200 UTC,,,9765,,673495a1.2625,28,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"consistent recovery state reached at 0/6000100",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.200 UTC,,,9760,,673495a1.2620,5,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"database system is ready to accept read-only connections",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:46.312 UTC,,,9780,,673495a2.2634,1,,2024-11-13 12:03:46 UTC,,0,FATAL,08P01,"could not start WAL streaming: ERROR: replication slot ""postgres3"" does not exist",,,,,,,,,"","walreceiver",,0
2219s 2024-11-13 12:03:46.536 UTC,,,9786,,673495a2.263a,1,,2024-11-13 12:03:46 UTC,,0,FATAL,08P01,"could not start WAL streaming: ERROR: replication slot ""postgres3"" does not exist",,,,,,,,,"","walreceiver",,0
2219s 2024-11-13 12:03:46.653 UTC,,,9765,,673495a1.2625,29,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/7000018",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:46.895 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,1,"idle",2024-11-13 12:03:46 UTC,2/3,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:46.896 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,2,"idle",2024-11-13 12:03:46 UTC,2/4,0,LOG,00000,"statement: SELECT name, setting, unit, vartype, context, sourcefile FROM pg_catalog.pg_settings WHERE pg_catalog.lower(name) = ANY(ARRAY['archive_cleanup_command','primary_conninfo','primary_slot_name','promote_trigger_file','recovery_end_command','recovery_min_apply_delay','recovery_target','recovery_target_lsn','recovery_target_name','recovery_target_time','recovery_target_timeline','recovery_target_xid','restore_command'])",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:46.898 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,3,"idle",2024-11-13 12:03:46 UTC,2/5,0,LOG,00000,"statement: SELECT slot_name, slot_type, pg_catalog.pg_wal_lsn_diff(restart_lsn, '0/0')::bigint, plugin, database, datoid, catalog_xmin, pg_catalog.pg_wal_lsn_diff(confirmed_flush_lsn, '0/0')::bigint FROM pg_catalog.pg_replication_slots WHERE NOT temporary",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:46.901 UTC,"replicator","",9795,"[local]",673495a2.2643,1,"idle",2024-11-13 12:03:46 UTC,3/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0
2219s 2024-11-13 12:03:47.060 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,1,"idle",2024-11-13 12:03:47 UTC,3/3,0,LOG,00000,"statement: SELECT 1",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:47.060 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,2,"idle",2024-11-13 12:03:47 UTC,3/4,0,LOG,00000,"statement: SET synchronous_commit TO 'local'",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:48.140 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,3,"idle",2024-11-13 12:03:47 UTC,3/5,0,LOG,00000,"statement: SELECT 1 FROM public.test_1731499428_10031",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:48.140 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,4,"SELECT",2024-11-13 12:03:47 UTC,3/5,0,DEBUG,42P01,"relation ""public.test_1731499428_10031"" does not exist",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:48.140 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,5,"SELECT",2024-11-13 12:03:47 UTC,3/5,0,ERROR,42P01,"relation ""public.test_1731499428_10031"" does not exist",,,,,,"SELECT 1 FROM public.test_1731499428_10031",15,,"","client backend",,0
2219s 2024-11-13 12:03:48.897 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,4,"idle",2024-11-13 12:03:46 UTC,2/6,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:49.140 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,6,"idle",2024-11-13 12:03:47 UTC,3/6,0,LOG,00000,"statement: SELECT 1 FROM public.test_1731499428_10031",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:49.141 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,7,"SELECT",2024-11-13 12:03:47 UTC,3/6,0,DEBUG,42P01,"relation ""public.test_1731499428_10031"" does not exist",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:49.141 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,8,"SELECT",2024-11-13 12:03:47 UTC,3/6,0,ERROR,42P01,"relation ""public.test_1731499428_10031"" does not exist",,,,,,"SELECT 1 FROM public.test_1731499428_10031",15,,"","client backend",,0
2219s 2024-11-13 12:03:50.142 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,9,"idle",2024-11-13 12:03:47 UTC,3/7,0,LOG,00000,"statement: SELECT 1 FROM public.test_1731499428_10031",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:50.142 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,10,"SELECT",2024-11-13 12:03:47 UTC,3/7,0,DEBUG,42P01,"relation ""public.test_1731499428_10031"" does not
exist",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:50.142 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,11,"SELECT",2024-11-13 12:03:47 UTC,3/7,0,ERROR,42P01,"relation ""public.test_1731499428_10031"" does not exist",,,,,,"SELECT 1 FROM public.test_1731499428_10031",15,,"","client backend",,0
2219s 2024-11-13 12:03:50.894 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,5,"idle",2024-11-13 12:03:46 UTC,2/7,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:51.143 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,12,"idle",2024-11-13 12:03:47 UTC,3/8,0,LOG,00000,"statement: SELECT 1 FROM public.test_1731499428_10031",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:51.143 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,13,"SELECT",2024-11-13 12:03:47 UTC,3/8,0,DEBUG,42P01,"relation ""public.test_1731499428_10031"" does not exist",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:51.143 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,14,"SELECT",2024-11-13 12:03:47 UTC,3/8,0,ERROR,42P01,"relation ""public.test_1731499428_10031"" does not exist",,,,,,"SELECT 1 FROM public.test_1731499428_10031",15,,"","client backend",,0
2219s 2024-11-13 12:03:51.478 UTC,,,9765,,673495a1.2625,30,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"restored log file ""000000020000000000000007"" from archive",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:51.487 UTC,,,9765,,673495a1.2625,31,,2024-11-13 12:03:45 UTC,1/0,0,DEBUG,00000,"got WAL segment from archive",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:51.525 UTC,,,9765,,673495a1.2625,32,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"restored log file ""000000020000000000000008"" from archive",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:51.534 UTC,,,9765,,673495a1.2625,33,,2024-11-13 12:03:45 UTC,1/0,0,DEBUG,00000,"got WAL segment from archive",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:51.748 UTC,,,9809,,673495a7.2651,1,,2024-11-13 12:03:51 UTC,,0,LOG,00000,"started streaming WAL from primary at 0/9000000 on timeline 2",,,,,,,,,"","walreceiver",,0
2219s 2024-11-13 12:03:52.143 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,15,"idle",2024-11-13 12:03:47 UTC,3/9,0,LOG,00000,"statement: SELECT 1 FROM public.test_1731499428_10031",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:52.307 UTC,,,9765,,673495a1.2625,34,,2024-11-13 12:03:45 UTC,1/0,0,DEBUG,00000,"transaction ID wrap limit is 2147484370, limited by database with OID 1",,,,,"WAL redo at 0/9000028 for XLOG/CHECKPOINT_SHUTDOWN: redo 0/9000028; tli 2; prev tli 2; fpw true; xid 0:741; oid 16395; multi 1; offset 0; oldest xid 723 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown",,,,"","startup",,0
2219s 2024-11-13 12:03:52.309 UTC,,,9809,,673495a7.2651,2,,2024-11-13 12:03:51 UTC,,0,LOG,00000,"replication terminated by primary server","End of WAL reached on timeline 2 at 0/90000A0.",,,,,,,,"","walreceiver",,0
2219s 2024-11-13 12:03:52.309 UTC,,,9809,,673495a7.2651,3,,2024-11-13 12:03:51 UTC,,0,FATAL,08006,"could not send end-of-streaming message to primary: SSL connection has been closed unexpectedly
2219s no COPY in progress",,,,,,,,,"","walreceiver",,0
2219s 2024-11-13 12:03:52.484 UTC,,,9765,,673495a1.2625,35,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/90000B8",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:52.894 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,6,"idle",2024-11-13 12:03:46 UTC,2/8,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:53.263 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,7,"idle",2024-11-13 12:03:46 UTC,2/9,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0
2219s 2024-11-13 12:03:53.297 UTC,"postgres","postgres",9853,"[local]",673495a9.267d,1,"idle",2024-11-13 12:03:53 UTC,4/2,0,LOG,00000,"statement: SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri",,,,,,,,,"Patroni restapi","client backend",,0
2219s 2024-11-13 12:03:53.303 UTC,"replicator","",9855,"[local]",673495a9.267f,1,"idle",2024-11-13 12:03:53 UTC,5/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0
2219s 2024-11-13 12:03:53.481 UTC,,,9765,,673495a1.2625,36,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"invalid record length at 0/90000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:53.481 UTC,,,9765,,673495a1.2625,37,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"received promote request",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:53.482 UTC,,,9765,,673495a1.2625,38,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"redo done at 0/9000028 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 7.38 s",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:53.482 UTC,,,9765,,673495a1.2625,39,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"last completed transaction was at log time 2024-11-13 12:03:48.100986+00",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:53.586 UTC,,,9765,,673495a1.2625,40,,2024-11-13 12:03:45 UTC,1/0,0,DEBUG,00000,"resetting unlogged relations: cleanup 0 init 1",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:53.702 UTC,,,9765,,673495a1.2625,41,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"selected new timeline ID: 3",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:53.720 UTC,,,9765,,673495a1.2625,42,,2024-11-13 12:03:45 UTC,1/0,0,DEBUG,58P01,"could not remove file ""pg_wal/000000030000000000000009"": No such file or directory",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:53.755 UTC,,,9765,,673495a1.2625,43,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"restored log file ""00000002.history"" from archive",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:53.756 UTC,,,9765,,673495a1.2625,44,,2024-11-13 12:03:45 UTC,1/0,0,LOG,00000,"archive recovery complete",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:53.757 UTC,,,9765,,673495a1.2625,45,,2024-11-13 12:03:45 UTC,1/0,0,DEBUG,00000,"MultiXactId wrap limit is 2147483648, limited by database with OID 1",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:53.757 UTC,,,9765,,673495a1.2625,46,,2024-11-13 12:03:45 UTC,1/0,0,DEBUG,00000,"MultiXact member stop limit is now 4294914944 based on MultiXact 1",,,,,,,,,"","startup",,0
2219s 2024-11-13 12:03:53.761 UTC,,,9763,,673495a1.2623,1,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"checkpoint starting: force",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:53.761 UTC,,,9763,,673495a1.2623,2,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"performing replication slot checkpoint",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:53.762 UTC,,,9878,,673495a9.2696,1,,2024-11-13 12:03:53 UTC,,0,DEBUG,00000,"autovacuum launcher started",,,,,,,,,"","autovacuum launcher",,0
2219s 2024-11-13 12:03:53.762 UTC,,,9760,,673495a1.2620,6,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"starting background worker process ""logical replication launcher""",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:53.763 UTC,,,9880,,673495a9.2698,1,,2024-11-13 12:03:53 UTC,,0,DEBUG,00000,"logical replication launcher started",,,,,,,,,"","logical replication launcher",,0
2219s 2024-11-13 12:03:53.766 UTC,,,9760,,673495a1.2620,7,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"database system is ready to accept connections",,,,,,,,,"","postmaster",,0
2219s 2024-11-13 12:03:53.816 UTC,,,9879,,673495a9.2697,1,,2024-11-13 12:03:53 UTC,,0,DEBUG,00000,"archived write-ahead log file ""00000003.history""",,,,,,,,,"","archiver",,0
2219s 2024-11-13 12:03:53.847 UTC,,,9879,,673495a9.2697,2,,2024-11-13 12:03:53 UTC,,0,DEBUG,00000,"archived write-ahead log file ""000000020000000000000009.partial""",,,,,,,,,"","archiver",,0
2219s 2024-11-13 12:03:54.145 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,16,"idle",2024-11-13 12:03:47 UTC,3/10,0,LOG,00000,"statement: SELECT pg_is_in_recovery()",,,,,,,,,"","client backend",,0
2219s 2024-11-13 12:03:54.272 UTC,,,9763,,673495a1.2623,3,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=1 file=base/5/2703 time=0.546 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:54.272 UTC,,,9763,,673495a1.2623,4,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=2 file=pg_multixact/offsets/0000 time=0.174 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:54.273 UTC,,,9763,,673495a1.2623,5,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=3 file=base/5/1259 time=0.532 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:54.273 UTC,,,9763,,673495a1.2623,6,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=4 file=base/5/2673 time=0.165 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:54.273 UTC,,,9763,,673495a1.2623,7,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=5 file=base/5/1249_fsm time=0.243 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:54.274 UTC,,,9763,,673495a1.2623,8,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=6 file=base/5/2663 time=0.172 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:54.274 UTC,,,9763,,673495a1.2623,9,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=7 file=base/5/1247 time=0.137 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:54.274 UTC,,,9763,,673495a1.2623,10,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=8 file=base/5/1249_vm time=0.173 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:54.274 UTC,,,9763,,673495a1.2623,11,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=9 file=base/5/2659 time=0.021 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:54.274 UTC,,,9763,,673495a1.2623,12,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=10 file=base/5/2704 time=0.146 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:54.274 UTC,,,9763,,673495a1.2623,13,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=11 file=base/5/2608 time=0.137 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:54.274 UTC,,,9763,,673495a1.2623,14,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=12 file=base/5/16392 time=0.020 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:54.275 UTC,,,9763,,673495a1.2623,15,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=13 file=base/5/2608_vm time=0.142 ms",,,,,,,,,"","checkpointer",,0
2219s 2024-11-13 12:03:54.275 UTC,,,9763,,673495a1.2623,16,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=14 file=base/5/3455 time=0.141 ms",,,,,,,,,"","checkpointer",,0
2219s
2024-11-13 12:03:54.275 UTC,,,9763,,673495a1.2623,17,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=15 file=base/5/2674 time=0.139 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:54.275 UTC,,,9763,,673495a1.2623,18,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=16 file=base/5/1249 time=0.140 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:54.275 UTC,,,9763,,673495a1.2623,19,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=17 file=base/5/16389 time=0.026 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:54.275 UTC,,,9763,,673495a1.2623,20,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=18 file=base/5/2658 time=0.027 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:54.275 UTC,,,9763,,673495a1.2623,21,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=19 file=pg_xact/0000 time=0.153 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:54.275 UTC,,,9763,,673495a1.2623,22,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"checkpoint sync: number=20 file=base/5/2662 time=0.023 ms",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:54.276 UTC,,,9763,,673495a1.2623,23,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"checkpoint complete: wrote 8 buffers (6.2%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.506 s, sync=0.004 s, total=0.516 s; sync files=20, longest=0.001 s, average=0.001 s; distance=49152 kB, estimate=49152 kB; lsn=0/9000108, redo lsn=0/90000D0",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:03:54.383 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,8,"idle",2024-11-13 12:03:46 UTC,2/10,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, 
pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:54.387 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,9,"idle",2024-11-13 12:03:46 UTC,2/11,0,LOG,00000,"statement: SELECT pg_catalog.pg_create_physical_replication_slot('postgres1', true) WHERE NOT EXISTS (SELECT 1 FROM pg_catalog.pg_replication_slots WHERE slot_type = 'physical' AND slot_name = 'postgres1')",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:54.392 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,10,"idle",2024-11-13 12:03:46 UTC,2/12,0,LOG,00000,"statement: SELECT pg_catalog.pg_create_physical_replication_slot('postgres2', true) WHERE NOT EXISTS (SELECT 1 FROM pg_catalog.pg_replication_slots WHERE slot_type = 'physical' AND slot_name = 'postgres2')",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:54.399 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,11,"idle",2024-11-13 12:03:46 UTC,2/13,0,LOG,00000,"statement: SELECT pg_catalog.pg_create_physical_replication_slot('postgres0', true) WHERE NOT EXISTS (SELECT 1 FROM pg_catalog.pg_replication_slots WHERE slot_type = 'physical' AND slot_name = 'postgres0')",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:55.295 UTC,"rewind_user","postgres",9889,"127.0.0.1:48010",673495ab.26a1,1,"idle",2024-11-13 12:03:55 UTC,6/2,0,LOG,00000,"statement: SELECT pg_catalog.pg_is_in_recovery()",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:55.308 
UTC,"replicator","",9893,"127.0.0.1:48012",673495ab.26a5,1,"idle",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:55.308 UTC,"replicator","",9893,"127.0.0.1:48012",673495ab.26a5,2,"idle",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"received replication command: TIMELINE_HISTORY 3",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:55.386 UTC,"rewind_user","postgres",9901,"127.0.0.1:48016",673495ab.26ad,1,"idle",2024-11-13 12:03:55 UTC,6/5,0,LOG,00000,"statement: SELECT pg_catalog.pg_is_in_recovery()",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:55.414 UTC,"replicator","",9902,"127.0.0.1:48020",673495ab.26ae,1,"idle",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:55.415 UTC,"replicator","",9902,"127.0.0.1:48020",673495ab.26ae,2,"idle",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"received replication command: TIMELINE_HISTORY 3",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:55.465 UTC,"replicator","",9909,"127.0.0.1:48028",673495ab.26b5,1,"idle",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:55.465 UTC,"replicator","",9909,"127.0.0.1:48028",673495ab.26b5,2,"idle",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"received replication command: TIMELINE_HISTORY 3",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:55.467 UTC,"replicator","",9909,"127.0.0.1:48028",673495ab.26b5,3,"idle",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""postgres1"" 0/9000000 TIMELINE 2",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:55.467 UTC,"replicator","",9909,"127.0.0.1:48028",673495ab.26b5,4,"streaming 0/90000A0",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"walsender reached end of timeline at 0/90000A0 (sent up to 
0/90000A0)",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:55.467 UTC,"replicator","",9909,"127.0.0.1:48028",673495ab.26b5,5,"streaming 0/90000A0",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"""postgres1"" has now caught up with upstream server",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:55.471 UTC,"replicator","",9909,"127.0.0.1:48028",673495ab.26b5,6,"streaming 0/90000A0",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:55.835 UTC,"replicator","",9929,"127.0.0.1:48038",673495ab.26c9,1,"idle",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:55.835 UTC,"replicator","",9929,"127.0.0.1:48038",673495ab.26c9,2,"idle",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"received replication command: START_REPLICATION SLOT ""postgres1"" 0/9000000 TIMELINE 3",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:55.836 UTC,"replicator","",9929,"127.0.0.1:48038",673495ab.26c9,3,"streaming 0/9000180",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"""postgres1"" has now caught up with upstream server",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:55.842 UTC,"replicator","",9929,"127.0.0.1:48038",673495ab.26c9,4,"streaming 0/9000180",2024-11-13 12:03:55 UTC,6/0,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"postgres1","walsender",,0 2219s 2024-11-13 12:03:56.312 UTC,"rewind_user","postgres",9931,"127.0.0.1:48046",673495ac.26cb,1,"idle",2024-11-13 12:03:56 UTC,7/2,0,LOG,00000,"statement: SELECT pg_catalog.pg_is_in_recovery()",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:56.326 UTC,"replicator","",9932,"127.0.0.1:48056",673495ac.26cc,1,"idle",2024-11-13 12:03:56 UTC,7/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:56.326 
UTC,"replicator","",9932,"127.0.0.1:48056",673495ac.26cc,2,"idle",2024-11-13 12:03:56 UTC,7/0,0,DEBUG,00000,"received replication command: TIMELINE_HISTORY 3",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:56.417 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,12,"idle",2024-11-13 12:03:46 UTC,2/14,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:56.417 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,13,"idle",2024-11-13 12:03:46 UTC,2/15,0,LOG,00000,"statement: SELECT slot_name, slot_type, pg_catalog.pg_wal_lsn_diff(restart_lsn, '0/0')::bigint, plugin, database, datoid, catalog_xmin, pg_catalog.pg_wal_lsn_diff(confirmed_flush_lsn, '0/0')::bigint FROM pg_catalog.pg_replication_slots WHERE NOT temporary",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:03:56.607 UTC,"replicator","",9942,"127.0.0.1:48058",673495ac.26d6,1,"idle",2024-11-13 12:03:56 UTC,7/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"postgres2","walsender",,0 2219s 2024-11-13 12:03:56.607 UTC,"replicator","",9942,"127.0.0.1:48058",673495ac.26d6,2,"idle",2024-11-13 12:03:56 UTC,7/0,0,DEBUG,00000,"received replication command: 
START_REPLICATION SLOT ""postgres2"" 0/9000000 TIMELINE 3",,,,,,,,,"postgres2","walsender",,0 2219s 2024-11-13 12:03:56.607 UTC,"replicator","",9942,"127.0.0.1:48058",673495ac.26d6,3,"streaming 0/9000180",2024-11-13 12:03:56 UTC,7/0,0,DEBUG,00000,"""postgres2"" has now caught up with upstream server",,,,,,,,,"postgres2","walsender",,0 2219s 2024-11-13 12:03:56.607 UTC,"replicator","",9942,"127.0.0.1:48058",673495ac.26d6,4,"streaming 0/9000180",2024-11-13 12:03:56 UTC,7/0,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"postgres2","walsender",,0 2219s 2024-11-13 12:03:57.269 UTC,"rewind_user","postgres",9951,"127.0.0.1:48066",673495ad.26df,1,"idle",2024-11-13 12:03:57 UTC,8/2,0,LOG,00000,"statement: SELECT pg_catalog.pg_is_in_recovery()",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:03:57.281 UTC,"replicator","",9952,"127.0.0.1:48076",673495ad.26e0,1,"idle",2024-11-13 12:03:57 UTC,8/0,0,DEBUG,00000,"received replication command: IDENTIFY_SYSTEM",,,,,,,,,"","walsender",,0 2219s 2024-11-13 12:03:58.385 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,14,"idle",2024-11-13 12:03:46 UTC,2/16,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 
12:04:00.386 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,15,"idle",2024-11-13 12:03:46 UTC,2/17,0,LOG,00000,"statement: SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:04:00.458 UTC,"postgres","postgres",9971,"[local]",673495b0.26f3,1,"idle",2024-11-13 12:04:00 UTC,8/5,0,LOG,00000,"statement: SET statement_timeout = 0",,,,,,,,,"Patroni","client backend",,0 2219s 2024-11-13 12:04:00.458 UTC,"postgres","postgres",9971,"[local]",673495b0.26f3,2,"idle",2024-11-13 12:04:00 UTC,8/6,0,LOG,00000,"statement: CHECKPOINT",,,,,,,,,"Patroni","client backend",,0 2219s 2024-11-13 12:04:00.458 UTC,,,9763,,673495a1.2623,24,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"checkpoint starting: immediate force wait",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:00.458 UTC,,,9763,,673495a1.2623,25,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"performing replication slot checkpoint",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:00.463 UTC,,,9763,,673495a1.2623,26,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.006 s; sync files=0, longest=0.000 s, average=0.000 s; distance=0 kB, 
estimate=44236 kB; lsn=0/90001B8, redo lsn=0/9000180",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:00.464 UTC,,,9760,,673495a1.2620,8,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"received fast shutdown request",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:04:00.465 UTC,,,9760,,673495a1.2620,9,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"aborting any active transactions",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:04:00.465 UTC,"postgres","postgres",9853,"[local]",673495a9.267d,2,"idle",2024-11-13 12:03:53 UTC,4/0,0,FATAL,57P01,"terminating connection due to administrator command",,,,,,,,,"Patroni restapi","client backend",,0 2219s 2024-11-13 12:04:00.465 UTC,"postgres","postgres",9796,"127.0.0.1:59578",673495a3.2644,17,"idle",2024-11-13 12:03:47 UTC,3/0,0,FATAL,57P01,"terminating connection due to administrator command",,,,,,,,,"","client backend",,0 2219s 2024-11-13 12:04:00.467 UTC,,,9880,,673495a9.2698,2,,2024-11-13 12:03:53 UTC,5/0,0,DEBUG,00000,"logical replication launcher shutting down",,,,,,,,,"","logical replication launcher",,0 2219s 2024-11-13 12:04:00.468 UTC,,,9760,,673495a1.2620,10,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"background worker ""logical replication launcher"" (PID 9880) exited with exit code 1",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:04:00.468 UTC,"postgres","postgres",9793,"[local]",673495a2.2641,16,"idle",2024-11-13 12:03:46 UTC,2/0,0,FATAL,57P01,"terminating connection due to administrator command",,,,,,,,,"Patroni heartbeat","client backend",,0 2219s 2024-11-13 12:04:00.470 UTC,,,9878,,673495a9.2696,2,,2024-11-13 12:03:53 UTC,1/0,0,DEBUG,00000,"autovacuum launcher shutting down",,,,,,,,,"","autovacuum launcher",,0 2219s 2024-11-13 12:04:00.473 UTC,,,9763,,673495a1.2623,27,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"shutting down",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:00.503 UTC,,,9763,,673495a1.2623,28,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"checkpoint starting: shutdown immediate",,,,,,,,,"","checkpointer",,0 2219s 
2024-11-13 12:04:00.503 UTC,,,9763,,673495a1.2623,29,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"performing replication slot checkpoint",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:00.550 UTC,,,9879,,673495a9.2697,3,,2024-11-13 12:03:53 UTC,,0,DEBUG,00000,"archived write-ahead log file ""000000030000000000000009""",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:04:00.569 UTC,,,9763,,673495a1.2623,30,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.001 s, total=0.071 s; sync files=0, longest=0.000 s, average=0.000 s; distance=16383 kB, estimate=41451 kB; lsn=0/A000028, redo lsn=0/A000028",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:00.572 UTC,,,9879,,673495a9.2697,4,,2024-11-13 12:03:53 UTC,,0,DEBUG,00000,"archiver process shutting down",,,,,,,,,"","archiver",,0 2219s 2024-11-13 12:04:00.599 UTC,,,9760,,673495a1.2620,11,,2024-11-13 12:03:45 UTC,,0,LOG,00000,"database system is shut down",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:04:00.606 UTC,,,9761,,673495a1.2621,1,,2024-11-13 12:03:45 UTC,,0,DEBUG,00000,"logger shutting down",,,,,,,,,"","logger",,0 2219s 2024-11-13 12:04:03.828 UTC,,,10005,,673495b3.2715,1,,2024-11-13 12:04:03 UTC,,0,LOG,00000,"ending log output to stderr",,"Future log output will go to log destination ""csvlog"".",,,,,,,"","postmaster",,0 2219s 2024-11-13 12:04:03.828 UTC,,,10005,,673495b3.2715,2,,2024-11-13 12:04:03 UTC,,0,LOG,00000,"starting PostgreSQL 16.4 (Ubuntu 16.4-3) on s390x-ibm-linux-gnu, compiled by gcc (Ubuntu 14.2.0-7ubuntu1) 14.2.0, 64-bit",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:04:03.828 UTC,,,10005,,673495b3.2715,3,,2024-11-13 12:04:03 UTC,,0,LOG,00000,"listening on IPv4 address ""127.0.0.1"", port 5385",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:04:03.832 UTC,,,10005,,673495b3.2715,4,,2024-11-13 12:04:03 UTC,,0,LOG,00000,"listening on Unix socket ""/tmp/.s.PGSQL.5385""",,,,,,,,,"","postmaster",,0 2219s 
2024-11-13 12:04:03.835 UTC,,,10010,,673495b3.271a,1,,2024-11-13 12:04:03 UTC,,0,LOG,00000,"database system was shut down at 2024-11-13 12:04:00 UTC",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:03.929 UTC,,,10010,,673495b3.271a,2,,2024-11-13 12:04:03 UTC,,0,LOG,00000,"entering standby mode",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:03.952 UTC,,,10010,,673495b3.271a,3,,2024-11-13 12:04:03 UTC,,0,LOG,00000,"restored log file ""00000003.history"" from archive",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.049 UTC,,,10010,,673495b3.271a,4,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"checkpoint record is at 0/A000028",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.049 UTC,,,10010,,673495b3.271a,5,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"redo record is at 0/A000028; shutdown true",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.049 UTC,,,10010,,673495b3.271a,6,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"next transaction ID: 741; next OID: 16395",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.049 UTC,,,10010,,673495b3.271a,7,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"next MultiXactId: 1; next MultiXactOffset: 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.049 UTC,,,10010,,673495b3.271a,8,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"oldest unfrozen transaction ID: 723, in database 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.049 UTC,,,10010,,673495b3.271a,9,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"oldest MultiXactId: 1, in database 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.049 UTC,,,10010,,673495b3.271a,10,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"commit timestamp Xid oldest/newest: 0/0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.049 UTC,,,10010,,673495b3.271a,11,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"transaction ID wrap limit is 2147484370, limited by database with OID 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.049 UTC,,,10010,,673495b3.271a,12,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"MultiXactId wrap limit is 2147483648, 
limited by database with OID 1",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.049 UTC,,,10010,,673495b3.271a,13,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"starting up replication slots",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.049 UTC,,,10010,,673495b3.271a,14,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"restoring replication slot from ""pg_replslot/postgres0/state""",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.050 UTC,,,10010,,673495b3.271a,15,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"restoring replication slot from ""pg_replslot/postgres1/state""",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.050 UTC,,,10010,,673495b3.271a,16,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"restoring replication slot from ""pg_replslot/postgres2/state""",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.051 UTC,,,10010,,673495b3.271a,17,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"xmin required by slots: data 0, catalog 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.053 UTC,,,10010,,673495b3.271a,18,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"resetting unlogged relations: cleanup 1 init 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.053 UTC,,,10010,,673495b3.271a,19,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"initializing for hot standby",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.053 UTC,,,10010,,673495b3.271a,20,,2024-11-13 12:04:03 UTC,1/0,0,DEBUG,00000,"recovery snapshots are now enabled",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.053 UTC,,,10010,,673495b3.271a,21,,2024-11-13 12:04:03 UTC,1/0,0,LOG,00000,"consistent recovery state reached at 0/A0000A0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.053 UTC,,,10010,,673495b3.271a,22,,2024-11-13 12:04:03 UTC,1/0,0,LOG,00000,"invalid record length at 0/A0000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.053 UTC,,,10005,,673495b3.2715,5,,2024-11-13 12:04:03 UTC,,0,LOG,00000,"database system is ready to accept read-only connections",,,,,,,,,"","postmaster",,0 2219s 
2024-11-13 12:04:04.056 UTC,,,10017,,673495b4.2721,1,,2024-11-13 12:04:04 UTC,,0,FATAL,08006,"could not connect to the primary server: connection to server at ""127.0.0.1"", port 5383 failed: Connection refused 2219s Is the server running on that host and accepting TCP/IP connections?",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:04:04.259 UTC,,,10010,,673495b3.271a,23,,2024-11-13 12:04:03 UTC,1/0,0,DEBUG,00000,"invalid record length at 0/A0000A0: expected at least 24, got 0",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.261 UTC,,,10022,,673495b4.2726,1,,2024-11-13 12:04:04 UTC,,0,FATAL,08006,"could not connect to the primary server: connection to server at ""127.0.0.1"", port 5383 failed: Connection refused 2219s Is the server running on that host and accepting TCP/IP connections?",,,,,,,,,"","walreceiver",,0 2219s 2024-11-13 12:04:04.357 UTC,,,10010,,673495b3.271a,24,,2024-11-13 12:04:03 UTC,1/0,0,LOG,00000,"waiting for WAL to become available at 0/A0000B8",,,,,,,,,"","startup",,0 2219s 2024-11-13 12:04:04.787 UTC,,,10005,,673495b3.2715,6,,2024-11-13 12:04:03 UTC,,0,LOG,00000,"received fast shutdown request",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:04:04.788 UTC,,,10005,,673495b3.2715,7,,2024-11-13 12:04:03 UTC,,0,LOG,00000,"aborting any active transactions",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:04:04.790 UTC,,,10008,,673495b3.2718,1,,2024-11-13 12:04:03 UTC,,0,LOG,00000,"shutting down",,,,,,,,,"","checkpointer",,0 2219s 2024-11-13 12:04:04.794 UTC,,,10005,,673495b3.2715,8,,2024-11-13 12:04:03 UTC,,0,LOG,00000,"database system is shut down",,,,,,,,,"","postmaster",,0 2219s 2024-11-13 12:04:04.795 UTC,,,10007,,673495b3.2717,1,,2024-11-13 12:04:03 UTC,,0,DEBUG,00000,"logger shutting down",,,,,,,,,"","logger",,0 2219s features/output/priority_replication_failed/postgres3.log: 2219s 2024-11-13 12:03:45.861 UTC [9760] LOG: ending log output to stderr 2219s 2024-11-13 12:03:45.861 UTC [9760] HINT: Future log output will go to log destination 
"csvlog". 2219s + for file in features/output/*_failed/* 2219s + case $file in 2219s + echo features/output/priority_replication_failed/postgres3.log: 2219s + cat features/output/priority_replication_failed/postgres3.log 2219s + for file in features/output/*_failed/* 2219s + case $file in 2219s + echo features/output/priority_replication_failed/postgres3.yml: 2219s + cat features/output/priority_replication_failed/postgres3.yml 2219s + exit 1 2219s + rm -f '/tmp/pgpass?' 2219s ++ id -u 2219s + '[' 0 -eq 0 ']' 2219s + '[' -x /etc/init.d/zookeeper ']' 2219s + /etc/init.d/zookeeper stop 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module> 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module> 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000007' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module> 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, 
follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000007' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module> 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module> 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000007' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module> 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s 
Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module> 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000009' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module> 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000009' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module> 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003.history' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module> 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s 
File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000009' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/000000020000000000000009' 2219s Traceback (most recent call last): 2219s File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in 2219s shutil.copy(full_filename, args.pathname) 2219s File "/usr/lib/python3.12/shutil.py", line 435, in copy 2219s copyfile(src, dst, follow_symlinks=follow_symlinks) 2219s File "/usr/lib/python3.12/shutil.py", line 260, in copyfile 2219s with open(src, 'rb') as fsrc: 2219s ^^^^^^^^^^^^^^^ 2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003.history' 2219s 2024-11-13 12:04:00.606 UTC [9761] DEBUG: logger shutting down 2219s 2024-11-13 12:04:03.828 UTC [10005] LOG: ending log output to stderr 2219s 2024-11-13 12:04:03.828 UTC [10005] HINT: Future log output will go to log destination "csvlog". 
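The tracebacks above come from the test suite's archive/restore helper running as PostgreSQL's restore_command. During recovery PostgreSQL routinely requests history and WAL files that were never archived, and signals "file not available" purely through the command's nonzero exit status, so these FileNotFoundError entries are expected noise rather than test failures. A minimal sketch of such a wrapper (hypothetical, not the actual features/archive-restore.py) that reports a missing file quietly instead of dumping a traceback:

```python
# Hypothetical sketch of a restore_command helper: copy a requested WAL
# segment or history file out of an archive directory, exiting nonzero
# (without a traceback) when the file was never archived. PostgreSQL
# treats the nonzero exit as "file not available" and keeps probing.
import os
import shutil
import sys


def restore(dirname: str, filename: str, pathname: str) -> int:
    """Copy <dirname>/<filename> to <pathname>; return a shell exit code."""
    full_filename = os.path.join(dirname, filename)
    try:
        shutil.copy(full_filename, pathname)
    except FileNotFoundError:
        # Expected while PostgreSQL probes for timeline history files;
        # report it briefly and let the exit status carry the result.
        print(f"{full_filename}: not found in archive", file=sys.stderr)
        return 1
    return 0


if __name__ == "__main__" and len(sys.argv) >= 4:
    sys.exit(restore(sys.argv[1], sys.argv[2], sys.argv[3]))
```

With a wrapper like this, a probe for a nonexistent 00000003.history simply returns exit status 1, which is all the server needs to move on.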
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000004.history'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003000000000000000A'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000004.history'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000003000000000000000A'
2219s Traceback (most recent call last):
2219s   File "/tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py", line 21, in <module>
2219s     shutil.copy(full_filename, args.pathname)
2219s   File "/usr/lib/python3.12/shutil.py", line 435, in copy
2219s     copyfile(src, dst, follow_symlinks=follow_symlinks)
2219s   File "/usr/lib/python3.12/shutil.py", line 260, in copyfile
2219s     with open(src, 'rb') as fsrc:
2219s          ^^^^^^^^^^^^^^^
2219s FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive/00000004.history'
2219s 2024-11-13 12:04:04.795 UTC [10007] DEBUG:  logger shutting down
2219s features/output/priority_replication_failed/postgres3.yml:
2219s bootstrap:
2219s   dcs:
2219s     loop_wait: 2
2219s     maximum_lag_on_failover: 1048576
2219s     postgresql:
2219s       parameters:
2219s         archive_command: /usr/bin/python3 /tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py
2219s           --mode archive --dirname /tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive
2219s           --filename %f --pathname %p
2219s         archive_mode: 'on'
2219s         restore_command: /usr/bin/python3 /tmp/autopkgtest.FwqS2V/build.hfu/src/features/archive-restore.py
2219s           --mode restore --dirname /tmp/autopkgtest.FwqS2V/build.hfu/src/data/wal_archive
2219s           --filename %f --pathname %p
2219s         wal_keep_segments: 100
2219s       pg_hba:
2219s       - host replication replicator 127.0.0.1/32 md5
2219s       - host all all 0.0.0.0/0 md5
2219s       use_pg_rewind: true
2219s     retry_timeout: 10
2219s     ttl: 30
2219s   initdb:
2219s   - encoding: UTF8
2219s   - data-checksums
2219s   - auth: md5
2219s   - auth-host: md5
2219s   post_bootstrap: psql -w -c "SELECT 1"
2219s log:
2219s   format: '%(asctime)s %(levelname)s [%(pathname)s:%(lineno)d - %(funcName)s]:
2219s     %(message)s'
2219s   loggers:
2219s     patroni.postgresql.callback_executor: DEBUG
2219s name: postgres3
2219s postgresql:
2219s   authentication:
2219s     replication:
2219s       password: rep-pass
2219s       sslcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       sslkey: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key
2219s       sslmode: verify-ca
2219s       sslrootcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       username: replicator
2219s     rewind:
2219s       password: rewind_password
2219s       sslcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       sslkey: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key
2219s       sslmode: verify-ca
2219s       sslrootcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       username: rewind_user
2219s     superuser:
2219s       password: patroni
2219s       sslcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       sslkey: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key
2219s       sslmode: verify-ca
2219s       sslrootcert: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s       username: postgres
2219s   basebackup:
2219s   - checkpoint: fast
2219s   callbacks:
2219s     on_role_change: /usr/bin/python3 features/callback2.py postgres3 5385
2219s   connect_address: 127.0.0.1:5385
2219s   data_dir: /tmp/autopkgtest.FwqS2V/build.hfu/src/data/postgres3
2219s   listen: 127.0.0.1:5385
2219s   parameters:
2219s     log_destination: csvlog
2219s     log_directory: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/priority_replication
2219s     log_filename: postgres3.log
2219s     log_min_messages: debug1
2219s     log_statement: all
2219s     logging_collector: 'on'
2219s     shared_buffers: 1MB
2219s     ssl: 'on'
2219s     ssl_ca_file: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s     ssl_cert_file: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.crt
2219s     ssl_key_file: /tmp/autopkgtest.FwqS2V/build.hfu/src/features/output/patroni.key
2219s     unix_socket_directories: /tmp
2219s   pg_hba:
2219s   - local all all trust
2219s   - local replication all trust
2219s   - hostssl replication replicator all md5 clientcert=verify-ca
2219s   - hostssl all all all md5 clientcert=verify-ca
2219s   pgpass: /tmp/pgpass_postgres3
2219s   use_unix_socket: true
2219s   use_unix_socket_repl: true
2219s restapi:
2219s   connect_address: 127.0.0.1:8011
2219s   listen: 127.0.0.1:8011
2219s scope: batman
2219s tags:
2219s   clonefrom: false
2219s   failover_priority: '2'
2219s   noloadbalance: false
2219s   nostream: false
2219s   nosync: false
2219s Stopping zookeeper (via systemctl): zookeeper.service.
2220s autopkgtest [12:06:30]: test acceptance-zookeeper: -----------------------]
2220s autopkgtest [12:06:30]: test acceptance-zookeeper:  - - - - - - - - - - results - - - - - - - - - -
2220s acceptance-zookeeper     FLAKY non-zero exit status 1
2221s autopkgtest [12:06:31]: test acceptance-raft: preparing testbed
2352s autopkgtest [12:08:42]: testbed dpkg architecture: s390x
2352s autopkgtest [12:08:42]: testbed apt version: 2.9.8
2352s autopkgtest [12:08:42]: @@@@@@@@@@@@@@@@@@@@ test bed setup
2353s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease [73.9 kB]
2353s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/universe Sources [849 kB]
2353s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/restricted Sources [7016 B]
2353s Get:4 http://ftpmaster.internal/ubuntu plucky-proposed/main Sources [76.4 kB]
2353s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse Sources [15.3 kB]
2353s Get:6 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x Packages [85.8 kB]
2353s Get:7 http://ftpmaster.internal/ubuntu plucky-proposed/universe s390x Packages [565 kB]
2353s Get:8 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse s390x Packages [16.6 kB]
2353s Fetched 1689 kB in 1s (2220 kB/s)
2354s Reading package lists...
2355s Reading package lists...
2355s Building dependency tree...
2355s Reading state information...
2356s Calculating upgrade...
2356s The following NEW packages will be installed:
2356s   python3.13-gdbm
2356s The following packages will be upgraded:
2356s   libgpgme11t64 libpython3-stdlib python3 python3-gdbm python3-minimal
2356s 5 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
2356s Need to get 252 kB of archives.
2356s After this operation, 98.3 kB of additional disk space will be used.
2356s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3-minimal s390x 3.12.7-1 [27.4 kB]
2356s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3 s390x 3.12.7-1 [24.0 kB]
2356s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libpython3-stdlib s390x 3.12.7-1 [10.0 kB]
2356s Get:4 http://ftpmaster.internal/ubuntu plucky/main s390x python3.13-gdbm s390x 3.13.0-2 [31.0 kB]
2356s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3-gdbm s390x 3.12.7-1 [8642 B]
2356s Get:6 http://ftpmaster.internal/ubuntu plucky/main s390x libgpgme11t64 s390x 1.23.2-5ubuntu4 [151 kB]
2356s Fetched 252 kB in 0s (633 kB/s)
2357s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 55510 files and directories currently installed.)
2357s Preparing to unpack .../python3-minimal_3.12.7-1_s390x.deb ...
2357s Unpacking python3-minimal (3.12.7-1) over (3.12.6-0ubuntu1) ...
2357s Setting up python3-minimal (3.12.7-1) ...
2357s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 55510 files and directories currently installed.)
2357s Preparing to unpack .../python3_3.12.7-1_s390x.deb ...
2357s Unpacking python3 (3.12.7-1) over (3.12.6-0ubuntu1) ...
2357s Preparing to unpack .../libpython3-stdlib_3.12.7-1_s390x.deb ...
2357s Unpacking libpython3-stdlib:s390x (3.12.7-1) over (3.12.6-0ubuntu1) ...
2357s Selecting previously unselected package python3.13-gdbm.
2357s Preparing to unpack .../python3.13-gdbm_3.13.0-2_s390x.deb ...
2357s Unpacking python3.13-gdbm (3.13.0-2) ...
2357s Preparing to unpack .../python3-gdbm_3.12.7-1_s390x.deb ...
2357s Unpacking python3-gdbm:s390x (3.12.7-1) over (3.12.6-1ubuntu1) ...
2357s Preparing to unpack .../libgpgme11t64_1.23.2-5ubuntu4_s390x.deb ...
2357s Unpacking libgpgme11t64:s390x (1.23.2-5ubuntu4) over (1.18.0-4.1ubuntu4) ...
2357s Setting up libgpgme11t64:s390x (1.23.2-5ubuntu4) ...
2357s Setting up python3.13-gdbm (3.13.0-2) ...
2357s Setting up libpython3-stdlib:s390x (3.12.7-1) ...
2357s Setting up python3 (3.12.7-1) ...
2357s Setting up python3-gdbm:s390x (3.12.7-1) ...
2357s Processing triggers for man-db (2.12.1-3) ...
2357s Processing triggers for libc-bin (2.40-1ubuntu3) ...
2358s Reading package lists...
2358s Building dependency tree...
2358s Reading state information...
2358s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2358s Hit:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease
2358s Hit:2 http://ftpmaster.internal/ubuntu plucky InRelease
2358s Hit:3 http://ftpmaster.internal/ubuntu plucky-updates InRelease
2358s Hit:4 http://ftpmaster.internal/ubuntu plucky-security InRelease
2359s Reading package lists...
2359s Reading package lists...
2359s Building dependency tree...
2359s Reading state information...
2359s Calculating upgrade...
2360s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2360s Reading package lists...
2360s Building dependency tree...
2360s Reading state information...
2360s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2365s Reading package lists...
2365s Building dependency tree...
2365s Reading state information...
2366s Starting pkgProblemResolver with broken count: 0
2366s Starting 2 pkgProblemResolver with broken count: 0
2366s Done
2366s The following additional packages will be installed:
2366s   fonts-font-awesome fonts-lato libio-pty-perl libipc-run-perl libjs-jquery
2366s   libjs-sphinxdoc libjs-underscore libjson-perl libpq5 libtime-duration-perl
2366s   libtimedate-perl libxslt1.1 moreutils patroni patroni-doc postgresql
2366s   postgresql-16 postgresql-client-16 postgresql-client-common
2366s   postgresql-common python3-behave python3-cdiff python3-click
2366s   python3-colorama python3-coverage python3-dateutil python3-parse
2366s   python3-parse-type python3-prettytable python3-psutil python3-psycopg2
2366s   python3-pysyncobj python3-six python3-wcwidth python3-ydiff
2366s   sphinx-rtd-theme-common ssl-cert
2366s Suggested packages:
2366s   etcd-server | consul | zookeeperd vip-manager haproxy postgresql-doc
2366s   postgresql-doc-16 python-coverage-doc python-psycopg2-doc
2366s Recommended packages:
2366s   javascript-common libjson-xs-perl
2366s The following NEW packages will be installed:
2366s   autopkgtest-satdep fonts-font-awesome fonts-lato libio-pty-perl
2366s   libipc-run-perl libjs-jquery libjs-sphinxdoc libjs-underscore libjson-perl
2366s   libpq5 libtime-duration-perl libtimedate-perl libxslt1.1 moreutils patroni
2366s   patroni-doc postgresql postgresql-16 postgresql-client-16
2366s   postgresql-client-common postgresql-common python3-behave python3-cdiff
2366s   python3-click python3-colorama python3-coverage python3-dateutil
2366s   python3-parse python3-parse-type python3-prettytable python3-psutil
2366s   python3-psycopg2 python3-pysyncobj python3-six python3-wcwidth python3-ydiff
2366s   sphinx-rtd-theme-common ssl-cert
2366s 0 upgraded, 38 newly installed, 0 to remove and 0 not upgraded.
2366s Need to get 25.1 MB/25.1 MB of archives.
2366s After this operation, 83.3 MB of additional disk space will be used.
2366s Get:1 /tmp/autopkgtest.FwqS2V/5-autopkgtest-satdep.deb autopkgtest-satdep s390x 0 [752 B]
2366s Get:2 http://ftpmaster.internal/ubuntu plucky/main s390x fonts-lato all 2.015-1 [2781 kB]
2367s Get:3 http://ftpmaster.internal/ubuntu plucky/main s390x libjson-perl all 4.10000-1 [81.9 kB]
2367s Get:4 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-client-common all 262 [36.7 kB]
2367s Get:5 http://ftpmaster.internal/ubuntu plucky/main s390x ssl-cert all 1.1.2ubuntu2 [18.0 kB]
2367s Get:6 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-common all 262 [162 kB]
2367s Get:7 http://ftpmaster.internal/ubuntu plucky/main s390x fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB]
2368s Get:8 http://ftpmaster.internal/ubuntu plucky/main s390x libio-pty-perl s390x 1:1.20-1build3 [31.6 kB]
2368s Get:9 http://ftpmaster.internal/ubuntu plucky/main s390x libipc-run-perl all 20231003.0-2 [91.5 kB]
2368s Get:10 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB]
2368s Get:11 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB]
2368s Get:12 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-sphinxdoc all 7.4.7-4 [158 kB]
2368s Get:13 http://ftpmaster.internal/ubuntu plucky/main s390x libpq5 s390x 17.0-1 [252 kB]
2368s Get:14 http://ftpmaster.internal/ubuntu plucky/main s390x libtime-duration-perl all 1.21-2 [12.3 kB]
2368s Get:15 http://ftpmaster.internal/ubuntu plucky/main s390x libtimedate-perl all 2.3300-2 [34.0 kB]
2368s Get:16 http://ftpmaster.internal/ubuntu plucky/main s390x libxslt1.1 s390x 1.1.39-0exp1ubuntu1 [169 kB]
2368s Get:17 http://ftpmaster.internal/ubuntu plucky/universe s390x moreutils s390x 0.69-1 [57.4 kB]
2368s Get:18 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-ydiff all 1.3-1 [18.4 kB]
2368s Get:19 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-cdiff all 1.3-1 [1770 B]
2368s Get:20 http://ftpmaster.internal/ubuntu plucky/main s390x python3-colorama all 0.4.6-4 [32.1 kB]
2368s Get:21 http://ftpmaster.internal/ubuntu plucky/main s390x python3-click all 8.1.7-2 [79.5 kB]
2368s Get:22 http://ftpmaster.internal/ubuntu plucky/main s390x python3-six all 1.16.0-7 [13.1 kB]
2368s Get:23 http://ftpmaster.internal/ubuntu plucky/main s390x python3-dateutil all 2.9.0-2 [80.3 kB]
2368s Get:24 http://ftpmaster.internal/ubuntu plucky/main s390x python3-wcwidth all 0.2.13+dfsg1-1 [26.3 kB]
2368s Get:25 http://ftpmaster.internal/ubuntu plucky/main s390x python3-prettytable all 3.10.1-1 [34.0 kB]
2368s Get:26 http://ftpmaster.internal/ubuntu plucky/main s390x python3-psutil s390x 5.9.8-2build2 [195 kB]
2368s Get:27 http://ftpmaster.internal/ubuntu plucky/main s390x python3-psycopg2 s390x 2.9.9-2 [132 kB]
2368s Get:28 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-pysyncobj all 0.3.12-1 [38.9 kB]
2368s Get:29 http://ftpmaster.internal/ubuntu plucky/universe s390x patroni all 3.3.1-1 [264 kB]
2368s Get:30 http://ftpmaster.internal/ubuntu plucky/main s390x sphinx-rtd-theme-common all 3.0.1+dfsg-1 [1012 kB]
2368s Get:31 http://ftpmaster.internal/ubuntu plucky/universe s390x patroni-doc all 3.3.1-1 [497 kB]
2368s Get:32 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-client-16 s390x 16.4-3 [1294 kB]
2369s Get:33 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql-16 s390x 16.4-3 [16.3 MB]
2370s Get:34 http://ftpmaster.internal/ubuntu plucky/main s390x postgresql all 16+262 [11.8 kB]
2370s Get:35 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-parse all 1.20.2-1 [27.0 kB]
2370s Get:36 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-parse-type all 0.6.4-1 [23.4 kB]
2370s Get:37 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-behave all 1.2.6-6 [98.6 kB]
2370s Get:38 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-coverage s390x 7.4.4+dfsg1-0ubuntu2 [147 kB]
2370s Preconfiguring packages ...
2370s Fetched 25.1 MB in 4s (6066 kB/s)
2370s Selecting previously unselected package fonts-lato.
2370s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 55517 files and directories currently installed.)
2370s Preparing to unpack .../00-fonts-lato_2.015-1_all.deb ...
2370s Unpacking fonts-lato (2.015-1) ...
2371s Selecting previously unselected package libjson-perl.
2371s Preparing to unpack .../01-libjson-perl_4.10000-1_all.deb ...
2371s Unpacking libjson-perl (4.10000-1) ...
2371s Selecting previously unselected package postgresql-client-common.
2371s Preparing to unpack .../02-postgresql-client-common_262_all.deb ...
2371s Unpacking postgresql-client-common (262) ...
2371s Selecting previously unselected package ssl-cert.
2371s Preparing to unpack .../03-ssl-cert_1.1.2ubuntu2_all.deb ...
2371s Unpacking ssl-cert (1.1.2ubuntu2) ...
2371s Selecting previously unselected package postgresql-common.
2371s Preparing to unpack .../04-postgresql-common_262_all.deb ...
2371s Adding 'diversion of /usr/bin/pg_config to /usr/bin/pg_config.libpq-dev by postgresql-common'
2371s Unpacking postgresql-common (262) ...
2371s Selecting previously unselected package fonts-font-awesome.
2371s Preparing to unpack .../05-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ...
2371s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
2371s Selecting previously unselected package libio-pty-perl.
2371s Preparing to unpack .../06-libio-pty-perl_1%3a1.20-1build3_s390x.deb ...
2371s Unpacking libio-pty-perl (1:1.20-1build3) ...
2371s Selecting previously unselected package libipc-run-perl.
2371s Preparing to unpack .../07-libipc-run-perl_20231003.0-2_all.deb ...
2371s Unpacking libipc-run-perl (20231003.0-2) ...
2371s Selecting previously unselected package libjs-jquery.
2371s Preparing to unpack .../08-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ...
2371s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
2371s Selecting previously unselected package libjs-underscore.
2371s Preparing to unpack .../09-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ...
2371s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
2371s Selecting previously unselected package libjs-sphinxdoc.
2371s Preparing to unpack .../10-libjs-sphinxdoc_7.4.7-4_all.deb ...
2371s Unpacking libjs-sphinxdoc (7.4.7-4) ...
2371s Selecting previously unselected package libpq5:s390x.
2371s Preparing to unpack .../11-libpq5_17.0-1_s390x.deb ...
2371s Unpacking libpq5:s390x (17.0-1) ...
2371s Selecting previously unselected package libtime-duration-perl.
2371s Preparing to unpack .../12-libtime-duration-perl_1.21-2_all.deb ...
2371s Unpacking libtime-duration-perl (1.21-2) ...
2371s Selecting previously unselected package libtimedate-perl.
2371s Preparing to unpack .../13-libtimedate-perl_2.3300-2_all.deb ...
2371s Unpacking libtimedate-perl (2.3300-2) ...
2371s Selecting previously unselected package libxslt1.1:s390x.
2371s Preparing to unpack .../14-libxslt1.1_1.1.39-0exp1ubuntu1_s390x.deb ...
2371s Unpacking libxslt1.1:s390x (1.1.39-0exp1ubuntu1) ...
2371s Selecting previously unselected package moreutils.
2371s Preparing to unpack .../15-moreutils_0.69-1_s390x.deb ...
2371s Unpacking moreutils (0.69-1) ...
2371s Selecting previously unselected package python3-ydiff.
2371s Preparing to unpack .../16-python3-ydiff_1.3-1_all.deb ...
2371s Unpacking python3-ydiff (1.3-1) ...
2371s Selecting previously unselected package python3-cdiff.
2371s Preparing to unpack .../17-python3-cdiff_1.3-1_all.deb ...
2371s Unpacking python3-cdiff (1.3-1) ...
2371s Selecting previously unselected package python3-colorama.
2371s Preparing to unpack .../18-python3-colorama_0.4.6-4_all.deb ...
2371s Unpacking python3-colorama (0.4.6-4) ...
2371s Selecting previously unselected package python3-click.
2371s Preparing to unpack .../19-python3-click_8.1.7-2_all.deb ...
2371s Unpacking python3-click (8.1.7-2) ...
2371s Selecting previously unselected package python3-six.
2371s Preparing to unpack .../20-python3-six_1.16.0-7_all.deb ...
2371s Unpacking python3-six (1.16.0-7) ...
2371s Selecting previously unselected package python3-dateutil.
2371s Preparing to unpack .../21-python3-dateutil_2.9.0-2_all.deb ...
2371s Unpacking python3-dateutil (2.9.0-2) ...
2371s Selecting previously unselected package python3-wcwidth.
2371s Preparing to unpack .../22-python3-wcwidth_0.2.13+dfsg1-1_all.deb ...
2371s Unpacking python3-wcwidth (0.2.13+dfsg1-1) ...
2371s Selecting previously unselected package python3-prettytable.
2371s Preparing to unpack .../23-python3-prettytable_3.10.1-1_all.deb ...
2371s Unpacking python3-prettytable (3.10.1-1) ...
2371s Selecting previously unselected package python3-psutil.
2371s Preparing to unpack .../24-python3-psutil_5.9.8-2build2_s390x.deb ...
2371s Unpacking python3-psutil (5.9.8-2build2) ...
2371s Selecting previously unselected package python3-psycopg2.
2371s Preparing to unpack .../25-python3-psycopg2_2.9.9-2_s390x.deb ...
2371s Unpacking python3-psycopg2 (2.9.9-2) ...
2371s Selecting previously unselected package python3-pysyncobj.
2371s Preparing to unpack .../26-python3-pysyncobj_0.3.12-1_all.deb ...
2371s Unpacking python3-pysyncobj (0.3.12-1) ...
2371s Selecting previously unselected package patroni.
2371s Preparing to unpack .../27-patroni_3.3.1-1_all.deb ...
2371s Unpacking patroni (3.3.1-1) ...
2371s Selecting previously unselected package sphinx-rtd-theme-common.
2371s Preparing to unpack .../28-sphinx-rtd-theme-common_3.0.1+dfsg-1_all.deb ...
2371s Unpacking sphinx-rtd-theme-common (3.0.1+dfsg-1) ...
2371s Selecting previously unselected package patroni-doc.
2371s Preparing to unpack .../29-patroni-doc_3.3.1-1_all.deb ...
2371s Unpacking patroni-doc (3.3.1-1) ...
2371s Selecting previously unselected package postgresql-client-16.
2371s Preparing to unpack .../30-postgresql-client-16_16.4-3_s390x.deb ...
2371s Unpacking postgresql-client-16 (16.4-3) ...
2371s Selecting previously unselected package postgresql-16.
2371s Preparing to unpack .../31-postgresql-16_16.4-3_s390x.deb ...
2371s Unpacking postgresql-16 (16.4-3) ...
2371s Selecting previously unselected package postgresql.
2371s Preparing to unpack .../32-postgresql_16+262_all.deb ...
2371s Unpacking postgresql (16+262) ...
2371s Selecting previously unselected package python3-parse.
2371s Preparing to unpack .../33-python3-parse_1.20.2-1_all.deb ...
2371s Unpacking python3-parse (1.20.2-1) ...
2371s Selecting previously unselected package python3-parse-type.
2371s Preparing to unpack .../34-python3-parse-type_0.6.4-1_all.deb ...
2371s Unpacking python3-parse-type (0.6.4-1) ...
2371s Selecting previously unselected package python3-behave.
2371s Preparing to unpack .../35-python3-behave_1.2.6-6_all.deb ...
2371s Unpacking python3-behave (1.2.6-6) ...
2371s Selecting previously unselected package python3-coverage.
2371s Preparing to unpack .../36-python3-coverage_7.4.4+dfsg1-0ubuntu2_s390x.deb ...
2371s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
2371s Selecting previously unselected package autopkgtest-satdep.
2371s Preparing to unpack .../37-5-autopkgtest-satdep.deb ...
2371s Unpacking autopkgtest-satdep (0) ...
2371s Setting up postgresql-client-common (262) ...
2371s Setting up fonts-lato (2.015-1) ...
2371s Setting up libio-pty-perl (1:1.20-1build3) ...
2371s Setting up python3-pysyncobj (0.3.12-1) ...
2371s Setting up python3-colorama (0.4.6-4) ...
2372s Setting up python3-ydiff (1.3-1) ...
2372s Setting up libpq5:s390x (17.0-1) ...
2372s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
2372s Setting up python3-click (8.1.7-2) ...
2372s Setting up python3-psutil (5.9.8-2build2) ...
2372s Setting up python3-six (1.16.0-7) ...
2372s Setting up python3-wcwidth (0.2.13+dfsg1-1) ...
2372s Setting up ssl-cert (1.1.2ubuntu2) ...
2373s Created symlink '/etc/systemd/system/multi-user.target.wants/ssl-cert.service' → '/usr/lib/systemd/system/ssl-cert.service'.
2373s Setting up python3-psycopg2 (2.9.9-2) ...
2373s Setting up libipc-run-perl (20231003.0-2) ...
2373s Setting up libtime-duration-perl (1.21-2) ...
2373s Setting up libtimedate-perl (2.3300-2) ...
2373s Setting up python3-parse (1.20.2-1) ...
2373s Setting up libjson-perl (4.10000-1) ...
2373s Setting up libxslt1.1:s390x (1.1.39-0exp1ubuntu1) ...
2373s Setting up python3-dateutil (2.9.0-2) ...
2373s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
2373s Setting up python3-prettytable (3.10.1-1) ...
2373s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
2373s Setting up sphinx-rtd-theme-common (3.0.1+dfsg-1) ...
2373s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
2373s Setting up moreutils (0.69-1) ...
2373s Setting up postgresql-client-16 (16.4-3) ...
2374s update-alternatives: using /usr/share/postgresql/16/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode
2374s Setting up python3-cdiff (1.3-1) ...
2374s Setting up python3-parse-type (0.6.4-1) ...
2374s Setting up postgresql-common (262) ...
2374s
2374s Creating config file /etc/postgresql-common/createcluster.conf with new version
2374s Building PostgreSQL dictionaries from installed myspell/hunspell packages...
2374s Removing obsolete dictionary files:
2375s Created symlink '/etc/systemd/system/multi-user.target.wants/postgresql.service' → '/usr/lib/systemd/system/postgresql.service'.
2375s Setting up libjs-sphinxdoc (7.4.7-4) ...
2375s Setting up python3-behave (1.2.6-6) ...
2375s /usr/lib/python3/dist-packages/behave/formatter/ansi_escapes.py:57: SyntaxWarning: invalid escape sequence '\['
2375s   _ANSI_ESCAPE_PATTERN = re.compile(u"\x1b\[\d+[mA]", re.UNICODE)
2375s /usr/lib/python3/dist-packages/behave/matchers.py:267: SyntaxWarning: invalid escape sequence '\d'
2375s   """Registers a custom type that will be available to "parse"
2375s Setting up patroni (3.3.1-1) ...
2375s Created symlink '/etc/systemd/system/multi-user.target.wants/patroni.service' → '/usr/lib/systemd/system/patroni.service'.
2376s Setting up postgresql-16 (16.4-3) ...
2376s Creating new PostgreSQL cluster 16/main ...
2376s /usr/lib/postgresql/16/bin/initdb -D /var/lib/postgresql/16/main --auth-local peer --auth-host scram-sha-256 --no-instructions
2376s The files belonging to this database system will be owned by user "postgres".
2376s This user must also own the server process.
2376s
2376s The database cluster will be initialized with locale "C.UTF-8".
2376s The default database encoding has accordingly been set to "UTF8".
2376s The default text search configuration will be set to "english".
2376s
2376s Data page checksums are disabled.
2376s
2376s fixing permissions on existing directory /var/lib/postgresql/16/main ... ok
2376s creating subdirectories ... ok
2376s selecting dynamic shared memory implementation ... posix
2376s selecting default max_connections ... 100
2376s selecting default shared_buffers ... 128MB
2376s selecting default time zone ... Etc/UTC
2376s creating configuration files ... ok
2376s running bootstrap script ... ok
2376s performing post-bootstrap initialization ... ok
2376s syncing data to disk ... ok
2379s Setting up patroni-doc (3.3.1-1) ...
2379s Setting up postgresql (16+262) ...
2379s Setting up autopkgtest-satdep (0) ...
2379s Processing triggers for man-db (2.12.1-3) ...
2380s Processing triggers for libc-bin (2.40-1ubuntu3) ...
2383s (Reading database ... 58536 files and directories currently installed.)
2383s Removing autopkgtest-satdep (0) ...
2384s autopkgtest [12:09:14]: test acceptance-raft: debian/tests/acceptance raft
2384s autopkgtest [12:09:14]: test acceptance-raft: [-----------------------
2384s dpkg-architecture: warning: cannot determine CC system type, falling back to default (native compilation)
2384s ++ ls -1r /usr/lib/postgresql/
2384s + for PG_VERSION in $(ls -1r /usr/lib/postgresql/)
2384s ### PostgreSQL 16 acceptance-raft ###
2384s + '[' 16 == 10 -o 16 == 11 ']'
2384s + echo '### PostgreSQL 16 acceptance-raft ###'
2384s + bash -c 'set -o pipefail; ETCD_UNSUPPORTED_ARCH=s390x DCS=raft PATH=/usr/lib/postgresql/16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin behave | ts'
2390s Nov 13 12:09:20 Feature: basic replication # features/basic_replication.feature:1
2390s Nov 13 12:09:20   We should check that the basic bootstrapping, replication and failover works.
2390s Nov 13 12:09:20   Scenario: check replication of a single table # features/basic_replication.feature:4
2390s Nov 13 12:09:20     Given I start postgres0 # features/steps/basic_replication.py:8
2393s Nov 13 12:09:23     Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
2393s Nov 13 12:09:23     And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
2393s Nov 13 12:09:23     When I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "synchronous_mode": true} # features/steps/patroni_api.py:71
2393s Nov 13 12:09:23     Then I receive a response code 200 # features/steps/patroni_api.py:98
2393s Nov 13 12:09:23     When I start postgres1 # features/steps/basic_replication.py:8
2396s Nov 13 12:09:26     And I configure and start postgres2 with a tag replicatefrom postgres0 # features/steps/cascading_replication.py:7
2405s Nov 13 12:09:35     And "sync" key in DCS has leader=postgres0 after 20 seconds # features/steps/cascading_replication.py:23
2405s Nov 13 12:09:35     And I add the table foo to postgres0 # features/steps/basic_replication.py:54
2405s Nov 13 12:09:35     Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
2406s Nov 13 12:09:36     Then table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93
2406s Nov 13 12:09:36
2406s Nov 13 12:09:36   Scenario: check restart of sync replica # features/basic_replication.feature:17
2406s Nov 13 12:09:36     Given I shut down postgres2 # features/steps/basic_replication.py:29
2407s Nov 13 12:09:37     Then "sync" key in DCS has sync_standby=postgres1 after 5 seconds # features/steps/cascading_replication.py:23
2407s Nov 13 12:09:37     When I start postgres2 # features/steps/basic_replication.py:8
2411s Nov 13 12:09:40     And I shut down postgres1 # features/steps/basic_replication.py:29
2413s Nov 13 12:09:43     Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23
2414s Nov 13 12:09:44     When I start postgres1 # features/steps/basic_replication.py:8
2418s Nov 13 12:09:47     Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
2418s Nov 13 12:09:47     And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
2418s Nov 13 12:09:48     And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142
2418s Nov 13 12:09:48
2418s Nov 13 12:09:48   Scenario: check stuck sync replica # features/basic_replication.feature:28
2418s Nov 13 12:09:48     Given I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": true, "maximum_lag_on_syncnode": 15000000, "postgresql": {"parameters": {"synchronous_commit": "remote_apply"}}} # features/steps/patroni_api.py:71
2418s Nov 13 12:09:48     Then I receive a response code 200 # features/steps/patroni_api.py:98
2418s Nov 13 12:09:48     And I create table on postgres0 # features/steps/basic_replication.py:73
2418s Nov 13 12:09:48     And table mytest is present on postgres1 after 2 seconds # features/steps/basic_replication.py:93
2419s Nov 13 12:09:49     And table mytest is present on postgres2 after 2 seconds # features/steps/basic_replication.py:93
2419s Nov 13 12:09:49     When I pause wal replay on postgres2 # features/steps/basic_replication.py:64
2419s Nov 13 12:09:49     And I load data on postgres0 # features/steps/basic_replication.py:84
2419s Nov 13 12:09:49     Then "sync" key in DCS has sync_standby=postgres1 after 15 seconds # features/steps/cascading_replication.py:23
2422s Nov 13 12:09:52     And I resume wal replay on postgres2 # features/steps/basic_replication.py:64
2422s Nov 13 12:09:52     And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
2422s Nov 13 12:09:52     And Status code on GET http://127.0.0.1:8010/async is 200 after 3 seconds # features/steps/patroni_api.py:142
2422s Nov 13 12:09:52     When I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": null, "maximum_lag_on_syncnode": -1, "postgresql": {"parameters": {"synchronous_commit": "on"}}} # features/steps/patroni_api.py:71
2422s Nov 13 12:09:52     Then I receive a response code 200 # features/steps/patroni_api.py:98
2422s Nov 13 12:09:52     And I drop table on postgres0 # features/steps/basic_replication.py:73
2422s Nov 13 12:09:52
2422s Nov 13 12:09:52   Scenario: check multi sync replication # features/basic_replication.feature:44
2422s Nov 13 12:09:52     Given I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 2} # features/steps/patroni_api.py:71
2422s Nov 13 12:09:52     Then I receive a response code 200 # features/steps/patroni_api.py:98
2422s Nov 13 12:09:52     Then "sync" key in DCS has sync_standby=postgres1,postgres2 after 10 seconds # features/steps/cascading_replication.py:23
2426s Nov 13 12:09:56     And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
2426s Nov 13 12:09:56     And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
2426s Nov 13 12:09:56     When I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 1} # features/steps/patroni_api.py:71
2426s Nov 13 12:09:56     Then I receive a response code 200 # features/steps/patroni_api.py:98
2426s Nov 13 12:09:56     And I shut down postgres1 # features/steps/basic_replication.py:29
2429s Nov 13 12:09:59     Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23
2430s Nov 13 12:10:00     When I start postgres1 # features/steps/basic_replication.py:8
2433s Nov 13 12:10:03     Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
2433s Nov 13 12:10:03     And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
2433s Nov 13 12:10:03     And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142
2434s Nov 13 12:10:04
2434s Nov 13 12:10:04   Scenario: check the basic failover in synchronous mode # features/basic_replication.feature:59
2434s Nov 13 12:10:04     Given I run patronictl.py pause batman # features/steps/patroni_api.py:86
2435s Nov 13 12:10:05     Then I receive a response returncode 0 # features/steps/patroni_api.py:98
2435s Nov 13 12:10:05     When I sleep for 2 seconds # features/steps/patroni_api.py:39
2437s Nov 13 12:10:07     And I shut down postgres0 # features/steps/basic_replication.py:29
2438s Nov 13 12:10:08     And I run patronictl.py resume batman # features/steps/patroni_api.py:86
2440s Nov 13 12:10:10     Then I receive a response returncode 0 # features/steps/patroni_api.py:98
2440s Nov 13 12:10:10     And postgres2 role is the primary after 24 seconds # features/steps/basic_replication.py:105
2458s Nov 13 12:10:28     And Response on GET http://127.0.0.1:8010/history contains recovery after 10 seconds # features/steps/patroni_api.py:156
2461s Nov 13 12:10:31     And there is a postgres2_cb.log with "on_role_change master batman" in postgres2 data directory # features/steps/cascading_replication.py:12
2461s Nov 13 12:10:31     When I issue a PATCH request to http://127.0.0.1:8010/config with {"synchronous_mode": null, "master_start_timeout": 0} # features/steps/patroni_api.py:71
2461s Nov 13 12:10:31     Then I receive a response code 200 # features/steps/patroni_api.py:98
2461s Nov 13 12:10:31     When I add the table bar to postgres2 # features/steps/basic_replication.py:54
2461s Nov 13 12:10:31     Then table bar is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
2463s Nov 13 12:10:33     And Response on GET http://127.0.0.1:8010/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156
2463s Nov 13 12:10:33
2463s Nov 13 12:10:33   Scenario: check rejoin of the former primary with pg_rewind # features/basic_replication.feature:75
2463s Nov 13 12:10:33     Given I add the table splitbrain to postgres0 # features/steps/basic_replication.py:54
2463s Nov 13 12:10:33     And I start postgres0 # features/steps/basic_replication.py:8
2463s Nov 13 12:10:33     Then postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105
2467s Nov 13 12:10:37     When I add the table buz to postgres2 # features/steps/basic_replication.py:54
2467s Nov 13 12:10:37     Then table buz is present on postgres0 after 20 seconds # features/steps/basic_replication.py:93
2469s SKIP Scenario check graceful rejection when two nodes have the same name: Flaky test with Raft
2485s Nov 13 12:10:55
2485s Nov 13 12:10:55   @reject-duplicate-name
2485s Nov 13 12:10:55   Scenario: check graceful rejection when two nodes have the same name # features/basic_replication.feature:83
2485s Nov 13 12:10:55     Given I start duplicate postgres0 on port 8011 # None
2485s Nov 13 12:10:55     Then there is one of ["Can't start; there is already a node named 'postgres0' running"] CRITICAL in the dup-postgres0 patroni log after 5 seconds # None
2485s Nov 13 12:10:55
2485s Nov 13 12:10:55 Feature: cascading replication # features/cascading_replication.feature:1
2485s Nov 13 12:10:55   We should check that patroni can do base backup and streaming from the replica
2485s Nov 13 12:10:55   Scenario: check a base backup and streaming replication from a replica # features/cascading_replication.feature:4
2485s Nov 13 12:10:55     Given I start postgres0 # features/steps/basic_replication.py:8
2494s Nov 13 12:11:04     And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
2494s Nov 13 12:11:04     And I configure and start postgres1 with a tag clonefrom true # features/steps/cascading_replication.py:7
2497s Nov 13 12:11:07     And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
2502s Nov 13 12:11:12     And I create label with "postgres0" in postgres0 data directory # features/steps/cascading_replication.py:18
2502s Nov 13 12:11:12     And I create label with "postgres1" in postgres1 data directory # features/steps/cascading_replication.py:18
2502s Nov 13 12:11:12     And "members/postgres1" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23
2502s Nov 13 12:11:12     And I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7
2505s Nov 13 12:11:15     Then replication works from postgres0 to postgres2 after 30 seconds # features/steps/basic_replication.py:112
2510s Nov 13 12:11:20     And there is a label with "postgres1" in postgres2 data directory # features/steps/cascading_replication.py:12
2525s Nov 13 12:11:35
2525s Nov 13 12:11:35 Feature: citus # features/citus.feature:1
2525s SKIP FEATURE citus: Citus extenstion isn't available
2525s SKIP Scenario check that worker cluster is registered in the coordinator: Citus extenstion isn't available
2525s SKIP Scenario coordinator failover updates pg_dist_node: Citus extenstion isn't available
2525s SKIP Scenario worker switchover doesn't break client queries on the coordinator: Citus extenstion isn't available
2525s SKIP Scenario worker primary restart doesn't break client queries on the coordinator: Citus extenstion isn't available
2525s SKIP Scenario check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node: Citus extenstion isn't available
2525s Nov 13 12:11:35   We should check that coordinator discovers and registers workers and clients don't have errors when worker cluster switches over
2525s Nov 13 12:11:35   Scenario: check that worker cluster is registered in the coordinator # features/citus.feature:4
2525s Nov 13 12:11:35     Given I start postgres0 in citus group 0 # None
2525s Nov 13 12:11:35     And I start postgres2 in citus group 1 # None
2525s Nov 13 12:11:35     Then postgres0 is a leader in a group 0 after 10 seconds # None
2525s Nov 13 12:11:35     And postgres2 is a leader in a group 1 after 10 seconds # None
2525s Nov 13 12:11:35     When I start postgres1 in citus group 0 # None
2525s Nov 13 12:11:35     And I start postgres3 in citus group 1 # None
2525s Nov 13 12:11:35     Then replication works from postgres0 to postgres1 after 15 seconds # None
2525s Nov 13 12:11:35     Then replication works from postgres2 to postgres3 after 15 seconds # None
2525s Nov 13 12:11:35     And postgres0 is registered in the postgres0 as the primary in group 0 after 5 seconds # None
2525s Nov 13 12:11:35     And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
2525s Nov 13 12:11:35
2525s Nov 13 12:11:35   Scenario: coordinator failover updates pg_dist_node # features/citus.feature:16
2525s Nov 13 12:11:35     Given I run patronictl.py failover batman --group 0 --candidate postgres1 --force # None
2525s Nov 13 12:11:35     Then postgres1 role is the primary after 10 seconds # None
2525s Nov 13 12:11:35     And "members/postgres0" key in a group 0 in DCS has state=running after 15 seconds # None
2525s Nov 13 12:11:35     And replication works from postgres1 to postgres0 after 15 seconds # None
2525s Nov 13 12:11:35     And postgres1 is registered in the postgres2 as the primary in group 0 after 5 seconds # None
2525s Nov 13 12:11:35     And "sync" key in a group 0 in DCS has sync_standby=postgres0 after 15 seconds # None
2525s Nov 13 12:11:35     When I run patronictl.py switchover batman --group 0 --candidate postgres0 --force # None
2525s Nov 13 12:11:35     Then postgres0 role is the primary after 10 seconds # None
2525s Nov 13 12:11:35     And replication works from postgres0 to postgres1 after 15 seconds # None
2525s Nov 13 12:11:35     And postgres0 is registered in the postgres2 as the primary in group 0 after 5 seconds # None
2525s Nov 13 12:11:35     And "sync" key in a group 0 in DCS has sync_standby=postgres1 after 15 seconds # None
2525s Nov 13 12:11:35
2525s Nov 13 12:11:35   Scenario: worker switchover doesn't break client queries on the coordinator # features/citus.feature:29
2525s Nov 13 12:11:35     Given I create a distributed table on postgres0 # None
2525s Nov 13 12:11:35     And I start a thread inserting data on postgres0 # None
2525s Nov 13 12:11:35     When I run patronictl.py switchover batman --group 1 --force # None
2525s Nov 13 12:11:35     Then I receive a response returncode 0 # None
2525s Nov 13 12:11:35     And postgres3 role is the primary after 10 seconds # None
2525s Nov 13 12:11:35     And "members/postgres2" key in a group 1 in DCS has state=running after 15 seconds # None
2525s Nov 13 12:11:35     And replication works from postgres3 to postgres2 after 15 seconds # None
2525s Nov 13 12:11:35     And postgres3 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
2525s Nov 13 12:11:35     And "sync" key in a group 1 in DCS has sync_standby=postgres2 after 15 seconds # None
2525s Nov 13 12:11:35     And a thread is still alive # None
2525s Nov 13 12:11:35     When I run patronictl.py switchover batman --group 1 --force # None
2525s Nov 13 12:11:35     Then I receive a response returncode 0 # None
2525s Nov 13 12:11:35     And postgres2 role is the primary after 10 seconds # None
2525s Nov 13 12:11:35     And replication works from postgres2 to postgres3 after 15 seconds # None
2525s Nov 13 12:11:35     And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
2525s Nov 13 12:11:35     And "sync" key in a group 1 in DCS has sync_standby=postgres3 after 15 seconds # None
2525s Nov 13 12:11:35     And a thread is still alive # None
2525s Nov 13 12:11:35     When I stop a thread # None
2525s Nov 13 12:11:35     Then a distributed table on postgres0 has expected rows # None
2525s Nov 13 12:11:35
2525s Nov 13 12:11:35   Scenario: worker primary restart doesn't break client queries on the coordinator # features/citus.feature:50
2525s Nov 13 12:11:35     Given I cleanup a distributed table on postgres0 # None
2525s Nov 13 12:11:35     And I start a thread inserting data on postgres0 # None
2525s Nov 13 12:11:35     When I run patronictl.py restart batman postgres2 --group 1 --force # None
2525s Nov 13 12:11:35     Then I receive a response returncode 0 # None
2525s Nov 13 12:11:35     And postgres2 role is the primary after 10 seconds # None
2525s Nov 13 12:11:35     And replication works from postgres2 to postgres3 after 15 seconds # None
2525s Nov 13 12:11:35     And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
2525s Nov 13 12:11:35     And a thread is still alive # None
2525s Nov 13 12:11:35     When I stop a thread # None
2526s Nov 13 12:11:35     Then a distributed table on postgres0 has expected rows # None
2532s Nov 13 12:11:42
2532s Nov 13 12:11:42   Scenario: check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node # features/citus.feature:62
2532s Nov 13 12:11:42     Given I start postgres4 in citus group 2 # None
2532s Nov 13 12:11:42     Then postgres4 is a leader in a group 2 after 10 seconds # None
2532s Nov 13 12:11:42     And "members/postgres4" key in a group 2 in DCS has role=master after 3 seconds # None
2532s Nov 13 12:11:42     When I run patronictl.py edit-config batman --group 2 -s ttl=20 --force # None
2532s Nov 13 12:11:42     Then I receive a response returncode 0 # None
2532s Nov 13 12:11:42     And I receive a response output "+ttl: 20" # None
2532s Nov 13 12:11:42     Then postgres4 is registered in the postgres2 as the primary in group 2 after 5 seconds # None
2532s Nov 13 12:11:42     When I shut down postgres4 # None
2532s Nov 13 12:11:42     Then there is a transaction in progress on postgres0 changing pg_dist_node after 5 seconds # None
2532s Nov 13 12:11:42     When I run patronictl.py restart batman postgres2 --group 1 --force # None
2532s Nov 13 12:11:42     Then a transaction finishes in 20 seconds # None
2532s Nov 13 12:11:42
2532s Nov 13 12:11:42 Feature: custom bootstrap # features/custom_bootstrap.feature:1
2532s Nov 13 12:11:42   We should check that patroni can bootstrap a new cluster from a backup
2532s Nov 13 12:11:42   Scenario: clone existing cluster using pg_basebackup # features/custom_bootstrap.feature:4
2532s Nov 13 12:11:42     Given I start postgres0 # features/steps/basic_replication.py:8
2535s Nov 13 12:11:45     Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
2535s Nov 13 12:11:45     When I add the table foo to postgres0 # features/steps/basic_replication.py:54
2535s Nov 13 12:11:45     And I start postgres1 in a cluster batman1 as a clone of postgres0 # features/steps/custom_bootstrap.py:6
2539s Nov 13 12:11:49     Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16
2540s Nov 13 12:11:50     Then table foo is present on postgres1 after 10 seconds # features/steps/basic_replication.py:93
2540s Nov 13 12:11:50
2540s Nov 13 12:11:50   Scenario: make a backup and do a restore into a new cluster # features/custom_bootstrap.feature:12
2540s Nov 13 12:11:50     Given I add the table bar to postgres1 # features/steps/basic_replication.py:54
2540s Nov 13 12:11:50     And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25
2540s Nov 13 12:11:50     When I start postgres2 in a cluster batman2 from backup # features/steps/custom_bootstrap.py:11
2544s Nov 13 12:11:54     Then postgres2 is a leader of batman2 after 30 seconds # features/steps/custom_bootstrap.py:16
2544s Nov 13 12:11:54     And table bar is present on postgres2 after 10 seconds # features/steps/basic_replication.py:93
2561s Nov 13 12:12:11
2561s Nov 13 12:12:11 Feature: dcs failsafe mode # features/dcs_failsafe_mode.feature:1
2561s Nov 13 12:12:11   We should check the basic dcs failsafe mode functioning
2561s Nov 13 12:12:11   Scenario: check failsafe mode can be successfully enabled # features/dcs_failsafe_mode.feature:4
2561s Nov 13 12:12:11     Given I start postgres0 # features/steps/basic_replication.py:8
2565s Nov 13 12:12:14     And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
2565s Nov 13 12:12:14     Then "config" key in DCS has ttl=30 after 10 seconds # features/steps/cascading_replication.py:23
2565s Nov 13 12:12:14     When I issue a PATCH request to http://127.0.0.1:8008/config with {"loop_wait": 2, "ttl": 20, "retry_timeout": 3, "failsafe_mode": true} # features/steps/patroni_api.py:71
2565s Nov 13 12:12:15     Then I receive a response code 200 # features/steps/patroni_api.py:98
2565s Nov 13 12:12:15     And Response on GET http://127.0.0.1:8008/failsafe contains postgres0 after 10 seconds # features/steps/patroni_api.py:156
2565s Nov 13 12:12:15     When I issue a GET request to http://127.0.0.1:8008/failsafe # features/steps/patroni_api.py:61
2565s Nov 13 12:12:15     Then I receive a response code 200 # features/steps/patroni_api.py:98
2565s Nov 13 12:12:15     And I receive a response postgres0 http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:98
2565s Nov 13 12:12:15     When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}},"slots":{"dcs_slot_1": null,"postgres0":null}} # features/steps/patroni_api.py:71
2565s Nov 13 12:12:15     Then I receive a response code 200 # features/steps/patroni_api.py:98
2565s Nov 13 12:12:15     When I issue a PATCH request to http://127.0.0.1:8008/config with {"slots": {"dcs_slot_0": {"type": "logical", "database": "postgres", "plugin": "test_decoding"}}} # features/steps/patroni_api.py:71
2565s Nov 13 12:12:15     Then I receive a response code 200 # features/steps/patroni_api.py:98
2565s Nov 13 12:12:15
2565s Nov 13 12:12:15   @dcs-failsafe
2565s Nov 13 12:12:15   Scenario: check one-node cluster is functioning while DCS is down # features/dcs_failsafe_mode.feature:20
2565s Nov 13 12:12:15     Given DCS is down # features/steps/dcs_failsafe_mode.py:4
2565s Nov 13 12:12:15     Then Response on GET http://127.0.0.1:8008/primary contains failsafe_mode_is_active after 12 seconds # features/steps/patroni_api.py:156
2569s Nov 13 12:12:19     And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2569s Nov 13 12:12:19
2569s Nov 13 12:12:19   @dcs-failsafe
2569s Nov 13 12:12:19   Scenario: check new replica isn't promoted when leader is down and DCS is up # features/dcs_failsafe_mode.feature:26
2569s Nov 13 12:12:19     Given DCS is up # features/steps/dcs_failsafe_mode.py:9
2569s Nov 13 12:12:19     When I do a backup of postgres0 # features/steps/custom_bootstrap.py:25
2569s Nov 13 12:12:19     And I shut down postgres0 # features/steps/basic_replication.py:29
2571s Nov 13 12:12:21     When I start postgres1 in a cluster batman from backup with no_leader # features/steps/dcs_failsafe_mode.py:14
2574s Nov 13 12:12:24     Then postgres1 role is the replica after 12 seconds # features/steps/basic_replication.py:105
2574s Nov 13 12:12:24
2574s Nov 13 12:12:24   Scenario: check leader and replica are both in /failsafe key after leader is back # features/dcs_failsafe_mode.feature:33
2574s Nov 13 12:12:24     Given I start postgres0 # features/steps/basic_replication.py:8
2577s Nov 13 12:12:27     And I start postgres1 # features/steps/basic_replication.py:8
2577s Nov 13 12:12:27     Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
2577s Nov 13 12:12:27     And "members/postgres1" key in DCS has state=running after 2 seconds # features/steps/cascading_replication.py:23
2577s Nov 13 12:12:27     And Response on GET http://127.0.0.1:8009/failsafe contains postgres1 after 10 seconds # features/steps/patroni_api.py:156
2583s Nov 13 12:12:33     When I issue a GET request to http://127.0.0.1:8009/failsafe # features/steps/patroni_api.py:61
2583s Nov 13 12:12:33     Then I receive a response code 200 # features/steps/patroni_api.py:98
2583s Nov 13 12:12:33     And I receive a response postgres0 http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:98
2583s Nov 13 12:12:33     And I receive a response postgres1 http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:98
2583s Nov 13 12:12:33
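Nearly every step above is phrased as "... after N seconds": the step implementations poll a condition repeatedly and only fail once the deadline passes, which is what makes these assertions robust against replication and failover latency. A minimal sketch of that pattern (the `poll_until` helper and the commented `slot_lsn` call are illustrative assumptions, not the actual code in features/steps/):

```python
# Illustrative sketch of the "<condition> after N seconds" polling
# pattern used by the behave steps: retry until the condition holds
# or the deadline passes. Not the test suite's actual implementation.
import time

def poll_until(condition, timeout: float, interval: float = 1.0) -> bool:
    """Return True as soon as condition() is truthy, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return bool(condition())  # one final check at the deadline

# A step like 'physical slot dcs_slot_1 is in sync between postgres0
# and postgres1 after 10 seconds' then amounts to something like:
#   assert poll_until(lambda: slot_lsn(node0, "dcs_slot_1") ==
#                             slot_lsn(node1, "dcs_slot_1"), timeout=10)
# where slot_lsn (hypothetical) would read pg_replication_slots.
```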
2583s Nov 13 12:12:33   @dcs-failsafe @slot-advance
2583s Nov 13 12:12:33   Scenario: check leader and replica are functioning while DCS is down # features/dcs_failsafe_mode.feature:46
2583s Nov 13 12:12:33     Given I get all changes from physical slot dcs_slot_1 on postgres0 # features/steps/slots.py:75
2583s Nov 13 12:12:33     Then physical slot dcs_slot_1 is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
2586s Nov 13 12:12:36     And logical slot dcs_slot_0 is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
2586s Nov 13 12:12:36     And DCS is down # features/steps/dcs_failsafe_mode.py:4
2586s Nov 13 12:12:36     Then Response on GET http://127.0.0.1:8008/primary contains failsafe_mode_is_active after 12 seconds # features/steps/patroni_api.py:156
2590s Nov 13 12:12:40     Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2590s Nov 13 12:12:40     And postgres1 role is the replica after 2 seconds # features/steps/basic_replication.py:105
2590s Nov 13 12:12:40     And replication works from postgres0 to postgres1 after 10 seconds # features/steps/basic_replication.py:112
2590s Nov 13 12:12:40     When I get all changes from logical slot dcs_slot_0 on postgres0 # features/steps/slots.py:70
2590s Nov 13 12:12:40     And I get all changes from physical slot dcs_slot_1 on postgres0 # features/steps/slots.py:75
2590s Nov 13 12:12:40     Then logical slot dcs_slot_0 is in sync between postgres0 and postgres1 after 20 seconds # features/steps/slots.py:51
2597s Nov 13 12:12:47     And physical slot dcs_slot_1 is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
2597s Nov 13 12:12:47
2597s Nov 13 12:12:47   @dcs-failsafe
2597s Nov 13 12:12:47   Scenario: check primary is demoted when one replica is shut down and DCS is down # features/dcs_failsafe_mode.feature:61
2597s Nov 13 12:12:47     Given DCS is down # features/steps/dcs_failsafe_mode.py:4
2597s Nov 13 12:12:47     And I kill postgres1 # features/steps/basic_replication.py:34
2598s Nov 13 12:12:48     And I kill postmaster on postgres1 # features/steps/basic_replication.py:44
2598s Nov 13 12:12:48 waiting for server to shut down.... done
2598s Nov 13 12:12:48 server stopped
2598s Nov 13 12:12:48     Then postgres0 role is the replica after 12 seconds # features/steps/basic_replication.py:105
2600s Nov 13 12:12:50
2600s Nov 13 12:12:50   @dcs-failsafe
2600s Nov 13 12:12:50   Scenario: check known replica is promoted when leader is down and DCS is up # features/dcs_failsafe_mode.feature:68
2600s Nov 13 12:12:50     Given I kill postgres0 # features/steps/basic_replication.py:34
2601s Nov 13 12:12:51     And I shut down postmaster on postgres0 # features/steps/basic_replication.py:39
2601s Nov 13 12:12:51 waiting for server to shut down.... done
2601s Nov 13 12:12:51 server stopped
2601s Nov 13 12:12:51     And DCS is up # features/steps/dcs_failsafe_mode.py:9
2601s Nov 13 12:12:51     When I start postgres1 # features/steps/basic_replication.py:8
2604s Nov 13 12:12:54     Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
2604s Nov 13 12:12:54     And postgres1 role is the primary after 25 seconds # features/steps/basic_replication.py:105
2605s Nov 13 12:12:55
2605s Nov 13 12:12:55   @dcs-failsafe
2605s Nov 13 12:12:55   Scenario: scale to three-node cluster # features/dcs_failsafe_mode.feature:77
2605s Nov 13 12:12:55     Given I start postgres0 # features/steps/basic_replication.py:8
2608s Nov 13 12:12:58     And I start postgres2 # features/steps/basic_replication.py:8
2612s Nov 13 12:13:01     Then "members/postgres2" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
2612s Nov 13 12:13:02     And "members/postgres0" key in DCS has state=running after 20 seconds # features/steps/cascading_replication.py:23
2612s Nov 13 12:13:02     And Response on GET http://127.0.0.1:8008/failsafe contains postgres2 after 10 seconds # features/steps/patroni_api.py:156
2614s Nov 13 12:13:04     And replication works from postgres1 to postgres0 after 10 seconds # features/steps/basic_replication.py:112
2615s Nov 13 12:13:05     And replication works from postgres1 to postgres2 after 10 seconds # features/steps/basic_replication.py:112
2616s Nov 13 12:13:06
2616s Nov 13 12:13:06   @dcs-failsafe @slot-advance
2616s Nov 13 12:13:06   Scenario: make sure permanent slots exist on replicas # features/dcs_failsafe_mode.feature:88
2616s Nov 13 12:13:06     Given I issue a PATCH request to http://127.0.0.1:8009/config with {"slots":{"dcs_slot_0":null,"dcs_slot_2":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71
2616s Nov 13 12:13:06     Then logical slot dcs_slot_2 is in sync between postgres1 and postgres0 after 20 seconds # features/steps/slots.py:51
2622s Nov 13 12:13:12     And logical slot dcs_slot_2 is in sync between postgres1 and postgres2 after 20 seconds # features/steps/slots.py:51
2623s Nov 13 12:13:13     When I get all changes from physical slot dcs_slot_1 on postgres1 # features/steps/slots.py:75
2623s Nov 13 12:13:13     Then physical slot dcs_slot_1 is in sync between postgres1 and postgres0 after 10 seconds # features/steps/slots.py:51
2624s Nov 13 12:13:14     And physical slot dcs_slot_1 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51
2624s Nov 13 12:13:14     And physical slot postgres0 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51
2624s Nov 13 12:13:14
2624s Nov 13 12:13:14   @dcs-failsafe
2624s Nov 13 12:13:14   Scenario: check three-node cluster is functioning while DCS is down # features/dcs_failsafe_mode.feature:98
2624s Nov 13 12:13:14     Given DCS is down # features/steps/dcs_failsafe_mode.py:4
2624s Nov 13 12:13:14     Then Response on GET http://127.0.0.1:8009/primary contains failsafe_mode_is_active after 12 seconds # features/steps/patroni_api.py:156
2629s Nov 13 12:13:19     Then postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2629s Nov 13 12:13:19     And postgres0 role is the replica after 2 seconds # features/steps/basic_replication.py:105
2629s Nov 13 12:13:19     And postgres2 role is the replica after 2 seconds # features/steps/basic_replication.py:105
2629s Nov 13 12:13:19
2629s Nov 13 12:13:19   @dcs-failsafe @slot-advance
2629s Nov 13 12:13:19   Scenario: check that permanent slots are in sync between nodes while DCS is down # features/dcs_failsafe_mode.feature:107
2629s Nov 13 12:13:19     Given replication works from postgres1 to postgres0 after 10 seconds # features/steps/basic_replication.py:112
2629s Nov 13 12:13:19     And replication works from postgres1 to postgres2 after 10 seconds # features/steps/basic_replication.py:112
2630s Nov 13 12:13:20     When I get all changes from logical slot dcs_slot_2 on postgres1 # features/steps/slots.py:70
2630s Nov 13 12:13:20     And I get all changes from physical slot dcs_slot_1 on postgres1 # features/steps/slots.py:75
2630s Nov 13 12:13:20     Then logical slot dcs_slot_2 is in sync between postgres1 and postgres0 after 20 seconds # features/steps/slots.py:51
2636s Nov 13 12:13:26     And logical slot dcs_slot_2 is in sync between postgres1 and postgres2 after 20 seconds # features/steps/slots.py:51
2636s Nov 13 12:13:26     And physical slot dcs_slot_1 is in sync between postgres1 and postgres0 after 10 seconds # features/steps/slots.py:51
2636s Nov 13 12:13:26     And physical slot dcs_slot_1 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51
2636s Nov 13 12:13:26     And physical slot postgres0 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51
2648s Nov 13 12:13:38
2648s Nov 13 12:13:38 Feature: ignored slots # features/ignored_slots.feature:1
2648s Nov 13 12:13:38
2648s Nov 13 12:13:38   Scenario: check ignored slots aren't removed on failover/switchover # features/ignored_slots.feature:2
2648s Nov 13 12:13:38     Given I start postgres1 # features/steps/basic_replication.py:8
2651s Nov 13 12:13:41     Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
2651s Nov 13 12:13:41     And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
2651s Nov 13 12:13:41     When I issue a PATCH request to http://127.0.0.1:8009/config with {"ignore_slots": [{"name": "unmanaged_slot_0", "database": "postgres", "plugin": "test_decoding", "type": "logical"}, {"name": "unmanaged_slot_1", "database": "postgres", "plugin": "test_decoding"}, {"name": "unmanaged_slot_2", "database": "postgres"}, {"name": "unmanaged_slot_3"}], "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71
2651s Nov 13 12:13:41     Then I receive a response code 200 # features/steps/patroni_api.py:98
2651s Nov 13 12:13:41     And Response on GET http://127.0.0.1:8009/config contains ignore_slots after 10 seconds # features/steps/patroni_api.py:156
2651s Nov 13 12:13:41     When I shut down postgres1 # features/steps/basic_replication.py:29
2653s Nov 13 12:13:43     And I start postgres1 # features/steps/basic_replication.py:8
2656s Nov 13 12:13:46     Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
2658s Nov 13 12:13:48     And "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23
2659s Nov 13 12:13:49     And postgres1 role is the primary after 20 seconds # features/steps/basic_replication.py:105
2659s Nov 13 12:13:49     When I create a logical replication slot unmanaged_slot_0 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
2659s Nov 13 12:13:49     And I create a logical replication slot unmanaged_slot_1 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
2659s Nov 13 12:13:49     And I create a logical replication slot unmanaged_slot_2 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
2659s Nov 13 12:13:49     And I create a logical replication slot unmanaged_slot_3 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
2659s Nov 13 12:13:49     And I create a logical replication slot dummy_slot on postgres1 with the test_decoding plugin # features/steps/slots.py:8
2659s Nov 13 12:13:49     Then postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
2659s Nov 13 12:13:49     And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
2659s Nov 13 12:13:49     And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
2659s Nov 13 12:13:49     And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
2659s Nov 13 12:13:49     When I start postgres0 # features/steps/basic_replication.py:8
2662s Nov 13 12:13:52     Then "members/postgres0" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23
2663s Nov 13 12:13:53     And postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105
2663s Nov 13 12:13:53     And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112
2664s Nov 13 12:13:54     When I shut down postgres1 # features/steps/basic_replication.py:29
2666s Nov 13 12:13:56     Then "members/postgres0" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23
2667s Nov 13 12:13:57     When I start postgres1 # features/steps/basic_replication.py:8
2670s Nov 13 12:14:00     Then postgres1 role is the secondary after 20 seconds # features/steps/basic_replication.py:105
2670s Nov 13 12:14:00     And "members/postgres1" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23
2670s Nov 13 12:14:00     And I sleep for 2 seconds # features/steps/patroni_api.py:39
2672s Nov 13 12:14:02     And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
2672s Nov 13 12:14:02     And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
2672s Nov 13 12:14:02     And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
2672s Nov 13 12:14:02     And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
2672s Nov 13 12:14:02     And postgres1 does not have a replication slot named dummy_slot # features/steps/slots.py:40
2672s Nov 13 12:14:02     When I shut down postgres0 # features/steps/basic_replication.py:29
2674s Nov 13 12:14:04     Then "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23
2675s Nov 13 12:14:05     And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
2675s Nov 13 12:14:05     And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
2675s Nov 13 12:14:05     And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
2675s Nov 13 12:14:05     And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
2684s Nov 13 12:14:13
2684s Nov 13 12:14:13 Feature: nostream node # features/nostream_node.feature:1
2684s Nov 13 12:14:13
2684s Nov 13 12:14:13   Scenario: check nostream node is recovering from archive # features/nostream_node.feature:3
2684s Nov 13 12:14:13     When I start postgres0 #
features/steps/basic_replication.py:8 2692s Nov 13 12:14:22 And I configure and start postgres1 with a tag nostream true # features/steps/cascading_replication.py:7 2695s Nov 13 12:14:25 Then "members/postgres1" key in DCS has replication_state=in archive recovery after 10 seconds # features/steps/cascading_replication.py:23 2696s Nov 13 12:14:26 And replication works from postgres0 to postgres1 after 30 seconds # features/steps/basic_replication.py:112 2701s Nov 13 12:14:30 2701s Nov 13 12:14:30 @slot-advance 2701s Nov 13 12:14:30 Scenario: check permanent logical replication slots are not copied # features/nostream_node.feature:10 2701s Nov 13 12:14:30 When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}}, "slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71 2701s Nov 13 12:14:31 Then I receive a response code 200 # features/steps/patroni_api.py:98 2701s Nov 13 12:14:31 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 2703s Nov 13 12:14:32 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 2704s Nov 13 12:14:33 When I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7 2706s Nov 13 12:14:36 Then "members/postgres2" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23 2713s Nov 13 12:14:42 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40 2713s Nov 13 12:14:42 And postgres2 does not have a replication slot named test_logical # features/steps/slots.py:40 2730s Nov 13 12:15:00 2730s Nov 13 12:15:00 Feature: patroni api # features/patroni_api.feature:1 2730s Nov 13 12:15:00 We should check that patroni correctly responds to valid and not-valid 
API requests.
2730s Nov 13 12:15:00 Scenario: check API requests on a stand-alone server # features/patroni_api.feature:4
2730s Nov 13 12:15:00 Given I start postgres0 # features/steps/basic_replication.py:8
2733s Nov 13 12:15:03 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
2733s Nov 13 12:15:03 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61
2733s Nov 13 12:15:03 Then I receive a response code 200 # features/steps/patroni_api.py:98
2733s Nov 13 12:15:03 And I receive a response state running # features/steps/patroni_api.py:98
2733s Nov 13 12:15:03 And I receive a response role master # features/steps/patroni_api.py:98
2733s Nov 13 12:15:03 When I issue a GET request to http://127.0.0.1:8008/standby_leader # features/steps/patroni_api.py:61
2733s Nov 13 12:15:03 Then I receive a response code 503 # features/steps/patroni_api.py:98
2733s Nov 13 12:15:03 When I issue a GET request to http://127.0.0.1:8008/health # features/steps/patroni_api.py:61
2733s Nov 13 12:15:03 Then I receive a response code 200 # features/steps/patroni_api.py:98
2733s Nov 13 12:15:03 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61
2733s Nov 13 12:15:03 Then I receive a response code 503 # features/steps/patroni_api.py:98
2733s Nov 13 12:15:03 When I issue a POST request to http://127.0.0.1:8008/reinitialize with {"force": true} # features/steps/patroni_api.py:71
2733s Nov 13 12:15:03 Then I receive a response code 503 # features/steps/patroni_api.py:98
2733s Nov 13 12:15:03 And I receive a response text I am the leader, can not reinitialize # features/steps/patroni_api.py:98
2733s Nov 13 12:15:03 When I run patronictl.py switchover batman --master postgres0 --force # features/steps/patroni_api.py:86
2735s Nov 13 12:15:05 Then I receive a response returncode 1 # features/steps/patroni_api.py:98
2735s Nov 13 12:15:05 And I receive a response output "Error: No candidates found to switchover to" # features/steps/patroni_api.py:98
2735s Nov 13 12:15:05 When I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0"} # features/steps/patroni_api.py:71
2735s Nov 13 12:15:05 Then I receive a response code 412 # features/steps/patroni_api.py:98
2735s Nov 13 12:15:05 And I receive a response text switchover is not possible: cluster does not have members except leader # features/steps/patroni_api.py:98
2735s Nov 13 12:15:05 When I issue an empty POST request to http://127.0.0.1:8008/failover # features/steps/patroni_api.py:66
2735s Nov 13 12:15:05 Then I receive a response code 400 # features/steps/patroni_api.py:98
2735s Nov 13 12:15:05 When I issue a POST request to http://127.0.0.1:8008/failover with {"foo": "bar"} # features/steps/patroni_api.py:71
2735s Nov 13 12:15:05 Then I receive a response code 400 # features/steps/patroni_api.py:98
2735s Nov 13 12:15:05 And I receive a response text "Failover could be performed only to a specific candidate" # features/steps/patroni_api.py:98
2735s Nov 13 12:15:05
2735s Nov 13 12:15:05 Scenario: check local configuration reload # features/patroni_api.feature:32
2735s Nov 13 12:15:05 Given I add tag new_tag new_value to postgres0 config # features/steps/patroni_api.py:137
2735s Nov 13 12:15:05 And I issue an empty POST request to http://127.0.0.1:8008/reload # features/steps/patroni_api.py:66
2735s Nov 13 12:15:05 Then I receive a response code 202 # features/steps/patroni_api.py:98
2735s Nov 13 12:15:05
2735s Nov 13 12:15:05 Scenario: check dynamic configuration change via DCS # features/patroni_api.feature:37
2735s Nov 13 12:15:05 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "postgresql": {"parameters": {"max_connections": "101"}}} # features/steps/patroni_api.py:71
2735s Nov 13 12:15:05 Then I receive a response code 200 # features/steps/patroni_api.py:98
2735s Nov 13 12:15:05 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 11 seconds # features/steps/patroni_api.py:156
2737s Nov 13 12:15:07 When I issue a GET request to http://127.0.0.1:8008/config # features/steps/patroni_api.py:61
2737s Nov 13 12:15:07 Then I receive a response code 200 # features/steps/patroni_api.py:98
2737s Nov 13 12:15:07 And I receive a response ttl 20 # features/steps/patroni_api.py:98
2737s Nov 13 12:15:07 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61
2737s Nov 13 12:15:07 Then I receive a response code 200 # features/steps/patroni_api.py:98
2737s Nov 13 12:15:07 And I receive a response tags {'new_tag': 'new_value'} # features/steps/patroni_api.py:98
2737s Nov 13 12:15:07 And I sleep for 4 seconds # features/steps/patroni_api.py:39
2741s Nov 13 12:15:11
2741s Nov 13 12:15:11 Scenario: check the scheduled restart # features/patroni_api.feature:49
2741s Nov 13 12:15:11 Given I run patronictl.py edit-config -p 'superuser_reserved_connections=6' --force batman # features/steps/patroni_api.py:86
2743s Nov 13 12:15:13 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
2743s Nov 13 12:15:13 And I receive a response output "+ superuser_reserved_connections: 6" # features/steps/patroni_api.py:98
2743s Nov 13 12:15:13 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 5 seconds # features/steps/patroni_api.py:156
2743s Nov 13 12:15:13 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"role": "replica"} # features/steps/patroni_api.py:124
2743s Nov 13 12:15:13 Then I receive a response code 202 # features/steps/patroni_api.py:98
2743s Nov 13 12:15:13 And I sleep for 8 seconds # features/steps/patroni_api.py:39
2751s Nov 13 12:15:21 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 10 seconds # features/steps/patroni_api.py:156
2751s Nov 13 12:15:21 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"restart_pending": "True"} # features/steps/patroni_api.py:124
2751s Nov 13 12:15:21 Then I receive a response code 202 # features/steps/patroni_api.py:98
2751s Nov 13 12:15:21 And Response on GET http://127.0.0.1:8008/patroni does not contain pending_restart after 10 seconds # features/steps/patroni_api.py:171
2758s Nov 13 12:15:28 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2759s Nov 13 12:15:29
2759s Nov 13 12:15:29 Scenario: check API requests for the primary-replica pair in the pause mode # features/patroni_api.feature:63
2759s Nov 13 12:15:29 Given I start postgres1 # features/steps/basic_replication.py:8
2762s Nov 13 12:15:32 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
2763s Nov 13 12:15:33 When I run patronictl.py pause batman # features/steps/patroni_api.py:86
2765s Nov 13 12:15:34 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
2765s Nov 13 12:15:34 When I kill postmaster on postgres1 # features/steps/basic_replication.py:44
2765s Nov 13 12:15:35 waiting for server to shut down....
done
2765s Nov 13 12:15:35 server stopped
2765s Nov 13 12:15:35 And I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61
2765s Nov 13 12:15:35 Then I receive a response code 503 # features/steps/patroni_api.py:98
2765s Nov 13 12:15:35 And "members/postgres1" key in DCS has state=stopped after 10 seconds # features/steps/cascading_replication.py:23
2766s Nov 13 12:15:36 When I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86
2769s Nov 13 12:15:39 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
2769s Nov 13 12:15:39 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
2770s Nov 13 12:15:40 And I sleep for 2 seconds # features/steps/patroni_api.py:39
2772s Nov 13 12:15:42 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61
2772s Nov 13 12:15:42 Then I receive a response code 200 # features/steps/patroni_api.py:98
2772s Nov 13 12:15:42 And I receive a response state running # features/steps/patroni_api.py:98
2772s Nov 13 12:15:42 And I receive a response role replica # features/steps/patroni_api.py:98
2772s Nov 13 12:15:42 When I run patronictl.py reinit batman postgres1 --force --wait # features/steps/patroni_api.py:86
2775s Nov 13 12:15:45 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
2775s Nov 13 12:15:45 And I receive a response output "Success: reinitialize for member postgres1" # features/steps/patroni_api.py:98
2775s Nov 13 12:15:45 And postgres1 role is the secondary after 30 seconds # features/steps/basic_replication.py:105
2776s Nov 13 12:15:46 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
2776s Nov 13 12:15:46 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86
2779s Nov 13 12:15:48 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
2779s Nov 13 12:15:48 And I receive a response output "Success: restart on member postgres0" # features/steps/patroni_api.py:98
2779s Nov 13 12:15:48 And postgres0 role is the primary after 5 seconds # features/steps/basic_replication.py:105
2780s Nov 13 12:15:49
2780s Nov 13 12:15:49 Scenario: check the switchover via the API in the pause mode # features/patroni_api.feature:90
2780s Nov 13 12:15:49 Given I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0", "candidate": "postgres1"} # features/steps/patroni_api.py:71
2782s Nov 13 12:15:52 Then I receive a response code 200 # features/steps/patroni_api.py:98
2782s Nov 13 12:15:52 And postgres1 is a leader after 5 seconds # features/steps/patroni_api.py:29
2782s Nov 13 12:15:52 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2782s Nov 13 12:15:52 And postgres0 role is the secondary after 10 seconds # features/steps/basic_replication.py:105
2787s Nov 13 12:15:57 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112
2787s Nov 13 12:15:57 And "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
2787s Nov 13 12:15:57 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61
2787s Nov 13 12:15:57 Then I receive a response code 503 # features/steps/patroni_api.py:98
2787s Nov 13 12:15:57 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61
2787s Nov 13 12:15:57 Then I receive a response code 200 # features/steps/patroni_api.py:98
2787s Nov 13 12:15:57 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
2787s Nov 13 12:15:57 Then I receive a response code 200 # features/steps/patroni_api.py:98
2787s Nov 13 12:15:57 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61
2787s Nov 13 12:15:57 Then I receive a response code 503 # features/steps/patroni_api.py:98
2787s Nov 13 12:15:57
2787s Nov 13 12:15:57 Scenario: check the scheduled switchover # features/patroni_api.feature:107
2787s Nov 13 12:15:57 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117
2788s Nov 13 12:15:58 Then I receive a response returncode 1 # features/steps/patroni_api.py:98
2788s Nov 13 12:15:58 And I receive a response output "Can't schedule switchover in the paused state" # features/steps/patroni_api.py:98
2788s Nov 13 12:15:58 When I run patronictl.py resume batman # features/steps/patroni_api.py:86
2790s Nov 13 12:16:00 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
2790s Nov 13 12:16:00 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117
2792s Nov 13 12:16:02 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
2792s Nov 13 12:16:02 And postgres0 is a leader after 20 seconds # features/steps/patroni_api.py:29
2802s Nov 13 12:16:12 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2802s Nov 13 12:16:12 And postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105
2805s Nov 13 12:16:15 And replication works from postgres0 to postgres1 after 25 seconds # features/steps/basic_replication.py:112
2805s Nov 13 12:16:15 And "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
2806s Nov 13 12:16:16 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61
2806s Nov 13 12:16:16 Then I receive a response code 200 # features/steps/patroni_api.py:98
2806s Nov 13 12:16:16 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61
2806s Nov 13 12:16:16 Then I receive a response code 503 # features/steps/patroni_api.py:98
2806s Nov 13 12:16:16 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
2806s Nov 13 12:16:16 Then I receive a response code 503 # features/steps/patroni_api.py:98
2806s Nov 13 12:16:16 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61
2806s Nov 13 12:16:16 Then I receive a response code 200 # features/steps/patroni_api.py:98
2816s Nov 13 12:16:26
2816s Nov 13 12:16:26 Feature: permanent slots # features/permanent_slots.feature:1
2816s Nov 13 12:16:26
2816s Nov 13 12:16:26 Scenario: check that physical permanent slots are created # features/permanent_slots.feature:2
2816s Nov 13 12:16:26 Given I start postgres0 # features/steps/basic_replication.py:8
2825s Nov 13 12:16:35 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
2825s Nov 13 12:16:35 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
2825s Nov 13 12:16:35 When I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_physical":0,"postgres0":0,"postgres1":0,"postgres3":0},"postgresql":{"parameters":{"wal_level":"logical"}}} # features/steps/patroni_api.py:71
2826s Nov 13 12:16:36 Then I receive a response code 200 # features/steps/patroni_api.py:98
2826s Nov 13 12:16:36 And Response on GET http://127.0.0.1:8008/config contains slots after 10 seconds # features/steps/patroni_api.py:156
2826s Nov 13 12:16:36 When I start postgres1 # features/steps/basic_replication.py:8
2835s Nov 13 12:16:45 And I start postgres2 # features/steps/basic_replication.py:8
2844s Nov 13 12:16:54 And I configure and start postgres3 with a tag replicatefrom postgres2 # features/steps/cascading_replication.py:7
2853s Nov 13 12:17:03 Then postgres0 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80
2853s Nov 13 12:17:03 And postgres0
has a physical replication slot named postgres1 after 10 seconds # features/steps/slots.py:80
2853s Nov 13 12:17:03 And postgres0 has a physical replication slot named postgres2 after 10 seconds # features/steps/slots.py:80
2853s Nov 13 12:17:03 And postgres2 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80
2853s Nov 13 12:17:03
2853s Nov 13 12:17:03 @slot-advance
2853s Nov 13 12:17:03 Scenario: check that logical permanent slots are created # features/permanent_slots.feature:18
2853s Nov 13 12:17:03 Given I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86
2856s Nov 13 12:17:06 And I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71
2856s Nov 13 12:17:06 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19
2857s Nov 13 12:17:07
2857s Nov 13 12:17:07 @slot-advance
2857s Nov 13 12:17:07 Scenario: check that permanent slots are created on replicas # features/permanent_slots.feature:24
2857s Nov 13 12:17:07 Given postgres1 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19
2860s Nov 13 12:17:10 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
2860s Nov 13 12:17:10 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
2861s Nov 13 12:17:11 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51
2862s Nov 13 12:17:12 And postgres1 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80
2862s Nov 13 12:17:12 And postgres2 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80
2862s Nov 13 12:17:12 And postgres3 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80
2862s Nov 13 12:17:12
2862s Nov 13 12:17:12 @slot-advance
2862s Nov 13 12:17:12 Scenario: check permanent physical slots that match with member names # features/permanent_slots.feature:34
2862s Nov 13 12:17:12 Given postgres0 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80
2862s Nov 13 12:17:12 And postgres1 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80
2862s Nov 13 12:17:12 And postgres1 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80
2862s Nov 13 12:17:12 And postgres2 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80
2862s Nov 13 12:17:12 And postgres2 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80
2862s Nov 13 12:17:12 And postgres2 has a physical replication slot named postgres1 after 2 seconds # features/steps/slots.py:80
2862s Nov 13 12:17:12 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40
2862s Nov 13 12:17:12 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40
2862s Nov 13 12:17:12
2862s Nov 13 12:17:12 @slot-advance
2862s Nov 13 12:17:12 Scenario: check that permanent slots are advanced on replicas # features/permanent_slots.feature:45
2862s Nov 13 12:17:12 Given I add the table replicate_me to postgres0 # features/steps/basic_replication.py:54
2862s Nov 13 12:17:12 When I get all changes from logical slot test_logical on postgres0 # features/steps/slots.py:70
2862s Nov 13 12:17:12 And I get all changes from physical slot test_physical on postgres0 # features/steps/slots.py:75
2862s Nov 13 12:17:12 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
2863s Nov 13 12:17:13 And Physical slot test_physical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
2863s Nov 13 12:17:13 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
2863s Nov 13 12:17:13 And Physical slot test_physical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
2863s Nov 13 12:17:13 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51
2863s Nov 13 12:17:13 And Physical slot test_physical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51
2863s Nov 13 12:17:13 And Physical slot postgres1 is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
2863s Nov 13 12:17:13 And Physical slot postgres3 is in sync between postgres2 and postgres0 after 20 seconds # features/steps/slots.py:51
2865s Nov 13 12:17:15 And Physical slot postgres3 is in sync between postgres2 and postgres1 after 10 seconds # features/steps/slots.py:51
2865s Nov 13 12:17:15 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40
2865s Nov 13 12:17:15 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40
2865s Nov 13 12:17:15
2865s Nov 13 12:17:15 @slot-advance
2865s Nov 13 12:17:15 Scenario: check that only permanent slots are written to the /status key # features/permanent_slots.feature:62
2865s Nov 13 12:17:15 Given "status" key in DCS has test_physical in slots # features/steps/slots.py:96
2865s Nov 13 12:17:15 And "status" key in DCS has postgres0 in slots # features/steps/slots.py:96
2865s Nov 13 12:17:15 And "status" key in DCS has postgres1 in slots # features/steps/slots.py:96
2865s Nov 13 12:17:15 And "status" key in DCS does not have postgres2 in slots # features/steps/slots.py:102
2865s Nov 13 12:17:15 And "status" key in DCS has postgres3 in slots # features/steps/slots.py:96
2865s Nov 13 12:17:15
2865s Nov 13 12:17:15 Scenario: check permanent physical replication slot after failover # features/permanent_slots.feature:69
2865s Nov 13 12:17:15 Given I shut down postgres3 # features/steps/basic_replication.py:29
2866s Nov 13 12:17:16 And I shut down postgres2 # features/steps/basic_replication.py:29
2867s Nov 13 12:17:17 And I shut down postgres0 # features/steps/basic_replication.py:29
2869s Nov 13 12:17:19 Then postgres1 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80
2869s Nov 13 12:17:19 And postgres1 has a physical replication slot named postgres0 after 10 seconds # features/steps/slots.py:80
2869s Nov 13 12:17:19 And postgres1 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80
2881s Nov 13 12:17:31
2881s Nov 13 12:17:31 Feature: priority replication # features/priority_failover.feature:1
2881s Nov 13 12:17:31 We should check that we can give nodes priority during failover
2881s Nov 13 12:17:31 Scenario: check failover priority 0 prevents leaderships # features/priority_failover.feature:4
2881s Nov 13 12:17:31 Given I configure and start postgres0 with a tag failover_priority 1 # features/steps/cascading_replication.py:7
2884s Nov 13 12:17:34 And I configure and start postgres1 with a tag failover_priority 0 # features/steps/cascading_replication.py:7
2887s Nov 13 12:17:37 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
2892s Nov 13 12:17:42 When I shut down postgres0 # features/steps/basic_replication.py:29
2894s Nov 13 12:17:44 And there is one of ["following a different leader because I am not allowed to promote"] INFO in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121
2896s Nov 13 12:17:46 Then postgres1 role is the
secondary after 10 seconds # features/steps/basic_replication.py:105
2896s Nov 13 12:17:46 When I start postgres0 # features/steps/basic_replication.py:8
2898s Nov 13 12:17:48 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2902s Nov 13 12:17:52
2902s Nov 13 12:17:52 Scenario: check higher failover priority is respected # features/priority_failover.feature:14
2902s Nov 13 12:17:52 Given I configure and start postgres2 with a tag failover_priority 1 # features/steps/cascading_replication.py:7
2905s Nov 13 12:17:55 And I configure and start postgres3 with a tag failover_priority 2 # features/steps/cascading_replication.py:7
2914s Nov 13 12:18:04 Then replication works from postgres0 to postgres2 after 20 seconds # features/steps/basic_replication.py:112
2915s Nov 13 12:18:05 And replication works from postgres0 to postgres3 after 20 seconds # features/steps/basic_replication.py:112
2919s Nov 13 12:18:09 When I shut down postgres0 # features/steps/basic_replication.py:29
2921s Nov 13 12:18:11 Then postgres3 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2921s Nov 13 12:18:11 And there is one of ["postgres3 has equally tolerable WAL position and priority 2, while this node has priority 1","Wal position of postgres3 is ahead of my wal position"] INFO in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121
2921s Nov 13 12:18:11
2921s Nov 13 12:18:11 Scenario: check conflicting configuration handling # features/priority_failover.feature:23
2921s Nov 13 12:18:11 When I set nofailover tag in postgres2 config # features/steps/patroni_api.py:131
2921s Nov 13 12:18:11 And I issue an empty POST request to http://127.0.0.1:8010/reload # features/steps/patroni_api.py:66
2921s Nov 13 12:18:11 Then I receive a response code 202 # features/steps/patroni_api.py:98
2921s Nov 13 12:18:11 And there is one of ["Conflicting configuration between nofailover: True and failover_priority: 1. Defaulting to nofailover: True"] WARNING in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121
2922s Nov 13 12:18:12 And "members/postgres2" key in DCS has tags={'failover_priority': '1', 'nofailover': True} after 10 seconds # features/steps/cascading_replication.py:23
2923s Nov 13 12:18:13 When I issue a POST request to http://127.0.0.1:8010/failover with {"candidate": "postgres2"} # features/steps/patroni_api.py:71
2923s Nov 13 12:18:13 Then I receive a response code 412 # features/steps/patroni_api.py:98
2923s Nov 13 12:18:13 And I receive a response text "failover is not possible: no good candidates have been found" # features/steps/patroni_api.py:98
2923s Nov 13 12:18:13 When I reset nofailover tag in postgres1 config # features/steps/patroni_api.py:131
2923s Nov 13 12:18:13 And I issue an empty POST request to http://127.0.0.1:8009/reload # features/steps/patroni_api.py:66
2923s Nov 13 12:18:13 Then I receive a response code 202 # features/steps/patroni_api.py:98
2923s Nov 13 12:18:13 And there is one of ["Conflicting configuration between nofailover: False and failover_priority: 0. Defaulting to nofailover: False"] WARNING in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121
2925s Nov 13 12:18:15 And "members/postgres1" key in DCS has tags={'failover_priority': '0', 'nofailover': False} after 10 seconds # features/steps/cascading_replication.py:23
2926s Nov 13 12:18:16 And I issue a POST request to http://127.0.0.1:8009/failover with {"candidate": "postgres1"} # features/steps/patroni_api.py:71
2929s Nov 13 12:18:19 Then I receive a response code 200 # features/steps/patroni_api.py:98
2929s Nov 13 12:18:19 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2942s Nov 13 12:18:32
2942s Nov 13 12:18:32 Feature: recovery # features/recovery.feature:1
2942s Nov 13 12:18:32 We want to check that crashed postgres is started back
2942s Nov 13 12:18:32 Scenario: check that timeline is not incremented when primary is started after crash # features/recovery.feature:4
2942s Nov 13 12:18:32 Given I start postgres0 # features/steps/basic_replication.py:8
2945s Nov 13 12:18:35 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
2945s Nov 13 12:18:35 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
2945s Nov 13 12:18:35 When I start postgres1 # features/steps/basic_replication.py:8
2948s Nov 13 12:18:38 And I add the table foo to postgres0 # features/steps/basic_replication.py:54
2948s Nov 13 12:18:38 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
2953s Nov 13 12:18:43 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44
2953s Nov 13 12:18:43 waiting for server to shut down.... done
2953s Nov 13 12:18:43 server stopped
2953s Nov 13 12:18:43 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
2955s Nov 13 12:18:45 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61
2955s Nov 13 12:18:45 Then I receive a response code 200 # features/steps/patroni_api.py:98
2955s Nov 13 12:18:45 And I receive a response role master # features/steps/patroni_api.py:98
2955s Nov 13 12:18:45 And I receive a response timeline 1 # features/steps/patroni_api.py:98
2955s Nov 13 12:18:45 And "members/postgres0" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23
2956s Nov 13 12:18:46 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112
2958s Nov 13 12:18:48
2958s Nov 13 12:18:48 Scenario: check immediate failover when master_start_timeout=0 # features/recovery.feature:20
2958s Nov 13 12:18:48 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"master_start_timeout": 0} # features/steps/patroni_api.py:71
2958s Nov 13 12:18:48 Then I receive a response code 200 # features/steps/patroni_api.py:98
2958s Nov 13 12:18:48 And Response on GET http://127.0.0.1:8008/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156
2958s Nov 13 12:18:48 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44
2958s Nov 13 12:18:48 waiting for server to shut down....
done 2958s Nov 13 12:18:48 server stopped 2958s Nov 13 12:18:48 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 2960s Nov 13 12:18:50 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2971s Nov 13 12:19:01 2971s Nov 13 12:19:01 Feature: standby cluster # features/standby_cluster.feature:1 2971s Nov 13 12:19:01 2971s Nov 13 12:19:01 Scenario: prepare the cluster with logical slots # features/standby_cluster.feature:2 2971s Nov 13 12:19:01 Given I start postgres1 # features/steps/basic_replication.py:8 2974s Nov 13 12:19:04 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 2974s Nov 13 12:19:04 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 2974s Nov 13 12:19:04 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"pm_1": {"type": "physical"}}, "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71 2974s Nov 13 12:19:04 Then I receive a response code 200 # features/steps/patroni_api.py:98 2974s Nov 13 12:19:04 And Response on GET http://127.0.0.1:8009/config contains slots after 10 seconds # features/steps/patroni_api.py:156 2974s Nov 13 12:19:04 And I sleep for 3 seconds # features/steps/patroni_api.py:39 2977s Nov 13 12:19:07 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"test_logical": {"type": "logical", "database": "postgres", "plugin": "test_decoding"}}} # features/steps/patroni_api.py:71 2977s Nov 13 12:19:07 Then I receive a response code 200 # features/steps/patroni_api.py:98 2977s Nov 13 12:19:07 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25 2977s Nov 13 12:19:07 When I start postgres0 # features/steps/basic_replication.py:8 2980s Nov 13 12:19:10 Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 2980s Nov 13 12:19:10 
And replication works from postgres1 to postgres0 after 15 seconds # features/steps/basic_replication.py:112 2981s Nov 13 12:19:11 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61 2981s Nov 13 12:19:11 Then I receive a response code 200 # features/steps/patroni_api.py:98 2981s Nov 13 12:19:11 And I receive a response replication_state streaming # features/steps/patroni_api.py:98 2981s Nov 13 12:19:11 And "members/postgres0" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23 2981s Nov 13 12:19:11 2981s Nov 13 12:19:11 @slot-advance 2981s Nov 13 12:19:11 Scenario: check permanent logical slots are synced to the replica # features/standby_cluster.feature:22 2981s Nov 13 12:19:11 Given I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86 2984s Nov 13 12:19:14 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 2989s Nov 13 12:19:19 2989s Nov 13 12:19:19 Scenario: Detach exiting node from the cluster # features/standby_cluster.feature:26 2989s Nov 13 12:19:19 When I shut down postgres1 # features/steps/basic_replication.py:29 2991s Nov 13 12:19:21 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 2991s Nov 13 12:19:21 And "members/postgres0" key in DCS has role=master after 5 seconds # features/steps/cascading_replication.py:23 2992s Nov 13 12:19:22 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 2992s Nov 13 12:19:22 Then I receive a response code 200 # features/steps/patroni_api.py:98 2992s Nov 13 12:19:22 2992s Nov 13 12:19:22 Scenario: check replication of a single table in a standby cluster # features/standby_cluster.feature:33 2992s Nov 13 12:19:22 Given I start postgres1 in a standby cluster batman1 as a clone of postgres0 # features/steps/standby_cluster.py:23 2995s Nov 13 12:19:25 Then 
postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16 2997s Nov 13 12:19:27 When I add the table foo to postgres0 # features/steps/basic_replication.py:54 2997s Nov 13 12:19:27 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 2997s Nov 13 12:19:27 When I issue a GET request to http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:61 2997s Nov 13 12:19:27 Then I receive a response code 200 # features/steps/patroni_api.py:98 2997s Nov 13 12:19:27 And I receive a response replication_state streaming # features/steps/patroni_api.py:98 2997s Nov 13 12:19:27 And I sleep for 3 seconds # features/steps/patroni_api.py:39 3000s Nov 13 12:19:30 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 3000s Nov 13 12:19:30 Then I receive a response code 503 # features/steps/patroni_api.py:98 3000s Nov 13 12:19:30 When I issue a GET request to http://127.0.0.1:8009/standby_leader # features/steps/patroni_api.py:61 3001s Nov 13 12:19:30 Then I receive a response code 200 # features/steps/patroni_api.py:98 3001s Nov 13 12:19:30 And I receive a response role standby_leader # features/steps/patroni_api.py:98 3001s Nov 13 12:19:30 And there is a postgres1_cb.log with "on_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12 3001s Nov 13 12:19:30 When I start postgres2 in a cluster batman1 # features/steps/standby_cluster.py:12 3004s Nov 13 12:19:34 Then postgres2 role is the replica after 24 seconds # features/steps/basic_replication.py:105 3004s Nov 13 12:19:34 And postgres2 is replicating from postgres1 after 10 seconds # features/steps/standby_cluster.py:52 3004s Nov 13 12:19:34 And table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93 3004s Nov 13 12:19:34 When I issue a GET request to http://127.0.0.1:8010/patroni # features/steps/patroni_api.py:61 3004s Nov 
13 12:19:34 Then I receive a response code 200 # features/steps/patroni_api.py:98 3004s Nov 13 12:19:34 And I receive a response replication_state streaming # features/steps/patroni_api.py:98 3004s Nov 13 12:19:34 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40 3004s Nov 13 12:19:34 3004s Nov 13 12:19:34 Scenario: check switchover # features/standby_cluster.feature:57 3004s Nov 13 12:19:34 Given I run patronictl.py switchover batman1 --force # features/steps/patroni_api.py:86 3007s Nov 13 12:19:37 Then Status code on GET http://127.0.0.1:8010/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142 3007s Nov 13 12:19:37 And postgres1 is replicating from postgres2 after 32 seconds # features/steps/standby_cluster.py:52 3009s Nov 13 12:19:39 And there is a postgres2_cb.log with "on_start replica batman1\non_role_change standby_leader batman1" in postgres2 data directory # features/steps/cascading_replication.py:12 3009s Nov 13 12:19:39 3009s Nov 13 12:19:39 Scenario: check failover # features/standby_cluster.feature:63 3009s Nov 13 12:19:39 When I kill postgres2 # features/steps/basic_replication.py:34 3010s Nov 13 12:19:40 And I kill postmaster on postgres2 # features/steps/basic_replication.py:44 3010s Nov 13 12:19:40 waiting for server to shut down.... 
done 3010s Nov 13 12:19:40 server stopped 3010s Nov 13 12:19:40 Then postgres1 is replicating from postgres0 after 32 seconds # features/steps/standby_cluster.py:52 3029s Nov 13 12:19:59 And Status code on GET http://127.0.0.1:8009/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142 3030s Nov 13 12:19:59 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 3030s Nov 13 12:20:00 Then I receive a response code 503 # features/steps/patroni_api.py:98 3030s Nov 13 12:20:00 And I receive a response role standby_leader # features/steps/patroni_api.py:98 3030s Nov 13 12:20:00 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112 3031s Nov 13 12:20:01 And there is a postgres1_cb.log with "on_role_change replica batman1\non_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12 3045s Nov 13 12:20:15 3045s Nov 13 12:20:15 Feature: watchdog # features/watchdog.feature:1 3045s Nov 13 12:20:15 Verify that watchdog gets pinged and triggered under appropriate circumstances. 
3045s Nov 13 12:20:15 Scenario: watchdog is opened and pinged # features/watchdog.feature:4 3045s Nov 13 12:20:15 Given I start postgres0 with watchdog # features/steps/watchdog.py:16 3048s Nov 13 12:20:18 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 3048s Nov 13 12:20:18 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 3048s Nov 13 12:20:18 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21 3048s Nov 13 12:20:18 And postgres0 watchdog has a 15 second timeout # features/steps/watchdog.py:34 3048s Nov 13 12:20:18 3048s Nov 13 12:20:18 Scenario: watchdog is reconfigured after global ttl changed # features/watchdog.feature:11 3048s Nov 13 12:20:18 Given I run patronictl.py edit-config batman -s ttl=30 --force # features/steps/patroni_api.py:86 3050s Nov 13 12:20:20 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 3050s Nov 13 12:20:20 And I receive a response output "+ttl: 30" # features/steps/patroni_api.py:98 3050s Nov 13 12:20:20 When I sleep for 4 seconds # features/steps/patroni_api.py:39 3054s Nov 13 12:20:24 Then postgres0 watchdog has a 25 second timeout # features/steps/watchdog.py:34 3054s Nov 13 12:20:24 3054s Nov 13 12:20:24 Scenario: watchdog is disabled during pause # features/watchdog.feature:18 3054s Nov 13 12:20:24 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86 3055s Nov 13 12:20:25 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 3055s Nov 13 12:20:25 When I sleep for 2 seconds # features/steps/patroni_api.py:39 3057s Nov 13 12:20:27 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29 3057s Nov 13 12:20:27 3057s Nov 13 12:20:27 Scenario: watchdog is opened and pinged after resume # features/watchdog.feature:24 3057s Nov 13 12:20:27 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39 3057s Nov 13 12:20:27 And I run 
patronictl.py resume batman # features/steps/patroni_api.py:86 3059s Nov 13 12:20:29 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 3059s Nov 13 12:20:29 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21 3060s Nov 13 12:20:30 3060s Nov 13 12:20:30 Scenario: watchdog is disabled when shutting down # features/watchdog.feature:30 3060s Nov 13 12:20:30 Given I shut down postgres0 # features/steps/basic_replication.py:29 3062s Nov 13 12:20:32 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29 3062s Nov 13 12:20:32 3062s Nov 13 12:20:32 Scenario: watchdog is triggered if patroni stops responding # features/watchdog.feature:34 3062s Nov 13 12:20:32 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39 3062s Nov 13 12:20:32 And I start postgres0 with watchdog # features/steps/watchdog.py:16 3065s Nov 13 12:20:35 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 3067s Nov 13 12:20:37 When postgres0 hangs for 30 seconds # features/steps/watchdog.py:52 3067s Nov 13 12:20:37 Then postgres0 watchdog is triggered after 30 seconds # features/steps/watchdog.py:44 3105s Nov 13 12:21:15 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.4559.XxbVjSQx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.4562.XnfXiGXx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.4606.XUyqfLZx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.4646.XoEcZTax 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.4704.XPrYmvMx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.4749.XBdtHjjx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.4825.XPHzyajx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.4877.XFxNKpYx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.4882.XzahtSSx 3107s Nov 13 12:21:17 Combined data file 
.coverage.autopkgtest.4972.XnJBxjnx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5073.XMfWrdwx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5076.XHkNkknx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5119.XHbxklsx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5184.XPkjQcCx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5323.XHCrXqZx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5327.XkhtITax 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5330.XXZwtKcx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5375.XzCnaGEx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5430.XfHSPFBx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5514.XsskZlBx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5518.XLgQSdlx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5831.XJxDmgDx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5905.XRsOyITx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.5962.XuTLCXex 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6236.XEVblpSx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6239.XNEuuunx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6291.XaoFTtwx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6353.XZOZtNux 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6442.XNdocCrx 3107s Nov 13 12:21:17 Skipping duplicate data .coverage.autopkgtest.6539.XayhvzEx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6542.XAcTRxHx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6585.XHscFSGx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6652.XWBSanBx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6681.XYZWXwux 3107s Nov 13 12:21:17 Skipping duplicate data .coverage.autopkgtest.6803.XiyKWECx 3107s Nov 13 12:21:17 
Combined data file .coverage.autopkgtest.6806.XgfpSSix 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6858.XuzEUNhx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6875.XeoMzWPx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6913.XKBuFnyx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6969.XDStsyZx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.6975.XixcTgFx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7011.XXmKatax 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7053.XUzWkhXx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7217.XETSbelx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7221.XGzdpYyx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7228.XekchYIx 3107s Nov 13 12:21:17 Skipping duplicate data .coverage.autopkgtest.7363.XabcBxCx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7366.XjAbRhVx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7412.XcUuuMgx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7453.XYTZjQVx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7497.XyWPBaJx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7546.XXSoVrlx 3107s Nov 13 12:21:17 Skipping duplicate data .coverage.autopkgtest.7709.XlFPlRTx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7712.XEjHLYAx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7755.XgujjzEx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7837.XTbXBuRx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7932.XDWcwNRx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.7979.XhDTaQhx 3107s Nov 13 12:21:17 Skipping duplicate data .coverage.autopkgtest.8329.XuMIhvqx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.8332.XKaCOsBx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.8376.XdQhEkHx 
3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.8523.XPsZPPrx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.8526.XnSSCMKx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.8588.XgoYsMJx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.8642.XNGLMwEx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.8750.XZOdSfHx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.8871.XSjvwXxx 3107s Nov 13 12:21:17 Skipping duplicate data .coverage.autopkgtest.8998.XOrMaKlx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.9002.XuBiNsux 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.9045.XcZUXWRx 3107s Nov 13 12:21:17 Skipping duplicate data .coverage.autopkgtest.9048.XEbTVUOx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.9052.XbvctVGx 3107s Nov 13 12:21:17 Combined data file .coverage.autopkgtest.9064.XwYTtBox 3107s Nov 13 12:21:17 Skipping duplicate data .coverage.autopkgtest.9131.XHgegcyx 3108s Nov 13 12:21:18 Name Stmts Miss Cover 3108s Nov 13 12:21:18 ------------------------------------------------------------------------------------------------------------- 3108s Nov 13 12:21:18 /usr/lib/python3/dist-packages/_distutils_hack/__init__.py 101 96 5% 3108s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/__about__.py 5 0 100% 3108s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/__init__.py 3 0 100% 3108s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/exceptions.py 26 5 81% 3108s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/fernet.py 137 54 61% 3108s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/__init__.py 2 0 100% 3108s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/_oid.py 126 0 100% 3108s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/backends/__init__.py 5 0 100% 3108s Nov 13 12:21:18 
/usr/lib/python3/dist-packages/cryptography/hazmat/backends/openssl/__init__.py 3 0 100% 3108s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/backends/openssl/aead.py 114 96 16% 3108s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/backends/openssl/backend.py 397 257 35% 3108s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/backends/openssl/ciphers.py 125 50 60% 3108s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/__init__.py 0 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/__init__.py 0 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/_conditional.py 50 23 54% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/binding.py 62 12 81% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/__init__.py 0 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_asymmetric.py 6 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_cipheralgorithm.py 17 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_serialization.py 79 35 56% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/__init__.py 0 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dh.py 47 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dsa.py 55 5 91% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ec.py 164 17 90% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed448.py 45 12 73% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed25519.py 43 12 72% 3109s Nov 13 
12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/padding.py 55 23 58% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/rsa.py 90 38 58% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/types.py 19 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/utils.py 14 5 64% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x448.py 43 12 72% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x25519.py 41 12 71% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/__init__.py 4 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/algorithms.py 129 30 77% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/base.py 140 59 58% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/modes.py 139 50 64% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/constant_time.py 6 3 50% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/hashes.py 127 20 84% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/hmac.py 6 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/kdf/__init__.py 7 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/kdf/pbkdf2.py 27 5 81% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/padding.py 117 27 77% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/__init__.py 5 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/base.py 7 0 100% 3109s 
Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/pkcs12.py 82 49 40% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/ssh.py 758 602 21% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/utils.py 77 23 70% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/x509/__init__.py 70 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/x509/base.py 487 229 53% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/x509/certificate_transparency.py 42 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/x509/extensions.py 1038 569 45% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/x509/general_name.py 166 94 43% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/x509/name.py 232 141 39% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/x509/oid.py 3 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/cryptography/x509/verification.py 10 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/dateutil/__init__.py 13 4 69% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/dateutil/_common.py 25 15 40% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/dateutil/_version.py 11 2 82% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/dateutil/parser/__init__.py 33 4 88% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/dateutil/parser/_parser.py 813 436 46% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/dateutil/parser/isoparser.py 185 150 19% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/dateutil/relativedelta.py 241 206 15% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/dateutil/tz/__init__.py 4 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/dateutil/tz/_common.py 161 121 25% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/dateutil/tz/_factories.py 49 21 57% 3109s Nov 13 12:21:18 
/usr/lib/python3/dist-packages/dateutil/tz/tz.py 800 626 22% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/dateutil/tz/win.py 153 149 3% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/__init__.py 13 2 85% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/__main__.py 199 65 67% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/api.py 770 288 63% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/async_executor.py 96 15 84% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/collections.py 56 6 89% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/config.py 371 98 74% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/config_generator.py 212 159 25% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/daemon.py 76 3 96% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/dcs/__init__.py 646 83 87% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/dcs/raft.py 319 35 89% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/dynamic_loader.py 35 7 80% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/exceptions.py 16 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/file_perm.py 43 8 81% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/global_config.py 81 0 100% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/ha.py 1244 308 75% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/log.py 219 69 68% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/__init__.py 821 172 79% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/available_parameters/__init__.py 21 1 95% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/bootstrap.py 252 62 75% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/callback_executor.py 55 8 85% 3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/cancellable.py 104 41 61% 
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/config.py 813 216 73%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/connection.py 75 0 100%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/misc.py 41 8 80%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/mpp/__init__.py 89 11 88%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py 170 85 50%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/rewind.py 416 163 61%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/slots.py 334 32 90%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/sync.py 130 19 85%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/postgresql/validator.py 157 23 85%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/psycopg.py 42 16 62%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/request.py 62 6 90%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/tags.py 38 0 100%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/utils.py 350 123 65%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/validator.py 301 208 31%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/version.py 1 0 100%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/watchdog/__init__.py 2 0 100%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/watchdog/base.py 203 42 79%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/patroni/watchdog/linux.py 135 35 74%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/psutil/__init__.py 951 629 34%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/psutil/_common.py 424 212 50%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/psutil/_compat.py 302 263 13%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/psutil/_pslinux.py 1251 924 26%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/psutil/_psposix.py 96 38 60%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/psycopg2/__init__.py 19 3 84%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/psycopg2/_json.py 64 27 58%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/psycopg2/_range.py 269 172 36%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/psycopg2/errors.py 3 2 33%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/psycopg2/extensions.py 91 25 73%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/__init__.py 2 0 100%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/atomic_replace.py 4 0 100%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/config.py 80 1 99%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/dns_resolver.py 51 10 80%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/encryptor.py 17 2 88%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/fast_queue.py 21 1 95%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/journal.py 193 37 81%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/monotonic.py 77 70 9%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/node.py 49 10 80%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/pickle.py 52 32 38%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/pipe_notifier.py 24 2 92%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/poller.py 87 41 53%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/serializer.py 166 133 20%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/syncobj.py 1045 392 62%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/tcp_connection.py 250 40 84%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/tcp_server.py 56 12 79%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/transport.py 266 57 79%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/utility.py 59 7 88%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/version.py 1 0 100%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/pysyncobj/win_inet_pton.py 44 31 30%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/six.py 504 250 50%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/__init__.py 50 14 72%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/_base_connection.py 70 52 26%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/_collections.py 234 108 54%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/_request_methods.py 53 15 72%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/_version.py 2 0 100%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/connection.py 324 104 68%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/connectionpool.py 347 136 61%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/exceptions.py 115 37 68%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/fields.py 92 73 21%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/filepost.py 37 24 35%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/poolmanager.py 233 88 62%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/response.py 562 336 40%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/util/__init__.py 10 0 100%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/util/connection.py 66 9 86%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/util/proxy.py 13 6 54%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/util/request.py 104 49 53%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/util/response.py 32 17 47%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/util/retry.py 173 49 72%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/util/ssl_.py 177 75 58%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/util/ssl_match_hostname.py 66 54 18%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/util/ssltransport.py 160 112 30%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/util/timeout.py 71 19 73%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/util/url.py 205 78 62%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/util/util.py 26 9 65%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/urllib3/util/wait.py 49 38 22%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/__init__.py 165 109 34%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/composer.py 92 17 82%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/constructor.py 479 276 42%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/cyaml.py 46 24 48%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/dumper.py 23 12 48%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/emitter.py 838 769 8%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/error.py 58 42 28%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/events.py 61 6 90%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/loader.py 47 24 49%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/nodes.py 29 7 76%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/parser.py 352 180 49%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/reader.py 122 30 75%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/representer.py 248 176 29%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/resolver.py 135 76 44%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/scanner.py 758 415 45%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/serializer.py 85 70 18%
3109s Nov 13 12:21:18 /usr/lib/python3/dist-packages/yaml/tokens.py 76 17 78%
3109s Nov 13 12:21:18 patroni/__init__.py 13 2 85%
3109s Nov 13 12:21:18 patroni/__main__.py 199 199 0%
3109s Nov 13 12:21:18 patroni/api.py 770 770 0%
3109s Nov 13 12:21:18 patroni/async_executor.py 96 69 28%
3109s Nov 13 12:21:18 patroni/collections.py 56 15 73%
3109s Nov 13 12:21:18 patroni/config.py 371 189 49%
3109s Nov 13 12:21:18 patroni/config_generator.py 212 212 0%
3109s Nov 13 12:21:18 patroni/ctl.py 936 411 56%
3109s Nov 13 12:21:18 patroni/daemon.py 76 6 92%
3109s Nov 13 12:21:18 patroni/dcs/__init__.py 646 268 59%
3109s Nov 13 12:21:18 patroni/dcs/consul.py 485 485 0%
3109s Nov 13 12:21:18 patroni/dcs/etcd3.py 679 679 0%
3109s Nov 13 12:21:18 patroni/dcs/etcd.py 603 603 0%
3109s Nov 13 12:21:18 patroni/dcs/exhibitor.py 61 61 0%
3109s Nov 13 12:21:18 patroni/dcs/kubernetes.py 938 938 0%
3109s Nov 13 12:21:18 patroni/dcs/raft.py 319 73 77%
3109s Nov 13 12:21:18 patroni/dcs/zookeeper.py 288 288 0%
3109s Nov 13 12:21:18 patroni/dynamic_loader.py 35 7 80%
3109s Nov 13 12:21:18 patroni/exceptions.py 16 1 94%
3109s Nov 13 12:21:18 patroni/file_perm.py 43 15 65%
3109s Nov 13 12:21:18 patroni/global_config.py 81 18 78%
3109s Nov 13 12:21:18 patroni/ha.py 1244 1244 0%
3109s Nov 13 12:21:18 patroni/log.py 219 93 58%
3109s Nov 13 12:21:18 patroni/postgresql/__init__.py 821 651 21%
3109s Nov 13 12:21:18 patroni/postgresql/available_parameters/__init__.py 21 1 95%
3109s Nov 13 12:21:18 patroni/postgresql/bootstrap.py 252 222 12%
3109s Nov 13 12:21:18 patroni/postgresql/callback_executor.py 55 34 38%
3109s Nov 13 12:21:18 patroni/postgresql/cancellable.py 104 84 19%
3109s Nov 13 12:21:18 patroni/postgresql/config.py 813 698 14%
3109s Nov 13 12:21:18 patroni/postgresql/connection.py 75 50 33%
3109s Nov 13 12:21:18 patroni/postgresql/misc.py 41 29 29%
3109s Nov 13 12:21:18 patroni/postgresql/mpp/__init__.py 89 21 76%
3109s Nov 13 12:21:18 patroni/postgresql/mpp/citus.py 259 259 0%
3109s Nov 13 12:21:18 patroni/postgresql/postmaster.py 170 139 18%
3109s Nov 13 12:21:18 patroni/postgresql/rewind.py 416 416 0%
3109s Nov 13 12:21:18 patroni/postgresql/slots.py 334 285 15%
3109s Nov 13 12:21:18 patroni/postgresql/sync.py 130 96 26%
3109s Nov 13 12:21:18 patroni/postgresql/validator.py 157 52 67%
3109s Nov 13 12:21:18 patroni/psycopg.py 42 28 33%
3109s Nov 13 12:21:18 patroni/raft_controller.py 22 1 95%
3109s Nov 13 12:21:18 patroni/request.py 62 6 90%
3109s Nov 13 12:21:18 patroni/scripts/__init__.py 0 0 100%
3109s Nov 13 12:21:18 patroni/scripts/aws.py 59 59 0%
3109s Nov 13 12:21:18 patroni/scripts/barman/__init__.py 0 0 100%
3109s Nov 13 12:21:18 patroni/scripts/barman/cli.py 51 51 0%
3109s Nov 13 12:21:18 patroni/scripts/barman/config_switch.py 51 51 0%
3109s Nov 13 12:21:18 patroni/scripts/barman/recover.py 37 37 0%
3109s Nov 13 12:21:18 patroni/scripts/barman/utils.py 94 94 0%
3109s Nov 13 12:21:18 patroni/scripts/wale_restore.py 207 207 0%
3109s Nov 13 12:21:18 patroni/tags.py 38 11 71%
3109s Nov 13 12:21:18 patroni/utils.py 350 215 39%
3109s Nov 13 12:21:18 patroni/validator.py 301 215 29%
3109s Nov 13 12:21:18 patroni/version.py 1 0 100%
3109s Nov 13 12:21:18 patroni/watchdog/__init__.py 2 2 0%
3109s Nov 13 12:21:18 patroni/watchdog/base.py 203 203 0%
3109s Nov 13 12:21:18 patroni/watchdog/linux.py 135 135 0%
3109s Nov 13 12:21:18 -------------------------------------------------------------------------------------------------------------
3109s Nov 13 12:21:18 TOTAL 44230 24989 44%
3109s Nov 13 12:21:18 12 features passed, 0 failed, 1 skipped
3109s Nov 13 12:21:18 54 scenarios passed, 0 failed, 6 skipped
3109s Nov 13 12:21:18 522 steps passed, 0 failed, 63 skipped, 0 undefined
3109s Nov 13 12:21:18 Took 9m11.380s
3109s ### End 16 acceptance-raft ###
3109s + echo '### End 16 acceptance-raft ###'
3109s + rm -f '/tmp/pgpass?'
3109s ++ id -u
3109s + '[' 1000 -eq 0 ']'
3109s autopkgtest [12:21:19]: test acceptance-raft: -----------------------]
3110s acceptance-raft PASS
3110s autopkgtest [12:21:20]: test acceptance-raft: - - - - - - - - - - results - - - - - - - - - -
3110s autopkgtest [12:21:20]: test test: preparing testbed
3220s autopkgtest [12:23:10]: testbed dpkg architecture: s390x
3220s autopkgtest [12:23:10]: testbed apt version: 2.9.8
3220s autopkgtest [12:23:10]: @@@@@@@@@@@@@@@@@@@@ test bed setup
3221s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease [73.9 kB]
3221s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/restricted Sources [7016 B]
3221s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse Sources [16.5 kB]
3221s Get:4 http://ftpmaster.internal/ubuntu plucky-proposed/main Sources [104 kB]
3221s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/universe Sources [967 kB]
3222s Get:6 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x Packages [107 kB]
3222s Get:7 http://ftpmaster.internal/ubuntu plucky-proposed/universe s390x Packages [641 kB]
3222s Get:8 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse s390x Packages [17.4 kB]
3222s Fetched 1934 kB in 1s (2001 kB/s)
3222s Reading package lists...
3224s Reading package lists...
3224s Building dependency tree...
3224s Reading state information...
3224s Calculating upgrade...
3224s The following NEW packages will be installed:
3224s python3.13-gdbm
3224s The following packages will be upgraded:
3224s libgpgme11t64 libpython3-stdlib python3 python3-gdbm python3-minimal
3224s 5 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
3224s Need to get 252 kB of archives.
3224s After this operation, 98.3 kB of additional disk space will be used.
3224s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3-minimal s390x 3.12.7-1 [27.4 kB]
3224s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3 s390x 3.12.7-1 [24.0 kB]
3224s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libpython3-stdlib s390x 3.12.7-1 [10.0 kB]
3224s Get:4 http://ftpmaster.internal/ubuntu plucky/main s390x python3.13-gdbm s390x 3.13.0-2 [31.0 kB]
3225s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x python3-gdbm s390x 3.12.7-1 [8642 B]
3225s Get:6 http://ftpmaster.internal/ubuntu plucky/main s390x libgpgme11t64 s390x 1.23.2-5ubuntu4 [151 kB]
3225s Fetched 252 kB in 0s (554 kB/s)
3225s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 55510 files and directories currently installed.)
3225s Preparing to unpack .../python3-minimal_3.12.7-1_s390x.deb ...
3225s Unpacking python3-minimal (3.12.7-1) over (3.12.6-0ubuntu1) ...
3225s Setting up python3-minimal (3.12.7-1) ...
3225s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 55510 files and directories currently installed.)
3225s Preparing to unpack .../python3_3.12.7-1_s390x.deb ...
3225s Unpacking python3 (3.12.7-1) over (3.12.6-0ubuntu1) ...
3225s Preparing to unpack .../libpython3-stdlib_3.12.7-1_s390x.deb ...
3225s Unpacking libpython3-stdlib:s390x (3.12.7-1) over (3.12.6-0ubuntu1) ...
3225s Selecting previously unselected package python3.13-gdbm.
3225s Preparing to unpack .../python3.13-gdbm_3.13.0-2_s390x.deb ...
3225s Unpacking python3.13-gdbm (3.13.0-2) ...
3225s Preparing to unpack .../python3-gdbm_3.12.7-1_s390x.deb ...
3225s Unpacking python3-gdbm:s390x (3.12.7-1) over (3.12.6-1ubuntu1) ...
3225s Preparing to unpack .../libgpgme11t64_1.23.2-5ubuntu4_s390x.deb ...
3225s Unpacking libgpgme11t64:s390x (1.23.2-5ubuntu4) over (1.18.0-4.1ubuntu4) ...
3225s Setting up libgpgme11t64:s390x (1.23.2-5ubuntu4) ...
3225s Setting up python3.13-gdbm (3.13.0-2) ...
3225s Setting up libpython3-stdlib:s390x (3.12.7-1) ...
3225s Setting up python3 (3.12.7-1) ...
3225s Setting up python3-gdbm:s390x (3.12.7-1) ...
3225s Processing triggers for man-db (2.12.1-3) ...
3226s Processing triggers for libc-bin (2.40-1ubuntu3) ...
3226s Reading package lists...
3226s Building dependency tree...
3226s Reading state information...
3226s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
3227s Hit:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease
3227s Hit:2 http://ftpmaster.internal/ubuntu plucky InRelease
3227s Hit:3 http://ftpmaster.internal/ubuntu plucky-updates InRelease
3227s Hit:4 http://ftpmaster.internal/ubuntu plucky-security InRelease
3227s Reading package lists...
3227s Reading package lists...
3228s Building dependency tree...
3228s Reading state information...
3228s Calculating upgrade...
3228s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
3228s Reading package lists...
3228s Building dependency tree...
3228s Reading state information...
3228s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
3232s Reading package lists...
3232s Building dependency tree...
3232s Reading state information...
3232s Starting pkgProblemResolver with broken count: 0
3232s Starting 2 pkgProblemResolver with broken count: 0
3232s Done
3232s The following additional packages will be installed:
3232s fonts-font-awesome fonts-lato libcares2 libev4t64 libjs-jquery
3232s libjs-jquery-hotkeys libjs-jquery-isonscreen libjs-jquery-metadata
3232s libjs-jquery-tablesorter libjs-jquery-throttle-debounce libjs-sphinxdoc
3232s libjs-underscore libpq5 patroni patroni-doc python3-aiohttp
3232s python3-aiosignal python3-async-timeout python3-boto3 python3-botocore
3232s python3-cachetools python3-cdiff python3-click python3-colorama
3232s python3-consul python3-coverage python3-dateutil python3-dnspython
3232s python3-etcd python3-eventlet python3-flake8 python3-frozenlist
3232s python3-gevent python3-google-auth python3-greenlet python3-iniconfig
3232s python3-jmespath python3-kazoo python3-kerberos python3-kubernetes
3232s python3-mccabe python3-mock python3-multidict python3-packaging
3232s python3-pluggy python3-prettytable python3-psutil python3-psycopg2
3232s python3-pure-sasl python3-pyasn1 python3-pyasn1-modules python3-pycodestyle
3232s python3-pyflakes python3-pysyncobj python3-pytest python3-pytest-cov
3232s python3-pyu2f python3-requests-oauthlib python3-responses python3-rsa
3232s python3-s3transfer python3-six python3-wcwidth python3-websocket
3232s python3-yarl python3-ydiff python3-zope.event python3-zope.interface
3232s sphinx-rtd-theme-common
3232s Suggested packages:
3232s postgresql etcd-server | consul | zookeeperd vip-manager haproxy
3232s python3-tornado python3-twisted python-coverage-doc python3-trio
3232s python3-aioquic python3-h2 python3-httpx python3-httpcore etcd
3232s python-eventlet-doc python-gevent-doc python-greenlet-dev
3232s python-greenlet-doc python-kazoo-doc python-mock-doc python-psycopg2-doc
3232s Recommended packages:
3232s javascript-common python3-aiodns pyflakes3
3232s The following NEW packages will be installed:
3232s autopkgtest-satdep fonts-font-awesome fonts-lato libcares2 libev4t64
3232s libjs-jquery libjs-jquery-hotkeys libjs-jquery-isonscreen
3232s libjs-jquery-metadata libjs-jquery-tablesorter
3232s libjs-jquery-throttle-debounce libjs-sphinxdoc libjs-underscore libpq5
3232s patroni patroni-doc python3-aiohttp python3-aiosignal python3-async-timeout
3232s python3-boto3 python3-botocore python3-cachetools python3-cdiff
3232s python3-click python3-colorama python3-consul python3-coverage
3232s python3-dateutil python3-dnspython python3-etcd python3-eventlet
3232s python3-flake8 python3-frozenlist python3-gevent python3-google-auth
3232s python3-greenlet python3-iniconfig python3-jmespath python3-kazoo
3232s python3-kerberos python3-kubernetes python3-mccabe python3-mock
3232s python3-multidict python3-packaging python3-pluggy python3-prettytable
3232s python3-psutil python3-psycopg2 python3-pure-sasl python3-pyasn1
3232s python3-pyasn1-modules python3-pycodestyle python3-pyflakes
3232s python3-pysyncobj python3-pytest python3-pytest-cov python3-pyu2f
3232s python3-requests-oauthlib python3-responses python3-rsa python3-s3transfer
3232s python3-six python3-wcwidth python3-websocket python3-yarl python3-ydiff
3232s python3-zope.event python3-zope.interface sphinx-rtd-theme-common
3232s 0 upgraded, 70 newly installed, 0 to remove and 0 not upgraded.
3232s Need to get 17.0 MB/17.0 MB of archives.
3232s After this operation, 158 MB of additional disk space will be used.
3232s Get:1 /tmp/autopkgtest.FwqS2V/6-autopkgtest-satdep.deb autopkgtest-satdep s390x 0 [792 B] 3232s Get:2 http://ftpmaster.internal/ubuntu plucky/main s390x fonts-lato all 2.015-1 [2781 kB] 3233s Get:3 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB] 3233s Get:4 http://ftpmaster.internal/ubuntu plucky/universe s390x libjs-jquery-hotkeys all 0~20130707+git2d51e3a9+dfsg-2.1 [11.5 kB] 3233s Get:5 http://ftpmaster.internal/ubuntu plucky/main s390x fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB] 3233s Get:6 http://ftpmaster.internal/ubuntu plucky/main s390x libcares2 s390x 1.34.2-1 [96.8 kB] 3233s Get:7 http://ftpmaster.internal/ubuntu plucky/universe s390x libev4t64 s390x 1:4.33-2.1build1 [32.0 kB] 3233s Get:8 http://ftpmaster.internal/ubuntu plucky/universe s390x libjs-jquery-metadata all 12-4 [6582 B] 3233s Get:9 http://ftpmaster.internal/ubuntu plucky/universe s390x libjs-jquery-tablesorter all 1:2.31.3+dfsg1-4 [192 kB] 3233s Get:10 http://ftpmaster.internal/ubuntu plucky/universe s390x libjs-jquery-throttle-debounce all 1.1+dfsg.1-2 [12.5 kB] 3233s Get:11 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB] 3233s Get:12 http://ftpmaster.internal/ubuntu plucky/main s390x libjs-sphinxdoc all 7.4.7-4 [158 kB] 3233s Get:13 http://ftpmaster.internal/ubuntu plucky/main s390x libpq5 s390x 17.0-1 [252 kB] 3233s Get:14 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-ydiff all 1.3-1 [18.4 kB] 3233s Get:15 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-cdiff all 1.3-1 [1770 B] 3233s Get:16 http://ftpmaster.internal/ubuntu plucky/main s390x python3-colorama all 0.4.6-4 [32.1 kB] 3233s Get:17 http://ftpmaster.internal/ubuntu plucky/main s390x python3-click all 8.1.7-2 [79.5 kB] 3233s Get:18 http://ftpmaster.internal/ubuntu plucky/main s390x python3-six all 1.16.0-7 [13.1 kB] 3233s Get:19 http://ftpmaster.internal/ubuntu plucky/main 
s390x python3-dateutil all 2.9.0-2 [80.3 kB] 3233s Get:20 http://ftpmaster.internal/ubuntu plucky/main s390x python3-wcwidth all 0.2.13+dfsg1-1 [26.3 kB] 3233s Get:21 http://ftpmaster.internal/ubuntu plucky/main s390x python3-prettytable all 3.10.1-1 [34.0 kB] 3233s Get:22 http://ftpmaster.internal/ubuntu plucky/main s390x python3-psutil s390x 5.9.8-2build2 [195 kB] 3233s Get:23 http://ftpmaster.internal/ubuntu plucky/main s390x python3-psycopg2 s390x 2.9.9-2 [132 kB] 3233s Get:24 http://ftpmaster.internal/ubuntu plucky/main s390x python3-dnspython all 2.6.1-1ubuntu1 [163 kB] 3233s Get:25 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-etcd all 0.4.5-4 [31.9 kB] 3233s Get:26 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-consul all 0.7.1-2 [21.6 kB] 3233s Get:27 http://ftpmaster.internal/ubuntu plucky/main s390x python3-greenlet s390x 3.0.3-0ubuntu6 [156 kB] 3233s Get:28 http://ftpmaster.internal/ubuntu plucky/main s390x python3-eventlet all 0.36.1-0ubuntu1 [274 kB] 3233s Get:29 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-zope.event all 5.0-0.1 [7512 B] 3233s Get:30 http://ftpmaster.internal/ubuntu plucky/main s390x python3-zope.interface s390x 7.1.1-1 [140 kB] 3233s Get:31 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-gevent s390x 24.2.1-1 [835 kB] 3233s Get:32 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-kerberos s390x 1.1.14-3.1build9 [21.4 kB] 3233s Get:33 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-pure-sasl all 0.5.1+dfsg1-4 [11.4 kB] 3233s Get:34 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-kazoo all 2.9.0-2 [103 kB] 3233s Get:35 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-multidict s390x 6.1.0-1 [34.1 kB] 3233s Get:36 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-yarl s390x 1.9.4-1 [72.8 kB] 3233s Get:37 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-async-timeout all 4.0.3-1 [6412 B] 
3233s Get:38 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-frozenlist s390x 1.5.0-1 [49.7 kB] 3233s Get:39 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-aiosignal all 1.3.1-1 [5172 B] 3233s Get:40 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-aiohttp s390x 3.9.5-1 [294 kB] 3233s Get:41 http://ftpmaster.internal/ubuntu plucky/main s390x python3-cachetools all 5.3.3-1 [10.3 kB] 3233s Get:42 http://ftpmaster.internal/ubuntu plucky/main s390x python3-pyasn1 all 0.5.1-1 [57.4 kB] 3233s Get:43 http://ftpmaster.internal/ubuntu plucky/main s390x python3-pyasn1-modules all 0.3.0-1 [80.2 kB] 3234s Get:44 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-pyu2f all 0.1.5-4 [22.9 kB] 3234s Get:45 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-responses all 0.25.3-1 [54.3 kB] 3234s Get:46 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-rsa all 4.9-2 [28.2 kB] 3234s Get:47 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-google-auth all 2.28.2-3 [91.0 kB] 3234s Get:48 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-requests-oauthlib all 1.3.1-1 [18.8 kB] 3234s Get:49 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-websocket all 1.8.0-2 [38.5 kB] 3234s Get:50 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-kubernetes all 30.1.0-1 [386 kB] 3234s Get:51 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-pysyncobj all 0.3.12-1 [38.9 kB] 3234s Get:52 http://ftpmaster.internal/ubuntu plucky/universe s390x patroni all 3.3.1-1 [264 kB] 3234s Get:53 http://ftpmaster.internal/ubuntu plucky/main s390x sphinx-rtd-theme-common all 3.0.1+dfsg-1 [1012 kB] 3234s Get:54 http://ftpmaster.internal/ubuntu plucky/universe s390x patroni-doc all 3.3.1-1 [497 kB] 3234s Get:55 http://ftpmaster.internal/ubuntu plucky/main s390x python3-jmespath all 1.0.1-1 [21.3 kB] 3234s Get:56 http://ftpmaster.internal/ubuntu plucky/main s390x 
python3-botocore all 1.34.46+repack-1ubuntu1 [6211 kB] 3234s Get:57 http://ftpmaster.internal/ubuntu plucky/main s390x python3-s3transfer all 0.10.1-1ubuntu2 [54.3 kB] 3234s Get:58 http://ftpmaster.internal/ubuntu plucky/main s390x python3-boto3 all 1.34.46+dfsg-1ubuntu1 [72.5 kB] 3234s Get:59 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-coverage s390x 7.4.4+dfsg1-0ubuntu2 [147 kB] 3234s Get:60 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-mccabe all 0.7.0-1 [8678 B] 3234s Get:61 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-pycodestyle all 2.11.1-1 [29.9 kB] 3234s Get:62 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-pyflakes all 3.2.0-1 [52.8 kB] 3234s Get:63 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-flake8 all 7.1.1-1 [43.9 kB] 3234s Get:64 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-iniconfig all 1.1.1-2 [6024 B] 3234s Get:65 http://ftpmaster.internal/ubuntu plucky/main s390x python3-packaging all 24.1-1 [41.4 kB] 3234s Get:66 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-pluggy all 1.5.0-1 [21.0 kB] 3235s Get:67 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-pytest all 8.3.3-1 [251 kB] 3235s Get:68 http://ftpmaster.internal/ubuntu plucky/universe s390x libjs-jquery-isonscreen all 1.2.0-1.1 [3244 B] 3235s Get:69 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-pytest-cov all 5.0.0-1 [21.3 kB] 3235s Get:70 http://ftpmaster.internal/ubuntu plucky/universe s390x python3-mock all 5.1.0-1 [64.1 kB] 3235s Fetched 17.0 MB in 2s (7155 kB/s) 3235s Selecting previously unselected package fonts-lato. 3235s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 
55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 55517 files and directories currently installed.) 3235s Preparing to unpack .../00-fonts-lato_2.015-1_all.deb ... 3235s Unpacking fonts-lato (2.015-1) ... 3235s Selecting previously unselected package libjs-jquery. 3235s Preparing to unpack .../01-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 3235s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 3235s Selecting previously unselected package libjs-jquery-hotkeys. 3235s Preparing to unpack .../02-libjs-jquery-hotkeys_0~20130707+git2d51e3a9+dfsg-2.1_all.deb ... 3235s Unpacking libjs-jquery-hotkeys (0~20130707+git2d51e3a9+dfsg-2.1) ... 3235s Selecting previously unselected package fonts-font-awesome. 3235s Preparing to unpack .../03-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ... 3235s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 3235s Selecting previously unselected package libcares2:s390x. 3235s Preparing to unpack .../04-libcares2_1.34.2-1_s390x.deb ... 3235s Unpacking libcares2:s390x (1.34.2-1) ... 3235s Selecting previously unselected package libev4t64:s390x. 3235s Preparing to unpack .../05-libev4t64_1%3a4.33-2.1build1_s390x.deb ... 3235s Unpacking libev4t64:s390x (1:4.33-2.1build1) ... 3235s Selecting previously unselected package libjs-jquery-metadata. 3235s Preparing to unpack .../06-libjs-jquery-metadata_12-4_all.deb ... 3235s Unpacking libjs-jquery-metadata (12-4) ... 3235s Selecting previously unselected package libjs-jquery-tablesorter. 3235s Preparing to unpack .../07-libjs-jquery-tablesorter_1%3a2.31.3+dfsg1-4_all.deb ... 3235s Unpacking libjs-jquery-tablesorter (1:2.31.3+dfsg1-4) ... 3235s Selecting previously unselected package libjs-jquery-throttle-debounce. 
3235s Preparing to unpack .../08-libjs-jquery-throttle-debounce_1.1+dfsg.1-2_all.deb ... 3235s Unpacking libjs-jquery-throttle-debounce (1.1+dfsg.1-2) ... 3235s Selecting previously unselected package libjs-underscore. 3235s Preparing to unpack .../09-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 3235s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 3235s Selecting previously unselected package libjs-sphinxdoc. 3235s Preparing to unpack .../10-libjs-sphinxdoc_7.4.7-4_all.deb ... 3235s Unpacking libjs-sphinxdoc (7.4.7-4) ... 3235s Selecting previously unselected package libpq5:s390x. 3235s Preparing to unpack .../11-libpq5_17.0-1_s390x.deb ... 3235s Unpacking libpq5:s390x (17.0-1) ... 3235s Selecting previously unselected package python3-ydiff. 3235s Preparing to unpack .../12-python3-ydiff_1.3-1_all.deb ... 3235s Unpacking python3-ydiff (1.3-1) ... 3235s Selecting previously unselected package python3-cdiff. 3235s Preparing to unpack .../13-python3-cdiff_1.3-1_all.deb ... 3235s Unpacking python3-cdiff (1.3-1) ... 3235s Selecting previously unselected package python3-colorama. 3235s Preparing to unpack .../14-python3-colorama_0.4.6-4_all.deb ... 3235s Unpacking python3-colorama (0.4.6-4) ... 3235s Selecting previously unselected package python3-click. 3235s Preparing to unpack .../15-python3-click_8.1.7-2_all.deb ... 3235s Unpacking python3-click (8.1.7-2) ... 3235s Selecting previously unselected package python3-six. 3235s Preparing to unpack .../16-python3-six_1.16.0-7_all.deb ... 3235s Unpacking python3-six (1.16.0-7) ... 3235s Selecting previously unselected package python3-dateutil. 3235s Preparing to unpack .../17-python3-dateutil_2.9.0-2_all.deb ... 3235s Unpacking python3-dateutil (2.9.0-2) ... 3235s Selecting previously unselected package python3-wcwidth. 3235s Preparing to unpack .../18-python3-wcwidth_0.2.13+dfsg1-1_all.deb ... 3235s Unpacking python3-wcwidth (0.2.13+dfsg1-1) ... 
3235s Selecting previously unselected package python3-prettytable. 3235s Preparing to unpack .../19-python3-prettytable_3.10.1-1_all.deb ... 3235s Unpacking python3-prettytable (3.10.1-1) ... 3235s Selecting previously unselected package python3-psutil. 3235s Preparing to unpack .../20-python3-psutil_5.9.8-2build2_s390x.deb ... 3235s Unpacking python3-psutil (5.9.8-2build2) ... 3235s Selecting previously unselected package python3-psycopg2. 3235s Preparing to unpack .../21-python3-psycopg2_2.9.9-2_s390x.deb ... 3235s Unpacking python3-psycopg2 (2.9.9-2) ... 3235s Selecting previously unselected package python3-dnspython. 3235s Preparing to unpack .../22-python3-dnspython_2.6.1-1ubuntu1_all.deb ... 3235s Unpacking python3-dnspython (2.6.1-1ubuntu1) ... 3235s Selecting previously unselected package python3-etcd. 3235s Preparing to unpack .../23-python3-etcd_0.4.5-4_all.deb ... 3235s Unpacking python3-etcd (0.4.5-4) ... 3235s Selecting previously unselected package python3-consul. 3235s Preparing to unpack .../24-python3-consul_0.7.1-2_all.deb ... 3235s Unpacking python3-consul (0.7.1-2) ... 3235s Selecting previously unselected package python3-greenlet. 3235s Preparing to unpack .../25-python3-greenlet_3.0.3-0ubuntu6_s390x.deb ... 3235s Unpacking python3-greenlet (3.0.3-0ubuntu6) ... 3235s Selecting previously unselected package python3-eventlet. 3235s Preparing to unpack .../26-python3-eventlet_0.36.1-0ubuntu1_all.deb ... 3235s Unpacking python3-eventlet (0.36.1-0ubuntu1) ... 3235s Selecting previously unselected package python3-zope.event. 3235s Preparing to unpack .../27-python3-zope.event_5.0-0.1_all.deb ... 3235s Unpacking python3-zope.event (5.0-0.1) ... 3235s Selecting previously unselected package python3-zope.interface. 3235s Preparing to unpack .../28-python3-zope.interface_7.1.1-1_s390x.deb ... 3235s Unpacking python3-zope.interface (7.1.1-1) ... 3235s Selecting previously unselected package python3-gevent. 
3235s Preparing to unpack .../29-python3-gevent_24.2.1-1_s390x.deb ... 3235s Unpacking python3-gevent (24.2.1-1) ... 3235s Selecting previously unselected package python3-kerberos. 3235s Preparing to unpack .../30-python3-kerberos_1.1.14-3.1build9_s390x.deb ... 3235s Unpacking python3-kerberos (1.1.14-3.1build9) ... 3235s Selecting previously unselected package python3-pure-sasl. 3235s Preparing to unpack .../31-python3-pure-sasl_0.5.1+dfsg1-4_all.deb ... 3235s Unpacking python3-pure-sasl (0.5.1+dfsg1-4) ... 3235s Selecting previously unselected package python3-kazoo. 3235s Preparing to unpack .../32-python3-kazoo_2.9.0-2_all.deb ... 3235s Unpacking python3-kazoo (2.9.0-2) ... 3235s Selecting previously unselected package python3-multidict. 3235s Preparing to unpack .../33-python3-multidict_6.1.0-1_s390x.deb ... 3235s Unpacking python3-multidict (6.1.0-1) ... 3235s Selecting previously unselected package python3-yarl. 3235s Preparing to unpack .../34-python3-yarl_1.9.4-1_s390x.deb ... 3235s Unpacking python3-yarl (1.9.4-1) ... 3235s Selecting previously unselected package python3-async-timeout. 3235s Preparing to unpack .../35-python3-async-timeout_4.0.3-1_all.deb ... 3235s Unpacking python3-async-timeout (4.0.3-1) ... 3235s Selecting previously unselected package python3-frozenlist. 3235s Preparing to unpack .../36-python3-frozenlist_1.5.0-1_s390x.deb ... 3235s Unpacking python3-frozenlist (1.5.0-1) ... 3235s Selecting previously unselected package python3-aiosignal. 3235s Preparing to unpack .../37-python3-aiosignal_1.3.1-1_all.deb ... 3235s Unpacking python3-aiosignal (1.3.1-1) ... 3235s Selecting previously unselected package python3-aiohttp. 3235s Preparing to unpack .../38-python3-aiohttp_3.9.5-1_s390x.deb ... 3235s Unpacking python3-aiohttp (3.9.5-1) ... 3235s Selecting previously unselected package python3-cachetools. 3235s Preparing to unpack .../39-python3-cachetools_5.3.3-1_all.deb ... 3235s Unpacking python3-cachetools (5.3.3-1) ... 
3235s Selecting previously unselected package python3-pyasn1.
3235s Preparing to unpack .../40-python3-pyasn1_0.5.1-1_all.deb ...
3235s Unpacking python3-pyasn1 (0.5.1-1) ...
3235s Selecting previously unselected package python3-pyasn1-modules.
3235s Preparing to unpack .../41-python3-pyasn1-modules_0.3.0-1_all.deb ...
3235s Unpacking python3-pyasn1-modules (0.3.0-1) ...
3235s Selecting previously unselected package python3-pyu2f.
3235s Preparing to unpack .../42-python3-pyu2f_0.1.5-4_all.deb ...
3235s Unpacking python3-pyu2f (0.1.5-4) ...
3235s Selecting previously unselected package python3-responses.
3235s Preparing to unpack .../43-python3-responses_0.25.3-1_all.deb ...
3235s Unpacking python3-responses (0.25.3-1) ...
3235s Selecting previously unselected package python3-rsa.
3235s Preparing to unpack .../44-python3-rsa_4.9-2_all.deb ...
3235s Unpacking python3-rsa (4.9-2) ...
3235s Selecting previously unselected package python3-google-auth.
3235s Preparing to unpack .../45-python3-google-auth_2.28.2-3_all.deb ...
3235s Unpacking python3-google-auth (2.28.2-3) ...
3235s Selecting previously unselected package python3-requests-oauthlib.
3235s Preparing to unpack .../46-python3-requests-oauthlib_1.3.1-1_all.deb ...
3235s Unpacking python3-requests-oauthlib (1.3.1-1) ...
3235s Selecting previously unselected package python3-websocket.
3235s Preparing to unpack .../47-python3-websocket_1.8.0-2_all.deb ...
3235s Unpacking python3-websocket (1.8.0-2) ...
3235s Selecting previously unselected package python3-kubernetes.
3235s Preparing to unpack .../48-python3-kubernetes_30.1.0-1_all.deb ...
3235s Unpacking python3-kubernetes (30.1.0-1) ...
3236s Selecting previously unselected package python3-pysyncobj.
3236s Preparing to unpack .../49-python3-pysyncobj_0.3.12-1_all.deb ...
3236s Unpacking python3-pysyncobj (0.3.12-1) ...
3236s Selecting previously unselected package patroni.
3236s Preparing to unpack .../50-patroni_3.3.1-1_all.deb ...
3236s Unpacking patroni (3.3.1-1) ...
3236s Selecting previously unselected package sphinx-rtd-theme-common.
3236s Preparing to unpack .../51-sphinx-rtd-theme-common_3.0.1+dfsg-1_all.deb ...
3236s Unpacking sphinx-rtd-theme-common (3.0.1+dfsg-1) ...
3236s Selecting previously unselected package patroni-doc.
3236s Preparing to unpack .../52-patroni-doc_3.3.1-1_all.deb ...
3236s Unpacking patroni-doc (3.3.1-1) ...
3236s Selecting previously unselected package python3-jmespath.
3236s Preparing to unpack .../53-python3-jmespath_1.0.1-1_all.deb ...
3236s Unpacking python3-jmespath (1.0.1-1) ...
3236s Selecting previously unselected package python3-botocore.
3236s Preparing to unpack .../54-python3-botocore_1.34.46+repack-1ubuntu1_all.deb ...
3236s Unpacking python3-botocore (1.34.46+repack-1ubuntu1) ...
3236s Selecting previously unselected package python3-s3transfer.
3236s Preparing to unpack .../55-python3-s3transfer_0.10.1-1ubuntu2_all.deb ...
3236s Unpacking python3-s3transfer (0.10.1-1ubuntu2) ...
3236s Selecting previously unselected package python3-boto3.
3236s Preparing to unpack .../56-python3-boto3_1.34.46+dfsg-1ubuntu1_all.deb ...
3236s Unpacking python3-boto3 (1.34.46+dfsg-1ubuntu1) ...
3236s Selecting previously unselected package python3-coverage.
3236s Preparing to unpack .../57-python3-coverage_7.4.4+dfsg1-0ubuntu2_s390x.deb ...
3236s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
3236s Selecting previously unselected package python3-mccabe.
3236s Preparing to unpack .../58-python3-mccabe_0.7.0-1_all.deb ...
3236s Unpacking python3-mccabe (0.7.0-1) ...
3236s Selecting previously unselected package python3-pycodestyle.
3236s Preparing to unpack .../59-python3-pycodestyle_2.11.1-1_all.deb ...
3236s Unpacking python3-pycodestyle (2.11.1-1) ...
3236s Selecting previously unselected package python3-pyflakes.
3236s Preparing to unpack .../60-python3-pyflakes_3.2.0-1_all.deb ...
3236s Unpacking python3-pyflakes (3.2.0-1) ...
3236s Selecting previously unselected package python3-flake8.
3236s Preparing to unpack .../61-python3-flake8_7.1.1-1_all.deb ...
3236s Unpacking python3-flake8 (7.1.1-1) ...
3236s Selecting previously unselected package python3-iniconfig.
3236s Preparing to unpack .../62-python3-iniconfig_1.1.1-2_all.deb ...
3236s Unpacking python3-iniconfig (1.1.1-2) ...
3236s Selecting previously unselected package python3-packaging.
3236s Preparing to unpack .../63-python3-packaging_24.1-1_all.deb ...
3236s Unpacking python3-packaging (24.1-1) ...
3236s Selecting previously unselected package python3-pluggy.
3236s Preparing to unpack .../64-python3-pluggy_1.5.0-1_all.deb ...
3236s Unpacking python3-pluggy (1.5.0-1) ...
3236s Selecting previously unselected package python3-pytest.
3236s Preparing to unpack .../65-python3-pytest_8.3.3-1_all.deb ...
3236s Unpacking python3-pytest (8.3.3-1) ...
3236s Selecting previously unselected package libjs-jquery-isonscreen.
3236s Preparing to unpack .../66-libjs-jquery-isonscreen_1.2.0-1.1_all.deb ...
3236s Unpacking libjs-jquery-isonscreen (1.2.0-1.1) ...
3236s Selecting previously unselected package python3-pytest-cov.
3236s Preparing to unpack .../67-python3-pytest-cov_5.0.0-1_all.deb ...
3236s Unpacking python3-pytest-cov (5.0.0-1) ...
3236s Selecting previously unselected package python3-mock.
3236s Preparing to unpack .../68-python3-mock_5.1.0-1_all.deb ...
3236s Unpacking python3-mock (5.1.0-1) ...
3236s Selecting previously unselected package autopkgtest-satdep.
3236s Preparing to unpack .../69-6-autopkgtest-satdep.deb ...
3236s Unpacking autopkgtest-satdep (0) ...
3236s Setting up python3-iniconfig (1.1.1-2) ...
3236s Setting up libev4t64:s390x (1:4.33-2.1build1) ...
3236s Setting up fonts-lato (2.015-1) ...
3236s Setting up python3-pysyncobj (0.3.12-1) ...
3236s Setting up python3-cachetools (5.3.3-1) ...
3236s Setting up python3-colorama (0.4.6-4) ...
3237s Setting up python3-zope.event (5.0-0.1) ...
3237s Setting up python3-zope.interface (7.1.1-1) ...
3237s Setting up python3-pyflakes (3.2.0-1) ...
3237s Setting up python3-ydiff (1.3-1) ...
3237s Setting up libpq5:s390x (17.0-1) ...
3237s Setting up python3-kerberos (1.1.14-3.1build9) ...
3237s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
3237s Setting up libjs-jquery-throttle-debounce (1.1+dfsg.1-2) ...
3237s Setting up python3-click (8.1.7-2) ...
3237s Setting up python3-psutil (5.9.8-2build2) ...
3238s Setting up python3-multidict (6.1.0-1) ...
3238s Setting up python3-frozenlist (1.5.0-1) ...
3238s Setting up python3-aiosignal (1.3.1-1) ...
3238s Setting up python3-mock (5.1.0-1) ...
3238s Setting up python3-async-timeout (4.0.3-1) ...
3238s Setting up python3-six (1.16.0-7) ...
3238s Setting up python3-responses (0.25.3-1) ...
3238s Setting up python3-pycodestyle (2.11.1-1) ...
3238s Setting up python3-packaging (24.1-1) ...
3238s Setting up python3-wcwidth (0.2.13+dfsg1-1) ...
3239s Setting up python3-pyu2f (0.1.5-4) ...
3239s Setting up python3-jmespath (1.0.1-1) ...
3239s Setting up python3-greenlet (3.0.3-0ubuntu6) ...
3239s Setting up libcares2:s390x (1.34.2-1) ...
3239s Setting up python3-psycopg2 (2.9.9-2) ...
3239s Setting up python3-pluggy (1.5.0-1) ...
3239s Setting up python3-dnspython (2.6.1-1ubuntu1) ...
3239s Setting up python3-pyasn1 (0.5.1-1) ...
3239s Setting up python3-dateutil (2.9.0-2) ...
3240s Setting up python3-mccabe (0.7.0-1) ...
3240s Setting up python3-consul (0.7.1-2) ...
3240s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
3240s Setting up libjs-jquery-hotkeys (0~20130707+git2d51e3a9+dfsg-2.1) ...
3240s Setting up python3-prettytable (3.10.1-1) ...
3240s Setting up python3-yarl (1.9.4-1) ...
3240s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
3240s Setting up sphinx-rtd-theme-common (3.0.1+dfsg-1) ...
3240s Setting up python3-websocket (1.8.0-2) ...
3240s Setting up python3-requests-oauthlib (1.3.1-1) ...
3240s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
3240s Setting up python3-pure-sasl (0.5.1+dfsg1-4) ...
3240s Setting up python3-etcd (0.4.5-4) ...
3240s Setting up python3-pytest (8.3.3-1) ...
3241s Setting up python3-cdiff (1.3-1) ...
3241s Setting up python3-aiohttp (3.9.5-1) ...
3241s Setting up python3-gevent (24.2.1-1) ...
3241s Setting up python3-flake8 (7.1.1-1) ...
3241s Setting up python3-eventlet (0.36.1-0ubuntu1) ...
3241s Setting up python3-kazoo (2.9.0-2) ...
3242s Setting up python3-pyasn1-modules (0.3.0-1) ...
3242s Setting up libjs-jquery-metadata (12-4) ...
3242s Setting up python3-botocore (1.34.46+repack-1ubuntu1) ...
3242s Setting up libjs-jquery-isonscreen (1.2.0-1.1) ...
3242s Setting up libjs-sphinxdoc (7.4.7-4) ...
3242s Setting up libjs-jquery-tablesorter (1:2.31.3+dfsg1-4) ...
3242s Setting up python3-rsa (4.9-2) ...
3242s Setting up patroni (3.3.1-1) ...
3242s Created symlink '/etc/systemd/system/multi-user.target.wants/patroni.service' → '/usr/lib/systemd/system/patroni.service'.
3243s Setting up patroni-doc (3.3.1-1) ...
3243s Setting up python3-s3transfer (0.10.1-1ubuntu2) ...
3243s Setting up python3-pytest-cov (5.0.0-1) ...
3243s Setting up python3-google-auth (2.28.2-3) ...
3243s Setting up python3-boto3 (1.34.46+dfsg-1ubuntu1) ...
3243s Setting up python3-kubernetes (30.1.0-1) ...
3245s Setting up autopkgtest-satdep (0) ...
3245s Processing triggers for man-db (2.12.1-3) ...
3245s Processing triggers for libc-bin (2.40-1ubuntu3) ...
3248s (Reading database ... 61619 files and directories currently installed.)
3248s Removing autopkgtest-satdep (0) ...
3249s autopkgtest [12:23:39]: test test: [-----------------------
3249s running test
3250s ============================= test session starts ==============================
3250s platform linux -- Python 3.12.7, pytest-8.3.3, pluggy-1.5.0 -- /usr/bin/python3
3250s cachedir: .pytest_cache
3250s rootdir: /tmp/autopkgtest.FwqS2V/build.hfu/src
3250s plugins: typeguard-4.4.1, cov-5.0.0
3256s collecting ... collected 646 items
3256s
3256s tests/test_api.py::TestRestApiHandler::test_RestApiServer_query PASSED [  0%]
3256s tests/test_api.py::TestRestApiHandler::test_basicauth PASSED [  0%]
3256s tests/test_api.py::TestRestApiHandler::test_do_DELETE_restart PASSED [  0%]
3256s tests/test_api.py::TestRestApiHandler::test_do_DELETE_switchover PASSED [  0%]
3256s tests/test_api.py::TestRestApiHandler::test_do_GET PASSED [  0%]
3256s tests/test_api.py::TestRestApiHandler::test_do_GET_cluster PASSED [  0%]
3256s tests/test_api.py::TestRestApiHandler::test_do_GET_config PASSED [  1%]
3256s tests/test_api.py::TestRestApiHandler::test_do_GET_failsafe PASSED [  1%]
3256s tests/test_api.py::TestRestApiHandler::test_do_GET_history PASSED [  1%]
3256s tests/test_api.py::TestRestApiHandler::test_do_GET_liveness PASSED [  1%]
3256s tests/test_api.py::TestRestApiHandler::test_do_GET_metrics PASSED [  1%]
3256s tests/test_api.py::TestRestApiHandler::test_do_GET_patroni PASSED [  1%]
3256s tests/test_api.py::TestRestApiHandler::test_do_GET_readiness PASSED [  2%]
3256s tests/test_api.py::TestRestApiHandler::test_do_HEAD PASSED [  2%]
3256s tests/test_api.py::TestRestApiHandler::test_do_OPTIONS PASSED [  2%]
3256s tests/test_api.py::TestRestApiHandler::test_do_PATCH_config PASSED [  2%]
3256s tests/test_api.py::TestRestApiHandler::test_do_POST_citus PASSED [  2%]
3256s tests/test_api.py::TestRestApiHandler::test_do_POST_failover PASSED [  2%]
3256s tests/test_api.py::TestRestApiHandler::test_do_POST_failsafe PASSED [  2%]
3256s tests/test_api.py::TestRestApiHandler::test_do_POST_mpp PASSED [  3%]
3256s tests/test_api.py::TestRestApiHandler::test_do_POST_reinitialize PASSED [  3%]
3256s tests/test_api.py::TestRestApiHandler::test_do_POST_reload PASSED [  3%]
3256s tests/test_api.py::TestRestApiHandler::test_do_POST_restart PASSED [  3%]
3256s tests/test_api.py::TestRestApiHandler::test_do_POST_sigterm PASSED [  3%]
3256s tests/test_api.py::TestRestApiHandler::test_do_POST_switchover PASSED [  3%]
3256s tests/test_api.py::TestRestApiHandler::test_do_PUT_config PASSED [  4%]
3256s tests/test_api.py::TestRestApiServer::test_check_access PASSED [  4%]
3256s tests/test_api.py::TestRestApiServer::test_get_certificate_serial_number PASSED [  4%]
3256s tests/test_api.py::TestRestApiServer::test_handle_error PASSED [  4%]
3256s tests/test_api.py::TestRestApiServer::test_process_request_error PASSED [  4%]
3256s tests/test_api.py::TestRestApiServer::test_process_request_thread PASSED [  4%]
3256s tests/test_api.py::TestRestApiServer::test_query PASSED [  4%]
3256s tests/test_api.py::TestRestApiServer::test_reload_config PASSED [  5%]
3256s tests/test_api.py::TestRestApiServer::test_reload_local_certificate PASSED [  5%]
3256s tests/test_api.py::TestRestApiServer::test_socket_error PASSED [  5%]
3256s tests/test_async_executor.py::TestAsyncExecutor::test_cancel PASSED [  5%]
3256s tests/test_async_executor.py::TestAsyncExecutor::test_run PASSED [  5%]
3256s tests/test_async_executor.py::TestAsyncExecutor::test_run_async PASSED [  5%]
3256s tests/test_async_executor.py::TestCriticalTask::test_completed_task PASSED [  6%]
3256s tests/test_aws.py::TestAWSConnection::test_aws_bizare_response PASSED [  6%]
3256s tests/test_aws.py::TestAWSConnection::test_main PASSED [  6%]
3256s tests/test_aws.py::TestAWSConnection::test_non_aws PASSED [  6%]
3256s tests/test_aws.py::TestAWSConnection::test_on_role_change PASSED [  6%]
3256s tests/test_barman.py::test_set_up_logging PASSED [  6%]
3256s tests/test_barman.py::TestPgBackupApi::test__build_full_url PASSED [  6%]
3256s tests/test_barman.py::TestPgBackupApi::test__deserialize_response PASSED [  7%]
3256s tests/test_barman.py::TestPgBackupApi::test__ensure_api_ok PASSED [  7%]
3256s tests/test_barman.py::TestPgBackupApi::test__get_request PASSED [  7%]
3256s tests/test_barman.py::TestPgBackupApi::test__post_request PASSED [  7%]
3256s tests/test_barman.py::TestPgBackupApi::test__serialize_request PASSED [  7%]
3256s tests/test_barman.py::TestPgBackupApi::test_create_config_switch_operation PASSED [  7%]
3256s tests/test_barman.py::TestPgBackupApi::test_create_recovery_operation PASSED [  8%]
3256s tests/test_barman.py::TestPgBackupApi::test_get_operation_status PASSED [  8%]
3256s tests/test_barman.py::TestBarmanRecover::test__restore_backup PASSED [  8%]
3256s tests/test_barman.py::TestBarmanRecoverCli::test_run_barman_recover PASSED [  8%]
3256s tests/test_barman.py::TestBarmanConfigSwitch::test__switch_config PASSED [  8%]
3256s tests/test_barman.py::TestBarmanConfigSwitchCli::test__should_skip_switch PASSED [  8%]
3256s tests/test_barman.py::TestBarmanConfigSwitchCli::test_run_barman_config_switch PASSED [  8%]
3256s tests/test_barman.py::TestMain::test_main PASSED [  9%]
3256s tests/test_bootstrap.py::TestBootstrap::test__initdb PASSED [  9%]
3256s tests/test_bootstrap.py::TestBootstrap::test__process_user_options PASSED [  9%]
3256s tests/test_bootstrap.py::TestBootstrap::test_basebackup PASSED [  9%]
3256s tests/test_bootstrap.py::TestBootstrap::test_bootstrap PASSED [  9%]
3256s tests/test_bootstrap.py::TestBootstrap::test_call_post_bootstrap PASSED [  9%]
3256s tests/test_bootstrap.py::TestBootstrap::test_clone PASSED [ 10%]
3256s tests/test_bootstrap.py::TestBootstrap::test_create_replica PASSED [ 10%]
3256s tests/test_bootstrap.py::TestBootstrap::test_create_replica_old_format PASSED [ 10%]
3256s tests/test_bootstrap.py::TestBootstrap::test_custom_bootstrap PASSED [ 10%]
3256s tests/test_bootstrap.py::TestBootstrap::test_post_bootstrap PASSED [ 10%]
3256s tests/test_callback_executor.py::TestCallbackExecutor::test_callback_executor PASSED [ 10%]
3256s tests/test_cancellable.py::TestCancellableSubprocess::test__kill_children PASSED [ 10%]
3256s tests/test_cancellable.py::TestCancellableSubprocess::test_call PASSED [ 11%]
3256s tests/test_cancellable.py::TestCancellableSubprocess::test_cancel PASSED [ 11%]
3256s tests/test_citus.py::TestCitus::test_add_task SKIPPED (Citus not tested) [ 11%]
3256s tests/test_citus.py::TestCitus::test_adjust_postgres_gucs SKIPPED (C...) [ 11%]
3256s tests/test_citus.py::TestCitus::test_bootstrap_duplicate_database SKIPPED [ 11%]
3256s tests/test_citus.py::TestCitus::test_handle_event SKIPPED (Citus not...) [ 11%]
3256s tests/test_citus.py::TestCitus::test_ignore_replication_slot SKIPPED [ 12%]
3256s tests/test_citus.py::TestCitus::test_load_pg_dist_node SKIPPED (Citu...) [ 12%]
3256s tests/test_citus.py::TestCitus::test_on_demote SKIPPED (Citus not te...) [ 12%]
3256s tests/test_citus.py::TestCitus::test_pick_task SKIPPED (Citus not te...) [ 12%]
3256s tests/test_citus.py::TestCitus::test_process_task SKIPPED (Citus not...) [ 12%]
3256s tests/test_citus.py::TestCitus::test_process_tasks SKIPPED (Citus no...) [ 12%]
3256s tests/test_citus.py::TestCitus::test_run SKIPPED (Citus not tested) [ 13%]
3256s tests/test_citus.py::TestCitus::test_sync_meta_data SKIPPED (Citus n...) [ 13%]
3256s tests/test_citus.py::TestCitus::test_wait SKIPPED (Citus not tested) [ 13%]
3256s tests/test_config.py::TestConfig::test__process_postgresql_parameters PASSED [ 13%]
3256s tests/test_config.py::TestConfig::test__validate_and_adjust_timeouts PASSED [ 13%]
3256s tests/test_config.py::TestConfig::test__validate_failover_tags PASSED [ 13%]
3256s tests/test_config.py::TestConfig::test_configuration_directory PASSED [ 13%]
3256s tests/test_config.py::TestConfig::test_global_config_is_synchronous_mode PASSED [ 14%]
3256s tests/test_config.py::TestConfig::test_invalid_path PASSED [ 14%]
3256s tests/test_config.py::TestConfig::test_reload_local_configuration PASSED [ 14%]
3256s tests/test_config.py::TestConfig::test_save_cache PASSED [ 14%]
3256s tests/test_config.py::TestConfig::test_set_dynamic_configuration PASSED [ 14%]
3256s tests/test_config.py::TestConfig::test_standby_cluster_parameters PASSED [ 14%]
3256s tests/test_config_generator.py::TestGenerateConfig::test_generate_config_running_instance_16 PASSED [ 15%]
3256s tests/test_config_generator.py::TestGenerateConfig::test_generate_config_running_instance_16_connect_from_env PASSED [ 15%]
3256s tests/test_config_generator.py::TestGenerateConfig::test_generate_config_running_instance_errors PASSED [ 15%]
3256s tests/test_config_generator.py::TestGenerateConfig::test_generate_sample_config_16 PASSED [ 15%]
3256s tests/test_config_generator.py::TestGenerateConfig::test_generate_sample_config_pre_13_dir_creation PASSED [ 15%]
3256s tests/test_config_generator.py::TestGenerateConfig::test_get_address PASSED [ 15%]
3256s tests/test_consul.py::TestHTTPClient::test_get PASSED [ 15%]
3256s tests/test_consul.py::TestHTTPClient::test_put PASSED [ 16%]
3256s tests/test_consul.py::TestHTTPClient::test_unknown_method PASSED [ 16%]
3256s tests/test_consul.py::TestConsul::test__get_citus_cluster PASSED [ 16%]
3256s tests/test_consul.py::TestConsul::test_cancel_initialization PASSED [ 16%]
3256s tests/test_consul.py::TestConsul::test_create_session PASSED [ 16%]
3256s tests/test_consul.py::TestConsul::test_delete_cluster PASSED [ 16%]
3256s tests/test_consul.py::TestConsul::test_delete_leader PASSED [ 17%]
3256s tests/test_consul.py::TestConsul::test_get_cluster PASSED [ 17%]
3256s tests/test_consul.py::TestConsul::test_initialize PASSED [ 17%]
3257s tests/test_consul.py::TestConsul::test_referesh_session PASSED [ 17%]
3257s tests/test_consul.py::TestConsul::test_reload_config PASSED [ 17%]
3257s tests/test_consul.py::TestConsul::test_set_config_value PASSED [ 17%]
3257s tests/test_consul.py::TestConsul::test_set_failover_value PASSED [ 17%]
3257s tests/test_consul.py::TestConsul::test_set_history_value PASSED [ 18%]
3257s tests/test_consul.py::TestConsul::test_set_retry_timeout PASSED [ 18%]
3257s tests/test_consul.py::TestConsul::test_sync_state PASSED [ 18%]
3257s tests/test_consul.py::TestConsul::test_take_leader PASSED [ 18%]
3257s tests/test_consul.py::TestConsul::test_touch_member PASSED [ 18%]
3257s tests/test_consul.py::TestConsul::test_update_leader PASSED [ 18%]
3257s tests/test_consul.py::TestConsul::test_update_service PASSED [ 19%]
3257s tests/test_consul.py::TestConsul::test_watch PASSED [ 19%]
3257s tests/test_consul.py::TestConsul::test_write_leader_optime PASSED [ 19%]
3257s tests/test_ctl.py::TestCtl::test_apply_config_changes PASSED [ 19%]
3257s tests/test_ctl.py::TestCtl::test_ctl PASSED [ 19%]
3257s tests/test_ctl.py::TestCtl::test_dsn PASSED [ 19%]
3257s tests/test_ctl.py::TestCtl::test_edit_config PASSED [ 19%]
3257s tests/test_ctl.py::TestCtl::test_failover PASSED [ 20%]
3257s tests/test_ctl.py::TestCtl::test_flush_restart PASSED [ 20%]
3257s tests/test_ctl.py::TestCtl::test_flush_switchover PASSED [ 20%]
3257s tests/test_ctl.py::TestCtl::test_format_pg_version PASSED [ 20%]
3257s tests/test_ctl.py::TestCtl::test_get_all_members PASSED [ 20%]
3257s tests/test_ctl.py::TestCtl::test_get_any_member PASSED [ 20%]
3257s tests/test_ctl.py::TestCtl::test_get_cursor PASSED [ 21%]
3257s tests/test_ctl.py::TestCtl::test_get_dcs PASSED [ 21%]
3257s tests/test_ctl.py::TestCtl::test_get_members PASSED [ 21%]
3257s tests/test_ctl.py::TestCtl::test_history PASSED [ 21%]
3257s tests/test_ctl.py::TestCtl::test_invoke_editor PASSED [ 21%]
3257s tests/test_ctl.py::TestCtl::test_list_extended PASSED [ 21%]
3257s tests/test_ctl.py::TestCtl::test_list_standby_cluster PASSED [ 21%]
3257s tests/test_ctl.py::TestCtl::test_load_config PASSED [ 22%]
3257s tests/test_ctl.py::TestCtl::test_members PASSED [ 22%]
3257s tests/test_ctl.py::TestCtl::test_output_members PASSED [ 22%]
3257s tests/test_ctl.py::TestCtl::test_parse_dcs PASSED [ 22%]
3257s tests/test_ctl.py::TestCtl::test_pause_cluster PASSED [ 22%]
3257s tests/test_ctl.py::TestCtl::test_query PASSED [ 22%]
3257s tests/test_ctl.py::TestCtl::test_query_member PASSED [ 23%]
3257s tests/test_ctl.py::TestCtl::test_reinit_wait PASSED [ 23%]
3257s tests/test_ctl.py::TestCtl::test_reload PASSED [ 23%]
3257s tests/test_ctl.py::TestCtl::test_remove PASSED [ 23%]
3257s tests/test_ctl.py::TestCtl::test_restart_reinit PASSED [ 23%]
3257s tests/test_ctl.py::TestCtl::test_resume_cluster PASSED [ 23%]
3257s tests/test_ctl.py::TestCtl::test_show_config PASSED [ 23%]
3257s tests/test_ctl.py::TestCtl::test_show_diff PASSED [ 24%]
3257s tests/test_ctl.py::TestCtl::test_switchover PASSED [ 24%]
3257s tests/test_ctl.py::TestCtl::test_topology PASSED [ 24%]
3257s tests/test_ctl.py::TestCtl::test_version PASSED [ 24%]
3257s tests/test_ctl.py::TestPatronictlPrettyTable::test__get_hline PASSED [ 24%]
3257s tests/test_ctl.py::TestPatronictlPrettyTable::test__stringify_hrule PASSED [ 24%]
3257s tests/test_ctl.py::TestPatronictlPrettyTable::test_output PASSED [ 25%]
3257s tests/test_etcd.py::TestDnsCachingResolver::test_run PASSED [ 25%]
3257s tests/test_etcd.py::TestClient::test___del__ PASSED [ 25%]
3257s tests/test_etcd.py::TestClient::test__get_machines_cache_from_dns PASSED [ 25%]
3257s tests/test_etcd.py::TestClient::test__get_machines_cache_from_srv PASSED [ 25%]
3257s tests/test_etcd.py::TestClient::test__load_machines_cache PASSED [ 25%]
3257s tests/test_etcd.py::TestClient::test__refresh_machines_cache PASSED [ 26%]
3257s tests/test_etcd.py::TestClient::test_api_execute PASSED [ 26%]
3257s tests/test_etcd.py::TestClient::test_create_connection_patched PASSED [ 26%]
3257s tests/test_etcd.py::TestClient::test_get_srv_record PASSED [ 26%]
3258s tests/test_etcd.py::TestClient::test_machines PASSED [ 26%]
3258s tests/test_etcd.py::TestEtcd::test__get_citus_cluster PASSED [ 26%]
3258s tests/test_etcd.py::TestEtcd::test_attempt_to_acquire_leader PASSED [ 26%]
3258s tests/test_etcd.py::TestEtcd::test_base_path PASSED [ 27%]
3258s tests/test_etcd.py::TestEtcd::test_cancel_initializion PASSED [ 27%]
3258s tests/test_etcd.py::TestEtcd::test_delete_cluster PASSED [ 27%]
3258s tests/test_etcd.py::TestEtcd::test_delete_leader PASSED [ 27%]
3258s tests/test_etcd.py::TestEtcd::test_get_cluster PASSED [ 27%]
3258s tests/test_etcd.py::TestEtcd::test_get_etcd_client PASSED [ 27%]
3258s tests/test_etcd.py::TestEtcd::test_initialize PASSED [ 28%]
3258s tests/test_etcd.py::TestEtcd::test_last_seen PASSED [ 28%]
3258s tests/test_etcd.py::TestEtcd::test_other_exceptions PASSED [ 28%]
3258s tests/test_etcd.py::TestEtcd::test_set_history_value PASSED [ 28%]
3258s tests/test_etcd.py::TestEtcd::test_set_ttl PASSED [ 28%]
3258s tests/test_etcd.py::TestEtcd::test_sync_state PASSED [ 28%]
3258s tests/test_etcd.py::TestEtcd::test_take_leader PASSED [ 28%]
3258s tests/test_etcd.py::TestEtcd::test_touch_member PASSED [ 29%]
3258s tests/test_etcd.py::TestEtcd::test_update_leader PASSED [ 29%]
3258s tests/test_etcd.py::TestEtcd::test_watch PASSED [ 29%]
3258s tests/test_etcd.py::TestEtcd::test_write_leader_optime PASSED [ 29%]
3258s tests/test_etcd3.py::TestEtcd3Client::test_authenticate PASSED [ 29%]
3258s tests/test_etcd3.py::TestKVCache::test__build_cache PASSED [ 29%]
3258s tests/test_etcd3.py::TestKVCache::test__do_watch PASSED [ 30%]
3258s tests/test_etcd3.py::TestKVCache::test_kill_stream PASSED [ 30%]
3258s tests/test_etcd3.py::TestKVCache::test_run PASSED [ 30%]
3258s tests/test_etcd3.py::TestPatroniEtcd3Client::test__ensure_version_prefix PASSED [ 30%]
3258s tests/test_etcd3.py::TestPatroniEtcd3Client::test__handle_auth_errors PASSED [ 30%]
3258s tests/test_etcd3.py::TestPatroniEtcd3Client::test__handle_server_response PASSED [ 30%]
3258s tests/test_etcd3.py::TestPatroniEtcd3Client::test__init__ PASSED [ 30%]
3258s tests/test_etcd3.py::TestPatroniEtcd3Client::test__restart_watcher PASSED [ 31%]
3258s tests/test_etcd3.py::TestPatroniEtcd3Client::test__wait_cache PASSED [ 31%]
3258s tests/test_etcd3.py::TestPatroniEtcd3Client::test_call_rpc PASSED [ 31%]
3258s tests/test_etcd3.py::TestPatroniEtcd3Client::test_txn PASSED [ 31%]
3258s tests/test_etcd3.py::TestEtcd3::test__get_citus_cluster PASSED [ 31%]
3258s tests/test_etcd3.py::TestEtcd3::test__update_leader PASSED [ 31%]
3258s tests/test_etcd3.py::TestEtcd3::test_attempt_to_acquire_leader PASSED [ 32%]
3258s tests/test_etcd3.py::TestEtcd3::test_cancel_initialization PASSED [ 32%]
3258s tests/test_etcd3.py::TestEtcd3::test_create_lease PASSED [ 32%]
3258s tests/test_etcd3.py::TestEtcd3::test_delete_cluster PASSED [ 32%]
3258s tests/test_etcd3.py::TestEtcd3::test_delete_leader PASSED [ 32%]
3258s tests/test_etcd3.py::TestEtcd3::test_delete_sync_state PASSED [ 32%]
3258s tests/test_etcd3.py::TestEtcd3::test_get_cluster PASSED [ 32%]
3258s tests/test_etcd3.py::TestEtcd3::test_initialize PASSED [ 33%]
3258s tests/test_etcd3.py::TestEtcd3::test_refresh_lease PASSED [ 33%]
3258s tests/test_etcd3.py::TestEtcd3::test_set_config_value PASSED [ 33%]
3258s tests/test_etcd3.py::TestEtcd3::test_set_failover_value PASSED [ 33%]
3258s tests/test_etcd3.py::TestEtcd3::test_set_history_value PASSED [ 33%]
3258s tests/test_etcd3.py::TestEtcd3::test_set_socket_options PASSED [ 33%]
3258s tests/test_etcd3.py::TestEtcd3::test_set_sync_state_value PASSED [ 34%]
3258s tests/test_etcd3.py::TestEtcd3::test_set_ttl PASSED [ 34%]
3258s tests/test_etcd3.py::TestEtcd3::test_take_leader PASSED [ 34%]
3258s tests/test_etcd3.py::TestEtcd3::test_touch_member PASSED [ 34%]
3258s tests/test_etcd3.py::TestEtcd3::test_watch PASSED [ 34%]
3258s tests/test_exhibitor.py::TestExhibitorEnsembleProvider::test_init PASSED [ 34%]
3258s tests/test_exhibitor.py::TestExhibitorEnsembleProvider::test_poll PASSED [ 34%]
3258s tests/test_exhibitor.py::TestExhibitor::test_get_cluster PASSED [ 35%]
3258s tests/test_file_perm.py::TestFilePermissions::test_set_permissions_from_data_directory PASSED [ 35%]
3258s tests/test_file_perm.py::TestFilePermissions::test_set_umask PASSED [ 35%]
3258s tests/test_ha.py::TestHa::test__is_healthiest_node PASSED [ 35%]
3258s tests/test_ha.py::TestHa::test_abort_join PASSED [ 35%]
3258s tests/test_ha.py::TestHa::test_acquire_lock PASSED [ 35%]
3258s tests/test_ha.py::TestHa::test_acquire_lock_as_primary PASSED [ 36%]
3258s tests/test_ha.py::TestHa::test_after_pause PASSED [ 36%]
3258s tests/test_ha.py::TestHa::test_bootstrap_as_standby_leader PASSED [ 36%]
3258s tests/test_ha.py::TestHa::test_bootstrap_from_another_member PASSED [ 36%]
3258s tests/test_ha.py::TestHa::test_bootstrap_initialize_lock_failed PASSED [ 36%]
3258s tests/test_ha.py::TestHa::test_bootstrap_initialized_new_cluster PASSED [ 36%]
3258s tests/test_ha.py::TestHa::test_bootstrap_not_running_concurrently PASSED [ 36%]
3258s tests/test_ha.py::TestHa::test_bootstrap_release_initialize_key_on_failure PASSED [ 37%]
3258s tests/test_ha.py::TestHa::test_bootstrap_release_initialize_key_on_watchdog_failure PASSED [ 37%]
3258s tests/test_ha.py::TestHa::test_bootstrap_waiting_for_leader PASSED [ 37%]
3258s tests/test_ha.py::TestHa::test_bootstrap_waiting_for_standby_leader PASSED [ 37%]
3258s tests/test_ha.py::TestHa::test_bootstrap_without_leader PASSED [ 37%]
3258s tests/test_ha.py::TestHa::test_check_failsafe_topology PASSED [ 37%]
3258s tests/test_ha.py::TestHa::test_coordinator_leader_with_lock PASSED [ 38%]
3258s tests/test_ha.py::TestHa::test_crash_recovery PASSED [ 38%]
3258s tests/test_ha.py::TestHa::test_crash_recovery_before_rewind PASSED [ 38%]
3258s tests/test_ha.py::TestHa::test_delete_future_restarts PASSED [ 38%]
3258s tests/test_ha.py::TestHa::test_demote_after_failing_to_obtain_lock PASSED [ 38%]
3258s tests/test_ha.py::TestHa::test_demote_because_not_having_lock PASSED [ 38%]
3258s tests/test_ha.py::TestHa::test_demote_because_not_healthiest PASSED [ 39%]
3258s tests/test_ha.py::TestHa::test_demote_because_update_lock_failed PASSED [ 39%]
3258s tests/test_ha.py::TestHa::test_demote_immediate PASSED [ 39%]
3258s tests/test_ha.py::TestHa::test_disable_sync_when_restarting PASSED [ 39%]
3258s tests/test_ha.py::TestHa::test_effective_tags PASSED [ 39%]
3258s tests/test_ha.py::TestHa::test_empty_directory_in_pause PASSED [ 39%]
3258s tests/test_ha.py::TestHa::test_enable_synchronous_mode PASSED [ 39%]
3258s tests/test_ha.py::TestHa::test_evaluate_scheduled_restart PASSED [ 40%]
3258s tests/test_ha.py::TestHa::test_failed_to_update_lock_in_pause PASSED [ 40%]
3258s tests/test_ha.py::TestHa::test_failover_immediately_on_zero_primary_start_timeout PASSED [ 40%]
3258s tests/test_ha.py::TestHa::test_fetch_node_status PASSED [ 40%]
3258s tests/test_ha.py::TestHa::test_follow PASSED [ 40%]
3258s tests/test_ha.py::TestHa::test_follow_copy PASSED [ 40%]
3258s tests/test_ha.py::TestHa::test_follow_in_pause PASSED [ 41%]
3258s tests/test_ha.py::TestHa::test_follow_new_leader_after_failing_to_obtain_lock PASSED [ 41%]
3258s tests/test_ha.py::TestHa::test_follow_new_leader_because_not_healthiest PASSED [ 41%]
3258s tests/test_ha.py::TestHa::test_follow_triggers_rewind PASSED [ 41%]
3258s tests/test_ha.py::TestHa::test_get_node_to_follow_nostream PASSED [ 41%]
3258s tests/test_ha.py::TestHa::test_inconsistent_synchronous_state PASSED [ 41%]
3258s tests/test_ha.py::TestHa::test_is_healthiest_node PASSED [ 41%]
3258s tests/test_ha.py::TestHa::test_is_leader PASSED [ 42%]
3258s tests/test_ha.py::TestHa::test_leader_race_stale_primary PASSED [ 42%]
3258s tests/test_ha.py::TestHa::test_leader_with_lock PASSED [ 42%]
3258s tests/test_ha.py::TestHa::test_leader_with_not_accessible_data_directory PASSED [ 42%]
3258s tests/test_ha.py::TestHa::test_long_promote PASSED [ 42%]
3258s tests/test_ha.py::TestHa::test_lost_leader_lock_during_promote PASSED [ 42%]
3258s tests/test_ha.py::TestHa::test_manual_failover_from_leader PASSED [ 43%]
3259s tests/test_ha.py::TestHa::test_manual_failover_from_leader_in_pause PASSED [ 43%]
3259s tests/test_ha.py::TestHa::test_manual_failover_from_leader_in_synchronous_mode PASSED [ 43%]
3259s tests/test_ha.py::TestHa::test_manual_failover_process_no_leader PASSED [ 43%]
3259s tests/test_ha.py::TestHa::test_manual_failover_process_no_leader_in_pause PASSED [ 43%]
3259s tests/test_ha.py::TestHa::test_manual_failover_process_no_leader_in_synchronous_mode PASSED [ 43%]
3259s tests/test_ha.py::TestHa::test_manual_failover_while_starting PASSED [ 43%]
3259s tests/test_ha.py::TestHa::test_manual_switchover_from_leader PASSED [ 44%]
3259s tests/test_ha.py::TestHa::test_manual_switchover_from_leader_in_pause PASSED [ 44%]
3259s tests/test_ha.py::TestHa::test_manual_switchover_from_leader_in_synchronous_mode PASSED [ 44%]
3259s tests/test_ha.py::TestHa::test_manual_switchover_process_no_leader PASSED [ 44%]
3259s tests/test_ha.py::TestHa::test_manual_switchover_process_no_leader_in_pause PASSED [ 44%]
3259s tests/test_ha.py::TestHa::test_manual_switchover_process_no_leader_in_synchronous_mode PASSED [ 44%]
3259s tests/test_ha.py::TestHa::test_no_dcs_connection_primary_demote PASSED [ 45%]
3259s tests/test_ha.py::TestHa::test_no_dcs_connection_primary_failsafe PASSED [ 45%]
3259s tests/test_ha.py::TestHa::test_no_dcs_connection_replica_failsafe PASSED [ 45%]
3259s tests/test_ha.py::TestHa::test_no_dcs_connection_replica_failsafe_not_enabled_but_active PASSED [ 45%]
3259s tests/test_ha.py::TestHa::test_no_etcd_connection_in_pause PASSED [ 45%]
3259s tests/test_ha.py::TestHa::test_notify_citus_coordinator PASSED [ 45%]
3259s tests/test_ha.py::TestHa::test_permanent_logical_slots_after_promote PASSED [ 45%]
3259s tests/test_ha.py::TestHa::test_post_recover PASSED [ 46%]
3259s tests/test_ha.py::TestHa::test_postgres_unhealthy_in_pause PASSED [ 46%]
3259s tests/test_ha.py::TestHa::test_primary_stop_timeout PASSED [ 46%]
3259s tests/test_ha.py::TestHa::test_process_healthy_cluster_in_pause PASSED [ 46%]
3259s tests/test_ha.py::TestHa::test_process_healthy_standby_cluster_as_cascade_replica PASSED [ 46%]
3259s tests/test_ha.py::TestHa::test_process_healthy_standby_cluster_as_standby_leader PASSED [ 46%]
3259s tests/test_ha.py::TestHa::test_process_sync_replication PASSED [ 47%]
3259s tests/test_ha.py::TestHa::test_process_unhealthy_standby_cluster_as_cascade_replica PASSED [ 47%]
3259s tests/test_ha.py::TestHa::test_process_unhealthy_standby_cluster_as_standby_leader PASSED [ 47%]
3259s tests/test_ha.py::TestHa::test_promote_because_have_lock PASSED [ 47%]
3259s tests/test_ha.py::TestHa::test_promote_without_watchdog PASSED [ 47%]
3259s tests/test_ha.py::TestHa::test_promoted_by_acquiring_lock PASSED [ 47%]
3259s tests/test_ha.py::TestHa::test_promotion_cancelled_after_pre_promote_failed PASSED [ 47%]
3259s tests/test_ha.py::TestHa::test_readonly_dcs_primary_failsafe PASSED [ 48%]
3259s tests/test_ha.py::TestHa::test_recover_former_primary PASSED [ 48%]
3259s tests/test_ha.py::TestHa::test_recover_raft PASSED [ 48%]
3259s tests/test_ha.py::TestHa::test_recover_replica_failed PASSED [ 48%]
3259s tests/test_ha.py::TestHa::test_recover_unhealthy_leader_in_standby_cluster PASSED [ 48%]
3259s tests/test_ha.py::TestHa::test_recover_unhealthy_unlocked_standby_cluster PASSED [ 48%]
3259s tests/test_ha.py::TestHa::test_recover_with_reinitialize PASSED [ 49%]
3259s tests/test_ha.py::TestHa::test_recover_with_rewind PASSED [ 49%]
3259s tests/test_ha.py::TestHa::test_reinitialize PASSED [ 49%]
3259s tests/test_ha.py::TestHa::test_restart PASSED [ 49%]
3259s tests/test_ha.py::TestHa::test_restart_in_progress PASSED [ 49%]
3259s tests/test_ha.py::TestHa::test_restart_matches PASSED [ 49%]
3259s tests/test_ha.py::TestHa::test_restore_cluster_config PASSED [ 50%]
3259s tests/test_ha.py::TestHa::test_run_cycle PASSED [ 50%]
3259s tests/test_ha.py::TestHa::test_schedule_future_restart PASSED [ 50%]
3259s tests/test_ha.py::TestHa::test_scheduled_restart PASSED [ 50%]
3259s tests/test_ha.py::TestHa::test_scheduled_switchover_from_leader PASSED [ 50%]
3259s tests/test_ha.py::TestHa::test_shutdown PASSED [ 50%]
3259s tests/test_ha.py::TestHa::test_shutdown_citus_worker PASSED [ 50%]
3259s tests/test_ha.py::TestHa::test_start_as_cascade_replica_in_standby_cluster PASSED [ 51%]
3259s tests/test_ha.py::TestHa::test_start_as_readonly PASSED [ 51%]
3259s tests/test_ha.py::TestHa::test_start_as_replica PASSED [ 51%]
3259s tests/test_ha.py::TestHa::test_start_primary_after_failure PASSED [ 51%]
3259s tests/test_ha.py::TestHa::test_starting_timeout PASSED [ 51%]
3259s tests/test_ha.py::TestHa::test_sync_replication_become_primary PASSED [ 51%]
3259s tests/test_ha.py::TestHa::test_sysid_no_match PASSED [ 52%]
3259s tests/test_ha.py::TestHa::test_sysid_no_match_in_pause PASSED [ 52%]
3259s tests/test_ha.py::TestHa::test_touch_member PASSED [ 52%]
3259s tests/test_ha.py::TestHa::test_unhealthy_sync_mode PASSED [ 52%]
3259s tests/test_ha.py::TestHa::test_update_cluster_history PASSED [ 52%]
3259s tests/test_ha.py::TestHa::test_update_failsafe PASSED [ 52%]
3259s tests/test_ha.py::TestHa::test_update_lock PASSED [ 52%]
3259s tests/test_ha.py::TestHa::test_wakup PASSED [ 53%]
3259s tests/test_ha.py::TestHa::test_watch PASSED [ 53%]
tests/test_ha.py::TestHa::test_worker_restart PASSED [ 53%] 3259s tests/test_kubernetes.py::TestK8sConfig::test_load_incluster_config PASSED [ 53%] 3259s tests/test_kubernetes.py::TestK8sConfig::test_load_kube_config PASSED [ 53%] 3260s tests/test_kubernetes.py::TestK8sConfig::test_refresh_token PASSED [ 53%] 3260s tests/test_kubernetes.py::TestApiClient::test__do_http_request PASSED [ 54%] 3260s tests/test_kubernetes.py::TestApiClient::test__refresh_api_servers_cache PASSED [ 54%] 3260s tests/test_kubernetes.py::TestApiClient::test_request PASSED [ 54%] 3260s tests/test_kubernetes.py::TestCoreV1Api::test_create_namespaced_service PASSED [ 54%] 3260s tests/test_kubernetes.py::TestCoreV1Api::test_delete_namespaced_pod PASSED [ 54%] 3260s tests/test_kubernetes.py::TestCoreV1Api::test_list_namespaced_endpoints PASSED [ 54%] 3260s tests/test_kubernetes.py::TestCoreV1Api::test_list_namespaced_pod PASSED [ 54%] 3260s tests/test_kubernetes.py::TestCoreV1Api::test_patch_namespaced_config_map PASSED [ 55%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test__get_citus_cluster PASSED [ 55%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test__wait_caches PASSED [ 55%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_attempt_to_acquire_leader PASSED [ 55%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_cancel_initialization PASSED [ 55%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_delete_cluster PASSED [ 55%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_delete_leader PASSED [ 56%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_get_citus_coordinator PASSED [ 56%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_get_cluster PASSED [ 56%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_get_mpp_coordinator PASSED [ 56%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_initialize PASSED [ 56%] 3260s 
tests/test_kubernetes.py::TestKubernetesConfigMaps::test_manual_failover PASSED [ 56%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_reload_config PASSED [ 56%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_set_config_value PASSED [ 57%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_set_history_value PASSED [ 57%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_take_leader PASSED [ 57%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_touch_member PASSED [ 57%] 3260s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_watch PASSED [ 57%] 3260s tests/test_kubernetes.py::TestKubernetesEndpointsNoPodIP::test_update_leader PASSED [ 57%] 3260s tests/test_kubernetes.py::TestKubernetesEndpoints::test__create_config_service PASSED [ 58%] 3260s tests/test_kubernetes.py::TestKubernetesEndpoints::test__update_leader_with_retry PASSED [ 58%] 3261s tests/test_kubernetes.py::TestKubernetesEndpoints::test_delete_sync_state PASSED [ 58%] 3261s tests/test_kubernetes.py::TestKubernetesEndpoints::test_update_leader PASSED [ 58%] 3261s tests/test_kubernetes.py::TestKubernetesEndpoints::test_write_leader_optime PASSED [ 58%] 3261s tests/test_kubernetes.py::TestKubernetesEndpoints::test_write_sync_state PASSED [ 58%] 3261s tests/test_kubernetes.py::TestCacheBuilder::test__build_cache PASSED [ 58%] 3261s tests/test_kubernetes.py::TestCacheBuilder::test__do_watch PASSED [ 59%] 3261s tests/test_kubernetes.py::TestCacheBuilder::test__list PASSED [ 59%] 3261s tests/test_kubernetes.py::TestCacheBuilder::test_kill_stream PASSED [ 59%] 3261s tests/test_kubernetes.py::TestCacheBuilder::test_run PASSED [ 59%] 3261s tests/test_log.py::TestPatroniLogger::test_dateformat PASSED [ 59%] 3261s tests/test_log.py::TestPatroniLogger::test_fail_to_use_python_json_logger PASSED [ 59%] 3261s tests/test_log.py::TestPatroniLogger::test_interceptor PASSED [ 60%] 3261s 
tests/test_log.py::TestPatroniLogger::test_invalid_dateformat PASSED [ 60%] 3261s tests/test_log.py::TestPatroniLogger::test_invalid_json_format PASSED [ 60%] 3261s tests/test_log.py::TestPatroniLogger::test_invalid_plain_format PASSED [ 60%] 3261s tests/test_log.py::TestPatroniLogger::test_json_list_format PASSED [ 60%] 3261s tests/test_log.py::TestPatroniLogger::test_json_str_format PASSED [ 60%] 3261s tests/test_log.py::TestPatroniLogger::test_patroni_logger PASSED [ 60%] 3261s tests/test_log.py::TestPatroniLogger::test_plain_format PASSED [ 61%] 3261s tests/test_mpp.py::TestMPP::test_get_handler_impl_exception PASSED [ 61%] 3261s tests/test_mpp.py::TestMPP::test_null_handler PASSED [ 61%] 3261s tests/test_patroni.py::TestPatroni::test__filter_tags PASSED [ 61%] 3261s tests/test_patroni.py::TestPatroni::test_check_psycopg PASSED [ 61%] 3261s tests/test_patroni.py::TestPatroni::test_ensure_unique_name PASSED [ 61%] 3261s tests/test_patroni.py::TestPatroni::test_failover_priority PASSED [ 62%] 3261s tests/test_patroni.py::TestPatroni::test_load_dynamic_configuration PASSED [ 62%] 3261s tests/test_patroni.py::TestPatroni::test_no_config PASSED [ 62%] 3261s tests/test_patroni.py::TestPatroni::test_nofailover PASSED [ 62%] 3261s tests/test_patroni.py::TestPatroni::test_noloadbalance PASSED [ 62%] 3261s tests/test_patroni.py::TestPatroni::test_nostream PASSED [ 62%] 3261s tests/test_patroni.py::TestPatroni::test_nosync PASSED [ 63%] 3261s tests/test_patroni.py::TestPatroni::test_patroni_main PASSED [ 63%] 3261s tests/test_patroni.py::TestPatroni::test_patroni_patroni_main PASSED [ 63%] 3261s tests/test_patroni.py::TestPatroni::test_reload_config PASSED [ 63%] 3261s tests/test_patroni.py::TestPatroni::test_replicatefrom PASSED [ 63%] 3261s tests/test_patroni.py::TestPatroni::test_run PASSED [ 63%] 3261s tests/test_patroni.py::TestPatroni::test_schedule_next_run PASSED [ 63%] 3261s tests/test_patroni.py::TestPatroni::test_shutdown PASSED [ 64%] 3261s 
tests/test_patroni.py::TestPatroni::test_sigterm_handler PASSED [ 64%] 3261s tests/test_patroni.py::TestPatroni::test_validate_config PASSED [ 64%] 3261s tests/test_postgresql.py::TestPostgresql::test__do_stop PASSED [ 64%] 3262s tests/test_postgresql.py::TestPostgresql::test__get_postgres_guc_validators PASSED [ 64%] 3262s tests/test_postgresql.py::TestPostgresql::test__load_postgres_gucs_validators PASSED [ 64%] 3262s tests/test_postgresql.py::TestPostgresql::test__query PASSED [ 65%] 3262s tests/test_postgresql.py::TestPostgresql::test__read_postgres_gucs_validators_file PASSED [ 65%] 3262s tests/test_postgresql.py::TestPostgresql::test__read_recovery_params PASSED [ 65%] 3262s tests/test_postgresql.py::TestPostgresql::test__read_recovery_params_pre_v12 PASSED [ 65%] 3262s tests/test_postgresql.py::TestPostgresql::test__wait_for_connection_close PASSED [ 65%] 3262s tests/test_postgresql.py::TestPostgresql::test__write_recovery_params PASSED [ 65%] 3262s tests/test_postgresql.py::TestPostgresql::test_call_nowait PASSED [ 65%] 3262s tests/test_postgresql.py::TestPostgresql::test_can_create_replica_without_replication_connection PASSED [ 66%] 3262s tests/test_postgresql.py::TestPostgresql::test_check_for_startup PASSED [ 66%] 3262s tests/test_postgresql.py::TestPostgresql::test_check_recovery_conf PASSED [ 66%] 3262s tests/test_postgresql.py::TestPostgresql::test_checkpoint PASSED [ 66%] 3262s tests/test_postgresql.py::TestPostgresql::test_controldata PASSED [ 66%] 3262s tests/test_postgresql.py::TestPostgresql::test_effective_configuration PASSED [ 66%] 3262s tests/test_postgresql.py::TestPostgresql::test_follow PASSED [ 67%] 3262s tests/test_postgresql.py::TestPostgresql::test_get_major_version PASSED [ 67%] 3262s tests/test_postgresql.py::TestPostgresql::test_get_postgres_role_from_data_directory PASSED [ 67%] 3262s tests/test_postgresql.py::TestPostgresql::test_get_primary_timeline PASSED [ 67%] 3262s 
tests/test_postgresql.py::TestPostgresql::test_get_server_parameters PASSED [ 67%] 3262s tests/test_postgresql.py::TestPostgresql::test_handle_parameter_change PASSED [ 67%] 3262s tests/test_postgresql.py::TestPostgresql::test_is_healthy PASSED [ 67%] 3262s tests/test_postgresql.py::TestPostgresql::test_is_primary PASSED [ 68%] 3262s tests/test_postgresql.py::TestPostgresql::test_is_primary_exception PASSED [ 68%] 3262s tests/test_postgresql.py::TestPostgresql::test_is_running PASSED [ 68%] 3262s tests/test_postgresql.py::TestPostgresql::test_latest_checkpoint_location PASSED [ 68%] 3262s tests/test_postgresql.py::TestPostgresql::test_move_data_directory PASSED [ 68%] 3262s tests/test_postgresql.py::TestPostgresql::test_pgpass_is_dir PASSED [ 68%] 3262s tests/test_postgresql.py::TestPostgresql::test_postmaster_start_time PASSED [ 69%] 3262s tests/test_postgresql.py::TestPostgresql::test_promote PASSED [ 69%] 3262s tests/test_postgresql.py::TestPostgresql::test_query PASSED [ 69%] 3262s tests/test_postgresql.py::TestPostgresql::test_received_timeline PASSED [ 69%] 3262s tests/test_postgresql.py::TestPostgresql::test_reload PASSED [ 69%] 3262s tests/test_postgresql.py::TestPostgresql::test_reload_config PASSED [ 69%] 3262s tests/test_postgresql.py::TestPostgresql::test_remove_data_directory PASSED [ 69%] 3262s tests/test_postgresql.py::TestPostgresql::test_replica_cached_timeline PASSED [ 70%] 3262s tests/test_postgresql.py::TestPostgresql::test_replica_method_can_work_without_replication_connection PASSED [ 70%] 3262s tests/test_postgresql.py::TestPostgresql::test_resolve_connection_addresses PASSED [ 70%] 3262s tests/test_postgresql.py::TestPostgresql::test_restart PASSED [ 70%] 3262s tests/test_postgresql.py::TestPostgresql::test_restore_configuration_files PASSED [ 70%] 3262s tests/test_postgresql.py::TestPostgresql::test_save_configuration_files PASSED [ 70%] 3262s tests/test_postgresql.py::TestPostgresql::test_set_enforce_hot_standby_feedback PASSED [ 71%] 
3262s tests/test_postgresql.py::TestPostgresql::test_start PASSED [ 71%] 3262s tests/test_postgresql.py::TestPostgresql::test_stop PASSED [ 71%] 3262s tests/test_postgresql.py::TestPostgresql::test_sysid PASSED [ 71%] 3262s tests/test_postgresql.py::TestPostgresql::test_terminate_starting_postmaster PASSED [ 71%] 3262s tests/test_postgresql.py::TestPostgresql::test_timeline_wal_position PASSED [ 71%] 3262s tests/test_postgresql.py::TestPostgresql::test_validator_factory PASSED [ 71%] 3262s tests/test_postgresql.py::TestPostgresql::test_wait_for_port_open PASSED [ 72%] 3262s tests/test_postgresql.py::TestPostgresql::test_wait_for_startup PASSED [ 72%] 3262s tests/test_postgresql.py::TestPostgresql::test_write_pgpass PASSED [ 72%] 3262s tests/test_postgresql.py::TestPostgresql::test_write_postgresql_and_sanitize_auto_conf PASSED [ 72%] 3262s tests/test_postgresql.py::TestPostgresql2::test_available_gucs PASSED [ 72%] 3262s tests/test_postgresql.py::TestPostgresql2::test_cluster_info_query PASSED [ 72%] 3262s tests/test_postgresql.py::TestPostgresql2::test_load_current_server_parameters PASSED [ 73%] 3262s tests/test_postmaster.py::TestPostmasterProcess::test_from_pid PASSED [ 73%] 3262s tests/test_postmaster.py::TestPostmasterProcess::test_from_pidfile PASSED [ 73%] 3262s tests/test_postmaster.py::TestPostmasterProcess::test_init PASSED [ 73%] 3262s tests/test_postmaster.py::TestPostmasterProcess::test_read_postmaster_pidfile PASSED [ 73%] 3262s tests/test_postmaster.py::TestPostmasterProcess::test_signal_kill PASSED [ 73%] 3262s tests/test_postmaster.py::TestPostmasterProcess::test_signal_stop PASSED [ 73%] 3262s tests/test_postmaster.py::TestPostmasterProcess::test_signal_stop_nt PASSED [ 74%] 3262s tests/test_postmaster.py::TestPostmasterProcess::test_start PASSED [ 74%] 3262s tests/test_postmaster.py::TestPostmasterProcess::test_wait_for_user_backends_to_close PASSED [ 74%] 3262s tests/test_raft.py::TestTCPTransport::test__connectIfNecessarySingle PASSED [ 74%] 
3262s tests/test_raft.py::TestDynMemberSyncObj::test__SyncObj__doChangeCluster PASSED [ 74%] 3262s tests/test_raft.py::TestDynMemberSyncObj::test_add_member PASSED [ 74%] 3262s tests/test_raft.py::TestDynMemberSyncObj::test_getMembers PASSED [ 75%] 3263s tests/test_raft.py::TestKVStoreTTL::test_delete PASSED [ 75%] 3265s tests/test_raft.py::TestKVStoreTTL::test_expire PASSED [ 75%] 3267s tests/test_raft.py::TestKVStoreTTL::test_on_ready_override PASSED [ 75%] 3267s tests/test_raft.py::TestKVStoreTTL::test_retry PASSED [ 75%] 3269s tests/test_raft.py::TestKVStoreTTL::test_set PASSED [ 75%] 3269s tests/test_raft.py::TestRaft::test_init PASSED [ 76%] 3271s tests/test_raft.py::TestRaft::test_raft PASSED [ 76%] 3271s tests/test_raft_controller.py::TestPatroniRaftController::test_patroni_raft_controller_main PASSED [ 76%] 3271s tests/test_raft_controller.py::TestPatroniRaftController::test_reload_config PASSED [ 76%] 3271s tests/test_raft_controller.py::TestPatroniRaftController::test_run PASSED [ 76%] 3271s tests/test_rewind.py::TestRewind::test__check_timeline_and_lsn PASSED [ 76%] 3271s tests/test_rewind.py::TestRewind::test__get_local_timeline_lsn PASSED [ 76%] 3271s tests/test_rewind.py::TestRewind::test__log_primary_history PASSED [ 77%] 3271s tests/test_rewind.py::TestRewind::test_archive_ready_wals PASSED [ 77%] 3271s tests/test_rewind.py::TestRewind::test_can_rewind PASSED [ 77%] 3271s tests/test_rewind.py::TestRewind::test_check_leader_is_not_in_recovery PASSED [ 77%] 3271s tests/test_rewind.py::TestRewind::test_cleanup_archive_status PASSED [ 77%] 3271s tests/test_rewind.py::TestRewind::test_ensure_checkpoint_after_promote PASSED [ 77%] 3271s tests/test_rewind.py::TestRewind::test_ensure_clean_shutdown PASSED [ 78%] 3271s tests/test_rewind.py::TestRewind::test_execute PASSED [ 78%] 3271s tests/test_rewind.py::TestRewind::test_maybe_clean_pg_replslot PASSED [ 78%] 3271s tests/test_rewind.py::TestRewind::test_pg_rewind PASSED [ 78%] 3271s 
tests/test_rewind.py::TestRewind::test_read_postmaster_opts PASSED [ 78%] 3271s tests/test_rewind.py::TestRewind::test_single_user_mode PASSED [ 78%] 3271s tests/test_slots.py::TestSlotsHandler::test__ensure_logical_slots_replica PASSED [ 78%] 3271s tests/test_slots.py::TestSlotsHandler::test_advance_physical_slots PASSED [ 79%] 3271s tests/test_slots.py::TestSlotsHandler::test_cascading_replica_sync_replication_slots PASSED [ 79%] 3271s tests/test_slots.py::TestSlotsHandler::test_check_logical_slots_readiness PASSED [ 79%] 3271s tests/test_slots.py::TestSlotsHandler::test_copy_logical_slots PASSED [ 79%] 3271s tests/test_slots.py::TestSlotsHandler::test_fsync_dir PASSED [ 79%] 3271s tests/test_slots.py::TestSlotsHandler::test_get_slot_name_on_primary PASSED [ 79%] 3271s tests/test_slots.py::TestSlotsHandler::test_nostream_slot_processing PASSED [ 80%] 3271s tests/test_slots.py::TestSlotsHandler::test_on_promote PASSED [ 80%] 3271s tests/test_slots.py::TestSlotsHandler::test_process_permanent_slots PASSED [ 80%] 3271s tests/test_slots.py::TestSlotsHandler::test_should_enforce_hot_standby_feedback PASSED [ 80%] 3271s tests/test_slots.py::TestSlotsHandler::test_slots_advance_thread PASSED [ 80%] 3271s tests/test_slots.py::TestSlotsHandler::test_sync_replication_slots PASSED [ 80%] 3271s tests/test_sync.py::TestSync::test_pick_sync_standby PASSED [ 80%] 3271s tests/test_sync.py::TestSync::test_set_sync_standby PASSED [ 81%] 3271s tests/test_utils.py::TestUtils::test_enable_keepalive PASSED [ 81%] 3271s tests/test_utils.py::TestUtils::test_polling_loop PASSED [ 81%] 3271s tests/test_utils.py::TestUtils::test_unquote PASSED [ 81%] 3271s tests/test_utils.py::TestUtils::test_validate_directory_couldnt_create PASSED [ 81%] 3271s tests/test_utils.py::TestUtils::test_validate_directory_is_not_a_directory PASSED [ 81%] 3271s tests/test_utils.py::TestUtils::test_validate_directory_not_writable PASSED [ 82%] 3271s tests/test_utils.py::TestUtils::test_validate_directory_writable 
PASSED [ 82%] 3271s tests/test_utils.py::TestRetrySleeper::test_copy PASSED [ 82%] 3271s tests/test_utils.py::TestRetrySleeper::test_deadline PASSED [ 82%] 3271s tests/test_utils.py::TestRetrySleeper::test_maximum_delay PASSED [ 82%] 3271s tests/test_utils.py::TestRetrySleeper::test_reset PASSED [ 82%] 3271s tests/test_utils.py::TestRetrySleeper::test_too_many_tries PASSED [ 82%] 3271s tests/test_validator.py::TestValidator::test_bin_dir_is_empty PASSED [ 83%] 3271s tests/test_validator.py::TestValidator::test_bin_dir_is_empty_string_excutables_in_path PASSED [ 83%] 3271s tests/test_validator.py::TestValidator::test_bin_dir_is_file PASSED [ 83%] 3271s tests/test_validator.py::TestValidator::test_complete_config PASSED [ 83%] 3271s tests/test_validator.py::TestValidator::test_data_dir_contains_pg_version PASSED [ 83%] 3271s tests/test_validator.py::TestValidator::test_data_dir_is_empty_string PASSED [ 83%] 3271s tests/test_validator.py::TestValidator::test_directory_contains PASSED [ 84%] 3271s tests/test_validator.py::TestValidator::test_empty_config PASSED [ 84%] 3271s tests/test_validator.py::TestValidator::test_failover_priority_int PASSED [ 84%] 3272s tests/test_validator.py::TestValidator::test_json_log_format PASSED [ 84%] 3272s tests/test_validator.py::TestValidator::test_one_of PASSED [ 84%] 3272s tests/test_validator.py::TestValidator::test_pg_version_missmatch PASSED [ 84%] 3272s tests/test_validator.py::TestValidator::test_pg_wal_doesnt_exist PASSED [ 84%] 3272s tests/test_validator.py::TestValidator::test_validate_binary_name PASSED [ 85%] 3272s tests/test_validator.py::TestValidator::test_validate_binary_name_empty_string PASSED [ 85%] 3272s tests/test_validator.py::TestValidator::test_validate_binary_name_missing PASSED [ 85%] 3272s tests/test_wale_restore.py::TestWALERestore::test_create_replica_with_s3 PASSED [ 85%] 3272s tests/test_wale_restore.py::TestWALERestore::test_fix_subdirectory_path_if_broken PASSED [ 85%] 3272s 
tests/test_wale_restore.py::TestWALERestore::test_get_major_version PASSED [ 85%] 3272s tests/test_wale_restore.py::TestWALERestore::test_main PASSED [ 86%] 3272s tests/test_wale_restore.py::TestWALERestore::test_run PASSED [ 86%] 3272s tests/test_wale_restore.py::TestWALERestore::test_should_use_s3_to_create_replica PASSED [ 86%] 3272s tests/test_watchdog.py::TestWatchdog::test_basic_operation PASSED [ 86%] 3272s tests/test_watchdog.py::TestWatchdog::test_config_reload PASSED [ 86%] 3272s tests/test_watchdog.py::TestWatchdog::test_exceptions PASSED [ 86%] 3272s tests/test_watchdog.py::TestWatchdog::test_invalid_timings PASSED [ 86%] 3272s tests/test_watchdog.py::TestWatchdog::test_parse_mode PASSED [ 87%] 3272s tests/test_watchdog.py::TestWatchdog::test_timeout_does_not_ensure_safe_termination PASSED [ 87%] 3272s tests/test_watchdog.py::TestWatchdog::test_unsafe_timeout_disable_watchdog_and_exit PASSED [ 87%] 3272s tests/test_watchdog.py::TestWatchdog::test_unsupported_platform PASSED [ 87%] 3272s tests/test_watchdog.py::TestWatchdog::test_watchdog_activate PASSED [ 87%] 3272s tests/test_watchdog.py::TestWatchdog::test_watchdog_not_activated PASSED [ 87%] 3272s tests/test_watchdog.py::TestNullWatchdog::test_basics PASSED [ 88%] 3272s tests/test_watchdog.py::TestLinuxWatchdogDevice::test__ioctl PASSED [ 88%] 3272s tests/test_watchdog.py::TestLinuxWatchdogDevice::test_basics PASSED [ 88%] 3272s tests/test_watchdog.py::TestLinuxWatchdogDevice::test_error_handling PASSED [ 88%] 3272s tests/test_watchdog.py::TestLinuxWatchdogDevice::test_is_healthy PASSED [ 88%] 3272s tests/test_watchdog.py::TestLinuxWatchdogDevice::test_open PASSED [ 88%] 3272s tests/test_zookeeper.py::TestPatroniSequentialThreadingHandler::test_create_connection PASSED [ 89%] 3272s tests/test_zookeeper.py::TestPatroniSequentialThreadingHandler::test_select PASSED [ 89%] 3272s tests/test_zookeeper.py::TestPatroniKazooClient::test__call PASSED [ 89%] 3272s 
tests/test_zookeeper.py::TestZooKeeper::test__cluster_loader PASSED [ 89%] 3272s tests/test_zookeeper.py::TestZooKeeper::test__get_citus_cluster PASSED [ 89%] 3272s tests/test_zookeeper.py::TestZooKeeper::test__kazoo_connect PASSED [ 89%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_attempt_to_acquire_leader PASSED [ 89%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_cancel_initialization PASSED [ 90%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_delete_cluster PASSED [ 90%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_delete_leader PASSED [ 90%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_get_children PASSED [ 90%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_get_citus_coordinator PASSED [ 90%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_get_cluster PASSED [ 90%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_get_mpp_coordinator PASSED [ 91%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_get_node PASSED [ 91%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_initialize PASSED [ 91%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_reload_config PASSED [ 91%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_set_config_value PASSED [ 91%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_set_failover_value PASSED [ 91%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_set_history_value PASSED [ 91%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_sync_state PASSED [ 92%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_take_leader PASSED [ 92%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_touch_member PASSED [ 92%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_update_leader PASSED [ 92%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_watch PASSED [ 92%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_watcher PASSED [ 92%] 3272s tests/test_zookeeper.py::TestZooKeeper::test_write_leader_optime PASSED [ 93%] 3272s patroni/__init__.py::patroni.parse_version PASSED [ 93%] 3272s 
patroni/api.py::patroni.api.check_access PASSED [ 93%] 3272s patroni/collections.py::patroni.collections.CaseInsensitiveDict.__len__ PASSED [ 93%] 3272s patroni/collections.py::patroni.collections.CaseInsensitiveDict.__repr__ PASSED [ 93%] 3272s patroni/collections.py::patroni.collections.CaseInsensitiveSet.__len__ PASSED [ 93%] 3272s patroni/collections.py::patroni.collections.CaseInsensitiveSet.__repr__ PASSED [ 93%] 3272s patroni/collections.py::patroni.collections.CaseInsensitiveSet.__str__ SKIPPED [ 94%] 3272s patroni/collections.py::patroni.collections._FrozenDict.__len__ PASSED [ 94%] 3272s patroni/ctl.py::patroni.ctl.format_pg_version PASSED [ 94%] 3272s patroni/ctl.py::patroni.ctl.parse_dcs PASSED [ 94%] 3272s patroni/ctl.py::patroni.ctl.parse_scheduled PASSED [ 94%] 3273s patroni/ctl.py::patroni.ctl.watching PASSED [ 94%] 3273s patroni/dcs/__init__.py::patroni.dcs.Cluster.__len__ PASSED [ 95%] 3273s patroni/dcs/__init__.py::patroni.dcs.Cluster.timeline PASSED [ 95%] 3273s patroni/dcs/__init__.py::patroni.dcs.ClusterConfig.from_node PASSED [ 95%] 3273s patroni/dcs/__init__.py::patroni.dcs.Failover PASSED [ 95%] 3273s patroni/dcs/__init__.py::patroni.dcs.Failover.__len__ PASSED [ 95%] 3273s patroni/dcs/__init__.py::patroni.dcs.Leader.checkpoint_after_promote PASSED [ 95%] 3273s patroni/dcs/__init__.py::patroni.dcs.Member.from_node PASSED [ 95%] 3273s patroni/dcs/__init__.py::patroni.dcs.Member.patroni_version PASSED [ 96%] 3273s patroni/dcs/__init__.py::patroni.dcs.SyncState.from_node PASSED [ 96%] 3273s patroni/dcs/__init__.py::patroni.dcs.SyncState.matches PASSED [ 96%] 3273s patroni/dcs/__init__.py::patroni.dcs.TimelineHistory.from_node PASSED [ 96%] 3273s patroni/dcs/kubernetes.py::patroni.dcs.kubernetes.Kubernetes.subsets_changed PASSED [ 96%] 3273s patroni/postgresql/bootstrap.py::patroni.postgresql.bootstrap.Bootstrap.process_user_options PASSED [ 96%] 3273s patroni/postgresql/config.py::patroni.postgresql.config.parse_dsn PASSED [ 97%] 3273s 
patroni/postgresql/config.py::patroni.postgresql.config.read_recovery_param_value PASSED [ 97%] 3273s patroni/postgresql/misc.py::patroni.postgresql.misc.postgres_major_version_to_int PASSED [ 97%] 3273s patroni/postgresql/misc.py::patroni.postgresql.misc.postgres_version_to_int PASSED [ 97%] 3273s patroni/postgresql/sync.py::patroni.postgresql.sync.parse_sync_standby_names PASSED [ 97%] 3273s patroni/scripts/wale_restore.py::patroni.scripts.wale_restore.repr_size PASSED [ 97%] 3273s patroni/scripts/wale_restore.py::patroni.scripts.wale_restore.size_as_bytes PASSED [ 97%] 3273s patroni/utils.py::patroni.utils.compare_values PASSED [ 98%] 3273s patroni/utils.py::patroni.utils.convert_int_from_base_unit PASSED [ 98%] 3273s patroni/utils.py::patroni.utils.convert_real_from_base_unit PASSED [ 98%] 3273s patroni/utils.py::patroni.utils.convert_to_base_unit PASSED [ 98%] 3273s patroni/utils.py::patroni.utils.deep_compare PASSED [ 98%] 3273s patroni/utils.py::patroni.utils.maybe_convert_from_base_unit PASSED [ 98%] 3273s patroni/utils.py::patroni.utils.parse_bool PASSED [ 99%] 3273s patroni/utils.py::patroni.utils.parse_int PASSED [ 99%] 3273s patroni/utils.py::patroni.utils.parse_real PASSED [ 99%] 3273s patroni/utils.py::patroni.utils.split_host_port PASSED [ 99%] 3273s patroni/utils.py::patroni.utils.strtod PASSED [ 99%] 3273s patroni/utils.py::patroni.utils.strtol PASSED [ 99%] 3275s patroni/utils.py::patroni.utils.unquote PASSED [100%] 3275s 3275s ---------- coverage: platform linux, python 3.12.7-final-0 ----------- 3275s Name Stmts Miss Cover Missing 3275s ----------------------------------------------------------------------------------- 3275s patroni/__init__.py 13 0 100% 3275s patroni/__main__.py 199 1 99% 395 3275s patroni/api.py 770 0 100% 3275s patroni/async_executor.py 96 0 100% 3275s patroni/collections.py 56 3 95% 50, 99, 107 3275s patroni/config.py 371 0 100% 3275s patroni/config_generator.py 212 0 100% 3275s patroni/ctl.py 936 0 100% 3275s 
patroni/daemon.py 76 0 100% 3275s patroni/dcs/__init__.py 646 0 100% 3275s patroni/dcs/consul.py 485 0 100% 3275s patroni/dcs/etcd3.py 679 0 100% 3275s patroni/dcs/etcd.py 603 0 100% 3275s patroni/dcs/exhibitor.py 61 0 100% 3275s patroni/dcs/kubernetes.py 938 0 100% 3275s patroni/dcs/raft.py 319 0 100% 3275s patroni/dcs/zookeeper.py 288 0 100% 3275s patroni/dynamic_loader.py 35 0 100% 3275s patroni/exceptions.py 16 0 100% 3275s patroni/file_perm.py 43 0 100% 3275s patroni/global_config.py 81 0 100% 3275s patroni/ha.py 1244 2 99% 1925-1926 3275s patroni/log.py 219 2 99% 365-367 3275s patroni/postgresql/__init__.py 821 0 100% 3275s patroni/postgresql/available_parameters/__init__.py 21 0 100% 3275s patroni/postgresql/bootstrap.py 252 0 100% 3275s patroni/postgresql/callback_executor.py 55 0 100% 3275s patroni/postgresql/cancellable.py 104 0 100% 3275s patroni/postgresql/config.py 813 0 100% 3275s patroni/postgresql/connection.py 75 0 100% 3275s patroni/postgresql/misc.py 41 0 100% 3275s patroni/postgresql/mpp/__init__.py 89 0 100% 3275s patroni/postgresql/mpp/citus.py 259 122 53% 49, 52, 62, 66, 135-144, 149-162, 183-186, 205-227, 230-234, 255-271, 274-299, 302-320, 330, 338, 343-346, 360-361, 369-380, 395-399, 437, 458-459 3275s patroni/postgresql/postmaster.py 170 0 100% 3275s patroni/postgresql/rewind.py 416 0 100% 3275s patroni/postgresql/slots.py 334 0 100% 3275s patroni/postgresql/sync.py 130 0 100% 3275s patroni/postgresql/validator.py 157 0 100% 3275s patroni/psycopg.py 42 16 62% 19, 25-26, 42, 44-82, 120 3275s patroni/raft_controller.py 22 0 100% 3275s patroni/request.py 62 0 100% 3275s patroni/scripts/__init__.py 0 0 100% 3275s patroni/scripts/aws.py 59 1 98% 86 3275s patroni/scripts/barman/__init__.py 0 0 100% 3275s patroni/scripts/barman/cli.py 51 1 98% 240 3275s patroni/scripts/barman/config_switch.py 51 0 100% 3275s patroni/scripts/barman/recover.py 37 0 100% 3275s patroni/scripts/barman/utils.py 94 0 100% 3275s patroni/scripts/wale_restore.py 207 1 99% 
374 3275s patroni/tags.py 38 0 100% 3275s patroni/utils.py 350 0 100% 3275s patroni/validator.py 301 0 100% 3275s patroni/version.py 1 0 100% 3275s patroni/watchdog/__init__.py 2 0 100% 3275s patroni/watchdog/base.py 203 0 100% 3275s patroni/watchdog/linux.py 135 1 99% 36 3275s ----------------------------------------------------------------------------------- 3275s TOTAL 13778 150 99% 3275s Coverage XML written to file coverage.xml 3275s 3275s 3275s ======================= 632 passed, 14 skipped in 25.25s ======================= 3275s autopkgtest [12:24:05]: test test: -----------------------] 3276s autopkgtest [12:24:06]: test test: - - - - - - - - - - results - - - - - - - - - - 3276s test PASS 3276s autopkgtest [12:24:06]: @@@@@@@@@@@@@@@@@@@@ summary 3276s acceptance-etcd3 PASS 3276s acceptance-etcd-basic PASS 3276s acceptance-etcd PASS 3276s acceptance-zookeeper FLAKY non-zero exit status 1 3276s acceptance-raft PASS 3276s test PASS 3288s nova [W] Using flock in prodstack6-s390x 3288s flock: timeout while waiting to get lock 3288s Creating nova instance adt-plucky-s390x-patroni-20241113-112929-juju-7f2275-prod-proposed-migration-environment-15-8a0ff9a5-55d3-48c1-a06d-798d1a04feec from image adt/ubuntu-plucky-s390x-server-20241113.img (UUID e740277e-1f72-40ae-bfbe-46030537c71c)...