0s autopkgtest [23:12:49]: starting date and time: 2025-03-15 23:12:49+0000
0s autopkgtest [23:12:49]: git checkout: 325255d2 Merge branch 'pin-any-arch' into 'ubuntu/production'
0s autopkgtest [23:12:49]: host juju-7f2275-prod-proposed-migration-environment-20; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.dl_1x_0d/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:glibc --apt-upgrade slony1-2 --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=glibc/2.41-1ubuntu2 -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor builder-cpu2-ram4-disk20 --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-20@bos03-15.secgroup --name adt-plucky-amd64-slony1-2-20250315-231249-juju-7f2275-prod-proposed-migration-environment-20-9c726d7d-e8ae-4cd9-a983-ce8eed6e074f --image adt/ubuntu-plucky-amd64-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-20 --net-id=net_prod-proposed-migration-amd64 -e TERM=linux -e ''"'"'http_proxy=http://squid.internal:3128'"'"'' -e ''"'"'https_proxy=http://squid.internal:3128'"'"'' -e ''"'"'no_proxy=127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,keyserver.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,10.24.0.0/24,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com,radosgw.ps5.canonical.com'"'"'' --mirror=http://ftpmaster.internal/ubuntu/
45s autopkgtest [23:13:34]: testbed dpkg architecture: amd64
46s autopkgtest [23:13:35]: testbed apt version: 2.9.31ubuntu1
46s autopkgtest [23:13:35]: @@@@@@@@@@@@@@@@@@@@ test bed setup
46s autopkgtest [23:13:35]: testbed release detected to be: None
47s autopkgtest [23:13:36]: updating testbed package index (apt update)
47s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease [126 kB]
48s Hit:2 http://ftpmaster.internal/ubuntu plucky InRelease
48s Hit:3 http://ftpmaster.internal/ubuntu plucky-updates InRelease
48s Hit:4 http://ftpmaster.internal/ubuntu plucky-security InRelease
48s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/universe Sources [369 kB]
48s Get:6 http://ftpmaster.internal/ubuntu plucky-proposed/main Sources [44.1 kB]
48s Get:7 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse Sources [14.5 kB]
48s Get:8 http://ftpmaster.internal/ubuntu plucky-proposed/main amd64 Packages [85.7 kB]
48s Get:9 http://ftpmaster.internal/ubuntu plucky-proposed/main i386 Packages [67.4 kB]
48s Get:10 http://ftpmaster.internal/ubuntu plucky-proposed/main amd64 c-n-f Metadata [1852 B]
48s Get:11 http://ftpmaster.internal/ubuntu plucky-proposed/restricted amd64 c-n-f Metadata [116 B]
48s Get:12 http://ftpmaster.internal/ubuntu plucky-proposed/universe amd64 Packages [342 kB]
48s Get:13 http://ftpmaster.internal/ubuntu plucky-proposed/universe i386 Packages [174 kB]
48s Get:14 http://ftpmaster.internal/ubuntu plucky-proposed/universe amd64 c-n-f Metadata [15.3 kB]
48s Get:15 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse amd64 Packages [16.1 kB]
48s Get:16 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse i386 Packages [8544 B]
48s Get:17 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse amd64 c-n-f Metadata [628 B]
48s Fetched 1265 kB in 1s (1335 kB/s)
49s Reading package lists...
50s Reading package lists...
50s Building dependency tree...
50s Reading state information...
51s Calculating upgrade...
51s Calculating upgrade...
51s The following package was automatically installed and is no longer required:
51s   libnl-genl-3-200
51s Use 'sudo apt autoremove' to remove it.
51s The following NEW packages will be installed:
51s   bpftool libdebuginfod-common libdebuginfod1t64 linux-headers-6.14.0-10
51s   linux-headers-6.14.0-10-generic linux-image-6.14.0-10-generic
51s   linux-modules-6.14.0-10-generic linux-modules-extra-6.14.0-10-generic
51s   linux-perf linux-tools-6.14.0-10 linux-tools-6.14.0-10-generic pnp.ids
51s The following packages will be upgraded:
51s   apparmor apt apt-utils binutils binutils-common binutils-x86-64-linux-gnu
51s   cloud-init cloud-init-base curl dosfstools exfatprogs fwupd gcc-15-base
51s   gir1.2-girepository-2.0 gir1.2-glib-2.0 htop hwdata initramfs-tools
51s   initramfs-tools-bin initramfs-tools-core libapparmor1 libapt-pkg7.0
51s   libassuan9 libatomic1 libaudit-common libaudit1 libbinutils libbrotli1
51s   libc-bin libc-dev-bin libc6 libc6-dev libcap-ng0 libctf-nobfd0 libctf0
51s   libcurl3t64-gnutls libcurl4t64 libestr0 libftdi1-2 libfwupd3 libgcc-s1
51s   libgirepository-1.0-1 libglib2.0-0t64 libglib2.0-data libgpgme11t64
51s   libgprofng0 libjemalloc2 liblz4-1 liblzma5 libmm-glib0 libncurses6
51s   libncursesw6 libnewt0.52 libnl-3-200 libnl-genl-3-200 libnl-route-3-200
51s   libnss-systemd libpam-systemd libparted2t64 libpci3 libpython3-stdlib
51s   libpython3.13 libpython3.13-minimal libpython3.13-stdlib libseccomp2
51s   libselinux1 libsemanage-common libsemanage2 libsframe1 libsqlite3-0
51s   libstdc++6 libsystemd-shared libsystemd0 libtinfo6 libudev1 libxml2
51s   linux-firmware linux-generic linux-headers-generic linux-headers-virtual
51s   linux-image-generic linux-image-virtual linux-libc-dev linux-tools-common
51s   linux-virtual locales media-types ncurses-base ncurses-bin ncurses-term
51s   parted pci.ids pciutils pinentry-curses python-apt-common python3
51s   python3-apt python3-bcrypt python3-cffi-backend python3-dbus python3-gi
51s   python3-jinja2 python3-lazr.uri python3-markupsafe python3-minimal
51s   python3-newt python3-rpds-py python3-systemd python3-yaml python3.13
51s   python3.13-gdbm python3.13-minimal rsync rsyslog strace systemd
51s   systemd-cryptsetup systemd-resolved systemd-sysv systemd-timesyncd
51s   ubuntu-kernel-accessories ubuntu-minimal ubuntu-standard udev whiptail
51s   xz-utils
51s 126 upgraded, 12 newly installed, 0 to remove and 0 not upgraded.
51s Need to get 829 MB of archives.
51s After this operation, 325 MB of additional disk space will be used.
51s Get:1 http://ftpmaster.internal/ubuntu plucky/main amd64 ncurses-bin amd64 6.5+20250216-2 [194 kB]
52s Get:2 http://ftpmaster.internal/ubuntu plucky/main amd64 libc-dev-bin amd64 2.41-1ubuntu1 [24.7 kB]
52s Get:3 http://ftpmaster.internal/ubuntu plucky/main amd64 libc6-dev amd64 2.41-1ubuntu1 [2182 kB]
52s Get:4 http://ftpmaster.internal/ubuntu plucky/main amd64 locales all 2.41-1ubuntu1 [4246 kB]
52s Get:5 http://ftpmaster.internal/ubuntu plucky/main amd64 libc6 amd64 2.41-1ubuntu1 [3327 kB]
52s Get:6 http://ftpmaster.internal/ubuntu plucky/main amd64 libc-bin amd64 2.41-1ubuntu1 [701 kB]
52s Get:7 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-libc-dev amd64 6.14.0-10.10 [1723 kB]
52s Get:8 http://ftpmaster.internal/ubuntu plucky/main amd64 libatomic1 amd64 15-20250222-0ubuntu1 [10.4 kB]
52s Get:9 http://ftpmaster.internal/ubuntu plucky/main amd64 gcc-15-base amd64 15-20250222-0ubuntu1 [53.4 kB]
52s Get:10 http://ftpmaster.internal/ubuntu plucky/main amd64 libgcc-s1 amd64 15-20250222-0ubuntu1 [77.8 kB]
52s Get:11 http://ftpmaster.internal/ubuntu plucky/main amd64 libstdc++6 amd64 15-20250222-0ubuntu1 [798 kB]
52s Get:12 http://ftpmaster.internal/ubuntu plucky/main amd64 ncurses-base all 6.5+20250216-2 [25.9 kB]
52s Get:13 http://ftpmaster.internal/ubuntu plucky/main amd64 ncurses-term all 6.5+20250216-2 [276 kB]
52s Get:14 http://ftpmaster.internal/ubuntu plucky/main amd64 liblz4-1 amd64 1.10.0-4 [66.4 kB]
52s Get:15 http://ftpmaster.internal/ubuntu plucky/main amd64 liblzma5 amd64 5.6.4-1 [157 kB]
52s Get:16 http://ftpmaster.internal/ubuntu plucky/main amd64 libsystemd0 amd64 257.3-1ubuntu3 [595 kB]
52s Get:17 http://ftpmaster.internal/ubuntu plucky/main amd64 libnss-systemd amd64 257.3-1ubuntu3 [199 kB]
52s Get:18 http://ftpmaster.internal/ubuntu plucky/main amd64 systemd-sysv amd64 257.3-1ubuntu3 [11.9 kB]
52s Get:19 http://ftpmaster.internal/ubuntu plucky/main amd64 systemd-resolved amd64 257.3-1ubuntu3 [345 kB]
52s Get:20 http://ftpmaster.internal/ubuntu plucky/main amd64 libpam-systemd amd64 257.3-1ubuntu3 [302 kB]
52s Get:21 http://ftpmaster.internal/ubuntu plucky/main amd64 libsystemd-shared amd64 257.3-1ubuntu3 [2371 kB]
52s Get:22 http://ftpmaster.internal/ubuntu plucky/main amd64 systemd amd64 257.3-1ubuntu3 [3052 kB]
53s Get:23 http://ftpmaster.internal/ubuntu plucky/main amd64 systemd-timesyncd amd64 257.3-1ubuntu3 [42.1 kB]
53s Get:24 http://ftpmaster.internal/ubuntu plucky/main amd64 systemd-cryptsetup amd64 257.3-1ubuntu3 [124 kB]
53s Get:25 http://ftpmaster.internal/ubuntu plucky/main amd64 udev amd64 257.3-1ubuntu3 [1404 kB]
53s Get:26 http://ftpmaster.internal/ubuntu plucky/main amd64 libudev1 amd64 257.3-1ubuntu3 [215 kB]
53s Get:27 http://ftpmaster.internal/ubuntu plucky/main amd64 libaudit-common all 1:4.0.2-2ubuntu2 [6628 B]
53s Get:28 http://ftpmaster.internal/ubuntu plucky/main amd64 libcap-ng0 amd64 0.8.5-4build1 [15.6 kB]
53s Get:29 http://ftpmaster.internal/ubuntu plucky/main amd64 libaudit1 amd64 1:4.0.2-2ubuntu2 [54.0 kB]
53s Get:30 http://ftpmaster.internal/ubuntu plucky/main amd64 libseccomp2 amd64 2.5.5-1ubuntu6 [53.5 kB]
53s Get:31 http://ftpmaster.internal/ubuntu plucky/main amd64 libselinux1 amd64 3.7-3ubuntu3 [87.3 kB]
53s Get:32 http://ftpmaster.internal/ubuntu plucky/main amd64 libapparmor1 amd64 4.1.0~beta5-0ubuntu8 [55.0 kB]
53s Get:33 http://ftpmaster.internal/ubuntu plucky/main amd64 libapt-pkg7.0 amd64 2.9.33 [1138 kB]
53s Get:34 http://ftpmaster.internal/ubuntu plucky/main amd64 apt amd64 2.9.33 [1439 kB]
53s Get:35 http://ftpmaster.internal/ubuntu plucky/main amd64 apt-utils amd64 2.9.33 [222 kB]
53s Get:36 http://ftpmaster.internal/ubuntu plucky/main amd64 python3-minimal amd64 3.13.2-2 [27.7 kB]
53s Get:37 http://ftpmaster.internal/ubuntu plucky/main amd64 python3 amd64 3.13.2-2 [24.0 kB]
53s Get:38 http://ftpmaster.internal/ubuntu plucky/main amd64 libpython3.13 amd64 3.13.2-2 [2341 kB]
53s Get:39 http://ftpmaster.internal/ubuntu plucky/main amd64 media-types all 13.0.0 [29.9 kB]
53s Get:40 http://ftpmaster.internal/ubuntu plucky/main amd64 libncurses6 amd64 6.5+20250216-2 [126 kB]
53s Get:41 http://ftpmaster.internal/ubuntu plucky/main amd64 libncursesw6 amd64 6.5+20250216-2 [165 kB]
53s Get:42 http://ftpmaster.internal/ubuntu plucky/main amd64 libtinfo6 amd64 6.5+20250216-2 [119 kB]
53s Get:43 http://ftpmaster.internal/ubuntu plucky/main amd64 libsqlite3-0 amd64 3.46.1-2 [715 kB]
53s Get:44 http://ftpmaster.internal/ubuntu plucky/main amd64 python3.13 amd64 3.13.2-2 [735 kB]
53s Get:45 http://ftpmaster.internal/ubuntu plucky/main amd64 python3.13-minimal amd64 3.13.2-2 [2365 kB]
53s Get:46 http://ftpmaster.internal/ubuntu plucky/main amd64 libpython3.13-minimal amd64 3.13.2-2 [883 kB]
53s Get:47 http://ftpmaster.internal/ubuntu plucky/main amd64 libpython3.13-stdlib amd64 3.13.2-2 [2066 kB]
53s Get:48 http://ftpmaster.internal/ubuntu plucky/main amd64 libpython3-stdlib amd64 3.13.2-2 [10.4 kB]
53s Get:49 http://ftpmaster.internal/ubuntu plucky/main amd64 rsync amd64 3.4.1+ds1-3 [482 kB]
53s Get:50 http://ftpmaster.internal/ubuntu plucky/main amd64 libdebuginfod-common all 0.192-4 [15.4 kB]
53s Get:51 http://ftpmaster.internal/ubuntu plucky/main amd64 libsemanage-common all 3.7-2.1build1 [7268 B]
53s Get:52 http://ftpmaster.internal/ubuntu plucky/main amd64 libsemanage2 amd64 3.7-2.1build1 [106 kB]
53s Get:53 http://ftpmaster.internal/ubuntu plucky/main amd64 libassuan9 amd64 3.0.2-2 [43.1 kB]
53s Get:54 http://ftpmaster.internal/ubuntu plucky/main amd64 gir1.2-girepository-2.0 amd64 1.83.4-1 [25.3 kB]
53s Get:55 http://ftpmaster.internal/ubuntu plucky/main amd64 gir1.2-glib-2.0 amd64 2.84.0-1 [184 kB]
53s Get:56 http://ftpmaster.internal/ubuntu plucky/main amd64 libglib2.0-0t64 amd64 2.84.0-1 [1669 kB]
53s Get:57 http://ftpmaster.internal/ubuntu plucky/main amd64 libgirepository-1.0-1 amd64 1.83.4-1 [89.5 kB]
53s Get:58 http://ftpmaster.internal/ubuntu plucky/main amd64 libestr0 amd64 0.1.11-2 [8340 B]
53s Get:59 http://ftpmaster.internal/ubuntu plucky/main amd64 libglib2.0-data all 2.84.0-1 [53.0 kB]
53s Get:60 http://ftpmaster.internal/ubuntu plucky/main amd64 python3-newt amd64 0.52.24-4ubuntu2 [21.1 kB]
53s Get:61 http://ftpmaster.internal/ubuntu plucky/main amd64 libnewt0.52 amd64 0.52.24-4ubuntu2 [55.7 kB]
53s Get:62 http://ftpmaster.internal/ubuntu plucky/main amd64 libxml2 amd64 2.12.7+dfsg+really2.9.14-0.2ubuntu5 [772 kB]
53s Get:63 http://ftpmaster.internal/ubuntu plucky/main amd64 python-apt-common all 2.9.9build1 [21.3 kB]
53s Get:64 http://ftpmaster.internal/ubuntu plucky/main amd64 python3-apt amd64 2.9.9build1 [172 kB]
53s Get:65 http://ftpmaster.internal/ubuntu plucky/main amd64 python3-cffi-backend amd64 1.17.1-2build2 [96.6 kB]
53s Get:66 http://ftpmaster.internal/ubuntu plucky/main amd64 python3-dbus amd64 1.3.2-5build5 [102 kB]
53s Get:67 http://ftpmaster.internal/ubuntu plucky/main amd64 python3-gi amd64 3.50.0-4build1 [252 kB]
53s Get:68 http://ftpmaster.internal/ubuntu plucky/main amd64 python3-yaml amd64 6.0.2-1build2 [144 kB]
53s Get:69 http://ftpmaster.internal/ubuntu plucky/main amd64 rsyslog amd64 8.2412.0-2ubuntu2 [555 kB]
54s Get:70 http://ftpmaster.internal/ubuntu plucky/main amd64 whiptail amd64 0.52.24-4ubuntu2 [19.1 kB]
54s Get:71 http://ftpmaster.internal/ubuntu plucky/main amd64 ubuntu-minimal amd64 1.549 [11.5 kB]
54s Get:72 http://ftpmaster.internal/ubuntu plucky/main amd64 apparmor amd64 4.1.0~beta5-0ubuntu8 [701 kB]
54s Get:73 http://ftpmaster.internal/ubuntu plucky/main amd64 dosfstools amd64 4.2-1.2 [95.0 kB]
54s Get:74 http://ftpmaster.internal/ubuntu plucky/main amd64 libnl-genl-3-200 amd64 3.7.0-1 [12.2 kB]
54s Get:75 http://ftpmaster.internal/ubuntu plucky/main amd64 libnl-route-3-200 amd64 3.7.0-1 [191 kB]
54s Get:76 http://ftpmaster.internal/ubuntu plucky/main amd64 libnl-3-200 amd64 3.7.0-1 [64.9 kB]
54s Get:77 http://ftpmaster.internal/ubuntu plucky/main amd64 parted amd64 3.6-5 [53.9 kB]
54s Get:78 http://ftpmaster.internal/ubuntu plucky/main amd64 libparted2t64 amd64 3.6-5 [158 kB]
54s Get:79 http://ftpmaster.internal/ubuntu plucky/main amd64 pci.ids all 0.0~2025.03.09-1 [285 kB]
54s Get:80 http://ftpmaster.internal/ubuntu plucky/main amd64 pciutils amd64 1:3.13.0-2 [110 kB]
54s Get:81 http://ftpmaster.internal/ubuntu plucky/main amd64 libpci3 amd64 1:3.13.0-2 [39.8 kB]
54s Get:82 http://ftpmaster.internal/ubuntu plucky/main amd64 strace amd64 6.13+ds-1ubuntu1 [622 kB]
54s Get:83 http://ftpmaster.internal/ubuntu plucky/main amd64 xz-utils amd64 5.6.4-1 [278 kB]
54s Get:84 http://ftpmaster.internal/ubuntu plucky/main amd64 ubuntu-standard amd64 1.549 [11.5 kB]
54s Get:85 http://ftpmaster.internal/ubuntu plucky/main amd64 libgprofng0 amd64 2.44-3ubuntu1 [886 kB]
54s Get:86 http://ftpmaster.internal/ubuntu plucky/main amd64 libctf0 amd64 2.44-3ubuntu1 [96.5 kB]
54s Get:87 http://ftpmaster.internal/ubuntu plucky/main amd64 libctf-nobfd0 amd64 2.44-3ubuntu1 [98.9 kB]
54s Get:88 http://ftpmaster.internal/ubuntu plucky/main amd64 binutils-x86-64-linux-gnu amd64 2.44-3ubuntu1 [1108 kB]
54s Get:89 http://ftpmaster.internal/ubuntu plucky/main amd64 libbinutils amd64 2.44-3ubuntu1 [585 kB]
54s Get:90 http://ftpmaster.internal/ubuntu plucky/main amd64 binutils amd64 2.44-3ubuntu1 [208 kB]
54s Get:91 http://ftpmaster.internal/ubuntu plucky/main amd64 binutils-common amd64 2.44-3ubuntu1 [215 kB]
54s Get:92 http://ftpmaster.internal/ubuntu plucky/main amd64 libsframe1 amd64 2.44-3ubuntu1 [14.8 kB]
54s Get:93 http://ftpmaster.internal/ubuntu plucky/main amd64 hwdata all 0.393-3 [1562 B]
54s Get:94 http://ftpmaster.internal/ubuntu plucky/main amd64 pnp.ids all 0.393-3 [29.5 kB]
54s Get:95 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-tools-common all 6.14.0-10.10 [295 kB]
54s Get:96 http://ftpmaster.internal/ubuntu plucky/main amd64 bpftool amd64 7.6.0+6.14.0-10.10 [1147 kB]
54s Get:97 http://ftpmaster.internal/ubuntu plucky/main amd64 python3-markupsafe amd64 2.1.5-1build4 [13.4 kB]
54s Get:98 http://ftpmaster.internal/ubuntu plucky/main amd64 python3-jinja2 all 3.1.5-2ubuntu1 [109 kB]
54s Get:99 http://ftpmaster.internal/ubuntu plucky/main amd64 cloud-init-base all 25.1-0ubuntu3 [616 kB]
54s Get:100 http://ftpmaster.internal/ubuntu plucky/main amd64 libbrotli1 amd64 1.1.0-2build4 [365 kB]
54s Get:101 http://ftpmaster.internal/ubuntu plucky/main amd64 curl amd64 8.12.1-3ubuntu1 [258 kB]
54s Get:102 http://ftpmaster.internal/ubuntu plucky/main amd64 libcurl4t64 amd64 8.12.1-3ubuntu1 [437 kB]
55s Get:103 http://ftpmaster.internal/ubuntu plucky/main amd64 exfatprogs amd64 1.2.8-1 [76.3 kB]
55s Get:104 http://ftpmaster.internal/ubuntu plucky/main amd64 libcurl3t64-gnutls amd64 8.12.1-3ubuntu1 [432 kB]
55s Get:105 http://ftpmaster.internal/ubuntu plucky/main amd64 fwupd amd64 2.0.6-4 [5408 kB]
55s Get:106 http://ftpmaster.internal/ubuntu plucky/main amd64 libfwupd3 amd64 2.0.6-4 [136 kB]
55s Get:107 http://ftpmaster.internal/ubuntu plucky/main amd64 libmm-glib0 amd64 1.23.4-0ubuntu3 [251 kB]
55s Get:108 http://ftpmaster.internal/ubuntu plucky/main amd64 htop amd64 3.4.0-2 [195 kB]
55s Get:109 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-firmware amd64 20250310.git9e1370d3-0ubuntu1 [571 MB]
78s Get:110 http://ftpmaster.internal/ubuntu plucky/main amd64 initramfs-tools all 0.146ubuntu1 [7920 B]
78s Get:111 http://ftpmaster.internal/ubuntu plucky/main amd64 initramfs-tools-core all 0.146ubuntu1 [51.9 kB]
78s Get:112 http://ftpmaster.internal/ubuntu plucky/main amd64 initramfs-tools-bin amd64 0.146ubuntu1 [26.2 kB]
78s Get:113 http://ftpmaster.internal/ubuntu plucky/main amd64 libdebuginfod1t64 amd64 0.192-4 [21.0 kB]
78s Get:114 http://ftpmaster.internal/ubuntu plucky/main amd64 libftdi1-2 amd64 1.5-8build1 [30.2 kB]
78s Get:115 http://ftpmaster.internal/ubuntu plucky/main amd64 libgpgme11t64 amd64 1.24.2-1ubuntu2 [155 kB]
78s Get:116 http://ftpmaster.internal/ubuntu plucky/main amd64 libjemalloc2 amd64 5.3.0-3 [277 kB]
78s Get:117 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-modules-6.14.0-10-generic amd64 6.14.0-10.10 [41.2 MB]
80s Get:118 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-image-6.14.0-10-generic amd64 6.14.0-10.10 [15.3 MB]
80s Get:119 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-modules-extra-6.14.0-10-generic amd64 6.14.0-10.10 [120 MB]
85s Get:120 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-generic amd64 6.14.0-10.10 [1730 B]
85s Get:121 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-image-generic amd64 6.14.0-10.10 [11.1 kB]
85s Get:122 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-virtual amd64 6.14.0-10.10 [1722 B]
85s Get:123 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-image-virtual amd64 6.14.0-10.10 [11.1 kB]
85s Get:124 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-headers-virtual amd64 6.14.0-10.10 [1642 B]
85s Get:125 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-headers-6.14.0-10 all 6.14.0-10.10 [14.2 MB]
86s Get:126 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-headers-6.14.0-10-generic amd64 6.14.0-10.10 [3915 kB]
86s Get:127 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-headers-generic amd64 6.14.0-10.10 [11.0 kB]
86s Get:128 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-perf amd64 6.14.0-10.10 [4122 kB]
86s Get:129 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-tools-6.14.0-10 amd64 6.14.0-10.10 [1394 kB]
86s Get:130 http://ftpmaster.internal/ubuntu plucky/main amd64 linux-tools-6.14.0-10-generic amd64 6.14.0-10.10 [830 B]
86s Get:131 http://ftpmaster.internal/ubuntu plucky/main amd64 pinentry-curses amd64 1.3.1-2ubuntu3 [42.3 kB]
86s Get:132 http://ftpmaster.internal/ubuntu plucky/main amd64 python3-lazr.uri all 1.0.6-6 [13.7 kB]
86s Get:133 http://ftpmaster.internal/ubuntu plucky/main amd64 python3-rpds-py amd64 0.21.0-2ubuntu2 [278 kB]
86s Get:134 http://ftpmaster.internal/ubuntu plucky/main amd64 python3-systemd amd64 235-1build6 [43.9 kB]
86s Get:135 http://ftpmaster.internal/ubuntu plucky/main amd64 python3.13-gdbm amd64 3.13.2-2 [31.9 kB]
86s Get:136 http://ftpmaster.internal/ubuntu plucky/main amd64 ubuntu-kernel-accessories amd64 1.549 [11.2 kB]
86s Get:137 http://ftpmaster.internal/ubuntu plucky/main amd64 cloud-init all 25.1-0ubuntu3 [2100 B]
86s Get:138 http://ftpmaster.internal/ubuntu plucky/main amd64 python3-bcrypt amd64 4.2.0-2.1build1 [221 kB]
86s Preconfiguring packages ...
87s Fetched 829 MB in 35s (23.8 MB/s)
87s (Reading database ... 109140 files and directories currently installed.)
87s Preparing to unpack .../ncurses-bin_6.5+20250216-2_amd64.deb ...
87s Unpacking ncurses-bin (6.5+20250216-2) over (6.5+20250216-1) ...
87s Setting up ncurses-bin (6.5+20250216-2) ...
87s (Reading database ... 109140 files and directories currently installed.)
87s Preparing to unpack .../libc-dev-bin_2.41-1ubuntu1_amd64.deb ...
87s Unpacking libc-dev-bin (2.41-1ubuntu1) over (2.40-4ubuntu1) ...
87s Preparing to unpack .../libc6-dev_2.41-1ubuntu1_amd64.deb ...
87s Unpacking libc6-dev:amd64 (2.41-1ubuntu1) over (2.40-4ubuntu1) ...
87s Preparing to unpack .../locales_2.41-1ubuntu1_all.deb ...
87s Unpacking locales (2.41-1ubuntu1) over (2.40-4ubuntu1) ...
88s Preparing to unpack .../libc6_2.41-1ubuntu1_amd64.deb ...
88s Checking for services that may need to be restarted...
88s Checking init scripts...
88s Checking for services that may need to be restarted...
88s Checking init scripts...
88s Stopping some services possibly affected by the upgrade (will be restarted later):
88s  cron: stopping...done.
88s 
88s Unpacking libc6:amd64 (2.41-1ubuntu1) over (2.40-4ubuntu1) ...
88s Setting up libc6:amd64 (2.41-1ubuntu1) ...
88s Checking for services that may need to be restarted...
88s Checking init scripts...
88s Restarting services possibly affected by the upgrade:
88s  cron: restarting...done.
88s 
88s Services restarted successfully.
88s (Reading database ... 109141 files and directories currently installed.)
88s Preparing to unpack .../libc-bin_2.41-1ubuntu1_amd64.deb ...
88s Unpacking libc-bin (2.41-1ubuntu1) over (2.40-4ubuntu1) ...
88s Setting up libc-bin (2.41-1ubuntu1) ...
88s (Reading database ... 109141 files and directories currently installed.)
88s Preparing to unpack .../linux-libc-dev_6.14.0-10.10_amd64.deb ...
88s Unpacking linux-libc-dev:amd64 (6.14.0-10.10) over (6.12.0-16.16) ...
89s Preparing to unpack .../libatomic1_15-20250222-0ubuntu1_amd64.deb ...
89s Unpacking libatomic1:amd64 (15-20250222-0ubuntu1) over (15-20250213-1ubuntu1) ...
89s Preparing to unpack .../gcc-15-base_15-20250222-0ubuntu1_amd64.deb ...
89s Unpacking gcc-15-base:amd64 (15-20250222-0ubuntu1) over (15-20250213-1ubuntu1) ...
89s Setting up gcc-15-base:amd64 (15-20250222-0ubuntu1) ...
89s (Reading database ... 109146 files and directories currently installed.)
89s Preparing to unpack .../libgcc-s1_15-20250222-0ubuntu1_amd64.deb ...
89s Unpacking libgcc-s1:amd64 (15-20250222-0ubuntu1) over (15-20250213-1ubuntu1) ...
89s Setting up libgcc-s1:amd64 (15-20250222-0ubuntu1) ...
89s (Reading database ... 109146 files and directories currently installed.)
89s Preparing to unpack .../libstdc++6_15-20250222-0ubuntu1_amd64.deb ...
89s Unpacking libstdc++6:amd64 (15-20250222-0ubuntu1) over (15-20250213-1ubuntu1) ...
89s Setting up libstdc++6:amd64 (15-20250222-0ubuntu1) ...
89s (Reading database ... 109146 files and directories currently installed.)
89s Preparing to unpack .../ncurses-base_6.5+20250216-2_all.deb ...
89s Unpacking ncurses-base (6.5+20250216-2) over (6.5+20250216-1) ...
89s Setting up ncurses-base (6.5+20250216-2) ...
89s (Reading database ... 109146 files and directories currently installed.)
89s Preparing to unpack .../ncurses-term_6.5+20250216-2_all.deb ...
89s Unpacking ncurses-term (6.5+20250216-2) over (6.5+20250216-1) ...
90s Preparing to unpack .../liblz4-1_1.10.0-4_amd64.deb ...
90s Unpacking liblz4-1:amd64 (1.10.0-4) over (1.10.0-3) ...
90s Setting up liblz4-1:amd64 (1.10.0-4) ...
90s (Reading database ... 109146 files and directories currently installed.)
90s Preparing to unpack .../liblzma5_5.6.4-1_amd64.deb ...
90s Unpacking liblzma5:amd64 (5.6.4-1) over (5.6.3-1) ...
90s Setting up liblzma5:amd64 (5.6.4-1) ...
90s (Reading database ... 109146 files and directories currently installed.)
90s Preparing to unpack .../libsystemd0_257.3-1ubuntu3_amd64.deb ...
90s Unpacking libsystemd0:amd64 (257.3-1ubuntu3) over (257.2-3ubuntu1) ...
90s Setting up libsystemd0:amd64 (257.3-1ubuntu3) ...
90s (Reading database ... 109146 files and directories currently installed.)
90s Preparing to unpack .../libnss-systemd_257.3-1ubuntu3_amd64.deb ...
90s Unpacking libnss-systemd:amd64 (257.3-1ubuntu3) over (257.2-3ubuntu1) ...
90s Preparing to unpack .../systemd-sysv_257.3-1ubuntu3_amd64.deb ...
90s Unpacking systemd-sysv (257.3-1ubuntu3) over (257.2-3ubuntu1) ...
90s Preparing to unpack .../systemd-resolved_257.3-1ubuntu3_amd64.deb ...
90s Unpacking systemd-resolved (257.3-1ubuntu3) over (257.2-3ubuntu1) ...
90s Preparing to unpack .../libpam-systemd_257.3-1ubuntu3_amd64.deb ...
90s Unpacking libpam-systemd:amd64 (257.3-1ubuntu3) over (257.2-3ubuntu1) ...
90s Preparing to unpack .../libsystemd-shared_257.3-1ubuntu3_amd64.deb ...
90s Unpacking libsystemd-shared:amd64 (257.3-1ubuntu3) over (257.2-3ubuntu1) ...
90s Setting up libsystemd-shared:amd64 (257.3-1ubuntu3) ...
90s (Reading database ... 109146 files and directories currently installed.)
90s Preparing to unpack .../systemd_257.3-1ubuntu3_amd64.deb ...
90s Unpacking systemd (257.3-1ubuntu3) over (257.2-3ubuntu1) ...
91s Preparing to unpack .../systemd-timesyncd_257.3-1ubuntu3_amd64.deb ...
91s Unpacking systemd-timesyncd (257.3-1ubuntu3) over (257.2-3ubuntu1) ...
91s Preparing to unpack .../systemd-cryptsetup_257.3-1ubuntu3_amd64.deb ...
91s Unpacking systemd-cryptsetup (257.3-1ubuntu3) over (257.2-3ubuntu1) ...
91s Preparing to unpack .../udev_257.3-1ubuntu3_amd64.deb ...
91s Unpacking udev (257.3-1ubuntu3) over (257.2-3ubuntu1) ...
91s Preparing to unpack .../libudev1_257.3-1ubuntu3_amd64.deb ...
91s Unpacking libudev1:amd64 (257.3-1ubuntu3) over (257.2-3ubuntu1) ...
91s Setting up libudev1:amd64 (257.3-1ubuntu3) ...
91s (Reading database ... 109146 files and directories currently installed.)
91s Preparing to unpack .../libaudit-common_1%3a4.0.2-2ubuntu2_all.deb ...
91s Unpacking libaudit-common (1:4.0.2-2ubuntu2) over (1:4.0.2-2ubuntu1) ...
91s Setting up libaudit-common (1:4.0.2-2ubuntu2) ...
91s (Reading database ... 109146 files and directories currently installed.)
91s Preparing to unpack .../libcap-ng0_0.8.5-4build1_amd64.deb ...
91s Unpacking libcap-ng0:amd64 (0.8.5-4build1) over (0.8.5-4) ...
91s Setting up libcap-ng0:amd64 (0.8.5-4build1) ...
91s (Reading database ... 109146 files and directories currently installed.)
91s Preparing to unpack .../libaudit1_1%3a4.0.2-2ubuntu2_amd64.deb ...
91s Unpacking libaudit1:amd64 (1:4.0.2-2ubuntu2) over (1:4.0.2-2ubuntu1) ...
91s Setting up libaudit1:amd64 (1:4.0.2-2ubuntu2) ...
91s (Reading database ... 109146 files and directories currently installed.)
91s Preparing to unpack .../libseccomp2_2.5.5-1ubuntu6_amd64.deb ...
91s Unpacking libseccomp2:amd64 (2.5.5-1ubuntu6) over (2.5.5-1ubuntu5) ...
91s Setting up libseccomp2:amd64 (2.5.5-1ubuntu6) ...
91s (Reading database ... 109146 files and directories currently installed.)
91s Preparing to unpack .../libselinux1_3.7-3ubuntu3_amd64.deb ...
91s Unpacking libselinux1:amd64 (3.7-3ubuntu3) over (3.7-3ubuntu2) ...
91s Setting up libselinux1:amd64 (3.7-3ubuntu3) ...
91s (Reading database ... 109146 files and directories currently installed.)
91s Preparing to unpack .../libapparmor1_4.1.0~beta5-0ubuntu8_amd64.deb ...
91s Unpacking libapparmor1:amd64 (4.1.0~beta5-0ubuntu8) over (4.1.0~beta5-0ubuntu5) ...
91s Preparing to unpack .../libapt-pkg7.0_2.9.33_amd64.deb ...
91s Unpacking libapt-pkg7.0:amd64 (2.9.33) over (2.9.31ubuntu1) ...
91s Setting up libapt-pkg7.0:amd64 (2.9.33) ...
91s (Reading database ... 109146 files and directories currently installed.)
91s Preparing to unpack .../archives/apt_2.9.33_amd64.deb ...
92s Unpacking apt (2.9.33) over (2.9.31ubuntu1) ...
92s Setting up apt (2.9.33) ...
92s (Reading database ... 109146 files and directories currently installed.)
92s Preparing to unpack .../apt-utils_2.9.33_amd64.deb ...
92s Unpacking apt-utils (2.9.33) over (2.9.31ubuntu1) ...
92s Preparing to unpack .../python3-minimal_3.13.2-2_amd64.deb ...
92s Unpacking python3-minimal (3.13.2-2) over (3.13.2-1) ...
92s Setting up python3-minimal (3.13.2-2) ...
93s (Reading database ... 109146 files and directories currently installed.)
93s Preparing to unpack .../0-python3_3.13.2-2_amd64.deb ...
93s Unpacking python3 (3.13.2-2) over (3.13.2-1) ...
93s Preparing to unpack .../1-libpython3.13_3.13.2-2_amd64.deb ...
93s Unpacking libpython3.13:amd64 (3.13.2-2) over (3.13.2-1) ...
93s Preparing to unpack .../2-media-types_13.0.0_all.deb ...
93s Unpacking media-types (13.0.0) over (12.0.0) ...
93s Preparing to unpack .../3-libncurses6_6.5+20250216-2_amd64.deb ...
93s Unpacking libncurses6:amd64 (6.5+20250216-2) over (6.5+20250216-1) ...
93s Preparing to unpack .../4-libncursesw6_6.5+20250216-2_amd64.deb ...
93s Unpacking libncursesw6:amd64 (6.5+20250216-2) over (6.5+20250216-1) ...
93s Preparing to unpack .../5-libtinfo6_6.5+20250216-2_amd64.deb ...
93s Unpacking libtinfo6:amd64 (6.5+20250216-2) over (6.5+20250216-1) ...
93s Setting up libtinfo6:amd64 (6.5+20250216-2) ...
93s (Reading database ... 109146 files and directories currently installed.)
93s Preparing to unpack .../0-libsqlite3-0_3.46.1-2_amd64.deb ...
93s Unpacking libsqlite3-0:amd64 (3.46.1-2) over (3.46.1-1) ...
93s Preparing to unpack .../1-python3.13_3.13.2-2_amd64.deb ...
93s Unpacking python3.13 (3.13.2-2) over (3.13.2-1) ...
93s Preparing to unpack .../2-python3.13-minimal_3.13.2-2_amd64.deb ...
93s Unpacking python3.13-minimal (3.13.2-2) over (3.13.2-1) ...
93s Preparing to unpack .../3-libpython3.13-minimal_3.13.2-2_amd64.deb ...
93s Unpacking libpython3.13-minimal:amd64 (3.13.2-2) over (3.13.2-1) ...
93s Preparing to unpack .../4-libpython3.13-stdlib_3.13.2-2_amd64.deb ...
93s Unpacking libpython3.13-stdlib:amd64 (3.13.2-2) over (3.13.2-1) ...
94s Preparing to unpack .../5-libpython3-stdlib_3.13.2-2_amd64.deb ...
94s Unpacking libpython3-stdlib:amd64 (3.13.2-2) over (3.13.2-1) ...
94s Preparing to unpack .../6-rsync_3.4.1+ds1-3_amd64.deb ...
94s Unpacking rsync (3.4.1+ds1-3) over (3.4.1-0syncable1) ...
94s Selecting previously unselected package libdebuginfod-common.
94s Preparing to unpack .../7-libdebuginfod-common_0.192-4_all.deb ...
94s Unpacking libdebuginfod-common (0.192-4) ...
94s Preparing to unpack .../8-libsemanage-common_3.7-2.1build1_all.deb ...
94s Unpacking libsemanage-common (3.7-2.1build1) over (3.7-2.1) ...
94s Setting up libsemanage-common (3.7-2.1build1) ...
94s (Reading database ... 109155 files and directories currently installed.)
94s Preparing to unpack .../libsemanage2_3.7-2.1build1_amd64.deb ...
94s Unpacking libsemanage2:amd64 (3.7-2.1build1) over (3.7-2.1) ...
94s Setting up libsemanage2:amd64 (3.7-2.1build1) ...
94s (Reading database ... 109155 files and directories currently installed.)
94s Preparing to unpack .../libassuan9_3.0.2-2_amd64.deb ...
94s Unpacking libassuan9:amd64 (3.0.2-2) over (3.0.1-2) ...
94s Setting up libassuan9:amd64 (3.0.2-2) ...
94s (Reading database ... 109155 files and directories currently installed.)
94s Preparing to unpack .../00-gir1.2-girepository-2.0_1.83.4-1_amd64.deb ...
94s Unpacking gir1.2-girepository-2.0:amd64 (1.83.4-1) over (1.82.0-4) ...
94s Preparing to unpack .../01-gir1.2-glib-2.0_2.84.0-1_amd64.deb ...
94s Unpacking gir1.2-glib-2.0:amd64 (2.84.0-1) over (2.83.5-1) ...
94s Preparing to unpack .../02-libglib2.0-0t64_2.84.0-1_amd64.deb ... 94s Unpacking libglib2.0-0t64:amd64 (2.84.0-1) over (2.83.5-1) ... 94s Preparing to unpack .../03-libgirepository-1.0-1_1.83.4-1_amd64.deb ... 94s Unpacking libgirepository-1.0-1:amd64 (1.83.4-1) over (1.82.0-4) ... 94s Preparing to unpack .../04-libestr0_0.1.11-2_amd64.deb ... 94s Unpacking libestr0:amd64 (0.1.11-2) over (0.1.11-1build1) ... 94s Preparing to unpack .../05-libglib2.0-data_2.84.0-1_all.deb ... 94s Unpacking libglib2.0-data (2.84.0-1) over (2.83.5-1) ... 94s Preparing to unpack .../06-python3-newt_0.52.24-4ubuntu2_amd64.deb ... 94s Unpacking python3-newt:amd64 (0.52.24-4ubuntu2) over (0.52.24-4ubuntu1) ... 94s Preparing to unpack .../07-libnewt0.52_0.52.24-4ubuntu2_amd64.deb ... 94s Unpacking libnewt0.52:amd64 (0.52.24-4ubuntu2) over (0.52.24-4ubuntu1) ... 94s Preparing to unpack .../08-libxml2_2.12.7+dfsg+really2.9.14-0.2ubuntu5_amd64.deb ... 94s Unpacking libxml2:amd64 (2.12.7+dfsg+really2.9.14-0.2ubuntu5) over (2.12.7+dfsg+really2.9.14-0.2ubuntu4) ... 95s Preparing to unpack .../09-python-apt-common_2.9.9build1_all.deb ... 95s Unpacking python-apt-common (2.9.9build1) over (2.9.9) ... 95s Preparing to unpack .../10-python3-apt_2.9.9build1_amd64.deb ... 95s Unpacking python3-apt (2.9.9build1) over (2.9.9) ... 95s Preparing to unpack .../11-python3-cffi-backend_1.17.1-2build2_amd64.deb ... 95s Unpacking python3-cffi-backend:amd64 (1.17.1-2build2) over (1.17.1-2build1) ... 95s Preparing to unpack .../12-python3-dbus_1.3.2-5build5_amd64.deb ... 95s Unpacking python3-dbus (1.3.2-5build5) over (1.3.2-5build4) ... 95s Preparing to unpack .../13-python3-gi_3.50.0-4build1_amd64.deb ... 95s Unpacking python3-gi (3.50.0-4build1) over (3.50.0-4) ... 95s Preparing to unpack .../14-python3-yaml_6.0.2-1build2_amd64.deb ... 95s Unpacking python3-yaml (6.0.2-1build2) over (6.0.2-1build1) ... 95s Preparing to unpack .../15-rsyslog_8.2412.0-2ubuntu2_amd64.deb ... 
95s Unpacking rsyslog (8.2412.0-2ubuntu2) over (8.2412.0-2ubuntu1) ... 95s Preparing to unpack .../16-whiptail_0.52.24-4ubuntu2_amd64.deb ... 95s Unpacking whiptail (0.52.24-4ubuntu2) over (0.52.24-4ubuntu1) ... 95s Preparing to unpack .../17-ubuntu-minimal_1.549_amd64.deb ... 95s Unpacking ubuntu-minimal (1.549) over (1.548) ... 95s Preparing to unpack .../18-apparmor_4.1.0~beta5-0ubuntu8_amd64.deb ... 96s Unpacking apparmor (4.1.0~beta5-0ubuntu8) over (4.1.0~beta5-0ubuntu5) ... 96s Preparing to unpack .../19-dosfstools_4.2-1.2_amd64.deb ... 96s Unpacking dosfstools (4.2-1.2) over (4.2-1.1build1) ... 96s Preparing to unpack .../20-libnl-genl-3-200_3.7.0-1_amd64.deb ... 96s Unpacking libnl-genl-3-200:amd64 (3.7.0-1) over (3.7.0-0.3build2) ... 96s Preparing to unpack .../21-libnl-route-3-200_3.7.0-1_amd64.deb ... 96s Unpacking libnl-route-3-200:amd64 (3.7.0-1) over (3.7.0-0.3build2) ... 96s Preparing to unpack .../22-libnl-3-200_3.7.0-1_amd64.deb ... 96s Unpacking libnl-3-200:amd64 (3.7.0-1) over (3.7.0-0.3build2) ... 96s Preparing to unpack .../23-parted_3.6-5_amd64.deb ... 96s Unpacking parted (3.6-5) over (3.6-4build1) ... 96s Preparing to unpack .../24-libparted2t64_3.6-5_amd64.deb ... 96s Adding 'diversion of /lib/x86_64-linux-gnu/libparted.so.2 to /lib/x86_64-linux-gnu/libparted.so.2.usr-is-merged by libparted2t64' 96s Adding 'diversion of /lib/x86_64-linux-gnu/libparted.so.2.0.5 to /lib/x86_64-linux-gnu/libparted.so.2.0.5.usr-is-merged by libparted2t64' 96s Unpacking libparted2t64:amd64 (3.6-5) over (3.6-4build1) ... 96s Preparing to unpack .../25-pci.ids_0.0~2025.03.09-1_all.deb ... 96s Unpacking pci.ids (0.0~2025.03.09-1) over (0.0~2025.02.12-1) ... 96s Preparing to unpack .../26-pciutils_1%3a3.13.0-2_amd64.deb ... 97s Unpacking pciutils (1:3.13.0-2) over (1:3.13.0-1) ... 97s Preparing to unpack .../27-libpci3_1%3a3.13.0-2_amd64.deb ... 97s Unpacking libpci3:amd64 (1:3.13.0-2) over (1:3.13.0-1) ... 
97s Preparing to unpack .../28-strace_6.13+ds-1ubuntu1_amd64.deb ... 97s Unpacking strace (6.13+ds-1ubuntu1) over (6.11-0ubuntu1) ... 97s Preparing to unpack .../29-xz-utils_5.6.4-1_amd64.deb ... 97s Unpacking xz-utils (5.6.4-1) over (5.6.3-1) ... 97s Preparing to unpack .../30-ubuntu-standard_1.549_amd64.deb ... 97s Unpacking ubuntu-standard (1.549) over (1.548) ... 97s Preparing to unpack .../31-libgprofng0_2.44-3ubuntu1_amd64.deb ... 97s Unpacking libgprofng0:amd64 (2.44-3ubuntu1) over (2.44-2ubuntu1) ... 97s Preparing to unpack .../32-libctf0_2.44-3ubuntu1_amd64.deb ... 97s Unpacking libctf0:amd64 (2.44-3ubuntu1) over (2.44-2ubuntu1) ... 97s Preparing to unpack .../33-libctf-nobfd0_2.44-3ubuntu1_amd64.deb ... 97s Unpacking libctf-nobfd0:amd64 (2.44-3ubuntu1) over (2.44-2ubuntu1) ... 97s Preparing to unpack .../34-binutils-x86-64-linux-gnu_2.44-3ubuntu1_amd64.deb ... 97s Unpacking binutils-x86-64-linux-gnu (2.44-3ubuntu1) over (2.44-2ubuntu1) ... 97s Preparing to unpack .../35-libbinutils_2.44-3ubuntu1_amd64.deb ... 97s Unpacking libbinutils:amd64 (2.44-3ubuntu1) over (2.44-2ubuntu1) ... 97s Preparing to unpack .../36-binutils_2.44-3ubuntu1_amd64.deb ... 97s Unpacking binutils (2.44-3ubuntu1) over (2.44-2ubuntu1) ... 97s Preparing to unpack .../37-binutils-common_2.44-3ubuntu1_amd64.deb ... 97s Unpacking binutils-common:amd64 (2.44-3ubuntu1) over (2.44-2ubuntu1) ... 97s Preparing to unpack .../38-libsframe1_2.44-3ubuntu1_amd64.deb ... 97s Unpacking libsframe1:amd64 (2.44-3ubuntu1) over (2.44-2ubuntu1) ... 97s Preparing to unpack .../39-hwdata_0.393-3_all.deb ... 97s Unpacking hwdata (0.393-3) over (0.392-1) ... 97s Selecting previously unselected package pnp.ids. 97s Preparing to unpack .../40-pnp.ids_0.393-3_all.deb ... 97s Unpacking pnp.ids (0.393-3) ... 97s Preparing to unpack .../41-linux-tools-common_6.14.0-10.10_all.deb ... 97s Unpacking linux-tools-common (6.14.0-10.10) over (6.12.0-16.16) ... 97s Selecting previously unselected package bpftool. 
97s Preparing to unpack .../42-bpftool_7.6.0+6.14.0-10.10_amd64.deb ... 97s Unpacking bpftool (7.6.0+6.14.0-10.10) ... 97s Preparing to unpack .../43-python3-markupsafe_2.1.5-1build4_amd64.deb ... 97s Unpacking python3-markupsafe (2.1.5-1build4) over (2.1.5-1build3) ... 97s Preparing to unpack .../44-python3-jinja2_3.1.5-2ubuntu1_all.deb ... 97s Unpacking python3-jinja2 (3.1.5-2ubuntu1) over (3.1.5-2) ... 97s Preparing to unpack .../45-cloud-init-base_25.1-0ubuntu3_all.deb ... 98s Unpacking cloud-init-base (25.1-0ubuntu3) over (25.1-0ubuntu2) ... 98s Preparing to unpack .../46-libbrotli1_1.1.0-2build4_amd64.deb ... 98s Unpacking libbrotli1:amd64 (1.1.0-2build4) over (1.1.0-2build3) ... 98s Preparing to unpack .../47-curl_8.12.1-3ubuntu1_amd64.deb ... 98s Unpacking curl (8.12.1-3ubuntu1) over (8.12.1-2ubuntu1) ... 98s Preparing to unpack .../48-libcurl4t64_8.12.1-3ubuntu1_amd64.deb ... 98s Unpacking libcurl4t64:amd64 (8.12.1-3ubuntu1) over (8.12.1-2ubuntu1) ... 98s Preparing to unpack .../49-exfatprogs_1.2.8-1_amd64.deb ... 98s Unpacking exfatprogs (1.2.8-1) over (1.2.7-3) ... 98s Preparing to unpack .../50-libcurl3t64-gnutls_8.12.1-3ubuntu1_amd64.deb ... 98s Unpacking libcurl3t64-gnutls:amd64 (8.12.1-3ubuntu1) over (8.12.1-2ubuntu1) ... 98s Preparing to unpack .../51-fwupd_2.0.6-4_amd64.deb ... 98s Unpacking fwupd (2.0.6-4) over (2.0.6-3) ... 98s Preparing to unpack .../52-libfwupd3_2.0.6-4_amd64.deb ... 98s Unpacking libfwupd3:amd64 (2.0.6-4) over (2.0.6-3) ... 98s Preparing to unpack .../53-libmm-glib0_1.23.4-0ubuntu3_amd64.deb ... 98s Unpacking libmm-glib0:amd64 (1.23.4-0ubuntu3) over (1.23.4-0ubuntu2) ... 98s Preparing to unpack .../54-htop_3.4.0-2_amd64.deb ... 98s Unpacking htop (3.4.0-2) over (3.3.0-5) ... 98s Preparing to unpack .../55-linux-firmware_20250310.git9e1370d3-0ubuntu1_amd64.deb ... 98s Unpacking linux-firmware (20250310.git9e1370d3-0ubuntu1) over (20250204.git0fd450ee-0ubuntu1) ... 
101s Preparing to unpack .../56-initramfs-tools_0.146ubuntu1_all.deb ... 101s Unpacking initramfs-tools (0.146ubuntu1) over (0.145ubuntu3) ... 101s Preparing to unpack .../57-initramfs-tools-core_0.146ubuntu1_all.deb ... 101s Unpacking initramfs-tools-core (0.146ubuntu1) over (0.145ubuntu3) ... 101s Preparing to unpack .../58-initramfs-tools-bin_0.146ubuntu1_amd64.deb ... 101s Unpacking initramfs-tools-bin (0.146ubuntu1) over (0.145ubuntu3) ... 101s Selecting previously unselected package libdebuginfod1t64:amd64. 101s Preparing to unpack .../59-libdebuginfod1t64_0.192-4_amd64.deb ... 101s Unpacking libdebuginfod1t64:amd64 (0.192-4) ... 101s Preparing to unpack .../60-libftdi1-2_1.5-8build1_amd64.deb ... 101s Unpacking libftdi1-2:amd64 (1.5-8build1) over (1.5-8) ... 101s Preparing to unpack .../61-libgpgme11t64_1.24.2-1ubuntu2_amd64.deb ... 101s Unpacking libgpgme11t64:amd64 (1.24.2-1ubuntu2) over (1.24.2-1ubuntu1) ... 101s Preparing to unpack .../62-libjemalloc2_5.3.0-3_amd64.deb ... 101s Unpacking libjemalloc2:amd64 (5.3.0-3) over (5.3.0-2build1) ... 102s Selecting previously unselected package linux-modules-6.14.0-10-generic. 102s Preparing to unpack .../63-linux-modules-6.14.0-10-generic_6.14.0-10.10_amd64.deb ... 102s Unpacking linux-modules-6.14.0-10-generic (6.14.0-10.10) ... 102s Selecting previously unselected package linux-image-6.14.0-10-generic. 102s Preparing to unpack .../64-linux-image-6.14.0-10-generic_6.14.0-10.10_amd64.deb ... 102s Unpacking linux-image-6.14.0-10-generic (6.14.0-10.10) ... 102s Selecting previously unselected package linux-modules-extra-6.14.0-10-generic. 102s Preparing to unpack .../65-linux-modules-extra-6.14.0-10-generic_6.14.0-10.10_amd64.deb ... 102s Unpacking linux-modules-extra-6.14.0-10-generic (6.14.0-10.10) ... 103s Preparing to unpack .../66-linux-generic_6.14.0-10.10_amd64.deb ... 103s Unpacking linux-generic (6.14.0-10.10) over (6.12.0-16.16+2) ... 
103s Preparing to unpack .../67-linux-image-generic_6.14.0-10.10_amd64.deb ... 103s Unpacking linux-image-generic (6.14.0-10.10) over (6.12.0-16.16+2) ... 103s Preparing to unpack .../68-linux-virtual_6.14.0-10.10_amd64.deb ... 103s Unpacking linux-virtual (6.14.0-10.10) over (6.12.0-16.16+2) ... 103s Preparing to unpack .../69-linux-image-virtual_6.14.0-10.10_amd64.deb ... 103s Unpacking linux-image-virtual (6.14.0-10.10) over (6.12.0-16.16+2) ... 103s Preparing to unpack .../70-linux-headers-virtual_6.14.0-10.10_amd64.deb ... 103s Unpacking linux-headers-virtual (6.14.0-10.10) over (6.12.0-16.16+2) ... 103s Selecting previously unselected package linux-headers-6.14.0-10. 103s Preparing to unpack .../71-linux-headers-6.14.0-10_6.14.0-10.10_all.deb ... 103s Unpacking linux-headers-6.14.0-10 (6.14.0-10.10) ... 106s Selecting previously unselected package linux-headers-6.14.0-10-generic. 106s Preparing to unpack .../72-linux-headers-6.14.0-10-generic_6.14.0-10.10_amd64.deb ... 106s Unpacking linux-headers-6.14.0-10-generic (6.14.0-10.10) ... 107s Preparing to unpack .../73-linux-headers-generic_6.14.0-10.10_amd64.deb ... 107s Unpacking linux-headers-generic (6.14.0-10.10) over (6.12.0-16.16+2) ... 107s Selecting previously unselected package linux-perf. 107s Preparing to unpack .../74-linux-perf_6.14.0-10.10_amd64.deb ... 107s Unpacking linux-perf (6.14.0-10.10) ... 107s Selecting previously unselected package linux-tools-6.14.0-10. 107s Preparing to unpack .../75-linux-tools-6.14.0-10_6.14.0-10.10_amd64.deb ... 107s Unpacking linux-tools-6.14.0-10 (6.14.0-10.10) ... 107s Selecting previously unselected package linux-tools-6.14.0-10-generic. 107s Preparing to unpack .../76-linux-tools-6.14.0-10-generic_6.14.0-10.10_amd64.deb ... 107s Unpacking linux-tools-6.14.0-10-generic (6.14.0-10.10) ... 107s Preparing to unpack .../77-pinentry-curses_1.3.1-2ubuntu3_amd64.deb ... 107s Unpacking pinentry-curses (1.3.1-2ubuntu3) over (1.3.1-2ubuntu2) ... 
107s Preparing to unpack .../78-python3-lazr.uri_1.0.6-6_all.deb ... 107s Unpacking python3-lazr.uri (1.0.6-6) over (1.0.6-5) ... 107s Preparing to unpack .../79-python3-rpds-py_0.21.0-2ubuntu2_amd64.deb ... 108s Unpacking python3-rpds-py (0.21.0-2ubuntu2) over (0.21.0-2ubuntu1) ... 108s Preparing to unpack .../80-python3-systemd_235-1build6_amd64.deb ... 108s Unpacking python3-systemd (235-1build6) over (235-1build5) ... 108s Preparing to unpack .../81-python3.13-gdbm_3.13.2-2_amd64.deb ... 108s Unpacking python3.13-gdbm (3.13.2-2) over (3.13.2-1) ... 108s Preparing to unpack .../82-ubuntu-kernel-accessories_1.549_amd64.deb ... 108s Unpacking ubuntu-kernel-accessories (1.549) over (1.548) ... 108s Preparing to unpack .../83-cloud-init_25.1-0ubuntu3_all.deb ... 108s Unpacking cloud-init (25.1-0ubuntu3) over (25.1-0ubuntu2) ... 108s Preparing to unpack .../84-python3-bcrypt_4.2.0-2.1build1_amd64.deb ... 108s Unpacking python3-bcrypt (4.2.0-2.1build1) over (4.2.0-2.1) ... 108s Setting up linux-headers-6.14.0-10 (6.14.0-10.10) ... 108s Setting up media-types (13.0.0) ... 108s Installing new version of config file /etc/mime.types ... 108s Setting up linux-headers-6.14.0-10-generic (6.14.0-10.10) ... 108s Setting up ubuntu-kernel-accessories (1.549) ... 108s Setting up libapparmor1:amd64 (4.1.0~beta5-0ubuntu8) ... 108s Setting up pci.ids (0.0~2025.03.09-1) ... 108s Setting up libnewt0.52:amd64 (0.52.24-4ubuntu2) ... 108s Setting up apt-utils (2.9.33) ... 108s Setting up libdebuginfod-common (0.192-4) ... 108s Setting up exfatprogs (1.2.8-1) ... 108s Setting up linux-firmware (20250310.git9e1370d3-0ubuntu1) ... 108s Setting up bpftool (7.6.0+6.14.0-10.10) ... 108s Setting up libestr0:amd64 (0.1.11-2) ... 108s Setting up libbrotli1:amd64 (1.1.0-2build4) ... 108s Setting up libsqlite3-0:amd64 (3.46.1-2) ... 108s Setting up dosfstools (4.2-1.2) ... 108s Setting up rsyslog (8.2412.0-2ubuntu2) ... 108s info: The user `syslog' is already a member of `adm'. 
109s Setting up binutils-common:amd64 (2.44-3ubuntu1) ... 109s Setting up libcurl3t64-gnutls:amd64 (8.12.1-3ubuntu1) ... 109s Setting up linux-libc-dev:amd64 (6.14.0-10.10) ... 109s Setting up libctf-nobfd0:amd64 (2.44-3ubuntu1) ... 109s Setting up systemd (257.3-1ubuntu3) ... 109s /usr/lib/tmpfiles.d/legacy.conf:14: Duplicate line for path "/run/lock", ignoring. 109s Created symlink '/run/systemd/system/tmp.mount' → '/dev/null'. 109s /usr/lib/tmpfiles.d/legacy.conf:14: Duplicate line for path "/run/lock", ignoring. 110s Setting up libparted2t64:amd64 (3.6-5) ... 110s Removing 'diversion of /lib/x86_64-linux-gnu/libparted.so.2 to /lib/x86_64-linux-gnu/libparted.so.2.usr-is-merged by libparted2t64' 110s Removing 'diversion of /lib/x86_64-linux-gnu/libparted.so.2.0.5 to /lib/x86_64-linux-gnu/libparted.so.2.0.5.usr-is-merged by libparted2t64' 110s Setting up linux-headers-generic (6.14.0-10.10) ... 110s Setting up libjemalloc2:amd64 (5.3.0-3) ... 110s Setting up locales (2.41-1ubuntu1) ... 110s Installing new version of config file /etc/locale.alias ... 111s Generating locales (this might take a while)... 112s en_US.UTF-8... done 112s Generation complete. 112s Setting up libsframe1:amd64 (2.44-3ubuntu1) ... 112s Setting up libpython3.13-minimal:amd64 (3.13.2-2) ... 112s Setting up apparmor (4.1.0~beta5-0ubuntu8) ... 112s Installing new version of config file /etc/apparmor.d/fusermount3 ... 112s Installing new version of config file /etc/apparmor.d/lsusb ... 112s Installing new version of config file /etc/apparmor.d/openvpn ... 113s Reloading AppArmor profiles 115s Setting up libftdi1-2:amd64 (1.5-8build1) ... 115s Setting up libglib2.0-data (2.84.0-1) ... 115s Setting up systemd-cryptsetup (257.3-1ubuntu3) ... 115s Setting up libncurses6:amd64 (6.5+20250216-2) ... 115s Setting up strace (6.13+ds-1ubuntu1) ... 115s Setting up xz-utils (5.6.4-1) ... 115s Setting up systemd-timesyncd (257.3-1ubuntu3) ... 
115s systemd-time-wait-sync.service is a disabled or a static unit not running, not starting it. 115s Setting up libatomic1:amd64 (15-20250222-0ubuntu1) ... 115s Setting up udev (257.3-1ubuntu3) ... 116s Setting up linux-modules-6.14.0-10-generic (6.14.0-10.10) ... 118s Setting up libncursesw6:amd64 (6.5+20250216-2) ... 118s Setting up libpci3:amd64 (1:3.13.0-2) ... 118s Setting up whiptail (0.52.24-4ubuntu2) ... 118s Setting up python-apt-common (2.9.9build1) ... 118s Setting up pnp.ids (0.393-3) ... 118s Setting up libnl-3-200:amd64 (3.7.0-1) ... 118s Setting up python3.13-minimal (3.13.2-2) ... 119s Setting up libgpgme11t64:amd64 (1.24.2-1ubuntu2) ... 119s Setting up libbinutils:amd64 (2.44-3ubuntu1) ... 119s Setting up libc-dev-bin (2.41-1ubuntu1) ... 119s Setting up libpython3.13-stdlib:amd64 (3.13.2-2) ... 119s Setting up libxml2:amd64 (2.12.7+dfsg+really2.9.14-0.2ubuntu5) ... 119s Setting up rsync (3.4.1+ds1-3) ... 119s rsync.service is a disabled or a static unit not running, not starting it. 119s Setting up python3.13-gdbm (3.13.2-2) ... 119s Setting up libpython3-stdlib:amd64 (3.13.2-2) ... 119s Setting up systemd-resolved (257.3-1ubuntu3) ... 120s Setting up initramfs-tools-bin (0.146ubuntu1) ... 120s Setting up ncurses-term (6.5+20250216-2) ... 120s Setting up libctf0:amd64 (2.44-3ubuntu1) ... 120s Setting up libpython3.13:amd64 (3.13.2-2) ... 120s Setting up pinentry-curses (1.3.1-2ubuntu3) ... 120s Setting up libdebuginfod1t64:amd64 (0.192-4) ... 120s Setting up systemd-sysv (257.3-1ubuntu3) ... 120s Setting up linux-headers-virtual (6.14.0-10.10) ... 120s Setting up libcurl4t64:amd64 (8.12.1-3ubuntu1) ... 120s Setting up python3.13 (3.13.2-2) ... 121s Setting up htop (3.4.0-2) ... 121s Setting up linux-image-6.14.0-10-generic (6.14.0-10.10) ... 
123s I: /boot/vmlinuz.old is now a symlink to vmlinuz-6.12.0-16-generic 123s I: /boot/initrd.img.old is now a symlink to initrd.img-6.12.0-16-generic 123s I: /boot/vmlinuz is now a symlink to vmlinuz-6.14.0-10-generic 123s I: /boot/initrd.img is now a symlink to initrd.img-6.14.0-10-generic 123s Setting up parted (3.6-5) ... 123s Setting up libnss-systemd:amd64 (257.3-1ubuntu3) ... 123s Setting up python3 (3.13.2-2) ... 123s Setting up python3-newt:amd64 (0.52.24-4ubuntu2) ... 123s Setting up python3-markupsafe (2.1.5-1build4) ... 123s Setting up linux-modules-extra-6.14.0-10-generic (6.14.0-10.10) ... 125s Setting up libnl-route-3-200:amd64 (3.7.0-1) ... 125s Setting up hwdata (0.393-3) ... 125s Setting up python3-jinja2 (3.1.5-2ubuntu1) ... 126s Setting up libglib2.0-0t64:amd64 (2.84.0-1) ... 126s No schema files found: doing nothing. 126s Setting up libgprofng0:amd64 (2.44-3ubuntu1) ... 126s Setting up linux-perf (6.14.0-10.10) ... 126s Setting up gir1.2-glib-2.0:amd64 (2.84.0-1) ... 126s Setting up pciutils (1:3.13.0-2) ... 126s Setting up python3-rpds-py (0.21.0-2ubuntu2) ... 126s Setting up libmm-glib0:amd64 (1.23.4-0ubuntu3) ... 126s Setting up libnl-genl-3-200:amd64 (3.7.0-1) ... 126s Setting up libpam-systemd:amd64 (257.3-1ubuntu3) ... 126s Setting up libc6-dev:amd64 (2.41-1ubuntu1) ... 126s Setting up libgirepository-1.0-1:amd64 (1.83.4-1) ... 126s Setting up curl (8.12.1-3ubuntu1) ... 126s Setting up linux-image-virtual (6.14.0-10.10) ... 126s Setting up initramfs-tools-core (0.146ubuntu1) ... 126s Setting up linux-tools-common (6.14.0-10.10) ... 126s Setting up python3-systemd (235-1build6) ... 126s Setting up python3-cffi-backend:amd64 (1.17.1-2build2) ... 126s Setting up binutils-x86-64-linux-gnu (2.44-3ubuntu1) ... 126s Setting up linux-image-generic (6.14.0-10.10) ... 126s Setting up python3-dbus (1.3.2-5build5) ... 126s Setting up linux-tools-6.14.0-10 (6.14.0-10.10) ... 126s Setting up initramfs-tools (0.146ubuntu1) ... 
126s Installing new version of config file /etc/kernel/postinst.d/initramfs-tools ... 126s Installing new version of config file /etc/kernel/postrm.d/initramfs-tools ... 126s update-initramfs: deferring update (trigger activated) 126s Setting up linux-generic (6.14.0-10.10) ... 126s Setting up ubuntu-minimal (1.549) ... 126s Setting up python3-apt (2.9.9build1) ... 126s Setting up python3-bcrypt (4.2.0-2.1build1) ... 126s Setting up python3-yaml (6.0.2-1build2) ... 126s Setting up libfwupd3:amd64 (2.0.6-4) ... 126s Setting up python3-lazr.uri (1.0.6-6) ... 127s Setting up binutils (2.44-3ubuntu1) ... 127s Setting up ubuntu-standard (1.549) ... 127s Setting up cloud-init-base (25.1-0ubuntu3) ... 128s Setting up linux-virtual (6.14.0-10.10) ... 128s Setting up gir1.2-girepository-2.0:amd64 (1.83.4-1) ... 128s Setting up python3-gi (3.50.0-4build1) ... 128s Setting up linux-tools-6.14.0-10-generic (6.14.0-10.10) ... 128s Setting up fwupd (2.0.6-4) ... 129s fwupd-refresh.service is a disabled or a static unit not running, not starting it. 129s Setting up cloud-init (25.1-0ubuntu3) ... 129s Processing triggers for man-db (2.13.0-1) ... 131s Processing triggers for dbus (1.16.2-1ubuntu1) ... 131s Processing triggers for shared-mime-info (2.4-5) ... 131s Warning: program compiled against libxml 212 using older 209 131s Processing triggers for libc-bin (2.41-1ubuntu1) ... 131s Processing triggers for linux-image-6.14.0-10-generic (6.14.0-10.10) ... 131s /etc/kernel/postinst.d/initramfs-tools: 131s update-initramfs: Generating /boot/initrd.img-6.14.0-10-generic 131s W: No lz4 in /usr/bin:/sbin:/bin, using gzip 141s /etc/kernel/postinst.d/zz-update-grub: 141s Sourcing file `/etc/default/grub' 141s Sourcing file `/etc/default/grub.d/50-cloudimg-settings.cfg' 141s Sourcing file `/etc/default/grub.d/90-autopkgtest.cfg' 141s Generating grub configuration file ... 
141s Found linux image: /boot/vmlinuz-6.14.0-10-generic 141s Found initrd image: /boot/initrd.img-6.14.0-10-generic 141s Found linux image: /boot/vmlinuz-6.12.0-16-generic 141s Found initrd image: /boot/initrd.img-6.12.0-16-generic 142s Found linux image: /boot/vmlinuz-6.11.0-8-generic 142s Found initrd image: /boot/initrd.img-6.11.0-8-generic 142s Warning: os-prober will not be executed to detect other bootable partitions. 142s Systems on them will not be added to the GRUB boot configuration. 142s Check GRUB_DISABLE_OS_PROBER documentation entry. 142s Adding boot menu entry for UEFI Firmware Settings ... 142s done 142s Processing triggers for initramfs-tools (0.146ubuntu1) ... 142s update-initramfs: Generating /boot/initrd.img-6.14.0-10-generic 142s W: No lz4 in /usr/bin:/sbin:/bin, using gzip 153s Reading package lists... 153s Building dependency tree... 153s Reading state information... 153s Solving dependencies... 154s The following packages will be REMOVED: 154s libnl-genl-3-200* libnsl2* libpython3.12-minimal* libpython3.12-stdlib* 154s libpython3.12t64* linux-headers-6.11.0-8* linux-headers-6.11.0-8-generic* 154s linux-headers-6.12.0-16* linux-headers-6.12.0-16-generic* 154s linux-image-6.11.0-8-generic* linux-image-6.12.0-16-generic* 154s linux-modules-6.11.0-8-generic* linux-modules-6.12.0-16-generic* 154s linux-modules-extra-6.12.0-16-generic* linux-tools-6.11.0-8* 154s linux-tools-6.11.0-8-generic* linux-tools-6.12.0-16* 154s linux-tools-6.12.0-16-generic* 154s 0 upgraded, 0 newly installed, 18 to remove and 5 not upgraded. 154s After this operation, 545 MB disk space will be freed. 154s (Reading database ... 148643 files and directories currently installed.)
154s Removing libnl-genl-3-200:amd64 (3.7.0-1) ... 154s Removing linux-tools-6.11.0-8-generic (6.11.0-8.8) ... 154s Removing linux-tools-6.11.0-8 (6.11.0-8.8) ... 154s Removing libpython3.12t64:amd64 (3.12.9-1) ... 154s Removing libpython3.12-stdlib:amd64 (3.12.9-1) ... 154s Removing libnsl2:amd64 (1.3.0-3build3) ... 154s Removing libpython3.12-minimal:amd64 (3.12.9-1) ... 154s Removing linux-headers-6.11.0-8-generic (6.11.0-8.8) ... 155s Removing linux-headers-6.11.0-8 (6.11.0-8.8) ... 157s Removing linux-headers-6.12.0-16-generic (6.12.0-16.16) ... 157s Removing linux-headers-6.12.0-16 (6.12.0-16.16) ... 159s Removing linux-image-6.11.0-8-generic (6.11.0-8.8) ... 159s /etc/kernel/postrm.d/initramfs-tools: 159s update-initramfs: Deleting /boot/initrd.img-6.11.0-8-generic 160s /etc/kernel/postrm.d/zz-update-grub: 160s Sourcing file `/etc/default/grub' 160s Sourcing file `/etc/default/grub.d/50-cloudimg-settings.cfg' 160s Sourcing file `/etc/default/grub.d/90-autopkgtest.cfg' 160s Generating grub configuration file ... 160s Found linux image: /boot/vmlinuz-6.14.0-10-generic 160s Found initrd image: /boot/initrd.img-6.14.0-10-generic 160s Found linux image: /boot/vmlinuz-6.12.0-16-generic 160s Found initrd image: /boot/initrd.img-6.12.0-16-generic 160s Warning: os-prober will not be executed to detect other bootable partitions. 160s Systems on them will not be added to the GRUB boot configuration. 160s Check GRUB_DISABLE_OS_PROBER documentation entry. 160s Adding boot menu entry for UEFI Firmware Settings ... 160s done 160s Removing linux-image-6.12.0-16-generic (6.12.0-16.16) ...
160s W: Removing the running kernel 160s I: /boot/vmlinuz.old is now a symlink to vmlinuz-6.14.0-10-generic 160s I: /boot/initrd.img.old is now a symlink to initrd.img-6.14.0-10-generic 160s /etc/kernel/postrm.d/initramfs-tools: 160s update-initramfs: Deleting /boot/initrd.img-6.12.0-16-generic 160s /etc/kernel/postrm.d/zz-update-grub: 160s Sourcing file `/etc/default/grub' 160s Sourcing file `/etc/default/grub.d/50-cloudimg-settings.cfg' 160s Sourcing file `/etc/default/grub.d/90-autopkgtest.cfg' 160s Generating grub configuration file ... 160s Found linux image: /boot/vmlinuz-6.14.0-10-generic 160s Found initrd image: /boot/initrd.img-6.14.0-10-generic 161s Warning: os-prober will not be executed to detect other bootable partitions. 161s Systems on them will not be added to the GRUB boot configuration. 161s Check GRUB_DISABLE_OS_PROBER documentation entry. 161s Adding boot menu entry for UEFI Firmware Settings ... 161s done 161s Removing linux-modules-6.11.0-8-generic (6.11.0-8.8) ... 161s Removing linux-modules-extra-6.12.0-16-generic (6.12.0-16.16) ... 162s Removing linux-modules-6.12.0-16-generic (6.12.0-16.16) ... 162s Removing linux-tools-6.12.0-16-generic (6.12.0-16.16) ... 162s Removing linux-tools-6.12.0-16 (6.12.0-16.16) ... 163s Processing triggers for libc-bin (2.41-1ubuntu1) ... 163s (Reading database ... 76972 files and directories currently installed.)
163s Purging configuration files for linux-image-6.11.0-8-generic (6.11.0-8.8) ... 163s Purging configuration files for libpython3.12-minimal:amd64 (3.12.9-1) ... 163s Purging configuration files for linux-modules-extra-6.12.0-16-generic (6.12.0-16.16) ... 163s Purging configuration files for linux-modules-6.12.0-16-generic (6.12.0-16.16) ... 163s dpkg: warning: while removing linux-modules-6.12.0-16-generic, directory '/lib/modules/6.12.0-16-generic' not empty so not removed 163s Purging configuration files for linux-modules-6.11.0-8-generic (6.11.0-8.8) ... 163s Purging configuration files for linux-image-6.12.0-16-generic (6.12.0-16.16) ... 163s rmdir: failed to remove '/lib/modules/6.12.0-16-generic': Directory not empty 163s autopkgtest [23:15:32]: upgrading testbed (apt dist-upgrade and autopurge) 163s Reading package lists... 163s Building dependency tree... 163s Reading state information... 164s Calculating upgrade...Starting pkgProblemResolver with broken count: 0 164s Starting 2 pkgProblemResolver with broken count: 0 164s Done 164s Entering ResolveByKeep 165s 165s Calculating upgrade... 165s The following packages will be upgraded: 165s libc-bin libc-dev-bin libc6 libc6-dev locales 165s 5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 165s Need to get 10.5 MB of archives. 165s After this operation, 1024 B of additional disk space will be used. 165s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed/main amd64 libc6-dev amd64 2.41-1ubuntu2 [2183 kB] 165s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/main amd64 libc-dev-bin amd64 2.41-1ubuntu2 [24.7 kB] 165s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/main amd64 libc6 amd64 2.41-1ubuntu2 [3327 kB] 166s Get:4 http://ftpmaster.internal/ubuntu plucky-proposed/main amd64 libc-bin amd64 2.41-1ubuntu2 [700 kB] 166s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/main amd64 locales all 2.41-1ubuntu2 [4246 kB] 166s Preconfiguring packages ... 
166s Fetched 10.5 MB in 1s (9391 kB/s) 166s (Reading database ... 76968 files and directories currently installed.) 166s Preparing to unpack .../libc6-dev_2.41-1ubuntu2_amd64.deb ... 166s Unpacking libc6-dev:amd64 (2.41-1ubuntu2) over (2.41-1ubuntu1) ... 167s Preparing to unpack .../libc-dev-bin_2.41-1ubuntu2_amd64.deb ... 167s Unpacking libc-dev-bin (2.41-1ubuntu2) over (2.41-1ubuntu1) ... 167s Preparing to unpack .../libc6_2.41-1ubuntu2_amd64.deb ... 167s Unpacking libc6:amd64 (2.41-1ubuntu2) over (2.41-1ubuntu1) ... 167s Setting up libc6:amd64 (2.41-1ubuntu2) ... 167s (Reading database ... 76968 files and directories currently installed.) 167s Preparing to unpack .../libc-bin_2.41-1ubuntu2_amd64.deb ... 167s Unpacking libc-bin (2.41-1ubuntu2) over (2.41-1ubuntu1) ... 167s Setting up libc-bin (2.41-1ubuntu2) ... 167s (Reading database ... 76968 files and directories currently installed.)
167s Preparing to unpack .../locales_2.41-1ubuntu2_all.deb ... 167s Unpacking locales (2.41-1ubuntu2) over (2.41-1ubuntu1) ... 167s Setting up locales (2.41-1ubuntu2) ... 168s Generating locales (this might take a while)... 169s en_US.UTF-8... done 169s Generation complete. 169s Setting up libc-dev-bin (2.41-1ubuntu2) ... 169s Setting up libc6-dev:amd64 (2.41-1ubuntu2) ... 169s Processing triggers for man-db (2.13.0-1) ... 170s Processing triggers for systemd (257.3-1ubuntu3) ... 171s Reading package lists... 171s Building dependency tree... 171s Reading state information... 171s Starting pkgProblemResolver with broken count: 0 171s Starting 2 pkgProblemResolver with broken count: 0 171s Done 172s Solving dependencies... 172s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
172s autopkgtest [23:15:41]: rebooting testbed after setup commands that affected boot 195s autopkgtest [23:16:04]: testbed running kernel: Linux 6.14.0-10-generic #10-Ubuntu SMP PREEMPT_DYNAMIC Wed Mar 12 16:07:00 UTC 2025 197s autopkgtest [23:16:06]: @@@@@@@@@@@@@@@@@@@@ apt-source slony1-2 200s Get:1 http://ftpmaster.internal/ubuntu plucky/universe slony1-2 2.2.11-6 (dsc) [2462 B] 200s Get:2 http://ftpmaster.internal/ubuntu plucky/universe slony1-2 2.2.11-6 (tar) [1465 kB] 200s Get:3 http://ftpmaster.internal/ubuntu plucky/universe slony1-2 2.2.11-6 (diff) [17.3 kB] 200s gpgv: Signature made Thu Sep 19 09:07:19 2024 UTC 200s gpgv: using RSA key 5C48FE6157F49179597087C64C5A6BAB12D2A7AE 200s gpgv: Can't check signature: No public key 200s dpkg-source: warning: cannot verify inline signature for ./slony1-2_2.2.11-6.dsc: no acceptable signature found 201s autopkgtest [23:16:10]: testing package slony1-2 version 2.2.11-6 201s autopkgtest [23:16:10]: build not needed 202s autopkgtest [23:16:11]: test load-functions: preparing testbed 202s Reading package lists... 202s Building dependency tree... 202s Reading state information... 203s Starting pkgProblemResolver with broken count: 0 203s Starting 2 pkgProblemResolver with broken count: 0 203s Done 203s The following NEW packages will be installed: 203s libio-pty-perl libipc-run-perl libjson-perl libllvm20 libpq5 libxslt1.1 203s postgresql-17 postgresql-17-slony1-2 postgresql-client-17 203s postgresql-client-common postgresql-common postgresql-common-dev 203s slony1-2-bin slony1-2-doc ssl-cert 203s 0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded. 203s Need to get 49.8 MB of archives. 203s After this operation, 203 MB of additional disk space will be used. 
203s Get:1 http://ftpmaster.internal/ubuntu plucky/main amd64 libjson-perl all 4.10000-1 [81.9 kB] 203s Get:2 http://ftpmaster.internal/ubuntu plucky/main amd64 postgresql-client-common all 274 [47.6 kB] 203s Get:3 http://ftpmaster.internal/ubuntu plucky/main amd64 libio-pty-perl amd64 1:1.20-1build3 [31.4 kB] 204s Get:4 http://ftpmaster.internal/ubuntu plucky/main amd64 libipc-run-perl all 20231003.0-2 [91.5 kB] 204s Get:5 http://ftpmaster.internal/ubuntu plucky/main amd64 postgresql-common-dev all 274 [73.0 kB] 204s Get:6 http://ftpmaster.internal/ubuntu plucky/main amd64 ssl-cert all 1.1.3ubuntu1 [18.7 kB] 204s Get:7 http://ftpmaster.internal/ubuntu plucky/main amd64 postgresql-common all 274 [101 kB] 204s Get:8 http://ftpmaster.internal/ubuntu plucky/main amd64 libllvm20 amd64 1:20.1.0~+rc2-1~exp2ubuntu0.4 [30.5 MB] 205s Get:9 http://ftpmaster.internal/ubuntu plucky/main amd64 libpq5 amd64 17.4-1 [155 kB] 205s Get:10 http://ftpmaster.internal/ubuntu plucky/main amd64 libxslt1.1 amd64 1.1.39-0exp1ubuntu2 [175 kB] 205s Get:11 http://ftpmaster.internal/ubuntu plucky/main amd64 postgresql-client-17 amd64 17.4-1 [1425 kB] 205s Get:12 http://ftpmaster.internal/ubuntu plucky/main amd64 postgresql-17 amd64 17.4-1 [16.6 MB] 206s Get:13 http://ftpmaster.internal/ubuntu plucky/universe amd64 postgresql-17-slony1-2 amd64 2.2.11-6 [22.8 kB] 206s Get:14 http://ftpmaster.internal/ubuntu plucky/universe amd64 slony1-2-bin amd64 2.2.11-6 [231 kB] 206s Get:15 http://ftpmaster.internal/ubuntu plucky/universe amd64 slony1-2-doc all 2.2.11-6 [327 kB] 206s Preconfiguring packages ... 206s Fetched 49.8 MB in 3s (19.4 MB/s) 206s Selecting previously unselected package libjson-perl. 206s (Reading database ... 76968 files and directories currently installed.)
206s Preparing to unpack .../00-libjson-perl_4.10000-1_all.deb ... 206s Unpacking libjson-perl (4.10000-1) ... 206s Selecting previously unselected package postgresql-client-common. 206s Preparing to unpack .../01-postgresql-client-common_274_all.deb ... 206s Unpacking postgresql-client-common (274) ... 206s Selecting previously unselected package libio-pty-perl. 206s Preparing to unpack .../02-libio-pty-perl_1%3a1.20-1build3_amd64.deb ... 206s Unpacking libio-pty-perl (1:1.20-1build3) ... 206s Selecting previously unselected package libipc-run-perl. 206s Preparing to unpack .../03-libipc-run-perl_20231003.0-2_all.deb ... 206s Unpacking libipc-run-perl (20231003.0-2) ... 206s Selecting previously unselected package postgresql-common-dev. 206s Preparing to unpack .../04-postgresql-common-dev_274_all.deb ... 206s Unpacking postgresql-common-dev (274) ... 206s Selecting previously unselected package ssl-cert. 206s Preparing to unpack .../05-ssl-cert_1.1.3ubuntu1_all.deb ... 206s Unpacking ssl-cert (1.1.3ubuntu1) ... 207s Selecting previously unselected package postgresql-common. 207s Preparing to unpack .../06-postgresql-common_274_all.deb ... 207s Adding 'diversion of /usr/bin/pg_config to /usr/bin/pg_config.libpq-dev by postgresql-common' 207s Unpacking postgresql-common (274) ... 207s Selecting previously unselected package libllvm20:amd64. 207s Preparing to unpack .../07-libllvm20_1%3a20.1.0~+rc2-1~exp2ubuntu0.4_amd64.deb ... 207s Unpacking libllvm20:amd64 (1:20.1.0~+rc2-1~exp2ubuntu0.4) ... 207s Selecting previously unselected package libpq5:amd64. 207s Preparing to unpack .../08-libpq5_17.4-1_amd64.deb ... 207s Unpacking libpq5:amd64 (17.4-1) ...
207s Selecting previously unselected package libxslt1.1:amd64. 207s Preparing to unpack .../09-libxslt1.1_1.1.39-0exp1ubuntu2_amd64.deb ... 207s Unpacking libxslt1.1:amd64 (1.1.39-0exp1ubuntu2) ... 207s Selecting previously unselected package postgresql-client-17. 207s Preparing to unpack .../10-postgresql-client-17_17.4-1_amd64.deb ... 207s Unpacking postgresql-client-17 (17.4-1) ... 207s Selecting previously unselected package postgresql-17. 207s Preparing to unpack .../11-postgresql-17_17.4-1_amd64.deb ... 207s Unpacking postgresql-17 (17.4-1) ... 208s Selecting previously unselected package postgresql-17-slony1-2. 208s Preparing to unpack .../12-postgresql-17-slony1-2_2.2.11-6_amd64.deb ... 208s Unpacking postgresql-17-slony1-2 (2.2.11-6) ... 208s Selecting previously unselected package slony1-2-bin. 208s Preparing to unpack .../13-slony1-2-bin_2.2.11-6_amd64.deb ... 208s Unpacking slony1-2-bin (2.2.11-6) ... 208s Selecting previously unselected package slony1-2-doc. 208s Preparing to unpack .../14-slony1-2-doc_2.2.11-6_all.deb ... 208s Unpacking slony1-2-doc (2.2.11-6) ... 208s Setting up postgresql-client-common (274) ... 208s Setting up libio-pty-perl (1:1.20-1build3) ... 208s Setting up libpq5:amd64 (17.4-1) ... 208s Setting up ssl-cert (1.1.3ubuntu1) ... 208s Created symlink '/etc/systemd/system/multi-user.target.wants/ssl-cert.service' → '/usr/lib/systemd/system/ssl-cert.service'. 209s Setting up libllvm20:amd64 (1:20.1.0~+rc2-1~exp2ubuntu0.4) ... 209s Setting up libipc-run-perl (20231003.0-2) ... 209s Setting up libjson-perl (4.10000-1) ... 209s Setting up libxslt1.1:amd64 (1.1.39-0exp1ubuntu2) ... 209s Setting up slony1-2-doc (2.2.11-6) ... 209s Setting up postgresql-common-dev (274) ... 209s Setting up postgresql-client-17 (17.4-1) ... 209s update-alternatives: using /usr/share/postgresql/17/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode 209s Setting up postgresql-common (274) ... 
209s Creating config file /etc/postgresql-common/createcluster.conf with new version 210s Building PostgreSQL dictionaries from installed myspell/hunspell packages... 210s Removing obsolete dictionary files: 210s Created symlink '/etc/systemd/system/multi-user.target.wants/postgresql.service' → '/usr/lib/systemd/system/postgresql.service'. 211s Setting up slony1-2-bin (2.2.11-6) ... 211s Setting up postgresql-17 (17.4-1) ... 212s Creating new PostgreSQL cluster 17/main ... 212s /usr/lib/postgresql/17/bin/initdb -D /var/lib/postgresql/17/main --auth-local peer --auth-host scram-sha-256 --no-instructions 212s The files belonging to this database system will be owned by user "postgres". 212s This user must also own the server process. 212s 212s The database cluster will be initialized with locale "C.UTF-8". 212s The default database encoding has accordingly been set to "UTF8". 212s The default text search configuration will be set to "english". 212s 212s Data page checksums are disabled. 212s 212s fixing permissions on existing directory /var/lib/postgresql/17/main ... ok 212s creating subdirectories ... ok 212s selecting dynamic shared memory implementation ... posix 212s selecting default "max_connections" ... 100 212s selecting default "shared_buffers" ... 128MB 212s selecting default time zone ... Etc/UTC 212s creating configuration files ... ok 212s running bootstrap script ... ok 212s performing post-bootstrap initialization ... ok 212s syncing data to disk ... ok 215s Setting up postgresql-17-slony1-2 (2.2.11-6) ... 215s Processing triggers for man-db (2.13.0-1) ... 217s Processing triggers for libc-bin (2.41-1ubuntu2) ... 218s autopkgtest [23:16:27]: test load-functions: [----------------------- 218s ### PostgreSQL 17 psql ### 219s Creating new PostgreSQL cluster 17/regress ... 
221s create table public.sl_node ( 221s no_id int4, 221s no_active bool, 221s no_comment text, 221s no_failed bool, 221s CONSTRAINT "sl_node-pkey" 221s PRIMARY KEY (no_id) 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s comment on table public.sl_node is 'Holds the list of nodes associated with this namespace.'; 221s COMMENT 221s comment on column public.sl_node.no_id is 'The unique ID number for the node'; 221s COMMENT 221s comment on column public.sl_node.no_active is 'Is the node active in replication yet?'; 221s COMMENT 221s comment on column public.sl_node.no_comment is 'A human-oriented description of the node'; 221s COMMENT 221s create table public.sl_nodelock ( 221s nl_nodeid int4, 221s nl_conncnt serial, 221s nl_backendpid int4, 221s CONSTRAINT "sl_nodelock-pkey" 221s PRIMARY KEY (nl_nodeid, nl_conncnt) 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s comment on table public.sl_nodelock is 'Used to prevent multiple slon instances and to identify the backends to kill in terminateNodeConnections().'; 221s COMMENT 221s comment on column public.sl_nodelock.nl_nodeid is 'Clients node_id'; 221s COMMENT 221s comment on column public.sl_nodelock.nl_conncnt is 'Clients connection number'; 221s COMMENT 221s comment on column public.sl_nodelock.nl_backendpid is 'PID of database backend owning this lock'; 221s COMMENT 221s create table public.sl_set ( 221s set_id int4, 221s set_origin int4, 221s set_locked bigint, 221s set_comment text, 221s CONSTRAINT "sl_set-pkey" 221s PRIMARY KEY (set_id), 221s CONSTRAINT "set_origin-no_id-ref" 221s FOREIGN KEY (set_origin) 221s REFERENCES public.sl_node (no_id) 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s comment on table public.sl_set is 'Holds definitions of replication sets.'; 221s COMMENT 221s comment on column public.sl_set.set_id is 'A unique ID number for the set.'; 221s COMMENT 221s comment on column public.sl_set.set_origin is 221s 'The ID number of the source node for the replication set.'; 221s COMMENT 221s comment on column 
public.sl_set.set_locked is 'Transaction ID where the set was locked.'; 221s COMMENT 221s comment on column public.sl_set.set_comment is 'A human-oriented description of the set.'; 221s COMMENT 221s create table public.sl_setsync ( 221s ssy_setid int4, 221s ssy_origin int4, 221s ssy_seqno int8, 221s ssy_snapshot "pg_catalog".txid_snapshot, 221s ssy_action_list text, 221s CONSTRAINT "sl_setsync-pkey" 221s PRIMARY KEY (ssy_setid), 221s CONSTRAINT "ssy_setid-set_id-ref" 221s FOREIGN KEY (ssy_setid) 221s REFERENCES public.sl_set (set_id), 221s CONSTRAINT "ssy_origin-no_id-ref" 221s FOREIGN KEY (ssy_origin) 221s REFERENCES public.sl_node (no_id) 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s comment on table public.sl_setsync is 'SYNC information'; 221s COMMENT 221s comment on column public.sl_setsync.ssy_setid is 'ID number of the replication set'; 221s COMMENT 221s comment on column public.sl_setsync.ssy_origin is 'ID number of the node'; 221s COMMENT 221s comment on column public.sl_setsync.ssy_seqno is 'Slony-I sequence number'; 221s COMMENT 221s comment on column public.sl_setsync.ssy_snapshot is 'TXID in provider system seen by the event'; 221s COMMENT 221s comment on column public.sl_setsync.ssy_action_list is 'action list used during the subscription process. At the time a subscriber copies over data from the origin, it sees all tables in a state somewhere between two SYNC events. Therefore this list must contains all log_actionseqs that are visible at that time, whose operations have therefore already been included in the data copied at the time the initial data copy is done. 
Those actions may therefore be filtered out of the first SYNC done after subscribing.'; 221s COMMENT 221s create table public.sl_table ( 221s tab_id int4, 221s tab_reloid oid UNIQUE NOT NULL, 221s tab_relname name NOT NULL, 221s tab_nspname name NOT NULL, 221s tab_set int4, 221s tab_idxname name NOT NULL, 221s tab_altered boolean NOT NULL, 221s tab_comment text, 221s CONSTRAINT "sl_table-pkey" 221s PRIMARY KEY (tab_id), 221s CONSTRAINT "tab_set-set_id-ref" 221s FOREIGN KEY (tab_set) 221s REFERENCES public.sl_set (set_id) 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s comment on table public.sl_table is 'Holds information about the tables being replicated.'; 221s COMMENT 221s comment on column public.sl_table.tab_id is 'Unique key for Slony-I to use to identify the table'; 221s COMMENT 221s comment on column public.sl_table.tab_reloid is 'The OID of the table in pg_catalog.pg_class.oid'; 221s COMMENT 221s comment on column public.sl_table.tab_relname is 'The name of the table in pg_catalog.pg_class.relname used to recover from a dump/restore cycle'; 221s COMMENT 221s comment on column public.sl_table.tab_nspname is 'The name of the schema in pg_catalog.pg_namespace.nspname used to recover from a dump/restore cycle'; 221s COMMENT 221s comment on column public.sl_table.tab_set is 'ID of the replication set the table is in'; 221s COMMENT 221s comment on column public.sl_table.tab_idxname is 'The name of the primary index of the table'; 221s COMMENT 221s comment on column public.sl_table.tab_altered is 'Has the table been modified for replication?'; 221s COMMENT 221s comment on column public.sl_table.tab_comment is 'Human-oriented description of the table'; 221s COMMENT 221s create table public.sl_sequence ( 221s seq_id int4, 221s seq_reloid oid UNIQUE NOT NULL, 221s seq_relname name NOT NULL, 221s seq_nspname name NOT NULL, 221s seq_set int4, 221s seq_comment text, 221s CONSTRAINT "sl_sequence-pkey" 221s PRIMARY KEY (seq_id), 221s CONSTRAINT "seq_set-set_id-ref" 221s 
FOREIGN KEY (seq_set) 221s REFERENCES public.sl_set (set_id) 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s comment on table public.sl_sequence is 'Similar to sl_table, each entry identifies a sequence being replicated.'; 221s COMMENT 221s comment on column public.sl_sequence.seq_id is 'An internally-used ID for Slony-I to use in its sequencing of updates'; 221s COMMENT 221s comment on column public.sl_sequence.seq_reloid is 'The OID of the sequence object'; 221s COMMENT 221s comment on column public.sl_sequence.seq_relname is 'The name of the sequence in pg_catalog.pg_class.relname used to recover from a dump/restore cycle'; 221s COMMENT 221s comment on column public.sl_sequence.seq_nspname is 'The name of the schema in pg_catalog.pg_namespace.nspname used to recover from a dump/restore cycle'; 221s COMMENT 221s comment on column public.sl_sequence.seq_set is 'Indicates which replication set the object is in'; 221s COMMENT 221s comment on column public.sl_sequence.seq_comment is 'A human-oriented comment'; 221s COMMENT 221s create table public.sl_path ( 221s pa_server int4, 221s pa_client int4, 221s pa_conninfo text NOT NULL, 221s pa_connretry int4, 221s CONSTRAINT "sl_path-pkey" 221s PRIMARY KEY (pa_server, pa_client), 221s CONSTRAINT "pa_server-no_id-ref" 221s FOREIGN KEY (pa_server) 221s REFERENCES public.sl_node (no_id), 221s CONSTRAINT "pa_client-no_id-ref" 221s FOREIGN KEY (pa_client) 221s REFERENCES public.sl_node (no_id) 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s comment on table public.sl_path is 'Holds connection information for the paths between nodes, and the synchronisation delay'; 221s COMMENT 221s comment on column public.sl_path.pa_server is 'The Node ID # (from sl_node.no_id) of the data source'; 221s COMMENT 221s comment on column public.sl_path.pa_client is 'The Node ID # (from sl_node.no_id) of the data target'; 221s COMMENT 221s comment on column public.sl_path.pa_conninfo is 'The PostgreSQL connection string used to connect to the source 
node.'; 221s COMMENT 221s comment on column public.sl_path.pa_connretry is 'The synchronisation delay, in seconds'; 221s COMMENT 221s create table public.sl_listen ( 221s li_origin int4, 221s li_provider int4, 221s li_receiver int4, 221s CONSTRAINT "sl_listen-pkey" 221s PRIMARY KEY (li_origin, li_provider, li_receiver), 221s CONSTRAINT "li_origin-no_id-ref" 221s FOREIGN KEY (li_origin) 221s REFERENCES public.sl_node (no_id), 221s CONSTRAINT "sl_listen-sl_path-ref" 221s FOREIGN KEY (li_provider, li_receiver) 221s REFERENCES public.sl_path (pa_server, pa_client) 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s comment on table public.sl_listen is 'Indicates how nodes listen to events from other nodes in the Slony-I network.'; 221s COMMENT 221s comment on column public.sl_listen.li_origin is 'The ID # (from sl_node.no_id) of the node this listener is operating on'; 221s COMMENT 221s comment on column public.sl_listen.li_provider is 'The ID # (from sl_node.no_id) of the source node for this listening event'; 221s COMMENT 221s comment on column public.sl_listen.li_receiver is 'The ID # (from sl_node.no_id) of the target node for this listening event'; 221s COMMENT 221s create table public.sl_subscribe ( 221s sub_set int4, 221s sub_provider int4, 221s sub_receiver int4, 221s sub_forward bool, 221s sub_active bool, 221s CONSTRAINT "sl_subscribe-pkey" 221s PRIMARY KEY (sub_receiver, sub_set), 221s CONSTRAINT "sl_subscribe-sl_path-ref" 221s FOREIGN KEY (sub_provider, sub_receiver) 221s REFERENCES public.sl_path (pa_server, pa_client), 221s CONSTRAINT "sub_set-set_id-ref" 221s FOREIGN KEY (sub_set) 221s REFERENCES public.sl_set (set_id) 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s comment on table public.sl_subscribe is 'Holds a list of subscriptions on sets'; 221s COMMENT 221s comment on column public.sl_subscribe.sub_set is 'ID # (from sl_set) of the set being subscribed to'; 221s COMMENT 221s comment on column public.sl_subscribe.sub_provider is 'ID# (from sl_node) of the node 
providing data'; 221s COMMENT 221s comment on column public.sl_subscribe.sub_receiver is 'ID# (from sl_node) of the node receiving data from the provider'; 221s COMMENT 221s comment on column public.sl_subscribe.sub_forward is 'Does this provider keep data in sl_log_1/sl_log_2 to allow it to be a provider for other nodes?'; 221s COMMENT 221s comment on column public.sl_subscribe.sub_active is 'Has this subscription been activated? This is not set on the subscriber until AFTER the subscriber has received COPY data from the provider'; 221s COMMENT 221s create table public.sl_event ( 221s ev_origin int4, 221s ev_seqno int8, 221s ev_timestamp timestamptz, 221s ev_snapshot "pg_catalog".txid_snapshot, 221s ev_type text, 221s ev_data1 text, 221s ev_data2 text, 221s ev_data3 text, 221s ev_data4 text, 221s ev_data5 text, 221s ev_data6 text, 221s ev_data7 text, 221s ev_data8 text, 221s CONSTRAINT "sl_event-pkey" 221s PRIMARY KEY (ev_origin, ev_seqno) 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s comment on table public.sl_event is 'Holds information about replication events. After a period of time, Slony removes old confirmed events from both this table and the sl_confirm table.'; 221s COMMENT 221s comment on column public.sl_event.ev_origin is 'The ID # (from sl_node.no_id) of the source node for this event'; 221s COMMENT 221s comment on column public.sl_event.ev_seqno is 'The ID # for the event'; 221s COMMENT 221s comment on column public.sl_event.ev_timestamp is 'When this event record was created'; 221s COMMENT 221s comment on column public.sl_event.ev_snapshot is 'TXID snapshot on provider node for this event'; 221s COMMENT 221s comment on column public.sl_event.ev_seqno is 'The ID # for the event'; 221s COMMENT 221s comment on column public.sl_event.ev_type is 'The type of event this record is for. 
221s SYNC = Synchronise 221s STORE_NODE = 221s ENABLE_NODE = 221s DROP_NODE = 221s STORE_PATH = 221s DROP_PATH = 221s STORE_LISTEN = 221s DROP_LISTEN = 221s STORE_SET = 221s DROP_SET = 221s MERGE_SET = 221s SET_ADD_TABLE = 221s SET_ADD_SEQUENCE = 221s STORE_TRIGGER = 221s DROP_TRIGGER = 221s MOVE_SET = 221s ACCEPT_SET = 221s SET_DROP_TABLE = 221s SET_DROP_SEQUENCE = 221s SET_MOVE_TABLE = 221s SET_MOVE_SEQUENCE = 221s FAILOVER_SET = 221s SUBSCRIBE_SET = 221s ENABLE_SUBSCRIPTION = 221s UNSUBSCRIBE_SET = 221s DDL_SCRIPT = 221s ADJUST_SEQ = 221s RESET_CONFIG = 221s '; 221s COMMENT 221s comment on column public.sl_event.ev_data1 is 'Data field containing an argument needed to process the event'; 221s COMMENT 221s comment on column public.sl_event.ev_data2 is 'Data field containing an argument needed to process the event'; 221s COMMENT 221s comment on column public.sl_event.ev_data3 is 'Data field containing an argument needed to process the event'; 221s COMMENT 221s comment on column public.sl_event.ev_data4 is 'Data field containing an argument needed to process the event'; 221s COMMENT 221s comment on column public.sl_event.ev_data5 is 'Data field containing an argument needed to process the event'; 221s COMMENT 221s comment on column public.sl_event.ev_data6 is 'Data field containing an argument needed to process the event'; 221s COMMENT 221s comment on column public.sl_event.ev_data7 is 'Data field containing an argument needed to process the event'; 221s COMMENT 221s comment on column public.sl_event.ev_data8 is 'Data field containing an argument needed to process the event'; 221s COMMENT 221s create table public.sl_confirm ( 221s con_origin int4, 221s con_received int4, 221s con_seqno int8, 221s con_timestamp timestamptz DEFAULT timeofday()::timestamptz 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s comment on table public.sl_confirm is 'Holds confirmation of replication events. 
After a period of time, Slony removes old confirmed events from both this table and the sl_event table.'; 221s COMMENT 221s comment on column public.sl_confirm.con_origin is 'The ID # (from sl_node.no_id) of the source node for this event'; 221s COMMENT 221s comment on column public.sl_confirm.con_seqno is 'The ID # for the event'; 221s COMMENT 221s comment on column public.sl_confirm.con_timestamp is 'When this event was confirmed'; 221s COMMENT 221s create index sl_confirm_idx1 on public.sl_confirm 221s (con_origin, con_received, con_seqno); 221s CREATE INDEX 221s create index sl_confirm_idx2 on public.sl_confirm 221s (con_received, con_seqno); 221s CREATE INDEX 221s create table public.sl_seqlog ( 221s seql_seqid int4, 221s seql_origin int4, 221s seql_ev_seqno int8, 221s seql_last_value int8 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s comment on table public.sl_seqlog is 'Log of Sequence updates'; 221s COMMENT 221s comment on column public.sl_seqlog.seql_seqid is 'Sequence ID'; 221s COMMENT 221s comment on column public.sl_seqlog.seql_origin is 'Publisher node at which the sequence originates'; 221s COMMENT 221s comment on column public.sl_seqlog.seql_ev_seqno is 'Slony-I Event with which this sequence update is associated'; 221s COMMENT 221s comment on column public.sl_seqlog.seql_last_value is 'Last value published for this sequence'; 221s COMMENT 221s create index sl_seqlog_idx on public.sl_seqlog 221s (seql_origin, seql_ev_seqno, seql_seqid); 221s CREATE INDEX 221s create function public.sequenceLastValue(p_seqname text) returns int8 221s as $$ 221s declare 221s v_seq_row record; 221s begin 221s for v_seq_row in execute 'select last_value from ' || public.slon_quote_input(p_seqname) 221s loop 221s return v_seq_row.last_value; 221s end loop; 221s 221s -- not reached 221s end; 221s $$ language plpgsql; 221s CREATE FUNCTION 221s comment on function public.sequenceLastValue(p_seqname text) is 221s 'sequenceLastValue(p_seqname) 221s 221s Utility function used in 
sl_seqlastvalue view to compactly get the 221s last value from the requested sequence.'; 221s COMMENT 221s create table public.sl_log_1 ( 221s log_origin int4, 221s log_txid bigint, 221s log_tableid int4, 221s log_actionseq int8, 221s log_tablenspname text, 221s log_tablerelname text, 221s log_cmdtype "char", 221s log_cmdupdncols int4, 221s log_cmdargs text[] 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s create index sl_log_1_idx1 on public.sl_log_1 221s (log_origin, log_txid, log_actionseq); 221s CREATE INDEX 221s comment on table public.sl_log_1 is 'Stores each change to be propagated to subscriber nodes'; 221s COMMENT 221s comment on column public.sl_log_1.log_origin is 'Origin node from which the change came'; 221s COMMENT 221s comment on column public.sl_log_1.log_txid is 'Transaction ID on the origin node'; 221s COMMENT 221s comment on column public.sl_log_1.log_tableid is 'The table ID (from sl_table.tab_id) that this log entry is to affect'; 221s COMMENT 221s comment on column public.sl_log_1.log_actionseq is 'The sequence number in which actions will be applied on replicas'; 221s COMMENT 221s comment on column public.sl_log_1.log_tablenspname is 'The schema name of the table affected'; 221s COMMENT 221s comment on column public.sl_log_1.log_tablerelname is 'The table name of the table affected'; 221s COMMENT 221s comment on column public.sl_log_1.log_cmdtype is 'Replication action to take. 
U = Update, I = Insert, D = DELETE, T = TRUNCATE'; 221s COMMENT 221s comment on column public.sl_log_1.log_cmdupdncols is 'For cmdtype=U the number of updated columns in cmdargs'; 221s COMMENT 221s comment on column public.sl_log_1.log_cmdargs is 'The data needed to perform the log action on the replica'; 221s COMMENT 221s create table public.sl_log_2 ( 221s log_origin int4, 221s log_txid bigint, 221s log_tableid int4, 221s log_actionseq int8, 221s log_tablenspname text, 221s log_tablerelname text, 221s log_cmdtype "char", 221s log_cmdupdncols int4, 221s log_cmdargs text[] 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s create index sl_log_2_idx1 on public.sl_log_2 221s (log_origin, log_txid, log_actionseq); 221s CREATE INDEX 221s comment on table public.sl_log_2 is 'Stores each change to be propagated to subscriber nodes'; 221s COMMENT 221s comment on column public.sl_log_2.log_origin is 'Origin node from which the change came'; 221s COMMENT 221s comment on column public.sl_log_2.log_txid is 'Transaction ID on the origin node'; 221s COMMENT 221s comment on column public.sl_log_2.log_tableid is 'The table ID (from sl_table.tab_id) that this log entry is to affect'; 221s COMMENT 221s comment on column public.sl_log_2.log_actionseq is 'The sequence number in which actions will be applied on replicas'; 221s COMMENT 221s comment on column public.sl_log_2.log_tablenspname is 'The schema name of the table affected'; 221s COMMENT 221s comment on column public.sl_log_2.log_tablerelname is 'The table name of the table affected'; 221s COMMENT 221s comment on column public.sl_log_2.log_cmdtype is 'Replication action to take. 
U = Update, I = Insert, D = DELETE, T = TRUNCATE'; 221s COMMENT 221s comment on column public.sl_log_2.log_cmdupdncols is 'For cmdtype=U the number of updated columns in cmdargs'; 221s COMMENT 221s comment on column public.sl_log_2.log_cmdargs is 'The data needed to perform the log action on the replica'; 221s COMMENT 221s create table public.sl_log_script ( 221s log_origin int4, 221s log_txid bigint, 221s log_actionseq int8, 221s log_cmdtype "char", 221s log_cmdargs text[] 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s create index sl_log_script_idx1 on public.sl_log_script 221s (log_origin, log_txid, log_actionseq); 221s CREATE INDEX 221s comment on table public.sl_log_script is 'Captures SQL script queries to be propagated to subscriber nodes'; 221s COMMENT 221s comment on column public.sl_log_script.log_origin is 'Origin node from which the change came'; 221s COMMENT 221s comment on column public.sl_log_script.log_txid is 'Transaction ID on the origin node'; 221s COMMENT 221s comment on column public.sl_log_script.log_actionseq is 'The sequence number in which actions will be applied on replicas'; 221s COMMENT 221s comment on column public.sl_log_script.log_cmdtype is 'Replication action to take. 
S = Script statement, s = Script complete'; 221s COMMENT 221s comment on column public.sl_log_script.log_cmdargs is 'The DDL statement, optionally followed by selected nodes to execute it on.'; 221s COMMENT 221s create table public.sl_registry ( 221s reg_key text primary key, 221s reg_int4 int4, 221s reg_text text, 221s reg_timestamp timestamptz 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s comment on table public.sl_registry is 'Stores miscellaneous runtime data'; 221s COMMENT 221s comment on column public.sl_registry.reg_key is 'Unique key of the runtime option'; 221s COMMENT 221s comment on column public.sl_registry.reg_int4 is 'Option value if type int4'; 221s COMMENT 221s comment on column public.sl_registry.reg_text is 'Option value if type text'; 221s COMMENT 221s comment on column public.sl_registry.reg_timestamp is 'Option value if type timestamp'; 221s COMMENT 221s create table public.sl_apply_stats ( 221s as_origin int4, 221s as_num_insert int8, 221s as_num_update int8, 221s as_num_delete int8, 221s as_num_truncate int8, 221s as_num_script int8, 221s as_num_total int8, 221s as_duration interval, 221s as_apply_first timestamptz, 221s as_apply_last timestamptz, 221s as_cache_prepare int8, 221s as_cache_hit int8, 221s as_cache_evict int8, 221s as_cache_prepare_max int8 221s ) WITHOUT OIDS; 221s CREATE TABLE 221s create index sl_apply_stats_idx1 on public.sl_apply_stats 221s (as_origin); 221s CREATE INDEX 221s comment on table public.sl_apply_stats is 'Local SYNC apply statistics (running totals)'; 221s COMMENT 221s comment on column public.sl_apply_stats.as_origin is 'Origin of the SYNCs'; 221s COMMENT 221s comment on column public.sl_apply_stats.as_num_insert is 'Number of INSERT operations performed'; 221s COMMENT 221s comment on column public.sl_apply_stats.as_num_update is 'Number of UPDATE operations performed'; 221s COMMENT 221s comment on column public.sl_apply_stats.as_num_delete is 'Number of DELETE operations performed'; 221s COMMENT 221s comment on 
column public.sl_apply_stats.as_num_truncate is 'Number of TRUNCATE operations performed'; 221s COMMENT 221s comment on column public.sl_apply_stats.as_num_script is 'Number of DDL operations performed'; 221s COMMENT 221s comment on column public.sl_apply_stats.as_num_total is 'Total number of operations'; 221s COMMENT 221s comment on column public.sl_apply_stats.as_duration is 'Processing time'; 221s COMMENT 221s comment on column public.sl_apply_stats.as_apply_first is 'Timestamp of first recorded SYNC'; 221s COMMENT 221s comment on column public.sl_apply_stats.as_apply_last is 'Timestamp of most recent recorded SYNC'; 221s COMMENT 221s comment on column public.sl_apply_stats.as_cache_evict is 'Number of apply query cache evict operations'; 221s COMMENT 221s comment on column public.sl_apply_stats.as_cache_prepare_max is 'Maximum number of apply queries prepared in one SYNC group'; 221s COMMENT 221s create view public.sl_seqlastvalue as 221s select SQ.seq_id, SQ.seq_set, SQ.seq_reloid, 221s S.set_origin as seq_origin, 221s public.sequenceLastValue( 221s "pg_catalog".quote_ident(PGN.nspname) || '.' 
|| 221s "pg_catalog".quote_ident(PGC.relname)) as seq_last_value 221s from public.sl_sequence SQ, public.sl_set S, 221s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 221s where S.set_id = SQ.seq_set 221s and PGC.oid = SQ.seq_reloid and PGN.oid = PGC.relnamespace; 221s CREATE VIEW 221s create view public.sl_failover_targets as 221s select set_id, 221s set_origin as set_origin, 221s sub1.sub_receiver as backup_id 221s FROM 221s public.sl_subscribe sub1 221s ,public.sl_set set1 221s where 221s sub1.sub_set=set_id 221s and sub1.sub_forward=true 221s --exclude candidates where the set_origin 221s --has a path a node but the failover 221s --candidate has no path to that node 221s and sub1.sub_receiver not in 221s (select p1.pa_client from 221s public.sl_path p1 221s left outer join public.sl_path p2 on 221s (p2.pa_client=p1.pa_client 221s and p2.pa_server=sub1.sub_receiver) 221s where p2.pa_client is null 221s and p1.pa_server=set_origin 221s and p1.pa_client<>sub1.sub_receiver 221s ) 221s and sub1.sub_provider=set_origin 221s --exclude any subscribers that are not 221s --direct subscribers of all sets on the 221s --origin 221s and sub1.sub_receiver not in 221s (select direct_recv.sub_receiver 221s from 221s 221s (--all direct receivers of the first set 221s select subs2.sub_receiver 221s from public.sl_subscribe subs2 221s where subs2.sub_provider=set1.set_origin 221s and subs2.sub_set=set1.set_id) as 221s direct_recv 221s inner join 221s (--all other sets from the origin 221s select set_id from public.sl_set set2 221s where set2.set_origin=set1.set_origin 221s and set2.set_id<>sub1.sub_set) 221s as othersets on(true) 221s left outer join public.sl_subscribe subs3 221s on(subs3.sub_set=othersets.set_id 221s and subs3.sub_forward=true 221s and subs3.sub_provider=set1.set_origin 221s and direct_recv.sub_receiver=subs3.sub_receiver) 221s where subs3.sub_receiver is null 221s ); 221s CREATE VIEW 221s create sequence public.sl_local_node_id 221s MINVALUE -1; 221s 
CREATE SEQUENCE 221s SELECT setval('public.sl_local_node_id', -1); 221s setval 221s -------- 221s -1 221s (1 row) 221s 221s comment on sequence public.sl_local_node_id is 'The local node ID is initialized to -1, meaning that this node is not initialized yet.'; 221s COMMENT 221s create sequence public.sl_event_seq; 221s CREATE SEQUENCE 221s comment on sequence public.sl_event_seq is 'The sequence for numbering events originating from this node.'; 221s COMMENT 221s select setval('public.sl_event_seq', 5000000000); 222s setval 222s ------------ 222s 5000000000 222s (1 row) 222s 222s create sequence public.sl_action_seq; 222s CREATE SEQUENCE 222s comment on sequence public.sl_action_seq is 'The sequence to number statements in the transaction logs, so that the replication engines can figure out the "agreeable" order of statements.'; 222s COMMENT 222s create sequence public.sl_log_status 222s MINVALUE 0 MAXVALUE 3; 222s CREATE SEQUENCE 222s SELECT setval('public.sl_log_status', 0); 222s setval 222s -------- 222s 0 222s (1 row) 222s 222s comment on sequence public.sl_log_status is ' 222s Bit 0x01 determines the currently active log table 222s Bit 0x02 tells if the engine needs to read both logs 222s after switching until the old log is clean and truncated. 222s 222s Possible values: 222s 0 sl_log_1 active, sl_log_2 clean 222s 1 sl_log_2 active, sl_log_1 clean 222s 2 sl_log_1 active, sl_log_2 unknown - cleanup 222s 3 sl_log_2 active, sl_log_1 unknown - cleanup 222s 222s This is not yet in use. 222s '; 222s COMMENT 222s create table public.sl_config_lock ( 222s dummy integer 222s ); 222s CREATE TABLE 222s comment on table public.sl_config_lock is 'This table exists solely to prevent overlapping execution of configuration change procedures and the resulting possible deadlocks. 222s '; 222s COMMENT 222s comment on column public.sl_config_lock.dummy is 'No data ever goes in this table so the contents never matter. 
Indeed, this column does not really need to exist.'; 222s COMMENT 222s create table public.sl_event_lock ( 222s dummy integer 222s ); 222s CREATE TABLE 222s comment on table public.sl_event_lock is 'This table exists solely to prevent multiple connections from concurrently creating new events and perhaps getting them out of order.'; 222s COMMENT 222s comment on column public.sl_event_lock.dummy is 'No data ever goes in this table so the contents never matter. Indeed, this column does not really need to exist.'; 222s COMMENT 222s create table public.sl_archive_counter ( 222s ac_num bigint, 222s ac_timestamp timestamptz 222s ) without oids; 222s CREATE TABLE 222s comment on table public.sl_archive_counter is 'Table used to generate the log shipping archive number. 222s '; 222s COMMENT 222s comment on column public.sl_archive_counter.ac_num is 'Counter of SYNC ID used in log shipping as the archive number'; 222s COMMENT 222s comment on column public.sl_archive_counter.ac_timestamp is 'Time at which the archive log was generated on the subscriber'; 222s COMMENT 222s insert into public.sl_archive_counter (ac_num, ac_timestamp) 222s values (0, 'epoch'::timestamptz); 222s INSERT 0 1 222s create table public.sl_components ( 222s co_actor text not null primary key, 222s co_pid integer not null, 222s co_node integer not null, 222s co_connection_pid integer not null, 222s co_activity text, 222s co_starttime timestamptz not null, 222s co_event bigint, 222s co_eventtype text 222s ) without oids; 222s CREATE TABLE 222s comment on table public.sl_components is 'Table used to monitor what various slon/slonik components are doing'; 222s COMMENT 222s comment on column public.sl_components.co_actor is 'which component am I?'; 222s COMMENT 222s comment on column public.sl_components.co_pid is 'my process/thread PID on node where slon runs'; 222s COMMENT 222s comment on column public.sl_components.co_node is 'which node am I servicing?'; 222s COMMENT 222s comment on column 
public.sl_components.co_connection_pid is 'PID of database connection being used on database server'; 222s COMMENT 222s comment on column public.sl_components.co_activity is 'activity that I am up to'; 222s COMMENT 222s comment on column public.sl_components.co_starttime is 'when did my activity begin? (timestamp reported as per slon process on server running slon)'; 222s COMMENT 222s comment on column public.sl_components.co_eventtype is 'what kind of event am I processing? (commonly n/a for event loop main threads)'; 222s COMMENT 222s comment on column public.sl_components.co_event is 'which event have I started processing?'; 222s COMMENT 222s CREATE OR replace function public.agg_text_sum(txt_before TEXT, txt_new TEXT) RETURNS TEXT AS 222s $BODY$ 222s DECLARE 222s c_delim text; 222s BEGIN 222s c_delim = ','; 222s IF (txt_before IS NULL or txt_before='') THEN 222s RETURN txt_new; 222s END IF; 222s RETURN txt_before || c_delim || txt_new; 222s END; 222s $BODY$ 222s LANGUAGE plpgsql; 222s CREATE FUNCTION 222s comment on function public.agg_text_sum(text,text) is 222s 'An accumulator function used by the slony string_agg function to 222s aggregate rows into a string'; 222s COMMENT 222s CREATE AGGREGATE public.string_agg(text) ( 222s SFUNC=public.agg_text_sum, 222s STYPE=text, 222s INITCOND='' 222s ); 222s CREATE AGGREGATE 222s grant usage on schema public to public; 222s GRANT 222s create or replace function public.createEvent (p_cluster_name name, p_event_type text) 222s returns bigint 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 222s language C 222s called on null input; 222s CREATE FUNCTION 222s comment on function public.createEvent (p_cluster_name name, p_event_type text) is 222s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 222s 222s Create an sl_event entry'; 222s COMMENT 222s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text) 222s returns bigint 222s as 
'$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 222s language C 222s called on null input; 222s CREATE FUNCTION 222s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text) is 222s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 222s 222s Create an sl_event entry'; 222s COMMENT 222s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text) 222s returns bigint 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 222s language C 222s called on null input; 222s CREATE FUNCTION 222s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text) is 222s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 222s 222s Create an sl_event entry'; 222s COMMENT 222s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text) 222s returns bigint 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 222s language C 222s called on null input; 222s CREATE FUNCTION 222s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text) is 222s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 222s 222s Create an sl_event entry'; 222s COMMENT 222s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text) 222s returns bigint 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 222s language C 222s called on null input; 222s CREATE FUNCTION 222s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text) is 222s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 222s 222s Create an sl_event entry'; 222s COMMENT 222s create or replace 
function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text) 222s returns bigint 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 222s language C 222s called on null input; 222s CREATE FUNCTION 222s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text) is 222s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 222s 222s Create an sl_event entry'; 222s COMMENT 222s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text, ev_data6 text) 222s returns bigint 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 222s language C 222s called on null input; 222s CREATE FUNCTION 222s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text, ev_data6 text) is 222s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 222s 222s Create an sl_event entry'; 222s COMMENT 222s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text, ev_data6 text, ev_data7 text) 222s returns bigint 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 222s language C 222s called on null input; 222s CREATE FUNCTION 222s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text, ev_data6 text, ev_data7 text) is 222s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 222s 222s Create an sl_event entry'; 222s COMMENT 222s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, 
ev_data3 text, ev_data4 text, ev_data5 text, ev_data6 text, ev_data7 text, ev_data8 text) 222s returns bigint 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 222s language C 222s called on null input; 222s CREATE FUNCTION 222s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text, ev_data6 text, ev_data7 text, ev_data8 text) is 222s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 222s 222s Create an sl_event entry'; 222s COMMENT 222s create or replace function public.denyAccess () 222s returns trigger 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__denyAccess' 222s language C 222s security definer; 222s CREATE FUNCTION 222s comment on function public.denyAccess () is 222s 'Trigger function to prevent modifications to a table on a subscriber'; 222s COMMENT 222s grant execute on function public.denyAccess () to public; 222s GRANT 222s create or replace function public.lockedSet () 222s returns trigger 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__lockedSet' 222s language C; 222s CREATE FUNCTION 222s comment on function public.lockedSet () is 222s 'Trigger function to prevent modifications to a table before and after a moveSet()'; 222s COMMENT 222s create or replace function public.getLocalNodeId (p_cluster name) returns int4 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__getLocalNodeId' 222s language C 222s security definer; 222s CREATE FUNCTION 222s grant execute on function public.getLocalNodeId (p_cluster name) to public; 222s GRANT 222s comment on function public.getLocalNodeId (p_cluster name) is 222s 'Returns the node ID of the node being serviced on the local database'; 222s COMMENT 222s create or replace function public.getModuleVersion () returns text 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__getModuleVersion' 222s language C 222s security definer; 222s CREATE FUNCTION 222s 
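The createEvent ladder above declares one SQL overload per arity, from zero through eight ev_data arguments, with every overload bound to the same C symbol (_Slony_I_2_2_11__createEvent) — presumably emulating a variadic call in PostgreSQL releases that predate VARIADIC support. A minimal Python sketch of that calling convention (the function name and return shape are illustrative, not part of Slony):

```python
# Sketch: a single variadic entry point standing in for the nine
# createEvent() overloads. The SQL layer enumerates arities 0..8;
# the underlying C function sees a uniform argument list either way.
def create_event(cluster_name: str, ev_type: str, *ev_data: str) -> tuple:
    if len(ev_data) > 8:
        # Mirrors the schema: no overload exists beyond ev_data8.
        raise TypeError("createEvent accepts at most 8 ev_data arguments")
    # A real call would insert a row into sl_event and return ev_seqno;
    # here we just echo the assembled argument list.
    return (cluster_name, ev_type, *ev_data)

print(create_event("main", "SYNC"))             # ('main', 'SYNC')
print(create_event("main", "STORE_NODE", "1"))  # ('main', 'STORE_NODE', '1')
```

The per-arity ladder keeps each SQL signature strict (so slonik calls resolve unambiguously) while sharing one implementation.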
grant execute on function public.getModuleVersion () to public; 222s GRANT 222s comment on function public.getModuleVersion () is 222s 'Returns the compiled-in version number of the Slony-I shared object'; 222s COMMENT 222s create or replace function public.resetSession() returns text 222s as '$libdir/slony1_funcs.2.2.11','_Slony_I_2_2_11__resetSession' 222s language C; 222s CREATE FUNCTION 222s create or replace function public.logApply () returns trigger 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__logApply' 222s language C 222s security definer; 222s CREATE FUNCTION 222s create or replace function public.logApplySetCacheSize (p_size int4) 222s returns int4 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__logApplySetCacheSize' 222s language C; 222s CREATE FUNCTION 222s create or replace function public.logApplySaveStats (p_cluster name, p_origin int4, p_duration interval) 222s returns int4 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__logApplySaveStats' 222s language C; 222s NOTICE: checked validity of cluster main namespace - OK! 
222s CREATE FUNCTION 222s create or replace function public.checkmoduleversion () returns text as $$ 222s declare 222s moduleversion text; 222s begin 222s select into moduleversion public.getModuleVersion(); 222s if moduleversion <> '2.2.11' then 222s raise exception 'Slonik version: 2.2.11 != Slony-I version in PG build %', 222s moduleversion; 222s end if; 222s return null; 222s end;$$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.checkmoduleversion () is 222s 'Inline test function that verifies that slonik request for STORE 222s NODE/INIT CLUSTER is being run against a conformant set of 222s schema/functions.'; 222s COMMENT 222s select public.checkmoduleversion(); 222s checkmoduleversion 222s -------------------- 222s 222s (1 row) 222s 222s create or replace function public.decode_tgargs(bytea) returns text[] as 222s '$libdir/slony1_funcs.2.2.11','_Slony_I_2_2_11__slon_decode_tgargs' language C security definer; 222s CREATE FUNCTION 222s comment on function public.decode_tgargs(bytea) is 222s 'Translates the contents of pg_trigger.tgargs to an array of text arguments'; 222s COMMENT 222s grant execute on function public.decode_tgargs(bytea) to public; 222s GRANT 222s create or replace function public.check_namespace_validity () returns boolean as $$ 222s declare 222s c_cluster text; 222s begin 222s c_cluster := 'main'; 222s if c_cluster !~ E'^[[:alpha:]_][[:alnum:]_\$]{0,62}$' then 222s raise exception 'Cluster name % is not a valid SQL symbol!', c_cluster; 222s else 222s raise notice 'checked validity of cluster % namespace - OK!', c_cluster; 222s end if; 222s return 't'; 222s end 222s $$ language plpgsql; 222s CREATE FUNCTION 222s select public.check_namespace_validity(); 222s check_namespace_validity 222s -------------------------- 222s t 222s (1 row) 222s 222s drop function public.check_namespace_validity(); 222s DROP FUNCTION 222s create or replace function public.logTrigger () returns trigger 222s as '$libdir/slony1_funcs.2.2.11', 
'_Slony_I_2_2_11__logTrigger' 222s language C 222s security definer; 222s CREATE FUNCTION 222s comment on function public.logTrigger () is 222s 'This is the trigger that is executed on the origin node that causes 222s updates to be recorded in sl_log_1/sl_log_2.'; 222s COMMENT 222s grant execute on function public.logTrigger () to public; 222s GRANT 222s create or replace function public.terminateNodeConnections (p_failed_node int4) returns int4 222s as $$ 222s declare 222s v_row record; 222s begin 222s for v_row in select nl_nodeid, nl_conncnt, 222s nl_backendpid from public.sl_nodelock 222s where nl_nodeid = p_failed_node for update 222s loop 222s perform public.killBackend(v_row.nl_backendpid, 'TERM'); 222s delete from public.sl_nodelock 222s where nl_nodeid = v_row.nl_nodeid 222s and nl_conncnt = v_row.nl_conncnt; 222s end loop; 222s 222s return 0; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.terminateNodeConnections (p_failed_node int4) is 222s 'terminates all backends that have registered to be from the given node'; 222s COMMENT 222s create or replace function public.killBackend (p_pid int4, p_signame text) returns int4 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__killBackend' 222s language C; 222s CREATE FUNCTION 222s comment on function public.killBackend(p_pid int4, p_signame text) is 222s 'Send a signal to a postgres process. 
Requires superuser rights'; 222s COMMENT 222s create or replace function public.seqtrack (p_seqid int4, p_seqval int8) returns int8 222s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__seqtrack' 222s strict language C; 222s CREATE FUNCTION 222s comment on function public.seqtrack(p_seqid int4, p_seqval int8) is 222s 'Returns NULL if seqval has not changed since the last call for seqid'; 222s COMMENT 222s create or replace function public.slon_quote_brute(p_tab_fqname text) returns text 222s as $$ 222s declare 222s v_fqname text default ''; 222s begin 222s v_fqname := '"' || replace(p_tab_fqname,'"','""') || '"'; 222s return v_fqname; 222s end; 222s $$ language plpgsql immutable; 222s CREATE FUNCTION 222s comment on function public.slon_quote_brute(p_tab_fqname text) is 222s 'Brutally quote the given text'; 222s COMMENT 222s create or replace function public.slon_quote_input(p_tab_fqname text) returns text as $$ 222s declare 222s v_nsp_name text; 222s v_tab_name text; 222s v_i integer; 222s v_l integer; 222s v_pq2 integer; 222s begin 222s v_l := length(p_tab_fqname); 222s 222s -- Let us search for the dot 222s if p_tab_fqname like '"%' then 222s -- if the first part of the ident starts with a double quote, search 222s -- for the closing double quote, skipping over double double quotes. 222s v_i := 2; 222s while v_i <= v_l loop 222s if substr(p_tab_fqname, v_i, 1) != '"' then 222s v_i := v_i + 1; 222s else 222s v_i := v_i + 1; 222s if substr(p_tab_fqname, v_i, 1) != '"' then 222s exit; 222s end if; 222s v_i := v_i + 1; 222s end if; 222s end loop; 222s else 222s -- first part of ident is not quoted, search for the dot directly 222s v_i := 1; 222s while v_i <= v_l loop 222s if substr(p_tab_fqname, v_i, 1) = '.' then 222s exit; 222s end if; 222s v_i := v_i + 1; 222s end loop; 222s end if; 222s 222s -- v_i now points at the dot or behind the string. 222s 222s if substr(p_tab_fqname, v_i, 1) = '.' 
then 222s -- There is a dot now, so split the ident into its namespace 222s -- and objname parts and make sure each is quoted 222s v_nsp_name := substr(p_tab_fqname, 1, v_i - 1); 222s v_tab_name := substr(p_tab_fqname, v_i + 1); 222s if v_nsp_name not like '"%' then 222s v_nsp_name := '"' || replace(v_nsp_name, '"', '""') || 222s '"'; 222s end if; 222s if v_tab_name not like '"%' then 222s v_tab_name := '"' || replace(v_tab_name, '"', '""') || 222s '"'; 222s end if; 222s 222s return v_nsp_name || '.' || v_tab_name; 222s else 222s -- No dot ... must be just an ident without schema 222s if p_tab_fqname like '"%' then 222s return p_tab_fqname; 222s else 222s return '"' || replace(p_tab_fqname, '"', '""') || '"'; 222s end if; 222s end if; 222s 222s end;$$ language plpgsql immutable; 222s CREATE FUNCTION 222s comment on function public.slon_quote_input(p_text text) is 222s 'quote all words that aren''t quoted yet'; 222s COMMENT 222s create or replace function public.slonyVersionMajor() 222s returns int4 222s as $$ 222s begin 222s return 2; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.slonyVersionMajor () is 222s 'Returns the major version number of the slony schema'; 222s COMMENT 222s create or replace function public.slonyVersionMinor() 222s returns int4 222s as $$ 222s begin 222s return 2; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.slonyVersionMinor () is 222s 'Returns the minor version number of the slony schema'; 222s COMMENT 222s create or replace function public.slonyVersionPatchlevel() 222s returns int4 222s as $$ 222s begin 222s return 11; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.slonyVersionPatchlevel () is 222s 'Returns the version patch level of the slony schema'; 222s COMMENT 222s create or replace function public.slonyVersion() 222s returns text 222s as $$ 222s begin 222s return public.slonyVersionMajor()::text || '.' 
|| 222s public.slonyVersionMinor()::text || '.' || 222s public.slonyVersionPatchlevel()::text ; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.slonyVersion() is 222s 'Returns the version number of the slony schema'; 222s COMMENT 222s create or replace function public.registry_set_int4(p_key text, p_value int4) 222s returns int4 as $$ 222s BEGIN 222s if p_value is null then 222s delete from public.sl_registry 222s where reg_key = p_key; 222s else 222s lock table public.sl_registry; 222s update public.sl_registry 222s set reg_int4 = p_value 222s where reg_key = p_key; 222s if not found then 222s insert into public.sl_registry (reg_key, reg_int4) 222s values (p_key, p_value); 222s end if; 222s end if; 222s return p_value; 222s END; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.registry_set_int4(p_key text, p_value int4) is 222s 'registry_set_int4(key, value) 222s 222s Set or delete a registry value'; 222s COMMENT 222s create or replace function public.registry_get_int4(p_key text, p_default int4) 222s returns int4 as $$ 222s DECLARE 222s v_value int4; 222s BEGIN 222s select reg_int4 into v_value from public.sl_registry 222s where reg_key = p_key; 222s if not found then 222s v_value = p_default; 222s if p_default notnull then 222s perform public.registry_set_int4(p_key, p_default); 222s end if; 222s else 222s if v_value is null then 222s raise exception 'Slony-I: registry key % is not an int4 value', 222s p_key; 222s end if; 222s end if; 222s return v_value; 222s END; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.registry_get_int4(p_key text, p_default int4) is 222s 'registry_get_int4(key, value) 222s 222s Get a registry value. 
If not present, set and return the default.'; 222s COMMENT 222s create or replace function public.registry_set_text(p_key text, p_value text) 222s returns text as $$ 222s BEGIN 222s if p_value is null then 222s delete from public.sl_registry 222s where reg_key = p_key; 222s else 222s lock table public.sl_registry; 222s update public.sl_registry 222s set reg_text = p_value 222s where reg_key = p_key; 222s if not found then 222s insert into public.sl_registry (reg_key, reg_text) 222s values (p_key, p_value); 222s end if; 222s end if; 222s return p_value; 222s END; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.registry_set_text(text, text) is 222s 'registry_set_text(key, value) 222s 222s Set or delete a registry value'; 222s COMMENT 222s create or replace function public.registry_get_text(p_key text, p_default text) 222s returns text as $$ 222s DECLARE 222s v_value text; 222s BEGIN 222s select reg_text into v_value from public.sl_registry 222s where reg_key = p_key; 222s if not found then 222s v_value = p_default; 222s if p_default notnull then 222s perform public.registry_set_text(p_key, p_default); 222s end if; 222s else 222s if v_value is null then 222s raise exception 'Slony-I: registry key % is not a text value', 222s p_key; 222s end if; 222s end if; 222s return v_value; 222s END; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.registry_get_text(p_key text, p_default text) is 222s 'registry_get_text(key, value) 222s 222s Get a registry value. 
If not present, set and return the default.'; 222s COMMENT 222s create or replace function public.registry_set_timestamp(p_key text, p_value timestamptz) 222s returns timestamp as $$ 222s BEGIN 222s if p_value is null then 222s delete from public.sl_registry 222s where reg_key = p_key; 222s else 222s lock table public.sl_registry; 222s update public.sl_registry 222s set reg_timestamp = p_value 222s where reg_key = p_key; 222s if not found then 222s insert into public.sl_registry (reg_key, reg_timestamp) 222s values (p_key, p_value); 222s end if; 222s end if; 222s return p_value; 222s END; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.registry_set_timestamp(p_key text, p_value timestamptz) is 222s 'registry_set_timestamp(key, value) 222s 222s Set or delete a registry value'; 222s COMMENT 222s create or replace function public.registry_get_timestamp(p_key text, p_default timestamptz) 222s returns timestamp as $$ 222s DECLARE 222s v_value timestamp; 222s BEGIN 222s select reg_timestamp into v_value from public.sl_registry 222s where reg_key = p_key; 222s if not found then 222s v_value = p_default; 222s if p_default notnull then 222s perform public.registry_set_timestamp(p_key, p_default); 222s end if; 222s else 222s if v_value is null then 222s raise exception 'Slony-I: registry key % is not an timestamp value', 222s p_key; 222s end if; 222s end if; 222s return v_value; 222s END; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.registry_get_timestamp(p_key text, p_default timestamptz) is 222s 'registry_get_timestamp(key, value) 222s 222s Get a registry value. 
If not present, set and return the default.'; 222s COMMENT 222s create or replace function public.cleanupNodelock () 222s returns int4 222s as $$ 222s declare 222s v_row record; 222s begin 222s for v_row in select nl_nodeid, nl_conncnt, nl_backendpid 222s from public.sl_nodelock 222s for update 222s loop 222s if public.killBackend(v_row.nl_backendpid, 'NULL') < 0 then 222s raise notice 'Slony-I: cleanup stale sl_nodelock entry for pid=%', 222s v_row.nl_backendpid; 222s delete from public.sl_nodelock where 222s nl_nodeid = v_row.nl_nodeid and 222s nl_conncnt = v_row.nl_conncnt; 222s end if; 222s end loop; 222s 222s return 0; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.cleanupNodelock() is 222s 'Clean up stale entries when restarting slon'; 222s COMMENT 222s create or replace function public.registerNodeConnection (p_nodeid int4) 222s returns int4 222s as $$ 222s begin 222s insert into public.sl_nodelock 222s (nl_nodeid, nl_backendpid) 222s values 222s (p_nodeid, pg_backend_pid()); 222s 222s return 0; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.registerNodeConnection (p_nodeid int4) is 222s 'Register (uniquely) the node connection so that only one slon can service the node'; 222s COMMENT 222s create or replace function public.initializeLocalNode (p_local_node_id int4, p_comment text) 222s returns int4 222s as $$ 222s declare 222s v_old_node_id int4; 222s v_first_log_no int4; 222s v_event_seq int8; 222s begin 222s -- ---- 222s -- Make sure this node is uninitialized or got reset 222s -- ---- 222s select last_value::int4 into v_old_node_id from public.sl_local_node_id; 222s if v_old_node_id != -1 then 222s raise exception 'Slony-I: This node is already initialized'; 222s end if; 222s 222s -- ---- 222s -- Set sl_local_node_id to the requested value and add our 222s -- own system to sl_node. 
222s -- ---- 222s perform setval('public.sl_local_node_id', p_local_node_id); 222s perform public.storeNode_int (p_local_node_id, p_comment); 222s 222s if (pg_catalog.current_setting('max_identifier_length')::integer - pg_catalog.length('public')) < 5 then 222s raise notice 'Slony-I: Cluster name length [%] versus system max_identifier_length [%] ', pg_catalog.length('public'), pg_catalog.current_setting('max_identifier_length'); 222s raise notice 'leaves narrow/no room for some Slony-I-generated objects (such as indexes).'; 222s raise notice 'You may run into problems later!'; 222s end if; 222s 222s -- 222s -- Put the apply trigger onto sl_log_1 and sl_log_2 222s -- 222s create trigger apply_trigger 222s before INSERT on public.sl_log_1 222s for each row execute procedure public.logApply('_main'); 222s alter table public.sl_log_1 222s enable replica trigger apply_trigger; 222s create trigger apply_trigger 222s before INSERT on public.sl_log_2 222s for each row execute procedure public.logApply('_main'); 222s alter table public.sl_log_2 222s enable replica trigger apply_trigger; 222s 222s return p_local_node_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.initializeLocalNode (p_local_node_id int4, p_comment text) is 222s 'no_id - Node ID # 222s no_comment - Human-oriented comment 222s 222s Initializes the new node, no_id'; 222s COMMENT 222s create or replace function public.storeNode (p_no_id int4, p_no_comment text) 222s returns bigint 222s as $$ 222s begin 222s perform public.storeNode_int (p_no_id, p_no_comment); 222s return public.createEvent('_main', 'STORE_NODE', 222s p_no_id::text, p_no_comment::text); 222s end; 222s $$ language plpgsql 222s called on null input; 222s CREATE FUNCTION 222s comment on function public.storeNode(p_no_id int4, p_no_comment text) is 222s 'no_id - Node ID # 222s no_comment - Human-oriented comment 222s 222s Generate the STORE_NODE event for node no_id'; 222s COMMENT 222s create or 
replace function public.storeNode_int (p_no_id int4, p_no_comment text) 222s returns int4 222s as $$ 222s declare 222s v_old_row record; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Check if the node exists 222s -- ---- 222s select * into v_old_row 222s from public.sl_node 222s where no_id = p_no_id 222s for update; 222s if found then 222s -- ---- 222s -- Node exists, update the existing row. 222s -- ---- 222s update public.sl_node 222s set no_comment = p_no_comment 222s where no_id = p_no_id; 222s else 222s -- ---- 222s -- New node, insert the sl_node row 222s -- ---- 222s insert into public.sl_node 222s (no_id, no_active, no_comment,no_failed) values 222s (p_no_id, 'f', p_no_comment,false); 222s end if; 222s 222s return p_no_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.storeNode_int(p_no_id int4, p_no_comment text) is 222s 'no_id - Node ID # 222s no_comment - Human-oriented comment 222s 222s Internal function to process the STORE_NODE event for node no_id'; 222s COMMENT 222s create or replace function public.enableNode (p_no_id int4) 222s returns bigint 222s as $$ 222s declare 222s v_local_node_id int4; 222s v_node_row record; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Check that we are the node to activate and that we are 222s -- currently disabled. 
222s -- ---- 222s v_local_node_id := public.getLocalNodeId('_main'); 222s select * into v_node_row 222s from public.sl_node 222s where no_id = p_no_id 222s for update; 222s if not found then 222s raise exception 'Slony-I: node % not found', p_no_id; 222s end if; 222s if v_node_row.no_active then 222s raise exception 'Slony-I: node % is already active', p_no_id; 222s end if; 222s 222s -- ---- 222s -- Activate this node and generate the ENABLE_NODE event 222s -- ---- 222s perform public.enableNode_int (p_no_id); 222s return public.createEvent('_main', 'ENABLE_NODE', 222s p_no_id::text); 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.enableNode(p_no_id int4) is 222s 'no_id - Node ID # 222s 222s Generate the ENABLE_NODE event for node no_id'; 222s COMMENT 222s create or replace function public.enableNode_int (p_no_id int4) 222s returns int4 222s as $$ 222s declare 222s v_local_node_id int4; 222s v_node_row record; 222s v_sub_row record; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Check that the node is inactive 222s -- ---- 222s select * into v_node_row 222s from public.sl_node 222s where no_id = p_no_id 222s for update; 222s if not found then 222s raise exception 'Slony-I: node % not found', p_no_id; 222s end if; 222s if v_node_row.no_active then 222s return p_no_id; 222s end if; 222s 222s -- ---- 222s -- Activate the node and generate sl_confirm status rows for it. 
222s -- ---- 222s update public.sl_node 222s set no_active = 't' 222s where no_id = p_no_id; 222s insert into public.sl_confirm 222s (con_origin, con_received, con_seqno) 222s select no_id, p_no_id, 0 from public.sl_node 222s where no_id != p_no_id 222s and no_active; 222s insert into public.sl_confirm 222s (con_origin, con_received, con_seqno) 222s select p_no_id, no_id, 0 from public.sl_node 222s where no_id != p_no_id 222s and no_active; 222s 222s -- ---- 222s -- Generate ENABLE_SUBSCRIPTION events for all sets that 222s -- origin here and are subscribed by the just enabled node. 222s -- ---- 222s v_local_node_id := public.getLocalNodeId('_main'); 222s for v_sub_row in select SUB.sub_set, SUB.sub_provider from 222s public.sl_set S, 222s public.sl_subscribe SUB 222s where S.set_origin = v_local_node_id 222s and S.set_id = SUB.sub_set 222s and SUB.sub_receiver = p_no_id 222s for update of S 222s loop 222s perform public.enableSubscription (v_sub_row.sub_set, 222s v_sub_row.sub_provider, p_no_id); 222s end loop; 222s 222s return p_no_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.enableNode_int(p_no_id int4) is 222s 'no_id - Node ID # 222s 222s Internal function to process the ENABLE_NODE event for node no_id'; 222s COMMENT 222s create or replace function public.disableNode (p_no_id int4) 222s returns bigint 222s as $$ 222s begin 222s -- **** TODO **** 222s raise exception 'Slony-I: disableNode() not implemented'; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.disableNode(p_no_id int4) is 222s 'generate DISABLE_NODE event for node no_id'; 222s COMMENT 222s create or replace function public.disableNode_int (p_no_id int4) 222s returns int4 222s as $$ 222s begin 222s -- **** TODO **** 222s raise exception 'Slony-I: disableNode_int() not implemented'; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.disableNode(p_no_id int4) is 222s 'process 
DISABLE_NODE event for node no_id 222s 222s NOTE: This is not yet implemented!'; 222s COMMENT 222s create or replace function public.dropNode (p_no_ids int4[]) 222s returns bigint 222s as $$ 222s declare 222s v_node_row record; 222s v_idx integer; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Check that this got called on a different node 222s -- ---- 222s if public.getLocalNodeId('_main') = ANY (p_no_ids) then 222s raise exception 'Slony-I: DROP_NODE cannot initiate on the dropped node'; 222s end if; 222s 222s -- 222s -- if any of the deleted nodes are receivers we drop the sl_subscribe line 222s -- 222s delete from public.sl_subscribe where sub_receiver = ANY (p_no_ids); 222s 222s v_idx:=1; 222s LOOP 222s EXIT WHEN v_idx>array_upper(p_no_ids,1) ; 222s select * into v_node_row from public.sl_node 222s where no_id = p_no_ids[v_idx] 222s for update; 222s if not found then 222s raise exception 'Slony-I: unknown node ID % %', p_no_ids[v_idx],v_idx; 222s end if; 222s -- ---- 222s -- Make sure we do not break other nodes subscriptions with this 222s -- ---- 222s if exists (select true from public.sl_subscribe 222s where sub_provider = p_no_ids[v_idx]) 222s then 222s raise exception 'Slony-I: Node % is still configured as a data provider', 222s p_no_ids[v_idx]; 222s end if; 222s 222s -- ---- 222s -- Make sure no set originates there any more 222s -- ---- 222s if exists (select true from public.sl_set 222s where set_origin = p_no_ids[v_idx]) 222s then 222s raise exception 'Slony-I: Node % is still origin of one or more sets', 222s p_no_ids[v_idx]; 222s end if; 222s 222s -- ---- 222s -- Call the internal drop functionality and generate the event 222s -- ---- 222s perform public.dropNode_int(p_no_ids[v_idx]); 222s v_idx:=v_idx+1; 222s END LOOP; 222s return public.createEvent('_main', 'DROP_NODE', 222s array_to_string(p_no_ids,',')); 222s end; 222s $$ language plpgsql; 222s 
CREATE FUNCTION 222s comment on function public.dropNode(p_no_ids int4[]) is 222s 'generate DROP_NODE event to drop node node_id from replication'; 222s COMMENT 222s create or replace function public.dropNode_int (p_no_id int4) 222s returns int4 222s as $$ 222s declare 222s v_tab_row record; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- If the dropped node is a remote node, clean the configuration 222s -- from all traces for it. 222s -- ---- 222s if p_no_id <> public.getLocalNodeId('_main') then 222s delete from public.sl_subscribe 222s where sub_receiver = p_no_id; 222s delete from public.sl_listen 222s where li_origin = p_no_id 222s or li_provider = p_no_id 222s or li_receiver = p_no_id; 222s delete from public.sl_path 222s where pa_server = p_no_id 222s or pa_client = p_no_id; 222s delete from public.sl_confirm 222s where con_origin = p_no_id 222s or con_received = p_no_id; 222s delete from public.sl_event 222s where ev_origin = p_no_id; 222s delete from public.sl_node 222s where no_id = p_no_id; 222s 222s return p_no_id; 222s end if; 222s 222s -- ---- 222s -- This is us ... deactivate the node for now, the daemon 222s -- will call uninstallNode() in a separate transaction. 
222s -- ---- 222s update public.sl_node 222s set no_active = false 222s where no_id = p_no_id; 222s 222s -- Rewrite sl_listen table 222s perform public.RebuildListenEntries(); 222s 222s return p_no_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.dropNode_int(p_no_id int4) is 222s 'internal function to process DROP_NODE event to drop node node_id from replication'; 222s COMMENT 222s create or replace function public.preFailover(p_failed_node int4,p_is_candidate boolean) 222s returns int4 222s as $$ 222s declare 222s v_row record; 222s v_row2 record; 222s v_n int4; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- All consistency checks first 222s 222s if p_is_candidate then 222s -- ---- 222s -- Check all sets originating on the failed node 222s -- ---- 222s for v_row in select set_id 222s from public.sl_set 222s where set_origin = p_failed_node 222s loop 222s -- ---- 222s -- Check that the backup node is subscribed to all sets 222s -- that originate on the failed node 222s -- ---- 222s select into v_row2 sub_forward, sub_active 222s from public.sl_subscribe 222s where sub_set = v_row.set_id 222s and sub_receiver = public.getLocalNodeId('_main'); 222s if not found then 222s raise exception 'Slony-I: cannot failover - node % is not subscribed to set %', 222s public.getLocalNodeId('_main'), v_row.set_id; 222s end if; 222s 222s -- ---- 222s -- Check that the subscription is active 222s -- ---- 222s if not v_row2.sub_active then 222s raise exception 'Slony-I: cannot failover - subscription for set % is not active', 222s v_row.set_id; 222s end if; 222s 222s -- ---- 222s -- If there are other subscribers, the backup node needs to 222s -- be a forwarder too. 
222s -- ---- 222s select into v_n count(*) 222s from public.sl_subscribe 222s where sub_set = v_row.set_id 222s and sub_receiver <> public.getLocalNodeId('_main'); 222s if v_n > 0 and not v_row2.sub_forward then 222s raise exception 'Slony-I: cannot failover - node % is not a forwarder of set %', 222s public.getLocalNodeId('_main'), v_row.set_id; 222s end if; 222s end loop; 222s end if; 222s 222s -- ---- 222s -- Terminate all connections of the failed node the hard way 222s -- ---- 222s perform public.terminateNodeConnections(p_failed_node); 222s 222s update public.sl_path set pa_conninfo='' WHERE 222s pa_server=p_failed_node; 222s notify "_main_Restart"; 222s -- ---- 222s -- That is it - so far. 222s -- ---- 222s return p_failed_node; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.preFailover(p_failed_node int4,is_failover_candidate boolean) is 222s 'Prepare for a failover. This function is called on all candidate nodes. 222s It blanks the paths to the failed node 222s and then restart of all node daemons.'; 222s COMMENT 222s NOTICE: function public.clonenodeprepare(int4,int4,text) does not exist, skipping 222s create or replace function public.failedNode(p_failed_node int4, p_backup_node int4,p_failed_nodes integer[]) 222s returns int4 222s as $$ 222s declare 222s v_row record; 222s v_row2 record; 222s v_failed boolean; 222s v_restart_required boolean; 222s begin 222s 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s v_restart_required:=false; 222s -- 222s -- any nodes other than the backup receiving 222s -- ANY subscription from a failed node 222s -- will now get that data from the backup node. 
222s update public.sl_subscribe set 222s sub_provider=p_backup_node 222s where sub_provider=p_failed_node 222s and sub_receiver<>p_backup_node 222s and sub_receiver <> ALL (p_failed_nodes); 222s if found then 222s v_restart_required:=true; 222s end if; 222s -- 222s -- if this node is receiving a subscription from the backup node 222s -- with a failed node as the provider we need to fix this. 222s update public.sl_subscribe set 222s sub_provider=p_backup_node 222s from public.sl_set 222s where set_id = sub_set 222s and set_origin=p_failed_node 222s and sub_provider = ANY(p_failed_nodes) 222s and sub_receiver=public.getLocalNodeId('_main'); 222s 222s -- ---- 222s -- Terminate all connections of the failed node the hard way 222s -- ---- 222s perform public.terminateNodeConnections(p_failed_node); 222s 222s -- Clear out the paths for the failed node. 222s -- This ensures that *this* node won't be pulling data from 222s -- the failed node even if it *does* become accessible 222s 222s update public.sl_path set pa_conninfo='' WHERE 222s pa_server=p_failed_node 222s and pa_conninfo<>''; 222s 222s if found then 222s v_restart_required:=true; 222s end if; 222s 222s v_failed := exists (select 1 from public.sl_node 222s where no_failed=true and no_id=p_failed_node); 222s 222s if not v_failed then 222s 222s update public.sl_node set no_failed=true where no_id = ANY (p_failed_nodes) 222s and no_failed=false; 222s if found then 222s v_restart_required:=true; 222s end if; 222s end if; 222s 222s if v_restart_required then 222s -- Rewrite sl_listen table 222s perform public.RebuildListenEntries(); 222s 222s -- ---- 222s -- Make sure the node daemon will restart 222s -- ---- 222s notify "_main_Restart"; 222s end if; 222s 222s 222s -- ---- 222s -- That is it - so far. 
222s -- ---- 222s return p_failed_node; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.failedNode(p_failed_node int4, p_backup_node int4,p_failed_nodes integer[]) is 222s 'Initiate failover from failed_node to backup_node. This function must be called on all nodes, 222s and then waited for the restart of all node daemons.'; 222s COMMENT 222s create or replace function public.failedNode2 (p_failed_node int4, p_backup_node int4, p_ev_seqno int8, p_failed_nodes integer[]) 222s returns bigint 222s as $$ 222s declare 222s v_row record; 222s v_new_event bigint; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s select * into v_row 222s from public.sl_event 222s where ev_origin = p_failed_node 222s and ev_seqno = p_ev_seqno; 222s if not found then 222s raise exception 'Slony-I: event %,% not found', 222s p_failed_node, p_ev_seqno; 222s end if; 222s 222s update public.sl_node set no_failed=true where no_id = ANY 222s (p_failed_nodes) and no_failed=false; 222s -- Rewrite sl_listen table 222s perform public.RebuildListenEntries(); 222s -- ---- 222s -- Make sure the node daemon will restart 222s -- ---- 222s raise notice 'calling restart node %',p_failed_node; 222s 222s notify "_main_Restart"; 222s 222s select public.createEvent('_main','FAILOVER_NODE', 222s p_failed_node::text,p_ev_seqno::text, 222s array_to_string(p_failed_nodes,',')) 222s into v_new_event; 222s 222s 222s return v_new_event; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.failedNode2 (p_failed_node int4, p_backup_node int4, p_ev_seqno int8,p_failed_nodes integer[] ) is 222s 'FUNCTION failedNode2 (failed_node, backup_node, set_id, ev_seqno, ev_seqfake,p_failed_nodes) 222s 222s On the node that has the highest sequence number of the failed node, 222s fake the FAILOVER_SET event.'; 222s COMMENT 222s create or replace function public.failedNode3 
(p_failed_node int4, p_backup_node int4,p_seq_no bigint) 222s returns int4 222s as $$ 222s declare 222s 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s perform public.failoverSet_int(p_failed_node, 222s p_backup_node,p_seq_no); 222s 222s notify "_main_Restart"; 222s return 0; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s create or replace function public.failoverSet_int (p_failed_node int4, p_backup_node int4,p_last_seqno bigint) 222s returns int4 222s as $$ 222s declare 222s v_row record; 222s v_last_sync int8; 222s v_set int4; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s SELECT max(ev_seqno) into v_last_sync FROM public.sl_event where 222s ev_origin=p_failed_node; 222s if v_last_sync > p_last_seqno then 222s -- this node is ahead of the last sequence number from the 222s -- failed node that the backup node has. 222s -- this node must unsubscribe from all sets from the origin. 222s for v_set in select set_id from public.sl_set where 222s set_origin=p_failed_node 222s loop 222s raise warning 'Slony is dropping the subscription of set % found sync %s bigger than %s ' 222s , v_set, v_last_sync::text, p_last_seqno::text; 222s perform public.unsubscribeSet(v_set, 222s public.getLocalNodeId('_main'), 222s true); 222s end loop; 222s delete from public.sl_event where ev_origin=p_failed_node 222s and ev_seqno > p_last_seqno; 222s end if; 222s -- ---- 222s -- Change the origin of the set now to the backup node. 
222s -- On the backup node this includes changing all the 222s -- trigger and protection stuff 222s for v_set in select set_id from public.sl_set where 222s set_origin=p_failed_node 222s loop 222s -- ---- 222s if p_backup_node = public.getLocalNodeId('_main') then 222s delete from public.sl_setsync 222s where ssy_setid = v_set; 222s delete from public.sl_subscribe 222s where sub_set = v_set 222s and sub_receiver = p_backup_node; 222s update public.sl_set 222s set set_origin = p_backup_node 222s where set_id = v_set; 222s update public.sl_subscribe 222s set sub_provider=p_backup_node 222s FROM public.sl_node receive_node 222s where sub_set = v_set 222s and sub_provider=p_failed_node 222s and sub_receiver=receive_node.no_id 222s and receive_node.no_failed=false; 222s 222s for v_row in select * from public.sl_table 222s where tab_set = v_set 222s order by tab_id 222s loop 222s perform public.alterTableConfigureTriggers(v_row.tab_id); 222s end loop; 222s else 222s raise notice 'deleting from sl_subscribe all rows with receiver %', 222s p_backup_node; 222s 222s delete from public.sl_subscribe 222s where sub_set = v_set 222s and sub_receiver = p_backup_node; 222s 222s update public.sl_subscribe 222s set sub_provider=p_backup_node 222s FROM public.sl_node receive_node 222s where sub_set = v_set 222s and sub_provider=p_failed_node 222s and sub_provider=p_failed_node 222s and sub_receiver=receive_node.no_id 222s and receive_node.no_failed=false; 222s update public.sl_set 222s set set_origin = p_backup_node 222s where set_id = v_set; 222s -- ---- 222s -- If we are a subscriber of the set ourself, change our 222s -- setsync status to reflect the new set origin. 
222s -- ---- 222s if exists (select true from public.sl_subscribe 222s where sub_set = v_set 222s and sub_receiver = public.getLocalNodeId( 222s '_main')) 222s then 222s delete from public.sl_setsync 222s where ssy_setid = v_set; 222s 222s select coalesce(max(ev_seqno), 0) into v_last_sync 222s from public.sl_event 222s where ev_origin = p_backup_node 222s and ev_type = 'SYNC'; 222s if v_last_sync > 0 then 222s insert into public.sl_setsync 222s (ssy_setid, ssy_origin, ssy_seqno, 222s ssy_snapshot, ssy_action_list) 222s select v_set, p_backup_node, v_last_sync, 222s ev_snapshot, NULL 222s from public.sl_event 222s where ev_origin = p_backup_node 222s and ev_seqno = v_last_sync; 222s else 222s insert into public.sl_setsync 222s (ssy_setid, ssy_origin, ssy_seqno, 222s ssy_snapshot, ssy_action_list) 222s values (v_set, p_backup_node, '0', 222s '1:1:', NULL); 222s end if; 222s end if; 222s end if; 222s end loop; 222s 222s --If there are any subscriptions with 222s --the failed_node being the provider then 222s --we want to redirect those subscriptions 222s --to come from the backup node. 222s -- 222s -- The backup node should be a valid 222s -- provider for all subscriptions served 222s -- by the failed node. (otherwise it 222s -- wouldn't be a allowable backup node). 
222s -- delete from public.sl_subscribe 222s -- where sub_receiver=p_backup_node; 222s 222s update public.sl_subscribe 222s set sub_provider=p_backup_node 222s from public.sl_node 222s where sub_provider=p_failed_node 222s and sl_node.no_id=sub_receiver 222s and sl_node.no_failed=false 222s and sub_receiver<>p_backup_node; 222s 222s update public.sl_subscribe 222s set sub_provider=(select set_origin from 222s public.sl_set where set_id= 222s sub_set) 222s where sub_provider=p_failed_node 222s and sub_receiver=p_backup_node; 222s 222s update public.sl_node 222s set no_active=false WHERE 222s no_id=p_failed_node; 222s 222s -- Rewrite sl_listen table 222s perform public.RebuildListenEntries(); 222s 222s 222s return p_failed_node; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.failoverSet_int (p_failed_node int4, p_backup_node int4,p_seqno bigint) is 222s 'FUNCTION failoverSet_int (failed_node, backup_node, set_id, wait_seqno) 222s 222s Finish failover for one set.'; 222s COMMENT 222s create or replace function public.uninstallNode () 222s returns int4 222s as $$ 222s declare 222s v_tab_row record; 222s begin 222s raise notice 'Slony-I: Please drop schema "_main"'; 222s return 0; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.uninstallNode() is 222s 'Reset the whole database to standalone by removing the whole 222s replication system.'; 222s COMMENT 222s DROP FUNCTION IF EXISTS public.cloneNodePrepare(int4,int4,text); 222s DROP FUNCTION 222s create or replace function public.cloneNodePrepare (p_no_id int4, p_no_provider int4, p_no_comment text) 222s returns bigint 222s as $$ 222s begin 222s perform public.cloneNodePrepare_int (p_no_id, p_no_provider, p_no_comment); 222s return public.createEvent('_main', 'CLONE_NODE', 222s p_no_id::text, p_no_provider::text, 222s p_no_comment::text); 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function 
public.cloneNodePrepare(p_no_id int4, p_no_provider int4, p_no_comment text) is 222s 'Prepare for cloning a node.'; 222s COMMENT 222s create or replace function public.cloneNodePrepare_int (p_no_id int4, p_no_provider int4, p_no_comment text) 222s returns int4 222s as $$ 222s declare 222s v_dummy int4; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s update public.sl_node set 222s no_active = np.no_active, 222s no_comment = np.no_comment, 222s no_failed = np.no_failed 222s from public.sl_node np 222s where np.no_id = p_no_provider 222s and sl_node.no_id = p_no_id; 222s if not found then 222s insert into public.sl_node 222s (no_id, no_active, no_comment,no_failed) 222s select p_no_id, no_active, p_no_comment, no_failed 222s from public.sl_node 222s where no_id = p_no_provider; 222s end if; 222s 222s insert into public.sl_path 222s (pa_server, pa_client, pa_conninfo, pa_connretry) 222s select pa_server, p_no_id, '', pa_connretry 222s from public.sl_path 222s where pa_client = p_no_provider 222s and (pa_server, p_no_id) not in (select pa_server, pa_client 222s from public.sl_path); 222s 222s insert into public.sl_path 222s (pa_server, pa_client, pa_conninfo, pa_connretry) 222s select p_no_id, pa_client, '', pa_connretry 222s from public.sl_path 222s where pa_server = p_no_provider 222s and (p_no_id, pa_client) not in (select pa_server, pa_client 222s from public.sl_path); 222s 222s insert into public.sl_subscribe 222s (sub_set, sub_provider, sub_receiver, sub_forward, sub_active) 222s select sub_set, sub_provider, p_no_id, sub_forward, sub_active 222s from public.sl_subscribe 222s where sub_receiver = p_no_provider; 222s 222s insert into public.sl_confirm 222s (con_origin, con_received, con_seqno, con_timestamp) 222s select con_origin, p_no_id, con_seqno, con_timestamp 222s from public.sl_confirm 222s where con_received = p_no_provider; 222s 222s perform public.RebuildListenEntries(); 
222s
222s return 0;
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.cloneNodePrepare_int(p_no_id int4, p_no_provider int4, p_no_comment text) is
222s 'Internal part of cloneNodePrepare().';
222s COMMENT
222s create or replace function public.cloneNodeFinish (p_no_id int4, p_no_provider int4)
222s returns int4
222s as $$
222s declare
222s v_row record;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s perform "pg_catalog".setval('public.sl_local_node_id', p_no_id);
222s perform public.resetSession();
222s for v_row in select sub_set from public.sl_subscribe
222s where sub_receiver = p_no_id
222s loop
222s perform public.updateReloid(v_row.sub_set, p_no_id);
222s end loop;
222s
222s perform public.RebuildListenEntries();
222s
222s delete from public.sl_confirm
222s where con_received = p_no_id;
222s insert into public.sl_confirm
222s (con_origin, con_received, con_seqno, con_timestamp)
222s select con_origin, p_no_id, con_seqno, con_timestamp
222s from public.sl_confirm
222s where con_received = p_no_provider;
222s insert into public.sl_confirm
222s (con_origin, con_received, con_seqno, con_timestamp)
222s select p_no_provider, p_no_id,
222s (select max(ev_seqno) from public.sl_event
222s where ev_origin = p_no_provider), current_timestamp;
222s
222s return 0;
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.cloneNodeFinish(p_no_id int4, p_no_provider int4) is
222s 'Internal part of cloneNodePrepare().';
222s COMMENT
222s create or replace function public.storePath (p_pa_server int4, p_pa_client int4, p_pa_conninfo text, p_pa_connretry int4)
222s returns bigint
222s as $$
222s begin
222s perform public.storePath_int(p_pa_server, p_pa_client,
222s p_pa_conninfo, p_pa_connretry);
222s return public.createEvent('_main', 'STORE_PATH',
222s p_pa_server::text, p_pa_client::text,
222s p_pa_conninfo::text,
222s p_pa_connretry::text);
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.storePath (p_pa_server int4, p_pa_client int4, p_pa_conninfo text, p_pa_connretry int4) is
222s 'FUNCTION storePath (pa_server, pa_client, pa_conninfo, pa_connretry)
222s
222s Generate the STORE_PATH event indicating that node pa_client can
222s access node pa_server using DSN pa_conninfo';
222s COMMENT
222s create or replace function public.storePath_int (p_pa_server int4, p_pa_client int4, p_pa_conninfo text, p_pa_connretry int4)
222s returns int4
222s as $$
222s declare
222s v_dummy int4;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s -- ----
222s -- Check if the path already exists
222s -- ----
222s select 1 into v_dummy
222s from public.sl_path
222s where pa_server = p_pa_server
222s and pa_client = p_pa_client
222s for update;
222s if found then
222s -- ----
222s -- Path exists, update pa_conninfo
222s -- ----
222s update public.sl_path
222s set pa_conninfo = p_pa_conninfo,
222s pa_connretry = p_pa_connretry
222s where pa_server = p_pa_server
222s and pa_client = p_pa_client;
222s else
222s -- ----
222s -- New path
222s --
222s -- In case we receive STORE_PATH events before we know
222s -- about the nodes involved in this, we generate those nodes
222s -- as pending.
222s -- ----
222s if not exists (select 1 from public.sl_node
222s where no_id = p_pa_server) then
222s perform public.storeNode_int (p_pa_server, '');
222s end if;
222s if not exists (select 1 from public.sl_node
222s where no_id = p_pa_client) then
222s perform public.storeNode_int (p_pa_client, '');
222s end if;
222s insert into public.sl_path
222s (pa_server, pa_client, pa_conninfo, pa_connretry) values
222s (p_pa_server, p_pa_client, p_pa_conninfo, p_pa_connretry);
222s end if;
222s
222s -- Rewrite sl_listen table
222s perform public.RebuildListenEntries();
222s
222s return 0;
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.storePath_int (p_pa_server int4, p_pa_client int4, p_pa_conninfo text, p_pa_connretry int4) is
222s 'FUNCTION storePath (pa_server, pa_client, pa_conninfo, pa_connretry)
222s
222s Process the STORE_PATH event indicating that node pa_client can
222s access node pa_server using DSN pa_conninfo';
222s COMMENT
222s create or replace function public.dropPath (p_pa_server int4, p_pa_client int4)
222s returns bigint
222s as $$
222s declare
222s v_row record;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s -- ----
222s -- There should be no existing subscriptions. Auto unsubscribing
222s -- is considered too dangerous.
222s -- ----
222s for v_row in select sub_set, sub_provider, sub_receiver
222s from public.sl_subscribe
222s where sub_provider = p_pa_server
222s and sub_receiver = p_pa_client
222s loop
222s raise exception
222s 'Slony-I: Path cannot be dropped, subscription of set % needs it',
222s v_row.sub_set;
222s end loop;
222s
222s -- ----
222s -- Drop all sl_listen entries that depend on this path
222s -- ----
222s for v_row in select li_origin, li_provider, li_receiver
222s from public.sl_listen
222s where li_provider = p_pa_server
222s and li_receiver = p_pa_client
222s loop
222s perform public.dropListen(
222s v_row.li_origin, v_row.li_provider, v_row.li_receiver);
222s end loop;
222s
222s -- ----
222s -- Now drop the path and create the event
222s -- ----
222s perform public.dropPath_int(p_pa_server, p_pa_client);
222s
222s -- Rewrite sl_listen table
222s perform public.RebuildListenEntries();
222s
222s return public.createEvent ('_main', 'DROP_PATH',
222s p_pa_server::text, p_pa_client::text);
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.dropPath (p_pa_server int4, p_pa_client int4) is
222s 'Generate DROP_PATH event to drop path from pa_server to pa_client';
222s COMMENT
222s create or replace function public.dropPath_int (p_pa_server int4, p_pa_client int4)
222s returns int4
222s as $$
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s -- ----
222s -- Remove any dangling sl_listen entries with the server
222s -- as provider and the client as receiver. This must have
222s -- been cleared out before, but obviously was not.
222s -- ----
222s delete from public.sl_listen
222s where li_provider = p_pa_server
222s and li_receiver = p_pa_client;
222s
222s delete from public.sl_path
222s where pa_server = p_pa_server
222s and pa_client = p_pa_client;
222s
222s if found then
222s -- Rewrite sl_listen table
222s perform public.RebuildListenEntries();
222s
222s return 1;
222s else
222s -- Rewrite sl_listen table
222s perform public.RebuildListenEntries();
222s
222s return 0;
222s end if;
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.dropPath_int (p_pa_server int4, p_pa_client int4) is
222s 'Process DROP_PATH event to drop path from pa_server to pa_client';
222s COMMENT
222s create or replace function public.storeListen (p_origin int4, p_provider int4, p_receiver int4)
222s returns bigint
222s as $$
222s begin
222s perform public.storeListen_int (p_origin, p_provider, p_receiver);
222s return public.createEvent ('_main', 'STORE_LISTEN',
222s p_origin::text, p_provider::text, p_receiver::text);
222s end;
222s $$ language plpgsql
222s called on null input;
222s CREATE FUNCTION
222s comment on function public.storeListen(p_origin int4, p_provider int4, p_receiver int4) is
222s 'FUNCTION storeListen (li_origin, li_provider, li_receiver)
222s
222s generate STORE_LISTEN event, indicating that receiver node li_receiver
222s listens to node li_provider in order to get messages coming from node
222s li_origin.';
222s COMMENT
222s create or replace function public.storeListen_int (p_li_origin int4, p_li_provider int4, p_li_receiver int4)
222s returns int4
222s as $$
222s declare
222s v_exists int4;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s select 1 into v_exists
222s from public.sl_listen
222s where li_origin = p_li_origin
222s and li_provider = p_li_provider
222s and li_receiver = p_li_receiver;
222s if not found then
222s -- ----
222s -- In case we receive STORE_LISTEN events before we know
222s -- about the nodes involved in this, we generate those nodes
222s -- as pending.
222s -- ----
222s if not exists (select 1 from public.sl_node
222s where no_id = p_li_origin) then
222s perform public.storeNode_int (p_li_origin, '');
222s end if;
222s if not exists (select 1 from public.sl_node
222s where no_id = p_li_provider) then
222s perform public.storeNode_int (p_li_provider, '');
222s end if;
222s if not exists (select 1 from public.sl_node
222s where no_id = p_li_receiver) then
222s perform public.storeNode_int (p_li_receiver, '');
222s end if;
222s
222s insert into public.sl_listen
222s (li_origin, li_provider, li_receiver) values
222s (p_li_origin, p_li_provider, p_li_receiver);
222s end if;
222s
222s return 0;
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.storeListen_int(p_li_origin int4, p_li_provider int4, p_li_receiver int4) is
222s 'FUNCTION storeListen_int (li_origin, li_provider, li_receiver)
222s
222s Process STORE_LISTEN event, indicating that receiver node li_receiver
222s listens to node li_provider in order to get messages coming from node
222s li_origin.';
222s COMMENT
222s create or replace function public.dropListen (p_li_origin int4, p_li_provider int4, p_li_receiver int4)
222s returns bigint
222s as $$
222s begin
222s perform public.dropListen_int(p_li_origin,
222s p_li_provider, p_li_receiver);
222s
222s return public.createEvent ('_main', 'DROP_LISTEN',
222s p_li_origin::text, p_li_provider::text, p_li_receiver::text);
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.dropListen(p_li_origin int4, p_li_provider int4, p_li_receiver int4) is
222s 'dropListen (li_origin, li_provider, li_receiver)
222s
222s Generate the DROP_LISTEN event.';
222s COMMENT
222s create or replace function public.dropListen_int (p_li_origin int4, p_li_provider int4, p_li_receiver int4)
222s returns int4
222s as $$
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s delete from public.sl_listen
222s where li_origin = p_li_origin
222s and li_provider = p_li_provider
222s and li_receiver = p_li_receiver;
222s if found then
222s return 1;
222s else
222s return 0;
222s end if;
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.dropListen_int(p_li_origin int4, p_li_provider int4, p_li_receiver int4) is
222s 'dropListen (li_origin, li_provider, li_receiver)
222s
222s Process the DROP_LISTEN event, deleting the sl_listen entry for
222s the indicated (origin,provider,receiver) combination.';
222s COMMENT
222s create or replace function public.storeSet (p_set_id int4, p_set_comment text)
222s returns bigint
222s as $$
222s declare
222s v_local_node_id int4;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s v_local_node_id := public.getLocalNodeId('_main');
222s
222s insert into public.sl_set
222s (set_id, set_origin, set_comment) values
222s (p_set_id, v_local_node_id, p_set_comment);
222s
222s return public.createEvent('_main', 'STORE_SET',
222s p_set_id::text, v_local_node_id::text, p_set_comment::text);
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.storeSet(p_set_id int4, p_set_comment text) is
222s 'Generate STORE_SET event for set set_id with human readable comment set_comment';
222s COMMENT
222s create or replace function public.storeSet_int (p_set_id int4, p_set_origin int4, p_set_comment text)
222s returns int4
222s as $$
222s declare
222s v_dummy int4;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s select 1 into v_dummy
222s from public.sl_set
222s where set_id = p_set_id
222s for update;
222s if found then
222s update public.sl_set
222s set set_comment = p_set_comment
222s where set_id = p_set_id;
222s else
222s if not exists (select 1 from public.sl_node
222s where no_id = p_set_origin) then
222s perform public.storeNode_int (p_set_origin, '');
222s end if;
222s insert into public.sl_set
222s (set_id, set_origin, set_comment) values
222s (p_set_id, p_set_origin, p_set_comment);
222s end if;
222s
222s -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table
222s perform public.addPartialLogIndices();
222s
222s return p_set_id;
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.storeSet_int(p_set_id int4, p_set_origin int4, p_set_comment text) is
222s 'storeSet_int (set_id, set_origin, set_comment)
222s
222s Process the STORE_SET event, indicating the new set with given ID,
222s origin node, and human readable comment.';
222s COMMENT
222s create or replace function public.lockSet (p_set_id int4)
222s returns int4
222s as $$
222s declare
222s v_local_node_id int4;
222s v_set_row record;
222s v_tab_row record;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s -- ----
222s -- Check that the set exists and that we are the origin
222s -- and that it is not already locked.
222s -- ----
222s v_local_node_id := public.getLocalNodeId('_main');
222s select * into v_set_row from public.sl_set
222s where set_id = p_set_id
222s for update;
222s if not found then
222s raise exception 'Slony-I: set % not found', p_set_id;
222s end if;
222s if v_set_row.set_origin <> v_local_node_id then
222s raise exception 'Slony-I: set % does not originate on local node',
222s p_set_id;
222s end if;
222s if v_set_row.set_locked notnull then
222s raise exception 'Slony-I: set % is already locked', p_set_id;
222s end if;
222s
222s -- ----
222s -- Place the lockedSet trigger on all tables in the set.
222s -- ----
222s for v_tab_row in select T.tab_id,
222s public.slon_quote_brute(PGN.nspname) || '.' ||
222s public.slon_quote_brute(PGC.relname) as tab_fqname
222s from public.sl_table T,
222s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN
222s where T.tab_set = p_set_id
222s and T.tab_reloid = PGC.oid
222s and PGC.relnamespace = PGN.oid
222s order by tab_id
222s loop
222s execute 'create trigger "_main_lockedset" ' ||
222s 'before insert or update or delete on ' ||
222s v_tab_row.tab_fqname || ' for each row execute procedure
222s public.lockedSet (''_main'');';
222s end loop;
222s
222s -- ----
222s -- Remember our snapshots xmax as for the set locking
222s -- ----
222s update public.sl_set
222s set set_locked = "pg_catalog".txid_snapshot_xmax("pg_catalog".txid_current_snapshot())
222s where set_id = p_set_id;
222s
222s return p_set_id;
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.lockSet(p_set_id int4) is
222s 'lockSet(set_id)
222s
222s Add a special trigger to all tables of a set that disables access to
222s it.';
222s COMMENT
222s create or replace function public.unlockSet (p_set_id int4)
222s returns int4
222s as $$
222s declare
222s v_local_node_id int4;
222s v_set_row record;
222s v_tab_row record;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s -- ----
222s -- Check that the set exists and that we are the origin
222s -- and that it is not already locked.
222s -- ----
222s v_local_node_id := public.getLocalNodeId('_main');
222s select * into v_set_row from public.sl_set
222s where set_id = p_set_id
222s for update;
222s if not found then
222s raise exception 'Slony-I: set % not found', p_set_id;
222s end if;
222s if v_set_row.set_origin <> v_local_node_id then
222s raise exception 'Slony-I: set % does not originate on local node',
222s p_set_id;
222s end if;
222s if v_set_row.set_locked isnull then
222s raise exception 'Slony-I: set % is not locked', p_set_id;
222s end if;
222s
222s -- ----
222s -- Drop the lockedSet trigger from all tables in the set.
222s -- ----
222s for v_tab_row in select T.tab_id,
222s public.slon_quote_brute(PGN.nspname) || '.' ||
222s public.slon_quote_brute(PGC.relname) as tab_fqname
222s from public.sl_table T,
222s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN
222s where T.tab_set = p_set_id
222s and T.tab_reloid = PGC.oid
222s and PGC.relnamespace = PGN.oid
222s order by tab_id
222s loop
222s execute 'drop trigger "_main_lockedset" ' ||
222s 'on ' || v_tab_row.tab_fqname;
222s end loop;
222s
222s -- ----
222s -- Clear out the set_locked field
222s -- ----
222s update public.sl_set
222s set set_locked = NULL
222s where set_id = p_set_id;
222s
222s return p_set_id;
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.unlockSet(p_set_id int4) is
222s 'Remove the special trigger from all tables of a set that disables access to it.';
222s COMMENT
222s create or replace function public.moveSet (p_set_id int4, p_new_origin int4)
222s returns bigint
222s as $$
222s declare
222s v_local_node_id int4;
222s v_set_row record;
222s v_sub_row record;
222s v_sync_seqno int8;
222s v_lv_row record;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s -- ----
222s -- Check that the set is locked and that this locking
222s -- happened long enough ago.
222s -- ----
222s v_local_node_id := public.getLocalNodeId('_main');
222s select * into v_set_row from public.sl_set
222s where set_id = p_set_id
222s for update;
222s if not found then
222s raise exception 'Slony-I: set % not found', p_set_id;
222s end if;
222s if v_set_row.set_origin <> v_local_node_id then
222s raise exception 'Slony-I: set % does not originate on local node',
222s p_set_id;
222s end if;
222s if v_set_row.set_locked isnull then
222s raise exception 'Slony-I: set % is not locked', p_set_id;
222s end if;
222s if v_set_row.set_locked > "pg_catalog".txid_snapshot_xmin("pg_catalog".txid_current_snapshot()) then
222s raise exception 'Slony-I: cannot move set % yet, transactions < % are still in progress',
222s p_set_id, v_set_row.set_locked;
222s end if;
222s
222s -- ----
222s -- Unlock the set
222s -- ----
222s perform public.unlockSet(p_set_id);
222s
222s -- ----
222s -- Check that the new_origin is an active subscriber of the set
222s -- ----
222s select * into v_sub_row from public.sl_subscribe
222s where sub_set = p_set_id
222s and sub_receiver = p_new_origin;
222s if not found then
222s raise exception 'Slony-I: set % is not subscribed by node %',
222s p_set_id, p_new_origin;
222s end if;
222s if not v_sub_row.sub_active then
222s raise exception 'Slony-I: subscription of node % for set % is inactive',
222s p_new_origin, p_set_id;
222s end if;
222s
222s -- ----
222s -- Reconfigure everything
222s -- ----
222s perform public.moveSet_int(p_set_id, v_local_node_id,
222s p_new_origin, 0);
222s
222s perform public.RebuildListenEntries();
222s
222s -- ----
222s -- At this time we hold access exclusive locks for every table
222s -- in the set. But we did move the set to the new origin, so the
222s -- createEvent() we are doing now will not record the sequences.
222s -- ----
222s v_sync_seqno := public.createEvent('_main', 'SYNC');
222s insert into public.sl_seqlog
222s (seql_seqid, seql_origin, seql_ev_seqno, seql_last_value)
222s select seq_id, v_local_node_id, v_sync_seqno, seq_last_value
222s from public.sl_seqlastvalue
222s where seq_set = p_set_id;
222s
222s -- ----
222s -- Finally we generate the real event
222s -- ----
222s return public.createEvent('_main', 'MOVE_SET',
222s p_set_id::text, v_local_node_id::text, p_new_origin::text);
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.moveSet(p_set_id int4, p_new_origin int4) is
222s 'moveSet(set_id, new_origin)
222s
222s Generate MOVE_SET event to request that the origin for set set_id be moved to node new_origin';
222s COMMENT
222s create or replace function public.moveSet_int (p_set_id int4, p_old_origin int4, p_new_origin int4, p_wait_seqno int8)
222s returns int4
222s as $$
222s declare
222s v_local_node_id int4;
222s v_tab_row record;
222s v_sub_row record;
222s v_sub_node int4;
222s v_sub_last int4;
222s v_sub_next int4;
222s v_last_sync int8;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s -- ----
222s -- Get our local node ID
222s -- ----
222s v_local_node_id := public.getLocalNodeId('_main');
222s
222s -- On the new origin, raise an event - ACCEPT_SET
222s if v_local_node_id = p_new_origin then
222s -- Create a SYNC event as well so that the ACCEPT_SET has
222s -- the same snapshot as the last SYNC generated by the new
222s -- origin. This snapshot will be used by other nodes to
222s -- finalize the setsync status.
222s perform public.createEvent('_main', 'SYNC', NULL);
222s perform public.createEvent('_main', 'ACCEPT_SET',
222s p_set_id::text, p_old_origin::text,
222s p_new_origin::text, p_wait_seqno::text);
222s end if;
222s
222s -- ----
222s -- Next we have to reverse the subscription path
222s -- ----
222s v_sub_last = p_new_origin;
222s select sub_provider into v_sub_node
222s from public.sl_subscribe
222s where sub_set = p_set_id
222s and sub_receiver = p_new_origin;
222s if not found then
222s raise exception 'Slony-I: subscription path broken in moveSet_int';
222s end if;
222s while v_sub_node <> p_old_origin loop
222s -- ----
222s -- Tracing node by node, the old receiver is now in
222s -- v_sub_last and the old provider is in v_sub_node.
222s -- ----
222s
222s -- ----
222s -- Get the current provider of this node as next
222s -- and change the provider to the previous one in
222s -- the reverse chain.
222s -- ----
222s select sub_provider into v_sub_next
222s from public.sl_subscribe
222s where sub_set = p_set_id
222s and sub_receiver = v_sub_node
222s for update;
222s if not found then
222s raise exception 'Slony-I: subscription path broken in moveSet_int';
222s end if;
222s update public.sl_subscribe
222s set sub_provider = v_sub_last
222s where sub_set = p_set_id
222s and sub_receiver = v_sub_node
222s and sub_receiver <> v_sub_last;
222s
222s v_sub_last = v_sub_node;
222s v_sub_node = v_sub_next;
222s end loop;
222s
222s -- ----
222s -- This includes creating a subscription for the old origin
222s -- ----
222s insert into public.sl_subscribe
222s (sub_set, sub_provider, sub_receiver,
222s sub_forward, sub_active)
222s values (p_set_id, v_sub_last, p_old_origin, true, true);
222s if v_local_node_id = p_old_origin then
222s select coalesce(max(ev_seqno), 0) into v_last_sync
222s from public.sl_event
222s where ev_origin = p_new_origin
222s and ev_type = 'SYNC';
222s if v_last_sync > 0 then
222s insert into public.sl_setsync
222s (ssy_setid, ssy_origin, ssy_seqno,
222s ssy_snapshot, ssy_action_list)
222s select p_set_id, p_new_origin, v_last_sync,
222s ev_snapshot, NULL
222s from public.sl_event
222s where ev_origin = p_new_origin
222s and ev_seqno = v_last_sync;
222s else
222s insert into public.sl_setsync
222s (ssy_setid, ssy_origin, ssy_seqno,
222s ssy_snapshot, ssy_action_list)
222s values (p_set_id, p_new_origin, '0',
222s '1:1:', NULL);
222s end if;
222s end if;
222s
222s -- ----
222s -- Now change the ownership of the set.
222s -- ----
222s update public.sl_set
222s set set_origin = p_new_origin
222s where set_id = p_set_id;
222s
222s -- ----
222s -- On the new origin, delete the obsolete setsync information
222s -- and the subscription.
222s -- ----
222s if v_local_node_id = p_new_origin then
222s delete from public.sl_setsync
222s where ssy_setid = p_set_id;
222s else
222s if v_local_node_id <> p_old_origin then
222s --
222s -- On every other node, change the setsync so that it will
222s -- pick up from the new origins last known sync.
222s --
222s delete from public.sl_setsync
222s where ssy_setid = p_set_id;
222s select coalesce(max(ev_seqno), 0) into v_last_sync
222s from public.sl_event
222s where ev_origin = p_new_origin
222s and ev_type = 'SYNC';
222s if v_last_sync > 0 then
222s insert into public.sl_setsync
222s (ssy_setid, ssy_origin, ssy_seqno,
222s ssy_snapshot, ssy_action_list)
222s select p_set_id, p_new_origin, v_last_sync,
222s ev_snapshot, NULL
222s from public.sl_event
222s where ev_origin = p_new_origin
222s and ev_seqno = v_last_sync;
222s else
222s insert into public.sl_setsync
222s (ssy_setid, ssy_origin, ssy_seqno,
222s ssy_snapshot, ssy_action_list)
222s values (p_set_id, p_new_origin,
222s '0', '1:1:', NULL);
222s end if;
222s end if;
222s end if;
222s delete from public.sl_subscribe
222s where sub_set = p_set_id
222s and sub_receiver = p_new_origin;
222s
222s -- Regenerate sl_listen since we revised the subscriptions
222s perform public.RebuildListenEntries();
222s
222s -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table
222s perform public.addPartialLogIndices();
222s
222s -- ----
222s -- If we are the new or old origin, we have to
222s -- adjust the log and deny access trigger configuration.
222s -- ----
222s if v_local_node_id = p_old_origin or v_local_node_id = p_new_origin then
222s for v_tab_row in select tab_id from public.sl_table
222s where tab_set = p_set_id
222s order by tab_id
222s loop
222s perform public.alterTableConfigureTriggers(v_tab_row.tab_id);
222s end loop;
222s end if;
222s
222s return p_set_id;
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.moveSet_int(p_set_id int4, p_old_origin int4, p_new_origin int4, p_wait_seqno int8) is
222s 'moveSet(set_id, old_origin, new_origin, wait_seqno)
222s
222s Process MOVE_SET event to request that the origin for set set_id be
222s moved from old_origin to node new_origin';
222s COMMENT
222s create or replace function public.dropSet (p_set_id int4)
222s returns bigint
222s as $$
222s declare
222s v_origin int4;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s -- ----
222s -- Check that the set exists and originates here
222s -- ----
222s select set_origin into v_origin from public.sl_set
222s where set_id = p_set_id;
222s if not found then
222s raise exception 'Slony-I: set % not found', p_set_id;
222s end if;
222s if v_origin != public.getLocalNodeId('_main') then
222s raise exception 'Slony-I: set % does not originate on local node',
222s p_set_id;
222s end if;
222s
222s -- ----
222s -- Call the internal drop set functionality and generate the event
222s -- ----
222s perform public.dropSet_int(p_set_id);
222s return public.createEvent('_main', 'DROP_SET',
222s p_set_id::text);
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.dropSet(p_set_id int4) is
222s 'Generate DROP_SET event to drop replication of set set_id';
222s COMMENT
222s create or replace function public.dropSet_int (p_set_id int4)
222s returns int4
222s as $$
222s declare
222s v_tab_row record;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s -- ----
222s -- Restore all tables original triggers and rules and remove
222s -- our replication stuff.
222s -- ----
222s for v_tab_row in select tab_id from public.sl_table
222s where tab_set = p_set_id
222s order by tab_id
222s loop
222s perform public.alterTableDropTriggers(v_tab_row.tab_id);
222s end loop;
222s
222s -- ----
222s -- Remove all traces of the set configuration
222s -- ----
222s delete from public.sl_sequence
222s where seq_set = p_set_id;
222s delete from public.sl_table
222s where tab_set = p_set_id;
222s delete from public.sl_subscribe
222s where sub_set = p_set_id;
222s delete from public.sl_setsync
222s where ssy_setid = p_set_id;
222s delete from public.sl_set
222s where set_id = p_set_id;
222s
222s -- Regenerate sl_listen since we revised the subscriptions
222s perform public.RebuildListenEntries();
222s
222s -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table
222s perform public.addPartialLogIndices();
222s
222s return p_set_id;
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.dropSet_int(p_set_id int4) is
222s 'Process DROP_SET event to drop replication of set set_id. This involves:
222s - Removing log and deny access triggers
222s - Removing all traces of the set configuration, including sequences, tables, subscribers, syncs, and the set itself';
222s COMMENT
222s create or replace function public.mergeSet (p_set_id int4, p_add_id int4)
222s returns bigint
222s as $$
222s declare
222s v_origin int4;
222s in_progress boolean;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s -- ----
222s -- Check that both sets exist and originate here
222s -- ----
222s if p_set_id = p_add_id then
222s raise exception 'Slony-I: merged set ids cannot be identical';
222s end if;
222s select set_origin into v_origin from public.sl_set
222s where set_id = p_set_id;
222s if not found then
222s raise exception 'Slony-I: set % not found', p_set_id;
222s end if;
222s if v_origin != public.getLocalNodeId('_main') then
222s raise exception 'Slony-I: set % does not originate on local node',
222s p_set_id;
222s end if;
222s
222s select set_origin into v_origin from public.sl_set
222s where set_id = p_add_id;
222s if not found then
222s raise exception 'Slony-I: set % not found', p_add_id;
222s end if;
222s if v_origin != public.getLocalNodeId('_main') then
222s raise exception 'Slony-I: set % does not originate on local node',
222s p_add_id;
222s end if;
222s
222s -- ----
222s -- Check that both sets are subscribed by the same set of nodes
222s -- ----
222s if exists (select true from public.sl_subscribe SUB1
222s where SUB1.sub_set = p_set_id
222s and SUB1.sub_receiver not in (select SUB2.sub_receiver
222s from public.sl_subscribe SUB2
222s where SUB2.sub_set = p_add_id))
222s then
222s raise exception 'Slony-I: subscriber lists of set % and % are different',
222s p_set_id, p_add_id;
222s end if;
222s
222s if exists (select true from public.sl_subscribe SUB1
222s where SUB1.sub_set = p_add_id
222s and SUB1.sub_receiver not in (select SUB2.sub_receiver
222s from public.sl_subscribe SUB2
222s where SUB2.sub_set = p_set_id))
222s then
222s raise exception 'Slony-I: subscriber lists of set % and % are different',
222s p_add_id, p_set_id;
222s end if;
222s
222s -- ----
222s -- Check that all ENABLE_SUBSCRIPTION events for the set are confirmed
222s -- ----
222s select public.isSubscriptionInProgress(p_add_id) into in_progress ;
222s
222s if in_progress then
222s raise exception 'Slony-I: set % has subscriptions in progress - cannot merge',
222s p_add_id;
222s end if;
222s
222s -- ----
222s -- Create a SYNC event, merge the sets, create a MERGE_SET event
222s -- ----
222s perform public.createEvent('_main', 'SYNC', NULL);
222s perform public.mergeSet_int(p_set_id, p_add_id);
222s return public.createEvent('_main', 'MERGE_SET',
222s p_set_id::text, p_add_id::text);
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.mergeSet(p_set_id int4, p_add_id int4) is
222s 'Generate MERGE_SET event to request that sets be merged together.
222s
222s Both sets must exist, and originate on the same node. They must be
222s subscribed by the same set of nodes.';
222s COMMENT
222s create or replace function public.isSubscriptionInProgress(p_add_id int4)
222s returns boolean
222s as $$
222s begin
222s if exists (select true from public.sl_event
222s where ev_type = 'ENABLE_SUBSCRIPTION'
222s and ev_data1 = p_add_id::text
222s and ev_seqno > (select max(con_seqno) from public.sl_confirm
222s where con_origin = ev_origin
222s and con_received::text = ev_data3))
222s then
222s return true;
222s else
222s return false;
222s end if;
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.isSubscriptionInProgress(p_add_id int4) is
222s 'Checks to see if a subscription for the indicated set is in progress.
222s Returns true if a subscription is in progress. Otherwise false';
222s COMMENT
222s create or replace function public.mergeSet_int (p_set_id int4, p_add_id int4)
222s returns int4
222s as $$
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s update public.sl_sequence
222s set seq_set = p_set_id
222s where seq_set = p_add_id;
222s update public.sl_table
222s set tab_set = p_set_id
222s where tab_set = p_add_id;
222s delete from public.sl_subscribe
222s where sub_set = p_add_id;
222s delete from public.sl_setsync
222s where ssy_setid = p_add_id;
222s delete from public.sl_set
222s where set_id = p_add_id;
222s
222s return p_set_id;
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.mergeSet_int(p_set_id int4, p_add_id int4) is
222s 'mergeSet_int(set_id, add_id) - Perform MERGE_SET event, merging all objects from
222s set add_id into set set_id.';
222s COMMENT
222s create or replace function public.setAddTable(p_set_id int4, p_tab_id int4, p_fqname text, p_tab_idxname name, p_tab_comment text)
222s returns bigint
222s as $$
222s declare
222s v_set_origin int4;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s -- ----
222s -- Check that we are the origin of the set
222s -- ----
222s select set_origin into v_set_origin
222s from public.sl_set
222s where set_id = p_set_id;
222s if not found then
222s raise exception 'Slony-I: setAddTable(): set % not found', p_set_id;
222s end if;
222s if v_set_origin != public.getLocalNodeId('_main') then
222s raise exception 'Slony-I: setAddTable(): set % has remote origin', p_set_id;
222s end if;
222s
222s if exists (select true from public.sl_subscribe
222s where sub_set = p_set_id)
222s then
222s raise exception 'Slony-I: cannot add table to currently subscribed set % - must attach to an unsubscribed set',
222s p_set_id;
222s end if;
222s
222s -- ----
222s -- Add the table to the set and generate the SET_ADD_TABLE event
222s -- ----
222s perform public.setAddTable_int(p_set_id, p_tab_id, p_fqname,
222s p_tab_idxname, p_tab_comment);
222s return public.createEvent('_main', 'SET_ADD_TABLE',
222s p_set_id::text, p_tab_id::text, p_fqname::text,
222s p_tab_idxname::text, p_tab_comment::text);
222s end;
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.setAddTable(p_set_id int4, p_tab_id int4, p_fqname text, p_tab_idxname name, p_tab_comment text) is
222s 'setAddTable (set_id, tab_id, tab_fqname, tab_idxname, tab_comment)
222s
222s Add table tab_fqname to replication set on origin node, and generate
222s SET_ADD_TABLE event to allow this to propagate to other nodes.
222s
222s Note that the table id, tab_id, must be unique ACROSS ALL SETS.';
222s COMMENT
222s create or replace function public.setAddTable_int(p_set_id int4, p_tab_id int4, p_fqname text, p_tab_idxname name, p_tab_comment text)
222s returns int4
222s as $$
222s declare
222s v_tab_relname name;
222s v_tab_nspname name;
222s v_local_node_id int4;
222s v_set_origin int4;
222s v_sub_provider int4;
222s v_relkind char;
222s v_tab_reloid oid;
222s v_pkcand_nn boolean;
222s v_prec record;
222s begin
222s -- ----
222s -- Grab the central configuration lock
222s -- ----
222s lock table public.sl_config_lock;
222s
222s -- ----
222s -- For sets with a remote origin, check that we are subscribed
222s -- to that set. Otherwise we ignore the table because it might
222s -- not even exist in our database.
222s -- ---- 222s v_local_node_id := public.getLocalNodeId('_main'); 222s select set_origin into v_set_origin 222s from public.sl_set 222s where set_id = p_set_id; 222s if not found then 222s raise exception 'Slony-I: setAddTable_int(): set % not found', 222s p_set_id; 222s end if; 222s if v_set_origin != v_local_node_id then 222s select sub_provider into v_sub_provider 222s from public.sl_subscribe 222s where sub_set = p_set_id 222s and sub_receiver = public.getLocalNodeId('_main'); 222s if not found then 222s return 0; 222s end if; 222s end if; 222s 222s -- ---- 222s -- Get the tables OID and check that it is a real table 222s -- ---- 222s select PGC.oid, PGC.relkind, PGC.relname, PGN.nspname into v_tab_reloid, v_relkind, v_tab_relname, v_tab_nspname 222s from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 222s where PGC.relnamespace = PGN.oid 222s and public.slon_quote_input(p_fqname) = public.slon_quote_brute(PGN.nspname) || 222s '.' || public.slon_quote_brute(PGC.relname); 222s if not found then 222s raise exception 'Slony-I: setAddTable_int(): table % not found', 222s p_fqname; 222s end if; 222s if v_relkind != 'r' then 222s raise exception 'Slony-I: setAddTable_int(): % is not a regular table', 222s p_fqname; 222s end if; 222s 222s if not exists (select indexrelid 222s from "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGC 222s where PGX.indrelid = v_tab_reloid 222s and PGX.indexrelid = PGC.oid 222s and PGC.relname = p_tab_idxname) 222s then 222s raise exception 'Slony-I: setAddTable_int(): table % has no index %', 222s p_fqname, p_tab_idxname; 222s end if; 222s 222s -- ---- 222s -- Verify that the columns in the PK (or candidate) are not NULLABLE 222s -- ---- 222s 222s v_pkcand_nn := 'f'; 222s for v_prec in select attname from "pg_catalog".pg_attribute where attrelid = 222s (select oid from "pg_catalog".pg_class where oid = v_tab_reloid) 222s and attname in (select attname from "pg_catalog".pg_attribute where 222s attrelid = (select oid from 
"pg_catalog".pg_class PGC, 222s "pg_catalog".pg_index PGX where 222s PGC.relname = p_tab_idxname and PGX.indexrelid=PGC.oid and 222s PGX.indrelid = v_tab_reloid)) and attnotnull <> 't' 222s loop 222s raise notice 'Slony-I: setAddTable_int: table % PK column % nullable', p_fqname, v_prec.attname; 222s v_pkcand_nn := 't'; 222s end loop; 222s if v_pkcand_nn then 222s raise exception 'Slony-I: setAddTable_int: table % not replicable!', p_fqname; 222s end if; 222s 222s select * into v_prec from public.sl_table where tab_id = p_tab_id; 222s if not found then 222s v_pkcand_nn := 't'; -- No-op -- All is well 222s else 222s raise exception 'Slony-I: setAddTable_int: table id % has already been assigned!', p_tab_id; 222s end if; 222s 222s -- ---- 222s -- Add the table to sl_table and create the trigger on it. 222s -- ---- 222s insert into public.sl_table 222s (tab_id, tab_reloid, tab_relname, tab_nspname, 222s tab_set, tab_idxname, tab_altered, tab_comment) 222s values 222s (p_tab_id, v_tab_reloid, v_tab_relname, v_tab_nspname, 222s p_set_id, p_tab_idxname, false, p_tab_comment); 222s perform public.alterTableAddTriggers(p_tab_id); 222s 222s return p_tab_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.setAddTable_int(p_set_id int4, p_tab_id int4, p_fqname text, p_tab_idxname name, p_tab_comment text) is 222s 'setAddTable_int (set_id, tab_id, tab_fqname, tab_idxname, tab_comment) 222s 222s This function processes the SET_ADD_TABLE event on remote nodes, 222s adding a table to replication if the remote node is subscribing to its 222s replication set.'; 222s COMMENT 222s create or replace function public.setDropTable(p_tab_id int4) 222s returns bigint 222s as $$ 222s declare 222s v_set_id int4; 222s v_set_origin int4; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Determine the set_id 222s -- ---- 222s select tab_set into v_set_id from 
public.sl_table where tab_id = p_tab_id; 222s 222s -- ---- 222s -- Ensure table exists 222s -- ---- 222s if not found then 222s raise exception 'Slony-I: setDropTable(): table % not found', 222s p_tab_id; 222s end if; 222s 222s -- ---- 222s -- Check that we are the origin of the set 222s -- ---- 222s select set_origin into v_set_origin 222s from public.sl_set 222s where set_id = v_set_id; 222s if not found then 222s raise exception 'Slony-I: setDropTable(): set % not found', v_set_id; 222s end if; 222s if v_set_origin != public.getLocalNodeId('_main') then 222s raise exception 'Slony-I: setDropTable(): set % has remote origin', v_set_id; 222s end if; 222s 222s -- ---- 222s -- Drop the table from the set and generate the SET_DROP_TABLE event 222s -- ---- 222s perform public.setDropTable_int(p_tab_id); 222s return public.createEvent('_main', 'SET_DROP_TABLE', 222s p_tab_id::text); 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.setDropTable(p_tab_id int4) is 222s 'setDropTable (tab_id) 222s 222s Drop table tab_id from set on origin node, and generate SET_DROP_TABLE 222s event to allow this to propagate to other nodes.'; 222s COMMENT 222s create or replace function public.setDropTable_int(p_tab_id int4) 222s returns int4 222s as $$ 222s declare 222s v_set_id int4; 222s v_local_node_id int4; 222s v_set_origin int4; 222s v_sub_provider int4; 222s v_tab_reloid oid; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Determine the set_id 222s -- ---- 222s select tab_set into v_set_id from public.sl_table where tab_id = p_tab_id; 222s 222s -- ---- 222s -- Ensure table exists 222s -- ---- 222s if not found then 222s return 0; 222s end if; 222s 222s -- ---- 222s -- For sets with a remote origin, check that we are subscribed 222s -- to that set. Otherwise we ignore the table because it might 222s -- not even exist in our database. 
222s -- ---- 222s v_local_node_id := public.getLocalNodeId('_main'); 222s select set_origin into v_set_origin 222s from public.sl_set 222s where set_id = v_set_id; 222s if not found then 222s raise exception 'Slony-I: setDropTable_int(): set % not found', 222s v_set_id; 222s end if; 222s if v_set_origin != v_local_node_id then 222s select sub_provider into v_sub_provider 222s from public.sl_subscribe 222s where sub_set = v_set_id 222s and sub_receiver = public.getLocalNodeId('_main'); 222s if not found then 222s return 0; 222s end if; 222s end if; 222s 222s -- ---- 222s -- Drop the table from sl_table and drop trigger from it. 222s -- ---- 222s perform public.alterTableDropTriggers(p_tab_id); 222s delete from public.sl_table where tab_id = p_tab_id; 222s return p_tab_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.setDropTable_int(p_tab_id int4) is 222s 'setDropTable_int (tab_id) 222s 222s This function processes the SET_DROP_TABLE event on remote nodes, 222s dropping a table from replication if the remote node is subscribing to 222s its replication set.'; 222s COMMENT 222s create or replace function public.setAddSequence (p_set_id int4, p_seq_id int4, p_fqname text, p_seq_comment text) 222s returns bigint 222s as $$ 222s declare 222s v_set_origin int4; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Check that we are the origin of the set 222s -- ---- 222s select set_origin into v_set_origin 222s from public.sl_set 222s where set_id = p_set_id; 222s if not found then 222s raise exception 'Slony-I: setAddSequence(): set % not found', p_set_id; 222s end if; 222s if v_set_origin != public.getLocalNodeId('_main') then 222s raise exception 'Slony-I: setAddSequence(): set % has remote origin - submit to origin node', p_set_id; 222s end if; 222s 222s if exists (select true from public.sl_subscribe 222s where sub_set = p_set_id) 
222s then 222s raise exception 'Slony-I: cannot add sequence to currently subscribed set %', 222s p_set_id; 222s end if; 222s 222s -- ---- 222s -- Add the sequence to the set and generate the SET_ADD_SEQUENCE event 222s -- ---- 222s perform public.setAddSequence_int(p_set_id, p_seq_id, p_fqname, 222s p_seq_comment); 222s return public.createEvent('_main', 'SET_ADD_SEQUENCE', 222s p_set_id::text, p_seq_id::text, 222s p_fqname::text, p_seq_comment::text); 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.setAddSequence (p_set_id int4, p_seq_id int4, p_fqname text, p_seq_comment text) is 222s 'setAddSequence (set_id, seq_id, seq_fqname, seq_comment) 222s 222s On the origin node for set set_id, add sequence seq_fqname to the 222s replication set, and raise SET_ADD_SEQUENCE to cause this to replicate 222s to subscriber nodes.'; 222s COMMENT 222s create or replace function public.setAddSequence_int(p_set_id int4, p_seq_id int4, p_fqname text, p_seq_comment text) 222s returns int4 222s as $$ 222s declare 222s v_local_node_id int4; 222s v_set_origin int4; 222s v_sub_provider int4; 222s v_relkind char; 222s v_seq_reloid oid; 222s v_seq_relname name; 222s v_seq_nspname name; 222s v_sync_row record; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- For sets with a remote origin, check that we are subscribed 222s -- to that set. Otherwise we ignore the sequence because it might 222s -- not even exist in our database. 
222s -- ---- 222s v_local_node_id := public.getLocalNodeId('_main'); 222s select set_origin into v_set_origin 222s from public.sl_set 222s where set_id = p_set_id; 222s if not found then 222s raise exception 'Slony-I: setAddSequence_int(): set % not found', 222s p_set_id; 222s end if; 222s if v_set_origin != v_local_node_id then 222s select sub_provider into v_sub_provider 222s from public.sl_subscribe 222s where sub_set = p_set_id 222s and sub_receiver = public.getLocalNodeId('_main'); 222s if not found then 222s return 0; 222s end if; 222s end if; 222s 222s -- ---- 222s -- Get the sequences OID and check that it is a sequence 222s -- ---- 222s select PGC.oid, PGC.relkind, PGC.relname, PGN.nspname 222s into v_seq_reloid, v_relkind, v_seq_relname, v_seq_nspname 222s from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 222s where PGC.relnamespace = PGN.oid 222s and public.slon_quote_input(p_fqname) = public.slon_quote_brute(PGN.nspname) || 222s '.' || public.slon_quote_brute(PGC.relname); 222s if not found then 222s raise exception 'Slony-I: setAddSequence_int(): sequence % not found', 222s p_fqname; 222s end if; 222s if v_relkind != 'S' then 222s raise exception 'Slony-I: setAddSequence_int(): % is not a sequence', 222s p_fqname; 222s end if; 222s 222s select 1 into v_sync_row from public.sl_sequence where seq_id = p_seq_id; 222s if not found then 222s v_relkind := 'o'; -- all is OK 222s else 222s raise exception 'Slony-I: setAddSequence_int(): sequence ID % has already been assigned', p_seq_id; 222s end if; 222s 222s -- ---- 222s -- Add the sequence to sl_sequence 222s -- ---- 222s insert into public.sl_sequence 222s (seq_id, seq_reloid, seq_relname, seq_nspname, seq_set, seq_comment) 222s values 222s (p_seq_id, v_seq_reloid, v_seq_relname, v_seq_nspname, p_set_id, p_seq_comment); 222s 222s -- ---- 222s -- On the set origin, fake a sl_seqlog row for the last sync event 222s -- ---- 222s if v_set_origin = v_local_node_id then 222s for v_sync_row in select 
coalesce (max(ev_seqno), 0) as ev_seqno 222s from public.sl_event 222s where ev_origin = v_local_node_id 222s and ev_type = 'SYNC' 222s loop 222s insert into public.sl_seqlog 222s (seql_seqid, seql_origin, seql_ev_seqno, 222s seql_last_value) values 222s (p_seq_id, v_local_node_id, v_sync_row.ev_seqno, 222s public.sequenceLastValue(p_fqname)); 222s end loop; 222s end if; 222s 222s return p_seq_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.setAddSequence_int(p_set_id int4, p_seq_id int4, p_fqname text, p_seq_comment text) is 222s 'setAddSequence_int (set_id, seq_id, seq_fqname, seq_comment) 222s 222s This processes the SET_ADD_SEQUENCE event. On remote nodes that 222s subscribe to set_id, add the sequence to the replication set.'; 222s COMMENT 222s create or replace function public.setDropSequence (p_seq_id int4) 222s returns bigint 222s as $$ 222s declare 222s v_set_id int4; 222s v_set_origin int4; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Determine set id for this sequence 222s -- ---- 222s select seq_set into v_set_id from public.sl_sequence where seq_id = p_seq_id; 222s 222s -- ---- 222s -- Ensure sequence exists 222s -- ---- 222s if not found then 222s raise exception 'Slony-I: setDropSequence(): sequence % not found', 222s p_seq_id; 222s end if; 222s 222s -- ---- 222s -- Check that we are the origin of the set 222s -- ---- 222s select set_origin into v_set_origin 222s from public.sl_set 222s where set_id = v_set_id; 222s if not found then 222s raise exception 'Slony-I: setDropSequence(): set % not found', v_set_id; 222s end if; 222s if v_set_origin != public.getLocalNodeId('_main') then 222s raise exception 'Slony-I: setDropSequence(): set % has origin at another node - submit this to that node', v_set_id; 222s end if; 222s 222s -- ---- 222s -- Drop the sequence from the set and generate the SET_DROP_SEQUENCE 
event 222s -- ---- 222s perform public.setDropSequence_int(p_seq_id); 222s return public.createEvent('_main', 'SET_DROP_SEQUENCE', 222s p_seq_id::text); 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.setDropSequence (p_seq_id int4) is 222s 'setDropSequence (seq_id) 222s 222s On the origin node for the set, drop sequence seq_id from replication 222s set, and raise SET_DROP_SEQUENCE to cause this to replicate to 222s subscriber nodes.'; 222s COMMENT 222s create or replace function public.setDropSequence_int(p_seq_id int4) 222s returns int4 222s as $$ 222s declare 222s v_set_id int4; 222s v_local_node_id int4; 222s v_set_origin int4; 222s v_sub_provider int4; 222s v_relkind char; 222s v_sync_row record; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Determine set id for this sequence 222s -- ---- 222s select seq_set into v_set_id from public.sl_sequence where seq_id = p_seq_id; 222s 222s -- ---- 222s -- Ensure sequence exists 222s -- ---- 222s if not found then 222s return 0; 222s end if; 222s 222s -- ---- 222s -- For sets with a remote origin, check that we are subscribed 222s -- to that set. Otherwise we ignore the sequence because it might 222s -- not even exist in our database. 
222s -- ---- 222s v_local_node_id := public.getLocalNodeId('_main'); 222s select set_origin into v_set_origin 222s from public.sl_set 222s where set_id = v_set_id; 222s if not found then 222s raise exception 'Slony-I: setDropSequence_int(): set % not found', 222s v_set_id; 222s end if; 222s if v_set_origin != v_local_node_id then 222s select sub_provider into v_sub_provider 222s from public.sl_subscribe 222s where sub_set = v_set_id 222s and sub_receiver = public.getLocalNodeId('_main'); 222s if not found then 222s return 0; 222s end if; 222s end if; 222s 222s -- ---- 222s -- drop the sequence from sl_sequence, sl_seqlog 222s -- ---- 222s delete from public.sl_seqlog where seql_seqid = p_seq_id; 222s delete from public.sl_sequence where seq_id = p_seq_id; 222s 222s return p_seq_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.setDropSequence_int(p_seq_id int4) is 222s 'setDropSequence_int (seq_id) 222s 222s This processes the SET_DROP_SEQUENCE event. 
On remote nodes that 222s subscribe to the set containing sequence seq_id, drop the sequence 222s from the replication set.'; 222s COMMENT 222s create or replace function public.setMoveTable (p_tab_id int4, p_new_set_id int4) 222s returns bigint 222s as $$ 222s declare 222s v_old_set_id int4; 222s v_origin int4; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Get the tables current set 222s -- ---- 222s select tab_set into v_old_set_id from public.sl_table 222s where tab_id = p_tab_id; 222s if not found then 222s raise exception 'Slony-I: table %d not found', p_tab_id; 222s end if; 222s 222s -- ---- 222s -- Check that both sets exist and originate here 222s -- ---- 222s if p_new_set_id = v_old_set_id then 222s raise exception 'Slony-I: set ids cannot be identical'; 222s end if; 222s select set_origin into v_origin from public.sl_set 222s where set_id = p_new_set_id; 222s if not found then 222s raise exception 'Slony-I: set % not found', p_new_set_id; 222s end if; 222s if v_origin != public.getLocalNodeId('_main') then 222s raise exception 'Slony-I: set % does not originate on local node', 222s p_new_set_id; 222s end if; 222s 222s select set_origin into v_origin from public.sl_set 222s where set_id = v_old_set_id; 222s if not found then 222s raise exception 'Slony-I: set % not found', v_old_set_id; 222s end if; 222s if v_origin != public.getLocalNodeId('_main') then 222s raise exception 'Slony-I: set % does not originate on local node', 222s v_old_set_id; 222s end if; 222s 222s -- ---- 222s -- Check that both sets are subscribed by the same set of nodes 222s -- ---- 222s if exists (select true from public.sl_subscribe SUB1 222s where SUB1.sub_set = p_new_set_id 222s and SUB1.sub_receiver not in (select SUB2.sub_receiver 222s from public.sl_subscribe SUB2 222s where SUB2.sub_set = v_old_set_id)) 222s then 222s raise exception 'Slony-I: subscriber lists of set % and % are 
different', 222s p_new_set_id, v_old_set_id; 222s end if; 222s 222s if exists (select true from public.sl_subscribe SUB1 222s where SUB1.sub_set = v_old_set_id 222s and SUB1.sub_receiver not in (select SUB2.sub_receiver 222s from public.sl_subscribe SUB2 222s where SUB2.sub_set = p_new_set_id)) 222s then 222s raise exception 'Slony-I: subscriber lists of set % and % are different', 222s v_old_set_id, p_new_set_id; 222s end if; 222s 222s -- ---- 222s -- Change the set the table belongs to 222s -- ---- 222s perform public.createEvent('_main', 'SYNC', NULL); 222s perform public.setMoveTable_int(p_tab_id, p_new_set_id); 222s return public.createEvent('_main', 'SET_MOVE_TABLE', 222s p_tab_id::text, p_new_set_id::text); 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.setMoveTable(p_tab_id int4, p_new_set_id int4) is 222s 'This generates the SET_MOVE_TABLE event. If the set that the table is 222s in is identically subscribed to the set that the table is to be moved 222s into, then the SET_MOVE_TABLE event is raised.'; 222s COMMENT 222s create or replace function public.setMoveTable_int (p_tab_id int4, p_new_set_id int4) 222s returns int4 222s as $$ 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Move the table to the new set 222s -- ---- 222s update public.sl_table 222s set tab_set = p_new_set_id 222s where tab_id = p_tab_id; 222s 222s return p_tab_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.setMoveTable_int(p_tab_id int4, p_new_set_id int4) is 222s 'This processes the SET_MOVE_TABLE event. 
The table is moved 222s to the destination set.'; 222s COMMENT 222s create or replace function public.setMoveSequence (p_seq_id int4, p_new_set_id int4) 222s returns bigint 222s as $$ 222s declare 222s v_old_set_id int4; 222s v_origin int4; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Get the sequences current set 222s -- ---- 222s select seq_set into v_old_set_id from public.sl_sequence 222s where seq_id = p_seq_id; 222s if not found then 222s raise exception 'Slony-I: setMoveSequence(): sequence %d not found', p_seq_id; 222s end if; 222s 222s -- ---- 222s -- Check that both sets exist and originate here 222s -- ---- 222s if p_new_set_id = v_old_set_id then 222s raise exception 'Slony-I: setMoveSequence(): set ids cannot be identical'; 222s end if; 222s select set_origin into v_origin from public.sl_set 222s where set_id = p_new_set_id; 222s if not found then 222s raise exception 'Slony-I: setMoveSequence(): set % not found', p_new_set_id; 222s end if; 222s if v_origin != public.getLocalNodeId('_main') then 222s raise exception 'Slony-I: setMoveSequence(): set % does not originate on local node', 222s p_new_set_id; 222s end if; 222s 222s select set_origin into v_origin from public.sl_set 222s where set_id = v_old_set_id; 222s if not found then 222s raise exception 'Slony-I: set % not found', v_old_set_id; 222s end if; 222s if v_origin != public.getLocalNodeId('_main') then 222s raise exception 'Slony-I: set % does not originate on local node', 222s v_old_set_id; 222s end if; 222s 222s -- ---- 222s -- Check that both sets are subscribed by the same set of nodes 222s -- ---- 222s if exists (select true from public.sl_subscribe SUB1 222s where SUB1.sub_set = p_new_set_id 222s and SUB1.sub_receiver not in (select SUB2.sub_receiver 222s from public.sl_subscribe SUB2 222s where SUB2.sub_set = v_old_set_id)) 222s then 222s raise exception 'Slony-I: subscriber lists of set 
% and % are different', 222s p_new_set_id, v_old_set_id; 222s end if; 222s 222s if exists (select true from public.sl_subscribe SUB1 222s where SUB1.sub_set = v_old_set_id 222s and SUB1.sub_receiver not in (select SUB2.sub_receiver 222s from public.sl_subscribe SUB2 222s where SUB2.sub_set = p_new_set_id)) 222s then 222s raise exception 'Slony-I: subscriber lists of set % and % are different', 222s v_old_set_id, p_new_set_id; 222s end if; 222s 222s -- ---- 222s -- Change the set the sequence belongs to 222s -- ---- 222s perform public.setMoveSequence_int(p_seq_id, p_new_set_id); 222s return public.createEvent('_main', 'SET_MOVE_SEQUENCE', 222s p_seq_id::text, p_new_set_id::text); 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.setMoveSequence (p_seq_id int4, p_new_set_id int4) is 222s 'setMoveSequence(p_seq_id, p_new_set_id) - This generates the 222s SET_MOVE_SEQUENCE event, after validation, notably that both sets 222s exist, are distinct, and have exactly the same subscription lists'; 222s COMMENT 222s create or replace function public.setMoveSequence_int (p_seq_id int4, p_new_set_id int4) 222s returns int4 222s as $$ 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Move the sequence to the new set 222s -- ---- 222s update public.sl_sequence 222s set seq_set = p_new_set_id 222s where seq_id = p_seq_id; 222s 222s return p_seq_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.setMoveSequence_int (p_seq_id int4, p_new_set_id int4) is 222s 'setMoveSequence_int(p_seq_id, p_new_set_id) - processes the 222s SET_MOVE_SEQUENCE event, moving a sequence to another replication 222s set.'; 222s COMMENT 222s create or replace function public.sequenceSetValue(p_seq_id int4, p_seq_origin int4, p_ev_seqno int8, p_last_value int8,p_ignore_missing bool) returns int4 222s as $$ 222s declare 222s v_fqname 
text; 222s v_found integer; 222s begin 222s -- ---- 222s -- Get the sequences fully qualified name 222s -- ---- 222s select public.slon_quote_brute(PGN.nspname) || '.' || 222s public.slon_quote_brute(PGC.relname) into v_fqname 222s from public.sl_sequence SQ, 222s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 222s where SQ.seq_id = p_seq_id 222s and SQ.seq_reloid = PGC.oid 222s and PGC.relnamespace = PGN.oid; 222s if not found then 222s if p_ignore_missing then 222s return null; 222s end if; 222s raise exception 'Slony-I: sequenceSetValue(): sequence % not found', p_seq_id; 222s end if; 222s 222s -- ---- 222s -- Update it to the new value 222s -- ---- 222s execute 'select setval(''' || v_fqname || 222s ''', ' || p_last_value::text || ')'; 222s 222s if p_ev_seqno is not null then 222s insert into public.sl_seqlog 222s (seql_seqid, seql_origin, seql_ev_seqno, seql_last_value) 222s values (p_seq_id, p_seq_origin, p_ev_seqno, p_last_value); 222s end if; 222s return p_seq_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.sequenceSetValue(p_seq_id int4, p_seq_origin int4, p_ev_seqno int8, p_last_value int8,p_ignore_missing bool) is 222s 'sequenceSetValue (seq_id, seq_origin, ev_seqno, last_value,ignore_missing) 222s Set sequence seq_id to have new value last_value. 
222s '; 222s COMMENT 222s drop function if exists public.ddlCapture (p_statement text, p_nodes text); 222s NOTICE: function public.ddlcapture(text,text) does not exist, skipping 222s NOTICE: function public.ddlscript_complete(int4,text,int4) does not exist, skipping 222s NOTICE: function public.ddlscript_complete_int(int4,int4) does not exist, skipping 222s DROP FUNCTION 222s create or replace function public.ddlCapture (p_statement text, p_nodes text) 222s returns bigint 222s as $$ 222s declare 222s c_local_node integer; 222s c_found_origin boolean; 222s c_node text; 222s c_cmdargs text[]; 222s c_nodeargs text; 222s c_delim text; 222s begin 222s c_local_node := public.getLocalNodeId('_main'); 222s 222s c_cmdargs = array_append('{}'::text[], p_statement); 222s c_nodeargs = ''; 222s if p_nodes is not null then 222s c_found_origin := 'f'; 222s -- p_nodes list needs to consist of a list of nodes that exist 222s -- and that include the current node ID 222s for c_node in select trim(node) from 222s pg_catalog.regexp_split_to_table(p_nodes, ',') as node loop 222s if not exists 222s (select 1 from public.sl_node 222s where no_id = (c_node::integer)) then 222s raise exception 'ddlcapture(%,%) - node % does not exist!', 222s p_statement, p_nodes, c_node; 222s end if; 222s 222s if c_local_node = (c_node::integer) then 222s c_found_origin := 't'; 222s end if; 222s if length(c_nodeargs)>0 then 222s c_nodeargs = c_nodeargs ||','|| c_node; 222s else 222s c_nodeargs=c_node; 222s end if; 222s end loop; 222s 222s if not c_found_origin then 222s raise exception 222s 'ddlcapture(%,%) - origin node % not included in ONLY ON list!', 222s p_statement, p_nodes, c_local_node; 222s end if; 222s end if; 222s c_cmdargs = array_append(c_cmdargs,c_nodeargs); 222s c_delim=','; 222s c_cmdargs = array_append(c_cmdargs, 222s 222s (select public.string_agg( seq_id::text || c_delim 222s || c_local_node || 222s c_delim || seq_last_value) 222s FROM ( 222s select seq_id, 222s seq_last_value from 
public.sl_seqlastvalue 222s where seq_origin = c_local_node) as FOO 222s where NOT public.seqtrack(seq_id,seq_last_value) is NULL)); 222s insert into public.sl_log_script 222s (log_origin, log_txid, log_actionseq, log_cmdtype, log_cmdargs) 222s values 222s (c_local_node, pg_catalog.txid_current(), 222s nextval('public.sl_action_seq'), 'S', c_cmdargs); 222s execute p_statement; 222s return currval('public.sl_action_seq'); 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.ddlCapture (p_statement text, p_nodes text) is 222s 'Capture an SQL statement (usually DDL) that is to be literally replayed on subscribers'; 222s COMMENT 222s drop function if exists public.ddlScript_complete (int4, text, int4); 222s DROP FUNCTION 222s create or replace function public.ddlScript_complete (p_nodes text) 222s returns bigint 222s as $$ 222s declare 222s c_local_node integer; 222s c_found_origin boolean; 222s c_node text; 222s c_cmdargs text[]; 222s begin 222s c_local_node := public.getLocalNodeId('_main'); 222s 222s c_cmdargs = '{}'::text[]; 222s if p_nodes is not null then 222s c_found_origin := 'f'; 222s -- p_nodes list needs to consist of a list of nodes that exist 222s -- and that include the current node ID 222s for c_node in select trim(node) from 222s pg_catalog.regexp_split_to_table(p_nodes, ',') as node loop 222s if not exists 222s (select 1 from public.sl_node 222s where no_id = (c_node::integer)) then 222s raise exception 'ddlScript_complete(%) - node % does not exist!', 222s p_nodes, c_node; 222s end if; 222s 222s if c_local_node = (c_node::integer) then 222s c_found_origin := 't'; 222s end if; 222s 222s c_cmdargs = array_append(c_cmdargs, c_node); 222s end loop; 222s 222s if not c_found_origin then 222s raise exception 222s 'ddlScript_complete(%) - origin node % not included in ONLY ON list!', 222s p_nodes, c_local_node; 222s end if; 222s end if; 222s 222s perform public.ddlScript_complete_int(); 222s 222s insert into 
public.sl_log_script 222s (log_origin, log_txid, log_actionseq, log_cmdtype, log_cmdargs) 222s values 222s (c_local_node, pg_catalog.txid_current(), 222s nextval('public.sl_action_seq'), 's', c_cmdargs); 222s 222s return currval('public.sl_action_seq'); 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.ddlScript_complete(p_nodes text) is 222s 'ddlScript_complete(p_nodes) 222s 222s After script has run on origin, this fixes up relnames and 222s log trigger arguments and inserts the "fire ddlScript_complete_int()" 222s log row into sl_log_script.'; 222s COMMENT 222s drop function if exists public.ddlScript_complete_int(int4, int4); 222s DROP FUNCTION 222s create or replace function public.ddlScript_complete_int () 222s returns int4 222s as $$ 222s begin 222s perform public.updateRelname(); 222s perform public.repair_log_triggers(true); 222s return 0; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.ddlScript_complete_int() is 222s 'ddlScript_complete_int() 222s 222s Complete processing the DDL_SCRIPT event.'; 222s COMMENT 222s create or replace function public.alterTableAddTriggers (p_tab_id int4) 222s returns int4 222s as $$ 222s declare 222s v_no_id int4; 222s v_tab_row record; 222s v_tab_fqname text; 222s v_tab_attkind text; 222s v_n int4; 222s v_trec record; 222s v_tgbad boolean; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Get our local node ID 222s -- ---- 222s v_no_id := public.getLocalNodeId('_main'); 222s 222s -- ---- 222s -- Get the sl_table row and the current origin of the table. 222s -- ---- 222s select T.tab_reloid, T.tab_set, T.tab_idxname, 222s S.set_origin, PGX.indexrelid, 222s public.slon_quote_brute(PGN.nspname) || '.'
|| 222s public.slon_quote_brute(PGC.relname) as tab_fqname 222s into v_tab_row 222s from public.sl_table T, public.sl_set S, 222s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, 222s "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGXC 222s where T.tab_id = p_tab_id 222s and T.tab_set = S.set_id 222s and T.tab_reloid = PGC.oid 222s and PGC.relnamespace = PGN.oid 222s and PGX.indrelid = T.tab_reloid 222s and PGX.indexrelid = PGXC.oid 222s and PGXC.relname = T.tab_idxname 222s for update; 222s if not found then 222s raise exception 'Slony-I: alterTableAddTriggers(): Table with id % not found', p_tab_id; 222s end if; 222s v_tab_fqname = v_tab_row.tab_fqname; 222s 222s v_tab_attkind := public.determineAttKindUnique(v_tab_row.tab_fqname, 222s v_tab_row.tab_idxname); 222s 222s execute 'lock table ' || v_tab_fqname || ' in access exclusive mode'; 222s 222s -- ---- 222s -- Create the log and the deny access triggers 222s -- ---- 222s execute 'create trigger "_main_logtrigger"' || 222s ' after insert or update or delete on ' || 222s v_tab_fqname || ' for each row execute procedure public.logTrigger (' || 222s pg_catalog.quote_literal('_main') || ',' || 222s pg_catalog.quote_literal(p_tab_id::text) || ',' || 222s pg_catalog.quote_literal(v_tab_attkind) || ');'; 222s 222s execute 'create trigger "_main_denyaccess" ' || 222s 'before insert or update or delete on ' || 222s v_tab_fqname || ' for each row execute procedure ' || 222s 'public.denyAccess (' || pg_catalog.quote_literal('_main') || ');'; 222s 222s perform public.alterTableAddTruncateTrigger(v_tab_fqname, p_tab_id); 222s 222s perform public.alterTableConfigureTriggers (p_tab_id); 222s return p_tab_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.alterTableAddTriggers(p_tab_id int4) is 222s 'alterTableAddTriggers(tab_id) 222s 222s Adds the log and deny access triggers to a replicated table.'; 222s COMMENT 222s create or replace function public.alterTableDropTriggers 
(p_tab_id int4) 222s returns int4 222s as $$ 222s declare 222s v_no_id int4; 222s v_tab_row record; 222s v_tab_fqname text; 222s v_n int4; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Get our local node ID 222s -- ---- 222s v_no_id := public.getLocalNodeId('_main'); 222s 222s -- ---- 222s -- Get the sl_table row and the current tables origin. 222s -- ---- 222s select T.tab_reloid, T.tab_set, 222s S.set_origin, PGX.indexrelid, 222s public.slon_quote_brute(PGN.nspname) || '.' || 222s public.slon_quote_brute(PGC.relname) as tab_fqname 222s into v_tab_row 222s from public.sl_table T, public.sl_set S, 222s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, 222s "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGXC 222s where T.tab_id = p_tab_id 222s and T.tab_set = S.set_id 222s and T.tab_reloid = PGC.oid 222s and PGC.relnamespace = PGN.oid 222s and PGX.indrelid = T.tab_reloid 222s and PGX.indexrelid = PGXC.oid 222s and PGXC.relname = T.tab_idxname 222s for update; 222s if not found then 222s raise exception 'Slony-I: alterTableDropTriggers(): Table with id % not found', p_tab_id; 222s end if; 222s v_tab_fqname = v_tab_row.tab_fqname; 222s 222s execute 'lock table ' || v_tab_fqname || ' in access exclusive mode'; 222s 222s -- ---- 222s -- Drop both triggers 222s -- ---- 222s execute 'drop trigger "_main_logtrigger" on ' || 222s v_tab_fqname; 222s 222s execute 'drop trigger "_main_denyaccess" on ' || 222s v_tab_fqname; 222s 222s perform public.alterTableDropTruncateTrigger(v_tab_fqname, p_tab_id); 222s 222s return p_tab_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.alterTableDropTriggers (p_tab_id int4) is 222s 'alterTableDropTriggers (tab_id) 222s 222s Remove the log and deny access triggers from a table.'; 222s COMMENT 222s create or replace function public.alterTableConfigureTriggers (p_tab_id int4) 222s returns 
int4 222s as $$ 222s declare 222s v_no_id int4; 222s v_tab_row record; 222s v_tab_fqname text; 222s v_n int4; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Get our local node ID 222s -- ---- 222s v_no_id := public.getLocalNodeId('_main'); 222s 222s -- ---- 222s -- Get the sl_table row and the current tables origin. 222s -- ---- 222s select T.tab_reloid, T.tab_set, 222s S.set_origin, PGX.indexrelid, 222s public.slon_quote_brute(PGN.nspname) || '.' || 222s public.slon_quote_brute(PGC.relname) as tab_fqname 222s into v_tab_row 222s from public.sl_table T, public.sl_set S, 222s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, 222s "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGXC 222s where T.tab_id = p_tab_id 222s and T.tab_set = S.set_id 222s and T.tab_reloid = PGC.oid 222s and PGC.relnamespace = PGN.oid 222s and PGX.indrelid = T.tab_reloid 222s and PGX.indexrelid = PGXC.oid 222s and PGXC.relname = T.tab_idxname 222s for update; 222s if not found then 222s raise exception 'Slony-I: alterTableConfigureTriggers(): Table with id % not found', p_tab_id; 222s end if; 222s v_tab_fqname = v_tab_row.tab_fqname; 222s 222s -- ---- 222s -- Configuration depends on the origin of the table 222s -- ---- 222s if v_tab_row.set_origin = v_no_id then 222s -- ---- 222s -- On the origin the log trigger is configured like a default 222s -- user trigger and the deny access trigger is disabled. 222s -- ---- 222s execute 'alter table ' || v_tab_fqname || 222s ' enable trigger "_main_logtrigger"'; 222s execute 'alter table ' || v_tab_fqname || 222s ' disable trigger "_main_denyaccess"'; 222s perform public.alterTableConfigureTruncateTrigger(v_tab_fqname, 222s 'enable', 'disable'); 222s else 222s -- ---- 222s -- On a replica the log trigger is disabled and the 222s -- deny access trigger fires in origin session role. 
222s -- ---- 222s execute 'alter table ' || v_tab_fqname || 222s ' disable trigger "_main_logtrigger"'; 222s execute 'alter table ' || v_tab_fqname || 222s ' enable trigger "_main_denyaccess"'; 222s perform public.alterTableConfigureTruncateTrigger(v_tab_fqname, 222s 'disable', 'enable'); 222s 222s end if; 222s 222s return p_tab_id; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.alterTableConfigureTriggers (p_tab_id int4) is 222s 'alterTableConfigureTriggers (tab_id) 222s 222s Set the enable/disable configuration for the replication triggers 222s according to the origin of the set.'; 222s NOTICE: function public.subscribeset_int(int4,int4,int4,bool,bool) does not exist, skipping 222s NOTICE: function public.unsubscribeset(int4,int4,pg_catalog.bool) does not exist, skipping 222s COMMENT 222s create or replace function public.resubscribeNode (p_origin int4, 222s p_provider int4, p_receiver int4) 222s returns bigint 222s as $$ 222s declare 222s v_record record; 222s v_missing_sets text; 222s v_ev_seqno bigint; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- 222s -- Check that the receiver exists 222s -- 222s if not exists (select no_id from public.sl_node where no_id= 222s p_receiver) then 222s raise exception 'Slony-I: subscribeSet() receiver % does not exist' , p_receiver; 222s end if; 222s 222s -- 222s -- Check that the provider exists 222s -- 222s if not exists (select no_id from public.sl_node where no_id= 222s p_provider) then 222s raise exception 'Slony-I: subscribeSet() provider % does not exist' , p_provider; 222s end if; 222s 222s 222s -- ---- 222s -- Check that this is called on the origin node 222s -- ---- 222s if p_origin != public.getLocalNodeId('_main') then 222s raise exception 'Slony-I: subscribeSet() must be called on origin'; 222s end if; 222s 222s -- --- 222s -- Verify that the provider is either the origin or an active 
subscriber 222s -- Bug report #1362 222s -- --- 222s if p_origin <> p_provider then 222s for v_record in select sub1.sub_set from 222s public.sl_subscribe sub1 222s left outer join (public.sl_subscribe sub2 222s inner join 222s public.sl_set on ( 222s sl_set.set_id=sub2.sub_set 222s and sub2.sub_set=p_origin) 222s ) 222s ON ( sub1.sub_set = sub2.sub_set and 222s sub1.sub_receiver = p_provider and 222s sub1.sub_forward and sub1.sub_active 222s and sub2.sub_receiver=p_receiver) 222s 222s where sub2.sub_set is null 222s loop 222s v_missing_sets=coalesce(v_missing_sets,'') || ' ' || v_record.sub_set; 222s end loop; 222s if v_missing_sets is not null then 222s raise exception 'Slony-I: subscribeSet(): provider % is not an active forwarding node for replication set %', p_provider, v_missing_sets; 222s end if; 222s end if; 222s 222s for v_record in select * from 222s public.sl_subscribe, public.sl_set where 222s sub_set=set_id and 222s sub_receiver=p_receiver 222s and set_origin=p_origin 222s loop 222s -- ---- 222s -- Create the SUBSCRIBE_SET event 222s -- ---- 222s v_ev_seqno := public.createEvent('_main', 'SUBSCRIBE_SET', 222s v_record.sub_set::text, p_provider::text, p_receiver::text, 222s case v_record.sub_forward when true then 't' else 'f' end, 222s 'f' ); 222s 222s -- ---- 222s -- Call the internal procedure to store the subscription 222s -- ---- 222s perform public.subscribeSet_int(v_record.sub_set, 222s p_provider, 222s p_receiver, v_record.sub_forward, false); 222s end loop; 222s 222s return v_ev_seqno; 222s end; 222s $$ 222s language plpgsql; 222s CREATE FUNCTION 222s NOTICE: function public.updaterelname(int4,int4) does not exist, skipping 222s NOTICE: function public.updatereloid(int4,int4) does not exist, skipping 222s create or replace function public.subscribeSet (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4, p_sub_forward bool, p_omit_copy bool) 222s returns bigint 222s as $$ 222s declare 222s v_set_origin int4; 222s v_ev_seqno int8; 222s v_ev_seqno2
int8; 222s v_rec record; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- 222s -- Check that the receiver exists 222s -- 222s if not exists (select no_id from public.sl_node where no_id= 222s p_sub_receiver) then 222s raise exception 'Slony-I: subscribeSet() receiver % does not exist' , p_sub_receiver; 222s end if; 222s 222s -- 222s -- Check that the provider exists 222s -- 222s if not exists (select no_id from public.sl_node where no_id= 222s p_sub_provider) then 222s raise exception 'Slony-I: subscribeSet() provider % does not exist' , p_sub_provider; 222s end if; 222s 222s -- ---- 222s -- Check that the origin and provider of the set are remote 222s -- ---- 222s select set_origin into v_set_origin 222s from public.sl_set 222s where set_id = p_sub_set; 222s if not found then 222s raise exception 'Slony-I: subscribeSet(): set % not found', p_sub_set; 222s end if; 222s if v_set_origin = p_sub_receiver then 222s raise exception 222s 'Slony-I: subscribeSet(): set origin and receiver cannot be identical'; 222s end if; 222s if p_sub_receiver = p_sub_provider then 222s raise exception 222s 'Slony-I: subscribeSet(): set provider and receiver cannot be identical'; 222s end if; 222s -- ---- 222s -- Check that this is called on the origin node 222s -- ---- 222s if v_set_origin != public.getLocalNodeId('_main') then 222s raise exception 'Slony-I: subscribeSet() must be called on origin'; 222s end if; 222s 222s -- --- 222s -- Verify that the provider is either the origin or an active subscriber 222s -- Bug report #1362 222s -- --- 222s if v_set_origin <> p_sub_provider then 222s if not exists (select 1 from public.sl_subscribe 222s where sub_set = p_sub_set and 222s sub_receiver = p_sub_provider and 222s sub_forward and sub_active) then 222s raise exception 'Slony-I: subscribeSet(): provider % is not an active forwarding node for replication set %', p_sub_provider, p_sub_set; 222s end if; 222s 
end if; 222s 222s -- --- 222s -- Enforce that all sets from one origin are subscribed 222s -- using the same data provider per receiver. 222s -- ---- 222s if not exists (select 1 from public.sl_subscribe 222s where sub_set = p_sub_set and sub_receiver = p_sub_receiver) then 222s -- 222s -- New subscription - error out if we have any other subscription 222s -- from that origin with a different data provider. 222s -- 222s for v_rec in select sub_provider from public.sl_subscribe 222s join public.sl_set on set_id = sub_set 222s where set_origin = v_set_origin and sub_receiver = p_sub_receiver 222s loop 222s if v_rec.sub_provider <> p_sub_provider then 222s raise exception 'Slony-I: subscribeSet(): wrong provider % - existing subscription from origin % uses provider %', 222s p_sub_provider, v_set_origin, v_rec.sub_provider; 222s end if; 222s end loop; 222s else 222s -- 222s -- Existing subscription - in case the data provider changes and 222s -- there are other subscriptions, warn here. subscribeSet_int() 222s -- will currently change the data provider for those sets as well.
222s -- 222s for v_rec in select set_id, sub_provider from public.sl_subscribe 222s join public.sl_set on set_id = sub_set 222s where set_origin = v_set_origin and sub_receiver = p_sub_receiver 222s and set_id <> p_sub_set 222s loop 222s if v_rec.sub_provider <> p_sub_provider then 222s raise exception 'Slony-I: subscribeSet(): also data provider for set %, use resubscribe instead', 222s v_rec.set_id; 222s end if; 222s end loop; 222s end if; 222s 222s -- ---- 222s -- Create the SUBSCRIBE_SET event 222s -- ---- 222s v_ev_seqno := public.createEvent('_main', 'SUBSCRIBE_SET', 222s p_sub_set::text, p_sub_provider::text, p_sub_receiver::text, 222s case p_sub_forward when true then 't' else 'f' end, 222s case p_omit_copy when true then 't' else 'f' end 222s ); 222s 222s -- ---- 222s -- Call the internal procedure to store the subscription 222s -- ---- 222s v_ev_seqno2:=public.subscribeSet_int(p_sub_set, p_sub_provider, 222s p_sub_receiver, p_sub_forward, p_omit_copy); 222s 222s if v_ev_seqno2 is not null then 222s v_ev_seqno:=v_ev_seqno2; 222s end if; 222s 222s return v_ev_seqno; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.subscribeSet (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4, p_sub_forward bool, p_omit_copy bool) is 222s 'subscribeSet (sub_set, sub_provider, sub_receiver, sub_forward, omit_copy) 222s 222s Makes sure that the receiver is not the provider, then stores the 222s subscription, and publishes the SUBSCRIBE_SET event to other nodes. 222s 222s If omit_copy is true, then no data copy will be done.
222s '; 222s COMMENT 222s DROP FUNCTION IF EXISTS public.subscribeSet_int(int4,int4,int4,bool,bool); 222s DROP FUNCTION 222s create or replace function public.subscribeSet_int (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4, p_sub_forward bool, p_omit_copy bool) 222s returns int4 222s as $$ 222s declare 222s v_set_origin int4; 222s v_sub_row record; 222s v_seq_id bigint; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Lookup the set origin 222s -- ---- 222s select set_origin into v_set_origin 222s from public.sl_set 222s where set_id = p_sub_set; 222s if not found then 222s raise exception 'Slony-I: subscribeSet_int(): set % not found', p_sub_set; 222s end if; 222s 222s -- ---- 222s -- Provider change is only allowed for active sets 222s -- ---- 222s if p_sub_receiver = public.getLocalNodeId('_main') then 222s select sub_active into v_sub_row from public.sl_subscribe 222s where sub_set = p_sub_set 222s and sub_receiver = p_sub_receiver; 222s if found then 222s if not v_sub_row.sub_active then 222s raise exception 'Slony-I: subscribeSet_int(): set % is not active, cannot change provider', 222s p_sub_set; 222s end if; 222s end if; 222s end if; 222s 222s -- ---- 222s -- Try to change provider and/or forward for an existing subscription 222s -- ---- 222s update public.sl_subscribe 222s set sub_provider = p_sub_provider, 222s sub_forward = p_sub_forward 222s where sub_set = p_sub_set 222s and sub_receiver = p_sub_receiver; 222s if found then 222s 222s -- ---- 222s -- This is changing a subscription. Make sure all sets from 222s -- this origin are subscribed using the same data provider. 222s -- For this we first check that the requested data provider 222s -- is subscribed to all the sets the receiver is subscribed to.
222s -- ---- 222s for v_sub_row in select set_id from public.sl_set 222s join public.sl_subscribe on set_id = sub_set 222s where set_origin = v_set_origin 222s and sub_receiver = p_sub_receiver 222s and sub_set <> p_sub_set 222s loop 222s if not exists (select 1 from public.sl_subscribe 222s where sub_set = v_sub_row.set_id 222s and sub_receiver = p_sub_provider 222s and sub_active and sub_forward) 222s and not exists (select 1 from public.sl_set 222s where set_id = v_sub_row.set_id 222s and set_origin = p_sub_provider) 222s then 222s raise exception 'Slony-I: subscribeSet_int(): node % is not a forwarding subscriber for set %', 222s p_sub_provider, v_sub_row.set_id; 222s end if; 222s 222s -- ---- 222s -- New data provider offers this set as well, change that 222s -- subscription too. 222s -- ---- 222s update public.sl_subscribe 222s set sub_provider = p_sub_provider 222s where sub_set = v_sub_row.set_id 222s and sub_receiver = p_sub_receiver; 222s end loop; 222s 222s -- ---- 222s -- Rewrite sl_listen table 222s -- ---- 222s perform public.RebuildListenEntries(); 222s 222s return p_sub_set; 222s end if; 222s 222s -- ---- 222s -- Not found, insert a new one 222s -- ---- 222s if not exists (select true from public.sl_path 222s where pa_server = p_sub_provider 222s and pa_client = p_sub_receiver) 222s then 222s insert into public.sl_path 222s (pa_server, pa_client, pa_conninfo, pa_connretry) 222s values 222s (p_sub_provider, p_sub_receiver, 222s '', 10); 222s end if; 222s insert into public.sl_subscribe 222s (sub_set, sub_provider, sub_receiver, sub_forward, sub_active) 222s values (p_sub_set, p_sub_provider, p_sub_receiver, 222s p_sub_forward, false); 222s 222s -- ---- 222s -- If the set origin is here, then enable the subscription 222s -- ---- 222s if v_set_origin = public.getLocalNodeId('_main') then 222s select public.createEvent('_main', 'ENABLE_SUBSCRIPTION', 222s p_sub_set::text, p_sub_provider::text, p_sub_receiver::text, 222s case p_sub_forward when true then 
't' else 'f' end, 222s case p_omit_copy when true then 't' else 'f' end 222s ) into v_seq_id; 222s perform public.enableSubscription(p_sub_set, 222s p_sub_provider, p_sub_receiver); 222s end if; 222s 222s -- ---- 222s -- Rewrite sl_listen table 222s -- ---- 222s perform public.RebuildListenEntries(); 222s 222s return p_sub_set; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.subscribeSet_int (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4, p_sub_forward bool, p_omit_copy bool) is 222s 'subscribeSet_int (sub_set, sub_provider, sub_receiver, sub_forward, omit_copy) 222s 222s Internal actions for subscribing receiver sub_receiver to subscription 222s set sub_set.'; 222s COMMENT 222s drop function IF EXISTS public.unsubscribeSet(int4,int4,boolean); 222s DROP FUNCTION 222s create or replace function public.unsubscribeSet (p_sub_set int4, p_sub_receiver int4,p_force boolean) 222s returns bigint 222s as $$ 222s declare 222s v_tab_row record; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- Check that this is called on the receiver node 222s -- ---- 222s if p_sub_receiver != public.getLocalNodeId('_main') then 222s raise exception 'Slony-I: unsubscribeSet() must be called on receiver'; 222s end if; 222s 222s 222s 222s -- ---- 222s -- Check that this does not break any chains 222s -- ---- 222s if p_force=false and exists (select true from public.sl_subscribe 222s where sub_set = p_sub_set 222s and sub_provider = p_sub_receiver) 222s then 222s raise exception 'Slony-I: Cannot unsubscribe set % while being provider', 222s p_sub_set; 222s end if; 222s 222s if exists (select true from public.sl_subscribe 222s where sub_set = p_sub_set 222s and sub_provider = p_sub_receiver) 222s then 222s --delete the receivers of this provider. 222s --unsubscribeSet_int() will generate the event 222s --when it runs on the receiver. 
222s delete from public.sl_subscribe 222s where sub_set=p_sub_set 222s and sub_provider=p_sub_receiver; 222s end if; 222s 222s -- ---- 222s -- Remove the replication triggers. 222s -- ---- 222s for v_tab_row in select tab_id from public.sl_table 222s where tab_set = p_sub_set 222s order by tab_id 222s loop 222s perform public.alterTableDropTriggers(v_tab_row.tab_id); 222s end loop; 222s 222s -- ---- 222s -- Remove the setsync status. This will also cause the 222s -- worker thread to ignore the set and stop replicating 222s -- right now. 222s -- ---- 222s delete from public.sl_setsync 222s where ssy_setid = p_sub_set; 222s 222s -- ---- 222s -- Remove all sl_table and sl_sequence entries for this set. 222s -- Should we ever subscribe again, the initial data 222s -- copy process will create new ones. 222s -- ---- 222s delete from public.sl_table 222s where tab_set = p_sub_set; 222s delete from public.sl_sequence 222s where seq_set = p_sub_set; 222s 222s -- ---- 222s -- Call the internal procedure to drop the subscription 222s -- ---- 222s perform public.unsubscribeSet_int(p_sub_set, p_sub_receiver); 222s 222s -- Rewrite sl_listen table 222s perform public.RebuildListenEntries(); 222s 222s -- ---- 222s -- Create the UNSUBSCRIBE_SET event 222s -- ---- 222s return public.createEvent('_main', 'UNSUBSCRIBE_SET', 222s p_sub_set::text, p_sub_receiver::text); 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.unsubscribeSet (p_sub_set int4, p_sub_receiver int4,force boolean) is 222s 'unsubscribeSet (sub_set, sub_receiver,force) 222s 222s Unsubscribe node sub_receiver from subscription set sub_set. This is 222s invoked on the receiver node. It verifies that this does not break 222s any chains (e.g. 
- where sub_receiver is a provider for another node), 222s then restores tables, drops Slony-specific keys, drops table entries 222s for the set, drops the subscription, and generates an UNSUBSCRIBE_SET 222s event to publish that the subscription is being dropped.'; 222s COMMENT 222s create or replace function public.unsubscribeSet_int (p_sub_set int4, p_sub_receiver int4) 222s returns int4 222s as $$ 222s declare 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- All the real work is done before event generation on the 222s -- subscriber. 222s -- ---- 222s 222s --if this event unsubscribes the provider of this node 222s --then this node should unsubscribe itself from the set as well. 222s 222s if exists (select true from 222s public.sl_subscribe where 222s sub_set=p_sub_set and sub_provider=p_sub_receiver 222s and sub_receiver=public.getLocalNodeId('_main')) 222s then 222s perform public.unsubscribeSet(p_sub_set,public.getLocalNodeId('_main'),true); 222s end if; 222s 222s 222s delete from public.sl_subscribe 222s where sub_set = p_sub_set 222s and sub_receiver = p_sub_receiver; 222s 222s -- Rewrite sl_listen table 222s perform public.RebuildListenEntries(); 222s 222s return p_sub_set; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.unsubscribeSet_int (p_sub_set int4, p_sub_receiver int4) is 222s 'unsubscribeSet_int (sub_set, sub_receiver) 222s 222s All the REAL work of removing the subscriber is done before the event 222s is generated, so this function just has to drop the references to the 222s subscription in sl_subscribe.'; 222s COMMENT 222s create or replace function public.enableSubscription (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4) 222s returns int4 222s as $$ 222s begin 222s return public.enableSubscription_int (p_sub_set, 222s p_sub_provider, p_sub_receiver); 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION
222s comment on function public.enableSubscription (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4) is 222s 'enableSubscription (sub_set, sub_provider, sub_receiver) 222s 222s Indicates that sub_receiver intends subscribing to set sub_set from 222s sub_provider. Work is all done by the internal function 222s enableSubscription_int (sub_set, sub_provider, sub_receiver).'; 222s COMMENT 222s create or replace function public.enableSubscription_int (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4) 222s returns int4 222s as $$ 222s declare 222s v_n int4; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- ---- 222s -- The real work is done in the replication engine. All 222s -- we have to do here is remembering that it happened. 222s -- ---- 222s 222s -- ---- 222s -- Well, not only ... we might be missing an important event here 222s -- ---- 222s if not exists (select true from public.sl_path 222s where pa_server = p_sub_provider 222s and pa_client = p_sub_receiver) 222s then 222s insert into public.sl_path 222s (pa_server, pa_client, pa_conninfo, pa_connretry) 222s values 222s (p_sub_provider, p_sub_receiver, 222s '', 10); 222s end if; 222s 222s update public.sl_subscribe 222s set sub_active = 't' 222s where sub_set = p_sub_set 222s and sub_receiver = p_sub_receiver; 222s get diagnostics v_n = row_count; 222s if v_n = 0 then 222s insert into public.sl_subscribe 222s (sub_set, sub_provider, sub_receiver, 222s sub_forward, sub_active) 222s values 222s (p_sub_set, p_sub_provider, p_sub_receiver, 222s false, true); 222s end if; 222s 222s -- Rewrite sl_listen table 222s perform public.RebuildListenEntries(); 222s 222s return p_sub_set; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.enableSubscription_int (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4) is 222s 'enableSubscription_int (sub_set, sub_provider, sub_receiver) 
222s 222s Internal function to enable subscription of node sub_receiver to set 222s sub_set via node sub_provider. 222s 222s slon does most of the work; all we need do here is to remember that it 222s happened. The function updates sl_subscribe, indicating that the 222s subscription has become active.'; 222s COMMENT 222s create or replace function public.forwardConfirm (p_con_origin int4, p_con_received int4, p_con_seqno int8, p_con_timestamp timestamp) 222s returns bigint 222s as $$ 222s declare 222s v_max_seqno bigint; 222s begin 222s select into v_max_seqno coalesce(max(con_seqno), 0) 222s from public.sl_confirm 222s where con_origin = p_con_origin 222s and con_received = p_con_received; 222s if v_max_seqno < p_con_seqno then 222s insert into public.sl_confirm 222s (con_origin, con_received, con_seqno, con_timestamp) 222s values (p_con_origin, p_con_received, p_con_seqno, 222s p_con_timestamp); 222s v_max_seqno = p_con_seqno; 222s end if; 222s 222s return v_max_seqno; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.forwardConfirm (p_con_origin int4, p_con_received int4, p_con_seqno int8, p_con_timestamp timestamp) is 222s 'forwardConfirm (p_con_origin, p_con_received, p_con_seqno, p_con_timestamp) 222s 222s Confirms (recorded in sl_confirm) that items from p_con_origin up to 222s p_con_seqno have been received by node p_con_received as of 222s p_con_timestamp, and raises an event to forward this confirmation.'; 222s COMMENT 222s create or replace function public.cleanupEvent (p_interval interval) 222s returns int4 222s as $$ 222s declare 222s v_max_row record; 222s v_min_row record; 222s v_max_sync int8; 222s v_origin int8; 222s v_seqno int8; 222s v_xmin bigint; 222s v_rc int8; 222s begin 222s -- ---- 222s -- First remove all confirmations where origin/receiver no longer exist 222s -- ---- 222s delete from public.sl_confirm 222s where con_origin not in (select no_id from public.sl_node); 222s delete from public.sl_confirm 
222s where con_received not in (select no_id from public.sl_node); 222s -- ---- 222s -- Next remove all but the most recent confirm row per origin,receiver pair. 222s -- Ignore confirmations that are younger than the given interval. We currently 222s -- have an unconfirmed suspicion that a transaction possibly lost due 222s -- to a server crash might have been visible to another session, and 222s -- that this led to the removal of log data that was still needed. 222s -- ---- 222s for v_max_row in select con_origin, con_received, max(con_seqno) as con_seqno 222s from public.sl_confirm 222s where con_timestamp < (CURRENT_TIMESTAMP - p_interval) 222s group by con_origin, con_received 222s loop 222s delete from public.sl_confirm 222s where con_origin = v_max_row.con_origin 222s and con_received = v_max_row.con_received 222s and con_seqno < v_max_row.con_seqno; 222s end loop; 222s 222s -- ---- 222s -- Then remove all events that are confirmed by all nodes in the 222s -- whole cluster up to the last SYNC 222s -- ---- 222s for v_min_row in select con_origin, min(con_seqno) as con_seqno 222s from public.sl_confirm 222s group by con_origin 222s loop 222s select coalesce(max(ev_seqno), 0) into v_max_sync 222s from public.sl_event 222s where ev_origin = v_min_row.con_origin 222s and ev_seqno <= v_min_row.con_seqno 222s and ev_type = 'SYNC'; 222s if v_max_sync > 0 then 222s delete from public.sl_event 222s where ev_origin = v_min_row.con_origin 222s and ev_seqno < v_max_sync; 222s end if; 222s end loop; 222s 222s -- ---- 222s -- If cluster has only one node, then remove all events up to 222s -- the last SYNC - Bug #1538 222s -- http://gborg.postgresql.org/project/slony1/bugs/bugupdate.php?1538 222s -- ---- 222s 222s select * into v_min_row from public.sl_node where 222s no_id <> public.getLocalNodeId('_main') limit 1; 222s if not found then 222s select ev_origin, ev_seqno into v_min_row from public.sl_event 222s where ev_origin = public.getLocalNodeId('_main') 222s order by ev_origin desc,
ev_seqno desc limit 1; 222s raise notice 'Slony-I: cleanupEvent(): Single node - deleting events < %', v_min_row.ev_seqno; 222s delete from public.sl_event 222s where 222s ev_origin = v_min_row.ev_origin and 222s ev_seqno < v_min_row.ev_seqno; 222s 222s end if; 222s 222s if exists (select * from "pg_catalog".pg_class c, "pg_catalog".pg_namespace n, "pg_catalog".pg_attribute a where c.relname = 'sl_seqlog' and n.oid = c.relnamespace and a.attrelid = c.oid and a.attname = 'oid') then 222s execute 'alter table public.sl_seqlog set without oids;'; 222s end if; 222s -- ---- 222s -- Also remove stale entries from the nodelock table. 222s -- ---- 222s perform public.cleanupNodelock(); 222s 222s -- ---- 222s -- Find the eldest event left, for each origin 222s -- ---- 222s for v_origin, v_seqno, v_xmin in 222s select ev_origin, ev_seqno, "pg_catalog".txid_snapshot_xmin(ev_snapshot) from public.sl_event 222s where (ev_origin, ev_seqno) in (select ev_origin, min(ev_seqno) from public.sl_event where ev_type = 'SYNC' group by ev_origin) 222s loop 222s delete from public.sl_seqlog where seql_origin = v_origin and seql_ev_seqno < v_seqno; 222s delete from public.sl_log_script where log_origin = v_origin and log_txid < v_xmin; 222s end loop; 222s 222s v_rc := public.logswitch_finish(); 222s if v_rc = 0 then -- no switch in progress 222s perform public.logswitch_start(); 222s end if; 222s 222s return 0; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.cleanupEvent (p_interval interval) is 222s 'cleaning old data out of sl_confirm, sl_event. 
Removes all but the 222s last sl_confirm row per (origin,receiver), and then removes all events 222s that are confirmed by all nodes in the whole cluster up to the last 222s SYNC.'; 222s COMMENT 222s create or replace function public.determineIdxnameUnique(p_tab_fqname text, p_idx_name name) returns name 222s as $$ 222s declare 222s v_tab_fqname_quoted text default ''; 222s v_idxrow record; 222s begin 222s v_tab_fqname_quoted := public.slon_quote_input(p_tab_fqname); 222s -- 222s -- Ensure that the table exists 222s -- 222s if (select PGC.relname 222s from "pg_catalog".pg_class PGC, 222s "pg_catalog".pg_namespace PGN 222s where public.slon_quote_brute(PGN.nspname) || '.' || 222s public.slon_quote_brute(PGC.relname) = v_tab_fqname_quoted 222s and PGN.oid = PGC.relnamespace) is null then 222s raise exception 'Slony-I: determineIdxnameUnique(): table % not found', v_tab_fqname_quoted; 222s end if; 222s 222s -- 222s -- Look up the table's primary key or the specified unique index 222s -- 222s if p_idx_name isnull then 222s select PGXC.relname 222s into v_idxrow 222s from "pg_catalog".pg_class PGC, 222s "pg_catalog".pg_namespace PGN, 222s "pg_catalog".pg_index PGX, 222s "pg_catalog".pg_class PGXC 222s where public.slon_quote_brute(PGN.nspname) || '.' || 222s public.slon_quote_brute(PGC.relname) = v_tab_fqname_quoted 222s and PGN.oid = PGC.relnamespace 222s and PGX.indrelid = PGC.oid 222s and PGX.indexrelid = PGXC.oid 222s and PGX.indisprimary; 222s if not found then 222s raise exception 'Slony-I: table % has no primary key', 222s v_tab_fqname_quoted; 222s end if; 222s else 222s select PGXC.relname 222s into v_idxrow 222s from "pg_catalog".pg_class PGC, 222s "pg_catalog".pg_namespace PGN, 222s "pg_catalog".pg_index PGX, 222s "pg_catalog".pg_class PGXC 222s where public.slon_quote_brute(PGN.nspname) || '.' 
|| 222s public.slon_quote_brute(PGC.relname) = v_tab_fqname_quoted 222s and PGN.oid = PGC.relnamespace 222s and PGX.indrelid = PGC.oid 222s and PGX.indexrelid = PGXC.oid 222s and PGX.indisunique 222s and public.slon_quote_brute(PGXC.relname) = public.slon_quote_input(p_idx_name); 222s if not found then 222s raise exception 'Slony-I: table % has no unique index %', 222s v_tab_fqname_quoted, p_idx_name; 222s end if; 222s end if; 222s 222s -- 222s -- Return the found index name 222s -- 222s return v_idxrow.relname; 222s end; 222s $$ language plpgsql called on null input; 222s CREATE FUNCTION 222s comment on function public.determineIdxnameUnique(p_tab_fqname text, p_idx_name name) is 222s 'FUNCTION determineIdxnameUnique (tab_fqname, indexname) 222s 222s Given a tablename, tab_fqname, check that the unique index, indexname, 222s exists or return the primary key index name for the table. If there 222s is no unique index, it raises an exception.'; 222s COMMENT 222s create or replace function public.determineAttkindUnique(p_tab_fqname text, p_idx_name name) returns text 222s as $$ 222s declare 222s v_tab_fqname_quoted text default ''; 222s v_idx_name_quoted text; 222s v_idxrow record; 222s v_attrow record; 222s v_i integer; 222s v_attno int2; 222s v_attkind text default ''; 222s v_attfound bool; 222s begin 222s v_tab_fqname_quoted := public.slon_quote_input(p_tab_fqname); 222s v_idx_name_quoted := public.slon_quote_brute(p_idx_name); 222s -- 222s -- Ensure that the table exists 222s -- 222s if (select PGC.relname 222s from "pg_catalog".pg_class PGC, 222s "pg_catalog".pg_namespace PGN 222s where public.slon_quote_brute(PGN.nspname) || '.' 
|| 222s public.slon_quote_brute(PGC.relname) = v_tab_fqname_quoted 222s and PGN.oid = PGC.relnamespace) is null then 222s raise exception 'Slony-I: table % not found', v_tab_fqname_quoted; 222s end if; 222s 222s -- 222s -- Look up the table's primary key or the specified unique index 222s -- 222s if p_idx_name isnull then 222s raise exception 'Slony-I: index name must be specified'; 222s else 222s select PGXC.relname, PGX.indexrelid, PGX.indkey 222s into v_idxrow 222s from "pg_catalog".pg_class PGC, 222s "pg_catalog".pg_namespace PGN, 222s "pg_catalog".pg_index PGX, 222s "pg_catalog".pg_class PGXC 222s where public.slon_quote_brute(PGN.nspname) || '.' || 222s public.slon_quote_brute(PGC.relname) = v_tab_fqname_quoted 222s and PGN.oid = PGC.relnamespace 222s and PGX.indrelid = PGC.oid 222s and PGX.indexrelid = PGXC.oid 222s and PGX.indisunique 222s and public.slon_quote_brute(PGXC.relname) = v_idx_name_quoted; 222s if not found then 222s raise exception 'Slony-I: table % has no unique index %', 222s v_tab_fqname_quoted, v_idx_name_quoted; 222s end if; 222s end if; 222s 222s -- 222s -- Loop over the table's attributes and check if they are 222s -- index attributes. If so, add a "k" to the return value, 222s -- otherwise add a "v". 222s -- 222s for v_attrow in select PGA.attnum, PGA.attname 222s from "pg_catalog".pg_class PGC, 222s "pg_catalog".pg_namespace PGN, 222s "pg_catalog".pg_attribute PGA 222s where public.slon_quote_brute(PGN.nspname) || '.' 
|| 222s public.slon_quote_brute(PGC.relname) = v_tab_fqname_quoted 222s and PGN.oid = PGC.relnamespace 222s and PGA.attrelid = PGC.oid 222s and not PGA.attisdropped 222s and PGA.attnum > 0 222s order by attnum 222s loop 222s v_attfound = 'f'; 222s 222s v_i := 0; 222s loop 222s select indkey[v_i] into v_attno from "pg_catalog".pg_index 222s where indexrelid = v_idxrow.indexrelid; 222s if v_attno isnull or v_attno = 0 then 222s exit; 222s end if; 222s if v_attrow.attnum = v_attno then 222s v_attfound = 't'; 222s exit; 222s end if; 222s v_i := v_i + 1; 222s end loop; 222s 222s if v_attfound then 222s v_attkind := v_attkind || 'k'; 222s else 222s v_attkind := v_attkind || 'v'; 222s end if; 222s end loop; 222s 222s -- Strip off trailing v characters as they are not needed by the logtrigger 222s v_attkind := pg_catalog.rtrim(v_attkind, 'v'); 222s 222s -- 222s -- Return the resulting attkind 222s -- 222s return v_attkind; 222s end; 222s $$ language plpgsql called on null input; 222s CREATE FUNCTION 222s comment on function public.determineAttkindUnique(p_tab_fqname text, p_idx_name name) is 222s 'determineAttKindUnique (tab_fqname, indexname) 222s 222s Given a tablename, return the Slony-I specific attkind (used for the 222s log trigger) of the table. Use the specified unique index or the 222s primary key (if indexname is NULL).'; 222s COMMENT 222s create or replace function public.RebuildListenEntries() 222s returns int 222s as $$ 222s declare 222s v_row record; 222s v_cnt integer; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s -- First remove the entire configuration 222s delete from public.sl_listen; 222s 222s -- Second populate the sl_listen configuration with a full 222s -- network of all possible paths. 
222s insert into public.sl_listen 222s (li_origin, li_provider, li_receiver) 222s select pa_server, pa_server, pa_client from public.sl_path; 222s while true loop 222s insert into public.sl_listen 222s (li_origin, li_provider, li_receiver) 222s select distinct li_origin, pa_server, pa_client 222s from public.sl_listen, public.sl_path 222s where li_receiver = pa_server 222s and li_origin <> pa_client 222s and pa_conninfo<>'' 222s except 222s select li_origin, li_provider, li_receiver 222s from public.sl_listen; 222s 222s if not found then 222s exit; 222s end if; 222s end loop; 222s 222s -- We now replace specific event-origin,receiver combinations 222s -- with a configuration that tries to avoid events arriving at 222s -- a node before the data provider actually has the data ready. 222s 222s -- Loop over every possible pair of receiver and event origin 222s for v_row in select N1.no_id as receiver, N2.no_id as origin, 222s N2.no_failed as failed 222s from public.sl_node as N1, public.sl_node as N2 222s where N1.no_id <> N2.no_id 222s loop 222s -- 1st choice: 222s -- If we use the event origin as a data provider for any 222s -- set that originates on that very node, we are a direct 222s -- subscriber to that origin and listen there only. 222s if exists (select true from public.sl_set, public.sl_subscribe , public.sl_node p 222s where set_origin = v_row.origin 222s and sub_set = set_id 222s and sub_provider = v_row.origin 222s and sub_receiver = v_row.receiver 222s and sub_active 222s and p.no_active 222s and p.no_id=sub_provider 222s ) 222s then 222s delete from public.sl_listen 222s where li_origin = v_row.origin 222s and li_receiver = v_row.receiver; 222s insert into public.sl_listen (li_origin, li_provider, li_receiver) 222s values (v_row.origin, v_row.origin, v_row.receiver); 222s 222s -- 2nd choice: 222s -- If we are subscribed to any set originating on this 222s -- event origin, we want to listen on all data providers 222s -- we use for this origin. 
We are a cascaded subscriber 222s -- for sets from this node. 222s else 222s if exists (select true from public.sl_set, public.sl_subscribe, 222s public.sl_node provider 222s where set_origin = v_row.origin 222s and sub_set = set_id 222s and sub_provider=provider.no_id 222s and provider.no_failed = false 222s and sub_receiver = v_row.receiver 222s and sub_active) 222s then 222s delete from public.sl_listen 222s where li_origin = v_row.origin 222s and li_receiver = v_row.receiver; 222s insert into public.sl_listen (li_origin, li_provider, li_receiver) 222s select distinct set_origin, sub_provider, v_row.receiver 222s from public.sl_set, public.sl_subscribe 222s where set_origin = v_row.origin 222s and sub_set = set_id 222s and sub_receiver = v_row.receiver 222s and sub_active; 222s end if; 222s end if; 222s 222s if v_row.failed then 222s 222s --for every failed node we delete all sl_listen entries 222s --except via providers (listed in sl_subscribe) 222s --or failover candidates (sl_failover_targets) 222s --we do this to prevent a non-failover candidate 222s --that is further ahead than the failover candidate from 222s --sending events to the failover candidate that 222s --are 'too far ahead' 222s 222s --if the failed node is not an origin for any 222s --node then we don't delete all listen paths 222s --for events from it. Instead we leave 222s --the listen network alone. 
222s 222s select count(*) into v_cnt from public.sl_subscribe sub, 222s public.sl_set s 222s where s.set_origin=v_row.origin and s.set_id=sub.sub_set; 222s if v_cnt > 0 then 222s delete from public.sl_listen where 222s li_origin=v_row.origin and 222s li_receiver=v_row.receiver 222s and li_provider not in 222s (select sub_provider from 222s public.sl_subscribe, 222s public.sl_set where 222s sub_set=set_id 222s and set_origin=v_row.origin); 222s end if; 222s end if; 222s -- insert into public.sl_listen 222s -- (li_origin,li_provider,li_receiver) 222s -- SELECT v_row.origin, pa_server 222s -- ,v_row.receiver 222s -- FROM public.sl_path where 222s -- pa_client=v_row.receiver 222s -- and (v_row.origin,pa_server,v_row.receiver) not in 222s -- (select li_origin,li_provider,li_receiver 222s -- from public.sl_listen); 222s -- end if; 222s end loop ; 222s 222s return null ; 222s end ; 222s $$ language 'plpgsql'; 222s CREATE FUNCTION 222s comment on function public.RebuildListenEntries() is 222s 'RebuildListenEntries() 222s 222s Invoked by various subscription and path modifying functions, this 222s rewrites the sl_listen entries, adding in all the ones required to 222s allow communications between nodes in the Slony-I cluster.'; 222s COMMENT 222s create or replace function public.generate_sync_event(p_interval interval) 222s returns int4 222s as $$ 222s declare 222s v_node_row record; 222s 222s BEGIN 222s select 1 into v_node_row from public.sl_event 222s where ev_type = 'SYNC' and ev_origin = public.getLocalNodeId('_main') 222s and ev_timestamp > now() - p_interval limit 1; 222s if not found then 222s -- If there has been no SYNC in the last interval, then push one 222s perform public.createEvent('_main', 'SYNC', NULL); 222s return 1; 222s else 222s return 0; 222s end if; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.generate_sync_event(p_interval interval) is 222s 'Generate a sync event if there has not been one in the requested 
interval, and this is a provider node.'; 222s COMMENT 222s drop function if exists public.updateRelname(int4, int4); 222s DROP FUNCTION 222s create or replace function public.updateRelname () 222s returns int4 222s as $$ 222s declare 222s v_no_id int4; 222s v_set_origin int4; 222s begin 222s -- ---- 222s -- Grab the central configuration lock 222s -- ---- 222s lock table public.sl_config_lock; 222s 222s update public.sl_table set 222s tab_relname = PGC.relname, tab_nspname = PGN.nspname 222s from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN 222s where public.sl_table.tab_reloid = PGC.oid 222s and PGC.relnamespace = PGN.oid and 222s (tab_relname <> PGC.relname or tab_nspname <> PGN.nspname); 222s update public.sl_sequence set 222s seq_relname = PGC.relname, seq_nspname = PGN.nspname 222s from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN 222s where public.sl_sequence.seq_reloid = PGC.oid 222s and PGC.relnamespace = PGN.oid and 222s (seq_relname <> PGC.relname or seq_nspname <> PGN.nspname); 222s return 0; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.updateRelname() is 222s 'updateRelname()'; 222s COMMENT 222s drop function if exists public.updateReloid (int4, int4); 222s DROP FUNCTION 222s create or replace function public.updateReloid (p_set_id int4, p_only_on_node int4) 222s returns bigint 222s as $$ 222s declare 222s v_no_id int4; 222s v_set_origin int4; 222s prec record; 222s begin 222s -- ---- 222s -- Check that we either are the set origin or a current 222s -- subscriber of the set. 
222s -- ---- 222s v_no_id := public.getLocalNodeId('_main'); 222s select set_origin into v_set_origin 222s from public.sl_set 222s where set_id = p_set_id 222s for update; 222s if not found then 222s raise exception 'Slony-I: set % not found', p_set_id; 222s end if; 222s if v_set_origin <> v_no_id 222s and not exists (select 1 from public.sl_subscribe 222s where sub_set = p_set_id 222s and sub_receiver = v_no_id) 222s then 222s return 0; 222s end if; 222s 222s -- ---- 222s -- If execution on only one node is requested, check that 222s -- we are that node. 222s -- ---- 222s if p_only_on_node > 0 and p_only_on_node <> v_no_id then 222s return 0; 222s end if; 222s 222s -- Update OIDs for tables to values pulled from non-table objects in pg_class 222s -- This ensures that we won't have collisions when repairing the oids 222s for prec in select tab_id from public.sl_table loop 222s update public.sl_table set tab_reloid = (select oid from pg_class pc where relkind <> 'r' and not exists (select 1 from public.sl_table t2 where t2.tab_reloid = pc.oid) limit 1) 222s where tab_id = prec.tab_id; 222s end loop; 222s 222s for prec in select tab_id, tab_relname, tab_nspname from public.sl_table loop 222s update public.sl_table set 222s tab_reloid = (select PGC.oid 222s from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN 222s where public.slon_quote_brute(PGC.relname) = public.slon_quote_brute(prec.tab_relname) 222s and PGC.relnamespace = PGN.oid 222s and public.slon_quote_brute(PGN.nspname) = public.slon_quote_brute(prec.tab_nspname)) 222s where tab_id = prec.tab_id; 222s end loop; 222s 222s for prec in select seq_id from public.sl_sequence loop 222s update public.sl_sequence set seq_reloid = (select oid from pg_class pc where relkind <> 'S' and not exists (select 1 from public.sl_sequence t2 where t2.seq_reloid = pc.oid) limit 1) 222s where seq_id = prec.seq_id; 222s end loop; 222s 222s for prec in select seq_id, seq_relname, seq_nspname from public.sl_sequence loop 222s 
update public.sl_sequence set 222s seq_reloid = (select PGC.oid 222s from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN 222s where public.slon_quote_brute(PGC.relname) = public.slon_quote_brute(prec.seq_relname) 222s and PGC.relnamespace = PGN.oid 222s and public.slon_quote_brute(PGN.nspname) = public.slon_quote_brute(prec.seq_nspname)) 222s where seq_id = prec.seq_id; 222s end loop; 222s 222s return 1; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.updateReloid(p_set_id int4, p_only_on_node int4) is 222s 'updateReloid(set_id, only_on_node) 222s 222s Updates the respective reloids in sl_table and sl_sequence based on 222s their respective FQNs'; 222s COMMENT 222s create or replace function public.logswitch_start() 222s returns int4 as $$ 222s DECLARE 222s v_current_status int4; 222s BEGIN 222s -- ---- 222s -- Get the current log status. 222s -- ---- 222s select last_value into v_current_status from public.sl_log_status; 222s 222s -- ---- 222s -- status = 0: sl_log_1 active, sl_log_2 clean 222s -- Initiate a switch to sl_log_2. 222s -- ---- 222s if v_current_status = 0 then 222s perform "pg_catalog".setval('public.sl_log_status', 3); 222s perform public.registry_set_timestamp( 222s 'logswitch.laststart', now()); 222s raise notice 'Slony-I: Logswitch to sl_log_2 initiated'; 222s return 2; 222s end if; 222s 222s -- ---- 222s -- status = 1: sl_log_2 active, sl_log_1 clean 222s -- Initiate a switch to sl_log_1. 
222s -- ---- 222s if v_current_status = 1 then 222s perform "pg_catalog".setval('public.sl_log_status', 2); 222s perform public.registry_set_timestamp( 222s 'logswitch.laststart', now()); 222s raise notice 'Slony-I: Logswitch to sl_log_1 initiated'; 222s return 1; 222s end if; 222s 222s raise exception 'Previous logswitch still in progress'; 222s END; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.logswitch_start() is 222s 'logswitch_start() 222s 222s Initiate a log table switch if none is in progress'; 222s COMMENT 222s create or replace function public.logswitch_finish() 222s returns int4 as $$ 222s DECLARE 222s v_current_status int4; 222s v_dummy record; 222s v_origin int8; 222s v_seqno int8; 222s v_xmin bigint; 222s v_purgeable boolean; 222s BEGIN 222s -- ---- 222s -- Get the current log status. 222s -- ---- 222s select last_value into v_current_status from public.sl_log_status; 222s 222s -- ---- 222s -- status value 0 or 1 means that there is no log switch in progress 222s -- ---- 222s if v_current_status = 0 or v_current_status = 1 then 222s return 0; 222s end if; 222s 222s -- ---- 222s -- status = 2: sl_log_1 active, cleanup sl_log_2 222s -- ---- 222s if v_current_status = 2 then 222s v_purgeable := 'true'; 222s 222s -- ---- 222s -- Attempt to lock sl_log_2 in order to make sure there are no other transactions 222s -- currently writing to it. Exit if it is still in use. This prevents TRUNCATE from 222s -- blocking writers to sl_log_2 while it is waiting for a lock. It also prevents it 222s -- immediately truncating log data generated inside the transaction which was active 222s -- when logswitch_finish() was called (and was blocking TRUNCATE) as soon as that 222s -- transaction is committed. 
222s -- ---- 222s begin 222s lock table public.sl_log_2 in access exclusive mode nowait; 222s exception when lock_not_available then 222s raise notice 'Slony-I: could not lock sl_log_2 - sl_log_2 not truncated'; 222s return -1; 222s end; 222s 222s -- ---- 222s -- The cleanup thread calls us after it did the delete and 222s -- vacuum of both log tables. If sl_log_2 is empty now, we 222s -- can truncate it and the log switch is done. 222s -- ---- 222s for v_origin, v_seqno, v_xmin in 222s select ev_origin, ev_seqno, "pg_catalog".txid_snapshot_xmin(ev_snapshot) from public.sl_event 222s where (ev_origin, ev_seqno) in (select ev_origin, min(ev_seqno) from public.sl_event where ev_type = 'SYNC' group by ev_origin) 222s loop 222s if exists (select 1 from public.sl_log_2 where log_origin = v_origin and log_txid >= v_xmin limit 1) then 222s v_purgeable := 'false'; 222s end if; 222s end loop; 222s if not v_purgeable then 222s -- ---- 222s -- Found a row ... log switch is still in progress. 222s -- ---- 222s raise notice 'Slony-I: log switch to sl_log_1 still in progress - sl_log_2 not truncated'; 222s return -1; 222s end if; 222s 222s raise notice 'Slony-I: log switch to sl_log_1 complete - truncate sl_log_2'; 222s truncate public.sl_log_2; 222s if exists (select * from "pg_catalog".pg_class c, "pg_catalog".pg_namespace n, "pg_catalog".pg_attribute a where c.relname = 'sl_log_2' and n.oid = c.relnamespace and a.attrelid = c.oid and a.attname = 'oid') then 222s execute 'alter table public.sl_log_2 set without oids;'; 222s end if; 222s perform "pg_catalog".setval('public.sl_log_status', 0); 222s -- Run addPartialLogIndices() to try to add indices to unused sl_log_? 
table 222s perform public.addPartialLogIndices(); 222s 222s return 1; 222s end if; 222s 222s -- ---- 222s -- status = 3: sl_log_2 active, cleanup sl_log_1 222s -- ---- 222s if v_current_status = 3 then 222s v_purgeable := 'true'; 222s 222s -- ---- 222s -- Attempt to lock sl_log_1 in order to make sure there are no other transactions 222s -- currently writing to it. Exit if it is still in use. This prevents TRUNCATE from 222s -- blocking writes to sl_log_1 while it is waiting for a lock. It also prevents it 222s -- immediately truncating log data generated inside the transaction which was active 222s -- when logswitch_finish() was called (and was blocking TRUNCATE) as soon as that 222s -- transaction is committed. 222s -- ---- 222s begin 222s lock table public.sl_log_1 in access exclusive mode nowait; 222s exception when lock_not_available then 222s raise notice 'Slony-I: could not lock sl_log_1 - sl_log_1 not truncated'; 222s return -1; 222s end; 222s 222s -- ---- 222s -- The cleanup thread calls us after it did the delete and 222s -- vacuum of both log tables. If sl_log_1 is empty now, we 222s -- can truncate it and the log switch is done. 222s -- ---- 222s for v_origin, v_seqno, v_xmin in 222s select ev_origin, ev_seqno, "pg_catalog".txid_snapshot_xmin(ev_snapshot) from public.sl_event 222s where (ev_origin, ev_seqno) in (select ev_origin, min(ev_seqno) from public.sl_event where ev_type = 'SYNC' group by ev_origin) 222s loop 222s if (exists (select 1 from public.sl_log_1 where log_origin = v_origin and log_txid >= v_xmin limit 1)) then 222s v_purgeable := 'false'; 222s end if; 222s end loop; 222s if not v_purgeable then 222s -- ---- 222s -- Found a row ... log switch is still in progress. 
222s -- ---- 222s raise notice 'Slony-I: log switch to sl_log_2 still in progress - sl_log_1 not truncated'; 222s return -1; 222s end if; 222s 222s raise notice 'Slony-I: log switch to sl_log_2 complete - truncate sl_log_1'; 222s truncate public.sl_log_1; 222s if exists (select * from "pg_catalog".pg_class c, "pg_catalog".pg_namespace n, "pg_catalog".pg_attribute a where c.relname = 'sl_log_1' and n.oid = c.relnamespace and a.attrelid = c.oid and a.attname = 'oid') then 222s execute 'alter table public.sl_log_1 set without oids;'; 222s end if; 222s perform "pg_catalog".setval('public.sl_log_status', 1); 222s -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table 222s perform public.addPartialLogIndices(); 222s return 2; 222s end if; 222s END; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.logswitch_finish() is 222s 'logswitch_finish() 222s 222s Attempt to finalize a log table switch in progress 222s return values: 222s -1 if switch in progress, but not complete 222s 0 if no switch in progress 222s 1 if performed truncate on sl_log_2 222s 2 if performed truncate on sl_log_1 222s '; 222s COMMENT 222s create or replace function public.addPartialLogIndices () returns integer as $$ 222s DECLARE 222s v_current_status int4; 222s v_log int4; 222s v_dummy record; 222s v_dummy2 record; 222s idef text; 222s v_count int4; 222s v_iname text; 222s v_ilen int4; 222s v_maxlen int4; 222s BEGIN 222s v_count := 0; 222s select last_value into v_current_status from public.sl_log_status; 222s 222s -- If status is 2 or 3 --> in process of cleanup --> unsafe to create indices 222s if v_current_status in (2, 3) then 222s return 0; 222s end if; 222s 222s if v_current_status = 0 then -- Which log should get indices? 222s v_log := 2; 222s else 222s v_log := 1; 222s end if; 222s -- PartInd_test_db_sl_log_2-node-1 222s -- Add missing indices... 
222s for v_dummy in select distinct set_origin from public.sl_set loop 222s v_iname := 'PartInd_main_sl_log_' || v_log::text || '-node-' 222s || v_dummy.set_origin::text; 222s -- raise notice 'Consider adding partial index % on sl_log_%', v_iname, v_log; 222s -- raise notice 'schema: [_main] tablename:[sl_log_%]', v_log; 222s select * into v_dummy2 from pg_catalog.pg_indexes where tablename = 'sl_log_' || v_log::text and indexname = v_iname; 222s if not found then 222s -- raise notice 'index was not found - add it!'; 222s v_iname := 'PartInd_main_sl_log_' || v_log::text || '-node-' || v_dummy.set_origin::text; 222s v_ilen := pg_catalog.length(v_iname); 222s v_maxlen := pg_catalog.current_setting('max_identifier_length'::text)::int4; 222s if v_ilen > v_maxlen then 222s raise exception 'Length of proposed index name [%] > max_identifier_length [%] - cluster name probably too long', v_ilen, v_maxlen; 222s end if; 222s 222s idef := 'create index "' || v_iname || 222s '" on public.sl_log_' || v_log::text || ' USING btree(log_txid) where (log_origin = ' || v_dummy.set_origin::text || ');'; 222s execute idef; 222s v_count := v_count + 1; 222s else 222s -- raise notice 'Index % already present - skipping', v_iname; 222s end if; 222s end loop; 222s 222s -- Remove unneeded indices... 
222s for v_dummy in select indexname from pg_catalog.pg_indexes i where i.tablename = 'sl_log_' || v_log::text and 222s i.indexname like ('PartInd_main_sl_log_' || v_log::text || '-node-%') and 222s not exists (select 1 from public.sl_set where 222s i.indexname = 'PartInd_main_sl_log_' || v_log::text || '-node-' || set_origin::text) 222s loop 222s -- raise notice 'Dropping obsolete index %d', v_dummy.indexname; 222s idef := 'drop index public."' || v_dummy.indexname || '";'; 222s execute idef; 222s v_count := v_count - 1; 222s end loop; 222s return v_count; 222s END 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.addPartialLogIndices () is 222s 'Add partial indexes, if possible, to the unused sl_log_? table for 222s all origin nodes, and drop any that are no longer needed. 222s 222s This function presently gets run any time set origins are manipulated 222s (FAILOVER, STORE SET, MOVE SET, DROP SET), as well as each time the 222s system switches between sl_log_1 and sl_log_2.'; 222s COMMENT 222s create or replace function public.check_table_field_exists (p_namespace text, p_table text, p_field text) 222s returns bool as $$ 222s BEGIN 222s return exists ( 222s select 1 from "information_schema".columns 222s where table_schema = p_namespace 222s and table_name = p_table 222s and column_name = p_field 222s ); 222s END;$$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.check_table_field_exists (p_namespace text, p_table text, p_field text) 222s is 'Check if a table has a specific attribute'; 222s COMMENT 222s create or replace function public.add_missing_table_field (p_namespace text, p_table text, p_field text, p_type text) 222s returns bool as $$ 222s DECLARE 222s v_row record; 222s v_query text; 222s BEGIN 222s if not public.check_table_field_exists(p_namespace, p_table, p_field) then 222s raise notice 'Upgrade table %.% - add field %', p_namespace, p_table, p_field; 222s v_query := 'alter table ' || p_namespace || 
'.' || p_table || ' add column '; 222s v_query := v_query || p_field || ' ' || p_type || ';'; 222s execute v_query; 222s return 't'; 222s else 222s return 'f'; 222s end if; 222s END;$$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.add_missing_table_field (p_namespace text, p_table text, p_field text, p_type text) 222s is 'Add a column of a given type to a table if it is missing'; 222s COMMENT 222s create or replace function public.upgradeSchema(p_old text) 222s returns text as $$ 222s declare 222s v_tab_row record; 222s v_query text; 222s v_keepstatus text; 222s begin 222s -- If old version is pre-2.0, then we require a special upgrade process 222s if p_old like '1.%' then 222s raise exception 'Upgrading to Slony-I 2.x requires running slony_upgrade_20'; 222s end if; 222s 222s perform public.upgradeSchemaAddTruncateTriggers(); 222s 222s -- Change all Slony-I-defined columns that are "timestamp without time zone" to "timestamp *WITH* time zone" 222s if exists (select 1 from information_schema.columns c 222s where table_schema = '_main' and data_type = 'timestamp without time zone' 222s and exists (select 1 from information_schema.tables t where t.table_schema = c.table_schema and t.table_name = c.table_name and t.table_type = 'BASE TABLE') 222s and (c.table_name, c.column_name) in (('sl_confirm', 'con_timestamp'), ('sl_event', 'ev_timestamp'), ('sl_registry', 'reg_timestamp'),('sl_archive_counter', 'ac_timestamp'))) 222s then 222s 222s -- Preserve sl_status 222s select pg_get_viewdef('public.sl_status') into v_keepstatus; 222s execute 'drop view sl_status'; 222s for v_tab_row in select table_schema, table_name, column_name from information_schema.columns c 222s where table_schema = '_main' and data_type = 'timestamp without time zone' 222s and exists (select 1 from information_schema.tables t where t.table_schema = c.table_schema and t.table_name = c.table_name and t.table_type = 'BASE TABLE') 222s and (table_name, column_name) in 
(('sl_confirm', 'con_timestamp'), ('sl_event', 'ev_timestamp'), ('sl_registry', 'reg_timestamp'),('sl_archive_counter', 'ac_timestamp')) 222s loop 222s raise notice 'Changing Slony-I column [%.%] to timestamp WITH time zone', v_tab_row.table_name, v_tab_row.column_name; 222s v_query := 'alter table ' || public.slon_quote_brute(v_tab_row.table_schema) || 222s '.' || v_tab_row.table_name || ' alter column ' || v_tab_row.column_name || 222s ' type timestamp with time zone;'; 222s execute v_query; 222s end loop; 222s -- restore sl_status 222s execute 'create view sl_status as ' || v_keepstatus; 222s end if; 222s 222s if not exists (select 1 from information_schema.tables where table_schema = '_main' and table_name = 'sl_components') then 222s v_query := ' 222s create table public.sl_components ( 222s co_actor text not null primary key, 222s co_pid integer not null, 222s co_node integer not null, 222s co_connection_pid integer not null, 222s co_activity text, 222s co_starttime timestamptz not null, 222s co_event bigint, 222s co_eventtype text 222s ) without oids; 222s '; 222s execute v_query; 222s end if; 222s 222s 222s 222s 222s 222s if not exists (select 1 from information_schema.tables t where table_schema = '_main' and table_name = 'sl_event_lock') then 222s v_query := 'create table public.sl_event_lock (dummy integer);'; 222s execute v_query; 222s end if; 222s 222s if not exists (select 1 from information_schema.tables t 222s where table_schema = '_main' 222s and table_name = 'sl_apply_stats') then 222s v_query := ' 222s create table public.sl_apply_stats ( 222s as_origin int4, 222s as_num_insert int8, 222s as_num_update int8, 222s as_num_delete int8, 222s as_num_truncate int8, 222s as_num_script int8, 222s as_num_total int8, 222s as_duration interval, 222s as_apply_first timestamptz, 222s as_apply_last timestamptz, 222s as_cache_prepare int8, 222s as_cache_hit int8, 222s as_cache_evict int8, 222s as_cache_prepare_max int8 222s ) WITHOUT OIDS;'; 222s execute 
v_query; 222s end if; 222s 222s -- 222s -- On the upgrade to 2.2, we change the layout of sl_log_N by 222s -- adding columns log_tablenspname, log_tablerelname, and 222s -- log_cmdupdncols as well as changing log_cmddata into 222s -- log_cmdargs, which is a text array. 222s -- 222s if not public.check_table_field_exists('_main', 'sl_log_1', 'log_cmdargs') then 222s -- 222s -- Check that the cluster is completely caught up 222s -- 222s if public.check_unconfirmed_log() then 222s raise EXCEPTION 'cannot upgrade to new sl_log_N format due to existing unreplicated data'; 222s end if; 222s 222s -- 222s -- Drop tables sl_log_1 and sl_log_2 222s -- 222s drop table public.sl_log_1; 222s drop table public.sl_log_2; 222s 222s -- 222s -- Create the new sl_log_1 222s -- 222s create table public.sl_log_1 ( 222s log_origin int4, 222s log_txid bigint, 222s log_tableid int4, 222s log_actionseq int8, 222s log_tablenspname text, 222s log_tablerelname text, 222s log_cmdtype "char", 222s log_cmdupdncols int4, 222s log_cmdargs text[] 222s ) without oids; 222s create index sl_log_1_idx1 on public.sl_log_1 222s (log_origin, log_txid, log_actionseq); 222s 222s comment on table public.sl_log_1 is 'Stores each change to be propagated to subscriber nodes'; 222s comment on column public.sl_log_1.log_origin is 'Origin node from which the change came'; 222s comment on column public.sl_log_1.log_txid is 'Transaction ID on the origin node'; 222s comment on column public.sl_log_1.log_tableid is 'The table ID (from sl_table.tab_id) that this log entry is to affect'; 222s comment on column public.sl_log_1.log_actionseq is 'The sequence number in which actions will be applied on replicas'; 222s comment on column public.sl_log_1.log_tablenspname is 'The schema name of the table affected'; 222s comment on column public.sl_log_1.log_tablerelname is 'The table name of the table affected'; 222s comment on column public.sl_log_1.log_cmdtype is 'Replication action to take. 
U = Update, I = Insert, D = DELETE, T = TRUNCATE'; 222s comment on column public.sl_log_1.log_cmdupdncols is 'For cmdtype=U the number of updated columns in cmdargs'; 222s comment on column public.sl_log_1.log_cmdargs is 'The data needed to perform the log action on the replica'; 222s 222s -- 222s -- Create the new sl_log_2 222s -- 222s create table public.sl_log_2 ( 222s log_origin int4, 222s log_txid bigint, 222s log_tableid int4, 222s log_actionseq int8, 222s log_tablenspname text, 222s log_tablerelname text, 222s log_cmdtype "char", 222s log_cmdupdncols int4, 222s log_cmdargs text[] 222s ) without oids; 222s create index sl_log_2_idx1 on public.sl_log_2 222s (log_origin, log_txid, log_actionseq); 222s 222s comment on table public.sl_log_2 is 'Stores each change to be propagated to subscriber nodes'; 222s comment on column public.sl_log_2.log_origin is 'Origin node from which the change came'; 222s comment on column public.sl_log_2.log_txid is 'Transaction ID on the origin node'; 222s comment on column public.sl_log_2.log_tableid is 'The table ID (from sl_table.tab_id) that this log entry is to affect'; 222s comment on column public.sl_log_2.log_actionseq is 'The sequence number in which actions will be applied on replicas'; 222s comment on column public.sl_log_2.log_tablenspname is 'The schema name of the table affected'; 222s comment on column public.sl_log_2.log_tablerelname is 'The table name of the table affected'; 222s comment on column public.sl_log_2.log_cmdtype is 'Replication action to take. 
U = Update, I = Insert, D = DELETE, T = TRUNCATE'; 222s comment on column public.sl_log_2.log_cmdupdncols is 'For cmdtype=U the number of updated columns in cmdargs'; 222s comment on column public.sl_log_2.log_cmdargs is 'The data needed to perform the log action on the replica'; 222s 222s create table public.sl_log_script ( 222s log_origin int4, 222s log_txid bigint, 222s log_actionseq int8, 222s log_cmdtype "char", 222s log_cmdargs text[] 222s ) WITHOUT OIDS; 222s create index sl_log_script_idx1 on public.sl_log_script 222s (log_origin, log_txid, log_actionseq); 222s 222s comment on table public.sl_log_script is 'Captures SQL script queries to be propagated to subscriber nodes'; 222s comment on column public.sl_log_script.log_origin is 'Origin node from which the change came'; 222s comment on column public.sl_log_script.log_txid is 'Transaction ID on the origin node'; 222s comment on column public.sl_log_script.log_actionseq is 'The sequence number in which actions will be applied on replicas'; 222s comment on column public.sl_log_script.log_cmdtype is 'Replication action to take. 
S = Script statement, s = Script complete'; 222s comment on column public.sl_log_script.log_cmdargs is 'The DDL statement, optionally followed by selected nodes to execute it on.'; 222s 222s -- 222s -- Put the log apply triggers back onto sl_log_1/2 222s -- 222s create trigger apply_trigger 222s before INSERT on public.sl_log_1 222s for each row execute procedure public.logApply('_main'); 222s alter table public.sl_log_1 222s enable replica trigger apply_trigger; 222s create trigger apply_trigger 222s before INSERT on public.sl_log_2 222s for each row execute procedure public.logApply('_main'); 222s alter table public.sl_log_2 222s enable replica trigger apply_trigger; 222s end if; 222s if not exists (select 1 from information_schema.routines where routine_schema = '_main' and routine_name = 'string_agg') then 222s CREATE AGGREGATE public.string_agg(text) ( 222s SFUNC=public.agg_text_sum, 222s STYPE=text, 222s INITCOND='' 222s ); 222s end if; 222s if not exists (select 1 from information_schema.views where table_schema='_main' and table_name='sl_failover_targets') then 222s create view public.sl_failover_targets as 222s select set_id, 222s set_origin as set_origin, 222s sub1.sub_receiver as backup_id 222s 222s FROM 222s public.sl_subscribe sub1 222s ,public.sl_set set1 222s where 222s sub1.sub_set=set_id 222s and sub1.sub_forward=true 222s --exclude candidates where the set_origin 222s --has a path to a node but the failover 222s --candidate has no path to that node 222s and sub1.sub_receiver not in 222s (select p1.pa_client from 222s public.sl_path p1 222s left outer join public.sl_path p2 on 222s (p2.pa_client=p1.pa_client 222s and p2.pa_server=sub1.sub_receiver) 222s where p2.pa_client is null 222s and p1.pa_server=set_origin 222s and p1.pa_client<>sub1.sub_receiver 222s ) 222s and sub1.sub_provider=set_origin 222s --exclude any subscribers that are not 222s --direct subscribers of all sets on the 222s --origin 222s and sub1.sub_receiver not in 222s (select 
direct_recv.sub_receiver 222s from 222s 222s (--all direct receivers of the first set 222s select subs2.sub_receiver 222s from public.sl_subscribe subs2 222s where subs2.sub_provider=set1.set_origin 222s and subs2.sub_set=set1.set_id) as 222s direct_recv 222s inner join 222s (--all other sets from the origin 222s select set_id from public.sl_set set2 222s where set2.set_origin=set1.set_origin 222s and set2.set_id<>sub1.sub_set) 222s as othersets on(true) 222s left outer join public.sl_subscribe subs3 222s on(subs3.sub_set=othersets.set_id 222s and subs3.sub_forward=true 222s and subs3.sub_provider=set1.set_origin 222s and direct_recv.sub_receiver=subs3.sub_receiver) 222s where subs3.sub_receiver is null 222s ); 222s end if; 222s 222s if not public.check_table_field_exists('_main', 'sl_node', 'no_failed') then 222s alter table public.sl_node add column no_failed bool; 222s update public.sl_node set no_failed=false; 222s end if; 222s return p_old; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s create or replace function public.check_unconfirmed_log () 222s returns bool as $$ 222s declare 222s v_rc bool = false; 222s v_error bool = false; 222s v_origin integer; 222s v_allconf bigint; 222s v_allsnap txid_snapshot; 222s v_count bigint; 222s begin 222s -- 222s -- Loop over all nodes that are the origin of at least one set 222s -- 222s for v_origin in select distinct set_origin as no_id 222s from public.sl_set loop 222s -- 222s -- Per origin determine which is the highest event seqno 222s -- that is confirmed by all subscribers to any of the 222s -- origin's sets. 
222s -- 222s select into v_allconf min(max_seqno) from ( 222s select con_received, max(con_seqno) as max_seqno 222s from public.sl_confirm 222s where con_origin = v_origin 222s and con_received in ( 222s select distinct sub_receiver 222s from public.sl_set as SET, 222s public.sl_subscribe as SUB 222s where SET.set_id = SUB.sub_set 222s and SET.set_origin = v_origin 222s ) 222s group by con_received 222s ) as maxconfirmed; 222s if not found then 222s raise NOTICE 'check_unconfirmed_log(): cannot determine highest ev_seqno for node % confirmed by all subscribers', v_origin; 222s v_error = true; 222s continue; 222s end if; 222s 222s -- 222s -- Get the txid snapshot that corresponds with that event 222s -- 222s select into v_allsnap ev_snapshot 222s from public.sl_event 222s where ev_origin = v_origin 222s and ev_seqno = v_allconf; 222s if not found then 222s raise NOTICE 'check_unconfirmed_log(): cannot find event %,% in sl_event', v_origin, v_allconf; 222s v_error = true; 222s continue; 222s end if; 222s 222s -- 222s -- Count the number of log rows that appeared after that event. 
222s -- 222s select into v_count count(*) from ( 222s select 1 from public.sl_log_1 222s where log_origin = v_origin 222s and log_txid >= "pg_catalog".txid_snapshot_xmax(v_allsnap) 222s union all 222s select 1 from public.sl_log_1 222s where log_origin = v_origin 222s and log_txid in ( 222s select * from "pg_catalog".txid_snapshot_xip(v_allsnap) 222s ) 222s union all 222s select 1 from public.sl_log_2 222s where log_origin = v_origin 222s and log_txid >= "pg_catalog".txid_snapshot_xmax(v_allsnap) 222s union all 222s select 1 from public.sl_log_2 222s where log_origin = v_origin 222s and log_txid in ( 222s select * from "pg_catalog".txid_snapshot_xip(v_allsnap) 222s ) 222s ) as cnt; 222s 222s if v_count > 0 then 222s raise NOTICE 'check_unconfirmed_log(): origin % has % log rows that have not propagated to all subscribers yet', v_origin, v_count; 222s v_rc = true; 222s end if; 222s end loop; 222s 222s if v_error then 222s raise EXCEPTION 'check_unconfirmed_log(): aborting due to previous inconsistency'; 222s end if; 222s 222s return v_rc; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s set search_path to public 222s ; 222s SET 222s comment on function public.upgradeSchema(p_old text) is 222s 'Called during "update functions" by slonik to perform schema changes'; 222s COMMENT 222s create or replace view public.sl_status as select 222s E.ev_origin as st_origin, 222s C.con_received as st_received, 222s E.ev_seqno as st_last_event, 222s E.ev_timestamp as st_last_event_ts, 222s C.con_seqno as st_last_received, 222s C.con_timestamp as st_last_received_ts, 222s CE.ev_timestamp as st_last_received_event_ts, 222s E.ev_seqno - C.con_seqno as st_lag_num_events, 222s current_timestamp - CE.ev_timestamp as st_lag_time 222s from public.sl_event E, public.sl_confirm C, 222s public.sl_event CE 222s where E.ev_origin = C.con_origin 222s and CE.ev_origin = E.ev_origin 222s and CE.ev_seqno = C.con_seqno 222s and (E.ev_origin, E.ev_seqno) in 222s (select ev_origin, 
max(ev_seqno) 222s from public.sl_event 222s where ev_origin = public.getLocalNodeId('_main') 222s group by 1 222s ) 222s and (C.con_origin, C.con_received, C.con_seqno) in 222s (select con_origin, con_received, max(con_seqno) 222s from public.sl_confirm 222s where con_origin = public.getLocalNodeId('_main') 222s group by 1, 2 222s ); 222s CREATE VIEW 222s comment on view public.sl_status is 'View showing how far behind remote nodes are.'; 222s COMMENT 222s create or replace function public.copyFields(p_tab_id integer) 222s returns text 222s as $$ 222s declare 222s result text; 222s prefix text; 222s prec record; 222s begin 222s result := ''; 222s prefix := '('; -- Initially, prefix is the opening paren 222s 222s for prec in select public.slon_quote_input(a.attname) as column from public.sl_table t, pg_catalog.pg_attribute a where t.tab_id = p_tab_id and t.tab_reloid = a.attrelid and a.attnum > 0 and a.attisdropped = false order by attnum 222s loop 222s result := result || prefix || prec.column; 222s prefix := ','; -- Subsequently, prepend columns with commas 222s end loop; 222s result := result || ')'; 222s return result; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.copyFields(p_tab_id integer) is 222s 'Return a string consisting of what should be appended to a COPY statement 222s to specify fields for the passed-in tab_id. 222s 222s In PG versions > 7.3, this looks like (field1,field2,...fieldn)'; 222s COMMENT 222s create or replace function public.prepareTableForCopy(p_tab_id int4) 222s returns int4 222s as $$ 222s declare 222s v_tab_oid oid; 222s v_tab_fqname text; 222s begin 222s -- ---- 222s -- Get the OID and fully qualified name for the table 222s -- --- 222s select PGC.oid, 222s public.slon_quote_brute(PGN.nspname) || '.' 
|| 222s public.slon_quote_brute(PGC.relname) as tab_fqname 222s into v_tab_oid, v_tab_fqname 222s from public.sl_table T, 222s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 222s where T.tab_id = p_tab_id 222s and T.tab_reloid = PGC.oid 222s and PGC.relnamespace = PGN.oid; 222s if not found then 222s raise exception 'Table with ID % not found in sl_table', p_tab_id; 222s end if; 222s 222s -- ---- 222s -- Try using truncate to empty the table and fallback to 222s -- delete on error. 222s -- ---- 222s perform public.TruncateOnlyTable(v_tab_fqname); 222s raise notice 'truncate of % succeeded', v_tab_fqname; 222s 222s -- suppress index activity 222s perform public.disable_indexes_on_table(v_tab_oid); 222s 222s return 1; 222s exception when others then 222s raise notice 'truncate of % failed - doing delete', v_tab_fqname; 222s perform public.disable_indexes_on_table(v_tab_oid); 222s execute 'delete from only ' || public.slon_quote_input(v_tab_fqname); 222s return 0; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.prepareTableForCopy(p_tab_id int4) is 222s 'Delete all data and suppress index maintenance'; 222s COMMENT 222s create or replace function public.finishTableAfterCopy(p_tab_id int4) 222s returns int4 222s as $$ 222s declare 222s v_tab_oid oid; 222s v_tab_fqname text; 222s begin 222s -- ---- 222s -- Get the table's OID and fully qualified name 222s -- ---- 222s select PGC.oid, 222s public.slon_quote_brute(PGN.nspname) || '.' || 222s public.slon_quote_brute(PGC.relname) as tab_fqname 222s into v_tab_oid, v_tab_fqname 222s from public.sl_table T, 222s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 222s where T.tab_id = p_tab_id 222s and T.tab_reloid = PGC.oid 222s and PGC.relnamespace = PGN.oid; 222s if not found then 222s raise exception 'Table with ID % not found in sl_table', p_tab_id; 222s end if; 222s 222s -- ---- 222s -- Reenable indexes and reindex the table. 
222s -- ---- 222s perform public.enable_indexes_on_table(v_tab_oid); 222s execute 'reindex table ' || public.slon_quote_input(v_tab_fqname); 222s 222s return 1; 222s end; 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.finishTableAfterCopy(p_tab_id int4) is 222s 'Reenable index maintenance and reindex the table'; 222s COMMENT 222s create or replace function public.setup_vactables_type () returns integer as $$ 222s begin 222s if not exists (select 1 from pg_catalog.pg_type t, pg_catalog.pg_namespace n 222s where n.nspname = '_main' and t.typnamespace = n.oid and 222s t.typname = 'vactables') then 222s execute 'create type public.vactables as (nspname name, relname name);'; 222s end if; 222s return 1; 222s end 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.setup_vactables_type () is 222s 'Function to be run as part of loading slony1_funcs.sql that creates the vactables type if it is missing'; 222s COMMENT 222s select public.setup_vactables_type(); 222s setup_vactables_type 222s ---------------------- 222s 1 222s (1 row) 222s 222s drop function public.setup_vactables_type (); 222s DROP FUNCTION 222s create or replace function public.TablesToVacuum () returns setof public.vactables as $$ 222s declare 222s prec public.vactables%rowtype; 222s begin 222s prec.nspname := '_main'; 222s prec.relname := 'sl_event'; 222s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 222s return next prec; 222s end if; 222s prec.nspname := '_main'; 222s prec.relname := 'sl_confirm'; 222s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 222s return next prec; 222s end if; 222s prec.nspname := '_main'; 222s prec.relname := 'sl_setsync'; 222s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 222s return next prec; 222s end if; 222s prec.nspname := '_main'; 222s prec.relname := 'sl_seqlog'; 222s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 222s return next prec; 
222s end if; 222s prec.nspname := '_main'; 222s prec.relname := 'sl_archive_counter'; 222s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 222s return next prec; 222s end if; 222s prec.nspname := '_main'; 222s prec.relname := 'sl_components'; 222s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 222s return next prec; 222s end if; 222s prec.nspname := '_main'; 222s prec.relname := 'sl_log_script'; 222s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 222s return next prec; 222s end if; 222s prec.nspname := 'pg_catalog'; 222s prec.relname := 'pg_listener'; 222s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 222s return next prec; 222s end if; 222s prec.nspname := 'pg_catalog'; 222s prec.relname := 'pg_statistic'; 222s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 222s return next prec; 222s end if; 222s 222s return; 222s end 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.TablesToVacuum () is 222s 'Return a list of tables that require frequent vacuuming. 
The 222s function is used so that the list is not hardcoded into C code.'; 222s COMMENT 222s create or replace function public.add_empty_table_to_replication(p_set_id int4, p_tab_id int4, p_nspname text, p_tabname text, p_idxname text, p_comment text) returns bigint as $$ 222s declare 222s 222s prec record; 222s v_origin int4; 222s v_isorigin boolean; 222s v_fqname text; 222s v_query text; 222s v_rows integer; 222s v_idxname text; 222s 222s begin 222s -- Need to validate that the set exists; the set will tell us if this is the origin 222s select set_origin into v_origin from public.sl_set where set_id = p_set_id; 222s if not found then 222s raise exception 'add_empty_table_to_replication: set % not found!', p_set_id; 222s end if; 222s 222s -- Need to be aware of whether or not this node is origin for the set 222s v_isorigin := ( v_origin = public.getLocalNodeId('_main') ); 222s 222s v_fqname := '"' || p_nspname || '"."' || p_tabname || '"'; 222s -- Take out a lock on the table 222s v_query := 'lock ' || v_fqname || ';'; 222s execute v_query; 222s 222s if v_isorigin then 222s -- On the origin, verify that the table is empty, failing if it has any tuples 222s v_query := 'select 1 as tuple from ' || v_fqname || ' limit 1;'; 222s execute v_query into prec; 222s GET DIAGNOSTICS v_rows = ROW_COUNT; 222s if v_rows = 0 then 222s raise notice 'add_empty_table_to_replication: table % empty on origin - OK', v_fqname; 222s else 222s raise exception 'add_empty_table_to_replication: table % contained tuples on origin node %', v_fqname, v_origin; 222s end if; 222s else 222s -- On other nodes, TRUNCATE the table 222s v_query := 'truncate ' || v_fqname || ';'; 222s execute v_query; 222s end if; 222s -- If p_idxname is NULL, then look up the PK index, and RAISE EXCEPTION if one does not exist 222s if p_idxname is NULL then 222s select c2.relname into prec from pg_catalog.pg_index i, pg_catalog.pg_class c1, pg_catalog.pg_class c2, pg_catalog.pg_namespace n where i.indrelid = c1.oid 
and i.indexrelid = c2.oid and c1.relname = p_tabname and i.indisprimary and n.nspname = p_nspname and n.oid = c1.relnamespace; 222s if not found then 222s raise exception 'add_empty_table_to_replication: table % has no primary key and no candidate specified!', v_fqname; 222s else 222s v_idxname := prec.relname; 222s end if; 222s else 222s v_idxname := p_idxname; 222s end if; 222s return public.setAddTable_int(p_set_id, p_tab_id, v_fqname, v_idxname, p_comment); 222s end 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.add_empty_table_to_replication(p_set_id int4, p_tab_id int4, p_nspname text, p_tabname text, p_idxname text, p_comment text) is 222s 'Verify that a table is empty, and add it to replication. 222s tab_idxname is optional - if NULL, then we use the primary key. 222s 222s Note that this function is to be run within an EXECUTE SCRIPT script, 222s so it runs at the right place in the transaction stream on all 222s nodes.'; 222s COMMENT 222s create or replace function public.replicate_partition(p_tab_id int4, p_nspname text, p_tabname text, p_idxname text, p_comment text) returns bigint as $$ 222s declare 222s prec record; 222s prec2 record; 222s v_set_id int4; 222s 222s begin 222s -- Look up the parent table; fail if it does not exist 222s select c1.oid into prec from pg_catalog.pg_class c1, pg_catalog.pg_class c2, pg_catalog.pg_inherits i, pg_catalog.pg_namespace n where c1.oid = i.inhparent and c2.oid = i.inhrelid and n.oid = c2.relnamespace and n.nspname = p_nspname and c2.relname = p_tabname; 222s if not found then 222s raise exception 'replicate_partition: No parent table found for %.%!', p_nspname, p_tabname; 222s end if; 222s 222s -- The parent table tells us what replication set to use 222s select tab_set into prec2 from public.sl_table where tab_reloid = prec.oid; 222s if not found then 222s raise exception 'replicate_partition: Parent table % for new partition %.% is not replicated!', prec.oid, p_nspname, p_tabname; 
222s end if; 222s 222s v_set_id := prec2.tab_set; 222s 222s -- Now, we have all the parameters necessary to run add_empty_table_to_replication... 222s return public.add_empty_table_to_replication(v_set_id, p_tab_id, p_nspname, p_tabname, p_idxname, p_comment); 222s end 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.replicate_partition(p_tab_id int4, p_nspname text, p_tabname text, p_idxname text, p_comment text) is 222s 'Add a partition table to replication. 222s tab_idxname is optional - if NULL, then we use the primary key. 222s This function looks up replication configuration via the parent table. 222s 222s Note that this function is to be run within an EXECUTE SCRIPT script, 222s so it runs at the right place in the transaction stream on all 222s nodes.'; 222s COMMENT 222s create or replace function public.disable_indexes_on_table (i_oid oid) 222s returns integer as $$ 222s begin 222s -- Setting pg_class.relhasindex to false will cause copy not to 222s -- maintain any indexes. At the end of the copy we will reenable 222s -- them and reindex the table. This bulk creating of indexes is 222s -- faster. 222s 222s update pg_catalog.pg_class set relhasindex ='f' where oid = i_oid; 222s return 1; 222s end $$ 222s language plpgsql; 222s CREATE FUNCTION 222s comment on function public.disable_indexes_on_table(i_oid oid) is 222s 'disable indexes on the specified table. 222s Used during subscription process to suppress indexes, which allows 222s COPY to go much faster. 222s 222s This may be set as a SECURITY DEFINER in order to eliminate the need 222s for superuser access by Slony-I. 
222s '; 222s COMMENT 222s create or replace function public.enable_indexes_on_table (i_oid oid) 222s returns integer as $$ 222s begin 222s update pg_catalog.pg_class set relhasindex ='t' where oid = i_oid; 222s return 1; 222s end $$ 222s language plpgsql 222s security definer; 222s CREATE FUNCTION 222s comment on function public.enable_indexes_on_table(i_oid oid) is 222s 're-enable indexes on the specified table. 222s 222s This may be set as a SECURITY DEFINER in order to eliminate the need 222s for superuser access by Slony-I. 222s '; 222s COMMENT 222s drop function if exists public.reshapeSubscription(int4,int4,int4); 222s DROP FUNCTION 222s create or replace function public.reshapeSubscription (p_sub_origin int4, p_sub_provider int4, p_sub_receiver int4) returns int4 as $$ 222s begin 222s update public.sl_subscribe 222s set sub_provider=p_sub_provider 222s from public.sl_set 222s WHERE sub_set=sl_set.set_id 222s and sl_set.set_origin=p_sub_origin and sub_receiver=p_sub_receiver; 222s if found then 222s perform public.RebuildListenEntries(); 222s notify "_main_Restart"; 222s end if; 222s return 0; 222s end 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.reshapeSubscription(p_sub_origin int4, p_sub_provider int4, p_sub_receiver int4) is 222s 'Run on a receiver/subscriber node when the provider for that 222s subscription is being changed. 
Slonik will invoke this method 222s before the SUBSCRIBE_SET event propagates to the receiver 222s so listen paths can be updated.'; 222s COMMENT 222s create or replace function public.slon_node_health_check() returns boolean as $$ 222s declare 222s prec record; 222s all_ok boolean; 222s begin 222s all_ok := 't'::boolean; 222s -- validate that all tables in sl_table have: 222s -- sl_table agreeing with pg_class 222s for prec in select tab_id, tab_relname, tab_nspname from 222s public.sl_table t where not exists (select 1 from pg_catalog.pg_class c, pg_catalog.pg_namespace n 222s where c.oid = t.tab_reloid and c.relname = t.tab_relname and c.relnamespace = n.oid and n.nspname = t.tab_nspname) loop 222s all_ok := 'f'::boolean; 222s raise warning 'table [id,nsp,name]=[%,%,%] - sl_table does not match pg_class/pg_namespace', prec.tab_id, prec.tab_relname, prec.tab_nspname; 222s end loop; 222s if not all_ok then 222s raise warning 'Mismatch found between sl_table and pg_class. Slonik command REPAIR CONFIG may be useful to rectify this.'; 222s end if; 222s return all_ok; 222s end 222s $$ language plpgsql; 222s CREATE FUNCTION 222s comment on function public.slon_node_health_check() is 'called when slon starts up to validate that there are no problems with node configuration. 
Returns t if all is OK, f if there is a problem.'; 222s COMMENT 222s create or replace function public.log_truncate () returns trigger as 222s $$ 222s declare 222s r_role text; 222s c_nspname text; 222s c_relname text; 222s c_log integer; 222s c_node integer; 222s c_tabid integer; 222s begin 222s -- Ignore this call if session_replication_role = 'local' 222s select into r_role setting 222s from pg_catalog.pg_settings where name = 'session_replication_role'; 222s if r_role = 'local' then 222s return NULL; 222s end if; 222s 222s c_tabid := tg_argv[0]; 222s c_node := public.getLocalNodeId('_main'); 222s select tab_nspname, tab_relname into c_nspname, c_relname 222s from public.sl_table where tab_id = c_tabid; 222s select last_value into c_log from public.sl_log_status; 222s if c_log in (0, 2) then 222s insert into public.sl_log_1 ( 222s log_origin, log_txid, log_tableid, 222s log_actionseq, log_tablenspname, 222s log_tablerelname, log_cmdtype, 222s log_cmdupdncols, log_cmdargs 222s ) values ( 222s c_node, pg_catalog.txid_current(), c_tabid, 222s nextval('public.sl_action_seq'), c_nspname, 222s c_relname, 'T', 0, '{}'::text[]); 222s else -- (1, 3) 222s insert into public.sl_log_2 ( 222s log_origin, log_txid, log_tableid, 222s log_actionseq, log_tablenspname, 222s log_tablerelname, log_cmdtype, 222s log_cmdupdncols, log_cmdargs 222s ) values ( 222s c_node, pg_catalog.txid_current(), c_tabid, 222s nextval('public.sl_action_seq'), c_nspname, 222s c_relname, 'T', 0, '{}'::text[]); 222s end if; 222s return NULL; 222s end 222s $$ language plpgsql 222s security definer; 222s CREATE FUNCTION 222s comment on function public.log_truncate () 222s is 'trigger function run when a replicated table receives a TRUNCATE request'; 222s COMMENT 222s create or replace function public.deny_truncate () returns trigger as 222s $$ 222s declare 222s r_role text; 222s begin 222s -- Ignore this call if session_replication_role = 'local' 222s select into r_role setting 222s from 
pg_catalog.pg_settings where name = 'session_replication_role';
222s if r_role = 'local' then
222s return NULL;
222s end if;
222s
222s raise exception 'truncation of replicated table forbidden on subscriber node';
222s end
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.deny_truncate ()
222s is 'trigger function run when a replicated table receives a TRUNCATE request';
222s COMMENT
222s create or replace function public.store_application_name (i_name text) returns text as $$
222s declare
222s p_command text;
222s begin
222s if exists (select 1 from pg_catalog.pg_settings where name = 'application_name') then
222s p_command := 'set application_name to '''|| i_name || ''';';
222s execute p_command;
222s return i_name;
222s end if;
222s return NULL::text;
222s end $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.store_application_name (i_name text) is
222s 'Set application_name GUC, if possible. Returns NULL if it fails to work.';
222s COMMENT
222s create or replace function public.is_node_reachable(origin_node_id integer,
222s receiver_node_id integer) returns boolean as $$
222s declare
222s listen_row record;
222s reachable boolean;
222s begin
222s reachable:=false;
222s select * into listen_row from public.sl_listen where
222s li_origin=origin_node_id and li_receiver=receiver_node_id;
222s if found then
222s reachable:=true;
222s end if;
222s return reachable;
222s end $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.is_node_reachable(origin_node_id integer, receiver_node_id integer)
222s is 'Is the receiver node reachable from the origin, via any of the listen paths?';
222s COMMENT
222s create or replace function public.component_state (i_actor text, i_pid integer, i_node integer, i_conn_pid integer, i_activity text, i_starttime timestamptz, i_event bigint, i_eventtype text) returns integer as $$
222s begin
222s -- Trim out old state for this component
222s if not exists (select 1
222s from public.sl_components where co_actor = i_actor) then
222s insert into public.sl_components
222s (co_actor, co_pid, co_node, co_connection_pid, co_activity, co_starttime, co_event, co_eventtype)
222s values
222s (i_actor, i_pid, i_node, i_conn_pid, i_activity, i_starttime, i_event, i_eventtype);
222s else
222s update public.sl_components
222s set
222s co_connection_pid = i_conn_pid, co_activity = i_activity, co_starttime = i_starttime, co_event = i_event,
222s co_eventtype = i_eventtype
222s where co_actor = i_actor
222s and co_starttime < i_starttime;
222s end if;
222s return 1;
222s end $$
222s language plpgsql;
222s CREATE FUNCTION
222s comment on function public.component_state (i_actor text, i_pid integer, i_node integer, i_conn_pid integer, i_activity text, i_starttime timestamptz, i_event bigint, i_eventtype text) is
222s 'Store state of a Slony component. Useful for monitoring';
222s COMMENT
222s create or replace function public.recreate_log_trigger(p_fq_table_name text,
222s p_tab_id oid, p_tab_attkind text) returns integer as $$
222s begin
222s execute 'drop trigger "_main_logtrigger" on ' ||
222s p_fq_table_name ;
222s -- ----
222s execute 'create trigger "_main_logtrigger"' ||
222s ' after insert or update or delete on ' ||
222s p_fq_table_name
222s || ' for each row execute procedure public.logTrigger (' ||
222s pg_catalog.quote_literal('_main') || ',' ||
222s pg_catalog.quote_literal(p_tab_id::text) || ',' ||
222s pg_catalog.quote_literal(p_tab_attkind) || ');';
222s return 0;
222s end
222s $$ language plpgsql;
222s CREATE FUNCTION
222s comment on function public.recreate_log_trigger(p_fq_table_name text,
222s p_tab_id oid, p_tab_attkind text) is
222s 'A function that drops and recreates the log trigger on the specified table.
222s It is intended to be used after the primary_key/unique index has changed.';
222s COMMENT
222s create or replace function public.repair_log_triggers(only_locked boolean)
222s returns integer as $$
222s declare
222s retval integer;
222s table_row record;
222s begin
222s retval=0;
222s for table_row in
222s select tab_nspname,tab_relname,
222s tab_idxname, tab_id, mode,
222s public.determineAttKindUnique(tab_nspname||
222s '.'||tab_relname,tab_idxname) as attkind
222s from
222s public.sl_table
222s left join
222s pg_locks on (relation=tab_reloid and pid=pg_backend_pid()
222s and mode='AccessExclusiveLock')
222s ,pg_trigger
222s where tab_reloid=tgrelid and
222s public.determineAttKindUnique(tab_nspname||'.'
222s ||tab_relname,tab_idxname)
222s !=(public.decode_tgargs(tgargs))[2]
222s and tgname = '_main'
222s || '_logtrigger'
222s LOOP
222s if (only_locked=false) or table_row.mode='AccessExclusiveLock' then
222s perform public.recreate_log_trigger
222s (table_row.tab_nspname||'.'||table_row.tab_relname,
222s table_row.tab_id,table_row.attkind);
222s retval=retval+1;
222s else
222s raise notice '%.% has an invalid configuration on the log trigger. This was not corrected because only_lock is true and the table is not locked.',
222s table_row.tab_nspname,table_row.tab_relname;
222s
222s end if;
222s end loop;
222s return retval;
222s end
222s $$
222s language plpgsql;
222s CREATE FUNCTION
222s comment on function public.repair_log_triggers(only_locked boolean)
222s is '
222s repair the log triggers as required. If only_locked is true then only
222s tables that are already exclusively locked by the current transaction are
222s repaired.
222s Otherwise all replicated tables with outdated trigger arguments
222s are recreated.';
222s COMMENT
222s create or replace function public.unsubscribe_abandoned_sets(p_failed_node int4) returns bigint
222s as $$
222s declare
222s v_row record;
222s v_seq_id bigint;
222s v_local_node int4;
222s begin
222s
222s select public.getLocalNodeId('_main') into
222s v_local_node;
222s
222s if found then
222s --abandon all subscriptions from this origin.
222s for v_row in select sub_set,sub_receiver from
222s public.sl_subscribe, public.sl_set
222s where sub_set=set_id and set_origin=p_failed_node
222s and sub_receiver=v_local_node
222s loop
222s raise notice 'Slony-I: failover_abandon_set() is abandoning subscription to set % on node % because it is too far ahead', v_row.sub_set,
222s v_local_node;
222s --If this node is a provider for the set
222s --then the receiver needs to be unsubscribed.
222s --
222s select public.unsubscribeSet(v_row.sub_set,
222s v_local_node,true)
222s into v_seq_id;
222s end loop;
222s end if;
222s
222s return v_seq_id;
222s end
222s $$ language plpgsql;
222s CREATE FUNCTION
222s CREATE OR replace function public.agg_text_sum(txt_before TEXT, txt_new TEXT) RETURNS TEXT AS
222s $BODY$
222s DECLARE
222s c_delim text;
222s BEGIN
222s c_delim = ',';
222s IF (txt_before IS NULL or txt_before='') THEN
222s RETURN txt_new;
222s END IF;
222s RETURN txt_before || c_delim || txt_new;
222s END;
222s $BODY$
222s LANGUAGE plpgsql;
222s CREATE FUNCTION
222s comment on function public.agg_text_sum(text,text) is
222s 'An accumulator function used by the slony string_agg function to
222s aggregate rows into a string';
222s COMMENT
222s Dropping cluster 17/regress ...
222s NOTICE: function public.reshapesubscription(int4,int4,int4) does not exist, skipping
222s ### End 17 psql ###
222s autopkgtest [23:16:31]: test load-functions: -----------------------]
223s autopkgtest [23:16:32]: test load-functions: - - - - - - - - - - results - - - - - - - - - -
223s load-functions PASS
223s autopkgtest [23:16:32]: @@@@@@@@@@@@@@@@@@@@ summary
223s load-functions PASS
240s nova [W] Skipping flock for amd64
240s Creating nova instance adt-plucky-amd64-slony1-2-20250315-231249-juju-7f2275-prod-proposed-migration-environment-20-9c726d7d-e8ae-4cd9-a983-ce8eed6e074f from image adt/ubuntu-plucky-amd64-server-20250304.img (UUID 9c7d4da5-d95f-4c85-ac1f-51eb37e75c4c)...
240s nova [W] Timed out waiting for 72d74f2b-54b7-4a1c-845b-93a95b00585a to get deleted.
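
Editor's note: the psql session in this log ends right after loading `agg_text_sum`, whose comment describes it as the accumulator behind Slony's string-aggregation helper. The matching `CREATE AGGREGATE` statement is not part of this excerpt, so the following is only a sketch of how such an accumulator is typically attached to an aggregate; the aggregate name and the sample query are assumptions, not taken from this log.

```sql
-- Hedged sketch: not part of the log above. Slony-I pairs agg_text_sum
-- with a string_agg-style aggregate; the name used here is illustrative only.
CREATE AGGREGATE public.slony_string_agg (text) (
    SFUNC = public.agg_text_sum,  -- accumulator loaded in the session above
    STYPE = text                  -- running state: the comma-separated string so far
);

-- Hypothetical usage: collapse replicated table names into one string.
-- SELECT public.slony_string_agg(tab_relname) FROM public.sl_table;
```

Because `agg_text_sum` returns `txt_new` unchanged when the accumulated state is NULL or empty, the aggregate needs no `INITCOND`: the first input row simply becomes the initial state, and each later row is appended with the `,` delimiter.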