0s autopkgtest [19:06:41]: starting date and time: 2024-03-24 19:06:41+0000
0s autopkgtest [19:06:41]: git checkout: 4a1cd702 l/adt_testbed: don't blame the testbed for unsolvable build deps
0s autopkgtest [19:06:41]: host juju-7f2275-prod-proposed-migration-environment-4; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.6b9l9z4g/out --timeout-copy=6000 --setup-commands 'ln -s /dev/null /etc/systemd/system/bluetooth.service; printf "http_proxy=http://squid.internal:3128\nhttps_proxy=http://squid.internal:3128\nno_proxy=127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,10.24.0.0/24,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com\n" >> /etc/environment' --apt-pocket=proposed --apt-upgrade slony1-2 --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 '--env=ADT_TEST_TRIGGERS=postgresql-16/16.2-1ubuntu2 perl/5.38.2-3.2' -- lxd -r lxd-armhf-10.44.124.217 lxd-armhf-10.44.124.217:autopkgtest/ubuntu/noble/armhf
44s autopkgtest [19:07:25]: testbed dpkg architecture: armhf
46s autopkgtest [19:07:27]: testbed apt version: 2.7.12
46s autopkgtest [19:07:27]: @@@@@@@@@@@@@@@@@@@@ test bed setup
48s Get:1 http://ftpmaster.internal/ubuntu noble-proposed InRelease [117 kB]
48s Get:2 http://ftpmaster.internal/ubuntu noble-proposed/multiverse Sources [57.3 kB]
48s Get:3 http://ftpmaster.internal/ubuntu noble-proposed/universe Sources [3986 kB]
49s Get:4 http://ftpmaster.internal/ubuntu noble-proposed/main Sources [496 kB]
49s Get:5 http://ftpmaster.internal/ubuntu noble-proposed/restricted Sources [6540 B]
49s Get:6 http://ftpmaster.internal/ubuntu noble-proposed/main armhf Packages [672 kB]
49s Get:7 http://ftpmaster.internal/ubuntu noble-proposed/main armhf c-n-f Metadata [2492 B]
49s Get:8 http://ftpmaster.internal/ubuntu noble-proposed/restricted armhf Packages [1372 B]
49s Get:9 http://ftpmaster.internal/ubuntu noble-proposed/restricted armhf c-n-f Metadata [116 B]
49s Get:10 http://ftpmaster.internal/ubuntu noble-proposed/universe armhf Packages [4099 kB]
49s Get:11 http://ftpmaster.internal/ubuntu noble-proposed/universe armhf c-n-f Metadata [7776 B]
49s Get:12 http://ftpmaster.internal/ubuntu noble-proposed/multiverse armhf Packages [48.7 kB]
49s Get:13 http://ftpmaster.internal/ubuntu noble-proposed/multiverse armhf c-n-f Metadata [116 B]
55s Fetched 9494 kB in 2s (4033 kB/s)
56s Reading package lists...
61s Get:1 http://ports.ubuntu.com/ubuntu-ports noble-proposed InRelease [117 kB]
62s Get:2 http://ports.ubuntu.com/ubuntu-ports noble-proposed/main armhf Packages [672 kB]
62s Get:3 http://ports.ubuntu.com/ubuntu-ports noble-proposed/main armhf c-n-f Metadata [2492 B]
62s Get:4 http://ports.ubuntu.com/ubuntu-ports noble-proposed/universe armhf Packages [4099 kB]
62s Get:5 http://ports.ubuntu.com/ubuntu-ports noble-proposed/universe armhf c-n-f Metadata [7776 B]
62s Get:6 http://ports.ubuntu.com/ubuntu-ports noble-proposed/restricted armhf Packages [1372 B]
62s Get:7 http://ports.ubuntu.com/ubuntu-ports noble-proposed/restricted armhf c-n-f Metadata [116 B]
62s Get:8 http://ports.ubuntu.com/ubuntu-ports noble-proposed/multiverse armhf Packages [48.7 kB]
62s Get:9 http://ports.ubuntu.com/ubuntu-ports noble-proposed/multiverse armhf c-n-f Metadata [116 B]
68s Fetched 4949 kB in 2s (2781 kB/s)
69s Reading package lists...
78s tee: /proc/self/fd/2: Permission denied
105s Get:1 http://ftpmaster.internal/ubuntu noble InRelease [255 kB]
105s Hit:2 http://ports.ubuntu.com/ubuntu-ports noble-proposed InRelease
105s Hit:3 http://ports.ubuntu.com/ubuntu-ports noble InRelease
106s Hit:4 http://ports.ubuntu.com/ubuntu-ports noble-updates InRelease
106s Hit:5 http://ports.ubuntu.com/ubuntu-ports noble-backports InRelease
106s Hit:6 http://ports.ubuntu.com/ubuntu-ports noble-security InRelease
106s Hit:7 http://ftpmaster.internal/ubuntu noble-updates InRelease
106s Hit:8 http://ftpmaster.internal/ubuntu noble-security InRelease
106s Get:9 http://ftpmaster.internal/ubuntu noble-proposed InRelease [117 kB]
107s Get:10 http://ftpmaster.internal/ubuntu noble-proposed/universe Sources [4007 kB]
108s Get:11 http://ftpmaster.internal/ubuntu noble-proposed/main Sources [495 kB]
108s Get:12 http://ftpmaster.internal/ubuntu noble-proposed/main armhf Packages [671 kB]
108s Get:13 http://ftpmaster.internal/ubuntu noble-proposed/universe armhf Packages [4100 kB]
109s Fetched 9645 kB in 4s (2649 kB/s)
112s Reading package lists...
112s Reading package lists...
113s Building dependency tree...
113s Reading state information...
114s Calculating upgrade...
115s The following packages were automatically installed and are no longer required:
115s   linux-headers-6.8.0-11 python3-distutils python3-lib2to3
115s Use 'apt autoremove' to remove them.
116s The following packages will be REMOVED:
116s   libapt-pkg6.0 libarchive13 libatm1 libcurl3-gnutls libcurl4 libdb5.3 libelf1
116s   libext2fs2 libgdbm-compat4 libgdbm6 libglib2.0-0 libgnutls30 libgpgme11
116s   libhogweed6 libmagic1 libnetplan0 libnettle8 libnpth0 libnvme1 libparted2
116s   libpcap0.8 libperl5.38 libpng16-16 libpsl5 libreadline8 libreiserfscore0
116s   libssl3 libtirpc3 libuv1 linux-headers-6.8.0-11-generic
116s The following NEW packages will be installed:
116s   libapt-pkg6.0t64 libarchive13t64 libatm1t64 libcurl3t64-gnutls libcurl4t64
116s   libdb5.3t64 libelf1t64 libext2fs2t64 libgdbm-compat4t64 libgdbm6t64
116s   libglib2.0-0t64 libgnutls30t64 libgpgme11t64 libhogweed6t64 libmagic1t64
117s   libnetplan1 libnettle8t64 libnpth0t64 libnvme1t64 libparted2t64
117s   libpcap0.8t64 libperl5.38t64 libpng16-16t64 libpsl5t64 libreadline8t64
117s   libreiserfscore0t64 libssl3t64 libtirpc3t64 libuv1t64 linux-headers-6.8.0-20
117s   linux-headers-6.8.0-20-generic xdg-user-dirs
117s The following packages have been kept back:
117s   multipath-tools
117s The following packages will be upgraded:
117s   apparmor apt apt-utils bind9-dnsutils bind9-host bind9-libs binutils
117s   binutils-arm-linux-gnueabihf binutils-common bolt bsdextrautils bsdutils
117s   btrfs-progs cloud-init coreutils cryptsetup-bin curl dbus dbus-bin
117s   dbus-daemon dbus-session-bus-common dbus-system-bus-common dbus-user-session
117s   debianutils dhcpcd-base dirmngr dmsetup dpkg dpkg-dev e2fsprogs
117s   e2fsprogs-l10n eject fdisk file fonts-ubuntu-console ftp fwupd gawk
117s   gcc-13-base gcc-14-base gir1.2-girepository-2.0 gir1.2-glib-2.0 gnupg
117s   gnupg-l10n gnupg-utils gpg gpg-agent gpg-wks-client gpgconf gpgsm gpgv
117s   groff-base ibverbs-providers inetutils-telnet info initramfs-tools
117s   initramfs-tools-bin initramfs-tools-core install-info iproute2 jq keyboxd
117s   kmod kpartx krb5-locales libapparmor1 libaudit-common libaudit1 libbinutils
117s   libblkid1 libblockdev-crypto3 libblockdev-fs3 libblockdev-loop3
117s   libblockdev-mdraid3 libblockdev-nvme3 libblockdev-part3 libblockdev-swap3
117s   libblockdev-utils3 libblockdev3 libbpf1 libbrotli1 libbsd0 libc-bin libc6
117s   libcap-ng0 libcom-err2 libcryptsetup12 libctf-nobfd0 libctf0 libdbus-1-3
117s   libdebconfclient0 libdevmapper1.02.1 libdpkg-perl libevent-core-2.1-7
117s   libexpat1 libfdisk1 libfido2-1 libftdi1-2 libfwupd2 libgcc-s1
117s   libgirepository-1.0-1 libglib2.0-data libgssapi-krb5-2 libgudev-1.0-0
117s   libgusb2 libibverbs1 libjcat1 libjq1 libjson-glib-1.0-0
117s   libjson-glib-1.0-common libk5crypto3 libkmod2 libkrb5-3 libkrb5support0
117s   libldap-common libldap2 liblocale-gettext-perl liblzma5 libmagic-mgc
117s   libmbim-glib4 libmbim-proxy libmm-glib0 libmount1 libnghttp2-14 libnsl2
117s   libnss-systemd libpam-modules libpam-modules-bin libpam-runtime
117s   libpam-systemd libpam0g libplymouth5 libpolkit-agent-1-0
117s   libpolkit-gobject-1-0 libproc2-0 libprotobuf-c1 libpython3-stdlib
117s   libpython3.11-minimal libpython3.11-stdlib libpython3.12-minimal
117s   libpython3.12-stdlib libqmi-glib5 libqmi-proxy libqrtr-glib0 librtmp1
117s   libsasl2-2 libsasl2-modules libsasl2-modules-db libseccomp2 libselinux1
117s   libsemanage-common libsemanage2 libsframe1 libslang2 libsmartcols1
117s   libsqlite3-0 libss2 libssh-4 libstdc++6 libsystemd-shared libsystemd0
117s   libtext-charwidth-perl libtext-iconv-perl libtirpc-common libudev1
117s   libudisks2-0 libusb-1.0-0 libuuid1 libvolume-key1 libxml2 libxmlb2 libxmuu1
117s   linux-headers-generic locales logsave lshw lsof man-db mount mtr-tiny
117s   netplan-generator netplan.io openssh-client openssh-server
117s   openssh-sftp-server openssl parted perl perl-base perl-modules-5.38
117s   pinentry-curses plymouth plymouth-theme-ubuntu-text procps psmisc
117s   python-apt-common python3 python3-apt python3-cryptography python3-dbus
117s   python3-distutils python3-gdbm python3-gi python3-lib2to3 python3-markupsafe
117s   python3-minimal python3-netplan python3-pkg-resources python3-pyrsistent
117s   python3-setuptools python3-typing-extensions python3-yaml python3.11
117s   python3.11-minimal python3.12 python3.12-minimal readline-common rsync
117s   rsyslog shared-mime-info sudo systemd systemd-dev systemd-resolved
117s   systemd-sysv systemd-timesyncd tcpdump telnet tnftp ubuntu-minimal
117s   ubuntu-pro-client ubuntu-pro-client-l10n ubuntu-standard udev udisks2
117s   usb.ids util-linux uuid-runtime vim-common vim-tiny wget xxd xz-utils zlib1g
117s 244 upgraded, 32 newly installed, 30 to remove and 1 not upgraded.
117s Need to get 108 MB of archives.
117s After this operation, 85.0 MB of additional disk space will be used.
117s Get:1 http://ftpmaster.internal/ubuntu noble-proposed/main armhf bsdutils armhf 1:2.39.3-9ubuntu2 [102 kB]
117s Get:2 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gcc-14-base armhf 14-20240315-1ubuntu1 [47.0 kB]
117s Get:3 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libgcc-s1 armhf 14-20240315-1ubuntu1 [41.5 kB]
117s Get:4 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libstdc++6 armhf 14-20240315-1ubuntu1 [714 kB]
117s Get:5 http://ftpmaster.internal/ubuntu noble/main armhf libc6 armhf 2.39-0ubuntu6 [2827 kB]
117s Get:6 http://ftpmaster.internal/ubuntu noble-proposed/main armhf openssl armhf 3.0.13-0ubuntu2 [975 kB]
117s Get:7 http://ftpmaster.internal/ubuntu noble-proposed/main armhf zlib1g armhf 1:1.3.dfsg-3.1ubuntu1 [49.2 kB]
117s Get:8 http://ftpmaster.internal/ubuntu noble-proposed/main armhf librtmp1 armhf 2.4+20151223.gitfa8646d.1-2build6 [51.3 kB]
117s Get:9 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3.12 armhf 3.12.2-4build3 [645 kB]
117s Get:10 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libexpat1 armhf 2.6.1-2 [65.9 kB]
117s Get:11 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3.12-minimal armhf 3.12.2-4build3 [1942 kB]
117s Get:12 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpython3.12-stdlib armhf 3.12.2-4build3 [1906 kB]
117s Get:13 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpython3.12-minimal armhf 3.12.2-4build3 [816 kB]
117s Get:14 http://ftpmaster.internal/ubuntu noble-proposed/main armhf parted armhf 3.6-3.1build2 [39.4 kB]
117s Get:15 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libblkid1 armhf 2.39.3-9ubuntu2 [160 kB]
117s Get:16 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libselinux1 armhf 3.5-2ubuntu1 [70.9 kB]
117s Get:17 http://ftpmaster.internal/ubuntu noble-proposed/main armhf systemd-dev all 255.4-1ubuntu5 [103 kB]
117s Get:18 http://ftpmaster.internal/ubuntu noble-proposed/main armhf systemd-timesyncd armhf 255.4-1ubuntu5 [36.0 kB]
117s Get:19 http://ftpmaster.internal/ubuntu noble-proposed/main armhf dbus-session-bus-common all 1.14.10-4ubuntu2 [80.3 kB]
117s Get:20 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libaudit-common all 1:3.1.2-2.1 [5674 B]
118s Get:21 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libcap-ng0 armhf 0.8.4-2build1 [13.5 kB]
118s Get:22 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libaudit1 armhf 1:3.1.2-2.1 [44.3 kB]
118s Get:23 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpam0g armhf 1.5.3-5ubuntu3 [62.0 kB]
118s Get:24 http://ftpmaster.internal/ubuntu noble-proposed/main armhf liblzma5 armhf 5.6.0-0.2 [117 kB]
118s Get:25 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libldap2 armhf 2.6.7+dfsg-1~exp1ubuntu6 [172 kB]
118s Get:26 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libudisks2-0 armhf 2.10.1-6 [143 kB]
118s Get:27 http://ftpmaster.internal/ubuntu noble-proposed/main armhf udisks2 armhf 2.10.1-6 [276 kB]
118s Get:28 http://ftpmaster.internal/ubuntu noble-proposed/main armhf shared-mime-info armhf 2.4-1build1 [470 kB]
118s Get:29 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gir1.2-girepository-2.0 armhf 1.79.1-1ubuntu6 [24.8 kB]
118s Get:30 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gir1.2-glib-2.0 armhf 2.79.3-3ubuntu5 [182 kB]
118s Get:31 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libgirepository-1.0-1 armhf 1.79.1-1ubuntu6 [106 kB]
118s Get:32 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-gi armhf 3.47.0-3build1 [219 kB]
118s Get:33 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-dbus armhf 1.3.2-5build2 [94.7 kB]
118s Get:34 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libgpgme11t64 armhf 1.18.0-4.1ubuntu3 [120 kB]
118s Get:35 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libvolume-key1 armhf 0.3.12-7build1 [38.4 kB]
118s Get:36 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libnetplan1 armhf 1.0-1 [113 kB]
118s Get:37 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-netplan armhf 1.0-1 [22.5 kB]
118s Get:38 http://ftpmaster.internal/ubuntu noble-proposed/main armhf netplan-generator armhf 1.0-1 [58.7 kB]
118s Get:39 http://ftpmaster.internal/ubuntu noble-proposed/main armhf initramfs-tools-bin armhf 0.142ubuntu23 [20.3 kB]
118s Get:40 http://ftpmaster.internal/ubuntu noble-proposed/main armhf initramfs-tools-core all 0.142ubuntu23 [50.1 kB]
118s Get:41 http://ftpmaster.internal/ubuntu noble/main armhf ubuntu-minimal armhf 1.536build1 [10.7 kB]
118s Get:42 http://ftpmaster.internal/ubuntu noble-proposed/main armhf initramfs-tools all 0.142ubuntu23 [9058 B]
118s Get:43 http://ftpmaster.internal/ubuntu noble-proposed/main armhf netplan.io armhf 1.0-1 [64.3 kB]
118s Get:44 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libxmlb2 armhf 0.3.15-1build1 [57.0 kB]
118s Get:45 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libqrtr-glib0 armhf 1.2.2-1ubuntu3 [15.4 kB]
118s Get:46 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libqmi-glib5 armhf 1.35.2-0ubuntu1 [908 kB]
118s Get:47 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libqmi-proxy armhf 1.35.2-0ubuntu1 [5732 B]
118s Get:48 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpolkit-agent-1-0 armhf 124-1ubuntu1 [15.3 kB]
118s Get:49 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpolkit-gobject-1-0 armhf 124-1ubuntu1 [44.1 kB]
118s Get:50 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libglib2.0-0t64 armhf 2.79.3-3ubuntu5 [1414 kB]
118s Get:51 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libjcat1 armhf 0.2.0-2build2 [30.4 kB]
118s Get:52 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libarchive13t64 armhf 3.7.2-1.1ubuntu2 [330 kB]
118s Get:53 http://ftpmaster.internal/ubuntu noble-proposed/main armhf fwupd armhf 1.9.15-2 [4350 kB]
118s Get:54 http://ftpmaster.internal/ubuntu noble-proposed/main armhf ubuntu-pro-client-l10n armhf 31.2.1 [19.4 kB]
118s Get:55 http://ftpmaster.internal/ubuntu noble-proposed/main armhf ubuntu-pro-client armhf 31.2.1 [216 kB]
118s Get:56 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3.11 armhf 3.11.8-1build4 [589 kB]
118s Get:57 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3.11-minimal armhf 3.11.8-1build4 [1795 kB]
118s Get:58 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpython3.11-minimal armhf 3.11.8-1build4 [826 kB]
118s Get:59 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libsasl2-modules-db armhf 2.1.28+dfsg1-5ubuntu1 [19.0 kB]
118s Get:60 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libtext-iconv-perl armhf 1.7-8build2 [12.7 kB]
118s Get:61 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libtext-charwidth-perl armhf 0.04-11build2 [8962 B]
118s Get:62 http://ftpmaster.internal/ubuntu noble-proposed/main armhf perl-base armhf 5.38.2-3.2 [1671 kB]
118s Get:63 http://ftpmaster.internal/ubuntu noble-proposed/main armhf liblocale-gettext-perl armhf 1.07-6ubuntu4 [15.0 kB]
118s Get:64 http://ftpmaster.internal/ubuntu noble-proposed/main armhf perl-modules-5.38 all 5.38.2-3.2 [3110 kB]
118s Get:65 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-gdbm armhf 3.12.2-3ubuntu1.1 [17.1 kB]
118s Get:66 http://ftpmaster.internal/ubuntu noble-proposed/main armhf man-db armhf 2.12.0-3build4 [1196 kB]
118s Get:67 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libgdbm6t64 armhf 1.23-5.1 [30.3 kB]
118s Get:68 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libgdbm-compat4t64 armhf 1.23-5.1 [6208 B]
118s Get:69 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libperl5.38t64 armhf 5.38.2-3.2 [4101 kB]
118s Get:70 http://ftpmaster.internal/ubuntu noble-proposed/main armhf perl armhf 5.38.2-3.2 [231 kB]
118s Get:71 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libdb5.3t64 armhf 5.3.28+dfsg2-6 [661 kB]
118s Get:72 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpython3.11-stdlib armhf 3.11.8-1build4 [1810 kB]
118s Get:73 http://ftpmaster.internal/ubuntu noble-proposed/main armhf keyboxd armhf 2.4.4-2ubuntu15 [111 kB]
118s Get:74 http://ftpmaster.internal/ubuntu noble/main armhf libnpth0t64 armhf 1.6-3.1 [6940 B]
118s Get:75 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gpgv armhf 2.4.4-2ubuntu15 [224 kB]
118s Get:76 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gpg armhf 2.4.4-2ubuntu15 [524 kB]
118s Get:77 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gpg-wks-client armhf 2.4.4-2ubuntu15 [87.4 kB]
118s Get:78 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gnupg-utils armhf 2.4.4-2ubuntu15 [158 kB]
118s Get:79 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gpg-agent armhf 2.4.4-2ubuntu15 [235 kB]
118s Get:80 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gpgsm armhf 2.4.4-2ubuntu15 [241 kB]
118s Get:81 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libreadline8t64 armhf 8.2-4 [129 kB]
118s Get:82 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gawk armhf 1:5.2.1-2build2 [415 kB]
118s Get:83 http://ftpmaster.internal/ubuntu noble-proposed/main armhf fdisk armhf 2.39.3-9ubuntu2 [135 kB]
118s Get:84 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gpgconf armhf 2.4.4-2ubuntu15 [115 kB]
119s Get:85 http://ftpmaster.internal/ubuntu noble-proposed/main armhf dirmngr armhf 2.4.4-2ubuntu15 [346 kB]
119s Get:86 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gnupg all 2.4.4-2ubuntu15 [359 kB]
119s Get:87 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-apt armhf 2.7.7 [162 kB]
119s Get:88 http://ftpmaster.internal/ubuntu noble-proposed/main armhf apt-utils armhf 2.7.14 [210 kB]
119s Get:89 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libapt-pkg6.0t64 armhf 2.7.14 [986 kB]
119s Get:90 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libnettle8t64 armhf 3.9.1-2.2 [187 kB]
119s Get:91 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libhogweed6t64 armhf 3.9.1-2.2 [187 kB]
119s Get:92 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libgnutls30t64 armhf 3.8.3-1.1ubuntu2 [1046 kB]
119s Get:93 http://ftpmaster.internal/ubuntu noble-proposed/main armhf apt armhf 2.7.14 [1368 kB]
119s Get:94 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libcurl3t64-gnutls armhf 8.5.0-2ubuntu8 [290 kB]
119s Get:95 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libfwupd2 armhf 1.9.15-2 [123 kB]
119s Get:96 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpsl5t64 armhf 0.21.2-1.1 [55.7 kB]
119s Get:97 http://ftpmaster.internal/ubuntu noble-proposed/main armhf wget armhf 1.21.4-1ubuntu2 [317 kB]
119s Get:98 http://ftpmaster.internal/ubuntu noble-proposed/main armhf tnftp armhf 20230507-2build1 [98.6 kB]
119s Get:99 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpcap0.8t64 armhf 1.10.4-4.1ubuntu2 [137 kB]
119s Get:100 http://ftpmaster.internal/ubuntu noble-proposed/main armhf tcpdump armhf 4.99.4-3ubuntu2 [425 kB]
119s Get:101 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libsystemd-shared armhf 255.4-1ubuntu5 [2009 kB]
119s Get:102 http://ftpmaster.internal/ubuntu noble-proposed/main armhf systemd-resolved armhf 255.4-1ubuntu5 [289 kB]
119s Get:103 http://ftpmaster.internal/ubuntu noble-proposed/main armhf sudo armhf 1.9.15p5-3ubuntu3 [936 kB]
119s Get:104 http://ftpmaster.internal/ubuntu noble-proposed/main armhf rsync armhf 3.2.7-1build1 [413 kB]
119s Get:105 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-cryptography armhf 41.0.7-4build2 [788 kB]
119s Get:106 http://ftpmaster.internal/ubuntu noble-proposed/main armhf openssh-sftp-server armhf 1:9.6p1-3ubuntu11 [35.5 kB]
119s Get:107 http://ftpmaster.internal/ubuntu noble-proposed/main armhf openssh-client armhf 1:9.6p1-3ubuntu11 [890 kB]
119s Get:108 http://ftpmaster.internal/ubuntu noble-proposed/main armhf openssh-server armhf 1:9.6p1-3ubuntu11 [503 kB]
119s Get:109 http://ftpmaster.internal/ubuntu noble-proposed/main armhf linux-headers-6.8.0-20 all 6.8.0-20.20 [13.6 MB]
120s Get:110 http://ftpmaster.internal/ubuntu noble-proposed/main armhf linux-headers-6.8.0-20-generic armhf 6.8.0-20.20 [1287 kB]
120s Get:111 http://ftpmaster.internal/ubuntu noble-proposed/main armhf linux-headers-generic armhf 6.8.0-20.20+1 [9610 B]
120s Get:112 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libssl3t64 armhf 3.0.13-0ubuntu2 [1558 kB]
120s Get:113 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libudev1 armhf 255.4-1ubuntu5 [166 kB]
120s Get:114 http://ftpmaster.internal/ubuntu noble-proposed/main armhf systemd armhf 255.4-1ubuntu5 [3502 kB]
120s Get:115 http://ftpmaster.internal/ubuntu noble-proposed/main armhf udev armhf 255.4-1ubuntu5 [1852 kB]
120s Get:116 http://ftpmaster.internal/ubuntu noble-proposed/main armhf systemd-sysv armhf 255.4-1ubuntu5 [11.9 kB]
120s Get:117 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libnss-systemd armhf 255.4-1ubuntu5 [148 kB]
120s Get:118 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpam-systemd armhf 255.4-1ubuntu5 [216 kB]
120s Get:119 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libsystemd0 armhf 255.4-1ubuntu5 [410 kB]
120s Get:120 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpam-modules-bin armhf 1.5.3-5ubuntu3 [47.0 kB]
120s Get:121 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpam-modules armhf 1.5.3-5ubuntu3 [261 kB]
120s Get:122 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpam-runtime all 1.5.3-5ubuntu3 [40.8 kB]
120s Get:123 http://ftpmaster.internal/ubuntu noble-proposed/main armhf dbus-user-session armhf 1.14.10-4ubuntu2 [9962 B]
120s Get:124 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libapparmor1 armhf 4.0.0-beta3-0ubuntu2 [45.0 kB]
120s Get:125 http://ftpmaster.internal/ubuntu noble-proposed/main armhf dbus-bin armhf 1.14.10-4ubuntu2 [37.1 kB]
120s Get:126 http://ftpmaster.internal/ubuntu noble-proposed/main armhf dbus-system-bus-common all 1.14.10-4ubuntu2 [81.5 kB]
120s Get:127 http://ftpmaster.internal/ubuntu noble-proposed/main armhf dbus armhf 1.14.10-4ubuntu2 [28.1 kB]
120s Get:128 http://ftpmaster.internal/ubuntu noble-proposed/main armhf dbus-daemon armhf 1.14.10-4ubuntu2 [109 kB]
120s Get:129 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libdbus-1-3 armhf 1.14.10-4ubuntu2 [190 kB]
120s Get:130 http://ftpmaster.internal/ubuntu noble-proposed/main armhf kmod armhf 31+20240202-2ubuntu4 [91.8 kB]
120s Get:131 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libkmod2 armhf 31+20240202-2ubuntu4 [44.9 kB]
120s Get:132 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libmount1 armhf 2.39.3-9ubuntu2 [171 kB]
120s Get:133 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libseccomp2 armhf 2.5.5-1ubuntu2 [49.5 kB]
120s Get:134 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libuuid1 armhf 2.39.3-9ubuntu2 [34.4 kB]
120s Get:135 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libcryptsetup12 armhf 2:2.7.0-1ubuntu2 [238 kB]
120s Get:136 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libfdisk1 armhf 2.39.3-9ubuntu2 [196 kB]
120s Get:137 http://ftpmaster.internal/ubuntu noble-proposed/main armhf mount armhf 2.39.3-9ubuntu2 [134 kB]
120s Get:138 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libdevmapper1.02.1 armhf 2:1.02.185-3ubuntu2 [135 kB]
120s Get:139 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libparted2t64 armhf 3.6-3.1build2 [143 kB]
120s Get:140 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libsqlite3-0 armhf 3.45.1-1ubuntu1 [599 kB]
120s Get:141 http://ftpmaster.internal/ubuntu noble-proposed/main armhf pinentry-curses armhf 1.2.1-3ubuntu4 [36.7 kB]
120s Get:142 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libsmartcols1 armhf 2.39.3-9ubuntu2 [117 kB]
120s Get:143 http://ftpmaster.internal/ubuntu noble-proposed/main armhf readline-common all 8.2-4 [56.4 kB]
120s Get:144 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-yaml armhf 6.0.1-2build1 [117 kB]
120s Get:145 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python-apt-common all 2.7.7 [19.8 kB]
120s Get:146 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-setuptools all 68.1.2-2ubuntu1 [396 kB]
120s Get:147 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-pkg-resources all 68.1.2-2ubuntu1 [168 kB]
120s Get:148 http://ftpmaster.internal/ubuntu noble-proposed/main armhf dpkg armhf 1.22.6ubuntu4 [1229 kB]
120s Get:149 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-minimal armhf 3.12.2-0ubuntu1 [27.1 kB]
120s Get:150 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3 armhf 3.12.2-0ubuntu1 [24.1 kB]
120s Get:151 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpython3-stdlib armhf 3.12.2-0ubuntu1 [9802 B]
120s Get:152 http://ftpmaster.internal/ubuntu noble-proposed/main armhf bsdextrautils armhf 2.39.3-9ubuntu2 [78.7 kB]
120s Get:153 http://ftpmaster.internal/ubuntu noble-proposed/main armhf groff-base armhf 1.23.0-3build1 [946 kB]
120s Get:154 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libsasl2-2 armhf 2.1.28+dfsg1-5ubuntu1 [49.7 kB]
120s Get:155 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libblockdev-utils3 armhf 3.1.0-1build1 [16.9 kB]
121s Get:156 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libblockdev-crypto3 armhf 3.1.0-1build1 [20.3 kB]
121s Get:157 http://ftpmaster.internal/ubuntu noble-proposed/main armhf logsave armhf 1.47.0-2.4~exp1ubuntu2 [21.9 kB]
121s Get:158 http://ftpmaster.internal/ubuntu noble-proposed/main armhf dhcpcd-base armhf 1:10.0.6-1ubuntu2 [186 kB]
121s Get:159 http://ftpmaster.internal/ubuntu noble-proposed/main armhf eject armhf 2.39.3-9ubuntu2 [43.2 kB]
121s Get:160 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libbpf1 armhf 1:1.3.0-2build1 [146 kB]
121s Get:161 http://ftpmaster.internal/ubuntu noble-proposed/main armhf iproute2 armhf 6.1.0-1ubuntu5 [1060 kB]
121s Get:162 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libelf1t64 armhf 0.190-1.1build2 [49.9 kB]
121s Get:163 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libtirpc-common all 1.3.4+ds-1.1 [8018 B]
121s Get:164 http://ftpmaster.internal/ubuntu noble-proposed/main armhf lsof armhf 4.95.0-1build2 [248 kB]
121s Get:165 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libnsl2 armhf 1.3.0-3build2 [36.5 kB]
121s Get:166 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libgssapi-krb5-2 armhf 1.20.1-6ubuntu1 [119 kB]
121s Get:167 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libkrb5-3 armhf 1.20.1-6ubuntu1 [320 kB]
121s Get:168 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libkrb5support0 armhf 1.20.1-6ubuntu1 [31.5 kB]
121s Get:169 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libk5crypto3 armhf 1.20.1-6ubuntu1 [78.6 kB]
121s Get:170 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libcom-err2 armhf 1.47.0-2.4~exp1ubuntu2 [21.9 kB]
121s Get:171 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libtirpc3t64 armhf 1.3.4+ds-1.1 [73.2 kB]
121s Get:172 http://ftpmaster.internal/ubuntu noble/main armhf libc-bin armhf 2.39-0ubuntu6 [530 kB]
121s Get:173 http://ftpmaster.internal/ubuntu noble/main armhf locales all 2.39-0ubuntu6 [4232 kB]
121s Get:174 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libproc2-0 armhf 2:4.0.4-4ubuntu2 [49.0 kB]
121s Get:175 http://ftpmaster.internal/ubuntu noble-proposed/main armhf procps armhf 2:4.0.4-4ubuntu2 [700 kB]
121s Get:176 http://ftpmaster.internal/ubuntu noble-proposed/main armhf vim-tiny armhf 2:9.1.0016-1ubuntu6 [665 kB]
121s Get:177 http://ftpmaster.internal/ubuntu noble-proposed/main armhf vim-common all 2:9.1.0016-1ubuntu6 [385 kB]
121s Get:178 http://ftpmaster.internal/ubuntu noble-proposed/main armhf e2fsprogs-l10n all 1.47.0-2.4~exp1ubuntu2 [5996 B]
121s Get:179 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libblockdev-fs3 armhf 3.1.0-1build1 [34.4 kB]
121s Get:180 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libreiserfscore0t64 armhf 1:3.6.27-7.1 [66.2 kB]
121s Get:181 http://ftpmaster.internal/ubuntu noble-proposed/main armhf btrfs-progs armhf 6.6.3-1.1build1 [852 kB]
121s Get:182 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libext2fs2t64 armhf 1.47.0-2.4~exp1ubuntu2 [201 kB]
121s Get:183 http://ftpmaster.internal/ubuntu noble-proposed/main armhf e2fsprogs armhf 1.47.0-2.4~exp1ubuntu2 [571 kB]
121s Get:184 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libblockdev-loop3 armhf 3.1.0-1build1 [6502 B]
121s Get:185 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libblockdev-mdraid3 armhf 3.1.0-1build1 [13.3 kB]
121s Get:186 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libblockdev-nvme3 armhf 3.1.0-1build1 [17.5 kB]
121s Get:187 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libnvme1t64 armhf 1.8-3 [67.5 kB]
121s Get:188 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libblockdev-part3 armhf 3.1.0-1build1 [16.4 kB]
121s Get:189 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libblockdev-swap3 armhf 3.1.0-1build1 [8894 B]
121s Get:190 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libblockdev3 armhf 3.1.0-1build1 [42.9 kB]
121s Get:191 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libgudev-1.0-0 armhf 1:238-3ubuntu2 [13.6 kB]
121s Get:192 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libxml2 armhf 2.9.14+dfsg-1.3ubuntu2 [595 kB]
121s Get:193 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libmbim-proxy armhf 1.31.2-0ubuntu2 [5748 B]
121s Get:194 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libmbim-glib4 armhf 1.31.2-0ubuntu2 [216 kB]
121s Get:195 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libjson-glib-1.0-common all 1.8.0-2build1 [4210 B]
121s Get:196 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libjson-glib-1.0-0 armhf 1.8.0-2build1 [61.2 kB]
121s Get:197 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libusb-1.0-0 armhf 2:1.0.27-1 [48.7 kB]
121s Get:198 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libgusb2 armhf 0.4.8-1build1 [34.6 kB]
121s Get:199 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libmm-glib0 armhf 1.23.4-0ubuntu1 [214 kB]
121s Get:200 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libprotobuf-c1 armhf 1.4.1-1ubuntu3 [17.7 kB]
121s Get:201 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libbrotli1 armhf 1.1.0-2build1 [319 kB]
121s Get:202 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libnghttp2-14 armhf 1.59.0-1build1 [68.1 kB]
121s Get:203 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libssh-4 armhf 0.10.6-2build1 [169 kB]
122s Get:204 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libibverbs1 armhf 50.0-2build1 [57.9 kB]
122s Get:205 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libfido2-1 armhf 1.14.0-1build1 [75.8 kB]
122s Get:206 http://ftpmaster.internal/ubuntu noble-proposed/main armhf coreutils armhf 9.4-3ubuntu3 [1280 kB]
122s Get:207 http://ftpmaster.internal/ubuntu noble/main armhf debianutils armhf 5.17 [88.9 kB]
122s Get:208 http://ftpmaster.internal/ubuntu noble-proposed/main armhf util-linux armhf 2.39.3-9ubuntu2 [1216 kB]
122s Get:209 http://ftpmaster.internal/ubuntu noble-proposed/main armhf curl armhf 8.5.0-2ubuntu8 [219 kB]
122s Get:210 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libcurl4t64 armhf 8.5.0-2ubuntu8 [296 kB]
122s Get:211 http://ftpmaster.internal/ubuntu noble-proposed/main armhf file armhf 1:5.45-3 [21.1 kB]
122s Get:212 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libmagic-mgc armhf 1:5.45-3 [307 kB]
122s Get:213 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libmagic1t64 armhf 1:5.45-3 [81.4 kB]
122s Get:214 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libplymouth5 armhf 24.004.60-1ubuntu6 [140 kB]
122s Get:215 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpng16-16t64 armhf 1.6.43-3 [166 kB]
122s Get:216 http://ftpmaster.internal/ubuntu noble-proposed/main armhf bind9-host armhf 1:9.18.24-0ubuntu3 [47.4 kB]
122s Get:217 http://ftpmaster.internal/ubuntu noble-proposed/main armhf bind9-dnsutils armhf 1:9.18.24-0ubuntu3 [149 kB]
123s Get:218 http://ftpmaster.internal/ubuntu noble-proposed/main armhf bind9-libs armhf 1:9.18.24-0ubuntu3 [1148 kB]
123s Get:219 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libuv1t64 armhf 1.48.0-1.1 [82.9 kB]
123s Get:220 http://ftpmaster.internal/ubuntu noble-proposed/main armhf uuid-runtime armhf 2.39.3-9ubuntu2 [41.7 kB]
123s Get:221 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libdebconfclient0 armhf 0.271ubuntu2 [10.8 kB]
123s Get:222 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libsemanage-common all 3.5-1build4 [10.1 kB]
123s Get:223 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libsemanage2 armhf 3.5-1build4 [84.5 kB]
123s Get:224 http://ftpmaster.internal/ubuntu noble-proposed/main armhf install-info armhf 7.1-3build1 [60.5 kB]
123s Get:225 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gcc-13-base armhf 13.2.0-21ubuntu1 [48.3 kB]
123s Get:226 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libss2 armhf 1.47.0-2.4~exp1ubuntu2 [14.7 kB]
123s Get:227 http://ftpmaster.internal/ubuntu noble-proposed/main armhf dmsetup armhf 2:1.02.185-3ubuntu2 [81.1 kB]
123s Get:228 http://ftpmaster.internal/ubuntu noble-proposed/main armhf krb5-locales all 1.20.1-6ubuntu1 [13.8 kB]
123s Get:229 http://ftpmaster.internal/ubuntu noble/main armhf libbsd0 armhf 0.12.1-1 [36.6 kB]
123s Get:230 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libglib2.0-data all 2.79.3-3ubuntu5 [46.6 kB]
123s Get:231 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libslang2 armhf 2.3.3-3build1 [478 kB]
123s Get:232 http://ftpmaster.internal/ubuntu noble-proposed/main armhf rsyslog armhf 8.2312.0-3ubuntu7 [460 kB]
123s Get:233 http://ftpmaster.internal/ubuntu noble/main armhf xdg-user-dirs armhf 0.18-1 [17.3 kB]
123s Get:234 http://ftpmaster.internal/ubuntu noble-proposed/main armhf xxd armhf 2:9.1.0016-1ubuntu6 [62.5 kB]
123s Get:235 http://ftpmaster.internal/ubuntu noble-proposed/main armhf apparmor armhf 4.0.0-beta3-0ubuntu2 [562 kB]
123s Get:236 http://ftpmaster.internal/ubuntu noble-proposed/main armhf ftp all 20230507-2build1 [4724 B]
123s Get:237 http://ftpmaster.internal/ubuntu noble-proposed/main armhf inetutils-telnet armhf 2:2.5-3ubuntu3 [90.7 kB]
123s Get:238 http://ftpmaster.internal/ubuntu noble-proposed/main armhf info armhf 7.1-3build1 [127 kB]
123s Get:239 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libxmuu1 armhf 2:1.1.3-3build1 [8004 B]
123s Get:240 http://ftpmaster.internal/ubuntu noble-proposed/main armhf lshw armhf 02.19.git.2021.06.19.996aaad9c7-2build2 [310 kB]
123s Get:241 http://ftpmaster.internal/ubuntu noble-proposed/main armhf mtr-tiny armhf 0.95-1.1build1 [51.7 kB]
123s Get:242 http://ftpmaster.internal/ubuntu noble-proposed/main armhf plymouth-theme-ubuntu-text armhf 24.004.60-1ubuntu6 [9818 B]
123s Get:243 http://ftpmaster.internal/ubuntu noble-proposed/main armhf plymouth armhf 24.004.60-1ubuntu6 [142 kB]
123s Get:244 http://ftpmaster.internal/ubuntu noble-proposed/main armhf psmisc armhf 23.7-1 [176 kB]
123s Get:245 http://ftpmaster.internal/ubuntu noble-proposed/main armhf telnet all 0.17+2.5-3ubuntu3 [3682 B]
123s Get:246 http://ftpmaster.internal/ubuntu noble-proposed/main armhf xz-utils armhf 5.6.0-0.2 [271 kB]
123s Get:247 http://ftpmaster.internal/ubuntu noble/main armhf ubuntu-standard armhf 1.536build1 [10.7 kB]
123s Get:248 http://ftpmaster.internal/ubuntu noble-proposed/main armhf usb.ids all 2024.03.18-1 [223 kB]
123s Get:249 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libctf-nobfd0 armhf 2.42-4ubuntu1 [88.0 kB]
123s Get:250 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libctf0 armhf 2.42-4ubuntu1 [87.7 kB]
123s Get:251 http://ftpmaster.internal/ubuntu noble-proposed/main armhf binutils-arm-linux-gnueabihf armhf 2.42-4ubuntu1 [2925 kB]
123s Get:252 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libbinutils armhf 2.42-4ubuntu1 [464 kB]
123s Get:253 http://ftpmaster.internal/ubuntu noble-proposed/main armhf binutils armhf 2.42-4ubuntu1 [3078 B]
123s Get:254 http://ftpmaster.internal/ubuntu noble-proposed/main armhf binutils-common armhf 2.42-4ubuntu1 [217 kB]
124s Get:255 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libsframe1 armhf 2.42-4ubuntu1 [13.1 kB]
124s Get:256 http://ftpmaster.internal/ubuntu noble-proposed/main armhf bolt armhf 0.9.6-2build1 [138 kB]
124s Get:257 http://ftpmaster.internal/ubuntu noble-proposed/main armhf cryptsetup-bin armhf 2:2.7.0-1ubuntu2 [214 kB]
124s Get:258 http://ftpmaster.internal/ubuntu noble-proposed/main armhf dpkg-dev all 1.22.6ubuntu4 [1074 kB]
124s Get:259 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libdpkg-perl all 1.22.6ubuntu4 [268 kB]
124s Get:260 http://ftpmaster.internal/ubuntu noble/main armhf fonts-ubuntu-console all 0.869+git20240321-0ubuntu1 [18.7 kB]
124s Get:261 http://ftpmaster.internal/ubuntu noble-proposed/main armhf gnupg-l10n all 2.4.4-2ubuntu15 [65.8 kB]
124s Get:262 http://ftpmaster.internal/ubuntu noble-proposed/main armhf ibverbs-providers armhf 50.0-2build1 [27.4 kB]
124s Get:263 http://ftpmaster.internal/ubuntu noble-proposed/main armhf jq armhf 1.7.1-3 [65.2 kB]
124s Get:264 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libjq1 armhf 1.7.1-3 [156 kB]
124s Get:265 http://ftpmaster.internal/ubuntu noble/main armhf libatm1t64 armhf 1:2.5.1-5.1 [20.0 kB]
124s Get:266 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libevent-core-2.1-7 armhf 2.1.12-stable-9build1 [82.3 kB]
124s Get:267 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libftdi1-2 armhf 1.5-6build4 [25.7 kB]
124s Get:268 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libldap-common all 2.6.7+dfsg-1~exp1ubuntu6 [31.3 kB]
124s Get:269 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libsasl2-modules armhf 2.1.28+dfsg1-5ubuntu1 [61.3 kB]
124s Get:270 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-distutils all 3.12.2-3ubuntu1.1 [133 kB]
124s Get:271 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-lib2to3 all 3.12.2-3ubuntu1.1 [79.1 kB]
124s Get:272 http://ftpmaster.internal/ubuntu noble/main armhf python3-markupsafe armhf 2.1.5-1build1 [12.1 kB]
124s Get:273 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-pyrsistent armhf 0.20.0-1build1 [53.0 kB]
124s Get:274 http://ftpmaster.internal/ubuntu noble-proposed/main armhf python3-typing-extensions all 4.10.0-1 [60.7 kB]
124s Get:275 http://ftpmaster.internal/ubuntu noble/main armhf cloud-init all 24.1.2-0ubuntu1 [597 kB]
124s Get:276 http://ftpmaster.internal/ubuntu noble-proposed/main armhf kpartx armhf 0.9.4-5ubuntu6 [31.5 kB]
127s Preconfiguring packages ...
128s Fetched 108 MB in 7s (14.9 MB/s)
55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58620 files and directories currently installed.) 128s Preparing to unpack .../bsdutils_1%3a2.39.3-9ubuntu2_armhf.deb ... 128s Unpacking bsdutils (1:2.39.3-9ubuntu2) over (1:2.39.3-6ubuntu2) ... 128s Setting up bsdutils (1:2.39.3-9ubuntu2) ... 128s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58620 files and directories currently installed.) 128s Preparing to unpack .../gcc-14-base_14-20240315-1ubuntu1_armhf.deb ... 128s Unpacking gcc-14-base:armhf (14-20240315-1ubuntu1) over (14-20240303-1ubuntu1) ... 128s Setting up gcc-14-base:armhf (14-20240315-1ubuntu1) ... 128s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58620 files and directories currently installed.) 128s Preparing to unpack .../libgcc-s1_14-20240315-1ubuntu1_armhf.deb ... 128s Unpacking libgcc-s1:armhf (14-20240315-1ubuntu1) over (14-20240303-1ubuntu1) ... 128s Setting up libgcc-s1:armhf (14-20240315-1ubuntu1) ... 128s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58620 files and directories currently installed.) 128s Preparing to unpack .../libstdc++6_14-20240315-1ubuntu1_armhf.deb ... 128s Unpacking libstdc++6:armhf (14-20240315-1ubuntu1) over (14-20240303-1ubuntu1) ... 128s Setting up libstdc++6:armhf (14-20240315-1ubuntu1) ... 128s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 
95% (Reading database ... 100% (Reading database ... 58620 files and directories currently installed.) 128s Preparing to unpack .../libc6_2.39-0ubuntu6_armhf.deb ... 129s Unpacking libc6:armhf (2.39-0ubuntu6) over (2.39-0ubuntu2) ... 129s Setting up libc6:armhf (2.39-0ubuntu6) ... 130s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58620 files and directories currently installed.) 130s Preparing to unpack .../openssl_3.0.13-0ubuntu2_armhf.deb ... 130s Unpacking openssl (3.0.13-0ubuntu2) over (3.0.10-1ubuntu4) ... 130s Preparing to unpack .../zlib1g_1%3a1.3.dfsg-3.1ubuntu1_armhf.deb ... 130s Unpacking zlib1g:armhf (1:1.3.dfsg-3.1ubuntu1) over (1:1.3.dfsg-3ubuntu1) ... 130s Setting up zlib1g:armhf (1:1.3.dfsg-3.1ubuntu1) ... 130s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58620 files and directories currently installed.) 130s Preparing to unpack .../0-librtmp1_2.4+20151223.gitfa8646d.1-2build6_armhf.deb ... 130s Unpacking librtmp1:armhf (2.4+20151223.gitfa8646d.1-2build6) over (2.4+20151223.gitfa8646d.1-2build4) ... 130s Preparing to unpack .../1-python3.12_3.12.2-4build3_armhf.deb ... 131s Unpacking python3.12 (3.12.2-4build3) over (3.12.2-1) ... 131s Preparing to unpack .../2-libexpat1_2.6.1-2_armhf.deb ... 131s Unpacking libexpat1:armhf (2.6.1-2) over (2.6.0-1) ... 131s Preparing to unpack .../3-python3.12-minimal_3.12.2-4build3_armhf.deb ... 131s Unpacking python3.12-minimal (3.12.2-4build3) over (3.12.2-1) ... 131s Preparing to unpack .../4-libpython3.12-stdlib_3.12.2-4build3_armhf.deb ... 131s Unpacking libpython3.12-stdlib:armhf (3.12.2-4build3) over (3.12.2-1) ... 131s Preparing to unpack .../5-libpython3.12-minimal_3.12.2-4build3_armhf.deb ... 131s Unpacking libpython3.12-minimal:armhf (3.12.2-4build3) over (3.12.2-1) ... 132s Preparing to unpack .../6-parted_3.6-3.1build2_armhf.deb ... 132s Unpacking parted (3.6-3.1build2) over (3.6-3) ... 132s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58618 files and directories currently installed.) 
132s Removing libparted2:armhf (3.6-3) ... 132s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58612 files and directories currently installed.) 132s Preparing to unpack .../libblkid1_2.39.3-9ubuntu2_armhf.deb ... 132s Unpacking libblkid1:armhf (2.39.3-9ubuntu2) over (2.39.3-6ubuntu2) ... 132s Setting up libblkid1:armhf (2.39.3-9ubuntu2) ... 132s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58612 files and directories currently installed.) 132s Preparing to unpack .../libselinux1_3.5-2ubuntu1_armhf.deb ... 132s Unpacking libselinux1:armhf (3.5-2ubuntu1) over (3.5-2build1) ... 132s Setting up libselinux1:armhf (3.5-2ubuntu1) ... 132s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58612 files and directories currently installed.) 132s Preparing to unpack .../systemd-dev_255.4-1ubuntu5_all.deb ... 132s Unpacking systemd-dev (255.4-1ubuntu5) over (255.2-3ubuntu2) ... 132s Preparing to unpack .../systemd-timesyncd_255.4-1ubuntu5_armhf.deb ... 132s Unpacking systemd-timesyncd (255.4-1ubuntu5) over (255.2-3ubuntu2) ... 133s Preparing to unpack .../dbus-session-bus-common_1.14.10-4ubuntu2_all.deb ... 133s Unpacking dbus-session-bus-common (1.14.10-4ubuntu2) over (1.14.10-4ubuntu1) ... 133s Preparing to unpack .../libaudit-common_1%3a3.1.2-2.1_all.deb ... 133s Unpacking libaudit-common (1:3.1.2-2.1) over (1:3.1.2-2) ... 133s Setting up libaudit-common (1:3.1.2-2.1) ... 133s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58612 files and directories currently installed.) 
133s Preparing to unpack .../libcap-ng0_0.8.4-2build1_armhf.deb ... 133s Unpacking libcap-ng0:armhf (0.8.4-2build1) over (0.8.4-2) ... 133s Setting up libcap-ng0:armhf (0.8.4-2build1) ... 133s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58612 files and directories currently installed.) 133s Preparing to unpack .../libaudit1_1%3a3.1.2-2.1_armhf.deb ... 133s Unpacking libaudit1:armhf (1:3.1.2-2.1) over (1:3.1.2-2) ... 133s Setting up libaudit1:armhf (1:3.1.2-2.1) ... 133s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58612 files and directories currently installed.) 133s Preparing to unpack .../libpam0g_1.5.3-5ubuntu3_armhf.deb ... 133s Unpacking libpam0g:armhf (1.5.3-5ubuntu3) over (1.5.2-9.1ubuntu3) ... 133s Setting up libpam0g:armhf (1.5.3-5ubuntu3) ... 133s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58612 files and directories currently installed.) 133s Preparing to unpack .../liblzma5_5.6.0-0.2_armhf.deb ... 133s Unpacking liblzma5:armhf (5.6.0-0.2) over (5.4.5-0.3) ... 134s Setting up liblzma5:armhf (5.6.0-0.2) ... 134s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58612 files and directories currently installed.) 134s Preparing to unpack .../0-libldap2_2.6.7+dfsg-1~exp1ubuntu6_armhf.deb ... 134s Unpacking libldap2:armhf (2.6.7+dfsg-1~exp1ubuntu6) over (2.6.7+dfsg-1~exp1ubuntu1) ... 134s Preparing to unpack .../1-libudisks2-0_2.10.1-6_armhf.deb ... 134s Unpacking libudisks2-0:armhf (2.10.1-6) over (2.10.1-1ubuntu2) ... 
134s Preparing to unpack .../2-udisks2_2.10.1-6_armhf.deb ... 134s Unpacking udisks2 (2.10.1-6) over (2.10.1-1ubuntu2) ... 134s Preparing to unpack .../3-shared-mime-info_2.4-1build1_armhf.deb ... 134s Unpacking shared-mime-info (2.4-1build1) over (2.4-1) ... 134s Preparing to unpack .../4-gir1.2-girepository-2.0_1.79.1-1ubuntu6_armhf.deb ... 134s Unpacking gir1.2-girepository-2.0:armhf (1.79.1-1ubuntu6) over (1.79.1-1) ... 134s Preparing to unpack .../5-gir1.2-glib-2.0_2.79.3-3ubuntu5_armhf.deb ... 134s Unpacking gir1.2-glib-2.0:armhf (2.79.3-3ubuntu5) over (2.79.2-1~ubuntu1) ... 134s Preparing to unpack .../6-libgirepository-1.0-1_1.79.1-1ubuntu6_armhf.deb ... 134s Unpacking libgirepository-1.0-1:armhf (1.79.1-1ubuntu6) over (1.79.1-1) ... 134s Preparing to unpack .../7-python3-gi_3.47.0-3build1_armhf.deb ... 135s Unpacking python3-gi (3.47.0-3build1) over (3.47.0-3) ... 135s Preparing to unpack .../8-python3-dbus_1.3.2-5build2_armhf.deb ... 135s Unpacking python3-dbus (1.3.2-5build2) over (1.3.2-5build1) ... 135s dpkg: libgpgme11:armhf: dependency problems, but removing anyway as you requested: 135s libvolume-key1:armhf depends on libgpgme11 (>= 1.4.1). 135s libjcat1:armhf depends on libgpgme11 (>= 1.2.0). 135s 135s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58609 files and directories currently installed.) 135s Removing libgpgme11:armhf (1.18.0-4ubuntu1) ... 135s Selecting previously unselected package libgpgme11t64:armhf. 135s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58603 files and directories currently installed.) 135s Preparing to unpack .../00-libgpgme11t64_1.18.0-4.1ubuntu3_armhf.deb ... 135s Unpacking libgpgme11t64:armhf (1.18.0-4.1ubuntu3) ... 135s Preparing to unpack .../01-libvolume-key1_0.3.12-7build1_armhf.deb ... 135s Unpacking libvolume-key1:armhf (0.3.12-7build1) over (0.3.12-5build2) ... 135s Selecting previously unselected package libnetplan1:armhf. 135s Preparing to unpack .../02-libnetplan1_1.0-1_armhf.deb ... 135s Unpacking libnetplan1:armhf (1.0-1) ... 135s Preparing to unpack .../03-python3-netplan_1.0-1_armhf.deb ... 135s Unpacking python3-netplan (1.0-1) over (0.107.1-3) ... 136s Preparing to unpack .../04-netplan-generator_1.0-1_armhf.deb ... 136s Adding 'diversion of /lib/systemd/system-generators/netplan to /lib/systemd/system-generators/netplan.usr-is-merged by netplan-generator' 136s Unpacking netplan-generator (1.0-1) over (0.107.1-3) ... 136s Preparing to unpack .../05-initramfs-tools-bin_0.142ubuntu23_armhf.deb ... 
136s Unpacking initramfs-tools-bin (0.142ubuntu23) over (0.142ubuntu20) ...
136s Preparing to unpack .../06-initramfs-tools-core_0.142ubuntu23_all.deb ...
136s Unpacking initramfs-tools-core (0.142ubuntu23) over (0.142ubuntu20) ...
136s Preparing to unpack .../07-ubuntu-minimal_1.536build1_armhf.deb ...
136s Unpacking ubuntu-minimal (1.536build1) over (1.536) ...
136s Preparing to unpack .../08-initramfs-tools_0.142ubuntu23_all.deb ...
136s Unpacking initramfs-tools (0.142ubuntu23) over (0.142ubuntu20) ...
136s Preparing to unpack .../09-netplan.io_1.0-1_armhf.deb ...
136s Unpacking netplan.io (1.0-1) over (0.107.1-3) ...
136s Preparing to unpack .../10-libxmlb2_0.3.15-1build1_armhf.deb ...
136s Unpacking libxmlb2:armhf (0.3.15-1build1) over (0.3.15-1) ...
136s Preparing to unpack .../11-libqrtr-glib0_1.2.2-1ubuntu3_armhf.deb ...
136s Unpacking libqrtr-glib0:armhf (1.2.2-1ubuntu3) over (1.2.2-1ubuntu2) ...
136s Preparing to unpack .../12-libqmi-glib5_1.35.2-0ubuntu1_armhf.deb ...
136s Unpacking libqmi-glib5:armhf (1.35.2-0ubuntu1) over (1.34.0-2) ...
136s Preparing to unpack .../13-libqmi-proxy_1.35.2-0ubuntu1_armhf.deb ...
136s Unpacking libqmi-proxy (1.35.2-0ubuntu1) over (1.34.0-2) ...
136s Preparing to unpack .../14-libpolkit-agent-1-0_124-1ubuntu1_armhf.deb ...
136s Unpacking libpolkit-agent-1-0:armhf (124-1ubuntu1) over (124-1) ...
136s Preparing to unpack .../15-libpolkit-gobject-1-0_124-1ubuntu1_armhf.deb ...
136s Unpacking libpolkit-gobject-1-0:armhf (124-1ubuntu1) over (124-1) ...
136s (Reading database ... 58617 files and directories currently installed.)
136s Removing libnetplan0:armhf (0.107.1-3) ...
137s dpkg: libglib2.0-0:armhf: dependency problems, but removing anyway as you requested:
137s libmm-glib0:armhf depends on libglib2.0-0 (>= 2.62.0).
137s libmbim-proxy depends on libglib2.0-0 (>= 2.56).
137s libmbim-glib4:armhf depends on libglib2.0-0 (>= 2.56).
137s libjson-glib-1.0-0:armhf depends on libglib2.0-0 (>= 2.75.3).
137s libjcat1:armhf depends on libglib2.0-0 (>= 2.75.3).
137s libgusb2:armhf depends on libglib2.0-0 (>= 2.75.3).
137s libgudev-1.0-0:armhf depends on libglib2.0-0 (>= 2.38.0).
137s libfwupd2:armhf depends on libglib2.0-0 (>= 2.79.0).
137s libblockdev3:armhf depends on libglib2.0-0 (>= 2.42.2).
137s libblockdev-utils3:armhf depends on libglib2.0-0 (>= 2.75.3).
137s libblockdev-swap3:armhf depends on libglib2.0-0 (>= 2.42.2).
137s libblockdev-part3:armhf depends on libglib2.0-0 (>= 2.42.2).
137s libblockdev-nvme3:armhf depends on libglib2.0-0 (>= 2.42.2).
137s libblockdev-mdraid3:armhf depends on libglib2.0-0 (>= 2.42.2).
137s libblockdev-loop3:armhf depends on libglib2.0-0 (>= 2.42.2).
137s libblockdev-fs3:armhf depends on libglib2.0-0 (>= 2.42.2).
137s libblockdev-crypto3:armhf depends on libglib2.0-0 (>= 2.42.2).
137s fwupd depends on libglib2.0-0 (>= 2.79.0).
137s bolt depends on libglib2.0-0 (>= 2.56.0).
137s 
137s Removing libglib2.0-0:armhf (2.79.2-1~ubuntu1) ...
137s Selecting previously unselected package libglib2.0-0t64:armhf.
137s (Reading database ... 58588 files and directories currently installed.)
137s Preparing to unpack .../libglib2.0-0t64_2.79.3-3ubuntu5_armhf.deb ...
137s libglib2.0-0t64.preinst: Removing /var/lib/dpkg/info/libglib2.0-0:armhf.postrm to avoid loss of /usr/share/glib-2.0/schemas/gschemas.compiled...
137s removed '/var/lib/dpkg/info/libglib2.0-0:armhf.postrm'
137s Unpacking libglib2.0-0t64:armhf (2.79.3-3ubuntu5) ...
137s Preparing to unpack .../libjcat1_0.2.0-2build2_armhf.deb ...
137s Unpacking libjcat1:armhf (0.2.0-2build2) over (0.2.0-2) ...
137s dpkg: libarchive13:armhf: dependency problems, but removing anyway as you requested:
137s fwupd depends on libarchive13 (>= 3.2.1).
137s 
137s (Reading database ... 58613 files and directories currently installed.)
137s Removing libarchive13:armhf (3.7.2-1ubuntu2) ...
137s Selecting previously unselected package libarchive13t64:armhf.
137s (Reading database ... 58607 files and directories currently installed.)
137s Preparing to unpack .../00-libarchive13t64_3.7.2-1.1ubuntu2_armhf.deb ...
137s Unpacking libarchive13t64:armhf (3.7.2-1.1ubuntu2) ...
137s Preparing to unpack .../01-fwupd_1.9.15-2_armhf.deb ...
137s Unpacking fwupd (1.9.15-2) over (1.9.14-1) ...
138s Preparing to unpack .../02-ubuntu-pro-client-l10n_31.2.1_armhf.deb ...
138s Unpacking ubuntu-pro-client-l10n (31.2.1) over (31.1) ...
138s Preparing to unpack .../03-ubuntu-pro-client_31.2.1_armhf.deb ...
138s Unpacking ubuntu-pro-client (31.2.1) over (31.1) ...
138s Preparing to unpack .../04-python3.11_3.11.8-1build4_armhf.deb ...
138s Unpacking python3.11 (3.11.8-1build4) over (3.11.8-1) ...
138s Preparing to unpack .../05-python3.11-minimal_3.11.8-1build4_armhf.deb ...
138s Unpacking python3.11-minimal (3.11.8-1build4) over (3.11.8-1) ...
138s Preparing to unpack .../06-libpython3.11-minimal_3.11.8-1build4_armhf.deb ...
139s Unpacking libpython3.11-minimal:armhf (3.11.8-1build4) over (3.11.8-1) ...
139s Preparing to unpack .../07-libsasl2-modules-db_2.1.28+dfsg1-5ubuntu1_armhf.deb ...
139s Unpacking libsasl2-modules-db:armhf (2.1.28+dfsg1-5ubuntu1) over (2.1.28+dfsg1-4) ...
139s Preparing to unpack .../08-libtext-iconv-perl_1.7-8build2_armhf.deb ...
139s Unpacking libtext-iconv-perl:armhf (1.7-8build2) over (1.7-8build1) ...
139s Preparing to unpack .../09-libtext-charwidth-perl_0.04-11build2_armhf.deb ...
139s Unpacking libtext-charwidth-perl:armhf (0.04-11build2) over (0.04-11build1) ...
139s Preparing to unpack .../10-perl-base_5.38.2-3.2_armhf.deb ...
139s Unpacking perl-base (5.38.2-3.2) over (5.38.2-3) ...
140s Setting up perl-base (5.38.2-3.2) ...
140s (Reading database ... 58614 files and directories currently installed.)
140s Preparing to unpack .../liblocale-gettext-perl_1.07-6ubuntu4_armhf.deb ...
140s Unpacking liblocale-gettext-perl (1.07-6ubuntu4) over (1.07-6build1) ...
140s Preparing to unpack .../perl-modules-5.38_5.38.2-3.2_all.deb ...
140s Unpacking perl-modules-5.38 (5.38.2-3.2) over (5.38.2-3) ...
141s Preparing to unpack .../python3-gdbm_3.12.2-3ubuntu1.1_armhf.deb ...
141s Unpacking python3-gdbm:armhf (3.12.2-3ubuntu1.1) over (3.11.5-1) ...
141s Preparing to unpack .../man-db_2.12.0-3build4_armhf.deb ...
141s Unpacking man-db (2.12.0-3build4) over (2.12.0-3) ...
141s dpkg: libgdbm-compat4:armhf: dependency problems, but removing anyway as you requested:
141s libperl5.38:armhf depends on libgdbm-compat4 (>= 1.18-3).
141s 
141s (Reading database ... 58614 files and directories currently installed.)
141s Removing libgdbm-compat4:armhf (1.23-5) ...
141s dpkg: libgdbm6:armhf: dependency problems, but removing anyway as you requested:
141s libperl5.38:armhf depends on libgdbm6 (>= 1.21).
141s 
141s Removing libgdbm6:armhf (1.23-5) ...
141s Selecting previously unselected package libgdbm6t64:armhf.
141s (Reading database ... 58604 files and directories currently installed.)
141s Preparing to unpack .../libgdbm6t64_1.23-5.1_armhf.deb ...
141s Unpacking libgdbm6t64:armhf (1.23-5.1) ...
141s Selecting previously unselected package libgdbm-compat4t64:armhf.
141s Preparing to unpack .../libgdbm-compat4t64_1.23-5.1_armhf.deb ...
141s Unpacking libgdbm-compat4t64:armhf (1.23-5.1) ...
141s dpkg: libperl5.38:armhf: dependency problems, but removing anyway as you requested:
141s perl depends on libperl5.38 (= 5.38.2-3).
141s 
141s (Reading database ... 58616 files and directories currently installed.)
141s Removing libperl5.38:armhf (5.38.2-3) ...
141s Selecting previously unselected package libperl5.38t64:armhf.
141s (Reading database ... 58097 files and directories currently installed.)
141s Preparing to unpack .../libperl5.38t64_5.38.2-3.2_armhf.deb ...
141s Unpacking libperl5.38t64:armhf (5.38.2-3.2) ...
142s Preparing to unpack .../perl_5.38.2-3.2_armhf.deb ...
142s Unpacking perl (5.38.2-3.2) over (5.38.2-3) ...
142s dpkg: libdb5.3:armhf: dependency problems, but removing anyway as you requested:
142s libpython3.11-stdlib:armhf depends on libdb5.3.
142s libpam-modules:armhf depends on libdb5.3.
142s iproute2 depends on libdb5.3.
142s apt-utils depends on libdb5.3.
142s 
142s (Reading database ... 58616 files and directories currently installed.)
142s Removing libdb5.3:armhf (5.3.28+dfsg2-4) ...
142s Selecting previously unselected package libdb5.3t64:armhf.
142s (Reading database ... 58610 files and directories currently installed.)
142s Preparing to unpack .../libdb5.3t64_5.3.28+dfsg2-6_armhf.deb ...
142s Unpacking libdb5.3t64:armhf (5.3.28+dfsg2-6) ...
142s Preparing to unpack .../libpython3.11-stdlib_3.11.8-1build4_armhf.deb ...
142s Unpacking libpython3.11-stdlib:armhf (3.11.8-1build4) over (3.11.8-1) ...
143s Preparing to unpack .../keyboxd_2.4.4-2ubuntu15_armhf.deb ...
143s Unpacking keyboxd (2.4.4-2ubuntu15) over (2.4.4-2ubuntu7) ...
143s dpkg: libnpth0:armhf: dependency problems, but removing anyway as you requested:
143s gpgv depends on libnpth0 (>= 0.90).
143s gpgsm depends on libnpth0 (>= 0.90).
143s gpg-agent depends on libnpth0 (>= 0.90).
143s gpg depends on libnpth0 (>= 0.90).
143s dirmngr depends on libnpth0 (>= 0.90).
143s 
143s (Reading database ... 58614 files and directories currently installed.)
143s Removing libnpth0:armhf (1.6-3build2) ...
143s Selecting previously unselected package libnpth0t64:armhf.
143s (Reading database ... 58609 files and directories currently installed.)
143s Preparing to unpack .../libnpth0t64_1.6-3.1_armhf.deb ...
143s Unpacking libnpth0t64:armhf (1.6-3.1) ...
143s Setting up libnpth0t64:armhf (1.6-3.1) ...
143s (Reading database ... 58615 files and directories currently installed.)
143s Preparing to unpack .../gpgv_2.4.4-2ubuntu15_armhf.deb ...
143s Unpacking gpgv (2.4.4-2ubuntu15) over (2.4.4-2ubuntu7) ...
143s Setting up gpgv (2.4.4-2ubuntu15) ...
143s (Reading database ... 58615 files and directories currently installed.)
143s Preparing to unpack .../gpg_2.4.4-2ubuntu15_armhf.deb ...
143s Unpacking gpg (2.4.4-2ubuntu15) over (2.4.4-2ubuntu7) ...
143s Preparing to unpack .../gpg-wks-client_2.4.4-2ubuntu15_armhf.deb ...
143s Unpacking gpg-wks-client (2.4.4-2ubuntu15) over (2.4.4-2ubuntu7) ...
143s Preparing to unpack .../gnupg-utils_2.4.4-2ubuntu15_armhf.deb ...
143s Unpacking gnupg-utils (2.4.4-2ubuntu15) over (2.4.4-2ubuntu7) ...
143s Preparing to unpack .../gpg-agent_2.4.4-2ubuntu15_armhf.deb ...
143s Unpacking gpg-agent (2.4.4-2ubuntu15) over (2.4.4-2ubuntu7) ...
144s Preparing to unpack .../gpgsm_2.4.4-2ubuntu15_armhf.deb ...
144s Unpacking gpgsm (2.4.4-2ubuntu15) over (2.4.4-2ubuntu7) ...
144s dpkg: libreadline8:armhf: dependency problems, but removing anyway as you requested:
144s gpgconf depends on libreadline8 (>= 6.0).
144s gawk depends on libreadline8 (>= 6.0).
144s fdisk depends on libreadline8 (>= 6.0).
144s 
144s (Reading database ... 58615 files and directories currently installed.)
144s Removing libreadline8:armhf (8.2-3) ...
144s Selecting previously unselected package libreadline8t64:armhf.
144s (Reading database ... 58603 files and directories currently installed.)
144s Preparing to unpack .../libreadline8t64_8.2-4_armhf.deb ...
144s Adding 'diversion of /lib/arm-linux-gnueabihf/libhistory.so.8 to /lib/arm-linux-gnueabihf/libhistory.so.8.usr-is-merged by libreadline8t64'
144s Adding 'diversion of /lib/arm-linux-gnueabihf/libhistory.so.8.2 to /lib/arm-linux-gnueabihf/libhistory.so.8.2.usr-is-merged by libreadline8t64'
144s Adding 'diversion of /lib/arm-linux-gnueabihf/libreadline.so.8 to /lib/arm-linux-gnueabihf/libreadline.so.8.usr-is-merged by libreadline8t64'
144s Adding 'diversion of /lib/arm-linux-gnueabihf/libreadline.so.8.2 to /lib/arm-linux-gnueabihf/libreadline.so.8.2.usr-is-merged by libreadline8t64'
144s Unpacking libreadline8t64:armhf (8.2-4) ...
144s Setting up libreadline8t64:armhf (8.2-4) ...
144s (Reading database ... 58623 files and directories currently installed.)
144s Preparing to unpack .../0-gawk_1%3a5.2.1-2build2_armhf.deb ...
144s Unpacking gawk (1:5.2.1-2build2) over (1:5.2.1-2) ...
144s Preparing to unpack .../1-fdisk_2.39.3-9ubuntu2_armhf.deb ...
144s Unpacking fdisk (2.39.3-9ubuntu2) over (2.39.3-6ubuntu2) ...
144s Preparing to unpack .../2-gpgconf_2.4.4-2ubuntu15_armhf.deb ...
144s Unpacking gpgconf (2.4.4-2ubuntu15) over (2.4.4-2ubuntu7) ...
144s Preparing to unpack .../3-dirmngr_2.4.4-2ubuntu15_armhf.deb ...
144s Unpacking dirmngr (2.4.4-2ubuntu15) over (2.4.4-2ubuntu7) ...
144s Preparing to unpack .../4-gnupg_2.4.4-2ubuntu15_all.deb ...
144s Unpacking gnupg (2.4.4-2ubuntu15) over (2.4.4-2ubuntu7) ...
144s Preparing to unpack .../5-python3-apt_2.7.7_armhf.deb ...
145s Unpacking python3-apt (2.7.7) over (2.7.6) ...
145s Preparing to unpack .../6-apt-utils_2.7.14_armhf.deb ...
145s Unpacking apt-utils (2.7.14) over (2.7.12) ...
145s dpkg: libapt-pkg6.0:armhf: dependency problems, but removing anyway as you requested:
145s apt depends on libapt-pkg6.0 (>= 2.7.12).
145s 
145s (Reading database ... 58621 files and directories currently installed.)
145s Removing libapt-pkg6.0:armhf (2.7.12) ...
145s dpkg: libnettle8:armhf: dependency problems, but removing anyway as you requested:
145s libhogweed6:armhf depends on libnettle8.
145s libgnutls30:armhf depends on libnettle8 (>= 3.9~).
145s libcurl3-gnutls:armhf depends on libnettle8.
145s 
145s Removing libnettle8:armhf (3.9.1-2) ...
145s Selecting previously unselected package libapt-pkg6.0t64:armhf.
145s (Reading database ... 58565 files and directories currently installed.)
145s Preparing to unpack .../libapt-pkg6.0t64_2.7.14_armhf.deb ...
145s Unpacking libapt-pkg6.0t64:armhf (2.7.14) ...
145s Setting up libapt-pkg6.0t64:armhf (2.7.14) ...
145s Selecting previously unselected package libnettle8t64:armhf.
145s (Reading database ... 58615 files and directories currently installed.)
145s Preparing to unpack .../libnettle8t64_3.9.1-2.2_armhf.deb ...
145s Unpacking libnettle8t64:armhf (3.9.1-2.2) ...
145s Setting up libnettle8t64:armhf (3.9.1-2.2) ...
145s dpkg: libhogweed6:armhf: dependency problems, but removing anyway as you requested:
145s libgnutls30:armhf depends on libhogweed6 (>= 3.6).
145s 
145s (Reading database ... 58623 files and directories currently installed.)
145s Removing libhogweed6:armhf (3.9.1-2) ...
145s Selecting previously unselected package libhogweed6t64:armhf.
145s (Reading database ... 58618 files and directories currently installed.)
145s Preparing to unpack .../libhogweed6t64_3.9.1-2.2_armhf.deb ...
145s Unpacking libhogweed6t64:armhf (3.9.1-2.2) ...
145s Setting up libhogweed6t64:armhf (3.9.1-2.2) ...
146s dpkg: libgnutls30:armhf: dependency problems, but removing anyway as you requested:
146s libcurl3-gnutls:armhf depends on libgnutls30 (>= 3.8.2).
146s apt depends on libgnutls30 (>= 3.8.1).
146s 
146s (Reading database ... 58624 files and directories currently installed.)
146s Removing libgnutls30:armhf (3.8.3-1ubuntu1) ...
146s Selecting previously unselected package libgnutls30t64:armhf.
146s (Reading database ... 58615 files and directories currently installed.)
146s Preparing to unpack .../libgnutls30t64_3.8.3-1.1ubuntu2_armhf.deb ...
146s Unpacking libgnutls30t64:armhf (3.8.3-1.1ubuntu2) ...
146s Setting up libgnutls30t64:armhf (3.8.3-1.1ubuntu2) ...
146s (Reading database ... 58643 files and directories currently installed.)
146s Preparing to unpack .../archives/apt_2.7.14_armhf.deb ...
146s Unpacking apt (2.7.14) over (2.7.12) ...
146s Setting up apt (2.7.14) ...
154s dpkg: libcurl3-gnutls:armhf: dependency problems, but removing anyway as you requested:
154s libfwupd2:armhf depends on libcurl3-gnutls (>= 7.63.0).
154s 
154s (Reading database ... 58643 files and directories currently installed.)
154s Removing libcurl3-gnutls:armhf (8.5.0-2ubuntu2) ...
154s Selecting previously unselected package libcurl3t64-gnutls:armhf.
154s (Reading database ... 58636 files and directories currently installed.)
154s Preparing to unpack .../libcurl3t64-gnutls_8.5.0-2ubuntu8_armhf.deb ...
154s Unpacking libcurl3t64-gnutls:armhf (8.5.0-2ubuntu8) ...
154s Preparing to unpack .../libfwupd2_1.9.15-2_armhf.deb ...
154s Unpacking libfwupd2:armhf (1.9.15-2) over (1.9.14-1) ...
154s dpkg: libpsl5:armhf: dependency problems, but removing anyway as you requested:
154s wget depends on libpsl5 (>= 0.16.0).
154s libcurl4:armhf depends on libpsl5 (>= 0.16.0).
154s 
154s (Reading database ... 58643 files and directories currently installed.)
154s Removing libpsl5:armhf (0.21.2-1build1) ...
155s Selecting previously unselected package libpsl5t64:armhf.
155s (Reading database ... 58638 files and directories currently installed.)
155s Preparing to unpack .../libpsl5t64_0.21.2-1.1_armhf.deb ...
155s Unpacking libpsl5t64:armhf (0.21.2-1.1) ...
155s Preparing to unpack .../wget_1.21.4-1ubuntu2_armhf.deb ...
155s Unpacking wget (1.21.4-1ubuntu2) over (1.21.4-1ubuntu1) ...
155s Preparing to unpack .../tnftp_20230507-2build1_armhf.deb ...
155s Unpacking tnftp (20230507-2build1) over (20230507-2) ...
155s dpkg: libpcap0.8:armhf: dependency problems, but removing anyway as you requested:
155s tcpdump depends on libpcap0.8 (>= 1.9.1).
155s 
155s (Reading database ... 58644 files and directories currently installed.)
155s Removing libpcap0.8:armhf (1.10.4-4ubuntu3) ...
155s Selecting previously unselected package libpcap0.8t64:armhf.
155s (Reading database ... 58633 files and directories currently installed.)
155s Preparing to unpack .../00-libpcap0.8t64_1.10.4-4.1ubuntu2_armhf.deb ...
155s Unpacking libpcap0.8t64:armhf (1.10.4-4.1ubuntu2) ...
155s Preparing to unpack .../01-tcpdump_4.99.4-3ubuntu2_armhf.deb ...
155s Unpacking tcpdump (4.99.4-3ubuntu2) over (4.99.4-3ubuntu1) ...
155s Preparing to unpack .../02-libsystemd-shared_255.4-1ubuntu5_armhf.deb ...
155s Unpacking libsystemd-shared:armhf (255.4-1ubuntu5) over (255.2-3ubuntu2) ...
155s Preparing to unpack .../03-systemd-resolved_255.4-1ubuntu5_armhf.deb ...
155s Unpacking systemd-resolved (255.4-1ubuntu5) over (255.2-3ubuntu2) ...
155s Preparing to unpack .../04-sudo_1.9.15p5-3ubuntu3_armhf.deb ...
155s Unpacking sudo (1.9.15p5-3ubuntu3) over (1.9.15p5-3ubuntu1) ...
155s Preparing to unpack .../05-rsync_3.2.7-1build1_armhf.deb ...
156s Unpacking rsync (3.2.7-1build1) over (3.2.7-1) ...
156s Preparing to unpack .../06-python3-cryptography_41.0.7-4build2_armhf.deb ...
156s Unpacking python3-cryptography (41.0.7-4build2) over (41.0.7-3) ...
156s Preparing to unpack .../07-openssh-sftp-server_1%3a9.6p1-3ubuntu11_armhf.deb ...
156s Unpacking openssh-sftp-server (1:9.6p1-3ubuntu11) over (1:9.6p1-3ubuntu2) ...
156s Preparing to unpack .../08-openssh-client_1%3a9.6p1-3ubuntu11_armhf.deb ...
156s Unpacking openssh-client (1:9.6p1-3ubuntu11) over (1:9.6p1-3ubuntu2) ...
156s Preparing to unpack .../09-openssh-server_1%3a9.6p1-3ubuntu11_armhf.deb ...
156s Unpacking openssh-server (1:9.6p1-3ubuntu11) over (1:9.6p1-3ubuntu2) ...
156s Selecting previously unselected package linux-headers-6.8.0-20.
156s Preparing to unpack .../10-linux-headers-6.8.0-20_6.8.0-20.20_all.deb ...
156s Unpacking linux-headers-6.8.0-20 (6.8.0-20.20) ...
163s Selecting previously unselected package linux-headers-6.8.0-20-generic.
163s Preparing to unpack .../11-linux-headers-6.8.0-20-generic_6.8.0-20.20_armhf.deb ...
163s Unpacking linux-headers-6.8.0-20-generic (6.8.0-20.20) ...
167s Preparing to unpack .../12-linux-headers-generic_6.8.0-20.20+1_armhf.deb ...
167s Unpacking linux-headers-generic (6.8.0-20.20+1) over (6.8.0-11.11+1) ...
167s (Reading database ... 89796 files and directories currently installed.)
167s Removing linux-headers-6.8.0-11-generic (6.8.0-11.11) ...
169s dpkg: libssl3:armhf: dependency problems, but removing anyway as you requested:
169s systemd depends on libssl3 (>= 3.0.0).
169s libssh-4:armhf depends on libssl3 (>= 3.0.0).
169s libsasl2-modules:armhf depends on libssl3 (>= 3.0.0).
169s libsasl2-2:armhf depends on libssl3 (>= 3.0.0).
169s libnvme1 depends on libssl3 (>= 3.0.0).
169s libkrb5-3:armhf depends on libssl3 (>= 3.0.0).
169s libkmod2:armhf depends on libssl3 (>= 3.0.0).
169s libfido2-1:armhf depends on libssl3 (>= 3.0.0).
169s libcurl4:armhf depends on libssl3 (>= 3.0.0).
169s libcryptsetup12:armhf depends on libssl3 (>= 3.0.0).
169s kmod depends on libssl3 (>= 3.0.0).
169s dhcpcd-base depends on libssl3 (>= 3.0.0).
169s bind9-libs:armhf depends on libssl3 (>= 3.0.0).
169s 
169s Removing libssl3:armhf (3.0.10-1ubuntu4) ...
169s Selecting previously unselected package libssl3t64:armhf.
169s (Reading database ... 78646 files and directories currently installed.)
169s Preparing to unpack .../libssl3t64_3.0.13-0ubuntu2_armhf.deb ...
169s Unpacking libssl3t64:armhf (3.0.13-0ubuntu2) ...
169s Setting up libssl3t64:armhf (3.0.13-0ubuntu2) ...
169s (Reading database ... 78659 files and directories currently installed.)
169s Preparing to unpack .../libudev1_255.4-1ubuntu5_armhf.deb ...
169s Unpacking libudev1:armhf (255.4-1ubuntu5) over (255.2-3ubuntu2) ...
169s Setting up libudev1:armhf (255.4-1ubuntu5) ...
169s (Reading database ... 78659 files and directories currently installed.)
169s Preparing to unpack .../systemd_255.4-1ubuntu5_armhf.deb ...
169s Unpacking systemd (255.4-1ubuntu5) over (255.2-3ubuntu2) ...
170s Preparing to unpack .../udev_255.4-1ubuntu5_armhf.deb ...
170s Unpacking udev (255.4-1ubuntu5) over (255.2-3ubuntu2) ...
170s Preparing to unpack .../libsystemd0_255.4-1ubuntu5_armhf.deb ...
170s Unpacking libsystemd0:armhf (255.4-1ubuntu5) over (255.2-3ubuntu2) ...
170s Setting up libsystemd0:armhf (255.4-1ubuntu5) ...
170s Setting up libsystemd-shared:armhf (255.4-1ubuntu5) ...
170s Setting up systemd-dev (255.4-1ubuntu5) ...
170s Setting up systemd (255.4-1ubuntu5) ...
171s (Reading database ... 78659 files and directories currently installed.)
171s Preparing to unpack .../systemd-sysv_255.4-1ubuntu5_armhf.deb ...
171s Unpacking systemd-sysv (255.4-1ubuntu5) over (255.2-3ubuntu2) ...
171s Preparing to unpack .../libnss-systemd_255.4-1ubuntu5_armhf.deb ...
171s Unpacking libnss-systemd:armhf (255.4-1ubuntu5) over (255.2-3ubuntu2) ...
172s Preparing to unpack .../libpam-systemd_255.4-1ubuntu5_armhf.deb ...
172s Unpacking libpam-systemd:armhf (255.4-1ubuntu5) over (255.2-3ubuntu2) ...
172s Preparing to unpack .../libpam-modules-bin_1.5.3-5ubuntu3_armhf.deb ...
172s Unpacking libpam-modules-bin (1.5.3-5ubuntu3) over (1.5.2-9.1ubuntu3) ...
172s Setting up libpam-modules-bin (1.5.3-5ubuntu3) ...
172s pam_namespace.service is a disabled or a static unit not running, not starting it.
172s (Reading database ... 78659 files and directories currently installed.)
172s Preparing to unpack .../libpam-modules_1.5.3-5ubuntu3_armhf.deb ...
173s Unpacking libpam-modules:armhf (1.5.3-5ubuntu3) over (1.5.2-9.1ubuntu3) ...
173s Setting up libpam-modules:armhf (1.5.3-5ubuntu3) ...
173s Installing new version of config file /etc/security/namespace.init ...
173s (Reading database ... 78657 files and directories currently installed.)
173s Preparing to unpack .../libpam-runtime_1.5.3-5ubuntu3_all.deb ...
173s Unpacking libpam-runtime (1.5.3-5ubuntu3) over (1.5.2-9.1ubuntu3) ...
173s Setting up libpam-runtime (1.5.3-5ubuntu3) ...
173s (Reading database ... 78657 files and directories currently installed.)
173s Preparing to unpack .../0-dbus-user-session_1.14.10-4ubuntu2_armhf.deb ...
173s Unpacking dbus-user-session (1.14.10-4ubuntu2) over (1.14.10-4ubuntu1) ...
173s Preparing to unpack .../1-libapparmor1_4.0.0-beta3-0ubuntu2_armhf.deb ...
173s Unpacking libapparmor1:armhf (4.0.0-beta3-0ubuntu2) over (4.0.0~alpha4-0ubuntu1) ...
173s Preparing to unpack .../2-dbus-bin_1.14.10-4ubuntu2_armhf.deb ...
173s Unpacking dbus-bin (1.14.10-4ubuntu2) over (1.14.10-4ubuntu1) ...
174s Preparing to unpack .../3-dbus-system-bus-common_1.14.10-4ubuntu2_all.deb ...
174s Unpacking dbus-system-bus-common (1.14.10-4ubuntu2) over (1.14.10-4ubuntu1) ...
174s Preparing to unpack .../4-dbus_1.14.10-4ubuntu2_armhf.deb ...
174s Unpacking dbus (1.14.10-4ubuntu2) over (1.14.10-4ubuntu1) ...
174s Preparing to unpack .../5-dbus-daemon_1.14.10-4ubuntu2_armhf.deb ...
174s Unpacking dbus-daemon (1.14.10-4ubuntu2) over (1.14.10-4ubuntu1) ...
174s Preparing to unpack .../6-libdbus-1-3_1.14.10-4ubuntu2_armhf.deb ...
174s Unpacking libdbus-1-3:armhf (1.14.10-4ubuntu2) over (1.14.10-4ubuntu1) ...
174s Preparing to unpack .../7-kmod_31+20240202-2ubuntu4_armhf.deb ...
174s Unpacking kmod (31+20240202-2ubuntu4) over (30+20230601-2ubuntu1) ...
174s dpkg: warning: unable to delete old directory '/lib/modprobe.d': Directory not empty
174s Preparing to unpack .../8-libkmod2_31+20240202-2ubuntu4_armhf.deb ...
174s Unpacking libkmod2:armhf (31+20240202-2ubuntu4) over (30+20230601-2ubuntu1) ...
174s Preparing to unpack .../9-libmount1_2.39.3-9ubuntu2_armhf.deb ...
174s Unpacking libmount1:armhf (2.39.3-9ubuntu2) over (2.39.3-6ubuntu2) ...
174s Setting up libmount1:armhf (2.39.3-9ubuntu2) ...
174s (Reading database ... 78656 files and directories currently installed.)
174s Preparing to unpack .../libseccomp2_2.5.5-1ubuntu2_armhf.deb ...
174s Unpacking libseccomp2:armhf (2.5.5-1ubuntu2) over (2.5.5-1ubuntu1) ...
174s Setting up libseccomp2:armhf (2.5.5-1ubuntu2) ...
174s (Reading database ... 78656 files and directories currently installed.)
174s Preparing to unpack .../libuuid1_2.39.3-9ubuntu2_armhf.deb ...
174s Unpacking libuuid1:armhf (2.39.3-9ubuntu2) over (2.39.3-6ubuntu2) ...
174s Setting up libuuid1:armhf (2.39.3-9ubuntu2) ...
175s (Reading database ... 78656 files and directories currently installed.)
175s Preparing to unpack .../0-libcryptsetup12_2%3a2.7.0-1ubuntu2_armhf.deb ...
175s Unpacking libcryptsetup12:armhf (2:2.7.0-1ubuntu2) over (2:2.7.0-1ubuntu1) ...
175s Preparing to unpack .../1-libfdisk1_2.39.3-9ubuntu2_armhf.deb ...
175s Unpacking libfdisk1:armhf (2.39.3-9ubuntu2) over (2.39.3-6ubuntu2) ...
175s Preparing to unpack .../2-mount_2.39.3-9ubuntu2_armhf.deb ...
175s Unpacking mount (2.39.3-9ubuntu2) over (2.39.3-6ubuntu2) ...
175s Preparing to unpack .../3-libdevmapper1.02.1_2%3a1.02.185-3ubuntu2_armhf.deb ...
175s Unpacking libdevmapper1.02.1:armhf (2:1.02.185-3ubuntu2) over (2:1.02.185-3ubuntu1) ...
175s Selecting previously unselected package libparted2t64:armhf.
175s Preparing to unpack .../4-libparted2t64_3.6-3.1build2_armhf.deb ...
175s Unpacking libparted2t64:armhf (3.6-3.1build2) ...
175s Preparing to unpack .../5-libsqlite3-0_3.45.1-1ubuntu1_armhf.deb ...
175s Unpacking libsqlite3-0:armhf (3.45.1-1ubuntu1) over (3.45.1-1) ...
175s Preparing to unpack .../6-pinentry-curses_1.2.1-3ubuntu4_armhf.deb ...
175s Unpacking pinentry-curses (1.2.1-3ubuntu4) over (1.2.1-3ubuntu1) ...
175s Preparing to unpack .../7-libsmartcols1_2.39.3-9ubuntu2_armhf.deb ...
175s Unpacking libsmartcols1:armhf (2.39.3-9ubuntu2) over (2.39.3-6ubuntu2) ...
175s Setting up libsmartcols1:armhf (2.39.3-9ubuntu2) ...
175s (Reading database ... 78663 files and directories currently installed.)
175s Preparing to unpack .../0-readline-common_8.2-4_all.deb ...
175s Unpacking readline-common (8.2-4) over (8.2-3) ...
175s Preparing to unpack .../1-python3-yaml_6.0.1-2build1_armhf.deb ...
175s Unpacking python3-yaml (6.0.1-2build1) over (6.0.1-2) ...
176s Preparing to unpack .../2-python-apt-common_2.7.7_all.deb ...
176s Unpacking python-apt-common (2.7.7) over (2.7.6) ...
176s Preparing to unpack .../3-python3-setuptools_68.1.2-2ubuntu1_all.deb ...
176s Unpacking python3-setuptools (68.1.2-2ubuntu1) over (68.1.2-2) ...
176s Preparing to unpack .../4-python3-pkg-resources_68.1.2-2ubuntu1_all.deb ...
176s Unpacking python3-pkg-resources (68.1.2-2ubuntu1) over (68.1.2-2) ...
176s Preparing to unpack .../5-dpkg_1.22.6ubuntu4_armhf.deb ...
176s Unpacking dpkg (1.22.6ubuntu4) over (1.22.4ubuntu5) ...
177s Setting up dpkg (1.22.6ubuntu4) ...
177s Setting up libpython3.12-minimal:armhf (3.12.2-4build3) ...
177s Setting up libexpat1:armhf (2.6.1-2) ...
177s Setting up python3.12-minimal (3.12.2-4build3) ...
179s (Reading database ... 78662 files and directories currently installed.)
179s Preparing to unpack .../python3-minimal_3.12.2-0ubuntu1_armhf.deb ...
179s Unpacking python3-minimal (3.12.2-0ubuntu1) over (3.12.1-0ubuntu2) ...
180s Setting up python3-minimal (3.12.2-0ubuntu1) ...
180s (Reading database ... 78662 files and directories currently installed.)
180s Preparing to unpack .../00-python3_3.12.2-0ubuntu1_armhf.deb ...
180s Unpacking python3 (3.12.2-0ubuntu1) over (3.12.1-0ubuntu2) ...
180s Preparing to unpack .../01-libpython3-stdlib_3.12.2-0ubuntu1_armhf.deb ...
180s Unpacking libpython3-stdlib:armhf (3.12.2-0ubuntu1) over (3.12.1-0ubuntu2) ...
180s Preparing to unpack .../02-bsdextrautils_2.39.3-9ubuntu2_armhf.deb ...
180s Unpacking bsdextrautils (2.39.3-9ubuntu2) over (2.39.3-6ubuntu2) ...
180s Preparing to unpack .../03-groff-base_1.23.0-3build1_armhf.deb ...
180s Unpacking groff-base (1.23.0-3build1) over (1.23.0-3) ...
180s Preparing to unpack .../04-libsasl2-2_2.1.28+dfsg1-5ubuntu1_armhf.deb ...
180s Unpacking libsasl2-2:armhf (2.1.28+dfsg1-5ubuntu1) over (2.1.28+dfsg1-4) ...
180s Preparing to unpack .../05-libblockdev-utils3_3.1.0-1build1_armhf.deb ...
180s Unpacking libblockdev-utils3:armhf (3.1.0-1build1) over (3.1.0-1) ...
181s Preparing to unpack .../06-libblockdev-crypto3_3.1.0-1build1_armhf.deb ...
181s Unpacking libblockdev-crypto3:armhf (3.1.0-1build1) over (3.1.0-1) ...
181s Preparing to unpack .../07-logsave_1.47.0-2.4~exp1ubuntu2_armhf.deb ...
181s Unpacking logsave (1.47.0-2.4~exp1ubuntu2) over (1.47.0-2ubuntu1) ...
181s Preparing to unpack .../08-dhcpcd-base_1%3a10.0.6-1ubuntu2_armhf.deb ...
181s Unpacking dhcpcd-base (1:10.0.6-1ubuntu2) over (1:10.0.6-1ubuntu1) ...
181s Preparing to unpack .../09-eject_2.39.3-9ubuntu2_armhf.deb ...
181s Unpacking eject (2.39.3-9ubuntu2) over (2.39.3-6ubuntu2) ...
181s Preparing to unpack .../10-libbpf1_1%3a1.3.0-2build1_armhf.deb ...
181s Unpacking libbpf1:armhf (1:1.3.0-2build1) over (1:1.3.0-2) ...
181s Preparing to unpack .../11-iproute2_6.1.0-1ubuntu5_armhf.deb ...
181s Unpacking iproute2 (6.1.0-1ubuntu5) over (6.1.0-1ubuntu2) ...
181s (Reading database ... 78662 files and directories currently installed.)
181s Removing libelf1:armhf (0.190-1) ...
181s Selecting previously unselected package libelf1t64:armhf.
181s (Reading database ... 78657 files and directories currently installed.)
181s Preparing to unpack .../libelf1t64_0.190-1.1build2_armhf.deb ...
181s Unpacking libelf1t64:armhf (0.190-1.1build2) ...
181s Preparing to unpack .../libtirpc-common_1.3.4+ds-1.1_all.deb ...
181s Unpacking libtirpc-common (1.3.4+ds-1.1) over (1.3.4+ds-1build1) ...
181s Preparing to unpack .../lsof_4.95.0-1build2_armhf.deb ...
181s Unpacking lsof (4.95.0-1build2) over (4.95.0-1build1) ...
181s Preparing to unpack .../libnsl2_1.3.0-3build2_armhf.deb ...
182s Unpacking libnsl2:armhf (1.3.0-3build2) over (1.3.0-3) ...
182s (Reading database ... 78662 files and directories currently installed.)
182s Removing libtirpc3:armhf (1.3.4+ds-1build1) ...
182s (Reading database ... 78656 files and directories currently installed.)
182s Preparing to unpack .../0-libgssapi-krb5-2_1.20.1-6ubuntu1_armhf.deb ...
182s Unpacking libgssapi-krb5-2:armhf (1.20.1-6ubuntu1) over (1.20.1-5build1) ...
182s Preparing to unpack .../1-libkrb5-3_1.20.1-6ubuntu1_armhf.deb ...
182s Unpacking libkrb5-3:armhf (1.20.1-6ubuntu1) over (1.20.1-5build1) ...
182s Preparing to unpack .../2-libkrb5support0_1.20.1-6ubuntu1_armhf.deb ...
182s Unpacking libkrb5support0:armhf (1.20.1-6ubuntu1) over (1.20.1-5build1) ...
182s Preparing to unpack .../3-libk5crypto3_1.20.1-6ubuntu1_armhf.deb ...
182s Unpacking libk5crypto3:armhf (1.20.1-6ubuntu1) over (1.20.1-5build1) ...
182s Preparing to unpack .../4-libcom-err2_1.47.0-2.4~exp1ubuntu2_armhf.deb ...
182s Unpacking libcom-err2:armhf (1.47.0-2.4~exp1ubuntu2) over (1.47.0-2ubuntu1) ...
182s Selecting previously unselected package libtirpc3t64:armhf.
182s Preparing to unpack .../5-libtirpc3t64_1.3.4+ds-1.1_armhf.deb ...
182s Adding 'diversion of /lib/arm-linux-gnueabihf/libtirpc.so.3 to /lib/arm-linux-gnueabihf/libtirpc.so.3.usr-is-merged by libtirpc3t64'
182s Adding 'diversion of /lib/arm-linux-gnueabihf/libtirpc.so.3.0.0 to /lib/arm-linux-gnueabihf/libtirpc.so.3.0.0.usr-is-merged by libtirpc3t64'
182s Unpacking libtirpc3t64:armhf (1.3.4+ds-1.1) ...
182s Preparing to unpack .../6-libc-bin_2.39-0ubuntu6_armhf.deb ...
182s Unpacking libc-bin (2.39-0ubuntu6) over (2.39-0ubuntu2) ...
182s Setting up libc-bin (2.39-0ubuntu6) ...
192s (Reading database ... 78667 files and directories currently installed.)
90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78667 files and directories currently installed.) 192s Preparing to unpack .../0-locales_2.39-0ubuntu6_all.deb ... 192s Unpacking locales (2.39-0ubuntu6) over (2.39-0ubuntu2) ... 193s Preparing to unpack .../1-libproc2-0_2%3a4.0.4-4ubuntu2_armhf.deb ... 193s Unpacking libproc2-0:armhf (2:4.0.4-4ubuntu2) over (2:4.0.4-4ubuntu1) ... 193s Preparing to unpack .../2-procps_2%3a4.0.4-4ubuntu2_armhf.deb ... 193s Unpacking procps (2:4.0.4-4ubuntu2) over (2:4.0.4-4ubuntu1) ... 193s Preparing to unpack .../3-vim-tiny_2%3a9.1.0016-1ubuntu6_armhf.deb ... 193s Unpacking vim-tiny (2:9.1.0016-1ubuntu6) over (2:9.1.0016-1ubuntu2) ... 193s Preparing to unpack .../4-vim-common_2%3a9.1.0016-1ubuntu6_all.deb ... 193s Unpacking vim-common (2:9.1.0016-1ubuntu6) over (2:9.1.0016-1ubuntu2) ... 193s Preparing to unpack .../5-e2fsprogs-l10n_1.47.0-2.4~exp1ubuntu2_all.deb ... 193s Unpacking e2fsprogs-l10n (1.47.0-2.4~exp1ubuntu2) over (1.47.0-2ubuntu1) ... 193s Preparing to unpack .../6-libblockdev-fs3_3.1.0-1build1_armhf.deb ... 193s Unpacking libblockdev-fs3:armhf (3.1.0-1build1) over (3.1.0-1) ... 193s dpkg: libreiserfscore0: dependency problems, but removing anyway as you requested: 193s btrfs-progs depends on libreiserfscore0 (>= 1:3.6.27). 193s 193s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78668 files and directories currently installed.) 193s Removing libreiserfscore0 (1:3.6.27-7) ... 193s Selecting previously unselected package libreiserfscore0t64. 194s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78663 files and directories currently installed.) 194s Preparing to unpack .../libreiserfscore0t64_1%3a3.6.27-7.1_armhf.deb ... 194s Unpacking libreiserfscore0t64 (1:3.6.27-7.1) ... 194s Preparing to unpack .../btrfs-progs_6.6.3-1.1build1_armhf.deb ... 194s Unpacking btrfs-progs (6.6.3-1.1build1) over (6.6.3-1.1) ... 194s dpkg: libext2fs2:armhf: dependency problems, but removing anyway as you requested: 194s e2fsprogs depends on libext2fs2 (= 1.47.0-2ubuntu1). 194s 194s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 
80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78669 files and directories currently installed.) 194s Removing libext2fs2:armhf (1.47.0-2ubuntu1) ... 194s Selecting previously unselected package libext2fs2t64:armhf. 194s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78662 files and directories currently installed.) 194s Preparing to unpack .../libext2fs2t64_1.47.0-2.4~exp1ubuntu2_armhf.deb ... 194s Adding 'diversion of /lib/arm-linux-gnueabihf/libe2p.so.2 to /lib/arm-linux-gnueabihf/libe2p.so.2.usr-is-merged by libext2fs2t64' 194s Adding 'diversion of /lib/arm-linux-gnueabihf/libe2p.so.2.3 to /lib/arm-linux-gnueabihf/libe2p.so.2.3.usr-is-merged by libext2fs2t64' 194s Adding 'diversion of /lib/arm-linux-gnueabihf/libext2fs.so.2 to /lib/arm-linux-gnueabihf/libext2fs.so.2.usr-is-merged by libext2fs2t64' 194s Adding 'diversion of /lib/arm-linux-gnueabihf/libext2fs.so.2.4 to /lib/arm-linux-gnueabihf/libext2fs.so.2.4.usr-is-merged by libext2fs2t64' 194s Unpacking libext2fs2t64:armhf (1.47.0-2.4~exp1ubuntu2) ... 194s Setting up libcom-err2:armhf (1.47.0-2.4~exp1ubuntu2) ... 194s Setting up libext2fs2t64:armhf (1.47.0-2.4~exp1ubuntu2) ... 194s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78678 files and directories currently installed.) 194s Preparing to unpack .../e2fsprogs_1.47.0-2.4~exp1ubuntu2_armhf.deb ... 194s Unpacking e2fsprogs (1.47.0-2.4~exp1ubuntu2) over (1.47.0-2ubuntu1) ... 194s Preparing to unpack .../libblockdev-loop3_3.1.0-1build1_armhf.deb ... 194s Unpacking libblockdev-loop3:armhf (3.1.0-1build1) over (3.1.0-1) ... 194s Preparing to unpack .../libblockdev-mdraid3_3.1.0-1build1_armhf.deb ... 194s Unpacking libblockdev-mdraid3:armhf (3.1.0-1build1) over (3.1.0-1) ... 194s Preparing to unpack .../libblockdev-nvme3_3.1.0-1build1_armhf.deb ... 194s Unpacking libblockdev-nvme3:armhf (3.1.0-1build1) over (3.1.0-1) ... 195s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 
78678 files and directories currently installed.) 195s Removing libnvme1 (1.8-2) ... 195s Selecting previously unselected package libnvme1t64. 195s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78671 files and directories currently installed.) 195s Preparing to unpack .../00-libnvme1t64_1.8-3_armhf.deb ... 195s Unpacking libnvme1t64 (1.8-3) ... 195s Preparing to unpack .../01-libblockdev-part3_3.1.0-1build1_armhf.deb ... 195s Unpacking libblockdev-part3:armhf (3.1.0-1build1) over (3.1.0-1) ... 195s Preparing to unpack .../02-libblockdev-swap3_3.1.0-1build1_armhf.deb ... 195s Unpacking libblockdev-swap3:armhf (3.1.0-1build1) over (3.1.0-1) ... 195s Preparing to unpack .../03-libblockdev3_3.1.0-1build1_armhf.deb ... 195s Unpacking libblockdev3:armhf (3.1.0-1build1) over (3.1.0-1) ... 195s Preparing to unpack .../04-libgudev-1.0-0_1%3a238-3ubuntu2_armhf.deb ... 195s Unpacking libgudev-1.0-0:armhf (1:238-3ubuntu2) over (1:238-3) ... 195s Preparing to unpack .../05-libxml2_2.9.14+dfsg-1.3ubuntu2_armhf.deb ... 195s Unpacking libxml2:armhf (2.9.14+dfsg-1.3ubuntu2) over (2.9.14+dfsg-1.3ubuntu1) ... 195s Preparing to unpack .../06-libmbim-proxy_1.31.2-0ubuntu2_armhf.deb ... 195s Unpacking libmbim-proxy (1.31.2-0ubuntu2) over (1.30.0-1) ... 195s Preparing to unpack .../07-libmbim-glib4_1.31.2-0ubuntu2_armhf.deb ... 195s Unpacking libmbim-glib4:armhf (1.31.2-0ubuntu2) over (1.30.0-1) ... 195s Preparing to unpack .../08-libjson-glib-1.0-common_1.8.0-2build1_all.deb ... 195s Unpacking libjson-glib-1.0-common (1.8.0-2build1) over (1.8.0-2) ... 195s Preparing to unpack .../09-libjson-glib-1.0-0_1.8.0-2build1_armhf.deb ... 195s Unpacking libjson-glib-1.0-0:armhf (1.8.0-2build1) over (1.8.0-2) ... 195s Preparing to unpack .../10-libusb-1.0-0_2%3a1.0.27-1_armhf.deb ... 195s Unpacking libusb-1.0-0:armhf (2:1.0.27-1) over (2:1.0.26-1) ... 195s Preparing to unpack .../11-libgusb2_0.4.8-1build1_armhf.deb ... 195s Unpacking libgusb2:armhf (0.4.8-1build1) over (0.4.8-1) ... 196s Preparing to unpack .../12-libmm-glib0_1.23.4-0ubuntu1_armhf.deb ... 196s Unpacking libmm-glib0:armhf (1.23.4-0ubuntu1) over (1.22.0-3) ... 196s Preparing to unpack .../13-libprotobuf-c1_1.4.1-1ubuntu3_armhf.deb ... 196s Unpacking libprotobuf-c1:armhf (1.4.1-1ubuntu3) over (1.4.1-1ubuntu2) ... 196s Preparing to unpack .../14-libbrotli1_1.1.0-2build1_armhf.deb ... 196s Unpacking libbrotli1:armhf (1.1.0-2build1) over (1.1.0-2) ... 196s Preparing to unpack .../15-libnghttp2-14_1.59.0-1build1_armhf.deb ... 196s Unpacking libnghttp2-14:armhf (1.59.0-1build1) over (1.59.0-1) ... 196s Preparing to unpack .../16-libssh-4_0.10.6-2build1_armhf.deb ... 196s Unpacking libssh-4:armhf (0.10.6-2build1) over (0.10.6-2) ... 196s Preparing to unpack .../17-libibverbs1_50.0-2build1_armhf.deb ... 196s Unpacking libibverbs1:armhf (50.0-2build1) over (50.0-2) ... 196s Preparing to unpack .../18-libfido2-1_1.14.0-1build1_armhf.deb ... 196s Unpacking libfido2-1:armhf (1.14.0-1build1) over (1.14.0-1) ... 
196s Preparing to unpack .../19-coreutils_9.4-3ubuntu3_armhf.deb ... 196s Unpacking coreutils (9.4-3ubuntu3) over (9.4-2ubuntu4) ... 196s Setting up coreutils (9.4-3ubuntu3) ... 196s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78679 files and directories currently installed.) 196s Preparing to unpack .../debianutils_5.17_armhf.deb ... 196s Unpacking debianutils (5.17) over (5.16) ... 196s Setting up debianutils (5.17) ... 197s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78679 files and directories currently installed.) 197s Preparing to unpack .../util-linux_2.39.3-9ubuntu2_armhf.deb ... 197s Unpacking util-linux (2.39.3-9ubuntu2) over (2.39.3-6ubuntu2) ... 197s Setting up util-linux (2.39.3-9ubuntu2) ... 198s fstrim.service is a disabled or a static unit not running, not starting it. 198s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78679 files and directories currently installed.) 198s Removing libatm1:armhf (1:2.5.1-5) ... 199s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78674 files and directories currently installed.) 199s Preparing to unpack .../curl_8.5.0-2ubuntu8_armhf.deb ... 199s Unpacking curl (8.5.0-2ubuntu8) over (8.5.0-2ubuntu2) ... 199s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 
55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78674 files and directories currently installed.) 199s Removing libcurl4:armhf (8.5.0-2ubuntu2) ... 199s Selecting previously unselected package libcurl4t64:armhf. 199s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78669 files and directories currently installed.) 199s Preparing to unpack .../libcurl4t64_8.5.0-2ubuntu8_armhf.deb ... 199s Unpacking libcurl4t64:armhf (8.5.0-2ubuntu8) ... 199s Preparing to unpack .../file_1%3a5.45-3_armhf.deb ... 199s Unpacking file (1:5.45-3) over (1:5.45-2) ... 199s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78675 files and directories currently installed.) 199s Removing libmagic1:armhf (1:5.45-2) ... 199s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78665 files and directories currently installed.) 199s Preparing to unpack .../libmagic-mgc_1%3a5.45-3_armhf.deb ... 199s Unpacking libmagic-mgc (1:5.45-3) over (1:5.45-2) ... 199s Selecting previously unselected package libmagic1t64:armhf. 199s Preparing to unpack .../libmagic1t64_1%3a5.45-3_armhf.deb ... 199s Unpacking libmagic1t64:armhf (1:5.45-3) ... 199s Preparing to unpack .../libplymouth5_24.004.60-1ubuntu6_armhf.deb ... 199s Unpacking libplymouth5:armhf (24.004.60-1ubuntu6) over (24.004.60-1ubuntu3) ... 200s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 
95% (Reading database ... 100% (Reading database ... 78676 files and directories currently installed.) 200s Removing libpng16-16:armhf (1.6.43-1) ... 200s Selecting previously unselected package libpng16-16t64:armhf. 200s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78666 files and directories currently installed.) 200s Preparing to unpack .../libpng16-16t64_1.6.43-3_armhf.deb ... 200s Unpacking libpng16-16t64:armhf (1.6.43-3) ... 200s Preparing to unpack .../bind9-host_1%3a9.18.24-0ubuntu3_armhf.deb ... 200s Unpacking bind9-host (1:9.18.24-0ubuntu3) over (1:9.18.21-0ubuntu1) ... 200s Preparing to unpack .../bind9-dnsutils_1%3a9.18.24-0ubuntu3_armhf.deb ... 200s Unpacking bind9-dnsutils (1:9.18.24-0ubuntu3) over (1:9.18.21-0ubuntu1) ... 200s Preparing to unpack .../bind9-libs_1%3a9.18.24-0ubuntu3_armhf.deb ... 200s Unpacking bind9-libs:armhf (1:9.18.24-0ubuntu3) over (1:9.18.21-0ubuntu1) ... 200s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78677 files and directories currently installed.) 200s Removing libuv1:armhf (1.48.0-1) ... 200s Selecting previously unselected package libuv1t64:armhf. 200s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78672 files and directories currently installed.) 200s Preparing to unpack .../libuv1t64_1.48.0-1.1_armhf.deb ... 200s Unpacking libuv1t64:armhf (1.48.0-1.1) ... 200s Preparing to unpack .../uuid-runtime_2.39.3-9ubuntu2_armhf.deb ... 200s Unpacking uuid-runtime (2.39.3-9ubuntu2) over (2.39.3-6ubuntu2) ... 200s Preparing to unpack .../libdebconfclient0_0.271ubuntu2_armhf.deb ... 200s Unpacking libdebconfclient0:armhf (0.271ubuntu2) over (0.271ubuntu1) ... 200s Setting up libdebconfclient0:armhf (0.271ubuntu2) ... 201s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 
55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78678 files and directories currently installed.) 201s Preparing to unpack .../libsemanage-common_3.5-1build4_all.deb ... 201s Unpacking libsemanage-common (3.5-1build4) over (3.5-1build2) ... 201s Setting up libsemanage-common (3.5-1build4) ... 201s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78678 files and directories currently installed.) 201s Preparing to unpack .../libsemanage2_3.5-1build4_armhf.deb ... 201s Unpacking libsemanage2:armhf (3.5-1build4) over (3.5-1build2) ... 201s Setting up libsemanage2:armhf (3.5-1build4) ... 201s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78678 files and directories currently installed.) 201s Preparing to unpack .../install-info_7.1-3build1_armhf.deb ... 201s Unpacking install-info (7.1-3build1) over (7.1-3) ... 201s Setting up install-info (7.1-3build1) ... 201s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78678 files and directories currently installed.) 201s Preparing to unpack .../00-gcc-13-base_13.2.0-21ubuntu1_armhf.deb ... 201s Unpacking gcc-13-base:armhf (13.2.0-21ubuntu1) over (13.2.0-17ubuntu2) ... 201s Preparing to unpack .../01-libss2_1.47.0-2.4~exp1ubuntu2_armhf.deb ... 201s Unpacking libss2:armhf (1.47.0-2.4~exp1ubuntu2) over (1.47.0-2ubuntu1) ... 201s Preparing to unpack .../02-dmsetup_2%3a1.02.185-3ubuntu2_armhf.deb ... 201s Unpacking dmsetup (2:1.02.185-3ubuntu2) over (2:1.02.185-3ubuntu1) ... 201s Preparing to unpack .../03-krb5-locales_1.20.1-6ubuntu1_all.deb ... 201s Unpacking krb5-locales (1.20.1-6ubuntu1) over (1.20.1-5build1) ... 202s Preparing to unpack .../04-libbsd0_0.12.1-1_armhf.deb ... 202s Unpacking libbsd0:armhf (0.12.1-1) over (0.11.8-1) ... 202s Preparing to unpack .../05-libglib2.0-data_2.79.3-3ubuntu5_all.deb ... 
202s Unpacking libglib2.0-data (2.79.3-3ubuntu5) over (2.79.2-1~ubuntu1) ... 202s Preparing to unpack .../06-libslang2_2.3.3-3build1_armhf.deb ... 202s Unpacking libslang2:armhf (2.3.3-3build1) over (2.3.3-3) ... 202s Preparing to unpack .../07-rsyslog_8.2312.0-3ubuntu7_armhf.deb ... 202s Unpacking rsyslog (8.2312.0-3ubuntu7) over (8.2312.0-3ubuntu3) ... 202s Selecting previously unselected package xdg-user-dirs. 202s Preparing to unpack .../08-xdg-user-dirs_0.18-1_armhf.deb ... 202s Unpacking xdg-user-dirs (0.18-1) ... 202s Preparing to unpack .../09-xxd_2%3a9.1.0016-1ubuntu6_armhf.deb ... 202s Unpacking xxd (2:9.1.0016-1ubuntu6) over (2:9.1.0016-1ubuntu2) ... 202s Preparing to unpack .../10-apparmor_4.0.0-beta3-0ubuntu2_armhf.deb ... 203s Unpacking apparmor (4.0.0-beta3-0ubuntu2) over (4.0.0~alpha4-0ubuntu1) ... 204s Preparing to unpack .../11-ftp_20230507-2build1_all.deb ... 204s Unpacking ftp (20230507-2build1) over (20230507-2) ... 204s Preparing to unpack .../12-inetutils-telnet_2%3a2.5-3ubuntu3_armhf.deb ... 204s Unpacking inetutils-telnet (2:2.5-3ubuntu3) over (2:2.5-3ubuntu1) ... 204s Preparing to unpack .../13-info_7.1-3build1_armhf.deb ... 204s Unpacking info (7.1-3build1) over (7.1-3) ... 204s Preparing to unpack .../14-libxmuu1_2%3a1.1.3-3build1_armhf.deb ... 204s Unpacking libxmuu1:armhf (2:1.1.3-3build1) over (2:1.1.3-3) ... 204s Preparing to unpack .../15-lshw_02.19.git.2021.06.19.996aaad9c7-2build2_armhf.deb ... 204s Unpacking lshw (02.19.git.2021.06.19.996aaad9c7-2build2) over (02.19.git.2021.06.19.996aaad9c7-2build1) ... 204s Preparing to unpack .../16-mtr-tiny_0.95-1.1build1_armhf.deb ... 204s Unpacking mtr-tiny (0.95-1.1build1) over (0.95-1.1) ... 204s Preparing to unpack .../17-plymouth-theme-ubuntu-text_24.004.60-1ubuntu6_armhf.deb ... 204s Unpacking plymouth-theme-ubuntu-text (24.004.60-1ubuntu6) over (24.004.60-1ubuntu3) ... 204s Preparing to unpack .../18-plymouth_24.004.60-1ubuntu6_armhf.deb ... 205s Unpacking plymouth (24.004.60-1ubuntu6) over (24.004.60-1ubuntu3) ... 205s Preparing to unpack .../19-psmisc_23.7-1_armhf.deb ... 205s Unpacking psmisc (23.7-1) over (23.6-2) ... 205s Preparing to unpack .../20-telnet_0.17+2.5-3ubuntu3_all.deb ... 205s Unpacking telnet (0.17+2.5-3ubuntu3) over (0.17+2.5-3ubuntu1) ... 205s Preparing to unpack .../21-xz-utils_5.6.0-0.2_armhf.deb ... 205s Unpacking xz-utils (5.6.0-0.2) over (5.4.5-0.3) ... 205s Preparing to unpack .../22-ubuntu-standard_1.536build1_armhf.deb ... 205s Unpacking ubuntu-standard (1.536build1) over (1.536) ... 205s Preparing to unpack .../23-usb.ids_2024.03.18-1_all.deb ... 205s Unpacking usb.ids (2024.03.18-1) over (2024.01.30-1) ... 205s Preparing to unpack .../24-libctf-nobfd0_2.42-4ubuntu1_armhf.deb ... 205s Unpacking libctf-nobfd0:armhf (2.42-4ubuntu1) over (2.42-3ubuntu1) ... 205s Preparing to unpack .../25-libctf0_2.42-4ubuntu1_armhf.deb ... 205s Unpacking libctf0:armhf (2.42-4ubuntu1) over (2.42-3ubuntu1) ... 205s Preparing to unpack .../26-binutils-arm-linux-gnueabihf_2.42-4ubuntu1_armhf.deb ... 205s Unpacking binutils-arm-linux-gnueabihf (2.42-4ubuntu1) over (2.42-3ubuntu1) ... 205s Preparing to unpack .../27-libbinutils_2.42-4ubuntu1_armhf.deb ... 205s Unpacking libbinutils:armhf (2.42-4ubuntu1) over (2.42-3ubuntu1) ... 206s Preparing to unpack .../28-binutils_2.42-4ubuntu1_armhf.deb ... 206s Unpacking binutils (2.42-4ubuntu1) over (2.42-3ubuntu1) ... 206s Preparing to unpack .../29-binutils-common_2.42-4ubuntu1_armhf.deb ... 
206s Unpacking binutils-common:armhf (2.42-4ubuntu1) over (2.42-3ubuntu1) ... 206s Preparing to unpack .../30-libsframe1_2.42-4ubuntu1_armhf.deb ... 206s Unpacking libsframe1:armhf (2.42-4ubuntu1) over (2.42-3ubuntu1) ... 206s Preparing to unpack .../31-bolt_0.9.6-2build1_armhf.deb ... 206s Unpacking bolt (0.9.6-2build1) over (0.9.6-2) ... 206s Preparing to unpack .../32-cryptsetup-bin_2%3a2.7.0-1ubuntu2_armhf.deb ... 206s Unpacking cryptsetup-bin (2:2.7.0-1ubuntu2) over (2:2.7.0-1ubuntu1) ... 206s Preparing to unpack .../33-dpkg-dev_1.22.6ubuntu4_all.deb ... 206s Unpacking dpkg-dev (1.22.6ubuntu4) over (1.22.4ubuntu5) ... 206s Preparing to unpack .../34-libdpkg-perl_1.22.6ubuntu4_all.deb ... 206s Unpacking libdpkg-perl (1.22.6ubuntu4) over (1.22.4ubuntu5) ... 206s Preparing to unpack .../35-fonts-ubuntu-console_0.869+git20240321-0ubuntu1_all.deb ... 206s Unpacking fonts-ubuntu-console (0.869+git20240321-0ubuntu1) over (0.869-0ubuntu1) ... 206s Preparing to unpack .../36-gnupg-l10n_2.4.4-2ubuntu15_all.deb ... 206s Unpacking gnupg-l10n (2.4.4-2ubuntu15) over (2.4.4-2ubuntu7) ... 206s Preparing to unpack .../37-ibverbs-providers_50.0-2build1_armhf.deb ... 207s Unpacking ibverbs-providers:armhf (50.0-2build1) over (50.0-2) ... 207s Preparing to unpack .../38-jq_1.7.1-3_armhf.deb ... 207s Unpacking jq (1.7.1-3) over (1.7.1-2) ... 207s Preparing to unpack .../39-libjq1_1.7.1-3_armhf.deb ... 207s Unpacking libjq1:armhf (1.7.1-3) over (1.7.1-2) ... 207s Selecting previously unselected package libatm1t64:armhf. 207s Preparing to unpack .../40-libatm1t64_1%3a2.5.1-5.1_armhf.deb ... 207s Unpacking libatm1t64:armhf (1:2.5.1-5.1) ... 207s Preparing to unpack .../41-libevent-core-2.1-7_2.1.12-stable-9build1_armhf.deb ... 207s Unpacking libevent-core-2.1-7:armhf (2.1.12-stable-9build1) over (2.1.12-stable-9) ... 207s Preparing to unpack .../42-libftdi1-2_1.5-6build4_armhf.deb ... 207s Unpacking libftdi1-2:armhf (1.5-6build4) over (1.5-6build3) ... 207s Preparing to unpack .../43-libldap-common_2.6.7+dfsg-1~exp1ubuntu6_all.deb ... 207s Unpacking libldap-common (2.6.7+dfsg-1~exp1ubuntu6) over (2.6.7+dfsg-1~exp1ubuntu1) ... 207s Preparing to unpack .../44-libsasl2-modules_2.1.28+dfsg1-5ubuntu1_armhf.deb ... 207s Unpacking libsasl2-modules:armhf (2.1.28+dfsg1-5ubuntu1) over (2.1.28+dfsg1-4) ... 207s Preparing to unpack .../45-python3-distutils_3.12.2-3ubuntu1.1_all.deb ... 207s Unpacking python3-distutils (3.12.2-3ubuntu1.1) over (3.11.5-1) ... 207s Preparing to unpack .../46-python3-lib2to3_3.12.2-3ubuntu1.1_all.deb ... 208s Unpacking python3-lib2to3 (3.12.2-3ubuntu1.1) over (3.11.5-1) ... 208s Preparing to unpack .../47-python3-markupsafe_2.1.5-1build1_armhf.deb ... 208s Unpacking python3-markupsafe (2.1.5-1build1) over (2.1.5-1) ... 208s Preparing to unpack .../48-python3-pyrsistent_0.20.0-1build1_armhf.deb ... 208s Unpacking python3-pyrsistent:armhf (0.20.0-1build1) over (0.20.0-1) ... 208s Preparing to unpack .../49-python3-typing-extensions_4.10.0-1_all.deb ... 208s Unpacking python3-typing-extensions (4.10.0-1) over (4.9.0-1) ... 208s Preparing to unpack .../50-cloud-init_24.1.2-0ubuntu1_all.deb ... 209s Unpacking cloud-init (24.1.2-0ubuntu1) over (24.1.1-0ubuntu1) ... 209s Preparing to unpack .../51-kpartx_0.9.4-5ubuntu6_armhf.deb ... 209s Unpacking kpartx (0.9.4-5ubuntu6) over (0.9.4-5ubuntu3) ... 209s Setting up fonts-ubuntu-console (0.869+git20240321-0ubuntu1) ... 209s Setting up pinentry-curses (1.2.1-3ubuntu4) ... 209s Setting up libtext-iconv-perl:armhf (1.7-8build2) ... 
209s Setting up libtext-charwidth-perl:armhf (0.04-11build2) ... 209s Setting up libibverbs1:armhf (50.0-2build1) ... 209s Setting up systemd-sysv (255.4-1ubuntu5) ... 209s Setting up libapparmor1:armhf (4.0.0-beta3-0ubuntu2) ... 209s Setting up libatm1t64:armhf (1:2.5.1-5.1) ... 209s Setting up libgdbm6t64:armhf (1.23-5.1) ... 209s Setting up bsdextrautils (2.39.3-9ubuntu2) ... 209s Setting up libgdbm-compat4t64:armhf (1.23-5.1) ... 209s Setting up xdg-user-dirs (0.18-1) ... 209s Setting up ibverbs-providers:armhf (50.0-2build1) ... 209s Setting up linux-headers-6.8.0-20 (6.8.0-20.20) ... 209s Setting up libmagic-mgc (1:5.45-3) ... 209s Setting up gawk (1:5.2.1-2build2) ... 209s Setting up psmisc (23.7-1) ... 209s Setting up libjq1:armhf (1.7.1-3) ... 209s Setting up libtirpc-common (1.3.4+ds-1.1) ... 209s Setting up libbrotli1:armhf (1.1.0-2build1) ... 209s Setting up libsqlite3-0:armhf (3.45.1-1ubuntu1) ... 209s Setting up libsasl2-modules:armhf (2.1.28+dfsg1-5ubuntu1) ... 209s Setting up libuv1t64:armhf (1.48.0-1.1) ... 209s Setting up libmagic1t64:armhf (1:5.45-3) ... 209s Setting up rsyslog (8.2312.0-3ubuntu7) ... 210s info: The user `syslog' is already a member of `adm'. 210s apparmor_parser: Unable to replace "rsyslogd". apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 210s 212s Setting up binutils-common:armhf (2.42-4ubuntu1) ... 212s Setting up libpsl5t64:armhf (0.21.2-1.1) ... 212s Setting up libnghttp2-14:armhf (1.59.0-1build1) ... 212s Setting up libreiserfscore0t64 (1:3.6.27-7.1) ... 212s Setting up libctf-nobfd0:armhf (2.42-4ubuntu1) ... 212s Setting up libnss-systemd:armhf (255.4-1ubuntu5) ... 212s Setting up krb5-locales (1.20.1-6ubuntu1) ... 212s Setting up file (1:5.45-3) ... 212s Setting up lshw (02.19.git.2021.06.19.996aaad9c7-2build2) ... 212s Setting up locales (2.39-0ubuntu6) ... 213s Generating locales (this might take a while)... 218s en_US.UTF-8... done 218s Generation complete. 218s Setting up libldap-common (2.6.7+dfsg-1~exp1ubuntu6) ... 218s Setting up libprotobuf-c1:armhf (1.4.1-1ubuntu3) ... 218s Setting up xxd (2:9.1.0016-1ubuntu6) ... 218s Setting up libsframe1:armhf (2.42-4ubuntu1) ... 218s Setting up libelf1t64:armhf (0.190-1.1build2) ... 218s Setting up libkrb5support0:armhf (1.20.1-6ubuntu1) ... 218s Setting up linux-headers-6.8.0-20-generic (6.8.0-20.20) ... 218s Setting up eject (2.39.3-9ubuntu2) ... 218s Setting up apparmor (4.0.0-beta3-0ubuntu2) ... 218s Installing new version of config file /etc/apparmor.d/abstractions/authentication ... 218s Installing new version of config file /etc/apparmor.d/abstractions/crypto ... 218s Installing new version of config file /etc/apparmor.d/abstractions/kde-open5 ... 218s Installing new version of config file /etc/apparmor.d/abstractions/openssl ... 218s Installing new version of config file /etc/apparmor.d/code ... 218s Installing new version of config file /etc/apparmor.d/firefox ... 219s apparmor_parser: Unable to replace "lsb_release". apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 219s 219s apparmor_parser: Unable to replace "kmod". apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 219s 219s apparmor_parser: Unable to replace "nvidia_modprobe". apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 
219s 220s sysctl: cannot stat /proc/sys/kernel/apparmor_restrict_unprivileged_userns: No such file or directory 220s Reloading AppArmor profiles 220s /sbin/apparmor_parser: Unable to replace "1password". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "Discord". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "MongoDB Compass". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "QtWebEngineProcess". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "brave". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "buildah". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "busybox". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "cam". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "ch-checkns". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "ch-run". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "chrome". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "vscode". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "crun". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "devhelp". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "element-desktop". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "epiphany". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "evolution". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "firefox". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "flatpak". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "geary". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "github-desktop". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "goldendict". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 
220s 220s /sbin/apparmor_parser: Unable to replace "ipa_verify". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "kchmviewer". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "keybase". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "lc-compliance". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "libcamerify". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "linux-sandbox". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "loupe". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "lxc-attach". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "lxc-create". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "lxc-destroy". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "lxc-execute". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "lxc-stop". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "lxc-unshare". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "lxc-usernsexec". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "mmdebstrap". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "msedge". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "nautilus". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "notepadqq". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "obsidian". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "opam". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "opera". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "podman". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "polypane". /sbin/apparmor_parser: Access denied. 
You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "pageedit". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "qmapshack". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "qutebrowser". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "privacybrowser". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "rootlesskit". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "rssguard". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "runc". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "rpm". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "sbuild-abort". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "sbuild". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "sbuild-adduser". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "sbuild-checkpackages". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "sbuild-apt". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "sbuild-createchroot". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "qcam". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "sbuild-clean". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "sbuild-distupgrade". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "sbuild-destroychroot". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "sbuild-unhold". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "sbuild-hold". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "QtWebEngineProcess". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "plasmashell". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 
220s 220s /sbin/apparmor_parser: Unable to replace "sbuild-upgrade". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "sbuild-update". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "scide". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "sbuild-shell". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "slack". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "signal-desktop". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "steam". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "slirp4netns". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "surfshark". /sbin/apparmor_parser: Unable to replace "systemd-coredump". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "thunderbird". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "stress-ng". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "toybox". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "trinity". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "tuxedo-control-center". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "tup". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "userbindmount". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "lsb_release". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "unprivileged_userns". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "vdens". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "virtiofsd". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "vivaldi-bin". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "kmod". /sbin/apparmor_parser: Access denied. 
You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "nvidia_modprobe". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "vpnns". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "wpcom". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "uwsgi-core". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "unix-chkpwd". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "rsyslogd". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "/usr/bin/man". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "ubuntu_pro_apt_news". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s /sbin/apparmor_parser: Unable to replace "tcpdump". /sbin/apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 220s 220s Error: At least one profile failed to load 220s Setting up libglib2.0-0t64:armhf (2.79.3-3ubuntu5) ... 220s No schema files found: doing nothing. 220s Setting up libglib2.0-data (2.79.3-3ubuntu5) ... 220s Setting up vim-common (2:9.1.0016-1ubuntu6) ... 220s Setting up gcc-13-base:armhf (13.2.0-21ubuntu1) ... 221s Setting up libqrtr-glib0:armhf (1.2.2-1ubuntu3) ... 221s Setting up libslang2:armhf (2.3.3-3build1) ... 221s Setting up libnvme1t64 (1.8-3) ... 221s Setting up mtr-tiny (0.95-1.1build1) ... 221s Setting up gnupg-l10n (2.4.4-2ubuntu15) ... 221s Setting up librtmp1:armhf (2.4+20151223.gitfa8646d.1-2build6) ... 221s Setting up libdbus-1-3:armhf (1.14.10-4ubuntu2) ... 221s Setting up xz-utils (5.6.0-0.2) ... 221s Setting up perl-modules-5.38 (5.38.2-3.2) ... 221s Setting up libproc2-0:armhf (2:4.0.4-4ubuntu2) ... 221s Setting up libpng16-16t64:armhf (1.6.43-3) ... 221s Setting up systemd-timesyncd (255.4-1ubuntu5) ... 221s Setting up libevent-core-2.1-7:armhf (2.1.12-stable-9build1) ... 221s Setting up libss2:armhf (1.47.0-2.4~exp1ubuntu2) ... 221s Setting up usb.ids (2024.03.18-1) ... 221s Setting up sudo (1.9.15p5-3ubuntu3) ... 221s Setting up dhcpcd-base (1:10.0.6-1ubuntu2) ... 221s Setting up gir1.2-glib-2.0:armhf (2.79.3-3ubuntu5) ... 221s Setting up libk5crypto3:armhf (1.20.1-6ubuntu1) ... 221s Setting up logsave (1.47.0-2.4~exp1ubuntu2) ... 221s Setting up libfdisk1:armhf (2.39.3-9ubuntu2) ... 221s Setting up libdb5.3t64:armhf (5.3.28+dfsg2-6) ... 221s Setting up libdevmapper1.02.1:armhf (2:1.02.185-3ubuntu2) ... 221s Setting up python-apt-common (2.7.7) ... 221s Setting up mount (2.39.3-9ubuntu2) ... 221s Setting up dmsetup (2:1.02.185-3ubuntu2) ... 221s Setting up uuid-runtime (2.39.3-9ubuntu2) ... 223s uuidd.service is a disabled or a static unit not running, not starting it. 223s Setting up libmm-glib0:armhf (1.23.4-0ubuntu1) ... 223s Setting up groff-base (1.23.0-3build1) ... 223s Setting up libplymouth5:armhf (24.004.60-1ubuntu6) ... 223s Setting up dbus-session-bus-common (1.14.10-4ubuntu2) ... 223s Setting up jq (1.7.1-3) ... 
223s Setting up procps (2:4.0.4-4ubuntu2) ... 223s Setting up gpgconf (2.4.4-2ubuntu15) ... 223s Setting up libpcap0.8t64:armhf (1.10.4-4.1ubuntu2) ... 223s Setting up libcryptsetup12:armhf (2:2.7.0-1ubuntu2) ... 223s Setting up libgirepository-1.0-1:armhf (1.79.1-1ubuntu6) ... 223s Setting up libjson-glib-1.0-common (1.8.0-2build1) ... 223s Setting up libkrb5-3:armhf (1.20.1-6ubuntu1) ... 223s Setting up libpython3.11-minimal:armhf (3.11.8-1build4) ... 223s Setting up libusb-1.0-0:armhf (2:1.0.27-1) ... 223s Setting up libperl5.38t64:armhf (5.38.2-3.2) ... 223s Setting up tnftp (20230507-2build1) ... 224s Setting up libbinutils:armhf (2.42-4ubuntu1) ... 224s Setting up dbus-system-bus-common (1.14.10-4ubuntu2) ... 224s Setting up libfido2-1:armhf (1.14.0-1build1) ... 224s Setting up openssl (3.0.13-0ubuntu2) ... 224s Setting up libbsd0:armhf (0.12.1-1) ... 224s Setting up readline-common (8.2-4) ... 224s Setting up libxml2:armhf (2.9.14+dfsg-1.3ubuntu2) ... 224s Setting up libxmuu1:armhf (2:1.1.3-3build1) ... 224s Setting up dbus-bin (1.14.10-4ubuntu2) ... 224s Setting up info (7.1-3build1) ... 224s Setting up liblocale-gettext-perl (1.07-6ubuntu4) ... 224s Setting up gpg (2.4.4-2ubuntu15) ... 224s Setting up libgudev-1.0-0:armhf (1:238-3ubuntu2) ... 224s Setting up libpolkit-gobject-1-0:armhf (124-1ubuntu1) ... 224s Setting up libbpf1:armhf (1:1.3.0-2build1) ... 224s Setting up libmbim-glib4:armhf (1.31.2-0ubuntu2) ... 224s Setting up rsync (3.2.7-1build1) ... 225s rsync.service is a disabled or a static unit not running, not starting it. 225s Setting up libudisks2-0:armhf (2.10.1-6) ... 225s Setting up libkmod2:armhf (31+20240202-2ubuntu4) ... 225s Setting up bolt (0.9.6-2build1) ... 226s bolt.service is a disabled or a static unit not running, not starting it. 226s Setting up gnupg-utils (2.4.4-2ubuntu15) ... 226s Setting up initramfs-tools-bin (0.142ubuntu23) ... 226s Setting up libctf0:armhf (2.42-4ubuntu1) ... 226s Setting up cryptsetup-bin (2:2.7.0-1ubuntu2) ... 226s Setting up python3.11-minimal (3.11.8-1build4) ... 228s Setting up tcpdump (4.99.4-3ubuntu2) ... 228s apparmor_parser: Unable to replace "tcpdump". apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 228s 228s Setting up apt-utils (2.7.14) ... 228s Setting up gpg-agent (2.4.4-2ubuntu15) ... 229s Setting up libpython3.12-stdlib:armhf (3.12.2-4build3) ... 229s Setting up wget (1.21.4-1ubuntu2) ... 229s Setting up libxmlb2:armhf (0.3.15-1build1) ... 229s Setting up btrfs-progs (6.6.3-1.1build1) ... 229s Setting up libpython3.11-stdlib:armhf (3.11.8-1build4) ... 229s Setting up python3.12 (3.12.2-4build3) ... 232s Setting up gpgsm (2.4.4-2ubuntu15) ... 232s Setting up inetutils-telnet (2:2.5-3ubuntu3) ... 232s Setting up e2fsprogs (1.47.0-2.4~exp1ubuntu2) ... 232s update-initramfs: deferring update (trigger activated) 233s e2scrub_all.service is a disabled or a static unit not running, not starting it. 233s Setting up libparted2t64:armhf (3.6-3.1build2) ... 233s Setting up linux-headers-generic (6.8.0-20.20+1) ... 233s Setting up dbus-daemon (1.14.10-4ubuntu2) ... 233s Setting up libmbim-proxy (1.31.2-0ubuntu2) ... 233s Setting up vim-tiny (2:9.1.0016-1ubuntu6) ... 233s Setting up kmod (31+20240202-2ubuntu4) ... 234s Setting up libnetplan1:armhf (1.0-1) ... 234s Setting up man-db (2.12.0-3build4) ... 234s Updating database of manual pages ... 236s apparmor_parser: Unable to replace "/usr/bin/man". apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 
236s 237s man-db.service is a disabled or a static unit not running, not starting it. 237s Setting up fdisk (2.39.3-9ubuntu2) ... 237s Setting up libjson-glib-1.0-0:armhf (1.8.0-2build1) ... 237s Setting up libsasl2-modules-db:armhf (2.1.28+dfsg1-5ubuntu1) ... 237s Setting up libftdi1-2:armhf (1.5-6build4) ... 237s Setting up perl (5.38.2-3.2) ... 237s Setting up gir1.2-girepository-2.0:armhf (1.79.1-1ubuntu6) ... 237s Setting up dbus (1.14.10-4ubuntu2) ... 237s A reboot is required to replace the running dbus-daemon. 237s Please reboot the system when convenient. 238s Setting up shared-mime-info (2.4-1build1) ... 239s Setting up libblockdev-utils3:armhf (3.1.0-1build1) ... 239s Setting up libgssapi-krb5-2:armhf (1.20.1-6ubuntu1) ... 239s Setting up udev (255.4-1ubuntu5) ... 240s Setting up ftp (20230507-2build1) ... 240s Setting up keyboxd (2.4.4-2ubuntu15) ... 240s Setting up libdpkg-perl (1.22.6ubuntu4) ... 240s Setting up libsasl2-2:armhf (2.1.28+dfsg1-5ubuntu1) ... 240s Setting up libssh-4:armhf (0.10.6-2build1) ... 240s Setting up libblockdev-nvme3:armhf (3.1.0-1build1) ... 240s Setting up libblockdev-fs3:armhf (3.1.0-1build1) ... 240s Setting up kpartx (0.9.4-5ubuntu6) ... 240s Setting up libpam-systemd:armhf (255.4-1ubuntu5) ... 241s Setting up libpolkit-agent-1-0:armhf (124-1ubuntu1) ... 241s Setting up libgpgme11t64:armhf (1.18.0-4.1ubuntu3) ... 241s Setting up netplan-generator (1.0-1) ... 241s Removing 'diversion of /lib/systemd/system-generators/netplan to /lib/systemd/system-generators/netplan.usr-is-merged by netplan-generator' 241s Setting up initramfs-tools-core (0.142ubuntu23) ... 241s Setting up binutils-arm-linux-gnueabihf (2.42-4ubuntu1) ... 241s Setting up libarchive13t64:armhf (3.7.2-1.1ubuntu2) ... 241s Setting up libldap2:armhf (2.6.7+dfsg-1~exp1ubuntu6) ... 241s Setting up libpython3-stdlib:armhf (3.12.2-0ubuntu1) ... 241s Setting up systemd-resolved (255.4-1ubuntu5) ... 242s Setting up python3.11 (3.11.8-1build4) ... 244s Setting up telnet (0.17+2.5-3ubuntu3) ... 244s Setting up initramfs-tools (0.142ubuntu23) ... 244s update-initramfs: deferring update (trigger activated) 244s Setting up libblockdev-mdraid3:armhf (3.1.0-1build1) ... 244s Setting up libcurl4t64:armhf (8.5.0-2ubuntu8) ... 244s Setting up bind9-libs:armhf (1:9.18.24-0ubuntu3) ... 244s Setting up libtirpc3t64:armhf (1.3.4+ds-1.1) ... 244s Setting up e2fsprogs-l10n (1.47.0-2.4~exp1ubuntu2) ... 244s Setting up libblockdev-swap3:armhf (3.1.0-1build1) ... 244s Setting up plymouth (24.004.60-1ubuntu6) ... 244s update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults 244s update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults 245s Setting up iproute2 (6.1.0-1ubuntu5) ... 245s Setting up openssh-client (1:9.6p1-3ubuntu11) ... 245s Setting up libgusb2:armhf (0.4.8-1build1) ... 245s Setting up libblockdev-loop3:armhf (3.1.0-1build1) ... 245s Setting up libcurl3t64-gnutls:armhf (8.5.0-2ubuntu8) ... 245s Setting up parted (3.6-3.1build2) ... 245s Setting up libqmi-glib5:armhf (1.35.2-0ubuntu1) ... 245s Setting up python3 (3.12.2-0ubuntu1) ... 246s Setting up binutils (2.42-4ubuntu1) ... 246s Setting up python3-markupsafe (2.1.5-1build1) ... 246s Setting up libblockdev3:armhf (3.1.0-1build1) ... 246s Setting up libjcat1:armhf (0.2.0-2build2) ... 246s Setting up dpkg-dev (1.22.6ubuntu4) ... 246s Setting up libblockdev-part3:armhf (3.1.0-1build1) ... 246s Setting up dirmngr (2.4.4-2ubuntu15) ... 
246s Setting up dbus-user-session (1.14.10-4ubuntu2) ... 246s Setting up plymouth-theme-ubuntu-text (24.004.60-1ubuntu6) ... 246s update-initramfs: deferring update (trigger activated) 246s Setting up python3-cryptography (41.0.7-4build2) ... 247s Setting up python3-gi (3.47.0-3build1) ... 247s Setting up python3-typing-extensions (4.10.0-1) ... 248s Setting up lsof (4.95.0-1build2) ... 248s Setting up python3-pyrsistent:armhf (0.20.0-1build1) ... 248s Setting up libnsl2:armhf (1.3.0-3build2) ... 248s Setting up gnupg (2.4.4-2ubuntu15) ... 248s Setting up python3-netplan (1.0-1) ... 248s Setting up curl (8.5.0-2ubuntu8) ... 248s Setting up libvolume-key1:armhf (0.3.12-7build1) ... 248s Setting up bind9-host (1:9.18.24-0ubuntu3) ... 248s Setting up python3-lib2to3 (3.12.2-3ubuntu1.1) ... 249s Setting up python3-pkg-resources (68.1.2-2ubuntu1) ... 249s Setting up python3-distutils (3.12.2-3ubuntu1.1) ... 249s python3.12: can't get files for byte-compilation 249s Setting up openssh-sftp-server (1:9.6p1-3ubuntu11) ... 249s Setting up python3-dbus (1.3.2-5build2) ... 250s Setting up python3-setuptools (68.1.2-2ubuntu1) ... 251s Setting up gpg-wks-client (2.4.4-2ubuntu15) ... 251s Setting up openssh-server (1:9.6p1-3ubuntu11) ... 251s Replacing config file /etc/ssh/sshd_config with new version 254s Created symlink /etc/systemd/system/ssh.service.requires/ssh.socket → /usr/lib/systemd/system/ssh.socket. 256s Setting up libblockdev-crypto3:armhf (3.1.0-1build1) ... 256s Setting up python3-gdbm:armhf (3.12.2-3ubuntu1.1) ... 256s Setting up python3-apt (2.7.7) ... 256s Setting up libfwupd2:armhf (1.9.15-2) ... 256s Setting up python3-yaml (6.0.1-2build1) ... 256s Setting up libqmi-proxy (1.35.2-0ubuntu1) ... 256s Setting up netplan.io (1.0-1) ... 256s Setting up bind9-dnsutils (1:9.18.24-0ubuntu3) ... 256s Setting up ubuntu-pro-client (31.2.1) ... 257s apparmor_parser: Unable to replace "ubuntu_pro_apt_news". apparmor_parser: Access denied. You need policy admin privileges to manage profiles. 257s 259s Setting up fwupd (1.9.15-2) ... 260s fwupd-offline-update.service is a disabled or a static unit not running, not starting it. 260s fwupd-refresh.service is a disabled or a static unit not running, not starting it. 260s fwupd.service is a disabled or a static unit not running, not starting it. 261s Setting up ubuntu-pro-client-l10n (31.2.1) ... 261s Setting up udisks2 (2.10.1-6) ... 
261s sda: Failed to write 'change' to '/sys/devices/platform/LNRO0005:1f/virtio2/host0/target0:0:0/0:0:0:0/block/sda/uevent': Permission denied 261s sda1: Failed to write 'change' to '/sys/devices/platform/LNRO0005:1f/virtio2/host0/target0:0:0/0:0:0:0/block/sda/sda1/uevent': Permission denied 261s sda15: Failed to write 'change' to '/sys/devices/platform/LNRO0005:1f/virtio2/host0/target0:0:0/0:0:0:0/block/sda/sda15/uevent': Permission denied 261s sda2: Failed to write 'change' to '/sys/devices/platform/LNRO0005:1f/virtio2/host0/target0:0:0/0:0:0:0/block/sda/sda2/uevent': Permission denied 261s loop0: Failed to write 'change' to '/sys/devices/virtual/block/loop0/uevent': Permission denied 261s loop1: Failed to write 'change' to '/sys/devices/virtual/block/loop1/uevent': Permission denied 261s loop2: Failed to write 'change' to '/sys/devices/virtual/block/loop2/uevent': Permission denied 261s loop3: Failed to write 'change' to '/sys/devices/virtual/block/loop3/uevent': Permission denied 261s loop4: Failed to write 'change' to '/sys/devices/virtual/block/loop4/uevent': Permission denied 261s loop5: Failed to write 'change' to '/sys/devices/virtual/block/loop5/uevent': Permission denied 261s loop6: Failed to write 'change' to '/sys/devices/virtual/block/loop6/uevent': Permission denied 261s loop7: Failed to write 'change' to '/sys/devices/virtual/block/loop7/uevent': Permission denied 261s Setting up cloud-init (24.1.2-0ubuntu1) ... 265s Setting up ubuntu-minimal (1.536build1) ... 265s Setting up ubuntu-standard (1.536build1) ... 265s Processing triggers for libc-bin (2.39-0ubuntu6) ... 265s Processing triggers for ufw (0.36.2-5) ... 265s Processing triggers for install-info (7.1-3build1) ... 265s Processing triggers for initramfs-tools (0.142ubuntu23) ... 269s Reading package lists... 269s Building dependency tree... 269s Reading state information... 270s The following packages will be REMOVED: 270s linux-headers-6.8.0-11* python3-distutils* python3-lib2to3* 271s 0 upgraded, 0 newly installed, 3 to remove and 1 not upgraded. 271s After this operation, 86.5 MB disk space will be freed. 271s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 78647 files and directories currently installed.) 271s Removing linux-headers-6.8.0-11 (6.8.0-11.11) ... 273s Removing python3-distutils (3.12.2-3ubuntu1.1) ... 273s Removing python3-lib2to3 (3.12.2-3ubuntu1.1) ... 
276s autopkgtest [19:11:17]: rebooting testbed after setup commands that affected boot 316s autopkgtest [19:11:57]: testbed running kernel: Linux 5.4.0-173-generic #191-Ubuntu SMP Fri Feb 2 13:54:37 UTC 2024 341s autopkgtest [19:12:22]: @@@@@@@@@@@@@@@@@@@@ apt-source slony1-2 359s Get:1 http://ftpmaster.internal/ubuntu noble/universe slony1-2 2.2.11-3 (dsc) [2413 B] 359s Get:2 http://ftpmaster.internal/ubuntu noble/universe slony1-2 2.2.11-3 (tar) [1465 kB] 359s Get:3 http://ftpmaster.internal/ubuntu noble/universe slony1-2 2.2.11-3 (diff) [16.8 kB] 360s gpgv: Signature made Fri Sep 22 12:00:13 2023 UTC 360s gpgv: using RSA key 5C48FE6157F49179597087C64C5A6BAB12D2A7AE 360s gpgv: Can't check signature: No public key 360s dpkg-source: warning: cannot verify inline signature for ./slony1-2_2.2.11-3.dsc: no acceptable signature found 360s autopkgtest [19:12:41]: testing package slony1-2 version 2.2.11-3 362s autopkgtest [19:12:43]: build not needed 365s autopkgtest [19:12:46]: test load-functions: preparing testbed 376s Reading package lists... 377s Building dependency tree... 377s Reading state information... 378s Starting pkgProblemResolver with broken count: 0 378s Starting 2 pkgProblemResolver with broken count: 0 378s Done 379s The following additional packages will be installed: 379s libjson-perl libllvm17t64 libpq5 libxslt1.1 postgresql-16 379s postgresql-16-slony1-2 postgresql-client-16 postgresql-client-common 379s postgresql-common slony1-2-bin slony1-2-doc ssl-cert 379s Suggested packages: 379s postgresql-doc-16 libpg-perl 379s Recommended packages: 379s libjson-xs-perl libdbd-pg-perl 379s The following NEW packages will be installed: 379s autopkgtest-satdep libjson-perl libllvm17t64 libpq5 libxslt1.1 postgresql-16 379s postgresql-16-slony1-2 postgresql-client-16 postgresql-client-common 379s postgresql-common slony1-2-bin slony1-2-doc ssl-cert 379s 0 upgraded, 13 newly installed, 0 to remove and 1 not upgraded. 379s Need to get 42.2 MB/42.2 MB of archives. 379s After this operation, 160 MB of additional disk space will be used. 379s Get:1 /tmp/autopkgtest.N1ApGM/1-autopkgtest-satdep.deb autopkgtest-satdep armhf 0 [740 B] 379s Get:2 http://ftpmaster.internal/ubuntu noble/main armhf libjson-perl all 4.10000-1 [81.9 kB] 379s Get:3 http://ftpmaster.internal/ubuntu noble/main armhf postgresql-client-common all 257 [36.2 kB] 379s Get:4 http://ftpmaster.internal/ubuntu noble/main armhf ssl-cert all 1.1.2ubuntu1 [17.8 kB] 379s Get:5 http://ftpmaster.internal/ubuntu noble/main armhf postgresql-common all 257 [162 kB] 380s Get:6 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libllvm17t64 armhf 1:17.0.6-9build2 [25.3 MB] 382s Get:7 http://ftpmaster.internal/ubuntu noble-proposed/main armhf libpq5 armhf 16.2-1ubuntu2 [122 kB] 382s Get:8 http://ftpmaster.internal/ubuntu noble/main armhf libxslt1.1 armhf 1.1.39-0exp1 [150 kB] 382s Get:9 http://ftpmaster.internal/ubuntu noble-proposed/main armhf postgresql-client-16 armhf 16.2-1ubuntu2 [1228 kB] 382s Get:10 http://ftpmaster.internal/ubuntu noble-proposed/main armhf postgresql-16 armhf 16.2-1ubuntu2 [14.5 MB] 382s Get:11 http://ftpmaster.internal/ubuntu noble/universe armhf postgresql-16-slony1-2 armhf 2.2.11-3 [19.5 kB] 382s Get:12 http://ftpmaster.internal/ubuntu noble/universe armhf slony1-2-bin armhf 2.2.11-3 [221 kB] 382s Get:13 http://ftpmaster.internal/ubuntu noble/universe armhf slony1-2-doc all 2.2.11-3 [328 kB] 383s Preconfiguring packages ... 
383s Fetched 42.2 MB in 3s (14.8 MB/s) 383s Selecting previously unselected package libjson-perl. 383s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 58436 files and directories currently installed.) 383s Preparing to unpack .../00-libjson-perl_4.10000-1_all.deb ... 383s Unpacking libjson-perl (4.10000-1) ... 383s Selecting previously unselected package postgresql-client-common. 383s Preparing to unpack .../01-postgresql-client-common_257_all.deb ... 383s Unpacking postgresql-client-common (257) ... 383s Selecting previously unselected package ssl-cert. 383s Preparing to unpack .../02-ssl-cert_1.1.2ubuntu1_all.deb ... 383s Unpacking ssl-cert (1.1.2ubuntu1) ... 383s Selecting previously unselected package postgresql-common. 383s Preparing to unpack .../03-postgresql-common_257_all.deb ... 383s Adding 'diversion of /usr/bin/pg_config to /usr/bin/pg_config.libpq-dev by postgresql-common' 383s Unpacking postgresql-common (257) ... 383s Selecting previously unselected package libllvm17t64:armhf. 383s Preparing to unpack .../04-libllvm17t64_1%3a17.0.6-9build2_armhf.deb ... 383s Unpacking libllvm17t64:armhf (1:17.0.6-9build2) ... 384s Selecting previously unselected package libpq5:armhf. 384s Preparing to unpack .../05-libpq5_16.2-1ubuntu2_armhf.deb ... 384s Unpacking libpq5:armhf (16.2-1ubuntu2) ... 384s Selecting previously unselected package libxslt1.1:armhf. 384s Preparing to unpack .../06-libxslt1.1_1.1.39-0exp1_armhf.deb ... 384s Unpacking libxslt1.1:armhf (1.1.39-0exp1) ... 385s Selecting previously unselected package postgresql-client-16. 385s Preparing to unpack .../07-postgresql-client-16_16.2-1ubuntu2_armhf.deb ... 385s Unpacking postgresql-client-16 (16.2-1ubuntu2) ... 385s Selecting previously unselected package postgresql-16. 385s Preparing to unpack .../08-postgresql-16_16.2-1ubuntu2_armhf.deb ... 385s Unpacking postgresql-16 (16.2-1ubuntu2) ... 386s Selecting previously unselected package postgresql-16-slony1-2. 386s Preparing to unpack .../09-postgresql-16-slony1-2_2.2.11-3_armhf.deb ... 386s Unpacking postgresql-16-slony1-2 (2.2.11-3) ... 386s Selecting previously unselected package slony1-2-bin. 386s Preparing to unpack .../10-slony1-2-bin_2.2.11-3_armhf.deb ... 386s Unpacking slony1-2-bin (2.2.11-3) ... 386s Selecting previously unselected package slony1-2-doc. 386s Preparing to unpack .../11-slony1-2-doc_2.2.11-3_all.deb ... 386s Unpacking slony1-2-doc (2.2.11-3) ... 386s Selecting previously unselected package autopkgtest-satdep. 386s Preparing to unpack .../12-1-autopkgtest-satdep.deb ... 386s Unpacking autopkgtest-satdep (0) ... 386s Setting up postgresql-client-common (257) ... 386s Setting up libpq5:armhf (16.2-1ubuntu2) ... 386s Setting up libllvm17t64:armhf (1:17.0.6-9build2) ... 386s Setting up ssl-cert (1.1.2ubuntu1) ... 388s Created symlink /etc/systemd/system/multi-user.target.wants/ssl-cert.service → /usr/lib/systemd/system/ssl-cert.service. 388s Setting up libjson-perl (4.10000-1) ... 388s Setting up libxslt1.1:armhf (1.1.39-0exp1) ... 
388s Setting up slony1-2-doc (2.2.11-3) ... 388s Setting up postgresql-client-16 (16.2-1ubuntu2) ... 389s update-alternatives: using /usr/share/postgresql/16/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode 389s Setting up postgresql-common (257) ... 390s 390s Creating config file /etc/postgresql-common/createcluster.conf with new version 391s Building PostgreSQL dictionaries from installed myspell/hunspell packages... 391s Removing obsolete dictionary files: 392s Created symlink /etc/systemd/system/multi-user.target.wants/postgresql.service → /usr/lib/systemd/system/postgresql.service. 392s Setting up slony1-2-bin (2.2.11-3) ... 393s Setting up postgresql-16 (16.2-1ubuntu2) ... 394s Creating new PostgreSQL cluster 16/main ... 394s /usr/lib/postgresql/16/bin/initdb -D /var/lib/postgresql/16/main --auth-local peer --auth-host scram-sha-256 --no-instructions 394s The files belonging to this database system will be owned by user "postgres". 394s This user must also own the server process. 394s 394s The database cluster will be initialized with locale "C.UTF-8". 394s The default database encoding has accordingly been set to "UTF8". 394s The default text search configuration will be set to "english". 394s 394s Data page checksums are disabled. 394s 394s fixing permissions on existing directory /var/lib/postgresql/16/main ... ok 394s creating subdirectories ... ok 394s selecting dynamic shared memory implementation ... posix 394s selecting default max_connections ... 100 394s selecting default shared_buffers ... 128MB 394s selecting default time zone ... Etc/UTC 394s creating configuration files ... ok 394s running bootstrap script ... ok 395s performing post-bootstrap initialization ... ok 395s syncing data to disk ... ok 400s Setting up postgresql-16-slony1-2 (2.2.11-3) ... 400s Setting up autopkgtest-satdep (0) ... 400s Processing triggers for man-db (2.12.0-3build4) ... 401s Processing triggers for libc-bin (2.39-0ubuntu6) ... 415s (Reading database ... 60765 files and directories currently installed.) 415s Removing autopkgtest-satdep (0) ... 421s autopkgtest [19:13:42]: test load-functions: [----------------------- 423s ### PostgreSQL 16 psql ### 423s Creating new PostgreSQL cluster 16/regress ... 
427s create table public.sl_node ( 427s no_id int4, 427s no_active bool, 427s no_comment text, 427s no_failed bool, 427s CONSTRAINT "sl_node-pkey" 427s PRIMARY KEY (no_id) 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s comment on table public.sl_node is 'Holds the list of nodes associated with this namespace.'; 427s COMMENT 427s comment on column public.sl_node.no_id is 'The unique ID number for the node'; 427s COMMENT 427s comment on column public.sl_node.no_active is 'Is the node active in replication yet?'; 427s COMMENT 427s comment on column public.sl_node.no_comment is 'A human-oriented description of the node'; 427s COMMENT 427s create table public.sl_nodelock ( 427s nl_nodeid int4, 427s nl_conncnt serial, 427s nl_backendpid int4, 427s CONSTRAINT "sl_nodelock-pkey" 427s PRIMARY KEY (nl_nodeid, nl_conncnt) 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s comment on table public.sl_nodelock is 'Used to prevent multiple slon instances and to identify the backends to kill in terminateNodeConnections().'; 427s COMMENT 427s comment on column public.sl_nodelock.nl_nodeid is 'Clients node_id'; 427s COMMENT 427s comment on column public.sl_nodelock.nl_conncnt is 'Clients connection number'; 427s COMMENT 427s comment on column public.sl_nodelock.nl_backendpid is 'PID of database backend owning this lock'; 427s COMMENT 427s create table public.sl_set ( 427s set_id int4, 427s set_origin int4, 427s set_locked bigint, 427s set_comment text, 427s CONSTRAINT "sl_set-pkey" 427s PRIMARY KEY (set_id), 427s CONSTRAINT "set_origin-no_id-ref" 427s FOREIGN KEY (set_origin) 427s REFERENCES public.sl_node (no_id) 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s comment on table public.sl_set is 'Holds definitions of replication sets.'; 427s COMMENT 427s comment on column public.sl_set.set_id is 'A unique ID number for the set.'; 427s COMMENT 427s comment on column public.sl_set.set_origin is 427s 'The ID number of the source node for the replication set.'; 427s COMMENT 427s comment on column public.sl_set.set_locked is 'Transaction ID where the set was locked.'; 427s COMMENT 427s comment on column public.sl_set.set_comment is 'A human-oriented description of the set.'; 427s COMMENT 427s create table public.sl_setsync ( 427s ssy_setid int4, 427s ssy_origin int4, 427s ssy_seqno int8, 427s ssy_snapshot "pg_catalog".txid_snapshot, 427s ssy_action_list text, 427s CONSTRAINT "sl_setsync-pkey" 427s PRIMARY KEY (ssy_setid), 427s CONSTRAINT "ssy_setid-set_id-ref" 427s FOREIGN KEY (ssy_setid) 427s REFERENCES public.sl_set (set_id), 427s CONSTRAINT "ssy_origin-no_id-ref" 427s FOREIGN KEY (ssy_origin) 427s REFERENCES public.sl_node (no_id) 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s comment on table public.sl_setsync is 'SYNC information'; 427s COMMENT 427s comment on column public.sl_setsync.ssy_setid is 'ID number of the replication set'; 427s COMMENT 427s comment on column public.sl_setsync.ssy_origin is 'ID number of the node'; 427s COMMENT 427s comment on column public.sl_setsync.ssy_seqno is 'Slony-I sequence number'; 427s COMMENT 427s comment on column public.sl_setsync.ssy_snapshot is 'TXID in provider system seen by the event'; 427s COMMENT 427s comment on column public.sl_setsync.ssy_action_list is 'action list used during the subscription process. At the time a subscriber copies over data from the origin, it sees all tables in a state somewhere between two SYNC events. 
Therefore this list must contains all log_actionseqs that are visible at that time, whose operations have therefore already been included in the data copied at the time the initial data copy is done. Those actions may therefore be filtered out of the first SYNC done after subscribing.'; 427s COMMENT 427s create table public.sl_table ( 427s tab_id int4, 427s tab_reloid oid UNIQUE NOT NULL, 427s tab_relname name NOT NULL, 427s tab_nspname name NOT NULL, 427s tab_set int4, 427s tab_idxname name NOT NULL, 427s tab_altered boolean NOT NULL, 427s tab_comment text, 427s CONSTRAINT "sl_table-pkey" 427s PRIMARY KEY (tab_id), 427s CONSTRAINT "tab_set-set_id-ref" 427s FOREIGN KEY (tab_set) 427s REFERENCES public.sl_set (set_id) 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s comment on table public.sl_table is 'Holds information about the tables being replicated.'; 427s COMMENT 427s comment on column public.sl_table.tab_id is 'Unique key for Slony-I to use to identify the table'; 427s COMMENT 427s comment on column public.sl_table.tab_reloid is 'The OID of the table in pg_catalog.pg_class.oid'; 427s COMMENT 427s comment on column public.sl_table.tab_relname is 'The name of the table in pg_catalog.pg_class.relname used to recover from a dump/restore cycle'; 427s COMMENT 427s comment on column public.sl_table.tab_nspname is 'The name of the schema in pg_catalog.pg_namespace.nspname used to recover from a dump/restore cycle'; 427s COMMENT 427s comment on column public.sl_table.tab_set is 'ID of the replication set the table is in'; 427s COMMENT 427s comment on column public.sl_table.tab_idxname is 'The name of the primary index of the table'; 427s COMMENT 427s comment on column public.sl_table.tab_altered is 'Has the table been modified for replication?'; 427s COMMENT 427s comment on column public.sl_table.tab_comment is 'Human-oriented description of the table'; 427s COMMENT 427s create table public.sl_sequence ( 427s seq_id int4, 427s seq_reloid oid UNIQUE NOT NULL, 427s seq_relname name NOT NULL, 427s seq_nspname name NOT NULL, 427s seq_set int4, 427s seq_comment text, 427s CONSTRAINT "sl_sequence-pkey" 427s PRIMARY KEY (seq_id), 427s CONSTRAINT "seq_set-set_id-ref" 427s FOREIGN KEY (seq_set) 427s REFERENCES public.sl_set (set_id) 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s comment on table public.sl_sequence is 'Similar to sl_table, each entry identifies a sequence being replicated.'; 427s COMMENT 427s comment on column public.sl_sequence.seq_id is 'An internally-used ID for Slony-I to use in its sequencing of updates'; 427s COMMENT 427s comment on column public.sl_sequence.seq_reloid is 'The OID of the sequence object'; 427s COMMENT 427s comment on column public.sl_sequence.seq_relname is 'The name of the sequence in pg_catalog.pg_class.relname used to recover from a dump/restore cycle'; 427s COMMENT 427s comment on column public.sl_sequence.seq_nspname is 'The name of the schema in pg_catalog.pg_namespace.nspname used to recover from a dump/restore cycle'; 427s COMMENT 427s comment on column public.sl_sequence.seq_set is 'Indicates which replication set the object is in'; 427s COMMENT 427s comment on column public.sl_sequence.seq_comment is 'A human-oriented comment'; 427s COMMENT 427s create table public.sl_path ( 427s pa_server int4, 427s pa_client int4, 427s pa_conninfo text NOT NULL, 427s pa_connretry int4, 427s CONSTRAINT "sl_path-pkey" 427s PRIMARY KEY (pa_server, pa_client), 427s CONSTRAINT "pa_server-no_id-ref" 427s FOREIGN KEY (pa_server) 427s REFERENCES public.sl_node (no_id), 427s 
CONSTRAINT "pa_client-no_id-ref" 427s FOREIGN KEY (pa_client) 427s REFERENCES public.sl_node (no_id) 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s comment on table public.sl_path is 'Holds connection information for the paths between nodes, and the synchronisation delay'; 427s COMMENT 427s comment on column public.sl_path.pa_server is 'The Node ID # (from sl_node.no_id) of the data source'; 427s COMMENT 427s comment on column public.sl_path.pa_client is 'The Node ID # (from sl_node.no_id) of the data target'; 427s COMMENT 427s comment on column public.sl_path.pa_conninfo is 'The PostgreSQL connection string used to connect to the source node.'; 427s COMMENT 427s comment on column public.sl_path.pa_connretry is 'The synchronisation delay, in seconds'; 427s COMMENT 427s create table public.sl_listen ( 427s li_origin int4, 427s li_provider int4, 427s li_receiver int4, 427s CONSTRAINT "sl_listen-pkey" 427s PRIMARY KEY (li_origin, li_provider, li_receiver), 427s CONSTRAINT "li_origin-no_id-ref" 427s FOREIGN KEY (li_origin) 427s REFERENCES public.sl_node (no_id), 427s CONSTRAINT "sl_listen-sl_path-ref" 427s FOREIGN KEY (li_provider, li_receiver) 427s REFERENCES public.sl_path (pa_server, pa_client) 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s comment on table public.sl_listen is 'Indicates how nodes listen to events from other nodes in the Slony-I network.'; 427s COMMENT 427s comment on column public.sl_listen.li_origin is 'The ID # (from sl_node.no_id) of the node this listener is operating on'; 427s COMMENT 427s comment on column public.sl_listen.li_provider is 'The ID # (from sl_node.no_id) of the source node for this listening event'; 427s COMMENT 427s comment on column public.sl_listen.li_receiver is 'The ID # (from sl_node.no_id) of the target node for this listening event'; 427s COMMENT 427s create table public.sl_subscribe ( 427s sub_set int4, 427s sub_provider int4, 427s sub_receiver int4, 427s sub_forward bool, 427s sub_active bool, 427s CONSTRAINT "sl_subscribe-pkey" 427s PRIMARY KEY (sub_receiver, sub_set), 427s CONSTRAINT "sl_subscribe-sl_path-ref" 427s FOREIGN KEY (sub_provider, sub_receiver) 427s REFERENCES public.sl_path (pa_server, pa_client), 427s CONSTRAINT "sub_set-set_id-ref" 427s FOREIGN KEY (sub_set) 427s REFERENCES public.sl_set (set_id) 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s comment on table public.sl_subscribe is 'Holds a list of subscriptions on sets'; 427s COMMENT 427s comment on column public.sl_subscribe.sub_set is 'ID # (from sl_set) of the set being subscribed to'; 427s COMMENT 427s comment on column public.sl_subscribe.sub_provider is 'ID# (from sl_node) of the node providing data'; 427s COMMENT 427s comment on column public.sl_subscribe.sub_receiver is 'ID# (from sl_node) of the node receiving data from the provider'; 427s COMMENT 427s comment on column public.sl_subscribe.sub_forward is 'Does this provider keep data in sl_log_1/sl_log_2 to allow it to be a provider for other nodes?'; 427s COMMENT 427s comment on column public.sl_subscribe.sub_active is 'Has this subscription been activated? 
This is not set on the subscriber until AFTER the subscriber has received COPY data from the provider'; 427s COMMENT 427s create table public.sl_event ( 427s ev_origin int4, 427s ev_seqno int8, 427s ev_timestamp timestamptz, 427s ev_snapshot "pg_catalog".txid_snapshot, 427s ev_type text, 427s ev_data1 text, 427s ev_data2 text, 427s ev_data3 text, 427s ev_data4 text, 427s ev_data5 text, 427s ev_data6 text, 427s ev_data7 text, 427s ev_data8 text, 427s CONSTRAINT "sl_event-pkey" 427s PRIMARY KEY (ev_origin, ev_seqno) 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s comment on table public.sl_event is 'Holds information about replication events. After a period of time, Slony removes old confirmed events from both this table and the sl_confirm table.'; 427s COMMENT 427s comment on column public.sl_event.ev_origin is 'The ID # (from sl_node.no_id) of the source node for this event'; 427s COMMENT 427s comment on column public.sl_event.ev_seqno is 'The ID # for the event'; 427s COMMENT 427s comment on column public.sl_event.ev_timestamp is 'When this event record was created'; 427s COMMENT 427s comment on column public.sl_event.ev_snapshot is 'TXID snapshot on provider node for this event'; 427s COMMENT 427s comment on column public.sl_event.ev_seqno is 'The ID # for the event'; 427s COMMENT 427s comment on column public.sl_event.ev_type is 'The type of event this record is for. 427s SYNC = Synchronise 427s STORE_NODE = 427s ENABLE_NODE = 427s DROP_NODE = 427s STORE_PATH = 427s DROP_PATH = 427s STORE_LISTEN = 427s DROP_LISTEN = 427s STORE_SET = 427s DROP_SET = 427s MERGE_SET = 427s SET_ADD_TABLE = 427s SET_ADD_SEQUENCE = 427s STORE_TRIGGER = 427s DROP_TRIGGER = 427s MOVE_SET = 427s ACCEPT_SET = 427s SET_DROP_TABLE = 427s SET_DROP_SEQUENCE = 427s SET_MOVE_TABLE = 427s SET_MOVE_SEQUENCE = 427s FAILOVER_SET = 427s SUBSCRIBE_SET = 427s ENABLE_SUBSCRIPTION = 427s UNSUBSCRIBE_SET = 427s DDL_SCRIPT = 427s ADJUST_SEQ = 427s RESET_CONFIG = 427s '; 427s COMMENT 427s comment on column public.sl_event.ev_data1 is 'Data field containing an argument needed to process the event'; 427s COMMENT 427s comment on column public.sl_event.ev_data2 is 'Data field containing an argument needed to process the event'; 427s COMMENT 427s comment on column public.sl_event.ev_data3 is 'Data field containing an argument needed to process the event'; 427s COMMENT 427s comment on column public.sl_event.ev_data4 is 'Data field containing an argument needed to process the event'; 427s COMMENT 427s comment on column public.sl_event.ev_data5 is 'Data field containing an argument needed to process the event'; 427s COMMENT 427s comment on column public.sl_event.ev_data6 is 'Data field containing an argument needed to process the event'; 427s COMMENT 427s comment on column public.sl_event.ev_data7 is 'Data field containing an argument needed to process the event'; 427s COMMENT 427s comment on column public.sl_event.ev_data8 is 'Data field containing an argument needed to process the event'; 427s COMMENT 427s create table public.sl_confirm ( 427s con_origin int4, 427s con_received int4, 427s con_seqno int8, 427s con_timestamp timestamptz DEFAULT timeofday()::timestamptz 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s comment on table public.sl_confirm is 'Holds confirmation of replication events. 
After a period of time, Slony removes old confirmed events from both this table and the sl_event table.'; 427s COMMENT 427s comment on column public.sl_confirm.con_origin is 'The ID # (from sl_node.no_id) of the source node for this event'; 427s COMMENT 427s comment on column public.sl_confirm.con_seqno is 'The ID # for the event'; 427s COMMENT 427s comment on column public.sl_confirm.con_timestamp is 'When this event was confirmed'; 427s COMMENT 427s create index sl_confirm_idx1 on public.sl_confirm 427s (con_origin, con_received, con_seqno); 427s CREATE INDEX 427s create index sl_confirm_idx2 on public.sl_confirm 427s (con_received, con_seqno); 427s CREATE INDEX 427s create table public.sl_seqlog ( 427s seql_seqid int4, 427s seql_origin int4, 427s seql_ev_seqno int8, 427s seql_last_value int8 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s comment on table public.sl_seqlog is 'Log of Sequence updates'; 427s COMMENT 427s comment on column public.sl_seqlog.seql_seqid is 'Sequence ID'; 427s COMMENT 427s comment on column public.sl_seqlog.seql_origin is 'Publisher node at which the sequence originates'; 427s COMMENT 427s comment on column public.sl_seqlog.seql_ev_seqno is 'Slony-I Event with which this sequence update is associated'; 427s COMMENT 427s comment on column public.sl_seqlog.seql_last_value is 'Last value published for this sequence'; 427s COMMENT 427s create index sl_seqlog_idx on public.sl_seqlog 427s (seql_origin, seql_ev_seqno, seql_seqid); 427s CREATE INDEX 427s create function public.sequenceLastValue(p_seqname text) returns int8 427s as $$ 427s declare 427s v_seq_row record; 427s begin 427s for v_seq_row in execute 'select last_value from ' || public.slon_quote_input(p_seqname) 427s loop 427s return v_seq_row.last_value; 427s end loop; 427s 427s -- not reached 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.sequenceLastValue(p_seqname text) is 427s 'sequenceLastValue(p_seqname) 427s 427s Utility function used in sl_seqlastvalue view to compactly get the 427s last value from the requested sequence.'; 427s COMMENT 427s create table public.sl_log_1 ( 427s log_origin int4, 427s log_txid bigint, 427s log_tableid int4, 427s log_actionseq int8, 427s log_tablenspname text, 427s log_tablerelname text, 427s log_cmdtype "char", 427s log_cmdupdncols int4, 427s log_cmdargs text[] 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s create index sl_log_1_idx1 on public.sl_log_1 427s (log_origin, log_txid, log_actionseq); 427s CREATE INDEX 427s comment on table public.sl_log_1 is 'Stores each change to be propagated to subscriber nodes'; 427s COMMENT 427s comment on column public.sl_log_1.log_origin is 'Origin node from which the change came'; 427s COMMENT 427s comment on column public.sl_log_1.log_txid is 'Transaction ID on the origin node'; 427s COMMENT 427s comment on column public.sl_log_1.log_tableid is 'The table ID (from sl_table.tab_id) that this log entry is to affect'; 427s COMMENT 427s comment on column public.sl_log_1.log_actionseq is 'The sequence number in which actions will be applied on replicas'; 427s COMMENT 427s comment on column public.sl_log_1.log_tablenspname is 'The schema name of the table affected'; 427s COMMENT 427s comment on column public.sl_log_1.log_tablerelname is 'The table name of the table affected'; 427s COMMENT 427s comment on column public.sl_log_1.log_cmdtype is 'Replication action to take. 
U = Update, I = Insert, D = DELETE, T = TRUNCATE'; 427s COMMENT 427s comment on column public.sl_log_1.log_cmdupdncols is 'For cmdtype=U the number of updated columns in cmdargs'; 427s COMMENT 427s comment on column public.sl_log_1.log_cmdargs is 'The data needed to perform the log action on the replica'; 427s COMMENT 427s create table public.sl_log_2 ( 427s log_origin int4, 427s log_txid bigint, 427s log_tableid int4, 427s log_actionseq int8, 427s log_tablenspname text, 427s log_tablerelname text, 427s log_cmdtype "char", 427s log_cmdupdncols int4, 427s log_cmdargs text[] 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s create index sl_log_2_idx1 on public.sl_log_2 427s (log_origin, log_txid, log_actionseq); 427s CREATE INDEX 427s comment on table public.sl_log_2 is 'Stores each change to be propagated to subscriber nodes'; 427s COMMENT 427s comment on column public.sl_log_2.log_origin is 'Origin node from which the change came'; 427s COMMENT 427s comment on column public.sl_log_2.log_txid is 'Transaction ID on the origin node'; 427s COMMENT 427s comment on column public.sl_log_2.log_tableid is 'The table ID (from sl_table.tab_id) that this log entry is to affect'; 427s COMMENT 427s comment on column public.sl_log_2.log_actionseq is 'The sequence number in which actions will be applied on replicas'; 427s COMMENT 427s comment on column public.sl_log_2.log_tablenspname is 'The schema name of the table affected'; 427s COMMENT 427s comment on column public.sl_log_2.log_tablerelname is 'The table name of the table affected'; 427s COMMENT 427s comment on column public.sl_log_2.log_cmdtype is 'Replication action to take. U = Update, I = Insert, D = DELETE, T = TRUNCATE'; 427s COMMENT 427s comment on column public.sl_log_2.log_cmdupdncols is 'For cmdtype=U the number of updated columns in cmdargs'; 427s COMMENT 427s comment on column public.sl_log_2.log_cmdargs is 'The data needed to perform the log action on the replica'; 427s COMMENT 427s create table public.sl_log_script ( 427s log_origin int4, 427s log_txid bigint, 427s log_actionseq int8, 427s log_cmdtype "char", 427s log_cmdargs text[] 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s create index sl_log_script_idx1 on public.sl_log_script 427s (log_origin, log_txid, log_actionseq); 427s CREATE INDEX 427s comment on table public.sl_log_script is 'Captures SQL script queries to be propagated to subscriber nodes'; 427s COMMENT 427s comment on column public.sl_log_script.log_origin is 'Origin name from which the change came'; 427s COMMENT 427s comment on column public.sl_log_script.log_txid is 'Transaction ID on the origin node'; 427s COMMENT 427s comment on column public.sl_log_script.log_actionseq is 'The sequence number in which actions will be applied on replicas'; 427s COMMENT 427s comment on column public.sl_log_2.log_cmdtype is 'Replication action to take. 
S = Script statement, s = Script complete'; 427s COMMENT 427s comment on column public.sl_log_script.log_cmdargs is 'The DDL statement, optionally followed by selected nodes to execute it on.'; 427s COMMENT 427s create table public.sl_registry ( 427s reg_key text primary key, 427s reg_int4 int4, 427s reg_text text, 427s reg_timestamp timestamptz 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s comment on table public.sl_registry is 'Stores miscellaneous runtime data'; 427s COMMENT 427s comment on column public.sl_registry.reg_key is 'Unique key of the runtime option'; 427s COMMENT 427s comment on column public.sl_registry.reg_int4 is 'Option value if type int4'; 427s COMMENT 427s comment on column public.sl_registry.reg_text is 'Option value if type text'; 427s COMMENT 427s comment on column public.sl_registry.reg_timestamp is 'Option value if type timestamp'; 427s COMMENT 427s create table public.sl_apply_stats ( 427s as_origin int4, 427s as_num_insert int8, 427s as_num_update int8, 427s as_num_delete int8, 427s as_num_truncate int8, 427s as_num_script int8, 427s as_num_total int8, 427s as_duration interval, 427s as_apply_first timestamptz, 427s as_apply_last timestamptz, 427s as_cache_prepare int8, 427s as_cache_hit int8, 427s as_cache_evict int8, 427s as_cache_prepare_max int8 427s ) WITHOUT OIDS; 427s CREATE TABLE 427s create index sl_apply_stats_idx1 on public.sl_apply_stats 427s (as_origin); 427s CREATE INDEX 427s comment on table public.sl_apply_stats is 'Local SYNC apply statistics (running totals)'; 427s COMMENT 427s comment on column public.sl_apply_stats.as_origin is 'Origin of the SYNCs'; 427s COMMENT 427s comment on column public.sl_apply_stats.as_num_insert is 'Number of INSERT operations performed'; 427s COMMENT 427s comment on column public.sl_apply_stats.as_num_update is 'Number of UPDATE operations performed'; 427s COMMENT 427s comment on column public.sl_apply_stats.as_num_delete is 'Number of DELETE operations performed'; 427s COMMENT 427s comment on column public.sl_apply_stats.as_num_truncate is 'Number of TRUNCATE operations performed'; 427s COMMENT 427s comment on column public.sl_apply_stats.as_num_script is 'Number of DDL operations performed'; 427s COMMENT 427s comment on column public.sl_apply_stats.as_num_total is 'Total number of operations'; 427s COMMENT 427s comment on column public.sl_apply_stats.as_duration is 'Processing time'; 427s COMMENT 427s comment on column public.sl_apply_stats.as_apply_first is 'Timestamp of first recorded SYNC'; 427s COMMENT 427s comment on column public.sl_apply_stats.as_apply_last is 'Timestamp of most recent recorded SYNC'; 427s COMMENT 427s comment on column public.sl_apply_stats.as_cache_evict is 'Number of apply query cache evict operations'; 427s COMMENT 427s comment on column public.sl_apply_stats.as_cache_prepare_max is 'Maximum number of apply queries prepared in one SYNC group'; 427s COMMENT 427s create view public.sl_seqlastvalue as 427s select SQ.seq_id, SQ.seq_set, SQ.seq_reloid, 427s S.set_origin as seq_origin, 427s public.sequenceLastValue( 427s "pg_catalog".quote_ident(PGN.nspname) || '.' 
|| 427s "pg_catalog".quote_ident(PGC.relname)) as seq_last_value 427s from public.sl_sequence SQ, public.sl_set S, 427s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 427s where S.set_id = SQ.seq_set 427s and PGC.oid = SQ.seq_reloid and PGN.oid = PGC.relnamespace; 427s CREATE VIEW 427s create view public.sl_failover_targets as 427s select set_id, 427s set_origin as set_origin, 427s sub1.sub_receiver as backup_id 427s FROM 427s public.sl_subscribe sub1 427s ,public.sl_set set1 427s where 427s sub1.sub_set=set_id 427s and sub1.sub_forward=true 427s --exclude candidates where the set_origin 427s --has a path a node but the failover 427s --candidate has no path to that node 427s and sub1.sub_receiver not in 427s (select p1.pa_client from 427s public.sl_path p1 427s left outer join public.sl_path p2 on 427s (p2.pa_client=p1.pa_client 427s and p2.pa_server=sub1.sub_receiver) 427s where p2.pa_client is null 427s and p1.pa_server=set_origin 427s and p1.pa_client<>sub1.sub_receiver 427s ) 427s and sub1.sub_provider=set_origin 427s --exclude any subscribers that are not 427s --direct subscribers of all sets on the 427s --origin 427s and sub1.sub_receiver not in 427s (select direct_recv.sub_receiver 427s from 427s 427s (--all direct receivers of the first set 427s select subs2.sub_receiver 427s from public.sl_subscribe subs2 427s where subs2.sub_provider=set1.set_origin 427s and subs2.sub_set=set1.set_id) as 427s direct_recv 427s inner join 427s (--all other sets from the origin 427s select set_id from public.sl_set set2 427s where set2.set_origin=set1.set_origin 427s and set2.set_id<>sub1.sub_set) 427s as othersets on(true) 427s left outer join public.sl_subscribe subs3 427s on(subs3.sub_set=othersets.set_id 427s and subs3.sub_forward=true 427s and subs3.sub_provider=set1.set_origin 427s and direct_recv.sub_receiver=subs3.sub_receiver) 427s where subs3.sub_receiver is null 427s ); 427s CREATE VIEW 427s create sequence public.sl_local_node_id 427s MINVALUE -1; 427s CREATE SEQUENCE 427s SELECT setval('public.sl_local_node_id', -1); 427s setval 427s -------- 427s -1 427s (1 row) 427s 427s comment on sequence public.sl_local_node_id is 'The local node ID is initialized to -1, meaning that this node is not initialized yet.'; 427s COMMENT 427s create sequence public.sl_event_seq; 427s CREATE SEQUENCE 427s comment on sequence public.sl_event_seq is 'The sequence for numbering events originating from this node.'; 427s COMMENT 427s select setval('public.sl_event_seq', 5000000000); 427s setval 427s ------------ 427s 5000000000 427s (1 row) 427s 427s create sequence public.sl_action_seq; 427s CREATE SEQUENCE 427s comment on sequence public.sl_action_seq is 'The sequence to number statements in the transaction logs, so that the replication engines can figure out the "agreeable" order of statements.'; 427s COMMENT 427s create sequence public.sl_log_status 427s MINVALUE 0 MAXVALUE 3; 427s CREATE SEQUENCE 427s SELECT setval('public.sl_log_status', 0); 427s setval 427s -------- 427s 0 427s (1 row) 427s 427s comment on sequence public.sl_log_status is ' 427s Bit 0x01 determines the currently active log table 427s Bit 0x02 tells if the engine needs to read both logs 427s after switching until the old log is clean and truncated. 427s 427s Possible values: 427s 0 sl_log_1 active, sl_log_2 clean 427s 1 sl_log_2 active, sl_log_1 clean 427s 2 sl_log_1 active, sl_log_2 unknown - cleanup 427s 3 sl_log_2 active, sl_log_1 unknown - cleanup 427s 427s This is not yet in use. 
427s '; 427s COMMENT 427s create table public.sl_config_lock ( 427s dummy integer 427s ); 427s CREATE TABLE 427s comment on table public.sl_config_lock is 'This table exists solely to prevent overlapping execution of configuration change procedures and the resulting possible deadlocks. 427s '; 427s COMMENT 427s comment on column public.sl_config_lock.dummy is 'No data ever goes in this table so the contents never matter. Indeed, this column does not really need to exist.'; 427s COMMENT 427s create table public.sl_event_lock ( 427s dummy integer 427s ); 427s CREATE TABLE 427s comment on table public.sl_event_lock is 'This table exists solely to prevent multiple connections from concurrently creating new events and perhaps getting them out of order.'; 427s COMMENT 427s comment on column public.sl_event_lock.dummy is 'No data ever goes in this table so the contents never matter. Indeed, this column does not really need to exist.'; 427s COMMENT 427s create table public.sl_archive_counter ( 427s ac_num bigint, 427s ac_timestamp timestamptz 427s ) without oids; 427s CREATE TABLE 427s comment on table public.sl_archive_counter is 'Table used to generate the log shipping archive number. 427s '; 427s COMMENT 427s comment on column public.sl_archive_counter.ac_num is 'Counter of SYNC ID used in log shipping as the archive number'; 427s COMMENT 427s comment on column public.sl_archive_counter.ac_timestamp is 'Time at which the archive log was generated on the subscriber'; 427s COMMENT 427s insert into public.sl_archive_counter (ac_num, ac_timestamp) 427s values (0, 'epoch'::timestamptz); 427s INSERT 0 1 427s create table public.sl_components ( 427s co_actor text not null primary key, 427s co_pid integer not null, 427s co_node integer not null, 427s co_connection_pid integer not null, 427s co_activity text, 427s co_starttime timestamptz not null, 427s co_event bigint, 427s co_eventtype text 427s ) without oids; 427s CREATE TABLE 427s comment on table public.sl_components is 'Table used to monitor what various slon/slonik components are doing'; 427s COMMENT 427s comment on column public.sl_components.co_actor is 'which component am I?'; 427s COMMENT 427s comment on column public.sl_components.co_pid is 'my process/thread PID on node where slon runs'; 427s COMMENT 427s comment on column public.sl_components.co_node is 'which node am I servicing?'; 427s COMMENT 427s comment on column public.sl_components.co_connection_pid is 'PID of database connection being used on database server'; 427s COMMENT 427s comment on column public.sl_components.co_activity is 'activity that I am up to'; 427s COMMENT 427s comment on column public.sl_components.co_starttime is 'when did my activity begin? (timestamp reported as per slon process on server running slon)'; 427s COMMENT 427s comment on column public.sl_components.co_eventtype is 'what kind of event am I processing? 
(commonly n/a for event loop main threads)'; 427s COMMENT 427s comment on column public.sl_components.co_event is 'which event have I started processing?'; 427s COMMENT 427s CREATE OR replace function public.agg_text_sum(txt_before TEXT, txt_new TEXT) RETURNS TEXT AS 427s $BODY$ 427s DECLARE 427s c_delim text; 427s BEGIN 427s c_delim = ','; 427s IF (txt_before IS NULL or txt_before='') THEN 427s RETURN txt_new; 427s END IF; 427s RETURN txt_before || c_delim || txt_new; 427s END; 427s $BODY$ 427s LANGUAGE plpgsql; 427s CREATE FUNCTION 427s comment on function public.agg_text_sum(text,text) is 427s 'An accumulator function used by the slony string_agg function to 427s aggregate rows into a string'; 427s COMMENT 427s CREATE AGGREGATE public.string_agg(text) ( 427s SFUNC=public.agg_text_sum, 427s STYPE=text, 427s INITCOND='' 427s ); 427s CREATE AGGREGATE 427s grant usage on schema public to public; 427s GRANT 427s create or replace function public.createEvent (p_cluster_name name, p_event_type text) 427s returns bigint 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 427s language C 427s called on null input; 427s CREATE FUNCTION 427s comment on function public.createEvent (p_cluster_name name, p_event_type text) is 427s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 427s 427s Create an sl_event entry'; 427s COMMENT 427s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text) 427s returns bigint 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 427s language C 427s called on null input; 427s CREATE FUNCTION 427s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text) is 427s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 427s 427s Create an sl_event entry'; 427s COMMENT 427s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text) 427s returns bigint 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 427s language C 427s called on null input; 427s CREATE FUNCTION 427s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text) is 427s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 427s 427s Create an sl_event entry'; 427s COMMENT 427s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text) 427s returns bigint 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 427s language C 427s called on null input; 427s CREATE FUNCTION 427s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text) is 427s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 427s 427s Create an sl_event entry'; 427s COMMENT 427s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text) 427s returns bigint 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 427s language C 427s called on null input; 427s CREATE FUNCTION 427s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text) is 427s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 427s 427s Create an sl_event entry'; 427s COMMENT 427s create or replace function public.createEvent (p_cluster_name 
name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text) 427s returns bigint 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 427s language C 427s called on null input; 427s CREATE FUNCTION 427s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text) is 427s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 427s 427s Create an sl_event entry'; 427s COMMENT 427s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text, ev_data6 text) 427s returns bigint 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 427s language C 427s called on null input; 427s CREATE FUNCTION 427s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text, ev_data6 text) is 427s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 427s 427s Create an sl_event entry'; 427s COMMENT 427s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text, ev_data6 text, ev_data7 text) 427s returns bigint 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 427s language C 427s called on null input; 427s CREATE FUNCTION 427s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text, ev_data6 text, ev_data7 text) is 427s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 427s 427s Create an sl_event entry'; 427s COMMENT 427s create or replace function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text, ev_data6 text, ev_data7 text, ev_data8 text) 427s returns bigint 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__createEvent' 427s language C 427s called on null input; 427s CREATE FUNCTION 427s comment on function public.createEvent (p_cluster_name name, p_event_type text, ev_data1 text, ev_data2 text, ev_data3 text, ev_data4 text, ev_data5 text, ev_data6 text, ev_data7 text, ev_data8 text) is 427s 'FUNCTION createEvent (cluster_name, ev_type [, ev_data [...]]) 427s 427s Create an sl_event entry'; 427s COMMENT 427s create or replace function public.denyAccess () 427s returns trigger 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__denyAccess' 427s language C 427s security definer; 427s CREATE FUNCTION 427s comment on function public.denyAccess () is 427s 'Trigger function to prevent modifications to a table on a subscriber'; 427s COMMENT 427s grant execute on function public.denyAccess () to public; 427s GRANT 427s create or replace function public.lockedSet () 427s returns trigger 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__lockedSet' 427s language C; 427s CREATE FUNCTION 427s comment on function public.lockedSet () is 427s 'Trigger function to prevent modifications to a table before and after a moveSet()'; 427s COMMENT 427s create or replace function public.getLocalNodeId (p_cluster name) returns int4 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__getLocalNodeId' 427s language C 427s security definer; 427s CREATE FUNCTION 427s grant execute on function public.getLocalNodeId (p_cluster 
name) to public; 427s GRANT 427s comment on function public.getLocalNodeId (p_cluster name) is 427s 'Returns the node ID of the node being serviced on the local database'; 427s COMMENT 427s create or replace function public.getModuleVersion () returns text 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__getModuleVersion' 427s language C 427s security definer; 427s CREATE FUNCTION 427s grant execute on function public.getModuleVersion () to public; 427s GRANT 427s NOTICE: checked validity of cluster main namespace - OK! 427s NOTICE: function public.clonenodeprepare(int4,int4,text) does not exist, skipping 427s comment on function public.getModuleVersion () is 427s 'Returns the compiled-in version number of the Slony-I shared object'; 427s COMMENT 427s create or replace function public.resetSession() returns text 427s as '$libdir/slony1_funcs.2.2.11','_Slony_I_2_2_11__resetSession' 427s language C; 427s CREATE FUNCTION 427s create or replace function public.logApply () returns trigger 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__logApply' 427s language C 427s security definer; 427s CREATE FUNCTION 427s create or replace function public.logApplySetCacheSize (p_size int4) 427s returns int4 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__logApplySetCacheSize' 427s language C; 427s CREATE FUNCTION 427s create or replace function public.logApplySaveStats (p_cluster name, p_origin int4, p_duration interval) 427s returns int4 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__logApplySaveStats' 427s language C; 427s CREATE FUNCTION 427s create or replace function public.checkmoduleversion () returns text as $$ 427s declare 427s moduleversion text; 427s begin 427s select into moduleversion public.getModuleVersion(); 427s if moduleversion <> '2.2.11' then 427s raise exception 'Slonik version: 2.2.11 != Slony-I version in PG build %', 427s moduleversion; 427s end if; 427s return null; 427s end;$$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.checkmoduleversion () is 427s 'Inline test function that verifies that slonik request for STORE 427s NODE/INIT CLUSTER is being run against a conformant set of 427s schema/functions.'; 427s COMMENT 427s select public.checkmoduleversion(); 427s checkmoduleversion 427s -------------------- 427s 427s (1 row) 427s 427s create or replace function public.decode_tgargs(bytea) returns text[] as 427s '$libdir/slony1_funcs.2.2.11','_Slony_I_2_2_11__slon_decode_tgargs' language C security definer; 427s CREATE FUNCTION 427s comment on function public.decode_tgargs(bytea) is 427s 'Translates the contents of pg_trigger.tgargs to an array of text arguments'; 427s COMMENT 427s grant execute on function public.decode_tgargs(bytea) to public; 427s GRANT 427s create or replace function public.check_namespace_validity () returns boolean as $$ 427s declare 427s c_cluster text; 427s begin 427s c_cluster := 'main'; 427s if c_cluster !~ E'^[[:alpha:]_][[:alnum:]_\$]{0,62}$' then 427s raise exception 'Cluster name % is not a valid SQL symbol!', c_cluster; 427s else 427s raise notice 'checked validity of cluster % namespace - OK!', c_cluster; 427s end if; 427s return 't'; 427s end 427s $$ language plpgsql; 427s CREATE FUNCTION 427s select public.check_namespace_validity(); 427s check_namespace_validity 427s -------------------------- 427s t 427s (1 row) 427s 427s drop function public.check_namespace_validity(); 427s DROP FUNCTION 427s create or replace function public.logTrigger () returns trigger 427s as 
'$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__logTrigger' 427s language C 427s security definer; 427s CREATE FUNCTION 427s comment on function public.logTrigger () is 427s 'This is the trigger that is executed on the origin node that causes 427s updates to be recorded in sl_log_1/sl_log_2.'; 427s COMMENT 427s grant execute on function public.logTrigger () to public; 427s GRANT 427s create or replace function public.terminateNodeConnections (p_failed_node int4) returns int4 427s as $$ 427s declare 427s v_row record; 427s begin 427s for v_row in select nl_nodeid, nl_conncnt, 427s nl_backendpid from public.sl_nodelock 427s where nl_nodeid = p_failed_node for update 427s loop 427s perform public.killBackend(v_row.nl_backendpid, 'TERM'); 427s delete from public.sl_nodelock 427s where nl_nodeid = v_row.nl_nodeid 427s and nl_conncnt = v_row.nl_conncnt; 427s end loop; 427s 427s return 0; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.terminateNodeConnections (p_failed_node int4) is 427s 'terminates all backends that have registered to be from the given node'; 427s COMMENT 427s create or replace function public.killBackend (p_pid int4, p_signame text) returns int4 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__killBackend' 427s language C; 427s CREATE FUNCTION 427s comment on function public.killBackend(p_pid int4, p_signame text) is 427s 'Send a signal to a postgres process. Requires superuser rights'; 427s COMMENT 427s create or replace function public.seqtrack (p_seqid int4, p_seqval int8) returns int8 427s as '$libdir/slony1_funcs.2.2.11', '_Slony_I_2_2_11__seqtrack' 427s strict language C; 427s CREATE FUNCTION 427s comment on function public.seqtrack(p_seqid int4, p_seqval int8) is 427s 'Returns NULL if seqval has not changed since the last call for seqid'; 427s COMMENT 427s create or replace function public.slon_quote_brute(p_tab_fqname text) returns text 427s as $$ 427s declare 427s v_fqname text default ''; 427s begin 427s v_fqname := '"' || replace(p_tab_fqname,'"','""') || '"'; 427s return v_fqname; 427s end; 427s $$ language plpgsql immutable; 427s CREATE FUNCTION 427s comment on function public.slon_quote_brute(p_tab_fqname text) is 427s 'Brutally quote the given text'; 427s COMMENT 427s create or replace function public.slon_quote_input(p_tab_fqname text) returns text as $$ 427s declare 427s v_nsp_name text; 427s v_tab_name text; 427s v_i integer; 427s v_l integer; 427s v_pq2 integer; 427s begin 427s v_l := length(p_tab_fqname); 427s 427s -- Let us search for the dot 427s if p_tab_fqname like '"%' then 427s -- if the first part of the ident starts with a double quote, search 427s -- for the closing double quote, skipping over double double quotes. 427s v_i := 2; 427s while v_i <= v_l loop 427s if substr(p_tab_fqname, v_i, 1) != '"' then 427s v_i := v_i + 1; 427s else 427s v_i := v_i + 1; 427s if substr(p_tab_fqname, v_i, 1) != '"' then 427s exit; 427s end if; 427s v_i := v_i + 1; 427s end if; 427s end loop; 427s else 427s -- first part of ident is not quoted, search for the dot directly 427s v_i := 1; 427s while v_i <= v_l loop 427s if substr(p_tab_fqname, v_i, 1) = '.' then 427s exit; 427s end if; 427s v_i := v_i + 1; 427s end loop; 427s end if; 427s 427s -- v_i now points at the dot or behind the string. 427s 427s if substr(p_tab_fqname, v_i, 1) = '.' 
then 427s -- There is a dot now, so split the ident into its namespace 427s -- and objname parts and make sure each is quoted 427s v_nsp_name := substr(p_tab_fqname, 1, v_i - 1); 427s v_tab_name := substr(p_tab_fqname, v_i + 1); 427s if v_nsp_name not like '"%' then 427s v_nsp_name := '"' || replace(v_nsp_name, '"', '""') || 427s '"'; 427s end if; 427s if v_tab_name not like '"%' then 427s v_tab_name := '"' || replace(v_tab_name, '"', '""') || 427s '"'; 427s end if; 427s 427s return v_nsp_name || '.' || v_tab_name; 427s else 427s -- No dot ... must be just an ident without schema 427s if p_tab_fqname like '"%' then 427s return p_tab_fqname; 427s else 427s return '"' || replace(p_tab_fqname, '"', '""') || '"'; 427s end if; 427s end if; 427s 427s end;$$ language plpgsql immutable; 427s CREATE FUNCTION 427s comment on function public.slon_quote_input(p_text text) is 427s 'quote all words that aren''t quoted yet'; 427s COMMENT 427s create or replace function public.slonyVersionMajor() 427s returns int4 427s as $$ 427s begin 427s return 2; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.slonyVersionMajor () is 427s 'Returns the major version number of the slony schema'; 427s COMMENT 427s create or replace function public.slonyVersionMinor() 427s returns int4 427s as $$ 427s begin 427s return 2; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.slonyVersionMinor () is 427s 'Returns the minor version number of the slony schema'; 427s COMMENT 427s create or replace function public.slonyVersionPatchlevel() 427s returns int4 427s as $$ 427s begin 427s return 11; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.slonyVersionPatchlevel () is 427s 'Returns the version patch level of the slony schema'; 427s COMMENT 427s create or replace function public.slonyVersion() 427s returns text 427s as $$ 427s begin 427s return public.slonyVersionMajor()::text || '.' || 427s public.slonyVersionMinor()::text || '.' 
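The quoting helpers slon_quote_brute() and slon_quote_input() defined above normalize identifiers before they are interpolated into dynamic SQL; a small illustration, assuming the behaviour follows the plpgsql bodies shown here, would be:

    select public.slon_quote_brute('My Table');         -- "My Table"
    select public.slon_quote_input('public.sl_event');  -- "public"."sl_event"
    select public.slon_quote_input('"Sch ema".tab');    -- "Sch ema"."tab" (already-quoted parts are left alone)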
|| 427s public.slonyVersionPatchlevel()::text ; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.slonyVersion() is 427s 'Returns the version number of the slony schema'; 427s COMMENT 427s create or replace function public.registry_set_int4(p_key text, p_value int4) 427s returns int4 as $$ 427s BEGIN 427s if p_value is null then 427s delete from public.sl_registry 427s where reg_key = p_key; 427s else 427s lock table public.sl_registry; 427s update public.sl_registry 427s set reg_int4 = p_value 427s where reg_key = p_key; 427s if not found then 427s insert into public.sl_registry (reg_key, reg_int4) 427s values (p_key, p_value); 427s end if; 427s end if; 427s return p_value; 427s END; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.registry_set_int4(p_key text, p_value int4) is 427s 'registry_set_int4(key, value) 427s 427s Set or delete a registry value'; 427s COMMENT 427s create or replace function public.registry_get_int4(p_key text, p_default int4) 427s returns int4 as $$ 427s DECLARE 427s v_value int4; 427s BEGIN 427s select reg_int4 into v_value from public.sl_registry 427s where reg_key = p_key; 427s if not found then 427s v_value = p_default; 427s if p_default notnull then 427s perform public.registry_set_int4(p_key, p_default); 427s end if; 427s else 427s if v_value is null then 427s raise exception 'Slony-I: registry key % is not an int4 value', 427s p_key; 427s end if; 427s end if; 427s return v_value; 427s END; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.registry_get_int4(p_key text, p_default int4) is 427s 'registry_get_int4(key, value) 427s 427s Get a registry value. If not present, set and return the default.'; 427s COMMENT 427s create or replace function public.registry_set_text(p_key text, p_value text) 427s returns text as $$ 427s BEGIN 427s if p_value is null then 427s delete from public.sl_registry 427s where reg_key = p_key; 427s else 427s lock table public.sl_registry; 427s update public.sl_registry 427s set reg_text = p_value 427s where reg_key = p_key; 427s if not found then 427s insert into public.sl_registry (reg_key, reg_text) 427s values (p_key, p_value); 427s end if; 427s end if; 427s return p_value; 427s END; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.registry_set_text(text, text) is 427s 'registry_set_text(key, value) 427s 427s Set or delete a registry value'; 427s COMMENT 427s create or replace function public.registry_get_text(p_key text, p_default text) 427s returns text as $$ 427s DECLARE 427s v_value text; 427s BEGIN 427s select reg_text into v_value from public.sl_registry 427s where reg_key = p_key; 427s if not found then 427s v_value = p_default; 427s if p_default notnull then 427s perform public.registry_set_text(p_key, p_default); 427s end if; 427s else 427s if v_value is null then 427s raise exception 'Slony-I: registry key % is not a text value', 427s p_key; 427s end if; 427s end if; 427s return v_value; 427s END; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.registry_get_text(p_key text, p_default text) is 427s 'registry_get_text(key, value) 427s 427s Get a registry value. 
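Given the constants returned by slonyVersionMajor()/Minor()/Patchlevel() above, the composed version string can be checked directly:

    select public.slonyVersion();   -- '2.2.11', which should match getModuleVersion()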
If not present, set and return the default.'; 427s COMMENT 427s create or replace function public.registry_set_timestamp(p_key text, p_value timestamptz) 427s returns timestamp as $$ 427s BEGIN 427s if p_value is null then 427s delete from public.sl_registry 427s where reg_key = p_key; 427s else 427s lock table public.sl_registry; 427s update public.sl_registry 427s set reg_timestamp = p_value 427s where reg_key = p_key; 427s if not found then 427s insert into public.sl_registry (reg_key, reg_timestamp) 427s values (p_key, p_value); 427s end if; 427s end if; 427s return p_value; 427s END; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.registry_set_timestamp(p_key text, p_value timestamptz) is 427s 'registry_set_timestamp(key, value) 427s 427s Set or delete a registry value'; 427s COMMENT 427s create or replace function public.registry_get_timestamp(p_key text, p_default timestamptz) 427s returns timestamp as $$ 427s DECLARE 427s v_value timestamp; 427s BEGIN 427s select reg_timestamp into v_value from public.sl_registry 427s where reg_key = p_key; 427s if not found then 427s v_value = p_default; 427s if p_default notnull then 427s perform public.registry_set_timestamp(p_key, p_default); 427s end if; 427s else 427s if v_value is null then 427s raise exception 'Slony-I: registry key % is not an timestamp value', 427s p_key; 427s end if; 427s end if; 427s return v_value; 427s END; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.registry_get_timestamp(p_key text, p_default timestamptz) is 427s 'registry_get_timestamp(key, value) 427s 427s Get a registry value. If not present, set and return the default.'; 427s COMMENT 427s create or replace function public.cleanupNodelock () 427s returns int4 427s as $$ 427s declare 427s v_row record; 427s begin 427s for v_row in select nl_nodeid, nl_conncnt, nl_backendpid 427s from public.sl_nodelock 427s for update 427s loop 427s if public.killBackend(v_row.nl_backendpid, 'NULL') < 0 then 427s raise notice 'Slony-I: cleanup stale sl_nodelock entry for pid=%', 427s v_row.nl_backendpid; 427s delete from public.sl_nodelock where 427s nl_nodeid = v_row.nl_nodeid and 427s nl_conncnt = v_row.nl_conncnt; 427s end if; 427s end loop; 427s 427s return 0; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.cleanupNodelock() is 427s 'Clean up stale entries when restarting slon'; 427s COMMENT 427s create or replace function public.registerNodeConnection (p_nodeid int4) 427s returns int4 427s as $$ 427s begin 427s insert into public.sl_nodelock 427s (nl_nodeid, nl_backendpid) 427s values 427s (p_nodeid, pg_backend_pid()); 427s 427s return 0; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.registerNodeConnection (p_nodeid int4) is 427s 'Register (uniquely) the node connection so that only one slon can service the node'; 427s COMMENT 427s create or replace function public.initializeLocalNode (p_local_node_id int4, p_comment text) 427s returns int4 427s as $$ 427s declare 427s v_old_node_id int4; 427s v_first_log_no int4; 427s v_event_seq int8; 427s begin 427s -- ---- 427s -- Make sure this node is uninitialized or got reset 427s -- ---- 427s select last_value::int4 into v_old_node_id from public.sl_local_node_id; 427s if v_old_node_id != -1 then 427s raise exception 'Slony-I: This node is already initialized'; 427s end if; 427s 427s -- ---- 427s -- Set sl_local_node_id to the requested value and add our 427s -- own system to sl_node. 
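The registry_set_*/registry_get_* pairs above implement a simple typed key/value store on sl_registry: setting NULL deletes the key, and a get with a non-NULL default stores and returns that default on a miss. A hypothetical usage, with an illustrative key name, might look like:

    select public.registry_set_int4('last cleanup batch', 500);
    select public.registry_get_int4('last cleanup batch', 100);   -- returns 500
    select public.registry_get_int4('not set yet', 7);            -- stores 7 and returns it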
427s -- ---- 427s perform setval('public.sl_local_node_id', p_local_node_id); 427s perform public.storeNode_int (p_local_node_id, p_comment); 427s 427s if (pg_catalog.current_setting('max_identifier_length')::integer - pg_catalog.length('public')) < 5 then 427s raise notice 'Slony-I: Cluster name length [%] versus system max_identifier_length [%] ', pg_catalog.length('public'), pg_catalog.current_setting('max_identifier_length'); 427s raise notice 'leaves narrow/no room for some Slony-I-generated objects (such as indexes).'; 427s raise notice 'You may run into problems later!'; 427s end if; 427s 427s -- 427s -- Put the apply trigger onto sl_log_1 and sl_log_2 427s -- 427s create trigger apply_trigger 427s before INSERT on public.sl_log_1 427s for each row execute procedure public.logApply('_main'); 427s alter table public.sl_log_1 427s enable replica trigger apply_trigger; 427s create trigger apply_trigger 427s before INSERT on public.sl_log_2 427s for each row execute procedure public.logApply('_main'); 427s alter table public.sl_log_2 427s enable replica trigger apply_trigger; 427s 427s return p_local_node_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.initializeLocalNode (p_local_node_id int4, p_comment text) is 427s 'no_id - Node ID # 427s no_comment - Human-oriented comment 427s 427s Initializes the new node, no_id'; 427s COMMENT 427s create or replace function public.storeNode (p_no_id int4, p_no_comment text) 427s returns bigint 427s as $$ 427s begin 427s perform public.storeNode_int (p_no_id, p_no_comment); 427s return public.createEvent('_main', 'STORE_NODE', 427s p_no_id::text, p_no_comment::text); 427s end; 427s $$ language plpgsql 427s called on null input; 427s CREATE FUNCTION 427s comment on function public.storeNode(p_no_id int4, p_no_comment text) is 427s 'no_id - Node ID # 427s no_comment - Human-oriented comment 427s 427s Generate the STORE_NODE event for node no_id'; 427s COMMENT 427s create or replace function public.storeNode_int (p_no_id int4, p_no_comment text) 427s returns int4 427s as $$ 427s declare 427s v_old_row record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Check if the node exists 427s -- ---- 427s select * into v_old_row 427s from public.sl_node 427s where no_id = p_no_id 427s for update; 427s if found then 427s -- ---- 427s -- Node exists, update the existing row. 427s -- ---- 427s update public.sl_node 427s set no_comment = p_no_comment 427s where no_id = p_no_id; 427s else 427s -- ---- 427s -- New node, insert the sl_node row 427s -- ---- 427s insert into public.sl_node 427s (no_id, no_active, no_comment,no_failed) values 427s (p_no_id, 'f', p_no_comment,false); 427s end if; 427s 427s return p_no_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.storeNode_int(p_no_id int4, p_no_comment text) is 427s 'no_id - Node ID # 427s no_comment - Human-oriented comment 427s 427s Internal function to process the STORE_NODE event for node no_id'; 427s COMMENT 427s create or replace function public.enableNode (p_no_id int4) 427s returns bigint 427s as $$ 427s declare 427s v_local_node_id int4; 427s v_node_row record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Check that we are the node to activate and that we are 427s -- currently disabled. 
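initializeLocalNode() above is the one-time bootstrap for a node: it refuses to run twice (sl_local_node_id must still be -1), records the node via storeNode_int(), and attaches the logApply trigger to sl_log_1/sl_log_2. A hedged example with an illustrative node id and comment (normally this is driven by slonik's INIT CLUSTER / STORE NODE handling rather than called by hand):

    select public.initializeLocalNode(1, 'Node 1 - origin');
    -- a second call would raise: 'Slony-I: This node is already initialized'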
427s -- ---- 427s v_local_node_id := public.getLocalNodeId('_main'); 427s select * into v_node_row 427s from public.sl_node 427s where no_id = p_no_id 427s for update; 427s if not found then 427s raise exception 'Slony-I: node % not found', p_no_id; 427s end if; 427s if v_node_row.no_active then 427s raise exception 'Slony-I: node % is already active', p_no_id; 427s end if; 427s 427s -- ---- 427s -- Activate this node and generate the ENABLE_NODE event 427s -- ---- 427s perform public.enableNode_int (p_no_id); 427s return public.createEvent('_main', 'ENABLE_NODE', 427s p_no_id::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.enableNode(p_no_id int4) is 427s 'no_id - Node ID # 427s 427s Generate the ENABLE_NODE event for node no_id'; 427s COMMENT 427s create or replace function public.enableNode_int (p_no_id int4) 427s returns int4 427s as $$ 427s declare 427s v_local_node_id int4; 427s v_node_row record; 427s v_sub_row record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Check that the node is inactive 427s -- ---- 427s select * into v_node_row 427s from public.sl_node 427s where no_id = p_no_id 427s for update; 427s if not found then 427s raise exception 'Slony-I: node % not found', p_no_id; 427s end if; 427s if v_node_row.no_active then 427s return p_no_id; 427s end if; 427s 427s -- ---- 427s -- Activate the node and generate sl_confirm status rows for it. 427s -- ---- 427s update public.sl_node 427s set no_active = 't' 427s where no_id = p_no_id; 427s insert into public.sl_confirm 427s (con_origin, con_received, con_seqno) 427s select no_id, p_no_id, 0 from public.sl_node 427s where no_id != p_no_id 427s and no_active; 427s insert into public.sl_confirm 427s (con_origin, con_received, con_seqno) 427s select p_no_id, no_id, 0 from public.sl_node 427s where no_id != p_no_id 427s and no_active; 427s 427s -- ---- 427s -- Generate ENABLE_SUBSCRIPTION events for all sets that 427s -- origin here and are subscribed by the just enabled node. 
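storeNode() and enableNode() above give the usual two-step path for adding a node to the cluster: record it, then activate it. A sketch with illustrative ids:

    select public.storeNode(2, 'Node 2 - subscriber');   -- raises STORE_NODE
    select public.enableNode(2);                         -- raises ENABLE_NODE; errors if node 2 is unknown or already active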
427s -- ---- 427s v_local_node_id := public.getLocalNodeId('_main'); 427s for v_sub_row in select SUB.sub_set, SUB.sub_provider from 427s public.sl_set S, 427s public.sl_subscribe SUB 427s where S.set_origin = v_local_node_id 427s and S.set_id = SUB.sub_set 427s and SUB.sub_receiver = p_no_id 427s for update of S 427s loop 427s perform public.enableSubscription (v_sub_row.sub_set, 427s v_sub_row.sub_provider, p_no_id); 427s end loop; 427s 427s return p_no_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.enableNode_int(p_no_id int4) is 427s 'no_id - Node ID # 427s 427s Internal function to process the ENABLE_NODE event for node no_id'; 427s COMMENT 427s create or replace function public.disableNode (p_no_id int4) 427s returns bigint 427s as $$ 427s begin 427s -- **** TODO **** 427s raise exception 'Slony-I: disableNode() not implemented'; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.disableNode(p_no_id int4) is 427s 'generate DISABLE_NODE event for node no_id'; 427s COMMENT 427s create or replace function public.disableNode_int (p_no_id int4) 427s returns int4 427s as $$ 427s begin 427s -- **** TODO **** 427s raise exception 'Slony-I: disableNode_int() not implemented'; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.disableNode(p_no_id int4) is 427s 'process DISABLE_NODE event for node no_id 427s 427s NOTE: This is not yet implemented!'; 427s COMMENT 427s create or replace function public.dropNode (p_no_ids int4[]) 427s returns bigint 427s as $$ 427s declare 427s v_node_row record; 427s v_idx integer; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Check that this got called on a different node 427s -- ---- 427s if public.getLocalNodeId('_main') = ANY (p_no_ids) then 427s raise exception 'Slony-I: DROP_NODE cannot initiate on the dropped node'; 427s end if; 427s 427s -- 427s -- if any of the deleted nodes are receivers we drop the sl_subscribe line 427s -- 427s delete from public.sl_subscribe where sub_receiver = ANY (p_no_ids); 427s 427s v_idx:=1; 427s LOOP 427s EXIT WHEN v_idx>array_upper(p_no_ids,1) ; 427s select * into v_node_row from public.sl_node 427s where no_id = p_no_ids[v_idx] 427s for update; 427s if not found then 427s raise exception 'Slony-I: unknown node ID % %', p_no_ids[v_idx],v_idx; 427s end if; 427s -- ---- 427s -- Make sure we do not break other nodes subscriptions with this 427s -- ---- 427s if exists (select true from public.sl_subscribe 427s where sub_provider = p_no_ids[v_idx]) 427s then 427s raise exception 'Slony-I: Node % is still configured as a data provider', 427s p_no_ids[v_idx]; 427s end if; 427s 427s -- ---- 427s -- Make sure no set originates there any more 427s -- ---- 427s if exists (select true from public.sl_set 427s where set_origin = p_no_ids[v_idx]) 427s then 427s raise exception 'Slony-I: Node % is still origin of one or more sets', 427s p_no_ids[v_idx]; 427s end if; 427s 427s -- ---- 427s -- Call the internal drop functionality and generate the event 427s -- ---- 427s perform public.dropNode_int(p_no_ids[v_idx]); 427s v_idx:=v_idx+1; 427s END LOOP; 427s return public.createEvent('_main', 'DROP_NODE', 427s array_to_string(p_no_ids,',')); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.dropNode(p_no_ids int4[]) is 427s 'generate DROP_NODE event to drop node node_id from replication'; 
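dropNode() above takes an array of node ids and refuses to drop a node that still provides data or still originates a set, so subscriptions have to be re-pointed first. An illustrative call, run from a surviving node:

    select public.dropNode(ARRAY[3]);   -- fails if node 3 is still a provider or a set origin, or if run on node 3 itself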
427s COMMENT 427s create or replace function public.dropNode_int (p_no_id int4) 427s returns int4 427s as $$ 427s declare 427s v_tab_row record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- If the dropped node is a remote node, clean the configuration 427s -- from all traces for it. 427s -- ---- 427s if p_no_id <> public.getLocalNodeId('_main') then 427s delete from public.sl_subscribe 427s where sub_receiver = p_no_id; 427s delete from public.sl_listen 427s where li_origin = p_no_id 427s or li_provider = p_no_id 427s or li_receiver = p_no_id; 427s delete from public.sl_path 427s where pa_server = p_no_id 427s or pa_client = p_no_id; 427s delete from public.sl_confirm 427s where con_origin = p_no_id 427s or con_received = p_no_id; 427s delete from public.sl_event 427s where ev_origin = p_no_id; 427s delete from public.sl_node 427s where no_id = p_no_id; 427s 427s return p_no_id; 427s end if; 427s 427s -- ---- 427s -- This is us ... deactivate the node for now, the daemon 427s -- will call uninstallNode() in a separate transaction. 427s -- ---- 427s update public.sl_node 427s set no_active = false 427s where no_id = p_no_id; 427s 427s -- Rewrite sl_listen table 427s perform public.RebuildListenEntries(); 427s 427s return p_no_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.dropNode_int(p_no_id int4) is 427s 'internal function to process DROP_NODE event to drop node node_id from replication'; 427s COMMENT 427s create or replace function public.preFailover(p_failed_node int4,p_is_candidate boolean) 427s returns int4 427s as $$ 427s declare 427s v_row record; 427s v_row2 record; 427s v_n int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- All consistency checks first 427s 427s if p_is_candidate then 427s -- ---- 427s -- Check all sets originating on the failed node 427s -- ---- 427s for v_row in select set_id 427s from public.sl_set 427s where set_origin = p_failed_node 427s loop 427s -- ---- 427s -- Check that the backup node is subscribed to all sets 427s -- that originate on the failed node 427s -- ---- 427s select into v_row2 sub_forward, sub_active 427s from public.sl_subscribe 427s where sub_set = v_row.set_id 427s and sub_receiver = public.getLocalNodeId('_main'); 427s if not found then 427s raise exception 'Slony-I: cannot failover - node % is not subscribed to set %', 427s public.getLocalNodeId('_main'), v_row.set_id; 427s end if; 427s 427s -- ---- 427s -- Check that the subscription is active 427s -- ---- 427s if not v_row2.sub_active then 427s raise exception 'Slony-I: cannot failover - subscription for set % is not active', 427s v_row.set_id; 427s end if; 427s 427s -- ---- 427s -- If there are other subscribers, the backup node needs to 427s -- be a forwarder too. 
427s -- ---- 427s select into v_n count(*) 427s from public.sl_subscribe 427s where sub_set = v_row.set_id 427s and sub_receiver <> public.getLocalNodeId('_main'); 427s if v_n > 0 and not v_row2.sub_forward then 427s raise exception 'Slony-I: cannot failover - node % is not a forwarder of set %', 427s public.getLocalNodeId('_main'), v_row.set_id; 427s end if; 427s end loop; 427s end if; 427s 427s -- ---- 427s -- Terminate all connections of the failed node the hard way 427s -- ---- 427s perform public.terminateNodeConnections(p_failed_node); 427s 427s update public.sl_path set pa_conninfo='' WHERE 427s pa_server=p_failed_node; 427s notify "_main_Restart"; 427s -- ---- 427s -- That is it - so far. 427s -- ---- 427s return p_failed_node; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.preFailover(p_failed_node int4,is_failover_candidate boolean) is 427s 'Prepare for a failover. This function is called on all candidate nodes. 427s It blanks the paths to the failed node 427s and then restart of all node daemons.'; 427s COMMENT 427s create or replace function public.failedNode(p_failed_node int4, p_backup_node int4,p_failed_nodes integer[]) 427s returns int4 427s as $$ 427s declare 427s v_row record; 427s v_row2 record; 427s v_failed boolean; 427s v_restart_required boolean; 427s begin 427s 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s v_restart_required:=false; 427s -- 427s -- any nodes other than the backup receiving 427s -- ANY subscription from a failed node 427s -- will now get that data from the backup node. 427s update public.sl_subscribe set 427s sub_provider=p_backup_node 427s where sub_provider=p_failed_node 427s and sub_receiver<>p_backup_node 427s and sub_receiver <> ALL (p_failed_nodes); 427s if found then 427s v_restart_required:=true; 427s end if; 427s -- 427s -- if this node is receiving a subscription from the backup node 427s -- with a failed node as the provider we need to fix this. 427s update public.sl_subscribe set 427s sub_provider=p_backup_node 427s from public.sl_set 427s where set_id = sub_set 427s and set_origin=p_failed_node 427s and sub_provider = ANY(p_failed_nodes) 427s and sub_receiver=public.getLocalNodeId('_main'); 427s 427s -- ---- 427s -- Terminate all connections of the failed node the hard way 427s -- ---- 427s perform public.terminateNodeConnections(p_failed_node); 427s 427s -- Clear out the paths for the failed node. 427s -- This ensures that *this* node won't be pulling data from 427s -- the failed node even if it *does* become accessible 427s 427s update public.sl_path set pa_conninfo='' WHERE 427s pa_server=p_failed_node 427s and pa_conninfo<>''; 427s 427s if found then 427s v_restart_required:=true; 427s end if; 427s 427s v_failed := exists (select 1 from public.sl_node 427s where no_failed=true and no_id=p_failed_node); 427s 427s if not v_failed then 427s 427s update public.sl_node set no_failed=true where no_id = ANY (p_failed_nodes) 427s and no_failed=false; 427s if found then 427s v_restart_required:=true; 427s end if; 427s end if; 427s 427s if v_restart_required then 427s -- Rewrite sl_listen table 427s perform public.RebuildListenEntries(); 427s 427s -- ---- 427s -- Make sure the node daemon will restart 427s -- ---- 427s notify "_main_Restart"; 427s end if; 427s 427s 427s -- ---- 427s -- That is it - so far. 
427s -- ---- 427s return p_failed_node; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.failedNode(p_failed_node int4, p_backup_node int4,p_failed_nodes integer[]) is 427s 'Initiate failover from failed_node to backup_node. This function must be called on all nodes, 427s and then waited for the restart of all node daemons.'; 427s COMMENT 427s create or replace function public.failedNode2 (p_failed_node int4, p_backup_node int4, p_ev_seqno int8, p_failed_nodes integer[]) 427s returns bigint 427s as $$ 427s declare 427s v_row record; 427s v_new_event bigint; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s select * into v_row 427s from public.sl_event 427s where ev_origin = p_failed_node 427s and ev_seqno = p_ev_seqno; 427s if not found then 427s raise exception 'Slony-I: event %,% not found', 427s p_failed_node, p_ev_seqno; 427s end if; 427s 427s update public.sl_node set no_failed=true where no_id = ANY 427s (p_failed_nodes) and no_failed=false; 427s -- Rewrite sl_listen table 427s perform public.RebuildListenEntries(); 427s -- ---- 427s -- Make sure the node daemon will restart 427s -- ---- 427s raise notice 'calling restart node %',p_failed_node; 427s 427s notify "_main_Restart"; 427s 427s select public.createEvent('_main','FAILOVER_NODE', 427s p_failed_node::text,p_ev_seqno::text, 427s array_to_string(p_failed_nodes,',')) 427s into v_new_event; 427s 427s 427s return v_new_event; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.failedNode2 (p_failed_node int4, p_backup_node int4, p_ev_seqno int8,p_failed_nodes integer[] ) is 427s 'FUNCTION failedNode2 (failed_node, backup_node, set_id, ev_seqno, ev_seqfake,p_failed_nodes) 427s 427s On the node that has the highest sequence number of the failed node, 427s fake the FAILOVER_SET event.'; 427s COMMENT 427s create or replace function public.failedNode3 (p_failed_node int4, p_backup_node int4,p_seq_no bigint) 427s returns int4 427s as $$ 427s declare 427s 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s perform public.failoverSet_int(p_failed_node, 427s p_backup_node,p_seq_no); 427s 427s notify "_main_Restart"; 427s return 0; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s create or replace function public.failoverSet_int (p_failed_node int4, p_backup_node int4,p_last_seqno bigint) 427s returns int4 427s as $$ 427s declare 427s v_row record; 427s v_last_sync int8; 427s v_set int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s SELECT max(ev_seqno) into v_last_sync FROM public.sl_event where 427s ev_origin=p_failed_node; 427s if v_last_sync > p_last_seqno then 427s -- this node is ahead of the last sequence number from the 427s -- failed node that the backup node has. 427s -- this node must unsubscribe from all sets from the origin. 
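The failover helpers above (preFailover, failedNode, failedNode2/3) all finish by issuing notify "_main_Restart"; per the in-function comments, this is how the node's slon daemon is told to restart and pick up the revised configuration. Assuming that convention, the channel can also be observed or signalled manually:

    listen "_main_Restart";   -- what the slon daemon for cluster 'main' is assumed to do
    notify "_main_Restart";   -- asks it to restart, as the functions above do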
427s for v_set in select set_id from public.sl_set where 427s set_origin=p_failed_node 427s loop 427s raise warning 'Slony is dropping the subscription of set % found sync %s bigger than %s ' 427s , v_set, v_last_sync::text, p_last_seqno::text; 427s perform public.unsubscribeSet(v_set, 427s public.getLocalNodeId('_main'), 427s true); 427s end loop; 427s delete from public.sl_event where ev_origin=p_failed_node 427s and ev_seqno > p_last_seqno; 427s end if; 427s -- ---- 427s -- Change the origin of the set now to the backup node. 427s -- On the backup node this includes changing all the 427s -- trigger and protection stuff 427s for v_set in select set_id from public.sl_set where 427s set_origin=p_failed_node 427s loop 427s -- ---- 427s if p_backup_node = public.getLocalNodeId('_main') then 427s delete from public.sl_setsync 427s where ssy_setid = v_set; 427s delete from public.sl_subscribe 427s where sub_set = v_set 427s and sub_receiver = p_backup_node; 427s update public.sl_set 427s set set_origin = p_backup_node 427s where set_id = v_set; 427s update public.sl_subscribe 427s set sub_provider=p_backup_node 427s FROM public.sl_node receive_node 427s where sub_set = v_set 427s and sub_provider=p_failed_node 427s and sub_receiver=receive_node.no_id 427s and receive_node.no_failed=false; 427s 427s for v_row in select * from public.sl_table 427s where tab_set = v_set 427s order by tab_id 427s loop 427s perform public.alterTableConfigureTriggers(v_row.tab_id); 427s end loop; 427s else 427s raise notice 'deleting from sl_subscribe all rows with receiver %', 427s p_backup_node; 427s 427s delete from public.sl_subscribe 427s where sub_set = v_set 427s and sub_receiver = p_backup_node; 427s 427s update public.sl_subscribe 427s set sub_provider=p_backup_node 427s FROM public.sl_node receive_node 427s where sub_set = v_set 427s and sub_provider=p_failed_node 427s and sub_provider=p_failed_node 427s and sub_receiver=receive_node.no_id 427s and receive_node.no_failed=false; 427s update public.sl_set 427s set set_origin = p_backup_node 427s where set_id = v_set; 427s -- ---- 427s -- If we are a subscriber of the set ourself, change our 427s -- setsync status to reflect the new set origin. 427s -- ---- 427s if exists (select true from public.sl_subscribe 427s where sub_set = v_set 427s and sub_receiver = public.getLocalNodeId( 427s '_main')) 427s then 427s delete from public.sl_setsync 427s where ssy_setid = v_set; 427s 427s select coalesce(max(ev_seqno), 0) into v_last_sync 427s from public.sl_event 427s where ev_origin = p_backup_node 427s and ev_type = 'SYNC'; 427s if v_last_sync > 0 then 427s insert into public.sl_setsync 427s (ssy_setid, ssy_origin, ssy_seqno, 427s ssy_snapshot, ssy_action_list) 427s select v_set, p_backup_node, v_last_sync, 427s ev_snapshot, NULL 427s from public.sl_event 427s where ev_origin = p_backup_node 427s and ev_seqno = v_last_sync; 427s else 427s insert into public.sl_setsync 427s (ssy_setid, ssy_origin, ssy_seqno, 427s ssy_snapshot, ssy_action_list) 427s values (v_set, p_backup_node, '0', 427s '1:1:', NULL); 427s end if; 427s end if; 427s end if; 427s end loop; 427s 427s --If there are any subscriptions with 427s --the failed_node being the provider then 427s --we want to redirect those subscriptions 427s --to come from the backup node. 427s -- 427s -- The backup node should be a valid 427s -- provider for all subscriptions served 427s -- by the failed node. (otherwise it 427s -- wouldn't be a allowable backup node). 
427s -- delete from public.sl_subscribe 427s -- where sub_receiver=p_backup_node; 427s 427s update public.sl_subscribe 427s set sub_provider=p_backup_node 427s from public.sl_node 427s where sub_provider=p_failed_node 427s and sl_node.no_id=sub_receiver 427s and sl_node.no_failed=false 427s and sub_receiver<>p_backup_node; 427s 427s update public.sl_subscribe 427s set sub_provider=(select set_origin from 427s public.sl_set where set_id= 427s sub_set) 427s where sub_provider=p_failed_node 427s and sub_receiver=p_backup_node; 427s 427s update public.sl_node 427s set no_active=false WHERE 427s no_id=p_failed_node; 427s 427s -- Rewrite sl_listen table 427s perform public.RebuildListenEntries(); 427s 427s 427s return p_failed_node; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.failoverSet_int (p_failed_node int4, p_backup_node int4,p_seqno bigint) is 427s 'FUNCTION failoverSet_int (failed_node, backup_node, set_id, wait_seqno) 427s 427s Finish failover for one set.'; 427s COMMENT 427s create or replace function public.uninstallNode () 427s returns int4 427s as $$ 427s declare 427s v_tab_row record; 427s begin 427s raise notice 'Slony-I: Please drop schema "_main"'; 427s return 0; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.uninstallNode() is 427s 'Reset the whole database to standalone by removing the whole 427s replication system.'; 427s COMMENT 427s DROP FUNCTION IF EXISTS public.cloneNodePrepare(int4,int4,text); 427s DROP FUNCTION 427s create or replace function public.cloneNodePrepare (p_no_id int4, p_no_provider int4, p_no_comment text) 427s returns bigint 427s as $$ 427s begin 427s perform public.cloneNodePrepare_int (p_no_id, p_no_provider, p_no_comment); 427s return public.createEvent('_main', 'CLONE_NODE', 427s p_no_id::text, p_no_provider::text, 427s p_no_comment::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.cloneNodePrepare(p_no_id int4, p_no_provider int4, p_no_comment text) is 427s 'Prepare for cloning a node.'; 427s COMMENT 427s create or replace function public.cloneNodePrepare_int (p_no_id int4, p_no_provider int4, p_no_comment text) 427s returns int4 427s as $$ 427s declare 427s v_dummy int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s update public.sl_node set 427s no_active = np.no_active, 427s no_comment = np.no_comment, 427s no_failed = np.no_failed 427s from public.sl_node np 427s where np.no_id = p_no_provider 427s and sl_node.no_id = p_no_id; 427s if not found then 427s insert into public.sl_node 427s (no_id, no_active, no_comment,no_failed) 427s select p_no_id, no_active, p_no_comment, no_failed 427s from public.sl_node 427s where no_id = p_no_provider; 427s end if; 427s 427s insert into public.sl_path 427s (pa_server, pa_client, pa_conninfo, pa_connretry) 427s select pa_server, p_no_id, '', pa_connretry 427s from public.sl_path 427s where pa_client = p_no_provider 427s and (pa_server, p_no_id) not in (select pa_server, pa_client 427s from public.sl_path); 427s 427s insert into public.sl_path 427s (pa_server, pa_client, pa_conninfo, pa_connretry) 427s select p_no_id, pa_client, '', pa_connretry 427s from public.sl_path 427s where pa_server = p_no_provider 427s and (p_no_id, pa_client) not in (select pa_server, pa_client 427s from public.sl_path); 427s 427s insert into public.sl_subscribe 427s (sub_set, sub_provider, sub_receiver, sub_forward, 
sub_active) 427s select sub_set, sub_provider, p_no_id, sub_forward, sub_active 427s from public.sl_subscribe 427s where sub_receiver = p_no_provider; 427s 427s insert into public.sl_confirm 427s (con_origin, con_received, con_seqno, con_timestamp) 427s select con_origin, p_no_id, con_seqno, con_timestamp 427s from public.sl_confirm 427s where con_received = p_no_provider; 427s 427s perform public.RebuildListenEntries(); 427s 427s return 0; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.cloneNodePrepare_int(p_no_id int4, p_no_provider int4, p_no_comment text) is 427s 'Internal part of cloneNodePrepare().'; 427s COMMENT 427s create or replace function public.cloneNodeFinish (p_no_id int4, p_no_provider int4) 427s returns int4 427s as $$ 427s declare 427s v_row record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s perform "pg_catalog".setval('public.sl_local_node_id', p_no_id); 427s perform public.resetSession(); 427s for v_row in select sub_set from public.sl_subscribe 427s where sub_receiver = p_no_id 427s loop 427s perform public.updateReloid(v_row.sub_set, p_no_id); 427s end loop; 427s 427s perform public.RebuildListenEntries(); 427s 427s delete from public.sl_confirm 427s where con_received = p_no_id; 427s insert into public.sl_confirm 427s (con_origin, con_received, con_seqno, con_timestamp) 427s select con_origin, p_no_id, con_seqno, con_timestamp 427s from public.sl_confirm 427s where con_received = p_no_provider; 427s insert into public.sl_confirm 427s (con_origin, con_received, con_seqno, con_timestamp) 427s select p_no_provider, p_no_id, 427s (select max(ev_seqno) from public.sl_event 427s where ev_origin = p_no_provider), current_timestamp; 427s 427s return 0; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.cloneNodeFinish(p_no_id int4, p_no_provider int4) is 427s 'Internal part of cloneNodePrepare().'; 427s COMMENT 427s create or replace function public.storePath (p_pa_server int4, p_pa_client int4, p_pa_conninfo text, p_pa_connretry int4) 427s returns bigint 427s as $$ 427s begin 427s perform public.storePath_int(p_pa_server, p_pa_client, 427s p_pa_conninfo, p_pa_connretry); 427s return public.createEvent('_main', 'STORE_PATH', 427s p_pa_server::text, p_pa_client::text, 427s p_pa_conninfo::text, p_pa_connretry::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.storePath (p_pa_server int4, p_pa_client int4, p_pa_conninfo text, p_pa_connretry int4) is 427s 'FUNCTION storePath (pa_server, pa_client, pa_conninfo, pa_connretry) 427s 427s Generate the STORE_PATH event indicating that node pa_client can 427s access node pa_server using DSN pa_conninfo'; 427s COMMENT 427s create or replace function public.storePath_int (p_pa_server int4, p_pa_client int4, p_pa_conninfo text, p_pa_connretry int4) 427s returns int4 427s as $$ 427s declare 427s v_dummy int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Check if the path already exists 427s -- ---- 427s select 1 into v_dummy 427s from public.sl_path 427s where pa_server = p_pa_server 427s and pa_client = p_pa_client 427s for update; 427s if found then 427s -- ---- 427s -- Path exists, update pa_conninfo 427s -- ---- 427s update public.sl_path 427s set pa_conninfo = p_pa_conninfo, 427s pa_connretry = p_pa_connretry 427s where 
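cloneNodePrepare()/cloneNodeFinish() above copy an existing node's configuration onto a new node id; the schema-level part of the clone workflow they suggest (the physical database copy in between is outside these functions) would be roughly:

    select public.cloneNodePrepare(4, 1, 'clone of node 1');  -- on an existing node; node ids are illustrative
    -- copy node 1's database to the new node 4, then on node 4:
    select public.cloneNodeFinish(4, 1);                      -- adopts node id 4 and rebuilds sl_confirm entries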
pa_server = p_pa_server 427s and pa_client = p_pa_client; 427s else 427s -- ---- 427s -- New path 427s -- 427s -- In case we receive STORE_PATH events before we know 427s -- about the nodes involved in this, we generate those nodes 427s -- as pending. 427s -- ---- 427s if not exists (select 1 from public.sl_node 427s where no_id = p_pa_server) then 427s perform public.storeNode_int (p_pa_server, ''); 427s end if; 427s if not exists (select 1 from public.sl_node 427s where no_id = p_pa_client) then 427s perform public.storeNode_int (p_pa_client, ''); 427s end if; 427s insert into public.sl_path 427s (pa_server, pa_client, pa_conninfo, pa_connretry) values 427s (p_pa_server, p_pa_client, p_pa_conninfo, p_pa_connretry); 427s end if; 427s 427s -- Rewrite sl_listen table 427s perform public.RebuildListenEntries(); 427s 427s return 0; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.storePath_int (p_pa_server int4, p_pa_client int4, p_pa_conninfo text, p_pa_connretry int4) is 427s 'FUNCTION storePath (pa_server, pa_client, pa_conninfo, pa_connretry) 427s 427s Process the STORE_PATH event indicating that node pa_client can 427s access node pa_server using DSN pa_conninfo'; 427s COMMENT 427s create or replace function public.dropPath (p_pa_server int4, p_pa_client int4) 427s returns bigint 427s as $$ 427s declare 427s v_row record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- There should be no existing subscriptions. Auto unsubscribing 427s -- is considered too dangerous. 427s -- ---- 427s for v_row in select sub_set, sub_provider, sub_receiver 427s from public.sl_subscribe 427s where sub_provider = p_pa_server 427s and sub_receiver = p_pa_client 427s loop 427s raise exception 427s 'Slony-I: Path cannot be dropped, subscription of set % needs it', 427s v_row.sub_set; 427s end loop; 427s 427s -- ---- 427s -- Drop all sl_listen entries that depend on this path 427s -- ---- 427s for v_row in select li_origin, li_provider, li_receiver 427s from public.sl_listen 427s where li_provider = p_pa_server 427s and li_receiver = p_pa_client 427s loop 427s perform public.dropListen( 427s v_row.li_origin, v_row.li_provider, v_row.li_receiver); 427s end loop; 427s 427s -- ---- 427s -- Now drop the path and create the event 427s -- ---- 427s perform public.dropPath_int(p_pa_server, p_pa_client); 427s 427s -- Rewrite sl_listen table 427s perform public.RebuildListenEntries(); 427s 427s return public.createEvent ('_main', 'DROP_PATH', 427s p_pa_server::text, p_pa_client::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.dropPath (p_pa_server int4, p_pa_client int4) is 427s 'Generate DROP_PATH event to drop path from pa_server to pa_client'; 427s COMMENT 427s create or replace function public.dropPath_int (p_pa_server int4, p_pa_client int4) 427s returns int4 427s as $$ 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Remove any dangling sl_listen entries with the server 427s -- as provider and the client as receiver. This must have 427s -- been cleared out before, but obviously was not. 
427s -- ---- 427s delete from public.sl_listen 427s where li_provider = p_pa_server 427s and li_receiver = p_pa_client; 427s 427s delete from public.sl_path 427s where pa_server = p_pa_server 427s and pa_client = p_pa_client; 427s 427s if found then 427s -- Rewrite sl_listen table 427s perform public.RebuildListenEntries(); 427s 427s return 1; 427s else 427s -- Rewrite sl_listen table 427s perform public.RebuildListenEntries(); 427s 427s return 0; 427s end if; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.dropPath_int (p_pa_server int4, p_pa_client int4) is 427s 'Process DROP_PATH event to drop path from pa_server to pa_client'; 427s COMMENT 427s create or replace function public.storeListen (p_origin int4, p_provider int4, p_receiver int4) 427s returns bigint 427s as $$ 427s begin 427s perform public.storeListen_int (p_origin, p_provider, p_receiver); 427s return public.createEvent ('_main', 'STORE_LISTEN', 427s p_origin::text, p_provider::text, p_receiver::text); 427s end; 427s $$ language plpgsql 427s called on null input; 427s CREATE FUNCTION 427s comment on function public.storeListen(p_origin int4, p_provider int4, p_receiver int4) is 427s 'FUNCTION storeListen (li_origin, li_provider, li_receiver) 427s 427s generate STORE_LISTEN event, indicating that receiver node li_receiver 427s listens to node li_provider in order to get messages coming from node 427s li_origin.'; 427s COMMENT 427s create or replace function public.storeListen_int (p_li_origin int4, p_li_provider int4, p_li_receiver int4) 427s returns int4 427s as $$ 427s declare 427s v_exists int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s select 1 into v_exists 427s from public.sl_listen 427s where li_origin = p_li_origin 427s and li_provider = p_li_provider 427s and li_receiver = p_li_receiver; 427s if not found then 427s -- ---- 427s -- In case we receive STORE_LISTEN events before we know 427s -- about the nodes involved in this, we generate those nodes 427s -- as pending. 
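storePath() above is how a node learns the DSN it should use to reach another node (storePath_int() even creates pending sl_node entries when the endpoints are not known yet), and dropPath() removes it again once no subscription depends on it. With illustrative ids and conninfo:

    select public.storePath(1, 2, 'host=node1.example dbname=app user=slony', 10);
    select public.dropPath(1, 2);   -- errors while a subscription still uses this path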
427s -- ---- 427s if not exists (select 1 from public.sl_node 427s where no_id = p_li_origin) then 427s perform public.storeNode_int (p_li_origin, ''); 427s end if; 427s if not exists (select 1 from public.sl_node 427s where no_id = p_li_provider) then 427s perform public.storeNode_int (p_li_provider, ''); 427s end if; 427s if not exists (select 1 from public.sl_node 427s where no_id = p_li_receiver) then 427s perform public.storeNode_int (p_li_receiver, ''); 427s end if; 427s 427s insert into public.sl_listen 427s (li_origin, li_provider, li_receiver) values 427s (p_li_origin, p_li_provider, p_li_receiver); 427s end if; 427s 427s return 0; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.storeListen_int(p_li_origin int4, p_li_provider int4, p_li_receiver int4) is 427s 'FUNCTION storeListen_int (li_origin, li_provider, li_receiver) 427s 427s Process STORE_LISTEN event, indicating that receiver node li_receiver 427s listens to node li_provider in order to get messages coming from node 427s li_origin.'; 427s COMMENT 427s create or replace function public.dropListen (p_li_origin int4, p_li_provider int4, p_li_receiver int4) 427s returns bigint 427s as $$ 427s begin 427s perform public.dropListen_int(p_li_origin, 427s p_li_provider, p_li_receiver); 427s 427s return public.createEvent ('_main', 'DROP_LISTEN', 427s p_li_origin::text, p_li_provider::text, p_li_receiver::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.dropListen(p_li_origin int4, p_li_provider int4, p_li_receiver int4) is 427s 'dropListen (li_origin, li_provider, li_receiver) 427s 427s Generate the DROP_LISTEN event.'; 427s COMMENT 427s create or replace function public.dropListen_int (p_li_origin int4, p_li_provider int4, p_li_receiver int4) 427s returns int4 427s as $$ 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s delete from public.sl_listen 427s where li_origin = p_li_origin 427s and li_provider = p_li_provider 427s and li_receiver = p_li_receiver; 427s if found then 427s return 1; 427s else 427s return 0; 427s end if; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.dropListen_int(p_li_origin int4, p_li_provider int4, p_li_receiver int4) is 427s 'dropListen (li_origin, li_provider, li_receiver) 427s 427s Process the DROP_LISTEN event, deleting the sl_listen entry for 427s the indicated (origin,provider,receiver) combination.'; 427s COMMENT 427s create or replace function public.storeSet (p_set_id int4, p_set_comment text) 427s returns bigint 427s as $$ 427s declare 427s v_local_node_id int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s v_local_node_id := public.getLocalNodeId('_main'); 427s 427s insert into public.sl_set 427s (set_id, set_origin, set_comment) values 427s (p_set_id, v_local_node_id, p_set_comment); 427s 427s return public.createEvent('_main', 'STORE_SET', 427s p_set_id::text, v_local_node_id::text, p_set_comment::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.storeSet(p_set_id int4, p_set_comment text) is 427s 'Generate STORE_SET event for set set_id with human readable comment set_comment'; 427s COMMENT 427s create or replace function public.storeSet_int (p_set_id int4, p_set_origin int4, p_set_comment text) 427s returns int4 427s as $$ 427s declare 427s v_dummy int4; 
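storeListen()/dropListen() above maintain the event-listening triples directly, although most of the functions in this schema regenerate sl_listen via RebuildListenEntries(). An illustrative pair of calls (receiver 3 listening to provider 2 for events originating on node 1):

    select public.storeListen(1, 2, 3);
    select public.dropListen(1, 2, 3);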
427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s select 1 into v_dummy 427s from public.sl_set 427s where set_id = p_set_id 427s for update; 427s if found then 427s update public.sl_set 427s set set_comment = p_set_comment 427s where set_id = p_set_id; 427s else 427s if not exists (select 1 from public.sl_node 427s where no_id = p_set_origin) then 427s perform public.storeNode_int (p_set_origin, ''); 427s end if; 427s insert into public.sl_set 427s (set_id, set_origin, set_comment) values 427s (p_set_id, p_set_origin, p_set_comment); 427s end if; 427s 427s -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table 427s perform public.addPartialLogIndices(); 427s 427s return p_set_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.storeSet_int(p_set_id int4, p_set_origin int4, p_set_comment text) is 427s 'storeSet_int (set_id, set_origin, set_comment) 427s 427s Process the STORE_SET event, indicating the new set with given ID, 427s origin node, and human readable comment.'; 427s COMMENT 427s create or replace function public.lockSet (p_set_id int4) 427s returns int4 427s as $$ 427s declare 427s v_local_node_id int4; 427s v_set_row record; 427s v_tab_row record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Check that the set exists and that we are the origin 427s -- and that it is not already locked. 427s -- ---- 427s v_local_node_id := public.getLocalNodeId('_main'); 427s select * into v_set_row from public.sl_set 427s where set_id = p_set_id 427s for update; 427s if not found then 427s raise exception 'Slony-I: set % not found', p_set_id; 427s end if; 427s if v_set_row.set_origin <> v_local_node_id then 427s raise exception 'Slony-I: set % does not originate on local node', 427s p_set_id; 427s end if; 427s if v_set_row.set_locked notnull then 427s raise exception 'Slony-I: set % is already locked', p_set_id; 427s end if; 427s 427s -- ---- 427s -- Place the lockedSet trigger on all tables in the set. 427s -- ---- 427s for v_tab_row in select T.tab_id, 427s public.slon_quote_brute(PGN.nspname) || '.' 
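storeSet() above creates a replication set on the local node, which becomes the set origin, and raises STORE_SET for the rest of the cluster; storeSet_int() is its receive-side counterpart. With an illustrative set id and comment:

    select public.storeSet(1, 'application tables');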
|| 427s public.slon_quote_brute(PGC.relname) as tab_fqname 427s from public.sl_table T, 427s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 427s where T.tab_set = p_set_id 427s and T.tab_reloid = PGC.oid 427s and PGC.relnamespace = PGN.oid 427s order by tab_id 427s loop 427s execute 'create trigger "_main_lockedset" ' || 427s 'before insert or update or delete on ' || 427s v_tab_row.tab_fqname || ' for each row execute procedure 427s public.lockedSet (''_main'');'; 427s end loop; 427s 427s -- ---- 427s -- Remember our snapshots xmax as for the set locking 427s -- ---- 427s update public.sl_set 427s set set_locked = "pg_catalog".txid_snapshot_xmax("pg_catalog".txid_current_snapshot()) 427s where set_id = p_set_id; 427s 427s return p_set_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.lockSet(p_set_id int4) is 427s 'lockSet(set_id) 427s 427s Add a special trigger to all tables of a set that disables access to 427s it.'; 427s COMMENT 427s NOTICE: function public.ddlcapture(text,text) does not exist, skipping 427s NOTICE: function public.ddlscript_complete(int4,text,int4) does not exist, skipping 427s NOTICE: function public.ddlscript_complete_int(int4,int4) does not exist, skipping 427s NOTICE: function public.subscribeset_int(int4,int4,int4,bool,bool) does not exist, skipping 427s NOTICE: function public.unsubscribeset(int4,int4,pg_catalog.bool) does not exist, skipping 427s create or replace function public.unlockSet (p_set_id int4) 427s returns int4 427s as $$ 427s declare 427s v_local_node_id int4; 427s v_set_row record; 427s v_tab_row record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Check that the set exists and that we are the origin 427s -- and that it is not already locked. 427s -- ---- 427s v_local_node_id := public.getLocalNodeId('_main'); 427s select * into v_set_row from public.sl_set 427s where set_id = p_set_id 427s for update; 427s if not found then 427s raise exception 'Slony-I: set % not found', p_set_id; 427s end if; 427s if v_set_row.set_origin <> v_local_node_id then 427s raise exception 'Slony-I: set % does not originate on local node', 427s p_set_id; 427s end if; 427s if v_set_row.set_locked isnull then 427s raise exception 'Slony-I: set % is not locked', p_set_id; 427s end if; 427s 427s -- ---- 427s -- Drop the lockedSet trigger from all tables in the set. 427s -- ---- 427s for v_tab_row in select T.tab_id, 427s public.slon_quote_brute(PGN.nspname) || '.' 
|| 427s public.slon_quote_brute(PGC.relname) as tab_fqname 427s from public.sl_table T, 427s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 427s where T.tab_set = p_set_id 427s and T.tab_reloid = PGC.oid 427s and PGC.relnamespace = PGN.oid 427s order by tab_id 427s loop 427s execute 'drop trigger "_main_lockedset" ' || 427s 'on ' || v_tab_row.tab_fqname; 427s end loop; 427s 427s -- ---- 427s -- Clear out the set_locked field 427s -- ---- 427s update public.sl_set 427s set set_locked = NULL 427s where set_id = p_set_id; 427s 427s return p_set_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.unlockSet(p_set_id int4) is 427s 'Remove the special trigger from all tables of a set that disables access to it.'; 427s COMMENT 427s create or replace function public.moveSet (p_set_id int4, p_new_origin int4) 427s returns bigint 427s as $$ 427s declare 427s v_local_node_id int4; 427s v_set_row record; 427s v_sub_row record; 427s v_sync_seqno int8; 427s v_lv_row record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Check that the set is locked and that this locking 427s -- happened long enough ago. 427s -- ---- 427s v_local_node_id := public.getLocalNodeId('_main'); 427s select * into v_set_row from public.sl_set 427s where set_id = p_set_id 427s for update; 427s if not found then 427s raise exception 'Slony-I: set % not found', p_set_id; 427s end if; 427s if v_set_row.set_origin <> v_local_node_id then 427s raise exception 'Slony-I: set % does not originate on local node', 427s p_set_id; 427s end if; 427s if v_set_row.set_locked isnull then 427s raise exception 'Slony-I: set % is not locked', p_set_id; 427s end if; 427s if v_set_row.set_locked > "pg_catalog".txid_snapshot_xmin("pg_catalog".txid_current_snapshot()) then 427s raise exception 'Slony-I: cannot move set % yet, transactions < % are still in progress', 427s p_set_id, v_set_row.set_locked; 427s end if; 427s 427s -- ---- 427s -- Unlock the set 427s -- ---- 427s perform public.unlockSet(p_set_id); 427s 427s -- ---- 427s -- Check that the new_origin is an active subscriber of the set 427s -- ---- 427s select * into v_sub_row from public.sl_subscribe 427s where sub_set = p_set_id 427s and sub_receiver = p_new_origin; 427s if not found then 427s raise exception 'Slony-I: set % is not subscribed by node %', 427s p_set_id, p_new_origin; 427s end if; 427s if not v_sub_row.sub_active then 427s raise exception 'Slony-I: subsctiption of node % for set % is inactive', 427s p_new_origin, p_set_id; 427s end if; 427s 427s -- ---- 427s -- Reconfigure everything 427s -- ---- 427s perform public.moveSet_int(p_set_id, v_local_node_id, 427s p_new_origin, 0); 427s 427s perform public.RebuildListenEntries(); 427s 427s -- ---- 427s -- At this time we hold access exclusive locks for every table 427s -- in the set. But we did move the set to the new origin, so the 427s -- createEvent() we are doing now will not record the sequences. 
427s -- ---- 427s v_sync_seqno := public.createEvent('_main', 'SYNC'); 427s insert into public.sl_seqlog 427s (seql_seqid, seql_origin, seql_ev_seqno, seql_last_value) 427s select seq_id, v_local_node_id, v_sync_seqno, seq_last_value 427s from public.sl_seqlastvalue 427s where seq_set = p_set_id; 427s 427s -- ---- 427s -- Finally we generate the real event 427s -- ---- 427s return public.createEvent('_main', 'MOVE_SET', 427s p_set_id::text, v_local_node_id::text, p_new_origin::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.moveSet(p_set_id int4, p_new_origin int4) is 427s 'moveSet(set_id, new_origin) 427s 427s Generate MOVE_SET event to request that the origin for set set_id be moved to node new_origin'; 427s COMMENT 427s create or replace function public.moveSet_int (p_set_id int4, p_old_origin int4, p_new_origin int4, p_wait_seqno int8) 427s returns int4 427s as $$ 427s declare 427s v_local_node_id int4; 427s v_tab_row record; 427s v_sub_row record; 427s v_sub_node int4; 427s v_sub_last int4; 427s v_sub_next int4; 427s v_last_sync int8; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Get our local node ID 427s -- ---- 427s v_local_node_id := public.getLocalNodeId('_main'); 427s 427s -- On the new origin, raise an event - ACCEPT_SET 427s if v_local_node_id = p_new_origin then 427s -- Create a SYNC event as well so that the ACCEPT_SET has 427s -- the same snapshot as the last SYNC generated by the new 427s -- origin. This snapshot will be used by other nodes to 427s -- finalize the setsync status. 427s perform public.createEvent('_main', 'SYNC', NULL); 427s perform public.createEvent('_main', 'ACCEPT_SET', 427s p_set_id::text, p_old_origin::text, 427s p_new_origin::text, p_wait_seqno::text); 427s end if; 427s 427s -- ---- 427s -- Next we have to reverse the subscription path 427s -- ---- 427s v_sub_last = p_new_origin; 427s select sub_provider into v_sub_node 427s from public.sl_subscribe 427s where sub_set = p_set_id 427s and sub_receiver = p_new_origin; 427s if not found then 427s raise exception 'Slony-I: subscription path broken in moveSet_int'; 427s end if; 427s while v_sub_node <> p_old_origin loop 427s -- ---- 427s -- Tracing node by node, the old receiver is now in 427s -- v_sub_last and the old provider is in v_sub_node. 427s -- ---- 427s 427s -- ---- 427s -- Get the current provider of this node as next 427s -- and change the provider to the previous one in 427s -- the reverse chain. 
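lockSet(), unlockSet() and moveSet() above implement the origin transfer: lockSet() puts the lockedSet trigger on every table in the set and records the snapshot xmax, and moveSet() refuses to run until the transactions open at lock time have finished, then unlocks the set itself and raises MOVE_SET. A hedged outline on the current origin (set 1 and node 2 are illustrative; this sequence is normally driven by slonik):

    select public.lockSet(1);
    -- wait until transactions that were in flight at lock time have completed
    select public.moveSet(1, 2);   -- errors if node 2 is not an active subscriber of set 1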
427s -- ---- 427s select sub_provider into v_sub_next 427s from public.sl_subscribe 427s where sub_set = p_set_id 427s and sub_receiver = v_sub_node 427s for update; 427s if not found then 427s raise exception 'Slony-I: subscription path broken in moveSet_int'; 427s end if; 427s update public.sl_subscribe 427s set sub_provider = v_sub_last 427s where sub_set = p_set_id 427s and sub_receiver = v_sub_node 427s and sub_receiver <> v_sub_last; 427s 427s v_sub_last = v_sub_node; 427s v_sub_node = v_sub_next; 427s end loop; 427s 427s -- ---- 427s -- This includes creating a subscription for the old origin 427s -- ---- 427s insert into public.sl_subscribe 427s (sub_set, sub_provider, sub_receiver, 427s sub_forward, sub_active) 427s values (p_set_id, v_sub_last, p_old_origin, true, true); 427s if v_local_node_id = p_old_origin then 427s select coalesce(max(ev_seqno), 0) into v_last_sync 427s from public.sl_event 427s where ev_origin = p_new_origin 427s and ev_type = 'SYNC'; 427s if v_last_sync > 0 then 427s insert into public.sl_setsync 427s (ssy_setid, ssy_origin, ssy_seqno, 427s ssy_snapshot, ssy_action_list) 427s select p_set_id, p_new_origin, v_last_sync, 427s ev_snapshot, NULL 427s from public.sl_event 427s where ev_origin = p_new_origin 427s and ev_seqno = v_last_sync; 427s else 427s insert into public.sl_setsync 427s (ssy_setid, ssy_origin, ssy_seqno, 427s ssy_snapshot, ssy_action_list) 427s values (p_set_id, p_new_origin, '0', 427s '1:1:', NULL); 427s end if; 427s end if; 427s 427s -- ---- 427s -- Now change the ownership of the set. 427s -- ---- 427s update public.sl_set 427s set set_origin = p_new_origin 427s where set_id = p_set_id; 427s 427s -- ---- 427s -- On the new origin, delete the obsolete setsync information 427s -- and the subscription. 427s -- ---- 427s if v_local_node_id = p_new_origin then 427s delete from public.sl_setsync 427s where ssy_setid = p_set_id; 427s else 427s if v_local_node_id <> p_old_origin then 427s -- 427s -- On every other node, change the setsync so that it will 427s -- pick up from the new origins last known sync. 427s -- 427s delete from public.sl_setsync 427s where ssy_setid = p_set_id; 427s select coalesce(max(ev_seqno), 0) into v_last_sync 427s from public.sl_event 427s where ev_origin = p_new_origin 427s and ev_type = 'SYNC'; 427s if v_last_sync > 0 then 427s insert into public.sl_setsync 427s (ssy_setid, ssy_origin, ssy_seqno, 427s ssy_snapshot, ssy_action_list) 427s select p_set_id, p_new_origin, v_last_sync, 427s ev_snapshot, NULL 427s from public.sl_event 427s where ev_origin = p_new_origin 427s and ev_seqno = v_last_sync; 427s else 427s insert into public.sl_setsync 427s (ssy_setid, ssy_origin, ssy_seqno, 427s ssy_snapshot, ssy_action_list) 427s values (p_set_id, p_new_origin, 427s '0', '1:1:', NULL); 427s end if; 427s end if; 427s end if; 427s delete from public.sl_subscribe 427s where sub_set = p_set_id 427s and sub_receiver = p_new_origin; 427s 427s -- Regenerate sl_listen since we revised the subscriptions 427s perform public.RebuildListenEntries(); 427s 427s -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table 427s perform public.addPartialLogIndices(); 427s 427s -- ---- 427s -- If we are the new or old origin, we have to 427s -- adjust the log and deny access trigger configuration. 
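The provider-reversal loop above walks the subscription chain backwards from the new origin. As a worked illustration with made-up node ids (1 the old origin, 3 the new origin, 2 an intermediate forwarder):
  before: sl_subscribe (sub_provider, sub_receiver) = (1,2), (2,3)
  after:  sl_subscribe (sub_provider, sub_receiver) = (3,2), (2,1)
Node 2 now feeds from the new origin 3, the old origin 1 is inserted as a subscriber fed by 2, and the row for receiver 3 is deleted further down once the set ownership has been switched.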
427s -- ---- 427s if v_local_node_id = p_old_origin or v_local_node_id = p_new_origin then 427s for v_tab_row in select tab_id from public.sl_table 427s where tab_set = p_set_id 427s order by tab_id 427s loop 427s perform public.alterTableConfigureTriggers(v_tab_row.tab_id); 427s end loop; 427s end if; 427s 427s return p_set_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.moveSet_int(p_set_id int4, p_old_origin int4, p_new_origin int4, p_wait_seqno int8) is 427s 'moveSet(set_id, old_origin, new_origin, wait_seqno) 427s 427s Process MOVE_SET event to request that the origin for set set_id be 427s moved from old_origin to node new_origin'; 427s COMMENT 427s create or replace function public.dropSet (p_set_id int4) 427s returns bigint 427s as $$ 427s declare 427s v_origin int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Check that the set exists and originates here 427s -- ---- 427s select set_origin into v_origin from public.sl_set 427s where set_id = p_set_id; 427s if not found then 427s raise exception 'Slony-I: set % not found', p_set_id; 427s end if; 427s if v_origin != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: set % does not originate on local node', 427s p_set_id; 427s end if; 427s 427s -- ---- 427s -- Call the internal drop set functionality and generate the event 427s -- ---- 427s perform public.dropSet_int(p_set_id); 427s return public.createEvent('_main', 'DROP_SET', 427s p_set_id::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.dropSet(p_set_id int4) is 427s 'Generate DROP_SET event to drop replication of set set_id'; 427s COMMENT 427s NOTICE: function public.updaterelname(int4,int4) does not exist, skipping 427s NOTICE: function public.updatereloid(int4,int4) does not exist, skipping 427s create or replace function public.dropSet_int (p_set_id int4) 427s returns int4 427s as $$ 427s declare 427s v_tab_row record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Restore all tables original triggers and rules and remove 427s -- our replication stuff. 427s -- ---- 427s for v_tab_row in select tab_id from public.sl_table 427s where tab_set = p_set_id 427s order by tab_id 427s loop 427s perform public.alterTableDropTriggers(v_tab_row.tab_id); 427s end loop; 427s 427s -- ---- 427s -- Remove all traces of the set configuration 427s -- ---- 427s delete from public.sl_sequence 427s where seq_set = p_set_id; 427s delete from public.sl_table 427s where tab_set = p_set_id; 427s delete from public.sl_subscribe 427s where sub_set = p_set_id; 427s delete from public.sl_setsync 427s where ssy_setid = p_set_id; 427s delete from public.sl_set 427s where set_id = p_set_id; 427s 427s -- Regenerate sl_listen since we revised the subscriptions 427s perform public.RebuildListenEntries(); 427s 427s -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table 427s perform public.addPartialLogIndices(); 427s 427s return p_set_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.dropSet(p_set_id int4) is 427s 'Process DROP_SET event to drop replication of set set_id. 
This involves: 427s - Removing log and deny access triggers 427s - Removing all traces of the set configuration, including sequences, tables, subscribers, syncs, and the set itself'; 427s COMMENT 427s create or replace function public.mergeSet (p_set_id int4, p_add_id int4) 427s returns bigint 427s as $$ 427s declare 427s v_origin int4; 427s in_progress boolean; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Check that both sets exist and originate here 427s -- ---- 427s if p_set_id = p_add_id then 427s raise exception 'Slony-I: merged set ids cannot be identical'; 427s end if; 427s select set_origin into v_origin from public.sl_set 427s where set_id = p_set_id; 427s if not found then 427s raise exception 'Slony-I: set % not found', p_set_id; 427s end if; 427s if v_origin != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: set % does not originate on local node', 427s p_set_id; 427s end if; 427s 427s select set_origin into v_origin from public.sl_set 427s where set_id = p_add_id; 427s if not found then 427s raise exception 'Slony-I: set % not found', p_add_id; 427s end if; 427s if v_origin != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: set % does not originate on local node', 427s p_add_id; 427s end if; 427s 427s -- ---- 427s -- Check that both sets are subscribed by the same set of nodes 427s -- ---- 427s if exists (select true from public.sl_subscribe SUB1 427s where SUB1.sub_set = p_set_id 427s and SUB1.sub_receiver not in (select SUB2.sub_receiver 427s from public.sl_subscribe SUB2 427s where SUB2.sub_set = p_add_id)) 427s then 427s raise exception 'Slony-I: subscriber lists of set % and % are different', 427s p_set_id, p_add_id; 427s end if; 427s 427s if exists (select true from public.sl_subscribe SUB1 427s where SUB1.sub_set = p_add_id 427s and SUB1.sub_receiver not in (select SUB2.sub_receiver 427s from public.sl_subscribe SUB2 427s where SUB2.sub_set = p_set_id)) 427s then 427s raise exception 'Slony-I: subscriber lists of set % and % are different', 427s p_add_id, p_set_id; 427s end if; 427s 427s -- ---- 427s -- Check that all ENABLE_SUBSCRIPTION events for the set are confirmed 427s -- ---- 427s select public.isSubscriptionInProgress(p_add_id) into in_progress ; 427s 427s if in_progress then 427s raise exception 'Slony-I: set % has subscriptions in progress - cannot merge', 427s p_add_id; 427s end if; 427s 427s -- ---- 427s -- Create a SYNC event, merge the sets, create a MERGE_SET event 427s -- ---- 427s perform public.createEvent('_main', 'SYNC', NULL); 427s perform public.mergeSet_int(p_set_id, p_add_id); 427s return public.createEvent('_main', 'MERGE_SET', 427s p_set_id::text, p_add_id::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.mergeSet(p_set_id int4, p_add_id int4) is 427s 'Generate MERGE_SET event to request that sets be merged together. 427s 427s Both sets must exist, and originate on the same node. 
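A minimal sketch of invoking the dropSet() defined above (the set id 1 is illustrative): issued on the origin node, it removes the replication triggers from every table of the set, deletes the set's configuration rows (sequences, tables, subscriptions, setsync, and the set itself), and returns the number of the DROP_SET event it generates:
  select public.dropSet(1);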
They must be 427s subscribed by the same set of nodes.'; 427s COMMENT 427s create or replace function public.isSubscriptionInProgress(p_add_id int4) 427s returns boolean 427s as $$ 427s begin 427s if exists (select true from public.sl_event 427s where ev_type = 'ENABLE_SUBSCRIPTION' 427s and ev_data1 = p_add_id::text 427s and ev_seqno > (select max(con_seqno) from public.sl_confirm 427s where con_origin = ev_origin 427s and con_received::text = ev_data3)) 427s then 427s return true; 427s else 427s return false; 427s end if; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.isSubscriptionInProgress(p_add_id int4) is 427s 'Checks to see if a subscription for the indicated set is in progress. 427s Returns true if a subscription is in progress. Otherwise false'; 427s COMMENT 427s create or replace function public.mergeSet_int (p_set_id int4, p_add_id int4) 427s returns int4 427s as $$ 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s update public.sl_sequence 427s set seq_set = p_set_id 427s where seq_set = p_add_id; 427s update public.sl_table 427s set tab_set = p_set_id 427s where tab_set = p_add_id; 427s delete from public.sl_subscribe 427s where sub_set = p_add_id; 427s delete from public.sl_setsync 427s where ssy_setid = p_add_id; 427s delete from public.sl_set 427s where set_id = p_add_id; 427s 427s return p_set_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.mergeSet_int(p_set_id int4, p_add_id int4) is 427s 'mergeSet_int(set_id, add_id) - Perform MERGE_SET event, merging all objects from 427s set add_id into set set_id.'; 427s COMMENT 427s create or replace function public.setAddTable(p_set_id int4, p_tab_id int4, p_fqname text, p_tab_idxname name, p_tab_comment text) 427s returns bigint 427s as $$ 427s declare 427s v_set_origin int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Check that we are the origin of the set 427s -- ---- 427s select set_origin into v_set_origin 427s from public.sl_set 427s where set_id = p_set_id; 427s if not found then 427s raise exception 'Slony-I: setAddTable(): set % not found', p_set_id; 427s end if; 427s if v_set_origin != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: setAddTable(): set % has remote origin', p_set_id; 427s end if; 427s 427s if exists (select true from public.sl_subscribe 427s where sub_set = p_set_id) 427s then 427s raise exception 'Slony-I: cannot add table to currently subscribed set % - must attach to an unsubscribed set', 427s p_set_id; 427s end if; 427s 427s -- ---- 427s -- Add the table to the set and generate the SET_ADD_TABLE event 427s -- ---- 427s perform public.setAddTable_int(p_set_id, p_tab_id, p_fqname, 427s p_tab_idxname, p_tab_comment); 427s return public.createEvent('_main', 'SET_ADD_TABLE', 427s p_set_id::text, p_tab_id::text, p_fqname::text, 427s p_tab_idxname::text, p_tab_comment::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.setAddTable(p_set_id int4, p_tab_id int4, p_fqname text, p_tab_idxname name, p_tab_comment text) is 427s 'setAddTable (set_id, tab_id, tab_fqname, tab_idxname, tab_comment) 427s 427s Add table tab_fqname to replication set on origin node, and generate 427s SET_ADD_TABLE event to allow this to propagate to other nodes. 
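mergeSet() above is the user-facing entry point; a hedged usage sketch (set ids 1 and 2 are illustrative), valid only when both sets originate locally, are subscribed by exactly the same nodes, and set 2 has no ENABLE_SUBSCRIPTION events still unconfirmed:
  select public.isSubscriptionInProgress(2);  -- should return false before merging
  select public.mergeSet(1, 2);               -- folds set 2 into set 1, returns the MERGE_SET event seqno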
427s 427s Note that the table id, tab_id, must be unique ACROSS ALL SETS.'; 427s COMMENT 427s create or replace function public.setAddTable_int(p_set_id int4, p_tab_id int4, p_fqname text, p_tab_idxname name, p_tab_comment text) 427s returns int4 427s as $$ 427s declare 427s v_tab_relname name; 427s v_tab_nspname name; 427s v_local_node_id int4; 427s v_set_origin int4; 427s v_sub_provider int4; 427s v_relkind char; 427s v_tab_reloid oid; 427s v_pkcand_nn boolean; 427s v_prec record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- For sets with a remote origin, check that we are subscribed 427s -- to that set. Otherwise we ignore the table because it might 427s -- not even exist in our database. 427s -- ---- 427s v_local_node_id := public.getLocalNodeId('_main'); 427s select set_origin into v_set_origin 427s from public.sl_set 427s where set_id = p_set_id; 427s if not found then 427s raise exception 'Slony-I: setAddTable_int(): set % not found', 427s p_set_id; 427s end if; 427s if v_set_origin != v_local_node_id then 427s select sub_provider into v_sub_provider 427s from public.sl_subscribe 427s where sub_set = p_set_id 427s and sub_receiver = public.getLocalNodeId('_main'); 427s if not found then 427s return 0; 427s end if; 427s end if; 427s 427s -- ---- 427s -- Get the tables OID and check that it is a real table 427s -- ---- 427s select PGC.oid, PGC.relkind, PGC.relname, PGN.nspname into v_tab_reloid, v_relkind, v_tab_relname, v_tab_nspname 427s from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 427s where PGC.relnamespace = PGN.oid 427s and public.slon_quote_input(p_fqname) = public.slon_quote_brute(PGN.nspname) || 427s '.' || public.slon_quote_brute(PGC.relname); 427s if not found then 427s raise exception 'Slony-I: setAddTable_int(): table % not found', 427s p_fqname; 427s end if; 427s if v_relkind != 'r' then 427s raise exception 'Slony-I: setAddTable_int(): % is not a regular table', 427s p_fqname; 427s end if; 427s 427s if not exists (select indexrelid 427s from "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGC 427s where PGX.indrelid = v_tab_reloid 427s and PGX.indexrelid = PGC.oid 427s and PGC.relname = p_tab_idxname) 427s then 427s raise exception 'Slony-I: setAddTable_int(): table % has no index %', 427s p_fqname, p_tab_idxname; 427s end if; 427s 427s -- ---- 427s -- Verify that the columns in the PK (or candidate) are not NULLABLE 427s -- ---- 427s 427s v_pkcand_nn := 'f'; 427s for v_prec in select attname from "pg_catalog".pg_attribute where attrelid = 427s (select oid from "pg_catalog".pg_class where oid = v_tab_reloid) 427s and attname in (select attname from "pg_catalog".pg_attribute where 427s attrelid = (select oid from "pg_catalog".pg_class PGC, 427s "pg_catalog".pg_index PGX where 427s PGC.relname = p_tab_idxname and PGX.indexrelid=PGC.oid and 427s PGX.indrelid = v_tab_reloid)) and attnotnull <> 't' 427s loop 427s raise notice 'Slony-I: setAddTable_int: table % PK column % nullable', p_fqname, v_prec.attname; 427s v_pkcand_nn := 't'; 427s end loop; 427s if v_pkcand_nn then 427s raise exception 'Slony-I: setAddTable_int: table % not replicable!', p_fqname; 427s end if; 427s 427s select * into v_prec from public.sl_table where tab_id = p_tab_id; 427s if not found then 427s v_pkcand_nn := 't'; -- No-op -- All is well 427s else 427s raise exception 'Slony-I: setAddTable_int: table id % has already been assigned!', p_tab_id; 427s end if; 427s 427s -- ---- 427s -- 
Add the table to sl_table and create the trigger on it. 427s -- ---- 427s insert into public.sl_table 427s (tab_id, tab_reloid, tab_relname, tab_nspname, 427s tab_set, tab_idxname, tab_altered, tab_comment) 427s values 427s (p_tab_id, v_tab_reloid, v_tab_relname, v_tab_nspname, 427s p_set_id, p_tab_idxname, false, p_tab_comment); 427s perform public.alterTableAddTriggers(p_tab_id); 427s 427s return p_tab_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.setAddTable_int(p_set_id int4, p_tab_id int4, p_fqname text, p_tab_idxname name, p_tab_comment text) is 427s 'setAddTable_int (set_id, tab_id, tab_fqname, tab_idxname, tab_comment) 427s 427s This function processes the SET_ADD_TABLE event on remote nodes, 427s adding a table to replication if the remote node is subscribing to its 427s replication set.'; 427s COMMENT 427s create or replace function public.setDropTable(p_tab_id int4) 427s returns bigint 427s as $$ 427s declare 427s v_set_id int4; 427s v_set_origin int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Determine the set_id 427s -- ---- 427s select tab_set into v_set_id from public.sl_table where tab_id = p_tab_id; 427s 427s -- ---- 427s -- Ensure table exists 427s -- ---- 427s if not found then 427s raise exception 'Slony-I: setDropTable_int(): table % not found', 427s p_tab_id; 427s end if; 427s 427s -- ---- 427s -- Check that we are the origin of the set 427s -- ---- 427s select set_origin into v_set_origin 427s from public.sl_set 427s where set_id = v_set_id; 427s if not found then 427s raise exception 'Slony-I: setDropTable(): set % not found', v_set_id; 427s end if; 427s if v_set_origin != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: setDropTable(): set % has remote origin', v_set_id; 427s end if; 427s 427s -- ---- 427s -- Drop the table from the set and generate the SET_ADD_TABLE event 427s -- ---- 427s perform public.setDropTable_int(p_tab_id); 427s return public.createEvent('_main', 'SET_DROP_TABLE', 427s p_tab_id::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.setDropTable(p_tab_id int4) is 427s 'setDropTable (tab_id) 427s 427s Drop table tab_id from set on origin node, and generate SET_DROP_TABLE 427s event to allow this to propagate to other nodes.'; 427s COMMENT 427s create or replace function public.setDropTable_int(p_tab_id int4) 427s returns int4 427s as $$ 427s declare 427s v_set_id int4; 427s v_local_node_id int4; 427s v_set_origin int4; 427s v_sub_provider int4; 427s v_tab_reloid oid; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Determine the set_id 427s -- ---- 427s select tab_set into v_set_id from public.sl_table where tab_id = p_tab_id; 427s 427s -- ---- 427s -- Ensure table exists 427s -- ---- 427s if not found then 427s return 0; 427s end if; 427s 427s -- ---- 427s -- For sets with a remote origin, check that we are subscribed 427s -- to that set. Otherwise we ignore the table because it might 427s -- not even exist in our database. 
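A hedged example of the setAddTable() call defined above (all ids and object names are illustrative; the target set must originate locally and must not yet have subscribers):
  select public.setAddTable(1, 1, 'public.accounts', 'accounts_pkey', 'accounts table');
The named key index must exist on the table and none of its columns may be nullable, and the table id passed as the second argument must be unique across all sets, as the function comments above note.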
427s -- ---- 427s v_local_node_id := public.getLocalNodeId('_main'); 427s select set_origin into v_set_origin 427s from public.sl_set 427s where set_id = v_set_id; 427s if not found then 427s raise exception 'Slony-I: setDropTable_int(): set % not found', 427s v_set_id; 427s end if; 427s if v_set_origin != v_local_node_id then 427s select sub_provider into v_sub_provider 427s from public.sl_subscribe 427s where sub_set = v_set_id 427s and sub_receiver = public.getLocalNodeId('_main'); 427s if not found then 427s return 0; 427s end if; 427s end if; 427s 427s -- ---- 427s -- Drop the table from sl_table and drop trigger from it. 427s -- ---- 427s perform public.alterTableDropTriggers(p_tab_id); 427s delete from public.sl_table where tab_id = p_tab_id; 427s return p_tab_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.setDropTable_int(p_tab_id int4) is 427s 'setDropTable_int (tab_id) 427s 427s This function processes the SET_DROP_TABLE event on remote nodes, 427s dropping a table from replication if the remote node is subscribing to 427s its replication set.'; 427s COMMENT 427s create or replace function public.setAddSequence (p_set_id int4, p_seq_id int4, p_fqname text, p_seq_comment text) 427s returns bigint 427s as $$ 427s declare 427s v_set_origin int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Check that we are the origin of the set 427s -- ---- 427s select set_origin into v_set_origin 427s from public.sl_set 427s where set_id = p_set_id; 427s if not found then 427s raise exception 'Slony-I: setAddSequence(): set % not found', p_set_id; 427s end if; 427s if v_set_origin != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: setAddSequence(): set % has remote origin - submit to origin node', p_set_id; 427s end if; 427s 427s if exists (select true from public.sl_subscribe 427s where sub_set = p_set_id) 427s then 427s raise exception 'Slony-I: cannot add sequence to currently subscribed set %', 427s p_set_id; 427s end if; 427s 427s -- ---- 427s -- Add the sequence to the set and generate the SET_ADD_SEQUENCE event 427s -- ---- 427s perform public.setAddSequence_int(p_set_id, p_seq_id, p_fqname, 427s p_seq_comment); 427s return public.createEvent('_main', 'SET_ADD_SEQUENCE', 427s p_set_id::text, p_seq_id::text, 427s p_fqname::text, p_seq_comment::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.setAddSequence (p_set_id int4, p_seq_id int4, p_fqname text, p_seq_comment text) is 427s 'setAddSequence (set_id, seq_id, seq_fqname, seq_comment) 427s 427s On the origin node for set set_id, add sequence seq_fqname to the 427s replication set, and raise SET_ADD_SEQUENCE to cause this to replicate 427s to subscriber nodes.'; 427s COMMENT 427s create or replace function public.setAddSequence_int(p_set_id int4, p_seq_id int4, p_fqname text, p_seq_comment text) 427s returns int4 427s as $$ 427s declare 427s v_local_node_id int4; 427s v_set_origin int4; 427s v_sub_provider int4; 427s v_relkind char; 427s v_seq_reloid oid; 427s v_seq_relname name; 427s v_seq_nspname name; 427s v_sync_row record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- For sets with a remote origin, check that we are subscribed 427s -- to that set. 
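Likewise, a sketch of the setAddSequence() defined above (ids and names illustrative; like setAddTable() it must be run on the set origin before the set has any subscribers):
  select public.setAddSequence(1, 1, 'public.accounts_id_seq', 'accounts id sequence');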
Otherwise we ignore the sequence because it might 427s -- not even exist in our database. 427s -- ---- 427s v_local_node_id := public.getLocalNodeId('_main'); 427s select set_origin into v_set_origin 427s from public.sl_set 427s where set_id = p_set_id; 427s if not found then 427s raise exception 'Slony-I: setAddSequence_int(): set % not found', 427s p_set_id; 427s end if; 427s if v_set_origin != v_local_node_id then 427s select sub_provider into v_sub_provider 427s from public.sl_subscribe 427s where sub_set = p_set_id 427s and sub_receiver = public.getLocalNodeId('_main'); 427s if not found then 427s return 0; 427s end if; 427s end if; 427s 427s -- ---- 427s -- Get the sequences OID and check that it is a sequence 427s -- ---- 427s select PGC.oid, PGC.relkind, PGC.relname, PGN.nspname 427s into v_seq_reloid, v_relkind, v_seq_relname, v_seq_nspname 427s from "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 427s where PGC.relnamespace = PGN.oid 427s and public.slon_quote_input(p_fqname) = public.slon_quote_brute(PGN.nspname) || 427s '.' || public.slon_quote_brute(PGC.relname); 427s if not found then 427s raise exception 'Slony-I: setAddSequence_int(): sequence % not found', 427s p_fqname; 427s end if; 427s if v_relkind != 'S' then 427s raise exception 'Slony-I: setAddSequence_int(): % is not a sequence', 427s p_fqname; 427s end if; 427s 427s select 1 into v_sync_row from public.sl_sequence where seq_id = p_seq_id; 427s if not found then 427s v_relkind := 'o'; -- all is OK 427s else 427s raise exception 'Slony-I: setAddSequence_int(): sequence ID % has already been assigned', p_seq_id; 427s end if; 427s 427s -- ---- 427s -- Add the sequence to sl_sequence 427s -- ---- 427s insert into public.sl_sequence 427s (seq_id, seq_reloid, seq_relname, seq_nspname, seq_set, seq_comment) 427s values 427s (p_seq_id, v_seq_reloid, v_seq_relname, v_seq_nspname, p_set_id, p_seq_comment); 427s 427s -- ---- 427s -- On the set origin, fake a sl_seqlog row for the last sync event 427s -- ---- 427s if v_set_origin = v_local_node_id then 427s for v_sync_row in select coalesce (max(ev_seqno), 0) as ev_seqno 427s from public.sl_event 427s where ev_origin = v_local_node_id 427s and ev_type = 'SYNC' 427s loop 427s insert into public.sl_seqlog 427s (seql_seqid, seql_origin, seql_ev_seqno, 427s seql_last_value) values 427s (p_seq_id, v_local_node_id, v_sync_row.ev_seqno, 427s public.sequenceLastValue(p_fqname)); 427s end loop; 427s end if; 427s 427s return p_seq_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.setAddSequence_int(p_set_id int4, p_seq_id int4, p_fqname text, p_seq_comment text) is 427s 'setAddSequence_int (set_id, seq_id, seq_fqname, seq_comment) 427s 427s This processes the SET_ADD_SEQUENCE event. 
On remote nodes that 427s subscribe to set_id, add the sequence to the replication set.'; 427s COMMENT 427s create or replace function public.setDropSequence (p_seq_id int4) 427s returns bigint 427s as $$ 427s declare 427s v_set_id int4; 427s v_set_origin int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Determine set id for this sequence 427s -- ---- 427s select seq_set into v_set_id from public.sl_sequence where seq_id = p_seq_id; 427s 427s -- ---- 427s -- Ensure sequence exists 427s -- ---- 427s if not found then 427s raise exception 'Slony-I: setDropSequence_int(): sequence % not found', 427s p_seq_id; 427s end if; 427s 427s -- ---- 427s -- Check that we are the origin of the set 427s -- ---- 427s select set_origin into v_set_origin 427s from public.sl_set 427s where set_id = v_set_id; 427s if not found then 427s raise exception 'Slony-I: setDropSequence(): set % not found', v_set_id; 427s end if; 427s if v_set_origin != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: setDropSequence(): set % has origin at another node - submit this to that node', v_set_id; 427s end if; 427s 427s -- ---- 427s -- Add the sequence to the set and generate the SET_ADD_SEQUENCE event 427s -- ---- 427s perform public.setDropSequence_int(p_seq_id); 427s return public.createEvent('_main', 'SET_DROP_SEQUENCE', 427s p_seq_id::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.setDropSequence (p_seq_id int4) is 427s 'setDropSequence (seq_id) 427s 427s On the origin node for the set, drop sequence seq_id from replication 427s set, and raise SET_DROP_SEQUENCE to cause this to replicate to 427s subscriber nodes.'; 427s COMMENT 427s create or replace function public.setDropSequence_int(p_seq_id int4) 427s returns int4 427s as $$ 427s declare 427s v_set_id int4; 427s v_local_node_id int4; 427s v_set_origin int4; 427s v_sub_provider int4; 427s v_relkind char; 427s v_sync_row record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Determine set id for this sequence 427s -- ---- 427s select seq_set into v_set_id from public.sl_sequence where seq_id = p_seq_id; 427s 427s -- ---- 427s -- Ensure sequence exists 427s -- ---- 427s if not found then 427s return 0; 427s end if; 427s 427s -- ---- 427s -- For sets with a remote origin, check that we are subscribed 427s -- to that set. Otherwise we ignore the sequence because it might 427s -- not even exist in our database. 
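The corresponding removal path, sketched with an illustrative sequence id; setDropSequence() must be submitted on the node where the sequence's set originates, and returns the SET_DROP_SEQUENCE event number:
  select public.setDropSequence(1);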
427s -- ---- 427s v_local_node_id := public.getLocalNodeId('_main'); 427s select set_origin into v_set_origin 427s from public.sl_set 427s where set_id = v_set_id; 427s if not found then 427s raise exception 'Slony-I: setDropSequence_int(): set % not found', 427s v_set_id; 427s end if; 427s if v_set_origin != v_local_node_id then 427s select sub_provider into v_sub_provider 427s from public.sl_subscribe 427s where sub_set = v_set_id 427s and sub_receiver = public.getLocalNodeId('_main'); 427s if not found then 427s return 0; 427s end if; 427s end if; 427s 427s -- ---- 427s -- drop the sequence from sl_sequence, sl_seqlog 427s -- ---- 427s delete from public.sl_seqlog where seql_seqid = p_seq_id; 427s delete from public.sl_sequence where seq_id = p_seq_id; 427s 427s return p_seq_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.setDropSequence_int(p_seq_id int4) is 427s 'setDropSequence_int (seq_id) 427s 427s This processes the SET_DROP_SEQUENCE event. On remote nodes that 427s subscribe to the set containing sequence seq_id, drop the sequence 427s from the replication set.'; 427s COMMENT 427s create or replace function public.setMoveTable (p_tab_id int4, p_new_set_id int4) 427s returns bigint 427s as $$ 427s declare 427s v_old_set_id int4; 427s v_origin int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Get the tables current set 427s -- ---- 427s select tab_set into v_old_set_id from public.sl_table 427s where tab_id = p_tab_id; 427s if not found then 427s raise exception 'Slony-I: table %d not found', p_tab_id; 427s end if; 427s 427s -- ---- 427s -- Check that both sets exist and originate here 427s -- ---- 427s if p_new_set_id = v_old_set_id then 427s raise exception 'Slony-I: set ids cannot be identical'; 427s end if; 427s select set_origin into v_origin from public.sl_set 427s where set_id = p_new_set_id; 427s if not found then 427s raise exception 'Slony-I: set % not found', p_new_set_id; 427s end if; 427s if v_origin != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: set % does not originate on local node', 427s p_new_set_id; 427s end if; 427s 427s select set_origin into v_origin from public.sl_set 427s where set_id = v_old_set_id; 427s if not found then 427s raise exception 'Slony-I: set % not found', v_old_set_id; 427s end if; 427s if v_origin != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: set % does not originate on local node', 427s v_old_set_id; 427s end if; 427s 427s -- ---- 427s -- Check that both sets are subscribed by the same set of nodes 427s -- ---- 427s if exists (select true from public.sl_subscribe SUB1 427s where SUB1.sub_set = p_new_set_id 427s and SUB1.sub_receiver not in (select SUB2.sub_receiver 427s from public.sl_subscribe SUB2 427s where SUB2.sub_set = v_old_set_id)) 427s then 427s raise exception 'Slony-I: subscriber lists of set % and % are different', 427s p_new_set_id, v_old_set_id; 427s end if; 427s 427s if exists (select true from public.sl_subscribe SUB1 427s where SUB1.sub_set = v_old_set_id 427s and SUB1.sub_receiver not in (select SUB2.sub_receiver 427s from public.sl_subscribe SUB2 427s where SUB2.sub_set = p_new_set_id)) 427s then 427s raise exception 'Slony-I: subscriber lists of set % and % are different', 427s v_old_set_id, p_new_set_id; 427s end if; 427s 427s -- ---- 427s -- Change the set the table belongs to 427s -- ---- 427s perform 
public.createEvent('_main', 'SYNC', NULL); 427s perform public.setMoveTable_int(p_tab_id, p_new_set_id); 427s return public.createEvent('_main', 'SET_MOVE_TABLE', 427s p_tab_id::text, p_new_set_id::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.setMoveTable(p_tab_id int4, p_new_set_id int4) is 427s 'This generates the SET_MOVE_TABLE event. If the set that the table is 427s in is identically subscribed to the set that the table is to be moved 427s into, then the SET_MOVE_TABLE event is raised.'; 427s COMMENT 427s create or replace function public.setMoveTable_int (p_tab_id int4, p_new_set_id int4) 427s returns int4 427s as $$ 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Move the table to the new set 427s -- ---- 427s update public.sl_table 427s set tab_set = p_new_set_id 427s where tab_id = p_tab_id; 427s 427s return p_tab_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.setMoveTable(p_tab_id int4, p_new_set_id int4) is 427s 'This processes the SET_MOVE_TABLE event. The table is moved 427s to the destination set.'; 427s COMMENT 427s create or replace function public.setMoveSequence (p_seq_id int4, p_new_set_id int4) 427s returns bigint 427s as $$ 427s declare 427s v_old_set_id int4; 427s v_origin int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Get the sequences current set 427s -- ---- 427s select seq_set into v_old_set_id from public.sl_sequence 427s where seq_id = p_seq_id; 427s if not found then 427s raise exception 'Slony-I: setMoveSequence(): sequence %d not found', p_seq_id; 427s end if; 427s 427s -- ---- 427s -- Check that both sets exist and originate here 427s -- ---- 427s if p_new_set_id = v_old_set_id then 427s raise exception 'Slony-I: setMoveSequence(): set ids cannot be identical'; 427s end if; 427s select set_origin into v_origin from public.sl_set 427s where set_id = p_new_set_id; 427s if not found then 427s raise exception 'Slony-I: setMoveSequence(): set % not found', p_new_set_id; 427s end if; 427s if v_origin != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: setMoveSequence(): set % does not originate on local node', 427s p_new_set_id; 427s end if; 427s 427s select set_origin into v_origin from public.sl_set 427s where set_id = v_old_set_id; 427s if not found then 427s raise exception 'Slony-I: set % not found', v_old_set_id; 427s end if; 427s if v_origin != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: set % does not originate on local node', 427s v_old_set_id; 427s end if; 427s 427s -- ---- 427s -- Check that both sets are subscribed by the same set of nodes 427s -- ---- 427s if exists (select true from public.sl_subscribe SUB1 427s where SUB1.sub_set = p_new_set_id 427s and SUB1.sub_receiver not in (select SUB2.sub_receiver 427s from public.sl_subscribe SUB2 427s where SUB2.sub_set = v_old_set_id)) 427s then 427s raise exception 'Slony-I: subscriber lists of set % and % are different', 427s p_new_set_id, v_old_set_id; 427s end if; 427s 427s if exists (select true from public.sl_subscribe SUB1 427s where SUB1.sub_set = v_old_set_id 427s and SUB1.sub_receiver not in (select SUB2.sub_receiver 427s from public.sl_subscribe SUB2 427s where SUB2.sub_set = p_new_set_id)) 427s then 427s raise exception 'Slony-I: subscriber lists of set % and % 
are different', 427s v_old_set_id, p_new_set_id; 427s end if; 427s 427s -- ---- 427s -- Change the set the sequence belongs to 427s -- ---- 427s perform public.setMoveSequence_int(p_seq_id, p_new_set_id); 427s return public.createEvent('_main', 'SET_MOVE_SEQUENCE', 427s p_seq_id::text, p_new_set_id::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.setMoveSequence (p_seq_id int4, p_new_set_id int4) is 427s 'setMoveSequence(p_seq_id, p_new_set_id) - This generates the 427s SET_MOVE_SEQUENCE event, after validation, notably that both sets 427s exist, are distinct, and have exactly the same subscription lists'; 427s COMMENT 427s create or replace function public.setMoveSequence_int (p_seq_id int4, p_new_set_id int4) 427s returns int4 427s as $$ 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Move the sequence to the new set 427s -- ---- 427s update public.sl_sequence 427s set seq_set = p_new_set_id 427s where seq_id = p_seq_id; 427s 427s return p_seq_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.setMoveSequence_int (p_seq_id int4, p_new_set_id int4) is 427s 'setMoveSequence_int(p_seq_id, p_new_set_id) - processes the 427s SET_MOVE_SEQUENCE event, moving a sequence to another replication 427s set.'; 427s COMMENT 427s create or replace function public.sequenceSetValue(p_seq_id int4, p_seq_origin int4, p_ev_seqno int8, p_last_value int8,p_ignore_missing bool) returns int4 427s as $$ 427s declare 427s v_fqname text; 427s v_found integer; 427s begin 427s -- ---- 427s -- Get the sequences fully qualified name 427s -- ---- 427s select public.slon_quote_brute(PGN.nspname) || '.' || 427s public.slon_quote_brute(PGC.relname) into v_fqname 427s from public.sl_sequence SQ, 427s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 427s where SQ.seq_id = p_seq_id 427s and SQ.seq_reloid = PGC.oid 427s and PGC.relnamespace = PGN.oid; 427s if not found then 427s if p_ignore_missing then 427s return null; 427s end if; 427s raise exception 'Slony-I: sequenceSetValue(): sequence % not found', p_seq_id; 427s end if; 427s 427s -- ---- 427s -- Update it to the new value 427s -- ---- 427s execute 'select setval(''' || v_fqname || 427s ''', ' || p_last_value::text || ')'; 427s 427s if p_ev_seqno is not null then 427s insert into public.sl_seqlog 427s (seql_seqid, seql_origin, seql_ev_seqno, seql_last_value) 427s values (p_seq_id, p_seq_origin, p_ev_seqno, p_last_value); 427s end if; 427s return p_seq_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.sequenceSetValue(p_seq_id int4, p_seq_origin int4, p_ev_seqno int8, p_last_value int8,p_ignore_missing bool) is 427s 'sequenceSetValue (seq_id, seq_origin, ev_seqno, last_value,ignore_missing) 427s Set sequence seq_id to have new value last_value. 
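The two move helpers defined above relocate individual objects between sets; a hedged sketch with illustrative ids (table 1 and sequence 1 moved into set 2):
  select public.setMoveTable(1, 2);
  select public.setMoveSequence(1, 2);
Both generate their respective SET_MOVE_* events only after validating that the source and destination sets are distinct, originate locally, and are subscribed by exactly the same nodes.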
427s '; 427s COMMENT 427s drop function if exists public.ddlCapture (p_statement text, p_nodes text); 427s DROP FUNCTION 427s create or replace function public.ddlCapture (p_statement text, p_nodes text) 427s returns bigint 427s as $$ 427s declare 427s c_local_node integer; 427s c_found_origin boolean; 427s c_node text; 427s c_cmdargs text[]; 427s c_nodeargs text; 427s c_delim text; 427s begin 427s c_local_node := public.getLocalNodeId('_main'); 427s 427s c_cmdargs = array_append('{}'::text[], p_statement); 427s c_nodeargs = ''; 427s if p_nodes is not null then 427s c_found_origin := 'f'; 427s -- p_nodes list needs to consist of a list of nodes that exist 427s -- and that include the current node ID 427s for c_node in select trim(node) from 427s pg_catalog.regexp_split_to_table(p_nodes, ',') as node loop 427s if not exists 427s (select 1 from public.sl_node 427s where no_id = (c_node::integer)) then 427s raise exception 'ddlcapture(%,%) - node % does not exist!', 427s p_statement, p_nodes, c_node; 427s end if; 427s 427s if c_local_node = (c_node::integer) then 427s c_found_origin := 't'; 427s end if; 427s if length(c_nodeargs)>0 then 427s c_nodeargs = c_nodeargs ||','|| c_node; 427s else 427s c_nodeargs=c_node; 427s end if; 427s end loop; 427s 427s if not c_found_origin then 427s raise exception 427s 'ddlcapture(%,%) - origin node % not included in ONLY ON list!', 427s p_statement, p_nodes, c_local_node; 427s end if; 427s end if; 427s c_cmdargs = array_append(c_cmdargs,c_nodeargs); 427s c_delim=','; 427s c_cmdargs = array_append(c_cmdargs, 427s 427s (select public.string_agg( seq_id::text || c_delim 427s || c_local_node || 427s c_delim || seq_last_value) 427s FROM ( 427s select seq_id, 427s seq_last_value from public.sl_seqlastvalue 427s where seq_origin = c_local_node) as FOO 427s where NOT public.seqtrack(seq_id,seq_last_value) is NULL)); 427s insert into public.sl_log_script 427s (log_origin, log_txid, log_actionseq, log_cmdtype, log_cmdargs) 427s values 427s (c_local_node, pg_catalog.txid_current(), 427s nextval('public.sl_action_seq'), 'S', c_cmdargs); 427s execute p_statement; 427s return currval('public.sl_action_seq'); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.ddlCapture (p_statement text, p_nodes text) is 427s 'Capture an SQL statement (usually DDL) that is to be literally replayed on subscribers'; 427s COMMENT 427s drop function if exists public.ddlScript_complete (int4, text, int4); 427s DROP FUNCTION 427s create or replace function public.ddlScript_complete (p_nodes text) 427s returns bigint 427s as $$ 427s declare 427s c_local_node integer; 427s c_found_origin boolean; 427s c_node text; 427s c_cmdargs text[]; 427s begin 427s c_local_node := public.getLocalNodeId('_main'); 427s 427s c_cmdargs = '{}'::text[]; 427s if p_nodes is not null then 427s c_found_origin := 'f'; 427s -- p_nodes list needs to consist o a list of nodes that exist 427s -- and that include the current node ID 427s for c_node in select trim(node) from 427s pg_catalog.regexp_split_to_table(p_nodes, ',') as node loop 427s if not exists 427s (select 1 from public.sl_node 427s where no_id = (c_node::integer)) then 427s raise exception 'ddlcapture(%,%) - node % does not exist!', 427s p_statement, p_nodes, c_node; 427s end if; 427s 427s if c_local_node = (c_node::integer) then 427s c_found_origin := 't'; 427s end if; 427s 427s c_cmdargs = array_append(c_cmdargs, c_node); 427s end loop; 427s 427s if not c_found_origin then 427s raise exception 427s 'ddlScript_complete(%) 
- origin node % not included in ONLY ON list!', 427s p_nodes, c_local_node; 427s end if; 427s end if; 427s 427s perform public.ddlScript_complete_int(); 427s 427s insert into public.sl_log_script 427s (log_origin, log_txid, log_actionseq, log_cmdtype, log_cmdargs) 427s values 427s (c_local_node, pg_catalog.txid_current(), 427s nextval('public.sl_action_seq'), 's', c_cmdargs); 427s 427s return currval('public.sl_action_seq'); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.ddlScript_complete(p_nodes text) is 427s 'ddlScript_complete(set_id, script, only_on_node) 427s 427s After script has run on origin, this fixes up relnames and 427s log trigger arguments and inserts the "fire ddlScript_complete_int() 427s log row into sl_log_script.'; 427s COMMENT 427s drop function if exists public.ddlScript_complete_int(int4, int4); 427s DROP FUNCTION 427s create or replace function public.ddlScript_complete_int () 427s returns int4 427s as $$ 427s begin 427s perform public.updateRelname(); 427s perform public.repair_log_triggers(true); 427s return 0; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.ddlScript_complete_int() is 427s 'ddlScript_complete_int() 427s 427s Complete processing the DDL_SCRIPT event.'; 427s COMMENT 427s create or replace function public.alterTableAddTriggers (p_tab_id int4) 427s returns int4 427s as $$ 427s declare 427s v_no_id int4; 427s v_tab_row record; 427s v_tab_fqname text; 427s v_tab_attkind text; 427s v_n int4; 427s v_trec record; 427s v_tgbad boolean; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Get our local node ID 427s -- ---- 427s v_no_id := public.getLocalNodeId('_main'); 427s 427s -- ---- 427s -- Get the sl_table row and the current origin of the table. 427s -- ---- 427s select T.tab_reloid, T.tab_set, T.tab_idxname, 427s S.set_origin, PGX.indexrelid, 427s public.slon_quote_brute(PGN.nspname) || '.' 
|| 427s public.slon_quote_brute(PGC.relname) as tab_fqname 427s into v_tab_row 427s from public.sl_table T, public.sl_set S, 427s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, 427s "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGXC 427s where T.tab_id = p_tab_id 427s and T.tab_set = S.set_id 427s and T.tab_reloid = PGC.oid 427s and PGC.relnamespace = PGN.oid 427s and PGX.indrelid = T.tab_reloid 427s and PGX.indexrelid = PGXC.oid 427s and PGXC.relname = T.tab_idxname 427s for update; 427s if not found then 427s raise exception 'Slony-I: alterTableAddTriggers(): Table with id % not found', p_tab_id; 427s end if; 427s v_tab_fqname = v_tab_row.tab_fqname; 427s 427s v_tab_attkind := public.determineAttKindUnique(v_tab_row.tab_fqname, 427s v_tab_row.tab_idxname); 427s 427s execute 'lock table ' || v_tab_fqname || ' in access exclusive mode'; 427s 427s -- ---- 427s -- Create the log and the deny access triggers 427s -- ---- 427s execute 'create trigger "_main_logtrigger"' || 427s ' after insert or update or delete on ' || 427s v_tab_fqname || ' for each row execute procedure public.logTrigger (' || 427s pg_catalog.quote_literal('_main') || ',' || 427s pg_catalog.quote_literal(p_tab_id::text) || ',' || 427s pg_catalog.quote_literal(v_tab_attkind) || ');'; 427s 427s execute 'create trigger "_main_denyaccess" ' || 427s 'before insert or update or delete on ' || 427s v_tab_fqname || ' for each row execute procedure ' || 427s 'public.denyAccess (' || pg_catalog.quote_literal('_main') || ');'; 427s 427s perform public.alterTableAddTruncateTrigger(v_tab_fqname, p_tab_id); 427s 427s perform public.alterTableConfigureTriggers (p_tab_id); 427s return p_tab_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.alterTableAddTriggers(p_tab_id int4) is 427s 'alterTableAddTriggers(tab_id) 427s 427s Adds the log and deny access triggers to a replicated table.'; 427s COMMENT 427s create or replace function public.alterTableDropTriggers (p_tab_id int4) 427s returns int4 427s as $$ 427s declare 427s v_no_id int4; 427s v_tab_row record; 427s v_tab_fqname text; 427s v_n int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Get our local node ID 427s -- ---- 427s v_no_id := public.getLocalNodeId('_main'); 427s 427s -- ---- 427s -- Get the sl_table row and the current tables origin. 427s -- ---- 427s select T.tab_reloid, T.tab_set, 427s S.set_origin, PGX.indexrelid, 427s public.slon_quote_brute(PGN.nspname) || '.' 
|| 427s public.slon_quote_brute(PGC.relname) as tab_fqname 427s into v_tab_row 427s from public.sl_table T, public.sl_set S, 427s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, 427s "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGXC 427s where T.tab_id = p_tab_id 427s and T.tab_set = S.set_id 427s and T.tab_reloid = PGC.oid 427s and PGC.relnamespace = PGN.oid 427s and PGX.indrelid = T.tab_reloid 427s and PGX.indexrelid = PGXC.oid 427s and PGXC.relname = T.tab_idxname 427s for update; 427s if not found then 427s raise exception 'Slony-I: alterTableDropTriggers(): Table with id % not found', p_tab_id; 427s end if; 427s v_tab_fqname = v_tab_row.tab_fqname; 427s 427s execute 'lock table ' || v_tab_fqname || ' in access exclusive mode'; 427s 427s -- ---- 427s -- Drop both triggers 427s -- ---- 427s execute 'drop trigger "_main_logtrigger" on ' || 427s v_tab_fqname; 427s 427s execute 'drop trigger "_main_denyaccess" on ' || 427s v_tab_fqname; 427s 427s perform public.alterTableDropTruncateTrigger(v_tab_fqname, p_tab_id); 427s 427s return p_tab_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s NOTICE: function public.reshapesubscription(int4,int4,int4) does not exist, skipping 427s comment on function public.alterTableDropTriggers (p_tab_id int4) is 427s 'alterTableDropTriggers (tab_id) 427s 427s Remove the log and deny access triggers from a table.'; 427s COMMENT 427s create or replace function public.alterTableConfigureTriggers (p_tab_id int4) 427s returns int4 427s as $$ 427s declare 427s v_no_id int4; 427s v_tab_row record; 427s v_tab_fqname text; 427s v_n int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Get our local node ID 427s -- ---- 427s v_no_id := public.getLocalNodeId('_main'); 427s 427s -- ---- 427s -- Get the sl_table row and the current tables origin. 427s -- ---- 427s select T.tab_reloid, T.tab_set, 427s S.set_origin, PGX.indexrelid, 427s public.slon_quote_brute(PGN.nspname) || '.' || 427s public.slon_quote_brute(PGC.relname) as tab_fqname 427s into v_tab_row 427s from public.sl_table T, public.sl_set S, 427s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN, 427s "pg_catalog".pg_index PGX, "pg_catalog".pg_class PGXC 427s where T.tab_id = p_tab_id 427s and T.tab_set = S.set_id 427s and T.tab_reloid = PGC.oid 427s and PGC.relnamespace = PGN.oid 427s and PGX.indrelid = T.tab_reloid 427s and PGX.indexrelid = PGXC.oid 427s and PGXC.relname = T.tab_idxname 427s for update; 427s if not found then 427s raise exception 'Slony-I: alterTableConfigureTriggers(): Table with id % not found', p_tab_id; 427s end if; 427s v_tab_fqname = v_tab_row.tab_fqname; 427s 427s -- ---- 427s -- Configuration depends on the origin of the table 427s -- ---- 427s if v_tab_row.set_origin = v_no_id then 427s -- ---- 427s -- On the origin the log trigger is configured like a default 427s -- user trigger and the deny access trigger is disabled. 427s -- ---- 427s execute 'alter table ' || v_tab_fqname || 427s ' enable trigger "_main_logtrigger"'; 427s execute 'alter table ' || v_tab_fqname || 427s ' disable trigger "_main_denyaccess"'; 427s perform public.alterTableConfigureTruncateTrigger(v_tab_fqname, 427s 'enable', 'disable'); 427s else 427s -- ---- 427s -- On a replica the log trigger is disabled and the 427s -- deny access trigger fires in origin session role. 
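ddlCapture(), defined a little further up, is the entry point for replicated DDL; a minimal usage sketch (the statement text is illustrative; passing NULL as the second argument skips the ONLY ON node-list check, while a non-NULL value must be a comma-separated list of existing node ids that includes the local node):
  select public.ddlCapture('alter table public.accounts add column note text', NULL);
The statement is executed locally and recorded in sl_log_script so that subscribers replay it literally.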
427s -- ---- 427s execute 'alter table ' || v_tab_fqname || 427s ' disable trigger "_main_logtrigger"'; 427s execute 'alter table ' || v_tab_fqname || 427s ' enable trigger "_main_denyaccess"'; 427s perform public.alterTableConfigureTruncateTrigger(v_tab_fqname, 427s 'disable', 'enable'); 427s 427s end if; 427s 427s return p_tab_id; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.alterTableConfigureTriggers (p_tab_id int4) is 427s 'alterTableConfigureTriggers (tab_id) 427s 427s Set the enable/disable configuration for the replication triggers 427s according to the origin of the set.'; 427s COMMENT 427s create or replace function public.resubscribeNode (p_origin int4, 427s p_provider int4, p_receiver int4) 427s returns bigint 427s as $$ 427s declare 427s v_record record; 427s v_missing_sets text; 427s v_ev_seqno bigint; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- 427s -- Check that the receiver exists 427s -- 427s if not exists (select no_id from public.sl_node where no_id= 427s p_receiver) then 427s raise exception 'Slony-I: subscribeSet() receiver % does not exist' , p_receiver; 427s end if; 427s 427s -- 427s -- Check that the provider exists 427s -- 427s if not exists (select no_id from public.sl_node where no_id= 427s p_provider) then 427s raise exception 'Slony-I: subscribeSet() provider % does not exist' , p_provider; 427s end if; 427s 427s 427s -- ---- 427s -- Check that this is called on the origin node 427s -- ---- 427s if p_origin != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: subscribeSet() must be called on origin'; 427s end if; 427s 427s -- --- 427s -- Verify that the provider is either the origin or an active subscriber 427s -- Bug report #1362 427s -- --- 427s if p_origin <> p_provider then 427s for v_record in select sub1.sub_set from 427s public.sl_subscribe sub1 427s left outer join (public.sl_subscribe sub2 427s inner join 427s public.sl_set on ( 427s sl_set.set_id=sub2.sub_set 427s and sub2.sub_set=p_origin) 427s ) 427s ON ( sub1.sub_set = sub2.sub_set and 427s sub1.sub_receiver = p_provider and 427s sub1.sub_forward and sub1.sub_active 427s and sub2.sub_receiver=p_receiver) 427s 427s where sub2.sub_set is null 427s loop 427s v_missing_sets=v_missing_sets || ' ' || v_record.sub_set; 427s end loop; 427s if v_missing_sets is not null then 427s raise exception 'Slony-I: subscribeSet(): provider % is not an active forwarding node for replication set %', p_sub_provider, v_missing_sets; 427s end if; 427s end if; 427s 427s for v_record in select * from 427s public.sl_subscribe, public.sl_set where 427s sub_set=set_id and 427s sub_receiver=p_receiver 427s and set_origin=p_origin 427s loop 427s -- ---- 427s -- Create the SUBSCRIBE_SET event 427s -- ---- 427s v_ev_seqno := public.createEvent('_main', 'SUBSCRIBE_SET', 427s v_record.sub_set::text, p_provider::text, p_receiver::text, 427s case v_record.sub_forward when true then 't' else 'f' end, 427s 'f' ); 427s 427s -- ---- 427s -- Call the internal procedure to store the subscription 427s -- ---- 427s perform public.subscribeSet_int(v_record.sub_set, 427s p_provider, 427s p_receiver, v_record.sub_forward, false); 427s end loop; 427s 427s return v_ev_seqno; 427s end; 427s $$ 427s language plpgsql; 427s CREATE FUNCTION 427s create or replace function public.subscribeSet (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4, p_sub_forward bool, p_omit_copy bool) 427s returns 
bigint 427s as $$ 427s declare 427s v_set_origin int4; 427s v_ev_seqno int8; 427s v_ev_seqno2 int8; 427s v_rec record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- 427s -- Check that the receiver exists 427s -- 427s if not exists (select no_id from public.sl_node where no_id= 427s p_sub_receiver) then 427s raise exception 'Slony-I: subscribeSet() receiver % does not exist' , p_sub_receiver; 427s end if; 427s 427s -- 427s -- Check that the provider exists 427s -- 427s if not exists (select no_id from public.sl_node where no_id= 427s p_sub_provider) then 427s raise exception 'Slony-I: subscribeSet() provider % does not exist' , p_sub_provider; 427s end if; 427s 427s -- ---- 427s -- Check that the origin and provider of the set are remote 427s -- ---- 427s select set_origin into v_set_origin 427s from public.sl_set 427s where set_id = p_sub_set; 427s if not found then 427s raise exception 'Slony-I: subscribeSet(): set % not found', p_sub_set; 427s end if; 427s if v_set_origin = p_sub_receiver then 427s raise exception 427s 'Slony-I: subscribeSet(): set origin and receiver cannot be identical'; 427s end if; 427s if p_sub_receiver = p_sub_provider then 427s raise exception 427s 'Slony-I: subscribeSet(): set provider and receiver cannot be identical'; 427s end if; 427s -- ---- 427s -- Check that this is called on the origin node 427s -- ---- 427s if v_set_origin != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: subscribeSet() must be called on origin'; 427s end if; 427s 427s -- --- 427s -- Verify that the provider is either the origin or an active subscriber 427s -- Bug report #1362 427s -- --- 427s if v_set_origin <> p_sub_provider then 427s if not exists (select 1 from public.sl_subscribe 427s where sub_set = p_sub_set and 427s sub_receiver = p_sub_provider and 427s sub_forward and sub_active) then 427s raise exception 'Slony-I: subscribeSet(): provider % is not an active forwarding node for replication set %', p_sub_provider, p_sub_set; 427s end if; 427s end if; 427s 427s -- --- 427s -- Enforce that all sets from one origin are subscribed 427s -- using the same data provider per receiver. 427s -- ---- 427s if not exists (select 1 from public.sl_subscribe 427s where sub_set = p_sub_set and sub_receiver = p_sub_receiver) then 427s -- 427s -- New subscription - error out if we have any other subscription 427s -- from that origin with a different data provider. 427s -- 427s for v_rec in select sub_provider from public.sl_subscribe 427s join public.sl_set on set_id = sub_set 427s where set_origin = v_set_origin and sub_receiver = p_sub_receiver 427s loop 427s if v_rec.sub_provider <> p_sub_provider then 427s raise exception 'Slony-I: subscribeSet(): wrong provider % - existing subscription from origin % users provider %', 427s p_sub_provider, v_set_origin, v_rec.sub_provider; 427s end if; 427s end loop; 427s else 427s -- 427s -- Existing subscription - in case the data provider changes and 427s -- there are other subscriptions, warn here. subscribeSet_int() 427s -- will currently change the data provider for those sets as well. 
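resubscribeNode(), defined just above subscribeSet(), re-issues SUBSCRIBE_SET events for every set the receiver already subscribes to from the given origin, switching it to the named provider; a hedged sketch with illustrative node ids (origin 1, provider 2, receiver 3), to be run on the origin:
  select public.resubscribeNode(1, 2, 3);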
427s -- 427s for v_rec in select set_id, sub_provider from public.sl_subscribe 427s join public.sl_set on set_id = sub_set 427s where set_origin = v_set_origin and sub_receiver = p_sub_receiver 427s and set_id <> p_sub_set 427s loop 427s if v_rec.sub_provider <> p_sub_provider then 427s raise exception 'Slony-I: subscribeSet(): also data provider for set % use resubscribe instead', 427s v_rec.set_id; 427s end if; 427s end loop; 427s end if; 427s 427s -- ---- 427s -- Create the SUBSCRIBE_SET event 427s -- ---- 427s v_ev_seqno := public.createEvent('_main', 'SUBSCRIBE_SET', 427s p_sub_set::text, p_sub_provider::text, p_sub_receiver::text, 427s case p_sub_forward when true then 't' else 'f' end, 427s case p_omit_copy when true then 't' else 'f' end 427s ); 427s 427s -- ---- 427s -- Call the internal procedure to store the subscription 427s -- ---- 427s v_ev_seqno2:=public.subscribeSet_int(p_sub_set, p_sub_provider, 427s p_sub_receiver, p_sub_forward, p_omit_copy); 427s 427s if v_ev_seqno2 is not null then 427s v_ev_seqno:=v_ev_seqno2; 427s end if; 427s 427s return v_ev_seqno; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.subscribeSet (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4, p_sub_forward bool, p_omit_copy bool) is 427s 'subscribeSet (sub_set, sub_provider, sub_receiver, sub_forward, omit_copy) 427s 427s Makes sure that the receiver is not the provider, then stores the 427s subscription, and publishes the SUBSCRIBE_SET event to other nodes. 427s 427s If omit_copy is true, then no data copy will be done. 427s '; 427s COMMENT 427s DROP FUNCTION IF EXISTS public.subscribeSet_int(int4,int4,int4,bool,bool); 427s DROP FUNCTION 427s create or replace function public.subscribeSet_int (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4, p_sub_forward bool, p_omit_copy bool) 427s returns int4 427s as $$ 427s declare 427s v_set_origin int4; 427s v_sub_row record; 427s v_seq_id bigint; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Lookup the set origin 427s -- ---- 427s select set_origin into v_set_origin 427s from public.sl_set 427s where set_id = p_sub_set; 427s if not found then 427s raise exception 'Slony-I: subscribeSet_int(): set % not found', p_sub_set; 427s end if; 427s 427s -- ---- 427s -- Provider change is only allowed for active sets 427s -- ---- 427s if p_sub_receiver = public.getLocalNodeId('_main') then 427s select sub_active into v_sub_row from public.sl_subscribe 427s where sub_set = p_sub_set 427s and sub_receiver = p_sub_receiver; 427s if found then 427s if not v_sub_row.sub_active then 427s raise exception 'Slony-I: subscribeSet_int(): set % is not active, cannot change provider', 427s p_sub_set; 427s end if; 427s end if; 427s end if; 427s 427s -- ---- 427s -- Try to change provider and/or forward for an existing subscription 427s -- ---- 427s update public.sl_subscribe 427s set sub_provider = p_sub_provider, 427s sub_forward = p_sub_forward 427s where sub_set = p_sub_set 427s and sub_receiver = p_sub_receiver; 427s if found then 427s 427s -- ---- 427s -- This is changing a subscriptoin. Make sure all sets from 427s -- this origin are subscribed using the same data provider. 427s -- For this we first check that the requested data provider 427s -- is subscribed to all the sets, the receiver is subscribed to. 
427s -- ---- 427s for v_sub_row in select set_id from public.sl_set 427s join public.sl_subscribe on set_id = sub_set 427s where set_origin = v_set_origin 427s and sub_receiver = p_sub_receiver 427s and sub_set <> p_sub_set 427s loop 427s if not exists (select 1 from public.sl_subscribe 427s where sub_set = v_sub_row.set_id 427s and sub_receiver = p_sub_provider 427s and sub_active and sub_forward) 427s and not exists (select 1 from public.sl_set 427s where set_id = v_sub_row.set_id 427s and set_origin = p_sub_provider) 427s then 427s raise exception 'Slony-I: subscribeSet_int(): node % is not a forwarding subscriber for set %', 427s p_sub_provider, v_sub_row.set_id; 427s end if; 427s 427s -- ---- 427s -- New data provider offers this set as well, change that 427s -- subscription too. 427s -- ---- 427s update public.sl_subscribe 427s set sub_provider = p_sub_provider 427s where sub_set = v_sub_row.set_id 427s and sub_receiver = p_sub_receiver; 427s end loop; 427s 427s -- ---- 427s -- Rewrite sl_listen table 427s -- ---- 427s perform public.RebuildListenEntries(); 427s 427s return p_sub_set; 427s end if; 427s 427s -- ---- 427s -- Not found, insert a new one 427s -- ---- 427s if not exists (select true from public.sl_path 427s where pa_server = p_sub_provider 427s and pa_client = p_sub_receiver) 427s then 427s insert into public.sl_path 427s (pa_server, pa_client, pa_conninfo, pa_connretry) 427s values 427s (p_sub_provider, p_sub_receiver, 427s '', 10); 427s end if; 427s insert into public.sl_subscribe 427s (sub_set, sub_provider, sub_receiver, sub_forward, sub_active) 427s values (p_sub_set, p_sub_provider, p_sub_receiver, 427s p_sub_forward, false); 427s 427s -- ---- 427s -- If the set origin is here, then enable the subscription 427s -- ---- 427s if v_set_origin = public.getLocalNodeId('_main') then 427s select public.createEvent('_main', 'ENABLE_SUBSCRIPTION', 427s p_sub_set::text, p_sub_provider::text, p_sub_receiver::text, 427s case p_sub_forward when true then 't' else 'f' end, 427s case p_omit_copy when true then 't' else 'f' end 427s ) into v_seq_id; 427s perform public.enableSubscription(p_sub_set, 427s p_sub_provider, p_sub_receiver); 427s end if; 427s 427s -- ---- 427s -- Rewrite sl_listen table 427s -- ---- 427s perform public.RebuildListenEntries(); 427s 427s return p_sub_set; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.subscribeSet_int (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4, p_sub_forward bool, p_omit_copy bool) is 427s 'subscribeSet_int (sub_set, sub_provider, sub_receiver, sub_forward, omit_copy) 427s 427s Internal actions for subscribing receiver sub_receiver to subscription 427s set sub_set.'; 427s COMMENT 427s drop function IF EXISTS public.unsubscribeSet(int4,int4,boolean); 427s DROP FUNCTION 427s create or replace function public.unsubscribeSet (p_sub_set int4, p_sub_receiver int4,p_force boolean) 427s returns bigint 427s as $$ 427s declare 427s v_tab_row record; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- Check that this is called on the receiver node 427s -- ---- 427s if p_sub_receiver != public.getLocalNodeId('_main') then 427s raise exception 'Slony-I: unsubscribeSet() must be called on receiver'; 427s end if; 427s 427s 427s 427s -- ---- 427s -- Check that this does not break any chains 427s -- ---- 427s if p_force=false and exists (select true from public.sl_subscribe 427s where sub_set = p_sub_set 
427s and sub_provider = p_sub_receiver) 427s then 427s raise exception 'Slony-I: Cannot unsubscribe set % while being provider', 427s p_sub_set; 427s end if; 427s 427s if exists (select true from public.sl_subscribe 427s where sub_set = p_sub_set 427s and sub_provider = p_sub_receiver) 427s then 427s --delete the receivers of this provider. 427s --unsubscribeSet_int() will generate the event 427s --when it runs on the receiver. 427s delete from public.sl_subscribe 427s where sub_set=p_sub_set 427s and sub_provider=p_sub_receiver; 427s end if; 427s 427s -- ---- 427s -- Remove the replication triggers. 427s -- ---- 427s for v_tab_row in select tab_id from public.sl_table 427s where tab_set = p_sub_set 427s order by tab_id 427s loop 427s perform public.alterTableDropTriggers(v_tab_row.tab_id); 427s end loop; 427s 427s -- ---- 427s -- Remove the setsync status. This will also cause the 427s -- worker thread to ignore the set and stop replicating 427s -- right now. 427s -- ---- 427s delete from public.sl_setsync 427s where ssy_setid = p_sub_set; 427s 427s -- ---- 427s -- Remove all sl_table and sl_sequence entries for this set. 427s -- Should we ever subscribe again, the initial data 427s -- copy process will create new ones. 427s -- ---- 427s delete from public.sl_table 427s where tab_set = p_sub_set; 427s delete from public.sl_sequence 427s where seq_set = p_sub_set; 427s 427s -- ---- 427s -- Call the internal procedure to drop the subscription 427s -- ---- 427s perform public.unsubscribeSet_int(p_sub_set, p_sub_receiver); 427s 427s -- Rewrite sl_listen table 427s perform public.RebuildListenEntries(); 427s 427s -- ---- 427s -- Create the UNSUBSCRIBE_SET event 427s -- ---- 427s return public.createEvent('_main', 'UNSUBSCRIBE_SET', 427s p_sub_set::text, p_sub_receiver::text); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.unsubscribeSet (p_sub_set int4, p_sub_receiver int4,force boolean) is 427s 'unsubscribeSet (sub_set, sub_receiver,force) 427s 427s Unsubscribe node sub_receiver from subscription set sub_set. This is 427s invoked on the receiver node. It verifies that this does not break 427s any chains (e.g. - where sub_receiver is a provider for another node), 427s then restores tables, drops Slony-specific keys, drops table entries 427s for the set, drops the subscription, and generates an UNSUBSCRIBE_SET 427s node to publish that the node is being dropped.'; 427s COMMENT 427s create or replace function public.unsubscribeSet_int (p_sub_set int4, p_sub_receiver int4) 427s returns int4 427s as $$ 427s declare 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- All the real work is done before event generation on the 427s -- subscriber. 427s -- ---- 427s 427s --if this event unsubscribes the provider of this node 427s --then this node should unsubscribe itself from the set as well. 
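-- For illustration (hypothetical ids): receiver node 2 dropping set 1 runs
--   select public.unsubscribeSet(1, 2, false);
-- on node 2 itself; with force=false this refuses while node 2 still provides
-- set 1 to other subscribers. The force=true call just below covers the cascade
-- case where this node's own provider is being unsubscribed.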
427s 427s if exists (select true from 427s public.sl_subscribe where 427s sub_set=p_sub_set and sub_provider=p_sub_receiver 427s and sub_receiver=public.getLocalNodeId('_main')) 427s then 427s perform public.unsubscribeSet(p_sub_set,public.getLocalNodeId('_main'),true); 427s end if; 427s 427s 427s delete from public.sl_subscribe 427s where sub_set = p_sub_set 427s and sub_receiver = p_sub_receiver; 427s 427s -- Rewrite sl_listen table 427s perform public.RebuildListenEntries(); 427s 427s return p_sub_set; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.unsubscribeSet_int (p_sub_set int4, p_sub_receiver int4) is 427s 'unsubscribeSet_int (sub_set, sub_receiver) 427s 427s All the REAL work of removing the subscriber is done before the event 427s is generated, so this function just has to drop the references to the 427s subscription in sl_subscribe.'; 427s COMMENT 427s create or replace function public.enableSubscription (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4) 427s returns int4 427s as $$ 427s begin 427s return public.enableSubscription_int (p_sub_set, 427s p_sub_provider, p_sub_receiver); 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.enableSubscription (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4) is 427s 'enableSubscription (sub_set, sub_provider, sub_receiver) 427s 427s Indicates that sub_receiver intends subscribing to set sub_set from 427s sub_provider. Work is all done by the internal function 427s enableSubscription_int (sub_set, sub_provider, sub_receiver).'; 427s COMMENT 427s create or replace function public.enableSubscription_int (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4) 427s returns int4 427s as $$ 427s declare 427s v_n int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- ---- 427s -- The real work is done in the replication engine. All 427s -- we have to do here is remembering that it happened. 427s -- ---- 427s 427s -- ---- 427s -- Well, not only ... we might be missing an important event here 427s -- ---- 427s if not exists (select true from public.sl_path 427s where pa_server = p_sub_provider 427s and pa_client = p_sub_receiver) 427s then 427s insert into public.sl_path 427s (pa_server, pa_client, pa_conninfo, pa_connretry) 427s values 427s (p_sub_provider, p_sub_receiver, 427s '', 10); 427s end if; 427s 427s update public.sl_subscribe 427s set sub_active = 't' 427s where sub_set = p_sub_set 427s and sub_receiver = p_sub_receiver; 427s get diagnostics v_n = row_count; 427s if v_n = 0 then 427s insert into public.sl_subscribe 427s (sub_set, sub_provider, sub_receiver, 427s sub_forward, sub_active) 427s values 427s (p_sub_set, p_sub_provider, p_sub_receiver, 427s false, true); 427s end if; 427s 427s -- Rewrite sl_listen table 427s perform public.RebuildListenEntries(); 427s 427s return p_sub_set; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.enableSubscription_int (p_sub_set int4, p_sub_provider int4, p_sub_receiver int4) is 427s 'enableSubscription_int (sub_set, sub_provider, sub_receiver) 427s 427s Internal function to enable subscription of node sub_receiver to set 427s sub_set via node sub_provider. 427s 427s slon does most of the work; all we need do here is to remember that it 427s happened. 
The function updates sl_subscribe, indicating that the 427s subscription has become active.'; 427s COMMENT 427s create or replace function public.forwardConfirm (p_con_origin int4, p_con_received int4, p_con_seqno int8, p_con_timestamp timestamp) 427s returns bigint 427s as $$ 427s declare 427s v_max_seqno bigint; 427s begin 427s select into v_max_seqno coalesce(max(con_seqno), 0) 427s from public.sl_confirm 427s where con_origin = p_con_origin 427s and con_received = p_con_received; 427s if v_max_seqno < p_con_seqno then 427s insert into public.sl_confirm 427s (con_origin, con_received, con_seqno, con_timestamp) 427s values (p_con_origin, p_con_received, p_con_seqno, 427s p_con_timestamp); 427s v_max_seqno = p_con_seqno; 427s end if; 427s 427s return v_max_seqno; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.forwardConfirm (p_con_origin int4, p_con_received int4, p_con_seqno int8, p_con_timestamp timestamp) is 427s 'forwardConfirm (p_con_origin, p_con_received, p_con_seqno, p_con_timestamp) 427s 427s Confirms (recorded in sl_confirm) that items from p_con_origin up to 427s p_con_seqno have been received by node p_con_received as of 427s p_con_timestamp, and raises an event to forward this confirmation.'; 427s COMMENT 427s create or replace function public.cleanupEvent (p_interval interval) 427s returns int4 427s as $$ 427s declare 427s v_max_row record; 427s v_min_row record; 427s v_max_sync int8; 427s v_origin int8; 427s v_seqno int8; 427s v_xmin bigint; 427s v_rc int8; 427s begin 427s -- ---- 427s -- First remove all confirmations where origin/receiver no longer exist 427s -- ---- 427s delete from public.sl_confirm 427s where con_origin not in (select no_id from public.sl_node); 427s delete from public.sl_confirm 427s where con_received not in (select no_id from public.sl_node); 427s -- ---- 427s -- Next remove all but the newest confirm row per origin,receiver pair. 427s -- Ignore confirmations that are younger than 10 minutes. We currently 427s -- have an unconfirmed suspicion that a transaction lost in a server crash 427s -- might have been visible to another session, and that this resulted in the 427s -- removal of log data that was still needed.
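-- A hypothetical direct call (normally driven by the cleanup thread, see
-- logswitch_finish() further below):
--   select public.cleanupEvent('10 minutes'::interval);
-- This trims sl_confirm and sl_event, cleans stale node locks, and kicks off a
-- log switch if none is currently in progress.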
427s -- ---- 427s for v_max_row in select con_origin, con_received, max(con_seqno) as con_seqno 427s from public.sl_confirm 427s where con_timestamp < (CURRENT_TIMESTAMP - p_interval) 427s group by con_origin, con_received 427s loop 427s delete from public.sl_confirm 427s where con_origin = v_max_row.con_origin 427s and con_received = v_max_row.con_received 427s and con_seqno < v_max_row.con_seqno; 427s end loop; 427s 427s -- ---- 427s -- Then remove all events that are confirmed by all nodes in the 427s -- whole cluster up to the last SYNC 427s -- ---- 427s for v_min_row in select con_origin, min(con_seqno) as con_seqno 427s from public.sl_confirm 427s group by con_origin 427s loop 427s select coalesce(max(ev_seqno), 0) into v_max_sync 427s from public.sl_event 427s where ev_origin = v_min_row.con_origin 427s and ev_seqno <= v_min_row.con_seqno 427s and ev_type = 'SYNC'; 427s if v_max_sync > 0 then 427s delete from public.sl_event 427s where ev_origin = v_min_row.con_origin 427s and ev_seqno < v_max_sync; 427s end if; 427s end loop; 427s 427s -- ---- 427s -- If cluster has only one node, then remove all events up to 427s -- the last SYNC - Bug #1538 427s -- http://gborg.postgresql.org/project/slony1/bugs/bugupdate.php?1538 427s -- ---- 427s 427s select * into v_min_row from public.sl_node where 427s no_id <> public.getLocalNodeId('_main') limit 1; 427s if not found then 427s select ev_origin, ev_seqno into v_min_row from public.sl_event 427s where ev_origin = public.getLocalNodeId('_main') 427s order by ev_origin desc, ev_seqno desc limit 1; 427s raise notice 'Slony-I: cleanupEvent(): Single node - deleting events < %', v_min_row.ev_seqno; 427s delete from public.sl_event 427s where 427s ev_origin = v_min_row.ev_origin and 427s ev_seqno < v_min_row.ev_seqno; 427s 427s end if; 427s 427s if exists (select * from "pg_catalog".pg_class c, "pg_catalog".pg_namespace n, "pg_catalog".pg_attribute a where c.relname = 'sl_seqlog' and n.oid = c.relnamespace and a.attrelid = c.oid and a.attname = 'oid') then 427s execute 'alter table public.sl_seqlog set without oids;'; 427s end if; 427s -- ---- 427s -- Also remove stale entries from the nodelock table. 427s -- ---- 427s perform public.cleanupNodelock(); 427s 427s -- ---- 427s -- Find the eldest event left, for each origin 427s -- ---- 427s for v_origin, v_seqno, v_xmin in 427s select ev_origin, ev_seqno, "pg_catalog".txid_snapshot_xmin(ev_snapshot) from public.sl_event 427s where (ev_origin, ev_seqno) in (select ev_origin, min(ev_seqno) from public.sl_event where ev_type = 'SYNC' group by ev_origin) 427s loop 427s delete from public.sl_seqlog where seql_origin = v_origin and seql_ev_seqno < v_seqno; 427s delete from public.sl_log_script where log_origin = v_origin and log_txid < v_xmin; 427s end loop; 427s 427s v_rc := public.logswitch_finish(); 427s if v_rc = 0 then -- no switch in progress 427s perform public.logswitch_start(); 427s end if; 427s 427s return 0; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.cleanupEvent (p_interval interval) is 427s 'cleaning old data out of sl_confirm, sl_event. 
Removes all but the 427s last sl_confirm row per (origin,receiver), and then removes all events 427s that are confirmed by all nodes in the whole cluster up to the last 427s SYNC.'; 427s COMMENT 427s create or replace function public.determineIdxnameUnique(p_tab_fqname text, p_idx_name name) returns name 427s as $$ 427s declare 427s v_tab_fqname_quoted text default ''; 427s v_idxrow record; 427s begin 427s v_tab_fqname_quoted := public.slon_quote_input(p_tab_fqname); 427s -- 427s -- Ensure that the table exists 427s -- 427s if (select PGC.relname 427s from "pg_catalog".pg_class PGC, 427s "pg_catalog".pg_namespace PGN 427s where public.slon_quote_brute(PGN.nspname) || '.' || 427s public.slon_quote_brute(PGC.relname) = v_tab_fqname_quoted 427s and PGN.oid = PGC.relnamespace) is null then 427s raise exception 'Slony-I: determineIdxnameUnique(): table % not found', v_tab_fqname_quoted; 427s end if; 427s 427s -- 427s -- Lookup the tables primary key or the specified unique index 427s -- 427s if p_idx_name isnull then 427s select PGXC.relname 427s into v_idxrow 427s from "pg_catalog".pg_class PGC, 427s "pg_catalog".pg_namespace PGN, 427s "pg_catalog".pg_index PGX, 427s "pg_catalog".pg_class PGXC 427s where public.slon_quote_brute(PGN.nspname) || '.' || 427s public.slon_quote_brute(PGC.relname) = v_tab_fqname_quoted 427s and PGN.oid = PGC.relnamespace 427s and PGX.indrelid = PGC.oid 427s and PGX.indexrelid = PGXC.oid 427s and PGX.indisprimary; 427s if not found then 427s raise exception 'Slony-I: table % has no primary key', 427s v_tab_fqname_quoted; 427s end if; 427s else 427s select PGXC.relname 427s into v_idxrow 427s from "pg_catalog".pg_class PGC, 427s "pg_catalog".pg_namespace PGN, 427s "pg_catalog".pg_index PGX, 427s "pg_catalog".pg_class PGXC 427s where public.slon_quote_brute(PGN.nspname) || '.' || 427s public.slon_quote_brute(PGC.relname) = v_tab_fqname_quoted 427s and PGN.oid = PGC.relnamespace 427s and PGX.indrelid = PGC.oid 427s and PGX.indexrelid = PGXC.oid 427s and PGX.indisunique 427s and public.slon_quote_brute(PGXC.relname) = public.slon_quote_input(p_idx_name); 427s if not found then 427s raise exception 'Slony-I: table % has no unique index %', 427s v_tab_fqname_quoted, p_idx_name; 427s end if; 427s end if; 427s 427s -- 427s -- Return the found index name 427s -- 427s return v_idxrow.relname; 427s end; 427s $$ language plpgsql called on null input; 427s CREATE FUNCTION 427s comment on function public.determineIdxnameUnique(p_tab_fqname text, p_idx_name name) is 427s 'FUNCTION determineIdxnameUnique (tab_fqname, indexname) 427s 427s Given a tablename, tab_fqname, check that the unique index, indexname, 427s exists or return the primary key index name for the table. If there 427s is no unique index, it raises an exception.'; 427s COMMENT 427s create or replace function public.determineAttkindUnique(p_tab_fqname text, p_idx_name name) returns text 427s as $$ 427s declare 427s v_tab_fqname_quoted text default ''; 427s v_idx_name_quoted text; 427s v_idxrow record; 427s v_attrow record; 427s v_i integer; 427s v_attno int2; 427s v_attkind text default ''; 427s v_attfound bool; 427s begin 427s v_tab_fqname_quoted := public.slon_quote_input(p_tab_fqname); 427s v_idx_name_quoted := public.slon_quote_brute(p_idx_name); 427s -- 427s -- Ensure that the table exists 427s -- 427s if (select PGC.relname 427s from "pg_catalog".pg_class PGC, 427s "pg_catalog".pg_namespace PGN 427s where public.slon_quote_brute(PGN.nspname) || '.' 
|| 427s public.slon_quote_brute(PGC.relname) = v_tab_fqname_quoted 427s and PGN.oid = PGC.relnamespace) is null then 427s raise exception 'Slony-I: table % not found', v_tab_fqname_quoted; 427s end if; 427s 427s -- 427s -- Lookup the tables primary key or the specified unique index 427s -- 427s if p_idx_name isnull then 427s raise exception 'Slony-I: index name must be specified'; 427s else 427s select PGXC.relname, PGX.indexrelid, PGX.indkey 427s into v_idxrow 427s from "pg_catalog".pg_class PGC, 427s "pg_catalog".pg_namespace PGN, 427s "pg_catalog".pg_index PGX, 427s "pg_catalog".pg_class PGXC 427s where public.slon_quote_brute(PGN.nspname) || '.' || 427s public.slon_quote_brute(PGC.relname) = v_tab_fqname_quoted 427s and PGN.oid = PGC.relnamespace 427s and PGX.indrelid = PGC.oid 427s and PGX.indexrelid = PGXC.oid 427s and PGX.indisunique 427s and public.slon_quote_brute(PGXC.relname) = v_idx_name_quoted; 427s if not found then 427s raise exception 'Slony-I: table % has no unique index %', 427s v_tab_fqname_quoted, v_idx_name_quoted; 427s end if; 427s end if; 427s 427s -- 427s -- Loop over the tables attributes and check if they are 427s -- index attributes. If so, add a "k" to the return value, 427s -- otherwise add a "v". 427s -- 427s for v_attrow in select PGA.attnum, PGA.attname 427s from "pg_catalog".pg_class PGC, 427s "pg_catalog".pg_namespace PGN, 427s "pg_catalog".pg_attribute PGA 427s where public.slon_quote_brute(PGN.nspname) || '.' || 427s public.slon_quote_brute(PGC.relname) = v_tab_fqname_quoted 427s and PGN.oid = PGC.relnamespace 427s and PGA.attrelid = PGC.oid 427s and not PGA.attisdropped 427s and PGA.attnum > 0 427s order by attnum 427s loop 427s v_attfound = 'f'; 427s 427s v_i := 0; 427s loop 427s select indkey[v_i] into v_attno from "pg_catalog".pg_index 427s where indexrelid = v_idxrow.indexrelid; 427s if v_attno isnull or v_attno = 0 then 427s exit; 427s end if; 427s if v_attrow.attnum = v_attno then 427s v_attfound = 't'; 427s exit; 427s end if; 427s v_i := v_i + 1; 427s end loop; 427s 427s if v_attfound then 427s v_attkind := v_attkind || 'k'; 427s else 427s v_attkind := v_attkind || 'v'; 427s end if; 427s end loop; 427s 427s -- Strip off trailing v characters as they are not needed by the logtrigger 427s v_attkind := pg_catalog.rtrim(v_attkind, 'v'); 427s 427s -- 427s -- Return the resulting attkind 427s -- 427s return v_attkind; 427s end; 427s $$ language plpgsql called on null input; 427s CREATE FUNCTION 427s comment on function public.determineAttkindUnique(p_tab_fqname text, p_idx_name name) is 427s 'determineAttKindUnique (tab_fqname, indexname) 427s 427s Given a tablename, return the Slony-I specific attkind (used for the 427s log trigger) of the table. Use the specified unique index or the 427s primary key (if indexname is NULL).'; 427s COMMENT 427s create or replace function public.RebuildListenEntries() 427s returns int 427s as $$ 427s declare 427s v_row record; 427s v_cnt integer; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s -- First remove the entire configuration 427s delete from public.sl_listen; 427s 427s -- Second populate the sl_listen configuration with a full 427s -- network of all possible paths. 
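-- Worked example (hypothetical sl_path rows with non-empty conninfo): given
-- paths (pa_server=1, pa_client=2) and (pa_server=2, pa_client=3), the seed
-- insert below creates listen entries (1,1,2) and (2,2,3); the loop then adds
-- (1,2,3) so that receiver 3 hears origin 1's events through provider 2, and
-- it repeats until no new rows appear.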
427s insert into public.sl_listen 427s (li_origin, li_provider, li_receiver) 427s select pa_server, pa_server, pa_client from public.sl_path; 427s while true loop 427s insert into public.sl_listen 427s (li_origin, li_provider, li_receiver) 427s select distinct li_origin, pa_server, pa_client 427s from public.sl_listen, public.sl_path 427s where li_receiver = pa_server 427s and li_origin <> pa_client 427s and pa_conninfo<>'' 427s except 427s select li_origin, li_provider, li_receiver 427s from public.sl_listen; 427s 427s if not found then 427s exit; 427s end if; 427s end loop; 427s 427s -- We now replace specific event-origin,receiver combinations 427s -- with a configuration that tries to avoid events arriving at 427s -- a node before the data provider actually has the data ready. 427s 427s -- Loop over every possible pair of receiver and event origin 427s for v_row in select N1.no_id as receiver, N2.no_id as origin, 427s N2.no_failed as failed 427s from public.sl_node as N1, public.sl_node as N2 427s where N1.no_id <> N2.no_id 427s loop 427s -- 1st choice: 427s -- If we use the event origin as a data provider for any 427s -- set that originates on that very node, we are a direct 427s -- subscriber to that origin and listen there only. 427s if exists (select true from public.sl_set, public.sl_subscribe , public.sl_node p 427s where set_origin = v_row.origin 427s and sub_set = set_id 427s and sub_provider = v_row.origin 427s and sub_receiver = v_row.receiver 427s and sub_active 427s and p.no_active 427s and p.no_id=sub_provider 427s ) 427s then 427s delete from public.sl_listen 427s where li_origin = v_row.origin 427s and li_receiver = v_row.receiver; 427s insert into public.sl_listen (li_origin, li_provider, li_receiver) 427s values (v_row.origin, v_row.origin, v_row.receiver); 427s 427s -- 2nd choice: 427s -- If we are subscribed to any set originating on this 427s -- event origin, we want to listen on all data providers 427s -- we use for this origin. We are a cascaded subscriber 427s -- for sets from this node. 427s else 427s if exists (select true from public.sl_set, public.sl_subscribe, 427s public.sl_node provider 427s where set_origin = v_row.origin 427s and sub_set = set_id 427s and sub_provider=provider.no_id 427s and provider.no_failed = false 427s and sub_receiver = v_row.receiver 427s and sub_active) 427s then 427s delete from public.sl_listen 427s where li_origin = v_row.origin 427s and li_receiver = v_row.receiver; 427s insert into public.sl_listen (li_origin, li_provider, li_receiver) 427s select distinct set_origin, sub_provider, v_row.receiver 427s from public.sl_set, public.sl_subscribe 427s where set_origin = v_row.origin 427s and sub_set = set_id 427s and sub_receiver = v_row.receiver 427s and sub_active; 427s end if; 427s end if; 427s 427s if v_row.failed then 427s 427s --for every failed node we delete all sl_listen entries 427s --except via providers (listed in sl_subscribe) 427s --or failover candidates (sl_failover_targets) 427s --we do this to prevent a non-failover candidate 427s --that is more ahead of the failover candidate from 427s --sending events to the failover candidate that 427s --are 'too far ahead' 427s 427s --if the failed node is not an origin for any 427s --node then we don't delete all listen paths 427s --for events from it. Instead we leave 427s --the listen network alone. 
427s 427s select count(*) into v_cnt from public.sl_subscribe sub, 427s public.sl_set s 427s where s.set_origin=v_row.origin and s.set_id=sub.sub_set; 427s if v_cnt > 0 then 427s delete from public.sl_listen where 427s li_origin=v_row.origin and 427s li_receiver=v_row.receiver 427s and li_provider not in 427s (select sub_provider from 427s public.sl_subscribe, 427s public.sl_set where 427s sub_set=set_id 427s and set_origin=v_row.origin); 427s end if; 427s end if; 427s -- insert into public.sl_listen 427s -- (li_origin,li_provider,li_receiver) 427s -- SELECT v_row.origin, pa_server 427s -- ,v_row.receiver 427s -- FROM public.sl_path where 427s -- pa_client=v_row.receiver 427s -- and (v_row.origin,pa_server,v_row.receiver) not in 427s -- (select li_origin,li_provider,li_receiver 427s -- from public.sl_listen); 427s -- end if; 427s end loop ; 427s 427s return null ; 427s end ; 427s $$ language 'plpgsql'; 427s CREATE FUNCTION 427s comment on function public.RebuildListenEntries() is 427s 'RebuildListenEntries() 427s 427s Invoked by various subscription and path modifying functions, this 427s rewrites the sl_listen entries, adding in all the ones required to 427s allow communications between nodes in the Slony-I cluster.'; 427s COMMENT 427s create or replace function public.generate_sync_event(p_interval interval) 427s returns int4 427s as $$ 427s declare 427s v_node_row record; 427s 427s BEGIN 427s select 1 into v_node_row from public.sl_event 427s where ev_type = 'SYNC' and ev_origin = public.getLocalNodeId('_main') 427s and ev_timestamp > now() - p_interval limit 1; 427s if not found then 427s -- If there has been no SYNC in the last interval, then push one 427s perform public.createEvent('_main', 'SYNC', NULL); 427s return 1; 427s else 427s return 0; 427s end if; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.generate_sync_event(p_interval interval) is 427s 'Generate a sync event if there has not been one in the requested interval, and this is a provider node.'; 427s COMMENT 427s drop function if exists public.updateRelname(int4, int4); 427s DROP FUNCTION 427s create or replace function public.updateRelname () 427s returns int4 427s as $$ 427s declare 427s v_no_id int4; 427s v_set_origin int4; 427s begin 427s -- ---- 427s -- Grab the central configuration lock 427s -- ---- 427s lock table public.sl_config_lock; 427s 427s update public.sl_table set 427s tab_relname = PGC.relname, tab_nspname = PGN.nspname 427s from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN 427s where public.sl_table.tab_reloid = PGC.oid 427s and PGC.relnamespace = PGN.oid and 427s (tab_relname <> PGC.relname or tab_nspname <> PGN.nspname); 427s update public.sl_sequence set 427s seq_relname = PGC.relname, seq_nspname = PGN.nspname 427s from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN 427s where public.sl_sequence.seq_reloid = PGC.oid 427s and PGC.relnamespace = PGN.oid and 427s (seq_relname <> PGC.relname or seq_nspname <> PGN.nspname); 427s return 0; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.updateRelname() is 427s 'updateRelname()'; 427s COMMENT 427s drop function if exists public.updateReloid (int4, int4); 427s DROP FUNCTION 427s create or replace function public.updateReloid (p_set_id int4, p_only_on_node int4) 427s returns bigint 427s as $$ 427s declare 427s v_no_id int4; 427s v_set_origin int4; 427s prec record; 427s begin 427s -- ---- 427s -- Check that we either are the set origin or a current 427s -- 
subscriber of the set. 427s -- ---- 427s v_no_id := public.getLocalNodeId('_main'); 427s select set_origin into v_set_origin 427s from public.sl_set 427s where set_id = p_set_id 427s for update; 427s if not found then 427s raise exception 'Slony-I: set % not found', p_set_id; 427s end if; 427s if v_set_origin <> v_no_id 427s and not exists (select 1 from public.sl_subscribe 427s where sub_set = p_set_id 427s and sub_receiver = v_no_id) 427s then 427s return 0; 427s end if; 427s 427s -- ---- 427s -- If execution on only one node is requested, check that 427s -- we are that node. 427s -- ---- 427s if p_only_on_node > 0 and p_only_on_node <> v_no_id then 427s return 0; 427s end if; 427s 427s -- Update OIDs for tables to values pulled from non-table objects in pg_class 427s -- This ensures that we won't have collisions when repairing the oids 427s for prec in select tab_id from public.sl_table loop 427s update public.sl_table set tab_reloid = (select oid from pg_class pc where relkind <> 'r' and not exists (select 1 from public.sl_table t2 where t2.tab_reloid = pc.oid) limit 1) 427s where tab_id = prec.tab_id; 427s end loop; 427s 427s for prec in select tab_id, tab_relname, tab_nspname from public.sl_table loop 427s update public.sl_table set 427s tab_reloid = (select PGC.oid 427s from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN 427s where public.slon_quote_brute(PGC.relname) = public.slon_quote_brute(prec.tab_relname) 427s and PGC.relnamespace = PGN.oid 427s and public.slon_quote_brute(PGN.nspname) = public.slon_quote_brute(prec.tab_nspname)) 427s where tab_id = prec.tab_id; 427s end loop; 427s 427s for prec in select seq_id from public.sl_sequence loop 427s update public.sl_sequence set seq_reloid = (select oid from pg_class pc where relkind <> 'S' and not exists (select 1 from public.sl_sequence t2 where t2.seq_reloid = pc.oid) limit 1) 427s where seq_id = prec.seq_id; 427s end loop; 427s 427s for prec in select seq_id, seq_relname, seq_nspname from public.sl_sequence loop 427s update public.sl_sequence set 427s seq_reloid = (select PGC.oid 427s from pg_catalog.pg_class PGC, pg_catalog.pg_namespace PGN 427s where public.slon_quote_brute(PGC.relname) = public.slon_quote_brute(prec.seq_relname) 427s and PGC.relnamespace = PGN.oid 427s and public.slon_quote_brute(PGN.nspname) = public.slon_quote_brute(prec.seq_nspname)) 427s where seq_id = prec.seq_id; 427s end loop; 427s 427s return 1; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.updateReloid(p_set_id int4, p_only_on_node int4) is 427s 'updateReloid(set_id, only_on_node) 427s 427s Updates the respective reloids in sl_table and sl_seqeunce based on 427s their respective FQN'; 427s COMMENT 427s create or replace function public.logswitch_start() 427s returns int4 as $$ 427s DECLARE 427s v_current_status int4; 427s BEGIN 427s -- ---- 427s -- Get the current log status. 427s -- ---- 427s select last_value into v_current_status from public.sl_log_status; 427s 427s -- ---- 427s -- status = 0: sl_log_1 active, sl_log_2 clean 427s -- Initiate a switch to sl_log_2. 427s -- ---- 427s if v_current_status = 0 then 427s perform "pg_catalog".setval('public.sl_log_status', 3); 427s perform public.registry_set_timestamp( 427s 'logswitch.laststart', now()); 427s raise notice 'Slony-I: Logswitch to sl_log_2 initiated'; 427s return 2; 427s end if; 427s 427s -- ---- 427s -- status = 1: sl_log_2 active, sl_log_1 clean 427s -- Initiate a switch to sl_log_1. 
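-- Sketch of a manual log switch cycle (normally driven by the cleanup logic,
-- hypothetical direct calls):
--   select public.logswitch_start();   -- 2 = switch to sl_log_2 started (status 0->3),
--                                      -- 1 = switch to sl_log_1 started (status 1->2)
--   select public.logswitch_finish();  -- later: -1 still in progress, 1/2 = truncated
-- See the return-value summary in the logswitch_finish() comment below.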
427s -- ---- 427s if v_current_status = 1 then 427s perform "pg_catalog".setval('public.sl_log_status', 2); 427s perform public.registry_set_timestamp( 427s 'logswitch.laststart', now()); 427s raise notice 'Slony-I: Logswitch to sl_log_1 initiated'; 427s return 1; 427s end if; 427s 427s raise exception 'Previous logswitch still in progress'; 427s END; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.logswitch_start() is 427s 'logswitch_start() 427s 427s Initiate a log table switch if none is in progress'; 427s COMMENT 427s create or replace function public.logswitch_finish() 427s returns int4 as $$ 427s DECLARE 427s v_current_status int4; 427s v_dummy record; 427s v_origin int8; 427s v_seqno int8; 427s v_xmin bigint; 427s v_purgeable boolean; 427s BEGIN 427s -- ---- 427s -- Get the current log status. 427s -- ---- 427s select last_value into v_current_status from public.sl_log_status; 427s 427s -- ---- 427s -- status value 0 or 1 means that there is no log switch in progress 427s -- ---- 427s if v_current_status = 0 or v_current_status = 1 then 427s return 0; 427s end if; 427s 427s -- ---- 427s -- status = 2: sl_log_1 active, cleanup sl_log_2 427s -- ---- 427s if v_current_status = 2 then 427s v_purgeable := 'true'; 427s 427s -- ---- 427s -- Attempt to lock sl_log_2 in order to make sure there are no other transactions 427s -- currently writing to it. Exit if it is still in use. This prevents TRUNCATE from 427s -- blocking writers to sl_log_2 while it is waiting for a lock. It also prevents it 427s -- immediately truncating log data generated inside the transaction which was active 427s -- when logswitch_finish() was called (and was blocking TRUNCATE) as soon as that 427s -- transaction is committed. 427s -- ---- 427s begin 427s lock table public.sl_log_2 in access exclusive mode nowait; 427s exception when lock_not_available then 427s raise notice 'Slony-I: could not lock sl_log_2 - sl_log_2 not truncated'; 427s return -1; 427s end; 427s 427s -- ---- 427s -- The cleanup thread calls us after it did the delete and 427s -- vacuum of both log tables. If sl_log_2 is empty now, we 427s -- can truncate it and the log switch is done. 427s -- ---- 427s for v_origin, v_seqno, v_xmin in 427s select ev_origin, ev_seqno, "pg_catalog".txid_snapshot_xmin(ev_snapshot) from public.sl_event 427s where (ev_origin, ev_seqno) in (select ev_origin, min(ev_seqno) from public.sl_event where ev_type = 'SYNC' group by ev_origin) 427s loop 427s if exists (select 1 from public.sl_log_2 where log_origin = v_origin and log_txid >= v_xmin limit 1) then 427s v_purgeable := 'false'; 427s end if; 427s end loop; 427s if not v_purgeable then 427s -- ---- 427s -- Found a row ... log switch is still in progress. 427s -- ---- 427s raise notice 'Slony-I: log switch to sl_log_1 still in progress - sl_log_2 not truncated'; 427s return -1; 427s end if; 427s 427s raise notice 'Slony-I: log switch to sl_log_1 complete - truncate sl_log_2'; 427s truncate public.sl_log_2; 427s if exists (select * from "pg_catalog".pg_class c, "pg_catalog".pg_namespace n, "pg_catalog".pg_attribute a where c.relname = 'sl_log_2' and n.oid = c.relnamespace and a.attrelid = c.oid and a.attname = 'oid') then 427s execute 'alter table public.sl_log_2 set without oids;'; 427s end if; 427s perform "pg_catalog".setval('public.sl_log_status', 0); 427s -- Run addPartialLogIndices() to try to add indices to unused sl_log_? 
table 427s perform public.addPartialLogIndices(); 427s 427s return 1; 427s end if; 427s 427s -- ---- 427s -- status = 3: sl_log_2 active, cleanup sl_log_1 427s -- ---- 427s if v_current_status = 3 then 427s v_purgeable := 'true'; 427s 427s -- ---- 427s -- Attempt to lock sl_log_1 in order to make sure there are no other transactions 427s -- currently writing to it. Exit if it is still in use. This prevents TRUNCATE from 427s -- blocking writes to sl_log_1 while it is waiting for a lock. It also prevents it 427s -- immediately truncating log data generated inside the transaction which was active 427s -- when logswitch_finish() was called (and was blocking TRUNCATE) as soon as that 427s -- transaction is committed. 427s -- ---- 427s begin 427s lock table public.sl_log_1 in access exclusive mode nowait; 427s exception when lock_not_available then 427s raise notice 'Slony-I: could not lock sl_log_1 - sl_log_1 not truncated'; 427s return -1; 427s end; 427s 427s -- ---- 427s -- The cleanup thread calls us after it did the delete and 427s -- vacuum of both log tables. If sl_log_2 is empty now, we 427s -- can truncate it and the log switch is done. 427s -- ---- 427s for v_origin, v_seqno, v_xmin in 427s select ev_origin, ev_seqno, "pg_catalog".txid_snapshot_xmin(ev_snapshot) from public.sl_event 427s where (ev_origin, ev_seqno) in (select ev_origin, min(ev_seqno) from public.sl_event where ev_type = 'SYNC' group by ev_origin) 427s loop 427s if (exists (select 1 from public.sl_log_1 where log_origin = v_origin and log_txid >= v_xmin limit 1)) then 427s v_purgeable := 'false'; 427s end if; 427s end loop; 427s if not v_purgeable then 427s -- ---- 427s -- Found a row ... log switch is still in progress. 427s -- ---- 427s raise notice 'Slony-I: log switch to sl_log_2 still in progress - sl_log_1 not truncated'; 427s return -1; 427s end if; 427s 427s raise notice 'Slony-I: log switch to sl_log_2 complete - truncate sl_log_1'; 427s truncate public.sl_log_1; 427s if exists (select * from "pg_catalog".pg_class c, "pg_catalog".pg_namespace n, "pg_catalog".pg_attribute a where c.relname = 'sl_log_1' and n.oid = c.relnamespace and a.attrelid = c.oid and a.attname = 'oid') then 427s execute 'alter table public.sl_log_1 set without oids;'; 427s end if; 427s perform "pg_catalog".setval('public.sl_log_status', 1); 427s -- Run addPartialLogIndices() to try to add indices to unused sl_log_? table 427s perform public.addPartialLogIndices(); 427s return 2; 427s end if; 427s END; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.logswitch_finish() is 427s 'logswitch_finish() 427s 427s Attempt to finalize a log table switch in progress 427s return values: 427s -1 if switch in progress, but not complete 427s 0 if no switch in progress 427s 1 if performed truncate on sl_log_2 427s 2 if performed truncate on sl_log_1 427s '; 427s COMMENT 427s create or replace function public.addPartialLogIndices () returns integer as $$ 427s DECLARE 427s v_current_status int4; 427s v_log int4; 427s v_dummy record; 427s v_dummy2 record; 427s idef text; 427s v_count int4; 427s v_iname text; 427s v_ilen int4; 427s v_maxlen int4; 427s BEGIN 427s v_count := 0; 427s select last_value into v_current_status from public.sl_log_status; 427s 427s -- If status is 2 or 3 --> in process of cleanup --> unsafe to create indices 427s if v_current_status in (2, 3) then 427s return 0; 427s end if; 427s 427s if v_current_status = 0 then -- Which log should get indices? 
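-- For a hypothetical origin node 1 while sl_log_2 is the idle table, the idef
-- string built below expands to:
--   create index "PartInd_main_sl_log_2-node-1" on public.sl_log_2
--     USING btree(log_txid) where (log_origin = 1);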
427s v_log := 2; 427s else 427s v_log := 1; 427s end if; 427s -- PartInd_test_db_sl_log_2-node-1 427s -- Add missing indices... 427s for v_dummy in select distinct set_origin from public.sl_set loop 427s v_iname := 'PartInd_main_sl_log_' || v_log::text || '-node-' 427s || v_dummy.set_origin::text; 427s -- raise notice 'Consider adding partial index % on sl_log_%', v_iname, v_log; 427s -- raise notice 'schema: [_main] tablename:[sl_log_%]', v_log; 427s select * into v_dummy2 from pg_catalog.pg_indexes where tablename = 'sl_log_' || v_log::text and indexname = v_iname; 427s if not found then 427s -- raise notice 'index was not found - add it!'; 427s v_iname := 'PartInd_main_sl_log_' || v_log::text || '-node-' || v_dummy.set_origin::text; 427s v_ilen := pg_catalog.length(v_iname); 427s v_maxlen := pg_catalog.current_setting('max_identifier_length'::text)::int4; 427s if v_ilen > v_maxlen then 427s raise exception 'Length of proposed index name [%] > max_identifier_length [%] - cluster name probably too long', v_ilen, v_maxlen; 427s end if; 427s 427s idef := 'create index "' || v_iname || 427s '" on public.sl_log_' || v_log::text || ' USING btree(log_txid) where (log_origin = ' || v_dummy.set_origin::text || ');'; 427s execute idef; 427s v_count := v_count + 1; 427s else 427s -- raise notice 'Index % already present - skipping', v_iname; 427s end if; 427s end loop; 427s 427s -- Remove unneeded indices... 427s for v_dummy in select indexname from pg_catalog.pg_indexes i where i.tablename = 'sl_log_' || v_log::text and 427s i.indexname like ('PartInd_main_sl_log_' || v_log::text || '-node-%') and 427s not exists (select 1 from public.sl_set where 427s i.indexname = 'PartInd_main_sl_log_' || v_log::text || '-node-' || set_origin::text) 427s loop 427s -- raise notice 'Dropping obsolete index %d', v_dummy.indexname; 427s idef := 'drop index public."' || v_dummy.indexname || '";'; 427s execute idef; 427s v_count := v_count - 1; 427s end loop; 427s return v_count; 427s END 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.addPartialLogIndices () is 427s 'Add partial indexes, if possible, to the unused sl_log_? table for 427s all origin nodes, and drop any that are no longer needed. 427s 427s This function presently gets run any time set origins are manipulated 427s (FAILOVER, STORE SET, MOVE SET, DROP SET), as well as each time the 427s system switches between sl_log_1 and sl_log_2.'; 427s COMMENT 427s create or replace function public.check_table_field_exists (p_namespace text, p_table text, p_field text) 427s returns bool as $$ 427s BEGIN 427s return exists ( 427s select 1 from "information_schema".columns 427s where table_schema = p_namespace 427s and table_name = p_table 427s and column_name = p_field 427s ); 427s END;$$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.check_table_field_exists (p_namespace text, p_table text, p_field text) 427s is 'Check if a table has a specific attribute'; 427s COMMENT 427s create or replace function public.add_missing_table_field (p_namespace text, p_table text, p_field text, p_type text) 427s returns bool as $$ 427s DECLARE 427s v_row record; 427s v_query text; 427s BEGIN 427s if not public.check_table_field_exists(p_namespace, p_table, p_field) then 427s raise notice 'Upgrade table %.% - add field %', p_namespace, p_table, p_field; 427s v_query := 'alter table ' || p_namespace || '.' 
|| p_table || ' add column '; 427s v_query := v_query || p_field || ' ' || p_type || ';'; 427s execute v_query; 427s return 't'; 427s else 427s return 'f'; 427s end if; 427s END;$$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.add_missing_table_field (p_namespace text, p_table text, p_field text, p_type text) 427s is 'Add a column of a given type to a table if it is missing'; 427s COMMENT 427s create or replace function public.upgradeSchema(p_old text) 427s returns text as $$ 427s declare 427s v_tab_row record; 427s v_query text; 427s v_keepstatus text; 427s begin 427s -- If old version is pre-2.0, then we require a special upgrade process 427s if p_old like '1.%' then 427s raise exception 'Upgrading to Slony-I 2.x requires running slony_upgrade_20'; 427s end if; 427s 427s perform public.upgradeSchemaAddTruncateTriggers(); 427s 427s -- Change all Slony-I-defined columns that are "timestamp without time zone" to "timestamp *WITH* time zone" 427s if exists (select 1 from information_schema.columns c 427s where table_schema = '_main' and data_type = 'timestamp without time zone' 427s and exists (select 1 from information_schema.tables t where t.table_schema = c.table_schema and t.table_name = c.table_name and t.table_type = 'BASE TABLE') 427s and (c.table_name, c.column_name) in (('sl_confirm', 'con_timestamp'), ('sl_event', 'ev_timestamp'), ('sl_registry', 'reg_timestamp'),('sl_archive_counter', 'ac_timestamp'))) 427s then 427s 427s -- Preserve sl_status 427s select pg_get_viewdef('public.sl_status') into v_keepstatus; 427s execute 'drop view sl_status'; 427s for v_tab_row in select table_schema, table_name, column_name from information_schema.columns c 427s where table_schema = '_main' and data_type = 'timestamp without time zone' 427s and exists (select 1 from information_schema.tables t where t.table_schema = c.table_schema and t.table_name = c.table_name and t.table_type = 'BASE TABLE') 427s and (table_name, column_name) in (('sl_confirm', 'con_timestamp'), ('sl_event', 'ev_timestamp'), ('sl_registry', 'reg_timestamp'),('sl_archive_counter', 'ac_timestamp')) 427s loop 427s raise notice 'Changing Slony-I column [%.%] to timestamp WITH time zone', v_tab_row.table_name, v_tab_row.column_name; 427s v_query := 'alter table ' || public.slon_quote_brute(v_tab_row.table_schema) || 427s '.' 
|| v_tab_row.table_name || ' alter column ' || v_tab_row.column_name || 427s ' type timestamp with time zone;'; 427s execute v_query; 427s end loop; 427s -- restore sl_status 427s execute 'create view sl_status as ' || v_keepstatus; 427s end if; 427s 427s if not exists (select 1 from information_schema.tables where table_schema = '_main' and table_name = 'sl_components') then 427s v_query := ' 427s create table public.sl_components ( 427s co_actor text not null primary key, 427s co_pid integer not null, 427s co_node integer not null, 427s co_connection_pid integer not null, 427s co_activity text, 427s co_starttime timestamptz not null, 427s co_event bigint, 427s co_eventtype text 427s ) without oids; 427s '; 427s execute v_query; 427s end if; 427s 427s 427s 427s 427s 427s if not exists (select 1 from information_schema.tables t where table_schema = '_main' and table_name = 'sl_event_lock') then 427s v_query := 'create table public.sl_event_lock (dummy integer);'; 427s execute v_query; 427s end if; 427s 427s if not exists (select 1 from information_schema.tables t 427s where table_schema = '_main' 427s and table_name = 'sl_apply_stats') then 427s v_query := ' 427s create table public.sl_apply_stats ( 427s as_origin int4, 427s as_num_insert int8, 427s as_num_update int8, 427s as_num_delete int8, 427s as_num_truncate int8, 427s as_num_script int8, 427s as_num_total int8, 427s as_duration interval, 427s as_apply_first timestamptz, 427s as_apply_last timestamptz, 427s as_cache_prepare int8, 427s as_cache_hit int8, 427s as_cache_evict int8, 427s as_cache_prepare_max int8 427s ) WITHOUT OIDS;'; 427s execute v_query; 427s end if; 427s 427s -- 427s -- On the upgrade to 2.2, we change the layout of sl_log_N by 427s -- adding columns log_tablenspname, log_tablerelname, and 427s -- log_cmdupdncols as well as changing log_cmddata into 427s -- log_cmdargs, which is a text array. 
427s -- 427s if not public.check_table_field_exists('_main', 'sl_log_1', 'log_cmdargs') then 427s -- 427s -- Check that the cluster is completely caught up 427s -- 427s if public.check_unconfirmed_log() then 427s raise EXCEPTION 'cannot upgrade to new sl_log_N format due to existing unreplicated data'; 427s end if; 427s 427s -- 427s -- Drop tables sl_log_1 and sl_log_2 427s -- 427s drop table public.sl_log_1; 427s drop table public.sl_log_2; 427s 427s -- 427s -- Create the new sl_log_1 427s -- 427s create table public.sl_log_1 ( 427s log_origin int4, 427s log_txid bigint, 427s log_tableid int4, 427s log_actionseq int8, 427s log_tablenspname text, 427s log_tablerelname text, 427s log_cmdtype "char", 427s log_cmdupdncols int4, 427s log_cmdargs text[] 427s ) without oids; 427s create index sl_log_1_idx1 on public.sl_log_1 427s (log_origin, log_txid, log_actionseq); 427s 427s comment on table public.sl_log_1 is 'Stores each change to be propagated to subscriber nodes'; 427s comment on column public.sl_log_1.log_origin is 'Origin node from which the change came'; 427s comment on column public.sl_log_1.log_txid is 'Transaction ID on the origin node'; 427s comment on column public.sl_log_1.log_tableid is 'The table ID (from sl_table.tab_id) that this log entry is to affect'; 427s comment on column public.sl_log_1.log_actionseq is 'The sequence number in which actions will be applied on replicas'; 427s comment on column public.sl_log_1.log_tablenspname is 'The schema name of the table affected'; 427s comment on column public.sl_log_1.log_tablerelname is 'The table name of the table affected'; 427s comment on column public.sl_log_1.log_cmdtype is 'Replication action to take. U = Update, I = Insert, D = DELETE, T = TRUNCATE'; 427s comment on column public.sl_log_1.log_cmdupdncols is 'For cmdtype=U the number of updated columns in cmdargs'; 427s comment on column public.sl_log_1.log_cmdargs is 'The data needed to perform the log action on the replica'; 427s 427s -- 427s -- Create the new sl_log_2 427s -- 427s create table public.sl_log_2 ( 427s log_origin int4, 427s log_txid bigint, 427s log_tableid int4, 427s log_actionseq int8, 427s log_tablenspname text, 427s log_tablerelname text, 427s log_cmdtype "char", 427s log_cmdupdncols int4, 427s log_cmdargs text[] 427s ) without oids; 427s create index sl_log_2_idx1 on public.sl_log_2 427s (log_origin, log_txid, log_actionseq); 427s 427s comment on table public.sl_log_2 is 'Stores each change to be propagated to subscriber nodes'; 427s comment on column public.sl_log_2.log_origin is 'Origin node from which the change came'; 427s comment on column public.sl_log_2.log_txid is 'Transaction ID on the origin node'; 427s comment on column public.sl_log_2.log_tableid is 'The table ID (from sl_table.tab_id) that this log entry is to affect'; 427s comment on column public.sl_log_2.log_actionseq is 'The sequence number in which actions will be applied on replicas'; 427s comment on column public.sl_log_2.log_tablenspname is 'The schema name of the table affected'; 427s comment on column public.sl_log_2.log_tablerelname is 'The table name of the table affected'; 427s comment on column public.sl_log_2.log_cmdtype is 'Replication action to take. 
U = Update, I = Insert, D = DELETE, T = TRUNCATE'; 427s comment on column public.sl_log_2.log_cmdupdncols is 'For cmdtype=U the number of updated columns in cmdargs'; 427s comment on column public.sl_log_2.log_cmdargs is 'The data needed to perform the log action on the replica'; 427s 427s create table public.sl_log_script ( 427s log_origin int4, 427s log_txid bigint, 427s log_actionseq int8, 427s log_cmdtype "char", 427s log_cmdargs text[] 427s ) WITHOUT OIDS; 427s create index sl_log_script_idx1 on public.sl_log_script 427s (log_origin, log_txid, log_actionseq); 427s 427s comment on table public.sl_log_script is 'Captures SQL script queries to be propagated to subscriber nodes'; 427s comment on column public.sl_log_script.log_origin is 'Origin name from which the change came'; 427s comment on column public.sl_log_script.log_txid is 'Transaction ID on the origin node'; 427s comment on column public.sl_log_script.log_actionseq is 'The sequence number in which actions will be applied on replicas'; 427s comment on column public.sl_log_2.log_cmdtype is 'Replication action to take. S = Script statement, s = Script complete'; 427s comment on column public.sl_log_script.log_cmdargs is 'The DDL statement, optionally followed by selected nodes to execute it on.'; 427s 427s -- 427s -- Put the log apply triggers back onto sl_log_1/2 427s -- 427s create trigger apply_trigger 427s before INSERT on public.sl_log_1 427s for each row execute procedure public.logApply('_main'); 427s alter table public.sl_log_1 427s enable replica trigger apply_trigger; 427s create trigger apply_trigger 427s before INSERT on public.sl_log_2 427s for each row execute procedure public.logApply('_main'); 427s alter table public.sl_log_2 427s enable replica trigger apply_trigger; 427s end if; 427s if not exists (select 1 from information_schema.routines where routine_schema = '_main' and routine_name = 'string_agg') then 427s CREATE AGGREGATE public.string_agg(text) ( 427s SFUNC=public.agg_text_sum, 427s STYPE=text, 427s INITCOND='' 427s ); 427s end if; 427s if not exists (select 1 from information_schema.views where table_schema='_main' and table_name='sl_failover_targets') then 427s create view public.sl_failover_targets as 427s select set_id, 427s set_origin as set_origin, 427s sub1.sub_receiver as backup_id 427s 427s FROM 427s public.sl_subscribe sub1 427s ,public.sl_set set1 427s where 427s sub1.sub_set=set_id 427s and sub1.sub_forward=true 427s --exclude candidates where the set_origin 427s --has a path a node but the failover 427s --candidate has no path to that node 427s and sub1.sub_receiver not in 427s (select p1.pa_client from 427s public.sl_path p1 427s left outer join public.sl_path p2 on 427s (p2.pa_client=p1.pa_client 427s and p2.pa_server=sub1.sub_receiver) 427s where p2.pa_client is null 427s and p1.pa_server=set_origin 427s and p1.pa_client<>sub1.sub_receiver 427s ) 427s and sub1.sub_provider=set_origin 427s --exclude any subscribers that are not 427s --direct subscribers of all sets on the 427s --origin 427s and sub1.sub_receiver not in 427s (select direct_recv.sub_receiver 427s from 427s 427s (--all direct receivers of the first set 427s select subs2.sub_receiver 427s from public.sl_subscribe subs2 427s where subs2.sub_provider=set1.set_origin 427s and subs2.sub_set=set1.set_id) as 427s direct_recv 427s inner join 427s (--all other sets from the origin 427s select set_id from public.sl_set set2 427s where set2.set_origin=set1.set_origin 427s and set2.set_id<>sub1.sub_set) 427s as othersets on(true) 427s 
left outer join public.sl_subscribe subs3 427s on(subs3.sub_set=othersets.set_id 427s and subs3.sub_forward=true 427s and subs3.sub_provider=set1.set_origin 427s and direct_recv.sub_receiver=subs3.sub_receiver) 427s where subs3.sub_receiver is null 427s ); 427s end if; 427s 427s if not public.check_table_field_exists('_main', 'sl_node', 'no_failed') then 427s alter table public.sl_node add column no_failed bool; 427s update public.sl_node set no_failed=false; 427s end if; 427s return p_old; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s create or replace function public.check_unconfirmed_log () 427s returns bool as $$ 427s declare 427s v_rc bool = false; 427s v_error bool = false; 427s v_origin integer; 427s v_allconf bigint; 427s v_allsnap txid_snapshot; 427s v_count bigint; 427s begin 427s -- 427s -- Loop over all nodes that are the origin of at least one set 427s -- 427s for v_origin in select distinct set_origin as no_id 427s from public.sl_set loop 427s -- 427s -- Per origin determine which is the highest event seqno 427s -- that is confirmed by all subscribers to any of the 427s -- origin's sets. 427s -- 427s select into v_allconf min(max_seqno) from ( 427s select con_received, max(con_seqno) as max_seqno 427s from public.sl_confirm 427s where con_origin = v_origin 427s and con_received in ( 427s select distinct sub_receiver 427s from public.sl_set as SET, 427s public.sl_subscribe as SUB 427s where SET.set_id = SUB.sub_set 427s and SET.set_origin = v_origin 427s ) 427s group by con_received 427s ) as maxconfirmed; 427s if not found then 427s raise NOTICE 'check_unconfirmed_log(): cannot determine highest ev_seqno for node % confirmed by all subscribers', v_origin; 427s v_error = true; 427s continue; 427s end if; 427s 427s -- 427s -- Get the txid snapshot that corresponds with that event 427s -- 427s select into v_allsnap ev_snapshot 427s from public.sl_event 427s where ev_origin = v_origin 427s and ev_seqno = v_allconf; 427s if not found then 427s raise NOTICE 'check_unconfirmed_log(): cannot find event %,% in sl_event', v_origin, v_allconf; 427s v_error = true; 427s continue; 427s end if; 427s 427s -- 427s -- Count the number of log rows that appeared after that event.
427s -- 427s select into v_count count(*) from ( 427s select 1 from public.sl_log_1 427s where log_origin = v_origin 427s and log_txid >= "pg_catalog".txid_snapshot_xmax(v_allsnap) 427s union all 427s select 1 from public.sl_log_1 427s where log_origin = v_origin 427s and log_txid in ( 427s select * from "pg_catalog".txid_snapshot_xip(v_allsnap) 427s ) 427s union all 427s select 1 from public.sl_log_2 427s where log_origin = v_origin 427s and log_txid >= "pg_catalog".txid_snapshot_xmax(v_allsnap) 427s union all 427s select 1 from public.sl_log_2 427s where log_origin = v_origin 427s and log_txid in ( 427s select * from "pg_catalog".txid_snapshot_xip(v_allsnap) 427s ) 427s ) as cnt; 427s 427s if v_count > 0 then 427s raise NOTICE 'check_unconfirmed_log(): origin % has % log rows that have not propagated to all subscribers yet', v_origin, v_count; 427s v_rc = true; 427s end if; 427s end loop; 427s 427s if v_error then 427s raise EXCEPTION 'check_unconfirmed_log(): aborting due to previous inconsistency'; 427s end if; 427s 427s return v_rc; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s set search_path to public 427s ; 427s SET 427s comment on function public.upgradeSchema(p_old text) is 427s 'Called during "update functions" by slonik to perform schema changes'; 427s COMMENT 427s create or replace view public.sl_status as select 427s E.ev_origin as st_origin, 427s C.con_received as st_received, 427s E.ev_seqno as st_last_event, 427s E.ev_timestamp as st_last_event_ts, 427s C.con_seqno as st_last_received, 427s C.con_timestamp as st_last_received_ts, 427s CE.ev_timestamp as st_last_received_event_ts, 427s E.ev_seqno - C.con_seqno as st_lag_num_events, 427s current_timestamp - CE.ev_timestamp as st_lag_time 427s from public.sl_event E, public.sl_confirm C, 427s public.sl_event CE 427s where E.ev_origin = C.con_origin 427s and CE.ev_origin = E.ev_origin 427s and CE.ev_seqno = C.con_seqno 427s and (E.ev_origin, E.ev_seqno) in 427s (select ev_origin, max(ev_seqno) 427s from public.sl_event 427s where ev_origin = public.getLocalNodeId('_main') 427s group by 1 427s ) 427s and (C.con_origin, C.con_received, C.con_seqno) in 427s (select con_origin, con_received, max(con_seqno) 427s from public.sl_confirm 427s where con_origin = public.getLocalNodeId('_main') 427s group by 1, 2 427s ); 427s CREATE VIEW 427s comment on view public.sl_status is 'View showing how far behind remote nodes are.'; 427s COMMENT 427s create or replace function public.copyFields(p_tab_id integer) 427s returns text 427s as $$ 427s declare 427s result text; 427s prefix text; 427s prec record; 427s begin 427s result := ''; 427s prefix := '('; -- Initially, prefix is the opening paren 427s 427s for prec in select public.slon_quote_input(a.attname) as column from public.sl_table t, pg_catalog.pg_attribute a where t.tab_id = p_tab_id and t.tab_reloid = a.attrelid and a.attnum > 0 and a.attisdropped = false order by attnum 427s loop 427s result := result || prefix || prec.column; 427s prefix := ','; -- Subsequently, prepend columns with commas 427s end loop; 427s result := result || ')'; 427s return result; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.copyFields(p_tab_id integer) is 427s 'Return a string consisting of what should be appended to a COPY statement 427s to specify fields for the passed-in tab_id. 
427s 427s In PG versions > 7.3, this looks like (field1,field2,...fieldn)'; 427s COMMENT 427s create or replace function public.prepareTableForCopy(p_tab_id int4) 427s returns int4 427s as $$ 427s declare 427s v_tab_oid oid; 427s v_tab_fqname text; 427s begin 427s -- ---- 427s -- Get the OID and fully qualified name for the table 427s -- --- 427s select PGC.oid, 427s public.slon_quote_brute(PGN.nspname) || '.' || 427s public.slon_quote_brute(PGC.relname) as tab_fqname 427s into v_tab_oid, v_tab_fqname 427s from public.sl_table T, 427s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 427s where T.tab_id = p_tab_id 427s and T.tab_reloid = PGC.oid 427s and PGC.relnamespace = PGN.oid; 427s if not found then 427s raise exception 'Table with ID % not found in sl_table', p_tab_id; 427s end if; 427s 427s -- ---- 427s -- Try using truncate to empty the table and fall back to 427s -- delete on error. 427s -- ---- 427s perform public.TruncateOnlyTable(v_tab_fqname); 427s raise notice 'truncate of % succeeded', v_tab_fqname; 427s 427s -- suppress index activity 427s perform public.disable_indexes_on_table(v_tab_oid); 427s 427s return 1; 427s exception when others then 427s raise notice 'truncate of % failed - doing delete', v_tab_fqname; 427s perform public.disable_indexes_on_table(v_tab_oid); 427s execute 'delete from only ' || public.slon_quote_input(v_tab_fqname); 427s return 0; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.prepareTableForCopy(p_tab_id int4) is 427s 'Delete all data and suppress index maintenance'; 427s COMMENT 427s create or replace function public.finishTableAfterCopy(p_tab_id int4) 427s returns int4 427s as $$ 427s declare 427s v_tab_oid oid; 427s v_tab_fqname text; 427s begin 427s -- ---- 427s -- Get the table's OID and fully qualified name 427s -- --- 427s select PGC.oid, 427s public.slon_quote_brute(PGN.nspname) || '.' || 427s public.slon_quote_brute(PGC.relname) as tab_fqname 427s into v_tab_oid, v_tab_fqname 427s from public.sl_table T, 427s "pg_catalog".pg_class PGC, "pg_catalog".pg_namespace PGN 427s where T.tab_id = p_tab_id 427s and T.tab_reloid = PGC.oid 427s and PGC.relnamespace = PGN.oid; 427s if not found then 427s raise exception 'Table with ID % not found in sl_table', p_tab_id; 427s end if; 427s 427s -- ---- 427s -- Reenable indexes and reindex the table.
427s -- ---- 427s perform public.enable_indexes_on_table(v_tab_oid); 427s execute 'reindex table ' || public.slon_quote_input(v_tab_fqname); 427s 427s return 1; 427s end; 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.finishTableAfterCopy(p_tab_id int4) is 427s 'Reenable index maintenance and reindex the table'; 427s COMMENT 427s create or replace function public.setup_vactables_type () returns integer as $$ 427s begin 427s if not exists (select 1 from pg_catalog.pg_type t, pg_catalog.pg_namespace n 427s where n.nspname = '_main' and t.typnamespace = n.oid and 427s t.typname = 'vactables') then 427s execute 'create type public.vactables as (nspname name, relname name);'; 427s end if; 427s return 1; 427s end 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.setup_vactables_type () is 427s 'Function to be run as part of loading slony1_funcs.sql that creates the vactables type if it is missing'; 427s COMMENT 427s select public.setup_vactables_type(); 427s setup_vactables_type 427s ---------------------- 427s 1 427s (1 row) 427s 427s drop function public.setup_vactables_type (); 427s DROP FUNCTION 427s create or replace function public.TablesToVacuum () returns setof public.vactables as $$ 427s declare 427s prec public.vactables%rowtype; 427s begin 427s prec.nspname := '_main'; 427s prec.relname := 'sl_event'; 427s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 427s return next prec; 427s end if; 427s prec.nspname := '_main'; 427s prec.relname := 'sl_confirm'; 427s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 427s return next prec; 427s end if; 427s prec.nspname := '_main'; 427s prec.relname := 'sl_setsync'; 427s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 427s return next prec; 427s end if; 427s prec.nspname := '_main'; 427s prec.relname := 'sl_seqlog'; 427s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 427s return next prec; 427s end if; 427s prec.nspname := '_main'; 427s prec.relname := 'sl_archive_counter'; 427s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 427s return next prec; 427s end if; 427s prec.nspname := '_main'; 427s prec.relname := 'sl_components'; 427s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 427s return next prec; 427s end if; 427s prec.nspname := '_main'; 427s prec.relname := 'sl_log_script'; 427s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 427s return next prec; 427s end if; 427s prec.nspname := 'pg_catalog'; 427s prec.relname := 'pg_listener'; 427s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 427s return next prec; 427s end if; 427s prec.nspname := 'pg_catalog'; 427s prec.relname := 'pg_statistic'; 427s if public.ShouldSlonyVacuumTable(prec.nspname, prec.relname) then 427s return next prec; 427s end if; 427s 427s return; 427s end 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.TablesToVacuum () is 427s 'Return a list of tables that require frequent vacuuming. 
The 427s function is used so that the list is not hardcoded into C code.'; 427s COMMENT 427s create or replace function public.add_empty_table_to_replication(p_set_id int4, p_tab_id int4, p_nspname text, p_tabname text, p_idxname text, p_comment text) returns bigint as $$ 427s declare 427s 427s prec record; 427s v_origin int4; 427s v_isorigin boolean; 427s v_fqname text; 427s v_query text; 427s v_rows integer; 427s v_idxname text; 427s 427s begin 427s -- Need to validate that the set exists; the set will tell us if this is the origin 427s select set_origin into v_origin from public.sl_set where set_id = p_set_id; 427s if not found then 427s raise exception 'add_empty_table_to_replication: set % not found!', p_set_id; 427s end if; 427s 427s -- Need to be aware of whether or not this node is origin for the set 427s v_isorigin := ( v_origin = public.getLocalNodeId('_main') ); 427s 427s v_fqname := '"' || p_nspname || '"."' || p_tabname || '"'; 427s -- Take out a lock on the table 427s v_query := 'lock ' || v_fqname || ';'; 427s execute v_query; 427s 427s if v_isorigin then 427s -- On the origin, verify that the table is empty, failing if it has any tuples 427s v_query := 'select 1 as tuple from ' || v_fqname || ' limit 1;'; 427s execute v_query into prec; 427s GET DIAGNOSTICS v_rows = ROW_COUNT; 427s if v_rows = 0 then 427s raise notice 'add_empty_table_to_replication: table % empty on origin - OK', v_fqname; 427s else 427s raise exception 'add_empty_table_to_replication: table % contained tuples on origin node %', v_fqname, v_origin; 427s end if; 427s else 427s -- On other nodes, TRUNCATE the table 427s v_query := 'truncate ' || v_fqname || ';'; 427s execute v_query; 427s end if; 427s -- If p_idxname is NULL, then look up the PK index, and RAISE EXCEPTION if one does not exist 427s if p_idxname is NULL then 427s select c2.relname into prec from pg_catalog.pg_index i, pg_catalog.pg_class c1, pg_catalog.pg_class c2, pg_catalog.pg_namespace n where i.indrelid = c1.oid and i.indexrelid = c2.oid and c1.relname = p_tabname and i.indisprimary and n.nspname = p_nspname and n.oid = c1.relnamespace; 427s if not found then 427s raise exception 'add_empty_table_to_replication: table % has no primary key and no candidate specified!', v_fqname; 427s else 427s v_idxname := prec.relname; 427s end if; 427s else 427s v_idxname := p_idxname; 427s end if; 427s return public.setAddTable_int(p_set_id, p_tab_id, v_fqname, v_idxname, p_comment); 427s end 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.add_empty_table_to_replication(p_set_id int4, p_tab_id int4, p_nspname text, p_tabname text, p_idxname text, p_comment text) is 427s 'Verify that a table is empty, and add it to replication. 427s tab_idxname is optional - if NULL, then we use the primary key. 
427s 427s Note that this function is to be run within an EXECUTE SCRIPT script, 427s so it runs at the right place in the transaction stream on all 427s nodes.'; 427s COMMENT 427s create or replace function public.replicate_partition(p_tab_id int4, p_nspname text, p_tabname text, p_idxname text, p_comment text) returns bigint as $$ 427s declare 427s prec record; 427s prec2 record; 427s v_set_id int4; 427s 427s begin 427s -- Look up the parent table; fail if it does not exist 427s select c1.oid into prec from pg_catalog.pg_class c1, pg_catalog.pg_class c2, pg_catalog.pg_inherits i, pg_catalog.pg_namespace n where c1.oid = i.inhparent and c2.oid = i.inhrelid and n.oid = c2.relnamespace and n.nspname = p_nspname and c2.relname = p_tabname; 427s if not found then 427s raise exception 'replicate_partition: No parent table found for %.%!', p_nspname, p_tabname; 427s end if; 427s 427s -- The parent table tells us what replication set to use 427s select tab_set into prec2 from public.sl_table where tab_reloid = prec.oid; 427s if not found then 427s raise exception 'replicate_partition: Parent table % for new partition %.% is not replicated!', prec.oid, p_nspname, p_tabname; 427s end if; 427s 427s v_set_id := prec2.tab_set; 427s 427s -- Now, we have all the parameters necessary to run add_empty_table_to_replication... 427s return public.add_empty_table_to_replication(v_set_id, p_tab_id, p_nspname, p_tabname, p_idxname, p_comment); 427s end 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.replicate_partition(p_tab_id int4, p_nspname text, p_tabname text, p_idxname text, p_comment text) is 427s 'Add a partition table to replication. 427s tab_idxname is optional - if NULL, then we use the primary key. 427s This function looks up replication configuration via the parent table. 427s 427s Note that this function is to be run within an EXECUTE SCRIPT script, 427s so it runs at the right place in the transaction stream on all 427s nodes.'; 427s COMMENT 427s create or replace function public.disable_indexes_on_table (i_oid oid) 427s returns integer as $$ 427s begin 427s -- Setting pg_class.relhasindex to false will cause copy not to 427s -- maintain any indexes. At the end of the copy we will reenable 427s -- them and reindex the table. This bulk creating of indexes is 427s -- faster. 427s 427s update pg_catalog.pg_class set relhasindex ='f' where oid = i_oid; 427s return 1; 427s end $$ 427s language plpgsql; 427s CREATE FUNCTION 427s comment on function public.disable_indexes_on_table(i_oid oid) is 427s 'disable indexes on the specified table. 427s Used during subscription process to suppress indexes, which allows 427s COPY to go much faster. 427s 427s This may be set as a SECURITY DEFINER in order to eliminate the need 427s for superuser access by Slony-I. 427s '; 427s COMMENT 427s create or replace function public.enable_indexes_on_table (i_oid oid) 427s returns integer as $$ 427s begin 427s update pg_catalog.pg_class set relhasindex ='t' where oid = i_oid; 427s return 1; 427s end $$ 427s language plpgsql 427s security definer; 427s CREATE FUNCTION 427s comment on function public.enable_indexes_on_table(i_oid oid) is 427s 're-enable indexes on the specified table. 427s 427s This may be set as a SECURITY DEFINER in order to eliminate the need 427s for superuser access by Slony-I. 
427s '; 427s COMMENT 427s drop function if exists public.reshapeSubscription(int4,int4,int4); 427s DROP FUNCTION 427s create or replace function public.reshapeSubscription (p_sub_origin int4, p_sub_provider int4, p_sub_receiver int4) returns int4 as $$ 427s begin 427s update public.sl_subscribe 427s set sub_provider=p_sub_provider 427s from public.sl_set 427s WHERE sub_set=sl_set.set_id 427s and sl_set.set_origin=p_sub_origin and sub_receiver=p_sub_receiver; 427s if found then 427s perform public.RebuildListenEntries(); 427s notify "_main_Restart"; 427s end if; 427s return 0; 427s end 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.reshapeSubscription(p_sub_origin int4, p_sub_provider int4, p_sub_receiver int4) is 427s 'Run on a receiver/subscriber node when the provider for that 427s subscription is being changed. Slonik will invoke this method 427s before the SUBSCRIBE_SET event propagates to the receiver 427s so listen paths can be updated.'; 427s COMMENT 427s create or replace function public.slon_node_health_check() returns boolean as $$ 427s declare 427s prec record; 427s all_ok boolean; 427s begin 427s all_ok := 't'::boolean; 427s -- validate that all tables in sl_table have: 427s -- sl_table agreeing with pg_class 427s for prec in select tab_id, tab_relname, tab_nspname from 427s public.sl_table t where not exists (select 1 from pg_catalog.pg_class c, pg_catalog.pg_namespace n 427s where c.oid = t.tab_reloid and c.relname = t.tab_relname and c.relnamespace = n.oid and n.nspname = t.tab_nspname) loop 427s all_ok := 'f'::boolean; 427s raise warning 'table [id,nsp,name]=[%,%,%] - sl_table does not match pg_class/pg_namespace', prec.tab_id, prec.tab_relname, prec.tab_nspname; 427s end loop; 427s if not all_ok then 427s raise warning 'Mismatch found between sl_table and pg_class. Slonik command REPAIR CONFIG may be useful to rectify this.'; 427s end if; 427s return all_ok; 427s end 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.slon_node_health_check() is 'called when slon starts up to validate that there are no problems with node configuration.
Returns t if all is OK, f if there is a problem.'; 427s COMMENT 427s create or replace function public.log_truncate () returns trigger as 427s $$ 427s declare 427s r_role text; 427s c_nspname text; 427s c_relname text; 427s c_log integer; 427s c_node integer; 427s c_tabid integer; 427s begin 427s -- Ignore this call if session_replication_role = 'local' 427s select into r_role setting 427s from pg_catalog.pg_settings where name = 'session_replication_role'; 427s if r_role = 'local' then 427s return NULL; 427s end if; 427s 427s c_tabid := tg_argv[0]; 427s c_node := public.getLocalNodeId('_main'); 427s select tab_nspname, tab_relname into c_nspname, c_relname 427s from public.sl_table where tab_id = c_tabid; 427s select last_value into c_log from public.sl_log_status; 427s if c_log in (0, 2) then 427s insert into public.sl_log_1 ( 427s log_origin, log_txid, log_tableid, 427s log_actionseq, log_tablenspname, 427s log_tablerelname, log_cmdtype, 427s log_cmdupdncols, log_cmdargs 427s ) values ( 427s c_node, pg_catalog.txid_current(), c_tabid, 427s nextval('public.sl_action_seq'), c_nspname, 427s c_relname, 'T', 0, '{}'::text[]); 427s else -- (1, 3) 427s insert into public.sl_log_2 ( 427s log_origin, log_txid, log_tableid, 427s log_actionseq, log_tablenspname, 427s log_tablerelname, log_cmdtype, 427s log_cmdupdncols, log_cmdargs 427s ) values ( 427s c_node, pg_catalog.txid_current(), c_tabid, 427s nextval('public.sl_action_seq'), c_nspname, 427s c_relname, 'T', 0, '{}'::text[]); 427s end if; 427s return NULL; 427s end 427s $$ language plpgsql 427s security definer; 427s CREATE FUNCTION 427s comment on function public.log_truncate () 427s is 'trigger function run when a replicated table receives a TRUNCATE request'; 427s COMMENT 427s create or replace function public.deny_truncate () returns trigger as 427s $$ 427s declare 427s r_role text; 427s begin 427s -- Ignore this call if session_replication_role = 'local' 427s select into r_role setting 427s from pg_catalog.pg_settings where name = 'session_replication_role'; 427s if r_role = 'local' then 427s return NULL; 427s end if; 427s 427s raise exception 'truncation of replicated table forbidden on subscriber node'; 427s end 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.deny_truncate () 427s is 'trigger function run when a replicated table receives a TRUNCATE request'; 427s COMMENT 427s create or replace function public.store_application_name (i_name text) returns text as $$ 427s declare 427s p_command text; 427s begin 427s if exists (select 1 from pg_catalog.pg_settings where name = 'application_name') then 427s p_command := 'set application_name to '''|| i_name || ''';'; 427s execute p_command; 427s return i_name; 427s end if; 427s return NULL::text; 427s end $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.store_application_name (i_name text) is 427s 'Set application_name GUC, if possible. 
Returns NULL if it fails to work.'; 427s COMMENT 427s create or replace function public.is_node_reachable(origin_node_id integer, 427s receiver_node_id integer) returns boolean as $$ 427s declare 427s listen_row record; 427s reachable boolean; 427s begin 427s reachable:=false; 427s select * into listen_row from public.sl_listen where 427s li_origin=origin_node_id and li_receiver=receiver_node_id; 427s if found then 427s reachable:=true; 427s end if; 427s return reachable; 427s end $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.is_node_reachable(origin_node_id integer, receiver_node_id integer) 427s is 'Is the receiver node reachable from the origin, via any of the listen paths?'; 427s COMMENT 427s create or replace function public.component_state (i_actor text, i_pid integer, i_node integer, i_conn_pid integer, i_activity text, i_starttime timestamptz, i_event bigint, i_eventtype text) returns integer as $$ 427s begin 427s -- Trim out old state for this component 427s if not exists (select 1 from public.sl_components where co_actor = i_actor) then 427s insert into public.sl_components 427s (co_actor, co_pid, co_node, co_connection_pid, co_activity, co_starttime, co_event, co_eventtype) 427s values 427s (i_actor, i_pid, i_node, i_conn_pid, i_activity, i_starttime, i_event, i_eventtype); 427s else 427s update public.sl_components 427s set 427s co_connection_pid = i_conn_pid, co_activity = i_activity, co_starttime = i_starttime, co_event = i_event, 427s co_eventtype = i_eventtype 427s where co_actor = i_actor 427s and co_starttime < i_starttime; 427s end if; 427s return 1; 427s end $$ 427s language plpgsql; 427s CREATE FUNCTION 427s comment on function public.component_state (i_actor text, i_pid integer, i_node integer, i_conn_pid integer, i_activity text, i_starttime timestamptz, i_event bigint, i_eventtype text) is 427s 'Store state of a Slony component. Useful for monitoring'; 427s COMMENT 427s create or replace function public.recreate_log_trigger(p_fq_table_name text, 427s p_tab_id oid, p_tab_attkind text) returns integer as $$ 427s begin 427s execute 'drop trigger "_main_logtrigger" on ' || 427s p_fq_table_name ; 427s -- ---- 427s execute 'create trigger "_main_logtrigger"' || 427s ' after insert or update or delete on ' || 427s p_fq_table_name 427s || ' for each row execute procedure public.logTrigger (' || 427s pg_catalog.quote_literal('_main') || ',' || 427s pg_catalog.quote_literal(p_tab_id::text) || ',' || 427s pg_catalog.quote_literal(p_tab_attkind) || ');'; 427s return 0; 427s end 427s $$ language plpgsql; 427s CREATE FUNCTION 427s comment on function public.recreate_log_trigger(p_fq_table_name text, 427s p_tab_id oid, p_tab_attkind text) is 427s 'A function that drops and recreates the log trigger on the specified table. 427s It is intended to be used after the primary_key/unique index has changed.'; 427s COMMENT 427s create or replace function public.repair_log_triggers(only_locked boolean) 427s returns integer as $$ 427s declare 427s retval integer; 427s table_row record; 427s begin 427s retval=0; 427s for table_row in 427s select tab_nspname,tab_relname, 427s tab_idxname, tab_id, mode, 427s public.determineAttKindUnique(tab_nspname|| 427s '.'||tab_relname,tab_idxname) as attkind 427s from 427s public.sl_table 427s left join 427s pg_locks on (relation=tab_reloid and pid=pg_backend_pid() 427s and mode='AccessExclusiveLock') 427s ,pg_trigger 427s where tab_reloid=tgrelid and 427s public.determineAttKindUnique(tab_nspname||'.' 
427s ||tab_relname,tab_idxname) 427s !=(public.decode_tgargs(tgargs))[2] 427s and tgname = '_main' 427s || '_logtrigger' 427s LOOP 427s if (only_locked=false) or table_row.mode='AccessExclusiveLock' then 427s perform public.recreate_log_trigger 427s (table_row.tab_nspname||'.'||table_row.tab_relname, 427s table_row.tab_id,table_row.attkind); 427s retval=retval+1; 427s else 427s raise notice '%.% has an invalid configuration on the log trigger. This was not corrected because only_locked is true and the table is not locked.', 427s table_row.tab_nspname,table_row.tab_relname; 427s 427s end if; 427s end loop; 427s return retval; 427s end 427s $$ 427s language plpgsql; 427s CREATE FUNCTION 427s comment on function public.repair_log_triggers(only_locked boolean) 427s is ' 427s repair the log triggers as required. If only_locked is true then only 427s tables that are already exclusively locked by the current transaction are 427s repaired. Otherwise all replicated tables with outdated trigger arguments 427s are recreated.'; 427s COMMENT 427s create or replace function public.unsubscribe_abandoned_sets(p_failed_node int4) returns bigint 427s as $$ 427s declare 427s v_row record; 427s v_seq_id bigint; 427s v_local_node int4; 427s begin 427s 427s select public.getLocalNodeId('_main') into 427s v_local_node; 427s 427s if found then 427s --abandon all subscriptions from this origin. 427s for v_row in select sub_set,sub_receiver from 427s public.sl_subscribe, public.sl_set 427s where sub_set=set_id and set_origin=p_failed_node 427s and sub_receiver=v_local_node 427s loop 427s raise notice 'Slony-I: failover_abandon_set() is abandoning subscription to set % on node % because it is too far ahead', v_row.sub_set, 427s v_local_node; 427s --If this node is a provider for the set 427s --then the receiver needs to be unsubscribed. 427s -- 427s select public.unsubscribeSet(v_row.sub_set, 427s v_local_node,true) 427s into v_seq_id; 427s end loop; 427s end if; 427s 427s return v_seq_id; 427s end 427s $$ language plpgsql; 427s CREATE FUNCTION 427s CREATE OR replace function public.agg_text_sum(txt_before TEXT, txt_new TEXT) RETURNS TEXT AS 427s $BODY$ 427s DECLARE 427s c_delim text; 427s BEGIN 427s c_delim = ','; 427s IF (txt_before IS NULL or txt_before='') THEN 427s RETURN txt_new; 427s END IF; 427s RETURN txt_before || c_delim || txt_new; 427s END; 427s $BODY$ 427s LANGUAGE plpgsql; 427s CREATE FUNCTION 427s comment on function public.agg_text_sum(text,text) is 427s 'An accumulator function used by the slony string_agg function to 427s aggregate rows into a string'; 427s COMMENT 427s Dropping cluster 16/regress ... 428s ### End 16 psql ### 428s autopkgtest [19:13:49]: test load-functions: -----------------------] 432s load-functions PASS 432s autopkgtest [19:13:53]: test load-functions: - - - - - - - - - - results - - - - - - - - - - 435s autopkgtest [19:13:56]: @@@@@@@@@@@@@@@@@@@@ summary 435s load-functions PASS