0s autopkgtest [18:38:48]: starting date and time: 2025-03-15 18:38:48+0000
  0s autopkgtest [18:38:48]: git checkout: 325255d2 Merge branch 'pin-any-arch' into 'ubuntu/production'
  0s autopkgtest [18:38:48]: host juju-7f2275-prod-proposed-migration-environment-15; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.kt3d8qog/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:glibc --apt-upgrade redis --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=glibc/2.41-1ubuntu2 -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest-s390x --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-15@bos03-s390x-31.secgroup --name adt-plucky-s390x-redis-20250315-183848-juju-7f2275-prod-proposed-migration-environment-15-3f05e94a-d600-418c-920c-07f01effe9e8 --image adt/ubuntu-plucky-s390x-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-15 --net-id=net_prod-proposed-migration-s390x -e TERM=linux -e ''"'"'http_proxy=http://squid.internal:3128'"'"'' -e ''"'"'https_proxy=http://squid.internal:3128'"'"'' -e ''"'"'no_proxy=127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,keyserver.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,10.24.0.0/24,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com,radosgw.ps5.canonical.com'"'"'' --mirror=http://ftpmaster.internal/ubuntu/
139s autopkgtest [18:41:07]: testbed dpkg architecture: s390x
139s autopkgtest [18:41:07]: testbed apt version: 2.9.33
139s autopkgtest [18:41:07]: @@@@@@@@@@@@@@@@@@@@ test bed setup
140s autopkgtest [18:41:08]: testbed release detected to be: None
140s autopkgtest [18:41:08]: updating testbed package index (apt update)
141s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease [126 kB]
141s Hit:2 http://ftpmaster.internal/ubuntu plucky InRelease
141s Hit:3 http://ftpmaster.internal/ubuntu plucky-updates InRelease
141s Hit:4 http://ftpmaster.internal/ubuntu plucky-security InRelease
141s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/main Sources [99.7 kB]
141s Get:6 http://ftpmaster.internal/ubuntu plucky-proposed/universe Sources [379 kB]
142s Get:7 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse Sources [15.8 kB]
142s Get:8 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x Packages [113 kB]
142s Get:9 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x c-n-f Metadata [1824 B]
142s Get:10 http://ftpmaster.internal/ubuntu plucky-proposed/restricted s390x c-n-f Metadata [116 B]
142s Get:11 http://ftpmaster.internal/ubuntu plucky-proposed/universe s390x Packages [320 kB]
142s Get:12 http://ftpmaster.internal/ubuntu plucky-proposed/universe s390x c-n-f Metadata [13.4 kB]
142s Get:13 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse s390x Packages [3776 B]
142s Get:14 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse s390x c-n-f Metadata [240 B]
142s Fetched 1073 kB in 2s (621 kB/s)
143s Reading package lists...
144s + lsb_release --codename --short
144s + RELEASE=plucky
144s + cat
144s + [ plucky != trusty ]
144s + DEBIAN_FRONTEND=noninteractive eatmydata apt-get -y --allow-downgrades -o Dpkg::Options::=--force-confnew dist-upgrade
144s Reading package lists...
144s Building dependency tree...
144s Reading state information...
144s Calculating upgrade...
144s Calculating upgrade...
144s The following packages were automatically installed and are no longer required:
144s   libnsl2 libpython3.12-minimal libpython3.12-stdlib libpython3.12t64
144s   linux-headers-6.11.0-8 linux-headers-6.11.0-8-generic
144s   linux-modules-6.11.0-8-generic linux-tools-6.11.0-8
144s   linux-tools-6.11.0-8-generic
144s Use 'sudo apt autoremove' to remove them.
144s The following packages will be upgraded:
144s   pinentry-curses python3-jinja2 strace
144s 3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
144s Need to get 652 kB of archives.
144s After this operation, 27.6 kB of additional disk space will be used.
144s Get:1 http://ftpmaster.internal/ubuntu plucky/main s390x strace s390x 6.13+ds-1ubuntu1 [500 kB]
145s Get:2 http://ftpmaster.internal/ubuntu plucky/main s390x pinentry-curses s390x 1.3.1-2ubuntu3 [42.9 kB]
145s Get:3 http://ftpmaster.internal/ubuntu plucky/main s390x python3-jinja2 all 3.1.5-2ubuntu1 [109 kB]
145s Fetched 652 kB in 1s (594 kB/s)
145s (Reading database ... 81428 files and directories currently installed.)
145s Preparing to unpack .../strace_6.13+ds-1ubuntu1_s390x.deb ...
145s Unpacking strace (6.13+ds-1ubuntu1) over (6.11-0ubuntu1) ...
145s Preparing to unpack .../pinentry-curses_1.3.1-2ubuntu3_s390x.deb ...
145s Unpacking pinentry-curses (1.3.1-2ubuntu3) over (1.3.1-2ubuntu2) ...
145s Preparing to unpack .../python3-jinja2_3.1.5-2ubuntu1_all.deb ...
146s Unpacking python3-jinja2 (3.1.5-2ubuntu1) over (3.1.5-2) ...
146s Setting up pinentry-curses (1.3.1-2ubuntu3) ...
146s Setting up python3-jinja2 (3.1.5-2ubuntu1) ...
146s Setting up strace (6.13+ds-1ubuntu1) ...
146s Processing triggers for man-db (2.13.0-1) ...
146s + rm /etc/apt/preferences.d/force-downgrade-to-release.pref
146s + /usr/lib/apt/apt-helper analyze-pattern ?true
146s + uname -r
146s + sed s/\./\\./g
146s + running_kernel_pattern=^linux-.*6\.14\.0-10-generic.*
146s + apt list ?obsolete
146s + tail -n+2
146s + cut -d/ -f1
146s + grep -v ^linux-.*6\.14\.0-10-generic.*
146s + obsolete_pkgs=linux-headers-6.11.0-8-generic
146s linux-headers-6.11.0-8
146s linux-modules-6.11.0-8-generic
146s linux-tools-6.11.0-8-generic
146s linux-tools-6.11.0-8
146s + DEBIAN_FRONTEND=noninteractive eatmydata apt-get -y purge --autoremove linux-headers-6.11.0-8-generic linux-headers-6.11.0-8 linux-modules-6.11.0-8-generic linux-tools-6.11.0-8-generic linux-tools-6.11.0-8
146s Reading package lists...
146s Building dependency tree...
146s Reading state information...
147s Solving dependencies...
147s The following packages will be REMOVED:
147s   libnsl2* libpython3.12-minimal* libpython3.12-stdlib* libpython3.12t64*
147s   linux-headers-6.11.0-8* linux-headers-6.11.0-8-generic*
147s   linux-modules-6.11.0-8-generic* linux-tools-6.11.0-8*
147s   linux-tools-6.11.0-8-generic*
147s 0 upgraded, 0 newly installed, 9 to remove and 5 not upgraded.
147s After this operation, 167 MB disk space will be freed.
147s (Reading database ... 81428 files and directories currently installed.)
147s Removing linux-tools-6.11.0-8-generic (6.11.0-8.8) ...
147s Removing linux-tools-6.11.0-8 (6.11.0-8.8) ...
147s Removing libpython3.12t64:s390x (3.12.9-1) ...
147s Removing libpython3.12-stdlib:s390x (3.12.9-1) ...
147s Removing libnsl2:s390x (1.3.0-3build3) ...
147s Removing libpython3.12-minimal:s390x (3.12.9-1) ...
147s Removing linux-headers-6.11.0-8-generic (6.11.0-8.8) ...
147s Removing linux-headers-6.11.0-8 (6.11.0-8.8) ...
148s Removing linux-modules-6.11.0-8-generic (6.11.0-8.8) ...
148s Processing triggers for libc-bin (2.41-1ubuntu1) ...
148s (Reading database ... 56328 files and directories currently installed.)
148s Purging configuration files for libpython3.12-minimal:s390x (3.12.9-1) ...
148s Purging configuration files for linux-modules-6.11.0-8-generic (6.11.0-8.8) ...
148s + grep -q trusty /etc/lsb-release
148s + [ ! -d /usr/share/doc/unattended-upgrades ]
148s + [ ! -d /usr/share/doc/lxd ]
148s + [ ! -d /usr/share/doc/lxd-client ]
148s + [ ! -d /usr/share/doc/snapd ]
148s + type iptables
148s + cat
148s + chmod 755 /etc/rc.local
148s + . /etc/rc.local
148s + iptables -w -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
148s + iptables -A OUTPUT -d 10.255.255.1/32 -p tcp -j DROP
148s + iptables -A OUTPUT -d 10.255.255.2/32 -p tcp -j DROP
148s + uname -m
148s + [ s390x = ppc64le ]
148s + [ -d /run/systemd/system ]
148s + systemd-detect-virt --quiet --vm
148s + mkdir -p /etc/systemd/system/systemd-random-seed.service.d/
148s + cat
148s + grep -q lz4 /etc/initramfs-tools/initramfs.conf
148s + echo COMPRESS=lz4
148s autopkgtest [18:41:16]: upgrading testbed (apt dist-upgrade and autopurge)
148s Reading package lists...
148s Building dependency tree...
148s Reading state information...
149s Calculating upgrade...Starting pkgProblemResolver with broken count: 0
149s Starting 2 pkgProblemResolver with broken count: 0
149s Done
149s Entering ResolveByKeep
149s 
149s Calculating upgrade...
149s The following packages will be upgraded:
149s   libc-bin libc-dev-bin libc6 libc6-dev locales
149s 5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
149s Need to get 9512 kB of archives.
149s After this operation, 8192 B of additional disk space will be used.
149s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc6-dev s390x 2.41-1ubuntu2 [1678 kB]
151s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc-dev-bin s390x 2.41-1ubuntu2 [24.3 kB]
151s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc6 s390x 2.41-1ubuntu2 [2892 kB]
155s Get:4 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc-bin s390x 2.41-1ubuntu2 [671 kB]
156s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x locales all 2.41-1ubuntu2 [4246 kB]
161s Preconfiguring packages ...
161s Fetched 9512 kB in 11s (830 kB/s)
161s (Reading database ... 56326 files and directories currently installed.)
161s Preparing to unpack .../libc6-dev_2.41-1ubuntu2_s390x.deb ...
161s Unpacking libc6-dev:s390x (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
161s Preparing to unpack .../libc-dev-bin_2.41-1ubuntu2_s390x.deb ...
161s Unpacking libc-dev-bin (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
161s Preparing to unpack .../libc6_2.41-1ubuntu2_s390x.deb ...
161s Unpacking libc6:s390x (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
161s Setting up libc6:s390x (2.41-1ubuntu2) ...
161s (Reading database ... 56326 files and directories currently installed.)
161s Preparing to unpack .../libc-bin_2.41-1ubuntu2_s390x.deb ...
161s Unpacking libc-bin (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
161s Setting up libc-bin (2.41-1ubuntu2) ...
161s (Reading database ... 56326 files and directories currently installed.)
161s Preparing to unpack .../locales_2.41-1ubuntu2_all.deb ...
161s Unpacking locales (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
161s Setting up locales (2.41-1ubuntu2) ...
162s Generating locales (this might take a while)...
163s   en_US.UTF-8... done
163s Generation complete.
163s Setting up libc-dev-bin (2.41-1ubuntu2) ...
163s Setting up libc6-dev:s390x (2.41-1ubuntu2) ...
163s Processing triggers for man-db (2.13.0-1) ...
163s Processing triggers for systemd (257.3-1ubuntu3) ...
164s Reading package lists...
164s Building dependency tree...
164s Reading state information...
165s Starting pkgProblemResolver with broken count: 0
165s Starting 2 pkgProblemResolver with broken count: 0
165s Done
165s Solving dependencies...
165s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
165s autopkgtest [18:41:33]: rebooting testbed after setup commands that affected boot
185s autopkgtest [18:41:53]: testbed running kernel: Linux 6.14.0-10-generic #10-Ubuntu SMP Wed Mar 12 14:53:49 UTC 2025
188s autopkgtest [18:41:56]: @@@@@@@@@@@@@@@@@@@@ apt-source redis
194s Get:1 http://ftpmaster.internal/ubuntu plucky/universe redis 5:7.0.15-3 (dsc) [2273 B]
194s Get:2 http://ftpmaster.internal/ubuntu plucky/universe redis 5:7.0.15-3 (tar) [3026 kB]
194s Get:3 http://ftpmaster.internal/ubuntu plucky/universe redis 5:7.0.15-3 (diff) [31.7 kB]
194s gpgv: Signature made Tue Jan 21 10:13:21 2025 UTC
194s gpgv:                using RSA key C2FE4BD271C139B86C533E461E953E27D4311E58
194s gpgv: Can't check signature: No public key
194s dpkg-source: warning: cannot verify inline signature for ./redis_7.0.15-3.dsc: no acceptable signature found
194s autopkgtest [18:42:02]: testing package redis version 5:7.0.15-3
196s autopkgtest [18:42:04]: build not needed
200s autopkgtest [18:42:08]: test 0001-redis-cli: preparing testbed
201s Reading package lists...
201s Building dependency tree...
201s Reading state information...
201s Starting pkgProblemResolver with broken count: 0
201s Starting 2 pkgProblemResolver with broken count: 0
201s Done
201s The following NEW packages will be installed:
201s   liblzf1 redis redis-sentinel redis-server redis-tools
201s 0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
201s Need to get 1272 kB of archives.
201s After this operation, 7357 kB of additional disk space will be used.
201s Get:1 http://ftpmaster.internal/ubuntu plucky/universe s390x liblzf1 s390x 3.6-4 [7020 B]
201s Get:2 http://ftpmaster.internal/ubuntu plucky/universe s390x redis-tools s390x 5:7.0.15-3 [1198 kB]
203s Get:3 http://ftpmaster.internal/ubuntu plucky/universe s390x redis-sentinel s390x 5:7.0.15-3 [12.2 kB]
203s Get:4 http://ftpmaster.internal/ubuntu plucky/universe s390x redis-server s390x 5:7.0.15-3 [51.7 kB]
203s Get:5 http://ftpmaster.internal/ubuntu plucky/universe s390x redis all 5:7.0.15-3 [2914 B]
203s Fetched 1272 kB in 2s (660 kB/s)
203s Selecting previously unselected package liblzf1:s390x.
203s (Reading database ... 56326 files and directories currently installed.)
203s Preparing to unpack .../liblzf1_3.6-4_s390x.deb ...
203s Unpacking liblzf1:s390x (3.6-4) ...
203s Selecting previously unselected package redis-tools.
203s Preparing to unpack .../redis-tools_5%3a7.0.15-3_s390x.deb ...
203s Unpacking redis-tools (5:7.0.15-3) ...
204s Selecting previously unselected package redis-sentinel.
204s Preparing to unpack .../redis-sentinel_5%3a7.0.15-3_s390x.deb ...
204s Unpacking redis-sentinel (5:7.0.15-3) ...
204s Selecting previously unselected package redis-server.
204s Preparing to unpack .../redis-server_5%3a7.0.15-3_s390x.deb ...
204s Unpacking redis-server (5:7.0.15-3) ...
204s Selecting previously unselected package redis.
204s Preparing to unpack .../redis_5%3a7.0.15-3_all.deb ...
204s Unpacking redis (5:7.0.15-3) ...
204s Setting up liblzf1:s390x (3.6-4) ...
204s Setting up redis-tools (5:7.0.15-3) ...
204s Setting up redis-server (5:7.0.15-3) ...
204s Created symlink '/etc/systemd/system/redis.service' → '/usr/lib/systemd/system/redis-server.service'.
204s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-server.service' → '/usr/lib/systemd/system/redis-server.service'.
204s Setting up redis-sentinel (5:7.0.15-3) ...
204s Created symlink '/etc/systemd/system/sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'.
204s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'.
205s Setting up redis (5:7.0.15-3) ...
205s Processing triggers for man-db (2.13.0-1) ...
205s Processing triggers for libc-bin (2.41-1ubuntu2) ...
207s autopkgtest [18:42:15]: test 0001-redis-cli: [-----------------------
212s # Server
212s redis_version:7.0.15
212s redis_git_sha1:00000000
212s redis_git_dirty:0
212s redis_build_id:1369a98afcafaf0
212s redis_mode:standalone
212s os:Linux 6.14.0-10-generic s390x
212s arch_bits:64
212s monotonic_clock:POSIX clock_gettime
212s multiplexing_api:epoll
212s atomicvar_api:c11-builtin
212s gcc_version:14.2.0
212s process_id:1816
212s process_supervised:systemd
212s run_id:490cdf310b67ab74736831ca2079f82e07a54cf2
212s tcp_port:6379
212s server_time_usec:1742064140446532
212s uptime_in_seconds:-80
212s uptime_in_days:0
212s hz:10
212s configured_hz:10
212s lru_clock:14010892
212s executable:/usr/bin/redis-server
212s config_file:/etc/redis/redis.conf
212s io_threads_active:0
212s 
212s # Clients
212s connected_clients:3
212s cluster_connections:0
212s maxclients:10000
212s client_recent_max_input_buffer:20480
212s client_recent_max_output_buffer:0
212s blocked_clients:0
212s tracking_clients:0
212s clients_in_timeout_table:0
212s 
212s # Memory
212s used_memory:1094048
212s used_memory_human:1.04M
212s used_memory_rss:13934592
212s used_memory_rss_human:13.29M
212s used_memory_peak:1094048
212s used_memory_peak_human:1.04M
212s used_memory_peak_perc:102.15%
212s used_memory_overhead:953504
212s used_memory_startup:908704
212s used_memory_dataset:140544
212s used_memory_dataset_perc:75.83%
212s allocator_allocated:4599456
212s allocator_active:9371648
212s allocator_resident:11599872
212s total_system_memory:4190969856
212s total_system_memory_human:3.90G
212s used_memory_lua:31744
212s used_memory_vm_eval:31744
212s used_memory_lua_human:31.00K
212s used_memory_scripts_eval:0
212s number_of_cached_scripts:0
212s number_of_functions:0
212s number_of_libraries:0
212s used_memory_vm_functions:32768
212s used_memory_vm_total:64512
212s used_memory_vm_total_human:63.00K
212s used_memory_functions:200
212s used_memory_scripts:200
212s used_memory_scripts_human:200B
212s maxmemory:0
212s maxmemory_human:0B
212s maxmemory_policy:noeviction
212s allocator_frag_ratio:2.04
212s allocator_frag_bytes:4772192
212s allocator_rss_ratio:1.24
212s allocator_rss_bytes:2228224
212s rss_overhead_ratio:1.20
212s rss_overhead_bytes:2334720
212s mem_fragmentation_ratio:13.23
212s mem_fragmentation_bytes:12881144
212s mem_not_counted_for_evict:0
212s mem_replication_backlog:0
212s mem_total_replication_buffers:0
212s mem_clients_slaves:0
212s mem_clients_normal:44600
212s mem_cluster_links:0
212s mem_aof_buffer:0
212s mem_allocator:jemalloc-5.3.0
212s active_defrag_running:0
212s lazyfree_pending_objects:0
212s lazyfreed_objects:0
212s 
212s # Persistence
212s loading:0
212s async_loading:0
212s current_cow_peak:0
212s current_cow_size:0
212s current_cow_size_age:0
212s current_fork_perc:0.00
212s current_save_keys_processed:0
212s current_save_keys_total:0
212s rdb_changes_since_last_save:0
212s rdb_bgsave_in_progress:0
212s rdb_last_save_time:1742064220
212s rdb_last_bgsave_status:ok
212s rdb_last_bgsave_time_sec:-1
212s rdb_current_bgsave_time_sec:-1
212s rdb_saves:0
212s rdb_last_cow_size:0
212s rdb_last_load_keys_expired:0
212s rdb_last_load_keys_loaded:0
212s aof_enabled:0
212s aof_rewrite_in_progress:0
212s aof_rewrite_scheduled:0
212s aof_last_rewrite_time_sec:-1
212s aof_current_rewrite_time_sec:-1
212s aof_last_bgrewrite_status:ok
212s aof_rewrites:0
212s aof_rewrites_consecutive_failures:0
212s aof_last_write_status:ok
212s aof_last_cow_size:0
212s module_fork_in_progress:0
212s module_fork_last_cow_size:0
212s 
212s # Stats
212s total_connections_received:3
212s total_commands_processed:7
212s instantaneous_ops_per_sec:0
212s total_net_input_bytes:350
212s total_net_output_bytes:216
212s total_net_repl_input_bytes:0
212s total_net_repl_output_bytes:0
212s instantaneous_input_kbps:0.00
212s instantaneous_output_kbps:0.00
212s instantaneous_input_repl_kbps:0.00
212s instantaneous_output_repl_kbps:0.00
212s rejected_connections:0
212s sync_full:0
212s sync_partial_ok:0
212s sync_partial_err:0
212s expired_keys:0
212s expired_stale_perc:0.00
212s expired_time_cap_reached_count:0
212s expire_cycle_cpu_milliseconds:0
212s evicted_keys:0
212s evicted_clients:0
212s total_eviction_exceeded_time:0
212s current_eviction_exceeded_time:0
212s keyspace_hits:0
212s keyspace_misses:0
212s pubsub_channels:1
212s pubsub_patterns:0
212s pubsubshard_channels:0
212s latest_fork_usec:0
212s total_forks:0
212s migrate_cached_sockets:0
212s slave_expires_tracked_keys:0
212s active_defrag_hits:0
212s active_defrag_misses:0
212s active_defrag_key_hits:0
212s active_defrag_key_misses:0
212s total_active_defrag_time:0
212s current_active_defrag_time:0
212s tracking_total_keys:0
212s tracking_total_items:0
212s tracking_total_prefixes:0
212s unexpected_error_replies:0
212s total_error_replies:0
212s dump_payload_sanitizations:0
212s total_reads_processed:6
212s total_writes_processed:6
212s io_threaded_reads_processed:0
212s io_threaded_writes_processed:0
212s reply_buffer_shrinks:2
212s reply_buffer_expands:0
212s 
212s # Replication
212s role:master
212s connected_slaves:0
212s master_failover_state:no-failover
212s master_replid:2d052acfe6a88e9cc04e33a86f53a8cc2da52be4
212s master_replid2:0000000000000000000000000000000000000000
212s master_repl_offset:0
212s second_repl_offset:-1
212s repl_backlog_active:0
212s repl_backlog_size:1048576
212s repl_backlog_first_byte_offset:0
212s repl_backlog_histlen:0
212s 
212s # CPU
212s used_cpu_sys:0.014272
212s used_cpu_user:0.031064
212s used_cpu_sys_children:0.000249
212s used_cpu_user_children:0.000027
212s used_cpu_sys_main_thread:0.014238
212s used_cpu_user_main_thread:0.031049
212s 
212s # Modules
212s 
212s # Errorstats
212s 
212s # Cluster
212s cluster_enabled:0
212s 
212s # Keyspace
212s Redis ver. 7.0.15
212s autopkgtest [18:42:20]: test 0001-redis-cli: -----------------------]
213s 0001-redis-cli       PASS
213s autopkgtest [18:42:21]: test 0001-redis-cli:  - - - - - - - - - - results - - - - - - - - - -
213s autopkgtest [18:42:21]: test 0002-benchmark: preparing testbed
213s Reading package lists...
213s Building dependency tree...
213s Reading state information...
214s Starting pkgProblemResolver with broken count: 0
214s Starting 2 pkgProblemResolver with broken count: 0
214s Done
214s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
215s autopkgtest [18:42:23]: test 0002-benchmark: [-----------------------
221s ====== PING_INLINE ======
221s   100000 requests completed in 0.12 seconds
221s   50 parallel clients
221s   3 bytes payload
221s   keep alive: 1
221s   host configuration "save": 3600 1 300 100 60 10000
221s   host configuration "appendonly": no
221s   multi-thread: no
221s 
221s Latency by percentile distribution:
221s 0.000% <= 0.119 milliseconds (cumulative count 30)
221s 50.000% <= 0.399 milliseconds (cumulative count 51700)
221s 75.000% <= 0.655 milliseconds (cumulative count 76650)
221s 87.500% <= 0.695 milliseconds (cumulative count 88320)
221s 93.750% <= 0.719 milliseconds (cumulative count 94360)
221s 96.875% <= 0.735 milliseconds (cumulative count 97070)
221s 98.438% <= 0.759 milliseconds (cumulative count 98500)
221s 99.219% <= 1.127 milliseconds (cumulative count 99220)
221s 99.609% <= 10.263 milliseconds (cumulative count 99610)
221s 99.805% <= 10.351 milliseconds (cumulative count 99820)
221s 99.902% <= 10.391 milliseconds (cumulative count 99910)
221s 99.951% <= 10.415 milliseconds (cumulative count 99970)
221s 99.976% <= 10.423 milliseconds (cumulative count 99980)
221s 99.988% <= 10.431 milliseconds (cumulative count 100000)
221s 100.000% <= 10.431 milliseconds (cumulative count 100000)
221s 
221s Cumulative distribution of latencies:
221s 0.000% <= 0.103 milliseconds (cumulative count 0)
221s 1.380% <= 0.207 milliseconds (cumulative count 1380)
221s 21.140% <= 0.303 milliseconds (cumulative count 21140)
221s 53.950% <= 0.407 milliseconds (cumulative count 53950)
221s 63.470% <= 0.503 milliseconds (cumulative count 63470)
221s 67.130% <= 0.607 milliseconds (cumulative count 67130)
221s 90.490% <= 0.703 milliseconds (cumulative count 90490)
221s 98.920% <= 0.807 milliseconds (cumulative count 98920)
221s 98.990% <= 0.903 milliseconds (cumulative count 98990)
221s 99.010% <= 1.007 milliseconds (cumulative count 99010)
221s 99.170% <= 1.103 milliseconds (cumulative count 99170)
221s 99.370% <= 1.207 milliseconds (cumulative count 99370)
221s 99.510% <= 1.303 milliseconds (cumulative count 99510)
221s 99.560% <= 10.103 milliseconds (cumulative count 99560)
221s 100.000% <= 11.103 milliseconds (cumulative count 100000)
221s 
221s Summary:
221s   throughput summary: 854700.88 requests per second
221s   latency summary (msec):
221s           avg       min       p50       p95       p99       max
221s         0.507     0.112     0.399     0.727     0.911    10.431
221s ====== PING_MBULK ======
221s   100000 requests completed in 0.10 seconds
221s   50 parallel clients
221s   3 bytes payload
221s   keep alive: 1
221s   host configuration "save": 3600 1 300 100 60 10000
221s   host configuration "appendonly": no
221s   multi-thread: no
221s 
221s Latency by percentile distribution:
221s 0.000% <= 0.111 milliseconds (cumulative count 1200)
221s 50.000% <= 0.359 milliseconds (cumulative count 52190)
221s 75.000% <= 0.423 milliseconds (cumulative count 75580)
221s 87.500% <= 0.463 milliseconds (cumulative count 90060)
221s 93.750% <= 0.479 milliseconds (cumulative count 95480)
221s 96.875% <= 0.495 milliseconds (cumulative count 97460)
221s 98.438% <= 0.535 milliseconds (cumulative count 98500)
221s 99.219% <= 0.671 milliseconds (cumulative count 99230)
221s 99.609% <= 3.527 milliseconds (cumulative count 99610)
221s 99.805% <= 3.615 milliseconds (cumulative count 99820)
221s 99.902% <= 3.655 milliseconds (cumulative count 99920)
221s 99.951% <= 3.671 milliseconds (cumulative count 99960)
221s 99.976% <= 3.679 milliseconds (cumulative count 99980)
221s 99.988% <= 3.687 milliseconds (cumulative count 100000)
221s 100.000% <= 3.687 milliseconds (cumulative count 100000)
221s 
221s Cumulative distribution of latencies:
221s 0.000% <= 0.103 milliseconds (cumulative count 0)
221s 23.460% <= 0.207 milliseconds (cumulative count 23460)
221s 33.450% <= 0.303 milliseconds (cumulative count 33450)
221s 69.800% <= 0.407 milliseconds (cumulative count 69800)
221s 97.890% <= 0.503 milliseconds (cumulative count 97890)
221s 99.010% <= 0.607 milliseconds (cumulative count 99010)
221s 99.310% <= 0.703 milliseconds (cumulative count 99310)
221s 99.510% <= 0.807 milliseconds (cumulative count 99510)
221s 100.000% <= 4.103 milliseconds (cumulative count 100000)
221s 
221s Summary:
221s   throughput summary: 1020408.19 requests per second
221s   latency summary (msec):
221s           avg       min       p50       p95       p99       max
221s         0.345     0.104     0.359     0.479     0.607     3.687
221s ====== SET ======
221s   100000 requests completed in 0.13 seconds
221s   50 parallel clients
221s   3 bytes payload
221s   keep alive: 1
221s   host configuration "save": 3600 1 300 100 60 10000
221s   host configuration "appendonly": no
221s   multi-thread: no
221s 
221s Latency by percentile distribution:
221s 0.000% <= 0.111 milliseconds (cumulative count 90)
221s 50.000% <= 0.495 milliseconds (cumulative count 52150)
221s 75.000% <= 0.599 milliseconds (cumulative count 75370)
221s 87.500% <= 0.823 milliseconds (cumulative count 87690)
221s 93.750% <= 0.887 milliseconds (cumulative count 94530)
221s 96.875% <= 0.927 milliseconds (cumulative count 97210)
221s 98.438% <= 0.967 milliseconds (cumulative count 98540)
221s 99.219% <= 1.063 milliseconds (cumulative count 99280)
221s 99.609% <= 1.087 milliseconds (cumulative count 99670)
221s 99.805% <= 1.111 milliseconds (cumulative count 99820)
221s 99.902% <= 6.391 milliseconds (cumulative count 99920)
221s 99.951% <= 6.407 milliseconds (cumulative count 99960)
221s 99.976% <= 6.415 milliseconds (cumulative count 99980)
221s 99.988% <= 6.423 milliseconds (cumulative count 100000)
221s 100.000% <= 6.423 milliseconds (cumulative count 100000)
221s 
221s Cumulative distribution of latencies:
221s 0.000% <= 0.103 milliseconds (cumulative count 0)
221s 2.780% <= 0.207 milliseconds (cumulative count 2780)
221s 8.180% <= 0.303 milliseconds (cumulative count 8180)
221s 27.270% <= 0.407 milliseconds (cumulative count 27270)
221s 54.440% <= 0.503 milliseconds (cumulative count 54440)
221s 76.320% <= 0.607 milliseconds (cumulative count 76320)
221s 84.070% <= 0.703 milliseconds (cumulative count 84070)
221s 86.580% <= 0.807 milliseconds (cumulative count 86580)
221s 95.970% <= 0.903 milliseconds (cumulative count 95970)
221s 99.000% <= 1.007 milliseconds (cumulative count 99000)
221s 99.790% <= 1.103 milliseconds (cumulative count 99790)
221s 99.850% <= 1.207 milliseconds (cumulative count 99850)
221s 100.000% <= 7.103 milliseconds (cumulative count 100000)
221s 
221s Summary:
221s   throughput summary: 775193.81 requests per second
221s   latency summary (msec):
221s           avg       min       p50       p95       p99       max
221s         0.529     0.104     0.495     0.895     1.007     6.423
221s ====== GET ======
221s   100000 requests completed in 0.12 seconds
221s   50 parallel clients
221s   3 bytes payload
221s   keep alive: 1
221s   host configuration "save": 3600 1 300 100 60 10000
221s   host configuration "appendonly": no
221s   multi-thread: no
221s 
221s Latency by percentile distribution:
221s 0.000% <= 0.111 milliseconds (cumulative count 240)
221s 50.000% <= 0.423 milliseconds (cumulative count 51600)
221s 75.000% <= 0.575 milliseconds (cumulative count 75500)
221s 87.500% <= 0.719 milliseconds (cumulative count 87650)
221s 93.750% <= 0.791 milliseconds (cumulative count 94300)
221s 96.875% <= 0.831 milliseconds (cumulative count 96900)
221s 98.438% <= 0.903 milliseconds (cumulative count 98570)
221s 99.219% <= 0.975 milliseconds (cumulative count 99270)
221s 99.609% <= 1.047 milliseconds (cumulative count 99610)
221s 99.805% <= 8.295 milliseconds (cumulative count 99820)
221s 99.902% <= 8.335 milliseconds (cumulative count 99920)
221s 99.951% <= 8.351 milliseconds (cumulative count 99960)
221s 99.976% <= 8.359 milliseconds (cumulative count 99980)
221s 99.988% <= 8.367 milliseconds (cumulative count 100000)
221s 100.000% <= 8.367 milliseconds (cumulative count 100000)
221s 
221s Cumulative distribution of latencies:
221s 0.000% <= 0.103 milliseconds (cumulative count 0)
221s 3.040% <= 0.207 milliseconds (cumulative count 3040)
221s 17.590% <= 0.303 milliseconds (cumulative count 17590)
221s 47.300% <= 0.407 milliseconds (cumulative count 47300)
221s 68.730% <= 0.503 milliseconds (cumulative count 68730)
221s 77.680% <= 0.607 milliseconds (cumulative count 77680)
221s 86.230% <= 0.703 milliseconds (cumulative count 86230)
221s 95.580% <= 0.807 milliseconds (cumulative count 95580)
221s 98.570% <= 0.903 milliseconds (cumulative count 98570)
221s 99.400% <= 1.007 milliseconds (cumulative count 99400)
221s 99.640% <= 1.103 milliseconds (cumulative count 99640)
221s 100.000% <= 9.103 milliseconds (cumulative count 100000)
221s 
221s Summary:
221s   throughput summary: 847457.62 requests per second
221s   latency summary (msec):
221s           avg       min       p50       p95       p99       max
221s         0.487     0.104     0.423     0.807     0.943     8.367
221s ====== INCR ======
221s   100000 requests completed in 0.08 seconds
221s   50 parallel clients
221s   3 bytes payload
221s   keep alive: 1
221s   host configuration "save": 3600 1 300 100 60 10000
221s   host configuration "appendonly": no
221s   multi-thread: no
221s 
221s Latency by percentile distribution:
221s 0.000% <= 0.103 milliseconds (cumulative count 10)
221s 50.000% <= 0.303 milliseconds (cumulative count 50960)
221s 75.000% <= 0.367 milliseconds (cumulative count 75250)
221s 87.500% <= 0.423 milliseconds (cumulative count 87960)
221s 93.750% <= 0.487 milliseconds (cumulative count 93920)
221s 96.875% <= 0.575 milliseconds (cumulative count 96980)
221s 98.438% <= 0.703 milliseconds (cumulative count 98440)
221s 99.219% <= 0.831 milliseconds (cumulative count 99230)
221s 99.609% <= 1.279 milliseconds (cumulative count 99620)
221s 99.805% <= 1.335 milliseconds (cumulative count 99810)
221s 99.902% <= 1.383 milliseconds (cumulative count 99910)
221s 99.951% <= 1.439 milliseconds (cumulative count 99960)
221s 99.976% <= 1.455 milliseconds (cumulative count 99990)
221s 99.994% <= 1.463 milliseconds (cumulative count 100000)
221s 100.000% <= 1.463 milliseconds (cumulative count 100000)
221s 
221s Cumulative distribution of latencies:
221s 0.010% <= 0.103 milliseconds (cumulative count 10)
221s 9.340% <= 0.207 milliseconds (cumulative count 9340)
221s 50.960% <= 0.303 milliseconds (cumulative count 50960)
221s 84.730% <= 0.407 milliseconds (cumulative count 84730)
221s 94.630% <= 0.503 milliseconds (cumulative count 94630)
221s 97.530% <= 0.607 milliseconds (cumulative count 97530)
221s 98.440% <= 0.703 milliseconds (cumulative count 98440)
221s 99.070% <= 0.807 milliseconds (cumulative count 99070)
221s 99.490% <= 0.903 milliseconds (cumulative count 99490)
221s 99.500% <= 1.007 milliseconds (cumulative count 99500)
221s 99.700% <= 1.303 milliseconds (cumulative count 99700)
221s 99.920% <= 1.407 milliseconds (cumulative count 99920)
221s 100.000% <= 1.503 milliseconds (cumulative count 100000)
221s 
221s Summary:
221s   throughput summary: 1219512.12 requests per second
221s   latency summary (msec):
221s           avg       min       p50       p95       p99       max
221s         0.324     0.096     0.303     0.519     0.799     1.463
221s ====== LPUSH ======
221s   100000 requests completed in 0.09 seconds
221s   50 parallel clients
221s   3 bytes payload
221s   keep alive: 1
221s   host configuration "save": 3600 1 300 100 60 10000
221s   host configuration "appendonly": no
221s   multi-thread: no
221s 
221s Latency by percentile distribution:
221s 0.000% <= 0.119 milliseconds (cumulative count 10)
221s 50.000% <= 0.391 milliseconds (cumulative count 50990)
221s 75.000% <= 0.455 milliseconds (cumulative count 77750)
221s 87.500% <= 0.495 milliseconds (cumulative count 89660)
221s 93.750% <= 0.519 milliseconds (cumulative count 94460)
221s 96.875% <= 0.543 milliseconds (cumulative count 97210)
221s 98.438% <= 0.575 milliseconds (cumulative count 98530)
221s 99.219% <= 0.647 milliseconds (cumulative count 99240)
221s 99.609% <= 0.719 milliseconds (cumulative count 99630)
221s 99.805% <= 0.767 milliseconds (cumulative count 99830)
221s 99.902% <= 0.791 milliseconds (cumulative count 99920)
221s 99.951% <= 0.815 milliseconds (cumulative count 99960)
221s 99.976% <= 0.831 milliseconds (cumulative count 99980)
221s 99.988% <= 0.847 milliseconds (cumulative count 99990)
221s 99.994% <= 0.855 milliseconds (cumulative count 100000)
221s 100.000% <= 0.855 milliseconds (cumulative count 100000)
221s 
221s Cumulative distribution of latencies:
221s 0.000% <= 0.103 milliseconds (cumulative count 0)
221s 1.150% <= 0.207 milliseconds (cumulative count 1150)
221s 10.130% <= 0.303 milliseconds (cumulative count 10130)
221s 58.260% <= 0.407 milliseconds (cumulative count 58260)
221s 91.500% <= 0.503 milliseconds (cumulative count 91500)
221s 98.950% <= 0.607 milliseconds (cumulative count 98950)
221s 99.540% <= 0.703 milliseconds (cumulative count 99540)
221s 99.950% <= 0.807 milliseconds (cumulative count 99950)
221s 100.000% <= 0.903 milliseconds (cumulative count 100000)
221s 
221s Summary:
221s   throughput summary: 1111111.12 requests per second
221s   latency summary (msec):
221s           avg       min       p50       p95       p99       max
221s         0.395     0.112     0.391     0.527     0.615     0.855
221s ====== RPUSH ======
221s   100000 requests completed in 0.08 seconds
221s   50 parallel clients
221s   3 bytes payload
221s   keep alive: 1
221s   host configuration "save": 3600 1 300 100 60 10000
221s   host configuration "appendonly": no
221s   multi-thread: no
221s 
221s Latency by percentile distribution:
221s 0.000% <= 0.127 milliseconds (cumulative count 10)
221s 50.000% <= 0.343 milliseconds (cumulative count 51200)
221s 75.000% <= 0.399 milliseconds (cumulative count 76970)
221s 87.500% <= 0.439 milliseconds (cumulative count 88880)
221s 93.750% <= 0.463 milliseconds (cumulative count 94890)
221s 96.875% <= 0.479 milliseconds (cumulative count 97140)
221s 98.438% <= 0.503 milliseconds (cumulative count 98750)
221s 99.219% <= 0.535 milliseconds (cumulative count 99260)
221s 99.609% <= 0.607 milliseconds (cumulative count 99620)
221s 99.805% <= 0.655 milliseconds (cumulative count 99830)
221s 99.902% <= 0.695 milliseconds (cumulative count 99910)
221s 99.951% <= 0.751 milliseconds (cumulative count 99970)
221s 99.976% <= 0.759 milliseconds (cumulative count 99980)
221s 99.988% <= 0.767 milliseconds (cumulative count 99990)
221s 99.994% <= 0.775 milliseconds (cumulative count 100000)
221s 100.000% <= 0.775 milliseconds (cumulative count 100000)
221s 
221s Cumulative distribution of latencies:
221s 0.000% <= 0.103 milliseconds (cumulative count 0)
221s 1.120% <= 0.207 milliseconds (cumulative count 1120)
221s 28.160% <= 0.303 milliseconds (cumulative count 28160)
221s 79.660% <= 0.407 milliseconds (cumulative count 79660)
221s 98.750% <= 0.503 milliseconds (cumulative count 98750)
221s 99.620% <= 0.607 milliseconds (cumulative count 99620)
221s 99.920% <= 0.703 milliseconds (cumulative count 99920)
221s 100.000% <= 0.807 milliseconds (cumulative count 100000)
221s 
221s Summary:
221s   throughput summary: 1250000.00 requests per second
221s   latency summary (msec):
221s           avg       min       p50       p95       p99       max
221s         0.351     0.120     0.343     0.471     0.511     0.775
221s ====== LPOP ======
221s   100000 requests completed in 0.09 seconds
221s   50 parallel clients
221s   3 bytes payload
221s   keep alive: 1
221s   host configuration "save": 3600 1 300 100 60 10000
221s   host configuration "appendonly": no
221s   multi-thread: no
221s 
221s Latency by percentile distribution:
221s 0.000% <= 0.135 milliseconds (cumulative count 40)
221s 50.000% <= 0.407 milliseconds (cumulative count 53460)
221s 75.000% <= 0.463 milliseconds (cumulative count 77730)
221s 87.500% <= 0.503 milliseconds (cumulative count 89540)
221s 93.750% <= 0.527 milliseconds (cumulative count 94560)
221s 96.875% <= 0.551 milliseconds (cumulative count 97470)
221s 98.438% <= 0.575 milliseconds (cumulative count 98820)
221s 99.219% <= 0.599 milliseconds (cumulative count 99290)
221s 99.609% <= 0.647 milliseconds (cumulative count 99620)
221s 99.805% <= 0.703 milliseconds (cumulative count 99830)
221s 99.902% <= 0.751 milliseconds (cumulative count 99920)
221s 99.951% <= 0.783 milliseconds (cumulative count 99960)
221s 99.976% <= 0.823 milliseconds (cumulative count 99980)
221s 99.988% <= 0.839 milliseconds (cumulative count 99990)
221s 99.994% <= 0.863 milliseconds (cumulative count 100000)
221s 100.000% <= 0.863 milliseconds (cumulative count 100000)
221s 
221s Cumulative distribution of latencies:
221s 0.000% <= 0.103 milliseconds (cumulative count 0)
221s 1.630% <= 0.207 milliseconds (cumulative count 1630)
221s 7.770% <= 0.303 milliseconds (cumulative count 7770)
221s 53.460% <= 0.407 milliseconds (cumulative count 53460)
221s 89.540% <= 0.503 milliseconds (cumulative count 89540)
221s 99.370% <= 0.607 milliseconds (cumulative count 99370)
221s 99.830% <= 0.703 milliseconds (cumulative count 99830)
221s 99.970% <= 0.807 milliseconds (cumulative count 99970)
221s 100.000% <= 0.903 milliseconds (cumulative count 100000)
221s 
221s Summary:
221s   throughput summary: 1098901.12 requests per second
221s   latency summary (msec):
221s           avg       min       p50       p95       p99       max
221s         0.403     0.128     0.407     0.535     0.583     0.863
221s ====== RPOP ======
221s   100000 requests completed in 0.09 seconds
221s   50 parallel clients
221s   3 bytes payload
221s   keep alive: 1
221s   host configuration "save": 3600 1 300 100 60 10000
221s   host configuration "appendonly": no
221s   multi-thread: no
221s 
221s Latency by percentile distribution:
221s 0.000% <= 0.127 milliseconds (cumulative count 30)
221s 50.000% <= 0.367 milliseconds (cumulative count 52490)
221s 75.000% <= 0.423 milliseconds (cumulative count 77590)
221s 87.500% <= 0.463 milliseconds (cumulative count 89080)
221s 93.750% <= 0.487 milliseconds (cumulative count 94700)
221s 96.875% <= 0.503 milliseconds (cumulative count 96900)
221s 98.438% <= 0.527 milliseconds (cumulative count 98450)
221s 99.219% <= 0.623 milliseconds (cumulative count 99250)
221s 99.609% <= 0.679 milliseconds (cumulative count 99630)
221s 99.805% <= 0.719 milliseconds (cumulative count 99840)
221s 99.902% <= 0.759 milliseconds (cumulative count 99910)
221s 99.951% <= 0.831 milliseconds (cumulative count 99960)
221s 99.976% <= 0.847 milliseconds (cumulative count 99980)
221s 99.988% <= 0.855 milliseconds (cumulative count 99990)
221s 99.994% <= 0.871 milliseconds (cumulative count 100000)
221s 100.000% <= 0.871 milliseconds (cumulative count 100000)
221s 
221s Cumulative distribution of latencies:
221s 0.000% <= 0.103 milliseconds (cumulative count 0)
221s 1.530% <= 0.207 milliseconds (cumulative count 1530)
221s 16.930% <= 0.303 milliseconds (cumulative count 16930)
221s 71.440% <= 0.407 milliseconds (cumulative count 71440)
221s 96.900% <= 0.503 milliseconds (cumulative count 96900)
221s 99.160% <= 0.607 milliseconds (cumulative count 99160)
221s 99.760% <= 0.703 milliseconds (cumulative count 99760)
221s 99.940% <= 0.807 milliseconds (cumulative count 99940)
221s 100.000% <= 0.903 milliseconds (cumulative count 100000)
221s 
221s Summary:
221s   throughput summary: 1176470.62 requests per second
221s   latency summary (msec):
221s           avg       min       p50       p95       p99       max
221s         0.370     0.120     0.367     0.495     0.567     0.871
221s ====== SADD ======
221s   100000 requests completed in 0.08 seconds
221s   50 parallel clients
221s   3 bytes payload
221s   keep alive: 1
221s   host configuration "save": 3600 1 300 100 60 10000
221s   host configuration "appendonly": no
221s   multi-thread: no
221s 
221s Latency by percentile distribution:
221s 0.000% <= 0.127 milliseconds (cumulative count 30)
221s 50.000% <= 0.343 milliseconds (cumulative count 51340)
221s 75.000% <= 0.399 milliseconds (cumulative count 76570)
221s 87.500% <= 0.439 milliseconds (cumulative count 88650)
221s 93.750% <= 0.463 milliseconds (cumulative count 93980)
221s 96.875% <= 0.495 milliseconds (cumulative count 97040)
221s 98.438% <= 0.543 milliseconds (cumulative count 98460)
221s 99.219% <= 0.639 milliseconds (cumulative count 99280)
221s 99.609% <= 0.679 milliseconds (cumulative count 99620)
221s 99.805% <= 0.743 milliseconds (cumulative count 99820)
221s 99.902% <= 0.767 milliseconds (cumulative count 99910)
221s 99.951% <= 0.791 milliseconds (cumulative count 99960)
221s 99.976% <= 0.815 milliseconds (cumulative count 99980)
221s 99.988% <= 0.839 milliseconds (cumulative count 100000)
221s 100.000% <= 0.839 milliseconds (cumulative count 100000)
221s 
221s Cumulative distribution of latencies:
221s 0.000% <= 0.103 milliseconds (cumulative count 0)
221s 1.280% <= 0.207 milliseconds (cumulative count 1280)
221s 30.250% <= 0.303 milliseconds (cumulative count 30250)
221s 79.400% <= 0.407 milliseconds (cumulative count 79400)
221s 97.490% <= 0.503 milliseconds (cumulative count 97490)
221s 99.050% <= 0.607 milliseconds (cumulative count 99050)
221s 99.720% <= 0.703 milliseconds (cumulative count 99720)
221s 99.970% <= 0.807 milliseconds (cumulative count 99970)
221s 100.000% <= 0.903 milliseconds (cumulative count 100000)
221s 
221s Summary:
221s   throughput summary: 1234567.88 requests per second
221s   latency summary (msec):
221s           avg       min       p50       p95       p99       max
221s         0.352     0.120     0.343     0.479     0.607     0.839
222s ====== HSET ======
222s   100000 requests completed in 0.09 seconds
222s   50 parallel clients
222s   3 bytes payload
222s   keep alive: 1
222s   host configuration "save": 3600 1 300 100 60 10000
222s   host configuration "appendonly": no
222s   multi-thread: no
222s 
222s Latency by percentile distribution:
222s 0.000% <= 0.127 milliseconds (cumulative count 30)
222s 50.000% <= 0.375 milliseconds (cumulative count 50330)
222s 75.000% <= 0.431 milliseconds (cumulative count 76050)
222s 87.500% <= 0.471 milliseconds (cumulative count 88490)
222s 93.750% <= 0.495 milliseconds (cumulative count 94670)
222s 96.875% <= 0.519 milliseconds (cumulative count 97240)
222s 98.438% <= 0.543 milliseconds (cumulative count 98640)
222s 99.219% <= 0.623 milliseconds (cumulative count 99240)
222s 99.609% <= 0.711 milliseconds (cumulative count 99640)
222s 99.805% <= 0.791 milliseconds (cumulative count 99810)
222s 99.902% <= 0.839 milliseconds (cumulative count 99920)
222s 99.951% <= 0.871 milliseconds (cumulative count 99960)
222s 99.976% <= 0.895 milliseconds (cumulative count 99980)
222s 99.988% <= 0.903 milliseconds (cumulative count 99990)
222s 99.994% <= 0.919 milliseconds (cumulative count 100000)
222s 100.000% <= 0.919 milliseconds (cumulative count 100000)
222s 
222s Cumulative distribution of latencies:
222s 0.000% <= 0.103 milliseconds (cumulative count 0)
222s 1.210% <= 0.207 milliseconds (cumulative count 1210)
222s 11.440% <= 0.303 milliseconds (cumulative count 11440)
222s 66.110% <= 0.407 milliseconds (cumulative count 66110)
222s 95.970% <= 0.503 milliseconds (cumulative count 95970)
222s 99.170% <= 0.607 milliseconds (cumulative count 99170)
222s 99.580% <= 0.703 milliseconds (cumulative count 99580)
222s 99.850% <= 0.807 milliseconds (cumulative count 99850)
222s 99.990% <= 0.903 milliseconds (cumulative count 99990)
222s 100.000% <= 1.007 milliseconds (cumulative count 100000)
222s 
222s Summary:
222s   throughput summary: 1162790.62 requests per second
222s   latency summary (msec):
222s           avg       min       p50       p95       p99       max
222s         0.381     0.120     0.375     0.503     0.567     0.919
222s ====== SPOP ======
222s   100000 requests completed in 0.07 seconds
222s   50 parallel clients
222s   3 bytes payload
222s   keep alive: 1
222s   host configuration "save": 3600 1 300 100 60 10000
222s   host configuration "appendonly": no
222s   multi-thread: no
222s 
222s Latency by percentile distribution:
222s 0.000% <= 0.127 milliseconds (cumulative count 30)
222s 50.000% <= 0.295 milliseconds (cumulative count 51080)
222s 75.000% <= 0.351 milliseconds (cumulative count 76360)
222s 87.500% <= 0.391 milliseconds (cumulative count 88780)
222s 93.750% <= 0.415 milliseconds (cumulative count 94290)
222s 96.875% <= 0.447 milliseconds (cumulative count 97180)
222s 98.438% <= 0.511 milliseconds (cumulative count 98450)
222s 99.219% <= 0.671 milliseconds (cumulative count 99220)
222s 99.609% <= 0.751 milliseconds (cumulative count 99610)
222s 99.805% <= 0.815 milliseconds (cumulative count 99840)
222s 99.902% <= 0.871 milliseconds (cumulative count 99910)
222s 99.951% <= 0.919 milliseconds (cumulative count 99960)
222s 99.976% <= 0.943 milliseconds (cumulative count 99980)
222s 99.988% <= 0.951 milliseconds (cumulative count 99990)
222s 99.994% <= 0.967 milliseconds (cumulative count 100000)
222s 100.000% <= 0.967 milliseconds (cumulative count 100000)
222s 
222s Cumulative distribution of latencies:
222s 0.000% <= 0.103 milliseconds (cumulative count 0)
222s 2.260% <= 0.207 milliseconds (cumulative count 2260)
222s 55.820% <= 0.303 milliseconds (cumulative count 55820)
222s 92.940% <= 0.407 milliseconds (cumulative count 92940)
222s 98.360% <= 0.503 milliseconds (cumulative count 98360)
222s 99.010% <= 0.607 milliseconds (cumulative count 99010)
222s 99.390% <= 0.703 milliseconds (cumulative count 99390)
222s 99.800% <= 0.807 milliseconds (cumulative count 99800)
222s 99.950% <= 0.903 milliseconds (cumulative count 99950)
222s 100.000% <= 1.007 milliseconds (cumulative count 100000)
222s 
222s Summary:
222s   throughput summary: 1408450.62 requests per second
222s   latency summary (msec):
222s           avg       min       p50       p95       p99       max
222s         0.310     0.120     0.295     0.423     0.607     0.967
222s ====== ZADD ======
222s   100000 requests completed in 0.09 seconds
222s   50 parallel clients
222s   3 bytes payload
222s   keep alive: 1
222s   host configuration "save": 3600 1 300 100 60 10000
222s   host configuration "appendonly": no
222s   multi-thread: no
222s 
222s Latency by percentile distribution:
222s 0.000% <= 0.143 milliseconds (cumulative count 20)
222s 50.000% <= 0.399 milliseconds (cumulative count 52030)
222s 75.000% <= 0.455 milliseconds (cumulative count 77130)
222s 87.500% <= 0.495 milliseconds (cumulative count 89740)
222s 93.750% <= 0.511 milliseconds (cumulative count 93850)
222s 96.875% <= 0.535 milliseconds (cumulative count 97420)
222s 98.438% <= 0.551 milliseconds (cumulative count 98680)
222s 99.219% <= 0.567 milliseconds (cumulative count 99230)
222s 99.609% <= 0.591 milliseconds (cumulative count 99670)
222s 99.805% <= 0.615 milliseconds (cumulative count 99840)
222s 99.902% <= 0.639 milliseconds (cumulative count 99910)
222s 99.951% <= 0.655 milliseconds (cumulative count 99960)
222s 99.976% <= 0.671 milliseconds (cumulative count 99980)
222s 99.988% <= 0.687 milliseconds (cumulative count 99990)
222s 99.994% <= 0.695 milliseconds (cumulative count 100000)
222s 100.000% <= 0.695 milliseconds (cumulative count 100000)
222s 
222s Cumulative distribution of latencies:
222s 0.000% <= 0.103 milliseconds (cumulative count 0)
222s 0.500% <= 0.207 milliseconds (cumulative count 500)
222s 6.880% <= 0.303 milliseconds (cumulative count 6880)
222s 55.870% <= 0.407 milliseconds (cumulative count 55870)
222s 91.830% <= 0.503 milliseconds (cumulative count 91830)
222s 99.780% <= 0.607 milliseconds (cumulative count 99780)
222s 100.000% <= 0.703 milliseconds (cumulative count 100000)
222s 
222s Summary:
222s   throughput summary: 1111111.12 requests per second
222s   latency summary (msec):
222s           avg       min       p50       p95       p99       max
222s         0.399     0.136     0.399     0.519     0.567     0.695
222s ====== ZPOPMIN ======
222s   100000 requests completed in 0.07 seconds
222s   50 parallel clients
222s   3 bytes payload
222s   keep alive: 1
222s   host configuration "save": 3600 1 300 100 60 10000
222s   host configuration "appendonly": no
222s   multi-thread: no
222s 
222s Latency by percentile distribution:
222s 0.000% <= 0.111 milliseconds (cumulative count 10)
222s 50.000% <= 0.279 milliseconds (cumulative count 50980)
222s 75.000% <= 0.335 milliseconds (cumulative count 77320)
222s 87.500% <= 0.375 milliseconds (cumulative count 89250)
222s 93.750% <= 0.399 milliseconds (cumulative count 95150)
222s 96.875% <= 0.423 milliseconds (cumulative count 97170)
222s 98.438% <= 0.463 milliseconds (cumulative count 98470)
222s 99.219% <= 0.575 milliseconds (cumulative count 99260)
222s 99.609% <= 0.639 milliseconds (cumulative count 99630)
222s 99.805% <= 0.695 milliseconds (cumulative count 99840)
222s 99.902% <= 0.719 milliseconds (cumulative count 99910)
222s 99.951% <= 0.767 milliseconds (cumulative count 99960)
222s 99.976% <= 0.783 milliseconds (cumulative count 99980)
222s 99.988% <= 0.791 milliseconds (cumulative count 99990)
222s 99.994% <= 0.815 milliseconds (cumulative count 100000)
222s 100.000% <= 0.815 milliseconds (cumulative count 100000)
222s 
222s Cumulative distribution of latencies:
222s 0.000% <= 0.103 milliseconds (cumulative count 0)
222s 4.400% <= 0.207 milliseconds (cumulative count 4400)
222s 65.700% <= 0.303 milliseconds (cumulative count 65700)
222s 96.100% <= 0.407 milliseconds (cumulative count 96100)
222s 98.720% <= 0.503 milliseconds (cumulative count 98720)
222s 99.430% <= 0.607 milliseconds (cumulative count 99430)
222s 99.870% <= 0.703 milliseconds (cumulative count 99870)
222s 99.990% <= 0.807 milliseconds (cumulative count 99990)
222s 100.000% <= 0.903 milliseconds (cumulative count 100000)
222s 
222s Summary:
222s   throughput summary: 1470588.12 requests per second
222s   latency summary (msec):
222s           avg       min       p50       p95       p99       max
222s         0.293     0.104     0.279     0.399     0.543     0.815
222s ====== LPUSH (needed to benchmark LRANGE) ======
222s   100000 requests completed in 0.09 seconds
222s   50 parallel clients
222s   3 bytes payload
222s   keep alive: 1
222s   host configuration "save": 3600 1 300 100 60 10000
222s   host configuration "appendonly": no
222s   multi-thread: no
222s 
222s Latency by percentile distribution:
222s 0.000% <= 0.127 milliseconds (cumulative count 10)
222s 50.000% <= 0.391 milliseconds (cumulative count 50680)
222s 75.000% <= 0.447 milliseconds (cumulative count 75370)
222s 87.500% <= 0.487 milliseconds (cumulative count 88160)
222s 93.750% <= 0.519 milliseconds (cumulative count 94960)
222s 96.875% <= 0.535 milliseconds (cumulative count 96980)
222s 98.438% <= 0.567 milliseconds (cumulative count 98620)
222s 99.219% <= 0.631 milliseconds (cumulative count 99220)
222s 99.609% <= 0.711 milliseconds (cumulative count 99640)
222s 99.805% <= 0.767 milliseconds (cumulative count 99820)
222s 99.902% <= 0.799 milliseconds (cumulative count 99910)
222s 99.951% <= 0.847 milliseconds (cumulative count 99960)
222s 99.976% <= 0.871 milliseconds (cumulative count 99980)
222s 99.988% <= 0.887 milliseconds (cumulative count 99990)
222s 99.994% <= 0.895 milliseconds (cumulative count 100000)
222s 100.000% <= 0.895 milliseconds (cumulative count 100000)
222s 
222s Cumulative distribution of latencies:
222s 0.000% <= 0.103 milliseconds (cumulative count 0)
222s 1.220% <= 0.207 milliseconds (cumulative count 1220)
222s 8.730% <= 0.303 milliseconds (cumulative count 8730)
222s 58.170% <= 0.407 milliseconds (cumulative count 58170)
222s 92.010% <= 0.503 milliseconds (cumulative count 92010)
222s 99.100% <= 0.607 milliseconds (cumulative count 99100)
222s 99.590% <= 0.703 milliseconds (cumulative count 99590)
222s 99.920% <= 0.807 milliseconds (cumulative count 99920)
222s 100.000% <= 0.903 milliseconds (cumulative count 100000)
222s 
222s Summary:
222s   throughput summary: 1111111.12 requests per second
222s   latency summary (msec):
222s           avg       min       p50       p95       p99       max
222s         0.395     0.120     0.391     0.527     0.599     0.895
223s  
LRANGE_100 (first 100 elements): rps=64200.0 (overall: 144594.6) avg_msec=2.600 (overall: 2.600)
LRANGE_100 (first 100 elements): rps=151792.8 (overall: 149585.6) avg_msec=2.539 (overall: 2.557)
LRANGE_100 (first 100 elements): rps=151792.8 (overall: 150489.4) avg_msec=2.498 (overall: 2.533)
====== LRANGE_100 (first 100 elements) ======
223s   100000 requests completed in 0.66 seconds
223s   50 parallel clients
223s   3 bytes payload
223s   keep alive: 1
223s   host configuration "save": 3600 1 300 100 60 10000
223s   host configuration "appendonly": no
223s   multi-thread: no
223s 
223s Latency by percentile distribution:
223s 0.000% <= 0.175 milliseconds (cumulative count 10)
223s 50.000% <= 2.487 milliseconds (cumulative count 50100)
223s 75.000% <= 2.927 milliseconds (cumulative count 75280)
223s 87.500% <= 3.207 milliseconds (cumulative count 87680)
223s 93.750% <= 3.495 milliseconds (cumulative count 93780)
223s 96.875% <= 3.791 milliseconds (cumulative count 96880)
223s 98.438% <= 4.199 milliseconds (cumulative count 98460)
223s 99.219% <= 4.463 milliseconds (cumulative count 99230)
223s 99.609% <= 4.623 milliseconds (cumulative count 99610)
223s 99.805% <= 4.711 milliseconds (cumulative count 99820)
223s 99.902% <= 4.767 milliseconds (cumulative count 99910)
223s 99.951% <= 4.815 milliseconds (cumulative count 99960)
223s 99.976% <= 4.823 milliseconds (cumulative count 99990)
223s 99.994% <= 4.839 milliseconds (cumulative count 100000)
223s 100.000% <= 4.839 milliseconds (cumulative count 100000)
223s 
223s Cumulative distribution of latencies:
223s 0.000% <= 0.103 milliseconds (cumulative count 0)
223s 0.010% <= 0.207 milliseconds (cumulative count 10)
223s 0.020% <= 1.007 milliseconds (cumulative count 20)
223s 0.060% <= 1.103 milliseconds (cumulative count 60)
223s 0.420% <= 1.207 milliseconds (cumulative count 420)
223s 1.150% <= 1.303 milliseconds (cumulative count 1150)
223s 2.320% <= 1.407 milliseconds (cumulative count 2320)
223s 3.810% <= 1.503 milliseconds (cumulative count 3810)
223s 5.780% <= 1.607 milliseconds (cumulative count 5780)
223s 7.760% <= 1.703 milliseconds (cumulative count 7760)
223s 10.580% <= 1.807 milliseconds (cumulative count 10580)
223s 15.110% <= 1.903 milliseconds (cumulative count 15110)
223s 20.880% <= 2.007 milliseconds (cumulative count 20880)
223s 26.620% <= 2.103 milliseconds (cumulative count 26620)
223s 84.210% <= 3.103 milliseconds (cumulative count 84210)
223s 98.200% <= 4.103 milliseconds (cumulative count 98200)
223s 100.000% <= 5.103 milliseconds (cumulative count 100000)
223s 
223s Summary:
223s   throughput summary: 150829.56 requests per second
223s   latency summary (msec):
223s           avg       min       p50       p95       p99       max
223s         2.527     0.168     2.487     3.583     4.383     4.839
225s  
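The LRANGE_* blocks below read increasingly long slices of the list populated by the preceding LPUSH run, which is why the average latency climbs from about 2.5 ms here to roughly 20 ms for LRANGE_600. A rough illustration of the operation being timed, using redis-cli and a hypothetical key name (the benchmark fills its own key via LPUSH):

    redis-cli RPUSH mylist a b c        # populate a list
    redis-cli LRANGE mylist 0 99        # LRANGE_100: return the first 100 elements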
LRANGE_300 (first 300 elements): rps=24688.0 (overall: 31015.1) avg_msec=9.821 (overall: 9.821)
LRANGE_300 (first 300 elements): rps=30454.2 (overall: 30702.2) avg_msec=9.994 (overall: 9.917)
LRANGE_300 (first 300 elements): rps=37625.5 (overall: 33181.2) avg_msec=7.481 (overall: 8.928)
LRANGE_300 (first 300 elements): rps=36334.7 (overall: 34012.6) avg_msec=8.137 (overall: 8.705)
LRANGE_300 (first 300 elements): rps=42312.0 (overall: 35738.8) avg_msec=6.231 (overall: 8.096)
LRANGE_300 (first 300 elements): rps=35494.1 (overall: 35696.2) avg_msec=8.054 (overall: 8.089)
LRANGE_300 (first 300 elements): rps=42239.0 (overall: 36658.9) avg_msec=5.996 (overall: 7.734)
LRANGE_300 (first 300 elements): rps=34309.5 (overall: 36356.5) avg_msec=8.528 (overall: 7.830)
LRANGE_300 (first 300 elements): rps=28653.4 (overall: 35481.2) avg_msec=10.717 (overall: 8.095)
LRANGE_300 (first 300 elements): rps=29330.7 (overall: 34853.7) avg_msec=10.344 (overall: 8.288)
LRANGE_300 (first 300 elements): rps=27040.0 (overall: 34132.8) avg_msec=11.999 (overall: 8.560)
====== LRANGE_300 (first 300 elements) ======
225s   100000 requests completed in 2.93 seconds
225s   50 parallel clients
225s   3 bytes payload
225s   keep alive: 1
225s   host configuration "save": 3600 1 300 100 60 10000
225s   host configuration "appendonly": no
225s   multi-thread: no
225s 
225s Latency by percentile distribution:
225s 0.000% <= 0.255 milliseconds (cumulative count 10)
225s 50.000% <= 7.791 milliseconds (cumulative count 50000)
225s 75.000% <= 10.815 milliseconds (cumulative count 75040)
225s 87.500% <= 13.391 milliseconds (cumulative count 87520)
225s 93.750% <= 15.207 milliseconds (cumulative count 93760)
225s 96.875% <= 16.607 milliseconds (cumulative count 96920)
225s 98.438% <= 18.143 milliseconds (cumulative count 98450)
225s 99.219% <= 19.855 milliseconds (cumulative count 99220)
225s 99.609% <= 22.463 milliseconds (cumulative count 99620)
225s 99.805% <= 23.583 milliseconds (cumulative count 99810)
225s 99.902% <= 24.447 milliseconds (cumulative count 99910)
225s 99.951% <= 29.791 milliseconds (cumulative count 99960)
225s 99.976% <= 30.079 milliseconds (cumulative count 99980)
225s 99.988% <= 31.087 milliseconds (cumulative count 99990)
225s 99.994% <= 31.231 milliseconds (cumulative count 100000)
225s 100.000% <= 31.231 milliseconds (cumulative count 100000)
225s 
225s Cumulative distribution of latencies:
225s 0.000% <= 0.103 milliseconds (cumulative count 0)
225s 0.010% <= 0.303 milliseconds (cumulative count 10)
225s 0.020% <= 0.407 milliseconds (cumulative count 20)
225s 0.030% <= 0.503 milliseconds (cumulative count 30)
225s 0.360% <= 0.703 milliseconds (cumulative count 360)
225s 0.690% <= 0.807 milliseconds (cumulative count 690)
225s 1.060% <= 0.903 milliseconds (cumulative count 1060)
225s 1.400% <= 1.007 milliseconds (cumulative count 1400)
225s 1.650% <= 1.103 milliseconds (cumulative count 1650)
225s 1.950% <= 1.207 milliseconds (cumulative count 1950)
225s 2.120% <= 1.303 milliseconds (cumulative count 2120)
225s 2.230% <= 1.407 milliseconds (cumulative count 2230)
225s 2.330% <= 1.503 milliseconds (cumulative count 2330)
225s 2.480% <= 1.607 milliseconds (cumulative count 2480)
225s 2.540% <= 1.703 milliseconds (cumulative count 2540)
225s 2.610% <= 1.807 milliseconds (cumulative count 2610)
225s 2.640% <= 1.903 milliseconds (cumulative count 2640)
225s 2.730% <= 2.007 milliseconds (cumulative count 2730)
225s 2.770% <= 2.103 milliseconds (cumulative count 2770)
225s 3.740% <= 3.103 milliseconds (cumulative count 3740)
225s 6.310% <= 4.103 milliseconds (cumulative count 6310)
225s 13.770% <= 5.103 milliseconds (cumulative count 13770)
225s 27.590% <= 6.103 milliseconds (cumulative count 27590)
225s 41.920% <= 7.103 milliseconds (cumulative count 41920)
225s 53.320% <= 8.103 milliseconds (cumulative count 53320)
225s 62.610% <= 9.103 milliseconds (cumulative count 62610)
225s 70.370% <= 10.103 milliseconds (cumulative count 70370)
225s 76.650% <= 11.103 milliseconds (cumulative count 76650)
225s 81.970% <= 12.103 milliseconds (cumulative count 81970)
225s 86.290% <= 13.103 milliseconds (cumulative count 86290)
225s 90.150% <= 14.103 milliseconds (cumulative count 90150)
225s 93.480% <= 15.103 milliseconds (cumulative count 93480)
225s 95.920% <= 16.103 milliseconds (cumulative count 95920)
225s 97.640% <= 17.103 milliseconds (cumulative count 97640)
225s 98.400% <= 18.111 milliseconds (cumulative count 98400)
225s 99.000% <= 19.103 milliseconds (cumulative count 99000)
225s 99.260% <= 20.111 milliseconds (cumulative count 99260)
225s 99.380% <= 21.103 milliseconds (cumulative count 99380)
225s 99.540% <= 22.111 milliseconds (cumulative count 99540)
225s 99.750% <= 23.103 milliseconds (cumulative count 99750)
225s 99.860% <= 24.111 milliseconds (cumulative count 99860)
225s 99.950% <= 25.103 milliseconds (cumulative count 99950)
225s 99.980% <= 30.111 milliseconds (cumulative count 99980)
225s 99.990% <= 31.103 milliseconds (cumulative count 99990)
225s 100.000% <= 32.111 milliseconds (cumulative count 100000)
225s 
225s Summary:
225s   throughput summary: 34106.41 requests per second
225s   latency summary (msec):
225s           avg       min       p50       p95       p99       max
225s         8.594     0.248     7.791    15.695    19.103    31.231
230s  
LRANGE_500 (first 500 elements): rps=1593.7 (overall: 12363.6) avg_msec=20.728 (overall: 20.728)
LRANGE_500 (first 500 elements): rps=15794.5 (overall: 15398.6) avg_msec=17.963 (overall: 18.219)
LRANGE_500 (first 500 elements): rps=20741.0 (overall: 17895.7) avg_msec=13.205 (overall: 15.503)
LRANGE_500 (first 500 elements): rps=21368.0 (overall: 18998.7) avg_msec=12.778 (overall: 14.529)
LRANGE_500 (first 500 elements): rps=23621.5 (overall: 20116.6) avg_msec=10.280 (overall: 13.323)
LRANGE_500 (first 500 elements): rps=24378.5 (overall: 20946.5) avg_msec=8.386 (overall: 12.204)
LRANGE_500 (first 500 elements): rps=24048.0 (overall: 21450.3) avg_msec=8.510 (overall: 11.531)
LRANGE_500 (first 500 elements): rps=23145.7 (overall: 21690.5) avg_msec=10.727 (overall: 11.410)
LRANGE_500 (first 500 elements): rps=23789.7 (overall: 21949.1) avg_msec=9.600 (overall: 11.168)
LRANGE_500 (first 500 elements): rps=23738.1 (overall: 22145.4) avg_msec=9.694 (overall: 10.995)
LRANGE_500 (first 500 elements): rps=17048.0 (overall: 21645.1) avg_msec=14.381 (overall: 11.256)
LRANGE_500 (first 500 elements): rps=18405.5 (overall: 21351.3) avg_msec=15.638 (overall: 11.599)
LRANGE_500 (first 500 elements): rps=21415.0 (overall: 21356.6) avg_msec=13.027 (overall: 11.718)
LRANGE_500 (first 500 elements): rps=21689.2 (overall: 21381.8) avg_msec=12.364 (overall: 11.767)
LRANGE_500 (first 500 elements): rps=21757.9 (overall: 21408.5) avg_msec=11.989 (overall: 11.783)
LRANGE_500 (first 500 elements): rps=23011.8 (overall: 21515.3) avg_msec=9.284 (overall: 11.605)
LRANGE_500 (first 500 elements): rps=21502.0 (overall: 21514.5) avg_msec=12.981 (overall: 11.691)
LRANGE_500 (first 500 elements): rps=20894.1 (overall: 21477.9) avg_msec=12.968 (overall: 11.764)
LRANGE_500 (first 500 elements): rps=21388.0 (overall: 21473.0) avg_msec=12.894 (overall: 11.826)
====== LRANGE_500 (first 500 elements) ======
230s   100000 requests completed in 4.66 seconds
230s   50 parallel clients
230s   3 bytes payload
230s   keep alive: 1
230s   host configuration "save": 3600 1 300 100 60 10000
230s   host configuration "appendonly": no
230s   multi-thread: no
230s 
230s Latency by percentile distribution:
230s 0.000% <= 0.399 milliseconds (cumulative count 10)
230s 50.000% <= 10.807 milliseconds (cumulative count 50020)
230s 75.000% <= 14.791 milliseconds (cumulative count 75000)
230s 87.500% <= 17.871 milliseconds (cumulative count 87510)
230s 93.750% <= 19.439 milliseconds (cumulative count 93750)
230s 96.875% <= 20.863 milliseconds (cumulative count 96880)
230s 98.438% <= 22.287 milliseconds (cumulative count 98440)
230s 99.219% <= 24.735 milliseconds (cumulative count 99220)
230s 99.609% <= 28.063 milliseconds (cumulative count 99610)
230s 99.805% <= 29.055 milliseconds (cumulative count 99810)
230s 99.902% <= 29.423 milliseconds (cumulative count 99910)
230s 99.951% <= 29.679 milliseconds (cumulative count 99960)
230s 99.976% <= 29.823 milliseconds (cumulative count 99980)
230s 99.988% <= 29.887 milliseconds (cumulative count 99990)
230s 99.994% <= 30.095 milliseconds (cumulative count 100000)
230s 100.000% <= 30.095 milliseconds (cumulative count 100000)
230s 
230s Cumulative distribution of latencies:
230s 0.000% <= 0.103 milliseconds (cumulative count 0)
230s 0.010% <= 0.407 milliseconds (cumulative count 10)
230s 0.030% <= 0.703 milliseconds (cumulative count 30)
230s 0.060% <= 0.807 milliseconds (cumulative count 60)
230s 0.080% <= 0.903 milliseconds (cumulative count 80)
230s 0.120% <= 1.007 milliseconds (cumulative count 120)
230s 0.180% <= 1.103 milliseconds (cumulative count 180)
230s 0.250% <= 1.207 milliseconds (cumulative count 250)
230s 0.270% <= 1.303 milliseconds (cumulative count 270)
230s 0.290% <= 1.407 milliseconds (cumulative count 290)
230s 0.340% <= 1.503 milliseconds (cumulative count 340)
230s 0.380% <= 1.607 milliseconds (cumulative count 380)
230s 0.390% <= 1.703 milliseconds (cumulative count 390)
230s 0.410% <= 1.807 milliseconds (cumulative count 410)
230s 0.430% <= 1.903 milliseconds (cumulative count 430)
230s 0.460% <= 2.007 milliseconds (cumulative count 460)
230s 0.480% <= 2.103 milliseconds (cumulative count 480)
230s 1.410% <= 3.103 milliseconds (cumulative count 1410)
230s 2.600% <= 4.103 milliseconds (cumulative count 2600)
230s 3.630% <= 5.103 milliseconds (cumulative count 3630)
230s 6.820% <= 6.103 milliseconds (cumulative count 6820)
230s 10.610% <= 7.103 milliseconds (cumulative count 10610)
230s 16.580% <= 8.103 milliseconds (cumulative count 16580)
230s 28.640% <= 9.103 milliseconds (cumulative count 28640)
230s 42.680% <= 10.103 milliseconds (cumulative count 42680)
230s 52.620% <= 11.103 milliseconds (cumulative count 52620)
230s 61.350% <= 12.103 milliseconds (cumulative count 61350)
230s 67.750% <= 13.103 milliseconds (cumulative count 67750)
230s 72.580% <= 14.103 milliseconds (cumulative count 72580)
230s 76.020% <= 15.103 milliseconds (cumulative count 76020)
230s 80.000% <= 16.103 milliseconds (cumulative count 80000)
230s 84.330% <= 17.103 milliseconds (cumulative count 84330)
230s 88.520% <= 18.111 milliseconds (cumulative count 88520)
230s 92.520% <= 19.103 milliseconds (cumulative count 92520)
230s 95.560% <= 20.111 milliseconds (cumulative count 95560)
230s 97.230% <= 21.103 milliseconds (cumulative count 97230)
230s 98.300% <= 22.111 milliseconds (cumulative count 98300)
230s 98.920% <= 23.103 milliseconds (cumulative count 98920)
230s 99.130% <= 24.111 milliseconds (cumulative count 99130)
230s 99.270% <= 25.103 milliseconds (cumulative count 99270)
230s 99.420% <= 26.111 milliseconds (cumulative count 99420)
230s 99.480% <= 27.103 milliseconds (cumulative count 99480)
230s 99.620% <= 28.111 milliseconds (cumulative count 99620)
230s 99.810% <= 29.103 milliseconds (cumulative count 99810)
230s 100.000% <= 30.111 milliseconds (cumulative count 100000)
230s 
230s Summary:
230s   throughput summary: 21459.23 requests per second
230s   latency summary (msec):
230s           avg       min       p50       p95       p99       max
230s        11.840     0.392    10.807    19.871    23.343    30.095
238s  
LRANGE_600 (first 600 elements): rps=6569.7 (overall: 10371.1) avg_msec=23.269 (overall: 23.269)
LRANGE_600 (first 600 elements): rps=12394.4 (overall: 11609.8) avg_msec=22.979 (overall: 23.080)
LRANGE_600 (first 600 elements): rps=12250.0 (overall: 11855.9) avg_msec=22.087 (overall: 22.685)
LRANGE_600 (first 600 elements): rps=11803.9 (overall: 11841.5) avg_msec=22.912 (overall: 22.748)
LRANGE_600 (first 600 elements): rps=11796.9 (overall: 11831.8) avg_msec=23.276 (overall: 22.863)
LRANGE_600 (first 600 elements): rps=15600.0 (overall: 12502.8) avg_msec=18.204 (overall: 21.828)
LRANGE_600 (first 600 elements): rps=17235.1 (overall: 13208.6) avg_msec=15.481 (overall: 20.593)
LRANGE_600 (first 600 elements): rps=16980.1 (overall: 13698.0) avg_msec=15.669 (overall: 19.800)
LRANGE_600 (first 600 elements): rps=15370.5 (overall: 13890.2) avg_msec=16.482 (overall: 19.379)
LRANGE_600 (first 600 elements): rps=12549.8 (overall: 13752.1) avg_msec=22.899 (overall: 19.710)
LRANGE_600 (first 600 elements): rps=12092.0 (overall: 13597.5) avg_msec=22.922 (overall: 19.975)
LRANGE_600 (first 600 elements): rps=11905.9 (overall: 13450.9) avg_msec=22.776 (overall: 20.190)
LRANGE_600 (first 600 elements): rps=11780.4 (overall: 13317.6) avg_msec=22.768 (overall: 20.372)
LRANGE_600 (first 600 elements): rps=11908.0 (overall: 13215.3) avg_msec=22.778 (overall: 20.530)
LRANGE_600 (first 600 elements): rps=12579.4 (overall: 13172.0) avg_msec=23.003 (overall: 20.690)
LRANGE_600 (first 600 elements): rps=12960.5 (overall: 13158.4) avg_msec=21.469 (overall: 20.740)
LRANGE_600 (first 600 elements): rps=17154.2 (overall: 13398.9) avg_msec=15.177 (overall: 20.311)
LRANGE_600 (first 600 elements): rps=17294.8 (overall: 13618.4) avg_msec=15.328 (overall: 19.954)
LRANGE_600 (first 600 elements): rps=17004.0 (overall: 13799.0) avg_msec=15.828 (overall: 19.683)
LRANGE_600 (first 600 elements): rps=16800.0 (overall: 13950.4) avg_msec=16.030 (overall: 19.461)
LRANGE_600 (first 600 elements): rps=12055.6 (overall: 13858.7) avg_msec=22.484 (overall: 19.589)
LRANGE_600 (first 600 elements): rps=11754.9 (overall: 13761.2) avg_msec=23.052 (overall: 19.726)
LRANGE_600 (first 600 elements): rps=11992.0 (overall: 13683.5) avg_msec=23.230 (overall: 19.861)
LRANGE_600 (first 600 elements): rps=12510.0 (overall: 13634.1) avg_msec=23.049 (overall: 19.984)
LRANGE_600 (first 600 elements): rps=11980.2 (overall: 13567.0) avg_msec=23.086 (overall: 20.095)
LRANGE_600 (first 600 elements): rps=12153.5 (overall: 13511.5) avg_msec=22.321 (overall: 20.173)
LRANGE_600 (first 600 elements): rps=11743.1 (overall: 13445.0) avg_msec=22.831 (overall: 20.261)
LRANGE_600 (first 600 elements): rps=11964.1 (overall: 13391.7) avg_msec=22.997 (overall: 20.349)
LRANGE_600 (first 600 elements): rps=13142.9 (overall: 13383.0) avg_msec=21.967 (overall: 20.404)
====== LRANGE_600 (first 600 elements) ======
238s   100000 requests completed in 7.41 seconds
238s   50 parallel clients
238s   3 bytes payload
238s   keep alive: 1
238s   host configuration "save": 3600 1 300 100 60 10000
238s   host configuration "appendonly": no
238s   multi-thread: no
238s 
238s Latency by percentile distribution:
238s 0.000% <= 0.423 milliseconds (cumulative count 10)
238s 50.000% <= 21.375 milliseconds (cumulative count 50080)
238s 75.000% <= 24.191 milliseconds (cumulative count 75070)
238s 87.500% <= 26.671 milliseconds (cumulative count 87510)
238s 93.750% <= 31.007 milliseconds (cumulative count 93830)
238s 96.875% <= 31.695 milliseconds (cumulative count 96880)
238s 98.438% <= 32.079 milliseconds (cumulative count 98440)
238s 99.219% <= 32.351 milliseconds (cumulative count 99220)
238s 99.609% <= 32.607 milliseconds (cumulative count 99610)
238s 99.805% <= 32.895 milliseconds (cumulative count 99820)
238s 99.902% <= 33.183 milliseconds (cumulative count 99910)
238s 99.951% <= 33.407 milliseconds (cumulative count 99960)
238s 99.976% <= 33.535 milliseconds (cumulative count 99980)
238s 99.988% <= 34.943 milliseconds (cumulative count 99990)
238s 99.994% <= 35.103 milliseconds (cumulative count 100000)
238s 100.000% <= 35.103 milliseconds (cumulative count 100000)
238s 
238s Cumulative distribution of latencies:
238s 0.000% <= 0.103 milliseconds (cumulative count 0)
238s 0.010% <= 0.503 milliseconds (cumulative count 10)
238s 0.030% <= 0.807 milliseconds (cumulative count 30)
238s 0.940% <= 0.903 milliseconds (cumulative count 940)
238s 1.050% <= 1.007 milliseconds (cumulative count 1050)
238s 1.220% <= 1.103 milliseconds (cumulative count 1220)
238s 2.030% <= 1.207 milliseconds (cumulative count 2030)
238s 2.290% <= 1.303 milliseconds (cumulative count 2290)
238s 2.380% <= 1.407 milliseconds (cumulative count 2380)
238s 2.490% <= 1.503 milliseconds (cumulative count 2490)
238s 2.660% <= 1.607 milliseconds (cumulative count 2660)
238s 2.790% <= 1.703 milliseconds (cumulative count 2790)
238s 2.950% <= 1.807 milliseconds (cumulative count 2950)
238s 3.080% <= 1.903 milliseconds (cumulative count 3080)
238s 3.150% <= 2.007 milliseconds (cumulative count 3150)
238s 3.220% <= 2.103 milliseconds (cumulative count 3220)
238s 3.580% <= 3.103 milliseconds (cumulative count 3580)
238s 3.940% <= 4.103 milliseconds (cumulative count 3940)
238s 4.340% <= 5.103 milliseconds (cumulative count 4340)
238s 5.020% <= 6.103 milliseconds (cumulative count 5020)
238s 5.450% <= 7.103 milliseconds (cumulative count 5450)
238s 6.120% <= 8.103 milliseconds (cumulative count 6120)
238s 6.750% <= 9.103 milliseconds (cumulative count 6750)
238s 8.070% <= 10.103 milliseconds (cumulative count 8070)
238s 10.090% <= 11.103 milliseconds (cumulative count 10090)
238s 12.500% <= 12.103 milliseconds (cumulative count 12500)
238s 15.240% <= 13.103 milliseconds (cumulative count 15240)
238s 18.330% <= 14.103 milliseconds (cumulative count 18330)
238s 20.710% <= 15.103 milliseconds (cumulative count 20710)
238s 23.090% <= 16.103 milliseconds (cumulative count 23090)
238s 25.700% <= 17.103 milliseconds (cumulative count 25700)
238s 28.130% <= 18.111 milliseconds (cumulative count 28130)
238s 32.010% <= 19.103 milliseconds (cumulative count 32010)
238s 38.460% <= 20.111 milliseconds (cumulative count 38460)
238s 47.470% <= 21.103 milliseconds (cumulative count 47470)
238s 57.020% <= 22.111 milliseconds (cumulative count 57020)
238s 65.970% <= 23.103 milliseconds (cumulative count 65970)
238s 74.410% <= 24.111 milliseconds (cumulative count 74410)
238s 82.090% <= 25.103 milliseconds (cumulative count 82090)
238s 86.230% <= 26.111 milliseconds (cumulative count 86230)
238s 88.170% <= 27.103 milliseconds (cumulative count 88170)
238s 89.390% <= 28.111 milliseconds (cumulative count 89390)
238s 90.500% <= 29.103 milliseconds (cumulative count 90500)
238s 91.710% <= 30.111 milliseconds (cumulative count 91710)
238s 94.170% <= 31.103 milliseconds (cumulative count 94170)
238s 98.520% <= 32.111 milliseconds (cumulative count 98520)
238s 99.900% <= 33.119 milliseconds (cumulative count 99900)
238s 99.980% <= 34.111 milliseconds (cumulative count 99980)
238s 100.000% <= 35.103 milliseconds (cumulative count 100000)
238s 
238s Summary:
238s   throughput summary: 13491.63 requests per second
238s   latency summary (msec):
238s           avg       min       p50       p95       p99       max
238s        20.237     0.416    21.375    31.295    32.271    35.103
238s  
MSET (10 keys): rps=69482.1 (overall: 276825.4) avg_msec=1.730 (overall: 1.730)
MSET (10 keys): rps=291952.2 (overall: 288917.2) avg_msec=1.665 (overall: 1.677)
====== MSET (10 keys) ======
238s   100000 requests completed in 0.35 seconds
238s   50 parallel clients
238s   3 bytes payload
238s   keep alive: 1
238s   host configuration "save": 3600 1 300 100 60 10000
238s   host configuration "appendonly": no
238s   multi-thread: no
238s 
238s Latency by percentile distribution:
238s 0.000% <= 0.207 milliseconds (cumulative count 10)
238s 50.000% <= 1.751 milliseconds (cumulative count 52690)
238s 75.000% <= 1.823 milliseconds (cumulative count 77010)
238s 87.500% <= 1.871 milliseconds (cumulative count 88080)
238s 93.750% <= 1.919 milliseconds (cumulative count 94090)
238s 96.875% <= 1.967 milliseconds (cumulative count 97160)
238s 98.438% <= 2.007 milliseconds (cumulative count 98500)
238s 99.219% <= 2.055 milliseconds (cumulative count 99230)
238s 99.609% <= 2.143 milliseconds (cumulative count 99610)
238s 99.805% <= 2.223 milliseconds (cumulative count 99820)
238s 99.902% <= 2.263 milliseconds (cumulative count 99920)
238s 99.951% <= 2.287 milliseconds (cumulative count 99960)
238s 99.976% <= 2.319 milliseconds (cumulative count 99980)
238s 99.988% <= 2.327 milliseconds (cumulative count 99990)
238s 99.994% <= 2.391 milliseconds (cumulative count 100000)
238s 100.000% <= 2.391 milliseconds (cumulative count 100000)
238s 
238s Cumulative distribution of latencies:
238s 0.000% <= 0.103 milliseconds (cumulative count 0)
238s 0.010% <= 0.207 milliseconds (cumulative count 10)
238s 0.030% <= 0.503 milliseconds (cumulative count 30)
238s 0.150% <= 0.607 milliseconds (cumulative count 150)
238s 0.170% <= 0.807 milliseconds (cumulative count 170)
238s 0.880% <= 0.903 milliseconds (cumulative count 880)
238s 6.510% <= 1.007 milliseconds (cumulative count 6510)
238s 10.410% <= 1.103 milliseconds (cumulative count 10410)
238s 11.290% <= 1.207 milliseconds (cumulative count 11290)
238s 11.610% <= 1.303 milliseconds (cumulative count 11610)
238s 11.800% <= 1.407 milliseconds (cumulative count 11800)
238s 11.910% <= 1.503 milliseconds (cumulative count 11910)
238s 15.470% <= 1.607 milliseconds (cumulative count 15470)
238s 34.540% <= 1.703 milliseconds (cumulative count 34540)
238s 72.520% <= 1.807 milliseconds (cumulative count 72520)
238s 92.590% <= 1.903 milliseconds (cumulative count 92590)
238s 98.500% <= 2.007 milliseconds (cumulative count 98500)
238s 99.490% <= 2.103 milliseconds (cumulative count 99490)
238s 100.000% <= 3.103 milliseconds (cumulative count 100000)
238s 
238s Summary:
238s   throughput summary: 289017.34 requests per second
238s   latency summary (msec):
238s           avg       min       p50       p95       p99       max
238s         1.678     0.200     1.751     1.935     2.039     2.391
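MSET writes all 10 keys in a single command, and therefore a single round trip, which keeps the per-request average near 1.7 ms even though each request touches 10 keys. The equivalent operation, illustrated with redis-cli and hypothetical key names:

    redis-cli MSET key:1 val key:2 val key:3 val key:4 val key:5 val key:6 val key:7 val key:8 val key:9 val key:10 val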
238s 
238s autopkgtest [18:42:46]: test 0002-benchmark: -----------------------]
239s autopkgtest [18:42:47]: test 0002-benchmark:  - - - - - - - - - - results - - - - - - - - - -
239s 0002-benchmark       PASS
239s autopkgtest [18:42:47]: test 0003-redis-check-aof: preparing testbed
239s Reading package lists...
239s Building dependency tree...
239s Reading state information...
239s Starting pkgProblemResolver with broken count: 0
240s Starting 2 pkgProblemResolver with broken count: 0
240s Done
240s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
241s autopkgtest [18:42:49]: test 0003-redis-check-aof: [-----------------------
241s autopkgtest [18:42:49]: test 0003-redis-check-aof: -----------------------]
242s autopkgtest [18:42:50]: test 0003-redis-check-aof:  - - - - - - - - - - results - - - - - - - - - -
242s 0003-redis-check-aof PASS
242s autopkgtest [18:42:50]: test 0004-redis-check-rdb: preparing testbed
243s Reading package lists...
243s Building dependency tree...
243s Reading state information...
243s Starting pkgProblemResolver with broken count: 0
243s Starting 2 pkgProblemResolver with broken count: 0
243s Done
243s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
244s autopkgtest [18:42:52]: test 0004-redis-check-rdb: [-----------------------
250s OK
250s [offset 0] Checking RDB file /var/lib/redis/dump.rdb
250s [offset 27] AUX FIELD redis-ver = '7.0.15'
250s [offset 41] AUX FIELD redis-bits = '64'
250s [offset 53] AUX FIELD ctime = '1742064178'
250s [offset 68] AUX FIELD used-mem = '1451856'
250s [offset 80] AUX FIELD aof-base = '0'
250s [offset 82] Selecting DB ID 0
250s [offset 7184] Checksum OK
250s [offset 7184] \o/ RDB looks OK! \o/
250s [info] 4 keys read
250s [info] 0 expires
250s [info] 0 already expired
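The offset/AUX/checksum lines above are the normal report of redis-check-rdb run against the dump the packaged server wrote during the earlier tests. A minimal sketch of invoking the same check by hand on this testbed, using the path shown in the log (root or the redis user is assumed, since /var/lib/redis is normally not world-readable):

    sudo redis-check-rdb /var/lib/redis/dump.rdb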
250s autopkgtest [18:42:58]: test 0004-redis-check-rdb: -----------------------]
250s 0004-redis-check-rdb PASS
250s autopkgtest [18:42:58]: test 0004-redis-check-rdb:  - - - - - - - - - - results - - - - - - - - - -
251s autopkgtest [18:42:59]: test 0005-cjson: preparing testbed
251s Reading package lists...
251s Building dependency tree...
251s Reading state information...
251s Starting pkgProblemResolver with broken count: 0
251s Starting 2 pkgProblemResolver with broken count: 0
251s Done
251s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
253s autopkgtest [18:43:01]: test 0005-cjson: [-----------------------
258s 
258s autopkgtest [18:43:06]: test 0005-cjson: -----------------------]
259s autopkgtest [18:43:07]: test 0005-cjson:  - - - - - - - - - - results - - - - - - - - - -
259s 0005-cjson           PASS
259s autopkgtest [18:43:07]: @@@@@@@@@@@@@@@@@@@@ summary
259s 0001-redis-cli       PASS
259s 0002-benchmark       PASS
259s 0003-redis-check-aof PASS
259s 0004-redis-check-rdb PASS
259s 0005-cjson           PASS
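All five test cases passed on this s390x testbed. To rerun the same suite outside this infrastructure, one option is a sketch like the following, assuming the redis source package has been unpacked into the current directory and that executing the tests directly on the host (the "null" virtualization backend) is acceptable:

    autopkgtest ./ -- null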
277s nova [W] Using flock in prodstack6-s390x
277s flock: timeout while waiting to get lock
277s Creating nova instance adt-plucky-s390x-redis-20250315-183848-juju-7f2275-prod-proposed-migration-environment-15-3f05e94a-d600-418c-920c-07f01effe9e8 from image adt/ubuntu-plucky-s390x-server-20250315.img (UUID 3d3557fa-fd0f-4bba-9b89-8d5964e09f61)...
277s nova [W] Timed out waiting for e1329f14-5804-4710-8e32-026c71bdb9e4 to get deleted.