0s autopkgtest [19:39:51]: starting date and time: 2025-03-15 19:39:51+0000
0s autopkgtest [19:39:51]: git checkout: 325255d2 Merge branch 'pin-any-arch' into 'ubuntu/production'
0s autopkgtest [19:39:51]: host juju-7f2275-prod-proposed-migration-environment-15; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.zwku7eul/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:glibc --apt-upgrade valkey --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=glibc/2.41-1ubuntu2 -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest-s390x --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-15@bos03-s390x-19.secgroup --name adt-plucky-s390x-valkey-20250315-193950-juju-7f2275-prod-proposed-migration-environment-15-775f14b8-eaf5-444b-a91b-72596fbd4e7f --image adt/ubuntu-plucky-s390x-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-15 --net-id=net_prod-proposed-migration-s390x -e TERM=linux -e ''"'"'http_proxy=http://squid.internal:3128'"'"'' -e ''"'"'https_proxy=http://squid.internal:3128'"'"'' -e ''"'"'no_proxy=127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,keyserver.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,10.24.0.0/24,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com,radosgw.ps5.canonical.com'"'"'' --mirror=http://ftpmaster.internal/ubuntu/
125s autopkgtest [19:41:56]: testbed dpkg architecture: s390x
125s autopkgtest [19:41:56]: testbed apt version: 2.9.33
125s autopkgtest [19:41:56]: @@@@@@@@@@@@@@@@@@@@ test bed setup
125s autopkgtest [19:41:56]: testbed release detected to be: None
126s autopkgtest [19:41:57]: updating testbed package index (apt update)
126s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease [126 kB]
127s Hit:2 http://ftpmaster.internal/ubuntu plucky InRelease
127s Hit:3 http://ftpmaster.internal/ubuntu plucky-updates InRelease
127s Hit:4 http://ftpmaster.internal/ubuntu plucky-security InRelease
127s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/universe Sources [369 kB]
127s Get:6 http://ftpmaster.internal/ubuntu plucky-proposed/main Sources [45.1 kB]
127s Get:7 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse Sources [14.5 kB]
127s Get:8 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x Packages [77.3 kB]
127s Get:9 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x c-n-f Metadata [1824 B]
127s Get:10 http://ftpmaster.internal/ubuntu plucky-proposed/restricted s390x c-n-f Metadata [116 B]
127s Get:11 http://ftpmaster.internal/ubuntu plucky-proposed/universe s390x Packages [314 kB]
127s Get:12 http://ftpmaster.internal/ubuntu plucky-proposed/universe s390x c-n-f Metadata [13.3 kB]
127s Get:13 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse s390x Packages [3532 B]
127s Get:14 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse s390x c-n-f Metadata [240 B]
128s Fetched 965 kB in 1s (858 kB/s)
128s Reading package lists...
129s + lsb_release --codename --short
129s + RELEASE=plucky
129s + cat
129s + [ plucky != trusty ]
129s + DEBIAN_FRONTEND=noninteractive eatmydata apt-get -y --allow-downgrades -o Dpkg::Options::=--force-confnew dist-upgrade
129s Reading package lists...
129s Building dependency tree...
129s Reading state information...
129s Calculating upgrade...
129s Calculating upgrade...
129s The following packages were automatically installed and are no longer required:
129s   libnsl2 libpython3.12-minimal libpython3.12-stdlib libpython3.12t64
129s   linux-headers-6.11.0-8 linux-headers-6.11.0-8-generic
129s   linux-modules-6.11.0-8-generic linux-tools-6.11.0-8
129s   linux-tools-6.11.0-8-generic
129s Use 'sudo apt autoremove' to remove them.
129s The following packages will be upgraded:
129s   pinentry-curses python3-jinja2 strace
129s 3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
129s Need to get 652 kB of archives.
129s After this operation, 27.6 kB of additional disk space will be used.
129s Get:1 http://ftpmaster.internal/ubuntu plucky/main s390x strace s390x 6.13+ds-1ubuntu1 [500 kB]
130s Get:2 http://ftpmaster.internal/ubuntu plucky/main s390x pinentry-curses s390x 1.3.1-2ubuntu3 [42.9 kB]
130s Get:3 http://ftpmaster.internal/ubuntu plucky/main s390x python3-jinja2 all 3.1.5-2ubuntu1 [109 kB]
130s Fetched 652 kB in 1s (760 kB/s)
130s (Reading database ... 81428 files and directories currently installed.)
130s Preparing to unpack .../strace_6.13+ds-1ubuntu1_s390x.deb ...
130s Unpacking strace (6.13+ds-1ubuntu1) over (6.11-0ubuntu1) ...
130s Preparing to unpack .../pinentry-curses_1.3.1-2ubuntu3_s390x.deb ...
130s Unpacking pinentry-curses (1.3.1-2ubuntu3) over (1.3.1-2ubuntu2) ...
130s Preparing to unpack .../python3-jinja2_3.1.5-2ubuntu1_all.deb ...
130s Unpacking python3-jinja2 (3.1.5-2ubuntu1) over (3.1.5-2) ...
130s Setting up pinentry-curses (1.3.1-2ubuntu3) ...
130s Setting up python3-jinja2 (3.1.5-2ubuntu1) ...
131s Setting up strace (6.13+ds-1ubuntu1) ...
131s Processing triggers for man-db (2.13.0-1) ...
131s + rm /etc/apt/preferences.d/force-downgrade-to-release.pref
131s + /usr/lib/apt/apt-helper analyze-pattern ?true
131s + uname -r
131s + sed s/\./\\./g
131s + running_kernel_pattern=^linux-.*6\.14\.0-10-generic.*
131s + apt list ?obsolete
131s + tail -n+2
131s + cut -d/+ grep -v ^linux-.*6\.14\.0-10-generic.*
131s -f1
131s + obsolete_pkgs=linux-headers-6.11.0-8-generic
131s linux-headers-6.11.0-8
131s linux-modules-6.11.0-8-generic
131s linux-tools-6.11.0-8-generic
131s linux-tools-6.11.0-8
131s + DEBIAN_FRONTEND=noninteractive eatmydata apt-get -y purge --autoremove linux-headers-6.11.0-8-generic linux-headers-6.11.0-8 linux-modules-6.11.0-8-generic linux-tools-6.11.0-8-generic linux-tools-6.11.0-8
131s Reading package lists...
131s Building dependency tree...
131s Reading state information...
131s Solving dependencies...
132s The following packages will be REMOVED:
132s   libnsl2* libpython3.12-minimal* libpython3.12-stdlib* libpython3.12t64*
132s   linux-headers-6.11.0-8* linux-headers-6.11.0-8-generic*
132s   linux-modules-6.11.0-8-generic* linux-tools-6.11.0-8*
132s   linux-tools-6.11.0-8-generic*
132s 0 upgraded, 0 newly installed, 9 to remove and 5 not upgraded.
132s After this operation, 167 MB disk space will be freed.
132s (Reading database ... 81428 files and directories currently installed.)
132s Removing linux-tools-6.11.0-8-generic (6.11.0-8.8) ...
132s Removing linux-tools-6.11.0-8 (6.11.0-8.8) ...
132s Removing libpython3.12t64:s390x (3.12.9-1) ...
132s Removing libpython3.12-stdlib:s390x (3.12.9-1) ...
132s Removing libnsl2:s390x (1.3.0-3build3) ...
132s Removing libpython3.12-minimal:s390x (3.12.9-1) ...
132s Removing linux-headers-6.11.0-8-generic (6.11.0-8.8) ...
132s Removing linux-headers-6.11.0-8 (6.11.0-8.8) ...
133s Removing linux-modules-6.11.0-8-generic (6.11.0-8.8) ...
133s Processing triggers for libc-bin (2.41-1ubuntu1) ...
133s (Reading database ... 56328 files and directories currently installed.)
133s Purging configuration files for libpython3.12-minimal:s390x (3.12.9-1) ...
133s Purging configuration files for linux-modules-6.11.0-8-generic (6.11.0-8.8) ...
133s + grep -q trusty /etc/lsb-release
133s + [ ! -d /usr/share/doc/unattended-upgrades ]
133s + [ ! -d /usr/share/doc/lxd ]
133s + [ ! -d /usr/share/doc/lxd-client ]
133s + [ ! -d /usr/share/doc/snapd ]
133s + type iptables
133s + cat
133s + chmod 755 /etc/rc.local
133s + . /etc/rc.local
133s + iptables -w -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
133s + iptables -A OUTPUT -d 10.255.255.1/32 -p tcp -j DROP
133s + iptables -A OUTPUT -d 10.255.255.2/32 -p tcp -j DROP
133s + uname -m
133s + [ s390x = ppc64le ]
133s + [ -d /run/systemd/system ]
133s + systemd-detect-virt --quiet --vm
133s + mkdir -p /etc/systemd/system/systemd-random-seed.service.d/
133s + cat
133s + grep -q lz4 /etc/initramfs-tools/initramfs.conf
133s + echo COMPRESS=lz4
133s autopkgtest [19:42:04]: upgrading testbed (apt dist-upgrade and autopurge)
133s Reading package lists...
133s Building dependency tree...
133s Reading state information...
134s Calculating upgrade...
134s Starting pkgProblemResolver with broken count: 0
134s Starting 2 pkgProblemResolver with broken count: 0
134s Done
134s Entering ResolveByKeep
134s
134s Calculating upgrade...
134s The following packages will be upgraded:
134s   libc-bin libc-dev-bin libc6 libc6-dev locales
134s 5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
134s Need to get 9512 kB of archives.
134s After this operation, 8192 B of additional disk space will be used.
134s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc6-dev s390x 2.41-1ubuntu2 [1678 kB]
136s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc-dev-bin s390x 2.41-1ubuntu2 [24.3 kB]
136s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc6 s390x 2.41-1ubuntu2 [2892 kB]
138s Get:4 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc-bin s390x 2.41-1ubuntu2 [671 kB]
138s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x locales all 2.41-1ubuntu2 [4246 kB]
141s Preconfiguring packages ...
141s Fetched 9512 kB in 7s (1321 kB/s)
142s (Reading database ... 56326 files and directories currently installed.)
142s Preparing to unpack .../libc6-dev_2.41-1ubuntu2_s390x.deb ...
142s Unpacking libc6-dev:s390x (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
142s Preparing to unpack .../libc-dev-bin_2.41-1ubuntu2_s390x.deb ...
142s Unpacking libc-dev-bin (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
142s Preparing to unpack .../libc6_2.41-1ubuntu2_s390x.deb ...
142s Unpacking libc6:s390x (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
142s Setting up libc6:s390x (2.41-1ubuntu2) ...
142s (Reading database ... 56326 files and directories currently installed.)
142s Preparing to unpack .../libc-bin_2.41-1ubuntu2_s390x.deb ...
142s Unpacking libc-bin (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
142s Setting up libc-bin (2.41-1ubuntu2) ...
142s (Reading database ... 56326 files and directories currently installed.)
142s Preparing to unpack .../locales_2.41-1ubuntu2_all.deb ...
142s Unpacking locales (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
142s Setting up locales (2.41-1ubuntu2) ...
142s Generating locales (this might take a while)...
143s   en_US.UTF-8... done
143s Generation complete.
143s Setting up libc-dev-bin (2.41-1ubuntu2) ...
143s Setting up libc6-dev:s390x (2.41-1ubuntu2) ...
143s Processing triggers for man-db (2.13.0-1) ...
144s Processing triggers for systemd (257.3-1ubuntu3) ...
145s Reading package lists...
145s Building dependency tree...
145s Reading state information...
145s Starting pkgProblemResolver with broken count: 0
145s Starting 2 pkgProblemResolver with broken count: 0
145s Done
145s Solving dependencies...
145s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
145s autopkgtest [19:42:16]: rebooting testbed after setup commands that affected boot
164s autopkgtest [19:42:35]: testbed running kernel: Linux 6.14.0-10-generic #10-Ubuntu SMP Wed Mar 12 14:53:49 UTC 2025
167s autopkgtest [19:42:38]: @@@@@@@@@@@@@@@@@@@@ apt-source valkey
171s Get:1 http://ftpmaster.internal/ubuntu plucky/universe valkey 8.0.2+dfsg1-1ubuntu1 (dsc) [2484 B]
171s Get:2 http://ftpmaster.internal/ubuntu plucky/universe valkey 8.0.2+dfsg1-1ubuntu1 (tar) [2599 kB]
171s Get:3 http://ftpmaster.internal/ubuntu plucky/universe valkey 8.0.2+dfsg1-1ubuntu1 (diff) [18.1 kB]
171s gpgv: Signature made Wed Feb 12 14:50:45 2025 UTC
171s gpgv: using RSA key 63EEFC3DE14D5146CE7F24BF34B8AD7D9529E793
171s gpgv: issuer "lena.voytek@canonical.com"
171s gpgv: Can't check signature: No public key
171s dpkg-source: warning: cannot verify inline signature for ./valkey_8.0.2+dfsg1-1ubuntu1.dsc: no acceptable signature found
172s autopkgtest [19:42:43]: testing package valkey version 8.0.2+dfsg1-1ubuntu1
173s autopkgtest [19:42:44]: build not needed
175s autopkgtest [19:42:46]: test 0001-valkey-cli: preparing testbed
175s Reading package lists...
175s Building dependency tree...
175s Reading state information...
175s Starting pkgProblemResolver with broken count: 0
175s Starting 2 pkgProblemResolver with broken count: 0
175s Done
176s The following NEW packages will be installed:
176s   liblzf1 valkey-server valkey-tools
176s 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
176s Need to get 1379 kB of archives.
176s After this operation, 7731 kB of additional disk space will be used.
176s Get:1 http://ftpmaster.internal/ubuntu plucky/universe s390x liblzf1 s390x 3.6-4 [7020 B]
176s Get:2 http://ftpmaster.internal/ubuntu plucky/universe s390x valkey-tools s390x 8.0.2+dfsg1-1ubuntu1 [1324 kB]
177s Get:3 http://ftpmaster.internal/ubuntu plucky/universe s390x valkey-server s390x 8.0.2+dfsg1-1ubuntu1 [48.5 kB]
177s Fetched 1379 kB in 1s (994 kB/s)
177s Selecting previously unselected package liblzf1:s390x.
177s (Reading database ... 56326 files and directories currently installed.)
177s Preparing to unpack .../liblzf1_3.6-4_s390x.deb ...
177s Unpacking liblzf1:s390x (3.6-4) ...
177s Selecting previously unselected package valkey-tools.
177s Preparing to unpack .../valkey-tools_8.0.2+dfsg1-1ubuntu1_s390x.deb ...
177s Unpacking valkey-tools (8.0.2+dfsg1-1ubuntu1) ...
177s Selecting previously unselected package valkey-server.
177s Preparing to unpack .../valkey-server_8.0.2+dfsg1-1ubuntu1_s390x.deb ...
177s Unpacking valkey-server (8.0.2+dfsg1-1ubuntu1) ...
177s Setting up liblzf1:s390x (3.6-4) ...
177s Setting up valkey-tools (8.0.2+dfsg1-1ubuntu1) ...
178s Setting up valkey-server (8.0.2+dfsg1-1ubuntu1) ...
178s Created symlink '/etc/systemd/system/valkey.service' → '/usr/lib/systemd/system/valkey-server.service'.
178s Created symlink '/etc/systemd/system/multi-user.target.wants/valkey-server.service' → '/usr/lib/systemd/system/valkey-server.service'.
178s Processing triggers for man-db (2.13.0-1) ...
178s Processing triggers for libc-bin (2.41-1ubuntu2) ...
180s autopkgtest [19:42:51]: test 0001-valkey-cli: [-----------------------
185s # Server
185s redis_version:7.2.4
185s server_name:valkey
185s valkey_version:8.0.2
185s redis_git_sha1:00000000
185s redis_git_dirty:0
185s redis_build_id:5fe77b42c48a3400
185s server_mode:standalone
185s os:Linux 6.14.0-10-generic s390x
185s arch_bits:64
185s monotonic_clock:POSIX clock_gettime
185s multiplexing_api:epoll
185s gcc_version:14.2.0
185s process_id:1598
185s process_supervised:systemd
185s run_id:83d32a61a43495678a0ddd4b19908d5e1b51f742
185s tcp_port:6379
185s server_time_usec:1742067861963920
185s uptime_in_seconds:5
185s uptime_in_days:0
185s hz:10
185s configured_hz:10
185s lru_clock:14014613
185s executable:/usr/bin/valkey-server
185s config_file:/etc/valkey/valkey.conf
185s io_threads_active:0
185s availability_zone:
185s listener0:name=tcp,bind=127.0.0.1,bind=-::1,port=6379
185s
185s # Clients
185s connected_clients:1
185s cluster_connections:0
185s maxclients:10000
185s client_recent_max_input_buffer:0
185s client_recent_max_output_buffer:0
185s blocked_clients:0
185s tracking_clients:0
185s pubsub_clients:0
185s watching_clients:0
185s clients_in_timeout_table:0
185s total_watched_keys:0
185s total_blocking_keys:0
185s total_blocking_keys_on_nokey:0
185s
185s # Memory
185s used_memory:982712
185s used_memory_human:959.68K
185s used_memory_rss:14204928
185s used_memory_rss_human:13.55M
185s used_memory_peak:982712
185s used_memory_peak_human:959.68K
185s used_memory_peak_perc:100.28%
185s used_memory_overhead:962224
185s used_memory_startup:962024
185s used_memory_dataset:20488
185s used_memory_dataset_perc:99.03%
185s allocator_allocated:4541280
185s allocator_active:9502720
185s allocator_resident:11862016
185s allocator_muzzy:0
185s total_system_memory:4190953472
185s total_system_memory_human:3.90G
185s used_memory_lua:31744
185s used_memory_vm_eval:31744
185s used_memory_lua_human:31.00K
185s used_memory_scripts_eval:0
185s number_of_cached_scripts:0
185s number_of_functions:0
185s number_of_libraries:0
185s used_memory_vm_functions:33792
185s used_memory_vm_total:65536
185s used_memory_vm_total_human:64.00K
185s used_memory_functions:200
185s used_memory_scripts:200
185s used_memory_scripts_human:200B
185s maxmemory:0
185s maxmemory_human:0B
185s maxmemory_policy:noeviction
185s allocator_frag_ratio:2.08
185s allocator_frag_bytes:4895904
185s allocator_rss_ratio:1.25
185s allocator_rss_bytes:2359296
185s rss_overhead_ratio:1.20
185s rss_overhead_bytes:2342912
185s mem_fragmentation_ratio:14.76
185s mem_fragmentation_bytes:13242760
185s mem_not_counted_for_evict:0
185s mem_replication_backlog:0
185s mem_total_replication_buffers:0
185s mem_clients_slaves:0
185s mem_clients_normal:0
185s mem_cluster_links:0
185s mem_aof_buffer:0
185s mem_allocator:jemalloc-5.3.0
185s mem_overhead_db_hashtable_rehashing:0
185s active_defrag_running:0
185s lazyfree_pending_objects:0
185s lazyfreed_objects:0
185s
185s # Persistence
185s loading:0
185s async_loading:0
185s current_cow_peak:0
185s current_cow_size:0
185s current_cow_size_age:0
185s current_fork_perc:0.00
185s current_save_keys_processed:0
185s current_save_keys_total:0
185s rdb_changes_since_last_save:0
185s rdb_bgsave_in_progress:0
185s rdb_last_save_time:1742067856
185s rdb_last_bgsave_status:ok
185s rdb_last_bgsave_time_sec:-1
185s rdb_current_bgsave_time_sec:-1
185s rdb_saves:0
185s rdb_last_cow_size:0
185s rdb_last_load_keys_expired:0
185s rdb_last_load_keys_loaded:0
185s aof_enabled:0
185s aof_rewrite_in_progress:0
185s aof_rewrite_scheduled:0
185s aof_last_rewrite_time_sec:-1
185s aof_current_rewrite_time_sec:-1
185s aof_last_bgrewrite_status:ok
185s aof_rewrites:0
185s aof_rewrites_consecutive_failures:0
185s aof_last_write_status:ok
185s aof_last_cow_size:0
185s module_fork_in_progress:0
185s module_fork_last_cow_size:0
185s
185s # Stats
185s total_connections_received:1
185s total_commands_processed:0
185s instantaneous_ops_per_sec:0
185s total_net_input_bytes:14
185s total_net_output_bytes:0
185s total_net_repl_input_bytes:0
185s total_net_repl_output_bytes:0
185s instantaneous_input_kbps:0.00
185s instantaneous_output_kbps:0.00
185s instantaneous_input_repl_kbps:0.00
185s instantaneous_output_repl_kbps:0.00
185s rejected_connections:0
185s sync_full:0
185s sync_partial_ok:0
185s sync_partial_err:0
185s expired_keys:0
185s expired_stale_perc:0.00
185s expired_time_cap_reached_count:0
185s expire_cycle_cpu_milliseconds:0
185s evicted_keys:0
185s evicted_clients:0
185s evicted_scripts:0
185s total_eviction_exceeded_time:0
185s current_eviction_exceeded_time:0
185s keyspace_hits:0
185s keyspace_misses:0
185s pubsub_channels:0
185s pubsub_patterns:0
185s pubsubshard_channels:0
185s latest_fork_usec:0
185s total_forks:0
185s migrate_cached_sockets:0
185s slave_expires_tracked_keys:0
185s active_defrag_hits:0
185s active_defrag_misses:0
185s active_defrag_key_hits:0
185s active_defrag_key_misses:0
185s total_active_defrag_time:0
185s current_active_defrag_time:0
185s tracking_total_keys:0
185s tracking_total_items:0
185s tracking_total_prefixes:0
185s unexpected_error_replies:0
185s total_error_replies:0
185s dump_payload_sanitizations:0
185s total_reads_processed:1
185s total_writes_processed:0
185s io_threaded_reads_processed:0
185s io_threaded_writes_processed:0
185s io_threaded_freed_objects:0
185s io_threaded_poll_processed:0
185s io_threaded_total_prefetch_batches:0
185s io_threaded_total_prefetch_entries:0
185s client_query_buffer_limit_disconnections:0
185s client_output_buffer_limit_disconnections:0
185s reply_buffer_shrinks:0
185s reply_buffer_expands:0
185s eventloop_cycles:51
185s eventloop_duration_sum:9646
185s eventloop_duration_cmd_sum:0
185s instantaneous_eventloop_cycles_per_sec:9
185s instantaneous_eventloop_duration_usec:199
185s acl_access_denied_auth:0
185s acl_access_denied_cmd:0
185s acl_access_denied_key:0
185s acl_access_denied_channel:0
185s
185s # Replication
185s role:master
185s connected_slaves:0
185s replicas_waiting_psync:0
185s master_failover_state:no-failover
185s master_replid:b6b311318f44f68face63626f6a6ba692b5c683d
185s master_replid2:0000000000000000000000000000000000000000
185s master_repl_offset:0
185s second_repl_offset:-1
185s repl_backlog_active:0
185s repl_backlog_size:10485760
185s repl_backlog_first_byte_offset:0
185s repl_backlog_histlen:0
185s
185s # CPU
185s used_cpu_sys:0.014641
185s used_cpu_user:0.034944
185s used_cpu_sys_children:0.000250
185s used_cpu_user_children:0.000026
185s used_cpu_sys_main_thread:0.014539
185s used_cpu_user_main_thread:0.034898
185s
185s # Modules
185s
185s # Errorstats
185s
185s # Cluster
185s cluster_enabled:0
185s
185s # Keyspace
185s Redis ver. 8.0.2
185s autopkgtest [19:42:56]: test 0001-valkey-cli: -----------------------]
186s autopkgtest [19:42:57]: test 0001-valkey-cli: - - - - - - - - - - results - - - - - - - - - -
186s 0001-valkey-cli PASS
186s autopkgtest [19:42:57]: test 0002-benchmark: preparing testbed
186s Reading package lists...
186s Building dependency tree...
186s Reading state information...
186s Starting pkgProblemResolver with broken count: 0
186s Starting 2 pkgProblemResolver with broken count: 0
186s Done
187s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
187s autopkgtest [19:42:58]: test 0002-benchmark: [-----------------------
193s PING_INLINE: rps=0.0 (overall: nan) avg_msec=nan (overall: nan)
193s ====== PING_INLINE ======
193s   100000 requests completed in 0.12 seconds
193s   50 parallel clients
193s   3 bytes payload
193s   keep alive: 1
193s   host configuration "save": 3600 1 300 100 60 10000
193s   host configuration "appendonly": no
193s   multi-thread: no
193s
193s Latency by percentile distribution:
193s 0.000% <= 0.111 milliseconds (cumulative count 160)
193s 50.000% <= 0.439 milliseconds (cumulative count 51140)
193s 75.000% <= 0.567 milliseconds (cumulative count 75630)
193s 87.500% <= 0.727 milliseconds (cumulative count 87910)
193s 93.750% <= 0.815 milliseconds (cumulative count 94160)
193s 96.875% <= 0.871 milliseconds (cumulative count 97280)
193s 98.438% <= 0.895 milliseconds (cumulative count 98490)
193s 99.219% <= 0.935 milliseconds (cumulative count 99270)
193s 99.609% <= 1.479 milliseconds (cumulative count 99630)
193s 99.805% <= 1.503 milliseconds (cumulative count 99830)
193s 99.902% <= 1.519 milliseconds (cumulative count 99930)
193s 99.951% <= 1.535 milliseconds (cumulative count 99960)
193s 99.976% <= 5.959 milliseconds (cumulative count 99980)
193s 99.988% <= 5.967 milliseconds (cumulative count 99990)
193s 99.994% <= 5.975 milliseconds (cumulative count 100000)
193s 100.000% <= 5.975 milliseconds (cumulative count 100000)
193s
193s Cumulative distribution of latencies:
193s 0.000% <= 0.103 milliseconds (cumulative count 0)
193s 2.190% <= 0.207 milliseconds (cumulative count 2190)
193s 13.980% <= 0.303 milliseconds (cumulative count 13980)
193s 42.160% <= 0.407 milliseconds (cumulative count 42160)
193s 66.530% <= 0.503 milliseconds (cumulative count 66530)
193s 77.630% <= 0.607 milliseconds (cumulative count 77630)
193s 85.820% <= 0.703 milliseconds (cumulative count 85820)
193s 93.580% <= 0.807 milliseconds (cumulative count 93580)
193s 98.740% <= 0.903 milliseconds (cumulative count 98740)
193s 99.370% <= 1.007 milliseconds (cumulative count 99370)
193s 99.470% <= 1.103 milliseconds (cumulative count 99470)
193s 99.830% <= 1.503 milliseconds (cumulative count 99830)
193s 99.970% <= 1.607 milliseconds (cumulative count 99970)
193s 100.000% <= 6.103 milliseconds (cumulative count 100000)
193s
193s Summary:
193s   throughput summary: 862069.00 requests per second
193s   latency summary (msec):
193s           avg       min       p50       p95       p99       max
193s         0.479     0.104     0.439     0.831     0.919     5.975
193s ====== PING_MBULK ======
193s   100000 requests completed in 0.10 seconds
193s   50 parallel clients
193s   3 bytes payload
193s   keep alive: 1
193s   host configuration "save": 3600 1 300 100 60 10000
193s   host configuration "appendonly": no
193s   multi-thread: no
193s
193s Latency by percentile distribution:
193s 0.000% <= 0.103 milliseconds (cumulative count 60)
193s 50.000% <= 0.423 milliseconds (cumulative count 51810)
193s 75.000% <= 0.487 milliseconds (cumulative count 77130)
193s 87.500% <= 0.519 milliseconds (cumulative count 89680)
193s 93.750% <= 0.535 milliseconds (cumulative count 95550)
193s 96.875% <= 0.543 milliseconds (cumulative count 97540)
193s 98.438% <= 0.551 milliseconds (cumulative count 98470)
193s 99.219% <= 0.567 milliseconds (cumulative count 99270)
193s 99.609% <= 0.599 milliseconds (cumulative count 99680)
193s 99.805% <= 0.631 milliseconds (cumulative count 99820)
193s 99.902% <= 0.719 milliseconds (cumulative count 99910)
193s 99.951% <= 0.775 milliseconds (cumulative count 99960)
193s 99.976% <= 0.871 milliseconds (cumulative count 99990)
193s 99.994% <= 3.535 milliseconds (cumulative count 100000)
193s 100.000% <= 3.535 milliseconds (cumulative count 100000)
193s
193s Cumulative distribution of latencies:
193s 0.060% <= 0.103 milliseconds (cumulative count 60)
193s 10.620% <= 0.207 milliseconds (cumulative count 10620)
193s 38.930% <= 0.303 milliseconds (cumulative count 38930)
193s 46.070% <= 0.407 milliseconds (cumulative count 46070)
193s 83.620% <= 0.503 milliseconds (cumulative count 83620)
193s 99.730% <= 0.607 milliseconds (cumulative count 99730)
193s 99.890% <= 0.703 milliseconds (cumulative count 99890)
193s 99.970% <= 0.807 milliseconds (cumulative count 99970)
193s 99.990% <= 0.903 milliseconds (cumulative count 99990)
193s 100.000% <= 4.103 milliseconds (cumulative count 100000)
193s
193s Summary:
193s   throughput summary: 1010101.00 requests per second
193s   latency summary (msec):
193s           avg       min       p50       p95       p99       max
193s         0.374     0.096     0.423     0.535     0.567     3.535
193s SET: rps=101800.0 (overall: 795312.4) avg_msec=0.525 (overall: 0.525)
193s ====== SET ======
193s   100000 requests completed in 0.13 seconds
193s   50 parallel clients
193s   3 bytes payload
193s   keep alive: 1
193s   host configuration "save": 3600 1 300 100 60 10000
193s   host configuration "appendonly": no
193s   multi-thread: no
193s
193s Latency by percentile distribution:
193s 0.000% <= 0.103 milliseconds (cumulative count 10)
193s 50.000% <= 0.527 milliseconds (cumulative count 51560)
193s 75.000% <= 0.623 milliseconds (cumulative count 76090)
193s 87.500% <= 0.735 milliseconds (cumulative count 87570)
193s 93.750% <= 0.927 milliseconds (cumulative count 93810)
193s 96.875% <= 0.967 milliseconds (cumulative count 97280)
193s 98.438% <= 1.007 milliseconds (cumulative count 98510)
193s 99.219% <= 1.063 milliseconds (cumulative count 99310)
193s 99.609% <= 1.119 milliseconds (cumulative count 99630)
193s 99.805% <= 5.719 milliseconds (cumulative count 99810)
193s 99.902% <= 5.759 milliseconds (cumulative count 99910)
193s 99.951% <= 5.775 milliseconds (cumulative count 99960)
193s 99.976% <= 5.791 milliseconds (cumulative count 99990)
193s 99.994% <= 5.799 milliseconds (cumulative count 100000)
193s 100.000% <= 5.799 milliseconds (cumulative count 100000)
193s
193s Cumulative distribution of latencies:
193s 0.010% <= 0.103 milliseconds (cumulative count 10)
193s 1.090% <= 0.207 milliseconds (cumulative count 1090)
193s 8.840% <= 0.303 milliseconds (cumulative count 8840)
193s 26.770% <= 0.407 milliseconds (cumulative count 26770)
193s 45.530% <= 0.503 milliseconds (cumulative count 45530)
193s 72.500% <= 0.607 milliseconds (cumulative count 72500)
193s 86.230% <= 0.703 milliseconds (cumulative count 86230)
193s 89.160% <= 0.807 milliseconds (cumulative count 89160)
193s 91.620% <= 0.903 milliseconds (cumulative count 91620)
193s 98.510% <= 1.007 milliseconds (cumulative count 98510)
193s 99.550% <= 1.103 milliseconds (cumulative count 99550)
193s 99.660% <= 1.207 milliseconds (cumulative count 99660)
193s 100.000% <= 6.103 milliseconds (cumulative count 100000)
193s
193s Summary:
193s   throughput summary: 775193.81 requests per second
193s   latency summary (msec):
193s           avg       min       p50       p95       p99       max
193s         0.551     0.096     0.527     0.943     1.047     5.799
193s ====== GET ======
193s   100000 requests completed in 0.12 seconds
193s   50 parallel clients
193s   3 bytes payload
193s   keep alive: 1
193s   host configuration "save": 3600 1 300 100 60 10000
193s   host configuration "appendonly": no
193s   multi-thread: no
193s
193s Latency by percentile distribution:
193s 0.000% <= 0.103 milliseconds (cumulative count 50)
193s 50.000% <= 0.431 milliseconds (cumulative count 50900)
193s 75.000% <= 0.551 milliseconds (cumulative count 75650)
193s 87.500% <= 0.695 milliseconds (cumulative count 87640)
193s 93.750% <= 0.791 milliseconds (cumulative count 94170)
193s 96.875% <= 0.847 milliseconds (cumulative count 97110)
193s 98.438% <= 0.871 milliseconds (cumulative count 98730)
193s 99.219% <= 0.887 milliseconds (cumulative count 99460)
193s 99.609% <= 0.895 milliseconds (cumulative count 99610)
193s 99.805% <= 9.047 milliseconds (cumulative count 99810)
193s 99.902% <= 9.087 milliseconds (cumulative count 99920)
193s 99.951% <= 9.103 milliseconds (cumulative count 99960)
193s 99.976% <= 9.111 milliseconds (cumulative count 99980)
193s 99.988% <= 9.119 milliseconds (cumulative count 100000)
193s 100.000% <= 9.119 milliseconds (cumulative count 100000)
193s
193s Cumulative distribution of latencies:
193s 0.050% <= 0.103 milliseconds (cumulative count 50)
193s 3.300% <= 0.207 milliseconds (cumulative count 3300)
193s 16.880% <= 0.303 milliseconds (cumulative count 16880)
193s 44.400% <= 0.407 milliseconds (cumulative count 44400)
193s 67.820% <= 0.503 milliseconds (cumulative count 67820)
193s 79.150% <= 0.607 milliseconds (cumulative count 79150)
193s 88.340% <= 0.703 milliseconds (cumulative count 88340)
193s 95.240% <= 0.807 milliseconds (cumulative count 95240)
193s 99.680% <= 0.903 milliseconds (cumulative count 99680)
193s 99.720% <= 1.007 milliseconds (cumulative count 99720)
193s 99.960% <= 9.103 milliseconds (cumulative count 99960)
193s 100.000% <= 10.103 milliseconds (cumulative count 100000)
193s
193s Summary:
193s   throughput summary: 840336.12 requests per second
193s   latency summary (msec):
193s           avg       min       p50       p95       p99       max
193s         0.483     0.096     0.431     0.807     0.879     9.119
193s INCR: rps=107490.0 (overall: 817575.8) avg_msec=0.430 (overall: 0.430)
193s ====== INCR ======
193s   100000 requests completed in 0.12 seconds
193s   50 parallel clients
193s   3 bytes payload
193s   keep alive: 1
193s   host configuration "save": 3600 1 300 100 60 10000
193s   host configuration "appendonly": no
193s   multi-thread: no
193s
193s Latency by percentile distribution:
193s 0.000% <= 0.031 milliseconds (cumulative count 10)
193s 50.000% <= 0.431 milliseconds (cumulative count 50220)
193s 75.000% <= 0.559 milliseconds (cumulative count 75610)
193s 87.500% <= 0.711 milliseconds (cumulative count 88100)
193s 93.750% <= 0.799 milliseconds (cumulative count 94050)
193s 96.875% <= 0.847 milliseconds (cumulative count 97030)
193s 98.438% <= 0.879 milliseconds (cumulative count 98550)
193s 99.219% <= 0.895 milliseconds (cumulative count 99240)
193s 99.609% <= 0.911 milliseconds (cumulative count 99720)
193s 99.805% <= 0.919 milliseconds (cumulative count 99840)
193s 99.902% <= 0.927 milliseconds (cumulative count 99920)
193s 99.951% <= 0.943 milliseconds (cumulative count 99970)
193s 99.976% <= 4.855 milliseconds (cumulative count 99980)
193s 99.988% <= 4.863 milliseconds (cumulative count 100000)
193s 100.000% <= 4.863 milliseconds (cumulative count 100000)
193s
193s Cumulative distribution of latencies:
193s 0.140% <= 0.103 milliseconds (cumulative count 140)
193s 3.880% <= 0.207 milliseconds (cumulative count 3880)
193s 16.990% <= 0.303 milliseconds (cumulative count 16990)
193s 43.850% <= 0.407 milliseconds (cumulative count 43850)
193s 67.250% <= 0.503 milliseconds (cumulative count 67250)
193s 79.050% <= 0.607 milliseconds (cumulative count 79050)
193s 87.360% <= 0.703 milliseconds (cumulative count 87360)
193s 94.660% <= 0.807 milliseconds (cumulative count 94660)
193s 99.510% <= 0.903 milliseconds (cumulative count 99510)
193s 99.970% <= 1.007 milliseconds (cumulative count 99970)
193s 100.000% <= 5.103 milliseconds (cumulative count 100000)
193s
193s Summary:
193s   throughput summary: 847457.62 requests per second
193s   latency summary (msec):
193s           avg       min       p50       p95       p99       max
193s         0.464     0.024     0.431     0.815     0.895     4.863
193s ====== LPUSH ======
193s   100000 requests completed in 0.14 seconds
193s   50 parallel clients
193s   3 bytes payload
193s   keep alive: 1
193s   host configuration "save": 3600 1 300 100 60 10000
193s   host configuration "appendonly": no
193s   multi-thread: no
193s
193s Latency by percentile distribution:
193s 0.000% <= 0.111 milliseconds (cumulative count 70)
193s 50.000% <= 0.551 milliseconds (cumulative count 50460)
193s 75.000% <= 0.703 milliseconds (cumulative count 75980)
193s 87.500% <= 0.791 milliseconds (cumulative count 87940)
193s 93.750% <= 0.951 milliseconds (cumulative count 94040)
193s 96.875% <= 1.031 milliseconds (cumulative count 96910)
193s 98.438% <= 1.079 milliseconds (cumulative count 98590)
193s 99.219% <= 1.127 milliseconds (cumulative count 99290)
193s 99.609% <= 9.639 milliseconds (cumulative count 99620)
193s 99.805% <= 9.719 milliseconds (cumulative count 99820)
193s 99.902% <= 9.759
milliseconds (cumulative count 99920) 193s 99.951% <= 9.775 milliseconds (cumulative count 99960) 193s 99.976% <= 9.783 milliseconds (cumulative count 99990) 193s 99.994% <= 9.791 milliseconds (cumulative count 100000) 193s 100.000% <= 9.791 milliseconds (cumulative count 100000) 193s 193s Cumulative distribution of latencies: 193s 0.000% <= 0.103 milliseconds (cumulative count 0) 193s 1.520% <= 0.207 milliseconds (cumulative count 1520) 193s 5.300% <= 0.303 milliseconds (cumulative count 5300) 193s 21.780% <= 0.407 milliseconds (cumulative count 21780) 193s 40.820% <= 0.503 milliseconds (cumulative count 40820) 193s 60.810% <= 0.607 milliseconds (cumulative count 60810) 193s 75.980% <= 0.703 milliseconds (cumulative count 75980) 193s 89.090% <= 0.807 milliseconds (cumulative count 89090) 193s 92.590% <= 0.903 milliseconds (cumulative count 92590) 193s 96.240% <= 1.007 milliseconds (cumulative count 96240) 193s 98.970% <= 1.103 milliseconds (cumulative count 98970) 193s 99.450% <= 1.207 milliseconds (cumulative count 99450) 193s 99.530% <= 1.303 milliseconds (cumulative count 99530) 193s 100.000% <= 10.103 milliseconds (cumulative count 100000) 193s 193s Summary: 193s throughput summary: 719424.44 requests per second 193s latency summary (msec): 193s avg min p50 p95 p99 max 193s 0.613 0.104 0.551 0.975 1.111 9.791 194s RPUSH: rps=67440.0 (overall: 766363.6) avg_msec=0.446 (overall: 0.446) ====== RPUSH ====== 194s 100000 requests completed in 0.13 seconds 194s 50 parallel clients 194s 3 bytes payload 194s keep alive: 1 194s host configuration "save": 3600 1 300 100 60 10000 194s host configuration "appendonly": no 194s multi-thread: no 194s 194s Latency by percentile distribution: 194s 0.000% <= 0.111 milliseconds (cumulative count 160) 194s 50.000% <= 0.471 milliseconds (cumulative count 52280) 194s 75.000% <= 0.559 milliseconds (cumulative count 75140) 194s 87.500% <= 0.799 milliseconds (cumulative count 87970) 194s 93.750% <= 0.855 milliseconds (cumulative count 
94730) 194s 96.875% <= 0.879 milliseconds (cumulative count 97230) 194s 98.438% <= 0.919 milliseconds (cumulative count 98440) 194s 99.219% <= 1.031 milliseconds (cumulative count 99250) 194s 99.609% <= 8.871 milliseconds (cumulative count 99610) 194s 99.805% <= 8.951 milliseconds (cumulative count 99820) 194s 99.902% <= 8.991 milliseconds (cumulative count 99920) 194s 99.951% <= 9.007 milliseconds (cumulative count 99960) 194s 99.976% <= 9.015 milliseconds (cumulative count 99980) 194s 99.988% <= 9.023 milliseconds (cumulative count 100000) 194s 100.000% <= 9.023 milliseconds (cumulative count 100000) 194s 194s Cumulative distribution of latencies: 194s 0.000% <= 0.103 milliseconds (cumulative count 0) 194s 4.570% <= 0.207 milliseconds (cumulative count 4570) 194s 11.280% <= 0.303 milliseconds (cumulative count 11280) 194s 32.660% <= 0.407 milliseconds (cumulative count 32660) 194s 61.580% <= 0.503 milliseconds (cumulative count 61580) 194s 80.860% <= 0.607 milliseconds (cumulative count 80860) 194s 85.410% <= 0.703 milliseconds (cumulative count 85410) 194s 88.770% <= 0.807 milliseconds (cumulative count 88770) 194s 98.160% <= 0.903 milliseconds (cumulative count 98160) 194s 99.090% <= 1.007 milliseconds (cumulative count 99090) 194s 99.510% <= 1.103 milliseconds (cumulative count 99510) 194s 100.000% <= 9.103 milliseconds (cumulative count 100000) 194s 194s Summary: 194s throughput summary: 781249.94 requests per second 194s latency summary (msec): 194s avg min p50 p95 p99 max 194s 0.532 0.104 0.471 0.863 0.975 9.023 194s ====== LPOP ====== 194s 100000 requests completed in 0.13 seconds 194s 50 parallel clients 194s 3 bytes payload 194s keep alive: 1 194s host configuration "save": 3600 1 300 100 60 10000 194s host configuration "appendonly": no 194s multi-thread: no 194s 194s Latency by percentile distribution: 194s 0.000% <= 0.103 milliseconds (cumulative count 10) 194s 50.000% <= 0.535 milliseconds (cumulative count 51120) 194s 75.000% <= 0.671 milliseconds 
(cumulative count 75980) 194s 87.500% <= 0.783 milliseconds (cumulative count 87860) 194s 93.750% <= 0.855 milliseconds (cumulative count 93800) 194s 96.875% <= 0.967 milliseconds (cumulative count 96920) 194s 98.438% <= 1.087 milliseconds (cumulative count 98510) 194s 99.219% <= 1.159 milliseconds (cumulative count 99280) 194s 99.609% <= 1.175 milliseconds (cumulative count 99670) 194s 99.805% <= 1.183 milliseconds (cumulative count 99830) 194s 99.902% <= 1.199 milliseconds (cumulative count 99940) 194s 99.951% <= 1.207 milliseconds (cumulative count 99970) 194s 99.976% <= 1.215 milliseconds (cumulative count 99990) 194s 99.994% <= 1.223 milliseconds (cumulative count 100000) 194s 100.000% <= 1.223 milliseconds (cumulative count 100000) 194s 194s Cumulative distribution of latencies: 194s 0.010% <= 0.103 milliseconds (cumulative count 10) 194s 2.400% <= 0.207 milliseconds (cumulative count 2400) 194s 5.360% <= 0.303 milliseconds (cumulative count 5360) 194s 21.560% <= 0.407 milliseconds (cumulative count 21560) 194s 43.490% <= 0.503 milliseconds (cumulative count 43490) 194s 66.300% <= 0.607 milliseconds (cumulative count 66300) 194s 79.250% <= 0.703 milliseconds (cumulative count 79250) 194s 89.860% <= 0.807 milliseconds (cumulative count 89860) 194s 96.110% <= 0.903 milliseconds (cumulative count 96110) 194s 97.310% <= 1.007 milliseconds (cumulative count 97310) 194s 98.830% <= 1.103 milliseconds (cumulative count 98830) 194s 99.970% <= 1.207 milliseconds (cumulative count 99970) 194s 100.000% <= 1.303 milliseconds (cumulative count 100000) 194s 194s Summary: 194s throughput summary: 775193.81 requests per second 194s latency summary (msec): 194s avg min p50 p95 p99 max 194s 0.556 0.096 0.535 0.879 1.135 1.223 194s RPOP: rps=53280.0 (overall: 1024615.4) avg_msec=0.423 (overall: 0.423) ====== RPOP ====== 194s 100000 requests completed in 0.10 seconds 194s 50 parallel clients 194s 3 bytes payload 194s keep alive: 1 194s host configuration "save": 3600 1 300 100 60 
10000 194s host configuration "appendonly": no 194s multi-thread: no 194s 194s Latency by percentile distribution: 194s 0.000% <= 0.143 milliseconds (cumulative count 10) 194s 50.000% <= 0.423 milliseconds (cumulative count 51820) 194s 75.000% <= 0.487 milliseconds (cumulative count 76680) 194s 87.500% <= 0.527 milliseconds (cumulative count 87920) 194s 93.750% <= 0.559 milliseconds (cumulative count 94940) 194s 96.875% <= 0.575 milliseconds (cumulative count 97010) 194s 98.438% <= 0.607 milliseconds (cumulative count 98590) 194s 99.219% <= 0.655 milliseconds (cumulative count 99250) 194s 99.609% <= 0.711 milliseconds (cumulative count 99610) 194s 99.805% <= 0.751 milliseconds (cumulative count 99820) 194s 99.902% <= 0.775 milliseconds (cumulative count 99910) 194s 99.951% <= 0.839 milliseconds (cumulative count 99960) 194s 99.976% <= 0.863 milliseconds (cumulative count 99980) 194s 99.988% <= 0.879 milliseconds (cumulative count 99990) 194s 99.994% <= 0.895 milliseconds (cumulative count 100000) 194s 100.000% <= 0.895 milliseconds (cumulative count 100000) 194s 194s Cumulative distribution of latencies: 194s 0.000% <= 0.103 milliseconds (cumulative count 0) 194s 0.730% <= 0.207 milliseconds (cumulative count 730) 194s 4.140% <= 0.303 milliseconds (cumulative count 4140) 194s 45.550% <= 0.407 milliseconds (cumulative count 45550) 194s 81.530% <= 0.503 milliseconds (cumulative count 81530) 194s 98.590% <= 0.607 milliseconds (cumulative count 98590) 194s 99.560% <= 0.703 milliseconds (cumulative count 99560) 194s 99.930% <= 0.807 milliseconds (cumulative count 99930) 194s 100.000% <= 0.903 milliseconds (cumulative count 100000) 194s 194s Summary: 194s throughput summary: 1030927.81 requests per second 194s latency summary (msec): 194s avg min p50 p95 p99 max 194s 0.426 0.136 0.423 0.567 0.631 0.895 194s ====== SADD ====== 194s 100000 requests completed in 0.09 seconds 194s 50 parallel clients 194s 3 bytes payload 194s keep alive: 1 194s host configuration "save": 
3600 1 300 100 60 10000 194s host configuration "appendonly": no 194s multi-thread: no 194s 194s Latency by percentile distribution: 194s 0.000% <= 0.135 milliseconds (cumulative count 20) 194s 50.000% <= 0.383 milliseconds (cumulative count 51460) 194s 75.000% <= 0.447 milliseconds (cumulative count 75650) 194s 87.500% <= 0.495 milliseconds (cumulative count 89190) 194s 93.750% <= 0.519 milliseconds (cumulative count 94280) 194s 96.875% <= 0.551 milliseconds (cumulative count 97190) 194s 98.438% <= 0.583 milliseconds (cumulative count 98520) 194s 99.219% <= 0.647 milliseconds (cumulative count 99270) 194s 99.609% <= 0.703 milliseconds (cumulative count 99630) 194s 99.805% <= 0.743 milliseconds (cumulative count 99820) 194s 99.902% <= 0.775 milliseconds (cumulative count 99910) 194s 99.951% <= 0.799 milliseconds (cumulative count 99960) 194s 99.976% <= 0.807 milliseconds (cumulative count 99980) 194s 99.988% <= 0.815 milliseconds (cumulative count 99990) 194s 99.994% <= 0.887 milliseconds (cumulative count 100000) 194s 100.000% <= 0.887 milliseconds (cumulative count 100000) 194s 194s Cumulative distribution of latencies: 194s 0.000% <= 0.103 milliseconds (cumulative count 0) 194s 1.990% <= 0.207 milliseconds (cumulative count 1990) 194s 10.110% <= 0.303 milliseconds (cumulative count 10110) 194s 61.020% <= 0.407 milliseconds (cumulative count 61020) 194s 91.070% <= 0.503 milliseconds (cumulative count 91070) 194s 98.880% <= 0.607 milliseconds (cumulative count 98880) 194s 99.630% <= 0.703 milliseconds (cumulative count 99630) 194s 99.980% <= 0.807 milliseconds (cumulative count 99980) 194s 100.000% <= 0.903 milliseconds (cumulative count 100000) 194s 194s Summary: 194s throughput summary: 1111111.12 requests per second 194s latency summary (msec): 194s avg min p50 p95 p99 max 194s 0.390 0.128 0.383 0.527 0.623 0.887 194s HSET: rps=300080.0 (overall: 1013783.8) avg_msec=0.443 (overall: 0.443) ====== HSET ====== 194s 100000 requests completed in 0.10 seconds 194s 50 
parallel clients 194s 3 bytes payload 194s keep alive: 1 194s host configuration "save": 3600 1 300 100 60 10000 194s host configuration "appendonly": no 194s multi-thread: no 194s 194s Latency by percentile distribution: 194s 0.000% <= 0.135 milliseconds (cumulative count 10) 194s 50.000% <= 0.439 milliseconds (cumulative count 51800) 194s 75.000% <= 0.503 milliseconds (cumulative count 76970) 194s 87.500% <= 0.543 milliseconds (cumulative count 88560) 194s 93.750% <= 0.567 milliseconds (cumulative count 94280) 194s 96.875% <= 0.599 milliseconds (cumulative count 97030) 194s 98.438% <= 0.639 milliseconds (cumulative count 98530) 194s 99.219% <= 0.727 milliseconds (cumulative count 99260) 194s 99.609% <= 0.807 milliseconds (cumulative count 99650) 194s 99.805% <= 0.855 milliseconds (cumulative count 99810) 194s 99.902% <= 0.903 milliseconds (cumulative count 99910) 194s 99.951% <= 0.959 milliseconds (cumulative count 99960) 194s 99.976% <= 0.983 milliseconds (cumulative count 99980) 194s 99.988% <= 0.991 milliseconds (cumulative count 99990) 194s 99.994% <= 1.015 milliseconds (cumulative count 100000) 194s 100.000% <= 1.015 milliseconds (cumulative count 100000) 194s 194s Cumulative distribution of latencies: 194s 0.000% <= 0.103 milliseconds (cumulative count 0) 194s 0.680% <= 0.207 milliseconds (cumulative count 680) 194s 1.920% <= 0.303 milliseconds (cumulative count 1920) 194s 40.140% <= 0.407 milliseconds (cumulative count 40140) 194s 76.970% <= 0.503 milliseconds (cumulative count 76970) 194s 97.550% <= 0.607 milliseconds (cumulative count 97550) 194s 99.110% <= 0.703 milliseconds (cumulative count 99110) 194s 99.650% <= 0.807 milliseconds (cumulative count 99650) 194s 99.910% <= 0.903 milliseconds (cumulative count 99910) 194s 99.990% <= 1.007 milliseconds (cumulative count 99990) 194s 100.000% <= 1.103 milliseconds (cumulative count 100000) 194s 194s Summary: 194s throughput summary: 1010101.00 requests per second 194s latency summary (msec): 194s avg min 
p50 p95 p99 max 194s 0.441 0.128 0.439 0.575 0.695 1.015 194s ====== SPOP ====== 194s 100000 requests completed in 0.08 seconds 194s 50 parallel clients 194s 3 bytes payload 194s keep alive: 1 194s host configuration "save": 3600 1 300 100 60 10000 194s host configuration "appendonly": no 194s multi-thread: no 194s 194s Latency by percentile distribution: 194s 0.000% <= 0.135 milliseconds (cumulative count 20) 194s 50.000% <= 0.335 milliseconds (cumulative count 53850) 194s 75.000% <= 0.391 milliseconds (cumulative count 75900) 194s 87.500% <= 0.439 milliseconds (cumulative count 89640) 194s 93.750% <= 0.455 milliseconds (cumulative count 93900) 194s 96.875% <= 0.479 milliseconds (cumulative count 97010) 194s 98.438% <= 0.511 milliseconds (cumulative count 98730) 194s 99.219% <= 0.543 milliseconds (cumulative count 99250) 194s 99.609% <= 0.599 milliseconds (cumulative count 99630) 194s 99.805% <= 0.647 milliseconds (cumulative count 99810) 194s 99.902% <= 0.687 milliseconds (cumulative count 99910) 194s 99.951% <= 0.719 milliseconds (cumulative count 99960) 194s 99.976% <= 0.743 milliseconds (cumulative count 99990) 194s 99.994% <= 0.759 milliseconds (cumulative count 100000) 194s 100.000% <= 0.759 milliseconds (cumulative count 100000) 194s 194s Cumulative distribution of latencies: 194s 0.000% <= 0.103 milliseconds (cumulative count 0) 194s 1.550% <= 0.207 milliseconds (cumulative count 1550) 194s 32.330% <= 0.303 milliseconds (cumulative count 32330) 194s 80.480% <= 0.407 milliseconds (cumulative count 80480) 194s 98.270% <= 0.503 milliseconds (cumulative count 98270) 194s 99.660% <= 0.607 milliseconds (cumulative count 99660) 194s 99.920% <= 0.703 milliseconds (cumulative count 99920) 194s 100.000% <= 0.807 milliseconds (cumulative count 100000) 194s 194s Summary: 194s throughput summary: 1265822.75 requests per second 194s latency summary (msec): 194s avg min p50 p95 p99 max 194s 0.344 0.128 0.335 0.463 0.519 0.759 194s ====== ZADD ====== 194s 100000 requests 
completed in 0.10 seconds 194s 50 parallel clients 194s 3 bytes payload 194s keep alive: 1 194s host configuration "save": 3600 1 300 100 60 10000 194s host configuration "appendonly": no 194s multi-thread: no 194s 194s Latency by percentile distribution: 194s 0.000% <= 0.143 milliseconds (cumulative count 20) 194s 50.000% <= 0.463 milliseconds (cumulative count 52050) 194s 75.000% <= 0.527 milliseconds (cumulative count 76660) 194s 87.500% <= 0.567 milliseconds (cumulative count 88170) 194s 93.750% <= 0.591 milliseconds (cumulative count 94010) 194s 96.875% <= 0.615 milliseconds (cumulative count 96940) 194s 98.438% <= 0.647 milliseconds (cumulative count 98500) 194s 99.219% <= 0.711 milliseconds (cumulative count 99230) 194s 99.609% <= 0.807 milliseconds (cumulative count 99650) 194s 99.805% <= 0.855 milliseconds (cumulative count 99820) 194s 99.902% <= 0.951 milliseconds (cumulative count 99910) 194s 99.951% <= 1.015 milliseconds (cumulative count 99960) 194s 99.976% <= 1.031 milliseconds (cumulative count 99980) 194s 99.988% <= 1.047 milliseconds (cumulative count 99990) 194s 99.994% <= 1.079 milliseconds (cumulative count 100000) 194s 100.000% <= 1.079 milliseconds (cumulative count 100000) 194s 194s Cumulative distribution of latencies: 194s 0.000% <= 0.103 milliseconds (cumulative count 0) 194s 0.880% <= 0.207 milliseconds (cumulative count 880) 194s 2.780% <= 0.303 milliseconds (cumulative count 2780) 194s 29.950% <= 0.407 milliseconds (cumulative count 29950) 194s 68.410% <= 0.503 milliseconds (cumulative count 68410) 194s 96.370% <= 0.607 milliseconds (cumulative count 96370) 194s 99.170% <= 0.703 milliseconds (cumulative count 99170) 194s 99.650% <= 0.807 milliseconds (cumulative count 99650) 194s 99.860% <= 0.903 milliseconds (cumulative count 99860) 194s 99.950% <= 1.007 milliseconds (cumulative count 99950) 194s 100.000% <= 1.103 milliseconds (cumulative count 100000) 194s 194s Summary: 194s throughput summary: 970873.81 requests per second 194s 
latency summary (msec): 194s avg min p50 p95 p99 max 194s 0.461 0.136 0.463 0.599 0.679 1.079 194s ZPOPMIN: rps=204223.1 (overall: 1250243.9) avg_msec=0.342 (overall: 0.342) ====== ZPOPMIN ====== 194s 100000 requests completed in 0.08 seconds 194s 50 parallel clients 194s 3 bytes payload 194s keep alive: 1 194s host configuration "save": 3600 1 300 100 60 10000 194s host configuration "appendonly": no 194s multi-thread: no 194s 194s Latency by percentile distribution: 194s 0.000% <= 0.127 milliseconds (cumulative count 20) 194s 50.000% <= 0.335 milliseconds (cumulative count 54060) 194s 75.000% <= 0.391 milliseconds (cumulative count 75430) 194s 87.500% <= 0.439 milliseconds (cumulative count 88660) 194s 93.750% <= 0.463 milliseconds (cumulative count 94430) 194s 96.875% <= 0.487 milliseconds (cumulative count 97060) 194s 98.438% <= 0.519 milliseconds (cumulative count 98680) 194s 99.219% <= 0.559 milliseconds (cumulative count 99220) 194s 99.609% <= 0.607 milliseconds (cumulative count 99610) 194s 99.805% <= 0.663 milliseconds (cumulative count 99810) 194s 99.902% <= 0.751 milliseconds (cumulative count 99910) 194s 99.951% <= 0.799 milliseconds (cumulative count 99970) 194s 99.976% <= 0.807 milliseconds (cumulative count 99980) 194s 99.988% <= 0.823 milliseconds (cumulative count 99990) 194s 99.994% <= 0.831 milliseconds (cumulative count 100000) 194s 100.000% <= 0.831 milliseconds (cumulative count 100000) 194s 194s Cumulative distribution of latencies: 194s 0.000% <= 0.103 milliseconds (cumulative count 0) 194s 3.480% <= 0.207 milliseconds (cumulative count 3480) 194s 32.680% <= 0.303 milliseconds (cumulative count 32680) 194s 79.820% <= 0.407 milliseconds (cumulative count 79820) 194s 97.790% <= 0.503 milliseconds (cumulative count 97790) 194s 99.610% <= 0.607 milliseconds (cumulative count 99610) 194s 99.860% <= 0.703 milliseconds (cumulative count 99860) 194s 99.980% <= 0.807 milliseconds (cumulative count 99980) 194s 100.000% <= 0.903 milliseconds 
(cumulative count 100000) 194s 194s Summary: 194s throughput summary: 1250000.00 requests per second 194s latency summary (msec): 194s avg min p50 p95 p99 max 194s 0.342 0.120 0.335 0.471 0.535 0.831 194s ====== LPUSH (needed to benchmark LRANGE) ====== 194s 100000 requests completed in 0.10 seconds 194s 50 parallel clients 194s 3 bytes payload 194s keep alive: 1 194s host configuration "save": 3600 1 300 100 60 10000 194s host configuration "appendonly": no 194s multi-thread: no 194s 194s Latency by percentile distribution: 194s 0.000% <= 0.135 milliseconds (cumulative count 10) 194s 50.000% <= 0.439 milliseconds (cumulative count 52520) 194s 75.000% <= 0.503 milliseconds (cumulative count 76290) 194s 87.500% <= 0.543 milliseconds (cumulative count 87710) 194s 93.750% <= 0.575 milliseconds (cumulative count 94370) 194s 96.875% <= 0.599 milliseconds (cumulative count 97030) 194s 98.438% <= 0.631 milliseconds (cumulative count 98620) 194s 99.219% <= 0.687 milliseconds (cumulative count 99250) 194s 99.609% <= 0.775 milliseconds (cumulative count 99620) 194s 99.805% <= 0.887 milliseconds (cumulative count 99810) 194s 99.902% <= 0.951 milliseconds (cumulative count 99910) 194s 99.951% <= 0.991 milliseconds (cumulative count 99960) 194s 99.976% <= 1.023 milliseconds (cumulative count 99980) 194s 99.988% <= 1.039 milliseconds (cumulative count 99990) 194s 99.994% <= 1.055 milliseconds (cumulative count 100000) 194s 100.000% <= 1.055 milliseconds (cumulative count 100000) 194s 194s Cumulative distribution of latencies: 194s 0.000% <= 0.103 milliseconds (cumulative count 0) 194s 0.770% <= 0.207 milliseconds (cumulative count 770) 194s 4.170% <= 0.303 milliseconds (cumulative count 4170) 194s 39.530% <= 0.407 milliseconds (cumulative count 39530) 194s 76.290% <= 0.503 milliseconds (cumulative count 76290) 194s 97.690% <= 0.607 milliseconds (cumulative count 97690) 194s 99.320% <= 0.703 milliseconds (cumulative count 99320) 194s 99.680% <= 0.807 milliseconds (cumulative 
count 99680) 194s 99.830% <= 0.903 milliseconds (cumulative count 99830) 194s 99.970% <= 1.007 milliseconds (cumulative count 99970) 194s 100.000% <= 1.103 milliseconds (cumulative count 100000) 194s 194s Summary: 194s throughput summary: 1010101.00 requests per second 194s latency summary (msec): 194s avg min p50 p95 p99 max 194s 0.439 0.128 0.439 0.583 0.655 1.055 195s LRANGE_100 (first 100 elements): rps=63346.6 (overall: 143243.2) avg_msec=2.660 (overall: 2.660) LRANGE_100 (first 100 elements): rps=143545.8 (overall: 143453.0) avg_msec=2.727 (overall: 2.706) LRANGE_100 (first 100 elements): rps=138800.0 (overall: 141552.3) avg_msec=2.893 (overall: 2.781) ====== LRANGE_100 (first 100 elements) ====== 195s 100000 requests completed in 0.70 seconds 195s 50 parallel clients 195s 3 bytes payload 195s keep alive: 1 195s host configuration "save": 3600 1 300 100 60 10000 195s host configuration "appendonly": no 195s multi-thread: no 195s 195s Latency by percentile distribution: 195s 0.000% <= 0.199 milliseconds (cumulative count 10) 195s 50.000% <= 2.711 milliseconds (cumulative count 50410) 195s 75.000% <= 3.111 milliseconds (cumulative count 75170) 195s 87.500% <= 3.375 milliseconds (cumulative count 87540) 195s 93.750% <= 3.767 milliseconds (cumulative count 93880) 195s 96.875% <= 4.191 milliseconds (cumulative count 96940) 195s 98.438% <= 4.559 milliseconds (cumulative count 98460) 195s 99.219% <= 4.879 milliseconds (cumulative count 99220) 195s 99.609% <= 5.183 milliseconds (cumulative count 99620) 195s 99.805% <= 5.359 milliseconds (cumulative count 99810) 195s 99.902% <= 5.559 milliseconds (cumulative count 99910) 195s 99.951% <= 6.007 milliseconds (cumulative count 99960) 195s 99.976% <= 6.239 milliseconds (cumulative count 99980) 195s 99.988% <= 6.263 milliseconds (cumulative count 99990) 195s 99.994% <= 6.359 milliseconds (cumulative count 100000) 195s 100.000% <= 6.359 milliseconds (cumulative count 100000) 195s 195s Cumulative distribution of latencies: 
195s 0.000% <= 0.103 milliseconds (cumulative count 0) 195s 0.010% <= 0.207 milliseconds (cumulative count 10) 195s 0.060% <= 1.207 milliseconds (cumulative count 60) 195s 0.170% <= 1.303 milliseconds (cumulative count 170) 195s 0.310% <= 1.407 milliseconds (cumulative count 310) 195s 0.540% <= 1.503 milliseconds (cumulative count 540) 195s 0.880% <= 1.607 milliseconds (cumulative count 880) 195s 1.210% <= 1.703 milliseconds (cumulative count 1210) 195s 1.800% <= 1.807 milliseconds (cumulative count 1800) 195s 3.610% <= 1.903 milliseconds (cumulative count 3610) 195s 7.860% <= 2.007 milliseconds (cumulative count 7860) 195s 13.040% <= 2.103 milliseconds (cumulative count 13040) 195s 74.760% <= 3.103 milliseconds (cumulative count 74760) 195s 96.490% <= 4.103 milliseconds (cumulative count 96490) 195s 99.530% <= 5.103 milliseconds (cumulative count 99530) 195s 99.960% <= 6.103 milliseconds (cumulative count 99960) 195s 100.000% <= 7.103 milliseconds (cumulative count 100000) 195s 195s Summary: 195s throughput summary: 142247.52 requests per second 195s latency summary (msec): 195s avg min p50 p95 p99 max 195s 2.763 0.192 2.711 3.887 4.767 6.359 198s LRANGE_300 (first 300 elements): rps=20238.1 (overall: 31875.0) avg_msec=9.428 (overall: 9.428) LRANGE_300 (first 300 elements): rps=33507.9 (overall: 32873.8) avg_msec=9.125 (overall: 9.239) LRANGE_300 (first 300 elements): rps=36328.0 (overall: 34178.2) avg_msec=8.222 (overall: 8.831) LRANGE_300 (first 300 elements): rps=43259.0 (overall: 36674.7) avg_msec=5.908 (overall: 7.883) LRANGE_300 (first 300 elements): rps=39952.6 (overall: 37385.9) avg_msec=6.871 (overall: 7.649) LRANGE_300 (first 300 elements): rps=28176.0 (overall: 35759.9) avg_msec=11.321 (overall: 8.159) LRANGE_300 (first 300 elements): rps=37444.4 (overall: 36014.4) avg_msec=7.654 (overall: 8.080) LRANGE_300 (first 300 elements): rps=41047.6 (overall: 36675.0) avg_msec=6.982 (overall: 7.919) LRANGE_300 (first 300 elements): rps=41528.0 (overall: 37234.1) 
avg_msec=6.535 (overall: 7.741) LRANGE_300 (first 300 elements): rps=38825.4 (overall: 37399.7) avg_msec=7.559 (overall: 7.721) LRANGE_300 (first 300 elements): rps=31800.0 (overall: 36875.8) avg_msec=9.428 (overall: 7.859) ====== LRANGE_300 (first 300 elements) ====== 198s 100000 requests completed in 2.72 seconds 198s 50 parallel clients 198s 3 bytes payload 198s keep alive: 1 198s host configuration "save": 3600 1 300 100 60 10000 198s host configuration "appendonly": no 198s multi-thread: no 198s 198s Latency by percentile distribution: 198s 0.000% <= 0.607 milliseconds (cumulative count 20) 198s 50.000% <= 7.007 milliseconds (cumulative count 50000) 198s 75.000% <= 9.767 milliseconds (cumulative count 75050) 198s 87.500% <= 12.247 milliseconds (cumulative count 87510) 198s 93.750% <= 13.975 milliseconds (cumulative count 93750) 198s 96.875% <= 15.455 milliseconds (cumulative count 96880) 198s 98.438% <= 16.559 milliseconds (cumulative count 98470) 198s 99.219% <= 17.487 milliseconds (cumulative count 99230) 198s 99.609% <= 18.191 milliseconds (cumulative count 99610) 198s 99.805% <= 18.559 milliseconds (cumulative count 99810) 198s 99.902% <= 18.959 milliseconds (cumulative count 99910) 198s 99.951% <= 19.343 milliseconds (cumulative count 99960) 198s 99.976% <= 19.615 milliseconds (cumulative count 99980) 198s 99.988% <= 19.759 milliseconds (cumulative count 99990) 198s 99.994% <= 19.903 milliseconds (cumulative count 100000) 198s 100.000% <= 19.903 milliseconds (cumulative count 100000) 198s 198s Cumulative distribution of latencies: 198s 0.000% <= 0.103 milliseconds (cumulative count 0) 198s 0.020% <= 0.607 milliseconds (cumulative count 20) 198s 0.280% <= 0.703 milliseconds (cumulative count 280) 198s 0.530% <= 0.807 milliseconds (cumulative count 530) 198s 0.750% <= 0.903 milliseconds (cumulative count 750) 198s 0.990% <= 1.007 milliseconds (cumulative count 990) 198s 1.150% <= 1.103 milliseconds (cumulative count 1150) 198s 1.260% <= 1.207 milliseconds 
(cumulative count 1260) 198s 1.420% <= 1.303 milliseconds (cumulative count 1420) 198s 1.540% <= 1.407 milliseconds (cumulative count 1540) 198s 1.630% <= 1.503 milliseconds (cumulative count 1630) 198s 1.690% <= 1.607 milliseconds (cumulative count 1690) 198s 1.740% <= 1.703 milliseconds (cumulative count 1740) 198s 1.810% <= 1.807 milliseconds (cumulative count 1810) 198s 1.860% <= 1.903 milliseconds (cumulative count 1860) 198s 1.930% <= 2.007 milliseconds (cumulative count 1930) 198s 1.980% <= 2.103 milliseconds (cumulative count 1980) 198s 3.150% <= 3.103 milliseconds (cumulative count 3150) 198s 6.220% <= 4.103 milliseconds (cumulative count 6220) 198s 15.850% <= 5.103 milliseconds (cumulative count 15850) 198s 34.030% <= 6.103 milliseconds (cumulative count 34030) 198s 51.300% <= 7.103 milliseconds (cumulative count 51300) 198s 61.180% <= 8.103 milliseconds (cumulative count 61180) 198s 69.710% <= 9.103 milliseconds (cumulative count 69710) 198s 77.300% <= 10.103 milliseconds (cumulative count 77300) 198s 82.500% <= 11.103 milliseconds (cumulative count 82500) 198s 86.890% <= 12.103 milliseconds (cumulative count 86890) 198s 90.700% <= 13.103 milliseconds (cumulative count 90700) 198s 94.140% <= 14.103 milliseconds (cumulative count 94140) 198s 96.320% <= 15.103 milliseconds (cumulative count 96320) 198s 97.960% <= 16.103 milliseconds (cumulative count 97960) 198s 98.940% <= 17.103 milliseconds (cumulative count 98940) 198s 99.530% <= 18.111 milliseconds (cumulative count 99530) 198s 99.930% <= 19.103 milliseconds (cumulative count 99930) 198s 100.000% <= 20.111 milliseconds (cumulative count 100000) 198s 198s Summary: 198s throughput summary: 36764.71 requests per second 198s latency summary (msec): 198s avg min p50 p95 p99 max 198s 7.915 0.600 7.007 14.423 17.183 19.903 203s LRANGE_500 (first 500 elements): rps=10693.2 (overall: 13287.1) avg_msec=23.357 (overall: 23.357) LRANGE_500 (first 500 elements): rps=15064.0 (overall: 14269.9) avg_msec=18.965 
(overall: 20.793) LRANGE_500 (first 500 elements): rps=14772.9 (overall: 14449.5) avg_msec=19.099 (overall: 20.175) LRANGE_500 (first 500 elements): rps=20725.1 (overall: 16100.6) avg_msec=12.789 (overall: 17.674) LRANGE_500 (first 500 elements): rps=14900.4 (overall: 15850.6) avg_msec=19.166 (overall: 17.966) LRANGE_500 (first 500 elements): rps=14300.8 (overall: 15579.1) avg_msec=19.662 (overall: 18.239) LRANGE_500 (first 500 elements): rps=14803.1 (overall: 15464.1) avg_msec=19.222 (overall: 18.378) LRANGE_500 (first 500 elements): rps=14801.6 (overall: 15379.3) avg_msec=19.401 (overall: 18.504) LRANGE_500 (first 500 elements): rps=14426.9 (overall: 15270.7) avg_msec=19.615 (overall: 18.624) LRANGE_500 (first 500 elements): rps=14515.7 (overall: 15193.2) avg_msec=19.444 (overall: 18.704) LRANGE_500 (first 500 elements): rps=15988.1 (overall: 15267.0) avg_msec=17.458 (overall: 18.583) LRANGE_500 (first 500 elements): rps=21248.0 (overall: 15776.6) avg_msec=12.815 (overall: 17.921) LRANGE_500 (first 500 elements): rps=21219.1 (overall: 16199.3) avg_msec=12.549 (overall: 17.375) LRANGE_500 (first 500 elements): rps=20215.1 (overall: 16488.7) avg_msec=13.327 (overall: 17.017) LRANGE_500 (first 500 elements): rps=22071.7 (overall: 16864.0) avg_msec=12.383 (overall: 16.609) LRANGE_500 (first 500 elements): rps=21525.5 (overall: 17161.9) avg_msec=12.986 (overall: 16.319) LRANGE_500 (first 500 elements): rps=21928.9 (overall: 17446.3) avg_msec=12.403 (overall: 16.025) LRANGE_500 (first 500 elements): rps=22656.1 (overall: 17739.5) avg_msec=11.357 (overall: 15.690) LRANGE_500 (first 500 elements): rps=20880.5 (overall: 17905.6) avg_msec=13.960 (overall: 15.583) LRANGE_500 (first 500 elements): rps=21370.5 (overall: 18079.6) avg_msec=13.076 (overall: 15.434) LRANGE_500 (first 500 elements): rps=24629.5 (overall: 18392.9) avg_msec=9.329 (overall: 15.043) ====== LRANGE_500 (first 500 elements) ====== 203s 100000 requests completed in 5.39 seconds 203s 50 parallel clients 
203s 3 bytes payload
203s keep alive: 1
203s host configuration "save": 3600 1 300 100 60 10000
203s host configuration "appendonly": no
203s multi-thread: no
203s
203s Latency by percentile distribution:
203s 0.000% <= 0.359 milliseconds (cumulative count 10)
203s 50.000% <= 14.791 milliseconds (cumulative count 50030)
203s 75.000% <= 18.927 milliseconds (cumulative count 75060)
203s 87.500% <= 21.263 milliseconds (cumulative count 87520)
203s 93.750% <= 25.775 milliseconds (cumulative count 93750)
203s 96.875% <= 27.599 milliseconds (cumulative count 96900)
203s 98.438% <= 28.319 milliseconds (cumulative count 98440)
203s 99.219% <= 29.471 milliseconds (cumulative count 99220)
203s 99.609% <= 32.495 milliseconds (cumulative count 99610)
203s 99.805% <= 35.807 milliseconds (cumulative count 99810)
203s 99.902% <= 36.671 milliseconds (cumulative count 99910)
203s 99.951% <= 38.495 milliseconds (cumulative count 99960)
203s 99.976% <= 38.783 milliseconds (cumulative count 99980)
203s 99.988% <= 38.911 milliseconds (cumulative count 99990)
203s 99.994% <= 39.039 milliseconds (cumulative count 100000)
203s 100.000% <= 39.039 milliseconds (cumulative count 100000)
203s
203s Cumulative distribution of latencies:
203s 0.000% <= 0.103 milliseconds (cumulative count 0)
203s 0.010% <= 0.407 milliseconds (cumulative count 10)
203s 0.030% <= 0.703 milliseconds (cumulative count 30)
203s 0.140% <= 0.807 milliseconds (cumulative count 140)
203s 0.190% <= 0.903 milliseconds (cumulative count 190)
203s 0.580% <= 1.007 milliseconds (cumulative count 580)
203s 0.780% <= 1.103 milliseconds (cumulative count 780)
203s 1.160% <= 1.207 milliseconds (cumulative count 1160)
203s 1.230% <= 1.303 milliseconds (cumulative count 1230)
203s 1.310% <= 1.407 milliseconds (cumulative count 1310)
203s 1.430% <= 1.503 milliseconds (cumulative count 1430)
203s 1.580% <= 1.607 milliseconds (cumulative count 1580)
203s 1.750% <= 1.703 milliseconds (cumulative count 1750)
203s 1.920% <= 1.807 milliseconds (cumulative count 1920)
203s 2.040% <= 1.903 milliseconds (cumulative count 2040)
203s 2.210% <= 2.007 milliseconds (cumulative count 2210)
203s 2.290% <= 2.103 milliseconds (cumulative count 2290)
203s 3.040% <= 3.103 milliseconds (cumulative count 3040)
203s 3.880% <= 4.103 milliseconds (cumulative count 3880)
203s 4.500% <= 5.103 milliseconds (cumulative count 4500)
203s 5.980% <= 6.103 milliseconds (cumulative count 5980)
203s 7.660% <= 7.103 milliseconds (cumulative count 7660)
203s 11.600% <= 8.103 milliseconds (cumulative count 11600)
203s 18.030% <= 9.103 milliseconds (cumulative count 18030)
203s 25.470% <= 10.103 milliseconds (cumulative count 25470)
203s 31.720% <= 11.103 milliseconds (cumulative count 31720)
203s 38.520% <= 12.103 milliseconds (cumulative count 38520)
203s 43.580% <= 13.103 milliseconds (cumulative count 43580)
203s 47.620% <= 14.103 milliseconds (cumulative count 47620)
203s 51.130% <= 15.103 milliseconds (cumulative count 51130)
203s 55.730% <= 16.103 milliseconds (cumulative count 55730)
203s 62.230% <= 17.103 milliseconds (cumulative count 62230)
203s 69.340% <= 18.111 milliseconds (cumulative count 69340)
203s 76.180% <= 19.103 milliseconds (cumulative count 76180)
203s 82.250% <= 20.111 milliseconds (cumulative count 82250)
203s 86.890% <= 21.103 milliseconds (cumulative count 86890)
203s 89.950% <= 22.111 milliseconds (cumulative count 89950)
203s 91.720% <= 23.103 milliseconds (cumulative count 91720)
203s 92.620% <= 24.111 milliseconds (cumulative count 92620)
203s 93.310% <= 25.103 milliseconds (cumulative count 93310)
203s 94.050% <= 26.111 milliseconds (cumulative count 94050)
203s 95.690% <= 27.103 milliseconds (cumulative count 95690)
203s 98.090% <= 28.111 milliseconds (cumulative count 98090)
203s 99.070% <= 29.103 milliseconds (cumulative count 99070)
203s 99.370% <= 30.111 milliseconds (cumulative count 99370)
203s 99.510% <= 31.103 milliseconds (cumulative count 99510)
203s 99.580% <= 32.111 milliseconds (cumulative count 99580)
203s 99.610% <= 33.119 milliseconds (cumulative count 99610)
203s 99.630% <= 34.111 milliseconds (cumulative count 99630)
203s 99.700% <= 35.103 milliseconds (cumulative count 99700)
203s 99.870% <= 36.127 milliseconds (cumulative count 99870)
203s 99.940% <= 37.119 milliseconds (cumulative count 99940)
203s 99.950% <= 38.111 milliseconds (cumulative count 99950)
203s 100.000% <= 39.103 milliseconds (cumulative count 100000)
203s
203s Summary:
203s   throughput summary: 18559.76 requests per second
203s   latency summary (msec):
203s           avg       min       p50       p95       p99       max
203s        14.808     0.352    14.791    26.767    28.943    39.039
210s ====== LRANGE_600 (first 600 elements) ======
210s 100000 requests completed in 6.84 seconds
210s 50 parallel clients
210s 3 bytes payload
210s keep alive: 1
210s host configuration "save": 3600 1 300 100 60 10000
210s host configuration "appendonly": no
210s multi-thread: no
210s
210s Latency by percentile distribution:
210s 0.000% <= 0.439 milliseconds (cumulative count 10)
210s 50.000% <= 19.151 milliseconds (cumulative count 50050)
210s 75.000% <= 23.439 milliseconds (cumulative count 75040)
210s 87.500% <= 25.983 milliseconds (cumulative count 87500)
210s 93.750% <= 29.119 milliseconds (cumulative count 93750)
210s 96.875% <= 31.007 milliseconds (cumulative count 96880)
210s 98.438% <= 32.047 milliseconds (cumulative count 98480)
210s 99.219% <= 33.279 milliseconds (cumulative count 99230)
210s 99.609% <= 34.687 milliseconds (cumulative count 99610)
210s 99.805% <= 37.919 milliseconds (cumulative count 99810)
210s 99.902% <= 39.231 milliseconds (cumulative count 99910)
210s 99.951% <= 39.903 milliseconds (cumulative count 99960)
210s 99.976% <= 40.223 milliseconds (cumulative count 99980)
210s 99.988% <= 40.351 milliseconds (cumulative count 99990)
210s 99.994% <= 40.543 milliseconds (cumulative count 100000)
210s 100.000% <= 40.543 milliseconds (cumulative count 100000)
210s
210s Cumulative distribution of latencies:
210s 0.000% <= 0.103 milliseconds (cumulative count 0)
210s 0.020% <= 0.503 milliseconds (cumulative count 20)
210s 0.410% <= 0.903 milliseconds (cumulative count 410)
210s 0.560% <= 1.007 milliseconds (cumulative count 560)
210s 0.750% <= 1.103 milliseconds (cumulative count 750)
210s 1.080% <= 1.207 milliseconds (cumulative count 1080)
210s 1.240% <= 1.303 milliseconds (cumulative count 1240)
210s 1.380% <= 1.407 milliseconds (cumulative count 1380)
210s 1.500% <= 1.503 milliseconds (cumulative count 1500)
210s 1.690% <= 1.607 milliseconds (cumulative count 1690)
210s 1.920% <= 1.703 milliseconds (cumulative count 1920)
210s 2.130% <= 1.807 milliseconds (cumulative count 2130)
210s 2.340% <= 1.903 milliseconds (cumulative count 2340)
210s 2.430% <= 2.007 milliseconds (cumulative count 2430)
210s 2.510% <= 2.103 milliseconds (cumulative count 2510)
210s 3.010% <= 3.103 milliseconds (cumulative count 3010)
210s 3.680% <= 4.103 milliseconds (cumulative count 3680)
210s 4.330% <= 5.103 milliseconds (cumulative count 4330)
210s 5.260% <= 6.103 milliseconds (cumulative count 5260)
210s 6.110% <= 7.103 milliseconds (cumulative count 6110)
210s 7.420% <= 8.103 milliseconds (cumulative count 7420)
210s 9.080% <= 9.103 milliseconds (cumulative count 9080)
210s 11.300% <= 10.103 milliseconds (cumulative count 11300)
210s 14.180% <= 11.103 milliseconds (cumulative count 14180)
210s 17.880% <= 12.103 milliseconds (cumulative count 17880)
210s 21.980% <= 13.103 milliseconds (cumulative count 21980)
210s 26.250% <= 14.103 milliseconds (cumulative count 26250)
210s 30.770% <= 15.103 milliseconds (cumulative count 30770)
210s 35.560% <= 16.103 milliseconds (cumulative count 35560)
210s 40.920% <= 17.103 milliseconds (cumulative count 40920)
210s 45.600% <= 18.111 milliseconds (cumulative count 45600)
210s 49.780% <= 19.103 milliseconds (cumulative count 49780)
210s 54.420% <= 20.111 milliseconds (cumulative count 54420)
210s 60.940% <= 21.103 milliseconds (cumulative count 60940)
210s 67.640% <= 22.111 milliseconds (cumulative count 67640)
210s 73.320% <= 23.103 milliseconds (cumulative count 73320)
210s 78.760% <= 24.111 milliseconds (cumulative count 78760)
210s 84.090% <= 25.103 milliseconds (cumulative count 84090)
210s 87.900% <= 26.111 milliseconds (cumulative count 87900)
210s 90.380% <= 27.103 milliseconds (cumulative count 90380)
210s 92.320% <= 28.111 milliseconds (cumulative count 92320)
210s 93.720% <= 29.103 milliseconds (cumulative count 93720)
210s 95.170% <= 30.111 milliseconds (cumulative count 95170)
210s 97.020% <= 31.103 milliseconds (cumulative count 97020)
210s 98.550% <= 32.111 milliseconds (cumulative count 98550)
210s 99.160% <= 33.119 milliseconds (cumulative count 99160)
210s 99.470% <= 34.111 milliseconds (cumulative count 99470)
210s 99.670% <= 35.103 milliseconds (cumulative count 99670)
210s 99.710% <= 36.127 milliseconds (cumulative count 99710)
210s 99.760% <= 37.119 milliseconds (cumulative count 99760)
210s 99.820% <= 38.111 milliseconds (cumulative count 99820)
210s 99.890% <= 39.103 milliseconds (cumulative count 99890)
210s 99.970% <= 40.127 milliseconds (cumulative count 99970)
210s 100.000% <= 41.119 milliseconds (cumulative count 100000)
210s
210s Summary:
210s   throughput summary: 14626.30 requests per second
210s   latency summary (msec):
210s           avg       min       p50       p95       p99       max
210s        18.526     0.432    19.151    29.999    32.799    40.543
210s ====== MSET (10 keys) ======
210s 100000 requests completed in 0.35 seconds
210s 50 parallel clients
210s 3 bytes payload
210s keep alive: 1
210s host configuration "save": 3600 1 300 100 60 10000
210s host configuration "appendonly": no
210s multi-thread: no
210s
210s Latency by percentile distribution:
210s 0.000% <= 0.183 milliseconds (cumulative count 10)
210s 50.000% <= 1.703 milliseconds (cumulative count 52020)
210s 75.000% <= 1.783 milliseconds (cumulative count 75540)
210s 87.500% <= 1.887 milliseconds (cumulative count 87530)
210s 93.750% <= 2.343 milliseconds (cumulative count 93750)
210s 96.875% <= 2.719 milliseconds (cumulative count 96910)
210s 98.438% <= 2.847 milliseconds (cumulative count 98450)
210s 99.219% <= 2.951 milliseconds (cumulative count 99220)
210s 99.609% <= 3.031 milliseconds (cumulative count 99630)
210s 99.805% <= 3.151 milliseconds (cumulative count 99810)
210s 99.902% <= 3.287 milliseconds (cumulative count 99910)
210s 99.951% <= 3.335 milliseconds (cumulative count 99960)
210s 99.976% <= 3.359 milliseconds (cumulative count 99980)
210s 99.988% <= 3.367 milliseconds (cumulative count 99990)
210s 99.994% <= 3.375 milliseconds (cumulative count 100000)
210s 100.000% <= 3.375 milliseconds (cumulative count 100000)
210s
210s Cumulative distribution of latencies:
210s 0.000% <= 0.103 milliseconds (cumulative count 0)
210s 0.010% <= 0.207 milliseconds (cumulative count 10)
210s 0.080% <= 0.807 milliseconds (cumulative count 80)
210s 1.190% <= 0.903 milliseconds (cumulative count 1190)
210s 8.340% <= 1.007 milliseconds (cumulative count 8340)
210s 9.910% <= 1.103 milliseconds (cumulative count 9910)
210s 10.320% <= 1.207 milliseconds (cumulative count 10320)
210s 11.060% <= 1.303 milliseconds (cumulative count 11060)
210s 11.850% <= 1.407 milliseconds (cumulative count 11850)
210s 14.720% <= 1.503 milliseconds (cumulative count 14720)
210s 25.360% <= 1.607 milliseconds (cumulative count 25360)
210s 52.020% <= 1.703 milliseconds (cumulative count 52020)
210s 80.030% <= 1.807 milliseconds (cumulative count 80030)
210s 88.110% <= 1.903 milliseconds (cumulative count 88110)
210s 90.890% <= 2.007 milliseconds (cumulative count 90890)
210s 92.040% <= 2.103 milliseconds (cumulative count 92040)
210s 99.770% <= 3.103 milliseconds (cumulative count 99770)
210s 100.000% <= 4.103 milliseconds (cumulative count 100000)
210s
210s Summary:
210s   throughput summary: 285714.28 requests per second
210s   latency summary (msec):
210s           avg       min       p50       p95       p99       max
210s         1.695     0.176     1.703     2.511     2.927     3.375
210s ====== XADD ======
210s 100000 requests completed in 0.14 seconds
210s 50 parallel clients
210s 3 bytes payload
210s keep alive: 1
210s host configuration "save": 3600 1 300 100 60 10000
210s host configuration "appendonly": no
210s multi-thread: no
210s
210s Latency by percentile distribution:
210s 0.000% <= 0.119 milliseconds (cumulative count 20)
210s 50.000% <= 0.671 milliseconds (cumulative count 52570)
210s 75.000% <= 0.719 milliseconds (cumulative count 76910)
210s 87.500% <= 0.751 milliseconds (cumulative count 88380)
210s 93.750% <= 0.775 milliseconds (cumulative count 93890)
210s 96.875% <= 0.807 milliseconds (cumulative count 96930)
210s 98.438% <= 0.863 milliseconds (cumulative count 98580)
210s 99.219% <= 0.919 milliseconds (cumulative count 99260)
210s 99.609% <= 0.959 milliseconds (cumulative count 99620)
210s 99.805% <= 0.991 milliseconds (cumulative count 99810)
210s 99.902% <= 1.015 milliseconds (cumulative count 99910)
210s 99.951% <= 1.039 milliseconds (cumulative count 99970)
210s 99.976% <= 1.047 milliseconds (cumulative count 99990)
210s 99.994% <= 1.055 milliseconds (cumulative count 100000)
210s 100.000% <= 1.055 milliseconds (cumulative count 100000)
210s
210s Cumulative distribution of latencies:
210s 0.000% <= 0.103 milliseconds (cumulative count 0)
210s 0.150% <= 0.207 milliseconds (cumulative count 150)
210s 0.370% <= 0.303 milliseconds (cumulative count 370)
210s 2.780% <= 0.407 milliseconds (cumulative count 2780)
210s 13.910% <= 0.503 milliseconds (cumulative count 13910)
210s 20.910% <= 0.607 milliseconds (cumulative count 20910)
210s 70.200% <= 0.703 milliseconds (cumulative count 70200)
210s 96.930% <= 0.807 milliseconds (cumulative count 96930)
210s 99.110% <= 0.903 milliseconds (cumulative count 99110)
210s 99.880% <= 1.007 milliseconds (cumulative count 99880)
210s 100.000% <= 1.103 milliseconds (cumulative count 100000)
210s
210s Summary:
210s   throughput summary: 724637.69 requests per second
210s   latency summary (msec):
210s           avg       min       p50       p95       p99       max
210s         0.649     0.112     0.671     0.783     0.895     1.055
210s
211s autopkgtest [19:43:22]: test 0002-benchmark: -----------------------]
211s autopkgtest [19:43:22]: test 0002-benchmark: - - - - - - - - - - results - - - - - - - - - -
211s 0002-benchmark PASS
212s autopkgtest [19:43:23]: test 0003-valkey-check-aof: preparing testbed
212s Reading package lists...
212s Building dependency tree...
212s Reading state information...
212s Starting pkgProblemResolver with broken count: 0
212s Starting 2 pkgProblemResolver with broken count: 0
212s Done
212s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
213s autopkgtest [19:43:24]: test 0003-valkey-check-aof: [-----------------------
214s autopkgtest [19:43:25]: test 0003-valkey-check-aof: -----------------------]
214s 0003-valkey-check-aof PASS
214s autopkgtest [19:43:25]: test 0003-valkey-check-aof: - - - - - - - - - - results - - - - - - - - - -
214s autopkgtest [19:43:25]: test 0004-valkey-check-rdb: preparing testbed
215s Reading package lists...
215s Building dependency tree...
215s Reading state information...
215s Starting pkgProblemResolver with broken count: 0
215s Starting 2 pkgProblemResolver with broken count: 0
215s Done
215s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
216s autopkgtest [19:43:27]: test 0004-valkey-check-rdb: [-----------------------
221s OK
221s [offset 0] Checking RDB file /var/lib/valkey/dump.rdb
221s [offset 27] AUX FIELD valkey-ver = '8.0.2'
221s [offset 41] AUX FIELD redis-bits = '64'
221s [offset 53] AUX FIELD ctime = '1742067812'
221s [offset 68] AUX FIELD used-mem = '3060280'
221s [offset 80] AUX FIELD aof-base = '0'
221s [offset 82] Selecting DB ID 0
221s [offset 565047] Checksum OK
221s [offset 565047] \o/ RDB looks OK! \o/
221s [info] 5 keys read
221s [info] 0 expires
221s [info] 0 already expired
221s autopkgtest [19:43:32]: test 0004-valkey-check-rdb: -----------------------]
222s 0004-valkey-check-rdb PASS
222s autopkgtest [19:43:33]: test 0004-valkey-check-rdb: - - - - - - - - - - results - - - - - - - - - -
222s autopkgtest [19:43:33]: test 0005-cjson: preparing testbed
223s Reading package lists...
223s Building dependency tree...
223s Reading state information...
223s Starting pkgProblemResolver with broken count: 0
223s Starting 2 pkgProblemResolver with broken count: 0
223s Done
223s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
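The benchmark summaries earlier in this log report nearest-rank style percentiles (p50/p95/p99) over per-request timings. As a minimal sketch of that computation with coreutils only (the latency values below are invented, not taken from the run):

```shell
# Nearest-rank percentile sketch over made-up per-request latencies in msec;
# the real benchmark aggregates all 100000 request timings the same way.
printf '%s\n' 0.6 1.2 3.4 5.0 6.8 7.0 7.1 9.9 14.4 19.9 |
  sort -n |
  awk '{ v[NR] = $1; s += $1 }
       END { printf "avg %.3f p50 %.3f p95 %.3f p99 %.3f max %.3f\n",
             s/NR, v[int(NR*0.50)], v[int(NR*0.95)], v[int(NR*0.99)], v[NR] }'
```

This uses a simplified lower nearest-rank index; the benchmark tool's exact interpolation may differ.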
224s autopkgtest [19:43:35]: test 0005-cjson: [-----------------------
229s
229s autopkgtest [19:43:40]: test 0005-cjson: -----------------------]
230s autopkgtest [19:43:41]: test 0005-cjson: - - - - - - - - - - results - - - - - - - - - -
230s 0005-cjson PASS
230s autopkgtest [19:43:41]: test 0006-migrate-from-redis: preparing testbed
350s autopkgtest [19:45:41]: testbed dpkg architecture: s390x
350s autopkgtest [19:45:41]: testbed apt version: 2.9.33
350s autopkgtest [19:45:41]: @@@@@@@@@@@@@@@@@@@@ test bed setup
350s autopkgtest [19:45:41]: testbed release detected to be: plucky
351s autopkgtest [19:45:42]: updating testbed package index (apt update)
351s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease [126 kB]
352s Hit:2 http://ftpmaster.internal/ubuntu plucky InRelease
352s Hit:3 http://ftpmaster.internal/ubuntu plucky-updates InRelease
352s Hit:4 http://ftpmaster.internal/ubuntu plucky-security InRelease
352s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse Sources [14.5 kB]
352s Get:6 http://ftpmaster.internal/ubuntu plucky-proposed/universe Sources [369 kB]
352s Get:7 http://ftpmaster.internal/ubuntu plucky-proposed/main Sources [45.1 kB]
352s Get:8 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x Packages [77.3 kB]
352s Get:9 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x c-n-f Metadata [1824 B]
352s Get:10 http://ftpmaster.internal/ubuntu plucky-proposed/restricted s390x c-n-f Metadata [116 B]
352s Get:11 http://ftpmaster.internal/ubuntu plucky-proposed/universe s390x Packages [314 kB]
352s Get:12 http://ftpmaster.internal/ubuntu plucky-proposed/universe s390x c-n-f Metadata [13.3 kB]
352s Get:13 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse s390x Packages [3532 B]
352s Get:14 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse s390x c-n-f Metadata [240 B]
352s Fetched 965 kB in 1s (989 kB/s)
353s Reading package lists...
354s + lsb_release --codename --short
354s + RELEASE=plucky
354s + cat
354s + [ plucky != trusty ]
354s + DEBIAN_FRONTEND=noninteractive eatmydata apt-get -y --allow-downgrades -o Dpkg::Options::=--force-confnew dist-upgrade
354s Reading package lists...
354s Building dependency tree...
354s Reading state information...
354s Calculating upgrade...
354s The following packages were automatically installed and are no longer required:
354s   libnsl2 libpython3.12-minimal libpython3.12-stdlib libpython3.12t64
354s   linux-headers-6.11.0-8 linux-headers-6.11.0-8-generic
354s   linux-modules-6.11.0-8-generic linux-tools-6.11.0-8
354s   linux-tools-6.11.0-8-generic
354s Use 'sudo apt autoremove' to remove them.
354s The following packages will be upgraded:
354s   pinentry-curses python3-jinja2 strace
354s 3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
354s Need to get 652 kB of archives.
354s After this operation, 27.6 kB of additional disk space will be used.
354s Get:1 http://ftpmaster.internal/ubuntu plucky/main s390x strace s390x 6.13+ds-1ubuntu1 [500 kB]
355s Get:2 http://ftpmaster.internal/ubuntu plucky/main s390x pinentry-curses s390x 1.3.1-2ubuntu3 [42.9 kB]
355s Get:3 http://ftpmaster.internal/ubuntu plucky/main s390x python3-jinja2 all 3.1.5-2ubuntu1 [109 kB]
355s Fetched 652 kB in 1s (1039 kB/s)
355s (Reading database ... 81428 files and directories currently installed.)
355s Preparing to unpack .../strace_6.13+ds-1ubuntu1_s390x.deb ...
355s Unpacking strace (6.13+ds-1ubuntu1) over (6.11-0ubuntu1) ...
355s Preparing to unpack .../pinentry-curses_1.3.1-2ubuntu3_s390x.deb ...
355s Unpacking pinentry-curses (1.3.1-2ubuntu3) over (1.3.1-2ubuntu2) ...
355s Preparing to unpack .../python3-jinja2_3.1.5-2ubuntu1_all.deb ...
355s Unpacking python3-jinja2 (3.1.5-2ubuntu1) over (3.1.5-2) ...
355s Setting up pinentry-curses (1.3.1-2ubuntu3) ...
355s Setting up python3-jinja2 (3.1.5-2ubuntu1) ...
355s Setting up strace (6.13+ds-1ubuntu1) ...
355s Processing triggers for man-db (2.13.0-1) ...
356s + rm /etc/apt/preferences.d/force-downgrade-to-release.pref
356s + /usr/lib/apt/apt-helper analyze-pattern ?true
356s + uname -r
356s + sed s/\./\\./g
356s + running_kernel_pattern=^linux-.*6\.14\.0-10-generic.*
356s + apt list ?obsolete
356s + tail -n+2
356s + + grep -v ^linux-.*6\.14\.0-10-generic.*
356s cut -d/ -f1
356s + obsolete_pkgs=linux-headers-6.11.0-8-generic
356s linux-headers-6.11.0-8
356s linux-modules-6.11.0-8-generic
356s linux-tools-6.11.0-8-generic
356s linux-tools-6.11.0-8
356s + DEBIAN_FRONTEND=noninteractive eatmydata apt-get -y purge --autoremove linux-headers-6.11.0-8-generic linux-headers-6.11.0-8 linux-modules-6.11.0-8-generic linux-tools-6.11.0-8-generic linux-tools-6.11.0-8
356s Reading package lists...
356s Building dependency tree...
356s Reading state information...
356s Solving dependencies...
357s The following packages will be REMOVED:
357s   libnsl2* libpython3.12-minimal* libpython3.12-stdlib* libpython3.12t64*
357s   linux-headers-6.11.0-8* linux-headers-6.11.0-8-generic*
357s   linux-modules-6.11.0-8-generic* linux-tools-6.11.0-8*
357s   linux-tools-6.11.0-8-generic*
357s 0 upgraded, 0 newly installed, 9 to remove and 5 not upgraded.
357s After this operation, 167 MB disk space will be freed.
357s (Reading database ... 81428 files and directories currently installed.)
357s Removing linux-tools-6.11.0-8-generic (6.11.0-8.8) ...
357s Removing linux-tools-6.11.0-8 (6.11.0-8.8) ...
357s Removing libpython3.12t64:s390x (3.12.9-1) ...
357s Removing libpython3.12-stdlib:s390x (3.12.9-1) ...
357s Removing libnsl2:s390x (1.3.0-3build3) ...
357s Removing libpython3.12-minimal:s390x (3.12.9-1) ...
357s Removing linux-headers-6.11.0-8-generic (6.11.0-8.8) ...
357s Removing linux-headers-6.11.0-8 (6.11.0-8.8) ...
358s Removing linux-modules-6.11.0-8-generic (6.11.0-8.8) ...
358s Processing triggers for libc-bin (2.41-1ubuntu1) ...
358s (Reading database ... 56328 files and directories currently installed.)
358s Purging configuration files for libpython3.12-minimal:s390x (3.12.9-1) ...
358s Purging configuration files for linux-modules-6.11.0-8-generic (6.11.0-8.8) ...
358s + grep -q trusty /etc/lsb-release
358s + [ ! -d /usr/share/doc/unattended-upgrades ]
358s + [ ! -d /usr/share/doc/lxd ]
358s + [ ! -d /usr/share/doc/lxd-client ]
358s + [ ! -d /usr/share/doc/snapd ]
358s + type iptables
358s + cat
358s + chmod 755 /etc/rc.local
358s + . /etc/rc.local
358s + iptables -w -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
358s + iptables -A OUTPUT -d 10.255.255.1/32 -p tcp -j DROP
358s + iptables -A OUTPUT -d 10.255.255.2/32 -p tcp -j DROP
358s + uname -m
358s + [ s390x = ppc64le ]
358s + [ -d /run/systemd/system ]
358s + systemd-detect-virt --quiet --vm
358s + mkdir -p /etc/systemd/system/systemd-random-seed.service.d/
358s + cat
358s + grep -q lz4 /etc/initramfs-tools/initramfs.conf
358s + echo COMPRESS=lz4
358s autopkgtest [19:45:49]: upgrading testbed (apt dist-upgrade and autopurge)
358s Reading package lists...
358s Building dependency tree...
358s Reading state information...
359s Calculating upgrade...
359s Starting pkgProblemResolver with broken count: 0
359s Starting 2 pkgProblemResolver with broken count: 0
359s Done
359s Entering ResolveByKeep
359s
359s The following packages will be upgraded:
359s   libc-bin libc-dev-bin libc6 libc6-dev locales
359s 5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
359s Need to get 9512 kB of archives.
359s After this operation, 8192 B of additional disk space will be used.
359s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc6-dev s390x 2.41-1ubuntu2 [1678 kB]
360s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc-dev-bin s390x 2.41-1ubuntu2 [24.3 kB]
360s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc6 s390x 2.41-1ubuntu2 [2892 kB]
361s Get:4 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc-bin s390x 2.41-1ubuntu2 [671 kB]
361s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x locales all 2.41-1ubuntu2 [4246 kB]
362s Preconfiguring packages ...
362s Fetched 9512 kB in 3s (3481 kB/s)
362s (Reading database ... 56326 files and directories currently installed.)
362s Preparing to unpack .../libc6-dev_2.41-1ubuntu2_s390x.deb ...
362s Unpacking libc6-dev:s390x (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
362s Preparing to unpack .../libc-dev-bin_2.41-1ubuntu2_s390x.deb ...
362s Unpacking libc-dev-bin (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
362s Preparing to unpack .../libc6_2.41-1ubuntu2_s390x.deb ...
362s Unpacking libc6:s390x (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
362s Setting up libc6:s390x (2.41-1ubuntu2) ...
363s (Reading database ... 56326 files and directories currently installed.)
363s Preparing to unpack .../libc-bin_2.41-1ubuntu2_s390x.deb ...
363s Unpacking libc-bin (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
363s Setting up libc-bin (2.41-1ubuntu2) ...
363s (Reading database ... 56326 files and directories currently installed.)
363s Preparing to unpack .../locales_2.41-1ubuntu2_all.deb ...
363s Unpacking locales (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
363s Setting up locales (2.41-1ubuntu2) ...
363s Generating locales (this might take a while)...
367s   en_US.UTF-8... done
367s Generation complete.
367s autopkgtest [19:45:57]: rebooting testbed after setup commands that affected boot
367s Setting up libc-dev-bin (2.41-1ubuntu2) ...
367s Setting up libc6-dev:s390x (2.41-1ubuntu2) ...
367s Processing triggers for man-db (2.13.0-1) ...
367s Processing triggers for systemd (257.3-1ubuntu3) ...
367s Reading package lists...
367s Building dependency tree...
367s Reading state information...
367s Starting pkgProblemResolver with broken count: 0
367s Starting 2 pkgProblemResolver with broken count: 0
367s Done
367s Solving dependencies...
367s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
388s Reading package lists...
388s Building dependency tree...
388s Reading state information...
388s Starting pkgProblemResolver with broken count: 0
388s Starting 2 pkgProblemResolver with broken count: 0
388s Done
389s The following NEW packages will be installed:
389s   liblzf1 redis-sentinel redis-server redis-tools
389s 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
389s Need to get 1269 kB of archives.
389s After this operation, 7322 kB of additional disk space will be used.
389s Get:1 http://ftpmaster.internal/ubuntu plucky/universe s390x liblzf1 s390x 3.6-4 [7020 B]
389s Get:2 http://ftpmaster.internal/ubuntu plucky/universe s390x redis-tools s390x 5:7.0.15-3 [1198 kB]
390s Get:3 http://ftpmaster.internal/ubuntu plucky/universe s390x redis-sentinel s390x 5:7.0.15-3 [12.2 kB]
390s Get:4 http://ftpmaster.internal/ubuntu plucky/universe s390x redis-server s390x 5:7.0.15-3 [51.7 kB]
390s Fetched 1269 kB in 1s (1420 kB/s)
390s Selecting previously unselected package liblzf1:s390x.
390s (Reading database ... 56326 files and directories currently installed.)
390s Preparing to unpack .../liblzf1_3.6-4_s390x.deb ...
390s Unpacking liblzf1:s390x (3.6-4) ...
390s Selecting previously unselected package redis-tools.
390s Preparing to unpack .../redis-tools_5%3a7.0.15-3_s390x.deb ...
390s Unpacking redis-tools (5:7.0.15-3) ...
390s Selecting previously unselected package redis-sentinel.
390s Preparing to unpack .../redis-sentinel_5%3a7.0.15-3_s390x.deb ...
390s Unpacking redis-sentinel (5:7.0.15-3) ...
390s Selecting previously unselected package redis-server.
390s Preparing to unpack .../redis-server_5%3a7.0.15-3_s390x.deb ...
390s Unpacking redis-server (5:7.0.15-3) ...
390s Setting up liblzf1:s390x (3.6-4) ...
390s Setting up redis-tools (5:7.0.15-3) ...
390s Setting up redis-server (5:7.0.15-3) ...
390s Created symlink '/etc/systemd/system/redis.service' → '/usr/lib/systemd/system/redis-server.service'.
390s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-server.service' → '/usr/lib/systemd/system/redis-server.service'.
391s Setting up redis-sentinel (5:7.0.15-3) ...
391s Created symlink '/etc/systemd/system/sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'.
391s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'.
391s Processing triggers for man-db (2.13.0-1) ...
392s Processing triggers for libc-bin (2.41-1ubuntu2) ...
397s autopkgtest [19:46:28]: test 0006-migrate-from-redis: [-----------------------
397s + FLAG_FILE=/etc/valkey/REDIS_MIGRATION
397s + sed -i 's#loglevel notice#loglevel debug#' /etc/redis/redis.conf
397s + systemctl restart redis-server
397s OK
397s 1
397s OK
397s + redis-cli -h 127.0.0.1 -p 6379 SET test 1
397s + redis-cli -h 127.0.0.1 -p 6379 GET test
397s + redis-cli -h 127.0.0.1 -p 6379 SAVE
397s + sha256sum /var/lib/redis/dump.rdb
397s acba5b32cd99c69e6626b161ffff072796bd62e70f9aacb6afcb287757c6b123 /var/lib/redis/dump.rdb
397s + apt-get install -y valkey-redis-compat
397s Reading package lists...
397s Building dependency tree...
397s Reading state information...
397s Solving dependencies...
397s The following additional packages will be installed:
397s   valkey-server valkey-tools
397s Suggested packages:
397s   ruby-redis
397s The following packages will be REMOVED:
397s   redis-sentinel redis-server redis-tools
397s The following NEW packages will be installed:
397s   valkey-redis-compat valkey-server valkey-tools
398s 0 upgraded, 3 newly installed, 3 to remove and 0 not upgraded.
398s Need to get 1380 kB of archives.
398s After this operation, 456 kB of additional disk space will be used.
398s Get:1 http://ftpmaster.internal/ubuntu plucky/universe s390x valkey-tools s390x 8.0.2+dfsg1-1ubuntu1 [1324 kB]
398s Get:2 http://ftpmaster.internal/ubuntu plucky/universe s390x valkey-server s390x 8.0.2+dfsg1-1ubuntu1 [48.5 kB]
398s Get:3 http://ftpmaster.internal/ubuntu plucky/universe s390x valkey-redis-compat all 8.0.2+dfsg1-1ubuntu1 [7744 B]
399s Fetched 1380 kB in 1s (1338 kB/s)
399s (Reading database ... 56377 files and directories currently installed.)
399s Removing redis-sentinel (5:7.0.15-3) ...
400s Removing redis-server (5:7.0.15-3) ...
400s Removing redis-tools (5:7.0.15-3) ...
400s Selecting previously unselected package valkey-tools.
400s (Reading database ... 56340 files and directories currently installed.)
400s Preparing to unpack .../valkey-tools_8.0.2+dfsg1-1ubuntu1_s390x.deb ...
400s Unpacking valkey-tools (8.0.2+dfsg1-1ubuntu1) ...
400s Selecting previously unselected package valkey-server.
400s Preparing to unpack .../valkey-server_8.0.2+dfsg1-1ubuntu1_s390x.deb ...
400s Unpacking valkey-server (8.0.2+dfsg1-1ubuntu1) ...
400s Selecting previously unselected package valkey-redis-compat.
400s Preparing to unpack .../valkey-redis-compat_8.0.2+dfsg1-1ubuntu1_all.deb ...
400s Unpacking valkey-redis-compat (8.0.2+dfsg1-1ubuntu1) ...
400s Setting up valkey-tools (8.0.2+dfsg1-1ubuntu1) ...
400s Setting up valkey-server (8.0.2+dfsg1-1ubuntu1) ...
400s Created symlink '/etc/systemd/system/valkey.service' → '/usr/lib/systemd/system/valkey-server.service'.
400s Created symlink '/etc/systemd/system/multi-user.target.wants/valkey-server.service' → '/usr/lib/systemd/system/valkey-server.service'.
400s Setting up valkey-redis-compat (8.0.2+dfsg1-1ubuntu1) ...
400s dpkg-query: no packages found matching valkey-sentinel
400s [I] /etc/redis/redis.conf has been copied to /etc/valkey/valkey.conf. Please, review the content of valkey.conf, especially if you had modified redis.conf.
400s [I] /etc/redis/sentinel.conf has been copied to /etc/valkey/sentinel.conf. Please, review the content of sentinel.conf, especially if you had modified sentinel.conf.
400s [I] On-disk redis dumps moved from /var/lib/redis/ to /var/lib/valkey.
400s Processing triggers for man-db (2.13.0-1) ...
401s 8039992c16838ac923a3c6b37e498185f69acd25be4d2f752074858350b604f7 /var/lib/valkey/dump.rdb
401s + '[' -f /etc/valkey/REDIS_MIGRATION ']'
401s + sha256sum /var/lib/valkey/dump.rdb
401s + systemctl status valkey-server
401s + grep inactive
401s      Active: inactive (dead) since Sat 2025-03-15 19:47:57 UTC; 467ms ago
401s + rm /etc/valkey/REDIS_MIGRATION
401s + systemctl start valkey-server
401s + systemctl status valkey-server
401s + grep running
401s      Active: active (running) since Sat 2025-03-15 19:47:57 UTC; 6ms ago
401s + sha256sum /var/lib/valkey/dump.rdb
401s 8039992c16838ac923a3c6b37e498185f69acd25be4d2f752074858350b604f7 /var/lib/valkey/dump.rdb
401s + cat /etc/valkey/valkey.conf
401s + grep loglevel
401s + grep debug
401s + valkey-cli -h 127.0.0.1 -p 6379 GET test
401s + grep 1
401s loglevel debug
401s 1
401s autopkgtest [19:46:32]: test 0006-migrate-from-redis: -----------------------]
402s 0006-migrate-from-redis PASS
402s autopkgtest [19:46:33]: test 0006-migrate-from-redis: - - - - - - - - - - results - - - - - - - - - -
402s autopkgtest [19:46:33]: @@@@@@@@@@@@@@@@@@@@ summary
402s 0001-valkey-cli PASS
402s 0002-benchmark PASS
402s 0003-valkey-check-aof PASS
402s 0004-valkey-check-rdb PASS
402s 0005-cjson PASS
402s 0006-migrate-from-redis PASS
420s nova [W] Using flock in prodstack6-s390x
420s flock: timeout while waiting to get lock
420s Creating nova instance adt-plucky-s390x-valkey-20250315-193950-juju-7f2275-prod-proposed-migration-environment-15-775f14b8-eaf5-444b-a91b-72596fbd4e7f from image adt/ubuntu-plucky-s390x-server-20250315.img (UUID 3d3557fa-fd0f-4bba-9b89-8d5964e09f61)...
420s nova [W] Timed out waiting for bc2f146b-6eca-40b0-9c8b-724fb4ff8120 to get deleted.
420s nova [W] Using flock in prodstack6-s390x
420s flock: timeout while waiting to get lock
420s Creating nova instance adt-plucky-s390x-valkey-20250315-193950-juju-7f2275-prod-proposed-migration-environment-15-775f14b8-eaf5-444b-a91b-72596fbd4e7f from image adt/ubuntu-plucky-s390x-server-20250315.img (UUID 3d3557fa-fd0f-4bba-9b89-8d5964e09f61)...
420s nova [W] Timed out waiting for 1e87f0ee-165c-4550-9cf1-2a46dcbb9554 to get deleted.