0s autopkgtest [18:38:40]: starting date and time: 2025-03-15 18:38:40+0000
0s autopkgtest [18:38:40]: git checkout: 325255d2 Merge branch 'pin-any-arch' into 'ubuntu/production'
0s autopkgtest [18:38:40]: host juju-7f2275-prod-proposed-migration-environment-15; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.zy1lsua3/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:glibc --apt-upgrade redict --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=glibc/2.41-1ubuntu2 -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest-s390x --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-15@bos03-s390x-10.secgroup --name adt-plucky-s390x-redict-20250315-183840-juju-7f2275-prod-proposed-migration-environment-15-1fff7fc0-4fb4-4a8d-8293-bcfe3931d189 --image adt/ubuntu-plucky-s390x-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-15 --net-id=net_prod-proposed-migration-s390x -e TERM=linux -e ''"'"'http_proxy=http://squid.internal:3128'"'"'' -e ''"'"'https_proxy=http://squid.internal:3128'"'"'' -e ''"'"'no_proxy=127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,keyserver.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,10.24.0.0/24,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com,radosgw.ps5.canonical.com'"'"'' --mirror=http://ftpmaster.internal/ubuntu/
135s autopkgtest [18:40:55]: testbed dpkg architecture: s390x
135s autopkgtest [18:40:55]: testbed apt version: 2.9.33
136s autopkgtest [18:40:56]: @@@@@@@@@@@@@@@@@@@@ test bed setup
136s autopkgtest [18:40:56]: testbed release detected to be: None
136s autopkgtest [18:40:56]: updating testbed package index (apt update)
137s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease [126 kB]
137s Hit:2 http://ftpmaster.internal/ubuntu plucky InRelease
137s Hit:3 http://ftpmaster.internal/ubuntu plucky-updates InRelease
137s Hit:4 http://ftpmaster.internal/ubuntu plucky-security InRelease
137s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/universe Sources [379 kB]
138s Get:6 http://ftpmaster.internal/ubuntu plucky-proposed/main Sources [99.7 kB]
138s Get:7 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse Sources [15.8 kB]
138s Get:8 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x Packages [113 kB]
138s Get:9 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x c-n-f Metadata [1824 B]
138s Get:10 http://ftpmaster.internal/ubuntu plucky-proposed/restricted s390x c-n-f Metadata [116 B]
138s Get:11 http://ftpmaster.internal/ubuntu plucky-proposed/universe s390x Packages [320 kB]
138s Get:12 http://ftpmaster.internal/ubuntu plucky-proposed/universe s390x c-n-f Metadata [13.4 kB]
138s Get:13 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse s390x Packages [3776 B]
138s Get:14 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse s390x c-n-f Metadata [240 B]
139s Fetched 1073 kB in 2s (606 kB/s)
139s Reading package lists...
140s + lsb_release --codename --short
140s + RELEASE=plucky
140s + cat
140s + [ plucky != trusty ]
140s + DEBIAN_FRONTEND=noninteractive eatmydata apt-get -y --allow-downgrades -o Dpkg::Options::=--force-confnew dist-upgrade
140s Reading package lists...
140s Building dependency tree...
140s Reading state information...
140s Calculating upgrade...
140s Calculating upgrade...
140s The following packages were automatically installed and are no longer required:
140s   libnsl2 libpython3.12-minimal libpython3.12-stdlib libpython3.12t64
140s   linux-headers-6.11.0-8 linux-headers-6.11.0-8-generic
140s   linux-modules-6.11.0-8-generic linux-tools-6.11.0-8
140s   linux-tools-6.11.0-8-generic
140s Use 'sudo apt autoremove' to remove them.
140s The following packages will be upgraded:
140s   pinentry-curses python3-jinja2 strace
140s 3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
140s Need to get 652 kB of archives.
140s After this operation, 27.6 kB of additional disk space will be used.
140s Get:1 http://ftpmaster.internal/ubuntu plucky/main s390x strace s390x 6.13+ds-1ubuntu1 [500 kB]
141s Get:2 http://ftpmaster.internal/ubuntu plucky/main s390x pinentry-curses s390x 1.3.1-2ubuntu3 [42.9 kB]
141s Get:3 http://ftpmaster.internal/ubuntu plucky/main s390x python3-jinja2 all 3.1.5-2ubuntu1 [109 kB]
141s Fetched 652 kB in 1s (617 kB/s)
142s (Reading database ... 81428 files and directories currently installed.)
142s Preparing to unpack .../strace_6.13+ds-1ubuntu1_s390x.deb ...
142s Unpacking strace (6.13+ds-1ubuntu1) over (6.11-0ubuntu1) ...
142s Preparing to unpack .../pinentry-curses_1.3.1-2ubuntu3_s390x.deb ...
142s Unpacking pinentry-curses (1.3.1-2ubuntu3) over (1.3.1-2ubuntu2) ...
142s Preparing to unpack .../python3-jinja2_3.1.5-2ubuntu1_all.deb ...
142s Unpacking python3-jinja2 (3.1.5-2ubuntu1) over (3.1.5-2) ...
142s Setting up pinentry-curses (1.3.1-2ubuntu3) ...
142s Setting up python3-jinja2 (3.1.5-2ubuntu1) ...
142s Setting up strace (6.13+ds-1ubuntu1) ...
142s Processing triggers for man-db (2.13.0-1) ...
142s + rm /etc/apt/preferences.d/force-downgrade-to-release.pref
142s + /usr/lib/apt/apt-helper analyze-pattern ?true
142s + uname -r
142s + sed s/\./\\./g
142s + running_kernel_pattern=^linux-.*6\.14\.0-10-generic.*
142s + apt list ?obsolete
142s + tail -n+2
142s + cut -d/ -f1
142s + grep -v ^linux-.*6\.14\.0-10-generic.*
142s + obsolete_pkgs=linux-headers-6.11.0-8-generic
142s linux-headers-6.11.0-8
142s linux-modules-6.11.0-8-generic
142s linux-tools-6.11.0-8-generic
142s linux-tools-6.11.0-8
142s + DEBIAN_FRONTEND=noninteractive eatmydata apt-get -y purge --autoremove linux-headers-6.11.0-8-generic linux-headers-6.11.0-8 linux-modules-6.11.0-8-generic linux-tools-6.11.0-8-generic linux-tools-6.11.0-8
142s Reading package lists...
143s Building dependency tree...
143s Reading state information...
143s Solving dependencies...
143s The following packages will be REMOVED:
143s   libnsl2* libpython3.12-minimal* libpython3.12-stdlib* libpython3.12t64*
143s   linux-headers-6.11.0-8* linux-headers-6.11.0-8-generic*
143s   linux-modules-6.11.0-8-generic* linux-tools-6.11.0-8*
143s   linux-tools-6.11.0-8-generic*
143s 0 upgraded, 0 newly installed, 9 to remove and 5 not upgraded.
143s After this operation, 167 MB disk space will be freed.
143s (Reading database ... 81428 files and directories currently installed.)
143s Removing linux-tools-6.11.0-8-generic (6.11.0-8.8) ...
143s Removing linux-tools-6.11.0-8 (6.11.0-8.8) ...
143s Removing libpython3.12t64:s390x (3.12.9-1) ...
143s Removing libpython3.12-stdlib:s390x (3.12.9-1) ...
143s Removing libnsl2:s390x (1.3.0-3build3) ...
143s Removing libpython3.12-minimal:s390x (3.12.9-1) ...
143s Removing linux-headers-6.11.0-8-generic (6.11.0-8.8) ...
143s Removing linux-headers-6.11.0-8 (6.11.0-8.8) ...
144s Removing linux-modules-6.11.0-8-generic (6.11.0-8.8) ...
144s Processing triggers for libc-bin (2.41-1ubuntu1) ...
144s (Reading database ... 56328 files and directories currently installed.)
144s Purging configuration files for libpython3.12-minimal:s390x (3.12.9-1) ...
144s Purging configuration files for linux-modules-6.11.0-8-generic (6.11.0-8.8) ...
144s + grep -q trusty /etc/lsb-release
144s + [ ! -d /usr/share/doc/unattended-upgrades ]
144s + [ ! -d /usr/share/doc/lxd ]
144s + [ ! -d /usr/share/doc/lxd-client ]
144s + [ ! -d /usr/share/doc/snapd ]
144s + type iptables
144s + cat
144s + chmod 755 /etc/rc.local
144s + . /etc/rc.local
144s + iptables -w -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
144s + iptables -A OUTPUT -d 10.255.255.1/32 -p tcp -j DROP
144s + iptables -A OUTPUT -d 10.255.255.2/32 -p tcp -j DROP
144s + uname -m
144s + [ s390x = ppc64le ]
144s + [ -d /run/systemd/system ]
144s + systemd-detect-virt --quiet --vm
144s + mkdir -p /etc/systemd/system/systemd-random-seed.service.d/
144s + cat
144s + grep -q lz4 /etc/initramfs-tools/initramfs.conf
144s + echo COMPRESS=lz4
144s autopkgtest [18:41:04]: upgrading testbed (apt dist-upgrade and autopurge)
144s Reading package lists...
145s Building dependency tree...
145s Reading state information...
145s Calculating upgrade...
145s Starting pkgProblemResolver with broken count: 0
145s Starting 2 pkgProblemResolver with broken count: 0
145s Done
145s Entering ResolveByKeep
145s 
145s Calculating upgrade...
145s The following packages will be upgraded:
145s   libc-bin libc-dev-bin libc6 libc6-dev locales
145s 5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
145s Need to get 9512 kB of archives.
145s After this operation, 8192 B of additional disk space will be used.
145s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc6-dev s390x 2.41-1ubuntu2 [1678 kB]
147s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc-dev-bin s390x 2.41-1ubuntu2 [24.3 kB]
147s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc6 s390x 2.41-1ubuntu2 [2892 kB]
150s Get:4 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x libc-bin s390x 2.41-1ubuntu2 [671 kB]
151s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/main s390x locales all 2.41-1ubuntu2 [4246 kB]
156s Preconfiguring packages ...
156s Fetched 9512 kB in 11s (868 kB/s)
156s (Reading database ... 56326 files and directories currently installed.)
156s Preparing to unpack .../libc6-dev_2.41-1ubuntu2_s390x.deb ...
156s Unpacking libc6-dev:s390x (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
156s Preparing to unpack .../libc-dev-bin_2.41-1ubuntu2_s390x.deb ...
156s Unpacking libc-dev-bin (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
156s Preparing to unpack .../libc6_2.41-1ubuntu2_s390x.deb ...
157s Unpacking libc6:s390x (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
157s Setting up libc6:s390x (2.41-1ubuntu2) ...
157s (Reading database ... 56326 files and directories currently installed.)
157s Preparing to unpack .../libc-bin_2.41-1ubuntu2_s390x.deb ...
157s Unpacking libc-bin (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
157s Setting up libc-bin (2.41-1ubuntu2) ...
157s (Reading database ... 56326 files and directories currently installed.)
157s Preparing to unpack .../locales_2.41-1ubuntu2_all.deb ...
157s Unpacking locales (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
157s Setting up locales (2.41-1ubuntu2) ...
157s Generating locales (this might take a while)...
158s   en_US.UTF-8... done
158s Generation complete.
158s Setting up libc-dev-bin (2.41-1ubuntu2) ...
158s Setting up libc6-dev:s390x (2.41-1ubuntu2) ...
158s Processing triggers for man-db (2.13.0-1) ...
159s Processing triggers for systemd (257.3-1ubuntu3) ...
160s Reading package lists...
160s Building dependency tree...
160s Reading state information...
160s Starting pkgProblemResolver with broken count: 0
160s Starting 2 pkgProblemResolver with broken count: 0
160s Done
160s Solving dependencies...
160s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
160s autopkgtest [18:41:20]: rebooting testbed after setup commands that affected boot
178s autopkgtest [18:41:38]: testbed running kernel: Linux 6.14.0-10-generic #10-Ubuntu SMP Wed Mar 12 14:53:49 UTC 2025
181s autopkgtest [18:41:41]: @@@@@@@@@@@@@@@@@@@@ apt-source redict
185s Get:1 http://ftpmaster.internal/ubuntu plucky/universe redict 7.3.2+ds-1 (dsc) [2417 B]
185s Get:2 http://ftpmaster.internal/ubuntu plucky/universe redict 7.3.2+ds-1 (tar) [1742 kB]
185s Get:3 http://ftpmaster.internal/ubuntu plucky/universe redict 7.3.2+ds-1 (diff) [13.4 kB]
186s gpgv: Signature made Wed Jan 8 14:03:38 2025 UTC
186s gpgv: using RSA key 4A5FD1CD115087CC03DC35C1D597897206C5F07F
186s gpgv: issuer "maytha8thedev@gmail.com"
186s gpgv: Can't check signature: No public key
186s dpkg-source: warning: cannot verify inline signature for ./redict_7.3.2+ds-1.dsc: no acceptable signature found
186s autopkgtest [18:41:46]: testing package redict version 7.3.2+ds-1
187s autopkgtest [18:41:47]: build not needed
188s autopkgtest [18:41:48]: test 0001-redict-cli: preparing testbed
189s Reading package lists...
189s Building dependency tree...
189s Reading state information...
189s Starting pkgProblemResolver with broken count: 0
189s Starting 2 pkgProblemResolver with broken count: 0
189s Done
189s The following NEW packages will be installed:
189s   libhiredict1.3.1 liblzf1 redict redict-sentinel redict-server redict-tools
189s 0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
189s Need to get 1324 kB of archives.
189s After this operation, 7319 kB of additional disk space will be used.
189s Get:1 http://ftpmaster.internal/ubuntu plucky/universe s390x libhiredict1.3.1 s390x 1.3.1-2 [41.0 kB]
189s Get:2 http://ftpmaster.internal/ubuntu plucky/universe s390x liblzf1 s390x 3.6-4 [7020 B]
189s Get:3 http://ftpmaster.internal/ubuntu plucky/universe s390x redict-tools s390x 7.3.2+ds-1 [1218 kB]
191s Get:4 http://ftpmaster.internal/ubuntu plucky/universe s390x redict-sentinel s390x 7.3.2+ds-1 [12.6 kB]
191s Get:5 http://ftpmaster.internal/ubuntu plucky/universe s390x redict-server s390x 7.3.2+ds-1 [41.3 kB]
191s Get:6 http://ftpmaster.internal/ubuntu plucky/universe s390x redict all 7.3.2+ds-1 [3720 B]
191s Fetched 1324 kB in 2s (691 kB/s)
191s Selecting previously unselected package libhiredict1.3.1:s390x.
191s (Reading database ... 56326 files and directories currently installed.)
191s Preparing to unpack .../0-libhiredict1.3.1_1.3.1-2_s390x.deb ...
191s Unpacking libhiredict1.3.1:s390x (1.3.1-2) ...
191s Selecting previously unselected package liblzf1:s390x.
191s Preparing to unpack .../1-liblzf1_3.6-4_s390x.deb ...
191s Unpacking liblzf1:s390x (3.6-4) ...
191s Selecting previously unselected package redict-tools.
191s Preparing to unpack .../2-redict-tools_7.3.2+ds-1_s390x.deb ...
191s Unpacking redict-tools (7.3.2+ds-1) ...
191s Selecting previously unselected package redict-sentinel.
191s Preparing to unpack .../3-redict-sentinel_7.3.2+ds-1_s390x.deb ...
191s Unpacking redict-sentinel (7.3.2+ds-1) ...
191s Selecting previously unselected package redict-server.
191s Preparing to unpack .../4-redict-server_7.3.2+ds-1_s390x.deb ...
191s Unpacking redict-server (7.3.2+ds-1) ...
191s Selecting previously unselected package redict.
191s Preparing to unpack .../5-redict_7.3.2+ds-1_all.deb ...
191s Unpacking redict (7.3.2+ds-1) ...
191s Setting up liblzf1:s390x (3.6-4) ...
191s Setting up libhiredict1.3.1:s390x (1.3.1-2) ...
191s Setting up redict-tools (7.3.2+ds-1) ...
191s Creating group 'redict' with GID 988.
191s Creating user 'redict' (Redict Key/Value Store) with UID 988 and GID 988.
191s Setting up redict-server (7.3.2+ds-1) ...
192s Created symlink '/etc/systemd/system/redict.service' → '/usr/lib/systemd/system/redict-server.service'.
192s Created symlink '/etc/systemd/system/multi-user.target.wants/redict-server.service' → '/usr/lib/systemd/system/redict-server.service'.
192s Setting up redict-sentinel (7.3.2+ds-1) ...
192s Created symlink '/etc/systemd/system/sentinel.service' → '/usr/lib/systemd/system/redict-sentinel.service'.
192s Created symlink '/etc/systemd/system/multi-user.target.wants/redict-sentinel.service' → '/usr/lib/systemd/system/redict-sentinel.service'.
193s Setting up redict (7.3.2+ds-1) ...
193s Processing triggers for libc-bin (2.41-1ubuntu2) ...
194s autopkgtest [18:41:54]: test 0001-redict-cli: [-----------------------
199s # Server
199s redict_version:7.3.2
199s redict_git_sha1:00000000
199s redict_git_dirty:0
199s redict_build_id:6e1afbc83ca9dd4a
199s redict_mode:standalone
199s redis_version:7.2.4
199s os:Linux 6.14.0-10-generic s390x
199s arch_bits:64
199s monotonic_clock:POSIX clock_gettime
199s multiplexing_api:epoll
199s atomicvar_api:c11-builtin
199s gcc_version:14.2.0
199s process_id:1746
199s process_supervised:systemd
199s run_id:fb30cb0e1cd29fbb503054c3ed37f94084c7b62b
199s tcp_port:6379
199s server_time_usec:1742064205355331
199s uptime_in_seconds:5
199s uptime_in_days:0
199s hz:10
199s configured_hz:10
199s lru_clock:14010957
199s executable:/usr/bin/redict-server
199s config_file:/etc/redict/redict.conf
199s io_threads_active:0
199s listener0:name=tcp,bind=127.0.0.1,bind=-::1,port=6379
199s 
199s # Clients
199s connected_clients:3
199s cluster_connections:0
199s maxclients:10000
199s client_recent_max_input_buffer:20480
199s client_recent_max_output_buffer:0
199s blocked_clients:0
199s tracking_clients:0
199s pubsub_clients:1
199s watching_clients:0
199s clients_in_timeout_table:0
199s total_watched_keys:0
199s total_blocking_keys:0
199s total_blocking_keys_on_nokey:0
199s 
199s # Memory
199s used_memory:1125040
199s used_memory_human:1.07M
199s used_memory_rss:14413824
199s used_memory_rss_human:13.75M
199s used_memory_peak:1125040
199s used_memory_peak_human:1.07M
199s used_memory_peak_perc:100.92%
199s used_memory_overhead:983968
199s used_memory_startup:938912
199s used_memory_dataset:141072
199s used_memory_dataset_perc:75.79%
199s allocator_allocated:4785312
199s allocator_active:9895936
199s allocator_resident:12255232
199s allocator_muzzy:0
199s total_system_memory:4190969856
199s total_system_memory_human:3.90G
199s used_memory_lua:31744
199s used_memory_vm_eval:31744
199s used_memory_lua_human:31.00K
199s used_memory_scripts_eval:0
199s number_of_cached_scripts:0
199s number_of_functions:0
199s number_of_libraries:0
199s used_memory_vm_functions:33792
199s used_memory_vm_total:65536
199s used_memory_vm_total_human:64.00K
199s used_memory_functions:200
199s used_memory_scripts:200
199s used_memory_scripts_human:200B
199s maxmemory:0
199s maxmemory_human:0B
199s maxmemory_policy:noeviction
199s allocator_frag_ratio:2.05
199s allocator_frag_bytes:5045088
199s allocator_rss_ratio:1.24
199s allocator_rss_bytes:2359296
199s rss_overhead_ratio:1.18
199s rss_overhead_bytes:2158592
199s mem_fragmentation_ratio:13.30
199s mem_fragmentation_bytes:13329784
199s mem_not_counted_for_evict:0
199s mem_replication_backlog:0
199s mem_total_replication_buffers:0
199s mem_clients_slaves:0
199s mem_clients_normal:44856
199s mem_cluster_links:0
199s mem_aof_buffer:0
199s mem_allocator:jemalloc-5.3.0
199s mem_overhead_db_hashtable_rehashing:0
199s active_defrag_running:0
199s lazyfree_pending_objects:0
199s lazyfreed_objects:0
199s 
199s # Persistence
199s loading:0
199s async_loading:0
199s current_cow_peak:0
199s current_cow_size:0
199s current_cow_size_age:0
199s current_fork_perc:0.00
199s current_save_keys_processed:0
199s current_save_keys_total:0
199s rdb_changes_since_last_save:0
199s rdb_bgsave_in_progress:0
199s rdb_last_save_time:1742064200
199s rdb_last_bgsave_status:ok
199s rdb_last_bgsave_time_sec:-1
199s rdb_current_bgsave_time_sec:-1
199s rdb_saves:0
199s rdb_last_cow_size:0
199s rdb_last_load_keys_expired:0
199s rdb_last_load_keys_loaded:0
199s aof_enabled:0
199s aof_rewrite_in_progress:0
199s aof_rewrite_scheduled:0
199s aof_last_rewrite_time_sec:-1
199s aof_current_rewrite_time_sec:-1
199s aof_last_bgrewrite_status:ok
199s aof_rewrites:0
199s aof_rewrites_consecutive_failures:0
199s aof_last_write_status:ok
199s aof_last_cow_size:0
199s module_fork_in_progress:0
199s module_fork_last_cow_size:0
199s 
199s # Stats
199s total_connections_received:3
199s total_commands_processed:9
199s instantaneous_ops_per_sec:0
199s total_net_input_bytes:497
199s total_net_output_bytes:227
199s total_net_repl_input_bytes:0
199s total_net_repl_output_bytes:0
199s instantaneous_input_kbps:0.01
199s instantaneous_output_kbps:0.00
199s instantaneous_input_repl_kbps:0.00
199s instantaneous_output_repl_kbps:0.00
199s rejected_connections:0
199s sync_full:0
199s sync_partial_ok:0
199s sync_partial_err:0
199s expired_keys:0
199s expired_stale_perc:0.00
199s expired_time_cap_reached_count:0
199s expire_cycle_cpu_milliseconds:0
199s evicted_keys:0
199s evicted_clients:0
199s evicted_scripts:0
199s total_eviction_exceeded_time:0
199s current_eviction_exceeded_time:0
199s keyspace_hits:0
199s keyspace_misses:0
199s pubsub_channels:1
199s pubsub_patterns:0
199s pubsubshard_channels:0
199s latest_fork_usec:0
199s total_forks:0
199s migrate_cached_sockets:0
199s slave_expires_tracked_keys:0
199s active_defrag_hits:0
199s active_defrag_misses:0
199s active_defrag_key_hits:0
199s active_defrag_key_misses:0
199s total_active_defrag_time:0
199s current_active_defrag_time:0
199s tracking_total_keys:0
199s tracking_total_items:0
199s tracking_total_prefixes:0
199s unexpected_error_replies:0
199s total_error_replies:0
199s dump_payload_sanitizations:0
199s total_reads_processed:6
199s total_writes_processed:6
199s io_threaded_reads_processed:0
199s io_threaded_writes_processed:0
199s client_query_buffer_limit_disconnections:0
199s client_output_buffer_limit_disconnections:0
199s reply_buffer_shrinks:2
199s reply_buffer_expands:0
199s eventloop_cycles:56
199s eventloop_duration_sum:8457
199s eventloop_duration_cmd_sum:35
199s instantaneous_eventloop_cycles_per_sec:9
199s instantaneous_eventloop_duration_usec:159
199s acl_access_denied_auth:0
199s acl_access_denied_cmd:0
199s acl_access_denied_key:0
199s acl_access_denied_channel:0
199s 
199s # Replication
199s role:master
199s connected_slaves:0
199s master_failover_state:no-failover
199s master_replid:f647e610c70953ac9508fc000730fa88d2909ffc
199s master_replid2:0000000000000000000000000000000000000000
199s master_repl_offset:0
199s second_repl_offset:-1
199s repl_backlog_active:0
199s repl_backlog_size:1048576
199s repl_backlog_first_byte_offset:0
199s repl_backlog_histlen:0
199s 
199s # CPU
199s used_cpu_sys:0.013721
199s used_cpu_user:0.035761
199s used_cpu_sys_children:0.000283
199s used_cpu_user_children:0.000032
199s used_cpu_sys_main_thread:0.013599
199s used_cpu_user_main_thread:0.035718
199s 
199s # Modules
199s 
199s # Errorstats
199s 
199s # Cluster
199s cluster_enabled:0
199s 
199s # Keyspace
199s Redict ver. 7.3.2
200s autopkgtest [18:42:00]: test 0001-redict-cli: -----------------------]
200s 0001-redict-cli PASS
200s autopkgtest [18:42:00]: test 0001-redict-cli: - - - - - - - - - - results - - - - - - - - - -
201s autopkgtest [18:42:01]: test 0002-benchmark: preparing testbed
201s Reading package lists...
201s Building dependency tree...
201s Reading state information...
201s Starting pkgProblemResolver with broken count: 0
201s Starting 2 pkgProblemResolver with broken count: 0
201s Done
201s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
203s autopkgtest [18:42:03]: test 0002-benchmark: [----------------------- 208s PING_INLINE: rps=0.0 (overall: 0.0) avg_msec=nan (overall: nan) ====== PING_INLINE ====== 208s 100000 requests completed in 0.07 seconds 208s 50 parallel clients 208s 3 bytes payload 208s keep alive: 1 208s host configuration "save": 3600 1 300 100 60 10000 208s host configuration "appendonly": no 208s multi-thread: no 208s 208s Latency by percentile distribution: 208s 0.000% <= 0.095 milliseconds (cumulative count 10) 208s 50.000% <= 0.271 milliseconds (cumulative count 51470) 208s 75.000% <= 0.319 milliseconds (cumulative count 76240) 208s 87.500% <= 0.359 milliseconds (cumulative count 88980) 208s 93.750% <= 0.383 milliseconds (cumulative count 94230) 208s 96.875% <= 0.423 milliseconds (cumulative count 97170) 208s 98.438% <= 0.463 milliseconds (cumulative count 98500) 208s 99.219% <= 0.503 milliseconds (cumulative count 99230) 208s 99.609% <= 0.543 milliseconds (cumulative count 99670) 208s 99.805% <= 0.583 milliseconds (cumulative count 99840) 208s 99.902% <= 0.607 milliseconds (cumulative count 99910) 208s 99.951% <= 0.639 milliseconds (cumulative count 99960) 208s 99.976% <= 0.663 milliseconds (cumulative count 99980) 208s 99.988% <= 0.671 milliseconds (cumulative count 99990) 208s 99.994% <= 0.687 milliseconds (cumulative count 100000) 208s 100.000% <= 0.687 milliseconds (cumulative count 100000) 208s 208s Cumulative distribution of latencies: 208s 0.030% <= 0.103 milliseconds (cumulative count 30) 208s 8.610% <= 0.207 milliseconds (cumulative count 8610) 208s 69.770% <= 0.303 milliseconds (cumulative count 69770) 208s 96.370% <= 0.407 milliseconds (cumulative count 96370) 208s 99.230% <= 0.503 milliseconds (cumulative count 99230) 208s 99.910% <= 0.607 milliseconds (cumulative count 99910) 208s 100.000% <= 0.703 milliseconds (cumulative count 100000) 208s 208s Summary: 208s throughput summary: 1492537.25 requests per second 208s latency summary (msec): 208s avg min p50 p95 p99 
max 208s 0.280 0.088 0.271 0.391 0.495 0.687 208s ====== PING_MBULK ====== 208s 100000 requests completed in 0.05 seconds 208s 50 parallel clients 208s 3 bytes payload 208s keep alive: 1 208s host configuration "save": 3600 1 300 100 60 10000 208s host configuration "appendonly": no 208s multi-thread: no 208s 208s Latency by percentile distribution: 208s 0.000% <= 0.087 milliseconds (cumulative count 20) 208s 50.000% <= 0.199 milliseconds (cumulative count 52050) 208s 75.000% <= 0.239 milliseconds (cumulative count 76730) 208s 87.500% <= 0.279 milliseconds (cumulative count 90090) 208s 93.750% <= 0.295 milliseconds (cumulative count 94820) 208s 96.875% <= 0.319 milliseconds (cumulative count 97480) 208s 98.438% <= 0.415 milliseconds (cumulative count 98440) 208s 99.219% <= 0.511 milliseconds (cumulative count 99240) 208s 99.609% <= 0.583 milliseconds (cumulative count 99670) 208s 99.805% <= 0.615 milliseconds (cumulative count 99830) 208s 99.902% <= 0.647 milliseconds (cumulative count 99910) 208s 99.951% <= 0.687 milliseconds (cumulative count 99960) 208s 99.976% <= 0.703 milliseconds (cumulative count 99980) 208s 99.988% <= 0.711 milliseconds (cumulative count 99990) 208s 99.994% <= 0.831 milliseconds (cumulative count 100000) 208s 100.000% <= 0.831 milliseconds (cumulative count 100000) 208s 208s Cumulative distribution of latencies: 208s 0.070% <= 0.103 milliseconds (cumulative count 70) 208s 59.130% <= 0.207 milliseconds (cumulative count 59130) 208s 96.310% <= 0.303 milliseconds (cumulative count 96310) 208s 98.430% <= 0.407 milliseconds (cumulative count 98430) 208s 99.160% <= 0.503 milliseconds (cumulative count 99160) 208s 99.790% <= 0.607 milliseconds (cumulative count 99790) 208s 99.980% <= 0.703 milliseconds (cumulative count 99980) 208s 99.990% <= 0.807 milliseconds (cumulative count 99990) 208s 100.000% <= 0.903 milliseconds (cumulative count 100000) 208s 208s Summary: 208s throughput summary: 2000000.00 requests per second 208s latency summary 
(msec): 208s avg min p50 p95 p99 max 208s 0.209 0.080 0.199 0.303 0.487 0.831 208s ====== SET ====== 208s 100000 requests completed in 0.08 seconds 208s 50 parallel clients 208s 3 bytes payload 208s keep alive: 1 208s host configuration "save": 3600 1 300 100 60 10000 208s host configuration "appendonly": no 208s multi-thread: no 208s 208s Latency by percentile distribution: 208s 0.000% <= 0.095 milliseconds (cumulative count 20) 208s 50.000% <= 0.327 milliseconds (cumulative count 51460) 208s 75.000% <= 0.375 milliseconds (cumulative count 75400) 208s 87.500% <= 0.415 milliseconds (cumulative count 88860) 208s 93.750% <= 0.439 milliseconds (cumulative count 94460) 208s 96.875% <= 0.463 milliseconds (cumulative count 97090) 208s 98.438% <= 0.495 milliseconds (cumulative count 98550) 208s 99.219% <= 0.535 milliseconds (cumulative count 99270) 208s 99.609% <= 0.799 milliseconds (cumulative count 99640) 208s 99.805% <= 0.847 milliseconds (cumulative count 99810) 208s 99.902% <= 0.911 milliseconds (cumulative count 99910) 208s 99.951% <= 0.951 milliseconds (cumulative count 99960) 208s 99.976% <= 0.959 milliseconds (cumulative count 99980) 208s 99.988% <= 0.975 milliseconds (cumulative count 99990) 208s 99.994% <= 0.983 milliseconds (cumulative count 100000) 208s 100.000% <= 0.983 milliseconds (cumulative count 100000) 208s 208s Cumulative distribution of latencies: 208s 0.040% <= 0.103 milliseconds (cumulative count 40) 208s 2.900% <= 0.207 milliseconds (cumulative count 2900) 208s 37.290% <= 0.303 milliseconds (cumulative count 37290) 208s 86.500% <= 0.407 milliseconds (cumulative count 86500) 208s 98.790% <= 0.503 milliseconds (cumulative count 98790) 208s 99.500% <= 0.607 milliseconds (cumulative count 99500) 208s 99.660% <= 0.807 milliseconds (cumulative count 99660) 208s 99.900% <= 0.903 milliseconds (cumulative count 99900) 208s 100.000% <= 1.007 milliseconds (cumulative count 100000) 208s 208s Summary: 208s throughput summary: 1298701.25 requests per second 
208s latency summary (msec):
208s avg min p50 p95 p99 max
208s 0.331 0.088 0.327 0.447 0.519 0.983
208s GET: rps=293360.0 (overall: 1358148.1) avg_msec=0.313 (overall: 0.313) ====== GET ======
208s 100000 requests completed in 0.07 seconds
208s 50 parallel clients
208s 3 bytes payload
208s keep alive: 1
208s host configuration "save": 3600 1 300 100 60 10000
208s host configuration "appendonly": no
208s multi-thread: no
208s
208s Latency by percentile distribution:
208s 0.000% <= 0.079 milliseconds (cumulative count 10)
208s 50.000% <= 0.295 milliseconds (cumulative count 50000)
208s 75.000% <= 0.359 milliseconds (cumulative count 76410)
208s 87.500% <= 0.407 milliseconds (cumulative count 87920)
208s 93.750% <= 0.455 milliseconds (cumulative count 94190)
208s 96.875% <= 0.503 milliseconds (cumulative count 97050)
208s 98.438% <= 0.575 milliseconds (cumulative count 98530)
208s 99.219% <= 0.711 milliseconds (cumulative count 99250)
208s 99.609% <= 0.799 milliseconds (cumulative count 99610)
208s 99.805% <= 0.879 milliseconds (cumulative count 99830)
208s 99.902% <= 0.927 milliseconds (cumulative count 99910)
208s 99.951% <= 0.959 milliseconds (cumulative count 99970)
208s 99.976% <= 0.967 milliseconds (cumulative count 99980)
208s 99.988% <= 0.983 milliseconds (cumulative count 99990)
208s 99.994% <= 0.991 milliseconds (cumulative count 100000)
208s 100.000% <= 0.991 milliseconds (cumulative count 100000)
208s
208s Cumulative distribution of latencies:
208s 0.060% <= 0.103 milliseconds (cumulative count 60)
208s 8.780% <= 0.207 milliseconds (cumulative count 8780)
208s 54.050% <= 0.303 milliseconds (cumulative count 54050)
208s 87.920% <= 0.407 milliseconds (cumulative count 87920)
208s 97.050% <= 0.503 milliseconds (cumulative count 97050)
208s 98.830% <= 0.607 milliseconds (cumulative count 98830)
208s 99.210% <= 0.703 milliseconds (cumulative count 99210)
208s 99.620% <= 0.807 milliseconds (cumulative count 99620)
208s 99.860% <= 0.903 milliseconds (cumulative count 99860)
208s 100.000% <= 1.007 milliseconds (cumulative count 100000)
208s
208s Summary:
208s throughput summary: 1369863.00 requests per second
208s latency summary (msec):
208s avg min p50 p95 p99 max
208s 0.310 0.072 0.295 0.471 0.631 0.991
208s ====== INCR ======
208s 100000 requests completed in 0.07 seconds
208s 50 parallel clients
208s 3 bytes payload
208s keep alive: 1
208s host configuration "save": 3600 1 300 100 60 10000
208s host configuration "appendonly": no
208s multi-thread: no
208s
208s Latency by percentile distribution:
208s 0.000% <= 0.095 milliseconds (cumulative count 10)
208s 50.000% <= 0.279 milliseconds (cumulative count 54270)
208s 75.000% <= 0.327 milliseconds (cumulative count 77670)
208s 87.500% <= 0.359 milliseconds (cumulative count 88700)
208s 93.750% <= 0.391 milliseconds (cumulative count 94880)
208s 96.875% <= 0.423 milliseconds (cumulative count 97250)
208s 98.438% <= 0.463 milliseconds (cumulative count 98480)
208s 99.219% <= 0.535 milliseconds (cumulative count 99250)
208s 99.609% <= 0.647 milliseconds (cumulative count 99610)
208s 99.805% <= 0.759 milliseconds (cumulative count 99820)
208s 99.902% <= 0.791 milliseconds (cumulative count 99920)
208s 99.951% <= 0.823 milliseconds (cumulative count 99960)
208s 99.976% <= 0.839 milliseconds (cumulative count 99980)
208s 99.988% <= 0.855 milliseconds (cumulative count 99990)
208s 99.994% <= 0.871 milliseconds (cumulative count 100000)
208s 100.000% <= 0.871 milliseconds (cumulative count 100000)
208s
208s Cumulative distribution of latencies:
208s 0.040% <= 0.103 milliseconds (cumulative count 40)
208s 8.490% <= 0.207 milliseconds (cumulative count 8490)
208s 67.950% <= 0.303 milliseconds (cumulative count 67950)
208s 96.370% <= 0.407 milliseconds (cumulative count 96370)
208s 99.040% <= 0.503 milliseconds (cumulative count 99040)
208s 99.520% <= 0.607 milliseconds (cumulative count 99520)
208s 99.730% <= 0.703 milliseconds (cumulative count 99730)
208s 99.950% <= 0.807 milliseconds (cumulative count 99950)
208s 100.000% <= 0.903 milliseconds (cumulative count 100000)
208s
208s Summary:
208s throughput summary: 1515151.50 requests per second
208s latency summary (msec):
208s avg min p50 p95 p99 max
208s 0.283 0.088 0.279 0.399 0.503 0.871
208s ====== LPUSH ======
208s 100000 requests completed in 0.08 seconds
208s 50 parallel clients
208s 3 bytes payload
208s keep alive: 1
208s host configuration "save": 3600 1 300 100 60 10000
208s host configuration "appendonly": no
208s multi-thread: no
208s
208s Latency by percentile distribution:
208s 0.000% <= 0.119 milliseconds (cumulative count 10)
208s 50.000% <= 0.359 milliseconds (cumulative count 51260)
208s 75.000% <= 0.415 milliseconds (cumulative count 76570)
208s 87.500% <= 0.455 milliseconds (cumulative count 89160)
208s 93.750% <= 0.479 milliseconds (cumulative count 93780)
208s 96.875% <= 0.519 milliseconds (cumulative count 97080)
208s 98.438% <= 0.583 milliseconds (cumulative count 98460)
208s 99.219% <= 0.711 milliseconds (cumulative count 99220)
208s 99.609% <= 0.911 milliseconds (cumulative count 99630)
208s 99.805% <= 0.975 milliseconds (cumulative count 99830)
208s 99.902% <= 1.031 milliseconds (cumulative count 99920)
208s 99.951% <= 1.079 milliseconds (cumulative count 99960)
208s 99.976% <= 1.095 milliseconds (cumulative count 99980)
208s 99.988% <= 1.111 milliseconds (cumulative count 99990)
208s 99.994% <= 1.127 milliseconds (cumulative count 100000)
208s 100.000% <= 1.127 milliseconds (cumulative count 100000)
208s
208s Cumulative distribution of latencies:
208s 0.000% <= 0.103 milliseconds (cumulative count 0)
208s 1.630% <= 0.207 milliseconds (cumulative count 1630)
208s 21.520% <= 0.303 milliseconds (cumulative count 21520)
208s 73.640% <= 0.407 milliseconds (cumulative count 73640)
208s 96.200% <= 0.503 milliseconds (cumulative count 96200)
208s 98.660% <= 0.607 milliseconds (cumulative count 98660)
208s 99.180% <= 0.703 milliseconds (cumulative count 99180)
208s 99.500% <= 0.807 milliseconds (cumulative count 99500)
208s 99.600% <= 0.903 milliseconds (cumulative count 99600)
208s 99.880% <= 1.007 milliseconds (cumulative count 99880)
208s 99.980% <= 1.103 milliseconds (cumulative count 99980)
208s 100.000% <= 1.207 milliseconds (cumulative count 100000)
208s
208s Summary:
208s throughput summary: 1190476.25 requests per second
208s latency summary (msec):
208s avg min p50 p95 p99 max
208s 0.366 0.112 0.359 0.495 0.671 1.127
208s ====== RPUSH ======
208s 100000 requests completed in 0.07 seconds
208s 50 parallel clients
208s 3 bytes payload
208s keep alive: 1
208s host configuration "save": 3600 1 300 100 60 10000
208s host configuration "appendonly": no
208s multi-thread: no
208s
208s Latency by percentile distribution:
208s 0.000% <= 0.111 milliseconds (cumulative count 30)
208s 50.000% <= 0.311 milliseconds (cumulative count 53040)
208s 75.000% <= 0.359 milliseconds (cumulative count 77880)
208s 87.500% <= 0.391 milliseconds (cumulative count 88350)
208s 93.750% <= 0.415 milliseconds (cumulative count 94630)
208s 96.875% <= 0.431 milliseconds (cumulative count 96920)
208s 98.438% <= 0.479 milliseconds (cumulative count 98460)
208s 99.219% <= 0.903 milliseconds (cumulative count 99220)
208s 99.609% <= 1.231 milliseconds (cumulative count 99620)
208s 99.805% <= 1.351 milliseconds (cumulative count 99810)
208s 99.902% <= 1.399 milliseconds (cumulative count 99920)
208s 99.951% <= 1.423 milliseconds (cumulative count 99960)
208s 99.976% <= 1.447 milliseconds (cumulative count 99980)
208s 99.988% <= 1.455 milliseconds (cumulative count 99990)
208s 99.994% <= 1.471 milliseconds (cumulative count 100000)
208s 100.000% <= 1.471 milliseconds (cumulative count 100000)
208s
208s Cumulative distribution of latencies:
208s 0.000% <= 0.103 milliseconds (cumulative count 0)
208s 2.660% <= 0.207 milliseconds (cumulative count 2660)
208s 48.160% <= 0.303 milliseconds (cumulative count 48160)
208s 92.960% <= 0.407 milliseconds (cumulative count 92960)
208s 98.580% <= 0.503 milliseconds (cumulative count 98580)
208s 98.920% <= 0.607 milliseconds (cumulative count 98920)
208s 99.100% <= 0.703 milliseconds (cumulative count 99100)
208s 99.120% <= 0.807 milliseconds (cumulative count 99120)
208s 99.220% <= 0.903 milliseconds (cumulative count 99220)
208s 99.440% <= 1.007 milliseconds (cumulative count 99440)
208s 99.560% <= 1.103 milliseconds (cumulative count 99560)
208s 99.570% <= 1.207 milliseconds (cumulative count 99570)
208s 99.760% <= 1.303 milliseconds (cumulative count 99760)
208s 99.940% <= 1.407 milliseconds (cumulative count 99940)
208s 100.000% <= 1.503 milliseconds (cumulative count 100000)
208s
208s Summary:
208s throughput summary: 1369863.00 requests per second
208s latency summary (msec):
208s avg min p50 p95 p99 max
208s 0.319 0.104 0.311 0.423 0.647 1.471
209s LPOP: rps=19320.0 (overall: 1207500.0) avg_msec=0.367 (overall: 0.367) ====== LPOP ======
209s 100000 requests completed in 0.09 seconds
209s 50 parallel clients
209s 3 bytes payload
209s keep alive: 1
209s host configuration "save": 3600 1 300 100 60 10000
209s host configuration "appendonly": no
209s multi-thread: no
209s
209s Latency by percentile distribution:
209s 0.000% <= 0.119 milliseconds (cumulative count 20)
209s 50.000% <= 0.375 milliseconds (cumulative count 53250)
209s 75.000% <= 0.423 milliseconds (cumulative count 75950)
209s 87.500% <= 0.463 milliseconds (cumulative count 89300)
209s 93.750% <= 0.487 milliseconds (cumulative count 94240)
209s 96.875% <= 0.511 milliseconds (cumulative count 97150)
209s 98.438% <= 0.535 milliseconds (cumulative count 98470)
209s 99.219% <= 0.583 milliseconds (cumulative count 99220)
209s 99.609% <= 0.847 milliseconds (cumulative count 99620)
209s 99.805% <= 0.895 milliseconds (cumulative count 99810)
209s 99.902% <= 0.935 milliseconds (cumulative count 99910)
209s 99.951% <= 0.959 milliseconds (cumulative count 99960)
209s 99.976% <= 0.983 milliseconds (cumulative count 99980)
209s 99.988% <= 0.991 milliseconds (cumulative count 99990)
209s 99.994% <= 1.015 milliseconds (cumulative count 100000)
209s 100.000% <= 1.015 milliseconds (cumulative count 100000)
209s
209s Cumulative distribution of latencies:
209s 0.000% <= 0.103 milliseconds (cumulative count 0)
209s 1.010% <= 0.207 milliseconds (cumulative count 1010)
209s 16.320% <= 0.303 milliseconds (cumulative count 16320)
209s 69.390% <= 0.407 milliseconds (cumulative count 69390)
209s 96.420% <= 0.503 milliseconds (cumulative count 96420)
209s 99.310% <= 0.607 milliseconds (cumulative count 99310)
209s 99.500% <= 0.703 milliseconds (cumulative count 99500)
209s 99.530% <= 0.807 milliseconds (cumulative count 99530)
209s 99.840% <= 0.903 milliseconds (cumulative count 99840)
209s 99.990% <= 1.007 milliseconds (cumulative count 99990)
209s 100.000% <= 1.103 milliseconds (cumulative count 100000)
209s
209s Summary:
209s throughput summary: 1176470.62 requests per second
209s latency summary (msec):
209s avg min p50 p95 p99 max
209s 0.375 0.112 0.375 0.495 0.559 1.015
209s ====== RPOP ======
209s 100000 requests completed in 0.08 seconds
209s 50 parallel clients
209s 3 bytes payload
209s keep alive: 1
209s host configuration "save": 3600 1 300 100 60 10000
209s host configuration "appendonly": no
209s multi-thread: no
209s
209s Latency by percentile distribution:
209s 0.000% <= 0.095 milliseconds (cumulative count 10)
209s 50.000% <= 0.335 milliseconds (cumulative count 53170)
209s 75.000% <= 0.383 milliseconds (cumulative count 77120)
209s 87.500% <= 0.415 milliseconds (cumulative count 88050)
209s 93.750% <= 0.439 milliseconds (cumulative count 94190)
209s 96.875% <= 0.463 milliseconds (cumulative count 97200)
209s 98.438% <= 0.503 milliseconds (cumulative count 98500)
209s 99.219% <= 0.647 milliseconds (cumulative count 99230)
209s 99.609% <= 0.815 milliseconds (cumulative count 99610)
209s 99.805% <= 0.887 milliseconds (cumulative count 99810)
209s 99.902% <= 0.943 milliseconds (cumulative count 99910)
209s 99.951% <= 0.991 milliseconds (cumulative count 99960)
209s 99.976% <= 1.023 milliseconds (cumulative count 99980)
209s 99.988% <= 1.031 milliseconds (cumulative count 99990)
209s 99.994% <= 1.039 milliseconds (cumulative count 100000)
209s 100.000% <= 1.039 milliseconds (cumulative count 100000)
209s
209s Cumulative distribution of latencies:
209s 0.030% <= 0.103 milliseconds (cumulative count 30)
209s 1.990% <= 0.207 milliseconds (cumulative count 1990)
209s 34.150% <= 0.303 milliseconds (cumulative count 34150)
209s 85.480% <= 0.407 milliseconds (cumulative count 85480)
209s 98.500% <= 0.503 milliseconds (cumulative count 98500)
209s 99.070% <= 0.607 milliseconds (cumulative count 99070)
209s 99.360% <= 0.703 milliseconds (cumulative count 99360)
209s 99.600% <= 0.807 milliseconds (cumulative count 99600)
209s 99.830% <= 0.903 milliseconds (cumulative count 99830)
209s 99.970% <= 1.007 milliseconds (cumulative count 99970)
209s 100.000% <= 1.103 milliseconds (cumulative count 100000)
209s
209s Summary:
209s throughput summary: 1298701.25 requests per second
209s latency summary (msec):
209s avg min p50 p95 p99 max
209s 0.337 0.088 0.335 0.447 0.599 1.039
209s ====== SADD ======
209s 100000 requests completed in 0.07 seconds
209s 50 parallel clients
209s 3 bytes payload
209s keep alive: 1
209s host configuration "save": 3600 1 300 100 60 10000
209s host configuration "appendonly": no
209s multi-thread: no
209s
209s Latency by percentile distribution:
209s 0.000% <= 0.111 milliseconds (cumulative count 20)
209s 50.000% <= 0.311 milliseconds (cumulative count 51040)
209s 75.000% <= 0.367 milliseconds (cumulative count 76710)
209s 87.500% <= 0.407 milliseconds (cumulative count 88350)
209s 93.750% <= 0.447 milliseconds (cumulative count 93900)
209s 96.875% <= 0.495 milliseconds (cumulative count 97110)
209s 98.438% <= 0.535 milliseconds (cumulative count 98480)
209s 99.219% <= 0.591 milliseconds (cumulative count 99260)
209s 99.609% <= 0.647 milliseconds (cumulative count 99630)
209s 99.805% <= 0.703 milliseconds (cumulative count 99830)
209s 99.902% <= 0.727 milliseconds (cumulative count 99920)
209s 99.951% <= 0.759 milliseconds (cumulative count 99960)
209s 99.976% <= 0.775 milliseconds (cumulative count 99980)
209s 99.988% <= 0.783 milliseconds (cumulative count 99990)
209s 99.994% <= 0.807 milliseconds (cumulative count 100000)
209s 100.000% <= 0.807 milliseconds (cumulative count 100000)
209s
209s Cumulative distribution of latencies:
209s 0.000% <= 0.103 milliseconds (cumulative count 0)
209s 3.260% <= 0.207 milliseconds (cumulative count 3260)
209s 46.490% <= 0.303 milliseconds (cumulative count 46490)
209s 88.350% <= 0.407 milliseconds (cumulative count 88350)
209s 97.510% <= 0.503 milliseconds (cumulative count 97510)
209s 99.410% <= 0.607 milliseconds (cumulative count 99410)
209s 99.830% <= 0.703 milliseconds (cumulative count 99830)
209s 100.000% <= 0.807 milliseconds (cumulative count 100000)
209s
209s Summary:
209s throughput summary: 1351351.38 requests per second
209s latency summary (msec):
209s avg min p50 p95 p99 max
209s 0.320 0.104 0.311 0.463 0.567 0.807
209s HSET: rps=49960.2 (overall: 737647.0) avg_msec=0.546 (overall: 0.546) ====== HSET ======
209s 100000 requests completed in 0.11 seconds
209s 50 parallel clients
209s 3 bytes payload
209s keep alive: 1
209s host configuration "save": 3600 1 300 100 60 10000
209s host configuration "appendonly": no
209s multi-thread: no
209s
209s Latency by percentile distribution:
209s 0.000% <= 0.111 milliseconds (cumulative count 20)
209s 50.000% <= 0.439 milliseconds (cumulative count 51080)
209s 75.000% <= 0.551 milliseconds (cumulative count 75540)
209s 87.500% <= 0.639 milliseconds (cumulative count 87580)
209s 93.750% <= 0.879 milliseconds (cumulative count 93750)
209s 96.875% <= 0.935 milliseconds (cumulative count 97190)
209s 98.438% <= 0.975 milliseconds (cumulative count 98580)
209s 99.219% <= 1.015 milliseconds (cumulative count 99270)
209s 99.609% <= 1.079 milliseconds (cumulative count 99630)
209s 99.805% <= 1.119 milliseconds (cumulative count 99810)
209s 99.902% <= 1.151 milliseconds (cumulative count 99920)
209s 99.951% <= 1.167 milliseconds (cumulative count 99960)
209s 99.976% <= 1.175 milliseconds (cumulative count 99980)
209s 99.988% <= 1.183 milliseconds (cumulative count 100000)
209s 100.000% <= 1.183 milliseconds (cumulative count 100000)
209s
209s Cumulative distribution of latencies:
209s 0.000% <= 0.103 milliseconds (cumulative count 0)
209s 1.680% <= 0.207 milliseconds (cumulative count 1680)
209s 13.320% <= 0.303 milliseconds (cumulative count 13320)
209s 42.430% <= 0.407 milliseconds (cumulative count 42430)
209s 65.810% <= 0.503 milliseconds (cumulative count 65810)
209s 84.450% <= 0.607 milliseconds (cumulative count 84450)
209s 90.700% <= 0.703 milliseconds (cumulative count 90700)
209s 92.170% <= 0.807 milliseconds (cumulative count 92170)
209s 95.170% <= 0.903 milliseconds (cumulative count 95170)
209s 99.170% <= 1.007 milliseconds (cumulative count 99170)
209s 99.760% <= 1.103 milliseconds (cumulative count 99760)
209s 100.000% <= 1.207 milliseconds (cumulative count 100000)
209s
209s Summary:
209s throughput summary: 917431.19 requests per second
209s latency summary (msec):
209s avg min p50 p95 p99 max
209s 0.471 0.104 0.439 0.903 0.999 1.183
209s ====== SPOP ======
209s 100000 requests completed in 0.06 seconds
209s 50 parallel clients
209s 3 bytes payload
209s keep alive: 1
209s host configuration "save": 3600 1 300 100 60 10000
209s host configuration "appendonly": no
209s multi-thread: no
209s
209s Latency by percentile distribution:
209s 0.000% <= 0.111 milliseconds (cumulative count 20)
209s 50.000% <= 0.255 milliseconds (cumulative count 52480)
209s 75.000% <= 0.303 milliseconds (cumulative count 78040)
209s 87.500% <= 0.335 milliseconds (cumulative count 89050)
209s 93.750% <= 0.367 milliseconds (cumulative count 94690)
209s 96.875% <= 0.399 milliseconds (cumulative count 97250)
209s 98.438% <= 0.431 milliseconds (cumulative count 98520)
209s 99.219% <= 0.487 milliseconds (cumulative count 99220)
209s 99.609% <= 0.639 milliseconds (cumulative count 99610)
209s 99.805% <= 0.711 milliseconds (cumulative count 99810)
209s 99.902% <= 0.759 milliseconds (cumulative count 99910)
209s 99.951% <= 0.823 milliseconds (cumulative count 99960)
209s 99.976% <= 0.847 milliseconds (cumulative count 99980)
209s 99.988% <= 0.855 milliseconds (cumulative count 99990)
209s 99.994% <= 0.863 milliseconds (cumulative count 100000)
209s 100.000% <= 0.863 milliseconds (cumulative count 100000)
209s
209s Cumulative distribution of latencies:
209s 0.000% <= 0.103 milliseconds (cumulative count 0)
209s 19.990% <= 0.207 milliseconds (cumulative count 19990)
209s 78.040% <= 0.303 milliseconds (cumulative count 78040)
209s 97.790% <= 0.407 milliseconds (cumulative count 97790)
209s 99.270% <= 0.503 milliseconds (cumulative count 99270)
209s 99.500% <= 0.607 milliseconds (cumulative count 99500)
209s 99.800% <= 0.703 milliseconds (cumulative count 99800)
209s 99.950% <= 0.807 milliseconds (cumulative count 99950)
209s 100.000% <= 0.903 milliseconds (cumulative count 100000)
209s
209s Summary:
209s throughput summary: 1639344.25 requests per second
209s latency summary (msec):
209s avg min p50 p95 p99 max
209s 0.261 0.104 0.255 0.375 0.463 0.863
209s ====== ZADD ======
209s 100000 requests completed in 0.08 seconds
209s 50 parallel clients
209s 3 bytes payload
209s keep alive: 1
209s host configuration "save": 3600 1 300 100 60 10000
209s host configuration "appendonly": no
209s multi-thread: no
209s
209s Latency by percentile distribution:
209s 0.000% <= 0.111 milliseconds (cumulative count 10)
209s 50.000% <= 0.367 milliseconds (cumulative count 53840)
209s 75.000% <= 0.415 milliseconds (cumulative count 75380)
209s 87.500% <= 0.455 milliseconds (cumulative count 88980)
209s 93.750% <= 0.487 milliseconds (cumulative count 94020)
209s 96.875% <= 0.527 milliseconds (cumulative count 97250)
209s 98.438% <= 0.607 milliseconds (cumulative count 98460)
209s 99.219% <= 0.735 milliseconds (cumulative count 99230)
209s 99.609% <= 0.839 milliseconds (cumulative count 99610)
209s 99.805% <= 0.895 milliseconds (cumulative count 99830)
209s 99.902% <= 0.919 milliseconds (cumulative count 99910)
209s 99.951% <= 0.951 milliseconds (cumulative count 99970)
209s 99.976% <= 0.959 milliseconds (cumulative count 99990)
209s 99.994% <= 0.983 milliseconds (cumulative count 100000)
209s 100.000% <= 0.983 milliseconds (cumulative count 100000)
209s
209s Cumulative distribution of latencies:
209s 0.000% <= 0.103 milliseconds (cumulative count 0)
209s 0.990% <= 0.207 milliseconds (cumulative count 990)
209s 18.330% <= 0.303 milliseconds (cumulative count 18330)
209s 72.380% <= 0.407 milliseconds (cumulative count 72380)
209s 95.510% <= 0.503 milliseconds (cumulative count 95510)
209s 98.460% <= 0.607 milliseconds (cumulative count 98460)
209s 98.940% <= 0.703 milliseconds (cumulative count 98940)
209s 99.530% <= 0.807 milliseconds (cumulative count 99530)
209s 99.860% <= 0.903 milliseconds (cumulative count 99860)
209s 100.000% <= 1.007 milliseconds (cumulative count 100000)
209s
209s Summary:
209s throughput summary: 1190476.25 requests per second
209s latency summary (msec):
209s avg min p50 p95 p99 max
209s 0.371 0.104 0.367 0.503 0.711 0.983
209s ZPOPMIN: rps=53240.0 (overall: 1331000.0) avg_msec=0.317 (overall: 0.317) ====== ZPOPMIN ======
209s 100000 requests completed in 0.07 seconds
209s 50 parallel clients
209s 3 bytes payload
209s keep alive: 1
209s host configuration "save": 3600 1 300 100 60 10000
209s host configuration "appendonly": no
209s multi-thread: no
209s
209s Latency by percentile distribution:
209s 0.000% <= 0.111 milliseconds (cumulative count 10)
209s 50.000% <= 0.303 milliseconds (cumulative count 53340)
209s 75.000% <= 0.359 milliseconds (cumulative count 77190)
209s 87.500% <= 0.399 milliseconds (cumulative count 88590)
209s 93.750% <= 0.423 milliseconds (cumulative count 93890)
209s 96.875% <= 0.455 milliseconds (cumulative count 96970)
209s 98.438% <= 0.503 milliseconds (cumulative count 98470)
209s 99.219% <= 0.591 milliseconds (cumulative count 99280)
209s 99.609% <= 0.655 milliseconds (cumulative count 99630)
209s 99.805% <= 0.727 milliseconds (cumulative count 99820)
209s 99.902% <= 0.791 milliseconds (cumulative count 99910)
209s 99.951% <= 0.839 milliseconds (cumulative count 99970)
209s 99.976% <= 0.855 milliseconds (cumulative count 99980)
209s 99.988% <= 0.863 milliseconds (cumulative count 99990)
209s 99.994% <= 0.887 milliseconds (cumulative count 100000)
209s 100.000% <= 0.887 milliseconds (cumulative count 100000)
209s
209s Cumulative distribution of latencies:
209s 0.000% <= 0.103 milliseconds (cumulative count 0)
209s 5.350% <= 0.207 milliseconds (cumulative count 5350)
209s 53.340% <= 0.303 milliseconds (cumulative count 53340)
209s 90.650% <= 0.407 milliseconds (cumulative count 90650)
209s 98.470% <= 0.503 milliseconds (cumulative count 98470)
209s 99.390% <= 0.607 milliseconds (cumulative count 99390)
209s 99.770% <= 0.703 milliseconds (cumulative count 99770)
209s 99.920% <= 0.807 milliseconds (cumulative count 99920)
209s 100.000% <= 0.903 milliseconds (cumulative count 100000)
209s
209s Summary:
209s throughput summary: 1369863.00 requests per second
209s latency summary (msec):
209s avg min p50 p95 p99 max
209s 0.310 0.104 0.303 0.439 0.559 0.887
209s ====== LPUSH (needed to benchmark LRANGE) ======
209s 100000 requests completed in 0.09 seconds
209s 50 parallel clients
209s 3 bytes payload
209s keep alive: 1
209s host configuration "save": 3600 1 300 100 60 10000
209s host configuration "appendonly": no
209s multi-thread: no
209s
209s Latency by percentile distribution:
209s 0.000% <= 0.119 milliseconds (cumulative count 10)
209s 50.000% <= 0.399 milliseconds (cumulative count 50050)
209s 75.000% <= 0.463 milliseconds (cumulative count 76840)
209s 87.500% <= 0.503 milliseconds (cumulative count 88430)
209s 93.750% <= 0.535 milliseconds (cumulative count 94610)
209s 96.875% <= 0.559 milliseconds (cumulative count 97030)
209s 98.438% <= 0.623 milliseconds (cumulative count 98470)
209s 99.219% <= 0.791 milliseconds (cumulative count 99230)
209s 99.609% <= 0.879 milliseconds (cumulative count 99630)
209s 99.805% <= 0.991 milliseconds (cumulative count 99810)
209s 99.902% <= 1.055 milliseconds (cumulative count 99920)
209s 99.951% <= 1.079 milliseconds (cumulative count 99960)
209s 99.976% <= 1.103 milliseconds (cumulative count 99980)
209s 99.988% <= 1.119 milliseconds (cumulative count 99990)
209s 99.994% <= 1.127 milliseconds (cumulative count 100000)
209s 100.000% <= 1.127 milliseconds (cumulative count 100000)
209s
209s Cumulative distribution of latencies:
209s 0.000% <= 0.103 milliseconds (cumulative count 0)
209s 1.340% <= 0.207 milliseconds (cumulative count 1340)
209s 7.430% <= 0.303 milliseconds (cumulative count 7430)
209s 53.690% <= 0.407 milliseconds (cumulative count 53690)
209s 88.430% <= 0.503 milliseconds (cumulative count 88430)
209s 98.320% <= 0.607 milliseconds (cumulative count 98320)
209s 98.910% <= 0.703 milliseconds (cumulative count 98910)
209s 99.320% <= 0.807 milliseconds (cumulative count 99320)
209s 99.700% <= 0.903 milliseconds (cumulative count 99700)
209s 99.830% <= 1.007 milliseconds (cumulative count 99830)
209s 99.980% <= 1.103 milliseconds (cumulative count 99980)
209s 100.000% <= 1.207 milliseconds (cumulative count 100000)
209s
209s Summary:
209s throughput summary: 1086956.50 requests per second
209s latency summary (msec):
209s avg min p50 p95 p99 max
209s 0.407 0.112 0.399 0.543 0.727 1.127
210s LRANGE_100 (first 100 elements): rps=54760.0 (overall: 147204.3) avg_msec=2.518 (overall: 2.518) LRANGE_100 (first 100 elements): rps=151031.8 (overall: 150000.0) avg_msec=2.475 (overall: 2.487) LRANGE_100 (first 100 elements): rps=153227.1 (overall: 151359.1) avg_msec=2.471 (overall: 2.480) ====== LRANGE_100 (first 100 elements) ======
210s 100000 requests completed in 0.66 seconds
210s 50 parallel clients
210s 3 bytes payload
210s keep alive: 1
210s host configuration "save": 3600 1 300 100 60 10000
210s host configuration "appendonly": no
210s multi-thread: no
210s
210s Latency by percentile distribution:
210s 0.000% <= 0.175 milliseconds (cumulative count 10)
210s 50.000% <= 2.375 milliseconds (cumulative count 50390)
210s 75.000% <= 2.895 milliseconds (cumulative count 75120)
210s 87.500% <= 3.463 milliseconds (cumulative count 87520)
210s 93.750% <= 3.999 milliseconds (cumulative count 93750)
210s 96.875% <= 4.327 milliseconds (cumulative count 96920)
210s 98.438% <= 4.519 milliseconds (cumulative count 98440)
210s 99.219% <= 4.663 milliseconds (cumulative count 99250)
210s 99.609% <= 4.759 milliseconds (cumulative count 99620)
210s 99.805% <= 4.855 milliseconds (cumulative count 99810)
210s 99.902% <= 4.919 milliseconds (cumulative count 99920)
210s 99.951% <= 4.975 milliseconds (cumulative count 99960)
210s 99.976% <= 5.063 milliseconds (cumulative count 99980)
210s 99.988% <= 5.071 milliseconds (cumulative count 99990)
210s 99.994% <= 5.095 milliseconds (cumulative count 100000)
210s 100.000% <= 5.095 milliseconds (cumulative count 100000)
210s
210s Cumulative distribution of latencies:
210s 0.000% <= 0.103 milliseconds (cumulative count 0)
210s 0.010% <= 0.207 milliseconds (cumulative count 10)
210s 0.280% <= 1.103 milliseconds (cumulative count 280)
210s 1.300% <= 1.207 milliseconds (cumulative count 1300)
210s 2.840% <= 1.303 milliseconds (cumulative count 2840)
210s 5.340% <= 1.407 milliseconds (cumulative count 5340)
210s 8.080% <= 1.503 milliseconds (cumulative count 8080)
210s 11.400% <= 1.607 milliseconds (cumulative count 11400)
210s 14.900% <= 1.703 milliseconds (cumulative count 14900)
210s 19.260% <= 1.807 milliseconds (cumulative count 19260)
210s 23.900% <= 1.903 milliseconds (cumulative count 23900)
210s 29.390% <= 2.007 milliseconds (cumulative count 29390)
210s 34.970% <= 2.103 milliseconds (cumulative count 34970)
210s 81.480% <= 3.103 milliseconds (cumulative count 81480)
210s 94.870% <= 4.103 milliseconds (cumulative count 94870)
210s 100.000% <= 5.103 milliseconds (cumulative count 100000)
210s
210s Summary:
210s throughput summary: 151057.41 requests per second
210s latency summary (msec):
210s avg min p50 p95 p99 max
210s 2.488 0.168 2.375 4.119 4.615 5.095
213s LRANGE_300 (first 300 elements): rps=27206.3 (overall: 37059.5) avg_msec=7.750 (overall: 7.750) LRANGE_300 (first 300 elements): rps=34086.3 (overall: 35336.4) avg_msec=8.487 (overall: 8.162) LRANGE_300 (first 300 elements): rps=32007.8 (overall: 34115.1) avg_msec=9.422 (overall: 8.596) LRANGE_300 (first 300 elements): rps=37200.0 (overall: 34931.2) avg_msec=7.513 (overall: 8.291) LRANGE_300 (first 300 elements): rps=31671.9 (overall: 34236.5) avg_msec=9.627 (overall: 8.554) LRANGE_300 (first 300 elements): rps=28422.3 (overall: 33231.4) avg_msec=11.117 (overall: 8.933) LRANGE_300 (first 300 elements): rps=33737.1 (overall: 33305.9) avg_msec=8.466 (overall: 8.863) LRANGE_300 (first 300 elements): rps=27043.5 (overall: 32495.9) avg_msec=12.102 (overall: 9.212) LRANGE_300 (first 300 elements): rps=34916.3 (overall: 32771.2) avg_msec=8.496 (overall: 9.125) LRANGE_300 (first 300 elements): rps=35984.2 (overall: 33101.6) avg_msec=8.073 (overall: 9.008) LRANGE_300 (first 300 elements): rps=32671.9 (overall: 33061.6) avg_msec=9.245 (overall: 9.030) LRANGE_300 (first 300 elements): rps=34150.8 (overall: 33154.1) avg_msec=8.755 (overall: 9.005) ====== LRANGE_300 (first 300 elements) ======
213s 100000 requests completed in 3.02 seconds
213s 50 parallel clients
213s 3 bytes payload
213s keep alive: 1
213s host configuration "save": 3600 1 300 100 60 10000
213s host configuration "appendonly": no
213s multi-thread: no
213s
213s Latency by percentile distribution:
213s 0.000% <= 0.391 milliseconds (cumulative count 10)
213s 50.000% <= 8.527 milliseconds (cumulative count 50050)
213s 75.000% <= 11.527 milliseconds (cumulative count 75010)
213s 87.500% <= 13.559 milliseconds (cumulative count 87500)
213s 93.750% <= 15.423 milliseconds (cumulative count 93790)
213s 96.875% <= 16.991 milliseconds (cumulative count 96890)
213s 98.438% <= 18.575 milliseconds (cumulative count 98440)
213s 99.219% <= 20.575 milliseconds (cumulative count 99220)
213s 99.609% <= 23.375 milliseconds (cumulative count 99610)
213s 99.805% <= 25.711 milliseconds (cumulative count 99810)
213s 99.902% <= 26.927 milliseconds (cumulative count 99910)
213s 99.951% <= 27.615 milliseconds (cumulative count 99960)
213s 99.976% <= 27.919 milliseconds (cumulative count 99980)
213s 99.988% <= 28.079 milliseconds (cumulative count 99990)
213s 99.994% <= 28.223 milliseconds (cumulative count 100000)
213s 100.000% <= 28.223 milliseconds (cumulative count 100000)
213s
213s Cumulative distribution of latencies:
213s 0.000% <= 0.103 milliseconds (cumulative count 0)
213s 0.010% <= 0.407 milliseconds (cumulative count 10)
213s 0.020% <= 0.503 milliseconds (cumulative count 20)
213s 0.180% <= 0.607 milliseconds (cumulative count 180)
213s 0.530% <= 0.703 milliseconds (cumulative count 530)
213s 0.970% <= 0.807 milliseconds (cumulative count 970)
213s 1.430% <= 0.903 milliseconds (cumulative count 1430)
213s 1.830% <= 1.007 milliseconds (cumulative count 1830)
213s 2.090% <= 1.103 milliseconds (cumulative count 2090)
213s 2.360% <= 1.207 milliseconds (cumulative count 2360)
213s 2.600% <= 1.303 milliseconds (cumulative count 2600)
213s 2.770% <= 1.407 milliseconds (cumulative count 2770)
213s 2.860% <= 1.503 milliseconds (cumulative count 2860)
213s 2.960% <= 1.607 milliseconds (cumulative count 2960)
213s 3.000% <= 1.703 milliseconds (cumulative count 3000)
213s 3.160% <= 1.807 milliseconds (cumulative count 3160)
213s 3.230% <= 1.903 milliseconds (cumulative count 3230)
213s 3.360% <= 2.007 milliseconds (cumulative count 3360)
213s 3.480% <= 2.103 milliseconds (cumulative count 3480)
213s 5.180% <= 3.103 milliseconds (cumulative count 5180)
213s 7.270% <= 4.103 milliseconds (cumulative count 7270)
213s 13.110% <= 5.103 milliseconds (cumulative count 13110)
213s 22.470% <= 6.103 milliseconds (cumulative count 22470)
213s 35.100% <= 7.103 milliseconds (cumulative count 35100)
213s 46.020% <= 8.103 milliseconds (cumulative count 46020)
213s 55.360% <= 9.103 milliseconds (cumulative count 55360)
213s 64.000% <= 10.103 milliseconds (cumulative count 64000)
213s 72.060% <= 11.103 milliseconds (cumulative count 72060)
213s 78.780% <= 12.103 milliseconds (cumulative count 78780)
213s 85.050% <= 13.103 milliseconds (cumulative count 85050)
213s 89.680% <= 14.103 milliseconds (cumulative count 89680)
213s 92.950% <= 15.103 milliseconds (cumulative count 92950)
213s 95.210% <= 16.103 milliseconds (cumulative count 95210)
213s 97.020% <= 17.103 milliseconds (cumulative count 97020)
213s 98.020% <= 18.111 milliseconds (cumulative count 98020)
213s 98.670% <= 19.103 milliseconds (cumulative count 98670)
213s 99.070% <= 20.111 milliseconds (cumulative count 99070)
213s 99.340% <= 21.103 milliseconds (cumulative count 99340)
213s 99.510% <= 22.111 milliseconds (cumulative count 99510)
213s 99.590% <= 23.103 milliseconds (cumulative count 99590)
213s 99.630% <= 24.111 milliseconds (cumulative count 99630)
213s 99.750% <= 25.103 milliseconds (cumulative count 99750)
213s 99.830% <= 26.111 milliseconds (cumulative count 99830)
213s 99.920% <= 27.103 milliseconds (cumulative count 99920)
213s 99.990% <= 28.111 milliseconds (cumulative count 99990)
213s 100.000% <= 29.103 milliseconds (cumulative count 100000)
213s
213s Summary:
213s throughput summary: 33090.67 requests per second
213s latency summary (msec):
213s avg min p50 p95 p99 max
213s 9.023 0.384 8.527 15.991 19.903 28.223
218s LRANGE_500 (first 500 elements): rps=10757.0 (overall: 13989.6) avg_msec=19.145 (overall: 19.145) LRANGE_500 (first 500 elements): rps=17410.4 (overall: 15923.4) avg_msec=15.653 (overall: 16.987) LRANGE_500 (first 500 elements): rps=22593.6 (overall: 18332.4) avg_msec=11.431 (overall: 14.514) LRANGE_500 (first 500 elements): rps=23254.0 (overall: 19642.0) avg_msec=8.651 (overall: 12.667) LRANGE_500 (first 500 elements): rps=21270.9 (overall: 19983.3) avg_msec=10.012 (overall: 12.075) LRANGE_500 (first 500 elements): rps=19656.0 (overall: 19926.8) avg_msec=13.063 (overall: 12.243) LRANGE_500 (first 500 elements): rps=23382.5 (overall: 20437.3) avg_msec=8.608 (overall: 11.629) LRANGE_500 (first 500 elements): rps=23711.5 (overall: 20861.7) avg_msec=8.505 (overall: 11.168) LRANGE_500 (first 500 elements): rps=23259.0 (overall: 21134.8) avg_msec=8.616 (overall: 10.848) LRANGE_500 (first 500 elements): rps=23648.2 (overall: 21393.7) avg_msec=8.521 (overall: 10.583) LRANGE_500 (first 500 elements): rps=23740.2 (overall: 21613.7) avg_msec=8.452 (overall: 10.364) LRANGE_500 (first 500 elements): rps=23858.3 (overall: 21806.0) avg_msec=8.436 (overall: 10.183) LRANGE_500 (first 500 elements): rps=22007.9 (overall: 21821.8) avg_msec=10.797 (overall: 10.232) LRANGE_500 (first 500 elements): rps=20972.3 (overall: 21759.9) avg_msec=12.730 (overall: 10.407) LRANGE_500 (first 500 elements): rps=23948.2 (overall: 21907.5) avg_msec=9.935 (overall: 10.373) LRANGE_500 (first 500 elements): rps=22685.3 (overall: 21956.7) avg_msec=10.719 (overall: 10.395) LRANGE_500 (first 500 elements): rps=23780.0 (overall: 22064.7) avg_msec=9.526 (overall: 10.340) LRANGE_500 (first 500 elements): rps=23446.6 (overall: 22142.8) avg_msec=8.805 (overall: 10.248) ====== LRANGE_500 (first 500 elements) ======
218s 100000 requests completed in 4.51 seconds
218s 50 parallel clients
218s 3 bytes payload
218s keep alive: 1
218s host configuration "save": 3600 1 300 100 60 10000
218s host configuration "appendonly": no
218s multi-thread: no
218s
218s Latency by percentile
distribution: 218s 0.000% <= 0.711 milliseconds (cumulative count 10) 218s 50.000% <= 9.447 milliseconds (cumulative count 50110) 218s 75.000% <= 11.031 milliseconds (cumulative count 75040) 218s 87.500% <= 14.495 milliseconds (cumulative count 87500) 218s 93.750% <= 17.727 milliseconds (cumulative count 93760) 218s 96.875% <= 19.807 milliseconds (cumulative count 96890) 218s 98.438% <= 21.183 milliseconds (cumulative count 98450) 218s 99.219% <= 22.895 milliseconds (cumulative count 99220) 218s 99.609% <= 27.647 milliseconds (cumulative count 99610) 218s 99.805% <= 28.735 milliseconds (cumulative count 99810) 218s 99.902% <= 29.263 milliseconds (cumulative count 99910) 218s 99.951% <= 29.823 milliseconds (cumulative count 99960) 218s 99.976% <= 29.967 milliseconds (cumulative count 99980) 218s 99.988% <= 30.447 milliseconds (cumulative count 99990) 218s 99.994% <= 30.591 milliseconds (cumulative count 100000) 218s 100.000% <= 30.591 milliseconds (cumulative count 100000) 218s 218s Cumulative distribution of latencies: 218s 0.000% <= 0.103 milliseconds (cumulative count 0) 218s 0.030% <= 0.807 milliseconds (cumulative count 30) 218s 0.070% <= 1.007 milliseconds (cumulative count 70) 218s 0.160% <= 1.103 milliseconds (cumulative count 160) 218s 0.220% <= 1.207 milliseconds (cumulative count 220) 218s 0.260% <= 1.303 milliseconds (cumulative count 260) 218s 0.280% <= 1.407 milliseconds (cumulative count 280) 218s 0.290% <= 1.503 milliseconds (cumulative count 290) 218s 0.310% <= 1.607 milliseconds (cumulative count 310) 218s 0.320% <= 1.807 milliseconds (cumulative count 320) 218s 0.340% <= 1.903 milliseconds (cumulative count 340) 218s 0.370% <= 2.007 milliseconds (cumulative count 370) 218s 0.600% <= 3.103 milliseconds (cumulative count 600) 218s 1.020% <= 4.103 milliseconds (cumulative count 1020) 218s 2.590% <= 5.103 milliseconds (cumulative count 2590) 218s 9.520% <= 6.103 milliseconds (cumulative count 9520) 218s 14.390% <= 7.103 milliseconds (cumulative count 
14390) 218s 22.350% <= 8.103 milliseconds (cumulative count 22350) 218s 41.720% <= 9.103 milliseconds (cumulative count 41720) 218s 64.700% <= 10.103 milliseconds (cumulative count 64700) 218s 75.450% <= 11.103 milliseconds (cumulative count 75450) 218s 80.470% <= 12.103 milliseconds (cumulative count 80470) 218s 83.990% <= 13.103 milliseconds (cumulative count 83990) 218s 86.660% <= 14.103 milliseconds (cumulative count 86660) 218s 88.790% <= 15.103 milliseconds (cumulative count 88790) 218s 90.730% <= 16.103 milliseconds (cumulative count 90730) 218s 92.550% <= 17.103 milliseconds (cumulative count 92550) 218s 94.400% <= 18.111 milliseconds (cumulative count 94400) 218s 95.910% <= 19.103 milliseconds (cumulative count 95910) 218s 97.220% <= 20.111 milliseconds (cumulative count 97220) 218s 98.360% <= 21.103 milliseconds (cumulative count 98360) 218s 99.030% <= 22.111 milliseconds (cumulative count 99030) 218s 99.270% <= 23.103 milliseconds (cumulative count 99270) 218s 99.390% <= 24.111 milliseconds (cumulative count 99390) 218s 99.430% <= 25.103 milliseconds (cumulative count 99430) 218s 99.500% <= 27.103 milliseconds (cumulative count 99500) 218s 99.690% <= 28.111 milliseconds (cumulative count 99690) 218s 99.870% <= 29.103 milliseconds (cumulative count 99870) 218s 99.980% <= 30.111 milliseconds (cumulative count 99980) 218s 100.000% <= 31.103 milliseconds (cumulative count 100000) 218s 218s Summary: 218s throughput summary: 22158.21 requests per second 218s latency summary (msec): 218s avg min p50 p95 p99 max 218s 10.232 0.704 9.447 18.463 22.015 30.591 225s LRANGE_600 (first 600 elements): rps=9670.6 (overall: 11469.8) avg_msec=22.417 (overall: 22.417) LRANGE_600 (first 600 elements): rps=11370.5 (overall: 11416.3) avg_msec=22.611 (overall: 22.521) LRANGE_600 (first 600 elements): rps=12408.7 (overall: 11764.6) avg_msec=22.868 (overall: 22.650) LRANGE_600 (first 600 elements): rps=13912.4 (overall: 12320.9) avg_msec=18.975 (overall: 21.575) LRANGE_600 (first 
600 elements): rps=16390.4 (overall: 13158.2) avg_msec=15.265 (overall: 19.958) LRANGE_600 (first 600 elements): rps=16278.4 (overall: 13697.6) avg_msec=16.640 (overall: 19.276) LRANGE_600 (first 600 elements): rps=11442.2 (overall: 13369.6) avg_msec=22.325 (overall: 19.656) LRANGE_600 (first 600 elements): rps=14384.9 (overall: 13499.0) avg_msec=19.282 (overall: 19.605) LRANGE_600 (first 600 elements): rps=17885.8 (overall: 13998.2) avg_msec=14.993 (overall: 18.934) LRANGE_600 (first 600 elements): rps=17980.1 (overall: 14400.7) avg_msec=13.519 (overall: 18.251) LRANGE_600 (first 600 elements): rps=13557.8 (overall: 14323.3) avg_msec=19.559 (overall: 18.364) LRANGE_600 (first 600 elements): rps=11912.4 (overall: 14120.6) avg_msec=22.940 (overall: 18.689) LRANGE_600 (first 600 elements): rps=11474.3 (overall: 13913.8) avg_msec=22.480 (overall: 18.933) LRANGE_600 (first 600 elements): rps=11980.1 (overall: 13774.7) avg_msec=22.567 (overall: 19.161) LRANGE_600 (first 600 elements): rps=12398.4 (overall: 13682.4) avg_msec=22.793 (overall: 19.382) LRANGE_600 (first 600 elements): rps=13563.5 (overall: 13674.8) avg_msec=18.634 (overall: 19.335) LRANGE_600 (first 600 elements): rps=12047.4 (overall: 13577.9) avg_msec=22.922 (overall: 19.524) LRANGE_600 (first 600 elements): rps=11436.0 (overall: 13458.7) avg_msec=22.681 (overall: 19.674) LRANGE_600 (first 600 elements): rps=11884.5 (overall: 13375.5) avg_msec=22.597 (overall: 19.811) LRANGE_600 (first 600 elements): rps=12071.4 (overall: 13309.7) avg_msec=22.961 (overall: 19.955) LRANGE_600 (first 600 elements): rps=11695.3 (overall: 13231.1) avg_msec=22.683 (overall: 20.073) LRANGE_600 (first 600 elements): rps=12079.4 (overall: 13178.4) avg_msec=22.217 (overall: 20.163) LRANGE_600 (first 600 elements): rps=16948.6 (overall: 13344.0) avg_msec=15.891 (overall: 19.924) LRANGE_600 (first 600 elements): rps=16063.7 (overall: 13457.6) avg_msec=16.252 (overall: 19.741) LRANGE_600 (first 600 elements): rps=12067.5 (overall: 
13401.6) avg_msec=22.822 (overall: 19.853) LRANGE_600 (first 600 elements): rps=11131.5 (overall: 13314.1) avg_msec=22.857 (overall: 19.950) LRANGE_600 (first 600 elements): rps=12171.3 (overall: 13271.7) avg_msec=22.806 (overall: 20.047) LRANGE_600 (first 600 elements): rps=11902.7 (overall: 13221.6) avg_msec=22.928 (overall: 20.142) LRANGE_600 (first 600 elements): rps=12924.9 (overall: 13211.3) avg_msec=20.520 (overall: 20.155) ====== LRANGE_600 (first 600 elements) ====== 225s 100000 requests completed in 7.50 seconds 225s 50 parallel clients 225s 3 bytes payload 225s keep alive: 1 225s host configuration "save": 3600 1 300 100 60 10000 225s host configuration "appendonly": no 225s multi-thread: no 225s 225s Latency by percentile distribution: 225s 0.000% <= 0.447 milliseconds (cumulative count 10) 225s 50.000% <= 21.087 milliseconds (cumulative count 50070) 225s 75.000% <= 24.255 milliseconds (cumulative count 75000) 225s 87.500% <= 26.719 milliseconds (cumulative count 87510) 225s 93.750% <= 29.903 milliseconds (cumulative count 93750) 225s 96.875% <= 31.695 milliseconds (cumulative count 96890) 225s 98.438% <= 32.255 milliseconds (cumulative count 98440) 225s 99.219% <= 32.623 milliseconds (cumulative count 99240) 225s 99.609% <= 32.927 milliseconds (cumulative count 99630) 225s 99.805% <= 33.151 milliseconds (cumulative count 99810) 225s 99.902% <= 33.375 milliseconds (cumulative count 99910) 225s 99.951% <= 33.823 milliseconds (cumulative count 99960) 225s 99.976% <= 34.111 milliseconds (cumulative count 99980) 225s 99.988% <= 34.751 milliseconds (cumulative count 99990) 225s 99.994% <= 34.911 milliseconds (cumulative count 100000) 225s 100.000% <= 34.911 milliseconds (cumulative count 100000) 225s 225s Cumulative distribution of latencies: 225s 0.000% <= 0.103 milliseconds (cumulative count 0) 225s 0.030% <= 0.503 milliseconds (cumulative count 30) 225s 0.070% <= 0.607 milliseconds (cumulative count 70) 225s 0.080% <= 0.703 milliseconds (cumulative count 
80) 225s 0.400% <= 0.807 milliseconds (cumulative count 400) 225s 0.890% <= 0.903 milliseconds (cumulative count 890) 225s 1.080% <= 1.007 milliseconds (cumulative count 1080) 225s 1.290% <= 1.103 milliseconds (cumulative count 1290) 225s 2.040% <= 1.207 milliseconds (cumulative count 2040) 225s 2.250% <= 1.303 milliseconds (cumulative count 2250) 225s 2.430% <= 1.407 milliseconds (cumulative count 2430) 225s 2.590% <= 1.503 milliseconds (cumulative count 2590) 225s 2.780% <= 1.607 milliseconds (cumulative count 2780) 225s 2.930% <= 1.703 milliseconds (cumulative count 2930) 225s 3.030% <= 1.807 milliseconds (cumulative count 3030) 225s 3.150% <= 1.903 milliseconds (cumulative count 3150) 225s 3.240% <= 2.007 milliseconds (cumulative count 3240) 225s 3.330% <= 2.103 milliseconds (cumulative count 3330) 225s 3.580% <= 3.103 milliseconds (cumulative count 3580) 225s 3.830% <= 4.103 milliseconds (cumulative count 3830) 225s 4.330% <= 5.103 milliseconds (cumulative count 4330) 225s 4.700% <= 6.103 milliseconds (cumulative count 4700) 225s 5.280% <= 7.103 milliseconds (cumulative count 5280) 225s 6.230% <= 8.103 milliseconds (cumulative count 6230) 225s 7.640% <= 9.103 milliseconds (cumulative count 7640) 225s 9.820% <= 10.103 milliseconds (cumulative count 9820) 225s 12.220% <= 11.103 milliseconds (cumulative count 12220) 225s 14.250% <= 12.103 milliseconds (cumulative count 14250) 225s 16.480% <= 13.103 milliseconds (cumulative count 16480) 225s 19.040% <= 14.103 milliseconds (cumulative count 19040) 225s 21.840% <= 15.103 milliseconds (cumulative count 21840) 225s 24.320% <= 16.103 milliseconds (cumulative count 24320) 225s 26.760% <= 17.103 milliseconds (cumulative count 26760) 225s 29.300% <= 18.111 milliseconds (cumulative count 29300) 225s 33.670% <= 19.103 milliseconds (cumulative count 33670) 225s 41.280% <= 20.111 milliseconds (cumulative count 41280) 225s 50.210% <= 21.103 milliseconds (cumulative count 50210) 225s 59.100% <= 22.111 milliseconds (cumulative 
count 59100) 225s 66.570% <= 23.103 milliseconds (cumulative count 66570) 225s 73.990% <= 24.111 milliseconds (cumulative count 73990) 225s 80.690% <= 25.103 milliseconds (cumulative count 80690) 225s 85.650% <= 26.111 milliseconds (cumulative count 85650) 225s 88.410% <= 27.103 milliseconds (cumulative count 88410) 225s 90.790% <= 28.111 milliseconds (cumulative count 90790) 225s 92.780% <= 29.103 milliseconds (cumulative count 92780) 225s 93.960% <= 30.111 milliseconds (cumulative count 93960) 225s 95.370% <= 31.103 milliseconds (cumulative count 95370) 225s 98.000% <= 32.111 milliseconds (cumulative count 98000) 225s 99.780% <= 33.119 milliseconds (cumulative count 99780) 225s 99.980% <= 34.111 milliseconds (cumulative count 99980) 225s 100.000% <= 35.103 milliseconds (cumulative count 100000) 225s 225s Summary: 225s throughput summary: 13331.56 requests per second 225s latency summary (msec): 225s avg min p50 p95 p99 max 225s 19.961 0.440 21.087 30.927 32.511 34.911 225s MSET (10 keys): rps=28725.1 (overall: 327727.3) avg_msec=1.379 (overall: 1.379) MSET (10 keys): rps=329880.0 (overall: 329705.9) avg_msec=1.450 (overall: 1.444) ====== MSET (10 keys) ====== 225s 100000 requests completed in 0.30 seconds 225s 50 parallel clients 225s 3 bytes payload 225s keep alive: 1 225s host configuration "save": 3600 1 300 100 60 10000 225s host configuration "appendonly": no 225s multi-thread: no 225s 225s Latency by percentile distribution: 225s 0.000% <= 0.223 milliseconds (cumulative count 10) 225s 50.000% <= 1.479 milliseconds (cumulative count 51300) 225s 75.000% <= 1.575 milliseconds (cumulative count 76210) 225s 87.500% <= 1.647 milliseconds (cumulative count 88500) 225s 93.750% <= 1.719 milliseconds (cumulative count 94010) 225s 96.875% <= 1.831 milliseconds (cumulative count 97000) 225s 98.438% <= 1.951 milliseconds (cumulative count 98440) 225s 99.219% <= 2.127 milliseconds (cumulative count 99220) 225s 99.609% <= 2.287 milliseconds (cumulative count 99620) 225s 
99.805% <= 2.615 milliseconds (cumulative count 99810) 225s 99.902% <= 2.703 milliseconds (cumulative count 99910) 225s 99.951% <= 2.791 milliseconds (cumulative count 99960) 225s 99.976% <= 2.855 milliseconds (cumulative count 99980) 225s 99.988% <= 2.887 milliseconds (cumulative count 99990) 225s 99.994% <= 2.911 milliseconds (cumulative count 100000) 225s 100.000% <= 2.911 milliseconds (cumulative count 100000) 225s 225s Cumulative distribution of latencies: 225s 0.000% <= 0.103 milliseconds (cumulative count 0) 225s 0.010% <= 0.303 milliseconds (cumulative count 10) 225s 0.100% <= 0.407 milliseconds (cumulative count 100) 225s 0.120% <= 0.503 milliseconds (cumulative count 120) 225s 0.210% <= 0.703 milliseconds (cumulative count 210) 225s 1.740% <= 0.807 milliseconds (cumulative count 1740) 225s 5.610% <= 0.903 milliseconds (cumulative count 5610) 225s 9.700% <= 1.007 milliseconds (cumulative count 9700) 225s 11.220% <= 1.103 milliseconds (cumulative count 11220) 225s 12.160% <= 1.207 milliseconds (cumulative count 12160) 225s 16.770% <= 1.303 milliseconds (cumulative count 16770) 225s 32.660% <= 1.407 milliseconds (cumulative count 32660) 225s 57.860% <= 1.503 milliseconds (cumulative count 57860) 225s 82.340% <= 1.607 milliseconds (cumulative count 82340) 225s 93.340% <= 1.703 milliseconds (cumulative count 93340) 225s 96.570% <= 1.807 milliseconds (cumulative count 96570) 225s 97.950% <= 1.903 milliseconds (cumulative count 97950) 225s 98.780% <= 2.007 milliseconds (cumulative count 98780) 225s 99.120% <= 2.103 milliseconds (cumulative count 99120) 225s 100.000% <= 3.103 milliseconds (cumulative count 100000) 225s 225s Summary: 225s throughput summary: 331125.84 requests per second 225s latency summary (msec): 225s avg min p50 p95 p99 max 225s 1.441 0.216 1.479 1.751 2.079 2.911 225s ====== XADD ====== 225s 100000 requests completed in 0.15 seconds 225s 50 parallel clients 225s 3 bytes payload 225s keep alive: 1 225s host configuration "save": 3600 1 300 100 
60 10000 225s host configuration "appendonly": no 225s multi-thread: no 225s 225s Latency by percentile distribution: 225s 0.000% <= 0.175 milliseconds (cumulative count 10) 225s 50.000% <= 0.687 milliseconds (cumulative count 52910) 225s 75.000% <= 0.751 milliseconds (cumulative count 75860) 225s 87.500% <= 0.799 milliseconds (cumulative count 88530) 225s 93.750% <= 0.831 milliseconds (cumulative count 94970) 225s 96.875% <= 0.847 milliseconds (cumulative count 96950) 225s 98.438% <= 0.871 milliseconds (cumulative count 98650) 225s 99.219% <= 0.887 milliseconds (cumulative count 99220) 225s 99.609% <= 0.911 milliseconds (cumulative count 99640) 225s 99.805% <= 0.943 milliseconds (cumulative count 99820) 225s 99.902% <= 0.975 milliseconds (cumulative count 99920) 225s 99.951% <= 1.007 milliseconds (cumulative count 99970) 225s 99.976% <= 1.023 milliseconds (cumulative count 99980) 225s 99.988% <= 1.031 milliseconds (cumulative count 99990) 225s 99.994% <= 1.039 milliseconds (cumulative count 100000) 225s 100.000% <= 1.039 milliseconds (cumulative count 100000) 225s 225s Cumulative distribution of latencies: 225s 0.000% <= 0.103 milliseconds (cumulative count 0) 225s 0.090% <= 0.207 milliseconds (cumulative count 90) 225s 0.290% <= 0.303 milliseconds (cumulative count 290) 225s 0.920% <= 0.407 milliseconds (cumulative count 920) 225s 10.220% <= 0.503 milliseconds (cumulative count 10220) 225s 25.080% <= 0.607 milliseconds (cumulative count 25080) 225s 59.100% <= 0.703 milliseconds (cumulative count 59100) 225s 90.250% <= 0.807 milliseconds (cumulative count 90250) 225s 99.540% <= 0.903 milliseconds (cumulative count 99540) 225s 99.970% <= 1.007 milliseconds (cumulative count 99970) 225s 100.000% <= 1.103 milliseconds (cumulative count 100000) 225s 225s Summary: 225s throughput summary: 680272.12 requests per second 225s latency summary (msec): 225s avg min p50 p95 p99 max 225s 0.670 0.168 0.687 0.839 0.887 1.039 225s 226s autopkgtest [18:42:26]: test 0002-benchmark: 
-----------------------] 226s 0002-benchmark PASS 226s autopkgtest [18:42:26]: test 0002-benchmark: - - - - - - - - - - results - - - - - - - - - - 226s autopkgtest [18:42:26]: test 0003-redict-check-aof: preparing testbed 227s Reading package lists... 227s Building dependency tree... 227s Reading state information... 227s Starting pkgProblemResolver with broken count: 0 227s Starting 2 pkgProblemResolver with broken count: 0 227s Done 227s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 228s autopkgtest [18:42:28]: test 0003-redict-check-aof: [----------------------- 229s autopkgtest [18:42:29]: test 0003-redict-check-aof: -----------------------] 229s 0003-redict-check-aof PASS 229s autopkgtest [18:42:29]: test 0003-redict-check-aof: - - - - - - - - - - results - - - - - - - - - - 230s autopkgtest [18:42:30]: test 0004-redict-check-rdb: preparing testbed 230s Reading package lists... 230s Building dependency tree... 230s Reading state information... 230s Starting pkgProblemResolver with broken count: 0 230s Starting 2 pkgProblemResolver with broken count: 0 230s Done 230s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 231s autopkgtest [18:42:31]: test 0004-redict-check-rdb: [----------------------- 237s OK 237s [offset 0] Checking RDB file /var/lib/redict/dump.rdb 237s [offset 26] AUX FIELD redis-ver = '7.3.2' 237s [offset 40] AUX FIELD redis-bits = '64' 237s [offset 52] AUX FIELD ctime = '1742064157' 237s [offset 67] AUX FIELD used-mem = '3058432' 237s [offset 79] AUX FIELD aof-base = '0' 237s [offset 81] Selecting DB ID 0 237s [offset 565071] Checksum OK 237s [offset 565071] \o/ RDB looks OK! 
\o/ 237s [info] 5 keys read 237s [info] 0 expires 237s [info] 0 already expired 237s autopkgtest [18:42:37]: test 0004-redict-check-rdb: -----------------------] 237s 0004-redict-check-rdb PASS 237s autopkgtest [18:42:37]: test 0004-redict-check-rdb: - - - - - - - - - - results - - - - - - - - - - 238s autopkgtest [18:42:38]: test 0005-cjson: preparing testbed 238s Reading package lists... 238s Building dependency tree... 238s Reading state information... 238s Starting pkgProblemResolver with broken count: 0 238s Starting 2 pkgProblemResolver with broken count: 0 238s Done 238s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 240s autopkgtest [18:42:40]: test 0005-cjson: [----------------------- 246s 246s autopkgtest [18:42:46]: test 0005-cjson: -----------------------] 246s 0005-cjson PASS 246s autopkgtest [18:42:46]: test 0005-cjson: - - - - - - - - - - results - - - - - - - - - - 247s autopkgtest [18:42:47]: @@@@@@@@@@@@@@@@@@@@ summary 247s 0001-redict-cli PASS 247s 0002-benchmark PASS 247s 0003-redict-check-aof PASS 247s 0004-redict-check-rdb PASS 247s 0005-cjson PASS 265s nova [W] Using flock in prodstack6-s390x 265s flock: timeout while waiting to get lock 265s Creating nova instance adt-plucky-s390x-redict-20250315-183840-juju-7f2275-prod-proposed-migration-environment-15-1fff7fc0-4fb4-4a8d-8293-bcfe3931d189 from image adt/ubuntu-plucky-s390x-server-20250315.img (UUID 3d3557fa-fd0f-4bba-9b89-8d5964e09f61)... 265s nova [W] Timed out waiting for b4b0e77b-42a2-4850-a2f4-83b7741f2a64 to get deleted.