0s autopkgtest [00:29:07]: starting date and time: 2025-02-28 00:29:07+0000 0s autopkgtest [00:29:07]: git checkout: 325255d2 Merge branch 'pin-any-arch' into 'ubuntu/production' 0s autopkgtest [00:29:07]: host juju-7f2275-prod-proposed-migration-environment-2; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.7edvr_bx/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:systemd,src:dpdk,src:samba --apt-upgrade redis --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 '--env=ADT_TEST_TRIGGERS=systemd/255.4-1ubuntu8.6 dpdk/23.11.2-0ubuntu0.24.04.1 samba/2:4.19.5+dfsg-4ubuntu9.1' -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest-s390x --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-2@bos03-s390x-31.secgroup --name adt-noble-s390x-redis-20250228-002907-juju-7f2275-prod-proposed-migration-environment-2-ae62c2b6-7524-4d17-9fbb-b4777c88b560 --image adt/ubuntu-noble-s390x-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-2 --net-id=net_prod-proposed-migration-s390x -e TERM=linux -e ''"'"'http_proxy=http://squid.internal:3128'"'"'' -e ''"'"'https_proxy=http://squid.internal:3128'"'"'' -e ''"'"'no_proxy=127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,keyserver.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,10.24.0.0/24,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com,radosgw.ps5.canonical.com'"'"'' --mirror=http://ftpmaster.internal/ubuntu/ 99s autopkgtest [00:30:46]: testbed dpkg architecture: s390x 99s autopkgtest [00:30:46]: testbed apt version: 2.7.14build2 99s autopkgtest [00:30:46]: @@@@@@@@@@@@@@@@@@@@ test bed setup 99s autopkgtest [00:30:46]: testbed release detected to be: None 100s 
autopkgtest [00:30:47]: updating testbed package index (apt update) 101s Get:1 http://ftpmaster.internal/ubuntu noble-proposed InRelease [265 kB] 101s Hit:2 http://ftpmaster.internal/ubuntu noble InRelease 101s Hit:3 http://ftpmaster.internal/ubuntu noble-updates InRelease 101s Hit:4 http://ftpmaster.internal/ubuntu noble-security InRelease 101s Get:5 http://ftpmaster.internal/ubuntu noble-proposed/main Sources [61.6 kB] 101s Get:6 http://ftpmaster.internal/ubuntu noble-proposed/multiverse Sources [9488 B] 101s Get:7 http://ftpmaster.internal/ubuntu noble-proposed/restricted Sources [18.6 kB] 101s Get:8 http://ftpmaster.internal/ubuntu noble-proposed/universe Sources [66.2 kB] 101s Get:9 http://ftpmaster.internal/ubuntu noble-proposed/main s390x Packages [79.9 kB] 101s Get:10 http://ftpmaster.internal/ubuntu noble-proposed/main s390x c-n-f Metadata [3744 B] 101s Get:11 http://ftpmaster.internal/ubuntu noble-proposed/restricted s390x Packages [1380 B] 101s Get:12 http://ftpmaster.internal/ubuntu noble-proposed/restricted s390x c-n-f Metadata [116 B] 101s Get:13 http://ftpmaster.internal/ubuntu noble-proposed/universe s390x Packages [275 kB] 101s Get:14 http://ftpmaster.internal/ubuntu noble-proposed/universe s390x c-n-f Metadata [5504 B] 101s Get:15 http://ftpmaster.internal/ubuntu noble-proposed/multiverse s390x Packages [968 B] 101s Get:16 http://ftpmaster.internal/ubuntu noble-proposed/multiverse s390x c-n-f Metadata [116 B] 103s Fetched 788 kB in 1s (996 kB/s) 104s Reading package lists... 105s Reading package lists... 105s Building dependency tree... 105s Reading state information... 105s Calculating upgrade... 105s The following packages will be upgraded: 105s cloud-init cryptsetup-bin libcryptsetup12 105s 3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 105s Need to get 1075 kB of archives. 105s After this operation, 3072 B disk space will be freed. 
105s Get:1 http://ftpmaster.internal/ubuntu noble-proposed/main s390x libcryptsetup12 s390x 2:2.7.0-1ubuntu4.2 [261 kB] 105s Get:2 http://ftpmaster.internal/ubuntu noble-proposed/main s390x cryptsetup-bin s390x 2:2.7.0-1ubuntu4.2 [210 kB] 105s Get:3 http://ftpmaster.internal/ubuntu noble-proposed/main s390x cloud-init all 24.4.1-0ubuntu0~24.04.1 [604 kB] 106s Preconfiguring packages ... 106s Fetched 1075 kB in 1s (1886 kB/s) 106s (Reading database ... 54222 files and directories currently installed.) 106s Preparing to unpack .../libcryptsetup12_2%3a2.7.0-1ubuntu4.2_s390x.deb ... 106s Unpacking libcryptsetup12:s390x (2:2.7.0-1ubuntu4.2) over (2:2.7.0-1ubuntu4.1) ... 106s Preparing to unpack .../cryptsetup-bin_2%3a2.7.0-1ubuntu4.2_s390x.deb ... 106s Unpacking cryptsetup-bin (2:2.7.0-1ubuntu4.2) over (2:2.7.0-1ubuntu4.1) ... 106s Preparing to unpack .../cloud-init_24.4.1-0ubuntu0~24.04.1_all.deb ... 106s Unpacking cloud-init (24.4.1-0ubuntu0~24.04.1) over (24.4-0ubuntu1~24.04.2) ... 106s Setting up cloud-init (24.4.1-0ubuntu0~24.04.1) ... 107s Setting up libcryptsetup12:s390x (2:2.7.0-1ubuntu4.2) ... 107s Setting up cryptsetup-bin (2:2.7.0-1ubuntu4.2) ... 107s Processing triggers for rsyslog (8.2312.0-3ubuntu9) ... 107s Processing triggers for man-db (2.12.0-4build2) ... 108s Processing triggers for libc-bin (2.39-0ubuntu8.4) ... 108s Reading package lists... 108s Building dependency tree... 108s Reading state information... 
108s 0 upgraded, 0 newly installed, 0 to remove and 11 not upgraded. 108s autopkgtest [00:30:55]: upgrading testbed (apt dist-upgrade and autopurge) 108s Reading package lists... 108s Building dependency tree... 108s Reading state information... 109s Calculating upgrade... Starting pkgProblemResolver with broken count: 0 109s Starting 2 pkgProblemResolver with broken count: 0 109s Done 109s Entering ResolveByKeep 109s 109s The following packages will be upgraded: 109s libnss-systemd libpam-systemd libsystemd-shared libsystemd0 libudev1 systemd 109s systemd-dev systemd-resolved systemd-sysv systemd-timesyncd udev 109s 11 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 109s Need to get 9021 kB of archives. 109s After this operation, 0 B of additional disk space will be used. 109s Get:1 http://ftpmaster.internal/ubuntu noble-proposed/main s390x libnss-systemd s390x 255.4-1ubuntu8.6 [165 kB] 109s Get:2 http://ftpmaster.internal/ubuntu noble-proposed/main s390x systemd-dev all 255.4-1ubuntu8.6 [104 kB] 109s Get:3 http://ftpmaster.internal/ubuntu noble-proposed/main s390x systemd-timesyncd s390x 255.4-1ubuntu8.6 [35.1 kB] 109s Get:4 http://ftpmaster.internal/ubuntu noble-proposed/main s390x systemd-resolved s390x 255.4-1ubuntu8.6 [302 kB] 110s Get:5 http://ftpmaster.internal/ubuntu noble-proposed/main s390x libsystemd-shared s390x 255.4-1ubuntu8.6 [2126 kB] 110s Get:6 http://ftpmaster.internal/ubuntu noble-proposed/main s390x libsystemd0 s390x 255.4-1ubuntu8.6 [443 kB] 110s Get:7 http://ftpmaster.internal/ubuntu noble-proposed/main s390x systemd-sysv s390x 255.4-1ubuntu8.6 [11.9 kB] 110s Get:8 http://ftpmaster.internal/ubuntu noble-proposed/main s390x libpam-systemd s390x 255.4-1ubuntu8.6 [241 kB] 110s Get:9 http://ftpmaster.internal/ubuntu noble-proposed/main s390x systemd s390x 255.4-1ubuntu8.6 [3530 kB] 110s Get:10 http://ftpmaster.internal/ubuntu noble-proposed/main s390x udev s390x 255.4-1ubuntu8.6 [1883 kB] 110s Get:11 http://ftpmaster.internal/ubuntu noble-proposed/main s390x libudev1 s390x 255.4-1ubuntu8.6 [179 kB] 110s Fetched 9021 kB in 1s (10.5 MB/s) 110s (Reading database ... 54222 files and directories currently installed.) 110s Preparing to unpack .../0-libnss-systemd_255.4-1ubuntu8.6_s390x.deb ... 110s Unpacking libnss-systemd:s390x (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ... 110s Preparing to unpack .../1-systemd-dev_255.4-1ubuntu8.6_all.deb ... 110s Unpacking systemd-dev (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ... 110s Preparing to unpack .../2-systemd-timesyncd_255.4-1ubuntu8.6_s390x.deb ... 110s Unpacking systemd-timesyncd (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ... 110s Preparing to unpack .../3-systemd-resolved_255.4-1ubuntu8.6_s390x.deb ... 110s Unpacking systemd-resolved (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ... 110s Preparing to unpack .../4-libsystemd-shared_255.4-1ubuntu8.6_s390x.deb ... 110s Unpacking libsystemd-shared:s390x (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ... 110s Preparing to unpack .../5-libsystemd0_255.4-1ubuntu8.6_s390x.deb ... 110s Unpacking libsystemd0:s390x (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ... 110s Setting up libsystemd0:s390x (255.4-1ubuntu8.6) ... 110s (Reading database ... 54222 files and directories currently installed.) 110s Preparing to unpack .../systemd-sysv_255.4-1ubuntu8.6_s390x.deb ... 110s Unpacking systemd-sysv (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ... 110s Preparing to unpack .../libpam-systemd_255.4-1ubuntu8.6_s390x.deb ... 110s Unpacking libpam-systemd:s390x (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ... 110s Preparing to unpack .../systemd_255.4-1ubuntu8.6_s390x.deb ... 110s Unpacking systemd (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ... 110s Preparing to unpack .../udev_255.4-1ubuntu8.6_s390x.deb ... 110s Unpacking udev (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ... 111s Preparing to unpack .../libudev1_255.4-1ubuntu8.6_s390x.deb ... 111s Unpacking libudev1:s390x (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ... 111s Setting up libudev1:s390x (255.4-1ubuntu8.6) ... 111s Setting up systemd-dev (255.4-1ubuntu8.6) ... 111s Setting up libsystemd-shared:s390x (255.4-1ubuntu8.6) ... 111s Setting up systemd (255.4-1ubuntu8.6) ... 111s Setting up systemd-timesyncd (255.4-1ubuntu8.6) ... 111s Setting up udev (255.4-1ubuntu8.6) ... 112s Setting up systemd-resolved (255.4-1ubuntu8.6) ... 112s Setting up systemd-sysv (255.4-1ubuntu8.6) ... 112s Setting up libnss-systemd:s390x (255.4-1ubuntu8.6) ... 112s Setting up libpam-systemd:s390x (255.4-1ubuntu8.6) ... 112s Processing triggers for libc-bin (2.39-0ubuntu8.4) ... 112s Processing triggers for man-db (2.12.0-4build2) ... 113s Processing triggers for dbus (1.14.10-4ubuntu4.1) ... 113s Processing triggers for initramfs-tools (0.142ubuntu25.5) ... 
113s update-initramfs: Generating /boot/initrd.img-6.8.0-54-generic 113s W: No lz4 in /usr/bin:/sbin:/bin, using gzip 114s Using config file '/etc/zipl.conf' 114s Building bootmap in '/boot' 114s Adding IPL section 'ubuntu' (default) 114s Preparing boot device for LD-IPL: vda (0000). 114s Done. 115s Reading package lists... 115s Building dependency tree... 115s Reading state information... 115s Starting pkgProblemResolver with broken count: 0 115s Starting 2 pkgProblemResolver with broken count: 0 115s Done 115s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 116s autopkgtest [00:31:03]: rebooting testbed after setup commands that affected boot 120s autopkgtest-virt-ssh: WARNING: ssh connection failed. Retrying in 3 seconds... 136s autopkgtest [00:31:23]: testbed running kernel: Linux 6.8.0-54-generic #56-Ubuntu SMP Fri Feb 7 23:38:34 UTC 2025 139s autopkgtest [00:31:26]: @@@@@@@@@@@@@@@@@@@@ apt-source redis 143s Get:1 http://ftpmaster.internal/ubuntu noble/universe redis 5:7.0.15-1build2 (dsc) [2376 B] 143s Get:2 http://ftpmaster.internal/ubuntu noble/universe redis 5:7.0.15-1build2 (tar) [3026 kB] 143s Get:3 http://ftpmaster.internal/ubuntu noble/universe redis 5:7.0.15-1build2 (diff) [29.3 kB] 143s gpgv: Signature made Mon Apr 1 07:33:50 2024 UTC 143s gpgv: using RSA key A089FB36AAFBDAD5ACC1325069F790171A210984 143s gpgv: Can't check signature: No public key 143s dpkg-source: warning: cannot verify inline signature for ./redis_7.0.15-1build2.dsc: no acceptable signature found 143s autopkgtest [00:31:30]: testing package redis version 5:7.0.15-1build2 144s autopkgtest [00:31:31]: build not needed 149s autopkgtest [00:31:36]: test 0001-redis-cli: preparing testbed 149s Reading package lists... 149s Building dependency tree... 149s Reading state information... 
149s Starting pkgProblemResolver with broken count: 0 149s Starting 2 pkgProblemResolver with broken count: 0 149s Done 149s The following NEW packages will be installed: 149s libatomic1 libjemalloc2 liblzf1 redis redis-sentinel redis-server 149s redis-tools 149s 0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded. 149s Need to get 1491 kB of archives. 149s After this operation, 8109 kB of additional disk space will be used. 149s Get:1 http://ftpmaster.internal/ubuntu noble-updates/main s390x libatomic1 s390x 14.2.0-4ubuntu2~24.04 [9576 B] 150s Get:2 http://ftpmaster.internal/ubuntu noble/universe s390x libjemalloc2 s390x 5.3.0-2build1 [204 kB] 150s Get:3 http://ftpmaster.internal/ubuntu noble/universe s390x liblzf1 s390x 3.6-4 [7020 B] 150s Get:4 http://ftpmaster.internal/ubuntu noble/universe s390x redis-tools s390x 5:7.0.15-1build2 [1204 kB] 150s Get:5 http://ftpmaster.internal/ubuntu noble/universe s390x redis-sentinel s390x 5:7.0.15-1build2 [12.2 kB] 150s Get:6 http://ftpmaster.internal/ubuntu noble/universe s390x redis-server s390x 5:7.0.15-1build2 [51.7 kB] 150s Get:7 http://ftpmaster.internal/ubuntu noble/universe s390x redis all 5:7.0.15-1build2 [2920 B] 150s Fetched 1491 kB in 1s (2361 kB/s) 150s Selecting previously unselected package libatomic1:s390x. 150s (Reading database ... 54222 files and directories currently installed.) 150s Preparing to unpack .../0-libatomic1_14.2.0-4ubuntu2~24.04_s390x.deb ... 
150s Unpacking libatomic1:s390x (14.2.0-4ubuntu2~24.04) ... 150s Selecting previously unselected package libjemalloc2:s390x. 150s Preparing to unpack .../1-libjemalloc2_5.3.0-2build1_s390x.deb ... 150s Unpacking libjemalloc2:s390x (5.3.0-2build1) ... 150s Selecting previously unselected package liblzf1:s390x. 150s Preparing to unpack .../2-liblzf1_3.6-4_s390x.deb ... 150s Unpacking liblzf1:s390x (3.6-4) ... 150s Selecting previously unselected package redis-tools. 150s Preparing to unpack .../3-redis-tools_5%3a7.0.15-1build2_s390x.deb ... 150s Unpacking redis-tools (5:7.0.15-1build2) ... 150s Selecting previously unselected package redis-sentinel. 150s Preparing to unpack .../4-redis-sentinel_5%3a7.0.15-1build2_s390x.deb ... 150s Unpacking redis-sentinel (5:7.0.15-1build2) ... 150s Selecting previously unselected package redis-server. 150s Preparing to unpack .../5-redis-server_5%3a7.0.15-1build2_s390x.deb ... 150s Unpacking redis-server (5:7.0.15-1build2) ... 150s Selecting previously unselected package redis. 150s Preparing to unpack .../6-redis_5%3a7.0.15-1build2_all.deb ... 150s Unpacking redis (5:7.0.15-1build2) ... 150s Setting up libjemalloc2:s390x (5.3.0-2build1) ... 150s Setting up liblzf1:s390x (3.6-4) ... 150s Setting up libatomic1:s390x (14.2.0-4ubuntu2~24.04) ... 150s Setting up redis-tools (5:7.0.15-1build2) ... 150s Setting up redis-server (5:7.0.15-1build2) ... 151s Created symlink /etc/systemd/system/redis.service → /usr/lib/systemd/system/redis-server.service. 151s Created symlink /etc/systemd/system/multi-user.target.wants/redis-server.service → /usr/lib/systemd/system/redis-server.service. 151s Setting up redis-sentinel (5:7.0.15-1build2) ... 151s Created symlink /etc/systemd/system/sentinel.service → /usr/lib/systemd/system/redis-sentinel.service. 151s Created symlink /etc/systemd/system/multi-user.target.wants/redis-sentinel.service → /usr/lib/systemd/system/redis-sentinel.service. 151s Setting up redis (5:7.0.15-1build2) ... 
151s Processing triggers for man-db (2.12.0-4build2) ... 152s Processing triggers for libc-bin (2.39-0ubuntu8.4) ... 154s autopkgtest [00:31:41]: test 0001-redis-cli: [----------------------- 159s # Server 159s redis_version:7.0.15 159s redis_git_sha1:00000000 159s redis_git_dirty:0 159s redis_build_id:d81b8ff71cfb150e 159s redis_mode:standalone 159s os:Linux 6.8.0-54-generic s390x 159s arch_bits:64 159s monotonic_clock:POSIX clock_gettime 159s multiplexing_api:epoll 159s atomicvar_api:c11-builtin 159s gcc_version:13.2.0 159s process_id:1703 159s process_supervised:systemd 159s run_id:d1626837e4cd67546be4b91a1dd94414c26e4066 159s tcp_port:6379 159s server_time_usec:1740702792008750 159s uptime_in_seconds:6 159s uptime_in_days:0 159s hz:10 159s configured_hz:10 159s lru_clock:12649543 159s executable:/usr/bin/redis-server 159s config_file:/etc/redis/redis.conf 159s io_threads_active:0 159s 159s # Clients 159s connected_clients:3 159s cluster_connections:0 159s maxclients:10000 159s client_recent_max_input_buffer:20480 159s client_recent_max_output_buffer:0 159s blocked_clients:0 159s tracking_clients:0 159s clients_in_timeout_table:0 159s 159s # Memory 159s used_memory:1094112 159s used_memory_human:1.04M 159s used_memory_rss:13631488 159s used_memory_rss_human:13.00M 159s used_memory_peak:1094112 159s used_memory_peak_human:1.04M 159s used_memory_peak_perc:102.15% 159s used_memory_overhead:953568 159s used_memory_startup:908768 159s used_memory_dataset:140544 159s used_memory_dataset_perc:75.83% 159s allocator_allocated:4623392 159s allocator_active:9437184 159s allocator_resident:11665408 159s total_system_memory:4192186368 159s total_system_memory_human:3.90G 159s used_memory_lua:31744 159s used_memory_vm_eval:31744 159s used_memory_lua_human:31.00K 159s used_memory_scripts_eval:0 159s number_of_cached_scripts:0 159s number_of_functions:0 159s number_of_libraries:0 159s used_memory_vm_functions:32768 159s used_memory_vm_total:64512 159s 
used_memory_vm_total_human:63.00K 159s used_memory_functions:200 159s used_memory_scripts:200 159s used_memory_scripts_human:200B 159s maxmemory:0 159s maxmemory_human:0B 159s maxmemory_policy:noeviction 159s allocator_frag_ratio:2.04 159s allocator_frag_bytes:4813792 159s allocator_rss_ratio:1.24 159s allocator_rss_bytes:2228224 159s rss_overhead_ratio:1.17 159s rss_overhead_bytes:1966080 159s mem_fragmentation_ratio:12.94 159s mem_fragmentation_bytes:12577976 159s mem_not_counted_for_evict:0 159s mem_replication_backlog:0 159s mem_total_replication_buffers:0 159s mem_clients_slaves:0 159s mem_clients_normal:44600 159s mem_cluster_links:0 159s mem_aof_buffer:0 159s mem_allocator:jemalloc-5.3.0 159s active_defrag_running:0 159s lazyfree_pending_objects:0 159s lazyfreed_objects:0 159s 159s # Persistence 159s loading:0 159s async_loading:0 159s current_cow_peak:0 159s current_cow_size:0 159s current_cow_size_age:0 159s current_fork_perc:0.00 159s current_save_keys_processed:0 159s current_save_keys_total:0 159s rdb_changes_since_last_save:0 159s rdb_bgsave_in_progress:0 159s rdb_last_save_time:1740702786 159s rdb_last_bgsave_status:ok 159s rdb_last_bgsave_time_sec:-1 159s rdb_current_bgsave_time_sec:-1 159s rdb_saves:0 159s rdb_last_cow_size:0 159s rdb_last_load_keys_expired:0 159s rdb_last_load_keys_loaded:0 159s aof_enabled:0 159s aof_rewrite_in_progress:0 159s aof_rewrite_scheduled:0 159s aof_last_rewrite_time_sec:-1 159s aof_current_rewrite_time_sec:-1 159s aof_last_bgrewrite_status:ok 159s aof_rewrites:0 159s aof_rewrites_consecutive_failures:0 159s aof_last_write_status:ok 159s aof_last_cow_size:0 159s module_fork_in_progress:0 159s module_fork_last_cow_size:0 159s 159s # Stats 159s total_connections_received:3 159s total_commands_processed:9 159s instantaneous_ops_per_sec:1 159s total_net_input_bytes:497 159s total_net_output_bytes:360 159s total_net_repl_input_bytes:0 159s total_net_repl_output_bytes:0 159s instantaneous_input_kbps:0.09 159s 
instantaneous_output_kbps:0.09 159s instantaneous_input_repl_kbps:0.00 159s instantaneous_output_repl_kbps:0.00 159s rejected_connections:0 159s sync_full:0 159s sync_partial_ok:0 159s sync_partial_err:0 159s expired_keys:0 159s expired_stale_perc:0.00 159s expired_time_cap_reached_count:0 159s expire_cycle_cpu_milliseconds:0 159s evicted_keys:0 159s evicted_clients:0 159s total_eviction_exceeded_time:0 159s current_eviction_exceeded_time:0 159s keyspace_hits:0 159s keyspace_misses:0 159s pubsub_channels:1 159s pubsub_patterns:0 159s pubsubshard_channels:0 159s latest_fork_usec:0 159s total_forks:0 159s migrate_cached_sockets:0 159s slave_expires_tracked_keys:0 159s active_defrag_hits:0 159s active_defrag_misses:0 159s active_defrag_key_hits:0 159s active_defrag_key_misses:0 159s total_active_defrag_time:0 159s current_active_defrag_time:0 159s tracking_total_keys:0 159s tracking_total_items:0 159s tracking_total_prefixes:0 159s unexpected_error_replies:0 159s total_error_replies:0 159s dump_payload_sanitizations:0 159s total_reads_processed:8 159s total_writes_processed:9 159s io_threaded_reads_processed:0 159s io_threaded_writes_processed:0 159s reply_buffer_shrinks:2 159s reply_buffer_expands:0 159s 159s # Replication 159s role:master 159s connected_slaves:0 159s master_failover_state:no-failover 159s master_replid:2c58c92200f3b0054c75e7afd945093b6d81baa8 159s master_replid2:0000000000000000000000000000000000000000 159s master_repl_offset:0 159s second_repl_offset:-1 159s repl_backlog_active:0 159s repl_backlog_size:1048576 159s repl_backlog_first_byte_offset:0 159s repl_backlog_histlen:0 159s 159s # CPU 159s used_cpu_sys:0.011650 159s used_cpu_user:0.028804 159s used_cpu_sys_children:0.000157 159s used_cpu_user_children:0.000025 159s used_cpu_sys_main_thread:0.011632 159s used_cpu_user_main_thread:0.028793 159s 159s # Modules 159s 159s # Errorstats 159s 159s # Cluster 159s cluster_enabled:0 159s 159s # Keyspace 159s Redis ver. 
7.0.15 159s autopkgtest [00:31:46]: test 0001-redis-cli: -----------------------] 160s autopkgtest [00:31:47]: test 0001-redis-cli: - - - - - - - - - - results - - - - - - - - - - 160s 0001-redis-cli PASS 160s autopkgtest [00:31:47]: test 0002-benchmark: preparing testbed 160s Reading package lists... 161s Building dependency tree... 161s Reading state information... 161s Starting pkgProblemResolver with broken count: 0 161s Starting 2 pkgProblemResolver with broken count: 0 161s Done 161s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 162s autopkgtest [00:31:49]: test 0002-benchmark: [----------------------- 168s PING_INLINE: rps=0.0 (overall: nan) avg_msec=nan (overall: nan) ====== PING_INLINE ====== 168s 100000 requests completed in 0.12 seconds 168s 50 parallel clients 168s 3 bytes payload 168s keep alive: 1 168s host configuration "save": 3600 1 300 100 60 10000 168s host configuration "appendonly": no 168s multi-thread: no 168s 168s Latency by percentile distribution: 168s 0.000% <= 0.135 milliseconds (cumulative count 20) 168s 50.000% <= 0.463 milliseconds (cumulative count 50130) 168s 75.000% <= 0.599 milliseconds (cumulative count 76130) 168s 87.500% <= 0.711 milliseconds (cumulative count 87910) 168s 93.750% <= 0.847 milliseconds (cumulative count 93810) 168s 96.875% <= 1.111 milliseconds (cumulative count 96890) 168s 98.438% <= 2.079 milliseconds (cumulative count 98440) 168s 99.219% <= 3.871 milliseconds (cumulative count 99230) 168s 99.609% <= 4.071 milliseconds (cumulative count 99630) 168s 99.805% <= 5.399 milliseconds (cumulative count 99810) 168s 99.902% <= 5.495 milliseconds (cumulative count 99910) 168s 99.951% <= 6.303 milliseconds (cumulative count 99960) 168s 99.976% <= 6.319 milliseconds (cumulative count 99980) 168s 99.988% <= 6.335 milliseconds (cumulative count 99990) 168s 99.994% <= 6.431 milliseconds (cumulative count 100000) 168s 100.000% <= 6.431 milliseconds (cumulative count 100000) 168s 168s Cumulative distribution 
of latencies: 168s 0.000% <= 0.103 milliseconds (cumulative count 0) 168s 2.260% <= 0.207 milliseconds (cumulative count 2260) 168s 13.520% <= 0.303 milliseconds (cumulative count 13520) 168s 36.520% <= 0.407 milliseconds (cumulative count 36520) 168s 59.080% <= 0.503 milliseconds (cumulative count 59080) 168s 77.450% <= 0.607 milliseconds (cumulative count 77450) 168s 87.400% <= 0.703 milliseconds (cumulative count 87400) 168s 92.710% <= 0.807 milliseconds (cumulative count 92710) 168s 94.970% <= 0.903 milliseconds (cumulative count 94970) 168s 96.210% <= 1.007 milliseconds (cumulative count 96210) 168s 96.850% <= 1.103 milliseconds (cumulative count 96850) 168s 97.130% <= 1.207 milliseconds (cumulative count 97130) 168s 97.310% <= 1.303 milliseconds (cumulative count 97310) 168s 97.500% <= 1.407 milliseconds (cumulative count 97500) 168s 97.900% <= 1.503 milliseconds (cumulative count 97900) 168s 98.090% <= 1.607 milliseconds (cumulative count 98090) 168s 98.150% <= 1.703 milliseconds (cumulative count 98150) 168s 98.190% <= 1.807 milliseconds (cumulative count 98190) 168s 98.280% <= 1.903 milliseconds (cumulative count 98280) 168s 98.380% <= 2.007 milliseconds (cumulative count 98380) 168s 98.460% <= 2.103 milliseconds (cumulative count 98460) 168s 99.080% <= 3.103 milliseconds (cumulative count 99080) 168s 99.730% <= 4.103 milliseconds (cumulative count 99730) 168s 99.800% <= 5.103 milliseconds (cumulative count 99800) 168s 99.920% <= 6.103 milliseconds (cumulative count 99920) 168s 100.000% <= 7.103 milliseconds (cumulative count 100000) 168s 168s Summary: 168s throughput summary: 840336.12 requests per second 168s latency summary (msec): 168s avg min p50 p95 p99 max 168s 0.542 0.128 0.463 0.911 2.495 6.431 168s ====== PING_MBULK ====== 168s 100000 requests completed in 0.07 seconds 168s 50 parallel clients 168s 3 bytes payload 168s keep alive: 1 168s host configuration "save": 3600 1 300 100 60 10000 168s host configuration "appendonly": no 168s multi-thread: 
no 168s 168s Latency by percentile distribution: 168s 0.000% <= 0.135 milliseconds (cumulative count 20) 168s 50.000% <= 0.239 milliseconds (cumulative count 52830) 168s 75.000% <= 0.287 milliseconds (cumulative count 77500) 168s 87.500% <= 0.327 milliseconds (cumulative count 88320) 168s 93.750% <= 0.367 milliseconds (cumulative count 94330) 168s 96.875% <= 0.391 milliseconds (cumulative count 97000) 168s 98.438% <= 0.415 milliseconds (cumulative count 98450) 168s 99.219% <= 0.511 milliseconds (cumulative count 99240) 168s 99.609% <= 0.591 milliseconds (cumulative count 99620) 168s 99.805% <= 0.631 milliseconds (cumulative count 99810) 168s 99.902% <= 0.671 milliseconds (cumulative count 99920) 168s 99.951% <= 0.687 milliseconds (cumulative count 99970) 168s 99.976% <= 0.695 milliseconds (cumulative count 99980) 168s 99.988% <= 0.727 milliseconds (cumulative count 99990) 168s 99.994% <= 0.751 milliseconds (cumulative count 100000) 168s 100.000% <= 0.751 milliseconds (cumulative count 100000) 168s 168s Cumulative distribution of latencies: 168s 0.000% <= 0.103 milliseconds (cumulative count 0) 168s 32.170% <= 0.207 milliseconds (cumulative count 32170) 168s 82.900% <= 0.303 milliseconds (cumulative count 82900) 168s 98.220% <= 0.407 milliseconds (cumulative count 98220) 168s 99.190% <= 0.503 milliseconds (cumulative count 99190) 168s 99.670% <= 0.607 milliseconds (cumulative count 99670) 168s 99.980% <= 0.703 milliseconds (cumulative count 99980) 168s 100.000% <= 0.807 milliseconds (cumulative count 100000) 168s 168s Summary: 168s throughput summary: 1515151.50 requests per second 168s latency summary (msec): 168s avg min p50 p95 p99 max 168s 0.248 0.128 0.239 0.375 0.479 0.751 168s SET: rps=278960.0 (overall: 1106984.1) avg_msec=0.383 (overall: 0.383) ====== SET ====== 168s 100000 requests completed in 0.09 seconds 168s 50 parallel clients 168s 3 bytes payload 168s keep alive: 1 168s host configuration "save": 3600 1 300 100 60 10000 168s host configuration 
"appendonly": no 168s multi-thread: no 168s 168s Latency by percentile distribution: 168s 0.000% <= 0.135 milliseconds (cumulative count 50) 168s 50.000% <= 0.383 milliseconds (cumulative count 50750) 168s 75.000% <= 0.455 milliseconds (cumulative count 76840) 168s 87.500% <= 0.503 milliseconds (cumulative count 87520) 168s 93.750% <= 0.543 milliseconds (cumulative count 94570) 168s 96.875% <= 0.567 milliseconds (cumulative count 96930) 168s 98.438% <= 0.615 milliseconds (cumulative count 98540) 168s 99.219% <= 0.671 milliseconds (cumulative count 99220) 168s 99.609% <= 0.727 milliseconds (cumulative count 99650) 168s 99.805% <= 0.767 milliseconds (cumulative count 99810) 168s 99.902% <= 0.815 milliseconds (cumulative count 99920) 168s 99.951% <= 0.863 milliseconds (cumulative count 99980) 168s 99.988% <= 0.887 milliseconds (cumulative count 99990) 168s 99.994% <= 0.903 milliseconds (cumulative count 100000) 168s 100.000% <= 0.903 milliseconds (cumulative count 100000) 168s 168s Cumulative distribution of latencies: 168s 0.000% <= 0.103 milliseconds (cumulative count 0) 168s 2.360% <= 0.207 milliseconds (cumulative count 2360) 168s 18.100% <= 0.303 milliseconds (cumulative count 18100) 168s 60.730% <= 0.407 milliseconds (cumulative count 60730) 168s 87.520% <= 0.503 milliseconds (cumulative count 87520) 168s 98.420% <= 0.607 milliseconds (cumulative count 98420) 168s 99.460% <= 0.703 milliseconds (cumulative count 99460) 168s 99.900% <= 0.807 milliseconds (cumulative count 99900) 168s 100.000% <= 0.903 milliseconds (cumulative count 100000) 168s 168s Summary: 168s throughput summary: 1086956.50 requests per second 168s latency summary (msec): 168s avg min p50 p95 p99 max 168s 0.389 0.128 0.383 0.551 0.655 0.903 168s ====== GET ====== 168s 100000 requests completed in 0.08 seconds 168s 50 parallel clients 168s 3 bytes payload 168s keep alive: 1 168s host configuration "save": 3600 1 300 100 60 10000 168s host configuration "appendonly": no 168s multi-thread: no 168s 
168s Latency by percentile distribution: 168s 0.000% <= 0.127 milliseconds (cumulative count 10) 168s 50.000% <= 0.327 milliseconds (cumulative count 51300) 168s 75.000% <= 0.391 milliseconds (cumulative count 77390) 168s 87.500% <= 0.439 milliseconds (cumulative count 88990) 168s 93.750% <= 0.463 milliseconds (cumulative count 93990) 168s 96.875% <= 0.495 milliseconds (cumulative count 97320) 168s 98.438% <= 0.527 milliseconds (cumulative count 98690) 168s 99.219% <= 0.559 milliseconds (cumulative count 99250) 168s 99.609% <= 0.631 milliseconds (cumulative count 99630) 168s 99.805% <= 0.719 milliseconds (cumulative count 99810) 168s 99.902% <= 0.767 milliseconds (cumulative count 99910) 168s 99.951% <= 0.807 milliseconds (cumulative count 99960) 168s 99.976% <= 0.831 milliseconds (cumulative count 99980) 168s 99.988% <= 0.839 milliseconds (cumulative count 99990) 168s 99.994% <= 0.847 milliseconds (cumulative count 100000) 168s 100.000% <= 0.847 milliseconds (cumulative count 100000) 168s 168s Cumulative distribution of latencies: 168s 0.000% <= 0.103 milliseconds (cumulative count 0) 168s 6.420% <= 0.207 milliseconds (cumulative count 6420) 168s 38.350% <= 0.303 milliseconds (cumulative count 38350) 168s 81.510% <= 0.407 milliseconds (cumulative count 81510) 168s 97.770% <= 0.503 milliseconds (cumulative count 97770) 168s 99.540% <= 0.607 milliseconds (cumulative count 99540) 168s 99.790% <= 0.703 milliseconds (cumulative count 99790) 168s 99.960% <= 0.807 milliseconds (cumulative count 99960) 168s 100.000% <= 0.903 milliseconds (cumulative count 100000) 168s 168s Summary: 168s throughput summary: 1265822.75 requests per second 168s latency summary (msec): 168s avg min p50 p95 p99 max 168s 0.332 0.120 0.327 0.471 0.543 0.847 168s ====== INCR ====== 168s 100000 requests completed in 0.08 seconds 168s 50 parallel clients 168s 3 bytes payload 168s keep alive: 1 168s host configuration "save": 3600 1 300 100 60 10000 168s host configuration "appendonly": no 168s 
multi-thread: no 168s 168s Latency by percentile distribution: 168s 0.000% <= 0.111 milliseconds (cumulative count 40) 168s 50.000% <= 0.319 milliseconds (cumulative count 53900) 168s 75.000% <= 0.375 milliseconds (cumulative count 75770) 168s 87.500% <= 0.423 milliseconds (cumulative count 87870) 168s 93.750% <= 0.463 milliseconds (cumulative count 94450) 168s 96.875% <= 0.495 milliseconds (cumulative count 97120) 168s 98.438% <= 0.543 milliseconds (cumulative count 98450) 168s 99.219% <= 0.623 milliseconds (cumulative count 99290) 168s 99.609% <= 0.671 milliseconds (cumulative count 99630) 168s 99.805% <= 0.727 milliseconds (cumulative count 99820) 168s 99.902% <= 0.759 milliseconds (cumulative count 99930) 168s 99.951% <= 0.791 milliseconds (cumulative count 99960) 168s 99.976% <= 0.815 milliseconds (cumulative count 99980) 168s 99.988% <= 0.823 milliseconds (cumulative count 99990) 168s 99.994% <= 0.847 milliseconds (cumulative count 100000) 168s 100.000% <= 0.847 milliseconds (cumulative count 100000) 168s 168s Cumulative distribution of latencies: 168s 0.000% <= 0.103 milliseconds (cumulative count 0) 168s 10.350% <= 0.207 milliseconds (cumulative count 10350) 168s 46.020% <= 0.303 milliseconds (cumulative count 46020) 168s 84.390% <= 0.407 milliseconds (cumulative count 84390) 168s 97.420% <= 0.503 milliseconds (cumulative count 97420) 168s 99.140% <= 0.607 milliseconds (cumulative count 99140) 168s 99.760% <= 0.703 milliseconds (cumulative count 99760) 168s 99.970% <= 0.807 milliseconds (cumulative count 99970) 168s 100.000% <= 0.903 milliseconds (cumulative count 100000) 168s 168s Summary: 168s throughput summary: 1298701.25 requests per second 168s latency summary (msec): 168s avg min p50 p95 p99 max 168s 0.319 0.104 0.319 0.471 0.591 0.847 168s LPUSH: rps=262000.0 (overall: 1056451.6) avg_msec=0.407 (overall: 0.407) ====== LPUSH ====== 168s 100000 requests completed in 0.10 seconds 168s 50 parallel clients 168s 3 bytes payload 168s keep alive: 1 168s 
host configuration "save": 3600 1 300 100 60 10000 168s host configuration "appendonly": no 168s multi-thread: no 168s 168s Latency by percentile distribution: 168s 0.000% <= 0.135 milliseconds (cumulative count 30) 168s 50.000% <= 0.415 milliseconds (cumulative count 53000) 168s 75.000% <= 0.479 milliseconds (cumulative count 75330) 168s 87.500% <= 0.535 milliseconds (cumulative count 89190) 168s 93.750% <= 0.567 milliseconds (cumulative count 94270) 168s 96.875% <= 0.599 milliseconds (cumulative count 97040) 168s 98.438% <= 0.639 milliseconds (cumulative count 98590) 168s 99.219% <= 0.695 milliseconds (cumulative count 99250) 168s 99.609% <= 0.743 milliseconds (cumulative count 99610) 168s 99.805% <= 0.807 milliseconds (cumulative count 99810) 168s 99.902% <= 0.863 milliseconds (cumulative count 99920) 168s 99.951% <= 0.903 milliseconds (cumulative count 99960) 168s 99.976% <= 0.927 milliseconds (cumulative count 99980) 168s 99.988% <= 0.943 milliseconds (cumulative count 99990) 168s 99.994% <= 0.967 milliseconds (cumulative count 100000) 168s 100.000% <= 0.967 milliseconds (cumulative count 100000) 168s 168s Cumulative distribution of latencies: 168s 0.000% <= 0.103 milliseconds (cumulative count 0) 168s 2.140% <= 0.207 milliseconds (cumulative count 2140) 168s 12.040% <= 0.303 milliseconds (cumulative count 12040) 168s 49.850% <= 0.407 milliseconds (cumulative count 49850) 168s 81.790% <= 0.503 milliseconds (cumulative count 81790) 168s 97.440% <= 0.607 milliseconds (cumulative count 97440) 168s 99.300% <= 0.703 milliseconds (cumulative count 99300) 168s 99.810% <= 0.807 milliseconds (cumulative count 99810) 168s 99.960% <= 0.903 milliseconds (cumulative count 99960) 168s 100.000% <= 1.007 milliseconds (cumulative count 100000) 168s 168s Summary: 168s throughput summary: 1041666.69 requests per second 168s latency summary (msec): 168s avg min p50 p95 p99 max 168s 0.414 0.128 0.415 0.575 0.671 0.967 168s ====== RPUSH ====== 168s 100000 requests completed in 0.09 
seconds 168s 50 parallel clients 168s 3 bytes payload 168s keep alive: 1 168s host configuration "save": 3600 1 300 100 60 10000 168s host configuration "appendonly": no 168s multi-thread: no 168s 168s Latency by percentile distribution: 168s 0.000% <= 0.111 milliseconds (cumulative count 40) 168s 50.000% <= 0.375 milliseconds (cumulative count 53620) 168s 75.000% <= 0.431 milliseconds (cumulative count 75190) 168s 87.500% <= 0.479 milliseconds (cumulative count 87760) 168s 93.750% <= 0.511 milliseconds (cumulative count 94430) 168s 96.875% <= 0.543 milliseconds (cumulative count 97280) 168s 98.438% <= 0.575 milliseconds (cumulative count 98500) 168s 99.219% <= 0.631 milliseconds (cumulative count 99220) 168s 99.609% <= 0.703 milliseconds (cumulative count 99610) 168s 99.805% <= 0.759 milliseconds (cumulative count 99840) 168s 99.902% <= 0.791 milliseconds (cumulative count 99910) 168s 99.951% <= 0.823 milliseconds (cumulative count 99960) 168s 99.976% <= 0.839 milliseconds (cumulative count 99980) 168s 99.988% <= 0.855 milliseconds (cumulative count 99990) 168s 99.994% <= 0.863 milliseconds (cumulative count 100000) 168s 100.000% <= 0.863 milliseconds (cumulative count 100000) 168s 168s Cumulative distribution of latencies: 168s 0.000% <= 0.103 milliseconds (cumulative count 0) 168s 3.320% <= 0.207 milliseconds (cumulative count 3320) 168s 19.540% <= 0.303 milliseconds (cumulative count 19540) 168s 67.140% <= 0.407 milliseconds (cumulative count 67140) 168s 93.080% <= 0.503 milliseconds (cumulative count 93080) 168s 99.030% <= 0.607 milliseconds (cumulative count 99030) 168s 99.610% <= 0.703 milliseconds (cumulative count 99610) 168s 99.930% <= 0.807 milliseconds (cumulative count 99930) 168s 100.000% <= 0.903 milliseconds (cumulative count 100000) 168s 168s Summary: 168s throughput summary: 1149425.38 requests per second 168s latency summary (msec): 168s avg min p50 p95 p99 max 168s 0.375 0.104 0.375 0.519 0.607 0.863 168s ====== LPOP ====== 168s 100000 requests 
completed in 0.10 seconds 168s 50 parallel clients 168s 3 bytes payload 168s keep alive: 1 168s host configuration "save": 3600 1 300 100 60 10000 168s host configuration "appendonly": no 168s multi-thread: no 168s 168s Latency by percentile distribution: 168s 0.000% <= 0.143 milliseconds (cumulative count 20) 168s 50.000% <= 0.423 milliseconds (cumulative count 51980) 168s 75.000% <= 0.487 milliseconds (cumulative count 75490) 168s 87.500% <= 0.535 milliseconds (cumulative count 88090) 168s 93.750% <= 0.567 milliseconds (cumulative count 93920) 168s 96.875% <= 0.599 milliseconds (cumulative count 97120) 168s 98.438% <= 0.623 milliseconds (cumulative count 98470) 168s 99.219% <= 0.663 milliseconds (cumulative count 99240) 168s 99.609% <= 0.711 milliseconds (cumulative count 99650) 168s 99.805% <= 0.751 milliseconds (cumulative count 99810) 168s 99.902% <= 0.783 milliseconds (cumulative count 99920) 168s 99.951% <= 0.815 milliseconds (cumulative count 99960) 168s 99.976% <= 0.831 milliseconds (cumulative count 99980) 168s 99.988% <= 0.855 milliseconds (cumulative count 99990) 168s 99.994% <= 0.879 milliseconds (cumulative count 100000) 168s 100.000% <= 0.879 milliseconds (cumulative count 100000) 168s 168s Cumulative distribution of latencies: 168s 0.000% <= 0.103 milliseconds (cumulative count 0) 168s 1.220% <= 0.207 milliseconds (cumulative count 1220) 168s 7.860% <= 0.303 milliseconds (cumulative count 7860) 168s 44.970% <= 0.407 milliseconds (cumulative count 44970) 168s 80.180% <= 0.503 milliseconds (cumulative count 80180) 168s 97.710% <= 0.607 milliseconds (cumulative count 97710) 168s 99.600% <= 0.703 milliseconds (cumulative count 99600) 168s 99.950% <= 0.807 milliseconds (cumulative count 99950) 168s 100.000% <= 0.903 milliseconds (cumulative count 100000) 168s 168s Summary: 168s throughput summary: 1030927.81 requests per second 168s latency summary (msec): 168s avg min p50 p95 p99 max 168s 0.425 0.136 0.423 0.575 0.647 0.879 168s RPOP: rps=121792.8 
(overall: 1019000.0) avg_msec=0.413 (overall: 0.413) ====== RPOP ====== 168s 100000 requests completed in 0.10 seconds 168s 50 parallel clients 168s 3 bytes payload 168s keep alive: 1 168s host configuration "save": 3600 1 300 100 60 10000 168s host configuration "appendonly": no 168s multi-thread: no 168s 168s Latency by percentile distribution: 168s 0.000% <= 0.095 milliseconds (cumulative count 10) 168s 50.000% <= 0.415 milliseconds (cumulative count 52960) 168s 75.000% <= 0.495 milliseconds (cumulative count 76700) 168s 87.500% <= 0.551 milliseconds (cumulative count 88130) 168s 93.750% <= 0.607 milliseconds (cumulative count 93760) 168s 96.875% <= 0.687 milliseconds (cumulative count 97120) 168s 98.438% <= 0.743 milliseconds (cumulative count 98450) 168s 99.219% <= 0.815 milliseconds (cumulative count 99260) 168s 99.609% <= 0.975 milliseconds (cumulative count 99630) 168s 99.805% <= 1.063 milliseconds (cumulative count 99810) 168s 99.902% <= 1.143 milliseconds (cumulative count 99910) 168s 99.951% <= 1.191 milliseconds (cumulative count 99960) 168s 99.976% <= 1.207 milliseconds (cumulative count 99980) 168s 99.988% <= 1.215 milliseconds (cumulative count 99990) 168s 99.994% <= 1.223 milliseconds (cumulative count 100000) 168s 100.000% <= 1.223 milliseconds (cumulative count 100000) 168s 168s Cumulative distribution of latencies: 168s 0.020% <= 0.103 milliseconds (cumulative count 20) 168s 1.920% <= 0.207 milliseconds (cumulative count 1920) 168s 13.470% <= 0.303 milliseconds (cumulative count 13470) 168s 49.700% <= 0.407 milliseconds (cumulative count 49700) 168s 78.530% <= 0.503 milliseconds (cumulative count 78530) 168s 93.760% <= 0.607 milliseconds (cumulative count 93760) 168s 97.590% <= 0.703 milliseconds (cumulative count 97590) 168s 99.190% <= 0.807 milliseconds (cumulative count 99190) 168s 99.450% <= 0.903 milliseconds (cumulative count 99450) 168s 99.700% <= 1.007 milliseconds (cumulative count 99700) 168s 99.880% <= 1.103 milliseconds (cumulative 
count 99880) 168s 99.980% <= 1.207 milliseconds (cumulative count 99980) 168s 100.000% <= 1.303 milliseconds (cumulative count 100000) 168s 168s Summary: 168s throughput summary: 1010101.00 requests per second 168s latency summary (msec): 168s avg min p50 p95 p99 max 168s 0.423 0.088 0.415 0.631 0.791 1.223 168s ====== SADD ====== 168s 100000 requests completed in 0.09 seconds 168s 50 parallel clients 168s 3 bytes payload 168s keep alive: 1 168s host configuration "save": 3600 1 300 100 60 10000 168s host configuration "appendonly": no 168s multi-thread: no 168s 168s Latency by percentile distribution: 168s 0.000% <= 0.111 milliseconds (cumulative count 10) 168s 50.000% <= 0.351 milliseconds (cumulative count 51790) 168s 75.000% <= 0.415 milliseconds (cumulative count 75210) 168s 87.500% <= 0.471 milliseconds (cumulative count 87900) 168s 93.750% <= 0.511 milliseconds (cumulative count 93900) 168s 96.875% <= 0.607 milliseconds (cumulative count 96900) 168s 98.438% <= 0.951 milliseconds (cumulative count 98520) 168s 99.219% <= 1.303 milliseconds (cumulative count 99220) 168s 99.609% <= 1.439 milliseconds (cumulative count 99610) 168s 99.805% <= 1.591 milliseconds (cumulative count 99810) 168s 99.902% <= 1.855 milliseconds (cumulative count 99920) 168s 99.951% <= 1.919 milliseconds (cumulative count 99960) 168s 99.976% <= 1.943 milliseconds (cumulative count 99980) 168s 99.988% <= 2.215 milliseconds (cumulative count 99990) 168s 99.994% <= 2.231 milliseconds (cumulative count 100000) 168s 100.000% <= 2.231 milliseconds (cumulative count 100000) 168s 168s Cumulative distribution of latencies: 168s 0.000% <= 0.103 milliseconds (cumulative count 0) 168s 4.410% <= 0.207 milliseconds (cumulative count 4410) 168s 30.700% <= 0.303 milliseconds (cumulative count 30700) 168s 73.090% <= 0.407 milliseconds (cumulative count 73090) 168s 93.140% <= 0.503 milliseconds (cumulative count 93140) 168s 96.900% <= 0.607 milliseconds (cumulative count 96900) 168s 97.570% <= 0.703 
milliseconds (cumulative count 97570) 168s 97.900% <= 0.807 milliseconds (cumulative count 97900) 168s 98.100% <= 0.903 milliseconds (cumulative count 98100) 168s 98.760% <= 1.007 milliseconds (cumulative count 98760) 168s 98.940% <= 1.103 milliseconds (cumulative count 98940) 168s 99.120% <= 1.207 milliseconds (cumulative count 99120) 168s 99.220% <= 1.303 milliseconds (cumulative count 99220) 168s 99.550% <= 1.407 milliseconds (cumulative count 99550) 168s 99.740% <= 1.503 milliseconds (cumulative count 99740) 168s 99.810% <= 1.607 milliseconds (cumulative count 99810) 168s 99.840% <= 1.703 milliseconds (cumulative count 99840) 168s 99.880% <= 1.807 milliseconds (cumulative count 99880) 168s 99.950% <= 1.903 milliseconds (cumulative count 99950) 168s 99.980% <= 2.007 milliseconds (cumulative count 99980) 168s 100.000% <= 3.103 milliseconds (cumulative count 100000) 168s 168s Summary: 168s throughput summary: 1162790.62 requests per second 168s latency summary (msec): 168s avg min p50 p95 p99 max 168s 0.370 0.104 0.351 0.535 1.143 2.231 168s HSET: rps=373720.0 (overall: 1004623.6) avg_msec=0.423 (overall: 0.423) ====== HSET ====== 168s 100000 requests completed in 0.10 seconds 168s 50 parallel clients 168s 3 bytes payload 168s keep alive: 1 168s host configuration "save": 3600 1 300 100 60 10000 168s host configuration "appendonly": no 168s multi-thread: no 168s 168s Latency by percentile distribution: 168s 0.000% <= 0.135 milliseconds (cumulative count 20) 168s 50.000% <= 0.399 milliseconds (cumulative count 51390) 168s 75.000% <= 0.471 milliseconds (cumulative count 75370) 168s 87.500% <= 0.527 milliseconds (cumulative count 87520) 168s 93.750% <= 0.583 milliseconds (cumulative count 94300) 168s 96.875% <= 0.655 milliseconds (cumulative count 96920) 168s 98.438% <= 0.807 milliseconds (cumulative count 98500) 168s 99.219% <= 0.991 milliseconds (cumulative count 99230) 168s 99.609% <= 1.335 milliseconds (cumulative count 99610) 168s 99.805% <= 1.511 milliseconds 
(cumulative count 99810) 168s 99.902% <= 2.087 milliseconds (cumulative count 99910) 168s 99.951% <= 2.119 milliseconds (cumulative count 99960) 168s 99.976% <= 2.135 milliseconds (cumulative count 99980) 168s 99.988% <= 2.151 milliseconds (cumulative count 99990) 168s 99.994% <= 2.287 milliseconds (cumulative count 100000) 168s 100.000% <= 2.287 milliseconds (cumulative count 100000) 168s 168s Cumulative distribution of latencies: 168s 0.000% <= 0.103 milliseconds (cumulative count 0) 168s 1.120% <= 0.207 milliseconds (cumulative count 1120) 168s 12.780% <= 0.303 milliseconds (cumulative count 12780) 168s 54.360% <= 0.407 milliseconds (cumulative count 54360) 168s 82.750% <= 0.503 milliseconds (cumulative count 82750) 168s 95.520% <= 0.607 milliseconds (cumulative count 95520) 168s 97.590% <= 0.703 milliseconds (cumulative count 97590) 168s 98.500% <= 0.807 milliseconds (cumulative count 98500) 168s 99.010% <= 0.903 milliseconds (cumulative count 99010) 168s 99.310% <= 1.007 milliseconds (cumulative count 99310) 168s 99.430% <= 1.103 milliseconds (cumulative count 99430) 168s 99.490% <= 1.207 milliseconds (cumulative count 99490) 168s 99.570% <= 1.303 milliseconds (cumulative count 99570) 168s 99.640% <= 1.407 milliseconds (cumulative count 99640) 168s 99.800% <= 1.503 milliseconds (cumulative count 99800) 168s 99.840% <= 1.607 milliseconds (cumulative count 99840) 168s 99.940% <= 2.103 milliseconds (cumulative count 99940) 168s 100.000% <= 3.103 milliseconds (cumulative count 100000) 168s 168s Summary: 168s throughput summary: 1020408.19 requests per second 168s latency summary (msec): 168s avg min p50 p95 p99 max 168s 0.418 0.128 0.399 0.599 0.903 2.287 168s ====== SPOP ====== 168s 100000 requests completed in 0.08 seconds 168s 50 parallel clients 168s 3 bytes payload 168s keep alive: 1 168s host configuration "save": 3600 1 300 100 60 10000 168s host configuration "appendonly": no 168s multi-thread: no 168s 168s Latency by percentile distribution: 168s 0.000% 
<= 0.119 milliseconds (cumulative count 10) 168s 50.000% <= 0.319 milliseconds (cumulative count 51800) 168s 75.000% <= 0.375 milliseconds (cumulative count 75330) 168s 87.500% <= 0.423 milliseconds (cumulative count 88910) 168s 93.750% <= 0.447 milliseconds (cumulative count 94540) 168s 96.875% <= 0.479 milliseconds (cumulative count 97140) 168s 98.438% <= 0.511 milliseconds (cumulative count 98490) 168s 99.219% <= 0.575 milliseconds (cumulative count 99240) 168s 99.609% <= 0.631 milliseconds (cumulative count 99620) 168s 99.805% <= 0.687 milliseconds (cumulative count 99820) 168s 99.902% <= 0.727 milliseconds (cumulative count 99920) 168s 99.951% <= 0.751 milliseconds (cumulative count 99960) 168s 99.976% <= 0.767 milliseconds (cumulative count 99990) 168s 99.994% <= 0.783 milliseconds (cumulative count 100000) 168s 100.000% <= 0.783 milliseconds (cumulative count 100000) 168s 168s Cumulative distribution of latencies: 168s 0.000% <= 0.103 milliseconds (cumulative count 0) 168s 4.820% <= 0.207 milliseconds (cumulative count 4820) 168s 40.920% <= 0.303 milliseconds (cumulative count 40920) 168s 84.550% <= 0.407 milliseconds (cumulative count 84550) 168s 98.310% <= 0.503 milliseconds (cumulative count 98310) 168s 99.480% <= 0.607 milliseconds (cumulative count 99480) 168s 99.860% <= 0.703 milliseconds (cumulative count 99860) 168s 100.000% <= 0.807 milliseconds (cumulative count 100000) 168s 168s Summary: 168s throughput summary: 1298701.25 requests per second 169s latency summary (msec): 169s avg min p50 p95 p99 max 169s 0.328 0.112 0.319 0.455 0.543 0.783 169s ====== ZADD ====== 169s 100000 requests completed in 0.10 seconds 169s 50 parallel clients 169s 3 bytes payload 169s keep alive: 1 169s host configuration "save": 3600 1 300 100 60 10000 169s host configuration "appendonly": no 169s multi-thread: no 169s 169s Latency by percentile distribution: 169s 0.000% <= 0.143 milliseconds (cumulative count 10) 169s 50.000% <= 0.423 milliseconds (cumulative count 
50920) 169s 75.000% <= 0.495 milliseconds (cumulative count 75950) 169s 87.500% <= 0.543 milliseconds (cumulative count 88130) 169s 93.750% <= 0.583 milliseconds (cumulative count 94400) 169s 96.875% <= 0.623 milliseconds (cumulative count 96980) 169s 98.438% <= 0.679 milliseconds (cumulative count 98530) 169s 99.219% <= 0.743 milliseconds (cumulative count 99220) 169s 99.609% <= 0.823 milliseconds (cumulative count 99630) 169s 99.805% <= 0.871 milliseconds (cumulative count 99830) 169s 99.902% <= 0.903 milliseconds (cumulative count 99920) 169s 99.951% <= 0.943 milliseconds (cumulative count 99960) 169s 99.976% <= 0.975 milliseconds (cumulative count 99980) 169s 99.988% <= 0.991 milliseconds (cumulative count 99990) 169s 99.994% <= 1.079 milliseconds (cumulative count 100000) 169s 100.000% <= 1.079 milliseconds (cumulative count 100000) 169s 169s Cumulative distribution of latencies: 169s 0.000% <= 0.103 milliseconds (cumulative count 0) 169s 0.930% <= 0.207 milliseconds (cumulative count 930) 169s 6.820% <= 0.303 milliseconds (cumulative count 6820) 169s 44.160% <= 0.407 milliseconds (cumulative count 44160) 169s 78.100% <= 0.503 milliseconds (cumulative count 78100) 169s 96.120% <= 0.607 milliseconds (cumulative count 96120) 169s 98.950% <= 0.703 milliseconds (cumulative count 98950) 169s 99.530% <= 0.807 milliseconds (cumulative count 99530) 169s 99.920% <= 0.903 milliseconds (cumulative count 99920) 169s 99.990% <= 1.007 milliseconds (cumulative count 99990) 169s 100.000% <= 1.103 milliseconds (cumulative count 100000) 169s 169s Summary: 169s throughput summary: 1010101.00 requests per second 169s latency summary (msec): 169s avg min p50 p95 p99 max 169s 0.432 0.136 0.423 0.591 0.711 1.079 169s ZPOPMIN: rps=347600.0 (overall: 1297014.9) avg_msec=0.327 (overall: 0.327) ====== ZPOPMIN ====== 169s 100000 requests completed in 0.08 seconds 169s 50 parallel clients 169s 3 bytes payload 169s keep alive: 1 169s host configuration "save": 3600 1 300 100 60 10000 169s 
host configuration "appendonly": no 169s multi-thread: no 169s 169s Latency by percentile distribution: 169s 0.000% <= 0.111 milliseconds (cumulative count 10) 169s 50.000% <= 0.319 milliseconds (cumulative count 53470) 169s 75.000% <= 0.383 milliseconds (cumulative count 75830) 169s 87.500% <= 0.439 milliseconds (cumulative count 88370) 169s 93.750% <= 0.487 milliseconds (cumulative count 93860) 169s 96.875% <= 0.543 milliseconds (cumulative count 96960) 169s 98.438% <= 0.607 milliseconds (cumulative count 98640) 169s 99.219% <= 0.647 milliseconds (cumulative count 99230) 169s 99.609% <= 0.687 milliseconds (cumulative count 99650) 169s 99.805% <= 0.711 milliseconds (cumulative count 99810) 169s 99.902% <= 0.735 milliseconds (cumulative count 99920) 169s 99.951% <= 0.751 milliseconds (cumulative count 99970) 169s 99.976% <= 0.759 milliseconds (cumulative count 99980) 169s 99.988% <= 0.767 milliseconds (cumulative count 99990) 169s 99.994% <= 0.775 milliseconds (cumulative count 100000) 169s 100.000% <= 0.775 milliseconds (cumulative count 100000) 169s 169s Cumulative distribution of latencies: 169s 0.000% <= 0.103 milliseconds (cumulative count 0) 169s 9.440% <= 0.207 milliseconds (cumulative count 9440) 169s 45.630% <= 0.303 milliseconds (cumulative count 45630) 169s 81.840% <= 0.407 milliseconds (cumulative count 81840) 169s 95.000% <= 0.503 milliseconds (cumulative count 95000) 169s 98.640% <= 0.607 milliseconds (cumulative count 98640) 169s 99.780% <= 0.703 milliseconds (cumulative count 99780) 169s 100.000% <= 0.807 milliseconds (cumulative count 100000) 169s 169s Summary: 169s throughput summary: 1298701.25 requests per second 169s latency summary (msec): 169s avg min p50 p95 p99 max 169s 0.326 0.104 0.319 0.503 0.631 0.775 169s ====== LPUSH (needed to benchmark LRANGE) ====== 169s 100000 requests completed in 0.11 seconds 169s 50 parallel clients 169s 3 bytes payload 169s keep alive: 1 169s host configuration "save": 3600 1 300 100 60 10000 169s host 
configuration "appendonly": no 169s multi-thread: no 169s 169s Latency by percentile distribution: 169s 0.000% <= 0.127 milliseconds (cumulative count 30) 169s 50.000% <= 0.423 milliseconds (cumulative count 51840) 169s 75.000% <= 0.503 milliseconds (cumulative count 75770) 169s 87.500% <= 0.567 milliseconds (cumulative count 87760) 169s 93.750% <= 0.671 milliseconds (cumulative count 93840) 169s 96.875% <= 1.023 milliseconds (cumulative count 96900) 169s 98.438% <= 1.687 milliseconds (cumulative count 98540) 169s 99.219% <= 2.111 milliseconds (cumulative count 99220) 169s 99.609% <= 4.559 milliseconds (cumulative count 99640) 169s 99.805% <= 4.703 milliseconds (cumulative count 99810) 169s 99.902% <= 4.919 milliseconds (cumulative count 99910) 169s 99.951% <= 5.887 milliseconds (cumulative count 99960) 169s 99.976% <= 5.919 milliseconds (cumulative count 99980) 169s 99.988% <= 6.543 milliseconds (cumulative count 99990) 169s 99.994% <= 6.559 milliseconds (cumulative count 100000) 169s 100.000% <= 6.559 milliseconds (cumulative count 100000) 169s 169s Cumulative distribution of latencies: 169s 0.000% <= 0.103 milliseconds (cumulative count 0) 169s 1.540% <= 0.207 milliseconds (cumulative count 1540) 169s 10.270% <= 0.303 milliseconds (cumulative count 10270) 169s 45.890% <= 0.407 milliseconds (cumulative count 45890) 169s 75.770% <= 0.503 milliseconds (cumulative count 75770) 169s 91.630% <= 0.607 milliseconds (cumulative count 91630) 169s 94.340% <= 0.703 milliseconds (cumulative count 94340) 169s 95.550% <= 0.807 milliseconds (cumulative count 95550) 169s 96.110% <= 0.903 milliseconds (cumulative count 96110) 169s 96.800% <= 1.007 milliseconds (cumulative count 96800) 169s 97.390% <= 1.103 milliseconds (cumulative count 97390) 169s 97.620% <= 1.207 milliseconds (cumulative count 97620) 169s 97.680% <= 1.303 milliseconds (cumulative count 97680) 169s 97.690% <= 1.503 milliseconds (cumulative count 97690) 169s 98.250% <= 1.607 milliseconds (cumulative count 98250) 
169s 98.590% <= 1.703 milliseconds (cumulative count 98590) 169s 98.890% <= 1.807 milliseconds (cumulative count 98890) 169s 98.960% <= 1.903 milliseconds (cumulative count 98960) 169s 98.990% <= 2.007 milliseconds (cumulative count 98990) 169s 99.190% <= 2.103 milliseconds (cumulative count 99190) 169s 99.480% <= 3.103 milliseconds (cumulative count 99480) 169s 99.500% <= 4.103 milliseconds (cumulative count 99500) 169s 99.910% <= 5.103 milliseconds (cumulative count 99910) 169s 99.980% <= 6.103 milliseconds (cumulative count 99980) 169s 100.000% <= 7.103 milliseconds (cumulative count 100000) 169s 169s Summary: 169s throughput summary: 917431.19 requests per second 169s latency summary (msec): 169s avg min p50 p95 p99 max 169s 0.482 0.120 0.423 0.767 2.031 6.559 170s LRANGE_100 (first 100 elements): rps=70199.2 (overall: 135538.5) avg_msec=2.766 (overall: 2.766) LRANGE_100 (first 100 elements): rps=136892.4 (overall: 136430.4) avg_msec=2.840 (overall: 2.815) LRANGE_100 (first 100 elements): rps=139560.0 (overall: 137670.4) avg_msec=2.708 (overall: 2.772) ====== LRANGE_100 (first 100 elements) ====== 170s 100000 requests completed in 0.72 seconds 170s 50 parallel clients 170s 3 bytes payload 170s keep alive: 1 170s host configuration "save": 3600 1 300 100 60 10000 170s host configuration "appendonly": no 170s multi-thread: no 170s 170s Latency by percentile distribution: 170s 0.000% <= 0.199 milliseconds (cumulative count 10) 170s 50.000% <= 2.615 milliseconds (cumulative count 50150) 170s 75.000% <= 3.135 milliseconds (cumulative count 75240) 170s 87.500% <= 3.671 milliseconds (cumulative count 87550) 170s 93.750% <= 4.335 milliseconds (cumulative count 93770) 170s 96.875% <= 4.775 milliseconds (cumulative count 96890) 170s 98.438% <= 5.407 milliseconds (cumulative count 98480) 170s 99.219% <= 6.159 milliseconds (cumulative count 99220) 170s 99.609% <= 6.615 milliseconds (cumulative count 99610) 170s 99.805% <= 6.999 milliseconds (cumulative count 99810) 170s 
99.902% <= 7.479 milliseconds (cumulative count 99910) 170s 99.951% <= 7.967 milliseconds (cumulative count 99960) 170s 99.976% <= 8.327 milliseconds (cumulative count 99980) 170s 99.988% <= 8.535 milliseconds (cumulative count 99990) 170s 99.994% <= 9.367 milliseconds (cumulative count 100000) 170s 100.000% <= 9.367 milliseconds (cumulative count 100000) 170s 170s Cumulative distribution of latencies: 170s 0.000% <= 0.103 milliseconds (cumulative count 0) 170s 0.010% <= 0.207 milliseconds (cumulative count 10) 170s 0.020% <= 0.407 milliseconds (cumulative count 20) 170s 0.030% <= 0.703 milliseconds (cumulative count 30) 170s 0.050% <= 0.807 milliseconds (cumulative count 50) 170s 0.100% <= 0.903 milliseconds (cumulative count 100) 170s 0.160% <= 1.007 milliseconds (cumulative count 160) 170s 0.280% <= 1.103 milliseconds (cumulative count 280) 170s 0.580% <= 1.207 milliseconds (cumulative count 580) 170s 1.260% <= 1.303 milliseconds (cumulative count 1260) 170s 2.440% <= 1.407 milliseconds (cumulative count 2440) 170s 3.970% <= 1.503 milliseconds (cumulative count 3970) 170s 5.830% <= 1.607 milliseconds (cumulative count 5830) 170s 7.750% <= 1.703 milliseconds (cumulative count 7750) 170s 10.700% <= 1.807 milliseconds (cumulative count 10700) 170s 14.230% <= 1.903 milliseconds (cumulative count 14230) 170s 18.600% <= 2.007 milliseconds (cumulative count 18600) 170s 23.080% <= 2.103 milliseconds (cumulative count 23080) 170s 74.060% <= 3.103 milliseconds (cumulative count 74060) 170s 91.970% <= 4.103 milliseconds (cumulative count 91970) 170s 97.990% <= 5.103 milliseconds (cumulative count 97990) 170s 99.170% <= 6.103 milliseconds (cumulative count 99170) 170s 99.820% <= 7.103 milliseconds (cumulative count 99820) 170s 99.960% <= 8.103 milliseconds (cumulative count 99960) 170s 99.990% <= 9.103 milliseconds (cumulative count 99990) 170s 100.000% <= 10.103 milliseconds (cumulative count 100000) 170s 170s Summary: 170s throughput summary: 138888.89 requests per second 
170s latency summary (msec): 170s avg min p50 p95 p99 max 170s 2.745 0.192 2.615 4.503 5.911 9.367 173s LRANGE_300 (first 300 elements): rps=18398.4 (overall: 28683.2) avg_msec=10.573 (overall: 10.573) LRANGE_300 (first 300 elements): rps=34768.6 (overall: 32413.5) avg_msec=7.918 (overall: 8.827) LRANGE_300 (first 300 elements): rps=26741.0 (overall: 30278.9) avg_msec=11.590 (overall: 9.745) LRANGE_300 (first 300 elements): rps=29016.0 (overall: 29934.6) avg_msec=10.630 (overall: 9.979) LRANGE_300 (first 300 elements): rps=32771.7 (overall: 30550.0) avg_msec=9.327 (overall: 9.827) LRANGE_300 (first 300 elements): rps=21952.0 (overall: 29037.3) avg_msec=14.637 (overall: 10.467) LRANGE_300 (first 300 elements): rps=27652.2 (overall: 28828.0) avg_msec=10.180 (overall: 10.425) LRANGE_300 (first 300 elements): rps=31233.2 (overall: 29143.7) avg_msec=9.262 (overall: 10.262) LRANGE_300 (first 300 elements): rps=26600.0 (overall: 28851.6) avg_msec=11.244 (overall: 10.366) LRANGE_300 (first 300 elements): rps=32007.9 (overall: 29179.1) avg_msec=9.556 (overall: 10.274) LRANGE_300 (first 300 elements): rps=35203.2 (overall: 29743.3) avg_msec=7.692 (overall: 9.987) LRANGE_300 (first 300 elements): rps=34326.7 (overall: 30135.8) avg_msec=8.109 (overall: 9.804) LRANGE_300 (first 300 elements): rps=28023.7 (overall: 29968.0) avg_msec=11.167 (overall: 9.905) ====== LRANGE_300 (first 300 elements) ====== 173s 100000 requests completed in 3.37 seconds 173s 50 parallel clients 173s 3 bytes payload 173s keep alive: 1 173s host configuration "save": 3600 1 300 100 60 10000 173s host configuration "appendonly": no 173s multi-thread: no 173s 173s Latency by percentile distribution: 173s 0.000% <= 0.247 milliseconds (cumulative count 10) 173s 50.000% <= 9.487 milliseconds (cumulative count 50040) 173s 75.000% <= 13.079 milliseconds (cumulative count 75010) 173s 87.500% <= 15.559 milliseconds (cumulative count 87530) 173s 93.750% <= 17.503 milliseconds (cumulative count 93770) 173s 96.875% 
<= 19.071 milliseconds (cumulative count 96890) 173s 98.438% <= 20.495 milliseconds (cumulative count 98450) 173s 99.219% <= 21.551 milliseconds (cumulative count 99220) 173s 99.609% <= 22.543 milliseconds (cumulative count 99610) 173s 99.805% <= 24.495 milliseconds (cumulative count 99810) 173s 99.902% <= 25.215 milliseconds (cumulative count 99910) 173s 99.951% <= 25.663 milliseconds (cumulative count 99960) 173s 99.976% <= 25.807 milliseconds (cumulative count 99980) 173s 99.988% <= 25.887 milliseconds (cumulative count 99990) 173s 99.994% <= 25.951 milliseconds (cumulative count 100000) 173s 100.000% <= 25.951 milliseconds (cumulative count 100000) 173s 173s Cumulative distribution of latencies: 173s 0.000% <= 0.103 milliseconds (cumulative count 0) 173s 0.010% <= 0.303 milliseconds (cumulative count 10) 173s 0.020% <= 0.407 milliseconds (cumulative count 20) 173s 0.050% <= 0.503 milliseconds (cumulative count 50) 173s 0.080% <= 0.607 milliseconds (cumulative count 80) 173s 0.390% <= 0.703 milliseconds (cumulative count 390) 173s 0.600% <= 0.807 milliseconds (cumulative count 600) 173s 0.980% <= 0.903 milliseconds (cumulative count 980) 173s 1.260% <= 1.007 milliseconds (cumulative count 1260) 173s 1.580% <= 1.103 milliseconds (cumulative count 1580) 173s 1.850% <= 1.207 milliseconds (cumulative count 1850) 173s 2.120% <= 1.303 milliseconds (cumulative count 2120) 173s 2.410% <= 1.407 milliseconds (cumulative count 2410) 173s 2.620% <= 1.503 milliseconds (cumulative count 2620) 173s 2.850% <= 1.607 milliseconds (cumulative count 2850) 173s 3.030% <= 1.703 milliseconds (cumulative count 3030) 173s 3.190% <= 1.807 milliseconds (cumulative count 3190) 173s 3.300% <= 1.903 milliseconds (cumulative count 3300) 173s 3.410% <= 2.007 milliseconds (cumulative count 3410) 173s 3.550% <= 2.103 milliseconds (cumulative count 3550) 173s 4.550% <= 3.103 milliseconds (cumulative count 4550) 173s 6.400% <= 4.103 milliseconds (cumulative count 6400) 173s 10.530% <= 5.103 
milliseconds (cumulative count 10530) 173s 18.230% <= 6.103 milliseconds (cumulative count 18230) 173s 29.070% <= 7.103 milliseconds (cumulative count 29070) 173s 38.250% <= 8.103 milliseconds (cumulative count 38250) 173s 46.880% <= 9.103 milliseconds (cumulative count 46880) 173s 55.300% <= 10.103 milliseconds (cumulative count 55300) 173s 62.700% <= 11.103 milliseconds (cumulative count 62700) 173s 69.400% <= 12.103 milliseconds (cumulative count 69400) 173s 75.190% <= 13.103 milliseconds (cumulative count 75190) 173s 80.840% <= 14.103 milliseconds (cumulative count 80840) 173s 85.560% <= 15.103 milliseconds (cumulative count 85560) 173s 89.850% <= 16.103 milliseconds (cumulative count 89850) 173s 92.940% <= 17.103 milliseconds (cumulative count 92940) 173s 95.040% <= 18.111 milliseconds (cumulative count 95040) 173s 96.960% <= 19.103 milliseconds (cumulative count 96960) 173s 98.100% <= 20.111 milliseconds (cumulative count 98100) 173s 98.970% <= 21.103 milliseconds (cumulative count 98970) 173s 99.420% <= 22.111 milliseconds (cumulative count 99420) 173s 99.720% <= 23.103 milliseconds (cumulative count 99720) 173s 99.790% <= 24.111 milliseconds (cumulative count 99790) 173s 99.900% <= 25.103 milliseconds (cumulative count 99900) 173s 100.000% <= 26.111 milliseconds (cumulative count 100000) 173s 173s Summary: 173s throughput summary: 29664.79 requests per second 173s latency summary (msec): 173s avg min p50 p95 p99 max 173s 10.019 0.240 9.487 18.095 21.215 25.951 178s LRANGE_500 (first 500 elements): rps=2793.7 (overall: 11000.0) avg_msec=21.545 (overall: 21.545) LRANGE_500 (first 500 elements): rps=13776.5 (overall: 13219.4) avg_msec=21.838 (overall: 21.789) LRANGE_500 (first 500 elements): rps=11427.5 (overall: 12423.3) avg_msec=24.084 (overall: 22.727) LRANGE_500 (first 500 elements): rps=13547.6 (overall: 12766.3) avg_msec=21.352 (overall: 22.282) LRANGE_500 (first 500 elements): rps=13340.0 (overall: 12899.6) avg_msec=21.311 (overall: 22.048) LRANGE_500 
(first 500 elements): rps=13117.6 (overall: 12941.4) avg_msec=21.569 (overall: 21.955) LRANGE_500 (first 500 elements): rps=13090.2 (overall: 12965.3) avg_msec=23.085 (overall: 22.139) LRANGE_500 (first 500 elements): rps=13334.7 (overall: 13015.8) avg_msec=21.776 (overall: 22.088) LRANGE_500 (first 500 elements): rps=13631.0 (overall: 13090.0) avg_msec=21.540 (overall: 22.019) LRANGE_500 (first 500 elements): rps=13286.9 (overall: 13111.1) avg_msec=22.561 (overall: 22.078) LRANGE_500 (first 500 elements): rps=12877.5 (overall: 13088.3) avg_msec=21.454 (overall: 22.018) LRANGE_500 (first 500 elements): rps=17287.4 (overall: 13462.9) avg_msec=16.593 (overall: 21.397) LRANGE_500 (first 500 elements): rps=23972.5 (overall: 14326.9) avg_msec=8.829 (overall: 19.668) LRANGE_500 (first 500 elements): rps=24063.0 (overall: 15063.8) avg_msec=8.441 (overall: 18.311) LRANGE_500 (first 500 elements): rps=24134.4 (overall: 15699.6) avg_msec=8.465 (overall: 17.250) LRANGE_500 (first 500 elements): rps=23693.2 (overall: 16219.4) avg_msec=8.705 (overall: 16.438) LRANGE_500 (first 500 elements): rps=23580.4 (overall: 16675.6) avg_msec=8.663 (overall: 15.757) LRANGE_500 (first 500 elements): rps=23678.6 (overall: 17079.7) avg_msec=8.666 (overall: 15.189) LRANGE_500 (first 500 elements): rps=20428.6 (overall: 17262.4) avg_msec=12.835 (overall: 15.037) LRANGE_500 (first 500 elements): rps=22571.4 (overall: 17537.1) avg_msec=11.436 (overall: 14.798) LRANGE_500 (first 500 elements): rps=21188.0 (overall: 17715.3) avg_msec=12.123 (overall: 14.641) LRANGE_500 (first 500 elements): rps=20043.3 (overall: 17825.3) avg_msec=13.041 (overall: 14.556) ====== LRANGE_500 (first 500 elements) ======
178s 100000 requests completed in 5.58 seconds
178s 50 parallel clients
178s 3 bytes payload
178s keep alive: 1
178s host configuration "save": 3600 1 300 100 60 10000
178s host configuration "appendonly": no
178s multi-thread: no
178s 
178s Latency by percentile distribution:
178s 0.000% <= 0.391
milliseconds (cumulative count 10)
178s 50.000% <= 11.367 milliseconds (cumulative count 50050)
178s 75.000% <= 20.111 milliseconds (cumulative count 75010)
178s 87.500% <= 24.015 milliseconds (cumulative count 87500)
178s 93.750% <= 28.287 milliseconds (cumulative count 93760)
178s 96.875% <= 29.503 milliseconds (cumulative count 96920)
178s 98.438% <= 30.543 milliseconds (cumulative count 98450)
178s 99.219% <= 32.255 milliseconds (cumulative count 99220)
178s 99.609% <= 34.111 milliseconds (cumulative count 99620)
178s 99.805% <= 34.975 milliseconds (cumulative count 99810)
178s 99.902% <= 36.191 milliseconds (cumulative count 99910)
178s 99.951% <= 36.799 milliseconds (cumulative count 99960)
178s 99.976% <= 37.215 milliseconds (cumulative count 99980)
178s 99.988% <= 37.439 milliseconds (cumulative count 99990)
178s 99.994% <= 37.663 milliseconds (cumulative count 100000)
178s 100.000% <= 37.663 milliseconds (cumulative count 100000)
178s 
178s Cumulative distribution of latencies:
178s 0.000% <= 0.103 milliseconds (cumulative count 0)
178s 0.010% <= 0.407 milliseconds (cumulative count 10)
178s 0.030% <= 0.703 milliseconds (cumulative count 30)
178s 0.060% <= 0.807 milliseconds (cumulative count 60)
178s 0.240% <= 0.903 milliseconds (cumulative count 240)
178s 0.470% <= 1.007 milliseconds (cumulative count 470)
178s 0.570% <= 1.103 milliseconds (cumulative count 570)
178s 0.590% <= 1.207 milliseconds (cumulative count 590)
178s 0.650% <= 1.303 milliseconds (cumulative count 650)
178s 0.690% <= 1.407 milliseconds (cumulative count 690)
178s 0.700% <= 1.503 milliseconds (cumulative count 700)
178s 0.770% <= 1.607 milliseconds (cumulative count 770)
178s 0.820% <= 1.703 milliseconds (cumulative count 820)
178s 0.840% <= 1.807 milliseconds (cumulative count 840)
178s 1.000% <= 1.903 milliseconds (cumulative count 1000)
178s 1.080% <= 2.007 milliseconds (cumulative count 1080)
178s 1.170% <= 2.103 milliseconds (cumulative count 1170)
178s 1.710% <= 3.103 milliseconds (cumulative count 1710)
178s 2.230% <= 4.103 milliseconds (cumulative count 2230)
178s 3.270% <= 5.103 milliseconds (cumulative count 3270)
178s 7.540% <= 6.103 milliseconds (cumulative count 7540)
178s 9.870% <= 7.103 milliseconds (cumulative count 9870)
178s 14.830% <= 8.103 milliseconds (cumulative count 14830)
178s 27.330% <= 9.103 milliseconds (cumulative count 27330)
178s 42.580% <= 10.103 milliseconds (cumulative count 42580)
178s 48.990% <= 11.103 milliseconds (cumulative count 48990)
178s 52.900% <= 12.103 milliseconds (cumulative count 52900)
178s 55.600% <= 13.103 milliseconds (cumulative count 55600)
178s 57.210% <= 14.103 milliseconds (cumulative count 57210)
178s 58.900% <= 15.103 milliseconds (cumulative count 58900)
178s 60.190% <= 16.103 milliseconds (cumulative count 60190)
178s 62.430% <= 17.103 milliseconds (cumulative count 62430)
178s 65.990% <= 18.111 milliseconds (cumulative count 65990)
178s 70.450% <= 19.103 milliseconds (cumulative count 70450)
178s 75.010% <= 20.111 milliseconds (cumulative count 75010)
178s 79.460% <= 21.103 milliseconds (cumulative count 79460)
178s 83.360% <= 22.111 milliseconds (cumulative count 83360)
178s 86.120% <= 23.103 milliseconds (cumulative count 86120)
178s 87.600% <= 24.111 milliseconds (cumulative count 87600)
178s 88.900% <= 25.103 milliseconds (cumulative count 88900)
178s 90.300% <= 26.111 milliseconds (cumulative count 90300)
178s 91.790% <= 27.103 milliseconds (cumulative count 91790)
178s 93.430% <= 28.111 milliseconds (cumulative count 93430)
178s 95.910% <= 29.103 milliseconds (cumulative count 95910)
178s 98.060% <= 30.111 milliseconds (cumulative count 98060)
178s 98.750% <= 31.103 milliseconds (cumulative count 98750)
178s 99.190% <= 32.111 milliseconds (cumulative count 99190)
178s 99.390% <= 33.119 milliseconds (cumulative count 99390)
178s 99.620% <= 34.111 milliseconds (cumulative count 99620)
178s 99.830% <= 35.103 milliseconds (cumulative count 99830)
178s 99.890% <= 36.127 milliseconds (cumulative count 99890)
178s 99.970% <= 37.119 milliseconds (cumulative count 99970)
178s 100.000% <= 38.111 milliseconds (cumulative count 100000)
178s 
178s Summary:
178s   throughput summary: 17934.00 requests per second
178s   latency summary (msec):
178s           avg       min       p50       p95       p99       max
178s        14.507     0.384    11.367    28.783    31.567    37.663
186s LRANGE_600 (first 600 elements): rps=2071.7 (overall: 10612.2) avg_msec=25.401 (overall: 25.401) LRANGE_600 (first 600 elements): rps=13737.1 (overall: 13226.7) avg_msec=19.384 (overall: 20.173) LRANGE_600 (first 600 elements): rps=16191.2 (overall: 14577.1) avg_msec=16.694 (overall: 18.412) LRANGE_600 (first 600 elements): rps=15430.8 (overall: 14845.8) avg_msec=16.797 (overall: 17.884) LRANGE_600 (first 600 elements): rps=11988.1 (overall: 14161.8) avg_msec=21.589 (overall: 18.635) LRANGE_600 (first 600 elements): rps=11398.4 (overall: 13631.5) avg_msec=23.932 (overall: 19.485) LRANGE_600 (first 600 elements): rps=11468.3 (overall: 13282.1) avg_msec=23.999 (overall: 20.114) LRANGE_600 (first 600 elements): rps=11440.0 (overall: 13027.6) avg_msec=23.764 (overall: 20.557) LRANGE_600 (first 600 elements): rps=11357.1 (overall: 12823.5) avg_msec=23.996 (overall: 20.929) LRANGE_600 (first 600 elements): rps=11705.2 (overall: 12702.1) avg_msec=23.632 (overall: 21.199) LRANGE_600 (first 600 elements): rps=11610.9 (overall: 12593.0) avg_msec=23.995 (overall: 21.457) LRANGE_600 (first 600 elements): rps=11059.8 (overall: 12456.6) avg_msec=23.744 (overall: 21.638) LRANGE_600 (first 600 elements): rps=11905.9 (overall: 12410.9) avg_msec=24.191 (overall: 21.841) LRANGE_600 (first 600 elements): rps=10976.0 (overall: 12303.1) avg_msec=24.138 (overall: 21.995) LRANGE_600 (first 600 elements): rps=11940.2 (overall: 12277.6) avg_msec=24.263 (overall: 22.150) LRANGE_600 (first 600 elements): rps=11207.2 (overall: 12207.4) avg_msec=25.758 (overall: 22.367) LRANGE_600 (first 600 elements): rps=10517.9 (overall: 12103.5) avg_msec=25.642
(overall: 22.542) LRANGE_600 (first 600 elements): rps=12143.4 (overall: 12105.8) avg_msec=20.105 (overall: 22.400) LRANGE_600 (first 600 elements): rps=14964.0 (overall: 12261.8) avg_msec=17.184 (overall: 22.053) LRANGE_600 (first 600 elements): rps=16119.5 (overall: 12462.2) avg_msec=16.316 (overall: 21.667) LRANGE_600 (first 600 elements): rps=16326.8 (overall: 12655.3) avg_msec=16.272 (overall: 21.320) LRANGE_600 (first 600 elements): rps=16456.0 (overall: 12833.4) avg_msec=16.052 (overall: 21.003) LRANGE_600 (first 600 elements): rps=16103.6 (overall: 12980.3) avg_msec=17.136 (overall: 20.787) LRANGE_600 (first 600 elements): rps=16735.2 (overall: 13143.0) avg_msec=15.509 (overall: 20.496) LRANGE_600 (first 600 elements): rps=15729.1 (overall: 13249.6) avg_msec=17.268 (overall: 20.338) LRANGE_600 (first 600 elements): rps=16926.1 (overall: 13398.5) avg_msec=16.077 (overall: 20.120) LRANGE_600 (first 600 elements): rps=16564.0 (overall: 13518.4) avg_msec=16.773 (overall: 19.965) LRANGE_600 (first 600 elements): rps=16003.9 (overall: 13610.6) avg_msec=16.198 (overall: 19.801) LRANGE_600 (first 600 elements): rps=14239.0 (overall: 13632.8) avg_msec=19.301 (overall: 19.782) LRANGE_600 (first 600 elements): rps=10541.8 (overall: 13527.3) avg_msec=24.843 (overall: 19.917) ====== LRANGE_600 (first 600 elements) ======
186s 100000 requests completed in 7.38 seconds
186s 50 parallel clients
186s 3 bytes payload
186s keep alive: 1
186s host configuration "save": 3600 1 300 100 60 10000
186s host configuration "appendonly": no
186s multi-thread: no
186s 
186s Latency by percentile distribution:
186s 0.000% <= 0.447 milliseconds (cumulative count 10)
186s 50.000% <= 20.975 milliseconds (cumulative count 50020)
186s 75.000% <= 24.975 milliseconds (cumulative count 75000)
186s 87.500% <= 27.983 milliseconds (cumulative count 87520)
186s 93.750% <= 30.239 milliseconds (cumulative count 93760)
186s 96.875% <= 32.927 milliseconds (cumulative count 96890)
186s 98.438% <= 33.823 milliseconds (cumulative count 98460)
186s 99.219% <= 34.815 milliseconds (cumulative count 99220)
186s 99.609% <= 36.799 milliseconds (cumulative count 99620)
186s 99.805% <= 37.663 milliseconds (cumulative count 99810)
186s 99.902% <= 38.367 milliseconds (cumulative count 99910)
186s 99.951% <= 38.943 milliseconds (cumulative count 99960)
186s 99.976% <= 39.295 milliseconds (cumulative count 99980)
186s 99.988% <= 42.719 milliseconds (cumulative count 99990)
186s 99.994% <= 42.975 milliseconds (cumulative count 100000)
186s 100.000% <= 42.975 milliseconds (cumulative count 100000)
186s 
186s Cumulative distribution of latencies:
186s 0.000% <= 0.103 milliseconds (cumulative count 0)
186s 0.010% <= 0.503 milliseconds (cumulative count 10)
186s 0.020% <= 0.703 milliseconds (cumulative count 20)
186s 0.030% <= 0.807 milliseconds (cumulative count 30)
186s 0.680% <= 0.903 milliseconds (cumulative count 680)
186s 0.750% <= 1.007 milliseconds (cumulative count 750)
186s 0.820% <= 1.103 milliseconds (cumulative count 820)
186s 1.260% <= 1.207 milliseconds (cumulative count 1260)
186s 1.600% <= 1.303 milliseconds (cumulative count 1600)
186s 1.760% <= 1.407 milliseconds (cumulative count 1760)
186s 1.980% <= 1.503 milliseconds (cumulative count 1980)
186s 2.280% <= 1.607 milliseconds (cumulative count 2280)
186s 2.440% <= 1.703 milliseconds (cumulative count 2440)
186s 2.680% <= 1.807 milliseconds (cumulative count 2680)
186s 2.780% <= 1.903 milliseconds (cumulative count 2780)
186s 2.900% <= 2.007 milliseconds (cumulative count 2900)
186s 2.970% <= 2.103 milliseconds (cumulative count 2970)
186s 3.260% <= 3.103 milliseconds (cumulative count 3260)
186s 3.540% <= 4.103 milliseconds (cumulative count 3540)
186s 4.110% <= 5.103 milliseconds (cumulative count 4110)
186s 4.510% <= 6.103 milliseconds (cumulative count 4510)
186s 4.980% <= 7.103 milliseconds (cumulative count 4980)
186s 5.530% <= 8.103 milliseconds (cumulative count 5530)
186s 7.150% <= 9.103 milliseconds (cumulative count 7150)
186s 10.000% <= 10.103 milliseconds (cumulative count 10000)
186s 13.190% <= 11.103 milliseconds (cumulative count 13190)
186s 16.430% <= 12.103 milliseconds (cumulative count 16430)
186s 19.260% <= 13.103 milliseconds (cumulative count 19260)
186s 22.060% <= 14.103 milliseconds (cumulative count 22060)
186s 25.000% <= 15.103 milliseconds (cumulative count 25000)
186s 28.200% <= 16.103 milliseconds (cumulative count 28200)
186s 31.970% <= 17.103 milliseconds (cumulative count 31970)
186s 36.050% <= 18.111 milliseconds (cumulative count 36050)
186s 39.990% <= 19.103 milliseconds (cumulative count 39990)
186s 44.480% <= 20.111 milliseconds (cumulative count 44480)
186s 50.920% <= 21.103 milliseconds (cumulative count 50920)
186s 58.390% <= 22.111 milliseconds (cumulative count 58390)
186s 65.330% <= 23.103 milliseconds (cumulative count 65330)
186s 70.410% <= 24.111 milliseconds (cumulative count 70410)
186s 75.670% <= 25.103 milliseconds (cumulative count 75670)
186s 80.700% <= 26.111 milliseconds (cumulative count 80700)
186s 84.730% <= 27.103 milliseconds (cumulative count 84730)
186s 87.850% <= 28.111 milliseconds (cumulative count 87850)
186s 90.670% <= 29.103 milliseconds (cumulative count 90670)
186s 93.460% <= 30.111 milliseconds (cumulative count 93460)
186s 95.320% <= 31.103 milliseconds (cumulative count 95320)
186s 95.900% <= 32.111 milliseconds (cumulative count 95900)
186s 97.160% <= 33.119 milliseconds (cumulative count 97160)
186s 98.920% <= 34.111 milliseconds (cumulative count 98920)
186s 99.260% <= 35.103 milliseconds (cumulative count 99260)
186s 99.480% <= 36.127 milliseconds (cumulative count 99480)
186s 99.690% <= 37.119 milliseconds (cumulative count 99690)
186s 99.870% <= 38.111 milliseconds (cumulative count 99870)
186s 99.960% <= 39.103 milliseconds (cumulative count 99960)
186s 99.980% <= 40.127 milliseconds (cumulative count 99980)
186s 100.000% <= 43.103 milliseconds (cumulative count 100000)
186s 
186s Summary:
186s   throughput summary: 13546.46 requests per second
186s   latency summary (msec):
186s           avg       min       p50       p95       p99       max
186s        19.932     0.440    20.975    30.895    34.207    42.975
186s MSET (10 keys): rps=232350.6 (overall: 263891.4) avg_msec=1.840 (overall: 1.840) ====== MSET (10 keys) ======
186s 100000 requests completed in 0.35 seconds
186s 50 parallel clients
186s 3 bytes payload
186s keep alive: 1
186s host configuration "save": 3600 1 300 100 60 10000
186s host configuration "appendonly": no
186s multi-thread: no
186s 
186s Latency by percentile distribution:
186s 0.000% <= 0.175 milliseconds (cumulative count 10)
186s 50.000% <= 1.599 milliseconds (cumulative count 51280)
186s 75.000% <= 1.679 milliseconds (cumulative count 76390)
186s 87.500% <= 1.927 milliseconds (cumulative count 87620)
186s 93.750% <= 3.039 milliseconds (cumulative count 93750)
186s 96.875% <= 3.511 milliseconds (cumulative count 96920)
186s 98.438% <= 3.727 milliseconds (cumulative count 98490)
186s 99.219% <= 3.871 milliseconds (cumulative count 99240)
186s 99.609% <= 3.983 milliseconds (cumulative count 99610)
186s 99.805% <= 4.071 milliseconds (cumulative count 99810)
186s 99.902% <= 4.215 milliseconds (cumulative count 99920)
186s 99.951% <= 4.271 milliseconds (cumulative count 99960)
186s 99.976% <= 4.295 milliseconds (cumulative count 99980)
186s 99.988% <= 4.303 milliseconds (cumulative count 99990)
186s 99.994% <= 4.543 milliseconds (cumulative count 100000)
186s 100.000% <= 4.543 milliseconds (cumulative count 100000)
186s 
186s Cumulative distribution of latencies:
186s 0.000% <= 0.103 milliseconds (cumulative count 0)
186s 0.010% <= 0.207 milliseconds (cumulative count 10)
186s 0.300% <= 0.807 milliseconds (cumulative count 300)
186s 4.960% <= 0.903 milliseconds (cumulative count 4960)
186s 9.600% <= 1.007 milliseconds (cumulative count 9600)
186s 9.660% <= 1.103 milliseconds (cumulative count 9660)
186s 9.900% <= 1.207 milliseconds (cumulative count 9900)
186s 10.140% <= 1.303 milliseconds (cumulative count 10140)
186s 10.600% <= 1.407 milliseconds (cumulative count 10600)
186s 17.360% <= 1.503 milliseconds (cumulative count 17360)
186s 54.400% <= 1.607 milliseconds (cumulative count 54400)
186s 80.740% <= 1.703 milliseconds (cumulative count 80740)
186s 85.580% <= 1.807 milliseconds (cumulative count 85580)
186s 86.920% <= 1.903 milliseconds (cumulative count 86920)
186s 88.660% <= 2.007 milliseconds (cumulative count 88660)
186s 89.360% <= 2.103 milliseconds (cumulative count 89360)
186s 94.150% <= 3.103 milliseconds (cumulative count 94150)
186s 99.810% <= 4.103 milliseconds (cumulative count 99810)
186s 100.000% <= 5.103 milliseconds (cumulative count 100000)
186s 
186s Summary:
186s   throughput summary: 284900.28 requests per second
186s   latency summary (msec):
186s           avg       min       p50       p95       p99       max
186s         1.704     0.168     1.599     3.215     3.823     4.543
186s 
186s autopkgtest [00:32:13]: test 0002-benchmark: -----------------------]
187s autopkgtest [00:32:14]: test 0002-benchmark: - - - - - - - - - - results - - - - - - - - - -
187s 0002-benchmark PASS
187s autopkgtest [00:32:14]: test 0003-redis-check-aof: preparing testbed
188s Reading package lists...
188s Building dependency tree...
188s Reading state information...
188s Starting pkgProblemResolver with broken count: 0
188s Starting 2 pkgProblemResolver with broken count: 0
188s Done
188s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
189s autopkgtest [00:32:16]: test 0003-redis-check-aof: [-----------------------
190s autopkgtest [00:32:17]: test 0003-redis-check-aof: -----------------------]
190s 0003-redis-check-aof PASS
190s autopkgtest [00:32:17]: test 0003-redis-check-aof: - - - - - - - - - - results - - - - - - - - - -
191s autopkgtest [00:32:18]: test 0004-redis-check-rdb: preparing testbed
191s Reading package lists...
191s Building dependency tree...
191s Reading state information...
191s Starting pkgProblemResolver with broken count: 0
191s Starting 2 pkgProblemResolver with broken count: 0
191s Done
191s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
192s autopkgtest [00:32:19]: test 0004-redis-check-rdb: [-----------------------
198s OK
198s [offset 0] Checking RDB file /var/lib/redis/dump.rdb
198s [offset 27] AUX FIELD redis-ver = '7.0.15'
198s [offset 41] AUX FIELD redis-bits = '64'
198s [offset 53] AUX FIELD ctime = '1740702745'
198s [offset 68] AUX FIELD used-mem = '1451696'
198s [offset 80] AUX FIELD aof-base = '0'
198s [offset 82] Selecting DB ID 0
198s [offset 7184] Checksum OK
198s [offset 7184] \o/ RDB looks OK! \o/
198s [info] 4 keys read
198s [info] 0 expires
198s [info] 0 already expired
198s autopkgtest [00:32:25]: test 0004-redis-check-rdb: -----------------------]
199s 0004-redis-check-rdb PASS
199s autopkgtest [00:32:26]: test 0004-redis-check-rdb: - - - - - - - - - - results - - - - - - - - - -
199s autopkgtest [00:32:26]: test 0005-cjson: preparing testbed
199s Reading package lists...
199s Building dependency tree...
199s Reading state information...
199s Starting pkgProblemResolver with broken count: 0
200s Starting 2 pkgProblemResolver with broken count: 0
200s Done
200s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
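The redis-check-rdb output above follows a regular `[offset N] ...` / `[info] N ...` pattern, so a test harness could extract the key statistics with a small parser. A minimal sketch, assuming output in the format shown in this log (the `parse_rdb_check` helper and the inlined sample are illustrative, not part of the actual autopkgtest suite):

```python
import re

def parse_rdb_check(output: str) -> dict:
    """Extract summary statistics from redis-check-rdb style output."""
    stats = {"checksum_ok": False, "keys_read": 0, "expires": 0}
    for line in output.splitlines():
        if "Checksum OK" in line:
            stats["checksum_ok"] = True
        m = re.match(r"\[info\] (\d+) keys read", line)
        if m:
            stats["keys_read"] = int(m.group(1))
        m = re.match(r"\[info\] (\d+) expires", line)
        if m:
            stats["expires"] = int(m.group(1))
    return stats

# Sample taken from the check output above, with timestamps stripped
sample = """\
[offset 0] Checking RDB file /var/lib/redis/dump.rdb
[offset 7184] Checksum OK
[info] 4 keys read
[info] 0 expires
[info] 0 already expired"""

result = parse_rdb_check(sample)
```

For the sample above, `result` reports a valid checksum and 4 keys read, matching the `0004-redis-check-rdb` PASS verdict.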
202s autopkgtest [00:32:29]: test 0005-cjson: [-----------------------
207s 
207s autopkgtest [00:32:34]: test 0005-cjson: -----------------------]
208s 0005-cjson PASS
208s autopkgtest [00:32:35]: test 0005-cjson: - - - - - - - - - - results - - - - - - - - - -
208s autopkgtest [00:32:35]: @@@@@@@@@@@@@@@@@@@@ summary
208s 0001-redis-cli PASS
208s 0002-benchmark PASS
208s 0003-redis-check-aof PASS
208s 0004-redis-check-rdb PASS
208s 0005-cjson PASS
226s nova [W] Using flock in prodstack6-s390x
226s flock: timeout while waiting to get lock
226s Creating nova instance adt-noble-s390x-redis-20250228-002907-juju-7f2275-prod-proposed-migration-environment-2-ae62c2b6-7524-4d17-9fbb-b4777c88b560 from image adt/ubuntu-noble-s390x-server-20250227.img (UUID 93d795ea-699a-4683-8340-e9d329dfbce8)...
226s nova [W] Timed out waiting for 4bea8bea-ede6-4adc-8500-896c7adaa925 to get deleted.
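Each benchmark run above ends with a `throughput summary` (requests per second) and a `latency summary` (avg/min/p50/p95/p99/max in milliseconds). The arithmetic behind those summaries can be sketched as follows; this is a simplified illustration using a nearest-rank percentile over raw samples, not redis-benchmark's actual implementation, which buckets latencies into a histogram:

```python
def latency_summary(latencies_ms):
    """Compute avg/min/p50/p95/p99/max over a list of per-request latencies (ms)."""
    s = sorted(latencies_ms)
    n = len(s)

    def pct(p):
        # Nearest-rank percentile: smallest sample covering p% of all samples
        idx = min(n - 1, max(0, int(p / 100.0 * n + 0.5) - 1))
        return s[idx]

    return {
        "avg": sum(s) / n,
        "min": s[0],
        "p50": pct(50),
        "p95": pct(95),
        "p99": pct(99),
        "max": s[-1],
    }

def throughput(num_requests, elapsed_seconds):
    """Requests per second, as in the 'throughput summary' lines."""
    return num_requests / elapsed_seconds
```

As a sanity check against the LRANGE_500 run above: 100000 requests over 5.58 seconds gives roughly 17921 rps, in line with the reported 17934.00 (the elapsed time in the log is rounded to two decimals).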