0s autopkgtest [01:06:36]: starting date and time: 2025-02-28 01:06:36+0000
0s autopkgtest [01:06:36]: git checkout: 325255d2 Merge branch 'pin-any-arch' into 'ubuntu/production'
0s autopkgtest [01:06:36]: host juju-7f2275-prod-proposed-migration-environment-15; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.fffgiw9k/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:systemd,src:dpdk,src:samba --apt-upgrade redis --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 '--env=ADT_TEST_TRIGGERS=systemd/255.4-1ubuntu8.6 dpdk/23.11.2-0ubuntu0.24.04.1 samba/2:4.19.5+dfsg-4ubuntu9.1' -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-15@bos03-arm64-25.secgroup --name adt-noble-arm64-redis-20250228-010635-juju-7f2275-prod-proposed-migration-environment-15-ce3a019c-4628-40c2-8e1f-cea97deb8f34 --image adt/ubuntu-noble-arm64-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-15 --net-id=net_prod-proposed-migration -e TERM=linux -e ''"'"'http_proxy=http://squid.internal:3128'"'"'' -e ''"'"'https_proxy=http://squid.internal:3128'"'"'' -e ''"'"'no_proxy=127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,keyserver.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,10.24.0.0/24,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com,radosgw.ps5.canonical.com'"'"'' --mirror=http://ftpmaster.internal/ubuntu/
123s autopkgtest [01:08:39]: testbed dpkg architecture: arm64
123s autopkgtest [01:08:39]: testbed apt version: 2.7.14build2
123s autopkgtest [01:08:39]: @@@@@@@@@@@@@@@@@@@@ test bed setup
124s autopkgtest [01:08:40]: testbed release detected to be: None
124s autopkgtest [01:08:40]: updating testbed package index (apt update)
125s Get:1 http://ftpmaster.internal/ubuntu noble-proposed InRelease [265 kB]
125s Hit:2 http://ftpmaster.internal/ubuntu noble InRelease
125s Hit:3 http://ftpmaster.internal/ubuntu noble-updates InRelease
125s Hit:4 http://ftpmaster.internal/ubuntu noble-security InRelease
125s Get:5 http://ftpmaster.internal/ubuntu noble-proposed/main Sources [61.6 kB]
125s Get:6 http://ftpmaster.internal/ubuntu noble-proposed/universe Sources [66.2 kB]
125s Get:7 http://ftpmaster.internal/ubuntu noble-proposed/multiverse Sources [9488 B]
125s Get:8 http://ftpmaster.internal/ubuntu noble-proposed/restricted Sources [18.6 kB]
125s Get:9 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 Packages [163 kB]
125s Get:10 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 c-n-f Metadata [3756 B]
125s Get:11 http://ftpmaster.internal/ubuntu noble-proposed/restricted arm64 Packages [212 kB]
125s Get:12 http://ftpmaster.internal/ubuntu noble-proposed/restricted arm64 c-n-f Metadata [352 B]
125s Get:13 http://ftpmaster.internal/ubuntu noble-proposed/universe arm64 Packages [425 kB]
126s Get:14 http://ftpmaster.internal/ubuntu noble-proposed/universe arm64 c-n-f Metadata [9620 B]
126s Get:15 http://ftpmaster.internal/ubuntu noble-proposed/multiverse arm64 Packages [23.1 kB]
126s Get:16 http://ftpmaster.internal/ubuntu noble-proposed/multiverse arm64 c-n-f Metadata [344 B]
131s Fetched 1259 kB in 1s (1133 kB/s)
133s Reading package lists...
133s + lsb_release --codename --short
133s + RELEASE=noble
133s + cat
133s + [ noble != trusty ]
133s + DEBIAN_FRONTEND=noninteractive eatmydata apt-get -y --allow-downgrades -o Dpkg::Options::=--force-confnew dist-upgrade
133s Reading package lists...
134s Building dependency tree...
134s Reading state information...
135s Calculating upgrade...
135s The following packages will be upgraded:
135s   cloud-init cryptsetup-bin libcryptsetup12
135s 3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
135s Need to get 1075 kB of archives.
135s After this operation, 13.3 kB of additional disk space will be used.
135s Get:1 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 libcryptsetup12 arm64 2:2.7.0-1ubuntu4.2 [261 kB]
136s Get:2 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 cryptsetup-bin arm64 2:2.7.0-1ubuntu4.2 [210 kB]
136s Get:3 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 cloud-init all 24.4.1-0ubuntu0~24.04.1 [604 kB]
137s Preconfiguring packages ...
137s Fetched 1075 kB in 1s (1404 kB/s)
137s (Reading database ... 78429 files and directories currently installed.)
137s Preparing to unpack .../libcryptsetup12_2%3a2.7.0-1ubuntu4.2_arm64.deb ...
137s Unpacking libcryptsetup12:arm64 (2:2.7.0-1ubuntu4.2) over (2:2.7.0-1ubuntu4.1) ...
137s Preparing to unpack .../cryptsetup-bin_2%3a2.7.0-1ubuntu4.2_arm64.deb ...
137s Unpacking cryptsetup-bin (2:2.7.0-1ubuntu4.2) over (2:2.7.0-1ubuntu4.1) ...
137s Preparing to unpack .../cloud-init_24.4.1-0ubuntu0~24.04.1_all.deb ...
138s Unpacking cloud-init (24.4.1-0ubuntu0~24.04.1) over (24.4-0ubuntu1~24.04.2) ...
138s Setting up cloud-init (24.4.1-0ubuntu0~24.04.1) ...
140s Setting up libcryptsetup12:arm64 (2:2.7.0-1ubuntu4.2) ...
140s Setting up cryptsetup-bin (2:2.7.0-1ubuntu4.2) ...
140s Processing triggers for rsyslog (8.2312.0-3ubuntu9) ...
140s Processing triggers for man-db (2.12.0-4build2) ...
141s Processing triggers for libc-bin (2.39-0ubuntu8.4) ...
141s + rm /etc/apt/preferences.d/force-downgrade-to-release.pref
141s + /usr/lib/apt/apt-helper analyze-pattern ?true
141s + uname -r
141s + sed s/\./\\./g
141s + running_kernel_pattern=^linux-.*6\.8\.0-54-generic.*
141s + apt list ?obsolete
141s + tail -n+2
141s + cut -d/ -f1
141s + grep -v ^linux-.*6\.8\.0-54-generic.*
142s + true
142s + obsolete_pkgs=
142s + DEBIAN_FRONTEND=noninteractive eatmydata apt-get -y purge --autoremove
142s Reading package lists...
142s Building dependency tree...
142s Reading state information...
143s 0 upgraded, 0 newly installed, 0 to remove and 11 not upgraded.
143s + grep -q trusty /etc/lsb-release
143s + [ ! -d /usr/share/doc/unattended-upgrades ]
143s + [ ! -d /usr/share/doc/lxd ]
143s + [ ! -d /usr/share/doc/lxd-client ]
143s + [ ! -d /usr/share/doc/snapd ]
143s + type iptables
143s + cat
143s + chmod 755 /etc/rc.local
143s + . /etc/rc.local
143s + iptables -w -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
143s + iptables -A OUTPUT -d 10.255.255.1/32 -p tcp -j DROP
143s + iptables -A OUTPUT -d 10.255.255.2/32 -p tcp -j DROP
143s + uname -m
143s + [ aarch64 = ppc64le ]
143s + [ -d /run/systemd/system ]
143s + systemd-detect-virt --quiet --vm
143s + mkdir -p /etc/systemd/system/systemd-random-seed.service.d/
143s + cat
143s + grep -q lz4 /etc/initramfs-tools/initramfs.conf
143s + echo COMPRESS=lz4
143s autopkgtest [01:08:59]: upgrading testbed (apt dist-upgrade and autopurge)
144s Reading package lists...
144s Building dependency tree...
144s Reading state information...
144s Calculating upgrade...
144s Starting pkgProblemResolver with broken count: 0
144s Starting 2 pkgProblemResolver with broken count: 0
144s Done
145s Entering ResolveByKeep
145s
146s The following packages will be upgraded:
146s   libnss-systemd libpam-systemd libsystemd-shared libsystemd0 libudev1 systemd
146s   systemd-dev systemd-resolved systemd-sysv systemd-timesyncd udev
146s 11 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
146s Need to get 8700 kB of archives.
146s After this operation, 0 B of additional disk space will be used.
146s Get:1 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 libnss-systemd arm64 255.4-1ubuntu8.6 [155 kB]
146s Get:2 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 systemd-dev all 255.4-1ubuntu8.6 [104 kB]
146s Get:3 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 systemd-timesyncd arm64 255.4-1ubuntu8.6 [34.8 kB]
146s Get:4 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 systemd-resolved arm64 255.4-1ubuntu8.6 [291 kB]
146s Get:5 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 libsystemd-shared arm64 255.4-1ubuntu8.6 [2018 kB]
147s Get:6 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 libsystemd0 arm64 255.4-1ubuntu8.6 [425 kB]
147s Get:7 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 systemd-sysv arm64 255.4-1ubuntu8.6 [11.9 kB]
147s Get:8 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 libpam-systemd arm64 255.4-1ubuntu8.6 [232 kB]
147s Get:9 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 systemd arm64 255.4-1ubuntu8.6 [3404 kB]
148s Get:10 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 udev arm64 255.4-1ubuntu8.6 [1852 kB]
148s Get:11 http://ftpmaster.internal/ubuntu noble-proposed/main arm64 libudev1 arm64 255.4-1ubuntu8.6 [174 kB]
149s Fetched 8700 kB in 2s (3721 kB/s)
149s (Reading database ... 78429 files and directories currently installed.)
149s Preparing to unpack .../0-libnss-systemd_255.4-1ubuntu8.6_arm64.deb ...
149s Unpacking libnss-systemd:arm64 (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ...
149s Preparing to unpack .../1-systemd-dev_255.4-1ubuntu8.6_all.deb ...
149s Unpacking systemd-dev (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ...
149s Preparing to unpack .../2-systemd-timesyncd_255.4-1ubuntu8.6_arm64.deb ...
149s Unpacking systemd-timesyncd (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ...
149s Preparing to unpack .../3-systemd-resolved_255.4-1ubuntu8.6_arm64.deb ...
149s Unpacking systemd-resolved (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ...
149s Preparing to unpack .../4-libsystemd-shared_255.4-1ubuntu8.6_arm64.deb ...
149s Unpacking libsystemd-shared:arm64 (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ...
149s Preparing to unpack .../5-libsystemd0_255.4-1ubuntu8.6_arm64.deb ...
149s Unpacking libsystemd0:arm64 (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ...
149s Setting up libsystemd0:arm64 (255.4-1ubuntu8.6) ...
149s (Reading database ... 78429 files and directories currently installed.)
149s Preparing to unpack .../systemd-sysv_255.4-1ubuntu8.6_arm64.deb ...
149s Unpacking systemd-sysv (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ...
149s Preparing to unpack .../libpam-systemd_255.4-1ubuntu8.6_arm64.deb ...
149s Unpacking libpam-systemd:arm64 (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ...
150s Preparing to unpack .../systemd_255.4-1ubuntu8.6_arm64.deb ...
150s Unpacking systemd (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ...
150s Preparing to unpack .../udev_255.4-1ubuntu8.6_arm64.deb ...
150s Unpacking udev (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ...
150s Preparing to unpack .../libudev1_255.4-1ubuntu8.6_arm64.deb ...
150s Unpacking libudev1:arm64 (255.4-1ubuntu8.6) over (255.4-1ubuntu8.5) ...
150s Setting up libudev1:arm64 (255.4-1ubuntu8.6) ...
150s Setting up systemd-dev (255.4-1ubuntu8.6) ...
150s Setting up libsystemd-shared:arm64 (255.4-1ubuntu8.6) ...
150s Setting up systemd (255.4-1ubuntu8.6) ...
151s Setting up systemd-timesyncd (255.4-1ubuntu8.6) ...
152s Setting up udev (255.4-1ubuntu8.6) ...
153s Setting up systemd-resolved (255.4-1ubuntu8.6) ...
154s Setting up systemd-sysv (255.4-1ubuntu8.6) ...
154s Setting up libnss-systemd:arm64 (255.4-1ubuntu8.6) ...
154s Setting up libpam-systemd:arm64 (255.4-1ubuntu8.6) ...
154s Processing triggers for libc-bin (2.39-0ubuntu8.4) ...
154s Processing triggers for man-db (2.12.0-4build2) ...
154s Processing triggers for dbus (1.14.10-4ubuntu4.1) ...
154s Processing triggers for initramfs-tools (0.142ubuntu25.5) ...
154s update-initramfs: Generating /boot/initrd.img-6.8.0-54-generic
155s W: No lz4 in /usr/bin:/sbin:/bin, using gzip
179s System running in EFI mode, skipping.
180s Reading package lists...
181s Building dependency tree...
181s Reading state information...
181s Starting pkgProblemResolver with broken count: 0
182s Starting 2 pkgProblemResolver with broken count: 0
182s Done
183s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
184s autopkgtest [01:09:40]: rebooting testbed after setup commands that affected boot
188s autopkgtest-virt-ssh: WARNING: ssh connection failed. Retrying in 3 seconds...
210s autopkgtest [01:10:06]: testbed running kernel: Linux 6.8.0-54-generic #56-Ubuntu SMP PREEMPT_DYNAMIC Sat Feb 8 00:17:08 UTC 2025
213s autopkgtest [01:10:09]: @@@@@@@@@@@@@@@@@@@@ apt-source redis
218s Get:1 http://ftpmaster.internal/ubuntu noble/universe redis 5:7.0.15-1build2 (dsc) [2376 B]
218s Get:2 http://ftpmaster.internal/ubuntu noble/universe redis 5:7.0.15-1build2 (tar) [3026 kB]
218s Get:3 http://ftpmaster.internal/ubuntu noble/universe redis 5:7.0.15-1build2 (diff) [29.3 kB]
219s gpgv: Signature made Mon Apr 1 07:33:50 2024 UTC
219s gpgv: using RSA key A089FB36AAFBDAD5ACC1325069F790171A210984
219s gpgv: Can't check signature: No public key
219s dpkg-source: warning: cannot verify inline signature for ./redis_7.0.15-1build2.dsc: no acceptable signature found
219s autopkgtest [01:10:15]: testing package redis version 5:7.0.15-1build2
220s autopkgtest [01:10:16]: build not needed
222s autopkgtest [01:10:18]: test 0001-redis-cli: preparing testbed
223s Reading package lists...
223s Building dependency tree...
223s Reading state information...
224s Starting pkgProblemResolver with broken count: 0
224s Starting 2 pkgProblemResolver with broken count: 0
224s Done
225s The following NEW packages will be installed:
225s   libatomic1 libjemalloc2 liblzf1 redis redis-sentinel redis-server
225s   redis-tools
225s 0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
225s Need to get 1461 kB of archives.
225s After this operation, 7941 kB of additional disk space will be used.
225s Get:1 http://ftpmaster.internal/ubuntu noble-updates/main arm64 libatomic1 arm64 14.2.0-4ubuntu2~24.04 [11.6 kB]
225s Get:2 http://ftpmaster.internal/ubuntu noble/universe arm64 libjemalloc2 arm64 5.3.0-2build1 [204 kB]
225s Get:3 http://ftpmaster.internal/ubuntu noble/universe arm64 liblzf1 arm64 3.6-4 [7426 B]
225s Get:4 http://ftpmaster.internal/ubuntu noble/universe arm64 redis-tools arm64 5:7.0.15-1build2 [1171 kB]
226s Get:5 http://ftpmaster.internal/ubuntu noble/universe arm64 redis-sentinel arm64 5:7.0.15-1build2 [12.2 kB]
226s Get:6 http://ftpmaster.internal/ubuntu noble/universe arm64 redis-server arm64 5:7.0.15-1build2 [51.7 kB]
226s Get:7 http://ftpmaster.internal/ubuntu noble/universe arm64 redis all 5:7.0.15-1build2 [2920 B]
226s Fetched 1461 kB in 1s (1904 kB/s)
226s Selecting previously unselected package libatomic1:arm64.
227s (Reading database ... 78429 files and directories currently installed.)
227s Preparing to unpack .../0-libatomic1_14.2.0-4ubuntu2~24.04_arm64.deb ...
227s Unpacking libatomic1:arm64 (14.2.0-4ubuntu2~24.04) ...
227s Selecting previously unselected package libjemalloc2:arm64.
227s Preparing to unpack .../1-libjemalloc2_5.3.0-2build1_arm64.deb ...
227s Unpacking libjemalloc2:arm64 (5.3.0-2build1) ...
227s Selecting previously unselected package liblzf1:arm64.
227s Preparing to unpack .../2-liblzf1_3.6-4_arm64.deb ...
227s Unpacking liblzf1:arm64 (3.6-4) ...
227s Selecting previously unselected package redis-tools.
227s Preparing to unpack .../3-redis-tools_5%3a7.0.15-1build2_arm64.deb ...
227s Unpacking redis-tools (5:7.0.15-1build2) ...
227s Selecting previously unselected package redis-sentinel.
227s Preparing to unpack .../4-redis-sentinel_5%3a7.0.15-1build2_arm64.deb ...
227s Unpacking redis-sentinel (5:7.0.15-1build2) ...
227s Selecting previously unselected package redis-server.
227s Preparing to unpack .../5-redis-server_5%3a7.0.15-1build2_arm64.deb ...
227s Unpacking redis-server (5:7.0.15-1build2) ...
227s Selecting previously unselected package redis.
227s Preparing to unpack .../6-redis_5%3a7.0.15-1build2_all.deb ...
227s Unpacking redis (5:7.0.15-1build2) ...
227s Setting up libjemalloc2:arm64 (5.3.0-2build1) ...
227s Setting up liblzf1:arm64 (3.6-4) ...
227s Setting up libatomic1:arm64 (14.2.0-4ubuntu2~24.04) ...
227s Setting up redis-tools (5:7.0.15-1build2) ...
227s Setting up redis-server (5:7.0.15-1build2) ...
228s Created symlink /etc/systemd/system/redis.service → /usr/lib/systemd/system/redis-server.service.
228s Created symlink /etc/systemd/system/multi-user.target.wants/redis-server.service → /usr/lib/systemd/system/redis-server.service.
228s Setting up redis-sentinel (5:7.0.15-1build2) ...
229s Created symlink /etc/systemd/system/sentinel.service → /usr/lib/systemd/system/redis-sentinel.service.
229s Created symlink /etc/systemd/system/multi-user.target.wants/redis-sentinel.service → /usr/lib/systemd/system/redis-sentinel.service.
229s Setting up redis (5:7.0.15-1build2) ...
229s Processing triggers for man-db (2.12.0-4build2) ...
230s Processing triggers for libc-bin (2.39-0ubuntu8.4) ...
232s autopkgtest [01:10:28]: test 0001-redis-cli: [-----------------------
237s # Server
237s redis_version:7.0.15
237s redis_git_sha1:00000000
237s redis_git_dirty:0
237s redis_build_id:d81b8ff71cfb150e
237s redis_mode:standalone
237s os:Linux 6.8.0-54-generic aarch64
237s arch_bits:64
237s monotonic_clock:POSIX clock_gettime
237s multiplexing_api:epoll
237s atomicvar_api:c11-builtin
237s gcc_version:13.2.0
237s process_id:1686
237s process_supervised:systemd
237s run_id:4fa75c8693fb73547878e5ad5ffb4043d94f3c91
237s tcp_port:6379
237s server_time_usec:1740705033793854
237s uptime_in_seconds:3
237s uptime_in_days:0
237s hz:10
237s configured_hz:10
237s lru_clock:12651785
237s executable:/usr/bin/redis-server
237s config_file:/etc/redis/redis.conf
237s io_threads_active:0
237s
237s # Clients
237s connected_clients:3
237s cluster_connections:0
237s maxclients:10000
237s client_recent_max_input_buffer:20480
237s client_recent_max_output_buffer:0
237s blocked_clients:0
237s tracking_clients:0
237s clients_in_timeout_table:0
237s
237s # Memory
237s used_memory:1094112
237s used_memory_human:1.04M
237s used_memory_rss:12713984
237s used_memory_rss_human:12.12M
237s used_memory_peak:1094112
237s used_memory_peak_human:1.04M
237s used_memory_peak_perc:100.90%
237s used_memory_overhead:953568
237s used_memory_startup:908768
237s used_memory_dataset:140544
237s used_memory_dataset_perc:75.83%
237s allocator_allocated:4623456
237s allocator_active:9437184
237s allocator_resident:10616832
237s total_system_memory:4090732544
237s total_system_memory_human:3.81G
237s used_memory_lua:31744
237s used_memory_vm_eval:31744
237s used_memory_lua_human:31.00K
237s used_memory_scripts_eval:0
237s number_of_cached_scripts:0
237s number_of_functions:0
237s number_of_libraries:0
237s used_memory_vm_functions:32768
237s used_memory_vm_total:64512
237s used_memory_vm_total_human:63.00K
237s used_memory_functions:200
237s used_memory_scripts:200
237s used_memory_scripts_human:200B
237s maxmemory:0
237s maxmemory_human:0B
237s maxmemory_policy:noeviction
237s allocator_frag_ratio:2.04
237s allocator_frag_bytes:4813728
237s allocator_rss_ratio:1.12
237s allocator_rss_bytes:1179648
237s rss_overhead_ratio:1.20
237s rss_overhead_bytes:2097152
237s mem_fragmentation_ratio:12.07
237s mem_fragmentation_bytes:11660472
237s mem_not_counted_for_evict:0
237s mem_replication_backlog:0
237s mem_total_replication_buffers:0
237s mem_clients_slaves:0
237s mem_clients_normal:44600
237s mem_cluster_links:0
237s mem_aof_buffer:0
237s mem_allocator:jemalloc-5.3.0
237s active_defrag_running:0
237s lazyfree_pending_objects:0
237s lazyfreed_objects:0
237s
237s # Persistence
237s loading:0
237s async_loading:0
237s current_cow_peak:0
237s current_cow_size:0
237s current_cow_size_age:0
237s current_fork_perc:0.00
237s current_save_keys_processed:0
237s current_save_keys_total:0
237s rdb_changes_since_last_save:0
237s rdb_bgsave_in_progress:0
237s rdb_last_save_time:1740705030
237s rdb_last_bgsave_status:ok
237s rdb_last_bgsave_time_sec:-1
237s rdb_current_bgsave_time_sec:-1
237s rdb_saves:0
237s rdb_last_cow_size:0
237s rdb_last_load_keys_expired:0
237s rdb_last_load_keys_loaded:0
237s aof_enabled:0
237s aof_rewrite_in_progress:0
237s aof_rewrite_scheduled:0
237s aof_last_rewrite_time_sec:-1
237s aof_current_rewrite_time_sec:-1
237s aof_last_bgrewrite_status:ok
237s aof_rewrites:0
237s aof_rewrites_consecutive_failures:0
237s aof_last_write_status:ok
237s aof_last_cow_size:0
237s module_fork_in_progress:0
237s module_fork_last_cow_size:0
237s
237s # Stats
237s total_connections_received:3
237s total_commands_processed:8
237s instantaneous_ops_per_sec:1
237s total_net_input_bytes:483
237s total_net_output_bytes:220
237s total_net_repl_input_bytes:0
237s total_net_repl_output_bytes:0
237s instantaneous_input_kbps:0.09
237s instantaneous_output_kbps:0.09
237s instantaneous_input_repl_kbps:0.00
237s instantaneous_output_repl_kbps:0.00
237s rejected_connections:0
237s sync_full:0
237s sync_partial_ok:0
237s sync_partial_err:0
237s expired_keys:0
237s expired_stale_perc:0.00
237s expired_time_cap_reached_count:0
237s expire_cycle_cpu_milliseconds:0
237s evicted_keys:0
237s evicted_clients:0
237s total_eviction_exceeded_time:0
237s current_eviction_exceeded_time:0
237s keyspace_hits:0
237s keyspace_misses:0
237s pubsub_channels:1
237s pubsub_patterns:0
237s pubsubshard_channels:0
237s latest_fork_usec:0
237s total_forks:0
237s migrate_cached_sockets:0
237s slave_expires_tracked_keys:0
237s active_defrag_hits:0
237s active_defrag_misses:0
237s active_defrag_key_hits:0
237s active_defrag_key_misses:0
237s total_active_defrag_time:0
237s current_active_defrag_time:0
237s tracking_total_keys:0
237s tracking_total_items:0
237s tracking_total_prefixes:0
237s unexpected_error_replies:0
237s total_error_replies:0
237s dump_payload_sanitizations:0
237s total_reads_processed:5
237s total_writes_processed:5
237s io_threaded_reads_processed:0
237s io_threaded_writes_processed:0
237s reply_buffer_shrinks:2
237s reply_buffer_expands:0
237s
237s # Replication
237s role:master
237s connected_slaves:0
237s master_failover_state:no-failover
237s master_replid:c98e4cf7db3b9d5cc405955128f7fb0843a0e461
237s master_replid2:0000000000000000000000000000000000000000
237s master_repl_offset:0
237s second_repl_offset:-1
237s repl_backlog_active:0
237s repl_backlog_size:1048576
237s repl_backlog_first_byte_offset:0
237s repl_backlog_histlen:0
237s
237s # CPU
237s used_cpu_sys:0.029774
237s used_cpu_user:0.039015
237s used_cpu_sys_children:0.003801
237s used_cpu_user_children:0.000899
237s used_cpu_sys_main_thread:0.029354
237s used_cpu_user_main_thread:0.039199
237s
237s # Modules
237s
237s # Errorstats
237s
237s # Cluster
237s cluster_enabled:0
237s
237s # Keyspace
237s Redis ver. 7.0.15
238s autopkgtest [01:10:34]: test 0001-redis-cli: -----------------------]
238s autopkgtest [01:10:34]: test 0001-redis-cli: - - - - - - - - - - results - - - - - - - - - -
238s 0001-redis-cli PASS
239s autopkgtest [01:10:35]: test 0002-benchmark: preparing testbed
239s Reading package lists...
239s Building dependency tree...
239s Reading state information...
240s Starting pkgProblemResolver with broken count: 0
240s Starting 2 pkgProblemResolver with broken count: 0
240s Done
241s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
243s autopkgtest [01:10:39]: test 0002-benchmark: [-----------------------
249s PING_INLINE: rps=0.0 (overall: 0.0) avg_msec=nan (overall: nan)
249s ====== PING_INLINE ======
249s   100000 requests completed in 0.24 seconds
249s   50 parallel clients
249s   3 bytes payload
249s   keep alive: 1
249s   host configuration "save": 3600 1 300 100 60 10000
249s   host configuration "appendonly": no
249s   multi-thread: no
249s
249s Latency by percentile distribution:
249s 0.000% <= 0.223 milliseconds (cumulative count 10)
249s 50.000% <= 0.775 milliseconds (cumulative count 50120)
249s 75.000% <= 0.919 milliseconds (cumulative count 75280)
249s 87.500% <= 1.055 milliseconds (cumulative count 87730)
249s 93.750% <= 1.215 milliseconds (cumulative count 93850)
249s 96.875% <= 1.383 milliseconds (cumulative count 96910)
249s 98.438% <= 1.567 milliseconds (cumulative count 98460)
249s 99.219% <= 1.719 milliseconds (cumulative count 99220)
249s 99.609% <= 1.863 milliseconds (cumulative count 99610)
249s 99.805% <= 1.983 milliseconds (cumulative count 99810)
249s 99.902% <= 2.215 milliseconds (cumulative count 99910)
249s 99.951% <= 2.431 milliseconds (cumulative count 99960)
249s 99.976% <= 2.655 milliseconds (cumulative count 99980)
249s 99.988% <= 2.695 milliseconds (cumulative count 99990)
249s 99.994% <= 2.719 milliseconds (cumulative count 100000)
249s 100.000% <= 2.719 milliseconds (cumulative count 100000)
249s
249s Cumulative distribution of latencies:
249s 0.000% <= 0.103 milliseconds (cumulative count 0)
249s 0.060% <= 0.303 milliseconds (cumulative count 60)
249s 0.150% <= 0.407 milliseconds (cumulative count 150)
249s 1.560% <= 0.503 milliseconds (cumulative count 1560)
249s 13.810% <= 0.607 milliseconds (cumulative count 13810)
249s 34.220% <= 0.703 milliseconds (cumulative count 34220)
249s 56.820% <= 0.807 milliseconds (cumulative count 56820)
249s 73.130% <= 0.903 milliseconds (cumulative count 73130)
249s 84.340% <= 1.007 milliseconds (cumulative count 84340)
249s 90.130% <= 1.103 milliseconds (cumulative count 90130)
249s 93.620% <= 1.207 milliseconds (cumulative count 93620)
249s 95.830% <= 1.303 milliseconds (cumulative count 95830)
249s 97.180% <= 1.407 milliseconds (cumulative count 97180)
249s 98.050% <= 1.503 milliseconds (cumulative count 98050)
249s 98.740% <= 1.607 milliseconds (cumulative count 98740)
249s 99.160% <= 1.703 milliseconds (cumulative count 99160)
249s 99.490% <= 1.807 milliseconds (cumulative count 99490)
249s 99.710% <= 1.903 milliseconds (cumulative count 99710)
249s 99.820% <= 2.007 milliseconds (cumulative count 99820)
249s 99.880% <= 2.103 milliseconds (cumulative count 99880)
249s 100.000% <= 3.103 milliseconds (cumulative count 100000)
249s
249s Summary:
249s   throughput summary: 409836.06 requests per second
249s   latency summary (msec):
249s           avg       min       p50       p95       p99       max
249s         0.820     0.216     0.775     1.263     1.663     2.719
249s PING_MBULK: rps=4920.0 (overall: 307500.0) avg_msec=1.154 (overall: 1.154)
249s ====== PING_MBULK ======
249s   100000 requests completed in 0.24 seconds
249s   50 parallel clients
249s   3 bytes payload
249s   keep alive: 1
249s   host configuration "save": 3600 1 300 100 60 10000
249s   host configuration "appendonly": no
249s   multi-thread: no
249s
249s Latency by percentile distribution:
249s 0.000% <= 0.199 milliseconds (cumulative count 10)
249s 50.000% <= 0.687 milliseconds (cumulative count 50800)
249s 75.000% <= 0.783 milliseconds (cumulative count 75330)
249s 87.500% <= 0.911 milliseconds (cumulative count 87770)
249s 93.750% <= 1.071 milliseconds (cumulative count 93870)
249s 96.875% <= 1.215 milliseconds (cumulative count 96950)
249s 98.438% <= 1.335 milliseconds (cumulative count 98540)
249s 99.219% <= 1.455 milliseconds (cumulative count 99230)
249s 99.609% <= 1.551 milliseconds (cumulative count 99610)
249s 99.805% <= 1.663 milliseconds (cumulative count 99810)
249s 99.902% <= 1.743 milliseconds (cumulative count 99910)
249s 99.951% <= 1.871 milliseconds (cumulative count 99960)
249s 99.976% <= 1.903 milliseconds (cumulative count 99980)
249s 99.988% <= 2.071 milliseconds (cumulative count 99990)
249s 99.994% <= 2.183 milliseconds (cumulative count 100000)
249s 100.000% <= 2.183 milliseconds (cumulative count 100000)
249s
249s Cumulative distribution of latencies:
249s 0.000% <= 0.103 milliseconds (cumulative count 0)
249s 0.010% <= 0.207 milliseconds (cumulative count 10)
249s 0.060% <= 0.303 milliseconds (cumulative count 60)
249s 0.260% <= 0.407 milliseconds (cumulative count 260)
249s 2.610% <= 0.503 milliseconds (cumulative count 2610)
249s 23.920% <= 0.607 milliseconds (cumulative count 23920)
249s 55.950% <= 0.703 milliseconds (cumulative count 55950)
249s 79.090% <= 0.807 milliseconds (cumulative count 79090)
249s 87.340% <= 0.903 milliseconds (cumulative count 87340)
249s 91.880% <= 1.007 milliseconds (cumulative count 91880)
249s 94.810% <= 1.103 milliseconds (cumulative count 94810)
249s 96.820% <= 1.207 milliseconds (cumulative count 96820)
249s 98.240% <= 1.303 milliseconds (cumulative count 98240)
249s 99.060% <= 1.407 milliseconds (cumulative count 99060)
249s 99.470% <= 1.503 milliseconds (cumulative count 99470)
249s 99.710% <= 1.607 milliseconds (cumulative count 99710)
249s 99.870% <= 1.703 milliseconds (cumulative count 99870)
249s 99.930% <= 1.807 milliseconds (cumulative count 99930)
249s 99.980% <= 1.903 milliseconds (cumulative count 99980)
249s 99.990% <= 2.103 milliseconds (cumulative count 99990)
249s 100.000% <= 3.103 milliseconds (cumulative count 100000)
249s
249s Summary:
249s   throughput summary: 423728.81 requests per second
249s   latency summary (msec):
249s           avg       min       p50       p95       p99       max
249s         0.726     0.192     0.687     1.119     1.399     2.183
249s SET: rps=22151.4 (overall: 347500.0) avg_msec=1.092 (overall: 1.092)
249s ====== SET ======
249s   100000 requests completed in 0.25 seconds
249s   50 parallel clients
249s   3 bytes payload
249s   keep alive: 1
249s   host configuration "save": 3600 1 300 100 60 10000
249s   host configuration "appendonly": no
249s   multi-thread: no
249s
249s Latency by percentile distribution:
249s 0.000% <= 0.351 milliseconds (cumulative count 10)
249s 50.000% <= 0.999 milliseconds (cumulative count 50590)
249s 75.000% <= 1.199 milliseconds (cumulative count 75540)
249s 87.500% <= 1.367 milliseconds (cumulative count 87660)
249s 93.750% <= 1.479 milliseconds (cumulative count 93920)
249s 96.875% <= 1.615 milliseconds (cumulative count 96960)
249s 98.438% <= 1.767 milliseconds (cumulative count 98450)
249s 99.219% <= 1.895 milliseconds (cumulative count 99220)
249s 99.609% <= 2.023 milliseconds (cumulative count 99610)
249s 99.805% <= 2.183 milliseconds (cumulative count 99820)
249s 99.902% <= 2.295 milliseconds (cumulative count 99920)
249s 99.951% <= 2.471 milliseconds (cumulative count 99960)
249s 99.976% <= 2.503 milliseconds (cumulative count 99980)
249s 99.988% <= 2.631 milliseconds (cumulative count 99990)
249s 99.994% <= 2.679 milliseconds (cumulative count 100000)
249s 100.000% <= 2.679 milliseconds (cumulative count 100000)
249s
249s Cumulative distribution of latencies:
249s 0.000% <= 0.103 milliseconds (cumulative count 0)
249s 0.050% <= 0.407 milliseconds (cumulative count 50)
249s 0.370% <= 0.503 milliseconds (cumulative count 370)
249s 3.000% <= 0.607 milliseconds (cumulative count 3000)
249s 10.020% <= 0.703 milliseconds (cumulative count 10020)
249s 21.570% <= 0.807 milliseconds (cumulative count 21570)
249s 34.400% <= 0.903 milliseconds (cumulative count 34400)
249s 52.050% <= 1.007 milliseconds (cumulative count 52050)
249s 65.930% <= 1.103 milliseconds (cumulative count 65930)
249s 76.220% <= 1.207 milliseconds (cumulative count 76220)
249s 83.440% <= 1.303 milliseconds (cumulative count 83440)
249s 90.150% <= 1.407 milliseconds (cumulative count 90150)
249s 94.780% <= 1.503 milliseconds (cumulative count 94780)
249s 96.840% <= 1.607 milliseconds (cumulative count 96840)
249s 97.850% <= 1.703 milliseconds (cumulative count 97850)
249s 98.810% <= 1.807 milliseconds (cumulative count 98810)
249s 99.250% <= 1.903 milliseconds (cumulative count 99250)
249s 99.550% <= 2.007 milliseconds (cumulative count 99550)
249s 99.730% <= 2.103 milliseconds (cumulative count 99730)
249s 100.000% <= 3.103 milliseconds (cumulative count 100000)
249s
249s Summary:
249s   throughput summary: 401606.44 requests per second
249s   latency summary (msec):
249s           avg       min       p50       p95       p99       max
249s         1.033     0.344     0.999     1.511     1.847     2.679
249s GET: rps=29880.0 (overall: 498000.0) avg_msec=0.710 (overall: 0.710)
249s ====== GET ======
249s   100000 requests completed in 0.17 seconds
249s   50 parallel clients
249s   3 bytes payload
249s   keep alive: 1
249s   host configuration "save": 3600 1 300 100 60 10000
249s   host configuration "appendonly": no
249s   multi-thread: no
249s
249s Latency by percentile distribution:
249s 0.000% <= 0.207 milliseconds (cumulative count 20)
249s 50.000% <= 0.535 milliseconds (cumulative count 50340)
249s 75.000% <= 0.647 milliseconds (cumulative count 75290)
249s 87.500% <= 0.751 milliseconds (cumulative count 88000)
249s 93.750% <= 0.855 milliseconds (cumulative count 93930)
249s 96.875% <= 0.975 milliseconds (cumulative count 97000)
249s 98.438% <= 1.103 milliseconds (cumulative count 98440)
249s 99.219% <= 1.271 milliseconds (cumulative count 99250)
249s 99.609% <= 1.423 milliseconds (cumulative count 99620)
249s 99.805% <= 1.703 milliseconds (cumulative count 99810)
249s 99.902% <= 2.023 milliseconds (cumulative count 99910)
249s 99.951% <= 2.143 milliseconds (cumulative count 99960)
249s 99.976% <= 2.191 milliseconds (cumulative count 99980)
249s 99.988% <= 2.207 milliseconds (cumulative count 99990)
249s 99.994% <= 2.327 milliseconds (cumulative count 100000)
249s 100.000% <= 2.327 milliseconds (cumulative count 100000)
249s
249s Cumulative distribution of latencies:
249s 0.000% <= 0.103 milliseconds (cumulative count 0)
249s 0.020% <= 0.207 milliseconds (cumulative count 20)
249s 0.380% <= 0.303 milliseconds (cumulative count 380)
249s 13.850% <= 0.407 milliseconds (cumulative count 13850)
249s 41.220% <= 0.503 milliseconds (cumulative count 41220)
249s 67.660% <= 0.607 milliseconds (cumulative count 67660)
249s 83.210% <= 0.703 milliseconds (cumulative count 83210)
249s 91.630% <= 0.807 milliseconds (cumulative count 91630)
249s 95.450% <= 0.903 milliseconds (cumulative count 95450)
249s 97.440% <= 1.007 milliseconds (cumulative count 97440)
249s 98.440% <= 1.103 milliseconds (cumulative count 98440)
249s 99.000% <= 1.207 milliseconds (cumulative count 99000)
249s 99.350% <= 1.303 milliseconds (cumulative count 99350)
249s 99.570% <= 1.407 milliseconds (cumulative count 99570)
249s 99.710% <= 1.503 milliseconds (cumulative count 99710)
249s 99.780% <= 1.607 milliseconds (cumulative count 99780)
249s 99.810% <= 1.703 milliseconds (cumulative count 99810)
249s 99.830% <= 1.807 milliseconds (cumulative count 99830)
249s 99.860% <= 1.903 milliseconds (cumulative count 99860)
249s 99.900% <= 2.007 milliseconds (cumulative count 99900)
249s 99.940% <= 2.103 milliseconds (cumulative count 99940)
249s 100.000% <= 3.103 milliseconds (cumulative count 100000)
249s
249s Summary:
249s   throughput summary: 595238.12 requests per second
249s   latency summary (msec):
249s           avg       min       p50       p95       p99       max
249s         0.570     0.200     0.535     0.895     1.207     2.327
250s INCR: rps=175600.0 (overall: 462105.3) avg_msec=0.777 (overall: 0.777)
250s ====== INCR ======
250s   100000 requests completed in 0.19
seconds 250s 50 parallel clients 250s 3 bytes payload 250s keep alive: 1 250s host configuration "save": 3600 1 300 100 60 10000 250s host configuration "appendonly": no 250s multi-thread: no 250s 250s Latency by percentile distribution: 250s 0.000% <= 0.231 milliseconds (cumulative count 10) 250s 50.000% <= 0.551 milliseconds (cumulative count 50220) 250s 75.000% <= 0.695 milliseconds (cumulative count 75240) 250s 87.500% <= 0.895 milliseconds (cumulative count 87500) 250s 93.750% <= 1.311 milliseconds (cumulative count 93770) 250s 96.875% <= 1.951 milliseconds (cumulative count 96900) 250s 98.438% <= 2.519 milliseconds (cumulative count 98440) 250s 99.219% <= 2.879 milliseconds (cumulative count 99240) 250s 99.609% <= 3.103 milliseconds (cumulative count 99630) 250s 99.805% <= 3.287 milliseconds (cumulative count 99810) 250s 99.902% <= 3.471 milliseconds (cumulative count 99910) 250s 99.951% <= 3.527 milliseconds (cumulative count 99960) 250s 99.976% <= 3.551 milliseconds (cumulative count 99980) 250s 99.988% <= 3.567 milliseconds (cumulative count 99990) 250s 99.994% <= 3.591 milliseconds (cumulative count 100000) 250s 100.000% <= 3.591 milliseconds (cumulative count 100000) 250s 250s Cumulative distribution of latencies: 250s 0.000% <= 0.103 milliseconds (cumulative count 0) 250s 0.190% <= 0.303 milliseconds (cumulative count 190) 250s 11.930% <= 0.407 milliseconds (cumulative count 11930) 250s 37.930% <= 0.503 milliseconds (cumulative count 37930) 250s 62.300% <= 0.607 milliseconds (cumulative count 62300) 250s 76.070% <= 0.703 milliseconds (cumulative count 76070) 250s 83.770% <= 0.807 milliseconds (cumulative count 83770) 250s 87.880% <= 0.903 milliseconds (cumulative count 87880) 250s 90.710% <= 1.007 milliseconds (cumulative count 90710) 250s 91.920% <= 1.103 milliseconds (cumulative count 91920) 250s 92.830% <= 1.207 milliseconds (cumulative count 92830) 250s 93.700% <= 1.303 milliseconds (cumulative count 93700) 250s 94.670% <= 1.407 milliseconds 
(cumulative count 94670) 250s 95.310% <= 1.503 milliseconds (cumulative count 95310) 250s 95.740% <= 1.607 milliseconds (cumulative count 95740) 250s 96.050% <= 1.703 milliseconds (cumulative count 96050) 250s 96.260% <= 1.807 milliseconds (cumulative count 96260) 250s 96.640% <= 1.903 milliseconds (cumulative count 96640) 250s 97.170% <= 2.007 milliseconds (cumulative count 97170) 250s 97.550% <= 2.103 milliseconds (cumulative count 97550) 250s 99.630% <= 3.103 milliseconds (cumulative count 99630) 250s 100.000% <= 4.103 milliseconds (cumulative count 100000) 250s 250s Summary: 250s throughput summary: 518134.72 requests per second 250s latency summary (msec): 250s avg min p50 p95 p99 max 250s 0.671 0.224 0.551 1.455 2.767 3.591 250s LPUSH: rps=307120.0 (overall: 511866.7) avg_msec=0.850 (overall: 0.850) ====== LPUSH ====== 250s 100000 requests completed in 0.19 seconds 250s 50 parallel clients 250s 3 bytes payload 250s keep alive: 1 250s host configuration "save": 3600 1 300 100 60 10000 250s host configuration "appendonly": no 250s multi-thread: no 250s 250s Latency by percentile distribution: 250s 0.000% <= 0.319 milliseconds (cumulative count 10) 250s 50.000% <= 0.823 milliseconds (cumulative count 50400) 250s 75.000% <= 0.975 milliseconds (cumulative count 75680) 250s 87.500% <= 1.079 milliseconds (cumulative count 87750) 250s 93.750% <= 1.159 milliseconds (cumulative count 93880) 250s 96.875% <= 1.239 milliseconds (cumulative count 97000) 250s 98.438% <= 1.335 milliseconds (cumulative count 98510) 250s 99.219% <= 1.423 milliseconds (cumulative count 99220) 250s 99.609% <= 1.503 milliseconds (cumulative count 99610) 250s 99.805% <= 1.591 milliseconds (cumulative count 99820) 250s 99.902% <= 1.671 milliseconds (cumulative count 99910) 250s 99.951% <= 1.727 milliseconds (cumulative count 99960) 250s 99.976% <= 1.775 milliseconds (cumulative count 99980) 250s 99.988% <= 1.791 milliseconds (cumulative count 99990) 250s 99.994% <= 1.807 milliseconds (cumulative 
count 100000) 250s 100.000% <= 1.807 milliseconds (cumulative count 100000) 250s 250s Cumulative distribution of latencies: 250s 0.000% <= 0.103 milliseconds (cumulative count 0) 250s 0.730% <= 0.407 milliseconds (cumulative count 730) 250s 2.790% <= 0.503 milliseconds (cumulative count 2790) 250s 8.350% <= 0.607 milliseconds (cumulative count 8350) 250s 22.210% <= 0.703 milliseconds (cumulative count 22210) 250s 46.750% <= 0.807 milliseconds (cumulative count 46750) 250s 65.420% <= 0.903 milliseconds (cumulative count 65420) 250s 79.750% <= 1.007 milliseconds (cumulative count 79750) 250s 89.900% <= 1.103 milliseconds (cumulative count 89900) 250s 96.080% <= 1.207 milliseconds (cumulative count 96080) 250s 98.140% <= 1.303 milliseconds (cumulative count 98140) 250s 99.100% <= 1.407 milliseconds (cumulative count 99100) 250s 99.610% <= 1.503 milliseconds (cumulative count 99610) 250s 99.830% <= 1.607 milliseconds (cumulative count 99830) 250s 99.940% <= 1.703 milliseconds (cumulative count 99940) 250s 100.000% <= 1.807 milliseconds (cumulative count 100000) 250s 250s Summary: 250s throughput summary: 512820.53 requests per second 250s latency summary (msec): 250s avg min p50 p95 p99 max 250s 0.847 0.312 0.823 1.183 1.399 1.807 250s ====== RPUSH ====== 250s 100000 requests completed in 0.19 seconds 250s 50 parallel clients 250s 3 bytes payload 250s keep alive: 1 250s host configuration "save": 3600 1 300 100 60 10000 250s host configuration "appendonly": no 250s multi-thread: no 250s 250s Latency by percentile distribution: 250s 0.000% <= 0.159 milliseconds (cumulative count 10) 250s 50.000% <= 0.759 milliseconds (cumulative count 50820) 250s 75.000% <= 0.911 milliseconds (cumulative count 75860) 250s 87.500% <= 1.015 milliseconds (cumulative count 87790) 250s 93.750% <= 1.103 milliseconds (cumulative count 94050) 250s 96.875% <= 1.215 milliseconds (cumulative count 96890) 250s 98.438% <= 1.351 milliseconds (cumulative count 98510) 250s 99.219% <= 1.431 milliseconds 
(cumulative count 99240) 250s 99.609% <= 1.519 milliseconds (cumulative count 99630) 250s 99.805% <= 1.575 milliseconds (cumulative count 99810) 250s 99.902% <= 1.615 milliseconds (cumulative count 99910) 250s 99.951% <= 1.671 milliseconds (cumulative count 99960) 250s 99.976% <= 1.695 milliseconds (cumulative count 99980) 250s 99.988% <= 1.719 milliseconds (cumulative count 99990) 250s 99.994% <= 1.799 milliseconds (cumulative count 100000) 250s 100.000% <= 1.799 milliseconds (cumulative count 100000) 250s 250s Cumulative distribution of latencies: 250s 0.000% <= 0.103 milliseconds (cumulative count 0) 250s 0.020% <= 0.207 milliseconds (cumulative count 20) 250s 0.140% <= 0.303 milliseconds (cumulative count 140) 250s 1.330% <= 0.407 milliseconds (cumulative count 1330) 250s 6.540% <= 0.503 milliseconds (cumulative count 6540) 250s 18.310% <= 0.607 milliseconds (cumulative count 18310) 250s 37.420% <= 0.703 milliseconds (cumulative count 37420) 250s 60.870% <= 0.807 milliseconds (cumulative count 60870) 250s 74.850% <= 0.903 milliseconds (cumulative count 74850) 250s 86.930% <= 1.007 milliseconds (cumulative count 86930) 250s 94.050% <= 1.103 milliseconds (cumulative count 94050) 250s 96.740% <= 1.207 milliseconds (cumulative count 96740) 250s 98.070% <= 1.303 milliseconds (cumulative count 98070) 250s 99.050% <= 1.407 milliseconds (cumulative count 99050) 250s 99.550% <= 1.503 milliseconds (cumulative count 99550) 250s 99.880% <= 1.607 milliseconds (cumulative count 99880) 250s 99.980% <= 1.703 milliseconds (cumulative count 99980) 250s 100.000% <= 1.807 milliseconds (cumulative count 100000) 250s 250s Summary: 250s throughput summary: 540540.56 requests per second 250s latency summary (msec): 250s avg min p50 p95 p99 max 250s 0.782 0.152 0.759 1.135 1.407 1.799 250s LPOP: rps=23984.1 (overall: 401333.3) avg_msec=1.050 (overall: 1.050) ====== LPOP ====== 250s 100000 requests completed in 0.22 seconds 250s 50 parallel clients 250s 3 bytes payload 250s keep alive: 
1 250s host configuration "save": 3600 1 300 100 60 10000 250s host configuration "appendonly": no 250s multi-thread: no 250s 250s Latency by percentile distribution: 250s 0.000% <= 0.303 milliseconds (cumulative count 10) 250s 50.000% <= 0.951 milliseconds (cumulative count 50540) 250s 75.000% <= 1.127 milliseconds (cumulative count 75800) 250s 87.500% <= 1.247 milliseconds (cumulative count 87950) 250s 93.750% <= 1.335 milliseconds (cumulative count 93800) 250s 96.875% <= 1.431 milliseconds (cumulative count 97100) 250s 98.438% <= 1.511 milliseconds (cumulative count 98500) 250s 99.219% <= 1.607 milliseconds (cumulative count 99250) 250s 99.609% <= 1.703 milliseconds (cumulative count 99640) 250s 99.805% <= 1.775 milliseconds (cumulative count 99820) 250s 99.902% <= 1.863 milliseconds (cumulative count 99910) 250s 99.951% <= 1.919 milliseconds (cumulative count 99960) 250s 99.976% <= 1.951 milliseconds (cumulative count 99980) 250s 99.988% <= 1.959 milliseconds (cumulative count 99990) 250s 99.994% <= 1.967 milliseconds (cumulative count 100000) 250s 100.000% <= 1.967 milliseconds (cumulative count 100000) 250s 250s Cumulative distribution of latencies: 250s 0.000% <= 0.103 milliseconds (cumulative count 0) 250s 0.010% <= 0.303 milliseconds (cumulative count 10) 250s 0.130% <= 0.407 milliseconds (cumulative count 130) 250s 0.460% <= 0.503 milliseconds (cumulative count 460) 250s 1.310% <= 0.607 milliseconds (cumulative count 1310) 250s 4.470% <= 0.703 milliseconds (cumulative count 4470) 250s 19.300% <= 0.807 milliseconds (cumulative count 19300) 250s 40.970% <= 0.903 milliseconds (cumulative count 40970) 250s 59.760% <= 1.007 milliseconds (cumulative count 59760) 250s 72.950% <= 1.103 milliseconds (cumulative count 72950) 250s 84.300% <= 1.207 milliseconds (cumulative count 84300) 250s 92.110% <= 1.303 milliseconds (cumulative count 92110) 250s 96.390% <= 1.407 milliseconds (cumulative count 96390) 250s 98.420% <= 1.503 milliseconds (cumulative count 98420) 250s 
99.250% <= 1.607 milliseconds (cumulative count 99250) 250s 99.640% <= 1.703 milliseconds (cumulative count 99640) 250s 99.870% <= 1.807 milliseconds (cumulative count 99870) 250s 99.950% <= 1.903 milliseconds (cumulative count 99950) 250s 100.000% <= 2.007 milliseconds (cumulative count 100000) 250s 250s Summary: 250s throughput summary: 450450.44 requests per second 250s latency summary (msec): 250s avg min p50 p95 p99 max 250s 0.987 0.296 0.951 1.375 1.567 1.967 250s RPOP: rps=76120.0 (overall: 475750.0) avg_msec=0.925 (overall: 0.925) ====== RPOP ====== 250s 100000 requests completed in 0.21 seconds 250s 50 parallel clients 250s 3 bytes payload 250s keep alive: 1 250s host configuration "save": 3600 1 300 100 60 10000 250s host configuration "appendonly": no 250s multi-thread: no 250s 250s Latency by percentile distribution: 250s 0.000% <= 0.311 milliseconds (cumulative count 10) 250s 50.000% <= 0.895 milliseconds (cumulative count 51420) 250s 75.000% <= 1.055 milliseconds (cumulative count 75580) 250s 87.500% <= 1.167 milliseconds (cumulative count 87840) 250s 93.750% <= 1.255 milliseconds (cumulative count 94090) 250s 96.875% <= 1.335 milliseconds (cumulative count 96980) 250s 98.438% <= 1.415 milliseconds (cumulative count 98510) 250s 99.219% <= 1.487 milliseconds (cumulative count 99290) 250s 99.609% <= 1.535 milliseconds (cumulative count 99610) 250s 99.805% <= 1.599 milliseconds (cumulative count 99820) 250s 99.902% <= 1.695 milliseconds (cumulative count 99910) 250s 99.951% <= 1.783 milliseconds (cumulative count 99960) 250s 99.976% <= 1.815 milliseconds (cumulative count 99980) 250s 99.988% <= 1.887 milliseconds (cumulative count 99990) 250s 99.994% <= 1.935 milliseconds (cumulative count 100000) 250s 100.000% <= 1.935 milliseconds (cumulative count 100000) 250s 250s Cumulative distribution of latencies: 250s 0.000% <= 0.103 milliseconds (cumulative count 0) 250s 0.140% <= 0.407 milliseconds (cumulative count 140) 250s 0.670% <= 0.503 milliseconds 
(cumulative count 670) 250s 2.510% <= 0.607 milliseconds (cumulative count 2510) 250s 9.300% <= 0.703 milliseconds (cumulative count 9300) 250s 30.170% <= 0.807 milliseconds (cumulative count 30170) 250s 52.850% <= 0.903 milliseconds (cumulative count 52850) 250s 69.610% <= 1.007 milliseconds (cumulative count 69610) 250s 81.220% <= 1.103 milliseconds (cumulative count 81220) 250s 91.050% <= 1.207 milliseconds (cumulative count 91050) 250s 95.930% <= 1.303 milliseconds (cumulative count 95930) 250s 98.380% <= 1.407 milliseconds (cumulative count 98380) 250s 99.400% <= 1.503 milliseconds (cumulative count 99400) 250s 99.820% <= 1.607 milliseconds (cumulative count 99820) 250s 99.910% <= 1.703 milliseconds (cumulative count 99910) 250s 99.970% <= 1.807 milliseconds (cumulative count 99970) 250s 99.990% <= 1.903 milliseconds (cumulative count 99990) 250s 100.000% <= 2.007 milliseconds (cumulative count 100000) 250s 250s Summary: 250s throughput summary: 476190.50 requests per second 250s latency summary (msec): 250s avg min p50 p95 p99 max 250s 0.925 0.304 0.895 1.279 1.455 1.935 251s SADD: rps=183600.0 (overall: 588461.5) avg_msec=0.657 (overall: 0.657) ====== SADD ====== 251s 100000 requests completed in 0.17 seconds 251s 50 parallel clients 251s 3 bytes payload 251s keep alive: 1 251s host configuration "save": 3600 1 300 100 60 10000 251s host configuration "appendonly": no 251s multi-thread: no 251s 251s Latency by percentile distribution: 251s 0.000% <= 0.271 milliseconds (cumulative count 10) 251s 50.000% <= 0.639 milliseconds (cumulative count 50910) 251s 75.000% <= 0.775 milliseconds (cumulative count 75420) 251s 87.500% <= 0.879 milliseconds (cumulative count 87780) 251s 93.750% <= 0.975 milliseconds (cumulative count 93970) 251s 96.875% <= 1.055 milliseconds (cumulative count 97030) 251s 98.438% <= 1.119 milliseconds (cumulative count 98490) 251s 99.219% <= 1.199 milliseconds (cumulative count 99270) 251s 99.609% <= 1.287 milliseconds (cumulative count 
99610) 251s 99.805% <= 1.383 milliseconds (cumulative count 99810) 251s 99.902% <= 1.455 milliseconds (cumulative count 99910) 251s 99.951% <= 1.511 milliseconds (cumulative count 99960) 251s 99.976% <= 1.535 milliseconds (cumulative count 99980) 251s 99.988% <= 1.575 milliseconds (cumulative count 99990) 251s 99.994% <= 1.607 milliseconds (cumulative count 100000) 251s 100.000% <= 1.607 milliseconds (cumulative count 100000) 251s 251s Cumulative distribution of latencies: 251s 0.000% <= 0.103 milliseconds (cumulative count 0) 251s 0.120% <= 0.303 milliseconds (cumulative count 120) 251s 7.220% <= 0.407 milliseconds (cumulative count 7220) 251s 23.580% <= 0.503 milliseconds (cumulative count 23580) 251s 44.720% <= 0.607 milliseconds (cumulative count 44720) 251s 63.230% <= 0.703 milliseconds (cumulative count 63230) 251s 80.140% <= 0.807 milliseconds (cumulative count 80140) 251s 89.670% <= 0.903 milliseconds (cumulative count 89670) 251s 95.470% <= 1.007 milliseconds (cumulative count 95470) 251s 98.210% <= 1.103 milliseconds (cumulative count 98210) 251s 99.310% <= 1.207 milliseconds (cumulative count 99310) 251s 99.650% <= 1.303 milliseconds (cumulative count 99650) 251s 99.860% <= 1.407 milliseconds (cumulative count 99860) 251s 99.950% <= 1.503 milliseconds (cumulative count 99950) 251s 100.000% <= 1.607 milliseconds (cumulative count 100000) 251s 251s Summary: 251s throughput summary: 578034.69 requests per second 251s latency summary (msec): 251s avg min p50 p95 p99 max 251s 0.655 0.264 0.639 0.999 1.167 1.607 251s HSET: rps=323466.2 (overall: 527207.8) avg_msec=0.813 (overall: 0.813) ====== HSET ====== 251s 100000 requests completed in 0.19 seconds 251s 50 parallel clients 251s 3 bytes payload 251s keep alive: 1 251s host configuration "save": 3600 1 300 100 60 10000 251s host configuration "appendonly": no 251s multi-thread: no 251s 251s Latency by percentile distribution: 251s 0.000% <= 0.327 milliseconds (cumulative count 10) 251s 50.000% <= 0.791 
milliseconds (cumulative count 50710) 251s 75.000% <= 0.927 milliseconds (cumulative count 75080) 251s 87.500% <= 1.039 milliseconds (cumulative count 88160) 251s 93.750% <= 1.111 milliseconds (cumulative count 94100) 251s 96.875% <= 1.175 milliseconds (cumulative count 97120) 251s 98.438% <= 1.231 milliseconds (cumulative count 98450) 251s 99.219% <= 1.311 milliseconds (cumulative count 99240) 251s 99.609% <= 1.383 milliseconds (cumulative count 99640) 251s 99.805% <= 1.463 milliseconds (cumulative count 99810) 251s 99.902% <= 1.511 milliseconds (cumulative count 99910) 251s 99.951% <= 1.559 milliseconds (cumulative count 99960) 251s 99.976% <= 1.583 milliseconds (cumulative count 99980) 251s 99.988% <= 1.599 milliseconds (cumulative count 99990) 251s 99.994% <= 1.623 milliseconds (cumulative count 100000) 251s 100.000% <= 1.623 milliseconds (cumulative count 100000) 251s 251s Cumulative distribution of latencies: 251s 0.000% <= 0.103 milliseconds (cumulative count 0) 251s 0.900% <= 0.407 milliseconds (cumulative count 900) 251s 3.890% <= 0.503 milliseconds (cumulative count 3890) 251s 11.400% <= 0.607 milliseconds (cumulative count 11400) 251s 28.540% <= 0.703 milliseconds (cumulative count 28540) 251s 54.560% <= 0.807 milliseconds (cumulative count 54560) 251s 71.760% <= 0.903 milliseconds (cumulative count 71760) 251s 84.550% <= 1.007 milliseconds (cumulative count 84550) 251s 93.520% <= 1.103 milliseconds (cumulative count 93520) 251s 97.930% <= 1.207 milliseconds (cumulative count 97930) 251s 99.210% <= 1.303 milliseconds (cumulative count 99210) 251s 99.720% <= 1.407 milliseconds (cumulative count 99720) 251s 99.890% <= 1.503 milliseconds (cumulative count 99890) 251s 99.990% <= 1.607 milliseconds (cumulative count 99990) 251s 100.000% <= 1.703 milliseconds (cumulative count 100000) 251s 251s Summary: 251s throughput summary: 529100.56 requests per second 251s latency summary (msec): 251s avg min p50 p95 p99 max 251s 0.810 0.320 0.791 1.135 1.279 1.623 251s 
====== SPOP ====== 251s 100000 requests completed in 0.17 seconds 251s 50 parallel clients 251s 3 bytes payload 251s keep alive: 1 251s host configuration "save": 3600 1 300 100 60 10000 251s host configuration "appendonly": no 251s multi-thread: no 251s 251s Latency by percentile distribution: 251s 0.000% <= 0.167 milliseconds (cumulative count 10) 251s 50.000% <= 0.511 milliseconds (cumulative count 52350) 251s 75.000% <= 0.615 milliseconds (cumulative count 75940) 251s 87.500% <= 0.719 milliseconds (cumulative count 87860) 251s 93.750% <= 0.823 milliseconds (cumulative count 94110) 251s 96.875% <= 0.911 milliseconds (cumulative count 97000) 251s 98.438% <= 0.999 milliseconds (cumulative count 98450) 251s 99.219% <= 1.111 milliseconds (cumulative count 99220) 251s 99.609% <= 1.223 milliseconds (cumulative count 99630) 251s 99.805% <= 1.295 milliseconds (cumulative count 99820) 251s 99.902% <= 1.479 milliseconds (cumulative count 99910) 251s 99.951% <= 1.615 milliseconds (cumulative count 99960) 251s 99.976% <= 1.663 milliseconds (cumulative count 99980) 251s 99.988% <= 1.671 milliseconds (cumulative count 99990) 251s 99.994% <= 1.695 milliseconds (cumulative count 100000) 251s 100.000% <= 1.695 milliseconds (cumulative count 100000) 251s 251s Cumulative distribution of latencies: 251s 0.000% <= 0.103 milliseconds (cumulative count 0) 251s 0.030% <= 0.207 milliseconds (cumulative count 30) 251s 0.330% <= 0.303 milliseconds (cumulative count 330) 251s 17.970% <= 0.407 milliseconds (cumulative count 17970) 251s 49.950% <= 0.503 milliseconds (cumulative count 49950) 251s 74.750% <= 0.607 milliseconds (cumulative count 74750) 251s 86.610% <= 0.703 milliseconds (cumulative count 86610) 251s 93.320% <= 0.807 milliseconds (cumulative count 93320) 251s 96.860% <= 0.903 milliseconds (cumulative count 96860) 251s 98.500% <= 1.007 milliseconds (cumulative count 98500) 251s 99.170% <= 1.103 milliseconds (cumulative count 99170) 251s 99.570% <= 1.207 milliseconds (cumulative 
count 99570) 251s 99.830% <= 1.303 milliseconds (cumulative count 99830) 251s 99.870% <= 1.407 milliseconds (cumulative count 99870) 251s 99.920% <= 1.503 milliseconds (cumulative count 99920) 251s 99.950% <= 1.607 milliseconds (cumulative count 99950) 251s 100.000% <= 1.703 milliseconds (cumulative count 100000) 251s 251s Summary: 251s throughput summary: 588235.31 requests per second 251s latency summary (msec): 251s avg min p50 p95 p99 max 251s 0.541 0.160 0.511 0.847 1.079 1.695 251s ZADD: rps=72120.0 (overall: 450750.0) avg_msec=0.958 (overall: 0.958) ====== ZADD ====== 251s 100000 requests completed in 0.20 seconds 251s 50 parallel clients 251s 3 bytes payload 251s keep alive: 1 251s host configuration "save": 3600 1 300 100 60 10000 251s host configuration "appendonly": no 251s multi-thread: no 251s 251s Latency by percentile distribution: 251s 0.000% <= 0.327 milliseconds (cumulative count 20) 251s 50.000% <= 0.847 milliseconds (cumulative count 50140) 251s 75.000% <= 0.999 milliseconds (cumulative count 75550) 251s 87.500% <= 1.103 milliseconds (cumulative count 87960) 251s 93.750% <= 1.191 milliseconds (cumulative count 93950) 251s 96.875% <= 1.279 milliseconds (cumulative count 97040) 251s 98.438% <= 1.359 milliseconds (cumulative count 98490) 251s 99.219% <= 1.431 milliseconds (cumulative count 99250) 251s 99.609% <= 1.495 milliseconds (cumulative count 99660) 251s 99.805% <= 1.567 milliseconds (cumulative count 99830) 251s 99.902% <= 1.615 milliseconds (cumulative count 99910) 251s 99.951% <= 1.703 milliseconds (cumulative count 99960) 251s 99.976% <= 1.759 milliseconds (cumulative count 99980) 251s 99.988% <= 1.783 milliseconds (cumulative count 99990) 251s 99.994% <= 1.831 milliseconds (cumulative count 100000) 251s 100.000% <= 1.831 milliseconds (cumulative count 100000) 251s 251s Cumulative distribution of latencies: 251s 0.000% <= 0.103 milliseconds (cumulative count 0) 251s 0.590% <= 0.407 milliseconds (cumulative count 590) 251s 2.320% <= 0.503 
milliseconds (cumulative count 2320) 251s 7.010% <= 0.607 milliseconds (cumulative count 7010) 251s 19.320% <= 0.703 milliseconds (cumulative count 19320) 251s 41.740% <= 0.807 milliseconds (cumulative count 41740) 251s 60.790% <= 0.903 milliseconds (cumulative count 60790) 251s 76.640% <= 1.007 milliseconds (cumulative count 76640) 251s 87.960% <= 1.103 milliseconds (cumulative count 87960) 251s 94.640% <= 1.207 milliseconds (cumulative count 94640) 251s 97.480% <= 1.303 milliseconds (cumulative count 97480) 251s 99.040% <= 1.407 milliseconds (cumulative count 99040) 251s 99.680% <= 1.503 milliseconds (cumulative count 99680) 251s 99.900% <= 1.607 milliseconds (cumulative count 99900) 251s 99.960% <= 1.703 milliseconds (cumulative count 99960) 251s 99.990% <= 1.807 milliseconds (cumulative count 99990) 251s 100.000% <= 1.903 milliseconds (cumulative count 100000) 251s 251s Summary: 251s throughput summary: 500000.00 requests per second 251s latency summary (msec): 251s avg min p50 p95 p99 max 251s 0.869 0.320 0.847 1.215 1.407 1.831 251s ZPOPMIN: rps=209360.0 (overall: 594772.8) avg_msec=0.530 (overall: 0.530) ====== ZPOPMIN ====== 251s 100000 requests completed in 0.17 seconds 251s 50 parallel clients 251s 3 bytes payload 251s keep alive: 1 251s host configuration "save": 3600 1 300 100 60 10000 251s host configuration "appendonly": no 251s multi-thread: no 251s 251s Latency by percentile distribution: 251s 0.000% <= 0.159 milliseconds (cumulative count 10) 251s 50.000% <= 0.503 milliseconds (cumulative count 50310) 251s 75.000% <= 0.591 milliseconds (cumulative count 75250) 251s 87.500% <= 0.679 milliseconds (cumulative count 88250) 251s 93.750% <= 0.759 milliseconds (cumulative count 94080) 251s 96.875% <= 0.839 milliseconds (cumulative count 96930) 251s 98.438% <= 0.927 milliseconds (cumulative count 98630) 251s 99.219% <= 0.975 milliseconds (cumulative count 99220) 251s 99.609% <= 1.047 milliseconds (cumulative count 99630) 251s 99.805% <= 1.167 milliseconds 
(cumulative count 99810) 251s 99.902% <= 1.303 milliseconds (cumulative count 99910) 251s 99.951% <= 1.375 milliseconds (cumulative count 99960) 251s 99.976% <= 1.407 milliseconds (cumulative count 99980) 251s 99.988% <= 1.439 milliseconds (cumulative count 100000) 251s 100.000% <= 1.439 milliseconds (cumulative count 100000) 251s 251s Cumulative distribution of latencies: 251s 0.000% <= 0.103 milliseconds (cumulative count 0) 251s 0.040% <= 0.207 milliseconds (cumulative count 40) 251s 0.300% <= 0.303 milliseconds (cumulative count 300) 251s 16.590% <= 0.407 milliseconds (cumulative count 16590) 251s 50.310% <= 0.503 milliseconds (cumulative count 50310) 251s 78.330% <= 0.607 milliseconds (cumulative count 78330) 251s 90.380% <= 0.703 milliseconds (cumulative count 90380) 251s 96.000% <= 0.807 milliseconds (cumulative count 96000) 251s 98.130% <= 0.903 milliseconds (cumulative count 98130) 251s 99.450% <= 1.007 milliseconds (cumulative count 99450) 251s 99.760% <= 1.103 milliseconds (cumulative count 99760) 251s 99.840% <= 1.207 milliseconds (cumulative count 99840) 251s 99.910% <= 1.303 milliseconds (cumulative count 99910) 251s 99.980% <= 1.407 milliseconds (cumulative count 99980) 251s 100.000% <= 1.503 milliseconds (cumulative count 100000) 251s 251s Summary: 251s throughput summary: 588235.31 requests per second 251s latency summary (msec): 251s avg min p50 p95 p99 max 251s 0.527 0.152 0.503 0.783 0.959 1.439 252s LPUSH (needed to benchmark LRANGE): rps=350199.2 (overall: 526347.3) avg_msec=0.819 (overall: 0.819) ====== LPUSH (needed to benchmark LRANGE) ====== 252s 100000 requests completed in 0.19 seconds 252s 50 parallel clients 252s 3 bytes payload 252s keep alive: 1 252s host configuration "save": 3600 1 300 100 60 10000 252s host configuration "appendonly": no 252s multi-thread: no 252s 252s Latency by percentile distribution: 252s 0.000% <= 0.279 milliseconds (cumulative count 10) 252s 50.000% <= 0.799 milliseconds (cumulative count 51120) 252s 75.000% 
<= 0.935 milliseconds (cumulative count 75290)
252s 87.500% <= 1.047 milliseconds (cumulative count 88260)
252s 93.750% <= 1.119 milliseconds (cumulative count 94150)
252s 96.875% <= 1.175 milliseconds (cumulative count 96940)
252s 98.438% <= 1.247 milliseconds (cumulative count 98490)
252s 99.219% <= 1.319 milliseconds (cumulative count 99250)
252s 99.609% <= 1.383 milliseconds (cumulative count 99610)
252s 99.805% <= 1.479 milliseconds (cumulative count 99810)
252s 99.902% <= 1.575 milliseconds (cumulative count 99910)
252s 99.951% <= 1.655 milliseconds (cumulative count 99960)
252s 99.976% <= 1.695 milliseconds (cumulative count 99980)
252s 99.988% <= 1.711 milliseconds (cumulative count 99990)
252s 99.994% <= 1.839 milliseconds (cumulative count 100000)
252s 100.000% <= 1.839 milliseconds (cumulative count 100000)
252s
252s Cumulative distribution of latencies:
252s 0.000% <= 0.103 milliseconds (cumulative count 0)
252s 0.060% <= 0.303 milliseconds (cumulative count 60)
252s 0.950% <= 0.407 milliseconds (cumulative count 950)
252s 3.370% <= 0.503 milliseconds (cumulative count 3370)
252s 9.830% <= 0.607 milliseconds (cumulative count 9830)
252s 25.780% <= 0.703 milliseconds (cumulative count 25780)
252s 52.950% <= 0.807 milliseconds (cumulative count 52950)
252s 70.970% <= 0.903 milliseconds (cumulative count 70970)
252s 84.070% <= 1.007 milliseconds (cumulative count 84070)
252s 93.210% <= 1.103 milliseconds (cumulative count 93210)
252s 97.840% <= 1.207 milliseconds (cumulative count 97840)
252s 99.170% <= 1.303 milliseconds (cumulative count 99170)
252s 99.670% <= 1.407 milliseconds (cumulative count 99670)
252s 99.850% <= 1.503 milliseconds (cumulative count 99850)
252s 99.930% <= 1.607 milliseconds (cumulative count 99930)
252s 99.980% <= 1.703 milliseconds (cumulative count 99980)
252s 99.990% <= 1.807 milliseconds (cumulative count 99990)
252s 100.000% <= 1.903 milliseconds (cumulative count 100000)
252s
252s Summary:
252s throughput summary: 529100.56 requests per second
252s latency summary (msec):
252s avg min p50 p95 p99 max
252s 0.819 0.272 0.799 1.135 1.295 1.839
252s LRANGE_100 (first 100 elements): rps=96294.8 (overall: 106946.9) avg_msec=3.662 (overall: 3.662) LRANGE_100 (first 100 elements): rps=109322.7 (overall: 108197.1) avg_msec=3.625 (overall: 3.643) LRANGE_100 (first 100 elements): rps=107600.0 (overall: 107991.8) avg_msec=3.704 (overall: 3.664) ====== LRANGE_100 (first 100 elements) ======
252s 100000 requests completed in 0.92 seconds
252s 50 parallel clients
252s 3 bytes payload
252s keep alive: 1
252s host configuration "save": 3600 1 300 100 60 10000
252s host configuration "appendonly": no
252s multi-thread: no
252s
252s Latency by percentile distribution:
252s 0.000% <= 0.567 milliseconds (cumulative count 10)
252s 50.000% <= 3.575 milliseconds (cumulative count 50340)
252s 75.000% <= 4.103 milliseconds (cumulative count 75200)
252s 87.500% <= 4.479 milliseconds (cumulative count 87500)
252s 93.750% <= 5.039 milliseconds (cumulative count 93770)
252s 96.875% <= 5.487 milliseconds (cumulative count 96890)
252s 98.438% <= 5.751 milliseconds (cumulative count 98470)
252s 99.219% <= 5.951 milliseconds (cumulative count 99230)
252s 99.609% <= 6.111 milliseconds (cumulative count 99610)
252s 99.805% <= 6.327 milliseconds (cumulative count 99810)
252s 99.902% <= 6.503 milliseconds (cumulative count 99910)
252s 99.951% <= 6.639 milliseconds (cumulative count 99960)
252s 99.976% <= 6.711 milliseconds (cumulative count 99980)
252s 99.988% <= 6.767 milliseconds (cumulative count 99990)
252s 99.994% <= 6.855 milliseconds (cumulative count 100000)
252s 100.000% <= 6.855 milliseconds (cumulative count 100000)
252s
252s Cumulative distribution of latencies:
252s 0.000% <= 0.103 milliseconds (cumulative count 0)
252s 0.010% <= 0.607 milliseconds (cumulative count 10)
252s 0.030% <= 1.607 milliseconds (cumulative count 30)
252s 0.070% <= 1.703 milliseconds (cumulative count 70)
252s 0.170% <= 1.807 milliseconds (cumulative count 170)
252s 0.320% <= 1.903 milliseconds (cumulative count 320)
252s 0.620% <= 2.007 milliseconds (cumulative count 620)
252s 0.860% <= 2.103 milliseconds (cumulative count 860)
252s 27.260% <= 3.103 milliseconds (cumulative count 27260)
252s 75.200% <= 4.103 milliseconds (cumulative count 75200)
252s 94.280% <= 5.103 milliseconds (cumulative count 94280)
252s 99.600% <= 6.103 milliseconds (cumulative count 99600)
252s 100.000% <= 7.103 milliseconds (cumulative count 100000)
252s
252s Summary:
252s throughput summary: 108459.87 requests per second
252s latency summary (msec):
252s avg min p50 p95 p99 max
252s 3.643 0.560 3.575 5.215 5.879 6.855
255s LRANGE_300 (first 300 elements): rps=6466.4 (overall: 29745.5) avg_msec=9.825 (overall: 9.825) LRANGE_300 (first 300 elements): rps=34858.3 (overall: 33948.2) avg_msec=7.479 (overall: 7.845) LRANGE_300 (first 300 elements): rps=32804.8 (overall: 33435.7) avg_msec=8.488 (overall: 8.128) LRANGE_300 (first 300 elements): rps=33198.4 (overall: 33362.1) avg_msec=8.482 (overall: 8.237) LRANGE_300 (first 300 elements): rps=34175.3 (overall: 33554.1) avg_msec=7.880 (overall: 8.151) LRANGE_300 (first 300 elements): rps=34603.2 (overall: 33755.1) avg_msec=7.482 (overall: 8.020) LRANGE_300 (first 300 elements): rps=34349.2 (overall: 33850.7) avg_msec=7.765 (overall: 7.978) LRANGE_300 (first 300 elements): rps=34587.3 (overall: 33952.7) avg_msec=7.691 (overall: 7.938) LRANGE_300 (first 300 elements): rps=34533.9 (overall: 34023.2) avg_msec=7.534 (overall: 7.888) LRANGE_300 (first 300 elements): rps=34426.9 (overall: 34067.2) avg_msec=7.868 (overall: 7.886) LRANGE_300 (first 300 elements): rps=32874.0 (overall: 33949.6) avg_msec=7.954 (overall: 7.892) LRANGE_300 (first 300 elements): rps=31992.0 (overall: 33775.8) avg_msec=8.869 (overall: 7.974) ====== LRANGE_300 (first 300 elements) ======
255s 100000 requests completed in 2.96 seconds
255s 50 parallel clients
255s 3 bytes payload
255s keep alive: 1
255s host configuration "save": 3600 1 300 100 60 10000
255s host configuration "appendonly": no
255s multi-thread: no
255s
255s Latency by percentile distribution:
255s 0.000% <= 0.511 milliseconds (cumulative count 10)
255s 50.000% <= 7.551 milliseconds (cumulative count 50190)
255s 75.000% <= 8.759 milliseconds (cumulative count 75010)
255s 87.500% <= 10.127 milliseconds (cumulative count 87530)
255s 93.750% <= 11.767 milliseconds (cumulative count 93750)
255s 96.875% <= 13.647 milliseconds (cumulative count 96880)
255s 98.438% <= 15.359 milliseconds (cumulative count 98440)
255s 99.219% <= 17.071 milliseconds (cumulative count 99220)
255s 99.609% <= 20.223 milliseconds (cumulative count 99610)
255s 99.805% <= 22.479 milliseconds (cumulative count 99810)
255s 99.902% <= 23.295 milliseconds (cumulative count 99910)
255s 99.951% <= 23.823 milliseconds (cumulative count 99960)
255s 99.976% <= 24.143 milliseconds (cumulative count 99980)
255s 99.988% <= 24.319 milliseconds (cumulative count 99990)
255s 99.994% <= 24.543 milliseconds (cumulative count 100000)
255s 100.000% <= 24.543 milliseconds (cumulative count 100000)
255s
255s Cumulative distribution of latencies:
255s 0.000% <= 0.103 milliseconds (cumulative count 0)
255s 0.010% <= 0.607 milliseconds (cumulative count 10)
255s 0.020% <= 0.807 milliseconds (cumulative count 20)
255s 0.030% <= 1.503 milliseconds (cumulative count 30)
255s 0.040% <= 1.807 milliseconds (cumulative count 40)
255s 0.050% <= 3.103 milliseconds (cumulative count 50)
255s 0.770% <= 4.103 milliseconds (cumulative count 770)
255s 4.560% <= 5.103 milliseconds (cumulative count 4560)
255s 17.060% <= 6.103 milliseconds (cumulative count 17060)
255s 38.750% <= 7.103 milliseconds (cumulative count 38750)
255s 64.010% <= 8.103 milliseconds (cumulative count 64010)
255s 78.980% <= 9.103 milliseconds (cumulative count 78980)
255s 87.380% <= 10.103 milliseconds (cumulative count 87380)
255s 91.710% <= 11.103 milliseconds (cumulative count 91710)
255s 94.550% <= 12.103 milliseconds (cumulative count 94550)
255s 96.180% <= 13.103 milliseconds (cumulative count 96180)
255s 97.340% <= 14.103 milliseconds (cumulative count 97340)
255s 98.240% <= 15.103 milliseconds (cumulative count 98240)
255s 98.870% <= 16.103 milliseconds (cumulative count 98870)
255s 99.220% <= 17.103 milliseconds (cumulative count 99220)
255s 99.320% <= 18.111 milliseconds (cumulative count 99320)
255s 99.450% <= 19.103 milliseconds (cumulative count 99450)
255s 99.590% <= 20.111 milliseconds (cumulative count 99590)
255s 99.700% <= 21.103 milliseconds (cumulative count 99700)
255s 99.770% <= 22.111 milliseconds (cumulative count 99770)
255s 99.880% <= 23.103 milliseconds (cumulative count 99880)
255s 99.970% <= 24.111 milliseconds (cumulative count 99970)
255s 100.000% <= 25.103 milliseconds (cumulative count 100000)
255s
255s Summary:
255s throughput summary: 33818.06 requests per second
255s latency summary (msec):
255s avg min p50 p95 p99 max
255s 7.949 0.504 7.551 12.351 16.327 24.543
261s LRANGE_500 (first 500 elements): rps=5326.8 (overall: 11090.2) avg_msec=22.596 (overall: 22.596) LRANGE_500 (first 500 elements): rps=14705.9 (overall: 13535.8) avg_msec=19.474 (overall: 20.302) LRANGE_500 (first 500 elements): rps=14560.3 (overall: 13951.1) avg_msec=19.766 (overall: 20.075) LRANGE_500 (first 500 elements): rps=14164.1 (overall: 14012.4) avg_msec=20.017 (overall: 20.058) LRANGE_500 (first 500 elements): rps=17541.2 (overall: 14798.3) avg_msec=15.934 (overall: 18.970) LRANGE_500 (first 500 elements): rps=20091.3 (overall: 15753.0) avg_msec=10.900 (overall: 17.113) LRANGE_500 (first 500 elements): rps=19800.0 (overall: 16377.7) avg_msec=11.257 (overall: 16.020) LRANGE_500 (first 500 elements): rps=20074.8 (overall: 16870.4) avg_msec=10.241 (overall: 15.104) LRANGE_500 (first 500 elements): rps=18452.4 (overall: 17055.1) avg_msec=13.979 (overall: 14.962) LRANGE_500 (first 500 elements): rps=18507.9 (overall: 17207.1) avg_msec=13.649 (overall: 14.814) LRANGE_500 (first 500 elements): rps=19845.8 (overall: 17457.8) avg_msec=12.065 (overall: 14.517) LRANGE_500 (first 500 elements): rps=19182.5 (overall: 17606.9) avg_msec=10.771 (overall: 14.164) LRANGE_500 (first 500 elements): rps=20432.0 (overall: 17830.0) avg_msec=10.155 (overall: 13.802) LRANGE_500 (first 500 elements): rps=19460.3 (overall: 17950.2) avg_msec=10.664 (overall: 13.551) LRANGE_500 (first 500 elements): rps=20133.9 (overall: 18101.3) avg_msec=10.312 (overall: 13.301) LRANGE_500 (first 500 elements): rps=20905.9 (overall: 18283.5) avg_msec=9.948 (overall: 13.052) LRANGE_500 (first 500 elements): rps=21094.1 (overall: 18454.9) avg_msec=10.267 (overall: 12.858) LRANGE_500 (first 500 elements): rps=20878.4 (overall: 18594.2) avg_msec=9.907 (overall: 12.668) LRANGE_500 (first 500 elements): rps=21436.5 (overall: 18747.0) avg_msec=9.669 (overall: 12.483) LRANGE_500 (first 500 elements): rps=21223.1 (overall: 18872.8) avg_msec=9.736 (overall: 12.326) LRANGE_500 (first 500 elements): rps=21412.7 (overall: 18996.1) avg_msec=10.748 (overall: 12.240) ====== LRANGE_500 (first 500 elements) ======
261s 100000 requests completed in 5.26 seconds
261s 50 parallel clients
261s 3 bytes payload
261s keep alive: 1
261s host configuration "save": 3600 1 300 100 60 10000
261s host configuration "appendonly": no
261s multi-thread: no
261s
261s Latency by percentile distribution:
261s 0.000% <= 0.535 milliseconds (cumulative count 10)
261s 50.000% <= 10.719 milliseconds (cumulative count 50070)
261s 75.000% <= 12.207 milliseconds (cumulative count 75010)
261s 87.500% <= 18.735 milliseconds (cumulative count 87500)
261s 93.750% <= 24.079 milliseconds (cumulative count 93800)
261s 96.875% <= 25.967 milliseconds (cumulative count 96900)
261s 98.438% <= 28.383 milliseconds (cumulative count 98440)
261s 99.219% <= 31.103 milliseconds (cumulative count 99220)
261s 99.609% <= 32.735 milliseconds (cumulative count 99610)
261s 99.805% <= 33.887 milliseconds (cumulative count 99810)
261s 99.902% <= 34.655 milliseconds (cumulative count 99910)
261s 99.951% <= 36.223 milliseconds (cumulative count 99960)
261s 99.976% <= 36.895 milliseconds (cumulative count 99980)
261s 99.988% <= 40.223 milliseconds (cumulative count 99990)
261s 99.994% <= 41.407 milliseconds (cumulative count 100000)
261s 100.000% <= 41.407 milliseconds (cumulative count 100000)
261s
261s Cumulative distribution of latencies:
261s 0.000% <= 0.103 milliseconds (cumulative count 0)
261s 0.010% <= 0.607 milliseconds (cumulative count 10)
261s 0.020% <= 1.703 milliseconds (cumulative count 20)
261s 0.030% <= 1.807 milliseconds (cumulative count 30)
261s 0.040% <= 1.903 milliseconds (cumulative count 40)
261s 0.050% <= 2.007 milliseconds (cumulative count 50)
261s 0.080% <= 2.103 milliseconds (cumulative count 80)
261s 0.460% <= 3.103 milliseconds (cumulative count 460)
261s 0.950% <= 4.103 milliseconds (cumulative count 950)
261s 1.430% <= 5.103 milliseconds (cumulative count 1430)
261s 3.290% <= 6.103 milliseconds (cumulative count 3290)
261s 8.690% <= 7.103 milliseconds (cumulative count 8690)
261s 11.440% <= 8.103 milliseconds (cumulative count 11440)
261s 17.740% <= 9.103 milliseconds (cumulative count 17740)
261s 34.830% <= 10.103 milliseconds (cumulative count 34830)
261s 59.080% <= 11.103 milliseconds (cumulative count 59080)
261s 74.190% <= 12.103 milliseconds (cumulative count 74190)
261s 79.310% <= 13.103 milliseconds (cumulative count 79310)
261s 81.560% <= 14.103 milliseconds (cumulative count 81560)
261s 82.700% <= 15.103 milliseconds (cumulative count 82700)
261s 83.750% <= 16.103 milliseconds (cumulative count 83750)
261s 85.140% <= 17.103 milliseconds (cumulative count 85140)
261s 86.700% <= 18.111 milliseconds (cumulative count 86700)
261s 87.950% <= 19.103 milliseconds (cumulative count 87950)
261s 88.890% <= 20.111 milliseconds (cumulative count 88890)
261s 89.920% <= 21.103 milliseconds (cumulative count 89920)
261s 91.010% <= 22.111 milliseconds (cumulative count 91010)
261s 92.230% <= 23.103 milliseconds (cumulative count 92230)
261s 93.850% <= 24.111 milliseconds (cumulative count 93850)
261s 95.560% <= 25.103 milliseconds (cumulative count 95560)
261s 97.050% <= 26.111 milliseconds (cumulative count 97050)
261s 97.780% <= 27.103 milliseconds (cumulative count 97780)
261s 98.290% <= 28.111 milliseconds (cumulative count 98290)
261s 98.760% <= 29.103 milliseconds (cumulative count 98760)
261s 99.010% <= 30.111 milliseconds (cumulative count 99010)
261s 99.220% <= 31.103 milliseconds (cumulative count 99220)
261s 99.490% <= 32.111 milliseconds (cumulative count 99490)
261s 99.680% <= 33.119 milliseconds (cumulative count 99680)
261s 99.840% <= 34.111 milliseconds (cumulative count 99840)
261s 99.930% <= 35.103 milliseconds (cumulative count 99930)
261s 99.950% <= 36.127 milliseconds (cumulative count 99950)
261s 99.980% <= 37.119 milliseconds (cumulative count 99980)
261s 99.990% <= 41.119 milliseconds (cumulative count 99990)
261s 100.000% <= 42.111 milliseconds (cumulative count 100000)
261s
261s Summary:
261s throughput summary: 19029.50 requests per second
261s latency summary (msec):
261s avg min p50 p95 p99 max
261s 12.206 0.528 10.719 24.751 30.063 41.407
267s LRANGE_600 (first 600 elements): rps=7766.5 (overall: 10505.3) avg_msec=25.775 (overall: 25.775) LRANGE_600 (first 600 elements): rps=15761.0 (overall: 13496.6) avg_msec=17.314 (overall: 20.151) LRANGE_600 (first 600 elements): rps=14968.3 (overall: 14031.7) avg_msec=17.549 (overall: 19.142) LRANGE_600 (first 600 elements): rps=15896.0 (overall: 14526.0) avg_msec=17.342 (overall: 18.620) LRANGE_600 (first 600 elements): rps=13696.8 (overall: 14350.0) avg_msec=20.392 (overall: 18.979) LRANGE_600 (first 600 elements): rps=16370.5 (overall: 14700.3) avg_msec=14.402 (overall: 18.095) LRANGE_600 (first 600 elements): rps=15472.4 (overall: 14815.5) avg_msec=17.624 (overall: 18.022) LRANGE_600 (first 600 elements): rps=14482.2 (overall: 14772.4) avg_msec=20.044 (overall: 18.278) LRANGE_600 (first 600 elements): rps=16761.9 (overall: 14999.5) avg_msec=15.911 (overall: 17.976) LRANGE_600 (first 600 elements): rps=15043.8 (overall: 15004.1) avg_msec=17.383 (overall: 17.915) LRANGE_600 (first 600 elements): rps=11035.9 (overall: 14636.4) avg_msec=25.360 (overall: 18.436) LRANGE_600 (first 600 elements): rps=14225.3 (overall: 14601.3) avg_msec=19.826 (overall: 18.551) LRANGE_600 (first 600 elements): rps=16059.8 (overall: 14715.2) avg_msec=16.040 (overall: 18.337) LRANGE_600 (first 600 elements): rps=15444.0 (overall: 14767.8) avg_msec=17.258 (overall: 18.256) LRANGE_600 (first 600 elements): rps=11647.1 (overall: 14553.8) avg_msec=24.148 (overall: 18.579) LRANGE_600 (first 600 elements): rps=15260.0 (overall: 14598.3) avg_msec=18.702 (overall: 18.587) LRANGE_600 (first 600 elements): rps=12645.4 (overall: 14482.1) avg_msec=21.610 (overall: 18.744) LRANGE_600 (first 600 elements): rps=13115.1 (overall: 14405.1) avg_msec=20.959 (overall: 18.858) LRANGE_600 (first 600 elements): rps=15682.5 (overall: 14473.2) avg_msec=17.078 (overall: 18.755) LRANGE_600 (first 600 elements): rps=15151.4 (overall: 14507.4) avg_msec=18.414 (overall: 18.737) LRANGE_600 (first 600 elements): rps=14956.7 (overall: 14529.3) avg_msec=18.407 (overall: 18.721) LRANGE_600 (first 600 elements): rps=14960.2 (overall: 14549.0) avg_msec=17.985 (overall: 18.686) LRANGE_600 (first 600 elements): rps=16356.9 (overall: 14629.4) avg_msec=16.847 (overall: 18.594) LRANGE_600 (first 600 elements): rps=15458.5 (overall: 14664.4) avg_msec=17.158 (overall: 18.530) LRANGE_600 (first 600 elements): rps=17738.1 (overall: 14788.6) avg_msec=12.114 (overall: 18.220) LRANGE_600 (first 600 elements): rps=15888.9 (overall: 14831.3) avg_msec=15.886 (overall: 18.123) ====== LRANGE_600 (first 600 elements) ======
267s 100000 requests completed in 6.71 seconds
267s 50 parallel clients
267s 3 bytes payload
267s keep alive: 1
267s host configuration "save": 3600 1 300 100 60 10000
267s host configuration "appendonly": no
267s multi-thread: no
267s
267s Latency by percentile distribution:
267s 0.000% <= 0.735 milliseconds (cumulative count 10)
267s 50.000% <= 17.551 milliseconds (cumulative count 50050)
267s 75.000% <= 24.015 milliseconds (cumulative count 75040)
267s 87.500% <= 27.487 milliseconds (cumulative count 87530)
267s 93.750% <= 29.471 milliseconds (cumulative count 93800)
267s 96.875% <= 31.183 milliseconds (cumulative count 96900)
267s 98.438% <= 33.183 milliseconds (cumulative count 98440)
267s 99.219% <= 34.975 milliseconds (cumulative count 99220)
267s 99.609% <= 36.767 milliseconds (cumulative count 99610)
267s 99.805% <= 40.255 milliseconds (cumulative count 99810)
267s 99.902% <= 42.527 milliseconds (cumulative count 99910)
267s 99.951% <= 43.391 milliseconds (cumulative count 99960)
267s 99.976% <= 43.711 milliseconds (cumulative count 99980)
267s 99.988% <= 43.903 milliseconds (cumulative count 99990)
267s 99.994% <= 44.127 milliseconds (cumulative count 100000)
267s 100.000% <= 44.127 milliseconds (cumulative count 100000)
267s
267s Cumulative distribution of latencies:
267s 0.000% <= 0.103 milliseconds (cumulative count 0)
267s 0.010% <= 0.807 milliseconds (cumulative count 10)
267s 0.020% <= 1.207 milliseconds (cumulative count 20)
267s 0.050% <= 1.407 milliseconds (cumulative count 50)
267s 0.090% <= 1.503 milliseconds (cumulative count 90)
267s 0.150% <= 1.607 milliseconds (cumulative count 150)
267s 0.260% <= 1.703 milliseconds (cumulative count 260)
267s 0.310% <= 1.807 milliseconds (cumulative count 310)
267s 0.370% <= 1.903 milliseconds (cumulative count 370)
267s 0.480% <= 2.007 milliseconds (cumulative count 480)
267s 0.560% <= 2.103 milliseconds (cumulative count 560)
267s 0.930% <= 3.103 milliseconds (cumulative count 930)
267s 1.450% <= 4.103 milliseconds (cumulative count 1450)
267s 2.820% <= 5.103 milliseconds (cumulative count 2820)
267s 4.970% <= 6.103 milliseconds (cumulative count 4970)
267s 6.790% <= 7.103 milliseconds (cumulative count 6790)
267s 9.580% <= 8.103 milliseconds (cumulative count 9580)
267s 13.050% <= 9.103 milliseconds (cumulative count 13050)
267s 16.620% <= 10.103 milliseconds (cumulative count 16620)
267s 21.610% <= 11.103 milliseconds (cumulative count 21610)
267s 26.920% <= 12.103 milliseconds (cumulative count 26920)
267s 31.760% <= 13.103 milliseconds (cumulative count 31760)
267s 35.890% <= 14.103 milliseconds (cumulative count 35890)
267s 39.380% <= 15.103 milliseconds (cumulative count 39380)
267s 43.350% <= 16.103 milliseconds (cumulative count 43350)
267s 47.850% <= 17.103 milliseconds (cumulative count 47850)
267s 52.650% <= 18.111 milliseconds (cumulative count 52650)
267s 56.290% <= 19.103 milliseconds (cumulative count 56290)
267s 59.230% <= 20.111 milliseconds (cumulative count 59230)
267s 62.780% <= 21.103 milliseconds (cumulative count 62780)
267s 66.780% <= 22.111 milliseconds (cumulative count 66780)
267s 71.220% <= 23.103 milliseconds (cumulative count 71220)
267s 75.300% <= 24.111 milliseconds (cumulative count 75300)
267s 78.970% <= 25.103 milliseconds (cumulative count 78970)
267s 82.910% <= 26.111 milliseconds (cumulative count 82910)
267s 86.260% <= 27.103 milliseconds (cumulative count 86260)
267s 89.730% <= 28.111 milliseconds (cumulative count 89730)
267s 92.730% <= 29.103 milliseconds (cumulative count 92730)
267s 95.320% <= 30.111 milliseconds (cumulative count 95320)
267s 96.790% <= 31.103 milliseconds (cumulative count 96790)
267s 97.860% <= 32.111 milliseconds (cumulative count 97860)
267s 98.420% <= 33.119 milliseconds (cumulative count 98420)
267s 98.890% <= 34.111 milliseconds (cumulative count 98890)
267s 99.240% <= 35.103 milliseconds (cumulative count 99240)
267s 99.510% <= 36.127 milliseconds (cumulative count 99510)
267s 99.650% <= 37.119 milliseconds (cumulative count 99650)
267s 99.770% <= 38.111 milliseconds (cumulative count 99770)
267s 99.780% <= 39.103 milliseconds (cumulative count 99780)
267s 99.800% <= 40.127 milliseconds (cumulative count 99800)
267s 99.830% <= 41.119 milliseconds (cumulative count 99830)
267s 99.880% <= 42.111 milliseconds (cumulative count 99880)
267s 99.940% <= 43.103 milliseconds (cumulative count 99940)
267s 100.000% <= 44.127 milliseconds (cumulative count 100000)
267s
267s Summary:
267s throughput summary: 14909.80 requests per second
267s latency summary (msec):
267s avg min p50 p95 p99 max
267s 17.974 0.728 17.551 29.951 34.335 44.127
268s MSET (10 keys): rps=31235.1 (overall: 245000.0) avg_msec=1.825 (overall: 1.825) MSET (10 keys): rps=265800.0 (overall: 263439.7) avg_msec=1.745 (overall: 1.754) ====== MSET (10 keys) ======
268s 100000 requests completed in 0.38 seconds
268s 50 parallel clients
268s 3 bytes payload
268s keep alive: 1
268s host configuration "save": 3600 1 300 100 60 10000
268s host configuration "appendonly": no
268s multi-thread: no
268s
268s Latency by percentile distribution:
268s 0.000% <= 0.383 milliseconds (cumulative count 10)
268s 50.000% <= 1.775 milliseconds (cumulative count 50240)
268s 75.000% <= 1.943 milliseconds (cumulative count 75180)
268s 87.500% <= 2.047 milliseconds (cumulative count 87640)
268s 93.750% <= 2.127 milliseconds (cumulative count 94040)
268s 96.875% <= 2.199 milliseconds (cumulative count 97080)
268s 98.438% <= 2.263 milliseconds (cumulative count 98490)
268s 99.219% <= 2.343 milliseconds (cumulative count 99240)
268s 99.609% <= 2.439 milliseconds (cumulative count 99610)
268s 99.805% <= 2.535 milliseconds (cumulative count 99820)
268s 99.902% <= 2.679 milliseconds (cumulative count 99910)
268s 99.951% <= 2.759 milliseconds (cumulative count 99960)
268s 99.976% <= 2.799 milliseconds (cumulative count 99980)
268s 99.988% <= 3.023 milliseconds (cumulative count 99990)
268s 99.994% <= 3.071 milliseconds (cumulative count 100000)
268s 100.000% <= 3.071 milliseconds (cumulative count 100000)
268s
268s Cumulative distribution of latencies:
268s 0.000% <= 0.103 milliseconds (cumulative count 0)
268s 0.020% <= 0.407 milliseconds (cumulative count 20)
268s 0.060% <= 0.503 milliseconds (cumulative count 60)
268s 0.100% <= 0.607 milliseconds (cumulative count 100)
268s 0.120% <= 0.703 milliseconds (cumulative count 120)
268s 0.180% <= 0.903 milliseconds (cumulative count 180)
268s 0.410% <= 1.007 milliseconds (cumulative count 410)
268s 1.860% <= 1.103 milliseconds (cumulative count 1860)
268s 6.860% <= 1.207 milliseconds (cumulative count 6860)
268s 10.330% <= 1.303 milliseconds (cumulative count 10330)
268s 12.530% <= 1.407 milliseconds (cumulative count 12530)
268s 16.160% <= 1.503 milliseconds (cumulative count 16160)
268s 25.720% <= 1.607 milliseconds (cumulative count 25720)
268s 39.080% <= 1.703 milliseconds (cumulative count 39080)
268s 55.380% <= 1.807 milliseconds (cumulative count 55380)
268s 69.810% <= 1.903 milliseconds (cumulative count 69810)
268s 83.160% <= 2.007 milliseconds (cumulative count 83160)
268s 92.500% <= 2.103 milliseconds (cumulative count 92500)
268s 100.000% <= 3.103 milliseconds (cumulative count 100000)
268s
268s Summary:
268s throughput summary: 266666.66 requests per second
268s latency summary (msec):
268s avg min p50 p95 p99 max
268s 1.746 0.376 1.775 2.151 2.311 3.071
268s
268s autopkgtest [01:11:04]: test 0002-benchmark: -----------------------]
271s autopkgtest [01:11:07]: test 0002-benchmark: - - - - - - - - - - results - - - - - - - - - -
271s 0002-benchmark PASS
272s autopkgtest [01:11:08]: test 0003-redis-check-aof: preparing testbed
272s Reading package lists...
272s Building dependency tree...
272s Reading state information...
273s Starting pkgProblemResolver with broken count: 0
273s Starting 2 pkgProblemResolver with broken count: 0
273s Done
274s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
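Editor's note: the "Cumulative distribution of latencies" tables printed by redis-benchmark above map each latency bucket to the number of requests completed at or below that latency, so any percentile can be read off as the first bucket whose cumulative count reaches the target fraction of total requests. A minimal illustrative sketch (not part of the test suite; bucket data copied from the MSET (10 keys) run above):

```python
# Bucket data from the MSET (10 keys) cumulative distribution in the log above.
buckets = [  # (latency_ms, cumulative_count)
    (0.103, 0), (0.407, 20), (0.503, 60), (0.607, 100), (0.703, 120),
    (0.903, 180), (1.007, 410), (1.103, 1860), (1.207, 6860), (1.303, 10330),
    (1.407, 12530), (1.503, 16160), (1.607, 25720), (1.703, 39080),
    (1.807, 55380), (1.903, 69810), (2.007, 83160), (2.103, 92500),
    (3.103, 100000),
]

def percentile(buckets, p, total=100000):
    """Return the first latency bucket covering at least p% of requests."""
    threshold = total * p / 100.0
    for latency_ms, cumulative in buckets:
        if cumulative >= threshold:
            return latency_ms
    return buckets[-1][0]

print(percentile(buckets, 50))  # -> 1.807 (the bucket holding the median)
```

Note the bucket granularity: the p50 read from these coarse buckets (1.807 ms) sits just above the exact median reported in the percentile table (1.775 ms), as expected.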
275s autopkgtest [01:11:11]: test 0003-redis-check-aof: [-----------------------
276s autopkgtest [01:11:12]: test 0003-redis-check-aof: -----------------------]
278s autopkgtest [01:11:14]: test 0003-redis-check-aof: - - - - - - - - - - results - - - - - - - - - -
278s 0003-redis-check-aof PASS
279s autopkgtest [01:11:15]: test 0004-redis-check-rdb: preparing testbed
279s Reading package lists...
279s Building dependency tree...
279s Reading state information...
280s Starting pkgProblemResolver with broken count: 0
280s Starting 2 pkgProblemResolver with broken count: 0
280s Done
280s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
282s autopkgtest [01:11:18]: test 0004-redis-check-rdb: [-----------------------
287s OK
287s [offset 0] Checking RDB file /var/lib/redis/dump.rdb
287s [offset 27] AUX FIELD redis-ver = '7.0.15'
287s [offset 41] AUX FIELD redis-bits = '64'
287s [offset 53] AUX FIELD ctime = '1740705083'
287s [offset 68] AUX FIELD used-mem = '1596432'
287s [offset 80] AUX FIELD aof-base = '0'
287s [offset 82] Selecting DB ID 0
287s [offset 7184] Checksum OK
287s [offset 7184] \o/ RDB looks OK! \o/
287s [info] 4 keys read
287s [info] 0 expires
287s [info] 0 already expired
288s autopkgtest [01:11:24]: test 0004-redis-check-rdb: -----------------------]
288s autopkgtest [01:11:24]: test 0004-redis-check-rdb: - - - - - - - - - - results - - - - - - - - - -
288s 0004-redis-check-rdb PASS
288s autopkgtest [01:11:24]: test 0005-cjson: preparing testbed
289s Reading package lists...
289s Building dependency tree...
289s Reading state information...
289s Starting pkgProblemResolver with broken count: 0
290s Starting 2 pkgProblemResolver with broken count: 0
290s Done
290s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
292s autopkgtest [01:11:28]: test 0005-cjson: [-----------------------
297s
297s autopkgtest [01:11:33]: test 0005-cjson: -----------------------]
298s 0005-cjson PASS
298s autopkgtest [01:11:34]: test 0005-cjson: - - - - - - - - - - results - - - - - - - - - -
299s autopkgtest [01:11:35]: @@@@@@@@@@@@@@@@@@@@ summary
299s 0001-redis-cli PASS
299s 0002-benchmark PASS
299s 0003-redis-check-aof PASS
299s 0004-redis-check-rdb PASS
299s 0005-cjson PASS
317s nova [W] Using flock in prodstack6-arm64
317s flock: timeout while waiting to get lock
317s Creating nova instance adt-noble-arm64-redis-20250228-010635-juju-7f2275-prod-proposed-migration-environment-15-ce3a019c-4628-40c2-8e1f-cea97deb8f34 from image adt/ubuntu-noble-arm64-server-20250227.img (UUID cad5d987-a0b7-4ecb-bf4f-5b8c7880be0a)...
317s nova [W] Timed out waiting for 3343dcdd-530b-4357-9523-48483585ce02 to get deleted.
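Editor's note: the 0004-redis-check-rdb log above shows redis-check-rdb emitting one `[offset N] AUX FIELD key = 'value'` line per metadata field in the RDB header. An illustrative sketch (not part of the test suite) of extracting those fields from such output; the sample lines are copied verbatim from the log:

```python
import re

# Sample redis-check-rdb output, copied from the 0004-redis-check-rdb log above.
sample = """\
[offset 27] AUX FIELD redis-ver = '7.0.15'
[offset 41] AUX FIELD redis-bits = '64'
[offset 53] AUX FIELD ctime = '1740705083'
[offset 68] AUX FIELD used-mem = '1596432'
[offset 80] AUX FIELD aof-base = '0'
"""

# Match the field name and its single-quoted value on each AUX FIELD line.
AUX_RE = re.compile(r"\[offset \d+\] AUX FIELD (\S+) = '([^']*)'")

def parse_aux_fields(text):
    """Map each AUX FIELD name to its quoted value."""
    return dict(AUX_RE.findall(text))

fields = parse_aux_fields(sample)
print(fields["redis-ver"])  # -> 7.0.15
```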