0s autopkgtest [16:10:36]: starting date and time: 2025-03-15 16:10:36+0000
0s autopkgtest [16:10:36]: git checkout: 325255d2 Merge branch 'pin-any-arch' into 'ubuntu/production'
0s autopkgtest [16:10:36]: host juju-7f2275-prod-proposed-migration-environment-2; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.z4f4prje/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:glibc --apt-upgrade redict --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=glibc/2.41-1ubuntu2 -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-2@bos03-arm64-40.secgroup --name adt-plucky-arm64-redict-20250315-161036-juju-7f2275-prod-proposed-migration-environment-2-a75c7f0c-6614-4f8e-93d3-3e089d7de15e --image adt/ubuntu-plucky-arm64-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-2 --net-id=net_prod-proposed-migration -e TERM=linux -e ''"'"'http_proxy=http://squid.internal:3128'"'"'' -e ''"'"'https_proxy=http://squid.internal:3128'"'"'' -e ''"'"'no_proxy=127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,keyserver.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,10.24.0.0/24,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com,radosgw.ps5.canonical.com'"'"'' --mirror=http://ftpmaster.internal/ubuntu/
174s autopkgtest [16:13:30]: testbed dpkg architecture: arm64
174s autopkgtest [16:13:30]: testbed apt version: 2.9.33
174s autopkgtest [16:13:30]: @@@@@@@@@@@@@@@@@@@@ test bed setup
174s autopkgtest [16:13:30]: testbed release detected to be: None
175s autopkgtest [16:13:31]: updating testbed package index (apt update)
176s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed InRelease [126 kB]
176s Hit:2 http://ftpmaster.internal/ubuntu plucky InRelease
176s Hit:3 http://ftpmaster.internal/ubuntu plucky-updates InRelease
176s Hit:4 http://ftpmaster.internal/ubuntu plucky-security InRelease
176s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse Sources [15.8 kB]
176s Get:6 http://ftpmaster.internal/ubuntu plucky-proposed/main Sources [99.7 kB]
176s Get:7 http://ftpmaster.internal/ubuntu plucky-proposed/universe Sources [379 kB]
177s Get:8 http://ftpmaster.internal/ubuntu plucky-proposed/main arm64 Packages [111 kB]
177s Get:9 http://ftpmaster.internal/ubuntu plucky-proposed/main arm64 c-n-f Metadata [1856 B]
177s Get:10 http://ftpmaster.internal/ubuntu plucky-proposed/restricted arm64 c-n-f Metadata [116 B]
177s Get:11 http://ftpmaster.internal/ubuntu plucky-proposed/universe arm64 Packages [324 kB]
177s Get:12 http://ftpmaster.internal/ubuntu plucky-proposed/universe arm64 c-n-f Metadata [14.7 kB]
177s Get:13 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse arm64 Packages [4948 B]
177s Get:14 http://ftpmaster.internal/ubuntu plucky-proposed/multiverse arm64 c-n-f Metadata [268 B]
178s Fetched 1078 kB in 2s (575 kB/s)
179s Reading package lists...
179s Reading package lists...
180s Building dependency tree...
180s Reading state information...
181s Calculating upgrade...
181s Calculating upgrade...
181s The following packages will be upgraded:
181s   pinentry-curses python3-jinja2 strace
182s 3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
182s Need to get 647 kB of archives.
182s After this operation, 11.3 kB of additional disk space will be used.
182s Get:1 http://ftpmaster.internal/ubuntu plucky/main arm64 strace arm64 6.13+ds-1ubuntu1 [499 kB]
182s Get:2 http://ftpmaster.internal/ubuntu plucky/main arm64 pinentry-curses arm64 1.3.1-2ubuntu3 [39.2 kB]
182s Get:3 http://ftpmaster.internal/ubuntu plucky/main arm64 python3-jinja2 all 3.1.5-2ubuntu1 [109 kB]
183s Fetched 647 kB in 1s (609 kB/s)
183s (Reading database ... 117701 files and directories currently installed.)
183s Preparing to unpack .../strace_6.13+ds-1ubuntu1_arm64.deb ...
183s Unpacking strace (6.13+ds-1ubuntu1) over (6.11-0ubuntu1) ...
183s Preparing to unpack .../pinentry-curses_1.3.1-2ubuntu3_arm64.deb ...
183s Unpacking pinentry-curses (1.3.1-2ubuntu3) over (1.3.1-2ubuntu2) ...
183s Preparing to unpack .../python3-jinja2_3.1.5-2ubuntu1_all.deb ...
183s Unpacking python3-jinja2 (3.1.5-2ubuntu1) over (3.1.5-2) ...
184s Setting up pinentry-curses (1.3.1-2ubuntu3) ...
184s Setting up python3-jinja2 (3.1.5-2ubuntu1) ...
184s Setting up strace (6.13+ds-1ubuntu1) ...
184s Processing triggers for man-db (2.13.0-1) ...
185s Reading package lists...
185s Building dependency tree...
185s Reading state information...
185s Solving dependencies...
186s The following packages will be REMOVED:
186s   libnsl2* libpython3.12-minimal* libpython3.12-stdlib* libpython3.12t64*
186s   libunwind8* linux-headers-6.11.0-8* linux-headers-6.11.0-8-generic*
186s   linux-image-6.11.0-8-generic* linux-modules-6.11.0-8-generic*
186s   linux-tools-6.11.0-8* linux-tools-6.11.0-8-generic*
186s 0 upgraded, 0 newly installed, 11 to remove and 5 not upgraded.
186s After this operation, 267 MB disk space will be freed.
186s (Reading database ... 117701 files and directories currently installed.)
186s Removing linux-tools-6.11.0-8-generic (6.11.0-8.8) ...
186s Removing linux-tools-6.11.0-8 (6.11.0-8.8) ...
186s Removing libpython3.12t64:arm64 (3.12.9-1) ...
186s Removing libpython3.12-stdlib:arm64 (3.12.9-1) ...
186s Removing libnsl2:arm64 (1.3.0-3build3) ...
186s Removing libpython3.12-minimal:arm64 (3.12.9-1) ...
186s Removing libunwind8:arm64 (1.6.2-3.1) ...
186s Removing linux-headers-6.11.0-8-generic (6.11.0-8.8) ...
187s Removing linux-headers-6.11.0-8 (6.11.0-8.8) ...
188s Removing linux-image-6.11.0-8-generic (6.11.0-8.8) ...
188s I: /boot/vmlinuz.old is now a symlink to vmlinuz-6.14.0-10-generic
188s I: /boot/initrd.img.old is now a symlink to initrd.img-6.14.0-10-generic
188s /etc/kernel/postrm.d/initramfs-tools:
188s update-initramfs: Deleting /boot/initrd.img-6.11.0-8-generic
188s /etc/kernel/postrm.d/zz-flash-kernel:
188s flash-kernel: Kernel 6.11.0-8-generic has been removed.
189s flash-kernel: A higher version (6.14.0-10-generic) is still installed, no reflashing required.
189s /etc/kernel/postrm.d/zz-update-grub:
189s Sourcing file `/etc/default/grub'
189s Sourcing file `/etc/default/grub.d/50-cloudimg-settings.cfg'
189s Generating grub configuration file ...
189s Found linux image: /boot/vmlinuz-6.14.0-10-generic
189s Found initrd image: /boot/initrd.img-6.14.0-10-generic
189s Warning: os-prober will not be executed to detect other bootable partitions.
189s Systems on them will not be added to the GRUB boot configuration.
189s Check GRUB_DISABLE_OS_PROBER documentation entry.
189s Adding boot menu entry for UEFI Firmware Settings ...
189s done
189s Removing linux-modules-6.11.0-8-generic (6.11.0-8.8) ...
190s Processing triggers for libc-bin (2.41-1ubuntu1) ...
190s (Reading database ... 81650 files and directories currently installed.)
190s Purging configuration files for linux-image-6.11.0-8-generic (6.11.0-8.8) ...
190s Purging configuration files for libpython3.12-minimal:arm64 (3.12.9-1) ...
190s Purging configuration files for linux-modules-6.11.0-8-generic (6.11.0-8.8) ...
190s autopkgtest [16:13:46]: upgrading testbed (apt dist-upgrade and autopurge)
190s Reading package lists...
190s Building dependency tree...
190s Reading state information...
191s Calculating upgrade...Starting pkgProblemResolver with broken count: 0
191s Starting 2 pkgProblemResolver with broken count: 0
191s Done
192s Entering ResolveByKeep
192s 
192s Calculating upgrade...
192s The following packages will be upgraded:
192s   libc-bin libc-dev-bin libc6 libc6-dev locales
193s 5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
193s Need to get 9530 kB of archives.
193s After this operation, 0 B of additional disk space will be used.
193s Get:1 http://ftpmaster.internal/ubuntu plucky-proposed/main arm64 libc6-dev arm64 2.41-1ubuntu2 [1750 kB]
195s Get:2 http://ftpmaster.internal/ubuntu plucky-proposed/main arm64 libc-dev-bin arm64 2.41-1ubuntu2 [24.0 kB]
195s Get:3 http://ftpmaster.internal/ubuntu plucky-proposed/main arm64 libc6 arm64 2.41-1ubuntu2 [2910 kB]
198s Get:4 http://ftpmaster.internal/ubuntu plucky-proposed/main arm64 libc-bin arm64 2.41-1ubuntu2 [600 kB]
199s Get:5 http://ftpmaster.internal/ubuntu plucky-proposed/main arm64 locales all 2.41-1ubuntu2 [4246 kB]
204s Preconfiguring packages ...
204s Fetched 9530 kB in 11s (906 kB/s)
204s (Reading database ... 81647 files and directories currently installed.)
204s Preparing to unpack .../libc6-dev_2.41-1ubuntu2_arm64.deb ...
204s Unpacking libc6-dev:arm64 (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
204s Preparing to unpack .../libc-dev-bin_2.41-1ubuntu2_arm64.deb ...
204s Unpacking libc-dev-bin (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
204s Preparing to unpack .../libc6_2.41-1ubuntu2_arm64.deb ...
204s Unpacking libc6:arm64 (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
205s Setting up libc6:arm64 (2.41-1ubuntu2) ...
205s (Reading database ... 81647 files and directories currently installed.)
205s Preparing to unpack .../libc-bin_2.41-1ubuntu2_arm64.deb ...
205s Unpacking libc-bin (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
205s Setting up libc-bin (2.41-1ubuntu2) ...
205s (Reading database ... 81647 files and directories currently installed.)
205s Preparing to unpack .../locales_2.41-1ubuntu2_all.deb ...
205s Unpacking locales (2.41-1ubuntu2) over (2.41-1ubuntu1) ...
206s Setting up locales (2.41-1ubuntu2) ...
206s Generating locales (this might take a while)...
208s   en_US.UTF-8... done
208s Generation complete.
208s Setting up libc-dev-bin (2.41-1ubuntu2) ...
208s Setting up libc6-dev:arm64 (2.41-1ubuntu2) ...
208s Processing triggers for man-db (2.13.0-1) ...
209s Processing triggers for systemd (257.3-1ubuntu3) ...
210s Reading package lists...
210s Building dependency tree...
210s Reading state information...
210s Starting pkgProblemResolver with broken count: 0
211s Starting 2 pkgProblemResolver with broken count: 0
211s Done
211s Solving dependencies...
211s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
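The dist-upgrade above pulls only the trigger source package (glibc, i.e. libc6, libc-bin, libc6-dev, libc-dev-bin and locales 2.41-1ubuntu2) from plucky-proposed, while everything else stays on the release, updates and security pockets; that is what the --apt-pocket=proposed=src:glibc option on the runner command line requests. A minimal sketch of the kind of apt pinning that produces this behaviour (illustrative only: the file path, priorities and exact stanzas written by the autopkgtest infrastructure may differ):

    # /etc/apt/preferences.d/pin-glibc-proposed   (hypothetical path)
    # Keep plucky-proposed at a low priority by default...
    Package: *
    Pin: release a=plucky-proposed
    Pin-Priority: 100

    # ...but allow binaries built from the glibc source package to upgrade
    # from it (src: pinning needs a reasonably recent apt, which the testbed has).
    Package: src:glibc
    Pin: release a=plucky-proposed
    Pin-Priority: 995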
212s autopkgtest [16:14:08]: rebooting testbed after setup commands that affected boot
235s autopkgtest [16:14:31]: testbed running kernel: Linux 6.14.0-10-generic #10-Ubuntu SMP PREEMPT_DYNAMIC Wed Mar 12 15:45:31 UTC 2025
237s autopkgtest [16:14:33]: @@@@@@@@@@@@@@@@@@@@ apt-source redict
242s Get:1 http://ftpmaster.internal/ubuntu plucky/universe redict 7.3.2+ds-1 (dsc) [2417 B]
242s Get:2 http://ftpmaster.internal/ubuntu plucky/universe redict 7.3.2+ds-1 (tar) [1742 kB]
242s Get:3 http://ftpmaster.internal/ubuntu plucky/universe redict 7.3.2+ds-1 (diff) [13.4 kB]
243s gpgv: Signature made Wed Jan 8 14:03:38 2025 UTC
243s gpgv: using RSA key 4A5FD1CD115087CC03DC35C1D597897206C5F07F
243s gpgv: issuer "maytha8thedev@gmail.com"
243s gpgv: Can't check signature: No public key
243s dpkg-source: warning: cannot verify inline signature for ./redict_7.3.2+ds-1.dsc: no acceptable signature found
243s autopkgtest [16:14:39]: testing package redict version 7.3.2+ds-1
244s autopkgtest [16:14:40]: build not needed
246s autopkgtest [16:14:42]: test 0001-redict-cli: preparing testbed
246s Reading package lists...
247s Building dependency tree...
247s Reading state information...
247s Starting pkgProblemResolver with broken count: 0
247s Starting 2 pkgProblemResolver with broken count: 0
247s Done
248s The following NEW packages will be installed:
248s   libhiredict1.3.1 liblzf1 redict redict-sentinel redict-server redict-tools
248s 0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
248s Need to get 1266 kB of archives.
248s After this operation, 7216 kB of additional disk space will be used.
248s Get:1 http://ftpmaster.internal/ubuntu plucky/universe arm64 libhiredict1.3.1 arm64 1.3.1-2 [39.9 kB]
248s Get:2 http://ftpmaster.internal/ubuntu plucky/universe arm64 liblzf1 arm64 3.6-4 [7426 B]
248s Get:3 http://ftpmaster.internal/ubuntu plucky/universe arm64 redict-tools arm64 7.3.2+ds-1 [1161 kB]
249s Get:4 http://ftpmaster.internal/ubuntu plucky/universe arm64 redict-sentinel arm64 7.3.2+ds-1 [12.6 kB]
249s Get:5 http://ftpmaster.internal/ubuntu plucky/universe arm64 redict-server arm64 7.3.2+ds-1 [41.3 kB]
250s Get:6 http://ftpmaster.internal/ubuntu plucky/universe arm64 redict all 7.3.2+ds-1 [3720 B]
250s Fetched 1266 kB in 2s (806 kB/s)
250s Selecting previously unselected package libhiredict1.3.1:arm64.
250s (Reading database ... 81647 files and directories currently installed.)
250s Preparing to unpack .../0-libhiredict1.3.1_1.3.1-2_arm64.deb ...
250s Unpacking libhiredict1.3.1:arm64 (1.3.1-2) ...
250s Selecting previously unselected package liblzf1:arm64.
250s Preparing to unpack .../1-liblzf1_3.6-4_arm64.deb ...
250s Unpacking liblzf1:arm64 (3.6-4) ...
250s Selecting previously unselected package redict-tools.
250s Preparing to unpack .../2-redict-tools_7.3.2+ds-1_arm64.deb ...
250s Unpacking redict-tools (7.3.2+ds-1) ...
250s Selecting previously unselected package redict-sentinel.
250s Preparing to unpack .../3-redict-sentinel_7.3.2+ds-1_arm64.deb ...
250s Unpacking redict-sentinel (7.3.2+ds-1) ...
250s Selecting previously unselected package redict-server.
250s Preparing to unpack .../4-redict-server_7.3.2+ds-1_arm64.deb ...
250s Unpacking redict-server (7.3.2+ds-1) ...
250s Selecting previously unselected package redict.
250s Preparing to unpack .../5-redict_7.3.2+ds-1_all.deb ...
250s Unpacking redict (7.3.2+ds-1) ...
250s Setting up liblzf1:arm64 (3.6-4) ...
250s Setting up libhiredict1.3.1:arm64 (1.3.1-2) ...
250s Setting up redict-tools (7.3.2+ds-1) ...
250s Creating group 'redict' with GID 988.
250s Creating user 'redict' (Redict Key/Value Store) with UID 988 and GID 988.
250s Setting up redict-server (7.3.2+ds-1) ...
251s Created symlink '/etc/systemd/system/redict.service' → '/usr/lib/systemd/system/redict-server.service'.
251s Created symlink '/etc/systemd/system/multi-user.target.wants/redict-server.service' → '/usr/lib/systemd/system/redict-server.service'.
251s Setting up redict-sentinel (7.3.2+ds-1) ...
252s Created symlink '/etc/systemd/system/sentinel.service' → '/usr/lib/systemd/system/redict-sentinel.service'.
252s Created symlink '/etc/systemd/system/multi-user.target.wants/redict-sentinel.service' → '/usr/lib/systemd/system/redict-sentinel.service'.
252s Setting up redict (7.3.2+ds-1) ...
252s Processing triggers for libc-bin (2.41-1ubuntu2) ...
254s autopkgtest [16:14:50]: test 0001-redict-cli: [-----------------------
259s # Server
259s redict_version:7.3.2
259s redict_git_sha1:00000000
259s redict_git_dirty:0
259s redict_build_id:6e1afbc83ca9dd4a
259s redict_mode:standalone
259s redis_version:7.2.4
259s os:Linux 6.14.0-10-generic aarch64
259s arch_bits:64
259s monotonic_clock:POSIX clock_gettime
259s multiplexing_api:epoll
259s atomicvar_api:c11-builtin
259s gcc_version:14.2.0
259s process_id:1702
259s process_supervised:systemd
259s run_id:4b869b2bc4b71478d49fd7777f12e36427cd8a9f
259s tcp_port:6379
259s server_time_usec:1742055295442474
259s uptime_in_seconds:5
259s uptime_in_days:0
259s hz:10
259s configured_hz:10
259s lru_clock:14002047
259s executable:/usr/bin/redict-server
259s config_file:/etc/redict/redict.conf
259s io_threads_active:0
259s listener0:name=tcp,bind=127.0.0.1,bind=-::1,port=6379
259s 
259s # Clients
259s connected_clients:3
259s cluster_connections:0
259s maxclients:10000
259s client_recent_max_input_buffer:20480
259s client_recent_max_output_buffer:0
259s blocked_clients:0
259s tracking_clients:0
259s pubsub_clients:1
259s watching_clients:0
259s clients_in_timeout_table:0
259s total_watched_keys:0
259s total_blocking_keys:0
259s total_blocking_keys_on_nokey:0
259s 
259s # Memory
259s used_memory:1125136
259s used_memory_human:1.07M
259s used_memory_rss:13864960
259s used_memory_rss_human:13.22M
259s used_memory_peak:1125136
259s used_memory_peak_human:1.07M
259s used_memory_peak_perc:102.11%
259s used_memory_overhead:984064
259s used_memory_startup:939008
259s used_memory_dataset:141072
259s used_memory_dataset_perc:75.79%
259s allocator_allocated:4785440
259s allocator_active:9895936
259s allocator_resident:11206656
259s allocator_muzzy:0
259s total_system_memory:4088070144
259s total_system_memory_human:3.81G
259s used_memory_lua:31744
259s used_memory_vm_eval:31744
259s used_memory_lua_human:31.00K
259s used_memory_scripts_eval:0
259s number_of_cached_scripts:0
259s number_of_functions:0
259s number_of_libraries:0
259s used_memory_vm_functions:33792
259s used_memory_vm_total:65536
259s used_memory_vm_total_human:64.00K
259s used_memory_functions:200
259s used_memory_scripts:200
259s used_memory_scripts_human:200B
259s maxmemory:0
259s maxmemory_human:0B
259s maxmemory_policy:noeviction
259s allocator_frag_ratio:2.05
259s allocator_frag_bytes:5044960
259s allocator_rss_ratio:1.13
259s allocator_rss_bytes:1310720
259s rss_overhead_ratio:1.24
259s rss_overhead_bytes:2658304
259s mem_fragmentation_ratio:12.79
259s mem_fragmentation_bytes:12780824
259s mem_not_counted_for_evict:0
259s mem_replication_backlog:0
259s mem_total_replication_buffers:0
259s mem_clients_slaves:0
259s mem_clients_normal:44856
259s mem_cluster_links:0
259s mem_aof_buffer:0
259s mem_allocator:jemalloc-5.3.0
259s mem_overhead_db_hashtable_rehashing:0
259s active_defrag_running:0
259s lazyfree_pending_objects:0
259s lazyfreed_objects:0
259s 
259s # Persistence
259s loading:0
259s async_loading:0
259s current_cow_peak:0
259s current_cow_size:0
259s current_cow_size_age:0
259s current_fork_perc:0.00
259s current_save_keys_processed:0
259s current_save_keys_total:0
259s rdb_changes_since_last_save:0
259s rdb_bgsave_in_progress:0
259s rdb_last_save_time:1742055290
259s rdb_last_bgsave_status:ok
259s rdb_last_bgsave_time_sec:-1
259s rdb_current_bgsave_time_sec:-1
259s rdb_saves:0
259s rdb_last_cow_size:0
259s rdb_last_load_keys_expired:0
259s rdb_last_load_keys_loaded:0
259s aof_enabled:0
259s aof_rewrite_in_progress:0
259s aof_rewrite_scheduled:0
259s aof_last_rewrite_time_sec:-1
259s aof_current_rewrite_time_sec:-1
259s aof_last_bgrewrite_status:ok
259s aof_rewrites:0
259s aof_rewrites_consecutive_failures:0
259s aof_last_write_status:ok
259s aof_last_cow_size:0
259s module_fork_in_progress:0
259s module_fork_last_cow_size:0
259s 
259s # Stats
259s total_connections_received:3
259s total_commands_processed:9
259s instantaneous_ops_per_sec:1
259s total_net_input_bytes:497
259s total_net_output_bytes:360
259s total_net_repl_input_bytes:0
259s total_net_repl_output_bytes:0
259s instantaneous_input_kbps:0.09
259s instantaneous_output_kbps:0.09
259s instantaneous_input_repl_kbps:0.00
259s instantaneous_output_repl_kbps:0.00
259s rejected_connections:0
259s sync_full:0
259s sync_partial_ok:0
259s sync_partial_err:0
259s expired_keys:0
259s expired_stale_perc:0.00
259s expired_time_cap_reached_count:0
259s expire_cycle_cpu_milliseconds:0
259s evicted_keys:0
259s evicted_clients:0
259s evicted_scripts:0
259s total_eviction_exceeded_time:0
259s current_eviction_exceeded_time:0
259s keyspace_hits:0
259s keyspace_misses:0
259s pubsub_channels:1
259s pubsub_patterns:0
259s pubsubshard_channels:0
259s latest_fork_usec:0
259s total_forks:0
259s migrate_cached_sockets:0
259s slave_expires_tracked_keys:0
259s active_defrag_hits:0
259s active_defrag_misses:0
259s active_defrag_key_hits:0
259s active_defrag_key_misses:0
259s total_active_defrag_time:0
259s current_active_defrag_time:0
259s tracking_total_keys:0
259s tracking_total_items:0
259s tracking_total_prefixes:0
259s unexpected_error_replies:0
259s total_error_replies:0
259s dump_payload_sanitizations:0
259s total_reads_processed:8
259s total_writes_processed:9
259s io_threaded_reads_processed:0
259s io_threaded_writes_processed:0
259s client_query_buffer_limit_disconnections:0
259s client_output_buffer_limit_disconnections:0
259s reply_buffer_shrinks:2
259s reply_buffer_expands:0
259s eventloop_cycles:58
259s eventloop_duration_sum:10790
259s eventloop_duration_cmd_sum:36
259s instantaneous_eventloop_cycles_per_sec:10
259s instantaneous_eventloop_duration_usec:194
259s acl_access_denied_auth:0
259s acl_access_denied_cmd:0
259s acl_access_denied_key:0
259s acl_access_denied_channel:0
259s 
259s # Replication
259s role:master
259s connected_slaves:0
259s master_failover_state:no-failover
259s master_replid:225a856792de16cd650bb89ac894c7d74ede6bdb
259s master_replid2:0000000000000000000000000000000000000000
259s master_repl_offset:0
259s second_repl_offset:-1
259s repl_backlog_active:0
259s repl_backlog_size:1048576
259s repl_backlog_first_byte_offset:0
259s repl_backlog_histlen:0
259s 
259s # CPU
259s used_cpu_sys:0.020229
259s used_cpu_user:0.041917
259s used_cpu_sys_children:0.002710
259s used_cpu_user_children:0.000425
259s used_cpu_sys_main_thread:0.019254
259s used_cpu_user_main_thread:0.042544
259s 
259s # Modules
259s 
259s # Errorstats
259s 
259s # Cluster
259s cluster_enabled:0
259s 
259s # Keyspace
259s Redict ver. 7.3.2
259s autopkgtest [16:14:55]: test 0001-redict-cli: -----------------------]
260s autopkgtest [16:14:56]: test 0001-redict-cli: - - - - - - - - - - results - - - - - - - - - -
260s 0001-redict-cli PASS
260s autopkgtest [16:14:56]: test 0002-benchmark: preparing testbed
260s Reading package lists...
261s Building dependency tree...
261s Reading state information...
261s Starting pkgProblemResolver with broken count: 0
261s Starting 2 pkgProblemResolver with broken count: 0
261s Done
262s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
263s autopkgtest [16:14:59]: test 0002-benchmark: [-----------------------
269s PING_INLINE: rps=0.0 (overall: nan) avg_msec=nan (overall: nan) ====== PING_INLINE ======
269s 100000 requests completed in 0.15 seconds
269s 50 parallel clients
269s 3 bytes payload
269s keep alive: 1
269s host configuration "save": 3600 1 300 100 60 10000
269s host configuration "appendonly": no
269s multi-thread: no
269s 
269s Latency by percentile distribution:
269s 0.000% <= 0.151 milliseconds (cumulative count 10)
269s 50.000% <= 0.431 milliseconds (cumulative count 53840)
269s 75.000% <= 0.471 milliseconds (cumulative count 77930)
269s 87.500% <= 0.511 milliseconds (cumulative count 88930)
269s 93.750% <= 0.567 milliseconds (cumulative count 94110)
269s 96.875% <= 0.671 milliseconds (cumulative count 96880)
269s 98.438% <= 0.815 milliseconds (cumulative count 98460)
269s 99.219% <= 0.927 milliseconds (cumulative count 99260)
269s 99.609% <= 0.991 milliseconds (cumulative count 99620)
269s 99.805% <= 1.055 milliseconds (cumulative count 99810)
269s 99.902% <= 1.103 milliseconds (cumulative count 99910)
269s 99.951% <= 1.191 milliseconds (cumulative count 99960)
269s 99.976% <= 1.343 milliseconds (cumulative count 99980)
269s 99.988% <= 1.375 milliseconds (cumulative count 99990)
269s 99.994% <= 1.511 milliseconds (cumulative count 100000)
269s 100.000% <= 1.511 milliseconds (cumulative count 100000)
269s 
269s Cumulative distribution of latencies:
269s 0.000% <= 0.103 milliseconds (cumulative count 0)
269s 0.080% <= 0.207 milliseconds (cumulative count 80)
269s 0.320% <= 0.303 milliseconds (cumulative count 320)
269s 33.950% <= 0.407 milliseconds (cumulative count 33950)
269s 87.440% <= 0.503 milliseconds (cumulative count 87440)
269s 95.450% <= 0.607 milliseconds (cumulative count 95450)
269s 97.240% <= 0.703 milliseconds (cumulative count 97240)
269s 98.360% <= 0.807 milliseconds (cumulative count 98360)
269s 99.100% <= 0.903 milliseconds (cumulative count 99100)
269s 99.680% <= 1.007 milliseconds (cumulative count 99680)
269s 99.910% <= 1.103 milliseconds (cumulative count 99910)
269s 99.960% <= 1.207 milliseconds (cumulative count 99960)
269s 99.990% <= 1.407 milliseconds (cumulative count
99990) 269s 100.000% <= 1.607 milliseconds (cumulative count 100000) 269s 269s Summary: 269s throughput summary: 653594.81 requests per second 269s latency summary (msec): 269s avg min p50 p95 p99 max 269s 0.447 0.144 0.431 0.591 0.887 1.511 269s PING_MBULK: rps=240278.9 (overall: 628229.2) avg_msec=0.426 (overall: 0.426) ====== PING_MBULK ====== 269s 100000 requests completed in 0.16 seconds 269s 50 parallel clients 269s 3 bytes payload 269s keep alive: 1 269s host configuration "save": 3600 1 300 100 60 10000 269s host configuration "appendonly": no 269s multi-thread: no 269s 269s Latency by percentile distribution: 269s 0.000% <= 0.159 milliseconds (cumulative count 10) 269s 50.000% <= 0.407 milliseconds (cumulative count 51450) 269s 75.000% <= 0.439 milliseconds (cumulative count 76750) 269s 87.500% <= 0.471 milliseconds (cumulative count 87670) 269s 93.750% <= 0.519 milliseconds (cumulative count 94440) 269s 96.875% <= 0.591 milliseconds (cumulative count 96960) 269s 98.438% <= 0.679 milliseconds (cumulative count 98460) 269s 99.219% <= 0.791 milliseconds (cumulative count 99220) 269s 99.609% <= 0.943 milliseconds (cumulative count 99610) 269s 99.805% <= 1.783 milliseconds (cumulative count 99820) 269s 99.902% <= 1.807 milliseconds (cumulative count 99930) 269s 99.951% <= 1.823 milliseconds (cumulative count 99980) 269s 99.988% <= 1.831 milliseconds (cumulative count 100000) 269s 100.000% <= 1.831 milliseconds (cumulative count 100000) 269s 269s Cumulative distribution of latencies: 269s 0.000% <= 0.103 milliseconds (cumulative count 0) 269s 0.020% <= 0.207 milliseconds (cumulative count 20) 269s 0.250% <= 0.303 milliseconds (cumulative count 250) 269s 51.450% <= 0.407 milliseconds (cumulative count 51450) 269s 92.840% <= 0.503 milliseconds (cumulative count 92840) 269s 97.200% <= 0.607 milliseconds (cumulative count 97200) 269s 98.720% <= 0.703 milliseconds (cumulative count 98720) 269s 99.270% <= 0.807 milliseconds (cumulative count 99270) 269s 99.520% <= 0.903 milliseconds (cumulative count 99520) 269s 99.680% <= 1.007 milliseconds (cumulative count 99680) 269s 99.720% <= 1.103 milliseconds (cumulative count 99720) 269s 99.730% <= 1.207 milliseconds (cumulative count 99730) 269s 99.930% <= 1.807 milliseconds (cumulative count 99930) 269s 100.000% <= 1.903 milliseconds (cumulative count 100000) 269s 269s Summary: 269s throughput summary: 632911.38 requests per second 269s latency summary (msec): 269s avg min p50 p95 p99 max 269s 0.424 0.152 0.407 0.527 0.743 1.831 269s ====== SET ====== 269s 100000 requests completed in 0.17 seconds 269s 50 parallel clients 269s 3 bytes payload 269s keep alive: 1 269s host configuration "save": 3600 1 300 100 60 10000 269s host configuration "appendonly": no 269s multi-thread: no 269s 269s Latency by percentile distribution: 269s 0.000% <= 0.295 milliseconds (cumulative count 10) 269s 50.000% <= 0.679 milliseconds (cumulative count 51610) 269s 75.000% <= 0.799 milliseconds (cumulative count 75270) 269s 87.500% <= 0.895 milliseconds (cumulative count 88080) 269s 93.750% <= 0.959 milliseconds (cumulative count 94260) 269s 96.875% <= 1.007 milliseconds (cumulative count 97040) 269s 98.438% <= 1.047 milliseconds (cumulative count 98480) 269s 99.219% <= 1.095 milliseconds (cumulative count 99290) 269s 99.609% <= 1.143 milliseconds (cumulative count 99650) 269s 99.805% <= 1.207 milliseconds (cumulative count 99840) 269s 99.902% <= 1.247 milliseconds (cumulative count 99910) 269s 99.951% <= 1.279 milliseconds (cumulative count 99960) 269s 99.976% <= 1.311 
milliseconds (cumulative count 99980) 269s 99.988% <= 1.335 milliseconds (cumulative count 99990) 269s 99.994% <= 1.383 milliseconds (cumulative count 100000) 269s 100.000% <= 1.383 milliseconds (cumulative count 100000) 269s 269s Cumulative distribution of latencies: 269s 0.000% <= 0.103 milliseconds (cumulative count 0) 269s 0.010% <= 0.303 milliseconds (cumulative count 10) 269s 2.380% <= 0.407 milliseconds (cumulative count 2380) 269s 11.500% <= 0.503 milliseconds (cumulative count 11500) 269s 31.770% <= 0.607 milliseconds (cumulative count 31770) 269s 57.140% <= 0.703 milliseconds (cumulative count 57140) 269s 76.500% <= 0.807 milliseconds (cumulative count 76500) 269s 88.940% <= 0.903 milliseconds (cumulative count 88940) 269s 97.040% <= 1.007 milliseconds (cumulative count 97040) 269s 99.390% <= 1.103 milliseconds (cumulative count 99390) 269s 99.840% <= 1.207 milliseconds (cumulative count 99840) 269s 99.970% <= 1.303 milliseconds (cumulative count 99970) 269s 100.000% <= 1.407 milliseconds (cumulative count 100000) 269s 269s Summary: 269s throughput summary: 602409.69 requests per second 269s latency summary (msec): 269s avg min p50 p95 p99 max 269s 0.691 0.288 0.679 0.975 1.079 1.383 269s GET: rps=42480.0 (overall: 590000.0) avg_msec=0.507 (overall: 0.507) ====== GET ====== 269s 100000 requests completed in 0.16 seconds 269s 50 parallel clients 269s 3 bytes payload 269s keep alive: 1 269s host configuration "save": 3600 1 300 100 60 10000 269s host configuration "appendonly": no 269s multi-thread: no 269s 269s Latency by percentile distribution: 269s 0.000% <= 0.183 milliseconds (cumulative count 10) 269s 50.000% <= 0.471 milliseconds (cumulative count 50480) 269s 75.000% <= 0.527 milliseconds (cumulative count 75080) 269s 87.500% <= 0.591 milliseconds (cumulative count 88330) 269s 93.750% <= 0.663 milliseconds (cumulative count 93920) 269s 96.875% <= 0.743 milliseconds (cumulative count 96900) 269s 98.438% <= 0.807 milliseconds (cumulative count 98440) 269s 99.219% <= 0.863 milliseconds (cumulative count 99220) 269s 99.609% <= 0.911 milliseconds (cumulative count 99630) 269s 99.805% <= 0.951 milliseconds (cumulative count 99830) 269s 99.902% <= 0.983 milliseconds (cumulative count 99910) 269s 99.951% <= 1.023 milliseconds (cumulative count 99960) 269s 99.976% <= 1.047 milliseconds (cumulative count 99980) 269s 99.988% <= 1.071 milliseconds (cumulative count 99990) 269s 99.994% <= 1.127 milliseconds (cumulative count 100000) 269s 100.000% <= 1.127 milliseconds (cumulative count 100000) 269s 269s Cumulative distribution of latencies: 269s 0.000% <= 0.103 milliseconds (cumulative count 0) 269s 0.040% <= 0.207 milliseconds (cumulative count 40) 269s 0.250% <= 0.303 milliseconds (cumulative count 250) 269s 15.450% <= 0.407 milliseconds (cumulative count 15450) 269s 66.050% <= 0.503 milliseconds (cumulative count 66050) 269s 90.030% <= 0.607 milliseconds (cumulative count 90030) 269s 95.690% <= 0.703 milliseconds (cumulative count 95690) 269s 98.440% <= 0.807 milliseconds (cumulative count 98440) 269s 99.570% <= 0.903 milliseconds (cumulative count 99570) 269s 99.940% <= 1.007 milliseconds (cumulative count 99940) 269s 99.990% <= 1.103 milliseconds (cumulative count 99990) 269s 100.000% <= 1.207 milliseconds (cumulative count 100000) 269s 269s Summary: 269s throughput summary: 613496.94 requests per second 269s latency summary (msec): 269s avg min p50 p95 p99 max 269s 0.491 0.176 0.471 0.687 0.847 1.127 269s INCR: rps=266880.0 (overall: 647767.0) avg_msec=0.461 (overall: 0.461) ====== 
INCR ====== 269s 100000 requests completed in 0.16 seconds 269s 50 parallel clients 269s 3 bytes payload 269s keep alive: 1 269s host configuration "save": 3600 1 300 100 60 10000 269s host configuration "appendonly": no 269s multi-thread: no 269s 269s Latency by percentile distribution: 269s 0.000% <= 0.151 milliseconds (cumulative count 10) 269s 50.000% <= 0.455 milliseconds (cumulative count 53090) 269s 75.000% <= 0.519 milliseconds (cumulative count 76990) 269s 87.500% <= 0.575 milliseconds (cumulative count 88490) 269s 93.750% <= 0.631 milliseconds (cumulative count 94340) 269s 96.875% <= 0.679 milliseconds (cumulative count 97070) 269s 98.438% <= 0.727 milliseconds (cumulative count 98510) 269s 99.219% <= 0.791 milliseconds (cumulative count 99280) 269s 99.609% <= 0.863 milliseconds (cumulative count 99620) 269s 99.805% <= 0.911 milliseconds (cumulative count 99810) 269s 99.902% <= 0.983 milliseconds (cumulative count 99910) 269s 99.951% <= 1.047 milliseconds (cumulative count 99960) 269s 99.976% <= 1.071 milliseconds (cumulative count 99980) 269s 99.988% <= 1.079 milliseconds (cumulative count 99990) 269s 99.994% <= 1.095 milliseconds (cumulative count 100000) 269s 100.000% <= 1.095 milliseconds (cumulative count 100000) 269s 269s Cumulative distribution of latencies: 269s 0.000% <= 0.103 milliseconds (cumulative count 0) 269s 0.050% <= 0.207 milliseconds (cumulative count 50) 269s 0.260% <= 0.303 milliseconds (cumulative count 260) 269s 28.830% <= 0.407 milliseconds (cumulative count 28830) 269s 72.010% <= 0.503 milliseconds (cumulative count 72010) 269s 92.300% <= 0.607 milliseconds (cumulative count 92300) 269s 97.950% <= 0.703 milliseconds (cumulative count 97950) 269s 99.340% <= 0.807 milliseconds (cumulative count 99340) 269s 99.790% <= 0.903 milliseconds (cumulative count 99790) 269s 99.920% <= 1.007 milliseconds (cumulative count 99920) 269s 100.000% <= 1.103 milliseconds (cumulative count 100000) 269s 269s Summary: 269s throughput summary: 645161.31 requests per second 269s latency summary (msec): 269s avg min p50 p95 p99 max 269s 0.467 0.144 0.455 0.647 0.767 1.095 269s ====== LPUSH ====== 269s 100000 requests completed in 0.18 seconds 269s 50 parallel clients 269s 3 bytes payload 269s keep alive: 1 269s host configuration "save": 3600 1 300 100 60 10000 269s host configuration "appendonly": no 269s multi-thread: no 269s 269s Latency by percentile distribution: 269s 0.000% <= 0.303 milliseconds (cumulative count 10) 269s 50.000% <= 0.759 milliseconds (cumulative count 51260) 269s 75.000% <= 0.895 milliseconds (cumulative count 75490) 269s 87.500% <= 0.999 milliseconds (cumulative count 88070) 269s 93.750% <= 1.055 milliseconds (cumulative count 93810) 269s 96.875% <= 1.103 milliseconds (cumulative count 96950) 269s 98.438% <= 1.143 milliseconds (cumulative count 98490) 269s 99.219% <= 1.175 milliseconds (cumulative count 99220) 269s 99.609% <= 1.223 milliseconds (cumulative count 99630) 269s 99.805% <= 1.271 milliseconds (cumulative count 99830) 269s 99.902% <= 1.319 milliseconds (cumulative count 99930) 269s 99.951% <= 1.367 milliseconds (cumulative count 99960) 269s 99.976% <= 1.399 milliseconds (cumulative count 99980) 269s 99.988% <= 1.423 milliseconds (cumulative count 100000) 269s 100.000% <= 1.423 milliseconds (cumulative count 100000) 269s 269s Cumulative distribution of latencies: 269s 0.000% <= 0.103 milliseconds (cumulative count 0) 269s 0.010% <= 0.303 milliseconds (cumulative count 10) 269s 0.590% <= 0.407 milliseconds (cumulative count 590) 269s 3.940% <= 
0.503 milliseconds (cumulative count 3940) 269s 15.400% <= 0.607 milliseconds (cumulative count 15400) 269s 36.650% <= 0.703 milliseconds (cumulative count 36650) 269s 62.100% <= 0.807 milliseconds (cumulative count 62100) 269s 76.570% <= 0.903 milliseconds (cumulative count 76570) 269s 89.050% <= 1.007 milliseconds (cumulative count 89050) 269s 96.950% <= 1.103 milliseconds (cumulative count 96950) 269s 99.530% <= 1.207 milliseconds (cumulative count 99530) 269s 99.900% <= 1.303 milliseconds (cumulative count 99900) 269s 99.980% <= 1.407 milliseconds (cumulative count 99980) 269s 100.000% <= 1.503 milliseconds (cumulative count 100000) 269s 269s Summary: 269s throughput summary: 549450.56 requests per second 269s latency summary (msec): 270s avg min p50 p95 p99 max 270s 0.775 0.296 0.759 1.071 1.167 1.423 270s RPUSH: rps=27400.0 (overall: 570833.3) avg_msec=0.660 (overall: 0.660) ====== RPUSH ====== 270s 100000 requests completed in 0.17 seconds 270s 50 parallel clients 270s 3 bytes payload 270s keep alive: 1 270s host configuration "save": 3600 1 300 100 60 10000 270s host configuration "appendonly": no 270s multi-thread: no 270s 270s Latency by percentile distribution: 270s 0.000% <= 0.279 milliseconds (cumulative count 10) 270s 50.000% <= 0.647 milliseconds (cumulative count 50280) 270s 75.000% <= 0.767 milliseconds (cumulative count 75410) 270s 87.500% <= 0.863 milliseconds (cumulative count 88160) 270s 93.750% <= 0.935 milliseconds (cumulative count 94280) 270s 96.875% <= 0.983 milliseconds (cumulative count 97180) 270s 98.438% <= 1.023 milliseconds (cumulative count 98500) 270s 99.219% <= 1.071 milliseconds (cumulative count 99320) 270s 99.609% <= 1.119 milliseconds (cumulative count 99610) 270s 99.805% <= 1.191 milliseconds (cumulative count 99820) 270s 99.902% <= 1.231 milliseconds (cumulative count 99920) 270s 99.951% <= 1.271 milliseconds (cumulative count 99970) 270s 99.976% <= 1.287 milliseconds (cumulative count 99980) 270s 99.988% <= 1.319 milliseconds (cumulative count 99990) 270s 99.994% <= 1.367 milliseconds (cumulative count 100000) 270s 100.000% <= 1.367 milliseconds (cumulative count 100000) 270s 270s Cumulative distribution of latencies: 270s 0.000% <= 0.103 milliseconds (cumulative count 0) 270s 0.040% <= 0.303 milliseconds (cumulative count 40) 270s 3.760% <= 0.407 milliseconds (cumulative count 3760) 270s 18.290% <= 0.503 milliseconds (cumulative count 18290) 270s 40.570% <= 0.607 milliseconds (cumulative count 40570) 270s 62.840% <= 0.703 milliseconds (cumulative count 62840) 270s 81.590% <= 0.807 milliseconds (cumulative count 81590) 270s 91.850% <= 0.903 milliseconds (cumulative count 91850) 270s 98.100% <= 1.007 milliseconds (cumulative count 98100) 270s 99.520% <= 1.103 milliseconds (cumulative count 99520) 270s 99.860% <= 1.207 milliseconds (cumulative count 99860) 270s 99.980% <= 1.303 milliseconds (cumulative count 99980) 270s 100.000% <= 1.407 milliseconds (cumulative count 100000) 270s 270s Summary: 270s throughput summary: 595238.12 requests per second 270s latency summary (msec): 270s avg min p50 p95 p99 max 270s 0.659 0.272 0.647 0.951 1.055 1.367 270s LPOP: rps=189880.5 (overall: 512473.1) avg_msec=0.840 (overall: 0.840) ====== LPOP ====== 270s 100000 requests completed in 0.19 seconds 270s 50 parallel clients 270s 3 bytes payload 270s keep alive: 1 270s host configuration "save": 3600 1 300 100 60 10000 270s host configuration "appendonly": no 270s multi-thread: no 270s 270s Latency by percentile distribution: 270s 0.000% <= 0.351 milliseconds 
(cumulative count 10) 270s 50.000% <= 0.815 milliseconds (cumulative count 50790) 270s 75.000% <= 0.967 milliseconds (cumulative count 75670) 270s 87.500% <= 1.071 milliseconds (cumulative count 88250) 270s 93.750% <= 1.135 milliseconds (cumulative count 94330) 270s 96.875% <= 1.183 milliseconds (cumulative count 97020) 270s 98.438% <= 1.231 milliseconds (cumulative count 98590) 270s 99.219% <= 1.271 milliseconds (cumulative count 99270) 270s 99.609% <= 1.311 milliseconds (cumulative count 99610) 270s 99.805% <= 1.351 milliseconds (cumulative count 99840) 270s 99.902% <= 1.391 milliseconds (cumulative count 99910) 270s 99.951% <= 1.447 milliseconds (cumulative count 99960) 270s 99.976% <= 1.471 milliseconds (cumulative count 99980) 270s 99.988% <= 1.479 milliseconds (cumulative count 99990) 270s 99.994% <= 1.495 milliseconds (cumulative count 100000) 270s 100.000% <= 1.495 milliseconds (cumulative count 100000) 270s 270s Cumulative distribution of latencies: 270s 0.000% <= 0.103 milliseconds (cumulative count 0) 270s 0.150% <= 0.407 milliseconds (cumulative count 150) 270s 1.150% <= 0.503 milliseconds (cumulative count 1150) 270s 5.630% <= 0.607 milliseconds (cumulative count 5630) 270s 21.520% <= 0.703 milliseconds (cumulative count 21520) 270s 49.020% <= 0.807 milliseconds (cumulative count 49020) 270s 66.610% <= 0.903 milliseconds (cumulative count 66610) 270s 80.840% <= 1.007 milliseconds (cumulative count 80840) 270s 91.440% <= 1.103 milliseconds (cumulative count 91440) 270s 97.870% <= 1.207 milliseconds (cumulative count 97870) 270s 99.590% <= 1.303 milliseconds (cumulative count 99590) 270s 99.920% <= 1.407 milliseconds (cumulative count 99920) 270s 100.000% <= 1.503 milliseconds (cumulative count 100000) 270s 270s Summary: 270s throughput summary: 515463.91 requests per second 270s latency summary (msec): 270s avg min p50 p95 p99 max 270s 0.842 0.344 0.815 1.143 1.255 1.495 270s RPOP: rps=320200.0 (overall: 540878.4) avg_msec=0.794 (overall: 0.794) ====== RPOP ====== 270s 100000 requests completed in 0.19 seconds 270s 50 parallel clients 270s 3 bytes payload 270s keep alive: 1 270s host configuration "save": 3600 1 300 100 60 10000 270s host configuration "appendonly": no 270s multi-thread: no 270s 270s Latency by percentile distribution: 270s 0.000% <= 0.295 milliseconds (cumulative count 20) 270s 50.000% <= 0.767 milliseconds (cumulative count 50140) 270s 75.000% <= 0.911 milliseconds (cumulative count 75290) 270s 87.500% <= 1.015 milliseconds (cumulative count 88430) 270s 93.750% <= 1.071 milliseconds (cumulative count 94370) 270s 96.875% <= 1.111 milliseconds (cumulative count 96950) 270s 98.438% <= 1.159 milliseconds (cumulative count 98580) 270s 99.219% <= 1.199 milliseconds (cumulative count 99230) 270s 99.609% <= 1.239 milliseconds (cumulative count 99620) 270s 99.805% <= 1.279 milliseconds (cumulative count 99860) 270s 99.902% <= 1.311 milliseconds (cumulative count 99910) 270s 99.951% <= 1.351 milliseconds (cumulative count 99970) 270s 99.976% <= 1.359 milliseconds (cumulative count 99980) 270s 99.988% <= 1.399 milliseconds (cumulative count 99990) 270s 99.994% <= 1.407 milliseconds (cumulative count 100000) 270s 100.000% <= 1.407 milliseconds (cumulative count 100000) 270s 270s Cumulative distribution of latencies: 270s 0.000% <= 0.103 milliseconds (cumulative count 0) 270s 0.030% <= 0.303 milliseconds (cumulative count 30) 270s 0.520% <= 0.407 milliseconds (cumulative count 520) 270s 2.010% <= 0.503 milliseconds (cumulative count 2010) 270s 11.300% <= 0.607 
milliseconds (cumulative count 11300) 270s 32.320% <= 0.703 milliseconds (cumulative count 32320) 270s 58.750% <= 0.807 milliseconds (cumulative count 58750) 270s 74.120% <= 0.903 milliseconds (cumulative count 74120) 270s 87.490% <= 1.007 milliseconds (cumulative count 87490) 270s 96.540% <= 1.103 milliseconds (cumulative count 96540) 270s 99.310% <= 1.207 milliseconds (cumulative count 99310) 270s 99.880% <= 1.303 milliseconds (cumulative count 99880) 270s 100.000% <= 1.407 milliseconds (cumulative count 100000) 270s 270s Summary: 270s throughput summary: 540540.56 requests per second 270s latency summary (msec): 270s avg min p50 p95 p99 max 270s 0.794 0.288 0.767 1.079 1.183 1.407 270s ====== SADD ====== 270s 100000 requests completed in 0.16 seconds 270s 50 parallel clients 270s 3 bytes payload 270s keep alive: 1 270s host configuration "save": 3600 1 300 100 60 10000 270s host configuration "appendonly": no 270s multi-thread: no 270s 270s Latency by percentile distribution: 270s 0.000% <= 0.231 milliseconds (cumulative count 10) 270s 50.000% <= 0.503 milliseconds (cumulative count 50240) 270s 75.000% <= 0.575 milliseconds (cumulative count 75260) 270s 87.500% <= 0.639 milliseconds (cumulative count 88370) 270s 93.750% <= 0.703 milliseconds (cumulative count 94150) 270s 96.875% <= 0.759 milliseconds (cumulative count 96880) 270s 98.438% <= 0.831 milliseconds (cumulative count 98530) 270s 99.219% <= 0.903 milliseconds (cumulative count 99260) 270s 99.609% <= 0.951 milliseconds (cumulative count 99640) 270s 99.805% <= 0.983 milliseconds (cumulative count 99820) 270s 99.902% <= 1.023 milliseconds (cumulative count 99920) 270s 99.951% <= 1.071 milliseconds (cumulative count 99970) 270s 99.976% <= 1.103 milliseconds (cumulative count 99980) 270s 99.988% <= 1.143 milliseconds (cumulative count 100000) 270s 100.000% <= 1.143 milliseconds (cumulative count 100000) 270s 270s Cumulative distribution of latencies: 270s 0.000% <= 0.103 milliseconds (cumulative count 0) 270s 0.110% <= 0.303 milliseconds (cumulative count 110) 270s 11.800% <= 0.407 milliseconds (cumulative count 11800) 270s 50.240% <= 0.503 milliseconds (cumulative count 50240) 270s 83.120% <= 0.607 milliseconds (cumulative count 83120) 270s 94.150% <= 0.703 milliseconds (cumulative count 94150) 270s 98.090% <= 0.807 milliseconds (cumulative count 98090) 270s 99.260% <= 0.903 milliseconds (cumulative count 99260) 270s 99.890% <= 1.007 milliseconds (cumulative count 99890) 270s 99.980% <= 1.103 milliseconds (cumulative count 99980) 270s 100.000% <= 1.207 milliseconds (cumulative count 100000) 270s 270s Summary: 270s throughput summary: 621118.00 requests per second 270s latency summary (msec): 270s avg min p50 p95 p99 max 270s 0.520 0.224 0.503 0.719 0.879 1.143 271s HSET: rps=111200.0 (overall: 579166.7) avg_msec=0.743 (overall: 0.743) ====== HSET ====== 271s 100000 requests completed in 0.17 seconds 271s 50 parallel clients 271s 3 bytes payload 271s keep alive: 1 271s host configuration "save": 3600 1 300 100 60 10000 271s host configuration "appendonly": no 271s multi-thread: no 271s 271s Latency by percentile distribution: 271s 0.000% <= 0.271 milliseconds (cumulative count 10) 271s 50.000% <= 0.703 milliseconds (cumulative count 50650) 271s 75.000% <= 0.831 milliseconds (cumulative count 75200) 271s 87.500% <= 0.927 milliseconds (cumulative count 88430) 271s 93.750% <= 0.975 milliseconds (cumulative count 93960) 271s 96.875% <= 1.015 milliseconds (cumulative count 97020) 271s 98.438% <= 1.063 milliseconds (cumulative count 
98580) 271s 99.219% <= 1.103 milliseconds (cumulative count 99270) 271s 99.609% <= 1.159 milliseconds (cumulative count 99610) 271s 99.805% <= 1.207 milliseconds (cumulative count 99810) 271s 99.902% <= 1.255 milliseconds (cumulative count 99910) 271s 99.951% <= 1.303 milliseconds (cumulative count 99960) 271s 99.976% <= 1.319 milliseconds (cumulative count 99980) 271s 99.988% <= 1.327 milliseconds (cumulative count 99990) 271s 99.994% <= 1.423 milliseconds (cumulative count 100000) 271s 100.000% <= 1.423 milliseconds (cumulative count 100000) 271s 271s Cumulative distribution of latencies: 271s 0.000% <= 0.103 milliseconds (cumulative count 0) 271s 0.050% <= 0.303 milliseconds (cumulative count 50) 271s 1.350% <= 0.407 milliseconds (cumulative count 1350) 271s 5.840% <= 0.503 milliseconds (cumulative count 5840) 271s 22.830% <= 0.607 milliseconds (cumulative count 22830) 271s 50.650% <= 0.703 milliseconds (cumulative count 50650) 271s 71.420% <= 0.807 milliseconds (cumulative count 71420) 271s 85.260% <= 0.903 milliseconds (cumulative count 85260) 271s 96.510% <= 1.007 milliseconds (cumulative count 96510) 271s 99.270% <= 1.103 milliseconds (cumulative count 99270) 271s 99.810% <= 1.207 milliseconds (cumulative count 99810) 271s 99.960% <= 1.303 milliseconds (cumulative count 99960) 271s 99.990% <= 1.407 milliseconds (cumulative count 99990) 271s 100.000% <= 1.503 milliseconds (cumulative count 100000) 271s 271s Summary: 271s throughput summary: 591716.00 requests per second 271s latency summary (msec): 271s avg min p50 p95 p99 max 271s 0.723 0.264 0.703 0.991 1.087 1.423 271s SPOP: rps=306613.6 (overall: 596589.2) avg_msec=0.466 (overall: 0.466) ====== SPOP ====== 271s 100000 requests completed in 0.17 seconds 271s 50 parallel clients 271s 3 bytes payload 271s keep alive: 1 271s host configuration "save": 3600 1 300 100 60 10000 271s host configuration "appendonly": no 271s multi-thread: no 271s 271s Latency by percentile distribution: 271s 0.000% <= 0.135 milliseconds (cumulative count 10) 271s 50.000% <= 0.455 milliseconds (cumulative count 50870) 271s 75.000% <= 0.495 milliseconds (cumulative count 76030) 271s 87.500% <= 0.527 milliseconds (cumulative count 87750) 271s 93.750% <= 0.567 milliseconds (cumulative count 94440) 271s 96.875% <= 0.615 milliseconds (cumulative count 96920) 271s 98.438% <= 0.727 milliseconds (cumulative count 98460) 271s 99.219% <= 0.799 milliseconds (cumulative count 99230) 271s 99.609% <= 0.855 milliseconds (cumulative count 99610) 271s 99.805% <= 0.895 milliseconds (cumulative count 99810) 271s 99.902% <= 0.935 milliseconds (cumulative count 99920) 271s 99.951% <= 0.959 milliseconds (cumulative count 99960) 271s 99.976% <= 0.983 milliseconds (cumulative count 99990) 271s 99.994% <= 0.999 milliseconds (cumulative count 100000) 271s 100.000% <= 0.999 milliseconds (cumulative count 100000) 271s 271s Cumulative distribution of latencies: 271s 0.000% <= 0.103 milliseconds (cumulative count 0) 271s 0.070% <= 0.207 milliseconds (cumulative count 70) 271s 0.200% <= 0.303 milliseconds (cumulative count 200) 271s 14.780% <= 0.407 milliseconds (cumulative count 14780) 271s 79.290% <= 0.503 milliseconds (cumulative count 79290) 271s 96.710% <= 0.607 milliseconds (cumulative count 96710) 271s 98.180% <= 0.703 milliseconds (cumulative count 98180) 271s 99.280% <= 0.807 milliseconds (cumulative count 99280) 271s 99.840% <= 0.903 milliseconds (cumulative count 99840) 271s 100.000% <= 1.007 milliseconds (cumulative count 100000) 271s 271s Summary: 271s throughput summary: 
606060.56 requests per second 271s latency summary (msec): 271s avg min p50 p95 p99 max 271s 0.467 0.128 0.455 0.575 0.775 0.999 271s ====== ZADD ====== 271s 100000 requests completed in 0.19 seconds 271s 50 parallel clients 271s 3 bytes payload 271s keep alive: 1 271s host configuration "save": 3600 1 300 100 60 10000 271s host configuration "appendonly": no 271s multi-thread: no 271s 271s Latency by percentile distribution: 271s 0.000% <= 0.335 milliseconds (cumulative count 10) 271s 50.000% <= 0.791 milliseconds (cumulative count 51600) 271s 75.000% <= 0.935 milliseconds (cumulative count 75750) 271s 87.500% <= 1.031 milliseconds (cumulative count 87600) 271s 93.750% <= 1.095 milliseconds (cumulative count 94080) 271s 96.875% <= 1.143 milliseconds (cumulative count 97180) 271s 98.438% <= 1.191 milliseconds (cumulative count 98580) 271s 99.219% <= 1.231 milliseconds (cumulative count 99230) 271s 99.609% <= 1.287 milliseconds (cumulative count 99660) 271s 99.805% <= 1.319 milliseconds (cumulative count 99810) 271s 99.902% <= 1.359 milliseconds (cumulative count 99910) 271s 99.951% <= 1.407 milliseconds (cumulative count 99960) 271s 99.976% <= 1.423 milliseconds (cumulative count 99990) 271s 99.994% <= 1.447 milliseconds (cumulative count 100000) 271s 100.000% <= 1.447 milliseconds (cumulative count 100000) 271s 271s Cumulative distribution of latencies: 271s 0.000% <= 0.103 milliseconds (cumulative count 0) 271s 0.400% <= 0.407 milliseconds (cumulative count 400) 271s 2.140% <= 0.503 milliseconds (cumulative count 2140) 271s 9.790% <= 0.607 milliseconds (cumulative count 9790) 271s 27.980% <= 0.703 milliseconds (cumulative count 27980) 271s 55.170% <= 0.807 milliseconds (cumulative count 55170) 271s 71.700% <= 0.903 milliseconds (cumulative count 71700) 271s 84.800% <= 1.007 milliseconds (cumulative count 84800) 271s 94.770% <= 1.103 milliseconds (cumulative count 94770) 271s 98.930% <= 1.207 milliseconds (cumulative count 98930) 271s 99.720% <= 1.303 milliseconds (cumulative count 99720) 271s 99.960% <= 1.407 milliseconds (cumulative count 99960) 271s 100.000% <= 1.503 milliseconds (cumulative count 100000) 271s 271s Summary: 271s throughput summary: 529100.56 requests per second 271s latency summary (msec): 271s avg min p50 p95 p99 max 271s 0.811 0.328 0.791 1.111 1.215 1.447 271s ZPOPMIN: rps=52120.0 (overall: 620476.2) avg_msec=0.456 (overall: 0.456) ====== ZPOPMIN ====== 271s 100000 requests completed in 0.16 seconds 271s 50 parallel clients 271s 3 bytes payload 271s keep alive: 1 271s host configuration "save": 3600 1 300 100 60 10000 271s host configuration "appendonly": no 271s multi-thread: no 271s 271s Latency by percentile distribution: 271s 0.000% <= 0.143 milliseconds (cumulative count 10) 271s 50.000% <= 0.439 milliseconds (cumulative count 52850) 271s 75.000% <= 0.487 milliseconds (cumulative count 77780) 271s 87.500% <= 0.527 milliseconds (cumulative count 88790) 271s 93.750% <= 0.567 milliseconds (cumulative count 93850) 271s 96.875% <= 0.647 milliseconds (cumulative count 97010) 271s 98.438% <= 0.743 milliseconds (cumulative count 98480) 271s 99.219% <= 0.807 milliseconds (cumulative count 99220) 271s 99.609% <= 0.863 milliseconds (cumulative count 99630) 271s 99.805% <= 0.927 milliseconds (cumulative count 99830) 271s 99.902% <= 0.967 milliseconds (cumulative count 99910) 271s 99.951% <= 1.007 milliseconds (cumulative count 99970) 271s 99.976% <= 1.015 milliseconds (cumulative count 99980) 271s 99.988% <= 1.031 milliseconds (cumulative count 100000) 271s 100.000% <= 
1.031 milliseconds (cumulative count 100000) 271s 271s Cumulative distribution of latencies: 271s 0.000% <= 0.103 milliseconds (cumulative count 0) 271s 0.040% <= 0.207 milliseconds (cumulative count 40) 271s 0.150% <= 0.303 milliseconds (cumulative count 150) 271s 30.600% <= 0.407 milliseconds (cumulative count 30600) 271s 83.000% <= 0.503 milliseconds (cumulative count 83000) 271s 95.890% <= 0.607 milliseconds (cumulative count 95890) 271s 97.920% <= 0.703 milliseconds (cumulative count 97920) 271s 99.220% <= 0.807 milliseconds (cumulative count 99220) 271s 99.770% <= 0.903 milliseconds (cumulative count 99770) 271s 99.970% <= 1.007 milliseconds (cumulative count 99970) 271s 100.000% <= 1.103 milliseconds (cumulative count 100000) 271s 271s Summary: 271s throughput summary: 636942.62 requests per second 271s latency summary (msec): 271s avg min p50 p95 p99 max 271s 0.451 0.136 0.439 0.591 0.791 1.031 271s LPUSH (needed to benchmark LRANGE): rps=241520.0 (overall: 539107.1) avg_msec=0.787 (overall: 0.787) ====== LPUSH (needed to benchmark LRANGE) ====== 271s 100000 requests completed in 0.19 seconds 271s 50 parallel clients 271s 3 bytes payload 271s keep alive: 1 271s host configuration "save": 3600 1 300 100 60 10000 271s host configuration "appendonly": no 271s multi-thread: no 271s 271s Latency by percentile distribution: 271s 0.000% <= 0.343 milliseconds (cumulative count 10) 271s 50.000% <= 0.783 milliseconds (cumulative count 51690) 271s 75.000% <= 0.927 milliseconds (cumulative count 75860) 271s 87.500% <= 1.031 milliseconds (cumulative count 87990) 271s 93.750% <= 1.103 milliseconds (cumulative count 94350) 271s 96.875% <= 1.151 milliseconds (cumulative count 96970) 271s 98.438% <= 1.207 milliseconds (cumulative count 98580) 271s 99.219% <= 1.255 milliseconds (cumulative count 99280) 271s 99.609% <= 1.311 milliseconds (cumulative count 99620) 271s 99.805% <= 1.367 milliseconds (cumulative count 99820) 271s 99.902% <= 1.399 milliseconds (cumulative count 99910) 271s 99.951% <= 1.455 milliseconds (cumulative count 99980) 271s 99.988% <= 1.487 milliseconds (cumulative count 99990) 271s 99.994% <= 1.543 milliseconds (cumulative count 100000) 271s 100.000% <= 1.543 milliseconds (cumulative count 100000) 271s 271s Cumulative distribution of latencies: 271s 0.000% <= 0.103 milliseconds (cumulative count 0) 271s 0.250% <= 0.407 milliseconds (cumulative count 250) 271s 3.080% <= 0.503 milliseconds (cumulative count 3080) 271s 12.060% <= 0.607 milliseconds (cumulative count 12060) 271s 30.570% <= 0.703 milliseconds (cumulative count 30570) 271s 56.950% <= 0.807 milliseconds (cumulative count 56950) 271s 72.610% <= 0.903 milliseconds (cumulative count 72610) 271s 85.390% <= 1.007 milliseconds (cumulative count 85390) 271s 94.350% <= 1.103 milliseconds (cumulative count 94350) 271s 98.580% <= 1.207 milliseconds (cumulative count 98580) 271s 99.560% <= 1.303 milliseconds (cumulative count 99560) 271s 99.920% <= 1.407 milliseconds (cumulative count 99920) 271s 99.990% <= 1.503 milliseconds (cumulative count 99990) 271s 100.000% <= 1.607 milliseconds (cumulative count 100000) 271s 271s Summary: 271s throughput summary: 531914.94 requests per second 271s latency summary (msec): 271s avg min p50 p95 p99 max 271s 0.803 0.336 0.783 1.119 1.239 1.543 272s LRANGE_100 (first 100 elements): rps=87370.5 (overall: 126763.0) avg_msec=3.074 (overall: 3.074) LRANGE_100 (first 100 elements): rps=128000.0 (overall: 127494.1) avg_msec=3.089 (overall: 3.083) LRANGE_100 (first 100 elements): rps=128000.0 
272s LRANGE_100 (first 100 elements): rps=87370.5 (overall: 126763.0) avg_msec=3.074 (overall: 3.074) LRANGE_100 (first 100 elements): rps=128000.0 (overall: 127494.1) avg_msec=3.089 (overall: 3.083) LRANGE_100 (first 100 elements): rps=128000.0 (overall: 127682.0) avg_msec=3.079 (overall: 3.082) ====== LRANGE_100 (first 100 elements) ======
272s 100000 requests completed in 0.78 seconds
272s 50 parallel clients
272s 3 bytes payload
272s keep alive: 1
272s host configuration "save": 3600 1 300 100 60 10000
272s host configuration "appendonly": no
272s multi-thread: no
272s
272s Latency by percentile distribution:
272s 0.000% <= 0.487 milliseconds (cumulative count 10)
272s 50.000% <= 3.015 milliseconds (cumulative count 50050)
272s 75.000% <= 3.463 milliseconds (cumulative count 75020)
272s 87.500% <= 3.719 milliseconds (cumulative count 87500)
272s 93.750% <= 4.311 milliseconds (cumulative count 93820)
272s 96.875% <= 4.807 milliseconds (cumulative count 96980)
272s 98.438% <= 4.903 milliseconds (cumulative count 98530)
272s 99.219% <= 4.967 milliseconds (cumulative count 99260)
272s 99.609% <= 5.063 milliseconds (cumulative count 99620)
272s 99.805% <= 5.207 milliseconds (cumulative count 99810)
272s 99.902% <= 5.511 milliseconds (cumulative count 99910)
272s 99.951% <= 5.871 milliseconds (cumulative count 99960)
272s 99.976% <= 6.023 milliseconds (cumulative count 99980)
272s 99.988% <= 6.095 milliseconds (cumulative count 99990)
272s 99.994% <= 6.199 milliseconds (cumulative count 100000)
272s 100.000% <= 6.199 milliseconds (cumulative count 100000)
272s
272s Cumulative distribution of latencies:
272s 0.000% <= 0.103 milliseconds (cumulative count 0)
272s 0.010% <= 0.503 milliseconds (cumulative count 10)
272s 0.020% <= 1.303 milliseconds (cumulative count 20)
272s 0.040% <= 1.407 milliseconds (cumulative count 40)
272s 0.060% <= 1.503 milliseconds (cumulative count 60)
272s 0.090% <= 1.607 milliseconds (cumulative count 90)
272s 0.130% <= 1.703 milliseconds (cumulative count 130)
272s 0.190% <= 1.807 milliseconds (cumulative count 190)
272s 0.280% <= 1.903 milliseconds (cumulative count 280)
272s 0.450% <= 2.007 milliseconds (cumulative count 450)
272s 1.070% <= 2.103 milliseconds (cumulative count 1070)
272s 55.310% <= 3.103 milliseconds (cumulative count 55310)
272s 91.790% <= 4.103 milliseconds (cumulative count 91790)
272s 99.710% <= 5.103 milliseconds (cumulative count 99710)
272s 99.990% <= 6.103 milliseconds (cumulative count 99990)
272s 100.000% <= 7.103 milliseconds (cumulative count 100000)
272s
272s Summary:
272s throughput summary: 127713.92 requests per second
272s latency summary (msec):
272s avg min p50 p95 p99 max
272s 3.082 0.480 3.015 4.407 4.943 6.199
275s LRANGE_300 (first 300 elements): rps=15170.0 (overall: 27219.9) avg_msec=11.104 (overall: 11.104) LRANGE_300 (first 300 elements): rps=35770.8 (overall: 32710.7) avg_msec=7.123 (overall: 8.309) LRANGE_300 (first 300 elements): rps=35928.3 (overall: 33962.8) avg_msec=7.157 (overall: 7.835) LRANGE_300 (first 300 elements): rps=34611.1 (overall: 34144.9) avg_msec=7.960 (overall: 7.870) LRANGE_300 (first 300 elements): rps=36143.4 (overall: 34581.9) avg_msec=6.966 (overall: 7.664) LRANGE_300 (first 300 elements): rps=36344.0 (overall: 34897.0) avg_msec=6.934 (overall: 7.528) LRANGE_300 (first 300 elements): rps=36230.2 (overall: 35100.6) avg_msec=6.934 (overall: 7.434) LRANGE_300 (first 300 elements): rps=34687.5 (overall: 35045.1) avg_msec=7.694 (overall: 7.469) LRANGE_300 (first 300 elements): rps=35561.8 (overall: 35105.2) avg_msec=7.432 (overall: 7.464) LRANGE_300 (first 300 elements): rps=36292.5 (overall: 35229.9) avg_msec=6.843 (overall: 7.397) LRANGE_300 (first 300 elements): rps=36376.0 (overall: 35337.6) avg_msec=7.054 (overall: 7.364) ====== LRANGE_300 (first 300 elements) ======
275s 100000 requests completed in 2.83 seconds
275s 50 parallel clients
275s 3 bytes payload
275s keep alive: 1
275s host configuration "save": 3600 1 300 100 60 10000
275s host configuration "appendonly": no
275s multi-thread: no
275s
275s Latency by percentile distribution:
275s 0.000% <= 0.695 milliseconds (cumulative count 10)
275s 50.000% <= 6.959 milliseconds (cumulative count 50020)
275s 75.000% <= 8.175 milliseconds (cumulative count 75070)
275s 87.500% <= 9.759 milliseconds (cumulative count 87510)
275s 93.750% <= 11.079 milliseconds (cumulative count 93770)
275s 96.875% <= 12.719 milliseconds (cumulative count 96880)
275s 98.438% <= 14.807 milliseconds (cumulative count 98440)
275s 99.219% <= 16.095 milliseconds (cumulative count 99220)
275s 99.609% <= 17.231 milliseconds (cumulative count 99610)
275s 99.805% <= 18.287 milliseconds (cumulative count 99820)
275s 99.902% <= 19.039 milliseconds (cumulative count 99910)
275s 99.951% <= 19.887 milliseconds (cumulative count 99960)
275s 99.976% <= 20.047 milliseconds (cumulative count 99980)
275s 99.988% <= 20.207 milliseconds (cumulative count 99990)
275s 99.994% <= 20.223 milliseconds (cumulative count 100000)
275s 100.000% <= 20.223 milliseconds (cumulative count 100000)
275s
275s Cumulative distribution of latencies:
275s 0.000% <= 0.103 milliseconds (cumulative count 0)
275s 0.010% <= 0.703 milliseconds (cumulative count 10)
275s 0.020% <= 0.807 milliseconds (cumulative count 20)
275s 0.040% <= 0.903 milliseconds (cumulative count 40)
275s 0.050% <= 1.007 milliseconds (cumulative count 50)
275s 0.070% <= 1.103 milliseconds (cumulative count 70)
275s 0.080% <= 1.207 milliseconds (cumulative count 80)
275s 0.130% <= 1.303 milliseconds (cumulative count 130)
275s 0.160% <= 1.407 milliseconds (cumulative count 160)
275s 0.180% <= 1.503 milliseconds (cumulative count 180)
275s 0.200% <= 1.607 milliseconds (cumulative count 200)
275s 0.210% <= 1.807 milliseconds (cumulative count 210)
275s 0.220% <= 2.007 milliseconds (cumulative count 220)
275s 0.510% <= 3.103 milliseconds (cumulative count 510)
275s 2.570% <= 4.103 milliseconds (cumulative count 2570)
275s 9.600% <= 5.103 milliseconds (cumulative count 9600)
275s 27.980% <= 6.103 milliseconds (cumulative count 27980)
275s 53.930% <= 7.103 milliseconds (cumulative count 53930)
275s 74.100% <= 8.103 milliseconds (cumulative count 74100)
275s 83.730% <= 9.103 milliseconds (cumulative count 83730)
275s 89.420% <= 10.103 milliseconds (cumulative count 89420)
275s 93.830% <= 11.103 milliseconds (cumulative count 93830)
275s 96.200% <= 12.103 milliseconds (cumulative count 96200)
275s 97.190% <= 13.103 milliseconds (cumulative count 97190)
275s 97.960% <= 14.103 milliseconds (cumulative count 97960)
275s 98.670% <= 15.103 milliseconds (cumulative count 98670)
275s 99.220% <= 16.103 milliseconds (cumulative count 99220)
275s 99.590% <= 17.103 milliseconds (cumulative count 99590)
275s 99.780% <= 18.111 milliseconds (cumulative count 99780)
275s 99.910% <= 19.103 milliseconds (cumulative count 99910)
275s 99.980% <= 20.111 milliseconds (cumulative count 99980)
275s 100.000% <= 21.103 milliseconds (cumulative count 100000)
275s
275s Summary:
275s throughput summary: 35385.70 requests per second
275s latency summary (msec):
275s avg min p50 p95 p99 max
275s 7.358 0.688 6.959 11.463 15.583 20.223
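The throughput summary in each block is effectively the request count divided by the elapsed time reported a few lines above it; small mismatches come from the elapsed time being printed rounded to two decimals. A quick check against the LRANGE_300 block, as a sketch using the rounded figures from the log:

    # "100000 requests completed in 2.83 seconds" (elapsed time rounded in the log)
    requests = 100_000
    elapsed_s = 2.83
    print(round(requests / elapsed_s))  # ~35336, consistent with the reported 35385.70 rps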
280s LRANGE_500 (first 500 elements): rps=4286.9 (overall: 13122.0) avg_msec=21.243 (overall: 21.243) LRANGE_500 (first 500 elements): rps=15434.3 (overall: 14864.9) avg_msec=17.169 (overall: 18.054) LRANGE_500 (first 500 elements): rps=20398.4 (overall: 17243.2) avg_msec=11.814 (overall: 14.882) LRANGE_500 (first 500 elements): rps=20610.2 (overall: 18263.7) avg_msec=10.228 (overall: 13.290) LRANGE_500 (first 500 elements): rps=20724.0 (overall: 18829.0) avg_msec=9.713 (overall: 12.385) LRANGE_500 (first 500 elements): rps=20626.0 (overall: 19169.2) avg_msec=9.655 (overall: 11.829) LRANGE_500 (first 500 elements): rps=20792.2 (overall: 19428.3) avg_msec=9.959 (overall: 11.510) LRANGE_500 (first 500 elements): rps=20806.3 (overall: 19616.8) avg_msec=9.957 (overall: 11.284) LRANGE_500 (first 500 elements): rps=20700.0 (overall: 19745.7) avg_msec=10.916 (overall: 11.238) LRANGE_500 (first 500 elements): rps=18603.1 (overall: 19621.1) avg_msec=14.302 (overall: 11.555) LRANGE_500 (first 500 elements): rps=16784.9 (overall: 19348.2) avg_msec=15.941 (overall: 11.921) LRANGE_500 (first 500 elements): rps=17089.5 (overall: 19145.5) avg_msec=16.628 (overall: 12.298) LRANGE_500 (first 500 elements): rps=16381.0 (overall: 18922.0) avg_msec=16.871 (overall: 12.618) LRANGE_500 (first 500 elements): rps=20277.8 (overall: 19023.4) avg_msec=11.102 (overall: 12.497) LRANGE_500 (first 500 elements): rps=20567.5 (overall: 19130.9) avg_msec=9.903 (overall: 12.303) LRANGE_500 (first 500 elements): rps=20515.9 (overall: 19221.0) avg_msec=9.792 (overall: 12.129) LRANGE_500 (first 500 elements): rps=20600.0 (overall: 19306.2) avg_msec=10.468 (overall: 12.019) LRANGE_500 (first 500 elements): rps=20496.1 (overall: 19375.7) avg_msec=9.822 (overall: 11.884) LRANGE_500 (first 500 elements): rps=20782.6 (overall: 19452.4) avg_msec=9.885 (overall: 11.767) LRANGE_500 (first 500 elements): rps=20640.6 (overall: 19514.6) avg_msec=10.092 (overall: 11.674) ====== LRANGE_500 (first 500 elements) ======
280s 100000 requests completed in 5.11 seconds
280s 50 parallel clients
280s 3 bytes payload
280s keep alive: 1
280s host configuration "save": 3600 1 300 100 60 10000
280s host configuration "appendonly": no
280s multi-thread: no
280s
280s Latency by percentile distribution:
280s 0.000% <= 1.103 milliseconds (cumulative count 10)
280s 50.000% <= 10.639 milliseconds (cumulative count 50020)
280s 75.000% <= 12.543 milliseconds (cumulative count 75010)
280s 87.500% <= 17.663 milliseconds (cumulative count 87500)
280s 93.750% <= 21.407 milliseconds (cumulative count 93750)
280s 96.875% <= 23.231 milliseconds (cumulative count 96880)
280s 98.438% <= 24.767 milliseconds (cumulative count 98440)
280s 99.219% <= 26.127 milliseconds (cumulative count 99220)
280s 99.609% <= 27.455 milliseconds (cumulative count 99610)
280s 99.805% <= 28.911 milliseconds (cumulative count 99810)
280s 99.902% <= 29.567 milliseconds (cumulative count 99910)
280s 99.951% <= 30.015 milliseconds (cumulative count 99960)
280s 99.976% <= 30.191 milliseconds (cumulative count 99980)
280s 99.988% <= 30.655 milliseconds (cumulative count 100000)
280s 100.000% <= 30.655 milliseconds (cumulative count 100000)
280s
280s Cumulative distribution of latencies:
280s 0.000% <= 0.103 milliseconds (cumulative count 0)
280s 0.010% <= 1.103 milliseconds (cumulative count 10)
280s 0.020% <= 1.207 milliseconds (cumulative count 20)
280s 0.030% <= 1.407 milliseconds (cumulative count 30)
280s 0.050% <= 1.607 milliseconds (cumulative count 50)
280s 0.070% <= 1.703 milliseconds (cumulative count 70)
280s 0.080% <= 1.807 milliseconds (cumulative count 80)
280s 0.120% <= 1.903 milliseconds (cumulative count 120)
280s 0.130% <= 2.007 milliseconds (cumulative count 130)
280s 0.140% <= 2.103 milliseconds (cumulative count 140)
280s 0.380% <= 3.103 milliseconds (cumulative count 380)
280s 1.160% <= 4.103 milliseconds (cumulative count 1160)
280s 2.290% <= 5.103 milliseconds (cumulative count 2290)
280s 4.670% <= 6.103 milliseconds (cumulative count 4670)
280s 10.970% <= 7.103 milliseconds (cumulative count 10970)
280s 16.530% <= 8.103 milliseconds (cumulative count 16530)
280s 24.540% <= 9.103 milliseconds (cumulative count 24540)
280s 39.310% <= 10.103 milliseconds (cumulative count 39310)
280s 59.440% <= 11.103 milliseconds (cumulative count 59440)
280s 72.620% <= 12.103 milliseconds (cumulative count 72620)
280s 77.030% <= 13.103 milliseconds (cumulative count 77030)
280s 79.700% <= 14.103 milliseconds (cumulative count 79700)
280s 82.210% <= 15.103 milliseconds (cumulative count 82210)
280s 84.800% <= 16.103 milliseconds (cumulative count 84800)
280s 86.700% <= 17.103 milliseconds (cumulative count 86700)
280s 88.220% <= 18.111 milliseconds (cumulative count 88220)
280s 89.600% <= 19.103 milliseconds (cumulative count 89600)
280s 91.200% <= 20.111 milliseconds (cumulative count 91200)
280s 93.100% <= 21.103 milliseconds (cumulative count 93100)
280s 95.130% <= 22.111 milliseconds (cumulative count 95130)
280s 96.760% <= 23.103 milliseconds (cumulative count 96760)
280s 97.880% <= 24.111 milliseconds (cumulative count 97880)
280s 98.680% <= 25.103 milliseconds (cumulative count 98680)
280s 99.210% <= 26.111 milliseconds (cumulative count 99210)
280s 99.530% <= 27.103 milliseconds (cumulative count 99530)
280s 99.760% <= 28.111 milliseconds (cumulative count 99760)
280s 99.810% <= 29.103 milliseconds (cumulative count 99810)
280s 99.970% <= 30.111 milliseconds (cumulative count 99970)
280s 100.000% <= 31.103 milliseconds (cumulative count 100000)
280s
280s Summary:
280s throughput summary: 19554.17 requests per second
280s latency summary (msec):
280s avg min p50 p95 p99 max
280s 11.691 1.096 10.639 22.063 25.679 30.655
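Read side by side, the LRANGE summaries mostly reflect reply size: as the requested range grows from 100 to 300 to 500 elements, requests per second drop roughly in proportion, while the number of elements streamed per second stays in the same order of magnitude. A small sketch normalising the throughput figures above:

    # Throughput summaries from the LRANGE_100/300/500 blocks in this log.
    lrange_rps = {100: 127713.92, 300: 35385.70, 500: 19554.17}
    for n, rps in lrange_rps.items():
        print(n, round(rps * n / 1e6, 1), "M elements/s")
    # 100 -> 12.8, 300 -> 10.6, 500 -> 9.8: per-request cost grows roughly
    # linearly with the range length, so element throughput only drifts down slowly.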
286s LRANGE_600 (first 600 elements): rps=694.4 (overall: 6034.5) avg_msec=22.999 (overall: 22.999) LRANGE_600 (first 600 elements): rps=13843.1 (overall: 13045.8) avg_msec=19.200 (overall: 19.380) LRANGE_600 (first 600 elements): rps=16280.6 (overall: 14569.8) avg_msec=13.076 (overall: 16.061) LRANGE_600 (first 600 elements): rps=17155.4 (overall: 15393.4) avg_msec=10.861 (overall: 14.215) LRANGE_600 (first 600 elements): rps=15678.6 (overall: 15462.5) avg_msec=15.565 (overall: 14.547) LRANGE_600 (first 600 elements): rps=15494.0 (overall: 15468.6) avg_msec=14.126 (overall: 14.465) LRANGE_600 (first 600 elements): rps=15341.3 (overall: 15447.8) avg_msec=17.265 (overall: 14.919) LRANGE_600 (first 600 elements): rps=16243.0 (overall: 15559.1) avg_msec=15.220 (overall: 14.963) LRANGE_600 (first 600 elements): rps=14388.0 (overall: 15415.9) avg_msec=17.651 (overall: 15.270) LRANGE_600 (first 600 elements): rps=15396.1 (overall: 15413.7) avg_msec=15.593 (overall: 15.305) LRANGE_600 (first 600 elements): rps=16498.0 (overall: 15520.4) avg_msec=15.612 (overall: 15.338) LRANGE_600 (first 600 elements): rps=14545.5 (overall: 15432.4) avg_msec=15.239 (overall: 15.329) LRANGE_600 (first 600 elements): rps=16100.8 (overall: 15488.7) avg_msec=14.756 (overall: 15.279) LRANGE_600 (first 600 elements): rps=15566.9 (overall: 15494.7) avg_msec=15.065 (overall: 15.262) LRANGE_600 (first 600 elements): rps=16278.9 (overall: 15549.9) avg_msec=13.677 (overall: 15.146) LRANGE_600 (first 600 elements): rps=15187.3 (overall: 15526.1) avg_msec=16.441 (overall: 15.229) LRANGE_600 (first 600 elements): rps=13912.4 (overall: 15426.5) avg_msec=18.642 (overall: 15.419) LRANGE_600 (first 600 elements): rps=15677.3 (overall: 15441.1) avg_msec=16.083 (overall: 15.458) LRANGE_600 (first 600 elements): rps=14126.0 (overall: 15368.0) avg_msec=17.245 (overall: 15.549) LRANGE_600 (first 600 elements): rps=14007.9 (overall: 15297.0) avg_msec=19.654 (overall: 15.746) LRANGE_600 (first 600 elements): rps=16893.7 (overall: 15376.8) avg_msec=12.595 (overall: 15.573) LRANGE_600 (first 600 elements): rps=17255.9 (overall: 15466.3) avg_msec=11.345 (overall: 15.348) LRANGE_600 (first 600 elements): rps=16124.0 (overall: 15496.7) avg_msec=13.900 (overall: 15.278) LRANGE_600 (first 600 elements): rps=15080.0 (overall: 15478.9) avg_msec=16.793 (overall: 15.342) LRANGE_600 (first 600 elements): rps=16200.8 (overall: 15508.9) avg_msec=13.092 (overall: 15.244) LRANGE_600 (first 600 elements): rps=17004.0 (overall: 15568.1) avg_msec=9.923 (overall: 15.014) ====== LRANGE_600 (first 600 elements) ======
286s 100000 requests completed in 6.42 seconds
286s 50 parallel clients
286s 3 bytes payload
286s keep alive: 1
286s host configuration "save": 3600 1 300 100 60 10000
286s host configuration "appendonly": no
286s multi-thread: no
286s
286s Latency by percentile distribution:
286s 0.000% <= 0.551 milliseconds (cumulative count 10)
286s 50.000% <= 13.319 milliseconds (cumulative count 50010)
286s 75.000% <= 19.663 milliseconds (cumulative count 75070)
286s 87.500% <= 23.567 milliseconds (cumulative count 87500)
286s 93.750% <= 26.079 milliseconds (cumulative count 93750)
286s 96.875% <= 27.759 milliseconds (cumulative count 96880)
286s 98.438% <= 29.087 milliseconds (cumulative count 98460)
286s 99.219% <= 30.319 milliseconds (cumulative count 99220)
286s 99.609% <= 31.839 milliseconds (cumulative count 99610)
286s 99.805% <= 33.215 milliseconds (cumulative count 99810)
286s 99.902% <= 34.687 milliseconds (cumulative count 99910)
286s 99.951% <= 35.551 milliseconds (cumulative count 99960)
286s 99.976% <= 35.775 milliseconds (cumulative count 99980)
286s 99.988% <= 36.959 milliseconds (cumulative count 99990)
286s 99.994% <= 37.759 milliseconds (cumulative count 100000)
286s 100.000% <= 37.759 milliseconds (cumulative count 100000)
286s
286s Cumulative distribution of latencies:
286s 0.000% <= 0.103 milliseconds (cumulative count 0)
286s 0.010% <= 0.607 milliseconds (cumulative count 10)
286s 0.030% <= 1.207 milliseconds (cumulative count 30)
286s 0.050% <= 1.303 milliseconds (cumulative count 50)
286s 0.090% <= 1.407 milliseconds (cumulative count 90)
286s 0.120% <= 1.503 milliseconds (cumulative count 120)
286s 0.140% <= 1.607 milliseconds (cumulative count 140)
286s 0.210% <= 1.703 milliseconds (cumulative count 210)
286s 0.250% <= 1.807 milliseconds (cumulative count 250)
286s 0.320% <= 1.903 milliseconds (cumulative count 320)
286s 0.350% <= 2.007 milliseconds (cumulative count 350)
286s 0.410% <= 2.103 milliseconds (cumulative count 410)
286s 0.810% <= 3.103 milliseconds (cumulative count 810)
286s 1.250% <= 4.103 milliseconds (cumulative count 1250)
286s 2.330% <= 5.103 milliseconds (cumulative count 2330)
286s 4.300% <= 6.103 milliseconds (cumulative count 4300)
286s 6.900% <= 7.103 milliseconds (cumulative count 6900)
286s 10.550% <= 8.103 milliseconds (cumulative count 10550)
286s 15.550% <= 9.103 milliseconds (cumulative count 15550)
286s 23.660% <= 10.103 milliseconds (cumulative count 23660)
286s 33.060% <= 11.103 milliseconds (cumulative count 33060)
286s 41.890% <= 12.103 milliseconds (cumulative count 41890)
286s 48.650% <= 13.103 milliseconds (cumulative count 48650)
286s 54.210% <= 14.103 milliseconds (cumulative count 54210)
286s 59.260% <= 15.103 milliseconds (cumulative count 59260)
286s 63.100% <= 16.103 milliseconds (cumulative count 63100)
286s 66.870% <= 17.103 milliseconds (cumulative count 66870)
286s 70.180% <= 18.111 milliseconds (cumulative count 70180)
286s 73.230% <= 19.103 milliseconds (cumulative count 73230)
286s 76.530% <= 20.111 milliseconds (cumulative count 76530)
286s 79.560% <= 21.103 milliseconds (cumulative count 79560)
286s 82.810% <= 22.111 milliseconds (cumulative count 82810)
286s 86.130% <= 23.103 milliseconds (cumulative count 86130)
286s 89.080% <= 24.111 milliseconds (cumulative count 89080)
286s 91.740% <= 25.103 milliseconds (cumulative count 91740)
286s 93.810% <= 26.111 milliseconds (cumulative count 93810)
286s 95.750% <= 27.103 milliseconds (cumulative count 95750)
286s 97.380% <= 28.111 milliseconds (cumulative count 97380)
286s 98.480% <= 29.103 milliseconds (cumulative count 98480)
286s 99.140% <= 30.111 milliseconds (cumulative count 99140)
286s 99.430% <= 31.103 milliseconds (cumulative count 99430)
286s 99.680% <= 32.111 milliseconds (cumulative count 99680)
286s 99.800% <= 33.119 milliseconds (cumulative count 99800)
286s 99.870% <= 34.111 milliseconds (cumulative count 99870)
286s 99.910% <= 35.103 milliseconds (cumulative count 99910)
286s 99.980% <= 36.127 milliseconds (cumulative count 99980)
286s 99.990% <= 37.119 milliseconds (cumulative count 99990)
286s 100.000% <= 38.111 milliseconds (cumulative count 100000)
286s
286s Summary:
286s throughput summary: 15583.61 requests per second
286s latency summary (msec):
286s avg min p50 p95 p99 max
286s 14.953 0.544 13.319 26.719 29.887 37.759
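The interleaved progress updates before each block (rps=... (overall: ...) avg_msec=... (overall: ...)) pair an instantaneous figure for the last sampling interval with a running figure for the whole test so far. A running average of that kind is typically maintained as a request-weighted mean; the sketch below illustrates that bookkeeping and is not the benchmark's actual implementation.

    class RunningAvg:
        """Request-weighted running mean latency, the shape of the
        "(overall: ...)" figures in the progress lines above."""

        def __init__(self):
            self.total_requests = 0
            self.total_msec = 0.0

        def update(self, interval_requests, interval_avg_msec):
            # Fold one sampling interval into the running totals.
            self.total_requests += interval_requests
            self.total_msec += interval_requests * interval_avg_msec

        @property
        def overall(self):
            return self.total_msec / self.total_requests if self.total_requests else 0.0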
287s MSET (10 keys): rps=191000.0 (overall: 269774.0) avg_msec=1.697 (overall: 1.697) ====== MSET (10 keys) ======
287s 100000 requests completed in 0.37 seconds
287s 50 parallel clients
287s 3 bytes payload
287s keep alive: 1
287s host configuration "save": 3600 1 300 100 60 10000
287s host configuration "appendonly": no
287s multi-thread: no
287s
287s Latency by percentile distribution:
287s 0.000% <= 0.543 milliseconds (cumulative count 10)
287s 50.000% <= 1.727 milliseconds (cumulative count 50700)
287s 75.000% <= 1.895 milliseconds (cumulative count 75720)
287s 87.500% <= 1.999 milliseconds (cumulative count 88130)
287s 93.750% <= 2.055 milliseconds (cumulative count 93820)
287s 96.875% <= 2.103 milliseconds (cumulative count 97140)
287s 98.438% <= 2.143 milliseconds (cumulative count 98500)
287s 99.219% <= 2.199 milliseconds (cumulative count 99280)
287s 99.609% <= 2.247 milliseconds (cumulative count 99610)
287s 99.805% <= 2.311 milliseconds (cumulative count 99820)
287s 99.902% <= 2.359 milliseconds (cumulative count 99910)
287s 99.951% <= 2.407 milliseconds (cumulative count 99960)
287s 99.976% <= 2.447 milliseconds (cumulative count 99980)
287s 99.988% <= 2.463 milliseconds (cumulative count 99990)
287s 99.994% <= 2.487 milliseconds (cumulative count 100000)
287s 100.000% <= 2.487 milliseconds (cumulative count 100000)
287s
287s Cumulative distribution of latencies:
287s 0.000% <= 0.103 milliseconds (cumulative count 0)
287s 0.040% <= 0.607 milliseconds (cumulative count 40)
287s 0.110% <= 0.703 milliseconds (cumulative count 110)
287s 0.310% <= 1.007 milliseconds (cumulative count 310)
287s 1.750% <= 1.103 milliseconds (cumulative count 1750)
287s 6.060% <= 1.207 milliseconds (cumulative count 6060)
287s 9.690% <= 1.303 milliseconds (cumulative count 9690)
287s 13.010% <= 1.407 milliseconds (cumulative count 13010)
287s 19.880% <= 1.503 milliseconds (cumulative count 19880)
287s 33.290% <= 1.607 milliseconds (cumulative count 33290)
287s 47.100% <= 1.703 milliseconds (cumulative count 47100)
287s 63.050% <= 1.807 milliseconds (cumulative count 63050)
287s 76.840% <= 1.903 milliseconds (cumulative count 76840)
287s 88.930% <= 2.007 milliseconds (cumulative count 88930)
287s 97.140% <= 2.103 milliseconds (cumulative count 97140)
287s 100.000% <= 3.103 milliseconds (cumulative count 100000)
287s
287s Summary:
287s throughput summary: 269541.78 requests per second
287s latency summary (msec):
287s avg min p50 p95 p99 max
287s 1.699 0.536 1.727 2.071 2.183 2.487
287s XADD: rps=88406.4 (overall: 403454.6) avg_msec=1.101 (overall: 1.101) ====== XADD ======
287s 100000 requests completed in 0.24 seconds
287s 50 parallel clients
287s 3 bytes payload
287s keep alive: 1
287s host configuration "save": 3600 1 300 100 60 10000
287s host configuration "appendonly": no
287s multi-thread: no
287s
287s Latency by percentile distribution:
287s 0.000% <= 0.407 milliseconds (cumulative count 20)
287s 50.000% <= 1.079 milliseconds (cumulative count 50960)
287s 75.000% <= 1.231 milliseconds (cumulative count 75810)
287s 87.500% <= 1.327 milliseconds (cumulative count 88400)
287s 93.750% <= 1.383 milliseconds (cumulative count 94450)
287s 96.875% <= 1.423 milliseconds (cumulative count 97000)
287s 98.438% <= 1.463 milliseconds (cumulative count 98550)
287s 99.219% <= 1.503 milliseconds (cumulative count 99240)
287s 99.609% <= 1.575 milliseconds (cumulative count 99630)
287s 99.805% <= 1.647 milliseconds (cumulative count 99810)
287s 99.902% <= 1.743 milliseconds (cumulative count 99910)
287s 99.951% <= 1.847 milliseconds (cumulative count 99960)
287s 99.976% <= 1.887 milliseconds (cumulative count 99980)
287s 99.988% <= 1.919 milliseconds (cumulative count 99990)
287s 99.994% <= 1.935 milliseconds (cumulative count 100000)
287s 100.000% <= 1.935 milliseconds (cumulative count 100000)
287s
287s Cumulative distribution of latencies:
287s 0.000% <= 0.103 milliseconds (cumulative count 0)
287s 0.020% <= 0.407 milliseconds (cumulative count 20)
287s 0.150% <= 0.503 milliseconds (cumulative count 150)
287s 0.310% <= 0.607 milliseconds (cumulative count 310)
287s 0.520% <= 0.703 milliseconds (cumulative count 520)
287s 4.890% <= 0.807 milliseconds (cumulative count 4890)
287s 19.630% <= 0.903 milliseconds (cumulative count 19630)
287s 37.760% <= 1.007 milliseconds (cumulative count 37760)
287s 55.320% <= 1.103 milliseconds (cumulative count 55320)
287s 72.510% <= 1.207 milliseconds (cumulative count 72510)
287s 85.410% <= 1.303 milliseconds (cumulative count 85410)
287s 96.230% <= 1.407 milliseconds (cumulative count 96230)
287s 99.240% <= 1.503 milliseconds (cumulative count 99240)
287s 99.720% <= 1.607 milliseconds (cumulative count 99720)
287s 99.890% <= 1.703 milliseconds (cumulative count 99890)
287s 99.940% <= 1.807 milliseconds (cumulative count 99940)
287s 99.980% <= 1.903 milliseconds (cumulative count 99980)
287s 100.000% <= 2.007 milliseconds (cumulative count 100000)
287s
287s Summary:
287s throughput summary: 414937.75 requests per second
287s latency summary (msec):
287s avg min p50 p95 p99 max
287s 1.085 0.400 1.079 1.391 1.487 1.935
287s
287s autopkgtest [16:15:23]: test 0002-benchmark: -----------------------]
288s autopkgtest [16:15:24]: test 0002-benchmark: - - - - - - - - - - results - - - - - - - - - -
288s 0002-benchmark PASS
288s autopkgtest [16:15:24]: test 0003-redict-check-aof: preparing testbed
288s Reading package lists...
289s Building dependency tree...
289s Reading state information...
289s Starting pkgProblemResolver with broken count: 0
289s Starting 2 pkgProblemResolver with broken count: 0
289s Done
290s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
291s autopkgtest [16:15:27]: test 0003-redict-check-aof: [-----------------------
292s autopkgtest [16:15:28]: test 0003-redict-check-aof: -----------------------]
292s 0003-redict-check-aof PASS
292s autopkgtest [16:15:28]: test 0003-redict-check-aof: - - - - - - - - - - results - - - - - - - - - -
293s autopkgtest [16:15:29]: test 0004-redict-check-rdb: preparing testbed
293s Reading package lists...
293s Building dependency tree...
293s Reading state information...
294s Starting pkgProblemResolver with broken count: 0
294s Starting 2 pkgProblemResolver with broken count: 0
294s Done
295s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
296s autopkgtest [16:15:32]: test 0004-redict-check-rdb: [-----------------------
302s OK
302s [offset 0] Checking RDB file /var/lib/redict/dump.rdb
302s [offset 26] AUX FIELD redis-ver = '7.3.2'
302s [offset 40] AUX FIELD redis-bits = '64'
302s [offset 52] AUX FIELD ctime = '1742055338'
302s [offset 67] AUX FIELD used-mem = '3210544'
302s [offset 79] AUX FIELD aof-base = '0'
302s [offset 81] Selecting DB ID 0
302s [offset 565601] Checksum OK
302s [offset 565601] \o/ RDB looks OK! \o/
302s [info] 5 keys read
302s [info] 0 expires
302s [info] 0 already expired
302s autopkgtest [16:15:38]: test 0004-redict-check-rdb: -----------------------]
303s autopkgtest [16:15:39]: test 0004-redict-check-rdb: - - - - - - - - - - results - - - - - - - - - -
303s 0004-redict-check-rdb PASS
303s autopkgtest [16:15:39]: test 0005-cjson: preparing testbed
303s Reading package lists...
303s Building dependency tree...
303s Reading state information...
304s Starting pkgProblemResolver with broken count: 0
304s Starting 2 pkgProblemResolver with broken count: 0
304s Done
305s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
306s autopkgtest [16:15:42]: test 0005-cjson: [-----------------------
311s
312s autopkgtest [16:15:48]: test 0005-cjson: -----------------------]
312s autopkgtest [16:15:48]: test 0005-cjson: - - - - - - - - - - results - - - - - - - - - -
312s 0005-cjson PASS
312s autopkgtest [16:15:48]: @@@@@@@@@@@@@@@@@@@@ summary
312s 0001-redict-cli PASS
312s 0002-benchmark PASS
312s 0003-redict-check-aof PASS
312s 0004-redict-check-rdb PASS
312s 0005-cjson PASS
318s nova [W] Using flock in prodstack6-arm64
318s Creating nova instance adt-plucky-arm64-redict-20250315-161036-juju-7f2275-prod-proposed-migration-environment-2-a75c7f0c-6614-4f8e-93d3-3e089d7de15e from image adt/ubuntu-plucky-arm64-server-20250315.img (UUID bd6e766c-b51f-4b53-86d6-23aa4d18f524)...
318s nova [W] Timed out waiting for 0d4a6d2d-8197-4a71-8dad-ec925679adc5 to get deleted.
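All five tests pass; the trailing nova lines are messages from the test infrastructure rather than test output. For post-processing, the final summary block lends itself to mechanical extraction. The sketch below reduces it to a dict, assuming the log has been saved locally as autopkgtest.log (an illustrative name).

    import re

    def parse_summary(path="autopkgtest.log"):
        # Collect "<test-name> <verdict>" lines that follow the
        # "@@@@@@@@@@@@@@@@@@@@ summary" marker at the end of the log.
        results, in_summary = {}, False
        with open(path) as fh:
            for line in fh:
                if "@@@@" in line and "summary" in line:
                    in_summary = True
                    continue
                if in_summary:
                    m = re.match(r"\d+s\s+(\S+)\s+(PASS|FAIL|SKIP|FLAKY)\b", line)
                    if m:
                        results[m.group(1)] = m.group(2)
        return results

    # e.g. {'0001-redict-cli': 'PASS', '0002-benchmark': 'PASS',
    #       '0003-redict-check-aof': 'PASS', '0004-redict-check-rdb': 'PASS',
    #       '0005-cjson': 'PASS'}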