0s autopkgtest [14:16:41]: starting date and time: 2025-06-19 14:16:41+0000
0s autopkgtest [14:16:41]: git checkout: 9986aa8c Merge branch 'skia/fix_network_interface' into 'ubuntu/production'
0s autopkgtest [14:16:41]: host juju-7f2275-prod-proposed-migration-environment-9; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.1662t4g7/out --timeout-copy=6000 --setup-commands 'ln -s /dev/null /etc/systemd/system/bluetooth.service; printf "http_proxy=http://squid.internal:3128\nhttps_proxy=http://squid.internal:3128\nno_proxy=127.0.0.1,127.0.1.1,localhost,localdomain,internal,login.ubuntu.com,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,keyserver.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com,radosgw.ps5.canonical.com\n" >> /etc/environment' --apt-pocket=proposed=src:redis --apt-upgrade valkey --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=redis/5:8.0.0-2 -- lxd -r lxd-armhf-10.145.243.28 lxd-armhf-10.145.243.28:autopkgtest/ubuntu/questing/armhf
25s autopkgtest [14:17:06]: testbed dpkg architecture: armhf
27s autopkgtest [14:17:08]: testbed apt version: 3.1.2
31s autopkgtest [14:17:12]: @@@@@@@@@@@@@@@@@@@@ test bed setup
33s autopkgtest [14:17:14]: testbed release detected to be: None
41s autopkgtest [14:17:22]: updating testbed package index (apt update)
43s Get:1 http://ftpmaster.internal/ubuntu questing-proposed InRelease [249 kB]
43s Get:2 http://ftpmaster.internal/ubuntu questing InRelease [249 kB]
43s Get:3 http://ftpmaster.internal/ubuntu questing-updates InRelease [110 kB]
43s Get:4 http://ftpmaster.internal/ubuntu questing-security InRelease [110 kB]
43s Get:5 http://ftpmaster.internal/ubuntu questing-proposed/universe Sources [426 kB]
43s Get:6 http://ftpmaster.internal/ubuntu questing-proposed/restricted Sources [4716 B]
43s Get:7 http://ftpmaster.internal/ubuntu questing-proposed/multiverse Sources [17.4 kB]
43s Get:8 http://ftpmaster.internal/ubuntu questing-proposed/main Sources [38.3 kB]
43s Get:9 http://ftpmaster.internal/ubuntu questing-proposed/main armhf Packages [60.5 kB]
43s Get:10 http://ftpmaster.internal/ubuntu questing-proposed/restricted armhf Packages [724 B]
43s Get:11 http://ftpmaster.internal/ubuntu questing-proposed/universe armhf Packages [352 kB]
43s Get:12 http://ftpmaster.internal/ubuntu questing-proposed/multiverse armhf Packages [4268 B]
43s Get:13 http://ftpmaster.internal/ubuntu questing/universe Sources [21.3 MB]
47s Get:14 http://ftpmaster.internal/ubuntu questing/multiverse Sources [309 kB]
47s Get:15 http://ftpmaster.internal/ubuntu questing/universe armhf Packages [15.1 MB]
51s Fetched 38.3 MB in 8s (4554 kB/s)
53s Reading package lists...
58s autopkgtest [14:17:39]: upgrading testbed (apt dist-upgrade and autopurge)
60s Reading package lists...
61s Building dependency tree...
61s Reading state information...
61s Calculating upgrade...
63s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
64s Reading package lists...
64s Building dependency tree...
64s Reading state information...
64s Solving dependencies...
65s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
68s autopkgtest [14:17:49]: rebooting testbed after setup commands that affected boot
109s autopkgtest [14:18:30]: testbed running kernel: Linux 6.8.0-58-generic #60~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Mar 28 14:48:37 UTC 2
134s autopkgtest [14:18:55]: @@@@@@@@@@@@@@@@@@@@ apt-source valkey
157s Get:1 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (dsc) [2484 B]
157s Get:2 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (tar) [2726 kB]
157s Get:3 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (diff) [20.4 kB]
157s gpgv: Signature made Wed Jun 18 14:39:32 2025 UTC
157s gpgv: using RSA key 63EEFC3DE14D5146CE7F24BF34B8AD7D9529E793
157s gpgv: issuer "lena.voytek@canonical.com"
157s gpgv: Can't check signature: No public key
157s dpkg-source: warning: cannot verify inline signature for ./valkey_8.1.1+dfsg1-2ubuntu1.dsc: no acceptable signature found
158s autopkgtest [14:19:19]: testing package valkey version 8.1.1+dfsg1-2ubuntu1
162s autopkgtest [14:19:23]: build not needed
168s autopkgtest [14:19:29]: test 0001-valkey-cli: preparing testbed
170s Reading package lists...
170s Building dependency tree...
170s Reading state information...
170s Solving dependencies...
171s The following NEW packages will be installed:
171s   liblzf1 valkey-server valkey-tools
171s 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
171s Need to get 1256 kB of archives.
171s After this operation, 5097 kB of additional disk space will be used.
171s Get:1 http://ftpmaster.internal/ubuntu questing/universe armhf liblzf1 armhf 3.6-4 [6554 B]
171s Get:2 http://ftpmaster.internal/ubuntu questing/universe armhf valkey-tools armhf 8.1.1+dfsg1-2ubuntu1 [1198 kB]
172s Get:3 http://ftpmaster.internal/ubuntu questing/universe armhf valkey-server armhf 8.1.1+dfsg1-2ubuntu1 [51.7 kB]
173s Fetched 1256 kB in 1s (1309 kB/s)
173s Selecting previously unselected package liblzf1:armhf.
173s (Reading database ... 59700 files and directories currently installed.)
173s Preparing to unpack .../liblzf1_3.6-4_armhf.deb ...
173s Unpacking liblzf1:armhf (3.6-4) ...
173s Selecting previously unselected package valkey-tools.
173s Preparing to unpack .../valkey-tools_8.1.1+dfsg1-2ubuntu1_armhf.deb ...
173s Unpacking valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
173s Selecting previously unselected package valkey-server.
173s Preparing to unpack .../valkey-server_8.1.1+dfsg1-2ubuntu1_armhf.deb ...
173s Unpacking valkey-server (8.1.1+dfsg1-2ubuntu1) ...
173s Setting up liblzf1:armhf (3.6-4) ...
173s Setting up valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
174s Setting up valkey-server (8.1.1+dfsg1-2ubuntu1) ...
174s Created symlink '/etc/systemd/system/valkey.service' → '/usr/lib/systemd/system/valkey-server.service'.
174s Created symlink '/etc/systemd/system/multi-user.target.wants/valkey-server.service' → '/usr/lib/systemd/system/valkey-server.service'.
175s Processing triggers for man-db (2.13.1-1) ...
175s Processing triggers for libc-bin (2.41-6ubuntu2) ...
183s autopkgtest [14:19:44]: test 0001-valkey-cli: [-----------------------
191s # Server
191s redis_version:7.2.4
191s server_name:valkey
191s valkey_version:8.1.1
191s valkey_release_stage:ga
191s redis_git_sha1:00000000
191s redis_git_dirty:0
191s redis_build_id:454dc2cf719509d2
191s server_mode:standalone
191s os:Linux 6.8.0-58-generic armv7l
191s arch_bits:32
191s monotonic_clock:POSIX clock_gettime
191s multiplexing_api:epoll
191s gcc_version:14.3.0
191s process_id:1121
191s process_supervised:systemd
191s run_id:1bf1766909dec193cc99328f0cd84bf69a9cccce
191s tcp_port:6379
191s server_time_usec:1750342792013419
191s uptime_in_seconds:5
191s uptime_in_days:0
191s hz:10
191s configured_hz:10
191s clients_hz:10
191s lru_clock:5512327
191s executable:/usr/bin/valkey-server
191s config_file:/etc/valkey/valkey.conf
191s io_threads_active:0
191s availability_zone:
191s listener0:name=tcp,bind=127.0.0.1,bind=-::1,port=6379
191s
191s # Clients
191s connected_clients:1
191s cluster_connections:0
191s maxclients:10000
191s client_recent_max_input_buffer:0
191s client_recent_max_output_buffer:0
191s blocked_clients:0
191s tracking_clients:0
191s pubsub_clients:0
191s watching_clients:0
191s clients_in_timeout_table:0
191s total_watched_keys:0
191s total_blocking_keys:0
191s total_blocking_keys_on_nokey:0
191s paused_reason:none
191s paused_actions:none
191s paused_timeout_milliseconds:0
191s
191s # Memory
191s used_memory:737432
191s used_memory_human:720.15K
191s used_memory_rss:10223616
191s used_memory_rss_human:9.75M
191s used_memory_peak:737432
191s used_memory_peak_human:720.15K
191s used_memory_peak_perc:100.35%
191s used_memory_overhead:718032
191s used_memory_startup:717896
191s used_memory_dataset:19400
191s used_memory_dataset_perc:99.30%
191s allocator_allocated:3967936
191s allocator_active:9502720
191s allocator_resident:10289152
191s allocator_muzzy:0
191s total_system_memory:3844009984
191s total_system_memory_human:3.58G
191s used_memory_lua:23552
191s used_memory_vm_eval:23552
191s used_memory_lua_human:23.00K
191s used_memory_scripts_eval:0
191s number_of_cached_scripts:0
191s number_of_functions:0
191s number_of_libraries:0
191s used_memory_vm_functions:24576
191s used_memory_vm_total:48128
191s used_memory_vm_total_human:47.00K
191s used_memory_functions:136
191s used_memory_scripts:136
191s used_memory_scripts_human:136B
191s maxmemory:3221225472
191s maxmemory_human:3.00G
191s maxmemory_policy:noeviction
191s allocator_frag_ratio:1.00
191s allocator_frag_bytes:0
191s allocator_rss_ratio:1.08
191s allocator_rss_bytes:786432
191s rss_overhead_ratio:0.99
191s rss_overhead_bytes:-65536
191s mem_fragmentation_ratio:14.24
191s mem_fragmentation_bytes:9505632
191s mem_not_counted_for_evict:0
191s mem_replication_backlog:0
191s mem_total_replication_buffers:0
191s mem_clients_slaves:0
191s mem_clients_normal:0
191s mem_cluster_links:0
191s mem_aof_buffer:0
191s mem_allocator:jemalloc-5.3.0
191s mem_overhead_db_hashtable_rehashing:0
191s active_defrag_running:0
191s lazyfree_pending_objects:0
191s lazyfreed_objects:0
191s
191s # Persistence
191s loading:0
191s async_loading:0
191s current_cow_peak:0
191s current_cow_size:0
191s current_cow_size_age:0
191s current_fork_perc:0.00
191s current_save_keys_processed:0
191s current_save_keys_total:0
191s rdb_changes_since_last_save:0
191s rdb_bgsave_in_progress:0
191s rdb_last_save_time:1750342787
191s rdb_last_bgsave_status:ok
191s rdb_last_bgsave_time_sec:-1
191s rdb_current_bgsave_time_sec:-1
191s rdb_saves:0
191s rdb_last_cow_size:0
191s rdb_last_load_keys_expired:0
191s rdb_last_load_keys_loaded:0
191s aof_enabled:0
191s aof_rewrite_in_progress:0
191s aof_rewrite_scheduled:0
191s aof_last_rewrite_time_sec:-1
191s aof_current_rewrite_time_sec:-1
191s aof_last_bgrewrite_status:ok
191s aof_rewrites:0
191s aof_rewrites_consecutive_failures:0
191s aof_last_write_status:ok
191s aof_last_cow_size:0
191s module_fork_in_progress:0
191s module_fork_last_cow_size:0
191s
191s # Stats
191s total_connections_received:1
191s total_commands_processed:0
191s instantaneous_ops_per_sec:0
191s total_net_input_bytes:14
191s total_net_output_bytes:0
191s total_net_repl_input_bytes:0
191s total_net_repl_output_bytes:0
191s instantaneous_input_kbps:0.00
191s instantaneous_output_kbps:0.00
191s instantaneous_input_repl_kbps:0.00
191s instantaneous_output_repl_kbps:0.00
191s rejected_connections:0
191s sync_full:0
191s sync_partial_ok:0
191s sync_partial_err:0
191s expired_keys:0
191s expired_stale_perc:0.00
191s expired_time_cap_reached_count:0
191s expire_cycle_cpu_milliseconds:0
191s evicted_keys:0
191s evicted_clients:0
191s evicted_scripts:0
191s total_eviction_exceeded_time:0
191s current_eviction_exceeded_time:0
191s keyspace_hits:0
191s keyspace_misses:0
191s pubsub_channels:0
191s pubsub_patterns:0
191s pubsubshard_channels:0
191s latest_fork_usec:0
191s total_forks:0
191s migrate_cached_sockets:0
191s slave_expires_tracked_keys:0
191s active_defrag_hits:0
191s active_defrag_misses:0
191s active_defrag_key_hits:0
191s active_defrag_key_misses:0
191s total_active_defrag_time:0
191s current_active_defrag_time:0
191s tracking_total_keys:0
191s tracking_total_items:0
191s tracking_total_prefixes:0
191s unexpected_error_replies:0
191s total_error_replies:0
191s dump_payload_sanitizations:0
191s total_reads_processed:1
191s total_writes_processed:0
191s io_threaded_reads_processed:0
191s io_threaded_writes_processed:0
191s io_threaded_freed_objects:0
191s io_threaded_accept_processed:0
191s io_threaded_poll_processed:0
191s io_threaded_total_prefetch_batches:0
191s io_threaded_total_prefetch_entries:0
191s client_query_buffer_limit_disconnections:0
191s client_output_buffer_limit_disconnections:0
191s reply_buffer_shrinks:0
191s reply_buffer_expands:0
191s eventloop_cycles:51
191s eventloop_duration_sum:11688
191s eventloop_duration_cmd_sum:0
191s instantaneous_eventloop_cycles_per_sec:9
191s instantaneous_eventloop_duration_usec:217
191s acl_access_denied_auth:0
191s acl_access_denied_cmd:0
191s acl_access_denied_key:0
191s acl_access_denied_channel:0
191s
191s # Replication
191s role:master
191s connected_slaves:0
191s replicas_waiting_psync:0
191s master_failover_state:no-failover
191s master_replid:18f62cb9d966646ca75c573487b7d35742a7bed4
191s master_replid2:0000000000000000000000000000000000000000
191s master_repl_offset:0
191s second_repl_offset:-1
191s repl_backlog_active:0
191s repl_backlog_size:10485760
191s repl_backlog_first_byte_offset:0
191s repl_backlog_histlen:0
191s
191s # CPU
191s used_cpu_sys:0.047320
191s used_cpu_user:0.071724
191s used_cpu_sys_children:0.000000
191s used_cpu_user_children:0.002021
191s used_cpu_sys_main_thread:0.045181
191s used_cpu_user_main_thread:0.072274
191s
191s # Modules
191s
191s # Errorstats
191s
191s # Cluster
191s cluster_enabled:0
191s
191s # Keyspace
191s Redis ver. 8.1.1
191s autopkgtest [14:19:52]: test 0001-valkey-cli: -----------------------]
195s 0001-valkey-cli PASS
195s autopkgtest [14:19:56]: test 0001-valkey-cli: - - - - - - - - - - results - - - - - - - - - -
199s autopkgtest [14:20:00]: test 0002-benchmark: preparing testbed
200s Reading package lists...
201s Building dependency tree...
201s Reading state information...
201s Solving dependencies...
202s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
209s autopkgtest [14:20:10]: test 0002-benchmark: [-----------------------
218s PING_INLINE: rps=0.0 (overall: 0.0) avg_msec=nan (overall: nan) PING_INLINE: rps=365976.1 (overall: 364523.8) avg_msec=1.211 (overall: 1.211) ====== PING_INLINE ======
218s 100000 requests completed in 0.27 seconds
218s 50 parallel clients
218s 3 bytes payload
218s keep alive: 1
218s host configuration "save": 3600 1 300 100 60 10000
218s host configuration "appendonly": no
218s multi-thread: no
218s
218s Latency by percentile distribution:
218s 0.000% <= 0.503 milliseconds (cumulative count 10)
218s 50.000% <= 1.167 milliseconds (cumulative count 51060)
218s 75.000% <= 1.375 milliseconds (cumulative count 75440)
218s 87.500% <= 1.519 milliseconds (cumulative count 87690)
218s 93.750% <= 1.615 milliseconds (cumulative count 93770)
218s 96.875% <= 1.703 milliseconds (cumulative count 97070)
218s 98.438% <= 1.807 milliseconds (cumulative count 98500)
218s 99.219% <= 1.975 milliseconds (cumulative count 99220)
218s 99.609% <= 2.143 milliseconds (cumulative count 99610)
218s 99.805% <= 2.263 milliseconds (cumulative count 99810)
218s 99.902% <= 2.335 milliseconds (cumulative count 99910)
218s 99.951% <= 2.407 milliseconds (cumulative count 99960)
218s 99.976% <= 2.495 milliseconds (cumulative count 99980)
218s 99.988% <= 2.543 milliseconds (cumulative count 99990)
218s 99.994% <= 2.559 milliseconds (cumulative count 100000)
218s 100.000% <= 2.559 milliseconds (cumulative count 100000)
218s
218s Cumulative distribution of latencies:
218s 0.000% <= 0.103 milliseconds (cumulative count 0)
218s 0.010% <= 0.503 milliseconds (cumulative count 10)
218s 0.130% <= 0.607 milliseconds (cumulative count 130)
218s 0.430% <= 0.703 milliseconds (cumulative count 430)
218s 1.120% <= 0.807 milliseconds (cumulative count 1120)
218s 4.870% <= 0.903 milliseconds (cumulative count 4870)
218s 20.170% <= 1.007 milliseconds (cumulative count 20170)
218s 39.770% <= 1.103 milliseconds (cumulative count 39770)
218s 56.430% <= 1.207 milliseconds (cumulative count 56430)
218s 67.960% <= 1.303 milliseconds (cumulative count 67960)
218s 78.310% <= 1.407 milliseconds (cumulative count 78310)
218s 86.400% <= 1.503 milliseconds (cumulative count 86400)
218s 93.430% <= 1.607 milliseconds (cumulative count 93430)
218s 97.070% <= 1.703 milliseconds (cumulative count 97070)
218s 98.500% <= 1.807 milliseconds (cumulative count 98500)
218s 99.070% <= 1.903 milliseconds (cumulative count 99070)
218s 99.310% <= 2.007 milliseconds (cumulative count 99310)
218s 99.540% <= 2.103 milliseconds (cumulative count 99540)
218s 100.000% <= 3.103 milliseconds (cumulative count 100000)
218s
218s Summary:
218s throughput summary: 364963.53 requests per second
218s latency summary (msec):
218s avg min p50 p95 p99 max
218s 1.214 0.496 1.167 1.647 1.903 2.559
218s PING_MBULK: rps=344240.0 (overall: 382488.9) avg_msec=1.131 (overall: 1.131) ====== PING_MBULK ======
218s 100000 requests completed in 0.26 seconds
218s 50 parallel clients
218s 3 bytes payload
218s keep alive: 1
218s host configuration "save": 3600 1 300 100 60 10000
218s host configuration "appendonly": no
218s multi-thread: no
218s
218s Latency by percentile distribution:
218s 0.000% <= 0.431 milliseconds (cumulative count 10)
218s 50.000% <= 1.079 milliseconds (cumulative count 51210)
218s 75.000% <= 1.279 milliseconds (cumulative count 75210)
218s 87.500% <= 1.431 milliseconds (cumulative count 87720)
218s 93.750% <= 1.535 milliseconds (cumulative count 93830)
218s 96.875% <= 1.655 milliseconds (cumulative count 96960)
218s 98.438% <= 1.775 milliseconds (cumulative count 98470)
218s 99.219% <= 2.007 milliseconds (cumulative count 99220)
218s 99.609% <= 2.319 milliseconds (cumulative count 99610)
218s 99.805% <= 2.519 milliseconds (cumulative count 99810)
218s 99.902% <= 2.655 milliseconds (cumulative count 99910)
218s 99.951% <= 2.735 milliseconds (cumulative count 99960)
218s 99.976% <= 2.919 milliseconds (cumulative count 99980)
218s 99.988% <= 2.967 milliseconds (cumulative count 99990)
218s 99.994% <= 2.991 milliseconds (cumulative count 100000)
218s 100.000% <= 2.991 milliseconds (cumulative count 100000)
218s
218s Cumulative distribution of latencies:
218s 0.000% <= 0.103 milliseconds (cumulative count 0)
218s 0.110% <= 0.503 milliseconds (cumulative count 110)
218s 1.160% <= 0.607 milliseconds (cumulative count 1160)
218s 2.840% <= 0.703 milliseconds (cumulative count 2840)
218s 6.350% <= 0.807 milliseconds (cumulative count 6350)
218s 15.790% <= 0.903 milliseconds (cumulative count 15790)
218s 37.550% <= 1.007 milliseconds (cumulative count 37550)
218s 54.990% <= 1.103 milliseconds (cumulative count 54990)
218s 68.140% <= 1.207 milliseconds (cumulative count 68140)
218s 77.280% <= 1.303 milliseconds (cumulative count 77280)
218s 85.780% <= 1.407 milliseconds (cumulative count 85780)
218s 92.280% <= 1.503 milliseconds (cumulative count 92280)
218s 96.050% <= 1.607 milliseconds (cumulative count 96050)
218s 97.820% <= 1.703 milliseconds (cumulative count 97820)
218s 98.680% <= 1.807 milliseconds (cumulative count 98680)
218s 99.000% <= 1.903 milliseconds (cumulative count 99000)
218s 99.220% <= 2.007 milliseconds (cumulative count 99220)
218s 99.390% <= 2.103 milliseconds (cumulative count 99390)
218s 100.000% <= 3.103 milliseconds (cumulative count 100000)
218s
218s Summary:
218s throughput summary: 384615.41 requests per second
218s latency summary (msec):
218s avg min p50 p95 p99 max
218s 1.125 0.424 1.079 1.575 1.903 2.991
218s SET: rps=257131.5 (overall: 303004.7) avg_msec=1.476 (overall: 1.476) ====== SET ======
218s 100000 requests completed in 0.33 seconds
218s 50 parallel clients
218s 3 bytes payload
218s keep alive: 1
218s host configuration "save": 3600 1 300 100 60 10000
218s host configuration "appendonly": no
218s multi-thread: no
218s
218s Latency by percentile distribution:
218s 0.000% <= 0.527 milliseconds (cumulative count 10)
218s 50.000% <= 1.471 milliseconds (cumulative count 50300)
218s 75.000% <= 1.695 milliseconds (cumulative count 75130)
218s 87.500% <= 1.847 milliseconds (cumulative count 87900)
218s 93.750% <= 1.935 milliseconds (cumulative count 94030)
218s 96.875% <= 2.007 milliseconds (cumulative count 97020)
218s 98.438% <= 2.095 milliseconds (cumulative count 98490)
218s 99.219% <= 2.167 milliseconds (cumulative count 99250)
218s 99.609% <= 2.255 milliseconds (cumulative count 99630)
218s 99.805% <= 2.359 milliseconds (cumulative count 99820)
218s 99.902% <= 2.423 milliseconds (cumulative count 99910)
218s 99.951% <= 2.479 milliseconds (cumulative count 99970)
218s 99.976% <= 2.487 milliseconds (cumulative count 99980)
218s 99.988% <= 2.543 milliseconds (cumulative count 99990)
218s 99.994% <= 2.647 milliseconds (cumulative count 100000)
218s 100.000% <= 2.647 milliseconds (cumulative count 100000)
218s
218s Cumulative distribution of latencies:
218s 0.000% <= 0.103 milliseconds (cumulative count 0)
218s 0.080% <= 0.607 milliseconds (cumulative count 80)
218s 0.270% <= 0.703 milliseconds (cumulative count 270)
218s 0.530% <= 0.807 milliseconds (cumulative count 530)
218s 0.850% <= 0.903 milliseconds (cumulative count 850)
218s 1.660% <= 1.007 milliseconds (cumulative count 1660)
218s 4.880% <= 1.103 milliseconds (cumulative count 4880)
218s 17.460% <= 1.207 milliseconds (cumulative count 17460)
218s 31.130% <= 1.303 milliseconds (cumulative count 31130)
218s 42.520% <= 1.407 milliseconds (cumulative count 42520)
218s 54.130% <= 1.503 milliseconds (cumulative count 54130)
218s 66.120% <= 1.607 milliseconds (cumulative count 66120)
218s 75.760% <= 1.703 milliseconds (cumulative count 75760)
218s 84.830% <= 1.807 milliseconds (cumulative count 84830)
218s 92.180% <= 1.903 milliseconds (cumulative count 92180)
218s 97.020% <= 2.007 milliseconds (cumulative count 97020)
218s 98.580% <= 2.103 milliseconds (cumulative count 98580)
218s 100.000% <= 3.103 milliseconds (cumulative count 100000)
218s
218s Summary:
218s throughput summary: 303030.28 requests per second
218s latency summary (msec):
218s avg min p50 p95 p99 max
218s 1.489 0.520 1.471 1.951 2.143 2.647
218s GET: rps=190320.0 (overall: 366000.0) avg_msec=1.227 (overall: 1.227) ====== GET ======
218s 100000 requests completed in 0.27 seconds
218s 50 parallel clients
218s 3 bytes payload
218s keep alive: 1
218s host configuration "save": 3600 1 300 100 60 10000
218s host configuration "appendonly": no
218s multi-thread: no
218s
218s Latency by percentile distribution:
218s 0.000% <= 0.431 milliseconds (cumulative count 10)
218s 50.000% <= 1.191 milliseconds (cumulative count 50090)
218s 75.000% <= 1.391 milliseconds (cumulative count 75770)
218s 87.500% <= 1.519 milliseconds (cumulative count 87930)
218s 93.750% <= 1.599 milliseconds (cumulative count 94070)
218s 96.875% <= 1.671 milliseconds (cumulative count 96910)
218s 98.438% <= 1.743 milliseconds (cumulative count 98570)
218s 99.219% <= 1.807 milliseconds (cumulative count 99270)
218s 99.609% <= 1.863 milliseconds (cumulative count 99630)
218s 99.805% <= 1.935 milliseconds (cumulative count 99810)
218s 99.902% <= 2.047 milliseconds (cumulative count 99910)
218s 99.951% <= 2.135 milliseconds (cumulative count 99960)
218s 99.976% <= 2.215 milliseconds (cumulative count 99980)
218s 99.988% <= 2.279 milliseconds (cumulative count 99990)
218s 99.994% <= 2.351 milliseconds (cumulative count 100000)
218s 100.000% <= 2.351 milliseconds (cumulative count 100000)
218s
218s Cumulative distribution of latencies:
218s 0.000% <= 0.103 milliseconds (cumulative count 0)
218s 0.070% <= 0.503 milliseconds (cumulative count 70)
218s 0.180% <= 0.607 milliseconds (cumulative count 180)
218s 0.340% <= 0.703 milliseconds (cumulative count 340)
218s 0.520% <= 0.807 milliseconds (cumulative count 520)
218s 3.320% <= 0.903 milliseconds (cumulative count 3320)
218s 18.380% <= 1.007 milliseconds (cumulative count 18380)
218s 36.720% <= 1.103 milliseconds (cumulative count 36720)
218s 52.360% <= 1.207 milliseconds (cumulative count 52360)
218s 65.630% <= 1.303 milliseconds (cumulative count 65630)
218s 77.340% <= 1.407 milliseconds (cumulative count 77340)
218s 86.580% <= 1.503 milliseconds (cumulative count 86580)
218s 94.530% <= 1.607 milliseconds (cumulative count 94530)
218s 97.810% <= 1.703 milliseconds (cumulative count 97810)
218s 99.270% <= 1.807 milliseconds (cumulative count 99270)
218s 99.750% <= 1.903 milliseconds (cumulative count 99750)
218s 99.890% <= 2.007 milliseconds (cumulative count 99890)
218s 99.940% <= 2.103 milliseconds (cumulative count 99940)
218s 100.000% <= 3.103 milliseconds (cumulative count 100000)
218s
218s Summary:
218s throughput summary: 367647.03 requests per second
218s latency summary (msec):
218s avg min p50 p95 p99 max
218s 1.223 0.424 1.191 1.623 1.783 2.351
219s INCR: rps=148964.2 (overall: 349439.2) avg_msec=1.272 (overall: 1.272) ====== INCR ======
219s 100000 requests completed in 0.28 seconds
219s 50 parallel clients
219s 3 bytes payload
219s keep alive: 1
219s host configuration "save": 3600 1 300 100 60 10000
219s host configuration "appendonly": no
219s multi-thread: no
219s
219s Latency by percentile distribution:
219s 0.000% <= 0.455 milliseconds (cumulative count 10)
219s 50.000% <= 1.239 milliseconds (cumulative count 50410)
219s 75.000% <= 1.439 milliseconds (cumulative count 75520)
219s 87.500% <= 1.567 milliseconds (cumulative count 88060)
219s 93.750% <= 1.647 milliseconds (cumulative count 93830)
219s 96.875% <= 1.719 milliseconds (cumulative count 96940)
219s 98.438% <= 1.783 milliseconds (cumulative count 98520)
219s 99.219% <= 1.839 milliseconds (cumulative count 99250)
219s 99.609% <= 1.895 milliseconds (cumulative count 99640)
219s 99.805% <= 1.935 milliseconds (cumulative count 99810)
219s 99.902% <= 1.991 milliseconds (cumulative count 99910)
219s 99.951% <= 2.095 milliseconds (cumulative count 99960)
219s 99.976% <= 2.223 milliseconds (cumulative count 99980)
219s 99.988% <= 2.239 milliseconds (cumulative count 99990)
219s 99.994% <= 2.271 milliseconds (cumulative count 100000)
219s 100.000% <= 2.271 milliseconds (cumulative count 100000)
219s
219s Cumulative distribution of latencies:
219s 0.000% <= 0.103 milliseconds (cumulative count 0)
219s 0.030% <= 0.503 milliseconds (cumulative count 30)
219s 0.100% <= 0.607 milliseconds (cumulative count 100)
219s 0.130% <= 0.703 milliseconds (cumulative count 130)
219s 0.220% <= 0.807 milliseconds (cumulative count 220)
219s 1.400% <= 0.903 milliseconds (cumulative count 1400)
219s 12.520% <= 1.007 milliseconds (cumulative count 12520)
219s 29.830% <= 1.103 milliseconds (cumulative count 29830)
219s 45.840% <= 1.207 milliseconds (cumulative count 45840)
219s 59.120% <= 1.303 milliseconds (cumulative count 59120)
219s 71.970% <= 1.407 milliseconds (cumulative count 71970)
219s 82.020% <= 1.503 milliseconds (cumulative count 82020)
219s 91.290% <= 1.607 milliseconds (cumulative count 91290)
219s 96.430% <= 1.703 milliseconds (cumulative count 96430)
219s 98.910% <= 1.807 milliseconds (cumulative count 98910)
219s 99.680% <= 1.903 milliseconds (cumulative count 99680)
219s 99.920% <= 2.007 milliseconds (cumulative count 99920)
219s 99.970% <= 2.103 milliseconds (cumulative count 99970)
219s 100.000% <= 3.103 milliseconds (cumulative count 100000)
219s
219s Summary:
219s throughput summary: 354609.94 requests per second
219s latency summary (msec):
219s avg min p50 p95 p99 max
219s 1.267 0.448 1.239 1.671 1.815 2.271
219s LPUSH: rps=77600.0 (overall: 269444.5) avg_msec=1.654 (overall: 1.654) LPUSH: rps=270637.5 (overall: 270371.5) avg_msec=1.667 (overall: 1.664) ====== LPUSH ======
219s 100000 requests completed in 0.37 seconds
219s 50 parallel clients
219s 3 bytes payload
219s keep alive: 1
219s host configuration "save": 3600 1 300 100 60 10000
219s host configuration "appendonly": no
219s multi-thread: no
219s
219s Latency by percentile distribution:
219s 0.000% <= 0.551 milliseconds (cumulative count 10)
219s 50.000% <= 1.655 milliseconds (cumulative count 50440)
219s 75.000% <= 1.887 milliseconds (cumulative count 75170)
219s 87.500% <= 2.039 milliseconds (cumulative count 87780)
219s 93.750% <= 2.159 milliseconds (cumulative count 93980)
219s 96.875% <= 2.263 milliseconds (cumulative count 97050)
219s 98.438% <= 2.375 milliseconds (cumulative count 98450)
219s 99.219% <= 2.495 milliseconds (cumulative count 99270)
219s 99.609% <= 2.607 milliseconds (cumulative count 99610)
219s 99.805% <= 2.687 milliseconds (cumulative count 99810)
219s 99.902% <= 2.799 milliseconds (cumulative count 99910)
219s 99.951% <= 2.831 milliseconds (cumulative count 99960)
219s 99.976% <= 2.895 milliseconds (cumulative count 99990)
219s 99.994% <= 3.287 milliseconds (cumulative count 100000)
219s 100.000% <= 3.287 milliseconds (cumulative count 100000)
219s
219s Cumulative distribution of latencies:
219s 0.000% <= 0.103 milliseconds (cumulative count 0)
219s 0.050% <= 0.607 milliseconds (cumulative count 50)
219s 0.210% <= 0.703 milliseconds (cumulative count 210)
219s 0.390% <= 0.807 milliseconds (cumulative count 390)
219s 0.530% <= 0.903 milliseconds (cumulative count 530)
219s 1.200% <= 1.007 milliseconds (cumulative count 1200)
219s 3.150% <= 1.103 milliseconds (cumulative count 3150)
219s 8.080% <= 1.207 milliseconds (cumulative count 8080)
219s 14.200% <= 1.303 milliseconds (cumulative count 14200)
219s 23.570% <= 1.407 milliseconds (cumulative count 23570)
219s 33.880% <= 1.503 milliseconds (cumulative count 33880)
219s 45.240% <= 1.607 milliseconds (cumulative count 45240)
219s 55.740% <= 1.703 milliseconds (cumulative count 55740)
219s 67.130% <= 1.807 milliseconds (cumulative count 67130)
219s 76.760% <= 1.903 milliseconds (cumulative count 76760)
219s 85.580% <= 2.007 milliseconds (cumulative count 85580)
219s 91.510% <= 2.103 milliseconds (cumulative count 91510)
219s 99.990% <= 3.103 milliseconds (cumulative count 99990)
219s 100.000% <= 4.103 milliseconds (cumulative count 100000)
219s
219s Summary:
219s throughput summary: 271739.12 requests per second
219s latency summary (msec):
219s avg min p50 p95 p99 max
219s 1.659 0.544 1.655 2.191 2.439 3.287
219s RPUSH: rps=271120.0 (overall: 333891.6) avg_msec=1.351 (overall: 1.351) ====== RPUSH ======
219s 100000 requests completed in 0.30 seconds
219s 50 parallel clients
219s 3 bytes payload
219s keep alive: 1
219s host configuration "save": 3600 1 300 100 60 10000
219s host configuration "appendonly": no
219s multi-thread: no
219s
219s Latency by percentile distribution:
219s 0.000% <= 0.407 milliseconds (cumulative count 10)
219s 50.000% <= 1.351 milliseconds (cumulative count 50530)
219s 75.000% <= 1.543 milliseconds (cumulative count 75180)
219s 87.500% <= 1.671 milliseconds (cumulative count 87640)
219s 93.750% <= 1.759 milliseconds (cumulative count 93860)
219s 96.875% <= 1.831 milliseconds (cumulative count 96930)
219s 98.438% <= 1.903 milliseconds (cumulative count 98480)
219s 99.219% <= 1.991 milliseconds (cumulative count 99240)
219s 99.609% <= 2.215 milliseconds (cumulative count 99610)
219s 99.805% <= 2.527 milliseconds (cumulative count 99820)
219s 99.902% <= 2.671 milliseconds (cumulative count 99910)
219s 99.951% <= 2.791 milliseconds (cumulative count 99960)
219s 99.976% <= 2.879 milliseconds (cumulative count 99980)
219s 99.988% <= 2.911 milliseconds (cumulative count 99990)
219s 99.994% <= 2.943 milliseconds (cumulative count 100000)
219s 100.000% <= 2.943 milliseconds (cumulative count 100000)
219s
219s Cumulative distribution of latencies:
219s 0.000% <= 0.103 milliseconds (cumulative count 0)
219s 0.010% <= 0.407 milliseconds (cumulative count 10)
219s 0.050% <= 0.503 milliseconds (cumulative count 50)
219s 0.150% <= 0.607 milliseconds (cumulative count 150)
219s 0.280% <= 0.703 milliseconds (cumulative count 280)
219s 0.410% <= 0.807 milliseconds (cumulative count 410)
219s 0.980% <= 0.903 milliseconds (cumulative count 980)
219s 6.160% <= 1.007 milliseconds (cumulative count 6160)
219s 18.200% <= 1.103 milliseconds (cumulative count 18200)
219s 31.740% <= 1.207 milliseconds (cumulative count 31740)
219s 43.770% <= 1.303 milliseconds (cumulative count 43770)
219s 58.390% <= 1.407 milliseconds (cumulative count 58390)
219s 70.750% <= 1.503 milliseconds (cumulative count 70750)
219s 81.820% <= 1.607 milliseconds (cumulative count 81820)
219s 90.230% <= 1.703 milliseconds (cumulative count 90230)
219s 96.030% <= 1.807 milliseconds (cumulative count 96030)
219s 98.480% <= 1.903 milliseconds (cumulative count 98480)
219s 99.350% <= 2.007 milliseconds (cumulative count 99350)
219s 99.540% <= 2.103 milliseconds (cumulative count 99540)
219s 100.000% <= 3.103 milliseconds (cumulative count 100000)
219s
219s Summary:
219s throughput summary: 332225.91 requests per second
219s latency summary (msec):
219s avg min p50 p95 p99 max
219s 1.361 0.400 1.351 1.783 1.951 2.943
220s LPOP: rps=170239.0 (overall: 282980.2) avg_msec=1.600 (overall: 1.600) ====== LPOP ======
220s 100000 requests completed in 0.35 seconds
220s 50 parallel clients
220s 3 bytes payload
220s keep alive: 1
220s host configuration "save": 3600 1 300 100 60 10000
220s host configuration "appendonly": no
220s multi-thread: no
220s
220s Latency by percentile distribution:
220s 0.000% <= 0.607 milliseconds (cumulative count 10)
220s 50.000% <= 1.599 milliseconds (cumulative count 51010)
220s 75.000% <= 1.807 milliseconds (cumulative count 75050)
220s 87.500% <= 1.951 milliseconds (cumulative count 88130)
220s 93.750% <= 2.047 milliseconds (cumulative count 94030)
220s 96.875% <= 2.135 milliseconds (cumulative count 96900)
220s 98.438% <= 2.223 milliseconds (cumulative count 98450)
220s 99.219% <= 2.327 milliseconds (cumulative count 99220)
220s 99.609% <= 2.439 milliseconds (cumulative count 99630)
220s 99.805% <= 2.535 milliseconds (cumulative count 99810)
220s 99.902% <= 2.687 milliseconds (cumulative count 99910)
220s 99.951% <= 2.727 milliseconds (cumulative count 
99960) 220s 99.976% <= 2.751 milliseconds (cumulative count 99980) 220s 99.988% <= 2.767 milliseconds (cumulative count 99990) 220s 99.994% <= 2.799 milliseconds (cumulative count 100000) 220s 100.000% <= 2.799 milliseconds (cumulative count 100000) 220s 220s Cumulative distribution of latencies: 220s 0.000% <= 0.103 milliseconds (cumulative count 0) 220s 0.010% <= 0.607 milliseconds (cumulative count 10) 220s 0.070% <= 0.703 milliseconds (cumulative count 70) 220s 0.140% <= 0.807 milliseconds (cumulative count 140) 220s 0.240% <= 0.903 milliseconds (cumulative count 240) 220s 0.650% <= 1.007 milliseconds (cumulative count 650) 220s 2.870% <= 1.103 milliseconds (cumulative count 2870) 220s 11.730% <= 1.207 milliseconds (cumulative count 11730) 220s 22.030% <= 1.303 milliseconds (cumulative count 22030) 220s 31.040% <= 1.407 milliseconds (cumulative count 31040) 220s 40.080% <= 1.503 milliseconds (cumulative count 40080) 220s 51.940% <= 1.607 milliseconds (cumulative count 51940) 220s 63.640% <= 1.703 milliseconds (cumulative count 63640) 220s 75.050% <= 1.807 milliseconds (cumulative count 75050) 220s 84.360% <= 1.903 milliseconds (cumulative count 84360) 220s 91.910% <= 2.007 milliseconds (cumulative count 91910) 220s 96.080% <= 2.103 milliseconds (cumulative count 96080) 220s 100.000% <= 3.103 milliseconds (cumulative count 100000) 220s 220s Summary: 220s throughput summary: 286532.94 requests per second 220s latency summary (msec): 220s avg min p50 p95 p99 max 220s 1.588 0.600 1.599 2.079 2.295 2.799 220s RPOP: rps=60400.0 (overall: 302000.0) avg_msec=1.498 (overall: 1.498) RPOP: rps=302390.5 (overall: 302325.6) avg_msec=1.502 (overall: 1.501) ====== RPOP ====== 220s 100000 requests completed in 0.33 seconds 220s 50 parallel clients 220s 3 bytes payload 220s keep alive: 1 220s host configuration "save": 3600 1 300 100 60 10000 220s host configuration "appendonly": no 220s multi-thread: no 220s 220s Latency by percentile distribution: 220s 0.000% <= 0.575 
milliseconds (cumulative count 10) 220s 50.000% <= 1.503 milliseconds (cumulative count 50550) 220s 75.000% <= 1.719 milliseconds (cumulative count 75780) 220s 87.500% <= 1.855 milliseconds (cumulative count 87960) 220s 93.750% <= 1.935 milliseconds (cumulative count 93820) 220s 96.875% <= 2.015 milliseconds (cumulative count 97090) 220s 98.438% <= 2.079 milliseconds (cumulative count 98510) 220s 99.219% <= 2.143 milliseconds (cumulative count 99220) 220s 99.609% <= 2.199 milliseconds (cumulative count 99610) 220s 99.805% <= 2.279 milliseconds (cumulative count 99810) 220s 99.902% <= 2.335 milliseconds (cumulative count 99910) 220s 99.951% <= 2.423 milliseconds (cumulative count 99960) 220s 99.976% <= 2.463 milliseconds (cumulative count 99980) 220s 99.988% <= 2.495 milliseconds (cumulative count 99990) 220s 99.994% <= 2.567 milliseconds (cumulative count 100000) 220s 100.000% <= 2.567 milliseconds (cumulative count 100000) 220s 220s Cumulative distribution of latencies: 220s 0.000% <= 0.103 milliseconds (cumulative count 0) 220s 0.050% <= 0.607 milliseconds (cumulative count 50) 220s 0.170% <= 0.703 milliseconds (cumulative count 170) 220s 0.280% <= 0.807 milliseconds (cumulative count 280) 220s 0.390% <= 0.903 milliseconds (cumulative count 390) 220s 0.840% <= 1.007 milliseconds (cumulative count 840) 220s 4.200% <= 1.103 milliseconds (cumulative count 4200) 220s 17.540% <= 1.207 milliseconds (cumulative count 17540) 220s 29.280% <= 1.303 milliseconds (cumulative count 29280) 220s 39.370% <= 1.407 milliseconds (cumulative count 39370) 220s 50.550% <= 1.503 milliseconds (cumulative count 50550) 220s 63.230% <= 1.607 milliseconds (cumulative count 63230) 220s 74.150% <= 1.703 milliseconds (cumulative count 74150) 220s 83.920% <= 1.807 milliseconds (cumulative count 83920) 220s 91.740% <= 1.903 milliseconds (cumulative count 91740) 220s 96.840% <= 2.007 milliseconds (cumulative count 96840) 220s 98.870% <= 2.103 milliseconds (cumulative count 98870) 220s 100.000% <= 
3.103 milliseconds (cumulative count 100000) 220s 220s Summary: 220s throughput summary: 302114.81 requests per second 220s latency summary (msec): 220s avg min p50 p95 p99 max 220s 1.505 0.568 1.503 1.967 2.127 2.567 220s SADD: rps=307450.2 (overall: 353990.8) avg_msec=1.273 (overall: 1.273) ====== SADD ====== 220s 100000 requests completed in 0.28 seconds 220s 50 parallel clients 220s 3 bytes payload 220s keep alive: 1 220s host configuration "save": 3600 1 300 100 60 10000 220s host configuration "appendonly": no 220s multi-thread: no 220s 220s Latency by percentile distribution: 220s 0.000% <= 0.447 milliseconds (cumulative count 10) 220s 50.000% <= 1.263 milliseconds (cumulative count 51230) 220s 75.000% <= 1.439 milliseconds (cumulative count 75110) 220s 87.500% <= 1.559 milliseconds (cumulative count 87890) 220s 93.750% <= 1.639 milliseconds (cumulative count 94220) 220s 96.875% <= 1.703 milliseconds (cumulative count 96940) 220s 98.438% <= 1.775 milliseconds (cumulative count 98440) 220s 99.219% <= 1.847 milliseconds (cumulative count 99270) 220s 99.609% <= 1.919 milliseconds (cumulative count 99640) 220s 99.805% <= 1.999 milliseconds (cumulative count 99810) 220s 99.902% <= 2.055 milliseconds (cumulative count 99920) 220s 99.951% <= 2.079 milliseconds (cumulative count 99960) 220s 99.976% <= 2.111 milliseconds (cumulative count 99980) 220s 99.988% <= 2.215 milliseconds (cumulative count 99990) 220s 99.994% <= 2.367 milliseconds (cumulative count 100000) 220s 100.000% <= 2.367 milliseconds (cumulative count 100000) 220s 220s Cumulative distribution of latencies: 220s 0.000% <= 0.103 milliseconds (cumulative count 0) 220s 0.040% <= 0.503 milliseconds (cumulative count 40) 220s 0.150% <= 0.607 milliseconds (cumulative count 150) 220s 0.310% <= 0.703 milliseconds (cumulative count 310) 220s 0.740% <= 0.807 milliseconds (cumulative count 740) 220s 2.600% <= 0.903 milliseconds (cumulative count 2600) 220s 13.450% <= 1.007 milliseconds (cumulative count 13450) 
220s 27.680% <= 1.103 milliseconds (cumulative count 27680)
220s 42.540% <= 1.207 milliseconds (cumulative count 42540)
220s 57.260% <= 1.303 milliseconds (cumulative count 57260)
220s 71.430% <= 1.407 milliseconds (cumulative count 71430)
220s 82.190% <= 1.503 milliseconds (cumulative count 82190)
220s 91.990% <= 1.607 milliseconds (cumulative count 91990)
220s 96.940% <= 1.703 milliseconds (cumulative count 96940)
220s 98.970% <= 1.807 milliseconds (cumulative count 98970)
220s 99.560% <= 1.903 milliseconds (cumulative count 99560)
220s 99.830% <= 2.007 milliseconds (cumulative count 99830)
220s 99.970% <= 2.103 milliseconds (cumulative count 99970)
220s 100.000% <= 3.103 milliseconds (cumulative count 100000)
220s
220s Summary:
220s   throughput summary: 355871.91 requests per second
220s   latency summary (msec):
220s           avg       min       p50       p95       p99       max
220s         1.270     0.440     1.263     1.655     1.815     2.367
221s HSET: rps=244040.0 (overall: 329783.8) avg_msec=1.367 (overall: 1.367) ====== HSET ======
221s 100000 requests completed in 0.30 seconds
221s 50 parallel clients
221s 3 bytes payload
221s keep alive: 1
221s host configuration "save": 3600 1 300 100 60 10000
221s host configuration "appendonly": no
221s multi-thread: no
221s
221s Latency by percentile distribution:
221s 0.000% <= 0.391 milliseconds (cumulative count 10)
221s 50.000% <= 1.359 milliseconds (cumulative count 50220)
221s 75.000% <= 1.543 milliseconds (cumulative count 75450)
221s 87.500% <= 1.655 milliseconds (cumulative count 87710)
221s 93.750% <= 1.735 milliseconds (cumulative count 94250)
221s 96.875% <= 1.799 milliseconds (cumulative count 96880)
221s 98.438% <= 1.879 milliseconds (cumulative count 98460)
221s 99.219% <= 1.951 milliseconds (cumulative count 99280)
221s 99.609% <= 1.999 milliseconds (cumulative count 99640)
221s 99.805% <= 2.055 milliseconds (cumulative count 99810)
221s 99.902% <= 2.103 milliseconds (cumulative count 99910)
221s 99.951% <= 2.135 milliseconds (cumulative count 99960)
221s 99.976% <= 2.191 milliseconds (cumulative count 99980)
221s 99.988% <= 2.199 milliseconds (cumulative count 99990)
221s 99.994% <= 2.255 milliseconds (cumulative count 100000)
221s 100.000% <= 2.255 milliseconds (cumulative count 100000)
221s
221s Cumulative distribution of latencies:
221s 0.000% <= 0.103 milliseconds (cumulative count 0)
221s 0.020% <= 0.407 milliseconds (cumulative count 20)
221s 0.080% <= 0.503 milliseconds (cumulative count 80)
221s 0.170% <= 0.607 milliseconds (cumulative count 170)
221s 0.320% <= 0.703 milliseconds (cumulative count 320)
221s 0.520% <= 0.807 milliseconds (cumulative count 520)
221s 1.650% <= 0.903 milliseconds (cumulative count 1650)
221s 7.580% <= 1.007 milliseconds (cumulative count 7580)
221s 17.160% <= 1.103 milliseconds (cumulative count 17160)
221s 28.700% <= 1.207 milliseconds (cumulative count 28700)
221s 41.400% <= 1.303 milliseconds (cumulative count 41400)
221s 57.540% <= 1.407 milliseconds (cumulative count 57540)
221s 70.640% <= 1.503 milliseconds (cumulative count 70640)
221s 82.710% <= 1.607 milliseconds (cumulative count 82710)
221s 92.060% <= 1.703 milliseconds (cumulative count 92060)
221s 97.080% <= 1.807 milliseconds (cumulative count 97080)
221s 98.770% <= 1.903 milliseconds (cumulative count 98770)
221s 99.680% <= 2.007 milliseconds (cumulative count 99680)
221s 99.910% <= 2.103 milliseconds (cumulative count 99910)
221s 100.000% <= 3.103 milliseconds (cumulative count 100000)
221s
221s Summary:
221s   throughput summary: 333333.31 requests per second
221s   latency summary (msec):
221s           avg       min       p50       p95       p99       max
221s         1.360     0.384     1.359     1.751     1.927     2.255
221s SPOP: rps=201952.2 (overall: 381127.8) avg_msec=1.167 (overall: 1.167) ====== SPOP ======
221s 100000 requests completed in 0.26 seconds
221s 50 parallel clients
221s 3 bytes payload
221s keep alive: 1
221s host configuration "save": 3600 1 300 100 60 10000
221s host configuration "appendonly": no
221s multi-thread: no
221s
221s Latency by percentile distribution:
221s 0.000% <= 0.471 milliseconds (cumulative count 10)
221s 50.000% <= 1.135 milliseconds (cumulative count 50180)
221s 75.000% <= 1.327 milliseconds (cumulative count 75340)
221s 87.500% <= 1.455 milliseconds (cumulative count 88210)
221s 93.750% <= 1.527 milliseconds (cumulative count 93930)
221s 96.875% <= 1.591 milliseconds (cumulative count 97050)
221s 98.438% <= 1.655 milliseconds (cumulative count 98560)
221s 99.219% <= 1.719 milliseconds (cumulative count 99280)
221s 99.609% <= 1.767 milliseconds (cumulative count 99630)
221s 99.805% <= 1.815 milliseconds (cumulative count 99820)
221s 99.902% <= 1.879 milliseconds (cumulative count 99910)
221s 99.951% <= 1.983 milliseconds (cumulative count 99960)
221s 99.976% <= 2.015 milliseconds (cumulative count 99980)
221s 99.988% <= 2.031 milliseconds (cumulative count 99990)
221s 99.994% <= 2.055 milliseconds (cumulative count 100000)
221s 100.000% <= 2.055 milliseconds (cumulative count 100000)
221s
221s Cumulative distribution of latencies:
221s 0.000% <= 0.103 milliseconds (cumulative count 0)
221s 0.020% <= 0.503 milliseconds (cumulative count 20)
221s 0.230% <= 0.607 milliseconds (cumulative count 230)
221s 0.560% <= 0.703 milliseconds (cumulative count 560)
221s 1.590% <= 0.807 milliseconds (cumulative count 1590)
221s 8.490% <= 0.903 milliseconds (cumulative count 8490)
221s 28.370% <= 1.007 milliseconds (cumulative count 28370)
221s 45.410% <= 1.103 milliseconds (cumulative count 45410)
221s 60.680% <= 1.207 milliseconds (cumulative count 60680)
221s 72.730% <= 1.303 milliseconds (cumulative count 72730)
221s 83.570% <= 1.407 milliseconds (cumulative count 83570)
221s 92.280% <= 1.503 milliseconds (cumulative count 92280)
221s 97.480% <= 1.607 milliseconds (cumulative count 97480)
221s 99.130% <= 1.703 milliseconds (cumulative count 99130)
221s 99.800% <= 1.807 milliseconds (cumulative count 99800)
221s 99.920% <= 1.903 milliseconds (cumulative count 99920)
221s 99.970% <= 2.007 milliseconds (cumulative count 99970)
221s 100.000% <= 2.103 milliseconds (cumulative count 100000)
221s
221s Summary:
221s   throughput summary: 383141.75 requests per second
221s   latency summary (msec):
221s           avg       min       p50       p95       p99       max
221s         1.166     0.464     1.135     1.551     1.695     2.055
221s ZADD: rps=149160.0 (overall: 310750.0) avg_msec=1.445 (overall: 1.445) ====== ZADD ======
221s 100000 requests completed in 0.32 seconds
221s 50 parallel clients
221s 3 bytes payload
221s keep alive: 1
221s host configuration "save": 3600 1 300 100 60 10000
221s host configuration "appendonly": no
221s multi-thread: no
221s
221s Latency by percentile distribution:
221s 0.000% <= 0.535 milliseconds (cumulative count 10)
221s 50.000% <= 1.439 milliseconds (cumulative count 50010)
221s 75.000% <= 1.639 milliseconds (cumulative count 75880)
221s 87.500% <= 1.759 milliseconds (cumulative count 87740)
221s 93.750% <= 1.839 milliseconds (cumulative count 94120)
221s 96.875% <= 1.903 milliseconds (cumulative count 97080)
221s 98.438% <= 1.959 milliseconds (cumulative count 98560)
221s 99.219% <= 2.007 milliseconds (cumulative count 99230)
221s 99.609% <= 2.063 milliseconds (cumulative count 99640)
221s 99.805% <= 2.111 milliseconds (cumulative count 99810)
221s 99.902% <= 2.207 milliseconds (cumulative count 99910)
221s 99.951% <= 2.319 milliseconds (cumulative count 99960)
221s 99.976% <= 2.359 milliseconds (cumulative count 99980)
221s 99.988% <= 2.391 milliseconds (cumulative count 99990)
221s 99.994% <= 2.415 milliseconds (cumulative count 100000)
221s 100.000% <= 2.415 milliseconds (cumulative count 100000)
221s
221s Cumulative distribution of latencies:
221s 0.000% <= 0.103 milliseconds (cumulative count 0)
221s 0.030% <= 0.607 milliseconds (cumulative count 30)
221s 0.140% <= 0.703 milliseconds (cumulative count 140)
221s 0.200% <= 0.807 milliseconds (cumulative count 200)
221s 0.470% <= 0.903 milliseconds (cumulative count 470)
221s 2.430% <= 1.007 milliseconds (cumulative count 2430)
221s 10.500% <= 1.103 milliseconds (cumulative count 10500)
221s 22.320% <= 1.207 milliseconds (cumulative count 22320)
221s 32.300% <= 1.303 milliseconds (cumulative count 32300)
221s 45.410% <= 1.407 milliseconds (cumulative count 45410)
221s 59.170% <= 1.503 milliseconds (cumulative count 59170)
221s 72.380% <= 1.607 milliseconds (cumulative count 72380)
221s 82.280% <= 1.703 milliseconds (cumulative count 82280)
221s 91.940% <= 1.807 milliseconds (cumulative count 91940)
221s 97.080% <= 1.903 milliseconds (cumulative count 97080)
221s 99.230% <= 2.007 milliseconds (cumulative count 99230)
221s 99.800% <= 2.103 milliseconds (cumulative count 99800)
221s 100.000% <= 3.103 milliseconds (cumulative count 100000)
221s
221s Summary:
221s   throughput summary: 314465.41 requests per second
221s   latency summary (msec):
221s           avg       min       p50       p95       p99       max
221s         1.441     0.528     1.439     1.855     1.991     2.415
222s ZPOPMIN: rps=76055.8 (overall: 381800.0) avg_msec=1.150 (overall: 1.150) ====== ZPOPMIN ======
222s 100000 requests completed in 0.26 seconds
222s 50 parallel clients
222s 3 bytes payload
222s keep alive: 1
222s host configuration "save": 3600 1 300 100 60 10000
222s host configuration "appendonly": no
222s multi-thread: no
222s
222s Latency by percentile distribution:
222s 0.000% <= 0.423 milliseconds (cumulative count 10)
222s 50.000% <= 1.143 milliseconds (cumulative count 50560)
222s 75.000% <= 1.327 milliseconds (cumulative count 75160)
222s 87.500% <= 1.463 milliseconds (cumulative count 88120)
222s 93.750% <= 1.535 milliseconds (cumulative count 93750)
222s 96.875% <= 1.607 milliseconds (cumulative count 97090)
222s 98.438% <= 1.663 milliseconds (cumulative count 98550)
222s 99.219% <= 1.719 milliseconds (cumulative count 99340)
222s 99.609% <= 1.759 milliseconds (cumulative count 99650)
222s 99.805% <= 1.791 milliseconds (cumulative count 99810)
222s 99.902% <= 1.839 milliseconds (cumulative count 99910)
222s 99.951% <= 1.879 milliseconds (cumulative count 99960)
222s 99.976% <= 1.943 milliseconds (cumulative count 99980)
222s 99.988% <= 1.959 milliseconds (cumulative count 99990)
222s 99.994% <= 1.975 milliseconds (cumulative count 100000)
222s 100.000% <= 1.975 milliseconds (cumulative count 100000)
222s
222s Cumulative distribution of latencies:
222s 0.000% <= 0.103 milliseconds (cumulative count 0)
222s 0.040% <= 0.503 milliseconds (cumulative count 40)
222s 0.190% <= 0.607 milliseconds (cumulative count 190)
222s 0.370% <= 0.703 milliseconds (cumulative count 370)
222s 0.940% <= 0.807 milliseconds (cumulative count 940)
222s 7.540% <= 0.903 milliseconds (cumulative count 7540)
222s 26.300% <= 1.007 milliseconds (cumulative count 26300)
222s 44.110% <= 1.103 milliseconds (cumulative count 44110)
222s 60.060% <= 1.207 milliseconds (cumulative count 60060)
222s 72.550% <= 1.303 milliseconds (cumulative count 72550)
222s 83.080% <= 1.407 milliseconds (cumulative count 83080)
222s 91.480% <= 1.503 milliseconds (cumulative count 91480)
222s 97.090% <= 1.607 milliseconds (cumulative count 97090)
222s 99.100% <= 1.703 milliseconds (cumulative count 99100)
222s 99.870% <= 1.807 milliseconds (cumulative count 99870)
222s 99.960% <= 1.903 milliseconds (cumulative count 99960)
222s 100.000% <= 2.007 milliseconds (cumulative count 100000)
222s
222s Summary:
222s   throughput summary: 381679.41 requests per second
222s   latency summary (msec):
222s           avg       min       p50       p95       p99       max
222s         1.173     0.416     1.143     1.559     1.703     1.975
222s LPUSH (needed to benchmark LRANGE): rps=40400.0 (overall: 288571.4) avg_msec=1.551 (overall: 1.551) LPUSH (needed to benchmark LRANGE): rps=290079.7 (overall: 289895.1) avg_msec=1.562 (overall: 1.561) ====== LPUSH (needed to benchmark LRANGE) ======
222s 100000 requests completed in 0.34 seconds
222s 50 parallel clients
222s 3 bytes payload
222s keep alive: 1
222s host configuration "save": 3600 1 300 100 60 10000
222s host configuration "appendonly": no
222s multi-thread: no
222s
222s Latency by percentile distribution:
222s 0.000% <= 0.511 milliseconds (cumulative count 10)
222s 50.000% <= 1.583 milliseconds (cumulative count 51110)
222s 75.000% <= 1.783 milliseconds (cumulative count 75700)
222s 87.500% <= 1.911 milliseconds (cumulative count 87700)
222s 93.750% <= 2.015 milliseconds (cumulative count 93930)
222s 96.875% <= 2.103 milliseconds (cumulative count 96880)
222s 98.438% <= 2.191 milliseconds (cumulative count 98450)
222s 99.219% <= 2.311 milliseconds (cumulative count 99240)
222s 99.609% <= 2.399 milliseconds (cumulative count 99620)
222s 99.805% <= 2.495 milliseconds (cumulative count 99810)
222s 99.902% <= 2.687 milliseconds (cumulative count 99910)
222s 99.951% <= 2.839 milliseconds (cumulative count 99960)
222s 99.976% <= 2.919 milliseconds (cumulative count 99980)
222s 99.988% <= 2.999 milliseconds (cumulative count 99990)
222s 99.994% <= 3.071 milliseconds (cumulative count 100000)
222s 100.000% <= 3.071 milliseconds (cumulative count 100000)
222s
222s Cumulative distribution of latencies:
222s 0.000% <= 0.103 milliseconds (cumulative count 0)
222s 0.100% <= 0.607 milliseconds (cumulative count 100)
222s 0.170% <= 0.703 milliseconds (cumulative count 170)
222s 0.240% <= 0.807 milliseconds (cumulative count 240)
222s 0.490% <= 0.903 milliseconds (cumulative count 490)
222s 1.800% <= 1.007 milliseconds (cumulative count 1800)
222s 6.000% <= 1.103 milliseconds (cumulative count 6000)
222s 13.830% <= 1.207 milliseconds (cumulative count 13830)
222s 21.960% <= 1.303 milliseconds (cumulative count 21960)
222s 31.060% <= 1.407 milliseconds (cumulative count 31060)
222s 41.360% <= 1.503 milliseconds (cumulative count 41360)
222s 54.040% <= 1.607 milliseconds (cumulative count 54040)
222s 66.330% <= 1.703 milliseconds (cumulative count 66330)
222s 78.290% <= 1.807 milliseconds (cumulative count 78290)
222s 87.050% <= 1.903 milliseconds (cumulative count 87050)
222s 93.540% <= 2.007 milliseconds (cumulative count 93540)
222s 96.880% <= 2.103 milliseconds (cumulative count 96880)
222s 100.000% <= 3.103 milliseconds (cumulative count 100000)
222s
222s Summary:
222s   throughput summary: 290697.66 requests per second
222s   latency summary (msec):
222s           avg       min       p50       p95       p99       max
222s         1.566     0.504     1.583     2.047     2.271     3.071
223s LRANGE_100 (first 100 elements): rps=60840.0 (overall: 80476.2) avg_msec=3.260 (overall: 3.260) LRANGE_100 (first 100 elements): rps=82222.2 (overall: 81473.9) avg_msec=3.081 (overall: 3.156) LRANGE_100 (first 100 elements): rps=82350.6 (overall: 81791.9) avg_msec=3.051 (overall: 3.118) LRANGE_100 (first 100 elements): rps=82023.8 (overall: 81853.8) avg_msec=3.060 (overall: 3.102) LRANGE_100 (first 100 elements): rps=81620.6 (overall: 81804.5) avg_msec=3.092 (overall: 3.100) ====== LRANGE_100 (first 100 elements) ======
223s 100000 requests completed in 1.22 seconds
223s 50 parallel clients
223s 3 bytes payload
223s keep alive: 1
223s host configuration "save": 3600 1 300 100 60 10000
223s host configuration "appendonly": no
223s multi-thread: no
223s
223s Latency by percentile distribution:
223s 0.000% <= 0.823 milliseconds (cumulative count 10)
223s 50.000% <= 3.063 milliseconds (cumulative count 50560)
223s 75.000% <= 3.143 milliseconds (cumulative count 76630)
223s 87.500% <= 3.207 milliseconds (cumulative count 87570)
223s 93.750% <= 3.287 milliseconds (cumulative count 94000)
223s 96.875% <= 3.407 milliseconds (cumulative count 96890)
223s 98.438% <= 3.599 milliseconds (cumulative count 98480)
223s 99.219% <= 4.583 milliseconds (cumulative count 99220)
223s 99.609% <= 6.055 milliseconds (cumulative count 99610)
223s 99.805% <= 7.823 milliseconds (cumulative count 99810)
223s 99.902% <= 8.519 milliseconds (cumulative count 99920)
223s 99.951% <= 8.919 milliseconds (cumulative count 99960)
223s 99.976% <= 9.191 milliseconds (cumulative count 99980)
223s 99.988% <= 9.319 milliseconds (cumulative count 99990)
223s 99.994% <= 9.439 milliseconds (cumulative count 100000)
223s 100.000% <= 9.439 milliseconds (cumulative count 100000)
223s
223s
Cumulative distribution of latencies:
223s 0.000% <= 0.103 milliseconds (cumulative count 0)
223s 0.010% <= 0.903 milliseconds (cumulative count 10)
223s 0.020% <= 1.007 milliseconds (cumulative count 20)
223s 0.030% <= 1.207 milliseconds (cumulative count 30)
223s 0.040% <= 1.303 milliseconds (cumulative count 40)
223s 0.050% <= 1.407 milliseconds (cumulative count 50)
223s 0.060% <= 1.503 milliseconds (cumulative count 60)
223s 0.070% <= 1.607 milliseconds (cumulative count 70)
223s 0.080% <= 1.703 milliseconds (cumulative count 80)
223s 0.090% <= 1.807 milliseconds (cumulative count 90)
223s 0.130% <= 1.903 milliseconds (cumulative count 130)
223s 0.170% <= 2.007 milliseconds (cumulative count 170)
223s 0.190% <= 2.103 milliseconds (cumulative count 190)
223s 65.440% <= 3.103 milliseconds (cumulative count 65440)
223s 99.070% <= 4.103 milliseconds (cumulative count 99070)
223s 99.360% <= 5.103 milliseconds (cumulative count 99360)
223s 99.610% <= 6.103 milliseconds (cumulative count 99610)
223s 99.730% <= 7.103 milliseconds (cumulative count 99730)
223s 99.840% <= 8.103 milliseconds (cumulative count 99840)
223s 99.970% <= 9.103 milliseconds (cumulative count 99970)
223s 100.000% <= 10.103 milliseconds (cumulative count 100000)
223s
223s Summary:
223s   throughput summary: 81833.06 requests per second
223s   latency summary (msec):
223s           avg       min       p50       p95       p99       max
223s         3.100     0.816     3.063     3.319     3.935     9.439
227s LRANGE_300 (first 300 elements): rps=17399.2 (overall: 19564.4) avg_msec=14.880 (overall: 14.880) LRANGE_300 (first 300 elements): rps=23007.8 (overall: 21397.1) avg_msec=11.354 (overall: 12.862) LRANGE_300 (first 300 elements): rps=22502.0 (overall: 21776.0) avg_msec=11.673 (overall: 12.441) LRANGE_300 (first 300 elements): rps=24533.9 (overall: 22480.2) avg_msec=10.174 (overall: 11.809) LRANGE_300 (first 300 elements): rps=22948.2 (overall: 22575.4) avg_msec=11.573 (overall: 11.760) LRANGE_300 (first 300 elements): rps=21091.6 (overall: 22324.6) avg_msec=13.199 (overall: 11.990) LRANGE_300 (first 300 elements): rps=22373.0 (overall: 22331.6) avg_msec=11.979 (overall: 11.988) LRANGE_300 (first 300 elements): rps=24071.7 (overall: 22551.3) avg_msec=10.688 (overall: 11.813) LRANGE_300 (first 300 elements): rps=23396.8 (overall: 22646.4) avg_msec=10.682 (overall: 11.682) LRANGE_300 (first 300 elements): rps=22178.3 (overall: 22598.1) avg_msec=12.314 (overall: 11.746) LRANGE_300 (first 300 elements): rps=22775.2 (overall: 22614.7) avg_msec=11.428 (overall: 11.716) LRANGE_300 (first 300 elements): rps=23631.4 (overall: 22700.8) avg_msec=11.241 (overall: 11.674) LRANGE_300 (first 300 elements): rps=20112.0 (overall: 22502.3) avg_msec=12.841 (overall: 11.754) LRANGE_300 (first 300 elements): rps=24087.0 (overall: 22616.4) avg_msec=11.421 (overall: 11.728) LRANGE_300 (first 300 elements): rps=24381.0 (overall: 22734.5) avg_msec=10.041 (overall: 11.607) LRANGE_300 (first 300 elements): rps=24908.4 (overall: 22870.3) avg_msec=9.537 (overall: 11.466) LRANGE_300 (first 300 elements): rps=24318.7 (overall: 22955.5) avg_msec=10.497 (overall: 11.406) ====== LRANGE_300 (first 300 elements) ======
227s 100000 requests completed in 4.36 seconds
227s 50 parallel clients
227s 3 bytes payload
227s keep alive: 1
227s host configuration "save": 3600 1 300 100 60 10000
227s host configuration "appendonly": no
227s multi-thread: no
227s
227s Latency by percentile distribution:
227s 0.000% <= 0.759 milliseconds (cumulative count 10)
227s 50.000% <= 10.671 milliseconds (cumulative count 50080)
227s 75.000% <= 13.375 milliseconds (cumulative count 75010)
227s 87.500% <= 16.199 milliseconds (cumulative count 87530)
227s 93.750% <= 19.039 milliseconds (cumulative count 93750)
227s 96.875% <= 22.047 milliseconds (cumulative count 96890)
227s 98.438% <= 26.271 milliseconds (cumulative count 98440)
227s 99.219% <= 29.599 milliseconds (cumulative count 99220)
227s 99.609% <= 33.439 milliseconds (cumulative count 99610)
227s 99.805% <= 48.767 milliseconds (cumulative count 99810)
227s 99.902% <= 51.839 milliseconds (cumulative count 99910)
227s 99.951% <= 53.119 milliseconds (cumulative count 99960)
227s 99.976% <= 57.183 milliseconds (cumulative count 99980)
227s 99.988% <= 57.407 milliseconds (cumulative count 99990)
227s 99.994% <= 57.695 milliseconds (cumulative count 100000)
227s 100.000% <= 57.695 milliseconds (cumulative count 100000)
227s
227s Cumulative distribution of latencies:
227s 0.000% <= 0.103 milliseconds (cumulative count 0)
227s 0.030% <= 0.807 milliseconds (cumulative count 30)
227s 0.040% <= 0.903 milliseconds (cumulative count 40)
227s 0.060% <= 1.007 milliseconds (cumulative count 60)
227s 0.070% <= 1.103 milliseconds (cumulative count 70)
227s 0.100% <= 1.207 milliseconds (cumulative count 100)
227s 0.120% <= 1.303 milliseconds (cumulative count 120)
227s 0.140% <= 1.407 milliseconds (cumulative count 140)
227s 0.180% <= 1.503 milliseconds (cumulative count 180)
227s 0.210% <= 1.607 milliseconds (cumulative count 210)
227s 0.240% <= 1.703 milliseconds (cumulative count 240)
227s 0.290% <= 1.807 milliseconds (cumulative count 290)
227s 0.360% <= 1.903 milliseconds (cumulative count 360)
227s 0.420% <= 2.007 milliseconds (cumulative count 420)
227s 0.460% <= 2.103 milliseconds (cumulative count 460)
227s 1.000% <= 3.103 milliseconds (cumulative count 1000)
227s 1.800% <= 4.103 milliseconds (cumulative count 1800)
227s 3.960% <= 5.103 milliseconds (cumulative count 3960)
227s 8.080% <= 6.103 milliseconds (cumulative count 8080)
227s 14.410% <= 7.103 milliseconds (cumulative count 14410)
227s 22.890% <= 8.103 milliseconds (cumulative count 22890)
227s 33.120% <= 9.103 milliseconds (cumulative count 33120)
227s 43.660% <= 10.103 milliseconds (cumulative count 43660)
227s 54.780% <= 11.103 milliseconds (cumulative count 54780)
227s 65.240% <= 12.103 milliseconds (cumulative count 65240)
227s 73.240% <= 13.103 milliseconds (cumulative count 73240)
227s 78.870% <= 14.103 milliseconds (cumulative count 78870)
227s 83.400% <= 15.103 milliseconds (cumulative count 83400)
227s 87.200% <= 16.103 milliseconds (cumulative count 87200)
227s 90.310% <= 17.103 milliseconds (cumulative count 90310)
227s 92.320% <= 18.111 milliseconds (cumulative count 92320)
227s 93.830% <= 19.103 milliseconds (cumulative count 93830)
227s 95.120% <= 20.111 milliseconds (cumulative count 95120)
227s 96.180% <= 21.103 milliseconds (cumulative count 96180)
227s 96.920% <= 22.111 milliseconds (cumulative count 96920)
227s 97.520% <= 23.103 milliseconds (cumulative count 97520)
227s 97.830% <= 24.111 milliseconds (cumulative count 97830)
227s 98.160% <= 25.103 milliseconds (cumulative count 98160)
227s 98.400% <= 26.111 milliseconds (cumulative count 98400)
227s 98.610% <= 27.103 milliseconds (cumulative count 98610)
227s 98.850% <= 28.111 milliseconds (cumulative count 98850)
227s 99.100% <= 29.103 milliseconds (cumulative count 99100)
227s 99.310% <= 30.111 milliseconds (cumulative count 99310)
227s 99.450% <= 31.103 milliseconds (cumulative count 99450)
227s 99.500% <= 32.111 milliseconds (cumulative count 99500)
227s 99.580% <= 33.119 milliseconds (cumulative count 99580)
227s 99.640% <= 34.111 milliseconds (cumulative count 99640)
227s 99.680% <= 35.103 milliseconds (cumulative count 99680)
227s 99.720% <= 36.127 milliseconds (cumulative count 99720)
227s 99.740% <= 39.103 milliseconds (cumulative count 99740)
227s 99.750% <= 45.119 milliseconds (cumulative count 99750)
227s 99.760% <= 46.111 milliseconds (cumulative count 99760)
227s 99.780% <= 47.103 milliseconds (cumulative count 99780)
227s 99.790% <= 48.127 milliseconds (cumulative count 99790)
227s 99.820% <= 49.119 milliseconds (cumulative count 99820)
227s 99.850% <= 50.111 milliseconds (cumulative count 99850)
227s 99.880% <= 51.103 milliseconds (cumulative count 99880)
227s 99.920% <= 52.127 milliseconds (cumulative count 99920)
227s 99.960% <= 53.119 milliseconds (cumulative count 99960)
227s 99.970% <= 57.119 milliseconds (cumulative count 99970)
227s 100.000% <= 58.111 milliseconds (cumulative count 100000)
227s
227s Summary:
227s   throughput summary: 22946.31 requests per second
227s   latency summary (msec):
227s           avg       min       p50       p95       p99       max
227s        11.420     0.752    10.671    20.047    28.671    57.695
234s LRANGE_500 (first 500 elements): rps=6692.9 (overall: 10559.0) avg_msec=24.423 (overall: 24.423) LRANGE_500 (first 500 elements): rps=13735.2 (overall: 12500.0) avg_msec=17.476 (overall: 19.758) LRANGE_500 (first 500 elements): rps=14551.2 (overall: 13279.9) avg_msec=13.215 (overall: 17.032) LRANGE_500 (first 500 elements): rps=14571.4 (overall: 13633.7) avg_msec=13.750 (overall: 16.071) LRANGE_500 (first 500 elements): rps=14585.0 (overall: 13838.9) avg_msec=13.422 (overall: 15.469) LRANGE_500 (first 500 elements): rps=14583.3 (overall: 13970.5) avg_msec=13.666 (overall: 15.136) LRANGE_500 (first 500 elements): rps=14279.5 (overall: 14017.3) avg_msec=13.727 (overall: 14.919) LRANGE_500 (first 500 elements): rps=14468.5 (overall: 14076.6) avg_msec=13.672 (overall: 14.750) LRANGE_500 (first 500 elements): rps=14387.4 (overall: 14112.5) avg_msec=13.405 (overall: 14.592) LRANGE_500 (first 500 elements): rps=14068.0 (overall: 14108.0) avg_msec=13.494 (overall: 14.479) LRANGE_500 (first 500 elements): rps=12482.1 (overall: 13956.1) avg_msec=20.260 (overall: 14.962) LRANGE_500 (first 500 elements): rps=13980.2 (overall: 13958.2) avg_msec=17.458 (overall: 15.177) LRANGE_500 (first 500 elements): rps=14208.0 (overall: 13977.7) avg_msec=15.846 (overall: 15.231) LRANGE_500 (first 500 elements): rps=14418.6 (overall: 14010.7) avg_msec=13.756 (overall: 15.117) LRANGE_500 (first 500 elements): rps=14415.7 (overall: 14038.6) avg_msec=13.798 (overall: 15.024) LRANGE_500 (first 500 elements): rps=14490.2 (overall: 14067.7) avg_msec=13.725 (overall: 14.938) LRANGE_500 (first 500 elements): rps=14414.1 (overall: 14088.8) avg_msec=13.281 (overall: 14.835) LRANGE_500 (first 500 elements): rps=14447.5 (overall: 14109.4) avg_msec=13.877 (overall: 14.778) LRANGE_500 (first 500 elements): rps=14484.3 (overall: 14129.5) avg_msec=13.612 (overall: 14.714) LRANGE_500 (first 500 elements): rps=14470.6 (overall: 14147.0) avg_msec=13.609 (overall: 14.656) LRANGE_500 (first 500 elements): rps=14431.9 (overall: 14161.0) avg_msec=13.447 (overall: 14.596) LRANGE_500 (first 500 elements): rps=14458.2 (overall: 14174.6) avg_msec=13.986 (overall: 14.567) LRANGE_500 (first 500 elements): rps=14454.5 (overall: 14186.9) avg_msec=13.342 (overall: 14.512) LRANGE_500 (first 500 elements): rps=13862.2 (overall: 14173.1) avg_msec=16.236 (overall: 14.584) LRANGE_500 (first 500 elements): rps=13093.4 (overall: 14128.8) avg_msec=19.108 (overall: 14.756) LRANGE_500 (first 500 elements): rps=12350.0 (overall: 14057.7) avg_msec=20.276 (overall: 14.950) LRANGE_500 (first 500 elements): rps=13832.7 (overall: 14049.4) avg_msec=16.982 (overall: 15.024) LRANGE_500 (first 500 elements): rps=14246.0 (overall: 14056.5) avg_msec=15.803 (overall: 15.052) ====== LRANGE_500 (first 500 elements) ======
234s 100000 requests completed in 7.11 seconds
234s 50 parallel clients
234s 3 bytes payload
234s keep alive: 1
234s host configuration "save": 3600 1 300 100 60 10000
234s host configuration "appendonly": no
234s multi-thread: no
234s
234s Latency by percentile distribution:
234s 0.000% <= 1.335 milliseconds (cumulative count 10)
234s 50.000% <= 14.223 milliseconds (cumulative count 50080)
234s 75.000% <= 17.583 milliseconds (cumulative count 75100)
234s 87.500% <= 21.007 milliseconds (cumulative count 87530)
234s 93.750% <= 25.327 milliseconds (cumulative count 93750)
234s 96.875% <= 28.367 milliseconds (cumulative count 96880)
234s 98.438% <= 30.191 milliseconds (cumulative count 98470)
234s 99.219% <= 31.711 milliseconds (cumulative count 99230)
234s 99.609% <= 33.151 milliseconds (cumulative count 99610)
234s 99.805% <= 34.207 milliseconds (cumulative count 99810)
234s 99.902% <= 35.135 milliseconds (cumulative count 99910)
234s 99.951% <= 37.855 milliseconds (cumulative count 99960)
234s 99.976% <= 40.415 milliseconds (cumulative count 99980)
234s 99.988% <= 40.671 milliseconds (cumulative count 99990)
234s 99.994% <= 40.927 milliseconds (cumulative count 100000)
234s 100.000% <= 40.927 milliseconds (cumulative count 100000)
234s
234s Cumulative distribution of latencies:
234s 0.000% <= 0.103 milliseconds (cumulative count 0)
234s 0.010% <= 1.407 milliseconds (cumulative count 10)
234s 0.020% <= 1.607 milliseconds (cumulative count 20)
234s 0.030% <= 1.703 milliseconds (cumulative count 30)
234s 0.040% <= 1.807 milliseconds (cumulative count 40)
234s 0.050% <= 1.903 milliseconds (cumulative count 50)
234s 0.060% <= 2.007 milliseconds (cumulative count 60)
234s 0.070% <= 2.103 milliseconds (cumulative count 70)
234s 0.140% <= 3.103 milliseconds (cumulative count 140)
234s 0.340% <= 4.103 milliseconds (cumulative count 340)
234s 0.880% <= 5.103 milliseconds (cumulative count 880)
234s 1.950% <= 6.103 milliseconds (cumulative count 1950)
234s 3.250% <= 7.103 milliseconds (cumulative count 3250)
234s 5.040% <= 8.103 milliseconds (cumulative count 5040)
234s 8.700% <= 9.103 milliseconds (cumulative count 8700)
234s 14.820% <= 10.103 milliseconds (cumulative count 14820)
234s 23.270% <= 11.103 milliseconds (cumulative count 23270)
234s 32.590% <= 12.103 milliseconds (cumulative count 32590)
234s 41.140% <= 13.103 milliseconds (cumulative count 41140)
234s 49.280% <= 14.103 milliseconds (cumulative count 49280)
234s 57.320% <= 15.103 milliseconds (cumulative count 57320)
234s 64.790% <= 16.103 milliseconds (cumulative count 64790)
234s 72.090% <= 17.103 milliseconds (cumulative count 72090)
234s 78.280% <= 18.111 milliseconds (cumulative count 78280)
234s 83.070% <= 19.103 milliseconds (cumulative count 83070)
234s 85.670% <= 20.111 milliseconds (cumulative count 85670)
234s 87.730% <= 21.103 milliseconds (cumulative count 87730)
234s 89.630% <= 22.111 milliseconds
(cumulative count 89630) 234s 90.880% <= 23.103 milliseconds (cumulative count 90880) 234s 92.230% <= 24.111 milliseconds (cumulative count 92230) 234s 93.470% <= 25.103 milliseconds (cumulative count 93470) 234s 94.620% <= 26.111 milliseconds (cumulative count 94620) 234s 95.580% <= 27.103 milliseconds (cumulative count 95580) 234s 96.650% <= 28.111 milliseconds (cumulative count 96650) 234s 97.570% <= 29.103 milliseconds (cumulative count 97570) 234s 98.380% <= 30.111 milliseconds (cumulative count 98380) 234s 98.980% <= 31.103 milliseconds (cumulative count 98980) 234s 99.330% <= 32.111 milliseconds (cumulative count 99330) 234s 99.600% <= 33.119 milliseconds (cumulative count 99600) 234s 99.790% <= 34.111 milliseconds (cumulative count 99790) 234s 99.900% <= 35.103 milliseconds (cumulative count 99900) 234s 99.920% <= 36.127 milliseconds (cumulative count 99920) 234s 99.940% <= 37.119 milliseconds (cumulative count 99940) 234s 99.960% <= 38.111 milliseconds (cumulative count 99960) 234s 99.970% <= 40.127 milliseconds (cumulative count 99970) 234s 100.000% <= 41.119 milliseconds (cumulative count 100000) 234s 234s Summary: 234s throughput summary: 14064.70 requests per second 234s latency summary (msec): 234s avg min p50 p95 p99 max 234s 15.040 1.328 14.223 26.463 31.183 40.927 244s LRANGE_600 (first 600 elements): rps=5217.6 (overall: 8335.4) avg_msec=27.627 (overall: 27.627) LRANGE_600 (first 600 elements): rps=9773.8 (overall: 9206.7) avg_msec=24.835 (overall: 25.831) LRANGE_600 (first 600 elements): rps=10677.3 (overall: 9760.1) avg_msec=21.279 (overall: 23.957) LRANGE_600 (first 600 elements): rps=10377.4 (overall: 9931.8) avg_msec=23.869 (overall: 23.932) LRANGE_600 (first 600 elements): rps=11940.9 (overall: 10365.0) avg_msec=15.394 (overall: 21.811) LRANGE_600 (first 600 elements): rps=11264.8 (overall: 10524.1) avg_msec=18.971 (overall: 21.274) LRANGE_600 (first 600 elements): rps=9222.2 (overall: 10323.3) avg_msec=26.236 (overall: 21.957) LRANGE_600 
(first 600 elements): rps=7900.4 (overall: 10010.3) avg_msec=31.126 (overall: 22.892) LRANGE_600 (first 600 elements): rps=8486.1 (overall: 9835.9) avg_msec=30.104 (overall: 23.604) LRANGE_600 (first 600 elements): rps=8804.8 (overall: 9730.1) avg_msec=30.058 (overall: 24.204) LRANGE_600 (first 600 elements): rps=7273.8 (overall: 9500.6) avg_msec=33.336 (overall: 24.857) LRANGE_600 (first 600 elements): rps=10557.3 (overall: 9591.2) avg_msec=23.382 (overall: 24.718) LRANGE_600 (first 600 elements): rps=11315.0 (overall: 9727.8) avg_msec=18.697 (overall: 24.163) LRANGE_600 (first 600 elements): rps=9798.4 (overall: 9733.0) avg_msec=22.144 (overall: 24.014) LRANGE_600 (first 600 elements): rps=8200.0 (overall: 9627.7) avg_msec=30.040 (overall: 24.366) LRANGE_600 (first 600 elements): rps=11398.4 (overall: 9739.8) avg_msec=20.015 (overall: 24.044) LRANGE_600 (first 600 elements): rps=11273.8 (overall: 9831.6) avg_msec=18.431 (overall: 23.659) LRANGE_600 (first 600 elements): rps=10239.0 (overall: 9854.5) avg_msec=20.234 (overall: 23.459) LRANGE_600 (first 600 elements): rps=10135.5 (overall: 9869.4) avg_msec=25.609 (overall: 23.577) LRANGE_600 (first 600 elements): rps=9103.6 (overall: 9830.7) avg_msec=27.962 (overall: 23.782) LRANGE_600 (first 600 elements): rps=9525.9 (overall: 9816.1) avg_msec=24.699 (overall: 23.825) LRANGE_600 (first 600 elements): rps=9698.8 (overall: 9810.5) avg_msec=24.455 (overall: 23.854) LRANGE_600 (first 600 elements): rps=9551.2 (overall: 9799.0) avg_msec=25.640 (overall: 23.931) LRANGE_600 (first 600 elements): rps=11043.8 (overall: 9851.2) avg_msec=18.053 (overall: 23.655) LRANGE_600 (first 600 elements): rps=9924.6 (overall: 9854.2) avg_msec=26.066 (overall: 23.753) LRANGE_600 (first 600 elements): rps=10000.0 (overall: 9859.8) avg_msec=25.086 (overall: 23.805) LRANGE_600 (first 600 elements): rps=10996.1 (overall: 9903.0) avg_msec=19.170 (overall: 23.610) LRANGE_600 (first 600 elements): rps=11988.2 (overall: 9979.0) avg_msec=14.729 
(overall: 23.221) LRANGE_600 (first 600 elements): rps=12182.5 (overall: 10055.6) avg_msec=13.708 (overall: 22.820) LRANGE_600 (first 600 elements): rps=10406.4 (overall: 10067.3) avg_msec=21.629 (overall: 22.779) LRANGE_600 (first 600 elements): rps=8848.6 (overall: 10027.9) avg_msec=29.009 (overall: 22.957) LRANGE_600 (first 600 elements): rps=11027.9 (overall: 10059.2) avg_msec=18.630 (overall: 22.808) LRANGE_600 (first 600 elements): rps=9457.0 (overall: 10040.6) avg_msec=26.745 (overall: 22.923) LRANGE_600 (first 600 elements): rps=10038.9 (overall: 10040.5) avg_msec=24.525 (overall: 22.971) LRANGE_600 (first 600 elements): rps=9492.1 (overall: 10024.6) avg_msec=25.034 (overall: 23.028) LRANGE_600 (first 600 elements): rps=9416.0 (overall: 10007.8) avg_msec=27.539 (overall: 23.146) LRANGE_600 (first 600 elements): rps=10251.0 (overall: 10014.3) avg_msec=24.789 (overall: 23.191) LRANGE_600 (first 600 elements): rps=9368.6 (overall: 9997.1) avg_msec=28.295 (overall: 23.319) LRANGE_600 (first 600 elements): rps=9067.5 (overall: 9973.1) avg_msec=26.715 (overall: 23.399) ====== LRANGE_600 (first 600 elements) ====== 244s 100000 requests completed in 10.00 seconds 244s 50 parallel clients 244s 3 bytes payload 244s keep alive: 1 244s host configuration "save": 3600 1 300 100 60 10000 244s host configuration "appendonly": no 244s multi-thread: no 244s 244s Latency by percentile distribution: 244s 0.000% <= 1.015 milliseconds (cumulative count 10) 244s 50.000% <= 24.015 milliseconds (cumulative count 50000) 244s 75.000% <= 31.807 milliseconds (cumulative count 75070) 244s 87.500% <= 35.199 milliseconds (cumulative count 87510) 244s 93.750% <= 37.631 milliseconds (cumulative count 93770) 244s 96.875% <= 39.871 milliseconds (cumulative count 96880) 244s 98.438% <= 42.431 milliseconds (cumulative count 98440) 244s 99.219% <= 44.031 milliseconds (cumulative count 99220) 244s 99.609% <= 45.535 milliseconds (cumulative count 99610) 244s 99.805% <= 46.751 milliseconds 
(cumulative count 99810) 244s 99.902% <= 48.703 milliseconds (cumulative count 99910) 244s 99.951% <= 49.951 milliseconds (cumulative count 99960) 244s 99.976% <= 57.503 milliseconds (cumulative count 99980) 244s 99.988% <= 57.951 milliseconds (cumulative count 99990) 244s 99.994% <= 58.207 milliseconds (cumulative count 100000) 244s 100.000% <= 58.207 milliseconds (cumulative count 100000) 244s 244s Cumulative distribution of latencies: 244s 0.000% <= 0.103 milliseconds (cumulative count 0) 244s 0.010% <= 1.103 milliseconds (cumulative count 10) 244s 0.030% <= 1.207 milliseconds (cumulative count 30) 244s 0.040% <= 1.303 milliseconds (cumulative count 40) 244s 0.060% <= 1.407 milliseconds (cumulative count 60) 244s 0.080% <= 1.503 milliseconds (cumulative count 80) 244s 0.130% <= 1.607 milliseconds (cumulative count 130) 244s 0.200% <= 1.703 milliseconds (cumulative count 200) 244s 0.220% <= 1.807 milliseconds (cumulative count 220) 244s 0.290% <= 1.903 milliseconds (cumulative count 290) 244s 0.420% <= 2.007 milliseconds (cumulative count 420) 244s 0.550% <= 2.103 milliseconds (cumulative count 550) 244s 1.640% <= 3.103 milliseconds (cumulative count 1640) 244s 2.320% <= 4.103 milliseconds (cumulative count 2320) 244s 3.270% <= 5.103 milliseconds (cumulative count 3270) 244s 4.930% <= 6.103 milliseconds (cumulative count 4930) 244s 6.440% <= 7.103 milliseconds (cumulative count 6440) 244s 7.660% <= 8.103 milliseconds (cumulative count 7660) 244s 9.140% <= 9.103 milliseconds (cumulative count 9140) 244s 11.440% <= 10.103 milliseconds (cumulative count 11440) 244s 13.780% <= 11.103 milliseconds (cumulative count 13780) 244s 16.460% <= 12.103 milliseconds (cumulative count 16460) 244s 19.450% <= 13.103 milliseconds (cumulative count 19450) 244s 22.650% <= 14.103 milliseconds (cumulative count 22650) 244s 25.810% <= 15.103 milliseconds (cumulative count 25810) 244s 28.710% <= 16.103 milliseconds (cumulative count 28710) 244s 31.280% <= 17.103 milliseconds (cumulative 
count 31280) 244s 33.740% <= 18.111 milliseconds (cumulative count 33740) 244s 36.350% <= 19.103 milliseconds (cumulative count 36350) 244s 38.910% <= 20.111 milliseconds (cumulative count 38910) 244s 41.280% <= 21.103 milliseconds (cumulative count 41280) 244s 44.120% <= 22.111 milliseconds (cumulative count 44120) 244s 46.830% <= 23.103 milliseconds (cumulative count 46830) 244s 50.270% <= 24.111 milliseconds (cumulative count 50270) 244s 53.870% <= 25.103 milliseconds (cumulative count 53870) 244s 56.970% <= 26.111 milliseconds (cumulative count 56970) 244s 59.750% <= 27.103 milliseconds (cumulative count 59750) 244s 62.770% <= 28.111 milliseconds (cumulative count 62770) 244s 66.050% <= 29.103 milliseconds (cumulative count 66050) 244s 69.490% <= 30.111 milliseconds (cumulative count 69490) 244s 72.810% <= 31.103 milliseconds (cumulative count 72810) 244s 76.110% <= 32.111 milliseconds (cumulative count 76110) 244s 79.770% <= 33.119 milliseconds (cumulative count 79770) 244s 83.550% <= 34.111 milliseconds (cumulative count 83550) 244s 87.210% <= 35.103 milliseconds (cumulative count 87210) 244s 90.140% <= 36.127 milliseconds (cumulative count 90140) 244s 92.590% <= 37.119 milliseconds (cumulative count 92590) 244s 94.610% <= 38.111 milliseconds (cumulative count 94610) 244s 96.040% <= 39.103 milliseconds (cumulative count 96040) 244s 97.070% <= 40.127 milliseconds (cumulative count 97070) 244s 97.720% <= 41.119 milliseconds (cumulative count 97720) 244s 98.250% <= 42.111 milliseconds (cumulative count 98250) 244s 98.770% <= 43.103 milliseconds (cumulative count 98770) 244s 99.260% <= 44.127 milliseconds (cumulative count 99260) 244s 99.470% <= 45.119 milliseconds (cumulative count 99470) 244s 99.730% <= 46.111 milliseconds (cumulative count 99730) 244s 99.830% <= 47.103 milliseconds (cumulative count 99830) 244s 99.880% <= 48.127 milliseconds (cumulative count 99880) 244s 99.920% <= 49.119 milliseconds (cumulative count 99920) 244s 99.960% <= 50.111 
milliseconds (cumulative count 99960) 244s 99.990% <= 58.111 milliseconds (cumulative count 99990) 244s 100.000% <= 59.103 milliseconds (cumulative count 100000) 244s 244s Summary: 244s throughput summary: 9998.00 requests per second 244s latency summary (msec): 244s avg min p50 p95 p99 max 244s 23.323 1.008 24.015 38.399 43.519 58.207 245s MSET (10 keys): rps=9600.0 (overall: 114285.7) avg_msec=3.775 (overall: 3.775) MSET (10 keys): rps=120833.3 (overall: 120329.7) avg_msec=3.875 (overall: 3.867) MSET (10 keys): rps=122840.0 (overall: 121529.6) avg_msec=3.855 (overall: 3.861) MSET (10 keys): rps=121354.6 (overall: 121472.9) avg_msec=3.886 (overall: 3.869) ====== MSET (10 keys) ====== 245s 100000 requests completed in 0.82 seconds 245s 50 parallel clients 245s 3 bytes payload 245s keep alive: 1 245s host configuration "save": 3600 1 300 100 60 10000 245s host configuration "appendonly": no 245s multi-thread: no 245s 245s Latency by percentile distribution: 245s 0.000% <= 0.799 milliseconds (cumulative count 10) 245s 50.000% <= 4.023 milliseconds (cumulative count 50350) 245s 75.000% <= 4.287 milliseconds (cumulative count 75690) 245s 87.500% <= 4.439 milliseconds (cumulative count 87610) 245s 93.750% <= 4.551 milliseconds (cumulative count 94040) 245s 96.875% <= 4.623 milliseconds (cumulative count 96890) 245s 98.438% <= 4.711 milliseconds (cumulative count 98510) 245s 99.219% <= 4.775 milliseconds (cumulative count 99260) 245s 99.609% <= 4.855 milliseconds (cumulative count 99610) 245s 99.805% <= 4.919 milliseconds (cumulative count 99840) 245s 99.902% <= 4.951 milliseconds (cumulative count 99910) 245s 99.951% <= 4.991 milliseconds (cumulative count 99960) 245s 99.976% <= 5.047 milliseconds (cumulative count 99980) 245s 99.988% <= 5.055 milliseconds (cumulative count 99990) 245s 99.994% <= 5.111 milliseconds (cumulative count 100000) 245s 100.000% <= 5.111 milliseconds (cumulative count 100000) 245s 245s Cumulative distribution of latencies: 245s 0.000% <= 0.103 
milliseconds (cumulative count 0) 245s 0.010% <= 0.807 milliseconds (cumulative count 10) 245s 0.020% <= 0.903 milliseconds (cumulative count 20) 245s 0.040% <= 1.703 milliseconds (cumulative count 40) 245s 0.060% <= 1.807 milliseconds (cumulative count 60) 245s 0.080% <= 1.903 milliseconds (cumulative count 80) 245s 0.130% <= 2.007 milliseconds (cumulative count 130) 245s 0.230% <= 2.103 milliseconds (cumulative count 230) 245s 14.640% <= 3.103 milliseconds (cumulative count 14640) 245s 57.970% <= 4.103 milliseconds (cumulative count 57970) 245s 99.990% <= 5.103 milliseconds (cumulative count 99990) 245s 100.000% <= 6.103 milliseconds (cumulative count 100000) 245s 245s Summary: 245s throughput summary: 121802.68 requests per second 245s latency summary (msec): 245s avg min p50 p95 p99 max 245s 3.869 0.792 4.023 4.575 4.759 5.111 246s XADD: rps=188685.3 (overall: 235621.9) avg_msec=1.927 (overall: 1.927) ====== XADD ====== 246s 100000 requests completed in 0.42 seconds 246s 50 parallel clients 246s 3 bytes payload 246s keep alive: 1 246s host configuration "save": 3600 1 300 100 60 10000 246s host configuration "appendonly": no 246s multi-thread: no 246s 246s Latency by percentile distribution: 246s 0.000% <= 0.559 milliseconds (cumulative count 10) 246s 50.000% <= 1.967 milliseconds (cumulative count 50690) 246s 75.000% <= 2.175 milliseconds (cumulative count 75550) 246s 87.500% <= 2.311 milliseconds (cumulative count 88060) 246s 93.750% <= 2.391 milliseconds (cumulative count 93830) 246s 96.875% <= 2.463 milliseconds (cumulative count 96910) 246s 98.438% <= 2.527 milliseconds (cumulative count 98530) 246s 99.219% <= 2.567 milliseconds (cumulative count 99230) 246s 99.609% <= 2.623 milliseconds (cumulative count 99610) 246s 99.805% <= 2.663 milliseconds (cumulative count 99810) 246s 99.902% <= 2.719 milliseconds (cumulative count 99910) 246s 99.951% <= 2.751 milliseconds (cumulative count 99960) 246s 99.976% <= 2.775 milliseconds (cumulative count 99980) 246s 
99.988% <= 2.791 milliseconds (cumulative count 99990) 246s 99.994% <= 2.903 milliseconds (cumulative count 100000) 246s 100.000% <= 2.903 milliseconds (cumulative count 100000) 246s 246s Cumulative distribution of latencies: 246s 0.000% <= 0.103 milliseconds (cumulative count 0) 246s 0.030% <= 0.607 milliseconds (cumulative count 30) 246s 0.090% <= 0.703 milliseconds (cumulative count 90) 246s 0.120% <= 0.807 milliseconds (cumulative count 120) 246s 0.140% <= 0.903 milliseconds (cumulative count 140) 246s 0.160% <= 1.103 milliseconds (cumulative count 160) 246s 0.420% <= 1.207 milliseconds (cumulative count 420) 246s 2.530% <= 1.303 milliseconds (cumulative count 2530) 246s 7.470% <= 1.407 milliseconds (cumulative count 7470) 246s 13.010% <= 1.503 milliseconds (cumulative count 13010) 246s 17.750% <= 1.607 milliseconds (cumulative count 17750) 246s 23.180% <= 1.703 milliseconds (cumulative count 23180) 246s 32.120% <= 1.807 milliseconds (cumulative count 32120) 246s 42.820% <= 1.903 milliseconds (cumulative count 42820) 246s 55.580% <= 2.007 milliseconds (cumulative count 55580) 246s 67.410% <= 2.103 milliseconds (cumulative count 67410) 246s 100.000% <= 3.103 milliseconds (cumulative count 100000) 246s 246s Summary: 246s throughput summary: 235849.06 requests per second 246s latency summary (msec): 246s avg min p50 p95 p99 max 246s 1.934 0.552 1.967 2.415 2.559 2.903 251s FUNCTION LOAD: rps=1553.8 (overall: 15600.0) avg_msec=19.238 (overall: 19.238) FUNCTION LOAD: rps=19560.0 (overall: 19200.0) avg_msec=24.387 (overall: 24.007) FUNCTION LOAD: rps=20238.1 (overall: 19696.4) avg_msec=24.709 (overall: 24.352) FUNCTION LOAD: rps=19641.4 (overall: 19678.7) avg_msec=24.976 (overall: 24.553) FUNCTION LOAD: rps=19960.2 (overall: 19747.3) avg_msec=24.796 (overall: 24.613) FUNCTION LOAD: rps=19641.4 (overall: 19726.6) avg_msec=25.882 (overall: 24.861) FUNCTION LOAD: rps=19404.8 (overall: 19673.6) avg_msec=25.282 (overall: 24.929) FUNCTION LOAD: rps=19640.0 (overall: 
19668.9) avg_msec=25.300 (overall: 24.981) FUNCTION LOAD: rps=19285.7 (overall: 19621.4) avg_msec=25.237 (overall: 25.012) FUNCTION LOAD: rps=19243.0 (overall: 19579.9) avg_msec=25.691 (overall: 25.085) FUNCTION LOAD: rps=20079.7 (overall: 19629.3) avg_msec=24.970 (overall: 25.074) FUNCTION LOAD: rps=19282.9 (overall: 19598.1) avg_msec=25.168 (overall: 25.082) FUNCTION LOAD: rps=19362.6 (overall: 19578.7) avg_msec=25.228 (overall: 25.094) FUNCTION LOAD: rps=20238.1 (overall: 19629.2) avg_msec=25.197 (overall: 25.102) FUNCTION LOAD: rps=19440.0 (overall: 19615.8) avg_msec=24.999 (overall: 25.095) FUNCTION LOAD: rps=19442.2 (overall: 19604.3) avg_msec=25.172 (overall: 25.100) FUNCTION LOAD: rps=20358.6 (overall: 19651.2) avg_msec=25.051 (overall: 25.097) FUNCTION LOAD: rps=19484.1 (overall: 19641.4) avg_msec=24.535 (overall: 25.064) FUNCTION LOAD: rps=20079.7 (overall: 19665.6) avg_msec=24.463 (overall: 25.030) FUNCTION LOAD: rps=20438.2 (overall: 19706.0) avg_msec=24.456 (overall: 24.999) FUNCTION LOAD: rps=20480.0 (overall: 19744.4) avg_msec=24.207 (overall: 24.958) ====== FUNCTION LOAD ====== 251s 100000 requests completed in 5.06 seconds 251s 50 parallel clients 251s 3 bytes payload 251s keep alive: 1 251s host configuration "save": 3600 1 300 100 60 10000 251s host configuration "appendonly": no 251s multi-thread: no 251s 251s Latency by percentile distribution: 251s 0.000% <= 0.975 milliseconds (cumulative count 10) 251s 50.000% <= 25.455 milliseconds (cumulative count 50140) 251s 75.000% <= 26.127 milliseconds (cumulative count 75290) 251s 87.500% <= 26.607 milliseconds (cumulative count 87650) 251s 93.750% <= 27.023 milliseconds (cumulative count 93920) 251s 96.875% <= 27.439 milliseconds (cumulative count 96930) 251s 98.438% <= 28.015 milliseconds (cumulative count 98440) 251s 99.219% <= 28.783 milliseconds (cumulative count 99220) 251s 99.609% <= 30.991 milliseconds (cumulative count 99620) 251s 99.805% <= 33.855 milliseconds (cumulative count 99810) 251s 
99.902% <= 34.175 milliseconds (cumulative count 99920) 251s 99.951% <= 34.335 milliseconds (cumulative count 99960) 251s 99.976% <= 34.463 milliseconds (cumulative count 99980) 251s 99.988% <= 34.495 milliseconds (cumulative count 99990) 251s 99.994% <= 34.591 milliseconds (cumulative count 100000) 251s 100.000% <= 34.591 milliseconds (cumulative count 100000) 251s 251s Cumulative distribution of latencies: 251s 0.000% <= 0.103 milliseconds (cumulative count 0) 251s 0.010% <= 1.007 milliseconds (cumulative count 10) 251s 0.100% <= 10.103 milliseconds (cumulative count 100) 251s 0.120% <= 11.103 milliseconds (cumulative count 120) 251s 0.300% <= 12.103 milliseconds (cumulative count 300) 251s 1.500% <= 13.103 milliseconds (cumulative count 1500) 251s 3.610% <= 14.103 milliseconds (cumulative count 3610) 251s 4.610% <= 15.103 milliseconds (cumulative count 4610) 251s 5.000% <= 16.103 milliseconds (cumulative count 5000) 251s 5.040% <= 17.103 milliseconds (cumulative count 5040) 251s 5.130% <= 18.111 milliseconds (cumulative count 5130) 251s 5.280% <= 19.103 milliseconds (cumulative count 5280) 251s 5.710% <= 20.111 milliseconds (cumulative count 5710) 251s 5.860% <= 21.103 milliseconds (cumulative count 5860) 251s 5.870% <= 22.111 milliseconds (cumulative count 5870) 251s 9.640% <= 24.111 milliseconds (cumulative count 9640) 251s 37.000% <= 25.103 milliseconds (cumulative count 37000) 251s 74.760% <= 26.111 milliseconds (cumulative count 74760) 251s 94.680% <= 27.103 milliseconds (cumulative count 94680) 251s 98.590% <= 28.111 milliseconds (cumulative count 98590) 251s 99.300% <= 29.103 milliseconds (cumulative count 99300) 251s 99.380% <= 30.111 milliseconds (cumulative count 99380) 251s 99.640% <= 31.103 milliseconds (cumulative count 99640) 251s 99.770% <= 32.111 milliseconds (cumulative count 99770) 251s 99.890% <= 34.111 milliseconds (cumulative count 99890) 251s 100.000% <= 35.103 milliseconds (cumulative count 100000) 251s 251s Summary: 251s throughput 
summary: 19778.48 requests per second 251s latency summary (msec): 251s avg min p50 p95 p99 max 251s 24.948 0.968 25.455 27.151 28.511 34.591 251s FCALL: rps=234223.1 (overall: 248059.1) avg_msec=1.829 (overall: 1.829) ====== FCALL ====== 251s 100000 requests completed in 0.41 seconds 251s 50 parallel clients 251s 3 bytes payload 251s keep alive: 1 251s host configuration "save": 3600 1 300 100 60 10000 251s host configuration "appendonly": no 251s multi-thread: no 251s 251s Latency by percentile distribution: 251s 0.000% <= 0.639 milliseconds (cumulative count 10) 251s 50.000% <= 1.879 milliseconds (cumulative count 50150) 251s 75.000% <= 2.087 milliseconds (cumulative count 75100) 251s 87.500% <= 2.215 milliseconds (cumulative count 87590) 251s 93.750% <= 2.303 milliseconds (cumulative count 93970) 251s 96.875% <= 2.367 milliseconds (cumulative count 97030) 251s 98.438% <= 2.431 milliseconds (cumulative count 98470) 251s 99.219% <= 2.503 milliseconds (cumulative count 99230) 251s 99.609% <= 2.607 milliseconds (cumulative count 99610) 251s 99.805% <= 2.759 milliseconds (cumulative count 99810) 251s 99.902% <= 2.903 milliseconds (cumulative count 99910) 251s 99.951% <= 3.023 milliseconds (cumulative count 99960) 251s 99.976% <= 3.047 milliseconds (cumulative count 99980) 251s 99.988% <= 3.087 milliseconds (cumulative count 99990) 251s 99.994% <= 3.487 milliseconds (cumulative count 100000) 251s 100.000% <= 3.487 milliseconds (cumulative count 100000) 251s 251s Cumulative distribution of latencies: 251s 0.000% <= 0.103 milliseconds (cumulative count 0) 251s 0.050% <= 0.703 milliseconds (cumulative count 50) 251s 0.130% <= 0.807 milliseconds (cumulative count 130) 251s 0.190% <= 0.903 milliseconds (cumulative count 190) 251s 0.220% <= 1.007 milliseconds (cumulative count 220) 251s 0.400% <= 1.103 milliseconds (cumulative count 400) 251s 1.670% <= 1.207 milliseconds (cumulative count 1670) 251s 5.240% <= 1.303 milliseconds (cumulative count 5240) 251s 10.910% <= 1.407 
milliseconds (cumulative count 10910) 251s 17.180% <= 1.503 milliseconds (cumulative count 17180) 251s 23.290% <= 1.607 milliseconds (cumulative count 23290) 251s 30.570% <= 1.703 milliseconds (cumulative count 30570) 251s 41.700% <= 1.807 milliseconds (cumulative count 41700) 251s 52.970% <= 1.903 milliseconds (cumulative count 52970) 251s 65.910% <= 2.007 milliseconds (cumulative count 65910) 251s 76.820% <= 2.103 milliseconds (cumulative count 76820) 251s 99.990% <= 3.103 milliseconds (cumulative count 99990) 251s 100.000% <= 4.103 milliseconds (cumulative count 100000) 251s 251s Summary: 251s throughput summary: 246305.42 requests per second 251s latency summary (msec): 251s avg min p50 p95 p99 max 251s 1.851 0.632 1.879 2.327 2.471 3.487 251s 251s autopkgtest [14:20:52]: test 0002-benchmark: -----------------------] 255s 0002-benchmark PASS 255s autopkgtest [14:20:56]: test 0002-benchmark: - - - - - - - - - - results - - - - - - - - - - 259s autopkgtest [14:21:00]: test 0003-valkey-check-aof: preparing testbed 261s Reading package lists... 261s Building dependency tree... 261s Reading state information... 262s Solving dependencies... 262s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 270s autopkgtest [14:21:11]: test 0003-valkey-check-aof: [----------------------- 272s autopkgtest [14:21:13]: test 0003-valkey-check-aof: -----------------------] 276s 0003-valkey-check-aof PASS 276s autopkgtest [14:21:17]: test 0003-valkey-check-aof: - - - - - - - - - - results - - - - - - - - - - 280s autopkgtest [14:21:21]: test 0004-valkey-check-rdb: preparing testbed 282s Reading package lists... 282s Building dependency tree... 282s Reading state information... 282s Solving dependencies... 283s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
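The "latency by percentile distribution" and "cumulative distribution" tables in the benchmark output above can be reproduced from raw per-request latencies with a nearest-rank percentile calculation. The sketch below is illustrative only: valkey-benchmark derives these lines from an internal latency histogram, not from this code, and the function names are invented for the example.

```python
# Sketch: derive lines like "50.000% <= 14.223 milliseconds (cumulative
# count 50080)" from a list of per-request latencies (milliseconds).
# Illustrative only -- not the valkey-benchmark implementation.

def percentile(samples, pct):
    """Nearest-rank percentile: smallest sample covering pct% of requests."""
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceil(n * pct / 100)
    return ordered[int(rank) - 1]

def summary_line(samples, pct):
    """Render one distribution line in the same shape as the log output."""
    value = percentile(samples, pct)
    cumulative = sum(1 for s in samples if s <= value)
    return "%.3f%% <= %.3f milliseconds (cumulative count %d)" % (
        pct, value, cumulative)
```

For example, `summary_line([0.8, 1.2, 2.0, 4.0, 5.5, 7.1, 9.9, 12.4, 20.3, 57.7], 50)` yields `"50.000% <= 5.500 milliseconds (cumulative count 5)"`: half of the ten samples fall at or below 5.5 ms.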
290s autopkgtest [14:21:31]: test 0004-valkey-check-rdb: [-----------------------
297s OK
298s [offset 0] Checking RDB file /var/lib/valkey/dump.rdb
298s [offset 27] AUX FIELD valkey-ver = '8.1.1'
298s [offset 41] AUX FIELD redis-bits = '32'
298s [offset 53] AUX FIELD ctime = '1750342898'
298s [offset 68] AUX FIELD used-mem = '2799880'
298s [offset 80] AUX FIELD aof-base = '0'
298s [offset 191] Selecting DB ID 0
298s [offset 566754] Checksum OK
298s [offset 566754] \o/ RDB looks OK! \o/
298s [info] 5 keys read
298s [info] 0 expires
298s [info] 0 already expired
298s autopkgtest [14:21:39]: test 0004-valkey-check-rdb: -----------------------]
302s 0004-valkey-check-rdb PASS
302s autopkgtest [14:21:43]: test 0004-valkey-check-rdb:  - - - - - - - - - - results - - - - - - - - - -
305s autopkgtest [14:21:46]: test 0005-cjson: preparing testbed
307s Reading package lists...
307s Building dependency tree...
307s Reading state information...
308s Solving dependencies...
308s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
316s autopkgtest [14:21:57]: test 0005-cjson: [-----------------------
323s
323s autopkgtest [14:22:04]: test 0005-cjson: -----------------------]
327s 0005-cjson PASS
327s autopkgtest [14:22:08]: test 0005-cjson:  - - - - - - - - - - results - - - - - - - - - -
331s autopkgtest [14:22:12]: test 0006-migrate-from-redis: preparing testbed
357s autopkgtest [14:22:38]: testbed dpkg architecture: armhf
359s autopkgtest [14:22:40]: testbed apt version: 3.1.2
363s autopkgtest [14:22:44]: @@@@@@@@@@@@@@@@@@@@ test bed setup
364s autopkgtest [14:22:45]: testbed release detected to be: questing
372s autopkgtest [14:22:53]: updating testbed package index (apt update)
374s Get:1 http://ftpmaster.internal/ubuntu questing-proposed InRelease [249 kB]
374s Get:2 http://ftpmaster.internal/ubuntu questing InRelease [249 kB]
374s Get:3 http://ftpmaster.internal/ubuntu questing-updates InRelease [110 kB]
374s Get:4 http://ftpmaster.internal/ubuntu questing-security InRelease [110 kB]
374s Get:5 http://ftpmaster.internal/ubuntu questing-proposed/restricted Sources [4716 B]
374s Get:6 http://ftpmaster.internal/ubuntu questing-proposed/main Sources [38.3 kB]
374s Get:7 http://ftpmaster.internal/ubuntu questing-proposed/universe Sources [426 kB]
374s Get:8 http://ftpmaster.internal/ubuntu questing-proposed/multiverse Sources [17.4 kB]
374s Get:9 http://ftpmaster.internal/ubuntu questing-proposed/main armhf Packages [60.5 kB]
374s Get:10 http://ftpmaster.internal/ubuntu questing-proposed/restricted armhf Packages [724 B]
374s Get:11 http://ftpmaster.internal/ubuntu questing-proposed/universe armhf Packages [352 kB]
374s Get:12 http://ftpmaster.internal/ubuntu questing-proposed/multiverse armhf Packages [4268 B]
374s Get:13 http://ftpmaster.internal/ubuntu questing/universe Sources [21.3 MB]
376s Get:14 http://ftpmaster.internal/ubuntu questing/multiverse Sources [309 kB]
376s Get:15 http://ftpmaster.internal/ubuntu questing/universe armhf Packages [15.1 MB]
379s Fetched 38.3 MB in 6s (6648 kB/s)
381s Reading package lists...
386s autopkgtest [14:23:07]: upgrading testbed (apt dist-upgrade and autopurge)
388s Reading package lists...
388s Building dependency tree...
388s Reading state information...
389s Calculating upgrade...
389s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
391s Reading package lists...
391s Building dependency tree...
391s Reading state information...
392s Solving dependencies...
392s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
394s autopkgtest [14:23:15]: rebooting testbed after setup commands that affected boot
456s Reading package lists...
456s Building dependency tree...
456s Reading state information...
456s Solving dependencies...
457s The following NEW packages will be installed:
457s   liblzf1 redis-sentinel redis-server redis-tools
457s 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
457s Need to get 1308 kB of archives.
457s After this operation, 5361 kB of additional disk space will be used.
457s Get:1 http://ftpmaster.internal/ubuntu questing/universe armhf liblzf1 armhf 3.6-4 [6554 B]
457s Get:2 http://ftpmaster.internal/ubuntu questing-proposed/universe armhf redis-tools armhf 5:8.0.0-2 [1236 kB]
458s Get:3 http://ftpmaster.internal/ubuntu questing-proposed/universe armhf redis-sentinel armhf 5:8.0.0-2 [12.5 kB]
458s Get:4 http://ftpmaster.internal/ubuntu questing-proposed/universe armhf redis-server armhf 5:8.0.0-2 [53.2 kB]
458s Fetched 1308 kB in 1s (1958 kB/s)
458s Selecting previously unselected package liblzf1:armhf.
458s (Reading database ... 59700 files and directories currently installed.)
458s Preparing to unpack .../liblzf1_3.6-4_armhf.deb ...
458s Unpacking liblzf1:armhf (3.6-4) ...
458s Selecting previously unselected package redis-tools.
458s Preparing to unpack .../redis-tools_5%3a8.0.0-2_armhf.deb ...
458s Unpacking redis-tools (5:8.0.0-2) ...
458s Selecting previously unselected package redis-sentinel.
458s Preparing to unpack .../redis-sentinel_5%3a8.0.0-2_armhf.deb ...
458s Unpacking redis-sentinel (5:8.0.0-2) ...
458s Selecting previously unselected package redis-server.
458s Preparing to unpack .../redis-server_5%3a8.0.0-2_armhf.deb ...
460s Unpacking redis-server (5:8.0.0-2) ...
460s Setting up liblzf1:armhf (3.6-4) ...
460s Setting up redis-tools (5:8.0.0-2) ...
460s Setting up redis-server (5:8.0.0-2) ...
460s Created symlink '/etc/systemd/system/redis.service' → '/usr/lib/systemd/system/redis-server.service'.
460s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-server.service' → '/usr/lib/systemd/system/redis-server.service'.
460s Setting up redis-sentinel (5:8.0.0-2) ...
460s Created symlink '/etc/systemd/system/sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'.
460s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'.
460s Processing triggers for man-db (2.13.1-1) ...
461s Processing triggers for libc-bin (2.41-6ubuntu2) ...
473s autopkgtest [14:24:34]: test 0006-migrate-from-redis: [-----------------------
475s + FLAG_FILE=/etc/valkey/REDIS_MIGRATION
475s + sed -i 's#loglevel notice#loglevel debug#' /etc/redis/redis.conf
475s + systemctl restart redis-server
475s + redis-cli -h 127.0.0.1 -p 6379 SET test 1
475s OK
475s + redis-cli -h 127.0.0.1 -p 6379 GET test
475s 1
475s + redis-cli -h 127.0.0.1 -p 6379 SAVE
475s OK
475s + sha256sum /var/lib/redis/dump.rdb
475s dcb26c3331b73e34d92ae284b23dce57034e502f75b35a06b899acb4fc1bbf58  /var/lib/redis/dump.rdb
475s + apt-get install -y valkey-redis-compat
475s Reading package lists...
475s Building dependency tree...
475s Reading state information...
476s Solving dependencies...
476s The following additional packages will be installed:
476s   valkey-server valkey-tools
476s Suggested packages:
476s   ruby-redis
476s The following packages will be REMOVED:
476s   redis-sentinel redis-server redis-tools
476s The following NEW packages will be installed:
476s   valkey-redis-compat valkey-server valkey-tools
476s 0 upgraded, 3 newly installed, 3 to remove and 0 not upgraded.
476s Need to get 1257 kB of archives.
476s After this operation, 220 kB disk space will be freed.
476s Get:1 http://ftpmaster.internal/ubuntu questing/universe armhf valkey-tools armhf 8.1.1+dfsg1-2ubuntu1 [1198 kB]
477s Get:2 http://ftpmaster.internal/ubuntu questing/universe armhf valkey-server armhf 8.1.1+dfsg1-2ubuntu1 [51.7 kB]
477s Get:3 http://ftpmaster.internal/ubuntu questing/universe armhf valkey-redis-compat all 8.1.1+dfsg1-2ubuntu1 [7794 B]
477s Fetched 1257 kB in 1s (1911 kB/s)
477s (Reading database ... 59749 files and directories currently installed.)
477s Removing redis-sentinel (5:8.0.0-2) ...
478s Removing redis-server (5:8.0.0-2) ...
478s Removing redis-tools (5:8.0.0-2) ...
478s Selecting previously unselected package valkey-tools.
478s (Reading database ... 59714 files and directories currently installed.)
478s Preparing to unpack .../valkey-tools_8.1.1+dfsg1-2ubuntu1_armhf.deb ...
478s Unpacking valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
478s Selecting previously unselected package valkey-server.
478s Preparing to unpack .../valkey-server_8.1.1+dfsg1-2ubuntu1_armhf.deb ...
478s Unpacking valkey-server (8.1.1+dfsg1-2ubuntu1) ...
478s Selecting previously unselected package valkey-redis-compat.
479s Preparing to unpack .../valkey-redis-compat_8.1.1+dfsg1-2ubuntu1_all.deb ...
479s Unpacking valkey-redis-compat (8.1.1+dfsg1-2ubuntu1) ...
479s Setting up valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
479s Setting up valkey-server (8.1.1+dfsg1-2ubuntu1) ...
479s Created symlink '/etc/systemd/system/valkey.service' → '/usr/lib/systemd/system/valkey-server.service'.
479s Created symlink '/etc/systemd/system/multi-user.target.wants/valkey-server.service' → '/usr/lib/systemd/system/valkey-server.service'.
480s Setting up valkey-redis-compat (8.1.1+dfsg1-2ubuntu1) ...
480s dpkg-query: no packages found matching valkey-sentinel
480s [I] /etc/redis/redis.conf has been copied to /etc/valkey/valkey.conf. Please review the content of valkey.conf, especially if you had modified redis.conf.
480s [I] /etc/redis/sentinel.conf has been copied to /etc/valkey/sentinel.conf. Please review the content of sentinel.conf, especially if you had modified sentinel.conf.
480s [I] On-disk redis dumps moved from /var/lib/redis/ to /var/lib/valkey.
480s Processing triggers for man-db (2.13.1-1) ...
480s + '[' -f /etc/valkey/REDIS_MIGRATION ']'
480s + sha256sum /var/lib/valkey/dump.rdb
480s 3337f809a6dbead5a2606d69b72c5dfe38d875845beafe828d3866101c86e699  /var/lib/valkey/dump.rdb
480s + systemctl status valkey-server
480s + grep inactive
480s      Active: inactive (dead) since Thu 2025-06-19 14:24:41 UTC; 669ms ago
480s + rm /etc/valkey/REDIS_MIGRATION
480s + systemctl start valkey-server
481s Job for valkey-server.service failed because the control process exited with error code.
481s See "systemctl status valkey-server.service" and "journalctl -xeu valkey-server.service" for details.
481s autopkgtest [14:24:42]: test 0006-migrate-from-redis: -----------------------]
485s autopkgtest [14:24:46]: test 0006-migrate-from-redis: - - - - - - - - - - results - - - - - - - - - -
485s 0006-migrate-from-redis FAIL non-zero exit status 1
488s autopkgtest [14:24:49]: @@@@@@@@@@@@@@@@@@@@ summary
488s 0001-valkey-cli PASS
488s 0002-benchmark PASS
488s 0003-valkey-check-aof PASS
488s 0004-valkey-check-rdb PASS
488s 0005-cjson PASS
488s 0006-migrate-from-redis FAIL non-zero exit status 1
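Editor's note: the 0006 trace above hashes the dump before migration (dcb26c33… for /var/lib/redis/dump.rdb) and after (3337f809… for /var/lib/valkey/dump.rdb); the differing digests show the file was not moved byte-for-byte. The comparison step can be sketched as a standalone snippet; this is a minimal local reconstruction, not the packaged test script, and the stand-in files replace the real dump paths so the logic runs anywhere.

```shell
#!/bin/sh
# Sketch of the dump-integrity check from the 0006-migrate-from-redis trace.
# Real paths would be /var/lib/redis/dump.rdb (before valkey-redis-compat
# runs) and /var/lib/valkey/dump.rdb (after); here local stand-ins are used.
set -eu

workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT

# Stand-in for the pre-migration dump; copied unchanged to simulate a
# byte-for-byte move (in the log the two hashes actually differ).
printf 'REDIS0011-placeholder-payload' > "$workdir/before.rdb"
cp "$workdir/before.rdb" "$workdir/after.rdb"

before=$(sha256sum "$workdir/before.rdb" | cut -d' ' -f1)
after=$(sha256sum "$workdir/after.rdb" | cut -d' ' -f1)

if [ "$before" = "$after" ]; then
    echo "dump unchanged by migration"
else
    echo "dump rewritten during migration"
fi
```

With identical stand-in files this prints "dump unchanged by migration"; pointed at the real dumps from the log it would report a rewrite, which by itself is not the failure — the test fails later when `systemctl start valkey-server` exits non-zero.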