0s autopkgtest [08:20:52]: starting date and time: 2025-06-30 08:20:52+0000
0s autopkgtest [08:20:52]: git checkout: 508d4a25 a-v-ssh wait_for_ssh: demote "ssh connection failed" to a debug message
0s autopkgtest [08:20:52]: host juju-7f2275-prod-proposed-migration-environment-21; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.s3i372ih/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:redis --apt-upgrade valkey --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=redis/5:8.0.0-2 -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest-ppc64el --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-21@bos03-ppc64el-8.secgroup --name adt-questing-ppc64el-valkey-20250630-082052-juju-7f2275-prod-proposed-migration-environment-21-b529d37f-bf31-4d01-8d77-27ee0906176a --image adt/ubuntu-questing-ppc64el-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-21 --net-id=net_prod-proposed-migration-ppc64el -e TERM=linux --mirror=http://ftpmaster.internal/ubuntu/
4s Creating nova instance adt-questing-ppc64el-valkey-20250630-082052-juju-7f2275-prod-proposed-migration-environment-21-b529d37f-bf31-4d01-8d77-27ee0906176a from image adt/ubuntu-questing-ppc64el-server-20250630.img (UUID 47357e88-256c-460f-8237-18b657912c63)...
77s autopkgtest [08:22:09]: testbed dpkg architecture: ppc64el
78s autopkgtest [08:22:10]: testbed apt version: 3.1.2
78s autopkgtest [08:22:10]: @@@@@@@@@@@@@@@@@@@@ test bed setup
78s autopkgtest [08:22:10]: testbed release detected to be: None
79s autopkgtest [08:22:11]: updating testbed package index (apt update)
79s Get:1 http://ftpmaster.internal/ubuntu questing-proposed InRelease [249 kB]
80s Hit:2 http://ftpmaster.internal/ubuntu questing InRelease
80s Hit:3 http://ftpmaster.internal/ubuntu questing-updates InRelease
80s Hit:4 http://ftpmaster.internal/ubuntu questing-security InRelease
80s Get:5 http://ftpmaster.internal/ubuntu questing-proposed/universe Sources [429 kB]
80s Get:6 http://ftpmaster.internal/ubuntu questing-proposed/main Sources [26.6 kB]
80s Get:7 http://ftpmaster.internal/ubuntu questing-proposed/multiverse Sources [17.5 kB]
80s Get:8 http://ftpmaster.internal/ubuntu questing-proposed/main ppc64el Packages [33.1 kB]
80s Get:9 http://ftpmaster.internal/ubuntu questing-proposed/universe ppc64el Packages [375 kB]
80s Get:10 http://ftpmaster.internal/ubuntu questing-proposed/multiverse ppc64el Packages [5260 B]
80s Fetched 1136 kB in 1s (1083 kB/s)
81s Reading package lists...
82s autopkgtest [08:22:14]: upgrading testbed (apt dist-upgrade and autopurge)
82s Reading package lists...
82s Building dependency tree...
82s Reading state information...
82s Calculating upgrade...
83s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
83s Reading package lists...
83s Building dependency tree...
83s Reading state information...
83s Solving dependencies...
83s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
86s autopkgtest [08:22:18]: testbed running kernel: Linux 6.15.0-3-generic #3-Ubuntu SMP Wed Jun 4 08:35:52 UTC 2025
86s autopkgtest [08:22:18]: @@@@@@@@@@@@@@@@@@@@ apt-source valkey
90s Get:1 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (dsc) [2484 B]
90s Get:2 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (tar) [2726 kB]
90s Get:3 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (diff) [20.4 kB]
90s gpgv: Signature made Wed Jun 18 14:39:32 2025 UTC
90s gpgv:                using RSA key 63EEFC3DE14D5146CE7F24BF34B8AD7D9529E793
90s gpgv:                issuer "lena.voytek@canonical.com"
90s gpgv: Can't check signature: No public key
90s dpkg-source: warning: cannot verify inline signature for ./valkey_8.1.1+dfsg1-2ubuntu1.dsc: no acceptable signature found
90s autopkgtest [08:22:22]: testing package valkey version 8.1.1+dfsg1-2ubuntu1
92s autopkgtest [08:22:24]: build not needed
94s autopkgtest [08:22:26]: test 0001-valkey-cli: preparing testbed
95s Reading package lists...
95s Building dependency tree...
95s Reading state information...
95s Solving dependencies...
95s The following NEW packages will be installed:
95s   liblzf1 valkey-server valkey-tools
95s 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
95s Need to get 1695 kB of archives.
95s After this operation, 10.1 MB of additional disk space will be used.
95s Get:1 http://ftpmaster.internal/ubuntu questing/universe ppc64el liblzf1 ppc64el 3.6-4 [7920 B]
95s Get:2 http://ftpmaster.internal/ubuntu questing/universe ppc64el valkey-tools ppc64el 8.1.1+dfsg1-2ubuntu1 [1636 kB]
96s Get:3 http://ftpmaster.internal/ubuntu questing/universe ppc64el valkey-server ppc64el 8.1.1+dfsg1-2ubuntu1 [51.7 kB]
97s Fetched 1695 kB in 1s (1189 kB/s)
97s Selecting previously unselected package liblzf1:ppc64el.
97s (Reading database ... 114358 files and directories currently installed.)
97s Preparing to unpack .../liblzf1_3.6-4_ppc64el.deb ...
97s Unpacking liblzf1:ppc64el (3.6-4) ...
97s Selecting previously unselected package valkey-tools.
97s Preparing to unpack .../valkey-tools_8.1.1+dfsg1-2ubuntu1_ppc64el.deb ...
97s Unpacking valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
97s Selecting previously unselected package valkey-server.
97s Preparing to unpack .../valkey-server_8.1.1+dfsg1-2ubuntu1_ppc64el.deb ...
97s Unpacking valkey-server (8.1.1+dfsg1-2ubuntu1) ...
97s Setting up liblzf1:ppc64el (3.6-4) ...
97s Setting up valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
97s Setting up valkey-server (8.1.1+dfsg1-2ubuntu1) ...
98s Created symlink '/etc/systemd/system/valkey.service' → '/usr/lib/systemd/system/valkey-server.service'.
98s Created symlink '/etc/systemd/system/multi-user.target.wants/valkey-server.service' → '/usr/lib/systemd/system/valkey-server.service'.
98s Processing triggers for man-db (2.13.1-1) ...
99s Processing triggers for libc-bin (2.41-6ubuntu2) ...
100s autopkgtest [08:22:32]: test 0001-valkey-cli: [-----------------------
105s # Server
105s redis_version:7.2.4
105s server_name:valkey
105s valkey_version:8.1.1
105s valkey_release_stage:ga
105s redis_git_sha1:00000000
105s redis_git_dirty:0
105s redis_build_id:454dc2cf719509d2
105s server_mode:standalone
105s os:Linux 6.15.0-3-generic ppc64le
105s arch_bits:64
105s monotonic_clock:POSIX clock_gettime
105s multiplexing_api:epoll
105s gcc_version:14.3.0
105s process_id:2053
105s process_supervised:systemd
105s run_id:1bd84d3374b8fd71e0ead0459bd6ac80b7c6d657
105s tcp_port:6379
105s server_time_usec:1751271757657620
105s uptime_in_seconds:5
105s uptime_in_days:0
105s hz:10
105s configured_hz:10
105s clients_hz:10
105s lru_clock:6441293
105s executable:/usr/bin/valkey-server
105s config_file:/etc/valkey/valkey.conf
105s io_threads_active:0
105s availability_zone:
105s listener0:name=tcp,bind=127.0.0.1,bind=-::1,port=6379
105s 
105s # Clients
105s connected_clients:1
105s cluster_connections:0
105s maxclients:10000
105s client_recent_max_input_buffer:0
105s client_recent_max_output_buffer:0
105s blocked_clients:0
105s tracking_clients:0
105s pubsub_clients:0
105s watching_clients:0
105s clients_in_timeout_table:0
105s total_watched_keys:0
105s total_blocking_keys:0
105s total_blocking_keys_on_nokey:0
105s paused_reason:none
105s paused_actions:none
105s paused_timeout_milliseconds:0
105s 
105s # Memory
105s used_memory:944864
105s used_memory_human:922.72K
105s used_memory_rss:22216704
105s used_memory_rss_human:21.19M
105s used_memory_peak:944864
105s used_memory_peak_human:922.72K
105s used_memory_peak_perc:100.29%
105s used_memory_overhead:924960
105s used_memory_startup:924736
105s used_memory_dataset:19904
105s used_memory_dataset_perc:98.89%
105s allocator_allocated:4426880
105s allocator_active:9043968
105s allocator_resident:11403264
105s allocator_muzzy:0
105s total_system_memory:4208852992
105s total_system_memory_human:3.92G
105s used_memory_lua:32768
105s used_memory_vm_eval:32768
105s used_memory_lua_human:32.00K
105s used_memory_scripts_eval:0
105s number_of_cached_scripts:0
105s number_of_functions:0
105s number_of_libraries:0
105s used_memory_vm_functions:33792
105s used_memory_vm_total:66560
105s used_memory_vm_total_human:65.00K
105s used_memory_functions:224
105s used_memory_scripts:224
105s used_memory_scripts_human:224B
105s maxmemory:0
105s maxmemory_human:0B
105s maxmemory_policy:noeviction
105s allocator_frag_ratio:1.00
105s allocator_frag_bytes:0
105s allocator_rss_ratio:1.26
105s allocator_rss_bytes:2359296
105s rss_overhead_ratio:1.95
105s rss_overhead_bytes:10813440
105s mem_fragmentation_ratio:24.02
105s mem_fragmentation_bytes:21291824
105s mem_not_counted_for_evict:0
105s mem_replication_backlog:0
105s mem_total_replication_buffers:0
105s mem_clients_slaves:0
105s mem_clients_normal:0
105s mem_cluster_links:0
105s mem_aof_buffer:0
105s mem_allocator:jemalloc-5.3.0
105s mem_overhead_db_hashtable_rehashing:0
105s active_defrag_running:0
105s lazyfree_pending_objects:0
105s lazyfreed_objects:0
105s 
105s # Persistence
105s loading:0
105s async_loading:0
105s current_cow_peak:0
105s current_cow_size:0
105s current_cow_size_age:0
105s current_fork_perc:0.00
105s current_save_keys_processed:0
105s current_save_keys_total:0
105s rdb_changes_since_last_save:0
105s rdb_bgsave_in_progress:0
105s rdb_last_save_time:1751271752
105s rdb_last_bgsave_status:ok
105s rdb_last_bgsave_time_sec:-1
105s rdb_current_bgsave_time_sec:-1
105s rdb_saves:0
105s rdb_last_cow_size:0
105s rdb_last_load_keys_expired:0
105s rdb_last_load_keys_loaded:0
105s aof_enabled:0
105s aof_rewrite_in_progress:0
105s aof_rewrite_scheduled:0
105s aof_last_rewrite_time_sec:-1
105s aof_current_rewrite_time_sec:-1
105s aof_last_bgrewrite_status:ok
105s aof_rewrites:0
105s aof_rewrites_consecutive_failures:0
105s aof_last_write_status:ok
105s aof_last_cow_size:0
105s module_fork_in_progress:0
105s module_fork_last_cow_size:0
105s 
105s # Stats
105s total_connections_received:1
105s total_commands_processed:0
105s instantaneous_ops_per_sec:0
105s total_net_input_bytes:14
105s total_net_output_bytes:0
105s total_net_repl_input_bytes:0
105s total_net_repl_output_bytes:0
105s instantaneous_input_kbps:0.00
105s instantaneous_output_kbps:0.00
105s instantaneous_input_repl_kbps:0.00
105s instantaneous_output_repl_kbps:0.00
105s rejected_connections:0
105s sync_full:0
105s sync_partial_ok:0
105s sync_partial_err:0
105s expired_keys:0
105s expired_stale_perc:0.00
105s expired_time_cap_reached_count:0
105s expire_cycle_cpu_milliseconds:0
105s evicted_keys:0
105s evicted_clients:0
105s evicted_scripts:0
105s total_eviction_exceeded_time:0
105s current_eviction_exceeded_time:0
105s keyspace_hits:0
105s keyspace_misses:0
105s pubsub_channels:0
105s pubsub_patterns:0
105s pubsubshard_channels:0
105s latest_fork_usec:0
105s total_forks:0
105s migrate_cached_sockets:0
105s slave_expires_tracked_keys:0
105s active_defrag_hits:0
105s active_defrag_misses:0
105s active_defrag_key_hits:0
105s active_defrag_key_misses:0
105s total_active_defrag_time:0
105s current_active_defrag_time:0
105s tracking_total_keys:0
105s tracking_total_items:0
105s tracking_total_prefixes:0
105s unexpected_error_replies:0
105s total_error_replies:0
105s dump_payload_sanitizations:0
105s total_reads_processed:1
105s total_writes_processed:0
105s io_threaded_reads_processed:0
105s io_threaded_writes_processed:0
105s io_threaded_freed_objects:0
105s io_threaded_accept_processed:0
105s io_threaded_poll_processed:0
105s io_threaded_total_prefetch_batches:0
105s io_threaded_total_prefetch_entries:0
105s client_query_buffer_limit_disconnections:0
105s client_output_buffer_limit_disconnections:0
105s reply_buffer_shrinks:0
105s reply_buffer_expands:0
105s eventloop_cycles:51
105s eventloop_duration_sum:7457
105s eventloop_duration_cmd_sum:0
105s instantaneous_eventloop_cycles_per_sec:9
105s instantaneous_eventloop_duration_usec:149
105s acl_access_denied_auth:0
105s acl_access_denied_cmd:0
105s acl_access_denied_key:0
105s acl_access_denied_channel:0
105s 
105s # Replication
105s role:master
105s connected_slaves:0
105s replicas_waiting_psync:0
105s master_failover_state:no-failover
105s master_replid:8a228f752a018e23dae88163c13d6d6a3bf6118b
105s master_replid2:0000000000000000000000000000000000000000
105s master_repl_offset:0
105s second_repl_offset:-1
105s repl_backlog_active:0
105s repl_backlog_size:10485760
105s repl_backlog_first_byte_offset:0
105s repl_backlog_histlen:0
105s 
105s # CPU
105s used_cpu_sys:0.016789
105s used_cpu_user:0.054104
105s used_cpu_sys_children:0.000000
105s used_cpu_user_children:0.000765
105s used_cpu_sys_main_thread:0.014841
105s used_cpu_user_main_thread:0.055654
105s 
105s # Modules
105s 
105s # Errorstats
105s 
105s # Cluster
105s cluster_enabled:0
105s 
105s # Keyspace
105s Redis ver. 8.1.1
106s autopkgtest [08:22:38]: test 0001-valkey-cli: -----------------------]
106s 0001-valkey-cli PASS
106s autopkgtest [08:22:38]: test 0001-valkey-cli:  - - - - - - - - - - results - - - - - - - - - -
106s autopkgtest [08:22:38]: test 0002-benchmark: preparing testbed
107s Reading package lists...
107s Building dependency tree...
107s Reading state information...
107s Solving dependencies...
107s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
108s autopkgtest [08:22:40]: test 0002-benchmark: [-----------------------
114s PING_INLINE: rps=0.0 (overall: 0.0) avg_msec=nan (overall: nan) ====== PING_INLINE ======
114s   100000 requests completed in 0.21 seconds
114s   50 parallel clients
114s   3 bytes payload
114s   keep alive: 1
114s   host configuration "save": 3600 1 300 100 60 10000
114s   host configuration "appendonly": no
114s   multi-thread: no
114s 
114s Latency by percentile distribution:
114s 0.000% <= 0.367 milliseconds (cumulative count 20)
114s 50.000% <= 0.847 milliseconds (cumulative count 50100)
114s 75.000% <= 1.015 milliseconds (cumulative count 75730)
114s 87.500% <= 1.127 milliseconds (cumulative count 88000)
114s 93.750% <= 1.231 milliseconds (cumulative count 93950)
114s 96.875% <= 1.327 milliseconds (cumulative count 96930)
114s 98.438% <= 1.631 milliseconds (cumulative count 98440)
114s 99.219% <= 2.111 milliseconds (cumulative count 99220)
114s 99.609% <= 3.927 milliseconds (cumulative count 99610)
114s 99.805% <= 4.359 milliseconds (cumulative count 99810)
114s 99.902% <= 4.615 milliseconds (cumulative count 99910)
114s 99.951% <= 4.751 milliseconds (cumulative count 99960)
114s 99.976% <= 4.815 milliseconds (cumulative count 99980)
114s 99.988% <= 4.847 milliseconds (cumulative count 99990)
114s 99.994% <= 4.887 milliseconds (cumulative count 100000)
114s 100.000% <= 4.887 milliseconds (cumulative count 100000)
114s 
114s Cumulative distribution of latencies:
114s 0.000% <= 0.103 milliseconds (cumulative count 0)
114s 0.210% <= 0.407 milliseconds (cumulative count 210)
114s 1.660% <= 0.503 milliseconds (cumulative count 1660)
114s 3.850% <= 0.607 milliseconds (cumulative count 3850)
114s 14.330% <= 0.703 milliseconds (cumulative count 14330)
114s 41.770% <= 0.807 milliseconds (cumulative count 41770)
114s 60.280% <= 0.903 milliseconds (cumulative count 60280)
114s 74.730% <= 1.007 milliseconds (cumulative count 74730)
114s 85.480% <= 1.103 milliseconds (cumulative count 85480)
114s 92.990% <= 1.207 milliseconds (cumulative count 92990)
114s 96.460% <= 1.303 milliseconds (cumulative count 96460)
114s 97.740% <= 1.407 milliseconds (cumulative count 97740)
114s 98.120% <= 1.503 milliseconds (cumulative count 98120)
114s 98.370% <= 1.607 milliseconds (cumulative count 98370)
114s 98.540% <= 1.703 milliseconds (cumulative count 98540)
114s 98.740% <= 1.807 milliseconds (cumulative count 98740)
114s 98.930% <= 1.903 milliseconds (cumulative count 98930)
114s 99.070% <= 2.007 milliseconds (cumulative count 99070)
114s 99.190% <= 2.103 milliseconds (cumulative count 99190)
114s 99.500% <= 3.103 milliseconds (cumulative count 99500)
114s 99.710% <= 4.103 milliseconds (cumulative count 99710)
114s 100.000% <= 5.103 milliseconds (cumulative count 100000)
114s 
114s Summary:
114s   throughput summary: 478468.88 requests per second
114s   latency summary (msec):
114s           avg       min       p50       p95       p99       max
114s         0.906     0.360     0.847     1.255     1.959     4.887
114s PING_MBULK: rps=92560.0 (overall: 578500.0) avg_msec=0.654 (overall: 0.654) ====== PING_MBULK ======
114s   100000 requests completed in 0.17 seconds
114s   50 parallel clients
114s   3 bytes payload
114s   keep alive: 1
114s   host configuration "save": 3600 1 300 100 60 10000
114s   host configuration "appendonly": no
114s   multi-thread: no
114s 
114s Latency by percentile distribution:
114s 0.000% <= 0.311 milliseconds (cumulative count 10)
114s 50.000% <= 0.687 milliseconds (cumulative count 51670)
114s 75.000% <= 0.807 milliseconds (cumulative count 75960)
114s 87.500% <= 0.919 milliseconds (cumulative count 87960)
114s 93.750% <= 0.991 milliseconds (cumulative count 93970)
114s 96.875% <= 1.087 milliseconds (cumulative count 97150)
114s 98.438% <= 1.127 milliseconds (cumulative count 98450)
114s 99.219% <= 1.175 milliseconds (cumulative count 99260)
114s 99.609% <= 1.223 milliseconds (cumulative count 99610)
114s 99.805% <= 1.399 milliseconds (cumulative count 99810)
114s 99.902% <= 1.631 milliseconds (cumulative count 99910)
114s 99.951% <= 1.735 milliseconds (cumulative count 99960)
114s 99.976% <= 1.791 milliseconds (cumulative count 99980)
114s 99.988% <= 1.807 milliseconds (cumulative count 99990)
114s 99.994% <= 1.847 milliseconds (cumulative count 100000)
114s 100.000% <= 1.847 milliseconds (cumulative count 100000)
114s 
114s Cumulative distribution of latencies:
114s 0.000% <= 0.103 milliseconds (cumulative count 0)
114s 1.520% <= 0.407 milliseconds (cumulative count 1520)
114s 8.660% <= 0.503 milliseconds (cumulative count 8660)
114s 26.630% <= 0.607 milliseconds (cumulative count 26630)
114s 56.400% <= 0.703 milliseconds (cumulative count 56400)
114s 75.960% <= 0.807 milliseconds (cumulative count 75960)
114s 86.490% <= 0.903 milliseconds (cumulative count 86490)
114s 94.870% <= 1.007 milliseconds (cumulative count 94870)
114s 97.700% <= 1.103 milliseconds (cumulative count 97700)
114s 99.470% <= 1.207 milliseconds (cumulative count 99470)
114s 99.730% <= 1.303 milliseconds (cumulative count 99730)
114s 99.810% <= 1.407 milliseconds (cumulative count 99810)
114s 99.850% <= 1.503 milliseconds (cumulative count 99850)
114s 99.900% <= 1.607 milliseconds (cumulative count 99900)
114s 99.940% <= 1.703 milliseconds (cumulative count 99940)
114s 99.990% <= 1.807 milliseconds (cumulative count 99990)
114s 100.000% <= 1.903 milliseconds (cumulative count 100000)
114s 
114s Summary:
114s   throughput summary: 584795.31 requests per second
114s   latency summary (msec):
114s           avg       min       p50       p95       p99       max
114s         0.709     0.304     0.687     1.015     1.159     1.847
114s SET: rps=169083.7 (overall: 359661.0) avg_msec=1.246 (overall: 1.246) ====== SET ======
114s   100000 requests completed in 0.27 seconds
114s   50 parallel clients
114s   3 bytes payload
114s   keep alive: 1
114s   host configuration "save": 3600 1 300 100 60 10000
114s   host configuration "appendonly": no
114s   multi-thread: no
114s 
114s Latency by percentile distribution:
114s 0.000% <= 0.391 milliseconds (cumulative count 10)
114s 50.000% <= 1.199 milliseconds (cumulative count 50060)
114s 75.000% <= 1.359 milliseconds (cumulative count 75660)
114s 87.500% <= 1.455 milliseconds (cumulative count 87960)
114s 93.750% <= 1.535 milliseconds (cumulative count 94060)
114s 96.875% <= 1.599 milliseconds (cumulative count 96900)
114s 98.438% <= 1.663 milliseconds (cumulative count 98450)
114s 99.219% <= 1.791 milliseconds (cumulative count 99240)
114s 99.609% <= 4.735 milliseconds (cumulative count 99610)
114s 99.805% <= 6.191 milliseconds (cumulative count 99810)
114s 99.902% <= 6.735 milliseconds (cumulative count 99910)
114s 99.951% <= 6.959 milliseconds (cumulative count 99960)
114s 99.976% <= 7.023 milliseconds (cumulative count 99980)
114s 99.988% <= 7.063 milliseconds (cumulative count 99990)
114s 99.994% <= 7.095 milliseconds (cumulative count 100000)
114s 100.000% <= 7.095 milliseconds (cumulative count 100000)
114s 
114s Cumulative distribution of latencies:
114s 0.000% <= 0.103 milliseconds (cumulative count 0)
114s 0.010% <= 0.407 milliseconds (cumulative count 10)
114s 0.130% <= 0.503 milliseconds (cumulative count 130)
114s 0.250% <= 0.607 milliseconds (cumulative count 250)
114s 0.370% <= 0.703 milliseconds (cumulative count 370)
114s 1.070% <= 0.807 milliseconds (cumulative count 1070)
114s 10.960% <= 0.903 milliseconds (cumulative count 10960)
114s 23.140% <= 1.007 milliseconds (cumulative count 23140)
114s 35.630% <= 1.103 milliseconds (cumulative count 35630)
114s 51.100% <= 1.207 milliseconds (cumulative count 51100)
114s 66.270% <= 1.303 milliseconds (cumulative count 66270)
114s 82.560% <= 1.407 milliseconds (cumulative count 82560)
114s 92.350% <= 1.503 milliseconds (cumulative count 92350)
114s 97.110% <= 1.607 milliseconds (cumulative count 97110)
114s 98.950% <= 1.703 milliseconds (cumulative count 98950)
114s 99.280% <= 1.807 milliseconds (cumulative count 99280)
114s 99.390% <= 1.903 milliseconds (cumulative count 99390)
114s 99.470% <= 2.007 milliseconds (cumulative count 99470)
114s 99.480% <= 2.103 milliseconds (cumulative count 99480)
114s 99.500% <= 3.103 milliseconds (cumulative count 99500)
114s 99.700% <= 5.103 milliseconds (cumulative count 99700)
114s 99.800% <= 6.103 milliseconds (cumulative count 99800)
114s 100.000% <= 7.103 milliseconds (cumulative count 100000)
114s 
114s Summary:
114s   throughput summary: 371747.22 requests per second
114s   latency summary (msec):
114s           avg       min       p50       p95       p99       max
114s         1.216     0.384     1.199     1.559     1.719     7.095
114s GET: rps=185160.0 (overall: 477216.5) avg_msec=0.925 (overall: 0.925) ====== GET ======
114s   100000 requests completed in 0.21 seconds
114s   50 parallel clients
114s   3 bytes payload
114s   keep alive: 1
114s   host configuration "save": 3600 1 300 100 60 10000
114s   host configuration "appendonly": no
114s   multi-thread: no
114s 
114s Latency by percentile distribution:
114s 0.000% <= 0.327 milliseconds (cumulative count 10)
114s 50.000% <= 0.895 milliseconds (cumulative count 50970)
114s 75.000% <= 1.055 milliseconds (cumulative count 75920)
114s 87.500% <= 1.151 milliseconds (cumulative count 88180)
114s 93.750% <= 1.215 milliseconds (cumulative count 93970)
114s 96.875% <= 1.295 milliseconds (cumulative count 97110)
114s 98.438% <= 1.335 milliseconds (cumulative count 98630)
114s 99.219% <= 1.359 milliseconds (cumulative count 99280)
114s 99.609% <= 1.399 milliseconds (cumulative count 99640)
114s 99.805% <= 1.431 milliseconds (cumulative count 99840)
114s 99.902% <= 1.455 milliseconds (cumulative count 99910)
114s 99.951% <= 1.511 milliseconds (cumulative count 99960)
114s 99.976% <= 1.615 milliseconds (cumulative count 99980)
114s 99.988% <= 1.655 milliseconds (cumulative count 99990)
114s 99.994% <= 1.663 milliseconds (cumulative count 100000)
114s 100.000% <= 1.663 milliseconds (cumulative count 100000)
114s 
114s Cumulative distribution of latencies:
114s 0.000% <= 0.103 milliseconds (cumulative count 0)
114s 0.090% <= 0.407 milliseconds (cumulative count 90)
114s 0.280% <= 0.503 milliseconds (cumulative count 280)
114s 0.690% <= 0.607 milliseconds (cumulative count 690)
114s 3.110% <= 0.703 milliseconds (cumulative count 3110)
114s 33.490% <= 0.807 milliseconds (cumulative count 33490)
114s 52.140% <= 0.903 milliseconds (cumulative count 52140)
114s 68.500% <= 1.007 milliseconds (cumulative count 68500)
114s 82.410% <= 1.103 milliseconds (cumulative count 82410)
114s 93.460% <= 1.207 milliseconds (cumulative count 93460)
114s 97.490% <= 1.303 milliseconds (cumulative count 97490)
114s 99.700% <= 1.407 milliseconds (cumulative count 99700)
114s 99.940% <= 1.503 milliseconds (cumulative count 99940)
114s 99.970% <= 1.607 milliseconds (cumulative count 99970)
114s 100.000% <= 1.703 milliseconds (cumulative count 100000)
114s 
114s Summary:
114s   throughput summary: 478468.88 requests per second
114s   latency summary (msec):
114s           avg       min       p50       p95       p99       max
114s         0.925     0.320     0.895     1.239     1.351     1.663
115s INCR: rps=248440.0 (overall: 456691.2) avg_msec=0.927 (overall: 0.927) ====== INCR ======
115s   100000 requests completed in 0.22 seconds
115s   50 parallel clients
115s   3 bytes payload
115s   keep alive: 1
115s   host configuration "save": 3600 1 300 100 60 10000
115s   host configuration "appendonly": no
115s   multi-thread: no
115s 
115s Latency by percentile distribution:
115s 0.000% <= 0.351 milliseconds (cumulative count 10)
115s 50.000% <= 0.911 milliseconds (cumulative count 50090)
115s 75.000% <= 1.079 milliseconds (cumulative count 75920)
115s 87.500% <= 1.183 milliseconds (cumulative count 87890)
115s 93.750% <= 1.255 milliseconds (cumulative count 94030)
115s 96.875% <= 1.335 milliseconds (cumulative count 97210)
115s 98.438% <= 1.383 milliseconds (cumulative count 98570)
115s 99.219% <= 1.439 milliseconds (cumulative count 99240)
115s 99.609% <= 1.527 milliseconds (cumulative count 99640)
115s 99.805% <= 1.607 milliseconds (cumulative count 99810)
115s 99.902% <= 1.823 milliseconds (cumulative count 99910)
115s 99.951% <= 2.039 milliseconds (cumulative count 99960)
115s 99.976% <= 2.175 milliseconds (cumulative count 99980)
115s 99.988% <= 2.207 milliseconds (cumulative count 99990)
115s 99.994% <= 2.239 milliseconds (cumulative count 100000)
115s 100.000% <= 2.239 milliseconds (cumulative count 100000)
115s 
115s Cumulative distribution of latencies:
115s 0.000% <= 0.103 milliseconds (cumulative count 0)
115s 0.110% <= 0.407 milliseconds (cumulative count 110)
115s 1.260% <= 0.503 milliseconds (cumulative count 1260)
115s 3.380% <= 0.607 milliseconds (cumulative count 3380)
115s 6.360% <= 0.703 milliseconds (cumulative count 6360)
115s 28.900% <= 0.807 milliseconds (cumulative count 28900)
115s 48.730% <= 0.903 milliseconds (cumulative count 48730)
115s 65.550% <= 1.007 milliseconds (cumulative count 65550)
115s 78.900% <= 1.103 milliseconds (cumulative count 78900)
115s 90.410% <= 1.207 milliseconds (cumulative count 90410)
115s 96.000% <= 1.303 milliseconds (cumulative count 96000)
115s 98.910% <= 1.407 milliseconds (cumulative count 98910)
115s 99.560% <= 1.503 milliseconds (cumulative count 99560)
115s 99.810% <= 1.607 milliseconds (cumulative count 99810)
115s 99.850% <= 1.703 milliseconds (cumulative count 99850)
115s 99.900% <= 1.807 milliseconds (cumulative count 99900)
115s 99.930% <= 1.903 milliseconds (cumulative count 99930)
115s 99.950% <= 2.007 milliseconds (cumulative count 99950)
115s 99.960% <= 2.103 milliseconds (cumulative count 99960)
115s 100.000% <= 3.103 milliseconds (cumulative count 100000)
115s 
115s Summary:
115s   throughput summary: 458715.59 requests per second
115s   latency summary (msec):
115s           avg       min       p50       p95       p99       max
115s         0.939     0.344     0.911     1.279     1.415     2.239
115s LPUSH: rps=235360.0 (overall: 354457.8) avg_msec=1.265 (overall: 1.265) ====== LPUSH ======
115s   100000 requests completed in 0.28 seconds
115s   50 parallel clients
115s   3 bytes payload
115s   keep alive: 1
115s   host configuration "save": 3600 1 300 100 60 10000
115s   host configuration "appendonly": no
115s   multi-thread: no
115s 
115s Latency by percentile distribution:
115s 0.000% <= 0.359 milliseconds (cumulative count 20)
115s 50.000% <= 1.207 milliseconds (cumulative count 50070)
115s 75.000% <= 1.383 milliseconds (cumulative count 75450)
115s 87.500% <= 1.503 milliseconds (cumulative count 87630)
115s 93.750% <= 1.623 milliseconds (cumulative count 93840)
115s 96.875% <= 1.823 milliseconds (cumulative count 96910)
115s 98.438% <= 3.167 milliseconds (cumulative count 98440)
115s 99.219% <= 5.415 milliseconds (cumulative count 99230)
115s 99.609% <= 5.767 milliseconds (cumulative count 99620)
115s 99.805% <= 5.983 milliseconds (cumulative count 99810)
115s 99.902% <= 6.167 milliseconds (cumulative count 99910)
115s 99.951% <= 6.263 milliseconds (cumulative count 99960)
115s 99.976% <= 6.311 milliseconds (cumulative count 99980)
115s 99.988% <= 6.327 milliseconds (cumulative count 99990)
115s 99.994% <= 6.359 milliseconds (cumulative count 100000)
115s 100.000% <= 6.359 milliseconds (cumulative count 100000)
115s 
115s Cumulative distribution of latencies:
115s 0.000% <= 0.103 milliseconds (cumulative count 0)
115s 0.160% <= 0.407 milliseconds (cumulative count 160)
115s 0.860% <= 0.503 milliseconds (cumulative count 860)
115s 1.550% <= 0.607 milliseconds (cumulative count 1550)
115s 2.420% <= 0.703 milliseconds (cumulative count 2420)
115s 3.750% <= 0.807 milliseconds (cumulative count 3750)
115s 11.470% <= 0.903 milliseconds (cumulative count 11470)
115s 24.350% <= 1.007 milliseconds (cumulative count 24350)
115s 35.180% <= 1.103 milliseconds (cumulative count 35180)
115s 50.070% <= 1.207 milliseconds (cumulative count 50070)
115s 63.980% <= 1.303 milliseconds (cumulative count 63980)
115s 78.380% <= 1.407 milliseconds (cumulative count 78380)
115s 87.630% <= 1.503 milliseconds (cumulative count 87630)
115s 93.420% <= 1.607 milliseconds (cumulative count 93420)
115s 95.760% <= 1.703 milliseconds (cumulative count 95760)
115s 96.820% <= 1.807 milliseconds (cumulative count 96820)
115s 97.280% <= 1.903 milliseconds (cumulative count 97280)
115s 97.690% <= 2.007 milliseconds (cumulative count 97690)
115s 97.940% <= 2.103 milliseconds (cumulative count 97940)
115s 98.420% <= 3.103 milliseconds (cumulative count 98420)
115s 98.500% <= 4.103 milliseconds (cumulative count 98500)
115s 98.860% <= 5.103 milliseconds (cumulative count 98860)
115s 99.870% <= 6.103 milliseconds (cumulative count 99870)
115s 100.000% <= 7.103 milliseconds (cumulative count 100000)
115s 
115s Summary:
115s   throughput summary: 354609.94 requests per second
115s   latency summary (msec):
115s           avg       min       p50       p95       p99       max
115s         1.267     0.352     1.207     1.671     5.231     6.359
115s RPUSH: rps=218286.9 (overall: 411954.9) avg_msec=1.083 (overall: 1.083) ====== RPUSH ======
115s   100000 requests completed in 0.24 seconds
115s   50 parallel clients
115s   3 bytes payload
115s   keep alive: 1
115s   host configuration "save": 3600 1 300 100 60 10000
115s   host configuration "appendonly": no
115s   multi-thread: no
115s 
115s Latency by percentile distribution:
115s 0.000% <= 0.383 milliseconds (cumulative count 10)
115s 50.000% <= 1.079 milliseconds (cumulative count 50740)
115s 75.000% <= 1.239 milliseconds (cumulative count 75800)
115s 87.500% <= 1.335 milliseconds (cumulative count 87650)
115s 93.750% <= 1.415 milliseconds (cumulative count 94110)
115s 96.875% <= 1.479 milliseconds (cumulative count 96990)
115s 98.438% <= 1.527 milliseconds (cumulative count 98440)
115s 99.219% <= 1.575 milliseconds (cumulative count 99220)
115s 99.609% <= 1.679 milliseconds (cumulative count 99610)
115s 99.805% <= 1.855 milliseconds (cumulative count 99810)
115s 99.902% <= 2.063 milliseconds (cumulative count 99910)
115s 99.951% <= 2.175 milliseconds (cumulative count 99960)
115s 99.976% <= 2.223 milliseconds (cumulative count 99980)
115s 99.988% <= 2.247 milliseconds (cumulative count 99990)
115s 99.994% <= 2.271 milliseconds (cumulative count 100000)
115s 100.000% <= 2.271 milliseconds (cumulative count 100000)
115s 
115s Cumulative distribution of latencies:
115s 0.000% <= 0.103 milliseconds (cumulative count 0)
115s 0.030% <= 0.407 milliseconds (cumulative count 30)
115s 0.180% <= 0.503 milliseconds (cumulative count 180)
115s 0.540% <= 0.607 milliseconds (cumulative count 540)
115s 1.100% <= 0.703 milliseconds (cumulative count 1100)
115s 4.350% <= 0.807 milliseconds (cumulative count 4350)
115s 22.260% <= 0.903 milliseconds (cumulative count 22260)
115s 39.710% <= 1.007 milliseconds (cumulative count 39710)
115s 54.210% <= 1.103 milliseconds (cumulative count 54210)
115s 70.650% <= 1.207 milliseconds (cumulative count 70650)
115s 84.310% <= 1.303 milliseconds (cumulative count 84310)
115s 93.640% <= 1.407 milliseconds (cumulative count 93640)
115s 97.920% <= 1.503 milliseconds (cumulative count 97920)
115s 99.450% <= 1.607 milliseconds (cumulative count 99450)
115s 99.660% <= 1.703 milliseconds (cumulative count 99660)
115s 99.780% <= 1.807 milliseconds (cumulative count 99780)
115s 99.840% <= 1.903 milliseconds (cumulative count 99840)
115s 99.890% <= 2.007 milliseconds (cumulative count 99890)
115s 99.930% <= 2.103 milliseconds (cumulative count 99930)
115s 100.000% <= 3.103 milliseconds (cumulative count 100000)
115s 
115s Summary:
115s   throughput summary: 411522.62 requests per second
115s   latency summary (msec):
115s           avg       min       p50       p95       p99       max
115s         1.089     0.376     1.079     1.439     1.559     2.271
115s LPOP: rps=190280.0 (overall: 344710.2) avg_msec=1.309 (overall: 1.309) ====== LPOP ======
115s   100000 requests completed in 0.29 seconds
115s   50 parallel clients
115s   3 bytes payload
115s   keep alive: 1
115s   host configuration "save": 3600 1 300 100 60 10000
115s   host configuration "appendonly": no
115s   multi-thread: no
115s 
115s Latency by percentile distribution:
115s 0.000% <= 0.375 milliseconds (cumulative count 10)
115s 50.000% <= 1.343 milliseconds (cumulative count 50810)
115s 75.000% <= 1.503 milliseconds (cumulative count 75550)
115s 87.500% <= 1.599 milliseconds (cumulative count 87850)
115s 93.750% <= 1.679 milliseconds (cumulative count 94130)
115s 96.875% <= 1.751 milliseconds (cumulative count 97220)
115s 98.438% <= 1.799 milliseconds (cumulative count 98500)
115s 99.219% <= 1.847 milliseconds (cumulative count 99240)
115s 99.609% <= 1.887 milliseconds (cumulative count 99630)
115s 99.805% <= 1.943 milliseconds (cumulative count 99820)
115s 99.902% <= 1.991 milliseconds (cumulative count 99910)
115s 99.951% <= 2.047 milliseconds (cumulative count 99960)
115s 99.976% <= 2.063 milliseconds (cumulative count 99980)
115s 99.988% <= 2.087 milliseconds (cumulative count 99990)
115s 99.994% <= 2.183 milliseconds (cumulative count 100000)
115s 100.000% <= 2.183 milliseconds (cumulative count 100000)
115s 
115s Cumulative distribution of latencies:
115s 0.000% <= 0.103 milliseconds (cumulative count 0)
115s 0.050% <= 0.407 milliseconds (cumulative count 50)
115s 0.290% <= 0.503 milliseconds (cumulative count 290)
115s 0.650% <= 0.607 milliseconds (cumulative count 650)
115s 1.000% <= 0.703 milliseconds (cumulative count 1000)
115s 1.350% <= 0.807 milliseconds (cumulative count 1350)
115s 3.770% <= 0.903 milliseconds (cumulative count 3770)
115s 16.370% <= 1.007 milliseconds (cumulative count 16370)
115s 24.250% <= 1.103 milliseconds (cumulative count 24250)
115s 32.020% <= 1.207 milliseconds (cumulative count 32020)
115s 44.740% <= 1.303 milliseconds (cumulative count 44740)
115s 60.610% <= 1.407 milliseconds (cumulative count 60610)
115s 75.550% <= 1.503 milliseconds (cumulative count 75550)
115s 88.610% <= 1.607 milliseconds (cumulative count 88610)
115s 95.340% <= 1.703 milliseconds (cumulative count 95340)
115s 98.690% <= 1.807 milliseconds (cumulative count 98690)
115s 99.720% <= 1.903 milliseconds (cumulative count 99720)
115s 99.920% <= 2.007 milliseconds (cumulative count 99920)
115s 99.990% <= 2.103 milliseconds (cumulative count 99990)
115s 100.000% <= 3.103 milliseconds (cumulative count 100000)
115s 
115s Summary:
115s   throughput summary: 346020.75 requests per second
115s   latency summary
(msec): 115s avg min p50 p95 p99 max 115s 1.313 0.368 1.343 1.703 1.831 2.183 116s RPOP: rps=142350.6 (overall: 360909.1) avg_msec=1.239 (overall: 1.239) ====== RPOP ====== 116s 100000 requests completed in 0.27 seconds 116s 50 parallel clients 116s 3 bytes payload 116s keep alive: 1 116s host configuration "save": 3600 1 300 100 60 10000 116s host configuration "appendonly": no 116s multi-thread: no 116s 116s Latency by percentile distribution: 116s 0.000% <= 0.375 milliseconds (cumulative count 20) 116s 50.000% <= 1.239 milliseconds (cumulative count 50390) 116s 75.000% <= 1.399 milliseconds (cumulative count 75510) 116s 87.500% <= 1.495 milliseconds (cumulative count 88180) 116s 93.750% <= 1.567 milliseconds (cumulative count 93860) 116s 96.875% <= 1.647 milliseconds (cumulative count 96940) 116s 98.438% <= 1.711 milliseconds (cumulative count 98470) 116s 99.219% <= 1.839 milliseconds (cumulative count 99230) 116s 99.609% <= 1.967 milliseconds (cumulative count 99610) 116s 99.805% <= 2.311 milliseconds (cumulative count 99810) 116s 99.902% <= 3.311 milliseconds (cumulative count 99910) 116s 99.951% <= 3.479 milliseconds (cumulative count 99960) 116s 99.976% <= 3.519 milliseconds (cumulative count 99980) 116s 99.988% <= 3.543 milliseconds (cumulative count 99990) 116s 99.994% <= 3.575 milliseconds (cumulative count 100000) 116s 100.000% <= 3.575 milliseconds (cumulative count 100000) 116s 116s Cumulative distribution of latencies: 116s 0.000% <= 0.103 milliseconds (cumulative count 0) 116s 0.070% <= 0.407 milliseconds (cumulative count 70) 116s 0.270% <= 0.503 milliseconds (cumulative count 270) 116s 0.450% <= 0.607 milliseconds (cumulative count 450) 116s 0.590% <= 0.703 milliseconds (cumulative count 590) 116s 1.140% <= 0.807 milliseconds (cumulative count 1140) 116s 7.950% <= 0.903 milliseconds (cumulative count 7950) 116s 22.230% <= 1.007 milliseconds (cumulative count 22230) 116s 29.920% <= 1.103 milliseconds (cumulative count 29920) 116s 45.990% <= 1.207 
milliseconds (cumulative count 45990) 116s 59.840% <= 1.303 milliseconds (cumulative count 59840) 116s 76.700% <= 1.407 milliseconds (cumulative count 76700) 116s 89.130% <= 1.503 milliseconds (cumulative count 89130) 116s 95.550% <= 1.607 milliseconds (cumulative count 95550) 116s 98.310% <= 1.703 milliseconds (cumulative count 98310) 116s 99.100% <= 1.807 milliseconds (cumulative count 99100) 116s 99.450% <= 1.903 milliseconds (cumulative count 99450) 116s 99.690% <= 2.007 milliseconds (cumulative count 99690) 116s 99.760% <= 2.103 milliseconds (cumulative count 99760) 116s 99.900% <= 3.103 milliseconds (cumulative count 99900) 116s 100.000% <= 4.103 milliseconds (cumulative count 100000) 116s 116s Summary: 116s throughput summary: 369003.69 requests per second 116s latency summary (msec): 116s avg min p50 p95 p99 max 116s 1.231 0.368 1.239 1.599 1.783 3.575 116s SADD: rps=120320.0 (overall: 395789.5) avg_msec=1.101 (overall: 1.101) ====== SADD ====== 116s 100000 requests completed in 0.24 seconds 116s 50 parallel clients 116s 3 bytes payload 116s keep alive: 1 116s host configuration "save": 3600 1 300 100 60 10000 116s host configuration "appendonly": no 116s multi-thread: no 116s 116s Latency by percentile distribution: 116s 0.000% <= 0.311 milliseconds (cumulative count 10) 116s 50.000% <= 1.015 milliseconds (cumulative count 50260) 116s 75.000% <= 1.183 milliseconds (cumulative count 75440) 116s 87.500% <= 1.287 milliseconds (cumulative count 87790) 116s 93.750% <= 1.391 milliseconds (cumulative count 94110) 116s 96.875% <= 1.479 milliseconds (cumulative count 96900) 116s 98.438% <= 1.711 milliseconds (cumulative count 98460) 116s 99.219% <= 2.559 milliseconds (cumulative count 99220) 116s 99.609% <= 5.207 milliseconds (cumulative count 99610) 116s 99.805% <= 5.655 milliseconds (cumulative count 99810) 116s 99.902% <= 5.887 milliseconds (cumulative count 99910) 116s 99.951% <= 5.999 milliseconds (cumulative count 99960) 116s 99.976% <= 6.039 milliseconds 
(cumulative count 99980) 116s 99.988% <= 6.071 milliseconds (cumulative count 99990) 116s 99.994% <= 6.103 milliseconds (cumulative count 100000) 116s 100.000% <= 6.103 milliseconds (cumulative count 100000) 116s 116s Cumulative distribution of latencies: 116s 0.000% <= 0.103 milliseconds (cumulative count 0) 116s 0.330% <= 0.407 milliseconds (cumulative count 330) 116s 1.150% <= 0.503 milliseconds (cumulative count 1150) 116s 2.200% <= 0.607 milliseconds (cumulative count 2200) 116s 3.380% <= 0.703 milliseconds (cumulative count 3380) 116s 12.750% <= 0.807 milliseconds (cumulative count 12750) 116s 32.640% <= 0.903 milliseconds (cumulative count 32640) 116s 49.140% <= 1.007 milliseconds (cumulative count 49140) 116s 63.280% <= 1.103 milliseconds (cumulative count 63280) 116s 78.730% <= 1.207 milliseconds (cumulative count 78730) 116s 89.280% <= 1.303 milliseconds (cumulative count 89280) 116s 94.750% <= 1.407 milliseconds (cumulative count 94750) 116s 97.320% <= 1.503 milliseconds (cumulative count 97320) 116s 98.050% <= 1.607 milliseconds (cumulative count 98050) 116s 98.430% <= 1.703 milliseconds (cumulative count 98430) 116s 98.650% <= 1.807 milliseconds (cumulative count 98650) 116s 98.760% <= 1.903 milliseconds (cumulative count 98760) 116s 98.860% <= 2.007 milliseconds (cumulative count 98860) 116s 98.920% <= 2.103 milliseconds (cumulative count 98920) 116s 99.490% <= 3.103 milliseconds (cumulative count 99490) 116s 99.500% <= 4.103 milliseconds (cumulative count 99500) 116s 99.560% <= 5.103 milliseconds (cumulative count 99560) 116s 100.000% <= 6.103 milliseconds (cumulative count 100000) 116s 116s Summary: 116s throughput summary: 420168.06 requests per second 116s latency summary (msec): 116s avg min p50 p95 p99 max 116s 1.059 0.304 1.015 1.415 2.239 6.103 116s HSET: rps=122520.0 (overall: 356162.8) avg_msec=1.263 (overall: 1.263) ====== HSET ====== 116s 100000 requests completed in 0.27 seconds 116s 50 parallel clients 116s 3 bytes payload 116s keep 
alive: 1 116s host configuration "save": 3600 1 300 100 60 10000 116s host configuration "appendonly": no 116s multi-thread: no 116s 116s Latency by percentile distribution: 116s 0.000% <= 0.367 milliseconds (cumulative count 10) 116s 50.000% <= 1.247 milliseconds (cumulative count 50380) 116s 75.000% <= 1.415 milliseconds (cumulative count 75650) 116s 87.500% <= 1.519 milliseconds (cumulative count 87770) 116s 93.750% <= 1.607 milliseconds (cumulative count 94040) 116s 96.875% <= 1.687 milliseconds (cumulative count 97140) 116s 98.438% <= 1.759 milliseconds (cumulative count 98540) 116s 99.219% <= 1.863 milliseconds (cumulative count 99220) 116s 99.609% <= 1.959 milliseconds (cumulative count 99620) 116s 99.805% <= 2.055 milliseconds (cumulative count 99820) 116s 99.902% <= 2.135 milliseconds (cumulative count 99910) 116s 99.951% <= 2.183 milliseconds (cumulative count 99960) 116s 99.976% <= 2.223 milliseconds (cumulative count 99980) 116s 99.988% <= 2.255 milliseconds (cumulative count 99990) 116s 99.994% <= 2.271 milliseconds (cumulative count 100000) 116s 100.000% <= 2.271 milliseconds (cumulative count 100000) 116s 116s Cumulative distribution of latencies: 116s 0.000% <= 0.103 milliseconds (cumulative count 0) 116s 0.070% <= 0.407 milliseconds (cumulative count 70) 116s 0.450% <= 0.503 milliseconds (cumulative count 450) 116s 1.120% <= 0.607 milliseconds (cumulative count 1120) 116s 1.680% <= 0.703 milliseconds (cumulative count 1680) 116s 2.520% <= 0.807 milliseconds (cumulative count 2520) 116s 7.550% <= 0.903 milliseconds (cumulative count 7550) 116s 21.790% <= 1.007 milliseconds (cumulative count 21790) 116s 30.230% <= 1.103 milliseconds (cumulative count 30230) 116s 44.940% <= 1.207 milliseconds (cumulative count 44940) 116s 58.300% <= 1.303 milliseconds (cumulative count 58300) 116s 74.460% <= 1.407 milliseconds (cumulative count 74460) 116s 86.100% <= 1.503 milliseconds (cumulative count 86100) 116s 94.040% <= 1.607 milliseconds (cumulative count 
94040) 116s 97.570% <= 1.703 milliseconds (cumulative count 97570) 116s 98.920% <= 1.807 milliseconds (cumulative count 98920) 116s 99.400% <= 1.903 milliseconds (cumulative count 99400) 116s 99.720% <= 2.007 milliseconds (cumulative count 99720) 116s 99.880% <= 2.103 milliseconds (cumulative count 99880) 116s 100.000% <= 3.103 milliseconds (cumulative count 100000) 116s 116s Summary: 116s throughput summary: 364963.53 requests per second 116s latency summary (msec): 116s avg min p50 p95 p99 max 116s 1.237 0.360 1.247 1.631 1.823 2.271 116s SPOP: rps=123760.0 (overall: 515666.7) avg_msec=0.846 (overall: 0.846) ====== SPOP ====== 116s 100000 requests completed in 0.19 seconds 116s 50 parallel clients 116s 3 bytes payload 116s keep alive: 1 116s host configuration "save": 3600 1 300 100 60 10000 116s host configuration "appendonly": no 116s multi-thread: no 116s 116s Latency by percentile distribution: 116s 0.000% <= 0.319 milliseconds (cumulative count 10) 116s 50.000% <= 0.807 milliseconds (cumulative count 51200) 116s 75.000% <= 0.959 milliseconds (cumulative count 75620) 116s 87.500% <= 1.063 milliseconds (cumulative count 88180) 116s 93.750% <= 1.143 milliseconds (cumulative count 93970) 116s 96.875% <= 1.223 milliseconds (cumulative count 97370) 116s 98.438% <= 1.263 milliseconds (cumulative count 98590) 116s 99.219% <= 1.311 milliseconds (cumulative count 99340) 116s 99.609% <= 1.351 milliseconds (cumulative count 99630) 116s 99.805% <= 1.455 milliseconds (cumulative count 99810) 116s 99.902% <= 1.567 milliseconds (cumulative count 99910) 116s 99.951% <= 1.671 milliseconds (cumulative count 99960) 116s 99.976% <= 1.711 milliseconds (cumulative count 99980) 116s 99.988% <= 1.759 milliseconds (cumulative count 99990) 116s 99.994% <= 1.775 milliseconds (cumulative count 100000) 116s 100.000% <= 1.775 milliseconds (cumulative count 100000) 116s 116s Cumulative distribution of latencies: 116s 0.000% <= 0.103 milliseconds (cumulative count 0) 116s 0.330% <= 0.407 
milliseconds (cumulative count 330) 116s 1.080% <= 0.503 milliseconds (cumulative count 1080) 116s 2.120% <= 0.607 milliseconds (cumulative count 2120) 116s 19.060% <= 0.703 milliseconds (cumulative count 19060) 116s 51.200% <= 0.807 milliseconds (cumulative count 51200) 116s 67.490% <= 0.903 milliseconds (cumulative count 67490) 116s 81.910% <= 1.007 milliseconds (cumulative count 81910) 116s 91.990% <= 1.103 milliseconds (cumulative count 91990) 116s 96.390% <= 1.207 milliseconds (cumulative count 96390) 116s 99.200% <= 1.303 milliseconds (cumulative count 99200) 116s 99.750% <= 1.407 milliseconds (cumulative count 99750) 116s 99.870% <= 1.503 milliseconds (cumulative count 99870) 116s 99.930% <= 1.607 milliseconds (cumulative count 99930) 116s 99.970% <= 1.703 milliseconds (cumulative count 99970) 116s 100.000% <= 1.807 milliseconds (cumulative count 100000) 116s 116s Summary: 116s throughput summary: 518134.72 requests per second 116s latency summary (msec): 116s avg min p50 p95 p99 max 116s 0.845 0.312 0.807 1.183 1.295 1.775 117s ZADD: rps=162709.2 (overall: 352069.0) avg_msec=1.287 (overall: 1.287) ====== ZADD ====== 117s 100000 requests completed in 0.28 seconds 117s 50 parallel clients 117s 3 bytes payload 117s keep alive: 1 117s host configuration "save": 3600 1 300 100 60 10000 117s host configuration "appendonly": no 117s multi-thread: no 117s 117s Latency by percentile distribution: 117s 0.000% <= 0.407 milliseconds (cumulative count 10) 117s 50.000% <= 1.295 milliseconds (cumulative count 50910) 117s 75.000% <= 1.455 milliseconds (cumulative count 75970) 117s 87.500% <= 1.551 milliseconds (cumulative count 87930) 117s 93.750% <= 1.631 milliseconds (cumulative count 94070) 117s 96.875% <= 1.695 milliseconds (cumulative count 97150) 117s 98.438% <= 1.751 milliseconds (cumulative count 98490) 117s 99.219% <= 1.807 milliseconds (cumulative count 99230) 117s 99.609% <= 1.879 milliseconds (cumulative count 99640) 117s 99.805% <= 1.951 milliseconds 
(cumulative count 99810) 117s 99.902% <= 2.047 milliseconds (cumulative count 99910) 117s 99.951% <= 2.095 milliseconds (cumulative count 99960) 117s 99.976% <= 2.159 milliseconds (cumulative count 99980) 117s 99.988% <= 2.183 milliseconds (cumulative count 99990) 117s 99.994% <= 2.239 milliseconds (cumulative count 100000) 117s 100.000% <= 2.239 milliseconds (cumulative count 100000) 117s 117s Cumulative distribution of latencies: 117s 0.000% <= 0.103 milliseconds (cumulative count 0) 117s 0.010% <= 0.407 milliseconds (cumulative count 10) 117s 0.170% <= 0.503 milliseconds (cumulative count 170) 117s 0.300% <= 0.607 milliseconds (cumulative count 300) 117s 0.350% <= 0.703 milliseconds (cumulative count 350) 117s 0.620% <= 0.807 milliseconds (cumulative count 620) 117s 3.780% <= 0.903 milliseconds (cumulative count 3780) 117s 18.820% <= 1.007 milliseconds (cumulative count 18820) 117s 25.290% <= 1.103 milliseconds (cumulative count 25290) 117s 38.440% <= 1.207 milliseconds (cumulative count 38440) 117s 52.260% <= 1.303 milliseconds (cumulative count 52260) 117s 68.570% <= 1.407 milliseconds (cumulative count 68570) 117s 82.500% <= 1.503 milliseconds (cumulative count 82500) 117s 92.450% <= 1.607 milliseconds (cumulative count 92450) 117s 97.420% <= 1.703 milliseconds (cumulative count 97420) 117s 99.230% <= 1.807 milliseconds (cumulative count 99230) 117s 99.690% <= 1.903 milliseconds (cumulative count 99690) 117s 99.860% <= 2.007 milliseconds (cumulative count 99860) 117s 99.960% <= 2.103 milliseconds (cumulative count 99960) 117s 100.000% <= 3.103 milliseconds (cumulative count 100000) 117s 117s Summary: 117s throughput summary: 357142.84 requests per second 117s latency summary (msec): 117s avg min p50 p95 p99 max 117s 1.276 0.400 1.295 1.655 1.783 2.239 117s ZPOPMIN: rps=172120.0 (overall: 512261.9) avg_msec=0.854 (overall: 0.854) ====== ZPOPMIN ====== 117s 100000 requests completed in 0.20 seconds 117s 50 parallel clients 117s 3 bytes payload 117s keep alive: 
1 117s host configuration "save": 3600 1 300 100 60 10000 117s host configuration "appendonly": no 117s multi-thread: no 117s 117s Latency by percentile distribution: 117s 0.000% <= 0.303 milliseconds (cumulative count 10) 117s 50.000% <= 0.815 milliseconds (cumulative count 51040) 117s 75.000% <= 0.967 milliseconds (cumulative count 75060) 117s 87.500% <= 1.079 milliseconds (cumulative count 88310) 117s 93.750% <= 1.175 milliseconds (cumulative count 93990) 117s 96.875% <= 1.231 milliseconds (cumulative count 96890) 117s 98.438% <= 1.295 milliseconds (cumulative count 98550) 117s 99.219% <= 1.463 milliseconds (cumulative count 99230) 117s 99.609% <= 3.567 milliseconds (cumulative count 99610) 117s 99.805% <= 3.839 milliseconds (cumulative count 99810) 117s 99.902% <= 4.079 milliseconds (cumulative count 99910) 117s 99.951% <= 4.175 milliseconds (cumulative count 99960) 117s 99.976% <= 4.223 milliseconds (cumulative count 99980) 117s 99.988% <= 4.247 milliseconds (cumulative count 99990) 117s 99.994% <= 4.271 milliseconds (cumulative count 100000) 117s 100.000% <= 4.271 milliseconds (cumulative count 100000) 117s 117s Cumulative distribution of latencies: 117s 0.000% <= 0.103 milliseconds (cumulative count 0) 117s 0.010% <= 0.303 milliseconds (cumulative count 10) 117s 0.310% <= 0.407 milliseconds (cumulative count 310) 117s 0.920% <= 0.503 milliseconds (cumulative count 920) 117s 2.410% <= 0.607 milliseconds (cumulative count 2410) 117s 18.810% <= 0.703 milliseconds (cumulative count 18810) 117s 49.450% <= 0.807 milliseconds (cumulative count 49450) 117s 65.870% <= 0.903 milliseconds (cumulative count 65870) 117s 80.330% <= 1.007 milliseconds (cumulative count 80330) 117s 90.830% <= 1.103 milliseconds (cumulative count 90830) 117s 95.460% <= 1.207 milliseconds (cumulative count 95460) 117s 98.650% <= 1.303 milliseconds (cumulative count 98650) 117s 99.130% <= 1.407 milliseconds (cumulative count 99130) 117s 99.260% <= 1.503 milliseconds (cumulative count 99260) 
117s 99.330% <= 1.607 milliseconds (cumulative count 99330) 117s 99.400% <= 1.703 milliseconds (cumulative count 99400) 117s 99.460% <= 1.807 milliseconds (cumulative count 99460) 117s 99.500% <= 1.903 milliseconds (cumulative count 99500) 117s 99.930% <= 4.103 milliseconds (cumulative count 99930) 117s 100.000% <= 5.103 milliseconds (cumulative count 100000) 117s 117s Summary: 117s throughput summary: 507614.22 requests per second 117s latency summary (msec): 117s avg min p50 p95 p99 max 117s 0.866 0.296 0.815 1.207 1.367 4.271 117s LPUSH (needed to benchmark LRANGE): rps=198080.0 (overall: 366814.8) avg_msec=1.235 (overall: 1.235) ====== LPUSH (needed to benchmark LRANGE) ====== 117s 100000 requests completed in 0.27 seconds 117s 50 parallel clients 117s 3 bytes payload 117s keep alive: 1 117s host configuration "save": 3600 1 300 100 60 10000 117s host configuration "appendonly": no 117s multi-thread: no 117s 117s Latency by percentile distribution: 117s 0.000% <= 0.407 milliseconds (cumulative count 10) 117s 50.000% <= 1.207 milliseconds (cumulative count 50640) 117s 75.000% <= 1.367 milliseconds (cumulative count 75020) 117s 87.500% <= 1.471 milliseconds (cumulative count 87960) 117s 93.750% <= 1.551 milliseconds (cumulative count 94170) 117s 96.875% <= 1.623 milliseconds (cumulative count 97060) 117s 98.438% <= 1.695 milliseconds (cumulative count 98610) 117s 99.219% <= 1.807 milliseconds (cumulative count 99230) 117s 99.609% <= 4.767 milliseconds (cumulative count 99610) 117s 99.805% <= 5.255 milliseconds (cumulative count 99810) 117s 99.902% <= 5.391 milliseconds (cumulative count 99910) 117s 99.951% <= 5.479 milliseconds (cumulative count 99960) 117s 99.976% <= 5.535 milliseconds (cumulative count 99980) 117s 99.988% <= 5.567 milliseconds (cumulative count 99990) 117s 99.994% <= 5.623 milliseconds (cumulative count 100000) 117s 100.000% <= 5.623 milliseconds (cumulative count 100000) 117s 117s Cumulative distribution of latencies: 117s 0.000% <= 0.103 
milliseconds (cumulative count 0) 117s 0.010% <= 0.407 milliseconds (cumulative count 10) 117s 0.120% <= 0.503 milliseconds (cumulative count 120) 117s 0.310% <= 0.607 milliseconds (cumulative count 310) 117s 0.450% <= 0.703 milliseconds (cumulative count 450) 117s 0.890% <= 0.807 milliseconds (cumulative count 890) 117s 10.080% <= 0.903 milliseconds (cumulative count 10080) 117s 24.220% <= 1.007 milliseconds (cumulative count 24220) 117s 35.010% <= 1.103 milliseconds (cumulative count 35010) 117s 50.640% <= 1.207 milliseconds (cumulative count 50640) 117s 65.440% <= 1.303 milliseconds (cumulative count 65440) 117s 80.590% <= 1.407 milliseconds (cumulative count 80590) 117s 90.880% <= 1.503 milliseconds (cumulative count 90880) 117s 96.580% <= 1.607 milliseconds (cumulative count 96580) 117s 98.740% <= 1.703 milliseconds (cumulative count 98740) 117s 99.230% <= 1.807 milliseconds (cumulative count 99230) 117s 99.300% <= 1.903 milliseconds (cumulative count 99300) 117s 99.320% <= 2.103 milliseconds (cumulative count 99320) 117s 99.470% <= 3.103 milliseconds (cumulative count 99470) 117s 99.500% <= 4.103 milliseconds (cumulative count 99500) 117s 99.780% <= 5.103 milliseconds (cumulative count 99780) 117s 100.000% <= 6.103 milliseconds (cumulative count 100000) 117s 117s Summary: 117s throughput summary: 371747.22 requests per second 117s latency summary (msec): 117s avg min p50 p95 p99 max 117s 1.220 0.400 1.207 1.567 1.735 5.623 118s LRANGE_100 (first 100 elements): rps=47280.0 (overall: 103684.2) avg_msec=3.801 (overall: 3.801) LRANGE_100 (first 100 elements): rps=104502.0 (overall: 104246.6) avg_msec=3.805 (overall: 3.804) LRANGE_100 (first 100 elements): rps=104302.8 (overall: 104269.5) avg_msec=3.851 (overall: 3.823) LRANGE_100 (first 100 elements): rps=104223.1 (overall: 104256.1) avg_msec=3.797 (overall: 3.816) ====== LRANGE_100 (first 100 elements) ====== 118s 100000 requests completed in 0.96 seconds 118s 50 parallel clients 118s 3 bytes payload 118s keep 
alive: 1 118s host configuration "save": 3600 1 300 100 60 10000 118s host configuration "appendonly": no 118s multi-thread: no 118s 118s Latency by percentile distribution: 118s 0.000% <= 1.111 milliseconds (cumulative count 10) 118s 50.000% <= 3.759 milliseconds (cumulative count 50130) 118s 75.000% <= 4.287 milliseconds (cumulative count 75320) 118s 87.500% <= 4.567 milliseconds (cumulative count 87510) 118s 93.750% <= 5.135 milliseconds (cumulative count 93790) 118s 96.875% <= 5.639 milliseconds (cumulative count 96940) 118s 98.438% <= 5.807 milliseconds (cumulative count 98440) 118s 99.219% <= 6.055 milliseconds (cumulative count 99230) 118s 99.609% <= 6.383 milliseconds (cumulative count 99610) 118s 99.805% <= 6.775 milliseconds (cumulative count 99820) 118s 99.902% <= 9.895 milliseconds (cumulative count 99910) 118s 99.951% <= 10.295 milliseconds (cumulative count 99960) 118s 99.976% <= 10.463 milliseconds (cumulative count 99980) 118s 99.988% <= 10.567 milliseconds (cumulative count 99990) 118s 99.994% <= 10.647 milliseconds (cumulative count 100000) 118s 100.000% <= 10.647 milliseconds (cumulative count 100000) 118s 118s Cumulative distribution of latencies: 118s 0.000% <= 0.103 milliseconds (cumulative count 0) 118s 0.020% <= 1.207 milliseconds (cumulative count 20) 118s 0.030% <= 1.303 milliseconds (cumulative count 30) 118s 0.040% <= 1.407 milliseconds (cumulative count 40) 118s 0.050% <= 1.503 milliseconds (cumulative count 50) 118s 0.060% <= 1.607 milliseconds (cumulative count 60) 118s 0.100% <= 1.703 milliseconds (cumulative count 100) 118s 0.230% <= 1.807 milliseconds (cumulative count 230) 118s 0.410% <= 1.903 milliseconds (cumulative count 410) 118s 0.640% <= 2.007 milliseconds (cumulative count 640) 118s 0.880% <= 2.103 milliseconds (cumulative count 880) 118s 18.390% <= 3.103 milliseconds (cumulative count 18390) 118s 66.700% <= 4.103 milliseconds (cumulative count 66700) 118s 93.460% <= 5.103 milliseconds (cumulative count 93460) 118s 99.290% 
<= 6.103 milliseconds (cumulative count 99290) 118s 99.880% <= 7.103 milliseconds (cumulative count 99880) 118s 99.900% <= 8.103 milliseconds (cumulative count 99900) 118s 99.930% <= 10.103 milliseconds (cumulative count 99930) 118s 100.000% <= 11.103 milliseconds (cumulative count 100000) 118s 118s Summary: 118s throughput summary: 104275.29 requests per second 118s latency summary (msec): 118s avg min p50 p95 p99 max 118s 3.814 1.104 3.759 5.295 5.935 10.647 122s LRANGE_300 (first 300 elements): rps=14189.7 (overall: 22578.6) avg_msec=12.555 (overall: 12.555) LRANGE_300 (first 300 elements): rps=25553.8 (overall: 24400.0) avg_msec=11.481 (overall: 11.866) LRANGE_300 (first 300 elements): rps=29306.8 (overall: 26263.2) avg_msec=8.492 (overall: 10.437) LRANGE_300 (first 300 elements): rps=29697.2 (overall: 27208.3) avg_msec=8.301 (overall: 9.795) LRANGE_300 (first 300 elements): rps=24234.4 (overall: 26556.5) avg_msec=11.362 (overall: 10.109) LRANGE_300 (first 300 elements): rps=27333.3 (overall: 26694.4) avg_msec=10.320 (overall: 10.147) LRANGE_300 (first 300 elements): rps=21472.9 (overall: 25891.5) avg_msec=13.687 (overall: 10.598) LRANGE_300 (first 300 elements): rps=26172.5 (overall: 25928.6) avg_msec=10.798 (overall: 10.625) LRANGE_300 (first 300 elements): rps=30344.0 (overall: 26434.3) avg_msec=8.181 (overall: 10.304) LRANGE_300 (first 300 elements): rps=22685.0 (overall: 26043.5) avg_msec=13.204 (overall: 10.567) LRANGE_300 (first 300 elements): rps=25595.2 (overall: 26001.5) avg_msec=10.996 (overall: 10.607) LRANGE_300 (first 300 elements): rps=29865.1 (overall: 26332.5) avg_msec=8.192 (overall: 10.372) LRANGE_300 (first 300 elements): rps=27779.5 (overall: 26447.6) avg_msec=9.120 (overall: 10.267) LRANGE_300 (first 300 elements): rps=29291.3 (overall: 26657.0) avg_msec=9.083 (overall: 10.172) LRANGE_300 (first 300 elements): rps=24907.0 (overall: 26535.2) avg_msec=11.236 (overall: 10.241) ====== LRANGE_300 (first 300 elements) ====== 122s 100000 requests 
completed in 3.78 seconds 122s 50 parallel clients 122s 3 bytes payload 122s keep alive: 1 122s host configuration "save": 3600 1 300 100 60 10000 122s host configuration "appendonly": no 122s multi-thread: no 122s 122s Latency by percentile distribution: 122s 0.000% <= 0.479 milliseconds (cumulative count 10) 122s 50.000% <= 9.103 milliseconds (cumulative count 50190) 122s 75.000% <= 12.295 milliseconds (cumulative count 75010) 122s 87.500% <= 16.223 milliseconds (cumulative count 87540) 122s 93.750% <= 18.847 milliseconds (cumulative count 93760) 122s 96.875% <= 20.543 milliseconds (cumulative count 96900) 122s 98.438% <= 21.903 milliseconds (cumulative count 98450) 122s 99.219% <= 22.911 milliseconds (cumulative count 99230) 122s 99.609% <= 24.111 milliseconds (cumulative count 99610) 122s 99.805% <= 25.311 milliseconds (cumulative count 99810) 122s 99.902% <= 26.463 milliseconds (cumulative count 99910) 122s 99.951% <= 27.247 milliseconds (cumulative count 99960) 122s 99.976% <= 27.823 milliseconds (cumulative count 99980) 122s 99.988% <= 28.031 milliseconds (cumulative count 99990) 122s 99.994% <= 28.239 milliseconds (cumulative count 100000) 122s 100.000% <= 28.239 milliseconds (cumulative count 100000) 122s 122s Cumulative distribution of latencies: 122s 0.000% <= 0.103 milliseconds (cumulative count 0) 122s 0.010% <= 0.503 milliseconds (cumulative count 10) 122s 0.060% <= 1.407 milliseconds (cumulative count 60) 122s 0.100% <= 1.503 milliseconds (cumulative count 100) 122s 0.160% <= 1.607 milliseconds (cumulative count 160) 122s 0.220% <= 1.703 milliseconds (cumulative count 220) 122s 0.370% <= 1.807 milliseconds (cumulative count 370) 122s 0.420% <= 1.903 milliseconds (cumulative count 420) 122s 0.570% <= 2.007 milliseconds (cumulative count 570) 122s 0.720% <= 2.103 milliseconds (cumulative count 720) 122s 2.200% <= 3.103 milliseconds (cumulative count 2200) 122s 3.220% <= 4.103 milliseconds (cumulative count 3220) 122s 5.630% <= 5.103 milliseconds 
(cumulative count 5630) 122s 11.420% <= 6.103 milliseconds (cumulative count 11420) 122s 21.120% <= 7.103 milliseconds (cumulative count 21120) 122s 35.350% <= 8.103 milliseconds (cumulative count 35350) 122s 50.190% <= 9.103 milliseconds (cumulative count 50190) 122s 61.250% <= 10.103 milliseconds (cumulative count 61250) 122s 68.510% <= 11.103 milliseconds (cumulative count 68510) 122s 74.140% <= 12.103 milliseconds (cumulative count 74140) 122s 78.170% <= 13.103 milliseconds (cumulative count 78170) 122s 81.370% <= 14.103 milliseconds (cumulative count 81370) 122s 84.430% <= 15.103 milliseconds (cumulative count 84430) 122s 87.210% <= 16.103 milliseconds (cumulative count 87210) 122s 89.760% <= 17.103 milliseconds (cumulative count 89760) 122s 92.190% <= 18.111 milliseconds (cumulative count 92190) 122s 94.240% <= 19.103 milliseconds (cumulative count 94240) 122s 96.250% <= 20.111 milliseconds (cumulative count 96250) 122s 97.610% <= 21.103 milliseconds (cumulative count 97610) 122s 98.670% <= 22.111 milliseconds (cumulative count 98670) 122s 99.310% <= 23.103 milliseconds (cumulative count 99310) 122s 99.610% <= 24.111 milliseconds (cumulative count 99610) 122s 99.780% <= 25.103 milliseconds (cumulative count 99780) 122s 99.880% <= 26.111 milliseconds (cumulative count 99880) 122s 99.950% <= 27.103 milliseconds (cumulative count 99950) 122s 99.990% <= 28.111 milliseconds (cumulative count 99990) 122s 100.000% <= 29.103 milliseconds (cumulative count 100000) 122s 122s Summary: 122s throughput summary: 26483.05 requests per second 122s latency summary (msec): 122s avg min p50 p95 p99 max 122s 10.282 0.472 9.103 19.455 22.495 28.239 130s LRANGE_500 (first 500 elements): rps=8434.3 (overall: 11761.1) avg_msec=21.487 (overall: 21.487) LRANGE_500 (first 500 elements): rps=11303.1 (overall: 11493.1) avg_msec=24.387 (overall: 23.156) LRANGE_500 (first 500 elements): rps=11721.1 (overall: 11576.6) avg_msec=22.603 (overall: 22.951) LRANGE_500 (first 500 elements): 
rps=11278.9 (overall: 11496.8) avg_msec=23.356 (overall: 23.058) LRANGE_500 (first 500 elements): rps=12254.0 (overall: 11657.4) avg_msec=22.462 (overall: 22.925) LRANGE_500 (first 500 elements): rps=12644.0 (overall: 11828.9) avg_msec=22.593 (overall: 22.863) LRANGE_500 (first 500 elements): rps=11354.6 (overall: 11758.4) avg_msec=22.806 (overall: 22.855) LRANGE_500 (first 500 elements): rps=11035.7 (overall: 11664.6) avg_msec=24.657 (overall: 23.076) LRANGE_500 (first 500 elements): rps=12621.5 (overall: 11774.2) avg_msec=22.657 (overall: 23.025) LRANGE_500 (first 500 elements): rps=11278.9 (overall: 11723.3) avg_msec=24.291 (overall: 23.150) LRANGE_500 (first 500 elements): rps=10640.0 (overall: 11622.7) avg_msec=25.600 (overall: 23.358) LRANGE_500 (first 500 elements): rps=14023.1 (overall: 11834.1) avg_msec=18.968 (overall: 22.900) LRANGE_500 (first 500 elements): rps=13936.0 (overall: 11998.1) avg_msec=19.145 (overall: 22.560) LRANGE_500 (first 500 elements): rps=12908.7 (overall: 12064.5) avg_msec=20.551 (overall: 22.403) LRANGE_500 (first 500 elements): rps=12247.1 (overall: 12077.1) avg_msec=21.469 (overall: 22.338) LRANGE_500 (first 500 elements): rps=11054.3 (overall: 12010.6) avg_msec=25.409 (overall: 22.522) LRANGE_500 (first 500 elements): rps=11043.8 (overall: 11953.1) avg_msec=24.893 (overall: 22.652) LRANGE_500 (first 500 elements): rps=11840.0 (overall: 11946.7) avg_msec=22.054 (overall: 22.619) LRANGE_500 (first 500 elements): rps=11792.8 (overall: 11938.6) avg_msec=22.196 (overall: 22.597) LRANGE_500 (first 500 elements): rps=12734.1 (overall: 11978.9) avg_msec=22.318 (overall: 22.582) LRANGE_500 (first 500 elements): rps=14274.5 (overall: 12090.9) avg_msec=17.774 (overall: 22.305) LRANGE_500 (first 500 elements): rps=15090.2 (overall: 12230.4) avg_msec=16.531 (overall: 21.973) LRANGE_500 (first 500 elements): rps=15828.7 (overall: 12387.9) avg_msec=15.392 (overall: 21.605) LRANGE_500 (first 500 elements): rps=16677.3 (overall: 12567.8) 
avg_msec=14.677 (overall: 21.220) LRANGE_500 (first 500 elements): rps=11370.5 (overall: 12519.6) avg_msec=22.609 (overall: 21.270) LRANGE_500 (first 500 elements): rps=11619.0 (overall: 12484.7) avg_msec=22.905 (overall: 21.330) LRANGE_500 (first 500 elements): rps=12031.0 (overall: 12467.3) avg_msec=23.043 (overall: 21.393) LRANGE_500 (first 500 elements): rps=11544.0 (overall: 12434.3) avg_msec=23.008 (overall: 21.446) LRANGE_500 (first 500 elements): rps=11163.3 (overall: 12390.3) avg_msec=22.881 (overall: 21.491) LRANGE_500 (first 500 elements): rps=11235.1 (overall: 12351.6) avg_msec=23.431 (overall: 21.550) LRANGE_500 (first 500 elements): rps=11280.2 (overall: 12316.1) avg_msec=22.745 (overall: 21.587) LRANGE_500 (first 500 elements): rps=12654.8 (overall: 12326.8) avg_msec=21.496 (overall: 21.584) ====== LRANGE_500 (first 500 elements) ====== 130s 100000 requests completed in 8.12 seconds 130s 50 parallel clients 130s 3 bytes payload 130s keep alive: 1 130s host configuration "save": 3600 1 300 100 60 10000 130s host configuration "appendonly": no 130s multi-thread: no 130s 130s Latency by percentile distribution: 130s 0.000% <= 1.319 milliseconds (cumulative count 10) 130s 50.000% <= 22.783 milliseconds (cumulative count 50050) 130s 75.000% <= 26.591 milliseconds (cumulative count 75110) 130s 87.500% <= 30.399 milliseconds (cumulative count 87530) 130s 93.750% <= 34.047 milliseconds (cumulative count 93760) 130s 96.875% <= 35.775 milliseconds (cumulative count 96890) 130s 98.438% <= 36.863 milliseconds (cumulative count 98480) 130s 99.219% <= 37.535 milliseconds (cumulative count 99240) 130s 99.609% <= 38.047 milliseconds (cumulative count 99630) 130s 99.805% <= 38.463 milliseconds (cumulative count 99810) 130s 99.902% <= 38.687 milliseconds (cumulative count 99910) 130s 99.951% <= 39.071 milliseconds (cumulative count 99960) 130s 99.976% <= 40.063 milliseconds (cumulative count 99980) 130s 99.988% <= 40.287 milliseconds (cumulative count 99990) 130s 
130s 99.994% <= 40.575 milliseconds (cumulative count 100000)
130s 100.000% <= 40.575 milliseconds (cumulative count 100000)
130s
130s Cumulative distribution of latencies:
130s 0.000% <= 0.103 milliseconds (cumulative count 0)
130s 0.010% <= 1.407 milliseconds (cumulative count 10)
130s 0.020% <= 1.503 milliseconds (cumulative count 20)
130s 0.030% <= 1.607 milliseconds (cumulative count 30)
130s 0.050% <= 1.807 milliseconds (cumulative count 50)
130s 0.070% <= 1.903 milliseconds (cumulative count 70)
130s 0.120% <= 2.007 milliseconds (cumulative count 120)
130s 0.170% <= 2.103 milliseconds (cumulative count 170)
130s 1.780% <= 3.103 milliseconds (cumulative count 1780)
130s 4.040% <= 4.103 milliseconds (cumulative count 4040)
130s 6.500% <= 5.103 milliseconds (cumulative count 6500)
130s 8.560% <= 6.103 milliseconds (cumulative count 8560)
130s 9.890% <= 7.103 milliseconds (cumulative count 9890)
130s 10.730% <= 8.103 milliseconds (cumulative count 10730)
130s 11.400% <= 9.103 milliseconds (cumulative count 11400)
130s 12.320% <= 10.103 milliseconds (cumulative count 12320)
130s 13.510% <= 11.103 milliseconds (cumulative count 13510)
130s 14.850% <= 12.103 milliseconds (cumulative count 14850)
130s 16.460% <= 13.103 milliseconds (cumulative count 16460)
130s 17.990% <= 14.103 milliseconds (cumulative count 17990)
130s 19.550% <= 15.103 milliseconds (cumulative count 19550)
130s 21.150% <= 16.103 milliseconds (cumulative count 21150)
130s 23.120% <= 17.103 milliseconds (cumulative count 23120)
130s 25.690% <= 18.111 milliseconds (cumulative count 25690)
130s 28.700% <= 19.103 milliseconds (cumulative count 28700)
130s 32.870% <= 20.111 milliseconds (cumulative count 32870)
130s 38.760% <= 21.103 milliseconds (cumulative count 38760)
130s 45.400% <= 22.111 milliseconds (cumulative count 45400)
130s 52.220% <= 23.103 milliseconds (cumulative count 52220)
130s 59.210% <= 24.111 milliseconds (cumulative count 59210)
130s 66.050% <= 25.103 milliseconds (cumulative count 66050)
130s 72.310% <= 26.111 milliseconds (cumulative count 72310)
130s 77.910% <= 27.103 milliseconds (cumulative count 77910)
130s 82.200% <= 28.111 milliseconds (cumulative count 82200)
130s 84.820% <= 29.103 milliseconds (cumulative count 84820)
130s 86.970% <= 30.111 milliseconds (cumulative count 86970)
130s 88.720% <= 31.103 milliseconds (cumulative count 88720)
130s 90.400% <= 32.111 milliseconds (cumulative count 90400)
130s 92.080% <= 33.119 milliseconds (cumulative count 92080)
130s 93.850% <= 34.111 milliseconds (cumulative count 93850)
130s 95.740% <= 35.103 milliseconds (cumulative count 95740)
130s 97.370% <= 36.127 milliseconds (cumulative count 97370)
130s 98.800% <= 37.119 milliseconds (cumulative count 98800)
130s 99.650% <= 38.111 milliseconds (cumulative count 99650)
130s 99.960% <= 39.103 milliseconds (cumulative count 99960)
130s 99.980% <= 40.127 milliseconds (cumulative count 99980)
130s 100.000% <= 41.119 milliseconds (cumulative count 100000)
130s
130s Summary:
130s throughput summary: 12312.24 requests per second
130s latency summary (msec):
130s avg min p50 p95 p99 max
130s 21.591 1.312 22.783 34.719 37.279 40.575
[progress ticks elided]
====== LRANGE_600 (first 600 elements) ======
140s 100000 requests completed in 10.03 seconds
140s 50 parallel clients
140s 3 bytes payload
140s keep alive: 1
140s host configuration "save": 3600 1 300 100 60 10000
140s host configuration "appendonly": no
140s multi-thread: no
140s
140s Latency by percentile distribution:
140s 0.000% <= 1.127 milliseconds (cumulative count 10)
140s 50.000% <= 27.295 milliseconds (cumulative count 50030)
140s 75.000% <= 31.871 milliseconds (cumulative count 75040)
140s 87.500% <= 35.999 milliseconds (cumulative count 87510)
140s 93.750% <= 38.335 milliseconds (cumulative count 93830)
140s 96.875% <= 39.359 milliseconds (cumulative count 96920)
140s 98.438% <= 40.127 milliseconds (cumulative count 98450)
140s 99.219% <= 40.895 milliseconds (cumulative count 99240)
140s 99.609% <= 41.759 milliseconds (cumulative count 99610)
140s 99.805% <= 42.495 milliseconds (cumulative count 99820)
140s 99.902% <= 42.847 milliseconds (cumulative count 99910)
140s 99.951% <= 43.039 milliseconds (cumulative count 99960)
140s 99.976% <= 43.423 milliseconds (cumulative count 99980)
140s 99.988% <= 43.807 milliseconds (cumulative count 99990)
140s 99.994% <= 44.031 milliseconds (cumulative count 100000)
140s 100.000% <= 44.031 milliseconds (cumulative count 100000)
140s
140s Cumulative distribution of latencies:
140s 0.000% <= 0.103 milliseconds (cumulative count 0)
140s 0.010% <= 1.207 milliseconds (cumulative count 10)
140s 0.020% <= 1.407 milliseconds (cumulative count 20)
140s 0.120% <= 1.703 milliseconds (cumulative count 120)
140s 0.160% <= 1.807 milliseconds (cumulative count 160)
140s 0.210% <= 1.903 milliseconds (cumulative count 210)
140s 0.320% <= 2.007 milliseconds (cumulative count 320)
140s 0.390% <= 2.103 milliseconds (cumulative count 390)
140s 1.980% <= 3.103 milliseconds (cumulative count 1980)
140s 3.700% <= 4.103 milliseconds (cumulative count 3700)
140s 6.260% <= 5.103 milliseconds (cumulative count 6260)
140s 8.460% <= 6.103 milliseconds (cumulative count 8460)
140s 9.530% <= 7.103 milliseconds (cumulative count 9530)
140s 10.200% <= 8.103 milliseconds (cumulative count 10200)
140s 10.770% <= 9.103 milliseconds (cumulative count 10770)
140s 11.280% <= 10.103 milliseconds (cumulative count 11280)
140s 11.690% <= 11.103 milliseconds (cumulative count 11690)
140s 12.300% <= 12.103 milliseconds (cumulative count 12300)
140s 12.950% <= 13.103 milliseconds (cumulative count 12950)
140s 13.690% <= 14.103 milliseconds (cumulative count 13690)
140s 14.600% <= 15.103 milliseconds (cumulative count 14600)
140s 15.570% <= 16.103 milliseconds (cumulative count 15570)
140s 16.690% <= 17.103 milliseconds (cumulative count 16690)
140s 17.970% <= 18.111 milliseconds (cumulative count 17970)
140s 18.980% <= 19.103 milliseconds (cumulative count 18980)
140s 20.080% <= 20.111 milliseconds (cumulative count 20080)
140s 21.160% <= 21.103 milliseconds (cumulative count 21160)
140s 22.790% <= 22.111 milliseconds (cumulative count 22790)
140s 25.210% <= 23.103 milliseconds (cumulative count 25210)
140s 29.310% <= 24.111 milliseconds (cumulative count 29310)
140s 35.330% <= 25.103 milliseconds (cumulative count 35330)
140s 42.110% <= 26.111 milliseconds (cumulative count 42110)
140s 48.700% <= 27.103 milliseconds (cumulative count 48700)
140s 55.120% <= 28.111 milliseconds (cumulative count 55120)
140s 60.990% <= 29.103 milliseconds (cumulative count 60990)
140s 66.850% <= 30.111 milliseconds (cumulative count 66850)
140s 71.910% <= 31.103 milliseconds (cumulative count 71910)
140s 75.930% <= 32.111 milliseconds (cumulative count 75930)
140s 79.310% <= 33.119 milliseconds (cumulative count 79310)
140s 82.300% <= 34.111 milliseconds (cumulative count 82300)
140s 85.060% <= 35.103 milliseconds (cumulative count 85060)
140s 87.790% <= 36.127 milliseconds (cumulative count 87790)
140s 90.270% <= 37.119 milliseconds (cumulative count 90270)
140s 93.150% <= 38.111 milliseconds (cumulative count 93150)
140s 96.200% <= 39.103 milliseconds (cumulative count 96200)
140s 98.450% <= 40.127 milliseconds (cumulative count 98450)
140s 99.360% <= 41.119 milliseconds (cumulative count 99360)
140s 99.720% <= 42.111 milliseconds (cumulative count 99720)
140s 99.960% <= 43.103 milliseconds (cumulative count 99960)
140s 100.000% <= 44.127 milliseconds (cumulative count 100000)
140s
140s Summary:
140s throughput summary: 9973.07 requests per second
140s latency summary (msec):
140s avg min p50 p95 p99 max
140s 25.767 1.120 27.295 38.719 40.575 44.031
[progress ticks elided]
====== MSET (10 keys) ======
141s 100000 requests completed in 0.78 seconds
141s 50 parallel clients
141s 3 bytes payload
141s keep alive: 1
141s host
configuration "save": 3600 1 300 100 60 10000
141s host configuration "appendonly": no
141s multi-thread: no
141s
141s Latency by percentile distribution:
141s 0.000% <= 0.895 milliseconds (cumulative count 10)
141s 50.000% <= 4.039 milliseconds (cumulative count 50730)
141s 75.000% <= 4.247 milliseconds (cumulative count 75110)
141s 87.500% <= 4.391 milliseconds (cumulative count 88000)
141s 93.750% <= 4.527 milliseconds (cumulative count 93840)
141s 96.875% <= 4.743 milliseconds (cumulative count 96890)
141s 98.438% <= 5.471 milliseconds (cumulative count 98440)
141s 99.219% <= 6.263 milliseconds (cumulative count 99240)
141s 99.609% <= 6.543 milliseconds (cumulative count 99610)
141s 99.805% <= 8.687 milliseconds (cumulative count 99810)
141s 99.902% <= 8.871 milliseconds (cumulative count 99910)
141s 99.951% <= 9.055 milliseconds (cumulative count 99960)
141s 99.976% <= 9.159 milliseconds (cumulative count 99980)
141s 99.988% <= 9.215 milliseconds (cumulative count 99990)
141s 99.994% <= 9.247 milliseconds (cumulative count 100000)
141s 100.000% <= 9.247 milliseconds (cumulative count 100000)
141s
141s Cumulative distribution of latencies:
141s 0.000% <= 0.103 milliseconds (cumulative count 0)
141s 0.010% <= 0.903 milliseconds (cumulative count 10)
141s 0.070% <= 1.007 milliseconds (cumulative count 70)
141s 0.110% <= 1.103 milliseconds (cumulative count 110)
141s 0.120% <= 1.207 milliseconds (cumulative count 120)
141s 0.130% <= 1.407 milliseconds (cumulative count 130)
141s 0.240% <= 1.503 milliseconds (cumulative count 240)
141s 0.290% <= 1.607 milliseconds (cumulative count 290)
141s 0.320% <= 1.807 milliseconds (cumulative count 320)
141s 0.430% <= 1.903 milliseconds (cumulative count 430)
141s 0.560% <= 2.007 milliseconds (cumulative count 560)
141s 0.950% <= 2.103 milliseconds (cumulative count 950)
141s 22.190% <= 3.103 milliseconds (cumulative count 22190)
141s 58.110% <= 4.103 milliseconds (cumulative count 58110)
141s 98.090% <= 5.103 milliseconds (cumulative count 98090)
141s 99.070% <= 6.103 milliseconds (cumulative count 99070)
141s 99.720% <= 7.103 milliseconds (cumulative count 99720)
141s 99.960% <= 9.103 milliseconds (cumulative count 99960)
141s 100.000% <= 10.103 milliseconds (cumulative count 100000)
141s
141s Summary:
141s throughput summary: 127877.23 requests per second
141s latency summary (msec):
141s avg min p50 p95 p99 max
141s 3.769 0.888 4.039 4.583 6.031 9.247
[progress ticks elided]
====== XADD ======
141s 100000 requests completed in 0.40 seconds
141s 50 parallel clients
141s 3 bytes payload
141s keep alive: 1
141s host configuration "save": 3600 1 300 100 60 10000
141s host configuration "appendonly": no
141s multi-thread: no
141s
141s Latency by percentile distribution:
141s 0.000% <= 0.719 milliseconds (cumulative count 10)
141s 50.000% <= 1.911 milliseconds (cumulative count 50180)
141s 75.000% <= 2.103 milliseconds (cumulative count 75220)
141s 87.500% <= 2.223 milliseconds (cumulative count 87880)
141s 93.750% <= 2.311 milliseconds (cumulative count 93830)
141s 96.875% <= 2.399 milliseconds (cumulative count 96940)
141s 98.438% <= 2.503 milliseconds (cumulative count 98490)
141s 99.219% <= 2.647 milliseconds (cumulative count 99250)
141s 99.609% <= 2.775 milliseconds (cumulative count 99610)
141s 99.805% <= 3.135 milliseconds (cumulative count 99810)
141s 99.902% <= 4.175 milliseconds (cumulative count 99910)
141s 99.951% <= 4.367 milliseconds (cumulative count 99960)
141s 99.976% <= 4.447 milliseconds (cumulative count 99980)
141s 99.988% <= 4.471 milliseconds (cumulative count 99990)
141s 99.994% <= 4.519 milliseconds (cumulative count 100000)
141s 100.000% <= 4.519 milliseconds (cumulative count 100000)
141s
141s Cumulative distribution of latencies:
141s 0.000% <= 0.103 milliseconds (cumulative count 0)
141s 0.060% <= 0.807 milliseconds (cumulative count 60)
141s 0.090% <= 0.903 milliseconds (cumulative count 90)
141s 0.160%
<= 1.007 milliseconds (cumulative count 160)
141s 0.490% <= 1.103 milliseconds (cumulative count 490)
141s 4.050% <= 1.207 milliseconds (cumulative count 4050)
141s 13.650% <= 1.303 milliseconds (cumulative count 13650)
141s 20.120% <= 1.407 milliseconds (cumulative count 20120)
141s 21.180% <= 1.503 milliseconds (cumulative count 21180)
141s 22.210% <= 1.607 milliseconds (cumulative count 22210)
141s 25.930% <= 1.703 milliseconds (cumulative count 25930)
141s 35.780% <= 1.807 milliseconds (cumulative count 35780)
141s 49.110% <= 1.903 milliseconds (cumulative count 49110)
141s 62.950% <= 2.007 milliseconds (cumulative count 62950)
141s 75.220% <= 2.103 milliseconds (cumulative count 75220)
141s 99.800% <= 3.103 milliseconds (cumulative count 99800)
141s 99.900% <= 4.103 milliseconds (cumulative count 99900)
141s 100.000% <= 5.103 milliseconds (cumulative count 100000)
141s
141s Summary:
141s throughput summary: 251889.16 requests per second
141s latency summary (msec):
141s avg min p50 p95 p99 max
141s 1.853 0.712 1.911 2.343 2.591 4.519
[progress ticks elided]
====== FUNCTION LOAD ======
147s 100000 requests completed in 5.36 seconds
147s 50 parallel clients
147s 3 bytes payload
147s keep alive: 1
147s host configuration "save": 3600 1 300 100 60 10000
147s host configuration "appendonly": no
147s multi-thread: no
147s
147s Latency by percentile distribution:
147s 0.000% <= 1.831 milliseconds (cumulative count 10)
147s 50.000% <= 29.679 milliseconds (cumulative count 50070)
147s 75.000% <= 31.327 milliseconds (cumulative count 75330)
147s 87.500% <= 31.903 milliseconds (cumulative count 87590)
147s 93.750% <= 32.431 milliseconds (cumulative count 93870)
147s 96.875% <= 33.343 milliseconds (cumulative count 96940)
147s 98.438% <= 34.271 milliseconds (cumulative count 98490)
147s 99.219% <= 37.727 milliseconds (cumulative count 99220)
147s 99.609% <= 40.607 milliseconds (cumulative count 99620)
147s 99.805% <= 43.551 milliseconds (cumulative count 99810)
147s 99.902% <= 43.743 milliseconds (cumulative count 99910)
147s 99.951% <= 43.999 milliseconds (cumulative count 99960)
147s 99.976% <= 46.047 milliseconds (cumulative count 99980)
147s 99.988% <= 46.079 milliseconds (cumulative count 99990)
147s 99.994% <= 46.111 milliseconds (cumulative count 100000)
147s 100.000% <= 46.111 milliseconds (cumulative count 100000)
147s
147s Cumulative distribution of latencies:
147s 0.000% <= 0.103 milliseconds (cumulative count 0)
147s 0.030% <= 1.903 milliseconds (cumulative count 30)
147s 0.170% <= 11.103 milliseconds (cumulative count 170)
147s 0.290% <= 12.103 milliseconds (cumulative count 290)
147s 0.650% <= 13.103 milliseconds (cumulative count 650)
147s 1.410% <= 14.103 milliseconds (cumulative count 1410)
147s 4.120% <= 15.103 milliseconds (cumulative count 4120)
147s 13.300% <= 16.103 milliseconds (cumulative count 13300)
147s 21.320% <= 17.103 milliseconds (cumulative count 21320)
147s 23.120% <= 18.111 milliseconds (cumulative count 23120)
147s 24.180% <= 19.103 milliseconds (cumulative count 24180)
147s 24.470% <= 20.111 milliseconds (cumulative count 24470)
147s 24.790% <= 21.103 milliseconds (cumulative count 24790)
147s 25.080% <= 23.103 milliseconds (cumulative count 25080)
147s 25.190% <= 25.103 milliseconds (cumulative count 25190)
147s 29.200% <= 26.111 milliseconds (cumulative count 29200)
147s 37.570% <= 27.103 milliseconds (cumulative count 37570)
147s 42.770% <= 28.111 milliseconds (cumulative count 42770)
147s 46.720% <= 29.103 milliseconds (cumulative count 46720)
147s 53.870% <= 30.111 milliseconds (cumulative count 53870)
147s 69.690% <= 31.103 milliseconds (cumulative count 69690)
147s 90.540% <= 32.111 milliseconds (cumulative count 90540)
147s 96.360% <= 33.119 milliseconds (cumulative count 96360)
147s 98.170% <= 34.111 milliseconds (cumulative count 98170)
147s 99.190% <= 35.103 milliseconds (cumulative count 99190)
147s 99.470% <= 38.111 milliseconds (cumulative count 99470)
147s 99.560% <= 39.103 milliseconds (cumulative count 99560)
147s 99.760% <= 41.119 milliseconds (cumulative count 99760)
147s 99.960% <= 44.127 milliseconds (cumulative count 99960)
147s 99.970% <= 45.119 milliseconds (cumulative count 99970)
147s 100.000% <= 46.111 milliseconds (cumulative count 100000)
147s
147s Summary:
147s throughput summary: 18653.24 requests per second
147s latency summary (msec):
147s avg min p50 p95 p99 max
147s 26.617 1.824 29.679 32.655 34.559 46.111
[progress ticks elided]
====== FCALL ======
147s 100000 requests completed in 0.34 seconds
147s 50 parallel clients
147s 3 bytes payload
147s keep alive: 1
147s host configuration "save": 3600 1 300 100 60 10000
147s host configuration "appendonly": no
147s multi-thread: no
147s
147s Latency by percentile distribution:
147s 0.000% <= 0.407 milliseconds (cumulative count 20)
147s 50.000% <= 1.551 milliseconds (cumulative count 50490)
147s 75.000% <= 1.743 milliseconds (cumulative count 75240)
147s 87.500% <= 1.871 milliseconds (cumulative count 87860)
147s 93.750% <= 1.991 milliseconds (cumulative count 94000)
147s 96.875% <= 2.159 milliseconds (cumulative count 96940)
147s 98.438% <= 2.679 milliseconds (cumulative count 98450)
147s 99.219% <= 4.031 milliseconds (cumulative count 99230)
147s 99.609% <= 4.695 milliseconds (cumulative count 99610)
147s 99.805% <= 4.831 milliseconds (cumulative count 99810)
147s 99.902% <= 4.935 milliseconds (cumulative count 99920)
147s 99.951% <= 4.983 milliseconds (cumulative count 99960)
147s 99.976% <= 5.015 milliseconds (cumulative count 99980)
147s 99.988% <= 5.063 milliseconds (cumulative count 99990)
147s 99.994% <= 5.071 milliseconds (cumulative count 100000)
147s 100.000% <= 5.071 milliseconds (cumulative count 100000)
147s
147s Cumulative distribution of latencies:
147s 0.000% <= 0.103 milliseconds (cumulative count 0)
147s 0.020% <= 0.407 milliseconds (cumulative count 20)
147s 0.230% <= 0.503 milliseconds (cumulative count 230)
147s 0.540% <= 0.607 milliseconds (cumulative count 540)
147s 0.790% <= 0.703 milliseconds (cumulative count 790)
147s 1.130% <= 0.807 milliseconds (cumulative count 1130)
147s 1.720% <= 0.903 milliseconds (cumulative count 1720)
147s 4.730% <= 1.007 milliseconds (cumulative count 4730)
147s 14.840% <= 1.103 milliseconds (cumulative count 14840)
147s 19.820% <= 1.207 milliseconds (cumulative count 19820)
147s 23.320% <= 1.303 milliseconds (cumulative count 23320)
147s 30.170% <= 1.407 milliseconds (cumulative count 30170)
147s 43.640% <= 1.503 milliseconds (cumulative count 43640)
147s 58.390% <= 1.607 milliseconds (cumulative count 58390)
147s 70.640% <= 1.703 milliseconds (cumulative count 70640)
147s 82.070% <= 1.807 milliseconds (cumulative count 82070)
147s 89.970% <= 1.903 milliseconds (cumulative count 89970)
147s 94.410% <= 2.007 milliseconds (cumulative count 94410)
147s 96.330% <= 2.103 milliseconds (cumulative count 96330)
147s 98.780% <= 3.103 milliseconds (cumulative count 98780)
147s 99.320% <= 4.103 milliseconds (cumulative count 99320)
147s 100.000% <= 5.103 milliseconds (cumulative count 100000)
147s
147s Summary:
147s throughput summary: 295858.00 requests per second
147s latency summary (msec):
147s avg min p50 p95 p99 max
147s 1.554 0.400 1.551 2.039 3.839 5.071
147s
147s autopkgtest [08:23:19]: test 0002-benchmark: -----------------------]
148s 0002-benchmark PASS
148s autopkgtest [08:23:20]: test 0002-benchmark: - - - - - - - - - - results - - - - - - - - - -
148s autopkgtest [08:23:20]: test 0003-valkey-check-aof: preparing testbed
148s Reading package lists...
149s Building dependency tree...
149s Reading state information...
149s Solving dependencies...
149s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
149s autopkgtest [08:23:21]: test 0003-valkey-check-aof: [-----------------------
150s autopkgtest [08:23:22]: test 0003-valkey-check-aof: -----------------------]
150s autopkgtest [08:23:22]: test 0003-valkey-check-aof: - - - - - - - - - - results - - - - - - - - - -
150s 0003-valkey-check-aof PASS
151s autopkgtest [08:23:23]: test 0004-valkey-check-rdb: preparing testbed
151s Reading package lists...
151s Building dependency tree...
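Each benchmark block above prints both a "Latency by percentile distribution" table and a "Cumulative distribution of latencies". The two are consistent: any percentile can be recovered from the cumulative lines by taking the first latency bucket whose cumulative share reaches the target (this yields the bucket's upper bound, so it can sit slightly above the exact percentile the tool reports). A minimal illustrative sketch, with three cumulative lines copied from the LRANGE_500 output above; this is not part of the test suite:

```python
# Recover a percentile from valkey-benchmark's cumulative-distribution lines.
import re

CUMULATIVE = """\
45.400% <= 22.111 milliseconds (cumulative count 45400)
52.220% <= 23.103 milliseconds (cumulative count 52220)
59.210% <= 24.111 milliseconds (cumulative count 59210)
"""

LINE = re.compile(r"([\d.]+)% <= ([\d.]+) milliseconds")

def percentile(lines: str, target: float) -> float:
    """Return the first bucket latency whose cumulative share >= target %."""
    for match in LINE.finditer(lines):
        share, latency = float(match.group(1)), float(match.group(2))
        if share >= target:
            return latency
    raise ValueError("target beyond distribution")

print(percentile(CUMULATIVE, 50.0))  # 23.103 (bucketed upper bound for p50)
```

Note the reported p50 for LRANGE_500 is 22.783 ms, which falls inside the 22.111–23.103 ms bucket, consistent with the bucketed answer.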
151s Reading state information...
151s Solving dependencies...
152s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
152s autopkgtest [08:23:24]: test 0004-valkey-check-rdb: [-----------------------
158s OK
158s [offset 0] Checking RDB file /var/lib/valkey/dump.rdb
158s [offset 27] AUX FIELD valkey-ver = '8.1.1'
158s [offset 41] AUX FIELD redis-bits = '64'
158s [offset 53] AUX FIELD ctime = '1751271810'
158s [offset 68] AUX FIELD used-mem = '3029080'
158s [offset 80] AUX FIELD aof-base = '0'
158s [offset 191] Selecting DB ID 0
158s [offset 566633] Checksum OK
158s [offset 566633] \o/ RDB looks OK! \o/
158s [info] 5 keys read
158s [info] 0 expires
158s [info] 0 already expired
158s autopkgtest [08:23:30]: test 0004-valkey-check-rdb: -----------------------]
158s autopkgtest [08:23:30]: test 0004-valkey-check-rdb: - - - - - - - - - - results - - - - - - - - - -
158s 0004-valkey-check-rdb PASS
159s autopkgtest [08:23:31]: test 0005-cjson: preparing testbed
159s Reading package lists...
159s Building dependency tree...
159s Reading state information...
159s Solving dependencies...
160s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
160s autopkgtest [08:23:32]: test 0005-cjson: [-----------------------
166s
166s autopkgtest [08:23:38]: test 0005-cjson: -----------------------]
167s 0005-cjson PASS
167s autopkgtest [08:23:39]: test 0005-cjson: - - - - - - - - - - results - - - - - - - - - -
167s autopkgtest [08:23:39]: test 0006-migrate-from-redis: preparing testbed
175s Creating nova instance adt-questing-ppc64el-valkey-20250630-082052-juju-7f2275-prod-proposed-migration-environment-21-b529d37f-bf31-4d01-8d77-27ee0906176a from image adt/ubuntu-questing-ppc64el-server-20250630.img (UUID 47357e88-256c-460f-8237-18b657912c63)...
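The valkey-check-rdb run above starts at [offset 0] and walks the dump to the trailing checksum; the very first thing any RDB reader validates is the 9-byte magic, the ASCII string "REDIS" followed by a four-digit version, which valkey keeps for compatibility (hence the mixed valkey-ver / redis-bits AUX fields above). A hypothetical sketch of just that header check, not the actual valkey-check-rdb code:

```python
# Minimal RDB header sanity check: 'REDIS' magic + 4 ASCII version digits.
def rdb_header_ok(data: bytes) -> bool:
    """True if data starts with the RDB magic 'REDIS' plus 4 digits."""
    if len(data) < 9 or data[:5] != b"REDIS":
        return False
    return data[5:9].isdigit()

print(rdb_header_ok(b"REDIS0011" + b"\x00"))  # True: plausible header
print(rdb_header_ok(b"not-an-rdb"))           # False: wrong magic
```

The real checker goes much further (opcode walk, per-key decode, CRC64 over the whole file, as the "Checksum OK" line shows); the header test is only the cheapest first gate.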
216s autopkgtest [08:24:28]: testbed dpkg architecture: ppc64el
216s autopkgtest [08:24:28]: testbed apt version: 3.1.2
216s autopkgtest [08:24:28]: @@@@@@@@@@@@@@@@@@@@ test bed setup
216s autopkgtest [08:24:28]: testbed release detected to be: questing
217s autopkgtest [08:24:29]: updating testbed package index (apt update)
217s Get:1 http://ftpmaster.internal/ubuntu questing-proposed InRelease [249 kB]
218s Hit:2 http://ftpmaster.internal/ubuntu questing InRelease
218s Hit:3 http://ftpmaster.internal/ubuntu questing-updates InRelease
218s Hit:4 http://ftpmaster.internal/ubuntu questing-security InRelease
218s Get:5 http://ftpmaster.internal/ubuntu questing-proposed/universe Sources [429 kB]
218s Get:6 http://ftpmaster.internal/ubuntu questing-proposed/main Sources [26.6 kB]
218s Get:7 http://ftpmaster.internal/ubuntu questing-proposed/multiverse Sources [17.5 kB]
218s Get:8 http://ftpmaster.internal/ubuntu questing-proposed/main ppc64el Packages [33.1 kB]
218s Get:9 http://ftpmaster.internal/ubuntu questing-proposed/universe ppc64el Packages [375 kB]
218s Get:10 http://ftpmaster.internal/ubuntu questing-proposed/multiverse ppc64el Packages [5260 B]
218s Fetched 1136 kB in 1s (1247 kB/s)
219s Reading package lists...
220s autopkgtest [08:24:32]: upgrading testbed (apt dist-upgrade and autopurge)
220s Reading package lists...
220s Building dependency tree...
220s Reading state information...
220s Calculating upgrade...
220s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
220s Reading package lists...
221s Building dependency tree...
221s Reading state information...
221s Solving dependencies...
221s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
224s Reading package lists...
224s Building dependency tree...
224s Reading state information...
224s Solving dependencies...
224s The following NEW packages will be installed:
224s liblzf1 redis-sentinel redis-server redis-tools
224s 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
224s Need to get 1812 kB of archives.
224s After this operation, 10.6 MB of additional disk space will be used.
224s Get:1 http://ftpmaster.internal/ubuntu questing/universe ppc64el liblzf1 ppc64el 3.6-4 [7920 B]
224s Get:2 http://ftpmaster.internal/ubuntu questing-proposed/universe ppc64el redis-tools ppc64el 5:8.0.0-2 [1738 kB]
225s Get:3 http://ftpmaster.internal/ubuntu questing-proposed/universe ppc64el redis-sentinel ppc64el 5:8.0.0-2 [12.5 kB]
225s Get:4 http://ftpmaster.internal/ubuntu questing-proposed/universe ppc64el redis-server ppc64el 5:8.0.0-2 [53.2 kB]
225s Fetched 1812 kB in 1s (2620 kB/s)
225s Selecting previously unselected package liblzf1:ppc64el.
225s (Reading database ... 114358 files and directories currently installed.)
225s Preparing to unpack .../liblzf1_3.6-4_ppc64el.deb ...
225s Unpacking liblzf1:ppc64el (3.6-4) ...
225s Selecting previously unselected package redis-tools.
225s Preparing to unpack .../redis-tools_5%3a8.0.0-2_ppc64el.deb ...
225s Unpacking redis-tools (5:8.0.0-2) ...
225s Selecting previously unselected package redis-sentinel.
225s Preparing to unpack .../redis-sentinel_5%3a8.0.0-2_ppc64el.deb ...
225s Unpacking redis-sentinel (5:8.0.0-2) ...
225s Selecting previously unselected package redis-server.
225s Preparing to unpack .../redis-server_5%3a8.0.0-2_ppc64el.deb ...
225s Unpacking redis-server (5:8.0.0-2) ...
225s Setting up liblzf1:ppc64el (3.6-4) ...
225s Setting up redis-tools (5:8.0.0-2) ...
225s Setting up redis-server (5:8.0.0-2) ...
226s Created symlink '/etc/systemd/system/redis.service' → '/usr/lib/systemd/system/redis-server.service'.
226s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-server.service' → '/usr/lib/systemd/system/redis-server.service'.
226s Setting up redis-sentinel (5:8.0.0-2) ...
227s Created symlink '/etc/systemd/system/sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'.
227s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'.
227s Processing triggers for man-db (2.13.1-1) ...
228s Processing triggers for libc-bin (2.41-6ubuntu2) ...
235s autopkgtest [08:24:47]: test 0006-migrate-from-redis: [-----------------------
235s + FLAG_FILE=/etc/valkey/REDIS_MIGRATION
235s + sed -i 's#loglevel notice#loglevel debug#' /etc/redis/redis.conf
235s + systemctl restart redis-server
236s + redis-cli -h 127.0.0.1 -p 6379 SET test 1
236s + redis-cli -h 127.0.0.1 -p 6379 GET test
236s OK
236s + redis-cli -h 127.0.0.1 -p 6379 SAVE
236s 1
236s + sha256sum /var/lib/redis/dump.rdb
236s OK
236s 62aa4d94cd01003efaf211bc3c470e47e607af23901e035a3dd932eda5c4db94 /var/lib/redis/dump.rdb
236s + apt-get install -y valkey-redis-compat
236s Reading package lists...
236s Building dependency tree...
236s Reading state information...
236s Solving dependencies...
236s The following additional packages will be installed:
236s   valkey-server valkey-tools
236s Suggested packages:
236s   ruby-redis
236s The following packages will be REMOVED:
236s   redis-sentinel redis-server redis-tools
236s The following NEW packages will be installed:
236s   valkey-redis-compat valkey-server valkey-tools
236s 0 upgraded, 3 newly installed, 3 to remove and 0 not upgraded.
236s Need to get 1695 kB of archives.
236s After this operation, 476 kB disk space will be freed.
236s Get:1 http://ftpmaster.internal/ubuntu questing/universe ppc64el valkey-tools ppc64el 8.1.1+dfsg1-2ubuntu1 [1636 kB]
237s Get:2 http://ftpmaster.internal/ubuntu questing/universe ppc64el valkey-server ppc64el 8.1.1+dfsg1-2ubuntu1 [51.7 kB]
237s Get:3 http://ftpmaster.internal/ubuntu questing/universe ppc64el valkey-redis-compat all 8.1.1+dfsg1-2ubuntu1 [7794 B]
237s Fetched 1695 kB in 1s (2424 kB/s)
237s (Reading database ... 114409 files and directories currently installed.)
237s Removing redis-sentinel (5:8.0.0-2) ...
237s Removing redis-server (5:8.0.0-2) ...
238s Removing redis-tools (5:8.0.0-2) ...
238s Selecting previously unselected package valkey-tools.
238s (Reading database ... 114372 files and directories currently installed.)
238s Preparing to unpack .../valkey-tools_8.1.1+dfsg1-2ubuntu1_ppc64el.deb ...
238s Unpacking valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
238s Selecting previously unselected package valkey-server.
238s Preparing to unpack .../valkey-server_8.1.1+dfsg1-2ubuntu1_ppc64el.deb ...
238s Unpacking valkey-server (8.1.1+dfsg1-2ubuntu1) ...
238s Selecting previously unselected package valkey-redis-compat.
238s Preparing to unpack .../valkey-redis-compat_8.1.1+dfsg1-2ubuntu1_all.deb ...
238s Unpacking valkey-redis-compat (8.1.1+dfsg1-2ubuntu1) ...
238s Setting up valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
238s Setting up valkey-server (8.1.1+dfsg1-2ubuntu1) ...
238s Created symlink '/etc/systemd/system/valkey.service' → '/usr/lib/systemd/system/valkey-server.service'.
238s Created symlink '/etc/systemd/system/multi-user.target.wants/valkey-server.service' → '/usr/lib/systemd/system/valkey-server.service'.
239s Setting up valkey-redis-compat (8.1.1+dfsg1-2ubuntu1) ...
239s dpkg-query: no packages found matching valkey-sentinel
239s [I] /etc/redis/redis.conf has been copied to /etc/valkey/valkey.conf. Please, review the content of valkey.conf, especially if you had modified redis.conf.
239s [I] /etc/redis/sentinel.conf has been copied to /etc/valkey/sentinel.conf. Please, review the content of sentinel.conf, especially if you had modified sentinel.conf.
239s [I] On-disk redis dumps moved from /var/lib/redis/ to /var/lib/valkey.
239s Processing triggers for man-db (2.13.1-1) ...
239s 81720b17f6c0502c4db24bec20dd9d867e0d57d4a9b6aff926d7eafac1b09bb2 /var/lib/valkey/dump.rdb
239s + '[' -f /etc/valkey/REDIS_MIGRATION ']'
239s + sha256sum /var/lib/valkey/dump.rdb
239s + systemctl status valkey-server
239s + grep inactive
239s      Active: inactive (dead) since Mon 2025-06-30 08:24:50 UTC; 478ms ago
239s + rm /etc/valkey/REDIS_MIGRATION
239s + systemctl start valkey-server
240s Job for valkey-server.service failed because the control process exited with error code.
240s See "systemctl status valkey-server.service" and "journalctl -xeu valkey-server.service" for details.
240s autopkgtest [08:24:52]: test 0006-migrate-from-redis: -----------------------]
240s 0006-migrate-from-redis FAIL non-zero exit status 1
240s autopkgtest [08:24:52]: test 0006-migrate-from-redis:  - - - - - - - - - - results - - - - - - - - - -
241s autopkgtest [08:24:53]: @@@@@@@@@@@@@@@@@@@@ summary
241s 0001-valkey-cli PASS
241s 0002-benchmark PASS
241s 0003-valkey-check-aof PASS
241s 0004-valkey-check-rdb PASS
241s 0005-cjson PASS
241s 0006-migrate-from-redis FAIL non-zero exit status 1
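Editor's note on the trace: the test records the dump checksum before migration (62aa4d94…, under /var/lib/redis/) and again after valkey-redis-compat moves it (81720b17…, under /var/lib/valkey/). The two hashes differ, which suggests the dump file was rewritten somewhere along the way rather than moved byte-for-byte; the test itself then fails on `systemctl start valkey-server`, not on the checksum. The comparison mechanics can be sketched in isolation like this (a minimal sketch only: temporary files stand in for /var/lib/redis/dump.rdb and /var/lib/valkey/dump.rdb, and a plain `cp` stands in for the package's migration step):

```shell
# Sketch of the before/after dump checksum comparison from the trace above.
# Stand-ins: mktemp files instead of the real RDB paths, cp instead of the
# valkey-redis-compat move, placeholder bytes instead of a real RDB dump.
set -eu
src=$(mktemp)
dst=$(mktemp)
printf 'placeholder dump contents' > "$src"    # stand-in for the SAVEd RDB
before=$(sha256sum "$src" | awk '{print $1}')  # hash before migration
cp "$src" "$dst"                               # stand-in migration step
after=$(sha256sum "$dst" | awk '{print $1}')   # hash after migration
if [ "$before" = "$after" ]; then
    echo "dump carried over unchanged"
else
    echo "dump differs after migration"
fi
rm -f "$src" "$dst"
```

With a byte-for-byte copy the hashes match and the sketch prints "dump carried over unchanged"; in the log above the equivalent comparison would report a difference.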