0s autopkgtest [14:52:04]: starting date and time: 2025-06-19 14:52:04+0000
1s autopkgtest [14:52:05]: git checkout: 9986aa8c Merge branch 'skia/fix_network_interface' into 'ubuntu/production'
1s autopkgtest [14:52:05]: host juju-7f2275-prod-proposed-migration-environment-15; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.ou7ixo4z/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:redis --apt-upgrade valkey --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=redis/5:8.0.0-2 -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest-cpu2-ram4-disk20-ppc64el --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-15@sto01-ppc64el-13.secgroup --name adt-questing-ppc64el-valkey-20250619-145204-juju-7f2275-prod-proposed-migration-environment-15-e5854d58-6d10-4857-bcc0-c48b9c274b05 --image adt/ubuntu-questing-ppc64el-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-15 --net-id=net_prod-autopkgtest-workers-ppc64el -e TERM=linux --mirror=http://ftpmaster.internal/ubuntu/
98s autopkgtest [14:53:42]: testbed dpkg architecture: ppc64el
98s autopkgtest [14:53:42]: testbed apt version: 3.1.2
98s autopkgtest [14:53:42]: @@@@@@@@@@@@@@@@@@@@ test bed setup
99s autopkgtest [14:53:43]: testbed release detected to be: None
99s autopkgtest [14:53:43]: updating testbed package index (apt update)
100s Get:1 http://ftpmaster.internal/ubuntu questing-proposed InRelease [249 kB]
100s Hit:2 http://ftpmaster.internal/ubuntu questing InRelease
100s Hit:3 http://ftpmaster.internal/ubuntu questing-updates InRelease
100s Hit:4 http://ftpmaster.internal/ubuntu questing-security InRelease
100s Get:5 http://ftpmaster.internal/ubuntu questing-proposed/universe Sources [426 kB]
100s Get:6 http://ftpmaster.internal/ubuntu questing-proposed/multiverse Sources [17.4 kB]
100s Get:7 http://ftpmaster.internal/ubuntu questing-proposed/restricted Sources [4716 B]
100s Get:8 http://ftpmaster.internal/ubuntu questing-proposed/main Sources [38.3 kB]
100s Get:9 http://ftpmaster.internal/ubuntu questing-proposed/main ppc64el Packages [66.7 kB]
100s Get:10 http://ftpmaster.internal/ubuntu questing-proposed/restricted ppc64el Packages [724 B]
100s Get:11 http://ftpmaster.internal/ubuntu questing-proposed/universe ppc64el Packages [340 kB]
100s Get:12 http://ftpmaster.internal/ubuntu questing-proposed/multiverse ppc64el Packages [6448 B]
100s Fetched 1149 kB in 1s (2252 kB/s)
101s Reading package lists...
101s autopkgtest [14:53:45]: upgrading testbed (apt dist-upgrade and autopurge)
102s Reading package lists...
102s Building dependency tree...
102s Reading state information...
102s Calculating upgrade...
102s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
102s Reading package lists...
102s Building dependency tree...
102s Reading state information...
102s Solving dependencies...
102s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
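The command line above pulls only src:redis from questing-proposed (--apt-pocket=proposed=src:redis) while the rest of the testbed stays on the release pocket. A minimal sketch of apt pinning with roughly that effect, assuming autopkgtest arranges something equivalent internally; the file path and priority values here are illustrative, not taken from the tool:

```sh
# Illustrative only: approximate the effect of --apt-pocket=proposed=src:redis
# by pinning questing-proposed low globally, then raising just the packages
# built from src:redis. Path and priorities are assumptions of this sketch.
cat > /etc/apt/preferences.d/90-proposed-redis <<'EOF'
Package: *
Pin: release a=questing-proposed
Pin-Priority: 100

Package: src:redis
Pin: release a=questing-proposed
Pin-Priority: 995
EOF
apt-get update
```

With a pin like this in place, apt prefers the proposed build of the trigger package (here redis 5:8.0.0-2, per ADT_TEST_TRIGGERS) but leaves everything else on questing.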
105s autopkgtest [14:53:49]: testbed running kernel: Linux 6.14.0-15-generic #15-Ubuntu SMP Sun Apr 6 14:52:42 UTC 2025
105s autopkgtest [14:53:49]: @@@@@@@@@@@@@@@@@@@@ apt-source valkey
108s Get:1 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (dsc) [2484 B]
108s Get:2 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (tar) [2726 kB]
108s Get:3 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (diff) [20.4 kB]
108s gpgv: Signature made Wed Jun 18 14:39:32 2025 UTC
108s gpgv: using RSA key 63EEFC3DE14D5146CE7F24BF34B8AD7D9529E793
108s gpgv: issuer "lena.voytek@canonical.com"
108s gpgv: Can't check signature: No public key
108s dpkg-source: warning: cannot verify inline signature for ./valkey_8.1.1+dfsg1-2ubuntu1.dsc: no acceptable signature found
108s autopkgtest [14:53:52]: testing package valkey version 8.1.1+dfsg1-2ubuntu1
110s autopkgtest [14:53:54]: build not needed
113s autopkgtest [14:53:57]: test 0001-valkey-cli: preparing testbed
113s Reading package lists...
113s Building dependency tree...
113s Reading state information...
113s Solving dependencies...
113s The following NEW packages will be installed:
113s   liblzf1 valkey-server valkey-tools
113s 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
113s Need to get 1695 kB of archives.
113s After this operation, 10.1 MB of additional disk space will be used.
113s Get:1 http://ftpmaster.internal/ubuntu questing/universe ppc64el liblzf1 ppc64el 3.6-4 [7920 B]
113s Get:2 http://ftpmaster.internal/ubuntu questing/universe ppc64el valkey-tools ppc64el 8.1.1+dfsg1-2ubuntu1 [1636 kB]
114s Get:3 http://ftpmaster.internal/ubuntu questing/universe ppc64el valkey-server ppc64el 8.1.1+dfsg1-2ubuntu1 [51.7 kB]
114s Fetched 1695 kB in 0s (5155 kB/s)
114s Selecting previously unselected package liblzf1:ppc64el.
115s (Reading database ... 79652 files and directories currently installed.)
115s Preparing to unpack .../liblzf1_3.6-4_ppc64el.deb ...
115s Unpacking liblzf1:ppc64el (3.6-4) ...
115s Selecting previously unselected package valkey-tools.
115s Preparing to unpack .../valkey-tools_8.1.1+dfsg1-2ubuntu1_ppc64el.deb ...
115s Unpacking valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
115s Selecting previously unselected package valkey-server.
115s Preparing to unpack .../valkey-server_8.1.1+dfsg1-2ubuntu1_ppc64el.deb ...
115s Unpacking valkey-server (8.1.1+dfsg1-2ubuntu1) ...
115s Setting up liblzf1:ppc64el (3.6-4) ...
115s Setting up valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
115s Setting up valkey-server (8.1.1+dfsg1-2ubuntu1) ...
116s Created symlink '/etc/systemd/system/valkey.service' → '/usr/lib/systemd/system/valkey-server.service'.
116s Created symlink '/etc/systemd/system/multi-user.target.wants/valkey-server.service' → '/usr/lib/systemd/system/valkey-server.service'.
116s Processing triggers for man-db (2.13.1-1) ...
118s Processing triggers for libc-bin (2.41-6ubuntu2) ...
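The gpgv failure above is expected inside the testbed: the signer's public key is simply not in any keyring there, so dpkg-source falls back to unpacking without verification. A sketch of checking the signature by hand, assuming network access to the Ubuntu keyserver; the fingerprint comes straight from the log:

```sh
# Import the signing key reported by gpgv, then re-check the .dsc.
# Keyserver choice and network access are assumptions of this sketch.
gpg --keyserver keyserver.ubuntu.com \
    --recv-keys 63EEFC3DE14D5146CE7F24BF34B8AD7D9529E793
gpg --verify valkey_8.1.1+dfsg1-2ubuntu1.dsc
```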
119s autopkgtest [14:54:03]: test 0001-valkey-cli: [-----------------------
119s **************************************************************************
119s # A new feature in cloud-init identified possible datasources for #
119s # this system as: #
119s #   [] #
119s # However, the datasource used was: OpenStack #
119s # #
119s # In the future, cloud-init will only attempt to use datasources that #
119s # are identified or specifically configured. #
119s # For more information see #
119s #   https://bugs.launchpad.net/bugs/1669675 #
119s # #
119s # If you are seeing this message, please file a bug against #
119s # cloud-init at #
119s #   https://github.com/canonical/cloud-init/issues #
119s # Make sure to include the cloud provider your instance is #
119s # running on. #
119s # #
119s # After you have filed a bug, you can disable this warning by launching #
119s # your instance with the cloud-config below, or putting that content #
119s # into /etc/cloud/cloud.cfg.d/99-warnings.cfg #
119s # #
119s #   #cloud-config #
119s #   warnings: #
119s #     dsid_missing_source: off #
119s **************************************************************************
119s 
119s Disable the warnings above by:
119s touch /root/.cloud-warnings.skip
119s or
119s touch /var/lib/cloud/instance/warnings/.skip
124s # Server
124s redis_version:7.2.4
124s server_name:valkey
124s valkey_version:8.1.1
124s valkey_release_stage:ga
124s redis_git_sha1:00000000
124s redis_git_dirty:0
124s redis_build_id:454dc2cf719509d2
124s server_mode:standalone
124s os:Linux 6.14.0-15-generic ppc64le
124s arch_bits:64
124s monotonic_clock:POSIX clock_gettime
124s multiplexing_api:epoll
124s gcc_version:14.3.0
124s process_id:2150
124s process_supervised:systemd
124s run_id:dacb4a585d3a050830da999ff1d956364c7f8459
124s tcp_port:6379
124s server_time_usec:1750344847983624
124s uptime_in_seconds:5
124s uptime_in_days:0
124s hz:10
124s configured_hz:10
124s clients_hz:10
124s lru_clock:5514383
124s executable:/usr/bin/valkey-server
124s config_file:/etc/valkey/valkey.conf
124s io_threads_active:0
124s availability_zone:
124s listener0:name=tcp,bind=127.0.0.1,bind=-::1,port=6379
124s 
124s # Clients
124s connected_clients:1
124s cluster_connections:0
124s maxclients:10000
124s client_recent_max_input_buffer:0
124s client_recent_max_output_buffer:0
124s blocked_clients:0
124s tracking_clients:0
124s pubsub_clients:0
124s watching_clients:0
124s clients_in_timeout_table:0
124s total_watched_keys:0
124s total_blocking_keys:0
124s total_blocking_keys_on_nokey:0
124s paused_reason:none
124s paused_actions:none
124s paused_timeout_milliseconds:0
124s 
124s # Memory
124s used_memory:944544
124s used_memory_human:922.41K
124s used_memory_rss:22216704
124s used_memory_rss_human:21.19M
124s used_memory_peak:944544
124s used_memory_peak_human:922.41K
124s used_memory_peak_perc:100.29%
124s used_memory_overhead:924640
124s used_memory_startup:924416
124s used_memory_dataset:19904
124s used_memory_dataset_perc:98.89%
124s allocator_allocated:4426880
124s allocator_active:9043968
124s allocator_resident:11403264
124s allocator_muzzy:0
124s total_system_memory:4208918528
124s total_system_memory_human:3.92G
124s used_memory_lua:32768
124s used_memory_vm_eval:32768
124s used_memory_lua_human:32.00K
124s used_memory_scripts_eval:0
124s number_of_cached_scripts:0
124s number_of_functions:0
124s number_of_libraries:0
124s used_memory_vm_functions:33792
124s used_memory_vm_total:66560
124s used_memory_vm_total_human:65.00K
124s used_memory_functions:224
124s used_memory_scripts:224
124s used_memory_scripts_human:224B
124s maxmemory:0
124s maxmemory_human:0B
124s maxmemory_policy:noeviction
124s allocator_frag_ratio:1.00
124s allocator_frag_bytes:0
124s allocator_rss_ratio:1.26
124s allocator_rss_bytes:2359296
124s rss_overhead_ratio:1.95
124s rss_overhead_bytes:10813440
124s mem_fragmentation_ratio:24.03
124s mem_fragmentation_bytes:21292144
124s mem_not_counted_for_evict:0
124s mem_replication_backlog:0
124s mem_total_replication_buffers:0
124s mem_clients_slaves:0
124s mem_clients_normal:0
124s mem_cluster_links:0
124s mem_aof_buffer:0
124s mem_allocator:jemalloc-5.3.0
124s mem_overhead_db_hashtable_rehashing:0
124s active_defrag_running:0
124s lazyfree_pending_objects:0
124s lazyfreed_objects:0
124s 
124s # Persistence
124s loading:0
124s async_loading:0
124s current_cow_peak:0
124s current_cow_size:0
124s current_cow_size_age:0
124s current_fork_perc:0.00
124s current_save_keys_processed:0
124s current_save_keys_total:0
124s rdb_changes_since_last_save:0
124s rdb_bgsave_in_progress:0
124s rdb_last_save_time:1750344842
124s rdb_last_bgsave_status:ok
124s rdb_last_bgsave_time_sec:-1
124s rdb_current_bgsave_time_sec:-1
124s rdb_saves:0
124s rdb_last_cow_size:0
124s rdb_last_load_keys_expired:0
124s rdb_last_load_keys_loaded:0
124s aof_enabled:0
124s aof_rewrite_in_progress:0
124s aof_rewrite_scheduled:0
124s aof_last_rewrite_time_sec:-1
124s aof_current_rewrite_time_sec:-1
124s aof_last_bgrewrite_status:ok
124s aof_rewrites:0
124s aof_rewrites_consecutive_failures:0
124s aof_last_write_status:ok
124s aof_last_cow_size:0
124s module_fork_in_progress:0
124s module_fork_last_cow_size:0
124s 
124s # Stats
124s total_connections_received:1
124s total_commands_processed:0
124s instantaneous_ops_per_sec:0
124s total_net_input_bytes:14
124s total_net_output_bytes:0
124s total_net_repl_input_bytes:0
124s total_net_repl_output_bytes:0
124s instantaneous_input_kbps:0.00
124s instantaneous_output_kbps:0.00
124s instantaneous_input_repl_kbps:0.00
124s instantaneous_output_repl_kbps:0.00
124s rejected_connections:0
124s sync_full:0
124s sync_partial_ok:0
124s sync_partial_err:0
124s expired_keys:0
124s expired_stale_perc:0.00
124s expired_time_cap_reached_count:0
124s expire_cycle_cpu_milliseconds:0
124s evicted_keys:0
124s evicted_clients:0
124s evicted_scripts:0
124s total_eviction_exceeded_time:0
124s current_eviction_exceeded_time:0
124s keyspace_hits:0
124s keyspace_misses:0
124s pubsub_channels:0
124s pubsub_patterns:0
124s pubsubshard_channels:0
124s latest_fork_usec:0
124s total_forks:0
124s migrate_cached_sockets:0
124s slave_expires_tracked_keys:0
124s active_defrag_hits:0
124s active_defrag_misses:0
124s active_defrag_key_hits:0
124s active_defrag_key_misses:0
124s total_active_defrag_time:0
124s current_active_defrag_time:0
124s tracking_total_keys:0
124s tracking_total_items:0
124s tracking_total_prefixes:0
124s unexpected_error_replies:0
124s total_error_replies:0
124s dump_payload_sanitizations:0
124s total_reads_processed:1
124s total_writes_processed:0
124s io_threaded_reads_processed:0
124s io_threaded_writes_processed:0
124s io_threaded_freed_objects:0
124s io_threaded_accept_processed:0
124s io_threaded_poll_processed:0
124s io_threaded_total_prefetch_batches:0
124s io_threaded_total_prefetch_entries:0
124s client_query_buffer_limit_disconnections:0
124s client_output_buffer_limit_disconnections:0
124s reply_buffer_shrinks:0
124s reply_buffer_expands:0
124s eventloop_cycles:51
124s eventloop_duration_sum:2623
124s eventloop_duration_cmd_sum:0
124s instantaneous_eventloop_cycles_per_sec:9
124s instantaneous_eventloop_duration_usec:63
124s acl_access_denied_auth:0
124s acl_access_denied_cmd:0
124s acl_access_denied_key:0
124s acl_access_denied_channel:0
124s 
124s # Replication
124s role:master
124s connected_slaves:0
124s replicas_waiting_psync:0
124s master_failover_state:no-failover
124s master_replid:ce6f78461adf65fc64bbc6b8759dbfea10b09bdc
124s master_replid2:0000000000000000000000000000000000000000
124s master_repl_offset:0
124s second_repl_offset:-1
124s repl_backlog_active:0
124s repl_backlog_size:10485760
124s repl_backlog_first_byte_offset:0
124s repl_backlog_histlen:0
124s 
124s # CPU
124s used_cpu_sys:0.018572
124s used_cpu_user:0.041767
124s used_cpu_sys_children:0.000000
124s used_cpu_user_children:0.001381
124s used_cpu_sys_main_thread:0.018525
124s used_cpu_user_main_thread:0.041662
124s 
124s # Modules
124s 
124s # Errorstats
124s 
124s # Cluster
124s cluster_enabled:0
124s 
124s # Keyspace
124s Redis ver. 8.1.1
124s autopkgtest [14:54:08]: test 0001-valkey-cli: -----------------------]
125s 0001-valkey-cli PASS
125s autopkgtest [14:54:09]: test 0001-valkey-cli: - - - - - - - - - - results - - - - - - - - - -
125s autopkgtest [14:54:09]: test 0002-benchmark: preparing testbed
125s Reading package lists...
125s Building dependency tree...
125s Reading state information...
125s Solving dependencies...
126s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
126s autopkgtest [14:54:10]: test 0002-benchmark: [-----------------------
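Every benchmark block below reports the same run parameters: 100000 requests, 50 parallel clients, a 3 byte payload, and keep alive 1. The actual debian/tests/0002-benchmark script is not shown in this log, but an invocation consistent with those figures would look roughly like this (valkey-benchmark ships in valkey-tools, installed above):

```sh
# Hypothetical reproduction of the parameters printed in each block below;
# -n requests, -c parallel clients, -d payload size in bytes, -k keep-alive.
valkey-benchmark -h 127.0.0.1 -p 6379 -n 100000 -c 50 -d 3 -k 1
```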
126s **************************************************************************
126s # A new feature in cloud-init identified possible datasources for #
126s # this system as: #
126s #   [] #
126s # However, the datasource used was: OpenStack #
126s # #
126s # In the future, cloud-init will only attempt to use datasources that #
126s # are identified or specifically configured. #
126s # For more information see #
126s #   https://bugs.launchpad.net/bugs/1669675 #
126s # #
126s # If you are seeing this message, please file a bug against #
126s # cloud-init at #
126s #   https://github.com/canonical/cloud-init/issues #
126s # Make sure to include the cloud provider your instance is #
126s # running on. #
126s # #
126s # After you have filed a bug, you can disable this warning by launching #
126s # your instance with the cloud-config below, or putting that content #
126s # into /etc/cloud/cloud.cfg.d/99-warnings.cfg #
126s # #
126s #   #cloud-config #
126s #   warnings: #
126s #     dsid_missing_source: off #
126s **************************************************************************
126s 
126s Disable the warnings above by:
126s touch /root/.cloud-warnings.skip
126s or
126s touch /var/lib/cloud/instance/warnings/.skip
132s PING_INLINE: rps=0.0 (overall: nan) avg_msec=nan (overall: nan)
132s PING_INLINE: rps=248760.0 (overall: 248760.0) avg_msec=1.728 (overall: 1.728)
132s ====== PING_INLINE ======
132s   100000 requests completed in 0.41 seconds
132s   50 parallel clients
132s   3 bytes payload
132s   keep alive: 1
132s   host configuration "save": 3600 1 300 100 60 10000
132s   host configuration "appendonly": no
132s   multi-thread: no
132s 
132s Latency by percentile distribution:
132s 0.000% <= 0.247 milliseconds (cumulative count 10)
132s 50.000% <= 1.711 milliseconds (cumulative count 50020)
132s 75.000% <= 1.967 milliseconds (cumulative count 75040)
132s 87.500% <= 2.319 milliseconds (cumulative count 87550)
132s 93.750% <= 2.543 milliseconds (cumulative count 93820)
132s 96.875% <= 3.055 milliseconds (cumulative count 96900)
132s 98.438% <= 3.903 milliseconds (cumulative count 98440)
132s 99.219% <= 4.143 milliseconds (cumulative count 99220)
132s 99.609% <= 4.295 milliseconds (cumulative count 99610)
132s 99.805% <= 4.415 milliseconds (cumulative count 99810)
132s 99.902% <= 4.527 milliseconds (cumulative count 99910)
132s 99.951% <= 4.583 milliseconds (cumulative count 99960)
132s 99.976% <= 4.607 milliseconds (cumulative count 99980)
132s 99.988% <= 4.615 milliseconds (cumulative count 99990)
132s 99.994% <= 4.631 milliseconds (cumulative count 100000)
132s 100.000% <= 4.631 milliseconds (cumulative count 100000)
132s 
132s Cumulative distribution of latencies:
132s 0.000% <= 0.103 milliseconds (cumulative count 0)
132s 0.080% <= 0.303 milliseconds (cumulative count 80)
132s 1.160% <= 0.407 milliseconds (cumulative count 1160)
132s 1.940% <= 0.503 milliseconds (cumulative count 1940)
132s 2.610% <= 0.607 milliseconds (cumulative count 2610)
132s 4.850% <= 0.703 milliseconds (cumulative count 4850)
132s 6.140% <= 0.807 milliseconds (cumulative count 6140)
132s 7.570% <= 0.903 milliseconds (cumulative count 7570)
132s 9.060% <= 1.007 milliseconds (cumulative count 9060)
132s 10.400% <= 1.103 milliseconds (cumulative count 10400)
132s 12.360% <= 1.207 milliseconds (cumulative count 12360)
132s 14.690% <= 1.303 milliseconds (cumulative count 14690)
132s 18.450% <= 1.407 milliseconds (cumulative count 18450)
132s 27.900% <= 1.503 milliseconds (cumulative count 27900)
132s 39.000% <= 1.607 milliseconds (cumulative count 39000)
132s 49.190% <= 1.703 milliseconds (cumulative count 49190)
132s 60.190% <= 1.807 milliseconds (cumulative count 60190)
132s 69.500% <= 1.903 milliseconds (cumulative count 69500)
132s 76.810% <= 2.007 milliseconds (cumulative count 76810)
132s 80.200% <= 2.103 milliseconds (cumulative count 80200)
132s 97.030% <= 3.103 milliseconds (cumulative count 97030)
132s 99.070% <= 4.103 milliseconds (cumulative count 99070)
132s 100.000% <= 5.103 milliseconds (cumulative count 100000)
132s 
132s Summary:
132s   throughput summary: 245700.25 requests per second
132s   latency summary (msec):
132s           avg       min       p50       p95       p99       max
132s         1.754     0.240     1.711     2.599     4.087     4.631
132s PING_MBULK: rps=120517.9 (overall: 336111.1) avg_msec=1.227 (overall: 1.227)
132s ====== PING_MBULK ======
132s   100000 requests completed in 0.33 seconds
132s   50 parallel clients
132s   3 bytes payload
132s   keep alive: 1
132s   host configuration "save": 3600 1 300 100 60 10000
132s   host configuration "appendonly": no
132s   multi-thread: no
132s 
132s Latency by percentile distribution:
132s 0.000% <= 0.215 milliseconds (cumulative count 10)
132s 50.000% <= 1.447 milliseconds (cumulative count 50030)
132s 75.000% <= 1.743 milliseconds (cumulative count 75450)
132s 87.500% <= 1.991 milliseconds (cumulative count 87620)
132s 93.750% <= 2.263 milliseconds (cumulative count 93850)
132s 96.875% <= 2.431 milliseconds (cumulative count 96970)
132s 98.438% <= 2.591 milliseconds (cumulative count 98450)
132s 99.219% <= 2.967 milliseconds (cumulative count 99220)
132s 99.609% <= 3.639 milliseconds (cumulative count 99610)
132s 99.805% <= 3.863 milliseconds (cumulative count 99810)
132s 99.902% <= 3.975 milliseconds (cumulative count 99910)
132s 99.951% <= 4.031 milliseconds (cumulative count 99960)
132s 99.976% <= 4.055 milliseconds (cumulative count 99980)
132s 99.988% <= 4.063 milliseconds (cumulative count 99990)
132s 99.994% <= 4.079 milliseconds (cumulative count 100000)
132s 100.000% <= 4.079 milliseconds (cumulative count 100000)
132s 
132s Cumulative distribution of latencies:
132s 0.000% <= 0.103 milliseconds (cumulative count 0)
132s 0.570% <= 0.303 milliseconds (cumulative count 570)
132s 5.010% <= 0.407 milliseconds (cumulative count 5010)
132s 9.950% <= 0.503 milliseconds (cumulative count 9950)
132s 14.710% <= 0.607 milliseconds (cumulative count 14710)
132s 18.450% <= 0.703 milliseconds (cumulative count 18450)
132s 21.180% <= 0.807 milliseconds (cumulative count 21180)
132s 22.830% <= 0.903 milliseconds (cumulative count 22830)
132s 24.570% <= 1.007 milliseconds (cumulative count 24570)
132s 26.780% <= 1.103 milliseconds (cumulative count 26780)
132s 31.590% <= 1.207 milliseconds (cumulative count 31590)
132s 36.110% <= 1.303 milliseconds (cumulative count 36110)
132s 46.020% <= 1.407 milliseconds (cumulative count 46020)
132s 55.580% <= 1.503 milliseconds (cumulative count 55580)
132s 64.550% <= 1.607 milliseconds (cumulative count 64550)
132s 72.460% <= 1.703 milliseconds (cumulative count 72460)
132s 80.220% <= 1.807 milliseconds (cumulative count 80220)
132s 85.740% <= 1.903 milliseconds (cumulative count 85740)
132s 87.990% <= 2.007 milliseconds (cumulative count 87990)
132s 90.310% <= 2.103 milliseconds (cumulative count 90310)
132s 99.440% <= 3.103 milliseconds (cumulative count 99440)
132s 100.000% <= 4.103 milliseconds (cumulative count 100000)
132s 
132s Summary:
132s   throughput summary: 301204.84 requests per second
132s   latency summary (msec):
132s           avg       min       p50       p95       p99       max
132s         1.389     0.208     1.447     2.327     2.847     4.079
133s SET: rps=8333.3 (overall: 262500.0) avg_msec=1.716 (overall: 1.716)
133s SET: rps=213027.9 (overall: 214556.0) avg_msec=2.046 (overall: 2.034)
133s ====== SET ======
133s   100000 requests completed in 0.47 seconds
133s   50 parallel clients
133s   3 bytes payload
133s   keep alive: 1
133s   host configuration "save": 3600 1 300 100 60 10000
133s   host configuration "appendonly": no
133s   multi-thread: no
133s 
133s Latency by percentile distribution:
133s 0.000% <= 0.279 milliseconds (cumulative count 20)
133s 50.000% <= 2.015 milliseconds (cumulative count 50620)
133s 75.000% <= 2.423 milliseconds (cumulative count 75050)
133s 87.500% <= 2.687 milliseconds (cumulative count 87670)
133s 93.750% <= 2.927 milliseconds (cumulative count 93810)
133s 96.875% <= 3.175 milliseconds (cumulative count 96940)
133s 98.438% <= 3.511 milliseconds (cumulative count 98440)
133s 99.219% <= 4.119 milliseconds (cumulative count 99220)
133s 99.609% <= 4.687 milliseconds (cumulative count 99610)
133s 99.805% <= 4.903 milliseconds (cumulative count 99810)
133s 99.902% <= 5.015 milliseconds (cumulative count 99910)
133s 99.951% <= 5.071 milliseconds (cumulative count 99960)
133s 99.976% <= 5.095 milliseconds (cumulative count 99980)
133s 99.988% <= 5.103 milliseconds (cumulative count 99990)
133s 99.994% <= 5.119 milliseconds (cumulative count 100000)
133s 100.000% <= 5.119 milliseconds (cumulative count 100000)
133s 
133s Cumulative distribution of latencies:
133s 0.000% <= 0.103 milliseconds (cumulative count 0)
133s 0.050% <= 0.303 milliseconds (cumulative count 50)
133s 0.710% <= 0.407 milliseconds (cumulative count 710)
133s 1.440% <= 0.503 milliseconds (cumulative count 1440)
133s 1.970% <= 0.607 milliseconds (cumulative count 1970)
133s 2.190% <= 0.703 milliseconds (cumulative count 2190)
133s 2.300% <= 0.807 milliseconds (cumulative count 2300)
133s 2.410% <= 0.903 milliseconds (cumulative count 2410)
133s 2.620% <= 1.007 milliseconds (cumulative count 2620)
133s 2.970% <= 1.103 milliseconds (cumulative count 2970)
133s 3.680% <= 1.207 milliseconds (cumulative count 3680)
133s 4.990% <= 1.303 milliseconds (cumulative count 4990)
133s 7.190% <= 1.407 milliseconds (cumulative count 7190)
133s 10.390% <= 1.503 milliseconds (cumulative count 10390)
133s 14.960% <= 1.607 milliseconds (cumulative count 14960)
133s 23.340% <= 1.703 milliseconds (cumulative count 23340)
133s 32.530% <= 1.807 milliseconds (cumulative count 32530)
133s 40.820% <= 1.903 milliseconds (cumulative count 40820)
133s 49.940% <= 2.007 milliseconds (cumulative count 49940)
133s 57.890% <= 2.103 milliseconds (cumulative count 57890)
133s 96.160% <= 3.103 milliseconds (cumulative count 96160)
133s 99.200% <= 4.103 milliseconds (cumulative count 99200)
133s 99.990% <= 5.103 milliseconds (cumulative count 99990)
133s 100.000% <= 6.103 milliseconds (cumulative count 100000)
133s 
133s Summary:
133s   throughput summary: 210526.31 requests per second
133s   latency summary (msec):
133s           avg       min       p50       p95       p99       max
133s         2.077     0.272     2.015     3.015     3.927     5.119
133s GET: rps=37051.8 (overall: 273529.4) avg_msec=1.546 (overall: 1.546)
133s GET: rps=243600.0 (overall: 247183.1) avg_msec=1.762 (overall: 1.734)
133s ====== GET ======
133s   100000 requests completed in 0.41 seconds
133s   50 parallel clients
133s   3 bytes payload
133s   keep alive: 1
133s   host configuration "save": 3600 1 300 100 60 10000
133s   host configuration "appendonly": no
133s   multi-thread: no
133s 
133s Latency by percentile distribution:
133s 0.000% <= 0.279 milliseconds (cumulative count 20)
133s 50.000% <= 1.735 milliseconds (cumulative count 50750)
133s 75.000% <= 1.951 milliseconds (cumulative count 75720)
133s 87.500% <= 2.199 milliseconds (cumulative count 87760)
133s 93.750% <= 2.415 milliseconds (cumulative count 93760)
133s 96.875% <= 2.591 milliseconds (cumulative count 96910)
133s 98.438% <= 3.207 milliseconds (cumulative count 98440)
133s 99.219% <= 4.319 milliseconds (cumulative count 99220)
133s 99.609% <= 5.807 milliseconds (cumulative count 99610)
133s 99.805% <= 6.031 milliseconds (cumulative count 99810)
133s 99.902% <= 6.143 milliseconds (cumulative count 99910)
133s 99.951% <= 6.199 milliseconds (cumulative count 99960)
133s 99.976% <= 6.223 milliseconds (cumulative count 99980)
133s 99.988% <= 6.231 milliseconds (cumulative count 99990)
133s 99.994% <= 6.247 milliseconds (cumulative count 100000)
133s 100.000% <= 6.247 milliseconds (cumulative count 100000)
133s 
133s Cumulative distribution of latencies:
133s 0.000% <= 0.103 milliseconds (cumulative count 0)
133s 0.050% <= 0.303 milliseconds (cumulative count 50)
133s 0.600% <= 0.407 milliseconds (cumulative count 600)
133s 1.120% <= 0.503 milliseconds (cumulative count 1120)
133s 1.560% <= 0.607 milliseconds (cumulative count 1560)
133s 2.340% <= 0.703 milliseconds (cumulative count 2340)
133s 3.050% <= 0.807 milliseconds (cumulative count 3050)
133s 3.790% <= 0.903 milliseconds (cumulative count 3790)
133s 4.810% <= 1.007 milliseconds (cumulative count 4810)
133s 5.860% <= 1.103 milliseconds (cumulative count 5860)
133s 7.400% <= 1.207 milliseconds (cumulative count 7400)
133s 9.380% <= 1.303 milliseconds (cumulative count 9380)
133s 12.580% <= 1.407 milliseconds (cumulative count 12580)
133s 21.970% <= 1.503 milliseconds (cumulative count 21970)
133s 34.720% <= 1.607 milliseconds (cumulative count 34720)
133s 46.850% <= 1.703 milliseconds (cumulative count 46850)
133s 59.320% <= 1.807 milliseconds (cumulative count 59320)
133s 70.410% <= 1.903 milliseconds (cumulative count 70410)
133s 81.430% <= 2.007 milliseconds (cumulative count 81430)
133s 84.800% <= 2.103 milliseconds (cumulative count 84800)
133s 98.350% <= 3.103 milliseconds (cumulative count 98350)
133s 98.940% <= 4.103 milliseconds (cumulative count 98940)
133s 99.500% <= 5.103 milliseconds (cumulative count 99500)
133s 99.870% <= 6.103 milliseconds (cumulative count 99870)
133s 100.000% <= 7.103 milliseconds (cumulative count 100000)
133s 
133s Summary:
133s   throughput summary: 240963.86 requests per second
133s   latency summary (msec):
133s           avg       min       p50       p95       p99       max
133s         1.773     0.272     1.735     2.471     4.143     6.247
134s INCR: rps=110515.9 (overall: 232083.3) avg_msec=1.854 (overall: 1.854)
134s INCR: rps=227400.0 (overall: 228918.9) avg_msec=1.893 (overall: 1.880)
134s ====== INCR ======
134s   100000 requests completed in 0.44 seconds
134s   50 parallel clients
134s   3 bytes payload
134s   keep alive: 1
134s   host configuration "save": 3600 1 300 100 60 10000
134s   host configuration "appendonly": no
134s   multi-thread: no
134s 
134s Latency by percentile distribution:
134s 0.000% <= 0.295 milliseconds (cumulative count 10)
134s 50.000% <= 1.807 milliseconds (cumulative count 50480)
134s 75.000% <= 2.055 milliseconds (cumulative count 75250)
134s 87.500% <= 2.351 milliseconds (cumulative count 87560)
134s 93.750% <= 2.591 milliseconds (cumulative count 93820)
134s 96.875% <= 3.735 milliseconds (cumulative count 96880)
134s 98.438% <= 4.119 milliseconds (cumulative count 98460)
134s 99.219% <= 4.383 milliseconds (cumulative count 99230)
134s 99.609% <= 4.663 milliseconds (cumulative count 99610)
134s 99.805% <= 4.895 milliseconds (cumulative count 99810)
134s 99.902% <= 5.007 milliseconds (cumulative count 99910)
134s 99.951% <= 5.063 milliseconds (cumulative count 99960)
134s 99.976% <= 5.087 milliseconds (cumulative count 99980)
134s 99.988% <= 5.095 milliseconds (cumulative count 99990)
134s 99.994% <= 5.111 milliseconds (cumulative count 100000)
134s 100.000% <= 5.111 milliseconds (cumulative count 100000)
134s 
134s Cumulative distribution of latencies:
134s 0.000% <= 0.103 milliseconds (cumulative count 0)
134s 0.030% <= 0.303 milliseconds (cumulative count 30)
134s 0.910% <= 0.407 milliseconds (cumulative count 910)
134s 1.510% <= 0.503 milliseconds (cumulative count 1510)
134s 1.980% <= 0.607 milliseconds (cumulative count 1980)
134s 2.350% <= 0.703 milliseconds (cumulative count 2350)
134s 2.670% <= 0.807 milliseconds (cumulative count 2670)
134s 2.930% <= 0.903 milliseconds (cumulative count 2930)
134s 3.300% <= 1.007 milliseconds (cumulative count 3300)
134s 3.980% <= 1.103 milliseconds (cumulative count 3980)
134s 5.380% <= 1.207 milliseconds (cumulative count 5380)
134s 7.280% <= 1.303 milliseconds (cumulative count 7280)
134s 10.250% <= 1.407 milliseconds (cumulative count 10250)
134s 17.290% <= 1.503 milliseconds (cumulative count 17290)
134s 28.530% <= 1.607 milliseconds (cumulative count 28530)
134s 38.900% <= 1.703 milliseconds (cumulative count 38900)
134s 50.480% <= 1.807 milliseconds (cumulative count 50480)
134s 61.140% <= 1.903 milliseconds (cumulative count 61140)
134s 72.140% <= 2.007 milliseconds (cumulative count 72140)
134s 77.400% <= 2.103 milliseconds (cumulative count 77400)
134s 95.900% <= 3.103 milliseconds (cumulative count 95900)
134s 98.400% <= 4.103 milliseconds (cumulative count 98400)
134s 99.990% <= 5.103 milliseconds (cumulative count 99990)
134s 100.000% <= 6.103 milliseconds (cumulative count 100000)
134s 
134s Summary:
134s   throughput summary: 228310.50 requests per second
134s   latency summary (msec):
134s           avg       min       p50       p95       p99       max
134s         1.884     0.288     1.807     2.751     4.303     5.111
134s LPUSH: rps=166600.0 (overall: 230110.5) avg_msec=1.902 (overall: 1.902)
134s LPUSH: rps=222031.9 (overall: 225416.7) avg_msec=1.984 (overall: 1.949)
134s ====== LPUSH ======
134s   100000 requests completed in 0.44 seconds
134s   50 parallel clients
134s   3 bytes payload
134s   keep alive: 1
134s   host configuration "save": 3600 1 300 100 60 10000
134s   host configuration "appendonly": no
134s   multi-thread: no
134s 
134s Latency by percentile distribution:
134s 0.000% <= 0.327 milliseconds (cumulative count 10)
134s 50.000% <= 1.911 milliseconds (cumulative count 50670)
134s 75.000% <= 2.215 milliseconds (cumulative count 75230)
134s 87.500% <= 2.559 milliseconds (cumulative count 87720)
134s 93.750% <= 2.807 milliseconds (cumulative count 93820)
134s 96.875% <= 3.111 milliseconds (cumulative count 96940)
134s 98.438% <= 4.367 milliseconds (cumulative count 98440)
134s 99.219% <= 4.743 milliseconds (cumulative count 99230)
134s 99.609% <= 5.159 milliseconds (cumulative count 99610)
134s 99.805% <= 5.391 milliseconds (cumulative count 99810)
134s 99.902% <= 5.511 milliseconds (cumulative count 99910)
134s 99.951% <= 5.591 milliseconds (cumulative count 99960)
134s 99.976% <= 5.623 milliseconds (cumulative count 99980)
134s 99.988% <= 5.639 milliseconds (cumulative count 99990)
134s 99.994% <= 5.655 milliseconds (cumulative count 100000)
134s 100.000% <= 5.655 milliseconds (cumulative count 100000)
134s 
134s Cumulative distribution of latencies:
134s 0.000% <= 0.103 milliseconds (cumulative count 0)
134s 0.200% <= 0.407 milliseconds (cumulative count 200)
134s 0.540% <= 0.503 milliseconds (cumulative count 540)
134s 1.110% <= 0.607 milliseconds (cumulative count 1110)
134s 1.350% <= 0.703 milliseconds (cumulative count 1350)
134s 1.800% <= 0.807 milliseconds (cumulative count 1800)
134s 2.770% <= 0.903 milliseconds (cumulative count 2770)
134s 4.620% <= 1.007 milliseconds (cumulative count 4620)
134s 6.670% <= 1.103 milliseconds (cumulative count 6670)
134s 9.360% <= 1.207 milliseconds (cumulative count 9360)
134s 12.330% <= 1.303 milliseconds (cumulative count 12330)
134s 15.680% <= 1.407 milliseconds (cumulative count 15680)
134s 18.990% <= 1.503 milliseconds (cumulative count 18990)
134s 24.450% <= 1.607 milliseconds (cumulative count 24450)
134s 32.120% <= 1.703 milliseconds (cumulative count 32120)
134s 41.050% <= 1.807 milliseconds (cumulative count 41050)
134s 49.930% <= 1.903 milliseconds (cumulative count 49930)
134s 59.550% <= 2.007 milliseconds (cumulative count 59550)
134s 68.020% <= 2.103 milliseconds (cumulative count 68020)
134s 96.860% <= 3.103 milliseconds (cumulative count 96860)
134s 98.130% <= 4.103 milliseconds (cumulative count 98130)
134s 99.560% <= 5.103 milliseconds (cumulative count 99560)
134s 100.000% <= 6.103 milliseconds (cumulative count 100000)
134s 
134s Summary:
134s   throughput summary: 224719.11 requests per second
134s   latency summary (msec):
134s           avg       min       p50       p95       p99       max
134s         1.953     0.320     1.911     2.895     4.615     5.655
135s RPUSH: rps=202310.8 (overall: 215169.5) avg_msec=2.015 (overall: 2.015)
135s ====== RPUSH ======
135s   100000 requests completed in 0.45 seconds
135s   50 parallel clients
135s   3 bytes payload
135s   keep alive: 1
135s   host configuration "save": 3600 1 300 100 60 10000
135s   host configuration "appendonly": no
135s   multi-thread: no
135s 
135s Latency by percentile distribution:
135s 0.000% <= 0.279 milliseconds (cumulative count 10)
135s 50.000% <= 1.903 milliseconds (cumulative count 50260)
135s 75.000% <= 2.247 milliseconds (cumulative count 75110)
135s 87.500% <= 2.559 milliseconds (cumulative count 87720)
135s 93.750% <= 2.759 milliseconds (cumulative count 93850)
135s 96.875% <= 3.063 milliseconds (cumulative count 96880)
135s 98.438% <= 3.463 milliseconds (cumulative count 98440)
135s 99.219% <= 4.119 milliseconds (cumulative count 99220)
135s 99.609% <= 4.343 milliseconds (cumulative count 99610)
135s 99.805% <= 4.471 milliseconds (cumulative count 99810)
135s 99.902% <= 4.591 milliseconds (cumulative count 99910)
135s 99.951% <= 4.655 milliseconds (cumulative count 99960)
135s 99.976% <= 4.679 milliseconds (cumulative count 99980)
135s 99.988% <= 4.687 milliseconds (cumulative count 99990)
135s 99.994% <= 4.703 milliseconds (cumulative count 100000)
135s 100.000% <= 4.703 milliseconds (cumulative count 100000)
135s 
135s Cumulative distribution of latencies:
135s 0.000% <= 0.103 milliseconds (cumulative count 0)
135s 0.030% <= 0.303 milliseconds (cumulative count 30)
135s 0.390% <= 0.407 milliseconds (cumulative count 390)
135s 0.720% <= 0.503 milliseconds (cumulative count 720)
135s 1.360% <= 0.607 milliseconds (cumulative count 1360)
135s 1.690% <= 0.703 milliseconds (cumulative count 1690)
135s 2.300% <= 0.807 milliseconds (cumulative count 2300)
135s 3.570% <= 0.903 milliseconds (cumulative count 3570)
135s 4.940% <= 1.007 milliseconds (cumulative count 4940)
135s 6.390% <= 1.103 milliseconds (cumulative count 6390)
135s 8.240% <= 1.207 milliseconds (cumulative count 8240)
135s 10.380% <= 1.303 milliseconds (cumulative count 10380)
135s 13.130% <= 1.407 milliseconds (cumulative count 13130)
135s 16.090% <= 1.503 milliseconds (cumulative count 16090)
135s 23.630% <= 1.607 milliseconds (cumulative count 23630)
135s 32.370% <= 1.703 milliseconds (cumulative count 32370)
135s 41.690% <= 1.807 milliseconds (cumulative count 41690)
135s 50.260% <= 1.903 milliseconds (cumulative count 50260)
135s 59.590% <= 2.007 milliseconds (cumulative count 59590)
135s 68.090% <= 2.103 milliseconds (cumulative count 68090)
135s 97.140% <= 3.103 milliseconds (cumulative count 97140)
135s 99.200% <= 4.103 milliseconds (cumulative count 99200)
135s 100.000% <= 5.103 milliseconds (cumulative count 100000)
135s 
135s Summary:
135s   throughput summary: 223214.28 requests per second
135s   latency summary (msec):
135s           avg       min       p50       p95       p99       max
135s         1.943     0.272     1.903     2.847     3.983     4.703
135s LPOP: rps=32988.1 (overall: 217894.8) avg_msec=1.899 (overall: 1.899)
135s LPOP: rps=211440.0 (overall: 212291.7) avg_msec=2.080 (overall: 2.055)
135s ====== LPOP ======
135s   100000 requests completed in 0.49 seconds
135s   50 parallel clients
135s   3 bytes payload
135s   keep alive: 1
135s   host configuration "save": 3600 1 300 100 60 10000
135s   host configuration "appendonly": no
135s   multi-thread: no
135s 
135s Latency by percentile distribution:
135s 0.000% <= 0.319 milliseconds (cumulative count 10)
135s 50.000% <= 2.087 milliseconds (cumulative count 50260)
135s 75.000% <= 2.447 milliseconds (cumulative count 75210)
135s 87.500% <= 2.703 milliseconds (cumulative count 87700)
135s 93.750% <= 2.855 milliseconds (cumulative count 93850)
135s 96.875% <= 3.071 milliseconds (cumulative count 96920)
135s 98.438% <= 3.391 milliseconds (cumulative count 98440)
135s 99.219% <= 4.351 milliseconds (cumulative count 99220)
135s 99.609% <= 4.567 milliseconds (cumulative count 99610)
135s 99.805% <= 4.679 milliseconds (cumulative count 99820)
135s 99.902% <= 4.735 milliseconds (cumulative count 99910)
135s 99.951% <= 4.791 milliseconds (cumulative count 99960)
135s 99.976% <= 4.815 milliseconds (cumulative count 99980)
135s 99.988% <= 4.823 milliseconds (cumulative count 99990)
135s 99.994% <= 4.839 milliseconds (cumulative count 100000)
135s 100.000% <= 4.839 milliseconds (cumulative count 100000)
135s 
135s Cumulative distribution of latencies:
135s 0.000% <= 0.103 milliseconds (cumulative count 0)
135s 0.180% <= 0.407 milliseconds (cumulative count 180)
135s 0.510% <= 0.503 milliseconds (cumulative count 510)
135s 1.090% <= 0.607 milliseconds (cumulative count 1090)
135s 1.380% <= 0.703 milliseconds (cumulative count 1380)
135s 1.440% <= 0.903 milliseconds (cumulative count 1440)
135s 1.610% <= 1.007 milliseconds (cumulative count 1610)
135s 1.820% <= 1.103 milliseconds (cumulative count 1820)
135s 2.010% <= 1.207 milliseconds (cumulative count 2010)
135s 2.410% <= 1.303 milliseconds (cumulative count 2410)
135s 3.220% <= 1.407 milliseconds (cumulative count 3220)
135s 4.450% <= 1.503 milliseconds (cumulative count 4450)
135s 6.820% <= 1.607 milliseconds (cumulative count 6820)
135s 11.150% <= 1.703 milliseconds (cumulative count 11150)
135s 21.250% <= 1.807 milliseconds (cumulative count 21250)
135s 31.200% <= 1.903 milliseconds (cumulative count 31200)
135s 42.000% <= 2.007 milliseconds (cumulative count 42000)
135s 51.950% <= 2.103 milliseconds (cumulative count 51950)
135s 97.190% <= 3.103 milliseconds (cumulative count 97190)
135s 98.850% <= 4.103 milliseconds (cumulative count 98850)
135s 100.000% <= 5.103 milliseconds (cumulative count 100000)
135s 
135s Summary:
135s   throughput summary: 203665.98 requests per second
135s   latency summary (msec):
135s           avg       min       p50       p95       p99       max
135s         2.155     0.312     2.087     2.903     4.223     4.839
136s RPOP: rps=44720.0 (overall: 248444.4) avg_msec=1.696 (overall: 1.696)
136s RPOP: rps=193520.0 (overall: 201898.3) avg_msec=2.312 (overall: 2.196)
136s ====== RPOP ======
136s   100000 requests completed in 0.47 seconds
136s   50 parallel clients
136s   3 bytes payload
136s   keep alive: 1
136s   host configuration "save": 3600 1 300 100 60 10000
136s   host configuration "appendonly": no
136s   multi-thread: no
136s 
136s Latency by percentile distribution:
136s 0.000% <= 0.087 milliseconds (cumulative count 10)
136s 50.000% <= 1.999 milliseconds (cumulative count 50390)
136s 75.000% <= 2.423 milliseconds (cumulative count 75060)
136s 87.500% <= 2.735 milliseconds (cumulative count 87840)
136s 93.750% <= 3.015 milliseconds (cumulative count 93820)
136s 96.875% <= 3.295 milliseconds (cumulative count 96900)
136s 98.438% <= 3.703 milliseconds (cumulative count 98450)
136s 99.219% <= 5.255 milliseconds (cumulative count 99220)
136s 99.609% <= 6.367 milliseconds (cumulative count 99610)
136s 99.805% <= 6.591 milliseconds (cumulative count 99810)
136s 99.902% <= 6.703 milliseconds (cumulative count 99910)
136s 99.951% <= 6.759 milliseconds (cumulative count 99960)
136s 99.976% <= 6.783 milliseconds (cumulative count 99980)
136s 99.988% <= 6.791 milliseconds (cumulative count 99990)
136s 99.994% <= 6.807 milliseconds (cumulative count 100000)
136s 100.000% <= 6.807 milliseconds (cumulative count 100000)
136s 
136s Cumulative distribution of latencies:
136s 0.020% <= 0.103 milliseconds (cumulative count 20)
136s 0.050% <= 0.207 milliseconds (cumulative count 50)
136s 0.120% <= 0.303 milliseconds (cumulative count 120)
136s 0.460% <= 0.407 milliseconds (cumulative count 460)
136s 0.890% <= 0.503 milliseconds (cumulative count 890)
136s 1.490% <= 0.607 milliseconds (cumulative count 1490)
136s 1.890% <= 0.703 milliseconds (cumulative count 1890)
136s 2.210% <= 0.807 milliseconds (cumulative count 2210)
136s 2.620% <= 0.903 milliseconds (cumulative count 2620)
136s 4.090% <= 1.007 milliseconds (cumulative count 4090)
136s 5.620% <= 1.103 milliseconds (cumulative count 5620)
136s 7.430% <= 1.207 milliseconds (cumulative count 7430)
136s 9.780% <= 1.303 milliseconds (cumulative count 9780)
136s 13.250% <= 1.407 milliseconds (cumulative count 13250)
136s 17.010% <= 1.503 milliseconds (cumulative count 17010)
136s 20.840% <= 1.607 milliseconds (cumulative count 20840)
136s 27.280% <= 1.703 milliseconds (cumulative count 27280)
136s 35.480% <= 1.807 milliseconds (cumulative count 35480)
136s 43.150% <= 1.903 milliseconds (cumulative count 43150)
136s 51.000% <= 2.007 milliseconds (cumulative count 51000)
136s 57.850% <= 2.103 milliseconds (cumulative count 57850)
136s 94.940% <= 3.103 milliseconds (cumulative count 94940)
136s 98.500% <= 4.103 milliseconds (cumulative count 98500)
136s 98.940% <= 5.103 milliseconds (cumulative count 98940)
136s 99.500% <= 6.103 milliseconds (cumulative count 99500)
136s 100.000% <= 7.103 milliseconds (cumulative count 100000)
136s 
136s Summary:
136s   throughput summary: 214592.28 requests per second
136s   latency summary (msec):
136s           avg       min       p50       p95       p99       max
136s         2.068     0.080     1.999     3.111     5.135     6.807
136s SADD: rps=61394.4 (overall: 197564.1) avg_msec=2.085 (overall: 2.085)
136s SADD: rps=240876.5 (overall: 230607.9) avg_msec=1.835 (overall: 1.886)
136s ====== SADD ======
136s   100000 requests completed in 0.45 seconds
136s   50 parallel clients
136s   3 bytes payload
136s   keep alive: 1
136s   host configuration "save": 3600 1 300 100 60 10000
136s   host configuration "appendonly": no
136s   multi-thread: no
136s 
136s Latency by percentile distribution:
136s 0.000% <= 0.263 milliseconds (cumulative count 10)
136s 50.000% <= 1.855 milliseconds (cumulative count 50070)
136s 75.000% <= 2.263 milliseconds (cumulative count 75360)
136s 87.500% <= 2.567 milliseconds (cumulative count 87660)
136s 93.750% <= 2.895 milliseconds (cumulative count 93760)
136s 96.875% <= 4.047 milliseconds (cumulative count 96880)
136s 98.438% <= 4.967 milliseconds (cumulative count 98450)
136s 99.219% <= 5.775 milliseconds (cumulative count 99220)
136s 99.609% <= 7.783 milliseconds (cumulative count 99610)
136s 99.805% <= 8.007 milliseconds (cumulative count 99810)
136s 99.902% <= 8.127 milliseconds (cumulative count 99910)
136s 99.951% <= 8.183 milliseconds (cumulative count 99960)
136s 99.976% <= 8.207 milliseconds (cumulative count 99980)
136s 99.988% <= 8.223 milliseconds (cumulative count 99990)
136s 99.994% <= 8.231 milliseconds (cumulative count 100000)
136s 100.000% <= 8.231 milliseconds (cumulative count 100000)
136s 
136s Cumulative distribution of latencies:
136s 0.000% <= 0.103 milliseconds (cumulative count 0)
136s 0.130% <= 0.303 milliseconds (cumulative count 130)
136s 1.170% <= 0.407 milliseconds (cumulative count 1170)
136s 2.140% <= 0.503 milliseconds (cumulative count 2140)
136s 2.880% <= 0.607 milliseconds (cumulative count 2880)
136s 3.210% <= 0.703 milliseconds (cumulative count 3210)
136s 3.780% <= 0.807 milliseconds (cumulative count 3780)
136s 4.490% <= 0.903 milliseconds (cumulative count 4490)
136s 5.870% <= 1.007 milliseconds (cumulative count 5870)
136s 7.540% <= 1.103 milliseconds (cumulative count 7540)
136s 9.770% <= 1.207 milliseconds (cumulative count 9770)
136s 12.760% <= 1.303 milliseconds (cumulative count 12760)
136s 16.510% <= 1.407 milliseconds (cumulative count 16510)
136s 21.300% <= 1.503 milliseconds (cumulative count 21300)
136s 29.390% <= 1.607 milliseconds (cumulative count 29390)
136s 37.620% <= 1.703 milliseconds (cumulative count 37620)
136s 46.280% <= 1.807 milliseconds (cumulative count 46280)
136s 53.930% <= 1.903 milliseconds (cumulative count 53930)
136s 62.010% <= 2.007 milliseconds (cumulative count 62010)
136s 67.670% <= 2.103 milliseconds (cumulative count 67670)
136s 95.140% <= 3.103 milliseconds (cumulative count 95140)
136s 96.940% <= 4.103 milliseconds (cumulative count 96940)
136s 98.790% <= 5.103 milliseconds (cumulative count 98790)
136s 99.500% <= 6.103 milliseconds (cumulative count 99500)
136s 99.890% <= 8.103 milliseconds (cumulative count 99890)
136s 100.000% <= 9.103 milliseconds (cumulative count 100000)
136s 
136s Summary:
136s   throughput summary: 221238.94 requests per second
136s   latency summary (msec):
136s           avg       min       p50       p95       p99       max
136s         1.971     0.256     1.855     3.079     5.255     8.231
137s HSET: rps=120796.8 (overall: 240634.9) avg_msec=1.786 (overall: 1.786)
137s HSET: rps=217440.0 (overall: 225212.8) avg_msec=2.029 (overall: 1.942)
137s ====== HSET ======
137s   100000 requests completed in 0.44 seconds
137s   50 parallel clients
137s   3 bytes payload
137s   keep alive: 1
137s   host configuration "save": 3600 1 300 100 60 10000
137s   host configuration "appendonly": no
137s   multi-thread: no
137s 
137s Latency by percentile distribution:
137s 0.000% <= 0.207 milliseconds (cumulative count 10)
137s 50.000% <= 1.919 milliseconds (cumulative count 50370)
137s 75.000% <= 2.183 milliseconds (cumulative count 75350)
137s 87.500% <= 2.535 milliseconds (cumulative count 87560)
137s 93.750% <= 2.775 milliseconds (cumulative count 93900)
137s 96.875% <= 3.023 milliseconds (cumulative count 96890)
137s 98.438% <= 3.295 milliseconds (cumulative count 98440)
137s 99.219% <= 4.023 milliseconds (cumulative count 99220)
137s 99.609% <= 4.255 milliseconds (cumulative count 99620)
137s 99.805% <= 4.383 milliseconds (cumulative count 99810)
137s 99.902% <= 4.495 milliseconds (cumulative count 99910)
137s 99.951% <= 4.551 milliseconds (cumulative count 99960)
137s 99.976% <= 4.567 milliseconds (cumulative count 99980)
137s 99.988% <= 4.583 milliseconds (cumulative count 99990)
137s 99.994% <= 4.599 milliseconds (cumulative count 100000)
137s 100.000% <= 4.599 milliseconds (cumulative count 100000)
137s 
137s Cumulative distribution of latencies:
137s 0.000% <= 0.103 milliseconds (cumulative count 0)
137s 0.010% <= 0.207 milliseconds (cumulative count 10)
137s 0.060% <= 0.303 milliseconds (cumulative count 60)
137s 0.540% <= 0.407 milliseconds (cumulative count 540)
137s 1.290% <= 0.503 milliseconds (cumulative count 1290)
137s 2.920% <= 0.607 milliseconds (cumulative count 2920)
137s 3.430% <= 0.703 milliseconds (cumulative count 3430)
137s 3.770% <= 0.807 milliseconds (cumulative count 3770)
137s 4.000% <= 0.903 milliseconds (cumulative count 4000)
137s 4.640% <= 1.007 milliseconds (cumulative count 4640)
137s 5.420% <= 1.103 milliseconds (cumulative count 5420)
137s 6.660% <= 1.207 milliseconds (cumulative count 6660)
137s 8.990% <= 1.303 milliseconds (cumulative count 8990)
137s 12.610% <= 1.407 milliseconds (cumulative count 12610)
137s 16.560% <= 1.503 milliseconds (cumulative count 16560)
137s 20.470% <= 1.607 milliseconds (cumulative count 20470)
137s 28.580% <= 1.703 milliseconds (cumulative count 28580)
137s 39.110% <= 1.807 milliseconds (cumulative count 39110)
137s 48.730% <= 1.903 milliseconds (cumulative count 48730)
137s 58.980% <= 2.007 milliseconds (cumulative count 58980)
137s 67.960% <= 2.103 milliseconds (cumulative count 67960)
137s 97.620% <= 3.103 milliseconds (cumulative count 97620)
137s 99.350% <= 4.103 milliseconds (cumulative count 99350)
137s 100.000% <= 5.103 milliseconds (cumulative count 100000)
137s 
137s Summary:
137s   throughput summary: 225225.22 requests per second
137s   latency summary (msec):
137s           avg       min       p50       p95       p99       max
137s         1.941     0.200     1.919     2.831     3.839     4.599
137s SPOP: rps=203187.3 (overall: 281768.0) avg_msec=1.507 (overall: 1.507)
137s ====== SPOP ======
137s   100000 requests completed in 0.34 seconds
137s   50 parallel clients
137s   3 bytes payload
137s   keep alive: 1
137s   host configuration "save": 3600 1 300 100 60 10000
137s   host configuration "appendonly": no
137s   multi-thread: no
137s 
137s Latency by percentile distribution:
137s 0.000% <= 0.263 milliseconds (cumulative count 10)
137s 50.000% <= 1.527 milliseconds (cumulative count 50450)
137s 75.000% <= 1.807 milliseconds (cumulative count 75550)
137s 87.500% <= 1.935 milliseconds (cumulative count 87650)
137s 93.750% <= 2.119 milliseconds (cumulative count 93830)
137s 96.875% <= 2.359 milliseconds (cumulative count 96950)
137s 98.438% <= 2.511 milliseconds (cumulative count 98470)
137s 99.219% <= 2.631 milliseconds (cumulative count 99220)
137s 99.609% <= 3.823 milliseconds (cumulative count 99610)
137s 99.805% <= 4.047 milliseconds (cumulative count 99810)
137s 99.902% <= 4.151 milliseconds (cumulative count 99910)
137s 99.951% <= 4.207 milliseconds (cumulative count 99960)
137s 99.976% <= 4.231 milliseconds (cumulative count 99980)
137s 99.988% <= 4.247 milliseconds (cumulative count 99990)
137s 99.994% <= 4.263 milliseconds (cumulative count 100000)
137s 100.000% <= 4.263 milliseconds (cumulative count 100000)
137s 
137s Cumulative distribution of latencies:
137s 0.000% <= 0.103 milliseconds (cumulative count 0)
137s 0.100% <= 0.303 milliseconds (cumulative count 100)
137s 1.400% <= 0.407 milliseconds (cumulative count 1400)
137s 2.770% <= 0.503 milliseconds (cumulative count 2770)
137s 5.320% <= 0.607 milliseconds (cumulative count 5320)
137s 7.940% <= 0.703 milliseconds (cumulative count 7940)
137s 12.660% <= 0.807 milliseconds (cumulative count 12660)
137s 17.310% <= 0.903 milliseconds (cumulative count 17310)
137s 22.230% <= 1.007 milliseconds (cumulative count 22230)
137s 27.050% <= 1.103 milliseconds (cumulative count 27050)
137s 31.850% <= 1.207 milliseconds (cumulative count 31850)
137s 35.500% <= 1.303 milliseconds (cumulative count 35500)
137s 40.330% <= 1.407 milliseconds (cumulative count 40330)
137s 48.510% <= 1.503 milliseconds (cumulative count 48510)
137s 57.410% <= 1.607 milliseconds (cumulative count 57410)
137s 65.790% <= 1.703 milliseconds (cumulative count 65790)
137s 75.550% <= 1.807 milliseconds (cumulative count 75550)
137s 85.040% <= 1.903 milliseconds (cumulative count 85040)
137s 90.800% <= 2.007 milliseconds (cumulative count 90800)
137s 93.580% <= 2.103 milliseconds (cumulative count 93580)
137s 99.500% <= 3.103 milliseconds (cumulative count 99500)
137s 99.860% <= 4.103 milliseconds (cumulative count 99860)
137s 100.000% <= 5.103 milliseconds (cumulative count 100000)
137s 
137s Summary:
137s   throughput summary: 293255.12 requests per second
137s   latency summary (msec):
137s           avg       min       p50       p95       p99       max
137s         1.455     0.256     1.527     2.199     2.591     4.263
137s ZADD: rps=82240.0 (overall: 233636.4) avg_msec=1.880 (overall: 1.880)
137s ZADD: rps=221872.5 (overall: 224926.3) avg_msec=1.983 (overall: 1.955)
137s ====== ZADD ======
137s   100000 requests completed in 0.45 seconds
137s   50 parallel clients
137s   3 bytes payload
137s   keep alive: 1
137s   host configuration "save": 3600 1 300 100 60 10000
137s   host configuration "appendonly": no
137s   multi-thread: no
137s 
137s Latency by percentile distribution:
137s 0.000% <= 0.287 milliseconds (cumulative count 10)
137s 50.000% <= 1.935 milliseconds (cumulative count 50390)
137s 75.000% <= 2.263 milliseconds (cumulative count 75070)
137s 87.500% <= 2.575 milliseconds (cumulative count 87750)
137s 93.750% <= 2.767 milliseconds (cumulative count 93920)
137s 96.875% <= 2.887 milliseconds (cumulative count 96920)
137s 98.438% <= 3.071 milliseconds (cumulative count 98440)
137s 99.219% <= 3.175 milliseconds (cumulative count 99250)
137s 99.609% <= 4.759 milliseconds (cumulative count 99610)
137s 99.805% <= 4.983 milliseconds (cumulative count 99810)
137s 99.902% <= 5.087 milliseconds (cumulative count 99910)
137s 99.951% <= 5.143 milliseconds (cumulative count 99960)
137s 99.976% <= 5.167 milliseconds (cumulative count 99980)
137s 99.988% <= 5.183 milliseconds (cumulative count 99990)
137s 99.994% <= 5.199 milliseconds (cumulative count 100000)
137s 100.000% <= 5.199 milliseconds (cumulative count 100000)
137s 
137s Cumulative distribution of latencies:
137s 0.000% <= 0.103 milliseconds (cumulative count 0)
137s 0.030% <= 0.303 milliseconds (cumulative count 30)
137s 0.700% <= 0.407 milliseconds (cumulative count 700)
137s 1.320% <= 0.503 milliseconds (cumulative count 1320)
137s 2.270% <= 0.607 milliseconds (cumulative count 2270)
137s 2.780% <= 0.703 milliseconds (cumulative count 2780)
137s 2.980% <= 0.807 milliseconds (cumulative count 2980)
137s 3.240% <= 0.903 milliseconds (cumulative count 3240)
137s 4.160% <= 1.007 milliseconds (cumulative count 4160)
137s 5.390% <= 1.103 milliseconds (cumulative count 5390)
137s 7.010% <= 1.207 milliseconds (cumulative count 7010)
137s 9.310% <= 1.303 milliseconds (cumulative count 9310)
137s 12.950% <= 1.407 milliseconds (cumulative count 12950)
137s 16.850% <= 1.503 milliseconds (cumulative count 16850)
137s 21.460% <= 1.607 milliseconds (cumulative count 21460)
137s 28.150% <= 1.703 milliseconds (cumulative count 28150)
137s 38.320% <= 1.807 milliseconds (cumulative count 38320)
137s 47.600% <= 1.903 milliseconds (cumulative count 47600)
137s 56.640% <= 2.007 milliseconds (cumulative count 56640)
137s 64.780% <= 2.103 milliseconds (cumulative count 64780)
137s 98.690% <= 3.103 milliseconds (cumulative count 98690)
137s 99.500% <= 4.103 milliseconds (cumulative count 99500)
137s 99.920% <= 5.103 milliseconds (cumulative count 99920)
137s 100.000% <= 6.103 milliseconds (cumulative count 100000)
137s 
137s Summary:
137s   throughput summary: 223713.64 requests per second
137s   latency summary (msec):
137s           avg       min       p50       p95       p99       max
137s         1.955     0.280     1.935     2.807     3.143     5.199
138s ZPOPMIN: rps=140079.7 (overall: 251142.9) avg_msec=1.668 (overall: 1.668)
138s ZPOPMIN: rps=257640.0 (overall: 255307.7) avg_msec=1.668 (overall: 1.668)
138s ====== ZPOPMIN ======
138s   100000 requests completed in 0.39 seconds
138s   50 parallel clients
138s   3 bytes payload
138s   keep alive: 1
138s   host configuration "save": 3600 1 300 100 60 10000
138s   host configuration "appendonly": no
138s   multi-thread: no
138s 
138s Latency by percentile distribution:
138s 0.000% <= 0.263 milliseconds (cumulative count 10)
138s 50.000% <= 1.647 milliseconds (cumulative count 50860)
138s 75.000% <= 1.879 milliseconds (cumulative count 75660)
138s 87.500% <= 2.175 milliseconds (cumulative count 87600)
138s 93.750% <= 2.407 milliseconds (cumulative count 93850)
138s 96.875% <= 2.559 milliseconds (cumulative count 96970)
138s 98.438% <= 3.783 milliseconds (cumulative count 98440)
138s 99.219% <= 4.119 milliseconds (cumulative count 99240)
138s 99.609% <= 4.303 milliseconds (cumulative count 99610)
138s 99.805% <= 4.431 milliseconds (cumulative count 99810)
138s 99.902% <= 4.543 milliseconds (cumulative count 99910)
138s 99.951% <= 4.599 milliseconds (cumulative count 99960)
138s 99.976% <= 4.615 milliseconds (cumulative count 99980)
138s 99.988% <= 4.631 milliseconds (cumulative count 99990)
138s 99.994% <= 4.647 milliseconds (cumulative count 100000)
138s 100.000% <= 4.647 milliseconds (cumulative count 100000)
138s 
138s Cumulative distribution of latencies:
138s 0.000% <= 0.103 milliseconds (cumulative count 0)
138s 0.160% <= 0.303 milliseconds (cumulative count 160)
138s 1.960% <= 0.407 milliseconds (cumulative count 1960)
138s 3.650% <= 0.503 milliseconds (cumulative count 3650)
138s 5.080% <= 0.607 milliseconds (cumulative count 5080)
138s 5.950% <= 0.703 milliseconds (cumulative count 5950)
138s 6.490% <= 0.807 milliseconds (cumulative count 6490)
138s 6.950% <= 0.903 milliseconds (cumulative count 6950)
138s 8.100% <= 1.007 milliseconds (cumulative count 8100)
138s 9.940% <= 1.103 milliseconds (cumulative count 9940)
138s 13.160% <= 1.207 milliseconds (cumulative count 13160)
138s 16.850% <= 1.303 milliseconds (cumulative count 16850)
138s 22.170% <= 1.407 milliseconds (cumulative count 22170)
138s 33.930% <= 1.503 milliseconds (cumulative count 33930)
138s 46.320% <= 1.607 milliseconds (cumulative count 46320)
138s 57.100% <= 1.703 milliseconds (cumulative count 57100)
138s 68.220% <= 1.807 milliseconds (cumulative count 68220)
138s 78.060% <= 1.903 milliseconds (cumulative count 78060)
138s 82.940% <= 2.007 milliseconds (cumulative count 82940)
138s 85.490% <= 2.103 milliseconds (cumulative count 85490)
138s 97.850% <= 3.103 milliseconds (cumulative count 97850)
138s 99.200% <= 4.103 milliseconds (cumulative count 99200)
138s 100.000% <= 5.103 milliseconds (cumulative count 100000)
138s 
138s Summary:
138s   throughput summary: 255754.47 requests per second
138s   latency summary (msec):
138s           avg       min       p50       p95       p99       max
138s         1.665     0.256     1.647     2.463     4.015     4.647
rps=228645.4 (overall: 230481.9) avg_msec=1.889 (overall: 1.889) ====== LPUSH (needed to benchmark LRANGE) ====== 138s 100000 requests completed in 0.45 seconds 138s 50 parallel clients 138s 3 bytes payload 138s keep alive: 1 138s host configuration "save": 3600 1 300 100 60 10000 138s host configuration "appendonly": no 138s multi-thread: no 138s 138s Latency by percentile distribution: 138s 0.000% <= 0.279 milliseconds (cumulative count 10) 138s 50.000% <= 1.919 milliseconds (cumulative count 50240) 138s 75.000% <= 2.247 milliseconds (cumulative count 75220) 138s 87.500% <= 2.543 milliseconds (cumulative count 87560) 138s 93.750% <= 2.703 milliseconds (cumulative count 94000) 138s 96.875% <= 2.783 milliseconds (cumulative count 96890) 138s 98.438% <= 2.903 milliseconds (cumulative count 98470) 138s 99.219% <= 3.175 milliseconds (cumulative count 99220) 138s 99.609% <= 4.815 milliseconds (cumulative count 99610) 138s 99.805% <= 5.039 milliseconds (cumulative count 99810) 138s 99.902% <= 5.151 milliseconds (cumulative count 99910) 138s 99.951% <= 5.207 milliseconds (cumulative count 99960) 138s 99.976% <= 5.231 milliseconds (cumulative count 99980) 138s 99.988% <= 5.247 milliseconds (cumulative count 99990) 138s 99.994% <= 5.255 milliseconds (cumulative count 100000) 138s 100.000% <= 5.255 milliseconds (cumulative count 100000) 138s 138s Cumulative distribution of latencies: 138s 0.000% <= 0.103 milliseconds (cumulative count 0) 138s 0.030% <= 0.303 milliseconds (cumulative count 30) 138s 0.340% <= 0.407 milliseconds (cumulative count 340) 138s 0.710% <= 0.503 milliseconds (cumulative count 710) 138s 1.400% <= 0.607 milliseconds (cumulative count 1400) 138s 1.650% <= 0.703 milliseconds (cumulative count 1650) 138s 1.840% <= 0.807 milliseconds (cumulative count 1840) 138s 2.190% <= 0.903 milliseconds (cumulative count 2190) 138s 2.950% <= 1.007 milliseconds (cumulative count 2950) 138s 3.870% <= 1.103 milliseconds (cumulative count 3870) 138s 5.260% <= 1.207 milliseconds (cumulative count 5260) 138s 7.110% <= 1.303 milliseconds (cumulative count 7110) 138s 9.930% <= 1.407 milliseconds (cumulative count 9930) 138s 13.230% <= 1.503 milliseconds (cumulative count 13230) 138s 18.570% <= 1.607 milliseconds (cumulative count 18570) 138s 27.920% <= 1.703 milliseconds (cumulative count 27920) 138s 38.910% <= 1.807 milliseconds (cumulative count 38910) 138s 48.680% <= 1.903 milliseconds (cumulative count 48680) 138s 58.760% <= 2.007 milliseconds (cumulative count 58760) 138s 67.810% <= 2.103 milliseconds (cumulative count 67810) 138s 99.100% <= 3.103 milliseconds (cumulative count 99100) 138s 99.500% <= 4.103 milliseconds (cumulative count 99500) 138s 99.870% <= 5.103 milliseconds (cumulative count 99870) 138s 100.000% <= 6.103 milliseconds (cumulative count 100000) 138s 138s Summary: 138s throughput summary: 222717.16 requests per second 138s latency summary (msec): 138s avg min p50 p95 p99 max 138s 1.957 0.272 1.919 2.735 3.063 5.255 140s LRANGE_100 (first 100 elements): rps=17569.7 (overall: 88200.0) avg_msec=4.349 (overall: 4.349) LRANGE_100 (first 100 elements): rps=72381.0 (overall: 75000.0) avg_msec=5.437 (overall: 5.225) LRANGE_100 (first 100 elements): rps=65280.0 (overall: 70597.8) avg_msec=5.968 (overall: 5.536) LRANGE_100 (first 100 elements): rps=73201.6 (overall: 71416.1) avg_msec=5.415 (overall: 5.497) LRANGE_100 (first 100 elements): rps=62948.2 (overall: 69403.4) avg_msec=6.202 (overall: 5.649) LRANGE_100 (first 100 elements): rps=62191.2 (overall: 68018.4) avg_msec=6.428 (overall: 
5.786) ====== LRANGE_100 (first 100 elements) ====== 140s 100000 requests completed in 1.48 seconds 140s 50 parallel clients 140s 3 bytes payload 140s keep alive: 1 140s host configuration "save": 3600 1 300 100 60 10000 140s host configuration "appendonly": no 140s multi-thread: no 140s 140s Latency by percentile distribution: 140s 0.000% <= 0.367 milliseconds (cumulative count 20) 140s 50.000% <= 5.839 milliseconds (cumulative count 50000) 140s 75.000% <= 7.047 milliseconds (cumulative count 75060) 140s 87.500% <= 7.959 milliseconds (cumulative count 87530) 140s 93.750% <= 8.487 milliseconds (cumulative count 93790) 140s 96.875% <= 8.831 milliseconds (cumulative count 96910) 140s 98.438% <= 9.207 milliseconds (cumulative count 98480) 140s 99.219% <= 9.559 milliseconds (cumulative count 99220) 140s 99.609% <= 10.495 milliseconds (cumulative count 99610) 140s 99.805% <= 11.111 milliseconds (cumulative count 99830) 140s 99.902% <= 11.391 milliseconds (cumulative count 99910) 140s 99.951% <= 11.551 milliseconds (cumulative count 99960) 140s 99.976% <= 11.631 milliseconds (cumulative count 99980) 140s 99.988% <= 11.695 milliseconds (cumulative count 99990) 140s 99.994% <= 11.775 milliseconds (cumulative count 100000) 140s 100.000% <= 11.775 milliseconds (cumulative count 100000) 140s 140s Cumulative distribution of latencies: 140s 0.000% <= 0.103 milliseconds (cumulative count 0) 140s 0.020% <= 0.407 milliseconds (cumulative count 20) 140s 0.040% <= 0.503 milliseconds (cumulative count 40) 140s 0.080% <= 0.607 milliseconds (cumulative count 80) 140s 0.120% <= 0.703 milliseconds (cumulative count 120) 140s 0.190% <= 0.807 milliseconds (cumulative count 190) 140s 0.250% <= 0.903 milliseconds (cumulative count 250) 140s 0.310% <= 1.007 milliseconds (cumulative count 310) 140s 0.370% <= 1.103 milliseconds (cumulative count 370) 140s 0.430% <= 1.207 milliseconds (cumulative count 430) 140s 0.500% <= 1.303 milliseconds (cumulative count 500) 140s 0.600% <= 1.407 milliseconds (cumulative count 600) 140s 0.680% <= 1.503 milliseconds (cumulative count 680) 140s 0.800% <= 1.607 milliseconds (cumulative count 800) 140s 0.900% <= 1.703 milliseconds (cumulative count 900) 140s 1.030% <= 1.807 milliseconds (cumulative count 1030) 140s 1.230% <= 1.903 milliseconds (cumulative count 1230) 140s 1.480% <= 2.007 milliseconds (cumulative count 1480) 140s 1.750% <= 2.103 milliseconds (cumulative count 1750) 140s 7.340% <= 3.103 milliseconds (cumulative count 7340) 140s 17.480% <= 4.103 milliseconds (cumulative count 17480) 140s 33.990% <= 5.103 milliseconds (cumulative count 33990) 140s 56.110% <= 6.103 milliseconds (cumulative count 56110) 140s 75.920% <= 7.103 milliseconds (cumulative count 75920) 140s 89.260% <= 8.103 milliseconds (cumulative count 89260) 140s 98.050% <= 9.103 milliseconds (cumulative count 98050) 140s 99.490% <= 10.103 milliseconds (cumulative count 99490) 140s 99.800% <= 11.103 milliseconds (cumulative count 99800) 140s 100.000% <= 12.103 milliseconds (cumulative count 100000) 140s 140s Summary: 140s throughput summary: 67796.61 requests per second 140s latency summary (msec): 140s avg min p50 p95 p99 max 140s 5.802 0.360 5.839 8.599 9.383 11.775
145s ====== LRANGE_300 (first 300 elements) ====== 145s 100000 requests completed in 5.28 seconds 145s 50 parallel clients 145s 3 bytes payload 145s keep alive: 1 145s host configuration "save": 3600 1 300 100 60 10000 145s host configuration "appendonly": no 145s multi-thread: no 145s 145s Latency by percentile distribution: 145s 0.000% <= 0.911 milliseconds (cumulative count 10) 145s 50.000% <= 16.215 milliseconds (cumulative count 50080) 145s 75.000% <= 18.927 milliseconds (cumulative count 75140) 145s 87.500% <= 20.383 milliseconds (cumulative count 87560) 145s 93.750% <= 21.183 milliseconds (cumulative count 93830) 145s 96.875% <= 21.695 milliseconds (cumulative count 96910) 145s 98.438% <= 22.175 milliseconds (cumulative count 98460) 145s 99.219% <= 22.847 milliseconds (cumulative count 99230) 145s 99.609% <= 23.439 milliseconds (cumulative count 99610) 145s 99.805% <= 23.967 milliseconds (cumulative count 99810) 145s 99.902% <= 25.247 milliseconds (cumulative count 99910) 145s 99.951% <= 25.823 milliseconds (cumulative count 99960) 145s 99.976% <= 26.015 milliseconds (cumulative count 99980) 145s 99.988% <= 26.207 milliseconds (cumulative count 99990) 145s 99.994% <= 26.415 milliseconds (cumulative count 100000) 145s 100.000% <= 26.415 milliseconds (cumulative count 100000) 145s 145s Cumulative distribution of latencies: 145s 0.000% <= 0.103 milliseconds (cumulative count 0) 145s 0.060% <= 1.007 milliseconds (cumulative count 60) 145s 0.150% <= 1.207 milliseconds (cumulative count 150) 145s 0.210% <= 1.303 milliseconds (cumulative count 210) 145s 0.270% <= 1.407 milliseconds (cumulative count 270) 145s 0.340% <= 1.503 milliseconds (cumulative count 340) 145s 0.460% <= 1.607 milliseconds (cumulative count 460) 145s 0.500% <= 1.703 milliseconds (cumulative count 500) 145s 0.660% <= 1.807 milliseconds (cumulative count 660) 145s 0.710% <= 1.903 milliseconds (cumulative count 710) 145s 0.850% <= 2.007 milliseconds (cumulative count 850) 145s 0.910% <= 2.103 milliseconds (cumulative count 910) 145s 1.350% <= 3.103 milliseconds (cumulative count 1350) 145s 1.570% <= 4.103 milliseconds (cumulative count 1570) 145s 1.860% <= 5.103 milliseconds (cumulative count 1860) 145s 2.310% <= 6.103 milliseconds (cumulative count 2310) 145s 3.270% <= 7.103 milliseconds (cumulative count 3270) 145s 4.500% <= 8.103 milliseconds (cumulative count 4500) 145s 6.070% <= 9.103 milliseconds (cumulative count 6070) 145s 7.660% <= 10.103 milliseconds (cumulative count 7660) 145s 9.420% <= 11.103 milliseconds (cumulative count 9420) 145s 12.670% <= 12.103 milliseconds (cumulative count 12670) 145s 20.350% <= 13.103 milliseconds (cumulative count 20350) 145s 29.760% <= 14.103 milliseconds (cumulative count 29760) 145s 39.460% <= 15.103 milliseconds (cumulative count 39460) 145s 49.070% <= 16.103 milliseconds (cumulative count 49070) 145s 58.500% <= 17.103 milliseconds (cumulative count 58500) 145s 67.710% <= 18.111 milliseconds (cumulative count 67710) 145s 76.680% <= 19.103 milliseconds (cumulative count 76680) 145s 85.410% <= 20.111 milliseconds (cumulative count 85410) 145s 93.270% <= 21.103 milliseconds (cumulative count 93270) 145s 98.290% <= 22.111 milliseconds (cumulative count 98290) 145s 99.400% <= 23.103 milliseconds (cumulative count 99400) 145s 99.830% <= 24.111 milliseconds (cumulative count 99830) 145s 99.900% <= 25.103 milliseconds (cumulative count 99900) 145s 99.980% <= 26.111 milliseconds (cumulative count 99980) 145s 100.000% <= 27.103 milliseconds (cumulative count 100000) 145s 145s Summary: 145s throughput summary: 18942.98 requests per second 145s latency summary (msec): 145s avg min p50 p95 p99 max 145s 15.911 0.904 16.215 21.343 22.559 26.415
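The LRANGE results above and below show throughput falling roughly in proportion to the number of elements returned: about 67.8k requests per second for 100 elements versus about 18.9k for 300, since every reply must serialize and ship proportionally more data. To rerun only this family of tests, something along these lines should work (a sketch: -n/-c/-d mirror the parameters printed in each block, and -t lrange is expected to select the LRANGE_* tests as it does in redis-benchmark):

  valkey-benchmark -h 127.0.0.1 -p 6379 -n 100000 -c 50 -d 3 -t lrange -q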
153s ====== LRANGE_500 (first 500 elements) ====== 153s 100000 requests completed in 8.39 seconds 153s 50 parallel clients 153s 3 bytes payload 153s keep alive: 1 153s host configuration "save": 3600 1 300 100 60 10000 153s host configuration "appendonly": no 153s multi-thread: no 153s 153s Latency by percentile distribution: 153s 0.000% <= 1.071 milliseconds (cumulative count 10) 153s 50.000% <= 22.703 milliseconds (cumulative count 50050) 153s 75.000% <= 25.903 milliseconds (cumulative count 75100) 153s 87.500% <= 27.743 milliseconds (cumulative count 87560) 153s 93.750% <= 28.863 milliseconds (cumulative count 93780) 153s 96.875% <= 29.823 milliseconds (cumulative count 96920) 153s 98.438% <= 31.359 milliseconds (cumulative count 98440) 153s 99.219% <= 32.639 milliseconds (cumulative count 99230) 153s 99.609% <= 34.047 milliseconds (cumulative count 99610) 153s 99.805% <= 35.775 milliseconds (cumulative count 99810) 153s 99.902% <= 38.207 milliseconds (cumulative count 99910) 153s 99.951% <= 39.071 milliseconds (cumulative count 99960) 153s 99.976% <= 40.223 milliseconds (cumulative count 99980) 153s 99.988% <= 40.447 milliseconds (cumulative count 99990) 153s 99.994% <= 40.639 milliseconds (cumulative count 100000) 153s 100.000% <= 40.639 milliseconds (cumulative count 100000) 153s 153s Cumulative distribution of latencies: 153s 0.000% <= 0.103 milliseconds (cumulative count 0) 153s 0.010% <= 1.103 milliseconds (cumulative count 10) 153s 0.030% <= 1.207 milliseconds (cumulative count 30) 153s 0.060% <= 1.407 milliseconds (cumulative count 60) 153s 0.070% <= 1.503 milliseconds (cumulative count 70) 153s 0.120% <= 1.607 milliseconds (cumulative count 120) 153s 0.130% <= 1.703 milliseconds (cumulative count 130) 153s 0.160% <= 1.807 milliseconds (cumulative count 160) 153s 0.190% <= 1.903 milliseconds (cumulative count 190) 153s 0.230% <= 2.007 milliseconds (cumulative count 230) 153s 0.250% <= 2.103 milliseconds (cumulative count 250) 153s 1.010% <= 3.103
milliseconds (cumulative count 1010) 153s 1.460% <= 4.103 milliseconds (cumulative count 1460) 153s 1.690% <= 5.103 milliseconds (cumulative count 1690) 153s 1.860% <= 6.103 milliseconds (cumulative count 1860) 153s 2.070% <= 7.103 milliseconds (cumulative count 2070) 153s 2.290% <= 8.103 milliseconds (cumulative count 2290) 153s 2.720% <= 9.103 milliseconds (cumulative count 2720) 153s 3.500% <= 10.103 milliseconds (cumulative count 3500) 153s 5.040% <= 11.103 milliseconds (cumulative count 5040) 153s 7.570% <= 12.103 milliseconds (cumulative count 7570) 153s 10.790% <= 13.103 milliseconds (cumulative count 10790) 153s 14.140% <= 14.103 milliseconds (cumulative count 14140) 153s 17.240% <= 15.103 milliseconds (cumulative count 17240) 153s 19.680% <= 16.103 milliseconds (cumulative count 19680) 153s 21.320% <= 17.103 milliseconds (cumulative count 21320) 153s 22.860% <= 18.111 milliseconds (cumulative count 22860) 153s 25.200% <= 19.103 milliseconds (cumulative count 25200) 153s 29.870% <= 20.111 milliseconds (cumulative count 29870) 153s 37.000% <= 21.103 milliseconds (cumulative count 37000) 153s 45.230% <= 22.111 milliseconds (cumulative count 45230) 153s 53.260% <= 23.103 milliseconds (cumulative count 53260) 153s 61.330% <= 24.111 milliseconds (cumulative count 61330) 153s 69.110% <= 25.103 milliseconds (cumulative count 69110) 153s 76.630% <= 26.111 milliseconds (cumulative count 76630) 153s 83.330% <= 27.103 milliseconds (cumulative count 83330) 153s 89.750% <= 28.111 milliseconds (cumulative count 89750) 153s 94.820% <= 29.103 milliseconds (cumulative count 94820) 153s 97.430% <= 30.111 milliseconds (cumulative count 97430) 153s 98.270% <= 31.103 milliseconds (cumulative count 98270) 153s 98.920% <= 32.111 milliseconds (cumulative count 98920) 153s 99.410% <= 33.119 milliseconds (cumulative count 99410) 153s 99.620% <= 34.111 milliseconds (cumulative count 99620) 153s 99.740% <= 35.103 milliseconds (cumulative count 99740) 153s 99.830% <= 36.127 milliseconds (cumulative count 99830) 153s 99.860% <= 37.119 milliseconds (cumulative count 99860) 153s 99.900% <= 38.111 milliseconds (cumulative count 99900) 153s 99.960% <= 39.103 milliseconds (cumulative count 99960) 153s 99.970% <= 40.127 milliseconds (cumulative count 99970) 153s 100.000% <= 41.119 milliseconds (cumulative count 100000) 153s 153s Summary: 153s throughput summary: 11914.69 requests per second 153s latency summary (msec): 153s avg min p50 p95 p99 max 153s 21.665 1.064 22.703 29.151 32.223 40.639
163s ====== LRANGE_600 (first 600 elements) ====== 163s 100000 requests completed in 10.23 seconds 163s 50 parallel clients 163s 3 bytes payload 163s keep alive: 1 163s host configuration "save": 3600 1 300 100 60 10000 163s host configuration "appendonly": no 163s multi-thread: no 163s 163s Latency by percentile distribution: 163s 0.000% <= 0.663 milliseconds (cumulative count 10) 163s 50.000% <= 26.559 milliseconds (cumulative count 50060) 163s 75.000% <= 29.743 milliseconds (cumulative count 75060) 163s 87.500% <= 31.391 milliseconds (cumulative count 87600) 163s 93.750% <= 33.343 milliseconds (cumulative count 93780) 163s
96.875% <= 36.639 milliseconds (cumulative count 96920) 163s 98.438% <= 37.535 milliseconds (cumulative count 98450) 163s 99.219% <= 38.143 milliseconds (cumulative count 99220) 163s 99.609% <= 38.559 milliseconds (cumulative count 99610) 163s 99.805% <= 39.007 milliseconds (cumulative count 99820) 163s 99.902% <= 39.679 milliseconds (cumulative count 99910) 163s 99.951% <= 41.087 milliseconds (cumulative count 99960) 163s 99.976% <= 41.471 milliseconds (cumulative count 99980) 163s 99.988% <= 41.727 milliseconds (cumulative count 99990) 163s 99.994% <= 42.367 milliseconds (cumulative count 100000) 163s 100.000% <= 42.367 milliseconds (cumulative count 100000) 163s 163s Cumulative distribution of latencies: 163s 0.000% <= 0.103 milliseconds (cumulative count 0) 163s 0.010% <= 0.703 milliseconds (cumulative count 10) 163s 0.020% <= 1.007 milliseconds (cumulative count 20) 163s 0.050% <= 1.303 milliseconds (cumulative count 50) 163s 0.250% <= 1.407 milliseconds (cumulative count 250) 163s 0.300% <= 1.503 milliseconds (cumulative count 300) 163s 0.510% <= 1.607 milliseconds (cumulative count 510) 163s 0.560% <= 1.703 milliseconds (cumulative count 560) 163s 1.090% <= 1.807 milliseconds (cumulative count 1090) 163s 1.230% <= 1.903 milliseconds (cumulative count 1230) 163s 1.670% <= 2.007 milliseconds (cumulative count 1670) 163s 1.820% <= 2.103 milliseconds (cumulative count 1820) 163s 3.880% <= 3.103 milliseconds (cumulative count 3880) 163s 4.240% <= 4.103 milliseconds (cumulative count 4240) 163s 4.520% <= 5.103 milliseconds (cumulative count 4520) 163s 4.680% <= 6.103 milliseconds (cumulative count 4680) 163s 4.810% <= 7.103 milliseconds (cumulative count 4810) 163s 4.990% <= 8.103 milliseconds (cumulative count 4990) 163s 5.130% <= 9.103 milliseconds (cumulative count 5130) 163s 5.570% <= 10.103 milliseconds (cumulative count 5570) 163s 6.160% <= 11.103 milliseconds (cumulative count 6160) 163s 6.990% <= 12.103 milliseconds (cumulative count 6990) 163s 8.050% <= 13.103 milliseconds (cumulative count 8050) 163s 9.120% <= 14.103 milliseconds (cumulative count 9120) 163s 10.080% <= 15.103 milliseconds (cumulative count 10080) 163s 11.070% <= 16.103 milliseconds (cumulative count 11070) 163s 12.280% <= 17.103 milliseconds (cumulative count 12280) 163s 13.250% <= 18.111 milliseconds (cumulative count 13250) 163s 14.030% <= 19.103 milliseconds (cumulative count 14030) 163s 14.690% <= 20.111 milliseconds (cumulative count 14690) 163s 15.500% <= 21.103 milliseconds (cumulative count 15500) 163s 18.060% <= 22.111 milliseconds (cumulative count 18060) 163s 23.150% <= 23.103 milliseconds (cumulative count 23150) 163s 30.540% <= 24.111 milliseconds (cumulative count 30540) 163s 38.470% <= 25.103 milliseconds (cumulative count 38470) 163s 46.600% <= 26.111 milliseconds (cumulative count 46600) 163s 54.360% <= 27.103 milliseconds (cumulative count 54360) 163s 62.250% <= 28.111 milliseconds (cumulative count 62250) 163s 70.060% <= 29.103 milliseconds (cumulative count 70060) 163s 77.960% <= 30.111 milliseconds (cumulative count 77960) 163s 85.570% <= 31.103 milliseconds (cumulative count 85570) 163s 90.990% <= 32.111 milliseconds (cumulative count 90990) 163s 93.440% <= 33.119 milliseconds (cumulative count 93440) 163s 94.570% <= 34.111 milliseconds (cumulative count 94570) 163s 95.500% <= 35.103 milliseconds (cumulative count 95500) 163s 96.110% <= 36.127 milliseconds (cumulative count 96110) 163s 97.790% <= 37.119 milliseconds (cumulative count 97790) 163s 99.180% <= 38.111 milliseconds (cumulative 
count 99180) 163s 99.850% <= 39.103 milliseconds (cumulative count 99850) 163s 99.930% <= 40.127 milliseconds (cumulative count 99930) 164s 99.960% <= 41.119 milliseconds (cumulative count 99960) 164s 99.990% <= 42.111 milliseconds (cumulative count 99990) 164s 100.000% <= 43.103 milliseconds (cumulative count 100000) 164s 164s Summary: 164s throughput summary: 9779.95 requests per second 164s latency summary (msec): 164s avg min p50 p95 p99 max 164s 25.256 0.656 26.559 34.559 37.983 42.367
164s ====== MSET (10 keys) ====== 164s 100000 requests completed in 0.74 seconds 164s 50 parallel clients 164s 3 bytes payload 164s keep alive: 1 164s host configuration "save": 3600 1 300 100 60 10000 164s host configuration "appendonly": no 164s multi-thread: no 164s 164s Latency by percentile distribution: 164s 0.000% <= 0.375 milliseconds (cumulative count 50) 164s 50.000% <= 3.687 milliseconds (cumulative count 50380) 164s 75.000% <= 3.959 milliseconds (cumulative count 75240) 164s 87.500% <= 4.199 milliseconds (cumulative count 87660) 164s 93.750% <= 4.511 milliseconds (cumulative count 93810) 164s 96.875% <= 4.871 milliseconds (cumulative count 96890) 164s 98.438% <= 6.103 milliseconds (cumulative count 98440) 164s 99.219% <= 7.207 milliseconds (cumulative count 99220) 164s 99.609% <= 7.423 milliseconds (cumulative count 99610) 164s 99.805% <= 7.583 milliseconds (cumulative count 99810) 164s 99.902% <= 7.687 milliseconds (cumulative count 99910) 164s 99.951% <= 7.767 milliseconds (cumulative count 99960) 164s 99.976% <= 7.815 milliseconds (cumulative count 99980) 164s 99.988% <= 7.855 milliseconds (cumulative count 99990) 164s 99.994% <= 7.903 milliseconds (cumulative count 100000) 164s 100.000% <= 7.903 milliseconds (cumulative count 100000) 164s 164s Cumulative distribution of latencies: 164s 0.000% <= 0.103 milliseconds (cumulative count 0) 164s 0.200% <= 0.407 milliseconds (cumulative count 200) 164s 0.270% <= 0.503 milliseconds (cumulative count 270) 164s 0.300% <= 0.703 milliseconds (cumulative count 300) 164s 0.350% <= 0.807 milliseconds (cumulative count 350) 164s 0.400% <= 0.903 milliseconds (cumulative count 400) 164s 0.500% <= 1.007 milliseconds (cumulative count 500) 164s 0.520% <= 1.407 milliseconds (cumulative count 520) 164s 0.600% <= 1.503 milliseconds (cumulative count 600) 164s 0.610% <= 1.607 milliseconds (cumulative count 610) 164s 1.050% <= 1.703 milliseconds (cumulative count 1050) 164s 1.350% <= 1.807 milliseconds (cumulative count 1350) 164s 1.880% <= 1.903 milliseconds (cumulative count 1880) 164s 2.850% <= 2.007 milliseconds (cumulative count 2850) 164s 5.090% <= 2.103 milliseconds (cumulative count 5090) 164s 22.560% <= 3.103 milliseconds (cumulative count 22560) 164s 84.780% <= 4.103 milliseconds (cumulative count 84780) 164s 97.250% <= 5.103 milliseconds (cumulative count 97250) 164s 98.440% <= 6.103 milliseconds (cumulative count 98440) 164s 99.090% <= 7.103 milliseconds (cumulative count 99090) 164s 100.000% <= 8.103 milliseconds (cumulative count 100000) 164s 164s Summary: 164s throughput summary: 134589.50 requests per second 164s latency summary (msec): 164s avg min p50 p95 p99 max 164s 3.570 0.368 3.687 4.591 6.375 7.903
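Each MSET request above writes ten keys at once, so 134589.50 requests per second corresponds to roughly 1.35 million keys written per second; the higher median latency (~3.7 ms, versus well under 2 ms for the single-key commands) reflects the larger request and reply sizes. One such request, issued by hand with hypothetical key names and the same 3-byte payload, would look like:

  valkey-cli MSET key:1 xxx key:2 xxx key:3 xxx key:4 xxx key:5 xxx key:6 xxx key:7 xxx key:8 xxx key:9 xxx key:10 xxx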
165s ====== XADD ====== 165s 100000 requests completed in 0.38 seconds 165s 50 parallel clients 165s 3 bytes payload 165s keep alive: 1 165s host configuration "save": 3600 1 300 100 60 10000 165s host configuration "appendonly": no 165s multi-thread: no 165s 165s Latency by percentile distribution: 165s 0.000% <= 0.335 milliseconds (cumulative count 10) 165s 50.000% <= 1.775 milliseconds (cumulative count 50690) 165s 75.000% <= 1.935 milliseconds (cumulative count 75180) 165s 87.500% <= 2.047 milliseconds (cumulative count 87650) 165s 93.750% <= 2.183 milliseconds (cumulative count 93760) 165s 96.875% <= 4.303 milliseconds (cumulative count 96880) 165s 98.438% <= 5.015 milliseconds (cumulative count 98470) 165s 99.219% <= 5.191 milliseconds (cumulative count 99240) 165s 99.609% <= 5.303 milliseconds (cumulative count 99610) 165s 99.805% <= 5.447 milliseconds (cumulative count 99810) 165s 99.902% <= 5.895 milliseconds (cumulative count 99910) 165s 99.951% <= 5.967 milliseconds (cumulative count 99960) 165s 99.976% <= 6.015 milliseconds (cumulative count 99980) 165s 99.988% <= 6.047 milliseconds (cumulative count 99990) 165s 99.994% <= 6.063 milliseconds (cumulative count 100000) 165s 100.000% <= 6.063 milliseconds (cumulative count 100000) 165s 165s Cumulative distribution of latencies: 165s 0.000% <= 0.103 milliseconds (cumulative count 0) 165s 0.400% <= 0.407 milliseconds (cumulative count 400) 165s 0.510% <= 0.503 milliseconds (cumulative count 510) 165s 0.870% <= 0.607 milliseconds (cumulative count 870) 165s 1.250% <= 0.703 milliseconds (cumulative count 1250) 165s 1.720% <= 0.807 milliseconds (cumulative count 1720) 165s 2.100% <= 0.903 milliseconds (cumulative count 2100) 165s 2.890% <= 1.007 milliseconds (cumulative count 2890) 165s 6.040% <= 1.103 milliseconds (cumulative count 6040) 165s 17.530% <= 1.207 milliseconds (cumulative count 17530) 165s 20.360% <= 1.303 milliseconds (cumulative count 20360) 165s 21.950% <= 1.407 milliseconds (cumulative count 21950) 165s 23.560% <= 1.503 milliseconds (cumulative count 23560) 165s 27.140% <= 1.607 milliseconds (cumulative count 27140) 165s 39.530% <= 1.703 milliseconds (cumulative count 39530) 165s 55.500% <= 1.807 milliseconds (cumulative count 55500) 165s 70.990% <= 1.903 milliseconds (cumulative count 70990) 165s 83.880% <= 2.007 milliseconds (cumulative count 83880) 165s 91.530% <= 2.103 milliseconds (cumulative count 91530) 165s 95.500% <= 3.103 milliseconds (cumulative count 95500) 165s 95.950% <= 4.103 milliseconds (cumulative count 95950) 165s 98.820% <= 5.103 milliseconds (cumulative count 98820) 165s 100.000% <= 6.103 milliseconds (cumulative count 100000) 165s 165s Summary: 165s throughput summary: 259740.27 requests per second 165s latency summary (msec): 165s avg min p50 p95 p99 max 165s 1.804 0.328 1.775 2.359 5.143 6.063
170s ====== FUNCTION LOAD ====== 170s 100000 requests completed in 5.14 seconds 170s 50 parallel clients 170s 3 bytes payload 170s keep alive: 1 170s host configuration "save": 3600 1 300 100 60 10000 170s host configuration "appendonly": no 170s multi-thread: no 170s 170s Latency by percentile distribution: 170s 0.000% <= 4.255 milliseconds (cumulative count 10) 170s 50.000% <= 25.407 milliseconds (cumulative count 50760) 170s 75.000% <= 25.999 milliseconds (cumulative count 75330) 170s 87.500% <= 26.511 milliseconds (cumulative count 87600) 170s 93.750% <= 27.215 milliseconds (cumulative count 93820) 170s 96.875% <= 28.143 milliseconds (cumulative count 96940) 170s 98.438% <= 28.767 milliseconds (cumulative count 98460) 170s 99.219% <= 29.407 milliseconds (cumulative count 99220) 170s 99.609% <= 29.887 milliseconds (cumulative count 99610) 170s 99.805% <= 30.175 milliseconds (cumulative count 99810) 170s 99.902% <= 30.399 milliseconds (cumulative count 99910) 170s 99.951% <= 30.479 milliseconds (cumulative count 99960) 170s 99.976% <= 30.559 milliseconds (cumulative count 99980) 170s 99.988% <= 30.575 milliseconds (cumulative count 99990) 170s 99.994% <= 30.687 milliseconds (cumulative count 100000) 170s 100.000% <= 30.687 milliseconds (cumulative count 100000) 170s 170s Cumulative distribution of latencies: 170s 0.000% <= 0.103 milliseconds (cumulative count 0) 170s 0.020% <= 5.103 milliseconds (cumulative count 20) 170s 0.120% <= 7.103 milliseconds (cumulative count 120) 170s 0.270% <= 12.103 milliseconds (cumulative count 270) 170s 0.520% <= 13.103 milliseconds (cumulative count 520) 170s 0.880% <= 14.103 milliseconds (cumulative count 880) 170s 1.000% <= 15.103 milliseconds (cumulative count 1000) 170s 1.370% <= 16.103 milliseconds (cumulative count 1370) 170s 1.670% <= 17.103 milliseconds (cumulative count 1670) 170s 1.730% <= 18.111 milliseconds (cumulative count 1730) 170s 1.750% <= 23.103 milliseconds (cumulative count 1750) 170s 2.580% <= 24.111 milliseconds (cumulative count 2580) 170s 23.950% <= 25.103 milliseconds (cumulative count 23950) 170s 79.490% <= 26.111 milliseconds (cumulative count 79490) 170s 92.960% <= 27.103 milliseconds (cumulative count 92960) 170s 96.810% <= 28.111 milliseconds (cumulative count 96810) 170s 98.940% <= 29.103 milliseconds (cumulative count 98940) 170s 99.770% <= 30.111 milliseconds (cumulative count 99770) 170s 100.000% <= 31.103 milliseconds (cumulative count 100000) 170s 170s Summary: 170s throughput summary: 19474.20 requests per second 170s latency summary (msec): 170s avg min p50 p95 p99 max 170s 25.457 4.248 25.407
27.391 29.183 30.687 170s ====== FCALL ====== 170s 100000 requests completed in 0.50 seconds 170s 50 parallel clients 170s 3 bytes payload 170s keep alive: 1 170s host configuration "save": 3600 1 300 100 60 10000 170s host configuration "appendonly": no 170s multi-thread: no 170s 170s Latency by percentile distribution: 170s 0.000% <= 0.311 milliseconds (cumulative count 20) 170s 50.000% <= 2.151 milliseconds (cumulative count 50450) 170s 75.000% <= 2.431 milliseconds (cumulative count 75500) 170s 87.500% <= 2.687 milliseconds (cumulative count 87670) 170s 93.750% <= 2.879 milliseconds (cumulative count 93790) 170s 96.875% <= 3.079 milliseconds (cumulative count 96880) 170s 98.438% <= 5.743 milliseconds (cumulative count 98450) 170s 99.219% <= 6.103 milliseconds (cumulative count 99220) 170s 99.609% <= 6.375 milliseconds (cumulative count 99610) 170s 99.805% <= 6.567 milliseconds (cumulative count 99810) 170s 99.902% <= 6.679 milliseconds (cumulative count 99910) 170s 99.951% <= 6.735 milliseconds (cumulative count 99960) 170s 99.976% <= 6.759 milliseconds (cumulative count 99980) 170s 99.988% <= 6.767 milliseconds (cumulative count 99990) 170s 99.994% <= 6.783 milliseconds (cumulative count 100000) 170s 100.000% <= 6.783 milliseconds (cumulative count 100000) 170s 170s Cumulative distribution of latencies: 170s 0.000% <= 0.103 milliseconds (cumulative count 0) 170s 0.370% <= 0.407 milliseconds (cumulative count 370) 170s 0.780% <= 0.503 milliseconds (cumulative count 780) 170s 1.100% <= 0.607 milliseconds (cumulative count 1100) 170s 1.260% <= 0.703 milliseconds (cumulative count 1260) 170s 1.610% <= 0.807 milliseconds (cumulative count 1610) 170s 1.910% <= 0.903 milliseconds (cumulative count 1910) 170s 2.260% <= 1.007 milliseconds (cumulative count 2260) 170s 2.740% <= 1.103 milliseconds (cumulative count 2740) 170s 3.220% <= 1.207 milliseconds (cumulative count 3220) 170s 4.260% <= 1.303 milliseconds (cumulative count 4260) 170s 5.270% <= 1.407 milliseconds (cumulative count 5270) 170s 6.070% <= 1.503 milliseconds (cumulative count 6070) 170s 6.960% <= 1.607 milliseconds (cumulative count 6960) 170s 8.870% <= 1.703 milliseconds (cumulative count 8870) 170s 13.000% <= 1.807 milliseconds (cumulative count 13000) 170s 22.940% <= 1.903 milliseconds (cumulative count 22940) 170s 34.530% <= 2.007 milliseconds (cumulative count 34530) 170s 45.140% <= 2.103 milliseconds (cumulative count 45140) 170s 96.980% <= 3.103 milliseconds (cumulative count 96980) 170s 98.000% <= 4.103 milliseconds (cumulative count 98000) 170s 99.220% <= 6.103 milliseconds (cumulative count 99220) 170s 100.000% <= 7.103 milliseconds (cumulative count 100000) 170s 170s Summary: 170s throughput summary: 200000.00 requests per second 170s latency summary (msec): 170s avg min p50 p95 p99 max 170s 2.223 0.304 2.151 2.927 5.983 6.783 170s 171s autopkgtest [14:54:55]: test 0002-benchmark: -----------------------] 171s autopkgtest [14:54:55]: test 0002-benchmark: - - - - - - - - - - results - - - - - - - - - - 171s 0002-benchmark PASS
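Every block in this benchmark run reports the same parameters: 100000 requests, 50 parallel clients, and a 3-byte payload. Those correspond to valkey-benchmark's -n, -c and -d options, so the whole sweep can be approximated with an invocation along these lines (the exact command used by the test script is not captured in this log):

  valkey-benchmark -h 127.0.0.1 -p 6379 -n 100000 -c 50 -d 3 -q

The FUNCTION LOAD and FCALL blocks at the end exercise server-side functions. A minimal hand-driven equivalent, with hypothetical library and function names, would be:

  valkey-cli FUNCTION LOAD "#!lua name=mylib
  redis.register_function('myfunc', function(keys, args) return 1 end)"
  valkey-cli FCALL myfunc 0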
171s autopkgtest [14:54:55]: test 0003-valkey-check-aof: preparing testbed 172s Reading package lists... 172s Building dependency tree... 172s Reading state information... 172s Solving dependencies... 172s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 172s autopkgtest [14:54:56]: test 0003-valkey-check-aof: [-----------------------
173s ************************************************************************** 173s # A new feature in cloud-init identified possible datasources for # 173s # this system as: # 173s # [] # 173s # However, the datasource used was: OpenStack # 173s # # 173s # In the future, cloud-init will only attempt to use datasources that # 173s # are identified or specifically configured. # 173s # For more information see # 173s # https://bugs.launchpad.net/bugs/1669675 # 173s # # 173s # If you are seeing this message, please file a bug against # 173s # cloud-init at # 173s # https://github.com/canonical/cloud-init/issues # 173s # Make sure to include the cloud provider your instance is # 173s # running on. # 173s # # 173s # After you have filed a bug, you can disable this warning by launching # 173s # your instance with the cloud-config below, or putting that content # 173s # into /etc/cloud/cloud.cfg.d/99-warnings.cfg # 173s # # 173s # #cloud-config # 173s # warnings: # 173s # dsid_missing_source: off # 173s ************************************************************************** 173s 173s Disable the warnings above by: 173s touch /root/.cloud-warnings.skip 173s or 173s touch /var/lib/cloud/instance/warnings/.skip
173s autopkgtest [14:54:57]: test 0003-valkey-check-aof: -----------------------] 173s 0003-valkey-check-aof PASS 173s autopkgtest [14:54:57]: test 0003-valkey-check-aof: - - - - - - - - - - results - - - - - - - - - - 174s autopkgtest [14:54:58]: test 0004-valkey-check-rdb: preparing testbed 174s Reading package lists... 174s Building dependency tree... 174s Reading state information... 174s Solving dependencies... 174s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 175s autopkgtest [14:54:59]: test 0004-valkey-check-rdb: [----------------------- 180s OK 180s [offset 0] Checking RDB file /var/lib/valkey/dump.rdb 180s [offset 27] AUX FIELD valkey-ver = '8.1.1' 180s [offset 41] AUX FIELD redis-bits = '64' 180s [offset 53] AUX FIELD ctime = '1750344903' 180s [offset 68] AUX FIELD used-mem = '3029832' 180s [offset 80] AUX FIELD aof-base = '0' 180s [offset 191] Selecting DB ID 0 180s [offset 566516] Checksum OK 180s [offset 566516] \o/ RDB looks OK! \o/ 180s [info] 5 keys read 180s [info] 0 expires 180s [info] 0 already expired 180s autopkgtest [14:55:04]: test 0004-valkey-check-rdb: -----------------------] 181s 0004-valkey-check-rdb PASS 181s autopkgtest [14:55:05]: test 0004-valkey-check-rdb: - - - - - - - - - - results - - - - - - - - - -
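The check above is valkey-check-rdb walking the dump byte by byte: the offsets are positions within the RDB file, and the AUX FIELD entries are metadata (server version, creation time, memory usage) stored alongside the keys. The same verification can be run by hand against any dump, and valkey-check-aof does the analogous job for an append-only file or its manifest:

  valkey-check-rdb /var/lib/valkey/dump.rdb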
181s autopkgtest [14:55:05]: test 0005-cjson: preparing testbed 181s Reading package lists... 181s Building dependency tree... 181s Reading state information... 181s Solving dependencies... 182s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 182s autopkgtest [14:55:06]: test 0005-cjson: [----------------------- 188s 188s autopkgtest [14:55:12]: test 0005-cjson: -----------------------] 188s autopkgtest [14:55:12]: test 0005-cjson: - - - - - - - - - - results - - - - - - - - - - 188s 0005-cjson PASS 189s autopkgtest [14:55:13]: test 0006-migrate-from-redis: preparing testbed 270s autopkgtest [14:56:34]: testbed dpkg architecture: ppc64el 271s autopkgtest [14:56:35]: testbed apt version: 3.1.2 271s autopkgtest [14:56:35]: @@@@@@@@@@@@@@@@@@@@ test bed setup 271s autopkgtest [14:56:35]: testbed release detected to be: questing 272s autopkgtest [14:56:36]: updating testbed package index (apt update) 272s Get:1 http://ftpmaster.internal/ubuntu questing-proposed InRelease [249 kB] 272s Hit:2 http://ftpmaster.internal/ubuntu questing InRelease 272s Hit:3 http://ftpmaster.internal/ubuntu questing-updates InRelease 272s Hit:4 http://ftpmaster.internal/ubuntu questing-security InRelease 272s Get:5 http://ftpmaster.internal/ubuntu questing-proposed/multiverse Sources [17.4 kB] 272s Get:6 http://ftpmaster.internal/ubuntu questing-proposed/universe Sources [426 kB] 272s Get:7 http://ftpmaster.internal/ubuntu questing-proposed/restricted Sources [4716 B] 272s Get:8 http://ftpmaster.internal/ubuntu questing-proposed/main Sources [38.3 kB] 272s Get:9 http://ftpmaster.internal/ubuntu questing-proposed/main ppc64el Packages [66.7 kB] 272s Get:10 http://ftpmaster.internal/ubuntu questing-proposed/restricted ppc64el Packages [724 B] 272s Get:11 http://ftpmaster.internal/ubuntu questing-proposed/universe ppc64el Packages [340 kB] 272s Get:12 http://ftpmaster.internal/ubuntu questing-proposed/multiverse ppc64el Packages [6448 B] 272s Fetched 1149 kB in 0s (2411 kB/s) 273s Reading package lists...
274s autopkgtest [14:56:38]: upgrading testbed (apt dist-upgrade and autopurge) 274s Reading package lists... 274s Building dependency tree... 274s Reading state information... 274s Calculating upgrade... 274s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 274s Reading package lists... 274s Building dependency tree... 274s Reading state information... 275s Solving dependencies... 275s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 277s Reading package lists... 277s Building dependency tree... 277s Reading state information... 277s Solving dependencies... 277s The following NEW packages will be installed: 277s liblzf1 redis-sentinel redis-server redis-tools 277s 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded. 277s Need to get 1812 kB of archives. 277s After this operation, 10.6 MB of additional disk space will be used. 277s Get:1 http://ftpmaster.internal/ubuntu questing/universe ppc64el liblzf1 ppc64el 3.6-4 [7920 B] 278s Get:2 http://ftpmaster.internal/ubuntu questing-proposed/universe ppc64el redis-tools ppc64el 5:8.0.0-2 [1738 kB] 278s Get:3 http://ftpmaster.internal/ubuntu questing-proposed/universe ppc64el redis-sentinel ppc64el 5:8.0.0-2 [12.5 kB] 278s Get:4 http://ftpmaster.internal/ubuntu questing-proposed/universe ppc64el redis-server ppc64el 5:8.0.0-2 [53.2 kB] 278s Fetched 1812 kB in 0s (3873 kB/s) 278s Selecting previously unselected package liblzf1:ppc64el. 279s (Reading database ... 79652 files and directories currently installed.) 279s Preparing to unpack .../liblzf1_3.6-4_ppc64el.deb ... 279s Unpacking liblzf1:ppc64el (3.6-4) ... 279s Selecting previously unselected package redis-tools. 279s Preparing to unpack .../redis-tools_5%3a8.0.0-2_ppc64el.deb ... 279s Unpacking redis-tools (5:8.0.0-2) ... 279s Selecting previously unselected package redis-sentinel. 279s Preparing to unpack .../redis-sentinel_5%3a8.0.0-2_ppc64el.deb ... 279s Unpacking redis-sentinel (5:8.0.0-2) ... 279s Selecting previously unselected package redis-server. 279s Preparing to unpack .../redis-server_5%3a8.0.0-2_ppc64el.deb ... 279s Unpacking redis-server (5:8.0.0-2) ... 279s Setting up liblzf1:ppc64el (3.6-4) ... 279s Setting up redis-tools (5:8.0.0-2) ... 279s Setting up redis-server (5:8.0.0-2) ... 279s Created symlink '/etc/systemd/system/redis.service' → '/usr/lib/systemd/system/redis-server.service'. 279s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-server.service' → '/usr/lib/systemd/system/redis-server.service'. 280s Setting up redis-sentinel (5:8.0.0-2) ... 280s Created symlink '/etc/systemd/system/sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'. 280s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'. 281s Processing triggers for man-db (2.13.1-1) ... 281s Processing triggers for libc-bin (2.41-6ubuntu2) ...
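The testbed now runs the real redis-server 8.0 packages so that the migration path can be exercised end to end. As the shell trace below shows, the test writes a key through redis-cli, forces an RDB save, records the dump's checksum, and then installs valkey-redis-compat, which removes the redis packages and migrates configuration and data to /etc/valkey and /var/lib/valkey. Stripped of the flag file and logging tweaks, the essential flow is (the final GET is a hypothetical follow-up, assuming valkey-server comes up):

  redis-cli -h 127.0.0.1 -p 6379 SET test 1
  redis-cli -h 127.0.0.1 -p 6379 SAVE
  sha256sum /var/lib/redis/dump.rdb
  apt-get install -y valkey-redis-compat
  valkey-cli -h 127.0.0.1 -p 6379 GET test    # should return 1 after migration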
285s autopkgtest [14:56:49]: test 0006-migrate-from-redis: [----------------------- 286s + FLAG_FILE=/etc/valkey/REDIS_MIGRATION 286s + sed -i 's#loglevel notice#loglevel debug#' /etc/redis/redis.conf 286s + systemctl restart redis-server 286s + redis-cli -h 127.0.0.1 -p 6379 SET test 1 286s OK 286s + redis-cli -h 127.0.0.1 -p 6379 GET test 286s 1 286s + redis-cli -h 127.0.0.1 -p 6379 SAVE 286s OK 286s + sha256sum /var/lib/redis/dump.rdb 286s + apt-get install -y valkey-redis-compat 286s a131a0613f15fade8ab5353a7e822243d043915ba10cb304baa5b109393146d2 /var/lib/redis/dump.rdb 286s Reading package lists... 286s Building dependency tree... 286s Reading state information... 286s Solving dependencies... 286s The following additional packages will be installed: 286s valkey-server valkey-tools 286s Suggested packages: 286s ruby-redis 286s The following packages will be REMOVED: 286s redis-sentinel redis-server redis-tools 286s The following NEW packages will be installed: 286s valkey-redis-compat valkey-server valkey-tools 286s 0 upgraded, 3 newly installed, 3 to remove and 0 not upgraded. 286s Need to get 1695 kB of archives. 286s After this operation, 476 kB disk space will be freed. 286s Get:1 http://ftpmaster.internal/ubuntu questing/universe ppc64el valkey-tools ppc64el 8.1.1+dfsg1-2ubuntu1 [1636 kB] 286s Get:2 http://ftpmaster.internal/ubuntu questing/universe ppc64el valkey-server ppc64el 8.1.1+dfsg1-2ubuntu1 [51.7 kB] 286s Get:3 http://ftpmaster.internal/ubuntu questing/universe ppc64el valkey-redis-compat all 8.1.1+dfsg1-2ubuntu1 [7794 B] 287s Fetched 1695 kB in 0s (11.6 MB/s) 287s (Reading database ... 79703 files and directories currently installed.) 287s Removing redis-sentinel (5:8.0.0-2) ... 287s Removing redis-server (5:8.0.0-2) ...
288s Removing redis-tools (5:8.0.0-2) ... 288s Selecting previously unselected package valkey-tools. 288s (Reading database ... 79666 files and directories currently installed.) 288s Preparing to unpack .../valkey-tools_8.1.1+dfsg1-2ubuntu1_ppc64el.deb ... 288s Unpacking valkey-tools (8.1.1+dfsg1-2ubuntu1) ... 288s Selecting previously unselected package valkey-server. 288s Preparing to unpack .../valkey-server_8.1.1+dfsg1-2ubuntu1_ppc64el.deb ... 288s Unpacking valkey-server (8.1.1+dfsg1-2ubuntu1) ... 288s Selecting previously unselected package valkey-redis-compat. 288s Preparing to unpack .../valkey-redis-compat_8.1.1+dfsg1-2ubuntu1_all.deb ... 288s Unpacking valkey-redis-compat (8.1.1+dfsg1-2ubuntu1) ... 288s Setting up valkey-tools (8.1.1+dfsg1-2ubuntu1) ... 289s Setting up valkey-server (8.1.1+dfsg1-2ubuntu1) ... 289s Created symlink '/etc/systemd/system/valkey.service' → '/usr/lib/systemd/system/valkey-server.service'. 289s Created symlink '/etc/systemd/system/multi-user.target.wants/valkey-server.service' → '/usr/lib/systemd/system/valkey-server.service'. 290s Setting up valkey-redis-compat (8.1.1+dfsg1-2ubuntu1) ... 290s dpkg-query: no packages found matching valkey-sentinel 290s [I] /etc/redis/redis.conf has been copied to /etc/valkey/valkey.conf. Please, review the content of valkey.conf, especially if you had modified redis.conf. 290s [I] /etc/redis/sentinel.conf has been copied to /etc/valkey/sentinel.conf. Please, review the content of sentinel.conf, especially if you had modified sentinel.conf. 290s [I] On-disk redis dumps moved from /var/lib/redis/ to /var/lib/valkey. 290s Processing triggers for man-db (2.13.1-1) ... 290s + '[' -f /etc/valkey/REDIS_MIGRATION ']' 290s + sha256sum /var/lib/valkey/dump.rdb 290s 1f75992e6b11424a107a89487dded7b93b7ff92b33944d51da8534327c28f704 /var/lib/valkey/dump.rdb 290s + systemctl status valkey-server 290s + grep inactive 290s Active: inactive (dead) since Thu 2025-06-19 14:56:54 UTC; 657ms ago 290s + rm /etc/valkey/REDIS_MIGRATION 290s + systemctl start valkey-server 291s Job for valkey-server.service failed because the control process exited with error code. 291s See "systemctl status valkey-server.service" and "journalctl -xeu valkey-server.service" for details.
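Two things are worth separating in this failure. The checksum difference between /var/lib/redis/dump.rdb and /var/lib/valkey/dump.rdb is not by itself alarming: redis-server likely rewrites the dump when it is stopped during package removal, and RDB AUX fields such as ctime and used-mem (visible in the 0004 output earlier) change between saves even when the key data is identical. The regression is that valkey-server then refuses to start, and the journal output that would explain why is not captured here. On a live system the next steps would be along these lines, the last one running the server in the foreground to surface configuration errors from the migrated valkey.conf:

  systemctl status valkey-server.service
  journalctl -xeu valkey-server.service
  valkey-server /etc/valkey/valkey.conf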
291s autopkgtest [14:56:55]: test 0006-migrate-from-redis: -----------------------] 291s autopkgtest [14:56:55]: test 0006-migrate-from-redis: - - - - - - - - - - results - - - - - - - - - - 291s 0006-migrate-from-redis FAIL non-zero exit status 1 292s autopkgtest [14:56:55]: @@@@@@@@@@@@@@@@@@@@ summary 292s 0001-valkey-cli PASS 292s 0002-benchmark PASS 292s 0003-valkey-check-aof PASS 292s 0004-valkey-check-rdb PASS 292s 0005-cjson PASS 292s 0006-migrate-from-redis FAIL non-zero exit status 1 295s nova [W] Using flock in prodstack7-ppc64el 295s Creating nova instance adt-questing-ppc64el-valkey-20250619-145204-juju-7f2275-prod-proposed-migration-environment-15-e5854d58-6d10-4857-bcc0-c48b9c274b05 from image adt/ubuntu-questing-ppc64el-server-20250619.img (UUID 1c97422d-c646-492e-9581-3c98f213de4b)... 295s nova [W] Timed out waiting for 572d3766-bd5e-43e6-8cf9-665360c6432b to get deleted. 295s nova [W] Using flock in prodstack7-ppc64el 295s Creating nova instance adt-questing-ppc64el-valkey-20250619-145204-juju-7f2275-prod-proposed-migration-environment-15-e5854d58-6d10-4857-bcc0-c48b9c274b05 from image adt/ubuntu-questing-ppc64el-server-20250619.img (UUID 1c97422d-c646-492e-9581-3c98f213de4b)... 295s nova [W] Timed out waiting for 69ca9b0c-66ef-490d-8c69-a112781658aa to get deleted.