0s autopkgtest [14:25:14]: starting date and time: 2025-06-19 14:25:14+0000
0s autopkgtest [14:25:14]: git checkout: 9986aa8c Merge branch 'skia/fix_network_interface' into 'ubuntu/production'
0s autopkgtest [14:25:14]: host juju-7f2275-prod-proposed-migration-environment-20; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.o9dfwbfw/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:redis --apt-upgrade valkey --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=redis/5:8.0.0-2 -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-20@bos03-arm64-11.secgroup --name adt-questing-arm64-valkey-20250619-142514-juju-7f2275-prod-proposed-migration-environment-20-f72ed46b-14f6-4e71-8cc3-2702b46dc7d4 --image adt/ubuntu-questing-arm64-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-20 --net-id=net_prod-proposed-migration -e TERM=linux --mirror=http://ftpmaster.internal/ubuntu/
165s autopkgtest [14:27:59]: testbed dpkg architecture: arm64
165s autopkgtest [14:27:59]: testbed apt version: 3.1.2
165s autopkgtest [14:27:59]: @@@@@@@@@@@@@@@@@@@@ test bed setup
165s autopkgtest [14:27:59]: testbed release detected to be: None
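
The command line above summarizes the whole run: autopkgtest exercised the valkey package from questing with redis/5:8.0.0-2 pulled in from the proposed pocket, on an arm64 OpenStack (Nova) testbed. A minimal sketch of an equivalent local invocation, assuming the autopkgtest package is installed and a prebuilt questing test image is available (the image path here is illustrative, not taken from this run):

  # image path is an assumption; any autopkgtest-compatible questing image works
  autopkgtest --apt-pocket=proposed=src:redis --apt-upgrade \
      --env=ADT_TEST_TRIGGERS=redis/5:8.0.0-2 valkey \
      -- qemu ~/images/questing-arm64.img
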
166s autopkgtest [14:28:00]: updating testbed package index (apt update)
167s Get:1 http://ftpmaster.internal/ubuntu questing-proposed InRelease [249 kB]
167s Hit:2 http://ftpmaster.internal/ubuntu questing InRelease
167s Hit:3 http://ftpmaster.internal/ubuntu questing-updates InRelease
167s Hit:4 http://ftpmaster.internal/ubuntu questing-security InRelease
167s Get:5 http://ftpmaster.internal/ubuntu questing-proposed/restricted Sources [4716 B]
167s Get:6 http://ftpmaster.internal/ubuntu questing-proposed/multiverse Sources [17.4 kB]
167s Get:7 http://ftpmaster.internal/ubuntu questing-proposed/universe Sources [426 kB]
167s Get:8 http://ftpmaster.internal/ubuntu questing-proposed/main Sources [38.3 kB]
167s Get:9 http://ftpmaster.internal/ubuntu questing-proposed/main arm64 Packages [65.9 kB]
167s Get:10 http://ftpmaster.internal/ubuntu questing-proposed/restricted arm64 Packages [18.4 kB]
167s Get:11 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 Packages [364 kB]
167s Get:12 http://ftpmaster.internal/ubuntu questing-proposed/multiverse arm64 Packages [23.9 kB]
167s Fetched 1208 kB in 1s (1267 kB/s)
169s Reading package lists...
169s autopkgtest [14:28:03]: upgrading testbed (apt dist-upgrade and autopurge)
170s Reading package lists...
170s Building dependency tree...
170s Reading state information...
170s Calculating upgrade...
171s The following packages will be upgraded:
171s   libpython3.12-minimal libpython3.12-stdlib libpython3.12t64
171s 3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
171s Need to get 5180 kB of archives.
171s After this operation, 291 kB disk space will be freed.
171s Get:1 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 libpython3.12t64 arm64 3.12.10-1 [2314 kB]
172s Get:2 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 libpython3.12-stdlib arm64 3.12.10-1 [2029 kB]
172s Get:3 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 libpython3.12-minimal arm64 3.12.10-1 [836 kB]
173s Fetched 5180 kB in 1s (3964 kB/s)
173s (Reading database ... 118766 files and directories currently installed.)
173s Preparing to unpack .../libpython3.12t64_3.12.10-1_arm64.deb ...
173s Unpacking libpython3.12t64:arm64 (3.12.10-1) over (3.12.8-3) ...
173s Preparing to unpack .../libpython3.12-stdlib_3.12.10-1_arm64.deb ...
173s Unpacking libpython3.12-stdlib:arm64 (3.12.10-1) over (3.12.8-3) ...
173s Preparing to unpack .../libpython3.12-minimal_3.12.10-1_arm64.deb ...
173s Unpacking libpython3.12-minimal:arm64 (3.12.10-1) over (3.12.8-3) ...
174s Setting up libpython3.12-minimal:arm64 (3.12.10-1) ...
174s Setting up libpython3.12-stdlib:arm64 (3.12.10-1) ...
174s Setting up libpython3.12t64:arm64 (3.12.10-1) ...
174s Processing triggers for libc-bin (2.41-6ubuntu2) ...
174s Reading package lists...
175s Building dependency tree...
175s Reading state information...
175s Solving dependencies...
176s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
179s autopkgtest [14:28:13]: testbed running kernel: Linux 6.14.0-15-generic #15-Ubuntu SMP PREEMPT_DYNAMIC Sun Apr 6 14:37:51 UTC 2025
179s autopkgtest [14:28:13]: @@@@@@@@@@@@@@@@@@@@ apt-source valkey
183s Get:1 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (dsc) [2484 B]
183s Get:2 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (tar) [2726 kB]
183s Get:3 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (diff) [20.4 kB]
184s gpgv: Signature made Wed Jun 18 14:39:32 2025 UTC
184s gpgv:                using RSA key 63EEFC3DE14D5146CE7F24BF34B8AD7D9529E793
184s gpgv:                issuer "lena.voytek@canonical.com"
184s gpgv: Can't check signature: No public key
184s dpkg-source: warning: cannot verify inline signature for ./valkey_8.1.1+dfsg1-2ubuntu1.dsc: no acceptable signature found
184s autopkgtest [14:28:18]: testing package valkey version 8.1.1+dfsg1-2ubuntu1
185s autopkgtest [14:28:19]: build not needed
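
The gpgv failure above is expected inside the testbed: it only means the signer's public key is not present in the test environment's keyring, and dpkg-source continues with a warning. A hedged sketch of verifying the .dsc signature by hand, assuming the key is published on the Ubuntu keyserver:

  # fetch and export the signer's key (keyserver availability is an assumption)
  gpg --keyserver hkps://keyserver.ubuntu.com \
      --recv-keys 63EEFC3DE14D5146CE7F24BF34B8AD7D9529E793
  gpg --export 63EEFC3DE14D5146CE7F24BF34B8AD7D9529E793 > signer.gpg
  gpgv --keyring ./signer.gpg valkey_8.1.1+dfsg1-2ubuntu1.dsc
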
187s autopkgtest [14:28:21]: test 0001-valkey-cli: preparing testbed
188s Reading package lists...
188s Building dependency tree...
188s Reading state information...
188s Solving dependencies...
189s The following NEW packages will be installed:
189s   liblzf1 valkey-server valkey-tools
189s 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
189s Need to get 1345 kB of archives.
189s After this operation, 7648 kB of additional disk space will be used.
189s Get:1 http://ftpmaster.internal/ubuntu questing/universe arm64 liblzf1 arm64 3.6-4 [7426 B]
189s Get:2 http://ftpmaster.internal/ubuntu questing/universe arm64 valkey-tools arm64 8.1.1+dfsg1-2ubuntu1 [1285 kB]
189s Get:3 http://ftpmaster.internal/ubuntu questing/universe arm64 valkey-server arm64 8.1.1+dfsg1-2ubuntu1 [51.7 kB]
190s Fetched 1345 kB in 1s (1961 kB/s)
190s Selecting previously unselected package liblzf1:arm64.
190s (Reading database ... 118766 files and directories currently installed.)
190s Preparing to unpack .../liblzf1_3.6-4_arm64.deb ...
190s Unpacking liblzf1:arm64 (3.6-4) ...
190s Selecting previously unselected package valkey-tools.
190s Preparing to unpack .../valkey-tools_8.1.1+dfsg1-2ubuntu1_arm64.deb ...
190s Unpacking valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
190s Selecting previously unselected package valkey-server.
190s Preparing to unpack .../valkey-server_8.1.1+dfsg1-2ubuntu1_arm64.deb ...
190s Unpacking valkey-server (8.1.1+dfsg1-2ubuntu1) ...
190s Setting up liblzf1:arm64 (3.6-4) ...
190s Setting up valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
190s Setting up valkey-server (8.1.1+dfsg1-2ubuntu1) ...
191s Created symlink '/etc/systemd/system/valkey.service' → '/usr/lib/systemd/system/valkey-server.service'.
191s Created symlink '/etc/systemd/system/multi-user.target.wants/valkey-server.service' → '/usr/lib/systemd/system/valkey-server.service'.
192s Processing triggers for man-db (2.13.1-1) ...
192s Processing triggers for libc-bin (2.41-6ubuntu2) ...
194s autopkgtest [14:28:28]: test 0001-valkey-cli: [-----------------------
194s **************************************************************************
194s # A new feature in cloud-init identified possible datasources for       #
194s # this system as:                                                        #
194s #   []                                                                   #
194s # However, the datasource used was: OpenStack                           #
194s #                                                                        #
194s # In the future, cloud-init will only attempt to use datasources that   #
194s # are identified or specifically configured.                            #
194s # For more information see                                               #
194s #   https://bugs.launchpad.net/bugs/1669675                              #
194s #                                                                        #
194s # If you are seeing this message, please file a bug against             #
194s # cloud-init at                                                          #
194s #   https://github.com/canonical/cloud-init/issues                       #
194s # Make sure to include the cloud provider your instance is              #
194s # running on.                                                            #
194s #                                                                        #
194s # After you have filed a bug, you can disable this warning by launching #
194s # your instance with the cloud-config below, or putting that content    #
194s # into /etc/cloud/cloud.cfg.d/99-warnings.cfg                            #
194s #                                                                        #
194s # #cloud-config                                                          #
194s # warnings:                                                              #
194s #   dsid_missing_source: off                                             #
194s **************************************************************************
194s 
194s Disable the warnings above by:
194s   touch /root/.cloud-warnings.skip
194s or
194s   touch /var/lib/cloud/instance/warnings/.skip
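
The box above already contains the fix it suggests; a minimal sketch of making it permanent by writing the cloud-config snippet to the path named in the message (run as root on the instance):

  printf '%s\n' '#cloud-config' 'warnings:' '  dsid_missing_source: off' \
      > /etc/cloud/cloud.cfg.d/99-warnings.cfg
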
199s # Server
199s redis_version:7.2.4
199s server_name:valkey
199s valkey_version:8.1.1
199s valkey_release_stage:ga
199s redis_git_sha1:00000000
199s redis_git_dirty:0
199s redis_build_id:454dc2cf719509d2
199s server_mode:standalone
199s os:Linux 6.14.0-15-generic aarch64
199s arch_bits:64
199s monotonic_clock:POSIX clock_gettime
199s multiplexing_api:epoll
199s gcc_version:14.3.0
199s process_id:2079
199s process_supervised:systemd
199s run_id:8dc90652b1750021c5fcb309d4bd512db9040428
199s tcp_port:6379
199s server_time_usec:1750343313420342
199s uptime_in_seconds:5
199s uptime_in_days:0
199s hz:10
199s configured_hz:10
199s clients_hz:10
199s lru_clock:5512849
199s executable:/usr/bin/valkey-server
199s config_file:/etc/valkey/valkey.conf
199s io_threads_active:0
199s availability_zone:
199s listener0:name=tcp,bind=127.0.0.1,bind=-::1,port=6379
199s 
199s # Clients
199s connected_clients:1
199s cluster_connections:0
199s maxclients:10000
199s client_recent_max_input_buffer:0
199s client_recent_max_output_buffer:0
199s blocked_clients:0
199s tracking_clients:0
199s pubsub_clients:0
199s watching_clients:0
199s clients_in_timeout_table:0
199s total_watched_keys:0
199s total_blocking_keys:0
199s total_blocking_keys_on_nokey:0
199s paused_reason:none
199s paused_actions:none
199s paused_timeout_milliseconds:0
199s 
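The # Server block shows the daemon supervised by systemd, listening on 127.0.0.1:6379 and reading /etc/valkey/valkey.conf. An INFO dump like the one in this test can be reproduced with the packaged client; a hedged example, assuming the defaults shown above:

  valkey-cli -h 127.0.0.1 -p 6379 PING
  valkey-cli INFO server | grep -E '^(valkey_version|server_mode|tcp_port)'
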
199s # Memory
199s used_memory:945152
199s used_memory_human:923.00K
199s used_memory_rss:14209024
199s used_memory_rss_human:13.55M
199s used_memory_peak:945152
199s used_memory_peak_human:923.00K
199s used_memory_peak_perc:100.29%
199s used_memory_overhead:925248
199s used_memory_startup:925024
199s used_memory_dataset:19904
199s used_memory_dataset_perc:98.89%
199s allocator_allocated:4426880
199s allocator_active:9043968
199s allocator_resident:10354688
199s allocator_muzzy:0
199s total_system_memory:4086984704
199s total_system_memory_human:3.81G
199s used_memory_lua:32768
199s used_memory_vm_eval:32768
199s used_memory_lua_human:32.00K
199s used_memory_scripts_eval:0
199s number_of_cached_scripts:0
199s number_of_functions:0
199s number_of_libraries:0
199s used_memory_vm_functions:33792
199s used_memory_vm_total:66560
199s used_memory_vm_total_human:65.00K
199s used_memory_functions:224
199s used_memory_scripts:224
199s used_memory_scripts_human:224B
199s maxmemory:0
199s maxmemory_human:0B
199s maxmemory_policy:noeviction
199s allocator_frag_ratio:1.00
199s allocator_frag_bytes:0
199s allocator_rss_ratio:1.14
199s allocator_rss_bytes:1310720
199s rss_overhead_ratio:1.37
199s rss_overhead_bytes:3854336
199s mem_fragmentation_ratio:15.36
199s mem_fragmentation_bytes:13283856
199s mem_not_counted_for_evict:0
199s mem_replication_backlog:0
199s mem_total_replication_buffers:0
199s mem_clients_slaves:0
199s mem_clients_normal:0
199s mem_cluster_links:0
199s mem_aof_buffer:0
199s mem_allocator:jemalloc-5.3.0
199s mem_overhead_db_hashtable_rehashing:0
199s active_defrag_running:0
199s lazyfree_pending_objects:0
199s lazyfreed_objects:0
199s 
199s # Persistence
199s loading:0
199s async_loading:0
199s current_cow_peak:0
199s current_cow_size:0
199s current_cow_size_age:0
199s current_fork_perc:0.00
199s current_save_keys_processed:0
199s current_save_keys_total:0
199s rdb_changes_since_last_save:0
199s rdb_bgsave_in_progress:0
199s rdb_last_save_time:1750343308
199s rdb_last_bgsave_status:ok
199s rdb_last_bgsave_time_sec:-1
199s rdb_current_bgsave_time_sec:-1
199s rdb_saves:0
199s rdb_last_cow_size:0
199s rdb_last_load_keys_expired:0
199s rdb_last_load_keys_loaded:0
199s aof_enabled:0
199s aof_rewrite_in_progress:0
199s aof_rewrite_scheduled:0
199s aof_last_rewrite_time_sec:-1
199s aof_current_rewrite_time_sec:-1
199s aof_last_bgrewrite_status:ok
199s aof_rewrites:0
199s aof_rewrites_consecutive_failures:0
199s aof_last_write_status:ok
199s aof_last_cow_size:0
199s module_fork_in_progress:0
199s module_fork_last_cow_size:0
199s 
199s # Stats
199s total_connections_received:1
199s total_commands_processed:0
199s instantaneous_ops_per_sec:0
199s total_net_input_bytes:14
199s total_net_output_bytes:0
199s total_net_repl_input_bytes:0
199s total_net_repl_output_bytes:0
199s instantaneous_input_kbps:0.00
199s instantaneous_output_kbps:0.00
199s instantaneous_input_repl_kbps:0.00
199s instantaneous_output_repl_kbps:0.00
199s rejected_connections:0
199s sync_full:0
199s sync_partial_ok:0
199s sync_partial_err:0
199s expired_keys:0
199s expired_stale_perc:0.00
199s expired_time_cap_reached_count:0
199s expire_cycle_cpu_milliseconds:0
199s evicted_keys:0
199s evicted_clients:0
199s evicted_scripts:0
199s total_eviction_exceeded_time:0
199s current_eviction_exceeded_time:0
199s keyspace_hits:0
199s keyspace_misses:0
199s pubsub_channels:0
199s pubsub_patterns:0
199s pubsubshard_channels:0
199s latest_fork_usec:0
199s total_forks:0
199s migrate_cached_sockets:0
199s slave_expires_tracked_keys:0
199s active_defrag_hits:0
199s active_defrag_misses:0
199s active_defrag_key_hits:0
199s active_defrag_key_misses:0
199s total_active_defrag_time:0
199s current_active_defrag_time:0
199s tracking_total_keys:0
199s tracking_total_items:0
199s tracking_total_prefixes:0
199s unexpected_error_replies:0
199s total_error_replies:0
199s dump_payload_sanitizations:0
199s total_reads_processed:1
199s total_writes_processed:0
199s io_threaded_reads_processed:0
199s io_threaded_writes_processed:0
199s io_threaded_freed_objects:0
199s io_threaded_accept_processed:0
199s io_threaded_poll_processed:0
199s io_threaded_total_prefetch_batches:0
199s io_threaded_total_prefetch_entries:0
199s client_query_buffer_limit_disconnections:0
199s client_output_buffer_limit_disconnections:0
199s reply_buffer_shrinks:0
199s reply_buffer_expands:0
199s eventloop_cycles:51
199s eventloop_duration_sum:10456
199s eventloop_duration_cmd_sum:0
199s instantaneous_eventloop_cycles_per_sec:9
199s instantaneous_eventloop_duration_usec:210
199s acl_access_denied_auth:0
199s acl_access_denied_cmd:0
199s acl_access_denied_key:0
199s acl_access_denied_channel:0
199s 
199s # Replication
199s role:master
199s connected_slaves:0
199s replicas_waiting_psync:0
199s master_failover_state:no-failover
199s master_replid:a87d4d83221f3f959ced0d38086cf19d8318c136
199s master_replid2:0000000000000000000000000000000000000000
199s master_repl_offset:0
199s second_repl_offset:-1
199s repl_backlog_active:0
199s repl_backlog_size:10485760
199s repl_backlog_first_byte_offset:0
199s repl_backlog_histlen:0
199s 
199s # CPU
199s used_cpu_sys:0.020988
199s used_cpu_user:0.051052
199s used_cpu_sys_children:0.003889
199s used_cpu_user_children:0.001072
199s used_cpu_sys_main_thread:0.020751
199s used_cpu_user_main_thread:0.050395
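The # Persistence block (aof_enabled:0, rdb_saves:0) reflects the stock configuration; the same settings reappear in every benchmark header below as "host configuration". A hedged way to inspect them directly on the running server:

  valkey-cli CONFIG GET save
  valkey-cli CONFIG GET appendonly
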
199s 
199s # Modules
199s 
199s # Errorstats
199s 
199s # Cluster
199s cluster_enabled:0
199s 
199s # Keyspace
199s Redis ver. 8.1.1
199s autopkgtest [14:28:33]: test 0001-valkey-cli: -----------------------]
200s autopkgtest [14:28:34]: test 0001-valkey-cli: - - - - - - - - - - results - - - - - - - - - -
200s 0001-valkey-cli PASS
200s autopkgtest [14:28:34]: test 0002-benchmark: preparing testbed
201s Reading package lists...
201s Building dependency tree...
201s Reading state information...
201s Solving dependencies...
202s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
203s autopkgtest [14:28:37]: test 0002-benchmark: [-----------------------
204s **************************************************************************
204s # A new feature in cloud-init identified possible datasources for       #
204s # this system as:                                                        #
204s #   []                                                                   #
204s # However, the datasource used was: OpenStack                           #
204s #                                                                        #
204s # In the future, cloud-init will only attempt to use datasources that   #
204s # are identified or specifically configured.                            #
204s # For more information see                                               #
204s #   https://bugs.launchpad.net/bugs/1669675                              #
204s #                                                                        #
204s # If you are seeing this message, please file a bug against             #
204s # cloud-init at                                                          #
204s #   https://github.com/canonical/cloud-init/issues                       #
204s # Make sure to include the cloud provider your instance is              #
204s # running on.                                                            #
204s #                                                                        #
204s # After you have filed a bug, you can disable this warning by launching #
204s # your instance with the cloud-config below, or putting that content    #
204s # into /etc/cloud/cloud.cfg.d/99-warnings.cfg                            #
204s #                                                                        #
204s # #cloud-config                                                          #
204s # warnings:                                                              #
204s #   dsid_missing_source: off                                             #
204s **************************************************************************
204s 
204s Disable the warnings above by:
204s   touch /root/.cloud-warnings.skip
204s or
204s   touch /var/lib/cloud/instance/warnings/.skip
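
The 0002-benchmark output below reports 100000 requests, 50 parallel clients, and a 3-byte payload for each test, which correspond to valkey-benchmark's -n, -c, and -d options (the tool keeps redis-benchmark's interface). A minimal sketch of an equivalent manual run against the server installed above:

  valkey-benchmark -h 127.0.0.1 -p 6379 -n 100000 -c 50 -d 3
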
209s PING_INLINE: rps=0.0 (overall: 0.0) avg_msec=nan (overall: nan)
209s ====== PING_INLINE ======
209s   100000 requests completed in 0.20 seconds
209s   50 parallel clients
209s   3 bytes payload
209s   keep alive: 1
209s   host configuration "save": 3600 1 300 100 60 10000
209s   host configuration "appendonly": no
209s   multi-thread: no
209s 
209s Latency by percentile distribution:
209s 0.000% <= 0.399 milliseconds (cumulative count 10)
209s 50.000% <= 0.839 milliseconds (cumulative count 51170)
209s 75.000% <= 1.031 milliseconds (cumulative count 75350)
209s 87.500% <= 1.135 milliseconds (cumulative count 88110)
209s 93.750% <= 1.271 milliseconds (cumulative count 93900)
209s 96.875% <= 1.463 milliseconds (cumulative count 96890)
209s 98.438% <= 1.599 milliseconds (cumulative count 98540)
209s 99.219% <= 1.719 milliseconds (cumulative count 99230)
209s 99.609% <= 1.919 milliseconds (cumulative count 99610)
209s 99.805% <= 2.031 milliseconds (cumulative count 99810)
209s 99.902% <= 2.399 milliseconds (cumulative count 99910)
209s 99.951% <= 2.535 milliseconds (cumulative count 99960)
209s 99.976% <= 2.583 milliseconds (cumulative count 99980)
209s 99.988% <= 2.615 milliseconds (cumulative count 99990)
209s 99.994% <= 2.647 milliseconds (cumulative count 100000)
209s 100.000% <= 2.647 milliseconds (cumulative count 100000)
209s 
209s Cumulative distribution of latencies:
209s 0.000% <= 0.103 milliseconds (cumulative count 0)
209s 0.020% <= 0.407 milliseconds (cumulative count 20)
209s 0.190% <= 0.503 milliseconds (cumulative count 190)
209s 1.050% <= 0.607 milliseconds (cumulative count 1050)
209s 11.250% <= 0.703 milliseconds (cumulative count 11250)
209s 44.760% <= 0.807 milliseconds (cumulative count 44760)
209s 60.360% <= 0.903 milliseconds (cumulative count 60360)
209s 72.450% <= 1.007 milliseconds (cumulative count 72450)
209s 84.790% <= 1.103 milliseconds (cumulative count 84790)
209s 92.190% <= 1.207 milliseconds (cumulative count 92190)
209s 94.660% <= 1.303 milliseconds (cumulative count 94660)
209s 96.230% <= 1.407 milliseconds (cumulative count 96230)
209s 97.430% <= 1.503 milliseconds (cumulative count 97430)
209s 98.610% <= 1.607 milliseconds (cumulative count 98610)
209s 99.150% <= 1.703 milliseconds (cumulative count 99150)
209s 99.400% <= 1.807 milliseconds (cumulative count 99400)
209s 99.580% <= 1.903 milliseconds (cumulative count 99580)
209s 99.790% <= 2.007 milliseconds (cumulative count 99790)
209s 99.830% <= 2.103 milliseconds (cumulative count 99830)
209s 100.000% <= 3.103 milliseconds (cumulative count 100000)
209s 
209s Summary:
209s   throughput summary: 492610.84 requests per second
209s   latency summary (msec):
209s           avg       min       p50       p95       p99       max
209s         0.904     0.392     0.839     1.327     1.655     2.647
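
Each block that follows repeats this structure: a test header, the run parameters, two latency tables, and a summary. When only the headline numbers are needed, the quiet flag collapses each test to a single line; a hedged example limited to a few tests:

  valkey-benchmark -t ping,set,get -n 100000 -c 50 -q
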
209s PING_MBULK: rps=76040.0 (overall: 413260.9) avg_msec=0.965 (overall: 0.965)
209s ====== PING_MBULK ======
209s   100000 requests completed in 0.25 seconds
209s   50 parallel clients
209s   3 bytes payload
209s   keep alive: 1
209s   host configuration "save": 3600 1 300 100 60 10000
209s   host configuration "appendonly": no
209s   multi-thread: no
209s 
209s Latency by percentile distribution:
209s 0.000% <= 0.263 milliseconds (cumulative count 10)
209s 50.000% <= 0.815 milliseconds (cumulative count 50750)
209s 75.000% <= 0.967 milliseconds (cumulative count 75470)
209s 87.500% <= 1.095 milliseconds (cumulative count 87840)
209s 93.750% <= 1.215 milliseconds (cumulative count 93820)
209s 96.875% <= 1.375 milliseconds (cumulative count 96880)
209s 98.438% <= 1.719 milliseconds (cumulative count 98440)
209s 99.219% <= 2.615 milliseconds (cumulative count 99220)
209s 99.609% <= 2.895 milliseconds (cumulative count 99610)
209s 99.805% <= 3.055 milliseconds (cumulative count 99810)
209s 99.902% <= 3.231 milliseconds (cumulative count 99910)
209s 99.951% <= 3.311 milliseconds (cumulative count 99960)
209s 99.976% <= 3.399 milliseconds (cumulative count 99980)
209s 99.988% <= 3.455 milliseconds (cumulative count 99990)
209s 99.994% <= 3.559 milliseconds (cumulative count 100000)
209s 100.000% <= 3.559 milliseconds (cumulative count 100000)
209s 
209s Cumulative distribution of latencies:
209s 0.000% <= 0.103 milliseconds (cumulative count 0)
209s 0.030% <= 0.303 milliseconds (cumulative count 30)
209s 0.130% <= 0.407 milliseconds (cumulative count 130)
209s 1.280% <= 0.503 milliseconds (cumulative count 1280)
209s 11.840% <= 0.607 milliseconds (cumulative count 11840)
209s 29.200% <= 0.703 milliseconds (cumulative count 29200)
209s 49.360% <= 0.807 milliseconds (cumulative count 49360)
209s 65.810% <= 0.903 milliseconds (cumulative count 65810)
209s 80.380% <= 1.007 milliseconds (cumulative count 80380)
209s 88.320% <= 1.103 milliseconds (cumulative count 88320)
209s 93.560% <= 1.207 milliseconds (cumulative count 93560)
209s 95.840% <= 1.303 milliseconds (cumulative count 95840)
209s 97.300% <= 1.407 milliseconds (cumulative count 97300)
209s 97.930% <= 1.503 milliseconds (cumulative count 97930)
209s 98.270% <= 1.607 milliseconds (cumulative count 98270)
209s 98.400% <= 1.703 milliseconds (cumulative count 98400)
209s 98.510% <= 1.807 milliseconds (cumulative count 98510)
209s 98.600% <= 1.903 milliseconds (cumulative count 98600)
209s 98.700% <= 2.007 milliseconds (cumulative count 98700)
209s 98.820% <= 2.103 milliseconds (cumulative count 98820)
209s 99.840% <= 3.103 milliseconds (cumulative count 99840)
209s 100.000% <= 4.103 milliseconds (cumulative count 100000)
209s 
209s Summary:
209s   throughput summary: 406504.06 requests per second
209s   latency summary (msec):
209s           avg       min       p50       p95       p99       max
209s         0.861     0.256     0.815     1.263     2.351     3.559
210s SET: rps=67012.0 (overall: 343265.3) avg_msec=1.226 (overall: 1.226)
210s ====== SET ======
210s   100000 requests completed in 0.28 seconds
210s   50 parallel clients
210s   3 bytes payload
210s   keep alive: 1
210s   host configuration "save": 3600 1 300 100 60 10000
210s   host configuration "appendonly": no
210s   multi-thread: no
210s 
210s Latency by percentile distribution:
210s 0.000% <= 0.511 milliseconds (cumulative count 10)
210s 50.000% <= 1.183 milliseconds (cumulative count 50450)
210s 75.000% <= 1.399 milliseconds (cumulative count 75570)
210s 87.500% <= 1.559 milliseconds (cumulative count 87970)
210s 93.750% <= 1.647 milliseconds (cumulative count 93910)
210s 96.875% <= 1.735 milliseconds (cumulative count 96970)
210s 98.438% <= 1.831 milliseconds (cumulative count 98440)
210s 99.219% <= 1.903 milliseconds (cumulative count 99220)
210s 99.609% <= 1.975 milliseconds (cumulative count 99620)
210s 99.805% <= 2.047 milliseconds (cumulative count 99820)
210s 99.902% <= 2.111 milliseconds (cumulative count 99910)
210s 99.951% <= 2.223 milliseconds (cumulative count 99960)
210s 99.976% <= 2.279 milliseconds (cumulative count 99980)
210s 99.988% <= 2.311 milliseconds (cumulative count 99990)
210s 99.994% <= 2.319 milliseconds (cumulative count 100000)
210s 100.000% <= 2.319 milliseconds (cumulative count 100000)
210s 
210s Cumulative distribution of latencies:
210s 0.000% <= 0.103 milliseconds (cumulative count 0)
210s 0.160% <= 0.607 milliseconds (cumulative count 160)
210s 0.560% <= 0.703 milliseconds (cumulative count 560)
210s 1.320% <= 0.807 milliseconds (cumulative count 1320)
210s 3.500% <= 0.903 milliseconds (cumulative count 3500)
210s 13.050% <= 1.007 milliseconds (cumulative count 13050)
210s 33.040% <= 1.103 milliseconds (cumulative count 33040)
210s 54.240% <= 1.207 milliseconds (cumulative count 54240)
210s 66.370% <= 1.303 milliseconds (cumulative count 66370)
210s 76.230% <= 1.407 milliseconds (cumulative count 76230)
210s 83.690% <= 1.503 milliseconds (cumulative count 83690)
210s 91.420% <= 1.607 milliseconds (cumulative count 91420)
210s 96.230% <= 1.703 milliseconds (cumulative count 96230)
210s 98.100% <= 1.807 milliseconds (cumulative count 98100)
210s 99.220% <= 1.903 milliseconds (cumulative count 99220)
210s 99.740% <= 2.007 milliseconds (cumulative count 99740)
210s 99.900% <= 2.103 milliseconds (cumulative count 99900)
210s 100.000% <= 3.103 milliseconds (cumulative count 100000)
210s 
210s Summary:
210s   throughput summary: 354609.94 requests per second
210s   latency summary (msec):
210s           avg       min       p50       p95       p99       max
210s         1.239     0.504     1.183     1.671     1.887     2.319
210s GET: rps=21000.0 (overall: 375000.0) avg_msec=1.096 (overall: 1.096)
210s GET: rps=350876.5 (overall: 352151.0) avg_msec=1.119 (overall: 1.117)
210s ====== GET ======
210s   100000 requests completed in 0.28 seconds
210s   50 parallel clients
210s   3 bytes payload
210s   keep alive: 1
210s   host configuration "save": 3600 1 300 100 60 10000
210s   host configuration "appendonly": no
210s   multi-thread: no
210s 
210s Latency by percentile distribution:
210s 0.000% <= 0.455 milliseconds (cumulative count 10)
210s 50.000% <= 1.047 milliseconds (cumulative count 50880)
210s 75.000% <= 1.263 milliseconds (cumulative count 75420)
210s 87.500% <= 1.447 milliseconds (cumulative count 87880)
210s 93.750% <= 1.615 milliseconds (cumulative count 93850)
210s 96.875% <= 1.927 milliseconds (cumulative count 96880)
210s 98.438% <= 2.639 milliseconds (cumulative count 98440)
210s 99.219% <= 3.183 milliseconds (cumulative count 99230)
210s 99.609% <= 3.687 milliseconds (cumulative count 99610)
210s 99.805% <= 4.383 milliseconds (cumulative count 99810)
210s 99.902% <= 4.967 milliseconds (cumulative count 99910)
210s 99.951% <= 5.087 milliseconds (cumulative count 99960)
210s 99.976% <= 5.303 milliseconds (cumulative count 99980)
210s 99.988% <= 5.335 milliseconds (cumulative count 99990)
210s 99.994% <= 5.399 milliseconds (cumulative count 100000)
210s 100.000% <= 5.399 milliseconds (cumulative count 100000)
210s 
210s Cumulative distribution of latencies:
210s 0.000% <= 0.103 milliseconds (cumulative count 0)
210s 0.110% <= 0.503 milliseconds (cumulative count 110)
210s 1.770% <= 0.607 milliseconds (cumulative count 1770)
210s 6.790% <= 0.703 milliseconds (cumulative count 6790)
210s 15.600% <= 0.807 milliseconds (cumulative count 15600)
210s 26.420% <= 0.903 milliseconds (cumulative count 26420)
210s 43.450% <= 1.007 milliseconds (cumulative count 43450)
210s 59.910% <= 1.103 milliseconds (cumulative count 59910)
210s 70.750% <= 1.207 milliseconds (cumulative count 70750)
210s 78.420% <= 1.303 milliseconds (cumulative count 78420)
210s 85.450% <= 1.407 milliseconds (cumulative count 85450)
210s 90.670% <= 1.503 milliseconds (cumulative count 90670)
210s 93.660% <= 1.607 milliseconds (cumulative count 93660)
210s 95.200% <= 1.703 milliseconds (cumulative count 95200)
210s 96.140% <= 1.807 milliseconds (cumulative count 96140)
210s 96.760% <= 1.903 milliseconds (cumulative count 96760)
210s 97.210% <= 2.007 milliseconds (cumulative count 97210)
210s 97.470% <= 2.103 milliseconds (cumulative count 97470)
210s 99.120% <= 3.103 milliseconds (cumulative count 99120)
210s 99.720% <= 4.103 milliseconds (cumulative count 99720)
210s 99.960% <= 5.103 milliseconds (cumulative count 99960)
210s 100.000% <= 6.103 milliseconds (cumulative count 100000)
210s 
210s Summary:
210s   throughput summary: 353356.91 requests per second
210s   latency summary (msec):
210s           avg       min       p50       p95       p99       max
210s         1.122     0.448     1.047     1.695     3.039     5.399
210s INCR: rps=355080.0 (overall: 387641.9) avg_msec=1.050 (overall: 1.050)
210s ====== INCR ======
210s   100000 requests completed in 0.26 seconds
210s   50 parallel clients
210s   3 bytes payload
210s   keep alive: 1
210s   host configuration "save": 3600 1 300 100 60 10000
210s   host configuration "appendonly": no
210s   multi-thread: no
210s 
210s Latency by percentile distribution:
210s 0.000% <= 0.495 milliseconds (cumulative count 40)
210s 50.000% <= 1.031 milliseconds (cumulative count 50560)
210s 75.000% <= 1.223 milliseconds (cumulative count 75330)
210s 87.500% <= 1.383 milliseconds (cumulative count 87900)
210s 93.750% <= 1.479 milliseconds (cumulative count 94230)
210s 96.875% <= 1.543 milliseconds (cumulative count 96980)
210s 98.438% <= 1.623 milliseconds (cumulative count 98520)
210s 99.219% <= 1.695 milliseconds (cumulative count 99230)
210s 99.609% <= 1.759 milliseconds (cumulative count 99620)
210s 99.805% <= 1.831 milliseconds (cumulative count 99820)
210s 99.902% <= 1.919 milliseconds (cumulative count 99910)
210s 99.951% <= 1.999 milliseconds (cumulative count 99970)
210s 99.976% <= 2.023 milliseconds (cumulative count 99980)
210s 99.988% <= 2.039 milliseconds (cumulative count 99990)
210s 99.994% <= 2.055 milliseconds (cumulative count 100000)
210s 100.000% <= 2.055 milliseconds (cumulative count 100000)
210s 
210s Cumulative distribution of latencies:
210s 0.000% <= 0.103 milliseconds (cumulative count 0)
210s 0.050% <= 0.503 milliseconds (cumulative count 50)
210s 1.620% <= 0.607 milliseconds (cumulative count 1620)
210s 7.150% <= 0.703 milliseconds (cumulative count 7150)
210s 17.620% <= 0.807 milliseconds (cumulative count 17620)
210s 29.200% <= 0.903 milliseconds (cumulative count 29200)
210s 46.090% <= 1.007 milliseconds (cumulative count 46090)
210s 61.890% <= 1.103 milliseconds (cumulative count 61890)
210s 73.700% <= 1.207 milliseconds (cumulative count 73700)
210s 82.080% <= 1.303 milliseconds (cumulative count 82080)
210s 89.650% <= 1.407 milliseconds (cumulative count 89650)
210s 95.340% <= 1.503 milliseconds (cumulative count 95340)
210s 98.370% <= 1.607 milliseconds (cumulative count 98370)
210s 99.300% <= 1.703 milliseconds (cumulative count 99300)
210s 99.750% <= 1.807 milliseconds (cumulative count 99750)
210s 99.890% <= 1.903 milliseconds (cumulative count 99890)
210s 99.970% <= 2.007 milliseconds (cumulative count 99970)
210s 100.000% <= 2.103 milliseconds (cumulative count 100000)
210s 
210s Summary:
210s   throughput summary: 387596.91 requests per second
210s   latency summary (msec):
210s           avg       min       p50       p95       p99       max
210s         1.054     0.488     1.031     1.495     1.679     2.055
211s LPUSH: rps=297320.0 (overall: 340963.3) avg_msec=1.287 (overall: 1.287)
211s ====== LPUSH ======
211s   100000 requests completed in 0.30 seconds
211s   50 parallel clients
211s   3 bytes payload
211s   keep alive: 1
211s   host configuration "save": 3600 1 300 100 60 10000
211s   host configuration "appendonly": no
211s   multi-thread: no
211s 
211s Latency by percentile distribution:
211s 0.000% <= 0.527 milliseconds (cumulative count 10)
211s 50.000% <= 1.247 milliseconds (cumulative count 50260)
211s 75.000% <= 1.479 milliseconds (cumulative count 75550)
211s 87.500% <= 1.647 milliseconds (cumulative count 87640)
211s 93.750% <= 1.751 milliseconds (cumulative count 93930)
211s 96.875% <= 1.847 milliseconds (cumulative count 96880)
211s 98.438% <= 1.943 milliseconds (cumulative count 98490)
211s 99.219% <= 2.015 milliseconds (cumulative count 99220)
211s 99.609% <= 2.103 milliseconds (cumulative count 99630)
211s 99.805% <= 2.199 milliseconds (cumulative count 99820)
211s 99.902% <= 2.319 milliseconds (cumulative count 99910)
211s 99.951% <= 2.383 milliseconds (cumulative count 99960)
211s 99.976% <= 2.407 milliseconds (cumulative count 99980)
211s 99.988% <= 2.439 milliseconds (cumulative count 99990)
211s 99.994% <= 2.559 milliseconds (cumulative count 100000)
211s 100.000% <= 2.559 milliseconds (cumulative count 100000)
211s 
211s Cumulative distribution of latencies:
211s 0.000% <= 0.103 milliseconds (cumulative count 0)
211s 0.060% <= 0.607 milliseconds (cumulative count 60)
211s 0.290% <= 0.703 milliseconds (cumulative count 290)
211s 0.960% <= 0.807 milliseconds (cumulative count 960)
211s 2.390% <= 0.903 milliseconds (cumulative count 2390)
211s 8.470% <= 1.007 milliseconds (cumulative count 8470)
211s 21.720% <= 1.103 milliseconds (cumulative count 21720)
211s 42.890% <= 1.207 milliseconds (cumulative count 42890)
211s 57.930% <= 1.303 milliseconds (cumulative count 57930)
211s 69.160% <= 1.407 milliseconds (cumulative count 69160)
211s 77.540% <= 1.503 milliseconds (cumulative count 77540)
211s 85.060% <= 1.607 milliseconds (cumulative count 85060)
211s 91.420% <= 1.703 milliseconds (cumulative count 91420)
211s 96.000% <= 1.807 milliseconds (cumulative count 96000)
211s 97.910% <= 1.903 milliseconds (cumulative count 97910)
211s 99.200% <= 2.007 milliseconds (cumulative count 99200)
211s 99.630% <= 2.103 milliseconds (cumulative count 99630)
211s 100.000% <= 3.103 milliseconds (cumulative count 100000)
211s 
211s Summary:
211s   throughput summary: 336700.34 requests per second
211s   latency summary (msec):
211s           avg       min       p50       p95       p99       max
211s         1.306     0.520     1.247     1.783     1.991     2.559
211s RPUSH: rps=235378.5 (overall: 349585.8) avg_msec=1.232 (overall: 1.232)
211s ====== RPUSH ======
211s   100000 requests completed in 0.28 seconds
211s   50 parallel clients
211s   3 bytes payload
211s   keep alive: 1
211s   host configuration "save": 3600 1 300 100 60 10000
211s   host configuration "appendonly": no
211s   multi-thread: no
211s 
211s Latency by percentile distribution:
211s 0.000% <= 0.535 milliseconds (cumulative count 10)
211s 50.000% <= 1.183 milliseconds (cumulative count 50440)
211s 75.000% <= 1.391 milliseconds (cumulative count 75040)
211s 87.500% <= 1.559 milliseconds (cumulative count 87590)
211s 93.750% <= 1.655 milliseconds (cumulative count 93820)
211s 96.875% <= 1.743 milliseconds (cumulative count 96990)
211s 98.438% <= 1.839 milliseconds (cumulative count 98460)
211s 99.219% <= 1.911 milliseconds (cumulative count 99290)
211s 99.609% <= 1.967 milliseconds (cumulative count 99650)
211s 99.805% <= 2.031 milliseconds (cumulative count 99810)
211s 99.902% <= 2.095 milliseconds (cumulative count 99910)
211s 99.951% <= 2.175 milliseconds (cumulative count 99960)
211s 99.976% <= 2.263 milliseconds (cumulative count 99980)
211s 99.988% <= 2.327 milliseconds (cumulative count 99990)
211s 99.994% <= 2.439 milliseconds (cumulative count 100000)
211s 100.000% <= 2.439 milliseconds (cumulative count 100000)
211s 
211s Cumulative distribution of latencies:
211s 0.000% <= 0.103 milliseconds (cumulative count 0)
211s 0.180% <= 0.607 milliseconds (cumulative count 180)
211s 0.770% <= 0.703 milliseconds (cumulative count 770)
211s 2.360% <= 0.807 milliseconds (cumulative count 2360)
211s 6.130% <= 0.903 milliseconds (cumulative count 6130)
211s 15.630% <= 1.007 milliseconds (cumulative count 15630)
211s 33.300% <= 1.103 milliseconds (cumulative count 33300)
211s 54.780% <= 1.207 milliseconds (cumulative count 54780)
211s 67.170% <= 1.303 milliseconds (cumulative count 67170)
211s 76.460% <= 1.407 milliseconds (cumulative count 76460)
211s 83.750% <= 1.503 milliseconds (cumulative count 83750)
211s 90.890% <= 1.607 milliseconds (cumulative count 90890)
211s 95.990% <= 1.703 milliseconds (cumulative count 95990)
211s 98.150% <= 1.807 milliseconds (cumulative count 98150)
211s 99.180% <= 1.903 milliseconds (cumulative count 99180)
211s 99.750% <= 2.007 milliseconds (cumulative count 99750)
211s 99.920% <= 2.103 milliseconds (cumulative count 99920)
211s 100.000% <= 3.103 milliseconds (cumulative count 100000)
211s 
211s Summary:
211s   throughput summary: 352112.66 requests per second
211s   latency summary (msec):
211s           avg       min       p50       p95       p99       max
211s         1.232     0.528     1.183     1.687     1.879     2.439
211s LPOP: rps=171280.0 (overall: 324393.9) avg_msec=1.348 (overall: 1.348)
211s ====== LPOP ======
211s   100000 requests completed in 0.31 seconds
211s   50 parallel clients
211s   3 bytes payload
211s   keep alive: 1
211s   host configuration "save": 3600 1 300 100 60 10000
211s   host configuration "appendonly": no
211s   multi-thread: no
211s 
211s Latency by percentile distribution:
211s 0.000% <= 0.567 milliseconds (cumulative count 10)
211s 50.000% <= 1.287 milliseconds (cumulative count 50340)
211s 75.000% <= 1.519 milliseconds (cumulative count 75340)
211s 87.500% <= 1.687 milliseconds (cumulative count 87650)
211s 93.750% <= 1.783 milliseconds (cumulative count 93800)
211s 96.875% <= 1.863 milliseconds (cumulative count 97040)
211s 98.438% <= 1.951 milliseconds (cumulative count 98520)
211s 99.219% <= 2.023 milliseconds (cumulative count 99310)
211s 99.609% <= 2.071 milliseconds (cumulative count 99610)
211s 99.805% <= 2.127 milliseconds (cumulative count 99820)
211s 99.902% <= 2.223 milliseconds (cumulative count 99910)
211s 99.951% <= 2.423 milliseconds (cumulative count 99970)
211s 99.976% <= 2.431 milliseconds (cumulative count 99980)
211s 99.988% <= 2.447 milliseconds (cumulative count 99990)
211s 99.994% <= 2.479 milliseconds (cumulative count 100000)
211s 100.000% <= 2.479 milliseconds (cumulative count 100000)
211s 
211s Cumulative distribution of latencies:
211s 0.000% <= 0.103 milliseconds (cumulative count 0)
211s 0.050% <= 0.607 milliseconds (cumulative count 50)
211s 0.230% <= 0.703 milliseconds (cumulative count 230)
211s 0.540% <= 0.807 milliseconds (cumulative count 540)
211s 1.260% <= 0.903 milliseconds (cumulative count 1260)
211s 4.290% <= 1.007 milliseconds (cumulative count 4290)
211s 15.550% <= 1.103 milliseconds (cumulative count 15550)
211s 36.200% <= 1.207 milliseconds (cumulative count 36200)
211s 52.620% <= 1.303 milliseconds (cumulative count 52620)
211s 65.200% <= 1.407 milliseconds (cumulative count 65200)
211s 74.050% <= 1.503 milliseconds (cumulative count 74050)
211s 82.040% <= 1.607 milliseconds (cumulative count 82040)
211s 88.750% <= 1.703 milliseconds (cumulative count 88750)
211s 95.030% <= 1.807 milliseconds (cumulative count 95030)
211s 97.900% <= 1.903 milliseconds (cumulative count 97900)
211s 99.100% <= 2.007 milliseconds (cumulative count 99100)
211s 99.750% <= 2.103 milliseconds (cumulative count 99750)
211s 100.000% <= 3.103 milliseconds (cumulative count 100000)
211s 
211s Summary:
211s   throughput summary: 327868.84 requests per second
211s   latency summary (msec):
211s           avg       min       p50       p95       p99       max
211s         1.344     0.560     1.287     1.807     2.007     2.479
211s RPOP: rps=99600.0 (overall: 332000.0) avg_msec=1.308 (overall: 1.308)
211s ====== RPOP ======
211s   100000 requests completed in 0.30 seconds
211s   50 parallel clients
211s   3 bytes payload
211s   keep alive: 1
211s   host configuration "save": 3600 1 300 100 60 10000
211s   host configuration "appendonly": no
211s   multi-thread: no
211s 
211s Latency by percentile distribution:
211s 0.000% <= 0.527 milliseconds (cumulative count 10)
211s 50.000% <= 1.263 milliseconds (cumulative count 50430)
211s 75.000% <= 1.495 milliseconds (cumulative count 75480)
211s 87.500% <= 1.663 milliseconds (cumulative count 87500)
211s 93.750% <= 1.759 milliseconds (cumulative count 93830)
211s 96.875% <= 1.831 milliseconds (cumulative count 96880)
211s 98.438% <= 1.943 milliseconds (cumulative count 98470)
211s 99.219% <= 2.007 milliseconds (cumulative count 99240)
211s 99.609% <= 2.071 milliseconds (cumulative count 99630)
211s 99.805% <= 2.127 milliseconds (cumulative count 99820)
211s 99.902% <= 2.207 milliseconds (cumulative count 99910)
211s 99.951% <= 2.311 milliseconds (cumulative count 99960)
211s 99.976% <= 2.463 milliseconds (cumulative count 99980)
211s 99.988% <= 2.495 milliseconds (cumulative count 99990)
211s 99.994% <= 2.591 milliseconds (cumulative count 100000)
211s 100.000% <= 2.591 milliseconds (cumulative count 100000)
211s 
211s Cumulative distribution of latencies:
211s 0.000% <= 0.103 milliseconds (cumulative count 0)
211s 0.120% <= 0.607 milliseconds (cumulative count 120)
211s 0.360% <= 0.703 milliseconds (cumulative count 360)
211s 0.690% <= 0.807 milliseconds (cumulative count 690)
211s 1.460% <= 0.903 milliseconds (cumulative count 1460)
211s 4.500% <= 1.007 milliseconds (cumulative count 4500)
211s 16.640% <= 1.103 milliseconds (cumulative count 16640)
211s 39.710% <= 1.207 milliseconds (cumulative count 39710)
211s 56.160% <= 1.303 milliseconds (cumulative count 56160)
211s 67.540% <= 1.407 milliseconds (cumulative count 67540)
211s 76.070% <= 1.503 milliseconds (cumulative count 76070)
211s 83.610% <= 1.607 milliseconds (cumulative count 83610)
211s 90.310% <= 1.703 milliseconds (cumulative count 90310)
211s 96.120% <= 1.807 milliseconds (cumulative count 96120)
211s 98.010% <= 1.903 milliseconds (cumulative count 98010)
211s 99.240% <= 2.007 milliseconds (cumulative count 99240)
211s 99.740% <= 2.103 milliseconds (cumulative count 99740)
211s 100.000% <= 3.103 milliseconds (cumulative count 100000)
211s 
211s Summary:
211s   throughput summary: 332225.91 requests per second
211s   latency summary (msec):
211s           avg       min       p50       p95       p99       max
211s         1.326     0.520     1.263     1.783     1.991     2.591
212s SADD: rps=33027.9 (overall: 360434.8) avg_msec=1.106 (overall: 1.106)
212s SADD: rps=361440.0 (overall: 361355.3) avg_msec=1.139 (overall: 1.136)
212s ====== SADD ======
212s   100000 requests completed in 0.28 seconds
212s   50 parallel clients
212s   3 bytes payload
212s   keep alive: 1
212s   host configuration "save": 3600 1 300 100 60 10000
212s   host configuration "appendonly": no
212s   multi-thread: no
212s 
212s Latency by percentile distribution:
212s 0.000% <= 0.463 milliseconds (cumulative count 10)
212s 50.000% <= 1.095 milliseconds (cumulative count 50760)
212s 75.000% <= 1.287 milliseconds (cumulative count 75380)
212s 87.500% <= 1.455 milliseconds (cumulative count 87880)
212s 93.750% <= 1.567 milliseconds (cumulative count 94000)
212s 96.875% <= 1.679 milliseconds (cumulative count 96950)
212s 98.438% <= 1.815 milliseconds (cumulative count 98460)
212s 99.219% <= 2.015 milliseconds (cumulative count 99240)
212s 99.609% <= 3.063 milliseconds (cumulative count 99610)
212s 99.805% <= 3.799 milliseconds (cumulative count 99810)
212s 99.902% <= 4.175 milliseconds (cumulative count 99910)
212s 99.951% <= 4.407 milliseconds (cumulative count 99960)
212s 99.976% <= 4.463 milliseconds (cumulative count 99980)
212s 99.988% <= 4.479 milliseconds (cumulative count 99990)
212s 99.994% <= 4.519 milliseconds (cumulative count 100000)
212s 100.000% <= 4.519 milliseconds (cumulative count 100000)
212s 
212s Cumulative distribution of latencies:
212s 0.000% <= 0.103 milliseconds (cumulative count 0)
212s 0.110% <= 0.503 milliseconds (cumulative count 110)
212s 0.960% <= 0.607 milliseconds (cumulative count 960)
212s 3.500% <= 0.703 milliseconds (cumulative count 3500)
212s 9.120% <= 0.807 milliseconds (cumulative count 9120)
212s 17.600% <= 0.903 milliseconds (cumulative count 17600)
212s 33.510% <= 1.007 milliseconds (cumulative count 33510)
212s 52.320% <= 1.103 milliseconds (cumulative count 52320)
212s 67.870% <= 1.207 milliseconds (cumulative count 67870)
212s 76.760% <= 1.303 milliseconds (cumulative count 76760)
212s 84.540% <= 1.407 milliseconds (cumulative count 84540)
212s 90.870% <= 1.503 milliseconds (cumulative count 90870)
212s 95.440% <= 1.607 milliseconds (cumulative count 95440)
212s 97.320% <= 1.703 milliseconds (cumulative count 97320)
212s 98.420% <= 1.807 milliseconds (cumulative count 98420)
212s 98.990% <= 1.903 milliseconds (cumulative count 98990)
212s 99.210% <= 2.007 milliseconds (cumulative count 99210)
212s 99.370% <= 2.103 milliseconds (cumulative count 99370)
212s 99.620% <= 3.103 milliseconds (cumulative count 99620)
212s 99.890% <= 4.103 milliseconds (cumulative count 99890)
212s 100.000% <= 5.103 milliseconds (cumulative count 100000)
212s 
212s Summary:
212s   throughput summary: 361010.81 requests per second
212s   latency summary (msec):
212s           avg       min       p50       p95       p99       max
212s         1.137     0.456     1.095     1.599     1.919     4.519
212s HSET: rps=337450.2 (overall: 347131.2) avg_msec=1.250 (overall: 1.250)
212s ====== HSET ======
212s   100000 requests completed in 0.29 seconds
212s   50 parallel clients
212s   3 bytes payload
212s   keep alive: 1
212s   host configuration "save": 3600 1 300 100 60 10000
212s   host configuration "appendonly": no
212s   multi-thread: no
212s 
212s Latency by percentile distribution:
212s 0.000% <= 0.535 milliseconds (cumulative count 10)
212s 50.000% <= 1.199 milliseconds (cumulative count 50320)
212s 75.000% <= 1.415 milliseconds (cumulative count 75350)
212s 87.500% <= 1.583 milliseconds (cumulative count 87550)
212s 93.750% <= 1.679 milliseconds (cumulative count 93780)
212s 96.875% <= 1.751 milliseconds (cumulative count 96950)
212s 98.438% <= 1.855 milliseconds (cumulative count 98440)
212s 99.219% <= 1.935 milliseconds (cumulative count 99220)
212s 99.609% <= 1.983 milliseconds (cumulative count 99660)
212s 99.805% <= 2.031 milliseconds (cumulative count 99810)
212s 99.902% <= 2.119 milliseconds (cumulative count 99910)
212s 99.951% <= 2.175 milliseconds (cumulative count 99960)
212s 99.976% <= 2.207 milliseconds (cumulative count 99980)
212s 99.988% <= 2.223 milliseconds (cumulative count 100000)
212s 100.000% <= 2.223 milliseconds (cumulative count 100000)
212s 
212s Cumulative distribution of latencies:
212s 0.000% <= 0.103 milliseconds (cumulative count 0)
212s 0.100% <= 0.607 milliseconds (cumulative count 100)
212s 0.530% <= 0.703 milliseconds (cumulative count 530)
212s 1.830% <= 0.807 milliseconds (cumulative count 1830)
212s 4.510% <= 0.903 milliseconds (cumulative count 4510)
212s 12.780% <= 1.007 milliseconds (cumulative count 12780)
212s 29.380% <= 1.103 milliseconds (cumulative count 29380)
212s 51.790% <= 1.207 milliseconds (cumulative count 51790)
212s 64.970% <= 1.303 milliseconds (cumulative count 64970)
212s 74.720% <= 1.407 milliseconds (cumulative count 74720)
212s 81.820% <= 1.503 milliseconds (cumulative count 81820)
212s 89.180% <= 1.607 milliseconds (cumulative count 89180)
212s 95.180% <= 1.703 milliseconds (cumulative count 95180)
212s 97.970% <= 1.807 milliseconds (cumulative count 97970)
212s 98.940% <= 1.903 milliseconds (cumulative count 98940)
212s 99.760% <= 2.007 milliseconds (cumulative count 99760)
212s 99.890% <= 2.103 milliseconds (cumulative count 99890)
212s 100.000% <= 3.103 milliseconds (cumulative count 100000)
212s 
212s Summary:
212s   throughput summary: 347222.25 requests per second
212s   latency summary (msec):
212s           avg       min       p50       p95       p99       max
212s         1.253     0.528     1.199     1.703     1.911     2.223
"appendonly": no 212s multi-thread: no 212s 212s Latency by percentile distribution: 212s 0.000% <= 0.295 milliseconds (cumulative count 20) 212s 50.000% <= 0.959 milliseconds (cumulative count 50680) 212s 75.000% <= 1.135 milliseconds (cumulative count 75120) 212s 87.500% <= 1.279 milliseconds (cumulative count 87960) 212s 93.750% <= 1.391 milliseconds (cumulative count 94060) 212s 96.875% <= 1.463 milliseconds (cumulative count 96910) 212s 98.438% <= 1.551 milliseconds (cumulative count 98590) 212s 99.219% <= 1.631 milliseconds (cumulative count 99250) 212s 99.609% <= 1.703 milliseconds (cumulative count 99640) 212s 99.805% <= 1.767 milliseconds (cumulative count 99820) 212s 99.902% <= 1.815 milliseconds (cumulative count 99910) 212s 99.951% <= 1.935 milliseconds (cumulative count 99970) 212s 99.976% <= 1.943 milliseconds (cumulative count 99980) 212s 99.988% <= 1.967 milliseconds (cumulative count 99990) 212s 99.994% <= 1.991 milliseconds (cumulative count 100000) 212s 100.000% <= 1.991 milliseconds (cumulative count 100000) 212s 212s Cumulative distribution of latencies: 212s 0.000% <= 0.103 milliseconds (cumulative count 0) 212s 0.020% <= 0.303 milliseconds (cumulative count 20) 212s 0.100% <= 0.407 milliseconds (cumulative count 100) 212s 0.590% <= 0.503 milliseconds (cumulative count 590) 212s 4.420% <= 0.607 milliseconds (cumulative count 4420) 212s 14.340% <= 0.703 milliseconds (cumulative count 14340) 212s 27.930% <= 0.807 milliseconds (cumulative count 27930) 212s 41.630% <= 0.903 milliseconds (cumulative count 41630) 212s 58.060% <= 1.007 milliseconds (cumulative count 58060) 212s 71.350% <= 1.103 milliseconds (cumulative count 71350) 212s 82.320% <= 1.207 milliseconds (cumulative count 82320) 212s 89.390% <= 1.303 milliseconds (cumulative count 89390) 212s 94.750% <= 1.407 milliseconds (cumulative count 94750) 212s 97.880% <= 1.503 milliseconds (cumulative count 97880) 212s 99.120% <= 1.607 milliseconds (cumulative count 99120) 212s 99.640% <= 1.703 milliseconds (cumulative count 99640) 212s 99.890% <= 1.807 milliseconds (cumulative count 99890) 212s 99.950% <= 1.903 milliseconds (cumulative count 99950) 212s 100.000% <= 2.007 milliseconds (cumulative count 100000) 212s 212s Summary: 212s throughput summary: 403225.81 requests per second 212s latency summary (msec): 212s avg min p50 p95 p99 max 212s 0.974 0.288 0.959 1.415 1.607 1.991 213s ZADD: rps=251560.0 (overall: 311336.6) avg_msec=1.429 (overall: 1.429) ====== ZADD ====== 213s 100000 requests completed in 0.32 seconds 213s 50 parallel clients 213s 3 bytes payload 213s keep alive: 1 213s host configuration "save": 3600 1 300 100 60 10000 213s host configuration "appendonly": no 213s multi-thread: no 213s 213s Latency by percentile distribution: 213s 0.000% <= 0.591 milliseconds (cumulative count 10) 213s 50.000% <= 1.359 milliseconds (cumulative count 50690) 213s 75.000% <= 1.599 milliseconds (cumulative count 75350) 213s 87.500% <= 1.775 milliseconds (cumulative count 87620) 213s 93.750% <= 1.871 milliseconds (cumulative count 93920) 213s 96.875% <= 1.951 milliseconds (cumulative count 96960) 213s 98.438% <= 2.063 milliseconds (cumulative count 98440) 213s 99.219% <= 2.135 milliseconds (cumulative count 99220) 213s 99.609% <= 2.183 milliseconds (cumulative count 99610) 213s 99.805% <= 2.231 milliseconds (cumulative count 99810) 213s 99.902% <= 2.287 milliseconds (cumulative count 99910) 213s 99.951% <= 2.423 milliseconds (cumulative count 99960) 213s 99.976% <= 2.503 milliseconds (cumulative count 99980) 213s 99.988% <= 
213s ZADD: rps=251560.0 (overall: 311336.6) avg_msec=1.429 (overall: 1.429)
213s ====== ZADD ======
213s   100000 requests completed in 0.32 seconds
213s   50 parallel clients
213s   3 bytes payload
213s   keep alive: 1
213s   host configuration "save": 3600 1 300 100 60 10000
213s   host configuration "appendonly": no
213s   multi-thread: no
213s 
213s Latency by percentile distribution:
213s 0.000% <= 0.591 milliseconds (cumulative count 10)
213s 50.000% <= 1.359 milliseconds (cumulative count 50690)
213s 75.000% <= 1.599 milliseconds (cumulative count 75350)
213s 87.500% <= 1.775 milliseconds (cumulative count 87620)
213s 93.750% <= 1.871 milliseconds (cumulative count 93920)
213s 96.875% <= 1.951 milliseconds (cumulative count 96960)
213s 98.438% <= 2.063 milliseconds (cumulative count 98440)
213s 99.219% <= 2.135 milliseconds (cumulative count 99220)
213s 99.609% <= 2.183 milliseconds (cumulative count 99610)
213s 99.805% <= 2.231 milliseconds (cumulative count 99810)
213s 99.902% <= 2.287 milliseconds (cumulative count 99910)
213s 99.951% <= 2.423 milliseconds (cumulative count 99960)
213s 99.976% <= 2.503 milliseconds (cumulative count 99980)
213s 99.988% <= 2.551 milliseconds (cumulative count 99990)
213s 99.994% <= 2.607 milliseconds (cumulative count 100000)
213s 100.000% <= 2.607 milliseconds (cumulative count 100000)
213s 
213s Cumulative distribution of latencies:
213s 0.000% <= 0.103 milliseconds (cumulative count 0)
213s 0.010% <= 0.607 milliseconds (cumulative count 10)
213s 0.100% <= 0.703 milliseconds (cumulative count 100)
213s 0.310% <= 0.807 milliseconds (cumulative count 310)
213s 0.550% <= 0.903 milliseconds (cumulative count 550)
213s 1.240% <= 1.007 milliseconds (cumulative count 1240)
213s 4.490% <= 1.103 milliseconds (cumulative count 4490)
213s 20.140% <= 1.207 milliseconds (cumulative count 20140)
213s 41.200% <= 1.303 milliseconds (cumulative count 41200)
213s 56.960% <= 1.407 milliseconds (cumulative count 56960)
213s 66.950% <= 1.503 milliseconds (cumulative count 66950)
213s 75.950% <= 1.607 milliseconds (cumulative count 75950)
213s 82.830% <= 1.703 milliseconds (cumulative count 82830)
213s 89.840% <= 1.807 milliseconds (cumulative count 89840)
213s 95.600% <= 1.903 milliseconds (cumulative count 95600)
213s 97.830% <= 2.007 milliseconds (cumulative count 97830)
213s 98.930% <= 2.103 milliseconds (cumulative count 98930)
213s 100.000% <= 3.103 milliseconds (cumulative count 100000)
213s 
213s Summary:
213s   throughput summary: 312500.00 requests per second
213s   latency summary (msec):
213s           avg       min       p50       p95       p99       max
213s         1.424     0.584     1.359     1.895     2.111     2.607
213s ZPOPMIN: rps=205776.9 (overall: 397307.7) avg_msec=0.897 (overall: 0.897)
213s ====== ZPOPMIN ======
213s   100000 requests completed in 0.25 seconds
213s   50 parallel clients
213s   3 bytes payload
213s   keep alive: 1
213s   host configuration "save": 3600 1 300 100 60 10000
213s   host configuration "appendonly": no
213s   multi-thread: no
213s 
213s Latency by percentile distribution:
213s 0.000% <= 0.367 milliseconds (cumulative count 10)
213s 50.000% <= 0.871 milliseconds (cumulative count 50150)
213s 75.000% <= 1.031 milliseconds (cumulative count 75460)
213s 87.500% <= 1.143 milliseconds (cumulative count 87930)
213s 93.750% <= 1.231 milliseconds (cumulative count 94050)
213s 96.875% <= 1.311 milliseconds (cumulative count 97010)
213s 98.438% <= 1.383 milliseconds (cumulative count 98440)
213s 99.219% <= 1.447 milliseconds (cumulative count 99270)
213s 99.609% <= 1.519 milliseconds (cumulative count 99610)
213s 99.805% <= 1.607 milliseconds (cumulative count 99820)
213s 99.902% <= 1.711 milliseconds (cumulative count 99910)
213s 99.951% <= 1.767 milliseconds (cumulative count 99960)
213s 99.976% <= 1.879 milliseconds (cumulative count 99980)
213s 99.988% <= 1.919 milliseconds (cumulative count 99990)
213s 99.994% <= 1.967 milliseconds (cumulative count 100000)
213s 100.000% <= 1.967 milliseconds (cumulative count 100000)
213s 
213s Cumulative distribution of latencies:
213s 0.000% <= 0.103 milliseconds (cumulative count 0)
213s 0.010% <= 0.407 milliseconds (cumulative count 10)
213s 0.670% <= 0.503 milliseconds (cumulative count 670)
213s 8.560% <= 0.607 milliseconds (cumulative count 8560)
213s 22.410% <= 0.703 milliseconds (cumulative count 22410)
213s 39.310% <= 0.807 milliseconds (cumulative count 39310)
213s 55.690% <= 0.903 milliseconds (cumulative count 55690)
213s 72.090% <= 1.007 milliseconds (cumulative count 72090)
213s 84.080% <= 1.103 milliseconds (cumulative count 84080)
213s 92.750% <= 1.207 milliseconds (cumulative count 92750)
213s 96.680% <= 1.303 milliseconds (cumulative count 96680)
213s 98.840% <= 1.407 milliseconds (cumulative count 98840)
213s 99.590% <= 1.503 milliseconds (cumulative count 99590)
213s 99.820% <= 1.607 milliseconds (cumulative count 99820)
213s 99.890% <= 1.703 milliseconds (cumulative count 99890)
213s 99.970% <= 1.807 milliseconds (cumulative count 99970)
213s 99.980% <= 1.903 milliseconds (cumulative count 99980)
213s 100.000% <= 2.007 milliseconds (cumulative count 100000)
213s 
213s Summary:
213s   throughput summary: 400000.00 requests per second
213s   latency summary (msec):
213s           avg       min       p50       p95       p99       max
213s         0.886     0.360     0.871     1.255     1.423     1.967
213s LPUSH (needed to benchmark LRANGE): rps=171040.0 (overall: 336692.9) avg_msec=1.288 (overall: 1.288)
213s ====== LPUSH (needed to benchmark LRANGE) ======
213s   100000 requests completed in 0.30 seconds
213s   50 parallel clients
213s   3 bytes payload
213s   keep alive: 1
213s   host configuration "save": 3600 1 300 100 60 10000
213s   host configuration "appendonly": no
213s   multi-thread: no
213s 
213s Latency by percentile distribution:
213s 0.000% <= 0.519 milliseconds (cumulative count 10)
213s 50.000% <= 1.255 milliseconds (cumulative count 50190)
213s 75.000% <= 1.471 milliseconds (cumulative count 75320)
213s 87.500% <= 1.647 milliseconds (cumulative count 87800)
213s 93.750% <= 1.751 milliseconds (cumulative count 94160)
213s 96.875% <= 1.823 milliseconds (cumulative count 96950)
213s 98.438% <= 1.927 milliseconds (cumulative count 98460)
213s 99.219% <= 1.999 milliseconds (cumulative count 99230)
213s 99.609% <= 2.055 milliseconds (cumulative count 99620)
213s 99.805% <= 2.095 milliseconds (cumulative count 99810)
213s 99.902% <= 2.191 milliseconds (cumulative count 99920)
213s 99.951% <= 2.231 milliseconds (cumulative count 99960)
213s 99.976% <= 2.303 milliseconds (cumulative count 99980)
213s 99.988% <= 2.367 milliseconds (cumulative count 99990)
213s 99.994% <= 2.383 milliseconds (cumulative count 100000)
213s 100.000% <= 2.383 milliseconds (cumulative count 100000)
213s 
213s Cumulative distribution of latencies:
213s 0.000% <= 0.103 milliseconds (cumulative count 0)
213s 0.100% <= 0.607 milliseconds (cumulative count 100)
213s 0.510% <= 0.703 milliseconds (cumulative count 510)
213s 1.350% <= 0.807 milliseconds (cumulative count 1350)
213s 2.680% <= 0.903 milliseconds (cumulative count 2680)
213s 7.850% <= 1.007 milliseconds (cumulative count 7850)
213s 20.060% <= 1.103 milliseconds (cumulative count 20060)
213s 40.860% <= 1.207 milliseconds (cumulative count 40860)
213s 57.990% <= 1.303 milliseconds (cumulative count 57990)
213s 69.840% <= 1.407 milliseconds (cumulative count 69840)
213s 77.790% <= 1.503 milliseconds (cumulative count 77790)
213s 85.040% <= 1.607 milliseconds (cumulative count 85040)
213s 91.350% <= 1.703 milliseconds (cumulative count 91350)
213s 96.530% <= 1.807 milliseconds (cumulative count 96530)
213s 98.160% <= 1.903 milliseconds (cumulative count 98160)
213s 99.290% <= 2.007 milliseconds (cumulative count 99290)
213s 99.810% <= 2.103 milliseconds (cumulative count 99810)
213s 100.000% <= 3.103 milliseconds (cumulative count 100000)
213s 
213s Summary:
213s   throughput summary: 334448.16 requests per second
213s   latency summary (msec):
213s           avg       min       p50       p95       p99       max
213s         1.307     0.512     1.255     1.775     1.975     2.383
106445.8) avg_msec=2.510 (overall: 2.565) ====== LRANGE_100 (first 100 elements) ====== 214s 100000 requests completed in 0.94 seconds 214s 50 parallel clients 214s 3 bytes payload 214s keep alive: 1 214s host configuration "save": 3600 1 300 100 60 10000 214s host configuration "appendonly": no 214s multi-thread: no 214s 214s Latency by percentile distribution: 214s 0.000% <= 0.855 milliseconds (cumulative count 10) 214s 50.000% <= 2.487 milliseconds (cumulative count 50020) 214s 75.000% <= 2.647 milliseconds (cumulative count 75500) 214s 87.500% <= 2.807 milliseconds (cumulative count 87990) 214s 93.750% <= 3.015 milliseconds (cumulative count 93860) 214s 96.875% <= 3.263 milliseconds (cumulative count 96890) 214s 98.438% <= 3.615 milliseconds (cumulative count 98470) 214s 99.219% <= 4.183 milliseconds (cumulative count 99220) 214s 99.609% <= 5.375 milliseconds (cumulative count 99610) 214s 99.805% <= 6.023 milliseconds (cumulative count 99810) 214s 99.902% <= 6.623 milliseconds (cumulative count 99910) 214s 99.951% <= 7.063 milliseconds (cumulative count 99960) 214s 99.976% <= 7.159 milliseconds (cumulative count 99980) 214s 99.988% <= 7.255 milliseconds (cumulative count 99990) 214s 99.994% <= 7.343 milliseconds (cumulative count 100000) 214s 100.000% <= 7.343 milliseconds (cumulative count 100000) 214s 214s Cumulative distribution of latencies: 214s 0.000% <= 0.103 milliseconds (cumulative count 0) 214s 0.010% <= 0.903 milliseconds (cumulative count 10) 214s 0.020% <= 1.103 milliseconds (cumulative count 20) 214s 0.030% <= 1.207 milliseconds (cumulative count 30) 214s 0.040% <= 1.303 milliseconds (cumulative count 40) 214s 0.050% <= 1.407 milliseconds (cumulative count 50) 214s 0.060% <= 1.503 milliseconds (cumulative count 60) 214s 0.070% <= 1.607 milliseconds (cumulative count 70) 214s 0.110% <= 1.807 milliseconds (cumulative count 110) 214s 0.170% <= 1.903 milliseconds (cumulative count 170) 214s 0.290% <= 2.007 milliseconds (cumulative count 290) 214s 0.670% <= 2.103 milliseconds (cumulative count 670) 214s 95.320% <= 3.103 milliseconds (cumulative count 95320) 214s 99.150% <= 4.103 milliseconds (cumulative count 99150) 214s 99.510% <= 5.103 milliseconds (cumulative count 99510) 214s 99.820% <= 6.103 milliseconds (cumulative count 99820) 214s 99.960% <= 7.103 milliseconds (cumulative count 99960) 214s 100.000% <= 8.103 milliseconds (cumulative count 100000) 214s 214s Summary: 214s throughput summary: 106609.80 requests per second 214s latency summary (msec): 214s avg min p50 p95 p99 max 214s 2.556 0.848 2.487 3.087 3.919 7.343 217s LRANGE_300 (first 300 elements): rps=16677.2 (overall: 29622.4) avg_msec=8.803 (overall: 8.803) LRANGE_300 (first 300 elements): rps=27673.3 (overall: 28380.7) avg_msec=10.643 (overall: 9.946) LRANGE_300 (first 300 elements): rps=31702.0 (overall: 29685.7) avg_msec=7.613 (overall: 8.967) LRANGE_300 (first 300 elements): rps=31725.5 (overall: 30261.1) avg_msec=7.649 (overall: 8.577) LRANGE_300 (first 300 elements): rps=31394.4 (overall: 30507.4) avg_msec=7.902 (overall: 8.426) LRANGE_300 (first 300 elements): rps=30787.4 (overall: 30557.8) avg_msec=8.374 (overall: 8.417) LRANGE_300 (first 300 elements): rps=30912.7 (overall: 30611.7) avg_msec=8.168 (overall: 8.379) LRANGE_300 (first 300 elements): rps=31004.0 (overall: 30663.2) avg_msec=8.123 (overall: 8.345) LRANGE_300 (first 300 elements): rps=30924.3 (overall: 30693.5) avg_msec=7.928 (overall: 8.296) LRANGE_300 (first 300 elements): rps=30976.2 (overall: 30723.0) avg_msec=8.352 (overall: 8.302) 
LRANGE_300 (first 300 elements): rps=31011.9 (overall: 30750.4) avg_msec=8.355 (overall: 8.307) LRANGE_300 (first 300 elements): rps=31238.1 (overall: 30792.5) avg_msec=7.921 (overall: 8.273) LRANGE_300 (first 300 elements): rps=30261.9 (overall: 30750.3) avg_msec=8.567 (overall: 8.296) ====== LRANGE_300 (first 300 elements) ====== 217s 100000 requests completed in 3.25 seconds 217s 50 parallel clients 217s 3 bytes payload 217s keep alive: 1 217s host configuration "save": 3600 1 300 100 60 10000 217s host configuration "appendonly": no 217s multi-thread: no 217s 217s Latency by percentile distribution: 217s 0.000% <= 1.351 milliseconds (cumulative count 10) 217s 50.000% <= 7.687 milliseconds (cumulative count 50030) 217s 75.000% <= 9.423 milliseconds (cumulative count 75050) 217s 87.500% <= 11.503 milliseconds (cumulative count 87500) 217s 93.750% <= 13.279 milliseconds (cumulative count 93790) 217s 96.875% <= 15.199 milliseconds (cumulative count 96880) 217s 98.438% <= 17.135 milliseconds (cumulative count 98440) 217s 99.219% <= 18.559 milliseconds (cumulative count 99220) 217s 99.609% <= 20.175 milliseconds (cumulative count 99610) 217s 99.805% <= 22.255 milliseconds (cumulative count 99810) 217s 99.902% <= 27.087 milliseconds (cumulative count 99910) 217s 99.951% <= 32.799 milliseconds (cumulative count 99960) 217s 99.976% <= 33.759 milliseconds (cumulative count 99980) 217s 99.988% <= 33.983 milliseconds (cumulative count 99990) 217s 99.994% <= 34.207 milliseconds (cumulative count 100000) 217s 100.000% <= 34.207 milliseconds (cumulative count 100000) 217s 217s Cumulative distribution of latencies: 217s 0.000% <= 0.103 milliseconds (cumulative count 0) 217s 0.010% <= 1.407 milliseconds (cumulative count 10) 217s 0.040% <= 1.903 milliseconds (cumulative count 40) 217s 0.060% <= 2.007 milliseconds (cumulative count 60) 217s 0.430% <= 3.103 milliseconds (cumulative count 430) 217s 2.450% <= 4.103 milliseconds (cumulative count 2450) 217s 7.640% <= 5.103 milliseconds (cumulative count 7640) 217s 19.640% <= 6.103 milliseconds (cumulative count 19640) 217s 38.730% <= 7.103 milliseconds (cumulative count 38730) 217s 57.750% <= 8.103 milliseconds (cumulative count 57750) 217s 71.680% <= 9.103 milliseconds (cumulative count 71680) 217s 80.740% <= 10.103 milliseconds (cumulative count 80740) 217s 85.890% <= 11.103 milliseconds (cumulative count 85890) 217s 89.860% <= 12.103 milliseconds (cumulative count 89860) 217s 93.320% <= 13.103 milliseconds (cumulative count 93320) 217s 95.610% <= 14.103 milliseconds (cumulative count 95610) 217s 96.780% <= 15.103 milliseconds (cumulative count 96780) 217s 97.660% <= 16.103 milliseconds (cumulative count 97660) 217s 98.390% <= 17.103 milliseconds (cumulative count 98390) 217s 99.000% <= 18.111 milliseconds (cumulative count 99000) 217s 99.410% <= 19.103 milliseconds (cumulative count 99410) 217s 99.600% <= 20.111 milliseconds (cumulative count 99600) 217s 99.690% <= 21.103 milliseconds (cumulative count 99690) 217s 99.790% <= 22.111 milliseconds (cumulative count 99790) 217s 99.840% <= 23.103 milliseconds (cumulative count 99840) 217s 99.870% <= 25.103 milliseconds (cumulative count 99870) 217s 99.910% <= 27.103 milliseconds (cumulative count 99910) 217s 99.940% <= 31.103 milliseconds (cumulative count 99940) 217s 99.950% <= 32.111 milliseconds (cumulative count 99950) 217s 99.960% <= 33.119 milliseconds (cumulative count 99960) 217s 99.990% <= 34.111 milliseconds (cumulative count 99990) 217s 100.000% <= 35.103 milliseconds (cumulative count 100000) 217s 
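The benchmark invocation itself is not echoed in this log. For reference, a plausible way to re-run the LRANGE series with the tool shipped in valkey-tools, assuming valkey-benchmark keeps redis-benchmark's option set and using the parameters every report above shows (100000 requests, 50 parallel clients, 3-byte payload, keepalive on):

  # Sketch only: host, port and the -t test selectors are assumptions,
  # not values taken from this log.
  valkey-benchmark -h 127.0.0.1 -p 6379 -n 100000 -c 50 -d 3 -k 1 -t lpush,lrange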
217s LRANGE_300 (first 300 elements): rps=30261.9 (overall: 30750.3) avg_msec=8.567 (overall: 8.296)
217s ====== LRANGE_300 (first 300 elements) ======
217s 100000 requests completed in 3.25 seconds
217s 50 parallel clients
217s 3 bytes payload
217s keep alive: 1
217s host configuration "save": 3600 1 300 100 60 10000
217s host configuration "appendonly": no
217s multi-thread: no
217s
217s Latency by percentile distribution:
217s 0.000% <= 1.351 milliseconds (cumulative count 10)
217s 50.000% <= 7.687 milliseconds (cumulative count 50030)
217s 75.000% <= 9.423 milliseconds (cumulative count 75050)
217s 87.500% <= 11.503 milliseconds (cumulative count 87500)
217s 93.750% <= 13.279 milliseconds (cumulative count 93790)
217s 96.875% <= 15.199 milliseconds (cumulative count 96880)
217s 98.438% <= 17.135 milliseconds (cumulative count 98440)
217s 99.219% <= 18.559 milliseconds (cumulative count 99220)
217s 99.609% <= 20.175 milliseconds (cumulative count 99610)
217s 99.805% <= 22.255 milliseconds (cumulative count 99810)
217s 99.902% <= 27.087 milliseconds (cumulative count 99910)
217s 99.951% <= 32.799 milliseconds (cumulative count 99960)
217s 99.976% <= 33.759 milliseconds (cumulative count 99980)
217s 99.988% <= 33.983 milliseconds (cumulative count 99990)
217s 99.994% <= 34.207 milliseconds (cumulative count 100000)
217s 100.000% <= 34.207 milliseconds (cumulative count 100000)
217s
217s Cumulative distribution of latencies:
217s 0.000% <= 0.103 milliseconds (cumulative count 0)
217s 0.010% <= 1.407 milliseconds (cumulative count 10)
217s 0.040% <= 1.903 milliseconds (cumulative count 40)
217s 0.060% <= 2.007 milliseconds (cumulative count 60)
217s 0.430% <= 3.103 milliseconds (cumulative count 430)
217s 2.450% <= 4.103 milliseconds (cumulative count 2450)
217s 7.640% <= 5.103 milliseconds (cumulative count 7640)
217s 19.640% <= 6.103 milliseconds (cumulative count 19640)
217s 38.730% <= 7.103 milliseconds (cumulative count 38730)
217s 57.750% <= 8.103 milliseconds (cumulative count 57750)
217s 71.680% <= 9.103 milliseconds (cumulative count 71680)
217s 80.740% <= 10.103 milliseconds (cumulative count 80740)
217s 85.890% <= 11.103 milliseconds (cumulative count 85890)
217s 89.860% <= 12.103 milliseconds (cumulative count 89860)
217s 93.320% <= 13.103 milliseconds (cumulative count 93320)
217s 95.610% <= 14.103 milliseconds (cumulative count 95610)
217s 96.780% <= 15.103 milliseconds (cumulative count 96780)
217s 97.660% <= 16.103 milliseconds (cumulative count 97660)
217s 98.390% <= 17.103 milliseconds (cumulative count 98390)
217s 99.000% <= 18.111 milliseconds (cumulative count 99000)
217s 99.410% <= 19.103 milliseconds (cumulative count 99410)
217s 99.600% <= 20.111 milliseconds (cumulative count 99600)
217s 99.690% <= 21.103 milliseconds (cumulative count 99690)
217s 99.790% <= 22.111 milliseconds (cumulative count 99790)
217s 99.840% <= 23.103 milliseconds (cumulative count 99840)
217s 99.870% <= 25.103 milliseconds (cumulative count 99870)
217s 99.910% <= 27.103 milliseconds (cumulative count 99910)
217s 99.940% <= 31.103 milliseconds (cumulative count 99940)
217s 99.950% <= 32.111 milliseconds (cumulative count 99950)
217s 99.960% <= 33.119 milliseconds (cumulative count 99960)
217s 99.990% <= 34.111 milliseconds (cumulative count 99990)
217s 100.000% <= 35.103 milliseconds (cumulative count 100000)
217s
217s Summary:
217s throughput summary: 30769.23 requests per second
217s latency summary (msec):
217s avg min p50 p95 p99 max
217s 8.278 1.344 7.687 13.815 18.063 34.207
223s LRANGE_500 (first 500 elements): rps=16433.1 (overall: 17750.1) avg_msec=12.390 (overall: 11.504)
223s ====== LRANGE_500 (first 500 elements) ======
223s 100000 requests completed in 5.63 seconds
223s 50 parallel clients
223s 3 bytes payload
223s keep alive: 1
223s host configuration "save": 3600 1 300 100 60 10000
223s host configuration "appendonly": no
223s multi-thread: no
223s
223s Latency by percentile distribution:
223s 0.000% <= 0.831 milliseconds (cumulative count 10)
223s 50.000% <= 11.095 milliseconds (cumulative count 50110)
223s 75.000% <= 13.327 milliseconds (cumulative count 75030)
223s 87.500% <= 14.631 milliseconds (cumulative count 87510)
223s 93.750% <= 15.919 milliseconds (cumulative count 93750)
223s 96.875% <= 18.111 milliseconds (cumulative count 96890)
223s 98.438% <= 21.663 milliseconds (cumulative count 98440)
223s 99.219% <= 25.567 milliseconds (cumulative count 99220)
223s 99.609% <= 30.431 milliseconds (cumulative count 99610)
223s 99.805% <= 34.015 milliseconds (cumulative count 99810)
223s 99.902% <= 35.551 milliseconds (cumulative count 99910)
223s 99.951% <= 36.351 milliseconds (cumulative count 99960)
223s 99.976% <= 36.543 milliseconds (cumulative count 99980)
223s 99.988% <= 36.703 milliseconds (cumulative count 99990)
223s 99.994% <= 36.895 milliseconds (cumulative count 100000)
223s 100.000% <= 36.895 milliseconds (cumulative count 100000)
223s
223s Cumulative distribution of latencies:
223s 0.000% <= 0.103 milliseconds (cumulative count 0)
223s 0.010% <= 0.903 milliseconds (cumulative count 10)
223s 0.080% <= 3.103 milliseconds (cumulative count 80)
223s 0.170% <= 4.103 milliseconds (cumulative count 170)
223s 0.730% <= 5.103 milliseconds (cumulative count 730)
223s 2.110% <= 6.103 milliseconds (cumulative count 2110)
223s 4.820% <= 7.103 milliseconds (cumulative count 4820)
223s 11.610% <= 8.103 milliseconds (cumulative count 11610)
223s 24.170% <= 9.103 milliseconds (cumulative count 24170)
223s 38.280% <= 10.103 milliseconds (cumulative count 38280)
223s 50.220% <= 11.103 milliseconds (cumulative count 50220)
223s 61.130% <= 12.103 milliseconds (cumulative count 61130)
223s 72.620% <= 13.103 milliseconds (cumulative count 72620)
223s 82.950% <= 14.103 milliseconds (cumulative count 82950)
223s 90.810% <= 15.103 milliseconds (cumulative count 90810)
223s 94.150% <= 16.103 milliseconds (cumulative count 94150)
223s 95.740% <= 17.103 milliseconds (cumulative count 95740)
223s 96.890% <= 18.111 milliseconds (cumulative count 96890)
223s 97.600% <= 19.103 milliseconds (cumulative count 97600)
223s 97.880% <= 20.111 milliseconds (cumulative count 97880)
223s 98.240% <= 21.103 milliseconds (cumulative count 98240)
223s 98.560% <= 22.111 milliseconds (cumulative count 98560)
223s 98.850% <= 23.103 milliseconds (cumulative count 98850)
223s 99.040% <= 24.111 milliseconds (cumulative count 99040)
223s 99.180% <= 25.103 milliseconds (cumulative count 99180)
223s 99.270% <= 26.111 milliseconds (cumulative count 99270)
223s 99.420% <= 27.103 milliseconds (cumulative count 99420)
223s 99.550% <= 28.111 milliseconds (cumulative count 99550)
223s 99.570% <= 29.103 milliseconds (cumulative count 99570)
223s 99.580% <= 30.111 milliseconds (cumulative count 99580)
223s 99.640% <= 31.103 milliseconds (cumulative count 99640)
223s 99.700% <= 32.111 milliseconds (cumulative count 99700)
223s 99.740% <= 33.119 milliseconds (cumulative count 99740)
223s 99.810% <= 34.111 milliseconds (cumulative count 99810)
223s 99.860% <= 35.103 milliseconds (cumulative count 99860)
223s 99.950% <= 36.127 milliseconds (cumulative count 99950)
223s 100.000% <= 37.119 milliseconds (cumulative count 100000)
223s
223s Summary:
223s throughput summary: 17774.62 requests per second
223s latency summary (msec):
223s avg min p50 p95 p99 max
223s 11.489 0.824 11.095 16.623 23.775 36.895
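A quick cross-check on the LRANGE series so far: throughput falls roughly in proportion to the range length, so the element rate stays nearly constant (106609.80 rps x 100 = ~10.7M elements/s, 30769.23 x 300 = ~9.2M, 17774.62 x 500 = ~8.9M). That profile is consistent with per-element reply serialization dominating the cost of LRANGE, though this reading is an inference from the numbers, not something the log states.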
230s LRANGE_600 (first 600 elements): rps=13864.5 (overall: 14345.4) avg_msec=16.819 (overall: 15.302)
230s ====== LRANGE_600 (first 600 elements) ======
230s 100000 requests completed in 6.98 seconds
230s 50 parallel clients
230s 3 bytes payload
230s keep alive: 1
230s host configuration "save": 3600 1 300 100 60 10000
230s host configuration "appendonly": no
230s multi-thread: no
230s
230s Latency by percentile distribution:
230s 0.000% <= 1.655 milliseconds (cumulative count 10)
230s 50.000% <= 14.119 milliseconds (cumulative count 50010)
230s 75.000% <= 19.727 milliseconds (cumulative count 75080)
230s 87.500% <= 23.103 milliseconds (cumulative count 87550)
230s 93.750% <= 25.727 milliseconds (cumulative count 93780)
230s 96.875% <= 28.255 milliseconds (cumulative count 96910)
230s 98.438% <= 30.655 milliseconds (cumulative count 98450)
230s 99.219% <= 32.575 milliseconds (cumulative count 99220)
230s 99.609% <= 34.559 milliseconds (cumulative count 99610)
230s 99.805% <= 36.607 milliseconds (cumulative count 99810)
230s 99.902% <= 42.143 milliseconds (cumulative count 99910)
230s 99.951% <= 43.327 milliseconds (cumulative count 99960)
230s 99.976% <= 43.935 milliseconds (cumulative count 99980)
230s 99.988% <= 44.127 milliseconds (cumulative count 99990)
230s 99.994% <= 44.735 milliseconds (cumulative count 100000)
230s 100.000% <= 44.735 milliseconds (cumulative count 100000)
230s
230s Cumulative distribution of latencies:
230s 0.000% <= 0.103 milliseconds (cumulative count 0)
230s 0.020% <= 1.703 milliseconds (cumulative count 20)
230s 0.050% <= 1.903 milliseconds (cumulative count 50)
230s 0.100% <= 2.007 milliseconds (cumulative count 100)
230s 0.130% <= 2.103 milliseconds (cumulative count 130)
230s 0.670% <= 3.103 milliseconds (cumulative count 670)
230s 1.520% <= 4.103 milliseconds (cumulative count 1520)
230s 2.880% <= 5.103 milliseconds (cumulative count 2880)
230s 4.280% <= 6.103 milliseconds (cumulative count 4280)
230s 6.250% <= 7.103 milliseconds (cumulative count 6250)
230s 9.590% <= 8.103 milliseconds (cumulative count 9590)
230s 14.520% <= 9.103 milliseconds (cumulative count 14520)
230s 20.930% <= 10.103 milliseconds (cumulative count 20930)
230s 28.560% <= 11.103 milliseconds (cumulative count 28560)
230s 35.550% <= 12.103 milliseconds (cumulative count 35550)
230s 42.840% <= 13.103 milliseconds (cumulative count 42840)
230s 49.910% <= 14.103 milliseconds (cumulative count 49910)
230s 55.770% <= 15.103 milliseconds (cumulative count 55770)
230s 60.780% <= 16.103 milliseconds (cumulative count 60780)
230s 64.600% <= 17.103 milliseconds (cumulative count 64600)
230s 68.340% <= 18.111 milliseconds (cumulative count 68340)
230s 72.520% <= 19.103 milliseconds (cumulative count 72520)
230s 76.580% <= 20.111 milliseconds (cumulative count 76580)
230s 80.510% <= 21.103 milliseconds (cumulative count 80510)
230s 84.050% <= 22.111 milliseconds (cumulative count 84050)
230s 87.550% <= 23.103 milliseconds (cumulative count 87550)
230s 90.350% <= 24.111 milliseconds (cumulative count 90350)
230s 92.590% <= 25.103 milliseconds (cumulative count 92590)
230s 94.410% <= 26.111 milliseconds (cumulative count 94410)
230s 95.690% <= 27.103 milliseconds (cumulative count 95690)
230s 96.730% <= 28.111 milliseconds (cumulative count 96730)
230s 97.710% <= 29.103 milliseconds (cumulative count 97710)
230s 98.240% <= 30.111 milliseconds (cumulative count 98240)
230s 98.600% <= 31.103 milliseconds (cumulative count 98600)
230s 99.040% <= 32.111 milliseconds (cumulative count 99040)
230s 99.380% <= 33.119 milliseconds (cumulative count 99380)
230s 99.570% <= 34.111 milliseconds (cumulative count 99570)
230s 99.680% <= 35.103 milliseconds (cumulative count 99680)
230s 99.780% <= 36.127 milliseconds (cumulative count 99780)
230s 99.810% <= 37.119 milliseconds (cumulative count 99810)
230s 99.850% <= 40.127 milliseconds (cumulative count 99850)
230s 99.870% <= 41.119 milliseconds (cumulative count 99870)
230s 99.900% <= 42.111 milliseconds (cumulative count 99900)
230s 99.950% <= 43.103 milliseconds (cumulative count 99950)
230s 99.990% <= 44.127 milliseconds (cumulative count 99990)
230s 100.000% <= 45.119 milliseconds (cumulative count 100000)
230s
230s Summary:
230s throughput summary: 14328.70 requests per second
230s latency summary (msec):
230s avg min p50 p95 p99 max
230s 15.315 1.648 14.119 26.559 31.983 44.735
231s MSET (10 keys): rps=157290.8 (overall: 156986.9) avg_msec=2.942 (overall: 2.932)
231s ====== MSET (10 keys) ======
231s 100000 requests completed in 0.64 seconds
231s 50 parallel clients
231s 3 bytes payload
231s keep alive: 1
231s host configuration "save": 3600 1 300 100 60 10000
231s host configuration "appendonly": no
231s multi-thread: no
231s
231s Latency by percentile distribution:
231s 0.000% <= 0.799 milliseconds (cumulative count 10)
231s 50.000% <= 3.031 milliseconds (cumulative count 50660)
231s 75.000% <= 3.295 milliseconds (cumulative count 75570)
231s 87.500% <= 3.463 milliseconds (cumulative count 87900)
231s 93.750% <= 3.567 milliseconds (cumulative count 93980)
231s 96.875% <= 3.655 milliseconds (cumulative count 96940)
231s 98.438% <= 3.751 milliseconds (cumulative count 98560)
231s 99.219% <= 3.823 milliseconds (cumulative count 99220)
231s 99.609% <= 3.911 milliseconds (cumulative count 99640)
231s 99.805% <= 3.975 milliseconds (cumulative count 99810)
231s 99.902% <= 4.039 milliseconds (cumulative count 99910)
231s 99.951% <= 4.103 milliseconds (cumulative count 99960)
231s 99.976% <= 4.143 milliseconds (cumulative count 99980)
231s 99.988% <= 4.159 milliseconds (cumulative count 99990)
231s 99.994% <= 4.223 milliseconds (cumulative count 100000)
231s 100.000% <= 4.223 milliseconds (cumulative count 100000)
231s
231s Cumulative distribution of latencies:
231s 0.000% <= 0.103 milliseconds (cumulative count 0)
231s 0.010% <= 0.807 milliseconds (cumulative count 10)
231s 0.040% <= 0.903 milliseconds (cumulative count 40)
231s 0.070% <= 1.007 milliseconds (cumulative count 70)
231s 0.140% <= 1.703 milliseconds (cumulative count 140)
231s 0.490% <= 1.807 milliseconds (cumulative count 490)
231s 1.900% <= 1.903 milliseconds (cumulative count 1900)
231s 5.280% <= 2.007 milliseconds (cumulative count 5280)
231s 9.500% <= 2.103 milliseconds (cumulative count 9500)
231s 57.450% <= 3.103 milliseconds (cumulative count 57450)
231s 99.960% <= 4.103 milliseconds (cumulative count 99960)
231s 100.000% <= 5.103 milliseconds (cumulative count 100000)
231s
231s Summary:
231s throughput summary: 156985.86 requests per second
231s latency summary (msec):
231s avg min p50 p95 p99 max
231s 2.940 0.792 3.031 3.599 3.799 4.223
231s XADD: rps=254000.0 (overall: 249843.8) avg_msec=1.809 (overall: 1.829)
231s ====== XADD ======
231s 100000 requests completed in 0.40 seconds
231s 50 parallel clients
231s 3 bytes payload
231s keep alive: 1
231s host configuration "save": 3600 1 300 100 60 10000
231s host configuration "appendonly": no
231s multi-thread: no
231s
231s Latency by percentile distribution:
231s 0.000% <= 0.623 milliseconds (cumulative count 10)
231s 50.000% <= 1.815 milliseconds (cumulative count 50040)
231s 75.000% <= 2.087 milliseconds (cumulative count 75410)
231s 87.500% <= 2.247 milliseconds (cumulative count 87920)
231s 93.750% <= 2.343 milliseconds (cumulative count 94080)
231s 96.875% <= 2.455 milliseconds (cumulative count 96990)
231s 98.438% <= 2.543 milliseconds (cumulative count 98490)
231s 99.219% <= 2.623 milliseconds (cumulative count 99260)
231s 99.609% <= 2.719 milliseconds (cumulative count 99610)
231s 99.805% <= 2.823 milliseconds (cumulative count 99820)
231s 99.902% <= 2.975 milliseconds (cumulative count 99910)
231s 99.951% <= 3.071 milliseconds (cumulative count 99960)
231s 99.976% <= 3.103 milliseconds (cumulative count 99980)
231s 99.988% <= 3.143 milliseconds (cumulative count 99990)
231s 99.994% <= 3.159 milliseconds (cumulative count 100000)
231s 100.000% <= 3.159 milliseconds (cumulative count 100000)
231s
231s Cumulative distribution of latencies:
231s 0.000% <= 0.103 milliseconds (cumulative count 0)
231s 0.060% <= 0.703 milliseconds (cumulative count 60)
231s 0.120% <= 0.807 milliseconds (cumulative count 120)
231s 0.180% <= 0.903 milliseconds (cumulative count 180)
231s 0.240% <= 1.007 milliseconds (cumulative count 240)
231s 0.300% <= 1.103 milliseconds (cumulative count 300)
231s 0.740% <= 1.207 milliseconds (cumulative count 740)
231s 2.280% <= 1.303 milliseconds (cumulative count 2280)
231s 9.440% <= 1.407 milliseconds (cumulative count 9440)
231s 22.030% <= 1.503 milliseconds (cumulative count 22030)
231s 31.860% <= 1.607 milliseconds (cumulative count 31860)
231s 39.610% <= 1.703 milliseconds (cumulative count 39610)
231s 49.270% <= 1.807 milliseconds (cumulative count 49270)
231s 59.080% <= 1.903 milliseconds (cumulative count 59080)
231s 68.690% <= 2.007 milliseconds (cumulative count 68690)
231s 76.670% <= 2.103 milliseconds (cumulative count 76670)
231s 99.980% <= 3.103 milliseconds (cumulative count 99980)
231s 100.000% <= 4.103 milliseconds (cumulative count 100000)
231s
231s Summary:
231s throughput summary: 250000.00 requests per second
231s latency summary (msec):
231s avg min p50 p95 p99 max
231s 1.827 0.616 1.815 2.367 2.591 3.159
235s FUNCTION LOAD: rps=24661.4 (overall: 24621.1) avg_msec=20.108 (overall: 19.966)
235s ====== FUNCTION LOAD ======
235s 100000 requests completed in 4.06 seconds
235s 50 parallel clients
235s 3 bytes payload
235s keep alive: 1
235s host configuration "save": 3600 1 300 100 60 10000
235s host configuration "appendonly": no
235s multi-thread: no
235s
235s Latency by percentile distribution:
235s 0.000% <= 1.143 milliseconds (cumulative count 10)
235s 50.000% <= 21.391 milliseconds (cumulative count 50240)
235s 75.000% <= 22.063 milliseconds (cumulative count 75050)
235s 87.500% <= 22.511 milliseconds (cumulative count 87680)
235s 93.750% <= 22.863 milliseconds (cumulative count 93860)
235s 96.875% <= 23.231 milliseconds (cumulative count 96920)
235s 98.438% <= 23.823 milliseconds (cumulative count 98450)
235s 99.219% <= 24.831 milliseconds (cumulative count 99220)
235s 99.609% <= 25.615 milliseconds (cumulative count 99610)
235s 99.805% <= 27.903 milliseconds (cumulative count 99810)
235s 99.902% <= 28.367 milliseconds (cumulative count 99910)
235s 99.951% <= 28.703 milliseconds (cumulative count 99960)
235s 99.976% <= 28.863 milliseconds (cumulative count 99980)
235s 99.988% <= 29.327 milliseconds (cumulative count 99990)
235s 99.994% <= 29.695 milliseconds (cumulative count 100000)
235s 100.000% <= 29.695 milliseconds (cumulative count 100000)
235s
235s Cumulative distribution of latencies:
235s 0.000% <= 0.103 milliseconds (cumulative count 0)
235s 0.010% <= 1.207 milliseconds (cumulative count 10)
235s 0.220% <= 9.103 milliseconds (cumulative count 220)
235s 0.900% <= 10.103 milliseconds (cumulative count 900)
235s 4.170% <= 11.103 milliseconds (cumulative count 4170)
235s 11.510% <= 12.103 milliseconds (cumulative count 11510)
235s 14.630% <= 13.103 milliseconds (cumulative count 14630)
235s 15.530% <= 14.103 milliseconds (cumulative count 15530)
235s 15.700% <= 15.103 milliseconds (cumulative count 15700)
235s 15.900% <= 17.103 milliseconds (cumulative count 15900)
235s 16.050% <= 18.111 milliseconds (cumulative count 16050)
235s 16.480% <= 19.103 milliseconds (cumulative count 16480)
235s 25.130% <= 20.111 milliseconds (cumulative count 25130)
235s 42.150% <= 21.103 milliseconds (cumulative count 42150)
235s 76.750% <= 22.111 milliseconds (cumulative count 76750)
235s 96.140% <= 23.103 milliseconds (cumulative count 96140)
235s 98.690% <= 24.111 milliseconds (cumulative count 98690)
235s 99.380% <= 25.103 milliseconds (cumulative count 99380)
235s 99.730% <= 26.111 milliseconds (cumulative count 99730)
235s 99.800% <= 27.103 milliseconds (cumulative count 99800)
235s 99.840% <= 28.111 milliseconds (cumulative count 99840)
235s 99.980% <= 29.103 milliseconds (cumulative count 99980)
235s 100.000% <= 30.111 milliseconds (cumulative count 100000)
235s
235s Summary:
235s throughput summary: 24642.68 requests per second
235s latency summary (msec):
235s avg min p50 p95 p99 max
235s 19.969 1.136 21.391 22.975 24.559 29.695
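For context, FUNCTION LOAD above and FCALL below exercise the server-side scripting API: FUNCTION LOAD registers a Lua function library and FCALL invokes one function from it. A minimal hand-run sketch (the library and function names here are illustrative, not the ones the benchmark uses):

  # Register a trivial Lua library, then call it; the trailing 0 in FCALL
  # is the number of keys passed to the function.
  valkey-cli FUNCTION LOAD "#!lua name=mylib
  redis.register_function('myfunc', function(keys, args) return 1 end)"
  valkey-cli FCALL myfunc 0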
235s FCALL: rps=251560.0 (overall: 251405.4) avg_msec=1.826 (overall: 1.819)
235s ====== FCALL ======
235s 100000 requests completed in 0.40 seconds
235s 50 parallel clients
235s 3 bytes payload
235s keep alive: 1
235s host configuration "save": 3600 1 300 100 60 10000
235s host configuration "appendonly": no
235s multi-thread: no
235s
235s Latency by percentile distribution:
235s 0.000% <= 0.631 milliseconds (cumulative count 10)
235s 50.000% <= 1.791 milliseconds (cumulative count 50130)
235s 75.000% <= 2.079 milliseconds (cumulative count 75540)
235s 87.500% <= 2.247 milliseconds (cumulative count 87990)
235s 93.750% <= 2.351 milliseconds (cumulative count 93950)
235s 96.875% <= 2.479 milliseconds (cumulative count 96970)
235s 98.438% <= 2.591 milliseconds (cumulative count 98470)
235s 99.219% <= 2.687 milliseconds (cumulative count 99230)
235s 99.609% <= 2.783 milliseconds (cumulative count 99610)
235s 99.805% <= 2.903 milliseconds (cumulative count 99810)
235s 99.902% <= 3.015 milliseconds (cumulative count 99910)
235s 99.951% <= 3.175 milliseconds (cumulative count 99960)
235s 99.976% <= 3.279 milliseconds (cumulative count 99980)
235s 99.988% <= 3.471 milliseconds (cumulative count 99990)
235s 99.994% <= 3.503 milliseconds (cumulative count 100000)
235s 100.000% <= 3.503 milliseconds (cumulative count 100000)
235s
235s Cumulative distribution of latencies:
235s 0.000% <= 0.103 milliseconds (cumulative count 0)
235s 0.010% <= 0.703 milliseconds (cumulative count 10)
235s 0.040% <= 0.807 milliseconds (cumulative count 40)
235s 0.050% <= 0.903 milliseconds (cumulative count 50)
235s 0.100% <= 1.103 milliseconds (cumulative count 100)
235s 0.410% <= 1.207 milliseconds (cumulative count 410)
235s 1.720% <= 1.303 milliseconds (cumulative count 1720)
235s 9.450% <= 1.407 milliseconds (cumulative count 9450)
235s 22.840% <= 1.503 milliseconds (cumulative count 22840)
235s 34.970% <= 1.607 milliseconds (cumulative count 34970)
235s 42.470% <= 1.703 milliseconds (cumulative count 42470)
235s 51.710% <= 1.807 milliseconds (cumulative count 51710)
235s 60.560% <= 1.903 milliseconds (cumulative count 60560)
235s 69.860% <= 2.007 milliseconds (cumulative count 69860)
235s 77.500% <= 2.103 milliseconds (cumulative count 77500)
235s 99.950% <= 3.103 milliseconds (cumulative count 99950)
235s 100.000% <= 4.103 milliseconds (cumulative count 100000)
235s
235s Summary:
235s throughput summary: 251256.28 requests per second
235s latency summary (msec):
235s avg min p50 p95 p99 max
235s 1.817 0.624 1.791 2.383 2.655 3.503
235s
236s autopkgtest [14:29:10]: test 0002-benchmark: -----------------------]
236s autopkgtest [14:29:10]: test 0002-benchmark: - - - - - - - - - - results - - - - - - - - - -
236s 0002-benchmark PASS
237s autopkgtest [14:29:11]: test 0003-valkey-check-aof: preparing testbed
237s Reading package lists...
238s Building dependency tree...
238s Reading state information...
238s Solving dependencies...
239s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
239s autopkgtest [14:29:13]: test 0003-valkey-check-aof: [-----------------------
240s **************************************************************************
240s # A new feature in cloud-init identified possible datasources for #
240s # this system as: #
240s # [] #
240s # However, the datasource used was: OpenStack #
240s # #
240s # In the future, cloud-init will only attempt to use datasources that #
240s # are identified or specifically configured. #
240s # For more information see #
240s # https://bugs.launchpad.net/bugs/1669675 #
240s # #
240s # If you are seeing this message, please file a bug against #
240s # cloud-init at #
240s # https://github.com/canonical/cloud-init/issues #
240s # Make sure to include the cloud provider your instance is #
240s # running on. #
240s # #
240s # After you have filed a bug, you can disable this warning by launching #
240s # your instance with the cloud-config below, or putting that content #
240s # into /etc/cloud/cloud.cfg.d/99-warnings.cfg #
240s # #
240s # #cloud-config #
240s # warnings: #
240s # dsid_missing_source: off #
240s **************************************************************************
240s
240s Disable the warnings above by:
240s touch /root/.cloud-warnings.skip
240s or
240s touch /var/lib/cloud/instance/warnings/.skip
240s autopkgtest [14:29:14]: test 0003-valkey-check-aof: -----------------------]
241s autopkgtest [14:29:15]: test 0003-valkey-check-aof: - - - - - - - - - - results - - - - - - - - - -
241s 0003-valkey-check-aof PASS
241s autopkgtest [14:29:15]: test 0004-valkey-check-rdb: preparing testbed
241s Reading package lists...
242s Building dependency tree...
242s Reading state information...
242s Solving dependencies...
243s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
244s autopkgtest [14:29:18]: test 0004-valkey-check-rdb: [-----------------------
244s [cloud-init datasource warning repeated verbatim; see test 0003 above]
249s OK
249s [offset 0] Checking RDB file /var/lib/valkey/dump.rdb
249s [offset 27] AUX FIELD valkey-ver = '8.1.1'
249s [offset 41] AUX FIELD redis-bits = '64'
249s [offset 53] AUX FIELD ctime = '1750343363'
249s [offset 68] AUX FIELD used-mem = '3030232'
249s [offset 80] AUX FIELD aof-base = '0'
249s [offset 191] Selecting DB ID 0
249s [offset 566589] Checksum OK
249s [offset 566589] \o/ RDB looks OK! \o/
249s [info] 5 keys read
249s [info] 0 expires
249s [info] 0 already expired
249s autopkgtest [14:29:23]: test 0004-valkey-check-rdb: -----------------------]
250s autopkgtest [14:29:24]: test 0004-valkey-check-rdb: - - - - - - - - - - results - - - - - - - - - -
250s 0004-valkey-check-rdb PASS
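The RDB verification above is easy to reproduce by hand. A minimal sketch, assuming a local server on the default port and the packaged dump path used in this run:

  # SAVE forces a synchronous dump to disk; the checker then validates it offline.
  valkey-cli -h 127.0.0.1 -p 6379 SAVE
  valkey-check-rdb /var/lib/valkey/dump.rdb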
250s autopkgtest [14:29:24]: test 0005-cjson: preparing testbed
251s Reading package lists...
251s Building dependency tree...
251s Reading state information...
251s Solving dependencies...
252s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
255s autopkgtest [14:29:29]: test 0005-cjson: [-----------------------
255s [cloud-init datasource warning repeated verbatim; see test 0003 above]
260s
261s autopkgtest [14:29:35]: test 0005-cjson: -----------------------]
261s autopkgtest [14:29:35]: test 0005-cjson: - - - - - - - - - - results - - - - - - - - - -
261s 0005-cjson PASS
262s autopkgtest [14:29:36]: test 0006-migrate-from-redis: preparing testbed
381s autopkgtest [14:31:34]: testbed dpkg architecture: arm64
381s autopkgtest [14:31:35]: testbed apt version: 3.1.2
381s autopkgtest [14:31:35]: @@@@@@@@@@@@@@@@@@@@ test bed setup
381s autopkgtest [14:31:35]: testbed release detected to be: questing
382s autopkgtest [14:31:36]: updating testbed package index (apt update)
382s Get:1 http://ftpmaster.internal/ubuntu questing-proposed InRelease [249 kB]
383s Hit:2 http://ftpmaster.internal/ubuntu questing InRelease
383s Hit:3 http://ftpmaster.internal/ubuntu questing-updates InRelease
383s Hit:4 http://ftpmaster.internal/ubuntu questing-security InRelease
383s Get:5 http://ftpmaster.internal/ubuntu questing-proposed/universe Sources [426 kB]
383s Get:6 http://ftpmaster.internal/ubuntu questing-proposed/main Sources [38.3 kB]
383s Get:7 http://ftpmaster.internal/ubuntu questing-proposed/restricted Sources [4716 B]
383s Get:8 http://ftpmaster.internal/ubuntu questing-proposed/multiverse Sources [17.4 kB]
383s Get:9 http://ftpmaster.internal/ubuntu questing-proposed/main arm64 Packages [65.9 kB]
383s Get:10 http://ftpmaster.internal/ubuntu questing-proposed/restricted arm64 Packages [18.4 kB]
383s Get:11 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 Packages [364 kB]
383s Get:12 http://ftpmaster.internal/ubuntu questing-proposed/multiverse arm64 Packages [23.9 kB]
383s Fetched 1208 kB in 1s (1425 kB/s)
384s Reading package lists...
385s autopkgtest [14:31:39]: upgrading testbed (apt dist-upgrade and autopurge)
385s Reading package lists...
385s Building dependency tree...
385s Reading state information...
386s Calculating upgrade...
387s The following packages will be upgraded:
387s libpython3.12-minimal libpython3.12-stdlib libpython3.12t64
387s 3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
387s Need to get 5180 kB of archives.
387s After this operation, 291 kB disk space will be freed.
387s Get:1 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 libpython3.12t64 arm64 3.12.10-1 [2314 kB]
388s Get:2 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 libpython3.12-stdlib arm64 3.12.10-1 [2029 kB]
388s Get:3 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 libpython3.12-minimal arm64 3.12.10-1 [836 kB]
389s Fetched 5180 kB in 1s (6268 kB/s)
389s (Reading database ... 118766 files and directories currently installed.)
389s Preparing to unpack .../libpython3.12t64_3.12.10-1_arm64.deb ...
389s Unpacking libpython3.12t64:arm64 (3.12.10-1) over (3.12.8-3) ...
389s Preparing to unpack .../libpython3.12-stdlib_3.12.10-1_arm64.deb ...
389s Unpacking libpython3.12-stdlib:arm64 (3.12.10-1) over (3.12.8-3) ...
389s Preparing to unpack .../libpython3.12-minimal_3.12.10-1_arm64.deb ...
389s Unpacking libpython3.12-minimal:arm64 (3.12.10-1) over (3.12.8-3) ...
389s Setting up libpython3.12-minimal:arm64 (3.12.10-1) ...
389s Setting up libpython3.12-stdlib:arm64 (3.12.10-1) ...
389s Setting up libpython3.12t64:arm64 (3.12.10-1) ...
389s Processing triggers for libc-bin (2.41-6ubuntu2) ...
390s Reading package lists...
390s Building dependency tree...
390s Reading state information...
390s Solving dependencies...
391s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
394s Reading package lists...
395s Building dependency tree...
395s Reading state information...
395s Solving dependencies...
396s The following NEW packages will be installed:
396s liblzf1 redis-sentinel redis-server redis-tools
396s 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
396s Need to get 1419 kB of archives.
396s After this operation, 7903 kB of additional disk space will be used.
396s Get:1 http://ftpmaster.internal/ubuntu questing/universe arm64 liblzf1 arm64 3.6-4 [7426 B]
397s Get:2 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 redis-tools arm64 5:8.0.0-2 [1346 kB]
397s Get:3 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 redis-sentinel arm64 5:8.0.0-2 [12.5 kB]
397s Get:4 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 redis-server arm64 5:8.0.0-2 [53.2 kB]
397s Fetched 1419 kB in 1s (2216 kB/s)
397s Selecting previously unselected package liblzf1:arm64.
397s (Reading database ... 118766 files and directories currently installed.)
398s Preparing to unpack .../liblzf1_3.6-4_arm64.deb ...
398s Unpacking liblzf1:arm64 (3.6-4) ...
398s Selecting previously unselected package redis-tools.
398s Preparing to unpack .../redis-tools_5%3a8.0.0-2_arm64.deb ...
398s Unpacking redis-tools (5:8.0.0-2) ...
398s Selecting previously unselected package redis-sentinel.
398s Preparing to unpack .../redis-sentinel_5%3a8.0.0-2_arm64.deb ...
398s Unpacking redis-sentinel (5:8.0.0-2) ...
398s Selecting previously unselected package redis-server.
398s Preparing to unpack .../redis-server_5%3a8.0.0-2_arm64.deb ...
398s Unpacking redis-server (5:8.0.0-2) ...
398s Setting up liblzf1:arm64 (3.6-4) ...
398s Setting up redis-tools (5:8.0.0-2) ...
398s Setting up redis-server (5:8.0.0-2) ...
398s Created symlink '/etc/systemd/system/redis.service' → '/usr/lib/systemd/system/redis-server.service'.
398s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-server.service' → '/usr/lib/systemd/system/redis-server.service'.
399s Setting up redis-sentinel (5:8.0.0-2) ...
399s Created symlink '/etc/systemd/system/sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'.
399s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'.
400s Processing triggers for man-db (2.13.1-1) ...
400s Processing triggers for libc-bin (2.41-6ubuntu2) ...
407s autopkgtest [14:32:01]: test 0006-migrate-from-redis: [-----------------------
407s [cloud-init datasource warning repeated verbatim; see test 0003 above]
407s + FLAG_FILE=/etc/valkey/REDIS_MIGRATION
407s + sed -i 's#loglevel notice#loglevel debug#' /etc/redis/redis.conf
407s + systemctl restart redis-server
407s OK
407s + redis-cli -h 127.0.0.1 -p 6379 SET test 1
407s + redis-cli -h 127.0.0.1 -p 6379 GET test
407s 1
407s + redis-cli -h 127.0.0.1 -p 6379 SAVE
407s OK
407s + sha256sum /var/lib/redis/dump.rdb
407s c34680d64b6e77f8a1b83873305a9d017bf2e38566de85498701af80bed97d1b /var/lib/redis/dump.rdb
407s + apt-get install -y valkey-redis-compat
407s Reading package lists...
408s Building dependency tree...
408s Reading state information...
408s Solving dependencies...
409s The following additional packages will be installed:
409s valkey-server valkey-tools
409s Suggested packages:
409s ruby-redis
409s The following packages will be REMOVED:
409s redis-sentinel redis-server redis-tools
409s The following NEW packages will be installed:
409s valkey-redis-compat valkey-server valkey-tools
409s 0 upgraded, 3 newly installed, 3 to remove and 0 not upgraded.
409s Need to get 1345 kB of archives.
409s After this operation, 212 kB disk space will be freed.
409s Get:1 http://ftpmaster.internal/ubuntu questing/universe arm64 valkey-tools arm64 8.1.1+dfsg1-2ubuntu1 [1285 kB]
410s Get:2 http://ftpmaster.internal/ubuntu questing/universe arm64 valkey-server arm64 8.1.1+dfsg1-2ubuntu1 [51.7 kB]
410s Get:3 http://ftpmaster.internal/ubuntu questing/universe arm64 valkey-redis-compat all 8.1.1+dfsg1-2ubuntu1 [7794 B]
410s Fetched 1345 kB in 1s (2169 kB/s)
411s (Reading database ... 118817 files and directories currently installed.)
411s Removing redis-sentinel (5:8.0.0-2) ...
411s Removing redis-server (5:8.0.0-2) ...
411s Removing redis-tools (5:8.0.0-2) ...
412s Selecting previously unselected package valkey-tools.
412s (Reading database ... 118780 files and directories currently installed.)
412s Preparing to unpack .../valkey-tools_8.1.1+dfsg1-2ubuntu1_arm64.deb ...
412s Unpacking valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
412s Selecting previously unselected package valkey-server.
412s Preparing to unpack .../valkey-server_8.1.1+dfsg1-2ubuntu1_arm64.deb ...
412s Unpacking valkey-server (8.1.1+dfsg1-2ubuntu1) ...
412s Selecting previously unselected package valkey-redis-compat.
412s Preparing to unpack .../valkey-redis-compat_8.1.1+dfsg1-2ubuntu1_all.deb ...
412s Unpacking valkey-redis-compat (8.1.1+dfsg1-2ubuntu1) ...
412s Setting up valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
412s Setting up valkey-server (8.1.1+dfsg1-2ubuntu1) ...
413s Created symlink '/etc/systemd/system/valkey.service' → '/usr/lib/systemd/system/valkey-server.service'.
413s Created symlink '/etc/systemd/system/multi-user.target.wants/valkey-server.service' → '/usr/lib/systemd/system/valkey-server.service'.
413s Setting up valkey-redis-compat (8.1.1+dfsg1-2ubuntu1) ...
413s dpkg-query: no packages found matching valkey-sentinel
413s [I] /etc/redis/redis.conf has been copied to /etc/valkey/valkey.conf. Please, review the content of valkey.conf, especially if you had modified redis.conf.
413s [I] /etc/redis/sentinel.conf has been copied to /etc/valkey/sentinel.conf. Please, review the content of sentinel.conf, especially if you had modified sentinel.conf.
413s [I] On-disk redis dumps moved from /var/lib/redis/ to /var/lib/valkey.
413s Processing triggers for man-db (2.13.1-1) ...
414s + '[' -f /etc/valkey/REDIS_MIGRATION ']'
414s + sha256sum /var/lib/valkey/dump.rdb
414s a2fd8401f6d6a39f8432f2d1fe7298f76f53b10ba1a350d06582c9a47e479b2d /var/lib/valkey/dump.rdb
414s + systemctl status valkey-server
414s + grep inactive
414s Active: inactive (dead) since Thu 2025-06-19 14:32:07 UTC; 601ms ago
414s + rm /etc/valkey/REDIS_MIGRATION
414s + systemctl start valkey-server
414s Job for valkey-server.service failed because the control process exited with error code.
414s See "systemctl status valkey-server.service" and "journalctl -xeu valkey-server.service" for details.
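The log ends at the failure hint, so the root cause is not visible here. A first-pass triage sketch for the testbed, using the two commands the hint itself names plus a foreground run (running as the valkey user is an assumption about the packaged service account):

  systemctl status valkey-server.service
  journalctl -xeu valkey-server.service
  # A foreground start surfaces configuration errors directly; the config
  # path is the one valkey-redis-compat reported above.
  sudo -u valkey valkey-server /etc/valkey/valkey.conf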
414s autopkgtest [14:32:08]: test 0006-migrate-from-redis: -----------------------]
414s autopkgtest [14:32:08]: test 0006-migrate-from-redis: - - - - - - - - - - results - - - - - - - - - -
414s 0006-migrate-from-redis FAIL non-zero exit status 1
415s autopkgtest [14:32:09]: @@@@@@@@@@@@@@@@@@@@ summary
415s 0001-valkey-cli PASS
415s 0002-benchmark PASS
415s 0003-valkey-check-aof PASS
415s 0004-valkey-check-rdb PASS
415s 0005-cjson PASS
415s 0006-migrate-from-redis FAIL non-zero exit status 1
433s nova [W] Using flock in prodstack6-arm64
433s Creating nova instance adt-questing-arm64-valkey-20250619-142514-juju-7f2275-prod-proposed-migration-environment-20-f72ed46b-14f6-4e71-8cc3-2702b46dc7d4 from image adt/ubuntu-questing-arm64-server-20250619.img (UUID 9e826193-3943-4502-8d49-d04976fe922a)...
433s nova [W] Timed out waiting for 96fb9c99-94ef-410c-b7cd-e07616bf5a09 to get deleted.
433s nova [W] Using flock in prodstack6-arm64
433s Creating nova instance adt-questing-arm64-valkey-20250619-142514-juju-7f2275-prod-proposed-migration-environment-20-f72ed46b-14f6-4e71-8cc3-2702b46dc7d4 from image adt/ubuntu-questing-arm64-server-20250619.img (UUID 9e826193-3943-4502-8d49-d04976fe922a)...
433s nova [W] Timed out waiting for fa27cbdc-14ae-42af-b8fb-b10ec7da201f to get deleted.
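For reference, the failing scenario distills from the trace above into a short script. This is a sketch of what the test does, not the test itself; the final start is the step that failed in this run:

  #!/bin/sh -ex
  # Start from redis 8.0, write a key, and snapshot it.
  sed -i 's#loglevel notice#loglevel debug#' /etc/redis/redis.conf
  systemctl restart redis-server
  redis-cli -h 127.0.0.1 -p 6379 SET test 1
  redis-cli -h 127.0.0.1 -p 6379 SAVE
  sha256sum /var/lib/redis/dump.rdb
  # Migration: valkey-redis-compat removes the redis packages, copies the
  # configs to /etc/valkey and moves the dumps to /var/lib/valkey.
  apt-get install -y valkey-redis-compat
  sha256sum /var/lib/valkey/dump.rdb
  rm /etc/valkey/REDIS_MIGRATION
  systemctl start valkey-server    # failed here in this run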