0s autopkgtest [08:20:52]: starting date and time: 2025-06-30 08:20:52+0000
0s autopkgtest [08:20:52]: git checkout: 508d4a25 a-v-ssh wait_for_ssh: demote "ssh connection failed" to a debug message
0s autopkgtest [08:20:52]: host juju-7f2275-prod-proposed-migration-environment-23; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.di7b9di8/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:redis --apt-upgrade valkey --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=redis/5:8.0.0-2 -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-23@bos03-arm64-7.secgroup --name adt-questing-arm64-valkey-20250630-082052-juju-7f2275-prod-proposed-migration-environment-23-63f33cec-76c5-4d9e-99eb-2249834f65cf --image adt/ubuntu-questing-arm64-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-23 --net-id=net_prod-proposed-migration -e TERM=linux --mirror=http://ftpmaster.internal/ubuntu/
3s Creating nova instance adt-questing-arm64-valkey-20250630-082052-juju-7f2275-prod-proposed-migration-environment-23-63f33cec-76c5-4d9e-99eb-2249834f65cf from image adt/ubuntu-questing-arm64-server-20250630.img (UUID ae295103-813a-4e52-a06a-9453e78f97db)...
64s autopkgtest [08:21:56]: testbed dpkg architecture: arm64
64s autopkgtest [08:21:56]: testbed apt version: 3.1.2
64s autopkgtest [08:21:56]: @@@@@@@@@@@@@@@@@@@@ test bed setup
64s autopkgtest [08:21:56]: testbed release detected to be: None
65s autopkgtest [08:21:57]: updating testbed package index (apt update)
66s Get:1 http://ftpmaster.internal/ubuntu questing-proposed InRelease [249 kB]
66s Hit:2 http://ftpmaster.internal/ubuntu questing InRelease
66s Hit:3 http://ftpmaster.internal/ubuntu questing-updates InRelease
66s Hit:4 http://ftpmaster.internal/ubuntu questing-security InRelease
66s Get:5 http://ftpmaster.internal/ubuntu questing-proposed/universe Sources [429 kB]
66s Get:6 http://ftpmaster.internal/ubuntu questing-proposed/multiverse Sources [17.5 kB]
66s Get:7 http://ftpmaster.internal/ubuntu questing-proposed/main Sources [26.6 kB]
66s Get:8 http://ftpmaster.internal/ubuntu questing-proposed/main arm64 Packages [26.7 kB]
66s Get:9 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 Packages [390 kB]
66s Get:10 http://ftpmaster.internal/ubuntu questing-proposed/multiverse arm64 Packages [16.5 kB]
67s Fetched 1156 kB in 1s (1299 kB/s)
68s Reading package lists...
68s autopkgtest [08:22:00]: upgrading testbed (apt dist-upgrade and autopurge)
69s Reading package lists...
69s Building dependency tree...
69s Reading state information...
70s Calculating upgrade...
71s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
71s Reading package lists...
72s Building dependency tree...
72s Reading state information...
72s Solving dependencies...
72s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
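The --apt-pocket=proposed=src:redis option in the command line above enables the questing-proposed pocket but pins it down so that only binaries built from the trigger source package (redis) can be pulled from it. A minimal sketch of equivalent manual apt configuration; the file paths and priorities are illustrative assumptions, not the exact files autopkgtest writes:

    # /etc/apt/sources.list.d/proposed.list (assumed path)
    deb http://ftpmaster.internal/ubuntu questing-proposed main restricted universe multiverse

    # /etc/apt/preferences.d/proposed-pin (assumed path)
    # Hold everything in proposed back by default...
    Package: *
    Pin: release a=questing-proposed
    Pin-Priority: 100

    # ...but let the trigger's binaries in (src: pinning needs a recent apt)
    Package: src:redis
    Pin: release a=questing-proposed
    Pin-Priority: 990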
75s autopkgtest [08:22:07]: testbed running kernel: Linux 6.15.0-3-generic #3-Ubuntu SMP PREEMPT_DYNAMIC Wed Jun 4 08:41:23 UTC 2025
76s autopkgtest [08:22:08]: @@@@@@@@@@@@@@@@@@@@ apt-source valkey
80s Get:1 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (dsc) [2484 B]
80s Get:2 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (tar) [2726 kB]
80s Get:3 http://ftpmaster.internal/ubuntu questing/universe valkey 8.1.1+dfsg1-2ubuntu1 (diff) [20.4 kB]
80s gpgv: Signature made Wed Jun 18 14:39:32 2025 UTC
80s gpgv:                using RSA key 63EEFC3DE14D5146CE7F24BF34B8AD7D9529E793
80s gpgv:                issuer "lena.voytek@canonical.com"
80s gpgv: Can't check signature: No public key
80s dpkg-source: warning: cannot verify inline signature for ./valkey_8.1.1+dfsg1-2ubuntu1.dsc: no acceptable signature found
81s autopkgtest [08:22:13]: testing package valkey version 8.1.1+dfsg1-2ubuntu1
82s autopkgtest [08:22:14]: build not needed
85s autopkgtest [08:22:17]: test 0001-valkey-cli: preparing testbed
85s Reading package lists...
85s Building dependency tree...
85s Reading state information...
85s Solving dependencies...
86s The following NEW packages will be installed:
86s   liblzf1 valkey-server valkey-tools
86s 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
86s Need to get 1345 kB of archives.
86s After this operation, 7648 kB of additional disk space will be used.
86s Get:1 http://ftpmaster.internal/ubuntu questing/universe arm64 liblzf1 arm64 3.6-4 [7426 B]
86s Get:2 http://ftpmaster.internal/ubuntu questing/universe arm64 valkey-tools arm64 8.1.1+dfsg1-2ubuntu1 [1285 kB]
87s Get:3 http://ftpmaster.internal/ubuntu questing/universe arm64 valkey-server arm64 8.1.1+dfsg1-2ubuntu1 [51.7 kB]
87s Fetched 1345 kB in 1s (1879 kB/s)
87s Selecting previously unselected package liblzf1:arm64.
88s (Reading database ... 127289 files and directories currently installed.)
88s Preparing to unpack .../liblzf1_3.6-4_arm64.deb ...
88s Unpacking liblzf1:arm64 (3.6-4) ...
88s Selecting previously unselected package valkey-tools.
88s Preparing to unpack .../valkey-tools_8.1.1+dfsg1-2ubuntu1_arm64.deb ...
88s Unpacking valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
88s Selecting previously unselected package valkey-server.
88s Preparing to unpack .../valkey-server_8.1.1+dfsg1-2ubuntu1_arm64.deb ...
88s Unpacking valkey-server (8.1.1+dfsg1-2ubuntu1) ...
88s Setting up liblzf1:arm64 (3.6-4) ...
88s Setting up valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
88s Setting up valkey-server (8.1.1+dfsg1-2ubuntu1) ...
89s Created symlink '/etc/systemd/system/valkey.service' → '/usr/lib/systemd/system/valkey-server.service'.
89s Created symlink '/etc/systemd/system/multi-user.target.wants/valkey-server.service' → '/usr/lib/systemd/system/valkey-server.service'.
89s Processing triggers for man-db (2.13.1-1) ...
90s Processing triggers for libc-bin (2.41-6ubuntu2) ...
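The gpgv complaint above is expected on a fresh testbed: the .dsc is signed, but no keyring containing the signer's key is installed, so dpkg-source can only warn. A minimal sketch of verifying the signature by hand; the keyserver choice is an assumption:

    $ gpg --keyserver keyserver.ubuntu.com --recv-keys 63EEFC3DE14D5146CE7F24BF34B8AD7D9529E793  # fetch the signer's public key
    $ gpg --verify valkey_8.1.1+dfsg1-2ubuntu1.dsc                                               # re-check the inline signature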
91s autopkgtest [08:22:23]: test 0001-valkey-cli: [-----------------------
97s # Server
97s redis_version:7.2.4
97s server_name:valkey
97s valkey_version:8.1.1
97s valkey_release_stage:ga
97s redis_git_sha1:00000000
97s redis_git_dirty:0
97s redis_build_id:454dc2cf719509d2
97s server_mode:standalone
97s os:Linux 6.15.0-3-generic aarch64
97s arch_bits:64
97s monotonic_clock:POSIX clock_gettime
97s multiplexing_api:epoll
97s gcc_version:14.3.0
97s process_id:1936
97s process_supervised:systemd
97s run_id:4a64a51c196df79264161900f2a26f8bed0f0e57
97s tcp_port:6379
97s server_time_usec:1751271749131140
97s uptime_in_seconds:5
97s uptime_in_days:0
97s hz:10
97s configured_hz:10
97s clients_hz:10
97s lru_clock:6441285
97s executable:/usr/bin/valkey-server
97s config_file:/etc/valkey/valkey.conf
97s io_threads_active:0
97s availability_zone:
97s listener0:name=tcp,bind=127.0.0.1,bind=-::1,port=6379
97s
97s # Clients
97s connected_clients:1
97s cluster_connections:0
97s maxclients:10000
97s client_recent_max_input_buffer:0
97s client_recent_max_output_buffer:0
97s blocked_clients:0
97s tracking_clients:0
97s pubsub_clients:0
97s watching_clients:0
97s clients_in_timeout_table:0
97s total_watched_keys:0
97s total_blocking_keys:0
97s total_blocking_keys_on_nokey:0
97s paused_reason:none
97s paused_actions:none
97s paused_timeout_milliseconds:0
97s
97s # Memory
97s used_memory:944512
97s used_memory_human:922.38K
97s used_memory_rss:14209024
97s used_memory_rss_human:13.55M
97s used_memory_peak:944512
97s used_memory_peak_human:922.38K
97s used_memory_peak_perc:100.29%
97s used_memory_overhead:924608
97s used_memory_startup:924384
97s used_memory_dataset:19904
97s used_memory_dataset_perc:98.89%
97s allocator_allocated:4426880
97s allocator_active:9043968
97s allocator_resident:10354688
97s allocator_muzzy:0
97s total_system_memory:4086386688
97s total_system_memory_human:3.81G
97s used_memory_lua:32768
97s used_memory_vm_eval:32768
97s used_memory_lua_human:32.00K
97s used_memory_scripts_eval:0
97s number_of_cached_scripts:0
97s number_of_functions:0
97s number_of_libraries:0
97s used_memory_vm_functions:33792
97s used_memory_vm_total:66560
97s used_memory_vm_total_human:65.00K
97s used_memory_functions:224
97s used_memory_scripts:224
97s used_memory_scripts_human:224B
97s maxmemory:0
97s maxmemory_human:0B
97s maxmemory_policy:noeviction
97s allocator_frag_ratio:1.00
97s allocator_frag_bytes:0
97s allocator_rss_ratio:1.14
97s allocator_rss_bytes:1310720
97s rss_overhead_ratio:1.37
97s rss_overhead_bytes:3854336
97s mem_fragmentation_ratio:15.37
97s mem_fragmentation_bytes:13284496
97s mem_not_counted_for_evict:0
97s mem_replication_backlog:0
97s mem_total_replication_buffers:0
97s mem_clients_slaves:0
97s mem_clients_normal:0
97s mem_cluster_links:0
97s mem_aof_buffer:0
97s mem_allocator:jemalloc-5.3.0
97s mem_overhead_db_hashtable_rehashing:0
97s active_defrag_running:0
97s lazyfree_pending_objects:0
97s lazyfreed_objects:0
97s
97s # Persistence
97s loading:0
97s async_loading:0
97s current_cow_peak:0
97s current_cow_size:0
97s current_cow_size_age:0
97s current_fork_perc:0.00
97s current_save_keys_processed:0
97s current_save_keys_total:0
97s rdb_changes_since_last_save:0
97s rdb_bgsave_in_progress:0
97s rdb_last_save_time:1751271744
97s rdb_last_bgsave_status:ok
97s rdb_last_bgsave_time_sec:-1
97s rdb_current_bgsave_time_sec:-1
97s rdb_saves:0
97s rdb_last_cow_size:0
97s rdb_last_load_keys_expired:0
97s rdb_last_load_keys_loaded:0
97s aof_enabled:0
97s aof_rewrite_in_progress:0
97s aof_rewrite_scheduled:0
97s aof_last_rewrite_time_sec:-1
97s aof_current_rewrite_time_sec:-1
97s aof_last_bgrewrite_status:ok
97s aof_rewrites:0
97s aof_rewrites_consecutive_failures:0
97s aof_last_write_status:ok
97s aof_last_cow_size:0
97s module_fork_in_progress:0
97s module_fork_last_cow_size:0
97s
97s # Stats
97s total_connections_received:1
97s total_commands_processed:0
97s instantaneous_ops_per_sec:0
97s total_net_input_bytes:14
97s total_net_output_bytes:0
97s total_net_repl_input_bytes:0
97s total_net_repl_output_bytes:0
97s instantaneous_input_kbps:0.00
97s instantaneous_output_kbps:0.00
97s instantaneous_input_repl_kbps:0.00
97s instantaneous_output_repl_kbps:0.00
97s rejected_connections:0
97s sync_full:0
97s sync_partial_ok:0
97s sync_partial_err:0
97s expired_keys:0
97s expired_stale_perc:0.00
97s expired_time_cap_reached_count:0
97s expire_cycle_cpu_milliseconds:0
97s evicted_keys:0
97s evicted_clients:0
97s evicted_scripts:0
97s total_eviction_exceeded_time:0
97s current_eviction_exceeded_time:0
97s keyspace_hits:0
97s keyspace_misses:0
97s pubsub_channels:0
97s pubsub_patterns:0
97s pubsubshard_channels:0
97s latest_fork_usec:0
97s total_forks:0
97s migrate_cached_sockets:0
97s slave_expires_tracked_keys:0
97s active_defrag_hits:0
97s active_defrag_misses:0
97s active_defrag_key_hits:0
97s active_defrag_key_misses:0
97s total_active_defrag_time:0
97s current_active_defrag_time:0
97s tracking_total_keys:0
97s tracking_total_items:0
97s tracking_total_prefixes:0
97s unexpected_error_replies:0
97s total_error_replies:0
97s dump_payload_sanitizations:0
97s total_reads_processed:1
97s total_writes_processed:0
97s io_threaded_reads_processed:0
97s io_threaded_writes_processed:0
97s io_threaded_freed_objects:0
97s io_threaded_accept_processed:0
97s io_threaded_poll_processed:0
97s io_threaded_total_prefetch_batches:0
97s io_threaded_total_prefetch_entries:0
97s client_query_buffer_limit_disconnections:0
97s client_output_buffer_limit_disconnections:0
97s reply_buffer_shrinks:0
97s reply_buffer_expands:0
97s eventloop_cycles:52
97s eventloop_duration_sum:8509
97s eventloop_duration_cmd_sum:0
97s instantaneous_eventloop_cycles_per_sec:9
97s instantaneous_eventloop_duration_usec:177
97s acl_access_denied_auth:0
97s acl_access_denied_cmd:0
97s acl_access_denied_key:0
97s acl_access_denied_channel:0
97s
97s # Replication
97s role:master
97s connected_slaves:0
97s replicas_waiting_psync:0
97s master_failover_state:no-failover
97s master_replid:0419709aeed8e268f95e5db8630945e7473a09d7
97s master_replid2:0000000000000000000000000000000000000000
97s master_repl_offset:0
97s second_repl_offset:-1
97s repl_backlog_active:0
97s repl_backlog_size:10485760
97s repl_backlog_first_byte_offset:0
97s repl_backlog_histlen:0
97s
97s # CPU
97s used_cpu_sys:0.032282
97s used_cpu_user:0.038676
97s used_cpu_sys_children:0.005745
97s used_cpu_user_children:0.000000
97s used_cpu_sys_main_thread:0.031544
97s used_cpu_user_main_thread:0.038689
97s
97s # Modules
97s
97s # Errorstats
97s
97s # Cluster
97s cluster_enabled:0
97s
97s # Keyspace
97s Redis ver. 8.1.1
97s autopkgtest [08:22:29]: test 0001-valkey-cli: -----------------------]
97s autopkgtest [08:22:29]: test 0001-valkey-cli:  - - - - - - - - - - results - - - - - - - - - -
97s 0001-valkey-cli      PASS
98s autopkgtest [08:22:30]: test 0002-benchmark: preparing testbed
98s Reading package lists...
99s Building dependency tree...
99s Reading state information...
99s Solving dependencies...
100s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
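Test 0001 above exercises the packaged client against the freshly started server and dumps its INFO output; the same check can be repeated by hand once valkey-server is active. A minimal sketch, using the defaults visible in the INFO block (tcp_port:6379, bound to localhost):

    $ valkey-cli -h 127.0.0.1 -p 6379 ping          # expect PONG from the running service
    $ valkey-cli -h 127.0.0.1 -p 6379 info server   # prints the "# Server" section seen above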
101s autopkgtest [08:22:33]: test 0002-benchmark: [----------------------- 107s PING_INLINE: rps=0.0 (overall: 0.0) avg_msec=nan (overall: nan) PING_INLINE: rps=385240.0 (overall: 383705.2) avg_msec=0.925 (overall: 0.925) ====== PING_INLINE ====== 107s 100000 requests completed in 0.26 seconds 107s 50 parallel clients 107s 3 bytes payload 107s keep alive: 1 107s host configuration "save": 3600 1 300 100 60 10000 107s host configuration "appendonly": no 107s multi-thread: no 107s 107s Latency by percentile distribution: 107s 0.000% <= 0.279 milliseconds (cumulative count 10) 107s 50.000% <= 0.895 milliseconds (cumulative count 50470) 107s 75.000% <= 1.063 milliseconds (cumulative count 75510) 107s 87.500% <= 1.183 milliseconds (cumulative count 87700) 107s 93.750% <= 1.311 milliseconds (cumulative count 93770) 107s 96.875% <= 1.463 milliseconds (cumulative count 97000) 107s 98.438% <= 1.599 milliseconds (cumulative count 98460) 107s 99.219% <= 1.735 milliseconds (cumulative count 99230) 107s 99.609% <= 1.831 milliseconds (cumulative count 99650) 107s 99.805% <= 1.919 milliseconds (cumulative count 99830) 107s 99.902% <= 1.991 milliseconds (cumulative count 99910) 107s 99.951% <= 2.031 milliseconds (cumulative count 99960) 107s 99.976% <= 2.135 milliseconds (cumulative count 99980) 107s 99.988% <= 2.175 milliseconds (cumulative count 99990) 107s 99.994% <= 2.207 milliseconds (cumulative count 100000) 107s 100.000% <= 2.207 milliseconds (cumulative count 100000) 107s 107s Cumulative distribution of latencies: 107s 0.000% <= 0.103 milliseconds (cumulative count 0) 107s 0.020% <= 0.303 milliseconds (cumulative count 20) 107s 0.060% <= 0.407 milliseconds (cumulative count 60) 107s 0.560% <= 0.503 milliseconds (cumulative count 560) 107s 5.870% <= 0.607 milliseconds (cumulative count 5870) 107s 17.880% <= 0.703 milliseconds (cumulative count 17880) 107s 35.190% <= 0.807 milliseconds (cumulative count 35190) 107s 51.700% <= 0.903 milliseconds (cumulative count 51700) 107s 68.070% <= 1.007 milliseconds (cumulative count 68070) 107s 80.460% <= 1.103 milliseconds (cumulative count 80460) 107s 89.300% <= 1.207 milliseconds (cumulative count 89300) 107s 93.530% <= 1.303 milliseconds (cumulative count 93530) 107s 96.120% <= 1.407 milliseconds (cumulative count 96120) 107s 97.580% <= 1.503 milliseconds (cumulative count 97580) 107s 98.510% <= 1.607 milliseconds (cumulative count 98510) 107s 99.060% <= 1.703 milliseconds (cumulative count 99060) 107s 99.560% <= 1.807 milliseconds (cumulative count 99560) 107s 99.780% <= 1.903 milliseconds (cumulative count 99780) 107s 99.930% <= 2.007 milliseconds (cumulative count 99930) 107s 99.970% <= 2.103 milliseconds (cumulative count 99970) 107s 100.000% <= 3.103 milliseconds (cumulative count 100000) 107s 107s Summary: 107s throughput summary: 383141.75 requests per second 107s latency summary (msec): 107s avg min p50 p95 p99 max 107s 0.923 0.272 0.895 1.359 1.695 2.207 107s PING_MBULK: rps=374302.8 (overall: 394747.9) avg_msec=0.823 (overall: 0.823) ====== PING_MBULK ====== 107s 100000 requests completed in 0.25 seconds 107s 50 parallel clients 107s 3 bytes payload 107s keep alive: 1 107s host configuration "save": 3600 1 300 100 60 10000 107s host configuration "appendonly": no 107s multi-thread: no 107s 107s Latency by percentile distribution: 107s 0.000% <= 0.231 milliseconds (cumulative count 10) 107s 50.000% <= 0.791 milliseconds (cumulative count 51550) 107s 75.000% <= 0.911 milliseconds (cumulative count 75070) 107s 87.500% <= 1.031 milliseconds 
(cumulative count 87530) 107s 93.750% <= 1.167 milliseconds (cumulative count 93780) 107s 96.875% <= 1.311 milliseconds (cumulative count 96880) 107s 98.438% <= 1.455 milliseconds (cumulative count 98460) 107s 99.219% <= 1.671 milliseconds (cumulative count 99240) 107s 99.609% <= 1.967 milliseconds (cumulative count 99630) 107s 99.805% <= 2.839 milliseconds (cumulative count 99810) 107s 99.902% <= 3.567 milliseconds (cumulative count 99910) 107s 99.951% <= 3.903 milliseconds (cumulative count 99960) 107s 99.976% <= 4.015 milliseconds (cumulative count 99980) 107s 99.988% <= 4.087 milliseconds (cumulative count 99990) 107s 99.994% <= 4.135 milliseconds (cumulative count 100000) 107s 100.000% <= 4.135 milliseconds (cumulative count 100000) 107s 107s Cumulative distribution of latencies: 107s 0.000% <= 0.103 milliseconds (cumulative count 0) 107s 0.040% <= 0.303 milliseconds (cumulative count 40) 107s 0.120% <= 0.407 milliseconds (cumulative count 120) 107s 0.580% <= 0.503 milliseconds (cumulative count 580) 107s 8.430% <= 0.607 milliseconds (cumulative count 8430) 107s 30.020% <= 0.703 milliseconds (cumulative count 30020) 107s 55.340% <= 0.807 milliseconds (cumulative count 55340) 107s 74.030% <= 0.903 milliseconds (cumulative count 74030) 107s 85.720% <= 1.007 milliseconds (cumulative count 85720) 107s 91.290% <= 1.103 milliseconds (cumulative count 91290) 107s 94.810% <= 1.207 milliseconds (cumulative count 94810) 107s 96.760% <= 1.303 milliseconds (cumulative count 96760) 107s 98.090% <= 1.407 milliseconds (cumulative count 98090) 107s 98.780% <= 1.503 milliseconds (cumulative count 98780) 107s 99.090% <= 1.607 milliseconds (cumulative count 99090) 107s 99.280% <= 1.703 milliseconds (cumulative count 99280) 107s 99.480% <= 1.807 milliseconds (cumulative count 99480) 107s 99.560% <= 1.903 milliseconds (cumulative count 99560) 107s 99.660% <= 2.007 milliseconds (cumulative count 99660) 107s 99.720% <= 2.103 milliseconds (cumulative count 99720) 107s 99.860% <= 3.103 milliseconds (cumulative count 99860) 107s 99.990% <= 4.103 milliseconds (cumulative count 99990) 107s 100.000% <= 5.103 milliseconds (cumulative count 100000) 107s 107s Summary: 107s throughput summary: 393700.78 requests per second 107s latency summary (msec): 107s avg min p50 p95 p99 max 107s 0.828 0.224 0.791 1.215 1.575 4.135 107s SET: rps=314023.9 (overall: 339741.4) avg_msec=1.276 (overall: 1.276) ====== SET ====== 107s 100000 requests completed in 0.29 seconds 107s 50 parallel clients 107s 3 bytes payload 107s keep alive: 1 107s host configuration "save": 3600 1 300 100 60 10000 107s host configuration "appendonly": no 107s multi-thread: no 107s 107s Latency by percentile distribution: 107s 0.000% <= 0.503 milliseconds (cumulative count 10) 107s 50.000% <= 1.223 milliseconds (cumulative count 50700) 107s 75.000% <= 1.447 milliseconds (cumulative count 75740) 107s 87.500% <= 1.607 milliseconds (cumulative count 87650) 107s 93.750% <= 1.711 milliseconds (cumulative count 94040) 107s 96.875% <= 1.799 milliseconds (cumulative count 97030) 107s 98.438% <= 1.887 milliseconds (cumulative count 98440) 107s 99.219% <= 1.983 milliseconds (cumulative count 99230) 107s 99.609% <= 2.079 milliseconds (cumulative count 99630) 107s 99.805% <= 2.191 milliseconds (cumulative count 99810) 107s 99.902% <= 2.343 milliseconds (cumulative count 99910) 107s 99.951% <= 2.471 milliseconds (cumulative count 99960) 107s 99.976% <= 2.527 milliseconds (cumulative count 99980) 107s 99.988% <= 2.583 milliseconds (cumulative count 99990) 107s 99.994% 
<= 2.679 milliseconds (cumulative count 100000) 107s 100.000% <= 2.679 milliseconds (cumulative count 100000) 107s 107s Cumulative distribution of latencies: 107s 0.000% <= 0.103 milliseconds (cumulative count 0) 107s 0.010% <= 0.503 milliseconds (cumulative count 10) 107s 0.200% <= 0.607 milliseconds (cumulative count 200) 107s 1.070% <= 0.703 milliseconds (cumulative count 1070) 107s 2.890% <= 0.807 milliseconds (cumulative count 2890) 107s 5.570% <= 0.903 milliseconds (cumulative count 5570) 107s 12.560% <= 1.007 milliseconds (cumulative count 12560) 107s 26.680% <= 1.103 milliseconds (cumulative count 26680) 107s 47.840% <= 1.207 milliseconds (cumulative count 47840) 107s 62.070% <= 1.303 milliseconds (cumulative count 62070) 107s 72.450% <= 1.407 milliseconds (cumulative count 72450) 107s 80.000% <= 1.503 milliseconds (cumulative count 80000) 107s 87.650% <= 1.607 milliseconds (cumulative count 87650) 107s 93.600% <= 1.703 milliseconds (cumulative count 93600) 107s 97.220% <= 1.807 milliseconds (cumulative count 97220) 107s 98.620% <= 1.903 milliseconds (cumulative count 98620) 107s 99.400% <= 2.007 milliseconds (cumulative count 99400) 107s 99.720% <= 2.103 milliseconds (cumulative count 99720) 107s 100.000% <= 3.103 milliseconds (cumulative count 100000) 107s 107s Summary: 107s throughput summary: 342465.75 requests per second 107s latency summary (msec): 107s avg min p50 p95 p99 max 107s 1.269 0.496 1.223 1.735 1.959 2.679 108s GET: rps=284280.0 (overall: 380053.5) avg_msec=0.998 (overall: 0.998) ====== GET ====== 108s 100000 requests completed in 0.26 seconds 108s 50 parallel clients 108s 3 bytes payload 108s keep alive: 1 108s host configuration "save": 3600 1 300 100 60 10000 108s host configuration "appendonly": no 108s multi-thread: no 108s 108s Latency by percentile distribution: 108s 0.000% <= 0.351 milliseconds (cumulative count 10) 108s 50.000% <= 0.999 milliseconds (cumulative count 51070) 108s 75.000% <= 1.159 milliseconds (cumulative count 75650) 108s 87.500% <= 1.279 milliseconds (cumulative count 87630) 108s 93.750% <= 1.407 milliseconds (cumulative count 93890) 108s 96.875% <= 1.511 milliseconds (cumulative count 97060) 108s 98.438% <= 1.623 milliseconds (cumulative count 98460) 108s 99.219% <= 1.783 milliseconds (cumulative count 99250) 108s 99.609% <= 1.967 milliseconds (cumulative count 99610) 108s 99.805% <= 3.007 milliseconds (cumulative count 99810) 108s 99.902% <= 3.943 milliseconds (cumulative count 99910) 108s 99.951% <= 4.191 milliseconds (cumulative count 99960) 108s 99.976% <= 4.351 milliseconds (cumulative count 99980) 108s 99.988% <= 4.423 milliseconds (cumulative count 99990) 108s 99.994% <= 4.479 milliseconds (cumulative count 100000) 108s 100.000% <= 4.479 milliseconds (cumulative count 100000) 108s 108s Cumulative distribution of latencies: 108s 0.000% <= 0.103 milliseconds (cumulative count 0) 108s 0.040% <= 0.407 milliseconds (cumulative count 40) 108s 0.520% <= 0.503 milliseconds (cumulative count 520) 108s 4.220% <= 0.607 milliseconds (cumulative count 4220) 108s 11.660% <= 0.703 milliseconds (cumulative count 11660) 108s 23.680% <= 0.807 milliseconds (cumulative count 23680) 108s 36.620% <= 0.903 milliseconds (cumulative count 36620) 108s 52.400% <= 1.007 milliseconds (cumulative count 52400) 108s 67.840% <= 1.103 milliseconds (cumulative count 67840) 108s 81.240% <= 1.207 milliseconds (cumulative count 81240) 108s 89.140% <= 1.303 milliseconds (cumulative count 89140) 108s 93.890% <= 1.407 milliseconds (cumulative count 93890) 108s 96.840% <= 
1.503 milliseconds (cumulative count 96840) 108s 98.360% <= 1.607 milliseconds (cumulative count 98360) 108s 98.920% <= 1.703 milliseconds (cumulative count 98920) 108s 99.350% <= 1.807 milliseconds (cumulative count 99350) 108s 99.510% <= 1.903 milliseconds (cumulative count 99510) 108s 99.630% <= 2.007 milliseconds (cumulative count 99630) 108s 99.670% <= 2.103 milliseconds (cumulative count 99670) 108s 99.830% <= 3.103 milliseconds (cumulative count 99830) 108s 99.940% <= 4.103 milliseconds (cumulative count 99940) 108s 100.000% <= 5.103 milliseconds (cumulative count 100000) 108s 108s Summary: 108s throughput summary: 380228.12 requests per second 108s latency summary (msec): 108s avg min p50 p95 p99 max 108s 1.006 0.344 0.999 1.439 1.727 4.479 108s INCR: rps=259880.5 (overall: 379244.2) avg_msec=1.012 (overall: 1.012) ====== INCR ====== 108s 100000 requests completed in 0.26 seconds 108s 50 parallel clients 108s 3 bytes payload 108s keep alive: 1 108s host configuration "save": 3600 1 300 100 60 10000 108s host configuration "appendonly": no 108s multi-thread: no 108s 108s Latency by percentile distribution: 108s 0.000% <= 0.367 milliseconds (cumulative count 10) 108s 50.000% <= 1.007 milliseconds (cumulative count 50070) 108s 75.000% <= 1.199 milliseconds (cumulative count 75130) 108s 87.500% <= 1.343 milliseconds (cumulative count 87980) 108s 93.750% <= 1.463 milliseconds (cumulative count 94120) 108s 96.875% <= 1.575 milliseconds (cumulative count 96930) 108s 98.438% <= 1.711 milliseconds (cumulative count 98480) 108s 99.219% <= 1.831 milliseconds (cumulative count 99230) 108s 99.609% <= 1.959 milliseconds (cumulative count 99630) 108s 99.805% <= 2.031 milliseconds (cumulative count 99810) 108s 99.902% <= 2.111 milliseconds (cumulative count 99910) 108s 99.951% <= 2.167 milliseconds (cumulative count 99960) 108s 99.976% <= 2.199 milliseconds (cumulative count 99980) 108s 99.988% <= 2.207 milliseconds (cumulative count 99990) 108s 99.994% <= 2.271 milliseconds (cumulative count 100000) 108s 100.000% <= 2.271 milliseconds (cumulative count 100000) 108s 108s Cumulative distribution of latencies: 108s 0.000% <= 0.103 milliseconds (cumulative count 0) 108s 0.020% <= 0.407 milliseconds (cumulative count 20) 108s 0.340% <= 0.503 milliseconds (cumulative count 340) 108s 2.520% <= 0.607 milliseconds (cumulative count 2520) 108s 9.220% <= 0.703 milliseconds (cumulative count 9220) 108s 21.270% <= 0.807 milliseconds (cumulative count 21270) 108s 34.510% <= 0.903 milliseconds (cumulative count 34510) 108s 50.070% <= 1.007 milliseconds (cumulative count 50070) 108s 63.900% <= 1.103 milliseconds (cumulative count 63900) 108s 76.170% <= 1.207 milliseconds (cumulative count 76170) 108s 85.120% <= 1.303 milliseconds (cumulative count 85120) 108s 91.640% <= 1.407 milliseconds (cumulative count 91640) 108s 95.330% <= 1.503 milliseconds (cumulative count 95330) 108s 97.520% <= 1.607 milliseconds (cumulative count 97520) 108s 98.390% <= 1.703 milliseconds (cumulative count 98390) 108s 99.140% <= 1.807 milliseconds (cumulative count 99140) 108s 99.500% <= 1.903 milliseconds (cumulative count 99500) 108s 99.750% <= 2.007 milliseconds (cumulative count 99750) 108s 99.900% <= 2.103 milliseconds (cumulative count 99900) 108s 100.000% <= 3.103 milliseconds (cumulative count 100000) 108s 108s Summary: 108s throughput summary: 377358.50 requests per second 108s latency summary (msec): 108s avg min p50 p95 p99 max 108s 1.032 0.360 1.007 1.495 1.783 2.271 108s LPUSH: rps=197680.0 (overall: 320909.1) 
avg_msec=1.375 (overall: 1.375) ====== LPUSH ====== 108s 100000 requests completed in 0.31 seconds 108s 50 parallel clients 108s 3 bytes payload 108s keep alive: 1 108s host configuration "save": 3600 1 300 100 60 10000 108s host configuration "appendonly": no 108s multi-thread: no 108s 108s Latency by percentile distribution: 108s 0.000% <= 0.463 milliseconds (cumulative count 10) 108s 50.000% <= 1.279 milliseconds (cumulative count 50760) 108s 75.000% <= 1.535 milliseconds (cumulative count 75660) 108s 87.500% <= 1.703 milliseconds (cumulative count 87940) 108s 93.750% <= 1.807 milliseconds (cumulative count 94020) 108s 96.875% <= 1.911 milliseconds (cumulative count 96890) 108s 98.438% <= 2.031 milliseconds (cumulative count 98440) 108s 99.219% <= 2.159 milliseconds (cumulative count 99240) 108s 99.609% <= 2.351 milliseconds (cumulative count 99610) 108s 99.805% <= 3.039 milliseconds (cumulative count 99810) 108s 99.902% <= 4.151 milliseconds (cumulative count 99910) 108s 99.951% <= 4.367 milliseconds (cumulative count 99960) 108s 99.976% <= 4.479 milliseconds (cumulative count 99980) 108s 99.988% <= 4.527 milliseconds (cumulative count 99990) 108s 99.994% <= 4.567 milliseconds (cumulative count 100000) 108s 100.000% <= 4.567 milliseconds (cumulative count 100000) 108s 108s Cumulative distribution of latencies: 108s 0.000% <= 0.103 milliseconds (cumulative count 0) 108s 0.030% <= 0.503 milliseconds (cumulative count 30) 108s 0.170% <= 0.607 milliseconds (cumulative count 170) 108s 0.470% <= 0.703 milliseconds (cumulative count 470) 108s 1.060% <= 0.807 milliseconds (cumulative count 1060) 108s 2.370% <= 0.903 milliseconds (cumulative count 2370) 108s 6.370% <= 1.007 milliseconds (cumulative count 6370) 108s 17.070% <= 1.103 milliseconds (cumulative count 17070) 108s 38.210% <= 1.207 milliseconds (cumulative count 38210) 108s 54.050% <= 1.303 milliseconds (cumulative count 54050) 108s 64.900% <= 1.407 milliseconds (cumulative count 64900) 108s 73.100% <= 1.503 milliseconds (cumulative count 73100) 108s 81.110% <= 1.607 milliseconds (cumulative count 81110) 108s 87.940% <= 1.703 milliseconds (cumulative count 87940) 108s 94.020% <= 1.807 milliseconds (cumulative count 94020) 108s 96.790% <= 1.903 milliseconds (cumulative count 96790) 108s 98.230% <= 2.007 milliseconds (cumulative count 98230) 108s 98.980% <= 2.103 milliseconds (cumulative count 98980) 108s 99.820% <= 3.103 milliseconds (cumulative count 99820) 108s 99.900% <= 4.103 milliseconds (cumulative count 99900) 108s 100.000% <= 5.103 milliseconds (cumulative count 100000) 108s 108s Summary: 108s throughput summary: 326797.41 requests per second 108s latency summary (msec): 108s avg min p50 p95 p99 max 108s 1.345 0.456 1.279 1.839 2.111 4.567 108s RPUSH: rps=133480.0 (overall: 347604.2) avg_msec=1.245 (overall: 1.245) ====== RPUSH ====== 108s 100000 requests completed in 0.29 seconds 108s 50 parallel clients 108s 3 bytes payload 108s keep alive: 1 108s host configuration "save": 3600 1 300 100 60 10000 108s host configuration "appendonly": no 108s multi-thread: no 108s 108s Latency by percentile distribution: 108s 0.000% <= 0.535 milliseconds (cumulative count 30) 108s 50.000% <= 1.199 milliseconds (cumulative count 51070) 108s 75.000% <= 1.415 milliseconds (cumulative count 75490) 108s 87.500% <= 1.575 milliseconds (cumulative count 87610) 108s 93.750% <= 1.687 milliseconds (cumulative count 94150) 108s 96.875% <= 1.775 milliseconds (cumulative count 97010) 108s 98.438% <= 1.871 milliseconds (cumulative count 98450) 108s 99.219% 
<= 1.975 milliseconds (cumulative count 99220) 108s 99.609% <= 2.103 milliseconds (cumulative count 99610) 108s 99.805% <= 2.783 milliseconds (cumulative count 99810) 108s 99.902% <= 3.655 milliseconds (cumulative count 99910) 108s 99.951% <= 3.855 milliseconds (cumulative count 99960) 108s 99.976% <= 3.991 milliseconds (cumulative count 99980) 108s 99.988% <= 4.039 milliseconds (cumulative count 99990) 108s 99.994% <= 4.087 milliseconds (cumulative count 100000) 108s 100.000% <= 4.087 milliseconds (cumulative count 100000) 108s 108s Cumulative distribution of latencies: 108s 0.000% <= 0.103 milliseconds (cumulative count 0) 108s 0.150% <= 0.607 milliseconds (cumulative count 150) 108s 0.820% <= 0.703 milliseconds (cumulative count 820) 108s 2.810% <= 0.807 milliseconds (cumulative count 2810) 108s 6.140% <= 0.903 milliseconds (cumulative count 6140) 108s 14.690% <= 1.007 milliseconds (cumulative count 14690) 108s 31.090% <= 1.103 milliseconds (cumulative count 31090) 108s 52.290% <= 1.207 milliseconds (cumulative count 52290) 108s 65.280% <= 1.303 milliseconds (cumulative count 65280) 108s 74.820% <= 1.407 milliseconds (cumulative count 74820) 108s 82.260% <= 1.503 milliseconds (cumulative count 82260) 108s 89.690% <= 1.607 milliseconds (cumulative count 89690) 108s 94.960% <= 1.703 milliseconds (cumulative count 94960) 108s 97.520% <= 1.807 milliseconds (cumulative count 97520) 108s 98.720% <= 1.903 milliseconds (cumulative count 98720) 108s 99.370% <= 2.007 milliseconds (cumulative count 99370) 108s 99.610% <= 2.103 milliseconds (cumulative count 99610) 108s 99.870% <= 3.103 milliseconds (cumulative count 99870) 108s 100.000% <= 4.103 milliseconds (cumulative count 100000) 108s 108s Summary: 108s throughput summary: 347222.25 requests per second 108s latency summary (msec): 108s avg min p50 p95 p99 max 108s 1.248 0.528 1.199 1.711 1.935 4.087 109s LPOP: rps=67290.8 (overall: 301607.1) avg_msec=1.446 (overall: 1.446) LPOP: rps=315760.0 (overall: 313169.9) avg_msec=1.411 (overall: 1.417) ====== LPOP ====== 109s 100000 requests completed in 0.32 seconds 109s 50 parallel clients 109s 3 bytes payload 109s keep alive: 1 109s host configuration "save": 3600 1 300 100 60 10000 109s host configuration "appendonly": no 109s multi-thread: no 109s 109s Latency by percentile distribution: 109s 0.000% <= 0.479 milliseconds (cumulative count 10) 109s 50.000% <= 1.343 milliseconds (cumulative count 50030) 109s 75.000% <= 1.599 milliseconds (cumulative count 75120) 109s 87.500% <= 1.775 milliseconds (cumulative count 87770) 109s 93.750% <= 1.879 milliseconds (cumulative count 93930) 109s 96.875% <= 1.967 milliseconds (cumulative count 96990) 109s 98.438% <= 2.071 milliseconds (cumulative count 98490) 109s 99.219% <= 2.175 milliseconds (cumulative count 99230) 109s 99.609% <= 2.303 milliseconds (cumulative count 99610) 109s 99.805% <= 3.167 milliseconds (cumulative count 99810) 109s 99.902% <= 4.207 milliseconds (cumulative count 99910) 109s 99.951% <= 4.519 milliseconds (cumulative count 99960) 109s 99.976% <= 4.631 milliseconds (cumulative count 99980) 109s 99.988% <= 4.687 milliseconds (cumulative count 99990) 109s 99.994% <= 4.743 milliseconds (cumulative count 100000) 109s 100.000% <= 4.743 milliseconds (cumulative count 100000) 109s 109s Cumulative distribution of latencies: 109s 0.000% <= 0.103 milliseconds (cumulative count 0) 109s 0.020% <= 0.503 milliseconds (cumulative count 20) 109s 0.090% <= 0.607 milliseconds (cumulative count 90) 109s 0.240% <= 0.703 milliseconds (cumulative count 240) 
109s 0.470% <= 0.807 milliseconds (cumulative count 470) 109s 0.940% <= 0.903 milliseconds (cumulative count 940) 109s 2.050% <= 1.007 milliseconds (cumulative count 2050) 109s 7.130% <= 1.103 milliseconds (cumulative count 7130) 109s 24.970% <= 1.207 milliseconds (cumulative count 24970) 109s 43.860% <= 1.303 milliseconds (cumulative count 43860) 109s 58.150% <= 1.407 milliseconds (cumulative count 58150) 109s 67.230% <= 1.503 milliseconds (cumulative count 67230) 109s 75.720% <= 1.607 milliseconds (cumulative count 75720) 109s 82.840% <= 1.703 milliseconds (cumulative count 82840) 109s 89.910% <= 1.807 milliseconds (cumulative count 89910) 109s 95.010% <= 1.903 milliseconds (cumulative count 95010) 109s 97.860% <= 2.007 milliseconds (cumulative count 97860) 109s 98.790% <= 2.103 milliseconds (cumulative count 98790) 109s 99.800% <= 3.103 milliseconds (cumulative count 99800) 109s 99.900% <= 4.103 milliseconds (cumulative count 99900) 109s 100.000% <= 5.103 milliseconds (cumulative count 100000) 109s 109s Summary: 109s throughput summary: 313479.62 requests per second 109s latency summary (msec): 109s avg min p50 p95 p99 max 109s 1.415 0.472 1.343 1.903 2.143 4.743 109s RPOP: rps=301035.9 (overall: 321531.9) avg_msec=1.372 (overall: 1.372) ====== RPOP ====== 109s 100000 requests completed in 0.31 seconds 109s 50 parallel clients 109s 3 bytes payload 109s keep alive: 1 109s host configuration "save": 3600 1 300 100 60 10000 109s host configuration "appendonly": no 109s multi-thread: no 109s 109s Latency by percentile distribution: 109s 0.000% <= 0.535 milliseconds (cumulative count 10) 109s 50.000% <= 1.303 milliseconds (cumulative count 50760) 109s 75.000% <= 1.551 milliseconds (cumulative count 75420) 109s 87.500% <= 1.727 milliseconds (cumulative count 87990) 109s 93.750% <= 1.823 milliseconds (cumulative count 93960) 109s 96.875% <= 1.911 milliseconds (cumulative count 97060) 109s 98.438% <= 2.007 milliseconds (cumulative count 98500) 109s 99.219% <= 2.087 milliseconds (cumulative count 99220) 109s 99.609% <= 2.175 milliseconds (cumulative count 99620) 109s 99.805% <= 2.647 milliseconds (cumulative count 99810) 109s 99.902% <= 3.519 milliseconds (cumulative count 99910) 109s 99.951% <= 3.847 milliseconds (cumulative count 99960) 109s 99.976% <= 3.943 milliseconds (cumulative count 99980) 109s 99.988% <= 3.967 milliseconds (cumulative count 99990) 109s 99.994% <= 4.007 milliseconds (cumulative count 100000) 109s 100.000% <= 4.007 milliseconds (cumulative count 100000) 109s 109s Cumulative distribution of latencies: 109s 0.000% <= 0.103 milliseconds (cumulative count 0) 109s 0.060% <= 0.607 milliseconds (cumulative count 60) 109s 0.280% <= 0.703 milliseconds (cumulative count 280) 109s 0.600% <= 0.807 milliseconds (cumulative count 600) 109s 1.050% <= 0.903 milliseconds (cumulative count 1050) 109s 2.880% <= 1.007 milliseconds (cumulative count 2880) 109s 11.190% <= 1.103 milliseconds (cumulative count 11190) 109s 32.380% <= 1.207 milliseconds (cumulative count 32380) 109s 50.760% <= 1.303 milliseconds (cumulative count 50760) 109s 62.850% <= 1.407 milliseconds (cumulative count 62850) 109s 71.530% <= 1.503 milliseconds (cumulative count 71530) 109s 79.620% <= 1.607 milliseconds (cumulative count 79620) 109s 86.380% <= 1.703 milliseconds (cumulative count 86380) 109s 93.130% <= 1.807 milliseconds (cumulative count 93130) 109s 96.870% <= 1.903 milliseconds (cumulative count 96870) 109s 98.500% <= 2.007 milliseconds (cumulative count 98500) 109s 99.310% <= 2.103 milliseconds (cumulative 
count 99310) 109s 99.890% <= 3.103 milliseconds (cumulative count 99890) 109s 100.000% <= 4.103 milliseconds (cumulative count 100000) 109s 109s Summary: 109s throughput summary: 322580.66 requests per second 109s latency summary (msec): 109s avg min p50 p95 p99 max 109s 1.371 0.528 1.303 1.847 2.063 4.007 109s SADD: rps=246640.0 (overall: 358488.4) avg_msec=1.198 (overall: 1.198) ====== SADD ====== 109s 100000 requests completed in 0.28 seconds 109s 50 parallel clients 109s 3 bytes payload 109s keep alive: 1 109s host configuration "save": 3600 1 300 100 60 10000 109s host configuration "appendonly": no 109s multi-thread: no 109s 109s Latency by percentile distribution: 109s 0.000% <= 0.423 milliseconds (cumulative count 10) 109s 50.000% <= 1.127 milliseconds (cumulative count 51400) 109s 75.000% <= 1.327 milliseconds (cumulative count 75280) 109s 87.500% <= 1.495 milliseconds (cumulative count 87850) 109s 93.750% <= 1.591 milliseconds (cumulative count 93770) 109s 96.875% <= 1.703 milliseconds (cumulative count 96960) 109s 98.438% <= 1.839 milliseconds (cumulative count 98450) 109s 99.219% <= 1.959 milliseconds (cumulative count 99240) 109s 99.609% <= 2.119 milliseconds (cumulative count 99620) 109s 99.805% <= 2.463 milliseconds (cumulative count 99810) 109s 99.902% <= 3.279 milliseconds (cumulative count 99910) 109s 99.951% <= 3.623 milliseconds (cumulative count 99960) 109s 99.976% <= 3.727 milliseconds (cumulative count 99980) 109s 99.988% <= 3.775 milliseconds (cumulative count 99990) 109s 99.994% <= 3.831 milliseconds (cumulative count 100000) 109s 100.000% <= 3.831 milliseconds (cumulative count 100000) 109s 109s Cumulative distribution of latencies: 109s 0.000% <= 0.103 milliseconds (cumulative count 0) 109s 0.060% <= 0.503 milliseconds (cumulative count 60) 109s 0.520% <= 0.607 milliseconds (cumulative count 520) 109s 2.750% <= 0.703 milliseconds (cumulative count 2750) 109s 7.580% <= 0.807 milliseconds (cumulative count 7580) 109s 14.480% <= 0.903 milliseconds (cumulative count 14480) 109s 27.490% <= 1.007 milliseconds (cumulative count 27490) 109s 46.810% <= 1.103 milliseconds (cumulative count 46810) 109s 63.630% <= 1.207 milliseconds (cumulative count 63630) 109s 73.170% <= 1.303 milliseconds (cumulative count 73170) 109s 81.570% <= 1.407 milliseconds (cumulative count 81570) 109s 88.520% <= 1.503 milliseconds (cumulative count 88520) 109s 94.490% <= 1.607 milliseconds (cumulative count 94490) 109s 96.960% <= 1.703 milliseconds (cumulative count 96960) 109s 98.100% <= 1.807 milliseconds (cumulative count 98100) 109s 98.900% <= 1.903 milliseconds (cumulative count 98900) 109s 99.440% <= 2.007 milliseconds (cumulative count 99440) 109s 99.580% <= 2.103 milliseconds (cumulative count 99580) 109s 99.900% <= 3.103 milliseconds (cumulative count 99900) 109s 100.000% <= 4.103 milliseconds (cumulative count 100000) 109s 109s Summary: 109s throughput summary: 363636.34 requests per second 109s latency summary (msec): 109s avg min p50 p95 p99 max 109s 1.164 0.416 1.127 1.623 1.919 3.831 110s HSET: rps=200119.5 (overall: 344041.1) avg_msec=1.249 (overall: 1.249) ====== HSET ====== 110s 100000 requests completed in 0.29 seconds 110s 50 parallel clients 110s 3 bytes payload 110s keep alive: 1 110s host configuration "save": 3600 1 300 100 60 10000 110s host configuration "appendonly": no 110s multi-thread: no 110s 110s Latency by percentile distribution: 110s 0.000% <= 0.519 milliseconds (cumulative count 10) 110s 50.000% <= 1.175 milliseconds (cumulative count 50060) 110s 75.000% <= 
1.399 milliseconds (cumulative count 75390) 110s 87.500% <= 1.567 milliseconds (cumulative count 88030) 110s 93.750% <= 1.663 milliseconds (cumulative count 93990) 110s 96.875% <= 1.751 milliseconds (cumulative count 96950) 110s 98.438% <= 1.855 milliseconds (cumulative count 98460) 110s 99.219% <= 1.967 milliseconds (cumulative count 99250) 110s 99.609% <= 2.079 milliseconds (cumulative count 99610) 110s 99.805% <= 2.655 milliseconds (cumulative count 99810) 110s 99.902% <= 3.631 milliseconds (cumulative count 99910) 110s 99.951% <= 3.863 milliseconds (cumulative count 99960) 110s 99.976% <= 4.015 milliseconds (cumulative count 99980) 110s 99.988% <= 4.071 milliseconds (cumulative count 99990) 110s 99.994% <= 4.127 milliseconds (cumulative count 100000) 110s 100.000% <= 4.127 milliseconds (cumulative count 100000) 110s 110s Cumulative distribution of latencies: 110s 0.000% <= 0.103 milliseconds (cumulative count 0) 110s 0.090% <= 0.607 milliseconds (cumulative count 90) 110s 0.810% <= 0.703 milliseconds (cumulative count 810) 110s 3.150% <= 0.807 milliseconds (cumulative count 3150) 110s 7.550% <= 0.903 milliseconds (cumulative count 7550) 110s 17.630% <= 1.007 milliseconds (cumulative count 17630) 110s 35.380% <= 1.103 milliseconds (cumulative count 35380) 110s 55.330% <= 1.207 milliseconds (cumulative count 55330) 110s 66.720% <= 1.303 milliseconds (cumulative count 66720) 110s 76.020% <= 1.407 milliseconds (cumulative count 76020) 110s 83.380% <= 1.503 milliseconds (cumulative count 83380) 110s 90.710% <= 1.607 milliseconds (cumulative count 90710) 110s 95.710% <= 1.703 milliseconds (cumulative count 95710) 110s 97.850% <= 1.807 milliseconds (cumulative count 97850) 110s 98.880% <= 1.903 milliseconds (cumulative count 98880) 110s 99.430% <= 2.007 milliseconds (cumulative count 99430) 110s 99.620% <= 2.103 milliseconds (cumulative count 99620) 110s 99.880% <= 3.103 milliseconds (cumulative count 99880) 110s 99.990% <= 4.103 milliseconds (cumulative count 99990) 110s 100.000% <= 5.103 milliseconds (cumulative count 100000) 110s 110s Summary: 110s throughput summary: 349650.34 requests per second 110s latency summary (msec): 110s avg min p50 p95 p99 max 110s 1.230 0.512 1.175 1.687 1.927 4.127 110s SPOP: rps=162480.0 (overall: 376111.1) avg_msec=0.911 (overall: 0.911) ====== SPOP ====== 110s 100000 requests completed in 0.26 seconds 110s 50 parallel clients 110s 3 bytes payload 110s keep alive: 1 110s host configuration "save": 3600 1 300 100 60 10000 110s host configuration "appendonly": no 110s multi-thread: no 110s 110s Latency by percentile distribution: 110s 0.000% <= 0.263 milliseconds (cumulative count 10) 110s 50.000% <= 0.871 milliseconds (cumulative count 50250) 110s 75.000% <= 1.039 milliseconds (cumulative count 75320) 110s 87.500% <= 1.183 milliseconds (cumulative count 87510) 110s 93.750% <= 1.311 milliseconds (cumulative count 93870) 110s 96.875% <= 1.423 milliseconds (cumulative count 96950) 110s 98.438% <= 1.543 milliseconds (cumulative count 98440) 110s 99.219% <= 1.687 milliseconds (cumulative count 99230) 110s 99.609% <= 1.911 milliseconds (cumulative count 99630) 110s 99.805% <= 2.727 milliseconds (cumulative count 99810) 110s 99.902% <= 3.551 milliseconds (cumulative count 99910) 110s 99.951% <= 3.879 milliseconds (cumulative count 99960) 110s 99.976% <= 3.943 milliseconds (cumulative count 99980) 110s 99.988% <= 3.967 milliseconds (cumulative count 99990) 110s 99.994% <= 4.023 milliseconds (cumulative count 100000) 110s 100.000% <= 4.023 milliseconds (cumulative 
count 100000) 110s 110s Cumulative distribution of latencies: 110s 0.000% <= 0.103 milliseconds (cumulative count 0) 110s 0.030% <= 0.303 milliseconds (cumulative count 30) 110s 0.110% <= 0.407 milliseconds (cumulative count 110) 110s 0.740% <= 0.503 milliseconds (cumulative count 740) 110s 6.280% <= 0.607 milliseconds (cumulative count 6280) 110s 19.620% <= 0.703 milliseconds (cumulative count 19620) 110s 38.680% <= 0.807 milliseconds (cumulative count 38680) 110s 55.910% <= 0.903 milliseconds (cumulative count 55910) 110s 71.490% <= 1.007 milliseconds (cumulative count 71490) 110s 81.280% <= 1.103 milliseconds (cumulative count 81280) 110s 89.090% <= 1.207 milliseconds (cumulative count 89090) 110s 93.610% <= 1.303 milliseconds (cumulative count 93610) 110s 96.570% <= 1.407 milliseconds (cumulative count 96570) 110s 98.010% <= 1.503 milliseconds (cumulative count 98010) 110s 98.830% <= 1.607 milliseconds (cumulative count 98830) 110s 99.290% <= 1.703 milliseconds (cumulative count 99290) 110s 99.530% <= 1.807 milliseconds (cumulative count 99530) 110s 99.600% <= 1.903 milliseconds (cumulative count 99600) 110s 99.700% <= 2.007 milliseconds (cumulative count 99700) 110s 99.750% <= 2.103 milliseconds (cumulative count 99750) 110s 99.890% <= 3.103 milliseconds (cumulative count 99890) 110s 100.000% <= 4.103 milliseconds (cumulative count 100000) 110s 110s Summary: 110s throughput summary: 381679.41 requests per second 110s latency summary (msec): 110s avg min p50 p95 p99 max 110s 0.911 0.256 0.871 1.351 1.647 4.023 110s ZADD: rps=121235.1 (overall: 323723.4) avg_msec=1.359 (overall: 1.359) ====== ZADD ====== 110s 100000 requests completed in 0.30 seconds 110s 50 parallel clients 110s 3 bytes payload 110s keep alive: 1 110s host configuration "save": 3600 1 300 100 60 10000 110s host configuration "appendonly": no 110s multi-thread: no 110s 110s Latency by percentile distribution: 110s 0.000% <= 0.535 milliseconds (cumulative count 10) 110s 50.000% <= 1.239 milliseconds (cumulative count 50370) 110s 75.000% <= 1.471 milliseconds (cumulative count 75040) 110s 87.500% <= 1.647 milliseconds (cumulative count 88050) 110s 93.750% <= 1.743 milliseconds (cumulative count 94140) 110s 96.875% <= 1.831 milliseconds (cumulative count 97030) 110s 98.438% <= 1.943 milliseconds (cumulative count 98450) 110s 99.219% <= 2.055 milliseconds (cumulative count 99230) 110s 99.609% <= 2.191 milliseconds (cumulative count 99610) 110s 99.805% <= 3.175 milliseconds (cumulative count 99810) 110s 99.902% <= 4.135 milliseconds (cumulative count 99910) 110s 99.951% <= 4.495 milliseconds (cumulative count 99960) 110s 99.976% <= 4.583 milliseconds (cumulative count 99980) 110s 99.988% <= 4.623 milliseconds (cumulative count 99990) 110s 99.994% <= 4.711 milliseconds (cumulative count 100000) 110s 100.000% <= 4.711 milliseconds (cumulative count 100000) 110s 110s Cumulative distribution of latencies: 110s 0.000% <= 0.103 milliseconds (cumulative count 0) 110s 0.080% <= 0.607 milliseconds (cumulative count 80) 110s 0.390% <= 0.703 milliseconds (cumulative count 390) 110s 1.110% <= 0.807 milliseconds (cumulative count 1110) 110s 2.800% <= 0.903 milliseconds (cumulative count 2800) 110s 8.210% <= 1.007 milliseconds (cumulative count 8210) 110s 22.000% <= 1.103 milliseconds (cumulative count 22000) 110s 44.430% <= 1.207 milliseconds (cumulative count 44430) 110s 59.130% <= 1.303 milliseconds (cumulative count 59130) 110s 69.630% <= 1.407 milliseconds (cumulative count 69630) 110s 77.620% <= 1.503 milliseconds (cumulative count 
77620) 110s 85.330% <= 1.607 milliseconds (cumulative count 85330) 110s 91.890% <= 1.703 milliseconds (cumulative count 91890) 110s 96.470% <= 1.807 milliseconds (cumulative count 96470) 110s 98.050% <= 1.903 milliseconds (cumulative count 98050) 110s 98.990% <= 2.007 milliseconds (cumulative count 98990) 110s 99.410% <= 2.103 milliseconds (cumulative count 99410) 110s 99.800% <= 3.103 milliseconds (cumulative count 99800) 110s 99.900% <= 4.103 milliseconds (cumulative count 99900) 110s 100.000% <= 5.103 milliseconds (cumulative count 100000) 110s 110s Summary: 110s throughput summary: 336700.34 requests per second 110s latency summary (msec): 110s avg min p50 p95 p99 max 110s 1.304 0.528 1.239 1.767 2.015 4.711 110s ZPOPMIN: rps=63944.2 (overall: 356666.7) avg_msec=0.907 (overall: 0.907) ====== ZPOPMIN ====== 110s 100000 requests completed in 0.26 seconds 110s 50 parallel clients 110s 3 bytes payload 110s keep alive: 1 110s host configuration "save": 3600 1 300 100 60 10000 110s host configuration "appendonly": no 110s multi-thread: no 110s 110s Latency by percentile distribution: 110s 0.000% <= 0.255 milliseconds (cumulative count 10) 110s 50.000% <= 0.807 milliseconds (cumulative count 51390) 110s 75.000% <= 0.911 milliseconds (cumulative count 75380) 110s 87.500% <= 0.999 milliseconds (cumulative count 88250) 110s 93.750% <= 1.079 milliseconds (cumulative count 93890) 110s 96.875% <= 1.191 milliseconds (cumulative count 96890) 110s 98.438% <= 1.351 milliseconds (cumulative count 98480) 110s 99.219% <= 1.543 milliseconds (cumulative count 99220) 110s 99.609% <= 1.895 milliseconds (cumulative count 99610) 110s 99.805% <= 2.727 milliseconds (cumulative count 99810) 110s 99.902% <= 3.391 milliseconds (cumulative count 99910) 110s 99.951% <= 3.743 milliseconds (cumulative count 99960) 110s 99.976% <= 3.839 milliseconds (cumulative count 99980) 110s 99.988% <= 3.903 milliseconds (cumulative count 99990) 110s 99.994% <= 3.943 milliseconds (cumulative count 100000) 110s 100.000% <= 3.943 milliseconds (cumulative count 100000) 110s 110s Cumulative distribution of latencies: 110s 0.000% <= 0.103 milliseconds (cumulative count 0) 110s 0.020% <= 0.303 milliseconds (cumulative count 20) 110s 0.070% <= 0.407 milliseconds (cumulative count 70) 110s 0.170% <= 0.503 milliseconds (cumulative count 170) 110s 5.570% <= 0.607 milliseconds (cumulative count 5570) 110s 23.920% <= 0.703 milliseconds (cumulative count 23920) 110s 51.390% <= 0.807 milliseconds (cumulative count 51390) 110s 73.700% <= 0.903 milliseconds (cumulative count 73700) 110s 89.110% <= 1.007 milliseconds (cumulative count 89110) 110s 94.840% <= 1.103 milliseconds (cumulative count 94840) 110s 97.130% <= 1.207 milliseconds (cumulative count 97130) 110s 98.120% <= 1.303 milliseconds (cumulative count 98120) 110s 98.790% <= 1.407 milliseconds (cumulative count 98790) 110s 99.090% <= 1.503 milliseconds (cumulative count 99090) 110s 99.330% <= 1.607 milliseconds (cumulative count 99330) 110s 99.450% <= 1.703 milliseconds (cumulative count 99450) 110s 99.540% <= 1.807 milliseconds (cumulative count 99540) 110s 99.610% <= 1.903 milliseconds (cumulative count 99610) 110s 99.720% <= 2.007 milliseconds (cumulative count 99720) 110s 99.780% <= 2.103 milliseconds (cumulative count 99780) 110s 99.880% <= 3.103 milliseconds (cumulative count 99880) 110s 100.000% <= 4.103 milliseconds (cumulative count 100000) 110s 110s Summary: 110s throughput summary: 380228.12 requests per second 110s latency summary (msec): 110s avg min p50 p95 p99 max 110s 0.828 
0.248 0.807 1.111 1.471 3.943 111s LPUSH (needed to benchmark LRANGE): rps=36440.0 (overall: 314137.9) avg_msec=1.354 (overall: 1.354) LPUSH (needed to benchmark LRANGE): rps=335298.8 (overall: 333107.2) avg_msec=1.301 (overall: 1.306) ====== LPUSH (needed to benchmark LRANGE) ====== 111s 100000 requests completed in 0.30 seconds 111s 50 parallel clients 111s 3 bytes payload 111s keep alive: 1 111s host configuration "save": 3600 1 300 100 60 10000 111s host configuration "appendonly": no 111s multi-thread: no 111s 111s Latency by percentile distribution: 111s 0.000% <= 0.575 milliseconds (cumulative count 20) 111s 50.000% <= 1.247 milliseconds (cumulative count 51150) 111s 75.000% <= 1.487 milliseconds (cumulative count 75100) 111s 87.500% <= 1.663 milliseconds (cumulative count 88160) 111s 93.750% <= 1.759 milliseconds (cumulative count 93890) 111s 96.875% <= 1.871 milliseconds (cumulative count 96950) 111s 98.438% <= 1.983 milliseconds (cumulative count 98440) 111s 99.219% <= 2.103 milliseconds (cumulative count 99240) 111s 99.609% <= 2.215 milliseconds (cumulative count 99610) 111s 99.805% <= 2.631 milliseconds (cumulative count 99810) 111s 99.902% <= 3.527 milliseconds (cumulative count 99910) 111s 99.951% <= 3.855 milliseconds (cumulative count 99960) 111s 99.976% <= 3.927 milliseconds (cumulative count 99980) 111s 99.988% <= 3.975 milliseconds (cumulative count 99990) 111s 99.994% <= 4.023 milliseconds (cumulative count 100000) 111s 100.000% <= 4.023 milliseconds (cumulative count 100000) 111s 111s Cumulative distribution of latencies: 111s 0.000% <= 0.103 milliseconds (cumulative count 0) 111s 0.050% <= 0.607 milliseconds (cumulative count 50) 111s 0.580% <= 0.703 milliseconds (cumulative count 580) 111s 1.600% <= 0.807 milliseconds (cumulative count 1600) 111s 3.680% <= 0.903 milliseconds (cumulative count 3680) 111s 10.020% <= 1.007 milliseconds (cumulative count 10020) 111s 23.900% <= 1.103 milliseconds (cumulative count 23900) 111s 44.310% <= 1.207 milliseconds (cumulative count 44310) 111s 58.230% <= 1.303 milliseconds (cumulative count 58230) 111s 68.410% <= 1.407 milliseconds (cumulative count 68410) 111s 76.230% <= 1.503 milliseconds (cumulative count 76230) 111s 84.000% <= 1.607 milliseconds (cumulative count 84000) 111s 90.700% <= 1.703 milliseconds (cumulative count 90700) 111s 95.620% <= 1.807 milliseconds (cumulative count 95620) 111s 97.520% <= 1.903 milliseconds (cumulative count 97520) 111s 98.630% <= 2.007 milliseconds (cumulative count 98630) 111s 99.240% <= 2.103 milliseconds (cumulative count 99240) 111s 99.890% <= 3.103 milliseconds (cumulative count 99890) 111s 100.000% <= 4.103 milliseconds (cumulative count 100000) 111s 111s Summary: 111s throughput summary: 333333.31 requests per second 111s latency summary (msec): 111s avg min p50 p95 p99 max 111s 1.306 0.568 1.247 1.791 2.063 4.023 112s LRANGE_100 (first 100 elements): rps=99123.5 (overall: 108646.3) avg_msec=2.949 (overall: 2.949) LRANGE_100 (first 100 elements): rps=112031.9 (overall: 110416.7) avg_msec=2.877 (overall: 2.911) LRANGE_100 (first 100 elements): rps=110674.6 (overall: 110505.5) avg_msec=2.932 (overall: 2.918) ====== LRANGE_100 (first 100 elements) ====== 112s 100000 requests completed in 0.90 seconds 112s 50 parallel clients 112s 3 bytes payload 112s keep alive: 1 112s host configuration "save": 3600 1 300 100 60 10000 112s host configuration "appendonly": no 112s multi-thread: no 112s 112s Latency by percentile distribution: 112s 0.000% <= 1.391 milliseconds (cumulative count 10) 112s 
115s LRANGE_300 (first 300 elements): rps=32206.3 (overall: 31961.5) avg_msec=7.606 (overall: 7.931)
115s ====== LRANGE_300 (first 300 elements) ======
115s 100000 requests completed in 3.13 seconds
115s 50 parallel clients
115s 3 bytes payload
115s keep alive: 1
115s host configuration "save": 3600 1 300 100 60 10000
115s host configuration "appendonly": no
115s multi-thread: no
115s
115s Latency by percentile distribution:
115s 0.000% <= 1.007 milliseconds (cumulative count 10)
115s 50.000% <= 7.551 milliseconds (cumulative count 50040)
115s 75.000% <= 9.047 milliseconds (cumulative count 75040)
115s 87.500% <= 10.599 milliseconds (cumulative count 87500)
115s 93.750% <= 11.991 milliseconds (cumulative count 93780)
115s 96.875% <= 13.191 milliseconds (cumulative count 96890)
115s 98.438% <= 14.583 milliseconds (cumulative count 98450)
115s 99.219% <= 15.607 milliseconds (cumulative count 99220)
115s 99.609% <= 16.655 milliseconds (cumulative count 99610)
115s 99.805% <= 17.439 milliseconds (cumulative count 99810)
115s 99.902% <= 18.031 milliseconds (cumulative count 99910)
115s 99.951% <= 18.447 milliseconds (cumulative count 99960)
115s 99.976% <= 18.639 milliseconds (cumulative count 99980)
115s 99.988% <= 18.719 milliseconds (cumulative count 99990)
115s 99.994% <= 18.943 milliseconds (cumulative count 100000)
115s 100.000% <= 18.943 milliseconds (cumulative count 100000)
115s
115s Cumulative distribution of latencies:
115s 0.000% <= 0.103 milliseconds (cumulative count 0)
115s 0.010% <= 1.007 milliseconds (cumulative count 10)
115s 0.020% <= 1.407 milliseconds (cumulative count 20)
115s 0.040% <= 1.903 milliseconds (cumulative count 40)
115s 0.420% <= 3.103 milliseconds (cumulative count 420)
115s 2.340% <= 4.103 milliseconds (cumulative count 2340)
115s 7.760% <= 5.103 milliseconds (cumulative count 7760)
115s 19.380% <= 6.103 milliseconds (cumulative count 19380)
115s 39.940% <= 7.103 milliseconds (cumulative count 39940)
115s 60.790% <= 8.103 milliseconds (cumulative count 60790)
115s 75.760% <= 9.103 milliseconds (cumulative count 75760)
115s 84.550% <= 10.103 milliseconds (cumulative count 84550)
115s 90.140% <= 11.103 milliseconds (cumulative count 90140)
115s 94.150% <= 12.103 milliseconds (cumulative count 94150)
115s 96.740% <= 13.103 milliseconds (cumulative count 96740)
115s 97.970% <= 14.103 milliseconds (cumulative count 97970)
115s 98.890% <= 15.103 milliseconds (cumulative count 98890)
115s 99.440% <= 16.103 milliseconds (cumulative count 99440)
115s 99.740% <= 17.103 milliseconds (cumulative count 99740)
115s 99.920% <= 18.111 milliseconds (cumulative count 99920)
115s 100.000% <= 19.103 milliseconds (cumulative count 100000)
115s
115s Summary:
115s throughput summary: 31969.31 requests per second
115s latency summary (msec):
115s avg min p50 p95 p99 max
115s 7.927 1.000 7.551 12.367 15.223 18.943
120s LRANGE_500 (first 500 elements): rps=18231.4 (overall: 17901.2) avg_msec=10.907 (overall: 11.331)
120s ====== LRANGE_500 (first 500 elements) ======
120s 100000 requests completed in 5.59 seconds
120s 50 parallel clients
120s 3 bytes payload
120s keep alive: 1
120s host configuration "save": 3600 1 300 100 60 10000
120s host configuration "appendonly": no
120s multi-thread: no
120s
120s Latency by percentile distribution:
120s 0.000% <= 1.535 milliseconds (cumulative count 10)
120s 50.000% <= 10.975 milliseconds (cumulative count 50090)
120s 75.000% <= 12.983 milliseconds (cumulative count 75000)
120s 87.500% <= 14.335 milliseconds (cumulative count 87560)
120s 93.750% <= 15.703 milliseconds (cumulative count 93770)
120s 96.875% <= 17.999 milliseconds (cumulative count 96880)
120s 98.438% <= 22.655 milliseconds (cumulative count 98450)
120s 99.219% <= 26.703 milliseconds (cumulative count 99220)
120s 99.609% <= 29.551 milliseconds (cumulative count 99610)
120s 99.805% <= 31.135 milliseconds (cumulative count 99810)
120s 99.902% <= 33.823 milliseconds (cumulative count 99910)
120s 99.951% <= 34.783 milliseconds (cumulative count 99960)
120s 99.976% <= 35.167 milliseconds (cumulative count 99980)
120s 99.988% <= 35.199 milliseconds (cumulative count 99990)
120s 99.994% <= 35.359 milliseconds (cumulative count 100000)
120s 100.000% <= 35.359 milliseconds (cumulative count 100000)
120s
120s Cumulative distribution of latencies:
120s 0.000% <= 0.103 milliseconds (cumulative count 0)
120s 0.010% <= 1.607 milliseconds (cumulative count 10)
120s 0.020% <= 2.007 milliseconds (cumulative count 20)
120s 0.040% <= 3.103 milliseconds (cumulative count 40)
120s 0.140% <= 4.103 milliseconds (cumulative count 140)
120s 0.580% <= 5.103 milliseconds (cumulative count 580)
120s 1.960% <= 6.103 milliseconds (cumulative count 1960)
120s 4.770% <= 7.103 milliseconds (cumulative count 4770)
120s 12.320% <= 8.103 milliseconds (cumulative count 12320)
120s 25.820% <= 9.103 milliseconds (cumulative count 25820)
120s 39.390% <= 10.103 milliseconds (cumulative count 39390)
120s 51.580% <= 11.103 milliseconds (cumulative count 51580)
120s 64.410% <= 12.103 milliseconds (cumulative count 64410)
120s 76.230% <= 13.103 milliseconds (cumulative count 76230)
120s 85.720% <= 14.103 milliseconds (cumulative count 85720)
120s 92.050% <= 15.103 milliseconds (cumulative count 92050)
120s 94.560% <= 16.103 milliseconds (cumulative count 94560)
120s 95.840% <= 17.103 milliseconds (cumulative count 95840)
120s 96.950% <= 18.111 milliseconds (cumulative count 96950)
120s 97.590% <= 19.103 milliseconds (cumulative count 97590)
120s 98.010% <= 20.111 milliseconds (cumulative count 98010)
120s 98.240% <= 21.103 milliseconds (cumulative count 98240)
120s 98.340% <= 22.111 milliseconds (cumulative count 98340)
120s 98.540% <= 23.103 milliseconds (cumulative count 98540)
120s 98.770% <= 24.111 milliseconds (cumulative count 98770)
120s 98.940% <= 25.103 milliseconds (cumulative count 98940)
120s 99.090% <= 26.111 milliseconds (cumulative count 99090)
120s 99.270% <= 27.103 milliseconds (cumulative count 99270)
120s 99.380% <= 28.111 milliseconds (cumulative count 99380)
120s 99.540% <= 29.103 milliseconds (cumulative count 99540)
120s 99.670% <= 30.111 milliseconds (cumulative count 99670)
120s 99.790% <= 31.103 milliseconds (cumulative count 99790)
120s 99.870% <= 32.111 milliseconds (cumulative count 99870)
120s 99.880% <= 33.119 milliseconds (cumulative count 99880)
120s 99.920% <= 34.111 milliseconds (cumulative count 99920)
120s 99.970% <= 35.103 milliseconds (cumulative count 99970)
120s 100.000% <= 36.127 milliseconds (cumulative count 100000)
120s
120s Summary:
120s throughput summary: 17905.10 requests per second
120s latency summary (msec):
120s avg min p50 p95 p99 max
120s 11.327 1.528 10.975 16.415 25.567 35.359
128s LRANGE_600 (first 600 elements): rps=14083.7 (overall: 14174.0) avg_msec=16.236 (overall: 15.363)
128s ====== LRANGE_600 (first 600 elements) ======
128s 100000 requests completed in 7.06 seconds
128s 50 parallel clients
128s 3 bytes payload
128s keep alive: 1
128s host configuration "save": 3600 1 300 100 60 10000
128s host configuration "appendonly": no
128s multi-thread: no
128s
128s Latency by percentile distribution:
128s 0.000% <= 0.951 milliseconds (cumulative count 10)
128s 50.000% <= 14.255 milliseconds (cumulative count 50030)
128s 75.000% <= 19.999 milliseconds (cumulative count 75050)
128s 87.500% <= 23.119 milliseconds (cumulative count 87520)
128s 93.750% <= 25.279 milliseconds (cumulative count 93750)
128s 96.875% <= 27.935 milliseconds (cumulative count 96880)
128s 98.438% <= 30.143 milliseconds (cumulative count 98450)
128s 99.219% <= 31.631 milliseconds (cumulative count 99220)
128s 99.609% <= 32.687 milliseconds (cumulative count 99610)
128s 99.805% <= 33.791 milliseconds (cumulative count 99810)
128s 99.902% <= 34.303 milliseconds (cumulative count 99910)
128s 99.951% <= 34.879 milliseconds (cumulative count 99960)
128s 99.976% <= 35.263 milliseconds (cumulative count 99980)
128s 99.988% <= 35.455 milliseconds (cumulative count 99990)
128s 99.994% <= 35.679 milliseconds (cumulative count 100000)
128s 100.000% <= 35.679 milliseconds (cumulative count 100000)
128s
128s Cumulative distribution of latencies:
128s 0.000% <= 0.103 milliseconds (cumulative count 0)
128s 0.010% <= 1.007 milliseconds (cumulative count 10)
128s 0.040% <= 1.903 milliseconds (cumulative count 40)
128s 0.050% <= 2.103 milliseconds (cumulative count 50)
128s 0.570% <= 3.103 milliseconds (cumulative count 570)
128s 1.640% <= 4.103 milliseconds (cumulative count 1640)
128s 2.410% <= 5.103 milliseconds (cumulative count 2410)
128s 3.720% <= 6.103 milliseconds (cumulative count 3720)
128s 6.190% <= 7.103 milliseconds (cumulative count 6190)
128s 9.480% <= 8.103 milliseconds (cumulative count 9480)
128s 14.660% <= 9.103 milliseconds (cumulative count 14660)
128s 20.750% <= 10.103 milliseconds (cumulative count 20750)
128s 27.960% <= 11.103 milliseconds (cumulative count 27960)
128s 35.790% <= 12.103 milliseconds (cumulative count 35790)
128s 42.780% <= 13.103 milliseconds (cumulative count 42780)
128s 49.040% <= 14.103 milliseconds (cumulative count 49040)
128s 55.000% <= 15.103 milliseconds (cumulative count 55000)
128s 59.900% <= 16.103 milliseconds (cumulative count 59900)
128s 63.870% <= 17.103 milliseconds (cumulative count 63870)
128s 67.670% <= 18.111 milliseconds (cumulative count 67670)
128s 71.550% <= 19.103 milliseconds (cumulative count 71550)
128s 75.600% <= 20.111 milliseconds (cumulative count 75600)
128s 79.390% <= 21.103 milliseconds (cumulative count 79390)
128s 83.320% <= 22.111 milliseconds (cumulative count 83320)
128s 87.430% <= 23.103 milliseconds (cumulative count 87430)
128s 90.950% <= 24.111 milliseconds (cumulative count 90950)
128s 93.380% <= 25.103 milliseconds (cumulative count 93380)
128s 95.000% <= 26.111 milliseconds (cumulative count 95000)
128s 96.080% <= 27.103 milliseconds (cumulative count 96080)
128s 96.980% <= 28.111 milliseconds (cumulative count 96980)
128s 97.840% <= 29.103 milliseconds (cumulative count 97840)
128s 98.400% <= 30.111 milliseconds (cumulative count 98400)
128s 98.980% <= 31.103 milliseconds (cumulative count 98980)
128s 99.420% <= 32.111 milliseconds (cumulative count 99420)
128s 99.700% <= 33.119 milliseconds (cumulative count 99700)
128s 99.870% <= 34.111 milliseconds (cumulative count 99870)
128s 99.970% <= 35.103 milliseconds (cumulative count 99970)
128s 100.000% <= 36.127 milliseconds (cumulative count 100000)
128s
128s Summary:
128s throughput summary: 14170.33 requests per second
128s latency summary (msec):
128s avg min p50 p95 p99 max
128s 15.351 0.944 14.255 26.111 31.119 35.679
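Across the four LRANGE sizes the request rate falls roughly in proportion to the reply size, so element throughput only varies by a factor of about 1.3. Back-of-envelope, from the summaries above:

    # elements per second = requests/s x elements per reply
    echo $((110742 * 100))   # LRANGE_100: ~11.1M elements/s
    echo $((31969 * 300))    # LRANGE_300: ~9.6M elements/s
    echo $((17905 * 500))    # LRANGE_500: ~9.0M elements/s
    echo $((14170 * 600))    # LRANGE_600: ~8.5M elements/s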
128s MSET (10 keys): rps=154621.5 (overall: 153236.5) avg_msec=2.943 (overall: 2.971)
128s ====== MSET (10 keys) ======
128s 100000 requests completed in 0.65 seconds
128s 50 parallel clients
128s 3 bytes payload
128s keep alive: 1
128s host configuration "save": 3600 1 300 100 60 10000
128s host configuration "appendonly": no
128s multi-thread: no
128s
128s Latency by percentile distribution:
128s 0.000% <= 1.023 milliseconds (cumulative count 10)
128s 50.000% <= 3.031 milliseconds (cumulative count 50680)
128s 75.000% <= 3.335 milliseconds (cumulative count 75550)
128s 87.500% <= 3.519 milliseconds (cumulative count 87550)
128s 93.750% <= 3.647 milliseconds (cumulative count 94030)
128s 96.875% <= 3.743 milliseconds (cumulative count 97030)
128s 98.438% <= 3.823 milliseconds (cumulative count 98460)
128s 99.219% <= 3.911 milliseconds (cumulative count 99250)
128s 99.609% <= 3.991 milliseconds (cumulative count 99610)
128s 99.805% <= 4.095 milliseconds (cumulative count 99810)
128s 99.902% <= 4.167 milliseconds (cumulative count 99910)
128s 99.951% <= 4.247 milliseconds (cumulative count 99960)
128s 99.976% <= 4.367 milliseconds (cumulative count 99980)
128s 99.988% <= 4.399 milliseconds (cumulative count 99990)
128s 99.994% <= 4.487 milliseconds (cumulative count 100000)
128s 100.000% <= 4.487 milliseconds (cumulative count 100000)
128s
128s Cumulative distribution of latencies:
128s 0.000% <= 0.103 milliseconds (cumulative count 0)
128s 0.010% <= 1.103 milliseconds (cumulative count 10)
128s 0.050% <= 1.207 milliseconds (cumulative count 50)
128s 0.070% <= 1.303 milliseconds (cumulative count 70)
128s 0.090% <= 1.407 milliseconds (cumulative count 90)
128s 0.130% <= 1.607 milliseconds (cumulative count 130)
128s 0.200% <= 1.703 milliseconds (cumulative count 200)
128s 0.290% <= 1.807 milliseconds (cumulative count 290)
128s 0.740% <= 1.903 milliseconds (cumulative count 740)
128s 2.680% <= 2.007 milliseconds (cumulative count 2680)
128s 6.580% <= 2.103 milliseconds (cumulative count 6580)
128s 56.310% <= 3.103 milliseconds (cumulative count 56310)
128s 99.830% <= 4.103 milliseconds (cumulative count 99830)
128s 100.000% <= 5.103 milliseconds (cumulative count 100000)
128s
128s Summary:
128s throughput summary: 153609.83 requests per second
128s latency summary (msec):
128s avg min p50 p95 p99 max
128s 2.966 1.016 3.031 3.679 3.879 4.487
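Each MSET request in this test writes 10 keys, so the 153609.83 req/s summary corresponds to roughly 1.54 million key writes per second. For illustration, a smaller batched write from the CLI (key and value names are made up):

    valkey-cli MSET k1 v1 k2 v2 k3 v3   # three keys set in a single round trip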
129s XADD: rps=271832.7 (overall: 268176.3) avg_msec=1.621 (overall: 1.643)
129s ====== XADD ======
129s 100000 requests completed in 0.37 seconds
129s 50 parallel clients
129s 3 bytes payload
129s keep alive: 1
129s host configuration "save": 3600 1 300 100 60 10000
129s host configuration "appendonly": no
129s multi-thread: no
129s
129s Latency by percentile distribution:
129s 0.000% <= 0.663 milliseconds (cumulative count 10)
129s 50.000% <= 1.599 milliseconds (cumulative count 50280)
129s 75.000% <= 1.879 milliseconds (cumulative count 75410)
129s 87.500% <= 2.055 milliseconds (cumulative count 87830)
129s 93.750% <= 2.167 milliseconds (cumulative count 93850)
129s 96.875% <= 2.263 milliseconds (cumulative count 96950)
129s 98.438% <= 2.375 milliseconds (cumulative count 98460)
129s 99.219% <= 2.495 milliseconds (cumulative count 99220)
129s 99.609% <= 2.703 milliseconds (cumulative count 99610)
129s 99.805% <= 2.935 milliseconds (cumulative count 99810)
129s 99.902% <= 3.159 milliseconds (cumulative count 99910)
129s 99.951% <= 3.263 milliseconds (cumulative count 99960)
129s 99.976% <= 3.319 milliseconds (cumulative count 99980)
129s 99.988% <= 3.359 milliseconds (cumulative count 99990)
129s 99.994% <= 3.447 milliseconds (cumulative count 100000)
129s 100.000% <= 3.447 milliseconds (cumulative count 100000)
129s
129s Cumulative distribution of latencies:
129s 0.000% <= 0.103 milliseconds (cumulative count 0)
129s 0.030% <= 0.703 milliseconds (cumulative count 30)
129s 0.240% <= 0.807 milliseconds (cumulative count 240)
129s 0.570% <= 0.903 milliseconds (cumulative count 570)
129s 1.200% <= 1.007 milliseconds (cumulative count 1200)
129s 2.160% <= 1.103 milliseconds (cumulative count 2160)
129s 5.450% <= 1.207 milliseconds (cumulative count 5450)
129s 12.660% <= 1.303 milliseconds (cumulative count 12660)
129s 25.730% <= 1.407 milliseconds (cumulative count 25730)
129s 39.940% <= 1.503 milliseconds (cumulative count 39940)
129s 51.230% <= 1.607 milliseconds (cumulative count 51230)
129s 60.710% <= 1.703 milliseconds (cumulative count 60710)
129s 69.800% <= 1.807 milliseconds (cumulative count 69800)
129s 77.050% <= 1.903 milliseconds (cumulative count 77050)
129s 84.450% <= 2.007 milliseconds (cumulative count 84450)
129s 90.660% <= 2.103 milliseconds (cumulative count 90660)
129s 99.890% <= 3.103 milliseconds (cumulative count 99890)
129s 100.000% <= 4.103 milliseconds (cumulative count 100000)
129s
129s Summary:
129s throughput summary: 267379.66 requests per second
129s latency summary (msec):
129s avg min p50 p95 p99 max
129s 1.645 0.656 1.599 2.199 2.447 3.447
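XADD appends an entry to a stream. For illustration, the equivalent single command from the CLI (stream and field names are made up; '*' lets the server assign the entry ID):

    valkey-cli XADD mystream '*' sensor 42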
133s FUNCTION LOAD: rps=24600.0 (overall: 24657.1) avg_msec=19.911 (overall: 19.869)
133s ====== FUNCTION LOAD ======
133s 100000 requests completed in 4.05 seconds
133s 50 parallel clients
133s 3 bytes payload
133s keep alive: 1
133s host configuration "save": 3600 1 300 100 60 10000
133s host configuration "appendonly": no
133s multi-thread: no
133s
133s Latency by percentile distribution:
133s 0.000% <= 1.359 milliseconds (cumulative count 10)
133s 50.000% <= 21.167 milliseconds (cumulative count 50000)
133s 75.000% <= 21.823 milliseconds (cumulative count 75460)
133s 87.500% <= 22.207 milliseconds (cumulative count 87880)
133s 93.750% <= 22.527 milliseconds (cumulative count 93990)
133s 96.875% <= 22.799 milliseconds (cumulative count 96900)
133s 98.438% <= 23.071 milliseconds (cumulative count 98480)
133s 99.219% <= 23.391 milliseconds (cumulative count 99240)
133s 99.609% <= 23.775 milliseconds (cumulative count 99610)
133s 99.805% <= 24.271 milliseconds (cumulative count 99810)
133s 99.902% <= 24.559 milliseconds (cumulative count 99910)
133s 99.951% <= 24.831 milliseconds (cumulative count 99960)
133s 99.976% <= 25.087 milliseconds (cumulative count 99980)
133s 99.988% <= 25.215 milliseconds (cumulative count 99990)
133s 99.994% <= 25.567 milliseconds (cumulative count 100000)
133s 100.000% <= 25.567 milliseconds (cumulative count 100000)
133s
133s Cumulative distribution of latencies:
133s 0.000% <= 0.103 milliseconds (cumulative count 0)
133s 0.010% <= 1.407 milliseconds (cumulative count 10)
133s 0.040% <= 9.103 milliseconds (cumulative count 40)
133s 0.840% <= 10.103 milliseconds (cumulative count 840)
133s 4.160% <= 11.103 milliseconds (cumulative count 4160)
133s 10.860% <= 12.103 milliseconds (cumulative count 10860)
133s 13.820% <= 13.103 milliseconds (cumulative count 13820)
133s 14.520% <= 14.103 milliseconds (cumulative count 14520)
133s 14.560% <= 15.103 milliseconds (cumulative count 14560)
133s 14.620% <= 16.103 milliseconds (cumulative count 14620)
133s 15.580% <= 19.103 milliseconds (cumulative count 15580)
133s 24.720% <= 20.111 milliseconds (cumulative count 24720)
133s 47.840% <= 21.103 milliseconds (cumulative count 47840)
133s 85.030% <= 22.111 milliseconds (cumulative count 85030)
133s 98.590% <= 23.103 milliseconds (cumulative count 98590)
133s 99.760% <= 24.111 milliseconds (cumulative count 99760)
133s 99.980% <= 25.103 milliseconds (cumulative count 99980)
133s 100.000% <= 26.111 milliseconds (cumulative count 100000)
133s
133s Summary:
133s throughput summary: 24715.77 requests per second
133s latency summary (msec):
133s avg min p50 p95 p99 max
133s 19.865 1.352 21.167 22.607 23.247 25.567
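FUNCTION LOAD is the slowest command tested here (~19.9 ms average), which is expected: it ships Lua library source to the server to be compiled and registered, rather than touching data. A minimal library of the shape FUNCTION LOAD accepts, assuming Valkey keeps the Redis-7-compatible functions API (library and function names are made up):

    valkey-cli FUNCTION LOAD "#!lua name=mylib
    redis.register_function('myfunc', function(keys, args) return 1 end)"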
133s FCALL: rps=166960.0 (overall: 252969.7) avg_msec=1.745 (overall: 1.745)
133s ====== FCALL ======
133s 100000 requests completed in 0.39 seconds
133s 50 parallel clients
133s 3 bytes payload
133s keep alive: 1
133s host configuration "save": 3600 1 300 100 60 10000
133s host configuration "appendonly": no
133s multi-thread: no
133s
133s Latency by percentile distribution:
133s 0.000% <= 0.631 milliseconds (cumulative count 10)
133s 50.000% <= 1.655 milliseconds (cumulative count 50570)
133s 75.000% <= 1.951 milliseconds (cumulative count 75080)
133s 87.500% <= 2.143 milliseconds (cumulative count 87890)
133s 93.750% <= 2.263 milliseconds (cumulative count 93880)
133s 96.875% <= 2.383 milliseconds (cumulative count 96920)
133s 98.438% <= 2.527 milliseconds (cumulative count 98470)
133s 99.219% <= 2.647 milliseconds (cumulative count 99220)
133s 99.609% <= 2.839 milliseconds (cumulative count 99620)
133s 99.805% <= 2.967 milliseconds (cumulative count 99810)
133s 99.902% <= 3.079 milliseconds (cumulative count 99910)
133s 99.951% <= 3.151 milliseconds (cumulative count 99960)
133s 99.976% <= 3.223 milliseconds (cumulative count 99980)
133s 99.988% <= 3.391 milliseconds (cumulative count 99990)
133s 99.994% <= 3.423 milliseconds (cumulative count 100000)
133s 100.000% <= 3.423 milliseconds (cumulative count 100000)
133s
133s Cumulative distribution of latencies:
133s 0.000% <= 0.103 milliseconds (cumulative count 0)
133s 0.070% <= 0.703 milliseconds (cumulative count 70)
133s 0.330% <= 0.807 milliseconds (cumulative count 330)
133s 0.610% <= 0.903 milliseconds (cumulative count 610)
133s 1.280% <= 1.007 milliseconds (cumulative count 1280)
133s 2.260% <= 1.103 milliseconds (cumulative count 2260)
133s 4.200% <= 1.207 milliseconds (cumulative count 4200)
133s 9.010% <= 1.303 milliseconds (cumulative count 9010)
133s 20.330% <= 1.407 milliseconds (cumulative count 20330)
133s 33.330% <= 1.503 milliseconds (cumulative count 33330)
133s 45.510% <= 1.607 milliseconds (cumulative count 45510)
133s 55.140% <= 1.703 milliseconds (cumulative count 55140)
133s 64.300% <= 1.807 milliseconds (cumulative count 64300)
133s 71.690% <= 1.903 milliseconds (cumulative count 71690)
133s 78.960% <= 2.007 milliseconds (cumulative count 78960)
133s 85.330% <= 2.103 milliseconds (cumulative count 85330)
133s 99.920% <= 3.103 milliseconds (cumulative count 99920)
133s 100.000% <= 4.103 milliseconds (cumulative count 100000)
133s
133s Summary:
133s throughput summary: 258397.94 requests per second
133s latency summary (msec):
133s avg min p50 p95 p99 max
133s 1.704 0.624 1.655 2.295 2.599 3.423
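FCALL, by contrast, only invokes an already-registered function, and at 258397.94 req/s it lands in the same range as plain writes such as XADD. Continuing the hypothetical library above:

    valkey-cli FCALL myfunc 0   # 0 = number of key arguments passed to the function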
133s
133s autopkgtest [08:23:05]: test 0002-benchmark: -----------------------]
134s autopkgtest [08:23:06]: test 0002-benchmark: - - - - - - - - - - results - - - - - - - - - -
134s 0002-benchmark PASS
134s autopkgtest [08:23:06]: test 0003-valkey-check-aof: preparing testbed
134s Reading package lists...
135s Building dependency tree...
135s Reading state information...
135s Solving dependencies...
136s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
137s autopkgtest [08:23:09]: test 0003-valkey-check-aof: [-----------------------
137s autopkgtest [08:23:09]: test 0003-valkey-check-aof: -----------------------]
138s autopkgtest [08:23:10]: test 0003-valkey-check-aof: - - - - - - - - - - results - - - - - - - - - -
138s 0003-valkey-check-aof PASS
138s autopkgtest [08:23:10]: test 0004-valkey-check-rdb: preparing testbed
139s Reading package lists...
139s Building dependency tree...
139s Reading state information...
139s Solving dependencies...
140s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
141s autopkgtest [08:23:13]: test 0004-valkey-check-rdb: [-----------------------
147s OK
147s [offset 0] Checking RDB file /var/lib/valkey/dump.rdb
147s [offset 27] AUX FIELD valkey-ver = '8.1.1'
147s [offset 41] AUX FIELD redis-bits = '64'
147s [offset 53] AUX FIELD ctime = '1751271799'
147s [offset 68] AUX FIELD used-mem = '3029608'
147s [offset 80] AUX FIELD aof-base = '0'
147s [offset 191] Selecting DB ID 0
147s [offset 566450] Checksum OK
147s [offset 566450] \o/ RDB looks OK! \o/
147s [info] 5 keys read
147s [info] 0 expires
147s [info] 0 already expired
147s autopkgtest [08:23:19]: test 0004-valkey-check-rdb: -----------------------]
148s 0004-valkey-check-rdb PASS
148s autopkgtest [08:23:20]: test 0004-valkey-check-rdb: - - - - - - - - - - results - - - - - - - - - -
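The same integrity check can be run by hand against any dump file; the path below is the one reported by the test:

    valkey-check-rdb /var/lib/valkey/dump.rdb

valkey-check-aof, exercised by test 0003 above, plays the same role for append-only files.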
148s autopkgtest [08:23:20]: test 0005-cjson: preparing testbed
148s Reading package lists...
149s Building dependency tree...
149s Reading state information...
149s Solving dependencies...
150s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
151s autopkgtest [08:23:23]: test 0005-cjson: [-----------------------
156s
157s autopkgtest [08:23:29]: test 0005-cjson: -----------------------]
157s 0005-cjson PASS
157s autopkgtest [08:23:29]: test 0005-cjson: - - - - - - - - - - results - - - - - - - - - -
158s autopkgtest [08:23:30]: test 0006-migrate-from-redis: preparing testbed
183s Creating nova instance adt-questing-arm64-valkey-20250630-082052-juju-7f2275-prod-proposed-migration-environment-23-63f33cec-76c5-4d9e-99eb-2249834f65cf from image adt/ubuntu-questing-arm64-server-20250630.img (UUID ae295103-813a-4e52-a06a-9453e78f97db)...
257s autopkgtest [08:25:09]: testbed dpkg architecture: arm64
257s autopkgtest [08:25:09]: testbed apt version: 3.1.2
257s autopkgtest [08:25:09]: @@@@@@@@@@@@@@@@@@@@ test bed setup
257s autopkgtest [08:25:09]: testbed release detected to be: questing
258s autopkgtest [08:25:10]: updating testbed package index (apt update)
259s Get:1 http://ftpmaster.internal/ubuntu questing-proposed InRelease [249 kB]
259s Hit:2 http://ftpmaster.internal/ubuntu questing InRelease
259s Hit:3 http://ftpmaster.internal/ubuntu questing-updates InRelease
259s Hit:4 http://ftpmaster.internal/ubuntu questing-security InRelease
259s Get:5 http://ftpmaster.internal/ubuntu questing-proposed/multiverse Sources [17.5 kB]
259s Get:6 http://ftpmaster.internal/ubuntu questing-proposed/universe Sources [429 kB]
259s Get:7 http://ftpmaster.internal/ubuntu questing-proposed/main Sources [26.6 kB]
259s Get:8 http://ftpmaster.internal/ubuntu questing-proposed/main arm64 Packages [26.7 kB]
259s Get:9 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 Packages [390 kB]
259s Get:10 http://ftpmaster.internal/ubuntu questing-proposed/multiverse arm64 Packages [16.5 kB]
260s Fetched 1156 kB in 1s (1196 kB/s)
260s Reading package lists...
261s autopkgtest [08:25:13]: upgrading testbed (apt dist-upgrade and autopurge)
261s Reading package lists...
262s Building dependency tree...
262s Reading state information...
263s Calculating upgrade...
264s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
264s Reading package lists...
265s Building dependency tree...
265s Reading state information...
265s Solving dependencies...
266s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
269s Reading package lists...
269s Building dependency tree...
269s Reading state information...
269s Solving dependencies...
270s The following NEW packages will be installed:
270s liblzf1 redis-sentinel redis-server redis-tools
270s 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
270s Need to get 1419 kB of archives.
270s After this operation, 7903 kB of additional disk space will be used.
270s Get:1 http://ftpmaster.internal/ubuntu questing/universe arm64 liblzf1 arm64 3.6-4 [7426 B]
270s Get:2 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 redis-tools arm64 5:8.0.0-2 [1346 kB]
270s Get:3 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 redis-sentinel arm64 5:8.0.0-2 [12.5 kB]
270s Get:4 http://ftpmaster.internal/ubuntu questing-proposed/universe arm64 redis-server arm64 5:8.0.0-2 [53.2 kB]
271s Fetched 1419 kB in 1s (2250 kB/s)
271s Selecting previously unselected package liblzf1:arm64.
271s (Reading database ... 127289 files and directories currently installed.)
271s Preparing to unpack .../liblzf1_3.6-4_arm64.deb ...
271s Unpacking liblzf1:arm64 (3.6-4) ...
272s Selecting previously unselected package redis-tools.
272s Preparing to unpack .../redis-tools_5%3a8.0.0-2_arm64.deb ...
272s Unpacking redis-tools (5:8.0.0-2) ...
272s Selecting previously unselected package redis-sentinel.
272s Preparing to unpack .../redis-sentinel_5%3a8.0.0-2_arm64.deb ...
272s Unpacking redis-sentinel (5:8.0.0-2) ...
272s Selecting previously unselected package redis-server.
272s Preparing to unpack .../redis-server_5%3a8.0.0-2_arm64.deb ...
272s Unpacking redis-server (5:8.0.0-2) ...
272s Setting up liblzf1:arm64 (3.6-4) ...
272s Setting up redis-tools (5:8.0.0-2) ...
272s Setting up redis-server (5:8.0.0-2) ...
273s Created symlink '/etc/systemd/system/redis.service' → '/usr/lib/systemd/system/redis-server.service'.
273s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-server.service' → '/usr/lib/systemd/system/redis-server.service'.
273s Setting up redis-sentinel (5:8.0.0-2) ...
274s Created symlink '/etc/systemd/system/sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'.
274s Created symlink '/etc/systemd/system/multi-user.target.wants/redis-sentinel.service' → '/usr/lib/systemd/system/redis-sentinel.service'.
274s Processing triggers for man-db (2.13.1-1) ...
275s Processing triggers for libc-bin (2.41-6ubuntu2) ...
286s autopkgtest [08:25:38]: test 0006-migrate-from-redis: [-----------------------
286s + FLAG_FILE=/etc/valkey/REDIS_MIGRATION
286s + sed -i 's#loglevel notice#loglevel debug#' /etc/redis/redis.conf
286s + systemctl restart redis-server
286s + redis-cli -h 127.0.0.1 -p 6379 SET test 1
286s OK
286s + redis-cli -h 127.0.0.1 -p 6379 GET test
286s 1
286s + redis-cli -h 127.0.0.1 -p 6379 SAVE
286s OK
286s + sha256sum /var/lib/redis/dump.rdb
286s 7efa3d2ae0a1a97f732448491c4c2f31d65f2b4a88cbec4da418ebbda4cb49ae /var/lib/redis/dump.rdb
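The recorded checksum gives a point of comparison for the post-migration dump, but byte-identical files should not be expected: RDB headers embed fields such as ctime and used-mem (visible in the 0004 output above), so any re-save changes the hash. The meaningful post-migration check is logical, for example:

    # after the switch to valkey, the marker key should survive:
    valkey-cli -h 127.0.0.1 -p 6379 GET test   # expected output: 1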
286s + apt-get install -y valkey-redis-compat
286s Reading package lists...
287s Building dependency tree...
287s Reading state information...
287s Solving dependencies...
287s The following additional packages will be installed:
287s valkey-server valkey-tools
287s Suggested packages:
287s ruby-redis
287s The following packages will be REMOVED:
287s redis-sentinel redis-server redis-tools
287s The following NEW packages will be installed:
287s valkey-redis-compat valkey-server valkey-tools
287s 0 upgraded, 3 newly installed, 3 to remove and 0 not upgraded.
287s Need to get 1345 kB of archives.
287s After this operation, 212 kB disk space will be freed.
287s Get:1 http://ftpmaster.internal/ubuntu questing/universe arm64 valkey-tools arm64 8.1.1+dfsg1-2ubuntu1 [1285 kB]
288s Get:2 http://ftpmaster.internal/ubuntu questing/universe arm64 valkey-server arm64 8.1.1+dfsg1-2ubuntu1 [51.7 kB]
288s Get:3 http://ftpmaster.internal/ubuntu questing/universe arm64 valkey-redis-compat all 8.1.1+dfsg1-2ubuntu1 [7794 B]
288s Fetched 1345 kB in 1s (2156 kB/s)
289s (Reading database ... 127340 files and directories currently installed.)
289s Removing redis-sentinel (5:8.0.0-2) ...
289s Removing redis-server (5:8.0.0-2) ...
290s Removing redis-tools (5:8.0.0-2) ...
290s Selecting previously unselected package valkey-tools.
290s (Reading database ... 127303 files and directories currently installed.)
290s Preparing to unpack .../valkey-tools_8.1.1+dfsg1-2ubuntu1_arm64.deb ...
290s Unpacking valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
290s Selecting previously unselected package valkey-server.
290s Preparing to unpack .../valkey-server_8.1.1+dfsg1-2ubuntu1_arm64.deb ...
290s Unpacking valkey-server (8.1.1+dfsg1-2ubuntu1) ...
290s Selecting previously unselected package valkey-redis-compat.
290s Preparing to unpack .../valkey-redis-compat_8.1.1+dfsg1-2ubuntu1_all.deb ...
290s Unpacking valkey-redis-compat (8.1.1+dfsg1-2ubuntu1) ...
290s Setting up valkey-tools (8.1.1+dfsg1-2ubuntu1) ...
290s Setting up valkey-server (8.1.1+dfsg1-2ubuntu1) ...
291s Created symlink '/etc/systemd/system/valkey.service' → '/usr/lib/systemd/system/valkey-server.service'.
291s Created symlink '/etc/systemd/system/multi-user.target.wants/valkey-server.service' → '/usr/lib/systemd/system/valkey-server.service'.
291s Setting up valkey-redis-compat (8.1.1+dfsg1-2ubuntu1) ...
291s dpkg-query: no packages found matching valkey-sentinel
291s [I] /etc/redis/redis.conf has been copied to /etc/valkey/valkey.conf. Please, review the content of valkey.conf, especially if you had modified redis.conf.
291s [I] /etc/redis/sentinel.conf has been copied to /etc/valkey/sentinel.conf. Please, review the content of sentinel.conf, especially if you had modified sentinel.conf.
291s [I] On-disk redis dumps moved from /var/lib/redis/ to /var/lib/valkey.
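According to the messages above, valkey-redis-compat carries the old configuration over as a copy, so anything set in redis.conf (including this test's loglevel debug edit) should land in valkey.conf. Reviewing the result, as the messages advise, can be done with a plain diff of the paths they name:

    diff -u /etc/redis/redis.conf /etc/valkey/valkey.conf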
291s Processing triggers for man-db (2.13.1-1) ...
292s + '[' -f /etc/valkey/REDIS_MIGRATION ']'
292s + sha256sum /var/lib/valkey/dump.rdb
292s b0ac8258144cf841098c36282957cbe5513dc074ace79fcd7f07655fccdcbf1f /var/lib/valkey/dump.rdb
292s + systemctl status valkey-server
292s + grep inactive
292s Active: inactive (dead) since Mon 2025-06-30 08:25:43 UTC; 600ms ago
292s + rm /etc/valkey/REDIS_MIGRATION
292s + systemctl start valkey-server
292s Job for valkey-server.service failed because the control process exited with error code.
292s See "systemctl status valkey-server.service" and "journalctl -xeu valkey-server.service" for details.
292s autopkgtest [08:25:44]: test 0006-migrate-from-redis: -----------------------]
293s 0006-migrate-from-redis FAIL non-zero exit status 1
293s autopkgtest [08:25:45]: test 0006-migrate-from-redis: - - - - - - - - - - results - - - - - - - - - -
293s autopkgtest [08:25:45]: @@@@@@@@@@@@@@@@@@@@ summary
293s 0001-valkey-cli PASS
293s 0002-benchmark PASS
293s 0003-valkey-check-aof PASS
293s 0004-valkey-check-rdb PASS
293s 0005-cjson PASS
293s 0006-migrate-from-redis FAIL non-zero exit status 1
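The failing step is the systemctl start valkey-server after the migration flag is removed; the log records the failure but not its cause. The commands the error message itself suggests are the natural starting point, and starting the server in the foreground against the migrated config (path taken from the compat messages above) will usually print the offending directive directly:

    systemctl status valkey-server.service
    journalctl -xeu valkey-server.service
    # foreground start; writes the startup error to the terminal:
    valkey-server /etc/valkey/valkey.conf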