Redis continuously consumes a large amount of memory and grows until killed by the OOM killer


Currently, my 8 GB RAM server is using 5.33 GB for Redis (plus roughly 1.6 GB for the rest of the server), so even right after a reboot I'm already at ~7 GB of RAM used (88%). Redis's memory usage keeps growing until it is eventually killed by Ubuntu's OOM killer, causing a flurry of errors in my Node application.

I've attached the Redis INFO output at the bottom of this post. I originally thought Redis might simply be holding a lot of keys, but I've read that 1 million keys take roughly 100 MB, and we have around 2 million (~200 MB, nowhere near 5 GB), so that alone can't be the problem.
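One way to test that assumption is to sample random keys and look at their serialized sizes. Below is a minimal sketch using redis-py against a local instance; the host/port/db and sample size are placeholders, and DEBUG OBJECT's serializedlength is the RDB-serialized size (smaller than the in-memory footprint), but it is usually enough to spot outliers.

    # Sample random keys and report their serialized sizes (redis-py sketch;
    # host/port/db and the sample size are illustrative placeholders).
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    samples = 1000
    total = 0
    count = 0
    biggest_key, biggest_size = None, 0

    for _ in range(samples):
        key = r.randomkey()                       # arbitrary key from the keyspace
        if key is None:                           # empty database
            break
        size = r.debug_object(key)["serializedlength"]
        total += size
        count += 1
        if size > biggest_size:
            biggest_key, biggest_size = key, size

    if count:
        print("average serialized size: %.0f bytes over %d keys" % (total / count, count))
        print("largest sampled key: %r (%d bytes)" % (biggest_key, biggest_size))

If a handful of very large values is skewing the total, a sample like this will usually surface them quickly.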

My questions are:

- Where is all this memory going? The keyspace doesn't seem to account for much of it at all.
- What can I do to stop it from taking up more memory?

Thank you!

# Server
redis_version:2.8.6
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:f73a208b84b18824
redis_mode:standalone
os:Linux 3.2.0-55-virtual x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.6.3
process_id:1286
run_id:6d3dai5341a549dfca63706c240c44086198317
tcp_port:6379
uptime_in_seconds:1390
uptime_in_days:0
hz:10
lru_clock:771223
config_file:/etc/redis/redis.conf

# Clients
connected_clients:198
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:72

# Memory
used_memory:5722230408
used_memory_human:5.33G
used_memory_rss:5826732032
used_memory_peak:5732485800
used_memory_peak_human:5.34G
used_memory_lua:33792
mem_fragmentation_ratio:1.02
mem_allocator:jemalloc-3.5.0

# Persistence
loading:0
rdb_changes_since_last_save:94
rdb_bgsave_in_progress:0
rdb_last_save_time:1412804004
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:40
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok

# Stats
total_connections_received:382
total_commands_processed:36936
instantaneous_ops_per_sec:0
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:2421
keyspace_misses:1
pubsub_channels:1
pubsub_patterns:9
latest_fork_usec:1361869

# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:15.95
used_cpu_user:101.34
used_cpu_sys_children:12.55
used_cpu_user_children:146.17

# Keyspace
db0:keys=2082234,expires=1162351,avg_ttl=306635722644
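A quick back-of-the-envelope check on the numbers above already narrows things down: dividing the reported memory by the key count gives the average footprint per key, which is far above the ~100 bytes implied by the "1 million keys ~ 100 MB" rule of thumb. Sketch in plain Python, treating used_memory_human's 5.33G as GiB:

    # Numbers taken from the INFO dump above.
    used_memory = 5.33 * 1024 ** 3       # bytes (used_memory_human, read as GiB)
    keys = 2082234                       # from db0:keys=...

    per_key = used_memory / keys
    print("average footprint per key: ~%.0f bytes (~%.1f KB)" % (per_key, per_key / 1024))
    # Roughly 2.7 KB per key on average, versus ~100 bytes per key assumed by the
    # rule of thumb, so the values must be much larger than expected.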

Thanks for the response Itamar. I was working under the false (and, in hindsight, not very well founded) assumption that the keys and values would almost all be about the same size.

Turns out that some values had piled up in there that were more than 10 KB each, and we had hundreds of thousands of them. Fixed it by removing those.
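For reference, a cleanup along those lines could look like the sketch below (redis-py). The 10 KB threshold, the dry-run flag, and deleting rather than trimming the offending keys are assumptions for illustration, not details from the post.

    # Locate (and optionally delete) oversized values, similar to the cleanup
    # described above. The threshold and dry-run flag are illustrative.
    import redis

    THRESHOLD = 10 * 1024   # flag values whose serialized size exceeds ~10 KB
    DRY_RUN = True          # set to False to actually delete the offenders

    r = redis.Redis(host="localhost", port=6379, db=0)

    found = 0
    for key in r.scan_iter(count=1000):          # SCAN keeps the server responsive
        size = r.debug_object(key)["serializedlength"]
        if size > THRESHOLD:
            found += 1
            print("%r: %d bytes" % (key, size))
            if not DRY_RUN:
                r.delete(key)

    print("oversized keys found: %d" % found)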

Thanks again.
