What is an easy way to bring Elasticsearch to its knees?  Use time-based indexing with one index per day, install Winlogbeat on a server, and forget to set the "ignore_older" option in Winlogbeat.
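
For reference, "ignore_older" is set per event log in winlogbeat.yml. A minimal sketch (the log names and the 72h window here are just examples, not a recommended config):

winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h    ## skip events older than 72 hours
  - name: Security
    ignore_older: 72h
  - name: System
    ignore_older: 72h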

What happens is that Winlogbeat grabs every log on that server, regardless of date, and ships them all to the ELK stack for processing and indexing.  If the server is 3 years old, there are likely event logs covering that entire 3-year period, and as Logstash processes them it instructs Elasticsearch to create a new index for every new date it discovers.  Trying to create roughly 1,000 new indexes in Elasticsearch in a matter of minutes is extremely resource intensive; it takes time to complete and will likely cause major performance problems for every other indexing and search task Elasticsearch is handling.
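
For context, the one-index-per-day behavior typically comes from a date-based index pattern in the Logstash elasticsearch output, along the lines of the sketch below (the host and index name are illustrative). Every distinct day found in an event's @timestamp maps to its own index:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    ## one index per calendar day of the event's @timestamp
    index => "winlogbeat-%{+YYYY.MM.dd}"
  }
}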

What to do?  Put a fail-safe in your Logstash pipeline that drops any log older than your Elasticsearch retention period.  Logstash deletes the event before it is ever indexed, and the problem is avoided.

A quick Logstash ruby filter to do it:

#####################
## Author: packetrevolt.com
#####################
## drop events >= 2592000 seconds (30 days) old; this goes inside the filter {} block
ruby { code => "event.cancel if (Time.now.to_f - event.timestamp.to_f >= 2592000)" }
#####################
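
One caveat: event.timestamp worked on older Logstash releases, but the Event API was locked down in Logstash 5, so on newer versions you may need the get accessor instead. An equivalent form for newer Logstash:

filter {
  ruby {
    ## cancel (drop) any event whose @timestamp is 30+ days in the past
    code => "event.cancel if (Time.now.to_f - event.get('@timestamp').to_f >= 2592000)"
  }
}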
