I have Elasticsearch installed on a VM, where logs are collected daily through Logstash and stored in indices following the {name}-{date} naming pattern. However, I’ve noticed that indices are being automatically deleted from Elasticsearch.
I can retrieve the list of indices using the API and have noticed that only the last two days' indices are visible. Previously, I was able to view indices from the past seven days.
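For reference, the call I use to pull the index list is along these lines (a sketch assuming the default HTTP port on the VM and no authentication; adjust host and credentials for your setup):

```python
import requests

# Assumption: unsecured cluster reachable on the default port.
ES_HOST = "http://localhost:9200"

# List indices with their creation dates so missing days are easy to spot.
resp = requests.get(
    f"{ES_HOST}/_cat/indices",
    params={
        "v": "true",
        "h": "index,creation.date.string,docs.count,store.size",
        "s": "index",
    },
)
resp.raise_for_status()
print(resp.text)
```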
There are no disk space issues—I have sufficient storage available. Additionally, no index lifecycle policies have been attached to these indices. Despite this, older indices are still being deleted, and no one is explicitly removing them.
Elasticsearch does not automatically delete any indices unless configured to do so. I would recommend checking whether you have any Index Lifecycle Management (ILM) policies configured in the cluster. You can do this through the API or Kibana.
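For example, something along these lines lists every ILM policy defined in the cluster (a sketch assuming an unsecured cluster on localhost:9200; adjust the host and auth to match your environment). An empty result means no ILM policy can be responsible for the deletions:

```python
import requests

ES_HOST = "http://localhost:9200"  # assumption: adjust for your cluster

# List all ILM policies defined in the cluster and the phases they use.
policies = requests.get(f"{ES_HOST}/_ilm/policy").json()
if not policies:
    print("No ILM policies defined in this cluster")
for name, body in policies.items():
    print(name, "->", list(body["policy"]["phases"]))
```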
If there is nothing there that explains the deletion, you likely have some external process set up to delete the indices, e.g. Curator. This typically runs periodically through a cron job, so you would need to search the hosts for it.
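As a rough way to search a host, you could scan the usual cron locations for anything mentioning Curator or index deletion (a sketch only; the paths are typical Linux defaults and may differ on your VM):

```python
import glob

# Assumption: typical cron locations on a Linux host; adjust if your
# distribution keeps cron files elsewhere.
CRON_PATHS = (
    ["/etc/crontab"]
    + glob.glob("/etc/cron.d/*")
    + glob.glob("/etc/cron.daily/*")
    + glob.glob("/var/spool/cron/**/*", recursive=True)
)

KEYWORDS = ("curator", "delete", "DELETE")  # terms that hint at index cleanup

for path in CRON_PATHS:
    try:
        with open(path, errors="ignore") as f:
            for lineno, line in enumerate(f, 1):
                if any(k in line for k in KEYWORDS):
                    print(f"{path}:{lineno}: {line.rstrip()}")
    except (IsADirectoryError, PermissionError, FileNotFoundError):
        continue
```

Per-user crontabs may need root to read, so also run crontab -l for the accounts that manage the VM, and check systemd timers (systemctl list-timers) if cron turns up nothing.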
Another possibility is that you have an unsecured cluster that is open to the internet. Since you say only the older indices are being deleted, I suspect this is less likely in your case.
Could you please help me determine where and how to check if an external process might be responsible for deleting the indices? Also, while I don’t recall setting up any automated cleanup processes myself, that could still be a possibility.
Additionally, I am not using Kibana for visualization—I am using Grafana instead.
Could you please check in Kibana under Stack Management > Index Management, search for your index, click on it, and confirm whether you see an Index Lifecycle tab?
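Since you mentioned you are not running Kibana, the equivalent check through the API is to look at whether any ILM settings are attached to the index (a sketch; the host and index name are placeholders for your own values):

```python
import requests

ES_HOST = "http://localhost:9200"   # assumption: your cluster endpoint
INDEX = "myapp-2024.06.01"          # placeholder: one of your {name}-{date} indices

# Return only the index.lifecycle.* settings; an empty response means
# no ILM policy is attached to this index.
resp = requests.get(
    f"{ES_HOST}/{INDEX}/_settings",
    params={"filter_path": "*.settings.index.lifecycle"},
)
print(resp.json() or "no lifecycle settings on this index")
```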
Looking at the previous comments, I see we have the following options:
We can check the Elasticsearch logs on the master node; they should contain information about the deleted indices, such as when each index was deleted, which user deleted it, and any other details to guide us further.
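To work out which node's logs to look at first, the _cat/master API reports the currently elected master, which is the node that coordinates index deletions (again a sketch against an assumed local, unsecured endpoint):

```python
import requests

ES_HOST = "http://localhost:9200"  # assumption: adjust for your cluster

# Identify the elected master node; its log is the first place to look
# for index deletion entries.
resp = requests.get(
    f"{ES_HOST}/_cat/master",
    params={"v": "true", "h": "host,ip,node"},
)
print(resp.text)
```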
I have reviewed the logs and noticed that there was maintenance activity early in the morning. Could this have been the reason for the deletion? I have not set up anything related to Curator.
Search the logs on all nodes for the name of an index that was deleted. Depending on the logging level you have configured, I believe this should be logged.
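A quick way to do that search, assuming the default log location of a package install on each node (adjust the path and index name for your installation), would be something like:

```python
import glob

LOG_DIR = "/var/log/elasticsearch"   # assumption: default package install location
DELETED_INDEX = "myapp-2024.06.01"   # placeholder: name of an index that disappeared

# Scan the plain-text Elasticsearch log files for mentions of the deleted
# index (rotated .gz files would need to be decompressed first).
for path in glob.glob(f"{LOG_DIR}/*.log"):
    with open(path, errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            if DELETED_INDEX in line:
                print(f"{path}:{lineno}: {line.rstrip()}")
```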
Just restarting Elasticsearch will not result in any index getting deleted.
Are you sure no one else might have set up scheduled deletion for this cluster?