How to prevent Elasticsearch from automatically deleting old indices

Hi,

I have Elasticsearch installed on a VM, where logs are collected daily through Logstash and stored in indices following the {name}-{date} naming pattern. However, I’ve noticed that indices are being automatically deleted from Elasticsearch.

I can retrieve the list of indices using the API and have noticed that only the last two days' indices are visible. Previously, I was able to view indices from the past seven days.

There are no disk space issues—I have sufficient storage available. Additionally, no index lifecycle policies have been attached to these indices. Despite this, older indices are still being deleted, and no one is explicitly removing them.
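For reference, this is roughly how I check the list of indices (just a sketch of what I run; it assumes Elasticsearch is reachable on http://localhost:9200 without authentication):

    # List indices via the _cat/indices API with name, creation date, doc count and size.
    # Assumes Elasticsearch is reachable at http://localhost:9200 without authentication.
    import requests

    resp = requests.get(
        "http://localhost:9200/_cat/indices",
        params={
            "format": "json",
            "h": "index,creation.date.string,docs.count,store.size",
            "s": "index",
        },
    )
    resp.raise_for_status()
    for idx in resp.json():
        print(idx["index"], idx["creation.date.string"], idx["docs.count"], idx["store.size"])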

Please help me with this.

Thank you

Elasticsearch does not automatically delete any indices unless it is configured to do so. I would recommend checking whether you have any Index Lifecycle Management (ILM) policies configured in the cluster. You can do this through the API or Kibana.
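For example, you can list the policies and check whether any of your indices are managed by one with something along these lines (just a sketch, assuming an unsecured cluster on http://localhost:9200; adjust the URL and credentials to your setup):

    # Check for ILM policies and see which ones (if any) apply to the {name}-{date} indices.
    # Assumes an unsecured cluster at http://localhost:9200 - adjust as needed.
    import requests

    base = "http://localhost:9200"

    # 1. All ILM policies defined in the cluster.
    policies = requests.get(f"{base}/_ilm/policy").json()
    print("ILM policies:", list(policies.keys()) or "none")

    # 2. ILM status of each index matching the pattern (shows the policy name, if any).
    explain = requests.get(f"{base}/*-*/_ilm/explain").json()
    for name, info in explain.get("indices", {}).items():
        print(name, "-> managed:", info["managed"], "policy:", info.get("policy", "none"))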

If there is nothing there that explains the deletion, you likely have some external process set up to delete the indices, e.g. Curator. This typically runs periodically through a cron job, so you would need to search the hosts for it.
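One way to look for that is to scan the crontabs and the usual Curator locations on the hosts, for example (again only a sketch; the paths below are common defaults and may differ on your system):

    # Look for scheduled jobs or Curator configuration that might delete indices.
    # The paths below are common defaults, not guaranteed to match your system.
    import glob
    import pathlib

    candidates = (
        glob.glob("/etc/crontab")
        + glob.glob("/etc/cron.d/*")
        + glob.glob("/etc/cron.daily/*")
        + glob.glob("/var/spool/cron/**/*", recursive=True)
        + glob.glob("/etc/curator/*")
        + glob.glob("/root/.curator/*")
    )

    keywords = ("curator", "delete", "DELETE", "elasticsearch")
    for path in candidates:
        p = pathlib.Path(path)
        if not p.is_file():
            continue
        try:
            text = p.read_text(errors="ignore")
        except OSError:
            continue
        for line in text.splitlines():
            if any(k in line for k in keywords):
                print(f"{path}: {line.strip()}")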

Another possibility is an unsecured cluster that is open to the internet. As you say only the older indices are being deleted, I suspect this is less likely in this case.

Hi,

I have not set any lifecycle policies.

Could you please help me determine where and how to check if an external process might be responsible for deleting the indices? Also, while I don’t recall setting up any automated cleanup processes myself, that could still be a possibility.

Additionally, I am not using Kibana for visualization—I am using Grafana instead.

Thanks for your response!

Hello Aditya,

Could you please check in Stack Management > Index Management > search for your index > click on your index, and confirm whether an Index Lifecycle tab is shown?

This will help us understand whether there is any ILM policy assigned to it.
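Since you mentioned you are not using Kibana, the same information can be read from the index settings via the API, for example (a rough sketch, assuming the cluster is reachable on http://localhost:9200 without authentication):

    # Show whether any index has an ILM policy attached (index.lifecycle.name setting).
    # Assumes http://localhost:9200 without authentication - adjust as needed.
    import requests

    settings = requests.get(
        "http://localhost:9200/_all/_settings",
        params={"filter_path": "*.settings.index.lifecycle"},
    ).json()

    if not settings:
        print("No index has an ILM policy attached.")
    for index, body in settings.items():
        lifecycle = body["settings"]["index"].get("lifecycle", {})
        print(index, "->", lifecycle.get("name", "no policy"))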

Thanks!!

Hi, I have not configured any Index Lifecycle Management (ILM) policy for the index.

I am not using Kibana for visualizing Elasticsearch data.

Thank you for sharing the details.

Looking at the previous comments, I see we have the below options:

  1. We can check the Elasticsearch logs on the master node. They should have information about the deleted indices: when they were deleted, which user issued the request, and any other insights to guide us further (see the sketch after this list).

  2. Check if there is any configuration related to Curator
    Delete indices in ElasticSearch using Curator | by Khushal Bisht | Medium

  3. If the source of the deletion is still not clear, how about enabling audit logging for future investigation:
    Enable audit logging | Elasticsearch Guide [8.18] | Elastic
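For point 1, something along these lines can pull the deletion entries out of the Elasticsearch logs on the master node (only a sketch; it assumes the default log directory /var/log/elasticsearch, which may differ in your installation):

    # Scan the Elasticsearch logs on the master node for index deletion entries.
    # The master typically logs deletions with a "deleting index" message at INFO level.
    # Assumes the default log directory /var/log/elasticsearch - adjust if needed.
    import gzip
    import pathlib

    log_dir = pathlib.Path("/var/log/elasticsearch")

    def lines(path):
        opener = gzip.open if path.suffix == ".gz" else open
        with opener(path, "rt", errors="ignore") as f:
            yield from f

    for path in sorted(log_dir.glob("*")):
        if not path.is_file():
            continue
        for line in lines(path):
            if "deleting index" in line:
                print(f"{path.name}: {line.strip()}")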

Thanks!!

I have reviewed the logs and noticed that there was maintenance activity early in the morning. Could this have been the reason for the deletion? I have not set up anything related to Curator.

Search the logs on all nodes for the name of an index that was deleted. Depending on the logging level you have configured I believe this should be logged.

Just restarting Elasticsearch will not result in any index getting deleted.

Are you sure no one else might have set up scheduled deletion for this cluster?

Which version are you using?

Is your cluster exposed to the internet? Do you have security enabled?

As mentioned, Elasticsearch does not delete anything unless told to do so by an ILM policy, Curator, or direct delete requests.

If I'm not wrong, delete requests are logged on the cluster; check the logs for any line containing the word DELETE.
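To answer the version and security questions, something like this will print both (a sketch, assuming the cluster answers on http://localhost:9200 without credentials; if that request is rejected with a 401, security is at least enabled):

    # Print the Elasticsearch version and whether the security feature is enabled.
    # Assumes the cluster answers on http://localhost:9200 without credentials.
    import requests

    base = "http://localhost:9200"

    root = requests.get(base).json()
    print("version:", root["version"]["number"])

    xpack = requests.get(f"{base}/_xpack").json()
    security = xpack["features"]["security"]
    print("security available:", security["available"], "enabled:", security["enabled"])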