Red indexes + unassigned shards after snapshot restore

Hello,

I am trying to restore my indexes from my local server to a webserver (same Elasticsearch version on both), but I can't get it to work, and now I am facing a problem I don't really understand.

I created a new repository on my local server via:

curl -XPUT 'localhost:9200/_snapshot/backup_repository?verify=false' -H 'Content-Type: application/json' -d '{ "type": "fs","settings": {"location": "backup_repository"}}'

Then I verified it:

curl -XPOST 'localhost:9200/_snapshot/backup_repository/_verify'
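In case verification ever fails at this step, it can help to read back the repository settings (a sketch, using the repository name from the commands above). For an fs repository, the `location` has to resolve under a directory listed in `path.repo` in elasticsearch.yml on every node:

```shell
# Read back the registered repository settings to confirm the location
curl -XGET 'localhost:9200/_snapshot/backup_repository?pretty'

# The reported "location" must sit under a path.repo entry, e.g. in
# elasticsearch.yml:
#   path.repo: ["/var/lib/elasticsearch/backups"]
```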

Then I created the snapshot and transferred the resulting files to my webserver via FTP, into the repository folder referenced in the config file:

curl -XPUT 'http://localhost:9200/_snapshot/backup_repository/backup_1?wait_for_completion=true' -H 'Content-Type: application/json' -d '{"indices":"su_articles,su_articles_live","ignore_unavailable":true,"include_global_state":false}'

id                backup_1
repository        backup_repository
status            SUCCESS
start_epoch       1747404094
start_time        14:01:34
end_epoch         1747404094
end_time          14:01:34
duration          206ms
indices           2
successful_shards 2
failed_shards     0
total_shards      2
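Before transferring, it can also be worth listing the snapshot to confirm both indices made it in, and copying the repository directory in its entirety (a sketch; the metadata files such as `index-N` and the `meta-*`/`snap-*` files are part of the repository and must travel along with the segment data):

```shell
# List the snapshot and the indices it contains
curl -XGET 'localhost:9200/_snapshot/backup_repository/backup_1?pretty'

# Transfer the WHOLE repository folder, not individual files;
# partial copies leave the repository unreadable on the target side.
```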

Then I switched to the webserver, created and verified a repository there, and restored the snapshot:

curl -XPOST '<server-ip>:9200/_snapshot/backup_repository/backup_1/_restore' -H 'Content-Type: application/json' -d '{"indices":"su_articles_live,su_articles"}'

Afterwards I checked the indexes, but they were marked as red:

red    open   su_articles_live 0OYeSSf-TQWY7eiCfoxYuA   1   0
red    open   su_articles      AIjpMfQMR4efSSSaHbS-3g   1   0

The shards are also unassigned:

su_articles  0 p UNASSIGNED
su_articles_live 0 p UNASSIGNED

When checking _cluster/allocation/explain, I get the following explanation:

shard has failed to be restored from the snapshot [backup_repository:backup_1/6Osxb83vT-WkliCyU4yl5Q] because of [failed shard on node [bQzu3MFlRb6vUMEXhMD9mA]: failed recovery, failure RecoveryFailedException[[su_articles][0]: Recovery failed on {<server-ip>}{bQzu3MFlRb6vUMEXhMD9mA}{a4iAvyV5QtWxLtYZaOMopw}{162.55.xxx.xxx}{162.55.xxx.xxx:9300}{cdfhilmrstw}{ml.machine_memory=67324219392, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=33285996544}]; nested: IndexShardRecoveryException[failed recovery]; nested: IndexShardRestoreFailedException[restore failed]; nested: IndexShardRestoreFailedException[failed to restore snapshot [backup_1/6Osxb83vT-WkliCyU4yl5Q]]; nested: CorruptIndexException[checksum failed (hardware problem?) : expected=pz2ega actual=1vt2022 (resource=name [_63.fdt], length [49248], checksum [pz2ega], writtenBy [8.11.3]) (resource=VerifyingIndexOutput(_63.fdt))]; ] - manually close or delete the index [su_articles] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard
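The explain output itself spells out the recovery path (a sketch, using the index and snapshot names from the error above): delete or close the half-restored index, then retry the restore for that index. Note that the `CorruptIndexException` (`checksum failed (hardware problem?)`) points at the copied files themselves, so if the repository files were damaged in transit, retrying from the same copy will most likely fail the same way:

```shell
# Remove the failed, half-restored index on the webserver
curl -XDELETE '<server-ip>:9200/su_articles'

# Retry the restore for that index only
curl -XPOST '<server-ip>:9200/_snapshot/backup_repository/backup_1/_restore?wait_for_completion=true' \
  -H 'Content-Type: application/json' \
  -d '{"indices":"su_articles"}'
```

If the retry hits the same checksum error, the repository copy on the webserver is the suspect, not the cluster.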

When I search the red indexes via /su_articles/_search?pretty=true&q=*:* I get:
no_shard_available_action_exception

Does anyone know how to resolve this?

Is there a hardware issue (a failing disk, perhaps)?
Are both clusters on the same version?

Wild guess, but did you transfer the files via FTP in binary mode? The default for some clients is ASCII, which silently mangles binary files.

Any reason you couldn’t use scp?
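For reference, a binary-safe transfer of the whole repository folder might look like this (a sketch; the host and paths are assumptions, not values from the thread):

```shell
# scp and rsync are always binary-safe, unlike FTP in ASCII mode.
# Copy the entire repository directory, preserving its structure:
scp -r /var/backups/backup_repository user@webserver:/var/backups/

# rsync alternative: resumable, and --checksum re-verifies file contents
rsync -av --checksum /var/backups/backup_repository/ \
  user@webserver:/var/backups/backup_repository/
```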

Check that the md5sums on both sides match completely.
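One way to do that comparison (a sketch; `REPO` is an assumed path, and the `mkdir`/`printf` lines only create demo data so the snippet runs standalone — point `REPO` at the real repository folder instead):

```shell
# Build a sorted checksum manifest of a repository directory so the
# local copy and the transferred copy can be diffed line by line.
REPO="${REPO:-/tmp/demo_backup_repository}"
mkdir -p "$REPO"                               # demo only; real repo already exists
printf 'demo segment data' > "$REPO/_0.fdt"    # stand-in for a Lucene file
find "$REPO" -type f -exec md5sum {} + | sort -k2 > /tmp/repo.md5
cat /tmp/repo.md5
```

Run the `find` line on both servers and `diff` the two manifests; any differing line points at a file that was corrupted in transfer.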

I am not sure if this was the right call, but it works now, thank you! I also set discovery.seed_hosts to 0.0.0.0 (it was empty before), and created a new repository and snapshot both locally and on my webserver.

Glad it works. Maybe just a lucky guess.