Import dangling index API
Imports a dangling index.
Request
POST /_dangling/<index-uuid>?accept_data_loss=true
Prerequisites
- If the Elasticsearch security features are enabled, you must have the manage cluster privilege to use this API.
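If the calling user does not yet have that privilege, one way to grant it is through a role. The following is a minimal sketch, assuming the Create or update roles API (PUT /_security/role/<name>) and a node at http://localhost:9200; the role name, address, and credentials are placeholders, not part of this API.

import requests

ES = "http://localhost:9200"                  # assumed node address
ADMIN_AUTH = ("elastic", "<admin-password>")  # placeholder admin credentials

# Create a role that carries the manage cluster privilege; the role name is
# illustrative. Assign it to whichever user will call the dangling index APIs.
resp = requests.put(
    f"{ES}/_security/role/dangling_importer",
    json={"cluster": ["manage"]},
    auth=ADMIN_AUTH,
)
resp.raise_for_status()
print(resp.json())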
Description
If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling. For example, this can happen if you delete more than cluster.indices.tombstones.size indices while an Elasticsearch node is offline.
Import a single index into the cluster by referencing its UUID. Use the List dangling indices API to locate the UUID of an index.
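As a quick illustration, the following Python sketch calls the List dangling indices API (GET /_dangling) to find the UUID to pass to this API. The node address and credentials are placeholders, and the response field names follow the list API's output.

import requests

ES = "http://localhost:9200"      # assumed local node address
AUTH = ("elastic", "<password>")  # placeholder credentials

# Ask the cluster for any dangling indices it knows about.
resp = requests.get(f"{ES}/_dangling", auth=AUTH)
resp.raise_for_status()

# Each entry pairs an index name with the UUID needed by the import API.
for entry in resp.json().get("dangling_indices", []):
    print(entry["index_name"], entry["index_uuid"])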
Path parameters
- <index-uuid>
  (Required, string) UUID of the index to import, which you can find using the List dangling indices API.
Query parameters
- accept_data_loss
  (Required, Boolean)
  This field must be set to true to import a dangling index. Because Elasticsearch cannot know where the dangling index data came from or determine which shard copies are fresh and which are stale, it cannot guarantee that the imported data represents the latest state of the index when it was last in the cluster.
- master_timeout
  (Optional, time units)
  Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. Defaults to 30s.
- timeout
  (Optional, time units)
  Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Defaults to 30s.
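Putting the query parameters together, the sketch below shows an import request that also raises both timeouts above their 30s defaults; the UUID, node address, credentials, and 60s values are placeholders for illustration.

import requests

# Import a dangling index by UUID; accept_data_loss=true is mandatory.
resp = requests.post(
    "http://localhost:9200/_dangling/zmM4e0JtBkeUjiHD-MihPQ",  # placeholder UUID
    params={
        "accept_data_loss": "true",
        "master_timeout": "60s",  # wait up to 60s to reach the master node
        "timeout": "60s",         # wait up to 60s for the overall response
    },
    auth=("elastic", "<password>"),  # placeholder credentials
)
resp.raise_for_status()
print(resp.json())  # an acknowledged import returns {"acknowledged": true}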
Examples
The following example shows how to import a dangling index:
POST /_dangling/zmM4e0JtBkeUjiHD-MihPQ?accept_data_loss=true
The API returns the following response:
{ "acknowledged" : true }