This article collects the details of the Shrink index API, which shrinks an existing index into a new index with fewer primary shards, and should help you understand how to use it.
To make shard allocation easier, we recommend you also remove the index's
replica shards. You can later re-add replica shards as part of the shrink
operation.
You can use the following update index settings API
request to remove the index's replica shards, relocate the index's remaining
shards to the same node, and make the index read-only.
PUT /my_source_index/_settings
{
  "settings": {
    "index.number_of_replicas": 0,
    "index.routing.allocation.require._name": "shrink_node_name",
    "index.blocks.write": true
  }
}
index.routing.allocation.require._name relocates the index's shards to the node named shrink_node_name; replace this value with the name of one of your own nodes. index.blocks.write: true prevents write operations to this index. Metadata changes, such as deleting the index, are still allowed.
It can take a while to relocate the source index. Progress can be tracked
with the _cat recovery API, or the cluster health API can be used to wait until all shards have relocated
with the wait_for_no_relocating_shards parameter.
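For example, both checks might look like the following sketch (my_source_index is the index from the request above; the 60s timeout is an arbitrary value chosen here):

# Track relocation progress of the source index
GET _cat/recovery/my_source_index?v=true

# Or block until no shards are relocating (returns once done or when the timeout expires)
GET /_cluster/health?wait_for_no_relocating_shards=true&timeout=60s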
Description
The shrink index API allows you to shrink an existing index into a new index
with fewer primary shards. The requested number of primary shards in the target index
must be a factor of the number of shards in the source index. For example, an index with
8 primary shards can be shrunk into 4, 2, or 1 primary shards, and an index
with 15 primary shards can be shrunk into 5, 3, or 1. If the number
of shards in the index is a prime number, it can only be shrunk into a single
primary shard. Before shrinking, a (primary or replica) copy of every shard
in the index must be present on the same node.
The current write index on a data stream cannot be shrunk. In order to shrink
the current write index, the data stream must first be
rolled over so that a new write index is created
and then the previous write index can be shrunk.
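As an illustration, assuming a data stream named my-data-stream (a hypothetical name, not one used elsewhere in this article), the rollover could look like this:

POST /my-data-stream/_rollover

The previous write index then becomes a regular backing index and can be prepared and shrunk as described in the rest of this article.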
A shrink operation:
1. Creates a new target index with the same definition as the source index, but with a smaller number of primary shards.
2. Hard-links segments from the source index into the target index. (If the file system doesn't support hard-linking, then all segments are copied into the new index, which is a much more time-consuming process. Also, if using multiple data paths, shards on different data paths require a full copy of segment files if they are not on the same disk, since hard links don't work across disks.)
3. Recovers the target index as though it were a closed index which had just been re-opened.
Shrink an index
To shrink my_source_index into a new index called my_target_index, issue
the following request:
POST /my_source_index/_shrink/my_target_index
{
  "settings": {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}
Setting index.routing.allocation.require._name and index.blocks.write to null clears the allocation requirement and write block that were copied from the source index. The above request returns immediately once the target index has been added to
the cluster state; it doesn't wait for the shrink operation to start.
The number of primary shards in the target index must be a factor of the
number of primary shards in the source index. The source index must have
more primary shards than the target index.
The index must not contain more than 2,147,483,519 documents in total
across all shards that will be shrunk into a single shard on the target index,
as this is the maximum number of documents that can fit into a single shard.
The node handling the shrink process must have sufficient free disk space to
accommodate a second copy of the existing index.
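Before shrinking, it can help to check the per-shard document counts and store sizes of the source index against these limits; a minimal sketch using the _cat shards API (the column selection passed via h is just one possible choice):

GET _cat/shards/my_source_index?v=true&h=index,shard,prirep,docs,store,node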
The _shrink API is similar to the create index API
and accepts settings and aliases parameters for the target index:
POST /my_source_index/_shrink/my_target_index
{
  "settings": {
    "index.number_of_replicas": 1,
    "index.number_of_shards": 1,
    "index.codec": "best_compression"
  },
  "aliases": {
    "my_search_indices": {}
  }
}
index.number_of_shards sets the number of shards in the target index. This must be a factor of the
number of shards in the source index.
index.codec: best_compression will only take effect when new writes are made to the
index, such as when force-merging the shard to a single
segment.
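For instance, once the shrink has completed and the target index is no longer being written to, a force merge to a single segment might look like this sketch (my_target_index is the name used in the example above):

POST /my_target_index/_forcemerge?max_num_segments=1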
Monitor the shrink process
The shrink process can be monitored with the _cat recovery API, or the cluster health API can be used to wait
until all primary shards have been allocated by setting the wait_for_status
parameter to yellow.
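A minimal sketch of that wait, using the target index name from the earlier examples (the 60s timeout is an arbitrary choice):

GET /_cluster/health/my_target_index?wait_for_status=yellow&timeout=60s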
The _shrink API returns as soon as the target index has been added to the
cluster state, before any shards have been allocated. At this point, all
shards are in the state unassigned. If, for any reason, the target index
can't be allocated on the shrink node, its primary shard will remain
unassigned until it can be allocated on that node.
Once the primary shard is allocated, it moves to state initializing, and the
shrink process begins. When the shrink operation completes, the shard will
become active. At that point, Elasticsearch will try to allocate any
replicas and may decide to relocate the primary shard to another node.
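To inspect which of these states the target index's shards are currently in, a request along these lines can be used (again, the column list passed via h is just one possible selection):

GET _cat/shards/my_target_index?v=true&h=index,shard,prirep,state,unassigned.reason,node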
<target>
(Required, string) Name of the target index to create.
Index names must meet the following criteria:
Indices prior to 7.0 could contain a colon (:), but that's been deprecated and won't be supported in 7.0+
Cannot be longer than 255 bytes (note it is bytes, so multi-byte characters will count towards the 255 limit faster)
Names starting with . are deprecated, except for hidden indices and internal indices managed by plugins
wait_for_active_shards
(Optional, string) The number of shard copies that must be active before
proceeding with the operation. Set to all or any positive integer up
to the total number of shards in the index (number_of_replicas+1).
Default: 1, the primary shard.
See Active shards.
master_timeout
(Optional, time units) Specifies the period of time to wait for
a connection to the master node. If no response is received before the timeout
expires, the request fails and returns an error. Defaults to 30s.
timeout
(Optional, time units) Specifies the period of time to wait for
a response. If no response is received before the timeout expires, the request
fails and returns an error. Defaults to 30s.
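Putting these parameters together, a shrink request that waits for two active shard copies and uses explicit timeouts might look like this sketch (all parameter values here are arbitrary examples):

POST /my_source_index/_shrink/my_target_index?wait_for_active_shards=2&master_timeout=60s&timeout=60s
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}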