Paginate search results


  Avoid using from and size to page too deeply or request too many results at

  once. Search requests usually span multiple shards. Each shard must load its

  requested hits and the hits for any previous pages into memory. For deep pages

  or large sets of results, these operations can significantly increase memory and

  CPU usage, resulting in degraded performance or node failures.

  By default, you cannot use from and size to page through more than 10,000

  hits. This limit is a safeguard set by the

  index.max_result_window index setting. If you need

  to page through more than 10,000 hits, use the search_after

  parameter instead.
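For example, requesting the third page of 20 hits with from and size from the Python client might look like the following minimal sketch (the endpoint, index name, and query are illustrative assumptions, and the 8.x elasticsearch package is assumed):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# from = page * size; pages beyond hit 10,000 are rejected unless
# index.max_result_window is raised.
resp = client.search(
    index="twitter",
    from_=40,   # the client uses from_ because "from" is a Python keyword
    size=20,
    query={"match": {"title": "elasticsearch"}},
)
for hit in resp["hits"]["hits"]:
    print(hit["_id"])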

  
Elasticsearch uses Lucene's internal doc IDs as tie-breakers. These internal doc

  IDs can be completely different across replicas of the same data. When paging

  search hits, you might occasionally see that documents with the same sort values

  are not ordered consistently.

  
Search after

  You can use the search_after parameter to retrieve the next page of hits

  using a set of sort values from the previous page.

  Using search_after requires multiple search requests with the same query and

  sort values. The first step is to run an initial request. The following

  example sorts the results by two fields (date and tie_breaker_id):

  

GET twitter/_search
{
  "query": {
    "match": {
      "title": "elasticsearch"
    }
  },
  "sort": [
    {"date": "asc"},
    {"tie_breaker_id": "asc"}
  ]
}

 

  
The search response includes an array of sort values for each hit:

  

{
  "took" : 17,
  "timed_out" : false,
  "_shards" : ...,
  "hits" : {
    "total" : ...,
    "max_score" : null,
    "hits" : [
      {
        "_index" : "twitter",
        "_id" : "654322",
        "_score" : null,
        "_source" : ...,
        "sort" : [
          1463538855,
          "654322"
        ]
      },
      {
        "_index" : "twitter",
        "_id" : "654323",
        "_score" : null,
        "_source" : ...,
        "sort" : [
          1463538857,
          "654323"
        ]
      }
    ]
  }
}

 

  
To retrieve the next page of results, repeat the request, take the sort values from the

  last hit, and insert those into the search_after array:

  

GET twitter/_search
{
  "query": {
    "match": {
      "title": "elasticsearch"
    }
  },
  "search_after": [1463538857, "654323"],
  "sort": [
    {"date": "asc"},
    {"tie_breaker_id": "asc"}
  ]
}
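Driving these repeated requests from a client might look like the following minimal Python sketch (assuming the 8.x elasticsearch package; the endpoint and the per-hit handling are placeholders):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

search_after = None
while True:
    extra = {"search_after": search_after} if search_after else {}
    resp = client.search(
        index="twitter",
        size=100,
        query={"match": {"title": "elasticsearch"}},
        sort=[{"date": "asc"}, {"tie_breaker_id": "asc"}],
        **extra,
    )
    hits = resp["hits"]["hits"]
    if not hits:
        break
    for hit in hits:
        print(hit["_id"])            # placeholder for real processing
    search_after = hits[-1]["sort"]  # sort values of the last hit feed the next page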

 

  
Repeat this process by updating the search_after array every time you retrieve a

  new page of results. If a refresh occurs between these requests,

  the order of your results may change, causing inconsistent results across pages. To

  prevent this, you can create a point in time (PIT) to

  preserve the current index state over your searches.

  

POST /my-index-000001/_pit?keep_alive=1m

 

  


The API returns a PIT ID:

{
  "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=="
}

 

  To get the first page of results, submit a search request with a sort

  argument. If using a PIT, specify the PIT ID in the pit.id parameter and omit

  the target data stream or index from the request path.

  
All PIT search requests add an implicit sort tiebreaker field called _shard_doc,

  which can also be provided explicitly.

  If you cannot use a PIT, we recommend that you include a tiebreaker field

  in your sort. This tiebreaker field should contain a unique value for each document.

If you don't include a tiebreaker field, your paged results could miss or duplicate hits.

  
Search after requests have optimizations that make them faster when the sort

  order is _shard_doc and total hits are not tracked. If you want to iterate over all documents regardless of the

  order, this is the most efficient option.

  
If the sort field is a date in some target data streams or indices

  but a date_nanos field in other targets, use the numeric_type parameter

  to convert the values to a single resolution and the format parameter to specify a

date format for the sort field. Otherwise, Elasticsearch won't interpret
the search_after parameter correctly in each request.

  
"pit": {

   "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",

   "keep_alive": "1m"

   "sort": [

   {"@timestamp": {"order": "asc", "format": "strict_date_optional_time_nanos", "numeric_type" : "date_nanos" }}

  }

 

 

  
The search response includes an array of sort values for each hit. If you used
a PIT, a tiebreaker is included as the last sort value for each hit.
This tiebreaker, called _shard_doc, is added automatically to every search request that uses a PIT.
The _shard_doc value is the combination of the shard index within the PIT and Lucene's internal doc ID;
it is unique per document and constant within a PIT.
You can also add the tiebreaker explicitly in the search request to customize the order:

  

GET /_search
{
  "size": 10000,
  "query": {
    "match" : {
      "user.id" : "elkbee"
    }
  },
  "pit": {
    "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
    "keep_alive": "1m"
  },
  "sort": [
    {"@timestamp": {"order": "asc", "format": "strict_date_optional_time_nanos"}},
    {"_shard_doc": "desc"}
  ]
}

 

  


{
  "pit_id" : "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
  "took" : 17,
  "timed_out" : false,
  "_shards" : ...,
  "hits" : {
    "total" : ...,
    "max_score" : null,
    "hits" : [
      {
        "_index" : "my-index-000001",
        "_id" : "FaslK3QBySSL_rrj9zM5",
        "_score" : null,
        "_source" : ...,
        "sort" : [
          "2021-05-20T05:30:04.832Z",
          4294967298
        ]
      }
    ]
  }
}

 

  
To get the next page of results, rerun the previous search using the last hit's
sort values (including the tiebreaker) as the search_after argument. If using a PIT, use the latest PIT
ID in the pit.id parameter. The search's query and sort arguments must
remain unchanged. If provided, the from argument must be 0 (default) or -1.

  

GET /_search
{
  "size": 10000,
  "query": {
    "match" : {
      "user.id" : "elkbee"
    }
  },
  "pit": {
    "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==",
    "keep_alive": "1m"
  },
  "sort": [
    {"@timestamp": {"order": "asc", "format": "strict_date_optional_time_nanos"}}
  ],
  "search_after": [
    "2021-05-20T05:30:04.832Z",
    4294967298
  ],
  "track_total_hits": false
}

 

  
You can repeat this process to get additional pages of results. If using a PIT,
you can extend the PIT's retention period using the
keep_alive parameter of each search request.
When you're finished, you should delete your PIT.

  

DELETE /_pit
{
  "id" : "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=="
}
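Putting the PIT and search_after pieces together, the whole flow (open a PIT, page until the hits run out, then delete the PIT) might look like this minimal Python sketch. It assumes the 8.x elasticsearch package, whose open_point_in_time and close_point_in_time methods wrap the PIT APIs; the endpoint and per-hit handling are placeholders.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Freeze the index state for the whole walk.
pit = client.open_point_in_time(index="my-index-000001", keep_alive="1m")
pit_id = pit["id"]

search_after = None
try:
    while True:
        extra = {"search_after": search_after} if search_after else {}
        resp = client.search(
            size=10000,
            query={"match": {"user.id": "elkbee"}},
            pit={"id": pit_id, "keep_alive": "1m"},  # keep_alive extends the PIT each time
            sort=[{"@timestamp": {"order": "asc",
                                  "format": "strict_date_optional_time_nanos"}}],
            track_total_hits=False,  # totals are not needed, so skip counting them
            **extra,
        )
        hits = resp["hits"]["hits"]
        if not hits:
            break
        pit_id = resp["pit_id"]          # always reuse the most recent PIT ID
        search_after = hits[-1]["sort"]  # includes the implicit _shard_doc tiebreaker
finally:
    client.close_point_in_time(id=pit_id)  # delete the PIT when finished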

 

  
Scroll search results

We no longer recommend using the scroll API for deep pagination. If

  you need to preserve the index state while paging through more than 10,000 hits,

  use the search_after parameter with a point in time (PIT).

  
While a search request returns a single page of results, the scroll

  API can be used to retrieve large numbers of results (or even all results)

  from a single search request, in much the same way as you would use a cursor

  on a traditional database.

  Scrolling is not intended for real time user requests, but rather for

  processing large amounts of data, e.g. in order to reindex the contents of one

  data stream or index into a new data stream or index with a different

  configuration.

  
Some of the officially supported clients provide helpers to assist with

scrolled searches and reindexing.

  
The results that are returned from a scroll request reflect the state of

  the data stream or index at the time that the initial search request was made, like a

  snapshot in time. Subsequent changes to documents (index, update or delete)

  will only affect later search requests.

  
In order to use scrolling, the initial search request should specify the

  scroll parameter in the query string, which tells Elasticsearch how long it

should keep the search context alive (see Keeping the search context alive), e.g. ?scroll=1m.

  

POST /my-index-000001/_search?scroll=1m
{
  "size": 100,
  "query": {
    "match": {
      "message": "foo"
    }
  }
}

 

  
The result from the above request includes a _scroll_id, which should

  be passed to the scroll API in order to retrieve the next batch of

  results.

  

response = client.scroll(
  body: {
    scroll: '1m',
    scroll_id: 'DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=='
  }
)
puts response

 

  
"scroll": "1m",

   "scroll_id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="

   }`)),

   es.Scroll.WithPretty(),

  fmt.Println(res, err)

 

 

  

POST /_search/scroll
{
  "scroll" : "1m",
  "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}

 

  
GET or POST can be used and the URL should not include the index
name, as this is specified in the original search request instead.

  
The scroll parameter tells Elasticsearch to keep the search context open

  for another 1m.

  
The size parameter allows you to configure the maximum number of hits to be
returned with each batch of results. Each call to the scroll API returns the
next batch of results until there are no more results left to return, i.e. the
hits array is empty.
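As a rough illustration, iterating over every batch until the hits array comes back empty might look like this minimal Python sketch (assuming the 8.x elasticsearch package; endpoint, index, and query follow the examples above):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# The initial search opens the scroll and returns the first batch.
resp = client.search(
    index="my-index-000001",
    scroll="1m",
    size=100,
    query={"match": {"message": "foo"}},
)
scroll_id = resp["_scroll_id"]

try:
    while resp["hits"]["hits"]:
        for hit in resp["hits"]["hits"]:
            print(hit["_id"])  # placeholder for real processing
        # Each scroll call returns the next batch and resets the 1m expiry.
        resp = client.scroll(scroll_id=scroll_id, scroll="1m")
        scroll_id = resp["_scroll_id"]  # only the most recent ID should be used
finally:
    client.clear_scroll(scroll_id=scroll_id)  # free the search context when done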

  
The initial search request and each subsequent scroll request each

  return a _scroll_id. While the _scroll_id may change between requests, it doesn’t

  always change — in any case, only the most recently received _scroll_id should be used.

  
If the request specifies aggregations, only the initial search response

  will contain the aggregations results.

  
Scroll requests have optimizations that make them faster when the sort

  order is _doc. If you want to iterate over all documents regardless of the

order, this is the most efficient option.
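For example, a scroll that sorts on _doc might look like this minimal Python sketch (the endpoint and index are illustrative assumptions):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Sorting on _doc returns documents in index order and skips scoring entirely.
resp = client.search(
    index="my-index-000001",
    scroll="1m",
    size=1000,
    sort=["_doc"],
    query={"match_all": {}},
)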

  
Keeping the search context alive

  A scroll returns all the documents which matched the search at the time of the

  initial search request. It ignores any subsequent changes to these documents.

  The scroll_id identifies a search context which keeps track of everything

  that Elasticsearch needs to return the correct documents. The search context is created

  by the initial request and kept alive by subsequent requests.

  The scroll parameter (passed to the search request and to every scroll

  request) tells Elasticsearch how long it should keep the search context alive.

Its value (e.g. 1m, see Time units) does not need to be long enough to
process all data; it just needs to be long enough to process the previous
batch of results. Each scroll request (with the scroll parameter) sets a
new expiry time. If a scroll request doesn't pass in the scroll
parameter, then the search context will be freed as part of that scroll
request.

  Normally, the background merge process optimizes the index by merging together

  smaller segments to create new, bigger segments. Once the smaller segments are

  no longer needed they are deleted. This process continues during scrolling, but

  an open search context prevents the old segments from being deleted since they

  are still in use.

  
Keeping older segments alive means that more disk space and file handles

  are needed. Ensure that you have configured your nodes to have ample free file

  handles. See File Descriptors.

  
Additionally, if a segment contains deleted or updated documents then the

  search context must keep track of whether each document in the segment was live

  at the time of the initial search request. Ensure that your nodes have

  sufficient heap space if you have many open scrolls on an index that is subject

  to ongoing deletes or updates.

  
To prevent issues caused by having too many scrolls open, the
user is not allowed to open scrolls past a certain limit. By default, the
maximum number of open scrolls is 500. This limit can be updated with the
search.max_open_scroll_context cluster setting.
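For example, raising the limit through the cluster settings API might look like the following sketch (assuming the 8.x Python client; the value 1024 is only an illustration):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Persistently raise the maximum number of open scroll contexts.
client.cluster.put_settings(
    persistent={"search.max_open_scroll_context": 1024}
)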

  
You can check how many search contexts are open with the

  nodes stats API:

  

$params = [
    'metric' => 'indices',
    'index_metric' => 'search',
];
$response = $client->nodes()->stats($params);

 

  


resp = client.nodes.stats(metric="indices", index_metric="search")

 

  print(resp)

 

  
res, err := es.Nodes.Stats(
	es.Nodes.Stats.WithMetric([]string{"indices"}...),
	es.Nodes.Stats.WithIndexMetric([]string{"search"}...),
)
fmt.Println(res, err)

 

 

  
Clear scroll

Search contexts are automatically removed when the scroll timeout has been
exceeded. However, keeping scrolls open has a cost, as discussed in the
previous section, so scrolls should be explicitly
cleared as soon as they are no longer being used, using the
clear-scroll API:

  

response = client.clear_scroll(
  body: {
    scroll_id: 'DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=='
  }
)
puts response

 

  
res, err := es.ClearScroll(
	es.ClearScroll.WithBody(strings.NewReader(`{
	  "scroll_id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
	}`)),
)
fmt.Println(res, err)

 

 

  

DELETE /_search/scroll
{
  "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}

 

  
Multiple scroll IDs can be passed as an array:

response = client.clear_scroll(
  body: {
    scroll_id: [
      'DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==',
      'DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB'
    ]
  }
)
puts response

 

 

  
"DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",

   "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB"

   }`)),

  fmt.Println(res, err)

 

 

  

DELETE /_search/scroll
{
  "scroll_id" : [
    "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
    "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB"
  ]
}

 

  
The scroll_id can also be passed as a query string parameter or in the request body.

  Multiple scroll IDs can be passed as comma separated values:

  

$params = [
    'scroll_id' => 'DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==,DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB',
];
$response = $client->clearScroll($params);

 

  
"DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",

   "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB",

  print(resp)

 

 

  


response = client.clear_scroll(
  scroll_id: 'DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==,DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB'
)
puts response

 

  


res, err := es.ClearScroll(
	es.ClearScroll.WithScrollID("DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==", "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB"),
)
fmt.Println(res, err)

 

  


const response = await client.clearScroll({
  scroll_id: "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==,DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB"
});
console.log(response);

 

  

DELETE /_search/scroll/DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==,DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB

 

  
Sliced scroll

  When paging through a large number of documents, it can be helpful to split the search into multiple slices

  to consume them independently:

  

GET /my-index-000001/_search?scroll=1m
{
  "slice": {
    "id": 0,
    "max": 2
  },
  "query": {
    "match": {
      "message": "foo"
    }
  }
}

GET /my-index-000001/_search?scroll=1m
{
  "slice": {
    "id": 1,
    "max": 2
  },
  "query": {
    "match": {
      "message": "foo"
    }
  }
}

 

  
The result from the first request returned documents that belong to the first slice (id: 0) and
the result from the second request returned documents that belong to the second slice. Since the
maximum number of slices is set to 2, the union of the results of the two requests is equivalent
to the results of a scroll query without slicing. By default, the splitting is done first on the
shards, then locally on each shard using the _id field. The local splitting follows the formula
slice(doc) = floorMod(hashCode(doc._id), max).
Each scroll is independent and can be processed in parallel like any scroll request.
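For instance, the two slices above could be consumed by independent workers. A minimal sequential Python sketch (assuming the 8.x elasticsearch package exposes the slice body field as a keyword argument; the endpoint and per-hit handling are placeholders):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

def consume_slice(slice_id: int, max_slices: int) -> None:
    """Scroll through a single slice of the result set."""
    resp = client.search(
        index="my-index-000001",
        scroll="1m",
        slice={"id": slice_id, "max": max_slices},
        query={"match": {"message": "foo"}},
    )
    while resp["hits"]["hits"]:
        for hit in resp["hits"]["hits"]:
            print(slice_id, hit["_id"])  # placeholder for real processing
        resp = client.scroll(scroll_id=resp["_scroll_id"], scroll="1m")

# The slices are independent, so these calls could run in parallel workers instead.
for i in range(2):
    consume_slice(i, 2)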

  
If the number of slices is bigger than the number of shards, the slice filter is very slow on
the first calls: it has a complexity of O(N) and a memory cost equal to N bits per slice, where N
is the total number of documents in the shard. After a few calls the filter should be cached and
subsequent calls should be faster, but you should limit the number of sliced queries you perform in
parallel to avoid excessive memory usage.

  
The point-in-time API supports a more efficient partitioning strategy and
does not suffer from this problem. When possible, it's recommended to use a point-in-time search
with slicing instead of a scroll.
Another way to avoid this high cost is to use the doc_values of another field to do the slicing,
as shown in the sketch after the list below.
The field must have the following properties:

  
Every document should contain a single value. If a document has multiple values for the specified field, the first value is used.

  
The value for each document should be set once when the document is created and never updated. This ensures that each

  slice gets deterministic results.

  
The cardinality of the field should be high. This ensures that each slice gets approximately the same number of documents.
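For example, slicing on a date or numeric field with doc_values instead of _id might look like the following sketch (@timestamp is only an assumed field name that meets the properties above; the endpoint and query are placeholders):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Slice on a doc_values field instead of the default _id-based splitting.
resp = client.search(
    index="my-index-000001",
    scroll="1m",
    slice={"field": "@timestamp", "id": 0, "max": 10},
    query={"match": {"message": "foo"}},
)
print(len(resp["hits"]["hits"]))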
