Elasticsearch-Py bulk not indexing all documents - python

I am using the elasticsearch-py Python package to interact with Elasticsearch through code. I have a script that is meant to take each document from one index, generate a field + value, then re-index it into a new index.
The issue is that there are 1,216 documents in the first index, but only about 1,000 documents make it to the second one. Typically it is exactly 1,000 documents; occasionally it is higher, around 1,100, but it never reaches the full 1,216.
I usually keep the batch_size at 200, but changing it seems to have some effect on the number of documents that make it to the second index. Changing it to 400 typically results in about 800 documents being transferred. Using parallel_bulk seems to give the same results as using bulk.
I believe the issue is with the generating process I am performing. For each document I am generating its ancestry (they are organized in a tree structure) by recursively getting its parent from the first index. This involves rapid document GET requests interwoven with Bulk API calls to index the documents and Scroll API calls to get the documents from the index in the first place.
Would activity like this cause the documents to not go through? If I remove (comment out) the recursive GET requests, all documents seem to go through every time. I have tried creating multiple Elasticsearch clients, but that wouldn't even help if ES itself is the bottleneck.
Here is the code if you're curious:
def complete_resources():
    for result in helpers.scan(client=es, query=query, index=TEMP_INDEX_NAME):
        resource = result["_source"]
        ancestors = []
        parent = resource.get("parent")
        # Walk up the tree, issuing one GET per ancestor while the scroll is open.
        while parent is not None:
            ancestors.append(parent)
            parent = es.get(
                index=TEMP_INDEX_NAME,
                doc_type=TEMPORARY_DOCUMENT_TYPE,
                id=parent["uid"]
            ).get("_source").get("parent")
        resource["ancestors"] = ancestors
        resource["_id"] = resource["uid"]
        yield resource
This generator is consumed by helpers.parallel_bulk():
for success, info in helpers.parallel_bulk(
    client=es,
    actions=complete_resources(),
    thread_count=10,
    queue_size=12,
    raise_on_error=False,
    chunk_size=INDEX_BATCH_SIZE,
    index=new_primary_index_name,
    doc_type=PRIMARY_DOCUMENT_TYPE,
):
    if success:
        successful += 1
    else:
        failed += 1
        print('A document failed:', info)
This gives me the following result:
Time: 7 seconds
Successful: 1000
Failed: 0
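One thing the numbers hint at (not confirmed here) is that only the first scroll page survives: helpers.scan pages through the index 1,000 documents at a time by default, and the scroll context has a keep-alive (5 minutes by default) that must survive all the per-document GET work done between pages. A minimal sketch of the knobs helpers.scan exposes for this, as something to try rather than a definitive fix:

# Sketch only: lengthen the scroll keep-alive and keep raise_on_error=True
# so a dropped scroll page fails loudly instead of silently truncating.
for result in helpers.scan(
    client=es,
    query=query,
    index=TEMP_INDEX_NAME,
    scroll="30m",         # keep-alive of the scroll context between pages
    size=1000,            # documents fetched per scroll page (the default)
    raise_on_error=True,  # the default; surfaces scroll failures
):
    ...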

Related

Couchbase.exceptions._TimeoutError while trying to retrieve the document information

I am using python 3 with couchbase client. I have 461378 records in couchbase bucket and RAM used/quota is 3.77GB / 5.78GB. I am trying to retrieve the document by using the following code:
list_of_rows = []
for idx, product_details in enumerate(CouchRepo.get_product_details_iterator()):
    list_of_rows.append(get_required_dict_for_df(product_details["data"]))
But I am getting the following error:
in __iter__
raw_rows = self.raw.fetch(self._mres)
couchbase.exceptions._TimeoutError_0x17 (generated, catch TimeoutError): <RC=0x17[Client-Side timeout exceeded for operation. Inspect network conditions or increase the timeout], HTTP Request failed. Examine 'objextra' for full result, Results=1, C Source=(src/http.c,144), OBJ=ViewResult<rc=0x17[Client-Side timeout exceeded for operation. Inspect network conditions or increase the timeout], value=None, http_status=0, tracing_context=0, tracing_output=None>, Tracing Output={":nokey:0": null}>
Basically, inside the code:
while self._do_iter:
    raw_rows = self.raw.fetch(self._mres)
    for row in self._process_payload(raw_rows):
        yield row
I have tried setting a different operation_timeout but I get the same error. I also checked how I can allocate more RAM to the bucket or node, but did not find a solution. I have gone through the following links but didn't find any implementation details.
https://docs.couchbase.com/python-sdk/current/client-settings.html
https://docs.couchbase.com/server/current/install/sizing-general.html
How can I retrieve the records' info? Also, the number of records will increase in the future.
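The traceback points at a view/HTTP request (ViewResult, src/http.c), so the view timeout may be the relevant knob rather than the key-value operation_timeout. A rough sketch for the 2.x Python SDK; the host, credentials, and bucket name are placeholders, and the exact attribute names may differ between SDK versions, so check the client-settings docs linked above:

from couchbase.cluster import Cluster, PasswordAuthenticator

# operation_timeout in the connection string covers key-value operations.
cluster = Cluster('couchbase://my-host?operation_timeout=30')       # placeholder host
cluster.authenticate(PasswordAuthenticator('user', 'password'))     # placeholder creds
bucket = cluster.open_bucket('products')                            # placeholder bucket

# View and N1QL requests have their own timeouts (seconds), separate from
# the key-value operation_timeout set above. Attribute names are assumptions
# based on the 2.x SDK.
bucket.views_timeout = 120
bucket.n1ql_timeout = 120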

Elasticsearch : retrieve all documents from index with python

I need to retrieve documents from Elasticsearch in Python.
So I wrote this small piece of code:
es = Elasticsearch(
    myHost,
    port=myPort,
    scheme="http")
request = '''{"query": {"match_all": {}}}'''
results = es.search(index=myIndex, body=request)['hits']['hits']
print(len(results))
>> 10
The problem is that it only returns 10 documents from my index, when I expect a few hundred. How is it possible to retrieve all documents from the index?
You have several ways to solve this.
If you know the maximum number of documents you will have in the index, you can set the size parameter of the search to that number or more. For example, if you know you will have fewer than 100, you can retrieve them this way: results = es.search(index=myIndex, body=request, size=100)['hits']['hits']
If you don't know that number, and you still want all of them, you will have to use the scan function, instead of the search function. The documentation for that is here
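For the scan route, a minimal sketch using the helpers module from elasticsearch-py (reusing myHost, myPort, and myIndex from the question):

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(myHost, port=myPort, scheme="http")

# helpers.scan wraps the Scroll API and yields every matching hit,
# unaffected by the 10-hit default page size of es.search().
all_docs = [
    hit["_source"]
    for hit in helpers.scan(
        client=es,
        index=myIndex,
        query={"query": {"match_all": {}}},
    )
]
print(len(all_docs))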

Is there a way of setting a range when querying data of firestore?

I have a collection of documents, all with random id's and a field called date.
docs = collection_ref.order_by(u'date', direction=firestore.Query.ASCENDING).get()
Imagine I had limited the search to the first ten
docs = collection_ref.order_by(u'date', direction=firestore.Query.ASCENDING).limit(10).get()
How would I continue with my next query when I want to get the items from 11 to 20?
You can use offset(), but every doc skipped counts as a read. For example, if you did query.offset(10).limit(5), you would get charged for 15 reads: the 10 offset + the 5 that you actually got.
If you want to avoid unnecessary reads, use startAt() or startAfter().
Example (Java, sorry, I don't speak Python; here's a link to the docs though):
QuerySnapshot querySnapshot = // your query snapshot
List<DocumentSnapshot> docs = querySnapshot.getDocuments();
// reference doc, the next query starts at or after this one
DocumentSnapshot indexDocSnapshot = docs.get(docs.size() - 1);
// to 'start after', i.e. paginate, you can do the following:
query.startAfter(indexDocSnapshot).limit(10).get()
    .addOnSuccessListener(queryDocumentSnapshots -> {
        // next 10 docs here, no extra reads necessary
    });
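Since the question is in Python, here is a rough equivalent with the google-cloud-firestore client; the collection name is a placeholder, and the exact query methods can vary slightly between client versions:

from google.cloud import firestore

db = firestore.Client()
collection_ref = db.collection(u'my_collection')  # placeholder collection name

# First page of 10, ordered by date.
first_page = list(
    collection_ref.order_by(u'date', direction=firestore.Query.ASCENDING)
                  .limit(10)
                  .stream()
)

# Next page: start after the last snapshot of the previous page, so the
# skipped documents are not billed as reads (unlike offset()).
last_doc = first_page[-1]
next_page = list(
    collection_ref.order_by(u'date', direction=firestore.Query.ASCENDING)
                  .start_after(last_doc)
                  .limit(10)
                  .stream()
)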

Why does ArangoDB (using Python-Arango) return ERR 1600 ERROR_CURSOR_NOT_FOUND?

The problem
I iterate over an entire vertex collection, e.g. journals, and use it to create edges, author, from a person to the given journal.
I use python-arango and the code is something like:
for journal in journals.all():
    create_author_edge(journal)
I have a relatively small dataset, and the journals-collection has only ca. 1300 documents. However: this is more than 1000, which is the batch size in the Web Interface - but I don't know if this is of relevance.
The problem is that it raises a CursorNextError, and returns HTTP 404 and ERR 1600 from the database, which is the ERROR_CURSOR_NOT_FOUND error:
Will be raised when a cursor is requested via its id but a cursor with that id cannot be found.
Insights to the cause
From ArangoDB Cursor Timeout, and from this issue, I suspect that it's because the cursor's TTL has expired in the database, and in the python stacktrace something like this is seen:
# Part of the stacktrace in the error:
(...)
if not cursor.has_more():
    raise StopIteration
cursor.fetch()  # <---- error raised here
(...)
If I iterate over the entire collection fast, i.e. if I do print(len(journals.all())), it outputs "1361" with no errors.
When I replace the journals.all() with AQL, and increase the TTL parameter, it works without errors:
for journal in db.aql.execute("FOR j IN journals RETURN j", ttl=3600):
    create_author_edge(journal)
However, without the ttl parameter, the AQL approach gives the same error as using journals.all().
More information
A last piece of information is that I'm running this on my personal laptop when the error is raised. On my work computer, the same code was used to create the graph and populate it with the same data, but there no errors were raised. Because I'm on holiday I don't have access to my work computer to compare versions, but both systems were installed during the summer so there's a big chance the versions are the same.
The question
I don't know if this is an issue with python-arango or with ArangoDB. I believe that because there is no problem when the TTL is increased, it could indicate an issue with ArangoDB and not the Python driver, but I cannot know.
(I've added a feature request to add ttl-param to the .all()-method here.)
Any insights into why this is happening?
I don't have the rep to create the tag "python-arango", so it would be great if someone would create it and tag my question.
Inside of the server, the simple queries will be translated to all().
As discussed on the referenced GitHub issue, simple queries don't support the TTL parameter, and won't get it.
The preferred solution here is to use an AQL query on the client, so that you can specify the TTL parameter.
In general you should refrain from pulling all documents from the database at once, since this may introduce other scaling issues. You should use proper AQL with FILTER statements backed by indices (use explain() to revalidate) to fetch the documents you require.
If you need to iterate over all documents in the database, use paging. This is usually best implemented by combining a range FILTER with a LIMIT clause:
FOR x IN docs
    FILTER x.offsetteableAttribute > @lastDocumentWithThisID
    LIMIT 200
    RETURN x
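As a rough python-arango sketch of that pattern (paging on _key here purely for illustration; the bind variable, batch size, and TTL are arbitrary):

# Illustrative sketch: page through `journals` by _key using a bind variable
# and an explicit cursor TTL, instead of pulling everything in one cursor.
last_key = ""
while True:
    batch = list(db.aql.execute(
        """
        FOR x IN journals
            FILTER x._key > @last_key
            SORT x._key
            LIMIT 200
            RETURN x
        """,
        bind_vars={"last_key": last_key},
        ttl=300,
    ))
    if not batch:
        break
    for journal in batch:
        create_author_edge(journal)
    last_key = batch[-1]["_key"]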
So here is how I did it. You can pass extra arguments via the moreArgs parameter, which makes this easy to do.
Looking at the source, you can see the docstring tells you what to do:
def AQLQuery(self, query, batchSize = 100, rawResults = False, bindVars = None, options = None, count = False, fullCount = False,
             json_encoder = None, **moreArgs):
    """Set rawResults = True if you want the query to return dictionnaries instead of Document objects.
    You can use **moreArgs to pass more arguments supported by the api, such as ttl=60 (time to live)"""
from pyArango.connection import *

conn = Connection(username=usr, password=pwd, arangoURL=url)  # set this how ya need
db = conn['dbName']  # set this to the name of your database
aql = "FOR journal IN journals RETURN journal"
queryResult = db.AQLQuery(aql, rawResults=True, ttl=300)
for journal in queryResult:
    create_author_edge(journal)
That's all ya need to do!

Why is the reported number of hits from elasticsearch different depending on the query method?

I have an elasticsearch index which has 60k elements. I know that from checking the head plugin, and I get the same information via Sense.
I then wanted to query the same index from Python, in two different ways: via a direct requests call and using the elasticsearch module:
import elasticsearch
import json
import requests
# the requests version
data = {"query": {"match_all": {}}}
r = requests.get('http://elk.example.com:9200/nessus_current/_search', data=json.dumps(data))
print(len(r.json()['hits']['hits']))
# the elasticsearch module version
es = elasticsearch.Elasticsearch(hosts='elk.example.com')
res = es.search(index="nessus_current", body={"query": {"match_all": {}}})
print(len(res['hits']['hits']))
In both cases the result is 10 - far from the expected 60k. The results of the query make sense (the content is what I expect), it is just that there are only a few of them.
I took one of these 10 hits and queried with Sense for its _id to close the loop. It is indeed found, as expected.
So it looks like the 10 hits are a subset of the whole index, why aren't all elements reported in the Python version of the calls?
10 is the default size of the results returned by Elasticsearch. If you want more, specify "size": 100 for example. But, be careful, returning all the docs using size is not recommended as it can bring down your cluster. For getting back all the results use scan&scroll.
And I think it should be res['hits']['total'], not len(res['hits']['hits']), to get the total number of hits.
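A short sketch of both points, reusing the es client and index from the question (the size of 100 is just an example; note that on Elasticsearch 7+ hits.total is an object with a value key rather than a plain number):

# Total number of matching documents, independent of how many hits are returned.
res = es.search(index="nessus_current", body={"query": {"match_all": {}}})
print(res['hits']['total'])

# Ask for more than the default 10 hits. Still capped, so not a way to dump a
# whole 60k-document index; use scan/scroll for that.
res = es.search(index="nessus_current",
                body={"query": {"match_all": {}}, "size": 100})
print(len(res['hits']['hits']))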
