CloudSearch Request Exceeds 10,000 Limit - Python

When I search a query that has more than 10,000 matches I get the following error:
{u'message': u'Request depth (10100) exceeded, limit=10000', u'__type': u'#SearchException', u'error': {u'rid': u'zpXDxukp4bEFCiGqeQ==', u'message': u'[*Deprecated*: Use the outer message field] Request depth (10100) exceeded, limit=10000'}}
When I search for more narrowed-down keywords and queries with fewer results, everything works fine and no error is returned.
I guess I have to limit the search somehow, but I'm unable to figure out how. My search function looks like this:
def execute_query_string(self, query_string):
    amazon_query = self.search_connection.build_query(q=query_string, start=0, size=100)
    json_search_results = []
    for json_blog in self.search_connection.get_all_hits(amazon_query):
        json_search_results.append(json_blog)
    results = []
    for json_blog in json_search_results:
        results.append(json_blog['fields'])
    return results
And it's being called like this:
results = searcher.execute_query_string(request.GET.get('q', ''))[:100]
As you can see, I've tried to limit the results with the start and size attributes of build_query(). I still get the error though.
I must have misunderstood how to avoid getting more than 10,000 matches in a search result. Can someone tell me how to do it?
All I can find on this topic is Amazon's Limits page, where it says that you can only request 10,000 results. It does not say how to limit them.

You're calling get_all_hits, which gets ALL results for your query. That is why your size param is being ignored.
From the docs:
get_all_hits(query) Get a generator to iterate over all search results
Transparently handles the results paging from Cloudsearch search
results so even if you have many thousands of results you can iterate
over all results in a reasonably efficient manner.
http://boto.readthedocs.org/en/latest/ref/cloudsearch2.html#boto.cloudsearch2.search.SearchConnection.get_all_hits
You should be calling search instead -- http://boto.readthedocs.org/en/latest/ref/cloudsearch2.html#boto.cloudsearch2.search.SearchConnection.search
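For instance, here is a minimal sketch of the method from the question rewritten around search(); it assumes the returned SearchResults object exposes the individual hits on its docs attribute, each carrying a 'fields' dict as in the original code:
def execute_query_string(self, query_string):
    # search() issues a single request, so start/size are honored,
    # unlike get_all_hits(), which transparently pages through every match
    results_page = self.search_connection.search(q=query_string, start=0, size=100)
    return [hit['fields'] for hit in results_page.docs]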

Related

Can't get more than 20 place details (Gmaps Places API)

I'm quite a newbie in Python, especially when it comes to using the Gmaps API to get place details.
I want to search for places with this parameters:
places_result = gmaps.places_nearby(location=' -6.880270,107.60794', radius = 300, type = 'cafe')
But I actually want to get as much data as I can for the specific lat/lng and radius. So I tried the additional parameter that the Google API provides: page_token. This is the relevant part of the documentation:
pagetoken — Returns up to 20 results from a previously run search. Setting a pagetoken parameter will execute a search with the same parameters used previously — all parameters other than pagetoken will be ignored.
https://developers.google.com/places/web-service/search
So I tried to get more data (the next page of data) with this function:
places_result = gmaps.places_nearby(location=' -6.880270,107.60794', radius = 300, type = 'cafe')
time.sleep(5)
place_result = gmaps.places_nearby(page_token = places_result['next_page_token'])
And this is my whole output function:
for place in places_result['results']:
    my_place_id = place['place_id']
    my_fields = ['name','formatted_address','business_status','rating','user_ratings_total','formatted_phone_number']
    places_details = gmaps.place(place_id=my_place_id, fields=my_fields)
    pprint.pprint(places_details['result'])
But unfortunately, when I run it I only get 20 place details at most. I don't know whether my use of the page token parameter is correct, because the output never contains more than 20 places.
I'd really appreciate any advice on how to solve this. Thank you very much :)
As stated in the documentation here:
By default, each Nearby Search or Text Search returns up to 20 establishment results per query; however, each search can return as many as 60 results, split across three pages.
So basically, what you are currently experiencing is intended behavior. There is no way for you to get more than 20 results in a single nearby search query.
If a next_page_token was returned with your first nearby search response, this means that a second page of results is available.
To access this second page of results, just like you did, you send another nearby search request, but this time you use the pagetoken parameter and set its value to the next_page_token you got from the first response.
If the next_page_token also exists in the response to your second nearby search query, then the third (and last) page of results is also available. You can access the third page in the same way you accessed the second page.
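Putting that together, here is a rough sketch (not from the original answer) of collecting every available page before looking up the place details, reusing the parameters from the question; the short sleep is needed because a next_page_token takes a moment to become valid:
import time

all_places = []
response = gmaps.places_nearby(location='-6.880270,107.60794', radius=300, type='cafe')
all_places.extend(response['results'])

while 'next_page_token' in response:
    time.sleep(2)  # the token is not usable immediately after it is issued
    response = gmaps.places_nearby(page_token=response['next_page_token'])
    all_places.extend(response['results'])

# all_places now holds up to 60 results (3 pages of 20),
# and can be fed into the place-details loop from the question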
Going back to your query, I tried the parameters you've specified but I could only get around 9 results. Is it intended that your radius parameter is only set at 300 meters?

Failing to iterate over a list when querying DynamoDB

I am trying to query a DynamoDB table while iterating over a list, and it is failing, i.e. returning empty JSON. If I run the query with a single id, I am able to get data.
I am reading the ids from a file into a list.
Below is my code in loop:
with open('file.txt') as f:
    resid = f.read().splitlines()
for id in resid:
    result = table.query(
        IndexName="partner_resid-index",
        KeyConditionExpression=Key("id").eq(partner_resid[0]),
        FilterExpression=Key("event").eq("active"),
    )
    print(result)
I even tried calling it from a function, but no luck.
Any suggestions what I am missing here?
The boto3 query function returns only a single page of query results. You must check whether this result has a LastEvaluatedKey and, if it does, send another query with ExclusiveStartKey set to the last LastEvaluatedKey, and continue doing that until you get the last page, the one without LastEvaluatedKey set.
The thing is, if your FilterExpression filters out a lot of results, you may even get an empty page - and it is possible this is the empty result you're seeing. Note that DynamoDB first reads a page full of data (by default, 1 MB of data), and only then applies the FilterExpression to it. It is possible to get back an empty page if none of those results matched the filter, and you still need to continue the loop to the next page.
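A rough sketch of that manual paging loop, using the resource-level Table API from the question; some_id stands in for one of the ids read from the file, and the filter uses Attr (the idiomatic boto3 class for non-key attributes) instead of the Key class from the original code:
from boto3.dynamodb.conditions import Attr, Key

items = []
kwargs = dict(
    IndexName="partner_resid-index",
    KeyConditionExpression=Key("id").eq(some_id),  # some_id is a placeholder
    FilterExpression=Attr("event").eq("active"),
)
while True:
    result = table.query(**kwargs)
    items.extend(result.get("Items", []))
    if "LastEvaluatedKey" not in result:
        break  # last page reached
    kwargs["ExclusiveStartKey"] = result["LastEvaluatedKey"]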
Alternatively, you can use boto3's paginator mechanism. It is used like:
got_items = []
paginator = dynamodb.meta.client.get_paginator('query')
for page in paginator.paginate(TableName='name', KeyConditionExpression=...):
    got_items += page['Items']

Elasticsearch-Py bulk not indexing all documents

I am using the elasticsearch-py Python package to interact with Elasticsearch through code. I have a script that is meant to take each document from one index, generate a field + value, then re-index it into a new index.
The issue is that there are 1,216 documents in the first index, but only 1,000 documents make it to the second one. Typically it is exactly 1,000 documents, occasionally a bit higher at around 1,100, but it never reaches the full 1,216.
I usually keep the batch_size at 200, but changing it seems to have some effect on the number of documents that make it to the second index. Changing it to 400 typically results in about 800 documents being transferred. Using parallel_bulk seems to give the same results as using bulk.
I believe the issue is with the generating process I am performing. For each document I am generating its ancestry (they are organized in a tree structure) by recursively getting its parent from the first index. This involves rapid document GET requests interwoven with Bulk API calls to index the documents and Scroll API calls to get the documents from the index in the first place.
Would activity like this cause the documents to not go through? If I remove (comment out) the recursive GET requests, all documents seem to go through every time. I have tried creating multiple Elasticsearch clients, but that wouldn't even help if ES itself is the bottleneck.
Here is the code if you're curious:
def complete_resources():
    for result in helpers.scan(client=es, query=query, index=TEMP_INDEX_NAME):
        resource = result["_source"]
        ancestors = []
        parent = resource.get("parent")
        while parent is not None:
            ancestors.append(parent)
            parent = es.get(
                index=TEMP_INDEX_NAME,
                doc_type=TEMPORARY_DOCUMENT_TYPE,
                id=parent["uid"]
            ).get("_source").get("parent")
        resource["ancestors"] = ancestors
        resource["_id"] = resource["uid"]
        yield resource
This generator is consumed by helpers.parallel_bulk():
for success, info in helpers.parallel_bulk(
    client=es,
    actions=complete_resources(),
    thread_count=10,
    queue_size=12,
    raise_on_error=False,
    chunk_size=INDEX_BATCH_SIZE,
    index=new_primary_index_name,
    doc_type=PRIMARY_DOCUMENT_TYPE,
):
    if success:
        successful += 1
    else:
        failed += 1
        print('A document failed:', info)
This gives me the following result:
Time: 7 seconds
Successful: 1000
Failed: 0

Elasticsearch: retrieve all documents from an index with Python

I need to retrieve documents from Elasticsearch in Python.
So I wrote this small piece of code:
es = Elasticsearch(
    myHost,
    port=myPort,
    scheme="http")
request = '''{"query": {"match_all": {}}}'''
results = es.search(index=myIndex, body=request)['hits']['hits']
print(len(results))
>> 10
The problem is that it only returns 10 documents from my index, when I expect a few hundred. How can I retrieve all documents from the index?
You have several ways to solve this.
If you know the maximum number of documents you will have in the index, you can set the size parameter of the search to that number or more. For example, if you know you will have fewer than 100, you can retrieve them this way: results = es.search(index=myIndex, body=request, size=100)['hits']['hits']
If you don't know that number and you still want all of them, you will have to use the scan function instead of the search function. The documentation for that is here.
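For instance, here is a minimal sketch of the scan helper, assuming the same es client and index name as in the question:
from elasticsearch import helpers

# scan() transparently scrolls through the index, yielding every hit
all_docs = [
    hit['_source']
    for hit in helpers.scan(client=es, index=myIndex, query={"query": {"match_all": {}}})
]
print(len(all_docs))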

Why does ArangoDB (using Python-Arango) return ERR 1600 ERROR_CURSOR_NOT_FOUND?

The problem
I iterate over an entire vertex collection, e.g. journals, and use it to create edges, author, from a person to the given journal.
I use python-arango and the code is something like:
for journal in journals.all():
    create_author_edge(journal)
I have a relatively small dataset, and the journals collection has only ca. 1,300 documents. However, this is more than 1,000, which is the batch size in the web interface - but I don't know if this is relevant.
The problem is that it raises a CursorNextError, and returns HTTP 404 and ERR 1600 from the database, which is the ERROR_CURSOR_NOT_FOUND error:
Will be raised when a cursor is requested via its id but a cursor with that id cannot be found.
Insights to the cause
From ArangoDB Cursor Timeout and from this issue, I suspect it's because the cursor's TTL has expired in the database. In the Python stacktrace, something like this is seen:
# Part of the stacktrace in the error:
(...)
if not cursor.has_more():
    raise StopIteration
cursor.fetch()  # <---- error raised here
(...)
If I iterate over the entire collection quickly, i.e. if I do print(len(journals.all())), it outputs "1361" with no errors.
When I replace the journals.all() with AQL, and increase the TTL parameter, it works without errors:
for journal in db.aql.execute("FOR j IN journals RETURN j", ttl=3600):
    create_author_edge(journal)
However, without the ttl parameter, the AQL approach gives the same error as using journals.all().
More information
One last piece of information: I'm running this on my personal laptop when the error is raised. On my work computer, the same code was used to create the graph and populate it with the same data, and no errors were raised there. Because I'm on holiday I don't have access to my work computer to compare versions, but both systems were installed during the summer, so there's a good chance the versions are the same.
The question
I don't know if this is an issue with python-arango or with ArangoDB. Because there is no problem when the TTL is increased, I believe this could indicate an issue with ArangoDB rather than the Python driver, but I cannot be sure.
(I've added a feature request to add ttl-param to the .all()-method here.)
Any insights into why this is happening?
I don't have the rep to create the tag "python-arango", so it would be great if someone would create it and tag my question.
Inside the server, the all() call is executed as a simple query.
As discussed on the referenced GitHub issue, simple queries don't support the TTL parameter and won't get it.
The preferred solution here is to use an AQL query on the client, so that you can specify the TTL parameter.
In general you should refrain from pulling all documents from the database at once, since this may introduce other scaling issues. You should use proper AQL with FILTER statements backed by indices (use explain() to revalidate) to fetch only the documents you require.
If you need to iterate over all documents in the database, use paging. This is usually best implemented by combining a range FILTER with a LIMIT clause:
FOR x IN docs
    FILTER x.offsetteableAttribute > @lastDocumentWithThisID
    LIMIT 200
    RETURN x
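A rough sketch (not part of the original answer) of driving that paged query from python-arango; offsetteableAttribute is the placeholder attribute from the AQL above, and a SORT is added so the range filter pages deterministically:
last_seen = ""  # assumes a string attribute; start below its smallest value
while True:
    cursor = db.aql.execute(
        """
        FOR x IN docs
            FILTER x.offsetteableAttribute > @last
            SORT x.offsetteableAttribute
            LIMIT 200
            RETURN x
        """,
        bind_vars={"last": last_seen},
    )
    batch = list(cursor)
    if not batch:
        break  # no more pages
    for x in batch:
        create_author_edge(x)
    last_seen = batch[-1]["offsetteableAttribute"]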
So here is how I did it. The **moreArgs parameter makes it easy to do.
Looking at the source, you can see the docstring tells you what to do:
def AQLQuery(self, query, batchSize = 100, rawResults = False, bindVars = None, options = None, count = False, fullCount = False,
             json_encoder = None, **moreArgs):
    """Set rawResults = True if you want the query to return dictionnaries instead of Document objects.
    You can use **moreArgs to pass more arguments supported by the api, such as ttl=60 (time to live)"""
from pyArango.connection import *

conn = Connection(username=usr, password=pwd, arangoURL=url)  # set this how ya need
db = conn['dbName']  # set this to the name of your database
aql = "FOR j IN journals RETURN j"  # the AQL equivalent of iterating journals.all()
results = db.AQLQuery(aql, ttl=300)  # ttl is passed through **moreArgs
That's all ya need to do!
