How to sort MongoDB with PyMongo - Python

I'm trying to use the sort feature when querying my MongoDB, but it is failing. The same query works in the MongoDB console but not here. Code is as follows:
import pymongo
from pymongo import Connection
connection = Connection()
db = connection.myDB
print db.posts.count()
for post in db.posts.find({}, {'entities.user_mentions.screen_name': 1}).sort({u'entities.user_mentions.screen_name': 1}):
    print post
The error I get is as follows:
Traceback (most recent call last):
  File "find_ow.py", line 7, in <module>
    for post in db.posts.find({}, {'entities.user_mentions.screen_name':1}).sort({'entities.user_mentions.screen_name':1},1):
  File "/Library/Python/2.6/site-packages/pymongo-2.0.1-py2.6-macosx-10.6-universal.egg/pymongo/cursor.py", line 430, in sort
  File "/Library/Python/2.6/site-packages/pymongo-2.0.1-py2.6-macosx-10.6-universal.egg/pymongo/helpers.py", line 67, in _index_document
TypeError: first item in each key pair must be a string
I found a link elsewhere that says I need to place a 'u' in front of the key if using PyMongo, but that didn't work either. Has anyone else gotten this to work, or is this a bug?

.sort(), in PyMongo, takes the key and the direction as separate parameters, not a dict (which is what the mongo shell accepts).
So if you want to sort by, let's say, id, then you should use .sort("_id", 1)
For multiple fields:
.sort([("field1", pymongo.ASCENDING), ("field2", pymongo.DESCENDING)])

You can try this:
db.Account.find().sort("UserName")
db.Account.find().sort("UserName",pymongo.ASCENDING)
db.Account.find().sort("UserName",pymongo.DESCENDING)

This also works:
db.Account.find().sort('UserName', -1)
db.Account.find().sort('UserName', 1)
I'm using this in my code; please comment if I'm doing something wrong here, thanks.

Why does PyMongo use a list of tuples instead of a dict?
Historically, Python could not guarantee that a dictionary's keys would be iterated in the order you declared them (insertion order is only guaranteed from Python 3.7 on).
So, in the mongo shell you can do .sort({'field1':1,'field2':1}) and the server will sort by field1 at the first level and field2 at the second level.
If this syntax were used in Python, there would be a chance of sorting by field2 at the first level. With a list of tuples, there is no such risk.
.sort([("field1",pymongo.ASCENDING), ("field2",pymongo.DESCENDING)])

Sort by _id descending:
collection.find(filter={"keyword": keyword}, sort=[("_id", -1)])
Sort by _id ascending:
collection.find(filter={"keyword": keyword}, sort=[("_id", 1)])

DESC & ASC :
import pymongo
client = pymongo.MongoClient("mongodb://localhost:27017/")
db = client["mydatabase"]
col = db["customers"]
doc = col.find().sort("name", -1) #
for x in doc:
print(x)
###################
import pymongo
client = pymongo.MongoClient("mongodb://localhost:27017/")
db = client["mydatabase"]
col = db["customers"]
doc = col.find().sort("name", 1) #
for x in doc:
print(x)

TL;DR: The aggregation pipeline is faster than the conventional .find().sort().
Now moving to the real explanation. There are two ways to perform sorting operations in MongoDB:
Using .find() and .sort().
Or using the aggregation pipeline.
As suggested by many, .find().sort() is the simplest way to perform sorting.
.sort([("field1",pymongo.ASCENDING), ("field2",pymongo.DESCENDING)])
However, this is a slow process compared to the aggregation pipeline.
Coming to the aggregation pipeline method, the steps to implement a simple aggregation pipeline intended for sorting are:
$match (optional step)
$sort
NOTE: In my experience, the aggregation pipeline works a bit faster than the .find().sort() method.
Here's an example of the aggregation pipeline.
db.collection_name.aggregate([
    {
        "$match": {
            # your query - optional step
        }
    },
    {
        "$sort": {
            "field_1": pymongo.ASCENDING,
            "field_2": pymongo.DESCENDING,
            # ... more fields as needed
        }
    }
])
Try this method yourself, compare the speed and let me know about this in the comments.
Edit: Do not forget to use allowDiskUse=True when sorting on multiple fields; otherwise, a sort that exceeds MongoDB's in-memory sort limit will throw an error.
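For reference, a minimal runnable sketch of the same pipeline in PyMongo (the database, collection, and field names are placeholders):
import pymongo
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
col = client["mydatabase"]["collection_name"]

pipeline = [
    {"$match": {}},  # optional filter stage
    {"$sort": {"field_1": pymongo.ASCENDING, "field_2": pymongo.DESCENDING}},
]

# allowDiskUse lets the server spill large sorts to disk
for doc in col.aggregate(pipeline, allowDiskUse=True):
    print(doc)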

.sort([("field1",pymongo.ASCENDING), ("field2",pymongo.DESCENDING)])
Python uses key,direction. You can use the above way.
So in your case you can do this
for post in db.posts.find().sort('entities.user_mentions.screen_name',pymongo.ASCENDING):
print post

Say you want to sort by the 'created_on' field; then you can do it like this:
.sort('created_on', 1 if sort_type == 'asc' else -1)
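Wrapped in a small helper, a minimal sketch (get_sorted_posts and its parameters are hypothetical names for illustration):
def get_sorted_posts(collection, sort_field='created_on', sort_type='asc'):
    # Map the human-readable direction onto PyMongo's 1/-1 convention
    direction = 1 if sort_type == 'asc' else -1
    return collection.find().sort(sort_field, direction)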

Related

Python Mongodb sorting too big, how to use index?

I'm trying to iterate in Python over all elements of a large MongoDB database.
Usually, I do:
from pymongo import MongoClient

mgclient = MongoClient('mongodb://user:pwd@0.0.0.0:27017')
mgdb = mgclient['mongo']
mgcol = mgdb['name']
for mg_ob in mgcol.find().sort('Date').sort('time'):
    #DOTHINGS
But it says "Sort operation used more than the maximum 33554432 bytes of RAM. Add an index, or specify a smaller limit".
So I created an index named 'SortedTime', but I don't understand how I can use it now.
Basically, I'm trying to have something like:
mgclient = MongoClient('mongodb://user:pwd@0.0.0.0:27017')
mgdb = mgclient['mongo']
mgcol = mgdb['name']
for mg_ob in mgcol.find()['SortedTime']:
    #DOTHINGS
Any ideas ? A little hand would be much appreciated.
I hope this post will help others. Thank you very much
Update:
I managed to make it work thanks to Joe. After I created the Index:
resp = mgcol.create_index(
    [
        ("date", 1),
        ("time", 1)
    ]
)
print("index response:", resp)
What I did was just:
mgclient = MongoClient('mongodb://user:pwd@0.0.0.0:27017')
mgdb = mgclient['mongo']
mgcol = mgdb['name']
for mg_ob in mgcol.find():
    #DOTHINGS
No need to use the index name.
Your query is sorting on 2 fields, Date and time, so you will need an index that includes these fields first in the key specification.
Working from the mongo shell, you might use the createIndex shell helper:
db.getSiblingDB("mongo").getCollection("name").createIndex({Date:1, time:1})
Working from the client side, you might use the createIndexes database command.
Once the index has been created, query just like you did before and the mongod's query executor should use the index.
You can use explain() to get detailed query execution stages to see which indexes were considered and the comparative performance of each.
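From PyMongo, the equivalent index creation and a quick check of the query plan might look like this (a sketch; the database, collection, and field names follow the question):
from pymongo import MongoClient, ASCENDING

mgclient = MongoClient('mongodb://user:pwd@0.0.0.0:27017')
mgcol = mgclient['mongo']['name']

# Compound index matching the sort specification
mgcol.create_index([('Date', ASCENDING), ('time', ASCENDING)])

# Note: pass both fields in one .sort() call; chained .sort() calls
# replace each other rather than composing
cursor = mgcol.find().sort([('Date', ASCENDING), ('time', ASCENDING)])
print(cursor.explain())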

How to use MongoDB find() to perform range queries on numeric strings?

How can I run a find() in MongoDB that uses a range comparison such as >= against some value, when that value is a numeric string?
If I run the following line (that searches the MongoDB database for modes higher than 1):
cursor = db.foo.find({"mode": {"$gt": 1}})
This will work only if the data in MongoDB is in the format:
data = {"mode":3}
But I need to use the find() with this data:
data = {"mode":'3'} # as string
How can I do this?
Here is my example:
from pymongo import MongoClient

client = MongoClient()
db = client.test
db.foo.drop()
data = {"mode": 3}    # Works because this is numeric
data = {"mode": '3'}  # Won't work! But my database contains only numeric strings... how can I use it like this?
db.foo.insert_one(data)
print(db.foo.count())
cursor = db.foo.find({"mode": {"$gt": 1}})
for document in cursor:
    print(document)
If you leave your numeric data stored in the database as strings, in order to query your data with range operators such as $gt and $lt you're going to have to use one of two approaches.
First, you can use JavaScript's automatic conversion to run your range queries. This works as shown below, but it is very limited as you will not be able to use any indexes, as explained in the comments to previous answers. Thus for big data sets, this will be prohibitively slow.
db.foo.find("this.mode > 1");
A second approach would involve regular expressions. You will have to figure out what regex to use, but once you have that, you can use the syntax below to run your query or use the $regex operator as highlighted here.
db.foo.find({ mode: /pattern/<options> });
Aside from having to figure out some complex regex, again there are possible performance issues with this approach, as explained here (see extract below). Most likely, you will also run into issues where your query is not taking advantage of indexes.
If an index exists for the field, then MongoDB matches the regular expression against the values in the index, which can be faster than a collection scan. Further optimization can occur if the regular expression is a “prefix expression”, which means that all potential matches start with the same string. This allows MongoDB to construct a “range” from that prefix and only match against those values from the index that fall within that range.
Because of this, if you're going to be running these queries often, I would recommend that you follow a third approach, which would be to change your schema and store your data as numbers. You can achieve this with a simple migration script such as the following in JavaScript, which you could run in the shell.
var cursor = db.foo.find();
while (cursor.hasNext()) {
    var doc = cursor.next();
    var _id = doc._id;
    if (doc.mode) {
        var modeString = doc.mode;
        var modeInt = parseInt(modeString);
        db.foo.update({ _id: _id }, { $set: { mode: modeInt } });
    }
}
Having done that you will be able to query your data using operators such as $gt and $lt, sort it without much hassle, and take advantage of indexes.
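If you would rather run the migration from Python, a minimal PyMongo sketch of the same idea (assuming every mode value is a parseable numeric string):
from pymongo import MongoClient

client = MongoClient()
db = client.test

# Convert every string-typed mode to an int, one document at a time
for doc in db.foo.find({"mode": {"$type": "string"}}):
    db.foo.update_one({"_id": doc["_id"]},
                      {"$set": {"mode": int(doc["mode"])}})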
From Mongo docs,
$type selects the documents where the value of the field is an instance of the specified BSON type. Querying by data type is useful when dealing with highly unstructured data where data types are not predictable.
{ field: { $type: BSON type number | String alias } }
$type returns documents where the BSON type of the field matches the BSON type passed to $type.
I guess you'll have to pass the $type explicitly in your case, which might be:
db.foo.find({"mode": {"$type": "string"}})
You could try this syntax (JavaScript's automatic conversion):
db.test.find("this.mode > 1")
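From PyMongo, the same JavaScript-evaluation approach can be expressed with the $where operator (a sketch; note that, as above, it cannot use indexes):
from pymongo import MongoClient

db = MongoClient().test
# $where evaluates JavaScript on the server for every candidate document
for document in db.foo.find({"$where": "this.mode > 1"}):
    print(document)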

ObjectID generated by server on pymongo

I am using pymongo (python module for mongodb).
I want the ObjectID to be created automatically by the server, however it seems to be created by pymongo itself when we don't specify it.
The problem it raises is that I use the ObjectID to sort by time (by just sorting on the _id field). However, it seems that it uses the clock of each client machine, so we cannot truly rely on it.
Any idea on how to solve this problem?
If you call save and pass it a document without an _id field, you can force the server to add the _id instead of the client by setting the (enigmatically-named) manipulate option to False:
coll.save({'foo': 'bar'}, manipulate=False)
I'm not a Python user, but I'm afraid there's no way to have the server generate the _id. For performance reasons, the _id is always generated by the driver, so when you insert a document you don't need to do another query to get the _id back.
Here's a possible way you can do it by generating an int sequence _id, just like the IDENTITY ID of SQL Server. To do this, you need to keep a counter document in a dedicated collection; for example, in my project there's a seed collection, which has only one record:
{_id: ObjectId("..."), seqNo: 1 }
The trick is, you have to use findAndModify to keep the find and modify in the same "transaction".
var idSeed = db.seed.findAndModify({
    query: {},
    sort: {seqNo: 1},
    update: { $inc: { seqNo: 1 } },
    new: false
});
var id = idSeed.seqNo;
This way you'll have all you instances get a unique sequence# and you can use it to sort the records.
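In PyMongo, the same counter pattern can be written with find_one_and_update; a minimal sketch (the database name is a placeholder, the seed collection and seqNo field follow the example above):
from pymongo import MongoClient, ReturnDocument

client = MongoClient()
seed = client['mydb']['seed']

# Atomically read and increment the counter in one round trip
id_seed = seed.find_one_and_update(
    {},
    {'$inc': {'seqNo': 1}},
    return_document=ReturnDocument.BEFORE,
)
next_id = id_seed['seqNo']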

How to query a couchdb view using a composite key?

I have a couchdb view "record_by_date_product" with the following definition:
function(doc) {
    emit([doc.logtime, doc.product_id], doc);
}
I am trying to run a query which is something like:
(logtime > fromdate & logtime < todate) & product_id in (1,2,6)
Is this possible with this view?
I am also using couchdb python library to access couchdb. Here is a code snippet:
server = couchdb.Server()
db = server['mydb']
results = db.view('_design/record_by_date_product/_view/record_by_date_product')
This page http://packages.python.org/CouchDB/client.html#viewresults specifies that we can use a startkey and endkey. But I am not able to get it working.
Thanks
I think I just found the exact answer:
Design a view 'sampleview' which is like:
{
    "records_by_date_product": {
        "map": "function(doc) {\n  emit([doc.prod_id, doc.logtime], doc);\n}"
    }
}
Let us say that the query parameters are:
prod_id in [1,3]
from_date = '2010-01-01 00:00:00'
to_date = '2010-01-02 00:00:00'
Then you will have to run 2 separate queries on the same view:
http://localhost:5984/db/_design/sampleview/_view/records_by_date_product?startkey=[1,"2010-01-01%2000:00:00"]&endkey=[1,"2010-01-02%2000:00:00"]
http://localhost:5984/db/_design/sampleview/_view/records_by_date_product?startkey=[3,"2010-01-01%2000:00:00"]&endkey=[3,"2010-01-02%2000:00:00"]
Notice that the same query is run each time, except that the prod_id is changed (one query per prod_id in the input list). The results have to be collated later. Hope this helps!
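With the couchdb Python library, the same two ranged queries might look like this (a sketch; the database, view, and date values follow the example above):
import couchdb

server = couchdb.Server()
db = server['mydb']

results = []
for prod_id in [1, 3]:
    # One ranged query per product id, collated afterwards
    rows = db.view(
        'sampleview/records_by_date_product',
        startkey=[prod_id, '2010-01-01 00:00:00'],
        endkey=[prod_id, '2010-01-02 00:00:00'],
    )
    results.extend(rows)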
That exact query is not possible. As the documentation suggests, you can get everything in a view in a particular key range. Views are sorted data structures, so all CouchDB does to fulfill this request is locate the start key and begin returning items until you hit the end key.
The strategy you should use for this query depends on characteristics of the data itself. Most importantly, will you waste a lot of time weeding out items if you use only the first part of the key (logtime) and iterate through those in Python, weeding out items where product_id won't match? If so, you should consider writing another view that is primarily sorted by product_id. If not, go ahead and use the weed-out approach.
How about this solution:
Create a view for each product, with logtime as the index.
Access each view as required and filter the results using the range [fromdate, todate].
Do the above for each product in the input parameters and collate the results.
This has a drawback: for every product we will have to create a view, and this looks like a manual process.
Just a thought! Let me know your views.

mongodb: insert if not exists

Every day, I receive a stock of documents (an update). What I want to do is insert each item that does not already exist.
I also want to keep track of the first time I inserted them, and the last time I saw them in an update.
I don't want to have duplicate documents.
I don't want to remove a document which has previously been saved, but is not in my update.
95% (estimated) of the records are unmodified from day to day.
I am using the Python driver (pymongo).
What I currently do is (pseudo-code):
for each document in update:
    existing_document = collection.find_one(document)
    if not existing_document:
        document['insertion_date'] = now
    else:
        document = existing_document
    document['last_update_date'] = now
    my_collection.save(document)
My problem is that it is very slow (40 mins for less than 100 000 records, and I have millions of them in the update).
I am pretty sure there is something built in for doing this, but the documentation for update() is mmmhhh... a bit terse... (http://www.mongodb.org/display/DOCS/Updating )
Can someone advise how to do it faster?
Sounds like you want to do an upsert. MongoDB has built-in support for this. Pass an extra parameter to your update() call: {upsert:true}. For example:
key = {'key': 'value'}
data = {'key2': 'value2', 'key3': 'value3'}
coll.update(key, data, upsert=True)  # in Python, upsert must be passed as a keyword argument
This replaces your if-find-else-update block entirely. It will insert if the key doesn't exist and will update if it does.
Before:
{"key":"value", "key2":"Ohai."}
After:
{"key":"value", "key2":"value2", "key3":"value3"}
You can also specify what data you want to write:
data = {"$set":{"key2":"value2"}}
Now your selected document will update the value of key2 only and leave everything else untouched.
As of MongoDB 2.4, you can use $setOnInsert (http://docs.mongodb.org/manual/reference/operator/setOnInsert/)
Set insertion_date using $setOnInsert and last_update_date using $set in your upsert command.
To turn your pseudocode into a working example:
from datetime import datetime

now = datetime.utcnow()
for document in update:
    collection.update_one(
        filter={
            '_id': document['_id'],
        },
        update={
            '$setOnInsert': {
                'insertion_date': now,
            },
            '$set': {
                'last_update_date': now,
            },
        },
        upsert=True,
    )
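Since the question is also about speed, batching these upserts with bulk_write cuts the round trips dramatically; a minimal sketch of the same logic:
from datetime import datetime
from pymongo import UpdateOne

now = datetime.utcnow()
ops = [
    UpdateOne(
        {'_id': document['_id']},
        {'$setOnInsert': {'insertion_date': now},
         '$set': {'last_update_date': now}},
        upsert=True,
    )
    for document in update
]
# One round trip for the whole batch instead of one per document
result = collection.bulk_write(ops, ordered=False)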
You could always make a unique index, which causes MongoDB to reject a conflicting save. Consider the following done using the mongodb shell:
> db.getCollection("test").insert ({a:1, b:2, c:3})
> db.getCollection("test").find()
{ "_id" : ObjectId("50c8e35adde18a44f284e7ac"), "a" : 1, "b" : 2, "c" : 3 }
> db.getCollection("test").ensureIndex ({"a" : 1}, {unique: true})
> db.getCollection("test").insert({a:2, b:12, c:13}) # This works
> db.getCollection("test").insert({a:1, b:12, c:13}) # This fails
E11000 duplicate key error index: foo.test.$a_1 dup key: { : 1.0 }
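The PyMongo equivalent would be along these lines (a sketch; the database and collection names are placeholders):
from pymongo import MongoClient, ASCENDING
from pymongo.errors import DuplicateKeyError

coll = MongoClient()['foo']['test']
coll.create_index([('a', ASCENDING)], unique=True)

coll.insert_one({'a': 2, 'b': 12, 'c': 13})  # works
try:
    coll.insert_one({'a': 2, 'b': 99, 'c': 99})  # duplicate key on 'a'
except DuplicateKeyError:
    pass  # the conflicting insert is rejected by the server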
You may use upsert with the $setOnInsert operator.
db.Table.update({noExist: true}, {"$setOnInsert": {xxxYourDocumentxxx}}, {upsert: true})
Summary
You have an existing collection of records.
You have a set of records that contain updates to the existing records.
Some of the updates don't really update anything, they duplicate what you have already.
All updates contain the same fields that are there already, just possibly different values.
You want to track when a record was last changed, where a value actually changed.
Note, I'm presuming PyMongo, change to suit your language of choice.
Instructions:
Create the collection with an index with unique=true so you don't get duplicate records.
Iterate over your input records, creating batches of around 15,000 records. For each record in the batch, create a dict consisting of the data you want to insert, presuming each one is going to be a new record. Add the 'created' and 'updated' timestamps to these. Issue this as a batch insert command with the ContinueOnError flag set to true, so the insert of everything else happens even if there's a duplicate key in there (which it sounds like there will be). THIS WILL HAPPEN VERY FAST. Bulk inserts rock; I've gotten 15k/second performance levels. Further notes on ContinueOnError: http://docs.mongodb.org/manual/core/write-operations/
Record inserts happen VERY fast, so you'll be done with those inserts in no time. Now it's time to update the relevant records. Do this with a batch retrieval, much faster than one at a time.
Iterate over all your input records again, creating batches of 15K or so. Extract the keys (best if there's one key, but it can't be helped if there isn't). Retrieve this bunch of records from Mongo with a db.collectionNameBlah.find({ field: { $in: [1, 2, 3, ...] } }) query. For each of these records, determine if there's an update, and if so, issue the update, including updating the 'updated' timestamp.
Unfortunately, we should note, MongoDB 2.4 and below do NOT include a bulk update operation. They're working on that.
Key Optimization Points:
The inserts will vastly speed up your operations in bulk.
Retrieving records en masse will speed things up, too.
Individual updates are the only possible route now, but 10Gen is working on it. Presumably, this will be in 2.6, though I'm not sure if it will be finished by then, there's a lot of stuff to do (I've been following their Jira system).
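In current PyMongo, the ContinueOnError batch insert maps to insert_many with ordered=False; a minimal sketch of the fast first pass (assuming the unique index described above, with input_records as a hypothetical name for your update set):
from datetime import datetime
from pymongo.errors import BulkWriteError

now = datetime.utcnow()
batch = [dict(record, created=now, updated=now) for record in input_records]

try:
    # ordered=False keeps inserting past duplicate-key errors,
    # the modern equivalent of ContinueOnError
    collection.insert_many(batch, ordered=False)
except BulkWriteError:
    pass  # duplicates were skipped; everything else was inserted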
I don't think mongodb supports this type of selective upserting. I have the same problem as LeMiz, and using update(criteria, newObj, upsert, multi) doesn't work right when dealing with both a 'created' and 'updated' timestamp. Given the following upsert statement:
update( { "name": "abc" },
{ $set: { "created": "2010-07-14 11:11:11",
"updated": "2010-07-14 11:11:11" }},
true, true )
Scenario #1 - document with 'name' of 'abc' does not exist:
New document is created with 'name' = 'abc', 'created' = 2010-07-14 11:11:11, and 'updated' = 2010-07-14 11:11:11.
Scenario #2 - document with 'name' of 'abc' already exists with the following:
'name' = 'abc', 'created' = 2010-07-12 09:09:09, and 'updated' = 2010-07-13 10:10:10.
After the upsert, the document would now be the same as the result in scenario #1. There's no way to specify in an upsert which fields should be set if inserting, and which fields should be left alone if updating.
My solution was to create a unique index on the criteria fields, perform an insert, and immediately afterward perform an update just on the 'updated' field.
1. Use Update.
Drawing from Van Nguyen's answer above, use update instead of save. This gives you access to the upsert option.
NOTE: This method overrides the entire document when found (From the docs)
var conditions = { name: 'borne' },
    update = { $inc: { visits: 1 } },
    options = { multi: true };

Model.update(conditions, update, options, callback);

function callback(err, numAffected) {
    // numAffected is the number of updated documents
}
1.a. Use $set
If you want to update a selection of the document, but not the whole thing, you can use the $set method with update. (again, From the docs)...
So, if you want to set...
var query = { name: 'borne' };
Model.update(query, { name: 'jason borne' }, options, callback)
Send it as...
Model.update(query, { $set: { name: 'jason borne' } }, options, callback)
This helps prevent accidentally overwriting all of your document(s) with { name: 'jason borne' }.
In general, using update is better in MongoDB, as it will just create the document if it doesn't exist yet, though I'm not sure how to work that with your Python adapter.
Second, if you only need to know whether or not the document exists, count(), which returns only a number, is a better option than find_one, which transfers the whole document from MongoDB, causing unnecessary traffic.
Method for PyMongo
(the official MongoDB driver for Python)
5% of the time you may want to update and overwrite, while the rest of the time you'd like to insert a new row; this is done with update_one and upsert.
95% (estimated) of the records are unmodified from day to day.
The following solution is taken from this core mongoDB function:
db.collection.updateOne(filter, update, options)
Updates a single document within the collection based on the filter.
This is done with PyMongo's function update_one(filter, new_values, upsert=True)
Code Example:
# importing PyMongo's MongoClient
from pymongo import MongoClient

conn = MongoClient('localhost', 27017)
db = conn.databaseName

# Filter that identifies the document to upsert
filter = { 'user_id': '4142480', 'question_id': '2801008' }

# Set the vote count on the matched (or newly created) document
new_values = { "$set": { 'votes': 1400 } }

# Using the update_one() method for a single update with upsert.
db.collectionName.update_one(filter, new_values, upsert=True)
What does upsert=True do?
Creates a new document if no documents match the filter.
Updates a single document that matches the filter.
Nowadays I would also propose using await (with an async driver such as Motor).
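For completeness, the same upsert with the Motor async driver might look like this (a sketch, assuming an asyncio application; the names follow the example above):
import asyncio
from motor.motor_asyncio import AsyncIOMotorClient

async def upsert_votes():
    client = AsyncIOMotorClient('mongodb://localhost:27017')
    coll = client['databaseName']['collectionName']
    # Same update_one/upsert semantics as PyMongo, but awaitable
    await coll.update_one(
        {'user_id': '4142480', 'question_id': '2801008'},
        {'$set': {'votes': 1400}},
        upsert=True,
    )

asyncio.run(upsert_votes())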
