I added a document to Marqo with add_documents(), but I didn't pass an _id, and now I'm trying to retrieve the document without knowing what its document ID is.
Here is what my code looks like:
import marqo

mq = marqo.Client(url='http://localhost:8882')
mq.index("my-first-index").add_documents([
{
"Title": title,
"Description": document_body
}]
)
I checked whether the document got added:
no_of_docs = mq.index("my-first-index").get_stats()
print(no_of_docs)
I got:
{'numberOfDocuments': 1}
meaning it was added.
If you don't include an "_id" key/value pair, Marqo generates a random ID for you by default. To find it, you can search for the document using its Title:
doc = mq.index("my-first-index").search(title_of_your_document, searchable_attributes=['Title'])
You should get a dictionary as the result, something like this:
{'hits': [{'Description': your_description,
           'Title': title_of_your_document,
           '_highlights': relevant part of the doc,
           '_id': 'ac14f87e-50b8-43e7-91de-ee72e1469bd3',
           '_score': 1.0}],
 'limit': 10,
 'processingTimeMs': 122,
 'query': 'The Premier League'}
The part that says _id is the ID of your document.
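If you want the ID programmatically, here is a small sketch (variable names follow the example above; get_document is my assumption about the Marqo client's fetch-by-id helper):

result = mq.index("my-first-index").search(
    title_of_your_document, searchable_attributes=["Title"]
)
doc_id = result["hits"][0]["_id"]  # the generated id from the search hit
print(doc_id)

# With the id in hand you can fetch the document directly
# (get_document is an assumption, not confirmed by the question):
document = mq.index("my-first-index").get_document(document_id=doc_id)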
I wanted to add new keys to an existing object in a MongoDB document. I am trying to update the specific object with an update query, but I don't see the new keys in the database.
I have an object like this:
{'_id': 'patent_1023',
 'raw': {'id': 'CN-109897889-A',
         'title': 'A kind of LAMP(ring mediated isothermal amplification) product visible detection method',
         'assignee': '北京天恩泽基因科技有限公司',
         'inventor/author': '徐堤',
         'priority_date': '2019-04-17',
         'filing/creation_date': '2019-04-17',
         'publication_date': '2019-06-18',
         'grant_date': None,
         'result_link': 'https://patents.google.com/patent/CN109897889A/en',
         'representative_figure_link': None
 },
 'source': 'Google Patent'}
I added two new keys under 'raw' and want to update only 'raw' with the new keys 'abstract' and 'description'.
Here is what I have done:
d = client.find_one({'_id': {'$in': ids}})
d['raw'].update(missing_data) # missing_data contain new keys to be added in raw.
here = client.find_one_and_update({'_id': d['_id']}, {'$set': {"raw": d['raw']}})
Both update_one and update_many will work with this:
missing_data = {'abstract': 'a book', 'description': 'a fun book'}
ids = ['patent_1023', 'X']

rc = db.foo.update_one(
    {'_id': {'$in': ids}},
    # Use the pipeline form of update to exploit richer aggregation framework
    # functions like $mergeObjects. Below we are saying "take the
    # incoming raw object, overlay the missing_data object on top of
    # it, and then set that back into raw and save":
    [{'$set': {
        'raw': {'$mergeObjects': ['$$ROOT.raw', missing_data]}
    }}]
)
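If you only need to add a fixed, known set of keys, a simpler alternative (just a sketch, reusing the same missing_data as above, not part of the answer itself) is dot notation with a plain $set:

db.foo.update_one(
    {'_id': 'patent_1023'},
    {'$set': {
        'raw.abstract': missing_data['abstract'],
        'raw.description': missing_data['description'],
    }}
)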
I want to find the duplicated documents in my MongoDB collection based on name. I have the following code:
from pymongo import MongoClient


def Check_BFA_DB(options):
    issue_list = []
    client = MongoClient(options.host, int(options.port))
    db = client[options.db]
    collection = db[options.collection]
    names = [{'$project': {'name': '$name'}}]
    name_cursor = collection.aggregate(names, cursor={})
    for name in name_cursor:
        issue_list.append(name)
        print(name)
It prints all names; how can I print only the duplicated ones?
Any help is appreciated!
The following query will show only duplicates:
db['collection_name'].aggregate([{'$group': {'_id':'$name', 'count': {'$sum': 1}}}, {'$match': {'count': {'$gt': 1}}}])
How it works:
Step 1:
Go over the whole collection, and group the documents by the property called name, and for each name count how many times it is used in the collection.
Step 2:
Filter (using the $match stage) to keep only the groups in which the count is greater than 1 (the $gt operator).
An example (written for mongo shell, but can be easily adapted for python):
db.a.insert({name: "name1"})
db.a.insert({name: "name1"})
db.a.insert({name: "name2"})
db.a.aggregate([{"$group": {_id:"$name", count: {"$sum": 1}}}, {$match: {count: {"$gt": 1}}}])
Result is { "_id" : "name1", "count" : 2 }
So your code should look something like this:
def Check_BFA_DB(options):
    issue_list = []
    client = MongoClient(options.host, int(options.port))
    db = client[options.db]
    name_cursor = db[options.collection].aggregate([
        {'$group': {'_id': '$name', 'count': {'$sum': 1}}},
        {'$match': {'count': {'$gt': 1}}}
    ])
    for document in name_cursor:
        name = document['_id']
        issue_list.append(name)
        print(name)
BTW (not related to the question), the Python naming convention for function names is lowercase with underscores, so you might want to call it check_bfa_db().
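If you also want to know which documents share each duplicated name, a variant of the same pipeline (my sketch, not part of the answer above) can collect the _ids with $addToSet:

duplicates = db[options.collection].aggregate([
    {'$group': {
        '_id': '$name',
        'count': {'$sum': 1},
        'ids': {'$addToSet': '$_id'}   # collect the _ids sharing this name
    }},
    {'$match': {'count': {'$gt': 1}}}
])
for doc in duplicates:
    print(doc['_id'], doc['count'], doc['ids'])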
I have about 1.4 million tweets in a MongoDB collection. I want to find all that are NOT retweets, and I am using Python. The structure of a document is as follows:
{
    '_id': ObjectId('59388c046b0c1901172555b9'),
    'coordinates': None,
    'created_at': datetime.datetime(2016, 8, 18, 17, 17, 12),
    'geo': None,
    'is_quote': False,
    'lang': 'en',
    'text': b'Adam Cole Praises Kevin Owens + A Preview For Next Week\xe2\x80\x99s',
    'tw_id': 766323071976247296,
    'user_id': 2231233110,
    'user_lang': 'en',
    'user_loc': 'main; #Kan1shk3',
    'user_name': 'sheezy0',
    'user_timezone': 'Chennai'
}
I can write a query that works to find the particular tweet from above:
twitter_mongo_collection.find_one({
    'text': b'Adam Cole Praises Kevin Owens + A Preview For Next Week\xe2\x80\x99s'
})
But when I try to find retweets, my code doesn't work. For example, I try to find any tweets that start like this:
'text': b'RT some tweet'
Using this query:
find_one( {'text': {'$regex': "/^RT/" } } )
It doesn't return an error, but it doesn't find anything. I suspect it has something to do with the 'b' at the beginning, before the text starts. I know I also need to put '$not' in there somewhere, but I'm not sure where.
Thanks!
It looks like your regex search is trying to match the string
b'RT'
but you want to match strings like
b'RT some text afterwards'
Also note that PyMongo expects the bare pattern string in $regex, so the JavaScript-style / delimiters end up being matched literally. Try using this instead:
find_one({'text': {'$regex': '^RT.*'}})
I had to decode the 'text' field, which was stored as binary. Then I was able to use
twitter_mongo_collection.find({'text': {'$not': re.compile('^RT.*')}})
to find all the documents that did not start with "RT".
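For completeness, here is a rough sketch (my own illustration, not the poster's code) of that decoding step: rewrite the binary 'text' values as UTF-8 strings so the regex queries can match them. The database and collection names are placeholders.

import re
from pymongo import MongoClient

collection = MongoClient()['twitter']['tweets']  # hypothetical database/collection names

# Rewrite binary 'text' values as UTF-8 strings.
for doc in collection.find({}, {'text': 1}):
    text = doc.get('text')
    if isinstance(text, (bytes, bytearray)):
        collection.update_one(
            {'_id': doc['_id']},
            {'$set': {'text': bytes(text).decode('utf-8', errors='replace')}}
        )

# Afterwards the non-retweets can be queried with a negated regex:
non_retweets = collection.find({'text': {'$not': re.compile('^RT')}})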
I am using a document with a nested structure in which the content is analysed in spite of my telling it "not_analyzed". The document is defined as follows:
import elasticsearch_dsl as es  # assuming this is how `es` is imported


class SearchDocument(es.DocType):
    # Verblijfsobject specific data
    gebruiksdoel_omschrijving = es.String(index='not_analyzed')
    oppervlakte = es.Integer()
    bouwblok = es.String(index='not_analyzed')
    gebruik = es.String(index='not_analyzed')
    panden = es.String(index='not_analyzed')

    sbi_codes = es.Nested({
        'properties': {
            'sbi_code': es.String(index='not_analyzed'),
            'hcat': es.String(index='not_analyzed'),
            'scat': es.String(index='not_analyzed'),
            'hoofdcategorie': es.String(fields={'raw': es.String(index='not_analyzed')}),
            'subcategorie': es.String(fields={'raw': es.String(index='not_analyzed')}),
            'sub_sub_categorie': es.String(fields={'raw': es.String(index='not_analyzed')}),
            'bedrijfsnaam': es.String(fields={'raw': es.String(index='not_analyzed')}),
            'vestigingsnummer': es.String(index='not_analyzed')
        }
    })
As you can see, it says "not_analyzed" for most fields, and this works fine for the regular fields. The problem is in the nested structure: there the hoofdcategorie and other fields are indexed as their separate words instead of as the unanalysed value.
The structure is filled with the following data:
[
    {
        "sbi_code": "74103",
        "sub_sub_categorie": "Interieur- en ruimtelijk ontwerp",
        "vestigingsnummer": "000000002216",
        "bedrijfsnaam": "Flippie Tests",
        "subcategorie": "design",
        "scat": "22279_12_22254_11",
        "hoofdcategorie": "zakelijke dienstverlening",
        "hcat": "22279_12"
    },
    {
        "sbi_code": "9003",
        "sub_sub_categorie": "Schrijven en overige scheppende kunsten",
        "vestigingsnummer": "000000002216",
        "bedrijfsnaam": "Flippie Tests",
        "subcategorie": "kunst",
        "scat": "22281_12_22259_11",
        "hoofdcategorie": "cultuur, sport, recreatie",
        "hcat": "22281_12"
    }
]
Now when I retrieve aggregates, it has split the hoofdcategorie into 3 different words ("cultuur", "sport", "recreatie"). This is not what I want, but as far as I know I have specified it correctly with "not_analyzed".
Does anyone have any ideas?
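A minimal sketch of one direction to look in (assuming elasticsearch-dsl and a hypothetical index name 'searchdocument', both my assumptions): since the mapping above defines a not_analyzed 'raw' sub-field for hoofdcategorie, a terms aggregation would normally target that sub-field, wrapped in a nested aggregation because sbi_codes is a Nested type.

from elasticsearch_dsl import Search
from elasticsearch_dsl.connections import connections

connections.create_connection(hosts=['localhost'])

# Aggregate on the not_analyzed 'raw' sub-field instead of the analysed field.
s = Search(index='searchdocument')  # index name is a placeholder
s.aggs.bucket('sbi', 'nested', path='sbi_codes') \
      .bucket('hoofdcategorie', 'terms', field='sbi_codes.hoofdcategorie.raw')

response = s.execute()
for bucket in response.aggregations.sbi.hoofdcategorie.buckets:
    print(bucket.key, bucket.doc_count)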
I am trying to run the following query:
from datetime import datetime

data = {
    'user_id': 1,
    'text': 'Lorem ipsum',
    '$inc': {'count': 1},
    '$set': {'updated': datetime.now()},
}
self.db.collection('collection').update({'user_id': 1}, data, upsert=True)
but the two '$' queries cause it to fail. Is it possible to do this within one statement?
First of all, when you ask a question like this it's very helpful to add information on why it's failing (e.g. copy the error).
Your update fails because you're mixing $ operators with a plain document replacement. You should use the $set operator for the user_id and text fields as well (although the user_id part of your update is redundant in this example).
So the update looks like this (mongo shell syntax; a PyMongo version is sketched below):
db.test.update(
    {user_id: 1},
    {$set: {text: "Lorem ipsum", updated: new Date()}, $inc: {count: 1}},
    true,   // upsert
    false   // multi
)
I've removed the user_id in the update because that isn't necessary. If the document exists this value will already be 1. If it doesn't exist the upsert will copy the query part of your update into the new document.
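A rough PyMongo equivalent of that shell update (a sketch; the client, database, and collection names are placeholders):

from datetime import datetime
from pymongo import MongoClient

collection = MongoClient()['mydb']['collection']  # hypothetical names

collection.update_one(
    {'user_id': 1},
    {
        '$set': {'text': 'Lorem ipsum', 'updated': datetime.now()},
        '$inc': {'count': 1},
    },
    upsert=True
)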
If you're trying to do the following:
If the doc doesn't exist, insert a new doc.
If it exists, then only increment one field.
Then you can use a combo of $setOnInsert and $inc. If the song exists, $setOnInsert does nothing and $inc increases the value of "listened". If the song doesn't exist, the upsert creates a new doc with the fields "songId" and "songName", and $inc creates the "listened" field with a value of 1.
const mongoose = require('mongoose');

let songsSchema = new mongoose.Schema({
    songId: String,
    songName: String,
    listened: Number
});

let Song = mongoose.model('Song', songsSchema);

let saveSong = (song) => {
    return Song.updateOne(
        {songId: song.songId},
        {
            $inc: {listened: 1},
            $setOnInsert: {
                songId: song.songId,
                songName: song.songName,
            }
        },
        {upsert: true}
    )
    .then((savedSong) => {
        return savedSong;
    })
    .catch((err) => {
        console.log('ERROR SAVING SONG IN DB', err);
    });
};