Is there any way to add values via aggregation, like db.insert_one?
x = db.aggregate([{
"$addFields": {
"chat_id": -10013345566,
}
}])
I tried this, but the code returns nothing and the values are not updated.
I want to add the values via aggregation, because aggregation is much faster than the alternatives.
Sample documents:
{"_id": 123, "chat_id": 125}
{"_id": 234, "chat_id": 1325}
{"_id": 1323, "chat_id": 335}
Expected output:
Alternative to db.insert_one() in MongoDB aggregation
You have to make use of the $merge stage to save the output of the aggregation back to the collection.
Note: Be very careful when you use the $merge stage, as you can accidentally replace entire documents in your collection. Go through the complete documentation of this stage before using it.
db.collection.aggregate([
{
"$match": {
"_id": 123
}
},
{
"$addFields": {
"chat_id": -10013345566,
}
},
{
"$merge": {
"into": "collection", // <- Collection Name
"on": "_id", // <- Merge operation match key
"whenMatched": "merge" // <- Operation to perform when matched
}
},
])
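From Python, the same pipeline is just a list of dictionaries passed to PyMongo's aggregate. This is a sketch, not from the original answer; the collection name "collection" is illustrative, and the final call assumes a live `db` handle:

```python
# The $match / $addFields / $merge pipeline as PyMongo-style dictionaries.
pipeline = [
    {"$match": {"_id": 123}},
    {"$addFields": {"chat_id": -10013345566}},
    {"$merge": {
        "into": "collection",    # target collection name (illustrative)
        "on": "_id",             # merge operation match key
        "whenMatched": "merge",  # merge new fields into the matched document
    }},
]

# db.collection.aggregate(pipeline)  # uncomment with a live connection
```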
Mongo Playground Sample Execution
I have data of the form:
{
  '_id': 'asdf123b51234',
  'field2': 0,
  'array': [
    {
      'unique_array_elem_id': id,
      'nested_field': {
        'new_field_i_want_to_add': value
      }
    },
    ...
  ]
}
I have been trying to update like this:
for doc in update_dict:
    collection.find_one_and_update(
        {'_id': doc['_id']},
        {'$set': {
            'array.$[elem].nested_field.new_field_i_want_to_add': doc['new_field_value']
        }},
        array_filters=[{'elem.unique_array_elem_id': doc['unique_array_elem_id']}]
    )
But it is painfully slow. Updating all of my data will take several days running continuously. Is there a way to update this nested field for all array elements for a given document at once?
Thanks a lot
I'm parsing some XML data, doing some logic on it, and trying to display the results in an HTML table. The dictionary, after filling, looks like this:
{
"general_info": {
"name": "xxx",
"description": "xxx",
"language": "xxx",
"prefix": "xxx",
"version": "xxx"
},
"element_count": {
"folders": 23,
"conditions": 72,
"listeners": 1,
"outputs": 47
},
"external_resource_count": {
"total": 9,
"extensions": {
"jar": 8,
"json": 1
},
"paths": {
"/lib": 9
}
},
"complexity": {
"over_1_transition": {
"number": 4,
"percentage": 30.769
},
"over_1_trigger": {
"number": 2,
"percentage": 15.385
},
"over_1_output": {
"number": 4,
"percentage": 30.769
}
}
}
Then I'm using pandas to convert the dictionary into a table, like so:
data_frame = pandas.DataFrame.from_dict(data=extracted_metrics, orient='index').stack().to_frame()
The result is a table that is mostly correct:
While the first and second levels seem to render correctly, categories with a sub-sub-category get written as a string in the cell rather than as a further column. I've also tried using stack(level=1), but it raises "IndexError: Too many levels: Index has only 1 level, not 2". I've also tried making it into a series, with no luck. It seems like it only renders "complete" columns. Is there a way of filling up the empty spaces in the dictionary before processing?
How can I get, for example, external_resource_count -> extensions to have two daughter rows jar and json, with an additional column for the values, so that the final table looks like this:
Extra credit if anyone can tell me how to get rid of the first row with the index numbers. Thanks!
The way you load the dataframe is correct, but you should rename the 0 column to a proper column name.
# this function extracts all the keys from your nested dicts
def explode_and_filter(df, filterdict):
    return [df[col].apply(lambda x: x.get(k) if isinstance(x, dict) else x).rename(f'{k}')
            for col, nested in filterdict.items()
            for k in nested]

data_frame = pd.DataFrame.from_dict(data=extracted_metrics, orient='index').stack().to_frame(name='somecol')

# let's separate the rows where a dict is present & explode only those rows
mask = data_frame.somecol.apply(lambda x: isinstance(x, dict))
expp = explode_and_filter(data_frame[mask],
                          {'somecol': ['jar', 'json', '/lib', 'number', 'percentage']})

# here we concat the exploded series into a frame
exploded_df = pd.concat(expp, axis=1).stack().to_frame(name='somecol2') \
    .reset_index(level=2).rename(columns={'level_2': 'somecol'})

# and now we concat the rows with dict elements with the rows with non-dict elements
out = pd.concat([data_frame[~mask], exploded_df])
The output dataframe looks like this
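As an aside (not from the original answer), pandas.json_normalize can flatten arbitrarily nested dicts into dotted column names in one call, which may already be enough for a report table. A sketch, using a small stand-in for the dict from the question:

```python
import pandas as pd

# `extracted_metrics` here is a reduced stand-in for the question's dict.
extracted_metrics = {
    "general_info": {"name": "xxx", "version": "xxx"},
    "external_resource_count": {"total": 9, "extensions": {"jar": 8, "json": 1}},
}

# One row, with columns like "external_resource_count.extensions.jar".
flat = pd.json_normalize(extracted_metrics, sep=".")

# Transpose to get one metric per row, values in a single named column.
table = flat.T.rename(columns={0: "value"})
```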
I have to perform an aggregation on MongoDB in Python and am unable to do so.
Below is the structure of the MongoDB document extracted:
{'Category': 'Male',
'details' :[{'name':'Sachin','height': 6},
{'name':'Rohit','height': 5.6},
{'name':'Virat','height': 5}
]
}
I want to return the height where the name is Sachin using the aggregate function. Basically, my idea is to extract the data with $match, apply the condition, and aggregate, all at the same time in a single aggregate call. This can easily be done in 3 steps with if statements, but I'm looking to do it in one aggregate function.
Please note: the 'details' array has no fixed length.
Let me know if any more explanation is needed.
You can use a $filter to achieve this:
db.collection.aggregate([
{
$project: {
details: {
$filter: {
input: "$details",
cond: {
$eq: [
"$$this.name",
"Sachin"
]
}
}
}
}
}
])
Working Mongo playground
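From Python, the same pipeline is just nested dictionaries (a sketch; the aggregate call itself needs a live collection, so it is left commented out). The plain-Python list comprehension below shows what $filter does to the sample document:

```python
# The $filter pipeline as PyMongo-style dictionaries.
pipeline = [
    {"$project": {
        "details": {
            "$filter": {
                "input": "$details",
                "cond": {"$eq": ["$$this.name", "Sachin"]},
            }
        }
    }}
]

# db.collection.aggregate(pipeline)  # with a live connection

# What $filter does, expressed in plain Python on the sample document:
doc = {"Category": "Male",
       "details": [{"name": "Sachin", "height": 6},
                   {"name": "Rohit", "height": 5.6},
                   {"name": "Virat", "height": 5}]}
filtered = [d for d in doc["details"] if d["name"] == "Sachin"]
```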
If you use find(), you need to be aware of the positional operator:
db.collection.find({
"details.name": "Sachin"
},
{
"details.$": 1
})
Working Mongo playground
If you need to make it an object, you can simply use $arrayElemAt with $ifNull.
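For example (a sketch, not from the original answer), a follow-up stage like the one below would unwrap the filtered one-element array into an object, falling back to null when nothing matched; the plain-Python helper mirrors its behaviour:

```python
# Hypothetical follow-up stage: unwrap the filtered array into an object.
unwrap_stage = {
    "$addFields": {
        "details": {
            "$ifNull": [{"$arrayElemAt": ["$details", 0]}, None]
        }
    }
}

# Equivalent plain-Python behaviour on a filtered result:
def unwrap(details):
    return details[0] if details else None
```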
I use the Python Elasticsearch API.
I have a dataset too large to retrieve using search().
I can retrieve it with helpers.scan(), but the data is too big to be processed rapidly with pandas.
So I learned how to do aggregations to compact the data with Elasticsearch, but using search() I still can't retrieve all the data. I understand that the aggregation is done over the "usual" search size, even if the aggregation would yield a single line?
Finally, I tried aggregations + scan or scroll, but I understand that scan() or scroll() cannot be used for aggregations, because those requests work on subsets of the dataset, and aggregating over the subsets makes no sense.
What is the right way to do aggregations on a very large dataset?
I can't find any relevant solution on the web.
To be more explicit, my case is:
I have X thousand moving sensors transmitting, every hour, the last stop location and the new stop location. The move from the last stop to the new stop can take days, so for days the hourly acquisitions carry no relevant information.
As Elasticsearch output, I only need every unique line of the format:
sensor_id / last_stop / new_stop
If you are using Elasticsearch with pandas, you could try eland, a new official Elastic library written to integrate them better. Try:
from elasticsearch import Elasticsearch

es = Elasticsearch()
body = {
"size": 0,
"aggs": {
"getAllSensorId": {
"terms": {
"field": "sensor_id",
"size": 10000
},
"aggs": {
"getAllTheLastStop": {
"terms": {
"field": "last_stop",
"size": 10000
},
"aggs": {
"getAllTheNewStop": {
"terms": {
"field": "new_stop",
"size": 10000
}
}
}
}
}
}
}
}
list_of_results = []
result = es.search(index="my_index", body=body)
for sensor in result["aggregations"]["getAllSensorId"]["buckets"]:
    for last in sensor["getAllTheLastStop"]["buckets"]:
        for new in last["getAllTheNewStop"]["buckets"]:
            record = {"sensor": sensor['key'], "last_stop": last['key'], "new_stop": new['key']}
            list_of_results.append(record)
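When the number of distinct sensor/stop combinations can exceed the terms-aggregation size limit, a composite aggregation (a different technique, not used in the answer above) pages through every bucket via its after_key. A sketch of how the request body could be built; index and field names are taken from the question:

```python
# Composite aggregation pages through every unique
# (sensor_id, last_stop, new_stop) tuple instead of nesting terms aggs.
def composite_body(after_key=None, page_size=1000):
    body = {
        "size": 0,
        "aggs": {
            "stops": {
                "composite": {
                    "size": page_size,
                    "sources": [
                        {"sensor_id": {"terms": {"field": "sensor_id"}}},
                        {"last_stop": {"terms": {"field": "last_stop"}}},
                        {"new_stop": {"terms": {"field": "new_stop"}}},
                    ],
                }
            }
        },
    }
    if after_key is not None:
        # Resume from the last page's "after_key".
        body["aggs"]["stops"]["composite"]["after"] = after_key
    return body

# Paging loop sketch (needs a live `es` client):
# after_key = None
# while True:
#     result = es.search(index="my_index", body=composite_body(after_key))
#     buckets = result["aggregations"]["stops"]["buckets"]
#     ...  # collect bucket["key"] tuples
#     after_key = result["aggregations"]["stops"].get("after_key")
#     if after_key is None:
#         break
```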
I know about $exists in MongoDB, but I don't know how to combine it with OR logic within find().
I want to find all transactions where the base_currency field either does not exist or has a specific value. At the same time, trade_currency must have a specific value. Here's what I tried, but it doesn't work:
txs = db.transactions.find({
'base_currency': { $or: [{ $exists: true }, { $eq: base_currency }]},
'trade_currency': currency
}).sort([('datetime_closed', 1)])
You can use $and combined with $or like this:
db.transactions.find({
    "$and": [
        {"trade_currency": currency},
        {"$or": [{"base_currency": {"$exists": False}}, {"base_currency": base_currency}]},
    ]
})
If you want to check that a field is missing, you have to use $exists: false.
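Since top-level keys in a find() filter are implicitly ANDed, the same query can also be written without an explicit $and. A sketch with placeholder values for the currency variables:

```python
# Placeholder values; in the question these come from surrounding code.
currency = "USD"
base_currency = "EUR"

# Top-level fields are ANDed implicitly, so only the OR branch needs
# an explicit operator.
query = {
    "trade_currency": currency,
    "$or": [
        {"base_currency": {"$exists": False}},
        {"base_currency": base_currency},
    ],
}

# txs = db.transactions.find(query).sort([("datetime_closed", 1)])
```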