I am using Python with Firebase and have a collection. I need to update all items that have a specific value in a field, and I have found out how to query for all the items having that value.
Now I need to update all the items that I get in return.
What I would like to do is use a single query that finds the correct items with the where clause and updates a certain field to a certain value. I need that functionality in order to keep some data consistent.
Thanks
It is possible, but it takes a few more steps:
# the usual preparation
import firebase_admin
from firebase_admin import credentials, firestore

databaseURL = {'databaseURL': "https://<YOUR-DB>.firebaseio.com"}
cred = credentials.Certificate("<YOUR-SERVICE-KEY>.json")
firebase_admin.initialize_app(cred, databaseURL)
database = firestore.client()
col_ref = database.collection('<YOUR-COLLECTION>')

# Query generator for the documents; get all people named Pepe
results = col_ref.where('name', '==', 'Pepe').get()

# Update Pepe to José
field_updates = {"name": "José"}
for item in results:
    doc = col_ref.document(item.id)
    doc.update(field_updates)
(I use Cloud Firestore, maybe it is different in Realtime Database)
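Each doc.update() in the loop above is a separate round trip. If you also want the updates committed together, Firestore's batched writes can do that, with the caveat that a batch is limited to 500 writes. A minimal sketch (the chunked helper is my own, and the commented usage is untested against a live project):

```python
from itertools import islice

def chunked(iterable, size=500):
    """Yield successive lists of at most `size` items;
    Firestore batches are limited to 500 writes each."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

# Hypothetical usage with the col_ref/results from above:
# for group in chunked(results):
#     batch = database.batch()
#     for item in group:
#         batch.update(col_ref.document(item.id), {"name": "José"})
#     batch.commit()
```

Each commit() then applies up to 500 updates atomically instead of one network call per document.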
You can iterate through the query and find the DocumentReference in each iteration.
db = firestore.client()
a = request_json['message']['a']
b = request_json['message']['b']
ref = db.collection('doc').where(u'a', u'==', a).stream()
for i in ref:
    i.reference.update({"c": "c"})  # get the document reference and use it to update / delete ...
PS: I only have to do this due to business requirements
I'm able to achieve it using mongosh, but since there are multiple records to be updated, I'm trying to implement a simple Python script to automate the task.
Is it possible to do this with pymongo?
// store the document in a variable
doc = db.clients.findOne({_id: ObjectId("4cc45467c55f4d2d2a000002")})
// set a new _id on the document
doc._id = ObjectId("4c8a331bda76c559ef000004")
// insert the document, using the new _id
db.clients.insert(doc)
// remove the document with the old _id
db.clients.remove({_id: ObjectId("4cc45467c55f4d2d2a000002")})
I'm not able to set the new _id in the doc variable in order to insert the new document that will mirror the old one.
Thanks
Here's a pymongo equivalent:
from pymongo import MongoClient
from bson import ObjectId

db = MongoClient()['mydatabase']
old_doc_id = '4cc45467c55f4d2d2a000002'
new_doc_id = '4c8a331bda76c559ef000004'
doc = db.clients.find_one({'_id': ObjectId(old_doc_id)})
if doc is not None:
    # set a new _id on the document
    doc['_id'] = ObjectId(new_doc_id)
    # insert the document, using the new _id
    db.clients.insert_one(doc)
    # remove the document with the old _id
    db.clients.delete_one({'_id': ObjectId(old_doc_id)})
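Since MongoDB treats _id as immutable, this insert-then-delete dance is the only option. One thing to watch is that find_one returns a plain dict, so assigning doc['_id'] mutates it in place; a tiny helper (hypothetical, not part of pymongo) builds the re-keyed copy without touching the original:

```python
def with_new_id(doc, new_id):
    """Return a shallow copy of `doc` with `_id` replaced,
    leaving the original dict untouched."""
    copy = dict(doc)
    copy['_id'] = new_id
    return copy

# e.g. db.clients.insert_one(with_new_id(doc, ObjectId(new_doc_id)))
```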
I'm getting auto-generated IDs in my Firestore collection even though I'm specifying IDs when creating documents.
I am currently load testing my FastAPI app, synchronously for now. My Firestore IDs come from a counter stored in Firebase's Realtime Database. The counter consists of alphanumeric characters and I'm incrementing it in a transaction. I then check whether a document with that ID exists in Firestore; when I find an ID that doesn't exist, I use .set() to create a document with that ID.
def increment_realtimedb() -> str:
    try:
        return rdb.transaction(crement)
    except db.TransactionAbortedError:
        increment_realtimedb()

def insert_firestore(payload: dict):
    new_id = increment_realtimedb()
    doc = collection.document(new_id).get()
    while doc.exists:
        new_id = increment_realtimedb()
        doc = collection.document(new_id).get()
    collection.document(new_id).set(payload)
Figured out that increment_realtimedb() was returning None somehow. I changed the while loop to also check whether new_id was None, and that seems to have fixed the problem.
while doc.exists or new_id is None:
Edit: after further research it turns out that Realtime Database gives up once a transaction maxes out its retries, and the except branch above then calls increment_realtimedb() again without returning its result, so the function falls through and returns None.
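The trap generalizes: a recursive retry that doesn't return its own recursive call silently yields None. A sketch of an iterative retry loop that makes the "ran out of retries" case explicit (the names here are hypothetical, not firebase_admin API):

```python
def retry(op, attempts=3, exceptions=(Exception,)):
    """Call op() up to `attempts` times; return its result,
    or None once every attempt has failed."""
    for _ in range(attempts):
        try:
            return op()
        except exceptions:
            continue
    return None  # the caller must handle this, as in `while doc.exists or new_id is None`
```

Returning None explicitly (or raising instead) keeps the exhausted-retries case visible at the call site rather than hidden in a fall-through.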
I am making a Salesforce Bulk API call to fetch data, using the simple_salesforce library. I want to fetch the data where my id equals a specific value, and I need several responses back because I have a list of ids. My data looks like the following:
ids_dict = [{'ID-1010': 'abc'}, {'ID-1020': 'def'}]
Here is the code:
for key, value in ids_dict.items():
    desired_opp = value
    sql = sf.bulk.OpportunityLineItem.query("SELECT Name, Price FROM OpportunityLineItem where Opportunity_ID__c = '%s'" % desired_opp)
    sql_response = []
    sql_response.append(sql)
What is being returned is a list of multiple responses for the 'def' id, whereas I need one response for each of the respective ids.
In my experience, it is best to use sf.query(query=SOQL) and then sf.query_more(next_URL, True) to fetch the rest of the records.
Since query only returns up to 2,000 records, you need to use .query_more() to get the remaining records.
From the simple-salesforce docs
SOQL queries are done via:
sf.query("SELECT Id, Email FROM Contact WHERE LastName = 'Jones'")
If, due to an especially large result, Salesforce adds a nextRecordsUrl to your query result, such as "nextRecordsUrl" : "/services/data/v26.0/query/01gD0000002HU6KIAW-2000", you can pull the additional results with either the ID or the full URL (if using the full URL, you must pass ‘True’ as your second argument)
sf.query_more("01gD0000002HU6KIAW-2000")
sf.query_more("/services/data/v26.0/query/01gD0000002HU6KIAW-2000", True)
Also, you may want to change your SOQL statement to use "IN" instead of "=" so it matches several values at once. That way you only make one API call to Salesforce until you request more records (this removes the wasted calls from multiple searches of fewer than 2,000 records each).
Here is an example of using this
data = []  # list to hold all the records
SOQL = "SELECT Name, Price FROM OpportunityLineItem where Opportunity_ID__c in ('abc', 'def')"
results = sf.query(query=SOQL)  # api call

## loop through the results and add the records
for rec in results['records']:
    rec.pop('attributes', None)  # remove extra data
    data.append(rec)  # add the record to the list

## check the 'done' attribute in the response to see if there are more records
## while 'done' == False (more records to fetch) get the next page of records
while results['done'] is False:
    ## attribute 'nextRecordsUrl' holds the url to the next page of records
    results = sf.query_more(results['nextRecordsUrl'], True)
    ## repeat the loop of adding the records
    for rec in results['records']:
        rec.pop('attributes', None)
        data.append(rec)
Looping through the records and using the data
## loop through the records and get their attribute values
for rec in data:
    # the attribute name will always be the same as the salesforce api name for that value
    print(rec['Name'])
    print(rec['Price'])
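Since the ids in the question live in a list of single-key dicts ([{'ID-1010': 'abc'}, ...]) rather than in one dict, the IN clause can be built by flattening the values first. A small sketch (build_soql is a hypothetical helper, not part of simple_salesforce):

```python
def build_soql(ids_dict):
    """Flatten a list of single-key dicts and build one IN query."""
    vals = [v for d in ids_dict for v in d.values()]
    quoted = ", ".join("'%s'" % v for v in vals)
    return ("SELECT Name, Price FROM OpportunityLineItem "
            "WHERE Opportunity_ID__c IN (%s)" % quoted)

# build_soql([{'ID-1010': 'abc'}, {'ID-1020': 'def'}]) produces the
# single-call query used in the example above
```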
I am trying to do a simple query based on two conditions in MongoDB using pymongo.
I am using the sample restaurants data set from the tutorial documentation. I have:
from pymongo import MongoClient
import pymongo
import pandas as pd

client = MongoClient()
db = client.test
cursor = db.restaurants.find({"$and": [{'borough': "Manhattan"}, {"grades": {'grade': "A"}}]})
for record in cursor:
    print(record)
I am just trying to print all the restaurants in Manhattan with a grade of 'B', but this pulls back no results. I have also tried
cursor = db.restaurants.find({"borough":"Manhattan", "grades.grade":"B"})
but this will only filter by the first condition and won't filter by the "grade." It's exactly how it is laid out in the documentation but I can't get it to work.
The problem is in the second condition: grades is an array of embedded documents, so use $elemMatch:
db.restaurants.find({"$and": [{"borough": "Manhattan"}, {"grades": {"$elemMatch": {"grade": "A"}}}]})
Works for me.
I had a similar issue and it worked for me with the following syntax:
result = db.mycollection.find({"$and": [{"key1": value1}, {"key2": value2}]})
I had multiple records with the same value under key1, but I wanted only the one with a specific value under key2. It works for me.
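For conditions on distinct keys, the explicit $and is equivalent to simply listing both keys in one filter document; the two filters below (shown just as Python dicts, with field names made up for illustration) select the same documents:

```python
# explicit form, as in the answer above
explicit = {"$and": [{"borough": "Manhattan"}, {"cuisine": "Italian"}]}

# implicit form: top-level keys in one filter document are ANDed together
implicit = {"borough": "Manhattan", "cuisine": "Italian"}

# $and is only required when the same key must appear twice, e.g.
# {"$and": [{"score": {"$gt": 1}}, {"score": {"$lt": 9}}]}
```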
Please forgive me if my question is naive; I am new to Python and I am trying my hand at using collections with pymongo. I have tried to extract the names using
collects = db.collection_names()  # this returns a list with the names of the collections
But when I tried to get the cursor using
cursor = db.collects[1].find()  # this returns a cursor which has no reference to a collection
I understand that the above code uses a string instead of an object. So I was wondering how I could obtain a cursor for each collection in the DB, which I can use later to perform operations such as search and update.
If you are using the pymongo driver, you must use the get_collection method or a dict-style lookup instead. Also, you may want to set include_system_collections to False in collection_names so you don't include system collections (e.g. system.indexes):
import pymongo
client = pymongo.MongoClient()
db = client.db
collects = db.collection_names(include_system_collections=False)
cursor = db.get_collection(collects[1]).find()
or
cursor = db[collects[1]].find()
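Putting this together, you can build one cursor per collection with a dict comprehension over the name list. The pattern only relies on dict-style lookup and .find(), so it is sketched here against stand-in classes (FakeDB/FakeCollection are mine, used because a live server isn't available in this snippet):

```python
class FakeCollection:
    def __init__(self, docs):
        self._docs = docs
    def find(self):
        return iter(self._docs)  # stands in for a pymongo Cursor

class FakeDB:
    def __init__(self, collections):
        self._collections = collections
    def __getitem__(self, name):  # the db[name] lookup pymongo supports
        return self._collections[name]

db = FakeDB({'users': FakeCollection([{'n': 1}]), 'logs': FakeCollection([])})
collects = ['users', 'logs']

# one cursor per collection name, keyed by that name
cursors = {name: db[name].find() for name in collects}
```

With a real pymongo client the comprehension is identical; only db and collects come from MongoClient and collection_names.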
Sorry, I can't create a comment yet, but have you tried:
cursor = db.getCollection(collects[1]).find();