I am using PyMongo and I am trying to loop through an entire collection and display each ObjectId on my Flask web page. However, when I run my method I keep getting the error "ObjectId('5efbe85b4aeb5d21e56fa81f') is not a valid ObjectId".
The following is the code I am running
def get_class_names(self):
    temp = list()
    print("1")
    for document_ in db.classes.find():
        tempstr = document_.get("_id")
        tempobjectid = ObjectId(tempstr)
        temp.append(repr(tempobjectid))
    print("2")
    classes = list()
    for class_ in temp:
        classes.append(class_, Classes.get_by_id(class_).name)
    return classes
How do I fix this?
Note: get_by_id just takes in an ObjectId and finds the matching document in the database.
The line
tempstr = document_.get("_id")
retrieves an ObjectId already. You then wrap it again in another ObjectId before calling repr on that. If you print(type(tempstr)), you'll see that it's an ObjectId.
Just do temp.append(tempstr).
BTW, you should rename the variable tempstr to tempId or something more appropriate.
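For reference, a minimal sketch of the method with that change applied (this is only an illustration: it renames tempstr to temp_id as suggested above, and assumes the second loop was meant to collect (id, name) pairs, since list.append takes a single argument):

def get_class_names(self):
    class_ids = list()
    for document_ in db.classes.find():
        temp_id = document_.get("_id")  # already an ObjectId, no ObjectId()/repr() needed
        class_ids.append(temp_id)
    classes = list()
    for class_id in class_ids:
        # pair each id with the class name looked up via get_by_id
        classes.append((class_id, Classes.get_by_id(class_id).name))
    return classes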
I have a Python program that loads an order into a treeview; the order is loaded in the form of documents in a Firestore collection from Firebase. When I press the order that I want to load, I call the function loadPedido with the id needed to filter them. But for some reason they don't load.
This is my code:
def loadPedido(idTarget):
    docs = db.collection(u'slots').where(u'slotId', u'==', idTarget).stream()
    for doc in docs:
        docu = doc.to_dict()
        nombre = docu.get('SlotName')
        entero = docu.get('entero')
        valor = docu.get('slotPrecio')
        print(f'{doc.id} => {nombre}')
        trvPedido.insert("", 'end', iid=doc.id, values=(doc.id, nombre, entero, valor))
idTarget is the id to filter by, and I checked with a print that it arrives correctly.
I tried this: if I write the value of the variable directly in the code, it loads correctly, like so:
...
docs = db.collection(u'slots').where(u'slotId', u'==', u"2996gHQ32CNFMp5vyieu").stream()
...
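One quick diagnostic (just a sketch, using the example id from above as the expected value) is to compare the incoming variable against the hardcoded string; a stray newline, extra whitespace, or a non-string type would show up immediately:

expected = u"2996gHQ32CNFMp5vyieu"
print(repr(idTarget), repr(expected), idTarget == expected)
# e.g. idTarget arriving as '2996gHQ32CNFMp5vyieu\n' or as a non-string would explain the empty result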
I'm currently developing an Azure Function locally that communicates with Microsoft Sentinel, in order to fetch its alert rules, and more specifically their respective queries:
credentials = AzureCliCredential()
alert_rules_operations = SecurityInsights(credentials, SUBSCRIPTION_ID).alert_rules
list_alert_rules = alert_rules_operations.list(resource_group_name=os.getenv('RESOURCE_GROUP_NAME'), workspace_name=os.getenv('WORKSPACE_NAME'))
The issue is that when I'm looping over list_alert_rules and try to see each rule's query, I get an error:
Exception: AttributeError: 'FusionAlertRule' object has no attribute 'query'.
Yet, when I check their type via the type() function:
list_alert_rules = alert_rules_operations.list(resource_group_name=os.getenv(
    'RESOURCE_GROUP_NAME'), workspace_name=os.getenv('WORKSPACE_NAME'))
for rule in list_alert_rules:
    print(type(rule))
## console: <class 'azure.mgmt.securityinsight.models._models_py3.ScheduledAlertRule'>
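(As a side note, a quick way to see every rule type the listing yields, not just the first one, would be something like the following; the output shown is only an example based on the error above:)

rule_types = {type(rule).__name__ for rule in list_alert_rules}
print(rule_types)
## e.g. {'ScheduledAlertRule', 'FusionAlertRule'}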
The weirder part is that this error appears only when you don't print the attribute. Let me show you:
Print:
for rule in list_alert_rules:
    query = rule.query
    print('query', query)
## console: query YAY I GET WHAT I WANT
No print:
for rule in list_alert_rules:
    query = rule.query
    ...
## console: Exception: AttributeError: 'FusionAlertRule' object has no attribute 'query'.
I posted the issue on the GitHub repo, but I'm not sure whether it's a package bug or a runtime issue. Has anyone run into this kind of problem?
BTW I'm running Python 3.10.8
TIA!
EDIT:
I've tried using a map function, same issue:
def format_list(rule):
    query = rule.query
    # print('query', query)
    # query = query.split('\n')
    # query = list(filter(lambda line: "//" not in line, query))
    # query = '\n'.join(query)
    return rule

def main(mytimer: func.TimerRequest) -> None:
    # results = fetch_missing_data()
    credentials = AzureCliCredential()
    alert_rules_operations = SecurityInsights(
        credentials, SUBSCRIPTION_ID).alert_rules
    list_alert_rules = alert_rules_operations.list(resource_group_name=os.getenv(
        'RESOURCE_GROUP_NAME'), workspace_name=os.getenv('WORKSPACE_NAME'))
    list_alert_rules = list(map(format_list, list_alert_rules))
I tried the same code you used. After I changed it as below, I get a valid response.
# Management Plane - Alert Rules
alertRules = mgmt_client.alert_rules.list_by_resource_group('<ResourceGroup>')
for rule in alertRules:
    # Try this
    test.query = rule.query  # Get the result
    # print(rule)
if mytimer.past_due:
    logging.info('The timer is past due!')
Instead of this:
for rule in list_alert_rules:
    query = rule.query
Try below:
for rule in list_alert_rules:
    # Try this
    test.query = rule.query
Sorry for the late answer as I've been under tons of work these last few days.
Python has an excellent built-in function called hasattr that checks whether an object has a specific attribute.
I've used it in the following way:
for rule in rules:
    if hasattr(rule, 'query'):
        ...
The reason for using this is that the list method returns objects of different classes, all inherited from the same parent class.
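Put together, a minimal sketch of how that looks in the loop from the question (queries is just an illustrative name):

queries = []
for rule in list_alert_rules:
    # rule types such as FusionAlertRule have no 'query' attribute, so skip them
    if hasattr(rule, 'query'):
        queries.append(rule.query)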
Hope this helps.
InvalidS3ObjectException when calling the AnalyzeDocument operation: Unable to get object metadata from S3. Check object key, region and/or access permissions.
I keep getting this error. Over. And. Over. This program worked with my test cases of what I'm bringing in, the JSON with a {"body":"imagename.jpg"}. But the very moment I try to use the actual payload my JS brings in, I get this error. The thing that confuses me is that I've checked the regions and they are fine. I went into my account and created users with full access to all AWS and S3 features and used those logins; I've used my root account; everything. All I'm trying to do is access an image from my S3 bucket. Why won't it work? Below is my code. It works if I use the test case I provided above, but the moment I try to use the website it's connected to, it doesn't work.
def main(event, context):
    key_map, value_map, block_map = get_kv_map(event)  # Take map variables in to get the key and value map we need.
It goes to this function...
def get_kv_map(event):
    filePath = event
    fileExt = filePath.get('body')
    s3 = boto3.resource('s3', region_name='us-east-1')
    bucket = s3.Bucket('react-images-ex')
    obj = bucket.Object(bucket)
    client = boto3.client('textract')  # We utilize boto3's textract lib
    response = client.analyze_document(Document={'S3Object': {'Bucket': 'react-images-ex', 'Name': fileExt}}, FeatureTypes=['FORMS'])
    # Get the text blocks
    blocks = response['Blocks']  # We make a blocks variable that will be the blocks we find in the document
    # get key and value maps
    key_map = {}
    value_map = {}
    block_map = {}
    for block in blocks:  # Traverse the blocks found in the document
        block_id = block['Id']  # Set variable for blockId to the Id's found on that block location
        block_map[block_id] = block  # Make the block map at that ID be the block variable
        if block['BlockType'] == "KEY_VALUE_SET":  # If the block is a key/value set pair, we check if it's a key or not. If it's not a key, we know it's a value. We send it to the respective map.
            if 'KEY' in block['EntityTypes']:
                key_map[block_id] = block
            else:
                value_map[block_id] = block
    return key_map, value_map, block_map  # Return the maps we need after they're filled.
I have been told before this code is fine, and it should work. So why exactly is it that I get this error?
Based on the comments.
The issue with body was that it was a JSON string, not an actual JSON object.
The solution was to parse the string into JSON:
fileExt = json.loads(filePath.get('body'))
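In context, the corrected start of get_kv_map would look roughly like this (a sketch only; it assumes the body value arrives as a JSON-encoded string when called from the website, and it needs import json at the top of the file):

import json

def get_kv_map(event):
    filePath = event
    # 'body' arrives as a JSON string when called from the website, so parse it first
    fileExt = json.loads(filePath.get('body'))
    ...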
Try awscli to see if you can access the image in s3:
aws s3 ls s3://react-images-ex/<some-fileExt>
Either you are parsing the fileExt wrongly, or you don't have S3 permission to access the file. The awscli command will help to verify this.
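The same permission/key check can also be done from Python with boto3 (a diagnostic sketch only; fileExt stands for whatever key your code ends up requesting):

import boto3

s3 = boto3.client('s3', region_name='us-east-1')
# Raises a ClientError (403 or 404) if the key is wrong or access is denied
s3.head_object(Bucket='react-images-ex', Key=fileExt)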
I am trying to retrieve an audio file stored in MongoDB when the above error is thrown.
The code is as follows:
elif json_data != None and 'retriever' in json_data:
    query_param = json_data['retriever']
    data = db.soundData
    x = data.find({'name': query_param})
    y = data.find({'data': x})
    return Response(y, mimetype='audio/mp3')
Under name I have the name of the file, and under data is the audio file itself.
As I am new to PyMongo, can somebody point to where the error could be coming from?
First of all, you need not save the file itself in Mongo; what you should be saving is the filename, and the file itself is better off on the file system.
The error appears because both x and y are in fact MongoDB cursors rather than the data that you expect. You should be using find_one instead.
find_one(filter=None, *args, **kwargs)
Get a single document from the database.
All arguments to find() are also valid arguments for find_one(), although any limit argument will be ignored. Returns a single document, or None if no matching document is found.
y = data.find_one({'data': x})
You can make your code a bit more concise with
y = data.find_one({'data': {'name': query_param}})
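For the whole branch, a minimal sketch (assuming each document stores the filename under name and the raw audio bytes under data, as described in the question) could look like:

elif json_data is not None and 'retriever' in json_data:
    query_param = json_data['retriever']
    doc = db.soundData.find_one({'name': query_param})  # a document (dict), not a cursor
    if doc is None:
        return Response(status=404)
    return Response(doc['data'], mimetype='audio/mp3')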
I am creating a list from some of the model data, but I am not doing it correctly. It works, but when I refresh the page in the browser, reportResults just gets added to. I hoped it would be garbage collected between requests, but obviously I am doing something wrong. Any ideas, anyone?
Thanks,
Ewan
reportResults = []  # the list that doesn't get collected

def addReportResult(fix, description):
    fix.description = description
    reportResults.append(fix)

def unitHistory(request, unitid, syear, smonth, sday, shour, fyear, fmonth, fday, fhour, type=None):
    waypoints = Fixes.objects.filter(name=(unitid))
    waypoints = waypoints.filter(gpstime__range=(awareStartTime, awareEndTime)).order_by('gpstime')[:1000]
    if waypoints:
        for index in range(len(waypoints)):
            # ...do stuff here selecting some waypoints and generating "description" text
            addReportResult(waypointsindex, description)  # append the list with this, adding a text description
    return render_to_response('unitHistory.html', {'fixes': reportResults})
You are reusing the same list each time; to fix it you need to restructure your code to create a new list on every request. This can be done in multiple ways, and this is one such way:
def addReportResult(reportResults, fix, description):
    fix.description = description
    reportResults.append(fix)

def unitHistory(request, unitid, syear, smonth, sday, shour, fyear, fmonth, fday, fhour, type=None):
    reportResults = []  # Here we create our local list that is recreated each request.
    waypoints = Fixes.objects.filter(name=(unitid))
    waypoints = waypoints.filter(gpstime__range=(awareStartTime, awareEndTime)).order_by('gpstime')[:1000]
    if waypoints:
        for index in range(len(waypoints)):
            # Do processing
            addReportResult(reportResults, waypointsindex, description)
            # We pass the list to the function so it can use it.
    return render_to_response('unitHistory.html', {'fixes': reportResults})
If addReportResult stays this small, you could also inline it: drop the call to addReportResult altogether and set waypointsindex.description = description (and append to the list) at the same position.
Just so you're aware of the life cycle of requests, mod_wsgi will keep a process open to service multiple requests. That process gets recycled every so often but it is definitely not bound to a single request as you've assumed.
That means you need a local list. I would suggest moving the addReportResult function contents directly in-line, but that's not a great idea if it needs to be reusable or if the function is too long. Instead I'd make that function return the item, and you can collect the results locally.
def create_report(fix, description):  # I've changed the name to snake_casing
    fix.description = description
    return fix

def unit_history(request, unitid, syear, smonth, sday, shour, fyear, fmonth, fday, fhour, type=None):
    reports = []
    waypoints = Fixes.objects.filter(name=(unitid))
    waypoints = waypoints.filter(gpstime__range=(awareStartTime, awareEndTime)).order_by('gpstime')[:1000]
    if waypoints:
        for index in range(len(waypoints)):
            report = create_report(waypointsindex, description)
            reports.append(report)
    return render_to_response('unitHistory.html', {'fixes': reports})