I have a token saved in MongoDB like this:
db.user.findOne({'token':'7fd74c28-8ba1-11e2-9073-e840f23c81a0'}['uuid'])
{
"_id" : ObjectId("5140114fae4cb51773d8c4f8"),
"username" : "jjj51#gmail.com",
"name" : "vivek",
"mobile" : "12345",
"is_active" : false,
"token" : BinData(3,"hLL6kIugEeKif+hA8jyBoA==")
}
The above query works fine when I execute it in the MongoDB command line interface.
But when I try to run the same query in a Django view, like:
get_user = db.user.findOne({'token':token}['uuid'])
or `get_user = db.user.findOne({'token':'7fd74c28-8ba1-11e2-9073-e840f23c81a0'}['uuid'])`
I am getting an error:
KeyError at /activateaccount/
'uuid'
Please help me understand why I am getting this error.
My database:
db.user.find()
{ "_id" : ObjectId("5140114fae4cb51773d8c4f8"), "username" : "ghgh#gmail.com", "name" : "Rohit", "mobile" : "12345", "is_active" : false, "token" : BinData(3,"hLL6kIugEeKif+hA8jyBoA==") }
{ "_id" : ObjectId("51401194ae4cb51773d8c4f9"), "username" : "ghg#gmail.com", "name" : "rohit", "mobile" : "12345", "is_active" : false, "token" : BinData(3,"rgBIMIugEeKQBuhA8jyBoA==") }
{ "_id" : ObjectId("514012fcae4cb51874ca3e6f"), "username" : "ghgh#gmail.com", "name" : "rahul", "mobile" : "8528256", "is_active" : false, "token" : BinData(3,"f9dMKIuhEeKQc+hA8jyBoA==") }
TL;DR: your query is faulty.
Longer explanation:
{'token':'7fd74c28-8ba1-11e2-9073-e840f23c81a0'}['uuid']
evaluates to undefined, because you're trying to get the property uuid from an object that doesn't have that property. In the Mongo shell, which uses JavaScript, your query therefore becomes:
db.user.findOne(undefined)
You'll get some random (okay, not so random, probably the first) result.
Python is a bit more strict when you're trying to get an unknown key from a dictionary:
{'token':token}['uuid']
Since uuid isn't a valid key in the dictionary {'token':token}, you'll get a KeyError when you try to access it.
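For illustration, a minimal sketch of the difference:
d = {'token': '7fd74c28-8ba1-11e2-9073-e840f23c81a0'}

try:
    d['uuid']              # raises KeyError: 'uuid' -- the key does not exist
except KeyError as err:
    print('KeyError:', err)

print(d.get('uuid'))       # prints None, the closest Python gets to JavaScript's undefined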
EDIT: since you've used Python UUID types to store the tokens in the database, you also need to use the same type in your query:
from uuid import UUID
token = '7fd74c28-8ba1-11e2-9073-e840f23c81a0'
get_user = db.user.find_one({'token' : UUID(token) })
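Note that the token is stored as BinData subtype 3, i.e. a legacy Python UUID. The snippet above works with older PyMongo defaults; on newer PyMongo versions, where the default UUID representation changed, you may also need to tell the client to use the legacy encoding. A rough sketch (the database name here is an assumption):
from uuid import UUID
from pymongo import MongoClient

# 'pythonLegacy' matches BinData subtype 3, the legacy Python UUID encoding
client = MongoClient('localhost', 27017, uuidRepresentation='pythonLegacy')
db = client['mydb']  # hypothetical database name

get_user = db.user.find_one({'token': UUID('7fd74c28-8ba1-11e2-9073-e840f23c81a0')})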
I am trying to read the Audit Events generated by accesses to an Azure Key Vault. They are streamed to an Event Hub. The events appear in the Event Hub as AVRO files. An individual event appears as a file, 44.avro, in a folder whose path specifies the time stamp of the event. For example, an event generated today (noon, 6-Nov-20) could be found at 'kv-audit-eh/security-logs/0/2020/11/06/12/00/44.avro'. So far, so good.
The problem comes when trying to read the contents of this file to verify the type of Key Vault access that triggered the event. An online utility says the file is empty. (The file is 508 bytes in size, and you can see a JSON-formatted schema embedded in it, along with some binary information.) I have used a tool to extract the JSON schema, and here it is:
{"namespace": "44.avro",
"type" : "record",
"name" : "EventData",
"namespace" : "Microsoft.ServiceBus.Messaging",
"fields" : [
{
"name" : "SequenceNumber",
"type" : "long"
},
{
"name" : "Offset",
"type" : "string"
},
{
"name" : "EnqueuedTimeUtc",
"type" : "string"
},
{
"name" : "SystemProperties",
"type" : {
"type" : "map",
"values" : [
"long",
"double",
"string",
"bytes"
]
}
},
{
"name" : "Properties",
"type" : {
"type" : "map",
"values" : [
"long",
"double",
"string",
"bytes",
"null"
]
}
},
{
"name" : "Body",
"type" : [
"null",
"bytes"
]
}
]
}
I saved this schema into the file audit.avsc. When I use the following Python code to read the file, I don't get any errors, but I don't get any output either.
import avro.schema
from avro.datafile import DataFileReader, DataFileWriter
from avro.io import DatumReader, DatumWriter
schema = avro.schema.parse(open("audit.avsc", "rb").read())
reader = DataFileReader(open("44.avro", "rb"), DatumReader())
for name in reader:
    print(name)
reader.close()
If I open the file in the Azure dashboard, it displays the message "may not render correctly as it contains an unknown extension."
So my question is: What is required to read the contents of one of these files? Any advice welcome, as I'm stumped by this.
Thanks in advance.
It turns out the AVRO file is empty most of the time. The Event Hub was capturing all sorts of idle activity on the Key Vault. I enabled the option that stops the capture from writing these empty windows (that is, only capture an event that actually affects the Key Vault). Once I did that, the AVRO files were a few kilobytes in size and the Python code read out the audit events. The folders were no longer cluttered with empty files.
The capture setting in question was this one:
Do not emit empty files when no events occur during the Capture time window.
Check it.
I have downloaded an AVRO file (with JSON payload) from Microsoft Azure to my Windows 10 computer.
Then, with Python 3.8.5 and avro 1.10.0 installed via pip, I have tried running the following script:
import os, avro
from avro.datafile import DataFileReader, DataFileWriter
from avro.io import DatumReader, DatumWriter
reader = DataFileReader(open("48.avro", "rb"), DatumReader())
for d in reader:
    print(d)
reader.close()
Unfortunately, nothing is printed by the script.
Then I searched around and tried to add a schema, as shown below:
schema_str = """
{
  "type" : "record",
  "name" : "EventData",
  "namespace" : "Microsoft.ServiceBus.Messaging",
  "fields" : [ {
    "name" : "SequenceNumber",
    "type" : "long"
  }, {
    "name" : "Offset",
    "type" : "string"
  }, {
    "name" : "EnqueuedTimeUtc",
    "type" : "string"
  }, {
    "name" : "SystemProperties",
    "type" : {
      "type" : "map",
      "values" : [ "long", "double", "string", "bytes" ]
    }
  }, {
    "name" : "Properties",
    "type" : {
      "type" : "map",
      "values" : [ "long", "double", "string", "bytes", "null" ]
    }
  }, {
    "name" : "Body",
    "type" : [ "null", "bytes" ]
  } ]
}
"""
schema = avro.schema.parse(schema_str)
reader = DataFileReader(open("48.avro", "rb"), DatumReader(schema, schema))
for d in reader:
    print(d)
reader.close()
But this hasn't helped; still nothing is printed.
I was expecting a list of dictionary objects to be printed...
UPDATE:
I've got a reply on the mailing list that avro-python3 is deprecated.
Still, my issue with the original avro package persists: nothing is printed.
UPDATE 2:
I have to apologize - the avro file I was using did not contain any useful data. The reason for my confusion is that a colleague was using a different file with the same name while testing for me.
Now I have tried both avro and fastavro modules on a different avro file and both worked. I will look at PySpark as well.
As OneCricketeer suggested, use PySpark to read the avro files generated by Event Hub. PySpark: Deserializing an Avro serialized message contained in an eventhub capture avro file is one such example.
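For completeness, since the update above reports that fastavro also worked, reading such a capture file with it looks roughly like this (a sketch; the file name mirrors the code above):
from fastavro import reader

with open("48.avro", "rb") as f:
    for record in reader(f):
        # each record is a dict with SequenceNumber, Offset, EnqueuedTimeUtc, Properties, Body, etc.
        print(record)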
I have been working with the Zapier storage api through the store.zapier.com endpoint and have been successful at setting and retrieving values. However I have recently found a need to store more complex information that I would like to update over time.
The data I am storing at the moment looks like the following:
{
    "task_id_1": {"google_id": "google_id_1", "due_on": "2018-10-24T17:00:00.000Z"},
    "task_id_2": {"google_id": "google_id_2", "due_on": "2018-10-23T20:00:00.000Z"},
    "task_id_3": {"google_id": "google_id_3", "due_on": "2018-10-25T21:00:00.000Z"}
}
What I would like to do is update the "due_on" child value of any arbitrary task_id_n without having to delete and add it again. Reading the API information at store.zapier.com I see you can send a patch request combined with a specific action to have better control over the stored data. I attempt to use the patch request and the "set_child_value" action as follows:
def update_child(self, parent_key, child_key, child_value):
    header = self.generate_header()
    data = {
        "action" : "set_child_value",
        "data" : {
            "key" : parent_key,
            "value" : {child_key : child_value}
        }
    }
    result = requests.patch(self.URL, headers=header, json=data)
    return result
When I send this request Zapier responds with a 200 status code but the storage is not updated. Any ideas what I might be missing?
Zapier Store doesn't seem to be validating the request body past the "action" and "data" fields.
When you make a request with the "data" field set to an array, you trigger a validation error that describes the schema for the data field (What a way to find documentation for an API! smh).
In the request body, the data field schema for the "set_child_value" action is:
{
    "action" : {
        "enum": [
            "delete",
            "increment_by",
            "set_child_value",
            "list_pop",
            "set_value_if",
            "remove_child_value",
            "list_push"
        ]
    },
    "data" : {
        "key" : {
            "type": "object"
        },
        "values" : {
            "type": "object"
        }
    }
}
Note that it's "values" and not "value".
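Based on that schema, a sketch of the original PATCH request using "values" instead of "value" might look like the following (untested; the endpoint URL and the X-Secret header are assumptions standing in for the question's helper methods):
import requests

URL = 'https://store.zapier.com/api/records'     # assumed endpoint
headers = {'X-Secret': 'your-storage-secret'}    # assumed auth header

data = {
    'action': 'set_child_value',
    'data': {
        'key': 'task_id_1',
        'values': {'due_on': '2018-10-24T17:00:00.000Z'}   # "values", not "value"
    }
}
result = requests.patch(URL, headers=headers, json=data)
print(result.status_code, result.text)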
I was able to update specific child values by modifying my request from a PATCH to a PUT. I had to do away with the data structure of:
data = {
    "action" : "set_child_value",
    "data" : {
        "key" : parent_key,
        "value" : {child_key : child_value}
    }
}
and instead send it along as:
data = {
    parent_key : {child_key : child_value}
}
My updated request looks like:
def update_child(self, parent_key, child_key, child_value):
    header = self.generate_header()
    data = {
        parent_key : {child_key : child_value}
    }
    result = requests.put(self.URL, headers=header, json=data)
    return result
I never really resolved the issue with the PATCH method I was attempting before; it does work for other Zapier storage actions such as "pop_from_list" and "push_to_list". Anyhow, this is a suitable solution for anyone who runs into the same problem.
I am trying to make an aggregation query using flask-mongoengine, and from what I have read it does not sound like it is possible.
I have looked over several forum threads, e-mail chains and a few questions on Stack Overflow, but I have not found a really good example of how to implement aggregation with flask-mongoengine.
There is a comment in this question that says you have to use "raw pymongo and aggregation functionality." However, there are no examples of how that might work. I have tinkered with Python and have a basic application up using the Flask framework, but delving into full-fledged applications and connecting to/querying Mongo is pretty new to me.
Can someone provide an example (or link to an example) of how I might utilize my flask-mongoengine models, but query using the aggregation framework with PyMongo?
Will this require two connections to MongoDB (one for PyMongo to perform the aggregation query, and a second for the regular query/insert/updating via MongoEngine)?
An example of the aggregation query I would like to perform is as follows (this query gets me exactly the information I want in the Mongo shell):
db.entry.aggregate([
    { '$group' :
        { '_id' : { 'carrier' : '$carrierA', 'category' : '$category' },
          'count' : { '$sum' : 1 }
        }
    }
])
An example of the output from this query:
{ "_id" : { "carrier" : "Carrier 1", "category" : "XYZ" }, "count" : 2 }
{ "_id" : { "carrier" : "Carrier 1", "category" : "ABC" }, "count" : 4 }
{ "_id" : { "carrier" : "Carrier 2", "category" : "XYZ" }, "count" : 31 }
{ "_id" : { "carrier" : "Carrier 2", "category" : "ABC" }, "count" : 6 }
The class you define with Mongoengine actually has a _get_collection() method which returns the "raw" collection object as implemented in the pymongo driver.
I'm just using the name Model here as a placeholder for your actual class defined for the connection in this example:
Model._get_collection().aggregate([
    { '$group' :
        { '_id' : { 'carrier' : '$carrierA', 'category' : '$category' },
          'count' : { '$sum' : 1 }
        }
    }
])
So you can always access the pymongo objects without establishing a separate connection. Mongoengine is itself built on top of pymongo.
aggregate has been available since Mongoengine 0.9.
Link to the API Reference.
As there is no example whatsoever around, here is how you perform an aggregation query using the aggregation framework with Mongoengine > 0.9:
pipeline = [
    { '$group' :
        { '_id' : { 'carrier' : '$carrierA', 'category' : '$category' },
          'count' : { '$sum' : 1 }
        }
    }
]
Model.objects().aggregate(*pipeline)
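Putting it together with a hypothetical document class that matches the fields used in the pipeline (the class, field types, and connection details are assumptions, not part of the question):
from mongoengine import Document, StringField, connect

connect('mydb')                          # hypothetical database name

class Entry(Document):                   # maps to the 'entry' collection by default
    carrierA = StringField()
    category = StringField()

pipeline = [
    {'$group': {
        '_id': {'carrier': '$carrierA', 'category': '$category'},
        'count': {'$sum': 1}
    }}
]

results = list(Entry.objects().aggregate(*pipeline))
for doc in results:
    print(doc)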
I have an ElasticSearch setup, receiving data to index via a CouchDB river. I have the problem that most of the fields in the CouchDB documents are actually not relevant for search: they are fields used internally by the application (IDs and so on), and I do not want to get false positives because of these fields. Besides, indexing unneeded data seems to me a waste of resources.
To solve this problem, I have defined a mapping where I specify the fields which I want to be indexed. I am using pyes to access ElasticSearch. The process that I follow is:
Create the CouchDB river, associated with an index. This apparently also creates the index, and creates a "couchdb" mapping in that index which, as far as I can see, includes all fields, with dynamically assigned types.
Put a mapping, restricting it to the fields which I really want to index.
This is the index definition as obtained by:
curl -XGET http://localhost:9200/notes_index/_mapping?pretty=true
{
  "notes_index" : {
    "default_mapping" : {
      "properties" : {
        "note_text" : {
          "type" : "string"
        }
      }
    },
    "couchdb" : {
      "properties" : {
        "_rev" : {
          "type" : "string"
        },
        "created_at_date" : {
          "format" : "dateOptionalTime",
          "type" : "date"
        },
        "note_text" : {
          "type" : "string"
        },
        "organization_id" : {
          "type" : "long"
        },
        "user_id" : {
          "type" : "long"
        },
        "created_at_time" : {
          "type" : "long"
        }
      }
    }
  }
}
The problem that I have is twofold:
The default "couchdb" mapping is indexing all fields. I do not want this. Is it possible to avoid the creation of that mapping? I am confused, because that mapping seems to be the one which is somehow "connecting" to the CouchDB river.
The mapping that I create seems not to have any effect: there are no documents indexed by that mapping.
Do you have any advice on this?
EDIT
This is what I am actually doing, exactly as typed:
server="localhost"
# Create the index
curl -XPUT "$server:9200/index1"
# Create the mapping
curl -XPUT "$server:9200/index1/mapping1/_mapping" -d '
{
"type1" : {
"properties" : {
"note_text" : {"type" : "string", "store" : "no"}
}
}
}
'
# Configure the river
curl -XPUT "$server:9200/_river/river1/_meta" -d '{
"type" : "couchdb",
"couchdb" : {
"host" : "localhost",
"port" : 5984,
"user" : "admin",
"password" : "admin",
"db" : "notes"
},
"index" : {
"index" : "index1",
"type" : "type1"
}
}'
The documents in index1 still contain fields other than "note_text", which is the only one that I have specifically mentioned in the mapping definition. Why is that?
The default behavior of the CouchDB river is to use a 'dynamic' mapping, i.e. to index all the fields that are found in the incoming CouchDB documents. You're right that it can unnecessarily increase the size of the index (your problems with search can be solved by excluding some fields from the query).
To use your own mapping instead of the 'dynamic' one, you need to configure the River plugin to use the mapping you've created (see this article):
curl -XPUT 'elasticsearch-host:9200/_river/notes_index/_meta' -d '{
  "type" : "couchdb",
  ... your CouchDB connection configuration ...
  "index" : {
    "index" : "notes_index",
    "type" : "mapping1"
  }
}'
The name of the type that you specify in the URL when doing the mapping PUT overrides the one that you include in the definition, so the type that you're creating is in fact mapping1. Try executing this command to see for yourself:
> curl 'localhost:9200/index1/_mapping?pretty=true'
{
  "index1" : {
    "mapping1" : {
      "properties" : {
        "note_text" : {
          "type" : "string"
        }
      }
    }
  }
}
I think that if you get the name of the type right, it will start working fine.
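In other words, the mapping PUT from the question would target type1 (the type the river actually writes to) rather than mapping1; something along these lines (a sketch keeping the question's field definition unchanged):
curl -XPUT "$server:9200/index1/type1/_mapping" -d '
{
  "type1" : {
    "properties" : {
      "note_text" : {"type" : "string", "store" : "no"}
    }
  }
}
'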