I have a few items in DynamoDB like this.
Example 1 -
{
    "policyName": "Some Name",
    "domainBypassList": {
        "custom": [],
        "default": []
    },
    "applicationBypassList": {
        "default": {
            "Mac": [],
            "Windows": []
        },
        "custom": {
            "Mac": [],
            "Windows": []
        }
    }
}
Example 2 -
{
    "policyName": "Some Name",
    "domainBypassList": {
        "default": []
    },
    "applicationBypassList": {
        "default": {
            "Mac": [],
            "Windows": []
        }
    }
}
I want to add/update the custom attribute of domainBypassList & applicationBypassList. You can see in Example 1 that the custom attribute exists in both domainBypassList & applicationBypassList. However, in Example 2 the custom attribute is missing. So if the custom attribute is missing it should be added, and if it exists it should be updated.
I wrote a simple query which works only if the custom attribute exists in both domainBypassList & applicationBypassList, i.e. Example 1.
res = self.service_context.policies.update_item(
    Key={
        'tenantId': self.event["user"]['tenantId'],
        'policyName': self.policy_name
    },
    UpdateExpression="set domainBypassList.custom=:dbl, applicationBypassList.custom=:abl, updatedAt=:uat",
    ExpressionAttributeValues={
        ':abl': self.payload["applicationBypassList"],
        ':dbl': self.payload["domainBypassList"],
        ':uat': datetime.datetime.now().strftime("%Y/%m/%d, %I:%M %p")
    },
    ReturnValues="UPDATED_NEW"
)
How can I make this query work in both cases?
Here is the object that I have for the custom attribute of both domainBypassList & applicationBypassList.
{
    "applicationBypassList": {
        "Mac": [
            "Zoom.us",
            "fool.is"
        ],
        "Windows": [
            "Zoom.exe",
            "cool.com"
        ]
    },
    "domainBypassList": ["some.com"]
}
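One approach I'm considering (a rough sketch only, not yet verified against my data): first make sure both parent maps exist using DynamoDB's if_not_exists, then run the nested SET, which should then succeed whether or not custom already exists. The names below follow the query above; :empty is just an empty-map placeholder.
# Sketch: two update_item calls; step 1 creates the parent maps if missing.
key = {
    'tenantId': self.event["user"]['tenantId'],
    'policyName': self.policy_name
}
# Step 1: no-op when domainBypassList / applicationBypassList already exist
self.service_context.policies.update_item(
    Key=key,
    UpdateExpression=(
        "SET domainBypassList = if_not_exists(domainBypassList, :empty), "
        "applicationBypassList = if_not_exists(applicationBypassList, :empty)"
    ),
    ExpressionAttributeValues={':empty': {}}
)
# Step 2: the nested SET now adds "custom" if missing, or overwrites it if present
res = self.service_context.policies.update_item(
    Key=key,
    UpdateExpression="SET domainBypassList.custom=:dbl, applicationBypassList.custom=:abl, updatedAt=:uat",
    ExpressionAttributeValues={
        ':dbl': self.payload["domainBypassList"],
        ':abl': self.payload["applicationBypassList"],
        ':uat': datetime.datetime.now().strftime("%Y/%m/%d, %I:%M %p")
    },
    ReturnValues="UPDATED_NEW"
)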
So I'm new to GraphQL and I've been figuring out the Uniswap API through the sandbox browser. I'm running a program which just gets metadata on the top 100 tokens and their relative pools, but the pool query isn't working at all. I'm trying to put in two conditions: if token0's hash is this and token1's hash is this, it should output the pool of those two. However, it only outputs pools with the token0 hash and just ignores the second one. I've tried using and, _and, or two where's separated by {} or ,, and so on. This is an example I have (Python, by the way):
class ExchangePools:
    def QueryPoolDB(self, hash1, hash2):
        query = """
        {
          pools(where: {token0: "%s"}, where: {token1: "%s"}, first: 1, orderBy: volumeUSD, orderDirection: desc) {
            id
            token0 {
              id
              symbol
            }
            token1 {
              id
              symbol
            }
            token1Price
          }
        }""" % (hash1, hash2)
        return query
or in the sandbox explorer this:
{
pools(where: {token0: "0x2260fac5e5542a773aa44fbcfedf7c193bc2c599"} and: {token1:"0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"}, first: 1, orderBy:volumeUSD, orderDirection:desc) {
id
token0 {
id
symbol
}
token1 {
id
symbol
}
token1Price
}
}
with this output:
{
"data": {
"pools": [
{
"id": "0x4585fe77225b41b697c938b018e2ac67ac5a20c0",
"token0": {
"id": "0x2260fac5e5542a773aa44fbcfedf7c193bc2c599",
"symbol": "WBTC"
},
"token1": {
"id": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",
"symbol": "WETH"
},
"token1Price": "14.8094450357546760737720184457113"
}
]
}
}
How can I get the API to register both statements?
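One variant I haven't fully verified yet is putting both token filters inside a single where object, since (as far as I can tell) multiple fields in one where are combined with AND by The Graph. A sketch of the same method rewritten that way:
class ExchangePools:
    def QueryPoolDB(self, hash1, hash2):
        # Sketch: both conditions live in the same `where` object
        query = """
        {
          pools(where: {token0: "%s", token1: "%s"},
                first: 1, orderBy: volumeUSD, orderDirection: desc) {
            id
            token0 {
              id
              symbol
            }
            token1 {
              id
              symbol
            }
            token1Price
          }
        }""" % (hash1, hash2)
        return query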
I have Elasticsearch documents like below where I need to rectify the age value based on creationtime and currentdate:
age = currentdate - creationtime
hits = [
{
"_id":"CrRvuvcC_uqfwo-WSwLi",
"creationtime":"2018-05-20T20:57:02",
"currentdate":"2021-02-05 00:00:00",
"age":"60 months"
},
{
"_id":"CrRvuvcC_uqfwo-WSwLi",
"creationtime":"2013-07-20T20:57:02",
"currentdate":"2021-02-05 00:00:00",
"age":"60 months"
},
{
"_id":"CrRvuvcC_uqfwo-WSwLi",
"creationtime":"2014-08-20T20:57:02",
"currentdate":"2021-02-05 00:00:00",
"age":"60 months"
},
{
"_id":"CrRvuvcC_uqfwo-WSwLi",
"creationtime":"2015-09-20T20:57:02",
"currentdate":"2021-02-05 00:00:00",
"age":"60 months"
}
]
I want to do a bulk update based on each document ID, but the problem is that I need to correct 6 months of data, and the index's document count is almost 535329. I want to efficiently bulk-update age based on _id for each day across all documents using Python.
Is there a way to do this without looping through everything? All the examples I came across use Pandas DataFrames and update based on a known value, but here I only get the _id as and when the code runs.
The logic I had written was to fetch all docs, store their _id, and then update the age for each _id. But that's not an efficient way if I want to update all documents in bulk for each day of 6 months.
Can anyone give me some ideas or point me in the right direction?
As mentioned in the comments, fetching the IDs won't be necessary. You don't even need to fetch the documents themselves!
A single _update_by_query call will be enough. You can use ChronoUnit to get the difference after you've parsed the dates:
POST your-index-name/_update_by_query
{
  "query": {
    "match_all": {}
  },
  "script": {
    "source": """
      def created = LocalDateTime.parse(ctx._source.creationtime, DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss"));
      def currentdate = LocalDateTime.parse(ctx._source.currentdate, DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
      def months = ChronoUnit.MONTHS.between(created, currentdate);
      ctx._source.age = months + ' month' + (months > 1 ? 's' : '');
    """,
    "lang": "painless"
  }
}
The official Python client exposes this as update_by_query too; a rough sketch follows.
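A minimal sketch, assuming elasticsearch-py and placeholder host/index names:
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # placeholder host

es.update_by_query(
    index="your-index-name",                  # placeholder index name
    body={
        "query": {"match_all": {}},
        "script": {
            "lang": "painless",
            "source": """
                def created = LocalDateTime.parse(ctx._source.creationtime, DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss"));
                def currentdate = LocalDateTime.parse(ctx._source.currentdate, DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
                def months = ChronoUnit.MONTHS.between(created, currentdate);
                ctx._source.age = months + ' month' + (months > 1 ? 's' : '');
            """
        }
    },
    conflicts="proceed"  # keep going if a document changes mid-update
)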
🔑 Try running this update script on a small subset of your documents before letting it loose on your whole index, by adding a query other than the match_all I put there.
💡 It's worth mentioning that unless you search on this age field, it doesn't need to be stored in your index because it can be calculated at query time.
You see, if your index mapping's dates are properly defined like so:
{
  "mappings": {
    "properties": {
      "creationtime": {
        "type": "date",
        "format": "yyyy-MM-dd'T'HH:mm:ss"
      },
      "currentdate": {
        "type": "date",
        "format": "yyyy-MM-dd HH:mm:ss"
      },
      ...
    }
  }
}
the age can be calculated as a script field:
POST ttimes/_search
{
  "query": {
    "match_all": {}
  },
  "script_fields": {
    "age_calculated": {
      "script": {
        "source": """
          def months = ChronoUnit.MONTHS.between(
            doc['creationtime'].value,
            doc['currentdate'].value);
          return months + ' month' + (months > 1 ? 's' : '');
        """
      }
    }
  }
}
The only caveat is that the value won't be inside the _source but rather inside its own group called fields (which implies that more script fields are possible at once!).
"hits" : [
{
...
"_id" : "FFfPuncBly0XYOUcdIs5",
"fields" : {
"age_calculated" : [ "32 months" ] <--
}
},
...
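For completeness, a rough sketch of running that script-field search from Python and reading the fields group (host and index name below are placeholders):
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # placeholder host

resp = es.search(
    index="your-index-name",                  # placeholder index name
    body={
        "query": {"match_all": {}},
        "script_fields": {
            "age_calculated": {
                "script": {
                    "source": """
                        def months = ChronoUnit.MONTHS.between(
                            doc['creationtime'].value,
                            doc['currentdate'].value);
                        return months + ' month' + (months > 1 ? 's' : '');
                    """
                }
            }
        }
    }
)

for hit in resp["hits"]["hits"]:
    # script fields come back under "fields", not "_source"
    print(hit["_id"], hit["fields"]["age_calculated"][0])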
For example, if this is my record
{
    "_id": "123",
    "name": "google",
    "ip_1": "10.0.0.1",
    "ip_2": "10.0.0.2",
    "ip_3": "10.0.1",
    "ip_4": "10.0.1",
    "description": ""
}
I want to get only those fields starting with 'ip_'. Consider that I have 500 fields and only 15 of them start with 'ip_'.
Can we do something like this to get the output -
db.collection.find({id:"123"}, {'ip*':1})
Output -
{
"ip_1":"10.0.0.1",
"ip_2":"10.0.0.2",
"ip_3":"10.0.1",
"ip_4":"10.0.1"
}
The following aggregate query, using PyMongo, returns documents with only the fields whose names start with "ip_".
Note the various aggregation operators used: $filter, $regexMatch, $objectToArray, $arrayToObject. The aggregation pipeline has two stages: $project and $replaceWith.
pipeline = [
{
"$project": {
"ipFields": {
"$filter" : {
"input": { "$objectToArray": "$$ROOT" },
"cond": { "$regexMatch": { "input": "$$this.k" , "regex": "^ip" } }
}
}
}
},
{
"$replaceWith": { "$arrayToObject": "$ipFields" }
}
]
pprint.pprint(list(collection.aggregate(pipeline)))
I am unaware of a way to specify an expression that would decide which hash keys would be projected. MongoDB has projection operators but they deal with arrays and text search.
If you have a fixed possible set of ip fields, you can simply request all of them regardless of which fields are present in a particular document, e.g. project with
{ip_1: true, ip_2: true, ...}
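For instance, with PyMongo and the four ip fields from the sample (connection string, database, and collection names below are placeholders):
from pymongo import MongoClient

collection = MongoClient("mongodb://localhost:27017")["mydb"]["mycoll"]   # placeholders

doc = collection.find_one(
    {"_id": "123"},
    {"_id": False, "ip_1": True, "ip_2": True, "ip_3": True, "ip_4": True}
)
print(doc)   # -> {'ip_1': '10.0.0.1', 'ip_2': '10.0.0.2', 'ip_3': '10.0.1', 'ip_4': '10.0.1'}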
I'm trying to access some data within nested ordered dictionaries. This dictionary was created by using the XMLTODICT module. Obviously I would like to create my own dictionaries but this one is out of my control.
I've tried to access them numerous ways.
Example:
Using a for loop:
I can access the first level using v["name"], which gives me Child_Policy and Parent_Policy.
When I do v["class"]["name"] I would expect to get "Test1", but that's not the case.
I've also tried v[("class", )] variations as well, with no luck.
Any input would be much appreciated.
The data below is retrieved from a device via XML and converted to a dictionary with XMLTODICT.
[
{
"#xmlns": "http://cisco.com/ns/yang/Cisco-IOS-XE-policy",
"name": "Child_Policy",
"class": [
{
"name": "Test1",
"action-list": {
"action-type": "bandwidth",
"bandwidth": {
"percent": "30"
}
}
},
{
"name": "Test2",
"action-list": {
"action-type": "bandwidth",
"bandwidth": {
"percent": "30"
}
}
}
]
},
{
"#xmlns": "http://cisco.com/ns/yang/Cisco-IOS-XE-policy",
"name": "Parent_Policy",
"class": {
"name": "class-default",
"action-list": [
{
"action-type": "shape",
"shape": {
"average": {
"bit-rate": "10000000"
}
}
},
{
"action-type": "service-policy",
"service-policy": "Child_Policy"
}
]
}
}
]
My expected result is to retrieve values from the nested dictionaries and produce output similar to this:
Queue_1: Test1
Action_1: bandwidth
Allocation_1: 40
Queue_2: Test2
Action_2: bandwidth
Allocation_2: 10
I have no issue formatting the output; just getting the values is the issue.
Edit: I had some time tonight, so I changed the code to be dynamic:
idx = 0
count = 0
for v in policy_dict.values():
    print("\n")
    print("{:15} {:<35}".format("Policy: ", v[0]["name"]))
    print("_______")
    for i in v:
        count = count + 1
        try:
            print("\n")
            print("{:15} {:<35}".format("Queue_%s: " % count, v[0]["class"][idx]["name"]))
            print("{:15} {:<35}".format("Action_%s: " % count, v[0]["class"][idx]["action-list"]["action-type"]))
            print("{:15} {:<35}".format("Allocation_%s: " % count, v[0]["class"][idx]["action-list"]["bandwidth"]["percent"]))
            idx = idx + 1
        except KeyError:
            break
According to the sample you posted, you can try to retrieve values like:
v[0]["class"][0]["name"]
This outputs:
Test1
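Building on that, here is a minimal sketch that walks the sample structure and prints the queue/action/allocation lines. It assumes the list shown above (trimmed here to Child_Policy for brevity) and handles class being either a list or a single dict, which is how xmltodict represents repeated versus single elements:
policies = [
    {
        "name": "Child_Policy",
        "class": [
            {"name": "Test1",
             "action-list": {"action-type": "bandwidth", "bandwidth": {"percent": "30"}}},
            {"name": "Test2",
             "action-list": {"action-type": "bandwidth", "bandwidth": {"percent": "30"}}},
        ],
    },
]

for policy in policies:
    print("Policy:", policy["name"])
    # xmltodict returns a single dict when there is only one <class> element
    classes = policy["class"] if isinstance(policy["class"], list) else [policy["class"]]
    for i, cls in enumerate(classes, start=1):
        action = cls["action-list"]
        print("Queue_%s: %s" % (i, cls["name"]))
        if isinstance(action, dict) and "action-type" in action:
            print("Action_%s: %s" % (i, action["action-type"]))
            if "bandwidth" in action:
                print("Allocation_%s: %s" % (i, action["bandwidth"]["percent"]))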
I have a schema using MongoEngine that looks like this
class User(db.Document):
    email = db.EmailField(unique=True)

class QueueElement(db.EmbeddedDocument):
    accepts = db.ListField(db.ReferenceField('Resource'))
    user = db.ReferenceField(User)

class Resource(db.Document):
    name = db.StringField(max_length=255, required=True)
    current_queue_element = db.EmbeddedDocumentField('QueueElement')

class Queue(db.EmbeddedDocument):
    name = db.StringField(max_length=255, required=True)
    resources = db.ListField(db.ReferenceField(Resource))
    queue_elements = db.ListField(db.EmbeddedDocumentField('QueueElement'))

class Room(db.Document):
    name = db.StringField(max_length=255, required=True)
    queues = db.ListField(db.EmbeddedDocumentField('Queue'))
and I would like to return a JSON object for a Room that includes the information about its queues (together with the referenced resources) and the nested queue_elements (together with their referenced "accepts" and user references).
However, when I want to return a Room with its relationships dereferenced:
room = Room.objects(slug=slug).select_related()
if room:
    return ast.literal_eval(room.to_json())
abort(404)
I don't get any dereferencing. I get:
{
"_cls":"Room",
"_id":{
"$oid":"552ab000605cd92f22347d79"
},
"created_at":{
"$date":1428842482049
},
"name":"second",
"queues":[
{
"created_at":{
"$date":1428842781490
},
"name":"myQueue",
"queue_elements":[
{
"accepts":[
{
"$oid":"552aafb3605cd92f22347d78"
},
{
"$oid":"552aafb3605cd92f22347d78"
},
{
"$oid":"552ab1f8605cd92f22347d7a"
}
],
"created_at":{
"$date":1428849389503
},
"user":{
"$oid":"552ac8c7605cd92f22347d7b"
}
}
],
"resources":[
{
"$oid":"552aafb3605cd92f22347d78"
},
{
"$oid":"552aafb3605cd92f22347d78"
},
{
"$oid":"552ab1f8605cd92f22347d7a"
}
]
}
],
"slug":"secondslug"
}
even though I'm using the select_related() function. I believe this is because MongoEngine may not follow references on embedded documents. Note that I can actually dereference in Python if I do something like this:
room = Room.objects(slug=slug).first().queues[0].queue_elements[0].accepts[0]
return ast.literal_eval(room.to_json())
which yields
{
"_id":{
"$oid":"552aafb3605cd92f22347d78"
},
"created_at":{
"$date":1428842849393
},
"name":"myRes"
}
which is clearly the dereferenced Resource document.
Is there a way I can follow references on embedded documents? Or is this coming up because I'm following a bad pattern, and should I be finding a different way to store this information in MongoDB (or indeed, switch to a relational DB)? Thanks!
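In case it helps frame the question, the workaround I'm currently leaning towards is serializing by hand and letting MongoEngine's lazy dereferencing kick in as each reference is touched. A rough sketch (the helper name and the returned fields are my own choices, based on the schema above):
def room_to_dict(room):
    # Walk the embedded documents explicitly; attribute access on
    # ReferenceFields (q.resources, qe.user, qe.accepts) dereferences lazily.
    return {
        "name": room.name,
        "slug": room.slug,
        "queues": [
            {
                "name": q.name,
                "resources": [{"id": str(r.id), "name": r.name} for r in q.resources],
                "queue_elements": [
                    {
                        "user": {"id": str(qe.user.id), "email": qe.user.email},
                        "accepts": [{"id": str(res.id), "name": res.name}
                                    for res in qe.accepts],
                    }
                    for qe in q.queue_elements
                ],
            }
            for q in room.queues
        ],
    }

room = Room.objects(slug=slug).first()
if room:
    return room_to_dict(room)
abort(404)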