Constructing a message format from the fetchall result in Python

New to programming.
Question: I need to take the "Data" below (two rows, as arrays) queried from SQL and use it to create the message structure that follows.
Data from SQL using fetchall():
Data = [[100, 1, 4, 5], [101, 1, 4, 6]]
## expected message structure
message = {
    "name": "Tom",
    "Job": "IT",
    "info": [
        {
            "id_1": "100",
            "id_2": "1",
            "id_3": "4",
            "id_4": "5"
        },
        {
            "id_1": "101",
            "id_2": "1",
            "id_3": "4",
            "id_4": "6"
        },
    ]
}
I tried to create the method below to iterate over the rows and fill in the values. This was just a start, but it was not working either:
def create_message(data):
    for row in data:
        {
            "id_1": str(data[0][0]),
            "id_2": str(data[0][1]),
            "id_3": str(data[0][2]),
            "id_4": str(data[0][3]),
        }
Latest Code
def create_info(data):
    info = []
    for row in data:
        temp_dict = {"id_1_tom": "", "id_2_hell": "", "id_3_trip": "", "id_4_clap": ""}
        for i in range(0, 1):
            temp_dict["id_1_tom"] = str(row[i])
            temp_dict["id_2_hell"] = str(row[i + 1])
            temp_dict["id_3_trip"] = str(row[i + 2])
            temp_dict["id_4_clap"] = str(row[i + 3])
        info.append(temp_dict)
    return info

Edit: Updated answer based on updates to the question and a comment by the original poster.
Based on the attempt you've provided, this function should produce the desired output for your example:
def create_info(data):
    info = []
    for row in data:
        temp_dict = {}
        temp_dict['id_1_tom'] = str(row[0])
        temp_dict['id_2_hell'] = str(row[1])
        temp_dict['id_3_trip'] = str(row[2])
        temp_dict['id_4_clap'] = str(row[3])
        info.append(temp_dict)
    return info
For the input:
[[100, 1, 4, 5],[101,1,4,6]]
This function will return a list of dictionaries:
[{"id_1_tom":"100","id_2_hell":"1","id_3_trip":"4","id_4_clap":"5"},
{"id_1_tom":"101","id_2_hell":"1","id_3_trip":"4","id_4_clap":"6"}]
This can serve as the value of the info key in your message dictionary. Note that you would still have to construct the message dictionary itself.
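For example, a minimal sketch of assembling the full message around it, assuming the fixed "name" and "Job" values from your expected structure:

def create_message(data):
    # wrap the info list built by create_info in the outer message dict
    return {
        "name": "Tom",
        "Job": "IT",
        "info": create_info(data),
    }

message = create_message([[100, 1, 4, 5], [101, 1, 4, 6]])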

Related

Filter nested JSON structure and get field names as values in Pyspark

I have the following complex data that I would like to parse in PySpark:
records = '[{"segmentMembership":{"ups":{"FF6KCPTR6AQ0836R":{"lastQualificationTime":"2021-01-16 22:05:11.074357","status":"exited"},"QMS3YRT06JDEUM8O":{"lastQualificationTime":"2021-01-16 22:05:11.074357","status":"realized"},"8XH45RT87N6ZV4KQ":{"lastQualificationTime":"2021-01-16 22:05:11.074357","status":"exited"}}},"_aepgdcdevenablement2":{"emailId":{"address":"stuff#someemail.com"},"person":{"name":{"firstName":"Name2"}},"identities":{"customerid":"PH25PEUWOTA7QF93"}}},{"segmentMembership":{"ups":{"FF6KCPTR6AQ0836R":{"lastQualificationTime":"2021-01-16 22:05:11.074457","status":"realized"},"D45TOO8ZUH0B7GY7":{"lastQualificationTime":"2021-01-16 22:05:11.074457","status":"realized"},"QMS3YRT06JDEUM8O":{"lastQualificationTime":"2021-01-16 22:05:11.074457","status":"existing"}}},"_aepgdcdevenablement2":{"emailId":{"address":"stuff4#someemail.com"},"person":{"name":{"firstName":"TestName"}},"identities":{"customerid":"9LAIHVG91GCREE3Z"}}}]'
df = spark.read.json(sc.parallelize([records]))
df.show()
df.printSchema()
The problem I am having is with the segmentMembership object. The JSON object looks like this:
"segmentMembership": {
"ups": {
"FF6KCPTR6AQ0836R": {
"lastQualificationTime": "2021-01-16 22:05:11.074357",
"status": "exited"
},
"QMS3YRT06JDEUM8O": {
"lastQualificationTime": "2021-01-16 22:05:11.074357",
"status": "realized"
},
"8XH45RT87N6ZV4KQ": {
"lastQualificationTime": "2021-01-16 22:05:11.074357",
"status": "exited"
}
}
}
The annoying thing is that the key values ("FF6KCPTR6AQ0836R", "QMS3YRT06JDEUM8O", "8XH45RT87N6ZV4KQ") end up being defined as columns in PySpark.
In the end, for segments whose status is "exited", I was hoping to get results as follows.
+--------------------+----------------+---------+------------------+
|address |customerid |firstName|segment_id |
+--------------------+----------------+---------+------------------+
|stuff#someemail.com |PH25PEUWOTA7QF93|Name2 |[8XH45RT87N6ZV4KQ]|
|stuff4#someemail.com|9LAIHVG91GCREE3Z|TestName |[8XH45RT87N6ZV4KQ]|
+--------------------+----------------+---------+------------------+
After loading the data into a dataframe (above), I tried the following:
dfx = df.select("_aepgdcdevenablement2.emailId.address",
                "_aepgdcdevenablement2.identities.customerid",
                "_aepgdcdevenablement2.person.name.firstName",
                "segmentMembership.ups")
dfx.show(truncate=False)

seg_list = array(*[lit(k) for k in ["8XH45RT87N6ZV4KQ", "QMS3YRT06JDEUM8O"]])
print(seg_list)

# if v["status"] in ['existing', 'realized']
def confusing_compare(ups, seg_list):
    seg_id_filtered_d = dict((k, ups[k]) for k in seg_list if k in ups)
    # This is the line I am having a problem with.
    # seg_id_status_filtered_d = {key for key, value in seg_id_filtered_d.items() if v["status"] in ['existing', 'realized']}
    return list(seg_id_filtered_d)

final_conf_dx_pred = udf(confusing_compare, ArrayType(StringType()))
result_df = dfx.withColumn("segment_id", final_conf_dx_pred(dfx.ups, seg_list)) \
    .select("address", "customerid", "firstName", "segment_id")
result_df.show(truncate=False)
I am not able to check the status field within the value field of the dict.
You can actually do that without using a UDF. Here I'm using all the segment names present in the schema and filtering out those with status = 'exited'; you can adapt it depending on which segments and statuses you want.
First, using the schema fields, get the list of all segment names like this (the functions used below come from pyspark.sql.functions):
from pyspark.sql.functions import array, col, expr, lit, when

segment_names = df.select("segmentMembership.ups.*").schema.fieldNames()
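For the sample records in the question, this should give something like the following (assuming Spark's usual alphabetical ordering of inferred JSON fields):

print(segment_names)
# ['8XH45RT87N6ZV4KQ', 'D45TOO8ZUH0B7GY7', 'FF6KCPTR6AQ0836R', 'QMS3YRT06JDEUM8O']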
Then, by looping through the list created above and using the when function, you can create a column that holds either the segment name or null, depending on status:
active_segments = [
    when(col(f"segmentMembership.ups.{c}.status") != lit("exited"), lit(c))
    for c in segment_names
]
Finally, add a new column segments of array type and use the filter function to remove null elements from the array (which correspond to status 'exited'):
dfx = df.withColumn("segments", array(*active_segments)) \
    .withColumn("segments", expr("filter(segments, x -> x is not null)")) \
    .select(
        col("_aepgdcdevenablement2.emailId.address"),
        col("_aepgdcdevenablement2.identities.customerid"),
        col("_aepgdcdevenablement2.person.name.firstName"),
        col("segments").alias("segment_id")
    )
dfx.show(truncate=False)
#+--------------------+----------------+---------+------------------------------------------------------+
#|address |customerid |firstName|segment_id |
#+--------------------+----------------+---------+------------------------------------------------------+
#|stuff#someemail.com |PH25PEUWOTA7QF93|Name2 |[QMS3YRT06JDEUM8O] |
#|stuff4#someemail.com|9LAIHVG91GCREE3Z|TestName |[D45TOO8ZUH0B7GY7, FF6KCPTR6AQ0836R, QMS3YRT06JDEUM8O]|
#+--------------------+----------------+---------+------------------------------------------------------+
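To instead keep only the segments whose status is 'exited' (the case in the question's expected output), the only change needed is flipping the comparison in the when condition:

# keep segments with status 'exited' instead of excluding them
exited_segments = [
    when(col(f"segmentMembership.ups.{c}.status") == lit("exited"), lit(c))
    for c in segment_names
]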

How do I merge two list[dict] objects and order them properly by ID?

I am trying to merge two list[dict] objects into the following format, ordering both the techniques and the subtechniques, but I am not sure how to join them properly without affecting other techniques.
techniques = [{
    "technique_id": "T1548",
    "technique": "demo",
    "url": "url",
    "tactic": [
        "demo",
        "demo"
    ]
}]
subtechniques = [{
    "technique_id": "T1548.002",
    "technique": "demo",
    "url": "url"
}]
def merge_techniques(techniques, subtechniques):
    change_list = []
    for x in techniques:
        for y in subtechniques:
            if x['technique_id'] == y['technique_id'].split('.')[0]:
                print(x)
                print(y)
    return change_list

merge_techniques(techniques, subtechniques)
Desired output:
{
    "technique_id": "T1548",
    "technique": "dmep",
    "url": "https://xxxxxxxxxxxx",
    "tactic": [
        "xxxxxxxxxxxx",
        "xxxxxxxxxxxx"
    ],
    "subtechnique": [
        {
            "technique_id": "T1548.002",
            "technique": "demo",
            "url": "url"
        }
    ]
}
If you don't need to leave the input objects untouched, you can drop the copy operations below.
import copy

def merge_techniques(techniques, subtechniques):
    result = []
    # create technique_id dictionary
    techniques_dict = dict()
    for tech in techniques:
        # create new object
        tech_copy = copy.copy(tech)
        techniques_dict[tech['technique_id']] = tech_copy
        result.append(tech_copy)
    # visit all sub techniques
    for subtech in subtechniques:
        tech_id = subtech['technique_id'].split('.')[0]
        # search by tech_id
        if tech_id not in techniques_dict:
            # if not found
            print('Tech %s is not found' % (tech_id))
            continue
        # get tech by tech_id
        tech = techniques_dict[tech_id]
        # create subtechnique array if it does not exist yet
        if 'subtechnique' not in tech:
            tech['subtechnique'] = []
        # copy subtech object
        tech['subtechnique'].append(copy.copy(subtech))
    return result
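For the sample inputs in the question, a quick sanity check of the merged result might look like this:

import json

merged = merge_techniques(techniques, subtechniques)
print(json.dumps(merged, indent=2))
# the T1548 technique now carries a "subtechnique" list containing the T1548.002 entry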

I want to first create a database and then update it with the values in MongoDB

I want to update the value of Entry1 using upsert. I have a sensor that returns the value of Entry1: if the sensor is blocked, the value is True; if it is not blocked, the value is False.
machineOne = None
oneIn = 1
while True:
    global machineOneId
    global userId
    try:
        if Entry1.get_value() and oneIn < 2:
            machineOne = Entry1.get_value()
            print('entered looopp ONeeeE', machineOne)
            machine1 = {
                'Entry1': Entry1.get_value(),
                'Exit1': Exit1.get_value(),
                'id': 'test'
            }
            result = Machine1.insert_one(machine1)
            myquery = {"Entry1": 'true'}
            newvalues = {"$set": {"id": result.inserted_id}}
            #result = Machine1.insert_one(machine1)
            Machine1.update_one(myquery, newvalues)
            userId = result.inserted_id
            oneIn += 1
            print('added', result.inserted_id, oneIn)
        elif machineOne:
            print('entered looopp', userId)
            myquery = {"id": userId}
            newvalues = {"$set": {"id": Entry1.get_value()}}
            upsert = True
            #result = Machine1.insert_one(machine1)
            Machine1.update_one(myquery, newvalues)
            if Exit1.get_value():
                print('added',)
    finally:
        print('nothings happened', machineOne)
What is expected: I should be able to update Entry1 from true to false in the same log, as displayed in Robo 3T.
Good afternoon #digs10,
I read your post and I think the error is in how you locate the document that you want to update.
I remember that the MongoDB document primary key is "_id" instead of "id". You can take a look here: MongoDB Documents.
From what I see in the code (I don't know Python, but it is readable), you are referring to the document using the field "id" instead of "_id".
Try modifying the line myquery = {"id": userId} to myquery = {"_id": userId}.
I hope this answer can help you.
Best Regards,
JB
P.S.: I saw this question in my email and took a quick read of it. If I misunderstood it, please let me know.
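A minimal sketch of that fix, assuming pymongo and the Machine1, userId, and Entry1 names from the question, and assuming the intent is to update the Entry1 field with the current sensor reading:

# match on the primary key "_id" (the value insert_one returned) instead of "id";
# upsert=True creates the document if no match exists yet
myquery = {"_id": userId}
newvalues = {"$set": {"Entry1": Entry1.get_value()}}
Machine1.update_one(myquery, newvalues, upsert=True)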

How to create a function that subtracts the current value from the previous value every time it's called

I am querying a dataset that works like this: each time I query it, it produces a different value of Bytes_Written and Bytes_Read. What I cannot seem to accomplish is subtracting the current value from the previous value, and doing this every second.
Here is what the data looks like:
{
    "Number of Devices": 2,
    "Block Devices": {
        "bdev0": {
            "Backend_Device_Path": "/dev/disk/by-path/ip-192.168.26.1:3260-iscsi-iqn.2010-10.org.openstack:volume-d1c8e7c6-8c77-444c-9a93-8b56fa1e37f2-lun-010.0.0.142",
            "Capacity": "2147483648",
            "Guest_Device_Name": "vdb",
            "IO_Operations": "97069",
            "Bytes_Written": "34410496",
            "Bytes_Read": "363172864"
        },
        "bdev1": {
            "Backend_Device_Path": "/dev/disk/by-path/ip-192.168.26.1:3260-iscsi-iqn.2010-10.org.openstack:volume-b27110f9-41ba-4bc6-b97c-b5dde23af1f9-lun-010.0.0.146",
            "Capacity": "2147483648",
            "Guest_Device_Name": "vdb",
            "IO_Operations": "93",
            "Bytes_Written": "0",
            "Bytes_Read": "380928"
        }
    }
}
The code that queries the data:
def counterVolume_one():
    # Get data
    url = 'http://url:8080/vrio/blk'
    r = requests.get(url)
    data = r.json()
    wanted = {'Bytes_Written', 'Bytes_Written', 'IO_Operation'}
    for d in data['Block Devices'].itervalues():
        values = {k: v for k, v in d.iteritems() if k in wanted}
        print json.dumps(values)

counterVolume_one()
The way I want to get the output is:
{"IO_Operations": "97069", "Bytes_Read": "363172864", "Bytes_Written": "34410496"}
{"IO_Operations": "93", "Bytes_Read": "380928", "Bytes_Written": "0"}
Here is what I want to accomplish:
first time query = first set of values
after 1 sec
second time query = first set of values-second set of values
after 1 sec
third time query = second set of values-third set of values
Expected output would be a json object as below
{'bytes-read': newvalue, 'bytes-written': newvalue, 'io_operations': newvalue}
The simplest fix may be to modify the counterVolume_one() function so that it accepts a parameter holding the current state, which you update as you collect data. For example, the following code collects and sums the fields you're interested in from the JSON documents:
import time

import requests

FIELDS = ('Bytes_Written', 'Bytes_Read', 'IO_Operations')

def counterVolume_one(state):
    url = 'http://url:8080/vrio/blk'
    r = requests.get(url)
    data = r.json()
    # sum each field across all block devices; the JSON stores the counts as strings
    for d in data['Block Devices'].values():
        for field in FIELDS:
            state[field] += int(d[field])
    return state

state = {"Bytes_Written": 0, "Bytes_Read": 0, "IO_Operations": 0}
while True:
    counterVolume_one(state)
    time.sleep(1)
    for field in FIELDS:
        print("{field:s}: {count:d}".format(field=field,
                                            count=state[field]))
A more correct fix might be to use a class that holds the state and has a method to update it (a sketch follows below). But I think following the idea above will get you to a solution quickest.
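For reference, a sketch of that class-based approach, assuming the same endpoint and the field names from the JSON above; it keeps the previous snapshot and returns the per-call differences, which matches the subtract-from-previous behaviour asked for:

import requests

class VolumeCounter(object):
    FIELDS = ('Bytes_Written', 'Bytes_Read', 'IO_Operations')

    def __init__(self, url):
        self.url = url
        self.previous = None  # no snapshot taken yet

    def poll(self):
        # sum each field across all block devices in the response
        data = requests.get(self.url).json()
        current = dict.fromkeys(self.FIELDS, 0)
        for dev in data['Block Devices'].values():
            for field in self.FIELDS:
                current[field] += int(dev[field])
        # first call: nothing to subtract from, so report zero deltas
        if self.previous is None:
            self.previous = current
            return dict.fromkeys(self.FIELDS, 0)
        delta = {f: current[f] - self.previous[f] for f in self.FIELDS}
        self.previous = current
        return delta

Calling poll() once a second then yields exactly the per-second differences described in the question.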

Python: Normalize JSON response (array)

I am new to JSON and Python, and I am trying to achieve the following.
I need to parse the JSON below:
{
    "id": "12345abc",
    "codes": [
        "BSVN1FKW3JKKNNMN",
        "HJYYUKJJL999OJR",
        "DFTTHJJJJ0099JUU",
        "FGUUKHKJHJGJJYGJ"
    ],
    "ctr": {
        "source": "xyz",
        "user_id": "1234"
    }
}
Expected output, normalized on the "codes" values:
ID~CODES~USER_ID
12345abc~BSVN1FKW3JKKNNMN~1234
12345abc~HJYYUKJJL999OJR~1234
12345abc~DFTTHJJJJ0099JUU~1234
12345abc~FGUUKHKJHJGJJYGJ~1234
I started with the code below, but need help getting to my desired output.
The "codes" block can have any number of values, separated by commas.
The code below throws "TypeError: string indices must be integers":
#!/usr/bin/python
import os
import json
import csv

f = open('rspns.csv', 'w')
writer = csv.writer(f, delimiter='~')
headers = ['ID', 'CODES', 'USER_ID']
default = ''
writer.writerow(headers)

string = open('sample.json').read().decode('utf-8')
json_obj = json.loads(string)

#print json_obj['id']
#print json_obj['codes']
#print json_obj['codes'][0]
#print json_obj['codes'][1]
#print json_obj['codes'][2]
#print json_obj['codes'][3]
#print json_obj['ctr']['user_id']

for keyword in json_obj:
    row = []
    row.append(str(keyword['id']))
    row.append(str(keyword['codes']))
    row.append(str(keyword['ctr']['user_id']))
    writer.writerow(row)
If your json_obj looks exactly like that, that is, it is a dictionary, then the issue is that when you do -
for keyword in json_obj:
you are iterating over the keys in json_obj, so trying to access ['id'] on a key errors out with string indices must be integers.
You should first get id and user_id before looping, then loop over json_obj['codes'] and write the previously computed id and user_id, along with the current value from the codes list, as a row in the CSV.
Example -
import json
import csv

string = open('sample.json').read().decode('utf-8')
json_obj = json.loads(string)

with open('rspns.csv', 'w') as f:
    writer = csv.writer(f, delimiter='~')
    headers = ['ID', 'CODES', 'USER_ID']
    writer.writerow(headers)
    id = json_obj['id']
    user_id = json_obj['ctr']['user_id']
    for code in json_obj['codes']:
        writer.writerow([id, code, user_id])
You don't want to iterate through json_obj, as that is a dictionary and iterating through it yields its keys. The TypeError is caused by trying to index into those keys ('id', 'codes', and 'ctr') -- which are strings -- as if they were dictionaries.
Instead, you want a separate row for each code in json_obj['codes'], using the json_obj dictionary for your lookups:
for code in json_obj['codes']:
    row = []
    row.append(json_obj['id'])
    row.append(code)
    row.append(json_obj['ctr']['user_id'])
    writer.writerow(row)
