In this JSON array:
json_string = '[{"Id": "report", "Value": "3001"}, {"Id": "user", "Value": "user123"}]'
How can I get back user123 if I pass in user?
When I try to do this:
content = json.loads(json_string)
content['user']
I get an error that says you have to use an integer to reference an element.
I am brand new to Python.
Thanks!
content is a list so you should get the element by index first:
>>> content[1]['Value']
'user123'
>>> for d in content:
...     if 'user' in d.values():
...         print d['Value']
user123
Assuming user is always mapped to Id:
>>> for d in content:
...     if d['Id'] == 'user':
...         print d['Value']
One liner:
>>> [d['Value'] for d in content if d['Id'] == 'user'][0]
'user123'
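If there might be no match, a variant of the same one-liner (my own tweak, not from the answer above) uses next() with a default so it returns None instead of raising an IndexError:
>>> next((d['Value'] for d in content if d['Id'] == 'user'), None)
'user123'
>>> next((d['Value'] for d in content if d['Id'] == 'missing'), None) is None
True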
Assuming you want the first element in the list whose given field (e.g. 'Id') has a certain value (e.g. 'user'):
def look_for(string, field, val):
    return next((el['Value'] for el in string if el[field] == val))
json_string = [{"Id": "report","Value": "3001"}, {"Id": "user","Value": "user123"}]
found_val = look_for(json_string, 'Id', 'user')
produces
'user123'
Obviously, the output field can also become a parameter instead of being hardcoded to 'Value'.
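As a rough sketch of that generalization (the out_field and default parameters are illustrative additions, not part of the original answer):
def look_for(records, field, val, out_field='Value', default=None):
    return next((el[out_field] for el in records if el.get(field) == val), default)

look_for(json_string, 'Id', 'user')                         # 'user123'
look_for(json_string, 'Id', 'nosuch', default='not found')  # 'not found'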
Related
I have the list below, from which I have to retrieve the port number. I want the value 50051, but what I get is port=50051. I know I can retrieve this by iterating over the list and using string operations, but I wanted to see if there is a more direct way to access it.
r = requests.get(url_service)
data = {}
data = r.json()

# Below is the JSON after printing
[{'ServerTag': ['abc-service=true',
                'port=50051',
                'protocol=http']
}]

print(data[0]["ServerTag"][1])  # prints port=50051
You can do something like this perhaps:
received_dic = {
    'ServerTag': ['abc-service=true',
                  'port=50051',
                  'protocol=http']
}

ServerTag = received_dic.get("ServerTag", None)
if ServerTag:
    port = list(filter(lambda x: "port" in x, ServerTag))[0].split("=")[1]
    print(port)
    # 50051
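A hedged alternative to the filter/indexing combination is next() with a default, which yields None instead of raising an IndexError when no tag starts with "port=":
port = next((tag.split("=", 1)[1] for tag in ServerTag if tag.startswith("port=")), None)
print(port)
# 50051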
Considering you have the following JSON:
[
    {
        "ServerTag": ["abc-service=true", "port=50051", "protocol=http"]
    }
]
You can extract your value like this:
from functools import partial

# ...

def extract_value_from_tag(tags, name, default=None):
    tags = map(partial(str.split, sep='='), tags)
    try:
        return next(value for key, value in tags if key == name)
    except StopIteration:
        # Tag was not found
        return default
And then you just:
# Providing data is the deserialized JSON as a Python list
# Also assuming that data is not empty and ServerTag is present on the first object
tags = data[0].get('ServerTag', [])
port_number = extract_value_from_tag(tags, 'port', default='8080')
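If you need several values from the same tag list, another option (my own sketch, not part of the answer above) is to split the tags into a dict once and then use plain .get() lookups:
# Assumes every tag has the form "key=value"
tag_dict = dict(tag.split('=', 1) for tag in tags)
port_number = tag_dict.get('port', '8080')
protocol = tag_dict.get('protocol', 'http')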
my schema:
test_multivalues = {
    'name': {'type': 'string'},
    'multi': {'type': 'list', 'schema': {'type': 'media'}},
    'arr': {'type': 'list'},
}
I POST the data as follows:
Content-Type: multipart/form-data
name: multivalue
multi: ....file1...
multi: ....file2....
arr: [arr_value1, arr_value2]
In Eve, the parameter arr comes back as a list, but multi only gets the first value.
I expect to get multi as a list like [file1, file2].
When I read the code, Eve uses werkzeug's MultiDict.to_dict() in the payload() method, which only returns the first value for a given key.
How can I get a key with multiple values back as a list?
Updated:
Eve raises an exception with the above schema and POST data:
multi: must be of list type
Updated:
Yes, I tested it with curl.
curl -F "image=@text.txt" -F "image=@test.txt" http://localhost/eve/api
When I changed the code in payload() to:
v = lambda l: l if len(l) > 1 else l[0]
return dict([(k, v(request.form.getlist(k))) for k in request.form] +
            [(k, v(request.files.getlist(k))) for k in request.files])
it returns a file list, but Eve's post method does not support it and throws an exception.
Ugly way to solve this:
def saver(filestorageobj):
    return app.media.put(
        filestorageobj,
        filename=filestorageobj.name,
        content_type=filestorageobj.mimetype,
        resource='test')

def pre_test_POST_callback(request):
    from werkzeug.datastructures import ImmutableMultiDict
    # files format: [("pics", FileStorageObject)]
    pics = [saver(upfile[1])
            for upfile in request.files.items(True) if upfile[0] == "pics"]
    form = request.form.copy()
    form['pics'] = pics
    request.form = ImmutableMultiDict(form)
    request.files = ImmutableMultiDict()
As of Eve 0.7+, you only need to set AUTO_COLLAPSE_MULTI_KEYS to True.
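For reference, that is a single configuration flag; a minimal sketch of the relevant line in an Eve settings.py (assuming the usual settings-file setup):
# settings.py
# Collapse repeated form/files keys into a list instead of keeping only the first value
AUTO_COLLAPSE_MULTI_KEYS = True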
I'm reading data from a SELECT statement in SQLite. Data comes in the following form:
ID|Phone|Email|Status|Role
Multiple rows may be returned for the same ID, Phone, or Email. And for a given row, either Phone or Email can be empty/NULL. However, for the same ID, it's always the same value for Status and the same for Role. For example:
1|1234567892|a#email.com| active |typeA
2|3434567893|b#email.com| active |typeB
2|3434567893|c#email.com| active |typeB
3|5664567891|d#email.com|inactive|typeC
3|7942367891|d#email.com|inactive|typeC
4|5342234233| NULL | active |typeD
5| NULL |e#email.com| active |typeD
These data are returned as a list by Sqlite3, let's call it results. I need to go through them and reorganize the data to construct another list structure in Python. The final list basically consolidates the data for each ID, such that:
Each item of the final list is a dict, one for each unique ID in results. In other words, multiple rows for the same ID will be merged.
Each dict contains these keys: 'id', 'phones', 'emails', 'types', 'role', 'status'.
'phones' and 'emails' are lists and contain zero or more items, but no duplicates.
'types' is also a list, and contains either 'phone' or 'email' or both, but no duplicates.
The order of dicts in the final list does not matter.
So far I have come up with this:
processed = {}
for r in results:
    if r['ID'] in processed:
        p_data = processed[r['ID']]
        if r['Phone']:
            p_data['phones'].add(r['Phone'])
            p_data['types'].add('phone')
        if r['Email']:
            p_data['emails'].add(r['Email'])
            p_data['types'].add('email')
    else:
        p_data = {'id': r['ID'], 'status': r['Status'], 'role': r['Role']}
        if r['Phone']:
            p_data['phones'] = set([r['Phone']])
            p_data.setdefault('types', set()).add('phone')
        if r['Email']:
            p_data['emails'] = set([r['Email']])
            p_data.setdefault('types', set()).add('email')
    processed[r['ID']] = p_data

consolidated = list(processed.values())
I wonder if there is a faster and/or more concise way to do this.
EDIT:
A final detail: I would prefer to have 'phones', 'emails', and 'types' in each dict as lists instead of sets. The reason is that I need to dump consolidated into JSON, and JSON does not allow sets.
When faced with something like this I usually use:
processed = collections.defaultdict(lambda:{'phone':set(),'email':set(),'status':None,'type':set()})
and then something like:
for r in results:
    for field in ['Phone', 'Email']:
        if r[field]:
            processed[r['ID']][field.lower()].add(r[field])
            processed[r['ID']]['type'].add(field.lower())
Finally, you can dump it into a dictionary or a list:
a_list = processed.items()
a_dict = dict(a_list)
Regarding the JSON problem with sets, you can either convert the sets to lists right before serializing or write a custom encoder (very useful!). Here is an example of one I have for dates extended to handle sets:
class JSONDateTimeEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            return int(time.mktime(obj.timetuple()))
        elif isinstance(obj, set):
            return list(obj)
        try:
            return json.JSONEncoder.default(self, obj)
        except:
            return str(obj)
and to use it:
json.dumps(a_list, sort_keys=True, indent=2, cls=JSONDateTimeEncoder)
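The other route mentioned above, converting the sets to lists right before serializing, can be a one-off comprehension; a small sketch assuming processed holds per-ID dicts whose multi-valued fields are sets, as built earlier:
consolidated = [
    {k: sorted(v) if isinstance(v, set) else v for k, v in d.items()}
    for d in processed.values()
]
json.dumps(consolidated, sort_keys=True, indent=2)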
I assume results is a 2d list:
print results
#[['1', '1234567892', 'a#email.com', ' active ', 'typeA'],
#['2', '3434567893', 'b#email.com', ' active ', 'typeB'],
#['2', '3434567893', 'c#email.com', ' active ', 'typeB'],
#['3', '5664567891', 'd#email.com', 'inactive', 'typeC'],
#['3', '7942367891', 'd#email.com', 'inactive', 'typeC'],
#['4', '5342234233', ' NULL ', ' active ', 'typeD'],
#['5', ' NULL ', 'e#email.com', ' active ', 'typeD']]
Now we group this list by id:
from itertools import groupby
data_grouped = [ (k,list(v)) for k,v in groupby( sorted(results, key=lambda x:x[0]) , lambda x : x[0] )]
# make list of column names (should correspond to results). These will be dict keys
names = ['id', 'phone', 'email', 'status', 'role']
ID_info = { g[0]: {names[i]: list(list( map( set, zip(*g[1] )))[i]) for i in range( len(names))} for g in data_grouped }
Now for the types:
for k in ID_info:
    email = [i for i in ID_info[k]['email'] if i.strip() != 'NULL' and i != '']
    phone = [i for i in ID_info[k]['phone'] if i.strip() != 'NULL' and i != '']
    if email and phone:
        ID_info[k]['types'] = ['phone', 'email']
    elif email and not phone:
        ID_info[k]['types'] = ['email']
    elif phone and not email:
        ID_info[k]['types'] = ['phone']
    else:
        ID_info[k]['types'] = []
    # project
    ID_info[k]['id'] = ID_info[k]['id'][0]
    ID_info[k]['role'] = ID_info[k]['role'][0]
    ID_info[k]['status'] = ID_info[k]['status'][0]
And what you asked for (a list of dicts) is returned by ID_info.values()
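One small caveat (my note, not from the answer): on Python 3, dict.values() is a view rather than a list, so wrap it if you need an actual list:
consolidated = list(ID_info.values())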
I have a long HTML form I'm retrieving data from and then populating in an HTML document and emailing it. I'm using form = cgi.FieldStorage() to get the data posted from my form, and then I have a string for the message body similar to (but much longer than)
msg = """<html><p>%(Frist_Name)s</p><p>%(Last_Name)s</p></html> """ % form
Which, according to debugging, works until it finds one of my named vars that was not submitted with the form.
There are many optional fields on the form. What can I do to either make this work or do something similar, so I don't have to gather all the fields beforehand (i.e. manually re-enter each field) and create a blank one if it's None? Or is there no easier way than doing exactly that?
set defaults
my_data_dict = {'first_name':'','last_name':'','blah':'','something_else':''} #defaults with all values from format string
my_data_dict.update(form) #replace any values we got
msg = """<html><p>%(Frist_Name)s</p><p>%(Last_Name)s</p></html> """ % my_data_dict
or use defaultdict (this way you won't need to input all your defaults; they will just magically be '')
from collections import defaultdict
my_dict = defaultdict(str)
print my_dict['some_key_that_doesnt_exist'] #should print empty string
my_dict.update(form)
msg = """<html><p>%(Frist_Name)s</p><p>%(Last_Name)s</p></html> """ % my_dict
or, as abarnert points out, you can simplify it to
from collections import defaultdict
my_dict = defaultdict(str,form)
msg = """<html><p>%(Frist_Name)s</p><p>%(Last_Name)s</p></html> """ % my_dict
here is a complete example I just did in my terminal
>>> d = defaultdict(str, {'fname': 'bob', 'lname': 'smith', 'zipcode': 11111})
>>> format_str = "Name: %(fname)s %(lname)s\nPhone: %(telephone)s\nZipcode: %(zipcode)s"
>>> d
defaultdict(<type 'str'>, {'lname': 'smith', 'zipcode': 11111, 'fname': 'bob'})
>>> # notice no telephone here
...
>>> d['extra_unneeded_argument'] = ' just for show'
>>> d
defaultdict(<type 'str'>, {'lname': 'smith', 'extra_unneeded_argument': ' just for show', 'zipcode': 11111, 'fname': 'bob'})
>>> print format_str % d
Name: bob smith
Phone:
Zipcode: 11111
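If you want to stay a bit closer to the cgi API, a hedged variant first builds a plain dict of submitted values with FieldStorage.getfirst() and then falls back to '' through defaultdict (the field names are only illustrative):
from collections import defaultdict

# form is the cgi.FieldStorage() instance from the question
submitted = {k: form.getfirst(k, '') for k in form.keys()}
my_dict = defaultdict(str, submitted)
msg = """<html><p>%(Frist_Name)s</p><p>%(Last_Name)s</p></html> """ % my_dict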
How do I look up the 'id' associated with a person's 'name' when the two are in a dictionary?
user = 'PersonA'
id = ? #How do I retrieve the 'id' from the user_stream json variable?
The JSON, stored in a variable named user_stream:
[
    {
        "name": "PersonA",
        "id": "135963"
    },
    {
        "name": "PersonB",
        "id": "152265"
    }
]
You'll have to decode the JSON structure and loop through all the dictionaries until you find a match:
for person in json.loads(user_stream):
    if person['name'] == user:
        id = person['id']
        break
else:
    # The else branch is only ever reached if no match was found
    raise ValueError('No such person')
If you need to make multiple lookups, you probably want to transform this structure to a dict to ease lookups:
name_to_id = {p['name']: p['id'] for p in json.loads(user_stream)}
then look up the id directly:
id = name_to_id.get(name) # if name is not found, id will be None
The above example assumes that names are unique; if they are not, use:
from collections import defaultdict

name_to_id = defaultdict(list)
for person in json.loads(user_stream):
    name_to_id[person['name']].append(person['id'])

# lookup
ids = name_to_id.get(name, [])  # list of ids, defaults to empty
This is, as always, a trade-off: you trade memory for speed.
Martijn Pieters's solution is correct, but if you intend to make many such look-ups, it's better to load the JSON and iterate over it just once, not once per look-up.
name_id = {}
for person in json.loads(user_stream):
    name = person['name']
    id = person['id']
    name_id[name] = id

user = 'PersonA'
print name_id[user]
persons = json.loads(...)
results = list(filter(lambda p: p['name'] == 'avi', persons))

if results:
    id = results[0]["id"]

results can of course contain more than one match.
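A hedged alternative that skips the intermediate list is next() with a default:
match = next((p for p in persons if p['name'] == 'avi'), None)
if match:
    id = match['id']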