I am trying to parse out face matches from the results of the get_face_search() AWS Rekognition API. It outputs an array of Persons; within each element of that array is another array, FaceMatches, for a given person and timestamp. I want to take information from the FaceMatches array and be able to loop through it.
I have done something similar before for single arrays and looped successfully, but I am missing something trivial here perhaps.
Here is the output from the API:
Response:
{
"JobStatus": "SUCCEEDED",
"NextToken": "U5EdbZ+86xseDBfDlQ2u8QhSVzbdodDOmX/gSbwIgeO90l2BKWvJEscjUDmA6GFDCSSfpKA4",
"VideoMetadata": {
"Codec": "h264",
"DurationMillis": 6761,
"Format": "QuickTime / MOV",
"FrameRate": 30.022184371948242,
"FrameHeight": 568,
"FrameWidth": 320
},
"Persons": [
{
"Timestamp": 0,
"Person": {
"Index": 0,
"BoundingBox": {
"Width": 0.987500011920929,
"Height": 0.7764084339141846,
"Left": 0.0031250000465661287,
"Top": 0.2042253464460373
},
"Face": {
"BoundingBox": {
"Width": 0.6778846383094788,
"Height": 0.3819068372249603,
"Left": 0.10096154361963272,
"Top": 0.2654387652873993
},
"Landmarks": [
{
"Type": "eyeLeft",
"X": 0.33232420682907104,
"Y": 0.4194057583808899
},
{
"Type": "eyeRight",
"X": 0.5422032475471497,
"Y": 0.41616082191467285
},
{
"Type": "nose",
"X": 0.45633792877197266,
"Y": 0.4843473732471466
},
{
"Type": "mouthLeft",
"X": 0.37002310156822205,
"Y": 0.567118763923645
},
{
"Type": "mouthRight",
"X": 0.5330674052238464,
"Y": 0.5631639361381531
}
],
"Pose": {
"Roll": -2.2475271224975586,
"Yaw": 4.371307373046875,
"Pitch": 6.83940315246582
},
"Quality": {
"Brightness": 40.40004348754883,
"Sharpness": 99.95819854736328
},
"Confidence": 99.87971496582031
}
},
"FaceMatches": [
{
"Similarity": 99.81229400634766,
"Face": {
"FaceId": "4699a1eb-9f6e-415d-8716-eef141d23433a",
"BoundingBox": {
"Width": 0.6262923432480737,
"Height": 0.46972032423490747,
"Left": 0.130435005324523403604,
"Top": 0.13354002343240603
},
"ImageId": "1ac790eb-615a-111f-44aa-4017c3c315ad",
"Confidence": 99.19400024414062
}
}
]
},
{
"Timestamp": 66,
"Person": {
"Index": 0,
"BoundingBox": {
"Width": 0.981249988079071,
"Height": 0.7764084339141846,
"Left": 0.0062500000931322575,
"Top": 0.2042253464460373
}
}
},
{
"Timestamp": 133,
"Person": {
"Index": 0,
"BoundingBox": {
"Width": 0.9781249761581421,
"Height": 0.783450722694397,
"Left": 0.0062500000931322575,
"Top": 0.19894365966320038
}
}
},
{
"Timestamp": 199,
"Person": {
"Index": 0,
"BoundingBox": {
"Width": 0.981249988079071,
"Height": 0.783450722694397,
"Left": 0.0031250000465661287,
"Top": 0.19894365966320038
},
"Face": {
"BoundingBox": {
"Width": 0.6706730723381042,
"Height": 0.3778440058231354,
"Left": 0.10817307233810425,
"Top": 0.26679307222366333
},
"Landmarks": [
{
"Type": "eyeLeft",
"X": 0.33244985342025757,
"Y": 0.41591548919677734
},
{
"Type": "eyeRight",
"X": 0.5446155667304993,
"Y": 0.41204410791397095
},
{
"Type": "nose",
"X": 0.4586191177368164,
"Y": 0.479543000459671
},
{
"Type": "mouthLeft",
"X": 0.37614554166793823,
"Y": 0.5639738440513611
},
{
"Type": "mouthRight",
"X": 0.5334802865982056,
"Y": 0.5592300891876221
}
],
"Pose": {
"Roll": -2.4899401664733887,
"Yaw": 3.7596628665924072,
"Pitch": 6.3544135093688965
},
"Quality": {
"Brightness": 40.46360778808594,
"Sharpness": 99.95819854736328
},
"Confidence": 99.89802551269531
}
},
"FaceMatches": [
{
"Similarity": 99.80543518066406,
"Face": {
"FaceId": "4699a1eb-9f6e-415d-8716-eef141d9223a",
"BoundingBox": {
"Width": 0.626294234234737,
"Height": 0.469234234890747,
"Left": 0.130435002334234604,
"Top": 0.13354023423449180603
},
"ImageId": "1ac790eb-615a-111f-44aa-4017c3c315ad",
"Confidence": 99.19400024414062
}
}
]
},
{
"Timestamp": 266,
"Person": {
"Index": 0,
"BoundingBox": {
"Width": 0.984375,
"Height": 0.7852112650871277,
"Left": 0,
"Top": 0.19718310236930847
}
}
}
],
I have isolated the timestamps (just testing my approach) using the following:
timestamps = [m['Timestamp'] for m in response['Persons']]
Output is this, as expected - [0, 66, 133, 199, 266]
However, when I try the same thing with FaceMatches, I get an error.
[0, 66, 133, 199, 266]
list indices must be integers or slices, not str: TypeError
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 40, in lambda_handler
matches = [m['FaceMatches']['Face']['FaceId'] for m in response['Persons']]
File "/var/task/lambda_function.py", line 40, in <listcomp>
matches = [m['FaceMatches']['Face']['FaceId'] for m in response['Persons']]
TypeError: list indices must be integers or slices, not str
What I need to end up with is for each face that is matched:
Timestamp
FaceID
Similarity
Can anybody shed some light on this for me?
Based on your needs: you have two FaceMatches entries in your response, and you can extract the required info this way:
import json

with open('newtest.json') as f:
    data = json.load(f)

length = len(data['Persons'])
for i in range(0, length):
    try:
        print(data['Persons'][i]['FaceMatches'][0]['Similarity'])
        print(data['Persons'][i]['FaceMatches'][0]['Face']['FaceId'])
        print(data['Persons'][i]['Timestamp'])
    except (KeyError, IndexError):
        # this person entry has no FaceMatches, skip it
        continue
I have loaded your JSON object into the data variable, and I have ignored timestamps where there is no corresponding face match; if you wish, you can extract those the same way.
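For reference, the original list comprehension fails because FaceMatches is a list of match objects, not a dict, so it has to be indexed or iterated before you can reach Face. A minimal sketch that collects the three fields you listed (assuming response is the get_face_search() result already loaded as a dict, and using .get() because FaceMatches is absent for frames without a match):
matches = []
for person in response['Persons']:
    timestamp = person['Timestamp']
    # FaceMatches is missing for person entries with no matched face, so default to []
    for match in person.get('FaceMatches', []):
        matches.append({
            'Timestamp': timestamp,
            'FaceId': match['Face']['FaceId'],
            'Similarity': match['Similarity'],
        })
print(matches)
This gives one record per matched face, with the timestamp of the frame it was matched in.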
I am trying to link several Altair charts that share aspects of the same data. I can do this by merging all the data into one data frame, but because of the nature of the data, the merged data frame is much larger than it would be if I kept two separate data frames, one for each chart. This is because the columns unique to each chart end up with many repeated rows for each entry in the shared column.
Would using transform_lookup save space over just using the merged data frame, or does transform_lookup end up doing the whole merge internally?
No, the entire dataset is still included in the Vega spec when you use transform_lookup. You can see this by printing the JSON spec of the charts you create. With the example from the docs:
import altair as alt
import pandas as pd
from vega_datasets import data
people = data.lookup_people().head(3)
people
name age height
0 Alan 25 180
1 George 32 174
2 Fred 39 182
groups = data.lookup_groups().head(3)
groups
group person
0 1 Alan
1 1 George
2 1 Fred
With pandas merge:
merged = pd.merge(groups, people, how='left',
                  left_on='person', right_on='name')
print(alt.Chart(merged).mark_bar().encode(
    x='mean(age):Q',
    y='group:O'
).to_json())
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.8.1.json",
"config": {
"view": {
"continuousHeight": 300,
"continuousWidth": 400
}
},
"data": {
"name": "data-b41b97ffc89b39c92e168871d447e720"
},
"datasets": {
"data-b41b97ffc89b39c92e168871d447e720": [
{
"age": 25,
"group": 1,
"height": 180,
"name": "Alan",
"person": "Alan"
},
{
"age": 32,
"group": 1,
"height": 174,
"name": "George",
"person": "George"
},
{
"age": 39,
"group": 1,
"height": 182,
"name": "Fred",
"person": "Fred"
}
]
},
"encoding": {
"x": {
"aggregate": "mean",
"field": "age",
"type": "quantitative"
},
"y": {
"field": "group",
"type": "ordinal"
}
},
"mark": "bar"
}
With transform_lookup, all the data is still there but as two separate datasets (so technically it takes a little more space because of the additional braces and the transform):
print(alt.Chart(groups).mark_bar().encode(
    x='mean(age):Q',
    y='group:O'
).transform_lookup(
    lookup='person',
    from_=alt.LookupData(data=people, key='name',
                         fields=['age'])
).to_json())
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.8.1.json",
"config": {
"view": {
"continuousHeight": 300,
"continuousWidth": 400
}
},
"data": {
"name": "data-5fe242a79352d1fe243b588af570c9c6"
},
"datasets": {
"data-2b374d1509415e1d327c3a7521f8117c": [
{
"age": 25,
"height": 180,
"name": "Alan"
},
{
"age": 32,
"height": 174,
"name": "George"
},
{
"age": 39,
"height": 182,
"name": "Fred"
}
],
"data-5fe242a79352d1fe243b588af570c9c6": [
{
"group": 1,
"person": "Alan"
},
{
"group": 1,
"person": "George"
},
{
"group": 1,
"person": "Fred"
}
]
},
"encoding": {
"x": {
"aggregate": "mean",
"field": "age",
"type": "quantitative"
},
"y": {
"field": "group",
"type": "ordinal"
}
},
"mark": "bar",
"transform": [
{
"from": {
"data": {
"name": "data-2b374d1509415e1d327c3a7521f8117c"
},
"fields": [
"age",
"height"
],
"key": "name"
},
"lookup": "person"
}
]
}
Where transform_lookup can save space is when you use it with the URLs of the two datasets:
people = data.lookup_people.url
groups = data.lookup_groups.url
print(alt.Chart(groups).mark_bar().encode(
    x='mean(age):Q',
    y='group:O'
).transform_lookup(
    lookup='person',
    from_=alt.LookupData(data=people, key='name',
                         fields=['age'])
).to_json())
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.8.1.json",
"config": {
"view": {
"continuousHeight": 300,
"continuousWidth": 400
}
},
"data": {
"url": "https://vega.github.io/vega-datasets/data/lookup_groups.csv"
},
"encoding": {
"x": {
"aggregate": "mean",
"field": "age",
"type": "quantitative"
},
"y": {
"field": "group",
"type": "ordinal"
}
},
"mark": "bar",
"transform": [
{
"from": {
"data": {
"url": "https://vega.github.io/vega-datasets/data/lookup_people.csv"
},
"fields": [
"age",
"height"
],
"key": "name"
},
"lookup": "person"
}
]
}
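If you want to quantify the difference rather than eyeball the specs, a rough check (just a sketch, assuming people and groups are still the small data frames from the beginning of the answer, not the URL variants) is to compare the lengths of the serialized specs:
merged_spec = alt.Chart(merged).mark_bar().encode(
    x='mean(age):Q',
    y='group:O'
).to_json()

lookup_spec = alt.Chart(groups).mark_bar().encode(
    x='mean(age):Q',
    y='group:O'
).transform_lookup(
    lookup='person',
    from_=alt.LookupData(data=people, key='name', fields=['age'])
).to_json()

# for this small example the lookup spec is slightly larger, as noted above
print(len(merged_spec), len(lookup_spec))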
Hello, I need to reorganize a JSON file by the key value of outcomeType > displayName. My code only produces one displayName key; however, I need to create two keys, each with its own list of values.
Json:
{ "18916": [ { "id": 2275920175, "eventId": 16, "minute": 0, "second": 51, "teamId": 223, "playerId": 18916, "x": 66.6, "y": 81.2, "expandedMinute": 0, "period": { "value": 1, "displayName": "FirstHalf" }, "type": { "value": 1, "displayName": "Pass" }, "outcomeType": { "value": 1, "displayName": "Successful" }, "qualifiers": [ { "type": { "value": 213, "displayName": "Angle" }, "value": "2.8" }, { "type": { "value": 56, "displayName": "Zone" }, "value": "Left" }, { "type": { "value": 212, "displayName": "Length" }, "value": "12.4" }, { "type": { "value": 140, "displayName": "PassEndX" }, "value": "55.3" }, { "type": { "value": 141, "displayName": "PassEndY" }, "value": "86.6" } ], "satisfiedEventsTypes": [ 90, 118, 116, 29, 34, 36, 215, 217 ], "isTouch": true, "endX": 55.3, "endY": 86.6 }, { "id": 2275920577, "eventId": 29, "minute": 1, "second": 24, "teamId": 223, "playerId": 18916, "x": 75, "y": 80.2, "expandedMinute": 1, "period": { "value": 1, "displayName": "FirstHalf" }, "type": { "value": 1, "displayName": "Pass" }, "outcomeType": { "value": 1, "displayName": "Successful" }, "qualifiers": [ { "type": { "value": 212, "displayName": "Length" }, "value": "22.1" }, { "type": { "value": 141, "displayName": "PassEndY" }, "value": "76.4" }, { "type": { "value": 56, "displayName": "Zone" }, "value": "Center" }, { "type": { "value": 213, "displayName": "Angle" }, "value": "6.2" }, { "type": { "value": 140, "displayName": "PassEndX" }, "value": "95.9" } ], "satisfiedEventsTypes": [ 90, 118, 116, 29, 204, 35, 37, 216, 217 ], "isTouch": true, "endX": 95.9, "endY": 76.4 }, { "id": 2275921705, "eventId": 49, "minute": 3, "second": 11, "teamId": 223, "playerId": 18916, "x": 73.5, "y": 79.7, "expandedMinute": 3, "period": { "value": 1, "displayName": "FirstHalf" }, "type": { "value": 1, "displayName": "Pass" }, "outcomeType": { "value": 0, "displayName": "Unsuccessful" }, "qualifiers": [ { "type": { "value": 56, "displayName": "Zone" }, "value": "Center" }, { "type": { "value": 3, "displayName": "HeadPass" } }, { "type": { "value": 212, "displayName": "Length" }, "value": "19.1" }, { "type": { "value": 140, "displayName": "PassEndX" }, "value": "89.7" }, { "type": { "value": 213, "displayName": "Angle" }, "value": "5.8" }, { "type": { "value": 141, "displayName": "PassEndY" }, "value": "66.9" } ], "satisfiedEventsTypes": [ 90, 119, 28, 138, 35, 37, 216, 217 ], "isTouch": true, "endX": 89.7, "endY": 66.9 }]}
Code:
for js in data_passes[18916]:
    icdType = js["outcomeType"]["displayName"]
    if icdType in pass_type:
        pass_type[icdType].append(js)
    else:
        pass_type[icdType] = [js]

data_pass_type = json.dumps(pass_type)
data_pass_type = json.loads(data_pass_type)

with open('test.json', 'w') as json_file:
    json.dump(data_pass_type, json_file)
Result: the output JSON contains only the Successful key.
Expected result: both the Successful and Unsuccessful keys with their values (Unsuccessful is missing).
Your code works for me, after changing the integer to a string:
import json

data_passes = json.load(open("x.txt"))
pass_type = {}
for js in data_passes["18916"]:
    icdType = js["outcomeType"]["displayName"]
    if icdType in pass_type:
        pass_type[icdType].append(js)
    else:
        pass_type[icdType] = [js]

with open('test.json', 'w') as json_file:
    json.dump(pass_type, json_file)
Result:
{
"Successful": [
{
"id": 2275920175,
...
},
{
"id": 2275920577,
...
}
],
"Unsuccessful": [
{
"id": 2275921705,
...
}
]
}
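As a small variation (not required for the fix), the same grouping can be written with collections.defaultdict, which drops the if/else branch; this is just a sketch assuming data_passes is loaded the same way as above:
import json
from collections import defaultdict

data_passes = json.load(open("x.txt"))
pass_type = defaultdict(list)
for js in data_passes["18916"]:
    # group each event under its outcomeType displayName ("Successful" / "Unsuccessful")
    pass_type[js["outcomeType"]["displayName"]].append(js)

with open('test.json', 'w') as json_file:
    json.dump(pass_type, json_file)  # a defaultdict serializes like a regular dict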
I have a nested JSON file which I am failing to parse into a flattened CSV.
I want to have the following columns in the CSV:
id, name, path, tags (a column for each of them), points (I need the x/y values of the 4 points)
example of the JSON input:
{
"name": "test",
"securityToken": "test Token",
"videoSettings": {
"frameExtractionRate": 15
},
"tags": [
{
"name": "Blur Reject",
"color": "#FF0000"
},
{
"name": "Blur Poor",
"color": "#800000"
}
],
"id": "Du1qtrZQ1",
"activeLearningSettings": {
"autoDetect": false,
"predictTag": true,
"modelPathType": "coco"
},
"version": "2.1.0",
"lastVisitedAssetId": "ddee3e694ec299432fed9e42de8741ad",
"assets": {
"0b8f6f214dc7066b00b50ae16cf25cf6": {
"asset": {
"format": "jpg",
"id": "0b8f6f214dc7066b00b50ae16cf25cf6",
"name": "1.jpg",
"path": "c:\temp\1.jpg",
"size": {
"width": 1500,
"height": 1125
},
"state": 2,
"type": 1
},
"regions": [
{
"id": "VtDyR9Ovl",
"type": "POLYGON",
"tags": [
"3",
"9",
"Dark Poor"
],
"boundingBox": {
"height": 695.2110389610389,
"width": 1111.607142857143,
"left": 167.41071428571428,
"top": 241.07142857142856
},
"points": [
{
"x": 167.41071428571428,
"y": 252.02922077922076
},
{
"x": 208.80681818181816,
"y": 891.2337662337662
},
{
"x": 1252.232142857143,
"y": 936.2824675324675
},
{
"x": 1279.017857142857,
"y": 241.07142857142856
}
]
}
],
"version": "2.1.0"
},
"0155d8143c8cad85b5b9d392fd2895a4": {
"asset": {
"format": "jpg",
"id": "0155d8143c8cad85b5b9d392fd2895a4",
"name": "2.jpg",
"path": "c:\temp\2.jpg",
"size": {
"width": 1080,
"height": 1920
},
"state": 2,
"type": 1
},
"regions": [
{
"id": "7FFl_diM2",
"type": "POLYGON",
"tags": [
"Dark Poor"
],
"boundingBox": {
"height": 502.85714285714283,
"width": 820.3846153846155,
"left": 144.08653846153848,
"top": 299.2207792207792
},
"points": [
{
"x": 152.39423076923077,
"y": 311.68831168831167
},
{
"x": 144.08653846153848,
"y": 802.077922077922
},
{
"x": 964.4711538461539,
"y": 781.2987012987012
},
{
"x": 935.3942307692308,
"y": 299.2207792207792
}
]
}
],
"version": "2.1.0"
}
}
I tried using pandas's json_normalize and realized I don't fully understand how to specify the columns I wish to parse:
import json
import csv
import pandas as pd
from pandas import Series, DataFrame
from pandas.io.json import json_normalize
f = open(r'c:\temp\test-export.json')
data = json.load(f) # load as json
f.close()
df = json_normalize(data) #load json into dataframe
df.to_csv(r'c:\temp\json-to-csv.csv', sep=',', encoding='utf-8')
The results are hard to work with because I didn't specify what I want (iterate through a specific array and append it to the CSV).
This is where I'd like your help.
I assume I don't fully understand how json_normalize works, and I suspect it is not the best way to deal with this problem.
Thank you!
You can do something like this. Since you didn't provide an example output, I did something on my own.
import json
import csv

f = open(r'file.txt')
data = json.load(f)
f.close()

with open("output.csv", mode="w", newline='') as out:
    w = csv.writer(out)
    header = ["id", "name", "path", "tags", "points"]
    w.writerow(header)
    for asset in data["assets"]:
        data_point = data["assets"][asset]
        output = [data_point["asset"]["id"]]
        output.append(data_point["asset"]["name"])
        output.append(data_point["asset"]["path"])
        output.append(data_point["regions"][0]["tags"])
        output.append(data_point["regions"][0]["points"])
        w.writerow(output)
Output
id,name,path,tags,points
0b8f6f214dc7066b00b50ae16cf25cf6,1.jpg,c:\temp\1.jpg,"['3', '9', 'Dark Poor']","[{'x': 167.41071428571428, 'y': 252.02922077922076}, {'x': 208.80681818181816, 'y': 891.2337662337662}, {'x': 1252.232142857143, 'y': 936.2824675324675}, {'x': 1279.017857142857, 'y': 241.07142857142856}]"
0155d8143c8cad85b5b9d392fd2895a4,2.jpg,c:\temp\2.jpg,['Dark Poor'],"[{'x': 152.39423076923077, 'y': 311.68831168831167}, {'x': 144.08653846153848, 'y': 802.077922077922}, {'x': 964.4711538461539, 'y': 781.2987012987012}, {'x': 935.3942307692308, 'y': 299.2207792207792}]"
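If you also want the x/y values of the four points as separate columns (as mentioned in the question), one possible tweak, assuming every region has exactly four points and taking only the first region per asset as above, is to unpack the points before writing each row (the output file name here is arbitrary):
import json
import csv

with open(r'file.txt') as f:
    data = json.load(f)

with open("output_points.csv", mode="w", newline='') as out:
    w = csv.writer(out)
    w.writerow(["id", "name", "path", "tags",
                "x1", "y1", "x2", "y2", "x3", "y3", "x4", "y4"])
    for asset in data["assets"]:
        data_point = data["assets"][asset]
        region = data_point["regions"][0]
        row = [data_point["asset"]["id"],
               data_point["asset"]["name"],
               data_point["asset"]["path"],
               ";".join(region["tags"])]      # join the tag list into one cell
        for point in region["points"]:        # assumes 4 points per region
            row.extend([point["x"], point["y"]])
        w.writerow(row)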
I am quite new to Raspberry Pi and Python coding, but I was successful in configuring Google Cloud Vision. However, the JSON dump looks like this:
{
"responses": [
{
"faceAnnotations": [
{
"angerLikelihood": "UNLIKELY",
"blurredLikelihood": "VERY_UNLIKELY",
"boundingPoly": {
"vertices": [
{
"x": 129
},
{
"x": 370
},
{
"x": 370,
"y": 240
},
{
"x": 129,
"y": 240
}
]
},
"detectionConfidence": 0.99543685,
"fdBoundingPoly": {
"vertices": [
{
"x": 162,
"y": 24
},
{
"x": 337,
"y": 24
},
{
"x": 337,
"y": 199
},
{
"x": 162,
"y": 199
}
]
},
"headwearLikelihood": "VERY_UNLIKELY",
"joyLikelihood": "VERY_UNLIKELY",
"landmarkingConfidence": 0.77542377,
"landmarks": [
{
"position": {
"x": 210.93373,
"y": 92.71409,
"z": -0.00025338508
},
"type": "LEFT_EYE"
},
{
"position": {
"x": 280.00177,
"y": 82.57283,
"z": 0.49017733
},
"type": "RIGHT_EYE"
},
{
"position": {
"x": 182.08047,
"y": 77.89372,
"z": 6.825161
},
"type": "LEFT_OF_LEFT_EYEBROW"
},
{
"position": {
"x": 225.82335,
"y": 72.88091,
"z": -13.963233
},
"type": "RIGHT_OF_LEFT_EYEBROW"
},
{
"position": {
"x": 260.4491,
"y": 66.19005,
"z": -13.798634
},
"type": "LEFT_OF_RIGHT_EYEBROW"
},
{
"position": {
"x": 303.87503,
"y": 59.69522,
"z": 7.8336163
},
"type": "RIGHT_OF_RIGHT_EYEBROW"
},
{
"position": {
"x": 244.57729,
"y": 83.701904,
"z": -15.022567
},
"type": "MIDPOINT_BETWEEN_EYES"
},
{
"position": {
"x": 251.58353,
"y": 124.68004,
"z": -36.52176
},
"type": "NOSE_TIP"
},
{
"position": {
"x": 255.39096,
"y": 151.87607,
"z": -19.560472
},
"type": "UPPER_LIP"
},
{
"position": {
"x": 259.96045,
"y": 178.62886,
"z": -14.095398
},
"type": "LOWER_LIP"
},
{
"position": {
"x": 232.35422,
"y": 167.2542,
"z": -1.0750997
},
"type": "MOUTH_LEFT"
},
{
"position": {
"x": 284.49316,
"y": 159.06075,
"z": -0.078973025
},
"type": "MOUTH_RIGHT"
},
{
"position": {
"x": 256.94714,
"y": 163.11235,
"z": -14.0897665
},
"type": "MOUTH_CENTER"
},
{
"position": {
"x": 274.47885,
"y": 125.8553,
"z": -7.8479633
},
"type": "NOSE_BOTTOM_RIGHT"
},
{
"position": {
"x": 231.2164,
"y": 132.60686,
"z": -8.418254
},
"type": "NOSE_BOTTOM_LEFT"
},
{
"position": {
"x": 252.96692,
"y": 135.81783,
"z": -19.805998
},
"type": "NOSE_BOTTOM_CENTER"
},
{
"position": {
"x": 208.6943,
"y": 86.72571,
"z": -4.8503814
},
"type": "LEFT_EYE_TOP_BOUNDARY"
},
{
"position": {
"x": 223.4354,
"y": 90.71454,
"z": 0.42966545
},
"type": "LEFT_EYE_RIGHT_CORNER"
},
{
"position": {
"x": 210.67189,
"y": 96.09362,
"z": -0.62435865
},
"type": "LEFT_EYE_BOTTOM_BOUNDARY"
},
{
"position": {
"x": 195.00711,
"y": 93.783226,
"z": 6.6310787
},
"type": "LEFT_EYE_LEFT_CORNER"
},
{
"position": {
"x": 208.30045,
"y": 91.73073,
"z": -1.7749802
},
"type": "LEFT_EYE_PUPIL"
},
{
"position": {
"x": 280.8329,
"y": 75.722244,
"z": -4.3266015
},
"type": "RIGHT_EYE_TOP_BOUNDARY"
},
{
"position": {
"x": 295.9134,
"y": 78.8241,
"z": 7.3644505
},
"type": "RIGHT_EYE_RIGHT_CORNER"
},
{
"position": {
"x": 281.82813,
"y": 85.56999,
"z": -0.09711724
},
"type": "RIGHT_EYE_BOTTOM_BOUNDARY"
},
{
"position": {
"x": 266.6147,
"y": 83.689865,
"z": 0.6850431
},
"type": "RIGHT_EYE_LEFT_CORNER"
},
{
"position": {
"x": 282.31485,
"y": 80.471725,
"z": -1.3341979
},
"type": "RIGHT_EYE_PUPIL"
},
{
"position": {
"x": 202.4563,
"y": 66.06882,
"z": -8.493092
},
"type": "LEFT_EYEBROW_UPPER_MIDPOINT"
},
{
"position": {
"x": 280.76108,
"y": 54.08935,
"z": -7.895889
},
"type": "RIGHT_EYEBROW_UPPER_MIDPOINT"
},
{
"position": {
"x": 168.31839,
"y": 134.46411,
"z": 89.73161
},
"type": "LEFT_EAR_TRAGION"
},
{
"position": {
"x": 332.23724,
"y": 109.35637,
"z": 90.81501
},
"type": "RIGHT_EAR_TRAGION"
},
{
"position": {
"x": 242.81676,
"y": 67.845825,
"z": -16.629877
},
"type": "FOREHEAD_GLABELLA"
},
{
"position": {
"x": 264.32065,
"y": 208.95119,
"z": -4.0186276
},
"type": "CHIN_GNATHION"
},
{
"position": {
"x": 183.4723,
"y": 179.30655,
"z": 59.87147
},
"type": "CHIN_LEFT_GONION"
},
{
"position": {
"x": 331.6927,
"y": 156.69931,
"z": 60.93835
},
"type": "CHIN_RIGHT_GONION"
}
],
"panAngle": 0.41165036,
"rollAngle": -8.687789,
"sorrowLikelihood": "VERY_UNLIKELY",
"surpriseLikelihood": "VERY_UNLIKELY",
"tiltAngle": 0.2050134,
"underExposedLikelihood": "POSSIBLE"
}
]
}
]
}
Yes, it's an eyesore to look at. I only want to extract the likelihoods, preferably in this format:
Anger likelihood is UNLIKELY
Joy likelihood is VERY_UNLIKELY
Sorrow likelihood is VERY_UNLIKELY
Surprise likelihood is VERY_UNLIKELY
Python code can be found here:
https://github.com/DexterInd/GoogleVisionTutorials/blob/master/camera-vision-face.py
Answered my own question in perhaps the noobiest way possible:
print "Anger likelihood is:",
print(response['responses'][0]['faceAnnotations'][0]['angerLikelihood'])
print "Joy likelihood is:",
print(response['responses'][0]['faceAnnotations'][0]['joyLikelihood'])
print "Sorrow likelihood is:",
print(response['responses'][0]['faceAnnotations'][0]['sorrowLikelihood'])
print "Surprise likelihood is:",
print(response['responses'][0]['faceAnnotations'][0]['surpriseLikelihood'])
Came out looking like:
Anger likelihood is: VERY_UNLIKELY
Joy likelihood is: VERY_LIKELY
Sorrow likelihood is: VERY_UNLIKELY
Surprise likelihood is: VERY_UNLIKELY
You can go with dictionary comprehensions. Given that you have your response in variable result, the following code will output exactly what you want.
import json

likelihood = {
    attr[:len(attr) - 10].capitalize(): value
    for attr, value
    in json.loads(result)['responses'][0]['faceAnnotations'][0].items()
    if attr.find('Likelihood') != -1
}

print(*[
    '{} likelihood is {}'.format(e, p) for e, p in likelihood.items()
], sep='\n')
Keep in mind that this code works correctly only if there is a single item in both the responses and faceAnnotations arrays; if there are more, it will only handle the first of each. It's also kinda ugly.
In len(attr) - 10, 10 is the length of the word "Likelihood".
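If the response can contain more than one face, a plain loop over faceAnnotations avoids the single-item limitation; a sketch, assuming response is already a parsed dict as in the first answer:
for i, face in enumerate(response['responses'][0]['faceAnnotations']):
    print('Face {}:'.format(i))
    for key, value in face.items():
        if key.endswith('Likelihood'):
            # e.g. 'angerLikelihood' -> 'Anger likelihood is UNLIKELY'
            print('{} likelihood is {}'.format(key[:-10].capitalize(), value))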
I'm working on a Python script that can download images from Flickr, among other sites. I use the Flickr API to pull the various sizes of the image I'm trying to download and identify the URL for the original size. Well, that's what I'm trying to do. Here's my code so far...
URL = {a Flickr link}
flickr = re.match(r".*flickr\.com\/photos\/([^\/]+)\/([0-9^\/]+)\/", URL)
URL = "https://api.flickr.com/services/rest/?method=flickr.photos.getSizes&api_key=6002c84e96ff95c1a861eafafa4284ba&photo_id=" + flickr.group(2) + "&format=json&nojsoncallback=1"
request = requests.get(URL)
result = request.text
parsed = re.match(r".\"Original\".*\"source\"\: \"([^\"]+)", result)
URL = parsed.group(1)
Using print() statements throughout my code, I know that the first regular expression (to parse the original Flickr URL and identify the photo ID) works properly and that the API request succeeds, returning the following result (using the example URL https://www.flickr.com/photos/matbellphotography/33413612735/sizes/h/)...
{ "sizes": { "canblog": 0, "canprint": 0, "candownload": 1,
"size": [
{ "label": "Square", "width": 75, "height": 75, "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_s.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/sq\/", "media": "photo" },
{ "label": "Large Square", "width": "150", "height": "150", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_q.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/q\/", "media": "photo" },
{ "label": "Thumbnail", "width": 100, "height": 67, "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_t.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/t\/", "media": "photo" },
{ "label": "Small", "width": "240", "height": "160", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_m.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/s\/", "media": "photo" },
{ "label": "Small 320", "width": "320", "height": "213", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_n.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/n\/", "media": "photo" },
{ "label": "Medium", "width": "500", "height": "333", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/m\/", "media": "photo" },
{ "label": "Medium 640", "width": "640", "height": "427", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_z.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/z\/", "media": "photo" },
{ "label": "Medium 800", "width": "800", "height": "534", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_c.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/c\/", "media": "photo" },
{ "label": "Large", "width": "1024", "height": "683", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_b.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/l\/", "media": "photo" },
{ "label": "Large 1600", "width": "1600", "height": "1067", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_4d92e2f70d_h.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/h\/", "media": "photo" },
{ "label": "Large 2048", "width": "2048", "height": "1365", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_81441ed1da_k.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/k\/", "media": "photo" },
{ "label": "Original", "width": "5760", "height": "3840", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_34cbc172c1_o.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/o\/", "media": "photo" }
] }, "stat": "ok" }
My code apparently breaks down after that: the second regular expression, intended to identify the download URL of the image at its original size, doesn't find any matches. According to yet another print() statement...
parsed.group(1) = none
I set up the expression using RegExr, which identified exactly what I needed from the JSON result. What have I done wrong?
Your requests.Response object has a json() method that you can call directly to get the parsed response. Alternatively, simply import json, parse request.text (or request.content) and work with the returned dictionary. Example:
>>> import json
>>> json_response = """
... { "sizes": { "canblog": 0, "canprint": 0, "candownload": 1,
... "size": [
... { "label": "Square", "width": 75, "height": 75, "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_s.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/sq\/", "media": "photo" },
... { "label": "Large Square", "width": "150", "height": "150", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_q.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/q\/", "media": "photo" },
... { "label": "Thumbnail", "width": 100, "height": 67, "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_t.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/t\/", "media": "photo" },
... { "label": "Small", "width": "240", "height": "160", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_m.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/s\/", "media": "photo" },
... { "label": "Small 320", "width": "320", "height": "213", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_n.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/n\/", "media": "photo" },
... { "label": "Medium", "width": "500", "height": "333", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/m\/", "media": "photo" },
... { "label": "Medium 640", "width": "640", "height": "427", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_z.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/z\/", "media": "photo" },
... { "label": "Medium 800", "width": "800", "height": "534", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_c.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/c\/", "media": "photo" },
... { "label": "Large", "width": "1024", "height": "683", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_645397d6a5_b.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/l\/", "media": "photo" },
... { "label": "Large 1600", "width": "1600", "height": "1067", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_4d92e2f70d_h.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/h\/", "media": "photo" },
... { "label": "Large 2048", "width": "2048", "height": "1365", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_81441ed1da_k.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/k\/", "media": "photo" },
... { "label": "Original", "width": "5760", "height": "3840", "source": "https:\/\/farm3.staticflickr.com\/2855\/33413612735_34cbc172c1_o.jpg", "url": "https:\/\/www.flickr.com\/photos\/matbellphotography\/33413612735\/sizes\/o\/", "media": "photo" }
... ] }, "stat": "ok" }"""
>>>
>>> json_parsed = json.loads(json_response)
>>> for img in json_parsed["sizes"]["size"]:
...     print img.get("source")
...
https://farm3.staticflickr.com/2855/33413612735_645397d6a5_s.jpg
https://farm3.staticflickr.com/2855/33413612735_645397d6a5_q.jpg
https://farm3.staticflickr.com/2855/33413612735_645397d6a5_t.jpg
https://farm3.staticflickr.com/2855/33413612735_645397d6a5_m.jpg
https://farm3.staticflickr.com/2855/33413612735_645397d6a5_n.jpg
https://farm3.staticflickr.com/2855/33413612735_645397d6a5.jpg
https://farm3.staticflickr.com/2855/33413612735_645397d6a5_z.jpg
https://farm3.staticflickr.com/2855/33413612735_645397d6a5_c.jpg
https://farm3.staticflickr.com/2855/33413612735_645397d6a5_b.jpg
https://farm3.staticflickr.com/2855/33413612735_4d92e2f70d_h.jpg
https://farm3.staticflickr.com/2855/33413612735_81441ed1da_k.jpg
https://farm3.staticflickr.com/2855/33413612735_34cbc172c1_o.jpg
>>>
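Applied to your original goal of grabbing the URL of the "Original" size, a short sketch along the same lines (assuming request is the requests.get() response from your code) would be:
sizes = request.json()['sizes']['size']
original = next((s['source'] for s in sizes if s['label'] == 'Original'), None)
print(original)
# e.g. https://farm3.staticflickr.com/2855/33413612735_34cbc172c1_o.jpg
This sidesteps the fragile regex entirely: the JSON is already structured, so picking the entry whose label is "Original" is both simpler and more robust.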