I have a JSON file like this:
{"objects":[{"featureId":"ckm39acfw00043b6a4i8vv8zf","schemaId":"ckm399dnn07ax0y8hdncv51yy","title":"Buildings","value":"buildings","color":"#1CE6FF","bbox":{
top":110,"left":799,"height":42,"width":53},"instanceURI":<"https://api.labelbox.com/masks/feature/ckm39acfw00043b6a4i8vv8zf?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJja20yN3F1aDVwNzh2MDc4OXh3YmE3eWo5Iiwib3JnYW5pemF0aW9uSWQiOiJja20yN3F0cm1wNzhvMDc4OXBlaHJiZG4wIiwiaWF0IjoxNjE1MzcyNjUxLCJleHAiOjE2MTc5NjQ2NTF9.ALYeG0mpNvnOpAuj6O3h0OFcrREOtOvJqqVqqt8xcqw"
},{"featureId":"ckm39agzr00073b6alzzwpm77","schemaId":"ckm399dnn07ax0y8hdncv51yy","title":"Buildings","value":"buildings","color":"#1CE6FF","bbox":
{"top":151,"left":875,"height":45,"width":120},"instanceURI":"https://api.labelbox.com/masks/feature/ckm39agzr00073b6alzzwpm77?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJja20yN3F1aDVwNzh2MDc4OXh3YmE3eWo5Iiwib3JnYW5pemF0aW9uSWQiOiJja20yN3F0cm1wNzhvMDc4OXBlaHJiZG4wIiwiaWF0IjoxNjE1MzcyNjUxLCJleHAiOjE2MTc5NjQ2NTF9.ALYeG0mpNvnOpAuj6O3h0OFcrREOtOvJqqVqqt8xcqw"},{"featureId":"ckm39an0e000a3b6ae7vc0bo8","schemaId":"ckm399dnn07ax0y8hdncv51yy","title":"Buildings","value":"buildings","color":"#1CE6FF","bbox":{"top":635,"left":952,"height":93,"width":84},"instanceURI":"https://api.labelbox.com/masks/feature/ckm39an0e000a3b6ae7vc0bo8?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJja20yN3F1aDVwNzh2MDc4OXh3YmE3eWo5Iiwib3JnYW5pemF0aW9uSWQiOiJja20yN3F0cm1wNzhvMDc4OXBlaHJiZG4wIiwiaWF0IjoxNjE1MzcyNjUxLCJleHAiOjE2MTc5NjQ2NTF9.ALYeG0mpNvnOpAuj6O3h0OFcrREOtOvJqqVqqt8xcqw"},{"featureId":"ckm39bbki000g3b6au6s5s3se","schemaId":"ckm399dnn07ax0y8hdncv51yy","title":"Buildings","value":"buildings","color":"#1CE6FF","bbox":{"top":646,"left":764,"height":74,"width":93},"instanceURI":"https://api.labelbox.com/masks/feature/ckm39bbki000g3b6au6s5s3se?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJja20yN3F1aDVwNzh2MDc4OXh3YmE3eWo5Iiwib3JnYW5pemF0aW9uSWQiOiJja20yN3F0cm1wNzhvMDc4OXBlaHJiZG4wIiwiaWF0IjoxNjE1MzcyNjUxLCJleHAiOjE2MTc5NjQ2NTF9.ALYeG0mpNvnOpAuj6O3h0OFcrREOtOvJqqVqqt8xcqw"
},{"featureId":"ckm39cgdi000p3b6aru669fzh","schemaId":"ckm399dnn07ax0y8hdncv51yy","title":"Buildings","value":"buildings","color":"#1CE6FF","bbox":
{"top":375,"left":916,"height":52,"width":80},"instanceURI":"https://api.labelbox.com/masks/feature/ckm39cgdi000p3b6aru669fzh?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJja20yN3F1aDVwNzh2MDc4OXh3YmE3eWo5Iiwib3JnYW5pemF0aW9uSWQiOiJja20yN3F0cm1wNzhvMDc4OXBlaHJiZG4wIiwiaWF0IjoxNjE1MzcyNjUxLCJleHAiOjE2MTc5NjQ2NTF9.ALYeG0mpNvnOpAuj6O3h0OFcrREOtOvJqqVqqt8xcqw"
},{"featureId":"ckm39ckyi000s3b6armui3tn3","schemaId":"ckm399dnn07ax0y8hdncv51yy","title":"Buildings","value":"buildings","color":"#1CE6FF","bbox":{"top":420,"left":914,"height":72,"width":86},"instanceURI":"https://api.labelbox.com/masks/feature/ckm39ckyi000s3b6armui3tn3?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJja20yN3F1aDVwNzh2MDc4OXh3YmE3eWo5Iiwib3JnYW5pemF0aW9uSWQiOiJja20yN3F0cm1wNzhvMDc4OXBlaHJiZG4wIiwiaWF0IjoxNjE1MzcyNjUxLCJleHAiOjE2MTc5NjQ2NTF9.ALYeG0mpNvnOpAuj6O3h0OFcrREOtOvJqqVqqt8xcqw"
},{"featureId":"ckm39cp6a000v3b6a6cjj16xp","schemaId":"ckm399dnn07ax0y8hdncv51yy","title":"Buildings","value":"buildings","color":"#1CE6FF","bbox":{"top":478,"left":867,"height":66,"width":137},"instanceURI":"https://api.labelbox.com/masks/feature/ckm39cp6a000v3b6a6cjj16xp?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJja20yN3F1aDVwNzh2MDc4OXh3YmE3eWo5Iiwib3JnYW5pemF0aW9uSWQiOiJja20yN3F0cm1wNzhvMDc4OXBlaHJiZG4wIiwiaWF0IjoxNjE1MzcyNjUxLCJleHAiOjE2MTc5NjQ2NTF9.ALYeG0mpNvnOpAuj6O3h0OFcrREOtOvJqqVqqt8xcqw"},{"featureId":"ckm39cyom000y3b6aqp2x5i0s","schemaId":"ckm399dnn07ax0y8hdncv51yy","title":"Buildings","value":"buildings","color":"#1CE6FF","bbox":{"top":703,"left":806,"height":85,"width":95},"instanceURI":"https://api.labelbox.com/masks/feature/ckm39cyom000y3b6aqp2x5i0s?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJja20yN3F1aDVwNzh2MDc4OXh3YmE3eWo5Iiwib3JnYW5pemF0aW9uSWQiOiJja20yN3F0cm1wNzhvMDc4OXBlaHJiZG4wIiwiaWF0IjoxNjE1MzcyNjUxLCJleHAiOjE2MTc5NjQ2NTF9.ALYeG0mpNvnOpAuj6O3h0OFcrREOtOvJqqVqqt8xcqw"},{"featureId":"ckm39dz3t00143b6a2brbj4qi","schemaId":"ckm399dnn07ax0y8hdncv51yy","title":"Buildings","value":"buildings","color":"#1CE6FF","bbox":{"top":41,"left":823,"height":50,"width":80},"instanceURI":"https://api.labelbox.com/masks/feature/ckm39dz3t00143b6a2brbj4qi?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJja20yN3F1aDVwNzh2MDc4OXh3YmE3eWo5Iiwib3JnYW5pemF0aW9uSWQiOiJja20yN3F0cm1wNzhvMDc4OXBlaHJiZG4wIiwiaWF0IjoxNjE1MzcyNjUxLCJleHAiOjE2MTc5NjQ2NTF9.ALYeG0mpNvnOpAuj6O3h0OFcrREOtOvJqqVqqt8xcqw"},{"featureId":"ckm39eco400173b6a35p84q7y","schemaId":"ckm399dnn07ax0y8hdncv51yy","title":"Buildings","value":"buildings","color":"#1CE6FF","bbox":{"top":31,"left":892,"height":62,"width":95},"instanceURI":"https://api.labelbox.com/masks/feature/ckm39eco400173b6a35p84q7y?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJja20yN3F1aDVwNzh2MDc4OXh3YmE3eWo5Iiwib3JnYW5pemF0aW9uSWQiOiJja20yN3F0cm1wNzhvMDc4OXBlaHJiZG4wIiwiaWF0IjoxNjE1MzcyNjUxLCJleHAiOjE2MTc5NjQ2NTF9.ALYeG0mpNvnOpAuj6O3h0OFcrREOtOvJqqVqqt8xcqw"}],"classifications":[]}
and I want to save all occurrences of top, left, right, height, and width in an array. How can I do that?
Firstly, the JSON file does not look valid. For my answer I'll assume a valid file like so:
# example.json
{
    "top": 151,
    "left": 875,
    "height": 45,
    "width": 120
}
Next, you can use Python's built-in json library to load the JSON and extract information:
import json

# Load the JSON file into a dictionary.
with open('example.json', 'r') as fh:
    d = json.load(fh)

print(d.get('top'))
Notice that you first load the JSON file's content into a dictionary, and then you can access its contents the same way you would any other dictionary, using d.get('top') or d['top'].
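For the original question's structure, once the file is repaired into valid JSON (the first bbox's "top" key is missing its opening quote, and one instanceURI has a stray <), every bbox can be collected in one pass. This is only a sketch; the key names come from the question's data, and the filename example.json is an assumption:

import json

# Load the repaired label file; 'example.json' is an assumed filename.
with open('example.json', 'r') as fh:
    data = json.load(fh)

# One list of bbox dicts, each holding top, left, height and width.
bboxes = [obj['bbox'] for obj in data['objects']]

# Or one array per field, if separate lists are preferred.
tops = [b['top'] for b in bboxes]
lefts = [b['left'] for b in bboxes]
heights = [b['height'] for b in bboxes]
widths = [b['width'] for b in bboxes]
print(tops, lefts, heights, widths)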
import json

data = json.load(open("files1\data.json"))

def definitioner(w):
    return data(w)

word = input("Enter the word you are looking for: ")
print(definitioner(word))
I am doing a course on Udemy, and after trying it myself it didn't work, so I even copied the course's code to see whether the problem was my own code, but I couldn't figure out what the issue was. Any help would be appreciated. I am running Python 3.8.
Thanks.
You are calling data(w) like it's a function, but data is a dictionary. Use data.get(w) instead:
def definitioner(w):
    return data.get(w)
That also allows you to specify what you would like returned by default if the word is not present, by adding a second argument:
def definitioner(w):
    return data.get(w, 'Word not found!')
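For completeness, here is a minimal sketch of the whole script with the fix applied. It keeps the filename from the question but uses a forward slash so the backslash in "files1\data.json" is not treated as an escape sequence; adjust the path to your setup:

import json

# Forward slashes avoid backslash-escape issues in Windows-style paths.
with open("files1/data.json", "r") as fh:
    data = json.load(fh)

def definitioner(w):
    # .get() returns the second argument when the word is missing.
    return data.get(w, 'Word not found!')

word = input("Enter the word you are looking for: ")
print(definitioner(word))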
I have MATLAB code implementing a for loop that I have to convert to Python:
for i = 1:numel(file_list)
    filename = file_list(i).name;
file_list consists of 207 CSV files, each with 3036×190 entries. This is what the relevant part of the code looks like:
for i = 1:numel(file_list)
    filename = file_list(i).name;
    SS = strcat(filename);
    ActualRadarData = csvread(SS);
    RadarData = real(ActualRadarData(:,20:end));
and this is what I attempted, which is not correct:
for i in 1:len(file_list):
    filename = os.path.basename('/path/file_list')
This doesn't work. How can it be done correctly?
Python starts indexing at zero, unlike MATLAB, which starts at 1, so keep that in mind. If you want to iterate through a list, you'd usually write for element in list, although you can iterate through indexes as well.
import os

for file in file_list:
    filename = os.path.basename(file)
I would recommend looking into a guide for indexing and looping in Python, and then for CSV reading I recommend using Pandas.
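As a rough sketch of how the MATLAB loop above might translate, assuming the 207 CSV files sit in a single folder (the glob pattern and folder path are assumptions, not part of the original code):

import glob

import numpy as np
import pandas as pd

# Gather the CSV files; adjust the folder and pattern to your data.
file_list = glob.glob('/path/to/radar_csvs/*.csv')

for filename in file_list:
    # Read one CSV into a numeric array (assuming no header row).
    actual_radar_data = pd.read_csv(filename, header=None).to_numpy()
    # MATLAB's ActualRadarData(:,20:end) takes column 20 onward (1-based),
    # which is index 19 onward with Python's 0-based indexing.
    radar_data = np.real(actual_radar_data[:, 19:])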
I'm trying to get the values of all keys named "url" from a JSON file, regardless of nesting, and then output them to a text file. How would I go about doing this?
I'm running Python 3.7 and cannot seem to find a solution.
import requests

r = requests.get('https://launchermeta.mojang.com/mc/game/version_manifest.json')
j = r.json()
The expected result would be a text file filled with links from this JSON file:
https://launchermeta.mojang.com/v1/packages/31fa028661857f2e3d3732d07a6d36ec21d6dbdc/a1.2.3_02.json
https://launchermeta.mojang.com/v1/packages/2dbccc4579a4481dc8d72a962d396de044648522/a1.2.3_01.json
https://launchermeta.mojang.com/v1/packages/48f077bf27e0a01a0bb2051e0ac17a96693cb730/a1.2.3.json
etc.
Using the requests library:
import requests

response = requests.get('https://launchermeta.mojang.com/mc/game/version_manifest.json').json()

url_list = []
for result in response['versions']:
    url_list.append(result['url'])

print(url_list)
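Since the question also asks for a text file, one way to write the collected links out, one per line (the filename urls.txt is just an example):

# Write each URL on its own line; 'urls.txt' is an assumed output name.
with open('urls.txt', 'w') as fh:
    for url in url_list:
        fh.write(url + '\n')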
I want to read a .dat file, extract the rows whose timestamp year is 2000, and save them to a file. Here is my code; I get the error shown in the error screenshot when converting in the query.
Can someone help me?
Make sure your field is a datetime:
ratings_df['timestamp'] = pd.to_datetime(ratings_df['timestamp'])
You can then pull the year from it:
ratings_df['timestamp'].dt.year
I found the answer:

def dateparse(time_in_secs):
    return datetime.datetime.fromtimestamp(float(time_in_secs))

ratings_df = pd.read_table('~/ml-1m/ratings.dat', header=None, sep='::',
                           names=['user_id', 'movie_id', 'rating', 'timestamp'],
                           parse_dates=['timestamp'], date_parser=dateparse)

Thank you for your help.
You should use the pandas function to_datetime instead of the native datetime functionality:

ratings_df.loc[pd.to_datetime(ratings_df['timestamp'], unit='s').dt.year == 2000, colonnes]

Note the .dt accessor, which is needed to take the year of a Series, and that the column is named 'timestamp' (lowercase) in the data above; colonnes is whatever list of columns you want to keep. The format of the timestamp is an important factor in getting the right output.
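Putting the pieces together, a minimal sketch of the whole filter-and-save workflow; the separator and column names follow the self-answer above, while the output filename ratings_2000.csv is just an assumption:

import pandas as pd

# Load the ratings file; the raw 'timestamp' column holds Unix seconds.
ratings_df = pd.read_csv('~/ml-1m/ratings.dat', header=None, sep='::', engine='python',
                         names=['user_id', 'movie_id', 'rating', 'timestamp'])

# Convert seconds to datetimes and keep only the rows from the year 2000.
years = pd.to_datetime(ratings_df['timestamp'], unit='s').dt.year
ratings_2000 = ratings_df[years == 2000]

# Save the filtered rows; 'ratings_2000.csv' is an assumed output name.
ratings_2000.to_csv('ratings_2000.csv', index=False)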