I have a JSON file that describes a specific directory tree. I am trying to write something that will let the user move in and out of each "folder", creating almost a command-line "file explorer".
My JSON file is formatted like this:
{
    "children": [
        {
            "children": [
                {
                    "name": "somefile.cmd",
                    "path": "C:\\some\\directory\\somefile.cmd",
                    "type": "file"
                },
                {
                    "name": "otherfile.ps1",
                    "path": "C:\\some\\directory\\somefile.ps1",
                    "type": "file"
                },
                {
                    "name": "somefile.exe",
                    "path": "C:\\some\\directory\\somefile.exe",
                    "type": "file"
                }
            ],
            "name": "somefile",
            "path": "C:\\some\\directory",
            "type": "folder"
        },
        {
            "children": [
            .
            .
            .
The function I am using:
def search_json(filename):
    json_file = open(filename)
    data = json.load(json_file)
    subsyst_count = 1
    subsyst_list = []
    # list of subsystems
    for i in data['children']:
        print(subsyst_count, i['name'])
        subsyst_list.append(i['name'])
        subsyst_count = subsyst_count + 1
    user = int(input('Which Subsystem?'))
    # search json for children of subsyst_list[user]
    print(subsyst_list[user])
    for i in data['children']:
        if i['name'] == subsyst_list[user]:
            print(i['name'])
            for j in i['name']:
                print(j[0])
I expect it to go into the first children folder, count all the folders under it, and prompt the user to select a number for which subsystem to go into. I then wanted it to search that new "children" subsystem directory and again number each folder or file to select. Instead, it throws a KeyError when I have
print(j['name'])
and the function just spells out the name of the subsystem, letter by letter, when I have:
print(j[0])
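A minimal sketch of the likely fix (`list_children` is a hypothetical helper; it assumes the selected entry is a folder that has a 'children' list): iterate over the nested 'children' list rather than the 'name' string, and subtract 1 to convert the 1-based menu number into a list index:

```python
def list_children(data, choice):
    """Return the child names of the 1-based `choice`-th subsystem."""
    # The menu numbering started at 1, so convert to a 0-based index.
    subsystem = data['children'][choice - 1]
    # Iterate the nested 'children' list, not the 'name' string;
    # iterating the string is what spelled the name out letter by letter.
    return [child['name'] for child in subsystem.get('children', [])]

data = {
    "children": [
        {
            "name": "somefile",
            "type": "folder",
            "children": [
                {"name": "somefile.cmd", "type": "file"},
                {"name": "otherfile.ps1", "type": "file"},
            ],
        }
    ]
}
print(list_children(data, 1))  # ['somefile.cmd', 'otherfile.ps1']
```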
"attributes": [
    {
        "trait_type": "Vintage",
        "value": 2019
    },
    {
        "trait_type": "Volume (ml)",
        "value": 750
    }
]
I want to change the 2019 and the 750 to strings (by wrapping them in "") for multiple values across 150+ JSON files using Python.
I am not a dev and do not use Python, but I have this so far:
import json

for i in range(1, 147):
    with open(f'{i}.json', 'r+', encoding='utf8') as f:
        data = json.load(f)
You can make a function that takes a JSON file name and replaces these specific values you're looking for:
def change_values_to_string(json_filename):
    with open(f'{json_filename}.json', 'r') as json_fp:
        json_to_dict = json.loads(json_fp.read())
    for sub_dict in json_to_dict["attributes"]:
        sub_dict["value"] = str(sub_dict["value"])
    with open(f'{json_filename}.json', 'w') as json_fp:
        json_fp.write(json.dumps(json_to_dict))

for i in range(1, 147):
    change_values_to_string(i)
Input:
test.json
{
    "attributes": [
        {
            "trait_type": "Vintage",
            "value": 2019
        },
        {
            "trait_type": "Volume (ml)",
            "value": 750
        }
    ]
}
Calling the function with the correct file name: change_values_to_string("test")
Outputs:
{
    "attributes": [
        {
            "trait_type": "Vintage",
            "value": "2019"
        },
        {
            "trait_type": "Volume (ml)",
            "value": "750"
        }
    ]
}
Explanations:
Open the JSON file in read mode and load it into a Python dictionary.
Iterate over the "attributes" key, which contains a list of dictionaries.
Replace each "value" entry with its string form.
Dump the dictionary back into the same file, overwriting it.
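Since some of the 150+ files may not contain an "attributes" key at all (an assumption; skip the guards if they all do), a slightly more defensive version of the conversion step looks like this:

```python
import json

def stringify_values(data):
    """Convert each attribute's 'value' to a string, in place."""
    for attr in data.get("attributes", []):  # tolerate a missing "attributes" key
        if "value" in attr:                  # tolerate a missing "value" key
            attr["value"] = str(attr["value"])
    return data

doc = {"attributes": [{"trait_type": "Vintage", "value": 2019},
                      {"trait_type": "Volume (ml)", "value": 750}]}
print(json.dumps(stringify_values(doc)))
```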
I have a JSON output file and I am trying to hash the value of the "name" key in it with SHA-256. There are two occurrences of "name" in a list of dicts, but every time I write, only one of the changes is reflected. Can anybody tell me where I am going wrong?
JSON structure:
Output.json
{
    "site": [
        {
            "name": "google",
            "description": "Hi I am google"
        },
        {
            "name": "microsoft",
            "description": "Hi, I am microsoft"
        }
    ],
    "veg": [
        {
            "status": "ok",
            "slot": null
        },
        {
            "status": "ok"
        }
    ]
}
Code:
import hashlib
import json
class test():
    def __init__(self):
        pass

    def encrypt(self):
        with open("Output.json", "r+") as json_file:
            res = json.load(json_file)
            for i in res['site']:
                for key, val in i.iteritems():
                    if 'name' in key:
                        hs = hashlib.sha256(val.encode('utf-8')).hexdigest()
                        res['site'][0]['name'] = hs
                        json_file.seek(0)
                        json_file.write(json.dumps(res, indent=4))
                        json_file.truncate()
Current Output.json
{
    "site": [
        {
            "name": "bbdefa2950f49882f295b1285d4fa9dec45fc4144bfb07ee6acc68762d12c2e3",
            "description": "Hi I am google"
        },
        {
            "name": "microsoft",
            "description": "Hi, I am microsoft"
        }
    ],
    "veg": [
        {
            "status": "ok",
            "slot": null
        },
        {
            "status": "ok"
        }
    ]
}
I think your problem is in this line:
res['site'][0]['name'] = hs
you are always changing the name field of the first map in the site list. I think you want this to be:
i['name'] = hs
so that you are updating the map you are currently working on (pointed to by i).
Instead of iterating over each item in the dictionary, you could make use of the fact that dictionaries are made for looking up values by key, and do this:
if 'name' in i:
    val = i['name']
    hs = hashlib.sha256(val.encode('utf-8')).hexdigest()
    i['name'] = hs
    json_file.seek(0)
    json_file.write(json.dumps(res, indent=4))
    json_file.truncate()
instead of this:
for key, val in i.iteritems():
    if 'name' in key:
        ...
Also, iteritems() should be items(), and if 'name' in key should be if key == 'name', as key is a string. As it is, you'd be matching any entry with a key name containing the substring 'name'.
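To see why `'name' in key` is too loose when `key` is a string, note that `in` does substring matching there:

```python
key = "nickname"
print('name' in key)   # True: substring match on strings
print(key == 'name')   # False: exact comparison
```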
UPDATE: I noticed that you are writing the entire file multiple times, once for each name entry that you encrypt. Even without this I would recommend that you open the file twice...once for reading and once for writing. This is preferred over opening a file for both reading and writing, and having to seek and truncate. So, here are all of my suggested changes, along with a few other tweaks, in a full version of your code:
import hashlib
import json

class Test:
    def encrypt(self, infile, outfile=None):
        if outfile is None:
            outfile = infile
        with open(infile) as json_file:
            res = json.load(json_file)
        for i in res['site']:
            if 'name' in i:
                i['name'] = hashlib.sha256(i['name'].encode('utf-8')).hexdigest()
        with open(outfile, "w") as json_file:
            json.dump(res, json_file, indent=4)

Test().encrypt("/tmp/input.json", "/tmp/output.json")
# Test().encrypt("/tmp/Output.json")  # <- this form will read and write to the same file
Resulting file contents:
{
    "site": [
        {
            "name": "bbdefa2950f49882f295b1285d4fa9dec45fc4144bfb07ee6acc68762d12c2e3",
            "description": "Hi I am google"
        },
        {
            "name": "9fbf261b62c1d7c00db73afb81dd97fdf20b3442e36e338cb9359b856a03bdc8",
            "description": "Hi, I am microsoft"
        }
    ],
    "veg": [
        {
            "status": "ok",
            "slot": null
        },
        {
            "status": "ok"
        }
    ]
}
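If other top-level lists besides 'site' ever carry "name" fields, the same in-place idea generalizes. A sketch (`hash_names` is a made-up helper name, and it assumes every top-level value is a list):

```python
import hashlib

def hash_names(doc):
    """Replace every "name" value in the lists of dicts with its SHA-256 hex digest."""
    for section in doc.values():
        for entry in section:
            if isinstance(entry, dict) and "name" in entry:
                entry["name"] = hashlib.sha256(entry["name"].encode("utf-8")).hexdigest()
    return doc

doc = {"site": [{"name": "google"}, {"name": "microsoft"}],
       "veg": [{"status": "ok"}]}
hash_names(doc)
print(doc["site"][0]["name"])
# bbdefa2950f49882f295b1285d4fa9dec45fc4144bfb07ee6acc68762d12c2e3
```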
I have a recursive method that traverses a file system structure and creates a dictionary from it.
This is the code:
import os

def path_to_dict(path):
    d = {'name': os.path.basename(path)}
    if os.path.isdir(path):
        d['type'] = "directory"
        d['path'] = os.path.relpath(path).strip('..\\').replace('\\', '/')
        d['children'] = [path_to_dict(os.path.join(path, x)) for x in os.listdir(path)]
    else:
        d['type'] = "file"
        d['path'] = os.path.relpath(path).strip('..\\').replace('\\', '/')
        with open(path, 'r', encoding="utf-8", errors='ignore') as myfile:
            content = myfile.read().splitlines()
            d['content'] = content
    return d
At the moment, it checks whether the path is a folder and, if so, adds the keys name, type, path and children, where children is an array that can contain further folders or files. If it is a file, it gets the keys name, type, path and content.
After converting it to JSON, the final structure is like this.
{
    "name": "nw",
    "type": "directory",
    "path": "Parsing/nw",
    "children": [{
        "name": "New folder",
        "type": "directory",
        "path": "Parsing/nw/New folder",
        "children": [{
            "name": "abc",
            "type": "directory",
            "path": "Parsing/nw/New folder/abc",
            "children": [{
                "name": "text2.txt",
                "type": "file",
                "path": "Parsing/nw/New folder/abc/text2.txt",
                "content": ["abc", "def", "dfg"]
            }]
        }, {
            "name": "text2.txt",
            "type": "file",
            "path": "Parsing/nw/New folder/text2.txt",
            "content": ["abc", "def", "dfg"]
        }]
    }, {
        "name": "text1.txt",
        "type": "file",
        "path": "Parsing/nw/text1.txt",
        "content": ["aaa "]
    }, {
        "name": "text2.txt",
        "type": "file",
        "path": "Parsing/nw/text2.txt",
        "content": []
    }]
}
Now I want the script to always set the type in only the root folder to the value root. How can I do this?
I think you want something similar to the following implementation. Only the root folder will get "type": "root"; the child elements keep "directory" or "file" as before.

import os

def path_to_dict(path, child=False):
    d = {'name': os.path.basename(path)}
    if os.path.isdir(path):
        d['type'] = "directory" if child else "root"
        d['path'] = os.path.relpath(path).strip('..\\').replace('\\', '/')
        d['children'] = [path_to_dict(os.path.join(path, x), child=True) for x in os.listdir(path)]
    else:
        d['type'] = "file" if child else "root"
        d['path'] = os.path.relpath(path).strip('..\\').replace('\\', '/')
        with open(path, 'r', encoding="utf-8", errors='ignore') as myfile:
            content = myfile.read().splitlines()
            d['content'] = content
    return d
I wish to create a JSON-style nested dictionary from a list of lists. Each list contained a full directory path, but I broke the paths into their individual components, as I thought it would make creating the nested dictionaries easier.
An example list:
["root", "dir1", "file.txt"]
The expected result:
{
    "type": "directory",
    "name": "root",
    "children": [
        {
            "type": "directory",
            "name": "dir1",
            "children": [
                {
                    "type": "file",
                    "name": "file.txt"
                }
            ]
        }
    ]
}
I've tried using a recursive method but couldn't quite get there (I'm new to recursive methods and my head continually spun out). I also tried an iterative method from an idea I found here (Stack Overflow), which inverted the list and built the dict backwards. I kind of got that to work, but was unable to satisfy one of the solution requirements: that the code can deal with duplication in parts of the directory paths as it iterates over the list of lists.
For example, following on from the last example, the next inputted list is this:
["root", "dir1", "dir2", "file2.txt"]
and it needs to build onto the JSON dictionary to produce this:
{
    "type": "directory",
    "name": "root",
    "children": [
        {
            "type": "directory",
            "name": "dir1",
            "children": [
                {
                    "type": "file",
                    "name": "file.txt"
                },
                {
                    "type": "directory",
                    "name": "dir2",
                    "children": [
                        {
                            "type": "file",
                            "name": "file2.txt"
                        }
                    ]
                }
            ]
        }
    ]
}
and so on with an unknown number of lists containing directory paths.
Thanks.
A recursive solution with itertools.groupby is as follows (assuming all paths are absolute paths). The idea is to group paths by the first element in the path list. This groups similar directory roots together, allowing us to call the function recursively on that group.
Also note that file names cannot be duplicated in a directory, so all files will be grouped as single element lists by groupby:
from itertools import groupby
from operator import itemgetter
import json

def build_dict(paths):
    if len(paths) == 1 and len(paths[0]) == 1:
        return {"type": "file", "name": paths[0][0]}
    dirname = paths[0][0]
    d = {"type": "directory", "name": dirname, "children": []}
    for k, g in groupby(sorted([p[1:] for p in paths], key=itemgetter(0)),
                        key=itemgetter(0)):
        d["children"].append(build_dict(list(g)))
    return d

paths = [["root", "dir1", "file.txt"], ["root", "dir1", "dir2", "file2.txt"]]
print(json.dumps(build_dict(paths), indent=4))
Output
{
    "type": "directory",
    "name": "root",
    "children": [
        {
            "type": "directory",
            "name": "dir1",
            "children": [
                {
                    "type": "directory",
                    "name": "dir2",
                    "children": [
                        {
                            "type": "file",
                            "name": "file2.txt"
                        }
                    ]
                },
                {
                    "type": "file",
                    "name": "file.txt"
                }
            ]
        }
    ]
}
Here's a naive recursive solution that simply walks through the tree structure, adding children as necessary, until the last element of path is reached (assumed to be a file).
import json

def path_to_json(path, root):
    if path:
        curr = path.pop(0)
        if not root:
            root["type"] = "file"
            root["name"] = curr
            if path:
                root["children"] = [{}]
                root["type"] = "directory"
                path_to_json(path, root["children"][0])
        elif path:
            try:
                i = [x["name"] for x in root["children"]].index(path[0])
                path_to_json(path, root["children"][i])
            except ValueError:
                root["children"].append({})
                path_to_json(path, root["children"][-1])
    return root

if __name__ == "__main__":
    paths = [["root", "dir1", "file.txt"],
             ["root", "dir1", "dir2", "file2.txt"]]
    result = {}
    print(json.dumps([path_to_json(x, result) for x in paths][0], indent=4))
Output:
{
    "type": "directory",
    "name": "root",
    "children": [
        {
            "type": "directory",
            "name": "dir1",
            "children": [
                {
                    "type": "file",
                    "name": "file.txt"
                },
                {
                    "type": "directory",
                    "name": "dir2",
                    "children": [
                        {
                            "type": "file",
                            "name": "file2.txt"
                        }
                    ]
                }
            ]
        }
    ]
}
Try it!
Given that not much detail has been provided, here is a solution that uses a reference to walk into each nested dict:
In [537]: structure = ["root", "dir1", "dir2", "file2.txt"]
In [538]: d = {}
# Create a reference to the current dict
In [541]: curr = d
In [542]: for i, s in enumerate(structure):
     ...:     curr['name'] = s
     ...:     if i != len(structure) - 1:
     ...:         curr['type'] = 'directory'
     ...:         curr['children'] = {}
     ...:         curr = curr['children']  # New reference is the child dict
     ...:     else:
     ...:         curr['type'] = 'file'
     ...:
In [544]: from pprint import pprint
In [545]: pprint(d)
{'children': {'children': {'children': {'name': 'file2.txt', 'type': 'file'},
                           'name': 'dir2',
                           'type': 'directory'},
              'name': 'dir1',
              'type': 'directory'},
 'name': 'root',
 'type': 'directory'}
I don't know if this will work for all of your cases, as the spec isn't very detailed.
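If the 'children' need to stay lists, as in the expected output, the same reference-walking idea extends to a merge that reuses an existing child when a path prefix repeats. A sketch (`insert_path` is a hypothetical helper; it assumes the last element of each list is a file):

```python
def insert_path(root, parts):
    """Merge one path (directory names plus a final file name) into the tree."""
    node = root
    for name in parts[:-1]:
        # Reuse an existing directory child if this prefix was seen before.
        for child in node.setdefault("children", []):
            if child["name"] == name:
                node = child
                break
        else:
            new = {"type": "directory", "name": name, "children": []}
            node["children"].append(new)
            node = new
    node["children"].append({"type": "file", "name": parts[-1]})

tree = {"type": "directory", "name": "root", "children": []}
for p in [["root", "dir1", "file.txt"], ["root", "dir1", "dir2", "file2.txt"]]:
    insert_path(tree, p[1:])  # skip the shared "root" element
print(tree)
```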
First of all, I am totally new to Python. I am a graphic designer and I need to get group members' photos for a group logo. I have found this:
https://github.com/lionaneesh/IIITD-Students-Collage
and it should pretty much do what I need, but apparently I am doing something wrong and it does not work as intended.
When I execute this script:
import json
from urllib2 import urlopen

fp = open("test2.txt")
data = json.loads(fp.read())
fp.close()

user_photos = {}  # id -> [User's Name, Photo URL]

for user in data["data"]:
    print user
    page = urlopen("http://graph.facebook.com/" + user["id"] + "?fields=picture")
    page_data = json.loads(page.read())
    photo_url = page_data["picture"]["data"]["url"]
    user_photos[user["id"]] = [user["name"], photo_url]

fp = open("user_photos.json", "w")
fp.write(json.dumps(user_photos))
I get this error:
Traceback (most recent call last):
  File "C:\test.py", line 11, in <module>
    for user in data["data"]:
KeyError: 'data'
Could someone explain to me how to fix it, or where to look for help?
Edit: this is how the data in text2.txt looks:
{
    "id": "1390694364479028",
    "members": {
        "data": [
            {
                "name": "Patryk Wiśniewski",
                "administrator": false,
                "id": "321297624692717"
            },
            {
                "name": "Backed PL",
                "administrator": false,
                "id": "1440205746235525"
            },
and so on, with the other group members' info
You probably just don't have a JSON field for "data" in test2.txt
KeyError means that there is no such key in a dict object. So, according to your script, your file does not contain a JSON data structure like this:
{"data": [{"id": 10000}, {"id": 20000}, {"id": 30000}]}
It would help if you posted the contents of test2.txt or the output of print(data).
Edit: according to your text2.txt file, your program flow should be like this:
for user in data["members"]["data"]:
    print user
    page = urlopen("http://graph.facebook.com/" + user["id"] + "?fields=picture")
    page_data = json.loads(page.read())
    photo_url = page_data["picture"]["data"]["url"]
    user_photos[user["id"]] = [user["name"], photo_url]
Simply change data["data"] to data["members"]["data"] to make your script work.
Looking at the docs, you should have exactly the same structure as the following in your txt file, bar the details.
{
    "data": [
        {
            "name": "Arushi Jain",
            "administrator": false,
            "id": "100000582289046"
        },
        {
            "name": "Ajay Yadav",
            "administrator": false,
            "id": "100004213058283"
        },
        and so on ........
    ],
    "paging": {
        "next": "https://graph.facebook.com/114462201948585/members?limit=5000&offset=5000&__after_id=712305377"
    }
}
{
    "data": [  # how yours should look
        {
            "name": "Patryk Wiśniewski",
            "administrator": false,
            "id": "321297624692717"
        },
        {
            "name": "Patryk Kurowski",
            "administrator": false,
            "id": "1429534777317507"
        },
        {
            "name": "Jan Konieczny",
            "administrator": false,
            "id": "852450774783365"
        }
    ],
    "paging": {
        "next": "https://graph.facebook.com/114462201948585/members?limit=5000&offset=5000&__after_id=712305377"
    }
}
That is the very first thing executed in the loop, so if the structure does not match exactly, it will fail exactly as it does in your error.
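A small sketch of a defensive variant that works with either layout by checking which key is actually present (the sample data here is made up):

```python
import json

raw = '{"id": "1", "members": {"data": [{"name": "A", "id": "10"}]}}'
data = json.loads(raw)

# Use a top-level "data" key if present, else fall back to members -> data.
members = data.get("data") or data.get("members", {}).get("data", [])
for user in members:
    print(user["name"], user["id"])  # A 10
```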