Is it possible to load an .ipynb file as JSON? I have a file data.ipynb and I want to load and save it as JSON in my notebook. I have the following code to load the data, but it does not work.
import json

with open("data.ipynb", mode="r", encoding="utf-8") as f:
    myfile = json.load(f)
myfile
You need to change
myfile = json.load(f)
to
myfile = json.loads(f.read())
f is a file object; json.loads() parses a string, so you pass it the file's contents via f.read(). (Note that json.load(f) also accepts a file object directly, so both forms are equivalent for a well-formed file.)
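A notebook is plain JSON on disk, so either form works. A minimal self-contained sketch (it first writes a tiny notebook-like file so the example runs anywhere; a real data.ipynb would already exist):

```python
import json

# write a minimal notebook-like JSON file (for illustration only)
notebook = {"cells": [], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
with open("data.ipynb", "w", encoding="utf-8") as f:
    json.dump(notebook, f)

# load it back: json.load(f) and json.loads(f.read()) give the same result
with open("data.ipynb", "r", encoding="utf-8") as f:
    myfile = json.load(f)

print(myfile["nbformat"])  # -> 4
```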
In a Jupyter notebook, I ran this code in a cell:
import json
import time

for i in range(10):
    with open('data.json', 'w') as f:
        json.dump({"counter": i}, f)
        time.sleep(10000)
Easy so far, but after executing the cell there is no update to the actual data.json file during the iterations; it only gets updated when the program ends. In other words, data.json stays open as a file object until the end of the code.
How can I update the file on disk inside the loop?
The json module doesn't work that way, as far as I know. You have to load the JSON data into a dictionary or list, make your changes, and then write the file again:
import json

# function to read JSON files
def read_json(path):
    with open(path, 'r') as file:
        return json.load(file)

# function to write JSON files
def write_json(path, data, indent=4):
    with open(path, 'w') as file:
        json.dump(data, file, indent=indent)
# read some json data
json_data = read_json('./my_json_file.json')
# ... do some stuff to the data
# write the data back to the file
write_json('./my_json_file.json', json_data)
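Run end to end, the read-modify-write pattern looks like this (redefining the helpers so the sketch is self-contained; the file name is just an example):

```python
import json

def read_json(path):
    with open(path, 'r') as file:
        return json.load(file)

def write_json(path, data, indent=4):
    with open(path, 'w') as file:
        json.dump(data, file, indent=indent)

# seed a file, then do one read-modify-write cycle
write_json('my_json_file.json', {"counter": 0})
json_data = read_json('my_json_file.json')
json_data["counter"] += 1
write_json('my_json_file.json', json_data)

print(read_json('my_json_file.json'))  # -> {'counter': 1}
```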
I downloaded a compressed JSON file and want to open it as a dictionary.
I used json.load, but the data type is still a string.
I want to extract a keyword list from the JSON file. Is there a way to do that even though my data is a string?
Here is my code:
import gzip
import json

with gzip.open("19.04_association_data.json.gz", "r") as f:
    data = f.read()

with open('association.json', 'w') as json_file:
    json.dump(data.decode('utf-8'), json_file)

with open("association.json", "r") as read_it:
    association_data = json.load(read_it)

print(type(association_data))
# The actual output is 'str' but I expect 'dict'
In the first with block you already have the uncompressed data; there's no need to write it out and open it a second time. Decode and parse it directly:
import gzip
import json

with gzip.open("19.04_association_data.json.gz", "r") as f:
    data = f.read()
    j = json.loads(data.decode('utf-8'))
    print(type(j))
Open the file using the gzip package from the standard library (docs), then pass the contents directly to json.loads(). Since Python 3.6, json.loads() accepts UTF-8-encoded bytes directly (the old encoding parameter was deprecated and has been removed in 3.9):

import gzip
import json

with gzip.open("19.04_association_data.json.gz", "rb") as f:
    data = json.loads(f.read())
To read from a json.gz, you can use the following snippet:
import json
import gzip

with gzip.open("file_path_to_read", "rt") as f:
    expected_dict = json.load(f)
The result is of type dict.
If you want to write to a json.gz, you can use the following snippet:

import json
import gzip

with gzip.open("file_path_to_write", "wt") as f:
    json.dump(expected_dict, f)
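The two snippets form a round trip; a self-contained sketch (the file name here is illustrative):

```python
import gzip
import json

expected_dict = {"hi": 4, "pordi": 6}

# write the dict as gzip-compressed JSON ("wt" = text mode over gzip)
with gzip.open("example.json.gz", "wt") as f:
    json.dump(expected_dict, f)

# read it back; json.load accepts the text-mode file object directly
with gzip.open("example.json.gz", "rt") as f:
    loaded = json.load(f)

print(loaded == expected_dict)  # -> True
```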
I have the code below to save a dictionary to CSV in Python. How can I load it back in?
There are a few answers on here, but none seem to be working:
import csv

file = 'Location'
with open(file, 'w') as f:  # Just use 'w' mode in 3.x
    w = csv.DictWriter(f, mydic.keys())
    w.writeheader()
    w.writerow(mydic)
I would suggest saving the dictionary as a JSON file instead.
import json

with open('data.json', 'w') as fp:
    json.dump(mydic, fp)
To load the json file:
with open('data.json', 'r') as fp:
    mydic = json.load(fp)
I believe a good way to save and load a dictionary is to use pickle.
Saving:
import pickle
some_file = open("file", "wb")
my_dictionary = {"aa":"aa"}
pickle.dump(my_dictionary, some_file)
some_file.close()
Reading back:
import pickle
some_file = open("file", "rb")  # pickle data is binary, so use "rb"
my_dictionary = pickle.load(some_file)
some_file.close()
Please remember that pickle is not safe when dealing with data received from an untrusted source.
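The same save and load can be written with with blocks (which close the files automatically) and the binary modes pickle requires:

```python
import pickle

my_dictionary = {"aa": "aa"}

# save: pickle writes bytes, so the file must be opened in binary mode
with open("file.pkl", "wb") as some_file:
    pickle.dump(my_dictionary, some_file)

# load: likewise, read in binary mode
with open("file.pkl", "rb") as some_file:
    loaded = pickle.load(some_file)

print(loaded)  # -> {'aa': 'aa'}
```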
I am failing to open this uploaded CSV file. When I use a file from the PC directory it works fine, but when I try to read the CSV file uploaded from an HTML form I get this error:
TypeError: coercing to Unicode: need string or buffer, file found
domain_file = request.POST['csv'].file
file = open(domain_file, "r")
csv_file = csv.reader(file, delimiter=",", quotechar='"')
This works fine when I am using a file from the PC:
file = open('/Desktop/csv.csv', "r")
csv_file = csv.reader( file, delimiter=",", quotechar='"')
domain_file contains a file object, not a path, but open() expects a path string. Use the filename property instead: http://flask.pocoo.org/docs/0.10/patterns/fileuploads/
Maybe something like this, reading from the upload's stream instead of calling open() on the file object:

domain_file = request.files['csv']
if domain_file and allowed_file(domain_file.filename):
    csv_file = csv.reader(domain_file.stream, delimiter=",", quotechar='"')
    # ...
Also see http://werkzeug.pocoo.org/docs/0.9/wrappers/#werkzeug.wrappers.BaseRequest.files
If you do this, you'll be able to iterate through the CSV data line by line, with each row shown as a dict:
import csv
csv_contents = request.POST['csv'].value.decode('utf-8')
file = csv_contents.splitlines()
data = csv.DictReader(file)
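A self-contained sketch of that DictReader pattern on an in-memory CSV string (the sample data is made up for illustration):

```python
import csv

# pretend this string came from the uploaded file's decoded contents
csv_contents = "name,domain\nalice,example.com\nbob,example.org\n"

file = csv_contents.splitlines()
data = csv.DictReader(file)

rows = list(data)
print(rows[0]["domain"])  # -> example.com
```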
In Python 3.3:
import json
peinaw = {"hi":4,"pordi":6}
json_data = open('data.json')
json.dump(peinaw, json_data)
json_data.close()
I get:
File "C:\Python33\lib\json\__init__.py", line 179, in dump
fp.write(chunk)
io.UnsupportedOperation: not writable
I tried the same thing in 2.7 and it works. Is there a different way in 3.3?
>>> import json
>>> peinaw = {"hi": 4, "pordi": 6}
>>> with open('data.json', 'w') as json_data:  # 'w' to open for writing
...     json.dump(peinaw, json_data)
I used a with statement here, where the file is automatically .close()d at the end of the with block.
You are not opening the file for writing; the file is opened in read mode. To verify, do this:
json_data = open('data.json')
print (json_data) # should work with 2.x and 3.x
To solve the problem, just open the file in write mode:
json_data = open('data.json', 'w')
Also, you should use the with statement when working with files:
with open('data.json', 'w') as json_data:
    json.dump(peinaw, json_data)
You need to open the file for writing; use the 'w' mode parameter:
json_data = open('data.json', 'w')