I'm trying to create a simple function which I can use to store JSON data in a file. I currently have this code:
def data_store(key_id, key_info):
    try:
        with open('data.txt', 'a') as f:
            data = json.load(f)
            data[key_id] = key_info
            json.dump(data, f)
    except Exception:
        print("Error in data store")
The idea is to load whatever data is currently in the text file, then create or edit the JSON data. So running the code...
data_store("foo","bar")
The function should read what's in the text file, then let me update the JSON data, either replacing the value if "foo" already exists or creating it if it doesn't.
This has been throwing errors at me, however. Any ideas?
The a mode will not work for both reading and writing at the same time; it opens the file for appending only. Instead, use r+:
with open('data.txt', 'r+') as f:
    data = json.load(f)
    data[key_id] = key_info
    f.seek(0)
    json.dump(data, f)
    f.truncate()
The seek(0) call here moves the cursor back to the beginning of the file, and truncate() helps in situations where the new file contents are shorter than the old ones, by cutting off the leftover bytes.
And, as a side note, try to avoid having a bare except clause, and/or log the error and the traceback properly.
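Putting it together, a minimal sketch of the corrected function; the FileNotFoundError branch (creating the file on the first run) is an assumption beyond the original answer:

import json
import logging

def data_store(key_id, key_info, filename='data.txt'):
    try:
        with open(filename, 'r+') as f:
            data = json.load(f)       # read the existing JSON object
            data[key_id] = key_info   # create or replace the key
            f.seek(0)                 # rewind before rewriting
            json.dump(data, f)
            f.truncate()              # drop leftover bytes from the old contents
    except FileNotFoundError:
        # assumed first-run behavior: create the file with a fresh object
        with open(filename, 'w') as f:
            json.dump({key_id: key_info}, f)
    except json.JSONDecodeError:
        logging.exception("data store file does not contain valid JSON")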
I am trying to do something like this, which uses reading, appending, and writing at the same time.
with open("data.json", mode="a+") as file:
# 1.Reading old data
data = json.load(file)
# 2. Updating old data with new data
data.update(new_dict)
# 3.Writing into json file
json.dump(data,file,indent=4)
But it shows json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
First you need to open the file in mode="r+". Update the old data with the new, then seek(0) to the beginning of the file, write your updated JSON data, and then truncate the rest:
with open("data.json", mode="r+") as file:
file.seek(0, 2)
if file.tell():
file.seek(0)
data = json.load(file)
data.update(new_dict)
else:
data = new_dict
file.seek(0)
json.dump(data, file, indent=4)
file.truncate()
The reason it doesn't work with a+ mode is that writes always go to the end of the file, irrespective of seek(0). So your updated JSON data just gets appended after the old object like normal text, and since the result is not valid JSON syntax, it causes a JSON decode error on the next read. (The JSONDecodeError in your example comes from the read side: in a+ mode the initial position is at the end of the file, so json.load sees an empty stream.)
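You can see this for yourself with a quick check; a small demonstration, assuming data.json already has some content:

with open("data.json", mode="a+") as f:
    print(f.tell())   # non-zero: in a+ mode the position starts at the end,
                      # so json.load(f) here reads nothing and raises the decode error
    f.seek(0)         # this fixes the read...
    # ...but a write would still land at the end of the file, corrupting the JSON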
Check here for more detailed info on how the different open modes work.
Recently I came across a strange behavior of the with open() statement in Python.
The following code returns output only for the first read statement, leaving lines as an empty list.
input_csv = []
with open(self.path, 'r') as f:  # Opening the CSV
    r = csv.DictReader(f)
    for row in r:
        input_csv.append(row)  # Storing its contents in a dictionary for later use
    lines = f.readlines()  # Reading it in as a list too
    f.close()
Splitting it into two open() statements, however, returns the objects as desired.
input_csv = []
with open(self.path, 'r') as f:  # Opening the CSV
    r = csv.DictReader(f)
    for row in r:
        input_csv.append(row)  # Storing its contents in a dictionary for later use
    f.close()

with open(self.path, 'r') as f:  # Opening the CSV
    lines = f.readlines()  # Reading it in as a list too
    f.close()
Why can f only be read from once in the first version?
Many thanks
If you look into the documentation of csv.reader(), which backs DictReader.reader:
Return a reader object which will iterate over lines in the given csvfile. csvfile can be any object which supports the iterator protocol and returns a string each time its __next__() method is called...
Hence, it relies on the behavior of file-like objects, for which each iteration is essentially f.readline(), an operation that also advances the current position in the file, until EOF is reached and iteration raises a StopIteration exception. It is the same behavior you would observe trying:
with open(self.path, 'r') as f:
    for l in f:
        pass  # each line was read
    print(f.readlines())  # prints [] because the position is now at EOF
You can add print(f.tell()) to see how the position changes as you execute each line.
If you (re)open a file, you start at position 0 (again). If you've read through once and want to use the same handle again, you need to return to the beginning of the file with f.seek(0).
Note: you really do not need to perform f.close() in a managed context using with. Once you leave it, it'll close the file handle for you.
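Applied to the original example, a sketch of reusing one handle with a rewind instead of a second open() (the filename is illustrative in place of self.path):

import csv

input_csv = []
with open('data.csv', 'r') as f:       # illustrative path in place of self.path
    for row in csv.DictReader(f):
        input_csv.append(row)          # first pass consumes the file
    print(f.tell())                    # position is now at EOF
    f.seek(0)                          # rewind to the beginning
    lines = f.readlines()              # second pass now sees every line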
I am trying to transfer data received from one function (reading) to another (writing).
The existing data inside file.txt should be parsed as JSON and printed to the console; that data should then be taken by the second function and written to the file called pfile.txt.
I just can't get them to work together. When running each function separately as commands in plain shell, they work; combined, not so much. What am I missing here?
def reading():
    filename = 'file.txt'
    with open(filename, 'r') as f:
        print(json.loads(f.read()))

reading()

def writing():
    with open('pfile.txt', 'w+') as pf:
        pf.write(reading() in writing())  # <-- this doesn't work
        pf.write('hello SO')              # <-- this does work

writing()
When you refer to a function with a pair of parentheses, Python will call that function with no arguments and resolve its return value (if any). This is not bash; functions pass data to each other as variables in memory, not through stdin/stdout.
Your code as written is riddled with infinite recursion (writing() calls itself), and it will likely crash with a "maximum recursion depth exceeded" error. This can be fixed by not calling functions within themselves (or having cycles of functions that call each other).
There's nothing about your code as written that needs multiple functions. I'd go down to 1 function:
import json

def read_and_write():
    filename = 'file.txt'
    with open(filename, 'r') as f:
        content = json.loads(f.read())
        print(content)
    with open('pfile.txt', 'w+') as pf:
        pf.write(json.dumps(content))  # write() expects a string, so serialize the data again
If you want two functions, try the following:
def read():
    filename = 'file.txt'
    with open(filename, 'r') as f:
        content = json.loads(f.read())
    return content

def write():
    content = read()
    with open('pfile.txt', 'w+') as pf:
        pf.write(json.dumps(content))  # serialize back to a string before writing
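Usage is then a single call, assuming file.txt already contains valid JSON:

write()  # read() loads file.txt, then write() serializes the result out to pfile.txt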
Basically I want to be able to calculate a parameter, store it in a text file, then read it back in later in the program.
myFile = 'example.txt'
Using with will automatically close the file when you leave that block:
# perform your writing
with open(myFile, 'w') as f:
    f.write('some stuff')

# doing other work
# more code

# perform your reading
with open(myFile, 'r') as f:
    data = f.read()

# do stuff with data
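Tying it back to the question, a minimal round trip for a computed parameter (the calculation is illustrative):

myFile = 'example.txt'
parameter = 42 * 1.5  # some computed value

with open(myFile, 'w') as f:
    f.write(str(parameter))  # files hold text, so convert to a string

# ... later in the program ...
with open(myFile, 'r') as f:
    parameter = float(f.read())  # convert back from text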
You need to call close() before changing mode (from writing to reading):
def MyWrite(myfile):
    file = open(myfile, "w")
    file.write("hello world in the new file\n")
    file.close()

def MyRead(myfile):
    file = open(myfile, "r")
    data = file.read()  # capture the contents; the original discarded them
    file.close()
    return data
Also, you could open a file for reading AND writing, using:
fd = open(myfile, "r+")
However, you must be very careful, since every operation, either read or write, changes the pointer position, so you may need to use fd.seek to make sure you're placed in the right position where you want to read or write.
Also, keep in mind that your file becomes a sort of memory mapped string(*) that sometimes syncs with the disk. If you want to save changes at a specific point, you must use fd.flush() and os.fsync(fd.fileno()) to effectively commit the changes to disk without closing the file.
All in all, I'd say it's better to stick to one mode of operation, closing the file and opening it again, unless there's a very good reason to have read/write available without switching modes.
* There's also the mmap module for memory mapped files, but I think that's way beyond what you were asking.
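For completeness, a minimal sketch of the r+ mode with an explicit commit to disk, assuming example.txt already exists (the filename and contents are illustrative):

import os

with open('example.txt', 'r+') as fd:
    old = fd.read()             # reading advances the position to EOF
    fd.seek(0)                  # reposition before writing
    fd.write('replacement')
    fd.truncate()               # discard any leftover old bytes
    fd.flush()                  # push Python's buffer to the OS
    os.fsync(fd.fileno())       # ask the OS to commit the changes to disk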
I have a JSON file with some data, and would like to occasionally update this file.
I read the file:
with open('index.json', 'rb') as f:
    idx = json.load(f)
then check for the presence of a key from the potentially new data, and if the key is not present, update the file:
with open('index.json', mode='a+') as f:
    json.dump(new_data, f, indent=4)
However, this procedure just creates a new JSON object (a Python dict) and appends it as a second object in the output file, so the file is no longer valid JSON.
Is there any simple way to append new data to json file without overwriting whole file, by updating the initial dict?
One way to do what you're after is to write one JSON object per line in the file (the JSON Lines format). I'm using that approach and it works quite well.
A nice benefit is that you can read the file more efficiently (memory-wise) because you can read it one line at a time. If you need all of the objects, there's no problem assembling a list in Python, but if you don't, you operate much faster, and you can also append.
So to initially write all your objects, you'd do something like this:
with open(json_file_path, "w") as json_file:
for data in data_iterable:
json_file.write("{}\n".format(json.dumps(data)))
Then to read efficiently (will consume little memory, no matter the file size):
with open(json_file_path, "r") as json_file:
for line in json_file:
data = json.loads(line)
process_data(data)
To update/append:
with open(json_file_path, "a") as json_file:
json_file.write("{}\n".format(json.dumps(new_data)))
Hope this helps :)