This is similar or identical to "csv writer not closing file", but I'm not 100% sure why my behaviour is different.
def LoadCSV:
    with open('test.csv', 'r') as csvfile:
        targetReader = csv.reader(csvfile, delimiter=',')
        for row in targetReader:
            ...

then finally in the function

    csvfile.close()
This opens the test.csv file in the same directory as the script. The desired behaviour is that, once the script has done what it needs to with the rows in the function, it renames the sheet to test.[timestamp] to archive it and then watches the directory for a new sheet to arrive.
Later in the code:
os.rename('test.csv', "test." + time.strftime("%x") )
This gives an error saying the file can't be renamed because a process is still using it. How do I close this file once I'm done? csvfile.close() doesn't raise an exception, and if I step through my code in interactive mode I can see that csvfile is a "closed file object". What even is that? Surely an open file is an object but a closed one isn't; how do I make my code forget it even exists so I can then do IO on the file?
NOT FOR POINTS.
The code is not valid anyway, since your function definition is wrong. If that was not intentional, it is better to edit it or to post a faithful replica of your code, rather than have us guess what the issue is.
To recap, the issues with your code:
1. def LoadCSV is not valid; def LoadCSV() is. The missing () produces a syntax error.
2. Fixing (1) above, your next problem is the call to csvfile.close(). If the code is written properly, the file is closed automatically as soon as execution leaves the with block, so the explicit close is unnecessary. Even if the renaming part of the code is inside the function, it shouldn't pose any problems.
3. A final word of warning: the format string %x will produce date strings like 08/25/14, depending on locale. That is a problem, because / is invalid in filenames on Windows (try renaming a file manually with one). Better to be very explicit and just use %m%d%y instead.
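For illustration, this is roughly the difference (the %x output depends on your locale, so the exact string may differ on your machine):

import time

print(time.strftime("%x"))      # e.g. '08/25/14' -- contains slashes, bad for filenames
print(time.strftime("%m%d%y"))  # e.g. '082514'   -- safe to use in a filename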
Finally, see the running code on my end; if your code is not structured like this, then other errors we cannot guess at might arise.
Code for reference:
import csv
import os
import time

def LoadCSV():
    with open("test.csv", "r") as csvfile:
        targetReader = csv.reader(csvfile, delimiter=",")
        for row in targetReader:
            print row
    new_name = "test.%s.csv" % time.strftime("%m%d%y")
    print new_name
    os.rename("test.csv", new_name)

LoadCSV()
Note that on my end there is nothing watching my file; antivirus is on, and obviously no multithreading is involved. Check whether one of your other scripts is concurrently watching this file for changes. Rather than watching the file, it would be better to pass the file as an argument to that other function after renaming it, since the watcher may be the reason the file is "being used". Alternatively, and this is untested on my side, it may be better to copy the file under a new name rather than rename it.
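A rough, untested sketch of that copy-instead-of-rename idea (shutil.copy2 also preserves timestamps; removing the original afterwards is optional):

import os
import shutil
import time

new_name = "test.%s.csv" % time.strftime("%m%d%y")
shutil.copy2("test.csv", new_name)  # copy the sheet to the archive name
os.remove("test.csv")               # optionally delete the original once copied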
Hope this helps.
When you are using a with block you do not need to close the file; it is released automatically once execution leaves the block. If you want Python to "forget" the entire file handle you could delete it with del csvfile, but since you are using with you should not delete the variable inside the block.
Try without the with scope instead:
csvfile = open('test.csv','r')
targetReader = csv.reader(csvfile, delimiter=',')
for row in targetReader:
    ....
csvfile.close()
del targetReader
os.rename('test.csv','test.'+time.strftime('%x'))
It might be the csv reader that still accesses the file when you are using a with block.
Related
I have a file in my Python folder called data.txt, and I have another file, read.py, that tries to read text from data.txt. But when I change something in data.txt, my read doesn't show anything new that I put in.
Something else I tried wasn't working, and then I found something that did read the file, but when I changed data.txt to something that was actually meaningful it didn't print the new text.
Can someone explain why it doesn't refresh, or what I need to do to fix it?
with open("data.txt") as f:
file_content = f.read().rstrip("\n")
print(file_content)
First and foremost, strings are immutable in Python: once you call file.read(), the returned string object will never change.
That being said, you must re-read the file at any point where its contents may have changed.
For example
read.py
def get_contents(filepath):
    with open(filepath) as f:
        return f.read().rstrip("\n")
main.py
from read import get_contents
import time
print(get_contents("data.txt"))
time.sleep(30)
# .. change file somehow
print(get_contents("data.txt"))
Now, you could set up an infinite loop that watches the file's last-modification timestamp from the OS and re-reads it whenever that changes, so you always have the latest content, but that seems like a waste of resources unless you have a specific need for it (e.g. tailing a log file), and there are arguably better tools for that anyway.
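If you do need something like that, a minimal polling sketch (reusing the get_contents helper above; the one-second interval is just an arbitrary choice) might look like this:

import os
import time

from read import get_contents

last_mtime = None
while True:
    mtime = os.path.getmtime("data.txt")  # last-modification timestamp from the OS
    if mtime != last_mtime:               # only re-read when the file has changed
        last_mtime = mtime
        print(get_contents("data.txt"))
    time.sleep(1)                         # poll once per second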
It was unclear from your question whether you read the file once or multiple times, so here are the steps to take:
1. Make sure you call the read function repeatedly, with a certain interval.
2. Check that you actually save the file after modifying it.
3. Make sure there are no file-usage conflicts.
So here is a description of each step:
1. When you read a file the way you showed, it gets closed immediately, meaning it is read only once. You need to read it multiple times if you want to see changes, so do it on some kind of interval, in another thread or with async or whatever suits your application best (see the sketch after the code below).
2. This step is obvious; remember to hit ctrl+s.
3. It may happen that a single file is being accessed by multiple processes, for example your editor and the script. To prevent errors from that, try the following code:
def read_file(file_name: str):
    while True:
        try:
            with open(file_name) as f:
                return f.read().rstrip("\n")
        except IOError:
            pass
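To tie this back to step 1, a minimal sketch that re-reads the file on an interval (the five-second interval is just an example) could be:

import time

while True:
    print(read_file("data.txt"))  # read_file re-opens and re-reads the file each time
    time.sleep(5)                 # wait a bit before checking again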
My code currently writes a dictionary containing scores for a class to a CSV file. This part is done correctly by the program and the scores are written to the file; however, the latest dictionary written to the file is not printed. For example, after the code has been run once, its data is not printed, but once the code has been run a second time, the first set of data is printed while the new data isn't. Can someone tell me where I am going wrong?
SortedScores = sorted(Class10x1.items(), key = lambda t: t[0], reverse = True) #this sorts the scores in alphabetical order and by the highest score
FileWriter = csv.writer(open('10x1 Class Score.csv', 'a+'))
FileWriter.writerow(SortedScores) #the sorted scores are written to file
print "Okay here are your scores!\n"
I am guessing the problem is somewhere here, but I cannot quite pinpoint what or where it is. I have tried to solve it by changing the mode the file is read back in with to r, r+ and rb, but all have the same result.
ReadFile = csv.reader(open("10x1 Class Score.csv", "r")) #this opens the file using csv.reader in read mode
for row in ReadFile:
    print row
return
From the Input and Output section of the Python docs:
It is good practice to use the with keyword when dealing with file objects. This has the advantage that the file is properly closed after its suite finishes, even if an exception is raised on the way. It is also much shorter than writing equivalent try-finally blocks:
>>> with open('workfile', 'r') as f:
...     read_data = f.read()
>>> f.closed
True
File objects have some additional methods, such as isatty() and truncate() which are less frequently used; consult the Library Reference for a complete guide to file objects.
I'm not sure why they bury that so far in the documentation since it is really useful and a very common beginner mistake:
SortedScores = sorted(Class10x1.items(), key = lambda t: t[0], reverse = True) #this sorts the scores in alphabetical order and by the highest score
with open('10x1 Class Score.csv', 'a+') as file:
    FileWriter = csv.writer(file)
    FileWriter.writerow(SortedScores) #the sorted scores are written to file
print "Okay here are your scores!\n"
This will close the file for you even if an error is raised, which prevents many ways of losing data.
The reason the data did not appear in the file is that .writerow() does not immediately write to the hard drive; it writes to a buffer that is occasionally flushed out to the file on disk, and with only a single write statement the buffer may never need to be flushed until the file is closed.
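Here is a small sketch of that buffering behaviour with made-up data (demo.csv and the example row are placeholders, not your actual scores):

import csv

f = open('demo.csv', 'a+')
writer = csv.writer(f)
writer.writerow(['Alice', 10])  # lands in an in-memory buffer first, not on disk
f.flush()                       # force the buffer out to the hard drive now...
f.close()                       # ...or simply close the file, which flushes it too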
Remember to close the file after the operation, otherwise the data will not be saved properly.
Try the with keyword, so that Python handles closing the file for you:
import csv

with open('10x1 Class Score.csv', 'a+') as f:
    csv_writer = csv.writer(f)
    # write something into the file
    ...

# when the above block is done, the file will be automatically closed
# so that the file is saved properly
New Python 2.7 user here. I've searched for similar questions and I can't quite see what I'm doing wrong. I have a short script that reads through all the files in a directory and, reading each one in turn, writes them to a single master file.
My code is below. I can see two things going wrong at this time (although I get no error messages): 1) it only appears to open the first file in the list it's supposed to be looping through (so my guess is I've made an error using glob?), and 2) although the print(str) statements display to the console with no problems, the output file never gets written to.
I've double-checked that the file exists, it is empty, and I'm passing the correct path and filename in when I call the function.
Any help is much appreciated.
#!/usr/bin/env python
import glob
import sys

filestoberecognised=sys.argv[1]
outputfile=sys.argv[2]
filecontents=glob.glob(filestoberecognised)

with open(outputfile,'w+') as f:
    for i, row in enumerate(filecontents):
        print(row) # this correctly prints to console
        f.write(row+'\n') # this should write the filename of the filestoberecognised to the outputfile
        with open(row,'r') as labfile:
            for j, line in enumerate(labfile): # this should write words in label file
                f.write('%s'%(line))
                print('%s'%(line))
        labfile.close() # ensures each file looped through is closed
        f.write('\n.\n')
    f.flush()
f.close()
I am a beginner writing a Python script that needs to create a file I can write information to. However, I am having problems getting it to create a new, not previously existing file.
For example, I have:
file = open(coordinates.kml, 'w')
which it proceeds to tell me:
NameError: name 'coordinates' is not defined
Of course it isn't defined; I'm trying to make that file.
Everything I read on creating a new file says to take this route, but it simply will not allow me. What am I doing wrong?
I even tried to flat out define it...
file = coordinates.kml
file_open = open(file, 'w')
... and essentially got the same result.
You need to pass coordinates.kml as a string, so place it in quotes (single or double is fine).
file = open("coordinates.kml", "w")
In addition to the above answer:
If you want to create the file in the same directory as the script, there is no problem; otherwise you need to include the path in the quotes as well.
Note that opening a file in read mode will throw an error here, since you would be trying to access a nonexistent file.
To be future-proof and platform-independent, you can read and write files in binary mode. For example, on Windows there can be alterations made to the line endings, so reading and writing in binary mode, using the "rb" and "wb" modes, should help:
file = open("coordinates.kml", "wb")
Also remember to close the file when you are done, otherwise you can get errors when re-running the script.
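Putting those points together, a sketch along these lines (the output folder name here is just an example, and the placeholder content is made up):

import os

folder = "output"  # any directory you like; adjust or use an absolute path
if not os.path.isdir(folder):
    os.makedirs(folder)  # make sure the target directory exists

path = os.path.join(folder, "coordinates.kml")
f = open(path, "wb")      # binary mode, as suggested above
f.write(b"<kml></kml>")   # placeholder content just for illustration
f.close()                 # close so the data is flushed and the handle is released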
I'm using the CSV module to read a tab delimited file. Code below:
z = csv.reader(open('/home/rv/ncbi-blast-2.2.23+/db/output.blast'), delimiter='\t')
But when I add z.close() to the end of my script I get an error stating "csv.reader object has no attribute 'close'":
z.close()
So how do I close z?
The reader is really just a parser. When you ask it for a line of data, it delegates the reading action to the underlying file object and just converts the result into a set of fields. The reader itself doesn't manage any resources that would need to be cleaned up when you're done using it, so there's no need to close it; it'd be a meaningless operation.
You should make sure to close the underlying file object, though, because that does manage a resource (an open file descriptor) that needs to be cleaned up. Here's the way to do that:
with open('/home/rv/ncbi-blast-2.2.23+/db/output.blast') as f:
    z = csv.reader(f, delimiter='\t')
    # do whatever you need to with z
If you're not familiar with the with statement, it's roughly equivalent to enclosing its contents in a try...finally block that closes the file in the finally part.
Hopefully this doesn't matter (and if it does, you really need to update to a newer version of Python), but the with statement was introduced in Python 2.5, and in that version you would have needed a __future__ import to enable it. If you were working with an even older version of Python, you would have had to do the closing yourself using try...finally.
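For completeness, on Python 2.5 that would look roughly like this (the __future__ import is unnecessary, but harmless, on later versions):

from __future__ import with_statement  # needed on Python 2.5 only
import csv

with open('/home/rv/ncbi-blast-2.2.23+/db/output.blast') as f:
    z = csv.reader(f, delimiter='\t')
    # process rows from z here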
Thanks to Jared for pointing out compatibility issues with the with statement.
You do not close CSV readers directly; instead you should close whatever file-like object is being used. For example, in your case, you'd say:
f = open('/home/rv/ncbi-blast-2.2.23+/db/output.blast')
z = csv.reader(f, delimiter='\t')
...
f.close()
If you are using a recent version of Python, you can use the with statement, e.g.
with open('/home/rv/ncbi-blast-2.2.23+/db/output.blast') as f:
    z = csv.reader(f, delimiter='\t')
    ...
This has the advantage that f will be closed even if you throw an exception or otherwise return inside the with-block, whereas such a case would lead to the file remaining open in the previous example. In other words, it's basically equivalent to a try/finally block, e.g.
f = open('/home/rv/ncbi-blast-2.2.23+/db/output.blast')
try:
    z = csv.reader(f, delimiter='\t')
    ...
finally:
    f.close()
You don't close the result of the csv.reader() call; you close the result of the open() call. So, use two statements: foo = open(...); bar = csv.reader(foo). Then you can call foo.close().
There are no bonus points awarded for doing in one line that which can be more readable and functional in two.
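In concrete terms, for the file in the question (the variable names here are just placeholders):

import csv

blast_file = open('/home/rv/ncbi-blast-2.2.23+/db/output.blast')
blast_reader = csv.reader(blast_file, delimiter='\t')
for row in blast_reader:
    pass  # work with each row here
blast_file.close()  # close the file object, not the reader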