I've seen this idiom in Dive Into Python 3:
l = list(open('strcpy.c'))
The question is, how can I close the file?
Is something happening behind the scenes?
I couldn't find this information in the book.
The file will be closed when its object is reclaimed. In CPython, that happens pretty much immediately after the line executes: the file is never assigned to a variable, so its reference count drops to zero as soon as list() returns. In other Pythons, such as Jython or IronPython, this may not happen right away (or at all), though all open files are closed when the process exits.
For this reason, a better approach is to close the file explicitly using 'with':
with open("strcpy.c") as infile:
l = list(infile)
An advantage of this is that the file will be properly closed even if an exception occurs while reading it; you don't have to write that handling yourself with a try/finally block.
A with statement can be written on one line if you want to stick with the concise one-liner. :-)
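For instance, the example above collapses to:
with open("strcpy.c") as infile: l = list(infile)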
That said, I do sometimes use the list(open(...)) idiom myself in short-running scripts where having the file open a wee bit longer than it strictly needs to be isn't a big deal. An advantage is that you don't clutter things up with a variable (infile in this case) pointing to a closed file.
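If you want to watch CPython reclaim (and close) the file object, here is a small sketch; the name f and the weakref.finalize callback are only there so we can observe it happening:
import weakref

f = open('strcpy.c')
weakref.finalize(f, print, 'file object reclaimed and closed')
l = list(f)
del f   # on CPython the message prints right here; on Jython/IronPython it may print much later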
From the docs:
It is good practice to use the with keyword when dealing with file
objects. This has the advantage that the file is properly closed after
its suite finishes, even if an exception is raised on the way.
You can use it like this:
with open('strcpy.c') as f:
    l = list(f)
Personally, I would never open a file without a context manager:
with open('strcpy.c') as myfile:
    l = list(myfile)
# do stuff with l here
When using the context manager, the file is automagically closed at the end of the indented block.
For the sake of a small example, let's say that I want to read each line from this file into a list:
First line
Second line
Third line
Fourth line
and it's called example.txt and is in my cwd.
I've always been told that I should do it using a with statement like these:
# method 1
lines = []
with open("example.txt") as handle:
    for line in handle:
        lines.append(line.strip("\n"))
# method 2
with open("example.txt") as handle:
    lines = [line.strip("\n") for line in handle]
There's a lot of info out there on this. I found some good stuff in What is the python keyword "with" used for?. So it seems this is/was recommended so that the file is properly closed, especially when exceptions are thrown during processing.
# method 3
handle = open("example.txt")
lines = [line.strip("\n") for line in handle]
handle.close()
# method 4
lines = [line.strip("\n") for line in open("example.txt")]
For that reason the above two methods would be frowned upon. But all of the information I could find on the subject is extremely old, most of it aimed at Python 2.x, and sometimes using with instead of these methods within more complex code is just impractical.
I know that at some point in Python 3 the variables inside a list comprehension were limited to their own scope (so they no longer leak out of the comprehension). Could that take care of the issue in method 4? Would I still experience issues when using method 3 if an exception were thrown before I explicitly called .close()?
Is the insistence on always using with to open file objects just old habits dying hard?
The with statement uses a context manager. It is not required for working with a file, but it is good practice.
It is good practice to use the with keyword when dealing with file
objects. The advantage is that the file is properly closed after its
suite finishes, even if an exception is raised at some point. Using
with is also much shorter than writing equivalent try-finally blocks...
Python Tutorial Section 7.2
Using it is syntactic sugar for a try/finally block similar to:
f = open('example.txt', 'w')
try:
    f.write('hello, world')
finally:
    f.close()
You could explicitly write that block out whenever you want to work with a file. In my opinion it is more pythonic, and easier to read, to use the context manager syntax.
with open('example.txt', 'w') as f:
    f.write('hello, world')
If you would like to learn a little more about context managers check out this blog post and of course the Python Documentation on contextlib.
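As a quick sketch of what contextlib gives you (not something you need for plain files, since open() is already a context manager), a generator decorated with @contextmanager wraps the same try/finally pattern; the opened helper here is hypothetical, shown only for illustration:
from contextlib import contextmanager

@contextmanager
def opened(path, mode='r'):
    # everything before yield runs on entry, the finally clause runs on exit
    f = open(path, mode)
    try:
        yield f
    finally:
        f.close()

with opened('example.txt', 'w') as f:
    f.write('hello, world')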
There is also a 5th way:
handle = None
try:
    handle = open("example.txt")
except:
    pass
finally:
    if handle is not None:
        handle.close()
Note that the following would not work: if open threw an exception, handle would never be assigned, so handle.close() in the finally block would itself crash the program:
try:
    handle = open("example.txt")
except:
    pass
finally:
    handle.close()
Yes, closing files is still needed. The concept is present in essentially all programming languages, and it comes from buffering, which improves the performance of I/O: data you write can sit in a buffer until the file is flushed or closed. To avoid the need to close file handles you would have to find a way of doing I/O without buffering (and even then the OS-level handle would stay allocated until the process exits).
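A minimal sketch of that buffering effect (out.txt is just a throwaway name):
import os

f = open('out.txt', 'w')
f.write('hello')
# The five bytes may still sit in Python's write buffer here, so the file
# on disk can look empty to other readers.
print(os.path.getsize('out.txt'))   # frequently prints 0 at this point

f.close()   # flushes the buffer and releases the OS-level handle
print(os.path.getsize('out.txt'))   # 5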
I know
with open(my_file, 'r') as f:
    pass
opens and closes the file. Also,
f = open(my_file, 'r'); f.close()
does the same.
What about this:
open(my_file, 'r')
My actual question case is this:
import json
json.load(open(my_file, 'r'))
vs
import json
with open(my_file, 'r') as f:
    j = json.load(f)
I guess that technically the file should stay open, but I'm pretty sure that since the file object was not assigned to anything, it was closed right away by Python's garbage collector.
Am I right?
Is this a good practice?
Any difference between Python versions here?
I'm pretty sure that since the file object was not assigned, it was closed right away.
In the CPython implementation, the object returned by open is deleted as soon as its reference count drops to zero. This is independent of the garbage collector and happens even if GC is turned off. In the case of json.load(open(my_file, 'r')) the file handle is closed right away, since the only reference to it is released when json.load returns.
Is this a good practice?
No. It relies on an implementation detail of CPython for the file handle to be closed in a timely manner. It's a "code smell": people reading your code might assume you are a sloppy programmer. Use the with statement to make the timing of the closure explicit. If you just prefer the way a one-liner looks, you can use pathlib, like this:
from pathlib import Path
json.loads(Path(my_file).read_text())
Any difference between Python versions here?
No.
I'm confused about how the closed attribute of a file object can still be accessed successfully even after the file has been closed, as documented in one of the tutorials.
>>> with open('workfile') as f:
...     read_data = f.read()
>>> f.closed
True
I would expect the f.closed expression to fail because the with statement should close the file and f should no longer be available. How is the program still able to recognize the f file object after it is closed?
Also, shouldn't the name f exist only within the with block? How is the program able to recognize the object outside of the with block?
How is the program still able to recognize the f file object after it is closed?
It just works: closing the file does not delete the object or unbind the name. f still refers to a perfectly ordinary file object whose closed attribute is now True; only operations that need the underlying file (such as f.read()) will fail.
[...] shouldn't the name f exist only within the with block? [...]
Arguably, but Python has no block scope: with, for and if do not introduce new scopes, so f is bound in the enclosing function or module scope like any other assignment.
[...] How is the program able to recognize the object outside of the with block?
As a matter of fact, the binding is made on the with line itself, which sits just before and outside the indented block, and the with statement only guarantees that the file is closed when the block ends; it never unbinds the name. Seen that way, the language designers' choice, even if arguable, makes sense.
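A short sketch of that behaviour (assuming a readable workfile exists, as in the tutorial example):
with open('workfile') as f:
    read_data = f.read()

print(f.closed)   # True: the name f is still bound, only the file behind it is closed
f.read()          # raises ValueError, because the underlying file has been closed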
I would like to insert the following function into a Python script, concerning a file, and I haven't found how to write it yet:
if the file is opened, close it
if it is already closed, do nothing
Any help will be really much appreciated. Thank you in advance!
You write this as f.close().
In Python 3.x, IOBase.close says:
Flush and close this stream. This method has no effect if the file is already closed.
Likewise, in Python 2.x, file.close says:
Close the file … Calling close() more than once is allowed.
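So the behaviour you want is already built in; a quick sketch (example.txt is just a placeholder name):
f = open('example.txt')
f.close()
f.close()           # no error: closing an already-closed file does nothing
print(f.closed)     # True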
Of course if you read the docs, you'll notice that files (whether 3.x IOBase or 2.x file) also have a closed attribute, so if you really wanted to write what you were asking for explicitly, you could:
if not f.closed:
    f.close()
But that has no benefit over just calling f.close(), unless you want objects that aren't 100% file-like (and don't implement closed) to raise an inscrutable AttributeError instead of just working, and I doubt you want that.
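To see why, consider a hypothetical file-like object (FakeFile is made up here) that implements close() but not closed:
class FakeFile:
    # a minimal duck-typed "file" that forgot the closed attribute
    def close(self):
        print("closed")

f = FakeFile()
f.close()           # works fine
if not f.closed:    # AttributeError: 'FakeFile' object has no attribute 'closed'
    f.close()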
f.close() is the way to close a file. The Python documentation details how to use file operations: here for Python 2 and here for Python 3.
Possible Duplicate:
check what files are open in Python
Hello,
Is it possible to obtain a list of all currently open file handles? I presume that they are stored somewhere in the environment.
I am interested in this function as I would like to safely handle any files that are open when a fatal error is raised, i.e. close the file handles and replace potentially corrupted files with the original files.
I have the handling working but without knowing what file handles are open, I am unable to implement this idea.
As an aside, when a file handle is initialised, can this be inherited by another imported method?
Thank you
On Linux: lsof, or look at /proc/<pid>/fd/ for the process in question.
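From inside the process itself, a Linux-only sketch of the same idea:
import os

# Linux-only: each open file descriptor of the current process shows up
# under /proc/self/fd, and readlink() tells you what it points at.
for fd in os.listdir('/proc/self/fd'):
    try:
        print(fd, os.readlink('/proc/self/fd/' + fd))
    except OSError:
        pass   # e.g. the descriptor used to read the directory itself vanishes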
The nice way of doing this would be to modify your code to keep track of when it opens a file:
def log_open(*args, **kwargs):
    # log the call, then delegate to the built-in open()
    print("Opening a file...")
    print(args, kwargs)
    return open(*args, **kwargs)
Then, use log_open instead of open to open files. You could even do something more hacky, like modifying the File class to log itself. That's covered in the linked question above.
There's probably a disgusting, filthy hack involving the garbage collector or looking in __dict__ or something, but you don't want to do that unless you absolutely really truly seriously must.
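For completeness, the kind of garbage-collector hack being warned against looks roughly like this (CPython-specific, slow and fragile, and it will also pick up stdin/stdout/stderr):
import gc
import io

# Walk every object the collector tracks and keep the file-like ones
# that are still open.
still_open = [obj for obj in gc.get_objects()
              if isinstance(obj, io.IOBase) and not obj.closed]
for f in still_open:
    print(f)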
If you're using Python 2.5+ you can use the with keyword (though 2.5 needs from __future__ import with_statement):
with open('filename.txt', 'r') as f:
    # do stuff here
    pass
# here f has been closed and disposed properly - even with raised exceptions
I don't know what kind of catastrophic failure needs to bork the with statement, but I assume it's a really bad one. On WinXP, my quick unscientific test:
import time

with open('test.txt', 'w') as f:
    f.write('testing\n')
    while True:
        time.sleep(1)
and then killing the process with Windows Task Manager still left the data written to the file.