Copy a file line by line in Python

I am writing a Python program to copy a file line by line into a new file. The code I have is below; it uses a loop to copy the file line by line.
However, since the number of lines in the file may change, is there a way to copy a file line by line in Python without a loop that relies on a line count, and instead relies on something like the EOF character to terminate the loop?
import os
import sys

i = 0
f = open("C:\\Users\\jgr208\\Desktop\\research_12\\sap\\beam_springs.$2k", "r")
copy = open("C:\\Users\\jgr208\\Desktop\\research_12\\sap\\copy.$2k", "wt")
# loop that copies the file line by line and terminates when i reaches 10
while i < 10:
    line = f.readline()
    copy.write(str(line))
    i = i + 1
f.close()
copy.close()

You can iterate over lines in a file object in Python by iterating over the file object itself:
for line in f:
    copy.write(line)
From the docs on file objects:
An alternative approach to reading lines is to loop over the file object. This is memory efficient, fast, and leads to simpler code:
>>> for line in f:
...     print line,

Files can be iterated directly, without the need for an explicit call to readline:
f = open("...", "r")
copy = open("...", "w")
for line in f:
    copy.write(line)
f.close()
copy.close()

See the shutil module for better ways of doing this than copying line by line:
shutil.copyfile(src, dst)
Copy the contents (no metadata) of the file named src to a file named dst. dst must be the complete target file name; look at shutil.copy() for a copy that accepts a target directory path. If src and dst are the same files, Error is raised. The destination location must be writable; otherwise, an IOError exception will be raised. If dst already exists, it will be replaced. Special files such as character or block devices and pipes cannot be copied with this function. src and dst are path names given as strings.
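Applied to the question's files, that is a two-liner (a minimal sketch; the paths are the ones from the question's code):

import shutil

# one call copies the whole file; no explicit loop needed
shutil.copyfile("C:\\Users\\jgr208\\Desktop\\research_12\\sap\\beam_springs.$2k",
                "C:\\Users\\jgr208\\Desktop\\research_12\\sap\\copy.$2k")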
Edit: Your question says you are copying line-by-line because the source file is volatile. Something smells wrong about your design. Could you share more details regarding the problem you are solving?

Writing line by line can be slow when working with large data. You can accelerate the read/write operations by reading/writing a bunch of lines all at once. Please refer to my answer to a similar question here
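As a rough illustration, a minimal sketch of that batching idea (the file names and chunk size are arbitrary assumptions; readlines takes a size hint and returns whole lines totalling roughly that many bytes):

with open("source.txt", "r") as src, open("dest.txt", "w") as dst:
    while True:
        lines = src.readlines(1024 * 1024)  # whole lines totalling ~1 MiB per call
        if not lines:
            break  # end of file reached
        dst.writelines(lines)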

Using with statements:
with open("input.txt", "r", encoding="utf-8") as input_file:
with open("output.txt", "w", encoding="utf-8") as output_file:
for input_line in input_file:
output_line = f(input_line) # You can change the line here
output_file.write(output_line)
Note that input_line contains the end-of-line character(s) (\n or \r\n), if there are any.
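In Python 3, both files can also be opened in a single with statement, which saves a level of nesting (same behavior as above):

# equivalent, with both context managers in one with statement
with open("input.txt", "r", encoding="utf-8") as input_file, \
        open("output.txt", "w", encoding="utf-8") as output_file:
    for input_line in input_file:
        output_file.write(input_line)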

Related

Parsing large, possibly compressed, files in Python

I am trying to parse a large file, line by line, for relevant information.
I may receive either an uncompressed or a gzipped file (I may have to add zip support at a later stage).
I am using the following code, but I feel that, because the loop is not inside the with statement, I am not parsing the file line by line and am in fact loading the entire file into file_content in memory.
if ".gz" in FILE_LIST['INPUT_FILE']:
with gzip.open(FILE_LIST['INPUT_FILE']) as input_file:
file_content = input_file.readlines()
else:
with open(FILE_LIST['INPUT_FILE']) as input_file:
file_content = input_file.readlines()
for line in file_content:
# do stuff
Any suggestions for how I should handle this?
I would prefer not to unzip the file outside the code block, as this needs to be generic, and I would have to tidy up multiple files.
readlines reads the file fully, so it's a no-go for big files.
Opening the file in two separate context blocks as you're doing and then using the input_file handle outside them doesn't work either (operation on closed file).
To get the best of both worlds, pick the open function with a conditional expression, then iterate over the lines inside a single context block:
open_function = gzip.open if ".gz" in FILE_LIST['INPUT_FILE'] else open

with open_function(FILE_LIST['INPUT_FILE'], "rt") as input_file:
    for line in input_file:
        # do stuff

Note that I have added the "rt" mode to make sure to work on text, not binary (gzip.open defaults to binary mode).
Alternative: open_function can be made generic so it doesn't depend on FILE_LIST['INPUT_FILE']:
open_function = lambda f: gzip.open(f, "rt") if ".gz" in f else open(f)
Once defined, you can reuse it at will:
with open_function(FILE_LIST['INPUT_FILE']) as input_file:
    for line in input_file:
        # do stuff

How do I start reading a file from the top in Python?

I am trying to add dependencies from a list to a requirements.txt file depending on the platform the software is going to run on. So I wrote the following code:
if platform.system() == 'Windows':
    # Add Windows-only requirements
    platform_specific_req = [req1, req2]
elif platform.system() == 'Linux':
    # Add Linux-only requirements
    platform_specific_req = [req3]

with open('requirements.txt', 'a+') as file_handler:
    for requirement in platform_specific_req:
        already_in_file = False
        # Make sure the requirement is not already in the file
        for line in file_handler.readlines():
            line = line.rstrip()  # remove '\n' at end of line
            if line == requirement:
                already_in_file = True
                break
        if not already_in_file:
            file_handler.write('{0}\n'.format(requirement))
file_handler.close()
But what happens with this code is that when the second requirement is searched for in the list of requirements already in the file, for line in file_handler.readlines(): seems to be pointing at the last element of the list in the file, so the new requirement is actually only compared to the last element, and if it is not the same one it gets added. Obviously this causes several elements to be duplicated in the list, since only the first requirement is compared against all the elements in the list. How can I tell Python to start comparing from the top of the file again?
Solution:
I received many great responses and learned a lot, thanks guys. I ended up combining two of the solutions, the one from Antti Haapala and the one from Matthew Franglen, into one. I am showing the final code here for reference:
# Append the extra requirements to the requirements.txt file
with open('requirements.txt', 'r') as file_in:
    reqs_in_file = set([line.rstrip() for line in file_in])

missing_reqs = set(platform_specific_reqs).difference(reqs_in_file)
with open('requirements.txt', 'a') as file_out:
    for req in missing_reqs:
        file_out.write('{0}\n'.format(req))
The answer to your explicit question: file_handler.seek(0) will seek it back to the beginning of the file.
Some neat improvements:
You can use the file handler itself as an iterator instead of calling the readlines() method.
If your file is too large to read entirely into memory, then iterating over the lines in the file directly is fine, but you should change how you're doing it. As is, you're iterating over the entire file for each requirement, and IO is costly. You should instead iterate over the lines once and, for each line, check whether it's one of the requirements. Like so:
with open('requirements.txt', 'a+') as file_handler:
    file_handler.seek(0)  # 'a+' positions the stream at the end, so rewind before reading
    for line in file_handler:
        line = line.rstrip()
        if line in platform_specific_req:
            platform_specific_req.remove(line)
    for req in platform_specific_req:
        file_handler.write('{0}\n'.format(req))
You open the file handle before iterating over the existing requirement list, and then read the entire file handle for each requirement.
The file handle is exhausted after the first requirement because you have not reopened it. Reopening the file for each iteration would be very wasteful; read the file into a list once and then use that inside the loops. Or do a set comparison!
file_content = set([line.rstrip() for line in file_handler])
only_in_platform = set(platform_specific_req).difference(file_content)
Do not try to read the file again for each requirement. While appending does work for this very use case, for modifications in general it is easier to just:
Read the content from the file into a list (preferably skipping empty lines)
Modify the list
Open the file again for writing and save the modified data.
So for example
with open('requirements.txt', 'r') as fin:
    requirements = [i for i in (line.strip() for line in fin) if i]

for req in platform_specific_req:
    if req not in requirements:
        requirements.append(req)

with open('requirements.txt', 'w') as fout:
    for req in requirements:
        fout.write('{0}\n'.format(req))
        # or print(req, file=fout)
I know I'm answering a little late, but I would suggest doing it this way: opening the file once, reading and appending in the same pass. Note this should work on every platform regardless of your system:
import os

def ensure_in_file(lines, file_path):
    '''
    idempotent function to append lines to a file if they're not already there
    '''
    with open(file_path, 'r+U') as f:  # r+U allows append, Universal Newline mode
        # set of all lines in the file, less newlines and trailing spaces
        file_lines = set(l.rstrip() for l in f)
        # write the lines not yet in the file, adding the os line separator as you go
        f.writelines(l + os.linesep for l in set(lines).difference(file_lines))
You can test this:
a_file = '/temp/temp/foo/bar'  # insert your own file path here

# with open(a_file, 'w') as f:  # ensure a blank file
#     pass

ensure_in_file(['this', 'that'], a_file)
with open(a_file, 'rU') as f:
    print f.read()

ensure_in_file(['this', 'that'], a_file)
with open(a_file, 'rU') as f:
    print f.read()
Each print statement should demonstrate that the file has each line once.

Python loop won't iterate on second pass

When I run the following in the Python IDLE shell:
f = open(r"H:\Test\test.csv", "rb")
for line in f:
    print line
# this works fine
However, when I run the following a second time:
for line in f:
    print line
# this does nothing
This does not work because you've already read to the end of the file the first time. You need to rewind (using .seek(0)) or re-open the file.
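A minimal sketch of the rewind approach, reusing the handle from the question:

f = open(r"H:\Test\test.csv", "rb")
for line in f:
    print line
f.seek(0)  # rewind to the beginning of the file
for line in f:  # the second pass now yields lines again
    print line
f.close()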
Some other pointers:
Python has a very good csv module; do not attempt to implement CSV parsing yourself unless you're doing so as an educational exercise (there's a short sketch after these pointers).
You probably want to open your file in 'rU' mode, not 'rb'. 'rU' is universal newline mode, which will deal with source files coming from platforms with different line endings for you.
Use with when working with file objects, since it will cleanup the handles for you even in the case of errors. Ex:
with open(r"H:\Test\test.csv", "rU") as f:
for line in f:
...
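For instance, a minimal sketch with the csv module (Python 2 syntax to match the question; on Python 2 the csv module wants the file opened in binary mode):

import csv

with open(r"H:\Test\test.csv", "rb") as f:
    for row in csv.reader(f):  # each row arrives already split into a list of fields
        print row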
You can read the data from the file into a variable, and then iterate over this data any number of times you want in your script. This is better than seeking back and forth.
f = open(r"H:\Test\test.csv", "rb")
data = f.readlines()
for line in data:
print line
for line in data:
print line
Output:
# This is test.csv
Line1,This is line 1, there are, some numbers here,321423423
Line2,This is line2 , there are some characters here,sdfdsfdsf
# This is test.csv
Line1,This is line 1, there are, some numbers here,321423423
Line2,This is line2 , there are some characters here,sdfdsfdsf
Because you've gone all the way through the CSV file, and the iterator is exhausted. You'll need to re-open it before the second loop.

Open a file for input and output in Python

I have the following code which is intended to remove specific lines of a file. When I run it, it prints the two filenames that live in the directory, then deletes all information in them. What am I doing wrong? I'm using Python 3.2 under Windows.
import os

files = [file for file in os.listdir() if file.split(".")[-1] == "txt"]
for file in files:
    print(file)
    input = open(file, "r")
    output = open(file, "w")
    for line in input:
        print(line)
        # if line is good, write it to output
    input.close()
    output.close()
open(file, 'w') wipes the file. To prevent that, open it in r+ mode (read+write/don't wipe), then read it all at once, filter the lines, and write them back out again. Something like
with open(file, "r+") as f:
    lines = f.readlines()  # read entire file into memory
    f.seek(0)  # go back to the beginning of the file
    f.writelines(filter(good, lines))  # dump the filtered lines back
    f.truncate()  # wipe the remains of the old file
I've assumed that good is a function telling whether a line should be kept.
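For instance, good might look like this (a hypothetical predicate; any function from a line to a bool works):

# hypothetical example: keep every line that is not a comment
def good(line):
    return not line.startswith('#')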
If your file fits in memory, the easiest solution is to open the file for reading, read its contents to memory, close the file, open it for writing and write the filtered output back:
with open(file_name) as f:
    lines = list(f)
# filter lines
with open(file_name, "w") as f:  # This removes the file contents
    f.writelines(lines)
Since you are not intermingling read and write operations, the advanced file modes like "r+" are unnecessary here and only complicate things.
If the file does not fit into memory, the usual approach is to write the output to a new, temporary file, and move it back to the original file name after processing is finished.
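A minimal sketch of that temp-file approach (assuming the same hypothetical good predicate as above; os.replace requires Python 3.3+):

import os
import tempfile

# write the filtered output to a temporary file in the same directory
with open(file_name) as fin, tempfile.NamedTemporaryFile(
        "w", dir=os.path.dirname(os.path.abspath(file_name)),
        delete=False) as fout:
    for line in fin:
        if good(line):
            fout.write(line)
os.replace(fout.name, file_name)  # swap the new file into place (atomic on POSIX)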
One way is to use the fileinput stdlib module. Then you don't have to worry about open/closing and file modes etc...
import fileinput
from contextlib import closing
import os

fnames = [fname for fname in os.listdir() if fname.split(".")[-1] == "txt"]  # consider os.path.splitext
with closing(fileinput.input(fnames, inplace=True)) as fin:
    for line in fin:
        # your condition here
        if 'z' not in line:
            print line,  # trailing comma suppresses the newline; in Python 3: print(line, end='')
When using inplace=True, fileinput redirects stdout to the file currently opened. A backup of the file with a default '.bak' extension is created while the filtering runs and deleted when the output file is closed; pass backup='.bak' if you want to keep it, which may come in useful.
jon@minerva:~$ cat testtext.txt
one
two
three
four
five
six
seven
eight
nine
ten
After running the above with a condition of not line.startswith('t'):
jon@minerva:~$ cat testtext.txt
one
four
five
six
seven
eight
nine
You're deleting everything when you open the file to write to it. You can't have the same file open for reading and writing at the same time like that. Use open(file, "r+") instead, and then read all the lines into another variable before writing anything.
You should not open the same file for reading and writing at the same time.
"w" means create an empty file for writing; if the file already exists, its data will be deleted.
So you can use a different file name for writing.

Python Overwriting files after parsing

I'm new to Python, and I need to do a parsing exercise. I got a file, and I need to parse it (just the headers), but after the process I need to keep the file in the same format, with the same extension, and at the same place on disk, only with the new headers.
I tried this code...
import re

for line in open('/home/name/db/str/dir/numbers/str.phy'):
    if line.startswith('ENS'):
        linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
        print linepars
..and it does the job, but I don't know how to "overwrite" the file with the new parsing.
The easiest way, though by far not the most efficient (especially for long files), would be to rewrite the complete file.
You could do this by opening a second file handle and rewriting each line, except in the case of the header, you'd write the parsed header. For example,
fr = open('/home/name/db/str/dir/numbers/str.phy')
fw = open('/home/name/db/str/dir/numbers/str.phy.parsed', 'w')  # Name this whatever makes sense
for line in fr:
    if line.startswith('ENS'):
        linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
        fw.write(linepars)
    else:
        fw.write(line)
fw.close()
fr.close()
EDIT: Note that this does not use readlines(), so it's more memory efficient. It also does not store every output line, but only one at a time, writing each to file immediately.
Just as a cool trick, you could use the with statement on the input file to avoid having to close it (Python 2.5+):
fw = open('/home/name/db/str/dir/numbers/str.phy.parsed', 'w')  # Name this whatever makes sense
with open('/home/name/db/str/dir/numbers/str.phy') as fr:
    for line in fr:
        if line.startswith('ENS'):
            linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
            fw.write(linepars)
        else:
            fw.write(line)
fw.close()
P.S. Welcome :-)
As others are saying here, you want to open a file and use that file object's .write() method.
The best approach would be to open an additional file for writing:
import os

current_cfg = open(...)
parsed_cfg = open(..., 'w')
for line in current_cfg:
    new_line = parse(line)
    print new_line
    parsed_cfg.write(new_line + '\n')
current_cfg.close()
parsed_cfg.close()
os.rename(....)  # Rename old file to backup name
os.rename(....)  # Rename new file into place
Additionally I'd suggest looking at the tempfile module and use one of its methods for either naming your new file or opening/creating it. Personally I'd favor putting the new file in the same directory as the existing file to ensure that os.rename will work atomically (the configuration file named will be guaranteed to either point at the old file or the new file; in no case would it point at a partially written/copied file).
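For illustration, a minimal sketch of that two-rename dance (the file names here are hypothetical stand-ins for the elided arguments above):

import os

cfg = 'app.cfg'                  # hypothetical config file name
os.rename(cfg, cfg + '.bak')     # rename the old file to a backup name
os.rename(cfg + '.parsed', cfg)  # rename the new file into place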
The following code DOES the job: it DOES overwrite the file on itself, which is what the OP asked for. That's possible because the transformations only remove characters, so the file pointer fo that writes always stays behind the file pointer fi that reads.
import re

regx = re.compile('\AENS([A-Z]+)0+([0-9]{6})')
with open('bomo.phy', 'rb+') as fi, open('bomo.phy', 'rb+') as fo:
    fo.writelines(regx.sub('\\1\\2', line) for line in fi)
    fo.truncate()  # the output is shorter than the input, so cut off the leftover tail
I think the writing isn't performed by the operating system one line at a time, but through a buffer, so several lines are read before a pool of transformed lines is written.
import re

newlines = []
for line in open('/home/name/db/str/dir/numbers/str.phy').readlines():
    if line.startswith('ENS'):
        line = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
    newlines.append(line)  # keep non-header lines too; each line already ends in '\n'
open('/home/name/db/str/dir/numbers/str.phy', 'w').write(''.join(newlines))
(Sidenote: of course, if you are working with large files, be aware that the level of optimization required may depend on your situation. Python is by nature eagerly evaluated. The following solution is not a good choice for parsing large files such as database dumps or logs, but a few tweaks, such as nesting the with clauses and using lazy generators or a line-by-line algorithm, can allow O(1)-memory behavior.)
import re

targetFile = '/home/name/db/str/dir/numbers/str.phy'

def replaceIfHeader(line):
    if line.startswith('ENS'):
        return re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
    else:
        return line

with open(targetFile, 'r') as f:
    newText = ''.join(replaceIfHeader(line) for line in f)  # lines keep their '\n'
try:
    # make backup of targetFile first
    with open(targetFile, 'w') as f:
        f.write(newText)
except:
    # error encountered; do something to inform user where the backup of targetFile is
    raise
Edit: thanks to Jeff for the suggestion.
