Why can't I read a file after my for loops? (Python on Ubuntu)

For some reason, after my for loops I am not able to read the output text file.
For example:

for line in a:
    name = (x)
    f = open('name', 'w')
for line in b:
    get = (something)
    f.write(get)
for line in c:
    get2 = (something2)
    f.write(get2)

(the below works only if the above is commented out)

f1 = open(name, 'r')
for line in f1:
    print line
If I comment out the loops, I am able to read the file and print its contents.
I am very new to coding and guessing this is something obvious that I am missing. However, I can't seem to figure it out. I have used Google, but the more I read the more I feel I am missing something. Any advice is appreciated.

@bernie is right in his comment above. The problem is that when you do open(..., 'w'), the file is rewritten to be blank, but Python/the OS doesn't actually write what you've written out to disk until its buffer fills up or you call close(). (This delay helps speed things up, because writing to disk is slow.) You can also call flush() to force a write without closing the file.
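For example, a minimal sketch (reusing the placeholder filename from the question) of how flush() makes the buffered writes visible before the file is closed:

f = open('name', 'w')
f.write('some data\n')
f.flush()              # push the buffered data out to disk without closing

f1 = open('name', 'r')
print f1.read()        # the data written above is visible now
f1.close()
f.close()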
The with statement bernie referred to would look like this:
with open('name', 'w') as f:
    for line in b:
        f.write(...)
    for line in c:
        f.write(...)

# f is closed now that we're leaving the with block
# and any writes are actually written out to the file

with open('name', 'r') as f:
    for line in f:
        print line
If you're using Python 2.5 rather than 2.6 or 2.7, you'll have to do from __future__ import with_statement at the top of your file.

Related

writing a string from one very long file to another file in python

Please do not behead me for my noob question. I have looked up many other questions on stackoverflow concerning this topic, but haven't found a solution that works as intended.
The Problem:
I have a fairly large txt-file (about 5 MB) that I want to copy via readlines() or any other built-in string-handling function into a new file. For smaller files the following code sure works (only schematically coded here):
f = open('C:/.../old.txt', 'r');
n = open('C:/.../new.txt', 'w');
for line in f:
    print(line, file=n);
However, as I found out here (UnicodeDecodeError: 'charmap' codec can't encode character X at position Y: character maps to undefined), internal restrictions of Windows prohibit this from working on larger files. So far, the only solution I came up with is the following:
f = open('C:/.../old.txt', 'r', encoding='utf8', errors='ignore');
n = open('C:/.../new.txt', 'a');
for line in f:
    print(line, file=sys.stderr) and append(line, file='C:/.../new.txt');
f.close();
n.close();
But this doesn't work. I do get a new.txt file, but it is empty. So, how do I iterate through a long txt-file and write every line into a new txt-file? Is there a way to read sys.stderr as the source for the new file (I actually don't have any idea what this sys.stderr is)?
I know this is a noob question, but I don't know where to look for an answer anymore.
Thanks in advance!
There is no need to use print(); just write() to the file:
with open('C:/.../old.txt', 'r') as f, open('C:/.../new.txt', 'w') as n:
    n.writelines(f)
However, it sounds like you may have an encoding issue, so make sure that both files are opened with the correct encoding. If you provide the error output perhaps more help can be provided.
BTW: Python doesn't use ; as a line terminator. It can be used to separate two statements if you want to put them on the same line, but this is generally considered bad form.
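For instance, a minimal sketch with explicit encodings (utf8 here is only an assumption; use whatever encoding the source file actually has):

# utf8 is an assumption; match it to the real encoding of old.txt
with open('C:/.../old.txt', 'r', encoding='utf8') as f, \
     open('C:/.../new.txt', 'w', encoding='utf8') as n:
    n.writelines(f)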
You can redirect standard output to the file, as in my code. I successfully copied a 6 MB text file with this.

import sys

bigoutput = open("bigcopy.txt", "w")
sys.stdout = bigoutput
with open("big.txt", "r") as biginput:
    for bigline in biginput.readlines():
        print(bigline.replace("\n", ""))
bigoutput.close()
Why don't you just use the shutil module and copy the file?
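For instance, a minimal sketch (the paths are the placeholders from the question):

import shutil

# copies the file's bytes as-is, so encoding never comes into play
shutil.copyfile('C:/.../old.txt', 'C:/.../new.txt')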
You can try this code; it works for me.

with open("file_path/../large_file.txt") as f:
    with open("file_path/../new_file", "w") as new_f:
        new_f.writelines(f.readlines())

Python failing to read lines properly

I'm supposed to open a file, read it line by line, and display the lines.
Here's the code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import re

in_path = "../vas_output/Glyph/20140623-FLYOUT_mins_cleaned.csv"
out_path = "../vas_gender/Glyph/"

csv_read_line = open(in_path, "rb").read().split("\n")

line_number = 0
for line in csv_read_line:
    line_number += 1
    print str(line_number) + line
Here's the contents of the input file:
12345^67890^abcedefg
random^test^subject
this^sucks^crap
And here's the result:
this^sucks^crapjectfg
Some weird combo of all three. In addition to this, the line_number output is missing. Printing out len(csv_read_line) outputs 1, for some reason, no matter how many lines are in the input file. Changing the split character from \n to ^ gives the expected output, though, so I'm assuming the problem is probably with the input file.
I'm using a Mac, and wrote both the Python code and the input file (in Sublime Text) on the Mac itself.
Am I missing something?
You seem to be splitting on "\n" which isn't necessary, and could be incorrect depending on the line terminators used in the input file. Python includes functionality to iterate over the lines of a file one at a time. The advantages are that it will worry about processing line terminators in a portable way, as well as not requiring the entire file to be held in memory at once.
Further, note that you are opening the file in binary mode (the b character in your mode string) when you actually intend to read the file as text. This can cause problems similar to the one you are experiencing.
Also, you do not close the file when you are done with it. In this case that isn't a problem, but you should get in the habit of using with blocks when possible to make sure the file gets closed at the earliest possible time.
Try this:
with open(in_path, "r") as f:
line_number = 0
for line in f:
line_number += 1
print str(line_number) + line.rstrip('\r\n')
So your example just works for me. But then, I just copied your text into a text editor on Linux and did it that way, so any carriage returns will have been wiped out.
Try this code though:
import os

in_path = "input.txt"

with open(in_path, "rb") as inputFile:
    for lineNumber, line in enumerate(inputFile):
        print lineNumber, line.strip()
It's a little cleaner, and the for line in file style deals with line breaks for you in a system-independent way, since Python's open has universal newline support.
I'd try the following Pythonic code:
#!/usr/bin/env python

in_path = "../vas_output/Glyph/20140623-FLYOUT_mins_cleaned.csv"
out_path = "../vas_gender/Glyph/"

with open(in_path, 'rb') as f:
    for i, line in enumerate(f):
        print(str(i) + line)
There are several improvements that can be made here to make it more idiomatic python.
import csv

in_path = "../vas_output/Glyph/20140623-FLYOUT_mins_cleaned.csv"
out_path = "../vas_gender/Glyph/"

# Let's open the file and make sure that it closes when we unindent
with open(in_path, "rb") as input_file:
    # Create a csv reader object that will parse the input for us
    reader = csv.reader(input_file, delimiter="^")
    # Enumerate over the rows (these will be lists of strings) and keep track
    # of the line number using Python's built-in enumerate function
    for line_num, row in enumerate(reader):
        # You can process whatever you would like here. But for now we will
        # just print out what you were originally printing
        print str(line_num) + "^".join(row)

python loop won't iterate on second pass

When I run the following in the Python IDLE Shell:
f = open(r"H:\Test\test.csv", "rb")
for line in f:
print line
#this works fine
however, when I run the following for a second time:
for line in f:
print line
#this does nothing
This does not work because you've already seeked to the end of the file the first time. You need to rewind (using .seek(0)) or re-open your file.
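A minimal sketch of the rewind approach:

f = open(r"H:\Test\test.csv", "rb")
for line in f:
    print line

f.seek(0)           # rewind to the start of the file
for line in f:
    print line      # the second loop now sees the lines again
f.close()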
Some other pointers:
Python has a very good csv module. Do not attempt to implement CSV parsing yourself unless doing so as an educational exercise.
You probably want to open your file in 'rU' mode, not 'rb'. 'rU' is universal newline mode, which will deal with source files coming from platforms with different line endings for you.
Use with when working with file objects, since it will clean up the handles for you even in the case of errors. Ex:

with open(r"H:\Test\test.csv", "rU") as f:
    for line in f:
        ...
You can read the data from the file into a variable, and then you can iterate over that data any number of times you want in your script. This is better than seeking back and forth.
f = open(r"H:\Test\test.csv", "rb")
data = f.readlines()
for line in data:
print line
for line in data:
print line
Output:
# This is test.csv
Line1,This is line 1, there are, some numbers here,321423423
Line2,This is line2 , there are some characters here,sdfdsfdsf
# This is test.csv
Line1,This is line 1, there are, some numbers here,321423423
Line2,This is line2 , there are some characters here,sdfdsfdsf
Because you've gone all the way through the CSV file, and the iterator is exhausted. You'll need to re-open it before the second loop.
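For example, a short sketch of re-opening the file for the second pass:

with open(r"H:\Test\test.csv", "rb") as f:
    for line in f:
        print line

# opening the file again gives a fresh iterator for the second loop
with open(r"H:\Test\test.csv", "rb") as f:
    for line in f:
        print line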

Pythonic way to read file line by line?

Of the two methods below, what's the Pythonic way to go about reading files line by line?
with open('file', 'r') as f:
    for line in f:
        print line
or
with open('file', 'r') as f:
    for line in f.readlines():
        print line
Or is there something I'm missing?
File handles are their own iterators (specifically, they implement the iterator protocol) so
with open('file', 'r') as f:
    for line in f:
        # code
Is the preferred usage. f.readlines() returns a list of lines, which means absorbing the entire file into memory -> generally ill advised, especially for large files.
It should be pointed out that I agree with the sentiment that context managers are worthwhile, and have included one in my code example.
Of the two you presented, the first is recommended practice. As pointed out in the comments, any solution (like that below) which doesn't use a context manager means that the file is left open, which is a bad idea.
Original answer, which leaves dangling file handles and so shouldn't be followed:
However, if you don't need f for any purpose other than reading the lines, you can just do:
for line in open('file', 'r'):
    print line
There's no need for the .readlines() method call.
PLUS: About the with statement
The execution behavior of the with statement is as commented below:
with open("xxx.txt",'r') as f:
// now, f is an opened file in context
for line in f:
// code with line
pass // when control exits *with*, f is closed
print f // if you print, you'll get <closed file 'xxx.txt'>

Python Overwriting files after parsing

I'm new to Python, and I need to do a parsing exercise. I got a file, and I need to parse it (just the headers), but after the process I need to keep the file in the same format, with the same extension, and in the same place on disk, with the only difference being the new headers.
I tried this code...
for line in open('/home/name/db/str/dir/numbers/str.phy'):
    if line.startswith('ENS'):
        linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
        print linepars
...and it does the job, but I don't know how to "overwrite" the file with the new parsing.
The easiest way, though not the most efficient (by far, and especially for long files), would be to rewrite the complete file.
You could do this by opening a second file handle and rewriting each line, except in the case of the header, you'd write the parsed header. For example,
fr = open('/home/name/db/str/dir/numbers/str.phy')
fw = open('/home/name/db/str/dir/numbers/str.phy.parsed', 'w')  # Name this whatever makes sense

for line in fr:
    if line.startswith('ENS'):
        linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
        fw.write(linepars)
    else:
        fw.write(line)

fw.close()
fr.close()
EDIT: Note that this does not use readlines(), so it's more memory efficient. It also does not store every output line, but only one at a time, writing it to file immediately.
Just as a cool trick, you could use the with statement on the input file to avoid having to close it (Python 2.5+):
fw = open('/home/name/db/str/dir/numbers/str.phy.parsed', 'w')  # Name this whatever makes sense

with open('/home/name/db/str/dir/numbers/str.phy') as fr:
    for line in fr:
        if line.startswith('ENS'):
            linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
            fw.write(linepars)
        else:
            fw.write(line)

fw.close()
P.S. Welcome :-)
As others are saying here, you want to open a file and use that file object's .write() method.
The best approach would be to open an additional file for writing:
import os

current_cfg = open(...)
parsed_cfg = open(..., 'w')

for line in current_cfg:
    new_line = parse(line)
    print new_line
    parsed_cfg.write(new_line + '\n')

current_cfg.close()
parsed_cfg.close()

os.rename(....)  # Rename old file to backup name
os.rename(....)  # Rename new file into place
Additionally I'd suggest looking at the tempfile module and use one of its methods for either naming your new file or opening/creating it. Personally I'd favor putting the new file in the same directory as the existing file to ensure that os.rename will work atomically (the configuration file named will be guaranteed to either point at the old file or the new file; in no case would it point at a partially written/copied file).
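As a rough sketch of that idea (the path is taken from the question, and parse() is a hypothetical per-line transform standing in for your header rewrite):

import os
import tempfile

src = '/home/name/db/str/dir/numbers/str.phy'

# create the temporary file in the same directory so the final os.rename stays atomic
with tempfile.NamedTemporaryFile('w', dir=os.path.dirname(src), delete=False) as tmp:
    with open(src) as fr:
        for line in fr:
            tmp.write(parse(line))  # parse() is hypothetical; apply your header edit here

os.rename(src, src + '.bak')  # keep the original as a backup
os.rename(tmp.name, src)      # move the new file into place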
The following code DOES the job.
I mean it DOES overwrite the file in place; that's what the OP asked for. That's possible because the transformations only remove characters, so the file pointer fo that writes always stays BEHIND the file pointer fi that reads.
import re

regx = re.compile('\AENS([A-Z]+)0+([0-9]{6})')

with open('bomo.phy', 'rb+') as fi, open('bomo.phy', 'rb+') as fo:
    fo.writelines(regx.sub('\\1\\2', line) for line in fi)
I think that the writing isn't performed by the operating system one line at a time but through a buffer. So several lines are read before a pool of transformed lines are written. That's what I think.
newlines = []
for line in open('/home/name/db/str/dir/numbers/str.phy').readlines():
    if line.startswith('ENS'):
        linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
        newlines.append(linepars)

open('/home/name/db/str/dir/numbers/str.phy', 'w').write('\n'.join(newlines))
(sidenote: Of course if you are working with large files, you should be aware that the level of optimization required may depend on your situation. Python by nature is very non-lazily-evaluated. The following solution is not a good choice if you are parsing large files, such as database dumps or logs, but a few tweaks such as nesting the with clauses and using lazy generators or a line-by-line algorithm can allow O(1)-memory behavior.)
import re

targetFile = '/home/name/db/str/dir/numbers/str.phy'

def replaceIfHeader(line):
    if line.startswith('ENS'):
        return re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
    else:
        return line

with open(targetFile, 'r') as f:
    newText = ''.join(replaceIfHeader(line) for line in f)

try:
    # make backup of targetFile
    with open(targetFile, 'w') as f:
        f.write(newText)
except:
    # error encountered, do something to inform user where backup of targetFile is
    pass
edit: thanks to Jeff for the suggestion
