Writing to a file in Python

I have been receiving indexing errors in Python. I got my code to work correctly by reading in a file and simply printing the desired output, but now I am trying to write the output to a file, and I seem to be having a problem with indexing when doing so. I've tried a couple of different things (I left one attempt commented out); either way I keep getting an indexing error.
EDIT: The original error may have been caused by a problem in Eclipse, but when running on the server I'm having a new issue.
I can now get it to run and produce output to a .txt file; however, it only writes a single line of output.
with open("blast.txt") as blast_output:
    for line in blast_output:
        subFields = [item.split('|') for item in line.split()]
        #transId = str(subFields[0][0])
        #iso = str(subFields[0][1])
        #sp = str(subFields[1][3])
        #identity = str(subFields[2][0])
        out = open("parsed_blast.txt", "w")
        #out.write(transId + "\t" + iso + "\t" + sp + "\t" + identity)
        out.write((str(subFields[0][0]) + "\t" + str(subFields[0][1]) + "\t" + str(subFields[1][3]) + "\t" + str(subFields[2][0])))
        out.close()
IndexError: list index out of range
Input file looks like:
c0_g1_i1|m.1 gi|74665200|sp|Q9HGP0.1|PVG4_SCHPO 100.00 372 0 0 1 372 1 372 0.0 754
c1002_g1_i1|m.801 gi|1723464|sp|Q10302.1|YD49_SCHPO 100.00 646 0 0 1 646 1 646 0.0 1310
c1003_g1_i1|m.803 gi|74631197|sp|Q6BDR8.1|NSE4_SCHPO 100.00 246 0 0 1 246 1 246 1e-179 502
c1004_g1_i1|m.804 gi|74676184|sp|O94325.1|PEX5_SCHPO 100.00 598 0 0 1 598 1 598 0.0 1227
c1005_g1_i1|m.805 gi|9910811|sp|O42832.2|SPB1_SCHPO 100.00 802 0 0 1 802 1 802 0.0 1644
c1006_g1_i1|m.806 gi|74627042|sp|O94631.1|MRM1_SCHPO 100.00 255 0 0 1 255 47 301 0.0 525
Expected output
c0_g1_i1 m.1 Q9HGP0.1 100.00
c1002_g1_i1 m.801 Q10302.1 100.00
c1003_g1_i1 m.803 Q6BDR8.1 100.00
c1004_g1_i1 m.804 O94325.1 100.00
c1005_g1_i1 m.805 O42832.2 100.00
c1006_g1_i1 m.806 O94631.1 100.00
Instead, my output is only one of the lines rather than all of them.

You are overwriting the same file again and again. Open the file outside the for loop, or open it in append mode ('a').
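A minimal sketch of that fix, assuming the same file names as the question: open parsed_blast.txt once, before the loop, so every iteration writes to the same handle.
with open("blast.txt") as blast_output, open("parsed_blast.txt", "w") as out:
    for line in blast_output:
        subFields = [item.split('|') for item in line.split()]
        # one output line per input line; the handle stays open for the whole loop
        out.write(subFields[0][0] + "\t" + subFields[0][1] + "\t" +
                  subFields[1][3] + "\t" + subFields[2][0] + "\n")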

I suggest you read the whole file into a string.
with open("blast.txt", 'r') as fileIn:
    data = fileIn.read()
Then process the data:
data = func(data)
Then write to the output file:
with open('blast_out.txt', 'w') as fileOut:
    fileOut.write(data)
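func is left abstract in this answer; as a sketch only, a hypothetical implementation for this particular parsing task might look like:
def func(data):
    # hypothetical helper: keep the four desired fields from each non-blank line
    out_lines = []
    for line in data.splitlines():
        if not line.strip():
            continue
        subFields = [item.split('|') for item in line.split()]
        out_lines.append("\t".join([subFields[0][0], subFields[0][1],
                                    subFields[1][3], subFields[2][0]]))
    return "\n".join(out_lines) + "\n"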

As @H Doucet said, write the whole thing to a string, then work with it. Leave the open() function out of the loop so it only opens & closes the file once, and make sure to open it in append mode. I've also cleaned up your out.write() call: there's no need to cast those list items to strings, they already are strings, and I've added a newline ("\n") to the end of each line.
with open("blast.txt") as f:
    blast_output = f.read()

out = open("parsed_blast.txt", "a")
for line in blast_output.split("\n"):
    if not line.strip():  # skip blank lines, e.g. the trailing newline at the end of the file
        continue
    subFields = [item.split('|') for item in line.split()]
    out.write("{}\t{}\t{}\t{}\n".format(subFields[0][0], subFields[0][1],
                                        subFields[1][3], subFields[2][0]))
out.close()

Related

Read in a file, splitting and then writing out desired output

I am very new to python, and am having some problems I can't seem to find answers to.
I have a large file that I am trying to read in, then split, and write out specific information from. I am having trouble with the read and split: it is only printing the same thing over and over again.
blast_output = open("blast.txt").read()
for line in blast_output:
    subFields = [item.split('|') for item in blast_output.split()]
    print(str(subFields[0][0]) + "\t" + str(subFields[0][1]) + "\t" + str(subFields[1][3]) + "\t" + str(subFields[2][0]))
My input file has many rows that look like this:
c0_g1_i1|m.1 gi|74665200|sp|Q9HGP0.1|PVG4_SCHPO 100.00 372 0 0 1 372 1 372 0.0 754
c1002_g1_i1|m.801 gi|1723464|sp|Q10302.1|YD49_SCHPO 100.00 646 0 0 1 646 1 646 0.0 1310
c1003_g1_i1|m.803 gi|74631197|sp|Q6BDR8.1|NSE4_SCHPO 100.00 246 0 0 1 246 1 246 1e-179 502
c1004_g1_i1|m.804 gi|74676184|sp|O94325.1|PEX5_SCHPO 100.00 598 0 0 1 598 1 598 0.0 1227
The output I am receiving is this:
c0_g1_i1 m.1 Q9HGP0.1 100.00
c0_g1_i1 m.1 Q9HGP0.1 100.00
c0_g1_i1 m.1 Q9HGP0.1 100.00
c0_g1_i1 m.1 Q9HGP0.1 100.00
But what I am wanting is
c0_g1_i1 m.1 Q9HGP0.1 100.0
c1002_g1_i1 m.801 Q10302.1 100.0
c1003_g1_i1 m.803 Q6BDR8.1 100.0
c1004_g1_i1 m.804 O94325.1 100.0
You don't need to call the read method of the file object, just iterate over it, line by line. Then replace blast_output with line in the for loop to avoid repeating the same action across all the iterations:
with open("blast.txt") as blast_output:
    for line in blast_output:
        subFields = [item.split('|') for item in line.split()]
        print("{:15}{:10}{:10}{:10}".format(subFields[0][0], subFields[0][1],
                                            subFields[1][3], subFields[2][0]))
I have opened the file in a context using with, so closing is automatically done by Python. I have also used string formatting to build the final string.
c0_g1_i1       m.1       Q9HGP0.1  100.00
c1002_g1_i1    m.801     Q10302.1  100.00
c1003_g1_i1    m.803     Q6BDR8.1  100.00
c1004_g1_i1    m.804     O94325.1  100.00
Great question. You are taking the same input over and over again with this line:
subFields = [item.split('|') for item in blast_output.split()]
The Python 2.x version looks like this:
blast_output = open("blast.txt")
for line in blast_output:  # iterate over the file object line by line (no .read())
    subFields = [item.split('|') for item in line.split()]
    print(str(subFields[0][0]) + "\t" + str(subFields[0][1]) + "\t" + str(subFields[1][3]) + "\t" + str(subFields[2][0]))
see Moses Koledoye's version for the Python 3.x formatted niceness

python print particular lines from file

The background:
Table$Gene=Gene1
time n.risk n.event survival std.err lower 95% CI upper 95% CI
0 2872 208 0.928 0.00484 0.918 0.937
1 2664 304 0.822 0.00714 0.808 0.836
2 2360 104 0.786 0.00766 0.771 0.801
3 2256 48 0.769 0.00787 0.754 0.784
4 2208 40 0.755 0.00803 0.739 0.771
5 2256 48 0.769 0.00787 0.754 0.784
6 2208 40 0.755 0.00803 0.739 0.771
Table$Gene=Gene2
time n.risk n.event survival std.err lower 95% CI upper 95% CI
0 2872 208 0.938 0.00484 0.918 0.937
1 2664 304 0.822 0.00714 0.808 0.836
2 2360 104 0.786 0.00766 0.771 0.801
3 2256 48 0.769 0.00787 0.754 0.784
4 1000 40 0.744 0.00803 0.739 0.774
#There is a new line ("\n") here too, it just doesn't come out in the code.
What I want seems simple. I want to turn the above file into an output that looks like this:
Gene1 0.755
Gene2 0.744
i.e. each gene, and the last number in the survival column from each section.
I have tried multiple ways, using regular expressions, reading the file in as a list, and calling .next(). One example of code that I have tried:
fileopen = open(sys.argv[1]).readlines()  # Read in the file as a list.
for index, line in enumerate(fileopen):   # Enumerate items in list
    if "Table" in line:                   # Find the items with "Table" (this will have my gene name)
        line2 = line.split("=")[1]        # Parse line to get my gene name
        if "\n" in fileopen[index+1]:     # This is the problem section.
            print fileopen[index]
        else:
            fileopen[index+1]
So as you can see in the problem section, I was trying to say in this attempt:
if the next item in the list is a new line, print the item, else, the next line is the current line (and then I can split the line to pull out the particular number I want).
If anyone could correct the code so I can see what I did wrong I'd appreciate it.
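For reference, a minimal corrected version of the attempt above (a sketch under the question's assumptions: each section starts with a Table$Gene= line, ends with a blank line, and survival is the fourth whitespace-separated column):
import sys

fileopen = open(sys.argv[1]).readlines()  # read in the file as a list
gene = None
for index, line in enumerate(fileopen):
    if "Table" in line:
        gene = line.strip().split("=")[1]  # parse out the gene name
    elif line.strip() == "" and gene is not None:
        # a blank line closes a section; the previous line is its last record
        print gene, fileopen[index - 1].split()[3]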
A bit of overkill, but instead of manually writing a parser for each data item, use an existing package like pandas to read in the file. You just need to write a bit of code to specify the relevant lines in the file. Un-optimized code (it reads the file twice):
import pandas as pd

def genetable(gene):
    l = open('gene.txt').readlines()
    l += "\n"  # add newline to end of file in case last line is not newline
    lines = len(l)
    skiprows = -1
    for (i, line) in enumerate(l):
        if "Table$Gene=Gene" + str(gene) in line:
            skiprows = i + 1
        if skiprows >= 0 and line == "\n":
            skipfooter = lines - i - 1
            # assuming tab-separated data given your inputs; change as needed
            df = pd.read_csv('gene.txt', sep='\t', engine='python',
                             skiprows=skiprows, skipfooter=skipfooter)
            # assert df.columns.....
            return df
    return "Not Found"
This will read in a DataFrame with all the relevant data in that file. You can then do:
genetable(2).survival            # series with all survival rates
genetable(2).survival.iloc[-1]   # last item in survival
The advantage of this is that you have access to all the items, and any malformed rows in the file will probably be picked up, preventing incorrect values from being used. If it were my own code I would add assertions on the column names before returning the pandas DataFrame: you want to pick up any parsing errors early so they don't propagate.
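Such an assertion might look like the following (a sketch only; the expected names are assumptions based on the question's header row):
# hypothetical check, placed just before `return df` in genetable()
expected = ['time', 'n.risk', 'n.event', 'survival']
assert all(col in df.columns for col in expected), \
    "unexpected columns: {}".format(list(df.columns))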
This worked when I tried it (filelines is the file read in as a list, as in the question):
import sys

filelines = open(sys.argv[1]).readlines()
gene = 1
for i in range(len(filelines)):
    if filelines[i].strip() == "":
        print("Gene" + str(gene) + " " + filelines[i-1].split()[3])
        gene += 1
You could try something like this (I copied your data into foo.dat):
In [1]: with open('foo.dat') as input:
   ...:     lines = input.readlines()
   ...:
Using with makes sure the file is closed after reading.
In [3]: lines = [ln.strip() for ln in lines]
This gets rid of extra whitespace.
In [5]: startgenes = [n for n, ln in enumerate(lines) if ln.startswith("Table")]
In [6]: startgenes
Out[6]: [0, 10]
In [7]: emptylines = [n for n, ln in enumerate(lines) if len(ln) == 0]
In [8]: emptylines
Out[8]: [9, 17]
Using emptylines relies on the fact that the records are separated by lines containing only whitespace.
In [9]: lastlines = [n-1 for n, ln in enumerate(lines) if len(ln) == 0]
In [10]: for first, last in zip(startgenes, lastlines):
   ....:     gene = lines[first].split("=")[1]
   ....:     num = lines[last].split()[3]  # survival is the fourth column
   ....:     print gene, num
   ....:
Gene1 0.755
Gene2 0.744
Here is my solution:
>>> with open('t.txt','r') as f:
...     for l in f:
...         if "Table" in l:
...             gene = l.split("=")[1][:-1]
...         elif l not in ['\n', '\r\n']:
...             surv = l.split()[3]
...         else:
...             print gene, surv
...
Gene1 0.755
Gene2 0.744
Instead of checking for the new line, simply print when you are done reading the file:
lines = open("testgenes.txt").readlines()
table = ""
finalsurvival = 0.0
for line in lines:
    if "Table" in line:
        if table != "":  # print previous survival
            print table, finalsurvival
        table = line.strip().split('=')[1]
    else:
        try:
            finalsurvival = line.split()[3]  # survival is the fourth whitespace-separated column
        except IndexError:
            continue
print table, finalsurvival

Find and print same elements in a loop

I have a huge input file that looks like this,
c651 OS05T0-00 492 749 29.07
c651 OS01T0-00 1141 1311 55.00
c1638 MLOC_8.3 27 101 72.00
c1638 MLOC_8.3 25 117 70.97
c2135 TRIUR3_3-P1 124 210 89.66
c2135 EMT17965 25 117 70.97
c1914 OS02T0-00 2 109 80.56
c1914 OS02T0-00 111 155 93.33
c1914 OS08T0-00 528 617 50.00
I would like to iterate over each c group, check whether its lines have the same element in the second column, and print them into two separate files:
1. c groups that contain the same elements, and
2. c groups that do not have the same elements.
In the case of c1914, since two of its elements are the same and one is not, it goes to file 2. So the two desired output files will look like this, file1.txt:
c1638 MLOC_8.3 27 101 72.00
c1638 MLOC_8.3 25 117 70.97
file2.txt
c651 OS05T0-00 492 749 29.07
c651 OS01T0-00 1141 1311 55.00
c2135 TRIUR3_3-P1 124 210 89.66
c1914 OS02T0-00 2 109 80.56
c1914 OS02T0-00 111 155 93.33
c1914 OS08T0-00 528 617 50.00
This is what I tried:
oh1 = open('result.txt', 'w')
oh2 = open('result2.txt', 'w')
f = open('file.txt', 'r')
lines = f.readlines()
for line in lines:
    new_list = line.split()
    protein = new_list[1]
    for i in range(1, len(protein)):
        (p, c) = protein[i-1], protein[i]
        if c == p:
            new_list.append(protein)
            oh1.write(line)
        else:
            oh2.write(line)
If I understand you correctly, you want to send all lines of your input file that have a given first element txt1 to your first output file if the second element txt2 of all those lines is the same; otherwise all those lines go to the second output file. Here is a program that does that.
from collections import defaultdict

# Read in file line-by-line for the first time
# Build up dictionary of txt1 to set of txt2s
txt1totxt2 = defaultdict(set)
f = open('file.txt', 'r')
for line in f:
    lst = line.split()
    txt1 = lst[0]
    txt2 = lst[1]
    txt1totxt2[txt1].add(txt2)

# The dictionary tells us whether the second text
# is unique or not. If it's unique the set has
# just one element; otherwise the set has > 1 elts.

# Read in file for second time, sending each line
# to the appropriate output file
f.seek(0)
oh1 = open('result1.txt', 'w')
oh2 = open('result2.txt', 'w')
for line in f:
    lst = line.split()
    txt1 = lst[0]
    if len(txt1totxt2[txt1]) == 1:
        oh1.write(line)
    else:
        oh2.write(line)
The program logic is very simple. For each txt1 it builds up the set of txt2s that it sees. When you're done reading the file, if the set has just one element, then you know the txt2s are unique; if the set has more than one element, then there are at least two different txt2s. Note that this means that if you only have one line in the input file with a particular txt1, it will always be sent to the first output file. There are ways around this if this is not the behaviour you want (see the sketch below).
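As a sketch of one such workaround: also count the lines per txt1 during the first pass, and route single-line groups to the second file instead (txt1counts is a hypothetical addition, not part of the program above):
from collections import defaultdict

txt1totxt2 = defaultdict(set)
txt1counts = defaultdict(int)  # hypothetical: number of lines seen per txt1
f = open('file.txt', 'r')
for line in f:
    lst = line.split()
    txt1totxt2[lst[0]].add(lst[1])
    txt1counts[lst[0]] += 1

f.seek(0)
oh1 = open('result1.txt', 'w')
oh2 = open('result2.txt', 'w')
for line in f:
    txt1 = line.split()[0]
    # only multi-line groups with a single shared txt2 go to the first file
    if txt1counts[txt1] > 1 and len(txt1totxt2[txt1]) == 1:
        oh1.write(line)
    else:
        oh2.write(line)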
Note also that because the file is large, I've read it in line-by-line: lines=f.readlines() in your original program reads the whole file into memory at a time. I've stepped through it twice: the second time does the output. If this increases the run time then you can restore the lines=f.readlines() instead of reading it a second time. However the program as is should be much more robust to very large files. Conversely if your files are very large indeed, it would be worth looking at the program to reduce the memory usage further (the dictionary txt1totxt2 could be replaced with something more optimal, albeit more complicated, if necessary).
Edit: there was a good point in the comments (now deleted) about the memory cost of this algorithm. To elaborate: the memory usage could be high, but it isn't as severe as storing the whole file. txt1totxt2 is a dictionary from the first text of each line to the set of second texts, so its size is of the order of (number of unique txt1s) * (average number of unique txt2s per txt1). This is likely to be a lot smaller than the file size, but the approach may require further optimization. The approach here is to get something simple going first; it can then be iterated on to optimize further if necessary.
Try this...
import collections

parsed_data = collections.OrderedDict()
with open("input.txt", "r") as fd:
    for line in fd.readlines():
        line_data = line.split()
        key = line_data[0]
        key2 = line_data[1]
        if not parsed_data.has_key(key):
            parsed_data[key] = collections.OrderedDict()
        if not parsed_data[key].has_key(key2):
            parsed_data[key][key2] = [line]
        else:
            parsed_data[key][key2].append(line)

# now process the parsed data and write result files
fsimilar = open("similar.txt", "w")
fdifferent = open("different.txt", "w")
for key in parsed_data:
    if len(parsed_data[key]) == 1:
        f = fsimilar
    else:
        f = fdifferent
    for key2 in parsed_data[key]:
        for line in parsed_data[key][key2]:
            f.write(line)
fsimilar.close()
fdifferent.close()
Hope this helps

log file parsing python

I have a log file with an arbitrary number of lines. All I need to extract is the one line of data from the log file which starts with the string "Totals". I do not want any other lines from the file.
How do I write a simple python program for this?
This is how my input file looks
TestName id eno TPS GRE FNP
Test 1205 1 0 78.00 0.00 0.02
Test 1206 1 0 45.00 0.00 0.02
Test 1207 1 0 73400 0.00 0.02
Test 1208 1 0 34.00 0.00 0.02
Totals 64 0 129.61 145.64 1.12
I am trying to get an output file which looks like
TestName id TPS GRE
Totals 64 129.61 145.64
OK, so I want only the 1st, 2nd, 4th and 5th columns from the input file, but not the others. I am trying list[index] to achieve this but getting an IndexError (list index out of range). Also, the spacing between columns is not uniform, so I am not sure how to split the columns and select the ones I want. Can somebody please help me with this? Below is the program I used:
newFile = open('sana.log', 'r')
for line in newFile.readlines():
    if ('TestName' in line) or ('Totals' in line):
        data = line.split('\t')
        print data[0] + data[1]
theFile = open('thefile.txt', 'r')
FILE = theFile.readlines()
theFile.close()

printList = []
for line in FILE:
    if ('TestName' in line) or ('Totals' in line):
        # here you may want to do some splitting/concatenation/formatting to your string
        printList.append(line)

for item in printList:
    print item  # or write it to another file... or whatever
for line in open('filename.txt', 'r'):
    if line.startswith('TestName') or line.startswith('Totals'):
        fields = line.rsplit(None, 5)
        print '\t'.join(fields[:2] + fields[3:5])
rsplit(None, 5) splits on whitespace from the right at most five times, so the two-token test names in the data rows stay together in fields[0]; fields[:2] + fields[3:5] then keeps TestName, id, TPS and GRE while dropping eno and FNP.

DTML: How to prevent formatting loss

I have a DTML document which only contains:
<dtml-var public_blast_results>
and when I view it, it displays as:
YP_001336283 100.00 345 0 0 23 367 23 367 0.0 688
When I edit the DTML page, for example just adding a header like:
<h3>Header</h3>
<dtml-var public_blast_results>
The "public_blast_results" loeses its formatting and displayes as:
Header
YP_001336283 100.00 345 0 0 23 367 23 367 0.0 688
Is there a way to maintain the formatting? public_blast_results is a Python function which simply reads the contents of a file and returns it.
This has nothing to do with DTML; it's a basic issue with HTML, which collapses whitespace. If you want to preserve it, you need to wrap the content in <pre>.
<pre><dtml-var public_blast_results></pre>
