I have data that is set up as follows:
//Name_1 * *
>a xyzxyzyxyzyxzzxy
>b xyxyxyzxyyxzyxyz
>c xyzyxzyxyzyxyzxy
//Name_2
>a xyzxyzyxyzxzyxyx
>b zxyzxyzxyyzxyxzx
>c zxyzxyzxyxyzyzxy
//Name_3 * *
>a xyzxyzyxyzxzyxyz
>b zxyzxyzxzyyzyxyx
>c zxyzxyzxyxyzyzxy
...
Each //-line is an ID for the group of sequences that follows it, until the next //-line is reached.
I have been working on a program that reads the positions of the asterisks and prints the characters at those positions in the sequences.
To simplify things for myself, I have been working on a subset of my data containing only one group of sequences, e.g.:
//Name_1 * *
>a xyzxyzyxyzyxzzxy
>b xyxyxyzxyyxzyxyz
>c xyzyxzyxyzyxyzxy
My program does what I want on this subset.
import sys
import csv

datafile = open(sys.argv[1], 'r')
outfile = open(sys.argv[1] + "_FGT_Data", 'w')
csv_out = csv.writer(outfile, delimiter=',')
csv_out.writerow(['Locus', 'Individual', 'Nucleotide', 'Position'])

with datafile as searchfile:
    var_line = [line for line in searchfile if '*' in line]
    LocusID = [line[2:13].strip() for line in var_line]
    poslist = [i for line in var_line for i, x in enumerate(line) if x == '*']

datafile = open(sys.argv[1], 'r')
with datafile as getsnps:
    lines = [line for line in getsnps.readlines() if line.startswith('>')]
    for pos in poslist:
        for line in lines:
            snp = line[pos]
            individual = line[0:7]
            indistr = individual.strip()
            csv_out.writerow((LocusID[0], indistr, snp, str(pos)))

datafile.close()
outfile.close()
However, now I am trying to modify it to work on the full dataset, and I am having trouble finding a way to iterate over the data in the correct way.
I need to search through the file, and when a line containing '*' is reached, I need to do as in the above code for the sequences corresponding to that line, and then continue to the next line containing a '*'. Do I need to split up my data at the //-lines, or what is the best approach?
I have uploaded a sample of my data to dropbox:
Data_Sample.txt contains several groups, and is the kind of data I am trying to get the program to work on.
Data_One_Group.txt contains only one group, and is the data I have gotten the program to work on so far.
https://www.dropbox.com/sh/3j4i04s2rg6b63h/AADkWG3OcsutTiSsyTl8L2Vda?dl=0
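For illustration, here is a minimal Python 3 sketch (not my current code; read_groups is an illustrative name) of one way to split such a file into //-delimited groups, using itertools.groupby:

import itertools

def read_groups(path):
    # Yield (header, sequence_lines) for each //-delimited group, assuming
    # every group starts with a '//' header line as in the sample data.
    with open(path) as fh:
        header = None
        for is_header, chunk in itertools.groupby(fh, key=lambda l: l.startswith('//')):
            if is_header:
                header = list(chunk)[-1].rstrip()
            elif header is not None:
                yield header, [l.rstrip() for l in chunk if l.strip()]

for header, seqs in read_groups('Data_Sample.txt'):
    print(header, len(seqs))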
--------EDIT---------
I am trying to implement the suggestion by @Julien Spronck below.
However, I am having trouble processing the produced block. How would I be able to search through the block line by line? E.g., why does the below not work as intended? It just prints the asterisks and not the line itself.
block = ''
with open('onelocus.txt', 'r') as searchfile:
    for line in searchfile:
        if line.startswith('//'):
            #print line
            if block:
                for line in block:
                    if '*' in line:
                        print line
            block = line
        else:
            block += line
---------EDIT 2----------
I am getting closer. I understand now that I need to split the string into lines to be able to search through them. The below works on one group, but when I try to iterate over several, it prints the information for the first group only, though it does so as many times as there are groups. I have tried clearing LocusID and poslist before the next iteration, but this does not seem to be the solution.
block = ''
with datafile as searchfile:
    for line in searchfile:
        if line.startswith('//'):
            if block:
                var_line = [line for line in block.splitlines() if '*' in line]
                LocusID = [line[2:13].strip() for line in var_line]
                print LocusID
                poslist = [i for line in var_line for i, x in enumerate(line) if x == '*']
                print poslist
            block = line
        else:
            block += line
Can't you do something like:
block = ''
with open(filename, 'r') as fil:
    for line in fil:
        if line.startswith('//'):
            if block:
                do_something_with(block)
            block = line
        else:
            block += line
if block:
    do_something_with(block)
In this code, I just append the lines of the file to a variable block. Once I find a line that starts with //, I process the previous block and reinitialize the block for the next iteration.
The last two lines will take care of processing the last block, which would not be processed otherwise.
do_something_with(block) could be something like this:
def do_something_with(block):
    lines = block.splitlines()
    j = 0
    first_line = lines[j]
    while first_line.strip() == '':
        j += 1
        first_line = lines[j]
    pos = []
    position = first_line.find('*')
    while position != -1:
        pos.append(position)
        position = first_line.find('*', position + 1)
    for k, line in enumerate(lines):
        if k > j:
            for p in pos:
                print line[p],
            print
## prints
## z y
## x z
## z y
I have created a way to make this work with the data you provided.
You should run it with 2 file locations: the first should be your input.txt and the second your output.csv.
Explanation
First we create a dictionary with the locus as key and the sequences as values.
We iterate over this dictionary and get the * locations in the locus line, appending these to a list, indexes.
We iterate over the values belonging to this key and extract the sequence.
Per iteration we iterate over indexes so that we gather the SNPs.
Per iteration we append to our csv file.
We empty the indexes list so we can go to the next key.
Keep in mind
This method is highly dependent on the number of spaces inside your input.txt.
You should know that this will not be the fastest way to get it done, but it does get it done.
I hope this helped; if you have any questions, feel free to ask them, and if I have time, I will happily try to answer them.
Script
import sys
import csv

dic = {}
indexes = []

datafile = sys.argv[1]
outfile = sys.argv[2]

with open(datafile, 'r') as snp_file:
    lines = snp_file.readlines()

for i in range(0, len(lines)):
    if lines[i].startswith("//"):
        # start a fresh list for every locus; reusing one list object
        # would leave every key pointing at the same sequences
        sequences = []
        dic[lines[i].rstrip()] = sequences
    if lines[i].startswith(">"):
        sequences.append(lines[i].rstrip())

for key in dic:
    locus = key.split(" ")[0].replace("//", "")
    for i, x in enumerate(key):
        if x == '*':
            indexes.append(i - 11)  # 11 = column offset between header and sequence
    for sequence in dic[key]:
        seq = sequence.split(" ")[1]
        seq_id = sequence.split(" ")[0].replace(">", "")
        for z in indexes:
            position = z + 1
            nucleotide = seq[z]
            with open(outfile, 'a') as handle:
                csv_out = csv.writer(handle, delimiter=',')
                csv_out.writerow([locus, seq_id, position, nucleotide])
    del indexes[:]
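As a hedged aside on the spacing caveat above: rather than hard-coding the 11-character offset, the offset could be derived from wherever the sequence column actually starts. A tiny Python 3 sketch; seq_start_col is an illustrative parameter, not from the script above:

def asterisk_offsets(header_line, seq_start_col):
    # '*' positions in the header, relative to the column where the
    # sequence characters begin (seq_start_col is assumed to be known).
    return [i - seq_start_col for i, ch in enumerate(header_line) if ch == '*']

print(asterisk_offsets("//Locus_1  *  *", 11))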
input.txt
//Locus_1 * *
>Safr01 AATCCGTTTTAAACCAGNTCYAT
>Safr02 TTAATCCGTTTTAAACCAGNTCY
//Locus_2 * *
>Safr01 AATCCGTTTTAAACCAGNTCYAT
>Safr02 TTAATCCGTTTTAAACCAGNTCY
output.csv
Locus_1,Safr01,1,A
Locus_1,Safr01,22,A
Locus_1,Safr02,1,T
Locus_1,Safr02,22,C
Locus_2,Safr01,5,C
Locus_2,Safr01,19,T
Locus_2,Safr02,5,T
Locus_2,Safr02,19,G
This is how I ended up solving the problem:
def do_something_with(block):
    lines = block.splitlines()
    for line in lines:
        if '*' in line:
            hit = line
            LocusID = hit[2:13].strip()
            for i, x in enumerate(hit):
                if x == '*':
                    poslist.append(i)
    for pos in poslist:
        for line in lines:
            if line.startswith('>'):
                individual = line[0:7].strip()
                snp = line[pos]
                print LocusID, individual, snp, pos,
                csv_out.writerow((LocusID, individual, snp, pos))

block = ''          # initialize before the first group
poslist = list()
with datafile as searchfile:
    for line in searchfile:
        if line.startswith('//'):
            if block:
                do_something_with(block)
                poslist = list()
            block = line
        else:
            block += line
if block:
    do_something_with(block)
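For anyone adapting this, here is a self-contained Python 3 sketch of the same block-processing idea with the globals passed explicitly; process_block is an illustrative name, not from the code above:

import csv
import sys

def process_block(block, csv_out):
    # One //-group: find the header line with asterisks, then emit a row
    # per (asterisk position, '>' sequence line).
    lines = block.splitlines()
    header = next(l for l in lines if '*' in l)
    locus_id = header[2:13].strip()
    positions = [i for i, ch in enumerate(header) if ch == '*']
    for pos in positions:
        for line in lines:
            if line.startswith('>'):
                csv_out.writerow((locus_id, line[0:7].strip(), line[pos], pos))

with open(sys.argv[1]) as datafile, \
        open(sys.argv[1] + "_FGT_Data", 'w', newline='') as outfile:
    csv_out = csv.writer(outfile)
    csv_out.writerow(['Locus', 'Individual', 'Nucleotide', 'Position'])
    block = ''
    for line in datafile:
        if line.startswith('//'):
            if block:
                process_block(block, csv_out)
            block = line
        else:
            block += line
    if block:
        process_block(block, csv_out)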
Related
I want to find positions that overlap a pair of coordinates (a start and an end) and that are also on the same chromosome.
The file with the positions looks like this:
with open(file_path, 'r') as f:
    lines = [l for l in f if not l.startswith('#')]
print(lines)
['chr1\t36931696\t.\tT\t.\t100\tPASS\tDP=839\tGT:GQ:AD:DP:VF:NL:SB:NC\t0/.:100:830:839:0.0107:24:-100.0000:0.0071\n', 'chr2\t25457280\t.\tA\t.\t100\tPASS\tDP=1410\tGT:GQ:AD:DP:VF:NL:SB:NC\t0/0:19:1403:1410:0.0050:24:-100.0000:0.0014\n', '\n', '\n']
# I have limited the file to only two lines, but it normally has 100k lines
And the file with the intervals looks like this:
print(bedregions)
[('chr1', 36931694, 36931909, 'CSF3R.exon.17.line.1.chr1.36931697.36932509--tile--1.probe--coordinates(36931694-36931909)'), ('chr2', 25466989, 25467211, 'DNMT3A.CDS.17.line.57.merged--with.DNMT3A.CDS.16.li.probe--coordinates(25466989-25467211)')]
# I have limited this file as well to two tuples; it actually has 500 tuples
This is what I have been trying:
def roi2(file_path, bedregions):
    with open(file_path, 'r') as f:
        lines = [l for l in f if not l.startswith('#')]
    chr2position = {}
    for position, line in enumerate(lines):
        # If there is an empty line this will give an empty list,
        # and the following split will give an out-of-range error
        if (len(line)) == 1:
            break
        # Take the chr
        chr = line.strip().split()[0]
        if chr not in chr2position:
            chr2position[chr] = position
    filtered_lines = []
    for element in bedregions:
        ch, start, end, probe_name = element
        for lineindex in range(start + chr2position[chr], end + chr2position[chr]):
            filtered_lines.append(lines[lineindex])
    # This returns an error in the last line: IndexError: list index out of range
This should do what you want, considering the data structure you mentioned:
f = open(file_path, 'r')
lines = f.readlines()

chr2base2index = dict()
for index, line in enumerate(lines):
    if (len(line)) == 1:
        break
    if line[0] == '#':
        continue
    handle = line.strip().split()
    chrm, base = handle[0], int(handle[1])
    if chrm not in chr2base2index:
        chr2base2index[chrm] = dict()
    if base not in chr2base2index[chrm]:
        chr2base2index[chrm][base] = index

filtered_lines = []
for chrm, start, end, probe_name in bedregions:
    if chrm not in chr2base2index:
        print(f'Chromosome {chrm} not found')
        continue
    for base in range(start, end):
        index = chr2base2index[chrm].get(base, None)
        if index is not None:
            filtered_lines.append('\t'.join(lines[index].strip().split() + [probe_name]))
filtered_lines
['chr1\t36931696\t.\tT\t.\t100\tPASS\tDP=839\tGT:GQ:AD:DP:VF:NL:SB:NC\t0/.:100:830:839:0.0107:24:-100.0000:0.0071\tCSF3R.exon.17.line.1.chr1.36931697.36932509--tile--1.probe--coordinates(36931694-36931909)',
'chr1\t36931697\t.\tT\t.\t100\tPASS\tDP=832\tGT:GQ:AD:DP:VF:NL:SB:NC\t0/0:15:829:832:0.0036:24:-100.0000:0.0154\tCSF3R.exon.17.line.1.chr1.36931697.36932509--tile--1.probe--coordinates(36931694-36931909)',
'chr1\t36931698\t.\tT\t.\t100\tPASS\tDP=837\tGT:GQ:AD:DP:VF:NL:SB:NC\t0/0:36:836:837:0.0012:24:-100.0000:0.0095\tCSF3R.exon.17.line.1.chr1.36931697.36932509--tile--1.probe--coordinates(36931694-36931909)',
'chr1\t36931699\t.\tA\t.\t100\tPASS\tDP=836\tGT:GQ:AD:DP:VF:NL:SB:NC\t0/0:36:835:836:0.0012:24:-100.0000:0.0107\tCSF3R.exon.17.line.1.chr1.36931697.36932509--tile--1.probe--coordinates(36931694-36931909)',
'chr1\t36931700\t.\tC\t.\t100\tPASS\tDP=818\tGT:GQ:AD:DP:VF:NL:SB:NC\t0/0:14:814:818:0.0049:24:-100.0000:0.0320\tCSF3R.exon.17.line.1.chr1.36931697.36932509--tile--1.probe--coordinates(36931694-36931909)',
'chr1\t36931701\t.\tA\t.\t100\tPASS\tDP=841\tGT:GQ:AD:DP:VF:NL:SB:NC\t0/0:20:838:841:0.0036:24:-100.0000:0.0047\tCSF3R.exon.17.line.1.chr1.36931697.36932509--tile--1.probe--coordinates(36931694-36931909)',
'chr1\t36931702\t.\tA\t.\t100\tPASS\tDP=825\tGT:GQ:AD:DP:VF:NL:SB:NC\t0/0:19:822:825:0.0036:24:-100.0000:0.0237\tCSF3R.exon.17.line.1.chr1.36931697.36932509--tile--1.probe--coordinates(36931694-36931909)',
'chr1\t36931703\t.\tT\t.\t100\tPASS\tDP=833\tGT:GQ:AD:DP:VF:NL:SB:NC\t0/0:26:832:833:0.0012:24:-100.0000:0.0142\tCSF3R.exon.17.line.1.chr1.36931697.36932509--tile--1.probe--coordinates(36931694-36931909)',
'chr1\t36931704\t.\tA\t.\t100\tPASS\tDP=833\tGT:GQ:AD:DP:VF:NL:SB:NC\t0/0:11:829:833:0.0048:24:-100.0000:0.0142\tCSF3R.exon.17.line.1.chr1.36931697.36932509--tile--1.probe--coordinates(36931694-36931909)']
This works with as many chromosomes as are found in the interval file. If a chromosome is given in the interval file but not in the searched file, this would give you an error, so I have added a line that solves that issue and also informs you when it happens.
Results are given in a list, one line per element, but this can be changed in the last line of the code.
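As a hedged aside: for much larger position files, building one dict entry per base in every interval can be avoided by keeping a sorted list of positions per chromosome and querying it with the bisect module. A small Python 3 sketch; the names are illustrative, not from the answer above:

import bisect

def overlapping_indices(sorted_positions, start, end):
    # Indices into sorted_positions whose values fall in [start, end).
    lo = bisect.bisect_left(sorted_positions, start)
    hi = bisect.bisect_left(sorted_positions, end)
    return range(lo, hi)

positions = [36931696, 36931697, 36931800]  # sorted bases for one chromosome
print(list(overlapping_indices(positions, 36931694, 36931909)))  # [0, 1, 2]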
I have to input a text file that contains comma-separated and line-separated data in the following format:
A002,R051,02-00-00,05-21-11,00:00:00,REGULAR,003169391,001097585,05-21-11,04:00:00,REGULAR,003169415,001097588,05-21-11,08:00:00,REGULAR,003169431,001097607
Multiple sets of such data are present in the text file.
I need to print all of this on new lines, with the condition that
the first 3 elements of every set are followed by 5 parameters on each new line. So the solution for the above set would be:
A002,R051,02-00-00,05-21-11,00:00:00,REGULAR,003169391,001097585
A002,R051,02-00-00,05-21-11,04:00:00,REGULAR,003169415,001097588
A002,R051,02-00-00,05-21-11,08:00:00,REGULAR,003169431,001097607
My function to achieve it is given below:
def fix_turnstile_data(filenames):
    for name in filenames:
        f_in = open(name, 'r')
        reader_in = csv.reader(f_in, delimiter=',')
        f_out = open('updated_' + name, 'w')
        writer_out = csv.writer(f_out, delimiter=',')
        array = []
        for line in reader_in:
            i = 0
            j = -1
            while i < len(line):
                if i % 8 == 0:
                    i += 2
                    j += 1
                    del array[:]
                    array.append(line[0])
                    array.append(line[1])
                    array.append(line[2])
                elif (i + 1) % 8 == 0:
                    array.append(line[i - 3 * j])
                    writer_out.writerow(array)
                else:
                    array.append(line[i - 3 * j])
                i += 1
        f_in.close()
        f_out.close()
The output is wrong, and there is a gap of 3 lines at the end of those lines whose length is 8. I suspect the writer_out.writerow(array) call is to blame.
Can anyone please help me out?
Hmm, the logic you use ends up being fairly confusing. I'd do it more along these lines (this replaces your for loop), which is also more Pythonic:
for line in reader_in:
    header = line[:3]
    for i in xrange(3, len(line), 5):
        writer_out.writerow(header + line[i:i+5])
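A self-contained Python 3 sketch of the same idea (xrange became range in Python 3); the file handling mirrors the question's, and the names are illustrative:

import csv

def fix_turnstile_data(filenames):
    for name in filenames:
        with open(name, newline='') as f_in, \
                open('updated_' + name, 'w', newline='') as f_out:
            reader_in = csv.reader(f_in)
            writer_out = csv.writer(f_out)
            for line in reader_in:
                header = line[:3]
                for i in range(3, len(line), 5):
                    writer_out.writerow(header + line[i:i + 5])

Passing newline='' to open() also avoids the stray blank lines the csv module can otherwise produce on Windows, which may be related to the gaps you are seeing.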
Here is my first code. Using this code I extracted a list of 6800 random elements and saved my results as a text file. (The file that this code reads from has over 10,000 lines, so every time I run it, I get a new set of random elements.)
import random

with open('filename.txt') as fin:
    lines = fin.readlines()
random.shuffle(lines)
for i, line in enumerate(lines):
    if i >= 0 and i < 6800:
        print(line, end='')
Here is my second code. Using the saved text file from the previous step, I use this code to compare the file to another text file. My result, as you can see, is the count, which always varies, e.g. 2390 or 4325, etc.
import csv

with open("SavedTextFile_RandomElements.txt") as f:
    dict1 = {}
    r = csv.reader(f, delimiter="\t")
    for row in r:
        a, b, v = row
        dict1.setdefault((a, b), []).append(v)
    #for key in dict1:
        #print(key[0])
        #print(key[1])
        #print(d[key][0]])

with open("filename2.txt") as f:
    dict2 = {}
    r = csv.reader(f, delimiter="\t")
    for row in r:
        a, b, v = row
        dict2.setdefault((a, b), []).append(v)
    #for key in dict2:
        #print(key[0])

count = 0
for key1 in dict1:
    for key2 in dict2:
        if (key1[0] == key2[0]) and abs((float(key1[1].split(" ")[0])) - (float(key2[1].split(" ")[0]))) < 10000:
            count += 1
print(count)
I decided to combine the two, because I want to skip the extracting and saving process and have the first code feed straight into the second, with the random elements read automatically. Here are the two combined:
import csv
import random

with open('filename.txt') as fin:
    lines = fin.readlines()
    random.shuffle(lines)
    str_o = " "
    for i, line in enumerate(lines):
        if i >= 0 and i < 6800:
            str_o += line
    r = str_o
    dict1 = {}
    r = csv.reader(fin, delimiter="\t")
    for row in r:
        a, b, v = row
        dict1.setdefault((a, b), []).append(v)

with open("filename2.txt") as f:
    dict2 = {}
    r = csv.reader(f, delimiter="\t")
    for row in r:
        a, b, v = row
        dict2.setdefault((a, b), []).append(v)

count = 0
for key1 in dict1:
    for key2 in dict2:
        if (key1[0] == key2[0]) and abs((float(key1[1].split(" ")[0])) - (float(key2[1].split(" ")[0]))) < 1000:
            count += 1
print(count)
However, now when I run the code, I always get a count of 0. Even if I change (less than one thousand):
< 1000:
to, for example (less than ten thousand):
< 10000:
I still receive a count of zero. I should only receive a count of zero when I write, of course, less than zero:
< 0:
But no matter what number I put in, I always get zero. I went wrong somewhere; can you guys help me figure out where that was? I am happy to clarify anything.
[EDIT]
Both of my files are in the following format:
1 10045 0.120559958
1 157465 0.590642951
1 222471 0.947959795
1 222473 0.083341617
1 222541 0.054014337
1 222588 0.060296547
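A hedged observation on the sample rows above: if those columns are separated by spaces rather than real tabs, csv.reader with delimiter="\t" will see the whole line as one column. str.split() with no argument handles either case, as in this tiny Python 3 sketch:

line = "1 10045 0.120559958"
a, b, v = line.split()  # splits on any run of whitespace, tabs or spaces
print((a, b), v)        # ('1', '10045') 0.120559958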
You wanted to combine the two codes, so you really don't need to read from SavedTextFile_RandomElements.txt, right?
In that case you need to read from somewhere, and I think you intended to store those lines in the variable 'r'. But you overwrote 'r' with this:
r = csv.reader(fin, delimiter="\t")
BTW, was there a typo two lines above that line? You didn't have any file-open statement for 'fin'. The combined code above must not have been able to run properly (an exception would be thrown).
To fix, simply remove the csv.reader line, like so (and reduce the indentation starting at dict1 = {}):
r = str_o
dict1 = {}
for row in r:
    a, b, v = row.split()
    dict1.setdefault((a, b), []).append(v)
EDIT: another issue, causing a ValueError exception
I missed this earlier. You are concatenating the whole file contents into a single string, but you later loop over r to read each line and break it into a, b, v.
The unpacking error comes from this loop, because you are looping over a single string, meaning you get each character, instead of each line, per loop.
To fix this, you just need a single list 'r'; no need for a string:
r = []
for i, line in enumerate(lines):
    if i >= 0 and i < 6800:
        r.append(line)

dict1 = {}
for row in r:
    a, b, v = row.split()
    dict1.setdefault((a, b), []).append(v)
EDIT: reading 'row' (a line) into variables
Since the input file is split line by line into strings separated by whitespace, you need to split the 'row' variable:
a, b, v = row.split()
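Putting those fixes together, a minimal Python 3 sketch of the combined flow (the file name and the 6800 sample size are from the question):

import random

with open('filename.txt') as fin:
    lines = fin.readlines()
random.shuffle(lines)

dict1 = {}
for row in lines[:6800]:       # the first 6800 shuffled lines
    a, b, v = row.split()      # whitespace-separated columns
    dict1.setdefault((a, b), []).append(v)
print(len(dict1))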
Can't tell where exactly you went wrong. In your combined code:
You create str_o and assign it to r, but you never use it.
A few lines later you assign a csv.reader to r; it is hard to tell from your indentation whether this is still within the with block.
You want to be doing something like this (I didn't use the csv module):
import collections, random

d1 = collections.defaultdict(list)
d2 = collections.defaultdict(list)

with open('filename.txt') as fin:
    lines = fin.readlines()
lines = random.sample(lines, 6800)
for line in lines:
    line = line.strip()
    try:
        a, b, v = line.split('\t')
        d1[(a, b)].append(v)
    except ValueError as e:
        print 'Error:' + line

with open("filename2.txt") as f:
    for line in f:
        line = line.strip()
        try:
            a, b, v = line.split('\t')
            d2[(a, b)].append(v)
        except ValueError as e:
            print 'Error:' + line
I have data that looks like this:
Name Nm1 * *
Ind1 AACTCAGCTCACG
Ind2 GTCATCGCTACGA
Ind3 CTTCAAACTGACT
I need to grab the letter at each position marked by an asterisk in the "Name" line and print it, along with the index of the asterisk.
So the result would be:
Ind1, 12, T
Ind2, 12, A
Ind3, 12, C
Ind1, 17, T
Ind2, 17, T
Ind3, 17, T
I'm trying to use enumerate() to retrieve the positions of the asterisks; my thought was that I could then use these indexes to grab the letters.
import sys
import csv

input = open(sys.argv[1], 'r')
Output = open(sys.argv[1] + "_processed", 'w')

indlist = ["Individual_1,", "Individual_2,", "Individual_3,"]

with input as searchfile:
    for line in searchfile:
        if '*' in line:
            LocusID = line[2:13]
            LocusIDstr = LocusID.strip()
            hit = line
            for i, x in enumerate(hit):
                if x == '*':
                    position = i
                    print position

for item in indlist:
    Output.write("%s%s%s\n" % (item, LocusIDstr, position))
Output.close()
If enumerate() outputs e.g.
12
17
how do I access each index separately?
Also, when I print the position, I get the numbers I want. When I write to the file, however, only the last position is written. Why is this?
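A minimal sketch of the usual fix, assuming a header line like the one above: collect every asterisk position into a list instead of overwriting a single position variable, so each index stays accessible and each one gets written, not just the last:

hit = "//Name_1    *    *"  # a header line, as in the loop above
positions = [i for i, x in enumerate(hit) if x == '*']
for position in positions:
    print(position)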
----------------EDIT-----------------
After the advice below, I have split my code up to make it a bit simpler (for me) to understand.
import sys
import csv

input = open(sys.argv[1], 'r')
Output = open(sys.argv[1] + "_FGT_Data", 'w')

indlist = ["Individual_1,", "Individual_2,", "Individual_3,"]

with input as searchfile:
    for line in searchfile:
        if '*' in line:
            LocusID = line[2:13]
            LocusIDstr = LocusID.strip()
            print LocusIDstr
            hit = line
            for i, x in enumerate(hit):
                if x == '*':
                    position = i
                    #print position

input = open(sys.argv[1], 'r')
with input as searchfile:
    for line in searchfile:
        if line[0] == ">":
            print line[position], position

with Output as writefile:
    for item in indlist:
        writefile.write("%s%s%s\n" % (item, LocusIDstr, position))
Output.close()
I still do not have a solution for how to access each of the indexes, though.
Edit
Changed to work with the file you gave me in your comment. If you have made this file yourself, consider working with columns next time.
import sys

read_file = sys.argv[1]
write_file = "%s_processed.%s" % (sys.argv[1].split('.')[0], sys.argv[1].split('.')[1])

indexes = []
lines_to_write = []

with open(read_file, 'r') as getindex:
    first_line = getindex.readline()
    for i, x in enumerate(first_line):
        if x == '*':
            indexes.append(i - 11)

with open(read_file, 'r') as getsnps:
    for line in getsnps:
        if line.startswith(">"):
            sequence = line.split(" ")[1]
            for z in indexes:
                string_to_append = "%s\t%s\t%s" % (line.split(" ")[0], z + 1, sequence[z])
                lines_to_write.append(string_to_append)

with open(write_file, "w") as write_file:
    write_file.write("\n".join(lines_to_write))
I want to pull all data from a text file, from a specified line number until the end of the file. This is how I've tried:
def extract_values(f):
    line_offset = []
    offset = 0
    last_line_of_heading = False
    if not last_line_of_heading:
        for line in f:
            line_offset.append(offset)
            offset += len(line)
            if whatever_condition:
                last_line_of_heading = True
    f.seek(0)
    # non-functioning pseudocode follows
    data = f[offset:]  # read from current offset to end of file into this variable
There is actually a blank line between the header and the data I want, so ideally I could skip this also.
Do you know the line number in advance? If so:
def extract_values(f):
    line_number = ...  # something
    data = f.readlines()[line_number:]
If not, and you need to determine the line number based on the content of the file itself:
def extract_values(f):
    lines = f.readlines()
    for line_number, line in enumerate(lines):
        if some_condition(line):
            data = lines[line_number:]
            break
This will not be ideal if your files are enormous (since the lines of the file are loaded into memory); in that case, you might want to do it in two passes, only storing the file data on the second pass.
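A sketch of that two-pass idea, reusing the some_condition placeholder from above; itertools.islice skips to the target line without storing the whole file:

import itertools

def extract_values_two_pass(path):
    # First pass: find the target line number without storing the lines.
    with open(path) as f:
        for line_number, line in enumerate(f):
            if some_condition(line):  # placeholder, as above
                break
    # Second pass: skip straight to that line and keep the rest.
    with open(path) as f:
        return list(itertools.islice(f, line_number, None))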
Your if clause is in the wrong position:
for line in f:
    if not last_line_of_heading:
Consider this code:
def extract_values(f):
    rows = []
    last_line_of_heading = False
    for line in f:
        if last_line_of_heading:
            rows.append(line)
        elif whatever_condition:
            last_line_of_heading = True
    # if you want a string instead of an array of lines
    # (the lines already end with '\n', so join on the empty string):
    data = "".join(rows)
You can use enumerate:
f = open('your_file')
for i, x in enumerate(f):
    if i >= your_line:
        # do your stuff
Here, i will store the line number, starting from 0, and x will contain the line.
Using a list comprehension:
[x for i, x in enumerate(f) if i >= your_line]
will give you a list of the lines after the specified line.
Using a dictionary comprehension:
{i: x for i, x in enumerate(f) if i >= your_line}
this will give you the line number as key and the line as value, from the specified line number on.
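A quick usage sketch of the list-comprehension form (your_line is a 0-based line number, as above):

your_line = 2
with open('your_file') as f:
    tail = [x for i, x in enumerate(f) if i >= your_line]
print(''.join(tail))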
Try this small Python program, LastLines.py:
import sys

def main():
    firstLine = int(sys.argv[1])
    lines = sys.stdin.read().splitlines()[firstLine:]
    for curLine in lines:
        print curLine

if __name__ == "__main__":
    main()
Example input, test1.txt:
a
b
c
d
Example usage:
python LastLines.py 2 < test1.txt
Example output:
c
d
This program assumes that the first line in a file is the 0th line.