Trouble parsing FASTA files in Python

dict = {}
tag = ""
with open('/storage/emulated/0/Download/sequence.fasta.txt', 'r') as sequence:
    seq = sequence.readlines()
    for line in seq:
        if line.startswith(">"):
            tag = line.replace("\n", "")
        else:
            seq = "".join(seq[1:])
            dict[tag] = seq.replace("\n", "")
print(dict)
Background for those who aren't familiar with FASTA files: this format contains one or more DNA, RNA, or protein sequences. Each sequence has a one-line descriptive tag that starts with ">", followed by the sequence itself on the following lines (e.g., for DNA it would be long runs of A, T, G, and C). It also comes with many unnecessary line breaks. So far this code works when I only have one sequence per file, but it seems to ignore the if condition when there are multiple. It should add each new tag: sequence pair to the dictionary every time it notices a ">", but instead it only runs once, puts the first description as the key in the dictionary, and joins the rest of the file regardless of ">" characters to use as the value. How can I get this loop to notice a new ">" after the first occurrence?
I am purposefully steering away from the biopython module.

UPDATE: the code below now works for multiple-line sequences.
The following code works fine for me:
import re
from collections import defaultdict

sequences = defaultdict(str)
with open('fasta.txt') as f:
    lines = f.readlines()

current_tag = None
for line in lines:
    m = re.match('^>(.+)', line)
    if m:
        current_tag = m.group(1)
    else:
        sequences[current_tag] += line.strip()

for k, v in sequences.items():
    print(f"{k}: {v}")
It uses a number of features you may be unfamiliar with, such as regular expressions (which are probably very useful in bioinformatics) and f-string formatting. If anything confuses you, ask away. One thing I should add is that you don't want to name a variable dict, because that shadows the built-in dict type that Python defines at startup. I chose sequences, which doesn't do this and is more informative.
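If you would rather avoid regular expressions entirely, here is a minimal sketch of the same idea using only startswith (an alternative illustration, not part of the original answer, assuming the same fasta.txt layout):

sequences = {}
current_tag = None
with open('fasta.txt') as f:
    for line in f:                    # iterate the file line by line
        line = line.strip()
        if not line:                  # skip blank lines
            continue
        if line.startswith(">"):      # a new header starts a new record
            current_tag = line[1:]
            sequences[current_tag] = ""
        else:                         # otherwise append to the current record
            sequences[current_tag] += line

print(sequences)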
For reference, this is the content of the example FASTA file fasta.txt I used in this instance:
>seq0
FQTWEEFSRAAEKLYLADPMKVRVVLKYRHVDGNLCIKVTDDLVCLVYRTDQAQDVKKIEKF
>seq1
KYRTWEEFTRAAEKLYQADPMKVRVVLKYRHCDGNLCIKVTDDVVCLLYRTDQAQDVKKIEKFHSQLMRLME LKVTDNKECLKFKTDQAQEAKKMEKLNNIFFTLM
>seq2
EEYQTWEEFARAAEKLYLTDPMKVRVVLKYRHCDGNLCMKVTDDAVCLQYKTDQAQDVKKVEKLHGK
>seq3
MYQVWEEFSRAVEKLYLTDPMKVRVVLKYRHCDGNLCIKVTDNSVCLQYKTDQAQDVK
>seq4
EEFSRAVEKLYLTDPMKVRVVLKYRHCDGNLCIKVTDNSVVSYEMRLFGVQKDNFALEHSLL
>seq5
SWEEFAKAAEVLYLEDPMKCRMCTKYRHVDHKLVVKLTDNHTVLKYVTDMAQDVKKIEKLTTLLMR
>seq6
FTNWEEFAKAAERLHSANPEKCRFVTKYNHTKGELVLKLTDDVVCLQYSTNQLQDVKKLEKLSSTLLRSI
>seq7
SWEEFVERSVQLFRGDPNATRYVMKYRHCEGKLVLKVTDDRECLKFKTDQAQDAKKMEKLNNIFF
>seq8
SWDEFVDRSVQLFRADPESTRYVMKYRHCDGKLVLKVTDNKECLKFKTDQAQEAKKMEKLNNIFFTLM
>seq9
KNWEDFEIAAENMYMANPQNCRYTMKYVHSKGHILLKMSDNVKCVQYRAENMPDLKK
>seq10
FDSWDEFVSKSVELFRNHPDTTRYVVKYRHCEGKLVLKVTDNHECLKFKTDQAQDAKKMEK

Related

How to get an unknown substring between two known substrings, within a giant string/file

I'm trying to get all the substrings under a "customLabel" tag, for example "Month" inside of ...,"customLabel":"Month"},"schema":"metric...
Unusual issue: this is a 1,071,552-character ndjson file on a single line ("for line in file:" is pointless, since there's only one).
The best I found was this:
How to find a substring of text with a known starting point but unknown ending point in python
but if I use this, the result obviously doesn't stop (at Month) and keeps writing the whole remainder of the file, same as if I used partition()[2].
Just know that Month is only an example; customLabel has about 300 variants and they are not listed anywhere (I'm actually doing this to list them...).
To give some details, here's my script so far:
with open("file.ndjson","rt", encoding='utf-8') as ndjson:
filedata = ndjson.read()
x="customLabel"
count=filedata.count(x)
for i in range (count):
if filedata.find(x)>0:
print("Found "+str(i+1))
So right now it properly tells me how many occurrences of customLabel there are. I'd like to instead get the substring that comes after customLabel":" (Month in the example) and put them all in a list, so I can locate them much more easily and use replace() for translations later on.
I'd guess regexes are the solution, but I'm pretty new to them, so I'll post this question while I learn about them...
If you want to search for all (even nested) customLabel values like this:
{"customLabel":"Month" , "otherJson" : {"customLabel" : 23525235}}
you can use RegEx patterns with the re module
import re

label_values = []
regex_pattern = r"\"customLabel\"[ ]?:[ ]?([0-9a-zA-Z\"]+)"

with open("file.ndjson", "rt", encoding="utf-8") as ndjson:
    for line in ndjson:
        values = re.findall(regex_pattern, line)
        label_values.extend(values)

print(label_values)  # ['"Month"', '23525235']

# If you don't want the items to have quotations
label_values = [i.replace('"', "") for i in label_values]
print(label_values)  # ['Month', '23525235']
Note: If you're only talking about ndjson files and not nested searching, then it'd be better to use the json module to parse the lines and then easily get the value of your specific key which is customLabel.
import json

label = "customLabel"
label_values = []

with open("file.ndjson", "rt", encoding="utf-8") as ndjson:
    for line in ndjson:
        line_json = json.loads(line)
        if line_json.get(label) is not None:
            label_values.append(line_json.get(label))

print(label_values)  # ['Month']

Speed up fuzzy regex search Python

I am looking for suggestions on how to speed up the process described below, which involves a fuzzy regex search.
What I am trying to do
I am fuzzy searching for keywords, stored in a dictionary d (example just below; the value is always a list of two, and I need to keep track of which one was found, if any), in a set of strings stored in a file testFile (one string per line, ~150 characters each), allowing 4 mismatches max.
d = {"kw1": ["AGCTCGATGTATGGGTATATGATCTTGAC", "GTCAAGATCATATACCCATACATCGAGCT"], "kw2": ["GGTCAGGTCAGTACGGTACGATCGATTTCGA", "TCGAAATCGATCGTACCGTACTGACCTGACC"]} #simplified to just two keywords
How I do it
For this, I first compile my regexes and store them in a dictionary compd. I then read the file line by line and search for each keyword in each line (string). I cannot stop the search once a keyword has been found, as multiple keywords may be found in one string/line, but I can skip the second element of a keyword's list if the first is found.
Here is how I am doing it:
#!/usr/bin/env python3
import argparse
import regex

parser = argparse.ArgumentParser()
parser.add_argument('file', help='file with strings')
args = parser.parse_args()

#dictionary with keywords
d = {"kw1": ["AGCTCGATGTATGGGTATATGATCTTGAC", "GTCAAGATCATATACCCATACATCGAGCT"],
     "kw2": ["GGTCAGGTCAGTACGGTACGATCGATTTCGA", "TCGAAATCGATCGTACCGTACTGACCTGACC"]}

#Compile regex (4 mismatches max)
compd = {"kw1": [], "kw2": []}  #to store regex
for k, v in d.items():  #for each keyword
    compd[k].append(regex.compile(r'(?b)(' + v[0] + '){s<=4}'))  #compile 1st elt of list
    compd[k].append(regex.compile(r'(?b)(' + v[1] + '){s<=4}'))  #compile second

#Search keywords
with open(args.file) as f:  #open file with strings
    line = f.readline()  #first line/string
    while line:  #go through each line
        for k, v in compd.items():  #for each keyword (ID, regex)
            for val in [v[0], v[1]]:  #for each elt of list
                found = val.search(line)  #regex search
                if found != None:  #if match
                    print("Keyword " + k + " found as " + found[0])  #print match
                    if val == v[0]:  #if 1st elt of list
                        break  #don't search 2nd
        line = f.readline()  #next line
I have tested the script using the testFile:
AGCTCGATGTATGGGTATATGATCTTGACAGAGAGA
GTCGTAGCTCGTATTCGATGGCTATTCGCTATATGCTAGCTAT
and get the following expected result:
Keyword kw1 found as AGCTCGATGTATGGGTATATGATCTTGAC
Efficiency
With the current script, it takes about 3-4 minutes to process 500k strings and six keywords. There will be cases where I have 2 million strings, which should take 12-16 minutes, and I would like to reduce that if possible.
Having a separate regex for each keyword requires running a match against each regex separately. Instead, combine all the regexes into one using the keywords as names for named groups:
patterns = []
for k, v in d.items():  #for each keyword
    patterns.append(f'(?P<{k}>{v[0]}|{v[1]})')
pattern = '(?b)(?:' + '|'.join(patterns) + '){s<=4}'
reSeqs = regex.compile(pattern)
With this, the program can check for which named group was matched in order to get the keyword. You can replace the loop over all the regexes in compd with loops over matches in a line (in case there is more than 1 match) and dictionary items in each match (which could be implemented as a comprehension):
for matched in reSeqs.finditer(line):
    try:
        keyword = [kw for kw, val in matched.groupdict().items() if val][0]
        # perform further processing of keyword
    except:  # no match
        pass
(Note that you don't need to call readline on a file object to loop over lines; instead, you can loop over the file object directly: for line in f:.)
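Putting these pieces together, a rough sketch of the revised search loop (assuming the combined reSeqs pattern and args.file from above) might look like this:

with open(args.file) as f:
    for line in f:                               # iterate the file object directly
        for matched in reSeqs.finditer(line):    # every fuzzy hit on this line
            hits = [kw for kw, val in matched.groupdict().items() if val]
            if hits:
                print("Keyword " + hits[0] + " found as " + matched.group())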
If you need further optimizations, have memory to burn and can sacrifice a little readability, also test whether replacing the loop over lines with a comprehension over matches is more performant:
with open(args.file) as f:
    contents = f.read()  # slurp entire file

matches = [{
    val: kw for kw, val in found.groupdict().items() if val
} for found in reSeqs.finditer(contents)]
This solution doesn't distinguish between repetitions of a given sequence; in particular, repetitions on a single line are lost. You could merge entries having the same keys into lists, or, if repetitions should be treated as a single instance, you can merge the dictionaries as-is. If you want to distinguish separate instances of a matched sequence, include file position information in the keys:
matches = [{
    (val, found.span()): kw for kw, val in found.groupdict().items() if val
} for found in reSeqs.finditer(contents)]
To merge:
results = {}
for match in matches:
    results.update(match)
# or:
results = {k: v for d in matches for k, v in d.items()}
If memory is an issue, another option would be to break up the file into chunks ending on line breaks (either line-based, or by reading blocks and separating partial lines at block ends) and use finditer on each chunk:
# implementation of `chunks` left as an exercise
def file_chunks(path, size=2**12):
    with open(path) as file:
        yield from chunks(file, size=size)

results = {}
for block in file_chunks(args.file, size=2**20):
    for found in reSeqs.finditer(block):
        results.update({
            (val, found.span()): kw for kw, val in found.groupdict().items() if val
        })

How can I effectively pull out human readable strings/terms from code automatically?

I'm trying to determine the most common words, or "terms" (I think) as I iterate over many different files.
Example - For this line of code found in a file:
for w in sorted(strings, key=strings.get, reverse=True):
I'd want these unique strings/terms returned to my dictionary as keys:
for
w
in
sorted
strings
key
strings
get
reverse
True
However, I want this code to be tunable so that I can return strings with periods or other characters between them as well, because I just don't know what makes sense yet until I run the script and count up the "terms" a few times:
strings.get
How can I approach this problem? It would help to understand how I can do this one line at a time so I can loop it as I read my file's lines in. I've got the basic logic down but I'm currently just doing the tallying by unique line instead of "term":
strings = dict()
fname = '/tmp/bigfile.txt'

with open(fname, "r") as f:
    for line in f:
        if line in strings:
            strings[line] += 1
        else:
            strings[line] = 1

for w in sorted(strings, key=strings.get, reverse=True):
    print str(w).rstrip() + " : " + str(strings[w])
(Yes I used code from my little snippet here as the example at the top.)
If the only Python token you want to keep together is the object.attr construct, then all the tokens you are interested in would fit the regular expression
\w+\.?\w*
which basically means "one or more word characters (including _), optionally followed by a . and then some more word characters".
Note that this would also match number literals like 42 or 7.6, but those are easy enough to filter out afterwards.
Then you can use collections.Counter to do the actual counting for you:
import collections
import re

pattern = re.compile(r"\w+\.?\w*")

#here I'm using the source file for `collections` as the test example
with open(collections.__file__, "r") as f:
    tokens = collections.Counter(t.group() for t in pattern.finditer(f.read()))

for token, count in tokens.most_common(5):  #show only the top 5
    print(token, count)
Running Python version 3.6.0a1, the output is this:
self 226
def 173
return 170
self.data 129
if 102
which makes sense for the collections module, since it is full of classes that use self and define methods. It also shows that it captures self.data, which fits the construct you are interested in.
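Following up on the note about number literals, here is a small sketch of filtering them out of the Counter afterwards (the float-based number test is my own assumption, not part of the original answer):

def is_number_literal(token):
    # tokens like 42 or 7.6 convert to float; object.attr tokens do not
    try:
        float(token)
        return True
    except ValueError:
        return False

tokens = collections.Counter({t: c for t, c in tokens.items()
                              if not is_number_literal(t)})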

Rosalind Profile and Consensus: Writing long strings to one line in Python (Formatting)

I'm trying to tackle a problem on Rosalind where, given a FASTA file of at most 10 sequences of at most 1 kb each, I need to give the consensus sequence and profile (how many of each base the sequences have at each position). In terms of formatting my response, the code I have works for small sequences (verified).
However, I have issues in formatting my response when it comes to large sequences.
What I expect to return, regardless of length, is:
consensus sequence
A: one-line string of numbers without commas
C: one-line string (same)
G: one-line string (same)
T: one-line string (same)
all aligned with each other, each on its own line, or at least some formatting that lets me carry this layout forward as a unit to keep the alignment intact.
But when I run my code for a large sequence, each string below the consensus sequence gets broken up by newlines, presumably because the string itself is too long. I've been struggling to think of ways to circumvent the issue, but my searches have been fruitless. I'm thinking about some iterative writing routine that writes the expected output in chunks. Any help would be greatly appreciated. I have attached my code below in its entirety for the sake of completeness, with block comments as needed.
def cons(file):
    #returns consensus sequence and profile of a FASTA file
    import os
    path = os.path.abspath(os.path.expanduser(file))
    with open(path,"r") as D:
        F=D.readlines()
    #initialize list of sequences, list of all strings, and a temporary storage
    #list, respectively
    SEQS=[]
    mystrings=[]
    temp_seq=[]
    #get a list of strings from the file, stripping the newline character
    for x in F:
        mystrings.append(x.strip("\n"))
    #if the string in question is a nucleotide sequence (without ">")
    #i'll store that string into a temporary variable until I run into a string
    #with a ">", in which case I'll join all the strings in my temporary
    #sequence list and append to my list of sequences SEQS
    for i in range(1,len(mystrings)):
        if ">" not in mystrings[i]:
            temp_seq.append(mystrings[i])
        else:
            SEQS.append(("").join(temp_seq))
            temp_seq=[]
    SEQS.append(("").join(temp_seq))
    #set up list of nucleotide counts for A,C,G and T, in that order
    ACGT= [[0 for i in range(0,len(SEQS[0]))],
           [0 for i in range(0,len(SEQS[0]))],
           [0 for i in range(0,len(SEQS[0]))],
           [0 for i in range(0,len(SEQS[0]))]]
    #assumed to be equal length sequences. Counting amount of shared nucleotides
    #in each column
    for i in range(0,len(SEQS[0])-1):
        for j in range(0, len(SEQS)):
            if SEQS[j][i]=="A":
                ACGT[0][i]+=1
            elif SEQS[j][i]=="C":
                ACGT[1][i]+=1
            elif SEQS[j][i]=="G":
                ACGT[2][i]+=1
            elif SEQS[j][i]=="T":
                ACGT[3][i]+=1
    ancstr=""
    TR_ACGT=list(zip(*ACGT))
    acgt=["A: ","C: ","G: ","T: "]
    for i in range(0,len(TR_ACGT)-1):
        comp=TR_ACGT[i]
        if comp.index(max(comp))==0:
            ancstr+=("A")
        elif comp.index(max(comp))==1:
            ancstr+=("C")
        elif comp.index(max(comp))==2:
            ancstr+=("G")
        elif comp.index(max(comp))==3:
            ancstr+=("T")
    '''
    writing to file... trying to get it to write as
    consensus sequence
    A: blah(1line)
    C: blah(1line)
    G: blah(1line)
    T: blah(1line)
    which works for small sequences. but for larger sequences
    python keeps adding newlines if the string in question is very long...
    '''
    myfile="myconsensus.txt"
    writing_strings=[acgt[i]+' '.join(str(n) for n in ACGT[i] for i in range(0,len(ACGT))) for i in range(0,len(acgt))]
    with open(myfile,'w') as D:
        D.writelines(ancstr)
        D.writelines("\n")
        for i in range(0,len(writing_strings)):
            D.writelines(writing_strings[i])
            D.writelines("\n")

cons("rosalind_cons.txt")
Your code is totally fine except for this line:
writing_strings=[acgt[i]+' '.join(str(n) for n in ACGT[i] for i in range(0,len(ACGT))) for i in range(0,len(acgt))]
You accidentally replicate your data. Try replacing it with:
writing_strings = [acgt[i] + str(ACGT[i])[1:-1] for i in range(0, len(ACGT))]
and then write each entry to your output file as follows:
D.write(writing_strings[i])
The [1:-1] slice is a lazy way to get rid of the brackets from the list's string representation.
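If you'd rather avoid the slicing trick, an alternative sketch (my own suggestion, not from the original answer) builds each profile line with join, giving the space-separated numbers without commas that the question asks for:

writing_strings = [acgt[i] + ' '.join(str(n) for n in ACGT[i]) for i in range(len(ACGT))]

with open(myfile, 'w') as D:
    D.write(ancstr + "\n")        # consensus sequence on its own line
    for row in writing_strings:
        D.write(row + "\n")       # one profile row per line, no brackets or commas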

While loops for parsing fasta files

I have been given an assignment for a project with no previous programming experience. It asks me to create a motif finder using while loops, incrementing counters, and booleans. I believe I am on the right track but am very uncertain, as I have no programming experience. Can anybody help me find what's wrong and tell me what I need to do to correct it? Again, I am a biology guy asked to take this on. Here is the FASTA input:
>gi|14578797|gb|AF230943.1| Vibrio hollisae strain ATCC33564 Hsp60 (hsp60) gene, partial cds
CGCAACTGTACTGGCACAGGCTATCGTAAGCGAAGGTCTGAAAGCCGTTGCTGCAGGCATGAACCCAATG
GACCTGAAGCGTGGTATTGACAAAGCGGTTGCTGCGGCAGTTGAGCAACTGAAAGCGTTGTCTGTTGAGT
GTAATGACACCAAGGCTATTGCACAGGTAGGTACCATTTCTGCTAACTCTGATGAAACTGTAGGTAACAT
CATTGCAGAAGCGATGGAAAAAGTAGGCCGCGACGGTGTTATCACTGTTGAAGAAGGTCAGTCTCTGCAA
GACGAGCTGGATGTGGTTGAAGGTATGCAGTTTGACCGCGGCTACCTGTCTCCATACTTCATCAACAACC
AAGAGTCTGGTTCTGTTGATCTGGAAAACCCATTCATCCTGCTGGTTGACAAAAAAGTATCAAACATCCG
CGAACTGCTGCCTACTCTGGAAGCCGTCGCGAAATCTTCACGTCCACTGCTGATCATCGCTGAAGACGTA
GAAGGTGAAGCACTGGCAACACTGGTTGTAAACAACATGCGTGGCATCGTAAAAGGGCAGCAGTT
>gi|14578795|gb|AF230942.1| Photobacterium damselae strain ATCC33539 Hsp60 (hsp60) gene, partial cds
GGCTACAGTACTGGCTCAAGCAATTATCACTGAAGGTCTAAAAGCGGTTGCTGCGGGTATGAACCCAATG
GATCTTAAGCGTGGTATCGACAAAGCAGTAGTTGCTGCTGTTGAAGAGCTAAAAGCACTATCTGTTCCTT
GTGCTGACACTAAAGCGATTGCTCAGGTAGGTACTATCTCTGCAAACTCTGATGCAACTGTGGGTAACCT
AATTGCAAAAGCTATGGATAAAGTTGGTCGTGATGGTGTTATCACGGTTGAAGAAGGCCAAGCGCTACAA
GATGAGTTAGATGTAGTTGAAGGTATGCAGTTCGATCGCGGTTACCTATCTCCATACTTCATCAACAACC
AACAAGCAGGTGCGGTGGAGCTAGAAAGCCCATTTATCCTTCTTGTTGATAAGAAAATCTCTAACATCCG
TGAGCTATTACCAGCACTAGAAGGCGTTGCAAAAGCATCTCGTCCTCTACTGATCATCGCTGAAGATGTT
GAAGGTGAAGCACTAGCAACACTGGTTGTGAACAACATGCGCGGCATTGTTAAAGTTGCTGCTGTT
I am in need of some help.
import re

#function parsing header for sequence
def fasta_splitter(x):
    boo=0
    seq = ""
    i=0
    while i < len(lines)
        if line[0] ==">" and boo ==0
            line[i] = header
            boo = 1
            i=1+i
        elif line[i][0] ==">"
            header=line[0]
            seq=""
            i=i+1
        else
            seq=seq+line[i]
    print ("header" + "seq")

#open file and read file by command line
x=open('C:\\Python27\\fasta.py.txt','r+')
lines = x.readlines()
fasta_splitter(lines)

#split organism details from actual bases
# not sure how to call defined function
re.search(pattern, string)

# renaming string seq to dna
seq ="x"
m = re.search(r"GG(ATCG)GTTAC", dna)
print "m"
For starters, read FASTA with the Bio.SeqIO module, so you don't have to write this fasta_splitter monstrosity. Biopython is generally great.
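For instance, a minimal sketch with Bio.SeqIO (assuming the records above are saved as valid FASTA in a file named fasta.txt) could be:

from Bio import SeqIO

# each record carries the header (id/description) and the full sequence
for record in SeqIO.parse("fasta.txt", "fasta"):
    print(record.id, len(record.seq))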
Second, you've messed just about everything up. You call re.search without having defined either a pattern or a string. This will just crash. Then you write
seq="x"
...
print "m"
In both cases you use the literal letters "x" and "m", when what you need are variable names. The correct thing would be
seq = x
...
print(m)
And all this is assuming this is a student assignment and not actual research. In the latter case it's generally better to use a modern motif finder tool: those are more sensitive and biologically correct than any bunch of regexes could be.
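Putting the two corrections together, a hedged end-to-end sketch (assuming the records are in fasta.txt, and assuming the intended pattern is any single base between GG and GTTAC; the original GG(ATCG)GTTAC capture group matches the literal string ATCG) might be:

import re
from Bio import SeqIO

motif = re.compile(r"GG[ATCG]GTTAC")  # character class: one of A, T, C or G

for record in SeqIO.parse("fasta.txt", "fasta"):
    dna = str(record.seq)
    m = motif.search(dna)
    if m:
        print(record.id, m.start(), m.group())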
