How to find same lines in two large text files? - python

I'd like to compare two large text files (200 MB) and get the lines they have in common.
How can I do that in Python?

Since they are just 200 MB, allocate enough memory, read both files, sort each file's lines in ascending order, then iterate through both collections of lines in parallel, as in a merge operation, and delete lines that occur in only one collection.
Preserve line numbers in the collections and sort by line number afterwards if you want to output the lines in their original order.
Merge operation: keep one index for each collection. If the lines at both indexes match, increment both indexes; otherwise delete the smaller line and increment just that index. If either index moves past the last line, delete all remaining lines in the other collection.
Optimization: hash each line during the initial read and compare hashes first to speed up the comparisons a little.
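Here is a minimal sketch of that approach (Python 2, to match the rest of this thread), without the hashing or line-number bookkeeping; the filenames are placeholders:
import sys

with open("one") as f1, open("two") as f2:
    a = sorted(f1)
    b = sorted(f2)

common = []
i = j = 0
while i < len(a) and j < len(b):
    if a[i] == b[j]:
        common.append(a[i])   # line occurs in both files
        i += 1
        j += 1
    elif a[i] < b[j]:
        i += 1                # present only in the first file; skip it
    else:
        j += 1                # present only in the second file; skip it

sys.stdout.writelines(common)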

Disclaimer: I really have no idea how efficient this will be for 200 MB, but it's worth a try, I guess. I tried the following for two ~80 MB files and it took around 2.7 seconds on an Intel i3 machine with 3 GB of RAM:
f1 = open("one")
f2 = open("two")
print set(f1).intersection(f2)  # the lines common to both files

You may be able to use the standard difflib module. The module offers several ways of creating difference deltas from various kinds of input.

Here's an example from the docs:
>>> from difflib import context_diff
>>> fromfile = open('before.py')
>>> tofile = open('after.py')
>>> for line in context_diff(fromfile, tofile, fromfile='before.py', tofile='after.py'):
...     print line,

Related

python script for removing duplicates taking 24 hrs+ to loop through 10^7 records

Input (t1):
P95P,71655,LINC-JP,pathogenic
P95P,71655,LINC-JP,pathogenic
P71P,71655,LINC-JP,pathogenic
P71P,71655,LINC-JP,pathogenic
Output (op):
P95P,71655,LINC-JP,pathogenic
P71P,71655,LINC-JP,pathogenic
My code:
def dup():
    fi=open("op","a")
    l=[];final="";
    q=[];dic={};
    for i in open("t1"):
        k=i.split(",")
        q.append(k[1])
        q.append(k[0])
        if q in l:
            pass
        else:
            final= final + i.strip() + "\n"
            fi.write(str(i.strip()))
            fi.write("\n")
            l.append(q)
        q=[]
        #print i.strip()
    fi.close()
    return final.strip()

d=dup()
In the above input, lines 1-2 and lines 3-4 are duplicates, so the output has those duplicates removed. My input file has around 10^7 entries.
Why has my code been running for more than 24 hours on a 76 MB input file? It has yet to complete even one pass over the input, although it works fine for small files.
Can anyone point out the reason for this long runtime? How can I optimize my program? Thanks.
You're using an O(n²) algorithm, which scales poorly for larger files:
for i in open("t1"):   # linear pass over the file takes O(n) time
    ...
    if q in l:         # linear scan of the list l takes O(n) time
        ...
    ...
You should consider using a set (i.e. make l a set) or itertools.groupby if duplicates will always be next to each other. These approaches will be O(n).
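As a hedged sketch of the groupby variant, which relies on duplicates being adjacent as they are in your sample input (filenames follow the question's):
from itertools import groupby

def fields_key(line):
    k = line.split(",")
    return (k[1], k[0])

with open("t1") as src, open("op", "a") as out:
    for _, run in groupby(src, key=fields_key):
        out.write(next(run))   # keep only the first line of each run of duplicates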
If you have access to a Unix system, uniq is a nice utility that is made for your problem (note that it only removes adjacent duplicates, which is the case in your sample input):
uniq input.txt output.txt
See https://www.cs.duke.edu/csl/docs/unix_course/intro-89.html
I know this is a Python question, but sometimes Python is not the tool for the task.
And you can always embed a system call in your Python script.
It's not clear why you're building a huge string (final) that holds the same thing the file does, or what dic is for. In terms of performance, you can look up x in y much faster if y is a set than if y is a list. Also, a minor point: shorter variable names don't improve performance, so use good ones instead. I would suggest:
def deduplicate(infile, outfile):
    seen = set()
    #final = []
    with open(outfile, "a") as out, open(infile) as in_:
        for line in in_:
            check = tuple(line.split(",")[:2])
            if check not in seen:
                #final.append(line.strip())
                out.write(line)  # why 'strip' the '\n' then 'write' a new one?
                seen.add(check)
    #return "\n".join(final)
If you really do need final, make it a list until the last moment (see the commented-out lines): gradual string concatenation creates lots of unnecessary intermediate string objects.
There are a couple things that you are doing very inefficiently. The largest is that you made l a list, so the line if q in l has to search through everything in the list already in order to check if q matches it. If you make l a set, the membership check can be done using a hash calculation and array lookup, which take the same (small) amount of time no matter how much you add to the set (though it will cause l not to be read in the order that it was written).
Other little speedups that you can do include:
- Using a tuple (k[1], k[0]) instead of a list for q.
- You are writing to your output file fi on every iteration of the loop. Your OS will try to batch and background the writes, but it may be faster to just do one big write at the end. I am not sure on this point, so try it (see the sketch below).
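A sketch combining both suggestions (tuple keys in a set, one batched write at the end); treat it as a starting point rather than a drop-in replacement:
seen = set()
out_lines = []
with open("t1") as src:
    for i in src:
        k = i.split(",")
        q = (k[1], k[0])      # a tuple is hashable, so it can be a set member
        if q not in seen:
            seen.add(q)
            out_lines.append(i)

with open("op", "a") as fi:
    fi.writelines(out_lines)  # one batched write instead of one write per line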

Removing duplicates in a huge .csv file

I have a CSV file of this format:
testname unitname time data
test1 1 20131211220159 123123
test1 1 20131211220159 12345
test1 1 20131211230180 1234
I am trying to remove all old data from this file and retain only the data with the latest timestamp. (The first two rows above should be deleted because the last timestamp is greater than the first two timestamps.) I want to keep all test data unless the same test and same unit was repeated at a later time. The input file is sorted by time (so older data goes down below).
The file is about 15 MB (output_temp.csv). I copied it as output_temp2.csv.
This is what I have:
file1=open("output_temp.csv","r")
file2=open("output_temp2.csv","r")
file3=open("output.csv","w")
flag=0
linecounter=0

for line in file1:
    fields=line.split(",")   # was line[0] etc., which indexes characters, not fields
    testname=fields[0]
    vid=fields[1]
    tstamp=fields[2]
    file2.seek(0) #reset
    for i in range(linecounter):
        file2.readline() #came down to the line #
    for line2 in file2:
        if testname==line2.split(",")[0] and vid==line2.split(",")[1] and tstamp!=line2.split(",")[2]:
            flag=1           # was 'flag==1', a comparison with no effect
            print line
        if flag==1:
            break
    if flag==0:
        file3.write(line)
    linecounter=linecounter+1 #going down is ok, don't go up
    flag=0
This is taking really long to process. I think the logic might be OK, but it's literally taking 10 minutes per 100 KB and I have a long way to go.
The main reason this is slow is that you're reading the entire file (or, rather, a duplicate copy of it) for each line in the file. So, if there are 10000 lines, you're reading 10000 lines 10000 times, meaning 100,000,000 total line reads!
If you have enough memory to save the lines read so far, there's a really easy solution: Store the lines seen so far in a set. (Or, rather, for each line, store the tuple of the three keys that count for being a duplicate.) For each line, if it's already in the set, skip it; otherwise, process it and add it to the set.
For example:
seen = set()
for line in infile:
    testname, vid, tstamp = line.split(",", 3)[:3]
    if (testname, vid, tstamp) in seen:
        continue
    seen.add((testname, vid, tstamp))
    outfile.write(line)
The itertools recipes in the docs have a function unique_everseen that lets you wrap this up even more nicely:
def keyfunc(line):
    return tuple(line.split(",", 3)[:3])

for line in unique_everseen(infile, key=keyfunc):
    outfile.write(line)
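For reference, the recipe roughly as it appears in the Python 2 itertools docs (it is not importable; you copy it into your code):
from itertools import ifilterfalse

def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in ifilterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element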
If the set takes too much memory, you can always fake a set on top of a dict, and you can fake a dict on top of a database by using the dbm module, which will do a pretty good job of keeping enough in memory to make things fast but not enough to cause a problem. The only problem is that dbm keys have to be strings, not tuples of three strings… but you can always just keep them joined up (or re-join them) instead of splitting, and then you've got a string.
I'm assuming that when you say the file is "sorted", you mean in terms of the timestamp, not in terms of the key columns. That is, there's no guarantee that two rows that are duplicates will be right next to each other. If there were, this would be even easier. It may not look any different if you use the itertools recipes, since you're just replacing everseen with justseen:
def keyfunc(line):
    return tuple(line.split(",", 3)[:3])

for line in unique_justseen(infile, key=keyfunc):
    outfile.write(line)
But under the covers, this is only keeping track of the last line, rather than a set of all lines. Which is not only faster, it also saves a lot of memory.
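Again for reference, the corresponding Python 2 recipe from the itertools docs; it is a single imap/groupby chain, which is why it remembers only the element just seen:
from itertools import imap, groupby
from operator import itemgetter

def unique_justseen(iterable, key=None):
    "List unique elements, preserving order. Remember only the element just seen."
    return imap(next, imap(itemgetter(1), groupby(iterable, key)))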
Now that (I think) I understand your requirements better: what you actually want is not to drop all but the first line with the same testname, vid, and tstamp, but rather to drop all lines with the same testname and vid except the one with the highest tstamp. And since the file is sorted in ascending order of tstamp, that means you can ignore the tstamp entirely; you just want the last match for each.
This means the everseen trick won't work—we can't skip the first one, because we don't yet know there's a later one.
If we just iterated the file backward, that would solve the problem. It would also double your memory usage (because, in addition to the set, you're also keeping a list so you can stack up all of those lines in reverse order). But if that's acceptable, it's easy:
def keyfunc(line):
    return tuple(line.split(",", 2)[:2])

for line in reversed(list(unique_everseen(reversed(list(infile)), key=keyfunc))):
    outfile.write(line)
If turning those lazy iterators into lists so we can reverse them takes too much memory, it's probably fastest to do multiple passes: reverse the file on disk, then filter the reversed file, then reverse it again. It does mean two extra file writes, but that can be a lot better than, say, your OS's virtual memory swapping to and from disk hundreds of times (or your program just failing with a MemoryError).
If you're willing to do the work, it wouldn't be that hard to write a reverse file iterator, which reads buffers from the end and splits on newlines and yields the same way the file/io.Whatever object does. But I wouldn't bother unless you turn out to need it.
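If you do end up needing it, a rough sketch of such a reverse line iterator might look like this (the buffer size is arbitrary, lines are yielded without their trailing newlines, and blank lines are dropped, which is fine for this data):
import os

def reverse_lines(f, bufsize=8192):
    """Yield f's lines in reverse order by reading fixed-size buffers
    from the end of the file."""
    f.seek(0, os.SEEK_END)
    pos = f.tell()
    tail = ""
    while pos > 0:
        size = min(bufsize, pos)
        pos -= size
        f.seek(pos)
        parts = (f.read(size) + tail).split("\n")
        tail = parts.pop(0)   # possibly a partial line; completed by the next chunk
        for part in reversed(parts):
            if part:          # skip empty pieces (trailing "\n", blank lines)
                yield part
    if tail:
        yield tail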
If you ever do need to repeatedly read particular line numbers out of a file, the linecache module will usually speed things up a lot. Nowhere near as fast as not re-reading at all, of course, but a lot better than reading and parsing thousands of newlines.
You're also wasting time repeating some work in the inner loop. For example, you call line2.split(",") three times, instead of just splitting it once and stashing the value in a variable, which would be three times as fast. A 3x constant gain is nowhere near as important as a quadratic to linear gain, but when it comes for free by making your code simpler and more readable, might as well take it.
For a file of this size (~15 MB), pandas would be an excellent choice.
Like this:
import pandas as pd

raw_data = pd.read_csv('output_temp.csv')  # the original snippet omitted the path
clean_data = raw_data.drop_duplicates()
clean_data.to_csv('/path/to/clean_csv.csv')
I was able to process a CSV file of about 151 MB containing more than 5.9 million rows in less than a second with the above snippet.
Note that the duplicate check can be made conditional, or restricted to a subset of fields to be matched. pandas provides a lot of these features out of the box; see the documentation, and the sketch below.
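Since the goal in this question is to keep only the row with the latest timestamp per test/unit pair, here is a hedged sketch; the column names are assumptions based on the sample data, and it assumes the file has a header row and is sorted by time ascending:
import pandas as pd

raw_data = pd.read_csv('output_temp.csv')

# keep='last' retains the newest row for each (testname, unitname) pair,
# because the file is sorted by time ascending.
clean_data = raw_data.drop_duplicates(subset=['testname', 'unitname'], keep='last')
clean_data.to_csv('output.csv', index=False)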

Python edit distance

I am a molecular biologist using Biopython to analyze mutations in genes, and my problem is this:
I have a file containing many different sequences (millions), most of which are duplicates. I need to find the duplicates and discard them, keeping one copy of each unique sequence. I was planning on using the module editdist to calculate the edit distance between them all to determine which ones are duplicates, but editdist can only work with two strings, not files.
Anyone know how I can use that module with files instead of strings?
Assuming your file consists solely of sequences arranged one sequence per line, I would suggest the following:
seq_file = open("sequences.txt")  # your file here
sequences = [seq for seq in seq_file]
uniques = list(set(sequences))
Assuming you have the memory for it. How many millions?
ETA:
Was reading the comments above (but don't have comment privileges). Assuming the sequence IDs are the same for any duplicates, this will work. If duplicate sequences can have different sequence IDs, then you would need to know which comes first and what lies between them in the file.
If you want to filter out exact duplicates, you can use the set Python built-in type. As an example :
a = ["tccggatcc", "actcctgct", "tccggatcc"] # You have a list of sequences
s = set(a) # Put that into a set
s is then equal to set(['tccggatcc', 'actcctgct']), without duplicates.
Does it have to be Python?
If the sequences are simply text strings one per line then a shell script will be very efficient:
sort input-file-name | uniq > output-file-name
This will do the job on files up to 2GB on 32 bit Linux.
If you are on Windows then install the GNU utils http://gnuwin32.sourceforge.net/summary.html.
Four things come to mind:
1) You can use a set(), as described by F.X., assuming the unique strings will all fit in memory.
2) You can use one file per sequence, and feed the files to a program like equivs3e: http://stromberg.dnsalias.org/~strombrg/equivalence-classes.html#python-3e
3) You could perhaps use gdbm as a set, instead of its usual key-value store use (see the sketch after this list). This is good if you need something that's 100% accurate, but you have too much data to fit all the uniques in virtual memory.
4) You could perhaps use a Bloom filter to cut the data down to more manageable sizes, if you have a truly huge number of strings to check and lots of duplicates. Basically a Bloom filter can say "this is definitely not in the set" or "this is almost definitely in the set". In this way, you can eliminate most of the obvious duplicates before using a more common means to operate on the remaining elements. http://stromberg.dnsalias.org/~strombrg/drs-bloom-filter/
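A minimal sketch of option 3), using Python 2's gdbm module as an on-disk set; the filenames are placeholders:
import gdbm

db = gdbm.open("seen.db", "c")          # "c" creates the database if needed
with open("sequences.txt") as infile, open("unique.txt", "w") as outfile:
    for line in infile:
        seq = line.strip()
        if not db.has_key(seq):         # membership test against the on-disk set
            db[seq] = ""                # value unused; only the key matters
            outfile.write(line)
db.close()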
Don't be afraid of files! ;-)
I'm posting an example by assuming the following:
- it's a text file
- one sequence per line

filename = 'sequence.txt'
with open(filename, 'r') as sqfile:
    sequences = sqfile.readlines()  # now we have a list of strings

# discarding the duplicates:
uniques = list(set(sequences))

That's it. By using Python's set type, we eliminate all duplicates automagically.
If you have the ID and the sequence on the same line, like:
423401 ttacguactg
you may want to eliminate the IDs like this:
sequences = [s.strip().split()[-1] for s in sequences]
With strip() we remove leading and trailing whitespace from the string, and with split() we split the line into two components: the ID and the sequence. With [-1] we select the last component (the sequence string) and repack it into our sequence list.

Comparing file contents in Python

I have two files, say source and target. I compare each element in source to check whether it also exists in target; if it does not exist in target, I print it (the end goal is to have zero difference). Here is the code I have written:
def finddefaulters(source,target):
    f = open(source,'r')
    g = open(target,'r')
    reference = f.readlines()
    done = g.readlines()
    for i in reference:
        if i not in done:
            print i,
I need help with:
1) How would this code be rated on a scale of 1-10?
2) How can I make it better and optimal if the file sizes are huge?
Another question: when I read all the lines as list elements, they are interpreted as 'element\n', so for correct comparison I have to add a newline at the end of each file. Is there a way to strip the newlines so I do not have to add a newline at the end of the files? I tried rstrip, but it did not work.
Thanks in advance.
Regarding efficiency: the method you show has an asymptotic runtime complexity of O(m*n), where m and n are the number of elements in reference and done, i.e. if you double the size of both lists, the algorithm will run 4 times longer (times a fixed constant that is uninteresting to theoretical computer scientists). If m and n are very large, you will probably want to choose a faster algorithm, e.g. sort the two lists first using .sort() (runtime complexity: O(n * log(n))) and then go through the lists just once (runtime complexity: O(n)). That algorithm has a worst-case runtime complexity of O(n * log(n)), which is already a big improvement. However, you trade readability and simplicity of the code for efficiency, so I would only advise you to do this if absolutely necessary.
Regarding coding style: you do not .close() the file handles, which you should. Instead of opening and closing the file handles explicitly, you could use Python's with statement. Also, if you like the functional style, you could replace the for loop with a list comprehension:
for i in reference:
    if i not in done:
        print i,
then becomes:
items = [i.strip() for i in reference if i not in done]
print ' '.join(items)
However, this way you will not see any progress while the list is being composed.
As joaquin already mentions, you can loop over f directly instead of f.readlines() as file handles support the iterator protocol.
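A sketch of the sort-then-single-pass version described above; it also rstrips each line's newline, which addresses your third question (rstrip returns a new string rather than modifying in place, which is likely why it "did not work" for you):
def finddefaulters(source, target):
    with open(source) as f, open(target) as g:
        reference = sorted(line.rstrip("\n") for line in f)
        done = sorted(line.rstrip("\n") for line in g)
    j = 0
    for item in reference:
        while j < len(done) and done[j] < item:
            j += 1              # advance past smaller entries in done
        if j >= len(done) or done[j] != item:
            print item          # present in source but not in target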
Some ideas:
1) Use with to open files safely:
with open(source) as f:
    ...
The with statement is used to wrap the execution of a block with methods defined by a context manager. This allows common try...except...finally usage patterns to be encapsulated for convenient reuse.
2) You can iterate over the lines of a file instead of using readlines:
for line in f:
    ...
3) Although for this short snippet it could be enough, try to use more informative names for your variables. One-letter names are not recommended.
4) If you want to take advantage of the standard library, try the functions in the difflib module. For example:
make_file(fromlines, tolines[, fromdesc][, todesc][, context][, numlines])
Compares fromlines and tolines (lists of strings) and returns a string which is a complete HTML file containing a table showing line-by-line differences, with inter-line and intra-line changes highlighted.
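For instance, a minimal usage sketch (make_file is a method of difflib.HtmlDiff; the filenames are placeholders):
import difflib

with open("source.txt") as f, open("target.txt") as g:
    fromlines = f.readlines()
    tolines = g.readlines()

html = difflib.HtmlDiff().make_file(fromlines, tolines, "source", "target")
with open("diff.html", "w") as out:
    out.write(html)   # open this file in a browser for a side-by-side diff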

Line by line comparison of large text files with python

I am working on some large (several million line) bioinformatics data sets with the general format:
chromosomeNumber locusStart locusStop sequence moreData
I have other files in this format:
chromosomeNumber locusStart locusStop moreData
What I need to be able to do is read one of each type of file into memory, and if the locusStart of a line in the upper file is between the start and stop of any of the lines in the lower file, print the line to output file 1. If the locusStart of that line is not between the start and stop of any lines in the bottom file, then print it to output file 2.
I am currently reading the files in, converting them into dictionaries keyed on chromosome with the corresponding lines as values. I then split each value line into a list of strings and do comparisons with the strings. This takes an incredibly long time, and I would like to know if there is a more efficient way to do it.
Thanks.
It seems that for the lower file (which I am assuming has the second format), the only field you are concerned about is locusStart. Since, from your description, you do not necessarily care about the other data, you could make a set of all of the locusStart values:
locusStart_list = set()
with open(upper_file, 'r') as f:
    for line in f:
        tmp_list = line.strip().split()
        locusStart_list.add(tmp_list[1])
This removes all of the unnecessary line manipulation you do for the bottom file. Then, you can easily compare the locusStart of a field to the set built from the lower file. The set would also remove duplicates, making it a bit faster than using a list.
It sounds like you are going to be doing lots of greater than/less than comparisons, as such I don't think loading your data into dictionaries is going to enhance the speed of code at all--based on what you've explained it sounds like you're still looping through every element in one file or the other.
What you need is a different data structure to load your data into and run comparison operations with. Check out the Python bisect module; I think it may provide the data structure that you need to run your comparison operations much more efficiently.
If you can more precisely describe what exactly you're trying to accomplish, we'll be able to help you get started writing your code.
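For example, a hedged sketch of what a bisect-based lookup could look like, assuming the lower file's intervals are sorted by locusStart within each chromosome and do not overlap (filenames and field order follow the formats in the question):
import bisect
from collections import defaultdict

starts = defaultdict(list)   # chromosome -> locusStart values, in sorted order
stops = defaultdict(list)    # parallel list of locusStop values

with open("lower.txt") as lower:
    for line in lower:
        chrom, start, stop = line.split()[:3]
        starts[chrom].append(int(start))
        stops[chrom].append(int(stop))

def covered(chrom, pos):
    """True if pos falls inside some (locusStart, locusStop) interval
    on this chromosome; O(log n) per lookup via binary search."""
    i = bisect.bisect_right(starts[chrom], pos) - 1
    return i >= 0 and pos <= stops[chrom][i]

with open("upper.txt") as upper, \
     open("out1.txt", "w") as out1, open("out2.txt", "w") as out2:
    for line in upper:
        chrom, start = line.split()[:2]
        (out1 if covered(chrom, int(start)) else out2).write(line)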
Using a dictionary of the chromosome number is a good idea, as long as you can fit both files into memory.
You then want to sort both lists by locusStart (split the string, convert locusStart to a number--see instructions on sorting if you're unsure how to sort on locusStart alone).
Now you can just walk through your lists:
- If the lower locusStart is less than the first upper locusStart, put the line in file 2 and go on to the next one.
- If the lower locusStart is greater than the first upper locusStart, then:
  - while it is also greater than locusEnd, throw away the beginning of the upper list;
  - if you find a case where it's greater than locusStart and less than locusEnd, put it in file 1;
  - otherwise, put it in file 2.
This should replace what is now probably an O(n^2) algorithm with an O(n log n) one.
