Line-by-line comparison of large text files with Python

I am working on some large (several million line) bioinformatics data sets with the general format:
chromosomeNumber locusStart locusStop sequence moreData
I have other files in this format:
chromosomeNumber locusStart locusStop moreData
What I need to be able to do is read one of each type of file into memory and if the locusStart of a line of the upper file is between the start and stop of any of the lines in the lower file, print the line to output file 1. If the locusStart of that line is not between the start and stop of any lines in the bottom file, then print it to output file 2.
I am currently reading the files in, converting them into dictionaries keyed on chromosome with the corresponding lines as values. I then split each value line into a string, and then do comparisons with the strings. This takes an incredibly long time, and I would like to know if there is a more efficient way to do it.
Thanks.

It seems that for the lower file (which I assume has the second format), the only field you are concerned about is locusStart. Since, from your description, you do not necessarily care about the other data, you could make a set of all of the locusStart values:
locusStart_set = set()
with open(lower_file, 'r') as f:
    for line in f:
        fields = line.strip().split()
        locusStart_set.add(fields[1])
This removes all of the unnecessary line manipulation you do for the bottom file. Then, you can easily compare the locusStart of a field to the set built from the lower file. The set would also remove duplicates, making it a bit faster than using a list.

It sounds like you are going to be doing lots of greater-than/less-than comparisons, so I don't think loading your data into dictionaries is going to enhance the speed of the code at all--based on what you've explained, it sounds like you're still looping through every element in one file or the other.
What you need is a different data structure to load your data into and run comparison operations on. Check out the Python bisect module; I think it may provide the data structure that you need to run your comparison operations much more efficiently.
If you can more precisely describe what exactly you're trying to accomplish, we'll be able to help you get started writing your code.
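For example, here is a rough sketch of the bisect approach for the original question. The file names are placeholders, and it assumes the intervals in the lower file do not overlap; overlapping intervals would need an interval tree or a scan of neighbouring entries instead.
import bisect
from collections import defaultdict

# Build per-chromosome interval lists from the lower file.
intervals = defaultdict(list)            # chromosome -> list of (locusStart, locusStop)
with open('lower_file.txt') as lower:
    for line in lower:
        chrom, start, stop = line.split()[:3]
        intervals[chrom].append((int(start), int(stop)))

starts, stops = {}, {}
for chrom, pairs in intervals.items():
    pairs.sort()                         # sort by locusStart
    starts[chrom] = [s for s, _ in pairs]
    stops[chrom] = [e for _, e in pairs]

# Classify each upper-file line by bisecting into the sorted starts.
with open('upper_file.txt') as upper, \
     open('output1.txt', 'w') as out1, \
     open('output2.txt', 'w') as out2:
    for line in upper:
        chrom, start = line.split()[:2]
        start = int(start)
        chrom_starts = starts.get(chrom, [])
        i = bisect.bisect_right(chrom_starts, start) - 1   # rightmost interval with locusStart <= start
        if i >= 0 and start <= stops[chrom][i]:
            out1.write(line)
        else:
            out2.write(line)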

Using a dictionary of the chromosome number is a good idea, as long as you can fit both files into memory.
You then want to sort both lists by locusStart (split the string, convert locusStart to a number--see instructions on sorting if you're unsure how to sort on locusStart alone).
Now you can just walk through your lists: if the lower locusStart is less than the first upper locusStart, put the line in file 2 and go on to the next one. If the lower locusStart is greater than the first upper locusStart, then:
- while it is also greater than locusEnd, throw away the beginning of the upper list;
- if you find a case where it is greater than locusStart and less than locusEnd, put it in file 1;
- otherwise, put it in file 2.
This should replace what is now probably an O(n^2) algorithm with an O(n log n) one.
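A rough sketch of that walk for one chromosome's worth of data, using this answer's terminology: lower_lines are the (locusStart, line) pairs to classify and upper_intervals are the (locusStart, locusEnd) pairs, both already sorted by locusStart. It assumes the intervals do not overlap.
def classify(lower_lines, upper_intervals, out1, out2):
    i = 0
    for start, line in lower_lines:
        # Throw away intervals that end before this locusStart.
        while i < len(upper_intervals) and upper_intervals[i][1] < start:
            i += 1
        if i < len(upper_intervals) and upper_intervals[i][0] <= start <= upper_intervals[i][1]:
            out1.write(line)
        else:
            out2.write(line)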


Extracting certain rows from a file that match a condition from another file

So first, I know there are some answers out there for similar questions, but...my problem has to do with speed and memory efficiency.
I have a 60 GB text file that has 17 fields and 460,368,082 records. Column 3 has the ID of the individual, and the same individual can have several records in this file. Let's call this file File A.
I have a second file, File B, that has the ID of 1,000,000 individuals and I want to extract the rows of File A that have an ID that is in File B.
I have a Windows PC and I'm open to doing this in C or Python, or whatever is faster, but I'm not sure how to do it quickly and efficiently.
So far every solution I have come up with takes over 1.5 years according to my calculations.
What you are looking for is a sort-merge join. The idea is to sort File A on the column you are joining on (ID), and sort File B on the column you are joining on (ID). Then read both files using a merge algorithm, ignoring the records that don't have a match in both.
Sorting the files may require creation of intermediate files.
If the data is in a text file with a delimiter, you can also use the Linux sort command-line utility to perform the sort.
sort -k3,3 -t'|' fileA > fileA.sorted
sort fileB > fileB.sorted
dos2unix fileB.sorted #make sure the line endings are same style
dos2unix fileA.sorted #make sure the line endings are same style
If dos2unix is not available, this may be used as an alternative:
sort -k3,3 -t'|' fileA | tr -d '\r' > fileA.sorted
sort fileB | tr -d '\r' > fileB.sorted
Join the files
join -1 3 -2 1 -t'|' fileA.sorted fileB.sorted
The other option, if you have enough RAM, is to load File B into memory in a HashMap-like structure. Then read File A and look each ID up in the HashMap for a match. I think either language would work fine; it just depends on which you are more comfortable with.
It depends: if the file is unsorted, you're going to have to search the entire thing, and I would use multiple threads for that. If you are going to have to do this search multiple times, I would also create an index.
If you have a massive amount of memory you could create a hash table to hold strings. You could then load all of the strings from the first file into the hash table. Then, load each of the strings from the second file one at a time. For each string, check if it's in the hash table. If so, report a match. This approach uses O(m) memory (where m is the number of strings in the first file) and takes at least Ω(m + n) time and possibly more, depending on how the hash function works. This is also (almost certainly) the easiest and most direct way to solve the problem.
If you have little RAM to work with but tons of disk space, you could use external sorting (https://en.wikipedia.org/wiki/External_sorting) to get this down to O(n log n) time.
It sounds like what you want to do is first read File B, collecting the IDs. You can store the IDs in a set or a dict.
Then read File A. For each line in File A, extract the ID, then see if it was in File B by checking for membership in your set or dict. If not, then skip that line and continue with the next line. If it is, then process that line as desired.
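A minimal sketch of that approach, assuming a '|'-delimited File A (as in the sort commands above) and one ID per line in File B; the file names are placeholders:
wanted_ids = set()
with open('fileB') as fb:
    for line in fb:
        wanted_ids.add(line.strip())

with open('fileA') as fa, open('matches.txt', 'w') as out:
    for line in fa:
        record_id = line.split('|')[2]   # column 3 holds the individual's ID
        if record_id in wanted_ids:
            out.write(line)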

Removing duplicates in a huge .csv file

I have a csv file of this format
testname unitname time data
test1 1 20131211220159 123123
test1 1 20131211220159 12345
test1 1 20131211230180 1234
I am trying to remove all old data from this file and retain only the data with the latest timestamp (the first two rows above should be deleted because the last timestamp is greater than the first two timestamps). I want to keep all test data unless the same test and same unit was repeated at a later time. The input file is sorted by time (so older data goes down below).
The file is about 15 MB (output_temp.csv). I copied it as output_temp2.csv.
This is what I have:
file1=open("output_temp.csv","r")
file2=open("output_temp2.csv","r")
file3=open("output.csv","w")
flag=0
linecounter=0
for line in file1:
    testname=line[0]
    vid=line[1]
    tstamp=line[2]
    file2.seek(0) #reset
    for i in range(linecounter):
        file2.readline() #came down to the line #
    for line2 in file2:
        if testname==line2.split(",")[0] and vid==line2.split(",")[1] and tstamp!=line2.split(",")[2]:
            flag==1
            print line
        if flag==1:
            break
    if flag==0:
        file3.write(line)
    linecounter=linecounter+1 #going down is ok dont go up.
    flag=0
This is taking really long to process. I think the logic might be OK, but it's literally taking 10 minutes per 100 KB and I have a long way to go.
The main reason this is slow is that you're reading the entire file (or, rather, a duplicate copy of it) for each line in the file. So, if there are 10000 lines, you're reading 10000 lines 10000 times, meaning 100,000,000 total line reads!
If you have enough memory to save the lines read so far, there's a really easy solution: Store the lines seen so far in a set. (Or, rather, for each line, store the tuple of the three keys that count for being a duplicate.) For each line, if it's already in the set, skip it; otherwise, process it and add it to the set.
For example:
seen = set()
for line in infile:
    testname, vid, tstamp = line.split(",", 3)[:3]
    if (testname, vid, tstamp) in seen:
        continue
    seen.add((testname, vid, tstamp))
    outfile.write(line)
The itertools recipes in the docs have a function unique_everseen that lets you wrap this up even more nicely:
def keyfunc(line):
    return tuple(line.split(",", 3)[:3])

for line in unique_everseen(infile, key=keyfunc):
    outfile.write(line)
If the set takes too much memory, you can always fake a set on top of a dict, and you can fake a dict on top of a database by using the dbm module, which will do a pretty good job of keeping enough in memory to make things fast but not enough to cause a problem. The only problem is that dbm keys have to be strings, not tuples of three strings… but you can always just keep them joined up (or re-join them) instead of splitting, and then you've got a string.
I'm assuming that when you say the file is "sorted", you mean in terms of the timestamp, not in terms of the key columns. That is, there's no guarantee that two rows that are duplicates will be right next to each other. If there were, this is even easier. It may not seem easier if you use the itertools recipes; you're just replacing everseen with justseen:
def keyfunc(line):
    return tuple(line.split(",", 3)[:3])

for line in unique_justseen(infile, key=keyfunc):
    outfile.write(line)
But under the covers, this is only keeping track of the last line, rather than a set of all lines. Which is not only faster, it also saves a lot of memory.
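For reference, simplified versions of the two recipes look roughly like this (the ones in the itertools documentation are equivalent but slightly more optimized):
from itertools import groupby

def unique_everseen(iterable, key=None):
    # Remember every key ever seen; yield an element only the first time its key appears.
    seen = set()
    for element in iterable:
        k = element if key is None else key(element)
        if k not in seen:
            seen.add(k)
            yield element

def unique_justseen(iterable, key=None):
    # Only remember the most recent key, i.e. collapse *consecutive* duplicates.
    for _, group in groupby(iterable, key):
        yield next(group)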
Now that (I think) I understand your requirements better, what you actually want to get rid of is not all but the first line with the same testname, vid, and tstamp, but rather all lines with the same testname and vid except the one with the highest tstamp. And since the file is sorted in ascending order of tstamp, that means you can ignore the tstamp entirely; you just want the last match for each.
This means the everseen trick won't work—we can't skip the first one, because we don't yet know there's a later one.
If we just iterated the file backward, that would solve the problem. It would also double your memory usage (because, in addition to the set, you're also keeping a list so you can stack up all of those lines in reverse order). But if that's acceptable, it's easy:
def keyfunc(line):
    return tuple(line.split(",", 2)[:2])

for line in reversed(list(unique_everseen(reversed(list(infile)), key=keyfunc))):
    outfile.write(line)
If turning those lazy iterators into lists so we can reverse them takes too much memory, it's probably fastest to do multiple passes: reverse the file on disk, then filter the reversed file, then reverse it again. It does mean two extra file writes, but that can be a lot better than, say, your OS's virtual memory swapping to and from disk hundreds of times (or your program just failing with a MemoryError).
If you're willing to do the work, it wouldn't be that hard to write a reverse file iterator, which reads buffers from the end and splits on newlines and yields the same way the file/io.Whatever object does. But I wouldn't bother unless you turn out to need it.
If you ever do need to repeatedly read particular line numbers out of a file, the linecache module will usually speed things up a lot. Nowhere near as fast as not re-reading at all, of course, but a lot better than reading and parsing thousands of newlines.
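For example (the file name and line number are purely illustrative):
import linecache

# Fetch one specific line without manually re-reading everything before it;
# linecache caches the file's lines after the first call.
line500 = linecache.getline('output_temp2.csv', 500)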
You're also wasting time repeating some work in the inner loop. For example, you call line2.split(",") three times, instead of just splitting it once and stashing the value in a variable, which would be three times as fast. A 3x constant gain is nowhere near as important as a quadratic to linear gain, but when it comes for free by making your code simpler and more readable, might as well take it.
For a file of this size (~15 MB), Pandas would be an excellent choice.
Like this:
import pandas as pd

raw_data = pd.read_csv('output_temp.csv')   # path to the input csv
clean_data = raw_data.drop_duplicates()
clean_data.to_csv('/path/to/clean_csv.csv')
I was able to process a CSV file of about 151 MB, containing more than 5.9 million rows, in less than a second with the above snippet.
Please note that the duplicate check can be a conditional operation or restricted to a subset of fields to be matched.
Pandas provides a lot of these features out of the box. Documentation here
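For the question's actual requirement (keep only the newest record per test/unit), something along these lines should work; the column names are assumptions taken from the sample header, and keep='last' relies on the file being sorted by time:
import pandas as pd

raw_data = pd.read_csv('output_temp.csv')
# Keep only the last (newest) row for each (testname, unitname) pair.
clean_data = raw_data.drop_duplicates(subset=['testname', 'unitname'], keep='last')
clean_data.to_csv('output.csv', index=False)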

Alternative to Python Multiprocessing Manager dict for large read only store

I'm using Multiprocessing with a large (~5 GB) read-only dict used by the processes. I started by passing the whole dict to each process, but ran into memory constraints, so I changed to using a Multiprocessing Manager dict (after reading this: How to share a dictionary between multiple processes in python without locking).
Since the change, performance has dived. What alternatives are there for a faster shared data store? The dict has 40-character string keys, and each value is a tuple of two small strings.
Use a memory mapped file. While this might sound insane (performance wise), it might not be if you use some clever tricks:
Sort the keys so you can use binary search in the file to locate a record
Try to make each line of the file the same length ("fixed width records")
If you can't use fixed width records, use this pseudo code:
Read 1 KB in the middle (or enough to be sure the longest line fits *twice*)
Find the first newline character
Find the next newline character
Take the line as the substring between those two positions
Check the key (the first 40 bytes)
If that key is greater than the one you are looking for, repeat with a 1 KB block in the first half of the search range; otherwise use the upper half of the search range
If the performance isn't good enough, consider writing an extension in C.
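A sketch of the fixed-width-record variant with mmap and a plain binary search; RECORD_LEN and KEY_LEN are assumptions, and the key passed in must already be padded to exactly KEY_LEN bytes:
import mmap

RECORD_LEN = 64   # hypothetical fixed record width: 40-byte key + value + newline
KEY_LEN = 40

def lookup(path, key):
    # Binary-search a sorted file of fixed-width records for a 40-byte key.
    with open(path, 'rb') as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        try:
            lo, hi = 0, len(mm) // RECORD_LEN
            while lo < hi:
                mid = (lo + hi) // 2
                rec = mm[mid * RECORD_LEN:(mid + 1) * RECORD_LEN]
                k = rec[:KEY_LEN]
                if k == key:
                    return rec[KEY_LEN:].rstrip()   # the value part of the record
                elif k < key:
                    lo = mid + 1
                else:
                    hi = mid
        finally:
            mm.close()
    return None   # key not found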

How to find same lines in two large text files?

I'd like to compare two large text files (200 MB) to find the lines they have in common.
How to do that in Python?
Since they are just 200 MB, allocate enough memory, read them in, sort the lines of each in ascending order, then iterate through both collections of lines in parallel as in a merge operation and delete those that only occur in one set.
Preserve line numbers in the collections and sort them by line number after the above if you want to output the lines in their original order.
Merge operation: keep one index for each collection; if the lines at both indexes match, increment both indexes, otherwise delete the smaller line and increment just that index. If either index is past the last line, delete all remaining lines in the other collection.
Optimization: use a hash to speed up comparisons a little; compute the hash during the initial read.
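A rough sketch of the sort-and-merge idea (the hashing optimization and the line-number bookkeeping are left out; the file names are placeholders):
with open('one') as f1, open('two') as f2:
    lines1 = sorted(f1)
    lines2 = sorted(f2)

# Walk both sorted lists in parallel, keeping only lines present in both.
common = []
i = j = 0
while i < len(lines1) and j < len(lines2):
    if lines1[i] == lines2[j]:
        common.append(lines1[i])
        i += 1
        j += 1
    elif lines1[i] < lines2[j]:
        i += 1
    else:
        j += 1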
Disclaimer: I really have no idea how efficient this will be for 200 MB, but it's worth a try, I guess:
I have tried the following for two ~80 MB files and the result was around 2.7 seconds on an Intel i3 machine with 3 GB of RAM.
f1 = open("one")
f2 = open("two")
print set(f1).intersection(f2)
You may be able to use the standard difflib module. The module offers several ways of creating difference deltas from various kinds of input.
Here's an example from the docs:
>>> from difflib import context_diff
>>> fromfile = open('before.py')
>>> tofile = open('after.py')
>>> for line in context_diff(fromfile, tofile, fromfile='before.py', tofile='after.py'):
...     print line,

list index out of range error received

Hey guys, beginner here. I have written a program that outputs files to .txt's and am using another to read them and use them. I have used a list to store these values (len(..) gives me 100 for all files). However, whenever I run this:
for w in range(1,20): # i want files file01-file20 excluding file00
    for x in range(100):
        c=c+1 #counter to keep list position on f=0
        exec "f=open('file%02d.txt','r').readlines()"%w #stores data from file00,file01,file02...
        f00=open('file00.txt','r').readlines() #same as ^ but from file00
        for y in range(100):
            xvp=float(f[c].rstrip('\n')) #the error is on this line; the files are stored in vertical order
            pvp=float(f00[y].rstrip('\n')) #maybe even this one
            #and i do stuff with those values...
In line 12, I get
xvp=float(f[c].rstrip('\n'))
IndexError: list index out of range
note: there are 100 numbers stored on separate lines in the .txt's
please, if there is any way to help you help me, let me know
thanks
You seem to be incrementing c two thousand times (20 times 100 -- actually only 1900 times, since range(1,20) will not reach the value 20, as you seem to desire in a comment) -- so of course you're going out of range if you use it to index a list of 100! The whole code is rather a mess and I suggest refactoring it radically, to avoid exec and do things the Python way. Assuming Python 2.6 or better (in 2.5, you need a from __future__ import with_statement at the start of your module):
f00 = open('file00.txt').readlines()

for w in range(1, 21):
    with open('file%02d.txt' % w) as f:
        for line in f:
            xvp = float(line)
            for line00 in f00:
                rvp = float(line00)
                do_stuff(xvp, rvp)
I don't know if this is the logic you want -- coupling every line of file00.txt with each line from the 20 other files -- but at least this makes it clear which lines are coupled up with which;-). If what you want is to only couple the first line of file00.txt with the first line from each of the others, then second line with second lines, etc, then add import itertools at the start of your module and change the contents of the with into:
for line00, line in itertools.izip(f00, f):
    rvp = float(line00)
    xvp = float(line)
    do_stuff(xvp, rvp)
and so forth.
Note that I'm reading all of file00.txt in memory once and for all (into the f00 list of lines) because apparently you need to loop on those contents more than once, but that's not needed for the other files.
An obvious optimization is to convert file00.txt's lines to floats only once, replacing the f00 = statement with
with open('file00.txt') as f:
    rvps = [float(line) for line in f]
then use rvps directly instead of repeating the conversion every time on the strings in f00 -- for example, in the second version (the one using itertools.izip):
for rvp, line in itertools.izip(rvps, f):
    xvp = float(line)
    do_stuff(xvp, rvp)
Edit: I see I've done a number of tiny enhancements while hardly realizing I was doing so, maybe I'd better explain them;-). No need to pass 'r' when opening a file for reading (can't hurt, but it's quite idiomatic to omit it). No need to strip trailing (or for that matter leading) whitespace from a string before calling float on it -- float happily skips all such leading and trailing whitespace itself. I did fix what apparently was another bug (you'd never deal with file20.txt) by fixing the applicable range to range(1, 21).
The with open(...) as f: statements do the opening, bind name f to the open file object, and, as soon as the block of statements they control is finished, guarantee that the file is properly closed -- it should almost invariably be used in preference to a stand-alone open, because ensuring all files are closed ASAP is really very good practice (the with statement has many other excellent use cases, but this is the single most frequent one, and the only one that happens to be necessary for this functionality).
Looping directly on an open file object f (provided the file is opened in text mode, as is the default and applies throughout here), for line in f:, provides one after the other the lines of f (without ever needing to keep them all in memory at once) and is an extremely popular and good Pythonic idiom.
The construct rvps = [float(line) for line in f], which I use in my recommended optimization, is known as a "list comprehension" and it's a nicely speedy and compact alternative to a loop that builds a new list.
itertools.izip, given a number of iterables, provides a single iterable whose items are tuples made by the items of the other iterables "walked in lockstep". The built-in zip is similar, but (in Python 2) it builds a list in memory, which itertools.izip avoids, so it's good practice to learn to use the itertools version to avoid wasting memory (not really important for small files like the ones you have, but good habits are best learned and "just applied" rather than having to reflect on them every single time -- just as one doesn't start every morning pondering whether one should brush one's teeth, but just goes and does so as a matter of good habit;-).
I'm sure there's more, but this is what comes to mind off-hand - feel free to ask if I can be of further assistance!
there are 100 numbers stored on separate lines in the .txt's
but in
for w in range(1,20): # i want files file01-file20 excluding file00
    for x in range(100):
        c=c+1 #counter to keep list position on f=0
you are incrementing c 19*100 = 1900 times.
Maybe you need c = 0 inside the "w" loop, or just use x instead of c?
Based on how you describe your files, you are indexing into them incorrectly: you use c, which is incremented on every iteration of the inner loop and reaches values of up to 1900, far past the 100 lines in each file. Using x seems to be the logical choice.
#restructured for efficiency
file = open('file00.txt','r')
f00 = file.readlines() #no need to reopen the file for every iteration
file.close() #always close the file when done with

for w in range(1,20):
    file = open('file%02d.txt'%w,'r')
    f = file.readlines() #only open once per iteration
    file.close()
    for x in range(100):
        xvp = float(f[x].rstrip('\n'))
        for y in range(100):
            pvp = float(f00[y].rstrip('\n'))
            #do stuff
