I have two files, both very big. The files contain information that is mixed up between them, and I need to compare the two files and connect the lines that intersect.
An example would be:
1st file has
var1:var2:var3
2nd would have
var2:var3:var4
I need to connect these in a third file with output: var1:var2:var3:var4.
Please note that the lines do not line up: var4, which should go with var1 (since var2 and var3 are common to both lines), could be far away in these huge files.
I need to find a way to compare each line and connect it to the corresponding one in the 2nd file. I can't seem to come up with an adequate loop. Any ideas?
Try the following (assuming var2:var3 is always a unique key in both files):
1) Iterate over all lines in the first file, adding each entry to a dictionary with var2:var3 as the key (and var1 as the value).
2) Iterate over all lines in the second file; if the dictionary from step 1 contains an entry for the key var2:var3, write var1:var2:var3:var4 to the output file and delete the entry from the dictionary.
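A minimal sketch of those steps, assuming the colon-separated layout from the question (file names here are placeholders):
keys = {}
with open('file1') as f1:
    for line in f1:
        var1, var2, var3 = line.rstrip('\n').split(':')
        keys[(var2, var3)] = var1            # var2:var3 is the unique key
with open('file2') as f2, open('file3', 'w') as out:
    for line in f2:
        var2, var3, var4 = line.rstrip('\n').split(':')
        var1 = keys.pop((var2, var3), None)  # look up and delete the entry
        if var1 is not None:
            out.write(':'.join((var1, var2, var3, var4)) + '\n')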
This approach can use a very large amount of memory (the dictionary holds a key for every line of the first file) and therefore should probably not be used for very large files.
Based on the specific fields you said that you want to match (2 & 3 from file 1, 1 & 2 from file 2):
#!/usr/bin/python3
# Iterate over every line in file1.
# Iterate over every line in file2.
# If lines intersect, print combined line.
with open('file1') as file1:
    for line1 in file1:
        u1, h1, s1 = line1.rstrip().split(':')
        with open('file2') as file2:
            for line2 in file2:
                h2, s2, p2 = line2.rstrip().split(':')
                if h1 == h2 and s1 == s2:
                    print(':'.join((u1, h1, s2, p2)))
This is horrendously slow (in theory), but uses a minimum of RAM. If the files aren't absolutely huge, it might not perform too badly.
If memory isn't a problem, use a dictionary where the key is the same as the value:
#!/usr/bin/python
out_dict = {}
with open('file1', 'r') as file_in:
    lines = file_in.readlines()
    for line in lines:
        out_dict[line] = line
with open('file2', 'r') as file_in:
    lines = file_in.readlines()
    for line in lines:
        out_dict[line] = line
with open('output_file', 'w') as file_out:
    for key in out_dict:
        file_out.write(key)
Related
I have 2 csv files, each with 2 rows and 3 columns (id, name, value), that I want to compare. If there's a new row added to one of the files, the other one should be updated as well. Likewise, if a value in one of the columns changes, the other file should be updated.
Here's what I tried
import csv

a = '/path/to/file'
b = 'path/to/file'
with open(a, 'r') as f1, open(b, 'r') as f2:
    file1 = csv.DictReader(f1)
    file2 = csv.DictReader(f2)
    for row_new in file2:
        for row_old in file1:
            if row_new['id'] == row_old['id']:
                for k1, v1 in row_new.items():
                    for k, v in row_old.items():
                        if row_old[k1] == row_new[k]:
                            if v1 != v:
                                print(f'updated value for col {k}')
                                v1 = v
                            else:
                                print('Nothing to update')
            else:
                print(f'create row {row_new["id"]}')
I noticed that the iteration takes place only once. Am I doing something wrong here?
I noticed that the iteration takes place only once...?
The inner loop is probably reaching the end of the file before the outer loop has a chance to make its next iteration. Try moving the file object's pointer back to the beginning after the inner loop stops.
with open(a, 'r') as f1, open(b, 'r') as f2:
    ...
    for row_new in file2:
        for row_old in file1:
            if row_new['id'] == row_old['id']:
                ...
            else:
                print(f'create row {row_new["id"]}')
        f1.seek(0)
Some would say that the nested for loops are what you are doing wrong. Here are some SO questions/answers to consider.
python update a column value of a csv file according to another csv file
Python Pandas: how to update a csv file from another csv file
search python csv update one csv file based on another site:stackoverflow.com
Basically you should try to just read each file once and use data types that allow for fast membership testing like sets or dicts.
Your DictReaders will give you an {'id':x,'name':y,'value':z} dict for each row - causing you to use nested for loops to compare all the rows from one file to each row in the other. You could create a single dictionary using the id column for the keys and the dictionary values could be lists - {id:[name,value],id:[name,value],...} which may make the processing easier.
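For example, here is a rough sketch of that single-pass idea (the load helper is made up; 'id', 'name' and 'value' are the column headers from the question):
import csv

def load(path):
    # Read a whole CSV once into {id: [name, value]}.
    with open(path, newline='') as f:
        return {row['id']: [row['name'], row['value']] for row in csv.DictReader(f)}

old_rows = load(a)   # a and b are the file paths from the question
new_rows = load(b)
for row_id, values in new_rows.items():
    if row_id not in old_rows:
        print(f'create row {row_id}')
    elif old_rows[row_id] != values:
        print(f'updated values for row {row_id}')
Because the ids are dictionary keys, membership tests and lookups are constant time, so each file is only read once.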
You also opened both your files for reading, open(...,'r'), so you'll probably find your files unchanged after you fix other things.
I need to write script in python that accept and merge 2 files to a new file according to the following rule:
1) Take one word from the 1st file followed by two words from the second file.
2) When we reach the end of one file, copy the rest of the other file to the merged file without change.
I wrote a script, but I only managed to take one word from each file.
A complete script would be nice, but I really want to understand in words how I can do this on my own.
This is what i wrote:
def exercise3(file1, file2):
    lstFile1 = readFile(file1)
    lstFile2 = readFile(file2)
    with open("mergedFile", 'w') as outfile:
        merged = [j for i in zip(lstFile1, lstFile2) for j in i]
        for word in merged:
            outfile.write(word)

def readFile(filename):
    lines = []
    with open(filename) as file:
        for line in file:
            line = line.strip()
            for word in line.split():
                lines.append(word)
    return lines
Your immediate problem is that zip alternates items from the iterables you give it: in short, it's a 1:1 mapping, where you need 1:2. Try this:
lstFile2a = lstFile2[0::2]
lstFile2b = lstFile2[1::2]
... zip(lstFile1, lstFile2a, lstFile2b)
This is a bit inefficient, but gets the job done.
Another way is to zip up pairs (2-tuples) in lstFile2 before zipping it with lstFile1. A third way is to forget zipping altogether, and run your own indexing:
for i in range(min(len(lstFile1), len(lstFile2) // 2)):
    outfile.write(lstFile1[i])
    outfile.write(lstFile2[2*i])
    outfile.write(lstFile2[2*i + 1])
However, this leaves you with the leftovers of the longer file to handle.
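For illustration, a rough sketch of the pair-zipping idea that also copies the leftovers (the merge_words name is made up, and I write spaces between words, which your original write calls did not):
def merge_words(words1, words2, outpath='mergedFile'):
    # Pair up consecutive words of the second list: [a, b, c, d] -> [(a, b), (c, d)]
    pairs = list(zip(words2[0::2], words2[1::2]))
    n = min(len(words1), len(pairs))
    with open(outpath, 'w') as outfile:
        for word, (w2a, w2b) in zip(words1, pairs):
            outfile.write(' '.join((word, w2a, w2b)) + ' ')
        # Rule 2: copy the rest of whichever input is longer, unchanged.
        leftover = words1[n:] if len(words1) > n else words2[2*n:]
        outfile.write(' '.join(leftover))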
These aren't particularly elegant, but they should get you moving.
I have two ASCII text files with columnated data. The first column of both files is a 'name' that is consistent across both files. One file has some 6000 rows, the other only has 800. Without doing a for line in file.readlines(): approach - e.g.,
with open('big_file.txt') as catalogue:
    with open('small_file.txt') as targets:
        for tline in targets.readlines()[2:]:
            name = tline.split()[0]
            for cline in catalogue.readlines()[8:]:
                if name == cline.split()[0]:
                    print cline
                    catalogue.seek(0)
                    break
is there an efficient way to return only the rows (or lines) from the larger file that also appear in the smaller file (using the 'name' as the check)?
It's okay if it's done one row at a time, say with a file.write(matching_line); the idea is to create a third file with all the info from the large file for only the objects that are in the small file.
for line in file.readlines() is not inherently bad. What's bad is the nested loops you have there. You can use a set to keep track of and check all the names in the smaller file:
s = set()
for line in targets:
    s.add(line.split()[0])
Then, just loop through the bigger file and check if the name is in s:
for line in catalogue:
    if line.split()[0] in s:
        print line
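Putting the two pieces together with the header skipping and the third output file from the question (the line counts are the question's; 'matched.txt' is a made-up name):
names = set()
with open('small_file.txt') as targets:
    for line in targets.readlines()[2:]:      # skip the header lines
        if line.split():                      # ignore blank lines
            names.add(line.split()[0])

with open('big_file.txt') as catalogue, open('matched.txt', 'w') as out:
    for line in catalogue.readlines()[8:]:    # skip the header lines
        if line.split() and line.split()[0] in names:
            out.write(line)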
I have multiple files which I need to open and read (I thought it might be easier with fileinput.input()). The files contain non-relevant information at the very beginning; what I need is all the information below the specific line ID[tab]NAME[tab]GEO[tab]FEATURE (sometimes starting at line 32, but unfortunately sometimes at any other line). I then want to store those rows in a list ("entries"):
ID[tab]NAME[tab]GEO[tab]FEATURE
1 aa us A1
2 bb ko B1
3 cc ve C1
.
.
.
Right now I read starting from line 32 (see code below), but I would like to start reading from the line right after the header line instead. Is it possible to do this with fileinput, or am I going the wrong way? Is there another, simpler way to do this? Here is my code so far:
import fileinput

entries = list()
for line in fileinput.input():
    if fileinput.filelineno() > 32:
        entries.append(line.strip().split("\t"))
I'm trying to implement this idea with Python 3.2
UPDATE:
Here is how my code looks now, but still out of range. I need to add some of the entries to a dictionary. Am I missing something?
import collections
import fileinput

filelist = fileinput.input()
entries = []
for fn in filelist:
    for line in fn:
        if line.strip() == "ID\tNAME\tGEO\tFEATURE":
            break
    entries.extend(line.strip().split("\t") for line in fn)
dic = collections.defaultdict(set)
for e in entries:
    dic[e[1]].add(e[3])
Error:
dic[e[1]].add(e[3])
IndexError: list index out of range
Just iterate through the file looking for the marker line and add everything after that to the list.
EDIT Your second problem happens because not all of the lines in the original file split to at least four fields. A blank line, for instance, results in an empty list, so e[1] is invalid. I've updated the example with a nested iterator that filters out lines that are not the right size. You may want to do something different (maybe strip empty lines but otherwise assert that the remaining lines must split to exactly four columns), but you get the idea.
entries = []
for fn in filelist:   # filelist is a list of file names
    with open(fn) as fp:
        for line in fp:
            if line.strip() == 'ID\tNAME\tGEO\tFEATURE':
                break
        # entries.extend(line.strip().split('\t') for line in fp)
        entries.extend(items for items in (line.strip().split('\t') for line in fp) if len(items) >= 4)
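If you'd rather stay with fileinput itself, here is a rough sketch of the same idea (assuming each file's data starts right after its header line):
import collections
import fileinput

entries = []
in_data = False
for line in fileinput.input():
    if fileinput.isfirstline():          # reset at the start of each new file
        in_data = False
    if not in_data:
        in_data = line.strip() == 'ID\tNAME\tGEO\tFEATURE'
        continue
    items = line.strip().split('\t')
    if len(items) >= 4:                  # skip blank or short lines
        entries.append(items)

dic = collections.defaultdict(set)
for e in entries:
    dic[e[1]].add(e[3])                  # NAME -> set of FEATUREs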
I would like to write a python script that addresses the following problem:
I have two tab separated files: one has just one column with a variety of words; the other file has one column that contains similar words, as well as columns of other information. However, within the first file, some lines contain multiple words, separated by " /// ". The other file has a similar problem, but the separator is " | ".
File #1
RED
BLUE /// GREEN
YELLOW /// PINK /// PURPLE
ORANGE
BROWN /// BLACK
File #2 (which contains additional columns of other measurements)
RED|PINK
ORANGE
BROWN|BLACK|GREEN|PURPLE
YELLOW|MAGENTA
I want to parse through each file and match the words that are the same, and then also append the columns of additional measurements. But I want to ignore the /// in the first file and the | in the second, so that each word is compared to the other list on its own. The output file should have just one column of any words that appear in both lists, followed by the appended additional information from file 2. Any help?
Addition info / update:
Here are 8 lines of File #1. I used color names above to keep it simple, but this is what the words really are; these are the "symbols":
ANKRD38
ANKRD57
ANKRD57
ANXA8 /// ANXA8L1 /// ANXA8L2
AOF1
AOF2
AP1GBP1
APOBEC3F /// APOBEC3G
Here is one line of file #2. What I need to do is take each symbol from file1 and see if it matches any one of the "synonyms" found in column 5 of file2 (here the synonyms are A1B|ABG|GAB|HYST2477). If any symbol from file1 matches ANY of the synonyms from column 5 of file2, then I need to append the additional information (the other columns in file2) onto the symbol from file1 and create one big output file.
9606 '\t' 1 '\t' A1BG '\t' - '\t' A1B|ABG|GAB|HYST2477'\t' HGNC:5|MIM:138670|Ensembl:ENSG00000121410|HPRD:00726 '\t' 19 '\t' 19q13.4'\t' alpha-1-B glycoprotein '\t' protein-coding '\t' A1BG'\t' alpha-1-B glycoprotein'\t' O '\t' alpha-1B-glycoprotein '\t' 20120726
File2 is 22,000 KB; file1 is much smaller. I have thought of creating a dict much like what has been suggested, but I keep getting held up by the different separators in each of the files. Thank you all for the questions and help thus far.
EDIT
After your comments below, I think this is what you want to do. I've left the original post below in case anything in that was useful to you.
So, I think you want to do the following. Firstly, this code will read every separate synonym from file1 into a set - this is a useful structure because it will automatically remove any duplicates, and is very fast to look things up. It's like a dictionary but with only keys, no values. If you don't want to remove duplicates, we'll need to change things slightly.
file1_data = set()
with open("file1.txt", "r") as fd:
    for line in fd:
        file1_data.update(i.strip() for i in line.split("///") if i.strip())
Then you want to run through file2 looking for matches:
with open("file2.txt", "r") as in_fd:
with open("output.txt", "w") as out_fd:
for line in in_fd:
items = line.split("\t")
if len(items) < 5:
# This is so we don't crash if we find a line that's too short
continue
synonyms = set(i.strip() for i in items[4].split("|"))
overlap = synonyms & file1_data
if overlap:
# Build string of columns from file2, stripping out 5th column.
output_str = "\t".join(items[:4] + items[5:])
for item in overlap:
out_fd.write("\t".join((item, output_str)))
So what this does is open file2 and an output file. It goes through each line in file2, and first checks it has enough columns to at least have a column 5 - if not, it ignores that line (you might want to print an error).
Then it splits column 5 by | and builds a set from that list (called synonyms). The set is useful because we can find the intersection of this with the previous set of all the synonyms from file1 very fast - this intersection is stored in overlap.
What we do then is check if there was any overlap - if not, we ignore this line because no synonym was found in file1. This check is mostly for speed, so we don't bother building the output string if we're not going to use it for this line.
If there was an overlap, we build a string which is the full list of columns we're going to append to the synonym - we can build this as a string once even if there's multiple matches because it's the same for each match, because it all comes from the line in file2. This is faster than building it as a string each time.
Then, for each synonym that matched in file1, we write to the output a line which is the synonym, then a tab, then the rest of the line from file2. Because we split by tabs, we have to put them back in with "\t".join(...). This assumes I'm correct that you want to remove column 5 - if you don't want to remove it, it's even easier, because you can just use the line from file2 with the newline stripped off the end.
Hopefully that's closer to what you need?
ORIGINAL POST
You don't give any indication of the size of the files, but I'm going to assume they're small enough to fit into memory - if not, your problem becomes slightly trickier.
So, the first step is probably to open file #2 and read in the data. You can do it with code something like this:
file2_data = {}
with open("file2.txt", "r") as fd:
    for line in fd:
        items = line.split("\t")
        file2_data[frozenset(i.strip() for i in items[0].split("|"))] = items[1:]
This will create file2_data as a dictionary which maps the set of words in the first column on to a list of the remaining items on that line. You should also consider whether words can repeat and how you wish to handle that, as I mentioned in my earlier comment.
After this, you can then read the first file and attach the data to each word in that file:
with open("file1.txt", "r") as fd:
with open("output.txt", "w") as fd_out:
for line in fd:
words = set(i.strip() for i in line.split("///"))
for file2_words, file2_cols in file2_data.iteritems():
overlap = file2_words & words
if overlap:
fd_out.write("///".join(overlap) + "\t" + "\t".join(file2_cols))
What you should end up with is each row in output.txt being one where the list of words in the two files had at least one word in common and the first item is the words in common separated by ///. The other columns in that output file will be the other columns from the matched row in file #2.
If that's not what you want, you'll need to be a little more specific.
As an aside, there are probably more efficient ways to do this than the O(N^2) approach I outlined above (i.e. it runs across one entire file as many times as there are rows in the other), but that requires more detailed information on how you want to match the lines.
For example, you could construct a dictionary mapping a word to a list of the rows in which that word occurs - this makes it a lot faster to check for matching rows than the complete scan performed above. This is rendered slightly fiddly by the fact you seem to want the overlaps between the rows, however, so I thought the simple approach outlined above would be sufficient without more specifics.
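For illustration, a sketch of that word-to-rows dictionary, using the same file2 layout as the code above (file names are again placeholders):
from collections import defaultdict

word_to_rows = defaultdict(list)
with open("file2.txt", "r") as fd:
    for line in fd:
        items = line.rstrip("\n").split("\t")
        for word in items[0].split("|"):
            word_to_rows[word.strip()].append(items[1:])

# Each word from file1 is now a single dictionary lookup
# instead of a scan over all of file2's rows:
with open("file1.txt", "r") as fd, open("output.txt", "w") as out:
    for line in fd:
        for word in (w.strip() for w in line.split("///")):
            for cols in word_to_rows.get(word, []):
                out.write("\t".join([word] + cols) + "\n")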
Look at http://docs.python.org/2/tutorial/inputoutput.html for file i/o
Loop through each line in each file
file1set = set(file1line.split(' /// '))
file2set = set(file2line.split('|'))
wordsineach = list(file1set & file2set)
split will create an array of the color names
set() turns it into a set so we can easily compare differences in each line
Loop over 'wordsineach' and write to your new file
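Pulled together, that could look something like this (file names and 'new_file.txt' are placeholders; this compares every line of one file against every line of the other):
with open('file1.txt') as f1:
    sets1 = [set(line.strip().split(' /// ')) for line in f1]
with open('file2.txt') as f2:
    sets2 = [set(line.strip().split('|')) for line in f2]

with open('new_file.txt', 'w') as out:
    for file1set in sets1:
        for file2set in sets2:
            for word in file1set & file2set:
                out.write(word + '\n')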
Use the str.replace function
with open('file1.txt', 'r') as f1:
    content1 = f1.read()
    content1 = content1.replace(' /// ', '\n').split('\n')
with open('file2.txt', 'r') as f2:
    content2 = f2.read()
    content2 = content2.replace('|', '\n').split('\n')
Then use a list comprehension
common_words = [i for i in content1 if i in content2]
However, if you already know that no words are repeated within each file, you can use set intersection to make life easier:
common_words = list(set(content1) & set(content2))
Then to output the result to another file:
common_words = [i + '\n' for i in common_words]  # so that we print each word on a new line
with open('common_words.txt', 'w') as f:
    f.writelines(common_words)
As to your 'additional information', I cannot help you unless you tell us how it is formatted, etc.