I have a corpus of words like the ones below. There are more than 3000 words, split across 2 files:
File #1:
#fabulous 7.526 2301 2
#excellent 7.247 2612 3
#superb 7.199 1660 2
#perfection 7.099 3004 4
#terrific 6.922 629 1
#magnificent 6.672 490 1
File #2:
) #perfect 6.021 511 2
? #great 5.995 249 1
! #magnificent 5.979 245 1
) #ideal 5.925 232 1
day #great 5.867 219 1
bed #perfect 5.858 217 1
) #heavenly 5.73 191 1
night #perfect 5.671 180 1
night #great 5.654 177 1
. #partytime 5.427 141 1
I also have many sentences, more than 3000 lines like the ones below:
superb, All I know is the road for that Lomardi start at TONIGHT!!!! We will set a record for a pre-season MNF I can guarantee it, perfection.
All Blue and White fam, we r meeting at Golden Corral for dinner to night at 6pm....great
I have to go through every line and do the following tasks:
1) find whether the corpus words match anywhere in the sentence
2) find whether the corpus words match the leading and trailing word of the sentence
I am able to do part 2) but not part 1). I can do it, but I am looking for an efficient way.
I have the following code:
import re
import sys

for line in sys.stdin:
    (id, num, senti, words) = re.split("\t+", line.strip())
    sentence = re.split("\s+", words.strip().lower())
    for line1 in f1:  # f1 is the file containing the corpus of words like File #1
        (term2, sentimentScore, numPos, numNeg) = re.split("\t", line1.strip())
        wordanalysis["trail"] = found if re.match(sentence[len(sentence) - 1], term2.lower()) else not found
        wordanalysis["lead"] = found if re.match(sentence[0], term2.lower()) else not found
    for line1 in f2:  # f2 is the file containing the corpus of words like File #2
        (term2, sentimentScore, numPos, numNeg) = re.split("\t", line1.strip())
        wordanalysis["trail_2"] = found if re.match(sentence[len(sentence) - 1], term2.lower()) else not found
        wordanalysis["lead_2"] = found if re.match(sentence[0], term2.lower()) else not found
Am I doing this right? Is there a better way to do it?
This is a classic MapReduce problem. If you want to get serious about efficiency, you should consider something like: http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
And if you are too lazy / have too few resources to set up your own Hadoop environment, you can try a ready-made one: http://aws.amazon.com/elasticmapreduce/
Feel free to post your code here after it's done :) it will be nice to see how it is translated into a MapReduce algorithm...
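For reference, here is a minimal sketch of what the mapper half could look like with Hadoop Streaming for part 1) ("match anywhere"). It assumes the same tab-separated sentence input as in the question and that the File #1 corpus is shipped alongside the job as a local file; the corpus.txt name and the punctuation stripping are illustrative assumptions, not part of the original post. A reducer would then aggregate the emitted (id, term) pairs.

#!/usr/bin/env python
# Sketch of a Hadoop Streaming mapper for the "match anywhere" part.
# Assumptions: input lines are tab-separated (id, num, senti, words) and
# the File #1 corpus is available locally as corpus.txt (name made up).
import re
import sys

# Load the corpus terms once into a set for O(1) membership tests.
with open("corpus.txt") as fh:
    corpus = set(re.split(r"\t+", l.strip())[0].lstrip("#").lower()
                 for l in fh if l.strip())

for line in sys.stdin:
    try:
        tweet_id, num, senti, words = re.split(r"\t+", line.strip())
    except ValueError:
        continue                                # skip malformed lines
    for w in re.split(r"\s+", words.lower()):
        w = w.strip("#.,!?")                    # drop simple punctuation
        if w in corpus:
            print("%s\t%s" % (tweet_id, w))     # emit id<TAB>matched term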
Since I have a file which is huge (several GBs), I would not like to load the whole thing into memory; instead I want to use generators to load it line by line. My file is something like this:
# millions of lines
..................
..................
keyw 28899
2233 121 ee 0o90 jjsl
2321 232 qq 0kj9 jksl
keyw 28900
3433 124 rr 8hu9 jkas
4532 343 ww 3ko9 aslk
1098 115 uy oiw8 rekl
keyw 29891
..................
..................
# millions more
So far I have found a similar answer here. But I am lost as to how to implement it, because that answer uses specific Start and Stop identifiers, whereas my file has an incrementing number after an identical keyword. I would like some help with this.
Edit: Generators not iterators
If you want to adapt that answer, this may help:
bucket = []
for line in infile:
    if line.split()[0] == 'keyw':
        for strings in bucket:
            outfile.write(strings + '\n')
        bucket = []
        continue
    bucket.append(line.strip())
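Since the question specifically asks for generators, here is a minimal sketch of the same grouping idea written as a generator function. The function name keyw_blocks and the file name in the usage comment are made up for illustration, and it assumes every block starts with a line whose first token is 'keyw':

def keyw_blocks(infile):
    # Lazily yield one list of stripped lines per 'keyw' block,
    # without ever holding more than one block in memory.
    bucket = []
    for line in infile:
        fields = line.split()
        if fields and fields[0] == 'keyw':
            if bucket:
                yield bucket              # emit the previous block
            bucket = [line.strip()]       # keep the header with its block
        elif fields:                      # skip blank separator lines
            bucket.append(line.strip())
    if bucket:
        yield bucket                      # emit the final block

# usage:
# with open('data.txt') as infile:
#     for block in keyw_blocks(infile):
#         do_something_with(block)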
So I want to count the occurrences of certain words, per line, in a text file. How many times each specific word occurred doesn't matter, just how many times any of them occurred per line. I have a file containing a list of words, delimited by the newline character. It looks like this:
amazingly
astoundingly
awful
bloody
exceptionally
frightfully
.....
very
I then have another text file containing lines of text. Let's say, for example:
frightfully frightfully amazingly Male. Don't forget male
green flag stops? bloody bloody bloody bloody
I'm biased.
LOOKS like he was headed very
green flag stops?
amazingly exceptionally exceptionally
astoundingly
hello world
I want my output to look like:
3
4
0
1
0
3
1
0
Here's my code:
def checkLine(line):
    count = 0
    with open("intensifiers.txt") as f:
        for word in f:
            if word[:-1] in line:
                count += 1
    print count

for line in open("intense.txt", "r"):
    checkLine(line)
Here's my actual output:
4
1
0
1
0
2
1
0
any ideas?
How about this:
def checkLine(line):
    with open("intensifiers.txt") as fh:
        line_words = line.rstrip().split(' ')
        check_words = [word.rstrip() for word in fh]
        print sum(line_words.count(w) for w in check_words)

for line in open("intense.txt", "r"):
    checkLine(line)
Output:
3
4
0
1
0
3
1
0
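One small design note: the snippet above re-opens intensifiers.txt once for every input line. If the word list is large, a variant that reads it once and reuses it (a rough sketch below, using the same file names) produces the same output:

# Read the word list a single time, then count matches line by line.
with open("intensifiers.txt") as fh:
    check_words = [word.rstrip() for word in fh]

with open("intense.txt") as fh:
    for line in fh:
        line_words = line.rstrip().split(' ')
        print(sum(line_words.count(w) for w in check_words))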
I am coding a Python script that collects words from a text file (a PDB file) and later gathers them into phrases. However, as I am just a beginner in programming, I'm having huge difficulties doing it. I only know how to do it one line at a time. I wish you guys could give me some help, please.
The text has information about sites of a protein. Each site has four dedicated lines of information, as you can see below:
REMARK 800
REMARK 800 SITE_IDENTIFIER: CC1
REMARK 800 EVIDENCE_CODE: SOFTWARE
REMARK 800 SITE_DESCRIPTION: BINDING SITE FOR RESIDUE EDO A 326
REMARK 800
REMARK 800 SITE_IDENTIFIER: DF8
REMARK 800 EVIDENCE_CODE: AUTHOR
REMARK 800 SITE_DESCRIPTION: BINDING SITE FOR RESIDUE HEM T 238
REMARK 800
REMARK 800 SITE_IDENTIFIER: FC7
REMARK 800 EVIDENCE_CODE: SOFTWARE
REMARK 800 SITE_DESCRIPTION: BINDING SITE FOR RESIDUE NAG D 1001
#and so on ...
An extended example can be seen at the following link (search for "REMARK 800"):
http://www.pdb.org/pdb/files/3HDL.pdb
As observed:
The 1st line has nothing (it just separates one record from the next).
The 2nd has the SITE_IDENTIFIER (e.g. CC1).
The 3rd, the EVIDENCE_CODE (e.g. SOFTWARE).
The 4th, some RESIDUE information (e.g. EDO A 326).
This pattern is repeated through a large part of the text.
What I want to do is gather some words from three of the four consecutive dedicated lines, in such a way that they are put together in one phrase. The necessary information is the SITE_IDENTIFIER, the EVIDENCE_CODE, and 3 words from the SITE_DESCRIPTION. Thus, for the text excerpt above, the resulting phrases would be something like this:
CC1 SOFTWARE EDO A 326
DF8 AUTHOR HEM T 238
FC7 SOFTWARE NAG D 1001
#and so on...
Is it possible to do? If so, can you guys imagine how I can do this?
I tried doing it this way, but I feel like it is not going to work at all:
name_file = "3HDL.pdb"
pdb_file = open(name_file,"r")
for line in pdb_file:
list = line.split()
list_2=[]
for j in range(0, 15):
list_2.append("")
if (list[0] == "REMARK" and list[1] == "800"):
j=0
while not j == len(list):
list_2[j] = list[j]
j+=1
n=1
if(list_2[0] == "REMARK" and list_2[1] == "800" and list_2[2] == "SITE_IDENTIFIER:"):
n+=1
print("Site", str(n) + ":", list_2[3])
print("ok" + "\n")
As you can see, I am really a beginner.
Sorry about any grammar problems and thank you very much.
How about something like this:
import re

f = open("3HDL.pdb", "r")
for line in f:
    m = re.search(r"REMARK 800 SITE_IDENTIFIER: (.+)", line)
    if m:
        site_id = m.group(1).strip()
    else:
        m = re.search(r"REMARK 800 EVIDENCE_CODE: (.+)", line)
        if m:
            evidence_code = m.group(1).strip()
        else:
            m = re.search(r"REMARK 800 SITE_DESCRIPTION: (.+)", line)
            if m:
                site_descrip = m.group(1).strip()
                print site_id, evidence_code, site_descrip
f.close()
Or, if you want to avoid using the regex module:
f = open("3HDL.pdb", "r")
for line in f:
if line.startswith("REMARK 800"):
if line.startswith("SITE_IDENTIFIER:", 11):
site_id = line[28:].rstrip()
elif line.startswith("EVIDENCE_CODE:", 11):
evidence_code = line[26:].rstrip()
elif line.startswith("SITE_DESCRIPTION:", 11):
site_descrip = line[29:].rstrip()
print site_id, evidence_code, site_descrip
f.close()
Here we assume the wanted content is the last word of lines 2 and 3 and the last 3 words of line 4.
name_file = "3HDL.pdb"
pdb_file = open(name_file,"r")
output = []
for linenum, line in enumerate(pdb_file):
if linenum % 4 ==0:
continue
elif linenum % 4 == 1:
output.append(line.split()[-1])
elif linenum % 4 == 2:
output.append(line.split()[-1])
elif linenum % 4 == 3:
output.extend(line.split()[-3:])
for i in range(len(output)/6):
print ' '.join(output[i:i+6])
I have a huge input file that looks like this,
c651 OS05T0-00 492 749 29.07
c651 OS01T0-00 1141 1311 55.00
c1638 MLOC_8.3 27 101 72.00
c1638 MLOC_8.3 25 117 70.97
c2135 TRIUR3_3-P1 124 210 89.66
c2135 EMT17965 25 117 70.97
c1914 OS02T0-00 2 109 80.56
c1914 OS02T0-00 111 155 93.33
c1914 OS08T0-00 528 617 50.00
I would like to iterate over each c, check whether it has the same elements in line[1], and print to 2 separate files:
c's whose lines all contain the same element, and
c's whose lines do not all contain the same element.
In the case of c1914, since it has 2 elements the same and 1 that is not, it goes to file 2. So the 2 desired output files will look like this, file1.txt:
c1638 MLOC_8.3 27 101 72.00
c1638 MLOC_8.3 25 117 70.97
file2.txt
c651 OS05T0-00 492 749 29.07
c651 OS01T0-00 1141 1311 55.00
c2135 TRIUR3_3-P1 124 210 89.66
c1914 OS02T0-00 2 109 80.56
c1914 OS02T0-00 111 155 93.33
c1914 OS08T0-00 528 617 50.00
This is what I tried,
oh1 = open('result.txt', 'w')
oh2 = open('result2.txt', 'w')
f = open('file.txt', 'r')
lines = f.readlines()
for line in lines:
    new_list = line.split()
    protein = new_list[1]
    for i in range(1, len(protein)):
        (p, c) = protein[i-1], protein[i]
        if c == p:
            new_list.append(protein)
            oh1.write(line)
        else:
            oh2.write(line)
If I understand you correctly, you want to send all lines of your input file that share a first element txt1 to your first output file if the second element txt2 is the same on all of those lines; otherwise all those lines go to the second output file. Here is a program that does that.
from collections import defaultdict

# Read in the file line-by-line for the first time.
# Build up a dictionary from txt1 to the set of txt2 s.
txt1totxt2 = defaultdict(set)
f = open('file.txt', 'r')
for line in f:
    lst = line.split()
    txt1 = lst[0]
    txt2 = lst[1]
    txt1totxt2[txt1].add(txt2)

# The dictionary tells us whether the second text
# is unique or not. If it's unique the set has
# just one element; otherwise the set has > 1 elts.

# Read in the file for a second time, sending each line
# to the appropriate output file.
f.seek(0)
oh1 = open('result1.txt', 'w')
oh2 = open('result2.txt', 'w')
for line in f:
    lst = line.split()
    txt1 = lst[0]
    if len(txt1totxt2[txt1]) == 1:
        oh1.write(line)
    else:
        oh2.write(line)
The program logic is very simple. For each txt1 it builds up the set of txt2s that it sees. When you're done reading the file, if the set has just one element, then you know the txt2s are unique; if the set has more than one element, then there are at least two different txt2s. Note that this means that if you only have one line in the input file with a particular txt1, it will always be sent to the first output file. There are ways around this if that is not the behaviour you want.
Note also that because the file is large, I've read it in line by line: lines=f.readlines() in your original program reads the whole file into memory at once. I've stepped through the file twice: the second pass does the output. If this increases the run time, you can restore the lines=f.readlines() approach instead of reading the file a second time. However, the program as it stands should be much more robust for very large files. Conversely, if your files are very large indeed, it would be worth looking at reducing the memory usage further (the dictionary txt1totxt2 could be replaced with something more optimal, albeit more complicated, if necessary).
Edit: there was a good point in the comments (now deleted) about the memory cost of this algorithm. To elaborate, the memory usage could be high, but it isn't as severe as storing the whole file: rather, txt1totxt2 is a dictionary from the first text of each line to the set of second texts, so its size is of the order of (number of unique first texts) * (average number of unique second texts per first text). This is likely to be much smaller than the file size, but the approach may require further optimization. The approach here is to get something simple working first; it can then be iterated on to optimize further if necessary.
Try this...
import collections

parsed_data = collections.OrderedDict()

with open("input.txt", "r") as fd:
    for line in fd.readlines():
        line_data = line.split()
        key = line_data[0]
        key2 = line_data[1]
        if key not in parsed_data:
            parsed_data[key] = collections.OrderedDict()
        if key2 not in parsed_data[key]:
            parsed_data[key][key2] = [line]
        else:
            parsed_data[key][key2].append(line)

# now process the parsed data and write the result files
fsimilar = open("similar.txt", "w")
fdifferent = open("different.txt", "w")

for key in parsed_data:
    if len(parsed_data[key]) == 1:
        f = fsimilar
    else:
        f = fdifferent
    for key2 in parsed_data[key]:
        for line in parsed_data[key][key2]:
            f.write(line)

fsimilar.close()
fdifferent.close()
Hope this helps
I have searched high and low for a resolution to this situation, and tested a few different methods, but I haven't had any luck thus far. Basically, I have a file with data in the following format that I need to convert into a CSV:
(previously known as CyberWay Pte Ltd)
0 2019
01.com
0 1975
1 TRAVEL.COM
0 228
1&1 Internet
97 606
1&1 Internet AG
0 1347
1-800-HOSTING
0 8
1Velocity
0 28
1st Class Internet Solutions
0 375
2iC Systems
0 192
I've tried using re.sub and replacing the whitespace between the numbers on every other line with a comma, but haven't had any success so far. I admit that I normally parse from CSVs, so raw text has been a bit of a challenge for me. I would need to maintain the string formats that are above each respective set of numbers.
I'd prefer the CSV to be formatted as such:
foo bar
0,8
foo bar
0,9
foo bar
0,10
foo bar
0,11
There's about 50,000 entries, so manually editing this would take an obscene amount of time.
If anyone has any suggestions, I'd be most grateful.
Thank you very much.
If you just want to replace whitespace with comma, you can just do:
line = ','.join(line.split())
You'll have to do this only on every other line, but from your question it sounds like you already figured out how to work with every other line.
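For example, here is a minimal sketch of that applied to alternating lines; the input.txt and output.csv file names are just placeholders, and it assumes the file strictly alternates name line / number line as in the sample:

# Copy name lines through unchanged; turn "0 2019" into "0,2019" on the
# number lines (every second line), assuming strict alternation.
with open("input.txt") as fin, open("output.csv", "w") as fout:
    for i, line in enumerate(fin):
        line = line.rstrip("\n")
        if i % 2 == 1:
            line = ",".join(line.split())
        fout.write(line + "\n")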
If I have understood your requirement correctly, you need a strip() on all lines and a whitespace-based split on every second line (the number lines):
import re

fp = open("csv.txt", "r")
while True:
    line = fp.readline()
    if '' == line:
        break
    line = line.strip()
    fields = re.split("\s+", fp.readline().strip())
    print "\"%s\",%s,%s" % (line, fields[0], fields[1])
fp.close()
The output is a CSV (you might need to escape quotes if they occur in your input):
"Content of odd line",Number1,Number2
I do not understand the 'foo,bar' you place as header on your example's odd lines, though.