Python file manipulation

I have a file with entries such as:
26 1
33 2
.
.
.
and another file with sentences in English.
I have to write a script to print the 1st word in sentence number 26
and the 2nd word in sentence 33.
How do I do it?

The following code should do the task, assuming the files are not too large. You may have to modify it to deal with edge cases (double spaces, etc.).
# Get sentence/word number pairs from the first file
pairs = []
with open('1.txt') as f:
    for line in f:
        parts = line.split()
        if len(parts) == 2:
            pairs.append((int(parts[0]), int(parts[1])))

# Get text from the second file
with open('2.txt') as f:
    text = f.readlines()

# Parse the text into a list of sentences, each a list of words
data = []
for line in text:  # for each paragraph in the text
    sentences = line.strip().split('.')  # split it into sentences
    for sentence in sentences:
        words = sentence.split()  # split it into a word list
        if words:
            data.append(words)

# Get the desired result (the numbers in the file are 1-based)
for sent_no, word_no in pairs:
    print(data[sent_no - 1][word_no - 1])

Here's a general sketch:
Read the first file into a list (a numeric entry in each element)
Read the second file into a list (a sentence in each element)
Iterate over the entry list; for each pair of numbers find the sentence and print the relevant word
Now, if you show some effort of how you tried to implement this in Python, you will probably get more help.

The big issue is that you have to decide what separates "sentences". For example, is a '.' the end of a sentence? Or maybe part of an abbreviation, e.g. the one I've just used?-) Secondarily, and less difficult, what separates "words", e.g., is "TCP/IP" one word, or two?
Once you have sharply defined these rules, you can easily read the text file into a list of "sentences", each of which is a list of "words". Then you read the other file as a sequence of pairs of numbers, and use them as indices into the overall list and into the sublist thus identified. But the problem of sentence and word separation is really the hard part.
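For illustration, a sketch of one such sharply defined rule, a regex that treats '.', '!' or '?' followed by whitespace as a sentence boundary (deliberately simple, so abbreviations like "e.g." will still mis-split):

import re

def split_sentences(text):
    # rule: a sentence ends at '.', '!' or '?' followed by whitespace
    return [s.split() for s in re.split(r'(?<=[.!?])\s+', text) if s.split()]

sentences = split_sentences("I like TCP/IP. Do you? Yes!")
print(sentences[1][0])  # 'Do': 2nd sentence, 1st word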

In the following code, I am assuming that sentences end with '. '. You can modify it easily to accommodate other sentence delimiters as well. Note that abbreviations will therefore be a source of bugs.
Also, I am going to assume that words are delimited by spaces.
sentences = []
queries = []

# Read the whole text file into one string
english = ""
for line in file2:
    english += line

# Split into sentences on '.', keeping each sentence as a list of words
while english:
    period = english.find('.')
    if period == -1:  # no more periods: treat the rest as one sentence
        period = len(english) - 1
    words = english[:period + 1].split()
    if words:
        sentences.append(words)
    english = english[period + 1:]

# Read the number file into a flat list of integers
q = ""
for line in file1:
    q += " " + line.strip()
q = [int(n) for n in q.split()]

# Pair them up as (sentence number, word number)
for i in range(0, len(q) - 1, 2):
    queries.append((q[i], q[i + 1]))

# The numbers are 1-based, list indices are 0-based
for s, w in queries:
    print(sentences[s - 1][w - 1])
I haven't tested this, so please let me know (preferably with the case that broke it) if it doesn't work, and I will look into the bugs.
Hope this helps!

Related

Finding specific words and adding them to a dictionary

I want to find the words that start with "CHAPTER" and add them to a dictionary.
I have written some code, but it gives me 0 as output every time:
def wordcount(filename, listwords):
    try:
        file = open(filename, "r")
        read = file.readlines()
        file.close()
        for word in listwords:
            lower = word.lower()
            count = 0
            for sentence in read:
                line = sentence.split()
                for each in line:
                    line2 = each.lower()
                    line2 = line2.strip("")
                    if lower == line2:
                        count += 1
            print(lower, ":", count)
    except FileExistError:
        print("The file is not there ")

wordcount("dad.txt", ["CHAPTER"])
the txt file is here
EDIT: The problem was the encoding type, and I solved it. But now I have two new questions: how can I add these words to a dictionary?
And how can I make this code case-sensitive, so that wordcount("dad.txt", ["CHAPTER"]) finds only the upper-case CHAPTER words?
It cannot work because of this line:
if lower == line2:
you can use this line instead to find the words that start with "CHAPTER":
if line2.startswith(lower):
You need to check whether a word starts with one of the words from listwords, rather than testing equality (lower == line2). Hence, you should use the startswith method.
You can have simpler code, something like this:
def wordcount(filename, listwords):
    listwords = [s.lower() for s in listwords]
    wordCount = {s: 0 for s in listwords}  # a dict to store the counts
    with open(filename, "r") as f:
        for line in f.readlines():
            for word in line.split():
                for s in listwords:
                    if word.lower().startswith(s):
                        wordCount[s] += 1
    return wordCount
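If, as the edit asks, the match should stay case-sensitive (count only the upper-case CHAPTER), a variant that skips the lowering might look like this (a sketch; encoding="utf-8" is an assumption based on the edit about encoding):

def wordcount_exact(filename, listwords):
    # keep the original case: "CHAPTER" will not match "Chapter"
    word_count = {s: 0 for s in listwords}
    with open(filename, "r", encoding="utf-8") as f:
        for line in f:
            for word in line.split():
                for s in listwords:
                    if word.startswith(s):
                        word_count[s] += 1
    return word_count

print(wordcount_exact("dad.txt", ["CHAPTER"]))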
If the goal is to find chapters and paragraphs, don't try to count words or split lines.
For example, start simpler: since chapters are in numeric order, you only need a list, not a dictionary.
chapters = []  # list of chapters
chapter = ""   # store one chapter
with open(filename, encoding="UTF-8") as f:
    for line in f.readlines():
        # TODO: should skip to the first line that starts with "CHAPTER",
        # otherwise 'chapters' gets extra, header information
        if line.startswith("CHAPTER"):
            print("Found chapter: " + line)
            # Save off the most recent, non-empty chapter text, and reset
            if chapter:
                chapters.append(chapter)
            chapter = ""
        else:
            # up to you if you want to skip empty lines
            chapter += line  # don't manipulate any data yet

# Capture the last chapter at the end of the file
if chapter:
    chapters.append(chapter)
del chapter  # no longer needed
# del chapters[0] if you want to remove the header information
# before the first chapter header

# Done reading the file, now work with the strings in your list
print(len(chapters))  # find how many chapters there are
If you actually did want the text following "CHAPTER", you can split that line in the first if statement; note, however, that the chapter numbers repeat between volumes, and this solution assumes the volume header is part of a chapter.
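For instance, the split inside the first if statement could look like this (a sketch; it assumes a heading such as "CHAPTER I."):

parts = line.split(maxsplit=1)
chapter_title = parts[1].strip() if len(parts) > 1 else ""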
If you want to count the paragraphs, start by finding the empty lines (for example, split each element on '\n\n'); see the sketch below.
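A minimal sketch of that paragraph count, assuming paragraphs are separated by blank lines:

for i, chapter in enumerate(chapters, start=1):
    paragraphs = [p for p in chapter.split("\n\n") if p.strip()]
    print(f"Chapter {i}: {len(paragraphs)} paragraph(s)")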

Calculating how many times sentence words are repeated in a file

I want to check how many times a word is repeated in a file. I have seen other code for finding words in a file, but it doesn't solve my problem. What I mean is: if I search for "Python is my favourite language", the program should split the text and tell how many times each word is repeated in the file.
def search_tand_export():
    file = open("mine.txt")
    # targetlist = list()
    # targetList = [line.rstrip() for line in open("mine.txt")]
    contentlist = file.read().split(" ")
    string = input("search box").split(" ")
    print(string)
    fre = {}
    outputfile = open("outputfile.txt", 'w')
    for word in contentlist:
        print(word)
        for i in string:
            # print(i)
            if i == word:
                print(f"'{string}' is in text file ")
                outputfile.write(word)
                print(word)
                spl = tuple(string.split())
                for j in range(0, len(contentist)):
                    if spl in contentlist:
                        fre[spl] += 1
                    else:
                        fre[spl] = 1
                sor_list = sorted(fre.items(), key=lambda x: x[1])
                for x, y in sor_list:
                    print(f"Word\tFrequency")
                    print(f"{x}\t{y}")
            else:
                continue
    print(f"The word or collection of word is not present")

search_tand_export()
I don't quite understand what you're trying to do.
But I suppose you are trying to find how many times every word from a given sentence is repeated in the file.
If this is the case, you can try something like this:
sentence = "Python is my favorite programming language"
words = sentence.split()

with open("file.txt") as fp:
    file_data = fp.read()

for word in words:
    print(f"{file_data.count(word)} occurrence(s) of '{word}' found")
Note that the code above is case-sensitive (that is, "Python" and "python" are different words). To make it case-insensitive, you can bring file_data and every word during comparison to lowercase using str.lower().
sentence = "Python is my favorite programming language"
words = sentence.split()

with open("file.txt") as fp:
    file_data = fp.read().lower()

for word in words:
    print(f"{file_data.count(word.lower())} occurrence(s) of '{word}' found")
A couple of things to note:
You open a file and never close it (though you should). It's better to use with open(...) as ... (a context manager), so the file is closed automatically.
Python strings (as well as lists, tuples, etc.) have a .count(what) method. It returns how many occurrences of what are found in the object.
Read about the PEP 8 coding style and give variables better names. For example, it is not easy to understand what fre means in your code; if you name it frequency, the code becomes more readable and easier to work with.
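One caveat with str.count: it counts substrings, so "language" would also match inside "languages". If whole words are needed, a regular expression with word boundaries is one option (a sketch reusing words and file_data from the snippet above):

import re

for word in words:
    n = len(re.findall(rf"\b{re.escape(word)}\b", file_data, re.IGNORECASE))
    print(f"{n} occurrence(s) of '{word}' found")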
Try this script. It counts how many times word is found in the file:
word = 'Python'
count = 0
with open('hello.txt', 'r') as file:
    for line in file:
        for w in line.split():  # iterate over the words, not the characters
            if w == word:
                count += 1
print('File contains ' + word + ' ' + str(count) + ' times')

How can I merge two snippets of text that both contain a desired keyword?

I have a program that pulls out text around a specific keyword. I'm trying to modify it so that if two keywords are close enough together, it just shows one longer snippet of text instead of two individual snippets.
My current code, below, adds words after the keyword to a list and resets the counter if another keyword is found. However, I've found two problems with this. The first is that the data rate limit in my Spyder notebook is exceeded, and I haven't been able to deal with that. The second is that though this would make a longer snippet, it wouldn't get rid of the duplicate.
Does anyone know a way to get rid of the duplicate snippet, or know how to merge the snippets in a way that doesn't exceed the data rate limit (or know how to change the spyder rate limit)? Thank you!!
def occurs(word1, word2, file, filewrite):
    import os
    infile = open(file, 'r')  # opens file, reads, splits into lines
    lines = infile.read().splitlines()
    infile.close()
    wordlist = [word1, word2]  # this list allows for multiple words
    wordsString = ''.join(lines)  # joins the lines into one string
    words = wordsString.split()   # splits the text into individual words
    f = open(file, 'w')
    f.write("start")
    f.write(os.linesep)
    g = open(filewrite, 'w')
    g.write("start")
    g.write(os.linesep)
    for item in wordlist:  # multiple words
        # go through the words, find the positions of the specific words we want
        matches = [i for i, w in enumerate(words) if w.lower().find(item) != -1]
        for m in matches:
            # for each instance of the word, collect the surrounding words
            snippet = []
            s = ""
            l = " ".join(words[m - 20:m + 1])
            j = 0
            while j < 20:
                snippet.append(words[m + j])
                j = j + 1
                if words[m + j] == word1 or words[m + j] == word2:
                    j = 0  # reset the counter if another keyword is found
            print(snippet)
            k = " ".join(snippet)
            f.write(f"{s}...{l}{k}...")  # writes the data to the external file
            f.write(os.linesep)
            g.write(str(m))
            g.write(os.linesep)
    f.close()
    g.close()
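One way to approach the merging itself is to treat each match as a window of words and merge windows that overlap, so two nearby keywords produce a single longer snippet. A sketch (names are illustrative, not from the original code; keywords is assumed to be a set of lowercase words):

def merged_snippets(words, keywords, radius=20):
    # positions of every word that matches one of the keywords
    matches = sorted(i for i, w in enumerate(words) if w.lower() in keywords)
    intervals = []
    for m in matches:
        start, end = max(0, m - radius), min(len(words), m + radius + 1)
        if intervals and start <= intervals[-1][1]:
            intervals[-1][1] = max(intervals[-1][1], end)  # overlap: extend
        else:
            intervals.append([start, end])
    return [" ".join(words[s:e]) for s, e in intervals]

Each merged interval then yields exactly one snippet, so duplicates disappear and nearby matches are joined.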

How do you check for the presence of a string in a text file based on the elements of an array?

I have an array containing strings.
I have a text file.
I want to loop through the text file line by line.
and check whether each element of my array is present or not
(they must be whole words, not substrings).
I am stuck because my script only checks for the presence of the first array element.
However, I would like it to return results with each array element and a note as to whether this array element is present in the entire file or not.
#!/usr/bin/python
with open("/home/all_genera.txt") as file:
    generaA = []
    for line in file:
        line = line.strip('\n')
        generaA.append(line)

with open("/home/config/config2.cnf") as config_file:
    counter = 0
    for line in config_file:
        line = line.strip('\n')
        for part in line.split():
            if generaA[counter] in part:
                print(generaA[counter], "is -----> PRESENT")
            else:
                continue
    counter += 1
If I understand correctly, you want a sequence of words that are in both files. If yes, set is your friend:
def parse(f):
    return set(word for line in f for word in line.strip().split())

with open("path/to/genera/file") as f:
    source = parse(f)
with open("path/to/conf/file") as f:
    conf = parse(f)

# elements that are common to both sets
common = conf & source
print(common)
# elements that are in `source` but not in `conf`
print(source - conf)
# elements that are in `conf` but not in `source`
print(conf - source)
So to answer "I would like it to return results with each array element and a note as to whether this array element is present in the entire file or not", you can use either common elements or the source - conf difference to annotate your source list:
# using common elements
common = conf & source
result = [(word, word in common) for word in source]
print(result)
# using difference
diff = source - conf
result = [(word, word not in diff) for word in source]
Both will yield the same result, and since set lookup is O(1), performance should be similar too, so I suggest the first solution (positive assertions are easier on the brain than negative ones).
You can of course apply further cleaning / normalisation when building the sets, e.g. if you want a case-insensitive search:
def parse(f):
    return set(word.lower() for line in f for word in line.strip().split())
from collections import Counter
import re

# First normalize the text: lowercase everything and remove punctuation
# (anything that is not alphanumeric).
# Note that this normalization is subject to the rules of the
# language/alphabet/dialect you are using; English ASCII may not cover it.
normalized_text = re.sub("[^a-z0-9 ]", "", open("some.txt").read().lower())

# Counter will collect all the words into a dictionary of [word]:count
words = Counter(normalized_text.split())

# create a new set of all the words in both the text and our word_list_array
set(my_word_list_array).intersection(words.keys())
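To report the counts for just the words of interest, the intersection can be used to index back into the Counter (a sketch; my_word_list_array stands for your word list, as above):

common = set(my_word_list_array).intersection(words.keys())
print({w: words[w] for w in common})  # counts for the words that actually occur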
The counter is not increasing because it's outside the for loops.
with open("/home/all_genera.txt") as myfile: # don't use 'file' as variable, is a reserved word! use myfile instead
generaA=[]
for line in myfile: # use .readlines() if you want a list of lines!
generaA.append(line)
# if you just need to know if string are present in your file, you can use .read():
with open("/home/config/config2.cnf") as config_file:
mytext = config_file.read()
for mystring in generaA:
if mystring in mytext:
print mystring, "is -----> PRESENT"
# if you want to check if your string in line N is present in your file in the same line, you can go with:
with open("/home/config/config2.cnf") as config_file:
for N, line in enumerate(config):
if generaA[N] in line:
print "{0} is -----> PRESENT in line {1}".format(generaA[N], N)
I hope that everything is clear.
This code could be improved in many ways, but I tried to keep it as similar to yours as possible so it would be easier to understand.

printing 5 words before and after a specific word in a file in python

I have a folder which contains some other folders, and these folders contain some text files (the language is Persian). I want to print the 5 words before and after a keyword, with the keyword in the middle. I wrote the code, but it gives the 5 words at the start and the end of the line, not the words around the keyword. How can I fix it?
Note: I am only showing the end of the code, which relates to the question above; the start of the code opens and normalizes the files.
def c():
    y = "آرامش"
    text = normal_text(folder_path)  # the first function to open and normalize the files
    for i in text:
        for line in i:
            if y in line:
                z = line.split()
                print(z[-6], z[-5], z[-4], z[-3], z[-2], z[-1], y,
                      z[+1], z[+2], z[+3], z[+4], z[+5], z[+6])
what I expect is something like this:
word word word word word keyword word word word word word
Each sentence in a new line.
Try this. It splits the words. Then it calculates the amount to show before and after (with a minimum of however much is left, and a maximum of 5) and shows it.
words = line.split()
if y in words:
    index = words.index(y)
    before = index - min(index, 5)
    after = index + min(len(words) - 1 - index, 5) + 1
    print(words[before:after])
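For example, with the keyword three words into a nine-word line (illustrative values, using the logic above):

line = "one two three KEY four five six seven eight"
y = "KEY"
words = line.split()
index = words.index(y)                              # 3
before = index - min(index, 5)                      # 0
after = index + min(len(words) - 1 - index, 5) + 1  # 9
print(words[before:after])                          # all nine words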
You need to get the word indices based on your keyword's index. You can use the list.index() method to get the intended index, then use simple slicing to get the expected words:
for f in normal_text(folder_path):
    for line in f:
        if keyword in line:
            words = line.split()
            ind = words.index(keyword)
            print(words[max(0, ind - 5):min(ind + 6, len(words))])
Or, as a more optimized approach, you can use a generator function to produce the words as an iterator, which is much better in terms of memory usage.
def get_words(keyword):
    for f in normal_text(folder_path):
        for line in f:
            if keyword in line:
                words = line.split()
                ind = words.index(keyword)
                yield words[max(0, ind - 5):min(ind + 6, len(words))]
Then you can simply loop over the result to print it, etc.:
y = "آرامش"
for words in get_words(y):
    ...  # do stuff
def c():
    y = "آرامش"
    text = normal_text(folder_path)  # the first function to open and normalize the files
    for i in text:
        for line in i:
            split_line = line.split()
            if y in split_line:
                index = split_line.index(y)
                print(' '.join(split_line[max(0, index - 5):min(index + 6, len(split_line))]))
Assuming the keyword must be an exact word.
