Need help making a "compliment generator" - python

I'm trying to make a simple compliment generator that takes a noun and an adjective from two separate lists and randomly combines them. I can get one word on its own to work, but trying to get the second word to appear makes weird stuff happen. What am I doing wrong here? Any input would be great.
import random

sentence = "Thou art a *adj *noun."
sentence = sentence.split()
adjectives = ["decadent", "smelly", "delightful", "volatile", "marvelous"]
indexCount = 0
noun = ["dandy", "peaseant", "mule", "maiden", "sir"]
wordCount = 0

for word in sentence:
    if word == "*adj":
        wordChoice = random.choice(adjectives)
        sentence[indexCount] = wordChoice
    indexCount += 1

for word in sentence:
    if "*noun" in word:
        wordChoice = random.choice(noun)
        sentence[wordCount] = wordChoice
wordCount += 1

st = ""
for word in sentence:
    st += word + " "
print(st)
The end result nets me a double noun. How would I get rid of the duplicate?

You aren't incrementing wordCount in the second loop as you do indexCount in the first.
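In other words, move the increment so it runs on every pass through the second loop, mirroring the first one. A minimal sketch of that fix, using the same variable names as your code:

wordCount = 0
for word in sentence:
    if "*noun" in word:
        sentence[wordCount] = random.choice(noun)
    wordCount += 1  # advance on every word, just like indexCount in the first loop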


Ordering sentences according to their length

I tried using this code that I found online:
K = sentences
m = [len(i.split()) for i in K]
lengthorder = sorted(K, key=len, reverse=True)
#print(lengthorder)
#print("\n")

list1 = lengthorder
str1 = '\n'.join(list1)
print(str1)
print('\n')

Sentence1 = "We have developed speed, but we have shut ourselves in"
res = len(Sentence1.split())
print("The longest sentence in this text contains" + ' ' + str(res) + ' ' + "words.")

Sentence2 = "More than cleverness we need kindness and gentleness"
res = len(Sentence2.split())
print("The second longest sentence in this text contains" + ' ' + str(res) + ' ' + "words.")

Sentence3 = "Machinery that gives abundance has left us in want"
res = len(Sentence3.split())
print("The third longest sentence in this text contains" + ' ' + str(res) + ' ' + "words.")
but it doesn't sort the sentences by word count; it sorts them by their character length instead.
You can simply iterate through the different sentences and split them up into words like this:
text = " We have developed speed. but we have. shut ourselves in Machinery that. gives abundance has left us in want Our knowledge has made us cynical Our cleverness, hard and unkind We think too much and feel too little More than machinery we need humanity More than cleverness we need kindness and gentleness"
# split into sentances
text2array = text.split(".")
i =0
# interate through sentances and split them into words
for sentance in text2array:
text2array[i] = sentance.split(" ")
i += 1
# sort the sentances by word length
text2array.sort(key=len,reverse=True)
i = 0
#iterate through sentances and print them to screen
for sentance in text2array:
i += 1
sentanceOut = ""
for word in sentance:
sentanceOut += " " + word
sentanceOut += "."
print("the nr "+ str(i) +" longest sentence is" + sentanceOut)
You can define a function that uses a regex to obtain the number of words in a given sentence:
import re

def get_word_count(sentence: str) -> int:
    return len(re.findall(r"\w+", sentence))
Assuming you already have a list of sentences, you can iterate over the list, pass each sentence to the word-count function, and store each sentence and its word count in a dictionary:
sentences = [
    "Assume that this sentence has one word. Really?",
    "Assume that this sentence has more words than all sentences in this list. Obviously!",
    "Assume that this sentence has more than one word. Duh!",
]

word_count_dict = {}
for sentence in sentences:
    word_count_dict[sentence] = get_word_count(sentence)
At this point, the word_count_dict contains sentences as keys and their associated word count as values.
You can then sort word_count_dict by values:
sorted_word_count_dict = dict(
    sorted(word_count_dict.items(), key=lambda item: item[1], reverse=True)
)
Here's the full snippet:
import re

def get_word_count(sentence: str) -> int:
    return len(re.findall(r"\w+", sentence))

sentences = [
    "Assume that this sentence has one word. Really?",
    "Assume that this sentence has more words than all sentences in this list. Obviously!",
    "Assume that this sentence has more than one word. Duh!",
]

word_count_dict = {}
for sentence in sentences:
    word_count_dict[sentence] = get_word_count(sentence)

sorted_word_count_dict = dict(
    sorted(word_count_dict.items(), key=lambda item: item[1], reverse=True)
)

print(sorted_word_count_dict)
Let's assume that your sentences are already separate, so there is no need to detect sentence boundaries.
We have a list of sentences and need to calculate the length of each one by word count. The basic way is to split on spaces, since each space separates two words in a sentence.
list_of_sen = ['We have developed speed, but we have shut ourselves in', 'Machinery that gives abundance has left us in want Our knowledge has made us cynical Our cleverness', 'hard and unkind We think too much and feel too little More than machinery we need humanity More than cleverness we need kindness and gentleness']
sen_len = [len(i.split()) for i in list_of_sen]
sen_len = sorted(sen_len, reverse=True)
for index, count in enumerate(sen_len):
    print(f'The {index+1} longest sentence in this text contains {count} words')
But if your sentences are not separated, you first need to detect the end of each sentence and then split on it. Your sample data does not contain any punctuation that could be used to separate sentences, so assuming your real data does have punctuation, the approach below can be helpful.
See this question:
from nltk import tokenize

p = "Good morning Dr. Adams. The patient is waiting for you in room number 3."
tokenize.sent_tokenize(p)
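From there you could feed the detected sentences into the same word-count sort as above. A rough sketch, assuming the NLTK punkt tokenizer data has already been downloaded:

from nltk import tokenize

p = "Good morning Dr. Adams. The patient is waiting for you in room number 3."
sentences = tokenize.sent_tokenize(p)
sentences.sort(key=lambda s: len(s.split()), reverse=True)  # longest first, by word count
for index, s in enumerate(sentences):
    print(f'The {index+1} longest sentence in this text contains {len(s.split())} words')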

Converting a sentence into Pig Latin

I'm fairly new to Python and one of the practice projects I'm trying to do is converting sentences into pig latin. The original project was just converting words into pig latin, but I want to expand this into converting sentences.
Here's the code I have so far:
import sys

print("Pig Latin Maker")
VOWELS = 'aeiouy'

while True:
    word = input("Write a Word: ")
    if word[0] in VOWELS:
        pig_Latin = word + 'way'
    else:
        pig_Latin = word[1:] + word[0] + 'ay'
    print()
    print("{}".format(pig_Latin), file=sys.stderr)
    end = input("\n\n Press N\n")
    if end.lower() == "n":
        sys.exit()
The plan is to modify this so it splits all the words in the input sentence, converts each word to Pig Latin, and then spits it back out as one sentence, but I'm not really sure how to do that.
I'm using Python 3.8. Any help is appreciated! Thank you.
You could split the sentence by the space character into separate strings each containing a word. You can then apply your current algorithm to every single word in that sentence. str has a method split which returns a list.
To get the words in a list, use listofwords = input('Write your sentence: ').split().
Then, you can combine the list of pig-latin words doing print(' '.join(listofpiglatin)).
import sys

print("Pig Latin Maker")
VOWELS = 'aeiouy'

while True:
    listofwords = input("Write a Sentence: ").split()  # splits by spaces
    listofpiglatin = []
    for word in listofwords:
        if word[0] in VOWELS:
            pig_Latin = word + 'way'
        else:
            pig_Latin = word[1:] + word[0] + 'ay'
        listofpiglatin.append(pig_Latin)  # adds your new pig-latin word to our list
    print()
    print(' '.join(listofpiglatin))  # spits the words back out as a sentence
    end = input("\n\n Press n")
    if end.lower() == "n":
        sys.exit()
I hope that this helps you learn!
Put your algorithm into a function:
def makePigLatin(word):
    <your code here>
    return latinWord
As other users mentioned, split the input and assign to a list:
words = input('blah').split()
Then apply your function to each word in the list:
translatedWords = map(makePigLatin, words)
Print them back out by joining them together:
print(' '.join(translatedWords))
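Put together, a minimal sketch of that approach might look like this (the vowel rule is copied from your original code):

VOWELS = 'aeiouy'

def makePigLatin(word):
    # same rule as the original: vowel-initial words get 'way', the rest move their first letter
    if word[0] in VOWELS:
        return word + 'way'
    return word[1:] + word[0] + 'ay'

words = input('Write a Sentence: ').split()
translatedWords = map(makePigLatin, words)
print(' '.join(translatedWords))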

Grok Learning - "How many words"

I'm doing a question on grok learning, it asks for this:
You are learning a new language, and are having a competition to see how many unique words you know in it to test your vocabulary learning.
Write a program where you can enter one word at a time, and be told how many unique words you have entered. You should not count duplicates. The program should stop asking for more words when you enter a blank line.
For example:
Word: Chat
Word: Chien
Word: Chat
Word: Escargot
Word:
You know 3 unique word(s)!
and
Word: Katze
Word: Hund
Word: Maus
Word: Papagei
Word: Schlange
Word:
You know 5 unique word(s)!
and
Word: Salam
Word:
You know 1 unique word(s)!
I cannot get it to work when there are multiple duplicates, here is my code:
word = input("Word: ")
l = []
l.append(word)
words = 1
while word != "":
    if word in l:
        word = input("Word: ")
    else:
        words = 1 + words
        word = input("Word: ")
print("You know " + str(words), "unique word(s)!")
Using a set, this problem can be solved easily:
l = set()
while True:
    new_word = input("Word: ")
    if new_word == "":
        break
    l.add(new_word)
print("You know " + str(len(l)), "unique word(s)!")
This is a good example of the power of the Python standard library: usually when you have a problem, there is already a good solution for it there.
There is also a way that does not use set(), although it is still best if you learn about set() anyway.
Here's the code that doesn't need set() but still works fine:
words = []
word = input('Word: ')
while word != '':
    if word not in words:
        words.append(word)
    word = input('Word: ')
print('You know', len(words), 'unique word(s)!')

How to find the position of a repeating word in a string - Python

How to get Python to return the position of a repeating word in a string?
E.g. the word "cat" in "the cat sat on the mat which was below the cat" is in the 2nd and 11th position in the sentence.
You can use re.finditer to find all occurrences of the word in a string along with their starting indexes:
import re

for word in set(sentence.split()):
    indexes = [w.start() for w in re.finditer(word, sentence)]
    print(word, len(indexes), indexes)
And using dictionary comprehension:
{word: [w.start() for w in re.finditer(word, sentence)] for word in sentence.split()}
This will return a dictionary mapping each word in the sentence that repeats at least once to the list of its word indexes (not character indexes):
from collections import defaultdict

sentence = "the cat sat on the mat which was below the cat"

def foo(mystr):
    sentence = mystr.lower().split()
    counter = defaultdict(list)
    for i in range(len(sentence)):
        counter[sentence[i]].append(i + 1)
    new_dict = {}
    for k, v in counter.items():
        if len(v) > 1:
            new_dict[k] = v
    return new_dict

print(foo(sentence))
The following will take an input sentence, take a word from the sentence, and then print the position(s) of the word in a list with a starting index of 1 (it looks like that's what you want from your code).
sentence = input("Enter a sentence, ").lower()
word = input("Enter a word from the sentence, ").lower()
words = sentence.split(' ')
positions = [i + 1 for i, w in enumerate(words) if w == word]
print(positions)
I prefer simplicity and here is my code below:
sentence = input("Enter a sentence, ").lower()
word_to_find = input("Enter a word from the sentence, ").lower()
words = sentence.split()  # split() with no arguments splits the string 'sentence' on default delimiters and returns a list of its words
for pos in range(len(words)):
    if word_to_find == words[pos]:  # words[pos] is the word at index 'pos' in the 'words' list
        print(pos + 1)
'words' is the list of all the words in the sentence. We then iterate over it, compare the word at each index 'pos' with the word we are looking for (word_to_find), and if they match we print pos plus 1.
Hope this is simple enough for you to understand and serves your purpose.
If you wish to use a list comprehension for the above, then:
words = sentence.split()
positions = [i + 1 for i in range(len(words)) if word_to_find == words[i]]
print(positions)
Both approaches are the same; the latter just gives you a list.
positions = []
sentence = input("Enter the sentence please: ").lower()
sentence = sentence.split()
length = len(sentence)
word = input("Enter the word that you would like to search for please: ").lower()
if word not in sentence:
    print("Error, '" + word + "' is not in this sentence.")
else:
    for x in range(0, length):
        if sentence[x] == word:
            positions.append(x + 1)
    print(word, "is at positions", positions)
s="hello fattie i'm a fattie too"
#this code is unsure but manageable
looking word= "fattie"
li=[]
for i in range(len(s)):
if s.startswith(lw, i):
print (i)
space = s[:i].count(" ")
hello = space+1
print (hello)
li.append(hello)
print(li)

Scanning for two word phrase in Python dictionary

I am trying to use a Python dictionary object to help translate an input string to other words or phrases. I am having success with translating single words from the input, but I can't seem to figure out how to translate multi-word phrases.
Example:
sentence = input("Please enter a sentence: ")
myDict = {"hello": "hi", "mean adult": "grumpy elder", ...}
How can I return hi grumpy elder if the user enters hello mean adult for the input?
"fast car" is a key to the dictionary, so you can extract the value if you use the key coming back from it.
If you're taking the input straight from the user and using it to reference the dictionary, get is safer, as it allows you to provide a default value in case the key doesn't exist.
print(myDict.get(sentence, "Phrase not found"))
Since you've clarified your requirements a bit more, the hard part now is the splitting; the get doesn't change. If you can guarantee the order and structure of the sentences (that is, it's always going to be structured such that we have a phrase with 1 word followed by a phrase with 2 words), then split only on the first occurrence of a space character.
split_input = sentence.split(' ', 1)
print("{} {}".format(myDict.get(split_input[0]), myDict.get(split_input[1])))
More complex split requirements I leave as an exercise for the reader. A hint would be to use the keys of myDict to determine what valid tokens are present in the sentence.
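For instance, a small sketch of that hint, deriving the longest phrase length from the dictionary's own keys (the maxlen name here is only illustrative):

# longest phrase (in words) among the dictionary keys, e.g. 2 for "mean adult"
maxlen = max(len(key.split()) for key in myDict)

That value can then cap how many consecutive words you try to look up at once.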
The same way as you normally would.
translation = myDict['fast car']
A solution to your particular problem would be something like the following, where maxlen is the maximum number of words in a single phrase in the dictionary.
translation = []
words = sentence.split(' ')
maxlen = 3
index = 0
while index < len(words):
    for i in range(maxlen, 0, -1):
        phrase = ' '.join(words[index:index + i])
        if phrase in myDict:
            translation.append(myDict[phrase])
            index += i
            break
    else:
        translation.append(words[index])
        index += 1
print(' '.join(translation))
Given the sentence hello this is a nice fast car (and a dictionary that also maps "nice" to "sweet"), it outputs hi this is a sweet quick ride.
This will check each word, and also the two-word phrases formed with the word before and after the current word:
myDict = {"hello": "hi",
"fast car": "quick ride"}
sentence = input("Please enter a sentence: ")
words = sentence.split()
for i, word in enumerate(words):
if word in myDict:
print myDict.get(word)
continue
if i:
phrase = ' '.join([words[i-1], word])
if phrase1 in myDict:
print myDict.get(phrase)
continue
if i < len(words)-1:
phrase = ' '.join([word, words[i+1])
if phrase in myDict:
print myDict.get(phrase)
continue
