PyDictionary Printing Issue - python

I have a small issue with PyDictionary: when I enter a list of words, the meanings are not printed in the order of the word list.
For example:
from PyDictionary import PyDictionary

dictionary = PyDictionary(
    "bad",
    "omen",
    "azure",
    "sky",
    "icy",
    "smile")
print(dictionary.printMeanings())
This list will print Omen first, then Sky, and so on. What I need is to print the word list in its original order. I searched on Google, but there was nothing related; I also searched the posts in this forum and found nothing. I hope you can help me. Thank you in advance.

I found a workaround that gives me a full solution to my initial printing issue.
The main problem is that I am using an old laptop (more than 12 years old), so I have not been able to move to Python 3+, and using PyDictionary with Python 2.7 causes the word list to print in random order.
The solution is to print a single word per call, but I have to do this about 25,000 times! Using Notepad++ I made a macro that generates the Python code for each word; furthermore, I was able to add the Spanish translation to each English word. Printing each word individually has the added benefit that each word's definitions stay separate.
Using Notepad++ and regex I am able to do the final cleanup of each word and its meaning.
So I am happy with this workaround... Thank you for your help.
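For the record, a lighter workaround than per-word macros is to keep the original word list and index the meanings with it: as far as I know, getMeanings() returns a plain dict keyed by word, and dicts are unordered on Python 2.7, but iterating over your own list restores the order. A minimal sketch with a stand-in dict (a live PyDictionary call needs network access, so the meanings below are placeholders):

```python
# Words in the order we want them printed.
words = ["bad", "omen", "azure", "sky", "icy", "smile"]

# Stand-in for dictionary.getMeanings(): a dict keyed by word,
# whose iteration order Python 2.7 does not preserve.
meanings = {"sky": "...", "omen": "...", "bad": "...",
            "smile": "...", "azure": "...", "icy": "..."}

# Index the dict with the original list instead of iterating the dict.
ordered = [(word, meanings[word]) for word in words]
for word, meaning in ordered:
    print(word, "->", meaning)
```

The same idea works with the real library: call getMeanings() once, then loop over your own list to print.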

Related

automise long lists in python

Today I wrote my first program, which is essentially a vocabulary learning program! So naturally I have pretty huge lists of vocabulary and a couple of questions. I created a class with parameters, one of which is the German vocab and one of which is the Spanish vocab. My first question is: is there any way to turn all the plain-text vocabulary that I copy from an internet vocab list into strings and separate them without adding the quotes and the commas manually?
And my second question:
I created another list to assign each German vocab to each Spanish vocab and it looks a little bit like that:
vocabs = [
    Vocabulary(spanish_word[0], german_word[0]),
    Vocabulary(spanish_word[1], german_word[1]),
    # etc.
]
Vocabulary would be the class, spanish_word the first word list and German the other obviously.
But with a lot of vocab that's a lot of work too. Is there anyway to automate the process to add each word from the Spanish word list to the German one? I first tried it with the
vocabs = [
    for spanish word in german word
    Vocabulary(spanish_word[0], german_word[0])
]
But that didn't work. Researching on the internet also didn't help much.
Please don't be rude if those are noob questions I'm actually pretty happy that my program is running so well and I would be thankful for all the help to make it better.
Without knowing what it is you're looking to do with the result, it appears you're trying to do this:
vocabs = [Vocabulary(s, g) for s, g in zip(spanish_word, german_word)]
You didn't provide any code or example data for the "turn all the plain text vocabulary [..] into strings and separate them without adding the quotes and the commas manually" part. There's sure to be a way to do what you need, but you should probably ask a separate question, after first searching for a solution and making an attempt yourself. Ask a question if you can't get it to work.
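That said, str.splitlines and str.split usually do the quoting-and-comma work for you. A sketch, assuming (hypothetically) that the copied text has one pair per line with a hyphen between the two languages; adjust the separator to whatever your source actually uses. The Vocabulary namedtuple here is just a stand-in for your class:

```python
from collections import namedtuple

# Stand-in for the asker's Vocabulary class.
Vocabulary = namedtuple("Vocabulary", ["spanish", "german"])

# Raw text as copied from an internet vocab list (hypothetical format:
# one pair per line, hyphen-separated).
raw = """hola - hallo
adiós - tschüss
gato - Katze"""

# splitlines() gives one string per line; split(" - ") divides each pair.
pairs = [line.split(" - ") for line in raw.splitlines()]
spanish_word = [spanish for spanish, german in pairs]
german_word = [german for spanish, german in pairs]

# The same zip comprehension as above, now fed from the parsed lists.
vocabs = [Vocabulary(s, g) for s, g in zip(spanish_word, german_word)]
```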

Print something when a word is in a word list

So I am currently trying to build a Caesar-cipher cracker that automatically tries all the possibilities and compares them against a big list of words to see whether each result is made of real words, so some sort of dictionary attack, I guess.
I found a list with a lot of German words, conveniently split so that each word is on a new line. Currently, I am struggling to compare my candidate sentence with the whole word list, so that when the program sees that a word in my sentence is also in the word list, it prints out that this is a real word and possibly the right sentence.
This is how far I currently am; I have not included the code that tries all 26 shifts, only my way of looking through the word list and comparing it to a sentence. Maybe someone can tell me what I am doing wrong and why it doesn't work.
I have no idea why it doesn't work. I have also tried it with regular expressions, but nothing works. The list is really long (166k words).
There is a \n at the end of each word in the list you created from the file, so the words will never compare equal to what you check them against.
Remove the newline character before appending (for example, wordlist.append(line.rstrip())).
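A sketch of the fixed lookup, using io.StringIO as a stand-in for the real file (hypothetical words); a set also makes membership tests fast with a 166k-word list:

```python
import io

# Stand-in for open("wordlist.txt"): one word per line, newline included.
fake_file = io.StringIO("hallo\nwelt\nkatze\n")

wordlist = set()
for line in fake_file:
    wordlist.add(line.rstrip())  # drop the trailing "\n"

sentence = "hallo schoene welt"
real_words = [word for word in sentence.split() if word in wordlist]
print(real_words)  # ['hallo', 'welt']
```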

finding occurrences in Multiple sentences

I'm new to Python. After scouring the internet and going back over my study materials, I cannot seem to find how to count occurrences of a word across multiple sentences. My aim is to find how many times the word "python" occurs within these strings. I have tried the split() method and count("python"), and I even tried to make a dictionary and a word counter, which I was initially taught to do as part of the basics, but nothing in my study has shown me anything similar to this before. I need to be able to display the frequency of the word: python occurs 4 times. Any help would be very appreciated.
python_occurs = ["welcome to our Python program", "Python is my favorite language!", "I am afraid of Pythons", "I love Python"]
A straightforward approach is to iterate over every word using split(). Each word is converted to lowercase, and the number of times "python" occurs in it is counted using count().
I guess the reason your approach is not working is that you forgot to convert the letters to lowercase.
python_occurs = ["welcome to our Python program", "Python is my favorite language!", "I am afraid of Pythons", "I love Python"]
count = 0
for sentence in python_occurs:
    for word in sentence.split():
        # lower is necessary because we want to be case-insensitive
        count += word.lower().count("python")

Read. Check. Write. A Broken Python script

I am developing a word game, and for this game, I needed a list of words. Sadly, this list was so long that I just had to refine it (this list of words can be found on any Mac at /usr/share/dict/).
To refine it, I decided to use my own Python scripts. I already wrote a script before that removes all words that start with capital letters (thus removing names of places, etc.), and it worked. This is it:
with open("/Users/me/Desktop/oldwords.txt", "r") as text:
    with open("/Users/me/Desktop/newwords.txt", "w") as towriteto:
        for word in text:
            if word[0] == word[0].lower():
                towriteto.write(word)
Then, I decided to refine it even further: I decided to delete all words that are not in the pyenchant module's English dictionary. This operation's code is very similar to the previous one's. This is my code:
import enchant

with open("/Users/me/Desktop/newwords.txt", "r") as text:
    with open("/Users/me/Desktop/words.txt", "w") as towriteto:
        d = enchant.Dict("en_US")
        for word in text:
            if d.check(word):
                towriteto.write(word)
Sadly, this did not write anything to the "towriteto" file, and after some debugging, I found that
d.check(word) -> False
It always returned false. However, when I checked words separately, real words returned True, and fake words returned False as they should.
I have no idea what is wrong with my second script. The file locations are correct and the pyenchant installation had no issues.
Thanks in advance!
I don't know the input file format, but if there is only one word per line, try removing the end-of-line character from word before calling d.check(word):
word = word.rstrip()
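To see why every check fails, here is a stand-in using a plain set instead of enchant.Dict (hypothetical words; the trailing newline breaks both the same way, since iterating a file yields lines with their "\n" attached):

```python
# Stand-in for d.check(): membership in a plain set of known words.
known = {"apple", "banana"}

word = "apple\n"  # lines iterated from a file keep their newline

print(word in known)           # False: "apple\n" != "apple"
print(word.rstrip() in known)  # True once the newline is removed
```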

Split/decompose German words in Python

I need to split/decompose German composed words in Python. An example:
Donaudampfschiffahrtsgesellschaftskapitän
should be decomposed to:
[Donau, Dampf, Schiff, Fahrt, Gesellschaft, Kapitän]
First I found wordaxe, but it did not work. Then I came across NLTK, but I still don't understand whether it is something I need.
A solution with an example would be really great!
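Not a full answer, but a simple recursive longest-match split against a large German word list gets surprisingly far, provided you also strip linking elements like the Fugen-s. A sketch with a toy vocabulary (a real run would load a full word list); note that the pre-1996 spelling Schiffahrt merges Schiff + Fahrt into a shared f, so a character-level splitter can only recover it if the compound itself is in the vocabulary:

```python
def split_compound(word, vocab, links=("s", "es")):
    """Greedy recursive decomposition; returns a list of parts or None."""
    word = word.lower()
    if not word:
        return []
    # Prefer the longest dictionary word at the front.
    for end in range(len(word), 0, -1):
        if word[:end] in vocab:
            rest = split_compound(word[end:], vocab, links)
            if rest is not None:
                return [word[:end]] + rest
    # No dictionary word fits: try dropping a linking element (Fugen-s).
    for link in links:
        if word.startswith(link):
            rest = split_compound(word[len(link):], vocab, links)
            if rest is not None:
                return rest
    return None

# Toy vocabulary; load a real German word list here instead.
vocab = {"donau", "dampf", "schiffahrt", "gesellschaft", "kapitän"}

parts = split_compound("Donaudampfschiffahrtsgesellschaftskapitän", vocab)
print([p.capitalize() for p in parts])
# ['Donau', 'Dampf', 'Schiffahrt', 'Gesellschaft', 'Kapitän']
```

Greedy longest-match can mis-split ambiguous compounds, so for serious use you would want to score alternative decompositions, but as a starting point this needs nothing beyond a word list.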
