I'm trying to make a class that can clean text. The class has several methods, such as converting text to lower case, spell-checking the text, lemmatizing the text, removing special characters, etc. Finally, I have a method (cleaned_text) that calls all the above methods in order and returns the final cleaned text. Here is the code:
import re

# sym_spell (spell checker), stop (stop-word set) and nlp (spaCy model)
# are assumed to be set up elsewhere.

class TextCleaner:
    def __init__(self, string):
        self.string = string

    def lowercase(self):
        string_lower = self.string.lower()
        return string_lower

    def regex_stripper(self):
        stripped = re.sub(r"[^a-zA-Z0-9 ']+", " ", self.string)
        no_double_spaces = re.sub(r' +', ' ', stripped)
        return no_double_spaces

    def spell_checker(self):
        spell_checked = sym_spell.lookup_compound(self.string, max_edit_distance=2)[0].term
        return spell_checked

    def remove_stop(self):
        no_stop_words = " ".join(i for i in self.string.split() if i not in stop)
        return no_stop_words

    def lemmatize(self):
        doc = nlp(self.string)
        lemmatized_sentence = " ".join([token.lemma_ for token in doc])
        return lemmatized_sentence

    # lowercase
    # regex_stripper
    # spell_checker
    # remove_stop
    # lemmatize
    def cleaned_text(self):
        return self.string
I'm not very well versed with decorators, so sorry for the clumsy code. The cleaned_text method ought to call the following methods in order (lowercase, regex_stripper, remove_stop, lemmatize) and then finally return the cleaned text.
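For reference, here is a minimal sketch of one way the chaining could work with the methods as written: since each method reads self.string, cleaned_text can write every intermediate result back before calling the next step (spell_checker is included as well, per the comments in the class):

def cleaned_text(self):
    # Run the steps in order, writing each result back so the next step sees it.
    for step in (self.lowercase, self.regex_stripper, self.spell_checker,
                 self.remove_stop, self.lemmatize):
        self.string = step()
    return self.string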
I am trying to print the frequencies of all words in a text.
I want to print all keys sorted by their values.
That is, I want to print the frequencies from most frequent to least frequent.
Here is my code:
freqMap = {}

class analysedText(object):
    def __init__(self, text):
        # remove punctuation
        formattedText = text.replace('.', '').replace('!', '').replace('?', '').replace(',', '')
        # make text lowercase
        formattedText = formattedText.lower()
        self.fmtText = formattedText

    def freqAll(self):
        wordList = self.fmtText.split(' ')
        freqMap = {}
        for word in set(wordList):
            freqMap[word] = wordList.count(word)
        return freqMap

mytexte = str(input())
my_text = analysedText(mytexte)
my_text.freqAll()

freqKeys = freqMap.keys()
freqValues = sorted(freqMap.values())
a = 0
for i in freqValues:
    if i == a:
        pass
    else:
        for key in freqKeys:
            if freqMap[key] == freqValues[i]:
                print(key, ": ", freqValues[i])
    a = i
Your function freqAll returns a value that you are not catching.
It should be:
counts = my_text.freqAll()
Then you use the counts variable in the rest of your code.
The freqAll method of your class does return freqMap, but you never store the result, so you are in fact processing the empty dict freqMap that was created before the class declaration. Try replacing
my_text.freqAll()
with
freqMap = my_text.freqAll()
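Putting that together, here is a short sketch of how the sorted printing could then look, using sorted() with a key instead of the nested loops (names follow the question's code):

freqMap = my_text.freqAll()

# Print words from most frequent to least frequent.
for word, count in sorted(freqMap.items(), key=lambda item: item[1], reverse=True):
    print(word, ":", count)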
I have a graph of dependencies with parent and child nodes. Child nodes have a # sign indicating that the character/number in that position is the same as in the parent node. I understand the title might be weird, so let me give you an example:
Initial reference variable:
ref = '12345.1.1'
Strings that will need replacing within:
example1 = '#.1.2'
example2 = '#.#.3'
Outcome after conversion/replacing (this is what I need help with):
# Make some magic, replace #'s with matching parent digits to get this output on string variables above:
example1 = '12345.1.2'
example2 = '12345.1.3'
In essence, how do I replace the # char (if present) with its matching "parent" stringified digits? I guess it might work with replace or regex, but if there are any built-in methods that would do it, I'd be happy to know.
Thanks in advance.
ref = '12345.1.1'
example1 = '#.1.2'
example2 = '#.#.3'

def replace(text, ref='12345.1.1', split='.', placeholder='#'):
    ref = ref.split(split)
    text = text.split(split)
    return split.join(txt1 if txt2 == placeholder else txt2
                      for txt1, txt2 in zip(ref, text))

print(replace(example1))
print(replace(example2))
print(replace('#.#.#'))

Output

12345.1.2
12345.1.3
12345.1.1
A bit cumbersome, but this should do it:
import re

class Replacement:
    def __init__(self, ref):
        self.ref = ref.split(".")
        self.counter = 0

    def repl(self, match):
        if match.group() == "#":
            res = self.ref[self.counter]
            self.counter += 1
            return res
        return match.group()

example1 = '#.1.2'
example2 = '#.#.3'

for example in [example1, example2]:
    r = Replacement(ref='12345.1.1')
    result = re.sub("#", r.repl, example)
    print(result)
Output
12345.1.2
12345.1.3
Note that you need to create a new Replacement object or restart the counter for each example in your input data.
Concise function:

def replace_char(string: str, ref: str = '12345.1.1', place_holder: str = '#', split: str = '.') -> str:
    return split.join(node[0] if node[1] == place_holder else node[1]
                      for node in zip(ref.split(split), string.split(split)))
Essentially I have a Python script that loads in a number of files; each file contains a list, and these lists are used to generate strings. For example: "Just been to see $film% in $location%, I'd highly recommend it!" I need to replace the $film% and $location% placeholders with a random element of the array from their respective imported lists.
I'm very new to Python but have picked most of it up quite easily; however, strings in Python are immutable, so handling this sort of task is different compared to other languages I've used.
Here is the code as it stands. I've tried adding a while loop, but it would still only replace the first instance of a replaceable word and leave the rest.
#!/usr/bin/python
import random

def replaceWord(string):
    #Find Variable Type
    if "url" in string:
        varType = "url"
    elif "film" in string:
        varType = "film"
    elif "food" in string:
        varType = "food"
    elif "location" in string:
        varType = "location"
    elif "tvshow" in string:
        varType = "tvshow"
    #LoadVariableFile
    fileToOpen = "/prototype/default_" + varType + "s.txt"
    var_file = open(fileToOpen, "r")
    var_array = var_file.read().split('\n')
    #Get number of possible variables
    numberOfVariables = len(var_array)
    #ChooseRandomElement
    randomElement = random.randrange(0, numberOfVariables)
    #ReplaceWord
    oldValue = "$" + varType + "%"
    newString = string.replace(oldValue, var_array[randomElement], 1)
    return newString

testString = "Just been to see $film% in $location%, I'd highly recommend it!"
Test = replaceWord(testString)
This would give the following output: Just been to see Harry Potter in $location%, I'd highly recommend it!
I have tried using while loops, counting the number of words to replace in the string, etc., but it still only changes the first word. It also needs to be able to replace multiple instances of the same "variable" type in the same string, so if there are two occurrences of $film% in a string it should replace both with a random element from the loaded file.
The following program may be somewhat closer to what you are trying to accomplish. Please note that documentation has been included to help explain what is going on. The templates are a little different than yours but provide customization options.
#! /usr/bin/env python3
import random

PATH_TEMPLATE = './prototype/default_{}s.txt'


def main():
    """Demonstrate the StringReplacer class with a test string."""
    replacer = StringReplacer(PATH_TEMPLATE)
    text = "Just been to see {film} in {location}, I'd highly recommend it!"
    result = replacer.process(text)
    print(result)


class StringReplacer:

    """StringReplacer(path_template) -> StringReplacer instance"""

    def __init__(self, path_template):
        """Initialize the instance attribute of the class."""
        self.path_template = path_template
        self.cache = {}

    def process(self, text):
        """Automatically discover text keys and replace them at random."""
        keys = self.load_keys(text)
        result = self.replace_keys(text, keys)
        return result

    def load_keys(self, text):
        """Discover what replacements can be made in a string."""
        keys = {}
        while True:
            try:
                text.format(**keys)
            except KeyError as error:
                key = error.args[0]
                self.load_to_cache(key)
                keys[key] = ''
            else:
                return keys

    def load_to_cache(self, key):
        """Warm up the cache as needed in preparation for replacements."""
        if key not in self.cache:
            with open(self.path_template.format(key)) as file:
                unique = set(filter(None, map(str.strip, file)))
            self.cache[key] = tuple(unique)

    def replace_keys(self, text, keys):
        """Build a dictionary of random replacements and run formatting."""
        for key in keys:
            keys[key] = random.choice(self.cache[key])
        new_string = text.format(**keys)
        return new_string


if __name__ == '__main__':
    main()
The varType you are assigning will be set by only one branch of your if-elif-else sequence, and then the interpreter moves on. You would have to go through all of them and perform the replacements. One way would be to set flags for which parts of the sentence you want to change. It would go this way:
url_to_change = False
film_to_change = False

if "url" in string:
    url_to_change = True
elif "film" in string:
    film_to_change = True

if url_to_change:
    change_url()
if film_to_change:
    change_film()
If you want to change all occurrences you could use a for loop. Just do something like this in the part where you swap a word:

for word in sentence.split():
    if word == 'url':
        change_word()  # placeholder for the actual replacement
Having said this, I'd recommend introducing two improvements. Push the changing into separate functions; it will be easier to manage your code.
For example, the function for getting the items to choose from out of a file could be:
def load_variable_file(file_name):
    fileToOpen = "/prototype/default_" + file_name + "s.txt"
    var_file = open(fileToOpen, "r")
    var_array = var_file.read().split('\n')
    var_file.close()
    return var_array
Instead of
if "url" in string:
varType = "url"
you could do:
def change_url(sentence):
    var_array = load_variable_file("url")
    numberOfVariables = len(var_array)
    randomElement = random.randrange(0, numberOfVariables)
    oldValue = "$" + "url" + "%"
    return sentence.replace(oldValue, var_array[randomElement], 1)

if "url" in sentence:
    sentence = change_url(sentence)
And so on. You could push part of what I've put into change_url() into a separate function, since it would be used by all such functions (just like loading data from a file). I deliberately do not change everything; I hope you get my point. As you can see, with functions that have clear names you can write less code, split it into logical, reusable parts, and there's no need to comment the code.
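For illustration only, a sketch of what such a shared helper might look like (the name replace_random is made up for this sketch, not taken from the code above):

def replace_random(sentence, var_type):
    # Shared by change_url(), change_film(), etc.: pick a random entry from
    # the matching file and substitute the first "$<type>%" placeholder.
    var_array = load_variable_file(var_type)
    old_value = "$" + var_type + "%"
    return sentence.replace(old_value, random.choice(var_array), 1)

if "$film%" in sentence:
    sentence = replace_random(sentence, "film")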
A few points about your code:
You can replace the randrange with random.choice, as you just want to select an item from an array.
You can iterate over your types and do the replacement without specifying a limit (the third parameter), then assign the result back to the same variable, so you keep all your replacements.
readlines() does what you want with open(): it reads the file and stores the lines as an array.
Return the new string after going through all the possible replacements.
Something like this:
#!/usr/bin/python
import random

def replaceWord(string):
    #Find Variable Type
    types = ("url", "film", "food", "location", "tvshow")
    for t in types:
        tag = "$" + t + "%"
        if tag in string:
            #LoadVariableFile
            fileToOpen = "/prototype/default_" + t + "s.txt"
            with open(fileToOpen) as f:
                # strip the trailing newlines that readlines() keeps
                var_array = [line.strip() for line in f.readlines()]
            while tag in string:
                choice = random.choice(var_array)
                string = string.replace(tag, choice, 1)
                var_array.remove(choice)
    return string
testString = "Just been to see $film% in $location%, I'd highly recommend it!"
new = replaceWord(testString)
print(new)
Lexical analyzers are quite easy to write when you have regexes. Today I wanted to write a simple general analyzer in Python, and came up with:
import re
import sys
class Token(object):
    """ A simple Token structure.
        Contains the token type, value and position.
    """
    def __init__(self, type, val, pos):
        self.type = type
        self.val = val
        self.pos = pos

    def __str__(self):
        return '%s(%s) at %s' % (self.type, self.val, self.pos)


class LexerError(Exception):
    """ Lexer error exception.

        pos:
            Position in the input line where the error occurred.
    """
    def __init__(self, pos):
        self.pos = pos


class Lexer(object):
    """ A simple regex-based lexer/tokenizer.

        See below for an example of usage.
    """
    def __init__(self, rules, skip_whitespace=True):
        """ Create a lexer.

            rules:
                A list of rules. Each rule is a `regex, type`
                pair, where `regex` is the regular expression used
                to recognize the token and `type` is the type
                of the token to return when it's recognized.

            skip_whitespace:
                If True, whitespace (\s+) will be skipped and not
                reported by the lexer. Otherwise, you have to
                specify your rules for whitespace, or it will be
                flagged as an error.
        """
        self.rules = []
        for regex, type in rules:
            self.rules.append((re.compile(regex), type))
        self.skip_whitespace = skip_whitespace
        self.re_ws_skip = re.compile('\S')

    def input(self, buf):
        """ Initialize the lexer with a buffer as input.
        """
        self.buf = buf
        self.pos = 0

    def token(self):
        """ Return the next token (a Token object) found in the
            input buffer. None is returned if the end of the
            buffer was reached.
            In case of a lexing error (the current chunk of the
            buffer matches no rule), a LexerError is raised with
            the position of the error.
        """
        if self.pos >= len(self.buf):
            return None
        else:
            if self.skip_whitespace:
                m = self.re_ws_skip.search(self.buf[self.pos:])
                if m:
                    self.pos += m.start()
                else:
                    return None

            for token_regex, token_type in self.rules:
                m = token_regex.match(self.buf[self.pos:])
                if m:
                    value = self.buf[self.pos + m.start():self.pos + m.end()]
                    tok = Token(token_type, value, self.pos)
                    self.pos += m.end()
                    return tok

            # if we're here, no rule matched
            raise LexerError(self.pos)

    def tokens(self):
        """ Returns an iterator to the tokens found in the buffer.
        """
        while 1:
            tok = self.token()
            if tok is None: break
            yield tok


if __name__ == '__main__':
    rules = [
        ('\d+',          'NUMBER'),
        ('[a-zA-Z_]\w+', 'IDENTIFIER'),
        ('\+',           'PLUS'),
        ('\-',           'MINUS'),
        ('\*',           'MULTIPLY'),
        ('\/',           'DIVIDE'),
        ('\(',           'LP'),
        ('\)',           'RP'),
        ('=',            'EQUALS'),
    ]

    lx = Lexer(rules, skip_whitespace=True)
    lx.input('erw = _abc + 12*(R4-623902) ')

    try:
        for tok in lx.tokens():
            print tok
    except LexerError, err:
        print 'LexerError at position', err.pos
It works just fine, but I'm a bit worried that it's too inefficient. Are there any regex tricks that will allow me to write it in a more efficient / elegant way?
Specifically, is there a way to avoid looping over all the regex rules linearly to find one that fits?
I suggest using the re.Scanner class. It's not documented in the standard library, but it's well worth using. Here's an example:
import re
scanner = re.Scanner([
    (r"-?[0-9]+\.[0-9]+([eE]-?[0-9]+)?", lambda scanner, token: float(token)),
    (r"-?[0-9]+", lambda scanner, token: int(token)),
    (r" +", lambda scanner, token: None),
])
>>> scanner.scan("0 -1 4.5 7.8e3")[0]
[0, -1, 4.5, 7800.0]
You can merge all your regexes into one using the "|" operator and let the regex library do the work of discerning between tokens. Some care should be taken to ensure the preference of tokens (for example to avoid matching a keyword as an identifier).
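For instance, a small sketch of the combined-pattern idea with named groups (the token names here are only illustrative):

import re

# Alternatives are tried left to right, so the keyword comes before the
# general identifier rule to keep the intended preference.
combined = re.compile(r'(?P<IF>\bif\b)|(?P<IDENTIFIER>[A-Za-z_]\w*)|(?P<NUMBER>\d+)')

m = combined.match('if')
print(m.lastgroup)  # IF, not IDENTIFIER, because the IF alternative is listed first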
I found this in the Python documentation. It's simple and elegant.
import collections
import re

Token = collections.namedtuple('Token', ['typ', 'value', 'line', 'column'])

def tokenize(s):
    keywords = {'IF', 'THEN', 'ENDIF', 'FOR', 'NEXT', 'GOSUB', 'RETURN'}
    token_specification = [
        ('NUMBER',  r'\d+(\.\d*)?'),  # Integer or decimal number
        ('ASSIGN',  r':='),           # Assignment operator
        ('END',     r';'),            # Statement terminator
        ('ID',      r'[A-Za-z]+'),    # Identifiers
        ('OP',      r'[+*\/\-]'),     # Arithmetic operators
        ('NEWLINE', r'\n'),           # Line endings
        ('SKIP',    r'[ \t]'),        # Skip over spaces and tabs
    ]
    tok_regex = '|'.join('(?P<%s>%s)' % pair for pair in token_specification)
    get_token = re.compile(tok_regex).match
    line = 1
    pos = line_start = 0
    mo = get_token(s)
    while mo is not None:
        typ = mo.lastgroup
        if typ == 'NEWLINE':
            line_start = pos
            line += 1
        elif typ != 'SKIP':
            val = mo.group(typ)
            if typ == 'ID' and val in keywords:
                typ = val
            yield Token(typ, val, line, mo.start()-line_start)
        pos = mo.end()
        mo = get_token(s, pos)
    if pos != len(s):
        raise RuntimeError('Unexpected character %r on line %d' % (s[pos], line))

statements = '''
    IF quantity THEN
        total := total + price * quantity;
        tax := price * 0.05;
    ENDIF;
'''

for token in tokenize(statements):
    print(token)
The trick here is the line:
tok_regex = '|'.join('(?P<%s>%s)' % pair for pair in token_specification)
Here (?P<ID>PATTERN) will mark the matched result with a name specified by ID.
re.match is anchored. You can give it a position argument:
pos = 0
end = len(text)
while pos < end:
    match = regexp.match(text, pos)
    # do something with your match
    pos = match.end()
Have a look at pygments, which ships a huge number of lexers for syntax-highlighting purposes, with different implementations, most based on regular expressions.
It's possible that combining the token regexes will work, but you'd have to benchmark it. Something like:
x = re.compile('(?P<NUMBER>[0-9]+)|(?P<VAR>[a-z]+)')
a = x.match('9999').groupdict()  # => {'VAR': None, 'NUMBER': '9999'}
if a:
    token = [item for item in a.items() if item[1] is not None][0]
The filter is where you'll have to do some benchmarking...
Update: I tested this, and it seems as though if you combine all the tokens as stated and write a function like:
def find_token(lst):
    for tok in lst:
        if tok[1] is not None:
            return tok
    raise Exception
You'll get roughly the same speed (maybe a touch faster) this way. I believe the speedup comes from reducing the number of calls to match, but the loop for token discrimination is still there, which of course kills it.
This isn't exactly a direct answer to your question, but you might want to look at ANTLR. According to this document, the Python code generation target should be up to date.
As for your regexes, there are really two ways to go about speeding things up if you're sticking to regexes. The first would be to order your regexes by the probability of finding them in a typical text. You could do this by adding a simple profiler to the code that collects token counts for each token type, and running the lexer on a body of work. The other solution would be to bucket-sort your regexes (since your key space, being a character, is relatively small) and then use an array or dictionary to select the regexes to try after performing a single discrimination on the first character.
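A rough sketch of the second idea, the first-character buckets (the rule set here is only illustrative, not the asker's full grammar):

import re
from collections import defaultdict

rules = [
    (re.compile(r'\d+'), 'NUMBER'),
    (re.compile(r'[a-zA-Z_]\w*'), 'IDENTIFIER'),
    (re.compile(r'\+'), 'PLUS'),
    (re.compile(r'\('), 'LP'),
]

# Precompute, for every printable first character, which rules can start there.
buckets = defaultdict(list)
for ch in map(chr, range(32, 127)):
    for regex, token_type in rules:
        if regex.match(ch):
            buckets[ch].append((regex, token_type))

def rules_for(text, pos):
    # Only the rules that can possibly match at text[pos] need to be tried.
    return buckets.get(text[pos], [])

print([name for _, name in rules_for("12*(x+1)", 0)])  # ['NUMBER']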
However, I think that if you're going to go this route, you should really try something like ANTLR which will be easier to maintain, faster, and less likely to have bugs.