Lambda Functions in Python

In the NLTK toolkit, I'm trying to use a lambda function to filter my results.
I have a text_file and a terms_file.
What I'm doing is using likelihood_ratio in NLTK to rank the multi-word terms in the terms_file. But the input here is the lemma of each multi-word term, so I created a function that extracts the lemma of each multi-word term so it can be fed into the lambda function afterwards.
So it looks like this (lem is pseudocode):
text_file = myfile
terms_file = myfile

def lem(file):
    return lemma for each term in the file
My problem is here: how can I call this function in the filter? When I do the following, it does not work.
finder = BigramCollocationFinder.from_words(text_file)
finder.apply_ngram_filter(lambda *w: w not in lem(terms_file))
finder.score_ngrams(BigramAssocMeasures.likelihood_ratio)
print(finder)
It also does not work with an explicit iteration:
finder.apply_ngram_filter(lambda *w: w not in [x for x in lem(terms_file)])

(This is sort of a wild guess, but I'm pretty confident that this is the cause of your problem.)
Judging from your pseudo-code, the lem function operates on a file handle, reading some information from that file. You need to understand that a file handle is an iterator, and it will be exhausted when iterated once. That is, the first call to lem works as expected, but then the file is fully read and further calls will yield no results.
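To see the effect in isolation, here is a minimal sketch (the file name is hypothetical) showing that a second pass over an open file handle yields nothing:
with open("terms.txt") as handle:
    first_pass = [line.strip() for line in handle]   # reads the whole file
    second_pass = [line.strip() for line in handle]  # handle is already exhausted
print(len(first_pass), len(second_pass))  # prints the real line count, then 0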
Thus, I suggest storing the result of lem in a list. This should also be much faster than reading the file again and again. Try something like this:
all_lemma = lem(terms_file) # temporary variable holding the result of `lem`
finder.apply_ngram_filter(lambda *w: w not in all_lemma)
Your line finder.apply_ngram_filter(lambda *w: w not in [x for x in lem(terms_file)]) does not work, because while this creates a list from the result of lem, it does so each time the lambda is executed, so you end up with the same problem.
(Not sure what apply_ngram_filter does, so there might be more problems after that.)
Update: Judging from your other question, it seems like lem itself is a generator function. In this case, you have to explicitly convert the results to a list; otherwise you will run into just the same problem when that generator is exhausted.
all_lemma = list(lem(terms_file))
If the elements yielded by lem are hashable, you can also create a set instead of a list, i.e. all_lemma = set(lem(terms_file)); this will make the lookup in the filter much faster.

If I understand what you are saying, lem(terms_file) returns a list of lemmas. But what do "lemmas" look like? apply_ngram_filter() will only work if each "lemma" is a tuple of exactly two words. If that is indeed the case, then your code should work after you've fixed the file input as suggested by @tobias_k.
Even if your code works, the output of lem() should be stored as a set, not a list. Otherwise your code will be abysmally slow.
all_lemmas = set(lem(terms_file))
But I'm not too sure the above assumptions are right. Why would all lemmas be exactly two words long? I'm guessing that "lemmas" are one word long, and you intended to discard any ngram containing a word that is not in your list. If that's true you need apply_word_filter(), not apply_ngram_filter(). Note that it expects one argument (a word), so it should be written like this:
finder.apply_word_filter(lambda w: w not in all_lemmas)
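Putting both answers together, a minimal sketch of the corrected flow (assuming your lem generator yields single-word lemma strings and text_file is already a sequence of tokens):
from nltk.collocations import BigramCollocationFinder
from nltk.metrics import BigramAssocMeasures

all_lemmas = set(lem(terms_file))  # materialize the generator exactly once

finder = BigramCollocationFinder.from_words(text_file)
finder.apply_word_filter(lambda w: w not in all_lemmas)  # drop bigrams containing unknown words
scored = finder.score_ngrams(BigramAssocMeasures.likelihood_ratio)
print(scored[:10])  # (bigram, score) pairs, best first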

Related

Why is Map function not computing my lambda?

So I'm trying to use the map function with a lambda to write each item of a list to a txt file on a new line
map(lambda x: text_file.write(f"{x}\n"), itemlist_with_counts_formatted)
I understand that map returns a map object, but I don't need the return value.
What I want is for the map function to compute the lambda, which adds "\n" to the end of each item in the given list.
I thought that map should do this (compute the function (lambda appends "\n") using arguments from the iterable) but nothing gets output to the txt file.
For clarity, I can totally do this with a list comprehension but I wanted to learn how to use map (and properly anonymous lambdas), so am looking for help solving it using these two functions specifically (if possible).
I have also tried it without the f string, using just x + "\n" but this doesn't work either
Yes, the txt file is open, and yes, I can get it to work using other methods; the problem is specific to how I'm using map or how I'm using lambda, which must be wrong in some way. I've been doing this for six weeks, so it's probably something stupid, but I've tried to figure it out myself and I just can't, and I couldn't find anything on here. I appreciate any help I can get.
You should really not use map for this task.
It looks fancy, but this is the same as using list comprehensions for side effects. It's considered bad practice.
[print(i) for i in range(3)]
Which should be replaced with:
for i in range(3):
    print(i)
In your case, use:
for item in itemlist_with_counts_formatted:
    text_file.write(f"{item}\n")
Why your code did not work:
map returns a lazy iterator; nothing is evaluated until something consumes it. You would need to do:
list(map(lambda x: text_file.write(f"{x}\n"), itemlist_with_counts_formatted))
But, again, don't: this is useless, less efficient, and less explicit.
But I really want a one-liner!
Then use:
text_file.write('\n'.join(itemlist_with_counts_formatted))
NB: unlike the other alternatives in this answer, this one does not add a trailing '\n' at the end of the file.
I really, really want to use map:
text_file.writelines(map(lambda x: f'{x}\n', itemlist_with_counts_formatted))
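For completeness, a generator expression does the same job without the lambda, which many consider more readable:
text_file.writelines(f"{x}\n" for x in itemlist_with_counts_formatted)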
I think the problem is that this use of the map function is a bit improper. As the documentation says, map returns a lazy iterator over the results, so nothing is computed until that iterator is consumed; write's return value (None) is not useful to collect anyway.
I'd suggest using map only to add the line ending, and then calling the writelines function on the resulting iterator, something like:
text_file.writelines(map(lambda x: f"{x}\n", itemlist_with_counts_formatted))
(Not tested)

Python: replacing multiple words in a text file from a dictionary

I am having trouble figuring out where I'm going wrong. I need to randomly replace words and re-write them to the text file, until it no longer makes sense to anyone else. I chose some words just to test it, and have written the following code, which is not currently working:
# A program to read a file and replace words until it is no longer understandable
word_replacement = {'Python':'Silly Snake', 'programming':'snake charming', 'system':'table', 'systems':'tables', 'language':'spell', 'languages':'spells', 'code':'snake', 'interpreter':'charmer'}

main = open("INF108.txt", 'r+')
words = main.read().split()
main.close()

for x in word_replacement:
    for y in words:
        if word_replacement[x][0]==y:
            y==x[1]

text = " ".join(words)
print text

new_main = open("INF108.txt", 'w')
new_main.write(text)
new_main.close()
This is the text in the file:
Python is a widely used general-purpose, high-level programming language. Its design philosophy emphasizes code readability, and its syntax allows programmers to express concepts in fewer lines of code than would be possible in languages such as C++ or Java. The language provides constructs intended to enable clear programs on both a small and large scale. Python supports multiple programming paradigms, including object-oriented, imperative and functional programming or procedural styles. It features a dynamic type system and automatic memory management and has a large and comprehensive standard library. Python interpreters are available for installation on many operating systems, allowing Python code execution on a wide variety of systems. Using third-party tools, such as Py2exe or Pyinstaller, Python code can be packaged into stand-alone executable programs for some of the most popular operating systems, allowing for the distribution of Python-based software for use on those environments without requiring the installation of a Python interpreter.
I've tried a few methods of this but as someone new to Python it's been a matter of guessing, and the last two days spent researching it online, but most of the answers I've found are either far too complicated for me to understand, or are specific to that person's code and don't help me.
OK, let's take this step by step.
main = open("INF108.txt", 'r+')
words = main.read().split()
main.close()
Better to use the with statement here. Also, r is the default mode. Thus:
with open("INF108.txt") as main:
words = main.read().split()
Using with will make main.close() get called automatically for you when this block ends; you should do the same for the file write at the end as well.
Now for the main bit:
for x in word_replacement:
    for y in words:
        if word_replacement[x][0]==y:
            y==x[1]
This little section has several misconceptions packed into it:
Iterating over a dictionary (for x in word_replacement) gives you its keys only. Thus, when you want to compare later on, you should just be checking if word_replacement[x] == y. Doing a [0] on that just gives you the first letter of the replacement.
Iterating over the dictionary is defeating the purpose of having a dictionary in the first place. Just loop over the words you want to replace, and check if they're in the dictionary using y in word_replacement.
y == x[1] is wrong in two ways. First of all, you probably meant to be assigning to y there, not comparing (i.e. y = x[1] -- note the single = sign). Second, assigning to a loop variable doesn't even do what you want. y will just get overwritten with a new value next time around the loop, and the words data will NOT get changed at all.
What you want to do is create a new list of possibly-replaced words, like so:
replaced = []
for y in words:
    if y in word_replacement:
        replaced.append(word_replacement[y])
    else:
        replaced.append(y)

text = ' '.join(replaced)
Now let's do some refinement. Dictionaries have a handy get method that lets you get a value if the key is present, or a default if it's not. If we just use the word itself as a default, we get a nifty reduction:
replaced = []
for y in words:
    replacement = word_replacement.get(y, y)
    replaced.append(replacement)

text = ' '.join(replaced)
Which you can turn into a one-liner with a generator expression:
text = ' '.join(word_replacement.get(y, y) for y in words)
And now we're done.
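For reference, the whole fixed program is then just a few lines (a sketch reusing the file name and the word_replacement dictionary from the question):
# word_replacement as defined in the question
with open("INF108.txt") as main:
    words = main.read().split()

text = ' '.join(word_replacement.get(y, y) for y in words)

with open("INF108.txt", 'w') as new_main:
    new_main.write(text)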
It looks like you want something like this as your if statement in the nested loops:
if x==y:
    y=word_replacement[x]
When you loop over a dictionary, you get its keys, not key-value pairs:
>>> mydict={'Python':'Silly Snake', 'programming':'snake charming', 'system':'table'}
>>> for i in mydict:
... print i
Python
programming
system
You can then get the value with mydict[i].
This doesn't quite work, though, because assigning to y doesn't change that element of words. You can loop over its indices instead of elements to assign to the current element:
for x in word_replacement:
    for y in range(len(words)):
        if x==words[y]:
            words[y]=word_replacement[x]
I'm using range() and len() here to get a list of indices of words ([0, 1, 2, ...]).
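As a side note, enumerate yields index/element pairs directly, which avoids the range(len(...)) pattern; a sketch that also folds the outer dictionary loop into a direct lookup:
for i, word in enumerate(words):
    if word in word_replacement:
        words[i] = word_replacement[word]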
Your issue is probably here:
if word_replacement[x][0]==y:
Here's a small example of what is actually happening, which is probably not what you intended:
w = {"Hello": "World", "Python": "Awesome"}
print w["Hello"]
print w["Hello"][0]
Which should result in:
World
W
You should be able to figure out how to correct the code from here.
You used word_replacement (which is a dictionary) incorrectly. You should change your for loop to something like this:
for y in words:
    if y in word_replacement:
        words[words.index(y)] = word_replacement[y]

Python script for removing duplicates taking 24+ hours to loop through 10^7 records

Input t1:
P95P,71655,LINC-JP,pathogenic
P95P,71655,LINC-JP,pathogenic
P71P,71655,LINC-JP,pathogenic
P71P,71655,LINC-JP,pathogenic
Output op:
P95P,71655,LINC-JP,pathogenic
P71P,71655,LINC-JP,pathogenic
My code:
def dup():
    fi=open("op","a")
    l=[];final="";
    q=[];dic={};
    for i in open("t1"):
        k=i.split(",")
        q.append(k[1])
        q.append(k[0])
        if q in l:
            pass
        else:
            final= final + i.strip() + "\n"
            fi.write(str(i.strip()))
            fi.write("\n")
            l.append(q)
        q=[]
        #print i.strip()
    fi.close()
    return final.strip()

d=dup()
In the above input, lines 1-2 and lines 3-4 are duplicates; hence, in the output, these duplicates are removed. The entries in my input file number around 10^7.
Why has my code been running for more than 24 hours on a 76 MB input file? It has yet to complete even one pass over the entire input. It works fine for small files.
Can anyone please point out the reason for this long running time? How can I optimize my program? Thanks.
You're using an O(n²) algorithm, which scales poorly for larger files:
for i in open("t1"): # linear pass of file takes O(n) time
...
if q in l: # linear pass of list l takes O(n) time
...
...
You should consider using a set (i.e. make l a set) or itertools.groupby if duplicates will always be next to each other. These approaches will be O(n).
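Since the duplicates in your sample are adjacent, a minimal itertools.groupby sketch (keying on the first two comma-separated fields; untested against your full data) could look like this:
from itertools import groupby

def dedup_adjacent(infile, outfile):
    key = lambda line: line.split(",")[:2]
    with open(infile) as src, open(outfile, "w") as dst:
        for _, group in groupby(src, key=key):
            dst.write(next(group))  # keep only the first line of each run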
If you have access to a Unix system, uniq is a nice utility that is made for your problem (note that uniq only collapses adjacent duplicate lines, which matches your sample input):
uniq input.txt output.txt
See https://www.cs.duke.edu/csl/docs/unix_course/intro-89.html
I know this is a Python question, but sometimes Python is not the tool for the task.
And you can always embed a system call in your Python script.
It's not clear why you're building a huge string (final) that holds the same thing the file does, or what dic is for. In terms of performance, you can look up x in y much faster if y is a set than if y is a list. Also, a minor point: shorter variable names don't improve performance, so use good ones instead. I would suggest:
def deduplicate(infile, outfile):
    seen = set()
    #final = []
    with open(outfile, "a") as out, open(infile) as in_:
        for line in in_:
            check = tuple(line.split(",")[:2])
            if check not in seen:
                #final.append(line.strip())
                out.write(line) # why 'strip' the '\n' then 'write' a new one?
                seen.add(check)
    #return "\n".join(final)
If you really do need final, make it a list until the last moment (see the commented-out lines): gradual string concatenation creates lots of unnecessary intermediate objects.
There are a couple things that you are doing very inefficiently. The largest is that you made l a list, so the line if q in l has to search through everything in the list already in order to check if q matches it. If you make l a set, the membership check can be done using a hash calculation and array lookup, which take the same (small) amount of time no matter how much you add to the set (though it will cause l not to be read in the order that it was written).
Other little speedups that you can do include:
Using a tuple (k[1], k[0]) instead of a list for q.
You are writing to your output file fi on every loop iteration. Your OS will try to batch and background the writes, but it may be faster to do one big write at the end, as sketched below. I am not sure on this point, but try it.
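A sketch of that batched-write idea, combined with the tuple key and a set (same hypothetical file names as the question):
def dup_batched():
    seen = set()
    kept = []
    for i in open("t1"):
        k = i.split(",")
        q = (k[1], k[0])  # tuple key, same fields as the original q
        if q not in seen:
            seen.add(q)
            kept.append(i)
    with open("op", "a") as fi:
        fi.write("".join(kept))  # one big write instead of one per line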

Filter massive Linux log files

I have the user enter the items they would like to filter out into a list. From there it filters using:
while knownIssuesCounter != len(newLogFile):
    for line in knownIssues:
        if line in newLogFile[knownIssuesCounter]:
            if line not in issuesFound:
                issuesFoundCounter[line]=1
                issuesFound.append(line)
                issuesFound.append(knownIssues[line])
            else:
                issuesFoundCounter[line]=issuesFoundCounter[line] + 1
    knownIssuesCounter +=1
I'm running into hundred-megabyte log files, and it is taking FOREVER...
Is there a better way to be doing this with Python?
Try changing issuesFound from a list to a set:
issuesFound = set()
and use add instead of append:
issuesFound.add(line)
A big part of the reason your code is so slow is the check if line not in issuesFound:. This requires a linear search through a huge list.
You can fix that by adding a set of seen issues (which is effectively free to search). That reduces your time from O(NM) to O(N).
But really, you can make this even simpler by removing the if entirely.
First, you can generate the issuesFound list after the fact from the keys of issuesFoundCounter. For each line in issuesFoundCounter, you want that line, and then its knownIssues[line]. So:
issuesFound = list(flatten((line, knownIssues[line]) for line in issuesFoundCounter))
(I'm using the flatten recipe from the itertools docs. You can copy that into your code, or you can just write this with itertools.chain.from_iterable instead of flatten.)
And that means you can just search if line not in issuesFoundCounter: instead of in issuesFound:, which is already a dict (and therefore effectively free to search). But if you just use setdefault—or, even simpler, use a defaultdict or a Counter instead of a dict—you can make that automatic.
So, if issuesFoundCounter is a Counter, the whole thing reduces to this:
for newLogLine in newLogFile:
    for line in knownIssues:
        if line in newLogLine:
            issuesFoundCounter[line] += 1
And you can turn that into a generator expression, eliminating the slow-ish explicit looping in Python with faster looping inside the interpreter guts. This is only going to be, say, a fixed 5:1 speedup, as opposed to the linear-to-constant speedup from the first half, but it's still worth considering:
issuesFoundCounter = collections.Counter(line
                                         for newLogLine in newLogFile
                                         for line in knownIssues
                                         if line in newLogLine)
The only problem with this is that the issuesFound list is now in arbitrary order, instead of in the order that the issues are found. If that's important, just use an OrderedCounter instead of a Counter. There's a simple recipe in the collections docs, but for your case, it can be as simple as:
class OrderedCounter(Counter, OrderedDict):
    pass
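With that class in place, the full pipeline might look like this (a sketch, assuming as in your original code that knownIssues maps each issue string to a description):
issuesFoundCounter = OrderedCounter(line
                                    for newLogLine in newLogFile
                                    for line in knownIssues
                                    if line in newLogLine)

issuesFound = []
for line in issuesFoundCounter:
    issuesFound.extend((line, knownIssues[line]))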

Comparing file contents in Python

I have two files, say source and target. I compare each element in source to check if it also exists in target; if it does not exist in target, I print it (the end goal is to have zero difference). Here is the code I have written:
def finddefaulters(source,target):
    f = open(source,'r')
    g = open(target,'r')
    reference = f.readlines()
    done = g.readlines()
    for i in reference:
        if i not in done:
            print i,
I need help with:
How would this code be rated on a scale of 1-10?
How can I make it better and more optimal if the file sizes are huge?
Another question: when I read all the lines as list elements, they are interpreted as 'element\n', so for correct comparison I have to add a newline at the end of each file. Is there a way to strip the newlines so that I do not have to add a newline at the end of the files? I tried rstrip, but it did not work.
Thanks in advance.
Regarding efficiency: the method you show has an asymptotic runtime complexity of O(m*n), where m and n are the number of elements in reference and done, i.e. if you double the size of both lists, the algorithm will run four times longer (times a fixed constant that is uninteresting to theoretical computer scientists). If m and n are very large, you will probably want to choose a faster algorithm, e.g. sort the two lists first using .sort() (runtime complexity: O(n * log(n))) and then go through the lists just once (runtime complexity: O(n)). That algorithm has a worst-case runtime complexity of O(n * log(n)), which is already a big improvement. However, you trade readability and simplicity of the code for efficiency, so I would only advise you to do this if absolutely necessary.
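If you do go that route, a minimal sketch of the sort-then-scan idea (two sorted lists walked in a single pass):
def finddefaulters_sorted(source, target):
    with open(source) as f, open(target) as g:
        reference = sorted(line.strip() for line in f)
        done = sorted(line.strip() for line in g)
    missing = []
    j = 0
    for item in reference:
        while j < len(done) and done[j] < item:
            j += 1  # skip entries smaller than the current item
        if j == len(done) or done[j] != item:
            missing.append(item)
    return missing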
Regarding coding style: you do not .close() the file handles, which you should. Instead of opening and closing them manually, you could use Python's with construct. Also, if you like the functional style, you could replace the for loop with a list comprehension:
for i in reference:
    if i not in done:
        print i,
then becomes:
items = [i.strip() for i in reference if i not in done]
print ' '.join(items)
However, this way you will not see any progress while the list is being composed.
As joaquin already mentions, you can loop over f directly instead of f.readlines() as file handles support the iterator protocol.
Some ideas:
1) Use with to open files safely:
with open(source) as f:
    .............
The with statement is used to wrap the execution of a block with methods defined by a context manager. This allows common try...except...finally usage patterns to be encapsulated for convenient reuse.
2) You can iterate over the lines of a file instead of using readlines:
for line in f:
    ..........
3) Although for this short snippet it could be enough, try to use more informative names for your variables. One-letter names are not recommended.
4) If you want to take advantage of the Python standard library, try the functions in the difflib module. For example:
make_file(fromlines, tolines[, fromdesc][, todesc][, context][, numlines])
Compares fromlines and tolines (lists of strings) and returns a string which is a complete HTML file containing a table showing line by line differences with inter-line and intra-line changes highlighted.
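A short usage sketch: make_file is a method of difflib.HtmlDiff (the file names here are hypothetical):
import difflib

with open("source.txt") as f, open("target.txt") as g:
    fromlines = f.readlines()
    tolines = g.readlines()

html = difflib.HtmlDiff().make_file(fromlines, tolines, "source", "target")
with open("diff.html", "w") as out:
    out.write(html)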
