Suppose I have the following code:
def my_func(self, input_line):
    is_skip_line = self.is_skip_line(input_line)  # parse the input line, check whether it is a skip line
    if is_skip_line:
        ...  # do something...
        ...  # do more ...
    if is_skip_line:
        ...  # do one last thing
So the check for is_skip_line (if is_skip_line:) appears twice. Does that mean that, due to lazy evaluation, the method self.is_skip_line(input_line) will be called twice?
If so, what is the best workaround, given that self.is_skip_line(input_line) is time-consuming? Do I have to "immediately invoke" it, like below?
is_skip_line = (lambda x: self.is_skip_line(x))(input_line)
Thanks.
The misconception here is that this statement is not being immediately invoked:
is_skip_line = self.is_skip_line(input_line)
...when in fact, it is.
The method self.is_skip_line will only ever be invoked once. Since you assign it to a variable, you can use that variable as many times as you like in any context you like.
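To see this, here is a minimal sketch (using a free function in place of the method; the names are illustrative) that prints on every invocation, showing the call happens exactly once per call to my_func no matter how many times the variable is read:

def is_skip_line(line):
    print('is_skip_line called')  # printed once per call to my_func
    return line.startswith('#')

def my_func(input_line):
    skip = is_skip_line(input_line)  # the only invocation
    if skip:
        print('first check')
    if skip:
        print('second check')

my_func('# a comment line')  # 'is_skip_line called' appears exactly once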
If you're concerned about its performance, you could use cProfile to measure the method where the call happens relative to the method being called.
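For instance, building on the sketch above, a minimal cProfile run might look like this (assuming my_func is defined at the top level of the script):

import cProfile

cProfile.run('my_func("# some input line")', sort='cumulative')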
Related
Let's say I have a python function like this:
class Something:
    def my_function(...):  # <---- start fold
        ...
        return None        # <---- end fold

    def my_function2(...):
        ...
If I am on the first line of the function, def my_function -- and let's suppose that function is ~50 lines long -- how would I fold that function in vim? The first thing I thought of doing is zf/return -- but this is quite flawed, as (1) lots of functions won't have return statements, or, (2) even more common, there will be multiple return statements within a single function.
What would be the best way to do this?
Try zf]M. ]M should act as a motion to take you to the end of the current method.
Try :set foldmethod=indent. It may work for you. VimWiki can be quite helpful though.
The problem with Python is the lack of explicit block delimiters, so you may want to use a plugin like SimpylFold.
I have been doing a lot of searching, and I don't think I've really found what I have been looking for. I will try my best to explain what I am trying to do, and hopefully there is a simple solution, and I'll be glad to have learned something new.
This is ultimately what I am trying to accomplish: using nosetests, decorate some test cases using the attribute selector plugin, then execute the test cases that match a given criterion by using the -a switch during command-line invocation. The attribute values for the tests that are executed are then stored in an external location. The command-line call I'm using looks like this:
nosetests \testpath\ -a attribute='someValue'
I have also created a customized nosetests plugin, which stores the test cases' attributes and writes them to an external location. The idea is that I can select a batch of tests, and by storing the attributes of these tests, I can filter these results later for reporting purposes. I am accessing the method attributes in my plugin by overriding the "wantMethod" method with code similar to the following:
def set_attribs(self, method, attribute):
    if hasattr(method, attribute):
        if not self.method_attributes.has_key(method.__name__):
            self.method_attributes[method.__name__] = {}
        self.method_attributes[method.__name__][attribute] = getattr(method, attribute)

def wantMethod(self, method):
    self.set_attribs(method, "attribute1")
    self.set_attribs(method, "attribute2")
    pass
I have this working for pretty much all the tests, except for one case where the test is using the "yield" keyword. What is happening is that the generated methods execute fine, but the method attributes are empty for each of the generated functions.
Below is an example of what I am trying to achieve. The test below retrieves a list of values and, for each of those values, yields the results from another function:
@attr(attribute1='someValue', attribute2='anotherValue')
def sample_test_generator(self):
    for (key, value) in _input_dictionary.items():
        f = partial(self._do_test, key, value)
        f.attribute1 = 'someValue'
        yield (lambda x: f(), key)

def _do_test(self, input1, input2):
    ...  # Some code
From what I have read, and think I understand, when yield is called, it creates a new callable function which then gets executed. I have been trying to figure out how to retain the attribute values from my sample_test_generator method, but I have not been successful. I thought I could create a partial function and then add the attribute to that method, but no luck. The tests execute without any errors; it just seems that from my plugin's perspective, the method attributes aren't present, so they don't get recorded.
I realize this is a pretty involved question, but I wanted to make sure that the context for what I am trying to achieve is clear. I have been trying to find information that could help me with this particular case, but I feel like I've reached a stumbling block now, so I would really like to ask the experts for some advice.
Thanks.
** Update **
After reading through the feedback and playing around some more, it looks like if I modify the lambda expression, it achieves what I am looking for. In fact, I didn't even need to create the partial function:
def sample_test_generator(self):
    for (key, value) in _input_dictionary.items():
        yield (lambda: self._do_test)
The only downside to this approach is that the test name will not change. As I played around more, it looks like when a test generator is used, nosetests actually changes the test name in the results based on the arguments it contains. The same thing was happening when I was using the lambda expression with a parameter.
For example:
Using a lambda expression with a parameter:
yield (lambda x: self._do_test, "value1")
In the nosetests plugin, when you access the test case name, it is displayed as "sample_test_generator(value1)".
Using a lambda expression without a parameter:
yield (lambda: self._do_test)
The test case name in this case is "sample_test_generator". In my example above, if there are multiple values in the dictionary, the yield call occurs multiple times; however, the test name always remains "sample_test_generator". This is not as bad as getting the unique test names but then not being able to store the attribute values at all. I will keep playing around, but thanks for the feedback so far!
EDIT
I forgot to come back and provide my final update on how I was able to get this to work in the end. There was a little confusion on my part at first, and after I looked through it some more, I figured out that it had to do with how the tests are recognized:
My original implementation assumed that every test that gets picked up for execution goes through the "wantMethod" call from the plugin's base class. This is not true when "yield" is used to generate the test, because at this point, the test method has already passed the "wantMethod" call.
However, once the test case is generated through the "yield" call, it does go through the "startTest" call from the plugin base class, and this is where I was finally able to store the attribute successfully.
So, in a nutshell, my test execution order looked like this:
nose -> wantMethod(method_name) -> yield -> startTest(yielded_test_name)
In my override of the startTest method, I have the following:
def startTest(self, test):
    # If a test is spawned by using the 'yield' keyword, its name is the parent
    # test name followed by a '(' character.
    # Example: if the parent test is "smoke_test", the test generated from
    # yield would be "smoke_test('input')".
    test_name = str(test)  # assumption: derive the test name from the test object
    parent_test_name = test_name.split('(')[0]
    if self.method_attributes.has_key(test_name):
        self._test_attrib = self.method_attributes[test_name]
    elif self.method_attributes.has_key(parent_test_name):
        self._test_attrib = self.method_attributes[parent_test_name]
    else:
        self._test_attrib = None
With this implementation, along with my override of wantMethod, each test spawned by the parent test case also inherits attributes from the parent method, which is what I needed.
Again, thanks to all who sent replies. This was quite a learning experience.
Would this fix your name issue?
def _actual_test(x, y):
    assert x == y

def test_yield():
    _actual_test.description = "test_yield_%s_%s" % (5, 5)
    yield _actual_test, 5, 5
    _actual_test.description = "test_yield_%s_%s" % (4, 8)  # fail
    yield _actual_test, 4, 8
    _actual_test.description = "test_yield_%s_%s" % (2, 2)
    yield _actual_test, 2, 2
The rename survives @attr too.
Does this work?
@attr(attribute1='someValue', attribute2='anotherValue')
def sample_test_generator(self):
    def get_f(f, key):
        return lambda x: f(), key

    for (key, value) in _input_dictionary.items():
        f = partial(self._do_test, key, value)
        f.attribute1 = 'someValue'
        yield get_f(f, key)

def _do_test(self, input1, input2):
    ...  # Some code
The problem is that the local variables change after you create the lambda.
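Here is a standalone sketch of that pitfall, separate from the nose code:

# Each lambda looks up the loop variable when it is *called*, not when it
# is created, so all three see the loop variable's final value:
callbacks = [lambda: i for i in range(3)]
print([c() for c in callbacks])  # [2, 2, 2]

# A factory function (like get_f above) gives each lambda its own scope,
# freezing the value at creation time:
def make_callback(i):
    return lambda: i

callbacks = [make_callback(i) for i in range(3)]
print([c() for c in callbacks])  # [0, 1, 2]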
I'm wondering if anyone can think up a way to check whether a function needs to return a meaningful value in Python. That is, to check whether the return value will be used for anything. I'm guessing the answer is no, and that it is better to restructure my program flow. The function in question pulls its return values from a network socket. If the return value is not going to be used, I don't want to waste the resources fetching it.
I already tried using tracebacks to discover the calling line, but that didn't work. Here's an example of what I had in mind:
>>> def func():
...     print should_return()
...
>>> func()
False
>>> ret = func()
True
The function "knows" that its return value is being assigned.
Here is my current workaround:
>>> def func(**kwargs):
...     should_return = kwargs.pop('_wait', False)
...     print should_return
...
>>> func()
False
>>> ret = func(_wait=True)
True
The very second line of the body of import this says it all: "explicit is better than implicit". In this case, if you provide an optional argument, the code will be more obvious (and thus easier to understand), simpler, faster and safer. Keep it as a separate argument with a name like wait.
While with difficulty you could implement it magically, it would be nasty code, prone to breaking in new versions of Python and not obvious. Avoid that route; there lieth the path unto madness.
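For example, here is a sketch of the explicit version; all names are illustrative stand-ins for the socket code:

def send_request():
    print('request sent')  # the cheap part, always done

def read_response():
    print('fetching result from the socket')  # the expensive part
    return 42

def fetch(wait=False):
    send_request()
    if wait:
        return read_response()  # only fetch when the caller asks for it
    return None

fetch()                    # fire and forget
value = fetch(wait=True)   # caller explicitly requests the result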
All functions return a value when they complete.
If you're asking whether a given function will return at all, then you are actually asking about the Halting Problem.
One approach might be to return an object with a __del__ method that relies on the garbage collector removing the unused value some time in the future.
Note that it won't happen immediately; it might not even happen at all :)
You might consider returning a future, or 'promise': that is, another function that, when executed, performs the work needed to actually determine the result. You seem to want lazy evaluation, which is "evaluate only what you need" (more or less), rather than what your question confusingly asks: "evaluate only if the result might be needed".
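A minimal sketch of that idea, with an illustrative stand-in for the network fetch:

def fetch_value():
    # Return the pending work instead of the result.
    def force():
        print('performing the expensive network read now')
        return 42
    return force

promise = fetch_value()  # cheap: nothing has been fetched yet
result = promise()       # the cost is paid only if the promise is forced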
I have some code that kind of works using the inspect module, though it might be prone to breaking, as others mention.
inspect.stack()[1].frame.f_code.co_names[-1]
holds the function name when the caller didn't assign the return value to anything; when the caller assigns it to a variable named XXX, it holds 'XXX' instead. The code compares this value against the function name to decide whether the caller assigned the return value to a variable:
import inspect

def func():
    tmp = inspect.stack()[1].frame.f_code.co_names[-1]
    should_return = tmp != 'func'
    print("execute")  # execute something
    # if should_return is False, end here without fetching the result
    if should_return:
        print("fetching result, user assigned to {}".format(tmp))
        # fetch the result and return it here
>>> func()
execute
>>>
>>> xxx=func()
execute
fetching result, user assign to xxx
>>>
All functions in Python always return. If you don't explicitly return, functions return None.
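For example:

def no_explicit_return():
    pass

print(no_explicit_return())  # None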
However, consider:
def func():
    while True:
        pass
This function does not return.
There is no way of determining whether an arbitrary function will return. If you could, you would have solved the Halting Problem.
I'm struggling with this using timeit and was wondering if anyone had any tips.
Basically, I have a function (that I pass a value to) whose speed I want to test, and I created this:
if __name__ == '__main__':
    from timeit import Timer
    t = Timer(superMegaIntenseFunction(10))
    print t.timeit(number=1)
but when I run it, I get weird errors coming from the timeit module:
ValueError: stmt is neither a string nor callable
If I run the function on its own, it works fine. It's when I wrap it in the timeit module that I get the errors (I have tried using double quotes and without; same output).
Any suggestions would be awesome!
Thanks!
Make it a callable:
if __name__ == '__main__':
    from timeit import Timer
    t = Timer(lambda: superMegaIntenseFunction(10))
    print(t.timeit(number=1))
Should work
Timer(superMegaIntenseFunction(10)) means "call superMegaIntenseFunction(10), then pass the result to Timer". That's clearly not what you want. Timer expects either a callable (just as it sounds: something that can be called, such as a function), or a string (so that it can interpret the contents of the string as Python code). Timer works by calling the callable-thing repeatedly and seeing how much time is taken.
Timer(superMegaIntenseFunction) would pass the type check, because superMegaIntenseFunction is callable. However, Timer wouldn't know what values to pass to superMegaIntenseFunction.
The simple way around this, of course, is to use a string with the code. We need to pass a 'setup' argument, because the string is "interpreted as code" in a fresh context: it doesn't have access to the same globals, so you need to run another bit of code to make the definition available; see @oxtopus's answer.
With lambda (as in @Pablo's answer), we can bind the parameter 10 to a call to superMegaIntenseFunction. All we're doing is creating another function, which takes no arguments and calls superMegaIntenseFunction with 10. It's just as if you'd used def to create another function like that, except that the new function doesn't get a name (because it doesn't need one).
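For illustration, the lambda version is equivalent to this named wrapper (with a stand-in definition of superMegaIntenseFunction so the sketch runs on its own):

from timeit import Timer

def superMegaIntenseFunction(n):  # stand-in for the real function
    return sum(x * x for x in range(n * 100000))

def run_superMegaIntenseFunction():  # the def equivalent of the lambda
    return superMegaIntenseFunction(10)

t = Timer(run_superMegaIntenseFunction)
print(t.timeit(number=1))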
You should be passing a string, i.e.:
t = Timer('superMegaIntenseFunction(10)','from __main__ import superMegaIntenseFunction')
One way to do it would be to use partial so that the function superMegaIntenseFunction is used as a callable (i.e. without the parentheses) in the Timer or directly inside timeit.timeit. Using partial will pass the argument to the function when it is called by the timer.
from functools import partial
from timeit import timeit
print(timeit(partial(superMegaIntenseFunction, 10), number=1))
A note for future visitors: if you need to make it work in the pdb debugger and superMegaIntenseFunction is not in the global scope, you can make it work by adding it to globals:
globals()['superMegaIntenseFunction'] = superMegaIntenseFunction
timeit.timeit(lambda: superMegaIntenseFunction(x))
Note that the timing overhead is a little larger in this case because of the extra function calls. [source]
I want to profile my Python code. I am well-aware of cProfile, and I use it, but it's too low-level. (For example, there isn't even a straightforward way to catch the return value from the function you're profiling.)
One of the things I would like to do: I want to take a function in my program and set it to be profiled on the fly while running the program.
For example, let's say I have a function heavy_func in my program. I want to start the program and have the heavy_func function not profile itself. But sometime during the runtime of my program, I want to change heavy_func to profile itself while it's running. (If you're wondering how I can manipulate stuff while the program is running: I can do it either from the debug probe or from the shell that's integrated into my GUI app.)
Is there a module already written which does stuff like this? I can write it myself but I just wanted to ask before so I won't be reinventing the wheel.
It may be a little mind-bending, but this technique should help you find the "bottlenecks", if that's what you want to do.
- You're pretty sure of what routine you want to focus on.
- If that's the routine you need to focus on, it will prove you right.
- If the real problem(s) are somewhere else, it will show you where they are.
If you want a tedious list of reasons why, look here.
I wrote my own module for it. I called it cute_profile. Here is the code. Here are the tests.
Here is the blog post explaining how to use it.
It's part of GarlicSim, so if you want to use it you can install garlicsim and do from garlicsim.general_misc import cute_profile.
If you want to use it on Python 3 code, just install the Python 3 fork of garlicsim.
Here's an outdated excerpt from the code:
import functools

from garlicsim.general_misc import decorator_tools

from . import base_profile


def profile_ready(condition=None, off_after=True, sort=2):
    '''
    Decorator for setting a function to be ready for profiling.

    For example:

        @profile_ready()
        def f(x, y):
            do_something_long_and_complicated()

    The advantages of this over regular `cProfile` are:

    1. It doesn't interfere with the function's return value.

    2. You can set the function to be profiled *when* you want, on the fly.

    How can you set the function to be profiled? There are a few ways:

    You can set `f.profiling_on=True` for the function to be profiled on the
    next call. It will only be profiled once, unless you set
    `f.off_after=False`, and then it will be profiled every time until you set
    `f.profiling_on=False`.

    You can also set `f.condition`. You set it to a condition function taking
    as arguments the decorated function and any arguments (positional and
    keyword) that were given to the decorated function. If the condition
    function returns `True`, profiling will be on for this function call,
    `f.condition` will be reset to `None` afterwards, and profiling will be
    turned off afterwards as well. (Unless, again, `f.off_after` is set to
    `False`.)

    `sort` is an `int` specifying which column the results will be sorted by.
    '''
    def decorator(function):

        def inner(function_, *args, **kwargs):

            if decorated_function.condition is not None:

                if decorated_function.condition is True or \
                   decorated_function.condition(
                       decorated_function.original_function,
                       *args,
                       **kwargs
                   ):

                    decorated_function.profiling_on = True

            if decorated_function.profiling_on:

                if decorated_function.off_after:
                    decorated_function.profiling_on = False
                    decorated_function.condition = None

                # This line puts it in locals, weird:
                decorated_function.original_function

                base_profile.runctx(
                    'result = '
                    'decorated_function.original_function(*args, **kwargs)',
                    globals(), locals(), sort=decorated_function.sort
                )
                return locals()['result']

            else:  # decorated_function.profiling_on is False
                return decorated_function.original_function(*args, **kwargs)

        decorated_function = decorator_tools.decorator(inner, function)

        decorated_function.original_function = function
        decorated_function.profiling_on = None
        decorated_function.condition = condition
        decorated_function.off_after = off_after
        decorated_function.sort = sort

        return decorated_function

    return decorator
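Based on the docstring above, a hypothetical usage session would look something like this (the exact behavior depends on the installed garlicsim version):

from garlicsim.general_misc import cute_profile

@cute_profile.profile_ready()
def heavy_func(x, y):
    return sum(i ** 2 for i in range(x * y))

heavy_func(10, 10)               # runs normally, unprofiled

# Later, e.g. from the debug probe or the integrated shell:
heavy_func.profiling_on = True   # arm profiling for the next call
heavy_func(10, 10)               # this call is profiled, then profiling
                                 # switches itself off (off_after=True)
heavy_func(10, 10)               # unprofiled again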