Folding code in a python function - python

Let's say I have a python function like this:
class Something:
    def my_function(...):      # <---- start fold
        ...
        return None            # <---- end fold
    def my_function2(...):
        ...
If I am on the first function line, def my_function -- and let's suppose that function is ~50 lines long -- how would I fold that function in vim? The first thing I thought of was zf/return -- but this is quite flawed: (1) lots of functions won't have return statements; or, even more common, there will be multiple return statements within a single function.
What would be the best way to do this?
(StackOverflow doesn't allow the word 'code' in a post??)

Try zf]M. ]M should act as a motion to take you to the end of the current method.

Try :set foldmethod=indent. It may work for you. The Vim wiki can be quite helpful too.
The problem with Python is the lack of explicit block delimiters, so you may want to use a plugin like SimpylFold.

Related

Executing single unittest over a sequence in python

I have a unittest that runs over some sequence, checking the results of some processing on each element. I don't care about individual failures, as long as I can pinpoint for which element of the sequence the processing did not yield a good result.
So, instead of writing
class MyTestClass(unittest.TestCase):
    ...
    def test_over_sequence(self):
        for elem in sequence:
            ...  # run a bunch of asserts
I would like to have something like
class MyTestClass(unittest.TestCase):
    def __init__(self):
        super().__init__()
        self.sequence_iter = iter(sequence)
    def test_over_elem(self):
        elem = next(self.sequence_iter)
        ...  # run a bunch of asserts
Is it doable, and if so, how?
Thanks in advance.
P.S. I could issue warnings instead, but I would rather have failed test cases.
What you are describing is called parametric testing. You add some annotational parameters to your tests, and the test framework repeats the test according to your parameters. In your example it would be a list of values, and the test framework would repeat the test for each of them.
It looks like Python still does not have such a feature in its testing framework: https://bugs.python.org/issue7897
But I found a self-help solution here: https://gist.github.com/mfazekas/1710455
I also found that the separate testing framework pytest has support for parametric testing: https://docs.pytest.org/en/latest/example/parametrize.html
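The gist's idea can also be hand-rolled in a few lines: generate one test method per element and attach it to the TestCase class, so each element gets its own pass/fail entry in the report. A minimal sketch (the sequence, names, and the assert inside are all illustrative):

```python
import unittest

sequence = [1, 2, 3]  # illustrative data

def make_test(elem):
    # build a closure so each generated test remembers its own element
    def test(self):
        self.assertIsInstance(elem, int)  # stand-in for the real asserts
    return test

class MyTestClass(unittest.TestCase):
    pass

# attach one named test method per element, so a failure report
# names the exact element that failed
for i, elem in enumerate(sequence):
    setattr(MyTestClass, 'test_elem_%d' % i, make_test(elem))
```

Running this with any unittest runner reports three separate tests, one per element.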
If you want to assert each element separately, then why use a loop in the first place?
So either write all the asserts separately, or add a message to the assert statements to identify which one failed.
# ...
def test_over_sequence(self):
    for elem in sequence:
        self.assertEqual(elem, 3, "{} is not equal to 3".format(elem))  # for instance
# Something along these lines
Thanks to all who tried to help, but after looking at the documentation again, I've found exactly what I need: the unittest subTest() context manager.
I should have seen it in the first place.
class MyTestClass(unittest.TestCase):
    def _some_test(self, **kwargs):
        ...
    def test_over_sequence(self):
        for elem in sequence:
            with self.subTest(elem=elem):
                self._some_test(elem=elem)
Thanks for looking over my shoulder LOL (you know, sometimes to find the answer you need someone to look over your shoulder)

Run some common code before and after a block

In a current project, I found myself often writing code like so:
statement_x()
do_something()
do_other_thing()
statement_y()
# ...
statement_x()
do_third_thing()
do_fourth_thing()
statement_y()
As you can see, statement_x and statement_y often get repeated, and they are always paired, but I am unable to condense them into a single statement. What I would really like is a language construct like this:
def env wrapping:
    statement_x()
    run_code
    statement_y()
In this case, I'm pretending env is a Python keyword indicating a special "sandwich function" that runs certain statements before and after a given block, the point of entry of the block being indicated by the second keyword run_code.
My above program can now be made more readable using this construct:
env wrapping:
    do_something()
    do_other_thing()
env wrapping:
    do_third_thing()
    do_fourth_thing()
Which I mean to have the exact same behavior.
As far as I know such a construct does not exist, and the point of my question is not to speculate on future Python features. However, surely this situation of "run some common code before and after a variable block" must occur often enough that Python has a convenient way of dealing with it! What is this way? Or is the Pythonic solution to simply give up and accept the repetition?
PS: I realize that I could write a function that takes the variable statements as an argument, but that would not be very user-friendly - I would end up writing huge lists of statements inside the parens of my function.
You can use a with statement.
Example using contextlib.contextmanager:
import contextlib

@contextlib.contextmanager
def doing_xy():
    print('statement_x')
    yield
    print('statement_y')
Example usage:
>>> with doing_xy():
...     print('do_something')
...     print('do_other_thing')
...
statement_x
do_something
do_other_thing
statement_y
>>> with doing_xy():
...     print('do_third_thing')
...     print('do_fourth_thing')
...
statement_x
do_third_thing
do_fourth_thing
statement_y
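One caveat worth knowing: in the generator version above, statement_y does not run if the block raises, unless the yield is wrapped in try/finally. A class-based context manager makes the cleanup guarantee explicit; here is a sketch (names are illustrative):

```python
class DoingXY:
    """Run statement_x on entry and statement_y on exit,
    even if the wrapped block raises an exception."""
    def __enter__(self):
        print('statement_x')
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        print('statement_y')
        return False  # don't swallow exceptions

with DoingXY():
    print('do_something')
```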

Python lazy evaluation?

Suppose I have the following code:
def my_func(self, input_line):
    is_skip_line = self.is_skip_line(input_line)  # parse input line, check if it is a skip line
    if is_skip_line:
        ...  # do something...
    ...  # do more ...
    if is_skip_line:
        ...  # do one last thing
So we have a check for is_skip_line (if is_skip_line:) that appears twice. Does lazy evaluation mean that the method self.is_skip_line(input_line) will be called twice?
If so, what is the best workaround, given that self.is_skip_line(input_line) is time-consuming? Do I have to "immediately invoke" it, like below?
is_skip_line = (lambda x: self.is_skip_line(x))(input_line)
Thanks.
The misconception here is that this statement is not being immediately invoked:
is_skip_line = self.is_skip_line(input_line)
...when in fact, it is.
The method self.is_skip_line will only ever be invoked once. Since you assign its result to a variable, you can use that variable as many times as you like in any context you like.
If you're concerned about the performance of it, then you could use cProfile to really test the performance of the method it's called in with respect to the method it's calling.
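A quick sketch demonstrating that the assignment evaluates the call exactly once (the counter and the skip-line rule are illustrative):

```python
calls = {'count': 0}

def is_skip_line(line):
    calls['count'] += 1  # count how many times we are actually invoked
    return line.startswith('#')

def my_func(line):
    skip = is_skip_line(line)  # the call happens once, right here
    if skip:
        pass  # do something...
    # do more ...
    if skip:
        pass  # do one last thing

my_func('# a comment line')
print(calls['count'])  # → 1: one call despite two `if skip:` checks
```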

Is there a high-level profiling module for Python?

I want to profile my Python code. I am well-aware of cProfile, and I use it, but it's too low-level. (For example, there isn't even a straightforward way to catch the return value from the function you're profiling.)
One of the things I would like to do: I want to take a function in my program and set it to be profiled on the fly while running the program.
For example, let's say I have a function heavy_func in my program. I want to start the program and have the heavy_func function not profile itself. But sometime during the runtime of my program, I want to change heavy_func to profile itself while it's running. (If you're wondering how I can manipulate stuff while the program is running: I can do it either from the debug probe or from the shell that's integrated into my GUI app.)
Is there a module already written which does stuff like this? I can write it myself but I just wanted to ask before so I won't be reinventing the wheel.
It may be a little mind-bending, but this technique should help you find the "bottlenecks", if that's what you want to do.
You're pretty sure of what routine you want to focus on.
If that's the routine you need to focus on, it will prove you right.
If the real problem(s) are somewhere else, it will show you where they are.
If you want a tedious list of reasons why, look here.
I wrote my own module for it. I called it cute_profile. Here is the code. Here are the tests.
Here is the blog post explaining how to use it.
It's part of GarlicSim, so if you want to use it you can install garlicsim and do from garlicsim.general_misc import cute_profile.
If you want to use it on Python 3 code, just install the Python 3 fork of garlicsim.
Here's an outdated excerpt from the code:
import functools

from garlicsim.general_misc import decorator_tools

from . import base_profile


def profile_ready(condition=None, off_after=True, sort=2):
    '''
    Decorator for setting a function to be ready for profiling.

    For example:

        @profile_ready()
        def f(x, y):
            do_something_long_and_complicated()

    The advantages of this over regular `cProfile` are:

     1. It doesn't interfere with the function's return value.

     2. You can set the function to be profiled *when* you want, on the fly.

    How can you set the function to be profiled? There are a few ways:

    You can set `f.profiling_on=True` for the function to be profiled on the
    next call. It will only be profiled once, unless you set
    `f.off_after=False`, and then it will be profiled every time until you set
    `f.profiling_on=False`.

    You can also set `f.condition`. You set it to a condition function taking
    as arguments the decorated function and any arguments (positional and
    keyword) that were given to the decorated function. If the condition
    function returns `True`, profiling will be on for this function call,
    `f.condition` will be reset to `None` afterwards, and profiling will be
    turned off afterwards as well. (Unless, again, `f.off_after` is set to
    `False`.)

    `sort` is an `int` specifying which column the results will be sorted by.
    '''
    def decorator(function):
        def inner(function_, *args, **kwargs):
            if decorated_function.condition is not None:
                if decorated_function.condition is True or \
                   decorated_function.condition(
                       decorated_function.original_function,
                       *args, **kwargs
                   ):
                    decorated_function.profiling_on = True
            if decorated_function.profiling_on:
                if decorated_function.off_after:
                    decorated_function.profiling_on = False
                    decorated_function.condition = None
                # This line puts it in locals, weird:
                decorated_function.original_function
                base_profile.runctx(
                    'result = '
                    'decorated_function.original_function(*args, **kwargs)',
                    globals(), locals(), sort=decorated_function.sort
                )
                return locals()['result']
            else:  # decorated_function.profiling_on is False
                return decorated_function.original_function(*args, **kwargs)
        decorated_function = decorator_tools.decorator(inner, function)
        decorated_function.original_function = function
        decorated_function.profiling_on = None
        decorated_function.condition = condition
        decorated_function.off_after = off_after
        decorated_function.sort = sort
        return decorated_function
    return decorator
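The excerpt above depends on GarlicSim internals. Here is a self-contained sketch of the same idea using only the standard library's cProfile: a decorator whose profiling can be toggled at runtime while preserving the return value (all names are illustrative, not the cute_profile API):

```python
import cProfile
import functools
import io
import pstats

def profile_ready(func):
    """Wrap func so profiling can be toggled at runtime via
    wrapper.profiling_on, without losing the return value."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not wrapper.profiling_on:
            return func(*args, **kwargs)
        profiler = cProfile.Profile()
        try:
            # runcall profiles the call and passes the return value through
            return profiler.runcall(func, *args, **kwargs)
        finally:
            stream = io.StringIO()
            pstats.Stats(profiler, stream=stream) \
                  .sort_stats('cumulative').print_stats(5)
            wrapper.last_report = stream.getvalue()
    wrapper.profiling_on = False
    wrapper.last_report = None
    return wrapper

@profile_ready
def heavy_func(n):
    return sum(i * i for i in range(n))

heavy_func(1000)               # runs normally, no profiling
heavy_func.profiling_on = True
result = heavy_func(1000)      # this call is profiled; return value intact
```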

if else-if making code look ugly any cleaner solution?

I have around 20 functions (is_func1, is_func2, is_func3, ...) returning booleans.
I assume exactly one of them returns True, and I want to know which one!
I am doing:
if is_func1(param1, param2):
    # I pass 1 to the following
    abc(1)
    some_list.append(1)
elif is_func2(param1, param2):
    # I pass 2 to the following
    abc(2)
    some_list.append(2)
...
elif is_func20(param1, param2):
    ...
Please note: param1 and param2 are different for each, abc and some_list take parameters depending on the function.
The code looks big and there is repetition in calling abc and some_list.append. I could pull this logic into a function, but is there any other, cleaner solution?
I can think of putting functions in a data structure and loop to call them.
What about
function_list = [is_func1, is_func2, ..., is_func20]
for index, func in enumerate(function_list):
    if func(param1, param2):
        abc(index + 1)
        some_list.append(index + 1)
        break
I can think of putting functions in a data structure and loop to call them.
Yes, you probably should do that, since your code needs to be refactored and a data-driven design is a good choice.
An example, similar to BlueRaja's answer:
# arg1, arg2 and ret can have any values in each record
data = ((is_func1, arg1, arg2, ret),
        (is_func2, arg1, arg2, ret),
        (is_func3, arg1, arg2, ret),
        ...)

for d in data:
    if d[0](d[1], d[2]):
        abc(d[3])
        some_list.append(d[3])
        break
If each branch of your event dispatcher is in fact different, then there just isn't any way to get around writing the individual branch handlers, and there isn't any way to get around polling the different cases and choosing a branch.
This looks like a good case for the Chain of Responsibility pattern.
I know how to give the example with objects rather than functions, so I'll do that:
class HandleWithFunc1:
    def __init__(self, otherHandler):
        self.otherHandler = otherHandler
    def Handle(self, param1, param2):
        if ...:  # should I handle with func1?
            # Handle with func1
            return
        if self.otherHandler is None:
            raise Exception("Nobody handled the call!")
        self.otherHandler.Handle(param1, param2)

class HandleWithFunc2:
    def __init__(self, otherHandler):
        self.otherHandler = otherHandler
    def Handle(self, param1, param2):
        if ...:  # should I handle with func2?
            # Handle with func2
            return
        if self.otherHandler is None:
            raise Exception("Nobody handled the call!")
        self.otherHandler.Handle(param1, param2)
So you create all your classes like a chain:
handle = HandleWithFunc1(HandleWithFunc2(None))
then:
handle.Handle(param1, param2)
This code is rough and in need of refactoring; it is here only to illustrate the usage.
Try this:
value = 1 if is_func1(param1, param2) else \
        2 if is_func2(param1x, param2x) else \
        ... \
        20 if is_func20(param1z, param2z) else 0
abc(value)
some_list.append(value)
Bear in mind that this statement was cobbled together using various websites as a reference for Python syntax, so please don't shoot me if it doesn't compile.
The basic gist is to produce a single value that corresponds to the function called (1 for is_func1, 2 for is_func2, etc.) then use that value in the abc and some_list.append functions. Going on what I was able to read about Python boolean expression evaluation, this should properly short-circuit the evaluation so that the functions stop being called as soon as one evaluates to true.
I modified BlueRaja's answer for different parameters...
function_list = {is_func01: (pa1, pa2, ...),
                 is_func02: (pa1, pa2, pa3, ...),
                 ...
                 is_func20: (pa1, ...)}

for func, pa_list in function_list.items():
    if func(*pa_list):
        abc(pa_list_dependent_parameters)
        some_list.append(pa_list_dependent_parameters)
        break
I don't see why it shouldn't work.
I've not used python before, but can you refer to functions by a variable?
If so, you can create an enum with entries representing each function, test all the functions in a loop, and set a variable to the 'true' function's enum.
Then you can do a switch statement on the enum.
Still, that won't 'clean up' the code much: when you have n options and need to drive down to the correct one, you'll need n blocks of code to handle it.
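To answer the question embedded in that suggestion: yes, Python functions are first-class values, so you can store them in a list and dispatch on the first one that returns True without any enum or switch (Python has no switch statement). A small sketch with illustrative predicates:

```python
def is_negative(n):
    return n < 0

def is_even(n):
    return n % 2 == 0

def is_big(n):
    return n > 100

# pair each predicate with the value to dispatch on; order matters,
# since the first matching predicate wins
checks = [(is_negative, 'negative'), (is_even, 'even'), (is_big, 'big')]

def classify(n):
    # next() stops at the first predicate that returns True
    return next(label for pred, label in checks if pred(n))

print(classify(4))    # → even
print(classify(-2))   # → negative
```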
I'm not sure if it would be cleaner, but I think it's quite an interesting solution.
First of all, you should define a new function, say semi_func, which calls abc and some_list.append, to keep the code DRY.
Then set a new variable to act as a binary result of all the boolean functions, so that is_func1 is its 20th bit, is_func2 its 19th, and so on.
The 32 bits of an integer are enough to hold all 20 results.
While building this result variable, use left shifts to add new functions:
result = is_func1(param1, param2) << 1
result = (result | is_func2(param1, param2)) << 1
...
result = (result | is_func20(param1, param2))
For ease of access, define new constants like
IS_FUNC20_TRUE = 1
IS_FUNC19_TRUE = 2
IS_FUNC18_TRUE = 4
...  # values should be powers of 2
And in the end use a switch/case-like statement (an if/elif chain in Python) to call semi_func.
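A compact sketch of this bitmask idea with three illustrative predicates; since exactly one is assumed True, bit_length() recovers which predicate fired without any if/elif chain:

```python
def is_func1(a, b):
    return False

def is_func2(a, b):
    return True   # pretend this is the one that matches

def is_func3(a, b):
    return False

predicates = [is_func1, is_func2, is_func3]

# pack the boolean results into one integer, is_func1 in the highest bit
result = 0
for pred in predicates:
    result = (result << 1) | bool(pred(1, 2))

# with exactly one True bit, bit_length tells us which predicate fired
index = len(predicates) - result.bit_length() + 1   # 1-based index
print(index)  # → 2
```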
I know I will be modded down for being off-topic, but still. If you find anything that can be done with standard control constructs off-putting, then you need a different language, such as Common Lisp, which allows for macros, in effect making it possible to create your own control constructs. (Having recently discovered anaphoric macros, I just have to recommend this.)
This specific case would be a perfect example where a macro would help, but only assuming you are doing it at multiple places in your code, otherwise it's probably not worth improving at all. And in fact, Common Lisp already has a macro like that, it's called cond.
Anyway, in Python, I think you should go along with a list of functions and a loop.
