I was unable to find a reasonable way to create a variable which calls a function requiring parameters.
Here is a simplified version of my code. I would like print_hello to print 'hello' when it is called, and not when it is defined.
print_hello = print('hello')
When I define print_hello, it calls print('hello'). When I call print_hello, it gives me an error. How do I fix this?
If you just want a function that does precisely what you describe, Sheldore's answer is the simplest way to go (and more Pythonic than using a named lambda).
An alternative approach is to make a partial application of the function with functools.partial, which allows you to pass additional arguments at call time:
import sys
from functools import partial

print_hello = partial(print, "hello")

print_hello()                 # Prints "hello" to stdout
print_hello(file=sys.stderr)  # Prints "hello" to stderr
print_hello("world")          # Prints "hello world" to stdout
Just define print_hello as a lambda function
>>> print_hello = lambda: print('hello')
>>> print_hello()
hello
To delay execution, you'll have to wrap the call to print in another function. A lambda is less code than defining another function.
Note that PEP 8 recommends using a def statement rather than a lambda when assigning to a variable. See here. So Sheldore's answer is probably the way to go.
You need to define a function. In python a function is defined using def as shown in a simple example for your purpose below. You then call the function using the function name and (), for instance print_hello().
def print_hello():  # <--- Does not accept an argument
    print('hello')

print_hello()  # <--- No argument is passed
# hello
Another example to give you more of an idea of how to pass an argument to the function. You can define a variable containing the string you want to print, say to_print, and then pass it as an argument when calling the function. While explaining more details is out of the scope of this answer, the two examples I gave should get you started. For more details, you can refer to the official docs here.
def print_hello(to_print):  # <--- Accepts an argument
    print(to_print)

to_print = "hello"
print_hello(to_print)  # <--- Argument is passed
# hello
You could use a lambda expression:
print_hello = lambda: print('hello')
Or an actual function definition:
def print_hello(): print('hello')
Or functools.partial (this differs in that you can still pass other arguments through to print, whereas you lose that flexibility with the other two unless you build it into their definitions):
from functools import partial
print_hello = partial(print, 'hello')
To use any of these:
print_hello()
# hello
Related
I have two pieces of code that do almost the same thing. Can anyone tell me what is the difference between them or which one is the proper syntax in Python?
Code 1:
p = lambda content: print(content)
p("Hello")
# Prints Hello
Code 2:
p = print
p("Hello")
# Also Prints Hello
Code 1 is defining a new function, which calls the print() function. It would be more pythonic to write it as:
def p(content):
    print(content)
lambda is normally only used for anonymous functions, not named functions.
Code 2 is simply giving another name to the print function. The two names can be used interchangeably.
The lambda function only accepts one argument, while the standard print function allows multiple positional arguments and keyword arguments.
So with Code 2 you can write:
p("Hello", "world", end="")
but if you try this with Code 1 you'll get an error because you gave too many arguments to the function.
If you want to define a new function that can take all the arguments that print() takes, you can use *.
def p(*args, **kwds):
    print(*args, **kwds)
or:
p = lambda *args, **kwds: print(*args, **kwds)
See What does ** (double star/asterisk) and * (star/asterisk) do for parameters?
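To make the difference concrete, here is a small standalone sketch (not from the question) showing the one-argument lambda failing where the *args wrapper succeeds:

```python
p1 = lambda content: print(content)   # accepts exactly one argument

def p2(*args, **kwds):
    # forwards any positional and keyword arguments on to print
    print(*args, **kwds)

p2("Hello", "world", sep=", ")        # works: prints "Hello, world"

try:
    p1("Hello", "world")              # too many arguments for the one-parameter lambda
except TypeError as e:
    print("TypeError:", e)
```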
In the first, you are creating an anonymous function and assigning it to the variable p.
In the second, you are assigning the print function directly to the variable p.
Since functions are first-class citizens in Python, you can perform this kind of operation. Both are valid syntax, but if you only want to give a shorter name to a function, the second is simpler.
I'm trying to modify a function which uses a module-level variable defined below it, similar to this:
def say_hello():
    print(MESSAGE)

MESSAGE = "Hello, world!"
say_hello()
I would like to make the message a parameter, like so:
MESSAGE = "Hello, world!"
def say_hello(message=MESSAGE):
    print(message)

say_hello()
I've noticed that in order for this to work, I had to move the definition of MESSAGE up in the code. Apparently, module-level names are bound as the module executes and are then available within function bodies, but when used as default argument values, they have to be defined before the function. Is this correct?
(I would also like to read up on this to fully understand it; any references would be much appreciated).
... when provided as default function arguments, they have to be defined before the function. Is this correct?
Correct. Default arguments are evaluated at function definition time.
If you need them evaluated at function call time, this common pattern works:
def say_hello(message=None):
    if message is None:
        message = MESSAGE
    print(message)

MESSAGE = "Hello, world!"
say_hello()
def say_hello():
    print(MESSAGE)
    #     ^^^^^^^ This...
...is evaluated when say_hello is called. As long as MESSAGE has been assigned by the time say_hello is called, say_hello will see the value.
#                     vvvvvvv This...
def say_hello(message=MESSAGE):
    print(message)
...is evaluated when say_hello is defined. Python evaluates default argument values at function definition time, so MESSAGE has to be assigned before say_hello is even defined for this to work.
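A quick way to see definition-time evaluation for yourself (a standalone sketch, not from the question):

```python
import time

def stamp(t=time.time()):   # the default is computed once, when `def` executes
    return t

first = stamp()
time.sleep(0.01)
second = stamp()
assert first == second      # both calls see the same definition-time value
```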
While you're learning about default argument binding, give this a try:
def foo(bar=[]):
    bar.append(3)
    print(bar)
baz = []
faz = []
foo(baz)
foo(faz)
foo()
foo()
The calls with their own arguments will do what you expect: each prints [3]. But the calls to foo with the default argument may surprise you. The first time you call it, you get the expected result: [3]. The second time, you may be surprised that the result is [3, 3].
There's actually nothing in this answer that isn't in the others. As others have said, the default argument is evaluated at the time the function is defined, not each time the function is called. What you see here is a consequence of that: bar=[] is evaluated once, giving you a single list that is shared across every call that relies on the default.
If you do object creation or function calls as part of your default argument, they only happen once, with the net result being a surprisingly static-acting argument. Most of the time, this is not what we're looking for.
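If you do want a fresh list on every call, the usual fix is the same None-sentinel pattern shown earlier; a sketch:

```python
def foo(bar=None):
    if bar is None:
        bar = []        # a new list is created on every call
    bar.append(3)
    return bar

print(foo())   # [3]
print(foo())   # [3] again -- no state carried over between calls
```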
The definitive reference for Python is found at docs.python.org. The specific reference for defining functions is at https://docs.python.org/3/tutorial/controlflow.html#defining-functions.
I'd like to modify the arguments passed to a method in a module, as opposed to replacing its return value.
I've found a way around this, but it seems like something useful and has turned into a lesson in mocking.
module.py
from third_party import ThirdPartyClass
ThirdPartyClass.do_something('foo', 'bar')
ThirdPartyClass.do_something('foo', 'baz')
tests.py
@mock.patch('module.ThirdPartyClass.do_something')
def test(do_something):
    # Instead of directly overriding its return value,
    # I'd like to modify the arguments passed to this function.

    # change return value, no matter the inputs
    do_something.return_value = 'foo'

    # change return value based on inputs, but with no access to the original function
    do_something.side_effect = lambda x, y: (y, x)

    # how can I wrap do_something, so that I can modify its inputs and pass them on to the original function?
    # much like a decorator?
I've tried something like the following, but not only is it repetitive and ugly, it doesn't work. After some PDB introspection, I'm wondering if it's simply due to how this third-party library works, as I do see the original functions being called successfully when I drop a pdb inside the side_effect.
Either that, or some auto mocking magic I'm just not following that I'd love to learn about.
def test():
    from third_party import ThirdPartyClass
    original_do_something = ThirdPartyClass.do_something

    with mock.patch('module.ThirdPartyClass.do_something') as mocked_do_something:
        def side_effect(arg1, arg2):
            return original_do_something(arg1, 'overridden')

        mocked_do_something.side_effect = side_effect
        # execute module.py
Any guidance is appreciated!
You may want to use the wraps parameter for the mock call. (Docs for reference.) This way the original function will be called, but it will also have everything from the Mock interface.
So for changing parameters called to original function you may want to try it like that:
org.py:
def func(x):
    print(x)
main.py:
from unittest import mock

import org

of = org.func

def wrapped(a):
    of('--{}--'.format(a))

with mock.patch('org.func', wraps=wrapped):
    org.func('x')
    org.func.assert_called_with('x')
result:
--x--
The trick is to pass the original underlying function that you still want to access as a parameter to the function.
E.g., for race-condition testing, have tempfile.mktemp return an existing pathname:
import tempfile

def mock_mktemp(*, orig_mktemp=tempfile.mktemp, **kwargs):
    """Ensure mktemp returns an existing pathname."""
    temp = orig_mktemp(**kwargs)
    open(temp, 'w').close()
    return temp
Above, orig_mktemp is evaluated when the function is declared, not when it is called, so all invocations will have access to the original method of tempfile.mktemp via orig_mktemp.
I used it as follows:
@unittest.mock.patch('tempfile.mktemp', side_effect=mock_mktemp)
def test_retry_on_existing_temp_path(self, mock_mktemp):
    # Simulate race condition: creation of temp path after tempfile.mktemp
    ...
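The same capture-the-original trick works outside of mocking whenever a name is about to be rebound; a minimal standalone sketch (the names are illustrative, not from the question):

```python
def greet(name):
    return "hello " + name

# The default argument captures the *current* greet at definition time,
# so rebinding the name afterwards doesn't cause infinite recursion.
def loud_greet(name, orig_greet=greet):
    return orig_greet(name).upper()

greet = loud_greet
print(greet("bob"))   # HELLO BOB
```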
I want to have a function in main class which has parameters not only self.
class Ui_Form(object):
    def clearTextEdit(self, x):
        self.plainTextEdit.setPlainText(" ")
        print("Script in Textbox is Cleaned!")
x will be my additional parameter and I want clearTextEdit to be called by click.
self.pushButton_3.clicked.connect(self.clearTextEdit(x))
It does not allow me to write x as a parameter in clicked. Can you help me?
Solution
This is a perfect place to use a lambda:
self.pushButton_3.clicked.connect(lambda: self.clearTextEdit(x))
Remember, connect expects a function of no arguments, so we have to wrap up the function call in another function.
Explanation
Your original statement
self.pushButton_3.clicked.connect(self.clearTextEdit(x)) # Incorrect
was actually calling self.clearTextEdit(x) when you made the call to connect, and then you got an error because clearTextEdit doesn't return a function of no arguments, which is what connect wanted.
Lambda?
Instead, by passing lambda: self.clearTextEdit(x), we give connect a function of no arguments, which when called, will call self.clearTextEdit(x). The code above is equivalent to
def callback():
    return self.clearTextEdit(x)

self.pushButton_3.clicked.connect(callback)
But with a lambda, we don't have to name "callback", we just pass it in directly.
If you want to know more about lambda functions, you can check out this question for more detail.
On an unrelated note, I notice that you don't use x anywhere in clearTextEdit. Is it necessary for clearTextEdit to take an argument in the first place?
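One related caveat if you ever connect lambdas inside a loop: the lambda captures the variable, not its value at connect time. A plain-Python sketch of the pitfall and the usual default-argument fix (no Qt required):

```python
callbacks = []
for x in range(3):
    callbacks.append(lambda: x)          # all three closures share the same `x`
print([f() for f in callbacks])          # [2, 2, 2] -- x's final value

callbacks = []
for x in range(3):
    callbacks.append(lambda x=x: x)      # the default argument freezes x's current value
print([f() for f in callbacks])          # [0, 1, 2]
```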
In Python, what do you do if you are using multiprocessing and you need to give the function an extra argument?
Example:
if value == "Y":
    pool = multiprocessing.Pool(processes=8)
    pool.map(verify_headers, url_list)  # <-- need to give a password parameter
    pool.close()
    pool.join()
    print "Done..."
and the function would be something like:
def verify_headers(url, password):
    pass
Pool.map takes a function of one argument and an iterable to produce that argument. We can turn your function of two arguments into a function of one argument by wrapping it in another function body:
def verify_headers_with_password(url):
    return verify_headers(url, 'secret_password')
And pass that to pool.map instead:
pool.map(verify_headers_with_password, url_list)
So long as verify_headers can take password as a keyword argument, we can shorten that a little by using functools.partial:
pool.map(functools.partial(verify_headers, password='secret_password'), url_list)
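As a sanity check of what partial produces here (a standalone sketch; verify_headers below is a stand-in for the real function):

```python
from functools import partial

def verify_headers(url, password):
    return (url, password)

check = partial(verify_headers, password='secret_password')

# Each call supplies only the url; the password is pre-bound,
# and partial objects (unlike lambdas) can be pickled for Pool.map.
print(check('http://a'))                  # ('http://a', 'secret_password')
print([check(u) for u in ['u1', 'u2']])
```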
Edit: as Bakuriu points out, multiprocessing passes data around by pickling, so the following doesn't work:
pool.map(lambda url: verify_headers(url, 'secret_password'), url_list)
This is because lambdas are functions without a name, and pickle serializes functions by name.
I believe
from functools import partial
and
pool.map(partial(verify_headers, password=password), url_list)
should work?
Edit: fixed based on recommendations below.
You define a function, right after the original, that accepts as argument a 2-element tuple:
def verify_headers_tuple(url_passwd):
    return verify_headers(*url_passwd)
Then you can zip the original url_list with itertools.repeat(password) (assuming import itertools as it):
pool.map(verify_headers_tuple, it.izip(url_list, it.repeat(password)))
Note that the function passed to Pool.map must be defined at the top level of a module (due to pickling restrictions), which means you cannot use partial or lambda to create a "curried" function.