Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
This is more a question about good programming style. I usually work with Java, and now I am doing some work with Python. In Python, there is no need to pass global variables to a function if you only want to read from them. On the other hand, I think the Java syntax is more helpful in this regard: you have to pass in the required variables, so you can see which variables are used by which method, which I am sure is helpful for anybody reading your code.
So: do you pass variables in Python even though you could already access them because they're global? What is the good 'pythonic' way?
def foo(a):
    a = 2

foo(1)

Here, 1 is passed to the function foo().
Yes, this

def foo(a):
    a = 2

foo(1)

is preferred over this:

a = 1

def foo():
    a = 2

foo()
Imagine you have three functions that all do something to a list.
a_list_name = []

def a():
    a_list_name.something()

def b():
    a_list_name.something()

def c():
    a_list_name.something()

a()
b()
c()
If you use the list as a global, each function refers to that exact list by name. If for some reason you want to rename the list, you now have to edit all three functions. However, if you pass the list in through a parameter, you only have to edit the call sites, and the function bodies can remain untouched. Like this:
def a(lst):
    lst.something()

def b(lst):
    lst.something()

def c(lst):
    lst.something()

my_list = []
a(my_list)
b(my_list)
c(my_list)
This makes your code more modular, and above all it makes your functions testable, because they don't depend on a variable defined somewhere else.
Closed. This question is opinion-based. It is not currently accepting answers. Closed 2 years ago.
I've been wondering about this for a while. Suppose I have a function with a default argument, and that function is called by another function which needs to pass on said argument; that second function would then also need to set a default. How can I avoid copy-pasting the default value?
Example code:
def foo(a=3):
    print(a)

def bar(it=10, a=3):
    for i in range(it):
        foo(a=a)
How do I avoid having to set the default value for a second time?
I can think of perhaps doing it with *argv, like:
def foo(a=3):
    print(a)

def bar(it=10, *argv):
    for i in range(it):
        foo(*argv)
But aside from me not liking it (it doesn't play well with code completion) that only works if there is only a single function whose default parameters I want to pass on. It doesn't work if there are two or more.
The other option would be completely restructuring the code, perhaps passing on a partial application of foo, like:
def foo(a=3):
    print(a)

def bar(pfoo, it=10):
    for i in range(it):
        pfoo()

bar(pfoo=foo)               # if I want to use the default
bar(pfoo=lambda: foo(a=5))  # if I want a different value for a
But depending on the specific use case, this feels a bit overengineered (and perhaps hard to read) to me.
My "dreamcode" would be something like:
def foo(a=3):
    print(a)

def bar(it=10, a=foo.defaults.a):  # hypothetical syntax
    for i in range(it):
        foo(a=a)
I noticed there are __defaults__ and __kwdefaults__ dunders, but the former is a tuple with no parameter names, so I would have to make sure I get the order right (a major source of error), and the latter is only populated for keyword-only parameters, i.e. those after a * in the signature. Hence, to use those dunders, I would have to change foo to:
def foo(*, a=3):
    print(a)

def bar(it=10, a=foo.__kwdefaults__['a']):
    for i in range(it):
        foo(a=a)
But I don't really want foo to accept arbitrary arguments...
How do you guys deal with this?
For your example, if you just want to call foo() from bar() with foo's default of a=3, then you don't need to pass any value for a at all:
def foo(a=3):
    print(a)

def bar(it=10):
    for i in range(it):
        foo()
This calls foo and lets the default value of 3 apply to the parameter a.
Rather than specify foo's default as the default value of bar's parameter, use a sentinel value that indicates whether you want to call foo with an explicit argument or not.
from functools import partial

use_foo_default = object()

def bar(it=10, a=use_foo_default):
    f = foo if a is use_foo_default else partial(foo, a)
    for i in range(it):
        f()
(If you know that foo can't take None as a valid argument, you can use None in place of the explicitly declared sentinel shown here.)
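Another common pattern, assuming bar does not need to inspect a itself, is to forward keyword arguments wholesale, so foo's own default applies whenever the caller omits a. A sketch:

```python
def foo(a=3):
    print(a)

def bar(it=10, **kwargs):
    # Anything bar doesn't consume is passed through to foo;
    # if 'a' is omitted, foo's own default of 3 applies.
    for i in range(it):
        foo(**kwargs)

bar(it=2)       # prints 3 twice
bar(it=2, a=5)  # prints 5 twice
```

Like the *argv version in the question, this hurts code completion, but it forwards defaults without ever repeating them.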
Closed. This question is opinion-based. It is not currently accepting answers. Closed 4 years ago.
I understand the technical definition of Python closures; let's make it concrete:
def foo(x):
    def bar(y):
        print(x + y)
    return bar
In this example, x gets captured by bar. But what are closures actually good for? In the toy example above, one could just as easily have written
def bar(x, y):
    print(x + y)
I would like to know the best use cases for using closures instead of, for example, adding extra arguments to a function.
I think the most common use of closures is caching function results with a decorator.
def cache_decorator(f):
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

@cache_decorator
def some_function(*args):
    ...
This way the cache cannot be referenced from outside the decorator, which is what you want, since users should not be able to tamper with it.
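A quick usage sketch (slow_square and the call counter are made up for illustration; the decorator is repeated so the example is self-contained):

```python
def cache_decorator(f):
    cache = {}  # lives in the closure; invisible from outside
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

calls = []  # records every time the real body runs

@cache_decorator
def slow_square(x):  # hypothetical expensive function
    calls.append(x)
    return x * x

print(slow_square(4))  # 16, computed
print(slow_square(4))  # 16, served from the closed-over cache
print(len(calls))      # 1, the second call never reached the body
```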
Closed. This question needs details or clarity. It is not currently accepting answers. Closed 5 years ago.
I have seen other people ask this question, but the only answers I have seen simply explain that Python doesn't have the same concept of pass by reference vs. pass by value as languages like C do. For example:
x = [0]

def foo(x):
    x[0] += 1
In the past I have been using this workaround, but it seems very un-pythonic, so I'm wondering if there is a better way to do it. Let's assume that, for whatever reason, returning values won't work, as in the case where this code runs on a separate thread.
Some Python objects are immutable (tuple, int, float, str, etc.); as you have noted, you cannot modify these in place. The best workaround is not to try to fake passing by reference; instead, assign the result. This is both easier to read and less error-prone.
In your case, you could call:
x = 0

def f(x):
    return x + 1

x = f(x)
If you truly need to fake passing by reference (and I don't see why you would), your workaround works just fine, but keep in mind that you are not actually modifying the original object:
x = 0
x_list = [x]
print(id(x_list[0]))  # e.g. 1844716176

def f(x_list):
    x_list[0] += 1

f(x_list)
print(x)  # 0, not modified
print(id(x_list[0]))  # e.g. 1844716208, a different int object
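For the threading case mentioned in the question, where returning a value directly won't work, a queue.Queue owned by the caller is usually cleaner than faking a reference through a one-element list. A minimal sketch:

```python
import threading
import queue

def worker(x, results):
    # Instead of mutating the caller's variable, hand the
    # result back on a queue the caller owns.
    results.put(x + 1)

results = queue.Queue()
t = threading.Thread(target=worker, args=(41, results))
t.start()
t.join()
print(results.get())  # 42
```

The queue also takes care of the locking that a shared mutable list would otherwise need.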
Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
So I read this page about decorators, but I still don't understand when decorators are useful.
Consider a piece of code defining a function f and then calling it multiple times. For some reason we want f to do some extra work, so we have two ways to do it:

define a new function g which calls f and does the extra work needed, then in the main code replace all calls to f with calls to g
define a decorator g and edit the code to add @g before the definition of f

In the end they both achieve the same result, and the advantage of 2) over 1) is not obvious to me. What am I missing?
Suppose you have a lot of functions f1, f2, f3, ... and you want a regular way to make the same change to all of them to do the same extra work.
That's what you're missing, and it's why decorators are useful: they are functions that take a function and return a modified version of it.
The decorator @ syntax is "just" for convenience. It lets you decorate the function as it is defined:
@decorated
def foo():
    # several lines

instead of somewhere after the function definition:

def foo():
    # several lines

foo = decorated(foo)
Of course, the latter form is pretty awkward, since it means that by looking at the definition of foo in the source, you don't see the same foo that callers will get. Without the @ syntax, decorators wouldn't be so valuable, because you'd pretty much always end up using different names for the decorated and undecorated functions.
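The "same extra work on many functions" point can be sketched with a small logging decorator applied to several functions at definition time (the log list and the function names are made up for illustration):

```python
import functools

log = []  # records the name of every decorated function called

def logged(f):
    @functools.wraps(f)  # keep f's name and docstring on the wrapper
    def wrapper(*args, **kwargs):
        log.append(f.__name__)
        return f(*args, **kwargs)
    return wrapper

@logged
def f1(x):
    return x + 1

@logged
def f2(x):
    return x * 2

f1(1)
f2(3)
print(log)  # ['f1', 'f2']
```

Adding the same behaviour to a third function is one @logged line, rather than editing every call site.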
Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
What is the cleanest way of initialising several objects the same way, without making them all point to the same object? I want to do this:
a, b, c = create_object(), create_object(), create_object()
In a less verbose way.
I can do the following, but then all my variables point to the same object
a = b = c = create_object() # then a.change() changes them all!
Is there a clean, pythonic way to do this that I'm missing?
The most pythonic code is the code that makes the most sense. It's a lot better to just do
a = create_object()
b = create_object()
c = create_object()
as opposed to the alternative, a confusing mess of gibberish. Don't be afraid of having two extra lines; really, the benefit is much greater. :-)
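A common compact middle ground is unpacking a generator expression, which runs the constructor once per target, so each name gets its own object. A sketch using list in place of the question's create_object:

```python
# Each target gets a distinct list because the expression
# is evaluated once per item.
a, b, c = (list() for _ in range(3))

a.append(1)
print(b)  # [], b and c are unaffected, unlike a = b = c = list()
```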
Use a factory.
class shell:
    pass

def factory(num):
    for i in range(num):
        yield shell()

a, b, c = factory(3)
# results:
>>> a
<__main__.shell instance at 0x0000000002BE0548>
>>> b
<__main__.shell instance at 0x0000000002BE03C8>
>>> c
<__main__.shell instance at 0x0000000002BE0588>
You can of course, always add extra parameters to be able to initialize a group of variables to be the same, or you could even have complex factories that determine their own parameters to pass to the class constructor.
You could also make it a static method of the class.
Whatever your preference is, if you want to initialize a group of variables to all be different instances of the same class, this is how you should do it. (Although, in general, that kind of pattern is not very pythonic, for what it's worth...)
To make a factory using what you have shown, and to make it more robust, here is a nice sample:
def factory(obj, num, *args, **kwargs):
    for i in range(num):
        yield obj(*args, **kwargs)

a, b, c = factory(create_object, 3)