Closed. This question is opinion-based. It is not currently accepting answers. Closed 9 years ago.
I just want to know which way is preferable in Python. Imagine two functions:
Function 1:

def foo(key):
    if key in bar:
        return bar.get(key)
    # do something with bar
    # this will be executed if key is not in bar
    ...
    return something
Function 2:

def foo(key):
    if key in bar:
        return bar.get(key)
    else:
        # do something with bar
        # this will be executed if key is not in bar
        ...
        return something
As you can see, the only difference is the else clause. So the question is: will it affect performance somehow? And are there any reasons to include else in this type of function?
If the choice is between those two approaches, I would pick the first one. return is pretty explicit that execution terminates at that point. I find if x { return y } else { ... } an anti-pattern for this reason (not just in Python; I see this in C/C++ code and it irks me there, too).
If you are returning, an else block is entirely unnecessary and causes pointless indentation of a block of code that might be quite large. The more nested structures you have, the more difficult it is to maintain proper context in your head while reading code. For this reason I tend to prefer less nesting when it does not obfuscate logic, and in this case I don't think it would.
The Pythonic way:

def foo(key):
    return bar.get(key, something)
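As a runnable sketch (bar and something are not defined in the question, so sample values are assumed here):

```python
bar = {"a": 1, "b": 2}  # assumed sample data
something = -1          # assumed fallback value

def foo(key):
    # dict.get returns bar[key] if the key exists, else the fallback.
    return bar.get(key, something)

print(foo("a"))    # prints: 1
print(foo("zzz"))  # prints: -1
```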
While this question is a little opinion-based, I'd say the second is more Pythonic, for the reason that "explicit is better than implicit". The second function clearly says: "if this condition, do this; otherwise, do this." The first function merely implies the "otherwise" part.
Closed. This question needs details or clarity. It is not currently accepting answers. Closed 5 years ago.
I have seen other people ask this question, but the only answers I have seen simply explain that Python doesn't have the same concept of pass-by-reference vs. pass-by-value as languages like C do.
For example:

x = [0]

def foo(x):
    x[0] += 1
In the past, I have been using this workaround, but it seems very un-Pythonic, so I'm wondering if there is a better way to do this. Let's assume that, for whatever reason, returning values won't work, as in the case where this code runs on a separate thread.
Some Python objects are immutable (tuple, int, float, str, etc.). As you have noted, you cannot modify these in place. The best workaround is not to try to fake passing by reference; instead, assign the result. This is both easier to read and less error-prone.
In your case, you could write:

x = 0

def f(x):
    return x + 1

x = f(x)
If you truly need to fake passing by reference (and I don't see why you would), your solution works just fine, but keep in mind that you do not actually modify the original object:
x = 0
x_list = [x]
print(id(x_list[0]))  # e.g. 1844716176

def f(x_list):
    x_list[0] += 1

f(x_list)
print(x)              # 0, not modified
print(id(x_list[0]))  # e.g. 1844716208, a new int object
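In the separate-thread case the questioner mentions, a common pattern is to pass a shared mutable container (a dict, or a queue.Queue) that the worker writes its result into. A minimal sketch with made-up names:

```python
import threading

def worker(results, key):
    # Write the result into a shared mutable container
    # instead of faking pass-by-reference with a one-element list.
    results[key] = 41 + 1

results = {}
t = threading.Thread(target=worker, args=(results, "answer"))
t.start()
t.join()
print(results["answer"])  # prints: 42
```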
Closed. This question is opinion-based. It is not currently accepting answers. Closed 7 years ago.
Which is more stylistically accepted? This:
def example_function(stuff):
    thing = stuff
    print(thing)

example_function('words')
Or:
def example_function(stuff):
    thing = stuff
    return thing

print(example_function('words'))
I'm still figuring out my way through Python, so any help would be greatly appreciated!
Consider how the function will be used. If it includes the print, you can never call the function without it producing output (ignoring monkey patching and the like). If it does not include the print, you can always print its return value explicitly if you decide you want to output the value.
In other words, lean towards returning a value unless you have a very good reason to print from inside the function. Printing to standard output isn't actually as common as most beginner programs would lead you to believe. Most of the code one writes is intended to be used by other code, rather than communicating directly with a human.
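A small illustration of the difference, with hypothetical function names:

```python
def format_greeting(name):
    # Returns the value, so callers can print it, test it, or reuse it.
    return f"Hello, {name}!"

def print_greeting(name):
    # Prints directly; callers get None back and cannot reuse the text.
    print(f"Hello, {name}!")

message = format_greeting("Ada")
print(message)          # prints: Hello, Ada!
print(message.upper())  # prints: HELLO, ADA!
```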
You could take a cue from the __str__ and __repr__ methods of objects. They return a string representation, allowing you to do things like:

print(dict(a=1))
print([1, 2, 3, 4])
In general returning a string gives you more flexibility.
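For instance, a class can build its representation in __repr__ and leave the printing to the caller (Point is a made-up example):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        # Return a string; the caller decides whether to print it.
        return f"Point({self.x}, {self.y})"

p = Point(1, 2)
print(p)     # prints: Point(1, 2) -- print() falls back to __repr__
s = repr(p)  # the same string is available for other uses
```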
But while debugging code, I often include diagnostic print statements within a function.
The argparse module has two related methods:

parser.print_help()
parser.format_help()

The print method calls the format method and takes a file=None parameter. The default is to write to stdout (or stderr), but the user can redirect it.
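A quick sketch of that pairing (the parser here is a made-up example): format_help() returns the help text as a string, while print_help() writes the same text to a file-like object, stdout by default:

```python
import argparse
import io

parser = argparse.ArgumentParser(prog="demo")
parser.add_argument("--verbose", action="store_true")

text = parser.format_help()  # returns the help text as a string
buf = io.StringIO()
parser.print_help(file=buf)  # writes the same text to any file-like object
assert buf.getvalue() == text
```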
Closed. This question is opinion-based. It is not currently accepting answers. Closed 7 years ago.
I know that PEP 8 says not to assign a lambda expression to a name, because doing so misses the entire point of a lambda function.
But what about a recursive lambda function? I've found that in many cases it's really simple, clean, and efficient to write a recursion as a lambda assigned to a name instead of defining a function. And PEP 8 doesn't mention recursive lambdas.
For example, let's compare a function that returns the greatest common divisor of two numbers:

def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)
vs
gcd = lambda a, b: a if b == 0 else gcd(b, a % b)
So, what should I do?
You have "cheated" a bit in your question, since the regular function could also be rewritten like this:
def gcd(a, b):
    return a if b == 0 else gcd(b, a % b)
Almost as short as the lambda version, and even this can be further squeezed into a single line, but at the expense of readability.
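Both spellings behave identically for valid input; one practical difference worth noting is that a def gets a proper __name__, which shows up in tracebacks and repr():

```python
def gcd(a, b):
    return a if b == 0 else gcd(b, a % b)

gcd_lambda = lambda a, b: a if b == 0 else gcd_lambda(b, a % b)

print(gcd(48, 18))          # prints: 6
print(gcd_lambda(48, 18))   # prints: 6
print(gcd.__name__)         # prints: gcd
print(gcd_lambda.__name__)  # prints: <lambda>
```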
The lambda syntax is generally used for simple anonymous functions that are passed as arguments to other functions. Assigning a lambda function to a variable doesn't make much sense; it is just another way of declaring a named function, but one that is less readable and more limited (you can't use statements in it).
A lot of thought has been put into PEP 8, and its recommendations are there for a reason, so I would not recommend deviating from it unless you have a very good reason to do so.
Go with the normal function definition. There's absolutely no benefit of using lambda over def, and normal function definition is (for most people) much more readable. With lambda, you gain nothing, but you often lose readability.
I would recommend reading this answer. Recursion doesn't change anything. In fact, in my opinion, it favours a normal def even more.
If you assign a lambda to a variable, you lose the ability to define it inline, right where it is passed as an argument or returned, which is the exact purpose of lambda.
Closed. This question needs details or clarity. It is not currently accepting answers. Closed 8 years ago.
What I have found so far is that, in many cases, Python code for checking business logic (or any other logic) looks something like this (simplified):
user_input = 100
if user_input == 100:
    do_something()
elif user_input > 100:
    do_sth_different()
else:
    do_correct()
When a new piece of logic needs to be checked, a new Python programmer (like me) just adds another elif block.
What is the Pythonic way to check a bunch of conditions without a long chain of if/elif/else? Thanks.
The most common way is just a chain of elifs, and there's nothing really wrong with that; in fact, the documentation says to use elif chains as a replacement for switch statements. However, another really popular way is to create a dictionary of functions:
functions = {100: do_something, 101: do_sth_different}
user_input = 100
try:
    functions[user_input]()
except KeyError:
    do_correct()
This doesn't allow for your given if user_input > 100 line, but if you just have to check equality relationships and a generic case, it works out nicely, especially if you need to do it more than once.
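If you do need range checks like user_input > 100, one variant of the same idea is a list of (predicate, handler) pairs; the handlers below are placeholders standing in for the question's functions:

```python
# Placeholder handlers standing in for the question's functions.
def do_something():     return "equal"
def do_sth_different(): return "greater"
def do_correct():       return "other"

dispatch = [
    (lambda x: x == 100, do_something),
    (lambda x: x > 100,  do_sth_different),
]

def handle(user_input):
    # First matching predicate wins; fall back to the generic case.
    for predicate, handler in dispatch:
        if predicate(user_input):
            return handler()
    return do_correct()

print(handle(100))  # prints: equal
print(handle(150))  # prints: greater
print(handle(50))   # prints: other
```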
The try/except could be replaced by explicitly calling get on the dictionary, using your generic function as the default:

functions.get(user_input, do_correct)()

if that floats your boat.
Aside from the fact that the way you're doing it is probably the best way, you could also write it as a single (parenthesized) conditional expression:

user_input = 100
(do_something() if user_input == 100 else
 do_sth_different() if user_input > 100 else
 do_correct())
Closed. This question is opinion-based. It is not currently accepting answers. Closed 4 years ago.
Many try/except/finally clauses not only "uglify" my code, but I find myself often using identical exception handling for similar tasks. So I was considering reducing redundancy by "outsourcing" them to a decorator.
Because I was sure I was not the first one to come to this conclusion, I googled and found this (imho ingenious) recipe, which added the possibility to handle more than one exception.
But I was surprised that this doesn't seem to be a widely known and used practice, so I was wondering if there is maybe an aspect I wasn't considering.
Is it bogus to use the decorator pattern for exception handling, or did I just miss it the whole time? Please enlighten me! What are the pitfalls?
Is there maybe even a package/module out there which supports the creation of such exception handling in a reasonable way?
The biggest reason to keep the try/except/finally blocks in the code itself is that error recovery is usually an integral part of the function.
For example, if we had our own int() function:
def MyInt(text):
    return int(text)
What should we do if text cannot be converted? Return 0? Return None?
If you have many simple cases, then I can see a simple decorator being useful, but I think the recipe you linked to tries to do too much: it allows a different function to be activated for each possible exception. In cases such as those (several different exceptions, several different code paths), I would recommend a dedicated wrapper function instead.
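Such a dedicated wrapper might look like this (the default parameter is my addition, not part of the original MyInt):

```python
def my_int(text, default=None):
    # The recovery policy is explicit at the definition site,
    # and each caller can still override it per call.
    try:
        return int(text)
    except (ValueError, TypeError):
        return default

print(my_int("34"))      # prints: 34
print(my_int("one"))     # prints: None
print(my_int("one", 0))  # prints: 0
```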
Here's my take on a simple decorator approach:
class ConvertExceptions(object):

    func = None

    def __init__(self, exceptions, replacement=None):
        self.exceptions = exceptions
        self.replacement = replacement

    def __call__(self, *args, **kwargs):
        if self.func is None:
            # First call: the decorated function itself is passed in.
            self.func = args[0]
            return self
        try:
            return self.func(*args, **kwargs)
        except self.exceptions:
            return self.replacement
and sample usage:

@ConvertExceptions(ValueError, 0)
def my_int(value):
    return int(value)

print(my_int('34'))   # prints 34
print(my_int('one'))  # prints 0
Basically, the drawback is that you no longer get to decide how to handle the exception in the calling context (by just letting the exception propagate). In some cases this may result in a lack of separation of responsibility.
A decorator in Python is not the same as the Decorator pattern, though there is some similarity. It is not completely clear what you mean here, but I think you mean the Python one (so it is better not to use the word "pattern").
Decorators in Python are not that useful for exception handling, because you would need to pass some context to the decorator. That is, you would either pass a global context or hide function definitions within some outer context, which requires, I would say, a Lisp-like way of thinking.
Instead of decorators, you can use context managers. And I do use them for that purpose.
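The standard library already ships one such context manager, contextlib.suppress, and contextlib.contextmanager makes it easy to write your own. A minimal sketch (convert_errors and its error-translation policy are my own illustration, not from the answer):

```python
from contextlib import contextmanager, suppress

# contextlib.suppress silently discards the named exceptions.
with suppress(ValueError):
    int("not a number")  # the ValueError never propagates

# A custom context manager can centralise a reusable handling policy.
@contextmanager
def convert_errors(replacement):
    try:
        yield
    except (ValueError, TypeError) as exc:
        # Translate low-level errors into one domain-level error.
        raise replacement from exc

try:
    with convert_errors(RuntimeError("bad input")):
        int("oops")
except RuntimeError as exc:
    print(exc)  # prints: bad input
```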