In other languages, like Java, you can do something like this:
String path;
if (exists(path = "/some/path"))
    my_path = path;
the point being that path is being set as part of specifying a parameter to a method call. I know that this doesn't work in Python. It is something that I've always wished Python had.
Is there any way to accomplish this in Python? What I mean here by "accomplish" is to be able to write both the call to exists and the assignment to path, as a single statement with no prior supporting code being necessary.
I'll be OK with it if a way of doing this requires the use of an additional call to a function or method, including anything I might write myself. I spent a little time trying to come up with such a module, but failed to come up with anything that was less ugly than just doing the assignment before calling the function.
UPDATE: @BrokenBenchmark's answer is perfect if one can assume Python 3.8 or better. Unfortunately, I can't yet do that, so I'm still searching for a solution to this problem that will work with Python 3.7 and earlier.
Yes, you can use the walrus operator if you're using Python 3.8 or above:
import os
if os.path.isdir((path := "/some/path")):
    my_path = path
I've come up with something that has some issues, but does technically get me where I was looking to be. Maybe someone else will have ideas for improving this to make it fully cool. Here's what I have:
# In a utility module somewhere
def v(varname, arg=None):
    if arg is not None:
        if not hasattr(v, 'vals'):
            v.vals = {}
        v.vals[varname] = arg
    return v.vals[varname]
# At point of use
if os.path.exists(v('path1', os.path.expanduser('~/.harmony/mnt/fetch_devqa'))):
    fetch_devqa_path = v('path1')
As you can see, this fits my requirement of no extra lines of code. The "variable" involved, path1 in this example, is stored on the function that implements all of this, on a per-variable-name basis.
One can question whether this is concise and readable enough to be worth the bother. For me, the jury is still out. If not for the need to call the v() function a second time, I think I'd be good with it structurally.
The only functional problem I see with this is that it isn't thread-safe. Two copies of the code could run concurrently and run into a race condition between the two calls to v(). The same problem is greatly magnified if one fails to choose unique variable names every time this is used. That's probably the deal killer here.
Can anyone see how to use this to get to a similar solution without the drawbacks?
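One possible mitigation, not part of the original answer and only a sketch, is to keep the storage in thread-local state so that concurrent threads using the same variable name do not race with each other (the module-level name _local is an assumption here):

import threading

_local = threading.local()  # each thread sees its own 'vals' dict

def v(varname, arg=None):
    # same contract as above, but values are stored per-thread to avoid races
    if not hasattr(_local, 'vals'):
        _local.vals = {}
    if arg is not None:
        _local.vals[varname] = arg
    return _local.vals[varname]

This does not remove the second call to v(), but it does eliminate the cross-thread race.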
I am new to Python and I love this language very much. But I recently ran into one annoying issue when working with PyDev in Eclipse.
A method returns an instance of some class, but I cannot get IntelliSense for the instance's methods.
For example:
import openpyxl
from openpyxl.reader.excel import load_workbook
from openpyxl.worksheet import Worksheet
xlsFile='hello.xlsx'
wbook = load_workbook(xlsFile)
wsheet1=wbook.get_sheet_by_name('mysheet')
wsheet1.cell('A9').hyperlink=r'\\sharefolder'
wsheet2=Worksheet()
wsheet2.cell('A1').hyperlink=r'\\sharefolder'
In this code, I can get the prompt for the method cell() with wsheet2, but not with wsheet1, though they are both of the Worksheet type, which I have already imported. It seems Python or PyDev cannot properly detect the type of the returned object.
Is this a language limitation? Or is there something I did wrong? For now, I have to dig into the source code and see what the real type of the return value is. And then check the methods defined in that type. It's very tedious.
I wrote a small test to reproduce this issue. Strangely, the IntelliSense seems to work there.
It's a consequence of the fact that Python is dynamically typed.
In a statically-typed language such as C#, methods are annotated with their type signatures. (Aside: in some systems types can be inferred by the type checker.) The compiler knows the return type of the function, and the types the arguments are meant to have, without running your code, because you wrote the types down! This enables your tooling to not only check the types of your programs, but also to build up metadata about the methods in your program and their types; Intellisense works by querying this metadata harvested from the text of your program.
Python has no static type system built in to the language. This makes it much harder for tooling to give you hints without running the code. For example, what is the return type of this function?
def spam(eggs):
    if eggs:
        return "ham"
    return 42
Sometimes spam returns a string; sometimes it returns an integer. What methods should Intellisense display on the return value of a call to spam?
What are the available attributes on this class?
class Spam:
    def __getattr__(self, name):
        if len(name) > 5:
            return "foo"
        # object has no __getattr__ to delegate to, so signal a missing attribute
        raise AttributeError(name)
Spam sometimes dynamically generates attributes: what should Intellisense display for an instance of Spam?
In these cases there is no correct answer. You might be able to volunteer some guesses (for example, you could show a list containing both str and int's methods on the return value of spam), but you can't give suggestions that will be right all the time.
So Intellisense tooling for Python is reduced to best-guesses. In the example you give, your IDE doesn't know enough about the return type of get_sheet_by_name to give you information about wsheet1. However, it does know the type of wsheet2 because you just instantiated it to a Worksheet. In your second example, Intellisense is simply making a (correct) guess about the return type of f1 by inspecting its source code.
Incidentally, auto-completion in an interactive shell like IPython is more reliable. This is because IPython actually runs the code you type. It can tell what the runtime type of an object is because the analysis is happening at runtime.
You can use an assert to tell IntelliSense what class you want the object to be. Of course, it will now throw an error if it isn't, but that's a good thing.
assert isinstance(my_variable, class_i_want_it_to_be)
This will give you the auto-complete and ctrl-click to jump to the function that you have been looking for. (At least this is how it works now in 2022, some other answers are 5 years old).
Here is a quick example.
#!/usr/bin/python3
class FooMaker():
    def make_foo(self):
        return "foo"

# this makes a list of constants
list1 = [FooMaker(), FooMaker()]

# even if the result is the same, these are not constants
list2 = []
for i in range(2):
    list2.append(FooMaker())

# intellisense knows this is a FooMaker
m1 = list1[0]
# now intellisense isn't sure what this object is
m2 = list2[0]

# make_foo is highlighted for m1 and not for m2
m1.make_foo()
m2.make_foo()

# now make_foo is highlighted for m2
assert isinstance(m2, FooMaker)
m2.make_foo()
The color difference is subtle in my VS Code, but here is a screenshot anyway.
tldr:
So many online answers just say "no" that it took me a while to say: "this is ridiculous, I don't have to deal with this in C, there must be a better way".
Yes, Python is dynamically typed, but that doesn't mean IntelliSense has to be banned from suggesting "you probably want this".
It also doesn't mean you have to "just deal with it" because you chose Python.
Furthermore, sprinkling in asserts is good practice and will shorten your development time when things start to get complicated. You might pass a variable a long way down a chain of functions before you get a type error, and then you have to dig a long way back up to find it. Just assert what the type is at the point where you decide what it is, and that's where the error will be thrown when something goes wrong.
It's also much easier to show other developers what you are trying to do. I even see asserts like this in C libraries, and always wondered why they bothered in a statically typed language; now it makes a lot more sense. I would also speculate that there is little performance hit from adding an assert (compiler stuff, blah blah, I'll leave that for the comments).
Well, technically, in Python a method may return anything, and the type of the result is only known once the operation has completed.
Consider this simple function:
def f(a):
    if a == 1:
        return 1  # returns int
    elif a == 2:
        return "2"  # returns string
    else:
        return object()  # returns an `object` instance
The function is pretty valid for Python and its result is strictly defined but only at the end of the function execution. Indeed:
>>> type(f(1))
<type 'int'>
>>> type(f(2))
<type 'str'>
>>> type(f(3))
<type 'object'>
Certainly this flexibility is not needed all the time, and most methods return something predictable a priori. An intelligent IDE could analyze the code (and other hints, like docstrings that specify argument and return types), but that will always be a guess with a certain level of confidence. There is also PEP 484, which introduces type hints at the language level, but it is optional, relatively new, and existing legacy code does not use it.
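For illustration (not part of the original answer), this is roughly what a PEP 484 annotation looks like; with it, a type-aware IDE can report the return type without running the code. The function name and the chosen types are just an example:

def spam(eggs: bool) -> str:
    # the "-> str" annotation tells tooling what the return type is
    return "ham" if eggs else "spam"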
If PyDev doesn't work for a particular case, well, it's a pity, but it's something you have to accept if you choose a dynamic language like Python. Maybe it's worth trying a different, more intelligent IDE, or keeping a console with an interactive Python prompt open next to your IDE to test your code on the fly. I would suggest using a sophisticated Python shell like bpython.
When writing Python code, I often find myself wanting to get behavior similar to Lisp's defvar. Basically, if some variable doesn't exist, I want to create it and assign a particular value to it. Otherwise, I don't want to do anything, and in particular, I don't want to override the variable's current value.
I looked around online and found this suggestion:
try:
    some_variable
except NameError:
    some_variable = some_expensive_computation()
I've been using it and it works fine. However, to me this has the look of code that's not paradigmatically correct. The code is four lines, instead of the one that would be required in Lisp, and it requires exception handling to deal with something that's not "exceptional."
The context is that I'm doing interactive development. I'm executing my Python code file frequently as I improve it, and I don't want to run some_expensive_computation() each time I do so. I could arrange to run some_expensive_computation() by hand every time I start a new Python interpreter, but I'd rather do something automated, particularly so that my code can also be run non-interactively. How would a seasoned Python programmer achieve this?
I'm using WinXP with SP3, Python 2.7.5 via Anaconda 1.6.2 (32-bit), and running inside Spyder.
It's generally a bad idea to rely on the existence or non-existence of a variable to carry meaning. Instead, use a sentinel value to indicate that a variable is not set to an appropriate value. None is a common choice for this kind of sentinel, though it may not be appropriate if None is a possible output of your expensive computation.
So, rather than your current code, do something like this:
# early on in the program
some_variable = None
# later:
if some_variable is None:
    some_variable = some_expensive_computation()
# use some_variable here
Or, a version where None could be a significant value:
_sentinel = object()
some_variable = _sentinel # this means it doesn't have a meaningful value
# later
if some_variable is _sentinel:
    some_variable = some_expensive_computation()
It is hard to tell which is of greater concern to you, specific language features or a persistent session. Since you say:
The context is that I'm doing interactive development. I'm executing my Python code file frequently as I improve it, and I don't want to run some_expensive_computation() each time I do so.
You may find that IPython provides a persistent, interactive environment that is pleasing to you.
Instead of writing Lisp in Python, just think about what you're trying to do. You want to avoid re-running an expensive computation when you already have its result. You can write your function to do that:
cache = {}

def f(x):
    if x in cache:
        return cache[x]
    result = ...  # the expensive computation goes here
    cache[x] = result
    return result
Or make use of Python's decorators and just decorate the function with another function that takes care of the caching for you. Python 3.2 and later come with functools.lru_cache, which does just that:
import functools

@functools.lru_cache()
def f(x):
    return ...
There are quite a few memoization libraries on PyPI for 2.7.
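For those older Pythons, a hand-rolled decorator along these lines would do. This is only a sketch (the names memoize and some_expensive_computation are placeholders) and it works for hashable positional arguments:

import functools

def memoize(func):
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        # compute once per distinct argument tuple, then reuse the stored result
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def some_expensive_computation():
    return 42  # placeholder for the real work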
For the use case you give, guarding with a try ... except seems like a good way to go about it: Your code is depending on leftover variables from a previous execution of your script.
But I agree that it's not a nice implementation of the concept "here's a default value, use it unless the variable is already set". Python does not directly support this for variables, but it does have a default-setter for dictionary keys:
myvalues = dict()
myvalues.setdefault("some_variable", 42)
print myvalues["some_variable"]  # prints 42
The first argument of setdefault is the key to set if it isn't already present; used with globals() as shown below, it must be a string containing the name of the variable to be defined.
If you had a complicated system of settings and defaults (like emacs does), you'd probably keep the system settings in their own dictionary, so this is all you need. In your case, you could also use setdefault directly on global variables (only), with the help of the built-in function globals() which returns a modifiable dictionary:
globals().setdefault("some_variable", 42)
But I would recommend using a dictionary for your persistent variables (you can use the try... except method to create it conditionally). It keeps things clean and it seems more... pythonic somehow.
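To make the "create the dictionary conditionally" idea concrete, here is a small sketch, not from the original answer; _persistent is a hypothetical module-level dict that survives re-running the script within the same interpreter session:

try:
    _persistent
except NameError:
    _persistent = {}  # created only the first time the file runs in this session

if "some_variable" not in _persistent:
    _persistent["some_variable"] = some_expensive_computation()
some_variable = _persistent["some_variable"]

Note that passing the expensive call directly to setdefault would not help here, because the argument is evaluated whether or not the key already exists.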
Let me try to summarize what I've learned here:
Using exception handling for flow control is fine in Python. I could do it once to set up a dict in which I can store whatever I want.
There are libraries and language features that are designed for some form of persistence; these can provide "high road" solutions for some applications. The shelve module is an obvious candidate here, but I would construe "some form of persistence" broadly enough to include @Blender's suggestion to use memoization.
I recently had to implement a small check for any variables that might not have been initialized (their default value is None). I came up with this:
if None in (var1, var2, var3):
    error_out()
While, in my eyes, bordering on beautiful, I was wondering - is this a good way to do it? Is this the way to do it? Are there any cases in which this would produce some unexpected results?
First things first: your code is valid, readable, and concise, so it might not be the way to do it (idioms evolve with time and new language features), but it certainly is one of the ways to do it pythonically.
Secondly, just two observations:
The standard way to signal errors in Python is to raise exceptions. You can of course wrap your exception-raising within a function, but since it's quite unusual, I was just wondering whether you chose this design for some specific reason. Since you can write your own Exception class, even boilerplate code like logging an error message to a file could go within the class itself rather than in the wrapping function.
The way you wrote your test means you won't be able to assign None as a value to your variables. This might not be a problem now, but it might limit your flexibility in the future. An alternative way to check for initialisation could be simply not to declare an initial value for the variable in question and then do something along the lines of:
try:
    self.variable_name
except AttributeError:
    # here the code that runs if the attribute hasn't been initialised
    pass
finally:
    # [optional] here the code that should run in either case
    pass
A slightly different way to do it would be to use the built-in all() function; however, this will also catch false-ish values like 0 or "", which may not be what you want:
>>> all([1, 2, 3])
True
>>> all([None, 1, 2])
False
>>> all([0, 1])
False
Allow me to leave my two cents here:
>>> any(a is None for a in [1,0])
False
>>> any(a is None for a in [1,0, None])
True
So one can:
def checkNone(*args):
    if any(arg is None for arg in args):
        error_out()
Nothing new here; just, IMHO, the "arg is None" spelling may be more readable.
I've decided to learn Python 3. For those that have gone before, what did you find most useful along the way and wish you'd known about sooner?
I learned Python back before the 1.5.2 release, so the things that were key for me back then may not be the key things today.
That being said, a crucial thing that took me a little bit to realize, but I now consider crucial: much functionality that other languages would make intrinsic is actually made available by the standard library and the built-ins.
The language itself is small and simple, but until you're familiar with the built-ins and the "core parts" of the standard library (e.g., nowadays, sys, itertools, collections, copy, ...), you'll be reinventing the wheel over and over. So, the more time you invest in getting familiar with those parts, the smoother your progress will be. Every time you have a task you want to do, that doesn't seem to be directly supported by the language, first ask yourself: what built-ins or modules in the standard library will make the task much simpler, or even do it all for me? Sometimes there won't be any, but more often than not you'll find excellent solutions by proceeding with this mindset.
I wished I didn't know Java.
More functional programming (see the itertools module, list comprehensions, map(), reduce(), or filter()).
List comprehension (makes a list cleanly):
[x for x in y if x > z]
Generator expression (same as a list comprehension, but doesn't evaluate anything until it is consumed):
(x for x in y if x > z)
Two brain-cramping things. One of which doesn't apply to Python 3.
a = 095
Doesn't work. Why? The leading zero marks an octal literal, and 9 is not a valid octal digit.
def foo(bar=[]):
    bar.append(1)
    return bar
Doesn't work as expected. Why? The mutable default object is created once and reused across calls.
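To make the surprise concrete, this is the kind of session you would see with the definition above (output sketched, not from the original answer):

>>> foo()
[1]
>>> foo()
[1, 1]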
What enumerate is for.
That seq = seq.append(item) and seq = seq.sort() both set seq to None.
Using set to remove duplicates.
Pretty much everything in the itertools and collections modules.
How the * and ** prefixes for function arguments work (see the sketch after this list).
How default arguments to functions work internally (i.e. what f.func_defaults is; f.__defaults__ in Python 3).
How (why, really) to design functions so that they are useful in conjunction with map and zip.
The role of __dict__ in classes.
What import actually does.
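As promised above, a tiny sketch of the * and ** prefixes (the function name report is just an example):

def report(*args, **kwargs):
    # args collects extra positional arguments as a tuple,
    # kwargs collects extra keyword arguments as a dict
    print(args, kwargs)

report(1, 2, colour="red")   # prints: (1, 2) {'colour': 'red'}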
Learn how to use IPython.
It's got Tab completion.
View all the elements in your namespace with 'whos'.
After you import a module, it's easy to view the code:
>>> import os
>>> os?? # this displays the actual source of the module
>>> help() # Python's interactive help. Fantastic!
Most Python modules are well documented; in theory, you could learn IPython and then learn the rest of what you need through that same tool.
IPython also has a debug mode (see the %pdb magic).
Finally, you can even use IPython as a Python-enabled command line. The basic UNIX commands work as %magic functions, and any command that isn't a magic command can still be executed:
>>> os.system('cp file1 file2')
Don't use variable names that shadow built-in types or functions. For example, don't name a variable "file" or "dict".
Decorators. Writing your own is not something you might want to do right away, but knowing that @staticmethod and @classmethod are available from the beginning (and the difference between what they do) is a real plus.
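For reference, a minimal sketch of the difference (the class and method names here are made up):

class Widget:
    @staticmethod
    def validate(name):
        # behaves like a plain function stored in the class namespace;
        # there is no implicit self or cls argument
        return bool(name)

    @classmethod
    def default(cls):
        # receives the class itself, which makes it handy as an
        # alternative constructor
        return cls()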
using help() in the shell on any object, class or path
You can run import code; code.interact(local=locals()) anywhere in your code and it will start a Python shell at that exact point.
you can run python -i yourscript.py to start a shell at the end of yourscript.py
Most helpful: Dive Into Python. As a commenter points out, if you're learning Python 3, Dive Into Python 3 is more applicable.
Known about sooner: virtualenv.
That a tuple of a single item must end with a comma, or it won't be interpreted as a tuple.
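For example (sketched):

>>> (1)       # just the integer 1; the parentheses do nothing
1
>>> (1,)      # a one-element tuple
(1,)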
pprint() is very handy (yes, 2 p's)
reload() is useful when you're re-testing a module while making lots of rapid changes to a dependent module.
And learn as many common "idioms" as you can; otherwise you'll bang your head looking for a better way to do something, when the idiom really is regarded as the best way. For example, ugly-looking expressions like ' '.join(), or the answer to why there is no isInt(string) function: you just wrap the use of a "possible" integer in a try: and catch the exception if it's not a valid int. The solution works well, but it sounds like a terrible answer when you first encounter it, so you can waste a lot of time convincing yourself it really is a good approach.
Those are some things that wasted several hours of my time before I could accept that my first draft of some code, which felt wrong, really was acceptable.
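To spell out the try/except idiom mentioned above, here is a sketch (the helper name is_int is made up; it is not a standard function):

def is_int(s):
    # the idiomatic answer to "is there an isInt(string)?": try it and catch the failure
    try:
        int(s)
        return True
    except ValueError:
        return False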
Readings from python.org:
http://wiki.python.org/moin/BeginnerErrorsWithPythonProgramming
http://wiki.python.org/moin/PythonWarts
List comprehensions, if you're coming to Python fresh (not from an earlier version).
Closures. Clean and concise, without having to resort to a Strategy pattern, unlike languages such as Java.
If you learn from a good book, it will not only teach you the language, it will teach you the common idioms. The idioms are valuable.
For example, here is the standard idiom for initializing a class instance with a list:
class Foo(object):
    def __init__(self, lst=None):
        if lst is None:
            self.lst = []
        else:
            self.lst = lst
If you learn this as an idiom from a book, you don't have to learn the hard way why this is the standard idiom. @S.Lott already explained this one: if you try to make the default initializer an empty list, the empty list gets evaluated just once (at function definition time), and every default-initialized instance of your class gets the same list instance, which is not what was intended here.
Some idioms protect you from non-intended effects; some help you get best performance out of the language; and some are just small points of style, which help other Python fans understand your code better.
I learned out of the book Learning Python and it introduced me to some of the idioms.
Here's a web page devoted to idioms: http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html
P.S. Python code that follows the best-practice Python idioms often is called "Pythonic" code.
I implemented plenty of recursive directory walks by hand before I learned about os.walk()
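A minimal sketch of what os.walk() replaces (the starting directory here is just an example):

import os

# visit every file under a directory tree and print its full path
for dirpath, dirnames, filenames in os.walk("/some/dir"):
    for name in filenames:
        print(os.path.join(dirpath, name))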
Lambda functions
http://www.diveintopython.org/power_of_introspection/lambda_functions.html
One of the coolest things I learned about recently was the commands module:
>>> import commands
>>> commands.getoutput('uptime')
'18:24 up 10:22, 7 users, load averages: 0.37 0.45 0.41'
It's like os.popen or os.system but without all of the DeprecationWarnings.
And let's not forget PDB (Python Debugger):
% python -m pdb poop.py
Dropping into interactive mode in IPython
from IPython.Shell import IPShellEmbed
ipshell = IPShellEmbed()
ipshell()
When I started with Python, I copied the main functions I saw in the examples, because I didn't know better. Later I found this on how to create a better main function.
Sequential imports overwrite:
If you import two files like this:
from foo import *
from bar import *
If both foo.py and bar.py have a function named fubar(), then having imported the files this way, when you call fubar, the fubar defined in bar.py is the one that will be executed, because it was imported last. The best way to avoid this is to do this:
import foo
import bar
and then call foo.fubar or bar.fubar. This way, you ALWAYS know which file's definition of fubar will be executed.
Maybe a touch more advanced, but I wish I'd known that you don't use threads to take advantage of multiple cores in CPython. You use the multiprocessing library.
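A minimal sketch of the multiprocessing approach (the worker function square is just an example):

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    # Pool spreads the work across one worker process per core by default
    with Pool() as pool:
        print(pool.map(square, range(10)))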
Tab completion and general readline support, including histories, even in the regular python shell.
$ cat ~/.pythonrc.py
#!/usr/bin/env python
try:
    import readline
except ImportError:
    print("Module readline not available.")
else:
    import rlcompleter
    readline.parse_and_bind("tab: complete")

    import os
    histfile = os.path.join(os.environ["HOME"], ".pyhist")
    try:
        readline.read_history_file(histfile)
    except IOError:
        pass
    import atexit
    atexit.register(readline.write_history_file, histfile)
    del os, histfile
and then add a line to your .bashrc
export PYTHONSTARTUP=~/.pythonrc.py
These two things lead to an exploratory programming style of "it looks like this library might do what I want", so then I fire up the python shell and then poke around using tab-completion and the help() command until I find what I need.
Generators and list comprehensions are more useful than you might think. Don't just ignore them.
I wish I had known a functional language well. After playing a bit with Clojure, I realized that a lot of Python's functional ideas are borrowed from Lisp or other functional languages.
I wish I'd known right off the bat how to code idiomatically in Python. You can pick up any language you like and start coding in it like it's C, Java, etc. but ideally you'll learn to code in "the spirit" of the language. Python is particularly relevant, as I think it has a definite style of its own.
While I found it a little later in my Python career than I would have liked, this excellent article wraps up many Python idioms and the little tricks that make it special. Several of the things people have mentioned in their answers so far are contained within:
Code Like a Pythonista: Idiomatic Python.
Enjoy!
Pretty printing:
>>> print "%s world" %('hello')
hello world
%s for string
%d for integer
%f for float
%.xf for exactly x decimal places of a float; if the float has fewer decimal places than indicated, 0s are added.
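For example (sketched in an interactive session):

>>> "%.2f" % 3.14159
'3.14'
>>> "%.2f" % 3.1
'3.10'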
I really like list comprehension and all other semifunctional constructs. I wish I had known those when I was in my first Python project.
What I really liked: list comprehensions, closures (and higher-order functions), tuples, lambda functions, painless bignumbers.
What I wish I had known about sooner: The fact that using Python idioms in code (e.g. list comprehensions instead of loops over lists) was faster.
That multi-core was the future. I still love Python. It writes a fair bit of my code for me.
Functional programming tools, like all and any