When writing code in IDLE, sometimes when I type a function call such as re.sub( in the example, a window pops up explaining the function and the inputs it needs. I find this very helpful and would like to have this window pop up every time. I googled and tried different key combinations, but I can't seem to find how to do this.
Can somebody help me with this?
It is pretty simple if you want a key combination to show a calltip.
Just press "Ctrl+\".
One thing to remember: it will only work once you have typed the opening parenthesis, and not before.
As the original screenshots showed, the calltip appears once the parenthesis is opened and while the cursor is inside it, but not before the opening parenthesis is typed.
Your question is specific to the Python IDLE. In IDLE, this functionality is enabled by default. For it to work, the function (or method) has to be available in the current namespace. That means it has to either be defined in the running environment, or imported into the running environment.
For example:
>>> def foo(x):
        """the foo function"""
        return x
When you type >>> foo( at the prompt after the definition, you will see the explanation, which is really the documentation contained in the docstring (the stuff between the triple quotes).
If a function or method does not have any documentation, then you will not see any explanation. For example:
>>> def bar(y):
        return y
In this case, when you type bar( at the prompt, IDLE will just show (y), because the function does not have any docstring.
Some built-in functions (called builtins) do not have docstrings, often because they are implemented in the C programming language. For example:
>>> from functools import reduce
>>> reduce(
In this case IDLE will not give any hint because the function does not have any docstring for it to display.
A great companion to learning is the Python standard library reference. You can look up built-in function definitions there for clear explanations of what they do. On the other hand, when writing your own functions, remember to add docstrings, as they will help you as you go on.
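The docstring that IDLE draws the calltip from is also available programmatically; a minimal sketch (the foo function repeats the earlier example):

```python
def foo(x):
    """the foo function"""
    return x

# IDLE builds its calltip from the signature plus this docstring.
print(foo.__doc__)   # the foo function
help(foo)            # prints the full signature and docstring
```

This is why writing docstrings pays off: the same text serves calltips, help(), and documentation tools.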
IDLE's calltips contain the function signature (if directly available) followed by the beginning of the docstring (if there is one). For builtins that have not gotten the 'Argument Clinic' treatment, the signature is the beginning of the docstring. This is the case for reduce. In 3.6 and 3.7, when I type reduce( after the import and prompt, the calltip contains the signature as given in the docstring. To see the entire reduce() docstring, use >>> help(reduce) or enter reduce.__doc__.
To see more calltips when editing in the editor, run your code after entering the import statements. For instance, if you start IDLE, immediately edit a new file, and enter
from functools import reduce
reduce(
you see no calltip, as you described in your question. But if you hit F5 after the import and return to the editor, you will. Similarly, if you want to see calltips for your own functions, run the file occasionally after defining them.
Could someone tell me whether this idea is feasible in Python?
I want to have a method whose argument datatypes are not fixed.
For example:
Foo(data1, data2) <-- Method Definition in Code
Foo(2,3) <---- Example of what would be executed in runtime
Foo(s,t) <---- Example of what would be executed in runtime
I know the code would work if I changed Foo(s,t) to Foo("s","t"). But I am trying to make the code smart enough to recognize the command without the "" ...
singledispatch might be an answer: it transforms a function into a generic function, which can have different behaviors depending upon the type of its first argument.
You can see a concrete example at the link above. Note that you will need to do something special if you want generic dispatch on more than one argument.
I am new to Python and I love this language much. But I encountered one annoying issue recently when working with PyDev in eclipse.
Some method returned an instance of some class. But I cannot get intellisense for the instance's methods.
For example:
import openpyxl
from openpyxl.reader.excel import load_workbook
from openpyxl.worksheet import Worksheet
xlsFile='hello.xlsx'
wbook = load_workbook(xlsFile)
wsheet1=wbook.get_sheet_by_name('mysheet')
wsheet1.cell('A9').hyperlink=r'\\sharefolder'
wsheet2=Worksheet()
wsheet2.cell('A1').hyperlink=r'\\sharefolder'
In this code, I get the prompt for the method cell() with wsheet2, but not with wsheet1, though they are both of the Worksheet type, which I have already imported. It seems Python or PyDev cannot properly detect the type of the returned object.
Is this a language limitation? Or is there something I did wrong? For now, I have to dig into the source code and see what the real type of the return value is. And then check the methods defined in that type. It's very tedious.
I wrote a small test to repro this issue. Strangely, the intellisense seems to work there.
It's a consequence of the fact that Python is dynamically typed.
In a statically-typed language such as C#, methods are annotated with their type signatures. (Aside: in some systems types can be inferred by the type checker.) The compiler knows the return type of the function, and the types the arguments are meant to have, without running your code, because you wrote the types down! This enables your tooling to not only check the types of your programs, but also to build up metadata about the methods in your program and their types; Intellisense works by querying this metadata harvested from the text of your program.
Python has no static type system built in to the language. This makes it much harder for tooling to give you hints without running the code. For example, what is the return type of this function?
def spam(eggs):
    if eggs:
        return "ham"
    return 42
Sometimes spam returns a string; sometimes it returns an integer. What methods should Intellisense display on the return value of a call to spam?
What are the available attributes on this class?
class Spam:
    def __getattr__(self, name):
        if len(name) > 5:
            return "foo"
        raise AttributeError(name)
Spam sometimes dynamically generates attributes: what should Intellisense display for an instance of Spam?
In these cases there is no correct answer. You might be able to volunteer some guesses (for example, you could show a list containing both str and int's methods on the return value of spam), but you can't give suggestions that will be right all the time.
So Intellisense tooling for Python is reduced to best-guesses. In the example you give, your IDE doesn't know enough about the return type of get_sheet_by_name to give you information about wsheet1. However, it does know the type of wsheet2 because you just instantiated it to a Worksheet. In your second example, Intellisense is simply making a (correct) guess about the return type of f1 by inspecting its source code.
Incidentally, auto-completion in an interactive shell like IPython is more reliable. This is because IPython actually runs the code you type. It can tell what the runtime type of an object is because the analysis is happening at runtime.
You can use an assert to tell intellisense what class you want a variable to be. Of course, now it will throw an error if it isn't, but that's a good thing.
assert isinstance(my_variable, class_i_want_it_to_be)
This will give you the auto-complete and ctrl-click to jump to the function that you have been looking for. (At least this is how it works now in 2022, some other answers are 5 years old).
Here is a quick example.
#!/usr/bin/python3
class FooMaker():
    def make_foo(self):
        return "foo"

# a list built from literal constructor calls
list1 = [FooMaker(), FooMaker()]

# the same contents, but built dynamically
list2 = []
for i in range(2):
    list2.append(FooMaker())

# intellisense knows this is a FooMaker
m1 = list1[0]
# but intellisense isn't sure what this object is
m2 = list2[0]

# make_foo is highlighted for m1 and not for m2
m1.make_foo()
m2.make_foo()

# now make_foo is highlighted for m2 as well
assert isinstance(m2, FooMaker)
m2.make_foo()
The color difference is subtle in my VS Code, but here is a screenshot anyway.
tldr:
So many online answers just say "no" that it took me a while to say: "this is ridiculous, I don't have to deal with this in C, there must be a better way".
Yes, Python is dynamically typed, but that doesn't mean intellisense is necessarily barred from suggesting "you probably want this".
It also doesn't mean you have to "just deal with it" because you chose Python.
Furthermore, adding assert statements is good practice and will shorten your development time when things start to get complicated. You might pass a variable a long way down a chain of functions before you get a type error, and then you have to dig a long way back up to find where it went wrong. Just state what the variable is when you decide what it is, and that's where the error will be thrown when something goes wrong.
It's also much easier to show other developers what you are trying to do. I even see asserts like this in C libraries, and always wondered why they bothered in a strongly typed language; now it makes a lot more sense. I would also speculate there is little performance hit from adding an assert (compiler stuff, blah blah, I'll leave that for the comments).
Well, technically in Python a method may return anything, and the type of the result is defined only when the operation completes.
Consider this simple function:
def f(a):
    if a == 1:
        return 1          # returns int
    elif a == 2:
        return "2"        # returns string
    else:
        return object()   # returns an `object` instance
The function is perfectly valid Python, and its result is strictly defined, but only at the end of the function's execution. Indeed:
>>> type(f(1))
<type 'int'>
>>> type(f(2))
<type 'str'>
>>> type(f(3))
<type 'object'>
Certainly this flexibility is not needed all the time, and most methods return something predictable a priori. An intelligent IDE could analyze the code (and other hints, like docstrings, which may specify argument and return types), but this will always be a guess with a certain level of confidence. There is also PEP 484, which introduces type hints at the language level, but it's optional, relatively new, and legacy code generally doesn't use it.
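As a small sketch of what PEP 484 hints look like (the function name and body here are made up for illustration):

```python
def load_sheet(name: str) -> list:
    """The annotated return type lets an IDE offer list-method
    completions on the result without ever running the code."""
    return [name]

rows = load_sheet("mysheet")   # tooling can now infer: rows is a list
print(rows)
```

With annotations like these, static analyzers and IDEs no longer have to guess the return type from the function body.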
If PyDev doesn't work for a particular case, well, it's a pity, but it's something you have to accept if you choose a language as dynamic as Python. It may be worth trying a different, more intelligent IDE, or keeping a console with an interactive Python prompt open next to your IDE to test your code on the fly. I would suggest using a sophisticated Python shell like bpython.
I've been developing a sudoku solver in Python and the following question came up while trying to improve performance:
Does python remember the result of a calculation if the same calculation has to be performed multiple times throughout the code? Example: compare the following 2 bits of code:
if get_single(foo, bar) is not None:
    position = get_single(foo, bar)

versus:

single = get_single(foo, bar)
if single is not None:
    position = single
Are these 2 pieces of code equal in performance or does the second piece perform faster because the calculation is only performed once?
No, Python does not remember function calls or other calculations automatically. In general, it would be very bad if it did—imagine if every call to, say, random.randrange(6) returned the same value as the first call.
However, it's not hard to explicitly make it remember calls for specific functions where it's useful. This is usually called "memoization".
See the lru_cache decorator in the docs for a nice example built into the stdlib.* All you have to do to make it remember every call to get_single(foo, bar) is change the definition of get_single like this:
@functools.lru_cache(maxsize=None)
def get_single(foo, bar):
    # etc.
Or, if get_single is someone else's code that you're importing and can't touch, you can just wrap it:
get_single = functools.lru_cache(maxsize=None)(othermod.get_single)
… and then call your wrapper instead of the module's version.
* Note that lru_cache was added in Python 3.2. If you're using 2.7 (or, for some reason, 3.0-3.1), you can install the backport from PyPI, or find any of dozens of other memoizing caches on PyPI or ActiveState—or even, noticing that the functools docs link to the source, like many other stdlib modules meant to also serve as example code, copy the source to your own project. Although, IIRC, the 3.2 code needs a small change to work with 2.7 because it relies on nonlocal to hide its internals.
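A small runnable sketch of the effect (the body of get_single here is a stand-in; the counter just shows how often the body actually runs):

```python
import functools

calls = 0

@functools.lru_cache(maxsize=None)
def get_single(foo, bar):
    global calls
    calls += 1        # incremented only when the body really executes
    return foo + bar

get_single(1, 2)
get_single(1, 2)      # same arguments: served from the cache
print(calls)          # 1
```

Note this only works when the arguments are hashable and the function is pure (no side effects you care about repeating).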
That being said, even if you know get_single is memoized, it's still not very good style to call it twice. If you only need to do this once, just write the three lines of code. If you need to do it repeatedly, write a wrapper function that wraps up those three lines of code; calling that function will then be shorter than even the two-line version.
I'm having trouble with Vim and Python completion.
In fact I'm totally confused how does this work.
I have generic gvim 7.3, on windows 7 (with python/dyn)
I'm using SuperTab plugin, amongst many others, some of which
are python-specific, with following settings in vimrc:
au FileType python set omnifunc=pythoncomplete#Complete
let g:SuperTabDefaultCompletionType = "context"
let g:SuperTabContextDefaultCompletionType = "<c-n>"
I did not set the PYTHONPATH environment variable.
Completion works ok for system modules.
At first I thought that it wasn't working at all for non-system code, but that's not the case.
What is happening is best shown on following code:
import numpy.random  # if this line is commented out, completion in the last line works

class C(object):
    def __init__(self, x_):
        self.x = x_
    def getX(self):
        return self.x
    def pr(self):
        print 'ok'

a = C(10)  # nothing changes if I put C() instead, even though it would be wrong
a.  # here is the completion in question
The problem is that completion works (a.<tab> suggests getX and pr) if the import line is commented out. But if import numpy.random is there, completion breaks down.
Note: this import works normally when I run the code.
What are the prerequisites for Python completion?
What's happening, and what should I do to get completion working for Python?
As I am (relatively) new to Vim, any suggestion is appreciated.
EDIT:
It seems that the problem is in using the a.b form in the import. If I do from numpy import random, everything is OK. If this is reasonably easy to fix, I would like to get the a.b form to work too, but now that I know how to work around it, that's not so important.
Are there more unusual problems like this one, so that I know what's happening in the future?
pythoncomplete is rather old and unmaintained.
Try to use Jedi: https://github.com/davidhalter/jedi-vim
It was originally an improved pythoncomplete, but is now much much more powerful!
It works for complex code:
And has additional features:
There is a list of all possible features:
builtin functions/classes support
complex module / function / class structures
ignores syntax and indentation errors
multiple returns / yields
tuple assignments / array indexing / dictionary indexing
exceptions / with-statement
*args / **kwargs
decorators
descriptors -> property / staticmethod / classmethod
closures
generators (yield statement) / iterators
support for some magic methods: __call__, __iter__, __next__, __get__, __getitem__, __init__
support for list.append, set.add, list.extend, etc.
(nested) list comprehensions / ternary expressions
relative imports
getattr() / __getattr__ / __getattribute__
function annotations (py3k feature; ignored right now, but being parsed. I don't know what to do with them.)
class decorators (py3k feature; being ignored too, until I find a use case that doesn't work with Jedi)
simple/usual sys.path modifications
simple/usual sys.path modifications
isinstance checks for if/while/assert
Python, being an incredibly dynamic language, doesn't lend itself to completion. There isn't any really good completion out there. It's easier to just live without it than to fight with all its problems, IMO. That said, python-mode really is fantastic, as neoascetic said.
I've been thinking about this far too long and haven't gotten any idea, maybe some of you can help.
I have a folder of Python scripts, all of which have the same surrounding body (literally, I generated it from a shell script), but each has one chunk that's different from all the others. In other words:
Top piece of code (always the same)
Middle piece of code (changes from file to file)
Bottom piece of code (always the same)
And I realized today that this is a bad idea; for example, if I want to change something in the top or bottom sections, I need to write a shell script to do it. (Not that that's hard, it just seems like very bad practice code-wise.)
So what I want to do, is have one outer python script that is like this:
Top piece of code
Dynamic function that calls the middle piece of code (based on a parameter)
Bottom piece of code
And then every other Python file in the folder can simply be the middle piece of code. However, a normal module wouldn't work here (unless I'm mistaken), because I would get the code I need to execute from the argument, which would be a string, and thus I wouldn't know which function to run until runtime.
So I thought up two more solutions:
I could write up a bunch of if statements, one to run each script based on a certain parameter. I rejected this, as it's even worse than the previous design.
I could use:
os.system(sys.executable + " scriptName.py")
which would run the script, but calling python to call python doesn't seem very elegant to me.
So does anyone have any other ideas? Thank you.
If you know the name of the function as a string and the name of the module as a string, then you can do:
mod = __import__(module_name)
fn = getattr(mod, fn_name)
fn()
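A modern, runnable variant of the same idea uses importlib (the json/dumps names here are only stand-ins for your module and function names, which would come from the command line):

```python
import importlib

module_name, fn_name = "json", "dumps"   # chosen at runtime, e.g. from sys.argv
mod = importlib.import_module(module_name)
fn = getattr(mod, fn_name)
print(fn({"a": 1}))   # {"a": 1}
```

importlib.import_module also handles dotted names like "package.module", which __import__ makes awkward.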
Another possible solution is to have each of your repetitive files import the functionality from the main file
from topAndBottom import top, bottom
top()
# do middle stuff
bottom()
In addition to the several answers already posted, consider the Template Method design pattern: make an abstract class such as
class Base(object):
    def top(self): ...
    def bottom(self): ...
    def middle(self): raise NotImplementedError
    def doit(self):
        self.top()
        self.middle()
        self.bottom()
Every pluggable module then makes a class which inherits from this Base and must override middle with the relevant code.
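A minimal runnable sketch of the pattern, with trivial bodies filled in for top and bottom (the MyMiddle class name is invented for illustration):

```python
class Base(object):
    def top(self):
        return "top"
    def bottom(self):
        return "bottom"
    def middle(self):
        raise NotImplementedError
    def doit(self):
        # the invariant skeleton: top, then the pluggable middle, then bottom
        return [self.top(), self.middle(), self.bottom()]

# one pluggable module's contribution: override only middle()
class MyMiddle(Base):
    def middle(self):
        return "my middle piece"

print(MyMiddle().doit())   # ['top', 'my middle piece', 'bottom']
```

Each file in the folder would then define just such a subclass, and the driver imports it and calls doit.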
Perhaps not warranted for this simple case (you do still have to import the right module in order to instantiate its class and call doit on it), but still worth keeping in mind (together with its many Pythonic variations, which I have amply explained in many tech talks now available on youtube) for cases where the number or complexity of "pluggable pieces" keeps growing -- Template Method (despite its horrid name;-) is a solid, well-proven and highly scalable pattern [[sometimes a tad too rigid, but that's exactly what I address in those many tech talks -- and that problem doesn't apply to this specific use case]].
However, a normal module wouldn't work here (unless I'm mistaken), because I would get the code I need to execute from the argument, which would be a string, and thus I wouldn't know which function to run until runtime.
It will work just fine: use the __import__ builtin or, if you have a very complex layout, the importlib module to import your script. Then you can get the function by module.__dict__[funcname], for example.
Importing a module (as explained in other answers) is definitely the cleaner way to do this, but if for some reason that doesn't work, as long as you're not doing anything too weird you can use exec. It basically runs the content of another file as if it were included in the current file at the point where exec is called. It's the closest thing Python has to a source statement of the kind included in many shells. As a bare minimum, something like this should work:
exec(open(filename).read())
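To keep the executed code from clobbering your own globals, you can pass an explicit namespace; a small sketch (the inline source string stands in for the file's contents):

```python
# stand-in for open(filename).read()
middle_code = "result = value * 2"

namespace = {"value": 21}   # what the middle piece is allowed to see
exec(compile(middle_code, "<middle>", "exec"), namespace)
print(namespace["result"])  # 42
```

Passing the filename to compile also makes tracebacks point at the right source.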
How about this?
def do_thing_one():
    pass

def do_thing_two():
    pass

dispatch = {
    "one": do_thing_one,
    "two": do_thing_two,
}
# do something to get your string from the command line (optparse, argv, whatever)
# and put it in variable "mystring"
# do top thing
f = dispatch[mystring]
f()
# do bottom thing