I just stumbled over callstats in the sys module after PyCharm's
autocomplete suggested it. Otherwise I probably would never have discovered it,
because it isn't even mentioned in the docs.
help(sys.callstats) gives this:
Help on built-in function callstats in module sys:
callstats(...)
callstats() -> tuple of integers
Return a tuple of function call statistics, if CALL_PROFILE was defined
when Python was built. Otherwise, return None.
When enabled, this function returns detailed, implementation-specific
details about the number of function calls executed. The return value is
a 11-tuple where the entries in the tuple are counts of:
0. all function calls
1. calls to PyFunction_Type objects
2. PyFunction calls that do not create an argument tuple
3. PyFunction calls that do not create an argument tuple
and bypass PyEval_EvalCodeEx()
4. PyMethod calls
5. PyMethod calls on bound methods
6. PyType calls
7. PyCFunction calls
8. generator calls
9. All other calls
10. Number of stack pops performed by call_function()
Now I'm curious why it isn't mentioned anywhere, and whether there is a possibility to use it in an Anaconda build of Python.
It returns None when I call sys.callstats(), so I assume the answer to the latter will be no.
However, I'd still be interested in seeing what actual output of this looks like for Python builds
where this works.
Edit:
In Issue28799, linked from the comments below the accepted answer, we find the reason why callstats will be removed in Python 3.7: the stats probably wouldn't be right after an upcoming feature is implemented:
My problem is that with my work on FASTCALL, it became harder to track where the functions are called in practice. It may be out of the Python/ceval.c file. I'm not sure that statistics are still computed correctly after my FASTCALL changes, and I don't know how to check it.
Python has already sys.setprofile(), cProfile and profile modules. There is also sys.settrace(). Do we still need CALL_PROFILE?
Attached patch removes the feature:
Calling the untested and undocumented sys.callstats() function now emits a DeprecationWarning warning
Remove the PyEval_GetCallStats() function and its documentation
I am curious about sys.callstats too, so I compiled a binary of Python 2.7.12 with the CALL_PROFILE flag. With no user code, only the Python bootstrap routines, the result of sys.callstats() is:
PCALL_ALL 1691
PCALL_FUNCTION 371
PCALL_FAST_FUNCTION 363
PCALL_FASTER_FUNCTION 257
PCALL_METHOD 59
PCALL_BOUND_METHOD 58
PCALL_CFUNCTION 892
PCALL_TYPE 394
PCALL_GENERATOR 28
PCALL_OTHER 33
PCALL_POP 2005
Don't. It's undocumented, untested, and disabled in Python 3.7. If you want to do profiling, use cProfile, profile, or sys.setprofile.
For now, if you compile Python 3.6 or 2.7 from source with CALL_PROFILE defined, then sys.callstats does exactly what the docstring says it does with CALL_PROFILE defined: it returns an 11-element tuple containing counts of various internal call types. The stats are only tracked in Python/ceval.c, so it'll miss calls that don't go through there.
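For reference, here is a minimal sketch of how one might dump the counters on such a build (the counter names follow the output above; the getattr guard covers interpreters where sys.callstats does not exist at all):

import sys

# sys.callstats() returns None unless CPython was built with CALL_PROFILE.
names = ("PCALL_ALL", "PCALL_FUNCTION", "PCALL_FAST_FUNCTION",
         "PCALL_FASTER_FUNCTION", "PCALL_METHOD", "PCALL_BOUND_METHOD",
         "PCALL_CFUNCTION", "PCALL_TYPE", "PCALL_GENERATOR",
         "PCALL_OTHER", "PCALL_POP")

stats = getattr(sys, "callstats", lambda: None)()
if stats is None:
    print("CALL_PROFILE is not enabled in this build")
else:
    for name, count in zip(names, stats):
        print(name, count)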
Related
When writing code in IDLE, sometimes when I insert a function, like re.sub( in the example, a window pops up explaining the function and the inputs it needs. I find this very helpful and would like to have this window pop up every time. I googled and tried different key combinations, but I can't seem to find how to do this.
Can somebody help me with this?
It is pretty simple if you want a key combination to show a calltip: just press Ctrl+\.
One thing to remember: it will only work when you have already typed the opening parenthesis, not before.
(Screenshots: parenthesis opened; inside the parenthesis; without an opening parenthesis.)
Your question is specific to the Python IDLE. In IDLE, you have this functionality enabled by default. For it to work, the function (or method) has to be available in the current namespace. That means it has to either be defined in the running environment, or imported into the running environment.
For example:
>>> def foo(x):
...     """the foo function"""
...     return x
when you type >>> foo( at the prompt after the definition, you will see the explanation, which really is the documentation contained in the docstring (the stuff between the triple quotes).
If a function or method does not have any documentation, then you will not see any explanation. For example:
>>> def bar(y):
...     return y
In this case, when you type bar( at the prompt, IDLE will just show the bare signature (y), because the function does not have any documentation.
Some built-in functions (called builtins) do not have docstrings; often this is because they are implemented in the C programming language. For example:
>>> from functools import reduce
>>> reduce(
In this case IDLE will not give any hint because the function does not have any docstring for it to display.
A great companion to learning is the Python standard library reference. You can look up built-in function definitions there for clear explanations of what they do. On the other hand, when writing your own functions, remember to include docstrings, as they will help you as you go on.
IDLE's calltips contain the function signature (if directly available) followed by the beginning of the docstring (if there is one). For builtins that have not gotten the 'Argument Clinic' treatment, the signature is the beginning of the docstring. This is the case for reduce. In 3.6 and 3.7, when I type reduce( after the import and prompt, the calltip contains the signature as given in the docstring. To see the entire reduce() docstring, use >>> help(reduce) or enter reduce.__doc__.
To see more calltips when editing in the editor, run your code after entering the import statements. For instance, if you start IDLE and immediately edit a new file and enter
from functools import reduce
reduce(
you see no calltip, as you described in your question. But if you hit F5 after the import and return to the editor, you will. Similarly, if you want to see calltips for your own functions, run the file occasionally after defining them.
The Win32 API QueryServiceConfig2 function supports the SERVICE_CONFIG_TRIGGER_INFO structure to get the event(s) that trigger service startup. However, Python's win32service.QueryServiceConfig2() does not list such a value as a parameter option. Is it possible to get that information with the win32service module?
Unfortunately, no. Here's a simple code snippet run under Python 3.5 and PyWin32 v221:
#!/usr/bin/env python3

import win32service

if __name__ == "__main__":
    for name in dir(win32service):
        if name.startswith("SERVICE_CONFIG_"):
            print(name, getattr(win32service, name))
Output:
(py35x64_test) e:\Work\Dev\StackOverflow\q046916726>"c:\Work\Dev\VEnvs\py35x64_test\Scripts\python.exe" a.py
SERVICE_CONFIG_DELAYED_AUTO_START_INFO 3
SERVICE_CONFIG_DESCRIPTION 1
SERVICE_CONFIG_FAILURE_ACTIONS 2
SERVICE_CONFIG_FAILURE_ACTIONS_FLAG 4
SERVICE_CONFIG_PRESHUTDOWN_INFO 7
SERVICE_CONFIG_REQUIRED_PRIVILEGES_INFO 6
SERVICE_CONFIG_SERVICE_SID_INFO 5
I've also checked win32con (another PyWin32 module that only contains constant definitions), but no luck there either.
Then, I took a look at ${PYWIN32_SRC_DIR}/pywin32-221/win32/src/win32service.i, and noticed that for ChangeServiceConfig2 (and also QueryServiceConfig2), the InfoLevel argument is specifically checked against the above constants, and thus passing its value (8) directly would raise an exception (NotImplementedError).
Before going further, let's spend a little time understanding what happens when calling such a Python wrapper (like win32service.QueryServiceConfig2):
Arguments (Python style - if any) are converted to C style
C function is called with the above converted arguments
Function result (or output arguments) - if any - are converted back to Python
For the [MS.Docs]: ChangeServiceConfig2W function, data is transferred back and forth via arguments (depending on the dwInfoLevel value, lpInfo can have various meanings).
Let's take a look at a value that is supported, e.g. SERVICE_CONFIG_PRESHUTDOWN_INFO:
lpInfo is a pointer to a [MS.Docs]: SERVICE_PRESHUTDOWN_INFO structure which only has a dwPreshutdownTimeout member (which is a simple DWORD)
On the other hand, SERVICE_CONFIG_TRIGGER_INFO:
lpInfo is a pointer to a [MS.Docs]: SERVICE_TRIGGER_INFO structure
pTriggers member is a pointer to a [MS.Docs]: SERVICE_TRIGGER structure
pDataItems member is a pointer to a [MS.Docs]: SERVICE_TRIGGER_SPECIFIC_DATA_ITEM structure
which is waaay more complex (and note that all involved structures have other members as well, I only listed the ones that increase the nesting level).
Adding support for all those arguments is not exactly a trivial task, so they aren't handled (at least for the moment).
There are more examples like this one; I guess it's a matter of priority, as not many people have requested the functionality, combined with MS's (unfortunate?) design decision to have functions with such complex behaviors.
As an alternative (a quite complex one), you can use [Python 3.Docs]: ctypes - A foreign function library for Python, but you'll have to define all the structures that I listed above in Python (extending ctypes.Structure).
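To give an idea of the direction (a sketch, not a drop-in solution): the snippet below queries just the trigger count via ctypes. SERVICE_TRIGGER_INFO is trimmed to its three documented members, pTriggers is left as an opaque pointer (decoding it would require the nested structure definitions), and hService is assumed to be a raw handle obtained via OpenServiceW with SERVICE_QUERY_CONFIG access:

import ctypes
from ctypes import wintypes

SERVICE_CONFIG_TRIGGER_INFO = 8  # the info level that PyWin32 rejects

class SERVICE_TRIGGER_INFO(ctypes.Structure):
    _fields_ = (("cTriggers", wintypes.DWORD),
                ("pTriggers", ctypes.c_void_p),  # really PSERVICE_TRIGGER
                ("pReserved", ctypes.c_void_p))

advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
advapi32.QueryServiceConfig2W.argtypes = (
    wintypes.SC_HANDLE, wintypes.DWORD, ctypes.c_void_p,
    wintypes.DWORD, wintypes.LPDWORD)
advapi32.QueryServiceConfig2W.restype = wintypes.BOOL

def query_trigger_count(hService):
    needed = wintypes.DWORD(0)
    # The first call fails with ERROR_INSUFFICIENT_BUFFER on purpose and
    # reports the required buffer size in `needed`.
    advapi32.QueryServiceConfig2W(hService, SERVICE_CONFIG_TRIGGER_INFO,
                                  None, 0, ctypes.byref(needed))
    buf = ctypes.create_string_buffer(needed.value)
    if not advapi32.QueryServiceConfig2W(hService, SERVICE_CONFIG_TRIGGER_INFO,
                                         buf, needed, ctypes.byref(needed)):
        raise ctypes.WinError(ctypes.get_last_error())
    info = ctypes.cast(buf, ctypes.POINTER(SERVICE_TRIGGER_INFO)).contents
    return info.cTriggers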
As a side note, I was in the exact same situation once: I had to call [MS.Docs]: LsaLogonUser function (!!! 14 freaking arguments !!!). Obviously, it wasn't exported by PyWin32, so I called it via ctypes, but I had to write:
~170 lines of code to define structures (only the ones that I needed for my scenario)
~50 lines to populate them and call the function
I am quite new to Python programming and working on a quite large Python module using IntelliJ. Suppose I have a static method that is called from multiple places, and I change the signature of this method to accept a different number of arguments. I might fix the actual method calls in some places but miss changing the calls in others. In Java I would receive a compile-time error, but since Python is interpreted I will only figure it out sometime during runtime (probably in production). Until now I have been using the command 'python -m compileall', but I was wondering: as in Java, is there any way to get such errors in IntelliJ?
Unit tests and static code analysis tools such as pylint will help. pylint is able to detect incorrect number of arguments being passed to functions.
If you're using Python 3, function annotations might be useful, and mypy can type check annotated function calls (apparently, I've not used it).
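As a hypothetical illustration (demo.py is an assumed file name), both tools flag exactly the kind of mismatch described in the question:

# demo.py
def scale(value: float, factor: float) -> float:
    return value * factor

scale(2.0, 3.0, 4.0)  # pylint: E1121 (too-many-function-args)
scale(2.0, "x")       # mypy: incompatible type "str"; expected "float"

Running pylint demo.py or mypy demo.py reports the bad call sites without executing the code.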
In general the strategy for changing existing function signatures without breaking dependent code is to use keyword arguments. For example, if you wanted to add a new argument to a function, add it as a keyword argument:
# def f(a):
#     """Original function"""
#     print(a)

def f(a, b=None):
    """New and improved function"""
    print(a)
    if b is not None:
        print(b)
Now calls with and without the new argument will work:
>>> f('blah')
blah
>>> f('blah', 'cough')
blah
cough
Of course this will not always work, e.g. if argument(s) are removed, or if the semantics of the function are changed in a way that breaks existing code.
I've been developing a sudoku solver in Python and the following question came up while trying to improve performance:
Does Python remember the result of a calculation if the same calculation has to be performed multiple times throughout the code? For example, compare the following two bits of code:
if get_single(foo, bar) is not None:
    position = get_single(foo, bar)

single = get_single(foo, bar)
if single is not None:
    position = single
Are these 2 pieces of code equal in performance or does the second piece perform faster because the calculation is only performed once?
No, Python does not remember function calls or other calculations automatically. In general, it would be very bad if it did—imagine if every call to, say, random.randrange(6) returned the same value as the first call.
However, it's not hard to explicitly make it remember calls for specific functions where it's useful. This is usually called "memoization".
See the lru_cache decorator in the docs for a nice example built into the stdlib.* All you have to do to make it remember every call to get_single(foo, bar) is change the definition of get_single like this:
import functools

@functools.lru_cache(maxsize=None)
def get_single(foo, bar):
    # etc.
Or, if get_single is someone else's code that you're importing and can't touch, you can just wrap it:
get_single = functools.lru_cache(maxsize=None)(othermod.get_single)
… and then call your wrapper instead of the module's version.
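As a quick sanity check (a sketch with a stand-in body for get_single), you can watch the cache work via cache_info():

import functools

@functools.lru_cache(maxsize=None)
def get_single(foo, bar):
    print("computing...")  # printed only on a cache miss
    return foo + bar

get_single(1, 2)                # miss: prints "computing...", returns 3
get_single(1, 2)                # hit: returns 3 without recomputing
print(get_single.cache_info())  # CacheInfo(hits=1, misses=1, maxsize=None, currsize=1)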
* Note that lru_cache was added in Python 3.2. If you're using 2.7 (or, for some reason, 3.0-3.1), you can install the backport from PyPI, or find any of dozens of other memoizing caches on PyPI or ActiveState—or even, noticing that the functools docs link to the source, like many other stdlib modules meant to also serve as example code, copy the source to your own project. Although, IIRC, the 3.2 code needs a small change to work with 2.7 because it relies on nonlocal to hide its internals.
That being said, even if you know get_single is memoized, it's still not very good style to call it twice. If you only need to do this once, just write the three lines of code. If you need to do it repeatedly, write a wrapper function that wraps up those three lines of code, and then calling that function will be shorter than even the two-line version.
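A minimal sketch of such a wrapper (get_position is a hypothetical name; get_single and the None check come from the question):

def get_position(foo, bar, default=None):
    # Call get_single exactly once and reuse the result.
    single = get_single(foo, bar)
    return single if single is not None else default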
def funcA(x):
    return x
Is funcA.__code__.__hash__() a suitable way to check whether funcA has changed?
I know that funcA.__hash__() won't work, as it is the same as id(funcA) / 16. I checked, and this isn't true for __code__.__hash__(). I also tested the behaviour in an IPython terminal and it seemed to hold. But is this guaranteed to work?
Why
I would like to have a way of comparing an old version of function to a new version of the same function.
I'm trying to create a decorator for disk-based/long-term caching. Thus I need a way to identify if a function has changed. I also need to look at the call graph to check that none of the called functions have changed but that is not part of this question.
Requirements:
Needs to be stable over multiple calls and machines. [1] says that in Python 3.3 hash() is randomized on each start of a new instance, although it also says that "HASH RANDOMIZATION IS DISABLED BY DEFAULT". Ideally, I'd like a function that is stable even with randomization enabled.
Ideally, it would yield the same hash for def funcA(): pass and def funcB(): pass, i.e. when only the name of the function changes. Probably not necessary.
I only care about Python 3.
One alternative would be to hash the text inside the file that contains the given function.
Yes, it seems that func_a.__code__.__hash__() is unique to the specific functionality of the code. I could not find where this is implemented, or where __code__.__hash__() is defined.
The perfect way would be to use func_a.__code__.co_code.__hash__(), because co_code contains the bytecode as a string. Note that in this case the function name is not part of the hash, and two functions with the same code but with the names func_a and func_b will have the same hash:
hash(func_a.__code__.co_code)
Source.
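A small sketch illustrating this, with one caveat relevant to the stability requirement above: hash() of a bytes object is randomized across interpreter runs when hash randomization is enabled, so for cross-run stability hashing co_code with hashlib is the safer choice.

import hashlib

def func_a(x):
    return x

def func_b(x):  # same body, different name
    return x

def func_c(x):  # different body
    return x + 1

# Same bytecode -> same hash; different bytecode -> different hash.
assert hash(func_a.__code__.co_code) == hash(func_b.__code__.co_code)
assert hash(func_a.__code__.co_code) != hash(func_c.__code__.co_code)

# Stable across runs and machines, unlike hash():
print(hashlib.sha256(func_a.__code__.co_code).hexdigest())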