python win32service - Getting triggered startup information for service - python

win32 API QueryServiceConfig2 function supports the SERVICE_CONFIG_TRIGGER_INFO structure to get event(s) that trigger the service startup. However, python's win32service.QueryServiceConfig2() does not list such value as a parameter option. Is it possible to get that information with the win32service module?

Unfortunately, no. Here's a simple code snippet, run under Python 3.5 and PyWin32 v221:
#!/usr/bin/env python3
import win32service
if __name__ == "__main__":
    for name in dir(win32service):
        if name.startswith("SERVICE_CONFIG_"):
            print(name, getattr(win32service, name))
Output:
(py35x64_test) e:\Work\Dev\StackOverflow\q046916726>"c:\Work\Dev\VEnvs\py35x64_test\Scripts\python.exe" a.py
SERVICE_CONFIG_DELAYED_AUTO_START_INFO 3
SERVICE_CONFIG_DESCRIPTION 1
SERVICE_CONFIG_FAILURE_ACTIONS 2
SERVICE_CONFIG_FAILURE_ACTIONS_FLAG 4
SERVICE_CONFIG_PRESHUTDOWN_INFO 7
SERVICE_CONFIG_REQUIRED_PRIVILEGES_INFO 6
SERVICE_CONFIG_SERVICE_SID_INFO 5
I've also checked win32con (which is another PyWin32 module, that only contains constant definitions), but no luck either.
Then, I took a look at ${PYWIN32_SRC_DIR}/pywin32-221/win32/src/win32service.i and noticed that for ChangeServiceConfig2 (and also QueryServiceConfig2), the InfoLevel argument is specifically checked against the above constants, so passing the value 8 directly raises an exception (NotImplementedError).
Before going further, let's spend a moment understanding what happens when calling such a Python wrapper (like win32service.QueryServiceConfig2):
Arguments (Python style - if any) are converted to C style
C function is called with the above converted arguments
Function result (or output arguments) - if any - are converted back to Python
For [MS.Docs]: ChangeServiceConfig2W function, data is transferred back and forth via arguments (depending on dwInfoLevel value, lpInfo can have various meanings):
Let's take a look at a value that is supported, e.g. SERVICE_CONFIG_PRESHUTDOWN_INFO:
lpInfo is a pointer to a [MS.Docs]: SERVICE_PRESHUTDOWN_INFO structure which only has a dwPreshutdownTimeout member (which is a simple DWORD)
On the other hand, SERVICE_CONFIG_TRIGGER_INFO:
lpInfo is a pointer to a [MS.Docs]: SERVICE_TRIGGER_INFO structure
pTriggers member is a pointer to a [MS.Docs]: SERVICE_TRIGGER structure
pDataItems member is a pointer to a [MS.Docs]: SERVICE_TRIGGER_SPECIFIC_DATA_ITEM structure
which is waaay more complex (and note that all involved structures have other members as well, I only listed the ones that increase the nesting level).
Adding support for all those arguments is not exactly a trivial task, so they aren't handled (at least for the moment).
There are more examples like this one; I guess it's a matter of priority, as not many people requested the functionality, combined with MS's (unfortunate?) design decision to have functions with such complex behaviors.
As an alternative (a quite complex one), you can use [Python 3.Docs]: ctypes - A foreign function library for Python, but you'll have to define all the structures that I listed above in Python (to extend ctypes.Structure).
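To give an idea of what that ctypes route involves, here is a rough sketch of the structure definitions. The field layouts are transcribed from the MS docs pages mentioned above, and the SERVICE_CONFIG_TRIGGER_INFO value of 8 comes from winsvc.h; treat all of it as an assumption to be checked against your SDK headers:

```python
import ctypes
from ctypes import wintypes


class GUID(ctypes.Structure):
    # Standard Windows GUID layout (not provided by ctypes.wintypes).
    _fields_ = [("Data1", wintypes.DWORD),
                ("Data2", wintypes.WORD),
                ("Data3", wintypes.WORD),
                ("Data4", ctypes.c_ubyte * 8)]


class SERVICE_TRIGGER_SPECIFIC_DATA_ITEM(ctypes.Structure):
    _fields_ = [("dwDataType", wintypes.DWORD),
                ("cbData", wintypes.DWORD),
                ("pData", ctypes.POINTER(ctypes.c_ubyte))]


class SERVICE_TRIGGER(ctypes.Structure):
    _fields_ = [("dwTriggerType", wintypes.DWORD),
                ("dwAction", wintypes.DWORD),
                ("pTriggerSubtype", ctypes.POINTER(GUID)),
                ("cDataItems", wintypes.DWORD),
                ("pDataItems", ctypes.POINTER(SERVICE_TRIGGER_SPECIFIC_DATA_ITEM))]


class SERVICE_TRIGGER_INFO(ctypes.Structure):
    _fields_ = [("cTriggers", wintypes.DWORD),
                ("pTriggers", ctypes.POINTER(SERVICE_TRIGGER)),
                ("pReserved", ctypes.POINTER(ctypes.c_ubyte))]


SERVICE_CONFIG_TRIGGER_INFO = 8  # from winsvc.h
```

On Windows you would then open the service via advapi32's OpenSCManagerW / OpenServiceW and call QueryServiceConfig2W twice: once with a NULL buffer to learn the required size (expecting ERROR_INSUFFICIENT_BUFFER), then again with a buffer that you cast to POINTER(SERVICE_TRIGGER_INFO).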
As a side note, I was in the exact situation once: I had to call [MS.Docs]: LsaLogonUser function (!!! 14 freaking arguments !!!). Obviously, it wasn't exported by PyWin32, so I called it via ctypes, but I had to write:
~170 lines of code to define structures (only the ones that I needed for my scenario)
~50 lines to populate them and call the function

Related

What is PyCompilerFlags in Python C API?

If you check the Python C API documentation about running Python code via C calls, you will always find mentions of PyCompilerFlags, but nothing really describes what it is except the last portion of the documentation, and that says nothing about its possible values or their effect on execution.
PyCompilerFlags is the C API equivalent to the flags argument passed to compile and related functions in Python. This probably isn't at all obvious if you don't already know the Python docs forward and backward before looking at the CPython C-API docs.
From compile:
The optional arguments flags and dont_inherit control which future statements affect the compilation of source. If neither is present (or both are zero) the code is compiled with those future statements that are in effect in the code that is calling compile(). If the flags argument is given and dont_inherit is not (or is zero) then the future statements specified by the flags argument are used in addition to those that would be used anyway. If dont_inherit is a non-zero integer then the flags argument is it – the future statements in effect around the call to compile are ignored.
Future statements are specified by bits which can be bitwise ORed together to specify multiple statements. The bitfield required to specify a given feature can be found as the compiler_flag attribute on the _Feature instance in the __future__ module.
Following the link to future statements gives more details on how they work, and the link to the __future__ has a chart showing the list of future statements available.
Another thing that may not be obvious: each future feature flag corresponds to a flag that ends up in the co_flags attribute of a code object. So:
import __future__
code = compile('1 <> 2', '', 'eval', flags=__future__.barry_as_FLUFL.compiler_flag)
assert code.co_flags & __future__.barry_as_FLUFL.compiler_flag  # i.e. CO_FUTURE_BARRY_AS_BDFL
In C, you'd pass struct PyCompilerFlags flags = { CO_FUTURE_BARRY_AS_BDFL } to get the same effect.
If you want to see the actual numeric values for those flags, you have to look up the corresponding CO_* constants in the C source or in the __future__ source.
Things are slightly different in the C API, in a few ways.
Rather than passing both flags and dont_inherit, you only pass flags, which is a complete set of all of the future statements you want to be in effect during the PyRun_* or PyCompile_* call.
Most of the functions take a PyCompilerFlags struct holding an int, instead of a raw int. This is just for the purpose of type checking; in memory, a struct holding an int is stored the same way as an int.
Many functions take their flags by pointer, so you can retrieve the possibly-updated set of flags after running the code.
Let's look at a complete example. I'll use Python 2.7 even though I've been linking to 3.7 docs, just because an example using print is simpler than one using forward annotations.
This code prints an empty tuple:
print()
But if you run that same string with PyRun_SimpleStringFlags, passing CO_FUTURE_PRINT_FUNCTION (0x10000) as the flags, it will print a blank line, a la Python 3.
If you run this code:
from __future__ import print_function
print()
… then whether you passed in 0 or CO_FUTURE_PRINT_FUNCTION, it will print a blank line. And after the call, if you look at the flags you passed in by reference, it will have that CO_FUTURE_PRINT_FUNCTION or'd onto it. So, if you're compiling and running a chunk at a time, you can pass that same value along to the next string, and it'll inherit that future flag. (Much like when you write a future statement in the interactive interpreter, it affects all the statements you interpret after that.)
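The same inheritance dance can be demonstrated from pure Python with compile rather than the C API (using barry_as_FLUFL as a stand-in feature, since print_function is always on in Python 3):

```python
import __future__

feature = __future__.barry_as_FLUFL

# Chunk 1 contains the future statement; merely compiling it records the
# feature bit in the code object's co_flags.
chunk1 = compile("from __future__ import barry_as_FLUFL", "<chunk1>", "exec")
assert chunk1.co_flags & feature.compiler_flag

# Carry the accumulated bit into the next chunk, just as the C API hands
# you back an updated PyCompilerFlags to pass along to the next PyRun_* call.
carried = chunk1.co_flags & feature.compiler_flag
chunk2 = compile("ok = (1 <> 2)", "<chunk2>", "exec", flags=carried)

ns = {}
exec(chunk2, ns)
print(ns["ok"])  # → True, because 1 <> 2 means 1 != 2 under the FLUFL
```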

How to utilize sys.callstats?

I just stumbled over callstats in the sys module after PyCharm's autocomplete suggested it. Otherwise I probably would never have discovered it, because it isn't even mentioned in the docs.
help(sys.callstats) gives this:
Help on built-in function callstats in module sys:
callstats(...)
callstats() -> tuple of integers
Return a tuple of function call statistics, if CALL_PROFILE was defined
when Python was built. Otherwise, return None.
When enabled, this function returns detailed, implementation-specific
details about the number of function calls executed. The return value is
a 11-tuple where the entries in the tuple are counts of:
0. all function calls
1. calls to PyFunction_Type objects
2. PyFunction calls that do not create an argument tuple
3. PyFunction calls that do not create an argument tuple
and bypass PyEval_EvalCodeEx()
4. PyMethod calls
5. PyMethod calls on bound methods
6. PyType calls
7. PyCFunction calls
8. generator calls
9. All other calls
10. Number of stack pops performed by call_function()
Now I'm curious why it doesn't get mentioned anywhere and if there is a possibility to use it in an Anaconda build for Python.
It returns None when I call sys.callstats(), so I assume the answer to the latter will be no.
However, I'd still be interested in seeing what an actual output would look like for Python builds where this works.
Edit:
In Issue28799, linked from the comments below the accepted answer, we find the reason why callstats will be removed in Python 3.7: the stats probably wouldn't be correct once an upcoming feature is implemented:
My problem is that with my work on FASTCALL, it became harder to track where the functions are called in practice. It maybe out of the Python/ceval.c file. I'm not sure that statistics are still computed correctly after my FASTCALL changes, and I don't know how to check it.
Python has already sys.setprofile(), cProfile and profile modules. There is also sys.settrace(). Do we still need CALL_PROFILE?
Attached patch removes the feature:
Calling the untested and undocumented sys.callstats() function now emits a DeprecationWarning
Remove the PyEval_GetCallStats() function and its documentation
I am curious about sys.callstats too, so I compiled a binary of Python 2.7.12 with the CALL_PROFILE flag. With zero user code, only the Python bootstrap routines, the result of sys.callstats() is:
PCALL_ALL 1691
PCALL_FUNCTION 371
PCALL_FAST_FUNCTION 363
PCALL_FASTER_FUNCTION 257
PCALL_METHOD 59
PCALL_BOUND_METHOD 58
PCALL_CFUNCTION 892
PCALL_TYPE 394
PCALL_GENERATOR 28
PCALL_OTHER 33
PCALL_POP 2005
Don't. It's undocumented, untested, and disabled in Python 3.7. If you want to do profiling, use cProfile, profile, or sys.setprofile.
For now, if you compile Python 3.6 or 2.7 from source with CALL_PROFILE defined, sys.callstats does exactly what its docstring says: it returns an 11-element tuple containing counts of various internal call types. The stats are only tracked in Python/ceval.c, so it'll miss calls that don't go through there.
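Since those interpreter-wide counters are gone, the supported route is the profilers the quoted message mentions. A minimal cProfile run looks like this (fib is just a throwaway workload for illustration):

```python
import cProfile
import io
import pstats


def fib(n):
    # Deliberately naive recursion, to generate plenty of calls to count.
    return n if n < 2 else fib(n - 1) + fib(n - 2)


profiler = cProfile.Profile()
profiler.enable()
fib(10)
profiler.disable()

# Report per-function call counts, instead of the interpreter-wide
# counters that sys.callstats() used to expose.
buffer = io.StringIO()
pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats("fib")
report = buffer.getvalue()
print(report)
```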

Passing strings to Fortran subroutine using ctypes

I am currently trying to pass a string to a Fortran library. I have gotten other functions from this library to work, but this particular one seems to be unique in that it wants a string passed to it as an argument to the function.
Looking at the source code, the function requires three arguments
SUBROUTINE EZVOLLIB(VOLEQI,DBHOB,HTTOT,VOL)
and the arguments are defined:
IMPLICIT NONE
CHARACTER*(*) VOLEQI
CHARACTER*10 VOLEQ
REAL DBHOB,HTTOT,TOPD, VOL(15), MHT
INTEGER REGN,ERRFLG
In Python my call to the function looks like
from ctypes import *
mylib = cdll.LoadLibrary('/home/bryce/Programming/opencompile/libvollib.so')
dbhob = c_float(42.2)
vol = (c_float * 15)()
voleqi = c_char_p("101DVEW119 ")
mylib.ezvollib_(voleqi, dbhob, vol)
This runs without a segmentation fault, but does not seem to "fill" the variable vol with the desired 15 float values.
Is there any way to get vol to retrieve the values being returned from the EZVOLLIB function?
There are many similar questions here, but it is hard to find an exact duplicate. There are several possible ways to do that with different degrees of universal correctness and portability.
The most correct way is to use modern Fortran's C interoperability, as explained in fortran77, iso_c_binding and c string. That requires writing more Fortran wrapping code.
There are people who are strictly against writing any more Fortran, even though that is the only portable solution. In that case they must find out what the actual calling convention for Fortran strings is in their compiler. Usually a hidden integer argument with the string length is passed to the subroutine. For gfortran, see https://gcc.gnu.org/onlinedocs/gfortran/Argument-passing-conventions.html
Your ctypes interface could employ these compiler-specific calling conventions but then the interface will be, well, compiler-specific. But you are already relying on the specific name mangling ezvollib_ and that is compiler-specific as well. You can find examples here on SO where people were bitten by relying on that.
Also note, as noted by High Performance Mark, that the subroutine in question has four arguments, not three: EZVOLLIB(VOLEQI,DBHOB,HTTOT,VOL). Calling it with just three as in mylib.ezvollib_(voleqi, dbhob, vol) is an error. You are missing the HTTOT argument.
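Putting those pieces together, a corrected call might look like the sketch below. It assumes the library was built with gfortran, which passes scalars by reference and appends a hidden length argument for each CHARACTER(*) dummy argument after the declared argument list (size_t on recent gfortran, int on older ones); the library path and HTTOT value are assumptions for illustration:

```python
import ctypes


def call_ezvollib(lib, voleqi, dbhob, httot):
    """Hypothetical wrapper for EZVOLLIB, assuming gfortran conventions."""
    vol = (ctypes.c_float * 15)()
    lib.ezvollib_(
        ctypes.c_char_p(voleqi),               # VOLEQI (must be bytes in Python 3)
        ctypes.byref(ctypes.c_float(dbhob)),   # DBHOB, passed by reference
        ctypes.byref(ctypes.c_float(httot)),   # HTTOT, the argument missing in the question
        vol,                                   # VOL(15); arrays decay to pointers
        ctypes.c_size_t(len(voleqi)),          # hidden length of VOLEQI (gfortran)
    )
    return list(vol)


# Usage (path from the question; untested assumption):
# lib = ctypes.CDLL("/home/bryce/Programming/opencompile/libvollib.so")
# volumes = call_ezvollib(lib, b"101DVEW119 ", 42.2, 70.0)
```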

Hashing a Python function

def funcA(x):
    return x
Is funcA.__code__.__hash__() a suitable way to check whether funcA has changed?
I know that funcA.__hash__() won't work, as it is the same as id(funcA) / 16. I checked, and this isn't true for __code__.__hash__(). I also tested the behaviour in an IPython terminal and it seemed to hold. But is this guaranteed to work?
Why
I would like to have a way of comparing an old version of function to a new version of the same function.
I'm trying to create a decorator for disk-based/long-term caching. Thus I need a way to identify if a function has changed. I also need to look at the call graph to check that none of the called functions have changed but that is not part of this question.
Requirements:
Needs to be stable over multiple calls and machines. 1 says that in Python 3.3 hash() is randomized on each start of a new instance, although it also says that "HASH RANDOMIZATION IS DISABLED BY DEFAULT". Ideally, I'd like a function that is stable even with randomization enabled.
Ideally, it would yield the same hash for def funcA(): pass and def funcB(): pass, i.e. when only the name of the function changes. Probably not necessary.
I only care about Python 3.
One alternative would be to hash the text inside the file that contains the given function.
Yes, it seems that func_a.__code__.__hash__() is tied to the specific code of the function. I could not find where __code__.__hash__() is implemented or defined.
A better choice would be func_a.__code__.co_code.__hash__(), because co_code holds the bytecode as a bytes string. Note that in this case the function name is not part of the hash, and two functions with the same code but the names func_a and func_b will have the same hash:
hash(func_a.__code__.co_code)
Source.
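A quick sanity check of that claim, plus two caveats worth knowing: co_code stores only bytecode, so constants (which live in co_consts) don't affect it, and hash() on bytes is salted per process (PYTHONHASHSEED), so for cross-machine stability hashlib is the safer tool:

```python
import hashlib


def func_a(x):
    return x


def func_b(x):
    return x


# Same bytecode under different names -> identical co_code.
assert func_a.__code__.co_code == func_b.__code__.co_code


# Caveat 1: literals live in co_consts, not co_code, so these two
# clearly different functions share the same bytecode bytes.
def one():
    return 1


def two():
    return 2


assert one.__code__.co_code == two.__code__.co_code

# Caveat 2: hash() of bytes is randomized per process; a hashlib digest
# over the bytecode plus the constants is stable across runs and machines.
digest = hashlib.sha256(
    func_a.__code__.co_code + repr(func_a.__code__.co_consts).encode()
).hexdigest()
print(digest)
```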

Why are some function parameters stored on the stack and some on the heap?

I'm reading "Gray Hat Python". This book teaches debugging techniques in which you can change the variable value through the debugger.
In the code below the author teaches us how to change the counter variable value. But I want to do more, so I add the 'Hello' parameter to the printf function so I can change it into something else like 'Bye'.
What I found through the debugger is that 'Hello' is stored on the heap, while the address at which 'Hello' is stored is saved on the stack; why?
My question is: on what basis are some parameters stored on the stack and some on the heap?
from ctypes import *
import time

msvcrt = cdll.msvcrt
counter = 99999999
while 1:
    msvcrt.printf("Loop iteration %d!\n", counter, "Hello")
    time.sleep(2)
    counter += 1
These things are defined in calling conventions (which is part of an ABI). A calling convention defines a few things, for instance:
where (stack or register) and how (in a single cell, spread over multiple cells, reference to heap) to store arguments,
the order (left-to-right or right-to-left) in which to store parameters,
who is responsible for cleaning up the stack after the call (caller or callee),
which registers should be preserved.
Over the years, a bunch of slightly different calling conventions have been used for 32-bit x86 processors (with names like cdecl, stdcall, and fastcall). For 64-bit x86 processors, there are essentially only two calling conventions (one is used by Microsoft, one is used by everyone else on the planet).
On 32-bit Windows, printf uses the cdecl convention. On 64-bit Windows, printf uses the calling convention from Microsoft's 64-bit ABI.
Much more information about calling conventions can be found in this answer.
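You can observe this split from Python itself: ctypes hands printf only a machine-word-sized pointer, while the character data sits wherever the bytes object was allocated (the heap, for CPython objects). A small illustration with no Windows dependency:

```python
import ctypes

msg = b"Hello"
arg = ctypes.c_char_p(msg)

# What actually travels in the argument list (a stack slot or a register,
# depending on the calling convention) is this small integer address...
address = ctypes.cast(arg, ctypes.c_void_p).value
print(hex(address))

# ...and the characters themselves live at that address, not in the
# argument list.
assert ctypes.string_at(address, len(msg)) == b"Hello"
```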
