How to diagnose why Ctrl+C is not stopping pserve - Python

I'm experimenting with porting a project from CherryPy to the Pyramid web framework. I have a small part converted and have noticed that Ctrl+C will not stop the Pyramid application, while a cookiecutter version stops with Ctrl+C as expected. I end up needing to kill the process every time.
I am serving with the pserve command, which uses the Waitress WSGI server, in both cases:
pserve development.ini
I should also note: I am running Debian Stretch in a VirtualBox VM.
Is there a way to know why the behavior has changed or how to restore Ctrl+C shutdown? How could I know if something is now blocking this from happening?
-- Additional Information asked for in comments --
Using grep Sig /proc/process_id/status yields the following:
SigQ: 0/15735
SigPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000001001000
SigCgt: 0000000180004002 (hex; in binary: 1 1000 0000 0000 0000 0100 0000 0000 0010)
Using GDB and getting a py-bt
(gdb) py-bt
Traceback (most recent call first):
<built-in method select of module object at remote 0x7f914f837e58>
File "/usr/lib/python3.5/asyncore.py", line 144, in poll
r, w, e = select.select(r, w, e, timeout)
File "/usr/lib/python3.5/asyncore.py", line 203, in loop
poll_fun(timeout, map)
File "/home/clutton/programs/python/webapps_pyramid/env/lib/python3.5/site-packages/waitress/server.py", line 131, in run
use_poll=self.adj.asyncore_use_poll,
File "/home/clutton/programs/python/webapps_pyramid/env/lib/python3.5/site-packages/waitress/__init__.py", line 17, in serve
server.run()
File "/home/clutton/programs/python/webapps_pyramid/env/lib/python3.5/site-packages/waitress/__init__.py", line 20, in serve_paste
serve(app, **kw)
File "/home/clutton/programs/python/webapps_pyramid/env/lib/python3.5/site-packages/paste/deploy/util.py", line 55, in fix_call
val = callable(*args, **kw)
File "/home/clutton/programs/python/webapps_pyramid/env/lib/python3.5/site-packages/paste/deploy/loadwsgi.py", line 189, in server_wrapper
**context.local_conf)
File "/home/clutton/programs/python/webapps_pyramid/env/lib/python3.5/site-packages/pyramid/scripts/pserve.py", line 239, in run
server(app)
File "/home/clutton/programs/python/webapps_pyramid/env/lib/python3.5/site-packages/pyramid/scripts/pserve.py", line 32, in main
return command.run()
File "/home/clutton/programs/python/webapps_pyramid/env/bin/pserve", line 11, in <module>
sys.exit(main())

In order to diagnose where I was running into issues I took the following steps guided by many of the comments made on the question.
I inspected the process to ensure that signals were indeed being caught.
grep Sig /proc/process_id/status
Which yields the following information:
SigQ: 0/15735
SigPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000001001000
SigCgt: 0000000180004002 (hex; in binary: 1 1000 0000 0000 0000 0100 0000 0000 0010)
SigCgt lists the signals the process has handlers registered for. Converting the hex value to binary and reading the bits from right to left shows that bits 2 and 15 are set, so SIGINT (2) and SIGTERM (15) were indeed bound.
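That decoding can be scripted. Here is a small sketch of my own (not part of the original diagnosis) that turns a SigCgt mask from /proc into a list of signal numbers:

```python
def caught_signals(sigcgt_hex):
    """Decode a SigCgt mask from /proc/<pid>/status.

    Bit n-1 being set means a handler is registered for signal n.
    """
    mask = int(sigcgt_hex, 16)
    return [n for n in range(1, 65) if mask & (1 << (n - 1))]

# The mask from the question: SIGINT (2) and SIGTERM (15) are caught;
# 32 and 33 are real-time signals reserved by glibc's threading runtime.
print(caught_signals("0000000180004002"))  # [2, 15, 32, 33]
```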
At this point I needed to diagnose why a handler existed yet appeared not to work. The remaining question was: what is the handler? To find out, I used the Python signal module and added some code where I could inspect it in a debugger...
import signal
s = signal.getsignal(2)
Once I did that, I found that the handler referenced a function from a standalone script that is part of the project. That script overrides the default signal handlers in order to do cleanup before terminating its process, but I was also importing it into a part of this project that runs as its own process. Since the project is normally developed on Windows, Ctrl+C was probably delivering different signals there, so this bug had existed for a long time; doing some Linux development work on the project brought it to light.
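If the inspection shows a foreign handler, one way to confirm it is the culprit is to restore Python's default Ctrl+C behavior after the offending import. This is a generic sketch, not the project's actual fix:

```python
import signal

# Inspect the current SIGINT handler; a function from an unrelated
# module here is a strong hint the handler was hijacked on import.
handler = signal.getsignal(signal.SIGINT)
print(handler)

# Restore the default handler, which raises KeyboardInterrupt,
# so Ctrl+C once again stops the process.
signal.signal(signal.SIGINT, signal.default_int_handler)
```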

Related

Pyinstaller not allowing multiprocessing with MacOS

I have a python file that I would like to package as an executable for MacOS 11.6.
The python file (called Service.py) relies on one other json file and runs perfectly fine when running with python. My file uses argparse as the arguments can differ depending on what is needed.
Example of how the file is called with python:
python3 Service.py -v Zephyr_Scale_Cloud https://myurl.cloud/ philippa@email.com password1 group3
The file is run in exactly the same way when it is an executable:
./Service.py -v Zephyr_Scale_Cloud https://myurl.cloud/ philippa@email.com password1 group3
I can package the file using PyInstaller and the executable runs.
Command used to package the file:
pyinstaller --paths=/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ Service.py
However, when I get to the point that requires multiprocessing, the arguments get lost. My second argument (here noted as https://myurl.cloud) is a URL that I require.
The error I see is:
[MAIN] Starting new process RUNID9157
url before constructing the client recognised as pipe_handle=15
usage: Service [-h] test_management_tool url
Service: error: the following arguments are required: url
Traceback (most recent call last):
File "urllib3/connection.py", line 174, in _new_conn
File "urllib3/util/connection.py", line 72, in create_connection
File "socket.py", line 954, in getaddrinfo
I have done some logging, and the URL does get read correctly. But as soon as the new process starts and picks up what it needs, the URL is replaced by 'pipe_handle=x'; in the log above it is pipe_handle=15.
I need the url to retrieve an authentication token, but it just stops being read as the correct value and is changed to this pipe_handle value. I have no idea why.
Has anyone else seen this?!
I am using Python 3.9, PyInstaller 4.4 and ArgParse.
I have also added
if __name__ == "__main__":
    if sys.platform.startswith('win'):
        # On Windows - multiprocessing is different to Unix and Mac.
        multiprocessing.freeze_support()
to my if __name__ == "__main__" section as I saw this on other posts, but it doesn't help.
Can someone please assist?
Sending commands via sys.argv is complicated by the fact that multiprocessing's "spawn" start method uses that to pass the file descriptors for the initial communication pipes between the parent and child.
I'm projecting here a little, because you did not share the code showing how/where you call argparse and how/where you start multiprocessing.
If you are parsing args outside of if __name__ == "__main__":, the args may get parsed (re-parsed when the child imports __main__) before sys.argv is automatically cleaned up by multiprocessing.spawn.prepare() in the child. You should be able to fix this by moving the argparse stuff inside your target function. It may be even easier to parse the args in the parent and simply pass the parsed results as an argument to the target function. See this answer of mine for further discussion of sys.argv with multiprocessing.
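A minimal sketch of that second arrangement (the function names are hypothetical, since the original code wasn't shared): parse exactly once in the parent, inside the __main__ guard, and hand the parsed namespace to the child instead of letting the child re-read sys.argv:

```python
import argparse
import multiprocessing as mp


def parse_cli(argv=None):
    """Parse the command line; argv=None means use sys.argv in real use."""
    parser = argparse.ArgumentParser(prog="Service")
    parser.add_argument("test_management_tool")
    parser.add_argument("url")
    return parser.parse_args(argv)


def worker(args, q):
    # The child receives the already-parsed namespace and never touches
    # sys.argv, so multiprocessing's pipe_handle bookkeeping can't leak in.
    q.put(args.url)


if __name__ == "__main__":
    import sys
    # Stand-in argv for demonstration; real use would just call parse_cli().
    demo = sys.argv[1:] or ["Zephyr_Scale_Cloud", "https://myurl.cloud/"]
    args = parse_cli(demo)
    q = mp.Queue()
    p = mp.Process(target=worker, args=(args, q))
    p.start()
    print(q.get())
    p.join()
```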

"No child processes" error thrown by OCaml Sys.command function

I am trying to use Frama-C from a Python application. This application sets some environment variables and the system path. From it, I launch Frama-C as a subprocess as follows:
cmd = ['/usr/local/bin/frama-c', '-wp', '-wp-print', '-wp-out', '/home/user/temp','/home/user/project/test.c']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=False)
When this code is executed from Python application I am getting following error:
[kernel] Parsing FRAMAC_SHARE/libc/__fc_builtin_for_normalization.i (no preprocessing)
[kernel] warning: your preprocessor is not known to handle option ` -nostdinc'. If pre-processing fails because of it, please add -no-cpp-gnu-like option to Frama-C's command-line. If you do not want to see this warning again, use explicitely -cpp-gnu-like option.
[kernel] warning: your preprocessor is not known to handle option `-dD'. If pre-processing fails because of it, please add -no-cpp-gnu-like option to Frama-C's command-line. If you do not want to see this warning again, use explicitely -cpp-gnu-like option.
[kernel] Parsing 2675891095.c (with preprocessing)
[kernel] System error: /usr/bin/gcc -c -C -E -I. -dD -nostdinc -D__FC_MACHDEP_X86_32 -I/usr/local/share/frama-c/libc -o '/tmp/2675891095.cc8bf16.i' '/home/user/project/test.c': No child processes
I am finding it hard to debug what is causing the error:
System error: /usr/bin/gcc -c -C -E -I. -dD -nostdinc -D__FC_MACHDEP_X86_32 -I/usr/local/share/frama-c/libc -o '/tmp/2675891095.cc8bf16.i' '/home/user/project/test.c': No child processes
Is there a way to generate more error log from Frama-c that might help me figure out the issue?
Note that this error only occurs when I start the process (to execute Frama-C) from my application, not when I start it from a Python console. And it happens only on a Linux machine, not on Windows.
Any help is appreciated. Thanks!!
Update :
I realized that by using the -kernel-debug flag I can obtain a stack trace. So I tried the option and got the following information:
Fatal error: exception Sys_error("gcc -E -C -I. -dD -D__FRAMAC__ -nostdinc -D__FC_MACHDEP_X86_32 -I/usr/local/share/frama-c/libc -o '/tmp/2884428408.c2da79b.i' '/home/usr/project/test.c': No child processes")
Raised by primitive operation at file "src/kernel_services/ast_queries/file.ml", line 472, characters 9-32
Called from file "src/kernel_services/ast_queries/file.ml", line 517, characters 14-26
Called from file "src/kernel_services/ast_queries/file.ml", line 703, characters 46-59
Called from file "list.ml", line 84, characters 24-34
Called from file "src/kernel_services/ast_queries/file.ml", line 703, characters 17-76
Called from file "src/kernel_services/ast_queries/file.ml", line 1587, characters 24-47
Called from file "src/kernel_services/ast_queries/file.ml", line 1667, characters 4-27
Called from file "src/kernel_services/ast_data/ast.ml", line 108, characters 2-28
Called from file "src/kernel_services/ast_data/ast.ml", line 116, characters 53-71
Called from file "src/kernel_internals/runtime/boot.ml", line 29, characters 6-20
Called from file "src/kernel_services/cmdline_parameters/cmdline.ml", line 787, characters 2-9
Called from file "src/kernel_services/cmdline_parameters/cmdline.ml", line 817, characters 18-64
Called from file "src/kernel_services/cmdline_parameters/cmdline.ml", line 228, characters 4-8
Re-raised at file "src/kernel_services/cmdline_parameters/cmdline.ml", line 244, characters 12-15
Called from file "src/kernel_internals/runtime/boot.ml", line 72, characters 2-127
I looked at "src/kernel_services/ast_queries/file.ml", line 472; the code executed there is Sys.command cpp_command.
I am not sure why a "No child processes" error is thrown when trying to execute gcc.
Update: I have OCaml version 4.02.3, Python version 2.7.8 and Frama-C version Silicon-20161101.
I know nothing about Frama-C. However, the error message is coming from somebody's (OCaml's? Python's?) runtime, indicating that a system call failed with the ECHILD error. The two system calls that system() makes are fork() and waitpid(). It's the latter system call that can return ECHILD. What it means is that there's no child process to wait for. One good possibility is that the fork() failed. fork() fails when the system is full of processes (unlikely) or when a per-user process limit has been reached. You could check whether you're running up against a limit of this kind.
Another possibility that occurs to me is that some other part of the code is already handling child processes using signal handling (SIGCHLD). So the reason there's no child process to wait for is that it has already been handled elsewhere. This gets complicated pretty fast, so I would hope this isn't the problem.
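That second possibility is easy to reproduce in isolation. The following Unix-only sketch (my illustration, unrelated to Frama-C's actual code) shows that ignoring SIGCHLD makes waitpid() fail with ECHILD, the very "No child processes" error above:

```python
import os
import signal


def spawn_and_wait():
    """Fork a child that exits immediately, then try to reap it."""
    pid = os.fork()
    if pid == 0:
        os._exit(0)            # child: exit right away
    try:
        os.waitpid(pid, 0)     # parent: reap the child
        return "reaped"
    except ChildProcessError:  # errno ECHILD: "No child processes"
        return "ECHILD"


# Default disposition: the parent reaps its child normally.
print(spawn_and_wait())

# With SIGCHLD ignored, the kernel auto-reaps children as they exit,
# so there is nothing left for waitpid() to collect.
signal.signal(signal.SIGCHLD, signal.SIG_IGN)
print(spawn_and_wait())
```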

BlockingSwitchOutError in debugger after upgrading to PyCharm 2016.2

After upgrading from PyCharm 2016.1.4 to 2016.2, when running the debugger and setting any breakpoint, PyCharm halts in various places where I have no breakpoint set, and logs this to stderr:
Traceback (most recent call last):
File "/usr/local/pycharm/debug-eggs/pycharm-debug.egg/_pydevd_bundle/pydevd_frame.py", line 539, in trace_dispatch
self.do_wait_suspend(thread, frame, event, arg)
File "/usr/local/pycharm/debug-eggs/pycharm-debug.egg/_pydevd_bundle/pydevd_frame.py", line 71, in do_wait_suspend
self._args[0].do_wait_suspend(*args, **kwargs)
File "/usr/local/pycharm/debug-eggs/pycharm-debug.egg/pydevd.py", line 714, in do_wait_suspend
time.sleep(0.01)
File "/home/jaza/mypyapp/mypyfile.py", line 999, in mypyfunc
gevent.sleep(seconds)
File "/usr/local/lib/python2.7/dist-packages/gevent/hub.py", line 194, in sleep
hub.wait(loop.timer(seconds, ref=ref))
File "/usr/local/lib/python2.7/dist-packages/gevent/hub.py", line 630, in wait
result = waiter.get()
File "/usr/local/lib/python2.7/dist-packages/gevent/hub.py", line 878, in get
return self.hub.switch()
File "/usr/local/lib/python2.7/dist-packages/gevent/hub.py", line 608, in switch
switch_out()
File "/usr/local/lib/python2.7/dist-packages/gevent/hub.py", line 612, in switch_out
raise BlockingSwitchOutError('Impossible to call blocking function in the event loop callback')
BlockingSwitchOutError: Impossible to call blocking function in the event loop callback
OS: Linux Mint 17.3 (i.e. almost identical to Ubuntu 14.04). Using latest gevent (1.1.2).
If I open my old PyCharm (i.e. 2016.1.4), and do the same thing - i.e. start the debugger, set a breakpoint, run my app - I don't get these errors, and PyCharm doesn't halt anywhere in the code except at my breakpoint.
I also tried just "downgrading" the debugger, by renaming the debug-eggs directory and replacing it with a symlink to the old debug-eggs path, and then running the rest of PyCharm on the latest version. This didn't fix the problem, i.e. it still resulted in BlockingSwitchOutError being raised numerous times.
Seems likely that this is a bug in PyCharm 2016.2. I've submitted a bug report to JetBrains for this, see https://youtrack.jetbrains.com/issue/PY-20183 . But posting here on SO as well, in case anyone sees a problem with the code in my app (the use of gevent.sleep(seconds) ?), meaning that the code happened to work before, but was going to break sooner or later.
The solution posted by Elizaveta Shashkova on the PyCharm issue tracker at https://youtrack.jetbrains.com/issue/PY-20183 worked for me:
The new feature has appeared in PyCharm: breakpoint thread suspend policy. You should go Run | View breakpoints, select the breakpoint and change its threads suspend policy: "Thread" or "All". Also you can set the default policy for all your breakpoints.
After changing suspend policy from "All" to "Thread", debugger is no longer breaking outside of my breakpoints nor throwing BlockingSwitchOutError.
And, re:
Also do you have the setting "gevent compatible" turned on? https://www.jetbrains.com/help/pycharm/2016.1/python-debugger.html
No, I don't have this turned on, and I fixed my issue without turning it on. But will try turning it on if I have similar issues in future.

What is Hudson doing with my Python script?

I am developing a Python script (hereafter "the script") that searches through large log files for a search string. In normal use, it is called from a Hudson front end.
About the Hudson interface:
The Hudson job creates a temporary batch file on a connected virtual machine (VM) that calls the script and passes it some parameters. We have had hundreds of successful instances using this setup, but something is now creating an error.
About the script:
The log files are contained in dozens of compressed .tgz files. My script searches each log in each .tgz file.
One of the command line args that my script accepts is a True/False parameter called PROCESS_IN_PARALLEL. If PROCESS_IN_PARALLEL is set to True, then each .tgz file is searched in its own worker process (using the multiprocessing module). If PROCESS_IN_PARALLEL is set to False, then each .tgz file is searched in sequence (using a loop).
What works:
I have a batch file on the VM that I use for testing my script. I can successfully use this .bat to call my script with PROCESS_IN_PARALLEL set to either (1a) True or (1b) False. Of course, it runs much faster when True (about 4x faster).
I have quadruple-checked that this .bat passes the same parameters to my script as Hudson, and in the same order. I have also added a line to my script that logs the input parameters to a file, and have found that Hudson is indeed passing the correct parameters in the correct order.
I can successfully use Hudson to call my script with PROCESS_IN_PARALLEL set to False.
I have tested the current iteration of my script using the above three test cases multiple times (even using multiple configurations of other parameters), all successfully.
What doesn't work:
If I use Hudson to call my script with PROCESS_IN_PARALLEL set to True, then I get a strange error. Here is the traceback:
Traceback (most recent call last):
File "F:\Scripts\Parse_LogFiles_Archive\parseLogs_Archive_8-19-13.py", line 40, in main
searchHits = searchTarList(searchDir, newDirectory, argv)
File "F:\Scripts\Parse_LogFiles_Archive\parseLogs_Archive_8-19-13.py", line 163, in searchTarList
hits = processPool.map(searchTar, tarMap)
File "E:\Python27\lib\multiprocessing\pool.py", line 225, in map
return self.map_async(func, iterable, chunksize).get()
File "E:\Python27\lib\multiprocessing\pool.py", line 522, in get
raise self._value
IOError: [Errno 9] Bad file descriptor
According to my research, this error happens when Python attempts to read a file which is opened in write mode.
My question:
Is there a genius out there who knows both Python and Hudson well enough to know what is happening?

Using Ladon in Python 2.6 on windows

I have been trying to create a web service out of some Python scripts and haven't had much luck. I am new to web services in general, but would really like to get this figured out. I'm on Windows 7 and use IIS7. The service also needs to be SOAP.
I've read through most posts that have anything to do with Python and SOAP and tried out pretty much all the different libraries, but most of them just seem over my head (especially ZSI/SOAPpy). The Ladon library seems like it would be best (and simplest) for what I need, but the tutorial http://www.youtube.com/watch?v=D_JYjEBedk4&feature=feedlik loses me at 5:10 when he brings it to the server. When I run ladon2.6ctl at the command prompt, Windows seems to get quite confused. I'm guessing it is a little different because he is running on Linux and using Apache.
With that, any tips on how to get a python web service running on Microsoft 'stuff' would be greatly appreciated, as I have been trying to figure this stuff out for way too long.
One thing to note is the reason things are so specific (and seemingly strange) is because the scripts I have do a lot of geoprocessing with ESRI's "arcpy".
--Addition--
Traceback on localhost:8080/TestService:
Traceback (most recent call last):
File "c:\Python26\ArcGIS10.0\lib\site-packages\ladon-0.5.1-py2.6.egg\ladon\server\wsgi_application.py", line 229, in __call__
exec("import %s" % ','.join(self.service_list))
File "<string>", line 1, in <module>
File "c:\Users\r\workspace\ladon\src\testspace.py", line 3, in <module>
class TestService2(object):
File "c:\Users\r\workspace\ladon\src\testspace.py", line 4, in TestService2
#ladonize(int,int,rtype=int)
File "c:\Python26\ArcGIS10.0\lib\site-packages\ladon-0.5.1-py2.6.egg\ladon\ladonizer\decorator.py", line 87, in decorator
ladon_method_info = global_service_collection().add_service_method(f,*def_args,**def_kw)
File "c:\Python26\ArcGIS10.0\lib\site-packages\ladon-0.5.1-py2.6.egg\ladon\ladonizer\collection.py", line 119, in add_service_method
sinfo = self.source_info(src_fname)
File "c:\Python26\ArcGIS10.0\lib\site-packages\ladon-0.5.1-py2.6.egg\ladon\ladonizer\collection.py", line 79, in source_info
a = ast.parse(src)
File "c:\Python26\ArcGIS10.0\lib\ast.py", line 37, in parse
return compile(expr, filename, mode, PyCF_ONLY_AST)
File "<unknown>", line 1
from ladon.ladonizer import ladonize
^
SyntaxError: invalid syntax
sample code:
from ladon.ladonizer import ladonize

class TestService2(object):
    @ladonize(int, int, rtype=int)
    def sum(self, a, b):
        '''add two numbers
        param a: number 1
        param b: number 2
        rtype: sum of result
        '''
        return a + b
I must admit I normally use Linux for almost everything and I haven't tried Ladon on Windows for a while. I will spin up my windows installation later today and see if there is any trouble.
You wrote that ladon2.6ctl gets confused. Do you have an exception traceback?
To summarize the fix for anyone else interested: delete the syslog import from these 3 Ladon modules:
ladon/interfaces/jsonwsp.py - line 6
ladon/dispatcher/dispatcher.py - line 7
ladon/server/wsgi_application.py - line 37
Then change the line endings from Windows' default of \r\n to \n. In Eclipse, go to Window -> Preferences -> General, then select (not drop down) the Workspace tab. On the bottom right, select "other" under "New text file line delimiter" and change it to Unix.
