I'm obtaining the PID of a CGI script in Python, but the PID is not valid, i.e. I can't Taskkill it from the command line. I get:
"Process: no process found with pid xxxx" where xxxx is the pid
I thought maybe I have to kill a parent Python shell instance, but os.getppid() doesn't work on Windows.
So I installed the psutil module, which does give me the parent PID, but the parent turns out to be the web server itself (Abyss). I don't think I want to kill that, since it is the HTTP process that runs constantly, not just a CGI interpreter instance.
Using psutil I CAN get the process status of the actual script via the PID returned by os.getpid(), and see that it is running, so the PID works for retrieving process information. But that gets me no closer to a PID I can use to kill the script, either with Taskkill on the command line or with kill() from psutil!
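For reference, a minimal Python 3 sketch of the inspection described above, assuming psutil is installed (in current psutil, status(), parent(), and name() are methods; older versions exposed some of these as properties):

import os
import psutil

me = psutil.Process(os.getpid())           # the CGI script's own process
print("script PID:", me.pid, me.status())  # reports "running", as described
parent = me.parent()                       # resolves to the web server (Abyss)
print("parent PID:", parent.pid, parent.name())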
What exactly is a CGI script from a process perspective, and if it is not a process, why does os.getpid() return a PID?
Why are you sure that your CGI script is still running when you try to kill it? The web server starts one instance of the CGI script per request, and when the script finishes, it... just finishes. By the time you run Taskkill, that process is already gone, which is why the PID appears invalid.
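To illustrate with a minimal sketch (not your actual script): the PID is perfectly real while the request is being handled; it simply stops existing the moment the response is sent.

#!/usr/bin/env python
# A CGI script is an ordinary short-lived process: os.getpid() is valid
# while the request is being handled, but the process exits as soon as
# the script returns, so the PID is stale by the time Taskkill runs.
import os

print("Content-Type: text/plain")
print("")
print("PID while handling this request: %d" % os.getpid())
# When this script ends, the interpreter process ends with it.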
Related
I have a script that activates a Python venv and runs a server in the background. I capture the PID when I start the process and kill that PID once I am done, but the process does not always get killed.
My question is: can I run the process under a name and then kill it with pkill name afterwards? What would that look like?
#!/bin/sh
ROOT_DIR=$(pwd)
activate(){
    # '.' is the POSIX spelling of bash's 'source'
    . "$ROOT_DIR/.venv/bin/activate"
    python3 src/server.py -l & pid=$!  # <== This is the process
    python3 src/client.py localhost 8080
}
activate
sleep 10
kill "$pid"
printf "\n\nServer is done, terminating processes..."
You can run a program under a specific command name by using the bash builtin exec. Note that exec replaces the shell with the command, so you have to run it in a subshell environment:
( exec -a my_new_name my_old_command ) &
However, it probably won't help you much, because this sets the command-line name, which is apparently different from the command name. Executing the above snippet will show your process as "my_new_name" in top or htop, for example, but pkill and killall filter by the command name and will thus not find a process called "my_new_name". (pkill -f, which matches against the full command line, would find it.)
While it is interesting how one can start a command under a different name than its executable, that is most likely not the cause of your problem. PIDs never change, so I assume the problem lies elsewhere.
My best guess is that the server binds a socket to listen on a specific port. If the program is killed rather than shut down gracefully, the port can remain occupied (in the kernel's TIME_WAIT state) and is only freed after a timeout. If the program is restarted within that window, it finds the port already occupied and prints a misleading message saying it is already running. If that is indeed the cause of your problem, I would strongly consider implementing a graceful shutdown for the server (perhaps already closing the socket in a destructor or something similar would help).
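If that is the cause, here is a minimal sketch of the usual server-side fix, assuming src/server.py creates its own listening socket (host and port taken from the client invocation above):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Without SO_REUSEADDR, a killed server can leave the port in TIME_WAIT,
# and a quick restart fails with "Address already in use".
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("localhost", 8080))
sock.listen(5)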
I think you should use systemd for this case:
https://github.com/torfsen/python-systemd-tutorial
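For example, a minimal user-level unit in the spirit of that tutorial might look like this (all paths are placeholders for your project layout):

[Unit]
Description=Example Python server

[Service]
WorkingDirectory=/path/to/project
ExecStart=/path/to/project/.venv/bin/python3 src/server.py -l
Restart=on-failure

[Install]
WantedBy=default.target

With a unit like this, systemd tracks the PID for you, so stopping the server becomes systemctl --user stop <unit> instead of hand-managed kill.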
I am using pexpect to run a start command on an in-house application. The start command starts a number of processes. While the processes are starting one by one in the background, everything looks good, but when the 'start' process finishes and the pexpect session ends, the processes that were started also die.
child = pexpect.spawn('foo start')
child.logfile = log
child.wait()
For this scenario, I can use nohup and it works as expected.
child = pexpect.spawn('bash -c "nohup foo start"')
However, there is also an installer for the same in-house application with the same issue: part of the installation is to start the processes. The installer is interactive and requires input, so nohup will not work.
How can I prevent the processes that are started by the installer from dying when the pexpect session ends?
Note: The start and install processes work fine when executed from a standard terminal session. They are not tied to the session in any way.
I couldn't find much in the documentation about it, but including the "ignore_sighup=True" option in the spawn command fixed my issue.
child = pexpect.spawn('foo start', ignore_sighup=True)
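For the interactive installer, the same option should apply; here is a sketch under the assumption that the installer prompts for input ('foo install' and the prompt text are placeholders, not the real in-house installer):

import pexpect

child = pexpect.spawn('foo install', ignore_sighup=True)
child.expect('Install directory:')  # hypothetical prompt
child.sendline('/opt/foo')
child.expect(pexpect.EOF)           # let the installer run to completion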
The task is to use Python to run a remote process in the background and immediately close the SSH session.
I have a remote script named 'start' under server:PATH/. The start script does nothing but launch a long-lived background program; it has one line:
nohup PATH/Xprogram &
When I use the Python subprocess module to call my remote 'start' script, it does start OK. The issue is that the SSH connection seems to persist: I keep getting stdout from the remote Xprogram (since it is a long-lived program that writes to stdout). Does this indicate the SSH connection is still there?
All I need is to call the start script without blocking and then forget about it (leave the long-lived program running, close the SSH session, release resources).
My Python call looks like this:
ret = subprocess.Popen(["ssh", "xxx@servername", "PATH/start"])
If I use ret.terminate() after the command, it kills the long-lived program too.
I have also tried the spur module; basically the same thing happens.
==== update ====
@Dunes' answer solves the problem. Based on his answer, I did more digging and found this link very helpful.
My understanding is: if any file descriptor is still held by your process (e.g. stdout held by my XProgram), the SSH session won't exit. Redirecting stdout/stderr to /dev/null effectively closes those file descriptors and lets the SSH session exit normally.
Solution:
ret = subprocess.Popen(["ssh", "xxx@servername", "PATH/start >/dev/null 2>&1"])
After playing about a bit I found that nohup doesn't seem to properly disconnect the child process from the parent ssh session (as it should). This means you have to manually close stdout or point it at a file, e.g.
Using bash:
ssh user@host "nohup PATH/XProgram >&- &"
Shell agnostic (as far as I know):
ssh user@host "nohup PATH/XProgram >/dev/null 2>&1 &"
In Python:
from shlex import split
from subprocess import Popen
p = Popen(split('ssh user@host "nohup PATH/XProgram >&- &"'))
p.communicate() # returns (None, None)
Try
subprocess.Popen(["ssh", "xxx#servername", "nohup PATH/start & disown"])
For me,
subprocess.Popen(["ssh", "xxx#servername", "nohup sleep 1000 & disown"])
lets my script exit immediately while leaving sleep running on the server awhile.
When your script dies, an ssh process is left on your system, but killing it doesn't kill the remote process.
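Put together, the whole pattern might look like this (a sketch; servername and PATH are the placeholders from the question, and disown assumes the remote login shell is bash):

import subprocess

# nohup + disown + redirect: the remote program outlives the ssh client.
ret = subprocess.Popen(
    ["ssh", "xxx@servername", "nohup PATH/start >/dev/null 2>&1 & disown"]
)
ret.wait()  # ssh exits promptly because no file descriptor is held open
# The remote long-lived program keeps running on the server.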
Let me start with what I'm really trying to do. We want a platform-independent startup script for invoking a JVM with some system properties and a dynamically generated classpath. We picked Jython in particular because we only need to depend on the standalone jython.jar in our startup script. We decided we could write a Jython script that uses subprocess.Popen to launch our application's JVM and then terminate.
One more thing: our application uses a lot of legacy debug code that prints to standard out, so the startup script has typically redirected stdout/stderr to a log file. I attempted to reproduce that in our Jython script like this:
subprocess.Popen(args, stdout=logFile, stderr=logFile)
After this line the launcher script and the JVM hosting Jython terminate. The problem is that nothing shows up in the logFile. If I instead do this:
subprocess.Popen(args, stdout=logFile, stderr=logFile).wait()
then we get logs. So does the parent process need to keep running in parallel with the application process launched via subprocess? I want to avoid having two JVMs running.
Can you invoke subprocess in such a way that the stdout file will be written even if the parent process terminates? Is there a better way to launch the application JVM from Jython? Is Jython a bad solution anyway?
We want a platform-independent startup script for invoking a JVM with some system properties and a dynamically generated classpath.
You could use a platform-independent script to generate a platform-specific startup script, either at installation time or before each invocation. In the latter case you additionally need a simple static platform-specific script that invokes the startup-script-generating script and then the generated script itself. In both cases you start your application by calling a static platform-specific script.
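A minimal sketch of the generating script (the main class, jar paths, and system property are placeholders, not your real application):

import os

classpath = os.pathsep.join(["lib/app.jar", "lib/app-deps.jar"])  # hypothetical jars
cmd = 'java -Dapp.home="%s" -cp "%s" com.example.Main' % (os.getcwd(), classpath)

if os.name == "nt":
    # Windows launcher
    with open("start.bat", "w") as f:
        f.write("@echo off\r\n" + cmd + " %*\r\n")
else:
    # POSIX launcher
    with open("start.sh", "w") as f:
        f.write("#!/bin/sh\nexec " + cmd + ' "$@"\n')
    os.chmod("start.sh", 0o755)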
Can you invoke subprocess in such a way that the stdout file will be written even if the parent process terminates?
You could open the file/redirect in the child process, e.g., using the shell:
from subprocess import Popen

Popen(' '.join(args + ['>', 'logFile', '2>&1']),  # shell-specific command line
      shell=True)  # on Windows, see _cmdline2list to understand what is going on
I've got a Python script managing a gdb process on Windows, and I need to be able to send a SIGINT to the spawned process in order to halt the target process (managed by gdb).
It appears that only SIGTERM is available in Win32, but clearly if I run gdb from the console and press Ctrl+C, it thinks it is receiving a SIGINT. Is there a way I can fake this so that the functionality is available on all platforms?
(I am using the subprocess module and Python 2.5/2.6.)
Windows doesn't have the Unix signals IPC mechanism.
I would look at sending a CTRL-C to the gdb process.
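A sketch of that approach on a modern Python (signal.CTRL_BREAK_EVENT and subprocess.CREATE_NEW_PROCESS_GROUP exist on Windows in Python 3.x, so this is not a drop-in for 2.5/2.6; there you would reach for ctypes and GenerateConsoleCtrlEvent instead):

import signal
import subprocess
import sys
import time

if sys.platform == "win32":
    # Start gdb in its own process group so the console control event
    # reaches only the child, not this script.
    proc = subprocess.Popen(
        ["gdb", "--args", "target.exe"],  # placeholder target
        creationflags=subprocess.CREATE_NEW_PROCESS_GROUP,
    )
    time.sleep(5)
    # CTRL_BREAK_EVENT can be delivered to a process group; whether gdb
    # treats it exactly like Ctrl+C is an assumption to verify.
    proc.send_signal(signal.CTRL_BREAK_EVENT)
else:
    proc = subprocess.Popen(["gdb", "--args", "./target"])
    time.sleep(5)
    proc.send_signal(signal.SIGINT)  # POSIX: a real SIGINT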