I'm running a rather touchy .exe file from Python to take a couple of data measurements. The file should open, take the measurement, then close. The problem is that it sometimes crashes, and I need to be able to take these measurements every 10 minutes over a long period of time.
What I need is a check to see whether the .exe is not responding and, if it's not, to kill the process - or simply to kill the whole script after every measurement is taken. The trouble is that the script gets stuck when it tries to run the .exe file while the .exe is not responding.
Here's the script:
import os
import subprocess

FNULL = open(os.devnull, 'a')
filename = "current_pressure.log"
command = '"*SRH#\r"'
args = "httpget -r -o " + filename + " -C 2 -S " + command + IP  # IP is defined elsewhere in the script
subprocess.call(args, stdout=FNULL, stderr=FNULL, shell=False)
Basically, need something like:
"if httpget.exe not responding, then kill process"
OR
"kill above script if running after longer than 20 seconds"
Use a timer to kill the process if it's gone on too long. Here I've got two timers, one for a graceful and one for a hard termination, but you can go straight to the kill if you want.
import os
import subprocess
import threading

FNULL = open(os.devnull, 'a')
filename = "current_pressure.log"
command = '"*SRH#\r"'
args = "httpget -r -o " + filename + " -C 2 -S " + command + IP  # IP is defined elsewhere in the script
proc = subprocess.Popen(args, stdout=FNULL, stderr=FNULL, shell=False)
nice = threading.Timer(20, proc.terminate)  # graceful termination after 20 s
nice.start()
mean = threading.Timer(22, proc.kill)  # hard kill 2 s later if it is still alive
mean.start()
proc.wait()
nice.cancel()
mean.cancel()
Generally, when a program hangs on Windows, we go to Task Manager and end that program's process. When this approach fails, we experiment with third-party software to terminate it. However, there is an even better way to terminate such hung programs automatically:
http://www.problogbooster.com/2010/01/automatically-kills-non-responding.html
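On Python 3.3+, the timeout built into subprocess.run offers a simpler route to the same 20-second cutoff. A sketch, with a deliberately hung Python one-liner standing in for the httpget command from the question (swap in the real httpget argument string):

```python
import subprocess
import sys

# Stand-in for the flaky httpget call: a child that sleeps far too long.
hung = False
try:
    subprocess.run(
        [sys.executable, "-c", "import time; time.sleep(30)"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        timeout=2,  # use 20 for the real measurement run
    )
except subprocess.TimeoutExpired:
    hung = True  # run() has already killed the child by the time we get here
print("timed out:", hung)
```

This collapses the two-timer dance into one call: run() kills the child itself and raises TimeoutExpired if the deadline passes.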
I've got a python script here which downloads a jar file from github, executes it, waits for 3 minutes and is supposed to kill the process.
Note that this works perfectly fine on windows, but somehow the script is not doing what I need it to do on Ubuntu. The jar file indeed does what I need it to do, however after that, the python script does not continue.
To summarise, nothing after p2 = subprocess.Popen(["java", "-jar", "serverstarter-2.0.1.jar", "&"], stdout=subprocess.DEVNULL) gets run. Not even time.sleep(180).
So what I am trying to figure out is why executing the jar file within the script seemingly "stalls" the script.
Note that another python script calls this script in this way subprocess.run(["python3", p, "&"], stdout=subprocess.DEVNULL).
Here is the code:
wget.download("https://github.com/AllTheMods/alltheservers/releases/download/2.0.1/serverstarter-2.0.1.jar", bar=None)
p1 = subprocess.run(["chmod", "+x", "serverstarter-2.0.1.jar"], stdout=subprocess.DEVNULL)
p2 = subprocess.Popen(["java", "-jar", "serverstarter-2.0.1.jar", "&"], stdout=subprocess.DEVNULL)
time.sleep(180)
p2.kill()
try:
    log_files = glob(src + "**/*.log")
    for files in map(str, log_files):
        os.remove(files)
    zip_files = glob(src + "**/*.zip")
    for files in map(str, zip_files):
        os.remove(files)
    startserver_files = glob(src + "**/startserver.*")
    for files in map(str, startserver_files):
        os.remove(files)
    serverstarter_files = glob(src + "**/serverstarter*.*")
    for files in map(str, serverstarter_files):
        os.remove(files)
    files_to_move = glob(src + "**/*")
    for files in map(str, files_to_move):
        shutil.move(files, dest)
    time.sleep(20)
    forge_jar_file = glob(dest + "forge-*.jar")
    for files in map(str, forge_jar_file):
        print(files)
        os.rename(files, "{}{}".format(dest, "atm6.jar"))
except Exception as e:
    post_to_slack(f"Error occured in {os.path.basename(__file__)}! {e}")
    quit()
If you are on Linux, type this in a terminal:
ps -ef | grep -i java
and check the process ID of your jar file, then kill it using the pkill command.
Quote from the kill() method documentation:
https://pythontic.com/multiprocessing/process/kill
When invoked, kill() uses SIGKILL and terminates the process. When SIGKILL is used, the process does not know it has been issued a SIGKILL, so it cannot do any cleanup; it is killed immediately. Remember, though minuscule, it takes a finite time to kill the process.
Not all processes respond to SIGKILL!
I suggest killing ALL processes that have the argument serverstarter-2.0.1.jar in their command line, like this:
pkill -9 -f "serverstarter-2.0.1.jar"
If you cannot kill ALL processes having the argument serverstarter-2.0.1.jar, you need to save the PID of the created process at creation time and kill that.
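If you go the save-the-PID route, a minimal sketch looks like this (POSIX only; a short Python sleep stands in for the "java -jar serverstarter-2.0.1.jar" command):

```python
import os
import signal
import subprocess
import sys

# Long-running stand-in for the serverstarter jar.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
pid = proc.pid  # save this at creation time (e.g. write it to a pidfile)

# ... later, when it is time to stop it:
os.kill(pid, signal.SIGKILL)  # one-process equivalent of pkill -9
proc.wait()                   # reap the child so it does not linger as a zombie
print("exit status:", proc.returncode)  # negative signal number on POSIX
```

Since we still hold the Popen object here, proc.kill() would do the same thing; os.kill() is what you would use after reading the PID back from a pidfile.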
I wrote a python script, "download_vod.py" that runs a child process using subprocess module.
download_vod.py
#!/usr/bin/python
import subprocess
url = "xxx"
filename = "xxx.mp4"
cmd = "ffmpeg -i " + url + " -c:v copy -c:a copy " + "\"" + filename + "\""
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
proc.wait()
When I run it in the foreground in a bash shell, as below, it works fine and terminates properly:
./download_vod.py
But a problem occurs when I run the script in the background, as below:
./download_vod.py&
The script always stops as below.
[1]+ Stopped download_vod.py
If I resume the job as below, it resumes and terminates properly.
bg
I assume it is caused by the subprocess, because this never happens without one.
Would you let me know what happens to the subprocess (child process) when I run the python script as background? And how would it be fixed?
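A likely explanation (an assumption based on how shell job control works, not something stated in the thread): ffmpeg reads from the terminal by default, and a background job that reads its controlling terminal is stopped with SIGTTIN, which matches the "Stopped" message above. The usual fix is to hand the child an empty stdin. A sketch, with a short Python one-liner standing in for the ffmpeg command:

```python
import subprocess
import sys

# The one-liner stands in for ffmpeg; with stdin=subprocess.DEVNULL it
# sees end-of-file instead of touching the terminal, so the background
# job is never stopped by SIGTTIN.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; print(sys.stdin.read() or 'empty')"],
    stdin=subprocess.DEVNULL,  # the key change versus the question's code
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
out, _ = proc.communicate()
print(out.decode().strip())
```

For the real script, adding stdin=subprocess.DEVNULL (or the ffmpeg flag -nostdin) to the Popen call should let it run in the background without being stopped.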
I have a time-consuming SNMP walk task to perform, which I am running as a background process using Popen. How can I capture the output of this background task in a log file? In the code below, I am trying to do an snmpwalk on each IP in ip_list and log all the results to abc.txt. However, the generated file abc.txt is empty.
Here is my sample code below -
import subprocess
import sys
f = open('abc.txt', 'a+')
ip_list = ["192.163.1.104", "192.163.1.103", "192.163.1.101"]
for ip in ip_list:
    cmd = "snmpwalk.exe -t 1 -v2c -c public "
    cmd = cmd + ip
    print(cmd)
    p = subprocess.Popen(cmd, shell=True, stdout=f)
    p.wait()
f.close()
print("File output - " + open('abc.txt', 'r').read())
The sample output from the command can be something like this for each IP:
sysDescr.0 = STRING: Software: Whistler Version 5.1 Service Pack 2 (Build 2600)
sysObjectID.0 = OID: win32
sysUpTimeInstance = Timeticks: (15535) 0:02:35.35
sysContact.0 = STRING: unknown
sysName.0 = STRING: UDLDEV
sysLocation.0 = STRING: unknown
sysServices.0 = INTEGER: 72
sysORID.4 = OID: snmpMPDCompliance
I have already tried Popen, but it does not log output to a file when it is a time-consuming background process; it works when the background process is something quick like ls/dir. Any help is appreciated.
The main issue here, I assume, is the expectation of what Popen does and how it works.
p.wait() will wait for the process to finish before continuing; that is why ls, for instance, works but more time-consuming tasks don't. And there's nothing flushing the output automatically until you call p.stdout.flush().
The way you've set it up is more meant to work for:
Execute command
Wait for exit
Catch output
And then work with it. For your use case, you'd be better off using an alternative library, or using stdout=subprocess.PIPE and catching the output yourself, which would mean something along the lines of:
import subprocess

ip_list = ["192.163.1.104", "192.163.1.103", "192.163.1.101"]
with open('abc.txt', 'ab') as output:
    for ip in ip_list:
        print(cmd := f"snmpwalk.exe -t 1 -v2c -c public {ip}")
        process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)  # Be wary of shell=True
        while process.poll() is None:
            # stdout is a byte stream, so the sentinel must be b'', not ''
            for chunk in iter(lambda: process.stdout.read(1), b''):
                output.write(chunk)
with open('abc.txt', 'rb') as log:
    print("File output: " + log.read().decode())
The key things to take away here are process.poll(), which checks whether the process has finished, and, while it hasn't, catching the output with process.stdout.read(1), one byte at a time. If you know there are newlines coming, you can switch those read lines to output.write(process.stdout.readline()) and you're all set.
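For completeness, the line-based variant sketched as a whole, with a short Python one-liner standing in for the snmpwalk.exe command (the file name is the one from the question; the stand-in output lines are illustrative):

```python
import subprocess
import sys

# Stand-in for "snmpwalk.exe ...": prints two snmpwalk-style lines.
cmd = [sys.executable, "-c",
       "print('sysDescr.0 = STRING: demo'); print('sysName.0 = STRING: UDLDEV')"]
with open('abc.txt', 'w') as output:
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in process.stdout:  # yields each line as the child produces it
        output.write(line)
    process.wait()
with open('abc.txt') as log:
    print("File output: " + log.read())
```

Iterating over process.stdout directly reads line by line as output arrives, which is usually simpler than a byte-at-a-time loop when the tool emits whole lines.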
I am trying to execute a non-blocking bash script from python and to get its return code. Here is my function so far:
def run_bash_script(script_fullname, logfile):
    my_cmd = ". " + script_fullname + " >" + logfile + " 2>&1"
    p = subprocess.Popen(my_cmd, shell=True)
    os.waitpid(p.pid, 0)
    print(p.returncode)
As you can see, all the output is redirected into a log file, which I can monitor while the bash process is running.
However, the last line just prints None instead of a useful exit code.
What am I doing wrong here?
You should use p.wait() rather than os.waitpid(). os.waitpid() is a low-level API; it knows nothing about the Popen object, so it cannot update p.returncode.
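A minimal sketch of the corrected function (same command string as the question; only the wait call changes):

```python
import subprocess

def run_bash_script(script_fullname, logfile):
    # Source the script in a shell, redirecting all output to the log file.
    my_cmd = ". " + script_fullname + " >" + logfile + " 2>&1"
    p = subprocess.Popen(my_cmd, shell=True)
    p.wait()             # lets Popen reap its own child...
    return p.returncode  # ...so returncode is now an int, not None
```

Returning the code (rather than printing it) also makes the function usable by callers that need to branch on success or failure.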
I need to write a script in Linux which can start a background process using one command and stop the process using another.
The specific application is to take userspace and kernel logs for android.
following command should start taking logs
$ mylogscript start
following command should stop the logging
$ mylogscript stop
Also, the commands should not block the terminal. For example, once I send the start command, the script should run in the background and I should be able to do other work in the terminal.
Any pointers on how to implement this in perl or python would be helpful.
EDIT:
Solved: https://stackoverflow.com/a/14596380/443889
I got the solution to my problem. The solution essentially involves starting a subprocess in Python and sending it a signal to kill the process when done.
Here is the code for reference:
#!/usr/bin/python
import subprocess
import sys
import os
import signal

U_LOG_FILE_PATH = "u.log"
K_LOG_FILE_PATH = "k.log"
U_COMMAND = "adb logcat > " + U_LOG_FILE_PATH
K_COMMAND = "adb shell cat /proc/kmsg > " + K_LOG_FILE_PATH
LOG_PID_PATH = "log-pid"

def start_log():
    if os.path.isfile(LOG_PID_PATH):
        print "log process already started, found file: ", LOG_PID_PATH
        return
    file = open(LOG_PID_PATH, "w")
    print "starting log process: ", U_COMMAND
    proc = subprocess.Popen(U_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process1 id = ", proc.pid
    file.write(str(proc.pid) + "\n")
    print "starting log process: ", K_COMMAND
    proc = subprocess.Popen(K_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process2 id = ", proc.pid
    file.write(str(proc.pid) + "\n")
    file.close()

def stop_log():
    if not os.path.isfile(LOG_PID_PATH):
        print "log process not started, can not find file: ", LOG_PID_PATH
        return
    print "terminating log processes"
    file = open(LOG_PID_PATH, "r")
    log_pid1 = int(file.readline())
    log_pid2 = int(file.readline())
    file.close()
    print "log-pid1 = ", log_pid1
    print "log-pid2 = ", log_pid2
    os.killpg(log_pid1, signal.SIGTERM)
    print "logprocess1 killed"
    os.killpg(log_pid2, signal.SIGTERM)
    print "logprocess2 killed"
    subprocess.call("rm " + LOG_PID_PATH, shell=True)

def print_usage(str):
    print "usage: ", str, "[start|stop]"

# Main script
if len(sys.argv) != 2:
    print_usage(sys.argv[0])
    sys.exit(1)
if sys.argv[1] == "start":
    start_log()
elif sys.argv[1] == "stop":
    stop_log()
else:
    print_usage(sys.argv[0])
    sys.exit(1)
sys.exit(0)
There are a couple of different approaches you can take on this:
1. Signals - you use a signal handler: typically SIGHUP to signal the process to restart ("start") and SIGTERM to stop it ("stop").
2. Use a named pipe or other IPC mechanism. The background process has a separate thread that simply reads from the pipe and, when something comes in, acts on it. This method relies on having a separate executable file that opens the pipe and sends messages ("start", "stop", "set loglevel 1", or whatever you fancy).
I'm sorry, I haven't implemented either of these in Python [and in Perl I haven't really written anything], but I doubt it's very hard - there's bound to be a ready-made set of Python code to deal with named pipes.
Edit: Another method that just struck me is that you simply daemonize the program at start, and then let the "stop" version find your daemonized process [e.g. by reading the "pidfile" that you stashed somewhere suitable] and send it a SIGTERM to terminate.
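Option 2's named pipe can be sketched in a few lines of Python; the path and the handle() function are illustrative, and the blocking read/write halves are shown as comments because a FIFO open blocks until the other end appears:

```python
import os
import stat
import tempfile

# Create the FIFO the daemon and the control script will share.
FIFO = os.path.join(tempfile.mkdtemp(), "mylogscript.fifo")
os.mkfifo(FIFO)
print(stat.S_ISFIFO(os.stat(FIFO).st_mode))  # prints True: it really is a pipe

# Daemon side (blocks until a writer opens the pipe; in a real daemon
# this loop would live in its own thread):
#     with open(FIFO) as pipe:
#         for command in pipe:
#             handle(command.strip())  # "start", "stop", "set loglevel 1", ...
#
# Control side:
#     with open(FIFO, "w") as pipe:
#         pipe.write("stop\n")
```

Everything here is stdlib (os.mkfifo is POSIX-only); the only moving parts are the pipe path both sides agree on and the command vocabulary you choose.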
I don't know if this is the optimum way to do it in Perl, but for example:
system("sleep 60 &")
This starts a background process that will sleep for 60 seconds without blocking the terminal. The ampersand in shell means to do something in the background.
A simple mechanism for telling the process when to stop is to have it periodically check for the existence of a certain file. If the file exists, it exits.
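The stop-file mechanism can be sketched in Python as well; the marker file name, loop body, and safety cap are all illustrative:

```python
import os
import time

STOP_FILE = "stop.marker"  # the control script creates this to request a stop

def log_until_stopped(poll_interval=0.1, max_iterations=50):
    """Run until STOP_FILE appears (max_iterations is just a demo safety net)."""
    iterations = 0
    while not os.path.exists(STOP_FILE) and iterations < max_iterations:
        # ... do one unit of logging work here ...
        time.sleep(poll_interval)
        iterations += 1
    return iterations

# Ask it to stop before it even starts a single iteration:
open(STOP_FILE, "w").close()
print("iterations before stop:", log_until_stopped())  # prints 0
os.remove(STOP_FILE)
```

Polling a file is cruder than signals or a pipe, but it needs no IPC setup at all and works the same from any language, including the shell.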