I've got a Python script here which downloads a jar file from GitHub, executes it, waits 3 minutes, and is then supposed to kill the process.
This works perfectly fine on Windows, but the script does not do what I need it to on Ubuntu. The jar file itself does what it should; however, after that, the Python script does not continue.
To summarise, nothing after p2 = subprocess.Popen(["java", "-jar", "serverstarter-2.0.1.jar", "&"], stdout=subprocess.DEVNULL) gets run. Not even time.sleep(180).
So what I am trying to figure out is why executing the jar file within the script seemingly "stalls" the script.
Note that another Python script calls this script like this: subprocess.run(["python3", p, "&"], stdout=subprocess.DEVNULL).
Here is the code:
import os
import shutil
import subprocess
import time
from glob import glob

import wget

wget.download("https://github.com/AllTheMods/alltheservers/releases/download/2.0.1/serverstarter-2.0.1.jar", bar=None)

p1 = subprocess.run(["chmod", "+x", "serverstarter-2.0.1.jar"], stdout=subprocess.DEVNULL)
p2 = subprocess.Popen(["java", "-jar", "serverstarter-2.0.1.jar", "&"], stdout=subprocess.DEVNULL)

time.sleep(180)
p2.kill()

try:
    log_files = glob(src + "**/*.log")
    for files in map(str, log_files):
        os.remove(files)

    zip_files = glob(src + "**/*.zip")
    for files in map(str, zip_files):
        os.remove(files)

    startserver_files = glob(src + "**/startserver.*")
    for files in map(str, startserver_files):
        os.remove(files)

    serverstarter_files = glob(src + "**/serverstarter*.*")
    for files in map(str, serverstarter_files):
        os.remove(files)

    files_to_move = glob(src + "**/*")
    for files in map(str, files_to_move):
        shutil.move(files, dest)

    time.sleep(20)

    forge_jar_file = glob(dest + "forge-*.jar")
    for files in map(str, forge_jar_file):
        print(files)
        os.rename(files, "{}{}".format(dest, "atm6.jar"))

except Exception as e:
    post_to_slack(f"Error occurred in {os.path.basename(__file__)}! {e}")
    quit()
If you are on Linux, type this in a terminal:
ps -ef | grep -i java
and check the process ID of your jar file, then kill it using the pkill command.
Quote from kill method documentation:
https://pythontic.com/multiprocessing/process/kill
When invoked, kill() uses SIGKILL and terminates the process.
When SIGKILL is used, the process does not know it has been issued a SIGKILL. The process cannot do any cleanup. The process is killed immediately. Remember, though minuscule, it takes a finite time to kill the process.
Not all processes respond to SIGKILL!
I suggest killing ALL processes that have serverstarter-2.0.1.jar as an argument in their command line.
Like this:
pkill -9 -f "serverstarter-2.0.1.jar"
If you cannot kill ALL processes having the argument serverstarter-2.0.1.jar, you need to save the PID of the created process at creation time and kill that specific process.
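For reference, a minimal sketch of that last approach, reusing the question's own Popen handle (and dropping the stray "&", which a list-style Popen passes to java as a literal argument rather than backgrounding anything):

import subprocess
import time

# Keep the Popen handle, remember its PID, and kill it explicitly.
p2 = subprocess.Popen(["java", "-jar", "serverstarter-2.0.1.jar"],
                      stdout=subprocess.DEVNULL)
print("serverstarter PID:", p2.pid)

time.sleep(180)

p2.kill()   # sends SIGKILL; use p2.terminate() first for a graceful stop
p2.wait()   # reap the child so it does not linger as a zombie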
Related
I wrote a Python script, "download_vod.py", that runs a child process using the subprocess module.
download_vod.py
#!/usr/bin/python
import subprocess
url = "xxx"
filename = "xxx.mp4"
cmd = "ffmpeg -i " + url + " -c:v copy -c:a copy " + "\"" + filename + "\""
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
proc.wait()
When I run it in the foreground in a bash shell as below, it works fine and terminates properly:
./download_vod.py
But a problem occurs when I run the script in the background, as below.
./download_vod.py&
The script always stops as below.
[1]+ Stopped download_vod.py
If I resume the job as below, it resumes and terminates properly.
bg
I assume it is caused by running subprocess because it never happens without subprocess.
Would you let me know what happens to the subprocess (child process) when I run the python script as background? And how would it be fixed?
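For what it's worth, a sketch of one common explanation and workaround, assuming the stop happens because ffmpeg tries to read the controlling terminal while the job is in the background (which raises SIGTTIN and suspends the job): keep ffmpeg away from the terminal's stdin. The -nostdin flag and the stdin redirection below are that workaround, not code from the question (Python 3):

import subprocess

url = "xxx"            # placeholders, as in the question
filename = "xxx.mp4"

# -nostdin keeps ffmpeg from reading the terminal; redirecting stdin does the same.
cmd = ["ffmpeg", "-nostdin", "-i", url, "-c:v", "copy", "-c:a", "copy", filename]
proc = subprocess.Popen(cmd, stdin=subprocess.DEVNULL,
                        stdout=subprocess.DEVNULL, stderr=subprocess.STDOUT)
proc.wait()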
I use the subprocess module to launch a program in "terminal mode" by adding a flag after the executable, like this:
subprocess.call(nuke + " -t ")
This results in terminal mode, so all following commands are in the context of the program (my guess is that it is the program's Python interpreter).
Nuke 11.1v6, 64 bit, built Sep 8 2018. Copyright (c) 2018 The Foundry Visionmongers Ltd. All Rights Reserved. Licence expires on: 2020/3/15
>>>
How can I keep pushing commands to the interpreter from the Python script that launched the terminal mode?
How would you quit this program's interpreter from the script?
EDIT:
nuketerminal = subprocess.Popen(nuke + " -t " + createScript)
nuketerminal.kill()
terminates the process before the Python interpreter is loaded and the script is executed. Any idea how to solve this elegantly, without a delay?
from subprocess import Popen, PIPE

p = Popen([nuke, "-t"], stdin=PIPE, stdout=PIPE, text=True)  # opens a subprocess
p.stdin.write('a line\n')    # writes something to its stdin
p.stdin.flush()              # make sure the line actually reaches the subprocess
line = p.stdout.readline()   # reads a line from the subprocess's stdout
Not syncing reads and writes may cause a deadlock, e.g. when both your main process and the subprocess are waiting for input.
You can wait for the subprocess to end:
return_code = p.wait() # waits for the process to end and returns the return code
# or
stdoutdata, stderrdata = p.communicate("input") # sends input and waits for the subprocess to end, returning a tuple (stdoutdata, stderrdata).
Or you can end the subprocess with:
p.kill()
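For the EDIT above, a sketch of one way to avoid both kill() and a fixed delay, assuming Nuke's -t interpreter reads commands from stdin and exits cleanly on EOF (nuke is the executable path variable from the question):

from subprocess import Popen, PIPE

p = Popen([nuke, "-t"], stdin=PIPE, stdout=PIPE, text=True)
commands = "print('hello from the controlling script')\n"
out, _ = p.communicate(commands)   # send the commands, close stdin, wait for exit
print(out)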
Okay, I'm officially out of ideas after running each and every sample I could find on Google, up to the 19th page. I have a "provider" script. The goal of this Python script is to start up other services that keep running indefinitely even after the "provider" has stopped. Basically: start the process, then forget about it, but continue the script without stopping it...
My problem is with python-daemon. I have actions (web-service calls to start/stop/get the status of the started services). I create the start commands on the fly and perform variable substitution on the config files as required.
Let's start from this point: I have a command to run (a bash script that executes a Java process, a long-running service that will be stopped some time later).
def start(command, working_directory):
    pidfile = os.path.join(working_directory, 'application.pid')
    # I expect the pid of the started application to be here. The file is not created. Nothing is there.
    context = daemon.DaemonContext(working_directory=working_directory,
                                   pidfile=daemon.pidfile.PIDLockFile(pidfile))
    with context:
        psutil.Popen(command)
    # This part never runs. Even if I put a simple print statement at this point, it never appears.
    # Debugging in PyCharm shows that my script returns with 0 on "with context".
    with open(pidfile, 'r') as pf:
        pid = pf.read()
    return pid
From here on, in my caller of this method, I prepare a JSON object to return to the client, which essentially contains an instance_id (don't mind it) and a pid (which will be used to stop this process in another request).
What happens? After "with context" my application exits with status 0; nothing is returned, no JSON response gets created, no pidfile gets created, only the executed psutil.Popen command runs. How can I achieve what I need? I need an independently running process and need to know its PID in order to stop it later on. The executed process must keep running even if the current Python script stops for some reason. I can't get around the shell script, as that application is not mine; I have to use what I have.
Thanks for any tip!
Edit:
I tried simply using Popen from psutil/subprocess, with a somewhat more promising result.
def start(self, command):
    import psutil  # or use subprocess.Popen instead of psutil.Popen
    proc = psutil.Popen(command)
    return str(proc.pid)
Now, if I debug the application and wait some undefined time at the return statement, everything works great! The service is running, the PID is there, and I can stop it later on. Then I simply ran the provider without debugging. It returns the PID, but the process is not running. It seems like Popen has no time to start the service because the whole provider stops first.
Update:
Using os.fork:
@staticmethod
def __start_process(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.chdir(working_directory)
        proc = psutil.Popen(command)
        with open('application.pid', 'w') as pf:
            pf.write(proc.pid)

def start(self):
    ...
    __start_process(command, working_directory)
    with open(os.path.join(working_directory, 'application.pid'), 'r') as pf:
        pid = int(pf.read())
    proc = psutil.Process(pid)
    print("RUNNING" if proc.status() == psutil.STATUS_RUNNING else "...")
After running the above sample, RUNNING is written to the console. Then the main script exits, because I'm not fast enough:
ps auxf | grep
No instances are running...
Checking the pidfile: sure, it's there, it was created:
cat /application.pid
EMPTY, 0 bytes
From the multiple partial tips I got, I finally managed to get it working...
def start(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.setsid()
        os.umask(0)  # I'm not sure about this, not on my notebook at the moment
        os.execv(command[0], command)  # This was strange, as I needed to use the name of the shell script twice: command = [argv[0], args...]. Upon using ksh as the command I got a nice error...
    else:
        with open(os.path.join(working_directory, 'application.pid'), 'w') as pf:
            pf.write(str(pid))
    return pid
That, together, solved the issue. The started process is not a child process of the running Python script and won't stop when the script terminates.
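For completeness, a hypothetical stop() counterpart (not part of the original solution) could read the saved pidfile back and signal the service:

import os
import signal

def stop(working_directory):
    # Hypothetical helper: read the PID written by start() and stop the service.
    with open(os.path.join(working_directory, 'application.pid')) as pf:
        pid = int(pf.read())
    os.kill(pid, signal.SIGTERM)   # fall back to signal.SIGKILL if SIGTERM is ignored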
Have you tried with os.fork()?
In a nutshell, os.fork() clones the current process; in the parent it returns the PID of the new child process, and in the child it returns 0.
You could do something like this:
#!/usr/bin/env python
import os
import subprocess
import sys
import time

command = 'ls'              # YOUR COMMAND
working_directory = '/etc'  # YOUR WORKING DIRECTORY

def child(command, directory):
    print("I'm the child process, will execute '%s' in '%s'" % (command, directory))

    # Change working directory
    os.chdir(directory)

    # Execute command
    cmd = subprocess.Popen(command,
                           shell=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           stdin=subprocess.PIPE)

    # Retrieve output and error(s), if any
    output = cmd.stdout.read() + cmd.stderr.read()
    print(output)

    # Exiting
    print('Child process ending now')
    sys.exit(0)

def main():
    print("I'm the main process")

    pid = os.fork()

    if pid == 0:
        child(command, working_directory)
    else:
        print('A subprocess was created with PID: %s' % pid)
        # Do stuff here ...
        time.sleep(5)

    print('Main process ending now.')
    sys.exit(0)

if __name__ == '__main__':
    main()
Further info:
Documentation: https://docs.python.org/2/library/os.html#os.fork
Examples: http://www.python-course.eu/forking.php
Another related question: Regarding The os.fork() Function In Python
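One small caveat, added here as a general note rather than part of the original answer: if the parent process keeps running instead of exiting, it should reap the forked child, otherwise the child remains as a zombie entry after it finishes.

import os

# 'pid' is the value os.fork() returned in the parent branch above.
finished_pid, status = os.waitpid(pid, 0)            # blocks until the child exits
# or poll without blocking:
# finished_pid, status = os.waitpid(pid, os.WNOHANG)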
I'm running a kind of touchy .exe file in Python to receive a couple of data measurements. The file should open, take the measurement, then close. The issue is sometimes it crashes and I need to be able to take these measurements every 10 minutes over a long period of time.
What I need is a 'check' to see if the .exe is not responding and, if it's not, to kill the process. Or to just kill the whole script after every measurement is taken. The issue is that the script gets stuck when it tries to run an .exe file that's not responding.
Here's the script:
FNULL = open(os.devnull, 'a')
filename = "current_pressure.log"
command = '"*SRH#\r"'
args = "httpget -r -o " + filename + " -C 2 -S " + command + IP
subprocess.call(args, stdout=FNULL, stderr=FNULL, shell=False)
Basically, need something like:
"if httpget.exe not responding, then kill process"
OR
"kill above script if running after longer than 20 seconds"
Use a timer to kill the process if it's gone on too long. Here I've got two timers, for a graceful and a hard termination, but you can just do the kill if you want.
import os
import subprocess
import threading

FNULL = open(os.devnull, 'a')
filename = "current_pressure.log"
command = '"*SRH#\r"'
args = "httpget -r -o " + filename + " -C 2 -S " + command + IP

proc = subprocess.Popen(args, stdout=FNULL, stderr=FNULL, shell=False)

nice = threading.Timer(20, proc.terminate)   # graceful stop after 20 seconds
nice.start()
mean = threading.Timer(22, proc.kill)        # hard kill 2 seconds later
mean.start()

proc.wait()
nice.cancel()
mean.cancel()
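As a side note, not part of the original answer: on Python 3 the same effect can be had with a timeout on wait(), killing only if the timeout expires.

import subprocess

proc = subprocess.Popen(args, stdout=FNULL, stderr=FNULL, shell=False)
try:
    proc.wait(timeout=20)              # give httpget 20 seconds to finish
except subprocess.TimeoutExpired:
    proc.kill()                        # hard kill if it is still hanging
    proc.wait()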
Generally, when a program hangs on Windows, we try to go to Task Manager and end that program's process. When this approach fails, we experiment with some third-party software to terminate it. However, there is an even better way to terminate such hung programs automatically:
http://www.problogbooster.com/2010/01/automatically-kills-non-responding.html
I'm creating a Python script that runs rsync using subprocess, then gets the stdout and prints it.
The script runs multiple rsync processes based on a conf file, using this code:
for share in shares.split(', '):
    username = parser.get(share, 'username')
    sharename = parser.get(share, 'name')
    local = parser.get(share, 'local')
    remote = parser.get(share, 'remote')
    domain = parser.get(share, 'domain')
    remotedir = username + "@" + domain + ":" + remote
    rsynclog = home + "/.bareshare/" + share + "rsync.log"

    os.system("cp " + rsynclog + " " + rsynclog + ".1 && rm " + rsynclog)  # Move and remove old log

    rsync = "rsync --bwlimit=" + upload + " --stats --progress -azvv -e ssh " + local + " " + username + "@" + domain + ":" + remote + " --log-file=" + rsynclog + " &"

    # Run rsync for each share
    # os.system(rsync)
    self.rsyncRun = subprocess.Popen(["rsync", "--bwlimit=" + upload, "--stats", "--progress", "-azvv", "-e", "ssh", local, remotedir, "--log-file=" + rsynclog],
                                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
I think that this might not be the best thing to do, running multiple syncs at the same time. How could I set this up so that I wait for one process to finish before the next one starts?
You can find my complete script here: https://github.com/danielholm/BareShare/blob/master/bareshare.py
Edit: And how do I make self.rsyncRun die when it is done? When rsync is done with all the files, it seems like it keeps running, although it shouldn't be doing that.
Calling
self.rsyncRun.communicate()
will block the main process until the rsyncRun process has finished.
If you do not want the main process to block, then spawn a thread to handle the calls to subprocess.Popen:
import threading

def worker():
    for share in shares.split(', '):
        ...
        rsyncRun = subprocess.Popen(...)
        out, err = rsyncRun.communicate()

t = threading.Thread(target=worker)
t.daemon = True
t.start()
t.join()
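As a small follow-up, not part of the original answer: because communicate() waits for each rsync before the loop moves on, the shares are now synced one at a time, and the return code tells you whether each run ended cleanly.

out, err = rsyncRun.communicate()    # blocks until this rsync is finished
if rsyncRun.returncode != 0:         # non-zero means rsync reported an error
    print("rsync failed for share %s:\n%s" % (share, err.decode()))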