I have a shell script which I am calling in Python using os.system("./name_of_script").
I would prefer to make this call based on user input (i.e. when a user types "start" the call is made, and some other work in the Python program is also done; when the user types "stop" the script is terminated). But I find that this call takes over the terminal (I don't really know the right word for it, but basically the whole program stalls on this call, since my shell script runs until a keyboard interrupt is received). Only when I send a keyboard interrupt does the shell script stop executing and the rest of the code run. Is this possible in Python?
Simply constructing a Popen object, as in:
p = subprocess.Popen(['./name_of_script'])
...starts the named program without waiting for it to complete.
If you later want to see if it's done yet, you can check p.poll() for an update on its status.
This is also faster and safer than os.system(), in that it doesn't involve a shell (unless the script you're invoking runs one itself), so you aren't exposing yourself to shellshock, shell injection vulnerabilities, or other shell-related issues unnecessarily.
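For example, a minimal sketch of the start/stop flow from the question might look like this (assuming Python 3 and that ./name_of_script runs until told to stop):

import subprocess

proc = None
while True:
    cmd = input('> ')
    if cmd == 'start' and proc is None:
        # Starts the script and returns immediately; nothing blocks here
        proc = subprocess.Popen(['./name_of_script'])
    elif cmd == 'stop' and proc is not None:
        proc.terminate()   # ask the script to exit (SIGTERM on POSIX)
        proc.wait()        # reap it so it doesn't linger as a zombie
        proc = None
    elif cmd == 'quit':
        break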
I am running some Python code using a SLURM script on a remote server accessed through SSH. At some point, license-related issues on the SLURM platform may occur, generating errors in Python and ending the subprocess. I want to use try-except to let the Python subprocess wait until the issue is fixed, after which it can keep running from where it stopped.
What are some smart implementations for that?
My most obvious solution is to keep Python in a loop when the error occurs and have it read a file every X seconds; when I finally fix the error and want it to keep running from where it stopped, I would write something to the file and break the loop. I wonder if there is a smarter way to provide input to the Python subprocess while it is running through the SLURM script.
One idea might be to add a signal handler for the USR1 signal to your Python script.
In the signal handler function, you can set a global variable, send a message, or set a threading.Event that the main process is waiting on.
Then you can signal the process with:
kill -USR1 <PID>
or with the Python os.kill() equivalent.
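A minimal sketch of that approach, assuming the main script can wait on a threading.Event while the problem is being fixed (the handler name, LicenseError and step() are illustrative placeholders):

import signal
import threading

resume = threading.Event()

def handle_usr1(signum, frame):
    # Runs in the main thread when the process receives SIGUSR1
    resume.set()

signal.signal(signal.SIGUSR1, handle_usr1)

def run_with_retry(step):
    try:
        step()
    except LicenseError:   # placeholder for whatever the license failure raises
        resume.clear()
        resume.wait()      # blocks until someone runs: kill -USR1 <PID>
        step()             # retry from where it stopped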
Though I do have to agree there is something to be said for the simplicity of your process doing:
touch /tmp/blocked.$$
and your program waiting in a loop with a one-second sleep for that file to be removed. This way you can tell which process ID is blocked.
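For completeness, a rough sketch of that file-based variant (the path and the one-second polling interval are arbitrary choices):

import os
import time

def wait_until_unblocked():
    # Equivalent of: touch /tmp/blocked.$$  -- then wait for the file to be removed
    marker = '/tmp/blocked.{}'.format(os.getpid())
    open(marker, 'w').close()
    while os.path.exists(marker):
        time.sleep(1)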
I am writing a small Python script for Windows that runs a certain program using subprocess.Popen and then, after a while, kills it. I could use Popen.terminate or Popen.kill to terminate that program, however, as indicated by the docs, those would act the same by calling the Windows API function TerminateProcess. However, that function terminates the program immediately, which may result in errors. I would like to replicate the result of hitting the 'X' button, which uses the 'ExitProcess' function. Is that possible?
I have a Python script, part of a test system that calls many third party tools/processes on multiple [Windows] machines, and hence has been designed to clean up comprehensively/carefully when aborted with CTRL-C; the clean-up can take many seconds, depending on what's going on. This clean-up process works fine from a [Windows] command prompt.
I run that Python script from [a scripted pipeline] Jenkinsfile, using return_value = bat("python my_script.py params", returnStatus: true), which also works fine.
However I need to be able to perform the abort/clean-up during a Jenkins [v2.263.4] run, i.e. when someone presses the little red X, and that bit I can't fathom. I understand that Jenkins sends SIGTERM when the abort button is pressed so I am trapping that in my_script.py:
SAVED_SIGTERM_HANDLER = signal(SIGTERM, sigterm_handler)
...and running the clean-up routines I would normally call on a KeyboardInterrupt from sigterm_handler() as well, but they aren't being called. I understand that the IO stream to the Jenkins console stops the moment the abort button is pressed; I can see that the clean-up functions aren't being called by watching the behaviour of my script(s) from the "other side": it appears as though my_script.py simply stops dead, all connections from it drop the moment the abort button is pressed, and there is no clean-up.
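Concretely, the trap looks roughly like this (simplified; cleanup() stands in for the real clean-up routines):

import signal

def sigterm_handler(signum, frame):
    cleanup()              # the same routines a KeyboardInterrupt would trigger
    raise SystemExit(1)

SAVED_SIGTERM_HANDLER = signal.signal(signal.SIGTERM, sigterm_handler)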
Can anyone suggest a way of making the abort button in Jenkins give my bat()ed Python script time to clean-up? Or am I just doing something wrong? Or is there some other approach to this within Jenkins that I'm missing?
You should be able to use a "post" action to execute any clean up needed: https://www.jenkins.io/doc/book/pipeline/syntax/#post
I know that doesn't take into account the cleanup logic you already have, but it's probably the safest thing to do. Maybe separate the cleanup logic out into its own script and make it idempotent; then you can call it unconditionally at the end of the pipeline, and if it has already run it will simply do nothing.
After much figuring out, and kudos to our tools people who found the critical "cookie" implementation detail in Jenkins, the workaround to take control of the abort process [on Windows] is as follows:
have Jenkins call a wrapper, let's call it (a), and open a socket or a named-pipe (socket would work on both Linux and Windows),
(a) then launches (b), via "start" so that (b) runs as a separate process but, CRITICALLY, the environment that (a) passes to (b) MUST have JENKINS_SERVER_COOKIE="ignore" added to it; Jenkins uses this flag to find the processes it has launched in order to kill them, so you must set this "cookie" to "ignore" to stop Jenkins killing (b),
(b) connects back to (a) via the socket or pipe,
(a) remains running for as long as (b) is connected to the socket or pipe but also lets itself be killed by CTRL-C/SIGTERM,
(b) then launches the thing you actually want to run,
when (a) is terminated by a Jenkins abort (b) notices (because the socket or pipe will close) and performs a controlled shut-down of the thing you wanted to run before (b) exits,
separately, make a thing, let's call it (c), which checks whether the socket/named-pipe is present: if it is then (b) hasn't terminated yet,
in the Jenkinsfile, wrap the call to (a) in a try/catch/finally and call (c) from the finally block, thereby ensuring that the Jenkins pipeline only finishes when (b) has terminated (you might want to add a guard timer for safety).
Quite a thing, and all for the lack of what would be a relatively simple API in Jenkins.
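For what it's worth, a very rough Python sketch of the (a)/(b) pair (the port handshake via an environment variable, the "start" command line and the final terminate() call are all illustrative; the JENKINS_SERVER_COOKIE setting is the essential part described above):

# wrapper_a.py -- launched by Jenkins via bat(); this is the process Jenkins aborts
import os
import socket
import subprocess

listener = socket.socket()
listener.bind(('127.0.0.1', 0))             # any free port; illustrative handshake
listener.listen(1)

env = dict(os.environ,
           JENKINS_SERVER_COOKIE='ignore',  # CRITICAL: stops Jenkins killing (b)
           WRAPPER_PORT=str(listener.getsockname()[1]))
subprocess.Popen('start /b python wrapper_b.py', shell=True, env=env)

conn, _ = listener.accept()                 # (b) connects back here
conn.recv(1)                                # keeps (a) alive while (b) stays connected;
                                            # a Jenkins abort kills (a), closing the socket

# wrapper_b.py -- survives the abort and shuts the real work down in a controlled way
import os
import socket
import subprocess

conn = socket.create_connection(('127.0.0.1', int(os.environ['WRAPPER_PORT'])))
conn.settimeout(1.0)
child = subprocess.Popen(['python', 'my_script.py', 'params'])

while child.poll() is None:
    try:
        if conn.recv(1) == b'':             # (a) died, i.e. the abort button was pressed
            # Placeholder: trigger whatever controlled shut-down my_script.py supports
            # (CTRL_BREAK_EVENT, a sentinel file, another pipe, ...) and wait for it
            child.terminate()
            child.wait()
    except socket.timeout:
        pass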
I'm working on a tool for data entry at my job where it basically takes a report ID number, opens a PDF to that page of that report, allows you to input the information and then saves it.
I'm completely new to instantiating new processes in Python; this is the first time I've really tried to do it. So basically, I have this relevant function:
def get_report(id):
    path = report_path(id)
    if not path:
        raise NameError
    page = get_page(path, id)
    proc = subprocess.Popen([r"C:\Program Files (x86)\Adobe\Reader 11.0\Reader\AcroRd32.exe",
                             "/A", "page={}".format(page), path])
In order to open the report in Adobe Acrobat and be able to input information while the report is still open, I determined that I had to use multiprocessing. So, as a result, in the main loop of the program, where it iterates through data and gets the report ID, I have this:
for row in rows:
    print 'Opening report for {}'.format(ID)
    arg = ID
    proc = Process(target=get_report, args=(arg,))
    proc.start()
    row[1] = raw_input('Enter the desired value: ')
    rows.updateRow(row)
    while proc.is_alive():
        pass
This way, one can enter data without the program hanging on the subprocess.Popen() command. However, if it simply continues on to the next record without the Acrobat window that pops up being closed, then it won't actually open the next report. Hence the while proc.is_alive(): loop, which gives one a chance to close the window manually. I'd like to kill the process immediately after Enter is hit and the value entered, so it goes on and opens the next report with even less work. I tried several different things: killing processes through the PID using os.kill(), killing the subprocess, killing the process itself, killing both of them, and also using subprocess.call() instead of Popen() to see if it made a difference.
It didn't.
What am I missing here? How do I kill the process and close the window that it opened in? Is this even possible? Like I said, I have just about 0 experience with processes in python. If I'm doing something horribly wrong, please let me know!
Thanks in advance.
To kill/terminate a subprocess, call proc.kill() or proc.terminate(). It may leave grandchild processes running; see subprocess: deleting child processes in Windows.
This way, one can enter data without the program hanging on the subprocess.Popen() command.
Popen() starts the command; it does not wait for the command to finish. There is a .wait() method, and there are convenience functions such as call().
Even if Popen(command).wait() returns, i.e., the corresponding external process has exited, it does not necessarily mean that the document is closed in the general case (the launcher app is done, but the main application may persist).
So the first step is to drop the unnecessary multiprocessing.Process and call Popen() in the main process instead.
The second step is to make sure to start an executable that owns the opened document, i.e., one where killing it means the corresponding document won't stay open: AcroRd32.exe might already be such a program (test it: see whether call([r'..\AcroRd32.exe', ..]) waits for the document to be closed), or it might have a command-line switch that enables such behavior. See How do I launch a file in its default program, and then close it when the script finishes?
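To illustrate that first step, a minimal sketch that drops multiprocessing.Process and drives Popen() from the main process (the Reader path and /A switch are taken from the question; whether terminate() actually closes the window depends on the launcher behaviour discussed above):

import subprocess

def enter_value_for_report(page, path):
    # Open the report without blocking, take the input, then close the viewer
    proc = subprocess.Popen([r"C:\Program Files (x86)\Adobe\Reader 11.0\Reader\AcroRd32.exe",
                             "/A", "page={}".format(page), path])
    value = raw_input('Enter the desired value: ')
    proc.terminate()   # may only kill a launcher process; see the caveat above
    return value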
I tried killing the subprocess, killing the process itself, killing both of them, and also tried using subprocess.call() instead of Popen() to see if it made a difference.
It didn't.
If kill() and Popen() behave the same in your case, then either you've made a mistake (they don't behave the same; you should create a minimal standalone code example with a dummy PDF that demonstrates the problem, and describe in words what you expect to happen, step by step, and what happens instead), or AcroRd32.exe is just a launcher app of the kind described above (it merely opens the document and immediately exits, without waiting for the document to be closed).
I have a script which runs two threads infinitely (each thread is an infinite while loop). Whenever I run it normally, I use Ctrl+Z or Ctrl+C to stop its execution (depending on the OS). But ever since I added it to the /etc/rc.local file in Linux, for automatic startup upon boot, I have been unable to use these commands to forcefully exit.
This has forced me to include something in the python script itself to cleanly exit when I type a certain key. How do I do so?
The problem is that I'm running a multithreaded application, which runs continuously and does not wait for any user inputs.
I added this to the start of a loop in my thread:
ip = raw_input()
if ip == 'quit':
    quit()
But this will NOT work, since it blocks waiting for user input and stalls the script. I don't want the script to be affected at all by this; I just want it to respond when I want to stop it. My question is not what command to use (which is explained here: Python exit commands - why so many and when should each be used?), but how I should use it without affecting the flow of my program.
Keep the code that handles the KeyboardInterrupt and send it an INT signal to stop the program: kill -INT $pid from the shell, where $pid is the process ID (PID) of the program. That's essentially the same as pressing CTRL+C in a shell where the program runs in the foreground.
Writing the program's PID into a file right after it started, either from within the program itself or from the code which started it asynchronously, makes it easier to send a signal later, without the need to search for the process in the process list.
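A small sketch of that pattern (the PID-file path and the helper names are arbitrary placeholders):

import os

PID_FILE = '/tmp/myscript.pid'

def main():
    with open(PID_FILE, 'w') as f:   # later: kill -INT $(cat /tmp/myscript.pid)
        f.write(str(os.getpid()))
    try:
        run_both_threads()           # placeholder for the existing two infinite loops
    except KeyboardInterrupt:        # raised on SIGINT, i.e. CTRL+C or kill -INT
        shut_down_cleanly()          # placeholder for the clean exit
    finally:
        os.remove(PID_FILE)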
One way is to have the threads examine a global variable as part of their loop, and terminate (that is, break out of the loop) when the variable is set.
The main thread can then simply set the variable and join() all existing threads before terminating. You should be aware that if the individual threads are blocked waiting for some event to occur before they next check whether the global variable has been set, then they will hang anyway until that event occurs.
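For example, a sketch of that pattern using a threading.Event as the shared flag (the worker body is an illustrative placeholder):

import threading

stop = threading.Event()

def worker():
    while not stop.is_set():         # each thread checks the flag on every iteration
        do_one_unit_of_work()        # placeholder; must not block indefinitely

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

# ...later, when it is time to exit:
stop.set()
for t in threads:
    t.join()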