Python check if another script is running and stop it

I have several Python scripts that turn my TV on and off. Sometimes the TV does not respond the first time, so I use a while loop to keep sending the command until a "success" response is received, up to 10 times.
I need to check whether one of these programs is running when any of them is started, and kill the first process.
This answer uses domain sockets, and I think this could work, but I don't really understand what's happening there:
https://stackoverflow.com/a/7758075/2005444
What I don't know is what the process_name would be. The scripts are titled tvon.py, tvoff.py, and tvtoggle.py. Is it just the title? Would it include the extension? How do I get the PID so I can kill the process?
This is running on Ubuntu 14.04.1
EDIT: all I really need is to check first whether any of these scripts is running. Also, instead of killing the process, maybe I could just wait for it to finish: I could loop and break out once none of those processes is running.
The reason I need to do this: if the TV is off and the off script is run, it will never succeed, because the TV won't respond if it is already off. That is why I built in the limit of 10 commands; it never really takes more than 4, so 10 is overkill. The problem is that if the off command is still retrying and I turn the TV on using the tvon script, the TV will turn on and then back off. The TV limits how often it accepts commands, which reduces the chance of this happening, but I still want this to work as cleanly as possible.
EDIT:
I found that I cannot kill the process, because doing so can lock up the tty port, which requires a manual restart. So I think the smarter way is to have the second process wait until the first is done, or to find a way to tell the first process to stop at a specific point in its loop so I know it's not transmitting.
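One way to get the "second process waits until the first is done" behaviour without killing anything is an exclusive lock on a shared lock file; fcntl.flock blocks until the current holder releases it. A minimal sketch, where /tmp/tvscripts.lock is a hypothetical path chosen for illustration:

import fcntl

# Every TV script opens the same lock file and takes an exclusive lock.
# /tmp/tvscripts.lock is a made-up name; any shared, writable path works.
with open("/tmp/tvscripts.lock", "w") as lockfile:
    fcntl.flock(lockfile, fcntl.LOCK_EX)  # blocks until any other holder releases
    # ... send commands to the TV here; at most one script runs this at a time ...
# the lock is released automatically when the file is closed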

If you have a socket, use it. Sockets provide full-blown bidirectional communication. Just write your script to kill itself if it receives anything on the socket. This can be most easily done by creating a separate thread which tries to do a socket.recv() (for SOCK_DGRAM) or socket.accept() (for SOCK_STREAM/SOCK_SEQPACKET), and then calls sys.exit() once that succeeds.
You can then use socket.send() (SOCK_DGRAM) or socket.connect() (SOCK_STREAM/SOCK_SEQPACKET) from the second script instance to ask the first instance to exit.
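A minimal sketch of that idea with a SOCK_STREAM socket on localhost. The port number is a made-up value, and note one gotcha: sys.exit() raised in a worker thread only ends that thread, so the sketch uses os._exit() to end the whole process:

import os
import socket
import threading

CONTROL_PORT = 50007  # hypothetical port; pick any free local port

def exit_when_asked():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", CONTROL_PORT))
    server.listen(1)
    server.accept()  # blocks until another instance connects
    os._exit(0)      # sys.exit() here would only end this thread

threading.Thread(target=exit_when_asked, daemon=True).start()

# In the second instance, ask the first one to exit with:
# socket.create_connection(("127.0.0.1", CONTROL_PORT)).close()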

This function can kill a python script by name on *nix systems. It looks through a list of running processes, finds the PID of the one associated with your script, and issues a kill command.
import os
import subprocess

def killScript(scriptName):
    # get running processes with the ps aux command (bytes, so decode first)
    res = subprocess.check_output(["ps", "aux"], stderr=subprocess.STDOUT)
    for line in res.decode().splitlines():
        # if one of the lines lists our process
        if scriptName in line:
            # the PID is in the second whitespace-separated column
            PID = line.split()[1]
            # skip our own PID so the script does not kill itself
            if PID == str(os.getpid()):
                continue
            # kill the PID
            subprocess.check_output(["kill", PID], stderr=subprocess.STDOUT)
At the beginning of your tv script you could run something like:
killList = ["tvon.py", "tvoff.py", "tvtoggle.py"]
for script in killList:
    killScript(script)
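On Ubuntu you can also let pgrep do the searching; with -f it matches against the full command line, so "tvon.py" matches "python tvon.py". A sketch of the same PID lookup (findScript is just an illustrative name):

import subprocess

def findScript(scriptName):
    # pgrep -f matches the full command line; it exits nonzero when nothing matches
    try:
        out = subprocess.check_output(["pgrep", "-f", scriptName])
    except subprocess.CalledProcessError:
        return []
    return out.decode().split()

print(findScript("tvon.py"))  # e.g. ['12345']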

Related

How to make the Python subprocess wait for some input when running through SLURM script?

I am running some Python code via a SLURM script on a remote server accessed through SSH. At some point, license issues on the SLURM platform can occur, generating errors in Python and ending the subprocess. I want to use try-except to let the Python subprocess wait until the issue is fixed, after which it can keep running from where it stopped.
What are some smart implementations for that?
My most obvious solution is to keep Python inside a loop if the error occurs and let it read a file every X seconds; when I finally fix the error and want it to keep running from where it stopped, I would write something to the file and break the loop. I wonder if there is a smarter way to provide input to the Python subprocess while it is running through the SLURM script.
One idea might be to add a signal handler for the USR1 signal to your Python script, as in the sketch below.
In the signal handler function, you can set a global variable, send a message, or set a threading.Event that the main process is waiting on.
Then you can signal the process with:
kill -USR1 <PID>
or with the Python os.kill() equivalent.
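A minimal sketch of that pattern (the names resume and handle_usr1 are just illustrative); it parks on a threading.Event until SIGUSR1 arrives:

import os
import signal
import threading

resume = threading.Event()

def handle_usr1(signum, frame):
    # runs when the process receives SIGUSR1
    resume.set()

signal.signal(signal.SIGUSR1, handle_usr1)

print("blocked; run `kill -USR1 %d` to continue" % os.getpid())
# wait with a timeout so the loop re-checks the event after the handler runs
while not resume.wait(timeout=1.0):
    pass
print("resumed")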
Though I do have to agree there is something to be said for the simplicity of your process doing:
touch /tmp/blocked.$$
and then waiting in a loop with a 1 s sleep for that file to be removed. This way you can tell which process ID is blocked.
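And a sketch of that file-based variant from the Python side (the /tmp/blocked path mirrors the touch command above; $$ in the shell expands to the PID):

import os
import time

sentinel = "/tmp/blocked.%d" % os.getpid()
open(sentinel, "w").close()  # announce that this PID is blocked

# block until someone removes the file, e.g. rm /tmp/blocked.<PID>
while os.path.exists(sentinel):
    time.sleep(1)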

Is there a way I can store the output of a terminal command into a file using python?

I want to store the output of the terminal command top into a file, using Python.
In the terminal, when I type top and hit enter, I get an output that is real time, so it keeps updating. I want to store this into a file for a fixed duration and then stop writing.
file = open("data.txt", "w")
file.flush()
import os, time
os.system("top >> data.txt -n 1")
time.sleep(5)
exit()
file.close()
I have tried to use time.sleep() and then exit(), but it doesn't work; the only way top can be stopped is in the terminal, with Ctrl+C.
The process keeps running and data is continuously written to the file, which is not ideal, as one would guess.
For clarity: I know how to write the output to the file, I just want to stop writing after a set period.
os.system will wait for the end of the child process. If you do not want that, the Pythonic way is to use the subprocess module directly:
import subprocess

timeout = 60  # let top run for one minute
file = open("data.txt", "w")
# note: top may also need "-b" (batch mode) when its output is redirected
top = subprocess.Popen(["top", "-n", "1"], stdout=file)
try:
    top.wait(timeout)             # wait at most timeout seconds
except subprocess.TimeoutExpired:
    top.terminate()               # and terminate the child
The paranoid way (which is highly recommended for robust code) would be to use the full path of top. I have not done so here, because it may depend on the actual system...
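If you want the full path without hard-coding it, one option is to resolve it at runtime with shutil.which, for example:

import shutil

top_path = shutil.which("top")  # e.g. "/usr/bin/top"; None if not on PATH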
The issue you are facing is that os.system blocks: the rest of your script will not run until the command you launched has completed.
I think what you want is to execute your console command on another thread or process, so that the thread running your Python script can continue while the command runs in the background. See run a python program on a new thread for more info.
I'd suggest something like (this is untested):
import os
import time
import multiprocessing
myThread = multiprocessing.Process(target=os.system, args=("top >> data.txt -n 1",))
myThread.start()
time.sleep(5)
myThread.terminate()
That being said, you may need to consider the thread safety of os.system(); if it is not thread safe, you'll need to find an alternative that is.
Something else worth noting (and that I know little about) is that it may not be ideal to terminate threads in this way, see some of the answers here: Is there any way to kill a Thread?

Python3 Non-blocking input or killing threads

Reading through posts of similar questions, I strongly suspect there is no way to do what I'm trying to do, but I figured I'd ask. I have a program using python3 that is designed to run headless, receiving commands from remote users that have logged in. One of the commands, of course, is a shutdown so that the program can be ended cleanly. This section is working correctly.
However, while working on this I realized an option to enter commands directly, without a remote connection, would be useful in the event something unusual prevented remote access. I added a local_control function that runs in its own thread so that it doesn't interfere with the main loop. This works great for all commands except the shutdown command.
I have a variable that both loops monitor so that they can end when the shutdown command is sent. Sending the shutdown command from within local_control works fine, because the loop ends before getting back to input(). However, when the shutdown command is sent remotely, the program doesn't end until someone presses the Enter key locally, because that loop remains stuck at input(). As soon as Enter is pressed, the program continues, successfully breaks the loop, and continues with the shutdown as normal. Below is an example of my code.
import sys
import threading

self.runserver = True

def local_control():  # system to control server without remote access
    while self.runserver:
        raw_input = input()
        if raw_input == "shutdown":
            self.runserver = False

mythread = threading.Thread(target=local_control)
mythread.start()

while self.runserver:
    some_input = get_remote_input()  # getting a command from a remote user
    if some_input == "shutdown":
        self.runserver = False

sys.exit(0)  # server is shut down cleanly
Because the program runs primarily headless, GUI options such as pygame aren't available. Other solutions I've found online involve libraries that are not cross-platform, such as msvcrt, termios, and curses. Although it's not as clean an option, I'd settle for simply killing the thread to end it if I could; however, there is no way to do that either. So is there a cross-platform, non-GUI option for non-blocking input? Or is there another way to break a blocked loop from another thread?
Your network-IO thread is blocking the processing of commands while waiting for remote commands, so it will only evaluate the state of runserver after get_remote_input() returns (and its command is processed).
You will need three threads:
One which loops in local_control(), sending commands to the processing thread.
One which loops on get_remote_input(), also sending commands to the processing thread.
A processing thread (possibly the main thread).
A queue will probably be helpful here, since you need to avoid the race condition caused by the unsynchronized access to runserver in the current code.
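A sketch of that layout, with both input sources feeding one queue that the main thread blocks on (get_remote_input is assumed from the question). Daemon threads mean the process can exit even while the local input() is still blocking:

import queue
import threading

commands = queue.Queue()

def local_control():
    # local console input; blocks in input(), which is fine in a daemon thread
    while True:
        commands.put(input())

def remote_control():
    # remote commands; get_remote_input() is the question's own function
    while True:
        commands.put(get_remote_input())

threading.Thread(target=local_control, daemon=True).start()
threading.Thread(target=remote_control, daemon=True).start()

# processing loop in the main thread: blocks on the queue, never on input()
while True:
    if commands.get() == "shutdown":
        break  # daemon threads die when the main thread exits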
Not a portable solution, but on *nix you might be able to send yourself an interrupt signal to break the blocking input(). You'll need the pthread ID of the thread stuck in input() (call pthread_self there and save it somewhere the network control thread can read) so that the network control thread can call pthread_kill.

subprocess or threads for checking pings to a server?

I've never used multithreaded processes before or tried to create them, and I'm new to coding subprocesses, but I understand what forking is conceptually and it isn't that hard to understand per se.
What I'm trying to do is keep track of ping spikes on my network. Basically, I run ping 1.2.3.4 in a subprocess and check its output in the main thread, then process that output. However, the code as of now looks like this:
import subprocess

def run_background_ping():
    # args and process(...) are defined elsewhere in my code
    background_ping = subprocess.Popen(args, stdout=subprocess.PIPE)
    with background_ping.stdout:
        for line in iter(background_ping.stdout.readline, b""):
            process(line)
which means that I'm still reading the ping output in the main thread. What that means practically is that I see the output of the ping constantly, which I don't think I should. In which case, how do I get this operation to run in the background and poll it periodically while other things are happening? What I want is this:
def ping_subprocess():
    # ping the server in a subprocess
    # return a string whenever there is output

def read_ping_subprocess_output():
    current_ping = poll(ping_subprocess.output)
    eval(current_ping)

ping_subprocess()
read_ping_subprocess_output()  # <-- how do I get to this code?
but the problem is that ping_subprocess() will wait until the process finishes, which will technically never happen, since the ping process runs indefinitely and would only stop if I purposely send it some other signal, which I plan on coding in later.
How do I make the subprocess call return once the process starts, instead of waiting for it to end, so that I can read its output? I'm sure another option is to constantly append the output of ping to a file (within reason) and read that file simultaneously from another program, but I'm not sure that's the best idea.
So in terms of threads (if this is needed) how would this go? I've heard that mixing threads and subprocesses is a bad idea.
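For what it's worth, Popen already returns as soon as the child starts; the blocking in the snippet above comes from reading stdout in the main thread. A sketch of one way to move the reading into a background thread and poll a queue instead (the function and variable names are illustrative, and the host comes from the question):

import queue
import subprocess
import threading

def start_ping(host):
    # Popen returns immediately; the child keeps running in the background
    proc = subprocess.Popen(["ping", host], stdout=subprocess.PIPE)
    lines = queue.Queue()

    def reader():
        # runs in the background, feeding each output line into the queue
        for raw in iter(proc.stdout.readline, b""):
            lines.put(raw.decode().rstrip())

    threading.Thread(target=reader, daemon=True).start()
    return proc, lines

proc, lines = start_ping("1.2.3.4")
while True:
    try:
        print("ping said:", lines.get(timeout=1))  # poll for new output
    except queue.Empty:
        pass  # no new line this second; do other work here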

PSUtil and Wine: Trying to get the time a process ends, but Wine processes never terminate (they just go to sleep)?

I am using Python's psutil library to detect the exact time a given process completes. Currently I do this by looking for when the process terminates; the following suite of tests works well for both Windows and regular Linux programs:
# Run a command and keep a reference
if do_run_command:
    process = psutil.Popen(command)
# Latch onto a running process by PID
else:
    process = psutil.Process(int(pid))

# Loop until the process ends
while True:
    if not process.is_running():
        break
    if process.status() == psutil.STATUS_ZOMBIE:
        break
    try:
        # Command that gets information about the process
    except psutil.NoSuchProcess:
        break
# Store end time
However, when I try to use this on a program run under Wine, none of those tests ever triggers. This appears to be because Wine processes never actually terminate, but rather go to sleep indefinitely. That includes the program running in Wine, the script that launches the program in Wine, and so on.
This causes problems both in detecting when the programs truly end, and because the processes build up in memory until I eventually have to kill them manually.
-=-=-=-=-
I am running on CentOS 6.0. The main questions I have boil down to these:
Is there some sort of configuration that can be set for Wine to have it close its processes when they complete? (I am rather new to Wine, and searching for this has proved maddeningly difficult.)
Alternatively, is there another PSUtil test I could add to my Python script to detect when these Wine processes are finishing (even if they are sleeping indefinitely)?
