Sending signals between two applications - Python

What I would like to do is set up some kind of communication between two applications:
the first application runs and launches the second application (the first application is then closed); when the second application has finished its job, I would like it to send some kind of signal back to the first application and launch it again (the second application is closed this time).
The only idea I have is to write to a file when the second application has finished its job, and to check in the first application whether that file exists... is there any other way to do this?

It's a little unclear what you're trying to do. Are you simply trying to chain applications together, i.e. when the first application finishes it calls the second application, and when the second finishes it calls the first, etc.? If this is true, then you simply have to have one application spawn the other and exit immediately instead of waiting. Take a look at subprocess.Popen, which lets you do this (subprocess.call always waits).
However, it sounds like maybe you want the first application to continue running, but to reset itself when the second one finishes. In this case the second application is in fact the child of the first. In the first, you can check if the second has finished after calling Popen by using poll; then, when the second has finished, have the first app respawn itself and then exit, just as described above using Popen. If you want it to work asynchronously, you can create a separate thread in the first app that calls wait, and then exits; but you have to be careful with synchronization in this case. I can't say more because I don't know what the first app is doing while the second runs.
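A rough sketch of that poll-then-respawn pattern (the dummy child process and the commented-out respawn lines are illustrative, not from the question):

```python
import subprocess
import sys
import time

# Spawn the second application as a child (a dummy process stands in for it here):
child = subprocess.Popen([sys.executable, "-c", "pass"])

# While the child runs, poll() returns None; do other work in between checks.
while child.poll() is None:
    time.sleep(0.1)

# The child has finished; respawn this program and exit immediately (hypothetical):
# subprocess.Popen([sys.executable] + sys.argv)
# sys.exit(0)
```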
If you want the second application to keep running but to send a signal back to the first application, which is also running, create a pipe between the two, so that when the child writes to its stdout, it's actually writing to a pipe, which the parent can read. If your parent (the first application) is able to block waiting for the second application to finish doing what it's doing, then you just do this:
p = subprocess.Popen('myprogram.exe', stdout=subprocess.PIPE)
line = p.stdout.readline()
Then the call to readline will block until the second program prints something out (e.g. a line just containing "done"), at which point line in the first program will contain the line printed by the second program.
If you need for the first program to do something else at the same time, and periodically poll the pipe in a non-blocking fashion, it gets trickier. Take a look at this answer for a solution that works on both Unix and Windows:
Non-blocking read on a subprocess.PIPE in python
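The thread-plus-queue technique from that answer looks roughly like this (the dummy child process is a stand-in for "myprogram.exe", not code from the question):

```python
import queue
import subprocess
import sys
import threading

def start_reader(proc):
    # Pump the child's stdout into a queue from a background thread,
    # so the main thread can poll the queue without blocking.
    q = queue.Queue()
    def pump():
        for line in iter(proc.stdout.readline, b""):
            q.put(line)
        proc.stdout.close()
    threading.Thread(target=pump, daemon=True).start()
    return q

# A dummy child that prints one line stands in for "myprogram.exe":
proc = subprocess.Popen([sys.executable, "-c", "print('done')"],
                        stdout=subprocess.PIPE)
q = start_reader(proc)

try:
    line = q.get(timeout=5)   # q.get_nowait() gives a true non-blocking check
except queue.Empty:
    line = None
```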


subprocess or threads for checking pings to a server?

I've never used multithreaded processes before or tried to make them, and I'm a noob at coding subprocesses, but I understand what forking is conceptually, and it isn't that hard to understand per se.
What I'm trying to do is keep track of ping spikes on my network. Basically, I run ping 1.2.3.4 in a subprocess, check its output in the main thread, and then process that output. However, the code currently looks like this:
def run_background_ping():
    background_ping = subprocess.Popen(args, stdout=subprocess.PIPE)
    with background_ping.stdout:
        for line in iter(background_ping.stdout.readline, b''):
            process(line)
which means that I'm still running the ping command in a subprocess on the main thread. What that means in practice is that I see the output of the ping constantly, which I don't think I should. In which case, how do I get this operation to run in the background and poll it constantly while other things are happening? What I want is this:
def ping_subprocess():
    # ping the server in a subprocess
    # return a string whenever there is output

def read_ping_subprocess_output():
    current_ping = poll(ping_subprocess.output)
    eval(current_ping)

ping_subprocess()
read_ping_subprocess_output()  # <-- how do I get to this code?
but the problem is that the ping_subprocess() method will wait until the process finishes, which will technically never happen, since the ping process runs indefinitely and would only be stopped if I purposely send it some other signal, which I plan to code in later.
How do I make the subprocess method return once the process starts, instead of waiting for it to end, so that I can read its output? I'm sure another option is to constantly append the output of the ping command to a file (within reason) and read that file simultaneously with another program, but I'm not sure that's the best idea.
So, in terms of threads (if they are needed), how would this go? I've heard that mixing threads and subprocesses is a bad idea.

How to integrate killable processes/thread in Python GUI?

Hi all, I'm really new to Python and I'm facing a task which I can't completely grasp.
I've created an interface with Tkinter which should accomplish a couple of apparently easy feats.
By clicking a "Start" button, two threads/processes are started (each calling multiple subfunctions) which mainly read data from a serial port (one port per process, of course) and write it to file.
The I/O actions are looped within a while loop with a very high counter, to let them run almost indefinitely.
The "Stop" button should stop the acquisition and essentially it should:
Kill the read/write Thread
Close the file
Close the serial port
Unfortunately, I still do not understand how to accomplish point 1, i.e. how to create killable threads without killing the whole GUI. Is there any way of doing this?
Thank you all!
First, you have to choose whether you are going to use threads or processes.
I will not go too much into the differences, google it ;) Anyway, here are some things to consider: it is much easier to establish communication between threads than between processes; in Python, all threads will run on the same CPU core (see the Python GIL), but subprocesses may use multiple cores.
Processes
If you are using subprocesses, there are two ways: subprocess.Popen and multiprocessing.Process. With Popen you can run anything, whereas Process gives a simpler thread-like interface to running python code which is part of your project in a subprocess.
Both can be killed using the terminate method.
See the documentation for multiprocessing and subprocess.
Of course, if you want a more graceful exit, you will want to send an "exit" message to the subprocess, rather than just terminate it, so that it gets a chance to do its clean-up. You could do that e.g. by writing to its stdin. The process should read from stdin, and when it gets the message "exit", it should do whatever you need before exiting.
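That clean-shutdown handshake might look something like this (the child's code is inlined here purely for illustration; in a real project it would be the subprocess's own script):

```python
import subprocess
import sys

# Child: read stdin until the parent sends "exit", then clean up and leave.
child_code = """
import sys
for line in sys.stdin:
    if line.strip() == "exit":
        break       # do any clean-up here before exiting
print("clean shutdown")
"""

proc = subprocess.Popen([sys.executable, "-c", child_code],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# Send the exit message and wait for the child to finish:
out, _ = proc.communicate(b"exit\n")
```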
Threads
For threads, you have to implement your own mechanism for stopping, rather than using something as violent as process.terminate().
Usually, a thread runs in a loop and in that loop you check for a flag which says stop. Then you break from the loop.
I usually have something like this:
import threading
from threading import Thread

class MyThread(Thread):
    def __init__(self):
        super(MyThread, self).__init__()
        self._stop_event = threading.Event()

    def run(self):
        while not self._stop_event.is_set():
            # do something
            self._stop_event.wait(SLEEP_TIME)
        # clean-up before exit

    def stop(self, timeout):
        self._stop_event.set()
        self.join(timeout)
Of course, you need some exception handling etc, but this is the basic idea.
EDIT: Answers to questions in comment
thread.start_new_thread(your_function) starts a new thread, that is correct. On the other hand, the threading module gives you a higher-level API which is much nicer.
With threading module, you can do the same with:
t = threading.Thread(target=your_function)
t.start()
or you can make your own class which inherits from Thread and put your functionality in the run method, as in the example above. Then, when user clicks the start button, you do:
t = MyThread()
t.start()
You should store the t variable somewhere; exactly where depends on how you designed the rest of your application. I would probably have some object which holds all active threads in a list.
When user clicks stop, you should:
t.stop(some_reasonable_time_in_which_the_thread_should_stop)
After that, you can remove t from your list; it is not usable any more.
First, you can use subprocess.Popen() to spawn child processes; then later you can use Popen.terminate() to terminate them.
Note that you could also do everything in a single Python thread, without subprocesses, if you want to. It's perfectly possible to "multiplex" reading from multiple ports in a single event loop.
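On platforms where each port is exposed as a file descriptor, that single-threaded multiplexing can be sketched with the selectors module (plain pipes stand in for the serial ports here; this is an assumption for illustration, not code from the question):

```python
import os
import selectors

sel = selectors.DefaultSelector()

# Two pipes stand in for two serial ports (on POSIX, a pyserial port object
# also has a fileno() and could be registered the same way).
r1, w1 = os.pipe()
r2, w2 = os.pipe()
for fd in (r1, r2):
    sel.register(fd, selectors.EVENT_READ)

os.write(w1, b"port1 data\n")
os.write(w2, b"port2 data\n")

received = []
while len(received) < 2:
    for key, _ in sel.select(timeout=1):
        # read from whichever "port" is ready, without threads or blocking
        received.append(os.read(key.fd, 1024))
```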

Python: kill process and close window it opened in (windows)

I'm working on a tool for data entry at my job where it basically takes a report ID number, opens a PDF to that page of that report, allows you to input the information and then saves it.
I'm completely new to instantiating new processes in Python; this is the first time I've really tried to do it. So basically, I have a relevant function:
def get_report(id):
    path = report_path(id)
    if not path:
        raise NameError
    page = get_page(path, id)
    proc = subprocess.Popen([r"C:\Program Files (x86)\Adobe\Reader 11.0\Reader\AcroRd32.exe",
                             "/A", "page={}".format(page), path])
In order to open the report in Adobe Acrobat and be able to input information while the report is still open, I determined that I had to use multiprocessing. So, in the main loop of the program, where it iterates through the data and gets the report ID, I have this:
for row in rows:
    print 'Opening report for {}'.format(ID)
    arg = ID
    proc = Process(target=get_report, args=(arg,))
    proc.start()
    row[1] = raw_input('Enter the desired value: ')
    rows.updateRow(row)
    while proc.is_alive():
        pass
This way, one can enter data without the program hanging on the subprocess.Popen() command. However, if it simply continues on to the next record without closing the Acrobat window that pops up, it won't actually open the next report. Hence the while proc.is_alive(): loop, which gives one a chance to close the window manually. I'd like to kill the process immediately after 'enter' is hit and the value entered, so it will just go on and open the next report with even less work. I tried several different things: killing processes by PID using os.kill(); killing the subprocess; killing the process itself; killing both of them; and also using subprocess.call() instead of Popen() to see if it made a difference.
It didn't.
What am I missing here? How do I kill the process and close the window that it opened in? Is this even possible? Like I said, I have just about 0 experience with processes in python. If I'm doing something horribly wrong, please let me know!
Thanks in advance.
To kill/terminate a subprocess, call proc.kill()/proc.terminate(). It may leave grandchild processes running; see subprocess: deleting child processes in Windows
This way, one can enter data without the program hanging on the subprocess.Popen() command.
Popen() starts the command; it does not wait for the command to finish. There is a .wait() method, and there are convenience functions such as call().
Even if Popen(command).wait() returns, i.e., even if the corresponding external process has exited, it does not necessarily mean that the document is closed in the general case (the launcher app is done, but the main application may persist).
I.e., the first step is to drop the unnecessary multiprocessing.Process and call Popen() in the main process instead.
The second step is to make sure to start an executable that owns the opened document, i.e., one whose death means the corresponding document won't stay open: AcroRd32.exe might already be such a program (test it: see whether call([r'..\AcroRd32.exe', ..]) waits for the document to be closed), or it might have a command-line switch that enables such behavior. See How do I launch a file in its default program, and then close it when the script finishes?
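With multiprocessing dropped, those two steps reduce to something like the sketch below (a long-lived dummy process stands in for the viewer, and the value is a stand-in for user input; the real command line is shown only in the comment):

```python
import subprocess
import sys

def open_report(page, path):
    # Popen() returns immediately, so the input prompt is never blocked.
    # The real call would use a raw string for the Windows path, e.g.:
    #   [r"C:\Program Files (x86)\Adobe\Reader 11.0\Reader\AcroRd32.exe",
    #    "/A", "page={}".format(page), path]
    # Here a sleeping dummy process stands in for the PDF viewer:
    return subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

proc = open_report(3, "report.pdf")
value = "42"          # in the real tool this comes from raw_input()
proc.terminate()      # close the viewer as soon as the value has been entered
proc.wait()           # reap the process; returncode is now set
```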
I tried killing the subprocess, killing the process itself, killing both of them, and also tried using subprocess.call() instead of Popen() to see if it made a difference.
It didn't.
If kill() and Popen() behave the same in your case, then either you've made a mistake (they don't behave the same; you should create a minimal standalone code example with a dummy PDF that demonstrates the problem, describing in words what you expect to happen, step by step, and what happens instead) or AcroRd32.exe is just a launcher app as described above (it merely opens the document and immediately exits without waiting for the document to be closed).

Python check if another script is running and stop it

I have several Python scripts that turn my TV on and off. Sometimes the TV does not respond the first time, so I use a while loop to keep sending the command until the "success" response is received, up to 10 times.
I need to check whether one of these programs is running when any of them is started, and kill the first process.
This answer uses domain socket locks, and I think this could work, but I don't really understand what's happening there:
https://stackoverflow.com/a/7758075/2005444
What I don't know is what the process_name would be. The scripts are titled tvon.py, tvoff.py, and tvtoggle.py. Is it just the title? Would it include the extension? How do I get the PID so I can kill the process?
This is running on Ubuntu 14.04.1
EDIT: all I really need is to check first whether any of these scripts is running. Also, instead of killing the process, maybe I could just wait for it to finish: I could loop and break out if none of those processes is running.
The reason I need to do this is that if the TV is off and the off script is run, it will never succeed: the TV won't respond if it is already off, which is why I built in the limit of 10 commands. It never really takes more than 4, so 10 is overkill. The problem is that if the off command is trying to run and I turn the TV on using the tvon script, the TV will turn on and back off. Although the TV limits how often commands can be accepted, which reduces the chance of this happening, I still want this to work as cleanly as possible.
EDIT:
I found that I cannot kill the process, because it can lock up the tty port, which requires a manual restart. So I think the smarter way is to have the second process wait until the first is done, or to find a way to tell the first process to stop at a specific point in its loop so that I know it's not transmitting.
If you have a socket, use it. Sockets provide full-blown bidirectional communication. Just write your script to kill itself if it receives anything on the socket. This can be most easily done by creating a separate thread which tries to do a socket.recv() (for SOCK_DGRAM) or socket.accept() (for SOCK_STREAM/SOCK_SEQPACKET), and then calls sys.exit() once that succeeds.
You can then use socket.send() (SOCK_DGRAM) or socket.connect() (SOCK_STREAM/SOCK_SEQPACKET) from the second script instance to ask the first instance to exit.
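A minimal sketch of that listener-plus-notifier pattern with SOCK_STREAM (the function names are made up for illustration; a real script would use a fixed, agreed-on port and call sys.exit() from on_exit):

```python
import socket
import threading

def install_exit_listener(on_exit):
    # Listen on a local TCP socket; any incoming connection is an exit request.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick (demo only)
    srv.listen(1)
    def wait_for_request():
        conn, _ = srv.accept()          # blocks in this background thread
        conn.close()
        srv.close()
        on_exit()                       # e.g. sys.exit() in the real script
    threading.Thread(target=wait_for_request, daemon=True).start()
    return srv.getsockname()[1]         # the port the listener ended up on

def ask_instance_to_exit(port):
    # The second instance just connects to tell the first one to stop.
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=1):
            return True
    except OSError:
        return False
```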
This function can kill a python script by name on *nix systems. It looks through a list of running processes, finds the PID of the one associated with your script, and issues a kill command.
import subprocess

def killScript(scriptName):
    # get running processes with the ps aux command
    res = subprocess.check_output(["ps", "aux"], stderr=subprocess.STDOUT)
    for line in res.decode().split("\n"):
        # if one of the lines lists our process
        if line.find(scriptName) != -1:
            info = []
            # split the information into info[]
            for part in line.split(" "):
                if part.strip() != "":
                    info.append(part)
            # the PID is in the second column
            PID = info[1]
            # kill the PID
            subprocess.check_output(["kill", PID], stderr=subprocess.STDOUT)
At the beginning of your tv script you could run something like:
killList = ["tvon.py", "tvoff.py", "tvtoggle.py"]
for script in killList:
    killScript(script)

Python send jobs to queue processed by Popen

I currently have a working Python application, a GUI built with wxPython. I send this application a folder, which then gets processed by a command-line application via Popen. Each run takes about 40 minutes or more to finish. While a single job is processing, I would like to queue up another job. I don't want to submit multiple jobs at the same time: I want to submit one job and, while it's processing, submit another, so that when the first one finishes the next one is processed, and so on. I am unsure how to go about this and would appreciate some suggestions.
Presumably you either get a notification passed back to the GUI when the task has finished, or the GUI checks the state of the task periodically. In either case you can let the user add to a list of directories to be processed; when your Popen task has finished, take the first one off the list and start a new Popen task (remembering to remove the started one from the list).
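That list-of-directories idea amounts to a worker loop over a queue; a minimal sketch, with a runnable no-op command standing in for the real 40-minute tool:

```python
import queue
import subprocess
import sys
import threading

job_queue = queue.Queue()
finished = []

def worker():
    # Process queued folders one at a time; call() blocks until each job ends,
    # so jobs never run concurrently.
    while True:
        folder = job_queue.get()
        if folder is None:          # sentinel: stop the worker
            job_queue.task_done()
            break
        # A no-op command stands in for the real command-line application:
        subprocess.call([sys.executable, "-c", "pass"])
        finished.append(folder)
        job_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# The GUI's "submit" handler just enqueues; it never blocks:
for folder in ["job1", "job2"]:
    job_queue.put(folder)

job_queue.put(None)   # tell the worker to exit once the queue drains
job_queue.join()      # wait for all queued jobs to complete
```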
Use subprocess.call() instead of Popen, or use Popen.wait().
