I would like to establish a very simple communication channel between two Python scripts. I have decided that the best way to communicate is to have both scripts read from a text file. I would like the main program to wait while the child programs execute.
Normally I would make the main program wait a fixed amount of time and continuously check the text file for an okay flag. However, I have seen people talk about using a signal.
Could someone please give an example of this?
There is a Popen.send_signal() method that allows you to send a signal to a child process.
Here's a code example that sends SIGINT to a ping subprocess to get the summary in the output on exit.
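A minimal sketch of that idea (the host name and the three-second delay are arbitrary choices for illustration):

import signal
import subprocess
import time

# Start ping, let a few replies accumulate, then send SIGINT so it prints
# its statistics summary before exiting (the same effect as pressing Ctrl+C).
proc = subprocess.Popen(['ping', 'example.com'], stdout=subprocess.PIPE)
time.sleep(3)
proc.send_signal(signal.SIGINT)
output = proc.communicate()[0]      # includes the summary lines
print(output.decode())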
You need one process to write and one to read; if both processes only read, no communication happens. Signals are used only for special purposes, not for normal inter-process communication. Use something like pipes or sockets instead: they are not more complicated than files, but much more powerful.
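For the original use case (a main program waiting for an okay flag from a child), a named pipe already removes the need for polling, because reads block until the other side writes. A minimal sketch, Unix only, with an arbitrary FIFO path:

# child.py -- writer side; creates the FIFO if needed, then writes the flag
import os

FIFO = '/tmp/flag_fifo'
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)
with open(FIFO, 'w') as f:          # blocks until the main program opens it
    f.write('okay\n')

# main.py -- reader side; blocks in open()/readline() until the child writes
import os

FIFO = '/tmp/flag_fifo'
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)
with open(FIFO) as f:
    flag = f.readline().strip()
print('child reported:', flag)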
I'm trying to build a system that would manage a small database and populate it with data from the web.
I would like to have this process run in the background, but still have some way of interacting with it.
How can I go about this in Python?
I would like to know how to do this in two cases:
from within the same Python script: something like daemon = task.start() followed by daemon.get_info() or daemon.do_something()
from the shell (via another program I could write): myclient get_info or myclient do_something
Could someone give me some key concepts to go look into?
Edit: I just read this blog post; is socket programming (as indicated in its last example) the best way to go about this?
So in the end I landed on some terminology that I was missing.
The core concept seems to be inter-process communication (IPC).
On Unix variants the two easiest ways of implementing it are:
Named Pipes (one-way communication)
Sockets (two-way)
A Python script that makes use of these could spawn another thread that repeatedly reads from the pipe and passes messages back to the main thread through a queue.
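A minimal sketch of that arrangement, assuming a FIFO at /tmp/task_fifo and treating each line written to it as a command (both are placeholders):

import os
import queue
import threading

FIFO_PATH = '/tmp/task_fifo'
if not os.path.exists(FIFO_PATH):
    os.mkfifo(FIFO_PATH)

messages = queue.Queue()

def pipe_reader():
    # Background thread: repeatedly read lines from the named pipe and
    # hand them to the main thread through the queue.
    while True:
        with open(FIFO_PATH) as fifo:
            for line in fifo:
                messages.put(line.strip())

threading.Thread(target=pipe_reader, daemon=True).start()

# Main thread: another process can now run e.g.
#   echo get_info > /tmp/task_fifo
# and the command arrives here.
while True:
    command = messages.get()
    print('received command:', command)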
I'm trying to communicate with gdb asynchronously using pexpect. If I use the same pipe for everything, the commands sent using pexpect's sendline() function get mixed into each other. And if I synchronize it like this:
def send_command(str):
    global p
    with GDB_Engine.lock:
        p.sendline(str)
        p.expect_exact("(gdb)")
It'll be too slow, since there'll be tons of commands coming through. So what I want to do is create a different pipe for every send_command() block, then close it when the work finishes. That way, the text generated by the sendline() calls won't get mixed together, and I'll also be able to execute things asynchronously.
I have a threaded C program that I want to launch using a Python GUI in a Unix environment. I want to be able to change the state of the GUI by gathering output from the C program.
For instance, this is output from my C program using printf:
Thread on tile 1: On
Thread on tile 2: OFF
Thread on tile 3: Disable
...
Thread on tile 61: ON
I would update my GUI based on the output. What makes the problem difficult is that both my GUI and the C program need to run simultaneously, with updates happening in real time. I also need to be able to send commands to the C program from my GUI.
I'm new to Python, C, and Unix (I know, complete rookie status).
I've read up on subprocess, Popen, and pexpect, but I'm not sure how to put it all together, or whether this is even possible at all.
Thanks in advance
The basic outline of an approach would be to have your Python GUI create a new process running the C program, and then have the Python GUI read from one end of a pipe while the C program writes to the other end. The Python GUI reads the output from the C program, interprets it, and then does something based on what it has read.
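A minimal sketch of that outline, assuming the C program is an executable called ./tile_monitor that prints lines like the ones in the question (the parsing and the GUI update are placeholders):

import subprocess

proc = subprocess.Popen(['./tile_monitor'],
                        stdout=subprocess.PIPE,
                        stdin=subprocess.PIPE)

for raw in proc.stdout:                     # one iteration per line from the C side
    line = raw.decode().strip()
    if line.startswith('Thread on tile'):
        tile, state = line.split(':', 1)
        # here you would update the GUI widget for this tile instead of printing
        print('update', tile.strip(), '->', state.strip())

# Commands can go back the other way through stdin, for example:
# proc.stdin.write(b'disable 2\n'); proc.stdin.flush()

For the output to stream in real time, the C program has to flush its stdout after each printf (see the buffering caveat further down).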
Multiprocessing with Python.
How to fork and return text with Python
Recipe to fork a daemon process on Unix
The recipe article has comments about redirecting standard input and standard output, which you may need to do.
There's a toughie. I've run into this problem in the past (with no truly satisfactory solution):
https://groups.google.com/forum/?fromgroups#!topic/comp.lang.python/79uoHgAbg18
As suggested there, take a look at this custom module:
http://pypi.python.org/pypi/sarge/0.1
http://sarge.readthedocs.org/en/latest/
Edit for @Richard (not enough rep to comment): The problem with pipes is that unless they are attached to an interactive terminal, they are fully buffered -- meaning that none of the output is passed through the pipe to the Python side until the C program is done running, which certainly doesn't qualify as real time.
Edit 2: Based on Richard's link and some earlier thinking I had done, it occurred to me that it might be possible to manually loop over the pipe by treating it as a file object and only reading one line at a time:
from time import sleep
# Assume proc is the Popen object
wait_time = 1  # 1 second, same delay as `tail -f`
while True:  # Or whatever condition you need
    line = proc.stdout.readline()
    if line != '' and line != '\n':
        parse_line_do_stuff()
    sleep(wait_time)
This assumes that readline() is non-blocking, and further assumes that the pipe is at most line buffered, and even then it might not work. I've never tried it.
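One possible way (Unix only) to get closer to that non-blocking assumption is to put the pipe's file descriptor into non-blocking mode and read raw chunks, catching the "nothing there yet" case; proc is the Popen object from the sketch above:

import fcntl
import os

fd = proc.stdout.fileno()
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

try:
    chunk = os.read(fd, 4096)       # returns immediately
except BlockingIOError:
    chunk = b''                     # the C program hasn't printed anything yet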
You have two processes, and you want them to communicate. This is known as "interprocess communication", or IPC. If you Google "interprocess communication" you can find more information, like this:
http://beej.us/guide/bgipc/output/html/singlepage/bgipc.html
You might want to try a "domain socket" for communicating between the C program and the Python GUI. The guide linked above explains how to talk through a domain socket in C; here is a tutorial on talking through a domain socket with Python.
http://www.doughellmann.com/PyMOTW/socket/uds.html
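A minimal sketch of the Python (server) side of a Unix domain socket, along the lines of that tutorial; the socket path is an arbitrary choice, and the C program would connect() to the same path:

import os
import socket

SOCK_PATH = '/tmp/gui.sock'
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
server.listen(1)

conn, _ = server.accept()           # waits for the C program to connect
data = conn.recv(1024)              # e.g. b'Thread on tile 1: On\n'
conn.sendall(b'disable 2\n')        # send a command back
conn.close()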
Currently I'm trying to convert my little Python script to support multiple threads/cores. I've been reading about the multiprocessing module for several days now, and I've also been trying to get it to suit my needs for some time, but I still don't have a clue why it won't work.
This is the working code, and this is my approach to implementing the pool workers. As there are no locks in place, and I didn't want to make it too complicated at first, I have already disabled logging to file.
Still, it doesn't work. It doesn't even output any kind of error message. After running it, it just displays the welcome message and then keeps running, but without producing any of the desired output, which would be two lines per converted file (before and after converting).
All your workers do is wait for the started subprocesses to finish. They don't have any real work to do, as that is performed by the external subprocesses, so they will be idle all the time.
Using multiprocessing for what you do really is overkill; it's much more appropriate to use threads for that.
If you want to learn how to do multiprocessing, try something which involves inter-process communication, synchronisation, pipes, ...
But to also address your question:
Have a look at what arguments subprocess.call takes. You call it with a single space-separated command string. If you want that to work you have to pass shell=True; otherwise the whole string is interpreted as the executable's name.
The preferred way to call a program using subprocess is to specify the program and arguments as a list:
subprocess.Popen(['/path/to/program', 'arg1', 'arg2'], *otherarguments)
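To make the difference concrete, a short sketch (the command is just a stand-in for whatever your script calls for each file):

import subprocess

# A single space-separated string needs shell=True, otherwise the whole
# string is taken as the executable's name and the call fails:
subprocess.call('ffmpeg -i in.avi out.mp4', shell=True)

# Preferred: program and arguments as a list, no shell involved:
subprocess.call(['ffmpeg', '-i', 'in.avi', 'out.mp4'])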
I have a subprocess that I use. I must be able to asynchronously read from and write to this process via its respective stdout and stdin.
How can I do this? I have looked at subprocess, but the communicate method waits for process termination (which is not what I want) and the subprocess.stdout.read method can block.
The subprocess is not a Python script but can be edited if absolutely necessary. In total I will have around 15 of these subprocesses.
Have a look at how communicate is implemented.
There are essentially 2 ways to do it:
either use select() and be notified whether you can read/write,
or delegate the read and the write, each of which can block, to its own thread (a sketch of the thread approach follows).
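A minimal sketch of the thread-per-stream option: a background thread owns the blocking readline() call and hands lines to the main thread through a queue, so the main thread never blocks. The command being run is a placeholder:

import queue
import subprocess
import threading

proc = subprocess.Popen(['some_program'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)

out_lines = queue.Queue()

def drain(stream, q):
    # Blocks in readline() so the main thread doesn't have to.
    for line in iter(stream.readline, b''):
        q.put(line)
    stream.close()

threading.Thread(target=drain, args=(proc.stdout, out_lines), daemon=True).start()

# Main thread: write whenever you like, read whatever has arrived so far.
proc.stdin.write(b'hello\n')
proc.stdin.flush()
try:
    line = out_lines.get(timeout=0.1)
except queue.Empty:
    line = None                     # nothing available yet

With around 15 subprocesses this means one reader thread per process, which is still manageable.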
Have you considered using a queue or a NoSQL DB for inter-process communication?
I would suggest using Redis, with your processes reading and writing to different keys.
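A minimal sketch of that idea, assuming the redis-py package and a Redis server on localhost; the key name is arbitrary:

import redis

r = redis.Redis()                   # defaults to localhost:6379

# One process pushes messages:
r.rpush('jobs', 'convert file1.avi')

# Another process pops them; blpop blocks until something arrives:
_, job = r.blpop('jobs')
print(job.decode())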
Have a look at sarge: http://sarge.readthedocs.org/en/latest/index.html
From the sarge docs:
If you want to interact with external programs from your Python applications, Sarge is a library which is intended to make your life easier than using the subprocess module in Python’s standard library.
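A short usage sketch, following the examples in the sarge documentation (the command is a placeholder):

from sarge import Capture, run

p = run('ping -c 2 localhost', stdout=Capture())
print(p.returncode)
print(p.stdout.text)                # captured output as text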