I would like to do something like this:
import MyProcess
proc1 = MyProcess('python').start()
proc2 = MyProcess('bash').start()
### This would mimic what happens in a python shell
print(proc1('a=10')) ==> ___
print(proc1('a*2')) ==> 20
proc1('b=a*2')
print(proc1("print(b)")) ==> 20
### This would mimic what happens in a bash shell
proc2("a=hello")
proc2("b=there")
c = proc2("echo \"$a $b\"")
print(c) ==> hello there
proc1.stop()
proc2.stop()
I'm not really sure where to start. I tried using subprocess, but as soon as I read back the stdout from my last command (issued with process.stdin.write), the process seems to quit and won't execute any further commands. Also, multiprocessing might be better, as it can make better use of resources like multiple cores.
I can't find any examples of multiprocessing where you start a process, issue a command, get the output, issue another command, get that output, etc. It seems like it's always about starting the process and then letting it finish. Would I need the process to wrap some kind of while True loop?
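For what it's worth, here is a minimal sketch of how a MyProcess class could work for a line-oriented shell such as bash, built on subprocess: a unique sentinel is echoed after every command so the reader knows where each command's output ends. The class and its echo_cmd parameter are made up for illustration; a Python REPL would need a different echo command (and unbuffered, interactive flags), so treat this as a starting point, not a finished implementation.
import subprocess
import uuid

class MyProcess(object):
    def __init__(self, interpreter, echo_cmd='echo {}'):
        self.args = [interpreter]
        self.echo_cmd = echo_cmd  # how this interpreter prints a literal string
        self.proc = None

    def start(self):
        self.proc = subprocess.Popen(
            self.args, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT, universal_newlines=True, bufsize=1)
        return self

    def __call__(self, command):
        sentinel = uuid.uuid4().hex   # marks the end of this command's output
        self.proc.stdin.write(command + '\n')
        self.proc.stdin.write(self.echo_cmd.format(sentinel) + '\n')
        self.proc.stdin.flush()
        lines = []
        for line in self.proc.stdout:
            if sentinel in line:
                break
            lines.append(line.rstrip('\n'))
        return '\n'.join(lines)

    def stop(self):
        self.proc.stdin.close()
        self.proc.wait()

proc2 = MyProcess('bash').start()
proc2('a=hello')
proc2('b=there')
print(proc2('echo "$a $b"'))   # hello there
proc2.stop()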
I am trying to use the output of an external program run using the run function.
This program regularly emits a row of data which I need to use in my script.
I have found the subprocess library and used its run()/check_output() functions.
Example:
def usual_process():
    # some code here
    for i in subprocess.check_output(['foo', '$$']):
        some_function(i)
Now assume that foo is already on the PATH and that it outputs a string at semi-random intervals.
I want my program to do its own things, and to run some_function(i) every time foo sends a new row to its output.
This boils down to two problems: piping the output into a for loop, and running this as a background subprocess.
Thank you
Update: I have managed to feed the foo output into some_function using this:
with os.popen('foo') as foos_output:
    for line in foos_output:
        some_function(line)
According to this, os.popen is to be deprecated, but I have yet to figure out how to pipe between processes in Python.
Now I just need to figure out how to run this function in the background.
So, I have solved it.
The first step was to start the external script:
from subprocess import Popen, PIPE
proc = Popen('./cisla.sh', stdout=PIPE, bufsize=1)
Next, I started a function that reads from it, passing in the pipe:
from threading import Thread

def foo(proc, **args):
    for i in proc.stdout:
        '''Do all I want to do with each'''

Thread(target=foo, args=(proc,)).start()
The limitations are:
If you wish to catch the script's errors, you have to pipe stderr in as well.
Second, it leaves a zombie if you kill the parent, so don't forget to kill the child in your signal handling.
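Putting those pieces together, here is a self-contained sketch (Python 3) of the whole pattern; some_function is a stand-in for whatever you do with each row, ./cisla.sh is the script from above, and the signal handler takes care of the zombie problem just mentioned:
import signal
import subprocess
import sys
from threading import Thread

def some_function(line):      # placeholder for your per-row processing
    print(line, end='')

def reader(proc):
    for line in proc.stdout:
        some_function(line)

proc = subprocess.Popen(['./cisla.sh'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,   # fold errors into stdout
                        bufsize=1, universal_newlines=True)

def cleanup(signum, frame):
    proc.terminate()
    proc.wait()               # reap the child so it doesn't linger as a zombie
    sys.exit(0)

signal.signal(signal.SIGTERM, cleanup)

Thread(target=reader, args=(proc,), daemon=True).start()
# ... the main program does its own things here ...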
I'm trying to run two processes in parallel. Neither program "ends" without Ctrl+C (by the way, I'm on Linux), so os.system will not return the output of a command. I want a way to create two processes independently of the main Python thread, and to read text from them as it appears. I also want to be able to send characters to each process (not as a command, because the process interprets key presses by itself). I need something like this:
process1 = System("sh process1")
process2 = System("sh process2")
process1.Send("Hello, I'm sending text into process 1.")
text = process1.Read()
process2.Send(text)
Is there a way of doing this? I've looked into the Subprocess module, but I'm not sure it achieves quite what I want - or if it does, I'm not sure how to do it.
Many thanks to anyone who answers.
Subprocess does what you want. Here's an example of writing to and reading from an external command:
import subprocess

proc = subprocess.Popen(["sed", "-u", "s/foo/bar/g"],
                        shell=False, stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, universal_newlines=True)
proc.stdin.write("foobar\n")
proc.stdin.flush()              # push the line through the pipe
print(proc.stdout.readline())   # prints "barbar"
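The -u flag keeps sed from buffering its output, and the explicit flush() pushes the line through the pipe before readline() blocks waiting for the reply; without both of these, this kind of round-trip can deadlock.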
I just want to build a little Python music client on my Raspberry Pi. I installed "mpg321" and it works great, but now to my problem. After sending the command
os.system("mpg321 -R testPlayer")
Python waits for user input like play, pause or quit. If I type this in my terminal, the player pauses the music or quits. Perfect, but I want Python to do that, so I send the command
os.system("LOAD test.mp3")
where LOAD is the command for loading this mp3. But nothing happens. When I quit the player via the terminal, I get the error:
sh: 1: LOAD: not found
I think this means that
os.system("mpg321 -R testPlayer")
takes over the whole process, and after I quit it, Python tries to execute the command LOAD. So how do I get these things to work together?
My code:
import os

class PyMusic:
    def __init__(self):
        print "initial stuff later"

    def playFile(self, fileName, directory=""):
        os.system("mpg321 -R testPlayer")
        os.system("LOAD test.mp3")

if __name__ == "__main__":
    pymusic = PyMusic()
    pymusic.playFile("test.mp3")
Thanks for your help!
First, you should almost never be using os.system. See the subprocess module.
One major advantage of using subprocess is that you can choose whatever behavior you want—run it in the background, start it and wait for it to finish (and throw an exception if it returns non-zero), interact with its stdin and stdout explicitly, whatever makes sense.
Here, you're not trying to run another command "LOAD test.mp3", you're trying to pass that as input to the existing process. So:
p = subprocess.Popen(['mpg321', '-R', 'testPlayer'], stdin=subprocess.PIPE)
Then you can do this:
p.stdin.write('LOAD test.mp3\n')
This is roughly equivalent to doing this from the shell:
echo -e 'LOAD test.mp3\n' | mpg321 -R testPlayer
However, you should probably read about communicate, because whenever it's possible to figure out how to make your code work with communicate, it's a lot simpler than trying to deal with generic I/O (especially if you've never coded with pipes, sockets, etc. before).
Or, if you're trying to interact with a command-line UI (e.g., you can't send the command until you get the right prompt), you may want to look at an "expect" library. There are a few of these to choose from, so you should search at PyPI to find the right one for you (although I can say that I've used pexpect successfully in the past, and the documentation is full of samples that get the ideas across a lot more quickly than most expect documentation does).
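For example, a rough sketch with pexpect might look like this; the prompt patterns are guesses, and you would replace them with whatever mpg321 actually prints when it is ready:
import pexpect

child = pexpect.spawn('mpg321 -R testPlayer', encoding='utf-8')
child.expect('@R')                # assumed ready banner; adjust to the real one
child.sendline('LOAD test.mp3')
child.expect('@P')                # assumed status reply; again, a guess
print(child.before)               # everything printed before the match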
You are looking for a way to send data to stdin. Here is an example of this using Popen:
from subprocess import Popen, PIPE, STDOUT
p = Popen(['mpg321', '-R', 'testPlayer'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
mpg123_stdout = p.communicate(input='LOAD test.mp3\n')[0]
print(mpg123_stdout)
You establish pipes to stdin and stdout, then after you start your process, you communicate with stdin and read from stdout. Be sure to terminate each command with a newline.
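One caveat: communicate() writes the input, closes stdin, and then waits for the process to exit, so this pattern gives you exactly one exchange per process; for an ongoing conversation, keep writing to p.stdin as in the previous answer.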
I have a set of command line tools that I'd like to run in parallel on a series of files. I've written a python function to wrap them that looks something like this:
def process_file(fn):
    print os.getpid()
    cmd1 = "echo "+fn
    p = subprocess.Popen(shlex.split(cmd1))
    # after cmd1 finishes
    other_python_function_to_do_something_to_file(fn)
    cmd2 = "echo "+fn
    p = subprocess.Popen(shlex.split(cmd2))
    print "finish"

if __name__ == "__main__":
    import multiprocessing
    p = multiprocessing.Pool()
    for fn in files:
        RETURN = p.apply_async(process_file, args=(fn,), kwds={some_kwds})
While this works, it does not seem to be running multiple processes; it seems like it's just running in serial (I've tried using Pool(5) with the same result). What am I missing? Are the calls to Popen "blocking"?
EDIT: Clarified a little. I need cmd1, then some python command, then cmd2, to execute in sequence on each file.
EDIT2: The output from the above has the pattern:
pid
finish
pid
finish
pid
finish
whereas a similar call, using map in place of apply (but without any provision for passing kwds) looks more like
pid
pid
pid
finish
finish
finish
However, the map call sometimes (always?) hangs after apparently succeeding
Are the calls to Popen "blocking"?
No. Just creating a subprocess.Popen returns immediately, giving you an object that you could wait on or otherwise use. If you want to block, that's simple:
subprocess.check_call(shlex.split(cmd1))
Meanwhile, I'm not sure why you're putting your args together into a string and then trying to shlex them back to a list. Why not just write the list?
cmd1 = ["echo", fn]
subprocess.check_call(cmd1)
While this works, it does not seem to be running multiple processes; it seems like it's just running in serial
What makes you think this? Given that each process just kicks off two processes into the background as fast as possible, it's going to be pretty hard to tell whether they're running in parallel.
If you want to verify that you're getting work from multiple processes, you may want to add some prints or logging (and throw something like os.getpid() into the messages).
Meanwhile, it looks like you're trying to exactly duplicate the effects of multiprocessing.Pool.map_async out of a loop around multiprocessing.Pool.apply_async, except that instead of accumulating the results you're stashing each one in a variable called RETURN and then throwing it away before you can use it. Why not just use map_async?
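Something like this rough sketch, with placeholders standing in for the process_file, files, and some_kwds from your question (functools.partial carries the keyword arguments, since map_async itself only passes the iterable's items):
import multiprocessing
from functools import partial

def process_file(fn, **kwds):     # stand-in for your worker
    print(multiprocessing.current_process().pid, fn, kwds)

if __name__ == '__main__':
    files = ['a.txt', 'b.txt', 'c.txt']   # placeholder inputs
    some_kwds = {'flag': True}            # placeholder keyword args
    pool = multiprocessing.Pool()
    result = pool.map_async(partial(process_file, **some_kwds), files)
    pool.close()
    result.get()      # blocks; also re-raises any exception from the workers
    pool.join()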
Finally, you asked whether multiprocessing is the right tool for the job. Well, you clearly need something asynchronous: check_call(args(file1)) has to block other_python_function_to_do_something_to_file(file1), but at the same time not block check_call(args(file2)).
I would probably have used threading, but really, it doesn't make much difference. Even if you're on a platform where process startup is expensive, you're already paying that cost because the whole point is running N * M child processes, so another pool of 8 isn't going to hurt anything. And there's little risk of either accidentally creating races by sharing data between threads, or accidentally creating code that looks like it shares data between processes but doesn't, since there's nothing to share. So, whichever one you like more, go for it.
The other alternative would be to write an event loop. Which I might actually start doing myself for this problem, but I'd regret it, and you shouldn't do it…
Prior to this, I ran two commands in a for loop, like
for x in $set:
    command
In order to save time, I want to run these two commands at the same time, like the parallel method in a makefile.
Thanks
Lyn
The threading module won't give you much performance-wise because of the Global Interpreter Lock.
I think the best way to do this is to use the subprocess module and open each command with its own stdout.
import select
import subprocess

processes = {}
for cmd in ['cmd1', 'cmd2', 'cmd3']:
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    processes[p.stdout] = p

while len(processes):
    rfds, _, _ = select.select(processes.keys(), [], [])
    for fd in rfds:
        process = processes[fd]
        print fd.readline()
        if process.poll() is not None:
            print "Process {0} returned with code {1}".format(process.pid, process.returncode)
            del processes[fd]
You basically have to use select to see which file descriptors are ready, and you have to check each process's exit status (poll() above) to see whether it has finished. A child will block once its stdout pipe buffer fills up, so you need to keep reading until its stdout is closed. If you would like to do some things while you're waiting, you can put a timeout on select.select() so you'll stop waiting after so long; you can test the length of rfds, and if it is 0 then you know the timeout happened.
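For instance, the timeout variant of the select call above might look like this (0.5 seconds is an arbitrary choice):
# Wait at most 0.5 seconds for output, then go do other work.
rfds, _, _ = select.select(processes.keys(), [], [], 0.5)
if not rfds:
    pass  # timed out: no output ready yet, do housekeeping here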
The twisted framework or the select module is probably what you're after.
If all you want to do is run a bunch of batch commands, a shell script, i.e.
#!/bin/sh
for i in command1 command2 command3; do
    $i &
done
might work better. Alternatively, a Makefile like you said.
Look at the threading module.
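For instance, a minimal sketch along those lines, with command1 and command2 standing in for your real commands; each thread just blocks waiting on its own subprocess, so the GIL is not an issue here:
import subprocess
from threading import Thread

def run(cmd):
    subprocess.call(cmd, shell=True)   # each thread waits on its own child

threads = [Thread(target=run, args=(c,)) for c in ('command1', 'command2')]
for t in threads:
    t.start()
for t in threads:
    t.join()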