execv multiple executables in single python script? - python

From what I can tell, execv takes over the current process, and once the called executable finishes, the program terminates. I want to call execv multiple times within the same script, but because of this, that cannot be done.
Is there an alternative to execv that runs within the current process (i.e. prints to same stdout) and won't terminate my program? If so, what is it?

Yes, use subprocess.
os.execv* is not appropriate for your task; from the docs:
These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller.
So, as you want the external exe to print to the same output, this is what you might do:
import subprocess
output = subprocess.check_output(['your_exe', 'arg1'])
By default, check_output() only returns output written to standard output. If you want both standard output and error collected, use the stderr argument.
output = subprocess.check_output(['your_exe', 'arg1'], stderr=subprocess.STDOUT)
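Back to the original question: to run several executables one after another from the same script, each printing to the same stdout, a minimal sketch (prog1 and prog2 are hypothetical executable names):
import subprocess

# Each call runs to completion and inherits this script's stdout,
# so the programs' output appears in the same console, in order.
for cmd in (['prog1', 'arg1'], ['prog2', 'arg2']):
    subprocess.check_call(cmd)  # raises CalledProcessError on a non-zero exit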

The subprocess module in the stdlib is the best way to create processes.

Related

Using subprocess.call to execute python file?

I am using subprocess.call in order to execute another Python file. Given that the called script never terminates (it runs an infinite loop), how can I make the original script continue execution after the subprocess call?
Example:
I have script1.py, which does some calculations and then calls script2.py using subprocess.call(["python", "script2.py"]). Since script2.py runs an infinite loop, script1 gets stuck at that call. Is there another way to run the file, other than using the subprocess module?
subprocess.call(["python", "script2.py"]) waits for the sub-process to finish.
Just use Popen instead:
proc = subprocess.Popen(["python", "script2.py"])
You can later do proc.poll() to see whether it is finished or not, or proc.wait() to wait for it to finish (as call does), or just forget about it and do other things instead.
BTW, you might want to ensure that the same python is called, and that the OS can find it, by using sys.executable instead of just "python":
subprocess.Popen([sys.executable, "script2.py"])
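A minimal sketch of the non-blocking pattern described above, with script2.py as the (never-terminating) child from the question:
import subprocess
import sys

# Launch script2.py without blocking; script1 continues immediately.
proc = subprocess.Popen([sys.executable, "script2.py"])

# ... script1's own work continues here ...

if proc.poll() is None:
    print("script2.py is still running")  # expected here, since it loops forever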

Binding / piping output of run() on/into function in Python 3 (Linux)

I am trying to use the output of an external program run using the run function.
This program regularly emits a row of data which I need to use in my script.
I have found the subprocess library and used its run()/check_output().
Example:
def usual_process():
    # some code here
    for i in subprocess.check_output(['foo', '$$']):
        some_function(i)
Now assume that foo is already on PATH and that it outputs a string at semi-random intervals.
I want the program to do its own things, and run some_function(i) every time foo sends a new row to its output.
Which boils down to two problems: piping the output into a for loop, and running this as a background subprocess.
Thank you
Update: I have managed to get the foo output into some_function using this:
with os.popen('foo') as foos_output:
    for line in foos_output:
        some_function(line)
According to this, os.popen is to be deprecated, but I have yet to figure out how to pipe between processes in Python.
Now I just need to figure out how to run this function in the background.
So, I have solved it.
First step was to start the external script:
from subprocess import Popen, PIPE

proc = Popen('./cisla.sh', stdout=PIPE, bufsize=1)
Next I have started a function that would read it and passed it a pipe
from threading import Thread

def foo(proc, **args):
    for i in proc.stdout:
        '''Do all I want to do with each line'''

Thread(target=foo, args=(proc,)).start()
Limitations are:
If you wish to catch the script's errors, you would have to pipe stderr in as well.
Second, it leaves a zombie if you kill the parent, so don't forget to kill the child in your signal handling.
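A hedged sketch of handling both limitations for the same hypothetical ./cisla.sh child: merge stderr into the pipe, and terminate the child from a signal handler so it isn't left behind:
import signal
import sys
from subprocess import Popen, PIPE, STDOUT

proc = Popen('./cisla.sh', stdout=PIPE, stderr=STDOUT, bufsize=1)

def cleanup(signum, frame):
    proc.terminate()  # ask the child to exit so it isn't orphaned
    proc.wait()       # reap it to avoid a zombie
    sys.exit(0)

signal.signal(signal.SIGTERM, cleanup)
signal.signal(signal.SIGINT, cleanup)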

Get the output of python subprocess in console

process = subprocess.check_output(BACKEND + "mainbgw setup " + str(NUM_USERS),
                                  shell=True, stderr=subprocess.STDOUT)
I am using the above statement to run a C program from a Django/Python based server for some computations. There are some printf() statements whose output I would like to see on stdout while the server is running and executing the subprocess. How can that be done?
If you actually don't need the output to be available to your python code as a string, you can just use os.system, or subprocess.call without redirecting stdout elsewhere. Then stdout of your C program will just go directly to stdout of your python program.
If you need both streaming stdout and access to the output as a string, you should use subprocess.Popen (or the old popen2.popen4) to obtain a file descriptor of the output stream, then repeatedly read lines from the stream until you have exhausted it. In the meantime, you keep a concatenated version of all the data you grabbed. This is an example of the loop.
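A minimal sketch of that loop, assuming a hypothetical ./prog executable whose lines we both echo and collect (text=True needs Python 3.7+; use universal_newlines=True on older versions):
import subprocess

proc = subprocess.Popen(['./prog'], stdout=subprocess.PIPE, text=True)

collected = []
for line in proc.stdout:    # reads until the stream is exhausted
    print(line, end='')     # stream each line to our own stdout as it arrives
    collected.append(line)  # and keep a copy
proc.wait()

output = ''.join(collected)  # the full output as one string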

Executing a command and storing its output in a variable

I'm currently trying to write a python script that, among many things, calls an executable and stores what that executable sends to stdout in a variable. Here is what I have:
#!/usr/bin/python
import subprocess

subprocess.call("./pmm", shell=True)
How would I get the output of pmm to be stored in a variable?
In Python 2.7 (and 3.1 or above), you can use subprocess.check_output(). Example from the documentation:
>>> subprocess.check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n'
p = subprocess.Popen(["./pmm"], shell=False, stdout=subprocess.PIPE)
output = p.stdout.read()
I wrote a post about this some time ago:
http://trifoliummedium.blogspot.com/2010/12/running-command-line-with-python-and.html
Use p.communicate() to get both stdout and stderr
First you have to save a reference to the subprocess (bind it to a name, which in other languages, and more informally, is referred to as "assigning it to a variable"). So you should use something like proc = subprocess.Popen(...).
From there I recommend that you call proc.poll() to test whether the program has completed, and either sleep (using the time.sleep() function, for example) or perform other work (using select.select(), for example) and then check again later. Or you can call proc.wait() so that you're sure this ./pmm command has completed its work before your program continues. The poll() method on a subprocess instance will return None if the subprocess is still running; otherwise it'll return the exit value of the command that was running in that subprocess. The wait() method will cause your program to block and then return the exit value.
After that you can call (output, errormsgs) = proc.communicate() to capture any output or error messages from your subprocess. If the output is too large it could cause problems; using the process instance's .stdout (PIPE file descriptor) directly is tricky, and if you were going to attempt this you should use features in the fcntl (file descriptor control) module to switch it into non-blocking mode, and be prepared to handle the exceptions raised when attempting read() calls on the buffer when it's empty.
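A minimal sketch of the poll-then-communicate pattern just described, using the question's ./pmm; note the caveat above about very large output, since a child blocked on a full pipe will never exit:
import subprocess
import time

proc = subprocess.Popen(['./pmm'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)

while proc.poll() is None:  # None means the child is still running
    time.sleep(0.5)         # or do other useful work between checks

(output, errormsgs) = proc.communicate()  # safe now that the child has exited
print('exit code:', proc.returncode)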

How to spawn multiple python scripts from a python program?

I want to spawn (fork?) multiple Python scripts from my program (written in Python as well).
My problem is that I want to dedicate one terminal to each script, because I'll gather their output using pexpect.
I've tried using pexpect, os.execlp, and os.forkpty, but none of them does what I expect.
I want to spawn the child processes and forget about them (they will process some data, write the output to the terminal which I could read with pexpect and then exit).
Is there any library/best practice/etc. to accomplish this job?
P.S. Before you ask why I would write to STDOUT and read from it, I shall say that I don't write to STDOUT; I read the output of tshark.
See the subprocess module
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, such as:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
From Python 3.5 onwards you can do:
import subprocess
result = subprocess.run(['python', 'my_script.py', '--arg1', val1])
if result.returncode != 0:
    print('script returned error')
By default the child's stdout and stderr are simply inherited, so its output appears directly on your console.
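If you also want the output back as a string, a minimal sketch (capture_output needs Python 3.7+; my_script.py is the hypothetical child from above):
import subprocess
import sys

result = subprocess.run(
    [sys.executable, 'my_script.py'],
    capture_output=True,  # collect stdout/stderr instead of inheriting them
    text=True,            # decode bytes to str
)
print(result.stdout)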
I don't understand why you need expect for this. tshark should send its output to stdout, and only for some strange reason would it send it to stderr.
Therefore, what you want should be:
import subprocess
fp = subprocess.Popen(("/usr/bin/tshark", "option1", "option2"), stdout=subprocess.PIPE).stdout
# now, whenever you are ready, read stuff from fp
Do you want to dedicate one terminal, or one Python shell?
You already have some useful answers for Popen and subprocess; you could also use pexpect, since you're already planning on using it anyway.
#for multiple python shells
import pexpect
#make your commands however you want them, this is just one method
mycommand1 = "print('hello first python shell')"
mycommand2 = "print('this is my second shell')"
#add a "for" statement if you want
child1 = pexpect.spawn('python')
child1.sendline(mycommand1)
child2 = pexpect.spawn('python')
child2.sendline(mycommand2)
Make as many children/shells as you want and then use the child.before or child.after attributes to get your responses.
Of course you would want to add definitions or classes to be sent instead of "mycommand1", but this is just a simple example.
If you want to open a bunch of terminals on Linux, you just need to replace the 'python' in the pexpect.spawn line.
Note: I haven't tested the above code. I'm just replying from past experience with pexpect.
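For reference, a hedged sketch of collecting a response with pexpect's expect()/before, assuming the spawned program is an interactive Python interpreter as above:
import pexpect

child = pexpect.spawn('python')
child.expect('>>> ')              # wait for the interpreter's first prompt
child.sendline("print('hello')")
child.expect('>>> ')              # wait for the prompt after our command
print(child.before.decode())      # everything up to the match (bytes by default)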
