Python - How to use py.path.LocalPath.sysexec()

I'm trying to use LocalPath.sysexec() from the py library, but the documentation is not clear enough for me to understand how to invoke it with the right syntax.
Here is some fake code that mimics my syntax:
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Testing pylib implementation of subprocess.Popen
from py.path import local
import sys
path = local(sys.argv[1])
path.sysexec("/usr/bin/foo", ["arg1", "arg2"])

You can look at the source code:
def sysexec(self, *argv, **popen_opts):
    """ return stdout text from executing a system child process,
        where the 'self' path points to executable.
        The process is directly invoked and not through a system shell.
    """
    from subprocess import Popen, PIPE
    argv = map_as_list(str, argv)
    popen_opts['stdout'] = popen_opts['stderr'] = PIPE
    proc = Popen([str(self)] + argv, **popen_opts)
    stdout, stderr = proc.communicate()
    ret = proc.wait()
    if py.builtin._isbytes(stdout):
        stdout = py.builtin._totext(stdout, sys.getdefaultencoding())
    if ret != 0:
        if py.builtin._isbytes(stderr):
            stderr = py.builtin._totext(stderr, sys.getdefaultencoding())
        raise py.process.cmdexec.Error(ret, ret, str(self),
                                       stdout, stderr,)
    return stdout
Clearly, it uses the Python subprocess module; if you have not used subprocess before, you can follow the link above to read the docs.
In this function, a Popen object is constructed from the path itself (str(self)) plus the argv you pass, with stdout and stderr captured through pipes.
It then waits for the command to finish and returns the command's stdout; on a non-zero exit code it raises py.process.cmdexec.Error instead.
Example:
from py.path import local
local_path = local('/usr/bin/ls')
print(local_path.sysexec())
# out: file1\nfile2\nfile3...
print(local_path.sysexec('-l'))
# output like that of "ls -l"
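Applied to the question's code: the LocalPath itself must point at the executable, and each argument is passed as a separate positional argument rather than wrapped in a list. A minimal sketch, assuming the hypothetical /usr/bin/foo from the question exists:
from py.path import local

# the path object itself is the executable; arguments are separate positional args
foo = local("/usr/bin/foo")           # hypothetical executable from the question
output = foo.sysexec("arg1", "arg2")  # not foo.sysexec("/usr/bin/foo", ["arg1", "arg2"])
print(output)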

Related

Determine if python script is called as subprocess and passing args

I have a script that gives the option to run a second script after completion. I am wondering if there is a good way for the second script to know whether it was run on its own or as a subprocess. If it was called as a subprocess, I want to pass args into the second script.
The end of the first script is below:
dlg = wx.MessageDialog(None, "Shall we format?", 'Format Files',
                       wx.YES_NO | wx.ICON_QUESTION)
result = dlg.ShowModal()
if result == wx.ID_YES:
    call("Threading.py", shell=True)
else:
    pass
The second script is a standalone script that takes 3 files and formats them into one. The args would just set file names in the second script.
So I would pass the parent's pid, obtained with os.getpid(), to the subprocess as an argument using Popen; the child can then compare it against its own os.getppid():
(parent.py)
#!/usr/bin/env python
import sys
import os
from subprocess import Popen, PIPE
output = Popen(['./child.py', str(os.getpid())], stdout=PIPE)
print output.stdout.read()
and
(child.py)
#!/usr/bin/env python
import sys
import os
parent_pid = sys.argv[1]
my_pid = str(os.getpid())
print "Parent is %s child is %s" % (parent_pid, my_pid)
So when you call the child from the parent:
$ ./parent.py
Parent is 72297 child is 72346
At this point it is easy to make the comparison: inside the child, check the passed pid against os.getppid() to confirm it really was launched by the parent.
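Putting it together, the child-side check could look something like this (a sketch; the extra file-name arguments are the ones mentioned in the question):
#!/usr/bin/env python
# child-side sketch: decide whether this script was launched by parent.py
import os
import sys

if len(sys.argv) > 1 and sys.argv[1] == str(os.getppid()):
    # launched as a subprocess: remaining args carry the file names
    filenames = sys.argv[2:]
    print "Run as subprocess, files: %s" % filenames
else:
    print "Run standalone"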

How to create a subprocess in Python, send multiple commands based on previous output

I am writing a program which initiates a connection to a remote machine, then dynamically sends multiple commands to it, monitoring each response. Instead of using pexpect, what else can I use? I am trying to use subprocess.Popen, but the communicate() method kills the process.
Pexpect version: 2.4, http://www.bx.psu.edu/~nate/pexpect/pexpect.html
Referring to the API for subprocess in:
https://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate
Popen.communicate(input=None)
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child.
Thanks
Refer to the subprocess documentation to understand the basics here.
You could do something like this...
Again, this is just a pointer; this approach may or may not be the best fit for your use case.
Explore and test to find what works for you!
import logging
import shlex
import subprocess
import sys

logger = logging.getLogger(__name__)

class Command(object):
    """ Generic Command Interface. """
    def execute(self, cmd):
        proc = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE)
        stdout_value = proc.communicate()[0]
        exit_value = proc.poll()
        if exit_value:
            logger.error('Command execution failed. Command : %s' % cmd)
        return exit_value, stdout_value

if __name__ == '__main__':
    cmd = Command()
    host = ''     # HOSTNAME GOES HERE
    cmd_str = ''  # YOUR COMMAND GOES HERE
    cmdline = 'ksh -c "ssh root@{0} {1}"'.format(host, cmd_str)
    exit_value, stdout_value = cmd.execute(cmdline)
    if exit_value == 0:
        pass  # execute other command/s, using the same logic as above
    else:
        pass  # return or execute other command/s
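Since each execute() call spawns a fresh process and communicate() is called once per process, you sidestep the "communicate() kills the process" problem by issuing every follow-up command as a new process, branching on the previous output. A sketch using the class above, with a hypothetical host and commands:
# sketch: issue a follow-up command based on the previous output
# (host name and commands here are hypothetical)
cmd = Command()
exit_value, out = cmd.execute('ssh root@host1 uptime')
if exit_value == 0 and 'load average' in out:
    # what we send next depends on what the first command returned
    exit_value, out = cmd.execute('ssh root@host1 df -h')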

Interaction between Python script and linux shell

I have a Python script that needs to interact with the user via the command line, while logging whatever is output.
I currently have this:
# lots of code
popen = subprocess.Popen(
    args,
    shell=True,
    stdin=sys.stdin,
    stdout=sys.stdout,
    stderr=sys.stdout,
    executable='/bin/bash')
popen.communicate()
# more code
This executes a shell command (e.g. adduser newuser02) just as it would when typing it into a terminal, including interactive behavior. This is good.
Now, I want to log, from within the Python script, everything that appears on the screen. But I can't seem to make that part work.
I've tried various ways of using subprocess.PIPE, but this usually messes up the interactivity, like not outputting prompt strings.
I've also tried various ways to directly change the behavior of sys.stdout, but as subprocess writes to sys.stdout.fileno() directly, this was all to no avail.
Popen might not be very suitable for interactive programs due to buffering issues and due to the fact that some programs write/read directly from a terminal e.g., to retrieve a password. See Q: Why not just use a pipe (popen())?.
If you want to emulate script utility then you could use pty.spawn(), see the code example in Duplicating terminal output from a Python subprocess or in log syntax errors and uncaught exceptions for a python subprocess and print them to the terminal:
#!/usr/bin/env python
import os
import pty
import sys

with open('log', 'ab') as file:
    def read(fd):
        data = os.read(fd, 1024)
        file.write(data)
        file.flush()
        return data

    pty.spawn([sys.executable, "test.py"], read)
Or you could use pexpect for more flexibility:
import sys
import pexpect  # $ pip install pexpect

with open('log', 'ab') as fout:
    p = pexpect.spawn("python test.py")
    p.logfile = fout  # or .logfile_read
    p.interact()
If your child process doesn't buffer its output (or it doesn't interfere with the interactivity) and it prints its output to its stdout or stderr then you could try subprocess:
#!/usr/bin/env python
import sys
from subprocess import Popen, PIPE, STDOUT

with open('log', 'ab') as file:
    p = Popen([sys.executable, '-u', 'test.py'],
              stdout=PIPE, stderr=STDOUT,
              close_fds=True,
              bufsize=0)
    for c in iter(lambda: p.stdout.read(1), b''):
        for f in [sys.stdout, file]:
            f.write(c)
            f.flush()
    p.stdout.close()
    rc = p.wait()
To read both stdout/stderr separately, you could use teed_call() from Python subprocess get children's output to file and terminal?
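The linked teed_call() answer is more complete, but the core idea is one reader thread per stream, roughly like this sketch (test.py stands in for your child script; the log file names are illustrative):
#!/usr/bin/env python
# sketch: tee stdout and stderr separately, one thread per stream
import sys
from subprocess import Popen, PIPE
from threading import Thread

def tee(infile, *files):
    # copy every line from infile to each destination file
    for line in iter(infile.readline, b''):
        for f in files:
            f.write(line)
            f.flush()
    infile.close()

p = Popen([sys.executable, '-u', 'test.py'], stdout=PIPE, stderr=PIPE, bufsize=0)
with open('out.log', 'ab') as out, open('err.log', 'ab') as err:
    threads = [Thread(target=tee, args=(p.stdout, out, sys.stdout)),
               Thread(target=tee, args=(p.stderr, err, sys.stderr))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
rc = p.wait()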
This should work:
import subprocess

cmd = ['echo', 'hello', 'world']
with open('file.txt', 'w') as f:
    subprocess.call(cmd, stdout=f)

Using Popen to send a variable to stdin and to send stdout to a variable

In a shell script, we have the following command:
/script1.pl < input_file | /script2.pl > output_file
I would like to replicate the above stream in Python using the subprocess module. input_file is a large file, and I can't read the whole file at once. As such, I would like to pass each line as an input_string into the pipe stream and get back a string variable output_string, until the whole file has been streamed through.
The following is a first attempt:
process = subprocess.Popen(["/script1.pl | /script2.pl"], stdin = subprocess.PIPE, stdout = subprocess.PIPE, shell = True)
process.stdin.write(input_string)
output_string = process.communicate()[0]
However, using process.communicate()[0] closes the stream. I would like to keep the stream open for future streams. I have tried using process.stdout.readline(), instead, but the program hangs.
To emulate the /script1.pl < input_file | /script2.pl > output_file shell command using the subprocess module in Python:
#!/usr/bin/env python
from subprocess import check_call

with open('input_file', 'rb') as input_file:
    with open('output_file', 'wb') as output_file:
        check_call("/script1.pl | /script2.pl", shell=True,
                   stdin=input_file, stdout=output_file)
You could write it without shell=True (though I don't see a reason here) based on 17.1.4.2. Replacing shell pipeline example from the docs:
#!/usr/bin/env python
from subprocess import Popen, PIPE

with open('input_file', 'rb') as input_file:
    script1 = Popen("/script1.pl", stdin=input_file, stdout=PIPE)
with open("output_file", "wb") as output_file:
    script2 = Popen("/script2.pl", stdin=script1.stdout, stdout=output_file)
script1.stdout.close()  # allow script1 to receive SIGPIPE if script2 exits
script2.wait()
script1.wait()
You could also use the plumbum module to get shell-like syntax in Python:
#!/usr/bin/env python
from plumbum import local
script1, script2 = local["/script1.pl"], local["/script2.pl"]
(script1 < "input_file" | script2 > "output_file")()
See also How do I use subprocess.Popen to connect multiple processes by pipes?
If you want to read/write line by line then the answer depends on the concrete scripts that you want to run. In general it is easy to deadlock sending/receiving input/output if you are not careful e.g., due to buffering issues.
If input doesn't depend on output in your case then a reliable cross-platform approach is to use a separate thread for each stream:
#!/usr/bin/env python
from subprocess import Popen, PIPE
from threading import Thread

def pump_input(pipe):
    try:
        for i in xrange(1000000000):  # generate large input
            print >>pipe, i
    finally:
        pipe.close()

p = Popen("/script1.pl | /script2.pl", shell=True, stdin=PIPE, stdout=PIPE,
          bufsize=1)
Thread(target=pump_input, args=[p.stdin]).start()
try:  # read output line by line as soon as the child flushes its stdout buffer
    for line in iter(p.stdout.readline, b''):
        print line.strip()[::-1]  # print reversed lines
finally:
    p.stdout.close()
    p.wait()

Popen waiting for child process even when the immediate child has terminated

I'm working with Python 2.7 on Windows 8/XP.
I have a program A that runs another program B using the following code:
p = Popen(["B"], stdout=PIPE, stderr=PIPE)
stdout, stderr = p.communicate()
return
B runs a batch script C. C is a long running script and I want B to exit even though C has not finished. I have done it using the following code (in B):
p = Popen(["C"])
return
When I run B, it works as expected. When I run A, however, I expected it to exit when B exits, but A waits until C exits even though B has already exited. Any ideas on what's happening and what the possible solutions could be?
Unfortunately, the obvious solution of changing A to look like B is not an option.
Here is a functional sample code to illustrate this issue:
https://www.dropbox.com/s/cbplwjpmydogvu2/popen.zip?dl=1
The zip file consists of the following files with the following contents:
A.py
from subprocess import PIPE, Popen
import sys

def log(line):
    with open("log.txt", "a") as logfile:
        logfile.write(line)

log("\r\n\r\nA: I'll wait for B\r\n")
p = Popen(["C:\\Python27\\python.exe", "B.py"], stdout=PIPE, stderr=PIPE)
stdout, stderr = p.communicate()
log("A: Done.\r\n")
sys.exit(0)
B.py
from subprocess import Popen, PIPE
import sys

def log(line):
    with open("log.txt", "a") as logfile:
        logfile.write(line)

log("B: launching C\r\n")
p = Popen(["C.bat"])
log("B: Not waiting for C at all. bye!\r\n")
sys.exit(0)
C.bat
@echo off
echo C: Start long running task : %time% >> "log.txt"
ping -n 10 127.0.0.1>nul
echo C: Stop long running task : %time% >> "log.txt"
Any input is much appreciated.
A waits because C inherits B's stdout/stderr pipe handles, so A's communicate() keeps reading until every process holding the write end of the pipe (including C) has exited. You could provide a start_new_session analog for the C subprocess:
#!/usr/bin/env python
import os
import sys
import platform
from subprocess import Popen, PIPE

# set system/version dependent "start_new_session" analogs
kwargs = {}
if platform.system() == 'Windows':
    # from msdn [1]
    CREATE_NEW_PROCESS_GROUP = 0x00000200  # note: could get it from subprocess
    DETACHED_PROCESS = 0x00000008          # 0x8 | 0x200 == 0x208
    kwargs.update(creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP)
elif sys.version_info < (3, 2):  # assume posix
    kwargs.update(preexec_fn=os.setsid)
else:  # Python 3.2+ and Unix
    kwargs.update(start_new_session=True)

p = Popen(["C"], stdin=PIPE, stdout=PIPE, stderr=PIPE, **kwargs)
assert not p.poll()
[1]: Process Creation Flags for CreateProcess()
Here is a code snippet adapted from Sebastian's answer and this answer:
#!/usr/bin/env python
import os
import sys
import platform
from subprocess import Popen, PIPE

# set system/version dependent "start_new_session" analogs
kwargs = {}
if platform.system() == 'Windows':
    # from msdn [1]
    CREATE_NEW_PROCESS_GROUP = 0x00000200  # note: could get it from subprocess
    DETACHED_PROCESS = 0x00000008          # 0x8 | 0x200 == 0x208
    kwargs.update(creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP,
                  close_fds=True)
elif sys.version_info < (3, 2):  # assume posix
    kwargs.update(preexec_fn=os.setsid)
else:  # Python 3.2+ and Unix
    kwargs.update(start_new_session=True)

p = Popen(["C"], stdin=PIPE, stdout=PIPE, stderr=PIPE, **kwargs)
assert not p.poll()
I've only tested it personally on Windows.
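Applied to the sample code above, B.py would launch C.bat with these flags so that the pipe A is reading closes as soon as B exits. A sketch, Windows-only since that is the questioner's platform:
# B.py (sketch, Windows-only): launch C.bat detached so that A's communicate()
# returns when B exits instead of waiting on C's inherited pipe handles
import platform
from subprocess import Popen

kwargs = {}
if platform.system() == 'Windows':
    DETACHED_PROCESS = 0x00000008
    CREATE_NEW_PROCESS_GROUP = 0x00000200
    kwargs.update(creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP,
                  close_fds=True)

p = Popen(["C.bat"], **kwargs)  # B can now exit immediately; C keeps running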
