converting check_call statement to subprocess.Popen - python

I am running into the error below while converting a check_call statement to subprocess.Popen. I think I am messing something up around "&&". Can anyone help me fix it?
check_call("git fetch ssh://username@company.com:29418/platform/vendor/company-proprietary/radio %s && git cherry-pick FETCH_HEAD" % change_ref, shell=True)
proc = subprocess.Popen(['git', 'fetch', 'ssh://username@company.com:29418/platform/vendor/company-proprietary/radio', change_ref, '&& git', 'cherry-pick', 'FETCH_HEAD'], stderr=subprocess.PIPE)
Error:
fatal: Invalid refspec '&& git

rc = Popen("cmd1 && cmd2", shell=True).wait()
if rc != 0:
    raise Error(rc)
Or
rc = Popen(["git", "fetch", "ssh://...", change_ref]).wait()
if rc != 0:
    raise Error("git fetch failed: %d" % rc)
rc = Popen("git cherry-pick FETCH_HEAD".split()).wait()
if rc != 0:
    raise Error("git cherry-pick failed: %d" % rc)
To capture stderr:
proc_fetch = Popen(["git", "fetch", "ssh://...", change_ref], stderr=PIPE)
stderr = proc_fetch.communicate()[1]
if proc_fetch.returncode == 0:  # success
    p = Popen("git cherry-pick FETCH_HEAD".split(), stderr=PIPE)
    stderr = p.communicate()[1]
    if p.returncode != 0:  # cherry-pick failed
        # handle stderr here
        ...
else:  # fetch failed
    # handle stderr here
    ...

&& is a shell feature. Run the commands separately:
proc = subprocess.Popen(['git', 'fetch', 'ssh://username@company.com:29418/platform/vendor/company-proprietary/radio', change_ref], stderr=subprocess.PIPE)
out, err = proc.communicate()  # Wait for completion, capture stderr
# Optional: test if there were no errors
if not proc.returncode:
    proc = subprocess.Popen(['git', 'cherry-pick', 'FETCH_HEAD'], stderr=subprocess.PIPE)
    out, err = proc.communicate()
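On Python 3.5+, the same stop-on-first-failure behaviour can also be written with subprocess.run(check=True). A minimal sketch, with two python -c invocations standing in for the actual git commands:

```python
import subprocess
import sys

# check=True raises CalledProcessError on a non-zero exit status,
# mirroring the short-circuiting of "cmd1 && cmd2" in a shell.
def run_all(*cmds):
    try:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
        return True
    except subprocess.CalledProcessError:
        return False

# Two tiny interpreter invocations stand in for git fetch / git cherry-pick.
ok = run_all([sys.executable, "-c", "pass"],
             [sys.executable, "-c", "pass"])
bad = run_all([sys.executable, "-c", "import sys; sys.exit(1)"],
              [sys.executable, "-c", "print('never runs')"])
```

With real git commands, the second subprocess.run only executes if the fetch exits with status 0, just like the original && pipeline.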

Simple parallelization of subprocess.run() calls?

Consider the following snippet that runs three different subprocesses one after the other with subprocess.run (and notably all with defaulted kwargs):
import subprocess
p1 = subprocess.run(args1)
if p1.returncode != 0:
    error()
p2 = subprocess.run(args2)
if p2.returncode != 0:
    error()
p3 = subprocess.run(args3)
if p3.returncode != 0:
    error()
How can we rewrite this so that the subprocesses are run in parallel to each other?
With Popen, right? What does that look like, exactly?
For reference, the implementation of subprocess.run is essentially:
with Popen(*popenargs, **kwargs) as process:
    try:
        stdout, stderr = process.communicate(input, timeout=timeout)
    except TimeoutExpired as exc:
        process.kill()
        if _mswindows:
            exc.stdout, exc.stderr = process.communicate()
        else:
            process.wait()
        raise
    except:
        process.kill()
        raise
    retcode = process.poll()
    return CompletedProcess(process.args, retcode, stdout, stderr)
So something like...
with Popen(args1) as p1:
    with Popen(args2) as p2:
        with Popen(args3) as p3:
            try:
                p1.communicate(None, timeout=None)
                p2.communicate(None, timeout=None)
                p3.communicate(None, timeout=None)
            except:
                p1.kill()
                p2.kill()
                p3.kill()
                raise
if p1.poll() != 0 or p2.poll() != 0 or p3.poll() != 0:
    error()
Is that along the right lines?
I would just use multiprocessing to accomplish this, but make sure your invocation of subprocess.run uses capture_output=True so that the output from the three commands running in parallel is not interlaced:
import multiprocessing
import subprocess

def runner(args):
    p = subprocess.run(args, capture_output=True, text=True)
    if p.returncode != 0:
        raise Exception(f'Return code was {p.returncode}.')
    return p.stdout, p.stderr

def main():
    args1 = ['git', 'status']
    args2 = ['git', 'log', '-3']
    args3 = ['git', 'branch']
    args = [args1, args2, args3]
    with multiprocessing.Pool(3) as pool:
        results = [pool.apply_async(runner, args=(arg,)) for arg in args]
        for result in results:
            try:
                out, err = result.get()
                print(out, end='')
            except Exception as e:  # runner completed with an Exception
                print(e)

if __name__ == '__main__':  # required for Windows
    main()
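An alternative sketch, assuming Python 3.6+: because the worker spends its time blocked inside subprocess.run (which releases the GIL while waiting on the child), a thread pool from concurrent.futures works just as well as a process pool here. The python -c commands stand in for the git commands:

```python
import concurrent.futures
import subprocess
import sys

def runner(args):
    # capture_output keeps the three children's output from interlacing
    p = subprocess.run(args, capture_output=True, text=True)
    if p.returncode != 0:
        raise Exception(f'Return code was {p.returncode}.')
    return p.stdout

# Three interpreter invocations stand in for the three git commands.
cmds = [[sys.executable, "-c", f"print({i})"] for i in range(3)]
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(runner, cmds))
print(results)
```

pool.map preserves the input order of the results, so the outputs come back in the same order the commands were submitted, regardless of which child finished first.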
Update
With just subprocess we have something like:
import subprocess
args1 = ['git', 'status']
args2 = ['git', 'log', '-3']
args3 = ['git', 'branch']
p1 = subprocess.Popen(args1)
p2 = subprocess.Popen(args2)
p3 = subprocess.Popen(args3)
p1.communicate()
rc1 = p1.returncode
p2.communicate()
rc2 = p2.returncode
p3.communicate()
rc3 = p3.returncode
But, for whatever reason, on my Windows platform I never saw the output from the third subprocess command ('git branch'), so there must be some limitation there. Also, if the command you were running required input on stdin before proceeding, that input would have to be provided to the communicate method. But communicate does not return until the entire subprocess has completed, so you would get no parallelism; as a general solution this is not very good. In the multiprocessing code, there is no problem with passing stdin input to communicate.
Update 2
When I recode it as follows, I now get all the expected output. I am not sure why it makes a difference, however. According to the documentation, Popen.communicate:
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate and set the returncode attribute. The optional input argument should be data to be sent to the child process, or None, if no data should be sent to the child. If streams were opened in text mode, input must be a string. Otherwise, it must be bytes.
So the call should wait for the process to terminate. Nevertheless, my preceding comment still applies: a command that requires stdin input (via a pipe) would not run in parallel without using multiprocessing.
import subprocess
args1 = ['git', 'status']
args2 = ['git', 'log', '-3']
args3 = ['git', 'branch']
with subprocess.Popen(args1) as p1:
    with subprocess.Popen(args2) as p2:
        with subprocess.Popen(args3) as p3:
            p1.communicate()
            rc1 = p1.returncode
            p2.communicate()
            rc2 = p2.returncode
            p3.communicate()
            rc3 = p3.returncode
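For what it's worth, the parallelism in both variants comes from starting all three Popen objects before waiting on any of them. A small timing sketch (with python -c sleep commands standing in for the git commands) makes that visible:

```python
import subprocess
import sys
import time

# Each child sleeps for one second; because all three are started before any
# wait, the total wall time is ~1 s rather than ~3 s.
SLEEP = [sys.executable, "-c", "import time; time.sleep(1)"]

start = time.monotonic()
procs = [subprocess.Popen(SLEEP) for _ in range(3)]  # all started up front
for p in procs:
    p.communicate()  # wait for each in turn
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed:.2f} s")
```

The children keep running while communicate() blocks on the first one, so waiting on them sequentially does not serialize the work.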

Python3 pipe output of multiple processes to a single process [duplicate]

I know how to run a command using cmd = subprocess.Popen and then subprocess.communicate.
Most of the time I use a string tokenized with shlex.split as the 'argv' argument for Popen.
Example with "ls -l":
import subprocess
import shlex
print(subprocess.Popen(shlex.split(r'ls -l'), stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0])
However, pipes seem not to work... For instance, the following example returns nothing:
import subprocess
import shlex
print(subprocess.Popen(shlex.split(r'ls -l | sed "s/a/b/g"'), stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0])
Can you tell me what I am doing wrong, please?
Thx
I think you want to instantiate two separate Popen objects here, one for 'ls' and the other for 'sed'. You'll want to pass the first Popen object's stdout attribute as the stdin argument to the 2nd Popen object.
Example:
p1 = subprocess.Popen('ls ...', stdout=subprocess.PIPE)
p2 = subprocess.Popen('sed ...', stdin=p1.stdout, stdout=subprocess.PIPE)
print(p2.communicate())
You can keep chaining this way if you have more commands:
p3 = subprocess.Popen('prog', stdin=p2.stdout, ...)
See the subprocess documentation for more info on how to work with subprocesses.
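A runnable sketch of that chaining, with two python -c children standing in for ls and sed; the p1.stdout.close() call follows the pipeline recipe in the subprocess documentation, so p1 can receive SIGPIPE if p2 exits early:

```python
import subprocess
import sys

# First child prints two lines containing 'a'; the second child, playing the
# role of sed, reads its stdin and replaces 'a' with 'b'.
p1 = subprocess.Popen([sys.executable, "-c", "print('aaa'); print('abc')"],
                      stdout=subprocess.PIPE)
p2 = subprocess.Popen([sys.executable, "-c",
                       "import sys; sys.stdout.write(sys.stdin.read().replace('a', 'b'))"],
                      stdin=p1.stdout, stdout=subprocess.PIPE, text=True)
p1.stdout.close()  # allow p1 to get SIGPIPE if p2 exits first
output = p2.communicate()[0]
print(output)
```

Only the last process in the chain is read via communicate(); the intermediate stdout handles are owned by the children once the pipeline is wired up.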
I've made a little function to help with the piping, hope it helps. It will chain Popens as needed.
from subprocess import Popen, PIPE
import shlex

def run(cmd):
    """Runs the given command locally and returns the output, err and exit_code."""
    if "|" in cmd:
        cmd_parts = cmd.split('|')
    else:
        cmd_parts = [cmd]
    i = 0
    p = {}
    for cmd_part in cmd_parts:
        cmd_part = cmd_part.strip()
        if i == 0:
            p[i] = Popen(shlex.split(cmd_part), stdin=None, stdout=PIPE, stderr=PIPE)
        else:
            p[i] = Popen(shlex.split(cmd_part), stdin=p[i - 1].stdout, stdout=PIPE, stderr=PIPE)
        i = i + 1
    (output, err) = p[i - 1].communicate()
    exit_code = p[0].wait()
    return str(output), str(err), exit_code

output, err, exit_code = run("ls -lha /var/log | grep syslog | grep gz")
if exit_code != 0:
    print("Output:")
    print(output)
    print("Error:")
    print(err)
    # Handle error here
else:
    # Be happy :D
    print(output)
shlex only splits up spaces according to the shell rules, but does not deal with pipes.
It should, however, work this way:
import subprocess
import shlex
sp_ls = subprocess.Popen(shlex.split(r'ls -l'), stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
sp_sed = subprocess.Popen(shlex.split(r'sed "s/a/b/g"'), stdin=sp_ls.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
sp_ls.stdin.close()  # makes it similar to /dev/null
output = sp_sed.communicate()[0]  # read from the last process in the pipe; this ignores any errors
print(output)
According to help(subprocess):
Replacing shell pipe line
-------------------------
output=`dmesg | grep hda`
==>
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
HTH
"""
Why don't you use shell
"""
def output_shell(line):
try:
shell_command = Popen(line, stdout=PIPE, stderr=PIPE, shell=True)
except OSError:
return None
except ValueError:
return None
(output, err) = shell_command.communicate()
shell_command.wait()
if shell_command.returncode != 0:
print "Shell command failed to execute"
return None
return str(output)
Thanks @hernvnc, @glglgl, and @Jacques Gaudin for the answers. I fixed the code from @hernvnc. His version will cause hanging in some scenarios.
import shlex
from subprocess import PIPE
from subprocess import Popen
def run(cmd, input=None):
    """Runs the given command locally and returns the output, err and exit_code."""
    if "|" in cmd:
        cmd_parts = cmd.split('|')
    else:
        cmd_parts = [cmd]
    i = 0
    p = {}
    for cmd_part in cmd_parts:
        cmd_part = cmd_part.strip()
        if i == 0:
            if input:
                p[i] = Popen(shlex.split(cmd_part), stdin=PIPE, stdout=PIPE, stderr=PIPE)
            else:
                p[i] = Popen(shlex.split(cmd_part), stdin=None, stdout=PIPE, stderr=PIPE)
        else:
            p[i] = Popen(shlex.split(cmd_part), stdin=p[i - 1].stdout, stdout=PIPE, stderr=PIPE)
        i = i + 1
    # close the stdin explicitly; otherwise, the following case will hang.
    if input:
        p[0].stdin.write(input)
        p[0].stdin.close()
    (output, err) = p[i - 1].communicate()
    exit_code = p[0].wait()
    return str(output), str(err), exit_code
# test case below
inp = b'[ CMServer State ]\n\nnode node_ip instance state\n--------------------------------------------\n1 linux172 10.90.56.172 1 Primary\n2 linux173 10.90.56.173 2 Standby\n3 linux174 10.90.56.174 3 Standby\n\n[ ETCD State ]\n\nnode node_ip instance state\n--------------------------------------------------\n1 linux172 10.90.56.172 7001 StateFollower\n2 linux173 10.90.56.173 7002 StateLeader\n3 linux174 10.90.56.174 7003 StateFollower\n\n[ Cluster State ]\n\ncluster_state : Normal\nredistributing : No\nbalanced : No\ncurrent_az : AZ_ALL\n\n[ Datanode State ]\n\nnode node_ip instance state | node node_ip instance state | node node_ip instance state\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n1 linux172 10.90.56.172 6001 P Standby Normal | 2 linux173 10.90.56.173 6002 S Primary Normal | 3 linux174 10.90.56.174 6003 S Standby Normal'
cmd = "grep -E 'Primary' | tail -1 | awk '{print $3}'"
run(cmd, input=inp)

Python Script for getting Bitbucket links

I have a Python script which creates branches and uploads a file to Bitbucket, and I am trying to get the links from Bitbucket after creating the branches. How can I do that?
from subprocess import PIPE
import subprocess
import os
import getpass
import pandas as pd
import shutil
git_command = ['git', 'status']
current_user = getpass.getuser()
# Assuming every use has same location for their local repository,
# otherwise this should be set manually
repository = r'C:\Users\%s\bb96' % current_user
git_config = subprocess.call(['git', 'config', 'core.autocrlf', 'true'], cwd=repository, stdout=PIPE, stderr=PIPE, shell=True)
#------------------------------------------------------------------------------
git_fetch = subprocess.Popen(['git', 'fetch'], cwd=repository, stdout=PIPE, stderr=PIPE, shell=True)
stdout_f, stderr_f = git_fetch.communicate()
print(stdout_f)
print(stderr_f)
'''
1. Provide path to where all the BSTM are being stored
Provide app_id
Provide schema name
Provide type of the bstms to be created
for ongoing process -> o
for history load -> h
2. Program creates list of branches to be created as follows:
feature/schema-physcial_tab_name-o
3. For each new feature branch creates a directory tree:
appid\schema\table_name\misc\o\bstm, e.g.:
edld_bb96_rdiapid\crz_rdi_ca\table_name\misc\o\bstm
'''
def loading_to_bitbucket(bstm_path, app_id, schema, bstm_type):
    # creates list of bstms
    feature_list = []
    feature_bstm_ref = {}
    bstm_list = [f for f in os.listdir(bstm_path) if os.path.isfile(os.path.join(bstm_path, f)) and f[-4:] == ".csv"]
    # creates name of future branch based on table physical name
    for bstm in bstm_list:
        directorytree = ''
        df = pd.read_csv(os.path.join(bstm_path, bstm))
        for r in range(len(df)):
            for c in range(len(df.columns.values.tolist())):
                if str(df.iloc[r, c]).strip() == "Target Physical Table Name":
                    directorytree = os.path.join(repository, app_id, schema, df.iloc[r + 1, c].lower(), 'misc', bstm_type, 'bstm')
                    feature_list.append("feature/" + schema + "-" + df.iloc[r + 1, c].lower() + "-o")
                    feature_bstm_ref["feature/" + schema + "-" + df.iloc[r + 1, c].lower() + "-o"] = [bstm, directorytree]
                    break
            else:
                continue
            break
    # for each new feature branch, a new branch is created in Bitbucket and the file is loaded;
    # for an existing bstm a newer version is loaded only if the bstm file was updated
    for feature in feature_list:
        compare_flag = 0
        x = ''
        y = ''
        next_release = False
        # ---------
        print(" ")
        print("Current iteration: " + feature)
        git_pull = subprocess.call(['git', 'pull'], cwd=repository, stdout=PIPE, stderr=PIPE, shell=True)
        if git_pull != 0:
            print("GIT PULL didn't succeed, check your git status.")
            break
        else:
            print("GIT PULL ended successfully.")
        checkout_dev = subprocess.call(['git', 'checkout', 'dev'], cwd=repository, stdout=PIPE, stderr=PIPE, shell=True)
        if checkout_dev != 0:
            print("Can't checkout DEV branch, check git status")
            break
        else:
            print("Checked out on DEV successfully.")
        create_branch = subprocess.Popen(['git', 'checkout', '-b', feature], cwd=repository, stdout=PIPE, stderr=PIPE, shell=True)
        stdout_cb, stderr_cb = create_branch.communicate()
        if str(stderr_cb).find("fatal: A branch named " + "'" + feature + "'" + " already exists.") < 0:
            try:
                createdirtree = subprocess.Popen(['mkdir', feature_bstm_ref[feature][1]], cwd=repository, stdout=PIPE, stderr=PIPE, shell=True)
                stdout_md, stderr_md = createdirtree.communicate()
                print("Created feature branch: " + feature)
            except:
                print('Error while creating directory tree for ' + feature)
                continue
        else:
            try:
                checkout_branch = subprocess.Popen(['git', 'checkout', feature], cwd=repository, stdout=PIPE, stderr=PIPE, shell=True)
                stdout_chb, stderr_chb = checkout_branch.communicate()
                print("Checked out on branch: " + feature)
            except:
                print("Error")
                break
        try:
            x = os.path.join(bstm_path, feature_bstm_ref[feature][0]).replace('\\', '/')
            y = os.path.join(feature_bstm_ref[feature][1], feature_bstm_ref[feature][0]).replace('\\', '/')
            next_release = os.path.isfile(y)  # values True/False
            if next_release:
                print("Comparing files")
                compare_files = subprocess.Popen(['git', 'diff', '--shortstat', x, y], stdout=PIPE, stderr=PIPE, shell=True)
                stdout, stderr = compare_files.communicate()
                if str(stdout).find('file changed') > 0:
                    compare_flag = 1
                if compare_flag == 0:
                    try:
                        print("Nothing has changed, move to next available feature branch.")
                        reset_local = subprocess.Popen(['git', 'reset', '--hard', 'origin/' + feature], cwd=repository, stdout=PIPE, stderr=PIPE, shell=True)
                        stdout, stderr = reset_local.communicate()
                        continue
                    except:
                        print("Something went wrong.")
                        break
        except:
            print("comparing files didn't succeed.")
            break
        try:
            if compare_flag != 0:
                print("Newer version found, new version will be loaded")
            else:
                print("Initial load of file.")
            shutil.copy(os.path.join(bstm_path, feature_bstm_ref[feature][0]), feature_bstm_ref[feature][1])
        except shutil.Error:
            print(shutil.Error)
            print("Unable to copy file." + feature_bstm_ref[feature][0])
            break
        # Add the file
        try:
            add_file = subprocess.Popen(["git", "add", "-A"], cwd=repository, stdout=PIPE, stderr=PIPE, shell=True)
            stdout, stderr = add_file.communicate()
        except:
            print("git add error " + feature)
            break
        # Compile the commit message and then commit.
        try:
            message = str(input("Provide commit message: "))
            commit_info = subprocess.Popen(["git", "commit", "-m", message], cwd=repository, stdout=PIPE, stderr=PIPE, shell=True)
            stdout, stderr = commit_info.communicate()
        except:
            print("git commit error " + feature)
            break
        # Push to the target BSTM. If you have a passphrase for your Bitbucket,
        # a window will prompt you to enter it in this step.
        # After you enter the passphrase, the upload completes for this feature.
        try:
            push_to_remote = subprocess.Popen(["git", "push", "origin", "-u", feature], cwd=repository, stdout=PIPE, stderr=PIPE, shell=True)
            stdout, stderr = push_to_remote.communicate()
            print("Bitbucket uploading succeeded for " + feature)
        except:
            print("git push error " + feature)
            break
    checkout_dev = subprocess.call(['git', 'checkout', 'dev'], cwd=repository, stdout=PIPE, stderr=PIPE, shell=True)
    print("All features from the list are checked/loaded.")

# ------------------------------------------------------------------------------
bstm_path = 'M:\\EDW\\EDW_old\\Projects\\'  # note: a raw string cannot end with a backslash
app_id = 'dummy'
schema = 'dummy_schema'
bstm_type = 'o'
# calling the function:
loading_to_bitbucket(bstm_path, app_id, schema, bstm_type)

Why is my python function being skipped?

I've got a small script that's trying to execute an external command. But for some reason, the function that I made to execute the command is being completely skipped over! No errors seem to be raised, it just doesn't execute. I've got a few debug print statements inside it to verify that the function gets entered, but they never print. And I've got a print statement outside of it to verify that the script isn't dying. So what gives?
from xml.etree import ElementTree as et
import subprocess
pomFileLocation = "pom.xml"
uiAutomationCommand = "mvn clean install"
revertPomFileCommand = "git checkout pom.xml"
profileToSetToDefault = "smoketest"
def modifyxml( datafile, value ):
    print( "modifying " + datafile )
    tree = et.parse( datafile )
    rootNodes = tree.getroot()
    for node in rootNodes:
        if "profiles" in node.tag:
            for profile in node.iter():
                foundIt = False
                for param in profile.iter():
                    if "id" in param.tag and profileToSetToDefault in param.text:
                        foundIt = True
                        break
                if foundIt == True:
                    for param in profile.iter():
                        if "activation" in param.tag:
                            for child in param.iter():
                                if "activeByDefault" in child.tag:
                                    child.text = value
    tree.write( datafile )
    return

def runExternalCommand( comm ):
    print( "running command " + comm )
    p = subprocess.Popen( comm, bufsize=-1, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE ).communicate()[0]
    print( str(p) )
    while( True ):
        print( "still running" )
        retcode = p.poll()
        line = p.stdout.readline()
        yield line
        if( retcode is not None ):
            print("Exiting")
            break
    return

if __name__ == '__main__':
    modifyxml( pomFileLocation, "true" )
    #runExternalCommand( uiAutomationCommand )
    runExternalCommand( revertPomFileCommand )
    print( "finished" )
runExternalCommand uses yield, so if you want it to execute all the way to the end, you ought to call it like for something in runExternalCommand(revertPomFileCommand):. Or just delete the yield line, since you don't seem to need it anyway.
def runExternalCommand( comm ):
    print( "running command " + comm )
    p = subprocess.Popen( comm, bufsize=-1, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE ).communicate()[0]
    print( str(p) )
    while( True ):
        print( "still running" )
        retcode = p.poll()
        line = p.stdout.readline()
        yield line
        if( retcode is not None ):
            print("Exiting")
            break
    return

if __name__ == '__main__':
    modifyxml( pomFileLocation, "true" )
    #runExternalCommand( uiAutomationCommand )
    for line in runExternalCommand( revertPomFileCommand ):
        pass
    print( "finished" )
Or
def runExternalCommand( comm ):
    print( "running command " + comm )
    p = subprocess.Popen( comm, bufsize=-1, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE ).communicate()[0]
    print( str(p) )
    while( True ):
        print( "still running" )
        retcode = p.poll()
        line = p.stdout.readline()
        if( retcode is not None ):
            print("Exiting")
            break
    return

if __name__ == '__main__':
    modifyxml( pomFileLocation, "true" )
    #runExternalCommand( uiAutomationCommand )
    runExternalCommand( revertPomFileCommand )
    print( "finished" )
As @Kevin said, the main (but not the only) issue is that runExternalCommand is a generator. To consume it, you could run: print(list(runExternalCommand(revertPomFileCommand))).
Though the function runExternalCommand() is broken: there is no point to call p.stdout.readline() after .communicate() returns (the latter waits for the child process to finish and returns the whole output at once).
It is not clear what result you want to get e.g., to run the git command and to store its output in a variable, you could use subprocess.check_output():
from subprocess import check_output, STDOUT
output = check_output("git checkout pom.xml".split(),
                      stderr=STDOUT, universal_newlines=True)
To discard child's stdout/stderr instead of saving it, use subprocess.check_call():
from subprocess import check_call, DEVNULL, STDOUT
check_call("git checkout pom.xml".split(),
           stdout=DEVNULL, stderr=STDOUT)
For the code example, to read output while the child process is still running, see Constantly print Subprocess output while process is running.
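For the linked approach, a minimal sketch of reading output while the child is still running (a python -c child stands in for the real command):

```python
import subprocess
import sys

# A child that emits three lines; iterating over p.stdout consumes each line
# as it arrives, instead of waiting for communicate() to return everything.
child = [sys.executable, "-c", "for i in range(3): print('line', i)"]
lines = []
with subprocess.Popen(child, stdout=subprocess.PIPE, text=True) as p:
    for line in p.stdout:
        lines.append(line.rstrip())
# the with-block waits for the child on exit, so returncode is now set
print(lines)
```

This avoids the original code's mistake of calling readline() after communicate() has already drained the stream and waited for the process.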

AttributeError: 'module' object has no attribute 'kill'

Here is my code:
import os
import signal
import subprocess
import time

def cmdoutput(cmd1, flag):
    finish = time.time() + 50
    p = subprocess.Popen(cmd1, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    while p.poll() is None:
        time.sleep(1)
        if finish < time.time():
            os.kill(p.pid, signal.SIGTERM)
            print "timed out and killed child, collecting what output exists so far"
    if (flag == "1"):  # To enable container
        out, err = p.communicate(input='container\nzone1')
    else:
        out, err = p.communicate()
    print (out)
    return out
When I run this script, I get
AttributeError: 'module' object has no attribute 'kill'.
What's wrong with my code?
I think you have your own os.py.
Put print os.__file__ before the os.kill(...) line, and you will see what's going on.
UPDATE
os.kill is only available on Unix in Jython.
Instead of os.kill(...), use p.kill().
UPDATE
p.kill() does not work (at least on Windows with Jython 2.5.2, 2.5.3);
p.pid is None.
http://bugs.jython.org/issue1898
Change your code as follows, adjusting CPYTHON_EXECUTABLE_PATH and CMDOUTPUT_SCRIPT_PATH:
CPYTHON_EXECUTABLE_PATH = r'c:\python27\python.exe' # Change path to python.exe
CMDOUTPUT_SCRIPT_PATH = r'c:\users\falsetru\cmdoutput.py' # Change path to the script

def cmdoutput(cmd1, flag):
    return subprocess.check_output([CPYTHON_EXECUTABLE_PATH, CMDOUTPUT_SCRIPT_PATH, flag])
Save following code as cmdoutput.py
import subprocess
import sys
import time

def cmdoutput(cmd1, flag):
    finish = time.time() + 50
    p = subprocess.Popen(cmd1, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    while p.poll() is None:
        time.sleep(1)
        if finish < time.time():
            p.kill()
            return '<<timeout>>'
    if flag == "1":
        out, err = p.communicate('container\nzone1')
    else:
        out, err = p.communicate()
    return out

if __name__ == '__main__':
    cmd, flag = sys.argv[1:3]
    print(cmdoutput(cmd, flag))
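For completeness, on modern CPython (3.7+ for capture_output) the whole poll/sleep/kill dance collapses into subprocess.run's timeout parameter, which kills the child and raises TimeoutExpired when the deadline passes. A sketch with a deliberately slow child:

```python
import subprocess
import sys

# The child would sleep for 60 s; timeout=1 kills it and raises TimeoutExpired.
try:
    p = subprocess.run([sys.executable, "-c", "import time; time.sleep(60)"],
                       capture_output=True, timeout=1)
    result = p.stdout
except subprocess.TimeoutExpired:
    result = '<<timeout>>'
print(result)
```

This sidesteps both os.kill portability issues and the Jython p.pid bug, at the cost of requiring CPython 3.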
