Changing a function's behavior based on platform, Pythonically

Use case:
I have to work with the Python subprocess32 module for its timeout support, but the code may run on Windows as well. The module documentation suggests:
if os.name == 'posix' and sys.version_info[0] < 3:
    import subprocess32 as subprocess
else:
    import subprocess
The problem:
The above method does not take into account that methods like communicate, check_output, and wait accept a timeout parameter only in the subprocess32 module. All calls passing a timeout will fail with this method.
I don't wish to implement two different variants of the same function, conditionally import modules, and so on.
I'm looking for a Pythonic way of handling this. My hunch says that decorators and partial functions should help, but I can't seem to figure out a precise and concise way.
Any suggestions?

I devised a super ugly way of doing this using partial functions:
import os
import sys
from functools import partial
from subprocess import check_output
import subprocess

if os.name == 'posix' and sys.version_info[0] < 3:
    from subprocess32 import check_output
    import subprocess32 as subprocess
    check_output = partial(check_output, timeout=10)

def execute_cmd(cmd, args):
    command = [cmd] + args
    try:
        proc_out = check_output(command, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as e:
        print("Failed to execute local command %s\nError code: %s, Output: %s"
              % (e.cmd, e.returncode, e.output))
        # I wish to handle TimeoutExpired exception here, but then this won't be generic
    except:
        print("Command %s failed to execute on host" % command)
        raise
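One way to keep a single implementation is to centralize the conditional in one wrapper that drops the unsupported timeout argument. This is only a sketch, not from the original post; the function name run_checked and the flag HAS_TIMEOUT are made up here:

```python
import os
import sys

# Pick the right module once, at import time.
if os.name == 'posix' and sys.version_info[0] < 3:
    import subprocess32 as subprocess  # backport with timeout support
    HAS_TIMEOUT = True
else:
    import subprocess
    # The stdlib module only grew `timeout` in Python 3.3.
    HAS_TIMEOUT = sys.version_info[:2] >= (3, 3)

def run_checked(command, timeout=10, **kwargs):
    """check_output() that silently drops `timeout` where unsupported."""
    if HAS_TIMEOUT:
        kwargs['timeout'] = timeout
    return subprocess.check_output(command, **kwargs)
```

Callers always pass timeout; on platforms where the underlying module cannot honor it, the argument is simply omitted instead of raising TypeError.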

Related

Mocking python subprocess.call function and capture its system exit code

While writing test cases that cover both successful and failed Python subprocess calls, I need to capture subprocess.call's return code.
Using Python's unittest.mock module, is it possible to patch the subprocess.call function and capture its real system exit code?
Consider an external library with the following code:
## <somemodule.py> file
import subprocess

def run_hook(cmd):
    # ...
    subprocess.call(cmd, shell=True)
    sum_one = 1 + 1
    return sum_one
I can't modify the function run_hook; it is not part of my code. But the fact is that subprocess.call is invoked among its other statements.
Here is a snippet of test code forcing a system exit with error code 1:
## <somemodule.py> tests file
import sys
import subprocess
from somemodule import run_hook
try:
    from unittest.mock import patch
except ImportError:
    from mock import patch

@patch("subprocess.call", side_effect=subprocess.call)
def test_run_hook_systemexit_not_0(call_mock):
    python_exec = sys.executable
    cmd_parts = [python_exec, "-c", "'exit(1)'"]  # Forced error code 1
    cmd = " ".join(cmd_parts)
    run_hook(cmd)
    call_mock.assert_called_once_with(cmd, shell=True)
    # I need to change the following assertion to
    # catch the real return code "1"
    assert "1" == call_mock.return_value(), \
        "Forced system exit(1) should return 1. Just for example purposes"
How can I improve this test to capture the real return code of any subprocess.call invocation?
For compatibility purposes, the newer subprocess.run (3.5+) can't be used; this library is still broadly used in Python 2.7+ environments.
About subprocess.call, the documentation says:
Run the command described by args. Wait for command to complete, then return the returncode attribute.
All you need to do is modify your run_hook function to return the exit code:
def run_hook(cmd):
    # ...
    return subprocess.call(cmd, shell=True)
This will simplify your test code:
def test_run_hook_systemexit_not_0():
    python_exec = sys.executable
    args = [python_exec, "-c", "'exit(1)'"]
    assert run_hook(args) == 1
My advice: use subprocess.run instead
Edit
If you want to check the exit code of subprocess.call, you need to patch it with your own version, like this:
import subprocess

_original_call = subprocess.call

def assert_call(*args, **kwargs):
    assert _original_call(*args, **kwargs) == 0
Then you use assert_call as the side-effect function for your patch:
from unittest.mock import patch

@patch('subprocess.call', side_effect=assert_call)
def test(call):
    python_exec = sys.executable
    args = [python_exec, "-c", "'exit(1)'"]
    run_hook(args)
A wrapper around subprocess.call can handle the assertion verification: I declare the wrapper as the side_effect argument in the @patch definition. The following implementation worked well.
import sys
import unittest
try:
    from unittest.mock import patch
except ImportError:
    from mock import patch

def subprocess_call_assert_wrap(expected, message=None):
    from subprocess import call as _subcall
    def _wrapped(*args, **kwargs):
        if message:
            assert expected == _subcall(*args, **kwargs), message
        else:
            assert expected == _subcall(*args, **kwargs)
    return _wrapped

class TestCallIsBeingCalled(unittest.TestCase):
    @patch("subprocess.call", side_effect=subprocess_call_assert_wrap(expected=0))
    def test_run_hook_systemexit_0(self, call_mock):
        python_exec = sys.executable
        cmd_parts = [python_exec, "-c", "'exit(0)'"]
        cmd = " ".join(cmd_parts)
        run_hook(cmd)
        call_mock.assert_called_once_with(cmd, shell=True)

    @patch("subprocess.call", side_effect=subprocess_call_assert_wrap(expected=1))
    def test_run_hook_systemexit_not_0(self, call_mock):
        python_exec = sys.executable
        cmd_parts = [python_exec, "-c", "'exit(1)'"]
        cmd = " ".join(cmd_parts)
        run_hook(cmd)
        call_mock.assert_called_once_with(cmd, shell=True)
After some tests with this approach, it seems possible to use it for more general-purpose calls, like:
def assert_wrapper(expected, callable, message=None):
    def _wrapped(*args, **kwargs):
        if message:
            assert expected == callable(*args, **kwargs), message
        else:
            assert expected == callable(*args, **kwargs)
    return _wrapped
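To see the generic wrapper in action, here is a self-contained sketch (the wrapper is repeated so the snippet runs standalone; it assumes sys.executable points at a working interpreter, which it does under any normal Python):

```python
import subprocess
import sys

def assert_wrapper(expected, callable, message=None):
    # Same wrapper as above, repeated so this snippet runs on its own.
    def _wrapped(*args, **kwargs):
        if message:
            assert expected == callable(*args, **kwargs), message
        else:
            assert expected == callable(*args, **kwargs)
    return _wrapped

# Wrap subprocess.call and demand a zero exit status.
ok = assert_wrapper(0, subprocess.call)
ok([sys.executable, "-c", "exit(0)"])  # passes silently

# A non-zero exit status trips the assertion.
failing = assert_wrapper(0, subprocess.call, "expected success")
try:
    failing([sys.executable, "-c", "exit(1)"])
    raised = False
except AssertionError:
    raised = True
assert raised
```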
This is not the best approach, but it seems reasonable.
Is there some well-known library with similar behavior that I could use in this project?

Python multiprocessing without locking the parent process

OS: Mac OS X
Language: Python
I'm new to multiprocessing with Python. For my application, I want to open a new process from the main process and run it without blocking the main process. For example, if I'm running process A and need to open a new application from A (call it process B), I want to open B in such a way that it does not block process A, and from process A I should still be able to stop B whenever I wish.
Whatever code I have tried so far has been basic, and it blocks process A, so I'm unable to achieve this. Is there any workaround?
I read about fork and spawn but couldn't understand how to use them to open an application. I have also tried threading, with no success. Can anyone tell me how I can do that?
Currently I'm using subprocess.call() to open Mac applications from Python.
It would be really helpful.
EDIT:
I tried the accepted answer of this link, but to no avail, because it blocks the terminal, and once the app is closed manually it exits with output 0.
I have also tried this solution, which does the same. I want the calling process not to be blocked by the called process.
Doing the same task on Windows with os.system() gives me exactly what I want, but I don't know how to do this on Mac.
EDIT 2: CODE
module 1:
import subprocess

def openCmd(name):
    subprocess.call(["/usr/bin/open", "-W", "-n", "-a", "/Applications/" + name + ".app"])

def closeCmd(name):
    subprocess.call(['osascript', '-e', 'tell application "' + name + '" to quit'])
main module:
import speech_recognition as sr
import pyttsx
import opCl

speech_engine = pyttsx.init('nsss')
speech_engine.setProperty('rate', 150)

OPEN_COGNATES = ['open']
CLOSE_COGNATES = ['close']

def speak(text):
    speech_engine.say(text)
    speech_engine.runAndWait()

re = sr.Recognizer()

def listen():
    with sr.Microphone() as source:
        re.adjust_for_ambient_noise(source)
        while True:
            speak("Say something!")
            print ">>",
            audio = re.listen(source)
            try:
                speak("now to recognise it,")
                value = re.recognize_google(audio)
                print (value)
                speak("I heard you say {}".format(value))
                value = value.strip().split()
                name = " ".join(value[1:])
                if value[0] in OPEN_COGNATES:
                    speak("opening " + name)
                    opCl.openCmd(name)
                elif value[0] in CLOSE_COGNATES:
                    speak("closing " + name)
                    opCl.closeCmd(name)
                else:
                    pass
            except sr.UnknownValueError as e:
                speak("Could not understand audio")
                print ('Could not understand audio')
            except sr.RequestError as e:
                speak("can't recognise what you said.")
                print ("can't recognise what you said")

if __name__ == '__main__':
    listen()
Comment: it gave a traceback. FileNotFoundError: [Errno 2] No such file or directory: 'leafpad'
As I wrote, I can't use "/usr/bin/open" and osascript, so my example uses 'leafpad'.
Have you tried replacing Popen([name]) with your
Popen(["/usr/bin/open", "-W", "-n", "-a", "/Applications/" + name + ".app"])?
You must pass the same command args as when you start it from the command line.
Read this: launch-an-app-on-os-x-with-command-line
Reread the Python 3.6.1 documentation on subprocess.Popen.
Note:
shlex.split() can be useful when determining the correct tokenization for args, especially in complex cases.
From the Python 3.6.1 documentation:
subprocess.call(args, *, stdin=None, stdout=None, stderr=None, shell=False, timeout=None)
Run the command described by args. Wait for the command to complete, then return the returncode attribute.
Your "/usr/bin/open" and osascript commands didn't work for me.
From the Python 3.6.1 documentation on subprocess.Popen, a no-wait example:
import subprocess

def openCmd(name):
    subprocess.Popen([name])

def closeCmd(name):
    subprocess.Popen(['killall', name])

if __name__ == '__main__':
    while True:
        key = input('input 1=open, 0=close, q=quit:')
        if key == '1':
            openCmd('leafpad')
        if key == '0':
            closeCmd('leafpad')
        if key == 'q':
            break
Note: Killing a process can lead to data loss and/or other problems.
Tested with Python 3.4.2.
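Killing by name with killall stops every process with that name. A gentler variant, not from the answer above, is to keep the Popen handle so the parent can stop exactly the child it started; this sketch uses a sleeping Python child as a stand-in for a real application:

```python
import subprocess
import sys

# Launch a child without blocking the parent; keep the handle around.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

# The parent keeps running; poll() is None while the child is alive.
assert child.poll() is None

# Stop the child gracefully (SIGTERM on POSIX) and reap it.
child.terminate()
child.wait()
assert child.poll() is not None
```

Because Popen returns immediately, the parent is never blocked, and terminate() only affects this one child rather than every process sharing a name.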

python 2.4.3, unable to use check_output [duplicate]

I've been reading the Python documentation about the subprocess module (see here) and it talks about a subprocess.check_output() command which seems to be exactly what I need.
However, when I try to use it I get an error saying it doesn't exist, and when I run dir(subprocess) it is not listed.
I am running Python 2.6.5, and the code I have used is below:
import subprocess
subprocess.check_output(["ls", "-l", "/dev/null"])
Does anyone have any idea why this is happening?
It was introduced in Python 2.7; see the docs.
Use subprocess.Popen if you want the output:
>>> import subprocess
>>> output = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE).communicate()[0]
IF it's used heavily in the code you want to run but that code doesn't have to be maintained long-term (or you need a quick fix irrespective of potential maintenance headaches in the future) then you could duck punch (aka monkey patch) it in wherever subprocess is imported...
Just lift the code from 2.7 and insert it thusly...
import subprocess

if "check_output" not in dir(subprocess):  # duck punch it in!
    def f(*popenargs, **kwargs):
        if 'stdout' in kwargs:
            raise ValueError('stdout argument not allowed, it will be overridden.')
        process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
        output, unused_err = process.communicate()
        retcode = process.poll()
        if retcode:
            cmd = kwargs.get("args")
            if cmd is None:
                cmd = popenargs[0]
            raise subprocess.CalledProcessError(retcode, cmd)
        return output

    subprocess.check_output = f
Minor fidgeting may be required.
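A quick smoke test of the duck-punched function (this assumes the POSIX utilities echo and false are on PATH; on Python 2.7+ it exercises the stdlib version instead, which behaves the same):

```python
import subprocess

# After the patch above, check_output behaves like the 2.7 version:
# it returns the command's standard output as bytes.
out = subprocess.check_output(["echo", "hello"])
assert out.strip() == b"hello"

# Non-zero exit codes raise CalledProcessError, as in 2.7.
try:
    subprocess.check_output(["false"])
    raised = False
except subprocess.CalledProcessError:
    raised = True
assert raised
```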
Do bear in mind, though, that the onus is on you to maintain dirty little backports like this. If bugs are discovered and corrected in the latest Python, then you (a) have to notice that and (b) update your version if you want to stay secure. Also, overriding and defining internal functions yourself is the next guy's worst nightmare, especially when the next guy is YOU several years down the line and you've forgotten all about the grody hacks you did last time! In summary: it's very rarely a good idea.
Thanks for the monkey-patch suggestion. My own attempts failed because we were consuming CalledProcessError's output, so that class needed to be monkey-patched as well. I found a working 2.6 patch here:
http://pydoc.net/Python/pep8radius/0.9.0/pep8radius.shell/
"""Note: We also monkey-patch subprocess for python 2.6 to
give feature parity with later versions.
"""
try:
    from subprocess import STDOUT, check_output, CalledProcessError
except ImportError:  # pragma: no cover
    # python 2.6 doesn't include check_output
    # monkey patch it in!
    import subprocess
    STDOUT = subprocess.STDOUT

    def check_output(*popenargs, **kwargs):
        if 'stdout' in kwargs:  # pragma: no cover
            raise ValueError('stdout argument not allowed, '
                             'it will be overridden.')
        process = subprocess.Popen(stdout=subprocess.PIPE,
                                   *popenargs, **kwargs)
        output, _ = process.communicate()
        retcode = process.poll()
        if retcode:
            cmd = kwargs.get("args")
            if cmd is None:
                cmd = popenargs[0]
            raise subprocess.CalledProcessError(retcode, cmd,
                                                output=output)
        return output

    subprocess.check_output = check_output

    # overwrite CalledProcessError due to `output`
    # keyword not being available (in 2.6)
    class CalledProcessError(Exception):
        def __init__(self, returncode, cmd, output=None):
            self.returncode = returncode
            self.cmd = cmd
            self.output = output

        def __str__(self):
            return "Command '%s' returned non-zero exit status %d" % (
                self.cmd, self.returncode)

    subprocess.CalledProcessError = CalledProcessError

Check if a program exists from a python script [duplicate]

This question already has answers here:
Test if executable exists in Python?
(15 answers)
Closed 4 years ago.
How do I check if a program exists from a Python script?
Let's say you want to check if wget or curl is available. We'll assume that they should be on the PATH.
It would be best to see a multiplatform solution, but for the moment, Linux is enough.
Hints:
running the command and checking the return code is not always enough, as some tools return a non-zero result even when you try --version.
nothing should be visible on screen when checking for the command
Also, I would appreciate a solution that is more general, like is_tool(name).
shutil.which
Let me recommend an option that has not been discussed yet: a Python implementation of which, specifically shutil.which. It was introduced in Python 3.3 and is cross-platform, supporting Linux, Mac, and Windows. It is also available in Python 2.x via whichcraft. You can also just rip the code for which right out of whichcraft here and insert it into your program.
def is_tool(name):
    """Check whether `name` is on PATH and marked as executable."""
    # from whichcraft import which
    from shutil import which
    return which(name) is not None
distutils.spawn.find_executable
Another option that has already been mentioned is distutils.spawn.find_executable.
find_executable's docstring is as follows:
Tries to find 'executable' in the directories listed in 'path'
So if you pay attention, you'll note that the name of the function is somewhat misleading. Unlike which, find_executable does not actually verify that executable is marked as executable, only that it is on the PATH. So it's entirely possible (however unlikely) that find_executable indicates a program is available when it is not.
For example, suppose you have a file /usr/bin/wget that is not marked executable. Running wget from the shell will result in the following error: bash: /usr/bin/wget: Permission denied. which('wget') is not None will return False, yet find_executable('wget') is not None will return True. You can probably get away with using either function, but this is just something to be aware of with find_executable.
def is_tool(name):
    """Check whether `name` is on PATH."""
    from distutils.spawn import find_executable
    return find_executable(name) is not None
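The executable-bit distinction described above can be made concrete with a manual PATH scan; this is an illustrative sketch, and the name is_tool_strict is made up here:

```python
import os

def is_tool_strict(name):
    """Return True only if `name` is on PATH *and* executable,
    mirroring what shutil.which checks (unlike find_executable,
    which only checks for presence on the PATH)."""
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return True
    return False
```

On a typical POSIX system, is_tool_strict("sh") returns True, while a file on the PATH without its executable bit set would be rejected even though find_executable would report it.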
The easiest way is to try to run the program with the desired parameters, and handle the exception if it doesn't exist:
import subprocess
try:
    subprocess.call(["wget", "your", "parameters", "here"])
except FileNotFoundError:
    pass  # handle file not found error.
This is a common pattern in Python: EAFP ("easier to ask forgiveness than permission").
In Python 2, you had to catch OSError instead, since the more fine-grained exception classes for OS errors did not exist yet:
import errno
import subprocess
try:
    subprocess.call(["wget", "your", "parameters", "here"])
except OSError as e:
    if e.errno == errno.ENOENT:
        pass  # handle file not found error.
    else:
        # Something else went wrong while trying to run `wget`
        raise
You could use a subprocess call to the needed binary with:
"which": *nix
"where": Windows 2003 and later (XP has an add-on)
to get the executable path (supposing it is on the environment PATH).
import os
import platform
import subprocess

cmd = "where" if platform.system() == "Windows" else "which"
try:
    subprocess.call([cmd, your_executable_to_check_here])
except:
    print("No executable")
or just use Ned Batchelder's wh.py script, that is a "which" cross platform implementation:
http://nedbatchelder.com/code/utilities/wh_py.html
import errno
import os
import subprocess

def is_tool(name):
    try:
        devnull = open(os.devnull)
        subprocess.Popen([name], stdout=devnull, stderr=devnull).communicate()
    except OSError as e:
        if e.errno == errno.ENOENT:
            return False
    return True
I would probably shell out to which wget or which curl and check that the result ends in the name of the program you are using. The magic of unix :)
Actually, all you need to do is check the return code of which. So... using our trusty subprocess module:
import subprocess

rc = subprocess.call(['which', 'wget'])
if rc == 0:
    print('wget installed!')
else:
    print('wget missing in path!')
Note that I tested this on windows with cygwin... If you want to figure out how to implement which in pure python, i suggest you check here: http://pypi.python.org/pypi/pycoreutils (oh dear - it seems they don't supply which. Time for a friendly nudge?)
UPDATE: On Windows, you can use where instead of which for a similar effect.
I'd go for:
import distutils.spawn

def is_tool(name):
    return distutils.spawn.find_executable(name) is not None
I'd change @sorin's answer as follows; the reason is that it checks for the name of the program without requiring the absolute path of the program:
from subprocess import Popen, PIPE

def check_program_exists(name):
    p = Popen(['/usr/bin/which', name], stdout=PIPE, stderr=PIPE)
    p.communicate()
    return p.returncode == 0
import os
import subprocess

def is_tool(prog):
    for dir in os.environ['PATH'].split(os.pathsep):
        if os.path.exists(os.path.join(dir, prog)):
            try:
                subprocess.call([os.path.join(dir, prog)],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
            except OSError:
                return False
            return True
    return False
A slight modification to @SvenMarnach's code that addresses the issue of printing to the standard output stream: if you use the subprocess.check_output() function rather than subprocess.call(), then you can handle in your code the string that is normally printed to standard out, and still catch exceptions and the exit status code.
If you want to suppress the standard output stream in the terminal, don’t print the std out string that is returned from check_output:
import errno
import subprocess

try:
    stdout_string = subprocess.check_output(["wget", "--help"], stderr=subprocess.STDOUT)
    # print(stdout_string)
except subprocess.CalledProcessError as cpe:
    print(cpe.returncode)
    print(cpe.output)
except OSError as e:
    if e.errno == errno.ENOENT:
        print(e)
    else:
        # Something else went wrong while trying to run `wget`
        print(e)
The non-zero exit status code and output string are raised in the CalledProcessError as subprocess.CalledProcessError.returncode and subprocess.CalledProcessError.output so you can do whatever you'd like with them.
If you want to print the executable's standard output to the terminal, print the string that is returned:
import errno
import subprocess

try:
    stdout_string = subprocess.check_output(["wget", "--help"], stderr=subprocess.STDOUT)
    print(stdout_string)
except subprocess.CalledProcessError as cpe:
    print(cpe.returncode)
    print(cpe.output)
except OSError as e:
    if e.errno == errno.ENOENT:
        print(e)
    else:
        # Something else went wrong while trying to run `wget`
        print(e)
print() adds an extra newline to the string. If you want to eliminate that (and write std error to the std err stream instead of the std out stream as shown with the print() statements above), use sys.stdout.write(string) and sys.stderr.write(string) instead of print():
import errno
import subprocess
import sys

try:
    stdout_string = subprocess.check_output(["bogus"], stderr=subprocess.STDOUT)
    sys.stdout.write(stdout_string.decode())
except subprocess.CalledProcessError as cpe:
    sys.stderr.write(str(cpe.returncode))
    sys.stderr.write(cpe.output.decode())
except OSError as e:
    if e.errno == errno.ENOENT:
        sys.stderr.write(e.strerror)
    else:
        # Something else went wrong while trying to run the executable
        sys.stderr.write(e.strerror)

