I am developing a small tool with Python on Linux. Earlier I was using Python 2.7, but I have now changed to Python 3.4 to see if it could help solve my problem. I am running the following code:
try:
    x = subprocess.check_output(command, shell=True, timeout=3)
except subprocess.TimeoutExpired as exc:
    print("Timeout bro")
    exit()
except Exception as e:
    msg = "Some issues in fetching details"
    print(msg)
Since the command fetches details from another device and the device is not functioning properly, it times out after 3 seconds and prints the message "Timeout bro". I read about the security issues of using shell=True, so I tried setting shell=False once, and the second time I removed that argument entirely.
try:
    x = subprocess.check_output(command, shell=False, timeout=3)
except subprocess.TimeoutExpired as exc:
    print("Timeout bro")
    exit()
except Exception as e:
    msg = "Some issues in fetching details"
    print(msg)
I have read in various places that the command works equally well with shell=False. But as soon as I run the above code with shell=False, it immediately prints "Some issues in fetching details" without waiting for 3 seconds. Is there any way I can run this code without shell=True? Please help. Thanks!
When using shell=True, the command may be a string. When using shell=False, the command should be a list of strings, with the first string being the executable, and the subsequent strings being arguments to be passed to the executable.
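For example, these two calls should behave the same way (using a hypothetical ls -l /tmp command purely for illustration):

import subprocess

# With shell=True, the whole command line is handed to the shell as one string.
out1 = subprocess.check_output("ls -l /tmp", shell=True, timeout=3)

# With shell=False (the default), the command is a list: executable first, then its arguments.
out2 = subprocess.check_output(["ls", "-l", "/tmp"], timeout=3)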
You might try splitting the command with shlex.split:
import shlex
x = subprocess.check_output(shlex.split(command), shell=False, timeout=3)
By default (posix=True), shlex.split treats backslashes as escape characters and drops them. So if shlex.split does not work with your command, you may need to pass posix=False or split the command manually.
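A quick sketch of that behaviour, using a made-up command containing backslashes:

import shlex

cmd = r'copy C:\temp\in.txt C:\temp\out.txt'
print(shlex.split(cmd))               # posix=True (default): backslashes act as escapes
# ['copy', 'C:tempin.txt', 'C:tempout.txt']
print(shlex.split(cmd, posix=False))  # posix=False: backslashes survive
# ['copy', 'C:\\temp\\in.txt', 'C:\\temp\\out.txt']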
Try splitting the command with command.split(). A string works with shell=True, but shell=False expects a list of arguments. However, beware that split() won't work in some cases, such as when a path contains a space; I suggest using shlex in that case.
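For instance, with a hypothetical path containing a space:

import shlex

command = 'cat "/tmp/my folder/file.txt"'
print(command.split())
# ['cat', '"/tmp/my', 'folder/file.txt"']   -- the quoted path is broken in two
print(shlex.split(command))
# ['cat', '/tmp/my folder/file.txt']        -- shlex keeps it as one argument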
In Python 3.8.13 on Windows 11, I am trying to call a program with subprocess.run() and use the argument timeout to exit the program in case it runs for more than n seconds. However, in some cases, it does not work. Here is my code:
command = ['python', './fast-downward/fast-downward.py', 'domain.pddl',
           'problem1.pddl', '--search', 'astar(blind())']
try:
    planner_output = subprocess.run(command, timeout=10, shell=False,
                                    stdout=subprocess.PIPE).stdout.decode('utf-8')
except TimeoutExpired as e:
    planner_output = 'timeout'
print(planner_output)
I have tried the solutions posted by other people (like this StackOverflow post) but they did not work for my case, maybe due to a different Python/OS version. Do you know of any workaround for my problem?
I am trying to display the final results.txt file via the default program. I've tried a bare Popen() without run() and got the same effect. The target file opens (for me it's the see command), but after exiting it I receive:
Warning: program returned non-zero exit code #256
Is there any way to ignore it and prevent my program from displaying such a warning? I don't care about it because it's the last thing the program does, so I don't want people to waste their time pressing Enter each time...
Code's below:
from subprocess import run, Popen
if filepath[len(filepath)-1] != '/':
    try:
        results = run(Popen(['start', 'results.txt'], shell=True), stdout=None, shell=True, check=False)
    except TypeError:
        pass
else:
    try:
        results = run(Popen(['open', 'results.txt']), stdout=None, check=False)
    except TypeError:
        pass
    except FileNotFoundError:
        try:
            results = run(Popen(['see', 'results.txt']), stdout=None, check=False)
        except TypeError:
            pass
        except FileNotFoundError:
            pass
Your immediate error is that you are mixing subprocess.run with subprocess.Popen. The correct syntax is
y = subprocess.Popen(['command', 'argument'])
or
x = subprocess.run(['command', 'argument'])
but you are incorrectly combining them into, effectively
x = subprocess.run(subprocess.Popen(['command', 'argument']), shell=True)
where the shell=True is a separate bug in its own right (though it will weirdly work on Windows).
What happens is that Popen runs successfully, but then you try to pass its result to run, which of course is not a valid command at all.
You want to prefer subprocess.run() over subprocess.Popen in this scenario; the latter is for hand-wiring your own low-level functionality in scenarios where run doesn't offer enough flexibility (such as when you require the subprocess to run in parallel with your Python program as an independent process).
Your approach seems vaguely flawed for Unix-like systems; you probably want to run xdg-open if it's available, otherwise the value of os.environ["PAGER"] if it's defined, else fall back to less, else try more. Some ancient systems also have a default pager called pg.
You will definitely want to add check=True to actually make sure your command fails properly if the command cannot be found, which is the diametrical opposite of what you appear to be asking. With this keyword parameter, Python checks whether the command worked, and will raise an exception if not. (In its absence, failures will be silently ignored, in general.) You should never catch every possible exception; instead, trap just the one you really know how to handle.
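A minimal sketch of that approach (the opener names, fallback order, and the show_results helper are illustrative assumptions, not a tested recipe):

import os
import subprocess

def show_results(path='results.txt'):
    # Try each opener in turn; check=True makes run() raise if the command fails.
    openers = ['xdg-open', os.environ.get('PAGER', 'less'), 'more']
    for opener in openers:
        try:
            subprocess.run([opener, path], check=True)
            return
        except (FileNotFoundError, subprocess.CalledProcessError):
            continue  # opener not installed or it failed; try the next one
    print('Could not find a program to open', path)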
Okay, I've achieved my goal with a different approach. I didn't need to handle such an exception; I did it without the subprocess module.
Question closed, here's the final code (it looks even better):
from os import system
from platform import system as sysname
if sysname() == 'Windows':
    system('start results.txt')
elif sysname() == 'Linux':
    system('see results.txt')
elif sysname() == 'Darwin':
    system('open results.txt')
else:
    pass
I have the following Python(2.7) code:
try:
    FNULL = open(os.devnull, 'w')
    subprocess.check_call(["tar", "-czvf", '/folder/archive.tar.gz', '/folder/some_other_folder'],
                          stdout=FNULL, stderr=subprocess.STDOUT)
except Exception as e:
    print str(e)
The problem I face is that when there is no more space for the archive, print str(e) prints Command '['tar', '-czvf', '/folder/archive.tar.gz', '/folder/some_other_folder']' returned non-zero exit status 1, which is true, but I want to catch the real error here, namely gzip: write error: No space left on device (I got this error when I ran the same tar command manually). Is that possible somehow? I assume that gzip is another process within tar. Am I wrong? Please keep in mind that upgrading to Python 3 is not possible.
EDIT: I also tried to use subprocess.check_output() and print the contents of e.output but that also didn't work
Python 3 solution for sane people
On Python 3, the solution is simple, and you should be using Python 3 for new code anyway (Python 2.7 ended all support nearly a year ago):
The problem is that the program is echoing the error to stderr, so check_output doesn't capture it (either normally, or in the CalledProcessError). The best solution is to use subprocess.run (which check_call/check_output are just a thin wrapper over) and ensure you capture both stdout and stderr. The simplest approach is:
try:
    subprocess.run(["tar", "-czvf", '/folder/archive.tar.gz', '/folder/some_other_folder'],
                   check=True,
                   stdout=subprocess.DEVNULL,  # ignores stdout
                   stderr=subprocess.PIPE)     # captures stderr so e.stderr is populated if needed
except subprocess.CalledProcessError as e:
    print("tar exited with exit status {}:".format(e.returncode), e.stderr, file=sys.stderr)
Python 2 solution for people who like unsupported software
If you must do this on Python 2, you have to handle it all yourself by manually invoking Popen, as none of the high-level functions available there will cover you (CalledProcessError didn't gain a stderr attribute until 3.5, because no high-level API that raised it was designed to handle stderr at all):
with open(os.devnull, 'wb') as f:
    proc = subprocess.Popen(["tar", "-czvf", '/folder/archive.tar.gz', '/folder/some_other_folder'],
                            stdout=f, stderr=subprocess.PIPE)
    _, stderr = proc.communicate()
    if proc.returncode != 0:
        # Assumes from __future__ import print_function at top of file,
        # because Python 2 print statements are terrible
        print("tar exited with exit status {}:".format(proc.returncode), stderr, file=sys.stderr)
I am getting a weird "Access denied - \" error/warning when I run the following script:
import os
import subprocess
directory = r'S:\ome\directory'
subprocess.run(['find', '/i', '"error"', '*.txt', '>Errors.log'], shell=True, cwd=directory)
I have also checked:
print(os.access(directory, os.R_OK)) # prints True
print(os.access(directory, os.W_OK)) # prints True
and they both print True.
The error message is printed while the subprocess command is running, but the process is not killed and nothing is raised. As a result, wrapping it in a try-except, even without specifying the exception, catches nothing. When the process finishes, the file (Errors.log) is created but contains the wrong results.
Running the exact same command (find /i "error" *.txt >Errors.log) from a cmd opened in the specified directory produces the correct results.
So in which way are the two approaches different?
Approach 1 (from Python):
subprocess.run(['find', '/i', '"error"', '*.txt', '>Errors.log'], shell=True, cwd=r'S:\ome\directory')
Approach 2 (from cmd):
S:\ome\directory>find /i "error" *.txt >Errors.log
I am still not sure what exactly the problem is but changing:
subprocess.run(['find', '/i', '"error"', '*.txt', '>Errors.log'], shell=True, cwd=r'S:\ome\directory')
to:
subprocess.run('find /i "error" *.txt >Errors.log', shell=True, cwd=directory)
does the trick.
As it appears, stitching the whole command together as a single string works. If anybody has more info on the matter, I would be very grateful.
From the Popen constructor documentation (which is called by subprocess.run):
On Windows, if args is a sequence, it will be converted to a string in a manner described in Converting an argument sequence to a string on Windows. This is because the underlying CreateProcess() operates on strings.
The problem is that CreateProcess does not support redirection.
See: How do I redirect output to a file with CreateProcess?
Try passing an output file handle as the stdout argument:
import shlex
import subprocess
with open("Errors.log", "w") as output_fh:
# command = ['find', '/i', '"fatal"', '*.txt']
command = shlex.split(r'find /i \"fatal\" *.txt')
try:
subprocess.run(command, shell=True, stdout=output_fh)
except subprocess.CalledProcessError:
pass
Perhaps this is the only way to implement your task, because subprocess.run doesn't perform redirections (> or >>) that appear among its arguments.
I am trying to create Linux groups using Python. Below is the function for creating groups.
def createGroups(self):
    adminGroupCommand = "groupadd " + self.projectName + 'admin'
    userGroupCommand = "groupadd " + self.projectName + 'user'
    try:
        os.system(adminGroupCommand)
    except OSError as err:
        print("group already exists: " + adminGroupCommand)
    try:
        os.system(userGroupCommand)
    except OSError as err:
        print("group already exists: " + userGroupCommand)
The function successfully creates the groups. But if I run it again, it gives the output below:
groupadd: group 'akphaadmin' already exists
groupadd: group 'akphauser' already exists
It's not printing my custom message in the except block. How can I force the function to print the custom message when the groups already exist?
The os.system function does not report errors from the command in that way. If you're lucky you'll get the return code (which should be zero on success), but according to the documentation you cannot rely on that.
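If you do stay with os.system, the most you can do is inspect its return value; on Linux it is an encoded wait status rather than the exit code itself. A rough sketch, with projectName standing in for self.projectName from the question:

import os

projectName = "akpha"  # stands in for self.projectName
status = os.system("groupadd " + projectName + "admin")
if status != 0:
    # On POSIX, os.system returns an encoded wait status; WEXITSTATUS extracts the exit code.
    print("groupadd failed, exit code:", os.WEXITSTATUS(status))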
The documentation recommends using the subprocess module instead:
def createGroups(self):
    adminGroupCommand = "groupadd " + self.projectName + 'admin'
    userGroupCommand = "groupadd " + self.projectName + 'user'
    try:
        subprocess.check_call(adminGroupCommand, shell=True)
    except subprocess.CalledProcessError:
        print("group already exists: " + adminGroupCommand)
    try:
        subprocess.check_call(userGroupCommand, shell=True)
    except subprocess.CalledProcessError as err:
        print("group already exists: " + userGroupCommand)
The CalledProcessError has an attribute returncode that you can examine if the groupadd command has different return codes for different causes of failure.
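For instance, groupadd on typical Linux systems exits with status 9 when the group name is not unique. Continuing the snippet above, and assuming that exit code (check your groupadd man page before relying on it), you could write:

try:
    subprocess.check_call(adminGroupCommand, shell=True)
except subprocess.CalledProcessError as err:
    if err.returncode == 9:  # assumed: 9 = "group already exists" on typical Linux groupadd
        print("group already exists: " + adminGroupCommand)
    else:
        raise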
Also note that shell=True means that you rely on the command interpreter to forward the return code (which may not always be the case). Instead you could call the command directly:
adminGroupCommand=["groupadd", self.projectName, 'admin']
...
try:
subprocess.check_call(adminGroupCommand)
...which also has the benefit that if self.projectName contains spaces, asterisks or other characters that the command line interpreter might interpret, they will be sent to the command unmodified (as a single command line argument).
Another benefit of using subprocess is that you can control where the output of the command is directed. If you want to discard stderr or stdout you can redirect them to /dev/null without relying on a shell to do that for you:
subprocess.check_call(adminGroupCommand, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
It is also possible to redirect to subprocess.PIPE, which allows your Python code to read the standard output or standard error from the subprocess.
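A short sketch of that on Python 3.5+ (assumed here), capturing both streams with subprocess.run:

import subprocess

result = subprocess.run(["groupadd", "akphaadmin"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if result.returncode != 0:
    # stderr now holds groupadd's own message, e.g. "groupadd: group 'akphaadmin' already exists"
    print(result.stderr.decode().strip())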