Python executables in the /usr/bin directory are wrapped in sys.exit

Is there a difference between the two code snippets:
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    main()
Vs
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
I see that most Python executables in my Ubuntu /usr/bin (or /usr/local/bin) directory use sys.exit. Doesn't the process stop once the function returns anyway?
Why do people wrap their executable functions inside sys.exit?
Note: This code is taken from the openstack-nova Python client, and this question focuses only on Python's sys.exit, not on OpenStack internals.

sys.exit() is there to pass the right exit code back to the shell. If you want to differentiate between failure modes (for example bad authentication, network issues, or a broken response), exit codes are useful for that.
If you don't use an explicit sys.exit(value), you only have two outcomes: success (exit code 0), or an uncaught exception (exit code 1).
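As a minimal sketch of that pattern (the exit codes and exception types here are made up for illustration, not taken from the nova client):
import sys

# Hypothetical exit codes, chosen only for illustration.
EXIT_OK = 0
EXIT_AUTH_FAILED = 2
EXIT_NETWORK_ERROR = 3

class AuthError(Exception):
    """Stand-in for whatever authentication failure the real client raises."""

class NetworkError(Exception):
    """Stand-in for a connectivity problem."""

def do_work():
    # Placeholder for the real application logic.
    raise AuthError("bad credentials")

def main():
    try:
        do_work()
    except AuthError:
        return EXIT_AUTH_FAILED
    except NetworkError:
        return EXIT_NETWORK_ERROR
    return EXIT_OK

if __name__ == '__main__':
    # The integer returned by main() becomes the process exit status,
    # so a shell script can inspect it with $?.
    sys.exit(main())
Running this and then checking $? in the shell should show 2 (the made-up authentication-failure code).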

There are actually two ways to use sys.exit, as explained in the docs.
Your main can return 0 on success and 1 (or any other number from 2-127) on error; that number becomes your program's exit status. (The number 2 has a special meaning; it implies that the failure was because of invalid arguments. Some argument-parsing libraries will automatically sys.exit(2) if they can't parse the command line. The other numbers 3-127 all mean whatever you want them to.)
Or you can return None on success, and a string (or any object with a useful __str__ method) on failure. A None means exit status 0, anything else gets printed to stderr and gives exit status 1.
It used to be traditional to use the second form to signal failure by doing something like return "Failed to open file" from your main function, and the docs still mention doing that, but it's not very common anymore; it's just as easy, and more flexible, to output what you want and return the number you want.
If you just fall off the end of the script without a sys.exit, that's equivalent to sys.exit(0); if you exit through an exception, that's equivalent to passing the traceback to sys.exit—it prints the traceback to stderr and exits with status 1.
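A minimal sketch of that second form, with a hypothetical main() and file path (not the nova client's actual code):
import sys

def main():
    path = '/tmp/example-input.txt'  # hypothetical input file
    try:
        with open(path) as f:
            f.read()
    except IOError as e:
        # Returning a string: it is printed to stderr and the exit status is 1.
        return 'Failed to open {0}: {1}'.format(path, e)
    # Falling off the end returns None: exit status 0.

if __name__ == '__main__':
    sys.exit(main())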

Related

Python subprocess — how to ignore exit code warnings?

I am trying to display the final results.txt file via the default program. I've tried a bare Popen() without run() and got the same effect. The target file opens (in my case with the see command), but after exiting it I receive:
Warning: program returned non-zero exit code #256
Is there any way to ignore it and prevent my program from displaying such warning? I don't care about it because it's the last thing the program does, so I don't want people to waste their time clicking Enter each time...
Code's below:
from subprocess import run, Popen

if filepath[len(filepath)-1] != '/':
    try:
        results = run(Popen(['start', 'results.txt'], shell=True), stdout=None, shell=True, check=False)
    except TypeError:
        pass
else:
    try:
        results = run(Popen(['open', 'results.txt']), stdout=None, check=False)
    except TypeError:
        pass
    except FileNotFoundError:
        try:
            results = run(Popen(['see', 'results.txt']), stdout=None, check=False)
        except TypeError:
            pass
        except FileNotFoundError:
            pass
Your immediate error is that you are mixing subprocess.run with subprocess.Popen. The correct syntax is
y = subprocess.Popen(['command', 'argument'])
or
x = subprocess.run(['command', 'argument'])
but you are incorrectly combining them into, effectively
x = subprocess.run(subprocess.Popen(['command', 'argument']), shell=True)
where the shell=True is a separate bug in its own right (though it will weirdly work on Windows).
What happens is that Popen runs successfully, but then you try to run run on the result, which of course is not a valid command at all.
You want to prefer subprocess.run() over subprocess.Popen in this scenario; the latter is for hand-wiring your own low-level functionality in scenarios where run doesn't offer enough flexibility (such as when you require the subprocess to run in parallel with your Python program as an independent process).
Your approach seems vaguely flawed for Unix-like systems; you probably want to run xdg-open if it's available, otherwise the value of os.environ["PAGER"] if it's defined, else fall back to less, else try more. Some ancient systems also have a default pager called pg.
You will definitely want to add check=True to actually make sure your command fails properly if the command cannot be found, which is the diametrical opposite of what you appear to be asking. With this keyword parameter, Python checks whether the command worked, and will raise an exception if not. (In its absence, failures will be silently ignored, in general.) You should never catch every possible exception; instead, trap just the one you really know how to handle.
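A sketch of what that could look like using subprocess.run throughout; the fallback order follows the suggestion above, but the details (platform check, viewer list, file name) are assumptions rather than a tested cross-platform recipe:
import os
import shutil
import subprocess
import sys

def show_results(path='results.txt'):
    if sys.platform.startswith('win'):
        # 'start' is a cmd.exe built-in, so it needs shell=True on Windows.
        subprocess.run('start "" "{0}"'.format(path), shell=True, check=True)
        return
    # On Unix-like systems: xdg-open, then $PAGER, then less, then more.
    for candidate in ['xdg-open', os.environ.get('PAGER'), 'less', 'more']:
        if candidate and shutil.which(candidate):
            subprocess.run([candidate, path], check=True)
            return
    raise RuntimeError('no viewer found for {0}'.format(path))

show_results()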
Okay, I've achieved my goal with a different approach. I didn't need to handle such an exception; I did it without the subprocess module.
Question closed, here's the final code (it looks even better):
from os import system
from platform import system as sysname

if sysname() == 'Windows':
    system('start results.txt')
elif sysname() == 'Linux':
    system('see results.txt')
elif sysname() == 'Darwin':
    system('open results.txt')
else:
    pass

exit function not working

I am trying to exit the program if the value is not what I want, but it doesn't seem to do anything. What is the problem? I think it stops the program but does not print "Invalid Subnet Mask".
from sys import exit

def maskvalidation(mask):
    string = mask.split(".")
    toInt = [int(i) for i in string]
    binary = [bin(x) for x in toInt]
    purebin = []
    for x in binary:
        purebin.append(x[2:])
    if purebin[0] == "11111111":
        print("good")
    else:
        exit("Invalid Subnet Mask")

maskvalidation("251.0.0.0")
Here is the help for sys.exit:
exit(...)
    exit([status])

    Exit the interpreter by raising SystemExit(status).
    If the status is omitted or None, it defaults to zero (i.e., success).
    If the status is an integer, it will be used as the system exit status.
    If it is another kind of object, it will be printed and the system
    exit status will be one (i.e., failure).
So it is being used correctly in the question.
Furthermore, your sample program works fine for me on Linux under Python 2.7 and Python 3.4.
It does, however, send the output to stderr rather than stdout, so perhaps that is the problem you are seeing.
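A minimal sketch to see that for yourself (the wrapper around SystemExit is only for demonstration; normally you would just let the exception propagate):
import sys

def maskvalidation_demo():
    # Reduced to the failing branch of the question's function.
    sys.exit("Invalid Subnet Mask")

try:
    maskvalidation_demo()
except SystemExit as e:
    # sys.exit() raises SystemExit; a non-integer argument becomes its code
    # and, if the exception is left uncaught, that code is printed to stderr
    # and the process exits with status 1.
    print("SystemExit code: {0!r}".format(e.code))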

Python exit commands - why so many and when should each be used?

It seems that Python supports many different commands to stop script execution. The choices I've found are: quit(), exit(), sys.exit(), os._exit()
Have I missed any?
What's the difference between them? When would you use each?
Let me give some information on them:
quit() simply raises the SystemExit exception.
Furthermore, if you print it, it will give a message:
>>> print (quit)
Use quit() or Ctrl-Z plus Return to exit
>>>
This functionality was included to help people who do not know Python. After all, one of the most likely things a newbie will try to exit Python is typing in quit.
Nevertheless, quit should not be used in production code. This is because it only works if the site module is loaded. Instead, this function should only be used in the interpreter.
exit() is an alias for quit (or vice-versa). They exist together simply to make Python more user-friendly.
Furthermore, it too gives a message when printed:
>>> print (exit)
Use exit() or Ctrl-Z plus Return to exit
>>>
However, like quit, exit is considered bad to use in production code and should be reserved for use in the interpreter. This is because it too relies on the site module.
sys.exit() also raises the SystemExit exception. This means that it is the same as quit and exit in that respect.
Unlike those two however, sys.exit is considered good to use in production code. This is because the sys module will always be there.
os._exit() exits the program without calling cleanup handlers, flushing stdio buffers, etc. Thus, it is not a standard way to exit and should only be used in special cases. The most common of these is in the child process(es) created by os.fork.
Note that, of the four methods given, only this one is unique in what it does.
Summed up, all four methods exit the program. However, the first two are considered bad to use in production code and the last is a non-standard, dirty way that is only used in special scenarios. So, if you want to exit a program normally, go with the third method: sys.exit.
Or, even better in my opinion, you can just do directly what sys.exit does behind the scenes and run:
raise SystemExit
This way, you do not need to import sys first.
However, this choice is simply one on style and is purely up to you.
The functions* quit(), exit(), and sys.exit() function in the same way: they raise the SystemExit exception. So there is no real difference, except that sys.exit() is always available but exit() and quit() are only available if the site module is imported (docs).
The os._exit() function is special, it exits immediately without calling any cleanup functions (it doesn't flush buffers, for example). This is designed for highly specialized use cases... basically, only in the child after an os.fork() call.
Conclusion
Use exit() or quit() in the REPL.
Use sys.exit() in scripts, or raise SystemExit() if you prefer.
Use os._exit() for child processes to exit after a call to os.fork().
All of these can be called without arguments, or you can specify the exit status, e.g., exit(1) or raise SystemExit(1) to exit with status 1. Note that portable programs are limited to exit status codes in the range 0-255; if you raise SystemExit(256), on many systems this will get truncated and your process will actually exit with status 0.
Footnotes
* Actually, quit() and exit() are callable instance objects, but I think it's okay to call them functions.
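For the os.fork() case, here is a minimal Unix-only sketch; the work done in the child is made up for illustration:
import os
import sys

pid = os.fork()  # Unix only; returns 0 in the child and the child's PID in the parent
if pid == 0:
    # Child process: do some work, then leave immediately with os._exit().
    # os._exit() skips atexit handlers and does not flush stdio buffers,
    # so the child does not rerun the parent's cleanup code.
    os.write(1, b"child done\n")
    os._exit(0)
else:
    # Parent process: wait for the child and exit with a matching status.
    _, status = os.waitpid(pid, 0)
    sys.exit(0 if os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0 else 1)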
Different Means of Exiting
os._exit():
Exit the process without calling the cleanup handlers.
exit(0):
a clean exit without any errors / problems.
exit(1):
There was some issue / error / problem and that is why the program is exiting.
sys.exit():
Raises SystemExit; the standard way for a script to exit, optionally with an explicit status code.
quit():
Also raises SystemExit; intended for interactive use in the interpreter.
Summary
Basically they all do the same thing; which one is appropriate depends on context.
I don't think you left anything out.
Use exit() or quit() when working interactively, and sys.exit() in scripts; os._exit() is mainly for special cases such as forked child processes.
sys.exit is the canonical way to exit.
Internally sys.exit just raises SystemExit. However, calling sys.exit is more idiomatic than raising SystemExit directly.
os._exit is a low-level exit that terminates the process immediately without calling any cleanup handlers.
quit and exit exist only to provide an easy way out of the Python prompt. This is for new users or users who accidentally entered the Python prompt, and don't want to know the right syntax. They are likely to try typing exit or quit. While this will not exit the interpreter, it at least issues a message that tells them a way out:
>>> exit
Use exit() or Ctrl-D (i.e. EOF) to exit
>>> exit()
$
This is essentially just a hack that utilizes the fact that the interpreter prints the __repr__ of any expression that you enter at the prompt.

How to silence "sys.excepthook is missing" error?

NB: I have not attempted to reproduce the problem described below under Windows, or with versions of Python other than 2.7.3.
The most reliable way to elicit the problem in question is to pipe the output of the following test script through : (under bash):
try:
    for n in range(20):
        print n
except:
    pass
I.e.:
% python testscript.py | :
close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr
My question is:
How can I modify the test script above to avoid the error message when the script is run as shown (under Unix/bash)?
(As the test script shows, the error cannot be trapped with a try-except.)
The example above is, admittedly, highly artificial, but I'm running into the same problem sometimes when the output of a script of mine is piped through some 3rd party software.
The error message is certainly harmless, but it is disconcerting to end-users, so I would like to silence it.
EDIT: The following script, which differs from the original one above only in that it redefines sys.excepthook, behaves exactly like the one given above.
import sys
STDERR = sys.stderr
def excepthook(*args):
    print >> STDERR, 'caught'
    print >> STDERR, args
sys.excepthook = excepthook
try:
    for n in range(20):
        print n
except:
    pass
How can I modify the test script above to avoid the error message when the script is run as shown (under Unix/bash)?
You will need to prevent the script from writing anything to standard output. That means removing any print statements and any use of sys.stdout.write, as well as any code that calls those.
The reason this is happening is that you're piping a nonzero amount of output from your Python script to something which never reads from standard input. This is not unique to the : command; you can get the same result by piping to any command which doesn't read standard input, such as
python testscript.py | cd .
Or for a simpler example, consider a script printer.py containing nothing more than
print 'abcde'
Then
python printer.py | python printer.py
will produce the same error.
When you pipe the output of one program into another, the output produced by the writing program gets backed up in a buffer and waits for the reading program to request that data. If the reading program goes away without consuming it, the attempt to flush and close the writing file object fails with an error. This is the root cause of the messages you're seeing.
The specific code that triggers the error is in the C language implementation of Python, which explains why you can't catch it with a try/except block: it runs after the contents of your script have finished processing. Basically, while Python is shutting itself down, it attempts to close stdout, but that fails because there is still buffered output waiting to be read. So Python tries to report this error as it would normally, but sys.excepthook has already been removed as part of the finalization procedure, so that fails. Python then tries to print a message to sys.stderr, but that has already been deallocated, so again it fails. The reason you see the messages on the screen is that the Python code contains a contingency fprintf to write some output to the file pointer directly, even if Python's output object no longer exists.
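One practical way to sidestep the whole situation, if it suits your program, is to restore the default SIGPIPE behaviour so the process is silently terminated as soon as the reader goes away. This is only a sketch, Unix-only, and it changes how your entire program reacts to broken pipes:
import signal

# Python normally ignores SIGPIPE and turns a broken pipe into an IOError on
# write/flush. With the default disposition restored, the process is simply
# killed when the reading end of the pipe is closed, so none of the
# "close failed ... / sys.excepthook is missing / lost sys.stderr" messages appear.
signal.signal(signal.SIGPIPE, signal.SIG_DFL)

for n in range(20):
    print(n)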
Technical details
For those interested in the details of this procedure, let's take a look at the Python interpreter's shutdown sequence, which is implemented in the Py_Finalize function of pythonrun.c.
After invoking exit hooks and shutting down threads, the finalization code calls PyImport_Cleanup to finalize and deallocate all imported modules. The next-to-last task performed by this function is removing the sys module, which mainly consists of calling _PyModule_Clear to clear all the entries in the module's dictionary - including, in particular, the standard stream objects (the Python objects) such as stdout and stderr.
When a value is removed from a dictionary or replaced by a new value, its reference count is decremented using the Py_DECREF macro. Objects whose reference count reaches zero become eligible for deallocation. Since the sys module holds the last remaining references to the standard stream objects, when those references are unset by _PyModule_Clear, they are then ready to be deallocated.1
Deallocation of a Python file object is accomplished by the file_dealloc function in fileobject.c. This first invokes the Python file object's close method using the aptly-named close_the_file function:
ret = close_the_file(f);
For a standard file object, close_the_file(f) delegates to the C fclose function, which sets an error condition if there is still data to be written to the file pointer. file_dealloc then checks for that error condition and prints the first message you see:
if (!ret) {
    PySys_WriteStderr("close failed in file object destructor:\n");
    PyErr_Print();
}
else {
    Py_DECREF(ret);
}
After printing that message, Python then attempts to display the exception using PyErr_Print. That delegates to PyErr_PrintEx, and as part of its functionality, PyErr_PrintEx attempts to access the Python exception printer from sys.excepthook.
hook = PySys_GetObject("excepthook");
This would be fine if done in the normal course of a Python program, but in this situation, sys.excepthook has already been cleared.2 Python checks for this error condition and prints the second message as a notification.
if (hook && hook != Py_None) {
    ...
} else {
    PySys_WriteStderr("sys.excepthook is missing\n");
    PyErr_Display(exception, v, tb);
}
After notifying us about the missing excepthook, Python then falls back to printing the exception info using PyErr_Display, which is the default method for displaying a stack trace. The very first thing this function does is try to access sys.stderr.
PyObject *f = PySys_GetObject("stderr");
In this case, that doesn't work because sys.stderr has already been cleared and is inaccessible.3 So the code invokes fprintf directly to send the third message to the C standard error stream.
if (f == NULL || f == Py_None)
    fprintf(stderr, "lost sys.stderr\n");
Interestingly, the behavior is a little different in Python 3.4+ because the finalization procedure now explicitly flushes the standard output and error streams before builtin modules are cleared. This way, if you have data waiting to be written, you get an error that explicitly signals that condition, rather than an "accidental" failure in the normal finalization procedure. Also, if you run
python printer.py | python printer.py
using Python 3.4 (after putting parentheses on the print statement of course), you don't get any error at all. I suppose the second invocation of Python may be consuming standard input for some reason, but that's a whole separate issue.
1 Actually, that's a lie. Python's import mechanism caches a copy of each imported module's dictionary, which is not released until _PyImport_Fini runs, later in the implementation of Py_Finalize, and that's when the last references to the standard stream objects disappear. Once the reference count reaches zero, Py_DECREF deallocates the objects immediately. But all that matters for the main answer is that the references are removed from the sys module's dictionary and then deallocated sometime later.
2 Again, this is because the sys module's dictionary is cleared completely before anything is really deallocated, thanks to the attribute caching mechanism. You can run Python with the -vv option to see all the module's attributes being unset before you get the error message about closing the file pointer.
3 This particular piece of behavior is the only part that doesn't make sense unless you know about the attribute caching mechanism mentioned in previous footnotes.
I ran into this sort of issue myself today and went looking for an answer. I think a simple workaround here is to ensure you flush stdio first, so python blocks instead of failing during script shutdown. For example:
--- a/testscript.py
+++ b/testscript.py
@@ -9,5 +9,6 @@ sys.excepthook = excepthook
 try:
     for n in range(20):
         print n
+        sys.stdout.flush()
 except:
     pass
Then with this script nothing happens, as the exception (IOError: [Errno 32] Broken pipe) is suppressed by the try...except.
$ python testscript.py | :
$
Your program throws an exception that cannot be caught with a try/except block. To catch it, override the sys.excepthook function:
import sys
sys.excepthook = lambda *args: None
From the documentation:
sys.excepthook(type, value, traceback)
When an exception is raised and uncaught, the interpreter calls sys.excepthook with three arguments, the exception class, exception instance, and a traceback object. In an interactive session this happens just before control is returned to the prompt; in a Python program this happens just before the program exits. The handling of such top-level exceptions can be customized by assigning another three-argument function to sys.excepthook.
Illustrative example:
import sys
import logging
import traceback

def log_uncaught_exceptions(exception_type, exception, tb):
    logging.critical(''.join(traceback.format_tb(tb)))
    logging.critical('{0}: {1}'.format(exception_type, exception))

sys.excepthook = log_uncaught_exceptions
I realize that this is an old question, but I found it in a Google search for the error. In my case it was a coding error. One of my last statements was:
print "Good Bye"
The solution was simply fixing the syntax to:
print ("Good Bye")
[Raspberry Pi Zero, Python 2.7.9]

How to make the program run again after unexpected exit in Python?

I'm writing an IRC bot in Python; given its alpha state, it will likely hit unexpected errors and exit.
What techniques can I use to make the program run again?
You can use sys.exit() to signal that the program exited abnormally (conventionally, 1 is returned in case of error).
Your Python script could look something like this:
import sys

def main():
    # ...
    pass

if __name__ == '__main__':
    try:
        main()
    except Exception as e:
        print >> sys.stderr, e
        sys.exit(1)
    else:
        sys.exit()
You could call again main() in case of error, but the program might not be in a state where it can work correctly again.
It may be safer to launch the program in a new process instead.
So you could write a script which invokes the Python script, gets its return value when it finishes, and relaunches it if the return value is different from 0 (which is what sys.exit() uses as return value by default).
This may look something like this:
import subprocess

command = 'thescript'
args = ['arg1', 'arg2']
while True:
    ret_code = subprocess.call([command] + args)
    if ret_code == 0:
        break
You can create a wrapper using subprocess (http://docs.python.org/library/subprocess.html) which will spawn your application as a child process and track its execution.
The easiest way is to catch errors and, when you do catch them, close the old instance of the program and start a new one, as sketched below.
Note that this will not always work (for example, in cases where the program stops working without throwing an error).
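A minimal in-process sketch of that idea, assuming a hypothetical run_bot() entry point (made up here so the example terminates); restarting inside the same process only makes sense if run_bot() leaves no broken global state behind:
import time
import traceback

crashes = []

def run_bot():
    # Placeholder for the real bot loop; crashes twice, then returns cleanly.
    if len(crashes) < 2:
        raise RuntimeError("simulated crash")

while True:
    try:
        run_bot()
        break                  # clean return from the bot: stop restarting
    except KeyboardInterrupt:
        break                  # let the operator stop it with Ctrl-C
    except Exception:
        crashes.append(time.time())
        traceback.print_exc()  # log the crash...
        time.sleep(1)          # ...then wait briefly and start over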
