stdout Won't Flush After Exception - python

I have the following Python code:
import sys
import traceback

fifo_in = sys.argv[1]

while 1:
    try:
        exec open(fifo_in)
    except:
        traceback.print_exc()
        sys.stdout.flush()
The first argument is a named pipe created by mkfifo. So the following prints '1':
mkfifo input
python script.py input
... in a separate terminal ...
echo "print 1" > input
Great, so far so good. But when I do something like echo "foobar" > input, the script only prints part of the traceback. It then pauses until I send it another command, and the output gets all mixed up:
echo "asdf" > input # pause here and check output
echo "print 1" > input
... in output terminal ...
Traceback (most recent call last):
File "test.py", line 8, in <module>
exec open(fifo_in)
File "in", line 1, in <module>
...PAUSES HERE...
print 1
NameError: name 'asdf' is not defined
What's going on? How can I get stdout to flush fully, and why is it out of order? I've tried using traceback.format_exc instead and printing the result by hand, but I get the same thing. Calling sys.stderr.flush does not fix anything either. I've also tried putting a sleep in the loop to see if that helps, but nothing changes.
UPDATE
One interesting piece of behavior I am seeing: if I ctrl+c it, normally the program keeps running - the try/except just catches the KeyboardInterrupt and it keeps looping. However, if I ctrl+c it after sending it an error, the program exits and I get the following. It's almost like it pauses inside print_exc:
^CTraceback (most recent call last):
File "test.py", line 10, in <module>
traceback.print_exc()
File "/usr/lib/python2.7/traceback.py", line 232, in print_exc
print_exception(etype, value, tb, limit, file)
File "/usr/lib/python2.7/traceback.py", line 125, in print_exception
print_tb(tb, limit, file)
File "/usr/lib/python2.7/traceback.py", line 69, in print_tb
line = linecache.getline(filename, lineno, f.f_globals)
File "/usr/lib/python2.7/linecache.py", line 14, in getline
lines = getlines(filename, module_globals)
File "/usr/lib/python2.7/linecache.py", line 40, in getlines
return updatecache(filename, module_globals)
File "/usr/lib/python2.7/linecache.py", line 132, in updatecache
with open(fullname, 'rU') as fp:
KeyboardInterrupt

I think you want to look at the stdlib code module
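For what that might look like, here is a minimal sketch of my own (not from the question or the answer; it assumes Python 2 and the same FIFO argument): code.InteractiveInterpreter runs each line and prints any traceback to stderr itself, so there is no exec/print_exc bookkeeping to get wrong.

import sys
import code

fifo_in = sys.argv[1]
interp = code.InteractiveInterpreter()

while True:
    with open(fifo_in) as fp:         # blocks until a writer connects
        for line in fp:
            interp.runsource(line)    # any traceback goes to stderr automatically
            sys.stderr.flush()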
This behavior comes from using exec. exec evaluates Python code, so "print 1" executes the Python statement print 1, whereas "asdf" raises a NameError because that name does not exist in the context. exec open(fifo_in) is unusual; the Python 2 exec statement does accept an open file object, but reopening the pipe on every pass is fragile. The bare while loop can also end up eating 100% CPU.
UPDATE: fix sleep duration
Here is a modified version of your code to try.
import sys
import time
import traceback

fifo_in = sys.argv[1]

try:
    fp = open(fifo_in)  # will block until the pipe is opened for writing
except IOError:
    traceback.print_exc()
    sys.exit(1)  # fp was never opened, nothing to read
except OSError:
    traceback.print_exc()
    sys.exit(1)

data = None
while True:
    try:
        data = fp.read()
        try:
            exec data
        except:
            traceback.print_exc()
        finally:
            time.sleep(0.1)  # avoid spinning at 100% CPU once the writer closes
    except KeyboardInterrupt:
        break
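A variant sketch of my own (same Python 2 / FIFO assumptions): reopen the FIFO on each pass so that repeated echo ... > input commands keep working, and block in open() rather than relying on a sleep.

import sys
import traceback

fifo_in = sys.argv[1]

while True:
    try:
        with open(fifo_in) as fp:   # blocks here until a writer connects
            data = fp.read()        # returns once the writer closes its end
        try:
            exec data
        except Exception:
            traceback.print_exc()
            sys.stderr.flush()
    except KeyboardInterrupt:
        break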

Related

how to catch subprocess call exception in python?

I have the following Python code:
try:
    print("Running code " + str(sub.id))
    r = subprocess.call("node codes.js > outputs.txt", shell=True)
except:
    print("Error running submission code id " + str(sub.id))
The code runs a node command using subprocess.call, and the node command runs the codes.js file. Sometimes there is an error in the JavaScript, for example a document. call, and then node throws an error.
With try and except, the error thrown when the node command fails is not being caught.
The error thrown is as follows. There is a document. line in the code that node cannot resolve, so it throws:
/home/kofhearts/homework/codes.js:5
document.getElementById("outputalert").innerHTML = "Hacked";
^
ReferenceError: document is not defined
at solve (/home/kofhearts/homework/codes.js:5:3)
at Object.<anonymous> (/home/kofhearts/homework/codes.js:13:28)
at Module._compile (internal/modules/cjs/loader.js:1068:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)
at Module.load (internal/modules/cjs/loader.js:933:32)
at Function.Module._load (internal/modules/cjs/loader.js:774:14)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
at internal/main/run_main_module.js:17:47
Traceback (most recent call last):
File "manage.py", line 22, in <module>
main()
File "manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/home/kofhearts/.virtualenvs/myenv/lib/python3.7/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/home/kofhearts/.virtualenvs/myenv/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/kofhearts/.virtualenvs/myenv/lib/python3.7/site-packages/django/core/management/base.py", line 330, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/kofhearts/.virtualenvs/myenv/lib/python3.7/site-packages/django/core/management/base.py", line 371, in execute
output = self.handle(*args, **options)
File "/home/kofhearts/homework/assignments/management/commands/police.py", line 73, in handle
if isCorrect(data.strip()[:-1], sub.question.outputs, sub.question, sub.code):
File "/home/kofhearts/homework/assignments/views.py", line 566, in isCorrect
givenans = [json.loads(e.strip()) for e in received.split('|')]
File "/home/kofhearts/homework/assignments/views.py",
How is it possible to catch the error when subprocess.call fails? Thanks for the help!
How is it possible to catch the error when subprocess.call fails?
The 'standard' way to do this is to use subprocess.run:
from subprocess import run, CalledProcessError

cmd = ["node", "codes.js"]
try:
    r = run(cmd, check=True, capture_output=True, encoding="utf8")
    with open("outputs.txt", "w") as f:
        f.write(r.stdout)
except CalledProcessError as e:
    print("oh no!")
    print(e.stderr)
Note that I have dropped the redirect and done it in Python. You might be able to redirect with shell=True, but that's a whole security hole you don't need just for sending stdout to a file.
check=True ensures it will raise CalledProcessError on a non-zero exit status.
capture_output=True is handy, because stderr and stdout are attached to the exception, allowing you to retrieve them there. Thanks to @OlvinRoght for pointing that out.
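If you would rather stream stdout straight into the output file instead of capturing it in memory, run() also accepts an open file object for stdout; a sketch of that alternative (my addition, not part of the original answer):

from subprocess import PIPE, CalledProcessError, run

cmd = ["node", "codes.js"]
try:
    with open("outputs.txt", "w") as f:
        # stdout streams into the file; stderr is still captured for the exception
        run(cmd, check=True, stdout=f, stderr=PIPE, encoding="utf8")
except CalledProcessError as e:
    print("oh no!")
    print(e.stderr)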
Lastly, it is possible to check manually:
r = run(cmd, capture_output=True, encoding="utf8")
if r.returncode:
    print("Failed", r.stderr, r.stdout)
else:
    print("Success", r.stdout)
I would generally avoid this pattern, as:
- try is free for success (and we expect this to succeed)
- catching exceptions is how we normally handle problems, so it's the Right Way (TM)
but YMMV.

Does Python's traceback.print_exc() print to stdout or stderr?

I've read some of the Python docs, but I can't find where the print_exc function prints. So I searched Stack Overflow, and one answer says "print_exc() prints formatted exception to stdout". Link
I'm confused: in my opinion, that function should print to stderr, because it's an ERROR! Which is right?
It prints to stderr, as can be seen from the following test:
$ cat test.py
try:
    raise IOError()
except:
    import traceback
    traceback.print_exc()
$ python test.py
Traceback (most recent call last):
File "test.py", line 2, in <module>
raise IOError()
IOError
$ python test.py > /dev/null
Traceback (most recent call last):
File "test.py", line 2, in <module>
raise IOError()
IOError
$ python test.py 2> /dev/null
$
BTW you can also control it:
import traceback
import sys

try:
    raise Exception
except Exception as E:
    traceback.print_exc(file=sys.stderr)
The Python documentation states: "If file is omitted or None, the output goes to sys.stderr; otherwise it should be an open file or file-like object to receive the output."
This means you can control how and where the output is printed:
with open(outFile, 'w') as fp:
    traceback.print_exc(file=fp)
The above example will print the traceback to the file outFile.
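Likewise, if you actually want the traceback on stdout (for example, so it survives 2> /dev/null), you can pass sys.stdout explicitly; a small sketch of my own:

import sys
import traceback

try:
    raise ValueError("demo")
except ValueError:
    traceback.print_exc(file=sys.stdout)   # traceback goes to stdout, not stderr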

Why does `pdb` state something unrelated and misleading?

My Python script reports where it goes wrong ("line 122" in myscript.py) when I run it in a shell:
$ toc2others.py -i toc -p pg
Traceback (most recent call last):
File "~/myscript.py", line 122, in <module>
p = re.match(keywords[index+1][0], inlines[n+1], re.IGNORECASE)
IndexError: list index out of range
It is because keywords[index+1] goes out of the index range of keywords.
When I run it under pdb, however, it doesn't report where it goes wrong, but says something unrelated (error is reported to take place at import re).
$ pdb ~/myscript.py -i toc -p pg
> /myscript.py(3)<module>()
-> import re
(Pdb) c
Traceback (most recent call last):
File "/usr/lib/python2.7/pdb.py", line 1314, in main
pdb._runscript(mainpyfile)
File "/usr/lib/python2.7/pdb.py", line 1233, in _runscript
self.run(statement)
File "/usr/lib/python2.7/bdb.py", line 387, in run
exec cmd in globals, locals
File "<string>", line 1, in <module>
File "~/myscript.py", line 3, in <module>
import re
IndexError: list index out of range
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
I wonder why pdb states something unrelated and misleading?
Can pdb state where it actually goes wrong?
Thanks.
It's a bug, actually.
See issues:
http://bugs.python.org/issue16482
http://bugs.python.org/issue17277
This only happens if the exception is thrown at module level of the executed file, i.e. not inside any function. So if you just put your code in a main() function, that will fix it (a sketch of that layout follows below). Or you can use ipython, which is much more fun for debugging:
ipython ~/myscript.py --pdb -- -i toc -p pg
This will run the script and only stop if there's an error, and it also does not suffer from the above bug.
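As for the main() workaround, a minimal sketch (hypothetical layout and stand-in logic, not the asker's actual script): keep the module level trivial so pdb's post-mortem lands in the frame where the error really occurs.

import re
import sys

def main():
    # ... the original script body goes here ...
    line = "chapter one"
    m = re.match(r"chapter", line, re.IGNORECASE)   # hypothetical stand-in logic
    return 0 if m else 1

if __name__ == "__main__":
    sys.exit(main())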

Popen subprocess exception

Sorry if this is a simple question and has been answered before, but I couldn't find it anywhere.
I'm trying to listen for UDP packets and, if they are certain packets, run different batch scripts. I have this working correctly, but I have found that if the Popen command doesn't find the file, it raises an exception and the script stops running. Ideally, I want it to print a message and then continue listening for other packets and acting on them, just giving us a message for debugging. Here is the code I have used; how could I do this?
if rxdata == "Camera 1":
    from subprocess import Popen
    try:
        p = Popen("Camera 1.bat", cwd=r"C:\xxx")
        stdout, stderr = p.communicate()
    except FileNotFoundError():
        print('Camera 1.bat not found')
elif rxdata == "Camera 2":
    from subprocess import Popen
    p = Popen("Camera 2.bat", cwd=r"C:\xxx")
    stdout, stderr = p.communicate()
In both examples, I receive the following and the script closes.
Traceback (most recent call last):
File "C:\UDP Listener.py", line 42, in <module>
p = Popen("Camera 1.bat", cwd=r"C:\xxx")
File "C:\Python34\lib\subprocess.py", line 858, in __init__
restore_signals, start_new_session)
File "C:\Python34\lib\subprocess.py", line 1111, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
Thanks in advance
Matt
You must not put parentheses after FileNotFoundError (don't call it, just name it). A test (with Python 2):
try:
    b = a
except NameError():
    print "NameError caught."
Execution:
Traceback (most recent call last):
File "test.py", line 2, in <module>
b = a
NameError: name 'a' is not defined
For instance, OSError is a type, whereas OSError() creates an instance of this type:
>>> type(OSError)
<type 'type'>
>>> type(OSError())
<type 'exceptions.OSError'>
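Applied to the question's snippet, the fix is just to drop the parentheses; a corrected sketch (Python 3, reusing the asker's path):

from subprocess import Popen

try:
    p = Popen("Camera 1.bat", cwd=r"C:\xxx")
    stdout, stderr = p.communicate()
except FileNotFoundError:              # name the class, don't call it
    print('Camera 1.bat not found')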
Strangely, after reinstalling Python on my PC everything now works correctly. I'm not sure what went wrong, but when I run the code now and an exception occurs, the message prints as expected.
Thanks for your help!

Python: Suppressing errors from going to commandline?

When I try to execute a Python program from the command line, it gives the following error. These errors do not cause any problem to my output; I just don't want them displayed on the command line.
Traceback (most recent call last):
File "test.py", line 88, in <module>
p.feed(ht)
File "/usr/lib/python2.5/HTMLParser.py", line 108, in feed
self.goahead(0)
File "/usr/lib/python2.5/HTMLParser.py", line 148, in goahead
k = self.parse_starttag(i)
File "/usr/lib/python2.5/HTMLParser.py", line 226, in parse_starttag
endpos = self.check_for_whole_start_tag(i)
File "/usr/lib/python2.5/HTMLParser.py", line 301, in check_for_whole_start_tag
self.error("malformed start tag")
File "/usr/lib/python2.5/HTMLParser.py", line 115, in error
raise HTMLParseError(message, self.getpos())
HTMLParser.HTMLParseError: malformed start tag, at line 319, column 25
How could I suppress the errors?
Doesn't catching HTMLParseError work for you? If test.py is the name of your Python file, the exception is propagated up to there, so it should.
Here's an example of how to suppress such an error. You might need to tweak it a bit to match your code.
from HTMLParser import HTMLParseError  # Python 2 stdlib

try:
    p.feed(ht)  # put your parsing code here
except HTMLParseError:
    pass
You can also just suppress the error message by redirecting stderr to null, like Ignacio suggested. To do it in code, you can just write the following:
import sys

class DevNull:
    def write(self, msg):
        pass

sys.stderr = DevNull()
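A quick usage sketch of my own: after the assignment above, anything written to sys.stderr is silently dropped, while stdout is unaffected.

sys.stderr.write("this message is discarded\n")   # produces no output
print("stdout still works")                        # normal output is unaffected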
However, this is probably not what you want, because from your error it looks like the script execution is stopped, and you probably want it to continue.
Redirect stderr to /dev/null.
python somescript.py 2> /dev/null
In Python 3, @Boaz Yaniv's answer can be simplified to:
sys.stderr = object
Since every class in Python 3 inherits from object, technically this works; at least I've tried it myself in a Python 3.6.5 environment.
Here is a more readable, succinct solution for handling errors that are safe to ignore, without having to resort to the typical try/except/pass code block.
from contextlib import suppress

with suppress(IgnorableErrorA, IgnorableErrorB):
    do_something()
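A concrete, runnable illustration of the same idea (my own example, Python 3.4+, using a stand-in exception):

from contextlib import suppress

with suppress(ZeroDivisionError):
    result = 1 / 0            # raises, but is silently swallowed
print("execution continues here")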
