Python stderr for imported class

I am trying to pipe the error messages to a file in my python script. I have it working fine for the most part. The problem comes when I import another file, and there is an error in that file. Here is an example (logger.py):
import time
import sys
import test

class Logger(object):
    def __init__(self, outputType):
        self.outputType = outputType
        self.log = open("serverLog.log", "w")

    def write(self, message):
        self.log = open("serverLog.log", "a")
        self.log.write(message)
        self.log.close()

sys.stdout = Logger("stdout")
sys.stderr = Logger("stderr")

j = 0
while 3 < 4:
    print "Sdf"
    j = j + 1
    if j > 4:
        print k
    time.sleep(1)
The above file works fine for logging the output and errors (when test.py is not imported).
Here is the second file that I am importing (test.py) with an intentional error:
import time
time.sleep(1)
print x
When I run logger.py, all of the output and errors go to serverLog.log, except for the error caused by the import of test.py.
I am wondering if it is possible to pipe the error messages from test.py to serverLog.log without adding anything to test.py.

You should import any other modules after:
sys.stdout = Logger("stdout")
sys.stderr = Logger("stderr")
And the result will be:
cat serverLog.log
Traceback (most recent call last):
  File "/root/untitled/x.py", line 16, in <module>
    import test1
  File "/root/untitled/test1.py", line 3, in <module>
    print x
NameError: name 'x' is not defined
My code:
import time
import sys

class Logger(object):
    def __init__(self, outputType):
        self.outputType = outputType
        self.log = open("serverLog.log", "w")

    def write(self, message):
        self.log = open("serverLog.log", "a")
        self.log.write(message)
        self.log.close()

sys.stdout = Logger("stdout")
sys.stderr = Logger("stderr")

import test1

j = 0
while 3 < 4:
    print "Sdf"
    j = j + 1
    if j > 4:
        print k
    time.sleep(1)

The problem here is that you seem to think import creates a new thread; it does not. Here are the lines of code that are being executed, in order:
import time
<all the code in time>
import sys
<all the code in sys>
import test
    import time       # Now we're in test.py
    time.sleep(1)     # We're still in the main thread!
    print x
The Python interpreter then raises the error before any of your Logger code has executed. The solution, as Valeriy's answer shows, is to put the Logger code before you import test.
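A minimal sketch of that point (hypothetical file names): top-level module code runs the moment the import statement executes, in the same thread, so any exception it raises propagates before the lines after the import are ever reached.

# a.py (hypothetical)
print "a.py top-level runs at import time"
raise ValueError("boom")

# main.py
import a          # prints, then raises, in this same thread
print "never reached"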


Redirect all stdout/stderr globally to logger

Background
I have a very large python application that launches command-line utilities to get pieces of data it needs. I currently just redirect the python launcher script to a log file, which gives me all of the print() output, plus the output of the command-line utilities, i.e.:
python -m launcher.py &> /root/out.log
Problem
I've since implemented a proper logger via logging, which lets me format the logging statements more precisely and limit log file size, among other things. I've swapped out most of my print() statements with calls to my logger. However, I have a problem: none of the output from the command-line applications is appearing in my log. It instead gets dumped to the console. Also, the programs aren't all launched the same way: some are launched via popen(), some by exec(), some by os.system(), etc.
Question
Is there a way to globally redirect all stdout/stderr text to my logging function, without having to rewrite/modify the code that launches these command-line tools? I tried setting the following, which I found in another question:
sys.stderr.write = lambda s: logger.error(s)
However it fails with "sys.stderr.write is read-only".
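One note on the subprocess part of the question: replacing the sys.stdout/sys.stderr objects only affects writes made from Python code, while programs launched via popen()/os.system() write to file descriptors 1 and 2 directly. A minimal sketch of descriptor-level redirection with os.dup2 (the log path is an assumption carried over from the question), which child processes inherit:

import os
import sys

log = open('/root/out.log', 'ab')           # assumed path from the question
os.dup2(log.fileno(), sys.stdout.fileno())  # point fd 1 at the log file
os.dup2(log.fileno(), sys.stderr.fileno())  # point fd 2 at the log file

os.system('echo captured from a child process')  # lands in the log too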
While this is not a full answer, it may show you a redirect to adapt to your particular case. This is how I did it a while back, although I cannot remember why I did it this way or what limitation I was trying to circumvent. The following redirects stdout and stderr to a class for print() statements; the class subsequently writes both to the screen and to a file:
import os
import sys
import datetime

class DebugLogger():
    def __init__(self, filename):
        timestamp = datetime.datetime.strftime(datetime.datetime.utcnow(),
                                               '%Y-%m-%d-%H-%M-%S-%f')
        # build up full path to filename
        logfile = os.path.join(os.path.dirname(sys.executable),
                               filename + timestamp)
        self.terminal = sys.stdout
        self.log = open(logfile, 'a')

    def write(self, message):
        timestamp = datetime.datetime.strftime(datetime.datetime.utcnow(),
                                               ' %Y-%m-%d-%H:%M:%S.%f')
        # write to screen
        self.terminal.write(message)
        # write to file
        self.log.write(timestamp + ' - ' + message)
        self.flush()

    def flush(self):
        self.terminal.flush()
        self.log.flush()
        os.fsync(self.log.fileno())

    def close(self):
        self.log.close()

def main(debug=False):
    if debug:
        filename = 'blabla'
        sys.stdout = DebugLogger(filename)
        sys.stderr = sys.stdout

    print('test')

if __name__ == '__main__':
    main(debug=True)
import sys
import io
import logging

logger = logging.getLogger(__name__)  # assumes a configured logger

class MyStream(io.IOBase):
    def write(self, s):
        logger.error(s)

sys.stderr = MyStream()
print('This is an error', file=sys.stderr)
This makes all calls to sys.stderr go to the logger.
The original stream is always available as sys.__stderr__.
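One caveat worth noting: if the logger itself has a StreamHandler writing to sys.stderr, every log call will go back through MyStream.write and recurse. A sketch of binding the handler to the original stream to avoid that:

import logging
import sys

logger = logging.getLogger(__name__)
# attach the handler to the real stderr so logging output does not
# loop back through the replaced sys.stderr
logger.addHandler(logging.StreamHandler(sys.__stderr__))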

Why does this test crash when pdb.set_trace() is called?

Simple unittest below.
If I run it (e.g., python -m unittest module_name) without 'test' as an argument, it passes. If I run it with 'test' as an argument, I get "TypeError: bad argument type for built-in operation". Why?
from io import StringIO
import sys
from unittest import TestCase

class TestSimple(TestCase):
    def test_simple(self):
        old_stdout = sys.stdout
        buf = StringIO()
        try:
            sys.stdout = buf
            print('hi')
        finally:
            import pdb
            if 'test' in sys.argv:
                pdb.set_trace()
            sys.stdout = old_stdout
contextlib.redirect_stdout version:
from contextlib import redirect_stdout
from io import StringIO
import pdb
import sys
from unittest import TestCase

class TestSimple(TestCase):
    def test_simple(self):
        buf = StringIO()
        with redirect_stdout(buf):
            print('hi')
            pdb.set_trace()
            print('finis')
Thanks in advance.
Edit:
The original program was tested in Python 3.4 in both Debian and Windows 7.
Something similar (using environment flags instead of a command-line argument) appears to hang in Python 2, but pressing c allows it to finish, so I'm guessing it might just be that pdb's UI has been redirected. But the Python 3 version has the behavior initially described (crashes), although a colleague who tested on 3.4 on Mac OS saw the "hang" behavior.
pdb writes its prompt through sys.stdout, which the test has replaced with a StringIO, so you need to give pdb the original stdout:
pdb.Pdb(stdout=sys.__stdout__).set_trace()
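Applied to the contextlib.redirect_stdout version of the test, that would look something like this (a sketch):

from contextlib import redirect_stdout
from io import StringIO
import pdb
import sys

buf = StringIO()
with redirect_stdout(buf):
    print('hi')
    # hand the debugger the real stdout so its prompt stays usable
    pdb.Pdb(stdout=sys.__stdout__).set_trace()
    print('finis')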

Is there a way to redirect stderr to file in Jupyter?

There was a redirect_output function in IPython.utils, and there was a %%capture magic function, but these are now gone, and this thread on the topic is now outdated.
I'd like to do something like the following:
from __future__ import print_function
from IPython.utils import io
import sys

with io.redirect_output(stdout=False, stderr="stderr_test.txt"):
    while True:
        print('hello!', file=sys.stderr)
Thoughts? For more context, I am trying to capture the output of some ML functions that run for hours or days, and output a line every 5-10 seconds to stderr. I then want to take the output, munge it, and plot the data.
You could probably try replacing sys.stderr with some other file descriptor the same way as suggested here.
import sys
oldstderr = sys.stderr
sys.stderr = open('log.txt', 'w')
# do something
sys.stderr = oldstderr
Update: starting from Python 3.4, you should consider using contextlib.redirect_stdout() instead, like this:
import io
from contextlib import redirect_stdout

f = io.StringIO()
with redirect_stdout(f):
    print('a')
s = f.getvalue()
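Since this question is about stderr specifically, it may be worth adding (this is not from the original answer) that Python 3.5 introduced contextlib.redirect_stderr, which maps onto the example in the question directly:

from contextlib import redirect_stderr
import sys

# Python 3.5+: send everything written to sys.stderr to a file
with open('stderr_test.txt', 'w') as f, redirect_stderr(f):
    print('hello!', file=sys.stderr)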
@Ben, just replacing sys.stderr did not work, and the full flush logic suggested in the post was necessary. But thank you for the pointer, as it finally gave me a working version:
from __future__ import print_function
import sys

oldstderr = sys.stderr
sys.stderr = open('log.txt', 'w')

class flushfile():
    def __init__(self, f):
        self.f = f
    def __getattr__(self, name):
        return object.__getattribute__(self.f, name)
    def write(self, x):
        self.f.write(x)
        self.f.flush()
    def flush(self):
        self.f.flush()

sys.stderr = flushfile(sys.stderr)

# some long running function here, e.g.
for i in range(1000000):
    print('hello!', file=sys.stderr)

sys.stderr = oldstderr
It would have been nice if Jupyter kept the redirect_output() function and/or the %%capture magic.

Output is empty when mocking input in Python Unit Test

So I've been having this issue for some time and can't find a solution. I have this run code, which is pretty basic. I want to test for the expected output, "TEST", when I use side_effects to mock the input. The first time the input function is called I mock 'y', and then I mock '1' the second time it's called, which should then trigger the print statement. The problem is that the output coming back is empty. I don't know what's going on; when I run the main method manually and enter the inputs I get the expected output, so I know the run code works as intended, but something funky is happening during the test.
here is my run code
def main():
    newGame = input("")
    if newGame == 'y':
        print("1.Scallywag\n2.Crew\n3.Pirate")
        difficulty = input("")
        if difficulty == '1':
            print("TEST")

main()
and here is my test code
import unittest
from unittest.mock import patch
import io
import sys
from Run import main

class MyTestCase(unittest.TestCase):
    @patch('builtins.input', side_effects=['y','1'])
    def test_output(self, m):
        saved_stdout = sys.stdout
        try:
            out = io.StringIO()
            sys.stdout = out
            main()
            output = out.getvalue().strip()
            self.assertIn("TEST", output)
        finally:
            sys.stdout = saved_stdout

if __name__ == "__main__":
    unittest.main()
and here is the AssertionError I get back along with the traceback; take note that it's expecting "", which shouldn't be the case.
F
======================================================================
FAIL: test_output (__main__.MyTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python33\lib\unittest\mock.py", line 1087, in patched
    return func(*args, **keywargs)
  File "C:\Users\jsalce\Desktop\Testcases\Test.py", line 20, in test_output
    self.assertIn("TEST", output)
AssertionError: 'TEST' not found in ''

----------------------------------------------------------------------
Ran 1 test in 0.006s

FAILED (failures=1)
Thank you all in advance
Your patch of input does not work as required: the Mock keyword is side_effect (singular), but you wrote side_effects, which merely sets an unused attribute on the mock, so input() never returns your values. Try this:
import unittest
from unittest.mock import patch, MagicMock
import io
import sys
from Run import main

class MyTestCase(unittest.TestCase):
    # @patch('builtins.input', side_effects=['y','1'])
    @patch('builtins.input', MagicMock(side_effect=['y','1']))
    def test_output(self):
        saved_stdout = sys.stdout
        try:
            out = io.StringIO()
            sys.stdout = out
            main()
            output = out.getvalue().strip()
            self.assertIn("TEST", output)
            # I used equals to see if I am truly grabbing the stdout
            # self.assertEquals("TEST", output)
        finally:
            sys.stdout = saved_stdout

if __name__ == "__main__":
    unittest.main(verbosity=2)
Also, you do not need the variable 'm' in your test_output signature when the mock object is supplied to @patch directly, as above.
print("String", file=out)
is what you're looking for; you'll need to pass out to main, though.
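A minimal sketch of that idea (the out parameter is an assumption; it is not in the original Run.py):

import sys

def main(out=sys.stdout):
    newGame = input("")
    if newGame == 'y':
        # write to the injected stream instead of the global stdout
        print("1.Scallywag\n2.Crew\n3.Pirate", file=out)
        difficulty = input("")
        if difficulty == '1':
            print("TEST", file=out)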

Is there a trick to break on the print builtin with pdb?

Basically, the title.
I am trying to trace down where a spurious print happens in a large codebase, and I would like to break, or somehow get a stack trace whenever a print "happens." Any ideas?
For this particular case you can redirect stdout to a helper class that prints the output and its caller. You can also break on one of its methods.
Full example:
import sys
import inspect

class PrintSnooper:
    def __init__(self, stdout):
        self.stdout = stdout

    def caller(self):
        return inspect.stack()[2][3]

    def write(self, s):
        self.stdout.write("printed by %s: " % self.caller())
        self.stdout.write(s)
        self.stdout.write("\n")

def test():
    print 'hello from test'

def main():
    # redirect stdout to a helper class.
    sys.stdout = PrintSnooper(sys.stdout)
    print 'hello from main'
    test()

if __name__ == '__main__':
    main()
Output:
printed by main: hello from main
printed by main:
printed by test: hello from test
printed by test:
You can also just print inspect.stack() if you need more thorough information.
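For example, a hedged variant of the helper that dumps the whole call stack instead of just the caller's name:

import sys
import traceback

class StackSnooper:
    def __init__(self, stdout):
        self.stdout = stdout
    def write(self, s):
        if s.strip():  # print also writes a bare newline; skip it
            traceback.print_stack(file=self.stdout)
        self.stdout.write(s)

sys.stdout = StackSnooper(sys.stdout)  # install, as with PrintSnooper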
The only thing I can think of would be to replace sys.stdout, for example with a stream writer as returned by codecs.getwriter('utf8'). Then you can set a breakpoint on its write method in pdb, or replace its write method with debugging code.
import codecs
import sys

writer = codecs.getwriter('utf-8')(sys.stdout)  # sys.stdout.detach() in python3
old_write = writer.write

def write(data):
    print >>sys.stderr, 'debug:', repr(data)
    # or check data + pdb.set_trace()
    old_write(data)

writer.write = write
sys.stdout = writer
print 'spam', 'eggs'
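If you want to stop in the debugger rather than just log, here is a sketch of the same hook that drops into pdb on every write (old_write as captured in the snippet above; pdb's UI is pointed at stderr because stdout is the stream being intercepted):

import pdb
import sys

def write(data):
    # break here; typing 'where' at the pdb prompt shows who called print
    pdb.Pdb(stdout=sys.__stderr__).set_trace()
    old_write(data)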
