Is there a way to read stdout in python? - python

What I want is to get all the text that I wrote to stdout as a string.
from sys import stdout
stdout.read() # throws io.UnsupportedOperation: not readable
Example of what i want to get:
print("abc")
stdout.read() == "abc" # True

No. As the documentation says, stdout is not readable. Think of it as sending information to a physical printer. For instance, when you send a page of text to your FAX-printer-scanner device, how would your program be able to read that output? The characters are sent to an output buffer, down to the physical device, and flushed out to the paper.
The canonical way to handle this is with logging, which has library support in most mature languages, including Python. You create a logger whose log method (the one that writes the output) echoes its input both to print and to another store of your creation. You then add a read method to give you access to that store.
This gives you a little research to do and some coding work, but I trust you can start from here. Look for the logging tutorials online. Of course, if you get stuck with that coding, you can post your example on Stack Overflow. :-)
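For instance, a minimal sketch of that idea (EchoStore and its log/read methods are names I'm making up here, not a standard API):
import io

class EchoStore:
    """Echo everything logged to the real stdout and keep a copy in a buffer."""
    def __init__(self):
        self._buffer = io.StringIO()

    def log(self, text):
        print(text)                       # still goes to the screen
        self._buffer.write(text + "\n")   # ... and into the store

    def read(self):
        return self._buffer.getvalue()

store = EchoStore()
store.log("abc")
store.read()  # "abc\n"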

If you are using Python 3.4 or higher, there is this recipe from the contextlib documentation:
import io
import contextlib

f = io.StringIO()
with contextlib.redirect_stdout(f):
    ...  # stuff that prints to stdout
result = f.getvalue()
Note that the effect on stdout is global, so don't use it in libraries or in a threaded application.

If you don't want to use a logger, you could create a custom print function:
from io import StringIO

printstore = StringIO()

def myprint(*args, **kwargs):
    print(*args, **kwargs)       # unmodified print
    kwargs["file"] = printstore
    print(*args, **kwargs)       # print again, this time into the StringIO
This has the advantage that you get all the flexibility of the builtin print.
A drawback is that it catches only output printed with myprint.
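Usage would then look something like this (assuming the definitions above):
myprint("abc")
printstore.getvalue()  # "abc\n"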


Page current stdout to less, in Python

I've a snippet like this:
my_string = "foo bar"
def print_string(fd=sys.stdout):
print(my_string, file=fd)
How do I get to pipe the output of print_string to a pager, say, less?
I'm aware of using subprocess.Popen with stdin=PIPE, and then using proc.communicate(), but then I can only write my_string directly, not redirect from an existing descriptor.
Although it's a bit silly, I tried the code below; I'm not surprised that it doesn't work:
proc = subprocess.Popen("less -".split(), stdin=sys.stdout)
print_string()
proc.wait()
Git commands seem to do the same thing, effectively: they pipe their output through a pager, and I was trying to achieve a similar effect.
Less needs to read from the "real" stdin to get key presses, otherwise it can't react to user input. Instead you can create a temporary file and let less read that:
import subprocess
import tempfile

with tempfile.NamedTemporaryFile("w") as f:
    f.write("hello world!")
    f.flush()  # flush, or otherwise the content might not be
               # written yet when less tries to read it
    p = subprocess.Popen(["/usr/bin/less", f.name])
    p.wait()
This might have security consequences; it is best to read the tempfile documentation before relying on it for anything security-sensitive.
I'm also not sure how git does it or if there is a better way, but it worked in my short tests.

Protocol Handshaking Python

For my year 13 A level computing project I'm writing an email client. I need to model how Python's SMTP support works and show protocol handshaking. What I want to know is: when I log into Gmail's mail server to send an email using SMTP, is there a way to print out what each line of code does?
So I would want to show exactly what is going on when the line is executed.
import smtplib

server = smtplib.SMTP('smtp.gmail.com', 587)
server.login("youremailusername", "password")
msg = "\nHello!"  # The \n separates the message from the headers
server.sendmail("you@gmail.com", "target@example.com", msg)
Cheers guys
Assuming that by "what the line of code does" you mean "what protocol messages get sent to and received from the server", then smtplib.SMTP.set_debuglevel(level) is what you want:
Set the debug output level. A true value for level results in debug messages for connection and for all messages sent to and received from the server.
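For example, adding one line to the code from your question should print the whole SMTP conversation on the console (credentials and addresses are placeholders, as in your question; note I've also added starttls(), which Gmail's port 587 expects before login):
import smtplib

server = smtplib.SMTP('smtp.gmail.com', 587)
server.set_debuglevel(1)   # echo every command sent and every reply received
server.starttls()          # Gmail's port 587 expects TLS before login
server.login("youremailusername", "password")
server.sendmail("you@gmail.com", "target@example.com", "\nHello!")
server.quit()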
If by "what the line of code does" you want to know the Python code that's being executed, you can step into the function call in the debugger. Or just read the source. Like many modules in the stdlib, smtplib is designed to be useful as sample code as well as a practical module, so at the top of the docs, there's a link to smtplib.py.
Is there a way I can write that output to a tkinter window or file?
If you look at the source linked above, you can see that it just uses print calls for its debug logging. So, this gives you a few options:
Fork smtplib and replace those print calls with something better.
Monkeypatch smtplib to give it a print function that shadows the global one. (This only works in Python 3.x; in 2.x, print isn't a function.)
Open a text file, and just assign sys.stderr = my_text_file. (This only works for files, not tkinter. And it also catches all stderr, not just the logging from smtplib.)
Create a file-like object that does whatever you want in its write method, and assign sys.stderr to that. (This works for anything you want to do, including adding to a tkinter edit window, but of course it still catches all stderr.)
Wrap the script from outside, e.g., with a wrapper script that uses subprocess.Popen to call the real script and capture its stderr in a pipe.
Which one is appropriate depends on your needs. For your assignment, assuming nothing is writing to stderr but smtplib's debug output during the time you're doing smtplib stuff, I think the file-like object idea might make sense. So:
class MyDumbFakeStderr(object):
    def write(self, output):
        add_to_my_file_or_tkinter_thing(output)

sys.stderr = MyDumbFakeStderr()
try:
    pass  # your smtplib code here
finally:
    sys.stderr = sys.__stderr__
Obviously restoring stderr is unnecessary if you're just going to quit as soon as you're done, while if you want to do it repeatedly you'll probably want to wrap it in a contextlib.contextmanager, etc. Also, this MyDumbFakeStderr is pretty dumb (hence the name); it works fine for wrapping code that does nothing but print whole lines to stderr, but for anything more complicated you might need to add your own line buffering, or make it an io.TextIOBase, etc. This is just an idea to get you started.
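If you do end up doing it repeatedly, a minimal sketch of that context-manager idea might look like this (fake_stderr is just an illustrative name):
import sys
import contextlib

@contextlib.contextmanager
def fake_stderr(replacement):
    """Temporarily swap sys.stderr for `replacement`, restoring it afterwards."""
    saved = sys.stderr
    sys.stderr = replacement
    try:
        yield replacement
    finally:
        sys.stderr = saved

# usage, with MyDumbFakeStderr as defined above:
with fake_stderr(MyDumbFakeStderr()):
    pass  # your smtplib code here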
You can read the function's source code.
http://www.opensource.apple.com/source/python/python-3/python/Lib/smtplib.py (search for sendmail)
You can also read a bit about SMTP: http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol#SMTP_transport_example
and try to relate the two.

Python: Printing file to stream as file changes

I am running a large suite of unittests in a subprocess through another application (Autodesk Maya). Maya runs a special Python interpreter with its own libraries that cannot be used outside of the application, hence the need to test within the application. I would like the parent process to print the results of the testing as it is happening. The subprocess is very 'noisy', though, so I do not want to simply redirect the subprocess's stdout to the parent process's stdout. Instead, I would like the test runner to somehow directly stream to the parent process's stdout.
I am currently using a TextTestRunner in the subprocess with its output stream set to an open file. The parent process knows where this file exists, and writes the contents of the file to stdout once the subprocess is complete. Since the tests can take a long time to run, though, I would prefer that the parent process could somehow 'stream' the contents of this file as it is being created by the subprocess. But I am not sure how to do this or if there is a better approach.
Here's an example of how this is currently set up.
import subprocess

module_path = 'my.test.module'
suite_callable = 'suite'
stream_fpath = '/tmp/the_test_results.txt'
script_fpath = '/tmp/the_test_script.py'

script = '''
import sys
if sys.version_info[0] <= 2 and sys.version_info[1] <= 6:
    import unittest2 as unittest
else:
    import unittest

import {module_path}

suite = {module_path}.{suite_callable}()
with open("{stream_fpath}", "w") as output:
    runner = unittest.TextTestRunner(stream=output)
    runner.run(suite)
'''.format(**locals())

with open(script_fpath, 'w') as f:
    f.write(script)

subprocess.call(['maya', '-command', '\'python("execfile(\\"{script_fpath}\\")")\''.format(**locals())])

with open(stream_fpath, 'r') as f:
    print f.read()
Thanks for any info!
Rather than writing to a file, you should be able to make a file-like object to replace stderr. The object's write method could do something with each chunk of input as it comes in; you could squirt it to something listening on TCP, or print stuff to a Tk window, or anything else, in addition to logging to a file if you still want the results.
Implementing a stream replacement is pretty simple; in this case you probably only need to implement write, writelines, open and close (unless your test runner also uses flush).
class FakeStdErr(object):
    def __init__(self):
        self.lines = []

    def write(self, text):
        self.lines.append(text)

    def writelines(self, *args):
        for item in args:
            self.lines.append(item)

    def open(self):
        self.lines = []

    def close(self):
        pass
In your use case you might want to use a silencer class (which is a variant on the same trick) to replace the default stdout (to shut up your chatty process) and direct your test runner stream to this guy; after all the tests are done you could dump the contents to disk as a file or print them to the screen by restoring the default stdout (the link shows how to do that if you're not familiar).
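Roughly like this inside the Maya-side script (a sketch only: Silencer is an illustrative name, and suite/unittest come from the script in your question):
import sys
import unittest

class Silencer(object):
    """Swallow everything written to it, to mute the chatty stdout."""
    def write(self, text):
        pass
    def flush(self):
        pass

old_stdout = sys.stdout
sys.stdout = Silencer()      # shut up the noisy process
stream = FakeStdErr()        # collect only the test runner's output
                             # (add a no-op flush() to FakeStdErr if your runner calls it)
try:
    unittest.TextTestRunner(stream=stream).run(suite)  # suite built as in the question
finally:
    sys.stdout = old_stdout  # restore stdout before reporting
print(''.join(stream.lines))  # or dump the collected lines to disk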
(Edited - suggest using stderr, or parsing)
Alternative 1: Intercept the output rather than having it go to a file.
Have script write to sys.stderr instead of the open() of stream_fpath:
runner = unittest.TextTestRunner(stream=sys.stderr)
Replace subprocess.call with running = subprocess.Popen(<existing parameters>, stderr=PIPE). Then read running.stderr until EOF or until running.poll() returns other than None. You can do what you want with the data. For example, you can print it to the screen and also print it to stream_fpath.
This assumes that the noisy output comes from maya, which will still be dumping to stdout.
Alternative 2: parse the noisy output from stdout=PIPE. If you can differentiate the test runner's output by adding some tag to each line, you can search for that tag and only print the lines that match.
Popen documentation (Python 2)
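A rough parent-side sketch of Alternative 1 (assuming the child script now passes sys.stderr to the TextTestRunner; maya_command stands in for the command string built in your question):
import sys
import subprocess

running = subprocess.Popen(
    ['maya', '-command', maya_command],  # same command as the existing call
    stderr=subprocess.PIPE)

with open(stream_fpath, 'w') as log:
    for line in iter(running.stderr.readline, ''):  # Python 2: pipe yields str lines
        sys.stdout.write(line)   # show test progress as it happens
        log.write(line)          # ... and keep a copy on disk, as before
running.wait()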

Silence loggers and printing to screen - Python

I'm having a problem with my python script.
It's printing massive amounts of data on the screen, and I would like to prevent all sorts of printing to screen.
Edit:
The library I'm using is mechanize, and it's printing a LOT of data on screen.
I have set these to false with no luck!
br.set_debug_redirects(False)
br.set_debug_responses(False)
br.set_debug_http(False)
Any ideas?
Help would be amazing and very much appreciated!
(Based on your 2nd edit)
If you don't want to disable all output, you can try to be specific to mechanize itself. http://wwwsearch.sourceforge.net/mechanize/ provides a snippet, which I've modified (though I'm not sure if it will work):
import logging
logger = logging.getLogger("mechanize")
# only log really bad events
logger.setLevel(logging.ERROR)
When you print something it goes to the screen through the sys.stdout file. You can change this file to any other file (e.g., a log file you open) so that nothing is printed to the screen:
import sys
# save the old stdout so you can print later (do sys.stdout = OLD_STDOUT)
OLD_STDOUT = sys.stdout
sys.stdout = open("logfile.txt", 'w')
Of course, if you're talking about some library that you're calling, it may be printing to sys.stderr. Luckily, you can do the exact same thing for this one (continuing from above):
OLD_STDERR = sys.stderr
sys.stderr = open("errorLog.txt", 'w')
Now if, for some reason, you want to completely ignore stdout (or stderr) and never see it again, you can define your own file-like class that simply discards whatever is written to it:
class Discarder(object):
def write(self, text):
pass # do nothing
# now discard everything coming out of stdout
sys.stdout = Discarder()
And, to add to the din of possible solutions, here is a solution that works in Unix shells:
# discards all output (change /dev/null to a file name to keep track of the output)
python yourScript.py > /dev/null
You may redirect sys.stdout and sys.stderr to a file or to any file-like object of yours, e.g.:
class EatLog(object):
    def write(self, text):
        pass

sys.stdout = EatLog()
but I would not recommend that; a simpler option is to use OS-level redirection, e.g.:
python myscript.py > out.log
You can use the StringIO module, too, instead of rolling your own stdout replacement. Occasionally stdout needs more than a write method (flush is another common one), which StringIO will handle.
import StringIO
import sys
sys.stdout = StringIO.StringIO()
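When you want your screen back, restore stdout and read the buffer (a small follow-up to the assignment above):
captured = sys.stdout
sys.stdout = sys.__stdout__   # back to the real console
print captured.getvalue()     # everything "printed" while stdout was swapped out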

How do you make Python wait so that you can read the output?

I've always been a heavy user of Notepad2, as it is fast, feature-rich, and supports syntax highlighting. Recently I've been using it for Python.
My problem: when I finish editing a certain Python source file and try to launch it, the console window disappears before I can see the output. Is there any way for me to make the results wait so that I can read them, short of using an input() or time-delay function? Otherwise I'd have to use IDLE, whose output window stays open for you to read.
(My apologies if this question is a silly one, but I'm very new at Python and programming in general.)
If you don't want to use raw_input() or input() you could log your output (stdout, stderr) to a file or files.
You could either use the logging module, or just redirect sys.stdout and sys.stderr.
I would suggest using a combination of logging and traceback if you want to log errors with their stack trace.
Something like this maybe:
import logging, traceback
logging.basicConfig(filename=r'C:\Temp\log.txt', level=logging.DEBUG)
try:
#do some stuff
logging.debug('I did some stuff!')
except SomeException:
logging.error(traceback.format_exc())
Here's an example of redirecting stdout and stderr:
import sys

if __name__ == '__main__':
    save_out = sys.stdout  # save the original stdout so you can put it back later
    out_file = open(r'C:\Temp\out.txt', 'w')
    sys.stdout = out_file

    save_err = sys.stderr
    err_file = open(r'C:\Temp\err.txt', 'w')
    sys.stderr = err_file

    main()  # call your main function

    sys.stdout = save_out  # set stdout back to its original object
    sys.stderr = save_err
    out_file.close()
    err_file.close()
I'm going to point out that this is not the easiest or most straightforward way to go.
This is a "problem" with Notepad2, not Python itself.
Unless you want to use input()/sleep (or any other blocking function) in your scripts, I think you have to turn to the settings in Notepad2 and see what that has to offer.
You could start it in a command window, e.g.:
c:\tmp\python>main.py
Adding raw_input() (or input() in py3k) at the end of your script will freeze it until Enter is pressed, but it's not a good thing to do.
You can add a call to raw_input() to the end of your script in order to make it wait until you press Enter.
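For example, as the very last line of the script (raw_input on Python 2, input on Python 3):
raw_input("Press Enter to exit...")  # use input(...) on Python 3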
