Print on console and text file simultaneously - python

I am trying to print output to the Python console and, at the same time, save it into a text file, and it should work across repeated calls. I found some code and modified it:
import sys

def write(input_text):
    print("Coming through stdout")
    # stdout is saved
    save_stdout = sys.stdout
    fh = open(path, "w")
    sys.stdout = fh
    print(input_text)
    # return to normal:
    sys.stdout = save_stdout
    fh.close()

def testing():
    write('go')
I call this function repeatedly, but it only saves the last printed data. Any clue?
Thanks

All you need is (assuming "path" is defined already):
def print_twice(*args, **kwargs):
    print(*args, **kwargs)
    with open(path, "a") as f:  # appends to the file and closes it when finished
        print(*args, file=f, **kwargs)
Exactly the same thing will be printed and written to the file. The logging module is overkill for this simple task.
Please tell me that you don't actually think that writing data to a file in Python requires messing around with stdout like in your code. That would be ridiculous.

You pass the 'w' mode to the open function, which erases any content in the file.
You should use the 'a' mode to append to the file.
BTW, you should consider using the logging module with two handlers: one writing to stdout and the other to a file. See logging handlers in the Python documentation.
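For reference, a minimal sketch of that two-handler setup (the logger name "app" and the file name "app.log" are illustrative, not from the question):

```python
import logging

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())         # console (stderr by default)
logger.addHandler(logging.FileHandler("app.log"))  # appends to the file

logger.info("goes to both the console and app.log")
```

Each call to logger.info (or warning, error, etc.) is then dispatched to both handlers, so you never have to touch sys.stdout.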

If you want to see the output on the screen and save it to a text file as well, then:
python <your-script-name> | tee output.txt
You can change "output.txt" to any file name you want. Ignore this if I misunderstood your question.

Related

Using sys.stdout to save the console output in a .txt file but keep seeing the running output in the PyCharm console

I need to save some outputs from runs in PyCharm. I know that I can use sys.stdout to do it, but when I use it the PyCharm console doesn't show me anything until the end of the run, and I sometimes need to see the running text to check whether something went wrong during the process.
Can someone help me with that?
What I'm using as code to redirect the console text to a .txt file:
import sys
file_path = 'log.txt'
sys.stdout = open(file_path, "w")
OK, I see you're trying to override sys.stdout; that's the wrong approach.
You can save data to a file like this:
file_path = 'log.txt'
my_file = open(file_path, "w")
my_file.write("some text")
my_file.close()
you can also select a file when using the print function to write it to a file instead of sys.stdout (by default "print" writes to sys.stdout):
file_path = 'log.txt'
my_file = open(file_path, "w")
print('some text', file=my_file)
my_file.close()
You might want to try the Python logging library, which lets you send print content to the stdout console and a log file simultaneously; it's much more robust and better documented than the workaround I include next.
The issue seems to be that when you first launch the Python session, the stdout stream is directed to the Python console; when you reassign stdout to an IOStream (a text file), you redirect it there, preventing any output from being sent to the Python console.
If you want your own solution you can try to create a wrapper around the two streams like the one below:
import _io  # For type annotations
import datetime
import sys

class Wrapper:
    def __init__(self, stdout: _io.TextIOWrapper, target_log_file: _io.TextIOWrapper):
        self.target_log_file = target_log_file
        self.stdout = stdout

    def write(self, o):
        self.target_log_file.write(o)
        self.stdout.write(o)

    def flush(self):
        self.target_log_file.flush()
        self.stdout.flush()

def main():
    file_path = 'log.txt'
    sys.stdout = Wrapper(sys.stdout, open(file_path, 'w'))
    for i in range(0, 10):
        print(f'stdout test i:{i} time:{datetime.datetime.now()}')

if __name__ == '__main__':
    main()
There is a very nice trick from Mark Lutz (Learning Python) which could be useful for you. The idea is to redirect output to a file for as long as you need and then go back to normal mode. Since you need it the other way around, you could comment out all the marked parts while developing, and as soon as you're satisfied with the result, activate them, like this:
import sys                          # <- comment out to see the output
tmp = sys.stdout                    # <- comment out to see the output
sys.stdout = open(file_path, "w")   # <- comment out to see the output
print("something")                  # redirected to the file
sys.stdout.close()                  # <- comment out to see the output
sys.stdout = tmp                    # returns to normal mode / comment out
Do you need to use sys.stdout? The easiest solution may be to define a method which does both. For instance:
def print_and_export(text, path):
    print(text)
    f = open(path, 'a')
    f.write(text + '\n')  # print() adds a newline, so match it here
    f.close()

Then call this method from anywhere using print_and_export('Some text', 'log.txt').
But specifically for logging, I would prefer the built-in 'logging' module:
import logging

FILE_PATH = 'log.txt'
LOG_FORMAT = '[%(asctime)s] - %(filename)-30s - line %(lineno)-5d - %(levelname)-8s - %(message)s'

logging.basicConfig(level=logging.DEBUG,
                    format=LOG_FORMAT,
                    handlers=[logging.FileHandler(FILE_PATH), logging.StreamHandler()])
log = logging.getLogger()
Then you can simply call log.info("Some text"). It should print to the PyCharm console and append the text to your log file.
All the answers above are awesome and will give you what you are after. I have a different approach, but it will not give you both console output and file output from within Python; for that, you need to do file I/O. Still, I'll mention my method so you know it.
Make your program print exactly what you want to the console. Then save it and open the command prompt: browse to the location of the .py file, click the file path bar, type cmd (which replaces all the text there), and hit Enter.
Once in the command prompt, type python, followed by the name of the file you want to run. Let's assume the name is myfile.py. You can write:
python myfile.py > output.txt
If you hit Enter, whatever would be printed to the console is diverted to a .txt file named output.txt, saved in the same folder as your .py file. You can pass the file parameter to the print() function to send errors to a different stream (stderr) and divert them to a different file. So, in the end, you can have one file for normal output and another file for just error output. Give the Python documentation a read to find out more.
https://helpdeskgeek.com/how-to/redirect-output-from-command-line-to-text-file/
explains what I am trying to describe; it could be useful for you in the future. In doing so, you are not running the code in PyCharm but straight in the command prompt. So this way, you can literally do file I/O without doing file I/O (without doing file I/O in Python, at least).
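A quick sketch of the stdout/stderr split described above, assuming a Unix-style shell and that python3 is on PATH; the inline python3 -c one-liner stands in for a real myfile.py:

```shell
# Stand-in for: python myfile.py > output.txt 2> errors.txt
python3 -c 'import sys; print("normal output"); print("error output", file=sys.stderr)' > output.txt 2> errors.txt
```

After running this, output.txt holds the normal output and errors.txt holds only what was written to stderr.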

Is there a way to call the currently open text file for writing?

I'm wondering if there is a way to write to a file that was opened in a separate script in Python. For example, if the following was run within main.py:
f = open(fname, "w")
writer.write()
Then, within a separate script called writer.py, we have a function write() of the form:
def write():
    get_currently_open_file().write("message")
Without defining f within writer.py. This would be similar to how matplotlib has the method:
pyplot.gca()
Which returns the current axis that's open for plotting. This allows you to plot to an axis defined previously without redefining it within the script you're working in.
I'm trying to write to a file with inputs from many different scripts and it would help a lot to be able to write to a file without reading a file object or filename as an input to each script.
Yes. Python functions have local variables, but those are only the variables that are assigned in the function. Python will look to the containing scope for the others. If you use f, but don't try to assign f, python will find the one you created in the global scope.
def write():
    f.write("text")

fname = "test"
f = open(fname, "w")
write()
This only works if the function is in the same module as the global variable (python "global" is really "module level").
UPDATE
Leveraging a function's global namespace, you can write a module that holds the writing function and a variable holding the file. Every script/module that imports this module can use the write function, which gets its file handle from its own module. In this example, filewriter.py is the common place where test.py and somescript.py cooperate on file management.
filewriter.py
def opener(filename, mode="r"):
    global f
    f = open(filename, mode)

def write(text):
    return f.write(text)  # uses the `f` in the filewriter namespace
test.py
from filewriter import write

def my_test():
    write("THIS IS A TEST\n")
somescript.py
import filewriter
import test

filewriter.opener("test.txt", "w+")  # "w+" so the file can be read back for verification
test.my_test()
# verify
filewriter.f.seek(0)
assert filewriter.f.read() == "THIS IS A TEST\n"
Writing as a separate answer because it's essentially unrelated to my other answer, the other semi-reasonable solution here is to define a protocol in terms of the contextvars module. In the file containing write, you define:
import contextlib
import io
import sys
from contextvars import ContextVar

outputctx: ContextVar[io.TextIOBase] = ContextVar('outputctx', default=sys.stdout)

@contextlib.contextmanager
def using_output_file(file):
    token = outputctx.set(file)
    try:
        yield
    finally:
        outputctx.reset(token)
Now, your write function gets written as:
def write():
    outputctx.get().write("message")
and when you want to redirect it for a time, the code that wants to do so does:
with open(fname, "w") as f, using_output_file(f):
    ...  # do stuff; calling write() implicitly uses the newly opened file
# original output target is restored here
The main differences between this and using sys.stdout with contextlib.redirect_stdout are:
It's opt-in, functions have to cooperate to use it (mild negative)
It's explicit, so no one gets confused when the code says print or sys.stdout.write and nothing ends up on stdout
You don't mess around with sys.stdout (temporarily cutting off sys.stdout from code that doesn't want to be redirected)
By using contextvars, it behaves like thread-local state (changing it in one thread doesn't change it for other threads, which would otherwise cause all sorts of problems in multithreaded code), but more so: even in asyncio code (cooperative multitasking of tasks that all run in the same thread), the context changes won't leak outside the task that makes them, so there's no risk that task A (which wants to be redirected) changes how task B (which does not wish to be redirected) behaves. By contrast, contextlib.redirect_stdout explicitly makes global changes; all threads and tasks see the change and can interfere with each other. It's madness.
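For completeness, here is a self-contained sketch of the whole pattern in one place; the names match the snippets above, and the file name "demo.txt" is illustrative:

```python
import contextlib
import io
import sys
from contextvars import ContextVar

# defaults to the real stdout until someone opts in to a redirect
outputctx: ContextVar[io.TextIOBase] = ContextVar('outputctx', default=sys.stdout)

@contextlib.contextmanager
def using_output_file(file):
    token = outputctx.set(file)
    try:
        yield
    finally:
        outputctx.reset(token)

def write():
    outputctx.get().write("message\n")

write()  # goes to sys.stdout
with open("demo.txt", "w") as f, using_output_file(f):
    write()  # goes to demo.txt
write()  # back to sys.stdout
```

Only the call made inside the with block lands in the file; the calls before and after still reach the console.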
Obviously what you're asking for is hacky, but there are semi-standard ways to express the concept "The thing we're currently writing to". sys.stdout is one of those ways, but it's normally sent to the terminal or a specific file chosen outside the program by the user through piping syntax. That said, you can perform temporary replacement of sys.stdout so that it goes to an arbitrary location, and that might satisfy your needs. Specifically, you use contextlib.redirect_stdout in a with statement.
On entering the with, sys.stdout is saved and replaced with an arbitrary open file; while in the with all code (including code called from within the with, not just the code literally shown in the block) that writes to sys.stdout instead writes to the replacement file, and when the with statement ends, the original sys.stdout is restored. Such uses can be nested, effectively creating a stack of sys.stdouts where the top of the stack is the current target for any writes to sys.stdout.
So for your use case, you could write:
import sys

def write():
    sys.stdout.write("message")
and it would, by default, write to sys.stdout. But if you called write() like so:
from contextlib import redirect_stdout
with open(fname, "w") as f, redirect_stdout(f):  # open a file and redirect stdout to it
    write()
the output would seamlessly go to the file located wherever fname describes.
To be clear, I don't think this is a good idea. I think the correct solution is for the functions in the various scripts to just accept a file-like object as an argument which they will write to ("Explicit is better than implicit", per the Zen of Python). But it's an option.

Breaking the python code when particular file is being opened

I want to run code under a debugger and stop it when a file is being opened. I want to do this regardless of the technique by which the file was opened. AFAIK there are two ways of opening a file (if there are more, I want to stop the code in those cases too), and I want to stop the code when either of these is executed:
with open(filename, "wb") as outFile:
or
object = open(file_name [, access_mode][, buffering])
is this possible under pdb or ipdb ?
PS: I do not know the line where the file is being opened; if I knew, I could set the breakpoint manually. I could also grep for open( and set breakpoints on the found lines, but if my code uses modules this might be problematic. Also, if the file is opened another way, not by open (I do not know if this is possible, just guessing, maybe for appending etc.), this wouldn't work.
Ideally you'd put a breakpoint in the open builtin function, but that is not possible. Instead, you can override it, and place the breakpoint there:
import __builtin__

def open(name, mode='', buffer=0):
    return __builtin__.open(name, mode, buffer)  # place a breakpoint here
Of course you'll be breaking at any file opening, not just the one you wanted.
So you can refine that a bit and place a conditional breakpoint:
import ipdb
import __builtin__

def open(name, mode='', buffer=0):
    if name == 'myfile.txt':
        ipdb.set_trace()  ######### Break Point ###########
    return __builtin__.open(name, mode, buffer)

f = open('myfile.txt', 'r')
Run your python program with python -m pdb prog.py.
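The snippets above use the Python 2 __builtin__ module; on Python 3 the equivalent module is named builtins. A sketch of the same conditional hook ('myfile.txt' is a placeholder for the file you care about), which records matching calls so you can see it fire:

```python
import builtins

_real_open = builtins.open
opened = []  # record of matching open() calls, for demonstration

def traced_open(file, *args, **kwargs):
    if file == 'myfile.txt':  # placeholder for the file you want to catch
        opened.append(file)
        # import pdb; pdb.set_trace()  # uncomment to actually break here
    return _real_open(file, *args, **kwargs)

builtins.open = traced_open

with open('myfile.txt', 'w') as fh:  # goes through traced_open
    fh.write('x')

builtins.open = _real_open  # restore the original when done
```

Restoring the original open afterwards matters: leaving the patch in place affects every subsequent open() in the process, including ones inside libraries.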
If you don't know where the open call is, you need to patch the original open at the earliest possible point (e.g. the __main__-guard) like this:
import __builtin__

_old_open = open

def my_open(*args, **kwargs):
    print "my_open"
    return _old_open(*args, **kwargs)

setattr(__builtin__, 'open', my_open)

print open(__file__, "rb").read()

How to save python screen output to a text file

I'd like to query items from a dict and save the printed output to a text file.
Here's what I have:
import json
import exec.fullog as e

inp = e.getdata()  # inp is now a dict with items, keys and values

# Query
print('Data collected on:', inp['header']['timestamp'].date())
print('\n CLASS 1 INFO\n')

for item in inp['Demographics']:
    if item['name'] in ['Carly', 'Jane']:
        print(item['name'], 'Height:', item['ht'], 'Age:', item['years'])

for item in inp['Activity']:
    if item['name'] in ['Cycle', 'Run', 'Swim']:
        print(item['name'], 'Athlete:', item['athl_name'], 'Age:', item['years'])
A quick and dirty hack to do this within the script is to direct the screen output to a file:
import sys

stdoutOrigin = sys.stdout
sys.stdout = open("log.txt", "w")
and then reverting back to outputting to screen at the end of your code:
sys.stdout.close()
sys.stdout = stdoutOrigin
This should work for a simple code, but for a complex code there are other more formal ways of doing it such as using Python logging.
Let me summarize all the answers and add some more.
To write to a file from within your script, use the file I/O tools that Python provides (this is the f = open('file.txt', 'w') stuff).
If you don't want to modify your program, you can use stream redirection (both on Windows and on Unix-like systems). This is the python myscript.py > output.txt stuff.
If you want to see the output both on your screen and in a log file, and if you are on Unix and don't want to modify your program, you may use the tee command (a Windows version also exists, but I have never used it).
An even better way to send the desired output to screen, file, e-mail, Twitter, or whatever is to use the logging module. The learning curve here is the steepest among all the options, but in the long run it will pay for itself.
abarnert's answer is very good and pythonic. Another completely different route (not in python) is to let bash do this for you:
$ python myscript.py > myoutput.txt
This works in general to put all the output of a cli program (python, perl, php, java, binary, or whatever) into a file, see How to save entire output of bash script to file for more.
If you want the output to go to stdout and to the file, you can use tee:
$ python myscript.py | tee myoutput.txt
For more on tee, see: How to redirect output to a file and stdout
What you're asking for isn't impossible, but it's probably not what you actually want.
Instead of trying to save the screen output to a file, just write the output to a file instead of to the screen.
Like this:
with open('outfile.txt', 'w') as outfile:
    print >>outfile, 'Data collected on:', input['header']['timestamp'].date()
Just add that >>outfile into all your print statements, and make sure everything is indented under that with statement.
More generally, it's better to use string formatting rather than magic print commas, which means you can use the write function instead. For example:
outfile.write('Data collected on: {}'.format(input['header']['timestamp'].date()))
But if print is already doing what you want as far as formatting goes, you can stick with it for now.
What if you've got some Python script someone else wrote (or, worse, a compiled C program that you don't have the source to) and can't make this change? Then the answer is to wrap it in another script that captures its output, with the subprocess module. Again, you probably don't want that, but if you do:
import subprocess
import sys

output = subprocess.check_output([sys.executable, './otherscript.py'])
with open('outfile.txt', 'wb') as outfile:
    outfile.write(output)
You would probably want this. The simplest solution would be:
Create the file first.
Open the file via
f = open('<filename>', 'w')
or
f = open('<filename>', 'a')
in case you want to append to the file.
Now, write to the same file via
f.write(<text to be written>)
Close the file after you are done using it:
# good practice
f.close()
Here's a really simple way in Python 3+:
f = open('filename.txt', 'w')
print('something', file=f)
^ found that in this answer: https://stackoverflow.com/a/4110906/6794367
This is very simple; just make use of this example:
import sys

with open("test.txt", 'w') as sys.stdout:
    print("hello")
# note: sys.stdout stays closed afterwards; save and restore the original if you need to print again
python script_name.py > saveit.txt
Because this scheme uses shell command lines to start Python programs, all the usual shell syntax applies. This way, we can route the printed output of a Python script to a file to save it.
I found a quick way to do this:
log = open("log.txt", 'a')

def oprint(message):
    print(message)
    log.write(message + '\n')  # print() adds a newline, so match it here

# ... code ...
log.close()
Whenever you want to print something, just use oprint rather than print.
Note 1: in case you want to put the function oprint in a module and then import it, use:
import builtins

builtins.log = open("log.txt", 'a')
Note 2: what you pass to oprint should be a single string (so if you were using a comma in your print to separate multiple strings, you may replace it with +).
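If you'd rather keep print's multi-argument behaviour instead of joining strings by hand, a variant of oprint can accept the same positional arguments as print() and do the joining itself (same illustrative "log.txt" file as above):

```python
log = open("log.txt", "a")

def oprint(*args, sep=" ", end="\n"):
    text = sep.join(str(a) for a in args) + end
    print(text, end="")  # console; text already ends with `end`
    log.write(text)      # file
    log.flush()          # keep the file current even if the program crashes later
```

Then oprint("value:", 42) works just like print("value:", 42), while also appending the same line to the log file.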
We can simply pass the output of Python's built-in print function to a file, after opening the file in append mode, using just two lines of code:
with open('filename.txt', 'a') as file:
    print('\nThis printed data will be stored in a file', file=file)
Hope this resolves the issue.
Note: this code works with Python 3; Python 2 is not supported.
import numpy as np  # needed for savetxt

idx = 0
for wall in walls:
    np.savetxt("C:/Users/vimal/OneDrive/Desktop/documents-export-2021-06-11/wall/wall_" + str(idx) + ".csv",
               wall, delimiter=",")
    idx += 1
import datetime
import sys

class Logger:
    def __init__(self, application_log_file, init_text="Program started", print_with_time=True):
        self.__output_num = 0
        self.origin = sys.stdout
        self.init_text = init_text
        self.__init = False
        self.print_with_time = print_with_time
        self.log_file = application_log_file
        self.data = ""
        self.last_time = 0
        sys.stdout = self
        sys.stderr = self

    def flush(self):
        if self.data == "\n":
            return
        sys.stdout = self.origin
        print(self.__create_log_text(self.data) if self.print_with_time else self.data)
        with open(self.log_file, "a", encoding="utf-8") as log:
            log.write(self.__create_log_text(self.data))
        self.data = ""
        sys.stdout = self

    def __create_log_text(self, string: str):
        if self.last_time == str(datetime.datetime.today())[:-7]:
            return string
        self.last_time = str(datetime.datetime.today())[:-7]
        if not self.__init:
            self.__init = True
            return str(datetime.datetime.today())[:-7] + " | " + f"{self.init_text}\n"
        return str(datetime.datetime.today())[:-7] + " | " + string

    def write(self, data):
        self.data += data

How do I export the output of Python's built-in help() function

I've got a python package which outputs considerable help text from: help(package)
I would like to export this help text to a file, in the format in which it's displayed by help(package)
How might I go about this?
Use pydoc.render_doc(thing) to get thing's help text as a string. Other parts of pydoc, like pydoc.text and pydoc.html, can help you write it to a file.
Using the -w modifier in Linux will write the output to an HTML file in the current directory; for example:
pydoc -w Rpi.GPIO
puts all the help() text that would be presented by help(Rpi.GPIO) into a nicely formatted file, Rpi.GPIO.html, in the current directory of the shell.
This is a bit hackish (and there's probably a better solution somewhere), but this works:
import sys
import pydoc

def output_help_to_file(filepath, request):
    f = open(filepath, 'w')
    sys.stdout = f
    pydoc.help(request)
    f.close()
    sys.stdout = sys.__stdout__
    return
And then...
>>> output_help_to_file(r'test.txt', 're')
An old question but the newer recommended generic solution (for Python 3.4+) for writing the output of functions that print() to terminal is using contextlib.redirect_stdout:
import contextlib

def write_help(func, out_file):
    with open(out_file, 'w') as f:
        with contextlib.redirect_stdout(f):
            help(func)
Usage example:
write_help(int, 'test.txt')
To get a "clean" text output, just as the built-in help() would deliver, and suitable for exporting to a file or anything else, you can use the following:
>>> import pydoc
>>> pydoc.render_doc(len, renderer=pydoc.plaintext)
'Python Library Documentation: built-in function len in module builtins\n\nlen(obj, /)\n Return the number of items in a container.\n'
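If you want that plain-text string in a file, you can write it out directly, with no sys.stdout tricks at all (the file name 'len_help.txt' is illustrative):

```python
import pydoc

# render_doc returns the help text as a string; the plaintext renderer
# strips the backspace-based bold/underline markup used for terminals
text = pydoc.render_doc(len, renderer=pydoc.plaintext)
with open('len_help.txt', 'w') as f:
    f.write(text)
```

The same call works for any object you could pass to help(): a module, class, or function.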
If you do help(help) you'll see:
Help on _Helper in module site object:
class _Helper(__builtin__.object)
| Define the builtin 'help'.
| This is a wrapper around pydoc.help (with a twist).
[rest snipped]
So - you should be looking at the pydoc module - there's going to be a method or methods that return what help(something) does as a string...
Selected answer didn't work for me, so I did a little more searching and found something that worked on Daniweb. Credit goes to vegaseat. https://www.daniweb.com/programming/software-development/threads/20774/starting-python/8#post1306519
# simplified version of sending help() output to a file
import sys
# save present stdout
out = sys.stdout
fname = "help_print7.txt"
# set stdout to file handle
sys.stdout = open(fname, "w")
# run your help code
# its console output goes to the file now
help("print")
sys.stdout.close()
# reset stdout
sys.stdout = out
The simplest way to do this is via the sys module. It opens a data stream between the operating system and the program; you grab the data from the help module and then save it to an external file:
import sys

out = sys.stdout
sys.stdout = open("str.txt", "w")
help(str)
sys.stdout.close()
sys.stdout = out
The cleanest way
Assuming help(os):
Step 1 - In the Python console:
import pydoc
pydoc.render_doc(os, renderer=pydoc.plaintext)
# this will display a string containing the help(os) output
Step 2 - Copy the string
Step 3 - In a terminal:
echo "copied string" | tee somefile.txt
If you want to write class information to a text file, follow the steps below:
Insert a pdb hook somewhere in the class and run the file:
import pdb; pdb.set_trace()
Then perform steps 1 to 3 stated above.
In Windows, just open a command line window, go to the Lib subfolder of your Python installation, and type
python pydoc.py moduleName.memberName > c:\myFolder\memberName.txt
to put the documentation for the property or method memberName in moduleName into the file memberName.txt. If you want an object further down the hierarchy of the module, just put more dots. For example
python pydoc.py wx.lib.agw.ultimatelistctrl > c:\myFolder\UltimateListCtrl.txt
to put the documentation on the UltimateListCtrl control in the agw package in the wxPython package into UltimateListCtrl.txt.
pydoc already provides the needed feature, a very well-designed feature that all question-answering systems should have. pydoc.Helper.__init__ takes an output object, and all output is sent there. If you use your own output object, you can do whatever you want. For example:
class OUTPUT():
    def __init__(self):
        self.results = []
    def write(self, text):
        self.results += [text]
    def flush(self):
        pass
    def print_(self):
        for x in self.results:
            print(x)
    def return_(self):
        return self.results
    def clear_(self):
        self.results = []
when passed as
O = OUTPUT()  # necessary to remember the results, but see below
help = pydoc.Helper(O)
it will store all results in the OUTPUT instance. Of course, beginning with O = OUTPUT() is not the best idea (see below). render_doc is not the central output point; output is. I wanted OUTPUT so I could keep large outputs from disappearing from the screen, using something like Mark Lutz's "More". A different OUTPUT would allow you to write to files.
You could also add a "return" to the end of the class pydoc.Helper to return the information you want. Something like:
if self.output_: return self.output_
should work, or
if self.output_: return self.output.return_()
All of this is possible because pydoc is well-designed. It is hidden because the definition of help leaves out the input and output arguments.
Using the command line we can get the output directly and pipe it to whatever is useful.
python -m pydoc ./my_module_file.py
-- the ./ is important, it tells pydoc to look at your local file and not attempt to import from somewhere else.
If you're on a Mac you can pipe the output to pbcopy and paste it into a documentation tool of your choice.
python -m pydoc ./my_module_file.py | pbcopy
