I want to save my console output to a text file, and I want it written as it happens, so that if the program crashes, the logs up to that point are saved.
Do you have some ideas?
I can't just specify a file in a logger, because I have a lot of different loggers printing to the console.
I think you can indeed use a logger here; just add a file handler from the logging module.
As an example you can use something like this, which logs both to the terminal and to a file:
import logging
from pathlib import Path

root_path = <YOUR PATH>
log_level = logging.DEBUG

# Print to the terminal
logging.root.setLevel(log_level)
formatter = logging.Formatter("%(asctime)s | %(levelname)s | %(message)s", "%Y-%m-%d %H:%M:%S")
stream = logging.StreamHandler()
stream.setLevel(log_level)
stream.setFormatter(formatter)

log = logging.getLogger("pythonConfig")
if not log.hasHandlers():
    log.setLevel(log_level)
    log.addHandler(stream)
    # File handler: logging flushes after every record, so lines reach the
    # file as they happen, even if the program crashes later.
    file_handler = logging.FileHandler(Path(root_path) / "process.log", mode="w")
    file_handler.setLevel(log_level)
    file_handler.setFormatter(formatter)
    log.addHandler(file_handler)

log.info("test")
If you have multiple loggers, you can still use this solution, because loggers propagate their records to their ancestors: put the handlers on the root logger and every other logger will reach them by default, as in the sketch below.
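A minimal sketch of that idea (the file name app.log and the module names are just for illustration): configure handlers once on the root logger, and any named logger created anywhere in the program reaches them through propagation.
import logging

# Configure handlers once, on the root logger.
formatter = logging.Formatter("%(asctime)s | %(name)s | %(levelname)s | %(message)s")
for handler in (logging.StreamHandler(), logging.FileHandler("app.log")):
    handler.setFormatter(formatter)
    logging.root.addHandler(handler)
logging.root.setLevel(logging.DEBUG)

# Loggers elsewhere need no handlers of their own:
# their records propagate up to the root handlers.
logging.getLogger("module_a").info("reaches console and app.log")
logging.getLogger("module_b.sub").warning("so does this one")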
As an alternative, you can use the nohup command, which keeps the process running even if the terminal closes and redirects the output to the desired location:
nohup python main.py > log_file.out &
There are many ways to do this, but they are not all equally suitable (maintainability, ease of use, reinventing the wheel, etc.).
If you don't mind using your operating system's built-ins, you can:
forward standard output and error streams to a file of your choice with python3 -u ./myscript.py > outputfile.txt 2>&1.
forward standard output and error streams to a file of your choice AND display them on the console too with python3 -u ./myscript.py 2>&1 | tee outputfile.txt. The -u option makes the output unbuffered (i.e.: what's put in the pipe goes out immediately).
If you want to do it from the Python side you can:
use the logging module to output the generated logs to a file handle instead of the standard output.
override the stdout and stderr streams defined in sys (sys.stdout and sys.stderr) so that they point to an opened file handle of your choice, for instance sys.stdout = open("log-stdout.txt", "w") (see the tee-style sketch after this list).
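Building on that last point, here is a minimal tee-style sketch (the Tee class and the file names are mine, not from any library): it duplicates every write to both the original stream and a file, flushing after each write so the file is up to date even if the program crashes.
import sys

class Tee:
    """Write-through wrapper that duplicates writes to a file and a stream."""
    def __init__(self, path, stream):
        self.file = open(path, "w")
        self.stream = stream

    def write(self, data):
        self.stream.write(data)
        self.file.write(data)
        self.file.flush()  # flush immediately so a crash loses nothing

    def flush(self):
        self.stream.flush()
        self.file.flush()

sys.stdout = Tee("console.log", sys.stdout)
sys.stderr = Tee("console-err.log", sys.stderr)
print("this line goes to the console and to console.log")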
As a personal preference: the simpler, the better. The logging module is made for exactly this purpose and provides all the mechanisms you need to achieve what you want, so I would suggest you stick with it. The logging module documentation provides many examples, from simple to advanced use.
Related
I have a simple piece of code which runs a specified launchfile:
import roslaunch

uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
roslaunch.configure_logging(uuid)  # uuid must exist before configuring logging
file = [(roslaunch.rlutil.resolve_launch_arguments(cli)[0], cli[2:])]  # cli holds the launch-file arguments
launch = roslaunch.parent.ROSLaunchParent(uuid, file)
Execution of the launch file generates lots of logging output on stdout/stderr, so the actual script's output gets lost.
Is it possible to somehow redirect or disable printing it on the screen?
Two options:
via env vars (export).
via the Python logging module.
Using env vars
You can point ROS logging at a /dev/null-like file system.
See ROS env vars: https://wiki.ros.org/ROS/EnvironmentVariables
export ROS_LOG_DIR=/black/hole
Use https://github.com/abbbi/nullfsvfs to create the dir.
Using Python logging
import logging
logging.getLogger("rospy").setLevel(logging.CRITICAL)
https://docs.python.org/3/library/logging.html#logging-levels
How about redirecting stdout and stderr to files? As long as you redirect them before your calls to roslaunch it should put all the output into the files you point to (or /dev/null to ignore it).
import sys
sys.stdout = open('redirect.out','w')
sys.stderr = open('redirect.err','w')
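A slightly safer variant of that idea (a sketch; the file names are the same hypothetical ones as above): save the original streams first and restore them when you are done, which is also what the sys documentation recommends.
import sys

# Save the originals so they can be restored afterwards.
orig_stdout, orig_stderr = sys.stdout, sys.stderr
sys.stdout = open('redirect.out', 'w')
sys.stderr = open('redirect.err', 'w')
try:
    pass  # ... roslaunch calls go here ...
finally:
    sys.stdout.close()
    sys.stderr.close()
    sys.stdout, sys.stderr = orig_stdout, orig_stderr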
I'm using PyCharm to develop a GAE app in Mac OS X. Is there any way to display colours in the run console of PyCharm?
I've set up a handler that outputs colours in ANSI format. Then I added the handler:
import logging

LOG = logging.getLogger()
LOG.setLevel(logging.DEBUG)
for handler in LOG.handlers[:]:  # iterate over a copy while removing
    LOG.removeHandler(handler)
LOG.addHandler(ColorHandler())  # ColorHandler: the ANSI handler mentioned above

LOG.info('hello!')
LOG.warning('hello!')
LOG.debug('hello!')
LOG.error('hello!')
But the colour is the same.
EDIT:
A response from the JetBrains issue tracker: change line 55 of the snippet from sys.stderr to sys.stdout. The stderr stream is always colored red by PyCharm, while stdout is not.
Now colours are properly displayed.
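For context, a minimal sketch of what such a handler can look like (my reconstruction, not the exact snippet the question links to): an ANSI-coloring StreamHandler that writes to sys.stdout, per the JetBrains advice.
import logging
import sys

# ANSI color codes per level (the color choices are arbitrary).
COLORS = {
    logging.DEBUG: "\033[36m",    # cyan
    logging.INFO: "\033[32m",     # green
    logging.WARNING: "\033[33m",  # yellow
    logging.ERROR: "\033[31m",    # red
}
RESET = "\033[0m"

class ColorHandler(logging.StreamHandler):
    def __init__(self):
        # Write to stdout: PyCharm paints everything on stderr red.
        super().__init__(stream=sys.stdout)

    def format(self, record):
        color = COLORS.get(record.levelno, "")
        return f"{color}{super().format(record)}{RESET}"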
As of at least PyCharm 2017.2 you can do this by enabling:
Run | Edit Configurations... | Configuration | Emulate terminal in output console
PyCharm doesn't support that feature natively; however, you can install the Grep Console plugin and set the colors as you like.
Here's a screenshot:
http://plugins.jetbrains.com/files/7125/screenshot_14104.png (link is dead)
I hope it helps somewhat :) It doesn't provide a fully colorized console, but it's a step towards it.
Late to the party, but for anyone else with this issue, here's the solution that worked for me:
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
This came from this answer
Sept. 2019: PyCharm Community 2019.1
PyCharm colored all the logs including info/debug in red.
The upshot is: it is not a PyCharm problem, this is how the default logging is configured.
Everything written to sys.stderr is colored red by PyCharm.
When using StreamHandler() without arguments, the default stream is sys.stderr.
To get non-colored logs back, pass logging.StreamHandler(stream=sys.stdout) in the basic config like this:
import logging
import os
import sys

logging.basicConfig(
    level=logging.DEBUG,
    format='[%(levelname)8s]: %(message)s',
    handlers=[
        logging.FileHandler(f'{os.path.basename(__file__)}.log'),
        logging.StreamHandler(sys.stdout),
    ])
or be more verbose:
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
This fixed my red PyCharm logs.
What solved it for me (on PyCharm 2017.2) was going to Preferences -> Editor -> Color Scheme -> Console Colors and changing the color of Console -> Error output. Of course this also changes the error color but at least you don't see red all the time...
PyCharm 2019.1.1 (Windows 10, 1709) - I ran the snippet as-is and it works correctly.
Bug: setFormatter does not work.
Fix: change line 67 and get rid of lines 70-71 (which add an unformatted handler):
self.stream.write(record.msg + "\n", color)
to
self.stream.write(self.format(record) + "\n", color)
Lines 70-71 can be moved under a __main__ guard so the file stays testable when run directly:
if __name__ == "__main__":
    logging.getLogger().setLevel(logging.DEBUG)
    logging.getLogger().addHandler(ColorHandler())

    logging.debug("Some debugging output")
    logging.info("Some info output")
    logging.error("Some error output")
    logging.warning("Some warning output")
I compared it with the standard StreamHandler:
import logging
import logging_colored
log_format = logging.Formatter("[%(threadName)-15.15s] [%(levelname)-5.5s] %(message)s")
logger = logging.getLogger('Main')
logger.setLevel(logging.DEBUG)
console = logging.StreamHandler()
console.setFormatter(log_format)
logger.addHandler(console)
console = logging_colored.ColorHandler()
console.setFormatter(log_format)
logger.addHandler(console)
...
I discovered the following solution. Apparently PyCharm redirects sys.stdout. From the sys module documentation:
sys.__stdin__
sys.__stdout__
sys.__stderr__
These objects contain the original values of stdin, stderr and stdout
at the start of the program. They are used during finalization, and
could be useful to print to the actual standard stream no matter if
the sys.std* object has been redirected.
It can also be used to restore the actual files to known working file
objects in case they have been overwritten with a broken object.
However, the preferred way to do this is to explicitly save the
previous stream before replacing it, and restore the saved object.
Therefore, to solve this issue you can redirect output to sys.__stdout__. Example configuration from my log_config.yml:
console:
  class: logging.StreamHandler
  level: DEBUG
  stream: "ext://sys.__stdout__"
  formatter: colorFormatter
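For completeness, a sketch of how such a YAML file can be loaded (assuming PyYAML is installed, and that the console handler above sits inside a full dictConfig schema with version, formatters, and root sections):
import logging.config
import yaml  # PyYAML, assumed installed

with open("log_config.yml") as f:
    logging.config.dictConfig(yaml.safe_load(f))

logging.getLogger(__name__).debug("routed through the ext://sys.__stdout__ handler")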
I set up a basic Python logger that writes to a log file and to stdout. When I run my Python program locally, log messages sent with logging.info appear as expected in the file and in the console. However, when I run the same program remotely via ssh -n user@server python main.py, neither the console nor the file shows any logging.info messages.
This is the code used to set up the logger:
import logging
import time

def setup_logging(model_type, dataset):
    file_name = dataset + "_" + model_type + time.strftime("_%Y%m%d_%H%M")
    logging.basicConfig(
        level=logging.INFO,
        format="[%(levelname)-5.5s %(asctime)s] %(message)s",
        datefmt='%H:%M:%S',
        handlers=[
            logging.FileHandler("log/{0}.log".format(file_name)),
            logging.StreamHandler()
        ])
I already tried the following things:
Sending a message to logging.warning: those appear as expected on the root logger. However, even without setting up the logger and falling back to the default, logging.info messages do not show up.
The file and folder permissions seem to be alright and an empty file is created on disk.
Using print works as usual as well.
If you look into the source code of the basicConfig function, you will see that the configuration is applied only when there are no handlers on the root logger:
_acquireLock()
try:
    force = kwargs.pop('force', False)
    if force:
        for h in root.handlers[:]:
            root.removeHandler(h)
            h.close()
    if len(root.handlers) == 0:
        handlers = kwargs.pop("handlers", None)
        if handlers is None:
            ...
I think one of the libraries you use configures logging on import. As you can see from the excerpt above, one solution is to pass the force=True argument.
A possible disadvantage is that several popular data-science libraries keep a reference to the loggers they configure, so when you reconfigure logging yourself, their old loggers still hold the old handlers and do not see your changes. In that case you will need to clean the handlers on those loggers as well (see the sketch below).
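A sketch of both steps, assuming Python 3.8+ (where the force argument was added) and a hypothetical library logger named "some_library":
import logging

# Wipe whatever handlers an imported library attached to the root logger
# and apply your own configuration (force= requires Python 3.8+).
logging.basicConfig(
    level=logging.INFO,
    format="[%(levelname)-5.5s %(asctime)s] %(message)s",
    force=True,
)

# If a library logger still holds its own handlers, clear them and let its
# records propagate to the root handlers instead.
lib_logger = logging.getLogger("some_library")  # hypothetical name
lib_logger.handlers.clear()
lib_logger.propagate = True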
I seem to be running into a problem when I am logging data after invoking another module in an application I am working on. I'd like assistance in understanding what may be happening here.
To replicate the issue, I have developed the following script...
#!/usr/bin/python
import sys
import logging
from oletools.olevba import VBA_Parser, VBA_Scanner
from cloghandler import ConcurrentRotatingFileHandler
# set up logger for application
dbg_h = logging.getLogger('dbg_log')
dbglog = 'dbg.log'
dbg_rotateHandler = ConcurrentRotatingFileHandler(dbglog, "a")
dbg_h.addHandler(dbg_rotateHandler)
dbg_h.setLevel(logging.ERROR)
# read some document as a buffer
buff = sys.stdin.read()
# generate issue
dbg_h.error('Before call to module....')
vba = VBA_Parser('None', data=buff)
dbg_h.error('After call to module....')
When I run this, I get the following...
cat somedocument.doc | ./replicate.py
ERROR:dbg_log:After call to module....
For some reason, my last dbg_h logger write is getting output to the console as well as being written to my dbg.log file. This only appears to happen AFTER the call to VBA_Parser.
cat dbg.log
Before call to module....
After call to module....
Anyone have any idea as to why this might be happening? I reviewed the source code of olevba and did not see anything that stuck out to me specifically.
Could this be a problem I should raise with the module author? Or am I doing something wrong with how I am using the cloghandler?
The oletools codebase is littered with calls to the root logger through logging.debug(...), logging.error(...), and so on. Since the author didn't configure the root logger, the default behavior is to write to sys.stderr. Since sys.stderr defaults to the console when running from the command line, you get what you're seeing.
You should contact the author of oletools since they're not using the logging system effectively. Ideally they would use a named logger and push the messages to that logger. As a work-around to suppress the messages you could configure the root logger to use your handler.
# Set a handler on the root logger
logging.root.addHandler(dbg_rotateHandler)
Be aware that this may lead to duplicated log messages.
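Alternatively, if you just want the noise suppressed rather than captured, a sketch (reusing dbg_h from the script above): raise the root logger's level before invoking the module, and stop your own records from propagating to any handler an implicit basicConfig() call may have attached to the root logger.
import logging

# Drop root-logger records below CRITICAL, silencing oletools' module-level
# logging.debug()/logging.error() calls.
logging.getLogger().setLevel(logging.CRITICAL)

# Keep 'dbg_log' records out of the root handlers so they only go to the
# ConcurrentRotatingFileHandler attached above.
dbg_h.propagate = False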