I'm using a deep learning library, Caffe, which is written in C++ and has a Python interface. One of my commands writes a lot of unnecessary output to the log, and I would really like to suppress it by temporarily disabling logging.
Caffe uses GLOG, and I've tried setting os.environ["GLOG_minloglevel"] = "2" to log only important messages. However, that didn't work. I've also tried temporarily shutting down all logging with the Python logging module using the code below, which didn't work either.
import logging

root_logger = logging.getLogger()
root_logger.disabled = True  # temporarily silence Python-side logging
net = caffe.Net(model_file, pretrained, caffe.TEST)
root_logger.disabled = False
You can silence the output with GLOG_minloglevel=3, but only by executing that line in Python before caffe is imported. So you can try:
import os
os.environ["GLOG_minloglevel"] = "3"
import caffe
You likely need to set the log level environment variable before you start Python. Or at least this worked for me:
GLOG_minloglevel=3 python script.py
which silenced the loading messages.
I am using the Python logging module, and it only prints messages at the ERROR level even though DEBUG is set as the level. I have used the config below in the __init__ method of every local class that I import into main.py.
Example:
In the class in gitlabcsr.py,
self.archfilepath = archfilepath
self.csrlogger = logging.getLogger("gitlabcsr")
self.csrlogger.setLevel(level=logging.DEBUG)
And using the logger in the same class:
self.csrlogger.debug("the reports are downloaded from gitlab repo")
However, the message only gets printed when I use self.csrlogger.error("the reports are downloaded from gitlab repo").
I don't know why DEBUG is not honored. Please let me know what I am missing here.
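A likely cause, for what it's worth: setting a level on a named logger does not attach any handler, and when no handler is configured anywhere, Python's logging falls back to its last-resort handler, which only emits WARNING and above. That would explain why error() prints while debug() does not. A minimal sketch of a fix, assuming the same logger name as in the question:

import logging

# Configure a handler once (e.g. in main.py) so DEBUG records have somewhere to go
logging.basicConfig(level=logging.DEBUG)

csrlogger = logging.getLogger("gitlabcsr")
csrlogger.setLevel(logging.DEBUG)
csrlogger.debug("the reports are downloaded from gitlab repo")  # now printed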
Is there a way to turn off console logging for Hydra, but keep file logging? I am encountering a problem where Hydra duplicates all my console prints. These prints are handled by PyTorch Lightning, and I want them to stay that way. I am fine with Hydra logging them to a file (once per print), but I do not want to see my prints twice in the console.
I think we have a similar issue here: https://github.com/facebookresearch/hydra/issues/1012
Have you tried setting
hydra/job_logging=none
hydra/hydra_logging=none
as suggested in the issue, and see if that works better for you?
I struggled a bit with the Hydra documentation, which is why I wanted to write a detailed explanation here so that other people have it easier. In order to use the answer proposed by #j_hu, i.e.:
hydra/job_logging=none
hydra/hydra_logging=none
with Hydra 1.0 (the stable version at the time I am writing this answer), you first need to:
Create a directory called hydra within your config directory.
Create two subdirectories: job_logging and hydra_logging.
Create a none.yaml file in each of those directories, as described below.
# @package _group_
version: 1
root: null
disable_existing_loggers: false
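For reference, assuming your config directory is called conf (the name is just a placeholder), the resulting layout looks like this:

conf/
├── config.yaml
└── hydra/
    ├── hydra_logging/
    │   └── none.yaml
    └── job_logging/
        └── none.yaml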
After this is done, you can use the none.yaml configuration to override the logging, either via the command line:
python main.py hydra/job_logging=none hydra/hydra_logging=none
or via the config.yaml file:
defaults:
  - hydra/hydra_logging: none
  - hydra/job_logging: none
I have a Python script with error handling using the logging module. Although this Python script works when imported in Google Colab, it doesn't log the errors to the log file.
As an experiment, I tried the following script in Google Colab just to see if it writes a log at all:
import logging
logging.basicConfig(filename="log_file_test.log",
filemode='a',
format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s',
datefmt='%H:%M:%S',
level=logging.DEBUG)
logging.info("This is a test log ..")
To my dismay, it didn't even create a log file named log_file_test.log. I tried running the same script locally, and it did produce a file log_file_test.log with the following text:
13:20:53,441 root INFO This is a test log ..
What is it that I am missing here?
For the time being, I am replacing the error logs with print statements, but I assume that there must be a workaround to this.
Perhaps you've reconfigured your environment somehow? (Try Runtime menu -> Reset all runtimes...) Your snippet works exactly as written for me.
logging.basicConfig can be run just once*. Any subsequent call to basicConfig is ignored.
* unless you are on Python 3.8+ and pass the flag force=True:
logging.basicConfig(filename='app.log',
                    level=logging.DEBUG,
                    force=True)  # resets any previous configuration
There are two workarounds.
(1) You can easily reset the Colab workspace with this command:
exit
Wait for it to come back and try your commands again.
(2) But if you plan to reset more than once and/or are learning to use logging, it may be better to use the %%python magic to run the entire cell in a subprocess, as sketched below.
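For example, a minimal sketch of such a cell (the filename is a placeholder):

%%python
# Runs the whole cell in a fresh subprocess, so basicConfig starts from a clean slate
import logging
logging.basicConfig(filename="log_file_test.log", level=logging.DEBUG)
logging.info("This is a test log ..")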
What is it that I am missing here?
A deeper understanding of how logging works. It is a bit tricky, but there are many good pages explaining the gotchas, e.g. https://realpython.com/python-logging.
This answer covers the issue.
You have to:
Clear the existing log handlers with logging.root.removeHandler.
Set the log level on the root logger with logging.getLogger().setLevel(logging.DEBUG).
Setting the level with logging.basicConfig alone did not work for me.
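Putting those steps together, a minimal sketch (reusing the filename from the question):

import logging

# Remove any handlers Colab pre-installed on the root logger
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)

logging.basicConfig(filename="log_file_test.log", level=logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
logging.info("This is a test log ..")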
I have a problem with logging in my Python script. I run the same script multiple times (to run several simulations), using Pool for better performance. In my script I use a logger with a MemoryHandler, defined as below:
import logging
import logging.handlers

capacity = 5000000000
filehandler_name = SOME_NAME
logger = logging.getLogger(logger_name)
logger.setLevel(logging.DEBUG)
filehandler = logging.FileHandler(filehandler_name)
memoryhandler = logging.handlers.MemoryHandler(
    capacity=capacity,
    flushLevel=logging.ERROR,
    target=filehandler,
)
logger.addHandler(memoryhandler)
and I log information using logger.info(...). However, I noticed that the logging does not always work. When I check the different log files (I have one log file per simulation), some contain data and others are empty, with no particular pattern to which is which; it seems random. I have tried many things, but it feels like I am missing something. Does anyone have any suggestion as to why the Python logger might not always work correctly?
Without a code snippet, I would guess it is caused by the multiprocessing you mention:
using Pool for increased performance..
You can check the official documentation on how to use the logging module with multiprocessing.
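For reference, the approach from the logging cookbook is to route records from all worker processes through a queue to a single listener that owns the file. A minimal sketch, with the file name, pool size, and simulation function as placeholders:

import logging
import logging.handlers
import multiprocessing

def worker_init(queue):
    # Each worker sends its records to the shared queue instead of a file
    root = logging.getLogger()
    root.addHandler(logging.handlers.QueueHandler(queue))
    root.setLevel(logging.DEBUG)

def run_simulation(i):
    logging.getLogger(__name__).info("simulation %d finished", i)

if __name__ == "__main__":
    queue = multiprocessing.Manager().Queue(-1)
    # A single listener in the main process writes every record to the file
    listener = logging.handlers.QueueListener(
        queue, logging.FileHandler("simulations.log"))
    listener.start()
    with multiprocessing.Pool(4, initializer=worker_init,
                              initargs=(queue,)) as pool:
        pool.map(run_simulation, range(10))
    listener.stop()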
I am new to Python. I have created some C/C++ extensions for Python and am able to build them with a Python distutils setup script. But I have to integrate this setup script into an existing build system, so I wrote another script that calls the setup script using the run_setup() method.
from distutils.core import run_setup

distributionObj = run_setup("setup.py", ["build_ext"])
Now, if any error occurs while building the extension (compiler, linker, or anything else), I want to be able to get that information, along with the error string, in the caller script so I can notify the build process.
Please give me some suggestions.
Setting DISTUTILS_DEBUG=1 in the environment will cause debug logging.
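For example, a sketch of how the caller script could combine this with catching build failures (the exception granularity is an assumption; compile and link problems surface as DistutilsError subclasses):

import os
os.environ["DISTUTILS_DEBUG"] = "1"  # must be set before distutils is imported

from distutils.core import run_setup
from distutils.errors import DistutilsError

try:
    distributionObj = run_setup("setup.py", ["build_ext"])
except DistutilsError as exc:
    # Compiler and linker failures arrive here with an error string
    print("Extension build failed: %s" % exc)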
distutils (the first version) also uses an internal logging implementation (somewhat hardcoded; it does not use the standard logging module). I think it is possible to set the verbosity level with code like:
import distutils.log

distutils.log.set_verbosity(-1)  # Disable logging in distutils
distutils.log.set_verbosity(distutils.log.DEBUG)  # Set DEBUG level
All available distutils logging levels:
DEBUG = 1
INFO = 2
WARN = 3
ERROR = 4
FATAL = 5
You can see the source code of distutils's Log class for reference; for Python 2.7 it is usually at /usr/lib/python2.7/distutils/log.py.
Passing the -v flag to python setup.py build to increase verbosity usually helps to get more detailed errors.
Note that the verbose option is not additive; it converts to a boolean. So no matter how many times you pass it, it will always be 1, and 1 always sets the level to INFO, which is the default anyway.