Turn off console logging for Hydra when using PyTorch Lightning

Is there a way to turn off console logging for Hydra, but keep file logging? I am encountering a problem where Hydra duplicates all my console prints. These prints are handled by PyTorch Lightning and I want them to stay that way. However, I am fine with Hydra logging them to a file (once per print), but I do not want to see my prints twice in the console.

I think we have a similar issue here https://github.com/facebookresearch/hydra/issues/1012
Have you tried setting
hydra/job_logging=none
hydra/hydra_logging=none
as suggested in the issue, and see if it works better for you?

I struggled a bit with the Hydra documentation, which is why I wanted to write a detailed explanation here so that other people have it easier. In order to use the answer proposed by #j_hu, i.e.:
hydra/job_logging=none
hydra/hydra_logging=none
with Hydra 1.0 (the stable version at the time I am writing this answer), you first need to:
Create a directory called hydra within your config directory.
Create two subdirectories: job_logging and hydra_logging.
Create a none.yaml file in each of those directories, with the contents described below.
# @package _group_
version: 1
root: null
disable_existing_loggers: false
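For reference, the resulting layout might look roughly like this (a sketch; the top-level config directory name conf is an assumption, use whatever directory you pass to Hydra):
conf/
    config.yaml
    hydra/
        job_logging/
            none.yaml
        hydra_logging/
            none.yaml
Both none.yaml files contain the four lines shown above.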
After this is done, you can use the none.yaml configuration to either override the logging via the command line:
python main.py hydra/job_logging=none hydra/hydra_logging=none
or via the config.yaml file:
defaults:
- hydra/hydra_logging: none
- hydra/job_logging: none
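For completeness, a minimal main.py that picks up such a config.yaml could look roughly like this (a sketch for Hydra 1.0; the conf directory name and the function name are assumptions):
import hydra
from omegaconf import DictConfig

@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # With hydra/job_logging=none and hydra/hydra_logging=none in the defaults list,
    # Hydra does not configure console logging, so PyTorch Lightning's console
    # output appears only once.
    print(cfg)

if __name__ == "__main__":
    main()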

Related

PyCharm project: source root and imports not updated?

I have a Python project (in PyCharm) with, let's say, two folders in it. One is called data and the other is algorithms. The data folder has a Python file where I import some data from an Excel sheet, and another file where I have defined some constants.
The algorithms folder has, let's say, one Python file where I import the constants and the data from the data folder. I use:
from data.constants import const
from data.struct import table
When I run the algorithms (that are in the algorithms folder), things work perfectly. But when I change a constant in the constants file or the data in the Excel sheet, nothing much changes. In other words, the constants are not updated when imported again, and the same goes for the imported Excel data. The old values of the constants and the table are used.
I tried to mark both folders as source root, but the problem persists.
What I do now is close PyCharm and reopen it again, but if there is a better way to handle this than closing it and losing the variables in the Python console, I would be grateful to know about it!
I am not sure if I understand correctly, but try the following. Once you change the constants in the constants file, try to import it again, i.e. do the following:
from data.constants import const
After this, do you still see that the constants are not changed?
Please try this:
from constants.constant import v
print('v =', v)
del v
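If that alone does not help (deleting the name does not clear Python's module cache in sys.modules), a small sketch of explicitly reloading the modules might be worth trying; the module names are taken from the question:
import importlib

import data.constants
import data.struct

# Force Python to re-execute the modules so updated constants/data are picked up
importlib.reload(data.constants)
importlib.reload(data.struct)

from data.constants import const
from data.struct import table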
The problem can be connected with caching. Here is a similar problem to yours, but for Spyder:
Spyder doesn't detect changes in imported python files
Check this post:
pycharm not updating with environment variables
As it suggests, you may have to take a few steps to set the environment variables,
or check the solution suggested here:
Pycharm: set environment variable for run manage.py Task.
I found the answer in this post:
https://stackoverflow.com/a/5399339/13890967
Basically, add these two lines under Settings > Console > Python Console:
%load_ext autoreload
%autoreload 2
see this answer as well for better visualization:
https://stackoverflow.com/a/60099037/13890967
and this answer for syntax errors:
https://stackoverflow.com/a/32994117/13890967

Problem with Logging Module in Google Colab

I have a Python script with error handling using the logging module. Although this Python script works when imported into Google Colab, it doesn't log the errors to the log file.
As an experiment, I tried the following script in Google Colab just to see if it writes a log at all:
import logging

logging.basicConfig(filename="log_file_test.log",
                    filemode='a',
                    format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s',
                    datefmt='%H:%M:%S',
                    level=logging.DEBUG)
logging.info("This is a test log ..")
To my dismay, it didn't even create a log file named log_file_test.log. I tried running the same script locally and it did produce a file log_file_test.log with the following text
13:20:53,441 root INFO This is a test log ..
What is it that I am missing here?
For the time being, I am replacing the error logs with print statements, but I assume that there must be a workaround to this.
Perhaps you've reconfigured your environment somehow? (Try the Runtime menu -> Reset all runtimes...) Your snippet works exactly as written for me.
logging.basicConfig can be run just once*
Any subsequent call to basicConfig is ignored.
* unless you are on Python 3.8 or later and use the flag force=True
logging.basicConfig(filename='app.log',
                    level=logging.DEBUG,
                    force=True,  # Resets any previous configuration
                    )
Workarounds (2)
(1) You can easily reset the Colab workspace with this command
exit
Wait for it to come back and try your commands again.
(2) But, if you plan to do the reset more than once and/or are learning to use logging, maybe it is better to use the %%python magic to run the entire cell in a subprocess.
What is it that I am missing here?
A deeper understanding of how logging works. It is a bit tricky, but there are many good web pages explaining the gotchas.
In Colab
https://realpython.com/python-logging
[This answer][1] covers the issue.
You have to:
Clear your log handlers from the environment with logging.root.removeHandler
Set log level with logging.getLogger('RootLogger').setLevel(logging.DEBUG).
Setting level with logging.basicConfig only did not work for me.
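Putting those steps together, a rough sketch of resetting the root logger in a notebook cell (for Python versions without force=True; the handler loop is one plausible way to do the cleanup, not something specific to Colab):
import logging

# Remove any handlers that Colab (or an earlier cell) already attached to the root logger
for handler in list(logging.root.handlers):
    logging.root.removeHandler(handler)

logging.basicConfig(filename="log_file_test.log",
                    filemode='a',
                    format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s',
                    datefmt='%H:%M:%S',
                    level=logging.DEBUG)
logging.info("This is a test log ..")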

Disable and re-enable logging created from C++ module in Python

I'm using a deep learning library, Caffe, which is written in C++ and has an interface to Python. One of my commands creates a lot of unnecessary output to the log and I would really like to remove that by temporarily disabling logging.
Caffe uses GLOG, and I've tried using os.environ["GLOG_minloglevel"] = "2" to only log important messages. However, that didn't work. I've also tried using the Python logging module to shut down all logging temporarily with the code below, which didn't work either.
import logging

root_logger = logging.getLogger()
root_logger.disabled = True
net = caffe.Net(model_file, pretrained, caffe.TEST)
root_logger.disabled = False
GLOG_minloglevel=3 works, but only by executing that line in Python before importing caffe. So you can try:
import os
os.environ["GLOG_minloglevel"] = "3"
import caffe
You likely need to set the log level environment variable before you start Python. Or at least this worked for me:
GLOG_minloglevel=3 python script.py
This silenced the loading messages.
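Pulling both suggestions together, a minimal in-script sketch (model_file and pretrained are the placeholders from the question; the GLOG level numbering is 0=INFO, 1=WARNING, 2=ERROR, 3=FATAL):
import os

# GLOG reads this variable when the Caffe library is loaded,
# so it must be set before the first `import caffe`.
os.environ["GLOG_minloglevel"] = "2"

import caffe

net = caffe.Net(model_file, pretrained, caffe.TEST)  # loads without the verbose INFO output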

How to get the error log of a distutils setup in Python?

I am new to Python. I have created some C/C++ extensions for Python and am able to build them with the help of a Python distutils setup script. However, I have to integrate this setup script into an existing build system, so I wrote another script to call it using the run_setup() method.
distributionObj = run_setup("setup.py",["build_ext"])
Now, if any error occurs while building the extension (compiler, linker, or anything else), I want to be able to get that information along with the error string in the caller script, so that I can notify the build process.
Please provide me with some suggestions.
Setting DISTUTILS_DEBUG=1 in the environment will cause debug logging.
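Building on that, a rough sketch of how the calling script could combine DISTUTILS_DEBUG with run_setup() so that build failures surface as exceptions (this relies on the assumption that, with debugging enabled, distutils re-raises build errors instead of converting them to SystemExit):
import os

# Must be set before distutils is imported, since distutils reads it at import time
os.environ["DISTUTILS_DEBUG"] = "1"

from distutils.core import run_setup
from distutils.errors import DistutilsError, CCompilerError

try:
    distribution_obj = run_setup("setup.py", ["build_ext"])
except (DistutilsError, CCompilerError) as exc:
    # Compiler/linker problems (CompileError, LinkError, ...) land here,
    # so the error string can be reported back to the outer build system
    print("Extension build failed:", exc)
    raise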
distutils1 (the first version) also uses an internal version of logging (a bit hardcoded; it does not use the standard logging module). I think it is possible to set the verbosity level with something like:
import distutils.log
distutils.log.set_verbosity(-1) # Disable logging in distutils
distutils.log.set_verbosity(distutils.log.DEBUG) # Set DEBUG level
All available distutils logging levels:
DEBUG = 1
INFO = 2
WARN = 3
ERROR = 4
FATAL = 5
You can see the source code of distutils's Log class for reference; for Python 2.7 it is usually in /usr/lib/python2.7/distutils/log.py.
Passing the -v parameter to python setup.py build to increase verbosity usually works to get more detailed errors.
The verbose option is not additive; it converts to a boolean. Thus, no matter how many times you invoke the verbose option, it will always be 1, and 1 always sets the level to INFO, which is the default anyway.

Combining Sphinx documentation from multiple subprojects: Handling indices, syncing configuration, etc

We have a multi-module project documented with the (excellent) Sphinx. Our setup is not unlike one described on the mailing list. Overall this works great! But we have a few questions about doing so:
The submodule tables of contents will include index links. At best these will link to the wrong indices. (At worst this seems to trigger a bug in Sphinx, but I'm using the devel version so that's reasonable). Is there a way of generating the index links only for the topmost toctree?
Are there best practices for keeping the Sphinx configuration in sync between multiple projects? I could imagine hacking something together around from common_config import *, but curious about other approaches.
While we're at it, the question raised in the mailing list post (alternative to symlinking subproject docs?) was never answered. It's not important to me, but it may be important to other readers.
I'm not sure what you mean by this. Your project's index appears to be just fine. Could you clarify this, please?
As far as I've seen, from common_config import * is the best approach for keeping configuration in sync.
I think the best way to do this is something like the following directory structure:
main-project/
    conf.py
    documentation.rst
sub-project-1/
    conf.py - imports from main-project/conf.py
    documentation.rst
sub-project-2/
    conf.py - likewise, imports from main-project/conf.py
    documentation.rst
Then, to just package sub-project-1 or sub-project-2, use this UNIX command:
sphinx-build main-project/ <output directory> <paths to sub-project docs you want to add>
That way, not only will the main project's documentation get built, the sub-project documentation you want to add will be added as well.
To package main-project:
sphinx-build main-project/ <output directory>
I'm pretty sure this scheme will work, but I've yet to test it out myself.
Hope this helps!
Regarding point 2 (including common configuration), I'm using:
In Python 2:
execfile (os.path.abspath("../../common/conf.py"))
In Python 3:
exec (open('../../common/conf.py').read())
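If the relative path breaks when sphinx-build is invoked from a different working directory, a variant anchored to the location of the current conf.py may be more robust (a sketch; it assumes Sphinx exposes __file__ inside conf.py):
import os

# Resolve the shared configuration relative to this conf.py rather than the
# current working directory, then execute it in this namespace.
_common_conf = os.path.normpath(
    os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "..", "common", "conf.py")
)
with open(_common_conf) as _f:
    exec(_f.read())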
Note that, unlike the directory structure presented by #DangerOnTheRanger, I prefer to keep a separate directory for common documentation, which is why common appears in the path above.
My common/conf.py file is a normal Sphinx file. Then, each of the specific documentation configurations includes that common file and overrides values as necessary, as in this example:
import sys
import os
execfile (os.path.abspath("../../common/conf.py"))
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.todo',
    'sphinx.ext.viewcode',
]
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
# If true, links to the reST sources are added to the pages.
html_copy_source = False
html_show_sourcelink = False
