I have my Scrapy app deployed to Scrapyd, and in the log file of each job:
http://{host}:6800/logs/{project_name}/{spider_name}/{job_id}.log
I'm not seeing the messages I logged with the logger I defined, but if I change the code to use self.logger.info(...), they do show up in the job's log file.
LOGGER = logging.getLogger(__name__)
LOGGER.info('...')  # this does not show up in the log file
self.logger.info('...')  # this shows up in the log file
Can anyone provide some insight, please?
Found the reason: it was because I had added:
LOGGER.propagate = False
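To illustrate the effect, here is a minimal, self-contained sketch; the StringIO stream stands in for Scrapyd's job log handler, and the logger name "myspider" is hypothetical:

```python
import io
import logging

# a stream handler on the root logger stands in for Scrapyd's log file handler
stream = io.StringIO()
logging.basicConfig(stream=stream, level=logging.INFO, force=True)  # force= needs Python 3.8+

logger = logging.getLogger("myspider")  # hypothetical module-level logger

logger.propagate = False
logger.info("hidden")    # never reaches the root handler

logger.propagate = True
logger.info("visible")   # propagates up to the root handler

print(stream.getvalue())
```

With propagate set to False, a record stops at the logger's own handlers (here there are none), which is exactly why the module-level LOGGER's messages never reached the job log.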
I have a Cloud Function running Python 3.7 runtime triggered from a Pub/Sub Topic.
In the code, I have places where I use print() to write logs. However, when I go to the logs tab of my function, I see that an extra blank line is added after each log. I would like to remove these, since this is basically doubling my usage of the Logging API.
I have tried using print(message, end="") but this did not remove the blank lines.
Thanks in advance.
Although I have not found the root cause of the blank lines, I was able to resolve this by using the google-cloud-logging library, as suggested by John in the comments on my question.
The resulting code is below:
import logging
import os

import google.cloud.logging

# set up the Cloud Logging client when running on GCP
if not os.environ.get("DEVELOPMENT"):  # custom environment variable
    # only on GCP
    logging_client = google.cloud.logging.Client()
    logging_client.setup_logging()

# define logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)  # minimum logging level for the logger

# only add a handler locally: on Cloud Functions, setup_logging() already
# attaches a handler, and adding another one results in duplicate log
# entries with severity = ERROR
if os.environ.get("DEVELOPMENT"):
    console_handler = logging.StreamHandler()  # handler that writes to the console
    console_handler.setLevel(logging.DEBUG)  # minimum logging level for the handler
    logger.addHandler(console_handler)  # add handler to logger

def my_function():
    logger.info('info')
This code will:
- not send logs to GCP when the function is executed locally
- print both INFO and DEBUG logs, locally and on GCP
Thank you both for your suggestions.
Instead of using print, use a logger:
import logging

logger = logging.getLogger("YourCloudFunctionLoggerName")
logger.setLevel(logging.DEBUG)
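If you do want to stay with print, one alternative sketch is to emit each entry as a single-line JSON object, which Cloud Logging parses as a structured log (the "severity" and "message" field names follow the structured-logging convention; format_entry is a hypothetical helper name):

```python
import json

def format_entry(message, severity="INFO"):
    # one JSON object per line; Cloud Logging reads the "severity"
    # and "message" fields from JSON written to stdout
    return json.dumps({"severity": severity, "message": message})

print(format_entry("function started"))
print(format_entry("something went wrong", severity="ERROR"))
```

Because each entry is exactly one line, there is no opportunity for a trailing blank line to be logged separately.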
The FastAPI library that I import for an API I have written writes many logging.INFO-level messages to the console, which I would like to redirect either to a file-based log or, ideally, to both the console and a file. Here is an example of FastAPI module logging events in my console:
So I've tried to implement this Stack Overflow answer ("Easy-peasy with Python 3.3 and above"), but the log file it creates ("api_screen.log") is always empty.
# -------------------------- logging ----------------------------
import logging

logging_file = "api_screen.log"
logging_level = logging.INFO
logging_format = ' %(message)s'
logging_handlers = [logging.FileHandler(logging_file), logging.StreamHandler()]
logging.basicConfig(level=logging_level, format=logging_format, handlers=logging_handlers)
logging.info("------logging test------")
Even though my own "------logging test------" message does appear on the console among the other FastAPI logs:
As you can see, it has created the file, but the file has size zero.
So what do I need to do also to get the file logging working?
There are multiple issues here. First and most importantly: basicConfig does nothing if the root logger already has handlers configured, which is the case once FastAPI has set up its own logging. So the handlers you are creating are never used. When you call logging.info() you are sending a record to the root logger, which is printed because FastAPI has added a handler to it. You are also not setting the level on your handlers. Try this code instead of what you currently have:
import logging

logging_file = "api_screen.log"
logging_level = logging.INFO
logging_fh = logging.FileHandler(logging_file)
logging_sh = logging.StreamHandler()
logging_fh.setLevel(logging_level)
logging_sh.setLevel(logging_level)
root_logger = logging.getLogger()
root_logger.setLevel(logging_level)  # the root logger defaults to WARNING
root_logger.addHandler(logging_fh)
root_logger.addHandler(logging_sh)
logging.info('--test--')
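On Python 3.8+ there is also a shorter route: basicConfig grew a force parameter that removes any handlers already attached to the root logger before applying the new configuration, so the original snippet works even after a framework has configured logging first:

```python
import logging

# force=True (Python 3.8+) discards handlers a framework already attached
# to the root logger, so this configuration actually takes effect
logging.basicConfig(
    level=logging.INFO,
    format=' %(message)s',
    handlers=[logging.FileHandler("api_screen.log"), logging.StreamHandler()],
    force=True,
)
logging.info("------logging test------")
```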
I'm using Python's logging module. I've initialized it as:
import logging
logger = logging.getLogger(__name__)
in every one of my modules. Then, in the main file:
logging.basicConfig(level=logging.INFO,filename="log.txt")
Now, in the app I'm also using WSGIServer from gevent. The initializer takes a log argument where I can pass a logger instance. Since this is an HTTP server, it's very verbose.
I would like to log all of my app's regular logs to "log.txt" and WSGIServer's logs to "http-log.txt".
I tried this:
logging.basicConfig(level=logging.INFO,filename="log.txt")
logger = logging.getLogger(__name__)
httpLogger = logging.getLogger("HTTP")
httpLogger.addHandler(logging.FileHandler("http-log.txt"))
httpLogger.addFilter(logging.Filter("HTTP"))
http_server = WSGIServer(('0.0.0.0', int(config['ApiPort'])), app, log=httpLogger)
This logs all HTTP messages into http-log.txt, but also to the main log file.
How can I send all but HTTP messages to the default logger (log.txt), and HTTP messages only to http-log.txt?
EDIT: Since people are quickly jumping in to point out that Logging to two files with different settings has an answer, please read the linked answer and you'll see they don't use basicConfig but rather initialize each logger separately. That is not how I'm using the logging module.
Add the following line to disable propagation:
httpLogger.propagate = False
Then it will no longer propagate messages to its ancestors' handlers, which include the root logger, for which you have set up the general log file.
I want to redirect all the logs for my web app to a particular log file.
I am looking for one common setting (code or ini) that can redirect all of the app's logs to that file.
You should be able to use the regular python logging handlers to set up file logging. See Pyramid's docs and the Python docs. For example, here's part of a logger config I've used to set up a rotating file log on the file system:
[handler_filelog]
class = logging.handlers.TimedRotatingFileHandler
args = ('/path/to/mylog.log','D', 1, 15)
level = NOTSET
formatter = generic
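For reference, the same handler expressed in Python rather than ini form (the path is shortened to a relative file name here; the ini args map positionally to filename, when, interval, and backupCount):

```python
import logging
import logging.handlers

# rotate daily (when='D', interval=1) and keep the 15 most recent backups,
# matching args = ('/path/to/mylog.log', 'D', 1, 15) in the ini section
handler = logging.handlers.TimedRotatingFileHandler(
    "mylog.log", when="D", interval=1, backupCount=15
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(handler)
root.info("application started")
```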
I'm using Django with mod_python on a remote server and need to add some logging.
To try adding a test log, I added this code in settings.py:
import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s', filename='/home/voip_maker/django.log', filemode='a+')
logging.debug("hello world!")
logging.info("this is some interesting info!")
logging.error("this is an error!")
Then I restart Apache and try to open my project via the web,
but there are no changes in the log file.
Can you please help me with this issue: how must I configure logging to see changes in the log file?
Thanks very much.
Django has made logging even simpler. Please check the docs:
https://docs.djangoproject.com/en/1.4/topics/logging/
All you need to do is add a LOGGING dictionary to your settings.py file.
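For example, a minimal LOGGING dictionary that sends everything to a file (the relative file name and formatter here are illustrative; substitute your own path such as /home/voip_maker/django.log):

```python
# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {
            'format': '%(asctime)s %(levelname)s %(message)s',
        },
    },
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'django.log',  # e.g. /home/voip_maker/django.log
            'formatter': 'simple',
        },
    },
    'root': {
        'handlers': ['file'],
        'level': 'DEBUG',
    },
}
```

Django passes this dict to logging.config.dictConfig() at startup, so subsequent logging calls from any module land in the file without further setup.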