I am using Kedro, but I can't get my log file to be written to. I am following the tutorial; the log file was created but is still empty.
Steps done:
Configured logging
import logging
from logging.handlers import TimedRotatingFileHandler

class ProjectContext(KedroContext):
    def _setup_logging(self) -> None:
        log = logging.getLogger(__name__)
        handler = TimedRotatingFileHandler(filename='logs/mypipeline.log', when='d', interval=1)
        f_format = logging.Formatter('%(asctime)s %(levelname)s %(funcName)s %(lineno)d %(message)s')
        handler.setFormatter(f_format)
        log.addHandler(handler)
        log.setLevel(logging.DEBUG)
Used logging (in my nodes.py file)
import logging
log = logging.getLogger(__name__)
log.warning("Issue warning")
log.info("Send information")
And after running the pipeline, the log file is created but remains empty.
Any advice?
OK, problem solved! The logger definition was missing from the logging.yml file. Thank you all for your support!
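For reference, a minimal sketch of what such a logger definition can look like in the logging.yml file (the module path and handler name below are assumptions; use the logger name that your getLogger(__name__) call resolves to, and a handler that is already defined elsewhere in that file):

loggers:
    my_project.nodes:
        level: INFO
        handlers: [file_handler]
        propagate: no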
I wanted to have colored log files for one of my programs, so I tried the colored logging from one of the answers to a Stack Overflow question. It worked great in the Python terminal, but the log file showed strange characters and no color, as shown below.
In the terminal:
In the log file:
I have tried many approaches, but I still get the same result.
Can anyone help me? Below is my code.
import coloredlogs
import logging
# Create a logger object.
logger = logging.getLogger(__name__)
# Create a filehandler object
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# Create a ColoredFormatter to use as formatter for the FileHandler
formatter = coloredlogs.ColoredFormatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)
# Install the coloredlogs module on the root logger
coloredlogs.install(level='DEBUG')
logger.debug("this is a debugging message")
logger.info("this is an informational message")
logger.warning("this is a warning message")
logger.error("this is an error message")
logger.critical("this is a critical message")
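For what it's worth, the strange characters in the file are the ANSI colour escape codes emitted by ColoredFormatter; terminals render them as colours, but a plain text file just shows them literally. A minimal sketch of the usual fix, assuming you only want colour on the console, is to give the FileHandler a plain logging.Formatter and let coloredlogs handle the console alone:

import logging
import coloredlogs

logger = logging.getLogger(__name__)

# Plain formatter for the file handler, so no ANSI colour codes end up in spam.log
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
fh.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger.addHandler(fh)

# Coloured output only on the console (installs a coloured StreamHandler on the root logger)
coloredlogs.install(level='DEBUG')

logger.info("colour in the terminal, plain text in the file")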
I am trying to learn about logging. My code is logging the information that I want, but it's duplicating each message.
The first code block below is at the top of my py file.
import datetime as dt
import logging
import traceback
# setup the file name
log_file_name = dt.date.today().strftime('%Y_%B_%d')
log_path = "C:/some_path/"
# setup the logging
log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logger = logging.getLogger(__name__)
logger.setLevel('DEBUG')
file_handler = logging.FileHandler(log_path + log_file_name + '.log')
formatter = logging.Formatter(log_format)
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
In my function I'm using the line like below to log some information.
logger.debug('some message')
My .py file doesn't import any other modules that do any logging, so I'm a bit confused as to why the messages are being duplicated.
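Duplicated messages usually come from one of two things: the same handler being added more than once (for example, if the setup code at the top of the file runs again), or the record also propagating to a parent or root logger that has its own handler. A hedged sketch of the two common guards (the file name here is only an example):

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# Guard 1: only attach a handler if this logger doesn't already have one
if not logger.handlers:
    file_handler = logging.FileHandler('example.log')  # example path
    file_handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    logger.addHandler(file_handler)

# Guard 2: stop records from also being handled by the root logger's handlers
logger.propagate = False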
I am trying to set up a logger for my AWS Glue job using Python's logging module. I have a Glue job of type "Python Shell" using Python version 3.
Logging works fine if I instantiate the logger without any name, but if I give my logger a name, it no longer works, and I get an error which says: Log stream not found.
I have the following code in an example Glue job:
import sys
import logging

# Version 1 - this works fine
logger = logging.getLogger()
log_format = "[%(asctime)s %(levelname)-8s %(message)s"

# Version 2 - this fails
logger = logging.getLogger(name="foobar")
log_format = "[%(name)s] %(asctime)s %(levelname)-8s %(message)s"

date_format = "%a, %d %b %Y %H:%M:%S %Z"
log_stream = sys.stdout

if logger.handlers:
    for handler in logger.handlers:
        logger.removeHandler(handler)

logging.basicConfig(level=logging.INFO, format=log_format,
                    stream=log_stream, datefmt=date_format)

logger.info("This is a test.")
Note that I'm removing the handlers based on this post.
If I instantiate the logger using Version 1 of the code, it runs successfully and I am able to view the logs, as well as query them in CloudWatch.
If I run Version 2, giving the logger a name, the Glue job still succeeds. However, if I try to view the logs, I get the following error message:
Log stream not found
The log stream jr_f137743545d3d242618ac95d859b9146fd15d15a0aadce64d8f3ba991ffed012 could not be found. Check if it was correctly created and retry.
And I am also not able to query these logs in CloudWatch.
I have tried running this code locally using Python version 3.6.0, and both versions work. Additionally, both versions of this logging code work inside a Lambda function. They only fail in Glue.
This code worked for me:
import sys
import logging

root = logging.getLogger()
root.setLevel(logging.DEBUG)
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
root.addHandler(handler)
root.info("check")
I had a similar issue but fixed it with a combination of the correct roles and looking in the right place in CloudWatch. Make sure you're using the GlueServiceRole. Both Steve's logging code and yours are fine, but the place you are taken to in CloudWatch when you click the "Logs" button in Glue isn't the correct logging folder.
Go back into Log groups, then go into /aws-glue/python-jobs/error; that is where the logger writes to, while stdout writes to the folder /aws-glue/python-jobs/output. It's not a very intuitive setup, writing the logs to the error logs folder, but hey ho, I'm sure there is a way of configuring it to write where expected.
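If you would rather check programmatically than click through the console, here is a rough boto3 sketch (the log group names come from the paragraph above; the log stream name, i.e. the jr_... job-run id, is a placeholder you would copy from the Glue console):

import boto3

logs = boto3.client("logs")
run_id = "jr_..."  # placeholder: the job-run id shown by Glue

# logger output lands in the "error" group, plain stdout in the "output" group
for group in ("/aws-glue/python-jobs/error", "/aws-glue/python-jobs/output"):
    response = logs.get_log_events(logGroupName=group, logStreamName=run_id)
    for event in response["events"]:
        print(group, event["message"])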
You should be able to name the log stream by using the following (replace "logger-name-here" with your desired log stream name):
import logging
MSG_FORMAT = '%(asctime)s %(levelname)s %(name)s: %(message)s'
DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'
logging.basicConfig(format=MSG_FORMAT, datefmt=DATETIME_FORMAT)
logger = logging.getLogger("logger-name-here")
logger.setLevel(logging.INFO)
logger.info("Test log message")
I'm getting mad at Python's logging module, because I really have no idea anymore why the logger is printing the log messages to the console (at DEBUG level, even though I set my FileHandler to INFO). The log file is produced correctly.
But I don't want any logger information on the console.
Here is my configuration for the logger:
import logging

template_name = "testing"
fh = logging.FileHandler(filename="testing.log")
fr = logging.Formatter("%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s")
fh.setFormatter(fr)
fh.setLevel(logging.INFO)
logger = logging.getLogger(template_name)
# logger.propagate = False # this helps nothing
logger.addHandler(fh)
Would be nice if anybody could help me out :)
I found this question because I encountered a similar issue after I had removed logging.basicConfig(): it started printing all logs to the console.
In my case, I needed to change the filename on every run, to save a log file to a different directory each time. Adding the basic config at the top (even if the initial filename is never used) solved the issue for me (I can't explain why, sorry).
What helped me was adding the basic configuration in the beginning:
logging.basicConfig(filename=filename,
                    format='%(levelname)s - %(asctime)s - %(name)s - %(message)s',
                    filemode='w',
                    level=logging.INFO)
Then changing the filename by adding a handler in each run:
file_handler = logging.FileHandler(path_log + f'/log_run_{c_run}.log')
formatter = logging.Formatter('%(asctime)s : %(levelname)s : %(name)s : %(message)s')
file_handler.setFormatter(formatter)
logger_TS.addHandler(file_handler)
Also, a curious side note: if I don't set the formatter (file_handler.setFormatter(formatter)) before adding the handler with the new filename, the formatted fields (levelname, time, etc.) are missing from the log files.
So the key is to call logging.basicConfig first, then add the handler, as @StressedBoi69420 indicated above.
Hope that helps a bit.
You should be able to add a StreamHandler that handles stdout and set the handler's log level to a level above 50. (Standard log levels are 50 and below.)
Example of how I'd do it...
import logging
import sys
console_log_level = 100
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s",
                    filename="testing.log",
                    filemode="w")
console = logging.StreamHandler(sys.stdout)
console.setLevel(console_log_level)
root_logger = logging.getLogger("")
root_logger.addHandler(console)
logging.debug("debug log message")
logging.info("info log message")
logging.warning("warning log message")
logging.error("error log message")
logging.critical("critical log message")
Contents of testing.log...
2019-11-21 12:53:02,426,426 root INFO info log message
2019-11-21 12:53:02,426,426 root WARNING warning log message
2019-11-21 12:53:02,426,426 root ERROR error log message
2019-11-21 12:53:02,426,426 root CRITICAL critical log message
Note: The only reason I have the console_log_level variable is that I pulled most of this code from a default function I use that sets the console log level based on an argument value. That way, if I want to make the script "quiet", I can change the log level via a command-line argument to the script.
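As a hedged illustration of that last point, wiring the console level to a command-line flag could look roughly like this (the flag name and levels are just examples, not taken from the answer above):

import argparse
import logging
import sys

parser = argparse.ArgumentParser()
parser.add_argument("--quiet", action="store_true", help="suppress console log output")
args = parser.parse_args()

# 100 is above every standard level (CRITICAL is 50), so nothing reaches the console when --quiet is passed
console_log_level = 100 if args.quiet else logging.INFO

console = logging.StreamHandler(sys.stdout)
console.setLevel(console_log_level)
logging.getLogger("").addHandler(console)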
I know this question has been asked twice already on Stack Overflow, but nobody has answered it yet.
Here's my code:
import logging

logging.basicConfig(filename="logfile.log", filemode='w',
                    format='%(asctime)s:%(levelname)s:%(message)s',
                    datefmt='%m/%d/%Y %H:%M:%S')
logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical error message')
The console output is EMPTY; only logfile.log gets the log strings. But when I remove the filename argument, the logs start showing up in the console. I want to show the logs in the console and write them to my log file. What am I missing? Please answer with code. I have read the documentation two or three times already. Thank you.
Just get a handle to the root logger and add both a StreamHandler and a FileHandler:
import logging

logFormatter = logging.Formatter("%(asctime)s [%(threadName)-12.12s] [%(levelname)-5.5s] %(message)s")
logger = logging.getLogger()

# logPath and fileName are your own directory and base name for the log file
fileHandler = logging.FileHandler("{0}/{1}.log".format(logPath, fileName))
fileHandler.setFormatter(logFormatter)
logger.addHandler(fileHandler)

consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
logger.addHandler(consoleHandler)
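As a side note, on Python 3.3+ the same result can also be had in a single basicConfig call by passing both handlers explicitly; a minimal sketch (the file name and format are just examples matching the question):

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s:%(levelname)s:%(message)s',
    datefmt='%m/%d/%Y %H:%M:%S',
    handlers=[
        logging.FileHandler("logfile.log", mode="w"),  # write to the log file
        logging.StreamHandler(),                       # and echo to the console
    ],
)

logging.info("this goes to both the console and logfile.log")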