Python logging not recording debug messages

I have created a package and I'm adding a log file to record debug information. Here is my __init__.py file:
import logging

logging.basicConfig(filename='logs/tmp.log',
                    format='%(levelname)s %(asctime)s :: %(message)s',
                    level=logging.DEBUG)
logger = logging.getLogger('logs/tmp.log')
logger.debug('First debug!')
When I ran the code, the file logs/tmp.log wasn't even created, so I created the file manually in case logging needs the file to exist, but that didn't work either. Any ideas on what I am doing wrong?
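A likely cause (my guess, not from the original post): logging.basicConfig() will not create the logs/ directory for you, and it silently does nothing if the root logger already has handlers (for example, when some other code configured logging first). A minimal sketch of a fix under those assumptions:
import logging
import os

os.makedirs('logs', exist_ok=True)  # basicConfig will not create missing directories
logging.basicConfig(filename='logs/tmp.log',
                    format='%(levelname)s %(asctime)s :: %(message)s',
                    level=logging.DEBUG,
                    force=True)  # Python 3.8+: replace any handlers configured earlier
logger = logging.getLogger(__name__)  # conventional: name loggers after the module, not the file path
logger.debug('First debug!')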

Related

Prevent Generation of Log File with Python logging

I have a simple script that I run as an exe file on Windows. However, when I am developing the script, I run it from the command line and use the logging module to output debug info to a log file. I would like to turn off the generation of the log file for my production code. How would I go about doing that?
This is the logging config I have set up now:
import logging
...
logging.basicConfig(filename='file.log',
                    filemode="w",
                    level=logging.DEBUG,
                    format="%(asctime)s: %(name)s - %(levelname)s - %(message)s",
                    datefmt='%d-%b-%y %H:%M:%S',
                    )
...
logging.debug("Debug message")
If you don't mind an empty log file being generated in production, you can simply raise the logging threshold to a level above logging.DEBUG, such as logging.INFO, so that messages logged with logging.debug() won't be written to the log file:
logging.basicConfig(filename='file.log',  # creates an empty file.log
                    filemode="w",
                    level=logging.INFO,
                    format="%(asctime)s: %(name)s - %(levelname)s - %(message)s",
                    datefmt='%d-%b-%y %H:%M:%S',
                    )
logging.debug("Debug message")  # nothing would happen
logging.info("FYI")  # logs 'FYI'
If you don't want logging to function at all, an easy approach is to override logging with a Mock object:
import logging
from unittest.mock import Mock

environment = 'production'
if environment == 'production':
    logging = Mock()
...
logging.basicConfig(...)  # nothing would happen
logging.debug(...)  # nothing would happen
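A lighter-weight alternative (my addition, not part of the original answer) is logging.disable(), which suppresses every record at or below a given level process-wide, without replacing the module:
import logging

environment = 'production'
if environment == 'production':
    logging.disable(logging.CRITICAL)  # silence everything up to and including CRITICAL

logging.debug("Debug message")  # suppressed
logging.error("Error message")  # suppressed too
If logging.basicConfig(filename=...) still runs, the empty file is still created, so skip that call in production as well if you want no file at all.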

How to configure the logging system in one file in Python

I have two files: the first is a TCP server, the second is a Flask app. They belong to one project, but they run in separate Docker containers, and since they are the same project they should write logs to the same file.
I tried to create my own logging library and import it into both files. I have tried lots of things.
First, I deleted the code below:
if logger.hasHandlers():
    logger.handlers.clear()
When I delete it, I get the same logs twice.
My structure:
docker-compose
Dockerfile
loggingLib.py
app.py
tcp.py
requirements.txt
...
My latest logging code:
from logging.handlers import RotatingFileHandler
from datetime import datetime
import logging
import time
import os, os.path

project_name = "proje_name"

def get_logger():
    if not os.path.exists("logs/"):
        os.makedirs("logs/")
    now = datetime.now()
    file_name = now.strftime(project_name + '-%H-%M-%d-%m-%Y.log')
    log_handler = RotatingFileHandler('logs/' + file_name, mode='a',
                                      maxBytes=10000000, backupCount=50)
    formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(funcName)s - %(message)s',
                                  '%d-%b-%y %H:%M:%S')
    formatter.converter = time.gmtime
    log_handler.setFormatter(formatter)
    logger = logging.getLogger(__name__)
    logger.setLevel(level=logging.INFO)
    if logger.hasHandlers():
        logger.handlers.clear()
    logger.addHandler(log_handler)
    return logger
It works, but only in one file: if app.py starts first, only it writes logs; the other file doesn't produce any logs.
Anything that directly uses files – config files, log files, data files – is a little trickier to manage in Docker than running locally. For logs in particular, it's usually better to set your process to log directly to stdout. Docker will collect the logs, and you can review them with docker logs. In this setup, without changing your code, you can configure Docker to send the logs somewhere else or use a log collector like fluentd or logstash to manage the logs.
In your Python code, you will usually want to configure the detailed logging setup once at the top level, on the root logger:
import logging

def main():
    logging.basicConfig(
        format='%(asctime)s - %(levelname)s - %(funcName)s - %(message)s',
        datefmt='%d-%b-%y %H:%M:%S',
        level=logging.INFO
    )
    ...
and in each individual module you can just get a local logger, which will inherit the root logger's setup:
import logging
LOGGER = logging.getLogger(__name__)
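As a concrete sketch (the tcp.py module name is taken from the question's structure; handle_connection is a hypothetical function), a module then needs no logging configuration of its own:
# tcp.py -- no logging configuration here, just a module-level logger
import logging

LOGGER = logging.getLogger(__name__)

def handle_connection(addr):
    # Emits through whatever handlers main() configured on the root logger
    LOGGER.info("connection from %s", addr)
Note that basicConfig() with no filename writes to standard error by default; Docker captures stderr alongside stdout, so docker logs shows both.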
With its default setup, Docker will capture log messages into JSON files on disk. If you generate a large volume of log messages in a long-running container, this can lead to local disk exhaustion (it has no effect on the memory available to processes). The Docker logging documentation advises using the local file logging driver, which does automatic log rotation. In a Compose setup you can specify logging: options:
version: '3.8'
services:
  app:
    image: ...
    logging:
      driver: local
You can also configure log rotation on the default json-file logging driver (note that option values must be strings):
version: '3.8'
services:
  app:
    image: ...
    logging:
      driver: json-file  # the default, can be omitted
      options:
        max-size: "10m"
        max-file: "50"
You "shouldn't" directly access the logs, but they are in a fairly stable format in /var/lib/docker, and tools like fluentd and logstash know how to collect them.
If you ever decide to run this application in a cluster environment like Kubernetes, that will have its own log-management system, but again designed around containers that directly log to their stdout. You would be able to run this application unmodified in Kubernetes, with appropriate cluster-level configuration to forward the logs somewhere. Retrieving a log file from opaque storage in a remote cluster can be tricky to set up.

How to import packages and some custom settings professionally in Python?

I'm using the logging package with some custom settings, and I have multiple directories, sub-directories, and files, so I'm copying and pasting the logging setup into every file. This seems unprofessional.
This is what I'm doing in every file:
import logging

logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s',
                    level=logging.INFO, datefmt='%Y-%m-%d %H:%M:%S')
I want to put this in something like a utils.py file, so I can just import a function from utils and start working.
Here is what I tried, in utils.py:
def logger():
    import logging
    logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s',
                        level=logging.INFO,
                        datefmt='%Y-%m-%d %H:%M:%S')
I import it in another file, but it doesn't seem to work. In database_service.py:
from src.utils import logger as logging

def connection_creator():
    try:
        client = MongoClient(DB_MACHINE, DB_PORT, serverSelectionTimeoutMS=2000)
        status = client.server_info()['ok']
        logging.info(f'Connection created Successfully! ["Status":{status}] `localhost` Port: `27017`')
        db_connection = client['techexpert']
        return db_connection
    except Exception as error:
        logging.error(f'in Creating connection `localhost` Port: `27017`! {error}')

db_connection = connection_creator()
Here logging.info() is an unresolved reference. So what would be the best, professional way to share this kind of setup?
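A common pattern (a sketch along the lines of the root-logger answer above; setup_logging is a name I've invented) is to configure the root logger once in a shared helper, call that from the program's entry point, and have every module ask for its own logger:
# src/utils.py
import logging

def setup_logging():
    # Configure the root logger exactly once; the entry point calls this at startup
    logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s',
                        level=logging.INFO,
                        datefmt='%Y-%m-%d %H:%M:%S')

# src/database_service.py
import logging

logger = logging.getLogger(__name__)  # a real Logger, so .info()/.error() resolve

def connection_creator():
    logger.info('Connecting...')
The import in the question fails because logger there is a plain function, and a function has no .info() attribute.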

Unable to log debug messages to the system log in Python

I'm using Python 3.4 on Mac OS X. I have the following code to set up a logger:
LOGGER = logging.getLogger(PROGRAM_NAME)
LOGGER.setLevel(logging.DEBUG)
LOGGER.propagate = False
LOGGER_FH = logging.FileHandler(WORKING_DIR + "/syslog.log", 'a')
LOGGER_FH.setLevel(logging.DEBUG)
LOGGER_FH.setFormatter(logging.Formatter('%(name)s: [%(levelname)s] %(message)s'))
LOGGER.addHandler(LOGGER_FH)
LOGGER_SH = logging.handlers.SysLogHandler(address='/var/run/syslog',
                                           facility=logging.handlers.SysLogHandler.LOG_USER)
LOGGER_SH.setLevel(logging.DEBUG)
LOGGER_SH.setFormatter(logging.Formatter('%(name)s: [%(levelname)s] %(message)s'))
LOGGER.addHandler(LOGGER_SH)
The FileHandler works perfectly, and I'm able to see all expected messages at all logging levels show up in the log. The SysLogHandler doesn't work correctly: I'm unable to see any LOGGER.info() or LOGGER.debug() messages in the syslog output. I can see error and warning messages, but not info or debug. Even tweaking the /etc/syslog.conf file has no effect (even after explicitly reloading the syslog daemon with launchctl). What am I missing here?
Try: address='/dev/log'
It's a bit confusing from the docs, but "address" is expected to be a unix domain socket, not a file.
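Applied to the setup above (a sketch; /dev/log is the usual socket on Linux, while macOS traditionally exposes /var/run/syslog):
import logging
import logging.handlers

LOGGER_SH = logging.handlers.SysLogHandler(
    address='/dev/log',  # a unix domain socket, not a regular file
    facility=logging.handlers.SysLogHandler.LOG_USER)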

Try to write into syslog

I am working on Linux, and the rsyslogd process is listening on port 514.
The following code can't write into /var/log/syslog.
Does anybody know what the problem is?
import logging
import logging.handlers
root_logger = logging.getLogger()
root_logger.setLevel(config.get_value("log_level"))
syslog_hdlr = SysLogHandler(address='/dev/log', facility=SysLogHandler.LOG_DAEMON)
syslog_hdlr.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(name)s: %(levelname)s %(message)s')
syslog_hdlr.setFormatter(formatter)
root_logger.addHandler(syslog_hdlr)
logger = logging.getLogger("imapcd.daemon")
logger.debug('test')
This code works fine on my system if I make some changes:
import logging.handlers as sh
syslog_hdlr = sh.SysLogHandler(address='/dev/log', facility=sh.SysLogHandler.LOG_DAEMON)
and
root_logger.setLevel(logging.DEBUG)
So check that the logging level you are getting from config is not more restrictive than DEBUG (e.g., if it is set to INFO, no debug messages are printed).
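One quick way to verify (a sketch; config.get_value is the question's own helper) is to print the root logger's effective level after setting it:
import logging

root_logger = logging.getLogger()
root_logger.setLevel(config.get_value("log_level"))

# DEBUG is 10; any higher effective level will swallow logger.debug() calls
print(logging.getLevelName(root_logger.getEffectiveLevel()))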
If you still don't see anything in syslog, try the syslog module and see if you get anything from there:
import syslog
syslog.syslog(syslog.LOG_ERR, "MY MESSAGE")
