Python logging module doesn't work within installed windows service - python

Why is it that calls to logging framework within a python service do not produce output to the log (file, stdout,...)?
My python service has the general form:
import logging
import servicemanager
import win32event
import win32service
import win32serviceutil

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler('out.log')
logger.addHandler(fh)
logger.error("OUTSIDE")

class Service(win32serviceutil.ServiceFramework):
    _svc_name_ = "example"
    _svc_display_name_ = "example"
    _svc_description_ = "example"

    def __init__(self, args):
        logger.error("NOT LOGGED")
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
        servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
                              servicemanager.PYS_SERVICE_STARTED,
                              (self._svc_name_, ''))

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.hWaitStop)
        self.stop = True

    def SvcDoRun(self):
        self.ReportServiceStatus(win32service.SERVICE_RUNNING)
        self.main()

    def main(self):
        # Service logic
        logger.error("NOT LOGGED EITHER")
The first call to logger.error produces output, but not the two inside the service class (even after installing the service and making sure it is running).

I've found that only logging from within the actual service loop works with the logging module, and the log file ends up somewhere like C:\python27\Lib\site-packages\win32.
I abandoned logging with the logging module for Windows as it didn't seem very effective. Instead I started to use the Windows logging service, e.g. servicemanager.LogInfoMsg() and related functions. This logs events to the Windows Application log, which you can find in the Event Viewer (start->run->Event Viewer, Windows Logs folder, Application log).

You have to give the full path of the log file, because the service process does not run with your script's working directory.
e.g.
fh = logging.FileHandler('C:\\out.log')

Actually, the "OUTSIDE" logger is initialized twice, and the two initializations happen in different processes: one is the regular Python process and the other is the Windows service process.
For some reason the second one is not configured successfully, and neither are the loggers inside the service class, which run in that same process. That's why you can't find the "inside" log messages.
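Putting the two answers together, here is a minimal sketch (the helper and logger names are hypothetical, not from the question): build an absolute path for the FileHandler instead of relying on the service process's working directory, and guard against adding the handler twice:

```python
import logging
import os

def configure_service_logging(log_dir=None, filename="service.log"):
    """Attach a FileHandler using an absolute path, so the log lands in a
    known place no matter which working directory the service process uses.
    (Function and logger names here are illustrative assumptions.)"""
    if log_dir is None:
        # default: next to this module, rather than the process's cwd
        log_dir = os.path.dirname(os.path.abspath(__file__))
    logger = logging.getLogger("my_service")
    logger.setLevel(logging.DEBUG)
    if not logger.handlers:  # avoid duplicate handlers if called twice
        fh = logging.FileHandler(os.path.join(log_dir, filename))
        fh.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(fh)
    return logger
```

Calling this at the top of SvcDoRun (i.e. inside the service process), rather than at module import time, ensures the handler is configured in the process that actually runs the service loop.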

Related

Websocket code works on Windows but not Linux

I'm running the same code on both systems; it works on Windows, but will not run correctly on Ubuntu (16.04).
import websocket
import json

class WhatEver(object):
    def __init__(self):
        self.ws = websocket.WebSocketApp(
            'wss://beijing.51nebula.com/',
            on_message=self.on_ws_message,
            on_open=self.on_open
        )

    def rin_forever(self):
        print("start run forever")
        self.ws.run_forever()

    def on_ws_message(self, ws, message):
        print(message)
        self.ws.close()

    def _send_msg(self, params):
        call = {"id": 1, "method": "call",
                "params": params}
        self.ws.send(json.dumps(call))

    def on_open(self, ws):
        print("start open function")
        self._send_msg([1, "login", ["", ""]])

if __name__ == '__main__':
    ws = WhatEver()
    print("start")
    ws.rin_forever()
    print("close")
I've tried reinstalling all modules (using the same versions of Python and websocket-client on both Windows and Ubuntu). The output of this code is correct on the Windows system:
start
start run forever
start open function
{"id":1,"jsonrpc":"2.0","result":true}
close
But when it runs on Ubuntu, some of the print statements are missing:
start
start run forever
close
When I debug the code on Ubuntu, I found that the main thread stops in the self.ws.run_forever() call, never enters the on_open function, and then exits.
You are using two different versions of the library, with the version on Windows being older than version 0.53. As of version 0.53, the websocket project differentiates callback behaviour between bound methods and regular functions.
You are passing in bound methods (self.on_open and self.on_ws_message), at which point the ws argument is not passed in. Those methods are apparently expected to have access to the websocket already via their instance, probably because the expected use-case is to create a subclass from the socket class.
This is unfortunately not documented by the project, and the change appears to have caused problems for other people as well.
So for version 0.53 and newer, drop the ws argument from your callbacks:
class WhatEver(object):
    def __init__(self):
        self.ws = websocket.WebSocketApp(
            'wss://beijing.51nebula.com/',
            on_message=self.on_ws_message,
            on_open=self.on_open
        )

    # ...

    def on_ws_message(self, message):
        print(message)
        self.ws.close()

    # ...

    def on_open(self):
        print("start open function")
        self._send_msg([1, "login", ["", ""]])
And you can discover issues like these by enabling logging; the websocket module logs exceptions it encounters in callbacks to the logging.getLogger('websocket') logger. A quick way to see these issues is to enable tracing:
websocket.enableTrace(True)
which adds a logging handler just to that logging object, turns on logging.DEBUG level reporting for that object and in addition enables full socket data echoing.
Or you can configure logging to output messages in general with the logging.basicConfig() function:
import logging
logging.basicConfig()
which lets you see logging.ERROR level messages and up.
Using the latter option, the uncorrected version of the code prints:
start
start run forever
ERROR:websocket:error from callback <bound method WhatEver.on_open of <__main__.WhatEver object at 0x1119ec668>>: on_open() missing 1 required positional argument: 'ws'
close
You can verify the version of websocket-client you have installed by printing websocket.__version__:
>>> import websocket
>>> websocket.__version__
'0.54.0'

Python module "logging" double output

I want to use the logging module, but I'm having some trouble because it is outputting twice. I've read a lot of posts from people having the same issue, and log.propagate = False or log.handlers.pop() fixed it for them. This doesn't work for me.
I have a file called logger.py that looks like this:
import logging

def __init__():
    log = logging.getLogger("output")
    filelog = logging.FileHandler("output.log")
    formatlog = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    filelog.setFormatter(formatlog)
    log.addHandler(filelog)
    log.setLevel(logging.INFO)
    return log
So that from multiple files I can write:
import logger
log = logger.__init__()
But this is giving me issues. I've seen several solutions, but I don't know how to incorporate them into multiple scripts without defining the logger in all of them.
I found a solution which was really simple. All I needed to do was to add an if statement checking if the handlers already existed. So my logger.py file now looks like this:
import logging

def __init__():
    log = logging.getLogger("output")
    if not log.handlers:
        filelog = logging.FileHandler("output.log")
        formatlog = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
        filelog.setFormatter(formatlog)
        log.addHandler(filelog)
        log.setLevel(logging.INFO)
    return log
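To see the guard working, here is a self-contained sketch (with a get_logger function standing in for logger.__init__) that calls the initializer twice, as two importing scripts in the same process would:

```python
import logging

def get_logger():
    """Same pattern as the fixed logger.py: add the handler only once."""
    log = logging.getLogger("output")
    if not log.handlers:
        filelog = logging.FileHandler("output.log")
        filelog.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        log.addHandler(filelog)
        log.setLevel(logging.INFO)
    return log

log_a = get_logger()  # as called from the first script
log_b = get_logger()  # as called from a second script in the same process
assert log_a is log_b            # getLogger returns the same object by name
assert len(log_a.handlers) == 1  # the guard prevented a second FileHandler
```

Note that this only helps within a single process; separate processes each still open their own FileHandler, which is the situation the next answer addresses.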
Multiple scripts lead to multiple processes, so you unluckily create multiple logger objects from the logger.__init__() function.
Usually you have one script that creates the logger and the various processes, but as I understand it, you want multiple scripts logging to the same destination.
If you want multiple processes to log to the same destination, I recommend using inter-process communication such as named pipes, or a UDP/TCP port for logging.
There are also queue modules available in Python for sending an (atomic) logging entry to be written in one place, compared to appends to a file by multiple processes, which may turn 111111\n and 22222\n into 11212121221\n\n in the file.
Code snippet for logging server
Note: this example simply logs everything at the ERROR level...
import socket
import logging

class mylogger():
    def __init__(self, port=5005):
        log = logging.getLogger("output")
        filelog = logging.FileHandler("output.log")
        formatlog = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
        filelog.setFormatter(formatlog)
        log.addHandler(filelog)
        log.setLevel(logging.INFO)
        self.log = log
        UDP_IP = "127.0.0.1"  # localhost
        self.port = port
        self.UDPClientSocket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM)
        self.UDPClientSocket.bind((UDP_IP, self.port))

    def pollReceive(self):
        data, addr = self.UDPClientSocket.recvfrom(1024)  # buffer size is 1024 bytes
        print("received message:", data)
        self.log.error(data)

log = mylogger()
while True:
    log.pollReceive()
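For the sending side, a matching sketch (the helper name is hypothetical): each process sends its log lines as UDP datagrams to the server above. Because each datagram arrives whole, lines from different processes cannot interleave the way concurrent file appends can:

```python
import socket

def send_log(message, host="127.0.0.1", port=5005):
    """Send one log line as a single UDP datagram to the logging server."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(message.encode("utf-8"), (host, port))
    finally:
        sock.close()

send_log("worker 1: job finished")  # received and written by mylogger above
```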
You have to be very careful when adding a new handler to your logger with:
log.addHandler(...)
If you add more than one handler to your logger, you will get more than one output. Be aware that this only holds within a single process: a class derived from Thread runs in the same process, but a class derived from Process runs in a separate one. To ensure that you only add one handler to your logger, you should use the following code (this is an example with a SocketHandler):
logger_root = logging.getLogger()
if not logger_root.hasHandlers():
    socket_handler = logging.handlers.SocketHandler('localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    logger_root.addHandler(socket_handler)
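A sketch of that guard in action, wrapped in a hypothetical function name: calling the setup any number of times attaches at most one SocketHandler, since hasHandlers() reports handlers on the logger and all of its ancestors:

```python
import logging
import logging.handlers

def add_socket_handler_once():
    """Guarded setup: repeated calls attach at most one SocketHandler."""
    logger_root = logging.getLogger()
    if not logger_root.hasHandlers():
        socket_handler = logging.handlers.SocketHandler(
            'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)
        logger_root.addHandler(socket_handler)

add_socket_handler_once()
add_socket_handler_once()  # no-op: the guard sees the existing handler
```

Note that SocketHandler connects lazily on the first emit, so constructing it here does not require a log server to be running yet.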

Logging with WSGI server and flask application

I am using a WSGI server to spawn the servers for my web application, and I am having a problem with information logging.
This is how I am running the app
from gevent import monkey; monkey.patch_all()
from logging.handlers import RotatingFileHandler
import logging
from app import app  # this imports app

# create a file to store web logs
log = open(ERROR_LOG_FILE, 'w')
log.seek(0)
log.truncate()
log.write("Web Application Log\n")
log.close()

log_handler = RotatingFileHandler(ERROR_LOG_FILE, maxBytes=1000000, backupCount=1)
formatter = logging.Formatter(
    "[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s"
)
log_handler.setFormatter(formatter)
app.logger.setLevel(logging.DEBUG)
app.logger.addHandler(log_handler)

# run the application
server = wsgi.WSGIServer(('0.0.0.0', 8080), app)
server.serve_forever()
However, on running the application it is not logging anything. I guess it must be because of the WSGI server, because app.logger works in its absence. How can I log information when using WSGI?
According to the gevent WSGIServer documentation, you need to pass your log handler object to the WSGIServer object at creation:
log – If given, an object with a write method to which request (access) logs will be written. If not given, defaults to sys.stderr. You may pass None to disable request logging. You may use a wrapper, around e.g., logging, to support objects that don’t implement a write method. (If you pass a Logger instance, or in general something that provides a log method but not a write method, such a wrapper will automatically be created and it will be logged to at the INFO level.)
error_log – If given, a file-like object with write, writelines and flush methods to which error logs will be written. If not given, defaults to sys.stderr. You may pass None to disable error logging (not recommended). You may use a wrapper, around e.g., logging, to support objects that don’t implement the proper methods. This parameter will become the value for wsgi.errors in the WSGI environment (if not already set). (As with log, wrappers for Logger instances and the like will be created automatically and logged to at the ERROR level.)
so you should be able to do wsgi.WSGIServer(('0.0.0.0', 8080), app, log=app.logger)
You can log like this:
import logging
import logging.handlers as handlers
...
logger = logging.getLogger('MainProgram')
logger.setLevel(10)
logHandler = handlers.RotatingFileHandler('filename.log', maxBytes=1000000, backupCount=1)
logger.addHandler(logHandler)
logger.info("Logging configuration done")
...
# run the application
server = wsgi.WSGIServer(('0.0.0.0', 8080), app, log=logger)
server.serve_forever()

Python logging with rsyslog

I've inherited the following python file:
import logging
from logging.handlers import SysLogHandler

class Logger(object):
    # Return a logging instance used throughout the library
    def __init__(self):
        self.logger = logging.getLogger('my_daemon')

    # Log an info message
    def info(self, message, *args, **kwargs):
        self.__log(logging.INFO, message, *args, **kwargs)

    # Configure the logger to log to syslog
    def log_to_syslog(self):
        formatter = logging.Formatter('my_daemon: [%(levelname)s] %(message)s')
        handler = SysLogHandler(address='/dev/log', facility=SysLogHandler.LOG_DAEMON)
        handler.setFormatter(formatter)
        self.logger.addHandler(handler)
        self.logger.setLevel(logging.INFO)
I see that the init method looks for a logger called my_daemon, which I can't find anywhere on my system. Do I have to manually create the file and if so where should I put it?
Also, log_to_syslog appears to listen to socket /dev/log, and when I run sudo lsof /dev/log I get the following:
[vagrant#server]$ sudo lsof /dev/log
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
rsyslogd 989 root 0u unix 0xffff880037a880c0 0t0 8099 /dev/log
When I look at /etc/rsyslog.conf I see the following:
# rsyslog v5 configuration file
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none /var/log/messages
So I'm a bit lost here. My init function seems to be instructing python to use a logfile called my_daemon, which I can't find anywhere, and /etc/rsyslog.conf seems to be telling the machine to use /var/log/messages, which does not contain any logs from my app.
Update: here's how I'm trying to log messages
import os
from logger import Logger

class Server(object):
    def __init__(self, options):
        self.logger = Logger()

    def write(self, data):
        self.logger.info('Received new data from controller, applying')
        print 'hello'
The write method from server.py does print 'hello' to the screen, so I know we're getting close; it's just the logger that's not doing what I would expect.
my_daemon is not a log file - it is just a developer-specified name indicating an "area" in an application. See this information on what loggers are.
log_to_syslog is not listening on a socket; the rsyslog daemon is doing that. Also, I don't see where log_to_syslog is called - is it? If it isn't called, that would explain why no messages are showing up in the syslog.
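A minimal sketch of that fix, condensing the question's Logger class (the address parameter and the self.logger.log call are additions for illustration, since the original's __log helper is not shown): log_to_syslog() must be called once before any info() calls:

```python
import logging
from logging.handlers import SysLogHandler

class Logger(object):
    """Condensed version of the question's class; the key point is that
    log_to_syslog() must actually be called before logging anything."""
    def __init__(self):
        self.logger = logging.getLogger('my_daemon')

    def info(self, message, *args, **kwargs):
        self.logger.log(logging.INFO, message, *args, **kwargs)

    def log_to_syslog(self, address='/dev/log'):
        formatter = logging.Formatter('my_daemon: [%(levelname)s] %(message)s')
        handler = SysLogHandler(address=address, facility=SysLogHandler.LOG_DAEMON)
        handler.setFormatter(formatter)
        self.logger.addHandler(handler)
        self.logger.setLevel(logging.INFO)

log = Logger()
assert not log.logger.handlers  # unconfigured: info() messages go nowhere useful
log.log_to_syslog(('localhost', 514))  # UDP syslog here; the question's box uses '/dev/log'
log.info('Received new data from controller, applying')
```

Calling log_to_syslog() in Server.__init__, right after constructing the Logger, is the natural place to do this.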

Python Tornado - disable logging to stderr

I have minimalistic Tornado application:
import tornado.ioloop
import tornado.web

class PingHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("pong\n")

if __name__ == "__main__":
    application = tornado.web.Application([("/ping", PingHandler), ])
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
Tornado keeps reporting error requests to stderr:
WARNING:tornado.access:404 GET / (127.0.0.1) 0.79ms
Question: I want to prevent it from logging error messages. How?
Tornado version 3.1; Python 2.6
It's clear that "someone" is initializing the logging subsystem when we start Tornado. Here is the code from ioloop.py that reveals the mystery:
def start(self):
    if not logging.getLogger().handlers:
        # The IOLoop catches and logs exceptions, so it's
        # important that log output be visible. However, python's
        # default behavior for non-root loggers (prior to python
        # 3.2) is to print an unhelpful "no handlers could be
        # found" message rather than the actual log entry, so we
        # must explicitly configure logging if we've made it this
        # far without anything.
        logging.basicConfig()
basicConfig() is called and configures a default stderr handler.
So to set up proper logging for Tornado access messages, you need to:
Add a handler to tornado.access logger: logging.getLogger("tornado.access").addHandler(...)
Disable propagation for the above logger: logging.getLogger("tornado.access").propagate = False. Otherwise messages will arrive BOTH to your handler AND to stderr
The previous answer was correct, but a little incomplete. This will send everything to the NullHandler:
hn = logging.NullHandler()
hn.setLevel(logging.DEBUG)
logging.getLogger("tornado.access").addHandler(hn)
logging.getLogger("tornado.access").propagate = False
You could also quite simply (in one line) do:
logging.getLogger('tornado.access').disabled = True
