Sometimes I need to check the output of a Python web application.
If I run the application directly, I can see the output in the terminal.
But I have no idea how to check it under mod_wsgi. Will it appear in a separate Apache log? Or do I need to add some code for logging?
Instead of print "message", you could use sys.stderr.write("message")
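In a WSGI application that might look like the following (a minimal sketch; under mod_wsgi, anything written to stderr ends up in Apache's error log, either the main one or the virtual host's ErrorLog):
import sys

def application(environ, start_response):
    # writes to stderr are captured by mod_wsgi and routed to the Apache error log
    sys.stderr.write("handling %s\n" % environ.get('PATH_INFO', '/'))
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, world\n']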
For logging to stderr with a StreamHandler:
import logging
import sys

handler = logging.StreamHandler(stream=sys.stderr)
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
log.addHandler(handler)
log.info("Message")
wsgilog is simple and can automatically redirect standard output to a log file for you. It's a breeze to set up, and I haven't had any real problems with it.
No WSGI application component which claims to be portable should write to standard output. That is, an application should not use the Python print statement without directing output to some alternate stream. An application should also not write directly to sys.stdout. (ModWSGI Wiki)
So don't. Instead, I recommend using a log aggregation tool like Sentry. It is useful while developing and a must-have in production.
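Getting it wired up is only a few lines (a sketch assuming the sentry-sdk package; the DSN is a placeholder you would replace with the one from your Sentry project):
import logging

import sentry_sdk
from sentry_sdk.integrations.logging import LoggingIntegration

# placeholder DSN; use the one from your Sentry project settings
sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    integrations=[LoggingIntegration(
        level=logging.INFO,         # keep INFO and above as breadcrumbs
        event_level=logging.ERROR,  # send ERROR and above as Sentry events
    )],
)

logging.getLogger(__name__).error("this ends up in Sentry")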
Related
### Start 4 subprocesses ###
server = tornado.httpserver.HTTPServer(app)
server.bind(8000)
server.start(4) # 4 subprocesses
### Logger using TimedRotatingFileHandler within each app ###
timefilehandler = logging.handlers.TimedRotatingFileHandler(
    filename=os.path.join(dirname, logname + '.log'),
    when='MIDNIGHT',
    interval=1,
    encoding='utf-8'
)
Using tornado with multiple subprocesses and a logger resulted in multiple log files suffixed like this (when using the file name as the logger name):
service_0.log
service_1.log
service_2.log
service_3.log
Is it possible to have all the subprocesses write to one place in tornado? Or would it be better to use a log aggregation tool to handle the hassle, since it is quite inconvenient to check the logs one by one? Any ideas? Thanks in advance.
You can't have different (sub)processes safely write to a single file - if you want to solve that, you should use a log aggregator, where the different tornado servers log to a common endpoint (either in the cloud or locally). If you're not inclined to use a third-party solution, you can write one in Tornado.
Look into https://docs.python.org/3/library/logging.handlers.html to see if there's anything you like.
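For example, each process can ship its records to one listening process with a SocketHandler, and that single listener writes the combined file (a rough sketch of the sending side only; the logger name is a placeholder, and a receiver such as the socket server from the logging cookbook has to run separately):
import logging
import logging.handlers
import os

# in each tornado (sub)process: send records over TCP instead of to a file;
# a separate listener process receives them and writes one combined log
sock_handler = logging.handlers.SocketHandler(
    'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)

log = logging.getLogger('service')
log.setLevel(logging.INFO)
log.addHandler(sock_handler)
log.info("hello from pid %d", os.getpid())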
Or you can grep 4 files at the same time.
p.s. IIRC using subprocesses is not recommended for production, so I would suggest you run 4 processes with different ports and use the port in the log name as well.
I just discovered this very strange behaviour of the logging module in Spyder:
import logging
logging.getLogger("hurricane").handlers
Out[2]: [] # expected
logging.getLogger("tornado").handlers
Out[3]: [<StreamHandler <stderr> (NOTSET)>] # Where does that StreamHandler come from?!
Note that these are the first lines typed into a freshly started interpreter, so I haven't imported tornado or any other package except logging. Yet, unlike any other logger I have tried to get, it comes with a StreamHandler already attached.
Why?
Related question: How to prevent Python Tornado from logging to stdout/console?
I think Tornado uses logging for its normal-operation request logging. This is not a great idea - an application is supposed to work exactly as before if all logging is disabled (aside from the logging itself, of course), but numerous web server authors use logging as a convenience for part of their normal operation rather than just for diagnostics. I believe you can turn this off using a logging=none configuration option; see this page for more information.
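With tornado.options that would look roughly like this (a sketch; it assumes the app uses Tornado's option parsing, since that is what installs the default stderr handler):
from tornado.options import options, parse_command_line

# Tornado defines a built-in "logging" option; setting it to "none"
# (or passing --logging=none on the command line) makes it skip the
# stderr handler it would otherwise install
options.logging = "none"
parse_command_line()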
In an application I want to collect messages related to some dedicated part of the processing, and then show these messages later at the user's request. I would like to record severity (e.g. info, warning), the time of the message, etc., so I considered using the Python standard logging module to collect the messages and related information. However, I don't want these messages to go to a console or a file.
Is there a way to create a Python logger, using logging, where the messages are kept internally (in memory) only, until read out by the application? I would expect code starting like:
log = logging.getLogger('my_logger')
# ... some config of log for internal only; not to console
log.error('Just some error')
# ... some code to get/clear messages in log until now
I have looked in the logging module documentation (Logging facility for Python), but most examples show immediate output to a file or the console, so an example of, or reference for, internal logging would be appreciated.
You should just use another handler. You could use a StreamHandler over an io.StringIO, which would simply log to memory:
import io
import logging

log = logging.getLogger('my_logger')
memlog = io.StringIO()
log.addHandler(logging.StreamHandler(memlog))
All logging sent to log can then be found in memlog.getvalue().
Of course, this is just a simple handler that concatenates everything into one single string, although in Python >= 3.2 each record is at least terminated (by default with a \n). For more specific requirements, you could have a look at a QueueHandler or implement a dedicated handler.
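For instance, a QueueHandler keeps each LogRecord object intact, so severity and time are still available when you read the messages back (a sketch using only the standard library, Python 3.2+):
import logging
import queue
from logging.handlers import QueueHandler

record_queue = queue.Queue()               # records accumulate here, in memory only

log = logging.getLogger('my_logger')
log.setLevel(logging.INFO)
log.addHandler(QueueHandler(record_queue))

log.error('Just some error')

# later, at the user's request: drain the queue and present the messages
while not record_queue.empty():
    record = record_queue.get_nowait()
    print(record.levelname, record.created, record.getMessage())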
References: logging.handlers in the Python Standard Library reference manual.
I am trying to set up logging when using IPython parallel. Specifically, I would like to redirect log messages from the engines to the client. So, rather than having each of the engines log individually to its own log file, as in IPython.parallel - can I write my own log into the engine logs?, I am looking for something like How should I log while using multiprocessing in Python?
Based on reviewing the IPython code base, I have the impression that the way to do this would be to register a zmq.log.handlers.PUBHandler with the logging module (see the documentation in iploggerapp.py). I have tried this in various ways, but none seem to work. I also tried to register a logger via IPython.parallel.util.connect_engine_logger, but this also does not appear to do anything.
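Registering such a handler would look roughly like this (a sketch; the endpoint is a placeholder, and it assumes pyzmq's PUBHandler accepts an already created PUB socket):
import logging

import zmq
from zmq.log.handlers import PUBHandler

ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)
pub.connect('tcp://127.0.0.1:20202')   # placeholder endpoint

log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
log.addHandler(PUBHandler(pub))
log.info("message from an engine")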
Update
I have made some progress on this problem. If I set c.IPEngineApp.log_url in ipengine_config, then the logger of the IPython application has the appropriate EnginePUBHandler. I checked this via
%%px
from IPython.config import Application
log = Application.instance().log
print(log.handlers)
which indicated that the application logger has an EnginePUBHandler for each engine. Next, I can start the iplogger app in a separate terminal and see the log messages from each engine.
However, what I would like to achieve is to see these log messages in the notebook, rather than in a separate terminal. I have tried starting iplogger from within the notebook via a system call, but this crashes.
My CherryPy application (3.2.2 on Python 2.6) uses third-party libraries, and these libraries use standard logging internally, like so:
logger = logging.getLogger(__name__)
logger.info("a message from some library")
Now, in my cherrypy config, I have:
log.access_file = '/path/access.log'
log.error_file = '/path/error.log'
but only CherryPy's own messages ever appear in these two files, none of the other logging. But I need all the logging there, not just what CherryPy itself issues internally.
Is there a way to capture all output, including the third-party logging (which I assume goes to stdout/stderr and then disappears, since the process is a detached daemon), into /path/error.log? Can I plug arbitrary stdout/stderr into CP's log somehow?
Is this what you're looking for? All output into the error log?
python yourCherryServer.py &>> /path/error.log
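Alternatively, from inside the application you can point the rest of the logging tree at the same file, since the third-party loggers propagate to the root logger (a sketch using plain stdlib logging, not a CherryPy-specific API; the path is the one from the config above):
import logging

# route everything that propagates to the root logger (i.e. the
# third-party library messages) into the same file CherryPy uses
# for its error log
root = logging.getLogger()
root.setLevel(logging.INFO)

handler = logging.FileHandler('/path/error.log')
handler.setFormatter(logging.Formatter(
    '%(asctime)s %(name)s %(levelname)s %(message)s'))
root.addHandler(handler)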