I’m developing a Python Django REST API, and I need to add a logger that sends messages to syslog.
Do I just need to define the logger and syslog handler in settings.py?
I’m relatively new to Django and to the syslog protocol, so I’d appreciate any help, including which Python modules to use.
Yes, you can, for example, use the standard library's syslog module to log messages. Using it is pretty easy.
import syslog
# create syslog handle
syslog.openlog(ident="settings.py", facility=syslog.LOG_LOCAL0)
# log some message
syslog.syslog(syslog.LOG_INFO, "Some log message.")
The log message will now get written to /var/log/local0.log; it will probably also get written to /var/log/syslog or /var/log/messages, depending on the standard configuration of your /etc/rsyslog.conf file.
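Since the question specifically asks about settings.py: the standard library's logging.handlers.SysLogHandler can also be wired into Django's LOGGING dictionary, so you never call the syslog module directly. A minimal sketch, where the /dev/log socket address, the local0 facility, the logger layout, and the format string are assumptions to adapt to your rsyslog setup:
# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'syslog': {
            'format': 'myapi: %(levelname)s %(name)s %(message)s',
        },
    },
    'handlers': {
        'syslog': {
            'class': 'logging.handlers.SysLogHandler',
            'address': '/dev/log',  # Unix socket of the local syslog daemon
            'facility': 'local0',
            'formatter': 'syslog',
        },
    },
    'root': {
        'handlers': ['syslog'],
        'level': 'INFO',
    },
}
With such a configuration, any logging.getLogger(...).info(...) call in your views ends up in syslog without further code.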
Related
Suppose there is a system that runs on GCP but can, as a backup, also be run locally.
When running on the cloud, stackdriver is pretty straightforward.
However, I need my system to push to stackdriver if on the cloud, and if not on the cloud, use the local python logger.
I also don't want to include any logic to do so, and this should be automatic.
When logging, log straight to the local Python logger.
If on GCP -> also push these logs to Stackdriver.
I can write logic that could implement this but that is bad practice. There surely is a direct way of getting this to work.
Example
import google.cloud.logging
client = google.cloud.logging.Client()
client.setup_logging()
import logging
cl = logging.getLogger()
file_handler = logging.FileHandler('file.log')
cl.addHandler(file_handler)
logging.info("INFO!")
This will basically log to python logger, and then 'always' upload to cloud logger. How can I have it so that I don't need to explicitly add import google.cloud.logging and basically if stackdriver is installed, it directly gets the logs? Is that even possible? If not can someone explain how this would be handled from a best practices perspective?
Attempt 1 [works]
Created /etc/google-fluentd/config.d/workflow_log.conf
<source>
  @type tail
  format none
  path /home/daudn/this_log.log
  pos_file /var/lib/google-fluentd/pos/this_log.pos
  read_from_head true
  tag workflow-log
</source>
Created /var/log/this_log.log
pos_file /var/lib/google-fluentd/pos/this_log.pos exists
import logging
cl = logging.getLogger()
cl.setLevel(logging.INFO)  # the root logger defaults to WARNING, which would drop the info record below
file_handler = logging.FileHandler('/var/log/this_log.log')
file_handler.setFormatter(logging.Formatter("%(asctime)s;%(levelname)s;%(message)s"))
cl.addHandler(file_handler)
logging.info("info_log")
logging.error("error_log")
This works! Look for your logs under the specific VM instance, not under global > python.
Fortunately, this is a scenario that is already handled. Stackdriver Logging is a very versatile logging framework. There are a variety of logging APIs, and Google's intent was not that you would have to rewrite all your existing applications to use the native Stackdriver logging APIs. Instead, you can use a logging API of your choice (including standard and de facto APIs), and those logging APIs will then map to Stackdriver. If your code runs outside a GCP environment, or you simply wish to switch to an alternate log collector, your applications do not have to be re-coded or recompiled.
A list of the logging APIs available for different languages can be found at Setting Up Language Runtimes and this includes Setting Up Stackdriver Logging for Python.
For Python, at runtime you have a configuration property (e.g. an environment variable) that declares whether or not you wish to use Stackdriver. If it is set to true, then, and only then, would you execute the logic that sets up the native Python logging handler for Stackdriver; otherwise that logic is not called, and you have no dependency on Stackdriver.
A possible piece of code might be:
import os

if os.environ.get('USE_STACKDRIVER') == 'true':
    import google.cloud.logging
    client = google.cloud.logging.Client()
    client.setup_logging()
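The rest of the application keeps using the standard logging module unchanged; as a rough sketch of how the two cases converge (assuming the conditional block above has already run):
import logging

# If the Stackdriver handler was attached above, basicConfig() is a no-op
# because the root logger already has a handler; otherwise it falls back
# to a plain stream handler for local runs.
logging.basicConfig(level=logging.INFO)
logging.info("Goes to the local logger, and to Stackdriver when enabled.")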
You do not need to specifically enable or use Stackdriver in your program. You can use the Python logger and write to any file you want. However, Stackdriver only logs specific log files. This means that you would need to manually set up Stackdriver to log "your" log files.
In your example, you are writing to file.log. Modify /etc/google-fluentd/config.d/mylogfile.conf to include the following. You will need to specify the full path for file.log and not just the file name. In this example, I named it /var/log/mylogfile.log. This example also assumes that your logs start each line with a date.
<source>
  @type tail
  # Parse the timestamp, but still collect the entire line as 'message'
  format /^(?<message>(?<time>[^ ]*\s*[^ ]* [^ ]*) .*)$/
  path /var/log/mylogfile.log
  pos_file /var/lib/google-fluentd/pos/mylogfile.log.pos
  read_from_head true
  tag auth
</source>
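After editing the configuration, restart the google-fluentd agent (for example with sudo service google-fluentd restart) so that it picks up the new source.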
For more information read the following document:
Stackdriver - Configuring the Agent
Now your program will run outside GCP, and when it runs on a configured instance it will also log to Stackdriver.
Note: I would do the opposite of what you have asked. I would always use Stackdriver. When not running in GCP I would manually set up Stackdriver on my desktop, local server, etc and continue to log to Stackdriver.
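As a rough sketch of that approach: point the client library at a service-account key so the same code logs to Stackdriver from outside GCP as well; the key path below is only a placeholder:
import os
import logging

# Placeholder path to a service-account key that has permission to write logs.
os.environ.setdefault("GOOGLE_APPLICATION_CREDENTIALS", "/path/to/service-account.json")

import google.cloud.logging

client = google.cloud.logging.Client()
client.setup_logging()
logging.info("Sent to Stackdriver from any machine with valid credentials.")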
Our log formatter has a user defined parameter. It is defined as:
'%(asctime)s|%(levelname)s|%(name)s|REQID:%(req_id)s|%(module)s:%(lineno)s|%(message)s'
where req_id is a request id, generated by application code for every request. When we are processing the requests in our application code, we have access to this req_id, and we use it for logging purposes like this:
logger = logging.LoggerAdapter(logging.getLogger(service_name), {'req_id': req_id})
logger.debug('A debug message')
I am trying to make the tornado logger conforming to our log format, but since tornado has no access to our application level req_id it fails with:
KeyError: 'req_id'
How can I tell tornado to use a LoggerAdapter for tornado.access, with a user provided context?
EDIT
As a workaround, I tried the following:
Since it is not possible for me to tell tornado what loggers to use, I managed to hack my way around this limitation by reconfiguring the tornado logger in each request, adding the contextual information using a logging filter.
Unfortunately, reconfiguring the log for each request does not work since tornado will be serving requests in parallel, and we will get an inconsistent state.
How can we pass user context for the tornado loggers then?
The tornado.access log can be controlled by overriding Application.log_request in a subclass or using the log_function application setting. This method defaults to writing to the tornado.access log, but you can override it to log however you want.
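A minimal sketch of that approach, assuming each handler stores the request id on itself as req_id (for example in prepare()); the attribute name is only an illustration:
import logging
import tornado.web

access_log = logging.getLogger("tornado.access")

class MyApplication(tornado.web.Application):
    def log_request(self, handler):
        # Pull the request id the handler stored earlier (hypothetical
        # attribute); fall back to a placeholder when it is missing.
        req_id = getattr(handler, "req_id", "-")
        adapter = logging.LoggerAdapter(access_log, {"req_id": req_id})
        # Log the same information Tornado's default implementation logs.
        adapter.info("%d %s %.2fms",
                     handler.get_status(),
                     handler._request_summary(),
                     1000.0 * handler.request.request_time())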
Note however that the tornado.general and tornado.application loggers cannot be overridden in this way, so your log formatters/filters must still be able to handle messages that do not have the req_id field.
The Python logger library has the option of logging timestamps and file information in log file/console using the Formatter class as below:
import logging
logformatter = logging.Formatter('%(asctime)s (%(filename)s:%(lineno)s)- %(levelname)s - %(message)s')
streamlogger = logging.getLogger()
streamlogger.setLevel('DEBUG')
consolelogger = logging.StreamHandler()
consolelogger.setFormatter(logformatter)
consolelogger.setLevel('DEBUG')
streamlogger.addHandler(consolelogger)
streamlogger.debug('ZiZi')
and the output would look like this:
2017-01-19 16:06:15,381 (testlogger.py:19)- DEBUG - ZiZi
In Robot Framework, the Log keyword is used to write to the log file and/or console. There is also a Log To Console keyword, which only prints the given message to the console. But neither of these keywords offers an API for what Formatter does in Python's logging library.
Is there any trick to embed this functionality into Robot Framework? Are there any other Robot Framework keywords/libraries which I'm not aware of?
In my mind there are two ways you can achieve this kind of logging. Both of them would generate a new file in your desired format.
The first one is to use the Robot Listener functionality. This is a set of predefined events that you can implement a listener class for; the Log Message and Message events are of particular interest to you.
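For the listener approach, a minimal sketch might look like the following; the class name, output path, and format are placeholders (with listener API version 2, each log message arrives as a dictionary with timestamp, level, and message keys):
class LogFormatterListener:
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self, path="formatted.log"):
        # Write the reformatted messages to a separate file.
        self._file = open(path, "w")

    def log_message(self, message):
        # 'message' is a dict with 'timestamp', 'level' and 'message' keys.
        self._file.write("%s - %s - %s\n" % (
            message["timestamp"], message["level"], message["message"]))

    def close(self):
        self._file.close()
You would then run your suite with something like robot --listener LogFormatterListener.py tests.robot.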
The other one is the recently released Robot Background Logger project, which extends the standard logger class of Robot Framework. This should provide some control over the formatting of the message.
I am using wsgilog as suggested in the cookbook, and it logs the web.py messages into the log. I am really struggling with adding my own messages to the log,
e.g. I want to log query parameters and other warnings.
For those with similar problems, please use:
logger = web.ctx.env.get('wsgilog.logger')
to get the logger, and then
logger.info('hello world')
to log the message.
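For example, to log the query parameters mentioned in the question, a handler could do something like this (the class name is only an illustration):
import web

class Index:
    def GET(self):
        # The wsgilog middleware puts its logger into the WSGI environment.
        logger = web.ctx.env.get('wsgilog.logger')
        if logger:
            logger.info('query params: %s', dict(web.input()))
        return 'ok'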
I'm new to GAE, and have not been able to figure out how to send 'print' statements to the logging console rather than to the browser. For example:
class Feed(webapp.RequestHandler):
    def post(self):
        feeditem = Feeditem()
        feeditem.author = self.request.get('from')
        feeditem.content = self.request.get('content')
        feeditem.put()
        notify_friends(feeditem)
        self.redirect('/')

def notify_friends(feeditem):
    """Alerts friends of a new feeditem"""
    print 'Feeditem = ', feeditem
When I do something like the above, the print in notify_friends outputs to the browser and somehow prevents the self.redirect('/') in the post method that called it. Commenting it out corrects the issue.
Is there a way to change this behavior?
EDIT: Google App Engine tag removed as this is general.
You should instead use the logging module, like so:
import logging

def notify_friends(feeditem):
    """Alerts friends of a new feeditem"""
    logging.info('Feeditem = %s', feeditem)
There are a variety of logging levels you can use, from debug to critical. By default, though, the App Engine SDK only shows you log messages at level info and above, so that's what I've suggested here. You can ask it to show you debug messages if you want, but you'll probably be overwhelmed with useless (to you) logging information, at least when running in the SDK.
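For illustration, the standard levels from least to most severe:
import logging

logging.debug("diagnostic detail")        # hidden by default in the SDK console
logging.info("normal operation")
logging.warning("something unexpected happened")
logging.error("an operation failed")
logging.critical("the application cannot continue")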
See the logging module docs for more info.
Oh, and the nice thing about using the logging module is that you'll have access to your log messages in production, under the "Logs" section of your app's App Engine dashboard.
This is not just a problem with GAE. It is a general issue. You can't print out HTML and then try to have a redirect header. The answer to your question is no, you can't change the behavior. What exactly are you trying to achieve? You might be able to get what you want a different way.