I followed this guide https://firebase.google.com/docs/hosting/cloud-run to set up a Cloud Run container.
Then I tried to follow this guide https://cloud.google.com/run/docs/logging to perform simple logging, writing a structured log to stdout.
This is my code:
trace_header = request.headers.get('X-Cloud-Trace-Context')

if trace_header:
    trace = trace_header.split('/')
    global_log_fields['logging.googleapis.com/trace'] = (
        "projects/sp-64d90/traces/" + trace[0])

# Complete a structured log entry.
entry = dict(
    severity='NOTICE',
    message='This is the default display field.',
    # Log viewer accesses 'component' as jsonPayload.component.
    component='arbitrary-property',
    **global_log_fields)

print(json.dumps(entry))
I cannot see this log in the Cloud Logs Viewer. I do see the HTTP GET logs each time I call the container.
Am I missing anything? I am new to this and wonder what the simplest way is to log information and view it, assuming the container I created was built exactly with the steps from the guide (https://firebase.google.com/docs/hosting/cloud-run).
Thanks
I am running into the exact same issue. I did find that flushing stdout causes the logging to appear when it otherwise would not. Looks like a bug in Cloud Run to me.
print(json.dumps(entry))
import sys
sys.stdout.flush()
Output with flushing
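Since Python 3.3 the flush can also be requested in the print call itself, which avoids the separate sys.stdout.flush(). A sketch of the same structured entry (field values are just the placeholders from the question):

```python
import json

# Build the same kind of structured entry as above
entry = dict(
    severity='NOTICE',
    message='This is the default display field.',
)

# flush=True pushes the line out immediately, so the Cloud Run log
# collector sees it even when stdout is block-buffered in a container
print(json.dumps(entry), flush=True)
```

Setting the environment variable PYTHONUNBUFFERED=1 in the container image achieves the same effect process-wide.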
For Python/Java:
Using the "google-cloud-logging" module is the easiest way to push container logs to Stackdriver. Configure google-cloud-logging to work with Python's default logging module:
import logging as log
import google.cloud.logging as logging

def doSomething(param):
    logging_client = logging.Client()
    logging_client.setup_logging()
    log.info(f"Some log here: {param}")
Now you should see this log in Stackdriver Logging under the Cloud Run revision.
An easy way to integrate Google Cloud Platform logging into your Python code is to create a subclass from logging.StreamHandler. This way logging levels will also match those of Google Cloud Logging, enabling you to filter based on severity. This solution also works within Cloud Run containers.
Also you can just add this handler to any existing logger configuration, without needing to change current logging code.
import json
import logging
import os
import sys
from logging import StreamHandler
from flask import request
class GoogleCloudHandler(StreamHandler):
    def __init__(self):
        StreamHandler.__init__(self)

    def emit(self, record):
        msg = self.format(record)
        # Get project_id from the Cloud Run environment
        project = os.environ.get('GOOGLE_CLOUD_PROJECT')
        # Build structured log messages as an object.
        global_log_fields = {}
        trace_header = request.headers.get('X-Cloud-Trace-Context')
        if trace_header and project:
            trace = trace_header.split('/')
            global_log_fields['logging.googleapis.com/trace'] = (
                f"projects/{project}/traces/{trace[0]}")
        # Complete a structured log entry (including the trace fields).
        entry = dict(severity=record.levelname, message=msg, **global_log_fields)
        print(json.dumps(entry))
        sys.stdout.flush()
A way to configure and use the handler could be:
def get_logger():
    logger = logging.getLogger(__name__)
    if not logger.handlers:
        gcp_handler = GoogleCloudHandler()
        gcp_handler.setLevel(logging.DEBUG)
        gcp_formatter = logging.Formatter(
            '%(levelname)s %(asctime)s [%(filename)s:%(funcName)s:%(lineno)d] %(message)s')
        gcp_handler.setFormatter(gcp_formatter)
        logger.addHandler(gcp_handler)
    return logger
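The JSON/severity core of this handler can be tried in isolation, without Flask or the trace lookup. This stand-alone sketch keeps only that part (class and logger names here are my own):

```python
import json
import logging
from logging import StreamHandler

class JsonSeverityHandler(StreamHandler):
    """Emit one JSON object per record, with the level name as
    `severity` so Cloud Logging can filter on it; same idea as the
    GoogleCloudHandler above, minus the Flask request/trace lookup."""
    def emit(self, record):
        entry = {"severity": record.levelname, "message": self.format(record)}
        print(json.dumps(entry), flush=True)

logger = logging.getLogger("json_demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(JsonSeverityHandler())
logger.warning("something odd happened")
```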
1. Follow the guide you mentioned: Serve dynamic content and host microservices with Cloud Run
2. Add the following code to index.js:
const {Logging} = require('@google-cloud/logging');
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  console.log('Hello world received a request.');
  const target = process.env.TARGET || 'World';
  const projectId = 'your-project';
  const logging = new Logging({projectId});

  // Selects the log to write to
  const log = logging.log('Cloud_Run_Logs');
  // The data to write to the log
  const text = 'Hello, world!';
  // The metadata associated with the entry
  const metadata = {
    resource: {type: 'global'},
    // See: https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#logseverity
    severity: 'INFO',
  };
  // Prepares a log entry
  const entry = log.entry(metadata, text);

  async function writeLog() {
    // Writes the log entry
    await log.write(entry);
    console.log(`Logged the log that you just created: ${text}`);
  }
  writeLog();

  res.send(`Hello ${target}!`);
});

const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log('Hello world listening on port', port);
});
3. Check the logs under Logging/Global.
Edit:
For Python:
import os
import google.cloud.logging
import logging
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    target = os.environ.get('TARGET', 'World')
    # Instantiates a client
    client = google.cloud.logging.Client()
    # Connects the logger to the root logging handler; by default this captures
    # all logs at INFO level and higher
    client.setup_logging()
    # The data to log
    text = 'Hello, these are logs from cloud run!'
    # Emits the data using the standard logging module
    logging.warning(text)
    return 'Hello {}!\n'.format(target)
There is support for the Bunyan and Winston Node.js libraries in Google Cloud Logging:
https://cloud.google.com/logging/docs/setup/nodejs#using_bunyan
https://cloud.google.com/logging/docs/setup/nodejs#using_winston
Typically, if you are not looking to do structured logging, all you need to do is print things to stdout/stderr and Cloud Run will pick it up.
This is documented at https://cloud.google.com/run/docs/logging, which has Node.js examples for structured and non-structured logging as well.
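For Python, the non-structured case is literally just print(); the structured case only requires that each line be a single JSON object. A minimal helper as a sketch (the function name and extra fields are illustrative; severity and message are the fields Cloud Run maps specially):

```python
import json

def log_structured(message, severity="INFO", **fields):
    # Cloud Run parses each single-line JSON object on stdout into a
    # structured entry; `severity` and `message` map to the matching
    # LogEntry fields, any other keys land in jsonPayload
    entry = {"severity": severity, "message": message, **fields}
    print(json.dumps(entry), flush=True)

log_structured("user signed in", severity="NOTICE", component="auth")
```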
So, in my main app.py, I have:
app = create_app()
...
app.run()
In __init__.py I have:
def create_app():
    ...
    app = Flask(__name__)
    setup_logging(app)
    return app

def setup_logging(app):
    RequestID(app)
    handler = logging.FileHandler(app.container.config.get("app.LOG_FILE"))
    handler.setFormatter(logging.Formatter(
        "%(asctime)s : %(levelname)s : %(request_id)s - %(message)s"))
    handler.addFilter(RequestIDLogFilter())  # << Add request id contextual filter
    logging.getLogger().addHandler(handler)
    logging.getLogger().setLevel(level="DEBUG")
And in my routes.py, I have: (This wiring/linking is done with the help of dependency injection)
def configure(app):
    app.add_url_rule("/", "check", check)
    ...

def check():
    logging.info("You hit /")
    # Here I want to return the Request UUID that's generated by RequestIDLogFilter
    return make_response(jsonify(dict(ok=True)), 200)
How do I access the Request UUID generated by the RequestIDLogFilter? In my log file I correctly see the log messages as:
2022-08-15 07:00:18,030 : INFO : 27b437fd-98be-4bc9-a609-912043e3a38e - Log message test memberId=9876
I want to take this value 27b437fd-98be-4bc9-a609-912043e3a38e and include it in the response headers as X-REQUEST-UUID: 27b437fd-98be-4bc9-a609-912043e3a38e
An alternative was to rip out RequestIDLogFilter and work only with flask-request-id-header, except there's no way to write the id to the log file (logging.FileHandler does not have a write() method, so it fails).
This is surely something trivial, but the official docs nowhere mention how to access these request ids for logging to a file or sending back as a response header.
If you are using flask_log_request_id, then you probably want this, as given in their GitHub repo.
Code copied here in case the link does not work:
from flask_log_request_id import current_request_id

@app.after_request
def append_request_id(response):
    response.headers.add('X-REQUEST-ID', current_request_id())
    return response
I have a basic logger set up using the logging library in Python 3.10.4. I'm attempting to make the FastAPI exception handler log any exceptions it handles, to no avail. The handler is running, as I can see that my custom Response is returned, but no log.
logging.basicConfig(filename='mylan.log')
logger = logging.getLogger("MyLAN")
logger.setLevel(logging.DEBUG)

discord_handler = DiscordHandler(webhook_url, agent, notify_users=notify_users,
                                 emit_as_code_block=False, max_size=2000)
# Add log level to handlers
discord_handler.setLevel(logging.DEBUG)
# Add format to handlers
discord_handler.setFormatter(FORMAT)
# Add the handlers to the Logger
logger.addHandler(discord_handler)
logger.debug("Logger created")

app = FastAPI()
logger.debug('test')  # This works

@app.exception_handler(Exception)
def handle_exception(req, exc):
    logger.debug("Something's brokey")  # This does not
    return Response("Internal Server Error Test", status_code=500)
I can also confirm that the logger works, as it logs a simple message on startup which is saved successfully.
I'm not even getting any errors in stdout that might guide me towards a solution.
Anyone have any ideas?
It works for me as written. Here's the code I tested. It's basically identical to what you have, but it's runnable, and includes a /error route that raises a KeyError to test the exception handler:
import logging

from fastapi import FastAPI, Response
from discord_handler import DiscordHandler

webhook_url = "https://discord.com/api/webhooks/..."

logging.basicConfig(filename="mylan.log")
logger = logging.getLogger("MyLAN")
logger.setLevel(logging.DEBUG)

discord_handler = DiscordHandler(
    webhook_url,
    "Example",
    notify_users=[],
    emit_as_code_block=True,
    max_size=2000,
)

# Add log level to handlers
discord_handler.setLevel(logging.DEBUG)

# Add the handlers to the Logger
logger.addHandler(discord_handler)
logger.debug("Logger created")

app = FastAPI()
logger.debug("test")  # This works

@app.exception_handler(Exception)
def handle_exception(req, exc):
    logger.debug("Something's broken")  # This does not
    return Response("Internal Server Error Test", status_code=500)

@app.get("/error")
def error():
    raise KeyError("example")
When this code starts up, I see in my discord channel:
Logger created...
test...
And when I make a request to /error, I see:
Something's broken...
I want to write log files when I run my minimal Flask app; basically, I want to log all logging messages from the Python console when different endpoints are used. I used Flask's logging mechanism, but I couldn't capture the logging messages that appear on the Python console (errors, warnings, running session info) when testing the different endpoints. How can I capture this console output in a log file? Is there any way to make this happen? Any thoughts?
My current attempt
Here is my current attempt, which failed to capture the log info shown on the Python console:
import os, logging, logging.handlers, traceback
from flask import Flask, jsonify, request
from flask_restplus import Api, Resource, Namespace, fields, reqparse, inputs

def primes_method1(n):
    out = list()
    for num in range(1, n + 1):
        prime = True
        for i in range(2, num):
            if (num % i == 0):
                prime = False
        if prime:
            out.append(num)
    return out

def getLogger(logname, logdir, logsize=500 * 1024, logbackup_count=4):
    if not os.path.exists(logdir):
        os.makedirs(logdir)
    logfile = '%s/%s.log' % (logdir, logname)
    loglevel = logging.INFO
    logger = logging.getLogger(logname)
    logger.setLevel(loglevel)
    if logger.handlers is not None and len(logger.handlers) >= 0:
        for handler in logger.handlers:
            logger.removeHandler(handler)
        logger.handlers = []
    loghandler = logging.handlers.RotatingFileHandler(
        logfile, maxBytes=logsize, backupCount=logbackup_count)
    formatter = logging.Formatter('%(asctime)s-%(name)s-%(levelname)s-%(message)s')
    loghandler.setFormatter(formatter)
    logger.addHandler(loghandler)
    return logger
app = Flask(__name__)
api = Api(app)
ns = api.namespace('ns')

payload = api.model('Payload', {
    'num': fields.Integer(required=True)
})

logger = getLogger('testLog', '~/')

@ns.route('/primenumber')
class PrimeResource(Resource):
    @ns.expect(payload)
    def post(self):
        logger.info("get prime number")
        param = request.json['num']
        try:
            res = primes_method1(param)
            return jsonify({'output': res})
        except:
            return None, 400

ns1 = Namespace('')

@ns1.route('/log')
class logResource(Resource):
    def get(self):
        logger.info("return saved all logers from above endpoints")

if __name__ == '__main__':
    api.add_namespace(ns)
    api.add_namespace(ns1)
    app.run(debug=True)
Basically, when I test an endpoint with sample data, I want to see all logged messages at the @ns1.route('/log') endpoint. In my attempt, I couldn't capture the running session info shown on the Python console. How can I log Flask session info from the Python console as well? Is there any way to do this?
Your question isn't really related to Flask, unless I'm overlooking something unique to Flask that doesn't conform to the logging interface. This isn't a workaround either, just normal usage of the tools.
Try to add a handler to your logger like so:
    loghandler = logging.handlers.RotatingFileHandler(
        logfile, maxBytes=logsize, backupCount=logbackup_count)
    formatter = logging.Formatter('%(asctime)s-%(name)s-%(levelname)s-%(message)s')
    loghandler.setFormatter(formatter)
    logger.addHandler(loghandler)

    loghandler2 = logging.StreamHandler()
    loghandler2.setFormatter(formatter)
    logger.addHandler(loghandler2)

    return logger
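The effect of the extra StreamHandler is easy to verify outside Flask. A self-contained sketch (the temporary file path and logger name are mine) that logs one message through both handlers:

```python
import logging
import logging.handlers
import os
import tempfile

logfile = os.path.join(tempfile.mkdtemp(), "test.log")
logger = logging.getLogger("twohandlers")
logger.setLevel(logging.INFO)

formatter = logging.Formatter('%(asctime)s-%(name)s-%(levelname)s-%(message)s')

# Rotating file handler, as in the question's getLogger()
file_handler = logging.handlers.RotatingFileHandler(
    logfile, maxBytes=500 * 1024, backupCount=4)
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)

# Extra console handler from the answer (writes to stderr by default)
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)

logger.info("hello")  # written to both test.log and the console
```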
In Google App Engine, first generation, logs are grouped automatically by request in Logs Viewer, and in the second generation it's easy enough to set up.
In background Cloud Functions I can't find any way of doing it (save manually filtering by executionId in Logs Viewer). From various articles around the web I read that the key is to set the trace argument to the Trace ID when calling the Stackdriver Logging API, and that in HTTP contexts this ID can be found in the X-Cloud-Trace-Context header.
There are no headers in background contexts (for example, called from Pub/Sub or Storage triggers). I've tried setting this to an arbitrary value, such as the event_id from the function context, but no grouping happens.
Here's a minified representation of how I've tried it:
import google.cloud.logging
from google.cloud.logging.resource import Resource

log_name = 'cloudfunctions.googleapis.com%2Fcloud-functions'
cloud_client = google.cloud.logging.Client()
cloud_logger = cloud_client.logger(log_name)
request_id = None

def log(message):
    labels = {
        'project_id': 'settle-leif',
        'function_name': 'group-logs',
        'region': 'europe-west1',
    }
    resource = Resource(type='cloud_function', labels=labels)
    trace_id = f'projects/settle-leif/traces/{request_id}'
    cloud_logger.log_text(message, trace=trace_id, resource=resource)

def main(_data, context):
    global request_id
    request_id = context.event_id
    log('First message')
    log('Second message')
This isn't currently possible.
It's on our roadmap to provide this support: https://github.com/GoogleCloudPlatform/functions-framework-python/issues/79
I use Django and Apache with mod_wsgi.
I'm trying this module: https://github.com/opiate/SimpleWebSocketServer
Basically, I'm trying to integrate a websocket server with my Django app, so I can share variables and do DB queries from both servers.
This is my code using this library:
myserver.py:
from SimpleWebSocketServer import WebSocket, SimpleWebSocketServer
import thread

class socket(WebSocket):
    def handleMessage(self):
        if self.data is None:
            self.data = ''
        print self.data
        # echo message back to client
        self.sendMessage(str(self.data))

    def handleConnected(self):
        print self.address, 'connected'
        #print self.request
        #print self.server.connections

    def handleClose(self):
        print self.address, 'closed'

server = SimpleWebSocketServer('', 8001, socket)
#server.serveforever()
thread.start_new_thread(server.serveforever, ())
print "done"
I use a thread so it won't block the rest of the code.
If I run this code alone and create a WebSocket in the browser:
JavaScript:
var socket;

function startSocket() {
    socket = new WebSocket("ws://localhost:8001");
    socket.onopen = function(evt) { test.innerHTML += "connected\n"; };
    socket.onclose = function(evt) { test.innerHTML += "disconnected\n"; };
    socket.onmessage = function(evt) { alert(evt.data); };
    socket.onerror = function(evt) { alert("error"); };
}

startSocket();
everything works fine. The problem is how to integrate it into my Django code.
So I put the myserver.py code in the __init__.py file of my project. I created a setting, WEB_SOCKET, to which I assign the SimpleWebSocketServer instance. It does create the object (I've checked in debug mode), but still: no events, no connection, nothing. It does not work. Why?
Or perhaps there's another solution to this problem? I need something easy and simple, like this module, and it must be able to integrate with Django.
You just need to add the location of your project to the Python path, then set the location of your project settings in os.environ; that should do it. (On Django 1.7 or newer, you must also call django.setup() after setting DJANGO_SETTINGS_MODULE, before importing any models.)
import sys, os

sys.path.append('path_to_my_django_project')
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

from app.models import *
...
obj = Model()
...
obj.save()