I want to log to Stackdriver Logging from App Engine using a Redis queue. So I'm using Redis Server, Redis Queue (RQ), and Python logging to do this. Here's my code:
import logging
from redis import Redis
from rq import Queue
import time


class SomeClass():
    def log_using_redis(self, text):
        logging.warning(text)
        with open("stack_log.txt", "a+") as f:
            f.write(str(text))
        return "logged Successfully using redis"

    def get(self):
        text = 'Hello, Logged Successfully!' + time.strftime('%a, %d %b %Y %H:%M:%S %Z(%z)')
        redis_conn = Redis()
        q = Queue(connection=redis_conn)
        job = q.enqueue(self.log_using_redis, text)
        print job.result
When I run the RQ worker I get some output in the terminal, but I can't find where the logs are being stored.
If I try to log directly without using Redis, the logs show up under the Global resource in the Logging section of the Google Cloud console. The queue itself is working properly; to check, I've been appending the text to a file.
It seems the logging isn't working. If it is being logged, where can I find my logs on Google Cloud?
Taking into account that you are using the Python client library, you can use the print() function to obtain the desired results. I don't know whether you are testing the application locally or have already deployed it.
If you are testing the application locally: the print() output can be found in Cloud Shell.
If you have deployed the application: go to the GCP console, then App Engine > Services. Select the service to which you deployed your application, click on the tools menu on the right side, and select "Logs". This will take you to your app's logs.
More precise logging can be configured with the Stackdriver Logging client library for Python, where you can set the severity level (for example, warning). This can help you manage your application or identify events of interest. You can find example code here, and a minimal sketch below.
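As an illustration only, a minimal sketch of that setup, assuming the google-cloud-logging package is installed and default credentials are available on App Engine:

import logging

import google.cloud.logging

# Attach the Stackdriver handler to the root Python logger.
client = google.cloud.logging.Client()
client.setup_logging()

# Anything at WARNING or above now also appears in Stackdriver Logging.
logging.warning("Hello from the RQ worker")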
You might also find the Stackdriver Logging agent useful: an application based on fluentd that runs on your virtual machine (VM) instances. The Logging agent is pre-configured to send logs from VM instances to Stackdriver Logging, and there are source and configuration files available for Redis.
If you want a more general overview, the official App Engine flexible environment logs documentation can help you understand the different logs available.
I'm running a Python gRPC server on Cloud Run and attempting to add instrumentation to capture trace information. I have a basic setup in place, but I'm having trouble making use of propagation as shown in the OpenTelemetry docs.
Inbound requests have the x-cloud-trace-context header, and I can log the header value in the gRPC method I've been working with, however the traces created by the OpenTelemetry library always have a different ID than the trace ID from the request header.
This is the simple tracing.py module I've created to provide configuration and access to the current Tracer instance:
"""Utility functions for tracing."""
import opentelemetry.exporter.cloud_trace as cloud_trace
import opentelemetry.propagate as propagate
import opentelemetry.propagators.cloud_trace_propagator as cloud_trace_propagator
import opentelemetry.trace as trace
from opentelemetry.sdk import trace as sdk_trace
from opentelemetry.sdk.trace import export
import app_instance
def get_tracer() -> trace.Tracer:
"""Function that provides an object for tracing.
Returns:
trace.Tracer instance.
"""
return trace.get_tracer(__name__)
def configure_tracing() -> None:
trace.set_tracer_provider(sdk_trace.TracerProvider())
if app_instance.IS_LOCAL:
print("Configuring local tracing.")
span_exporter: export.SpanExporter = export.ConsoleSpanExporter()
else:
print(f"Configuring cloud tracing in environment {app_instance.ENVIRONMENT}.")
span_exporter = cloud_trace.CloudTraceSpanExporter()
propagate.set_global_textmap(cloud_trace_propagator.CloudTraceFormatPropagator())
trace.get_tracer_provider().add_span_processor(export.SimpleSpanProcessor(span_exporter))
This configure_tracing function is called by the entrypoint script run on container start, so it executes before any requests are handled. When running in Google Cloud, the CloudTraceFormatPropagator should be what's required to ensure trace propagation, however it doesn't seem to be working for me.
This is the simple gRPC method I've been implementing with:
import grpc
from opentelemetry import trace
import stripe

from common import cloud_logging, datastore_utils, proto_helpers, tracing
from services.payment_service import payment_service_pb2
from third_party import stripe_client


def GetStripeInvoice(
    self, request: payment_service_pb2.GetStripeInvoiceRequest, context: grpc.ServicerContext
) -> payment_service_pb2.StripeInvoiceResponse:
    tracer: trace.Tracer = tracing.get_tracer()
    with tracer.start_as_current_span('GetStripeInvoice'):
        print(f"trace ID from header: {dict(context.invocation_metadata()).get('x-cloud-trace-context')}")
        cloud_logging.info("Getting Stripe invoice.")
        order = datastore_utils.get_pb_with_pb_key(request.order)
        try:
            invoice: stripe.Invoice = stripe_client.get_invoice(
                invoice_id=order.stripe_invoice_id
            )
            cloud_logging.info(f"Retrieved Stripe invoice. Amount due: {invoice['amount_due']}")
        except stripe.error.StripeError as e:
            cloud_logging.error(
                f"Failed to retrieve invoice: {e}"
            )
            context.abort(code=grpc.StatusCode.INTERNAL, details=str(e))
        return payment_service_pb2.StripeInvoiceResponse(
            invoice=proto_helpers.create_struct(invoice)
        )
I've even gone as far as adding the x-cloud-trace-context header to local client requests, to no avail - the included value isn't used when starting traces.
I'm not sure what I'm missing here - I can see traces in the Cloud Trace dashboard so I believe the basic instrumentation is correct, however there's obviously something going on with the configuration/usage of the CloudTraceFormatPropagator.
It turns out that my configuration wasn't correct - or, I should say, it wasn't complete. I'd followed this basic example from the docs for the Google Cloud OpenTelemetry library, but I didn't realize that manually instrumenting wasn't needed.
I removed the call to tracer.start_as_current_span in my gRPC method, installed the gRPC instrumentation package (opentelemetry-instrumentation-grpc), and added it to the tracing configuration step during startup of my gRPC server, which now looks something like this:
from concurrent import futures

import grpc
from opentelemetry.instrumentation import grpc as grpc_instrumentation

from common import tracing  # from my original question


def main():
    """Starts up the gRPC server."""
    # Set up tracing
    tracing.configure_tracing()
    grpc_instrumentation.GrpcInstrumentorServer().instrument()

    # Set up the gRPC server
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=100))
    # set up services & start
This approach solved the issue described in my question - my log messages are now threaded in the expected manner.
As someone new to telemetry & instrumentation, I didn't realize that I'd need to take an extra step since I'm tracing gRPC requests, but it makes sense now.
I ended up finding some helpful examples in a different set of docs - I'm not sure why these are separate from the docs linked earlier in this answer.
EDIT: Ah, I believe the gRPC instrumentation, and thus the related docs, are part of a separate but related project wherein contributors can add packages that instrument libraries of interest (e.g. gRPC, Redis, etc.). It'd be helpful if it were unified, which is the topic of this issue in the main OpenTelemetry Python repo.
While reviewing the Google documentation for OpenTelemetry with Python, I found some configuration options that could help with getting the correct trace ID. Additionally, there is a troubleshooting document for cases where you expect trace data to be present but cannot see it in your Google Cloud project.
Python-OpenTelemetry - https://cloud.google.com/trace/docs/setup/python-ot
Google Cloud Trace Troubleshooting - https://cloud.google.com/trace/docs/troubleshooting
For secure channels, you need to pass in channel_type='secure'. This is explained in the following link: https://github.com/open-telemetry/opentelemetry-python-contrib/issues/365
You need to use the x-cloud-trace-context header to ensure your traces use the same trace ID as the load balancer and AppServer on Google Cloud Run, and all link up in Google Trace.
The code below works for seeing your logs alongside traces in Google Trace's Trace List view:
from opentelemetry import trace
from opentelemetry.trace.span import get_hexadecimal_trace_id, get_hexadecimal_span_id

current_span = trace.get_current_span()
if current_span:
    trace_id = current_span.get_span_context().trace_id
    span_id = current_span.get_span_context().span_id
    if trace_id and span_id:
        logging_fields['logging.googleapis.com/trace'] = f"projects/{self.gce_project}/traces/{get_hexadecimal_trace_id(trace_id)}"
        logging_fields['logging.googleapis.com/spanId'] = f"{get_hexadecimal_span_id(span_id)}"
        logging_fields['logging.googleapis.com/trace_sampled'] = True
The documentation and code above were tested using Flask Framework.
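For context, those logging_fields only take effect if they end up as top-level keys in a structured (JSON) log entry that Cloud Logging parses. A minimal sketch of one way to emit such an entry on Cloud Run, assuming plain JSON lines written to stdout (emit_structured_log is a hypothetical helper, not part of the answer above):

import json
import sys


def emit_structured_log(message, logging_fields):
    # Cloud Run forwards stdout to Cloud Logging; JSON lines with these
    # special keys are parsed so the entry is correlated with its trace.
    entry = {"severity": "INFO", "message": message}
    entry.update(logging_fields)
    print(json.dumps(entry), file=sys.stdout, flush=True)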
Suppose there is a system that runs on GCP but, as a backup, can also be run locally.
When running in the cloud, Stackdriver is pretty straightforward.
However, I need my system to push to Stackdriver when it is on the cloud, and to use the local Python logger when it is not.
I also don't want to include any logic to decide this; it should be automatic.
When logging, log straight to the Python/local logger.
If on GCP -> also push these to Stackdriver.
I can write logic that implements this, but that is bad practice. Surely there is a direct way of getting this to work.
Example
import google.cloud.logging
client = google.cloud.logging.Client()
client.setup_logging()
import logging
cl = logging.getLogger()
file_handler = logging.FileHandler('file.log')
cl.addHandler(file_handler)
logging.info("INFO!")
This will basically log to the Python logger and then 'always' upload to the cloud logger. How can I set it up so that I don't need to explicitly add import google.cloud.logging and, if Stackdriver is installed/configured, the logs go there automatically? Is that even possible? If not, can someone explain how this would be handled from a best-practices perspective?
Attempt 1 [works]
Created /etc/google-fluentd/config.d/workflow_log.conf
<source>
    @type tail
    format none
    path /var/log/this_log.log
    pos_file /var/lib/google-fluentd/pos/this_log.pos
    read_from_head true
    tag workflow-log
</source>
Created /var/log/this_log.log
pos_file /var/lib/google-fluentd/pos/this_log.pos exists
import logging
cl = logging.getLogger()
file_handler = logging.FileHandler('/var/log/this_log.log')
file_handler.setFormatter(logging.Formatter("%(asctime)s;%(levelname)s;%(message)s"))
cl.addHandler(file_handler)
logging.info("info_log")
logging.error("error_log")
This works! Look for your logs under the specific VM instance, not under Global > python.
Fortunately, this is a scenario that is already handled. Stackdriver Logging is a very versatile logging framework. There are a variety of logging APIs, and Google's intent was not that you would have to rewrite all your existing applications to use the Stackdriver-native logging APIs. Instead, you can use a logging API of your choice (including standard and de facto APIs) and these logging APIs will then map to Stackdriver. If your application executes outside a GCP environment, or you simply wish to switch to an alternate log collector, it does not have to be re-coded or recompiled.
A list of the logging APIs available for different languages can be found at Setting Up Language Runtimes and this includes Setting Up Stackdriver Logging for Python.
For Python, at runtime you have a configuration property (e.g. an environment variable) that declares whether or not you wish to use Stackdriver. If set to true, then ... and only then ... would you execute the logic that sets up the native Python logging for Stackdriver; otherwise that logic is not called, and you have no dependency on Stackdriver.
A possible piece of code might be:
if os.environ.get('USE_STACKDRIVER') == 'true':
    import google.cloud.logging

    client = google.cloud.logging.Client()
    client.setup_logging()
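Building on that, a minimal sketch of how the same switch could keep a local file handler as the always-on fallback; the USE_STACKDRIVER variable and the file name are assumptions, not something prescribed by the library:

import logging
import os

# Always log through the standard Python logging module.
root_logger = logging.getLogger()
root_logger.setLevel(logging.INFO)
root_logger.addHandler(logging.FileHandler('file.log'))

if os.environ.get('USE_STACKDRIVER') == 'true':
    # Only import and attach the Stackdriver handler when explicitly enabled.
    import google.cloud.logging

    client = google.cloud.logging.Client()
    client.setup_logging()

logging.info("This goes to file.log, and to Stackdriver only when enabled.")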
You do not need to specifically enable or use Stackdriver in your program. You can use the Python logger and write to any file you want. However, Stackdriver only logs specific log files. This means that you would need to manually set up Stackdriver to log "your" log files.
In your example, you are writing to file.log. Modify /etc/google-fluentd/config.d/mylogfile.conf to include the following. You will need to specify the full path for file.log and not just the file name. In this example, I named it /var/log/mylogfile.log. This example also assumes that your logs start each line with a date.
<source>
    @type tail
    # Parse the timestamp, but still collect the entire line as 'message'
    format /^(?<message>(?<time>[^ ]*\s*[^ ]* [^ ]*) .*)$/
    path /var/log/mylogfile.log
    pos_file /var/lib/google-fluentd/pos/mylogfile.log.pos
    read_from_head true
    tag auth
</source>
For more information read the following document:
Stackdriver - Configuring the Agent
Now your program will run outside GCP, and when running on a configured instance it will log to Stackdriver.
Note: I would do the opposite of what you have asked. I would always use Stackdriver. When not running in GCP, I would manually set up Stackdriver on my desktop, local server, etc., and continue to log to Stackdriver.
I have Python code that downloads files from Google Drive using the googleapiclient API. The code uses a logger to print information, and I set the base log level to INFO. However, there is some logger output coming from the API calls. To be specific, the extra, unwanted logger output is:
2019-02-12 03:52:21,269 INFO URL being requested:
2019-02-12 03:52:21,091 INFO Starting new HTTP connection
2019-02-12 03:52:19,691 INFO Attempting refresh to obtain initial access_token
From what I googled, logging.getLogger("requests").setLevel(logging.WARNING) seems able to mute the logging info of Starting new HTTP connection. But how can I mute the other two?
Grepping through my virtual environment, I was able to see that it's the googleapiclient library that's creating the logger. You can mute those messages with
logging.getLogger("googleapiclient.discovery").setLevel(logging.WARNING)
Or for the whole googleapiclient package:
logging.getLogger("googleapiclient").setLevel(logging.WARNING)
I have a Flask API hosted on Azure and I am using the azure_storage_logging.handlers package to send API runtime logs to Azure Storage. I am using BlobStorageRotatingFileHandler for it.
I receive a few logs in my storage account; however, a huge number of logs are missing. My API is very CPU intensive.
Please let me know how to solve this problem.
import logging
from time import asctime, gmtime

from azure_storage_logging.handlers import BlobStorageRotatingFileHandler


def fun_logging(id, logfilename, loggername):
    # STORAGE_ACCOUNT_NAME, STORAGE_ACCOUNT_KEY and STORAGE_CONTAINER are defined elsewhere.
    mystorageaccountname = STORAGE_ACCOUNT_NAME
    mystorageaccountkey = STORAGE_ACCOUNT_KEY
    mystoragecontainer = STORAGE_CONTAINER
    utctime = asctime(gmtime())

    logger = logging.getLogger(loggername)
    logger.setLevel(logging.DEBUG)

    log_formatter = logging.Formatter('%(utctime)s - %(id)s - %(levelname)s - %(message)s')
    azure_blob_handler = BlobStorageRotatingFileHandler(filename=logfilename, account_name=mystorageaccountname, account_key=mystorageaccountkey, delay=False, maxBytes=10000, container=mystoragecontainer)
    azure_blob_handler.setLevel(logging.DEBUG)
    azure_blob_handler.setFormatter(log_formatter)

    if logger.hasHandlers():
        logger.handlers.clear()
    logger.addHandler(azure_blob_handler)

    logger = logging.LoggerAdapter(logger, {'id': id, 'utctime': utctime})
    return logger
####### Calling function
logger = fun_logging(id, 'Logs//xyz.log', 'xyz')
logger.info(Result.log) ## the variable I am logging
I searched for the BlobStorageRotatingFileHandler you are using and found the GitHub repo michiya/azure-storage-logging. I'm not sure whether that is what your Flask project uses. If so, I'd advise using TableStorageHandler for logging in your scenario of writing a huge number of logs, rather than BlobStorageRotatingFileHandler.
After reviewing the code of this repo, the issue is that it uses a block blob to store the entire log file at once, rather than appending each log record to an append blob, so it's not suitable for your scenario with massive logs. Azure recommends using append blobs for logging; the following, which comes from here, applies if you use Blob Storage:
Append blobs are used for logging, such as when you want to write to a file and then keep adding more information. Most objects stored in Blob storage are block blobs.
So you can try using append blobs via the Azure Blob Storage SDK for Python and wrapping a logging API yourself (a sketch follows below). Otherwise, Azure Table Storage is a good choice for logging. For recording massive volumes of logs, the best practice is to write logs into Event Hubs, then use a service like Stream Analytics to filter and move the data into Blob Storage.
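For illustration, a minimal sketch of a custom logging handler that appends each record to an append blob using the azure-storage-blob (v12) SDK; the class name, container and blob names, and the environment variable are assumptions, not part of the handler package you are using:

import logging
import os

from azure.storage.blob import BlobServiceClient


class AppendBlobHandler(logging.Handler):
    """Hypothetical handler that appends each log record to an append blob."""

    def __init__(self, connection_string, container, blob_name):
        super().__init__()
        service = BlobServiceClient.from_connection_string(connection_string)
        self._blob = service.get_blob_client(container=container, blob=blob_name)
        if not self._blob.exists():
            self._blob.create_append_blob()

    def emit(self, record):
        # Each append_block call adds the formatted record to the end of the blob.
        self._blob.append_block(self.format(record) + "\n")


logger = logging.getLogger("xyz")
logger.addHandler(AppendBlobHandler(os.environ["AZURE_STORAGE_CONNECTION_STRING"], "logs", "xyz.log"))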
Hope it helps.
Use case: some user data is loaded in the backend (Flask), and the progress is shown on the frontend through a loading bar. The backend has a generator that loads the data and keeps yielding the progress (this generator is returned as a response using stream_with_context). The frontend queries the Flask view using a JavaScript EventSource object.
Code:
from flask import Flask, Response, stream_with_context

app = Flask(__name__)


@app.route("/progress", methods=['GET'])
def progress():
    gen = get_user_data()
    return Response(stream_with_context(gen), mimetype='text/event-stream')


def get_user_data():
    n = 100  # number of data points to be loaded
    for i in range(1, n + 1):
        # load data
        yield "data:" + str((float(i) / n) * 100) + "\n\n"
    yield "data:" + "close" + "\n\n"
This works fine in my local environment. However, when I deploy it on the Google App Engine flexible environment, the loading bar jumps directly from 0 to 100. That is, instead of the frontend getting an update each time my generator yields, I get all the EventSource messages at once (when the generator has finished execution).
My app.yaml:
runtime: python
env: flex
entrypoint: gunicorn --timeout 240 -b :$PORT app:app

runtime_config:
  python_version: 2

manual_scaling:
  instances: 1

resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
Any idea on how I can get this to work on google app engine?
"An EventSource instance opens a persistent connection to an HTTP server", according to this documentation. This solution is not going to work in App Engine according to the explanation provided here:
You could attempt to declare "Content-Type: text/event-stream" on your own vanilla App Engine handler, and use an EventSource object (https://developer.mozilla.org/en-US/docs/Web/API/EventSource) in the browser to initiate a keep-alive connection. The problem is, App Engine waits for the handler on your app to return fully before flushing the buffer and sending the response data. You can find this documented here: https://cloud.google.com/appengine/docs/java/requests#Java_Responses for Java, https://cloud.google.com/appengine/docs/python/requests#Python_Responses for Python, https://cloud.google.com/appengine/docs/php/requests#PHP_Responses for PHP, and https://cloud.google.com/appengine/docs/go/requests#Go_Responses for Go.
What this means in practice is that your stream will not be "kept-alive" and will close each time one response is sent. Or, if you implement your server-sent event code server-side as most people do, it will buffer up all of its responses and finally send them all only when it terminates.
There are a couple of complex workarounds currently available:
Using Pusher: "Pusher is a hosted API for sending real-time, bi-directional messages via WebSockets to apps and other Internet-connected devices." This is not official product documentation, but its author is a Googler.
If you use Firebase: "You can use App Engine in conjunction with the Firebase Realtime Database to send immediate updates to browser and mobile clients without a persistent streaming connection to the server or long polling. "
A simpler way should be available soon, according to message #231 in this issue tracker: the Flex WebSockets beta launch is coming soon, but for the standard environment it is "at least a year away". Star the issue tracker post if you want automatic notifications of comments and updates.