Disable info messages in boto3 SNS python

I am using the Python logging package to log my error messages:
import logging
logging.basicConfig(filename='messages.log', level=logging.INFO)
logging.info('some message')
It writes my custom log messages to messages.log and works fine. However, when I started using the boto3 SNS library and publishing messages to SNS, an extra line like the following was written to messages.log automatically:
Starting new HTTPS connection (1): sns.us-west-2.amazonaws.com
I think the logging package adds INFO messages to messages.log by default; correct me if I am wrong. Are these INFO messages really coming from the boto3 SNS package? If so, how can I disable the INFO and WARNING messages that come from it?
Thanks

Fortunately, I found the answer on GitHub. If anyone wants it, please refer to the link below:
https://github.com/boto/boto3/issues/1248#issuecomment-326049463
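For reference, the fix boils down to raising the log level on the AWS-related loggers so their INFO records never reach the root handler. A minimal sketch (the logger names below are the usual suspects; the HTTPS-connection line itself typically comes from urllib3):
import logging

# Silence chatty INFO/WARNING output from the AWS client stack.
for name in ('boto3', 'botocore', 'urllib3'):
    logging.getLogger(name).setLevel(logging.ERROR)
Your own logging.info(...) calls still land in messages.log, because only these named loggers are raised above INFO.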

Related

How to use Syslog in Django REST framework?

I'm developing a Python Django REST API and need to add a logger that writes to Syslog.
Do I just need to define the logger and Syslog handler in settings.py?
I'm relatively new to Django and the Syslog protocol, so I'd appreciate any help, including which Python modules to use.
Yes, you can, for example, use the syslog library to log messages. Using it is pretty easy:
import syslog
# create syslog handle
syslog.openlog(ident="settings.py", facility=syslog.LOG_LOCAL0)
# log some message
syslog.syslog(syslog.LOG_INFO, "Some log message.")
The log message will then get written to /var/log/local0.log; depending on the standard configuration in your /etc/rsyslog.conf file, it will probably also get written to /var/log/syslog or /var/log/messages.
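Since the question asks about settings.py specifically: you can also route log records to syslog through Django's standard LOGGING setting with the standard library's SysLogHandler, without calling the syslog module directly. A minimal sketch, assuming a local Unix socket at /dev/log and the local0 facility (both are assumptions to match against your rsyslog setup):
# settings.py
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "syslog": {
            "class": "logging.handlers.SysLogHandler",
            "facility": "local0",   # assumption: match your rsyslog facility
            "address": "/dev/log",  # assumption: local Unix socket on Linux
        },
    },
    "root": {"handlers": ["syslog"], "level": "INFO"},
}
With this in place, plain logging.getLogger(__name__).info(...) calls anywhere in the project go to syslog.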

Tracing a Python gRPC server deployed on Cloud Run with OpenTelemetry

I'm running a Python gRPC server on Cloud Run and attempting to add instrumentation to capture trace information. I have a basic setup currently; however, I'm having trouble making use of propagation as shown in the OpenTelemetry docs.
Inbound requests have the x-cloud-trace-context header, and I can log the header value in the gRPC method I've been working with, however the traces created by the OpenTelemetry library always have a different ID than the trace ID from the request header.
This is the simple tracing.py module I've created to provide configuration and access to the current Tracer instance:
"""Utility functions for tracing."""
import opentelemetry.exporter.cloud_trace as cloud_trace
import opentelemetry.propagate as propagate
import opentelemetry.propagators.cloud_trace_propagator as cloud_trace_propagator
import opentelemetry.trace as trace
from opentelemetry.sdk import trace as sdk_trace
from opentelemetry.sdk.trace import export
import app_instance
def get_tracer() -> trace.Tracer:
"""Function that provides an object for tracing.
Returns:
trace.Tracer instance.
"""
return trace.get_tracer(__name__)
def configure_tracing() -> None:
trace.set_tracer_provider(sdk_trace.TracerProvider())
if app_instance.IS_LOCAL:
print("Configuring local tracing.")
span_exporter: export.SpanExporter = export.ConsoleSpanExporter()
else:
print(f"Configuring cloud tracing in environment {app_instance.ENVIRONMENT}.")
span_exporter = cloud_trace.CloudTraceSpanExporter()
propagate.set_global_textmap(cloud_trace_propagator.CloudTraceFormatPropagator())
trace.get_tracer_provider().add_span_processor(export.SimpleSpanProcessor(span_exporter))
This configure_tracing function is called by the entrypoint script run on container start, so it executes before any requests are handled. When running in Google Cloud, the CloudTraceFormatPropagator should be what's required to ensure trace propagation; however, it doesn't seem to be working for me.
This is the simple gRPC method I've been implementing with:
import grpc
from opentelemetry import trace
import stripe

from common import cloud_logging, datastore_utils, proto_helpers, tracing
from services.payment_service import payment_service_pb2
from third_party import stripe_client


def GetStripeInvoice(
    self, request: payment_service_pb2.GetStripeInvoiceRequest, context: grpc.ServicerContext
) -> payment_service_pb2.StripeInvoiceResponse:
    tracer: trace.Tracer = tracing.get_tracer()
    with tracer.start_as_current_span('GetStripeInvoice'):
        print(f"trace ID from header: {dict(context.invocation_metadata()).get('x-cloud-trace-context')}")
        cloud_logging.info("Getting Stripe invoice.")
        order = datastore_utils.get_pb_with_pb_key(request.order)
        try:
            invoice: stripe.Invoice = stripe_client.get_invoice(
                invoice_id=order.stripe_invoice_id
            )
            cloud_logging.info(f"Retrieved Stripe invoice. Amount due: {invoice['amount_due']}")
        except stripe.error.StripeError as e:
            cloud_logging.error(f"Failed to retrieve invoice: {e}")
            context.abort(code=grpc.StatusCode.INTERNAL, details=str(e))
        return payment_service_pb2.StripeInvoiceResponse(
            invoice=proto_helpers.create_struct(invoice)
        )
I've even gone as far as adding the x-cloud-trace-context header to local client requests, to no avail - the included value isn't used when starting traces.
I'm not sure what I'm missing here - I can see traces in the Cloud Trace dashboard so I believe the basic instrumentation is correct, however there's obviously something going on with the configuration/usage of the CloudTraceFormatPropagator.
It turns out that my configuration wasn't correct - or, I should say, it wasn't complete. I'd followed this basic example from the docs for the Google Cloud OpenTelemetry library, but I didn't realize that manually instrumenting wasn't needed.
I removed the call to tracer.start_as_current_span in my gRPC method, installed the gRPC instrumentation package (opentelemetry-instrumentation-grpc), and added it to the tracing configuration step during startup of my gRPC server, which now looks something like this:
from opentelemetry.instrumentation import grpc as grpc_instrumentation

from common import tracing  # from my original question


def main():
    """Starts up GRPC server."""
    # Set up tracing
    tracing.configure_tracing()
    grpc_instrumentation.GrpcInstrumentorServer().instrument()

    # Set up the gRPC server
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=100))
    # set up services & start
This approach has solved the issue described in my question - my log messages are now threaded in the expected manner.
As someone new to telemetry & instrumentation, I didn't realize that I'd need to take an extra step since I'm tracing gRPC requests, but it makes sense now.
I ended up finding some helpful examples in a different set of docs - I'm not sure why these are separate from the docs linked earlier in this answer.
EDIT: Ah, I believe the gRPC instrumentation, and thus the related docs, are part of a separate but related project wherein contributors can add packages that instrument libraries of interest (e.g. gRPC, Redis, etc.). It'd be helpful if it were unified, which is the topic of this issue in the main OpenTelemetry Python repo.
While reviewing Google Documentation of OpenTelemetry using Python, I found some configurations that could help with the issue of tracing the correct ID. Additionally, there is a troubleshooting document to view traces in your Google Cloud Project when you expect trace data to be present.
Python-OpenTelemetry - https://cloud.google.com/trace/docs/setup/python-ot
Google Cloud Trace Troubleshooting - https://cloud.google.com/trace/docs/troubleshooting
For secure channels, you need to pass in channel_type='secure'. It is explained in the following link: https://github.com/open-telemetry/opentelemetry-python-contrib/issues/365
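A hedged sketch of what that looks like on the client side; where exactly channel_type is accepted has varied across versions of opentelemetry-instrumentation-grpc, so verify against the release you have installed:
from opentelemetry.instrumentation.grpc import GrpcInstrumentorClient

# Assumption: channel_type is passed to the instrumentor as described in the
# linked issue; older/newer releases may differ.
GrpcInstrumentorClient(channel_type='secure').instrument()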
You need to use the x-cloud-trace-context header to ensure your traces use the same trace ID as the load balancer and app server on Google Cloud Run, so that they all link up in Google Trace.
The code below works to show your logs alongside traces in Google Trace's Trace List view:
from opentelemetry import trace
from opentelemetry.trace.span import get_hexadecimal_trace_id, get_hexadecimal_span_id

current_span = trace.get_current_span()
if current_span:
    trace_id = current_span.get_span_context().trace_id
    span_id = current_span.get_span_context().span_id
    if trace_id and span_id:
        logging_fields['logging.googleapis.com/trace'] = f"projects/{self.gce_project}/traces/{get_hexadecimal_trace_id(trace_id)}"
        logging_fields['logging.googleapis.com/spanId'] = f"{get_hexadecimal_span_id(span_id)}"
        logging_fields['logging.googleapis.com/trace_sampled'] = True
The documentation and code above were tested using Flask Framework.

Logging on either GCP or local

Suppose there is a system that runs on GCP but, as a backup, can also be run locally.
When running in the cloud, Stackdriver is pretty straightforward.
However, I need my system to push to Stackdriver when in the cloud, and to use the local Python logger when not.
I also don't want to include any logic to decide this; it should be automatic:
When logging, log straight to the Python/local logger.
If on GCP, also push these to Stackdriver.
I could write logic that implements this, but that is bad practice. There surely is a direct way of getting this to work.
Example
import google.cloud.logging
client = google.cloud.logging.Client()
client.setup_logging()
import logging
cl = logging.getLogger()
file_handler = logging.FileHandler('file.log')
cl.addHandler(file_handler)
logging.info("INFO!")
This will basically log to the Python logger and then 'always' upload to the cloud logger. How can I avoid explicitly adding import google.cloud.logging, so that if Stackdriver is installed the logs go to it directly? Is that even possible? If not, can someone explain how this would be handled from a best-practices perspective?
Attempt 1 [works]
Created /etc/google-fluentd/config.d/workflow_log.conf
<source>
  @type tail
  format none
  path /var/log/this_log.log
  pos_file /var/lib/google-fluentd/pos/this_log.pos
  read_from_head true
  tag workflow-log
</source>
Created /var/log/this_log.log
pos_file /var/lib/google-fluentd/pos/this_log.pos exists
import logging
cl = logging.getLogger()
file_handler = logging.FileHandler('/var/log/this_log.log')
file_handler.setFormatter(logging.Formatter("%(asctime)s;%(levelname)s;%(message)s"))
cl.addHandler(file_handler)
logging.info("info_log")
logging.error("error_log")
This works! Look for your logs under the specific VM resource, not under global > python.
Fortunately, this is a story that is handled. Stackdriver Logging is a very versatile logging framework. However, there are a variety of logging APIs, and Google's intent was not that you would have to rewrite all your existing applications to leverage the Stackdriver-native APIs. Instead, you can use a logging API of your choice (including standard and de facto APIs), and these logging APIs will then map to Stackdriver. If you execute outside a GCP environment, or simply wish to switch to an alternate log collector, your applications do not have to be re-coded or recompiled.
A list of the logging APIs available for different languages can be found at Setting Up Language Runtimes and this includes Setting Up Stackdriver Logging for Python.
For Python, at runtime, you have a configuration property (e.g. an environment variable) that declares whether or not you wish to use Stackdriver. If, and only if, it is set to true would you execute the logic that sets up the native Python logging for Stackdriver; otherwise that logic is not called, and hence you have no dependency on Stackdriver.
A possible piece of code might be:
import os

if os.environ.get('USE_STACKDRIVER') == 'true':
    import google.cloud.logging
    client = google.cloud.logging.Client()
    client.setup_logging()
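A hedged variation on the same idea that avoids even the environment variable: attempt the import and fall back to plain file logging when the Cloud Logging client isn't usable. This is a sketch only; catching the broad Exception (to cover missing credentials outside GCP) is an assumption you may want to narrow:
import logging

def setup_logging() -> None:
    """Use Cloud Logging when available, otherwise fall back to a local file."""
    try:
        import google.cloud.logging  # ImportError if the package isn't installed
        client = google.cloud.logging.Client()  # may fail without GCP credentials
        client.setup_logging()
    except Exception:
        logging.basicConfig(filename='file.log', level=logging.INFO)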
You do not need to specifically enable or use Stackdriver in your program. You can use the Python logger and write to any file you want. However, Stackdriver only logs specific log files. This means that you would need to manually set up Stackdriver to log "your" log files.
In your example, you are writing to file.log. Modify /etc/google-fluentd/config.d/mylogfile.conf to include the following. You will need to specify the full path for file.log and not just the file name. In this example, I named it /var/log/mylogfile.log. This example also assumes that your logs start each line with a date.
<source>
  @type tail
  # Parse the timestamp, but still collect the entire line as 'message'
  format /^(?<message>(?<time>[^ ]*\s*[^ ]* [^ ]*) .*)$/
  path /var/log/mylogfile.log
  pos_file /var/lib/google-fluentd/pos/mylogfile.log.pos
  read_from_head true
  tag auth
</source>
For more information read the following document:
Stackdriver - Configuring the Agent
Now your program will run outside GCP, and when it runs on a configured instance, it will log to Stackdriver.
Note: I would do the opposite of what you have asked. I would always use Stackdriver. When not running in GCP I would manually set up Stackdriver on my desktop, local server, etc and continue to log to Stackdriver.

How to fix error "Your credentials class does not support session injection. Performance will not be at the maximum." message when sending a message?

I'm using the Python botbuilder-sdk, and when the server sends any message it prints the following to the console:
Your credentials class does not support session injection. Performance will not be at the maximum.
I've tried just downloading the samples from GitHub, and the issue persisted.
This is what I currently have for sending the messages.
credentials = MicrosoftAppCredentials(APP_ID, APP_PASSWORD)
connector = ConnectorClient(credentials, base_url=activity.service_url)
reply = self.buildReply(activity, activity.text)
connector.conversations.send_to_conversation(reply.conversation.id, reply)
The unknown message seems to result from the send_to_conversation function.
Any insight as to why it's appearing would be helpful. Thank you!
The Python Bot Builder SDK uses msrest-for-python. You can see that the warning message is displayed if the credentials class doesn't contain a signed_session or a refresh_session method. You can see in the Bot Builder source code that MicrosoftAppCredentials does not contain either of these methods. With this information, you can raise an issue in the Bot Builder Python repo on GitHub: https://github.com/microsoft/botbuilder-python/issues
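If you simply want to hide the warning while waiting on a fix, one option is to raise the level of the msrest logger. This assumes (as the msrest source suggests) that the message is emitted through the standard logging module; treat the logger name as an assumption to verify:
import logging

# Assumption: the warning comes from a logger under the "msrest" namespace.
logging.getLogger("msrest").setLevel(logging.ERROR)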

How to disable the python log `URL being requested` `Refreshing access_token`

I have Python code that downloads files from Google Drive via the googleapiclient API. The code uses a logger to print information, and I set the base log level to INFO. However, some of the logged info comes from calling the API. Specifically, the unwanted log lines are:
2019-02-12 03:52:21,269 INFO URL being requested:
2019-02-12 03:52:21,091 INFO Starting new HTTP connection
2019-02-12 03:52:19,691 INFO Attempting refresh to obtain initial access_token
From what I googled, logging.getLogger("requests").setLevel(logging.WARNING) seems able to mute the 'Starting new HTTP connection' messages. But how can I mute the other two?
Grepping through my virtual environment, I was able to see that it's the googleapiclient library that's creating the logger. You can mute those messages with
logging.getLogger("googleapiclient.discovery").setLevel(logging.WARNING)
Or at the module level
logging.getLogger("googleapiclient").setLevel(logging.WARNING)
