I would like to scrub sensitive data from Python events before I send them to Sentry.
However, in the before_send and truncate_breadcrumb_message methods I am not sure where I can get the list of local variables in order to scrub them.
sentry_sdk.init(
    dsn=settings.get('SENTRY_DSN', ""),
    before_breadcrumb=truncate_breadcrumb_message,
    integrations=[FlaskIntegration()],
    before_send=sanitize_sentry_event,
)
def sanitize_sentry_event(event, hint):
    pass

def truncate_breadcrumb_message(crumb, hint):
    pass

def raise_exception(password):
    auth = 5
    raise Exception()
In the above method, I wouldn't want password or auth to be sent to Sentry at all.
How can I do it?
event is a JSON payload that contains exactly the same JSON you see in the "JSON" download in Sentry's UI. So you have an event like this:
{
    "exception": {
        "values": [
            {
                "stacktrace": {
                    "frames": [
                        {"vars": ...}
                    ]
                }
            }
        ]
    }
}
If you want to remove vars, you need to do this:
def sanitize_sentry_event(event, hint):
    for exception in event.get("exception", {}).get("values", []):
        for frame in exception.get("stacktrace", {}).get("frames", []):
            frame.pop("vars", None)

    for exception in event.get("threads", {}).get("values", []):
        for frame in exception.get("stacktrace", {}).get("frames", []):
            frame.pop("vars", None)

    return event
You probably want to wrap the entire function body in a try-except: if the function raises an exception, the event is dropped. Make sure to test this using init(debug=True) to see any exceptions your before_send hook might throw.
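If you only want to strip specific local variables such as password and auth rather than dropping vars entirely, a minimal sketch along these lines should work (the SENSITIVE_KEYS set and the "[Filtered]" placeholder are my own choices, not anything the SDK requires):

SENSITIVE_KEYS = {"password", "auth"}

def sanitize_sentry_event(event, hint):
    try:
        for container in ("exception", "threads"):
            for exception in event.get(container, {}).get("values", []):
                for frame in exception.get("stacktrace", {}).get("frames", []):
                    frame_vars = frame.get("vars", {})
                    for key in SENSITIVE_KEYS:
                        if key in frame_vars:
                            # replace the value rather than deleting the key, so the
                            # frame still shows that the variable existed
                            frame_vars[key] = "[Filtered]"
    except Exception:
        # never let the scrubber itself blow up and drop the event
        pass
    return event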
Code for anyone who migrated from Raven and wants to keep using the Raven processors / sanitize_keys:
import sentry_sdk
from raven.processors import SanitizeKeysProcessor, SanitizePasswordsProcessor

class FakeRavenClient:
    sanitize_keys = [
        'card_number',
        'card_cvv',
        'card_expiration_date',
    ]

processors = [
    SanitizePasswordsProcessor(FakeRavenClient),
    SanitizeKeysProcessor(FakeRavenClient),
]

def before_send(event, hint):
    for processor in processors:
        event = processor.process(event)
    return event

sentry_sdk.init(
    before_send=before_send,
)
I have a GraphQL system associated with a Django app that seems to be working fine, except that it's completely ignoring errors in mutations. That is, if the mutation executes with no errors, everything behaves as expected. But if I raise an exception on the first line of the mutation, I don't get any indication of the error -- nothing in the app logs, and the GraphQL response is just JSON with null contents, e.g.:
{
    "data": {
        "exampleMutation": {
            "mutationResponseSchema": null
        }
    }
}
Even if I wrap the django pieces (e.g. trying to get a filterset) in a try: except:, the behavior is the same as if I raise the exception. IOW, an exception being thrown (even if it's handled) seems to trigger an empty response being sent.
I am at a total loss for where these exceptions are going -- it seems that the behavior on encountering an exception is to ignore it and just return a null JSON.
Furthermore, I have an app with the same basic layout but built off an older Python image (3.8 vs. 3.11 here, so the django/graphene versions and related dependencies are newer in this one). The old app handled exceptions as usual and would return messages via the endpoints when raised, using Django/Graphene classes with the same structure as the ones in the app I'm having the problem with.
I don't know if something changed in graphene's error handling, but I can't seem to find a clear answer to that.
For example, if I write the following mutation:
class ExampleMutation(graphene.Mutation):
    class Arguments:
        fail_here = graphene.String()

    some_schema = graphene.Field(SomeSchema)

    @authorized
    def mutate(root, info, user, **kwargs):
        # could also just raise Exception('automatic exception') here and get the same behavior.
        if kwargs.get('fail_here') == 'yes':
            raise Exception('text')  # Doesn't seem to matter what exception is raised
        else:
            django_model = SomeSchemaModel.objects.first()
            return ExampleMutation(some_schema=django_model)
The response to e.g.
mutation exampleMutation($failHere: String) {
    exampleMutation(
        failHere: $failHere,
    ) {
        someSchema
        {
            field1
            field2
        }
    }
}
is valid and behaves as expected if the mutation is called with e.g. {"failHere": "No"}. Ergo, the structure of the graphQL/Django stuff is not the problem.
The problem is that when the endpoint is called with {"failHere": "yes"} (or if I just raise an error on the first line of the mutation), the response is:
{
    "data": {
        "exampleMutation": {
            "someSchema": null
        }
    }
}
The above might be a bug; I posted it to GitHub. But in case this comes up for someone else, this is the workaround that got things working:
In django's settings (with appropriate SCHEMA for your app):
GRAPHENE = {
    'SCHEMA': 'core.graphql.index.schema',
    'MIDDLEWARE': ['graphene_django.debug.middleware.DjangoDebugMiddleware'],
}
I'm not certain of this, but I found from some searching that perhaps with newer versions of graphene-django, the DjangoDebugMiddleware middleware is required to send these exceptions in the JSON response.
In any case, the real problem is that the GraphQLView handler seems to need the middleware passed not as a list (as the settings shown above require), but as a double list, which can be achieved by overriding the instantiation like this:
from graphene_django.views import GraphQLView, graphene_settings

class GQLView(GraphQLView):
    def __init__(self, *args, **kwargs):
        kwargs.update({'middleware': [graphene_settings.MIDDLEWARE]})  # note the extra list level
        super().__init__(*args, **kwargs)
and then in urls.py you'd have something like:
from django.urls import re_path

urlpatterns = [
    # Graphql
    re_path(r'graphql', GQLView.as_view()),
]
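For reference, once exceptions are surfaced again, a failing mutation should come back with a standard GraphQL errors array alongside the null data. The exact message and any debug fields depend on your graphene-django version, but it looks roughly like:

{
    "errors": [
        {
            "message": "text"
        }
    ],
    "data": {
        "exampleMutation": null
    }
}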
I am trying to write some test cases for some code I've developed using Elasticsearch and Django. The concept is straightforward: I just want to test a GET request, which will be an Elasticsearch query. However, I am constructing the query as a nested dict. When I pass the nested dict to the Client object in the test script, it gets passed through Django until it ends up at the urlencode function, which doesn't look like it can handle nested dicts, only MultiValueDicts. Any suggestions or solutions? I don't want to use any additional packages, as I don't want to depend on potentially unsupported packages for this application.
Generic Code:
import elasticsearch
from django.test import Client, TestCase

class MyViewTest(TestCase):
    es_connection = elasticsearch.Elasticsearch("localhost:9200")

    def test_es_query(self):
        client = Client()
        query = {
            "query": {
                "term": {
                    "city": "some city"
                }
            }
        }
        response = client.get("", query)
        print(response)
Link for urlencode function: urlencode Django
The issue is clearly at the conditional statement where the urlencode function checks if the dictionary value is a str or bytes object. If it isn't, it creates a generator object which can never access the nested portions of the dictionary.
EDIT: 07/25/2018
So I was able to come up with a temporary workaround to at least run the test. However, it is ugly and I feel like there must be a better way. The first thing I tried was specifying the content_type and converting the dict to a JSON string first. However, Django still kicked back an error in the urlencode function.
class MyViewTest(TestCase):
    es_connection = elasticsearch.Elasticsearch("localhost:9200")

    def test_es_query(self):
        client = Client()
        query = {
            "query": {
                "term": {
                    "city": "some city"
                }
            }
        }
        response = client.get("", data=json.dumps(query), content_type="application/json")
        print(response)
So instead I had to do:
class MyViewTest(TestCase):
    es_connection = elasticsearch.Elasticsearch("localhost:9200")

    def test_es_query(self):
        client = Client()
        query = {
            "query": {
                "term": {
                    "city": "some city"
                }
            }
        }
        query = json.dumps(query)
        response = client.get("", data={"q": query}, content_type="application/json")
        print(response)
This let me send the HttpRequest to my View and parse it back out using:
json.loads(request.GET["q"])
Then I was able to successfully get the requested data from Elasticsearch and return it as an HttpResponse. I feel like in Django, though, there has to be a way to just pass a JSON-formatted string directly to the Client object's get function. I thought specifying the content_type as application/json would work, but it still calls the urlencode function. Any ideas? I really don't want to put this current system into production.
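One option that might be worth trying (I haven't verified it against this exact setup): the test Client inherits RequestFactory's generic() method, which takes the payload as a raw string and never runs it through urlencode, so you can send the JSON as the body of the GET instead of as a query parameter. The /my-es-endpoint/ path below is just a placeholder for your view's URL:

import json

from django.test import Client, TestCase

class MyViewTest(TestCase):
    def test_es_query(self):
        client = Client()
        query = {"query": {"term": {"city": "some city"}}}
        # generic() passes the string straight through as the request body,
        # so urlencode never sees the nested dict
        response = client.generic("GET", "/my-es-endpoint/",  # placeholder URL
                                  data=json.dumps(query),
                                  content_type="application/json")
        print(response)

The view would then read the query with json.loads(request.body) instead of request.GET.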
I'm invoking a Python-based AWS Lambda method from API Gateway in non-proxy mode. How should I properly handle exceptions, so that an appropriate HTTP status code is set along with a JSON body built from parts of the exception?
As an example, I have the following handler:
def my_handler(event, context):
    try:
        s3conn.head_object(Bucket='my_bucket', Key='my_filename')
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] == "404":
            raise ClientException("Key '{}' not found".format(filename))
            # or: return "Key '{}' not found".format(filename) ?

class ClientException(Exception):
    pass
Should I throw an exception or return a string? Then how should I configure the Integration Response? Obviously I've RTFM but the FM is FU.
tl;dr
Your Lambda handler must throw an exception if you want a non-200 response.
Catch all exceptions in your handler method. Format the caught exception message into JSON and throw as a custom Exception type.
Use Integration Response to regex your custom Exception found in the errorMessage field of the Lambda response.
API Gateway + AWS Lambda Exception Handling
There's a number of things you need to know about Lambda, API Gateway and how they work together.
Lambda Exceptions
When an exception is thrown from your handler/function/method, the exception is serialised into a JSON message. From your example code, on a 404 from S3, your code would throw:
{
    "stackTrace": [
        [
            "/var/task/mycode.py",
            118,
            "my_handler",
            "raise ClientException(\"Key '{}' not found \".format(filename))"
        ]
    ],
    "errorType": "ClientException",
    "errorMessage": "Key 'my_filename' not found"
}
API Gateway Integration Response
Overview
"Integration Responses" map responses from Lambda to HTTP codes. They also allow the message body to be altered as they pass through.
By default, a "200" Integration Response is configured for you, which passes all responses from Lambda back to the client as-is, including serialised JSON exceptions, as an HTTP 200 (OK) response.
For good messages, you may want to use the "200" Integration Response to map the JSON payload to one of your defined models.
Catching exceptions
For exceptions, you'll want to set an appropriate HTTP status code and probably remove the stacktrace to hide the internals of your code.
For each HTTP Status code you wish to return, you'll need to add an "Integration Response" entry. The Integration Response is configured with a regex match (using java.util.regex.Matcher.matches() not .find()) that matches against the errorMessage field. Once a match has been made, you can then configure a Body Mapping Template, to selectively format a suitable exception body.
As the regex only matches against the errorMessage field from the exception, you will need to ensure that your exception contains enough information to allow different Integration Responses to match and set the error accordingly.
(You can not use .* to match all exceptions, as this seems to match all responses, including non-exceptions!)
Exceptions with meaning
To create exceptions with enough details in their message, the error-handling-patterns-in-amazon-api-gateway-and-aws-lambda blog post recommends that you create an exception handler in your handler to stuff the details of the exception into a JSON string to be used as the exception message.
My preferred approach is to create a new top-level method as your handler which deals with responding to API Gateway. This method either returns the required payload or throws an exception with the original exception encoded as a JSON string as the exception message.
import json

import botocore.exceptions

def my_handler_core(event, context):
    try:
        s3conn.head_object(Bucket='my_bucket', Key='my_filename')
        ...
        return something
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] == "404":
            raise ClientException("Key '{}' not found".format(filename))

def my_handler(event=None, context=None):
    try:
        token = my_handler_core(event, context)
        response = {
            "response": token
        }
        # This is the happy path
        return response
    except Exception as e:
        exception_type = e.__class__.__name__
        exception_message = str(e)
        api_exception_obj = {
            "isError": True,
            "type": exception_type,
            "message": exception_message
        }
        # Create a JSON string
        api_exception_json = json.dumps(api_exception_obj)
        raise LambdaException(api_exception_json)

# Simple exception wrappers
class ClientException(Exception):
    pass

class LambdaException(Exception):
    pass
On exception, Lambda will now return:
{
    "stackTrace": [
        [
            "/var/task/mycode.py",
            42,
            "my_handler",
            "raise LambdaException(api_exception_json)"
        ]
    ],
    "errorType": "LambdaException",
    "errorMessage": "{\"message\": \"Key 'my_filename' not found\", \"type\": \"ClientException\", \"isError\": true}"
}
Mapping exceptions
Now that you have all the details in the errorMessage, you can start to map status codes and create well formed error payloads. API Gateway parses and unescapes the errorMessage field, so the regex used does not need to deal with escaping.
Example
To catch this ClientException as a 400 error and map the payload to a clean error model, you can do the following:
Create new Error model:
{
    "type": "object",
    "title": "MyErrorModel",
    "properties": {
        "isError": {
            "type": "boolean"
        },
        "message": {
            "type": "string"
        },
        "type": {
            "type": "string"
        }
    },
    "required": [
        "isError",
        "message",
        "type"
    ]
}
Edit "Method Response" and map new model to 400
Add new Integration Response
Set code to 400
Set regex to match "ClientException" types with tolerance for whitespace: .*"type"\s*:\s*"ClientException".*
Add a Body Mapping Template for application/json to map the contents of errorMessage to your model:
#set($inputRoot = $input.path('$'))
#set($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))
{
    "isError" : true,
    "message" : "$errorMessageObj.message",
    "type" : "$errorMessageObj.type"
}
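With those pieces in place, a client that triggers the 404 path in the example above should receive an HTTP 400 whose body, per the mapping template, looks like:

{
    "isError" : true,
    "message" : "Key 'my_filename' not found",
    "type" : "ClientException"
}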
I am looking into the swampdragon chat_example. In router.py, as per the documentation, get_subscription_channels gives the channel name.
When I tried to change the return value it still works.
How can I limit the messages to a specific group/channel? What do I need to do in the front end?
from swampdragon import route_handler
from swampdragon.route_handler import BaseRouter

class ChatRouter(BaseRouter):
    route_name = 'chat-route'
    valid_verbs = ['chat', 'subscribe']

    def get_subscription_channels(self, **kwargs):
        return ['chatrm']

    def chat(self, *args, **kwargs):
        errors = {}
        if errors:
            self.send_error(errors)
        else:
            self.send({'status': 'ok'})
            self.publish(self.get_subscription_channels(), kwargs)

route_handler.register(ChatRouter)
Here is the subscription method.
function subscribe () {
    swampdragon.subscribe('chat-route', 'local-channel', null, function (context, data) {
        // anything that happens after successfully subscribing
    }, function (context, data) {
        // anything that happens if subscribing failed
    });
}
I also came across the same issue. The problem here is that you are not publishing data to the channel that you have subscribed to. You subscribed to the channel named 'local-channel', but in your router.py you are publishing (or routing) data to another channel named 'chatrm'. That's why you are not getting any notification. There are two ways you can fix it.
1. You need to change the get_subscription_channels method in router.py as shown below.
def get_subscription_channels(self, **kwargs):
    return ['local-channel']
OR
2. Change the subscription method to the one below:
function subscribe () {
    swampdragon.subscribe('chat-route', 'chatrm', null, function (context, data) {
        // anything that happens after successfully subscribing
    }, function (context, data) {
        // anything that happens if subscribing failed
    });
}
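To actually limit messages to a specific room or group, the usual approach (a sketch only, untested; the 'room' kwarg and the channel naming are my own, and the client has to send the room both when subscribing and when calling chat) is to derive the channel name from the kwargs instead of hard-coding it:

class ChatRouter(BaseRouter):
    route_name = 'chat-route'
    valid_verbs = ['chat', 'subscribe']

    def get_subscription_channels(self, **kwargs):
        # one channel per room, e.g. 'chat-room-42'; subscribers only get
        # messages published to the room they asked for
        return ['chat-room-{}'.format(kwargs.get('room', 'lobby'))]

    def chat(self, *args, **kwargs):
        self.send({'status': 'ok'})
        # publish only to the channel(s) derived from the same kwargs
        self.publish(self.get_subscription_channels(**kwargs), kwargs)

On the front end, the subscribe call then has to pass the room in its data argument (so it reaches get_subscription_channels) and listen on the matching channel name, in the same spirit as fix 2 above.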
I have some custom flask methods in an eve app that need to communicate with a telnet device and return a result, but I also want to pre-populate data into some resources after retrieving data from this telnet device, like so:
@app.route("/get_vlan_description", methods=['POST'])
def get_vlan_description():
    switch = prepare_switch(request)
    result = dispatch_switch_command(switch, 'get_vlan_description')
    # TODO: populate vlans resource with result data and return status
My settings.py looks like this:
SERVER_NAME = '127.0.0.1:5000'

DOMAIN = {
    'vlans': {
        'id': {
            'type': 'integer',
            'required': True,
            'unique': True
        },
        'subnet': {
            'type': 'string',
            'required': True
        },
        'description': {
            'type': 'boolean',
            'default': False
        }
    }
}
I'm having trouble finding docs or source code for how to access a mongo resource directly and insert this data.
Have you looked into the on_insert hook? From the documentation:
When documents are about to be stored in the database, both on_insert(resource, documents) and on_insert_<resource>(documents) events are raised. Callback functions could hook into these events to arbitrarily add new fields, or edit existing ones. on_insert is raised on every resource being updated while on_insert_<resource> is raised when the <resource> endpoint has been hit with a POST request. In both circumstances, the event will be raised only if at least one document passed validation and is going to be inserted. documents is a list and only contains documents ready for insertion (payload documents that did not pass validation are not included).
So, if I get what you want to achieve, you could have something like this:
from eve import Eve

def telnet_service(resource, documents):
    """
    fetch data from telnet device;
    update 'documents' accordingly
    """
    pass

app = Eve()
app.on_insert += telnet_service

if __name__ == "__main__":
    app.run()
Note that this way you don't have to mess with the database directly as Eve will take care of that.
If you don't want to store the telnet data but only send it back along with the fetched documents, you can hook to on_fetch instead.
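A rough sketch of that variant (untested; query_telnet_device is a placeholder for your own telnet call, and the hook assumes the standard on_fetch_resource signature, where the response payload carries the fetched documents in _items):

def attach_telnet_data(resource, response):
    # only decorate the 'vlans' resource; other endpoints pass through untouched
    if resource != 'vlans':
        return
    for doc in response.get('_items', []):
        # placeholder: enrich each fetched document with live telnet data
        doc['telnet_status'] = query_telnet_device(doc)

app.on_fetch_resource += attach_telnet_data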
Lastly, if you really want to use the data layer you can use app.data.driver, as seen in this example snippet.
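For completeness, here is a rough sketch of that last option from inside your custom route (untested; it assumes the default Mongo data layer, where app.data.driver.db exposes the PyMongo database, and the result field names are placeholders):

from flask import jsonify, request

# 'app' is the Eve() instance; prepare_switch / dispatch_switch_command are
# the helpers from the question
@app.route("/get_vlan_description", methods=['POST'])
def get_vlan_description():
    switch = prepare_switch(request)
    result = dispatch_switch_command(switch, 'get_vlan_description')
    # write straight into the 'vlans' collection; this bypasses Eve's schema
    # validation and meta fields, unlike the on_insert / post_internal routes
    vlans = app.data.driver.db['vlans']
    vlans.insert_one({'id': result['id'], 'description': result['description']})
    return jsonify({'status': 'ok'})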
use post_internal
Usage example:
from run import app
from eve.methods.post import post_internal

payload = {
    "firstname": "Ray",
    "lastname": "LaMontagne",
    "role": ["contributor"]
}

with app.test_request_context():
    x = post_internal('people', payload)
    print(x)