How to set segment (top) level AWS X-Ray annotations in a Python Lambda

I am successfully using AWS X-Ray within a Python v2 Lambda. patch_all() is working well to automatically patch a portion of my libraries (e.g. boto3) for X-Ray.
I am unable to set high-level annotations that persist across the lower-level subsegments. Can annotations in Lambda be set like this? If not, how else should they be set? I've tried getting both the current subsegment and the current segment.
import json
import re
import boto3
import logging
import sys
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all
patch_all()
def lambda_handler(event, context):
    subsegment_ref = xray_recorder.current_subsegment()
    subsegment_ref.put_annotation('account_id', 'foo')

The Lambda function segment is not generated by the X-Ray SDK. We are working with the Lambda team to provide a better experience, but currently there is no workaround for annotating the segment.
For annotating a subsegment, you can create a subsegment inside the handler and then add the annotation to it. See the quick start guide at https://github.com/aws/aws-xray-sdk-python for creating a custom subsegment.
The easiest way is to use the context manager style:
with xray_recorder.in_subsegment('pick_a_subsegment_name') as subsegment:
    subsegment.put_annotation('key', 'value')
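For example, a minimal handler using this pattern might look like the following (the subsegment name and annotation values here are placeholders, not from the original question):
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

patch_all()

def lambda_handler(event, context):
    # Open a custom subsegment; annotations attached to it are indexed
    # and can be queried with X-Ray filter expressions.
    with xray_recorder.in_subsegment('handler_work') as subsegment:
        subsegment.put_annotation('account_id', 'foo')
        # ... do the actual work here, inside the subsegment ...
    return {'status': 'ok'}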

Related

How to configure CorsRule for CDK using python

I am trying to figure out the proper syntax for setting up CORS on an S3 bucket using the CDK (Python). The class aws_s3.CorsRule requires three params (allowed_methods, allowed_origins, max_age=None). I am trying to specify allowed_methods, which takes a list of methods, but the base class is enum.Enum. So how do I create a list of these methods? This is what I have tried, but it doesn't pass validation.
s3.Bucket(self, "StaticSiteBucket",
    bucket_name="replaceMeWithBucketName",
    versioned=True,
    removal_policy=core.RemovalPolicy.DESTROY,
    website_index_document="index.html",
    cors=s3.CorsRule(allowed_methods=[s3.HttpMethods.DELETE], allowed_origins=["*"], max_age=3000)
)
The only thing I'm focused on is the cors line:
cors=s3.CorsRule(allowed_methods=[s3.HttpMethods.DELETE],allowed_origins=["*"],max_age=3000)
Trying to read the documentation is like peeling an onion.
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_s3/HttpMethods.html#aws_cdk.aws_s3.HttpMethods
I tried calling each one individually, as you can see with s3.HttpMethods.DELETE, but that fails when it tries to synthesize.
Looks like you at least forgot to wrap the param you pass to cors in a list. I agree that the docs are a bit of a rabbit hole, but you can see that the Bucket docs specify the cors param as Optional[List[CorsRule]].
This is mine:
from aws_cdk import core
from aws_cdk import aws_s3
from aws_cdk import aws_apigateway
aws_s3.Bucket(self,
    'my_bucket',
    bucket_name='my_bucket',
    removal_policy=core.RemovalPolicy.DESTROY,
    cors=[aws_s3.CorsRule(
        allowed_headers=["*"],
        allowed_methods=[aws_s3.HttpMethods.PUT],
        allowed_origins=["*"])
    ])
So yours should be:
cors=[s3.CorsRule(
    allowed_methods=[s3.HttpMethods.DELETE],
    allowed_origins=["*"],
    max_age=3000)]
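Put back into your original Bucket definition (names and values are the placeholders from your snippet), the full call would look roughly like this:
s3.Bucket(self, "StaticSiteBucket",
    bucket_name="replaceMeWithBucketName",
    versioned=True,
    removal_policy=core.RemovalPolicy.DESTROY,
    website_index_document="index.html",
    cors=[s3.CorsRule(
        allowed_methods=[s3.HttpMethods.DELETE],
        allowed_origins=["*"],
        max_age=3000)]
)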

How to programmatically create topics using kafka-python?

I am getting started with Kafka and am fairly new to Python. I am using the library named kafka-python to communicate with my Kafka broker. Now I need to dynamically create a topic from my code; from the docs I see that I can call the create_topics() method to do so, but I am not sure how to get an instance of that class. I am not able to work this out from the docs.
Can someone help me with this?
You first need to create an instance of KafkaAdminClient. The following should do the trick for you:
from kafka.admin import KafkaAdminClient, NewTopic
admin_client = KafkaAdminClient(
    bootstrap_servers="localhost:9092",
    client_id='test'
)
topic_list = [NewTopic(name="example_topic", num_partitions=1, replication_factor=1)]
admin_client.create_topics(new_topics=topic_list, validate_only=False)
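Depending on your kafka-python version, creating a topic that already exists raises TopicAlreadyExistsError; if that applies to you, a small hedged sketch for ignoring it:
from kafka.errors import TopicAlreadyExistsError

try:
    admin_client.create_topics(new_topics=topic_list, validate_only=False)
except TopicAlreadyExistsError:
    # The topic already exists on the broker; nothing to do.
    pass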
Alternatively, you can use the confluent_kafka client, which is a lightweight wrapper around librdkafka:
from confluent_kafka.admin import AdminClient, NewTopic
admin_client = AdminClient({"bootstrap.servers": "localhost:9092"})
topic_list = [NewTopic("example_topic", 1, 1)]
admin_client.create_topics(topic_list)
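Note that confluent_kafka's create_topics() is asynchronous and returns a dict mapping each topic name to a future; waiting on the futures confirms whether creation succeeded (a minimal sketch):
# create_topics() returns {topic_name: future}; result() blocks until the
# broker confirms creation, or raises if creation failed.
futures = admin_client.create_topics(topic_list)
for topic, future in futures.items():
    try:
        future.result()
        print(f"Topic {topic} created")
    except Exception as e:
        print(f"Failed to create topic {topic}: {e}")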

Calling Custom Lambda Layers Functions on Lambda

I'm trying to implement a custom AWS Lambda layer in order to use it with my functions.
It should be a simple layer that gets some parameters from SSM and initializes PureSec's function_shield to protect my services.
The code looks more or less like this:
import os
import boto3
import function_shield as shield

STAGE = os.environ['stage']
REGION = os.environ['region']
PARAMETERS_PREFIX = os.environ['parametersPrefix']

class ParameterNotFoundException(Exception):
    pass

session = boto3.session.Session(region_name=REGION)
ssm = session.client('ssm')

# function_shield config
parameter_path = f"/{PARAMETERS_PREFIX}/{STAGE}/functionShieldToken"
try:
    shield_token = ssm.get_parameter(
        Name=parameter_path,
        WithDecryption=True,
    )['Parameter']['Value']
except Exception:
    raise ParameterNotFoundException(f'Parameter {parameter_path} not found.')

policy = {
    "outbound_connectivity": "block",
    "read_write_tmp": "block",
    "create_child_process": "block",
    "read_handler": "block"
}

def configure(p):
    """
    update function_shield policy
    :param p: policy dict
    :return: null
    """
    policy.update(p)
    shield.configure({"policy": policy, "disable_analytics": True, "token": shield_token})

configure(policy)
I want to be able to link this layer to my functions so that they are protected at runtime.
I'm using the Serverless Framework, and it seems like my layer was deployed just fine, as was my example function. The AWS console also shows that the layer is linked to my function.
I named my layer 'shield' and tried to import it by that name in my test function:
import os
import shield
def test(event, context):
    shield.configure(policy)  # this should be reusable for easy tweaking whenever I need to give more or less permissions to my lambda code.
    os.system('ls')
    return {
        'rep': 'ok'
    }
Ideally, I should get an error on CloudWatch telling me that function_shield has prevented a child process from running; instead, I receive an error telling me that there is no module named 'shield' in my runtime.
What am I missing?
I couldn't find any examples of custom code being used in layers, apart from numpy, scipy, binaries, etc.
I'm sorry for my stupidity...
Thanks for your kindness!
You also need to name the file in your layer shield.py so that it's importable in Python. Note that it does not matter how the layer itself is named; that's a configuration in the AWS world and has no effect in the Python world.
What does have an effect is the structure of the layer archive. You need to place the files you want to import into a python directory, zip it, and use the resulting archive as a layer (I'm presuming the Serverless Framework is doing this for you).
In the Lambda execution environment, the layer archive gets extracted into /opt, but it's only /opt/python that's declared in the PYTHONPATH. Hence the need for the "wrapper" python directory.
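For example, assuming the module is named shield.py, the layer archive would be laid out roughly like this (the python/ directory is the part that matters):
layer.zip
└── python/
    └── shield.py   # extracted to /opt/python/shield.py, so `import shield` works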
Have a look here; I have described all the necessary steps to set up and call custom Lambda layer functions in Lambda:
https://medium.com/@nimesh.kumar031/how-to-set-up-layers-python-in-aws-lambda-functions-1355519c11ed?source=friends_link&sk=af4994c28b33fb5ba7a27a83c35702e3

Who created an Amazon EC2 instance using Boto and Python?

I want to know who created a particular instance. I am using CloudTrail to look up the events, but I am not able to find out who created that instance. I am using Python and Boto3 to get the details.
I am using this code (lookup_events() from CloudTrail in boto3) to extract the information about an instance:
ct_conn = sess.client(service_name='cloudtrail',region_name='us-east-1')
events=ct_conn.lookup_events()
I found the solution to the above problem using the lookup_events() function.
ct_conn = boto3.client(service_name='cloudtrail', region_name='us-east-1')
events_dict = ct_conn.lookup_events(LookupAttributes=[{'AttributeKey': 'ResourceName', 'AttributeValue': 'i-xxxxxx'}])
for data in events_dict['Events']:
    json_file = json.loads(data['CloudTrailEvent'])
    print(json_file['userIdentity']['userName'])
@Karthik - Here is a sample of creating the session:
import boto3
import json
import os

session = boto3.Session(
    region_name='us-east-1',
    aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY']
)
ct_conn = session.client(service_name='cloudtrail', region_name='us-east-1')
events_dict = ct_conn.lookup_events(LookupAttributes=[{'AttributeKey': 'ResourceName', 'AttributeValue': 'i-xxx'}])
for data in events_dict['Events']:
    json_file = json.loads(data['CloudTrailEvent'])
    print(json_file['userIdentity']['userName'])
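If you don't know the instance ID up front, you can also filter by event name instead of resource name; a hedged sketch (note that userName is not present for every identity type, so fall back to the ARN):
# Look up who launched instances by filtering on the RunInstances event.
events_dict = ct_conn.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName', 'AttributeValue': 'RunInstances'}]
)
for data in events_dict['Events']:
    event = json.loads(data['CloudTrailEvent'])
    identity = event['userIdentity']
    print(identity.get('userName', identity.get('arn')))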

How do I switch backends in libusb for python?

I am using PyUSB, and according to the docs it runs on any one of three backends: libusb01, libusb10, and openusb. I have all three backends installed. How can I tell which backend it is using, and how can I switch to a different one?
I found the answer by looking inside the usb.core source file.
You do it by importing the backend and then setting the backend parameter in the find() method of usb.core, like so:
import usb.backend.libusb1 as libusb1
import usb.backend.libusb0 as libusb0
import usb.backend.openusb as openusb
and then any one of:
devices = usb.core.find(find_all=1, backend=libusb1.get_backend())
devices = usb.core.find(find_all=1, backend=libusb0.get_backend())
devices = usb.core.find(find_all=1, backend=openusb.get_backend())
This assumes you are using pyusb-1.0.0a3. For 1.0.0a2, the libs are called libusb10, libusb01, and openusb. Of course, you'd only need to import the one you want.
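If you want to check which backends are actually available on your system, each get_backend() call returns None when its underlying library can't be loaded; a small sketch of that check:
import usb.backend.libusb1 as libusb1
import usb.backend.libusb0 as libusb0
import usb.backend.openusb as openusb

# get_backend() returns None if the corresponding native library is missing.
for name, module in (('libusb1', libusb1), ('libusb0', libusb0), ('openusb', openusb)):
    backend = module.get_backend()
    print(name, 'available' if backend is not None else 'not available')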
