I would like to manage my AWS Lambda functions from a Python script, without having to use the AWS web console. The idea is to be able to quickly recreate/migrate/set up my application's objects (tables, functions, etc.) in a new AWS account or Region.
It is easy to do this with DynamoDB tables, for instance:
import boto3

resource = boto3.resource(service_name='dynamodb', region_name='region', ...)
resource.create_table(
    TableName='table_name',
    KeySchema=[...],
    AttributeDefinitions=[...],  # a list of attribute dicts, not a dict
    ProvisionedThroughput={...}
)
Done! I just created a new DynamoDB table from a Python script. How can I do the same for Lambda functions? Say... create a new function, configure ‘Function Name’ and ‘Runtime’, maybe set up a ‘Role’, and upload a script from a file. That’d be really helpful.
Thanks in advance.
To create a Lambda function using boto3, you can use create_function.
The AWS docs also provide an example of how to use create_function:
response = client.create_function(
    Code={
        'S3Bucket': 'my-bucket-1xpuxmplzrlbh',
        'S3Key': 'function.zip',
    },
    Description='Process image objects from Amazon S3.',
    Environment={
        'Variables': {
            'BUCKET': 'my-bucket-1xpuxmplzrlbh',
            'PREFIX': 'inbound',
        },
    },
    FunctionName='my-function',
    Handler='index.handler',
    KMSKeyArn='arn:aws:kms:us-west-2:123456789012:key/b0844d6c-xmpl-4463-97a4-d49f50839966',
    MemorySize=256,
    Publish=True,
    Role='arn:aws:iam::123456789012:role/lambda-role',
    Runtime='nodejs12.x',
    Tags={
        'DEPARTMENT': 'Assets',
    },
    Timeout=15,
    TracingConfig={
        'Mode': 'Active',
    },
)
print(response)
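Since you mentioned uploading a script from a file: create_function's Code parameter also accepts a ZipFile field with the raw bytes of a zip archive, so you can upload a local deployment package without staging it in S3 first. A minimal sketch; the file name, runtime, handler, role ARN, and region below are placeholders, not values from the question:

import boto3

client = boto3.client('lambda', region_name='us-east-1')  # placeholder region

# Read a local deployment package (a zip containing lambda_function.py)
with open('function.zip', 'rb') as f:
    zipped_code = f.read()

response = client.create_function(
    FunctionName='my-function',
    Runtime='python3.9',
    Role='arn:aws:iam::123456789012:role/lambda-role',  # placeholder role ARN
    Handler='lambda_function.lambda_handler',
    Code={'ZipFile': zipped_code},  # upload the zip bytes directly
)
print(response['FunctionArn'])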
Sorry, Docker starter question here.
I'm currently building a Python app with FastAPI and dockerizing it. Once it's dockerized, I will connect it to AWS Lambda. The problem is: how can I test my Lambda before deploying the image to ECR?
I already tried the local Lambda invoke endpoint, localhost:9000/2015-03-31/functions/function/invocations, creating a POST request that reads a file:
{
    "resource": "/",
    "path": "/upload/",
    "httpMethod": "POST",
    "requestContext": {},
    "multiValueQueryStringParameters": null,
    "headers": {
        "Accept": "application/json",
        "Content-Type": "application/json"
    },
    "body": { "filename": "image.jpg" },
    "files": { "upload": "image.jpg" }
}
I can't get it to work...
Code:
from fastapi import FastAPI, Request
from mangum import Mangum

app = FastAPI()

@app.post("/upload/")
async def upload_image(request: Request):
    print(request)
    print(await request.json())
    print(await request.body())
    return {"received_request_body": await request.json()}

handler = Mangum(app)
Does your container image include the runtime interface emulator (RIE)? Did you build and run the container image? Take a read through the following reference:
https://docs.aws.amazon.com/lambda/latest/dg/images-test.html
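If the image is built and running locally (for example via docker run -p 9000:8080 <image>, which maps the emulator's port 8080 to localhost:9000), you can post your test event to the endpoint you mentioned. A rough sketch using the requests library; note that API Gateway delivers the body as a JSON string rather than a nested object, which may be part of why your event isn't parsed:

import json
import requests

# The RIE exposes the Lambda Invoke API locally
url = "http://localhost:9000/2015-03-31/functions/function/invocations"

event = {
    "resource": "/",
    "path": "/upload/",
    "httpMethod": "POST",
    "requestContext": {},
    "multiValueQueryStringParameters": None,
    "headers": {
        "Accept": "application/json",
        "Content-Type": "application/json"
    },
    # API Gateway sends the body as a JSON *string*, not a nested object
    "body": json.dumps({"filename": "image.jpg"}),
}

response = requests.post(url, json=event)
print(response.status_code, response.text)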
You might also check out AWS SAM CLI, which offers a nice workflow for local build and test of Lambda functions.
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-build.html
I am not sure about your complete infrastructure, but I'll try to answer based on the limited information above.
You can test a Lambda function independently with test events. You can do this from either the AWS console or the CLI (aws lambda invoke); a boto3 sketch follows the reference below.
Reference:
https://docs.aws.amazon.com/lambda/latest/dg/testing-functions.html
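For example, a minimal boto3 equivalent of aws lambda invoke (the function name and payload are placeholders):

import json
import boto3

lambda_client = boto3.client('lambda')

# Invoke the function synchronously with a test event
response = lambda_client.invoke(
    FunctionName='my-function',  # placeholder name
    Payload=json.dumps({'key': 'value'}),
)
print(json.loads(response['Payload'].read()))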
But if you want to test it as an API, put API Gateway in front of your Lambda, which will expose it as an endpoint. Then you can test it whichever way you are comfortable with (curl, Postman, etc.).
Reference: API Gateway with Lambda
(Screenshots: navigating to the test event option in the Lambda console, and a sample test event.)
I am trying to compile information about a list of EC2 instances that I have in a .csv, using Python + boto3.
This .csv contains the Private IPs of those instances. The following command returns everything that I need:
aws ec2 describe-network-interfaces --filters Name=addresses.private-ip-address,Values="<PRIVATE IP>" --region <MY REGION>
So I've decided to use boto3 to do something similar.
But my code isn't returning the information in the dictionary, because I cannot specify the Region inside the code.
The documentation lets me filter by Availability Zone, but that just doesn't work.
import boto3

ec2 = boto3.client('ec2')

describe_network_interfaces = ec2.describe_network_interfaces(
    Filters=[
        {
            'Name': 'addresses.private-ip-address',
            'Values': [
                '<PRIVATE IP>'
            ],
            'Name': 'availability-zone',
            'Values': [
                '<REGION>'
            ]
        }
    ],
    MaxResults=123
)
print(describe_network_interfaces)
☝️ This returns the following 👇
{'NetworkInterfaces': [], 'ResponseMetadata': { <LOTS OF METADATA> }}
I believe it is not working because I can't specify the Region with boto3's describe_network_interfaces, even though I can do it with the AWS CLI command.
Any suggestions?
Note: shelling out to the CLI with popen is not an option for this project.
Thanks in advance.
You can set the region at the client level with something like:
my_region = "us-east-1"
ec2 = boto3.client('ec2', region_name=my_region)
This worked in my environment successfully to get information about systems running in another region.
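Putting that together with the filter from the question, a sketch of the corrected call (region and IP are placeholders). Note that each filter must be its own dict in the Filters list: in your snippet, the second 'Name' key silently overwrites the first, leaving only the availability-zone filter (with a Region as its value), which matches nothing.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # placeholder region

response = ec2.describe_network_interfaces(
    Filters=[
        {
            'Name': 'addresses.private-ip-address',
            'Values': ['10.0.0.10'],  # placeholder private IP
        },
    ],
)
for eni in response['NetworkInterfaces']:
    print(eni['NetworkInterfaceId'], eni['PrivateIpAddress'])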
I want to create ~267 CloudWatch alarms, and the manual process is painfully tedious. Can someone guide me on using a boto3 script so that I can set up all the alarms in one shot?
import boto3

# Create CloudWatch client
cloudwatch = boto3.client('cloudwatch')

# Create alarm
cloudwatch.put_metric_alarm(
    AlarmName='Web_Server_CPU_Utilization',
    ComparisonOperator='GreaterThanThreshold',
    EvaluationPeriods=1,
    MetricName='CPUUtilization',
    Namespace='AWS/EC2',
    Period=60,
    Statistic='Average',
    Threshold=70.0,
    ActionsEnabled=False,
    AlarmDescription='Alarm when server CPU exceeds 70%',
    Dimensions=[
        {
            'Name': 'InstanceId',
            'Value': 'i-xxxxxxxxxx'
        },
    ],
    Unit='Percent'  # CPUUtilization is reported in Percent, not Seconds
)
Assuming you want to add a CloudWatch alarm for each of several EC2 instances, you can simply put the instance IDs in a list and iterate over it to create the alarms. That'd look like:
import boto3

cloudwatch = boto3.client('cloudwatch')

ec2_instances = [
    'i-xxxxxxxxx1',
    'i-xxxxxxxxx2',
    'i-xxxxxxxxx3'
]

for ec2_instance in ec2_instances:
    cloudwatch.put_metric_alarm(
        AlarmName='Web_Server_CPU_Utilization_%s' % ec2_instance,
        ComparisonOperator='GreaterThanThreshold',
        EvaluationPeriods=1,
        MetricName='CPUUtilization',
        Namespace='AWS/EC2',
        Period=60,
        Statistic='Average',
        Threshold=70.0,
        ActionsEnabled=False,
        AlarmDescription='Alarm when server CPU exceeds 70%',
        Dimensions=[
            {
                'Name': 'InstanceId',
                'Value': ec2_instance
            },
        ],
        Unit='Percent'  # CPUUtilization is reported in Percent
    )
Here is a simple script I use to set up CloudWatch alarms on my running EC2 instances. The aim is to reboot my EC2 instances if StatusCheckFailed_Instance is True.
In case you are getting the "Insufficient Data" message as well, it's worthwhile creating the same alarm in the EC2 console and then making sure your put_metric_alarm call matches the source/CloudFormation JSON.
AWS seems to be really fussy about the JSON. Once I matched the EC2 console's JSON exactly, it worked like a charm.
Hope this helps someone.
import boto3

# Specify your region here
region = "ap-northeast-1"
ec2_client = boto3.client("ec2", region_name=region)
# Create the alarms in the same region as the instances
cloudwatch = boto3.client('cloudwatch', region_name=region)

# Get running EC2 instances
reservations = ec2_client.describe_instances(Filters=[
    {
        "Name": "instance-state-name",
        "Values": ["running"],
    }
]).get("Reservations")

# Set up an alarm for each instance
for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance['InstanceId']
        cloudwatch.put_metric_alarm(
            AlarmName=f'Status_Check_{instance_id}',
            AlarmDescription=f'Alarm when status check fails on {instance_id}',
            ActionsEnabled=True,
            OKActions=[],
            AlarmActions=[
                f"arn:aws:automate:{region}:ec2:reboot"
            ],
            InsufficientDataActions=[],
            MetricName='StatusCheckFailed_Instance',
            Namespace='AWS/EC2',
            Statistic='Maximum',
            Dimensions=[
                {
                    'Name': 'InstanceId',
                    'Value': instance_id
                },
            ],
            Period=60,
            EvaluationPeriods=2,
            DatapointsToAlarm=2,
            Threshold=0.99,
            ComparisonOperator='GreaterThanOrEqualToThreshold'
        )
Currently I'm working on a Lambda function to create, reboot, delete, and modify an ElastiCache Redis cluster using a Python 2.7 script.
For this I also need IAM roles and policies.
I'm done with EC2 and RDS stop & start actions, but I haven't seen any solutions for ElastiCache Redis. Can anyone provide scripts or solutions, at least to create and delete an ElastiCache Redis cluster?
You can use boto3, the AWS SDK for Python, to create, reboot, delete, and modify ElastiCache clusters:
Create: create_cache_cluster()
Reboot: reboot_cache_cluster()
Delete: delete_cache_cluster()
Modify: modify_cache_cluster()
The full request syntax for create_cache_cluster looks like this (the 'a'|'b' notation from the boto3 docs means pick one of the values):
import boto3

client = boto3.client('elasticache')

response = client.create_cache_cluster(
    CacheClusterId='string',
    ReplicationGroupId='string',
    AZMode='single-az'|'cross-az',
    PreferredAvailabilityZone='string',
    PreferredAvailabilityZones=[
        'string',
    ],
    NumCacheNodes=123,
    CacheNodeType='string',
    Engine='string',
    EngineVersion='string',
    CacheParameterGroupName='string',
    CacheSubnetGroupName='string',
    CacheSecurityGroupNames=[
        'string',
    ],
    SecurityGroupIds=[
        'string',
    ],
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ],
    SnapshotArns=[
        'string',
    ],
    SnapshotName='string',
    PreferredMaintenanceWindow='string',
    Port=123,
    NotificationTopicArn='string',
    AutoMinorVersionUpgrade=True|False,
    SnapshotRetentionLimit=123,
    SnapshotWindow='string',
    AuthToken='string'
)
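Since the question asks at least for create and delete, here is a minimal runnable sketch; the cluster ID, node type, and engine values are illustrative assumptions, not taken from the question:

import boto3

client = boto3.client('elasticache')

# Create a minimal single-node Redis cluster (illustrative values)
create_response = client.create_cache_cluster(
    CacheClusterId='my-redis-cluster',
    AZMode='single-az',
    NumCacheNodes=1,
    CacheNodeType='cache.t3.micro',
    Engine='redis',
    Port=6379,
)
print(create_response['CacheCluster']['CacheClusterStatus'])

# Delete the cluster later; it must be in the 'available' state first
delete_response = client.delete_cache_cluster(
    CacheClusterId='my-redis-cluster',
)
print(delete_response['CacheCluster']['CacheClusterStatus'])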
For more details on the parameters, refer to the boto3 ElastiCache documentation.
Is it possible to extract data (to Google Cloud Storage) from a shared dataset (where I only have view permissions) using the client APIs (Python)?
I can do this manually using the web browser, but cannot get it to work using the APIs.
I have created a project (MyProject) and a service account for MyProject to use as credentials when calling the API. This account has view permissions on a shared dataset (MySharedDataset) and write permissions on my Google Cloud Storage bucket. If I attempt to run a job in my own project to extract data from the shared project:
job_data = {
    'jobReference': {
        'projectId': myProjectId,
        'jobId': str(uuid.uuid4())
    },
    'configuration': {
        'extract': {
            'sourceTable': {
                'projectId': sharedProjectId,
                'datasetId': sharedDatasetId,
                'tableId': sharedTableId,
            },
            'destinationUris': [cloud_storage_path],
            'destinationFormat': 'AVRO'
        }
    }
}
I get the error:
googleapiclient.errors.HttpError: https://www.googleapis.com/bigquery/v2/projects/sharedProjectId/jobs?alt=json
returned "Value 'myProjectId' in content does not agree with value
'sharedProjectId'. This can happen when a value set through a parameter
is inconsistent with a value set in the request."
Using the sharedProjectId in both the jobReference and sourceTable I get:
googleapiclient.errors.HttpError: https://www.googleapis.com/bigquery/v2/projects/sharedProjectId/jobs?alt=json
returned "Access Denied: Job myJobId: The user myServiceAccountEmail
does not have permission to run a job in project sharedProjectId"
Using myProjectId for both, the job immediately comes back with a status of 'DONE' and no errors, but nothing has been exported. My GCS bucket is empty.
If this is indeed not possible using the API, is there another method/tool that can be used to automate the extraction of data from a shared dataset?
* UPDATE *
This works fine using the API explorer running under my GA login. In my code I use the following method:
service.jobs().insert(projectId=myProjectId, body=job_data).execute()
and removed the jobReference object containing the projectId:
job_data = {
    'configuration': {
        'extract': {
            'sourceTable': {
                'projectId': sharedProjectId,
                'datasetId': sharedDatasetId,
                'tableId': sharedTableId,
            },
            'destinationUris': [cloud_storage_path],
            'destinationFormat': 'AVRO'
        }
    }
}
but this returns the error:
Access Denied: Table sharedProjectId:sharedDatasetId.sharedTableId: The user 'serviceAccountEmail' does not have permission to export a table in
dataset sharedProjectId:sharedDatasetId
My service account is now an owner on the shared dataset and has edit permissions on MyProject. Where else do permissions need to be set, or is it possible to use the Python API with my GA login credentials rather than the service account?
* UPDATE *
Finally got it to work. How? Make sure the service account has permissions to view the dataset (and if you don't have access to check this yourself and someone tells you that it does, ask them to double check/send you a screenshot!)
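For anyone hitting the same thing, this is roughly the working combination described above: insert the extract job into your own project, point sourceTable at the shared table, and make sure the service account can view the shared dataset. A sketch; the variables mirror the question, and building the authorized service object is assumed:

# Assumes `service` is an authorized googleapiclient BigQuery service object
# and the service account has view access on the shared dataset.
job_data = {
    'configuration': {
        'extract': {
            'sourceTable': {
                'projectId': sharedProjectId,  # the shared (source) project
                'datasetId': sharedDatasetId,
                'tableId': sharedTableId,
            },
            'destinationUris': [cloud_storage_path],  # a gs:// path you can write to
            'destinationFormat': 'AVRO'
        }
    }
}

# Run the job in *your* project; only the source table lives in the shared one
response = service.jobs().insert(projectId=myProjectId, body=job_data).execute()
print(response['jobReference']['jobId'])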
After trying to reproduce the issue, I ran into the same parse errors.
I did, however, play around with the API on the Developer Console [2] and it worked.
What I noticed is that the request body in the question had a different format than the documentation on the website, as it used single quotes where JSON requires double quotes.
Here is the body that I ran to get it to work:
{
    "configuration": {
        "extract": {
            "sourceTable": {
                "projectId": "sharedProjectID",
                "datasetId": "sharedDataSetID",
                "tableId": "sharedTableID"
            },
            "destinationUri": "gs://myBucket/myFile.csv"
        }
    }
}
HTTP Request
POST https://www.googleapis.com/bigquery/v2/projects/myProjectId/jobs
If you are still running into problems, you can try the jobs.insert API on the website [2] or the bq command-line tool [3].
The following command can do the same thing:
bq extract sharedProjectId:sharedDataSetId.sharedTableId gs://myBucket/myFile.csv
Hope this helps.
[2] https://cloud.google.com/bigquery/docs/reference/v2/jobs/insert
[3] https://cloud.google.com/bigquery/bq-command-line-tool