I am working on executing AWS Lambda code from CodePipeline. I have given the Lambda role full access to EC2 and CodeDeploy. The commands generally work when I am not triggering them from CodePipeline. When triggered from CodePipeline, the action just keeps running, even though success is sent. What am I doing wrong?
Code :
import boto3
import json

def lambda_handler(event, context):
    reservations = boto3.client('ec2').describe_instances()['Reservations']
    instances_list = []
    process_instance_list = []
    command = 'COMMAND TO EXECUTE ON SERVER'
    ssm = boto3.client('ssm')
    for res in reservations:
        instances = res['Instances']
        for inst in res['Instances']:
            for tag in inst['Tags']:
                # print("Tag value is {}".format(tag['Value']))
                if tag['Value'] == 'Ubuntu_Magento':
                    print("{} {} {}".format(tag['Value'], inst['InstanceId'], inst['LaunchTime']))
                    instances_list.append(inst)
    instances_list.sort(key=lambda x: x['LaunchTime'])
    instance_id = instances_list[0]['InstanceId']
    ssmresponse = ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': [command]})
    code_pipeline = boto3.client('codepipeline')
    job_id = event['CodePipeline.job']['id']
    code_pipeline.put_job_success_result(jobId=job_Id)
Any Lambda has a maximum lifespan of 15 minutes; after that it exits no matter what. I think it has something to do with the way you are triggering it.
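For reference, a minimal sketch (not the asker's exact code) of the pattern CodePipeline expects: the invoke action stays "In progress" until the function calls put_job_success_result or put_job_failure_result for that job ID, so it helps to report failure on any exception rather than letting the action hang until it times out.

import boto3

def lambda_handler(event, context):
    code_pipeline = boto3.client('codepipeline')
    job_id = event['CodePipeline.job']['id']
    try:
        # ... do the actual work here (e.g. ssm.send_command) ...
        code_pipeline.put_job_success_result(jobId=job_id)
    except Exception as e:
        # Tell CodePipeline the job failed so the action does not hang
        code_pipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'type': 'JobFailed',
                            'message': str(e),
                            'externalExecutionId': context.aws_request_id})
        raise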
Related
I have a Lambda function that reads an Excel file, does some processing, and then stores the result in a different S3 bucket.
def lambda_handler(event, context):
    try:
        status = int(event['status'])
        if status:
            Reading_Bucket_Name = 'read-data-s3'
            Writing_Bucket_Name = 'write-excel-file-bucket'
            rowDataFile = 'Analyse.xlsx'
            HTMLfileName = 'index.html'
            url = loading_function(Reading_Bucket_Name=Reading_Bucket_Name, Writing_Bucket_Name=Writing_Bucket_Name, rowDataFile=rowDataFile, HTMLfileName=HTMLfileName)
            status = 0
            return {"statusCode": 200, "URL": url}
        else:
            return {"statusCode": 400, "Error": "The code could not be executed"}
    except Exception as e:
        print('#________ An error occurred while reading Status code int(event[status]) ________#')
        print(e)
        raise e
    return None
The code is only supposed to run once: it should return the URL and then the Lambda function should exit.
But the problem is: I get the first output, and then the Lambda function is invoked again! It then goes into the exception handler and runs many more times, because there is no event['status'] in those invocations. That key is only present when I call the function with:
{
"status": "1"
}
How can I stop execution after getting the first output?
Update:
This is what causes the problem, by uploading a new file to an S3 bucket:
s3_client = boto3.client('s3')
fig.write_html('/tmp/' + HTMLfileName, auto_play=False)
response = s3_client.upload_file('/tmp/' + HTMLfileName, Writing_Bucket_Name, HTMLfileName, ExtraArgs={'ACL':'public-read', 'ContentType':'text/html'})
return True
Given that the Lambda function runs whenever a new object is created in the Amazon S3 bucket, it would appear that the bucket has been configured with an Event Notification that triggers the AWS Lambda function.
To check this, go to the bucket in the S3 management console, go to the Properties tab and scroll down to Event notifications. Look for any configured events that trigger a Lambda function.
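If you prefer to check this from code rather than the console, something along these lines should list any Lambda triggers on the bucket (the bucket name below is taken from the question and is only illustrative):

import boto3

s3 = boto3.client('s3')
config = s3.get_bucket_notification_configuration(Bucket='write-excel-file-bucket')
# Each entry shows which Lambda function is invoked and for which events
for rule in config.get('LambdaFunctionConfigurations', []):
    print(rule['LambdaFunctionArn'], rule['Events'])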
I'm trying to run a Glue job by calling it from a Lambda function. The Glue job itself runs perfectly fine, but when I trigger it from the Lambda function, I get the error below:
[ERROR] ParamValidationError: Parameter validation failed: Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$" or be an ARN matching the regex "^arn:(aws).*:(s3|s3-object-lambda):[a-z\-0-9]*:[0-9]{12}:accesspoint[/:][a-zA-Z0-9\-.]{1,63}$|^arn:(aws).*:s3-outposts:[a-z\-0-9]+:[0-9]{12}:outpost[/:][a-zA-Z0-9\-]{1,63}[/:]accesspoint[/:][a-zA-Z0-9\-]{1,63}$"
There is no issue with my bucket name, as I am able to perform other actions with it, and my Glue job works fine when run standalone.
Any help would be appreciated.
Thanks in advance.
Maybe you are including the s3:// prefix when specifying the bucket name; it is not required.
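As a hedged sketch of what that fix might look like, a small helper that splits an s3:// URI into the bucket and key before passing them to boto3 (the URI here is made up):

from urllib.parse import urlparse

def split_s3_uri(uri):
    # 's3://my-bucket/path/to/file.csv' -> ('my-bucket', 'path/to/file.csv')
    parsed = urlparse(uri)
    return parsed.netloc, parsed.path.lstrip('/')

bucket, key = split_s3_uri('s3://my-bucket/path/to/file.csv')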
I was able to solve it by making a few changes.
My initial code was:
import json
import os
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
import boto3

client = boto3.client('glue')
glueJobName = "MyTestJob"

def lambda_handler(event, context):
    logger.info('## INITIATED BY EVENT: ')
    logger.info(event['detail'])
    response = client.start_job_run(JobName=glueJobName)
    logger.info('## STARTED GLUE JOB: ' + glueJobName)
    logger.info('## GLUE JOB RUN ID: ' + response['JobRunId'])
    return response
Once I removed the logging part (code below), it worked without any error:
from __future__ import print_function
import boto3
import urllib

print('Loading function')
glue = boto3.client('glue')

def lambda_handler(event, context):
    gluejobname = "MyTestJob"
    runId = glue.start_job_run(JobName=gluejobname)
    status = glue.get_job_run(JobName=gluejobname, RunId=runId['JobRunId'])
    print("Job Status : ", status['JobRun']['JobRunState'])
What could be the issue here?
Thanks
I'd like to make a Python Azure Function App (web API) to process a queue of tasks. I have already set up a trigger that calls this API whenever a task is inserted into the queue. As this API will process all of the current tasks in the queue, I would like to prevent the API from executing if another execution of it is already in progress, to avoid processing conflicts.
I thought of using a database locking mechanism, but it doesn't look very elegant. Is there a singleton design pattern that can be used in a Python Azure Function App for this purpose? Thanks.
I found a way to solve this problem using Azure Durable Functions. There are three types of functions in an Azure Durable Functions app: the Orchestration Client function, the Orchestrator function, and Activity functions. I just need to add some checking steps in the Orchestration Client function, as in the following example:
# This function is an HTTP starter function for Durable Functions.
import logging

import azure.functions as func
import azure.durable_functions as df

def is_finished(runtime_status: df.models.OrchestrationRuntimeStatus):
    result = False
    if runtime_status is None or \
            runtime_status in [df.OrchestrationRuntimeStatus.Canceled,
                               df.OrchestrationRuntimeStatus.Completed,
                               df.OrchestrationRuntimeStatus.Failed,
                               df.OrchestrationRuntimeStatus.Terminated]:
        result = True
    return result

async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    client = df.DurableOrchestrationClient(starter)
    # general azure function url : http://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>
    # function.json -> "route": "orchestrators/{functionName}/{instanceId}"
    orchestrator_instance_id = req.route_params['instanceId']
    function_name = req.route_params['functionName']
    INVENSYNC_ORCHESTRATOR_INSTANCE_ID = '117610EF-BC37-4E31-BFA4-205EBB3CC54E'  # just select any key
    if orchestrator_instance_id == INVENSYNC_ORCHESTRATOR_INSTANCE_ID:
        existing_instance_status = await client.get_status(orchestrator_instance_id)
        logging.info(f"InventorySyncHttpStart() - existing_instance_status = '{existing_instance_status}'.")
        if existing_instance_status is None or \
                is_finished(existing_instance_status.runtime_status):
            logging.info(f"InventorySyncHttpStart() - existing_instance_status.runtime_status = '{existing_instance_status.runtime_status}'.")
            orchestrator_instance_id = await client.start_new(function_name, orchestrator_instance_id)
            logging.info(f"Started orchestration with ID = '{orchestrator_instance_id}'.")
            result = client.create_check_status_response(req, orchestrator_instance_id)
        else:
            result = func.HttpResponse(status_code=409, body=f"An instance with ID '{orchestrator_instance_id}' already exists")
    else:
        result = func.HttpResponse(status_code=406, body=f"Invalid Instance ID '{orchestrator_instance_id}' in URL")
    return result
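For completeness, here is a minimal sketch (not part of the original answer) of the Orchestrator function that the HTTP starter above would launch; "process_queue" is a hypothetical Activity function name:

import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    # Call a single activity that drains the task queue; because only one
    # orchestration instance with the fixed ID can run at a time, the queue
    # is never processed concurrently.
    result = yield context.call_activity("process_queue", None)
    return result

main = df.Orchestrator.create(orchestrator_function)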
I want to use AWS Spot Instances to train neural networks. To prevent loss of the model when the Spot Instance is terminated, I plan to create a snapshot of the EBS volume, make a new volume, and attach it to a Reserved Instance. How can I mount, or make the EBS volume available, using Python and boto3?
These are the steps used to make the volume available on Linux, but I want to automate the process so that I don't need to SSH into the instance every time. Here is the code I use to attach the volume:
import boto3
ec2 = boto3.resource('ec2')
spot = ec2.Instance('i-9a8f5082')
res = ec2.Instance('i-86e65a13')
snapshot = ec2.create_snapshot(VolumeId="vol-5315f7db", Description="testing spot instances")
volume = ec2.create_volume(SnapshotId=snapshot.id, AvailabilityZone='us-west-2a')
res.attach_volume(VolumeId="vol-5315f7db", Device='/dev/sdy')
snapshot.delete()
You need to run the mount command on the instance. There are two ways to do it. One is sending the command over an SSH connection, as @mootmoot wrote. The other is sending the command through the AWS SSM service, as @Mark B wrote. Here is a detailed SSM sample solution; you can ignore the parts you don't need:
Send a bash command to instances using AWS SSM:
# Amazon EC2 Systems Manager requires:
# 1. An IAM role for EC2 instances that will process commands. There should be a Systems Manager role and the instance should use it! (Done while creating the instance)
# 2. A separate role for users executing commands. The AWS IAM user whose access and secret keys you use should have SSM permission (i.e. AmazonSSMFullAccess).
# http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-configuring-access-policies.html
import boto3

def execute_commands_on_linux_instances(commands, instance_ids):
    client = boto3.client('ssm', **conn_args)  # Need your credentials here
    all_ssm_enabled_instances, ssm_enabled_instances, not_worked_instances = [], [], []
    not_worked_instances = instance_ids.copy()
    all_ssm_enabled_instances = list()
    outputs = list({})
    not_executed = list()
    # Select only the instances that have an active SSM agent.
    if len(client.describe_instance_information()['InstanceInformationList']) > 0:
        resp = client.describe_instance_information(MaxResults=20)['InstanceInformationList']
        for ins in resp:
            all_ssm_enabled_instances.append(ins['InstanceId'])
        ssm_enabled_instances = list(set(all_ssm_enabled_instances).intersection(instance_ids))
        not_worked_instances = list(set(instance_ids).difference(all_ssm_enabled_instances))
        # Now, send the command!
        resp = client.send_command(
            DocumentName="AWS-RunShellScript",
            Parameters={'commands': [commands]},
            InstanceIds=ssm_enabled_instances,
        )
        # Get the command id generated by send_command
        com_id = resp['Command']['CommandId']
        # Wait until all the command statuses are out of Pending and InProgress
        list_comm = client.list_commands(CommandId=com_id)
        while True:
            list_comm = client.list_commands(CommandId=com_id)
            if (list_comm['Commands'][0]['Status'] == 'Pending' or list_comm['Commands'][0]['Status'] == 'InProgress'):
                continue
            else:
                # Commands on all instances were executed
                break
        # Get the responses the instances gave to this command (stdout and stderr).
        # Although the command may reach an instance, if the instance couldn't execute it (response -1) it is ignored.
        for i in ssm_enabled_instances:
            resp2 = client.get_command_invocation(CommandId=com_id, InstanceId=i)
            if resp2['ResponseCode'] == -1:
                not_executed.append(i)
            else:
                outputs.append({'ins_id': i, 'stdout': resp2['StandardOutputContent'],
                                'stderr': resp2['StandardErrorContent']})
        # Remove the instances that could never execute the command and add them to not_worked_instances
        ssm_enabled_instances = list(set(ssm_enabled_instances).difference(not_executed))
        not_worked_instances.extend(not_executed)
        return ssm_enabled_instances, not_worked_instances, outputs
    else:
        print("There is no available instance with a working SSM service!")
        return ssm_enabled_instances, not_worked_instances, outputs
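A short usage sketch of the helper above (the instance ID and command are placeholders):

worked, not_worked, outputs = execute_commands_on_linux_instances('lsblk', ['i-0123456789abcdef0'])
for out in outputs:
    print(out['ins_id'], out['stdout'], out['stderr'])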
Create instances with an IAM instance profile that has the required role, which in turn has the required policy attached. As a result, the created instances will have a running SSM agent:
def create_ec2_instance(node_type):
    # define userdata to be run at instance launch
    userdata = """#cloud-config
runcmd:
 - cd /tmp
 - sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
"""
    ec2_r = boto3.resource('ec2', **conn_args)
    rolename = "amazonec2ssmrole"
    i_pro_name = "ins_pro_for_ssm"
    # Create an IAM instance profile and add the required role to it.
    # Create the role and attach a policy to it if it does not exist.
    # Instances will use this role to build the SSM (EC2 Systems Manager) connection.
    iam = boto3.resource('iam', **conn_args)
    try:
        response = iam.meta.client.get_instance_profile(InstanceProfileName=i_pro_name)
    except:
        iam.create_instance_profile(InstanceProfileName=i_pro_name)
    try:
        response = iam.meta.client.get_role(RoleName=rolename)
    except:
        iam.create_role(
            AssumeRolePolicyDocument='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":["ec2.amazonaws.com"]},"Action":["sts:AssumeRole"]}]}',
            RoleName=rolename)
    role = iam.Role(rolename)
    role.attach_policy(PolicyArn='arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM')
    iam.meta.client.add_role_to_instance_profile(InstanceProfileName=i_pro_name, RoleName=rolename)
    iam_ins_profile = {'Name': i_pro_name}
    if node_type == "Medium":
        instance = ec2_r.create_instances(
            ImageId='ami-aa5ebdd2',
            MinCount=1,
            MaxCount=1,
            UserData=userdata,
            InstanceType='t2.medium',
            KeyName=key_pair_name,
            IamInstanceProfile=iam_ins_profile,
            BlockDeviceMappings=[{"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 20}}])
    elif node_type == "Micro":
        instance = ec2_r.create_instances(
            ImageId='ami-aa5ebdd2',
            MinCount=1,
            MaxCount=1,
            UserData=userdata,
            InstanceType='t2.micro',
            KeyName=key_pair_name,
            IamInstanceProfile=iam_ins_profile,
            BlockDeviceMappings=[{"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 10}}])
    else:
        print("Node Type Error")
        return -1
    # Wait for the instance to enter the running state; by default one wait is 15 seconds, 40 attempts
    print('Waiting for instance {0} to switch to running state'.format(instance[0].id))
    waiter = ec2_r.meta.client.get_waiter('instance_running')
    waiter.wait(InstanceIds=[instance[0].id])
    instance[0].reload()
    print('Instance is running, public IP: {0}'.format(instance[0].public_ip_address))
    return instance[0].id
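And a quick usage sketch of the helper above ("Micro" selects the t2.micro branch):

instance_id = create_ec2_instance("Micro")
print("Created instance:", instance_id)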
Don't forget to give SSM permission (i.e. AmazonSSMFullAccess) to the AWS IAM user whose access and secret keys you use.
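If you would rather grant that permission from code than from the console, something like the following should work (the user name is a placeholder):

import boto3

iam = boto3.client('iam')
iam.attach_user_policy(UserName='my-cli-user',
                       PolicyArn='arn:aws:iam::aws:policy/AmazonSSMFullAccess')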
By the way, conn_args can be defined as follows:
conn_args = {
    'aws_access_key_id': Your_Access_Key,
    'aws_secret_access_key': Your_Secret_Key,
    'region_name': 'us-west-2'
}
You have to perform those steps in the operating system. You can't perform those steps via the AWS API (Boto3). Your best bet is to script those steps and then kick off the script somehow via Boto3, possibly using the AWS SSM service.
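As a rough sketch of that idea (not a definitive implementation), the mount steps could be sent to the instance through SSM Run Command; the device name, filesystem state, and mount point below are assumptions:

import boto3

ssm = boto3.client('ssm')
ssm.send_command(
    InstanceIds=['i-86e65a13'],  # the reserved instance from the question
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': [
        'sudo mkdir -p /data',
        'sudo mount /dev/xvdy /data',  # /dev/sdy typically shows up as /dev/xvdy on Linux
    ]})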
What's wrong with sending and executing an SSH script remotely? Assuming you are using Ubuntu, i.e.
ssh -i your.pem ubuntu@ec2_name_or_ip 'sudo bash -s' < mount_script.sh
If you attach tags to those resources, you can later use boto3 to query the resources by tag name instead of being tied to specific static IDs.
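A small sketch of that tag-based lookup (the tag key and value are examples only):

import boto3

ec2 = boto3.client('ec2')
volumes = ec2.describe_volumes(
    Filters=[{'Name': 'tag:Purpose', 'Values': ['nn-training']}])
for vol in volumes['Volumes']:
    print(vol['VolumeId'], vol['State'])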
I'm using IPython to get an understanding of Boto3 and to interact with EC2 instances. Here is the code I'm using to create an instance:
import boto3

ec2 = boto3.resource('ec2')
client = boto3.client('ec2')

new_instance = ec2.create_instances(
    ImageId='ami-d05e75b8',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
    KeyName=<name_of_my_key>,
    SecurityGroups=['<security_group_name>'],
    DryRun=False
)
This starts an EC2 instance fine, and I can get the public DNS name, IP, and other info from the AWS console. But when I try to get the public DNS using Boto by doing this:
new_instance[0].public_dns_name
it returns an empty string. Yet other instance details, such as:
new_instance[0].instance_type
return the correct information.
Any ideas? Thanks.
EDIT:
So if I do:
def get_name(inst):
    client = boto3.client('ec2')
    response = client.describe_instances(InstanceIds=[inst[0].instance_id])
    foo = response['Reservations'][0]['Instances'][0]['NetworkInterfaces'][0]['Association']['PublicDnsName']
    return foo
foo = get_name(new_instance)
print foo
Then it will return the public DNS. But it doesn't make sense to me why I need to do all of this.
The Instance object you get back is only hydrated with the response attributes from the create_instances call. Since the DNS name is not available until the instance has reached the running state [1], it will not be immediately present. I imagine the time between you creating the instance and calling describe instances is long enough for the micro instance to start.
import boto3

ec2 = boto3.resource('ec2')
instances = ec2.create_instances(
    ImageId='ami-f0091d91',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
    KeyName='<KEY-NAME>',
    SecurityGroups=['<GROUP-NAME>'])
instance = instances[0]

# Wait for the instance to enter the running state
instance.wait_until_running()

# Reload the instance attributes
instance.load()

print(instance.public_dns_name)
Here is my wrapper:
import boto3
from boto3.session import Session

def credentials():
    """Build an EC2 resource from explicit credentials."""
    session = Session(aws_access_key_id='XXXXXXXXX',
                      aws_secret_access_key='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
    # Use the session so the keys above are actually applied
    ec2 = session.resource('ec2', region_name='us-east-2')
    return ec2

def get_public_dns(instance_id):
    """Given the instance_id, return the public DNS."""
    ec2 = credentials()
    instance = ec2.Instance(instance_id)
    instancePublicDNS = instance.public_dns_name
    return instancePublicDNS
Then you just need to use your instance_id to get the public DNS of any of your active EC2 instances:
dns = get_public_dns(instance_id)
Remember to change "region_name" to your region and add your own "aws_access_key_id" and "aws_secret_access_key".
import boto3
import pandas as pd

session = boto3.Session(profile_name='aws_dev')
dev_ec2_client = session.client('ec2')
response = dev_ec2_client.describe_instances()

df = pd.DataFrame(columns=['InstanceId', 'InstanceType', 'PrivateIpAddress', 'PublicDnsName'])

i = 0
for res in response['Reservations']:
    df.loc[i, 'InstanceId'] = res['Instances'][0]['InstanceId']
    df.loc[i, 'InstanceType'] = res['Instances'][0]['InstanceType']
    df.loc[i, 'PrivateIpAddress'] = res['Instances'][0]['PrivateIpAddress']
    df.loc[i, 'PublicDnsName'] = res['Instances'][0]['PublicDnsName']
    i += 1

print(df)
Note:
Replace profile_name='aws_dev' with your own AWS profile name.
This code works with Python 3.