How to execute commands on an EC2 instance using boto3 - python

Can anyone tell me if I can execute shell commands using boto3 on an AWS instance?
I read about boto.manage.cmdshell in a few places, but it is deprecated in boto3.

ssm_client = boto3.client('ssm')
response = ssm_client.send_command(
    InstanceIds=['i-03#####'],
    DocumentName="AWS-RunShellScript",
    Parameters={'commands': ['start ecs']},
)
command_id = response['Command']['CommandId']
output = ssm_client.get_command_invocation(
    CommandId=command_id,
    InstanceId='i-03######',
)
print(output)

ssm = boto3.client('ssm')
testCommand = ssm.send_command(
    InstanceIds=['i-123123123123'],
    DocumentName='AWS-RunShellScript',
    Comment='la la la',
    OutputS3BucketName='myOutputS3Bucket',
    OutputS3KeyPrefix='i-123123123123',
    Parameters={"commands": ["ifconfig"]}
)
i-123123123123 is a pretend ec2 instance id.
I put that in the OutputS3KeyPrefix to get a unique place to store logs in the bucket.
You can install the SSM agent like this:
ec2r = boto3.resource('ec2')
userdata = """#cloud-config
runcmd:
- /home/ec2-user/sudo npm run prod
- cd /tmp
- curl https://amazon-ssm-%s.s3.amazonaws.com/latest/linux_amd64/amazon-ssm-agent.rpm -o amazon-ssm-agent.rpm
- yum install -y amazon-ssm-agent.rpm
""" % region
if ssm == "on":
    instance = ec2r.create_instances(
        ImageId=ami,
        MinCount=1,
        MaxCount=1,
        KeyName=keyname,
        InstanceType=instancetype,
        NetworkInterfaces=[{
            'DeviceIndex': 0,
            'SubnetId': mySub,
            'Groups': secGroupList,
            # The original had 'AssociatePublicIpAddress' twice; a dict key
            # can only appear once, so keep a single entry.
            'AssociatePublicIpAddress': AssociatePublicIpAddress
        }],
        Monitoring={'Enabled': False},
        UserData=userdata,
        IamInstanceProfile={'Name': rolename},
        EbsOptimized=False
    )

I know I am answering a fairly old thread; I am not sure SSM even existed at that time. But now you can use SSM send_command from boto3 to run commands directly on EC2 instances.
Here is a sample that runs PowerShell commands on EC2 instances:
import boto3

ssm_client = boto3.client('ssm', region_name="us-west-2")  # use the region you are working in
response = ssm_client.send_command(
    InstanceIds=[
        "i-03########"  # the instance id to execute on; multiple ids are allowed
    ],
    DocumentName="AWS-RunPowerShellScript",
    Parameters={
        'commands': [
            'ipconfig'
        ]
    },
)
command_id = response['Command']['CommandId']
output = ssm_client.get_command_invocation(
    CommandId=command_id,
    InstanceId='i-03######',
)
print(output)
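One caveat: calling get_command_invocation immediately after send_command can raise InvocationDoesNotExist, because the invocation is registered asynchronously. The SSM client exposes a command_executed waiter you can use in between (it raises WaiterError if the command ends in a failed state); a minimal sketch:

# Wait for the command to finish before fetching its output.
waiter = ssm_client.get_waiter('command_executed')
waiter.wait(
    CommandId=command_id,
    InstanceId='i-03######',  # same placeholder instance id as above
)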
For more information, read the boto3 SSM docs.
For information on SSM itself, refer to the AWS docs.

No. The boto.manage.cmdshell functionality in boto was not migrated to boto3. The original boto.manage.cmdshell functionality used Paramiko, which you could use directly with boto3 if you want SSH functionality with boto3.
Here's a boto3 GitHub issue on this topic.
As @jarmod points out, there is new AWS functionality as of October 2015 that enables you to run commands on Windows systems using AWS EC2 SSM. You can access this in boto3 with the boto3 SSM client as of botocore version 1.3.1.
Here's a boto3 GitHub issue on supporting "EC2 Run Command".
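A minimal sketch of that Paramiko route, looking up the instance's address with boto3 and then running a command over SSH; the instance id, user name, and key path are placeholders:

import boto3
import paramiko

ec2 = boto3.resource('ec2')
instance = ec2.Instance('i-03######')  # placeholder instance id

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(instance.public_ip_address, username='ec2-user',
            key_filename='/path/to/key.pem')  # placeholder key path
stdin, stdout, stderr = ssh.exec_command('uptime')
print(stdout.read().decode())
ssh.close()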

Change
command_id = response['Command']['CommandId']
to
command_id = context.aws_request_id

Documentation says:
aws_request_id
AWS request ID associated with the request. This is the ID returned to the client that called the invoke method.
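For reference, context here is the AWS Lambda context object, so this suggestion only applies when the code runs inside a Lambda handler; a minimal sketch:

# aws_request_id exists only on the Lambda context object passed to the handler.
def lambda_handler(event, context):
    request_id = context.aws_request_id
    print(request_id)
    return request_id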

Related

Python Boto3 to AWS SDK for blob storage

This code retrieves the buckets of an Amazon S3-compatible storage (not Amazon AWS, but the Zadara-compatible cloud storage), and it works:
import boto3
from botocore.client import Config

session = boto3.session.Session()
s3_client = session.client(
    service_name='s3',
    region_name='IT',
    aws_access_key_id='xyz',
    aws_secret_access_key='abcedf',
    endpoint_url='https://nothing.com:443',
    config=Config(signature_version='s3v4'),
)
print('Buckets')
boto3.set_stream_logger(name='botocore')
print(s3_client.list_buckets())
I am trying to use the same method to access S3 via C# and the AWS SDK, but I always get the error "The request signature we calculated does not match the signature you provided. Check your key and signing method.".
AmazonS3Config config = new AmazonS3Config();
config.AuthenticationServiceName = "s3";
config.ServiceURL = "https://nothing.com:443";
config.SignatureVersion = "s3v4";
config.AuthenticationRegion = "it";
AmazonS3Client client = new AmazonS3Client(
"xyz",
"abcdef",
config);
ListBucketsResponse r = await client.ListBucketsAsync();
What can I do? Why is it not working? I can't find a solution.
I also tried to trace debug info:
Python
boto3.set_stream_logger(name='botocore')
C#
AWSConfigs.LoggingConfig.LogResponses = ResponseLoggingOption.Always;
AWSConfigs.LoggingConfig.LogMetrics = true;
AWSConfigs.LoggingConfig.LogTo = Amazon.LoggingOptions.SystemDiagnostics;
AWSConfigs.AddTraceListener("Amazon", new System.Diagnostics.ConsoleTraceListener());
but for C# it does not log the whole request.
Any suggestions?

Can I get the region for a given instance using boto3 by passing just the private IP of the EC2 instance?

Trying to get the AWS Region for a particular instance. Is it possible to determine an instance's region by passing only its IP?
What I tried:
import boto3
client = boto3.client('s3') # example client, could be any
client.meta.region_name
but it shows the same region for all servers.
Unfortunately, there is no native cross-region get_instance_by_private_ip API available. But you can do something like this:
import boto3

def find_region_by_private_ip_address(ip):
    ec2 = boto3.resource('ec2', 'us-east-1')
    regions = [r['RegionName'] for r in ec2.meta.client.describe_regions()['Regions']]
    for region in regions:
        ec2 = boto3.resource('ec2', region)
        instance_iterator = ec2.instances.filter(
            Filters=[
                {
                    'Name': 'private-ip-address',
                    'Values': [ip]
                },
            ]
        )
        instance_list = list(instance_iterator)
        if len(instance_list) > 0:
            return region
If performance is critical, you can do multi-threading or multi-processing to query regions in parallel.
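A hedged sketch of that parallel variant using concurrent.futures; each worker builds its own Session, since the default boto3 session is not thread-safe:

import boto3
from concurrent.futures import ThreadPoolExecutor

def _check_region(region, ip):
    # Each thread gets its own Session; the default session is not thread-safe.
    ec2 = boto3.session.Session().resource('ec2', region_name=region)
    filters = [{'Name': 'private-ip-address', 'Values': [ip]}]
    return region if list(ec2.instances.filter(Filters=filters)) else None

def find_region_parallel(ip):
    client = boto3.client('ec2', region_name='us-east-1')
    regions = [r['RegionName'] for r in client.describe_regions()['Regions']]
    with ThreadPoolExecutor(max_workers=len(regions)) as pool:
        for region in pool.map(lambda r: _check_region(r, ip), regions):
            if region:
                return region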

How do I pass multiple parameters to AWS SSM send_command with Boto3

I have created a custom AWS SSM document for use with Run Command; I am now trying to use Boto3 to send that command to a single EC2 instance.
The document requires 2 parameters, but I cannot figure out how to pass them correctly by looking at the docs here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ssm.html#SSM.Client.send_command
I can use the CLI successfully with this command:
aws ssm send-command \
--document-name "ResetVpnMfa" \
--document-version "1" \
--targets '[{"Key":"InstanceIds","Values":["i-abcabcab"]}]' \
--parameters '{"command":["GoogleAuthlock"],"username":["some.user"]}' \
--timeout-seconds 30 \
--region eu-west-1
My Python code:
import boto3

client = boto3.client('ssm', region_name='eu-west-1')
params = {
    'command': ['GoogleAuthLock'],
    'username': ['some.user'],
}
response = client.send_command(
    InstanceIds=['i-abcabcab'],
    DocumentName='ResetVpnMfa',
    DocumentVersion='1',
    TimeoutSeconds=30,
    Comment='VPN MFA reset for some.user via Boto',
    Parameters=params
)
I get the following error:
botocore.errorfactory.InvalidParameters: An error occurred (InvalidParameters) when calling the SendCommand operation:
The SSM Document itself:
---
schemaVersion: "2.2"
description: "Unlock or reset MFA on OpenVPN"
parameters:
  username:
    type: "String"
    description: "VPN user e.g. digger.dachshund"
  command:
    type: "String"
    description: "Command to unlock or reset MFA on OpenVPN."
    allowedValues:
    - GoogleAuthlock
    - GoogleAuthRegen
mainSteps:
- action: "aws:runShellScript"
  name: "VPNResetMFA"
  inputs:
    runCommand:
    - "/usr/local/openvpn_as/scripts/sacli --user {{username}} --lock 0 {{command}}"
There's a mismatch between your SSM document and the Python code: you've misspelled GoogleAuthlock as GoogleAuthLock, and parameter values must match the document's allowedValues exactly (the check is case-sensitive). The Parameters variable should be:
params = {
    'command': ['GoogleAuthlock'],
    'username': ['some.user'],
}

Boto3: Empty Datapoint from AWS instance

I wanted to write a Python script which will fetch my EC2 CPU utilization. While using the get_metric_statistics() method, the response contains an empty Datapoints list.
As far as I know, the Datapoints: [] array shouldn't be empty; it should return something for the CPU load %. My code is:
import boto3
import sys
import datetime

client = boto3.client('cloudwatch')
response = client.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[
        {
            'Name': 'InstanceId',
            'Value': 'i-***********'
        },
    ],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=600),
    EndTime=datetime.datetime.utcnow(),
    Period=120,
    Statistics=[
        'Average',
    ],
    Unit='Percent'
)
print(response)
Any help on what is wrong here? Thanks!
To use percentile statistics you must first enable detailed monitoring.
See Amazon EC2 Metrics and Dimensions.
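If detailed monitoring is the missing piece, it can also be turned on from boto3; a minimal sketch, reusing the question's placeholder instance id:

import boto3

ec2_client = boto3.client('ec2')
# Switch the instance to detailed (1-minute) monitoring; note this incurs extra cost.
ec2_client.monitor_instances(InstanceIds=['i-***********'])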
By default, boto3 uses the default region, which here is "ap-southeast-1" (set in .aws/config). From the .aws/credentials file, it automatically takes the secret key & access key that were used while configuring the API. If these conflict, the datapoint returns empty.
Solution:
1. Create a new profile for every region, configuring the region in .aws/config. Mine looks like this:
[default]
region = ap-southeast-1
[profile nv]
region = us-east-1
[profile prod]
region = us-east-1
nv and prod are user-made profile names.
2. Pass the profile name instead of default:
session = boto3.Session(profile_name='default')
3. Enter the secret key & access key in the same way in .aws/credentials.
You can use aws configure --profile your_profile_name to create the profiles inside the config & credentials files.
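Putting the profile approach together with the CloudWatch client, a minimal sketch (the profile name nv is the one configured above):

import boto3

# Build the client from an explicit profile so the region is never ambiguous.
session = boto3.Session(profile_name='nv')  # us-east-1 per the config above
cloudwatch = session.client('cloudwatch')
print(cloudwatch.meta.region_name)  # confirms which region the client targets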

Mount EBS volume using boto3

I want to use AWS Spot Instances to train neural networks. To prevent loss of the model when the spot instance is terminated, I plan to create a snapshot of the EBS volume, make a new volume, and attach it to a reserved instance. How can I mount the EBS volume, or otherwise make it available, using Python and boto3?
These are the steps used to make the volume available on Linux, but I want to automate the process so that I don't need to SSH into the instance every time. Here is the code I use to attach the volume:
import boto3

ec2 = boto3.resource('ec2')
spot = ec2.Instance('i-9a8f5082')
res = ec2.Instance('i-86e65a13')
snapshot = ec2.create_snapshot(VolumeId="vol-5315f7db", Description="testing spot instances")
volume = ec2.create_volume(SnapshotId=snapshot.id, AvailabilityZone='us-west-2a')
res.attach_volume(VolumeId=volume.id, Device='/dev/sdy')  # attach the new volume, not the source volume
snapshot.delete()
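One caveat about the snippet above: create_snapshot and create_volume are asynchronous, so the attach can race them. A hedged sketch using the standard EC2 waiters before attaching:

# Wait for the snapshot to complete and the new volume to become available,
# then attach the new volume.
ec2.meta.client.get_waiter('snapshot_completed').wait(SnapshotIds=[snapshot.id])
ec2.meta.client.get_waiter('volume_available').wait(VolumeIds=[volume.id])
res.attach_volume(VolumeId=volume.id, Device='/dev/sdy')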
You need to run the mount command on the instance, and there are two ways to do it: send the command over an SSH connection, as @mootmoot wrote, or send it through the AWS SSM service, as @Mark B wrote. Here is a detailed SSM solution sample; you can ignore the parts you don't need:
Send a bash command to instances using AWS SSM:
# Amazon EC2 Systems Manager requires:
# 1. An IAM role for EC2 instances that will process commands. There should be a
#    Systems Manager role, and the instance should use this role (set at instance creation).
# 2. A separate role for users executing commands. The AWS IAM user whose access and
#    secret keys are used must have SSM permission (e.g. AmazonSSMFullAccess).
# http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-configuring-access-policies.html
import time

import boto3

def execute_commands_on_linux_instances(commands, instance_ids):
    client = boto3.client('ssm', **conn_args)  # Needs your credentials here
    not_worked_instances = instance_ids.copy()
    all_ssm_enabled_instances, ssm_enabled_instances = [], []
    outputs = []
    not_executed = []
    # Select only the instances that have an active SSM agent.
    if len(client.describe_instance_information()['InstanceInformationList']) > 0:
        resp = client.describe_instance_information(MaxResults=20)['InstanceInformationList']
        for ins in resp:
            all_ssm_enabled_instances.append(ins['InstanceId'])
        ssm_enabled_instances = list(set(all_ssm_enabled_instances).intersection(instance_ids))
        not_worked_instances = list(set(instance_ids).difference(all_ssm_enabled_instances))
        # Now, send the command!
        resp = client.send_command(
            DocumentName="AWS-RunShellScript",
            Parameters={'commands': [commands]},
            InstanceIds=ssm_enabled_instances,
        )
        # Get the command id generated by send_command.
        com_id = resp['Command']['CommandId']
        # Wait until the command status is out of Pending and InProgress.
        while True:
            list_comm = client.list_commands(CommandId=com_id)
            if list_comm['Commands'][0]['Status'] in ('Pending', 'InProgress'):
                time.sleep(1)  # avoid busy-waiting and API throttling
            else:
                # Commands on all instances were executed.
                break
        # Get the responses the instances gave to this command (stdout and stderr).
        # Although the command may have reached the instance, if the instance could not
        # execute it (ResponseCode -1), it is skipped.
        for i in ssm_enabled_instances:
            resp2 = client.get_command_invocation(CommandId=com_id, InstanceId=i)
            if resp2['ResponseCode'] == -1:
                not_executed.append(i)
            else:
                outputs.append({'ins_id': i, 'stdout': resp2['StandardOutputContent'],
                                'stderr': resp2['StandardErrorContent']})
        # Remove any instance that could not execute the command and record it as not worked.
        ssm_enabled_instances = list(set(ssm_enabled_instances).difference(not_executed))
        not_worked_instances.extend(not_executed)
        return ssm_enabled_instances, not_worked_instances, outputs
    else:
        print("There is no available instance with a working SSM agent!")
        return ssm_enabled_instances, not_worked_instances, outputs
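A hypothetical usage for the mount scenario from the question (the device, mount point, and instance id are placeholders):

worked, failed, outputs = execute_commands_on_linux_instances(
    'sudo mkdir -p /data && sudo mount /dev/xvdy /data',  # placeholder mount command
    ['i-03######']
)
for out in outputs:
    print(out['ins_id'], out['stdout'], out['stderr'])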
Create instances with the required IAM instance profile, which has the required role with the required policy attached. As a result, instances launch with a running SSM agent:
def create_ec2_instance(node_type):
    # Define userdata to be run at instance launch.
    userdata = """#cloud-config
runcmd:
- cd /tmp
- sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
"""
    ec2_r = boto3.resource('ec2', **conn_args)
    rolename = "amazonec2ssmrole"
    i_pro_name = "ins_pro_for_ssm"
    # Create an IAM instance profile and add the required role to it.
    # Create the role and attach a policy to it if it does not exist.
    # Instances will use this role to build the SSM (EC2 Systems Manager) connection.
    iam = boto3.resource('iam', **conn_args)
    try:
        iam.meta.client.get_instance_profile(InstanceProfileName=i_pro_name)
    except iam.meta.client.exceptions.NoSuchEntityException:
        iam.create_instance_profile(InstanceProfileName=i_pro_name)
    try:
        iam.meta.client.get_role(RoleName=rolename)
    except iam.meta.client.exceptions.NoSuchEntityException:
        iam.create_role(
            AssumeRolePolicyDocument='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":["ec2.amazonaws.com"]},"Action":["sts:AssumeRole"]}]}',
            RoleName=rolename)
        role = iam.Role(rolename)
        role.attach_policy(PolicyArn='arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM')
        iam.meta.client.add_role_to_instance_profile(InstanceProfileName=i_pro_name, RoleName=rolename)
    iam_ins_profile = {'Name': i_pro_name}
    if node_type == "Medium":
        instance = ec2_r.create_instances(
            ImageId='ami-aa5ebdd2',
            MinCount=1,
            MaxCount=1,
            UserData=userdata,
            InstanceType='t2.medium',
            KeyName=key_pair_name,
            IamInstanceProfile=iam_ins_profile,
            BlockDeviceMappings=[{"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 20}}])
    elif node_type == "Micro":
        instance = ec2_r.create_instances(
            ImageId='ami-aa5ebdd2',
            MinCount=1,
            MaxCount=1,
            UserData=userdata,
            InstanceType='t2.micro',
            KeyName=key_pair_name,
            IamInstanceProfile=iam_ins_profile,
            BlockDeviceMappings=[{"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 10}}])
    else:
        print("Node Type Error")
        return -1
    # Wait for the running state; by default, one wait is 15 seconds, 40 attempts.
    print('Waiting for instance {0} to switch to running state'.format(instance[0].id))
    waiter = ec2_r.meta.client.get_waiter('instance_running')
    waiter.wait(InstanceIds=[instance[0].id])
    instance[0].reload()
    print('Instance is running, public IP: {0}'.format(instance[0].public_ip_address))
    return instance[0].id
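A hypothetical call, assuming key_pair_name and conn_args are defined as module-level variables:

instance_id = create_ec2_instance('Micro')
print('Launched', instance_id)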
Don't forget to give SSM permission (e.g. AmazonSSMFullAccess) to the AWS IAM user whose access and secret keys you use.
By the way, conn_args can be defined as follows:
conn_args = {
    'aws_access_key_id': Your_Access_Key,
    'aws_secret_access_key': Your_Secret_Key,
    'region_name': 'us-west-2'
}
You have to perform those steps in the operating system; you can't perform them via the AWS API (boto3). Your best bet is to script those steps and then kick the script off somehow via boto3, possibly using the AWS SSM service.
What's wrong with sending and executing an SSH script remotely? Assuming you are using Ubuntu:
ssh -i your.pem ubuntu@ec2_name_or_ip 'sudo bash -s' < mount_script.sh
If you attach tags to those resources, you can later use boto3 to look the resources up by tag name instead of being tied to specific static IDs.
