I have tried to create a user credential on an AWS EC2 Windows instance using Boto3 and the AWS-RunPowerShellScript document. But when I try to run the code below, I get the error "AttributeError: 'ec2.ServiceResource' object has no attribute 'send_command'".
import boto3
import os
import subprocess

ssm_client = boto3.resource('ec2', aws_access_key_id='xxxxxx', aws_secret_access_key='yyyyy', region_name='us-west-2')
session = boto3.Session(profile_name='exampleaws')
response = ssm_client.send_command(
    InstanceIds=[
        "i-xxxxxx"  # use the instance ID on which you want to execute; even multiple are allowed
    ],
    DocumentName="AWS-RunPowerShellScript",
    Parameters={
        'commands': [
            'New-LocalUser "Latchu" -Password "ABCD123" -FullName "LakshmiNarayanan" -Description "User-Details"'
        ]
    },
)
command_id = response['Command']['CommandId']
output = ssm_client.get_command_invocation(
    CommandId=command_id,
    InstanceId='i-xxxxxx',
)
print(output)
Your ssm_client is incorrect: send_command is a method of the SSM client, not the EC2 resource. It should be:
ssm_client = boto3.client('ssm', aws_access_key_id='xxxxxx', aws_secret_access_key='yyyyy', region_name='us-west-2')
Also, your session is never used. There could still be other errors that are not apparent yet.
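For reference, a minimal corrected sketch, keeping the question's placeholder credentials and instance ID (the PowerShell itself is taken from the question and may need further adjustment, e.g. New-LocalUser normally expects a SecureString password):

import boto3

# send_command lives on the SSM client, not on the EC2 resource
ssm_client = boto3.client('ssm', aws_access_key_id='xxxxxx',
                          aws_secret_access_key='yyyyy', region_name='us-west-2')
response = ssm_client.send_command(
    InstanceIds=["i-xxxxxx"],
    DocumentName="AWS-RunPowerShellScript",
    Parameters={
        'commands': [
            'New-LocalUser "Latchu" -Password "ABCD123" -FullName "LakshmiNarayanan" -Description "User-Details"'
        ]
    },
)
command_id = response['Command']['CommandId']
output = ssm_client.get_command_invocation(CommandId=command_id, InstanceId='i-xxxxxx')
print(output)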
I'm trying to insert an item into my DynamoDB table using the Python SDK (I'm not too familiar with Python). I'm also using MQTT to retrieve some data. Currently the MQTT part works fine (I have some hard-coded values for now so I can focus on getting the AWS functionality working).
I'm also using an AWS Educate account, so I need to pass in my access keys etc. (which I've removed from the code for posting here). The code right now is:
from pprint import pprint
import boto3
import paho.mqtt.client as paho
import json
from types import SimpleNamespace

broker = "35.173.177.9"

# MQTT callback
def onMessage(client, userdata, message):
    print("Message received: ", str(message.payload.decode("utf-8", "ignore")))
    movie_resp = putItem(1000, "1.1.1.1", "Pi", "06/01/21")
    print("Put item succeeded:")
    pprint(movie_resp, sort_dicts=False)

def putItem(id, ip, deviceName, clickDate, dynamodb=None):
    session = boto3.session(
        aws_access_key_id="",
        aws_secret_access_key="",
        aws_session_token="",
        region_name='us-west-1'
    )
    if not dynamodb:
        dynamodb = session.resource('dynamodb')
    table = dynamodb.Table('piData')
    response = table.put_item(
        Item={
            'ip': ip,
            'id': id,
            'deviceName': deviceName,
            'clickDate': clickDate
        }
    )
    return response

# MQTT code below
client = paho.Client("awsUbuntuEC2")
client.on_message = onMessage
client.connect(broker)
client.subscribe("jsonData")
client.loop_forever()
When running the code I get:
"TypeError: 'module' object is not callable"
on line
session = boto3.session(
Any ideas?
Try adding from boto3 import Session at the top of the file and calling Session(...) instead, then run the file again.
Change session = boto3.session( to:
session = boto3.Session( (capital S: Session is the class, while boto3.session is the module it lives in, which is why calling it raises "TypeError: 'module' object is not callable").
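For illustration, a minimal sketch of the fixed call, with the credential fields left blank as in the question:

import boto3

# Session (capital S) is the class; boto3.session is the module it lives in
session = boto3.Session(
    aws_access_key_id="",
    aws_secret_access_key="",
    aws_session_token="",
    region_name='us-west-1'
)
dynamodb = session.resource('dynamodb')
table = dynamodb.Table('piData')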
I'm trying to list the EC2 instances in a specific AWS account using named profiles and boto3 in Python.
The error says:
File ".\aws_ec2_list_instance_info.py", line 18, in <module>
running_instances = ec2.instances.filter(Filters=[{
File "C:\Users\tdunphy\AppData\Local\Programs\Python\Python37-32\lib\site-packages\botocore\client.py", line 601, in __getattr__
self.__class__.__name__, item)
AttributeError: 'EC2' object has no attribute 'instances'
This is my code:
from collections import defaultdict
import boto3
aws_account = input("Enter the name of the AWS account you'll be working in: ")
# Connect to EC2
session = boto3.Session(profile_name=aws_account)
ec2 = session.client('ec2')
# Get information for all running instances
running_instances = ec2.instances.filter(Filters=[{
    'Name': 'instance-state-name',
    'Values': ['running']}])
What am I doing wrong?
There doesn't seem to be an instances attribute on the EC2 client; that collection only exists on the EC2 resource. Maybe describe_instances is what you're looking for:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.describe_instances
Check out the boto3 documentation here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.describe_instances
Or check out this example: https://github.com/CloudBoltSoftware/cloudbolt-forge/blob/master/blueprints/aws_rds_instance/create_aws_rds_instance.py
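For illustration, a sketch of both approaches with the question's named-profile setup (the profile name here is hypothetical): either keep the client and call describe_instances with a filter, or switch to the EC2 resource, which does have an instances collection.

import boto3

session = boto3.Session(profile_name='my-profile')  # hypothetical profile name

# Option 1: EC2 client + describe_instances
client = session.client('ec2')
response = client.describe_instances(Filters=[
    {'Name': 'instance-state-name', 'Values': ['running']}])
for reservation in response['Reservations']:
    for instance in reservation['Instances']:
        print(instance['InstanceId'])

# Option 2: EC2 resource, whose instances collection supports filter()
ec2 = session.resource('ec2')
for instance in ec2.instances.filter(Filters=[
        {'Name': 'instance-state-name', 'Values': ['running']}]):
    print(instance.id)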
I run the following code and get an error:
#!python -u
from boto3 import client

def is_tags():
    response = client('ec2', 'us-east-1').tag_resources(
        ResourceARNList=[
            'arn:aws:ec2:us-east-1::image/ami-55ef662f'
        ],
        Tags=[
            {
                'Key': 'Name',
                'Value': 'john',
            },
        ],
    )

if __name__ == '__main__':
    is_tags()
It is throwing the following error:
AttributeError: 'EC2' object has no attribute 'tag_resources'
What am I doing wrong?
You are using the library incorrectly: the EC2 client object has no attribute called tag_resources, which is why the call fails.
You can refer to the correct usage in the boto3 documentation for Tag in EC2:
import boto3
ec2 = boto3.resource('ec2', 'us-east-1')
tag = ec2.Tag('resource_id','key','value')
EDIT: I am not sure there is a single API for tagging multiple types of resources that works consistently. You seem to be following the Resource Groups Tagging API, in which case you have to define your client correctly, like:
client = boto3.client('resourcegroupstaggingapi', 'us-east-1')
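With that client, a sketch of the original call might look like the following; note that, unlike the EC2-style Tags list, this API takes Tags as a plain key/value map (the ARN is the one from the question):

import boto3

client = boto3.client('resourcegroupstaggingapi', 'us-east-1')
response = client.tag_resources(
    ResourceARNList=[
        'arn:aws:ec2:us-east-1::image/ami-55ef662f'
    ],
    Tags={'Name': 'john'}  # key/value map, not a list of dicts
)
# ARNs that could not be tagged are reported here
print(response.get('FailedResourcesMap', {}))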
I'm using ipython to get an understanding of Boto3 and interacting with EC2 instances. Here is the code I'm using to create an instance:
import boto3
ec2 = boto3.resource('ec2')
client = boto3.client('ec2')
new_instance = ec2.create_instances(
    ImageId='ami-d05e75b8',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
    KeyName='<name_of_my_key>',
    SecurityGroups=['<security_group_name>'],
    DryRun=False
)
This starts an EC2 instance fine, and I can get the public DNS name, IP and other info from the AWS console. But when I try to get the public DNS using Boto, by doing this:
new_instance[0].public_dns_name
it returns an empty string. Yet other instance details, such as:
new_instance[0].instance_type
return the correct information.
Any ideas? Thanks.
EDIT:
So if I do:
def get_name(inst):
    client = boto3.client('ec2')
    response = client.describe_instances(InstanceIds=[inst[0].instance_id])
    foo = response['Reservations'][0]['Instances'][0]['NetworkInterfaces'][0]['Association']['PublicDnsName']
    return foo

foo = get_name(new_instance)
print(foo)
Then it will return the public DNS. But it doesn't make sense to me why I need to do all of this.
The Instance object you get back is only hydrated with the response attributes from the create_instances call. Since the DNS name is not available until the instance has reached the running state [1], it will not be immediately present. I imagine the time between you creating the instance and calling describe_instances is long enough for the micro instance to start.
import boto3

ec2 = boto3.resource('ec2')
instances = ec2.create_instances(
    ImageId='ami-f0091d91',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
    KeyName='<KEY-NAME>',
    SecurityGroups=['<GROUP-NAME>'])
instance = instances[0]

# Wait for the instance to enter the running state
instance.wait_until_running()

# Reload the instance attributes
instance.load()
print(instance.public_dns_name)
Here's my wrapper:
import boto3
from boto3.session import Session

def credentials():
    """Build an EC2 resource from explicit credentials."""
    session = Session(aws_access_key_id='XXXXXXXXX',
                      aws_secret_access_key='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
    # use the session (not the bare boto3 module) so the credentials above are applied
    ec2 = session.resource('ec2', region_name='us-east-2')
    return ec2

def get_public_dns(instance_id):
    """Given the instance_id, return the public DNS."""
    ec2 = credentials()
    instance = ec2.Instance(instance_id)
    instancePublicDNS = instance.public_dns_name
    return instancePublicDNS
Then you just need your instance_id to get the public DNS of any of your active EC2 instances:
dns = get_public_dns(instance_id)
Remember to change region_name to your region and add your aws_access_key_id and aws_secret_access_key.
import boto3
import pandas as pd

session = boto3.Session(profile_name='aws_dev')
dev_ec2_client = session.client('ec2')
response = dev_ec2_client.describe_instances()

df = pd.DataFrame(columns=['InstanceId', 'InstanceType', 'PrivateIpAddress', 'PublicDnsName'])

i = 0
for res in response['Reservations']:
    df.loc[i, 'InstanceId'] = res['Instances'][0]['InstanceId']
    df.loc[i, 'InstanceType'] = res['Instances'][0]['InstanceType']
    df.loc[i, 'PrivateIpAddress'] = res['Instances'][0]['PrivateIpAddress']
    df.loc[i, 'PublicDnsName'] = res['Instances'][0]['PublicDnsName']
    i += 1

print(df)
Note:
Replace profile_name='aws_dev' with your own AWS profile name.
This code works with Python 3.
Can anyone tell me if I can execute shell commands using boto3 on an AWS instance?
I read about boto.manage.cmdshell in a few places, but it is deprecated in boto3.
ssm_client = boto3.client('ssm')
response = ssm_client.send_command(
    InstanceIds=['i-03#####'],
    DocumentName="AWS-RunShellScript",
    Parameters={'commands': ['start ecs']},
)
command_id = response['Command']['CommandId']
output = ssm_client.get_command_invocation(
    CommandId=command_id,
    InstanceId='i-03######',
)
print(output)
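One caveat, as a hedged sketch: get_command_invocation can be called before SSM has registered the invocation, in which case it raises InvocationDoesNotExist, so in practice you may want to poll until the command leaves the in-progress states (the status names are from the SSM API; the sleep interval is arbitrary):

import time

while True:
    try:
        output = ssm_client.get_command_invocation(
            CommandId=command_id,
            InstanceId='i-03######',
        )
        if output['Status'] not in ('Pending', 'InProgress', 'Delayed'):
            break  # Success, Failed, TimedOut, Cancelled, ...
    except ssm_client.exceptions.InvocationDoesNotExist:
        pass  # SSM may not have registered the invocation yet
    time.sleep(2)  # arbitrary polling interval
print(output['Status'], output.get('StandardOutputContent', ''))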
ssm = boto3.client('ssm')
testCommand = ssm.send_command(
    InstanceIds=['i-123123123123'],
    DocumentName='AWS-RunShellScript',
    Comment='la la la',
    OutputS3BucketName='myOutputS3Bucket',
    OutputS3KeyPrefix='i-123123123123',
    Parameters={"commands": ["ip config"]}
)
i-123123123123 is a pretend EC2 instance ID.
I put it in OutputS3KeyPrefix to get a unique place to store the logs in the bucket.
You can install the SSM agent like this:

ec2r = boto3.resource('ec2')
userdata = """#cloud-config
runcmd:
- /home/ec2-user/sudo npm run prod
- cd /tmp
- curl https://amazon-ssm-%s.s3.amazonaws.com/latest/linux_amd64/amazon-ssm-agent.rpm -o amazon-ssm-agent.rpm
- yum install -y amazon-ssm-agent.rpm
""" % region

if ssm == "on":
    instance = ec2r.create_instances(
        ImageId=ami, MinCount=1, MaxCount=1, KeyName=keyname, InstanceType=instancetype,
        NetworkInterfaces=[{
            'DeviceIndex': 0,
            'SubnetId': mySub,
            'Groups': secGroupList,
            'AssociatePublicIpAddress': AssociatePublicIpAddress
        }],
        Monitoring={'Enabled': False},
        UserData=userdata,
        IamInstanceProfile={
            'Name': rolename
        },
        EbsOptimized=False
    )
I know I am answering a bit of an old thread; I am not sure SSM even existed at that time. But now you can use SSM send_command from boto3 to run commands directly on EC2 instances.
Here is a sample that runs PowerShell commands on EC2 instances:
import boto3

ssm_client = boto3.client('ssm', region_name="us-west-2")  # use the region code in which you are working
response = ssm_client.send_command(
    InstanceIds=[
        "i-03########"  # use the instance ID on which you want to execute; even multiple are allowed
    ],
    DocumentName="AWS-RunPowerShellScript",
    Parameters={
        'commands': [
            'ipconfig'
        ]
    },
)
command_id = response['Command']['CommandId']
output = ssm_client.get_command_invocation(
    CommandId=command_id,
    InstanceId='i-03######',
)
print(output)
For more information, read the boto3 SSM docs.
For information on SSM itself, refer to the AWS docs.
No. The boto.manage.cmdshell functionality in boto was not migrated to boto3. The original boto.manage.cmdshell functionality used Paramiko, which you could use directly alongside boto3 if you want SSH functionality.
Here's a boto3 github issue on this topic.
As @jarmod points out, there is new AWS functionality as of October 2015 that enables you to run commands on Windows systems using AWS EC2 SSM. You can access this in boto3 with the boto3 SSM client as of botocore version 1.3.1.
Here's a boto3 github issue on supporting "EC2 Run Command"
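If you do want the Paramiko route, a minimal sketch alongside boto3 (the instance ID, key path and login user are hypothetical, and the instance must accept SSH from your machine):

import boto3
import paramiko

INSTANCE_ID = 'i-0123456789abcdef0'  # hypothetical
KEY_PATH = '/path/to/my-key.pem'     # hypothetical
USER = 'ec2-user'                    # depends on your AMI

# Look up the instance's public DNS name via boto3
ec2 = boto3.resource('ec2')
host = ec2.Instance(INSTANCE_ID).public_dns_name

# SSH in with Paramiko and run a command
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=USER, key_filename=KEY_PATH)
stdin, stdout, stderr = ssh.exec_command('uptime')
print(stdout.read().decode())
ssh.close()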
Change
command_id = response['Command']['CommandId']
to
command_id = context.aws_request_id
Documentation says:
aws_request_id
AWS request ID associated with the request. This is the ID returned to the client that called the invoke method.
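Note that context here is the Lambda context object, so this only applies when the code runs inside an AWS Lambda handler. A minimal sketch, with a hypothetical handler:

def lambda_handler(event, context):
    # The Lambda runtime passes `context` in; aws_request_id identifies this invocation
    command_id = context.aws_request_id
    print(command_id)
    return command_id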