Boto create launch configuration in different VPC with fabric and boto - python

I keep getting the error below from my boto create_launch_configuration() call, wrapped in a Fabric task.
This is the code:
from boto.ec2.autoscale import LaunchConfiguration

if user_data != '':
    security_groups = ['sg-d73fc5b2']  # note: list('sg-...') would split the string into single characters
print "Trying to use this AMI [%s]" % image_ami
lc = LaunchConfiguration(
    name=launch_config_name,
    image_id=image_ami,
    key_name=env.aws_key_name,
    security_groups=security_groups,
    instance_type=instance_type
)
launch_config = autoscale_conn.create_launch_configuration(lc)
and this is the response
<ErrorResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
  <Error>
    <Type>Sender</Type>
    <Code>ValidationError</Code>
    <Message>No default VPC for this user</Message>
  </Error>
  <RequestId>4371fa63-e008-11e3-8554-ff532bce5053</RequestId>
</ErrorResponse>
We disabled the default VPC to try to minimise mistakes being applied to a VPC via API calls. We have several VPCs running in the same account, and it would be useful to be able to specify the VPC via boto.
Has anyone any idea how I can set this default VPC on a per task basis?

As stated here, you should specify a subnet when creating an Auto Scaling group. And though it is not stated outright that you must have a default VPC to create a launch configuration, I would conclude as much from reading this, particularly these lines:
If your AWS account comes with a default VPC and if you want to create your Auto Scaling group in default VPC, follow the instructions in ...
So you just need to create the Auto Scaling group in the desired subnet and use your launch configuration for that group.
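A minimal sketch of that, assuming boto 2 (the group name and subnet ID are placeholders): the subnet passed via vpc_zone_identifier determines the VPC, so no default VPC is required.

    from boto.ec2.autoscale import AutoScalingGroup

    ag = AutoScalingGroup(
        group_name='my-asg',                    # hypothetical group name
        launch_config=lc,                       # the LaunchConfiguration created above
        vpc_zone_identifier='subnet-0123abcd',  # placeholder subnet in the target VPC
        min_size=1,
        max_size=2,
    )
    autoscale_conn.create_auto_scaling_group(ag)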

Related

Connecting to DocumentDB from AWS Lambda using Python

I am trying to connect to DocumentDB from a Lambda function.
I have configured my DocumentDB as per this tutorial and can access it through the cloud9 command prompt.
The DocumentDB cluster is part of two security groups: the first is called demoDocDB, and the second, called default, is the VPC's default security group.
The inbound rules for demoDocDB allow requests from the Cloud9 instance to port 27017, where my DocumentDB database is running.
The inbound rules for the default security group allow all traffic on all port ranges, with the group itself as the source. The VPC is the default VPC.
In lambda when editing the VPC details, I have inputted:
VPC - The defualt VPC
Subnets - Chosen all 3 subnets available
Security Groups - The default security group for VPC
The function has succeeded in writing to the database twice; the rest of the time it times out. The timeout on the Lambda function is set to 2 minutes, but it throws a timeout error before reaching that limit:
[ERROR] ServerSelectionTimeoutError: MY_DATABASE_URL:27017: [Errno -2] Name or service not known
The snippet of code below is what is being executed. The function never reaches print("INSERTED DATA"); it times out during the insert.
import pymongo

def getDBConnection():
    client = pymongo.MongoClient(***MY_URL***)
    ## Specify the database to be used
    db = client.test
    print("GOT CONNECTION", db)
    ## Specify the collection to be used
    col = db.myTestCollection
    print("GOT COL", col)
    ## Insert a single document
    col.insert_one({'hello': 'Amazon DocumentDB'})
    print("INSERTED DATA")
    ## Find the document that was previously written
    x = col.find_one({'hello': 'Amazon DocumentDB'})
    ## Print the result to the screen
    print("RETRIEVED DATA", x)
    ## Close the connection
    client.close()
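For context, the function would be invoked from a minimal handler along these lines (the handler name and wiring are assumptions, not shown in the question):

    def lambda_handler(event, context):
        # hypothetical entry point; simply exercises the connection above
        getDBConnection()
        return {'statusCode': 200}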
I have tried changing the version of pymongo, as this thread suggested, but it did not help.
Make sure your Lambda function is not in a public subnet, otherwise it will not work. So go back to the Lambda console and remove any public subnets from the VPC section.
1. Make sure you have a security group specifically for your Lambda function, as follows:
Lambda Security Group outbound rule:

    Type          Protocol   Port Range   Destination
    All Traffic   All        All          0.0.0.0/0

You can also restrict this to HTTP/HTTPS on ports 80/443 if you'd like.
2. Check the security group of your DocumentDB cluster to see if it is set up with an inbound rule as follows:

    Type         Protocol   Port Range   Source
    Custom TCP   TCP        27017        Lambda Security Group

3. Your Lambda function needs to have the correct permissions; those are:
Managed policy AWSLambdaBasicExecutionRole
Managed policy AWSLambdaVPCAccessExecutionRole
After doing this your VPC section should look something like this:
1. VPC - The default VPC
2. Subnets - Chosen 2 subnets (Both Private)
3. Security Group for your Lambda function. Not the default security group
And that should do it for you. Let me know if it does not work though and I'll try and help you troubleshoot.
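If you prefer to apply the VPC settings programmatically, here is a hedged boto3 sketch of the same change (the function name, subnet IDs and security group ID are placeholders):

    import boto3

    lambda_client = boto3.client('lambda')
    lambda_client.update_function_configuration(
        FunctionName='my-docdb-function',  # placeholder function name
        VpcConfig={
            # two private subnets, per the checklist above
            'SubnetIds': ['subnet-0aaaa1111', 'subnet-0bbbb2222'],
            # the Lambda-specific security group, not the default one
            'SecurityGroupIds': ['sg-0123456789abcdef0'],
        },
    )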

Can't write to Database with AWS Lambda

I am trying to write data to a Postgres database with AWS Lambda, but I am facing this error:
Calling the invoke API action failed with this message: Network Error
My code looks like this:
from sqlalchemy import create_engine
import pandas as pd

def test(event=None, context=None):
    # user, password, url and database are placeholders; note the '@' separator
    conn = create_engine('postgresql://user:password@url:5439/database')
    df = pd.DataFrame([{'A': 'foo', 'B': 'green', 'C': 11},
                       {'A': 'bar', 'B': 'blue', 'C': 20}])
    df.to_sql('your_table', conn, index=False, if_exists='replace', schema='schema')

test()
Resources:
Memory - 1280MB
Timeout - 2 minutes
What is the problem here, and how else could I write a pandas DataFrame to a database with AWS Lambda?
I'm assuming the Postgres instance is in RDS.
Is your lambda in your VPC? You can check this on the function's page in the console, in the VPC box. By default it's not, and the VPC box says "None".
Case 1: Lambda is not in VPC
Then the issue might be that the security group associated with your RDS instance does not allow connections from outside the VPC; that's the default if you haven't touched the security group. Find the security group for your RDS instance in the RDS admin, then check out the "Inbound rules" for that security group. Lambdas don't have a fixed IP, so you'll need to add an inbound rule allowing at least Postgres traffic from source "0.0.0.0/0", i.e. the entire internet (a boto3 sketch follows below).
This should be sufficient, but note that it is not considered very good for security, since anyone can now in theory reach your DB (and worse if they can guess the password). Depending on your project that might not be a problem for you. If it is an issue, you could instead associate your lambda with the same VPC the RDS instance is in, to provide better networking security, and move to Case 2.
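A sketch of that inbound rule with boto3, assuming a placeholder security group ID and port 5439 from the connection string in the question:

    import boto3

    ec2 = boto3.client('ec2')
    ec2.authorize_security_group_ingress(
        GroupId='sg-0123456789abcdef0',  # placeholder: the RDS instance's security group
        IpPermissions=[{
            'IpProtocol': 'tcp',
            'FromPort': 5439,   # the port used in the question's connection string
            'ToPort': 5439,
            'IpRanges': [{'CidrIp': '0.0.0.0/0'}],  # the entire internet, as cautioned above
        }],
    )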
Case 2: Lambda is in a VPC
I'm assuming you put the lambda in the same VPC as the RDS instance for simplicity - if not you probably know what you're doing.
All you need to do now (provided you didn't touch other network configs) is ensure your RDS instance's security group allows access from your lambda's security group. You could put both in the default security group, or put them in separate groups and make sure the RDS one has an inbound rule allowing the lambda one.
Note that if your lambda also needs to call external services (since you mention querying an API), in order to enable that, after linking it to your VPC you'll also need to create a NAT Gateway like I described here: https://stackoverflow.com/a/61273118/299754

test ansible roles with molecule and boto3

I have ansible roles that create servers, S3 buckets, security groups ... and I want to establish some unit testing using Molecule.
After some research, I found that Molecule uses Testinfra to run assert commands on the remote/local host. That works for my roles that set up servers like apache2 or nginx, but what about the other roles that just create other AWS resources such as load balancers, autoscaling groups, security groups, or S3 buckets? In those cases there is no host and no instance.
It would be easy to write tests with unittest and boto3 and call the AWS API, but my question is: can I use Molecule alone, firing up an EC2 instance every time I want to test my security group role, and then doing something like this:
def test_security_group_has_80_open(host):
    cmd = host.run('aws ec2 describe-security-groups --group-names MySecurityGroup')
    return_code = cmd.rc
    output = cmd.stdout
    assert '"ToPort": 80' in output  # str has no .contains() method in Python
That EC2 instance would have the AWS CLI installed. Is this a correct approach? Is it possible to test all types of roles with Molecule by firing up an EC2 instance that runs AWS CLI calls?
I cannot comment or else I would, but: to speed things up, you can configure Molecule not to manage the create and destroy sequences, and use the delegated driver with connection=local in the converge playbook. This way you can simply create the security group using the role, without provisioning instances, and use boto3 to confirm your changes are correct.
This way you only need your test environment to have the proper keys available to make the API calls with boto3, instead of also worrying about whether the EC2 instance has them as well.
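For example, a hedged sketch of such a check with pytest and boto3 (the group name comes from the question; the region is an assumption):

    import boto3

    def test_security_group_has_80_open():
        ec2 = boto3.client('ec2', region_name='us-east-1')  # assumed region
        groups = ec2.describe_security_groups(
            Filters=[{'Name': 'group-name', 'Values': ['MySecurityGroup']}]
        )['SecurityGroups']
        assert groups, 'security group was not created'
        permissions = groups[0]['IpPermissions']
        assert any(rule.get('ToPort') == 80 for rule in permissions)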

Python AWS lambda function connecting to RDS mysql: Timeout error

When the python lambda function is executed I get a "Task timed out after 3.00 seconds" error. I am trying the same example function.
When I run the same code from Eclipse it works fine and I can see the query result. Likewise, I can connect to the DB instance from MySQL Workbench on my local machine without any issues.
I tried creating a role with the full administrator access policy for this lambda function, and even then it's not working. The DB instance is in a VPC, and I added my local IP address using the edit CIDR option so I can access the instance from my local machine's Workbench. For the VPC, subnet and security group parameters in the lambda function, I gave the same values as on the RDS DB instance.
I have also increased the timeout for lambda function and still I see the timeout error.
Any input would be appreciated.
For VPC, subnet and security group parameter in lambda function I gave the same values as I have in the RDS db instance.
Security groups don't automatically trust their own members to access other members.
Add a rule to this security group for "MySQL" (TCP port 3306), but instead of specifying an IP address, start typing "sg" into the source box and select the ID of the security group that you are adding the rule to, so that the group is self-referential. A boto3 sketch of this follows below.
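For reference, a boto3 sketch of that self-referential rule (the group ID is a placeholder):

    import boto3

    ec2 = boto3.client('ec2')
    sg_id = 'sg-0123456789abcdef0'  # placeholder: the shared security group
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{
            'IpProtocol': 'tcp',
            'FromPort': 3306,  # MySQL
            'ToPort': 3306,
            'UserIdGroupPairs': [{'GroupId': sg_id}],  # the group trusts itself
        }],
    )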
Note that this is probably not the correct long-term fix: if your Lambda function needs to access the Internet or most AWS services, it needs to be on a private subnet behind a NAT device. That does not describe the subnet where your RDS instance currently lives, because you mentioned adding your local IP to allow access to RDS, which suggests your RDS is on a public subnet.
See also Why Do We Need Private Subnets in VPC for a better understanding of public vs. private subnets.

How do I list Security Groups of current Instance in AWS EC2?

EDIT Removed BOTO from question title as it's not needed.
Is there a way to find the security groups of an EC2 instance using Python and possibly Boto?
I can only find docs about creating or removing security groups, but I want to see which security groups have been attached to my current EC2 instance.
To list the security groups of the current instance you don't need Boto/Boto3; make use of the EC2 instance metadata service.
import os

# Query the instance metadata service (only reachable from the instance itself)
sgs = os.popen("curl -s http://169.254.169.254/latest/meta-data/security-groups").read()
print(sgs)
You can also check from the instance itself by executing the command below:
curl http://169.254.169.254/latest/meta-data/security-groups
or with the AWS CLI (note that this lists all security groups in the region, not just the instance's):
aws ec2 describe-security-groups
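If you do want the same result through boto3 after all, here is a sketch (the instance ID is read from the same metadata service):

    import os
    import boto3

    instance_id = os.popen(
        "curl -s http://169.254.169.254/latest/meta-data/instance-id").read()
    ec2 = boto3.client('ec2')
    reservations = ec2.describe_instances(InstanceIds=[instance_id])
    instance = reservations['Reservations'][0]['Instances'][0]
    for group in instance['SecurityGroups']:
        print(group['GroupName'] + ' ' + group['GroupId'])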
