Not able to connect to MongoDB from an AWS Lambda function in Python

I have an AWS Lambda function that is not able to connect to MongoDB through a VPC.
import pymongo

def handler(event, context):
    try:
        client = pymongo.MongoClient(host="xxxxxxx",
                                     port=27017,
                                     username=x1,
                                     password=x2,
                                     authSource="x3",
                                     authMechanism='SCRAM-SHA-1')
    except pymongo.errors.ServerSelectionTimeoutError as err:
        print(err)
It is not able to find the server.
I have created a security group, and the new role has full access to both VPC and Lambda, but it still cannot connect.
I have taken help from https://blog.shikisoft.com/access-mongodb-instance-from-aws-lambda-python/ as well as https://blog.shikisoft.com/running-aws-lambda-in-vpc-accessing-rds/.
Please be helpful.
I have been trying since yesterday but no luck.

Let me try to help you figure out where the problem is.
1. Are your MongoDB EC2 instance and your Lambda hosted in the same VPC?
If this is the cause of your problem, you should move your services into the same VPC.
2. Do the Security Groups attached to your MongoDB EC2 instance and your Lambda whitelist/include the default sg?
You have to include the default sg in your Security Group so that services/instances within that VPC can communicate.
3. Is your hostname publicly or privately accessible?
If Lambda needs to connect over the Internet to access your MongoDB instance, you don't need to attach your Lambda to a VPC.
Inside a VPC, Lambda requires a NAT Gateway to communicate with the outside world. Try to communicate privately if your MongoDB instance and Lambda are in the same VPC.
https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
Hope these answers are helpful to you.
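While debugging the items above, it helps to build the connection URI explicitly and use a short server-selection timeout so a wrong VPC or security-group setup fails in seconds instead of hanging. A minimal sketch; the helper name and placeholder values are assumptions, not from the original post:

```python
from urllib.parse import quote_plus

def build_mongo_uri(host, port, user, password, authsource):
    # Percent-escape the credentials so special characters
    # (e.g. '@' or ':') don't break the URI
    return (f"mongodb://{quote_plus(user)}:{quote_plus(password)}"
            f"@{host}:{port}/?authSource={authsource}"
            f"&authMechanism=SCRAM-SHA-1")

# Hypothetical usage with pymongo; serverSelectionTimeoutMS makes
# an unreachable server fail in ~5 s instead of hanging:
# client = pymongo.MongoClient(build_mongo_uri("xxxxxxx", 27017, "x1", "x2", "x3"),
#                              serverSelectionTimeoutMS=5000)
# client.admin.command("ping")  # forces server selection immediately
```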

Related

Unable to connect to AWS Redshift from Python within Lambda

I am trying to connect to Redshift with Python through Lambda. The purpose is to perform queries on the Redshift database.
I've tried this by getting the temporary AWS credentials and connecting with psycopg2, but it isn't successful and there are no error messages (i.e., the Lambda just times out).
import boto3
import psycopg2

rs_host = "mytest-cluster.fooooooobaaarrrr.region111111.redshift.amazonaws.com"
rs_port = 5439
rs_dbname = "dev"
db_user = "barrr_user"

client = boto3.client("redshift")  # not shown in the original snippet

def lambda_handler(events, contx):
    # The cluster_creds is able to be obtained successfully. No issues here
    cluster_creds = client.get_cluster_credentials(DbUser=db_user,
                                                   DbName=rs_dbname,
                                                   ClusterIdentifier="mytest-cluster",
                                                   AutoCreate=False)
    try:
        # It is this psycopg2 connection that can't work...
        conn = psycopg2.connect(host=rs_host,
                                port=rs_port,
                                user=cluster_creds['DbUser'],
                                password=cluster_creds['DbPassword'],
                                database=rs_dbname)
        return conn
    except Exception as e:
        print(e)
Also, the lambda execution role itself has these policies:
I am not sure why I am still not able to connect to Redshift via Python to perform queries.
I have also tried the sqlalchemy library, but no luck there.
As Johnathan Jacobson mentioned above, it was the security groups and network permissions that caused my problem.
You can review the documentation at Create AWS Lambda Function to Connect Amazon Redshift with C-Sharp in Visual Studio.
Since you already have your code in Python, you can concentrate on the networking part of the tutorial.
While launching AWS Lambda functions, it is possible to select the VPC and subnet where the serverless Lambda function servers will spin up.
You can choose exactly the same VPC and subnet(s) where you have created your Amazon Redshift cluster.
Also, revise the IAM role you have attached to the AWS Lambda function. It additionally requires the AWSLambdaVPCAccessExecutionRole policy.
This resolves connection issues between different VPCs.
Again, even if you have launched the Lambda function in the same VPC and subnet as the Redshift cluster, it is better to check the security group of the cluster so that it accepts connections.
Hope it works.
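To rule out parameter mistakes, the psycopg2 call above can also be collapsed into a single DSN string with an explicit connect_timeout, so a blocked network path fails within seconds instead of running into the Lambda timeout. A sketch; redshift_dsn is a hypothetical helper, not part of any AWS API:

```python
def redshift_dsn(host, port, creds, dbname):
    # creds is the dict returned by get_cluster_credentials();
    # connect_timeout keeps a blocked connection from hanging.
    # Assumes the temporary password contains no spaces or quotes.
    return (f"host={host} port={port} dbname={dbname} "
            f"user={creds['DbUser']} password={creds['DbPassword']} "
            f"connect_timeout=5")

# Hypothetical usage:
# conn = psycopg2.connect(redshift_dsn(rs_host, rs_port, cluster_creds, rs_dbname))
```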

How to connect to an internal Jira instance via AWS Lambda

I'm trying to connect to an internal Jira instance using my AWS Lambda function. The Lambda cannot connect to this web resource because a VPN is required (everything works properly on my local machine under a VPN connection).
Any suggestions regarding this issue?
Since there is an AWS Direct Connect connection between an Amazon VPC and your corporate network, the AWS Lambda function can be configured to connect to the VPC.
The Lambda function can then connect to the JIRA instance on the corporate network by using the private IP address of the JIRA instance.
You might be able to configure the VPC to defer to your corporate DNS server to resolve the DNS Name into the IP address, but I would recommend you start by using the private IP address itself to ensure that connectivity is working.
It would be best to use SQS messaging.
Define a message format
From your lambda function, publish a message when you want to query something
Have an app/worker running inside your network, which is listening to your SQS queue.
Your app/worker receives that message and can query to your Jira instance.
When the result is ready, your app/worker can publish it to another SQS queue.
You can receive the message in another Lambda function, or, if you want to receive the response in the same Lambda function, you can poll for messages and wait till you get your desired message.
If you have multiple such Lambda functions, or many requests going on, you can put a request id in the messages. Then, when receiving a message, you can tell which one is your desired message and delete only that one.
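The request-id idea in the last step can be sketched with two small helpers. The names and message layout are assumptions; the actual sending and receiving would go through boto3's sqs.send_message and receive_message:

```python
import json
import uuid

def make_request(payload):
    # Tag the outgoing message with a correlation id so the
    # matching response can be recognised later
    return {"request_id": str(uuid.uuid4()), "body": payload}

def match_response(message_bodies, request_id):
    # Scan received message bodies (JSON strings) and return the body
    # of the one whose request_id matches; leave the rest for others
    for raw in message_bodies:
        msg = json.loads(raw)
        if msg["request_id"] == request_id:
            return msg["body"]
    return None
```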

Connect AWS RDS (psql) in AWS Lambda

I wrote a simple Lambda function in Python to fetch some data from AWS RDS. PostgreSQL is the database engine.
conn = psycopg2.connect(host=hostname, user=username, password=password, dbname=db_name, connect_timeout=50)
I did it like this, but it didn't work. It always returns an error like this:
Response:
{
"errorMessage": "2018-06-06T11:28:53.775Z Task timed out after 3.00 seconds"
}
How can I resolve this?
It is most probably timing out because the network connection cannot be established.
If you wish to connect to the database via a public IP address, then your Lambda function should not be connected to the VPC. Instead, the connection will go from Lambda, via the internet, into the VPC and to the Amazon RDS instance.
If you wish to connect to the database via a private IP address, then your Lambda function should be configured to use the same VPC as the Amazon RDS instance.
In both cases, the connection should be established using the DNS Name of the RDS instance, but it will resolve differently inside and outside of the VPC.
Finally, the Security Group associated with the Amazon RDS instance needs to allow the incoming connection. This, too, will vary depending upon whether the request is coming from public or private space. You can test by opening the security group to 0.0.0.0/0 and, if it works, then try to restrict it to the minimum possible range.
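Because the DNS name resolves differently inside and outside the VPC, it is worth printing what your code actually resolves it to. A minimal stdlib check; resolve is a hypothetical helper name:

```python
import socket

def resolve(hostname):
    # Collect the distinct addresses the name resolves to from
    # the current network location (public vs. private IP)
    return sorted({info[4][0] for info in socket.getaddrinfo(hostname, None)})
```

Running it both locally and inside the Lambda against the RDS DNS name should show a public IP in one place and a private address in the other, confirming the behaviour described above.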

AWS Lambda Function cannot access other services

I have a problem with an AWS Lambda function which depends upon DynamoDB and SQS to function properly. When I try to run the Lambda stack, it times out when trying to connect to the SQS service. The AWS Lambda function lies inside a VPC with the following setup:
A VPC with four subnets
Two subnets are public, routing their 0.0.0.0/16 traffic to an internet gateway
A MySQL server sits in a public subnet
The other two contain the Lambdas and route their 0.0.0.0/16 traffic to a NAT which lives in one of the public subnets.
All route tables have a 10.0.0.0/16 to local rule (is this the problem, because Lambdas use private IPs inside a VPC?)
The main route table is the one with the NAT, but I explicitly associated the public subnets with the internet gateway routing table.
The Lambdas and the MySQL server share a security group which allows inbound internal access (10.x/16) as well as unrestricted outbound traffic (0.0.0.0/16).
Traffic between lambdas and the mysql instance is no problem (except if I put the lambdas outside the VPC, then they can't access the server even if I open up all ports). Assume the code for the lambdas is also correct, as it worked before I tried to mask it in a private net. Also the lambda execution roles have been set accordingly (or do they need adjustments after moving them to a private net?).
Adding a dynamodb endpoint solved the problems with the database, but there are no VPC endpoints available for some of the other services. Following some answers I found here, here, here and in the announcements / tutorials here and here, I am pretty sure I followed all the recommended steps.
I would be very thankful and glad for any hints where to check next, as I have currently no idea what could be the problem here.
EDIT: The functions don't seem to have any internet access at all, since a toy example I checked also timed out:
import urllib.request

def lambda_handler(event, context):
    test = urllib.request.urlopen(url="http://www.google.de")
    return test.status
Of course, the problem was sitting in front of the monitor again. Instead of routing 0.0.0.0/0 (any traffic) to the internet gateway, I had just specified 0.0.0.0/16 (traffic from machines with a 0.0.x.x IP). Since no machines with such an IP exist, all traffic was blocked from leaving the VPC.
@John Rotenstein: Thanks, though, for the hint about lambdash. It seems like a very helpful tool.
Your configuration sounds correct.
You should test the configuration to see whether you can access any public Internet sites, then test connecting to AWS.
You could either write a Lambda function that attempts such connections or you could use lambdash that effectively gives you a remote shell running on Lambda. This way, you can easily test connectivity from the command line, such as curl.
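For such a test function, a plain TCP probe is often enough, since it separates network reachability from application-level problems. A sketch; can_reach is a hypothetical helper name:

```python
import socket

def can_reach(host, port, timeout=3):
    # Attempt a raw TCP connection; True means routing and security
    # groups allow the path, regardless of application-level auth
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```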

Python AWS lambda function connecting to RDS mysql: Timeout error

When the Python Lambda function is executed I get a "Task timed out after 3.00 seconds" error. I am trying the same example function.
When I run the same code from Eclipse it works fine and I can see the query result. In the same way, I can connect to the DB instance from my local machine's MySQL Workbench without any issues.
I tried creating a role with the full administrator access policy for this Lambda function, and even then it's not working. The DB instance has a VPC, and I just added my local IP address there using the edit CIDR option so I can access the instance through my local machine's Workbench. For the VPC, subnet and security group parameters in the Lambda function I gave the same values as I have in the RDS DB instance.
I have also increased the timeout for the Lambda function and still I see the timeout error.
Any input would be appreciated.
For VPC, subnet and security group parameter in lambda function I gave the same values as I have in the RDS db instance.
Security groups don't automatically trust their own members to access other members.
Add a rule to this security group for "MySQL" (TCP port 3306), but instead of specifying an IP address, start typing sg into the box and select the id of the security group that you are adding the rule to, so that the group is self-referential.
Note that this is probably not the correct long-term fix, because if your Lambda function needs to access the Internet or most AWS services, the Lambda function needs to be on a private subnet behind a NAT device. That does not describe the configuration of the subnet where your RDS instance is currently configured, because you mentioned adding your local IP to allow access to RDS. That suggests your RDS is on a public subnet.
See also Why Do We Need Private Subnets in VPC for a better understanding of public vs. private subnets.
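With boto3, the self-referencing rule can be sketched as follows. The helper names are assumptions; authorize_security_group_ingress is the real EC2 API call:

```python
def self_referencing_rule(sg_id, port=3306):
    # Ingress permission allowing MySQL traffic from members of
    # the same security group (the group references itself)
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "UserIdGroupPairs": [{"GroupId": sg_id}],
    }

def allow_self_mysql(sg_id):
    import boto3  # requires AWS credentials when actually run
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[self_referencing_rule(sg_id)],
    )
```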
