I want to use awslocal/LocalStack to mock out the endpoint etc. for my Lambda function that sends an email using AWS SES/boto3. I have used LocalStack for creating queues and SNS messages, but not for SES. Normally, for queues, I would put something like awslocal sqs create-queue in my localstack.sh.
I can't for the life of me figure out what I would need to create for SES using awslocal. I know my current issue is SES-specific and probably related to localstack.sh, because I can connect to create a queue.
I want to:
test locally, and when I call send_email on my boto3 SES client, have it connect to my localhost port so the email is sent successfully
I tried:
awslocal ses --endpoint-url="http://localhost:4579/"
did not work
any ideas?
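In case it helps, this is roughly the setup I'm after (a sketch, assuming a recent LocalStack where every service, including SES, is served on the single edge port 4566 rather than the old per-service port 4579 I tried, and assuming the sender identity must be verified first; all addresses and credentials below are placeholders):
# localstack.sh (sketch) — SES needs a verified sender identity
awslocal ses verify-email-identity --email-address sender@example.com
# Python sketch — point the boto3 SES client at LocalStack
import boto3

ses = boto3.client(
    'ses',
    endpoint_url='http://localhost:4566',  # LocalStack edge port (assumed)
    region_name='us-east-1',
    aws_access_key_id='test',              # dummy credentials for LocalStack
    aws_secret_access_key='test',
)
ses.send_email(
    Source='sender@example.com',
    Destination={'ToAddresses': ['recipient@example.com']},
    Message={
        'Subject': {'Data': 'Hello from LocalStack'},
        'Body': {'Text': {'Data': 'It works.'}},
    },
)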
I am trying to send messages from Databricks to an Azure Service Bus topic, using a connection string defined in the PySpark code / taken from Key Vault. Per client policy the keys will be rotated frequently, so I have been asked to use REST API calls to get the connection string from the Service Bus every time we need to send a message.
Is there any way I can do this using REST API calls?
Yes, you can use the ListKeys API call from the management.azure.com REST API to retrieve them; see:
https://learn.microsoft.com/en-us/rest/api/servicebus/stable/disaster-recovery-configs/list-keys
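As a rough sketch of what that call can look like from Python using the requests library — note the path below uses the namespace-level authorizationRules listKeys operation with placeholder identifiers and an assumed api-version, so take the exact URL from the linked documentation:
# Sketch — fetch the Service Bus connection string at send time
import requests

token = '...'  # Azure AD bearer token for management.azure.com, obtained separately

# Subscription, resource group, namespace, and rule name are placeholders.
url = ('https://management.azure.com/subscriptions/SUB_ID/resourceGroups/RG_NAME'
       '/providers/Microsoft.ServiceBus/namespaces/NAMESPACE'
       '/authorizationRules/RootManageSharedAccessKey/listKeys?api-version=2017-04-01')

response = requests.post(url, headers={'Authorization': 'Bearer ' + token})
response.raise_for_status()
connection_string = response.json()['primaryConnectionString']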
I have a Python script that I want to run and text me a notification if a certain condition is met. I'm using Twilio, so I have a Twilio API token and I want to keep it secret. I have it successfully running locally, and now I'm working on getting it running on an EC2 instance.
Regarding the AWS steps: I've created an IAM user with permissions, launched the EC2 instance (and saved the SSH keys), and created some parameters in the AWS SSM Parameter Store. Then I SSH'd into the instance and installed boto3. When I try to use boto3 to grab a parameter, it's unable to locate the credentials:
# test.py
import boto3
ssm = boto3.client('ssm', region_name='us-west-1')
secret = ssm.get_parameter(Name='/test/cli-parameter')
print(secret)
# running the file in the console
>> python test.py
...
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I'm pretty sure this means it can't find the credentials that were created when I ran aws configure and it wrote the .aws/credentials file. I believe that's because I ran aws configure on my local machine rather than while SSH'd into the instance. I did this to keep my AWS ID and secret key off of the EC2 instance, since I thought I'm supposed to keep those private and not put tokens/keys on the instance.

I think I could solve the issue by running aws configure while SSH'd into the instance, but I want to understand what happens if there's a .aws/credentials file on the EC2 instance itself, and whether or not that is dangerous. I'm just not sure how this is all supposed to be structured, or what a safe/correct way of running my script and accessing secret variables looks like.
Any insight at all is helpful!
I suspect the answer you're looking for looks something like this:
Create an IAM policy which allows access to the SSM parameter (why not use Secrets Manager?).
Attach that IAM policy to a role.
Attach the role to your EC2 instance (as an instance profile).
boto3 will then automatically obtain temporary credentials from the instance metadata service whenever it needs to talk to Parameter Store.
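As a rough sketch of how to verify that the instance profile is being picked up (run on the instance, with no aws configure and no keys on disk; get_caller_identity should report the role's ARN rather than an IAM user):
# Sketch — confirm boto3 is using the instance role, then read the parameter
import boto3

sts = boto3.client('sts', region_name='us-west-1')
print(sts.get_caller_identity()['Arn'])  # should show an assumed-role ARN

ssm = boto3.client('ssm', region_name='us-west-1')
# WithDecryption only matters if the parameter is a SecureString.
secret = ssm.get_parameter(Name='/test/cli-parameter', WithDecryption=True)
print(secret['Parameter']['Value'])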
I'm trying to connect to an internal Jira instance from my AWS Lambda function. The Lambda cannot connect to this web resource because a VPN is required (everything works properly on my local machine under a VPN connection).
Any suggestions regarding this issue?
If there is an AWS Direct Connect (or VPN) connection between an Amazon VPC and your corporate network, the AWS Lambda function can be configured to connect to that VPC.
The Lambda function can then connect to the JIRA instance on the corporate network by using the private IP address of the JIRA instance.
You might be able to configure the VPC to defer to your corporate DNS server to resolve the DNS Name into the IP address, but I would recommend you start by using the private IP address itself to ensure that connectivity is working.
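Once the function is attached to the VPC, a quick connectivity probe might look like the following (the private IP and port are placeholders for your JIRA instance):
# Sketch — minimal handler to check that the VPC-attached Lambda can reach JIRA
import urllib.request

def lambda_handler(event, context):
    # Placeholder private IP/port of the JIRA instance on the corporate network.
    with urllib.request.urlopen('http://10.0.0.12:8080/status', timeout=5) as resp:
        return resp.status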
It would be best to use SQS messaging (see the sketch after these steps):
Define a message format.
From your Lambda function, publish a message when you want to query something.
Have an app/worker running inside your network which is listening to your SQS queue.
Your app/worker receives that message and can query your Jira instance.
When the result is ready, your app/worker publishes it on another SQS queue.
You can receive that message in another Lambda function; or, if you want to receive the response in the same Lambda function, you can poll for messages and wait until you get the one you want.
If you have multiple such Lambda functions, or many requests in flight, you can put a request ID in each message; when receiving a message, you then know which one is yours and can delete only that one.
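A rough sketch of that request/response pattern with boto3 (the queue URLs and message format are illustrative; in a real function you would bound the polling loop by the Lambda timeout):
# Sketch — publish a request, then poll a reply queue for the matching request ID
import json
import uuid
import boto3

sqs = boto3.client('sqs')
REQUEST_QUEUE = 'https://sqs.us-east-1.amazonaws.com/123456789012/jira-requests'  # placeholder
REPLY_QUEUE = 'https://sqs.us-east-1.amazonaws.com/123456789012/jira-replies'    # placeholder

def query_jira(payload):
    request_id = str(uuid.uuid4())
    sqs.send_message(QueueUrl=REQUEST_QUEUE,
                     MessageBody=json.dumps({'request_id': request_id, 'payload': payload}))
    while True:
        resp = sqs.receive_message(QueueUrl=REPLY_QUEUE, WaitTimeSeconds=20,
                                   MaxNumberOfMessages=10)
        for msg in resp.get('Messages', []):
            body = json.loads(msg['Body'])
            if body.get('request_id') == request_id:
                # Delete only our own message; others reappear after the visibility timeout.
                sqs.delete_message(QueueUrl=REPLY_QUEUE, ReceiptHandle=msg['ReceiptHandle'])
                return body['result']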
I'm a complete noob with Python and boto and trying to establish a basic connection to ec2 services.
I'm running the following code:
import boto

ec2Conn = boto.connect_ec2('username', 'password')
group_name = 'python_central'
description = 'Python Central: Test Security Group.'
group = ec2Conn.create_security_group(group_name, description)
group.authorize('tcp', 8888, 8888, '0.0.0.0/0')
and getting the following error:
AWS was not able to validate the provided access credentials
I've read in some posts that this might be due to a time difference between my machine and the EC2 server, but according to the logs they are the same:
2016-12-13 19:20:05,132 boto [DEBUG]:StringToSign:
host:ec2.us-east-1.amazonaws.com
x-amz-date:20161213T192005Z
host;x-amz-date
515db222f793e7f96aa93818abf3891c7fd858f6b1b9596f20551dcddd5ca1be
Any idea how to get this connection running?
Thanks!
Calls made to the AWS API require authentication via an Access Key and a Secret Key. These can be obtained from the Identity and Access Management (IAM) console, under the Security Credentials tab for a user.
See: Getting Your Access Key ID and Secret Access Key
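With real credentials in place, the connection from the question might look like this boto2-style sketch (the key values are placeholders; an Access Key and Secret Key go where the username/password were):
# Sketch — authenticate with an Access Key and Secret Key, not a username/password
import boto.ec2

ec2Conn = boto.ec2.connect_to_region(
    'us-east-1',
    aws_access_key_id='AKIA...',        # placeholder Access Key ID
    aws_secret_access_key='...',        # placeholder Secret Access Key
)
group = ec2Conn.create_security_group('python_central', 'Python Central: Test Security Group.')
group.authorize('tcp', 8888, 8888, '0.0.0.0/0')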
If you are unfamiliar with Python, you might find it easier to call AWS services by using the AWS Command-Line Interface (CLI). For example, this single-line command can launch an Amazon EC2 instance:
aws ec2 run-instances --image-id ami-c2d687ad --key-name joe --security-group-id sg-23cb34f6 --instance-type t1.micro
See: AWS CLI run-instances documentation
I have a Lambda function which SSHes into an EC2 instance and runs some commands. This Lambda function is triggered from an SNS topic, and the SNS topic is integrated with a CloudWatch alarm. I am using Python 2.7 in the Lambda function, following this thread: https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/. Is it possible to get the public IP address of the EC2 instance which actually triggered the alarm?
It depends on the CloudWatch Alarm you are using to trigger the SNS publish.
My suggestion is to print out the entire event dictionary in your function and check if there is any mention about EC2 instance ID.
In the case of a CloudWatch EC2 alarm (e.g. CPU usage) you'll find the instance ID in the metric Dimensions.
# Python example (inside your Lambda handler, where `event` is the SNS event)
import json
message = json.loads(event['Records'][0]['Sns']['Message'])
# Each dimension is a {'name': ..., 'value': ...} pair; take the value.
instance_id = message['Trigger']['Dimensions'][0]['value']
If you have the instance ID you can easily retrieve the instance public IP using boto3 as follows:
# Python example
import boto3
instance_id = 'xxxx' # This is the instance ID from the event
ec2 = boto3.client('ec2')
response = ec2.describe_instances(InstanceIds=[instance_id])
public_ip = response['Reservations'][0]['Instances'][0]['PublicIpAddress']
Finally, as you are performing SSH from a Lambda function to your EC2 instance, keep in mind that Lambda functions outside a VPC get a dynamic public IP, so it is impossible to restrict your EC2 instance's security group for SSH. Leaving SSH open to the entire world is not good practice from a security perspective.
I suggest running both the EC2 instance and the Lambda function in a VPC, restricting SSH access to your EC2 instances to the Lambda VPC security group only. In that case you'll need to retrieve the private IP address rather than the public one to be able to SSH into your instance (the Python logic is the same as above; the only difference is that you use 'PrivateIpAddress' instead of 'PublicIpAddress'). This is far more secure than going over the public internet.
I hope this helps.
G