I'm trying to set and get keys in ElastiCache (Memcached) from a Python Lambda function using Boto3. I can figure out how to get the endpoints, but that's pretty much it. Is there any documentation out there that shows the entire process?
It sounds like you are trying to interact with Memcached via Boto3. This is not possible. Boto3 is for interacting with the AWS API. You can manage your ElastiCache servers via the AWS API, but you can't interact with the Memcached software running on those servers. You need to use a Memcached client library like python-memcached in your Python code to actually get and set keys in your Memcached cluster.
Also, your Lambda function will need to reside in the same VPC as the ElastiCache node(s).
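For example, here is a minimal sketch using python-memcached from a Lambda handler; the endpoint is a placeholder and the client library must be bundled into the deployment package:

import memcache  # python-memcached, bundled into the Lambda deployment package

# Placeholder: your ElastiCache node endpoint and port
MEMCACHED_ENDPOINT = 'my-node.xxxxxx.cache.amazonaws.com:11211'

mc = memcache.Client([MEMCACHED_ENDPOINT])

def lambda_handler(event, context):
    mc.set('greeting', 'hello')   # set a key
    value = mc.get('greeting')    # read it back
    return {'greeting': value}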
I had the exact timeout problem mentioned in the comment on the older post. My bug was in the security group for Memcached. Here is the working version in Terraform:
resource "aws_security_group" "memcached" {
vpc_id = "${aws_vpc.dev.id}"
name = "memcached SG"
ingress {
from_port = "${var.memcached_port}"
to_port = "${var.memcached_port}"
protocol = "tcp"
cidr_blocks = ["${var.public_subnet_cidr}"]
}
egress {
from_port = "${var.memcached_port}"
to_port = "${var.memcached_port}"
protocol = "tcp"
cidr_blocks = ["${var.public_subnet_cidr}"]
}
tags = {
Name = "memcached SG"
}
}
I tested the connection by creating an EC2 instance in the public subnet and running "telnet (your cache node endpoint) 11211".
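If you prefer to test from Python (for example from the Lambda itself), here is a quick reachability sketch; the endpoint is a placeholder:

import socket

# Placeholder: your cache node endpoint and the Memcached port
host, port = 'my-node.xxxxxx.cache.amazonaws.com', 11211

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(3)
try:
    sock.connect((host, port))
    print('reachable')
except OSError as e:
    print('not reachable:', e)
finally:
    sock.close()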
For security purposes, I have disabled public access under the Networking tab in Key Vault and have a private endpoint in place. Both the Key Vault and the private endpoint reside in the same resource group. I have an app registration for my application, which I have granted access under Access policies in the Key Vault.
Using the Python SDK:
from azure.keyvault.secrets import SecretClient
from azure.identity import ClientSecretCredential as cs
keyVaultName = "<NAME>"
kvURI = "https://<NAME>.vault.azure.net"
AZ_TENANT_ID = '<AZ_TENANT_ID>'
AZ_CLIENT_ID = '<AZ_CLIENT_ID>'
AZ_CLIENT_SECRET = '<AZ_CLIENT_SECRET>'
credential = cs(
    tenant_id=AZ_TENANT_ID,
    client_id=AZ_CLIENT_ID,
    client_secret=AZ_CLIENT_SECRET)

def set_secret(secretname, secretvalue):
    print(credential)
    secret_client = SecretClient(vault_url=kvURI, credential=credential)
    secret = secret_client.set_secret(secretname, secretvalue, enabled=True)
    sec_dic = {}
    sec_dic['name'] = secret.name
    sec_dic['value'] = secret.value
    sec_dic['properties'] = secret.properties.version
    return sec_dic

xx = set_secret('g', 'ff')
print(xx)
When running this code, I get the following error:
azure.core.exceptions.HttpResponseError: (Forbidden) Public network access is disabled and request is not from a trusted service nor via an approved private link.
Code: Forbidden
Message: Public network access is disabled and request is not from a trusted service nor via an approved private link.
Inner error: {
"code": "ForbiddenByConnection"
}
What am I doing wrong? How do I connect to a Key Vault that has public access disabled, only via its private endpoint?
I have reproduced this in my environment and got the expected results, as described below.
Firstly, I followed the same process you described and got the same error you did.
This error occurs because you have created a private endpoint: with public network access disabled, the Key Vault can only be reached from inside the virtual network that the private endpoint is attached to.
When you create a private endpoint in a particular virtual network/subnet, you need to create a Virtual Machine integrated with that virtual network (or run your code from something else inside that network).
Then open the VM integrated with the above virtual network/subnet; when I try to access the Key Vault from there, I get the required result.
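To confirm the private endpoint is actually being used from that VM, one quick check (a sketch; the vault name is a placeholder) is that the vault hostname resolves to a private IP inside your VNet rather than a public address:

import socket

# From inside the VNet this should resolve to the private endpoint's
# private IP (e.g. 10.x.x.x), not a public one
print(socket.gethostbyname('<NAME>.vault.azure.net'))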
References:
azure - Unable to get storage account container details - Stack Overflow
Azure Key Vault not allow access via private endpoint connection
Azure functions and Azure KeyVault communicating through service endpoint
I have an API Gateway defined in the Python CDK that will accept cURL RESTful requests to upload/read/delete files from an S3 bucket:
api = api_gw.RestApi(self, "file-api",
                     rest_api_name="File REST Service")

file = api.root.add_resource("{id}")

get_files_integration = api_gw.LambdaIntegration(handler,
                                                 request_templates={"application/json": '{ "statusCode": "200" }'})
post_file_integration = api_gw.LambdaIntegration(handler)
get_file_integration = api_gw.LambdaIntegration(handler)
delete_file_integration = api_gw.LambdaIntegration(handler)

api.root.add_method("GET", get_files_integration, authorization_type=api_gw.AuthorizationType.COGNITO, authorizer=auth)
file.add_method("POST", post_file_integration)      # POST /{id}
file.add_method("GET", get_file_integration)        # GET /{id}
file.add_method("DELETE", delete_file_integration)  # DELETE /{id}
Is it possible to enable CORS on the API Gateway so that it will perform pre-flight checks and allow external access from a localhost on another machine?
I have attempted to use the add_cors_preflight() method described in the documentation I could find, but I believe this may no longer be valid as of CDK 2.0.
Yes, IResource.add_cors_preflight() does exactly this.
You can also specify default CORS config with the default_cors_preflight_options attribute of RestApi.
Here are the examples from the docs. They're in TypeScript, but they work the same way in Python.
The following example will enable CORS for all methods and all origins on all resources of the API:
new apigateway.RestApi(this, 'api', {
  defaultCorsPreflightOptions: {
    allowOrigins: apigateway.Cors.ALL_ORIGINS,
    allowMethods: apigateway.Cors.ALL_METHODS // this is also the default
  }
})
The following example will add an OPTIONS method to the myResource API resource, which only allows GET and PUT HTTP requests from the origin https://amazon.com.
declare const myResource: apigateway.Resource;
myResource.addCorsPreflight({
  allowOrigins: [ 'https://amazon.com' ],
  allowMethods: [ 'GET', 'PUT' ]
});
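For reference, here is a rough Python CDK v2 equivalent of the examples above (a sketch, assuming the aws_apigateway module is imported as api_gw, as in the question's code):

# Enable CORS for all methods and all origins on every resource of the API
api = api_gw.RestApi(self, "file-api",
    rest_api_name="File REST Service",
    default_cors_preflight_options=api_gw.CorsOptions(
        allow_origins=api_gw.Cors.ALL_ORIGINS,
        allow_methods=api_gw.Cors.ALL_METHODS  # this is also the default
    )
)

# Or enable it per resource, e.g. on the {id} resource from the question
file = api.root.add_resource("{id}")
file.add_cors_preflight(
    allow_origins=["https://amazon.com"],
    allow_methods=["GET", "PUT"]
)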
I'm having a very hard time figuring out how to correctly configure SQS in boto3 so I can send messages to my SQS queue. It looks like there is some confusion around boto3 and legacy endpoints, but I'm getting the error message "The address 'https://us-west-2.queue.amazonaws.com/xxxx/my-name' is not valid for this endpoint." for every permutation of the config I can imagine. Here's the code.
# Tried both of these
sqs_queue_url = 'https://sqs.us-west-2.amazonaws.com/xxxx/my-queue'
sqs_queue_url = 'https://us-west-2.queue.amazonaws.com/xxxx/my-queue'
# Tried both of these
sqs = boto3.client("sqs", endpoint_url="https://sqs.us-west-2.amazonaws.com")
sqs = boto3.client("sqs")
# _endpoint updates
logger.info("sqs endpoint: %s", sqs._endpoint)
# Keeps failing
sqs.send_message(QueueUrl=sqs_queue_url, MessageBody=message_json)
I'm hoping this is a silly mistake. What config am I missing?
From the docs, the AWS CLI and the Python SDK use legacy endpoints:
If you use the AWS CLI or SDK for Python, you can use the following legacy endpoints.
Also, when you set the endpoint you need to include the https:// scheme:
sqs = boto3.client("sqs", endpoint_url="https://us-west-2.queue.amazonaws.com")
I manage a couple of clusters at GKE (presently, but the number will increase) and up till now have been fine launching things manually as needed. I've started working on my own API that can take requests to spin up new resources on demand for a specific cluster, but to make it scalable I need to do something more dynamic than switching between clusters with each request. I have found a link for a Google API Python client that supposedly can access GKE:
https://developers.google.com/api-client-library/python/apis/container/v1#system-requirements
I've also found several other clients (the one I was looking at most closely is the Node.js client from GoDaddy) that can access Kubernetes:
https://github.com/godaddy/kubernetes-client
The Google API client doesn't appear to be documented for use with GKE/kubectl commands, and the GoDaddy kubernetes-client has to access a single cluster master but can't reach one at GKE (without a kubectl proxy enabled first). So my question is: how does one manage Kubernetes on GKE programmatically, without the command-line utilities, in either Node.js or Python?
I know this question is a couple of years old, but hopefully this helps someone. Newer GKE APIs are available for Node.js here: https://cloud.google.com/nodejs/docs/reference/container/0.3.x/
See list of container APIs here: https://developers.google.com/apis-explorer/#p/container/v1/
Once connected via the API, you can access the cluster details, which include the connectivity information (the master endpoint and CA certificate) needed to talk to the cluster with standard Kubernetes API calls.
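In Python, a rough sketch of fetching those details, assuming a recent release of the google-cloud-container package and application-default credentials; the project, location, and cluster names are placeholders:

from google.cloud import container_v1

# Uses application-default credentials (e.g. a service account)
gke = container_v1.ClusterManagerClient()

# Placeholders: replace with your project, location, and cluster name
cluster_name = "projects/my-project/locations/us-central1-a/clusters/my-cluster"
cluster = gke.get_cluster(request={"name": cluster_name})

# The endpoint and CA certificate are what a Kubernetes client needs to reach the master
print(cluster.endpoint)
print(cluster.master_auth.cluster_ca_certificate)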
I just posted an article on Medium with an example of how to do this.
The first part of the article outlines how to set up the service account, roles, and credentials and load them as environment variables. Once that's done, you can run the following Python:
from kubernetes import client
import base64
from tempfile import NamedTemporaryFile
import os
import yaml
from os import path

def main():
    try:
        host_url = os.environ["HOST_URL"]
        cacert = os.environ["CACERT"]
        token = os.environ["TOKEN"]

        # Set the configuration
        configuration = client.Configuration()
        with NamedTemporaryFile(delete=False) as cert:
            cert.write(base64.b64decode(cacert))
            configuration.ssl_ca_cert = cert.name
        configuration.host = host_url
        configuration.verify_ssl = True
        configuration.debug = False
        configuration.api_key = {"authorization": "Bearer " + token}
        client.Configuration.set_default(configuration)

        # Prepare all the required properties in order to run the create_namespaced_job method
        # https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/BatchV1Api.md#create_namespaced_job
        v1 = client.BatchV1Api()
        with open(path.join(path.dirname(__file__), "job.yaml")) as f:
            body = yaml.safe_load(f)
            v1.create_namespaced_job(namespace="default", body=body, pretty=True)
        return 'Job created successfully', 200
    except Exception as e:
        return str(e), 500

if __name__ == '__main__':
    main()
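If you'd rather not ship a separate job.yaml file, here is a sketch of building an equivalent body with the client's model classes; the job name, image, and command are purely illustrative:

from kubernetes import client

def build_job_body():
    # Illustrative container spec; swap in your own image and command
    container = client.V1Container(
        name="pi",
        image="perl:5.34",
        command=["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"],
    )
    template = client.V1PodTemplateSpec(
        spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
    )
    spec = client.V1JobSpec(template=template, backoff_limit=4)
    return client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="pi-job"),
        spec=spec,
    )

# Then pass it in place of the YAML body:
# client.BatchV1Api().create_namespaced_job(namespace="default", body=build_job_body())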
Is it possible to create an EC2 instance using Boto3 in Python?
The Boto3 documentation is not helping here, and I couldn't find any helpful documents online. Please provide some sample code/links.
The API has changed, but it's right there in the documentation:
# Boto 3
ec2.create_instances(ImageId='<ami-image-id>', MinCount=1, MaxCount=5)
Link to the documentation:
http://boto3.readthedocs.org/en/latest/guide/migrationec2.html#launching-new-instances
You can run the code I used from the boto3 docs. You can add or remove parameters as per your requirements, but this is what you would normally require:
import boto3
client = boto3.client('ec2', region_name='us-west-2')
response = client.run_instances(
    BlockDeviceMappings=[
        {
            'DeviceName': '/dev/xvda',
            'Ebs': {
                'DeleteOnTermination': True,
                'VolumeSize': 8,
                'VolumeType': 'gp2'
            },
        },
    ],
    ImageId='ami-6cd6f714',
    InstanceType='t3.micro',
    MaxCount=1,
    MinCount=1,
    Monitoring={
        'Enabled': False
    },
    SecurityGroupIds=[
        'sg-1f39854x',
    ],
)
The link you're really looking for in the documentation is the create_instances() method of the ServiceResource object. This is the type of object you are calling if you create an EC2 resource like this:
s = boto3.Session(region_name="us-west-1")
ec2 = s.resource('ec2')
...
instance = ec2.create_instances(**y_kwargs)
This contains a more detailed example and a longer list of available parameters.
You can also get parameter values for AWS instances that are already running using the AWS command line interface:
$ aws ec2 describe-instances
This prints out JSON from which relevant parameters can be extracted and passed to the create_instances() method. (Or you can use a boto3 client and call the describe_instances() method, as in the sketch below.)
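A short sketch of the client version (the region is a placeholder and credentials are assumed to be configured):

import boto3

client = boto3.client('ec2', region_name='us-west-1')

response = client.describe_instances()
for reservation in response['Reservations']:
    for instance in reservation['Instances']:
        # Pick out a few parameters you might reuse with create_instances()
        print(instance['InstanceId'], instance['InstanceType'], instance['ImageId'])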
(Note: If you're wondering what the difference is between the Client and the Resource, they serve different purposes for the same end - the client is a lower-level interface while the Resource is a higher-level interface.)
Refer to the API docs, which list all available options for creating an instance:
http://boto3.readthedocs.org/en/latest/reference/services/ec2.html#EC2.Subnet.create_instances
If you're running this from your Windows computer, you need to configure the AWS CLI with the proper EC2 permissions to launch an instance.
import boto3
ec2 = boto3.resource('ec2')
instance = ec2.create_instances(
    ImageId='ami-5eb63a32',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
)
print(instance[0].id)