I've been trying to find a way to delete routes in AWS programmatically. I've built a Python application for managing AWS resources using boto and boto3. When cleaning up after deleting a VPC peering connection, I'm left with blackholed routes. I don't want to delete the route tables in question, just the blackholed routes.
The AWS CLI has a delete-route command, but I can't find the corresponding function in boto, and I'd prefer not to shell out to the AWS CLI from my Python app if I can avoid it.
In boto3 (and boto) there are methods for creating routes, but I couldn't find any for deleting routes (only for deleting the whole route table). I've searched for this numerous times but haven't come close to finding an answer.
Any help?
I do see a method in boto 2.38.
class boto.vpc.VPCConnection
delete_route(route_table_id, destination_cidr_block, dry_run=False)
Deletes a route from a route table within a VPC.
Parameters:
    route_table_id (str) – The ID of the route table with the route.
    destination_cidr_block (str) – The CIDR address block used for destination match.
    dry_run (bool) – Set to True if the operation should not actually run.
Return type: bool
Returns: True if successful
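For the boto3 side, the EC2 client exposes the same operation as delete_route, and describe_route_tables reports each route's state, so the blackholed entries can be targeted without touching the rest of the table. A minimal sketch (the route table ID is a placeholder):

# Sketch: remove only the blackholed routes from one route table using boto3.
# The route table ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2")
route_table_id = "rtb-0123456789abcdef0"

response = ec2.describe_route_tables(RouteTableIds=[route_table_id])
for table in response["RouteTables"]:
    for route in table["Routes"]:
        # Routes orphaned by a deleted peering connection show up with State "blackhole".
        if route.get("State") == "blackhole" and "DestinationCidrBlock" in route:
            ec2.delete_route(
                RouteTableId=route_table_id,
                DestinationCidrBlock=route["DestinationCidrBlock"],
            )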
I'm building a Flask server in Python on Cloud Run, for a chatbot to call.
Sometimes, when the user wants to do something with the chatbot, the bot needs to ask the user to log in to a 3rd-party server before doing it.
I have two routes:
Route 1 is "/login". It returns a simple iframe which opens a login page on a 3rd-party server, generates a "session_id", and saves some info I already have into a global dict called "runtimes", keyed by the "session_id", so that I can use it later once the visitor has successfully logged in.
Route 2 is "/callback/<session_id>". After the user successfully logs in to their account, the 3rd-party server calls this route with a token in the URL parameters. I then use the "session_id" to read the saved info from "runtimes" and do the rest.
It works well on my local machine. But on Google Cloud Run, which supports multiple instances, the "callback" request sometimes lands on a new instance, which cannot see "runtimes" because it lives in a different instance.
I know that I could save the runtimes dict to a database to solve this, but that feels like overkill... it just doesn't seem right.
Is there an easy way to share "runtimes" between instances?
The solution here is to use a central point of storage: a database, Memorystore, Firestore... something outside of Cloud Run itself.
You can also try the Cloud Run second-generation execution environment, which allows you to mount a network file system such as Cloud Storage or Filestore. You could, for example, store each session's data in a file named after its session ID.
Note: on the Cloud Run side something is cooking, but it won't be 100% safe; it will be best effort. A database backup will still be required even with that new feature.
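As a rough illustration of the Firestore option, the two routes could read and write the per-session data by its ID instead of keeping a module-level dict. A sketch, assuming the google-cloud-firestore client library; "runtimes" is just an example collection name:

# Sketch: replace the in-memory "runtimes" dict with Firestore documents so any
# Cloud Run instance can see the data. Assumes google-cloud-firestore is installed;
# the collection name is an example.
from google.cloud import firestore

db = firestore.Client()

def save_runtime(session_id, data):
    # Called from the "/login" route instead of runtimes[session_id] = data.
    db.collection("runtimes").document(session_id).set(data)

def load_runtime(session_id):
    # Called from the "/callback/<session_id>" route instead of runtimes[session_id].
    doc = db.collection("runtimes").document(session_id).get()
    return doc.to_dict() if doc.exists else None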
I would like to create resources depending on a parameter's value. How can I achieve that?
Example:
vpc_create = core.CfnParameter(stack, "createVPC")
condition = core.CfnCondition(
    stack,
    "testeCondition",
    expression=core.Fn.condition_equals(vpc_create, True)
)
vpc = ec2.Vpc(stack, "MyVpc", max_azs=3)
How do I add the condition to the VPC resource so that the VPC is created only if the parameter is true?
I think I need to get at the underlying CloudFormation resource, something like this:
vpc.node.default_child  # I think this returns an object of the ec2.CfnVPC class, but I'm stuck here.
Thanks
Conditional resource creation, and a lot of other flexibility, is possible using context data. AWS itself recommends context over parameters:
In general, we recommend against using AWS CloudFormation parameters with the AWS CDK. Unlike context values or environment variables, the usual way to pass values into your AWS CDK apps without hard-coding them, parameter values are not available at synthesis time, and thus cannot be easily used in other parts of your AWS CDK app, particularly for control flow.
Please read in full at: https://docs.aws.amazon.com/cdk/latest/guide/parameters.html
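With context, the decision happens in plain Python at synthesis time, so no CfnCondition is needed. A sketch, assuming CDK v1-style imports as in the question and an example context key create_vpc (passed with cdk synth -c create_vpc=true or set in cdk.json):

# Sketch: context-driven conditional creation. "create_vpc" is an example
# context key, not a CDK built-in.
from aws_cdk import core
from aws_cdk import aws_ec2 as ec2

app = core.App()
stack = core.Stack(app, "MyStack")

if app.node.try_get_context("create_vpc") == "true":
    vpc = ec2.Vpc(stack, "MyVpc", max_azs=3)

app.synth()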
I have an app that is meant to integrate with third-party apps. These apps should be able to trigger a function when data changes.
The way I was envisioning this, I would use a Node function to safely prepare the data for the third parties and get the URL to call from the app's configuration in Firestore. I would call that URL from the Node function and wait for it to return, updating results as necessary (actually, triggering a push notification). These third-party functions would tend to be Python functions, so my demo should be in Python.
I have the initial Node function and Firestore set up, and I am currently triggering an ECONNREFUSED -- because I don't know how to set up the third-party function.
Let's say this is the function I need to trigger:
def hello_world(request):
    request_json = request.get_json()
    if request_json and 'name' in request_json:
        name = request_json['name']
    else:
        name = 'World'
    return 'Hello, {}!\n'.format(name)
Do I need to set up a separate gcloud account to host this function, or can I include it with my Firebase functions? If so, how do I deploy it? Typically with my Node functions, I run firebase deploy and it automagically finds my functions in my index.js file.
If you're asking whether Cloud Functions that are triggered by Cloud Firestore can co-exist in a project with Cloud Functions that are triggered by HTTP(S) requests, then the answer is "yes they can". There is no need to set up a separate (Firebase or Cloud) project for each function type.
However: when you deploy your Cloud Functions through the Firebase CLI with firebase deploy, it will remove any functions it finds in the project that are not in your code. Since functions written in Python and in Node.js never share a single codebase, a blanket deploy would always delete some of your functions. In that case, use the granular deploy option of the Firebase CLI.
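For the Python function, one approach (a sketch, not the only way) is to keep it in its own small codebase and deploy it as a standalone HTTP Cloud Function with gcloud, while the Node.js functions stay in the Firebase project and are deployed granularly with firebase deploy --only functions:<name>. The functions-framework dependency below is an assumption for local testing; the handler is the same one from the question:

# Sketch: the third-party HTTP function kept in its own main.py, separate from
# the Node.js Firebase functions. functions-framework is an assumed dependency
# used for local testing.
import functions_framework

@functions_framework.http
def hello_world(request):
    request_json = request.get_json(silent=True)
    name = request_json["name"] if request_json and "name" in request_json else "World"
    return "Hello, {}!\n".format(name)

Locally it can be served with functions-framework --target hello_world, and deployed with something like gcloud functions deploy hello_world --runtime python310 --trigger-http. Both kinds of functions can live in the same Google Cloud project.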
I am building out a product that will use the serverless architecture on Amazon (using this example project).
Right now the product is usable by anyone. However, I don't want just anyone to be able to add/update/delete from the database; I do want anyone to be able to read from it, though. So I'd like to use two different sets of credentials: the first would be distributed with the application and allow read-only access; the second would remain internal and be embedded in OS environment variables that the application would use.
It looks like these permissions are set up in the serverless.yml file, but this only covers one set of credentials.
iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:Query
      - dynamodb:Scan
      - dynamodb:GetItem
      - dynamodb:PutItem
      - dynamodb:UpdateItem
      - dynamodb:DeleteItem
    Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.DYNAMODB_TABLE}"
How can I set up two different roles?
IAM offers a number of pre-defined, managed IAM policies for DynamoDB, including:
AmazonDynamoDBReadOnlyAccess
AmazonDynamoDBFullAccess
Create two IAM roles with these managed policies: one for your read-only application and the other for your internal system. If either or both are running on EC2 then, rather than relying on credentials in environment variables, you can launch those EC2 instances with the relevant IAM role.
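If you'd rather create the two roles in code than in the console, a boto3 sketch might look like the following; the role names and the EC2 trust policy are placeholders, so adjust the principal if the roles are assumed elsewhere (e.g. by Lambda).

# Sketch: two IAM roles built on the AWS managed DynamoDB policies. Role names
# and the EC2 trust policy are placeholders.
import json
import boto3

iam = boto3.client("iam")

trust_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
})

roles = [
    ("app-dynamodb-readonly", "arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess"),
    ("internal-dynamodb-full", "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"),
]

for role_name, policy_arn in roles:
    iam.create_role(RoleName=role_name, AssumeRolePolicyDocument=trust_policy)
    iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)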
I am writing a lambda function on Amazon AWS Lambda. It accesses the URL of an EC2 instance, on which I am running a web REST API. The lambda function is triggered by Alexa and is coded in the Python language (python3.x).
Currently, I have hard coded the URL of the EC2 instance in the lambda function and successfully ran the Alexa skill.
I want the Lambda function to automatically obtain the IP of the EC2 instance, which changes every time I start the instance. That way I wouldn't have to go into the code and hard-code the URL each time I start the EC2 instance.
I stumbled upon a similar question on SO, but it was unanswered. However, there was a reply which suggested updating IAM roles. I have created IAM roles for other purposes before, but I am still not used to them.
Is this possible? Will it require managing of security groups of the EC2 instance?
Do I need to set some permissions/configurations/settings? How can the lambda code achieve this?
Additionally, I pip installed the requests library on my system and tried uploading a '.zip' file with this structure:
REST.zip/
    requests library folder
    index.py
I am currently using the urllib library.
When I upload my code as a zip file (I currently edit the code inline), Lambda can't even access the index.py file to run the code.
You could do it using boto3, but I would advise against that architecture. A better approach would be to use a load balancer (even if you only have one instance) and then use the DNS name of the load balancer in your application (this will not change for as long as the LB exists).
An even better way, if you have access to your own domain name, would be to create a CNAME record pointing to the address of the load balancer. Then you can happily use that DNS name in your Lambda function without fear of it ever changing.
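For completeness, the boto3 lookup mentioned (and advised against) above would look roughly like this; the instance ID is a placeholder and the Lambda execution role would need ec2:DescribeInstances permission. The load-balancer DNS name approach avoids all of this.

# Sketch: fetching an instance's current public address from inside Lambda.
# The instance ID is a placeholder; the execution role needs ec2:DescribeInstances.
import boto3

def get_instance_url(instance_id="i-0123456789abcdef0"):
    ec2 = boto3.client("ec2")
    response = ec2.describe_instances(InstanceIds=[instance_id])
    instance = response["Reservations"][0]["Instances"][0]
    address = instance.get("PublicIpAddress") or instance["PublicDnsName"]
    return "http://{}".format(address)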