I am looking for an AWS service that will allow me to run a simple Python script, made with Flask, that runs constantly in an infinite loop.
The instance must be created and configured via an API call from the main web page, with its own PostgreSQL database.
Example:
Client1 -> goes to the web page and creates his own instance -> the web page calls the AWS API -> AWS creates instance1 for client1, which runs constantly.
Note: I am looking for a cheap service for this. It could be something between an EC2 instance and a Lambda function (a Lambda function doesn't work because the script needs to be autonomous).
If the Python script fails for any reason, it should restart automatically.
Also, client1 has the option to turn the script off and on whenever he wants.
Thank you very much in advance!
Elastic Beanstalk is AWS's equivalent of Heroku.
https://aws.amazon.com/elasticbeanstalk/
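If you go this route, environments can also be created programmatically, which matches your "web page calls the AWS API" flow. Here is a rough sketch with boto3; the application name, environment name, version label, and solution stack below are placeholder assumptions (you can list valid stacks with list_available_solution_stacks):

import boto3

# Rough sketch: create one Elastic Beanstalk environment per client.
# The names and the solution stack below are placeholders, not real values.
eb = boto3.client('elasticbeanstalk', region_name='us-east-1')

response = eb.create_environment(
    ApplicationName='my-flask-app',      # an existing EB application
    EnvironmentName='client1-env',       # one environment per client
    VersionLabel='v1',                   # an application version uploaded earlier
    SolutionStackName='64bit Amazon Linux 2 v3.3.16 running Python 3.8',
)
print(response['EnvironmentId'])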
I have recently started exploring Azure Container Apps for a microservice.
I have kept the minimum number of replicas at 0 and the maximum at 10.
I am using a queue trigger input binding, so that whenever a message arrives in the queue it is processed.
I expected it to work like a Function App, where the container would be invoked on the input trigger. However, what I have observed is that the trigger does not get processed under the conditions I described above.
If I change the minimum replicas to 1, then the trigger gets processed like a Function App. But this doesn't make it a serverless service, as one instance is on all the time and is costing me money (I am also unable to find how much it costs in the idle state).
Can someone please tell me whether I have understood Container Apps correctly, and whether there is a way to only invoke the container when a message arrives in the queue?
The scenario that you are describing is what we support with ScaledJobs in KEDA instead of ScaledObjects (which are for daemon-like workloads).
ScaledJobs, however, are not yet supported in Azure Container Apps; this is tracked on GitHub.
Based on the example in the documentation, you can scale from 0 for an Azure Storage queue using a KEDA scaler.
Currently I automatically start a VM by running a Cloud Function with this code:
import googleapiclient.discovery

def start_vm(context, event):
    compute = googleapiclient.discovery.build('compute', 'v1')
    result = compute.instances().start(
        project='PROJECT', zone='ZONE', instance='NAME').execute()
Now I am looking for a way to deliver a message or a parameter at the same time, so that after the VM starts, different code runs based on the added message/parameter. Does anyone know how to achieve this?
I appreciate any help.
Thank you.
You can use guest attributes: the Cloud Function adds the guest attribute and then starts the VM.
In the startup script, you read the data from the guest attributes and then use it to perform your tasks.
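As a rough sketch of that first flow, here is one way to pass a value along with the start call. It uses custom instance metadata as the carrier, which the Cloud Function can write with setMetadata before starting the VM and the startup script can read back from the metadata server; the project, zone, instance, and the 'task' key are placeholders:

import googleapiclient.discovery

def start_vm_with_param(project, zone, instance, task_value):
    compute = googleapiclient.discovery.build('compute', 'v1')

    # setMetadata requires the current metadata fingerprint of the instance.
    info = compute.instances().get(
        project=project, zone=zone, instance=instance).execute()
    metadata = info['metadata']

    # Replace (or add) the assumed 'task' key, keeping the other entries.
    items = [i for i in metadata.get('items', []) if i['key'] != 'task']
    items.append({'key': 'task', 'value': task_value})

    compute.instances().setMetadata(
        project=project, zone=zone, instance=instance,
        body={'fingerprint': metadata['fingerprint'], 'items': items},
    ).execute()

    # Start the VM; the startup script can then read the value back with e.g.
    # curl -H "Metadata-Flavor: Google" \
    #   http://metadata.google.internal/computeMetadata/v1/instance/attributes/task
    return compute.instances().start(
        project=project, zone=zone, instance=instance).execute()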
The other solution is to run a web server in the VM and then POST a request to this web server.
This solution is better if you have several tasks to perform on the VM. But take care of security if you expose a web server: expose it only internally and use a VPC connector on your Cloud Function to reach your VM.
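For that second option, the Cloud Function side can be as small as a single request; the internal IP, port, and /task endpoint below are made-up placeholders:

import requests

# Hypothetical internal address and endpoint of the web server running on the VM.
resp = requests.post(
    'http://10.128.0.5:8080/task',
    json={'task': 'reprocess-daily'},   # the parameter you want to deliver
    timeout=10,
)
resp.raise_for_status()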
I recently got my Flask site up and running; it uses data that's scraped from several other web sites and displays it on my own site. My issue is that I don't know how to run the web scraping script as part of the deployment. It runs indefinitely and updates every minute, which is crucial to what I'm working on. Thanks for any tips.
Scripts that run indefinitely are not a good fit for App Engine or similar products like Cloud Functions or Cloud Run. These products are meant to respond to a single request and may spin up new instances to handle new requests, and then spin the instance down.
If you need to run a script continuously, Compute Engine might be a better fit. Another alternative would be to reconfigure your script to be something that can be run repeatedly, and make it a Cloud Function that is called repeatedly by Cloud Scheduler.
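If you go the Cloud Scheduler route, the refactor mostly means turning the loop body into a single pass. A rough sketch of the function side, where scrape_once() and store_results() are hypothetical stand-ins for your existing scraping and persistence code:

# main.py of an HTTP-triggered Cloud Function, called every minute by Cloud Scheduler.
# scrape_once() and store_results() are hypothetical placeholders for your own code.
def scrape(request):
    results = scrape_once()    # one pass of what used to be the infinite loop
    store_results(results)     # write somewhere your Flask app can read from
    return ('ok', 200)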
I am trying to create an AWS SAM app with multiple AWS Serverless functions.
The app has one template.yaml file which has resources for 2 different serverless Lambda functions, for instance a "Consumer" Lambda and a "Worker" Lambda. The Consumer gets triggered every 5 minutes. The Consumer uses the boto3 library to trigger the Worker Lambda function. This code works when the Worker Lambda is deployed on AWS.
But I want to test both functions locally with sam local invoke "Consumer", which should invoke "Worker" locally as well.
Here's a screenshot of the YAML file:
I am using PyCharm to run the project. There is an option to run only one function at a time, which then creates only one folder in the build folder.
I have to test whether the Consumer is able to invoke the Worker locally in PyCharm before deployment. I think there is some way to do it but I am not sure how. I did an extensive search but it didn't yield anything.
Any help is appreciated. Thanks in advance.
You can start the lambda invoke endpoint in the following way (official docs):
sam local start-lambda
Now you can point your AWS resource client to port 3001 and trigger the functions locally.
For example, if you are doing this in Python, it can be achieved with boto3 as follows (the region, function name, and payload below are placeholders to replace with your own):
import boto3

# Create a Lambda client that points at the local endpoint
# started by `sam local start-lambda` (it listens on port 3001 by default).
lambda_client = boto3.client('lambda',
                             region_name="us-east-1",  # any valid region works locally
                             endpoint_url="http://127.0.0.1:3001",
                             use_ssl=False,
                             verify=False)

# Invoke the Worker function by the name it has in template.yaml.
response = lambda_client.invoke(FunctionName="WorkerFunction",  # placeholder name
                                Payload=b'{"key": "value"}')    # placeholder payload
I am trying to set up a Python script that I can have running all the time and then use an HTTP request to activate an action in the script, so that when I type a URL like this into a web browser:
http://localhost:port/open
The script executes a piece of code.
The idea is that I will run this script on a computer on my network and activate the code remotely from elsewhere on the network.
I know this is possible with other programming languages as I've seen it before, but I can't find any documentation on how to do it in Python.
Is there an easy way to do this in Python or do I need to look into other languages?
First, you need to select a web framework. I recommend using Flask, since it is lightweight and really easy to get started with.
We begin by initializing your app and setting a route. your_open_func() (in the code below), which is decorated with the @app.route("/open") decorator, will be triggered and run when you send a request to that particular URL (for example http://127.0.0.1:5000/open).
As Flask's website says: flask is fun. The very first example (with minor modifications) from there suits your needs:
from flask import Flask

app = Flask(__name__)

@app.route("/open")
def your_open_func():
    # Do your stuff right here.
    return 'ok'  # Remember to return or a ValueError will be raised.
In order to run your app, app.run() is usually enough, but in your case you want other computers on your network to be able to access the app, so you should call the run() method like so: app.run(host="0.0.0.0").
By passing that parameter you are making the server publicly available.
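Putting it together, the bottom of the script could look like this (the port number is an arbitrary choice; use whichever port you plan to type into the browser):

if __name__ == "__main__":
    # 0.0.0.0 makes the server reachable from other machines on the network.
    app.run(host="0.0.0.0", port=8080)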