Running intensive jobs ad hoc in a cloud service - Python

I'm looking for some advice on running intensive jobs on demand on something like AWS or DigitalOcean.
Here's my scenario:
I have a template/configuration of a VM with its dependencies (ImageMagick, Ruby, Python, etc.)
I have a codebase that runs a job, e.g. querying a DB and running reports, then emailing those reports to my user base
I want to be able to trigger this job externally (e.g. from a web app somewhere else, from a command line, or from a cron job on another cloud instance)
When I run this job, it needs to spin up a copy of this template on AWS or DO and run the job, which could run for any length of time, until all reports are generated and sent out
Once the job has finished, shut down the instance so I'm not paying for something that is always running in the background
I'd rather not commit to one service (e.g. AWS), but instead have a template that can be dropped in anywhere so I can test out the differences between cloud providers
Initially I was thinking of rubber, but that seems more like something you'd use for CI rather than for spinning up an instance, running a long-running job, and shutting the instance down once it's finished.
Does anything already exist for this, or would I need to build something myself hooking into the relevant APIs?
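If I had to roll it myself, I imagine the glue would look roughly like the sketch below (assuming AWS and boto3 just as one example; the AMI ID, instance type and job command are placeholders for my actual template): launch an instance from the pre-baked image with a user-data script that runs the job and powers the machine off, with the instance set to terminate on shutdown so nothing keeps billing afterwards.

# Rough sketch only (boto3 / EC2): launch a copy of the template image,
# let a user-data script run the job, then let the instance terminate itself.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

user_data = """#!/bin/bash
cd /opt/reports && python3 run_reports.py   # placeholder for the real job
shutdown -h now                             # power off once the job is done
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: my pre-baked template AMI
    InstanceType="c5.xlarge",          # placeholder: whatever the job needs
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
    # terminate (not just stop) when the user-data script powers the box off
    InstanceInitiatedShutdownBehavior="terminate",
)
print("launched", response["Instances"][0]["InstanceId"])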

Related

AWS Cloud9: How can I automate a task to run on a daily basis?

I've created a Python script that grabs information from an API and sends it in an email. I'd like to automate this process so that it runs on a daily basis, say at 9 AM.
The servers must be asleep whenever they are not running this automation.
What is the easiest way to achieve this?
Note: I'm on the free tier of AWS.
Cloud9 is the IDE that lets you write, run, and debug your code with just a browser.
"It preconfigures the development environment with all the SDKs, libraries, and plug-ins needed for serverless development. Cloud9 also provides an environment for locally testing and debugging AWS Lambda functions. This allows you to iterate on your code directly, saving you time and improving the quality of your code."
OK, for the requirement you have posted, there are two ways of achieving this:
On a local system, use the cron job scheduler daemon to run the script (see a tutorial for cron).
The same thing can also be achieved with a Lambda function. Lambda only runs when it is triggered, consuming compute resources only for the time it is invoked, so your servers are asleep the rest of the time (technically you are not provisioning any server at all for Lambda).
Convert your script into a function for Lambda, then use the EventBridge service, where you can specify a cron expression to run your script every day at 9 AM. I wrote an article on this that may help.
Note: for the email service you can use SES (https://aws.amazon.com/ses/); my article uses SES.
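To make that concrete, a minimal sketch of such a handler might look like the following (not taken from the article; the API URL, addresses and region are placeholders, and the sender must be an SES-verified identity). An EventBridge rule with the cron expression cron(0 9 * * ? *) would then invoke it at 9 AM UTC every day.

# Hedged sketch of a Lambda handler: fetch data from an API, mail it via SES.
import json
import urllib.request

import boto3

ses = boto3.client("ses", region_name="us-east-1")

def lambda_handler(event, context):
    # placeholder API call
    with urllib.request.urlopen("https://api.example.com/data") as resp:
        data = json.load(resp)

    ses.send_email(
        Source="reports@example.com",                  # placeholder, SES-verified
        Destination={"ToAddresses": ["me@example.com"]},
        Message={
            "Subject": {"Data": "Daily report"},
            "Body": {"Text": {"Data": json.dumps(data, indent=2)}},
        },
    )
    return {"status": "sent"}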
To schedule events you'd need a Lambda function with CloudWatch Events, as follows: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html
Cloud9 is an IDE.

Scheduled job from Flask application

I am hoping to gain a basic understanding of scheduled task processes and why things like Celery are recommended for Flask.
My situation is a web-based tool which generates spreadsheets based on user input. I save those spreadsheets to a temp directory, and when the user clicks the "download" button, I use Flask's "send_from_directory" function to serve the file as an attachment. I need a background service to run every 15 minutes or so to clear the temp directory of all files older than 15 minutes.
My initial plan was a basic Python script running in a while True loop, but I did some research into what people normally do, and everything recommends Celery or other task managers. I looked into Celery and found that I would also need to learn about Redis, and apparently I need to host Redis in a Unix environment. This is a lot of trouble for a script that just deletes files every 15 minutes.
I'm developing my Flask app locally on Windows with the built-in development server and deploying to a virtual machine on the company intranet with IIS. I'm learning as I go, so please explain why this much machinery is needed to regularly call a script that simply deletes things. It seems like a vast overcomplication, but as I said, I'm trying to learn as I go, so I want to do/learn it correctly.
Thanks!
You wouldn't use Celery or Redis for this. A cron job would be perfectly appropriate.
Celery is for jobs that need to be run asynchronously but in response to events in the main server processes. For example, if a sign-up form requires sending an email notification, that would be scheduled and run via Celery so as not to block the main web response.
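For what it's worth, the cleanup job itself only needs a few lines of Python, which cron (or the Windows Task Scheduler, given the IIS deployment) can call every 15 minutes. A rough sketch, with the temp directory path as a placeholder:

# Delete files older than 15 minutes from the Flask app's temp directory.
# TEMP_DIR is a placeholder; a cron entry like
#   */15 * * * * /usr/bin/python3 /path/to/cleanup.py
# would run it on the required schedule.
import os
import time

TEMP_DIR = "/tmp/report_downloads"
MAX_AGE_SECONDS = 15 * 60

now = time.time()
for name in os.listdir(TEMP_DIR):
    path = os.path.join(TEMP_DIR, name)
    if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE_SECONDS:
        os.remove(path)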

How can I run a Python script on Google Cloud or AWS instances without manually firing them up?

I have some Python scripts that require a powerful machine. Instead of starting instances manually on GCP or AWS and then making sure all the Python libraries are installed, can I do it through Python, for example, so that the instance is on only for the time needed to run the script?
If you're on AWS you could just create Lambda functions for your scripts and set those on a timer via Lambda, or use CloudWatch to trigger them.
In both AWS and Google Cloud, you can do just about anything via a programming language, including Python.
Last year, AWS announced hibernation for EC2 instances (pause and resume). This feature allows you to set up and configure an EC2 instance and, when you are finished with your data processing, put the instance to sleep. You then just pay for storage and IP address costs.
New – Hibernate Your EC2 Instances
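As a rough illustration (boto3, with a placeholder instance ID), the pause/resume cycle is just a hibernating stop followed by a normal start; the instance must have been launched with hibernation enabled (HibernationOptions={"Configured": True}).

# Hedged sketch of hibernate/resume with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"   # placeholder

# "pause": hibernate instead of a plain stop
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)

# later, "resume": a normal start restores the instance from its saved RAM image
ec2.start_instances(InstanceIds=[instance_id])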
Google has also announced alpha features for pausing Compute Engine instances, but this feature is not generally available today; you must apply to use it.
Another option supported by both AWS and Google today is instance templates. These allow you to create a template with all the options that you want, such as installing packages on startup. You can then launch a new custom instance from the console, the CLI, or your favorite programming language. When your task is complete you can then stop or terminate the instance.
Of course, there is also the standard method: launch an instance, configure it as required, and then stop it. When you need processing power, start the instance, process your data, and stop it again. The difference between this method and pausing an instance is that resuming is faster than a full start. It's a bit like your laptop: close the lid and the laptop goes to sleep; open the lid and you have almost instant-on.
If you are fortunate enough to have a running Kubernetes cluster, you can do everything with a container and launch the container via the CLI. The container will automatically stop once it finishes its task.
You can have a script that invokes the AWS CLI to start up an instance, connect to it and run the script via SSH, and then terminate the instance. See the AWS CLI documentation here: https://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-launch.html.
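A minimal sketch of that start/run/stop loop, using boto3 rather than the raw CLI (the instance ID, SSH user and script path are placeholders; stop rather than terminate the instance if you want to reuse the machine):

# Hedged sketch: start an existing instance, run the job over SSH, stop it again.
import subprocess

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"   # placeholder

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

host = ec2.describe_instances(InstanceIds=[instance_id])[
    "Reservations"][0]["Instances"][0]["PublicIpAddress"]

# run the job and wait for it to finish
subprocess.run(["ssh", f"ec2-user@{host}", "python3 /home/ec2-user/job.py"], check=True)

ec2.stop_instances(InstanceIds=[instance_id])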
An alternative to #bspeagle's suggestion of AWS Lambda is GCP's Cloud Functions.

How to run a briefly running Docker container on Azure on a daily basis?

In the past, I've been using WebJobs to schedule small recurring tasks that perform a specific background task, e.g. generating a daily summary of user activities. For each task, I've written a console application in C# that was published as an Azure WebJob.
Now I'd like to execute, daily, some Python code that is already working in a Docker container. I think I've figured out how to get a container running in Azure. Now I want to minimize the operating cost, since the container will only run for about 5 minutes. Therefore, I'd like to schedule my container to start once per day (at 1 AM) and shut down after completion. How can I achieve this setup in Azure?
I'd probably write a scheduled build job on VSTS (or similar) to run at 1 AM daily and launch a container on Azure Container Instances. The container should shut down on its own when the program exits (so your program has to do that without help from outside).
Azure Functions has what you need to schedule daily tasks. In your case, you would select the Python runtime and schedule the job through the Azure Portal (select the Timer trigger option).
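For reference, a timer-triggered Python function is only a few lines. A rough sketch under the classic Python programming model (the schedule lives in function.json as an NCRONTAB expression, e.g. "schedule": "0 0 1 * * *" for 1 AM daily; the body here is just a placeholder for whatever kicks off the daily work):

# __init__.py of the function - hedged sketch only.
import logging

import azure.functions as func

def main(mytimer: func.TimerRequest) -> None:
    if mytimer.past_due:
        logging.info("Timer is past due")
    # placeholder for the actual daily work
    logging.info("Daily 1 AM job ran")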

What is a robust way to execute long-running tasks/batches under Django?

I have a Django app that is intended to be run on VirtualBox VMs on LANs. The typical user will be a savvy IT end user, not a sysadmin.
Part of the app's job is to connect to external databases on the LAN, run some Python batches against those databases, and save the results in its local DB. The user can then explore the systems using Django pages.
Run time for the batches isn't all that long: minutes, potentially tens of minutes, not seconds. Runs are infrequent at best; you could go days without needing a refresh.
This is not Celery's normal use case of long tasks that eventually push results back into the web UI via AJAX and/or polling. It is more like a dev's occasional use of django-admin commands, but this time intended for an end user.
The user should be able to initiate a run of one or several of those batches when they want in order to refresh the calculations of a given external database (the target db is a parameter to the batch).
Until the batches are done for a given DB, the app really isn't usable. You can access its pages, but many functions won't be available.
It is very important, from a support point of view, that the batches remain easily runnable at all times. Dropping down to the VM's shell over SSH would probably require frequent hand-holding, which wouldn't be good; ideally they can be launched from the Django web pages.
What I currently have:
Each batch is in its own script.
I can run each on the command line (via if __name__ == "__main__":).
The batches are also hooked up as Celery tasks and work fine that way.
Given the way I have written them, it would be relatively easy for me to allow running them from subprocess calls in Python. I haven't really looked into it, but I suppose I could make them into django-admin commands as well.
The batches already have their own rudimentary status checks. For example, they can look at the calculated data and tell whether they have been run and display that in Django pages without needing to look at celery task status backends.
The batches themselves are relatively robust and I can make them more so. This is about their launch mechanism.
What's not so great:
In my Mac dev environment I find the celery/celerycam/rabbitmq stack to be somewhat unstable. Sometimes the rabbitmq daemon balloons in CPU/RAM use and needs to be terminated. That thoroughly confuses the Celery processes, and I find I have to kill -9 various tasks and relaunch them manually. Sometimes Celery still works but celerycam doesn't, so there are no task updates. Some of these issues may be OS X specific, or may be due to the DEBUG flag being switched on for now, which Celery warns about.
So then I need to run the batches on the command line (which is what I was trying to avoid) until the whole Celery stack has been reset.
This might be acceptable on a normal website, with an admin watching over it. But I can't have that happen on a remote VM to which only the user has access.
Given that these are somewhat fire-and-forget batches, I am wondering whether Celery isn't overkill at this point.
Some options I have thought about:
Writing a cleanup shell/Python script to restart rabbitmq/celery/celerycam and generally make the stack more robust, i.e. whatever is required to make Celery and friends more stable. I've already used psutil to figure out whether the rabbitmq/celery processes are running and to display their status in Django.
Running the batches via subprocess instead and avoiding Celery. What about django-admin commands here? Does that make a difference? They still need to be runnable from the web pages.
An alternative task/process manager to Celery, with less capability but also fewer moving parts?
Not using subprocess but relying on the Python multiprocessing module? To be honest, I have no idea how that compares to launching via subprocess.
Environment:
nginx, WSGI, Ubuntu on VirtualBox, Chef to build the VMs.
I'm not sure how your Celery configuration makes it unstable, but it sounds like Celery is still the best fit for your problem. I'm using Redis as the queue system and, in my experience, it works better than RabbitMQ. Maybe you can try it and see if it improves things.
Otherwise, just use cron as a driver to run the periodic tasks. Let it run your script periodically and update the database; your UI component can then poll the database with no conflict.
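One way to keep both entry points (cron and the web pages) on the same code path is to wrap each batch in a management command. This is only a sketch; the app, module and function names are placeholders for your existing batch code:

# myapp/management/commands/run_batch.py - hedged sketch, names are placeholders.
# Lets cron, a subprocess call from a view, or a developer on the CLI run:
#   python manage.py run_batch <target_db>
from django.core.management.base import BaseCommand

from myapp.batches import refresh_calculations   # placeholder for the existing batch code

class Command(BaseCommand):
    help = "Run the calculation batch against one external database"

    def add_arguments(self, parser):
        parser.add_argument("target_db")

    def handle(self, *args, **options):
        refresh_calculations(options["target_db"])
        self.stdout.write("batch finished")

A crontab entry (e.g. 0 3 * * 0 /path/to/venv/bin/python /path/to/manage.py run_batch somedb) covers the periodic case, and a Django view can launch the same command with subprocess.Popen for the on-demand case.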
