I am trying to figure out the best way to run a Python process that typically takes 10-30 minutes (an hour at most) on my local machine. The process will be manually triggered, and may not be triggered for hours or days.
I am a bit confused, because the official ms-docs state that one should avoid long-running processes in Function Apps (https://learn.microsoft.com/en-us/azure/azure-functions/performance-reliability#avoid-long-running-functions), but at the same time the functionTimeout for the Premium and Dedicated plans can be unlimited.
I am hesitant to use a standard web app with an API since it seems overkill to have it running 24/7.
Are there any ideal resources for this?
You can use consumption-based Azure Durable Functions; they can run for hours or even days.
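As a rough sketch (Python v2 programming model, assuming the azure-functions and azure-functions-durable packages; the function names and route here are just illustrative), the long-running work goes into an activity function that an orchestrator calls. Keep in mind that on the Consumption plan each activity invocation is still bound by functionTimeout, so work longer than that generally needs to be split into smaller activities or moved to a Premium/Dedicated plan.

```
# function_app.py - sketch of a Durable Functions app (Python v2 model).
# Names ("start", "orchestrator", "run_long_job") are illustrative.
import azure.functions as func
import azure.durable_functions as df

app = df.DFApp(http_auth_level=func.AuthLevel.FUNCTION)

# HTTP trigger used to start the orchestration manually
@app.route(route="start")
@app.durable_client_input(client_name="client")
async def start(req: func.HttpRequest, client) -> func.HttpResponse:
    instance_id = await client.start_new("orchestrator")
    # returns URLs for polling the orchestration status
    return client.create_check_status_response(req, instance_id)

# Orchestrator: hands the long-running work to an activity function
@app.orchestration_trigger(context_name="context")
def orchestrator(context: df.DurableOrchestrationContext):
    result = yield context.call_activity("run_long_job", "some-input")
    return result

# Activity: this is where the 10-30 minute processing actually runs
@app.activity_trigger(input_name="payload")
def run_long_job(payload: str) -> str:
    # ... long-running processing here ...
    return "done"
```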
I have a simple Python script that I would like to run thousands of instances of on GCP (at the same time). The script is triggered by the $Universe scheduler, with something like "python main.py --date '2022_01'".
What architecture and technology should I use to achieve this?
PS: I cannot drop $Universe, but I'm not against suggestions to use other technologies.
My solution:
I already have a $Universe server running all the time.
Create a Pub/Sub topic
Create a permanent Compute Engine instance that listens to Pub/Sub all the time
$Universe sends thousands of events to Pub/Sub
The Compute Engine instance triggers the creation of a Python Docker image on another Compute Engine instance
Scale the creation of the Docker images (I don't know how to do this)
Is it a good architecture?
How to scale this kind of process?
Thank you :)
It might be very difficult to discuss architecture and design questions, as they are usually heavily dependent on the context, scope, functional and non-functional requirements, cost, available skills and knowledge, and so on...
Personally, I would prefer to stay with an entirely serverless approach if possible.
For example, use Cloud Scheduler (serverless cron jobs), which sends messages to a Pub/Sub topic, on the other side of which there is a Cloud Function (or something else) that is triggered by the message.
Whether it should be a Cloud Function or something else, and what exactly it should do and how, depends on your case.
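For illustration, a minimal sketch of the function on the receiving end of such a Pub/Sub message (2nd-gen Cloud Functions with the Python functions-framework; the entry-point name and the payload format are assumptions):

```
# main.py - sketch of a Cloud Function triggered by a Pub/Sub message,
# assuming the functions-framework package. The payload (a date string such
# as "2022_01") is an assumption based on the question.
import base64
import functions_framework

@functions_framework.cloud_event
def handle_message(cloud_event):
    # Pub/Sub delivers the message data base64-encoded inside the CloudEvent
    date = base64.b64decode(cloud_event.data["message"]["data"]).decode("utf-8")
    print(f"Processing date: {date}")
    # ... run the actual work for this date here ...
```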
As I understand it, you will have a lot of simultaneous calls to custom Python code, triggered by an orchestrator ($Universe), and you want it on the GCP platform.
Like #al-dann, I would go with a serverless approach in order to reduce the cost.
As I also understand it, Pub/Sub doesn't seem necessary: you could easily trigger the function with a plain HTTP call and avoid Pub/Sub.
Pub/Sub is only needed for delivery guarantees (at-least-once processing), but you can get the same behaviour if $Universe validates the HTTP response for every call (look at the HTTP response code & body and retry if they don't match the expected result).
If you want exactly-once processing, you will need more tooling; you are then close to event streaming (which could be a good use case, as I also understand it). In a full-GCP setup I would go with Pub/Sub & Dataflow, which can guarantee exactly-once, or with Kafka & Kafka Streams or Flink.
If at-least-once processing is fine for you, I would go with the HTTP version, which I think will be simple to maintain. You have 3 serverless options for that case:
App Engine standard: scales to 0 and you pay for CPU usage; it can be more affordable than Cloud Functions below if the requests are confined to short periods (a few hours per day, since the same hardware will process many requests)
Cloud Functions: you pay per request (+ CPU, memory, network, ...) and don't have to think about anything other than the code, but the code runs on a proprietary runtime
Cloud Run: my preferred one, since it has the same pricing as Cloud Functions but you gain portability: the application is a simple Docker image that you can move easily (to Kubernetes, Compute Engine, ...), and you can change the execution engine depending on cost (if the load changes between the study and the real world). A minimal wrapper sketch follows this list.
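A minimal sketch of what that HTTP wrapper could look like for Cloud Run (Flask and the JSON body shape are assumptions; the placeholder comment stands in for whatever main.py currently does):

```
# app.py - sketch of exposing the script over HTTP so Cloud Run can scale it
# to zero between calls. Flask and the JSON body shape are assumptions.
import os
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle():
    payload = request.get_json(silent=True) or {}
    date = payload.get("date", "")
    # ... run the existing processing for `date` here ...
    return f"processed {date}", 200

if __name__ == "__main__":
    # Cloud Run provides the port via the PORT environment variable
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```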
The challenge is to run a set of data processing and data science scripts that consume more memory than expected.
Here are my requirements:
Running 10-15 Python 3.5 scripts via Cron Scheduler
Each of these 10-15 scripts takes somewhere between 10 seconds and 20 minutes to complete
They run at different hours of the day; some of them run every 10 minutes while some run once a day
Each script logs what it has done so that I can take a look at it later if something goes wrong
Some of the scripts send e-mails to me and to my teammates
None of the scripts has an HTTP/web server component; they all run on cron schedules and are not user-facing
All the scripts' code is fed from my GitHub repository; when scripts wake up, they first do a git pull origin master and then start executing. That means pushing to master causes all scripts to be on the latest version.
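Concretely, the wake-up step is a small wrapper along these lines (the repository path and script name here are just illustrative):

```
# run_latest.py - sketch of the "git pull, then execute" wake-up step.
# REPO_DIR and SCRIPT are placeholders, not taken from the actual setup.
import subprocess

REPO_DIR = "/opt/scripts"       # placeholder: local clone of the GitHub repo
SCRIPT = "daily_report.py"      # placeholder: the script scheduled by cron

subprocess.run(["git", "pull", "origin", "master"], cwd=REPO_DIR, check=True)
subprocess.run(["python3", SCRIPT], cwd=REPO_DIR, check=True)
```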
Here is what I currently have:
Currently I am using 3 Digital Ocean servers (droplets) for these scripts
Some of the scripts require a huge amount of memory (I get segmentation fault in droplets with less than 4GB of memory)
I am willing to introduce a new script that might require even larger memory (the new script currently faults in a 4GB droplet)
The setup of the droplets is relatively easy (thanks to Python venv), but not to the point of executing a single command to spin up a new droplet and set it up
Having a fully dedicated 8GB / 16GB droplet for my new script sounds a bit inefficient and expensive.
What would be a more efficient way to handle this?
What would be a more efficient way to handle this?
I'll answer in three parts:
Options to reduce memory consumption
Minimalistic architecture for serverless computing
How to get there
(I) Reducing Memory Consumption
Some handle large loads of data
If you find the scripts use more memory than you expect, the only way to reduce the memory requirements is to
understand which parts of the scripts drive memory consumption
refactor the scripts to use less memory
Typical issues that drive memory consumption are:
using the wrong data structure - e.g. if you have numerical data, it is usually better to load the data into a numpy array as opposed to a plain Python list. If you create a lot of objects of custom classes, it can help to use __slots__
loading too much data into memory at once - e.g. if the processing can be split into several parts independent of each other, it may be more efficient to only load as much data as one part needs, then use a loop to process all the parts.
holding object references that are no longer needed - e.g. in the course of processing you create objects to represent or process some part of the data. If the script keeps a reference to such an object, it won't get released until the end of the program. One way around this is to use weak references; another is to use del to dereference objects explicitly. Sometimes it also helps to call the garbage collector.
using an offline algorithm when there is an online version (for machine learning) - e.g. some of scikit-learn's algorithms provide a version for incremental learning, such as LinearRegression => SGDRegressor or LogisticRegression => SGDClassifier (see the sketch after this list)
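A small sketch combining the "load the data in parts" and "online algorithm" points above (the file name and column names are placeholders):

```
# Sketch: process the data in chunks instead of loading it all at once, and
# train an online learner with partial_fit. "data.csv" and the column names
# are placeholders, not from the question.
import pandas as pd
from sklearn.linear_model import SGDRegressor

model = SGDRegressor()
for chunk in pd.read_csv("data.csv", chunksize=100_000):  # one chunk in memory at a time
    X = chunk[["feature_a", "feature_b"]].to_numpy()
    y = chunk["target"].to_numpy()
    model.partial_fit(X, y)  # incremental update; the full dataset never sits in RAM
```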
some do minor data science tasks
Some algorithms do require large amounts of memory. If using an online algorithm for incremental learning is not an option, the next best strategy is to use a service that only charges for the actual computation time/memory usage. That's what is typically referred to as serverless computing - you don't need to manage the servers (droplets) yourself.
The good news is that in principle the provider you use, Digital Ocean, provides a model that only charges for resources actually used. However this is not really serverless: it is still your task to create, start, stop and delete the droplets to actually benefit. Unless this process is fully automated, the fun factor is a bit low ;-)
(II) Minimalistic Architecture for Serverless Computing
a fully dedicated 8GB / 16GB droplet for my new script sounds a bit inefficient and expensive
Since your scripts run only occasionally / on a schedule, your droplet does not need to run or even exist all the time. So you could set this up the following way:
Create a scheduling droplet. This can be of a small size. Its only purpose is to run a scheduler and to create a new droplet when a script is due, then submit the task for execution on this new worker droplet. The worker droplet can be of the specific size to accommodate the script, i.e. every script can have a droplet of whatever size it requires.
Create a generic worker. This is the program that runs upon creation of a new droplet by the scheduler. It receives as input the URL to the git repository where the actual script to be run is stored, and a location to store results. It then checks out the code from the repository, runs the scripts and stores the results.
Once the script has finished, the scheduler deletes the worker droplet.
With this approach there are still fully dedicated droplets for each script, but they only cost money while the script runs.
(III) How to get there
One option is to build an architecture as described above, which would essentially be an implementation of a minimalistic architecture for serverless computing. There are several Python libraries to interact with the Digital Ocean API. You could also use libcloud as a generic multi-provider cloud API to make it easy(ier) to switch providers later on.
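As a rough sketch of the scheduler's create-run-delete cycle with the python-digitalocean package (the token, repository URL, droplet size and cloud-init script are all placeholders, and the worker here simply powers itself off when done):

```
# scheduler_job.py - sketch of "create worker droplet, wait, delete it" using
# the python-digitalocean client. Token, repo URL, sizes and the cloud-init
# script are placeholders; error handling is omitted.
import time
import digitalocean

TOKEN = "your-digitalocean-api-token"  # placeholder

cloud_init = """#cloud-config
runcmd:
  - git clone https://github.com/example/scripts.git /opt/scripts
  - python3 /opt/scripts/heavy_job.py
  - poweroff
"""

droplet = digitalocean.Droplet(
    token=TOKEN,
    name="worker-heavy-job",
    region="fra1",
    image="ubuntu-22-04-x64",
    size_slug="s-4vcpu-8gb",   # sized for the script that needs ~8GB
    user_data=cloud_init,      # bootstraps the generic worker
)
droplet.create()

# Poll until the worker has powered itself off, then destroy it so it stops costing money.
while True:
    droplet.load()
    if droplet.status == "off":
        droplet.destroy()
        break
    time.sleep(60)
```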
Perhaps the better alternative before building it yourself is to evaluate one of the existing open source serverless options. An extensive curated list is provided by the good fellows at awesome-serverless. Note that at the time of writing, many of the open source projects are still in their early stages, while the more mature options are commercial.
As always with engineering decisions, there is a trade-off between the time/cost required to build or host it yourself vs. the cost of using a readily available commercial service. Ultimately that's a decision only you can take.
I have a Python script that needs to be scheduled to run once a day. It will take around 4-6GB of memory (due to a large amount of dataframe operations). I will be using AWS, and I would like to know what the best practice is for handling such a task. Is it a good idea to put it in a container like Docker before deployment?
Since your data needs to fit in RAM, I'd recommend using a memory-optimized EC2 instance with a CloudWatch event.
To minimize the cost, however, you don't want to have this EC2 instance running the whole day, so what you can do is have a couple of Lambda functions sitting between CloudWatch and the EC2 instance to:
start the EC2 instance once the daily trigger runs, and
stop the EC2 instance with a trigger from your Python code that runs once it's finished
If that doesn't make much sense let me know and I'll try and elaborate with a diagram.
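For what it's worth, the "start on the daily trigger" Lambda can be just a few lines of boto3 (the instance ID is a placeholder; the stop side works the same way with stop_instances, triggered once your script finishes):

```
# lambda_function.py - sketch of the Lambda that CloudWatch invokes on the
# daily schedule. The instance ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder

def lambda_handler(event, context):
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    return {"started": INSTANCE_ID}
```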
The documentation indicates that a task:
Tasks targeted at an automatic scaled module must finish execution within 10 minutes. If you have tasks that require more time or computing resources, they can be sent to manual or basic scaling modules, where they can run up to 24 hours.
The link surrounding manual or basic scaling modules talks about a target, but doesn't say more about how to have a task that runs for a day.
You guessed my question :) How do I tell GAE that this specific task will run for a day, not a minute?
You'll need to configure a module to use basic or manual scaling, and deploy your task-handling code to an instance of that module.
You can read more about configuring modules/versions/instances on the App Engine Modules page for Python.
I am new to AWS EC2, so I am making this post to ask a few questions.
1) Right now, I am considering running some scripts on the server. I usually use two tools. One is a piece of software that can only be used on Windows; the other is just Python. Should I launch two instances, one for Windows and one for Ubuntu? Or just one Windows instance with Git Bash installed? I want to be cost- and performance-efficient.
2) I am not going to use the scripts very often (usually 2-3 hours per day or 10-12 hours per week). Therefore, is it easy to schedule those jobs automatically across the instances? I mean, can they automatically turn off and restart at the appropriate times?
3) Some of the scripts involve web scraping. I am also wondering if it is OK to switch the IP address every time I run the script. Mainly, this is for the Python script.
Thanks.
1) Well, of course, the fewer instances you have, the less you will pay. Python can run on Windows; I just don't know how tricky it would be to make it work in your case. It all depends on what you are running and what your management requirements are. Those scripting languages were originally designed for Unix environments, so people usually run them on that kind of system, and running them on Windows may be a little unpleasant. Anyway, I don't think you should ask someone else about this; you should figure out yourself what suits you best.
2) AWS doesn't have a scheduler for EC2 (stopping, starting, etc., at given dates/times/recurrences). It's something that I miss on it too. So, to achieve something like this, you have some options.
Turning your temporary instance into an auto-scaling group of 1 instance, and using scheduled policies to scale it in to zero instances and scale it out to 1 instance again when you want (see the sketch after these options). The problem with this approach is: if you can't be sure how long your job will take to complete, then you have a problem, of course, because those scheduled actions are based on fixed dates/times. One solution for this would be for the temporary instance itself to change the auto-scaling group configuration to zero instances via the API when it has finished. (In this case, you would just have a scheduled scale-out policy to launch the instance, leaving its termination to be done 'manually', via auto-scaling group configuration handling from inside the temporary instance.) But be aware that auto-scaling is very tricky for beginners, and you should go through the documentation before using it. (For example, each time your instances scale in and out, they are terminated, not just stopped, and you lose all data on them.)
Not using an auto-scaling group, having a regular instance, and scheduling all those actions from outside it via the API. This could be done from your Windows (master) instance. In that case, the master would start the temporary instance via the API, which would run its jobs and then turn itself off when it had finished. Otherwise, the master instance would have to keep polling the temporary one somehow to know when the jobs are done so it can be shut down from outside.
There are probably more complicated ways for doing this (Elastic Beanstalk crons, maybe).
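If you do go with the auto-scaling route from option 1), the scheduled scale-out and the "scale myself back in" call could look roughly like this with boto3 (the group name and cron expression are placeholders):

```
# Sketch of option 1): a scheduled action launches the instance, and the
# instance sets the desired capacity back to 0 when its job finishes.
# Names and the cron expression are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")
GROUP = "temporary-job-asg"  # placeholder

# Scheduled scale-out to 1 instance (e.g. every day at 03:00 UTC)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=GROUP,
    ScheduledActionName="launch-daily-worker",
    Recurrence="0 3 * * *",
    MinSize=0,
    MaxSize=1,
    DesiredCapacity=1,
)

# Called from inside the temporary instance once the job is done:
def scale_in():
    autoscaling.set_desired_capacity(AutoScalingGroupName=GROUP, DesiredCapacity=0)
```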
I think, in this case, the simpler, the better. So, I would stick to option 2). You will only need to figure out how to install and use the AWS CLI on Windows and manage IAM credentials and permissions to give your CLI enough access to do what it needs.
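A bare-bones version of the master-side control for option 2), shelling out to the AWS CLI (the instance ID is a placeholder, and the CLI is assumed to be installed and configured with suitable IAM credentials):

```
# master_control.py - sketch of the Windows master starting/stopping the
# temporary instance through the AWS CLI. The instance ID is a placeholder;
# the CLI needs IAM credentials that allow these actions.
import subprocess

WORKER_ID = "i-0123456789abcdef0"  # placeholder

def start_worker():
    subprocess.run(["aws", "ec2", "start-instances", "--instance-ids", WORKER_ID], check=True)

def stop_worker():
    subprocess.run(["aws", "ec2", "stop-instances", "--instance-ids", WORKER_ID], check=True)
```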
3) If you don't assign an Elastic IP to your instance, you will get a different IP each time you stop and start it, so this is, by default, what you want. With auto-scaling, this is the only way; you can't even assign a fixed IP to instances.
I hope I could help you a little bit.