I have some DAGs already defined in Airflow that query third-party APIs, pull some data partitioned by date (for example, the trending items for yesterday) and write it to the DB. They can also be triggered manually with a bunch of parameters to download the same items without the date-based logic. So far so good; this is a standard scenario for Airflow.
I now want to reuse and adapt some of these DAGs to perform special queries: in Airflow terms this means running them with different job parameters. I could do it manually, one run at a time, but that is clearly not ideal. The main reason is that these third-party APIs have daily quota thresholds that we don't want to cross, so we are not free to run everything every day; we need to be considerate with the executions.
So let's say I want to download 100 entities, whose IDs I can fetch through a service call, and let's say my quota is 10 per day. One solution would be to create a DAG that makes the call and saves the IDs into a database along with the date on which each should be processed, but then I'm doing the Airflow scheduler's job, which doesn't seem right. There are many things that could go wrong.
I could do the same trick but with something that looks like a queue: one manually triggered DAG puts tasks into the queue and another, daily DAG pulls from it. This kind of works in my mind, but it seems like a lot of effort, and I'm not sure what should keep track of the queue. Something like Celery seems like overkill, so I would probably have to use a database. Still, it feels like over-engineering and some kind of Airflow anti-pattern, but I don't have much experience with the tool, so feedback is welcome.
Are there other options? Is there some Airflow feature that would solve this easily?
I am trying to do some tasks in Django that consume a lot of time. For that, I will be running background tasks.
After some R&D, I have found two solutions:
Celery with RabbitMQ.
Django Background tasks.
Both options seem to fulfill the criteria, but setting up Celery will require some work. As far as the second option is concerned, the setup is fairly simple, and in a fairly short amount of time I can get on with writing background tasks. If I adopt the second option, my questions are these:
How well does Django Background Tasks perform? (Scalability-wise, in a production environment.)
Can I poll the tasks (after some time) in the DB to check a task's status?
What is the architecture of Django-Background-Tasks? I couldn't find any clear explanation of its architecture. (Or have I missed some resource?)
Coming back to the first point: how well does Django Background Tasks perform in production? (I'm asking about prior experience of using it in prod.)
Setting up Celery takes work (although less when using Redis). It's also a serious tool, with almost a decade of investment and widespread industry adoption.
As for performance, the scaling behaviors of task systems backed by queues vs. those backed by RDBMSs are well understood, but they may not be relevant to you, as "scalability" is a very subjective term. This thread provides some good framing on the subject and the questions to ask.
Comparing stars on GitHub (Django-Background-Tasks' 3XX vs. Celery's 13XXX), you should realize Django-Background-Tasks has a smaller user base, and you're probably going to need to get into the internals to understand the architecture and precise mechanics. That shouldn't stop you – just be prepared to DIY when answers aren't forthcoming.
How well does Django Background Tasks perform? – This will depend on how and what you implement. One thing to note: Django-Background-Tasks is database-backed, whereas Celery can have Redis/RabbitMQ as its backend, so we will most probably see a considerable performance difference here.
Can I poll the tasks (after some time) in the DB to check a task's status? – It's possible in Celery, and you may find a solution by inspecting Django-Background-Tasks' internal code. One difference: Celery tasks can be aborted (see the sketch after these points), which may not be possible in Django-Background-Tasks.
Architecture of Django-Background-Tasks? – It's a simple Django-based project. You can have a look at the code; it seems to be pretty straightforward.
Coming back to the first point: how well does Django Background Tasks perform in production? – I haven't used it in production. But since Django-Background-Tasks is database-based and Celery can be configured to use Redis/RabbitMQ, I think Celery has a plus point here.
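On polling and aborting with Celery, here is a minimal sketch (assuming a configured Celery app and a task id that was stored when the task was queued):

from celery.result import AsyncResult

# task_id is assumed to have been saved when the task was queued,
# e.g. task_id = some_task.delay().id
task_id = '...'
result = AsyncResult(task_id)
print(result.status)   # e.g. PENDING, STARTED, SUCCESS, FAILURE

# Revoking aborts the task; terminate=True also kills a running one.
result.revoke(terminate=True)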
To me this comparison seems like comparing a pistol with a high-end automatic machine gun. Both do the same job, but one is simple and straightforward, while the other is a little more complicated but has lots of options and scope.
Choose based on your use case.
I have decided to use Django-Background-Tasks. Let me clarify my motivations.
The tasks that will be processed by Django-Background-Tasks don't need to be processed quickly. As the name states, they are background tasks; I accept delays.
The architecture of Django-Background-Tasks is very simple. When your code calls a method that is marked for background processing, a task record is inserted into the Django-Background-Tasks tables in your database; the method you called is not actually executed, it is proxied. You then have to trigger another process to execute the jobs, and your method is executed in that process.
The process that executes the jobs can be started by a cron entry on your server.
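A minimal sketch of that flow, using django-background-tasks' @background decorator (the notify_user task and its schedule are illustrative):

# tasks.py
from background_task import background

@background(schedule=60)  # run no earlier than 60 seconds from now
def notify_user(user_id):
    # This body runs in the worker process, not in the web request.
    ...

# Calling the proxied method, e.g. from a view, only inserts a task row:
# notify_user(user.id)

The worker process itself is the python manage.py process_tasks management command, which is what the cron entry keeps alive (or relaunches).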
Since this setup is so easy and works for me, I decided to use Django-Background-Tasks. But if I needed something more responsive and fast, I would use Celery, since it works in memory and there is an active process consuming the jobs, which isn't the case with Django-Background-Tasks.
I'm currently making a program that would send random text messages at randomly generated times during the day. I first wrote my program in Python and then realized that if I would like other people to sign up to receive messages, I would have to use some sort of online framework. (If anyone knows a way to use my Python code without having to change it, that would be amazing, but for now I have been trying to use web2py.) I looked into the scheduler, but it does not seem to do what I have in mind. If anyone knows whether there is a way to pass a time value into a function and have it run at that time, that would be great. Thanks!
Check out the APScheduler module for cron-like scheduling of events in Python – their example shows how to schedule some Python code to run in a cron-ish way.
Still not sure about the random part, though.
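One possibility for the random part (a sketch, assuming APScheduler 3.x; the send_text function and the time window are illustrative) is to compute a random time yourself and schedule a one-shot job with the date trigger:

import random
from datetime import datetime, timedelta
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
scheduler.start()

def send_text(user):
    ...  # call your SMS provider here

# Pick a random moment within the next 12 hours and run the job once.
run_at = datetime.now() + timedelta(seconds=random.randint(60, 12 * 3600))
scheduler.add_job(send_text, 'date', run_date=run_at, args=['some-user'])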
As for a web framework that may appeal to you (seeing as you are already familiar with Python), you should really look into Django (or, to keep things simple, just use WSGI).
Best.
I think you can actually use web2py's Scheduler and Tasks. I've never used it ;) but the documentation describes the creation of a task to which you can pass parameters from your code – exactly what you need – and it should work fine for your purposes:
scheduler.queue_task('mytask', start_time=myrandomtime)
So you need a web2py cron job, running every day and firing code similar to the above for each message to be sent (passing the parameters you need, possibly the message content and phone number; see the examples in the web2py book). This amounts to a daily creation of tasks which are then processed later by the scheduler.
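A rough sketch of that daily job (the send_sms task name, the message table, and the time window are illustrative assumptions; db and scheduler are the usual web2py objects):

import random
from datetime import datetime, timedelta

def queue_tomorrows_messages():
    start_of_tomorrow = (datetime.now() + timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0)
    for msg in db(db.message.status == 'pending').select():
        # Pick a random moment between 08:00 and 22:00 tomorrow.
        run_at = start_of_tomorrow + timedelta(
            seconds=random.randint(8 * 3600, 22 * 3600))
        scheduler.queue_task('send_sms',
                             pvars={'message_id': msg.id},
                             start_time=run_at)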
You could also go with a simpler solution: one daily cron job which prepares the queue of messages with random times for the next day, and a second one which runs every, say, ten minutes, checks what is waiting to be processed, and sends the messages. So, no Tasks. This way is a bit ugly, though (consider a single run which takes more than ten minutes). You may also want to keep and check statuses on the messages to be processed (like pending, ongoing, done) to prevent a situation in which two jobs work on the same message, and to allow tracking the progress of the processing. Anyway, you could use the cron method in an early version of your software and later replace it with a better one :)
In any case, you should check the expected number of messages to process and the average processing time on your target platform, to make sure that the chosen method is quick enough for your needs.
This is an old question, but in case someone is interested: the answer is an APScheduler blocking scheduler with jobs set to run at regular intervals with some jitter.
See: https://apscheduler.readthedocs.io/en/3.x/modules/triggers/interval.html
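A minimal sketch of that setup (the six-hour interval and two-hour jitter are arbitrary):

from datetime import datetime
from apscheduler.schedulers.blocking import BlockingScheduler

scheduler = BlockingScheduler()

# Fires roughly every 6 hours, randomly shifted by up to +/- 2 hours.
@scheduler.scheduled_job('interval', hours=6, jitter=7200)
def send_random_text():
    print('sending at', datetime.now())

scheduler.start()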
I have found this solution for dynamically adding periodic task schedules with django-celery.
My use case is mailings, which are added individually for users of the website. Each mailing has a PeriodicTask associated with it, so there may potentially be a huge number of PeriodicTask records in the DB.
I'm interested: is this a valid (legal, proper, right) solution in this case, or is it better to have only one or a few PeriodicTasks which check when each mailing was last sent and send it if necessary?
According to its creator, Ask Solem, in this thread:
There is no known limit to the number of periodic tasks, and the celerybeat scheduler should perform well even with a large number of schedule entries.
That Google Groups thread and this one are the most clarifying about the concern you have.
That said, I'd like to give you some advice: even though the celerybeat scheduler can handle huge numbers of periodic tasks, that comes at a cost: more database entries, more tasks to monitor, more RAM, maybe more complexity in debugging because you are creating tasks dynamically, and more hits to the database, because for each mailing you will have to check its last-sent datetime and then decide whether to send the email.
On the other hand, if you can have one single periodic task that runs one query to retrieve just the mailing instances that have to be sent and then fires one subtask per email, it will look simpler in your code, when you have to debug it, and when you have to monitor it. Just my two cents.
Hope it helps.
Could you not have a single periodic task which runs every day, week, or whatever, and which first calculates all the users who require mailings at that time? Once you know all of these, you could kick off a sub-task in Celery for each of them, so that they are all executed asynchronously and the main task completes very quickly, e.g.
from celery.task import task  # old django-celery style; modern Celery uses @app.task

@task
def send_periodic_emails():
    users_who_need_mail = get_users_who_need_mail()
    for user in users_who_need_mail:
        send_user_email.delay(user.id)

@task
def send_user_email(user_id):
    # Do the email sending here
    ...
I appreciate this doesn't answer the question as it's formed, but it should allow you to avoid finding out whether this limit exists, or adding scheduled tasks programmatically!
A lot depends on the nature of your work. If you can group your users into classes for mailing purposes then it would seem natural to schedule mailing of the groups rather than mailing the individual users. If everyone is on a different schedule then by all means schedule each one individually. It's certainly legal and there's no compelling reason to avoid it if it's the natural solution to your problems.
You may want to run some tests to get an idea of the load you will generate, but your approach doesn't seem unreasonable.
After talking with a friend of mine from Google, I'd like to implement some kind of Job/Worker model for updating my dataset.
This dataset mirrors a 3rd party service's data, so, to do the update, I need to make several remote calls to their API. I think a lot of time will be spent waiting for responses from this 3rd party service. I'd like to speed things up, and make better use of my compute hours, by parallelizing these requests and keeping many of them open at once, as they wait for their individual responses.
Before I explain my specific dataset and get into the problem, I'd like to clarify what answers I'm looking for:
Is this a flow that would be well suited to parallelizing with MapReduce?
If yes, would this be cost-effective to run on Amazon's MapReduce module, which bills by the hour and rounds hours up when the job is complete? (I'm not sure exactly what counts as a "Job", so I don't know exactly how I'll be billed.)
If no, is there another system/pattern I should use? And is there a library that will help me do this in Python (on AWS, using EC2 + EBS)?
Are there any problems you see with the way I've designed this job flow?
Ok, now onto the details:
The dataset consists of users who have favorite items and who follow other users. The aim is to be able to update each user's queue -- the list of items the user will see when they load the page, based on the favorite items of the users she follows. But, before I can crunch the data and update a user's queue, I need to make sure I have the most up-to-date data, which is where the API calls come in.
There are two calls I can make:
Get Followed Users -- Which returns all the users being followed by the requested user, and
Get Favorite Items -- Which returns all the favorite items of the requested user.
After I call Get Followed Users for the user being updated, I need to update the favorite items for each user being followed. Only when all of the favorites have been returned for all of the followed users can I start processing the queue for that original user.
Jobs in this flow include:
Start Updating Queue for user -- kicks off the process by fetching the users followed by the user being updated, storing them, and then creating Get Favorites jobs for each user.
Get Favorites for user -- Requests, and stores, a list of favorites for the specified user, from the 3rd party service.
Calculate New Queue for user -- Processes a new queue, now that all the data has been fetched, and then stores the results in a cache which is used by the application layer.
So, again, my questions are:
Is this a flow that would be well suited to parallelizing with MapReduce? I don't know if it would let me start the process for UserX, fetch all the related data, and come back to processing UserX's queue only after that's all done.
If yes, would this be cost-effective to run on Amazon's MapReduce module, which bills by the hour and rounds hours up when the job is complete? Is there a limit on how many "threads" I can have waiting on open API requests if I use their module?
If no, is there another system/pattern I should use? And is there a library that will help me do this in Python (on AWS, using EC2 + EBS)?
Are there any problems you see with the way I've designed this job flow?
Thanks for reading, I'm looking forward to some discussion with you all.
Edit, in response to JimR:
Thanks for a solid reply. In my reading since I wrote the original question, I've leaned away from using MapReduce. I haven't decided for sure how I want to build this yet, but I'm beginning to feel that MapReduce is better for distributing/parallelizing computing load, when I'm really just looking to parallelize HTTP requests.
What would have been my "reduce" task, the part that takes all the fetched data and crunches it into results, isn't that computationally intensive. I'm pretty sure it's going to wind up being one big SQL query that executes for a second or two per user.
So, what I'm leaning towards is:
A non-MapReduce Job/Worker model, written in Python. A Google friend of mine turned me on to Python for this, since it's low-overhead and scales well.
Using Amazon EC2 as a compute layer. I think this means I also need an EBS slice to store my database.
Possibly using Amazon's Simple Queue Service. It sounds like this third Amazon widget is designed to keep track of job queues, move results from one task into the inputs of another, and gracefully handle failed tasks. It's very cheap, and may be worth using instead of a custom job-queue system.
The work you describe is probably a good fit for either a queue, or a combination of a queue and job server. It certainly could work as a set of MapReduce steps as well.
For a job server, I recommend looking at Gearman. The documentation isn't awesome, but the presentations do a great job documenting it, and the Python module is fairly self-explanatory too.
Basically, you create functions in the job server, and these functions get called by clients via an API. The functions can be called either synchronously or asynchronously. In your example, you probably want to add the "Start update" job asynchronously. That job will do whatever preparatory work is needed and then asynchronously call the "Get followed users" job. That job will fetch the users and then call the "Update followed users" job, which will submit all the "Get favourites for UserA" and friends jobs together in one go and synchronously wait for the results of all of them. When they have all returned, it will call the "Calculate new queue" job.
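A rough sketch of that flow with the python-gearman module (the server address and task names are assumptions; each job body would live in its own worker process):

import gearman

# Worker side: register the entry-point job.
def start_update(worker, job):
    user = job.data
    # ... fetch followed users, then submit the per-user jobs ...
    return 'done'

worker = gearman.GearmanWorker(['localhost:4730'])
worker.register_task('start_update', start_update)
# worker.work()  # blocks, processing jobs as they arrive

# Client side: kick off the update asynchronously (fire and forget).
client = gearman.GearmanClient(['localhost:4730'])
client.submit_job('start_update', 'UserX', background=True)

# Submitting several jobs at once and waiting synchronously for all:
requests = client.submit_multiple_jobs(
    [dict(task='get_favourites', data=u) for u in ('UserA', 'UserB')],
    background=False, wait_until_complete=True)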
This job-server-only approach will initially be a bit less robust, since ensuring that you handle errors, down servers, and persistence properly is going to be fun.
For a queue, SQS is an obvious choice. It is rock-solid, and very quick to access from EC2, and cheap. And way easier to set up and maintain than other queues when you're just getting started.
Basically, you will put a message onto the queue, much like you would submit a job to the job server above, except you probably won't do anything synchronously. Instead of making the "Get Favourites For UserA" and so forth calls synchronously, you will make them asynchronously, and then have a message that says to check whether all of them are finished. You'll need some sort of persistence (a SQL database you're familiar with, or Amazon's SimpleDB if you want to go fully AWS) to track whether the work is done - you can't check on the progress of a job in SQS (although you can in other queues). The message that checks whether they are all finished will do the check - if they're not all finished, don't do anything, and then the message will be retried in a few minutes (based on the visibility_timeout). Otherwise, you can put the next message on the queue.
This queue-only approach should be robust, assuming you don't consume queue messages by mistake without doing the work. Making a mistake like that is hard to do with SQS - you really have to try. Don't use auto-consuming queues or protocols - if you error out, you might not be able to ensure that you put a replacement message back on the queue.
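A rough sketch of the queue side using boto3, the current AWS SDK for Python (the queue name and message fields are illustrative, and handle() is an assumed dispatcher for the message types above):

import json
import boto3

sqs = boto3.resource('sqs')
queue = sqs.get_queue_by_name(QueueName='update-jobs')

# Producer: enqueue one "get favourites" message per followed user.
for followed in ('UserA', 'UserB', 'UserC'):
    queue.send_message(MessageBody=json.dumps(
        {'type': 'get_favourites', 'user': followed}))

# Consumer loop: delete the message only after the work succeeds, so a
# crash mid-job lets it reappear after the visibility timeout.
while True:
    for message in queue.receive_messages(WaitTimeSeconds=20,
                                          MaxNumberOfMessages=10):
        job = json.loads(message.body)
        handle(job)
        message.delete()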
A combination of queue and job server may be useful in this case. You can get away with not having a persistence store to check job progress - the job server will allow you to track job progress. Your "get favourites for users" message could place all the "get favourites for UserA/B/C" jobs into the job server. Then, put a "check all favourites fetching done" message on the queue with a list of tasks that need to be complete (and enough information to restart any jobs that mysteriously disappear).
For bonus points:
Doing this as a MapReduce should be fairly easy.
Your first job's input will be a list of all your users. The map will take each user, get the followed users, and output lines for each user and their followed user:
"UserX" "UserA"
"UserX" "UserB"
"UserX" "UserC"
An identity reduce step will leave this unchanged. This will form the second job's input. The map for the second job will get the favourites for each line (you may want to use memcached so a given user's favourites are fetched from the API only once, even if she appears in both the UserX/UserA and UserY/UserA lines), and output a line for each favourite:
"UserX" "UserA" "Favourite1"
"UserX" "UserA" "Favourite2"
"UserX" "UserA" "Favourite3"
"UserX" "UserB" "Favourite4"
The reduce step for this job will convert this to:
"UserX" [("UserA", "Favourite1"), ("UserA", "Favourite2"), ("UserA", "Favourite3"), ("UserB", "Favourite4")]
At this point, you might have another MapReduce job to update your database for each user with these values, or you might be able to use some of the Hadoop-related tools like Pig, Hive, and HBase to manage your database for you.
I'd recommend using Cloudera's Distribution for Hadoop's EC2 management commands to create and tear down your Hadoop cluster on EC2 (their AMIs have Python set up on them), and using something like Dumbo (on PyPI) to create your MapReduce jobs, since it allows you to test your jobs on your local/dev machine without access to Hadoop.
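For a feel of what the second job might look like in Dumbo (get_favourites is an assumed wrapper around the 3rd party API call):

import dumbo

def mapper(key, line):
    # Each input line looks like: "UserX" "UserA"
    user, followed = line.split()
    for favourite in get_favourites(followed):
        yield user, (followed, favourite)

def reducer(user, values):
    # Gather every (followed_user, favourite) pair for this user.
    yield user, list(values)

if __name__ == '__main__':
    dumbo.run(mapper, reducer)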
Good luck!
It seems that we're going with Node.js and the Seq flow-control library. It was very easy to move from my map/flowchart of the process to a stub of the code, and now it's just a matter of filling out the code to hook into the right APIs.
Thanks for the answers, they were a lot of help finding the solution I was looking for.
I am working on a similar problem that I need to solve. I was also looking at MapReduce and at using the Elastic MapReduce service from Amazon.
I'm pretty convinced MapReduce will work for this problem. The implementation is where I'm getting hung up, because I'm not sure my reducer even needs to do anything.
I'll answer your questions as I understand your (and my) problem, and hopefully it helps.
Yes, I think it'll be well suited. You could look at leveraging the Elastic MapReduce service's multiple-steps option. You could use one step to fetch the people a user is following, and another step to compile a list of tracks for each of those followed users, with the reducer of that second step probably being the one to build the cache.
It depends on how big your dataset is and how often you'll be running the job; it's hard to say whether it'll be cost-effective without knowing how big the dataset is (or is going to get). Initially it probably will be quite cost-effective, as you won't have to manage your own Hadoop cluster or pay for EC2 instances (assuming that's what you use) to be up all the time. Once you reach the point where you're crunching this data for long periods of time, it will probably make less and less sense to use Amazon's MapReduce service, because you'll constantly have nodes online anyway.
A job is basically your MapReduce task. It can consist of multiple steps (each MapReduce task is a step). Once your data has been processed and all the steps have been completed, your job is done. So you're effectively paying for CPU time for each node in the Hadoop cluster: T*n, where T is the time (in hours) it takes to process your data and n is the number of nodes you tell Amazon to spin up.
I hope this helps, good luck. I'd like to hear how you end up implementing your Mappers and Reducers, as I'm solving a very similar problem and I'm not sure my approach is really the best.