As I can see in the top utility, the celery processes consume a lot of CPU time, so I want to profile them.
I can do it manually on a developer machine like so:
python -m cProfile -o test-`date +%Y-%m-%d-%T`.prof ./manage.py celeryd -B
But to get accurate timings I need to profile it on the production machine. On that machine (Fedora 14) celery is launched by init scripts, e.g.
service celeryd start
I have figured out that these scripts eventually call manage.py celeryd_multi. So my question is: how can I tell celeryd_multi to start celery with profiling enabled? In my case this means adding the -m cProfile -o out.prof options to python.
Any help is much appreciated.
I think you're confusing two separate issues. You could be processing too many individual tasks or an individual task could be inefficient.
You may know which of these is the problem, but it's not clear from your question which it is.
To track how many tasks are being processed I suggest you look at celerymon. If a particular task appears more often than you would expect, then you can investigate where it is getting called from.
Profiling the whole of celery is probably not helpful as you'll get lots of code that you have no control over. As you say it also means you have a problem running it in production. I suggest you look at adding the profiling code directly into your task definition.
You can use cProfile.run('func()') as a layer of indirection between celery and your code so each run of the task is profiled. If you generate a unique filename and pass it as the second parameter to run you'll have a directory full of profile data that you can inspect on a task-by-task basis, or use pstats.add to combine multiple task runs together.
Finally, per-task profiling means you can also turn profiling on or off using a setting in your project code either globally or by task, rather than needing to modify the init scripts on your server.
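A minimal sketch of that idea, where the output directory, the PROFILE_TASKS setting and the task names are illustrative and the celery import path depends on your version:

# myapp/tasks.py
import cProfile
import functools
import os
import time

from celery.task import task        # adjust this import for your Celery version
from django.conf import settings

PROFILE_DIR = "/tmp/celery-profiles"     # illustrative output location


def profiled(func):
    """Run func under cProfile and write one .prof file per call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not getattr(settings, "PROFILE_TASKS", False):   # project-level on/off switch
            return func(*args, **kwargs)
        if not os.path.isdir(PROFILE_DIR):
            os.makedirs(PROFILE_DIR)
        outfile = os.path.join(PROFILE_DIR, "%s-%d-%d.prof"
                               % (func.__name__, os.getpid(), int(time.time() * 1000)))
        profiler = cProfile.Profile()
        try:
            return profiler.runcall(func, *args, **kwargs)
        finally:
            profiler.dump_stats(outfile)
    return wrapper


@task
@profiled
def my_task(some_arg):
    pass    # the real task body goes here

The per-run .prof files can then be inspected individually, or merged into one report with pstats.Stats and its add() method.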
I am developing a Python (2.7) application which takes real-time commands and performs tasks (running other modules of the app) on command.
Right now there is only one process, which is run like python run.py. So if some task takes a little longer, say 20 seconds, the application is blocked for those 20 seconds and can't process commands that might come in before the task is finished.
I want to launch tasks on command in a way that they run in parallel and don't block subsequent commands.
What is the most elegant way to do this?
So far I have found that subprocess.Popen can be used for scripts, but I don't want to run the tasks as external commands.
I suggest Celery for this. Once you have set it up, it is very easy to integrate with your application. It will run tasks in the background, limit resource usage if necessary, and scale for demand.
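A minimal sketch of what the integration could look like (the module name, broker URL and task/function names are illustrative, do_the_work stands in for your existing module call, and the API shown is the Celery 3.x app style):

# tasks.py
from celery import Celery

app = Celery("myapp", broker="redis://localhost:6379/0")   # assumed broker


@app.task
def long_running_task(command_args):
    # runs in a worker process, so the command loop isn't blocked
    do_the_work(command_args)

Then in run.py, instead of calling the function directly, queue it with long_running_task.delay(command_args), which returns immediately.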
I have a Django app that is intended to be run on Virtualbox VMs on LANs. The basic user will be a savvy IT end-user, not a sysadmin.
Part of that app's job is to connect to external databases on the LAN, run some python batches against those databases and save the results in its local db. The user can then explore the systems using Django pages.
Run time for the batches isn't all that long: minutes, potentially tens of minutes, not seconds. Runs are infrequent; you could go days without needing a refresh.
This is not celery's normal use case of long tasks which will eventually push the results back into the web UI via ajax and/or polling. It is more similar to a dev's occasional use of the django-admin commands, but this time intended for an end user.
The user should be able to initiate a run of one or several of those batches when they want in order to refresh the calculations of a given external database (the target db is a parameter to the batch).
Until the batches are done for a given db, the app really isn't useable. You can access its pages, but many functions won't be available.
It is very important, from a support point of view, that the batches remain easily runnable at all times. Dropping down to the VM's shell over SSH would probably require frequent handholding, which wouldn't be good; it is best if they can be launched from the Django web pages.
What I currently have:
Each batch is in its own script.
I can run it on the command line (via if __name__ == "__main__":).
The batches are also hooked up as celery tasks and work fine that way.
Given the way I have written them, it would be relatively easy for me to allow running them from subprocess calls in Python. I haven't really looked into it, but I suppose I could make them into django-admin commands as well.
The batches already have their own rudimentary status checks. For example, they can look at the calculated data and tell whether they have been run and display that in Django pages without needing to look at celery task status backends.
The batches themselves are relatively robust and I can make them more so. This is about their launch mechanism.
What's not so great:
In my Mac dev environment I find the celery/celerycam/rabbitmq stack to be somewhat unstable. It seems as if sometimes the rabbitmq daemon balloons in CPU/RAM use and then needs to be terminated. That mightily confuses the celery processes and I find I have to kill -9 various tasks and relaunch them manually. Sometimes celery still works but celerycam doesn't, so there are no task updates. Some of these issues may be OS X specific or may be due to the DEBUG flag being switched on for now, which celery warns about.
So then I need to run the batches on the command line, which is what I was trying to avoid, until the whole celery stack has been reset.
This might be acceptable on a normal website, with an admin watching over it. But I can't have that happen on a remote VM to which only the user has access.
Given that these are somewhat fire-and-forget batches, I am wondering if celery isn't overkill at this point.
Some options I have thought about:
Writing a cleanup shell/Python script to restart rabbitmq/celery/celerycam and generally make the stack more robust, i.e. whatever is required to make celery and friends more stable. I've already used psutil to figure out whether the rabbit/celery processes are running and to display their status in Django.
Running the batches via subprocess instead and avoiding celery (see the sketch just after this list). What about django-admin commands here? Does that make a difference? They still need to be run from the web pages.
An alternative task/process manager to celery, with less capability but also fewer moving parts?
Not using subprocess but relying on Python's multiprocessing module? To be honest, I have no idea how that compares to launching via subprocess.
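As a sketch of what I mean by the subprocess route (the management command name, log path and arguments are placeholders), a view could fire a batch off without waiting for it:

import subprocess
import sys

def launch_batch(db_alias):
    # detach the batch from the request/response cycle; output goes to a log file
    log = open("/var/log/myapp/batch.log", "a")
    subprocess.Popen(
        [sys.executable, "/path/to/project/manage.py", "run_batch", db_alias],
        stdout=log, stderr=subprocess.STDOUT)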
environment:
nginx, wsgi, ubuntu on virtualbox, chef to build VMs.
I'm not sure how your celery configuration makes it unstable, but it sounds like celery is still the best fit for your problem. I'm using redis as the queue system, and in my experience it works better than rabbitmq. Maybe you can try it and see if it improves things.
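For what it's worth, switching is mostly a settings change (setting names vary between Celery versions; these are the Celery 3.x style ones):

# settings.py -- assuming a Redis instance on localhost
BROKER_URL = "redis://localhost:6379/0"
CELERY_RESULT_BACKEND = "redis://localhost:6379/1"   # only needed if you use task results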
Otherwise, just use cron as a driver for periodic tasks. Let it run your script periodically and update the database; your UI components can then poll the database without conflict.
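For example, assuming the batch is exposed as a management command called refresh_external_db (a hypothetical name), the crontab entry could be something like:

# refresh every night at 02:00; paths are illustrative
0 2 * * * /path/to/venv/bin/python /path/to/project/manage.py refresh_external_db target_db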
I have this Django cron job script (I am using kronos for that, which is great).
Since I trigger this job every minute, I want to make sure that there isn't another instance of the script already running. If there is a previous job running, then I want to skip the current execution.
I know I can do that with a lock file, but that's not very reliable and can cause problems when you reboot in the middle of an execution (you have to clear the lock file), etc.
What's the best way to do this using Python (Django in this case)?
EDIT: I am targeting Linux, sorry for leaving this out.
There's a Django app here: https://github.com/jsocol/django-cronjobs
Also available on pip as cronjobs.
Just in case you go straight to pip: you register jobs using the decorator, like so:
# myapp/cron.py
import cronjobs

@cronjobs.register
def periodic_task():
    pass
Then run the command via:
$ ./manage.py cron periodic_task
It has job locking by default but you can disable it when you apply the decorator:
@register(lock=False)
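If you would rather not add another app, an alternative on Linux is a non-blocking fcntl.flock guard inside the kronos-scheduled function; the kernel releases the lock when the process exits (or the machine reboots), so there is no stale lock file to clean up. A rough sketch, with the lock path and function names as placeholders:

import fcntl

def my_cron_job():
    lock = open("/tmp/my_cron_job.lock", "w")    # placeholder path
    try:
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError:
        # a previous run still holds the lock; skip this execution
        return
    do_the_actual_work()                         # placeholder for the real job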
I'm new to Python (and relatively new to programming in general) and I have created a small Python script that scrapes some data off of a site once a week and stores it in a local database (I'm trying to do some statistical analysis on downloaded music). I've tested it on my Mac and would like to put it up onto my server (a VPS with WiredTree running CentOS 5), but I have no idea where to start.
I tried Googling for it, but apparently I'm using the wrong terms, as searches for "deploying" turn up results about creating an executable file. The only thing that seems to make sense is to set it up inside Django, but I think that might be overkill. I don't know...
You should look into cron for this, which will allow you to schedule the execution of your Python script.
If you aren't sure how to make your Python script executable, add a shebang to the top of the script, and then add execute permissions to the script using chmod.
Copy the script to the server.
Test the script manually on the server.
Set up cron ("crontab -e") with a schedule that will trigger it soon, so you can test it.
Once you've debugged any issues, set the cron schedule to the appropriate time.
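For example (paths and the schedule are placeholders), the first line of the script would be:

#!/usr/bin/env python

then make it executable and add a crontab entry:

$ chmod +x /home/user/scraper/scrape.py

# crontab -e entry: run every Monday at 03:00 and log the output
0 3 * * 1 /home/user/scraper/scrape.py >> /home/user/scraper/scrape.log 2>&1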
Sounds like a job for Cron?
Cron is a scheduler that provides a way to run certain scripts (apps, etc.) at certain times.
Here is a short tutorial that explains how to set up cron.
See this for more general cron information.
Edit:
Also, since you are using CentOS: if you end up having issues with your script later on... it could partly be caused by SELinux. There are ways to disable SELinux on your server (if you have enough access permissions.) But... there are arguments against disabling SELinux, as well.
I have a Python process (a Pylons webapp) that is constantly using 10-30% of CPU. I'll improve/tune logging to get some insight into what's going on, but until then, are there any tools/techniques that let me see what the Python process is doing, how many threads it has and how busy they are, etc.?
Update:
Configured the access log, which shows that there are no requests going on; the webapp is just idling.
There is no point plugging paste.profile into the middleware chain since there are no requests; the activity must be happening either in the webapp's worker threads or in the paster web server.
Running paster like this: "python -m cProfile -o outfile /usr/bin/paster serve dev.ini" and inspecting the results shows that most time is spent in "posix.waitpid". Paster runs the webapp in a subprocess, and subprocess activity is not picked up by the profiler.
Looking into hacking the PasteScript "serve" command so that subprocesses get profiled.
Another update:
After much tinkering, sticking the profiler in various places and getting familiar with PasteScript's insides, I discovered that the constant CPU load goes away if the application is started without the "--reload" parameter (this flag tells paster to restart itself if code changes, which is handy in development), which is fine in a production environment.
Profiling might help you learn a bit about what it's doing. If you sort the output by "time" you will see which functions are chewing up CPU time, which should give you some good hints.
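For example, given a dump written with cProfile's -o option, pstats does the sorting (a minimal sketch; "outfile" is the dump file name):

import pstats

stats = pstats.Stats("outfile")                 # the file written by cProfile -o
stats.sort_stats("time").print_stats(20)        # top 20 functions by internal time
stats.sort_stats("cumulative").print_stats(20)  # or by cumulative time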
As you noted, in --reload mode, Paste sweeps the filesystem every second to see if any of the files loaded have changed. If they have, then Paste reloads the process. You can also manually tell Paste to monitor non-Python code modules for changes if desired.
You can change the reload interval with the --reload-interval option; this will reduce the CPU usage when using --reload, as it will sweep less often.
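For example (10 seconds is just an illustrative value):

paster serve --reload --reload-interval=10 dev.ini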