I am trying to run an XMPP process alongside the Django server, so I included the XMPP process in manage.py so that both of them run simultaneously. The problem is that the XMPP process runs in an infinite loop, so the Django server won't start until I break the loop, which defeats the purpose.
Is there a way to run them simultaneously?
Your problem is probably that the XMPP process expects to be the only thread in the process, and so it blocks waiting for input.
You might be able to get around the problem by creating a new thread that runs the XMPP process; see http://www.devshed.com/c/a/Python/Basic-Threading-in-Python/1/
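A minimal sketch of that idea, assuming your blocking XMPP loop is exposed as a single function (the names here are illustrative, not taken from your code):

    # manage.py (sketch) -- start the blocking XMPP loop on a daemon thread
    # so it no longer prevents the Django server from starting.
    import threading
    import time

    def run_xmpp_loop():
        # Hypothetical stand-in for your blocking XMPP processing loop.
        while True:
            time.sleep(1)

    xmpp_thread = threading.Thread(target=run_xmpp_loop, daemon=True)
    xmpp_thread.start()
    # ...then continue with Django's normal manage.py startup code.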
Be aware that there might be other interactions between the XMPP process and Django that will lead to problems, because they share the same address space.
If you just want to start some process whenever you run the Django server, see: How do I run another script in Python without waiting for it to finish?
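If you go that separate-process route, the rough shape is something like this (the script name is made up for illustration):

    # Launch the XMPP code as its own process and return immediately,
    # so manage.py can go on to start the Django server.
    import subprocess
    import sys

    xmpp_proc = subprocess.Popen([sys.executable, "run_xmpp.py"])  # hypothetical script
    # Popen does not wait; Django startup continues while run_xmpp.py runs.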
I tried asking this question a few weeks ago, but included too many unnecessary details about what I am doing. This is my attempt to see if narrowing the question can get me some useful feedback.
I am running a number of servers via FastAPI and uvicorn, and in many cases I use the @app.on_event("shutdown") decorator for some code which should automatically run when the server is shut down. Initially, I was calling each of these servers in its own terminal, and pressing Ctrl-C in that terminal was sufficient to reliably shut down the server and call the shutdown code. However, that is no longer practical for me to do, so I am wondering what other signals or procedures can be used to trigger the code under @app.on_event("shutdown") in FastAPI with uvicorn. So far, I have confirmed that sending terminate or kill signals to the uvicorn process does not cause that code to be triggered on shutdown, and I am not sure what else would do the job. So, for other users of FastAPI and uvicorn: what, other than Ctrl-C, do you use to kill your servers?
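For reference, a minimal version of the setup I am describing looks roughly like this (the handler body is just illustrative):

    # main.py -- run with: uvicorn main:app
    from fastapi import FastAPI

    app = FastAPI()

    @app.on_event("shutdown")
    def on_shutdown():
        # Cleanup that should run automatically when the server shuts down.
        print("running shutdown cleanup")

    @app.get("/")
    def read_root():
        return {"status": "ok"}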
I'm trying to implement a simple scheduler on a machine that I share with my colleague. The idea is to run a process in the background as a server, identified by its PID. I can submit the program I want to run to this server process, say from a different bash terminal, and let the server process schedule the job based on the availability of the hardware resources. The submit program should be able to lock some content in memory and communicate with the server.
I was trying to use the Python multiprocessing or subprocess module to do this, but I don't have a clear idea of how it should be done. Any help would be appreciated.
I think a cron job or Celery would be a better choice for your use case. Just create a Celery task and delay its execution according to your needs.
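A rough sketch of the Celery route, assuming a Redis broker on localhost (the broker URL and task body are placeholders):

    # tasks.py -- a minimal Celery setup; start a worker with: celery -A tasks worker
    import subprocess

    from celery import Celery

    app = Celery("scheduler", broker="redis://localhost:6379/0")

    @app.task
    def run_job(command):
        # Run the submitted program; the Celery worker plays the role of the
        # background "server" process.
        subprocess.run(command, check=True)

    # From another shell or Python session, submit a job and delay it 60 seconds:
    #   run_job.apply_async(args=[["./my_program"]], countdown=60)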
Updated post:
I have a python web application running on a port. It is used to monitor some other processes, and one of its features is to allow users to restart their own processes. The restart is done by invoking a bash script, which restarts those processes and runs them in the background.
The problem is, whenever I kill off the python web application after I have used it to restart any user's processes, those processes take over the port used by the python web application in a round-robin fashion, so I am unable to restart the python web application because the port is already bound. As a result, I must kill off the processes involved in the restart until nothing occupies the port the python web application uses.
Everything is ok except for those processes occupying the port. That is really undesirable.
Processes that may be restarted:
redis-server
newrelic-admin run-program (which spawns another web application)
a python worker process
UPDATE (6 June 2013): I have managed to solve this problem. Look at my answer below.
Original Post:
I have a python web application running on a port. This python program has a function that calls a bash script. The bash script spawns a few background processes, then exits.
The problem is, whenever I kill the python program, the background processes spawned by the bash script will take over and occupy that same port.
Specifically the subprocesses are:
a redis server (with daemonize = true in the configuration file)
newrelic-admin run-program (spawns a web application)
a python worker process
Update 2: I've tried running these with nohup. Only the python worker process doesn't attempt to take over the port after I kill the python web application. The redis server and newrelic-admin still do.
I observed this problem when I was using subprocess.call in the python program to run the bash script. I've tried a double fork method in the python program before running the bash script, but it results in the same problem.
How can I prevent any processes spawned from the bash script from taking over the port?
Thank you.
Update: My intention is that those processes spawned by the bash script should continue running if the python application is killed off. Currently, they do continue running after I kill off the python application. The problem is that when I kill off the python application, the processes spawned by the bash script start to take over the port in a round-robin fashion.
Update 3: Based on the output I see from 'pstree' and 'ps -axf', processes 1 and 2 (the redis server and the web app spawned by newrelic-admin run-program) are not child processes of the python web application. This makes it even weirder that they take over the port that the python web application occupies when I kill it... Anyone knows why?
Just some background on the methods I've tried to solve my above problem, before I go on to the answer proper:
subprocess.call
subprocess.Popen
execve
the double fork method along with one of the above (http://code.activestate.com/recipes/278731-creating-a-daemon-the-python-way/)
By the way, none of the above worked for me. Whenever I killed off the web application that executes the bash script (which in turn spawns some background processes we shall denote as Q), the processes in Q would, in round-robin fashion, take over the port occupied by the web application, so I had to kill them one by one before I could restart my web application.
After many days of living with this problem and moving on to other parts of my project, I thought back to some random Stack Overflow posts and other articles on the Internet, and recalled my own experience of SSH'ing into a remote machine, starting a detached screen session, logging out, and logging in again some time later to discover the screen session still alive.
So I thought, hey, what the heck, nothing works so far, so I might as well try using screen to see if it can solve my problem. And to my great surprise and joy it does! So I am posting this solution hopefully to help those who are facing the same issue.
In the bash script, I simply started the processes inside named, detached screen sessions. For instance, for the redis application, I might start it like this:
screen -dmS redisScreenName redis-server redis.conf
So those processes will keep running on those detached screen sessions they were started with. In this case, I did not daemonize the redis process.
To kill the screen process, I used:
screen -S redisScreenName -X quit
However, this does not kill the redis-server. So I had to kill it separately.
Now, in the python web application, I can just use subprocess.call to execute the bash script, which will spawn detached screen sessions (using 'screen -dmS') which run the processes I want to spawn. And when I kill off the python web application, none of the spawned processes take over its port. Everything works smoothly.
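For concreteness, the calling side is nothing more than this (the script name is illustrative):

    # In the python web application (sketch): run the restart script, which
    # starts each process inside its own detached screen session (screen -dmS ...).
    import subprocess

    def restart_user_processes():
        subprocess.call(["bash", "restart.sh"])  # hypothetical script name

    # Killing the web application afterwards no longer lets those processes
    # take over its port, because they live in their own screen sessions.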
I'm coming from the PHP/Apache world, where running an application is super easy. Whenever a PHP application crashes, the Apache process running that request will stop, but the server will still be running happily and responding to other clients. Is there a way to have a Python application work in a similar way? How would I set up a WSGI server like Tornado or CherryPy so it works similarly? Also, how would I run several applications with different domains from one server?
What you are after would possibly happen anyway with WSGI servers. This is because any Python exception only affects the current request; the framework or WSGI server would catch the exception, log it, and translate it to an HTTP 500 status page. The application would still be in memory and would continue to handle future requests.
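As a small illustration of that behaviour, consider a bare-bones WSGI app served by, say, Gunicorn or mod_wsgi:

    # A request that raises is turned into an HTTP 500 by the server/framework;
    # the process stays alive and keeps serving other requests.
    def application(environ, start_response):
        if environ.get("PATH_INFO") == "/boom":
            raise RuntimeError("simulated application error")  # -> 500 response
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"still serving\n"]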
What it comes down to is what exactly you mean by 'crashes Apache process'.
It would be rare for your code to crash in the sense of causing the whole process to exit, for example due to a core dump. So you may be confusing terminology by equating an application-level (language) error with a full process crash.
Even if you did find a way to crash a process, Apache/mod_wsgi handles that okay and the process will be replaced. The Gunicorn WSGI server will also do that. CherryPy will not, unless you have a process manager running which monitors it and restarts it. Tornado in its single-process mode has the same problem. Using Tornado as the worker in Gunicorn is one way around that; plus, I believe Tornado itself may now have some process manager in it for running multiple processes, which allows it to restart processes if they die.
Do note that if the application bug which caused the Python exception is bad enough to corrupt state within the process, subsequent requests may possibly have issues. This is the one difference with PHP. With PHP, after any request, whether successful or not, the application is effectively thrown away and doesn't persist, so buggy code cannot affect subsequent requests. In Python, because the process, with its loaded code and retained state, is kept between requests, you could technically get things into a state where you would have to restart the process to fix it. I don't know of any WSGI server, though, that has a mechanism to automatically restart a process if one request returned an error response.
If you're in a UNIX-like environment, you can run mod_wsgi under Apache in daemon mode. This means there will be a separate process for the Python code, and even if it crashes the server will continue running normally (and hopefully the WSGI process will restart itself). A WSGI application can run under multiple processes and multiple threads per process.
As for running multiple domains in the same server, check Name-Based Virtual Hosts.
Question pretty much says it all. If I am running Tornado on a server with Supervisor, what happens to active requests when I deploy code and need to restart the Tornado server? Are they dropped mid-request? Are they allowed to finish?
Supervisord sends a signal like HUP or TERM to the Tornado process; the most important point is how Tornado deals with it.
Unfortunately, Tornado will simply exit when it gets a signal like HUP, TERM, or INT.
Tornado has a submodule named autoreload, which lets the application detect changes to its code files and reload itself, but it only works in debug mode with a single process, and not in WSGI applications. It is a development tool.
However, we can define a function that calls the tornado.autoreload._reload function manually and register it as the handler for the HUP signal. tornado.autoreload.add_reload_hook can register functions that should be called on reload.
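A rough sketch of that idea is below. Note that _reload is a private Tornado helper, so treat this as an assumption to verify against your Tornado version rather than a supported API:

    import signal

    import tornado.autoreload
    import tornado.ioloop
    import tornado.web

    def handle_sighup(signum, frame):
        # Re-exec the process the same way autoreload does in debug mode.
        tornado.autoreload._reload()

    # Optional cleanup to run just before the reload happens.
    tornado.autoreload.add_reload_hook(lambda: print("reloading..."))

    signal.signal(signal.SIGHUP, handle_sighup)

    if __name__ == "__main__":
        app = tornado.web.Application([])
        app.listen(8888)
        tornado.ioloop.IOLoop.current().start()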
Because Tornado doesn't manage its processes well in fork mode, it is suggested to run many independent processes on different ports. In that mode, _reload will work just as it does when the debug flag is set.
After all that, test and benchmark it to make sure it works well in your application.