I am using uWSGI for a RESTful server.
I type this in the shell: nohup /home/ubuntu/anaconda3/envs/py37/bin/uwsgi --ini uwsgi.ini &
OK, now it keeps running after the shell is closed.
However, is this OK for long-term use?
Normally I use Apache + uWSGI as a daemon for a web server, but what I need now is just an internal RESTful server.
It looks like it satisfies my purpose, but is there any problem with continuing to use it this way?
I want to know the difference between a daemon and a shell command.
I have my web application built in Python. I wish to have an SSL certificate installed. How would I know which web server is running on my server, nginx or Apache? Currently, if I stop Apache or nginx, the server does not shut down; I am able to see the running application even when the web server is down.
Use netstat in this case.
This will allow you to see what exactly is listening on the TCP port.
If you happen to be using Windows:
netstat -a -n -b
Linux:
sudo netstat -ltnp
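As a sketch, you can wrap the Linux check in a small helper that reports what is bound to a given port (this assumes ss from iproute2 is available; the function name is my own):

```shell
# whats_on_port: print the listener line(s) for a given TCP port, if any.
# Uses ss (the modern netstat replacement); run under sudo to see
# process names owned by other users.
whats_on_port() {
  ss -ltnp 2>/dev/null | grep ":$1 "
}

# usage: whats_on_port 80 || echo "nothing listening on port 80"
```

If the output mentions nginx, that is what holds the port; if it mentions apache2/httpd, Apache does.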
I have a Django AWS server that I need to keep running over the weekend for it to be graded. I typically start it from an SSH using PuTTY with:
python manage.py runserver 0.0.0.0:8000
I was originally thinking of making a bash script to do the task of starting the server, monitoring it, and restarting it when needed using the following steps, but was told it wasn't going to work. Why?
1) Start the server using python manage.py runserver 0.0.0.0:8000 & to send it to the background
2) After <some integer length 'x'> minutes of sleeping, check if the server isn't up using ss -tulw and grep the result for the port the server should be running on.
3) Based on the result from step (2), we either need to sleep for 'x' minutes again, or restart the server (and possibly fully-stop anything left running beforehand).
Originally, I thought it was a pretty decent idea, as we can't always be monitoring the server.
EDIT: I checked that ss -tulw | grep 8000 correctly finds the server while it is running.
If I understand you correctly, this is a non-production Django app. You could run it with Django's development server via python manage.py runserver 0.0.0.0:8000, as you did.
Things like monit (https://mmonit.com/monit/) or supervisord (http://supervisord.org/) are meant to do what you described - monitoring a process and restarting it if necessary - but you could also just use a cron job that runs, say, every minute. In the cron job, you:
Check whether your process is still running and/or still listening on port 8000.
Abort if it is already running.
Restart it if it has stopped or is no longer listening on port 8000.
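A minimal sketch of such a cron-driven check script (the file name, port, and start command are assumptions; adapt the paths to your project):

```shell
#!/usr/bin/env bash
# check_server.sh - run from cron, e.g.:
#   * * * * * /home/ubuntu/check_server.sh
PORT="${PORT:-8000}"
START_CMD="${START_CMD:-python manage.py runserver 0.0.0.0:$PORT}"

# Is anything listening on $PORT?
if ss -ltn 2>/dev/null | grep -q ":$PORT "; then
  :  # already running - nothing to do
else
  # Not listening: (re)start detached from the terminal, logging to a file.
  nohup $START_CMD >>/tmp/django-server.log 2>&1 &
fi
```

Because cron starts the script fresh each minute, there is no long-lived monitor process to die with your SSH session.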
I am using different servers alongside the Django server, for example a MongoDB server and Celery.
I want to ask how I can execute other CMD commands automatically whenever I start
python manage.py runserver
It depends on what OS you use; on my Ubuntu machine, for local development, I do this:
Create a .sh script, for example start_project.sh, with this code:
cd /path/to/project
source /venv/bin/activate
python manage.py runserver & celery -A project worker --loglevel=debug
And then just run bash start_project.sh.
You can also add more commands to start, separated by &.
You should write a shell script that contains the commands to start each service, and then use it to get your project running. For example, here is a sample:
sudo service mongodb start
celery -A appname worker
python manage.py runserver 0.0.0.0:80 > /dev/null 2>&1 &
Since you use the term CMD, I guess you are on a Windows-based OS. I would then say that you probably have the MongoDB service installation (otherwise, reinstall MongoDB as a service).
By default it is set to autostart (changeable to non-autostart). If you changed the MongoDB service to the manual starting method, you can start it in CMD with
net start mongoDB
I do not use/know what Celery is, but a quick Google search made it sound like some sort of message queue, which in my opinion should have (or at least should offer) a service installation; in that case you should use that, and then use autostart/manual as described for MongoDB.
I'm a newcomer to web applications and AWS, so please forgive me if the answer is a bit trivial!
I'm hosting a Python web application on an AWS EC2 server using nginx + uWSGI. This all works perfectly, except that when I terminate my connection (using PuTTY), my uWSGI application stops running, producing a "502 Bad Gateway" error from nginx.
I'm aware of adding the "&" to the uWSGI startup command (below), but the process still dies after I close my connection.
uwsgi --socket 127.0.0.1:8000 --master -s /tmp/uwsgi.sock --chmod-socket=666 -w wsgi2 &
How do I persist my uWSGI application to continue hosting my web app after I log out/terminate my connection?
Thanks in advance!
You'll typically want to start uwsgi from an init script. There are several ways to do this and the exact procedure depends on the Linux distribution you're using:
SystemV init script
upstart script (Ubuntu)
supervisor (Python-based process manager)
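For example, with supervisor the program section for uWSGI might look like this (the binary path, program name, and WSGI module below are assumptions; adjust them to your setup):

```ini
; /etc/supervisor/conf.d/uwsgi.conf
[program:uwsgi]
command=/usr/local/bin/uwsgi --socket 127.0.0.1:8000 --master -w wsgi2
directory=/srv/myapp
autostart=true
autorestart=true
stopsignal=QUIT
```

supervisor then keeps the process alive across logouts and restarts it if it crashes.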
For the purposes of my use case, I will not have to reboot my machine in the near future, so Linux's nohup (no hangup) command works perfectly for this. It's a quick and dirty hack that's super powerful.
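In practice that looks like the following (the ini file and log/pid file names are examples):

```shell
# Start uwsgi immune to hangup (SIGHUP) when the terminal closes;
# output goes to a log file instead of the dead terminal.
nohup uwsgi --ini uwsgi.ini > uwsgi.log 2>&1 &

# Record the background job's PID so you can stop it later with kill.
echo $! > uwsgi.pid
```

The main caveat, as noted above, is that nothing restarts the process after a crash or a reboot; an init script or supervisor handles those cases.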
django-admin.py runserver is running while I develop. I have an open webpage connected to my SSE endpoint.
It seems that using django-sse breaks the server autoreload feature, cf. this issue.
What's worse, if I manually restart the server (Ctrl+C & django-admin.py runserver), it fails with a 'port already in use' error and I first need to ps, grep for runserver, and kill the process ID, a true PITA.
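That manual cleanup can be collapsed into a one-liner (assuming lsof is installed; fuser -k 8000/tcp is an alternative):

```shell
# Free up port 8000 by killing whatever process still holds it
# (for a dev machine only - don't do this blindly in production).
kill $(lsof -t -i :8000 2>/dev/null) 2>/dev/null || true
```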
So:
How come using persistent connections breaks my dev workflow?
Is there an easy workaround that doesn't involve patching Django?
In production I'm using a Procfile with foreman to launch gunicorn gevent workers. There, a manual restart goes fine (open connections are closed), but there's no autoreload feature, nor any log printed in the terminal.