How to start other Servers Automatically after Django Server starts - python

I am using other servers alongside the Django server, for example a MongoDB server and Celery (started with a command).
I want to ask how I can execute those other CMD commands automatically whenever I start
python manage.py runserver

It depends on what OS you use. On my Ubuntu machine, for local development I do this:
Create a .sh script, for example start_project.sh, with this code:
cd /path/to/project
source /venv/bin/activate
python manage.py runserver & celery -A project worker --loglevel=debug
And then just run bash start_project.sh
You can also add more commands to start, separated by &.
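Since the question also mentions MongoDB, here is a slightly extended sketch of the same idea (the service name and paths are assumptions, adjust them to your setup):
#!/usr/bin/env bash
# start_project.sh -- sketch only: start MongoDB, Celery and the Django dev server together
cd /path/to/project
source /venv/bin/activate
# make sure MongoDB is up (assumes it is installed as a system service)
sudo service mongodb start
# start the Celery worker in the background, then the dev server in the foreground
celery -A project worker --loglevel=debug &
python manage.py runserver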

You should write a shell script that contains the commands to start each service and then use it to get your project running. For example, here is a sample:
sudo service mongodb start
celery -A appname.celery worker
python manage.py runserver 0.0.0.0:80 > /dev/null 2>&1 &

Since you use the term CMD, I guess you are on a Windows-based OS. In that case you probably have the MongoDB service installation (otherwise reinstall MongoDB as a service).
By default it is set to autostart (changeable to non-autostart). If you changed the MongoDB service to the manual start method, you can start it from CMD with
net start mongoDB
I don't use Celery myself, but a quick Google search made it sound like some sort of message queue, which in my opinion should be, or at least should have, a service installation. In that case you should use that and then choose autostart or manual start as described for MongoDB.

Related

When running Python Flask Server, Jenkins is not successful and continues to load

I am building CI/CD through Jenkins, but there are problems.
The plan is to upload the source code first and then start the Flask server through a batch file.
I wrote a shell script for Jenkins' Build > Execute Shell step:
postCommand=/cygdrive/c/workspace/ContactPortal_Flask/run.bat
sshpass -p ${deployPassword} ssh -o StrictHostKeyChecking=no ${deployUser}@${deployServer} ${postCommand}
Here is the run.bat file:
set FLASK_ENV=development
set path=%path%;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\develop\instantclient_12_1;C:\develop\Anaconda3;C:\develop\Anaconda3\Library\mingw-w64\bin;C:\develop\Anaconda3\Library\usr\bin;C:\develop\Anaconda3\Library\bin;C:\develop\Anaconda3\Scripts;
set "START=C:\workspace\ContactPortal_Flask\start.bat"
cd C:\workspace\ContactPortal_Flask
python -m flask run
The source code upload was successful, and starting the Flask server was also successful, but Jenkins was never marked as Success and continued to load.
Please help!
I think the main problem here is that python -m flask run starts the server and will not return until the user hits Ctrl+C.
Since the target system is Windows, you may want to create a custom service and have Jenkins start that service at the end instead. For service creation, see https://learn.microsoft.com/en-us/troubleshoot/windows-client/deployment/create-user-defined-service. By starting this service (e.g. with NET START <service-name>), Jenkins can finish, and Flask can keep running in the background.
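As a rough sketch of that idea, the Execute Shell step could start such a service instead of calling run.bat directly (ContactPortalFlask is a placeholder for whatever service name you register):
# sketch only: start a pre-registered Windows service instead of blocking on flask run
postCommand="net start ContactPortalFlask"
sshpass -p ${deployPassword} ssh -o StrictHostKeyChecking=no ${deployUser}@${deployServer} "${postCommand}"
# ssh returns as soon as NET START finishes, so the Jenkins build can complete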
Also, for a production system you may want to pick a proper web server instead of using the built-in web server provided by Flask.

BASH: Monitor/maintain a server

I have a Django server on AWS that I need to keep running over the weekend for it to be graded. I typically start it from an SSH session (using PuTTY) with:
python manage.py runserver 0.0.0.0:8000
I was originally thinking of making a bash script to start the server, monitor it, and restart it when needed, using the approach below, but was told it wasn't going to work. Why?
1) Start the server using python manage.py runserver 0.0.0.0:8000 & to send it to the background
2) After <some integer length 'x'> minutes of sleeping, check whether the server is up using ss -tulw and grep the result for the port the server should be running on.
3) Based on the result from step (2), we either need to sleep for 'x' minutes again, or restart the server (and possibly fully-stop anything left running beforehand).
Originally, I thought it was a pretty decent idea, as we can't always be monitoring the server.
EDIT: I checked that ss -tulw | grep 8000 correctly finds the server while it is running.
If I understand you correctly, this is a non-production Django app. You could run it using Django's development server with python manage.py runserver 0.0.0.0:8000, as you did.
Tools like monit (https://mmonit.com/monit/) or supervisord (http://supervisord.org/) are meant to do what you described: monitor a process and restart it if necessary. But you could also just use a cron job that runs, say, every minute. In the cron job, you:
Check whether your process is still running and/or still listening on port 8000.
Abort if already running.
Restart if stopped or not listening to port 8000.
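For example, a small check script along these lines could be run from cron every minute (paths are illustrative):
#!/usr/bin/env bash
# check_django.sh -- sketch only: restart the dev server if nothing is listening on port 8000
# crontab entry (crontab -e):  * * * * * /bin/bash /path/to/check_django.sh
cd /path/to/project
source /path/to/venv/bin/activate   # if you use a virtualenv
if ! ss -tulw | grep -q 8000; then
    # nothing is listening on 8000: (re)start the server in the background and log its output
    nohup python manage.py runserver 0.0.0.0:8000 >> /tmp/runserver.log 2>&1 &
fi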

Django: Error: You don't have permission to access that port

I'm very new to this whole setup so please be nice.
On dev the command usually works with no errors, but since I have been experimenting with different commands for Django, something has gone wrong.
python manage.py runserver 0.0.0.0:80
I don't have permission to use this port anymore. I can use port 8080, but the website doesn't work when I add the port to the end of the usual host name in the URL. When I used port 80 I never had to add :80 to the URL anyway.
I had an error where I didn't have permissions to the log file, but I changed the permissions on that file. It seems there are now many things I don't have permissions for.
Django 1.8.5.
I'm using a virtual environment and have 2 apps in the project.
If you're on Linux, you'll get this error because unprivileged users aren't allowed to bind to ports below 1024.
First and foremost, Django does not ship a production server, just a very basic development server, which uses port 8000 by default.
When you execute the command
python manage.py runserver
you tell Django to start its development server, so you can test your web app before deploying it to a production server.
Django Documentation -> django-admin -> Run Server
The way to access the server is to use your browser and plug the URL into the address bar, like so:
localhost:8000 (or localhost:8080 if you started the server on that port)
By default, most HTTP applications run on port 80 unless otherwise stated. Other services have their own defaults; for example, your MySQL server could run on port 3306.
Basically, you can think of ports as old-school telephone lines that connect you to whomever you're looking to communicate with.
There's nothing really special about any of this. You should probably play with bottle to get the basics down first; just a friendly suggestion.
You can dig into the details on the website. You can use sudo to run on port 80, but for security reasons you should avoid it.
@mtt2p mentions a Server Fault post that does a great job of explaining why.
I'm sure there's a way to tell the server to allow only local connections, but you should only use 0.0.0.0:80 when you want to show off your work to other people or see what your web app looks like on other devices.
In the long run, sudo is just easier and quicker, but lazy and insecure.
This is a link that explains it in the context of a virtualenv.
Django runserver error when specifying port
The answer states
I guess the sudo command will run the process in the superuser context, and the superuser context lacks the virtualenv settings. Make a shell script to set up the virtualenv and call manage.py runserver, then sudo this script instead.
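For reference, a minimal sketch of the wrapper script that quoted answer describes (paths are hypothetical):
#!/usr/bin/env bash
# runserver80.sh -- sketch of the wrapper suggested above; adjust the paths to your project
source /home/you/.virtualenvs/myproject/bin/activate
cd /home/you/myproject
exec python manage.py runserver 0.0.0.0:80
You would then launch it with sudo bash runserver80.sh.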
You should note that the answer explaining a virtualenv based context is also insecure. It should just be run as
sudo python manage.py runserver 80
not
sudo bash script-name
outside of a virtualenv. Doing so defeats the purpose of sand-boxing your application. If you ignore this, you'll be exposing yourself to a race condition.
I am on Xubuntu 20.04 and I use this command (because I have a Python env):
$ sudo ~/.virtualenvs/myproject/bin/python manage.py runserver 0.0.0.0:80
And to find out where your env's Python folder is, I did:
$ which python
sudo python manage.py runserver 0.0.0.0:80
you need admin rights for port 80
I set up a virtualenv called "aira" and installed virtualenvwrapper in the root environment (my virtualenvwrapper settings in /root/.bashrc are at the bottom). This reduces the number of commands I need to chain together with sh -c to get runserver working:
sudo sh -c "workon aira && python manage.py runserver --insecure 0.0.0.0:80"
If you've set up your Django app's virtualenv without virtualenvwrapper, you'll need to manually change to the correct directory and activate your virtualenv within the sudo command sequence. My virtualenv is called aira and I keep my virtualenvs in /root/.virtualenvs. My Django project is in the ubuntu user's home directory:
sudo sh -c "source $HOME/.virtualenvs/aira/bin/activate && cd /home/ubuntu/src/aira/ && python manage.py runserver --insecure 0.0.0.0:80"
If you've installed django and your requirements.txt in the system site packages then you can use sudo to runserver.
sudo python manage.py runserver --insecure 0.0.0.0:80
The --insecure option allows staticfiles to serve your static assets (images, css, javascript).
For completeness, here're my virtualenvwrapper configuration variables in /root/.bashrc on Ubuntu 16.04:
# python3 is used for virtualenv and virtualenvwrapper
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
# *root* reuses the virtualenvs in *ubuntu*'s home directory
export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=/home/ubuntu/src
source /usr/local/bin/virtualenvwrapper.sh
sudo ./venv/bin/python manage.py runserver 80

Celery worker working from command line but not as daemon, in a virtualenv

System Info
Ubuntu 12.04 LTS
Django 1.5.5
Python 2.7.3
Celery 3.1.9
I am running this on a vagrant virtual machine (with puppet) and attempting to set up celery to run the worker as a daemon as described in the celery docs here as well as the celery setup for django described here. I am using a virtualenv for the project located at
/home/vagrant/virtualenvs/myproj
The actual project files are located at
/srv/myproj
I have been able to start the worker and the beat scheduler without issue when located in the /srv/myproj directory, using these command line statements:
~/virtualenvs/myproj/bin/celery -A app beat
~/virtualenvs/myproj/bin/celery worker -A app
Both beat and the worker start without issue, and the scheduled task is passed to the worker and executed. The problem arises when I attempt to run them as background processes. I am using the scripts found in the celery GitHub repo in /etc/init.d/ and the following configuration settings in my celeryd and celerybeat files located in /etc/default:
CELERY_BIN="/home/vagrant/virtualenvs/myproj/bin/celery"
CELERYD_CHDIR="/srv/myproj"
Attempting to run the services as sudo with
sudo service celeryd start
sudo service celerybeat start
causes an error message to be thrown. I believe this is because it is using the Python located in /usr/lib instead of the Python in the virtualenv. The error thrown is a "cannot import name" error (the package exists in the virtualenv but not globally, hence my assumption).
I also noticed that the "Running the worker as a daemon" documentation states that workers should run as unprivileged users, and that you should start workers and beat using the multi or --detach command. This way I was able to start the worker (but not beat), but all the .log and .pid files are created in my current directory instead of where I've specified in the /etc/default/celeryd config file.
Does anyone have a solution for getting celery to work in a virtualenv? I feel like I'm really close and am overlooking some simple part of the configuration.
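For what it's worth, the generic init scripts also read the unprivileged user and the log/PID locations from the same /etc/default/celeryd file, so a fuller version might look like this (user and paths are illustrative):
# /etc/default/celeryd -- sketch only
CELERY_BIN="/home/vagrant/virtualenvs/myproj/bin/celery"
CELERY_APP="app"
CELERYD_CHDIR="/srv/myproj"
CELERYD_NODES="worker1"
CELERYD_USER="vagrant"
CELERYD_GROUP="vagrant"
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_OPTS="--loglevel=INFO"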
I was eventually able to get this working by using supervisor and setting the environment variables in the [program:celery] environment option.
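A rough sketch of what that supervisor setup can look like (the program name, paths, user and settings module are assumptions):
# write a supervisor program definition and reload supervisor (sketch only)
sudo tee /etc/supervisor/conf.d/celery.conf > /dev/null <<'EOF'
[program:celery]
command=/home/vagrant/virtualenvs/myproj/bin/celery worker -A app --loglevel=INFO
directory=/srv/myproj
user=vagrant
autostart=true
autorestart=true
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.err
environment=DJANGO_SETTINGS_MODULE="myproj.settings"
EOF
sudo supervisorctl reread
sudo supervisorctl update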

How do I run Django as a service?

I am having difficulty running Django on my Ubuntu server. I am able to run Django but I don't know how to run it as a service.
Distributor ID: Ubuntu
Description: Ubuntu 10.10
Release: 10.10
Codename: maverick
Here is what I am doing:
I log onto my Ubuntu server
Start my Django process: sudo ./manage.py runserver 0.0.0.0:80 &
Test: Traffic passes and the app displays the right page.
Now I close my terminal window and it all stops. I think I need to run it as a service somehow, but I can't figure out how to do that.
How do I keep my Django process running on port 80 even when I'm not logged in?
Also, I get that I should be linking it through Apache, but I'm not ready for that yet.
Don't use manage.py runserver to run your server on port 80, not even for development. If you need that for your development environment, it's still better to redirect traffic from port 8000 to 80 through iptables than to run your Django application as root.
In the Django documentation (or in other answers to this post) you can find out how to run it with a real web server.
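A minimal sketch of that iptables redirect (assuming IPv4 and that the dev server listens on port 8000):
# redirect incoming traffic on port 80 to 8000 so the dev server can stay unprivileged
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8000
# then run the server as a normal user
python manage.py runserver 0.0.0.0:8000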
If, for any other reason, you need a process to keep running in the background after you close your terminal, you can't just run it with &, because it will run in the background but keep your session's session id and will be closed when the session leader (your terminal) is terminated.
You can circumvent this behaviour by running the process through the setsid utility. See the manpage for setsid for more details.
Anyway, if after reading other comments, you still want to use the process with manage.py, just add "nohup" before your command line:
sudo nohup /home/ubuntu/django_projects/myproject/manage.py runserver 0.0.0.0:80 &
For this kind of job, since you're on Ubuntu, you should use the awesome Ubuntu upstart.
Just specify a file, e.g. django-fcgi, in case you're going to deploy Django with FastCGI:
/etc/init/django-fcgi.conf
and put the required upstart syntax instructions.
Then you would be able to start and stop your runserver command simply with:
start runserver
and
stop runserver
Examples of managing the deployment of Django processes with Upstart: here and here. I found those two links helpful when setting up this deployment structure myself.
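As a very rough sketch, such a job file could look like this (the job name, paths and port are assumptions; it uses the dev server only because the question does):
# create /etc/init/runserver.conf so that "start runserver" / "stop runserver" work
sudo tee /etc/init/runserver.conf > /dev/null <<'EOF'
description "Django dev server"
start on runlevel [2345]
stop on runlevel [016]
respawn
script
    cd /home/ubuntu/django_projects/myproject
    exec python manage.py runserver 0.0.0.0:80
end script
EOF
sudo start runserver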
The problem is that & runs a program in the background but does not separate it from the spawning process. However, an additional issue is that you are running the development server, which is only for testing purposes and should not be used for a production environment.
Use gunicorn or Apache with mod_wsgi. The documentation for Django and these projects should make it explicit how to serve it properly.
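For example, once gunicorn is installed in your environment, serving the project through its WSGI module is roughly (myproject.wsgi is a placeholder for your actual package):
pip install gunicorn
# point gunicorn at your project's wsgi.py module
gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 --workers 3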
If you just want a really quick-and-dirty way to run your django dev server on port 80 and leave it there -- which is not something I recommend -- you could potentially run it in a screen. screen will create a terminal that will not close even if you close your connection. You can even run it in the foreground of a screen terminal and disconnect, leaving it to run until reboot.
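A quick sketch of that screen workflow:
screen -S django                        # open a named screen session
sudo ./manage.py runserver 0.0.0.0:80   # start the dev server inside it (your usual command)
# detach with Ctrl+A then D; the session and the server keep running after you disconnect
screen -r django                        # reattach to the session later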
If you are using virtualenv, the sudo command will execute the manage.py runserver command outside of the virtual environment context, and you'll get all kinds of errors.
To fix that, I did the following:
while working on the virtual env type:
which python
outputs: /home/oleg/.virtualenvs/openmuni/bin/python
then type:
sudo !!
outputs: /usr/bin/python
Then all that's left to do is create a symbolic link from the global Python to the Python in the virtualenv that you currently use and would like to run on 0.0.0.0:80.
First move the global Python binary to a backup location:
mv /usr/bin/python /usr/bin/python.old
Then create the link; that should do it:
ln -s /home/oleg/.virtualenvs/openmuni/bin/python /usr/bin/python
That's it! Now you can run sudo python manage.py runserver 0.0.0.0:80 in the virtualenv context.
Keep in mind that if you are using a Postgres DB in your local development setup, you'll probably need a root role.
Credit to @ydaniv
