I'm trying to set up my first web server using the combination of Flask, uWSGI, and nginx. I've had some success getting the Flask and uWSGI components running, and I've gotten many tips from various blogs on how to set this up. However, there is no consistency: the articles suggest many different approaches, especially where folder structures, nginx configuration, and users/permissions are concerned (I've tried some of these suggestions and many do work, but I am not sure which is best). So is there one basic "best practice" way of setting up this stack?
nginx + uwsgi + flask make for a potent stack! I add supervisor to the mix and configure it as follows.
Run both uwsgi and nginx out of supervisor for better process control. You can then start supervisor at boot and it will run uwsgi and nginx in the right order. It will also intelligently try to keep them alive if they die. See a sample supervisor configuration below.
If you are running nginx and uwsgi on the same host, use unix sockets rather than HTTP.
The nginx master process must run as root if your web server is listening on port 80. I usually run my web server on some other port (like 8080) and use a load balancer in front to listen on port 80 and proxy to nginx.
Make sure your uwsgi server has access to read/write to the socket file you choose as well as proper permissions to any app code and data directories.
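For example, uwsgi can fix up the socket's ownership and mode itself at startup. A minimal sketch, assuming your nginx workers run as www-data and reusing the app names from the sample config below:

uwsgi --socket /tmp/app.sock --chown-socket www-data:www-data --chmod-socket=660 --master -w mypython:app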
Don't worry too much about your folder structure, especially if you're using a Linux distro like Ubuntu that has sensible defaults. The main supervisor config file can include files from a subdirectory like /etc/supervisor/conf.d/ to separate your app-specific configuration from the supervisor core config. The same goes for nginx, which includes site files from /etc/nginx/sites-enabled/.
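For reference, a minimal sketch of that include section in the main supervisor config (this matches the stock layout on Debian/Ubuntu):

[include]
files = /etc/supervisor/conf.d/*.conf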
Sample supervisor config for uwsgi and nginx:
$ cat /etc/supervisor/conf.d/app.conf

[program:app]
command=/usr/local/bin/uwsgi
    --master
    --enable-threads
    --single-interpreter
    --vacuum
    --chdir /path/to/app
    --uid www-data
    --log-syslog
    --processes 4
    --socket /tmp/app.sock
    -w mypython:app
directory=/path/to/app
autostart=true
autorestart=true
priority=999
stopsignal=INT
environment=SOMEVAR=somevalue

[program:nginx]
; supervisor needs its children to stay in the foreground,
; so disable nginx's self-daemonization
command=/usr/sbin/nginx -g "daemon off;"
autostart=true
autorestart=true
priority=999
Sample nginx site config:
$ cat /etc/nginx/sites-enabled/myapp.conf

server {
    listen 8080;
    client_max_body_size 4G;
    server_name localhost;
    keepalive_timeout 5;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:///tmp/app.sock;
    }
}
There are two parts to this, one is setting up the system itself (by this I mean, the operating system and its various path/filesystems) and the second part is installing and configuring the components.
I will concentrate on the second part, which I believe is the crux of your question:
nginx should be installed by your operating system's native package management utilities. This will make sure that all permissions are set correctly, and the configuration files are where you (or anyone else) would expect them. For example this means on debian-like systems (such as ubuntu and its various cousins), the configurations are in /etc/nginx/, sites are configured by adding files in /etc/nginx/sites-available/ and so on. It also means that if and when updates are pushed by your OS vendor, they will automatically be installed by your packaging software.
uWSGI should be installed from source, because it has a very fast development cycle and improvements in uwsgi will have a positive effect on your application. The installation process is simple, and no special permissions are required beyond the normal root/superuser permissions you would need to install applications system-wide.
Your application's source files. For this, I would strongly advise creating a separate user account for each application and isolating all permissions and related files (for example, log files generated by uwsgi) so that they are all owned by that user. This ensures that other applications/users cannot read error messages or log files, and that one user has all the permissions needed to read and debug everything related to that application without resorting to tools such as sudo.
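A minimal sketch of that isolation, assuming an app called myapp living under /srv/myapp (both names are illustrative):

# create a dedicated system user with no login shell
sudo useradd --system --shell /usr/sbin/nologin -d /srv/myapp myapp
# give that user ownership of the code, logs and data
sudo chown -R myapp:myapp /srv/myapp
# keep everyone else out
sudo chmod -R o-rwx /srv/myapp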
Other than the three points mentioned above, actually getting all these components to work together is a standard process:
Create your wsgi entry point in your application. For Flask, the Flask application object itself already provides this interface (see the sketch after this list).
Run this module with your wsgi server; this is uwsgi or gunicorn or similar. If nginx is proxying to uwsgi, make sure you are using the binary uwsgi protocol rather than HTTP.
Map your static files to a location that is served directly by your web proxy (this is nginx), and create an upstream server that points to the location where the wsgi process is expecting connections. This could be a port or a unix socket (depending on how you have set up the components).
Optionally, use a process manager like supervisor to control the wsgi processes so that they are restarted upon system reboot and are easier to manage.
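To make the first two steps concrete, here is a minimal sketch (the module and file names are illustrative):

# myapp.py -- the Flask application object is itself a WSGI callable
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from nginx + uwsgi + flask!'

which uwsgi can then serve over its binary protocol on a unix socket:

uwsgi --socket /tmp/myapp.sock --master --processes 4 -w myapp:app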
Everything else is subject to personal preferences (especially when it comes to file system layouts). For large applications, the creators of Flask provide blueprints, but note that these do not prescribe any file system/directory layout.
As a general rule for any Python package, I would refer you to this link.
What you are asking for is not "best practices" but "conventions". No, there are no conventions in the project about paths, privileges and so on. Each sysadmin (or developer) has their own needs and tastes, so if you are satisfied with your current setup... well, "mission accomplished". There are no uWSGI gods to make happy :) Obviously distro-supplied packages must have their conventions, but again, they differ from distro to distro.
Related
We have a web application running on Django, Python and PostgreSQL. We are also using virtualenv.
To start the web service, we first activate the virtualenv and then start Python as a service on port 8080 with nohup.
But after some time the nohup process dies. Is there any way to launch the service as a daemon, like apache, or to use something like monit?
I am new to this, please excuse my mistakes
The runserver command should only be used in testing/development environments. And just like @Alasdair said, the Django docs already have interesting information about that topic.
I would suggest using gunicorn as a wsgi with nginx as a reverse proxy. You can find more information here
And I would suggest using supervisor to monitor and control your gunicorn workers. More information can be found here.
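A minimal supervisor program block for gunicorn might look like this sketch (the paths, project name and user are illustrative):

[program:gunicorn]
; run gunicorn from the project's virtualenv on a unix socket
command=/opt/venv/bin/gunicorn myproject.wsgi:application --workers 3 --bind unix:/run/gunicorn.sock
directory=/opt/myproject
user=www-data
autostart=true
autorestart=true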
It may be a good idea to deploy your application using Apache or nginx. There is official Django documentation on how to do it with Apache - https://docs.djangoproject.com/en/1.10/howto/deployment/wsgi/modwsgi/
Apache does support virtual environments - just add python-home=<path_to_your_virtual_env> to the WSGIDaemonProcess directive when using daemon mode of mod_wsgi:
WSGIDaemonProcess django python-path=/opt/portal/src/ python-home=/opt/venv/django home=/opt/portal/
Best practice for how to use mod_wsgi and virtual environments is explained in:
http://modwsgi.readthedocs.io/en/develop/user-guides/virtual-environments.html
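Putting that directive in context, a minimal daemon-mode virtual host could look like the following sketch (the ServerName and the wsgi.py path are illustrative; the other paths reuse the example above):

<VirtualHost *:80>
    ServerName example.com

    WSGIDaemonProcess django python-path=/opt/portal/src/ python-home=/opt/venv/django home=/opt/portal/
    WSGIProcessGroup django
    WSGIScriptAlias / /opt/portal/src/portal/wsgi.py
</VirtualHost>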
I was able to do it, but forgot to update the answer. If anyone is looking for the same thing, they can follow this.
The best way to run a Django app in production is with django + gunicorn + supervisor + nginx.
I used gunicorn, a Python WSGI HTTP server for UNIX that lets you control worker counts, timeout settings and much more. gunicorn was run on a unix socket; it could have run on a port, but we used a socket to reduce TCP overhead.
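For example (the socket path and project name are illustrative):

gunicorn myproject.wsgi:application --workers 3 --timeout 60 --bind unix:/run/gunicorn.sock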
Supervisor is used to run this gunicorn command, as supervisor is a simple tool for controlling your processes.
And with nginx as a reverse proxy in front, our Django site was live.
For more details, see the blog post below.
http://michal.karzynski.pl/blog/2013/06/09/django-nginx-gunicorn-virtualenv-supervisor/
So I have been working on a Badgr server for another department. I have built it using Python 2.7 and Django. From what I have heard, the Django development server is only meant for dev websites.
I want to take this project and convert it to run on something meant for a production environment. But I really have no idea how to proceed. Sorry if this is a really noob question - I am a system administrator, not a dev.
(env)[root@badgr code]# ./manage.py runserver &
Performing system checks...
System check identified no issues (0 silenced).
August 08, 2016 - 16:31:48
Django version 1.7.1, using settings 'mainsite.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
But I can't seem to connect to it when I go to http://myserver:8000.
I know the traffic from my PC is hitting the server because I see it in tcpdump on TCP 8000. I have been told runserver refuses traffic from external sources because, being meant for dev only, it binds to 127.0.0.1 by default.
After talking with some people, they recommended that I switch to Apache or Gunicorn.
Here are some instructions I was sent from the Django documentation: https://docs.djangoproject.com/en/1.7/howto/deployment/wsgi/ Although I can't really make heads or tails of what I should do. Any input would be appreciated. Thanks
I recommend using gunicorn and nginx to run a Django project on your production server. Official docs and recipes for both are easy to google, and their combination is one of the fastest, as long as your code is not too slow. (nginx + uWSGI is another good option, but a bit harder for beginners.)
Gunicorn can be installed with pip install gunicorn, or the same way you installed Django, and then launched with a simple gunicorn yourproject.wsgi (refer to the docs for more configuration options).
Nginx (use your distribution's package manager to install it) should be configured in reverse proxy mode and also to serve static/media files from your respective static/media roots (manage.py collectstatic must be used to keep static files up to date). Read the documentation to understand the basic principles and use this excerpt as an example for your /etc/nginx/sites-enabled/yoursite.conf:
server {
    listen 80 default;
    server_name example.com;
    root /path/to/project/root/static;

    location /media {
        alias /path/to/project/root/media;
    }

    location /static {
        alias /path/to/project/root/static;
    }

    location /favicon.ico {
        alias /path/to/project/root/static/favicon.ico;
    }

    location / {
        proxy_pass http://localhost:8000;
        include proxy_params;
    }
}
There's more to it if you need SSL or a www/non-www redirect (both are highly recommended to set up), but this example should be enough for you to get started.
To run gunicorn automatically you can either use supervisor or your init system (be it systemd or something else).
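For the systemd route, a minimal unit file might look like this sketch (the paths, names and user are illustrative):

# /etc/systemd/system/gunicorn.service
[Unit]
Description=gunicorn daemon for yoursite
After=network.target

[Service]
User=www-data
WorkingDirectory=/path/to/project/root
ExecStart=/path/to/venv/bin/gunicorn yourproject.wsgi --bind localhost:8000
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable and start it with systemctl enable --now gunicorn.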
Note: All of this assumes you're using linux. You probably should not use anything else on a production server anyway.
Consider getting some professional help if you feel you can't understand how to deal with all this, there are many freelance sysadmins who would be happy to help you for a reasonable fee.
First of all, you should really be using a "long term support" version of Django, not 1.7.1. The current LTS release is 1.8.14; see https://www.djangoproject.com/download/ for details.
The Django documentation link you were given is just one part of what you need to understand. A better place to start is actually the first link on that page, which is https://docs.djangoproject.com/en/1.8/howto/deployment/wsgi/modwsgi/.
I am coming from a Java/Tomcat background and was wondering if there is anything out there which could be similar to the Tomcat manager application?
I'm imagining a webapp that I can use to easily deploy and un-deploy Flask based webapps. I guess an analogy to Tomcat would be a WSGI server with a web based manager.
Unfortunately, the deployment story for Python / WSGI is not quite as neat as Java's WAR-file-based deployment (and while Python is not Java, that doesn't mean WAR-file-style deployment isn't nice). So you don't have anything that will quite match your expectations there - but you may be able to cobble together something similar.
First, you'll want a web server that can easily load and unload WSGI applications without requiring a server restart - the one that immediately jumps to mind is uwsgi in emperor mode (and here's an example setup).
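As a sketch, emperor mode just watches a directory of per-app config files ("vassals") and starts, reloads or stops an app whenever its file is added, touched or removed. The paths and names here are illustrative:

$ uwsgi --emperor /etc/uwsgi/vassals

$ cat /etc/uwsgi/vassals/myapp.ini

[uwsgi]
chdir = /srv/myapp
module = app:application
socket = /tmp/myapp.sock
master = true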
Second, you need a consistent way to lay out your applications so the WSGI file can be picked up / generated. Something as simple as always having a root-level app.wsgi file that can be copied to the directory being watched by uwsgi will do.
Third, you'll need a script that can take a web application folder / virtualenv and move / symlink it to the "available applications" folder. You'll need another one that can add / symlink, touch (to restart) and remove (to shut down) the app.wsgi files in the directory(ies) that uwsgi is watching for new vassal applications. If you need to run it across multiple machines (or even just one remote machine) you could use Fabric.
Fourth and finally, you'll need a little web application to enable you to manage the WSGI files for these available applications without using the command line. Since you just spent all this time building some infrastructure for it, why not use Flask and deploy it on itself to make sure everything works?
It's not a pre-built solution, but hopefully this at least points you in the right direction.
I would love to be able to use Python and Django for web applications at the company I work for. We currently use PHP because everyone is familiar with it and it is easy to deploy for a large number of clients. We can host anywhere between 10 and 100 websites on a single virtual server.
Is it possible to serve a number of websites from a single Apache and Python installation? Each website must have its own domain, among other things such as email accounts.
I wouldn't use Apache; the current best practice is an nginx frontend proxying requests to uWSGI servers. Read about the uWSGI Emperor mode - it's very versatile: http://uwsgi-docs.readthedocs.org/en/latest/Emperor.html. Each individual app can be modified, removed or added dynamically. We use it at PythonAnywhere to serve thousands of web applications.
There are other WSGI servers that you can use as well. uWSGI just seems the most scalable in my experience.
Yes, it is definitely possible. In our setup we typically have Django behind mod_wsgi, Apache and nginx.
You can configure an Apache VirtualHost to point to a specific mod_wsgi application, which in turn points to specific code.
Quoting from here (refer to the SO post for further information):
There are at least two methods you can try to serve from a single instance:

Use apache + mod_wsgi and use the WSGIApplicationGroup and/or WSGIProcessGroup directives. I've never needed these before so can't be completely sure these will work the way you want, but regardless you can definitely use mod_wsgi in daemon mode to greatly improve your memory footprint.

You can play with Django middleware to deny/allow URLs based on the request hostname (see HttpRequest.get_host() in the Django docs). For that matter, even though it would be a slight performance hit, you can put a decorator on all your views that checks the incoming host.
Yes, you can easily serve up many sites using a single Apache / mod_wsgi installation. Typically you would do that with a separate VirtualHost section for each website; see the VirtualHost docs. Use a different ServerName directive in each virtual host config to specify which hostnames get routed to which config, as in the sketch below. See the more detailed documentation on name-based virtual hosts.
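A sketch of two name-based virtual hosts sharing one Apache / mod_wsgi installation (the domains and paths are illustrative):

<VirtualHost *:80>
    ServerName site1.example.com
    WSGIDaemonProcess site1 python-home=/srv/site1/venv
    WSGIProcessGroup site1
    WSGIScriptAlias / /srv/site1/site1/wsgi.py
</VirtualHost>

<VirtualHost *:80>
    ServerName site2.example.com
    WSGIDaemonProcess site2 python-home=/srv/site2/venv
    WSGIProcessGroup site2
    WSGIScriptAlias / /srv/site2/site2/wsgi.py
</VirtualHost>

Running each site in its own WSGIDaemonProcess group keeps the applications' processes and Python interpreters separate.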
I'm currently trying to set up an nginx + uWSGI server for my Django homepage. Some tutorials advise me to create specific UNIX users for certain daemons, like an nginx user for the nginx daemon and so on. As I'm new to Linux administration, I thought I would just create a second user for running all the processes (nginx, uWSGI etc.), but it turned out that I need some --system users for that.
The main question is: what users would you set up for an nginx + uWSGI server, and how do you work with them? Say I have a server with freshly installed Debian Squeeze.
Should I install all the packages and the virtual environment, and set up all the directories, as the root user, and then create system users to run the scripts?
I like having regular users on a system:
multiple admins show up in sudo logs -- there's nothing quite like asking a specific person why they made a specific change.
not all tasks require admin privileges, but admin-level mistakes can be more costly to repair
it is easier to manage ~/.ssh/authorized_keys if each file contains only keys from a specific user -- once you get four or five different users' keys in one file, it's harder to manage. A small point :) but it is so easy to write cat ~/.ssh/id_rsa.pub | ssh user@remotehost "cat - > ~/.ssh/authorized_keys" -- if one must use >> instead, it's precarious. :)
But you're right, you can do all your work as root and not bother with regular user accounts.