Is it possible to set up two different Django projects on the same IP address/server (Linode in this case)? For example, django1_project running on www.example.com and django2_project on django2.example.com. This is preferable, but if it is not possible, then how can I run two Django projects on one server, i.e. one at www.example.com/django1 and the second at www.example.com/django2? Do I need to adapt the settings.py and wsgi.py files, the Apache configuration files (at /etc/apache2/sites-available), or something else?
Thank you in advance for your help!
Yes, it is possible to host several Python-powered sites with Apache + mod_wsgi from one host / Apache instance. The only constraint: all apps/sites must be powered by the same Python version, though each app may (and really should) have its own virtualenv. It is also recommended to use mod_wsgi daemon mode and have each Django site run in a separate daemon process group.
I'm not familiar with Linode restrictions, but if you have control over your Apache configuration then you can certainly do it with name-based virtual hosting. Set up two VirtualHost containers with the same IP address and port (this assumes that both www.example.com and django2.example.com resolve to that IP address) and differentiate requests using the ServerName directive in each container. In Apache 2.4 name-based virtual hosting is automatic; in Apache 2.2 you also need the NameVirtualHost directive.
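A minimal sketch of what that can look like with mod_wsgi daemon mode, assuming each project lives under /srv with its own virtualenv (all paths here are placeholders for your actual layout):

# Apache 2.4 syntax
<VirtualHost *:80>
    ServerName www.example.com
    WSGIDaemonProcess django1 python-home=/srv/django1/venv python-path=/srv/django1
    WSGIProcessGroup django1
    WSGIScriptAlias / /srv/django1/django1_project/wsgi.py
    <Directory /srv/django1/django1_project>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>
</VirtualHost>

<VirtualHost *:80>
    ServerName django2.example.com
    WSGIDaemonProcess django2 python-home=/srv/django2/venv python-path=/srv/django2
    WSGIProcessGroup django2
    WSGIScriptAlias / /srv/django2/django2_project/wsgi.py
    <Directory /srv/django2/django2_project>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>
</VirtualHost>

Each site then runs in its own daemon process group with its own settings.py, and requests are routed purely by the Host header.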
I'm trying to set up my first web server using the combination of Flask, uWSGI, and nginx. I've had some success getting the Flask and uWSGI components running, and I've picked up many tips from various blogs. However, there is no consistency: the articles suggest many different setups, especially where folder structures, nginx configurations and users/permissions are concerned (I've tried some of these suggestions and many do work, but I am not sure which is best). So is there one basic "best practice" way of setting up this stack?
nginx + uwsgi + flask make for a potent stack! I add supervisor to the mix and configure it as follows.
Run both uwsgi and nginx out of supervisor for better process control. You can then start supervisor at boot and it will run uwsgi and nginx in the right order. It will also intelligently try to keep them alive if they die. See a sample supervisor configuration below.
If you are running nginx and uwsgi on the same host, use unix sockets rather than HTTP.
The nginx master process must run as root if your web server is listening on port 80. I usually run my web server on some other port (like 8080) and use a load balancer in front to listen on port 80 and proxy to nginx.
Make sure your uwsgi server has access to read/write to the socket file you choose as well as proper permissions to any app code and data directories.
Don't worry too much about your folder structure, especially if you're using a Linux distro like Ubuntu that has sensible defaults. The main supervisor config file can include files from a subdirectory like /etc/supervisor/conf.d/ to separate your app-specific configuration from the supervisor core config. The same goes for nginx, with /etc/nginx/sites-enabled/.
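For reference, the stock Debian/Ubuntu supervisord.conf already ends with an include section along these lines (the glob path is the packaged default, so treat it as an assumption if your distro differs):

[include]
files = /etc/supervisor/conf.d/*.conf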
Sample supervisor config for uwsgi and nginx:
$ cat /etc/supervisor/conf.d/app.conf
[program:app]
command=/usr/local/bin/uwsgi
    --enable-threads
    --single-interpreter
    --vacuum
    --chdir /path/to/app
    --uid www-data
    --log-syslog
    --processes 4
    --socket /tmp/app.sock
    -w mypython:app
    --master
directory=/path/to/app
autostart=true
autorestart=true
priority=999
stopsignal=INT
environment=SOMEVAR=somevalue
[program:nginx]
command=/usr/sbin/nginx
autostart=true
autorestart=true
priority=999
Sample nginx.conf:
$ cat /etc/nginx/sites-enabled/myapp.conf
server {
    listen 8080;
    client_max_body_size 4G;
    server_name localhost;
    keepalive_timeout 5;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:///tmp/app.sock;
    }
}
There are two parts to this: one is setting up the system itself (by this I mean the operating system and its various paths/filesystems), and the second is installing and configuring the components.
I will concentrate on the second part, which I believe is the crux of your question:
nginx should be installed with your operating system's native package management utilities. This makes sure that all permissions are set correctly and the configuration files are where you (or anyone else) would expect them. For example, on Debian-like systems (such as Ubuntu and its various cousins), the configuration lives in /etc/nginx/, sites are configured by adding files in /etc/nginx/sites-available/, and so on. It also means that if and when updates are pushed by your OS vendor, they will automatically be installed by your packaging software.
uWSGI you should install from source, because it has a very fast development cycle and improvements in uWSGI will have a positive effect on your application. The installation process is simple, and there are no special permissions required beyond the normal root/superuser permissions you would need to install applications system-wide.
Your application's source files. For these, I would strongly advise creating a separate user for each application and isolating all permissions and all related files (for example, the log files generated by uWSGI) so that they are all owned by that user. This makes sure that other applications/users cannot read error messages/log files, and that one user has all the permissions needed to read/debug everything related to that application without using tools such as sudo.
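A sketch of that last point on a Debian-style system, with "myapp" and /srv/myapp as placeholder names:

$ sudo adduser --system --group --home /srv/myapp myapp
$ sudo chown -R myapp:myapp /srv/myapp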
Other than the three points mentioned above, actually getting all these components to work together is a standard process:
Create your WSGI handler in your application. For Flask, the Flask application object already provides this interface (see the sketch after this list).
Run this file with your WSGI server; this is uWSGI or gunicorn or similar. If you use uWSGI behind nginx, make sure you are using the binary uwsgi protocol rather than HTTP.
Map your static files to a location that is served by your web proxy (this is nginx), and create an upstream that points to the address where the WSGI process is expecting connections. This could be a port or a unix socket, depending on how you have set up the components.
Optionally, use a process manager like supervisor to control the WSGI processes so that they are restarted upon system reboot and are easier to manage.
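A minimal sketch of the first two steps, with myapp.py and the module name chosen purely as placeholders (the Flask application object itself is the WSGI callable that uWSGI or gunicorn loads):

# myapp.py -- the Flask application object "app" is the WSGI callable
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from Flask behind uWSGI and nginx'

# uWSGI can then load it over a unix socket with something like:
#   uwsgi --master --processes 4 --socket /tmp/app.sock -w myapp:app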
Everything else is subject to personal preferences (especially when it comes to file system layouts). For large applications, the creators of Flask provide blueprints, but note that even these do not prescribe any particular file system/directory layout.
As a general rule for any Python package, I would refer you to this link.
What you are asking for is not "best practices", but "conventions". No, there are no conventions in the project about paths, privileges and so on. Each sysadmin (or developer) has their own needs and tastes, so if you are satisfied with your current setup... well, "mission accomplished". There are no uWSGI gods to make happy :) Obviously distro-supplied packages must follow their distro's conventions, but again these differ from distro to distro.
I've just inherited a Python application which is running under Apache 2.4, mod_wsgi 3.4 and Python 2.7. The same application serves both HTTP and HTTPS requests.
In the existing code, it tries to determine whether the request was HTTPS by checking the environment:
if context.environ.get('HTTPS') not in ['on', '1']:
This check is failing, even when the connection actually was HTTPS. On looking at an extended traceback showing the environment variables, I saw that HTTPS was not actually in the environment passed from Apache.
So my questions are:
Is this an Apache configuration problem?
Is this check completely wrong and should be rewritten to check something else? And if so, what?
Or should I give up and replace Apache with nginx like I really want to do?
Apache's mod_ssl can be configured to set the HTTPS environment variable, but doesn't do so by default for performance reasons.
You could explicitly enable it, but since you're using a WSGI application, it's probably a better idea to check the wsgi.url_scheme environment variable instead; the WSGI spec guarantees its presence, and it won't require any further changes to your application if you do eventually move to nginx.
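A sketch of that check, reusing the context.environ dict from the inherited code:

# wsgi.url_scheme is guaranteed by the WSGI spec, regardless of the server in front
if context.environ.get('wsgi.url_scheme') != 'https':
    # the request came in over plain HTTP
    ...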
When using CGI, WSGI or SSI you need to tell Apache to set the HTTPS variable, otherwise it will be empty. You can do this with mod_env in the config or in .htaccess:
<IfModule mod_env.c>
    SetEnv HTTPS on
</IfModule>
Do I need to use nginx, or am I able to host it without it?
I am developing my first Django project and am at the point where I can run the project using the command:
./manage.py run_gunicorn -c config/gunicorn
I can then view it going to:
http://127.0.0.1:8000/resources/
I would now like to try hosting it so that other PCs can access this.
Gunicorn is a WSGI HTTP server. It is best to use Gunicorn behind an HTTP proxy server. We strongly advise you to use nginx.
# http://gunicorn.org/#deployment
Although there are many HTTP proxies available, we strongly advise that you use Nginx. If you choose another proxy server you need to make sure that it buffers slow clients when you use default Gunicorn workers. Without this buffering Gunicorn will be easily susceptible to denial-of-service attacks.
# http://docs.gunicorn.org/en/latest/deploy.html
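A minimal nginx server block that proxies to gunicorn might look like the sketch below (the bind address 127.0.0.1:8000 and the server_name are assumptions):

server {
    listen 80;
    server_name example.com;

    location / {
        # gunicorn started separately and bound to 127.0.0.1:8000
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}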
Of course not. You can use lighttpd or any other web server that supports WSGI, SCGI, FastCGI or AJP. You may refer to the Python documentation and the Django documentation; these two Stack Overflow questions might also be helpful: Cleanest & Fastest server setup for Django, and Differences and uses between WSGI, CGI, FastCGI, and mod_python in regards to Python?
You don't need a frontend proxy; you can put a standalone webserver like gunicorn directly in production. But there are various reasons why you probably want to use a frontend webserver anyway.
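If you do want to expose gunicorn directly for now, the main thing is to bind it to an address the other PCs can reach; as a sketch, with myproject.wsgi as a placeholder module path:

$ gunicorn --bind 0.0.0.0:8000 myproject.wsgi:application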
I would love to be able to use Python and Django for web applications at the company I work for. We currently use PHP because everyone is familiar with it and it is easy to deploy for a large number of clients. We can host anywhere from 10 to 100 websites on a single virtual server.
Is it possible to serve a number of websites from a single Apache and Python installation? Each website must have their own domain among other things, such as email accounts.
I wouldn't use Apache; the current best practice is an nginx frontend proxying requests to uWSGI servers. Read about uWSGI Emperor mode (http://uwsgi-docs.readthedocs.org/en/latest/Emperor.html); it's very versatile. Each individual app can be modified, removed or added dynamically. We use it at PythonAnywhere to serve thousands of web applications.
There are other WSGI servers that you can use as well. uWSGI just seems the most scalable in my experience.
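A rough sketch of Emperor mode, where the emperor watches a directory of per-site ini files ("vassals") and spawns one uWSGI instance per file (all paths and names below are placeholders):

# /etc/uwsgi/emperor.ini -- started once, e.g. with: uwsgi --ini /etc/uwsgi/emperor.ini
[uwsgi]
emperor = /etc/uwsgi/vassals
uid = www-data
gid = www-data

# /etc/uwsgi/vassals/site1.ini -- one such file per hosted application
[uwsgi]
chdir = /srv/site1
module = site1.wsgi:application
socket = /tmp/site1.sock
processes = 2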
Yes, it is definitely possible. In our setup, we typically have Django behind mod_wsgi, Apache and nginx.
You can configure Apache VirtualHosts to point to a specific mod_wsgi configuration, which in turn points to specific code.
Quoting from here - Refer to the SO post for further information.
There are at least two methods you can try to serve from a single instance:

Use apache + mod_wsgi and use the WSGIApplicationGroup and/or WSGIProcessGroup directives. I've never needed these before so can't be completely sure these will work the way you want, but regardless you can definitely use mod_wsgi in daemon mode to greatly improve your memory footprint.

You can play with Django middleware to deny/allow URLs based on the request hostname (see HttpRequest.get_host() in the Django docs). For that matter, even though it would be a slight performance hit, you can put a decorator on all your views that checks the incoming host.
Yes, you can easily serve many sites using a single Apache / mod_wsgi installation. Typically you would do that with a separate VirtualHost section for each website; see the VirtualHost docs. You want a different ServerName directive in each virtual host config to specify which hostnames get routed to which config. See the more detailed documentation on name-based virtual hosts.
Could someone tell me how I can run Django on two ports simultaneously? The default Django configuration only listens on port 8000. I'd like to run another instance on port xxxx as well. I'd like to redirect all requests to this second port to a particular app in my Django application.
I need to accomplish this with the default Django installation and not by using a webserver like nginx, Apache, etc.
Thank you
Let's say I have two applications in my Django project. I don't mean two separate Django projects, but two separate app folders inside the project directory. Let's call these app1 and app2.
I want all requests on port 8000 to go to app1 and all requests on port XXXX to go to app2.
Just run two instances of ./manage.py runserver. You can set a port by simply specifying it directly: ./manage.py runserver 8002 to listen on port 8002.
Edit: I don't really understand why you want to do this. If you want two servers serving different parts of your site, then you have in effect two sites, which will need two separate settings.py and urls.py files. You'd then run one instance of runserver for each, passing the settings flag appropriately: ./manage.py runserver 8002 --settings=app1.settings
One other thing to consider: Django's session framework uses the same session cookie name for each site, and since cookies are not port-specific, you'll keep getting logged out every time you switch between windows unless you use separate browser sessions/private browsing during development.
Although this is what you have to do when logging in as two different users on the same site, logging into two different Django sites running on different localhost ports doesn't have to work like this.
One easy solution is to write a simple middleware to fix this by appending the port number to the variable name used to store your session id. Here's the one I use.
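The exact snippet isn't reproduced here, but a minimal sketch of the idea looks like this (old-style Django middleware; the class name is made up, and it must be listed before SessionMiddleware so the cookie name is adjusted before the session is loaded):

from django.conf import settings

class PortSessionCookieNameMiddleware(object):
    """Make the session cookie name port-specific so two development
    servers on different ports don't overwrite each other's cookie."""

    base_name = settings.SESSION_COOKIE_NAME

    def process_request(self, request):
        port = request.META.get('SERVER_PORT', '')
        settings.SESSION_COOKIE_NAME = '%s_%s' % (self.base_name, port)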
The built-in web server is intended for development only, so you should really be using Apache or similar in a situation where you need to run on multiple ports.
On the other hand, you should be able to start multiple servers just by starting multiple instances of runserver. As long as you are using a separate database server, I don't think that will cause any extra problems.
If you need more information about configuring the server (or servers), you can check out the Django documentation on this topic.