I am trying to use the CherryPy VirtualHost dispatcher to serve several different applications.
My idea was to have a separate configuration file for each application, but I am a bit lost.
If I use the VirtualHost dispatcher, all applications share the same configuration namespace, so for example a section for a database connection can occur only once. Or is that not the case? Can you please help?
For my current purposes, I am satisfied with this solution:
I create a separate config file for each CherryPy application and parse it with the same class CherryPy itself uses:
import os  # confPath, pointing at the config directory, is assumed to be defined elsewhere
from cherrypy.lib.reprconf import Config
settings = Config(os.path.join(confPath, "settings.cfg"))
Also, there is a standard Python module for handling config files, named configparser.
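For comparison, a minimal sketch of the same idea using configparser (the [database] section and host option here are just placeholders):

import configparser

config = configparser.ConfigParser()
config.read("settings.cfg")
# e.g. a hypothetical [database] section with a host option
db_host = config["database"]["host"]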
That said, the question has become somewhat moot for me, because serving multiple CherryPy applications (the way I had imagined it) is quite difficult with the built-in CherryPy server. I decided to use CherryPy as a WSGI server behind Apache, and this solves the problem for me.
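For completeness, CherryPy can also mount each application with its own config file, which avoids the shared-namespace issue from the question; a rough sketch (the app classes and config file names below are made up):

import cherrypy

class App1:
    @cherrypy.expose
    def index(self):
        return "app1"

class App2:
    @cherrypy.expose
    def index(self):
        return "app2"

# Each mount point gets its own config file, so e.g. a database
# section can differ per application.
cherrypy.tree.mount(App1(), "/app1", config="app1.cfg")
cherrypy.tree.mount(App2(), "/app2", config="app2.cfg")
cherrypy.engine.start()
cherrypy.engine.block()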
Related
I would love to be able to use Python and Django for web applications at the company I work for. We currently use PHP because everyone is familiar with it and it is easy to deploy for a large number of clients. We can host anywhere from 10 to 100 websites on a single virtual server.
Is it possible to serve a number of websites from a single Apache and Python installation? Each website must have its own domain, among other things such as email accounts.
I wouldn't use Apache; the current best practice is an Nginx frontend proxying requests to uWSGI servers. Read about the uWSGI Emperor mode, it's very versatile: http://uwsgi-docs.readthedocs.org/en/latest/Emperor.html. Each individual app can be modified, removed, or added dynamically. We use it at PythonAnywhere to serve thousands of web applications.
There are other WSGI servers that you can use as well. uWSGI just seems the most scalable in my experience.
Yes, it is definitely possible. In our setup we typically have Django behind mod_wsgi, Apache, and nginx.
You can configure an Apache VirtualHost to point to a specific mod_wsgi script, which in turn points to the specific application code.
Quoting from here - Refer to the SO post for further information.
There are at least two methods you can try to serve from a single instance:
Use apache + mod_wsgi and use the WSGIApplicationGroup and/or WSGIProcessGroup directives. I've never needed these before so can't be completely sure these will work the way you want, but regardless you can definitely use mod_wsgi in daemon mode to greatly improve your memory footprint.
You can play with Django middleware to deny/allow URLs based on the request hostname (see HttpRequest.get_host() in the Django docs). For that matter, even though it would be a slight performance hit, you can put a decorator on all your views that checks the incoming host.
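A hedged sketch of that second idea; neither the middleware class nor the decorator below is an official Django API, only HttpRequest.get_host() is:

from functools import wraps
from django.http import HttpResponseForbidden

ALLOWED_HOST = "example.com"  # placeholder hostname

class HostFilterMiddleware(object):
    """Old-style Django middleware that rejects requests for other hostnames."""
    def process_request(self, request):
        if request.get_host() != ALLOWED_HOST:
            return HttpResponseForbidden("Unknown host")
        return None

def require_host(hostname):
    """Decorator variant: check the incoming host per view."""
    def decorator(view):
        @wraps(view)
        def wrapped(request, *args, **kwargs):
            if request.get_host() != hostname:
                return HttpResponseForbidden("Unknown host")
            return view(request, *args, **kwargs)
        return wrapped
    return decorator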
Yes, you can easily serve up many sites using a single Apache / mod_wsgi installation. Typically you would do that with a separate VirtualHost section for each website; see the VirtualHost docs. You use a different ServerName directive in each virtual host config to specify which hostnames get routed to which config. See the more detailed documentation on name-based virtual hosts.
I have Django installed on a corporate intranet, running on Apache using mod_wsgi. The Apache server has SSL set up, and everything is working as expected when users hit CGI pages, but my Django applications don't seem to be picking up any SSL-related environment variables (ssl_client_s_dn, etc.), whether they reach my Django pages with http:// or https://.
I think the main issue is that I don't understand where Django pulls its os.environ from, and how to change that. I've looked through the official Django documentation and other posts on 'django' and 'ssl', but those seem to deal with views that need to sometimes be http and sometimes https. I am just trying to get SSL-related information to show up in os.environ at all, and I don't know where to look next.
What do you need to know? The httpd.conf wsgi configuration settings? My project's settings.py, or lines in wsgi.py?
Any help is much appreciated.
If you are using mod_wsgi within Apache, and you set the environment variables in Apache (using SetEnv), then you are out of luck: mod_wsgi's environment variables are separate from those in Apache.
Many people run into similar problems: a variable set in Apache using SetEnv is completely invisible when you try to access os.environ in Django.
There is, however, a solution. It is listed here, and basically comes down to these lines in your WSGI script (adjusted to your needs):
import os
import django.core.handlers.wsgi
_application = django.core.handlers.wsgi.WSGIHandler()  # the underlying Django WSGI application
def application(environ, start_response):
    os.environ['ssl_client_s_dn'] = environ['ssl_client_s_dn']  # copied from the per-request environ
    return _application(environ, start_response)
This copies the value from the WSGI environ (the first argument of your WSGI application) into the os.environ dictionary before handing the request to the original application. For details see the post linked above.
I want to have a simple program in Python that can process different requests (POST, GET, MULTIPART-FORMDATA). I don't want to use a complete framework.
I basically need to be able to get GET and POST params, probably (but not necessarily) in a way similar to PHP, and to get some other SERVER variables like REQUEST_URI, QUERY, etc.
I have installed nginx successfully, but I've failed to find a good example of how to do the rest. So a simple tutorial, or any directions and ideas on how to set up nginx to run a certain Python process for a certain virtual host, would be most welcome!
Although you can make Python run a webserver by itself with wsgiref (a minimal sketch follows the lists below), I would recommend using one of the many Python webservers and/or web frameworks around. For pure and simple Python webhosting there are several options available:
gunicorn
tornado
twisted
uwsgi
cherrypy
If you're looking for more features you can look at some web frameworks:
werkzeug
flask
masonite
cherrypy (yes, cherrypy is both a webserver and a framework)
django (for completeness, I know that was not the purpose of the question)
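As promised above, here is a minimal framework-free sketch using only the standard library (wsgiref and urllib.parse); it handles GET and url-encoded POST parameters but not multipart form data, and the address and port are arbitrary:

from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # GET parameters come from the query string
    get_params = parse_qs(environ.get("QUERY_STRING", ""))
    # the POST body is available as a file-like object under wsgi.input
    try:
        length = int(environ.get("CONTENT_LENGTH") or 0)
    except ValueError:
        length = 0
    post_params = parse_qs(environ["wsgi.input"].read(length).decode("utf-8"))
    body = "GET: %r\nPOST: %r\n" % (get_params, post_params)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body.encode("utf-8")]

if __name__ == "__main__":
    # for local testing only; behind nginx you would use one of the servers listed above
    make_server("127.0.0.1", 8000, app).serve_forever()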
You should look into using Flask -- it's an extremely lightweight interface to a WSGI server (werkzeug) which also includes a templating library, should you ever want to use one. But you can totally ignore it if you'd like.
You can use thttpd. It is a lightweight web server for running CGI scripts, and it works well with nginx. How to set up thttpd with nginx is detailed here: http://nginxlibrary.com/running-cgi-scripts-using-thttpd/
All the same, you will need a WSGI server, as nginx does not fully support that protocol by itself.
I'm running Ubuntu with nginx and FastCGI; is that all I need to serve Python apps as well?
You'll probably also need flup to bridge WSGI and FastCGI. You obviously need Python, and whatever libraries your app depends upon. You'll likely need a database and the appropriate connectors as well, but that should all be in the documentation of whatever project you're trying to host (or framework you're using to write with).
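A hedged sketch of the flup side of that bridge, assuming flup is installed and nginx (or Apache's mod_fastcgi) is pointed at the same address:

from flup.server.fcgi import WSGIServer

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from FastCGI\n"]

if __name__ == "__main__":
    # the frontend web server proxies FastCGI requests to this address
    WSGIServer(app, bindAddress=("127.0.0.1", 9000)).run()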
Short answer: almost.
I have a Bluehost account where I can run Python scripts as CGI. I guess it's the simplest kind of CGI, because to run a script I have to define the following in .htaccess:
Options +ExecCGI
AddType text/html py
AddHandler cgi-script .py
Now, whenever I look up web programming with Python, I hear a lot about WSGI and how most frameworks use it. But I just don't understand how it all fits together, especially when my web server is given (Apache running at a host's machine) and not something I can really play with (except defining .htaccess commands).
How are WSGI, CGI, and the frameworks all connected? What do I need to know, install, and do if I want to run a web framework (say web.py or CherryPy) on my basic CGI configuration? How to install WSGI support?
How are WSGI, CGI, and the frameworks all connected?
Apache listens on port 80. It gets an HTTP request. It parses the request to find a way to respond. Apache has a LOT of choices for responding. One way to respond is to use CGI to run a script. Another way to respond is to simply serve a file.
In the case of CGI, Apache prepares an environment and invokes the script through the CGI protocol. This is a standard Unix Fork/Exec situation -- the CGI subprocess inherits an OS environment including the socket and stdout. The CGI subprocess writes a response, which goes back to Apache; Apache sends this response to the browser.
CGI is primitive and annoying, mostly because it forks a subprocess for every request, and the subprocess must exit or close stdout and stderr to signify the end of the response.
WSGI is an interface that is based on the CGI design pattern. It is not necessarily CGI -- it does not have to fork a subprocess for each request. It can be CGI, but it doesn't have to be.
WSGI adds to the CGI design pattern in several important ways. It parses the HTTP Request Headers for you and adds these to the environment. It supplies any POST-oriented input as a file-like object in the environment. It also provides you a function that will formulate the response, saving you from a lot of formatting details.
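Concretely, inside a WSGI application those additions show up as entries in the environ dict; a small illustrative sketch:

def app(environ, start_response):
    # request headers are exposed as HTTP_* keys, e.g. the User-Agent header
    agent = environ.get("HTTP_USER_AGENT", "")
    # the POST body is a file-like object under the wsgi.input key
    body = environ["wsgi.input"].read(int(environ.get("CONTENT_LENGTH") or 0))
    # start_response is the supplied function that formulates the response
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("agent=%s, body bytes=%d\n" % (agent, len(body))).encode("utf-8")]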
What do I need to know / install / do if I want to run a web framework (say web.py or cherrypy) on my basic CGI configuration?
Recall that forking a subprocess is expensive. There are two ways to work around this.
Embedded mod_wsgi or mod_python embeds Python inside Apache; no process is forked. Apache runs the Django application directly.
Daemon mod_wsgi or mod_fastcgi allows Apache to interact with a separate daemon (or "long-running process"), using the WSGI protocol. You start your long-running Django process, then you configure Apache's mod_fastcgi to communicate with this process.
Note that mod_wsgi can work in either mode: embedded or daemon.
When you read up on mod_fastcgi, you'll see that Django uses flup to create a WSGI-compatible interface from the information provided by mod_fastcgi. The pipeline works like this.
Apache -> mod_fastcgi -> FLUP (via FastCGI protocol) -> Django (via WSGI protocol)
Django has several "django.core.handlers" for the various interfaces.
For mod_fastcgi, Django provides a manage.py runfcgi that integrates FLUP and the handler.
For mod_wsgi, there's a core handler for this.
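For reference, a mod_wsgi script for Django of that era is only a few lines; roughly (the project name mysite and the path are placeholders):

import os, sys

sys.path.append("/path/to/your/project")  # make the project importable
os.environ["DJANGO_SETTINGS_MODULE"] = "mysite.settings"  # tell Django which settings to use

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()  # the core WSGI handler mentioned above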
How to install WSGI support?
Follow these instructions.
https://code.google.com/archive/p/modwsgi/wikis/IntegrationWithDjango.wiki
For background see this
http://docs.djangoproject.com/en/dev/howto/deployment/#howto-deployment-index
I think Florian's answer answers the part of your question about "what is WSGI", especially if you read the PEP.
As for the questions you pose towards the end:
WSGI, CGI, FastCGI etc. are all protocols for a web server to run code, and deliver the dynamic content that is produced. Compare this to static web serving, where a plain HTML file is basically delivered as is to the client.
CGI, FastCGI and SCGI are language agnostic. You can write CGI scripts in Perl, Python, C, bash, whatever. CGI defines which executable will be called, based on the URL, and how it will be called: the arguments and environment. It also defines how the return value should be passed back to the web server once your executable is finished. The variations are basically optimisations to be able to handle more requests, reduce latency and so on; the basic concept is the same.
WSGI is Python only. Rather than a language agnostic protocol, a standard function signature is defined:
def simple_app(environ, start_response):
    """Simplest possible application object"""
    status = '200 OK'
    response_headers = [('Content-type','text/plain')]
    start_response(status, response_headers)
    return ['Hello world!\n']
That is a complete (if limited) WSGI application. A web server with WSGI support (such as Apache with mod_wsgi) can invoke this function whenever a request arrives.
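For instance, CherryPy's built-in WSGI server (mentioned below) can host that exact callable; a hedged sketch, with an arbitrary address and port:

import cherrypy

# graft the plain WSGI callable simple_app, defined above, onto CherryPy's server
cherrypy.tree.graft(simple_app, "/")
cherrypy.server.socket_host = "127.0.0.1"
cherrypy.server.socket_port = 8080
cherrypy.engine.start()
cherrypy.engine.block()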
The reason this is so great is that we can avoid the messy step of converting from an HTTP GET/POST to CGI to Python, and back again on the way out. It's a much more direct, clean and efficient linkage.
It also makes it much easier to have long-running frameworks running behind web servers, if all that needs to be done for a request is a function call. With plain CGI, you'd have to start your whole framework up for each individual request.
To have WSGI support, you'll need to have installed a WSGI module (like mod_wsgi), or use a web server with WSGI baked in (like CherryPy). If neither of those are possible, you could use the CGI-WSGI bridge given in the PEP.
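On a shared host that only offers CGI (as in the question), the standard library also ships such a bridge as wsgiref.handlers.CGIHandler; a minimal sketch, assuming your framework exposes a WSGI callable named application in a hypothetical module myapp:

#!/usr/bin/env python
import wsgiref.handlers
from myapp import application  # hypothetical module exposing a WSGI app

if __name__ == "__main__":
    # run the WSGI app once per CGI request, translating between the two protocols
    wsgiref.handlers.CGIHandler().run(application)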
You can run WSGI over CGI, as PEP 333 demonstrates with an example. However, every time there is a request, a new Python interpreter is started and the whole context (database connections, etc.) needs to be built, which all takes time.
If you want to run WSGI, the best option would be for your host to install mod_wsgi and add the appropriate configuration to defer control to your application.
Flup is another way to run WSGI with any webserver that can speak FCGI, SCGI or AJP. From my experience only FCGI really works; it can be used in Apache either via mod_fastcgi or, if you can run a separate Python daemon, with mod_proxy_fcgi.
WSGI is a protocol much like CGI: it defines a set of rules for how a webserver and Python code interact, and it is specified in PEP 333. It makes it possible for many different webservers to host many different frameworks and applications through the same application protocol, which is what makes it so useful.
If you are unclear on all the terms in this space (and let's face it, it's a confusing, acronym-laden one), there's also a good background reader in the form of an official Python HOWTO which discusses CGI vs. FastCGI vs. WSGI and so on: http://docs.python.org/howto/webservers.html
It's a simple abstraction layer for Python, akin to what the Servlet spec is for Java. Whereas CGI is really low-level and just dumps stuff into the process environment and standard in/out, the above two specs model the HTTP request and response as constructs in the language. My impression, however, is that Python folks have not quite settled on de facto implementations, so you have a mix of reference implementations and other utility-type libraries that provide other things along with WSGI support (e.g. Paste). Of course I could be wrong; I'm a newcomer to Python. The "web scripting" community is coming at the problem from a different direction (shared hosting, CGI legacy, privilege-separation concerns) than the Java folks, who had the luxury of starting with a single enterprise container running in a dedicated environment against statically compiled and deployed code.