I have multiple Django projects that run as individual services, and the services all call each other.
This means every service runs on its own port, which gets unreliable because I have to remember the right port whenever I start a project with
manage.py runserver 0.0.0.0:8080
Ideally for each project I would just use the runserver command and it would know which port to run on automatically.
Is this possible without the need of bash aliases?
This is well beyond the scope of what the development server should be doing. If you need to run your apps in a way that they can actually talk to each other, even in development, you should probably be using a more configurable server. Gunicorn would be ideal. Then you could use something like Foreman (or the Python port, Honcho) with a Procfile that lists all the apps and their ports, then start the whole thing with a single command.
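As a rough illustration, such a Procfile might look like this (the process names, module paths, and ports are just placeholders):

    api: gunicorn api_project.wsgi --bind 0.0.0.0:8000
    billing: gunicorn billing_project.wsgi --bind 0.0.0.0:8001

Running foreman start (or honcho start) then brings up every service on its assigned port with one command.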
Consider this: that information has to be stored somewhere. You could put it in a configuration file and write a script that loads the appropriate host and port. manage.py doesn't read such a configuration file itself, so you'll need to handle this outside of manage.py. The script would then call manage.py runserver host:port with the correct details.
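A minimal sketch of such a wrapper (the file names and config keys are placeholders):

    # run.py - reads this project's host/port from a config file and starts runserver
    import configparser
    import subprocess

    config = configparser.ConfigParser()
    config.read('runserver.ini')                      # hypothetical per-project config file
    host = config.get('server', 'host', fallback='0.0.0.0')
    port = config.get('server', 'port', fallback='8000')

    subprocess.run(['python', 'manage.py', 'runserver', f'{host}:{port}'])

Each project keeps its own runserver.ini, and you start it with python run.py instead of remembering the port.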
I once tried to do this; I thought maybe I could set the default port in settings.py.
When I dug a bit into the code, I found that the default port number is defined in django/core/management/commands/runserver.py. That file is shared by all Django projects, so you can't set individual default ports for different Django projects there.
I use ...\pserve development.ini --reload in my dev environment to restart my API when the code changes.
The doc says:
Auto-template-reload behavior is not recommended for production sites
as it slows rendering slightly; it’s usually only desirable during
development.
But the docs make no recommendation for a production environment. What is the recommended way to reload, or do I have to do it manually every time?
Yes, you will need to restart the service if you change anything in your config file.
If you know that you'll be changing things and don't want to restart it every time that happens, move some of your configs to a database and refactor your app to read from that. This won't be possible for everything, and you'll need to be careful that when an update happens it is applied correctly, but it can be done for some things.
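A minimal sketch of that idea (the database file, table, and key names are made up; it assumes a simple key/value table in SQLite):

    import sqlite3

    def get_setting(name, default=None):
        # look the value up at call time instead of baking it into the .ini file
        conn = sqlite3.connect('settings.db')        # hypothetical database
        try:
            row = conn.execute(
                'SELECT value FROM app_settings WHERE key = ?', (name,)
            ).fetchone()
            return row[0] if row else default
        finally:
            conn.close()

    # read the value on each request, so changes apply without a restart
    items_per_page = get_setting('items_per_page', 20)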
First of all, you are referring to the section of the documentation called Automatically Reloading Templates. That only covers reloading templates automatically, not your entire application.
The documentation explicitly states not to use --reload in production. That is an automatic function, not a manual one.
If you change your code and deploy it to a production environment, it is assumed that you would restart your application manually, thereby removing the need to use --reload when invoking pserve production.ini.
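In practice that restart is usually a single command after each deploy, for example (assuming the app runs under a process manager such as systemd, with a hypothetical unit name):

    sudo systemctl restart myapi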
I started to write a small REST API using Python with Falcon and Gunicorn. I would like to write some integration tests and I am not sure how to set up a proper test environment (for example to switch to another database). Do you have some good advice or tutorials?
My current idea is to maybe introduce some middleware and to provide a header. If the header is set, I could switch to my test configuration.
Definitely don't add middleware for the sole purpose of integration testing. What you should do is set up some configuration files for your server to use. Dev, Test, and Prod is a decent setup. Each file can point to a different database and use a different port for your server, and you should even be able to have the Dev and Test servers running at the same time on your personal machine without any issues. Python has a built-in config module (configparser) that you can use. You can set an environment variable in your shell so your server knows which configuration file to load, e.g. in bash FALCON_ENV='DEV', and then read it in Python with the os module: os.environ['FALCON_ENV']. Hope that helps, feel free to ask any more questions.
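A rough sketch of that approach (the file names and config sections are placeholders):

    import configparser
    import os

    # the shell decides which environment we are in, e.g. export FALCON_ENV='DEV'
    env = os.environ.get('FALCON_ENV', 'DEV')

    config = configparser.ConfigParser()
    config.read(f'config.{env.lower()}.ini')          # e.g. config.dev.ini or config.test.ini

    db_url = config.get('database', 'url')
    port = config.getint('server', 'port')

Your integration tests then just export FALCON_ENV='TEST' before starting the server, and no test-only code ends up in the application itself.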
You might want try using the virtual testing environment and testing helpers provided by falcon core:
http://falcon.readthedocs.io/en/stable/api/testing.html
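For example, a minimal test using those helpers might look like this (the app module and route are assumptions about your project):

    from falcon import testing

    from myservice.app import api          # hypothetical module exposing your Falcon application

    def test_list_things():
        client = testing.TestClient(api)
        result = client.simulate_get('/things')
        assert result.status_code == 200

simulate_get and friends exercise the WSGI app directly, so you don't need a running Gunicorn instance for these tests.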
I am working on a Django based application whose location on my disk is home/user/Documents/project/application. This application takes in some values from the user and writes them into a file located in a folder under the project directory, i.e. home/user/Documents/project/folder/file. While running the development server with python manage.py runserver everything worked fine. However, after deployment the application/views.py, which accesses the file via open('folder/path','w'), can no longer reach it, because by default it looks in the /var/www folder when deployed on an apache2 server using mod_wsgi.
Now, I am not putting the folder into /var/www because it is not good practice to put any Python code there, as it might become readable by clients, which is a major security threat. Please let me know how I can point the deployed application to read and write the correct file.
The real solution is to install your data files in /srv/data/myapp or some such, so that you can give the webserver user the correct permissions to only those directories. Whether you choose to put your code in /var/www or not is a separate question, but I would suggest putting at least your wsgi file there (and, of course, specifying your DocumentRoot correctly).
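Whatever directory you choose, avoid relative paths like open('folder/path', 'w'): they resolve against the process's current working directory, which is why the code behaves differently under runserver and mod_wsgi. A minimal sketch of the usual fix (the DATA_DIR setting name is an assumption):

    # settings.py
    DATA_DIR = '/srv/data/myapp'                      # directory the webserver user may write to

    # views.py
    import os
    from django.conf import settings

    def save_values(values):
        path = os.path.join(settings.DATA_DIR, 'file')
        with open(path, 'w') as f:                    # absolute path, same under runserver and mod_wsgi
            f.write(values)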
Could someone tell me how I can run Django on two ports simultaneously? The default Django configuration only listens on port 8000. I'd like to run another instance on port xxxx as well. I'd like to redirect all requests to this second port to a particular app in my Django application.
I need to accomplish this with the default Django installation and not by using a webserver like nginx, Apache, etc.
Thank you
Let's say I have two apps in my Django project. I don't mean two separate Django projects, but two separate app folders inside the project directory. Let's call them app1 and app2.
I want all requests on port 8000 to go to app1 and all requests on port XXXX to go to app2
Just run two instances of ./manage.py runserver. You can set a port by simply specifying it directly: ./manage.py runserver 8002 to listen on port 8002.
Edit: I don't really understand why you want to do this. If you want two servers serving different parts of your site, then you have in effect two sites, which will need two separate settings.py and urls.py files. You'd then run one instance of runserver with each, passing the settings flag appropriately: ./manage.py runserver 8002 --settings=app1.settings
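A rough sketch of that layout (the module names are placeholders): each settings file reuses the shared settings but points at its own URLconf.

    # myproject/settings_app1.py
    from myproject.settings import *
    ROOT_URLCONF = 'app1.urls'            # only app1's routes on this instance

    # myproject/settings_app2.py
    from myproject.settings import *
    ROOT_URLCONF = 'app2.urls'            # only app2's routes on this instance

You would then run ./manage.py runserver 8000 --settings=myproject.settings_app1 in one terminal and ./manage.py runserver 8002 --settings=myproject.settings_app2 in another.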
One other thing to consider - django's session stuff will use the same session cookie for each site, and since cookies are not port specific, you'll have issues with getting logged out every time you switch between windows unless you use multiple browser sessions/private browsing during development.
Although this is what you need to do when logging in as 2 different users on the same site, logging into 2 different sites both running django on different localhost ports doesn't have to work like this.
One easy solution is to write a simple middleware to fix this by appending the port number to the variable name used to store your session id. Here's the one I use.
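A minimal sketch of that idea (the class name is made up; it assumes new-style middleware and request.get_port(), which is available in recent Django versions):

    from django.conf import settings

    class PortScopedSessionMiddleware:
        """Give each development port its own session cookie name."""

        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            # sessionid_8000 and sessionid_8002 no longer overwrite each other
            settings.SESSION_COOKIE_NAME = 'sessionid_%s' % request.get_port()
            return self.get_response(request)

Add it near the top of MIDDLEWARE in each settings file, for development only.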
The built-in web server is intended for development only, so you should really be using Apache or similar in a situation where you need to run on multiple ports.
On the other hand, you should be able to start up multiple servers just by starting multiple instances of runserver. As long as you are using a separate database server, I don't think that will cause any extra problems.
If you need more information about the configuration of server/servers you can check out Django documentation related to this topic.
I'm running a Django app on Apache + mod_python. When I make some changes to the code, sometimes they have effect immediately, other times they don't, until I restart Apache. However I don't really want to do that since it's a production server running other stuff too. Is there some other way to force that?
Just to make it clear, since I see some people get it wrong, I'm talking about a production environment. For development I'm using Django's development server, of course.
If possible, you should switch to mod_wsgi. This is now the recommended way to serve Django anyway, and is much more efficient in terms of memory and server resources.
In mod_wsgi, each site has a .wsgi file associated with it. To restart a site, just touch the relevant file, and only that code will be reloaded.
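For example (the path is only an illustration; touch whichever .wsgi file belongs to the site you changed):

    touch /path/to/mysite/django.wsgi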
As others have suggested, use mod_wsgi instead. To get automatic reloading, either by touching the WSGI script file or through a monitor that looks for code changes, you must be using daemon mode on UNIX. A sleight of hand can be used to achieve the same on Windows when using embedded mode. All the details can be found in:
http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode
You can make Apache start a fresh child process for every request by setting "MaxRequestsPerChild 1" in your httpd.conf file, so code changes take effect immediately. But do it only on a test server, not in production.
or
If you don't want to kill existing connections and still want to restart Apache, you can restart it "gracefully" by running "apache2ctl graceful" - all existing connections will be allowed to complete.
Use the development server included in Django (like ./manage.py runserver 0.0.0.0:8080). It will do most of what you need during development. The only drawback is that it cannot handle simultaneous requests with multi-threading.
I've heard there is a trick of setting Apache's maximum number of server processes to 1 so that every code change is reflected immediately, but since you said you're running other services, this may not apply in your case.