I am using Django and channels (for WebSockets).
While developing, I created some objects in memory when a user made a request, and the WebSocket consumers could then use those objects.
Later, I ran the production server with SSL, and for that I had to run the apps separately: python manage.py startsslserver and daphne ... project.asgi:application.
Now the sockets no longer have access to the objects that are initialized in the Django server process.
Does anybody know how I can solve this problem?
More information is needed for a clear answer. Django WebSocket problems are usually caused by something else, such as the nginx.conf or daphne.service configuration.
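That said, one cause here doesn't depend on the server configuration at all: startsslserver and daphne run as separate OS processes, so in-memory objects created in one are invisible to the other. A minimal sketch of the usual workaround, keeping shared state in a cross-process store; the helper names are illustrative, not from the question, and with Django the cache object would typically be django.core.cache.cache backed by Redis or memcached:

```python
# Hedged sketch, not the asker's code: keep shared objects in a
# cross-process backend instead of plain Python memory.
import json

def save_shared(cache, key, obj):
    # JSON keeps the value portable across backends and processes
    cache.set(key, json.dumps(obj))

def load_shared(cache, key):
    raw = cache.get(key)
    return json.loads(raw) if raw is not None else None

# In a Django view:         save_shared(cache, "state:42", {"step": 1})
# In a channels consumer:   state = load_shared(cache, "state:42")
```

Any backend both processes can reach (the cache, the database, a Redis channel layer) works; plain module-level Python objects never will.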
I'm learning Django for the first time and I'm also relatively new to Python. The Django documentation says:
"You’ve started the Django development server, a lightweight Web
server written purely in Python. [...] don’t use this server in
anything resembling a production environment. It’s intended only for
use while developing."
Why shouldn't I use the Django server for production? Why do I need a separate server? I'm from a Node/Express background, and when I created an Express application, I could just deploy it to Heroku without doing too much. How will this be different for Django?
For security and performance reasons. The development server hasn't gone through security audits or performance tests, isn't built to handle much concurrent traffic, and serves static files inefficiently; it's only meant to be used while developing. In production you run your app under a WSGI server such as Gunicorn or uWSGI, often behind nginx or Apache.
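To make the comparison with Node/Express concrete: on Heroku the difference is mostly one line of configuration, declaring a real WSGI server in your Procfile instead of relying on runserver. A sketch, assuming a project package named myproject and gunicorn listed in requirements.txt:

```
web: gunicorn myproject.wsgi --log-file -
```

Express apps ship with a production-capable HTTP server built in, which is why deploying to Heroku felt like less work there; Django's development server deliberately isn't that, so you slot in Gunicorn (or uWSGI) for the same role.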
I'm a newbie in Django and have a project that involves distributed remote storage, and I've been advised to use mod_xsendfile as part of the project's procedure.
I have a Django app that receives a file and splits it into N segments, each to be stored on a distinct server. Those servers run a Django app that receives and stores the segments.
But since mod_xsendfile requires Apache, and I'm still at the developing and testing stage, this question occurred to me.
I googled a lot but found nothing on the subject.
So my question is: is it possible to use Apache as the Django web server during development? Does it make sense in development mode to replace the built-in Django web server with Apache?
There's nothing stopping you from installing a copy of Apache on your workstation and using it for development, and since you're working on something that depends on Apache-specific functionality, it makes perfect sense to use it as your development server instead of ./manage.py runserver.
Most people use Django's built-in server because they don't need more than that for what they're trying to do; it sounds like your project needs more.
Heck, since you're testing a distributed setup, you may even want to grab a virtualization tool (QEMU, VirtualBox, et al.) so you can work against a faux-distributed environment. I'd suggest a bit of scripting to make it easy to deploy and restart all the instances at once, though; it'll save you from having to track down issues where the running code is older than you thought it was.
Your development environment can be what you need it to be for what you're doing.
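For reference, the X-Sendfile hand-off itself is tiny, which is part of why it's worth developing against Apache here. A hedged sketch; SEGMENT_ROOT and both function names are made-up illustrations, not from the question:

```python
import os

SEGMENT_ROOT = "/var/segments"  # assumed storage directory for the segments

def xsendfile_headers(filename):
    """Build the headers that hand the download off to Apache."""
    # basename() guards against path traversal in user-supplied names
    safe_name = os.path.basename(filename)
    return {
        "X-Sendfile": os.path.join(SEGMENT_ROOT, safe_name),
        "Content-Disposition": f'attachment; filename="{safe_name}"',
    }

# In a real Django view, roughly:
# def serve_segment(request, filename):
#     response = HttpResponse()  # empty body; Apache streams the file
#     for name, value in xsendfile_headers(filename).items():
#         response[name] = value
#     return response
```

With mod_xsendfile enabled, Apache replaces the empty response body with the file's contents. Under runserver the header is silently ignored and the client gets an empty download, which is exactly the behavioral difference that makes Apache the more faithful development server for this project.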
I am trying to build a web app that has both a Python part and a Node.js part. The Python part is a RESTful API server, and the Node.js part will use socket.io and act as a push server. Both will need to access the same DB instance (Heroku Postgres in my case). The Python part will need to talk to the Node.js part in order to send push messages to be delivered to clients.
I have the Python and DB parts built and deployed, running under a "web" dyno. I am not sure how to build the Node part -- and especially how the Python part can talk to the Node.js part.
I am assuming that the Node.js part will need to be a new Heroku app, so that it too can run on a 'web' dyno, benefit from the HTTP routing stack, and accept client connections. In that case, will my Python dynos access it just like regular clients do?
What are the alternatives? How is this usually done?
After having played around a little, and also doing some reading, it seems like Heroku apps that need this have 2 main options:
1) Use some kind of back-end, that both apps can talk to. Examples would be a DB, Redis, 0mq, etc.
2) Use what I suggested above. I actually went ahead and implemented it, and it works.
Just thought I'd share what I've found.
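For option 2, the Python-to-Node call is plain HTTP. A sketch using only the stdlib; the URL, endpoint, and payload shape are all assumptions, and in practice you would also add a shared secret so only your Python dyno can trigger pushes:

```python
import json
from urllib import request

PUSH_SERVER_URL = "https://my-push-app.example.com/notify"  # hypothetical endpoint

def build_push_request(channel, message):
    # The payload shape is an assumption; agree on it with the Node side.
    body = json.dumps({"channel": channel, "message": message}).encode("utf-8")
    return request.Request(
        PUSH_SERVER_URL,
        data=body,  # the presence of a body makes this a POST
        headers={"Content-Type": "application/json"},
    )

# To actually send (add an auth header in a real app):
# request.urlopen(build_push_request("user-42", "new comment"))
```

On the Node side, an Express route would decode this JSON and emit the message over socket.io to the named channel.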
I'm trying to add an LDAP authentication backend to a Django project running on GAE.
The project runs OK. The only real problem is that LDAP is not supported by GAE; I mean that
import ldap
will generate a server error. Nonetheless, I do know that I could make my own modules available through zipimport.
Does anybody have any experience solving similar issues? Can this sort of workaround be an effective solution considering lower level dependencies?
Thanks!
A.
App Engine doesn't let you open sockets directly. Unless the LDAP server you're planning to connect to has an internet-visible HTTP front-end, you need a Plan B. (E.g., you could periodically upload an extract from LDAP to your app.)
See http://code.google.com/appengine/docs/python/runtime.html#The_Sandbox
Could someone tell me how I can run Django on two ports simultaneously? The default Django configuration only listens on port 8000. I'd like to run another instance on port xxxx as well. I'd like to redirect all requests to this second port to a particular app in my Django application.
I need to accomplish this with the default Django installation and not by using a webserver like nginx, Apache, etc.
Thank you
Let's say I have two applications in my Django project. Now, I don't mean two separate Django projects, but two separate app folders inside the project directory. Let's call them app1 and app2.
I want all requests on port 8000 to go to app1 and all requests on port XXXX to go to app2
Just run two instances of ./manage.py runserver. You can set a port by simply specifying it directly: ./manage.py runserver 8002 to listen on port 8002.
Edit: I don't really understand why you want to do this. If you want two servers serving different parts of your site, then you effectively have two sites, which will need two separate settings.py and urls.py files. You'd then run one instance of runserver for each, passing the settings flag appropriately: ./manage.py runserver 8002 --settings=app1.settings
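Concretely, each settings variant only needs to override the URL configuration. A sketch, where all the module names are made up for illustration and the shared base settings are assumed to live in project/settings.py:

```python
# app1/settings.py: the variant used by the instance on port 8000
from project.settings import *  # reuse the shared base settings

ROOT_URLCONF = "app1.urls"  # a urls.py that includes only app1's routes
```

The app2 variant is identical apart from pointing ROOT_URLCONF at app2.urls, and is passed to the second runserver instance via --settings.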
One other thing to consider: Django's session machinery uses the same session cookie for each site, and since cookies are not port-specific, you'll get logged out every time you switch between windows unless you use multiple browser sessions or private browsing during development.
Although that behavior is unavoidable when logging in as two different users on the same site, two different Django sites running on different localhost ports don't have to work like this.
One easy solution is a simple middleware that appends the port number to the name of the cookie used to store your session id.
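The answer's original snippet isn't shown here, so the following is an illustrative sketch of that approach, not the author's code; the class name is made up, and mutating settings per request is only reasonable on a single-threaded development server:

```python
# Rename the session cookie per port so two dev servers on localhost
# don't clobber each other's login.
class PortSessionCookieMiddleware:
    def __init__(self, get_response, settings=None):
        if settings is None:
            # In a real project this is django.conf.settings; it is
            # injectable here only so the sketch runs without Django.
            from django.conf import settings
        self.settings = settings
        self.base_name = settings.SESSION_COOKIE_NAME
        self.get_response = get_response

    def __call__(self, request):
        # e.g. "sessionid" becomes "sessionid_8002" on port 8002
        self.settings.SESSION_COOKIE_NAME = f"{self.base_name}_{request.get_port()}"
        return self.get_response(request)
```

Add it near the top of MIDDLEWARE in both settings files. An even simpler alternative during development is to hard-code a different SESSION_COOKIE_NAME in each of the two settings modules.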
The built-in web server is intended for development only, so in any situation where you need to run on multiple ports you should really be using Apache or similar.
On the other hand, you should be able to start multiple servers just by starting multiple instances of runserver. As long as you are using a separate database server, I don't think that will cause any extra problems.
If you need more information about the configuration of server/servers you can check out Django documentation related to this topic.