I'm actually a PHP (CodeIgniter) web developer, though I love Python. I just installed Bitnami's Django Stack, which includes Apache, MySQL, PostgreSQL and Python 2.7.9 with Django installed. The installer itself generated a simple Django project.
It looked familiar enough, so I started adding some lines of code to it, but when I save the file and refresh the page (or even restart the browser) I find that the Python instance is still running the old script. The script only updates when I restart the Apache server (which, I believe, is when the Python instance gets terminated).
So, to narrow the problem down, I created a simple view and mapped it to the URL pattern r'^test/':
from django.http import HttpResponse

i = 0

def test_view(request):
    global i
    i += 1
    return HttpResponse(str(i))
Then I found that even when switching between different browsers, the value of i keeps increasing, i.e. the count continues from where the other browser left off.
So, can anyone tell me: is this the default behavior of Django, or is there something wrong with my Apache installation?
This is the default behavior. It might reset if you were running gunicorn and recycling workers after every X requests or so, but I don't remember the details. It works like this because the app keeps running after a request has been served.
It's been a while since I've worked with PHP, but I believe it goes like this: a request comes in, PHP starts running a script, the script returns output and then terminates. Special globals like $_SESSION aside, nothing really survives across requests.
Your Django app starts up and keeps running until something tells it to reload (when running with ./manage.py runserver it reloads whenever it detects changes to the code, which is what you want during development).
If you are interested in per-visitor data, look at session data. It would look something like:
request.session['i'] = request.session.get('i', 0) + 1
You can store data in there for the visitor and it will stick around until they lose their session.
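For completeness, here is a minimal sketch of how the counter view from the question could be rewritten with the session instead of a module-level global (the session key 'i' just mirrors the question's variable name):

from django.http import HttpResponse

def test_view(request):
    # Each visitor gets their own counter, stored in their session
    # rather than in a global shared by the whole process.
    request.session['i'] = request.session.get('i', 0) + 1
    return HttpResponse(str(request.session['i']))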
Related
I am trying to set up a Python script that runs all the time and lets me trigger an action in it with an HTTP request, so that when I type a URL like this into a web browser:
http://localhost:port/open
The script executes a piece of code.
The idea is that I will run this script on a computer on my network and activate the code remotely from elsewhere on the network.
I know this is possible with other programming languages as I've seen it before, but I can't find any documentation on how to do it in python.
Is there an easy way to do this in Python or do I need to look into other languages?
First, you need to pick a web framework. I recommend Flask, since it is lightweight and really quick to get started with.
We begin by initializing the app and setting up a route. your_open_func() (in the code below), which is decorated with @app.route("/open"), will be triggered whenever you send a request to that particular URL (for example http://127.0.0.1:5000/open).
As Flask's website says, flask is fun. The very first example from there (with minor modifications) suits your needs:
from flask import Flask

app = Flask(__name__)

@app.route("/open")
def your_open_func():
    # Do your stuff right here.
    return 'ok'  # Remember to return, or a ValueError will be raised.
To run your app, app.run() is usually enough, but in your case you want other computers on your network to be able to access it, so you should call the run() method like so: app.run(host="0.0.0.0").
By passing that parameter you are making the server publicly available.
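Putting that together, the end of the script could look something like this (port 5000 is just Flask's default and an assumption here; use whatever free port you like):

if __name__ == "__main__":
    # host="0.0.0.0" makes the server reachable from other machines
    # on your network, not only from localhost.
    app.run(host="0.0.0.0", port=5000)

You can then trigger your_open_func() from elsewhere on the network by visiting http://<server-ip>:5000/open in a browser.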
I have a Django application that displays a web page with data from a model, looked up by the primary key passed in the URL. This all works fine, and the Django side of things is working perfectly for the most part.
My question, though (and I have tried multiple approaches, such as using an AppConfig), is how I can arrange for code to be called when the Django server boots up that creates a separate thread, which then monitors an external source and logs valid data from that source into the database as model instances.
I have the threading code written, along with the part that creates the model instance and saves it to the database. My issue is that if I try to use an AppConfig to create the thread, I get a django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet. error and the server does not boot up.
Where would be the appropriate place for this code? Or is my approach to the problem wrong altogether?
Trying to use threading to get around blocking processes like web servers is an exercise in pain. I've done it before and it's fragile and often yields unpredictable results.
A much easier idea is to create a separate worker that runs in a totally different process that you start separately. It would have the same database access and could even use your Django models. This is how hosts like Heroku approach this problem. It comes with the added benefit of being able to be tested separately and doesn't need to run at all while you're working on your main Django application.
These days, with a multitude of virtualization options like Vagrant and containerization options like Docker, running parallel processes and workers is trivial. In the wild they may literally be running on separate servers with your database on yet another server. As was mentioned in the comments, starting a worker process could easily be delegated to a separate Django management command. This, in turn, can be fairly easily turned into separate worker processes by gunicorn on your web server.
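As a rough sketch of that approach (the command name run_monitor, the poll_external_source() helper and the Reading model below are made up for illustration), a management command placed in yourapp/management/commands/run_monitor.py could look like:

import time

from django.core.management.base import BaseCommand

from yourapp.models import Reading  # hypothetical model
from yourapp.monitor import poll_external_source  # hypothetical helper

class Command(BaseCommand):
    help = "Long-running worker that monitors an external source"

    def handle(self, *args, **options):
        # Runs in its own process, so the app registry is fully loaded
        # and the web server is never blocked.
        while True:
            data = poll_external_source()
            if data is not None:
                Reading.objects.create(**data)
            time.sleep(5)

You would then start it separately with ./manage.py run_monitor, supervised by upstart, supervisord or similar, alongside the web server.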
I've been trying to set up my first Django app, using nginx + uWSGI + Django. I tested a simple Hello World view, but when I check the URL I get a different result on each refresh:
AttributeError at /control/
Hello world
The standard Django message:
It worked!
Congratulations on your first Django-powered page.
In the urls.py I have this simple pattern:
urlpatterns = patterns('',
    url(r'^control/?$', hello),
)
In views.py I have:
from django.http import HttpResponse

def hello(request):
    return HttpResponse("Hello world")
Why do the results vary between refreshes?
How you start and stop your uWSGI server might be the issue here.
By default, uWSGI does not reload itself when code changes, so unless it is stopped or reloaded it will keep a cached version of the old Django code and operate with it. But the caching doesn't happen immediately, only on the first request.
But... uWSGI can spawn multiple workers, so it can process multiple requests at once (each worker can handle one request at a time), and each of those workers will have its own cached version of the code.
So one scenario could be: you started your uWSGI server, then made a request (before writing any views, so the standard Django "It worked!" page showed up), and at that point one of the workers cached the code responsible for that response. You then changed something but introduced an error, and on the next request the code producing that error was cached on another worker. Then you fixed the error, and the next worker cached the fixed code, giving you your Hello world response.
And now you're in a situation where every worker has some cached version of your code, and depending on which one handles the request, you get a different result.
The solution is to restart your uWSGI instance, and perhaps add py-autoreload to your uWSGI config so uWSGI reloads automatically whenever the code changes (use that option only in development, never in a production environment).
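For example, in an ini-style uWSGI config that could look something like this (the module, socket and worker values are placeholders for your own project):

[uwsgi]
module = mysite.wsgi:application
socket = /tmp/mysite.sock
workers = 4
# reload automatically when the Python code changes (development only)
py-autoreload = 1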
Another scenario is that you don't have multiple workers, but every time you change something in the code you start a new uWSGI instance without stopping the old one. That causes multiple instances to run at the same time, and when you are using a unix socket they will co-exist, taking turns processing requests. If that's the case, stop all uWSGI instances, and next time stop the old instance before starting a new one, or simply reload the old one.
We have a Python MVC web application built using Werkzeug, Jinja2 and MongoEngine.
In production we have 4 nginx servers set up behind an nginx load balancer. All 4 servers share a common Mongo server, a Redis server and a Sphinx server. We are using uWSGI between nginx and the application.
Now to the curious case.
Whenever we deploy new code, we do a touch xyz.wsgi. For a few hours everything looks fine, but after that we randomly get the error:
'module' object is not callable
I have seen this error before in other Python development scenarios, but what confuses me this time is how totally random it is.
For example, example.com/multimedia?keywords=sdf&s=title&c=21830.
If we refresh, the error is gone. Try another value for any parameter, like keywords=xzy, and there it is again. Refresh and it's gone.
That 'multimedia' module is something we added just recently, so we can assume it's the root cause. But why does the error occur only randomly?
My assumption is that it might have something to do with nginx caching or leftover pyc/pyo files. Could a stray global variable be the cause?
Could some expert hands help me out?
The error probably occurs randomly because it's a runtime error in your code. That is, it doesn't get triggered until a request comes in with exactly the right conditions to follow the code path that raises it.
It's unlikely to be an nginx caching issue. If nginx were caching the response, it would probably return the same result over and over rather than change on reload.
However, you can test this by taking nginx out of the picture and running the requests directly against Werkzeug to see if the same behavior shows up. There's no use debugging nginx unless you can prove that the underlying systems work the way you expect.
It's also probably worth the 30 seconds to search for module() in your code, since that's the most direct interpretation of that error message.
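The classic way to end up with that message is calling a module where you meant to call a function or class defined inside it; the names below are purely illustrative:

import multimedia                     # binds the module object itself
result = multimedia(request)          # TypeError: 'module' object is not callable

from multimedia import multimedia     # binds the callable defined inside the module
result = multimedia(request)          # calls the function/class as intended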
I have two sites running essentially the same codebase, with only slight differences in settings. Each site is built in Django, with a WordPress blog integrated.
Each site needs to import blog posts from WordPress and store them in the Django database. When a user publishes a post, WordPress hits a webhook URL on the Django side, which kicks off a Celery task that grabs the JSON version of the post and imports it.
My initial thought was that each site could run its own instance of manage.py celeryd (each in its own virtualenv), and the two sites would stay out of each other's way. Each is daemonized with a separate upstart script.
But it looks like they're colliding somehow. I can run one at a time successfully, but if both are running, one instance won't receive tasks, or tasks will run with the wrong settings (in this case, each has a WORDPRESS_BLOG_URL setting).
I'm using a Redis queue, if that makes a difference. What am I doing wrong here?
Have you specified the name of the default queue that Celery should use? If you haven't set CELERY_DEFAULT_QUEUE, then both sites will be using the same queue and picking up each other's messages. You need to give this setting a different value for each site to keep the messages separate.
Edit
You're right, CELERY_DEFAULT_QUEUE is only for backends like RabbitMQ. I think you need to set a different Redis database number for each site, using a different number at the end of your broker URL.
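In older Celery/django-celery setups the broker setting is usually spelled BROKER_URL (adjust to whatever your project uses); the point is simply to give each site its own Redis database number, e.g.:

# site A settings.py
BROKER_URL = 'redis://localhost:6379/0'

# site B settings.py
BROKER_URL = 'redis://localhost:6379/1'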
If you are using django-celery then make sure you don't have an instance of celery running outside of your virtualenvs. Then start the celery instance within your virtualenvs using manage.py celeryd like you have done. I recommend setting up supervisord to keep track of your instances.
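If you go the supervisord route, a minimal sketch of one program section might look like this (the program name and paths are placeholders for your own layout; add a second section pointing at the other site's virtualenv and project):

[program:siteA-celery]
command=/srv/siteA/venv/bin/python /srv/siteA/manage.py celeryd
directory=/srv/siteA
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/siteA-celery.log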