Django: Call a method only once when Django starts up - python

I want to initialize some variables (from the database) when Django starts.
I am able to get the data from the database, but the problem is how I should call the initialization method, and it should be called only once.
I tried looking at other pages, but couldn't find an answer.
The code currently looks something like this:
def get_latest_dbx(request, ...):
    # get the data from the database
    ...

def get_latest_x(request):
    get_latest_dbx(request, x, ...)

def startup(request):
    get_latest_x(request)

Some people suggest (Execute code when Django starts ONCE only?) calling that initialization in the top-level urls.py (which looks unusual, since urls.py is supposed to handle URL patterns). There is another workaround, writing a middleware: Where to put Django startup code?
But I believe most people are waiting for the ticket to be resolved.
UPDATE:
Since the OP has updated the question, it seems the middleware way may be better, because he actually needs a request object at startup. All of the startup code could be put in a custom middleware's process_request method, where the request object is available as the first argument. After the startup code runs, a flag can be set to avoid rerunning it later (raising MiddlewareNotUsed only works in __init__, which doesn't receive a request argument).
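A minimal sketch of that idea, using the old-style (pre-1.10) middleware API; StartupMiddleware and the _initialized flag are illustrative names, not anything Django provides:

# middleware.py -- sketch only
class StartupMiddleware(object):
    _initialized = False  # class-level flag, shared by all requests in this process

    def process_request(self, request):
        if not StartupMiddleware._initialized:
            StartupMiddleware._initialized = True
            startup(request)  # the OP's initialization, which needs the request
        return None  # continue normal request processing

The class still has to be listed in MIDDLEWARE_CLASSES, and the flag is per process, so the code may run once per worker rather than once per deployment.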
BTW, the OP's requirement looks a bit odd. On one hand, he needs to initialize some variables when Django starts; on the other hand, he needs a request object during that initialization. But when Django starts there may be no incoming request at all, and even if there were one, it wouldn't make much sense. I guess what he actually needs is to do some initialization per session or per user.

There are some cheats for this. The general approach is to put the initialization code in some special place, so that when the server starts up it executes those files and therefore runs the code.
Have you ever tried putting print 'haha' in the settings.py file? :)
Note: be aware that settings.py runs twice during start-up.
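If you do go the settings.py route, a guard is needed precisely because of that double execution. A rough sketch (run_startup_once and the MYAPP_STARTUP_DONE environment flag are made-up names):

# at the bottom of settings.py -- sketch only, not a recommended pattern
import os

def run_startup_once():
    # settings.py can be executed twice during start-up, so guard the
    # initialization behind a simple process-level flag
    if os.environ.get("MYAPP_STARTUP_DONE") == "1":
        return
    os.environ["MYAPP_STARTUP_DONE"] = "1"
    # ... load the initial values from the database here ...

run_startup_once()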

Related

API Router Depends Not Compatible with Depend()

Description
I am writing a FastAPI app for playing multiplayer chess games, but unfortunately I am stuck on what I am guessing is a circular import problem and can't move forward. Basically, I have a socket manager variable in api/main.py which needs to be imported in api/endpoints/game.py, but in api/main.py I also include the API routers. So whenever I include the API routers, game.py is imported, which imports api/main.py, which in turn imports game.py again to include the router, creating a circular import. This is the complete python traceback: here.
Code
The code can be seen on GitHub; here is the repository link: https://github.com/p0lygun/astounding-arapaimas/tree/feature/api/games/api.
My Tries
I tried all the "hacks" of injecting the module using sys and os, but they caused the same problem. I also tried moving the socket_manager into a completely different file, but that caused the same problem too.
If any other information is required, let me know; I'm posting a question for the first time, so I'm not sure what is needed. Thanks!
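For reference, the usual way to break a cycle like the one described is to give the shared object a module of its own that imports neither main.py nor the endpoint modules. A rough sketch, assuming a hypothetical api/deps.py and leaving the actual socket manager construction elided:

# api/deps.py -- holds the shared object; imports nothing from main or endpoints
socket_manager = None  # populated once by main.py at startup


# api/main.py
from fastapi import FastAPI

from api import deps
from api.endpoints import game

app = FastAPI()
deps.socket_manager = ...  # however the project constructs it, kept elided here
app.include_router(game.router)


# api/endpoints/game.py
from fastapi import APIRouter

from api import deps  # no "from api.main import ..." here, so no cycle

router = APIRouter()

@router.get("/game")
def game_state():
    # reach the shared manager through the deps module at call time
    return {"ready": deps.socket_manager is not None}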
Edits
Edit 1
codemation#0324 on Discord suggested a better way to access the socket manager, like this:
Interestingly, I get the same error afterwards too.
Edit 2
The problem happens where I add the JWT auth dependency to the APIRouter and then use Depends(get_db) in the endpoint definition function. An alternative would be not to use Depends for get_db(), and instead call it normally and pass the result on to the CRUD functions for the time being.
To be clear, when I remove , dependencies=[Depends(auth.JWTBearer())] it works perfectly.
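For reference, the combination described sounds roughly like this; auth.JWTBearer and get_db come from the question, while the import paths and the crud module are assumed:

from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from api import auth, crud          # assumed module layout
from api.database import get_db     # assumed location of the session dependency

# the router-level JWT dependency mentioned above
router = APIRouter(dependencies=[Depends(auth.JWTBearer())])

@router.get("/games")
def list_games(db: Session = Depends(get_db)):
    # reportedly works once dependencies=[Depends(auth.JWTBearer())] is removed above
    return crud.get_games(db)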

Safest place for initialization code

My application has a datastore entry that needs to be initialized with some default values when the app is first deployed. I have a page that lets administrators of the app edit those values later, so it's a problem if the initialization code runs again and overwrites those edits.
I initially tried putting code in appengine_config.py, but that's clearly not correct, as any new values for the entity were overwritten after a few page loads. I thought about putting it in main.py, before the call to run_wsgi_app(), but it's my understanding that main.py is run whenever App Engine creates a new instance of the application. Warmup requests seem to have the same problem as appengine_config.py.
Is there a way to do what I'm trying to do?
Typically you could use appengine_config.py or an explicit handler.
If you use appengine_config.py, your code should check for the values' existence, and only when no value exists should it define a default.
My main concern with run-once initialisation code in appengine_config.py is that the check for the existence of these initial values will be performed on every instance startup. If there is a lot to check, that's an overhead on warm starts that you may not want.
For any initialisation code for a new instance, you will have this problem of checking existence no matter what strategy you adopt; that is, ensuring that whatever process initialises the default values runs at most once.
Personally, I would have a specific handler method that you call only once, and have it check whether it should run before taking any action, in case it is called again.
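A rough sketch of that existence check (the Settings model and its field are hypothetical; substitute the app's actual datastore entity):

# appengine_config.py -- sketch only; this runs on instance start-up
from google.appengine.ext import ndb

class Settings(ndb.Model):
    greeting = ndb.StringProperty()

def ensure_defaults():
    # write defaults only when nothing exists yet, so the admins' later
    # edits are never overwritten by an instance restart
    if Settings.query().get(keys_only=True) is None:
        Settings(greeting="hello").put()

ensure_defaults()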

Django view alter global variable

My Django app contains a loop, which is launched by the following code in urls.py:
import threading
import rf_clicker  # module containing the monitoring loop

def start_serial():
    rfmon = threading.Thread(target=rf_clicker.RFMonitor().run)
    rfmon.daemon = True
    rfmon.start()

start_serial()
The loop inside this subthread references a global variable defined in global_vars.py. I would like to change the value of this variable from a view, but it doesn't seem to work.
From views.py:
import global_vars

def my_view(request):
    global_vars.myvar = 2
    return HttpResponse(...)
How can I let the function inside the loop know that this view has been called?
The loop listens for a signal from a remote and, based on button presses, may save data to the database. There are several views in the web interface which change the settings for the remotes. While these settings are being changed, the state inside the loop needs to be such that data will not be saved.
I agree with Ignacio Vazquez-Abrams, don't use globals.
Especially in your use case. The problem with this approach is that when you deploy your app to a WSGI container or the like, you will have multiple instances of your app running in different processes, so changing a global variable in one process won't change it in the others.
I would also not recommend using threads. If you need a long-running process that handles tasks asynchronously (which seems to be the case), consider looking at Celery (http://celeryproject.org/). It's really good at that.
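A minimal Celery sketch of that suggestion (the broker URL and task name are placeholders; rf_clicker comes from the question):

# tasks.py -- sketch only
from celery import Celery

import rf_clicker  # the question's monitoring code

app = Celery("remotes", broker="redis://localhost:6379/0")  # broker is an assumption

@app.task
def run_rf_monitor():
    # run the long-lived monitor inside a Celery worker instead of a
    # daemon thread started from urls.py
    rf_clicker.RFMonitor().run()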
I will admit to having no experience leveraging them, but if you haven't looked at Django's signaling capabilities, they seem like a prime candidate for this kind of activity (and more appropriate than global variables).
https://docs.djangoproject.com/en/dev/ref/signals/
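A minimal sketch of the signals idea; settings_changed, the receiver, and the threading.Event flag are all made-up names, and, like the globals approach, this only coordinates within a single process:

# signals.py -- sketch only
import threading
import django.dispatch

settings_changed = django.dispatch.Signal()
pause_saving = threading.Event()  # checked by the monitoring loop instead of a bare global

def on_settings_changed(sender, **kwargs):
    # tell the loop to stop saving while settings are being edited
    pause_saving.set()

settings_changed.connect(on_settings_changed)


# views.py
from signals import settings_changed

def my_view(request):
    settings_changed.send(sender=None)
    # ... change the remote's settings, then return an HttpResponse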

Exactly how long do Django/Python/FastCGI Processes last?

I have been working on a website in Django, served using FCGI set up with an autoinstaller, and a custom templating system.
As I have it set up now, each view is an instance of a class, which is bound to a template file at load time, not at time of execution. That is, the class is bound to the template via a decorator:
@include("page/page.xtag")  # bind template to view class
class Page(Base):
    def main(self):  # main end-point to retrieve the web page
        blah = get_some_stuff()
        return self.template.main(data=blah)  # evaluates the template using some data
One thing I have noticed is that since FCGI does not create a new process and reload all the modules/classes on every request, changes to the template do not automatically appear on the website until I force a restart (i.e. by editing/saving a Python file).
The web pages also contain lots of data stored in .txt files in the filesystem. For example, I will load big snippets of code from separate files rather than leaving them in the template (where they clutter it up) or in the database (where it is inconvenient to edit them). Knowing that the process is persistent, I created an ad-hoc memcache by saving the loaded text in a static dictionary on one of my classes:
import os

class XLoad:
    rawCache = {}  # {name: (mtime, text)}

    @staticmethod
    def loadRaw(source):
        latestTime = os.stat(source).st_mtime
        if source in XLoad.rawCache and latestTime <= XLoad.rawCache[source][0]:
            # the cached version of the file is up to date, so use it
            return XLoad.rawCache[source][1]
        else:
            # otherwise read it from disk, put it in the cache and use that
            text = open(source).read()
            XLoad.rawCache[source] = (latestTime, text)
            return text
This sped everything up considerably, because the two dozen or so code snippets which I had been loading one by one from the filesystem were now taken directly from the process's memory. Every time I forced a restart it would be slow for one request while the cache filled up, then become blazing fast again.
My question is, what exactly determines how and when the process gets restarted, the classes and modules reloaded, and the data in my static dictionary purged? Does it depend on my installation of Python, or Django, or Apache, or FastCGI? Is it deterministic, based on time, on number of requests, on load, or pseudo-random? And is it safe to do this sort of in-memory caching (which really is very easy and convenient!), or should I look into some proper way of caching these file reads?
It sounds like you already know this.
When you edit a Python file.
When you restart the server.
When there is a nonrecoverable error.
Also known as "only when it has to".
Caching like this is fine -- you're doing it whenever you store anything in a variable. Since the information is read-only, how could this not be safe? Try not to write changes to a file right after you've restarted the server; but the worst that could happen is one page view gets messed up.
There is a simple way to confirm all this -- logging. Have your decorators log when they are called, and log when you have to load a file from disk.
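For example, a standalone variant of loadRaw with hit/miss logging might look like this (assuming the XLoad class above; the logger name is arbitrary):

import logging
import os

logger = logging.getLogger("xload")

def load_raw_logged(source):
    # same logic as XLoad.loadRaw, with cache hit/miss logging added
    latestTime = os.stat(source).st_mtime
    cached = XLoad.rawCache.get(source)
    if cached and latestTime <= cached[0]:
        logger.info("cache hit for %s", source)
        return cached[1]
    logger.info("cache miss for %s, reading from disk", source)
    text = open(source).read()
    XLoad.rawCache[source] = (latestTime, text)
    return text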
In addition to the already mentioned reasons, Apache can be configured to terminate idle FCGI processes after a specified timespan.

datetime.now() in Django application goes bad

I've had some problems with a Django application after I deployed it. I use Apache + mod_wsgi on an Ubuntu server. A while after I reboot the server the time goes foobar; it's off by around -10 hours. I made a Django view that looks like this:
def servertime(request):
    return HttpResponse(datetime.now())
After I reboot the server and check the URL that shows that view, it first looks all right. Then at one point it sometimes gives the correct time and sometimes not, and later it always gives the wrong time. The server time is correct, though.
Any clues? I've googled it without luck.
Can I see your urls.py as well?
Similar behavior stumped me once before...
What it turned out to be was the way my urls.py called the view. Python ran datetime.now() once and stored that value for future calls, never really calling it again. This is why the Django devs had to implement the ability to pass a function, rather than a function call, as a model field's default value: it would otherwise take the first call of the function and use that until Python is restarted.
Your behavior sounds like the first time is correct because it's the first time the view was called. It was incorrect at times because it got that same date again. Then it was randomly correct again because Apache probably started another worker process, and the craziness probably happens as you bounce between whichever process is handling the request.
I found that putting WSGI in daemon mode works. Not sure why, but it did. It seems like some of the newly created processes get the time screwed up.
datetime.now() is probably being evaluated once, when your class is instantiated. Try removing the parentheses so that the function datetime.now is passed and THEN evaluated later. I had a similar issue setting default values for my DateTimeFields and wrote up my solution here.
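A small illustration of passing the callable rather than its result (the model and field names are arbitrary):

from datetime import datetime
from django.db import models

class Event(models.Model):
    # evaluated once at import time -- every new row gets the same stale value
    created_bad = models.DateTimeField(default=datetime.now())

    # the callable itself is stored and invoked for each new row
    created_good = models.DateTimeField(default=datetime.now)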
Maybe the server is evaluating datetime.now() at server start; try making it lazy through a template, or use a variable in your view.
Take a look at this blog post.
Django sets the system time zone based on your settings variable TIME_ZONE. This may lead to all kinds of confusion when running multiple Django instances with different TIME_ZONE settings.
This is what Django does:
os.environ['TZ'] = self.TIME_ZONE
The above answer, "I found that putting wsgi in daemon mode works", does not work for me...
I think I'm going to stop using Django's built-in TIME_ZONE handling.
You may need to specify the content type, like so:
def servertime(request):
    return HttpResponse(datetime.now(), content_type="text/plain")
Another idea: it may not be working because datetime.now() returns a datetime object. Try this:
def servertime(request):
    return HttpResponse(str(datetime.now()), content_type="text/plain")
Try setting your time zone (the TIME_ZONE variable) in settings.py.
That worked for me.
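For example, in settings.py (the zone name here is just an illustration):

TIME_ZONE = "Europe/Stockholm"  # use the server's actual time zone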
