When source files in my project change, the Django development server reloads. I want to extend this behavior to non-Python source files. I use native SQL queries, which are stored in separate files (e.g. big_select.sql), and I want the server to reload when these files change.
I use Django on Windows.
I have tried adding a .py extension to the files, which didn't work.
Django>=2.2
The autoreloading machinery was given a major overhaul (thanks to @Glenn, who notified about the incoming changes in this comment!), so one no longer has to use undocumented Django features and append files to _cached_filenames. Instead, register a custom signal listener that runs when autoreloading starts:
# apps.py
from django.apps import AppConfig
from django.utils.autoreload import autoreload_started


def my_watchdog(sender, **kwargs):
    sender.watch_file('/tmp/foo.bar')
    # to listen to multiple files, use watch_dir, e.g.
    # sender.watch_dir('/tmp/', '*.bar')


class EggsConfig(AppConfig):
    name = 'eggs'

    def ready(self):
        autoreload_started.connect(my_watchdog)
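To tie this back to the question, the same hook can watch SQL files with watch_dir. This is only a sketch, not part of the original answer: the app name and the queries/ directory are assumptions to adjust to your project layout.

```python
# apps.py -- sketch; 'myapp' and the queries/ directory are placeholders
from pathlib import Path

from django.apps import AppConfig
from django.utils.autoreload import autoreload_started


def watch_sql_files(sender, **kwargs):
    # watch every *.sql file below this app's queries/ directory
    sender.watch_dir(Path(__file__).resolve().parent / 'queries', '*.sql')


class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        autoreload_started.connect(watch_sql_files)
```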
Django<2.2
Django stores the watched file paths in the django.utils.autoreload._cached_filenames list, so adding items to it or removing items from it will force Django to start or stop watching those files.
As for your problem, here is a (kind of hacky) solution. For demo purposes, I adapted apps.py so the file starts being watched right after Django initializes, but feel free to put the code wherever you want. First of all, create the file, as Django can only watch files that already exist:
$ touch /tmp/foo.bar
In your django app:
# apps.py
from django.apps import AppConfig
...
import django.utils.autoreload


class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        ...
        django.utils.autoreload._cached_filenames.append('/tmp/foo.bar')
Now start the server, and in another console modify the watched file:
$ echo baz >> /tmp/foo.bar
The server should trigger an autoreload now.
The accepted answer did not work for me in Django 3.0.7, probably due to changes since then.
I came up with the following after going through the autoreload source:
import os
from pathlib import Path

from django.utils.autoreload import autoreload_started


# Watch .conf files
def watch_extra_files(sender, *args, **kwargs):
    watch = sender.extra_files.add
    # List of file paths to watch
    watch_list = [
        FILE1,
        FILE2,
        FILE3,
        FILE4,
    ]
    for file in watch_list:
        if os.path.exists(file):  # personal use case
            watch(Path(file))


autoreload_started.connect(watch_extra_files)
Related
I create a new Process from a Django app. Can I create a new record in the database from this process?
My code throws an exception:
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
UPD_1
def post(self, request):
    v = Value('b', True)
    proc = Process(target=start, args=(v, request.user,
                                       request.data['stock'], request.data['pair'], '1111'))
    proc.start()


def start(v, user, stock_exchange, pair, msg):
    MyModel.objects.create(user=user, stock_exchange=stock_exchange, pair=pair,
                           date=datetime.now(), message=msg)
You need to initialise the project first. You don't usually have to do this when going through manage.py, because it does it automatically, but a new process won't have had this done for it. So you need to put something like the following at the top of your code:
import django
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
django.setup()
myproject.settings needs to be importable from wherever this code is running, so if it isn't, you might need to add the project directory to sys.path first.
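A sketch of that sys.path tweak, where the /srv/myproject path is a made-up placeholder for wherever your project actually lives:

```python
import sys

# hypothetical project root: the directory that contains the
# myproject/ package; adjust to your actual layout
PROJECT_ROOT = "/srv/myproject"

if PROJECT_ROOT not in sys.path:
    sys.path.insert(0, PROJECT_ROOT)
```

Do this before calling django.setup(), so the settings module can be imported.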
Once this is done you can access your project's models, and use them to access your database, just like you normally would.
I was having a similar problem (also starting a process from a view), and what eventually helped me solve it was this answer.
The solution indicated there is to close your DB connections just before you fork your new Process, so Django can recreate the connection when a query is needed in the new process. Adapted to your code it would be:
def post(self, request):
    v = Value('b', True)

    # close db connections here
    from django import db
    db.connections.close_all()

    # create and fork your process
    proc = Process(target=start, args=(v, request.user,
                                       request.data['stock'], request.data['pair'], '1111'))
    proc.start()
Calling django.setup() did not help in my case; after reading the linked answer, that is probably because forked processes already share the file descriptors, etc. of their parent process (so Django is already set up).
I wanted to create a simple app using webapp2. Because I have Google App Engine installed, and I want to use it outside of GAE, I followed the instructions on this page: http://webapp-improved.appspot.com/tutorials/quickstart.nogae.html
This all went well; my main.py is running and it handles requests correctly. However, I can't access resources directly:
http://localhost:8080/myimage.jpg or http://localhost:8080/mydata.json
always returns a 404 resource-not-found page.
It doesn't matter whether I put the resources in WebServer/Documents/ or in the folder where the virtualenv is active.
Please help! :-)
(I am on a Mac 10.6 with Python 2.7)
(Adapted from this question)
Looks like webapp2 doesn't have a static file handler; you'll have to roll your own. Here's a simple one:
import mimetypes
import os

import webapp2


class StaticFileHandler(webapp2.RequestHandler):
    def get(self, path):
        # edit the next line to change the static files directory
        abs_path = os.path.join(os.path.dirname(__file__), path)
        try:
            f = open(abs_path, 'rb')  # binary mode, so images are served intact
            # fall back to a generic type when guess_type() can't tell
            content_type = mimetypes.guess_type(abs_path)[0] or 'application/octet-stream'
            self.response.headers.add_header('Content-Type', content_type)
            self.response.out.write(f.read())
            f.close()
        except IOError:  # file doesn't exist
            self.response.set_status(404)
And in your app object, add a route for StaticFileHandler:
app = webapp2.WSGIApplication([('/', MainHandler),  # or whatever it's called
                               (r'/static/(.+)', StaticFileHandler),  # add this
                               # other routes
                               ])
Now http://localhost:8080/static/mydata.json (say) will load mydata.json.
Keep in mind that this code is a potential security risk: it allows any visitor to your website to read everything in your static directory, and a crafted path containing .. segments may even reach files outside it. For this reason, keep all your static files in a directory that doesn't contain anything you'd like to restrict access to (e.g. the source code).
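One way to reduce that risk, sketched here with invented names rather than taken from the answer above, is to reject any request whose resolved path escapes the static directory:

```python
import os


def is_safe_path(static_root, requested):
    """Return True only if `requested` still resolves to a location
    inside `static_root` after '..' segments are collapsed."""
    static_root = os.path.abspath(static_root)
    full = os.path.abspath(os.path.join(static_root, requested))
    return full == static_root or full.startswith(static_root + os.sep)
```

In the get() handler you would check is_safe_path(...) before open() and set a 404 status when it returns False.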
I'm using Flask to build a web app. It handles all incoming messages (users send XML messages), finds the right plugin to build a response, and returns that response to the user. The app provides the basic plugins, but I want users to write their own plugins against the APIs I define. Below is the architecture:
plugins
    common_plugins1.py
    common_plugins2.py
templates
actions
myapp.py
But I face several problems:
I only want each user to be able to call his own plugin; other plugins should be invisible to him.
I only want users to call the functions or modules I have defined.
I want to make it scalable.
Is it possible to let users upload plugins written in Python, and have the app load them dynamically?
Thanks for your help!
Below is a plugin example:
# coding: utf-8
import somemodule
from somemodule import *


def do(dmessage, context, default, **option):
    import re
    try:
        _l = default.split('%|placeholder|%')
        message = ModuleRequestVoice(dmessage)
        r = ModuleResponseMusic()
        return r.render(message, _l[0], _l[1], _l[2], _l[2])
    except Exception, e:
        print 'match voice error:%s' % e
        return False
I have a Pyramid web application managed with zc.buildout. In it, I need to read a file on disk, which is located in a sub-directory of buildout directory.
The problem is with determining the path to the file - I do not want to hard-code the absolute path and just providing a relative path does not work when serving the app in production (supposedly because the working directory is different).
So the promising "hooks" I am thinking about are:
the "root" buildout directory, which I can address in buildout.cfg as ${buildout:directory} - however, I can't figure out how I can "export" it so it can be accessed by the Python code
the location of the Paster's .ini file which starts the app
Like @MartijnPieters suggests in a comment on your own answer, I'd use collective.recipe.template to generate an entry in the .ini. I wondered how I could then access that data in my project, so I worked it out :-)
Let's work our way backwards to what you need. First in your view code where you want the buildout directory:
def your_view(request):
    buildout_dir = request.registry.settings['buildout_dir']
    ...
request.registry.settings (see documentation) is a "dictionary-like deployment settings object". See deployment settings; that's the **settings that gets passed into your main method, as in def main(global_config, **settings).
Those settings are what's in the [app:main] part of your deployment.ini or production.ini file. So add the buildout directory there:
[app:main]
use = egg:your_app
buildout_dir = /home/you/wherever/it/is
pyramid.reload_templates = true
pyramid.debug_authorization = false
...
But, and this is the last step, you don't want that hardcoded path in there. So generate the .ini from a template. The template development.ini.in uses a ${partname:variable} expansion language; in your case you need ${buildout:directory}:
[app:main]
use = egg:your_app
buildout_dir = ${buildout:directory}
#              ^^^^^^^^^^^^^^^^^^^^^
pyramid.reload_templates = true
pyramid.debug_authorization = false
...
Add a buildout part in buildout.cfg to generate development.ini from development.ini.in:
[buildout]
...
parts =
...
inifile
...
[inifile]
recipe = collective.recipe.template
input = ${buildout:directory}/development.ini.in
output = ${buildout:directory}/development.ini
Note that you can do all sorts of cool stuff with collective.recipe.template. ${serverconfig:portnumber} to generate a matching port number in your production.ini and in your your_site_name.nginx.conf, for instance. Have fun!
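A sketch of that port-number trick (the [serverconfig] part name and the file names are invented for illustration): define the value once in buildout.cfg and generate both files from templates that reference it.

```
[serverconfig]
portnumber = 8080

[nginxconf]
recipe = collective.recipe.template
input = ${buildout:directory}/your_site_name.nginx.conf.in
output = ${buildout:directory}/your_site_name.nginx.conf
```

Both production.ini.in and your_site_name.nginx.conf.in can then contain ${serverconfig:portnumber}, and the rendered files stay in sync.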
If the path to the file relative to the buildout root or location of paster.ini is always the same, which it seems it is from your question, you could set it in paster.ini:
[app:main]
...
config_file = %(here)s/path/to/file.txt
Then access it from the registry as in Reinout's answer:
def your_view(request):
    config_file = request.registry.settings['config_file']
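The %(here)s above is Paste Deploy's ConfigParser-style interpolation; it expands to the directory holding the .ini file. A rough stdlib-only illustration of the mechanism (the /srv/myapp path is invented):

```python
import configparser

# Paste Deploy passes the ini file's directory in as the 'here' default;
# ConfigParser then interpolates %(here)s when the value is read
ini_text = """
[app:main]
config_file = %(here)s/path/to/file.txt
"""

parser = configparser.ConfigParser(defaults={"here": "/srv/myapp"})
parser.read_string(ini_text)
print(parser.get("app:main", "config_file"))  # -> /srv/myapp/path/to/file.txt
```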
Here's a rather clumsy solution I've devised:
In buildout.cfg I used the extra-paths option of zc.recipe.egg to add the buildout directory to sys.path:
....
[webserver]
recipe = zc.recipe.egg:scripts
eggs = ${buildout:eggs}
extra-paths = ${buildout:directory}
then I put a file called app_config.py into the buildout directory:
# This remembers the root of the installation (similar to ${buildout:directory})
# so we can import it and use it where we need access to the filesystem.
# Note: we could use os.getcwd() for that, but it feels kinda wonky.
# This is not directly related to Celery; we may want to move it somewhere else.
import os.path

INSTALLATION_ROOT = os.path.dirname(__file__)
Now we can import it in our Python code:
import os.path

from app_config import INSTALLATION_ROOT

filename = os.path.join(INSTALLATION_ROOT, "somefile.ext")
do_stuff_with_file(filename)
If anyone knows a nicer solution, you're welcome :)
I want to use pyinotify to watch changes on the filesystem. If a file has changed, I want to update my database file accordingly (re-read tags, other information...)
I put the following code in my app's signals.py
import pyinotify
....

# create filesystem watcher in separate thread
wm = pyinotify.WatchManager()
notifier = pyinotify.ThreadedNotifier(wm, ProcessInotifyEvent())
# notifier.setDaemon(True)
notifier.start()

mask = pyinotify.IN_CLOSE_WRITE | pyinotify.IN_CREATE | pyinotify.IN_MOVED_TO | pyinotify.IN_MOVED_FROM
dbgprint("Adding path to WatchManager:", settings.MUSIC_PATH)
wdd = wm.add_watch(settings.MUSIC_PATH, mask, rec=True, auto_add=True)


def connect_all():
    """
    to be called from models.py
    """
    rescan_start.connect(rescan_start_callback)
    upload_done.connect(upload_done_callback)
    ....
This works great when Django is run with ''./manage.py runserver''. However, when run as ''./manage.py runfcgi'', Django won't start. There is no error message; it just hangs and won't daemonize, probably at the line ''notifier.start()''.
When I run ''./manage.py runfcgi method=threaded'' and enable the line ''notifier.setDaemon(True)'', the notifier thread is stopped (isAlive() == False).
What is the correct way to start endless threads together with Django when it is run as fcgi? Is it even possible?
Well, duh. Never start your own endless thread alongside Django. I use celery instead, which is a better fit for running such background work.