I have a Pyramid web application managed with zc.buildout. In it, I need to read a file on disk, which is located in a sub-directory of buildout directory.
The problem is determining the path to the file: I do not want to hard-code an absolute path, and just providing a relative path does not work when serving the app in production (presumably because the working directory is different).
So the promising "hooks" I am thinking about are:
the "root" buildout directory, which I can address in buildout.cfg as ${buildout:directory} - however, I can't figure out how I can "export" it so it can be accessed by the Python code
the location of the Paster's .ini file which starts the app
Like @MartijnPieters suggests in a comment on your own answer, I'd use collective.recipe.template to generate an entry in the .ini. I wondered myself how I could then access that data in my project, so I worked it out :-)
Let's work our way backwards to what you need. First in your view code where you want the buildout directory:
def your_view(request):
    buildout_dir = request.registry.settings['buildout_dir']
    ...
request.registry.settings (see the documentation) is a "dictionary-like deployment settings object". Those deployment settings are the **settings that get passed into your main function: def main(global_config, **settings).
Those settings are what's in the [app:main] part of your deployment.ini or production.ini file. So add the buildout directory there:
[app:main]
use = egg:your_app
buildout_dir = /home/you/wherever/it/is
pyramid.reload_templates = true
pyramid.debug_authorization = false
...
But, and this is the last step, you don't want that hardcoded path in there. So generate the .ini from a template. The template development.ini.in uses a ${partname:variable} expansion language; in your case you need ${buildout:directory}:
[app:main]
use = egg:your_app
buildout_dir = ${buildout:directory}
#              ^^^^^^^^^^^^^^^^^^^^
pyramid.reload_templates = true
pyramid.debug_authorization = false
...
Add a buildout part in buildout.cfg to generate development.ini from development.ini.in:
[buildout]
...
parts =
    ...
    inifile
    ...
[inifile]
recipe = collective.recipe.template
input = ${buildout:directory}/development.ini.in
output = ${buildout:directory}/development.ini
Note that you can do all sorts of cool stuff with collective.recipe.template. Use ${serverconfig:portnumber} to generate a matching port number in your production.ini and in your your_site_name.nginx.conf, for instance. Have fun!
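For example, a hypothetical [serverconfig] part (the part and file names here are made up, not from the question) could feed the same value into several generated files:

```ini
[serverconfig]
portnumber = 8080

[nginx-conf]
recipe = collective.recipe.template
input = ${buildout:directory}/your_site_name.nginx.conf.in
output = ${buildout:directory}/your_site_name.nginx.conf
```

Both your_site_name.nginx.conf.in and production.ini.in can then reference ${serverconfig:portnumber} and will always agree on the port.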
If the path to the file relative to the buildout root (or to the location of paster.ini) is always the same, which it seems to be from your question, you could set it in paster.ini:
[app:main]
...
config_file = %(here)s/path/to/file.txt
Then access it from the registry as in Reinout's answer:
def your_view(request):
    config_file = request.registry.settings['config_file']
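PasteDeploy expands %(here)s to the directory containing the .ini file. To illustrate just the substitution mechanism, here is a sketch using the stdlib configparser with a hardcoded 'here' value (this mimics, but is not, PasteDeploy's own machinery):

```python
from configparser import ConfigParser

ini = """
[app:main]
config_file = %(here)s/path/to/file.txt
"""

# PasteDeploy would set 'here' to the .ini file's directory; we fake it
parser = ConfigParser(defaults={'here': '/srv/myapp'})
parser.read_string(ini)
print(parser.get('app:main', 'config_file'))
# -> /srv/myapp/path/to/file.txt
```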
Here's a rather clumsy solution I've devised:
In buildout.cfg I used the extra-paths option of zc.recipe.egg to add the buildout directory to sys.path:

...
[webserver]
recipe = zc.recipe.egg:scripts
eggs = ${buildout:eggs}
extra-paths = ${buildout:directory}
Then I put a file called app_config.py into the buildout directory:
# This remembers the root of the installation (similar to ${buildout:directory})
# so we can import it and use it wherever we need filesystem access.
# Note: we could use os.getcwd() for that, but it feels kinda wonky.
import os.path

INSTALLATION_ROOT = os.path.dirname(__file__)
Now we can import it in our Python code:
import os.path

from app_config import INSTALLATION_ROOT

filename = os.path.join(INSTALLATION_ROOT, "somefile.ext")
do_stuff_with_file(filename)
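One wrinkle with app_config.py above: __file__ may be a relative path depending on how the module was loaded, so os.path.dirname(__file__) can come out empty or break after an os.chdir(). Resolving it once with abspath at import time (a small defensive tweak, not required by buildout) avoids that:

```python
import os.path

# resolve to an absolute path once, at import time, before anyone can chdir()
INSTALLATION_ROOT = os.path.dirname(os.path.abspath(__file__))
```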
If anyone knows a nicer solution you're welcome :)
Related
When source files in my project change, the django server reloads. I want to extend this to non-Python source files. I use native SQL queries, which are stored in separate files (e.g. big_select.sql), and I want the server to reload when these files change.
I use django on Windows.
I have tried adding a .py extension, which didn't work.
Django>=2.2
Autoreloading was given a major overhaul in Django 2.2 (thanks to @Glenn who notified about the incoming changes in this comment!), so one no longer has to use undocumented Django features and append files to _cached_filenames. Instead, register a custom signal listener that listens for autoreload startup:
# apps.py
from django.apps import AppConfig
from django.utils.autoreload import autoreload_started


def my_watchdog(sender, **kwargs):
    sender.watch_file('/tmp/foo.bar')
    # to watch several files matching a pattern, use watch_dir, e.g.
    # sender.watch_dir('/tmp/', '*.bar')


class EggsConfig(AppConfig):
    name = 'eggs'

    def ready(self):
        autoreload_started.connect(my_watchdog)
Django<2.2
Django stores the watched filepaths in the django.utils.autoreload._cached_filenames list, so adding to or removing items from it will force django to start or stop watching files.
As for your problem, here is a (somewhat hacky) solution. For demo purposes I adapted apps.py so the file is watched right after Django initializes, but feel free to put the code wherever you want. First of all, create the file, as Django can only watch files that already exist:
$ touch /tmp/foo.bar
In your django app:
# apps.py
from django.apps import AppConfig
...
import django.utils.autoreload


class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        ...
        django.utils.autoreload._cached_filenames.append('/tmp/foo.bar')
Now start the server and, in another console, modify the watched file:
$ echo baz >> /tmp/foo.bar
The server should trigger an autoreload now.
The accepted answer did not work in Django 3.0.7, probably due to changes since then. I came up with the following after going through autoreload:
import os
from pathlib import Path

from django.utils.autoreload import autoreload_started


# Watch .conf files
def watch_extra_files(sender, *args, **kwargs):
    watch = sender.extra_files.add
    # List of file paths to watch
    watch_list = [
        FILE1,
        FILE2,
        FILE3,
        FILE4,
    ]
    for file in watch_list:
        if os.path.exists(file):  # personal use case
            watch(Path(file))


autoreload_started.connect(watch_extra_files)
I need to localize my pyramid application, but I have an issue.
The setup.py file contains the following message_extractors variable:
message_extractors = {'.': [
    ('templates/**.html', 'mako', None),
    ('templates/**.mako', 'mako', None),
    ('static/**', 'ignore', None),
]},
I've created the directory my_package_name/locale. In __init__.py I added config.add_translation_dirs('my_package_name:locale').
But, when I run
(my_virtual_env) $ python setup.py extract_messages
I receive:
running extract_messages
error: no output file specified
If I understand correctly, extract_messages should not require the --output-file parameter in this case.
What is the reason for this behavior?
You also need a setup.cfg in the same directory as setup.py, containing roughly this:
[compile_catalog]
directory = YOURPROJECT/locale
domain = YOURPROJECT
statistics = true
[extract_messages]
add_comments = TRANSLATORS:
output_file = YOURPROJECT/locale/YOURPROJECT.pot
width = 80
[init_catalog]
domain = YOURPROJECT
input_file = YOURPROJECT/locale/YOURPROJECT.pot
output_dir = YOURPROJECT/locale
[update_catalog]
domain = YOURPROJECT
input_file = YOURPROJECT/locale/YOURPROJECT.pot
output_dir = YOURPROJECT/locale
previous = true
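With that setup.cfg in place, each Babel command reads its options from the matching section, which is why extract_messages no longer needs --output-file on the command line. The usual cycle looks like this ('ru' is an example locale, not from the question):

```shell
python setup.py extract_messages        # collect strings into locale/YOURPROJECT.pot
python setup.py init_catalog -l ru      # create the ru catalog (first time only)
python setup.py update_catalog -l ru    # merge new strings into existing .po files
python setup.py compile_catalog         # compile .po -> .mo for runtime use
```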
Of course, you will replace YOURPROJECT with your project name. I think the setup.cfg file used to be part of the project scaffolds before Pyramid 1.5, but now that Pyramid uses lingua and gettext instead of Babel, it is not needed anymore. You might be better off following the current Pyramid docs:
http://pyramid.readthedocs.org/en/latest/narr/i18n.html
I wanted to create a simple app using webapp2. Because I have Google App Engine installed but want to use webapp2 outside of GAE, I followed the instructions on this page: http://webapp-improved.appspot.com/tutorials/quickstart.nogae.html
This all went well, my main.py is running, it is handling requests correctly. However, I can't access resources directly.
http://localhost:8080/myimage.jpg or http://localhost:8080/mydata.json
always returns a 404 resource not found page.
It doesn't matter if I put the resources on the WebServer/Documents/ or in the folder where the virtualenv is active.
Please help! :-)
(I am on a Mac 10.6 with Python 2.7)
(Adapted from this question)
Looks like webapp2 doesn't have a static file handler; you'll have to roll your own. Here's a simple one:
import mimetypes
import os

import webapp2


class StaticFileHandler(webapp2.RequestHandler):
    def get(self, path):
        # edit the next line to change the static files directory
        abs_path = os.path.join(os.path.dirname(__file__), path)
        try:
            with open(abs_path, 'rb') as f:
                # guess_type can return None; fall back to a generic type
                content_type = mimetypes.guess_type(abs_path)[0] or 'application/octet-stream'
                self.response.headers.add_header('Content-Type', content_type)
                self.response.out.write(f.read())
        except IOError:  # file doesn't exist
            self.response.set_status(404)
And in your app object, add a route for StaticFileHandler:
app = webapp2.WSGIApplication([
    ('/', MainHandler),  # or whatever it's called
    (r'/static/(.+)', StaticFileHandler),  # add this
    # other routes
])
Now http://localhost:8080/static/mydata.json (say) will load mydata.json.
Keep in mind that this code is a potential security risk: it allows any visitor to your website to read everything in your static directory. For this reason, keep all your static files in a directory that doesn't contain anything you'd like to restrict access to (e.g. the source code).
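A related risk the handler above doesn't address is path traversal: a request for /static/../main.py would escape the static directory. A small guard can normalize the path and refuse anything that escapes the root (safe_join is a hypothetical helper name, not part of webapp2):

```python
import os


def safe_join(static_root, path):
    """Join `path` onto `static_root`, returning None if it escapes the root."""
    candidate = os.path.normpath(os.path.join(static_root, path))
    # joining '' appends a trailing separator, so 'rootx/...' can't sneak past
    if not candidate.startswith(os.path.join(static_root, '')):
        return None
    return candidate
```

In get(), compute abs_path with safe_join() and answer with a 404 when it returns None.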
from docutils.parsers.rst.directives.images import Figure


class MyFigure(Figure):
    def run(self):
        # here I need to read the 'thumbnails_folder' setting
        pass


def setup(app):
    app.add_config_value('thumbnails_folder', '_thumbnails', 'env')
How can I access the config value in .run()? I read the sources of sphinx-contrib extensions, but did not see it done the way I need; they access conf.py in ways I can't use. Or should I do it in a different manner?
All I want to do is translate this
.. figure:: image/image.jpg
into this:
.. image:: image/thumbnails/image.jpg
   :target: image/image.jpg
Here's the extension code
(the thumbnail is generated with PIL). I also want to put the :target: into the downloadable files (as far as I can see, only builder instances can do this).
The build environment holds a reference to the Config object. Configuration variables can be retrieved from this object:
def run(self):
    env = self.state.document.settings.env  # sphinx.environment.BuildEnvironment
    config = env.config  # sphinx.config.Config
    folder = config["thumbnails_folder"]
    ...
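Putting it together for the figure-to-thumbnail rewrite: run() can read thumbnails_folder, point the image at the thumbnail, and set :target: to the original. The path juggling itself is just (the helper name is mine; reST image paths are POSIX-style):

```python
import posixpath


def thumbnail_uri(original, folder):
    """Map 'image/image.jpg' to 'image/<folder>/image.jpg'."""
    head, tail = posixpath.split(original)
    return posixpath.join(head, folder, tail)
```

Inside run() you would then do something like self.options['target'] = self.arguments[0] followed by self.arguments[0] = thumbnail_uri(self.arguments[0], folder) before delegating to Figure.run().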
I've been thinking about ways to automatically setup configuration in my Python applications.
I usually use the following type of approach:
'''config.py'''

class Config(object):
    MAGIC_NUMBER = 44
    DEBUG = True

class Development(Config):
    LOG_LEVEL = 'DEBUG'

class Production(Config):
    DEBUG = False
    REPORT_EMAIL_TO = ["ceo@example.com", "chief_ass_kicker@example.com"]
Typically, when I'm running the app in different ways I could do something like:
from config import Development, Production

def __init__(self, config='Development'):
    if config == 'production':
        self.conf = Production
    else:
        self.conf = Development

def do_something(self):
    if self.conf.DEBUG:
        pass
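The if/else dispatch above can also be written as a lookup by class name. A self-contained sketch (the classes are restated from config.py so it runs on its own; pick_config is my name for the helper):

```python
class Config(object):
    DEBUG = True

class Development(Config):
    LOG_LEVEL = 'DEBUG'

class Production(Config):
    DEBUG = False


def pick_config(name, default=Development):
    """Return the Config subclass called `name`, or `default` if there is none."""
    cls = globals().get(name)
    if isinstance(cls, type) and issubclass(cls, Config):
        return cls
    return default
```

This is the same idea the branch-based lookup uses: turn a string (a config name, or a branch name) into a class.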
I like working like this because it makes sense; however, I'm wondering if I can somehow integrate this into my git workflow too.
A lot of my applications have separate scripts, or modules that can be run alone, thus there isn't always a monolithic application to inherit configurations from some root location.
It would be cool if a lot of these scripts and separate modules could check which branch is currently checked out and make their default configuration decisions based on that, e.g. by looking for a class in config.py that shares its name with the currently checked-out branch.
Is that possible, and what's the cleanest way to achieve it?
Is it a good/bad idea?
I'd prefer spinlok's method, but yes, you can do pretty much anything you want in your __init__, e.g.:
import inspect, subprocess, sys

def __init__(self, config='via_git'):
    if config == 'via_git':
        gitsays = subprocess.check_output(['git', 'symbolic-ref', 'HEAD'])
        cbranch = gitsays.rstrip('\n').replace('refs/heads/', '', 1)
        # now you know which branch you're on...
        tbranch = cbranch.title()  # foo -> Foo, for class name conventions
        classes = dict(inspect.getmembers(sys.modules[__name__], inspect.isclass))
        if tbranch in classes:
            print 'automatically using', tbranch
            self.conf = classes[tbranch]
        else:
            print 'on branch', cbranch, 'so falling back to Production'
            self.conf = Production
    elif config == 'production':
        self.conf = Production
    else:
        self.conf = Development
This is, um, "slightly tested" (Python 2.7). Note that check_output will raise an exception if git can't resolve a symbolic ref, and the result also depends on the working directory. You can of course use other subprocess functions (to provide a different cwd, for instance).
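If you want the lookup to degrade gracefully outside a git checkout, or on a machine without git, you can wrap the call. A sketch (the 'master' fallback is my choice; --short spares you the refs/heads/ stripping):

```python
import subprocess


def current_branch(default='master'):
    """Name of the checked-out branch, or `default` when git is unavailable."""
    try:
        out = subprocess.check_output(
            ['git', 'symbolic-ref', '--short', 'HEAD'],
            stderr=subprocess.STDOUT)
    except (OSError, subprocess.CalledProcessError):
        # git missing, not a repo, or detached HEAD
        return default
    return out.decode().strip()
```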