I'm working on a system built in Pyramid, and one of the views is for importing data. I'd like to make a script that calls that view. I created a console script `import_data` in my setup.py, and it is successfully added to my bin directory. In the `import_data` function I think I should be using the `pyramid.paster` `bootstrap` function, but when I pass my ini file to it, bootstrap responds with `transaction.interfaces.NoTransaction`. I've read that when using bootstrap I must also set the transaction manager, but this also raised `NoTransaction`.
import sys

from pyramid.paster import bootstrap


def import_data():
    with bootstrap(sys.argv[1]) as env:
        with env['request'].tm:
            # Post request to pyramid view.
            ...
If anybody could steer me in the right direction I'd very much appreciate it.
You can use `prequest` to run a "request" from the command line, with something like `prequest development.ini /your/view/path -m POST` (adjust the path and method for your view). Alternatively, look at:
https://github.com/Pylons/pyramid-cookiecutter-starter/blob/latest/%7B%7Bcookiecutter.repo_name%7D%7D/%7B%7Bcookiecutter.repo_name%7D%7D/sqlalchemy_scripts/initialize_db.py#L28
for an example of a script that touches the database.
Gurus, Wizards, Geeks
I am tasked with providing Python Flask apps (more generally, web apps written in Python) a way to reload properties on the fly.
Specifically, my team and I currently deploy Python apps with an {env}.properties file that contains various environment-specific configurations in a key-value format (YAML, for instance). Ideally, these properties are reloaded by the app when they change: if a secondary application updates the {env}.properties file, the app should also be able to read and use the new values.
Currently, we read {env}.properties at startup, and the values are accessed via functions stored in a context.py. I could write a function that periodically updates the variables, but before starting an endeavor like that, I thought I would consult the collective to see if someone else has solved this for Django or Flask projects (it seems like a reasonable request for feature flags, etc.).
One such pattern is the WSGI application factory pattern.
In short, you define a function that instantiates the application object. This pattern works with all WSGI-based frameworks.
The Flask docs explain application factories pretty well.
This allows you to define the application dynamically on-the-fly, without the need to redeploy or deploy many configurations of an application. You can change just about anything about the app this way, including configuration, routes, middlewares, and more.
A simple example of this would be something like:
from flask import Flask


def get_settings(env):
    """Get the (current, updated) application settings."""
    ...
    return settings


def create_app(env: str):
    if env not in ('dev', 'staging', 'production'):
        raise ValueError(f'{env} is not a valid environment')
    app = Flask(__name__)
    app.config.update(get_settings(env))
    return app
Then, you could set the FLASK_APP environment variable to something like "myapp:create_app('dev')" and that would do it. This is also how you would specify the app for servers like gunicorn.
The get_settings function should be written to return the newest settings. It could even do something like retrieve settings from an external source like S3, a config service, or anything.
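As a concrete illustration of such a `get_settings`, here is a minimal stdlib-only sketch that re-reads a `key=value` properties file whenever its modification time changes. The file format and caching strategy are my assumptions for illustration, not part of the original setup:

```python
import os

# Module-level cache: last seen mtime plus the parsed settings.
_cache = {'mtime': None, 'settings': {}}


def get_settings(path):
    """Return the newest settings from a key=value properties file.

    Re-parses the file only when its modification time changes, so
    callers can invoke this cheaply on every request.
    """
    mtime = os.path.getmtime(path)
    if mtime != _cache['mtime']:
        settings = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith('#'):
                    key, _, value = line.partition('=')
                    settings[key.strip()] = value.strip()
        _cache['mtime'] = mtime
        _cache['settings'] = settings
    return _cache['settings']
```

With this shape, the factory (or any request handler) always sees the values currently on disk, which is what the "secondary application updates the file" scenario in the question requires.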
I need to execute some housekeeping code, but only in the development or production environment. Unfortunately, all management commands execute similarly to runserver. Is there any clean way to determine the execution environment and run the code selectively?
I saw some solutions like `'runserver' in sys.argv`,
but that does not work for production, and it does not look very clean.
Does Django provide anything to distinguish all the different scenarios the code executes in?
Edit
The real problem is that we need to initialise our local cache with some frequently accessed data once the apps are loaded. In general, I want to query the DB for some specific information and cache it (currently in memory). The issue is that when the code tries to query the DB, the table may not exist yet; in fact, there may be no migration files at all. So when I run makemigrations/migrate, it will run this code, which tries to read from the DB and throws an error saying the table does not exist. But if I can't run makemigrations/migrate, there will be no table: it is a loop I'm trying to avoid. This part of the code runs for all management commands, but I would like to restrict its execution to when the app is actually serving requests (that is, when the cache is needed) and not to any management commands (including user-defined ones).
```python
from django.apps import AppConfig

from my_app.signals import app_created


class MyAppConfig(AppConfig):
    name = 'my_app'

    def ready(self):
        import my_app.signals

        # Code below should be executed only in actual app execution,
        # and not in makemigrations/migrate etc. management commands.
        app_created.send(sender=MyAppConfig, sent_by="MyApp")
```
Q) Send the app-created signal only for actual app execution, not for executions due to management commands like makemigrations, migrate, etc.
There are many different ways to do this, but generally when I create a production (or staging, or development) server I set an environment variable, and dynamically decide which settings file to load based on that variable.
Imagine something like this in a Django settings file:
import os
ENVIRONMENT = os.environ.get('ENVIRONMENT', 'development')
Then you can use:
from django.conf import settings

if settings.ENVIRONMENT == 'production':
    # do something only on production
    ...
Since I did not get a convincing answer and I managed to pull off a solution, although not 100% clean, I thought I would share the solution I ended up with.
import sys

from django.conf import settings

if (settings.DEBUG and 'runserver' in sys.argv) or not settings.DEBUG:
    """your code to run only in development and production"""
The rationale is that you run the code unconditionally when not in DEBUG mode; but when in DEBUG mode, you check whether the process was started with runserver in its arguments.
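The check above can be factored into a small, testable helper (the function name is mine, not from the original):

```python
import sys


def should_run_startup_code(debug, argv=None):
    """Return True when the housekeeping code should run.

    Mirrors the check above: always run outside DEBUG mode; in DEBUG
    mode, run only when the process was started via `runserver`.
    """
    argv = sys.argv if argv is None else argv
    return (debug and 'runserver' in argv) or not debug
```

Taking `argv` as a parameter (defaulting to `sys.argv`) keeps the logic easy to unit-test without spawning real management commands.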
The web app I am working on needs to perform a first-time setup or initialization; where is a good place to put that logic? I don't want to check whether a configuration/setup exists on each request to /, or before every request, as that's not very performant.
I was thinking of checking for a sane configuration when the app starts up, then changing the default route of / to a settings/setup page, and changing it back afterwards. But that's a bit like self-modifying code.
This is required since the web app needs settings, and then has to index stuff based on those settings, which takes a while. So even after the settings/setup has been made, any following requests will need to see a "wait, indexing" message until the indexing is done.
I'm using Flask, but I think this is relevant for Django as well.
EDIT: I'm thinking like this now:
When starting up, check appconfig.py for MY_SETTINGS; if it is not there, add a default from config.py and put a status=firstrun object on app.config, and also change the / route to the setup view function. The setup view function will then check the app.config status object and perform the setup of settings as necessary after user input. When the settings are okay, remove the status (or change it to "indexing"); then I can have a before_request function check the app.config status just to flash a message about it.
Or should I use flask.g instead of app.config to store the status?
The proper way is to create a CLI script, preferably via Flask-Script if you use Flask (in Django it would be the default manage.py, where you can easily add custom commands too), and define a function such as init or install:
from flaskext.script import Manager
from ... import app

manager = Manager(app)

@manager.command
def init():
    """Initialize the application"""
    # your code here
Then you mention it in your documentation, and can safely assume that it has been run by the time the web application itself is accessed.
I am having problems logging in my Django project.
In my view.py in an "application" I am doing the following:
import logging

logging.basicConfig(filename="django.log", level=logging.DEBUG)

def request(request):
    logging.debug("test debugging")
with a django.log file in the same directory as the view.py file.
Now when making a request from the browser, I keep getting a 500 Internal Server Error (as shown in Firebug). I can get logging to work just fine when I run it through the interactive Python shell or execute it from a .py file like the following:
import logging

logging.basicConfig(filename="django.log", level=logging.DEBUG)

def testLogging():
    logging.debug("test debugging")

if __name__ == "__main__":
    testLogging()
and then executing `python nameOfFile.py`.
What am I doing wrong? I am running Django 1.1.1 and Python 2.6.5. Maybe I should upgrade Django?
Could it be related to permissions? I can imagine that in a web-server environment, writing to a file may be restricted. In truth, I don't know, but please check the official Django logging documentation: it uses the standard logging module, but you may have to configure it differently.
Is that your entire view? It would have been helpful to post the actual traceback, but note that a view must return an HttpResponse, even if it's an empty one. So if that is your entire view, the 500 error is probably happening because you're not returning anything.
Add return HttpResponse() to the end of that view, and if that still doesn't work, please post the traceback itself.
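Separately from the missing return value, calling `basicConfig` at import time inside a view module is fragile; the usual pattern is a named, module-level logger that the process-wide configuration (in Django, normally the LOGGING setting) picks up. A minimal stdlib sketch, with the file name and function purely illustrative:

```python
import logging

# Configure once, at process startup. In a real Django project this
# would normally live in settings (the LOGGING dict), not in a view.
logging.basicConfig(filename="django.log", level=logging.DEBUG)

# A named logger per module, instead of logging via the root logger.
logger = logging.getLogger(__name__)


def do_work():
    """Stand-in for a view body: log something, then return a value."""
    logger.debug("test debugging")
    return "ok"
```

The key point mirrors the answer above: the function must return something (in a Django view, an HttpResponse), and logging through a named logger keeps configuration out of request-handling code.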
I have a Pylons app where I would like to move some of the logic to a separate batch process. I've been running it under the main app for testing, but it is going to be doing a lot of work in the database, and I'd like it to be a separate process that will be running in the background constantly. The main pylons app will submit jobs into the database, and the new process will do the work requested in each job.
How can I launch a controller as a stand alone script?
I currently have:
from warehouse2.controllers import importServer
importServer.runServer(60)
and in the controller file, but not part of the controller class:
def runServer(sleep_secs):
    try:
        imp = ImportserverController()
        while True:
            imp.runImport()
            sleepFor(sleep_secs)
    except Exception, e:
        log.info("Unexpected error: %s" % sys.exc_info()[0])
        log.info(e)
But starting ImportServer.py on the command line results in:
2008-09-25 12:31:12.687000 Could not locate a bind configured on mapper Mapper|ImportJob|n_imports, SQL expression or this Session
If you want to load parts of a Pylons app, such as the models, from outside Pylons, load the Pylons app in the script first:
from paste.deploy import appconfig
from pylons import config
from YOURPROJ.config.environment import load_environment
conf = appconfig('config:development.ini', relative_to='.')
load_environment(conf.global_conf, conf.local_conf)
That will load the Pylons app, which sets up most of the state so that you can proceed to use the SQLAlchemy models and Session to work with the database.
Note that if your code uses the Pylons globals such as request/response/etc., that won't work, since those only exist while a request is in progress.
I'm redacting my response and upvoting the other answer by Ben Bangert, as it's the correct one. I answered and have since learned the correct way (mentioned below). If you really want to, check out the history of this answer to see the wrong (but working) solution I originally proposed.