I've extracted several of my SQLAlchemy models to a separate, installable package (../lib/site-packages), to use across several applications. So I only need to:
from models_package import MyModel
from any application needing access to these models.
Everything is OK so far, except I cannot find a satisfactory way of getting at the application-dependent config variables used by some of the models, which may vary from application to application. So some models need to be aware of certain variables, where previously I used the application they were in.
Neither
current_app.config['XYZ']
nor
config = LocalProxy(lambda: current_app.config['XYZ'])
has worked (both fail with "outside of application context" errors), so I'm stuck right now. Maybe this is poor programming and/or design on my behalf, so how do I clear this up? There must be some way, but I haven't reasoned my way to it yet.
SOLUTION:
As long as you avoid anything that would be evaluated at module load time (like a constant containing an API key), both of the above should work, and they do. Anything that dereferences those values outside of model-in-the-application use will of course error; methods that return the values you need are fine.
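For illustration, a minimal sketch of the distinction (XYZ stands in for whatever key each application sets; MyModel is just a stand-in class):

from flask import current_app

# Evaluated at import time, before any application context exists:
# API_KEY = current_app.config['XYZ']  # would raise "Working outside of
#                                      # application context"

class MyModel(object):
    def api_key(self):
        # Evaluated at call time, inside the application context: works
        return current_app.config['XYZ']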
If you are using a configuration pattern utilising classes and inheritance as described here, you could simply import your config classes with their respective properties and access them anywhere you want:
class Config(object):
    IMPORT = 'ME'
    DEBUG = False
    TESTING = False
    DATABASE_URI = 'sqlite:///:memory:'

class ProductionConfig(Config):
    DATABASE_URI = 'mysql://user@localhost/foo'

class DevelopmentConfig(Config):
    DEBUG = True

class TestingConfig(Config):
    TESTING = True
Now, in your foo.py:
from config import Config
print(Config.IMPORT) # prints 'ME'
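Each application can still hand its chosen class to Flask in the usual way; a small sketch (assuming the classes above live in config.py):

from flask import Flask
from config import ProductionConfig

app = Flask(__name__)
app.config.from_object(ProductionConfig)  # per-application config choice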
Well, since current_app can only proxy your Flask application once it is set up at run time (for example, when the blueprint is registered), you can't use it at import time in your models_package modules.
(The app tries to import models_package, and models_package requires the app's config to initialize things, thus the import fails.)
One option would be a circular import:
Assuming everything is in the 'App' package:
__init__.py:

import flask

application = flask.Flask(__name__)
# ... load configs into application.config here ...

import models_package  # imported last, after the config is ready

models_package.py:

from App import application

config = application.config
Or create your own config object, though that somewhat doubles the complexity:
models_package.py:

import flask

# flask.Config needs a root path as its first argument
config = flask.config.Config('.', defaults=flask.Flask.default_config)

# Pick one of these and apply the same config initialization as you do
# in your __init__.py:
config.from_pyfile(...)  # or
config.from_object(...)  # or
config.from_envvar(...)
Related
I'm getting pretty confused as to how and where to initialise application configuration in Python 3.
I have configuration that consists of application specific config (db connection strings, url endpoints etc.) and logging configuration.
Before my application performs its intended function I want to initialise the application and logging config.
After a few different attempts, I eventually ended up with something like the code below in my main entry module. It has the nice effect of all imports being grouped at the top of the file (https://www.python.org/dev/peps/pep-0008/#imports), but it doesn't feel right, since the config modules are being imported for side effects alone, which is pretty unintuitive.
import config.app_config # sets up the app config
import config.logging_config # sets up the logging config
...
if __name__ == "__main__":
...
config.app_config looks something like this:

_config = {
    'DB_URL': None
}

def _get_db_url():
    # somehow get the db url
    ...

_config['DB_URL'] = _get_db_url()  # must come after the def, or NameError

def db_url():
    return _config['DB_URL']
and config.logging_config looks like:
import json
import logging.config
import os

if not os.path.isdir('./logs'):
    os.makedirs('./logs')

if os.path.exists('logging_config.json'):
    with open('logging_config.json', 'rt') as f:
        config = json.load(f)
    logging.config.dictConfig(config)
else:
    logging.basicConfig(level=log_level)  # log_level defined elsewhere
What is the common way to set up application configuration in Python? Bear in mind that I will have multiple applications, each using the config.app_config and config.logging_config modules, but possibly with different connection strings read from a file.
I ended up with a cut-down version of the Django approach: https://github.com/django/django/blob/master/django/conf/__init__.py
It seems pretty elegant and has the nice benefit of working regardless of which module imports settings first.
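For reference, a cut-down sketch of that lazy-settings idea (LazySettings, APP_SETTINGS_MODULE and config.defaults are illustrative names, not Django's):

import importlib
import os

class LazySettings(object):
    """Resolves the real settings module on first attribute access."""
    def __init__(self):
        self._wrapped = None

    def __getattr__(self, name):
        # Only called for attributes not found normally, i.e. real settings
        if self._wrapped is None:
            module = os.environ.get('APP_SETTINGS_MODULE', 'config.defaults')
            self._wrapped = importlib.import_module(module)
        return getattr(self._wrapped, name)

settings = LazySettings()  # import this from anywhere; import order no longer matters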
I am trying to figure out the right approach to using SQLAlchemy scoped sessions the "right way" while keeping the logic of defining a session separate from configuration and separate from using the session. I have been told a number of times that a good approach would be to have some global scoped_session factory that I can use everywhere:
"""myapp/db.py
"""
from sqlalchemy.orm import sessionmaker, scoped_session
Session = scoped_session(sessionmaker())
Then when I want to use it:
"""myapp/service/dosomething.py
"""
from myapp.db import Session
def do_something(data):
"""Do something with data
"""
session = Session()
bars = session.query(Bar).all()
for bar in bars:
bar.data = data
session.commit()
This seems right, but my problem is that in all the examples I have seen, sessionmaker also sets some parameters of the session, most importantly binding an engine. This makes no sense to me, as the actual DB engine will be created from configuration that is not known at global scope when the myapp.db module is imported.
What I have looked at doing is to set everything up in my app's "main" (or in a thread's main function), and then just assume that the session is configured in other places (such as when used by do_something() above):
"""myapp/main.py
"""
from sqlalchemy import create_engine
from myapp.db import Session
from myapp.service.dosomething import do_something
def main():
config = load_config_from_file()
engine = create_engine(**config['db'])
Session.configure(bind=engine)
do_something(['foo', 'bar'])
Does this seem like a correct approach? I have not found any good examples of such a flow; most examples I've found seem either over-simplified or framework-specific.
This is old and I've never accepted any of the answers below, but following @univerio's comment and 3+ years of continued usage of SQLAlchemy in various projects, my selected approach now is to keep doing exactly what I suggested in the OP:
Create a myapp.db module which defines Session = scoped_session(sessionmaker())
Import from myapp.db import Session everywhere it is needed
In my app's main or in the relevant initialization code, do:
def main():
    config = load_config_from_file()
    engine = create_engine(**config['db'])
    Session.configure(bind=engine)

    do_something(['foo', 'bar'])
I've used this pattern successfully in web apps, command line tools and long-running backend processes, and I've never had to change it so far. It is simple, reusable and works great, and I'd recommend it to anyone stumbling here because they've asked themselves the same question I did 3 years ago.
What you can do is split the config out into a separate module:
"""myapp/cfg.py
"""
config = load_config_from_file()
Then you can import this file wherever you need, including in the db module, so you can construct the engine as well as the session:
"""myapp/db.py
"""
from .cfg import config
engine = create_engine(**config['db'])
Session = scoped_session(sessionmaker(bind=engine))
Think about a singleton.
In your case, from myapp.db import Session makes Session a global singleton.
Just configure Session once at the start of your application.
You should have a config step that loads the config data from a file or the environment; once all configs are ready, run the real program.
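A minimal sketch of that flow (the DB_URL variable and run_application are illustrative names, not from the OP):

import os
from sqlalchemy import create_engine
from myapp.db import Session

def bootstrap():
    # 1. Gather all configuration first (file, environment, ...)
    db_url = os.environ.get('DB_URL', 'sqlite:///:memory:')
    # 2. Configure the global/singleton Session exactly once
    Session.configure(bind=create_engine(db_url))
    # 3. Only then run the real program
    run_application()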
I was trying to do a bit of testing on a piece of code, but I get ImportError: Start directory is not importable. Below is my code. I would really appreciate it if someone could help. I am using Python 2.7.5 and also PyCharm.
This is the command I execute in my virtualenv:
manage.py test /project/tests.py
from django.test import TestCase

# Create your tests here.
from project.models import Projects
from signup.models import SignUp

class ProjectTestCase(TestCase):
    def setUp(self):
        super(ProjectTestCase, self).setUp()
        self.john = SignUp.objects.get(email='john@john.com')
        self.project = Projects.objects.get(pk=1)

    def test_project_permission(self):
        self.assertFalse(self.john.has_perm('delete', self.project))
        self.assertTrue(self.john.has_perm('view', self.project))
        self.assertTrue(self.john.has_perm('change', self.project))
You should verify that you have an __init__.py in your project directory (it can be an empty file)
You're not calling test correctly. Assuming you have an app known as project, the way to call the tests in that app is manage.py test project
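For example, run from the directory containing manage.py:

python manage.py test project          # runs every test in the "project" app
python manage.py test signup project   # app labels can be combined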
[Edit for additional comment which is really getting into separate question territory]
I would suggest adding this to settings.py (at the bottom)
import sys

if 'test' in sys.argv:
    DATABASES['default'] = {'ENGINE': 'django.db.backends.sqlite3'}
    SOUTH_TESTS_MIGRATE = False
This:
A. Will use SQLite (or, I'm not sure whether it's actually SQLite or an in-memory database that works like it). I've noticed issues when using Postgres while trying to keep my permissions from being excessively liberal, and I've had tests abort.
B. Disables South migrations (so tests just start with a clean database built against whatever the models currently say).
In a Django project, I have a directory structure that looks something like this:
project/
├╴module_name/
│ ├╴dbrouters.py
│ ...
...
In dbrouters.py, I define a class that starts out like this:
class CustomDBRouter(object):
    current_connection = 'default'
    ...
The idea is to have a database router that sets the connection to use at the start of each request and then uses that connection for all subsequent database queries similarly to what is described in the Django docs for automatic database routing.
Everything works great, except that when I want to import CustomDBRouter in a script, I have to use the absolute path, or else something weird happens.
Let's say in one part of the application, CustomDBRouter.current_connection is changed:
from project.module_name.dbrouters import CustomDBRouter
...
CustomDBRouter.current_connection = 'alternate'
In another part of the application (assume that it is executed after the above code), I use a relative import instead:
from .dbrouters import CustomDBRouter
...
print CustomDBRouter.current_connection # Outputs 'default', not 'alternate'!
I'm confused as to why this is happening. Is Python creating a new class object for CustomDBRouter because I'm using a different import path?
Bonus points: Is there a better way to implement a 'global' class property?
It depends on how the script is being executed. When you're using relative imports, you have to make sure the script containing the import has a __name__ other than __main__. If the script runs as __main__, from .dbrouters import CustomDBRouter effectively becomes from __main__.dbrouters import CustomDBRouter.
I found this here.
It turns out, the problem was being caused by a few lines in another file:
PROJECT_ROOT = '/path/to/project'
sys.path.insert(0, '%s' % PROJECT_ROOT)
sys.path.insert(1, '%s/module_name' % PROJECT_ROOT)
The files that were referencing .dbrouters were being imported using the "shortcut" path (e.g., import views instead of import module_name.views), which meant dbrouters was imported under two different module names, each with its own copy of the CustomDBRouter class.
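A hypothetical demonstration of the effect: with those extra sys.path entries, the same file can be imported under two module names, and each name gets its own class object:

import module_name.dbrouters as a  # resolved via PROJECT_ROOT
import dbrouters as b              # resolved via PROJECT_ROOT/module_name

print a.CustomDBRouter is b.CustomDBRouter  # False: two separate classes
a.CustomDBRouter.current_connection = 'alternate'
print b.CustomDBRouter.current_connection   # still 'default'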
I would like to test a pyramid view like the following one:
def index(request):
    data = request.some_custom_property.do_something()
    return {'some': data}
some_custom_property is added to the request via an event handler like this:
@subscriber(NewRequest)
def prepare_event(event):
    event.request.set_property(
        create_some_custom_property,
        'some_custom_property', reify=True
    )
My problem is: if I create a test request manually, the property is not set up correctly, because no events are triggered. Because the real event handler is more complicated and depends on configuration settings, I don't want to reproduce that code in my test code. I would like to use the Pyramid infrastructure as much as possible. I learned from an earlier question how to set up a real Pyramid app from an ini file:
from webtest import TestApp
from pyramid.paster import get_app
app = get_app('testing.ini#main')
test_app = TestApp(app)
The test_app works fine, but I can only get back the HTML output (which is the idea of TestApp). What I want to do is execute index in the context of app or test_app, but get back the result of index before it is sent to a renderer.
Any hint on how to do that?
First of all, I believe it is a really bad idea to write doctests like this. They require a lot of initialization work, which would end up included in the documentation (remember, doctests) without actually "documenting" anything. To me, these tests seem to be a job for unit/integration tests. But if you really want to, here's a way to do it:
import myapp
from pyramid.paster import get_appsettings
from webtest import TestApp
app, conf = myapp.init(get_appsettings('settings.ini#appsection'))
rend = conf.testing_add_renderer('template.pt')
test_app = TestApp(app)
resp = test_app.get('/my/view/url')
rend.assert_(key='val')
where myapp.init is a function that does the same work as your application's initialization function called by pserve (like the main function here), except that myapp.init takes one argument, the settings dictionary (instead of main(global_config, **settings)), and returns both the app (i.e. conf.make_wsgi_app()) and conf (the pyramid.config.Configurator instance). rend is a pyramid.testing.DummyTemplateRenderer instance.
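A minimal sketch of what such a myapp.init might look like (the include is a placeholder for whatever your real main() configures):

"""myapp/__init__.py
"""
from pyramid.config import Configurator

def init(settings):
    conf = Configurator(settings=settings)
    conf.include('pyramid_chameleon')  # placeholder: mirror your main()
    conf.scan('myapp')
    return conf.make_wsgi_app(), conf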
P.S. Sorry for my English, I hope you'll be able to understand my answer.
UPD. Forgot to mention that rend has a _received property, which holds the value passed to the renderer, though I would not recommend using it, since it is not part of the public interface.
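For completeness, a sketch of reading it (again, _received is private, so it may break between Pyramid versions):

resp = test_app.get('/my/view/url')
assert rend._received == {'key': 'val'}  # the dict the view handed to the renderer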