Create pyramid request for testing, so that events are triggered - python

I would like to test a pyramid view like the following one:
def index(request):
    data = request.some_custom_property.do_something()
    return {'some': data}
some_custom_property is added to the request via an event handler like this:
@subscriber(NewRequest)
def prepare_event(event):
    event.request.set_property(
        create_some_custom_property,
        'some_custom_property', reify=True
    )
My problem is: if I create a test request manually, the property is not set up, because no events are triggered. Because the real event handler is more complicated and depends on configuration settings, I don't want to reproduce that code in my test code; I would like to use the Pyramid infrastructure as much as possible. I learned from an earlier question how to set up a real Pyramid app from an ini file:
from webtest import TestApp
from pyramid.paster import get_app
app = get_app('testing.ini#main')
test_app = TestApp(app)
The test_app works fine, but I can only get back the HTML output (which is the point of TestApp). What I want to do is execute index in the context of app or test_app, but get back the result of index before it's sent to a renderer.
Any hint on how to do that?

First of all, I believe it is a really bad idea to write doctests like this. It requires a lot of initialization work that ends up in your documentation (remember, doctests are documentation) without actually documenting anything, and to me these tests look like a job for unit/integration tests. But if you really want to, here's a way to do it:
import myapp
from pyramid.paster import get_appsettings
from webtest import TestApp
app, conf = myapp.init(get_appsettings('settings.ini#appsection'))
rend = conf.testing_add_renderer('template.pt')
test_app = TestApp(app)
resp = test_app.get('/my/view/url')
rend.assert_(key='val')
where myapp.init is a function that does the same work as your application's initialization function called by pserve (like the main function here), except that myapp.init takes a single argument, the settings dictionary (instead of main(global_config, **settings)), and returns app (i.e. conf.make_wsgi_app()) and conf (the pyramid.config.Configurator instance). rend is a pyramid.testing.DummyTemplateRenderer instance.
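For reference, here is a minimal sketch of what myapp.init might look like (the function name matches the description above, but the included module and the scan call are my assumptions, not code from the original post); it mirrors the pserve entry point, except that it takes the parsed settings dict and returns both the WSGI app and the Configurator:
from pyramid.config import Configurator

def init(settings):
    conf = Configurator(settings=settings)
    conf.include('myapp.routes')  # hypothetical include; wire up your real configuration here
    conf.scan()                   # picks up @subscriber(NewRequest) handlers
    return conf.make_wsgi_app(), conf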
P.S. Sorry for my English, I hope you'll be able to understand my answer.
UPD. Forgot to mention that rend has a _received property, which holds the value passed to the renderer, though I would not recommend using it, since it is not part of the public interface.
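If you decide to peek at it anyway, usage might look like this (treat _received as an implementation detail that may change between releases):
resp = test_app.get('/my/view/url')
print(rend._received)  # the values the view returned to the renderer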

Related

How and when to initialise configuration in Python?

I'm getting pretty confused as to how and where to initialise application configuration in Python 3.
I have configuration that consists of application-specific config (db connection strings, URL endpoints, etc.) and logging configuration.
Before my application performs its intended function I want to initialise the application and logging config.
After a few different attempts, I eventually ended up with something like the code below in my main entry module. It has the nice effect of all imports being grouped at the top of the file (https://www.python.org/dev/peps/pep-0008/#imports), but it doesn't feel right, since the config modules are being imported for their side effects alone, which is pretty unintuitive.
import config.app_config # sets up the app config
import config.logging_config # sets up the logging config
...
if __name__ == "__main__":
    ...
config.app_config looks something like this:
_config = {
    'DB_URL': None
}

def _get_db_url():
    # somehow get the db url
    ...

_config['DB_URL'] = _get_db_url()

def db_url():
    return _config['DB_URL']
and config.logging_config looks like:
import json
import logging.config
import os

if not os.path.isdir(r'.\logs'):
    os.makedirs(r'.\logs')

if os.path.exists('logging_config.json'):
    with open('logging_config.json', 'rt') as f:
        config = json.load(f)
    logging.config.dictConfig(config)
else:
    logging.basicConfig(level=log_level)  # log_level defined elsewhere
What is the common way to set up application configuration in Python? Bear in mind that I will have multiple applications, each using the config.app_config and config.logging_config modules, but possibly with different connection strings read from a file.
I ended up with a cut-down version of the Django approach: https://github.com/django/django/blob/master/django/conf/__init__.py
It seems pretty elegant and has the nice benefit of working regardless of which module imports settings first.
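For illustration, a rough sketch of the kind of lazy settings object that approach boils down to (the module and environment variable names are just examples, not the exact code I use):
import importlib
import os

class LazySettings:
    """Import the settings module lazily, on first attribute access."""

    def __init__(self):
        self._wrapped = None

    def __getattr__(self, name):
        if self._wrapped is None:
            module_name = os.environ.get('APP_SETTINGS_MODULE', 'settings')
            self._wrapped = importlib.import_module(module_name)
        return getattr(self._wrapped, name)

settings = LazySettings()
# elsewhere: from config import settings; print(settings.DB_URL)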

Dynamic Routing With Bottle.py

I'm very new to the Bottle framework and am having a hard time understanding what I am doing wrong when trying to serve static files using dynamic routes.
The following works just fine for me when I use exact values:
@route('/files/somefile.txt')
def serve_somefile():
    return static_file('somefile.txt', root='/directory/to/files')
However, I am trying to create a dynamic route to serve any file in the /files directory based on the documentation.
This does not work for me:
@route('/files/<filename>')
def serve_somefile(filename):
    return static_file(filename, root='/directory/to/files')
I get a 404 response from the server, despite it receiving an identical GET request compared to the above example.
Can anyone point out what I'm doing wrong here?
Did you try specifying the parameter as :path, like in their example?
@route('/files/<filename:path>')
def serve_somefile(filename):
    return static_file(filename, root='/directory/to/files')
Nothing in your code looks wrong to me. (And I agree with @Ashalynd that you should be using :path here.)
In fact, I tried running your code, and both cases work.
Perhaps you're using an old version of Bottle? I'm on 0.12.7.
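If you're not sure which version is installed, a quick check:
import bottle
print(bottle.__version__)  # e.g. '0.12.7'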
--
Here's my complete example, in case it helps:
import bottle
from bottle import route, static_file

@route('/files/<filename>')
def serve_somefile(filename):
    return static_file(filename, root='/Users/ron/Documents/so/25043651')

bottle.run(host='0.0.0.0', port=8080)

Error with custom GAE task queue

I am writing my first "serious" application with AppEngine and have run into some problems with the task queue.
I have read and reproduced the example code that is given in the appengine docs.
When I try to add a Task to a custom Queue, though, it doesn't seem to work for me the way it works for others:
What I do is:
from google.appengine.api import taskqueue

class EnterQueueHandler(AppHandler):
    def get(self):
        # some code
        ...

    def post(self):
        key = self.request.get("value")
        task = Task(url='/queue', params={'key': key})
        task.add("testqueue")
        self.redirect("/enterqueue")
And then I have a handler set for "/queue" that does stuff.
The problem is that this throws the following error:
NameError: global name 'Task' is not defined
Why is that? It seems to me I am missing something basic, but I can't figure out what. The docs say that the Task class is provided by the taskqueue module.
By now I have figured out that it works if I replace the two task-related lines in the code above with the following:
taskqueue.add(queue_name="testqueue", url="/queue", params={"key":key})
But I would like to understand why the other method doesn't work nonetheless. It would be very nice if someone could help me out here.
From the documentation
Task is provided by the google.appengine.api.taskqueue module.
Since you have already imported
from google.appengine.api import taskqueue
You can replace this line:
task = Task(url='/queue', params={'key':key})
with
task = taskqueue.Task(url='/queue', params={'key':key})
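For completeness, a sketch of the handler from the question with just that change applied (AppHandler and the URLs are kept exactly as in the question):
from google.appengine.api import taskqueue

class EnterQueueHandler(AppHandler):
    def post(self):
        key = self.request.get("value")
        task = taskqueue.Task(url='/queue', params={'key': key})
        task.add("testqueue")
        self.redirect("/enterqueue")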
I think the reason it does not work is that Task is not imported. Below is an example that I use all the time successfully. It looks just like yours, but my import is different.
from google.appengine.api.taskqueue import Task

task = Task(
    url=url,
    method=method,
    payload=payload,
    params=params,
    countdown=0
)
task.add(queue_name=queue)

Getting config variables outside of the application context

I've extracted several of my sqlalchemy models to a separate and installable package (../lib/site-packages), to use across several applications. So I only need to:
from models_package import MyModel
from any application needing access to these models.
Everything is ok so far, except I cannot find a satisfactory way of getting at several application-dependent config variables used by some of the models, which may vary from application to application. Some models need to be aware of variables that previously came from the application they lived in.
Neither
current_app.config['XYZ']
nor
config = LocalProxy(lambda: current_app.config['XYZ'])
has worked (outside of application context errors), so I'm stuck right now. Maybe this is poor programming and/or design on my behalf, so how do I clear this up? There must be some way, but I haven't reasoned my way toward it yet.
SOLUTION:
As long as you avoid setting items at module load time (like a constant containing an API key), both of the above approaches work. Anything that reads them outside of model-in-the-application use will of course raise an error; methods that return the values you need when called are fine.
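To make the distinction concrete, a small sketch (the model and config key are made up): module-level access fails, while access inside a method that runs during a request is fine:
from flask import current_app

# API_KEY = current_app.config['XYZ']  # breaks: evaluated at import time, outside any app context

class MyModel(object):
    def api_key(self):
        # fine: evaluated only while an application/request context is active
        return current_app.config['XYZ']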
If you are using a configuration pattern utilising classes and inheritance as described here, you could simply import your config classes with their respective properties and access them anywhere you want:
class Config(object):
    IMPORT = 'ME'
    DEBUG = False
    TESTING = False
    DATABASE_URI = 'sqlite:///:memory:'

class ProductionConfig(Config):
    DATABASE_URI = 'mysql://user@localhost/foo'

class DevelopmentConfig(Config):
    DEBUG = True

class TestingConfig(Config):
    TESTING = True
Now, in your foo.py:
from config import Config
print(Config.IMPORT) # prints 'ME'
Well, since current_app can be a proxy of your Flask application only once the blueprint is registered, and that happens at run-time, you can't use it in your models_package modules at import time. (The app tries to import models_package, and models_package requires the app's config to initialize things, so the import fails.)
One option would be circular imports:
Assuming everything is in the 'App' package:
__init__.py
import flask

application = flask.Flask(__name__)
# load your configs here, e.g. application.config.from_object(...) or from_pyfile(...)
import models_package
models_package.py
from App import application
config = application.config
or create your own config object, though that somewhat doubles the complexity:
models_package.py
import flask

# note: Config() also expects a root_path as its first argument; '.' is a placeholder
config = flask.config.Config('.', defaults=flask.Flask.default_config)

# pick one of these and apply the same config initialization as you do in
# your __init__.py
config.from_pyfile(...)   # or
config.from_object(...)   # or
config.from_envvar(...)

Extending my application - Pyramid/Pylons/Python

Simple question about extending my application
Let's say I have a "Main Application", and in this application I have the following in the __init__.py file:
config.add_route('image_upload', '/admin/image_upload/',
                 view='mainapp.views.uploader',
                 view_renderer='/site/upload.mako')
and in the views.py I have:
def uploader(request):
    # some code goes here
    return {'xyz': xyz}
Now when I create a new application and want to extend it to use the above view and route, in the new application's __init__.py file I would manually copy over the config.add_route code:
config.add_route('image_upload', '/admin/image_upload/',
                 view='mainapp.views.uploader',
                 view_renderer='mainapp:templates/site/upload.mako')
And is that all I would need to do? Would my application then be able to use the view and template from the main application, or am I missing something else?
Thanks for reading!
You don't have to copy your code to do this. Use the Configurator.include method to include your "Main Application" configuration in your new application. The documentation explains this pretty well both here and here, but essentially, if you declare your main app's configuration inside a callable:
def main_app_config(config):
    config.add_route('image_upload', '/admin/image_upload/',
                     view='mainapp.views.uploader',
                     view_renderer='/site/upload.mako')
Then you can include your main app in your new app's configuration like this:
from my.main.app import main_app_config
# do your new application Configurator setup, etc.
# then "include" it.
config.include(main_app_config)
# continue on with your new app configuration
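If you also want the included routes mounted under a URL prefix in the new application, config.include accepts a route_prefix argument, for example:
config.include(main_app_config, route_prefix='/mainapp')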
