I'm using OpenAPI Generator to generate a Python Flask web app from an OpenAPI specification. A generated controller method looks like this:
def doStuff(body):  # noqa: E501
    return 'do some magic!'
What is the best practice for filling in the implementation of these generated controllers? Do I copy what was generated and then modify that? Obviously I can't modify the code in the generated directory, because the generator will regenerate it and overwrite my changes.
What happens if I add an endpoint in the OpenAPI spec that causes an additional method to be generated? Do I have to manually copy this new method into my implementation code?
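For illustration, the direction I'm considering (module names are mine, nothing from the generator docs): leave each generated stub alone except for a one-line delegation into a hand-written package, and list those edited controllers in the generator's .openapi-generator-ignore file so they survive regeneration:

# controllers/default_controller.py (generated, edited once, then ignored)
from impl.stuff import do_stuff  # hand-written module the generator never touches

def doStuff(body):  # noqa: E501
    return do_stuff(body)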
The setup of the problem is simple enough:
a user selects a language preference (this preference can be read from the user's session);
based on this choice, load the appropriate .mo from the available translations;
(no separate domains are set up, if it makes any difference)
Problem: since this selection has to happen outside the scope of the Flask app, the app cannot be instantiated in order to use @babel.localeselector. Instead, I use a simple function based on webapp2's i18n extension which, using Babel's support function, loads a given translation and returns a translation instance (Translations: "PROJECT VERSION"). (inb4 'why not just use webapp2?': too many libs already.)
From this point on, it is not clear to me what to do with this instance. How can I get Babel to use this specific instance? (at the moment, it always uses the default one, no 'best_match' involved).
Solved by just using the Flask app in the way I wanted to avoid: on every request there is a callback to the app instance via the localeselector decorator, and the language, set earlier in an attribute on flask.g, is returned from there. Basically by the book, I guess.
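A minimal sketch of what that looks like (assuming a pre-3.0 Flask-Babel, where localeselector is a decorator; names are illustrative):

from flask import Flask, g, session
from flask_babel import Babel

app = Flask(__name__)
babel = Babel(app)

@app.before_request
def set_language():
    # the user's preference was stored in the session earlier
    g.lang = session.get('lang', 'en')

@babel.localeselector
def get_locale():
    return g.lang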
According to the Flask documentation for the Flask.make_response method, the only types a view is allowed to return are instances of app.response_class, str, unicode, a WSGI callable, or a tuple containing one of the above.
I'd like to be able to return my own models and queries from a view and have generic code build an adequate response: serializing the object to an accepted format, building collections from queries, applying filtering conditions, etc. Where's the best place to hook that in?
I considered the following possibilities, but can't figure out the one correct way to do it.
Subclassing Response and setting app.response_class
Subclassing Flask redefining Flask.make_response
Wrapping app.route into another decorator
Flask.after_request
?
edit1: I already have many APIs with the behavior I need implemented in the views, but I'd like to avoid repeating that everywhere.
edit2: I'm actually building a Flask-extension with many default practices used in my applications. While a plain decorator would certainly work, I really need a little more magic than that.
Why don't you just create a function make_response_from_custom_object and end your views with
return make_response_from_custom_object(custom_object)
If it is going to be common I would put it into a @response_from_custom_object decorator, but hooking into Flask seems like overkill. You can chain decorators, so wrapping app.route does not make sense either; all you need is:
@app.route(...)
@response_from_custom_object
def view(...):
    ...
If you can do it the simple and explicit way, there is no sense in making your code do magic and thus become less comprehensible.
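If you do want the decorator form, a minimal sketch (reusing the make_response_from_custom_object helper from above):

from functools import wraps

def response_from_custom_object(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        # convert whatever the view returns on the way out
        return make_response_from_custom_object(view(*args, **kwargs))
    return wrapper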
The farther up the chain you go (make_response, dispatch_request + handle_user_error, full_dispatch_request, rewriting Flask from scratch), the more functionality you will have to re-create.
The easiest thing to do in this case is to override response_class and do the serialization there: that keeps all the magic Flask does in make_response, full_dispatch_request, etc., but still gives you control over how to respond to exceptions and serialize responses. It also leaves all of Flask's hooks in place, so consumers of your extension can override behavior where they need to (and they can re-use their existing knowledge of Flask's request lifecycle).
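A minimal sketch of that approach (Serializable is a stand-in for your own marker class; force_type is the Werkzeug hook Flask calls for return types it doesn't recognize):

from flask import Flask, Response, jsonify

class ApiResponse(Response):
    @classmethod
    def force_type(cls, rv, environ=None):
        # invoked when a view returns something that isn't already a Response
        if isinstance(rv, Serializable):
            rv = jsonify(rv.json)
        return super(ApiResponse, cls).force_type(rv, environ)

app = Flask(__name__)
app.response_class = ApiResponse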
Python is dynamic by nature, so while this probably isn't the best practice, you can reassign the make_response method on the application to whatever you want.
To avoid having to recreate the default functionality, you can save a reference to the original function and use that to implement your new function.
I used this recently in a project to add the ability to return instances of a custom Serializable class directly from flask views.
app = Flask("StarCorp")

__original_make_response = app.make_response

def convert_custom_object(obj):
    # Check if the returned object is "Serializable"
    if not isinstance(obj, Serializable):
        # Nope, do whatever flask normally does
        return __original_make_response(obj)
    # It is, get a `dict` from `obj` using the `json` method
    data = obj.json
    data.pop(TYPE_META)  # Don't share the type meta info of an object with users
    # Let flask turn the `dict` into a `json` response
    return __original_make_response(data)

app.make_response = convert_custom_object
Since Flask extensions typically provide an init_app(app) method, I'm sure you could build an extension that monkey-patches the passed-in application object in a similar manner.
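A minimal sketch of such an extension (Serializable again being a hypothetical marker class):

class CustomObjectResponses(object):
    def __init__(self, app=None):
        if app is not None:
            self.init_app(app)

    def init_app(self, app):
        original_make_response = app.make_response

        def make_response(rv):
            # convert custom objects, defer to Flask for everything else
            if isinstance(rv, Serializable):
                rv = rv.json
            return original_make_response(rv)

        app.make_response = make_response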
This question isn't entirely App Engine specific, but it might help knowing the context: I have a kind of "static site generator" on App Engine that renders pages and allows them to be styled via various themes and theme settings. The themes are currently stored directly on the App Engine filesystem and uploaded with the application. A theme consists of a few templates and yaml configuration data.
To encapsulate working with themes, I have a Theme class. theme = Theme('sunshine'), for example, constructs a Theme instance that loads and parses the configuration data of the theme called 'sunshine', and allows calls like theme.render_template('index.html') that automatically load and render the correct file on the filesystem.
Problem is, loading and especially parsing a Theme's (yaml) configuration data every time a new request comes in and instantiates a Theme is expensive. So, I want to cache the data within the processes/App Engine instances and maybe later within memcached.
Until now, I've used very simple caches like so:
class Theme(object):
    _theme_variables_cache = {}

    def __init__(self, name):
        self.name = name
        if name not in Theme._theme_variables_cache:
            Theme._theme_variables_cache[name] = self.load_theme_variables()
        ...
(I'm aware that the config could be read multiple times when several requests hit the constructor at the same time. I don't think it causes problems though.)
But that kind of caching gets ugly really quickly. I have several different things I want to read from config files and all of the caches are dictionaries because every different theme 'name' also points to a different underlying configuration.
The last idea I had was creating a function like Theme._cached_func(func) that will only execute func when the function's result isn't already cached for the specific theme (remember, when the object represents a different theme, the cached value can also be different). So I could use it like self.theme_variables = Theme._cached_func(self.load_theme_variables), but I have a feeling I'm missing something obvious here, as I'm still pretty new to Python.
Is there an obvious and clean Python caching pattern that will work for such a situation without cluttering up the entire class with cache logic? I think I can't just memoize function results via decorators or something, because different themes will have to have different caches. I don't even need any "stale" cache handling, because the underlying configuration data doesn't change while a process runs.
Update
I ended up doing it like this:
class ThemeConfig(object):
    __instances_cache = {}

    @classmethod
    def get_for(cls, theme_name):
        if theme_name not in cls.__instances_cache:
            cls.__instances_cache[theme_name] = ThemeConfig(theme_name)
        return cls.__instances_cache[theme_name]

    def __init__(self, theme_name):
        self.theme_name = theme_name
        self._load_assets_urls()  # those calls load yaml files
        self._load_variables()
    ...
class Theme(object):
    def __init__(self, theme_name):
        self.theme_name = theme_name
        self.config = ThemeConfig.get_for(theme_name)
    ...
So ThemeConfig stores all the configuration stuff that's read from the filesystem for a theme, and the factory method ThemeConfig.get_for will always hand out the same ThemeConfig instance for the same theme name. The only caching logic I have is the small guard in the factory method, and Theme objects are still as temporary and non-shared as they always were, so I can use and abuse them however I wish.
I will take a shot at this. Basically, a factory pattern can be used here to maintain a clean boundary between your Theme object and the creation of a Theme instance for a particular name.
The factory itself can also maintain a simple caching strategy by storing a mapping between the theme name and the corresponding theme data. I would go with the following implementation:
# The ThemeFactory class instantiates a Theme with a particular name
# if it is not already present in its cache
class ThemeFactory(object):
    def __init__(self):
        self.__theme_variables_cache = {}

    def createTheme(self, theme_name):
        if theme_name not in self.__theme_variables_cache:
            theme = Theme(theme_name)
            self.__theme_variables_cache[theme_name] = theme.load_theme_variables()
        return self.__theme_variables_cache[theme_name]
The definition of the Theme class is now very clean and simple, and will not contain any caching complications:
class Theme(object):
    def __init__(self, name):
        self.__theme_name = name

    def load_theme_variables(self):
        # contains the logic for loading theme variables from theme files
        pass
The approach has the advantages of code maintainability and clear segregation of responsibilities (although not completely so: the factory class still maintains the simple cache. Ideally it would just hold a reference to a caching service or another class that handles caching... but you get the point).
Your Theme class does what it does best: loading theme variables. Since you have a factory pattern, you are keeping the client code (the code that consumes Theme instances) insulated from the logic of creating them. As your application grows, you can extend this factory to control the creation of various Theme objects (including classes derived from Theme).
Note that this is just one way of achieving simple caching behavior as well as instance creation encapsulation.
One more point: you could store Theme objects in the cache instead of the theme variables. That way you would read the theme variables from the theme files only on first use (lazy loading). However, in this case you would need to make sure you store the theme variables as an instance variable of the Theme class. The method load_theme_variables(self) would then need to be written this way:
def load_theme_variables(self):
    # the theme variables are stored in the instance variable __theme_variables
    if self.__theme_variables is not None:
        return self.__theme_variables
    # __read_theme_file is a private function that reads the theme files
    self.__theme_variables = self.__read_theme_file(self.__theme_name)
    return self.__theme_variables
Hopefully this gives you an idea of how to go about achieving your use case.
Notes: Cannot use Javascript or iframes. In fact I can't trust the client browser to do just about anything but the ultra basics.
I'm rebuilding a legacy PHP4 app as an MVC application, with most of my research currently focused on the Pylons framework.
One of the first weird issues I've run into, and one I've solved in the past by using iframes or, better yet, JavaScript, is displaying a dynamic collection of "widgets" that are like digest views of a typical controller's index view.
The best way to visualize my problem would be to look at Google's personalized homepage. They solve the problem with JavaScript, but for my scenario JavaScript and pretty much anything above basic XHTML is not possible.
One idea I started working on was to have my Frontpage controller poll a database or other service for the currently activated widgets, then, given a list of tuples/dicts, dynamically instantiate each controller, build a list of rendered sub-views, and pass that to the frontpage view to let it figure things out.
So, in pseudocode:
GET request goes to WSGI
WSGI calls Pylons
Pylons routes to Frontpage.index()

Frontpage.index():
    myViews = list()
    for WidgetController in ActiveWidgets():
        myViews.append(subRender(WidgetController, widgetView))
    c.subviews = myViews
    render('frontpage.mako')
Weird bits about subRender
Dynamically imports controllers via __import__ (currently hardcoded to project's namespace :( )
Has a potential to be very expensive (most widget calls can be cached, but one is a user panel)
I feel like there has to be a better way, or perhaps a mechanism already implemented in WSGI or, better yet, Pylons, but so far the closest I've found is this utility method: http://www.pylonshq.com/docs/en/0.9.7/modules/controllers_util/#pylons.controllers.util.forward. It seems a little crazy, though, to build N instances of Pylons on top of Pylons just to get a collection of views.
While in most cases I'd recommend what you originally stated, using Javascript to load each widget, since that isn't an option I think you'll need to do something a little different.
In addition to your approach of having a single front controller go through all the needed widgets and build them, an alternative you might want to consider is making more powerful use of the templating in Mako.
You can actually define small blocks as Mako def's, which of course have full Python power. To avoid polluting your Mako templates with domain logic, make sure to keep that all in your models, and just make calls to the model instances in the Mako def's as needed for that component of the page to build itself.
A huge advantage of this approach is that, since Mako def's support cache arguments, you can have components of the page decide how to cache themselves. Maybe the sidebar should be cached for 5 minutes, but the top bar changes on every hit, for example. Also, since the component triggers the db hit, you'll save db hits when the component caches itself.
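A sketch of what that can look like in the Mako template (the cache arguments are Mako's; the model calls are illustrative):

## cached for 5 minutes; the model/db call only runs on a cache miss
<%def name="sidebar()" cached="True" cache_timeout="300">
    ${sidebar_model.render_links()}
</%def>

## no cache args: rebuilt on every hit
<%def name="topbar()">
    ${topbar_model.render(user)}
</%def>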
ToscaWidgets doesn't have the performance to make it a very feasible option on a larger scale, so I'd stay away from trying that out.
As for some tweaks to your existing idea, make sure not to actually use Pylons controllers for 'widgets': controllers do much more than you need (to support WSGI), which is wasted when all you want is to build up a page from widgets.
I'd consider having all Widget classes work like so:
class Widget(object):
    def process(self):
        # Determine if this widget should process a POST aimed at it,
        # ie, one of the POST args is a widget id indicating the widget
        # to handle the POST
        pass

    def prepare(self):
        # Load data from the database if needed in prep for the render
        pass

    def render(self):
        # Return the rendered content
        pass

    def __call__(self):
        self.process()
        self.prepare()
        return self.render()
Then just have your main Mako template iterate through the widget instances, and call them to render them out.
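A sketch of the controller side (the widget classes are illustrative stand-ins for your own subclasses):

# in Frontpage.index(): build the enabled widgets and render each one
widgets = [UserPanelWidget(), NewsDigestWidget()]
c.rendered_widgets = [widget() for widget in widgets]  # each call returns HTML
return render('/frontpage.mako')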
You could use ToscaWidgets to encapsulate your widgets, along with a stored list of the widgets enabled for each user (in database or other service, as you suggest). Pass a list of the enabled ToscaWidgets to the view and the widgets will render themselves (including dynamically adding CSS/JavaScript references to the page if widget requires those resources).
What is the best way to fill a Django model with data from an external source?
E.g. I have a model Run, and run data in an XML file which changes weekly.
Should I create a view and call that view's URL from a curl cronjob (with the advantage that the data can be read in at any time, not only when the cronjob runs), or create a python script and install that script as a cron job (with the DJANGO_SETTINGS_MODULE variable set up before executing the script)?
There is an excellent way to do maintenance-like jobs in a project's environment: write a custom manage.py command. It takes care of all the environment configuration and other details, allowing you to concentrate on the concrete task.
And of course you can call it directly from cron.
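A minimal sketch of such a command (the command and model names are illustrative; the file path follows Django's convention):

# myapp/management/commands/load_runs.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'Load the weekly runs XML file into the Run model'

    def handle(self, *args, **options):
        # parse the XML and save Run instances here
        pass

# cron can then run: python manage.py load_runs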
You don't need to create a view; just trigger a python script with the appropriate Django environment settings configured. Then call your models directly, the way you would if you were using a view: process your data, add it to your model, and .save() the model to the database.
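A minimal sketch of that kind of script (module names are hypothetical; on newer Django versions you would also call django.setup() before importing models):

import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

from myapp.models import Run  # import only after the settings are configured

def update():
    for fields in read_runs_from_xml('runs.xml'):  # hypothetical parser
        Run(**fields).save()

if __name__ == '__main__':
    update()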
I've used cron to update my DB using both a script and a view. From cron's point of view it doesn't really matter which one you choose. As you've noted, though, it's hard to beat the simplicity of firing up a browser and hitting a URL if you ever want to update at a non-scheduled interval.
If you go the view route, it might be worth considering a view that accepts the XML file itself via an HTTP POST. If that makes sense for your data (you don't give much information about that XML file), it would still work from cron, but could also accept an upload from a browser -- potentially letting the person who produces the XML file update the DB by themselves. That's a big win if you're not the one making the XML file, which is usually the case in my experience.
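A minimal sketch of such a view (the form field and loader names are hypothetical):

from django.http import HttpResponse, HttpResponseNotAllowed

def upload_runs(request):
    if request.method != 'POST':
        return HttpResponseNotAllowed(['POST'])
    # works for a browser form upload; cron can POST the same file via curl
    load_runs_from_xml(request.FILES['runs'])
    return HttpResponse('OK')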
"create a python script and install that script as a cron (with DJANGO _SETTINGS _MODULE variable setup before executing the script)?"
First, be sure to declare your Forms in a separate module (e.g. forms.py).
Then, you can write batch loaders that look like this. (We have a LOT of these.)
import xml.etree.ElementTree as ET

from myapp.forms import MyObjectLoadForm
from myapp.models import MyObject

def xmlToDict(element):
    return dict(
        field1=element.findtext('tag1'),
        field2=element.findtext('tag2'),
    )

def loadRow(aDict):
    f = MyObjectLoadForm(aDict)
    if f.is_valid():
        f.save()

def parseAndLoad(someFile):
    doc = ET.parse(someFile).getroot()
    for tag in doc.getiterator("someTag"):
        loadRow(xmlToDict(tag))
Note that there is very little unique processing here -- it just uses the same Form and Model as your view functions.
We put these batch scripts in with our Django application, since it depends on the application's models.py and forms.py.
The only "interesting" part is transforming your XML row into a dictionary so that it works seamlessly with Django's forms. Other than that, this command-line program uses all the same Django components as your view.
You'll probably want to add options parsing and logging to make a complete command-line app out of this. You'll also notice that much of the logic is generic -- only the xmlToDict function is truly unique. We call these "Builders" and have a class hierarchy so that our Builders are all polymorphic mappings from our source documents to Python dictionaries.
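To illustrate the "Builder" idea (a sketch with hypothetical names, in the spirit of the code above):

class Builder(object):
    """Maps one row of a source document to a dict suitable for a Django form."""
    def build(self, row):
        raise NotImplementedError

class RunXMLBuilder(Builder):
    def build(self, element):
        return dict(
            field1=element.findtext('tag1'),
            field2=element.findtext('tag2'),
        )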