Flask('application') versus Flask(__name__) - python

In the official Quickstart, it's recommended to use __name__ when using a single module:
... If you are using a single module (as in this example), you should use __name__ because depending on if it’s started as application or imported as module the name will be different ('__main__' versus the actual import name). ...
However, in their API document, hardcoding is recommended when my application is a package:
So it’s important what you provide there. If you are using a single module, __name__ is always the correct value. If you however are using a package, it’s usually recommended to hardcode the name of your package there.
I can understand why it's better to hardcode the name of my package, but why not hardcode the name of a single module? Or, in other words, what information can Flask get when it receives '__main__' as its first parameter? I can't see how this makes it easier for Flask to find the resources...

__name__ is just a convenient way to get the import name of the place the app is defined. Flask uses the import name to know where to look up resources, templates, static files, instance folder, etc. When using a package, if you define your app in __init__.py then the __name__ will still point at the "correct" place relative to where the resources are. However, if you define it elsewhere, such as mypackage/app.py, then using __name__ would tell Flask to look for resources relative to mypackage.app instead of mypackage.
Using __name__ isn't orthogonal to "hardcoding", it's just a shortcut to using the name of the package. And there's also no reason to say that the name should be the base package, it's entirely up to your project structure.
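The difference is easy to see without Flask at all; here's a minimal stdlib-only sketch (the module name mymodule is made up) showing what value __name__ carries in each case:

```python
import importlib
import os
import sys
import tempfile

# Demonstrates the value Flask would receive: __name__ is "__main__"
# when a file is run directly, but the import name when it is imported.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "mymodule.py"), "w") as f:
        f.write("seen_name = __name__\n")
    sys.path.insert(0, d)
    mod = importlib.import_module("mymodule")
    sys.path.remove(d)

print(mod.seen_name)  # mymodule -- the import name, not "__main__"
print(__name__)       # "__main__" when this script is run directly
```

So when a single module is run directly, Flask('__main__') still resolves resources relative to that file's directory, which is why __name__ is safe there.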

How to use api.add_resource to route in different files?

I have a routing table in apis.py using flask-restful like:
api.add_resource(sonarqube.SonarqubeHistory, '/sonarqube/<project_name>')
api.add_resource(project.ProjectFile, '/project/<sint:project_id>/file')
api.add_resource(redmine.RedmineFile, '/download', '/file/<int:file_id>')
The resource classes are in other modules like sonarqube.py, project.py, or redmine.py.
My question is: is it possible to also move the api.add_resource() calls into those other modules (for more structured code)? I tried simply moving the statements and doing from apis import api, but since apis.py also does other work, this results in a circular import.
Is there an example for similar use cases?
Actually, for a better design, split the routing table out of apis.py into a different module, since apis.py is also doing other work - for example, routingapis.py.
Import the new routingapis wherever it is required.
If circular imports are still observed and unavoidable even after splitting, then defer the import: write a separate function for adding the resource,
something like
def functionToAddResource():
    from routingapis import api
    api.add_resource(sonarqube.SonarqubeHistory, '/sonarqube/<project_name>')
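The deferred-import trick generalizes beyond Flask; here is a self-contained sketch (the module names a and b are made up) showing how moving an import inside a function breaks a cycle:

```python
import os
import sys
import tempfile

# Module "a" imports "b" at the top; "b" only imports "a" lazily,
# inside a function, so no cycle exists at import time.
a_src = 'import b\nVALUE = "from a"\n'
b_src = 'def use_a():\n    from a import VALUE  # deferred import\n    return VALUE\n'

tmp = tempfile.mkdtemp()
for name, src in [("a", a_src), ("b", b_src)]:
    with open(os.path.join(tmp, name + ".py"), "w") as f:
        f.write(src)

sys.path.insert(0, tmp)
import b

print(b.use_a())  # from a
```

The same idea applies to the routing function above: by the time functionToAddResource() runs, routingapis has finished importing, so the cycle never bites.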

Is it safe to overwrite app config after the 1st config assignment is made (and doesn't it negate the benefit of the factory pattern)?

Hello, so we have a couple of methods to set up app configurations in Flask:
1. ENV var directly in cmd before running flask
2. app.config['xxx']
3. app.config.from_object (module or class)
4. app.config.from_pyfile (file)
5. app.config.from_envvar (FILEPATH_ENVVAR)
I have read that, when unit testing for example, it is best to implement the factory pattern, as some configs don't work when overwritten.
So I am wondering if it is safe to use multiple methods from above?
For example, if I follow the steps below together, is it safe to assume the config will be applied correctly?
Step 1: Use method 1 before running the application (for example, to set an ENV var secret key, to set an ENV var that can be checked in the code at step 2 to decide whether to apply the dev or prod config settings class, or to set an ENV var PATH_TO_SOMECONFIGFILE).
Step 2: Immediately after initializing the app object, use method 3 (to set default production settings, or to check the ENV var set in the above step and invoke the appropriate dev/prod class).
Step 3: Immediately after the above step, use method 4 or 5 to update the config settings.
So will the settings from step 3 correctly overwrite all previously set settings? And is this good practice? And doesn't it negate the benefit of using the factory pattern? I have read that not using the factory pattern (for example while unit testing) can result in certain config updates not applying correctly - hence the factory pattern is used to get a fresh object with the needed config applied.
I'll start by answering your questions and then I'll continue with a few explanations regarding configuration best practices.
So I am wondering if it is safe to use multiple methods from above?
Yes. And it's in fact recommended that you use multiple ways of loading your configuration (see below why).
So will the settings from step 3 overwrite all previous (previously set) settings correctly? And is this good practice?
Yes. And usually the best way to learn things is to try things out. So I'll give you an example below, using classes and inheritance just to prove how overriding works (you can paste this in a Python module and run it, assuming you have Flask installed), but you should go ahead and experiment with all of the above methods that you've read about and mix and match them to solidify your knowledge.
from flask import Flask

class Config:
    LOGGING_CONFIGURATION = None
    DATABASE = None

class DevelopmentConfig(Config):
    LOGGING_CONFIGURATION = 'Development'
    DATABASE = 'development.db'

class TestConfig(Config):
    LOGGING_CONFIGURATION = 'Test'
    DATABASE = 'test.db'

def create_app(config):
    app = Flask(__name__)
    app.config.from_object(Config)  # we load the default configuration
    print(app.config['LOGGING_CONFIGURATION'], app.config['DATABASE'])
    app.config.from_object(config)  # we override with the parameter config
    print(app.config['LOGGING_CONFIGURATION'], app.config['DATABASE'])
    return app

if __name__ == '__main__':
    app = create_app(DevelopmentConfig)
    test_app = create_app(TestConfig)  # you do this during tests
which outputs:
None None
Development development.db
None None
Test test.db
And doesn't it negate the benefit of using factory pattern since I have read that not using factory pattern (for example while unit testing) could result in certain config if updated will not apply correctly. Hence create factory pattern to get fresh object with the needed config applied.
No. You're confusing things here, loading configurations and using application factories are NOT mutually exclusive. In fact, they work together.
You are correct that messing with some config values, such as ENV and DEBUG, which are special config values, may cause the application to behave inconsistently. More details on those special values here. So try not to alter those once the application has finished setting up. See more in the word of advice section below.
Further clarifications
The way Flask is designed usually requires the configuration to be available when the application starts up, and you usually need SOME KIND of configuration because, depending on your environment, you might want to change different settings like:
the SECRET KEY for example (if you're using Cookies for instance)
toggle the DEBUG mode (you definitely don't want to use this in production, but you do in development mode)
using a different DATABASE (which might be a path to a different file, if you're using SQLITE3, which is super useful in tests, where you don't want to use the production database)
using a different logging configuration (you'll maybe want critical information logged to a FILE in production, but in development you'll want everything logged to the CONSOLE, for debugging purposes)
etc
Now you see that, because you need different settings for different environments, you need some mechanism to toggle between configurations. This is why using all those methods above combined is useful and recommended: generally you load some kind of DEFAULT configuration and then override it accordingly based on the environment (PRODUCTION, DEVELOPMENT, etc.).
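Here's a small sketch of that default-then-override flow (the APP_ENV variable name and its values are made up for illustration, not a Flask convention):

```python
import os
from flask import Flask

class DefaultConfig:
    DEBUG = False
    DATABASE = "prod.db"

def create_app():
    app = Flask(__name__)
    app.config.from_object(DefaultConfig)           # 1) load the defaults first
    if os.environ.get("APP_ENV") == "development":  # 2) toggle via the environment
        app.config.update(DEBUG=True, DATABASE="dev.db")
    return app

os.environ["APP_ENV"] = "development"
app = create_app()
print(app.config["DATABASE"])  # dev.db -- the override loaded last wins
```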
The way you would achieve all the above is described very meticulously with enough examples here, which is beyond the scope of this thread, but I'd encourage you to go through each example and try it out.
I also have an open source app, built with Flask, where I've used classes and inheritance to load different configurations based on different environments and went even further to customise those as well. So not only do I load, say, the DEVELOPMENT configuration, but I even customise it further should I want to. Examples are here, and I load them from here. Do try to understand the mechanism by which I load the configuration, it will help you.
Edit:
The config object is just a subclass of a dictionary with additional methods that help you load configurations from different places (the ones you've seen above). It doesn't matter how you load or override your configuration - all the settings will be loaded into the config dictionary, which belongs to the Flask object. This is where your configuration will live.
So it doesn't matter whether you do:
# override a setting if it exists or add it if it doesn't
# just like you would do in a normal dictionary
app.config[key] = value

# this will either override all the values that are loaded
# from that object or it will add them
app.config.from_object(obj)

# same as above, and the same goes for any method you use to add/override
app.config.from_pyfile(filename)
The same goes for any other method not listed here that might load/override settings. They all have the same precedence, just be careful of the order in which you override, the configuration you load last is the one that the app will use.
Word of advice
Try to load your configuration, along with all its core settings, regardless of your environment, as soon as your application starts up, and don't alter it from that point forward. If you find yourself altering a lot of settings from different parts of the application after the application is running (i.e. you're doing a lot of app.config[key] = another_value), rethink your design, as that is an unstable implementation. What if one part of the application expects a setting that hasn't been set up by another part of the application? Don't do that.

Pattern to get rid of imports in modules as in web2py controllers

I am new to web2py and Python both. I am writing a sample blog app in this framework. I want to split the business logic that gets called in each controller method into its own module, and found these examples helpful:
http://www.web2pyslices.com/slice/show/1478/using-modules-in-web2py
Cleaning up web2py my controllers
As you can see, you need to import objects in modules or set them through globals.current. A controller, however, can refer to the "db" and "request" instances (for example) without any import. What kind of coding mechanism makes this possible in a controller but not elsewhere?
The web2py framework does a lot of behind the scenes work to make all that stuff available.
For example, when you go to a URL like host/app/controller, that controller is called by web2py (starting with something in web2py.py) that handles importing web2py modules, providing request/response objects, etc.
Things placed in modules, however, are intended to be standalone Python code, not necessarily specific to web2py.
Found the answer:
It looks like web2py works by compiling the Python code for the controllers, models, and views on the fly, then running them in its special 'environment'.
Related snippets of code are:
https://github.com/web2py/web2py/blob/master/gluon/main.py#L205-263
In the file above, look at the build_environment, run_models_in, and run_controller_in functions (below):
https://github.com/web2py/web2py/blob/master/gluon/compileapp.py#L385-487
https://github.com/web2py/web2py/blob/master/gluon/compileapp.py#L504-539
https://github.com/web2py/web2py/blob/master/gluon/compileapp.py#L542-607
Which run the python code in a 'restricted' environment:
https://github.com/web2py/web2py/blob/master/gluon/restricted.py#L197-225
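The core of that mechanism can be sketched in a few lines: the controller source is compiled and exec'd against a prepared globals dict, so names like request are injected rather than imported. The names below are illustrative, not web2py's actual code:

```python
# A toy version of web2py's build_environment / run_controller_in idea.
controller_source = '''
def index():
    return "method=%s" % request["method"]
'''

def build_environment():
    # web2py injects framework objects (db, request, response, ...) here
    return {"request": {"method": "GET"}}

env = build_environment()
# exec the controller code with env as its globals: "request" resolves
# to the injected object, with no import statement anywhere
exec(compile(controller_source, "default.py", "exec"), env)
print(env["index"]())  # method=GET
```

Code placed in modules/ is imported normally instead of exec'd, which is why it has to import (or fetch via globals.current) what controllers get for free.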

How do I dynamically import a module in App Engine?

I'm trying to dynamically load a class from a specific module (called 'commands'), and the code runs totally fine on my local setup running from a local Django server. It bombs out, though, when I deploy to Google App Engine. I've tried adding the commands module's parent module to the import as well, to no avail (on either setup, in that case). Here's the code:
mod = __import__('commands.%s' % command, globals(), locals(), [command])
return getattr(mod, command)
App Engine just throws an ImportError whenever it hits this.
And to clarify: it doesn't bomb out on the commands module itself. If I have a command like 'commands.cat', it can't find 'cat'.
I was getting import errors when importing this way when my folder/package was named "commands". I renamed the package to "cmds" and everything worked. I'm guessing there was a conflict with the standard library module named "commands". Also, I don't know if it matters, but I only passed a value for the name parameter when calling __import__:
__import__('cmds.' + command_name)
Nick Johnson from the AppEngine team wrote up a blog post on this topic that may help you:
Webapps on App Engine, part 6: Lazy loading
The whole batch of them are recommended reading.
My AppEngine framework MVCEngine dynamically imports controller classes. The actual code in-context can be browsed on Google Code.
Briefly, here's how I do it:
# __import__ takes a dotted module name, not a file path; a non-empty
# fromlist makes it return the leaf module instead of the top package
controller_name = "foo"
controller_module = "app.controllers.%s_controller" % controller_name
controller = __import__(controller_module, fromlist=[controller_name])
controllerClass = classForName(controller_name, namespace=controller.__dict__)
and the classForName function:
def classForName(name, *args, **kw):
    ns = kw.get('namespace', globals())
    return ns[name](*args)
I haven't read Nick's article on Lazy Loading, referenced above, but he is pretty much the authority on things AppEngine, and he has a better understanding than I do of the (all-important) performance characteristics of different approaches to coding for AppEngine. Definitely read his article.
You may want to have a look at mapreduce.util.for_name, which lets you dynamically import a class/function/method. I promise :) I will wrap that up in a blog post.
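For reference, importlib.import_module is the friendlier modern equivalent of the __import__ calls above; a small sketch, using the stdlib json module as a stand-in for a commands.<name> module:

```python
import importlib

# Dynamically import a module by dotted name, then look up an
# attribute on it by string -- the same pattern as in the question.
mod = importlib.import_module("json")  # stand-in for "commands.cat"
func = getattr(mod, "loads")           # stand-in for the command class
print(func("[1, 2, 3]"))  # [1, 2, 3]
```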

How to arrange the source code of an application made with SQLAlchemy and a graphic interface?

I'm developing an application using SQLAlchemy and wxPython that I'm trying to keep split into separate modules: business logic, ORM, and GUI.
I'm not completely sure how to do this in a pythonic way.
Given that mapping() has to be called in order for the objects to be used, I thought of putting it in the __init__.py of the business logic, but keeping all the table definitions within a separate orm.py module.
Should I keep something like:
/Business
    /__init__.py
        mapping(module1.Class1, orm.table1)
    /module1.py
        Class1
    /orm.py
        import
        table1 = Table()
/GUI
    /main.py
        import business
    /crud.py
or something like
/Business
    /__init__.py
        import
    /module1.py
        Class1
        table1 = Table()
        mapping(module1.Class1, orm.table1)
/GUI
    /main.py
        import business
    /crud.py
Is the first approach recommended? Is there any other option? I've seen the second way, but I don't like putting the database-handling code and the business logic code in the same module. Am I overthinking it? Is it really not that big a problem?
I find this document by Jp Calderone to be a great tip on how (not) to structure your Python project. If you follow it you won't have issues. I'll reproduce the entire text here:
Filesystem structure of a Python project
Do:
- Name the directory something related to your project. For example, if your project is named "Twisted", name the top-level directory for its source files Twisted. When you do releases, you should include a version number suffix: Twisted-2.5.
- Create a directory Twisted/bin and put your executables there, if you have any. Don't give them a .py extension, even if they are Python source files. Don't put any code in them except an import of and call to a main function defined somewhere else in your projects.
- If your project is expressable as a single Python source file, then put it into the directory and name it something related to your project. For example, Twisted/twisted.py. If you need multiple source files, create a package instead (Twisted/twisted/, with an empty Twisted/twisted/__init__.py) and place your source files in it. For example, Twisted/twisted/internet.py.
- Put your unit tests in a sub-package of your package (note - this means that the single Python source file option above was a trick - you always need at least one other file for your unit tests). For example, Twisted/twisted/test/. Of course, make it a package with Twisted/twisted/test/__init__.py. Place tests in files like Twisted/twisted/test/test_internet.py.
- Add Twisted/README and Twisted/setup.py to explain and install your software, respectively, if you're feeling nice.
Don't:
- Put your source in a directory called src or lib. This makes it hard to run without installing.
- Put your tests outside of your Python package. This makes it hard to run the tests against an installed version.
- Create a package that only has an __init__.py and then put all your code into __init__.py. Just make a module instead of a package, it's simpler.
- Try to come up with magical hacks to make Python able to import your module or package without having the user add the directory containing it to their import path (either via PYTHONPATH or some other mechanism). You will not correctly handle all cases and users will get angry at you when your software doesn't work in their environment.
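Put together, the "Do" layout can be sketched as a few shell commands (using the hypothetical Twisted example from the text above):

```shell
# Create the recommended skeleton: bin/ for executables, a package
# with its tests as a sub-package, plus README and setup.py.
mkdir -p Twisted/bin Twisted/twisted/test
touch Twisted/twisted/__init__.py
touch Twisted/twisted/internet.py
touch Twisted/twisted/test/__init__.py
touch Twisted/twisted/test/test_internet.py
touch Twisted/README Twisted/setup.py
find Twisted -type f | sort
```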
