Retrieve an app configuration setting in a non-request context in Pyramid - python

In a pyramid app I am building (called pyplay), I need to retrieve an application setting that I have in development.ini. The problem is that the place where I am trying to get that setting cannot access the request variable (e.g. at the top level of a module file).
So, after looking at this example in the documentation: http://docs.pylonsproject.org/projects/pyramid_cookbook/en/latest/configuration/django_settings.html I started doing something very simple and hardcoded at first just to make it work.
Since my development.ini has an [app:main] section, the simple example I tried is as follows:
from paste.deploy.loadwsgi import appconfig
config = appconfig('config:development.ini', 'main', relative_to='.')
but the application refuses to start and displays the following error:
ImportError: <module 'pyplay' from '/home/pish/projects/pyplay/__init__.pyc'> has no 'main' attribute
So, thinking that maybe I should put 'pyplay' instead of 'main', I went ahead, but I got this error instead:
LookupError: No section 'pyplay' (prefixed by 'app' or 'application' or 'composite' or 'composit' or 'pipeline' or 'filter-app') found in config ./development.ini
At this point I am a bit stuck and I don't know what I'm doing wrong. Can someone please give me a hand on how to do this?
Thanks in advance!
EDIT: The following are the contents of my development.ini file (note that pish.theparam is the setting I am trying to get):
###
# app configuration
# http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/environment.html
###
[app:main]
use = egg:pyplay
pyramid.reload_templates = true
pyramid.debug_authorization = false
pyramid.debug_notfound = false
pyramid.debug_routematch = false
pyramid.default_locale_name = en_US.utf8
pyramid.includes =
    pyramid_debugtoolbar
    pyramid_tm
sqlalchemy.url = mysql://user:passwd@localhost/pyplay?charset=utf8
# By default, the toolbar only appears for clients from IP addresses
# '127.0.0.1' and '::1'.
debugtoolbar.hosts = 127.0.0.1 ::1
pish.theparam = somevalue
###
# wsgi server configuration
###
[server:main]
use = egg:waitress#main
host = 0.0.0.0
port = 6543
###
# logging configuration
# http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/logging.html
###
[loggers]
keys = root, pyplay, sqlalchemy
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console
[logger_pyplay]
level = DEBUG
handlers =
qualname = pyplay
[logger_sqlalchemy]
level = INFO
handlers =
qualname = sqlalchemy.engine
# "level = INFO" logs SQL queries.
# "level = DEBUG" logs SQL queries and results.
# "level = WARN" logs neither. (Recommended for production systems.)
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s

The reason it's difficult to do in Pyramid is that module-level settings are always a bad idea. They mean your module can only ever be used in one way per process (different code paths can't use your library in different ways). :-)
A hack around not having access to the request object is to at least hide your global behind a function call, so that the global can be different per-thread (which is basically per-request).
import pyramid.threadlocal

def get_my_param(registry=None):
    if registry is None:
        registry = pyramid.threadlocal.get_current_registry()
    return registry.settings['pish.theparam']
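Usage would then look something like this (a sketch; request is whatever request object is in scope):

# inside request-lifecycle code: pass the registry explicitly
value = get_my_param(request.registry)

# outside a request (e.g. module-level helpers): fall back to the threadlocal
value = get_my_param()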

Step 1: create a singleton class, say in a file xyz_file
class Singleton:
    def __init__(self, klass):
        self.klass = klass
        self.instance = None

    def __call__(self, *args, **kwds):
        if self.instance is None:
            self.instance = self.klass(*args, **kwds)
        return self.instance

@Singleton
class ApplicationSettings(object):
    def __init__(self, app_settings=None):
        if app_settings is not None:
            self._settings = app_settings

    def get_appsettings_object(self):
        return self

    def get_application_configuration(self):
        return self._settings
Step 2: in "__ init__.py"
def main(global_config, **settings):
    ...
    app_settings = ApplicationSettings(settings)
Step 3: You should be able to access in any part of the code.
from xyz_file import ApplicationSettings
app_settings = ApplicationSettings().get_application_configuration()

Basically, if you don't have access to the request object, you're "off the rails" in Pyramid. To do things the Pyramid way, we make components and figure out where they belong in the Pyramid lifecycle, and they should always have direct access to either or both of the registry (the ZCA) and the request.
If what you're doing doesn't fit in the request lifecycle, then it's probably something that should be instantiated at server start-up time, normally in your __init__.py where you build and fill the configurator (our access to the registry). Don't be afraid to use the registry to allow other components to get at things 'pseudo-globally' later. So probably you want to make some kind of factory for your thing, call the factory in your start-up code, perhaps passing in a reference to the registry as an argument, and then attach the object to the registry. If your component needs to interface with request-lifecycle code, give it a method that takes request as a param. Later, anything that needs this object can get it from the registry, and anything this object needs to get at can be reached through either the registry or the request.
You can totally use the hack in the other answer to get at the current global registry, but needing to do so is a code smell, you can def figure out a better design to eliminate that.
pseudo code example, in the server start up code:
# in the init block where our Configurator has been built
from myfactory import MyFactory
config.registry.my_component = MyFactory(config.registry)
# you can now get at my_component from anywhere in a pyramid system
your component:
class MyFactory(object):
    def __init__(self, registry):
        # server start up lifecycle stuff here
        self.registry = registry

    def get_something(self, request):
        # do stuff with the rest of the system
        setting_wanted = self.registry.settings['my_setting']
        return setting_wanted
Pyramid Views work this way. They are actually ZCA multi-adapters of request and context. Their factory is registered in the registry, and then when the view lookup process kicks off, the factory instantiates a view, passing in request as a param.
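For illustration, request-lifecycle code might then reach the component through request.registry (a sketch reusing the names above):

# hypothetical view: fetch the component registered at start up
def my_view(request):
    my_component = request.registry.my_component
    setting = my_component.get_something(request)
    return {'setting': setting}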

Related

Class variable was cached in the next request when using a django-rest-framework renderer

I'm using uwsgi, django, and django-rest-framework to develop an application.
I introduced a class variable in a renderer class; this variable is filled in as part of the response.
The problem looks like the following:
class xxxRenderer(xxxBase):
    response_pb_msg = obj  # an instance of a protobuf message

    def render(self):
        if True:
            self.response_pb_msg.items = []
        else:
            self.response_pb_msg.retCode = 100
            self.response_pb_msg.otherXXXX = xxxx
In a django logger handler, I access this class variable again, like the following:
xxxRenderer.response_pb_msg.ParseFromString(body)
After the first response, this class variable response_pb_msg only has one property, "retCode";
but in the second response, it has three properties: "retCode", "otherXXXX", and "items".
It's strange: the second response contains all the content that existed in the first response.
Later, I rewrote this class as follows:
class xxxBaserender(xxRender):
    def __init__(self):
        if self.response_pb_msg_cls is not None and isinstance(self.response_pb_msg_cls, GeneratedProtocolMessageType):
            self.response_pb_message = self.response_pb_msg_cls()

class xxxRenderer(xxxBaserender):
    response_pb_msg_cls = msgName  # the class of the protobuf message
Theoretically, the second class is OK. I tested it, and it didn't reproduce the problem.
Let's return to where we started:
after every request finishes, all resources should be cleaned up.
But I'm very puzzled by this problem; it seems the class variable is not released in the uwsgi process after the response returns.
I read "PEP 3333" and didn't get any valuable information.
I think I didn't fully understand class variables, WSGI, and the web processing flow in Python.
Can anyone help me understand this problem?
Thanks very much.
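A minimal sketch of the underlying behavior (plain CPython, independent of uwsgi or DRF): a class attribute lives on the class object, and the class object persists for the lifetime of the worker process, so mutating it leaks state across requests:

class Renderer:
    state = []  # class attribute: one object shared by every instance in this process

# "request 1" mutates the shared object
r1 = Renderer()
r1.state.append("retCode")

# "request 2" gets a fresh instance, but the same class attribute
r2 = Renderer()
print(r2.state)  # ['retCode'] -- data from the previous "request" is still there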

Self-Updating Code?

I have a module that needs to update new variable values from the web, about once a week. I could place those variable values in a file & load those values on startup. Or, a simpler solution would be to simply auto-update the code.
Is this possible in Python?
Something like this...
def self_updating_module_template():
    dynamic_var1 = {'dynamic var1'}  # some kind of place holder tag
    dynamic_var2 = {'dynamic var2'}  # some kind of place holder tag
    return

def self_updating_module():
    dynamic_var1 = 'old data'
    dynamic_var2 = 'old data'
    return

def updater():
    new_data_from_web = ''
    new_dynamic_var1 = new_data_from_web  # Makes API call, gets values.
    new_dynamic_var2 = new_data_from_web
    # loads self_updating_module_template
    dynamic_var1 = new_dynamic_var1
    dynamic_var2 = new_dynamic_var2
    # replace module place holders with new values.
    # overwrite self_updating_module.py.
    return
I would recommend that you use configparser and a set of default values located in an ini-style file.
The ConfigParser class implements a basic configuration file parser
language which provides a structure similar to what you would find on
Microsoft Windows INI files. You can use this to write Python programs
which can be customized by end users easily.
Whenever the configuration values are updated from the web api endpoint, configparser also lets us write those back out to the configuration file. That said, be careful! The reason that most people recommend that configuration files be included at build/deploy time and not at run time is for security/stability. You have to lock down the endpoint that allows updates to your running configuration in production and have some way to verify any configuration value updates before they are retrieved by your application:
import configparser

filename = 'config.ini'

def load_config():
    config = configparser.ConfigParser()
    config.read(filename)
    if 'WEB_DATA' not in config:
        config['WEB_DATA'] = {'dynamic_var1': 'dynamic var1',  # some kind of place holder tag
                              'dynamic_var2': 'dynamic var2'}  # some kind of place holder tag
    return config

def update_config(config):
    new_data_from_web = ''
    new_dynamic_var1 = new_data_from_web  # Makes API call, gets values.
    new_dynamic_var2 = new_data_from_web
    config['WEB_DATA']['dynamic_var1'] = new_dynamic_var1
    config['WEB_DATA']['dynamic_var2'] = new_dynamic_var2

def save_config(config):
    with open(filename, 'w') as configfile:
        config.write(configfile)
Example usage:
# Load the configuration
config = load_config()
# Get new data from the web
update_config(config)
# Save the newly updated configuration back to the file
save_config(config)

Is it possible to store the alembic connect string outside of alembic.ini?

I'm using Alembic with SQLAlchemy. With SQLAlchemy, I tend to follow a pattern where I don't store the connect string with the versioned code. Instead, I have a file, secret.py, that contains any confidential information. I throw this filename in my .gitignore so it doesn't end up on GitHub.
This pattern works fine, but now I'm getting into using Alembic for migrations. It appears that I cannot hide the connect string. Instead in alembic.ini, you place the connect string as a configuration parameter:
# the 'revision' command, regardless of autogenerate
# revision_environment = false
sqlalchemy.url = driver://user:pass@localhost/dbname

# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
I fear I'm going to accidentally commit a file with username/password information for my database. I'd rather store this connect string in a single place and avoid the risk of accidentally committing it to version control.
What options do I have?
I had the very same problem yesterday and found the following solution to work.
I do the following in alembic/env.py:
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# this will overwrite the ini-file sqlalchemy.url path
# with the path given in the config of the main code
import config as ems_config
config.set_main_option('sqlalchemy.url', ems_config.config.get('sql', 'database'))
ems_config is an external module that holds my configuration data.
config.set_main_option(...) essentially overwrites the sqlalchemy.url key in the [alembic] section of the alembic.ini file. In my configuration I simply leave it blank.
The simplest thing I could come up with to avoid committing my user/pass was to a) add interpolation strings to the alembic.ini file, and b) set these interpolation values in env.py:
alembic.ini
sqlalchemy.url = postgresql://%(DB_USER)s:%(DB_PASS)s@35.197.196.146/nozzle-website
env.py
import os
from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# here we allow ourselves to pass interpolation vars to alembic.ini
# from the host env
section = config.config_ini_section
config.set_section_option(section, "DB_USER", os.environ.get("DB_USER"))
config.set_section_option(section, "DB_PASS", os.environ.get("DB_PASS"))
...
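With that in place, the credentials can be supplied from the environment when invoking alembic, for example (hypothetical values):

DB_USER=alice DB_PASS=s3cret alembic upgrade head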
Alembic documentation suggests using create_engine with the database URL (instead of modifying sqlalchemy.url in code).
Also you should modify run_migrations_offline to use the new URL. Allan Simon has an example on his blog, but in summary, modify env.py to:
Provide a shared function to get the URL somehow (here it comes from the command line):
def get_url():
    url = context.get_x_argument(as_dictionary=True).get('url')
    assert url, "Database URL must be specified on command line with -x url=<DB_URL>"
    return url
Use the URL in offline mode:
def run_migrations_offline():
    ...
    url = get_url()
    context.configure(
        url=url, target_metadata=target_metadata, literal_binds=True)
    ...
Use the URL in online mode by using create_engine instead of engine_from_config:
def run_migrations_online():
    ...
    connectable = create_engine(get_url())
    with connectable.connect() as connection:
        ...
So what appears to work is reimplementing engine creation in env.py, which is apparently a place for doing this kind of customization. Instead of using the sqlalchemy connect string from the ini:
engine = engine_from_config(
    config.get_section(config.config_ini_section),
    prefix='sqlalchemy.',
    poolclass=pool.NullPool)
You can replace and specify your own engine configuration:
import store
engine = store.engine
Indeed, the docs seem to imply this is OK:
sqlalchemy.url - A URL to connect to the database via SQLAlchemy. This key is in fact only referenced within the env.py file that is specific to the “generic” configuration; a file that can be customized by the developer. A multiple database configuration may respond to multiple keys here, or may reference other sections of the file.
I was looking for a while for a way to manage this for multiple databases.
Here is what I did. I have two databases: logs and ohlc.
According to the doc, I set up alembic like this:
alembic init --template multidb
alembic.ini
databases = logs, ohlc

[logs]
sqlalchemy.url = postgresql://botcrypto:botcrypto@localhost/logs

[ohlc]
sqlalchemy.url = postgresql://botcrypto:botcrypto@localhost/ohlc
env.py
[...]
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
logger = logging.getLogger('alembic.env')

# overwrite alembic.ini db urls from the config file
settings_path = os.environ.get('SETTINGS')
if settings_path:
    with open(settings_path) as fd:
        settings = conf.load(fd, context=os.environ)  # loads the config.yml
    config.set_section_option("ohlc", "sqlalchemy.url", settings["databases"]["ohlc"])
    config.set_section_option("logs", "sqlalchemy.url", settings["databases"]["logs"])
else:
    logger.warning('Environment variable SETTINGS missing - use default alembic.ini configuration')
[...]
config.yml
databases:
  logs: postgresql://botcrypto:botcrypto@127.0.0.1:5432/logs
  ohlc: postgresql://botcrypto:botcrypto@127.0.0.1:5432/ohlc
usage
SETTINGS=config.yml alembic upgrade head
Hope it helps!
In the case of MultiDB settings (the same for SingleDB), you can use config.set_section_option('section_name', 'variable_name', 'db_URL') to modify the values of the database URL in the alembic.ini file.
For example:
alembic.ini
[engine1]
sqlalchemy.url =
[engine2]
sqlalchemy.url =
Then,
env.py
config = context.config
config.set_section_option('engine1', 'sqlalchemy.url', os.environ.get('URL_DB1'))
config.set_section_option('engine2', 'sqlalchemy.url', os.environ.get('URL_DB2'))
env.py:
from os import getenv
from alembic.config import Config

alembic_cfg = Config()
alembic_cfg.set_main_option("sqlalchemy.url", getenv('PG_URI'))
https://alembic.sqlalchemy.org/en/latest/api/config.html
I was bumping into this problem as well since we're running our migrations from our local machines. My solution is to put environment sections in the alembic.ini which stores the database config (minus the credentials):
[local]
host = localhost
db = dbname
[test]
host = x.x.x.x
db = dbname
[prod]
host = x.x.x.x
db = dbname
Then I put the following in the env.py so the user can pick their environment and be prompted for the credentials:
from alembic import context
from getpass import getpass
...
envs = ['local', 'test', 'prod']

print('Warning: Do not commit your database credentials to source control!')
print(f'Available migration environments: {", ".join(envs)}')
env = input('Environment: ')

if env not in envs:
    print(f'{env} is not a valid environment')
    exit(0)

env_config = context.config.get_section(env)
host = env_config['host']
db = env_config['db']
username = input('Username: ')
password = getpass()

connection_string = f'postgresql://{username}:{password}@{host}/{db}'
context.config.set_main_option('sqlalchemy.url', connection_string)
You should store your credentials in a password manager that the whole team has access to, or whatever config/secret store you have available. Though, with this approach the password is exposed to your local clipboard; an even better approach would be to have env.py connect directly to your config/secret store API and pull out the username/password, but this adds a third-party dependency.
Another solution is to create a template alembic.ini.dist file and to track it with your versioned code, while ignoring alembic.ini in your VCS.
Do not add any confidential information in alembic.ini.dist:
sqlalchemy.url = ...
When deploying your code to a platform, copy alembic.ini.dist to alembic.ini (this one won't be tracked by your VCS) and modify alembic.ini with the platform's credentials.
As Doug T. said, you can edit env.py to provide the URL from somewhere other than the ini file. Instead of creating a new engine, you can pass an additional url argument to the engine_from_config function (kwargs are later merged into the options taken from the ini file). In that case you could, e.g., store an encrypted password in the ini file and decrypt it at runtime with a passphrase stored in an ENV variable.
connectable = engine_from_config(
    config.get_section(config.config_ini_section),
    prefix='sqlalchemy.',
    poolclass=pool.NullPool,
    url=some_decrypted_endpoint)
An option that worked for me was to use set_main_option and leave the sqlalchemy.url = blank in alembic.ini
from config import settings

config.set_main_option(
    "sqlalchemy.url", settings.database_url.replace("postgres://", "postgresql+asyncpg://", 1))
settings is a class in a config file that I use to get variables from an env file (see os.environ.get() does not return the Environment Value in windows? for more detail). Another option is to use os.environ.get, but make sure that you export the variable, to prevent errors like sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string.
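For reference, a minimal sketch of what such a settings object might look like (hypothetical names; the real class reads its values from an .env file):

import os

class Settings:
    @property
    def database_url(self):
        # assumes DATABASE_URL has been exported in the environment
        return os.environ["DATABASE_URL"]

settings = Settings()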
Based on the answer of TomDotTom I came up with this solution
Edit the env.py file with this
config = context.config
config.set_section_option("alembic", "sqlalchemy.url",
                          os.environ.get("DB_URL", config.get_section_option("alembic", "sqlalchemy.url")))  # type: ignore
This will override the sqlalchemy.url option from the alembic section with DB_URL environment variable if such environment variable exists, otherwise will use what else is in the alembic.ini file
Then I can run the migrations pointing to another database like this
DB_URL=driver://user:pass#host:port/dbname alembic upgrade head
And keep using alembic upgrade head during my development flow
I've tried all the answers here, but none worked. So I dealt with it myself, as below:
.ini file:
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = alembic
# template used to generate migration files
file_template = %%(rev)s_%%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d_%%(minute).2d_%%(second).2d
# timezone to use when rendering the date
# within the migration file as well as the filename.
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; this defaults
# to alembic/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
# version_locations = %(here)s/bar %(here)s/bat alembic/versions
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
databases = auth_engine
[auth_engine]
sqlalchemy.url = mysql+mysqldb://{}:{}@{}:{}/auth_db
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
.env file (in the root folder of my project):
DB_USER='root'
DB_PASS='12345678'
DB_HOST='127.0.0.1'
DB_PORT='3306'
env.py file:
from __future__ import with_statement
import os
import re
import sys
from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
DB_USER = os.getenv("DB_USER")
DB_PASS = os.getenv("DB_PASS")
DB_HOST = os.getenv("DB_HOST")
DB_PORT = os.getenv("DB_PORT")
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
# gather section names referring to different
# databases. These are named "engine1", "engine2"
# in the sample .ini file.
db_names = config.get_main_option('databases')
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
sys.path.append(os.path.join(os.path.dirname(__file__), "../../../"))
from db_models.auth_db import auth_db_base
target_metadata = {
    'auth_engine': auth_db_base.auth_metadata
}
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    engines = {}
    for name in re.split(r',\s*', db_names):
        engines[name] = rec = {}
        section = context.config.get_section(name)
        url = section['sqlalchemy.url'].format(DB_USER, DB_PASS, DB_HOST, DB_PORT)
        section['sqlalchemy.url'] = url
        rec['url'] = url
        # rec['url'] = context.config.get_section_option(name, "sqlalchemy.url")

    for name, rec in engines.items():
        print("Migrating database %s" % name)
        file_ = "%s.sql" % name
        print("Writing output to %s" % file_)
        with open(file_, 'w') as buffer:
            context.configure(url=rec['url'], output_buffer=buffer,
                              target_metadata=target_metadata.get(name),
                              compare_type=True,
                              compare_server_default=True
                              )
            with context.begin_transaction():
                context.run_migrations(engine_name=name)
def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    engines = {}
    for name in re.split(r',\s*', db_names):
        engines[name] = rec = {}
        section = context.config.get_section(name)
        url = section['sqlalchemy.url'].format(DB_USER, DB_PASS, DB_HOST, DB_PORT)
        section['sqlalchemy.url'] = url
        rec['engine'] = engine_from_config(
            section,
            prefix='sqlalchemy.',
            poolclass=pool.NullPool)

    for name, rec in engines.items():
        engine = rec['engine']
        rec['connection'] = conn = engine.connect()
        rec['transaction'] = conn.begin()

    try:
        for name, rec in engines.items():
            print("Migrating database %s" % name)
            context.configure(
                connection=rec['connection'],
                upgrade_token="%s_upgrades" % name,
                downgrade_token="%s_downgrades" % name,
                target_metadata=target_metadata.get(name),
                compare_type=True,
                compare_server_default=True
            )
            context.run_migrations(engine_name=name)
        for rec in engines.values():
            rec['transaction'].commit()
    except:
        for rec in engines.values():
            rec['transaction'].rollback()
        raise
    finally:
        for rec in engines.values():
            rec['connection'].close()

if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
Hope this helps someone else.
In env.py just add
config.set_main_option('sqlalchemy.url', os.environ['DB_URL'])
after
config = context.config
like
config = context.config
config.set_main_option('sqlalchemy.url', os.environ['DB_URL'])
and then execute like that:
DB_URL="mysql://atuamae:de4#127.0.0.1/db" \
alembic upgrade head

Get list of all routes defined in the Flask app

I have a complex Flask-based web app. There are lots of separate files with view functions. Their URLs are defined with the @app.route('/...') decorator. Is there a way to get a list of all the routes that have been declared throughout my app? Perhaps there is some method I can call on the app object?
All the routes for an application are stored on app.url_map which is an instance of werkzeug.routing.Map. You can iterate over the Rule instances by using the iter_rules method:
from flask import Flask, url_for

app = Flask(__name__)

def has_no_empty_params(rule):
    defaults = rule.defaults if rule.defaults is not None else ()
    arguments = rule.arguments if rule.arguments is not None else ()
    return len(defaults) >= len(arguments)

@app.route("/site-map")
def site_map():
    links = []
    for rule in app.url_map.iter_rules():
        # Filter out rules we can't navigate to in a browser
        # and rules that require parameters
        if "GET" in rule.methods and has_no_empty_params(rule):
            url = url_for(rule.endpoint, **(rule.defaults or {}))
            links.append((url, rule.endpoint))
    # links is now a list of url, endpoint tuples
See Display links to new webpages created for a bit more information.
I just ran into the same question. The solutions above are too complex.
Just open a new shell under your project:
>>> from app import app
>>> app.url_map
The first 'app' is my project script, app.py; the other is my app object's name.
(This solution is for a tiny app with only a few routes.)
I made a helper method in my manage.py:
@manager.command
def list_routes():
    import urllib
    output = []
    for rule in app.url_map.iter_rules():
        options = {}
        for arg in rule.arguments:
            options[arg] = "[{0}]".format(arg)
        methods = ','.join(rule.methods)
        url = url_for(rule.endpoint, **options)
        line = urllib.unquote("{:50s} {:20s} {}".format(rule.endpoint, methods, url))
        output.append(line)
    for line in sorted(output):
        print(line)
It solves the missing-argument problem by building a dummy set of options. The output looks like:
CampaignView:edit HEAD,OPTIONS,GET /account/[account_id]/campaigns/[campaign_id]/edit
CampaignView:get HEAD,OPTIONS,GET /account/[account_id]/campaign/[campaign_id]
CampaignView:new HEAD,OPTIONS,GET /account/[account_id]/new
Then to run it:
python manage.py list_routes
For more on manage.py checkout: http://flask-script.readthedocs.org/en/latest/
Apparently, since version 0.11, Flask has a built-in CLI. One of the built-in commands lists the routes:
FLASK_APP='my_project.app' flask routes
Similar to Jonathan's answer, I opted to do this instead. I don't see the point of using url_for, as it will break if your arguments are not strings, e.g. float:
@manager.command
def list_routes():
    import urllib
    output = []
    for rule in app.url_map.iter_rules():
        methods = ','.join(rule.methods)
        line = urllib.unquote("{:50s} {:20s} {}".format(rule.endpoint, methods, rule))
        output.append(line)
    for line in sorted(output):
        print(line)
Use the CLI command in the directory where your Flask project is:
flask routes
Since you did not specify that it has to be run command-line, the following could easily be returned in json for a dashboard or other non-command-line interface. The result and the output really shouldn't be commingled from a design perspective anyhow. It's bad program design, even if it is a tiny program. The result below could then be used in a web application, command-line, or anything else that ingests json.
You also didn't specify that you needed to know the python function associated with each route, so this more precisely answers your original question.
I use below to add the output to a monitoring dashboard myself. If you want the available route methods (GET, POST, PUT, etc.), you would need to combine it with other answers above.
Rule's repr() takes care of converting the required arguments in the route.
def list_routes():
    routes = []
    for rule in app.url_map.iter_rules():
        routes.append('%s' % rule)
    return routes

The same thing using a list comprehension:

def list_routes():
    return ['%s' % rule for rule in app.url_map.iter_rules()]
Sample output:
{
  "routes": [
    "/endpoint1",
    "/nested/service/endpoint2",
    "/favicon.ico",
    "/static/<path:filename>"
  ]
}
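If you also want the HTTP methods, a minimal sketch combining this with rule.methods (rule.rule is the URL pattern string):

def list_routes_with_methods():
    # e.g. "GET,HEAD,OPTIONS /endpoint1"
    return ['{} {}'.format(','.join(sorted(rule.methods)), rule.rule)
            for rule in app.url_map.iter_rules()]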
If you need to access the view functions themselves, then instead of app.url_map, use app.view_functions.
Example script:
from flask import Flask

app = Flask(__name__)

@app.route('/foo/bar')
def route1():
    pass

@app.route('/qux/baz')
def route2():
    pass

for name, func in app.view_functions.items():
    print(name)
    print(func)
    print()
Output from running the script above:
static
<bound method _PackageBoundObject.send_static_file of <Flask '__main__'>>
route1
<function route1 at 0x128f1b9d8>
route2
<function route2 at 0x128f1ba60>
(Note the inclusion of the "static" route, which is created automatically by Flask.)
You can view all the routes via flask shell by running the following commands after exporting or setting the FLASK_APP environment variable:
flask shell
app.url_map
inside your flask app do:
flask shell
>>> app.url_map
Map([<Rule '/' (OPTIONS, HEAD, GET) -> helloworld>,
<Rule '/static/<filename>' (OPTIONS, HEAD, GET) -> static>])
print(app.url_map)
That is, if your Flask application's name is 'app'.
It's an attribute of the instance of the Flask App.
See https://flask.palletsprojects.com/en/2.1.x/api/#flask.Flask.url_map

Why does my flask unit test not use a tempfile database when I had specified so?

The test still writes to my MySQL database instead of a sqlite tempfile db. Why does this happen? Thanks!
Here's my code:
class UserTests(unittest.TestCase):
    def setUp(self):
        self.app = get_app()
        # declare testing state
        self.app.config["TESTING"] = True
        self.db, self.app.config["DATABASE"] = tempfile.mkstemp()
        # spawn test client
        self.client = self.app.test_client()
        # temp db
        init_db()

    def tearDown(self):
        os.close(self.db)
        os.unlink(self.app.config["DATABASE"])

    def test_save_user(self):
        # create test user with 3 friends
        app_xs_token = get_app_access_token(APP_ID, APP_SECRET)
        test_user = create_test_user(APP_ID, app_xs_token)
        friend_1 = create_test_user(APP_ID, app_xs_token)
        friend_2 = create_test_user(APP_ID, app_xs_token)
        friend_3 = create_test_user(APP_ID, app_xs_token)
        make_friend_connection(test_user["id"], friend_1["id"], test_user["access_token"], friend_1["access_token"])
        make_friend_connection(test_user["id"], friend_2["id"], test_user["access_token"], friend_2["access_token"])
        make_friend_connection(test_user["id"], friend_3["id"], test_user["access_token"], friend_3["access_token"])
        save_user(test_user["access_token"])
This line might be the problem:
self.db, self.app.config["DATABASE"] = tempfile.mkstemp()
Print out the values of self.db and self.app.config["DATABASE"] and make sure they are what you expect them to be.
You probably want to investigate where your config self.app.config["DATABASE"] is referenced in your database code.
The Flask example code usually does a lot of work when the module is first imported. This tends to break things when you try to dynamically change values at run time, because by then it's too late.
You probably will need to use an application factory so your app isn't built before the test code can run. Also, the app factory pattern implies you are using the Blueprint interface instead of the direct app reference, which is acquired using a circular import in the example code.
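In outline, an application factory looks something like this (a sketch with illustrative names, not your actual code):

from flask import Flask

def create_app(test_config=None):
    app = Flask(__name__)
    app.config["DATABASE"] = "mysql://..."  # normal default (placeholder)
    if test_config:
        app.config.update(test_config)  # tests override before anything connects
    # register blueprints and initialise the db *after* config is final
    return app

# in setUp():
#   self.db_fd, db_path = tempfile.mkstemp()
#   self.app = create_app({"TESTING": True, "DATABASE": db_path})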
