I have 3 files and I want to do something like the following:
[1] conf.py
var = 10 # Initialized with 10 (start)
[2] file_1.py
import conf
print(conf.var) # Prints 10
conf.var = 1000 # Updated to 1000
[3] file_2.py
import conf
print(conf.var) # Prints 1000
conf.var = 9999 # Updated to 9999
I want something like this. Assume that file_1 and file_2 will keep running and stay in memory until CTRL+C is pressed. How can I change var from the other two files and persist its value? The final value of var should be 9999, so if we were to use it in a file_3 like in the other two files, it should print 9999, and so on.
Execution order: file_1.py -> file_2.py.
Help me identify some way to do this, or a module/package that can handle it.
Thanks! :)
Couldn't you use a class and never instantiate an object?
class Conf:
    var = 10
You would then update Conf with:
from conf import Conf
...
Conf.var = ...
Just never use Conf()..., as this creates an object instance.
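For example, here is a minimal sketch of how file_1.py and file_2.py from the question could share that class attribute. Note that this only works while both modules are imported into the same Python process, because the attribute lives on the Conf class object inside that process; it does not persist across separate program runs.

# conf.py
class Conf:
    var = 10  # initial value

# file_1.py
from conf import Conf

print(Conf.var)   # prints 10
Conf.var = 1000   # updated to 1000

# file_2.py (imported/run in the same process after file_1)
from conf import Conf

print(Conf.var)   # prints 1000
Conf.var = 9999   # updated to 9999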
Consider this approach:
class Lookup:
    # For storing and looking up variables using variable sets of key pairs.
    # Each key pair is a key-value tuple.
    # The entire set of key pair tuples uniquely stores and retrieves a variable value.
    # frozenset is used to get the benefit of a set while ensuring immutability, so it can be hashed for use as a dictionary key.
    lookupDictionary = {}

    def put(*, keyPairs, value):
        Lookup.lookupDictionary[frozenset(keyPairs)] = value

    def get(*, keyPairs, default, matchWildcard=False):  # No substring searching; only '*' can be used to match an entire string
        if matchWildcard and any(y[1] == '*' for y in keyPairs):
            # If we are here, we need to match using the wildcard.
            valuedKeyPairs = set(y for y in keyPairs if y[1] != '*')  # Separate valued pairs from wildcard pairs
            starKeyPairs = set(keyPairs) - valuedKeyPairs
            starKeyPairNames = [x[0] for x in starKeyPairs]
            return [Lookup.lookupDictionary[i] for i in Lookup.lookupDictionary.keys()
                    if valuedKeyPairs.issubset(i) and sorted(starKeyPairNames) == sorted([x[0] for x in i - valuedKeyPairs])]
        return Lookup.lookupDictionary.get(frozenset(keyPairs), default)

    def getAllValues(*, keyPairs):
        # Returns all values whose keys contain part or all of the given key pairs
        return [Lookup.lookupDictionary[i] for i in set(Lookup.lookupDictionary.keys()) if set(keyPairs).issubset(i)]

class utils:
    def setCacheEntry(*, zipFileName, functionName, value):
        Lookup.put(keyPairs=[('__zipFileName__', zipFileName), ('__functionName__', functionName)], value=value)

    def getCacheEntry(*, zipFileName, functionName, default):
        return Lookup.get(keyPairs=[('__zipFileName__', zipFileName), ('__functionName__', functionName)], default=default)
from Lookup import utils
from Lookup import Lookup
utils.setCacheEntry(zipFileName='FileName.zip',functionName='some_func',value='some_value')
utils.getCacheEntry(zipFileName='FileName.zip',functionName='some_func', default=1)
Output:
'some_value'
Here is a simpler case:
from Lookup import Lookup

def setParamValue(*, paramName, value):
    Lookup.put(keyPairs=[('__paramName__', paramName)], value=value)

def getParamValue(*, paramName, default):
    return Lookup.get(keyPairs=[('__paramName__', paramName)], default=default, matchWildcard=True)
setParamValue(paramName='name', value='John')
setParamValue(paramName='age', value=23)
getParamValue(paramName='name', default='Unknown')
Output: John
getParamValue(paramName='age', default=-1)
Output: 23
getParamValue(paramName='*', default='Unknown')
Output: ['John', 23]
If you need to communicate variables/objects between different programs,
the above does not work.
To do that you can use pickle to dump and load objects via the filesystem.
(I have not tested whether it does a deep or a shallow dump of objects, but pickling is most likely deep, since referenced objects are serialized recursively.)
import pickle

# Dumping
with open('object_pickle', 'wb') as f:
    pickle.dump('some_value', f)

# Loading
with open('object_pickle', 'rb') as f2:
    var = pickle.load(f2)
print(var)
Output: 'some_value'
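Tying this back to the original question, here is a minimal sketch of how file_1.py and file_2.py could persist var across separate processes with pickle (the file name conf_var.pkl and the helper names are my own assumptions, not part of the question):

# conf.py
import pickle

PICKLE_FILE = 'conf_var.pkl'  # assumed file name

def load_var(default=10):
    # Return the persisted value, or the default on the very first run
    try:
        with open(PICKLE_FILE, 'rb') as f:
            return pickle.load(f)
    except FileNotFoundError:
        return default

def save_var(value):
    with open(PICKLE_FILE, 'wb') as f:
        pickle.dump(value, f)

# file_1.py
#     import conf
#     print(conf.load_var())   # prints 10 on the first run
#     conf.save_var(1000)
#
# file_2.py
#     import conf
#     print(conf.load_var())   # prints 1000
#     conf.save_var(9999)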
I have worked on this problem, and I suggest using a shared in-memory cache such as Redis or memcached. Both start a server instance that you connect to and use like a key-value store. Note that values should be of type str or bytes.
Memcached
Install memcached
sudo apt install memcached
Start Server with default settings
You need to start the server before using it, or you can set it up as a service on Linux.
sudo service memcached start
Install python memcached client
pip install pymemcache
Sample Code
from pymemcache.client import base
client = base.Client(('localhost', 11211))
client.set('some_key', 'stored value')
client.get('some_key') # Returns "stored value"
Redis
The process is exactly the same for Redis.
Install redis
sudo apt install redis
Start Server with default settings
You need to start the server before using it, or you can set it up as a service on Linux.
sudo service redis start
Install python redis client
pip install redis
Sample Code
import redis
client = redis.Redis() # Default settings/credentials
client.set('some_key', 'stored value')
client.get('some_key') # Returns "stored value"
Usage
Redis gives more safety and fault tolerance.
Both have memory limits, but they are configurable. If your data is larger, use files or a database suited to your problem.
Both are easy to configure.
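For instance, here is a rough sketch of the question's scenario on top of Redis (the key name 'var' is an assumption; note that redis-py returns bytes, so the value is converted back to an int):

# file_1.py
import redis

client = redis.Redis()                      # default localhost:6379
raw = client.get('var')
var = int(raw) if raw is not None else 10   # fall back to the initial value
print(var)                                  # prints 10 on the first run
client.set('var', 1000)                     # updated to 1000

# file_2.py does the same get/int conversion, prints 1000, then calls client.set('var', 9999).

The value survives as long as the Redis server keeps running, independently of the Python processes.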
I am using Python with python-kubernetes and a Minikube cluster running locally, i.e. there are no cloud issues.
I am trying to create a job and provide it with data to run on. I would like to provide it with a mount of a directory containing data from my local machine.
I am using this example and trying to add a volume mount.
This is my code after adding the keyword volume_mounts (I tried multiple places and multiple keywords, and nothing works):
from os import path

import yaml

from kubernetes import client, config

JOB_NAME = "pi"


def create_job_object():
    # Configure Pod template container
    container = client.V1Container(
        name="pi",
        image="perl",
        volume_mounts=["/home/user/data"],
        command=["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"])
    # Create and configure a spec section
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "pi"}),
        spec=client.V1PodSpec(restart_policy="Never",
                              containers=[container]))
    # Create the specification of the job
    spec = client.V1JobSpec(
        template=template,
        backoff_limit=0)
    # Instantiate the job object
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=JOB_NAME),
        spec=spec)

    return job


def create_job(api_instance, job):
    # Create job
    api_response = api_instance.create_namespaced_job(
        body=job,
        namespace="default")
    print("Job created. status='%s'" % str(api_response.status))


def update_job(api_instance, job):
    # Update container image
    job.spec.template.spec.containers[0].image = "perl"
    # Update the job
    api_response = api_instance.patch_namespaced_job(
        name=JOB_NAME,
        namespace="default",
        body=job)
    print("Job updated. status='%s'" % str(api_response.status))


def delete_job(api_instance):
    # Delete job
    api_response = api_instance.delete_namespaced_job(
        name=JOB_NAME,
        namespace="default",
        body=client.V1DeleteOptions(
            propagation_policy='Foreground',
            grace_period_seconds=5))
    print("Job deleted. status='%s'" % str(api_response.status))


def main():
    # Configs can be set in Configuration class directly or using helper
    # utility. If no argument provided, the config will be loaded from
    # default location.
    config.load_kube_config()
    batch_v1 = client.BatchV1Api()
    # Create a job object with client-python API. The job we
    # created is same as the `pi-job.yaml` in the /examples folder.
    job = create_job_object()

    create_job(batch_v1, job)
    update_job(batch_v1, job)
    delete_job(batch_v1)


if __name__ == '__main__':
    main()
I get this error
HTTP response body:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Job
in version \"v1\" cannot be handled as a Job: v1.Job.Spec:
v1.JobSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers:
[]v1.Container: v1.Container.VolumeMounts: []v1.VolumeMount:
readObjectStart: expect { or n, but found \", error found in #10 byte
of ...|ounts\": [\"/home/user|..., bigger context ...| \"image\":
\"perl\", \"name\": \"pi\", \"volumeMounts\": [\"/home/user/data\"]}],
\"restartPolicy\": \"Never\"}}}}|...","reason":"BadRequest","code":400
What am I missing here?
Is there another way to expose data to the job?
Edit: trying to use client.V1VolumeMount
I am trying to add this code, and to add the mount object in different init functions, e.g.:
mount = client.V1VolumeMount(mount_path="/data", name="shai")
client.V1Container
client.V1PodTemplateSpec
client.V1JobSpec
client.V1Job
under multiple keywords, and it all results in errors. Is this the correct object to use? How should I use it, if at all?
Edit: trying to pass volume_mounts as a list with the following code suggested in the answers:
def create_job_object():
    # Configure Pod template container
    container = client.V1Container(
        name="pi",
        image="perl",
        volume_mounts=["/home/user/data"],
        command=["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"])
    # Create and configure a spec section
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "pi"}),
        spec=client.V1PodSpec(restart_policy="Never",
                              containers=[container]))
    # Create the specification of the job
    spec = client.V1JobSpec(
        template=template,
        backoff_limit=0)
    # Instantiate the job object
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=JOB_NAME),
        spec=spec)

    return job
And I am still getting a similar error:
kubernetes.client.rest.ApiException: (422) Reason: Unprocessable
Entity HTTP response headers: HTTPHeaderDict({'Content-Type':
'application/json', 'Date': 'Tue, 06 Aug 2019 06:19:13 GMT',
'Content-Length': '401'}) HTTP response body:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Job.batch
\"pi\" is invalid:
spec.template.spec.containers[0].volumeMounts[0].name: Not found:
\"d\"","reason":"Invalid","details":{"name":"pi","group":"batch","kind":"Job","causes":[{"reason":"FieldValueNotFound","message":"Not
found:
\"d\"","field":"spec.template.spec.containers[0].volumeMounts[0].name"}]},"code":422}
The V1Container call expects a list of V1VolumeMount objects for the volume_mounts parameter, but you passed in a list of strings:
Code:
def create_job_object():
    volume_mount = client.V1VolumeMount(
        mount_path="/home/user/data"
        # other optional arguments, see the volume mount doc link below
    )
    # Configure Pod template container
    container = client.V1Container(
        name="pi",
        image="perl",
        volume_mounts=[volume_mount],
        command=["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"])
    # Create and configure a spec section
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "pi"}),
        spec=client.V1PodSpec(restart_policy="Never",
                              containers=[container]))
....
references:
https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1Container.md
https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1VolumeMount.md
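Note that a V1VolumeMount must reference, by name, a volume declared on the pod spec; the 422 error in the second edit ("Not found") is Kubernetes complaining about exactly that. Here is a sketch of wiring both together for the Minikube use case (the volume name local-data, the host path /home/user/data and the mount path /data are assumptions):

def create_job_object():
    # A pod-level volume backed by a directory on the node
    volume = client.V1Volume(
        name="local-data",
        host_path=client.V1HostPathVolumeSource(path="/home/user/data"))
    # The container-level mount must use the same name as the volume above
    volume_mount = client.V1VolumeMount(
        name="local-data",
        mount_path="/data")
    container = client.V1Container(
        name="pi",
        image="perl",
        volume_mounts=[volume_mount],
        command=["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"])
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "pi"}),
        spec=client.V1PodSpec(restart_policy="Never",
                              volumes=[volume],
                              containers=[container]))
    spec = client.V1JobSpec(template=template, backoff_limit=0)
    return client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=JOB_NAME),
        spec=spec)

Keep in mind that with Minikube a hostPath refers to the Minikube node's filesystem, so a directory from your local machine may first have to be exposed to the node, for example with minikube mount.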
I have a module that needs to update variable values from the web about once a week. I could place those values in a file and load them on startup. Or, a simpler solution would be to simply auto-update the code.
Is this possible in Python?
Something like this...
def self_updating_module_template():
    dynamic_var1 = {'dynamic var1'}  # some kind of placeholder tag
    dynamic_var2 = {'dynamic var2'}  # some kind of placeholder tag
    return

def self_updating_module():
    dynamic_var1 = 'old data'
    dynamic_var2 = 'old data'
    return

def updater():
    new_data_from_web = ''
    new_dynamic_var1 = new_data_from_web  # Makes API call. gets values.
    new_dynamic_var2 = new_data_from_web
    # loads self_updating_module_template
    dynamic_var1 = new_dynamic_var1
    dynamic_var2 = new_dynamic_var2
    # replace module placeholders with new values.
    # overwrite self_updating_module.py.
    return
I would recommend that you use configparser and a set of default values located in an ini-style file.
The ConfigParser class implements a basic configuration file parser
language which provides a structure similar to what you would find on
Microsoft Windows INI files. You can use this to write Python programs
which can be customized by end users easily.
Whenever the configuration values are updated from the web API endpoint, configparser also lets us write those back out to the configuration file. That said, be careful! The reason most people recommend that configuration files be included at build/deploy time and not at run time is security/stability. You have to lock down the endpoint that allows updates to your running configuration in production, and you need some way to verify any configuration value updates before they are retrieved by your application:
import configparser

filename = 'config.ini'

def load_config():
    config = configparser.ConfigParser()
    config.read(filename)
    if 'WEB_DATA' not in config:
        config['WEB_DATA'] = {'dynamic_var1': 'dynamic var1',  # some kind of placeholder tag
                              'dynamic_var2': 'dynamic var2'}  # some kind of placeholder tag
    return config

def update_config(config):
    new_data_from_web = ''
    new_dynamic_var1 = new_data_from_web  # Makes API call. gets values.
    new_dynamic_var2 = new_data_from_web
    config['WEB_DATA']['dynamic_var1'] = new_dynamic_var1
    config['WEB_DATA']['dynamic_var2'] = new_dynamic_var2

def save_config(config):
    with open(filename, 'w') as configfile:
        config.write(configfile)
Example usage:
# Load the configuration
config = load_config()
# Get new data from the web
update_config(config)
# Save the newly updated configuration back to the file
save_config(config)
I'm using Alembic with SQLAlchemy. With SQLAlchemy, I tend to follow a pattern where I don't store the connect string with the versioned code. Instead, I have a file secret.py that contains any confidential information. I add this filename to my .gitignore so it doesn't end up on GitHub.
This pattern works fine, but now I'm getting into using Alembic for migrations. It appears that I cannot hide the connect string. Instead, in alembic.ini, you place the connect string as a configuration parameter:
# the 'revision' command, regardless of autogenerate
# revision_environment = false
sqlalchemy.url = driver://user:pass@localhost/dbname
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
I fear I'm going to accidentally commit a file with username/password information for my database. I'd rather store this connect string in a single place and avoid the risk of accidentally committing it to version control.
What options do I have?
I had the very same problem yesterday and found the following solution to work.
I do the following in alembic/env.py:
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# this will overwrite the ini-file sqlalchemy.url path
# with the path given in the config of the main code
import config as ems_config
config.set_main_option('sqlalchemy.url', ems_config.config.get('sql', 'database'))
ems_config is an external module that holds my configuration data.
config.set_main_option(...) essentially overwrites the sqlalchemy.url key in the [alembic] section of the alembic.ini file. In my configuration I simply leave it blank.
The simplest thing I could come up with to avoid committing my user/pass was to a) add interpolation strings to the alembic.ini file, and b) set these interpolation values in env.py.
alembic.ini
sqlalchemy.url = postgresql://%(DB_USER)s:%(DB_PASS)s@35.197.196.146/nozzle-website
env.py
import os
from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# here we allow ourselves to pass interpolation vars to alembic.ini
# from the host env
section = config.config_ini_section
config.set_section_option(section, "DB_USER", os.environ.get("DB_USER"))
config.set_section_option(section, "DB_PASS", os.environ.get("DB_PASS"))
...
Alembic documentation suggests using create_engine with the database URL (instead of modifying sqlalchemy.url in code).
Also you should modify run_migrations_offline to use the new URL. Allan Simon has an example on his blog, but in summary, modify env.py to:
Provide a shared function to get the URL somehow (here it comes from the command line):
def get_url():
    url = context.get_x_argument(as_dictionary=True).get('url')
    assert url, "Database URL must be specified on command line with -x url=<DB_URL>"
    return url
Use the URL in offline mode:
def run_migrations_offline():
    ...
    url = get_url()
    context.configure(
        url=url, target_metadata=target_metadata, literal_binds=True)
    ...
Use the URL in online mode by using create_engine instead of engine_from_config:
def run_migrations_online():
    ...
    connectable = create_engine(get_url())
    with connectable.connect() as connection:
        ...
So what appears to work is reimplementing engine creation in env.py, which is apparently a place intended for this kind of customization. Instead of using the sqlalchemy connect string from the ini file:
engine = engine_from_config(
    config.get_section(config.config_ini_section),
    prefix='sqlalchemy.',
    poolclass=pool.NullPool)
You can replace and specify your own engine configuration:
import store
engine = store.engine
Indeed, the docs seem to imply this is OK:
sqlalchemy.url - A URL to connect to the database via SQLAlchemy. This key is in fact only referenced within the env.py file that is specific to the “generic” configuration; a file that can be customized by the developer. A multiple database configuration may respond to multiple keys here, or may reference other sections of the file.
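As a hedged illustration of that snippet (the module names store.py and secret.py and the variable DATABASE_URL are assumptions carried over from the question's pattern), such a store module might look like:

# store.py -- kept out of alembic.ini so no credentials live in versioned config
from sqlalchemy import create_engine

from secret import DATABASE_URL  # secret.py is in .gitignore, e.g. "postgresql://user:pass@localhost/dbname"

engine = create_engine(DATABASE_URL)

env.py then does import store and uses store.engine as its connectable.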
I was looking for a while at how to manage this for multiple databases.
Here is what I did. I have two databases: logs and ohlc.
According to the doc, I set up Alembic like this:
alembic init --template multidb
alembic.ini
databases = logs, ohlc
[logs]
sqlalchemy.url = postgresql://botcrypto:botcrypto@localhost/logs
[ohlc]
sqlalchemy.url = postgresql://botcrypto:botcrypto@localhost/ohlc
env.py
[...]
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
logger = logging.getLogger('alembic.env')
# overwrite alembic.ini db urls from the config file
settings_path = os.environ.get('SETTINGS')
if settings_path:
    with open(settings_path) as fd:
        settings = conf.load(fd, context=os.environ)  # loads the config.yml
    config.set_section_option("ohlc", "sqlalchemy.url", settings["databases"]["ohlc"])
    config.set_section_option("logs", "sqlalchemy.url", settings["databases"]["logs"])
else:
    logger.warning('Environment variable SETTINGS missing - use default alembic.ini configuration')
[...]
config.yml
databases:
  logs: postgresql://botcrypto:botcrypto@127.0.0.1:5432/logs
  ohlc: postgresql://botcrypto:botcrypto@127.0.0.1:5432/ohlc
usage
SETTINGS=config.yml alembic upgrade head
Hope it helps!
In the case of a MultiDB setup (and the same goes for SingleDB), you can use config.set_section_option('section_name', 'variable_name', 'db_URL') to modify the database URL values from the alembic.ini file.
For example:
alembic.ini
[engine1]
sqlalchemy.url =
[engine2]
sqlalchemy.url =
Then,
env.py
config = context.config
config.set_section_option('engine1', 'sqlalchemy.url', os.environ.get('URL_DB1'))
config.set_section_option('engine2', 'sqlalchemy.url', os.environ.get('URL_DB2'))
env.py:
from os import getenv

from alembic.config import Config

alembic_cfg = Config()
alembic_cfg.set_main_option("sqlalchemy.url", getenv('PG_URI'))
https://alembic.sqlalchemy.org/en/latest/api/config.html
I was bumping into this problem as well since we're running our migrations from our local machines. My solution is to put environment sections in the alembic.ini which stores the database config (minus the credentials):
[local]
host = localhost
db = dbname
[test]
host = x.x.x.x
db = dbname
[prod]
host = x.x.x.x
db = dbname
Then I put the following in the env.py so the user can pick their environment and be prompted for the credentials:
from alembic import context
from getpass import getpass
...
envs = ['local', 'test', 'prod']
print('Warning: Do not commit your database credentials to source control!')
print(f'Available migration environments: {", ".join(envs)}')
env = input('Environment: ')
if env not in envs:
    print(f'{env} is not a valid environment')
    exit(0)
env_config = context.config.get_section(env)
host = env_config['host']
db = env_config['db']
username = input('Username: ')
password = getpass()
connection_string = f'postgresql://{username}:{password}@{host}/{db}'
context.config.set_main_option('sqlalchemy.url', connection_string)
You should store your credentials in a password manager that the whole team has access to, or in whatever config/secret store you have available. Though, with this approach the password is exposed to your local clipboard; an even better approach would be to have env.py connect directly to your config/secret store API and pull out the username/password, but that adds a third-party dependency.
Another solution is to create a template alembic.ini.dist file and track it with your versioned code, while ignoring alembic.ini in your VCS.
Do not add any confidential information in alembic.ini.dist:
sqlalchemy.url = ...
When deploying your code to a platform, copy alembic.ini.dist to alembic.ini (this one won't be tracked by your VCS) and modify alembic.ini with the platform's credentials.
As Doug T. said, you can edit env.py to provide the URL from somewhere other than the ini file. Instead of creating a new engine, you can pass an additional url argument to the engine_from_config function (kwargs are later merged into the options taken from the ini file). In that case you could, for example, store an encrypted password in the ini file and decrypt it at runtime with a passphrase stored in an ENV variable.
connectable = engine_from_config(
    config.get_section(config.config_ini_section),
    prefix='sqlalchemy.',
    poolclass=pool.NullPool,
    url=some_decrypted_endpoint)
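For example, a minimal sketch of that decrypt-at-runtime idea using the cryptography package's Fernet (the ini option sqlalchemy.url_encrypted and the FERNET_KEY environment variable are assumptions, not Alembic features):

import os

from cryptography.fernet import Fernet

# The symmetric key lives only in the environment; the encrypted URL is what
# gets committed in alembic.ini under the assumed option sqlalchemy.url_encrypted.
fernet = Fernet(os.environ["FERNET_KEY"])
encrypted_url = config.get_main_option("sqlalchemy.url_encrypted")
some_decrypted_endpoint = fernet.decrypt(encrypted_url.encode()).decode()

The decrypted URL then feeds the url= override shown above.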
An option that worked for me was to use set_main_option and leave sqlalchemy.url = blank in alembic.ini:
from config import settings
config.set_main_option(
    "sqlalchemy.url", settings.database_url.replace("postgres://", "postgresql+asyncpg://", 1))
settings is a class in the config file that I use to get variables from an env file; check "os.environ.get() does not return the Environment Value in windows?" for more detail. Another option is to use os.environ.get, but make sure that you export the variable to prevent errors like sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string.
Based on the answer of TomDotTom, I came up with this solution.
Edit the env.py file with this:
config = context.config
config.set_section_option("alembic", "sqlalchemy.url",
                          os.environ.get("DB_URL", config.get_section_option("alembic", "sqlalchemy.url")))  # type: ignore
This will override the sqlalchemy.url option from the [alembic] section with the DB_URL environment variable if such an environment variable exists; otherwise it will use whatever is in the alembic.ini file.
Then I can run the migrations pointing to another database like this
DB_URL=driver://user:pass#host:port/dbname alembic upgrade head
And keep using alembic upgrade head during my development flow
I've tried all the answers here, but none of them worked for me. So I dealt with it myself, as below:
.ini file:
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = alembic
# template used to generate migration files
file_template = %%(rev)s_%%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d_%%(minute).2d_%%(second).2d
# timezone to use when rendering the date
# within the migration file as well as the filename.
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; this defaults
# to alembic/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
# version_locations = %(here)s/bar %(here)s/bat alembic/versions
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
databases = auth_engine
[auth_engine]
sqlalchemy.url = mysql+mysqldb://{}:{}@{}:{}/auth_db
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
.env file (in the root folder of my project):
DB_USER='root'
DB_PASS='12345678'
DB_HOST='127.0.0.1'
DB_PORT='3306'
env.py file:
from __future__ import with_statement
import os
import re
import sys
from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
DB_USER = os.getenv("DB_USER")
DB_PASS = os.getenv("DB_PASS")
DB_HOST = os.getenv("DB_HOST")
DB_PORT = os.getenv("DB_PORT")
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
# gather section names referring to different
# databases. These are named "engine1", "engine2"
# in the sample .ini file.
db_names = config.get_main_option('databases')
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
sys.path.append(os.path.join(os.path.dirname(__file__), "../../../"))
from db_models.auth_db import auth_db_base
target_metadata = {
    'auth_engine': auth_db_base.auth_metadata
}
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    engines = {}
    for name in re.split(r',\s*', db_names):
        engines[name] = rec = {}
        section = context.config.get_section(name)
        url = section['sqlalchemy.url'].format(DB_USER, DB_PASS, DB_HOST, DB_PORT)
        section['sqlalchemy.url'] = url
        rec['url'] = url
        # rec['url'] = context.config.get_section_option(name, "sqlalchemy.url")

    for name, rec in engines.items():
        print("Migrating database %s" % name)
        file_ = "%s.sql" % name
        print("Writing output to %s" % file_)
        with open(file_, 'w') as buffer:
            context.configure(url=rec['url'], output_buffer=buffer,
                              target_metadata=target_metadata.get(name),
                              compare_type=True,
                              compare_server_default=True
                              )
            with context.begin_transaction():
                context.run_migrations(engine_name=name)


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    engines = {}
    for name in re.split(r',\s*', db_names):
        engines[name] = rec = {}
        section = context.config.get_section(name)
        url = section['sqlalchemy.url'].format(DB_USER, DB_PASS, DB_HOST, DB_PORT)
        section['sqlalchemy.url'] = url
        rec['engine'] = engine_from_config(
            section,
            prefix='sqlalchemy.',
            poolclass=pool.NullPool)

    for name, rec in engines.items():
        engine = rec['engine']
        rec['connection'] = conn = engine.connect()
        rec['transaction'] = conn.begin()

    try:
        for name, rec in engines.items():
            print("Migrating database %s" % name)
            context.configure(
                connection=rec['connection'],
                upgrade_token="%s_upgrades" % name,
                downgrade_token="%s_downgrades" % name,
                target_metadata=target_metadata.get(name),
                compare_type=True,
                compare_server_default=True
            )
            context.run_migrations(engine_name=name)

        for rec in engines.values():
            rec['transaction'].commit()
    except:
        for rec in engines.values():
            rec['transaction'].rollback()
        raise
    finally:
        for rec in engines.values():
            rec['connection'].close()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
Hope this can help someone else.
In env.py just add
config.set_main_option('sqlalchemy.url', os.environ['DB_URL'])
after
config = context.config
like
config = context.config
config.set_main_option('sqlalchemy.url', os.environ['DB_URL'])
and then execute it like this:
DB_URL="mysql://atuamae:de4@127.0.0.1/db" \
alembic upgrade head
In a pyramid app I am building (called pyplay), I need to retrieve an application setting that I have in development.ini. The problem is that the place where I am trying to get that setting cannot access the request variable (e.g. at the top level of a module file).
So, after looking at this example in the documentation: http://docs.pylonsproject.org/projects/pyramid_cookbook/en/latest/configuration/django_settings.html I started doing something very simple and hardcoded at first just to make it work.
Since my development.ini has this section: [app:main], then the simple example I tried is as follows:
from paste.deploy.loadwsgi import appconfig
config = appconfig('config:development.ini', 'main', relative_to='.')
but the application refuses to start and displays the following error:
ImportError: <module 'pyplay' from '/home/pish/projects/pyplay/__init__.pyc'> has no 'main' attribute
So, thinking that maybe I should put 'pyplay' instead of 'main', I went ahead, but I get this error instead:
LookupError: No section 'pyplay' (prefixed by 'app' or 'application' or 'composite' or 'composit' or 'pipeline' or 'filter-app') found in config ./development.ini
At this point I am a bit stuck and I don't know what am I doing wrong. Can someone please give me a hand on how to do this?
Thanks in advance!
EDIT: The following are the contents of my development.ini file (note that pish.theparam is the setting I am trying to get):
###
# app configuration
# http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/environment.html
###
[app:main]
use = egg:pyplay
pyramid.reload_templates = true
pyramid.debug_authorization = false
pyramid.debug_notfound = false
pyramid.debug_routematch = false
pyramid.default_locale_name = en_US.utf8
pyramid.includes =
pyramid_debugtoolbar
pyramid_tm
sqlalchemy.url = mysql://user:passwd@localhost/pyplay?charset=utf8
# By default, the toolbar only appears for clients from IP addresses
# '127.0.0.1' and '::1'.
debugtoolbar.hosts = 127.0.0.1 ::1
pish.theparam = somevalue
###
# wsgi server configuration
###
[server:main]
use = egg:waitress#main
host = 0.0.0.0
port = 6543
###
# logging configuration
# http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/logging.html
###
[loggers]
keys = root, pyplay, sqlalchemy
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console
[logger_pyplay]
level = DEBUG
handlers =
qualname = pyplay
[logger_sqlalchemy]
level = INFO
handlers =
qualname = sqlalchemy.engine
# "level = INFO" logs SQL queries.
# "level = DEBUG" logs SQL queries and results.
# "level = WARN" logs neither. (Recommended for production systems.)
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
The reason this is difficult to do in Pyramid is that it's always a bad idea to have module-level settings: it means your module can only ever be used in one way per process (different code paths can't use your library in different ways). :-)
A hack around not having access to the request object is to at least hide your global behind a function call, so that the global can be different per-thread (which is basically per-request).
import pyramid.threadlocal

def get_my_param(registry=None):
    if registry is None:
        registry = pyramid.threadlocal.get_current_registry()
    return registry.settings['pyplay.theparam']
Step 1: create a singleton class, say in a file xyz_file.py:
class Singleton:
    def __init__(self, klass):
        self.klass = klass
        self.instance = None

    def __call__(self, *args, **kwds):
        if self.instance is None:
            self.instance = self.klass(*args, **kwds)
        return self.instance

@Singleton
class ApplicationSettings(object):
    def __init__(self, app_settings=None):
        if app_settings is not None:
            self._settings = app_settings

    def get_appsettings_object(self):
        return self

    def get_application_configuration(self):
        return self._settings
Step 2: in __init__.py:
def main(global_config, **settings):
    ....
    .......
    app_settings = ApplicationSettings(settings)
Step 3: You should be able to access it in any part of the code:
from xyz_file import ApplicationSettings
app_settings = ApplicationSettings().get_application_configuration()
Basically, if you don't have access to the request object, you're "off the rails" in Pyramid. To do things the Pyramid way, we make components and figure out where they belong in the Pyramid lifecycle, and they should always have direct access to either or both of the registry (the ZCA) and the request.
If what you're doing doesn't fit in the request lifecycle, then it's probably something that should be instantiated at server start-up time, normally in your __init__.py where you build and fill the configurator (our access to the registry). Don't be afraid to use the registry to allow other components to get at things 'pseudo-globally' later. So you probably want to make some kind of factory for your thing, call the factory in your start-up code, perhaps passing in a reference to the registry as an argument, and then attach the object to the registry. If your component needs to interface with request-lifecycle code, give it a method that takes request as a param. Later, anything that needs this object can get it from the registry, and anything this object needs to reach can be reached through either the registry or the request.
You can totally use the hack in the other answer to get at the current global registry, but needing to do so is a code smell; you can definitely figure out a better design that eliminates it.
Pseudo-code example, in the server start-up code:
# in the __init__ block where our Configurator has been built
from myfactory import MyFactory
registry.my_component = MyFactory(config.registry)
# you can now get at my_component from anywhere in a pyramid system
your component:
class MyFactory(object):
    def __init__(self, registry):
        # server start-up lifecycle stuff here
        self.registry = registry

    def get_something(self, request):
        # do stuff with the rest of the system
        setting_wanted = self.registry.settings['my_setting']
Pyramid views work this way. They are actually ZCA multi-adapters of request and context. Their factory is registered in the registry, and then when the view lookup process kicks off, the factory instantiates a view, passing in request as a param.