I'd like to have something in my settings like
if ip in DEV_IPS:
    SOMESETTING = 'foo'
else:
    SOMESETTING = 'bar'
Is there an easy way to get the IP or hostname? Also, is this a bad idea?
import socket
socket.gethostbyname(socket.gethostname())
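A hedged sketch of how that lookup could feed the settings logic from the question. The DEV_IPS list and the loopback fallback are assumptions for illustration; note that gethostbyname can return 127.0.0.1 on machines whose hostname maps to loopback in /etc/hosts, which is one reason this approach is fragile.

```python
import socket

# Resolve this machine's primary IP address; fall back to loopback
# if the hostname doesn't resolve (can happen on misconfigured hosts).
try:
    ip = socket.gethostbyname(socket.gethostname())
except socket.error:
    ip = '127.0.0.1'

DEV_IPS = ['127.0.0.1', '192.168.1.10']  # hypothetical development machines

SOMESETTING = 'foo' if ip in DEV_IPS else 'bar'
```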
However, I'd recommend against this and instead maintain a separate settings file for each environment you're working with.
settings/__init__.py
settings/qa.py
settings/production.py
__init__.py has all of your defaults. At the top of qa.py, and any other settings file, the first line has:
from settings import *
followed by any overrides needed for that particular environment.
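For example, settings/qa.py might look like this (a sketch; the overridden names are hypothetical, and this fragment assumes the settings package is importable):

```python
# settings/qa.py -- pull in every default from settings/__init__.py first...
from settings import *

# ...then override only what differs in the QA environment.
DEBUG = True                    # hypothetical override
DATABASE_NAME = 'myproject_qa'  # hypothetical override
```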
One method some shops use is to set an environment variable on each machine, perhaps called ENVIRONMENT. On POSIX systems you can put something like ENVIRONMENT=production in the user's .profile file (the details differ slightly per shell and OS). Then in settings.py you can do something like this:
import os

if os.environ['ENVIRONMENT'] == 'production':
    # Production
    DATABASE_ENGINE = 'mysql'
    DATABASE_NAME = ....
else:
    # Development
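A runnable variant of the same idea, using .get() with a default so a machine without the variable falls back to development rather than raising KeyError. The engine names here are illustrative, not prescribed by the answer above.

```python
import os

# os.environ['ENVIRONMENT'] raises KeyError when the variable is unset;
# .get() with a default degrades gracefully to the development branch.
env = os.environ.get('ENVIRONMENT', 'development')

if env == 'production':
    DATABASE_ENGINE = 'mysql'
else:
    DATABASE_ENGINE = 'sqlite3'
```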
My app appears to be perfectly capable of configuring based on the .env file, the imported config classes, and defining variables directly. However, FLASK_DEBUG is failing to change depending on how I define the variables.
I should probably note that I'm using Visual Studio Code on Windows. I've been told I need to use Linux (or really anything but Windows), and I intend to, but it's not an option right now, so any help understanding how this functions on the system I have would be much appreciated.
config.py:
import os

basedir = os.path.abspath(os.path.dirname(__file__))

class DevelopmentConfig(object):
    os.environ['SECRET_KEY'] = b'something'
    os.environ['SQLALCHEMY_DATABASE_URI'] = os.environ.get('DATABASE_URL') or \
        'sqlite:///' + os.path.join(basedir, 'app.db')
    os.environ['SQLALCHEMY_TRACK_MODIFICATIONS'] = 'False'
    os.environ['FLASK_DEBUG'] = '1'
    os.environ['DEV_DICT'] = 'dev_config_class_environ_dictionary_activated_and_working'

class ProductionConfig(object):
    os.environ['SECRET_KEY'] = os.environ.get('SECRET_KEY')
    os.environ['SQLALCHEMY_DATABASE_URI'] = os.environ.get('PRODUCTION_DATABASE_URI')
    os.environ['SQLALCHEMY_TRACK_MODIFICATION'] = 'False'
    os.environ['FLASK_DEBUG'] = '0'
    os.environ['PROD_DICT'] = 'prod_config_class_environ_dictionary_activated_and_working'
__init__.py:
from flask import Flask
from config import DevelopmentConfig, ProductionConfig
from flask_migrate import Migrate
from flask_sqlalchemy import SQLAlchemy
from dotenv import load_dotenv, find_dotenv
import os

app = Flask(__name__)
db = SQLAlchemy(app)
migrate = Migrate(app, db)

load_dotenv(find_dotenv())

if os.environ.get('FLASK_ENV') == 'development':
    print("Environment is development")
    app.config.from_object(DevelopmentConfig)
elif os.environ.get('FLASK_ENV') == 'production':
    print("Environment is production")
    app.config.from_object(ProductionConfig)

print(os.environ.get('TEST_DOTENV'))   # this value is stored in .env
print(os.environ.get('DEV_DICT'))      # defined in class DevelopmentConfig as os.environ['DEV_DICT']
print(os.environ.get('PROD_DICT'))     # same but in the ProductionConfig class
print(os.environ.get('FLASK_ENV'))     # defined differently in both classes and CONFIGS CORRECTLY
print(os.environ.get('FLASK_DEBUG'))   # defined differently in both classes and DOES NOT CONFIG CORRECTLY
.env:
FLASK_ENV=development
FLASK_APP=run.py
SECRET_KEY=b'something'
PRODUCTION_DATABASE_URI='something_else'
TEST_DOTENV=config_from_dotenv_is_working #prints correctly to command line, as do other variables defined here
When I run the app:
(flaskvenv) PS C:\Users\peter\Desktop\Projects\Social Work Site\sw_app> flask run
* Serving Flask app "run.py" (lazy loading)
* Environment: development
* Debug mode: on
* Restarting with stat
c:\...__init__.py:814: UserWarning: Neither SQLALCHEMY_DATABASE_URI nor SQLALCHEMY_BINDS is set. Defaulting SQLALCHEMY_DATABASE_URI to "sqlite:///:memory:".
'Neither SQLALCHEMY_DATABASE_URI nor SQLALCHEMY_BINDS is set. '
c:\...__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead...
Environment is development #changes correctly if I change it in .env
config_from_dotenv_is_working #further proof .env works fine
dev_config_class_environ_dictionary_activated_and_working #prints regardless of which class gets called
prod_config_class_environ_dictionary_activated_and_working #also prints regardless of which class gets called
development #changes to production properly if I change it in .env
0 #stubbornly stays set to 0 regardless of it being set in config
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
Here's the strange part:
When I defined FLASK_DEBUG in the .env file, the command line displayed it properly in the automatic output, e.g. as * Debug mode: off or * Debug mode: on depending on whether I set it to 0 or 1 respectively.
BUT when I call it with os.environ.get('FLASK_DEBUG'), it displays as 0 regardless of what I do.
Based on this, I have a few questions.
The main one and the essence of the problem is of course:
Why is Flask not configuring FLASK_DEBUG when the other variables configure fine?
Other questions that I suspect might have some kind of connection and would help me understand this:
What role does the Windows OS play in this problem?
Why are both config classes configuring environment variables if I'm only calling one of them?
Why if the automatic output tells me that Debug mode is on does os.environ.get('FLASK_DEBUG') still return 0?
I can work around this by defining FLASK_DEBUG in .env, but I want to understand how it works. The discrepancy in how things get configured makes me uneasy about not knowing how or why things function the way they do.
Thanks in advance!
You are mixing up the concepts of configuration classes and environment variables.
Explanation of your result
If you define two classes and, directly in their class bodies, set environment variables, both bodies run immediately at import time. Don't do this:
# config.py
import os

class DevelopmentConfig(object):
    os.environ['FLASK_DEBUG'] = '1'

class ProductionConfig(object):
    os.environ['FLASK_DEBUG'] = '0'

print('FLASK_DEBUG is', os.environ['FLASK_DEBUG'])
After running this code, both classes will have set the environment variable, and since 0 is the last value set, the result is 0:
$ python config.py
FLASK_DEBUG is 0
This is the reason your FLASK_DEBUG is always 0. All of the code inside DevelopmentConfig and ProductionConfig runs no matter what is set in your .env file. Your problem is not related to Windows.
Besides the classes, you are also setting environment variables from your .env file, including FLASK_ENV=development. This is a variable recognized by Flask that turns on debug mode. That is why debug mode is on in Flask.
One solution: Use environment specific config classes
Define values in classes:
class DevelopmentConfig(object):
    MY_VARIABLE = 'dev value'
    ...

class ProductionConfig(object):
    MY_VARIABLE = 'prod value'
    ...
Then set the environment in an environment variable. This can be done directly in the OS, or you can use an .env file if you like:
FLASK_ENV=development
On your production server, you would create a different .env file:
FLASK_ENV=production
Then load the relevant class in Flask:
from dotenv import load_dotenv, find_dotenv
from flask import Flask
import os
load_dotenv(find_dotenv())
app = Flask(__name__)
config = ProductionConfig() if os.environ.get('FLASK_ENV') == 'production' else DevelopmentConfig()
app.config.from_object(config)
print(app.config['MY_VARIABLE'])
You don't even need to set FLASK_DEBUG in this case because Flask will set it automatically based on FLASK_ENV.
Other solutions
You can also ditch the config classes completely and instead import all values from environment variables or from configuration files. Read the Flask config guide for more details.
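A hedged sketch of that environment-variable approach (the names and defaults are illustrative, not from the Flask docs); a Flask app would consume the resulting dict via app.config.from_mapping:

```python
import os

def config_from_env():
    """Build a settings mapping straight from environment variables."""
    return {
        # Environment variables are strings, so convert explicitly.
        'DEBUG': os.environ.get('FLASK_DEBUG', '0') == '1',
        'SECRET_KEY': os.environ.get('SECRET_KEY', 'dev-only-not-secret'),
        'SQLALCHEMY_DATABASE_URI': os.environ.get('DATABASE_URL', 'sqlite:///app.db'),
    }

# A Flask app would then load it with:
#     app.config.from_mapping(config_from_env())
```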
I have the following structure of my project
project
--project
----settings
------base.py
------development.py
------testing.py
------secrets.json
--functional_tests
--manage.py
development.py and testing.py 'inherit' from base.py
from .base import *
So, here is where I have problems.
I have the SECRET_KEY for Django in secrets.json, which is stored in the settings folder.
I load this key like this (I saw this in "Two Scoops of Django"):
import json

from django.core.exceptions import ImproperlyConfigured

key = "secrets.json"
with open(key) as f:
    secrets = json.loads(f.read())

def get_secret(setting, secrets=secrets):
    try:
        return secrets[setting]
    except KeyError:
        error_msg = "Set the {} environment variable".format(setting)
        raise ImproperlyConfigured(error_msg)

SECRET_KEY = get_secret("SECRET_KEY")
But when I run python manage.py runserver
Blah-blah-blah
django.core.exceptions.ImproperlyConfigured: The SECRET_KEY setting must not be empty.
After some investigations I got the following
If I put print(os.getcwd()) inside base.py I get /media/grimel/Home/project/ instead of /media/grimel/Home/project/project/settings/
This code works only if I replace:
key = "secrets.json"
by
key = "project/settings/secrets.json"
Personally, I don't like this solution.
So, questions:
Why is the current working directory so confusing for base.py?
What's a better approach in solving this problem?
The working directory depends on how you run the program; in your case, python manage.py runserver hints that your working directory is the one containing manage.py. Beware that this can vary when the code is run as a WSGI script or otherwise, so your concern about hard-coding key = "project/settings/secrets.json" is valid.
One solution is to use the value of __file__ in base.py, which will be something like "project/settings/base.py". I would use something like:
import os
BASE_DIR = os.path.dirname(__file__)
key = os.path.join(BASE_DIR, "secrets.json")
To make life simpler why not move secrets.json to your project root and reference
import os
key = os.path.join(BASE_DIR, "secrets.json")
directly. This is platform independent, saving you the need to override BASE_DIR at all in your settings file. Don't forget to add your settings file to version control.
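A pathlib equivalent of the same __file__ trick, for what it's worth (a sketch; the resulting path depends on where the module lives):

```python
from pathlib import Path

# __file__ is the location of the module itself, so the path to
# secrets.json no longer depends on the current working directory.
BASE_DIR = Path(__file__).resolve().parent
key = BASE_DIR / "secrets.json"
```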
I have this folder structure for django
settings/dev.py
settings/prod.py
settings/test.py
Then i have common settings in settings/common.py in which i check the ENV variable like
if PROD:
    from settings.prod import *
Based on the ENV variable, one of them will be active.
I want to use something like this in my code
from myapp import settings
rather than
from myapp.settings import dev
This is the method I follow. I learnt it from the book Two Scoops of Django.
Have a file, such as, settings/common.py which will contain the properties/settings which are common in dev, prod and test environment. (You already have this.)
The other 3 files should:
Import the common settings from the settings/common.py by adding the line from .common import *
And should contain settings for its own corresponding environment.
The manage.py file decides which settings file to import depending on the OS environment variable DJANGO_SETTINGS_MODULE. So, for the test environment, the value of DJANGO_SETTINGS_MODULE should be mysite.settings.test.
Links for reference:
Django documentation for django-admin utility - Link
Two Scoops of Django sample project - Link
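The dispatch manage.py performs boils down to a single call; a minimal sketch (mysite is a placeholder project name, not one from the answer above):

```python
import os

# Respect DJANGO_SETTINGS_MODULE if the environment already sets it,
# otherwise fall back to a default settings module.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings.test")

# Django's startup then imports whatever module this variable names.
active = os.environ["DJANGO_SETTINGS_MODULE"]
```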
Preserve your settings folder structure and create __init__.py there.
Please use the code below in your settings/__init__.py:
import os

# DJANGO_SERVER_TYPE
#   '1': Production Server
#   '2': Test Server
#   anything else: Development Server
server_type = os.getenv('DJANGO_SERVER_TYPE')

# Note: environment variables are always strings, so compare
# against '1' and '2', not the integers 1 and 2.
if server_type == '1':
    from .prod import *
elif server_type == '2':
    from .test import *
else:
    from .dev import *
Now you can set environment variable called DJANGO_SERVER_TYPE to choose between Production, Test or Development Server and import settings using:
import settings
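One detail worth stressing about the snippet above: os.getenv always returns a string (or None), never an integer, which is easy to verify:

```python
import os

os.environ['DJANGO_SERVER_TYPE'] = '1'   # even "numeric" values are stored as text
server_type = os.getenv('DJANGO_SERVER_TYPE')

# Comparing against the integer 1 would silently never match.
is_production = server_type == '1'
```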
I have three systems for my Django project, and I need a different settings file for each respective system, i.e. local, staging and production.
After doing some research I thought of a way: set an environment variable corresponding to the system. For local I set the variable to 'localserver', for the staging server 'staging', and likewise for production.
Settings.py
server_environment = os.environ.get('XYZ_ENV')

if server_environment == 'staging':
    try:
        from rest_apis.settings_staging import *
    except ImportError:
        pass
elif server_environment == 'production':
    try:
        from settings_production import *
    except ImportError:
        pass
elif server_environment == 'localserver':
    try:
        from settings_local import *
    except ImportError:
        pass
local.py
from rest_apis.settings import *
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'xyz',
        'USER': 'postgres',
        'PASSWORD': 'postgres123',
        'HOST': '127.0.0.1',
        'PORT': 5432,
    }
}

BROKER_URL = 'amqp://myuser:mypassword@127.0.0.1:5672//'
Same type of config for different systems.
Here is what is happening: in spite of verifying that execution reaches the correct if-else block, it doesn't apply the proper settings. It always connects to the DB host specified in Settings.py.
Also, if I stop my instance (I have a private IP, so it doesn't change on restart) and start it again, the same issue occurs.
I searched Stack Overflow and tried various solutions, but none of them helped. What am I doing wrong? Also, what is the correct way to handle this type of situation?
I don't want to make hard-coded changes by logging in (ssh) to each and every system.
You can use different environments as suggested in other answers, though I recommend using separate settings files for different working environments.
The ideal project layout would be something like -
project_folder
    settings
        __init__.py
        common.py
        development.py
        staging.py
        production.py
        test.py
        main.py
The common settings file needs to contain all settings common to all environments, and every other settings file imports from common.py.
main.py imports all settings from staging.py in the staging environment, from development.py in development, and from production.py in production.
Thus, main.py is the main settings file, and it is what DJANGO_SETTINGS_MODULE should point to. As main.py differs between environments, it should be excluded from git.
Sample code:
common.py

# All common configurations

development.py

from .common import *

# Add settings for different connections like db, cache, smtp etc.

production.py

from .common import *

# Add connection settings for production environment

main.py  # In development environment

from .development import *

try:
    from .local import *
except ImportError:
    pass

main.py  # In staging environment

from .staging import *

try:
    from .local import *
except ImportError:
    pass

main.py  # In production environment

from .production import *

try:
    from .local import *
except ImportError:
    pass
manage.py
#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    settings_file = 'project_folder.settings.test' if 'test' in sys.argv else 'project_folder.settings.main'
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", settings_file)

    from django.core.management import execute_from_command_line

    execute_from_command_line(sys.argv)
What am I doing wrong?
It's hard to tell what exactly is wrong, but you can:
Remove from rest_apis.settings import * from your local settings. Your local settings should override settings.py, not vice versa.
Make sure that your imports are correct. Remove the try/except and check whether ImportError is raised.
Place your imports at the end of settings.py, so the imported settings override the existing ones.
Also, what is the correct way for this type of situation?
I think that using one local settings file for each environment (that's why it is called local) would be a more elegant solution:
Create a template for local settings, e.g. local_settings.template.py:
"""
Local Django settings template.
Usage:
1. Copy ``local_settings.template.py`` to ``local_settings.py``.
2. Modify ``local_settings.py`` according to your needs.
Note that settings from ``local_settings.py``
will override any existing Django settings.
"""
SECRET_KEY = ''
ALLOWED_HOSTS = []
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'HOST': '',
'PORT': '',
'NAME': '',
'USER': '',
'PASSWORD': ''
}
}
Add local_settings.template.py to your VCS.
Add local_settings.py to your VCS ignore.
Modify settings.py:
"""
Core Django settings.
Consider using local settings (see ``local_settings.template.py``)
for environment specific Django settings and sensitive data.
All involved settings, however, should be listed
here with their default values and description.
For more information on this file, visit
https://docs.djangoproject.com/en/1.8/topics/settings/
For the full list of settings and their values, visit
https://docs.djangoproject.com/en/1.8/ref/settings/
"""
# Settings here.
# WARNING: local settings import should remain at the end of this file.
try:
from local_settings import *
except ImportError:
pass
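The guarded import at the end is plain Python; a standalone sketch of the fallback behavior (module name as above, nothing else assumed):

```python
# Defaults defined earlier in settings.py.
DEBUG = False

try:
    # Overrides, if a local_settings.py exists on the import path.
    from local_settings import *  # noqa: F401,F403
except ImportError:
    # No local settings present: keep the defaults.
    pass
```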
First of all, that settings file assumes all your additional settings modules are on the PYTHONPATH. If they're just inside the same (sub)package as your main settings file, use relative imports like this:
server_environment = os.environ.get('XYZ_ENV')

if server_environment == 'staging':
    try:
        from .settings_staging import *
    except ImportError:
        pass
elif server_environment == 'production':
    try:
        from .settings_production import *
    except ImportError:
        pass
elif server_environment == 'localserver':
    try:
        from .settings_local import *
    except ImportError:
        pass
Secondly, those imports should be at the end of your settings file (unless you want some settings that can't be overridden; in that case, put them below your imports).
And last but not least, make sure that your imports succeed. Remove the try-except blocks, leaving only the import in your file. You can also remove the conditions and just import one file to test whether it's working.
I'm trying to develop my Scrapy application using multiple configurations depending on my environment (e.g. development, production). My problem is that there are some settings that I'm not sure how to set them. For example, if I have to setup my database, in development should be "localhost" and in production has to be another one.
How can I specify these settings when I'm doing scrapy deploy ? Can I set them with a variable in command-line?
You should set the deploy options in your scrapy.cfg file. For example:
[deploy:dev]
url = http://dev_url/
[deploy:production]
url = http://production_url/
With that, you could do:
scrapyd-deploy dev
or
scrapyd-deploy production
You can refer to the answer in the following link :
https://alanbuxton.wordpress.com/2018/10/09/using-local-settings-in-a-scrapy-project/
I copy here for quick reference:
Edit the settings.py file so it reads from additional settings files depending on a SCRAPY_ENV environment variable
Move all the settings files to a separate config directory (and change scrapy.cfg so it knows where to look)
The magic happens at the end of settings.py:
from importlib import import_module
from scrapy.utils.log import configure_logging
import logging
import os

SCRAPY_ENV = os.environ.get('SCRAPY_ENV', None)
if SCRAPY_ENV is None:
    raise ValueError("Must set SCRAPY_ENV environment var")

logger = logging.getLogger(__name__)
configure_logging({'LOG_FORMAT': '%(levelname)s: %(message)s'})

# Load the module if the file exists; incorporate any names starting
# with an uppercase letter into globals()
def load_extra_settings(fname):
    if not os.path.isfile("config/%s.py" % fname):
        logger.warning("Couldn't find %s, skipping" % fname)
        return
    mdl = import_module("config.%s" % fname)
    names = [x for x in mdl.__dict__ if x[0].isupper()]
    globals().update({k: getattr(mdl, k) for k in names})

load_extra_settings("secrets")
load_extra_settings("secrets_%s" % SCRAPY_ENV)
load_extra_settings("settings_%s" % SCRAPY_ENV)
Then, in the Python file where you want to read a setting, use the following code:
from scrapy.utils.project import get_project_settings
settings = get_project_settings()
env_variable = settings.get('ENV_VARIABLE')