I have this kind of setup:
An overridden BaseRunserverCommand that adds another option (--token) that takes a string token.
It stores the token in an app called "vault" as a global variable,
then continues executing the BaseRunserverCommand.
Later, after the server has started, when I try to read this global variable, I can't see its value. Is it going out of scope? How do I store this one-time token that is entered before Django starts?
Judging by the name of the command line option, this sounds like a configuration variable -- why not put it in settings.py like all the other configurations?
If it's a secure value that you don't want checked in to version control, one pattern I've seen is to put secure or environment-sensitive (i.e. only makes sense in production or development) configurations in a local_settings.py file which is not checked in to version control, then add to the end of your settings.py:
try:
    from local_settings import *
except ImportError:
    # if you require a local_settings to be present,
    # you could let this exception propagate, or raise a
    # more specific exception here
    pass
(Note that some people like to invert the import relationship and then use --settings on the command line with runserver -- this works, too, but requires that you always remember to use --settings.)
There's no such thing as a "global variable" in Python that is available from everywhere. You need to import all names before you can use them.
Even if you did this, though, it wouldn't actually work in production. Not only is there no command that you run to start up the production server (it's started automatically by Apache or whatever server software you're using), but the server also usually runs in multiple processes, so the variable would have to be set in each of them.
You should use a setting, as dcrosta suggests.
Related
I'm working on an embedded system project where my dev setup is different than my prod.
The differences include variables and packages imports.
What is the best way to structure the config files for python3 application where dev and prod setups are different?
prod: My device exchange messages (using pyserial) with an electronic system and also communicates with a server.
dev: I use a fake and fixed response from a function to mock both the electronic and server responses.
Even though the functions I mock are essential in prod, they matter less in dev. I can mock them because the most important parts of this project are the functions that use and process their responses.
So there are package imports and function calls that make no sense in dev mode and introduce errors there.
Every time I need to switch from one to the other I have to change a good amount of the code, and sometimes errors are introduced. I know this is really not the best approach, and I wanted to know what the best practices are.
The Closest Solution Found
Here there is a good solution to set up different variables for each environment. I was hoping for something similar but for projects that require different packages import for different environments.
My Setup
Basic workflow:
A task thread is executed every second
module_1 does some work and calls module_2
module_2 does some work and calls module_3
module_3 does some work and sends back a response
Basic folder structure:
root
main
config.py
/config
prod
dev
/mod_1
/mod_2
/mod_3
/replace_imports
module_1 and module_3 each use a package that is specific to prod and must be replaced by a dev function
What do I have:
# config.py
import os

if os.environ["app"] == "dev":
    from root.config.dev import *
if os.environ["app"] == "prod":
    from root.config.prod import *
# config/prod.py
import _3rd_party_module_alpha
import _3rd_party_module_beta
...
obj_alpha = _3rd_party_module_alpha()
func_beta = _3rd_party_module_beta()
# config/dev.py
from root.replace_imports import *
# replace_imports.py
obj_alpha = fake_3rd_party_module_alpha()
func_beta = fake_3rd_party_module_beta()
You really should not have code changes between dev at point X, pushing into QA/CI, and prod at point X. Your dev and prod code can be expected to differ at different stages, of course, and version control is key. But moving to production should not require code changes, just config changes.
Environment variables (see 12 factor app stuff) can help, but sometimes config is in code, for example in Django setting files.
In environments like Django where "it points to" a settings file, I've seen this kind of structure:
base_settings.py:
common config
dev_settings.py:
# top of file
from base_settings import *
... dev specifics, including overrides of base ...
edit: I am well aware of issues with import *. First, this is a special case, for configurations, where you want to import everything. Second, the real issue with import * is that it clobbers the current namespace. That import is right at the top so that won't happen. Linters aside, and they can be suppressed for just that line, the leftover issue is that you may not always know where a variable magically came from, unless you look in base.
prod_settings.py:
from base_settings import *
... production specifics, including overrides of base ...
Advanced users of webpack configuration files (and those are brutal) do the same thing, i.e. use a base.js then import that into a dev.js and prod.js.
The key here is to have as much as possible in base, possibly with the help of environment variables (be careful not to over-rely on those; no one likes apps with dozens of environment variable settings). Then dev and prod are basically about keys, paths, URLs, ports, that kind of stuff. Make sure to keep secrets out of them, however, because they gravitate there naturally but have no business being under version control.
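Concretely, the base/dev/prod split might look like the sketch below. The file names follow the convention above; the setting names (LOG_LEVEL, DEBUG, TASK_INTERVAL_SECONDS) are made up for illustration:

```python
# --- base_settings.py: everything shared between environments ---
LOG_LEVEL = "INFO"
TASK_INTERVAL_SECONDS = 1  # hypothetical setting

# --- dev_settings.py: pull everything in from base, then override ---
from base_settings import *  # noqa: F403
DEBUG = True
LOG_LEVEL = "DEBUG"

# --- prod_settings.py: same pattern, production overrides only ---
from base_settings import *  # noqa: F403
DEBUG = False
```

Each environment file stays short: it contains only what differs from base.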
re. your current code
import os

appname = os.getenv("app")

if appname == "dev":
    # outside of base=>dev/prod merges, at the top of "in-config" namespaces,
    # I would avoid `import *` here and elsewhere
    import root.config.dev as config
elif appname == "prod":
    import root.config.prod as config
else:
    raise ValueError(f"environment variable $app should have been either `dev` or `prod`, got: {appname}")
Last, if you are stuck without an "it points to a Python settings file" mechanism like the one found in Django, you can (probably) roll your own by storing the config module path (xxx.prod, xxx.dev) in an environment variable and then using a dynamic import. Mind you, your current code largely does that already, except that you can't experiment with or add other settings files.
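A minimal sketch of that roll-your-own mechanism, using `importlib` from the standard library (the variable name APP_CONFIG and the default module path are just placeholders):

```python
import importlib
import os

def load_config(envvar="APP_CONFIG", default="root.config.dev"):
    """Import whatever module path the environment variable names."""
    module_path = os.environ.get(envvar, default)
    return importlib.import_module(module_path)
```

Adding a new settings file is then just a matter of pointing the environment variable at it, e.g. `APP_CONFIG=root.config.staging`, with no code change.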
Don't worry if you don't get it right right away, and don't over-design up front - it takes a while to find what works best for you/your app.
Pipenv was created specifically for these things and more
Use git branches (or Mercurial or whatever version control system you're using - you are using a VCS, aren't you?) and virtualenvs. That's exactly what they are for. Your config files should only be used for stuff like db connection identifiers, API keys etc.
I have a Python Flask server. In the main of my script, I get a variable from the user that defines the mode of my server. I want to set this variable in main (just write) and use it (just read) in my controllers. I am currently using os.environ, but I'm searching for a more Flask-like way to do it.
I googled and tried following options:
flask.g: it is being reset for each request; so, I can't use it inside controllers when it is being set somewhere else.
flask.session: it is not accessible outside request context and I can't set it in the main.
Flask-Session: like the second item, I can't set it in the main.
In main you use app.run(), so app is available in main. It should also be available at all times in all functions, so you can try to use
app.variable = value
but Flask also has
app.config
to keep any settings. See the docs: Configuration Handling
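A small sketch of the app.config approach (SERVER_MODE is a made-up key; the value would come from wherever main reads user input):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/mode")
def mode():
    # controllers read the value that main wrote once at startup
    return app.config["SERVER_MODE"]

def main(server_mode):
    # write once in main, before the server starts
    app.config["SERVER_MODE"] = server_mode
    app.run()
```

Unlike flask.g or the session, app.config lives on the application object itself, so it survives across requests and is readable wherever `app` can be imported.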
I read the docs for checks: https://docs.djangoproject.com/en/2.2/topics/checks/
I am missing something: I would like to have a web view where an admin can see what's wrong.
Calling this view should execute the check and the result (ok or failure messages) should be visible.
I could hack something like this myself.
But I would like to go a common way, if there is already such a way.
As opposed to unit testing, this is about checking a production server.
How to do this the django-way?
Examples for checks I have on my mind: Checking if third party services are available (smtp server, archive systems, ERP systems, ...)
The builtin system check is mainly for development, actually - the point is that if those checks fail, your project will very probably not run at all.
But you can nonetheless call this (or any other) management command from Python code using management.call_command - you'll just need to provide a writable file-like object to capture stdout/stderr:
from io import StringIO

from django.core.management import call_command
from django.shortcuts import render

def check_view(request):
    out = StringIO()
    # run `manage.py check` programmatically, capturing its output
    call_command("check", stdout=out, stderr=out)
    out.seek(0)
    context = {"results": out.readlines()}
    return render(request, "check.html", context)
Then it's just a matter of plugging this into your admin (which is documented, so I won't give a complete example).
NB: as for adding your own checks to the check command, this is fully documented too.
Cloud platform providers often provide health checks that ping your app at a certain endpoint, e.g. /health.
django-health-check provides tests that could be executed when /health is accessed.
If they all pass it returns a 200. Otherwise the cloud provider will notify your admins.
Of course you could make that page visible only to admins so they can check manually, or write your own script to supervise your application's status.
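For the third-party-service examples in the question (SMTP server, archive systems, ERP systems), the individual probes behind such a page can be plain functions. A generic TCP reachability check might look like this sketch, where host and port are whatever your service uses:

```python
import socket

def check_tcp(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A /health view can then run a dict of such probes and return 200 only when all of them pass, 503 otherwise.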
Some time ago I created a script using Python, the script will execute some actions in an instance based on a configuration file.
This is the issue, I created 2 configuration files.
Config.py
instance= <Production url>
Value1= A
Value2= B
...
TestConfig.py
instance= <Development url>
Value1= C
Value2= D
...
So when I want the script to execute the tasks in a development instance to do tests, I just import the TestConfig.py instead of the Config.py.
Main.py
# from Config import *
from TestConfig import *
The problem comes when I update the script using git. If I want to run the script in development I have to modify the file manually, which means I will have uncommitted changes on the server.
Doing this change takes about 1 min of my time but I feel like I'm doing something wrong.
Do you know if there's a standard or right way to accomplish this kind of tasks?
Use this:
try:
    from TestConfig import *
except ImportError:
    from Config import *
On production, remove TestConfig.py
Export environment variables on your machines, and chose the settings based on that environment variable.
I think Django addresses this issue best with local_settings.py. Based on this approach, at the end of all your imports (after from config import *), just add:
# Add this at the end of all imports.
# This is safe to commit and even push to production,
# so long as you don't have local_config on your production server.
try:
    from local_config import *
except ImportError:
    pass
And create a local_config.py per machine. What this will do is import everything from config, and then again from local_config, overriding global configuration settings, should they have the same name as the settings inside config.
The other answers here offer perfectly fine solutions if you really want to differentiate between production and test environments in your script. I would advocate for a different approach, however: to properly test your code, you should create an entirely separate test environment and run your code there, without any changes (or changes to the config files).
I can't make any specific suggestions for how to go about this since I don't know what the script does. In general though, you should try to create a sandbox that spoofs your production environment and is completely isolated. You can create a wrapper script that will run your code in the sandbox and modify the inputs and outputs as necessary to make your code interact with the test environment instead of the production environment. This wrapper is where you should be choosing which environment the code runs in and which config files it uses.
This approach abstracts the testing away from the code itself, making both easier to maintain. Designing for test is a reasonable approach for hardware, where you are stuck with the hardware you have after fabrication, but it makes less sense for software, where wrappers and spoofed data are easier to manage. You shouldn't have to modify your production code base just to handle testing.
It also entirely eliminates the chance that you'll forget to change something when you want to switch between testing and deployment to production.
Usually, it is a good idea to load a configuration from a configurable file. This is what from_envvar() can do, replacing the from_object() line above:
app.config.from_envvar('FLASKR_SETTINGS', silent=True)
That way someone can set an environment variable called FLASKR_SETTINGS to specify a config file to be loaded which will then override the default values. The silent switch just tells Flask to not complain if no such environment key is set.
I am not too familiar with environment variables, and I would like an explanation of the above paragraph in simple terms. My best guess is that when the program reads FLASKR_SETTING, it means that on my own computer I have set up a mapping to this file under that name with something called an environment variable. I've messed with my environment path before and, to be honest, I still don't understand it, so I came here looking for a clear answer.
Environment variables are name/value pairs defined for a particular process running on a computer (Windows or UNIX/Linux etc.). They are not files. You can create your own environment variables and give them any name/value. For example, FLASKR_SETTING is the name of an environment variable whose value could be set to a config file. On a UNIX terminal, for example, you can do:
export FLASKR_SETTING=/somepath/config.txt
By doing the above, you have just created an environment variable named FLASKR_SETTING whose value is set to /somepath/config.txt. The reason you use environment variables is that you can tie them to a certain process and use them on demand when your process starts. You don't have to worry about saving them in a file. In fact, you can create a launch script for your process/application that sets a variety of environment variables before you start using the application.
In case of flask, app.config.from_envvar('FLASKR_SETTINGS', silent=True) sets the value of FLASKR_SETTINGS to the value from the env. variable. So it basically translates to:
- Find the config file (/somepath/config.txt etc.)
- Let's say the contents of the config file are:
  SECRET_KEY = "whatever"
  DEBUG = True
- Then, using the two above, it will be translated to:
  app.config['SECRET_KEY'] = "whatever"
  app.config['DEBUG'] = True
So this way, you can just update the config file as needed and you will not need to change your code.
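To see the whole round trip in one place, you can write a config file yourself, point the environment variable at it, and load it. This is a sketch; the file is a throwaway temp file and the key names mirror the example above:

```python
import os
import tempfile

from flask import Flask

app = Flask(__name__)

# write a throwaway config file, standing in for one an operator
# would place on the server
cfg = tempfile.NamedTemporaryFile("w", suffix=".py", delete=False)
cfg.write('SECRET_KEY = "whatever"\nDEBUG = True\n')
cfg.close()

# point the env var at it, then let Flask load it
os.environ["FLASKR_SETTINGS"] = cfg.name
app.config.from_envvar("FLASKR_SETTINGS", silent=True)
```

After this, app.config["SECRET_KEY"] and app.config["DEBUG"] hold the values from the file; only uppercase names in the file are loaded into the config.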
Environment variables are a simple, ad-hoc way of passing information to programs. On unixy machines, from a command shell, it's as simple as
export FLASKR_SETTINGS=/path/to/settings.conf
/path/to/program
This is especially useful when installing programs to start up at reboot; the configuration can be easily included in the same setup script that launches the system program.