Is it possible to override uwsgi ini-file with environment variables - python

I'm trying to build a "base" docker image for running a python framework with uwsgi. The goal is to have others build their own docker images where they dump their application logic and any configuration overrides they need.
I thought it might be nice to be able to override any default settings from a uwsgi.ini file by supplying UWSGI_* environment variables passed to uwsgi at startup.
I've tried this approach, and setting a value via an env var works if it's not in the ini-file at all (e.g. UWSGI_WORKERS=4). But if I put a workers=1 line in the ini-file, it seems to override the env var.
Is this expected behaviour? I'm having trouble finding anything about config resolution order in the docs.
Do I have to resort to something like this? Using env vars seems so much cleaner.
if-exists = ./override.ini
include = %(_)
endif =

First, make all of the settings in the .ini file refer to environment variables, like below:
[uwsgi]
http = $(HTTP_PORT)
processes = $(UWSGI_WORKERS)
threads = $(UWSGI_THREADS)
...
Then set whatever default values you want for these environment variables inside the Dockerfile.
Now, anyone using your base image can override any of these settings by setting the corresponding environment variable.
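For example, a minimal sketch of what the base image's Dockerfile might contain (the port and worker/thread defaults here are just assumptions, matching the variable names above):
# assumed defaults for the variables referenced in uwsgi.ini
ENV HTTP_PORT=8080 \
    UWSGI_WORKERS=2 \
    UWSGI_THREADS=2
Anyone extending the image can then override them with another ENV line in their own Dockerfile, or at run time with docker run -e UWSGI_WORKERS=4 ...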

Related

How to see all environment variables in Flask

How can I see all the environment variables in my Flask application? When I was following a tutorial, the instructor used the export FLASK_APP=app.py command and then went on to set other environment variables' values. So, how can I see all of my environment variables and their values?
To see which env variables are set on your OS, you can use the commands below. Env vars are variables of your OS, not Flask-specific ones, if they were set with export, for example.
There are also .env files that are loaded or used by applications (Node, for example), where configuration-specific data such as keys or URLs is stored.
On Linux:
printenv
On Windows (PowerShell):
Get-ChildItem Env:
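If you want to inspect them from inside the Python process instead (which is what Flask will actually see), os.environ holds the same values; a minimal sketch:
import os

# print every environment variable visible to this process
for name, value in os.environ.items():
    print(f"{name}={value}")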
See also: the Wikipedia article on environment variables, and "An Introduction to Environment Variables".

How to set environment variables for Django settings.py with Jenkins

I have a settings.py file for my Django project that reads in environment variables for some fields. This works great for local development, but I'm not sure how to make this work for my Jenkins build (Freestyle project). An extra layer of complexity is that some of these variables contain sensitive data (passwords, etc.) and so need to be secured.
So far, I have been able to use Credentials Binding Plugin to set up a secret .env file and can successfully access that as an environment variable.
How do I go about setting all the 'actual' variables from that file? Or is this bad practice?
If it's not bad, wouldn't those variables continue living on the server after the build has finished?
If it is bad, what is the better way to do this? I can update the settings.py file easily enough if needed to make this work.
Maybe using something like django-environ is the right approach?
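For reference, a minimal settings.py sketch using django-environ might look like the following; the variable names (SECRET_KEY, DATABASE_URL) and the .env location are assumptions, and in a Jenkins build read_env() could be pointed at the secret file exposed by the Credentials Binding Plugin instead:
import environ

env = environ.Env(DEBUG=(bool, False))           # cast DEBUG to bool, default False
environ.Env.read_env()                           # load a .env file if one is present

SECRET_KEY = env("SECRET_KEY")                   # hypothetical variable names
DEBUG = env("DEBUG")
DATABASES = {"default": env.db("DATABASE_URL")}  # parses a database URL string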

Is storing project configuration in environment variables a bad practice?

Some background first:
I am currently testing a class that sends a GET request with a configurable URL, which is built like this:
url = f"{os.environ['TARGET_URL']}/api/etc"
For normal operation, my TARGET_URL environment variable is set at project startup from a .env file and everything works. When testing locally, everything is still fine, tests passes and everyone is happy. My issue arose when I discovered that my Drone CI server failed to complete the project's build because the TARGET_URL environment variable wasn't found.
After some digging I found out that I had the wrong (dumb) idea that environment variables were reset at every project/test startup, and I basically was using my production environment variable all this time (even during tests) because it was set at first project startup.
From this story comes my question: given that environment variables are kept between executions, would storing configurations in them result in a bad practice? Is there an equally convenient alternative (no global objects and access from everywhere in the code) that can be used instead?
Thanks everyone for the quick responses, here's a bit of what-happened-next:
environment variables stay loaded after the first initialization, so I needed a way to test my code after loading only the variables I needed, with values that were expected. This would allow me to keep using environment variables loaded from a .env file and keep building my project remotely, where no .env files are present.
The solution was to add a pytest plugin called pytest-dotenv, which, when properly configured, allows me to overwrite every variable in my .env file with a custom value from another file (.env.test in my case). I filled the .env.test file with all the variables found in the .env file and assigned empty values to each of them.
This allowed my tests to run ensuring no weird edge cases are missed because something had the wrong value.
example .env file
TARGET_URL="http://my.api.dev
example .env.test file
TARGET_URL=
pytest.ini configuration
[pytest]
env_override_existing_values = 1
env_files =
    .env.test
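As a possible complement (not part of the answer above), pytest's built-in monkeypatch fixture can pin a single variable for one test, regardless of what was loaded from .env files at startup; build_url below is just a hypothetical stand-in for the code under test:
import os

def build_url():
    # hypothetical stand-in for the class that builds the request URL
    return f"{os.environ['TARGET_URL']}/api/etc"

def test_build_url(monkeypatch):
    monkeypatch.setenv("TARGET_URL", "http://testserver")
    assert build_url() == "http://testserver/api/etc"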
Environment variables stored in config files or .env files are not a bad practice.
However, it is recommended that you use a key vault such as Azure Key Vault or AWS Key Management Service for production deployments.
This way you further remove the keys from your server (if they are in env files) as well as from your code (if they are in config files).

How can I temporarily change an environment variable of a kubernetes pod?

We have Python services running in pods in a Kubernetes cluster. The services are set up to receive their log level from an environment variable. Those env vars are set during the deployment of the service in a GitLab pipeline. For debugging purposes I want to be able to just change the env var on a single pod and restart it, without having to redeploy the service from GitLab.
Before we moved to kubernetes, we were running our containers in rancher, where the described change was very easy to do in the GUI. Change the env var -> hit update -> container restarts automatically.
I found this article that suggests changing the replica set using a command like
kubectl set env rs [REPLICASET_NAME] [ENV_VAR]=[VALUE]
And then terminating the pod, after which it will be recreated with the env var set accordingly.
But it also states:
Never do it on a production system.
Never even do this on a dev environment without taking care in how it may impact your deployment workflow.
Is that the only / best way to achieve my goal of quickly changing an env var in a running pod for debug purposes?
Is that the only / best way to achieve my goal of quickly changing an env var in a running pod for debug purposes?
Short answer: Yes.
Long answer: I've never used or read up on Rancher, but I suspect that it was also changing the ReplicaSet or Deployment template env var, which triggered a Pod update. It's really the only way to change an env var in a Pod. You can't change the env vars on a running container or a running Pod. You can't do that in Docker containers, and you can't do it in Kubernetes, so I assume that you can't do it in Rancher. You can only restart a Pod with a different spec.
Why?
Because containers are just processes running on the host machine. Once the process is started, it's not possible to change a process's environment without resorting to nasty hacks.
If you're just concerned about the warnings that state to not do this in dev or prod, I would say that the same warnings apply to the Rancher workflow you described, so if you were willing to take the risks there, it won't be any different here.
Something I do frequently is define my environment variables in the deployment spec. Then, while the deployment is running, I am able to just do
kubectl edit deployment <name>
and change the environment variables that I want. This will restart the pod, but for my development purposes that's typically okay.
If the environment variable is baked into the image, though, then you will need to either rebuild the image and restart the pod (which will pull the image) or use some of the suggestions others have stated here.
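As an aside, the same edit can also be done non-interactively; the deployment and variable names below are placeholders:
kubectl set env deployment/my-service LOG_LEVEL=DEBUG
Because this changes the Deployment's pod template, it triggers a rolling restart just like kubectl edit does.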

Is it good practice to pass arguments to Docker containers through environment variables?

I have an image whose task is to run a simulation. However there are some settings that I want to change in different simulations.
To achieve this, I set some environment variables when starting a container, which I then pass as command-line arguments to my main simulation script.
Is this considered good practice for creating "customized" containers from the same image?
The typical ways to configure a container include:
command line arguments (often paired with an entrypoint)
environment variables
config file mounted as a volume or docker config
There's no one best way, though the config file method is typically used for passing a larger amount of configuration into a container that may also be checked into version control. Environment variables are perfectly acceptable and are also described in the 12 factor app design.
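To illustrate the environment-variable route, here is a minimal sketch of the pattern described in the question; SIM_ITERATIONS and the image name are made-up examples:
import argparse
import os

parser = argparse.ArgumentParser(description="Run a simulation")
parser.add_argument(
    "--iterations",
    type=int,
    # fall back to an environment variable, then to a hard-coded default
    default=int(os.environ.get("SIM_ITERATIONS", "100")),
)
args = parser.parse_args()
print(f"Running the simulation for {args.iterations} iterations")
A container built from the same image can then be customized at start time with something like docker run -e SIM_ITERATIONS=500 my-simulation-image.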
