How are local environment variables evaluated by Azure Functions with the Python runtime?

I understand that...
import os
foo = os.getenv('ENV_VAR_NAME')
... will pull "environment" variables from the local.settings.json file when running an Azure Function locally. This is similar to using a .env file.
I know that when deployed to Azure, environment variables are pulled from the Function App's App Settings.
Question:
When running the Function locally, if I have an environment variable set using my terminal (Ex: set DEBUG=true), and this variable is also included in the local.settings.json file (Ex: "DEBUG": false), how does the Function code know which env var to pull in?
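One way to find out empirically is to log what the worker process actually sees at runtime (a quick diagnostic sketch, using the DEBUG variable from the question):

```python
import os

# Print the value the Functions host actually exposes to the worker
# process. A value of None means neither the shell variable nor
# local.settings.json reached the process environment.
print("DEBUG =", os.getenv("DEBUG"))
```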


Passing Github Action Workflow Secret to Local Python Environment

I've seen a few similar questions here, but I don't think this specific one has been answered yet. I am on a machine learning team and we do a LOT of discovery/exploratory analysis in a local environment.
I am trying to pass secrets stored in my GitHub Enterprise account to my local environment, the same way that Azure Key Vault does.
Here is my workflow file:
name: qubole_access
on: [pull_request, push]
env:
  ## Sets environment variable
  QUBOLE_API_TOKEN: ${{secrets.QUBOLE_API_TOKEN}}
jobs:
  job1:
    runs-on: self-hosted
    steps:
      - name: step 1
        run: echo "The API key is:${{env.QUBOLE_API_TOKEN}}"
I can tell it's working because the job runs successfully in the workflow.
The workflow file references an API token used to access our Qubole database. This token is stored as a secret in the 'secrets' area of my repo.
What I want to do now is reference that environment variable in a LOCAL python environment. It's important that it be in a local environment because it's less expensive and I don't want to risk anyone on my team accidentally forgetting and pushing secrets in their code, even if it's in a local git ignore file.
I have fetched/pulled/pushed/restarted etc etc and I can't get the variable into my environment.
When I check the environment variables by running env in the terminal, no environment variables show up there either.
Is there a way to treat GitHub secrets like secrets in Azure Key Vault? Or am I missing something obvious?
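Actions secrets only exist inside the runner's process, which is why they never appear in your local shell. A common local substitute (not a GitHub feature) is a git-ignored .env file; here is a minimal stdlib-only sketch, where `load_env_file` is a hypothetical helper, not part of any library:

```python
import os

def load_env_file(path=".env"):
    # Parse simple KEY=VALUE lines, skipping blanks and comments.
    # Existing environment variables are not overwritten.
    if not os.path.exists(path):
        return
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

load_env_file()
token = os.getenv("QUBOLE_API_TOKEN")
```

The same idea is available off the shelf in the python-dotenv package; the point is that the secret lives in an untracked local file rather than in code.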

How to see all environment variables in Flask

How can I see all the environment variables in my Flask application? When I was following a tutorial, the instructor used the "export FLASK_APP=app.py" command and then went on to set other environment variable values. So, how can I see all of my environment variables and their values?
To see which env variables are set on your OS, you can use the commands below.
Env vars are variables on your OS, not Flask-specific variables, if they are set with e.g. export.
There are also .env files, loaded or used by applications such as Node, where configuration-specific data like keys or URLs is stored.
On Linux:
printenv
On Windows (PowerShell):
Get-ChildItem Env:
Further reading: the Wikipedia article on environment variables, and "An Introduction to Environment Variables".
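To list them from inside the application itself rather than the shell, os.environ is a plain string-to-string mapping (a minimal sketch, independent of Flask):

```python
import os

# os.environ behaves like a dict of str -> str; sorting the items
# makes the output easier to scan.
for name, value in sorted(os.environ.items()):
    print(f"{name}={value}")
```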

Is storing project configuration in environment variables a bad practice?

Some background first:
I am currently testing a class that sends a GET request with a configurable URL, which is built like this:
url = f"{os.environ['TARGET_URL']}/api/etc"
For normal operation, my TARGET_URL environment variable is set at project startup from a .env file and everything works. When testing locally, everything is still fine, tests pass and everyone is happy. My issue arose when I discovered that my Drone CI server failed to complete the project's build because the TARGET_URL environment variable wasn't found.
After some digging I found out that I had the wrong (dumb) idea that environment variables were reset at every project/test startup, and I basically was using my production environment variable all this time (even during tests) because it was set at first project startup.
From this story comes my question: given that environment variables are kept between executions, would storing configurations in them result in a bad practice? Is there an equally convenient alternative (no global objects and access from everywhere in the code) that can be used instead?
Thanks everyone for the quick responses, here's a bit of what-happened-next:
environment variables stay loaded after the first initialization, so I needed a way to test my code after loading only the variables I needed, with values that were expected. This would allow me to keep using environment variables loaded from a .env file and keep building my project remotely, where no .env files are present.
The solution was to add a pytest plugin called pytest-dotenv, which when properly configured would allow me to overwrite every variable in my .env files with a custom variable from another file (.env.test in my case). I filled the .env.test file with all the variables found in the .env file, and assigned empty values to each of them.
This allowed my tests to run ensuring no weird edge cases are missed because something had the wrong value.
example .env file
TARGET_URL="http://my.api.dev"
example .env.test file
TARGET_URL=
pytest.ini configuration
[pytest]
env_override_existing_values = 1
env_files =
    .env.test
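For completeness, pytest's built-in monkeypatch fixture achieves the same per-test isolation without any plugin (a sketch; build_url is a hypothetical stand-in for the class from the question):

```python
import os

def build_url():
    # Same pattern as in the question.
    return f"{os.environ['TARGET_URL']}/api/etc"

def test_build_url(monkeypatch):
    # monkeypatch.setenv restores the original environment
    # automatically when the test finishes.
    monkeypatch.setenv("TARGET_URL", "http://test.local")
    assert build_url() == "http://test.local/api/etc"
```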
Storing configuration in environment variables or .env files is not a bad practice.
However, it is recommended that you use a key vault such as Azure Key Vault or AWS Key Management Service for production deployments.
That way the keys are kept further away from your server (for .env files) as well as from your code (for config files).

Will my virtual environment folder still work if I shift it from my local machine to a shared folder?

Currently, I have a Flask app waiting to be shifted to a shared drive in the company's local network. My current plan is to create a virtual environment inside a folder locally and install all the necessary dependencies (Python 3, Flask, pandas, etc.). This is to ensure that my Flask app can still reference the dependencies it needs to run.
As I can't access my shared drive via the command prompt, my plan is to create the virtual environment locally, then shift it to the shared folder together with all the scripts required by my Flask app. Will the app be able to run from the shared drive in this manner?
The standard virtualenv hardcodes the path of the environment into some of the files it creates, and stops working if you rename or move it. So no, you'll probably have to recreate the env on the destination server.
Perhaps your app's startup script could take care of creating the env if it's missing.
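Rather than copying the folder, the usual approach is to record the dependencies and recreate the environment in place (a sketch, assuming a requirements.txt is kept next to the app):

```shell
# On the machine that has the working environment, record dependencies:
pip freeze > requirements.txt

# On the destination (or in the app's startup script), recreate it:
python3 -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt
```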

Is it possible to override uwsgi ini-file with environment variables

I'm trying to build a "base" docker image for running a python framework with uwsgi. The goal is to have others build their own docker images where they dump their application logic and any configuration overrides they need.
I thought it might be nice to be able to override any default settings from a uwsgi.ini file by supplying UWSGI_* environment variables passed to uwsgi at startup.
I've tried this approach, and setting a value via an env var works if it's not in the ini-file at all (e.g. UWSGI_WORKERS=4). But if I put a workers=1 line in the ini-file, it seems to override the env var.
Is this expected behaviour? I'm having trouble finding anything about config resolution order in the docs.
Do I have to resort to something like this? Using env vars seems so much cleaner.
if-exists = ./override.ini
include = %(_)
endif =
First, make all the settings in the .ini file refer to environment variables, like below:
[uwsgi]
http = $(HTTP_PORT)
processes = $(UWSGI_WORKERS)
threads = $(UWSGI_THREADS)
...
Then set whatever default values you want for those environment variables inside the Dockerfile.
Now, anyone using your base image can override any setting by setting the corresponding env variable.
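The defaults step can be sketched in the base image's Dockerfile like this (the base image name and the values are illustrative, not from the question):

```dockerfile
FROM python:3.11-slim

# Defaults for the $(VAR) placeholders in uwsgi.ini; consumers can
# override any of them, e.g. `docker run -e UWSGI_WORKERS=8 ...`
ENV HTTP_PORT=8000 \
    UWSGI_WORKERS=4 \
    UWSGI_THREADS=2
```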
