The parameters that I need to pass to my Python program use some environment variables.
Example:
This is how I run it on the terminal:
export BUCKET="/tmp/bucket"
python main.py --input $BUCKET/input --output $BUCKET/output
On PyCharm, I created a Run/Debug configuration with an environment variable called BUCKET and passed the following string as parameters: --input $BUCKET/input --output $BUCKET/output.
When PyCharm executes the program, it doesn't expand BUCKET to /tmp/bucket; it treats $BUCKET as a literal string.
I also tried using ${BUCKET} instead of $BUCKET but that doesn't work either.
Is there some way to pass variables?
Note: The reason I want to do this is that I have a large number of parameters in my real code. I have only provided a toy example above. I want to be able to update an environment variable only in one place.
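One possible workaround is to expand the variables inside the program itself with os.path.expandvars, so the Run/Debug configuration can pass --input $BUCKET/input as a literal string. A minimal sketch (wiring this into main.py is an assumption about the program, not something PyCharm does for you):

```python
import os

def resolve(path_template):
    """Expand $VAR references in a CLI argument, e.g. '$BUCKET/input'."""
    return os.path.expandvars(path_template)

os.environ.setdefault('BUCKET', '/tmp/bucket')  # normally set outside the program
print(resolve('$BUCKET/input'))
```

os.path.expandvars leaves unknown variables untouched, so a missing BUCKET shows up as a literal $BUCKET in the resulting path rather than raising an error.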
I encountered the same problem a few days back. I found a plugin called EnvFile.
Using this plugin, you can have an env file exported before the script runs. After installing it, you'll get an extra EnvFile tab in your run configuration. Select your environment file there; it is specific to the configuration. Now, every time you run the configuration, the environment variables will be exported.
My .env file
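For illustration, an EnvFile-style file for the BUCKET example from the question might look like this (hypothetical value):

```
BUCKET=/tmp/bucket
```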
Related
I am stuck on an environment variables mismatch.
I run a Python script on Windows 10 via a program called NSSM.
At runtime, I do the following:
Load in parameters from a text file
Put its contents into the environment using os.environ.setdefault(name, value).
Try to load in environment variables using os.environ[name]
Result: any variables I added do not show up.
I am not sure why the variables I add aren't available. Can you please tell me what I am doing wrong?
A starting point is that NSSM uses environment variables from the Windows HKLM registry (see the NSSM documentation). I am not sure if this is the reason os.environ cannot see the relevant variables.
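For reference, the three steps described above can be sketched like this (assuming a simple name=value format for the parameter file, which is an assumption about the question's setup):

```python
import os

def load_params(path):
    """Steps 1 and 2: read name=value lines and put them into the environment."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            name, _, value = line.partition('=')
            # setdefault only adds the variable if it is not already set
            os.environ.setdefault(name, value)

# Step 3: read a variable back (raises KeyError if it is missing)
# print(os.environ['SOME_NAME'])
```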
I've had trouble using os.environ.setdefault in the past as well. Instead, if you were trying to add to the PATH environment variable, for example, you could do the following:
os.environ['PATH'] += ";" + the_path_to_the_file
EDIT:
Also, for creating a new variable:
os.environ['new_var'] = 'text'
Well, it turns out that my problem was outside of the scope of this question. @Recessive and @eryksun, thank you both for answering, it put me "onto the scent".
It turns out my problem was using Python pathlib's Path.home().
When running via command prompt, it pulled HOMEPATH environment variable.
When running via NSSM, it pulled USERPROFILE environment variable.
This discrepancy in Path.home() was the real problem. It wasn't finding the environment variables because NSSM was looking in a totally different folder.
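A quick diagnostic for a mismatch like this (a sketch, not from the original posts) is to print what Path.home() resolves to alongside the candidate variables, once from the command prompt and once under NSSM:

```python
import os
from pathlib import Path

# Compare what pathlib resolves against the variables it may consult;
# run this once per execution context to make the discrepancy visible.
print('Path.home():', Path.home())
print('HOMEPATH:   ', os.environ.get('HOMEPATH'))
print('USERPROFILE:', os.environ.get('USERPROFILE'))
```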
I added an environment variable by writing these two lines in the ~/.bashrc file:
var="stuff.."
export var
Using the Python interpreter in a normal terminal, these two lines of code work:
import os
print(os.environ['var'])
but in a Blender Python console it generates a KeyError, and printing os.environ I can see that there is no item with 'var' as its key.
So I think this is a problem with the environment settings on Unix systems.
Can anyone help me and explain how to export environment variables so that other processes can see them? Thanks, and sorry for my English.
The .bashrc file (and similar files such as .cshrc) is read when your shell is started. Similarly, when you start a GUI desktop, the shell rc files are read at the time it starts, and the variables set at that time are then part of the environment passed on to any GUI apps. Changes made while the desktop is running do not get read in as you start a new app. You can find ways of setting environment variables for different desktops.
One way of passing environment variables into blender is to start it from a terminal window. The rc files will be read when you open the terminal, you can also manually set environment variables before starting blender.
Another way to set environment variables for Blender is to start it from a script. This may be one called myblender that will be found in your $PATH, or it can even be named blender if it will be found before the real blender. In this script you can set variables before starting Blender, and any changes will be in effect when you run it.
#!/bin/bash
var="stuff.."
export var
exec /usr/local/bin/blender "$@"
After updating ~/.bashrc you either have to source ~/.bashrc in the terminal where you launch blender or log out and log back in to your system, where the variable should then be in the environment.
If you need to get environment variables that may or may not be available, you can also do something like os.getenv('var', 'default value')
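A minimal contrast of the two lookup styles from the question and this answer:

```python
import os

# os.environ[...] raises KeyError when the variable was never exported
# to this process (the Blender console case above):
try:
    value = os.environ['var']
except KeyError:
    value = None

# os.getenv returns a fallback instead of raising:
value = os.getenv('var', 'default value')
print(value)
```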
Airflow returns an error when trying to run a DAG, saying that it can't find an environment variable. This is odd because it is able to find the 3 other environment variables that I'm storing as Python variables. No issues with those variables at all.
I have all 4 variables in ~/.profile and have also done
export var1="variable1"
export var2="variable2"
export var3="variable3"
export var4="variable4"
Under what user does Airflow run? I've run those export commands under sudo as well, so I thought they would be picked up by Airflow when it runs the DAG.
Is it maybe because Airflow uses a non-login shell? Have you tried putting these lines in ~/.bashrc instead of ~/.profile?
As per this answer, the variables should be put in /etc/default/airflow (on Debian/Ubuntu) or /etc/sysconfig/airflow (on Centos/Redhat).
If you are just running a local instance you should be able to use environment variables like you expect. Remember that you need to set them in the shell that is running the webserver and scheduler though. If these are in your .profile, you may need to run source ~/.profile.
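Putting that together, a sketch of the Debian/Ubuntu variant from the answer above, using the variable names from the question (values hypothetical):

```shell
# /etc/default/airflow  (use /etc/sysconfig/airflow on Centos/Redhat)
var1="variable1"
var2="variable2"
var3="variable3"
var4="variable4"
```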
As I'm continuing to work in docker-machine and Django, I'm trying to make a setup script for my project that auto-detects platform and decides how to set up Docker and the required containers. Auto-detection works fine. One thing I can't figure out is how to automatically set the environment variables needed for docker-machine to work on Mac OS X. Currently, the script will just tell the user to manually set the environment variable using the command
eval $(docker-machine env dev)
where dev is the name of the VM. This prompt happens after initial setup is successfully completed. The user is told to do this because the following subprocess call does not actually set the environment variables:
subprocess.call('eval $(docker-machine env dev)', shell=True)
If an error occurs during creating the VM because the VM already exists, then I use subprocess to see if Docker is already installed:
check_docker = subprocess.check_call('docker run hello-world', shell=True)
If this call is successful, then the script tells the user that Docker was already installed and then prompts the user to manually set the environment variables to be able to start the containers needed for the Django server to run. I had originally thought that the script behaved correctly in this scenario, but it turns out that it only appeared that way because I had already set the environment variables manually. Of course, I see now that the docker run command needs the environment variables to be set in order to work, and since the environment variables never get set in the script, the docker run test doesn't work.

So, how am I supposed to correctly set the environment variables from Python? It seems like using subprocess results in the wrong environment getting these variables set. If I do something like
subprocess.call('setdockerenv.sh', shell=True)
where setdockerenv.sh has the correct eval command, then I run into the same problem, which I'm guessing is rooted in using subprocess. Would os have something to do this properly where subprocess can't? It's important that I do this in the Python script, or else having the user manually set the environment variables and then manually test to see if docker is installed defeats the purpose of having the script.
You cannot use subprocess to change the environment, since any changes it makes are local to that process. Instead, (as you found) you can change your current environment via os.environ, and that is inherited by any other processes you subsequently create.
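Concretely, a sketch of that approach for the docker-machine case above: capture the output of docker-machine env yourself and apply it via os.environ, so later subprocess calls such as docker run inherit it. The parsing assumes the typical `export KEY="value"` output format:

```python
import os
import re
import subprocess

def parse_exports(text):
    """Extract name/value pairs from lines like: export DOCKER_HOST="tcp://..."."""
    pairs = {}
    for line in text.splitlines():
        match = re.match(r'\s*export (\w+)="?(.*?)"?\s*$', line)
        if match:
            pairs[match.group(1)] = match.group(2)
    return pairs

def apply_machine_env(machine='dev'):
    """Merge `docker-machine env <machine>` output into this process's environment."""
    output = subprocess.check_output(
        ['docker-machine', 'env', machine], universal_newlines=True)
    os.environ.update(parse_exports(output))
    # Subsequent subprocess calls (e.g. `docker run hello-world`) now
    # inherit DOCKER_HOST and friends.
```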
This is a bit of a continuation from my previous question: cx_Oracle does not recognize location of Oracle software installation for installation on Linux.
After I was able to get cx_oracle installed properly, I wanted to set up my environment so the environment variables don't have to be exported every time.
To do this, I wrote a shellscript that included these two export statements:
export ORACLE_HOME=/home/user1/instantclient_12_1
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME
And saved this .sh file into the /etc/profile.d/ folder.
When I log into the server again with PuTTY, the echo statements say that the environment variables are there:
# echo $ORACLE_HOME
/home/user1/instantclient_12_1
# echo $LD_LIBRARY_PATH
:/home/user1/instantclient_12_1
But when I run some python code with cx_oracle, I get an error:
ImportError: libclntsh.so.12.1: cannot open shared object file: No such file or directory
The code only runs again when I re-enter the export commands for the environment variables. After I do that, the code using cx_oracle runs fine.
Why don't the environment variables work properly even though they show up when I do the echo command? And how do I get the environment variables to persist properly?
The guides I read say to do it with a shell script in /etc/profile.d/ because it's better to not edit /etc/profile directly.
Update:
I tried adding the two export lines to /etc/profile, but I still get the same problem where the environment variables are there when I echo, but I still get this error when trying to use cx_oracle in python:
ImportError: libclntsh.so.12.1: cannot open shared object file: No such file or directory
Am I missing some key thing about defining environment variables?
Second Update:
I tried initializing the environment with a shell script that I planned to run with the code that calls cx_Oracle:
Contents of StartServer.sh:
export ORACLE_HOME=/home/user1/instantclient_12_1
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME
python3 ./UDPDBQuery.pyc
And I try to run the code in the background by doing:
bash StartServer.sh &
But I still run into that same error as before, as if I did not put in the environment variables. It only works if I export the variables myself, and then run the code again. The code also stops running in the background when I log out. I'm still very confused as to why it isn't working.
Are environment variables not usable by cx_oracle unless I manually do the export statement for them?
Alright, I found out that one of the two environment variables was not exporting properly with the .sh file in /etc/profile.d: running $LD_LIBRARY_PATH as a command would give me No such file or directorytclient_12_1, but $ORACLE_HOME would give me /home/user1/instantclient_12_1/: is a directory.
The way I solved this was to split the export statements into two separate shell scripts in profile.d.
Everything works now.
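A hypothetical diagnostic for problems like this: print the variables with repr() from inside the Python process, which exposes invisible characters (such as a stray '\r' from a script with Windows line endings) that echo output can hide:

```python
import os

# repr() shows quoting and invisible characters that `echo` would not
for name in ('ORACLE_HOME', 'LD_LIBRARY_PATH'):
    print(name, '=', repr(os.environ.get(name)))
```

Note that LD_LIBRARY_PATH is read by the dynamic loader when the process starts, so it has to be exported before Python is launched; setting it from inside the script is too late for libclntsh to be found.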