As I'm continuing to work in docker-machine and Django, I'm trying to make a setup script for my project that auto-detects platform and decides how to set up Docker and the required containers. Auto-detection works fine. One thing I can't figure out is how to automatically set the environment variables needed for docker-machine to work on Mac OS X. Currently, the script will just tell the user to manually set the environment variable using the command
eval $(docker-machine env dev)
where dev is the name of the VM. This prompt happens after initial setup is successfully completed. The user is told to do this because the following subprocess call does not actually set the environment variables:
subprocess.call('eval $(docker-machine env dev)', shell=True)
If an error occurs during creating the VM because the VM already exists, then I use subprocess to see if Docker is already installed:
check_docker = subprocess.check_call('docker run hello-world', shell=True)
If this call is successful, the script tells the user that Docker was already installed and then prompts them to manually set the environment variables so the containers needed for the Django server can be started. I had originally thought the script behaved correctly in this scenario, but it turns out it only appeared that way because I had already set the environment variables manually. Of course, I now see that the docker run command needs the environment variables to be set in order to work, and since they never get set in the script, the docker run test fails.

So, how am I supposed to correctly set the environment variables from Python? It seems like using subprocess results in the wrong environment getting these variables set. If I do something like
subprocess.call('setdockerenv.sh', shell=True)
where setdockerenv.sh contains the correct eval command, I run into the same problem, which I'm guessing is rooted in using subprocess. Does os have something that can do this properly where subprocess can't? It's important that I do this in the Python script; otherwise, having the user manually set the environment variables and then manually test whether Docker is installed defeats the purpose of having the script.
You cannot use subprocess to change your environment, since any changes a child process makes are local to that process. Instead, change the current process's environment via os.environ; that environment is then inherited by any processes you subsequently create.
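A minimal sketch of that approach, using the `dev` machine from the question: capture the output of `docker-machine env dev`, parse the `export` lines, and copy them into `os.environ` so that later `docker` calls inherit them. The parsing helper is made up for illustration, not part of any library:

```python
import os
import re
import subprocess

def parse_machine_env(output):
    """Parse `docker-machine env` output such as
    export DOCKER_HOST="tcp://192.168.99.100:2376"
    into a dict of variable names to values; other lines are ignored."""
    env = {}
    for line in output.splitlines():
        match = re.match(r'export (\w+)="?(.*?)"?$', line.strip())
        if match:
            env[match.group(1)] = match.group(2)
    return env

# In the setup script (needs a working docker-machine on the machine):
# output = subprocess.check_output(['docker-machine', 'env', 'dev']).decode()
# os.environ.update(parse_machine_env(output))
# subprocess.check_call('docker run hello-world', shell=True)  # now inherits them
```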
Related
When I run my Python script from the terminal, I get an error due to a missing env variable. I can fix this by setting it with export. But there are other Python scripts, running fine as cron jobs, that require this variable as well, and I can see it being set in the crontab. So my question is: is an env variable set in cron available to scripts run by user/root from the CLI? Running the env command as root doesn't show this variable.
Environment variables are private to each process, not global. When a new process is started it inherits its initial environment from its parent. Your terminal shell isn't started by cron so it doesn't inherit the env vars you've set in your crontab.
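The direction of inheritance is easy to demonstrate from Python itself: a variable set in the parent's environment is visible to a child process it starts, but nothing a child sets ever propagates back up:

```python
import os
import subprocess
import sys

# Set a variable in this (parent) process, then start a child Python
# process: the child inherits a copy of the parent's environment.
os.environ['DEMO_VAR'] = 'from-parent'
child_sees = subprocess.check_output(
    [sys.executable, '-c', "import os; print(os.environ['DEMO_VAR'])"])
print(child_sees.decode().strip())  # from-parent
```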
Your OS may provide a standard mechanism for setting env vars for all processes (including your terminal, which then starts your CLI shell). What I prefer to do is create a file named ~/.environ that I then source from both my ~/.bashrc and my cron jobs.
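For the Python scripts themselves, a variation on the same idea is to load that shared file directly at startup. This sketch assumes ~/.environ contains plain KEY=VALUE lines (no shell expansions); the helper name is made up for illustration:

```python
import os

def load_environ_file(path=None):
    """Load simple KEY=VALUE lines from a shared file (e.g. the same
    ~/.environ sourced by ~/.bashrc) into this process's environment,
    so a script sees the same variables under cron and in a terminal."""
    if path is None:
        path = os.path.expanduser('~/.environ')
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith('#'):
                continue  # skip blanks and comments
            key, _, value = line.partition('=')
            os.environ[key.strip()] = value.strip().strip('"')
```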
I have a ROS application with a workspace containing a setup.bash file, and a separate Python script with its own virtual environment.
So far this is what I do in my terminal:
1. pipenv shell (to activate my Python virtual environment)
2. source ../ros_workspace/devel/setup.bash
3. python some_python_script.py
This code works as I expect.
However, I want to do the same and run this script in PyCharm, where my virtual environment is already activated. But how do I additionally source the setup bash file?
My setup.bash file also looks like the following:
What I have tried also is making a "before launch" as follows:
If you set your virtual environment as your interpreter of choice in PyCharm, it will use that particular virtual environment to run its scripts. However, you can also take advantage of some of the functionality that our run configurations provide.
You can check out the "Before Launch" part of the whole configuration window to enter scripts that you want executed.
Once you've set your configuration, you can then go on to run or debug it. Furthermore, if it is just environment variables that you want to source, you can simply enter them in the "Environment Variables" box.
In case you want to run a shellscript, you will need to create a new shell configuration like so:
Once you've added that configuration, you can then go on to reference it later.
You will now see that you can reference that configuration in question:
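If the run-configuration route doesn't fit, one common workaround is to have the Python script itself source setup.bash in a subshell, dump the resulting environment, and copy it into os.environ before importing anything that needs it. This is a sketch assuming a POSIX bash is available; the helper is not a PyCharm or ROS API:

```python
import os
import subprocess

def source_into_environ(script_path):
    """Run a bash subshell that sources the given script and prints the
    resulting environment, then copy it into this Python process.
    Caveat: plain `env` output breaks on variables containing newlines."""
    output = subprocess.check_output(
        ['bash', '-c', 'source "$1" && env', '_', script_path])
    for line in output.decode().splitlines():
        key, sep, value = line.partition('=')
        if sep:
            os.environ[key] = value

# Usage, before any ROS-dependent imports:
# source_into_environ('../ros_workspace/devel/setup.bash')
```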
I added an environment variable by writing these two lines in my ~/.bashrc file:
var="stuff.."
export var
Using the Python interpreter in a normal terminal, these two lines of code work:
import os
print(os.environ['var'])
but in a Blender Python console it generates a KeyError; printing os.environ, I can see that there is no item with 'var' as its key.
So I think it is a problem with the environment settings on Unix systems.
Can anyone explain how to export environment variables so that other processes can see them? Thanks, and sorry for my English.
The .bashrc file (and similar files such as .cshrc) is read when your shell starts. Likewise, when you start a GUI desktop, the shell rc files are read as it starts, and the variables set at that time become part of the environment passed on to any GUI apps; changes made afterwards are not picked up when you start a new app. Different desktops provide their own ways of setting environment variables.
One way of passing environment variables into blender is to start it from a terminal window. The rc files will be read when you open the terminal, you can also manually set environment variables before starting blender.
Another way to set environment variables for Blender is to start it from a wrapper script. This could be a script called myblender found in your $PATH, or it could even be named blender, provided it is found before the real blender. In this script you can set variables before starting Blender, and those changes will be in effect when it runs:
#!/bin/bash
var="stuff.."
export var
exec /usr/local/bin/blender "$@"
After updating ~/.bashrc you either have to source ~/.bashrc in the terminal where you launch blender or log out and log back in to your system, where the variable should then be in the environment.
If you need to get environment variables that may or may not be available, you can also do something like os.getenv('var', 'default value')
I got a bunch of environment variables set with the launchctl command:
launchctl setenv TEST /Users/JohnDoe/Test
To get the value back, I can use:
launchctl getenv TEST
However, I can't access the value of TEST from Python, using os.getenv('TEST') (or even from Bash using echo $TEST). I do know how macOS manages environment variables (the difference between launchctl and Bash environment variables, etc.) so I understand why those commands don't return the value of TEST.
My question is: is there a way to access environment variables set with launchctl, without using subprocess? Using subprocess is not a no-go, I'd just rather avoid throwing lots of processes just to get environment variables :)
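For what it's worth, the subprocess route is short. This sketch (the helper name is made up) just wraps `launchctl getenv`, which prints an empty result for an unset variable:

```python
import subprocess

def launchctl_getenv(name):
    """Read a launchd environment variable via `launchctl getenv`.
    Returns None when the variable is unset. macOS only."""
    output = subprocess.check_output(['launchctl', 'getenv', name])
    value = output.decode().strip()
    return value or None
```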
According to the docker-machine documentation for installation on Windows, I need to run the following command to add ssh.exe to the %PATH% environment variable:
set PATH=%PATH%;"c:\Program Files (x86)\Git\bin"
This is so that cmd.exe can recognize docker-machine as a command. However, running the above command just doesn't seem to do anything at all. The same thing happens when I try this in Powershell:
$Env:Path = "${Env:Path};c:\Program Files (x86)\Git\bin"
which is to say, apparently nothing at all. My goal is to make docker-machine a recognizable command in cmd.exe, because I have a Python script that sets up the docker-machine VM, and the script needs to be able to run in cmd.exe. How exactly can this be done? It's important not to go into Advanced System Settings from My Computer and modify environment variables that way, since that requires admin privileges and the setup needs to work without any sort of admin privileges.
It's important to not go into Advanced System Settings from My Computer and modify environment variables that way, since that requires admin privileges and the setup needs to work without any sort of admin privileges.
A non-admin user can set environment variables that only affect their profile (as opposed to the entire machine). See this answer for how to access the Environment Variables dialog directly without having to go through the Advanced System Settings dialog.
From here, you can set PATH in the User variables list. Note that, when using this method, you don't need to include the existing path (i.e. %PATH%) as you did above. The value you enter in the User variables list will be appended to the current system value for PATH.
Also, don't forget that after setting an environment variable using this method, existing cmd instances will not see the new value for PATH -- you must open a new cmd instance.
Perhaps the original set command is necessary, but it is definitely not the whole story. I found out after a lot of digging around that you need to make sure that Docker Toolbox is also in your PATH variable. The following needs to be added:
C:\Program Files\Docker Toolbox
I don't know why the Docker documentation neglects to mention this. Maybe it's assumed that it will automatically make it there after installation. In my case, that is not what happened and I had to manually ensure that Docker Toolbox was in PATH.
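Within the setup script itself, the same directories can also be added for the current process only, which is enough for the script's own subprocess calls to find docker-machine without persisting anything. The paths below are the ones given in this thread and may differ on your machine:

```python
import os

# Extend PATH for this process only; subprocesses started afterwards
# (e.g. `docker-machine` via subprocess) inherit the extended value.
extra_dirs = [
    r'c:\Program Files (x86)\Git\bin',
    r'c:\Program Files\Docker Toolbox',
]
os.environ['PATH'] = os.pathsep.join(
    [os.environ.get('PATH', '')] + extra_dirs)
```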
Thanks to Rusty Shackleford for helping me find the environment variables without using admin privileges.