How to read environment variables from a specific file with Python 2.7 - python

In order to declare a number of environment variables and then call some Python scripts that use them, I created a myfile.sh file, which is the one to run via bash myfile.sh.
I have, however, plenty of scripts that should read these environment variables, and I cannot create a myfile.sh for each one!
So my idea is to create a single environment variable file and have each of my Python scripts access it.
So my question is: how can I access such a file with Python 2.7?
A closely related question is the following:
Where does os.environ read the environment variables from?
It should be noted that I cannot install additional libraries such as dotenv. So a solution, if possible, should be based on standard libraries.
Any help would be much appreciated!

I have to create a new environment variables file that should be accessed by my script.
That's not how it works. What you need is to set the environment variables from the file BEFORE calling your script:
myenv.sh
export DB_USER="foo"
export DB_PWD="bar"
export DB_NAME="mydb"
# etc
myscript.py
import os
DB_USER = os.getenv("DB_USER")
DB_PWD = os.getenv("DB_PWD")
DB_NAME = os.getenv("DB_NAME")
# etc
Then
$ source /path/to/myenv.sh
$ python /path/to/myscript.py
Most often you will want to wrap the above two lines in a shell script to make your sysadmin's life easier.
EDIT: if you want those env vars to be "automagically" set, you can always source them from .bashrc or some similar place - but then this is a sysadmin question, not a programming one.
Now the question is: do you really need to use environment variables here? You could just as well use a Python settings file - a plain Python module that just defines those variables, i.e.:
mysettings.py
DB_USER = "foo"
DB_PWD = "bar"
# etc
and make sure the path to the directory containing this script is in your $PYTHONPATH. Then your scripts only have to import it (like they import any other module), and you're done.
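For example, a consuming script just imports it like any other module:
from mysettings import DB_USER, DB_PWD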
Icing on the cake: you can even mix both solutions, by having your settings module look up environment variables and provide a default, i.e.:
mysettings.py
import os
DB_USER = os.getenv("DB_USER", "foo")
DB_PWD = os.getenv("DB_PWD", "bar")
# etc
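That said, if you really must read a myenv.sh-style file from Python itself with nothing but the standard library, a minimal loader is easy to sketch - note this is a naive parser that only handles simple KEY="value" lines, not full shell syntax:
import os

def load_env_file(path):
    # Naive .env loader: handles KEY=value and export KEY="value" lines,
    # skips blanks and comments. Not a real shell parser.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            if line.startswith('export '):
                line = line[len('export '):]
            key, _, value = line.partition('=')
            os.environ[key.strip()] = value.strip().strip('\'"')

load_env_file('/path/to/myenv.sh')
print os.getenv('DB_USER')  # foo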

Showing the environment from the command line:
linux: export
windows: set
Setting an environment variable:
linux: export foo=bar
windows: set foo=bar
Printing an environment variable:
linux: echo $foo
windows: echo %foo%
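The Python equivalents, identical on both platforms:
import os
print os.environ               # show the whole environment
os.environ['foo'] = 'bar'      # set a variable (for this process and its children)
print os.environ.get('foo')    # print one variable; None if unset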

Related

Python Modify OS Path Variable

I am going to try and say this right, but it's a bit outside my area of expertise.
I am using the xgboost library on Windows with Python 2.7, which requires all kinds of nasty compiling and installation.
That done, the instructions I'm following tell me I need to modify the OS PATH variable in an IPython notebook before I actually import the library for use.
The instructions tell me to run the following:
import os
mingw_path = 'C:\\Program Files\\mingw-w64\\x86_64-5.3.0-posix-seh-rt_v4-rev0\\mingw64\\bin'
os.environ['PATH'] = mingw_path + ';' + os.environ['PATH']
then I can import
import xgboost as xgb
import numpy as np
....
This works. My question: does the PATH modification make a permanent change to the PATH variable, or do I need to modify it like this each time I want to use xgboost?
Thanks in advance.
EDIT
Here is a link to the instructions I'm following. The part I'm referencing is toward the end.
os.environ is a mapping, not a function, and changes to it are visible only inside the python/jupyter process:
Here's evidence of this in my bash shell:
$ export A=1
$ echo $A
1
$ python -c "import os; print(os.environ['A']); os.environ['A'] = '2'; print(os.environ['A'])"
1
2
$ echo $A
1
The python line above prints the environment variable A, then changes its value and prints it again.
So, as you can see, an os.environ variable can be changed within the Python script, but once the script exits, the environment of the bash shell is unchanged.
Another way of doing this is to modify your User or System PATH variable. But this may break other things, because replacing the default compiler with mingw may cause complications. I'm not a Windows expert, so I'm not sure about that part.
In a nutshell:
The os.environ manipulations are local to the Python process
They won't affect any other program
The PATH tweak has to be redone every time you want to import xgboost
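If retyping that cell gets tedious, one option is to put the tweak in a tiny helper module and import it first in every notebook - a sketch, with fix_path.py being a made-up name and the mingw path taken from the question:
fix_path.py
import os

MINGW_BIN = 'C:\\Program Files\\mingw-w64\\x86_64-5.3.0-posix-seh-rt_v4-rev0\\mingw64\\bin'

# Prepend mingw only if it's missing, so re-running the import is harmless
if MINGW_BIN not in os.environ['PATH'].split(os.pathsep):
    os.environ['PATH'] = MINGW_BIN + os.pathsep + os.environ['PATH']
Then each notebook starts with import fix_path followed by import xgboost as xgb.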

Using Python change Environment Variables

I am having a problem with the environment variables in python. How do I get python to export variables to the parent shell?
I am using ubuntu, python 2.7.4
I get this:
$ python
>>> import os
>>> os.environ
{'HOME':'~'}
>>> os.environ['foo']='bar'
>>> os.environ
{'HOME':'~','foo':'bar'}
>>> quit()
$ echo $foo
# Place #1
$ python
>>> import os
>>> os.environ
{'HOME':'~'} # Place #2
>>>
My expected output is:
At Place #1: bar
At Place #2: {'HOME':'~','foo':'bar'}
Thanks
Environment variables set in a child process (e.g. python) do not affect the parent process.
It's a one-way street; if this could be done it would be very easy to exploit shells! The environment variables must be set in the parent process itself. This restriction is enforced by the operating system and is not specific to Python.
Note that sourcing a file in a shell (e.g. . script.sh) doesn't create a new process; but there is no way to "source" Python files.
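The usual workaround is to invert the flow: have the Python script print shell commands on stdout and let the parent shell eval them. A sketch, where setvars.py is a made-up name:
setvars.py
import pipes  # pipes.quote; shlex.quote in Python 3

# Print export statements instead of touching os.environ;
# the parent shell applies them itself via eval.
for key, value in [('foo', 'bar'), ('DB_NAME', 'mydb')]:
    print 'export %s=%s' % (key, pipes.quote(value))
Then:
$ eval "$(python setvars.py)"
$ echo $foo
bar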

virtualenv and VIRTUAL_ENV keyword

After having installed a new virtualenv, for example called ENV, if I type
. /path/to/ENV/bin/activate
python
import os
print os.environ['VIRTUAL_ENV']
Then I see the /path/to/ENV/
However, if I type
/path/to/ENV/bin/python
And then
import os
print os.environ['VIRTUAL_ENV']
I get a KeyError.
So what is the fundamental difference between these two methods?
Thanks,
Inside the script at bin/activate, there's a line that looks like this:
VIRTUAL_ENV="/Users/me/.envs/myenv"
export VIRTUAL_ENV
That line is what's responsible for setting your VIRTUAL_ENV environment variable. When you don't use activate, the variable never gets exported - so it's not present in os.environ.
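If you need the env root from a script that may be launched either way, one option is to fall back on sys.prefix, which bin/python points at the env even without activation. A sketch - virtualenv-specific, since it relies on the sys.real_prefix attribute that virtualenv's python sets:
import os
import sys

def virtualenv_root():
    # Set by bin/activate; absent when bin/python is run directly
    root = os.environ.get('VIRTUAL_ENV')
    if root:
        return root
    # virtualenv stashes the original prefix in sys.real_prefix, so its
    # presence means sys.prefix is the env root
    if hasattr(sys, 'real_prefix'):
        return sys.prefix
    return None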

How to find a module in a virtualenv without activating said virtualenv?

Suppose I have the following setup:
mkdir test && cd test
virtualenv .venv
source .venv/bin/activate
pip install django
mkdir mod1
touch mod1/__init__.py
echo "a = 1" > mod1/mod2.py
Which gives me:
test/.venv
test/mod1/__init__.py
test/mod1/mod2.py
How would I write this function:
def get_module(module_name, root_path, virtualenv_path=None)
In order for this to work:
project_root_path = "./test"
project_virtualenv_path = "./test/.venv"
get_module("mod1.mod2", project_root_path, project_virtualenv_path)
get_module("django.contrib.auth", project_root_path, project_virtualenv_path)
Assuming I don't have ./test/.venv activated.
The reason I want to do this, is because I'm working on a vim plugin which would implement gf functionality in a python file on an import statement. I'm trying to support virtualenvs as well.
EDIT:
Also, the script should not alter the current runtime, by adding or appending to sys.path. This should run inside vim, via the vim python bindings, and I don't think altering the vim python runtime would be a good idea.
get_module could either return a module object, or the path to the module, which is what I'm basically looking for.
You can add your virtualenv to the Python path like this:
import site
site.addsitedir('/home/user/.virtualenvs/myapp1/lib/python2.7/site-packages')
and then the import should work.
The only practical solution I could find here is to run the virtualenv's activate_this.py script, look for what I need, then undo its changes to sys.path.
import sys
import os
old_sys_path = list(sys.path)
virtualenv_path = "/path/to/venv"
activate_this_path = os.path.join(virtualenv_path, "bin", "activate_this.py")
execfile(activate_this_path, dict(__file__=activate_this_path))
# get my module here
# restore sys.path
sys.path = old_sys_path
If you have a better answer, please add it, and I'll change the accepted answer gladly.
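For what it's worth, a variant that never touches sys.path at all is to resolve the dotted name with the stdlib imp module. A sketch - it assumes a CPython 2.7 virtualenv layout, and unlike activate_this.py it won't see eggs or other .pth-installed packages:
import imp
import os

def get_module_path(module_name, root_path, virtualenv_path=None):
    # Walk each component of the dotted name with imp.find_module,
    # descending into the package directory found at each step.
    search = [root_path]
    if virtualenv_path:
        search.append(os.path.join(
            virtualenv_path, 'lib', 'python2.7', 'site-packages'))
    path = None
    for part in module_name.split('.'):
        f, path, _ = imp.find_module(part, search)  # raises ImportError if absent
        if f is not None:
            f.close()
        search = [path]
    return path

# get_module_path("django.contrib.auth", "./test", "./test/.venv")
# -> ".../site-packages/django/contrib/auth"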

Setting up Django on an internal server (os.environ() not working as expected?)

I'm trying to setup Django on an internal company server. (No external connection to the Internet.)
Looking over the server setup documentation, it appears that the "Running Django on a shared-hosting provider with Apache" method is the most likely to work in this situation.
Here's the server information:
Can't install mod_python
no root access
Server is SunOs 5.6
Python 2.5
Apache/2.0.46
I've installed Django (and flup) using the --prefix option (reading again I probably should've used --home, but at the moment it doesn't seem to matter)
I've added the .htaccess file and mysite.fcgi file to my root web directory as mentioned here.
When I run the mysite.fcgi script on the server I get the expected output (the correct site HTML). But it doesn't work when I try to access it from a browser.
It seems that it may be a problem with the PYTHONPATH setting since I'm using the prefix option.
I've noticed that if I run mysite.fcgi from the command line without setting the PYTHONPATH environment variable, it throws the following error:
prompt$ python2.5 mysite.fcgi
ERROR: No module named flup
Unable to load the flup package. In order to run django as a FastCGI
application, you will need to get flup from
http://www.saddi.com/software/flup/
If you've already installed flup, then make sure you have it in your
PYTHONPATH.
I've added sys.path.append(prefixpath) and os.environ['PYTHONPATH'] = prefixpath to mysite.fcgi, but if I set the environment variable to be empty on the command line and then run mysite.fcgi, I still get the above error.
Here are some command-line results:
>>> os.environ['PYTHONPATH'] = 'Null'
>>>
>>> os.system('echo $PYTHONPATH')
Null
>>> os.environ['PYTHONPATH'] = '/prefix/path'
>>>
>>> os.system('echo $PYTHONPATH')
/prefix/path
>>> exit()
prompt$ echo $PYTHONPATH
Null
It looks like Python is setting the variable OK, but the variable is only applicable inside of the script. Flup appears to be distributed as an .egg file, and my guess is that the egg implementation doesn't take into account variables added by os.environ['key'] = value (?) at least when installing via the --prefix option.
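That guess is close: PYTHONPATH is only consulted when an interpreter starts, so setting it through os.environ affects child processes but never the current one. A quick demonstration, with a made-up path:
import os
import sys

os.environ['PYTHONPATH'] = '/prefix/path'
print '/prefix/path' in sys.path   # False: sys.path was built at startup
# A freshly started child interpreter does pick it up:
os.system("python -c \"import sys; print '/prefix/path' in sys.path\"")   # True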
I'm not that familiar with .pth files, but it seems that the easy-install.pth file is the one that points to flup:
import sys; sys.__plen = len(sys.path)
./setuptools-0.6c6-py2.5.egg
./flup-1.0.1-py2.5.egg
import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)
It looks like it's doing something funky, anyway to edit this or add something to my code so it will find flup?
In your settings you have to point to the actual egg file, not the directory where the egg file is located. It should look something like:
sys.path.append('/path/to/flup/egg/flup-1.0.1-py2.5.egg')
Try using a utility called virtualenv. According to the official package page, "virtualenv is a tool to create isolated Python environments."
It'll take care of the PYTHONPATH stuff for you and make it easy to correctly install Django and flup.
Use site.addsitedir(), not os.environ['PYTHONPATH'] or sys.path.append().
site.addsitedir interprets the .pth files. Modifying os.environ or sys.path does not. Not in a FastCGI environment anyway.
#!/usr/bin/python2.6
import os
import site
# adds a directory to sys.path and processes its .pth files
site.addsitedir('/path/to/local/prefix/site-packages/')
# avoids a permissions error when writing to the system egg-cache
os.environ['PYTHON_EGG_CACHE'] = '/path/to/local/prefix/egg-cache'
To modify the search path (sys.path) from a Python script you should use:
sys.path.append("prefixpath")
Try this instead of modifying os.environ.
And I would recommend running Django with mod_python instead of using FastCGI...
