I am amending an existing script in which I want to compare the set of libraries linked into an executable against the shared libraries loaded at run time. I already have the list of linked libraries; to get the shared-library search paths I am trying to read LD_LIBRARY_PATH with the code below, but I have had no luck. I tried checking the variable on the command line with
echo $LD_LIBRARY_PATH
and it returned /opt/cray/csa/3.0.0-1_2.0501.47112.1.91.ari/lib64:/opt/cray/job/1.5.5-0.1_2.0501.48066.2.43.ari/lib64
The things that I have already tried are (this is a Python script):
#! /usr/bin/python -E
import os
ld_lib_path = os.environ.get('LD_LIBRARY_PATH')
#ld_lib_path = os.environ["LD_LIBRARY_PATH"]
I think you are just missing a print in your script. This works for me from the command line:
python -c 'import os; temp=os.environ.get("LD_LIBRARY_PATH"); print temp'
script:
#! /usr/bin/python -E
import os
ld_lib_path = os.environ.get('LD_LIBRARY_PATH')
print ld_lib_path
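Once you have the value, splitting it on ':' gives the individual directories to compare against your library list. A minimal sketch, staying with the Python 2 style of the script above:
#! /usr/bin/python -E
import os

# default to '' so a missing variable doesn't come back as None
ld_lib_path = os.environ.get('LD_LIBRARY_PATH', '')
lib_dirs = ld_lib_path.split(':') if ld_lib_path else []
print lib_dirs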
I am going to try to say this right, but it's a bit outside my area of expertise.
I am using the xgboost library in a Windows environment with Python 2.7, which requires all kinds of nasty compiling and installation.
That done, the instructions I'm following tell me I need to modify the OS PATH variable in an IPython notebook before I actually import the library for use.
The instructions tell me to run the following:
import os
mingw_path = 'C:\\Program Files\\mingw-w64\\x86_64-5.3.0-posix-seh-rt_v4-rev0\\mingw64\\bin'
os.environ['PATH'] = mingw_path + ';' + os.environ['PATH']
then I can import
import xgboost as xgb
import numpy as np
....
This works. My question: does the OS PATH modification make a permanent change to the PATH variable, or do I need to modify it each time I want to use the library as above?
Thanks in advance.
EDIT
Here is a link to the instructions I'm following. The part I'm referencing is toward the end.
The os.environ mapping only affects the scope of the current Python/Jupyter process:
Here's evidence of this in my bash shell:
$ export A=1
$ echo $A
1
$ python -c "import os; print(os.environ['A']); os.environ['A'] = '2'; print(os.environ['A'])"
1
2
$ echo $A
1
The Python line above prints the environment variable A, then changes its value and prints it again.
So, as you see, an os.environ variable can be changed within the Python script, but once the script exits, the environment of the bash shell is unchanged.
Another way of doing this is to modify your User or System PATH variable. But this may break other things, because what you're doing may replace the default compiler with mingw, and complications may arise. I'm not a Windows expert, so I'm not sure about that part.
In a nutshell:
The os.environ manipulations are local only to the python process
It won't affect any other program
It has to be done every time you want to import xgboost
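If repeating the tweak every session gets tedious, one option (a sketch, not part of the original instructions; the module name xgb_env.py is hypothetical) is to put it in a tiny helper module and import that first:
# xgb_env.py -- hypothetical helper; import it before xgboost
import os

# path taken from the instructions above; adjust for your mingw-w64 install
mingw_path = 'C:\\Program Files\\mingw-w64\\x86_64-5.3.0-posix-seh-rt_v4-rev0\\mingw64\\bin'
if mingw_path not in os.environ['PATH']:
    os.environ['PATH'] = mingw_path + ';' + os.environ['PATH']
Then each notebook starts with import xgb_env followed by import xgboost as xgb.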
Often when I am using the IDLE shell I import the pickle module. Is it possible to make it automatically import pickle when I start it?
You can use the -c or -r argument:
From idle -h:
-c cmd run the command in a shell, or
-r file run script from file
For example:
idle -c 'import pickle, sys'
Or:
idle -r ~/my_startup.py
Where my_startup.py might contain:
import pickle, sys
You can either create a shell alias to always use this, or create a separate script; the procedure for this differs depending on your OS and shell.
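For example, with bash (a sketch; other shells differ) you could add an alias to ~/.bashrc:
alias idle-pickle='idle -r ~/my_startup.py'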
I have a Python script that needs dependencies from a virtualenv. I was wondering if there was some way I could add it to my PATH and have it auto-start its virtualenv, run, and then go back to the system's Python.
I've tried playing around with autoenv and .env, but that doesn't seem to do exactly what I'm looking for. I also thought about changing the shebang to point to the virtualenv path, but that seems fragile.
There are two ways to do this:
Put the path of the virtualenv's Python interpreter into the first line of the script, like this:
#!/your/virtual/env/path/bin/python
Add the virtual environment's directories to sys.path. Note that you need to import the sys library first, like this:
import sys
sys.path.append('/path/to/virtual/env/lib')
If you go with the second option you might need to add multiple paths to sys.path (site-packages etc.). The best way to get them is to run your virtualenv's Python interpreter and fish out the sys.path value, like this:
/your/virtual/env/bin/python
Python blah blah blah
> import sys
> print sys.path
[ 'blah', 'blah' , 'blah' ]
Copy the value of sys.path into the snippet above.
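For instance, the snippet might end up looking like this (the paths are placeholders; paste the values your virtualenv interpreter actually printed):
import sys
sys.path.extend([
    '/your/virtual/env/lib/python2.7',
    '/your/virtual/env/lib/python2.7/site-packages',
])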
I'm surprised that nobody has mentioned this yet, but this is why there is a file called activate_this.py in the virtualenv's bin directory. You can pass that to execfile() to alter the module search path for the currently running interpreter.
# doing execfile() on this file will alter the current interpreter's
# environment so you can import libraries in the virtualenv
activate_this_file = "/path/to/virtualenv/bin/activate_this.py"
execfile(activate_this_file, dict(__file__=activate_this_file))
You can put this snippet at the top of your script to force the script to always run in that virtualenv. Unlike modifying the shebang, you can use a relative path by doing:
import os  # needed to build the relative path
script_directory = os.path.dirname(os.path.abspath(__file__))
activate_this_file = os.path.join(script_directory, '../../relative/path/to/env/bin/activate_this.py')
execfile(activate_this_file, dict(__file__=activate_this_file))
From the virtualenv documentation:
If you directly run a script or the python interpreter from the
virtualenv’s bin/ directory (e.g. path/to/env/bin/pip or
/path/to/env/bin/python script.py) there’s no need for activation.
So if you just call the Python executable in your virtualenv, the virtualenv is effectively 'active'. You can therefore create a script like this:
#!/bin/bash
PATH_TO_MY_VENV=/opt/django/ev_scraper/venv/bin
$PATH_TO_MY_VENV/python -c 'import sys; print(sys.version_info)'
python -c 'import sys; print(sys.version_info)'
When I run this script on my system, the two calls to python print what you see below. (Python 3.2.3 is in my virtualenv, and 2.7.3 is my system Python.)
sys.version_info(major=3, minor=2, micro=3, releaselevel='final', serial=0)
sys.version_info(major=2, minor=7, micro=3, releaselevel='final', serial=0)
So any libraries you have installed in your virtualenv will be available when you call $PATH_TO_MY_VENV/python. Calls to your regular system python will of course be unaware of whatever is in the virtualenv.
I think the best answer here is to create a simple script and install it inside your virtualenv. Then you can either directly use the script, or create a symlink, or whatever.
Here's an example:
$ mkdir my-tool
$ cd my-tool
$ mkdir scripts
$ touch setup.py
$ touch scripts/crunchy-frog
$ chmod +x scripts/crunchy-frog
crunchy-frog
#!/usr/bin/env python
print("Constable Parrot ate one of those!")
setup.py
from setuptools import setup

setup(name="my-cool-tool",
      scripts=['scripts/crunchy-frog'],
      )
Now:
$ source /path/to/my/env/bin/activate
(env) $ python setup.py develop
(env) $ deactivate
$ cd ~
$ ln -s /path/to/my/env/bin/crunchy-frog crunchy-frog
$ ./crunchy-frog
Constable Parrot ate one of those!
When you install your script (via setup.py install or setup.py develop), it will replace the first line of the script with a shebang line for the env Python (which you can verify with $ head /path/to/my/env/bin/crunchy-frog). So whenever you run that particular script, it will use that specific Python env.
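The rewritten first line should look something like this (the exact path depends on where your env lives):
#!/path/to/my/env/bin/python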
Does this help?
import site
site.addsitedir('/path/to/virtualenv/lib/python2.7/site-packages/')
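Note that site.addsitedir does more than sys.path.append: it also processes any .pth files found in the added directory, which is how packages installed into site-packages wire up their own paths.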
I had this problem before and I made a simple script that looks for a virtualenv folder recursively; you just import it and call a function:
script_autoenv.py
# -*- coding:utf-8 -*-
import os, site

def locate_env(path, env_name):
    """Search each directory up the path for a virtualenv directory named env_name."""
    if os.path.isdir(os.path.join(path, env_name)):
        env_26_path = '%s/%s/lib/python2.6/site-packages/' % (path, env_name)
        env_27_path = '%s/%s/lib/python2.7/site-packages/' % (path, env_name)
        if os.path.isdir(env_26_path):
            site.addsitedir(env_26_path)
            print "Virtualenv 2.6 found"
        elif os.path.isdir(env_27_path):
            site.addsitedir(env_27_path)
            print "Virtualenv 2.7 found"
    else:
        new_path, old_dir = os.path.split(path)
        if old_dir:
            locate_env(new_path, env_name)
        else:
            print "No envs found"
You just need to specify the script directory and the env folder name, and the script does the rest:
test.py
# -*- coding:utf-8 -*-
import os
import script_autoenv
script_autoenv.locate_env(os.path.realpath(__file__), 'env')
import django
print django.VERSION
I hope it works for you.
The answer may be pipenv (https://pipenv.readthedocs.io/en/latest/).
It will allow you to do something like:
pipenv run python main.py
to run main.py in the python environment with the specified libraries.
You can give it a try here https://rootnroll.com/d/pipenv/
...Maybe it's not exactly what you are looking for, but it may be worth taking a look before reinventing it.
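A typical workflow looks roughly like this (the package name is just an example):
pip install pipenv          # install pipenv itself
pipenv install requests     # create the Pipfile and virtualenv, add the dependency
pipenv run python main.py   # run main.py inside that environment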
If I have packages installed from easy_install, the eggs are prepended to sys.path before the items in the PYTHONPATH variable.
For example, if I have an egg package called foo installed as well as a package called foo in the current directory, and then do this:
PYTHONPATH="." python
>>> import foo
This will use the egg version of foo instead of the local directory. Inspecting sys.path shows that eggs are placed before items from PYTHONPATH. This seems broken. Is there any way to override this behavior?
Unfortunately this is done with a hard-coded template deep inside setuptools/command/easy_install.py. You could create a patched setuptools with an edited template, but I've found no clean way to extend easy_install from the outside.
Each time easy_install runs, it regenerates the file easy-install.pth. Here is a quick script which you can run after easy_install to remove the header and footer from easy-install.pth. You could create a wrapper shell script to run this immediately after easy_install:
#!/usr/bin/env python
import sys
path = sys.argv[1]
lines = open(path, 'rb').readlines()
if lines and 'import sys' in lines[0]:
    open(path, 'wb').write(''.join(lines[1:-1]) + '\n')
Example:
% easy_install gdata
% PYTHONPATH=xyz python -c 'import sys; print sys.path[:2]'
['', '/Users/pat/virt/lib/python2.6/site-packages/gdata-2.0.14-py2.6.egg']
% ./fix_path ~/virt/lib/python2.6/site-packages/easy-install.pth
% PYTHONPATH=xyz python -c 'import sys; print sys.path[:2]'
['', '/Users/pat/xyz']
For more clarification, here is the format of easy-install.pth:
import sys; sys.__plen = len(sys.path)
./gdata-2.0.14-py2.6.egg
import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)
The two import sys lines are the culprit causing the eggs to appear at the start of the path. My script just removes those sys.path-munging lines.
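After the fix, easy-install.pth is just a plain list of paths, which site then appends in normal .pth order instead of prepending:
./gdata-2.0.14-py2.6.egg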
Consider using the -S command-line option to suppress *.pth processing:
python -c 'import sys; print("\n".join(sys.path))'
python -S -c 'import sys; print("\n".join(sys.path))'
https://docs.python.org/3/library/site.html#site.main
You can also use -S with site.main() to delay *.pth processing until runtime, say to capture the original sys.path for appending:
export PYTHONPATH=$(
  PYTHONPATH='' \
  python -c 'import sys; \
    sys.path.extend(sys.argv[1:]); old=list(sys.path); \
    import site; site.main(); \
    [ old.append(p) for p in sys.path if p not in old ]; \
    sys.path=old; \
    print ":".join(sys.path)' \
  $EXTRA_PATH $ANOTHER_PATH)
python -S ... # using explicit PYTHONPATH
Start from explicit empty PYTHONPATH
Append to sys.path explicitly with extend
Import site and call site.main()
Append new paths to old path and then install it in sys.path
Print with ":" for PYTHONPATH
python -S is desirable for later runs only using $PYTHONPATH
python -S may or may not be desirable while setting PYTHONPATH (depending on whether you need sys.path expanded before extending)
I have done something like the following to prepend to the system path when running a top-level python executable file:
import sys
sys.path = ["<your python path>"] + sys.path
Often, the "<your python path>" for me involves use of the __file__ attribute to do a relative lookup for a path that includes the top-level module of my project. This is not recommended when producing eggs, though I don't seem to mind the consequences. There may be another alternative to __file__.
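As a sketch, the __file__-based variant might look like this (the assumption that the project root is the directory containing the script is mine):
import os
import sys

# prepend the directory containing this file, so the project's top-level
# module is found before anything else on the path
project_root = os.path.dirname(os.path.abspath(__file__))
sys.path = [project_root] + sys.path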
I am creating a simple bash script to download and install a python Nagios plugin. On some older servers the script may need to install the subprocess module and as a result I need to make sure the correct python-devel files are installed.
What is an appropriate and cross-platform method of checking for these files? I would like to stay away from rpm and apt.
If you can tell me how to do the check from within Python, that would work. Thanks!
Update:
This is the best I have come up with. Does anyone know a better or more conclusive method?
if [ ! -e $(python -c 'from distutils.sysconfig import get_makefile_filename as m; print m()') ]; then echo "Sorry"; fi
That would be pretty much how I would go about doing it. It seems reasonably simple.
However, if I need to be really sure that python-devel files are installed for the current version of Python, I would look for the relevant Python.h file. Something along the lines of:
# first, make sure distutils.sysconfig is usable
if ! python -c "from distutils.sysconfig import get_config_vars" &> /dev/null; then
    echo "ERROR: distutils.sysconfig not usable" >&2
    exit 2
fi
# get include path for this python version
INCLUDE_PY=$(python -c "from distutils import sysconfig as s; print s.get_config_vars()['INCLUDEPY']")
if [ ! -f "${INCLUDE_PY}/Python.h" ]; then
    echo "ERROR: python-devel not installed" >&2
    exit 3
fi
Note: distutils.sysconfig may not be supported on all platforms, so it is not the most portable solution, but it is still better than trying to cater for variations in apt, rpm and the like.
If you really need to support all platforms, it might be worth exploring what is done in the AX_PYTHON_DEVEL m4 module. This module can be used in a configure.ac script to incorporate checks for python-devel during the ./configure stage of an autotools-based build.
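For reference, one common pattern in a configure.ac might look roughly like this (a sketch under the assumption that the autoconf-archive macros are available; untested here):
AC_INIT([my-nagios-plugin], [1.0])
AM_PATH_PYTHON
AX_PYTHON_DEVEL([>= '2.6'])
AC_OUTPUT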
IMHO your solution works well.
Otherwise, a more "elegant" solution would be to use a tiny script like:
testimport.py
#!/usr/bin/env python2
import sys

try:
    __import__(sys.argv[1])
    print "Successfully imported", sys.argv[1]
except ImportError:
    print "Error!"
    sys.exit(4)
sys.exit(0)
And call it with testimport.py distutils.sysconfig
You can adapt it to check for internal function if needed...
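In the installer script, the exit status can then drive the check, along these lines (a sketch):
if ! python2 testimport.py distutils.sysconfig; then
    echo "ERROR: distutils.sysconfig missing; install python-devel" >&2
    exit 4
fi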
For those looking for a pure python solution that also works for python3:
python3 -c 'from distutils.sysconfig import get_makefile_filename as m; from os.path import isfile; import sys; sys.exit(not isfile(m()))'
Or as a file script check-py-dev.py:
from distutils.sysconfig import get_makefile_filename as m
from os.path import isfile
import sys
sys.exit(not isfile(m()))
To get a string in bash, just use the exit status:
python3 check-py-dev.py && echo "Ok" || echo "Error: Python header files NOT found"