I am creating a simple bash script to download and install a python Nagios plugin. On some older servers the script may need to install the subprocess module and as a result I need to make sure the correct python-devel files are installed.
What is an appropriate, cross-platform method of checking for these files? I would like to stay away from rpm and apt.
If you can tell me how to do the check from within Python, that would work too. Thanks!
Update:
This is the best I have come up with. Anyone know a better or more conclusive method?
if [ ! -e $(python -c 'from distutils.sysconfig import get_makefile_filename as m; print m()') ]; then echo "Sorry"; fi
That is pretty much how I would go about doing it. It seems reasonably simple.
However, if I need to be really sure that python-devel files are installed for the current version of Python, I would look for the relevant Python.h file. Something along the lines of:
# first, make sure distutils.sysconfig is usable
if ! python -c "from distutils.sysconfig import get_config_vars" &> /dev/null; then
echo "ERROR: distutils.sysconfig not usable" >&2
exit 2
fi
# get include path for this python version
INCLUDE_PY=$(python -c "from distutils import sysconfig as s; print s.get_config_vars()['INCLUDEPY']")
if [ ! -f "${INCLUDE_PY}/Python.h" ]; then
echo "ERROR: python-devel not installed" >&2
exit 3
fi
Note: distutils.sysconfig may not be supported on all platforms, so this is not the most portable solution, but it is still better than trying to cater for variations in apt, rpm and the like.
If you really need to support all platforms, it might be worth exploring what is done in the AX_PYTHON_DEVEL m4 module. This module can be used in a configure.ac script to incorporate checks for python-devel during the ./configure stage of an autotools-based build.
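On Python 2.7+/3.2+, the standard-library sysconfig module exposes the same include path without going through distutils, which may be slightly more future-proof now that distutils is deprecated. A sketch of the same Python.h test using it:

```shell
# locate the C header directory for the running interpreter via sysconfig
INCLUDE_PY=$(python3 -c "import sysconfig; print(sysconfig.get_paths()['include'])")
if [ -f "${INCLUDE_PY}/Python.h" ]; then
    echo "python dev headers found in ${INCLUDE_PY}"
else
    echo "ERROR: python-devel not installed" >&2
fi
```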
IMHO your solution works well.
Otherwise, a more "elegant" solution would be a tiny script like:
testimport.py
#!/usr/bin/env python2
import sys
try:
    __import__(sys.argv[1])
    print "Successfully imported", sys.argv[1]
except ImportError:
    print "Error!"
    sys.exit(4)
sys.exit(0)
And call it with ./testimport.py distutils.sysconfig
You can adapt it to check for internal functions if needed...
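On Python 3 the same idea fits in a one-liner using importlib; the exit status alone tells you whether the import succeeded. A sketch (note that distutils itself is removed in Python 3.12, so substitute whatever module you actually care about; sysconfig is used here as a stand-in):

```shell
# exit status tells you whether the module imported cleanly
python3 -c 'import importlib, sys; importlib.import_module(sys.argv[1])' sysconfig \
    && echo "OK: sysconfig" || echo "missing: sysconfig"
```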
For those looking for a pure python solution that also works for python3:
python3 -c 'from distutils.sysconfig import get_makefile_filename as m; from os.path import isfile; import sys; sys.exit(not isfile(m()))'
Or as a file script check-py-dev.py:
from distutils.sysconfig import get_makefile_filename as m
from os.path import isfile
import sys
sys.exit(not isfile(m()))
To get a readable string in bash, just branch on the exit status:
python3 check-py-dev.py && echo "Ok" || echo "Error: Python header files NOT found"
Related
I would like to know where a script like "tensorflow.python.tools.inspect_checkpoint" is located when I use the handy command "python -m tensorflow.python.tools.inspect_checkpoint --file_name xyz". It is somewhere in my PYTHONPATH, but it is tedious to go through every path.
Is there a similar command to "which", aimed at quickly locating python scripts that can be reached from PYTHONPATH? Thanks.
You can access the __file__ attribute of the module. Here is an example:
$ python -c "import tensorflow.python.tools.inspect_checkpoint as m; print(m.__file__)"
/srv/conda/envs/notebook/lib/python3.6/site-packages/tensorflow/python/tools/inspect_checkpoint.py
You can make a shell function that takes a module name as an argument and prints the __file__ attribute.
function pywhich() {
python -c "import $1 as m; print(m.__file__)"
}
$ pywhich numpy
/home/jakub/miniconda3/lib/python3.8/site-packages/numpy/__init__.py
I want to print python's sys.path from the command line, but this one is not working:
python -m sys -c "print (sys.path)"
although print -c "import sys; print (sys.path)" would work. It seems that "-m" in the first one above does not load the module "sys". Any clarification on how to correctly import a module from python flags? Thanks.
There is no such flag. -m does something completely different from what you want. You can go through the Python command line docs to see the lack of such a flag if you want.
Just put the import in the command.
python -c "import sys; print (sys.path)"
I am trying to create a generate Makefile. Is there a way to test whether a python module exists and then perform different actions in the Makefile based on that?
I have tried something like this
all:
ifeq (,$(# python -c 'import somemodule'))
echo "DEF MODULE = 1" > settings.pxi
else
echo "DEF MODULE = 0" > settings.pxi
endif
python setup.py build_ext --build-lib . --build-temp build --pyrex-c-in-temp
however doing this does not produce any result. Also if the module does not exist, python throws an error- how to store this information rather than simply crashing?
Consider making use of Python's imp module. Specifically, imp.find_module should be exactly what you're looking for.
Wrap everything in bash -c “cmd” works for me.
python_mod := $(shell bash -c "echo -e 'try:\n import toml\n print(\"good\")\nexcept ImportError:\n print(\"bad\")' | python3 -")
ifeq "$(python_mod)" "error"
$(error "python module is not installed")
endif
I am amending an existing script in which I want to check the set of libraries used in an executable with the shared libraries called at the run time. I have the list of libraries which I need to compare with the shared libraries. For getting shared libraries I am trying to get LD_LIBRARY_PATH by giving below code but I had no luck. I tried checking the variable on command line by giving
echo $LD_LIBRARY_PATH
and it returned /opt/cray/csa/3.0.0-1_2.0501.47112.1.91.ari/lib64:/opt/cray/job/1.5.5-0.1_2.0501.48066.2.43.ari/lib64
the things that I have already tried are (this is a python script)
#! /usr/bin/python -E
import os
ld_lib_path = os.environ.get('LD_LIBRARY_PATH')
#ld_lib_path = os.environ["LD_LIBRARY_PATH"]
I think you are just missing a print in your script? This works for me from the command line:
python -c 'import os; temp=os.environ.get("LD_LIBRARY_PATH"); print temp'
script:
#! /usr/bin/python -E
import os
ld_lib_path = os.environ.get('LD_LIBRARY_PATH')
print ld_lib_path
If I have packages installed from easy_install, the eggs are prepended to sys.path before the items in the PYTHONPATH variable.
For example, if I have an egg package called foo installed as well as a package called foo in the current directory, and then do this:
PYTHONPATH="." python
>>> import foo
This will use the egg version of foo instead of the local directory. Inspecting sys.path shows that eggs are placed before items from PYTHONPATH. This seems broken. Is there any way to override this behavior?
Unfortunately this is done with a hard-coded template deep inside setuptools/command/easy_install.py. You could create a patched setuptools with an edited template, but I've found no clean way to extend easy_install from the outside.
Each time easy_install runs it will regenerate the file easy_install.pth. Here is a quick script which you can run after easy_install, to remove the header and footer from easy_install.pth. You could create a wrapper shell script to run this immediately after easy_install:
#!/usr/bin/env python
import sys
path = sys.argv[1]
lines = open(path, 'rb').readlines()
if lines and 'import sys' in lines[0]:
open(path, 'wb').write(''.join(lines[1:-1]) + '\n')
Example:
% easy_install gdata
% PYTHONPATH=xyz python -c 'import sys; print sys.path[:2]'
['', '/Users/pat/virt/lib/python2.6/site-packages/gdata-2.0.14-py2.6.egg']
% ./fix_path ~/virt/lib/python2.6/site-packages/easy_install.pth
% PYTHONPATH=xyz python -c 'import sys; print sys.path[:2]'
['', '/Users/pat/xyz']
For more clarification, here is the format of easy-install.pth:
import sys; sys.__plen = len(sys.path)
./gdata-2.0.14-py2.6.egg
import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)
The two import sys lines are the culprit causing the eggs to appear at the start of the path. My script just removes those sys.path-munging lines.
Consider using the -S command-line option to suppress *.pth processing:
python -c 'import sys; print("\n".join(sys.path))'
python -S -c 'import sys; print("\n".join(sys.path))'
https://docs.python.org/3/library/site.html#site.main
You can also use -S with site.main() to delay *.pth processing until runtime, say to capture the original sys.path for appending:
export PYTHONPATH=$(
PYTHONPATH='' \
python -c 'import sys; \
sys.path.extend(sys.argv[1:]); old=list(sys.path); \
import site; site.main(); \
[ old.append(p) for p in sys.path if p not in old ]; \
sys.path=old; \
print ":".join(sys.path)' \
$EXTRA_PATH $ANOTHER_PATH)
python -S ... # using explicit PYTHONPATH
Start from explicit empty PYTHONPATH
Append to sys.path explicitly with extend
Import site and call site.main()
Append new paths to old path and then install it in sys.path
Print with ":" for PYTHONPATH
python -S is desirable for later runs only using $PYTHONPATH
python -S may or may not be desirable while setting PYTHONPATH (depending on if you need sys.path expanded before extending)
I have done something like the following to prepend to the system path when running a top-level python executable file:
import sys
sys.path = ["<your python path>"] + sys.path
Often, the "<your python path>" for me involves use of the __file__ attribute to do relative look up for a path that includes the top-level module for my project. This is not recommended for use in producing, eggs, though I don't seem to mind the consequences. There may be another alternative to __file__.