I'm installing h5py according to the tutorial at http://docs.h5py.org/en/latest/build.html
The installation is successful. However, the test fails:
python setup.py test
I got this:
running test
running build_py
running build_ext
Summary of the h5py configuration
Path to HDF5: '/opt/cray/hdf5-parallel/1.8.13/cray/83/'
HDF5 Version: '1.8.13'
MPI Enabled: True
Rebuild Required: False
Executing cythonize()
Traceback (most recent call last):
File "setup.py", line 140, in <module>
cmdclass = CMDCLASS,
File "/python/2.7.9/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/python/2.7.9/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/python/2.7.9/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "setup.py", line 68, in run
import h5py
File "/h5py-2.5.0/build/lib.linux-x86_64-2.7/h5py/__init__.py", line 13, in <module>
from . import _errors
ImportError: /opt/cray/lib64/libmpichf90_cray.so.3: undefined symbol: iso_c_binding_
It looks like Cython cannot find the shared library. How can I point it at that library? Thanks.
(Edited for parallel build)
I got this to work on a Cray XC30 (ARCHER: http://www.archer.ac.uk) using the following:
module swap PrgEnv-cray PrgEnv-gnu
module load cray-hdf5-parallel
export CRAYPE_LINK_TYPE=dynamic
export CC=cc
ARCHER has specific modules for the Python environment on the compute nodes that link to performant versions of numpy etc. (see: http://www.archer.ac.uk/documentation/user-guide/python.php), so I also loaded these (this may not apply on your Cray system; in ARCHER's case mpi4py is already included in the python-compute install):
module load python-compute
module load pc-numpy
Finally, I added the custom install location I will use for h5py to PYTHONPATH
export PYTHONPATH=/path/to/h5py/install/lib/python2.7/site-packages:$PYTHONPATH
Now I can build:
python setup.py configure --mpi
python setup.py install --prefix=/path/to/h5py/install
...lots of output...
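Once the install finishes, a quick sanity check (assuming the install location above is on PYTHONPATH) is to import h5py and confirm the parallel build was picked up:
# check_h5py.py - confirm that the parallel h5py build is importable
import h5py

print(h5py.version.info)                         # h5py, HDF5 and numpy versions
print("MPI enabled:", h5py.get_config().mpi)     # should be True for a parallel build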
Now, running the tests on the frontend node fails, but with the error message I would expect to see on a Cray XC if you try to launch MPI code on a login/service node (failed to initialize communication channel; the login/service nodes are not connected to the high-performance network, so they cannot run MPI code). This suggests to me that the test would probably work if it were run on the compute nodes.
> python setup.py test
running test
running build_py
running build_ext
Autodetected HDF5 1.8.13
********************************************************************************
Summary of the h5py configuration
Path to HDF5: '/opt/cray/hdf5-parallel/1.8.13/GNU/49'
HDF5 Version: '1.8.13'
MPI Enabled: True
Rebuild Required: False
********************************************************************************
Executing cythonize()
[Thu Oct 22 19:53:01 2015] [unknown] Fatal error in PMPI_Init_thread: Other MPI error, error stack:
MPIR_Init_thread(547):
MPID_Init(203).......: channel initialization failed
MPID_Init(579).......: PMI2 init failed: 1
Aborted
To test properly you would have to submit a job that launched a parallel Python script on the compute nodes using aprun. I do not think the built-in test framework will work easily, as it probably expects the MPI launcher to be called mpiexec (as on a standard cluster), so you may need to write your own tests. The other option would be to coerce setup.py into using aprun instead somehow.
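For example, a minimal parallel smoke test might look like the sketch below (assuming mpi4py is available, as it is in ARCHER's python-compute install); it could be launched from a batch job with something like aprun -n 4 python test_h5py_mpi.py:
# test_h5py_mpi.py - minimal parallel HDF5 smoke test (illustrative sketch)
from mpi4py import MPI
import h5py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# each rank writes its own entry into a shared dataset via the MPI-IO driver
with h5py.File('parallel_test.h5', 'w', driver='mpio', comm=comm) as f:
    dset = f.create_dataset('ranks', (comm.Get_size(),), dtype='i')
    dset[rank] = rank

comm.Barrier()
if rank == 0:
    # re-open serially on one rank to check the contents
    with h5py.File('parallel_test.h5', 'r') as f:
        print('dataset contents:', f['ranks'][:])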
I am actively developing a Python module that I would like to deploy to SQL Server 2017 installed locally, so I deploy the module in c:\Program Files\Microsoft SQL Server\<Instance Name>\PYTHON_SERVICES\Lib\site-packages using setuptools like so:
"c:\Program Files\Microsoft SQL Server\<Instance_Name>\PYTHON_SERVICES\python" setup.py develop
This produces an .egg-info directory in my project root, and a .egg-link file in the site-packages directory mentioned above. The .egg-link file correctly points to the .egg-info directory in my project root, so it appears setuptools is working correctly.
Here's my setup.py for reference:
from setuptools import setup
setup(
setup_requires=['pbr'],
pbr=True,
)
And here's the corresponding setup.cfg file:
[metadata]
name = <module_name>
description = <Module Description>
description-file = README.md
description-content-type = text/markdown
[files]
package_root = py/src
Since I am just trying to make the plumbing work, I have a single python script called uploader.py in <project_root>/py/src:
#uploader.py
class Upload:
pass
With this deployment in place, I am hoping to simply import the module I just published through .egg-link into a sp_execute_external_script call like so:
execute sp_execute_external_script @language = N'Python', @script = N'from <module_name>.uploader import Upload';
However, executing this stored procedure from SSMS produces the following error message:
Msg 39004, Level 16, State 20, Line 10
A 'Python' script error occurred during execution of 'sp_execute_external_script' with HRESULT 0x80004004.
Msg 39019, Level 16, State 2, Line 10
An external script error occurred:
Error in execution. Check the output for more information.
Traceback (most recent call last):
File "<string>", line 5, in <module>
File "C:\SQL-MSSQLSERVER-ExtensibilityData-PY\MSSQLSERVER01\C08BB9A7-66B5-4B5E-AAFC-B0248EE64199\sqlindb.py", line 27, in transform
from <module_name>.uploader import Upload
ImportError: No module named '<module_name>'
SqlSatelliteCall error: Error in execution. Check the output for more information.
STDOUT message(s) from external script:
SqlSatelliteCall function failed. Please see the console output for more information.
Traceback (most recent call last):
File "C:\Program Files\Microsoft SQL Server\<Instance_Name>\PYTHON_SERVICES\lib\site-packages\revoscalepy\computecontext\RxInSqlServer.py", line 587, in rx_sql_satellite_call
rx_native_call("SqlSatelliteCall", params)
File "C:\Program Files\Microsoft SQL Server\<Instance_Name>\PYTHON_SERVICES\lib\site-packages\revoscalepy\RxSerializable.py", line 358, in rx_native_call
ret = px_call(functionname, params)
RuntimeError: revoscalepy function failed.
I have obviously redacted module_name and Instance_Name from the error message.
I tried using the install command instead of develop just to make sure the .egg-link file is not the problem. install places the .egg-info in site-packages, but I get the same error.
I also tried removing pbr from the mix, but got the same error.
Lastly, I tried adding my <project_root> to sys.path as suggested by How can I use an external python module with SQL 2017 sp_execute_external_script?, but that didn't help either.
So at this point, I don't have a clue what I might be doing wrong.
The Python version is 3.5.2, and I don't think an __init__.py is needed in the project for it to qualify as a module. Inserting a blank __init__.py in py/src doesn't help either.
My pip version is 19.3.1, setuptools is 44.0.0, and pbr is 5.4.4; I have confirmed all modules are installed in the site-packages directory mentioned above.
Based on my extensive experimentation, it appears that sp_execute_external_script doesn't follow symlinks (i.e. through the .egg-link file). Therefore, development-mode installations will not work, whether you use setuptools, pip, pbr or anything else.
I even tried symlinking the <package_name> folder as an OS symlink. Since I am on Windows, I used the mklink /D command on the Command Prompt to symlink /py/src/<package_name> inside site-packages. While the command goes through correctly, and I can see the symlinked folder in File Explorer, sp_execute_external_script fails to detect the package, which tells me that there is probably something in the sp_execute_external_script code that avoids traversing symbolic links.
I wonder if there is a way to make it traverse symbolic links.
The only workable solution is to develop a package's code under its own directory, so in my case /py/src/<package_name>. Then, before running exec sp_execute_external_script @language=N'python', @script=N'...', copy the <package_name> folder to the site-packages directory.
This is, sort of, equivalent to setup.py install, but bypasses the creation of intermediate files and directories. So I am going to stick with this simple--though odious--approach.
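As a concrete illustration of that copy step, here is a minimal sketch (the source and destination paths are placeholders for your own project root and instance):
# copy_package.py - copy the package into SQL Server's site-packages (placeholder paths)
import os
import shutil

SRC = os.path.join(r"C:\path\to\project_root\py\src", "<package_name>")
DST = os.path.join(r"C:\Program Files\Microsoft SQL Server\<Instance_Name>\PYTHON_SERVICES",
                   "Lib", "site-packages", "<package_name>")

# remove any stale copy, then copy the current source tree over
if os.path.isdir(DST):
    shutil.rmtree(DST)
shutil.copytree(SRC, DST)
print("copied", SRC, "->", DST)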
I am hoping somebody more knowledgeable would offer a better way to solve this problem.
I am using the command pip install -t lib/ ortools, so the ortools library is installed into the lib/ folder. But when I deploy my Flask project containing that library to Google App Engine, I get the following error:
(/base/alloc/tmpfs/dynamic_runtimes/python27g/931d17f05408b3ef/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py:263)
Traceback (most recent call last):
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/931d17f05408b3ef/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/931d17f05408b3ef/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/931d17f05408b3ef/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "/base/data/home/apps/b~cidy-1539116366694/20190316t002011.416796594015344313/main.py", line 5, in <module>
from ortools.constraint_solver import pywrapcp
File "/base/data/home/apps/b~cidy-1539116366694/20190316t002011.416796594015344313/lib/ortools/constraint_solver/pywrapcp.py", line 17, in <module>
_pywrapcp = swig_import_helper()
File "/base/data/home/apps/b~cidy-1539116366694/20190316t002011.416796594015344313/lib/ortools/constraint_solver/pywrapcp.py", line 16, in swig_import_helper
return importlib.import_module('_pywrapcp')
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/931d17f05408b3ef/python27/python27_dist/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named _pywrapcp
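For context, a lib/ folder populated with pip install -t lib/ is normally added to the import path on the 1st-generation runtime via an appengine_config.py along these lines (a minimal sketch of the standard vendoring setup, assumed here rather than shown in the question):
# appengine_config.py - standard vendoring setup for the Python 2.7 runtime
from google.appengine.ext import vendor

# add any libraries installed with `pip install -t lib/` to sys.path
vendor.add('lib')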
The 1st generation (Python 2.7) standard environment sandbox has very strict requirements. Particularly relevant in this context is the Pure Python one:
All code for the Python runtime environment must be pure Python, and
not include any C extensions or other code that must be compiled.
OR-Tools fails to meet this requirement since it requires (platform-specific) compilation. From their installation page:
Note: you can build OR-Tools suite from source for any supported platform only from that platform. OR-Tools Makefile does not support
cross-compiling for any supported platform.
You might still be able to use them on GAE:
in the 2nd generation standard environment (Python 3.7, more relaxed restrictions) - but I'm not certain if pip-driven package builds are supported and if all the tools required for it are provided, YMMV
in the flexible environment, most likely using a custom-built runtime which allows you to add even non-python dependencies - the extra libraries and tools you might need for building ortools.
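For the 2nd-generation route the import would also be direct rather than vendored; a rough sketch, assuming a Flask main.py with ortools listed in requirements.txt instead of copied into lib/:
# main.py - sketch of the same import on the Python 3.7 runtime,
# where ortools is installed from requirements.txt rather than a vendored lib/ folder
from flask import Flask
from ortools.constraint_solver import pywrapcp

app = Flask(__name__)

@app.route('/')
def index():
    solver = pywrapcp.Solver('demo')  # minimal check that the native extension loads
    return 'OR-Tools loaded: %s' % solver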
Do you run pip on macOS or Linux? If you build on macOS, please see Using Homebrew Python on macOS?
According to the official OR-Tools Python page, ortools depends on the platform.
When you use a platform-dependent Python library, it's better to run pip install -t lib/ ortools in a Linux environment.
I have added functionality to tensorflow/tensorflow/python/ops/image_ops_impl.py and corresponding unit tests in tensorflow/tensorflow/python/ops/image_ops_test.py.
I originally forked tensorflow from the master branch, made these changes on my local machine, rebased and committed.
Then I created and activated a virtualenv.
When running bazel test //tensorflow/python..., as recommended in the contribution guide, I am receiving:
ERROR: /Users/isaacsultan/Code/tensorflow/third_party/python_runtime/BUILD:5:1: no such package '@local_config_python//': Traceback (most recent call last):
File "/Users/isaacsultan/Code/tensorflow/third_party/py/python_configure.bzl", line 308
_create_local_python_repository(repository_ctx)
File "/Users/isaacsultan/Code/tensorflow/third_party/py/python_configure.bzl", line 270, in _create_local_python_repository
_check_python_lib(repository_ctx, python_lib)
File "/Users/isaacsultan/Code/tensorflow/third_party/py/python_configure.bzl", line 213, in _check_python_lib
_fail(("Invalid python library path: %...))
File "/Users/isaacsultan/Code/tensorflow/third_party/py/python_configure.bzl", line 28, in _fail
fail(("%sPython Configuration Error:%...)))
Python Configuration Error: Invalid python library path: /usr/local/lib/python2.7/dist-packages
and referenced by '//third_party/python_runtime:headers'
ERROR: Analysis of target '//tensorflow/python:control_flow_util' failed; build aborted: Analysis failed
INFO: Elapsed time: 4.603s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (8 packages loaded)
FAILED: Build did NOT complete successfully (8 packages loaded)
currently loading: tensorflow/core ... (2 packages)
What could the source of my issue be, please?
Since I am only changing the Python functionality, there is no need to rebuild.
EDIT: After re-running ./configure:
(tensorflow) Isaacs-MacBook:tensorflow isaacsultan$ bazel clean --expunge
INFO: Starting clean.
(tensorflow) Isaacs-MacBook:tensorflow isaacsultan$ bazel test //tensorflow/python/...
Starting local Bazel server and connecting to it...
........................
ERROR: /private/var/tmp/_bazel_isaacsultan/0e2667ab20883652d759a6a805575b2d/external/local_config_cc/BUILD:50:5: in apple_cc_toolchain rule @local_config_cc//:cc-compiler-darwin_x86_64: Xcode version must be specified to use an Apple CROSSTOOL. If your Xcode version has changed recently, try: "bazel clean --expunge" to re-run Xcode configuration
ERROR: Analysis of target '//tensorflow/python/eager:core' failed; build aborted: Analysis of target '@local_config_cc//:cc-compiler-darwin_x86_64' failed; build aborted
INFO: Elapsed time: 15.184s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (93 packages loaded)
FAILED: Build did NOT complete successfully (93 packages loaded)
currently loading: tensorflow/core ... (2 packages)
EDIT 2:
After running bazel clean --expunge then ./configure:
Isaacs-MacBook:Tensorflow isaacsultan$ bazel test //tensorflow/python/...
Starting local Bazel server and connecting to it...
...................
ERROR: /private/var/tmp/_bazel_isaacsultan/0e2667ab20883652d759a6a805575b2d/external/local_config_cc/BUILD:50:5: in apple_cc_toolchain rule @local_config_cc//:cc-compiler-darwin_x86_64: Xcode version must be specified to use an Apple CROSSTOOL. If your Xcode version has changed recently, try: "bazel clean --expunge" to re-run Xcode configuration
ERROR: Analysis of target '//tensorflow/python:pywrap_tensorflow_import_lib_file' failed; build aborted: Analysis of target '@local_config_cc//:cc-compiler-darwin_x86_64' failed; build aborted
INFO: Elapsed time: 14.969s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (88 packages loaded)
FAILED: Build did NOT complete successfully (88 packages loaded)
currently loading: tensorflow/core ... (5 packages)
EDIT 3:
After following these steps:
Xcode version must be specified to use an Apple CROSSTOOL
4 warnings generated.
ERROR: /Users/isaacsultan/Code/tensorflow/tensorflow/BUILD:576:1: Executing genrule //tensorflow:tensorflow_python_api_gen failed (Exit 1)
/anaconda/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "/private/var/tmp/_bazel_isaacsultan/0e2667ab20883652d759a6a805575b2d/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/tools/api/generator/create_python_api.py", line 27, in <module>
from tensorflow.python.tools.api.generator import doc_srcs
File "/private/var/tmp/_bazel_isaacsultan/0e2667ab20883652d759a6a805575b2d/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/__init__.py", line 81, in <module>
from tensorflow.python import keras
File "/private/var/tmp/_bazel_isaacsultan/0e2667ab20883652d759a6a805575b2d/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/keras/__init__.py", line 25, in <module>
from tensorflow.python.keras import applications
File "/private/var/tmp/_bazel_isaacsultan/0e2667ab20883652d759a6a805575b2d/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/keras/applications/__init__.py", line 21, in <module>
import keras_applications
ImportError: No module named keras_applications
INFO: Elapsed time: 57510.356s, Critical Path: 492.10s
INFO: 6867 processes: 6867 local.
FAILED: Build did NOT complete successfully
Is it possible you forgot to run ./configure before building?
I am trying to build a Python script into a stand-alone application. I am using GUI2Exe. My script uses the selenium package, which I have installed.
The project compiles fine and runs directly from the Python command line, but the stand-alone build fails because it is referring to the wrong folder:
ERROR: test_file_data_extract (__main__.FileDataExtract)
----------------------------------------------------------------------
Traceback (most recent call last):
File "File_data_extract.py", line 18, in setUp
File "selenium\webdriver\firefox\firefox_profile.pyc", line 63, in __init__
IOError: [Errno 2] No such file or directory: 'C:\\users\\username\\PycharmProjects\\Python_27_32bit\\file_data_extract\\dist\\File_data_extract.exe\\selenium\\webdriver\\firefox\\webdriver_prefs.json'
It is looking for the selenium package, which is located at:
C:\Users\username\Anaconda2_Py27_32bit\Lib\site-packages\selenium-2.48.0-py2.7.egg\selenium\webdriver\firefox
where C:\Users\username\Anaconda2_Py27_32bit is where I installed the 32-bit version of Anaconda Python 2.7. By default it looks in the \dist\filename.exe folder instead.
I was able to build it using bbfreeze. It works great.
First I had to install bbfreeze via pip (one time only):
pip install bbfreeze
Create a build_package.py file as:
from bbfreeze import Freezer
f = Freezer("project_name", includes=("selenium","SendKeys",)) #list problem packages here to manually include
f.addScript("project_name_script.py")
f() # starts the freezing process
Build project:
python build_package.py bdist_bbfreeze
In the project_name folder, where project_name_script.py sits, you will find project_name_script.exe along with all the included packages, including selenium and SendKeys. When you distribute the package you need to distribute the entire project_name folder because it contains all the dependent library DLLs (Python .pyd files).
For more details, refer to the official bbfreeze page here:
https://pypi.python.org/pypi/bbfreeze/#downloads
I have a set of Python scripts that use PyXML. The scripts used to work fine but now I get an error.
How can I best repair this?
$ script.py
Traceback (most recent call last):
File "/cygdrive/c/mypath/script.py", line 16, in <module>
from xml import xpath
File "/cygdrive/c/data/code/xml/PyXML-0.8.4/xml/xpath/__init__.py", line 105, in <module>
import Context
File "/cygdrive/c/data/code/xml/PyXML-0.8.4/xml/xpath/Context.py", line 15, in <module>
import CoreFunctions
File "/cygdrive/c/data/code/xml/PyXML-0.8.4/xml/xpath/CoreFunctions.py", line 20, in <module>
from xml.xpath import Util, Conversions
File "/cygdrive/c/data/code/xml/PyXML-0.8.4/xml/xpath/Conversions.py", line 22, in <module>
from xml.utils import boolean
ImportError: cannot import name boolean
Then I tried to re-install PyXML. This fails.
$ easy_install PyXML
Searching for PyXML
Reading http://pypi.python.org/simple/PyXML/
Reading http://www.python.org/sigs/xml-sig/
Best match: PyXML 0.8.4
Downloading http://downloads.sourceforge.net/pyxml/PyXML-0.8.4.tar.gz?modtime=1101741917&big_mirror=0
Processing PyXML-0.8.4.tar.gz
Running PyXML-0.8.4/setup.py -q bdist_egg --dist-dir /tmp/easy_install-ZIX2LS/PyXML-0.8.4/egg-dist-tmp-5x8fiU
warning: no files found matching '*.html' under directory 'extensions/expat'
warning: no files found matching '*Makefile' under directory 'extensions/expat'
warning: no files found matching '*.dsp' under directory 'extensions/expat'
warning: no previously-included files matching '*/CVS/*' found anywhere in distribution
69 [main] python 3116 C:\cygwin\bin\python.exe: *** fatal error - unable to remap C:\cygwin\bin\cygcrypto-0.9.8.dll to same address as parent: 0x600000 != 0x910000
Stack trace:
Frame Function Args
00285F18 6102749B (00285F18, 00000000, 00000000, 00000000)
00286208 6102749B (61177B80, 00008000, 00000000, 61179977)
00287238 61004AFB (611A136C, 6124060C, 00600000, 00910000)
End of stack trace
2 [main] python 6800 fork: child 3116 - died waiting for dll loading, errno 11
error: Setup script exited with error: Resource temporarily unavailable
I don't know if this will solve your problem specifically, but I had a similar problem, and googling it turned me here.
For whoever comes from Google: the problem was with the pip install, and the fix for that was to install it manually. Find the file PyXML-0.8.4.tar.gz on the web and then run:
pip install PyXML-0.8.4.tar.gz
Although I gave another solution, it looks like you are using an unstable Cygwin; I think the real problem is there and not in the PyXML package.
Try to reinstall Cygwin, OR:
Try installing PyXML with the Windows installer from here