My OS is CentOS 7.
I recently installed Conda and Mamba. Everything was working, but that no longer seems to be the case. I cannot call Mamba, and I can barely make Conda do anything. Initially I installed Bakta, which works, but nothing else does, and I keep getting messages about package incompatibilities. I thought the environment I was in might be too up to date for the new program, and sure enough the program wants a few older versions (Python 3.6.5, for example). When I tried to install those I got a lot of flak (i.e., everything is incompatible). That could make sense, since I had upgraded conda and run update --all, but when I then tried to create a new environment, conda wouldn't even do that and failed to resolve anything. I tried going through pip3, and it looks like numpy cannot build a wheel; I tried addressing that and it keeps crashing out. Should I conclude at this point that conda is broken? Here is the first error I am getting:
INFO:
########### CLIB COMPILER OPTIMIZATION ###########
INFO: Platform :
Architecture: x64
Compiler : unix-like
CPU baseline :
Requested : 'min'
Enabled : SSE SSE2 SSE3
Flags : -msse -msse2 -msse3
Extra checks: none
CPU dispatch :
Requested : 'max -xop -fma4'
Enabled : SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2
Generated : none
INFO: CCompilerOpt.cache_flush[857] : write cache to path -> /tmp/pip-install-skejmo98/numpy_05946e1a2d574651a6019d72e428adca/build/temp.linux-x86_64-3.9/ccompiler_opt_cache_clib.py
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for numpy
Failed to build numpy
ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects
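A sketch of one recovery path, rather than repairing the broken base environment in place — the env name py365 is just an example, and --only-binary asks pip for a prebuilt wheel instead of compiling numpy from source:

```shell
# Leave the (possibly broken) base env alone and create a fresh one
# pinned to the version the tool asks for:
conda create -n py365 python=3.6.5
conda activate py365
# If pip must be used, prefer a prebuilt numpy wheel over a source build:
pip install --only-binary :all: numpy
```

If even `conda create` fails from base, running it from a fresh Miniconda install in a separate directory sidesteps the damaged package cache.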
When compiling my program using Caffe2 I get these warnings:
[E init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
[E init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
[E init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
Since I do want multi-threading support in Caffe2, I searched for what to do and found that Caffe2 has to be recompiled, setting some options when invoking cmake or in the CMakeLists.
Since I already had pytorch installed in a conda env, I first uninstalled Caffe2 with:
pip uninstall -y caffe2
Then I followed the instructions in the Caffe2 docs to build it from source.
I first installed the dependencies as indicated. Then I downloaded pytorch inside my conda env with:
git clone https://github.com/pytorch/pytorch.git && cd pytorch
git submodule update --init --recursive
Now, I think, is the moment to change the pytorch/caffe2/CMakeLists file just downloaded. I have read that in order to enable multi-threading support it is sufficient to enable the option USE_NATIVE_ARCH inside this CMakeLists, however I'm not able to find such an option where I'm looking. Maybe I'm doing something wrong. Any thoughts? Thanks.
Here are some details about my platform:
I'm on macOS Big Sur
My python version is 3.8.5
UPDATE:
To answer Nega, this is what I've got:
python3 -c 'import torch; print(torch.__config__.parallel_info())'
ATen/Parallel:
at::get_num_threads() : 1
at::get_num_interop_threads() : 4
OpenMP not found
Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
mkl_get_max_threads() : 4
Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
std::thread::hardware_concurrency() : 8
Environment variables:
OMP_NUM_THREADS : [not set]
MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP
UPDATE 2:
It turned out that the Clang that comes with Xcode doesn't support OpenMP, and the gcc I was using was just a symlink to Clang. In fact, running gcc --version gave:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/c++/4.2.1
Apple clang version 12.0.0 (clang-1200.0.32.29)
Target: x86_64-apple-darwin20.3.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
I installed gcc-10 from Homebrew and set an alias with alias gcc='gcc-10'. Now gcc --version gives:
gcc-10 (Homebrew GCC 10.2.0_4) 10.2.0
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
I've also tried a simple OpenMP Hello World using 8 threads and everything seems to be working. However, after re-running the command:
python3 -c 'import torch; print(torch.__config__.parallel_info())'
I get the same outcome. Any thoughts?
AVX, AVX2, and FMA are CPU instruction sets and are not related to multi-threading. If the pip package for pytorch/caffe2 used these instructions on a CPU that didn't support them, the software wouldn't work. PyTorch installed via pip does, however, come with multi-threading enabled. You can confirm this with torch.__config__.parallel_info():
❯ python3 -c 'import torch; print(torch.__config__.parallel_info())'
ATen/Parallel:
at::get_num_threads() : 6
at::get_num_interop_threads() : 6
OpenMP 201107 (a.k.a. OpenMP 3.1)
omp_get_max_threads() : 6
Intel(R) Math Kernel Library Version 2020.0.1 Product Build 20200208 for Intel(R) 64 architecture applications
mkl_get_max_threads() : 6
Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
std::thread::hardware_concurrency() : 12
Environment variables:
OMP_NUM_THREADS : [not set]
MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP
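One likely explanation for the unchanged output after your update, offered as a hedged guess: a shell alias is only expanded by your interactive shell, so the cmake/setup.py subprocesses that actually compile pytorch still pick up Apple's clang. Exporting the compilers (using the Homebrew names from your update) before rebuilding should make the difference:

```shell
# Make the Homebrew toolchain visible to the build itself, not just the shell;
# an alias like `alias gcc='gcc-10'` is never seen by child processes.
export CC=gcc-10
export CXX=g++-10
# then rebuild from the pytorch checkout, e.g.:
# python3 setup.py clean && python3 setup.py build
echo "CC=$CC CXX=$CXX"
```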
That being said, if you still want to build pytorch and caffe2 from source, the flag you're looking for, USE_NATIVE_ARCH, is in pytorch/CMakeLists.txt, one level up from caffe2. Edit that file and change USE_NATIVE_ARCH to ON, then continue building pytorch with python3 setup.py build. Note that this flag doesn't do what you think it does: it only allows building MKL-DNN with CPU-native optimization flags. It does not trickle down to caffe2 (except where caffe2 uses MKL-DNN, obviously).
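As a sketch of that build step — assuming pytorch's usual convention of reading CMake options from environment variables, in which case the file need not be edited by hand:

```shell
cd pytorch
# USE_NATIVE_ARCH=ON is picked up by setup.py and forwarded to CMake:
USE_NATIVE_ARCH=ON python3 setup.py build
```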
I need to install scipy 1.2.0 for python2.7 locally on a machine running RHEL/Fedora 6.5 where I do not have sudo permissions.
I have already installed python2.7, numpy, ATLAS, and OpenBLAS.
Now when I run python2.7 setup.py build I get this error:
/home/gspirito/Python-2.7.14/scipy-1.2.0/scipy/cluster/_vq.c:8344: undefined reference to `PyInt_FromLong'
build/temp.linux-x86_64-2.7/scipy/cluster/_vq.o: In function `__Pyx_InitCachedConstants':
/home/gspirito/Python-2.7.14/scipy-1.2.0/scipy/cluster/_vq.c:8134: undefined reference to `PyTuple_Pack'
build/temp.linux-x86_64-2.7/scipy/cluster/_vq.o: In function `__Pyx_modinit_type_import_code':
/home/gspirito/Python-2.7.14/scipy-1.2.0/scipy/cluster/_vq.c:8395: undefined reference to `PyImport_ImportModule'
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/libgfortranbegin.a(fmain.o): In function `main':
(.text+0x26): undefined reference to `MAIN__'
collect2: ld returned 1 exit status
error: Command "/usr/bin/gfortran -Wall -g -L/home/gspirito/src/zlib-1.2.8/lib -L/home/gspirito/packages/include/lzma -L/home/gspirito/src/postgresql-8.4.1/lib -L/home/gspirito/vargenius_bin/R-3.4.1/lib -L/home/gspirito/packages/lib build/temp.linux-x86_64-2.7/scipy/cluster/_vq.o -L/home/gspirito/Python-2.7.14/ATLAS/my_build_dir/lib -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -Lbuild/temp.linux-x86_64-2.7 -latlas -latlas -latlas -lgfortran -o build/lib.linux-x86_64-2.7/scipy/cluster/_vq.so -Wl,--version-script=build/temp.linux-x86_64-2.7/link-version-scipy.cluster._vq.map" failed with exit status 1
Does anyone know how to solve it?
Thanks in advance
The error message essentially says that the linker failed to link the compiled objects; exit status 1 means ld hit errors and aborted, so the build command failed. From the SciPy install page you can see the list of supported Python distributions:
For many users, especially on Windows, the easiest way to begin is to
download one of these Python distributions, which include all the key
packages:
Anaconda: A free distribution of Python with scientific packages.
Supports Linux, Windows and Mac.
Enthought Canopy: The free and commercial versions include the core
scientific packages. Supports Linux, Windows and Mac.
Python(x,y): A free distribution including scientific packages, based
around the Spyder IDE. Windows and Ubuntu; Py2 only.
WinPython: Another free distribution including scientific packages and
the Spyder IDE. Windows only, but more actively maintained and
supports the latest Python 3 versions.
Pyzo: A free distribution based on Anaconda and the IEP interactive
development environment. Supports Linux, Windows and Mac.
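For this specific case (no sudo, pinned versions), a minimal sketch assuming Miniconda or Anaconda has been installed under $HOME — the environment name scipy-py27 is just an example:

```shell
# Prebuilt scipy/numpy for Python 2.7; no compiler or root access needed:
conda create -n scipy-py27 python=2.7 numpy scipy=1.2.0
source activate scipy-py27
python -c "import scipy; print(scipy.__version__)"
```

This sidesteps the gfortran/ATLAS link step entirely, which is usually the path of least resistance on an old RHEL box.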
I am trying to compile TensorFlow (r1.3) on CentOS 7.
My environment: gcc (g++) 7.2.0, bazel 0.5.3, python3 (with all
necessary dependencies listed on the TensorFlow web site), swig 3.0.12,
openjdk 8. Everything is installed in the user's scope, without root access.
Whenever I try to build the Python package by invoking the following command
"bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package"
I am getting this error:
.....
2017-08-24 11:40:35.734659: W
tensorflow/core/framework/op_gen_lib.cc:372] Squeeze can't find input
squeeze_dims to rename ERROR:
/home/data/software/tensorflow/tensorflow/python/BUILD:2762:1:
Couldn't build file tensorflow/python/pywrap_tensorflow_internal.cc:
SWIGing tensorflow/python/tensorflow.i failed (Exit 1).
...
However, building the C++ lib (bazel build --config=opt //tensorflow:libtensorflow_cc.so) works without any issues.
Am I doing something wrong?
Update 25.08.2017:
ok, it seems that SWIG is built automatically from source when running bazel build. The shipped SWIG version is 3.0.8. But still, I have no clue how to solve this problem.
ok, problem solved by using bazel version 0.5.1. Newer versions produce the same error.
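A sketch of pinning bazel to 0.5.1 without root access; the installer URL follows bazel's usual release naming and should be double-checked against the releases page:

```shell
wget https://github.com/bazelbuild/bazel/releases/download/0.5.1/bazel-0.5.1-installer-linux-x86_64.sh
chmod +x bazel-0.5.1-installer-linux-x86_64.sh
./bazel-0.5.1-installer-linux-x86_64.sh --user   # installs into ~/bin, no root needed
bazel version   # should report 0.5.1
```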
I've been trying to install TensorFlow for a few weeks now, and I keep getting so many errors with the simple installation methods that I think it would be best to install TensorFlow from source. I'm following the instructions on the TensorFlow website exactly, and my ./configure is mostly default so I can check that it works before making modifications:
./configure
Please specify the location of python. [Default is /usr/bin/python]: /Library/Frameworks/Python.framework/Versions/3.6/bin/python3
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] n
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N] n
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]
No XLA support will be enabled for TensorFlow
Found possible Python library paths:
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
Please input the desired Python library path to use. Default is [/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages]
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] n
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] n
No CUDA support will be enabled for TensorFlow
INFO: Starting clean (this may take a while). Consider using --async if the clean takes more than several minutes.
Configuration finished
(This is not the first time I've edited the configuration)
After this, I execute the following bazel build command, straight from the TensorFlow.org instructions for installing from source:
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
(In the future, I'm going to add some additional flags to account for the fact that I've been getting CPU instruction errors about SSE, AVX, etc.)
When I execute that bazel command, I get an extremely long wait time and a list of errors that piles up:
r08ErCk:tensorflow kendrick$ bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
WARNING: /Users/kendrick/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': Use SavedModel Builder instead.
WARNING: /Users/kendrick/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': Use SavedModel instead.
INFO: Found 1 target...
INFO: From Compiling external/protobuf/src/google/protobuf/compiler/js/embed.cc [for host]:
external/protobuf/src/google/protobuf/compiler/js/embed.cc:37:12: warning: unused variable 'output_file' [-Wunused-const-variable]
const char output_file[] = "well_known_types_embed.cc";
^
1 warning generated.
INFO: From Compiling external/protobuf/python/google/protobuf/pyext/message_factory.cc:
external/protobuf/python/google/protobuf/pyext/message_factory.cc:78:28: warning: ISO C++11 does not allow conversion from string literal to 'char *' [-Wwritable-strings]
static char* kwlist[] = {"pool", 0};
^
external/protobuf/python/google/protobuf/pyext/message_factory.cc:222:6: warning: ISO C++11 does not allow conversion from string literal to 'char *' [-Wwritable-strings]
{"pool", (getter)GetPool, NULL, "DescriptorPool"},
^
external/protobuf/python/google/protobuf/pyext/message_factory.cc:222:37: warning: ISO C++11 does not allow conversion from string literal to 'char *' [-Wwritable-strings]
{"pool", (getter)GetPool, NULL, "DescriptorPool"},
^
3 warnings generated.
This is only a small portion of the similar-looking messages that piled up. Even after all of them, the command never returns and I just get a blinking cursor on an empty line.
Can someone please provide me with exact instructions on what I should enter into the terminal to avoid these errors? I've been following Stack Overflow advice for weeks but continue to get errors.
macOS Sierra (MacBook Air)
What should I enter into terminal? (specifically)
Everything that I've done up to this point has been almost exactly what is told to do on the Tensorflow.org website instructions.
I installed for the first time using http://queirozf.com/entries/installing-cuda-tk-and-tensorflow-on-a-clean-ubuntu-16-04-install and not only was it a very simple process, but working with tf is really easy: just source <name_of_virtual_environment>/bin/activate and then run python/python3 through that.
Bear in mind that the walkthrough in the link is for GPU tensorflow; using the CPU tensorflow download for your Mac with the same virtual-environment process should work just fine.
Since you do not have a GPU, do have SSE and AVX, and are on macOS Sierra, the instructions found on Google will NOT work with 1.3. I am befuddled as to why they do not provide an exact script for this. Regardless, here is the answer to your question: http://www.josephmiguel.com/building-tensorflow-1-3-from-source-on-mac-osx-sierra-macbook-pro-i7-with-sse-and-avx/
/*
do each of these steps independently
will take around 1hr to complete all the steps regardless of machine type
*/
one time install
install anaconda3 pkg # manually download this and install the package
conda update conda
conda create -n dl python=3.6 anaconda
source activate dl
cd /
brew install bazel
pip install six numpy wheel
pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/protobuf-3.1.0-cp35-none-macosx_10_11_x86_64.whl
sudo -i
cd /
rm -rf tensorflow # if rerunning the script
cd /
git clone https://github.com/tensorflow/tensorflow
Step 1
cd /tensorflow
git checkout r1.3 -f
cd /
chmod -R 777 tensorflow
cd /tensorflow
./configure # accept all default settings
Step 2
// https://stackoverflow.com/questions/41293077/how-to-compile-tensorflow-with-sse4-2-and-avx-instructions
bazel build --config=opt --copt=-mavx --copt=-mavx2 --copt=-mfma //tensorflow/tools/pip_package:build_pip_package
Step 3
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-1.0.1-cp36-cp36m-macosx_10_7_x86_64.whl
Step 4
cd ~
ipython
Step 5
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
Step 6
pip uninstall tensorflow # uninstall takes the package name, not the wheel path
I am following:
http://deeplearning.net/software/theano/install_windows.html#install-windows
to install theano. I just want to play with the code, I don't need to use a GPU to improve my speed.
I don't have an Nvidia card and when I try to install cuda, the installation fails. I watch as the installation tool deletes the files I need.
I am using Anaconda Python, so I commented out this line:
REM CALL %SCISOFT%\WinPython-64bit-2.7.9.4\scripts\env.bat
In the:
C:\SciSoft\env.bat
file. I gave up and tried to install theano with easy_install.
When I try to import Theano from Python, it fails with:
ton of stuff
Problem occurred during compilation with the command line below: C:\SciSoft\TDM-GCC-64\bin\g++.exe -shared -g
-march=bdver2 -mmmx -mno-3dnow -mss
more stuff
C:\Users\xxx\Anaconda\libs/python27.lib: error adding symbols: File in wrong format
collect2.exe: error: ld returned 1 exit status
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
 in ()
----> 1 import theano
C:\Users\xxx\Anaconda\lib\site-packages\theano-0.7.0-py2.7.egg\theano\__init__.pyc in ()
Even more stuff
Exception: Compilation failed (return status=1): C:\Users\xxxx\Anaconda\li . collect2.exe: error: ld returned 1 exit status
If you don't need to run on the GPU, then don't worry about installing Visual Studio or CUDA. I think you just need Anaconda, but maybe also TDM-GCC.
From a clean environment I install the latest version of Anaconda then run
conda install mingw libpython
I'd recommend installing Theano from GitHub (the bleeding-edge version), since the "stable" release is not updated often and there are usually many significant improvements (especially for performance) in the bleeding-edge version compared to the stable one.
There is no need to perform all the steps in the "Configuring the Environment" section; just make sure your C++ compiler is in the PATH.
If Theano fails to work after these minimal installation instructions, I'd recommend solving each problem on a case-by-case basis instead of trying to run the full installation instructions provided in the documentation (which may be out of date).
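A minimal sketch of a .theanorc (on Windows, %USERPROFILE%\.theanorc.txt) that forces CPU-only mode so CUDA is never involved; floatX = float32 is just a common default, not a requirement:

```ini
[global]
device = cpu
floatX = float32
```

With device = cpu set, the CUDA installer and any GPU detection can be skipped entirely.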