Inception v3 guide on TensorFlow broken for C++ and Python

I'm following the guide here on running the pretrained inception v3 https://www.tensorflow.org/versions/r0.11/tutorials/image_recognition/index.html
However, when I try the python version, I get:
python classify_image.py
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
Traceback (most recent call last):
File "classify_image.py", line 227, in <module>
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
TypeError: run() got an unexpected keyword argument 'argv'
OK, fine, never mind; let me try the C++ version.
I downloaded the model and ran the bazel command:
➜ tensorflow git:(master) ✗ bazel build tensorflow/examples/label_image/...
.......
ERROR: /storage/git/tensorflow/tensorflow/tensorflow.bzl:636:21: syntax error at '=': expected expression.
ERROR: /storage/git/tensorflow/tensorflow/tensorflow.bzl:711:1: nested functions are not allowed. Move the function to top-level.
ERROR: /storage/git/tensorflow/tensorflow/tensorflow.bzl:739:1: nested functions are not allowed. Move the function to top-level.
ERROR: /storage/git/tensorflow/tensorflow/tensorflow.bzl:773:1: nested functions are not allowed. Move the function to top-level.
ERROR: /storage/git/tensorflow/tensorflow/tensorflow.bzl:776:1: nested functions are not allowed. Move the function to top-level.
ERROR: com.google.devtools.build.lib.packages.BuildFileContainsErrorsException: error loading package '': Extension 'tensorflow/tensorflow.bzl' has errors.
INFO: Elapsed time: 0.600s
...Okay then. Neither seems to work, or perhaps I'm doing this wrong. Does anyone have any guidance? :)
Using TensorFlow 0.11 on Ubuntu 16, Anaconda distribution, Python 3.5.
Thanks!

If it helps anyone:
Solving the C++ problem: update Bazel to the version the current source tree requires (you likely installed TensorFlow a while ago and then git pulled the latest source, which needs a newer Bazel).
Solving the Python problem: remove the argv keyword argument from the tf.app.run() call.
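A minimal sketch of that change (assuming the argparse setup the current classify_image.py already uses; on TensorFlow 0.11, tf.app.run() only accepts the main argument):
if __name__ == '__main__':
    FLAGS, unparsed = parser.parse_known_args()
    # tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)  # needs TF >= 0.12
    tf.app.run(main=main)  # works on 0.11; the flags were already parsed above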

Related

Using google cloud ml gpu on python 3.7

I am trying to run an ML model on Google Cloud ML. I am using PyTorch and want to use the GPU. Using the standard Python 3.6 version installed on the Google Cloud VM, I get the error described below, and I tried solving it by upgrading to Python 3.7, but that version does not recognize the GPU that comes with the Google Cloud VM.
Whenever I run my code (which works when run locally) on the Google Cloud VM (with Python 3.6), I get the error
python: symbol lookup error: /home/julsoles/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so: undefined symbol: PySlice_Unpack
Trying to find a solution online, I figured out that this was an issue with the version of Python 3.6 and the only solution was to upgrade my version of Python.
I was able to upgrade my version of Python to 3.7 on the Google Cloud VM and can run code with this new version using the command python3.7 file.py.
Now, the issue is that whenever I run code using this version of Python, the VM does not recognize the GPU that comes with the system. I get the error
File "pred.py", line 75, in <module>
  predict(model_list, test_dataset)
File "pred.py", line 28, in predict
  x = Variable(torch.from_numpy(x).float()).cuda()
File "/opt/anaconda3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 161, in _lazy_init
  _check_driver()
File "/opt/anaconda3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 75, in _check_driver
  raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Right now, the only solution I have found is to run my code on the CPU only, but it is painfully slow. Is there any way to make Python 3.7 recognize the GPU so that I can run my code using it?
Thanks for your help!
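One quick way to check whether the Python 3.7 interpreter's PyTorch build was compiled with CUDA support (the assertion above is raised when it was not); this is only a diagnostic sketch:
import torch
print(torch.__version__)          # some CPU-only wheels carry a '+cpu' suffix
print(torch.version.cuda)         # None when the build has no CUDA support
print(torch.cuda.is_available())  # False if either the build or the driver is the problem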

Compiling binary with tensorflow library for cpu: Cannot find cuda library?

In development, I have been using the GPU-accelerated TensorFlow wheel:
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp35-cp35m-linux_x86_64.whl
I am attempting to deploy my trained model along with an application binary for my users. I compile with PyInstaller (3.3.dev0+f0df2d2bb) on Python 3.5.2 to package the application into that binary.
For deployment, I install the CPU version: https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl
However, upon successful compilation, I run my program and receive the infamous TensorFlow CUDA error:
tensorflow.python.framework.errors_impl.NotFoundError:
tensorflow/contrib/util/tensorflow/contrib/cudnn_rnn/python/ops/_cudnn_rnn_ops.so:
cannot open shared object file: No such file or directory
Why is it looking for CUDA when I only have the CPU version installed? (Let alone the fact that I'm still on my development machine with CUDA, so it should find it anyway. I can use tensorflow-gpu/CUDA fine in uncompiled scripts. But this is irrelevant because the deployment machines won't have CUDA.)
My first thought was that somehow I'm importing the wrong tensorflow, but I not only ran pip uninstall tensorflow-gpu, I also deleted the tensorflow-gpu package from /usr/local/lib/python3.5/dist-packages/.
Any ideas what could be happening? Maybe I need to start using a virtualenv...
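A quick sanity check for the "importing the wrong tensorflow" theory, run with the same interpreter that PyInstaller uses (only a sketch; tf.test.is_built_with_cuda is part of the 1.x API):
import tensorflow as tf
print(tf.__file__)                   # which installed package actually gets imported
print(tf.test.is_built_with_cuda())  # expected to be False for the CPU-only wheel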

PyDev reports "error == cudaSuccess (35 vs. 0) CUDA driver version is insufficient for CUDA runtime version", but the command line works fine

My configuration like this:
Ubuntu 16.04
Java 1.8
Python 2.7.12
Caffe 1.0
Cuda 8.0
Nvidia driver 375-66
PyDev 5.7.0.201704111357
I tried to run this in bash:
https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation/tree/master/testing/python
by
python -m Demo
It works fine, but when I try to run it from Eclipse/PyDev, I get this error:
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0606 09:34:43.905447 15924 common.cpp:114] Cannot create Cublas handle. Cublas won't be available.
0
E0606 09:34:43.905640 15924 common.cpp:121] Cannot create Curand generator. Curand won't be available.
F0606 09:34:43.905845 15924 common.cpp:152] Check failed: error == cudaSuccess (35 vs. 0) CUDA driver version is insufficient for CUDA runtime version
*** Check failure stack trace: ***
I think this problem comes from the fact that I boot the OS with the NVIDIA driver and then switch to the Intel GPU.
This is intentional: I want the embedded Intel graphics to handle OS-related work and leave the NVIDIA card for Caffe (a deep learning framework) jobs. The question is:
Why, for the same Python-wrapped Caffe job, does the command line work fine while PyDev gives these errors?
Usually this means that you have some environment variable in your command line that's not replicated in PyDev.
The usual fix is to launch Eclipse from the command line, so that it inherits the variables set there.
Thanks for the tip from Fabio Zadrozny:
Window -> Preferences -> PyDev -> Interpreters -> Python Interpreter -> click Environment -> New
Create an environment variable:
Name: LD_LIBRARY_PATH, Value: the same as in your system environment.
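To get the exact value to paste into that field, print it from the same shell in which python -m Demo works:
import os
print(os.environ.get('LD_LIBRARY_PATH', ''))  # copy this value into the PyDev variable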

TensorFlow on Windows 7 - failed to load the native TensorFlow runtime

I'm using Python 3.5 on Windows 7. I have installed tensorflow and tensorflow-gpu version 1.1 through pip, but when I try to run this command:
import tensorflow as tf
I'm getting this error:
Traceback (most recent call last):
File "C:\Users\Toshiba\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in swig_import_helper
return importlib.import_module(mname)
ImportError: No module named _pywrap_tensorflow_internal
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/install_sources#common_installation_problems
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
You may well be missing the cuDNN DLLs - I had the same issue, and installing them fixed it on Windows 7.
These are a prerequisite as per https://www.tensorflow.org/install/install_windows
Get the cuDNN DLLs from here:
https://developer.nvidia.com/rdp/cudnn-download
You specifically need the v6.0 library (and, in your case, the Windows 7 build).
Once unzipped, copy the files in the zip to the equivalent folders (bin/lib/include) of your main NVidia CUDA installation. For me this was D:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\
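As a sanity check after copying the files (assuming cuDNN 6.0, whose Windows DLL is named cudnn64_6.dll; other cuDNN versions use different names, e.g. cudnn64_5.dll for 5.1), you can try loading the DLL directly before importing TensorFlow:
import ctypes
ctypes.WinDLL('cudnn64_6.dll')  # raises OSError if the DLL is still not on the PATH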

Importing TensorFlow stops python program from running

I have Python Tools set up in Visual Studio with CPython installed.
In Visual Studio, if I run the following code:
print("hello");
import numpy;
print("hello");
The program runs fine, prints two 'hello', and exits normally.
However, if I run the following code:
print("hello");
import tensorflow;
print("hello");
The program hangs, prints one 'hello', and refuses to continue.
All packages should be correctly installed - using TensorFlow in the Python interactive window prints the correct output and works perfectly.
Why does the program hang in the second scenario?
Once you import tensorflow, it automatically tries to load CUDA and prints something like this:
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
So I think what is happening is that CUDA is not installed correctly and the import is failing because of it. You can try installing the CPU-only version, which doesn't use the GPU and doesn't load those libraries.
