I use TensorFlow v1.14.0 on Windows 10. Here is how the relevant environment variables look in my PATH:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\libnvvp
C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common
C:\Users\sinthes\AppData\Local\Programs\Python\Python37
C:\Users\sinthes\AppData\Local\Programs\Python\Python37\Scripts
C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\cuda\bin
Maybe also worth mentioning, in case it is relevant: I use Sublime Text 3 for development and I do not use Anaconda. I find it a bit cumbersome to update TensorFlow in a conda environment, so I just use Sublime Text right now. (I was previously using Anaconda (Spyder), but I uninstalled it from my computer.)
Things seem to work fine except for some occasional strange warnings, but one consistent warning I get whenever I run the fit function is the following:
E tensorflow/core/platform/default/device_tracer.cc:68] CUPTI error: CUPTI could not be loaded or symbol could not be found.
And here is how I call the fit function:
history = model.fit(x=train_x,
                    y=train_y,
                    batch_size=BATCH_SIZE,
                    epochs=110,
                    verbose=2,
                    callbacks=[tensorboard, checkpoint, reduce_lr_on_plateau],
                    validation_data=(dev_x, dev_y),
                    shuffle=True,
                    class_weight=class_weight,
                    steps_per_epoch=None,
                    validation_steps=None)
I just wonder why I see the CUPTI error message at runtime. It is only printed once. Is it something I need to fix, or can it be ignored? The message does not tell me anything concrete that I could act on.
Add this to your PATH on Windows:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\extras\CUPTI\libx64
The NVIDIA® CUDA Profiling Tools Interface (CUPTI) is a dynamic
library that enables the creation of profiling and tracing tools that
target CUDA applications.
CUPTI seems to have been added by the TensorFlow developers to allow profiling. You can simply ignore the error if you don't mind the exception, or adapt your environment PATH so the dynamically linked library (DLL) can be found during execution.
Inside your CUDA installation directory, there is an extras\CUPTI\lib64 directory that contains the cupti64_101.dll that TensorFlow is trying to load. Adding that directory to your PATH should resolve the issue, e.g.,
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\extras\CUPTI\lib64;%PATH%
N.B. If you get an INSUFFICIENT_PRIVILEGES error next, try running your program as administrator.
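If you would rather not edit the system-wide PATH, the same effect can be achieved from inside the script itself, before TensorFlow is imported. A minimal sketch, assuming the CUDA v10.0 layout from the question (adjust the directory to your own installation):
import os

# Assumed location of CUPTI for CUDA v10.0; adjust to your installation.
CUPTI_DIR = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\extras\CUPTI\libx64"

# Prepend to PATH so the Windows loader can resolve cupti64_*.dll.
os.environ["PATH"] = CUPTI_DIR + os.pathsep + os.environ.get("PATH", "")

# On Python 3.8+ the DLL search path must be extended explicitly as well.
if hasattr(os, "add_dll_directory"):
    os.add_dll_directory(CUPTI_DIR)

import tensorflow as tf  # import only after the search path is set up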
This answer is for Ubuntu 16.04.
I had this issue when I upgraded to TensorFlow 1.14 with Python 2.7 and Python 3.6. I had to add /usr/local/cuda/extras/CUPTI/lib64 to LD_LIBRARY_PATH with export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH and then log out and log back in; source ~/.bashrc didn't help. Note that my cuda folder was pointing to cuda-10.0.
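To check whether the change actually took effect in a new session, here is a small diagnostic sketch (the soname libcupti.so.10.0 is an assumption based on CUDA 10.0):
import ctypes
import ctypes.util

# Ask the dynamic linker where (or whether) it can find CUPTI.
print("find_library('cupti') ->", ctypes.util.find_library("cupti"))

try:
    ctypes.CDLL("libcupti.so.10.0")  # assumed soname for CUDA 10.0
    print("libcupti loaded successfully")
except OSError as exc:
    print("libcupti could not be loaded:", exc)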
I ran into this same issue. This is what fixed it for me, in case someone else has a similar problem.
The error I received:
function cupti_interface_->Subscribe( &subscriber_, (CUpti_CallbackFunc)ApiCallback, this)failed with error CUPTI could not be loaded or symbol could not be found.
Windows Server 2019
TensorFlow 2.5
CUDA 11.2 (CUDA_PATH environment variable is set and added to the PATH environment variable)
cuDNN 8.1.0
I had already set C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\extras\CUPTI\lib64 in the PATH environment variable but was still receiving the error.
Running where /r c:\ cupti*.dll in a cmd prompt found DLLs in the c:\Program Files\NVIDIA Corporation\Nsight Systems 2020.4.3\target-windows-x64\ directory. Simply adding this directory to the PATH environment variable fixed the error.
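As a quicker alternative to searching the whole drive, here is a small sketch (not part of the original fix) that lists which directories currently on PATH actually contain a cupti*.dll, so you can see what the loader will find:
import glob
import os

# Scan every directory on PATH for CUPTI DLLs.
for directory in os.environ.get("PATH", "").split(os.pathsep):
    for match in glob.glob(os.path.join(directory, "cupti*.dll")):
        print(match)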
I had a similar error when trying to get the TensorBoard graph; I think it only affects you if you plan to use TensorBoard.
I found the solution in this post, but it is for Linux:
https://gist.github.com/Brainiarc7/6d6c3f23ea057775b72c52817759b25c
I think you need to create a library configuration file for cupti.
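For reference, a sketch of that idea (assumed paths, must be run as root): register the CUPTI directory with the dynamic linker via a drop-in file in /etc/ld.so.conf.d and rebuild the cache, so LD_LIBRARY_PATH is not needed:
import subprocess

# Assumption: /usr/local/cuda is a symlink to the installed CUDA toolkit.
CUPTI_DIR = "/usr/local/cuda/extras/CUPTI/lib64"

# Writing to /etc requires root privileges.
with open("/etc/ld.so.conf.d/cupti.conf", "w") as conf:
    conf.write(CUPTI_DIR + "\n")

subprocess.run(["ldconfig"], check=True)  # rebuild the dynamic linker cache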
Here is what solved "my" problem:
Windows 10, Tensorflow-gpu 2.4
The first issue was that it was unclear exactly "which" cupti64 version it was trying to load. With that in mind, I did a search for all DLLs named cupti*.
I then copied them all (yeah, I know it's a hack, but given the limited information...) into my
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\extras\CUPTI\lib64
folder (cupti64_2020.1.0.dll was in there already)
I then also needed to set the folder permissions to get it to work, which was strange, as I was running VS as admin.
Here is what solved "my" problem:
I just replaced my TensorFlow v1.14 with TensorFlow v1.13.1, and no more CUPTI error messages; even some other strange warnings/problems have disappeared. All issues obviously have specific reasons, but TensorFlow often does not provide understandable error/warning messages that give a good idea of how to solve the issue, and I end up spending hours (even days) on such strange problems, which reduces my productivity significantly.
One general lesson for me (that might be relevant to share here) is that I should not be in a hurry to upgrade my TensorFlow installation to the latest version. The latest one is almost never stable; whenever I tried, I ended up spending a significant amount of time on problems caused by TensorFlow. Poor documentation and error messages make it very difficult to work with.
If anyone has a better answer, they are more than welcome to share their insights on the issue I described in this question.
I also just ran into this issue, exactly as jreeves did, and I solved it by following jreeves' method (above). (Thanks to jreeves for finding and documenting the solution.)
My setup:
Windows 10
GPU support: True
CUDA support: True
TensorFlow: 2.4.1
Python version: 3.8.8
TensorBoard version: 2.4.1
CUDA 11.1
cuDNN 8.0.5
So I am trying to compile TensorFlow from source (using a clone of their git repo from 2019-01-31). I installed Bazel from their shell script (https://github.com/bazelbuild/bazel/releases/download/0.21.0/bazel-0.21.0-installer-linux-x86_64.sh).
I executed ./configure in the tensorflow code and accepted the default settings, except for adding my machine-specific -m options (-mavx2 -mfma) and pointing python to the correct python3 location (/usr/bin/py3). I then ran the following command as per the tensorflow instructions:
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package //tensorflow:libtensorflow_framework.so //tensorflow:libtensorflow.so
Now that continues to run and run; I haven't seen it complete yet (though I am limited to letting it run for a maximum of about 10 hours). It produces a ton of INFO: warnings regarding signed and unsigned integers and control reaching the end of non-void functions. None of these appear fatal. Compilation continues to tick, with the two numbers continuing to grow ('[N,NNN / X,XXX] 4 actions running') and files ticking through 'Compiling'.
The machine is an EC2 instance with ~16 GiB of RAM; the CPU is an 'Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz' with, I believe, 4 cores, and there is plenty of HDD space (although compilation seems to eat QUITE a bit, > 1 GiB).
Any ideas on what's going on here?
Unfortunately, some programs can take a long time to compile. A couple of hours of compilation is not unusual for tensorflow on your setup.
There are reports of it taking 50 minutes on a considerably faster machine.
A solution to this problem is to use the pre-compiled binaries that are available with pip; instructions can be found here: https://www.tensorflow.org/install/pip.html
Basically you can do this:
pip install tensorflow
If you require a specific older version, like 1.15, you can do this:
pip install tensorflow==1.15
For GPU support, you add -gpu to the package name, like this:
pip install tensorflow-gpu
And:
pip install tensorflow-gpu==1.15
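After installing, a quick sanity check can confirm which version you got and whether the GPU is visible (a sketch; tf.test.is_gpu_available is the TF 1.x-era API and is deprecated in newer releases):
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("GPU available:", tf.test.is_gpu_available())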
I have two Windows computers, one with and one without a GPU.
I would like to deploy the same python script on both (TensorFlow 1.8 Object Detection), without changing the packaged version of TensorFlow. In other words, I want to run tensorflow-gpu on a CPU.
In the event that my script cannot detect nvcuda.dll, I've tried using a Session config to disable the GPU, like so:
config = tf.ConfigProto(
    device_count={'GPU': 0}
)
and:
with tf.Session(graph=detection_graph, config=config) as sess:
However, this is not sufficient, as TensorFlow still returns the error:
ImportError: Could not find 'nvcuda.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable.
Typically it is installed in 'C:\Windows\System32'. If it is not present, ensure that you have a CUDA-capable GPU with the correct driver installed.
Is there any way to disable checking for a GPU/CUDA entirely and default to CPU?
EDIT: I have read the year-old answer regarding tensorflow-gpu==1.0 on Linux posted here, which suggests this is impossible. I'm interested to know if this is still how tensorflow-gpu is compiled, 9 versions later.
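For context, the other knob that is commonly suggested is hiding all GPUs through an environment variable before TensorFlow is imported. As a caveat, the tensorflow-gpu wheel still links against the CUDA DLLs at import time, so this sketch may not avoid the nvcuda.dll ImportError described above:
import os

# Hide all GPUs from TensorFlow; must be set before the import below.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf  # may still fail if the wheel cannot load its CUDA DLLs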
In development, I have been using the GPU-accelerated TensorFlow:
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp35-cp35m-linux_x86_64.whl
I am attempting to deploy my trained model along with an application binary for my users. I compile using PyInstaller (3.3.dev0+f0df2d2bb) on Python 3.5.2 to turn my application into a binary.
For deployment, I install the CPU version: https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl
However, after successful compilation, I run my program and receive the infamous TensorFlow CUDA error:
tensorflow.python.framework.errors_impl.NotFoundError:
tensorflow/contrib/util/tensorflow/contrib/cudnn_rnn/python/ops/_cudnn_rnn_ops.so:
cannot open shared object file: No such file or directory
Why is it looking for CUDA when I've only got the CPU version installed? (Not to mention that I'm still on my development machine, which has CUDA, so it should find it anyway. I can use tensorflow-gpu/CUDA fine in uncompiled scripts. But this is irrelevant, because the deployment machines won't have CUDA.)
My first thought was that I'm somehow importing the wrong tensorflow, but I not only ran pip uninstall tensorflow-gpu, I also went and deleted the tensorflow-gpu package in /usr/local/lib/python3.5/dist-packages/.
Any ideas what could be happening? Maybe I need to start using a virtualenv...
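One quick way to test the "wrong tensorflow" theory is to ask the interpreter (the same one PyInstaller uses) which build it actually imports; a small diagnostic sketch:
import tensorflow as tf

print("version:", tf.__version__)
print("location:", tf.__file__)
print("built with CUDA:", tf.test.is_built_with_cuda())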
I'm trying to run a TensorFlow Python script on a Google Cloud VM instance with GPU enabled. I have followed the process for installing the GPU drivers, CUDA, cuDNN and TensorFlow. However, whenever I try to run my program (which runs fine on a supercomputing cluster), I keep getting:
undefined symbol: cudnnCreate
I have added the following to my ~/.bashrc:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64:/usr/local/cuda-8.0/extras/CUPTI/lib64:/usr/local/cuda-8.0/lib64"
export CUDA_HOME="/usr/local/cuda-8.0"
export PATH="$PATH:/usr/local/cuda-8.0/bin"
but it still does not work and produces the same error.
Answering my own question: the issue was not that the library was not installed; the installed library was the wrong version, hence it could not be found. In this case it was cuDNN 5.0. However, even after installing the right version it still didn't work due to incompatibilities between the versions of the driver, CUDA and cuDNN. I solved all these issues by re-installing everything, including the driver, taking into account the version requirements of the TensorFlow libraries.
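A small diagnostic sketch along the same lines (the library name is an assumption) to check which cuDNN the dynamic linker resolves and whether cudnnCreate is exported, which is what the "undefined symbol: cudnnCreate" error is about:
import ctypes
import ctypes.util

# Fall back to a generic soname if the linker cache has no entry for cudnn.
name = ctypes.util.find_library("cudnn") or "libcudnn.so"
print("trying cuDNN library:", name)
try:
    lib = ctypes.CDLL(name)
    lib.cudnnGetVersion.restype = ctypes.c_size_t
    print("cuDNN version reported:", lib.cudnnGetVersion())
    print("cudnnCreate present:", hasattr(lib, "cudnnCreate"))
except OSError as exc:
    print("could not load cuDNN:", exc)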