Tensorflow-gpu in Python with Geforce Titan Z

I'm having problems setting up TensorFlow with Python 3.5.3 on Windows 7. The CUDA version is 8.0 and the GPU driver is updated to the latest version.
The machine has one GeForce Titan Z (a dual-GPU card) and one 750 Ti installed. I keep getting error messages saying "peer access is not supported" between the two GPUs on the Titan Z card, and between the Titan Z and the 750 Ti.
I am wondering:
1. Is it possible to use only one GPU for TensorFlow and make it work?
2. Is it possible to use both GPUs on the Titan Z card for TensorFlow?
3. Any suggestions on the Python version and Anaconda version for TensorFlow? Will a Linux environment solve these problems?
Thank you!

It is possible to use a single GPU or multiple GPUs, and to specify which one to use for each operation (link). TensorFlow requires a GPU with compute capability 3.5 or higher to be eligible. The Python version and a Linux vs. Windows environment don't affect how the GPU is used.
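For example, here is a minimal sketch of pinning a computation to the first GPU, written against the TensorFlow 1.x graph API that matches this setup (allow_soft_placement lets TF fall back to another device if the requested placement is impossible):

import tensorflow as tf

# pin the ops in this block to the first visible GPU
with tf.device('/gpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)

# allow_soft_placement: fall back to the CPU if an op cannot run on the GPU
# log_device_placement: print the device each op actually ran on
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(b))

Restricting the process to a single GPU (e.g. one half of the Titan Z) can also be done outside TensorFlow by setting the CUDA_VISIBLE_DEVICES environment variable before the program starts.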


How to prevent a specific GPU from being used in Python code

My problem is the same as 1: How to set specific gpu in tensorflow?
but it didn't solve my problem.
I have 4 GPUs in my PC and I want to run the code on GPU 0, but whenever I run my tensorflow code it always runs only on GPU 2. After reading these (2, 3, 4) solutions and discussions, I tried to solve my problem by adding:
os.environ['CUDA_VISIBLE_DEVICES'] = '0' in the Python code,
or CUDA_VISIBLE_DEVICES as an environment variable in the PyCharm project configuration settings.
Furthermore, I also added CUDA_LAUNCH_BLOCKING=2 in code or as an environment variable to block GPU 2. Is that the right way to block a GPU?
The above solutions are not working for me; the code still runs only on GPU 2. I checked it with watch nvidia-smi.
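For reference, this is roughly how I set it in the code (a minimal sketch of my attempt; the variable is set before the tensorflow import, since it is only read when CUDA is initialised):

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'   # set before tensorflow is imported
import tensorflow as tf                    # should now see only GPU 0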
My system environment is
Ubuntu 16.04
RTX2080Ti (all 4 GPUs)
Driver version 418.74
CUDA 9.0 and CuDNN 7.5
Tensorflow-gpu 1.9.0
Any suggestions for this problem? It's weird that, whether I add the environment variable in the PyCharm project settings or in the Python code, still only GPU 2 is visible. When I remove CUDA_VISIBLE_DEVICES, TensorFlow detects all 4 GPUs, but the code still runs only on GPU 2.
I tried this in tensorflow 2.0.0:
import tensorflow as tf

# list all physical GPUs, then restrict TensorFlow to the first one
physical_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_visible_devices(physical_devices[0], 'GPU')
logical_devices = tf.config.experimental.list_logical_devices('GPU')
This should make your code run on GPU index 0.
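Two extra notes on this approach (my own additions, not from the original answer): once a single physical GPU is made visible, it is exposed as logical device '/GPU:0' regardless of its original index. And if CUDA_VISIBLE_DEVICES appears to select the "wrong" card, keep in mind that CUDA's default device ordering ("fastest first") does not have to match nvidia-smi's ordering; setting the CUDA_DEVICE_ORDER=PCI_BUS_ID environment variable before TensorFlow is imported makes the two numberings agree.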

Minimum required hardware to install tensorflow-gpu in Python

I have tried many PCs with different hardware capabilities to install tensorflow on the GPU; they are either incompatible, or compatible but get stuck at some point. I would like to know the minimum hardware required to install tensorflow-gpu. I would also like to ask whether some specific hardware is sufficient or not:
Can I use a Core i5 instead of a Core i7?
Is a 4 GB GPU enough for training the dataset?
Is 8 GB of RAM enough for training and evaluating the dataset? With most thanks.
TensorFlow (TF) GPU 1.6 and above requires cuda compute capability (ccc) of 3.5 or higher and requires AVX instruction support.
https://www.tensorflow.org/install/gpu#hardware_requirements.
https://www.tensorflow.org/install/pip#hardware-requirements.
Therefore you would want to buy a graphics card that has a ccc of 3.5 or higher.
Here's a link that shows the ccc for various NVIDIA graphics cards: https://developer.nvidia.com/cuda-gpus.
However if your cuda compute capability is below 3.5 you have to compile TF from sources yourself. This procedure may or may not work depending on the build flags you choose while compiling and is not straightforward.
In my humble opinion, the simplest way is to use the TF-GPU pre-built binaries to install TF GPU.
To answer your questions: yes, you can use TF comfortably on an i5 with a 4 GB graphics card and 8 GB of RAM. Training may take longer, though, depending on the task at hand.
In summary, the main hardware requirement to install TF GPU is an NVIDIA graphics card with cuda compute capability above 3.5; the more the merrier.
Note that TF officially supports only NVIDIA graphics cards.
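If you are unsure whether a particular card qualifies, you can also ask TF directly. A minimal sketch, assuming a recent TF 2.x build where tf.config.experimental.get_device_details is available:

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if not gpus:
    print('No usable NVIDIA GPU detected; TF will run on the CPU.')
for gpu in gpus:
    # details usually include 'device_name' and 'compute_capability'
    details = tf.config.experimental.get_device_details(gpu)
    print(gpu.name, details.get('device_name'), details.get('compute_capability'))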
You should find your answers here:
https://www.nvidia.com/en-gb/data-center/gpu-accelerated-applications/tensorflow/
From the link:
The GPU-enabled version of TensorFlow has the following requirements:
64-bit Linux
Python 2.7
CUDA 7.5 (CUDA 8.0 required for Pascal GPUs)
cuDNN v5.1 (cuDNN v6 if on TF v1.3)

tensorflow GPU based installation

My system is Ubuntu 16.04, my laptop is a Dell Inspiron 5521, and it has an Intel graphics card, but TensorFlow needs NVIDIA graphics for CUDA support.
Is there any way I can run TensorFlow with the GPU on Intel graphics? (It works with the CPU.)
The installation of tensorflow-gpu gave no errors, but when I import it I get:
"Failed to load the native TensorFlow runtime."
After some digging I found I needed to install CUDA, so I downloaded the "cuda_9.1.85_387.26_linux.run" file, but I face issues while running it:
"Detected 4 CPUs online; setting concurrency level to 4.
The file '/tmp/.X0-lock' exists and appears to contain the process ID
'1033' of a running X server.
It appears that an X server is running. Please exit X before
installation. If you're sure that X is not running, but are getting
this error, please delete any X lock files in /tmp."
I deleted the files from the /tmp folder and tried again; still the same issue.
To run tensorflow-gpu you need an NVIDIA card. You'll need to stick to running normal tensorflow on the CPU.
Is an Intel-based graphics card compatible with tensorflow/GPU?
TensorFlow does not support the OpenCL API that you could use with Intel or AMD GPUs, only CUDA. CUDA is a proprietary NVIDIA technology that works only with NVIDIA GPUs.
You may like to search for machine learning frameworks that use OpenCL, but I can only find some niche projects at the moment.
I had to switch from AMD to NVIDIA to be able to run TensorFlow calculations on the GPU.
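If you want to confirm what your installed build can actually see (assuming the import itself works), a minimal sketch against the TF 1.x API used here lists the available devices; on an Intel-only machine it will only report the CPU, and the same graph still runs fine there:

import tensorflow as tf
from tensorflow.python.client import device_lib

# list every device TensorFlow can use; expect only the CPU here
print([d.name for d in device_lib.list_local_devices()])

# the CPU-only path still works
with tf.device('/cpu:0'):
    c = tf.matmul(tf.ones([2, 2]), tf.ones([2, 2]))
with tf.Session() as sess:
    print(sess.run(c))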

Tensorflow Slower on Python 3 vs. Python 2

My tests show that Tensorflow GPU operations are ~6% slower on Python 3 compared to Python 2. Does anyone have any insight on this?
Platform:
Ubuntu 16.04.2 LTS
Virtualenv 15.0.1
Python 2.7.12
Python 3.6.1
TensorFlow 1.1
CUDA Toolkit 8.0.44
CUDNN 5.1
GPU: GTX 980Ti
CPU: i7 4 GHz
RAM: 32 GB
When driving TensorFlow from Python, most of the code that feeds the computational engine with data lives on the Python side. There are known performance differences between Python 2 and 3 on various tasks. Therefore, I'd guess that the Python code you use to feed the net (or the TF Python layer, which is quite thick) makes heavy use of Python features that are (by design) a bit slower in Python 3.
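One way to check whether the gap really comes from the Python feed path rather than the GPU kernels is to time the same feed loop under both interpreters. A minimal, purely illustrative sketch using the TF 1.x API from the question:

import time
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1000, 1000])
y = tf.matmul(x, x)
data = np.random.rand(1000, 1000).astype(np.float32)

with tf.Session() as sess:
    sess.run(y, feed_dict={x: data})           # warm-up (CUDA init, first-run costs)
    start = time.time()
    for _ in range(100):
        sess.run(y, feed_dict={x: data})       # Python-side feed + GPU compute
    print('mean step time: %.5f s' % ((time.time() - start) / 100))

Run the same script inside the Python 2 and Python 3 virtualenvs; if the per-step times differ while nvidia-smi shows similar GPU utilisation, the overhead is on the Python side.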

nvcc fatal : Value 'sm_61' is not defined for option 'gpu-architecture' error with theano

I was setting up Python and Theano for use with the GPU on:
Ubuntu 14.04,
GeForce GTX 1080
I have already installed the NVIDIA driver (367.27) and CUDA toolkit (7.5) successfully on the system,
but when testing the Theano GPU implementation I get the above error (for example, when importing theano with the GPU enabled).
I have tried to look for possible solutions but didn't succeed.
I'm a little new to ubuntu and gpu programming, so I would appreciate any insight into how I can solve this problem.
Thanks
As Robert Crovella said, SM 6.1 (sm_61) is only supported in CUDA 8.0 and above, and thus you should download CUDA 8.0 Release Candidate from https://developer.nvidia.com/cuda-toolkit
Ubuntu 14.04 is supported, and the instructions on the website on how to setup should be straightforward (copy and paste lines to the console).
I would also recommend downloading CUDA 8.0 when it comes out, since the RC is not the final version.
I was able to find a solution to this problem (since I still want to use CUDA 7.5) by adding the following to the [nvcc] section of the .theanorc file:
[nvcc]
flags = -arch=sm_52
No more nvcc fatal error.
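To double-check that the workaround took effect, importing theano with the GPU enabled again should print the usual "Using gpu device 0: GeForce GTX 1080" banner instead of the nvcc error; an illustrative check, assuming the old device=gpu backend:

import os
os.environ['THEANO_FLAGS'] = 'device=gpu,floatX=float32'   # set before importing theano
import theano   # should report the GPU banner rather than the sm_61 nvcc error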
