How to use GPU with Keras on macOS - python

I just want to run my deep learning model with Keras on macOS, but it isn't working.
I already installed PlaidML as shown below,
and I selected 'metal_amd_radeon_pro_560x.0' in plaidml-setup.
After that, I added os.environ["KERAS_BACKEND"] = "plaidml.keras.backend" to my script in PyCharm,
but when I check, GPU activity is still low.
I also tried another method, multi_gpu_model in Keras,
but I got the error message 'However this machine only has: ['/cpu:0']'.
My Mac is a 2019 MacBook Pro with a Radeon Pro 560X 4 GB graphics card.
Could you help me with this?

Please follow these steps (assuming you already have Python installed):
virtualenv plaidml
source plaidml/bin/activate
pip install plaidml-keras plaidbench
Choose the accelerator:
plaidml-setup
Set Keras as the backend:
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"
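As a quick sanity check, a minimal sketch like the one below (the tiny model is just a placeholder) confirms the backend is actually picked up. Note that the environment variable must be set before keras is imported, otherwise the default backend loads:

import os
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"  # must run before importing keras

import keras
from keras.models import Sequential
from keras.layers import Dense

print(keras.backend.backend())  # expect: plaidml.keras.backend

# throwaway model, just to exercise the device chosen in plaidml-setup
model = Sequential([
    Dense(64, activation="relu", input_shape=(32,)),
    Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")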

Tensorflow 2.4.1 - Couldn't invoke ptxas.exe

I'm trying to run TensorFlow with GPU support (GTX 1660 SUPER).
I created an environment using Anaconda, then installed cudatoolkit (version 11.0.221) and tensorflow-gpu (version 2.4.1). Afterwards, I downloaded cuDNN (version 8.0.4) and copied all files from cuDNN's bin folder to my environment's bin folder at anaconda3\envs\<env name>\Library\bin.
In my script, I enable memory growth for my GPU using tf.config.experimental.set_memory_growth.
When I run the script (which uses convolutional layers), I get a warning that says Couldn't invoke ptxas.exe --version, which comes after a Call to CreateProcess failed. Error code: 2 error.
After the launch failure, I get: Relying on driver to perform ptx compilation. Modify $PATH to customize ptxas location.
I've already tried switching to cuDNN version 8.1.1.
How do I fix this?
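For reference, the memory-growth part of my script looks roughly like this (a minimal sketch of the standard TF 2.x call, not my exact code):

import tensorflow as tf

# ask TF to allocate GPU memory lazily instead of grabbing it all up front
for gpu in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)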
I got a new fix for this.
First I tried using tensorflow=2.3, cudnn=7.6.5 and cudatoolkit=10.1 as mentioned in previous answers. However, every time I started training a model, the process stalled and the training seemed to be stuck in epoch 1.
I then managed to include ptxas in my conda environment by running conda install -c nvidia cuda-nvcc. The packages I am using are:
tensorflow=2.9, cudnn=8.1.0, cudatoolkit=11.2.2, cuda-nvcc=11.7.99 and python=3.9
I am now running everything on Windows 10 flawlessly.
For the benefit of the community, adding @Zuk Levinson's comment,
which solves the issue by using
tensorflow=2.3, cudnn=7.6.5 and cudatoolkit=10.1

Confused with setting up ML and DL on GPU

My goal is to set up my PC for machine and deep learning using my GPU. I've read about all the different components, but I can't connect the dots as to what I need to do.
OS: Ubuntu 20.04
GPU: Nvidia RTX 2070 Super
Anaconda: 4.8.3
I've installed the nvidia-cuda-toolkit (10.1.243), but now what?
How does this integrate with jupyter notebook?
The Python modules I want to work with are:
turicreate - I've gotten this to run off CPU but not GPU
scikit-learn
tensorflow
matlab
I know cuDNN and pyCUDA fit in there somewhere.
Any help is appreciated. Thanks
First of all, my experience is limited to Ubuntu 18.04 and 16.xx and Python DL frameworks, but I hope some suggestions will be helpful.
If I were familiar with Docker, I would rather use Docker instead of setting everything up from scratch. This approach is described in the section about the TensorFlow container.
If you decide to set up all components yourself, please see this guideline.
I used some contents from it for 18.04, successfully.
Be careful with automatic updates. After the configuration is finished and tested, protect it from being overwritten by a newer version of CUDA or TensorRT.
Answering one of your sub-questions - How does this integrate with jupyter notebook? - it does not, because it is unnecessary. The CUDA libraries cooperate with a framework such as TensorFlow, not with Jupyter. Jupyter is just an editor and execution controller on the server side.
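If you want to confirm from a notebook cell that TensorFlow actually sees the GPU, a quick check like this is usually enough (a sketch for TensorFlow 2.x):

import tensorflow as tf

# an empty list means TensorFlow is running CPU-only
print(tf.config.list_physical_devices("GPU"))
print(tf.test.is_built_with_cuda())  # whether this build was compiled with CUDA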

CPU usage is higher than GPU usage with Keras, Tensorflow

I want to train an image classification CNN, using Keras.
The image dimensions are 300x300x3.
I have trained a CNN with 2M parameters; I used Keras' MobileNet for transfer learning, but I froze the last 63 layers and added dense layers at the end, with the last layer having 2 units and a Softmax activation.
To make predictions, I load the h5 file and use OpenCV video capture to get video frames; for each frame I call model.predict(img_array).
When I look at the Task Manager on Windows 10, I see that the Python script uses 80% of my processor but only 2% of the GPU. This CPU usage causes lag on my laptop.
How can I reduce the CPU usage and force Keras to do its computations on the GPU?
I have Nvidia Rtx 2060 4GB and Intel Core i7-9750H on my laptop.
Tensorflow 2.1 and Keras 2.3.1
OpenCV 4.1
I have tried the following, but nothing actually changes:
tf.config.threading.set_inter_op_parallelism_threads(12)
tf.config.threading.set_intra_op_parallelism_threads(12)
with tf.device('/gpu:0'):
    model.predict(img_array)
Best regards.
Edit:
I reduced the CPU usage to 20% by setting the steps parameter in the predict method.
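For completeness, here is a cleaned-up, self-contained sketch of what I do per frame (the model path and capture source are placeholders; the device context only has an effect if a GPU-enabled TensorFlow build actually sees the GPU):

import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model

model = load_model("model.h5")  # placeholder path

cap = cv2.VideoCapture(0)  # placeholder capture source
ret, frame = cap.read()
if ret:
    img = cv2.resize(frame, (300, 300))
    img_array = np.expand_dims(img.astype("float32") / 255.0, axis=0)
    with tf.device("/gpu:0"):  # no-op on a CPU-only build
        preds = model.predict(img_array)
    print(preds)
cap.release()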
Please check your pip list or conda list.
Sometimes, we mistakenly install both tensorflow and tensorflow-gpu.
If you have both, the system will automatically go for tensorflow, which is the CPU one.
If that is the case, DELETE "tensorflow", keeping only "tensorflow-gpu".
If you do not see tensorflow-gpu in the first place, try installing it on conda using the following commands:
conda create -n [EnvironmentName] python=3.6
conda activate [EnvironmentName]
conda install -c conda-forge tensorflow-gpu==1.14
It will assess which versions (CUDA, cuDNN, etc.) you require, then download and install them directly into your environment. Then run your Python file from this environment. Good luck ^_^
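Once the environment is active, you can check from Python which build you actually ended up with (a quick sketch using the TF 1.x API, matching the version installed above):

import tensorflow as tf

print(tf.__version__)
print(tf.test.is_built_with_cuda())  # True only for a CUDA-enabled build
print(tf.test.is_gpu_available())    # TF 1.x API; True if a usable GPU is detected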

TensorFlow GPU-based installation

My system is Ubuntu 16.04, my laptop is a Dell Inspiron 5521, and it has an Intel graphics card, but TensorFlow needs NVIDIA graphics for CUDA support.
Is there any way I can run TensorFlow with GPU on Intel graphics? (With the CPU it is working.)
During installation of tensorflow-gpu I had no errors, but when I import it I get:
"Failed to load the native TensorFlow runtime."
Did some digging, then found I needed to install CUDA. I downloaded the "cuda_9.1.85_387.26_linux.run" file but faced issues while running it:
"Detected 4 CPUs online; setting concurrency level to 4.
The file '/tmp/.X0-lock' exists and appears to contain the process ID
'1033' of a running X server.
It appears that an X server is running. Please exit X before
installation. If you're sure that X is not running, but are getting
this error, please delete any X lock files in /tmp."
I deleted the files from the /tmp folder and tried again, but the issue persists.
To run tensorflow-gpu you need an NVIDIA card. You'll need to stick to running normal TensorFlow on the CPU.
Is an Intel-based graphics card compatible with TensorFlow on GPU?
TensorFlow does not support the OpenCL API that you could use with Intel or AMD GPUs, only CUDA. CUDA is a proprietary NVIDIA technology that only works with NVIDIA GPUs.
You may like to search for machine learning frameworks that utilise OpenCL, but I can only find some niche projects at the moment.
I had to switch from AMD to NVIDIA to be able to run TensorFlow calculations on the GPU.

Compiling binary with TensorFlow library for CPU: Cannot find CUDA library?

In development, I have been using the GPU-accelerated TensorFlow:
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp35-cp35m-linux_x86_64.whl
I am attempting to deploy my trained model along with an application binary for my users. I compile it with PyInstaller (3.3.dev0+f0df2d2bb) on Python 3.5.2 to turn the application into a binary.
For deployment, I install the CPU version:
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl
However, upon successful compilation, I run my program and receive the infamous TensorFlow CUDA error:
tensorflow.python.framework.errors_impl.NotFoundError:
tensorflow/contrib/util/tensorflow/contrib/cudnn_rnn/python/ops/_cudnn_rnn_ops.so:
cannot open shared object file: No such file or directory
Why is it looking for CUDA when I've only got the CPU version installed? (Let alone the fact that I'm still on my development machine, which has CUDA, so it should find it anyway. I can use tensorflow-gpu/CUDA fine in uncompiled scripts. But this is irrelevant because the deployment machines won't have CUDA.)
My first thought was that somehow I was importing the wrong tensorflow, but I not only ran pip uninstall tensorflow-gpu, I also deleted the tensorflow-gpu package in /usr/local/lib/python3.5/dist-packages/.
Any ideas what could be happening? Maybe I need to start using a virtualenv...
