I am trying OpenCV + YOLOv3. I am using a Mac with this configuration:
MacBook Pro (Retina, 15-inch, Mid 2015)
Graphics Intel Iris Pro 1536 MB
Update - OS info:
macOS Catalina version 10.15.2
I checked Apple's website and it says this MacBook supports OpenCL 1.2: https://support.apple.com/en-ca/HT202823
My program uses opencv-contrib-python 4.1.2, and the code snippet is:
# Load the Darknet model and ask OpenCV's DNN module to target OpenCL.
net = cv2.dnn.readNetFromDarknet(model_configuration, model_weights)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL)
I also tried DNN_TARGET_OPENCL_FP16. By the way, I use the common pre-trained YOLOv3 cfg and weights and coco.names.
The problem is that my program cannot use the GPU on my Mac. When I run a video through it, the inference time is 300+ ms per frame, and Activity Monitor shows GPU usage at 0.0% while the CPU is at 70%+. I don't know why I can't use the GPU via OpenCL on the Mac. Is there any trick I'm missing?
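For reference, here is a minimal way to check whether an OpenCV build sees an OpenCL runtime at all, using the cv2.ocl module (a diagnostic sketch, not my full program):
import cv2

# Report whether an OpenCL runtime is visible, then enable and confirm it.
print("OpenCL available:", cv2.ocl.haveOpenCL())
cv2.ocl.setUseOpenCL(True)
print("OpenCL in use:", cv2.ocl.useOpenCL())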
Related
I just want to run my deep learning model with Keras on macOS, but it's not working.
I already installed PlaidML as shown below, and I selected 'metal_amd_radeon_pro_560x.0' in plaidml-setup.
After that, I added 'os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"' in PyCharm, but GPU activity stays low.
I also tried another method, multi_gpu_model in Keras, but I got the error message 'However this machine only has: ['/cpu:0']'.
My Mac is a 2019 MacBook Pro with a Radeon Pro 560X 4 GB graphics card.
Could you help me with this?
Please follow these steps (assuming you have Python already installed):
virtualenv plaidml
source plaidml/bin/activate
pip install plaidml-keras plaidbench
Choose the accelerator
plaidml-setup
Set Keras as backend
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"
I've tried many PCs with different hardware capabilities to install TensorFlow on GPU; they are either incompatible, or compatible but get stuck at some point. I would like to know the minimum hardware required to install tensorflow-gpu. I would also like to ask whether the following hardware is sufficient:
Can I use a Core i5 instead of a Core i7?
Is a 4 GB GPU enough for training the dataset?
Is 8 GB of RAM enough for training and evaluating the dataset? With many thanks.
TensorFlow (TF) GPU 1.6 and above requires CUDA compute capability (CCC) of 3.5 or higher and requires AVX instruction support.
https://www.tensorflow.org/install/gpu#hardware_requirements
https://www.tensorflow.org/install/pip#hardware-requirements
Therefore you would want to buy a graphics card with a CCC of 3.5 or higher. Here's a link that shows the CCC for various NVIDIA graphics cards: https://developer.nvidia.com/cuda-gpus
However, if your CUDA compute capability is below 3.5, you have to compile TF from source yourself. This procedure may or may not work depending on the build flags you choose while compiling, and it is not straightforward.
In my humble opinion, the simplest way to install TF GPU is to use the pre-built TF-GPU binaries.
To answer your questions: yes, you can use TF comfortably on an i5 with a 4 GB graphics card and 8 GB of RAM, though training may take longer depending on the task at hand.
In summary, the main hardware requirement for installing TF GPU is an NVIDIA graphics card with a CUDA compute capability of 3.5 or higher; the more the merrier.
Note that TF officially supports only NVIDIA graphics cards.
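As a quick sanity check once a card is installed, TF can report the GPUs it sees; the sketch below uses the TF 2.x API (get_device_details needs TF 2.4+ and, on CUDA builds, includes the compute capability):
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs found:", gpus)
for gpu in gpus:
    # On CUDA builds this reports the device name and compute capability.
    print(tf.config.experimental.get_device_details(gpu))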
You should find your answers here:
https://www.nvidia.com/en-gb/data-center/gpu-accelerated-applications/tensorflow/
From the link:
The GPU-enabled version of TensorFlow has the following requirements:
64-bit Linux
Python 2.7
CUDA 7.5 (CUDA 8.0 required for Pascal GPUs)
cuDNN v5.1 (cuDNN v6 if on TF v1.3)
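If you want to confirm that an installed build is actually a CUDA-enabled one, a two-line check works on both TF 1.x and 2.x:
import tensorflow as tf

print("TF version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())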
My system is Ubuntu 16.04; my laptop is a Dell Inspiron 5521 and it has an Intel graphics card, but TensorFlow needs NVIDIA graphics for CUDA support.
Is there any way I can run TensorFlow with the GPU on Intel graphics? (It works with the CPU.)
During installation of tensorflow-gpu I had no errors, but when I import it I get:
"Failed to load the native TensorFlow runtime."
Did some digging, then found I needed to install CUDA. I downloaded the "cuda_9.1.85_387.26_linux.run" file but faced issues while running it:
"Detected 4 CPUs online; setting concurrency level to 4.
The file '/tmp/.X0-lock' exists and appears to contain the process ID
'1033' of a runnning X server.
It appears that an X server is running. Please exit X before
installation. If you're sure that X is not running, but are getting
this error, please delete any X lock files in /tmp."
Deleted the files from the /tmp folder and tried again; still the same issue.
To run tensorflow-gpu you need an NVIDIA card. You'll need to stick to running normal TensorFlow on the CPU.
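To see exactly which devices your TensorFlow build can use (this works on the TF 1.x builds of that era), you can list them:
from tensorflow.python.client import device_lib

# On a machine without an NVIDIA GPU this lists only the CPU device.
for d in device_lib.list_local_devices():
    print(d.name, d.device_type)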
Is an Intel graphics card compatible with TensorFlow/GPU?
TensorFlow does not support the OpenCL API that you could use with Intel or AMD, only CUDA. CUDA is a proprietary NVIDIA technology that only works with NVIDIA GPUs.
You may like to search for machine learning frameworks that utilise OpenCL, but I only find some niche projects at the moment.
I had to switch from AMD to NVIDIA to be able to run TensorFlow calculations on the GPU.
As you know, Apple uses Radeon on iMacs. I have been trying to find a way to speed up the training process, with no luck so far!
So can you pros point me in the right direction on this? I mean, can I already use the GPU on my iMac without adding any equipment, or should I go and buy an external NVIDIA card and a Thunderbolt enclosure?
I am planning to use TensorFlow, Keras and sklearn on my iMac running macOS High Sierra.
Supporting Radeon and OpenCL is an ongoing issue in TensorFlow; you can check the issue on GitHub, where there are some workarounds and TensorFlow forks that support it:
https://github.com/tensorflow/tensorflow/issues/22
tf-coriander (a modified TensorFlow 1.11) supports macOS AMD GPUs!
Proven on my 2017 MBP with an AMD Radeon 560.
I've set up pyopencl on my laptop by getting python-pyopencl from multiverse and installing the AMD APP SDK. To get the NVIDIA ICDs I reinstalled the latest NVIDIA driver from the driver manager.
My system is a ThinkPad T540p, i7-4700HQ, NVIDIA GT 730M, 64-bit Ubuntu 14.04.
To test the OpenCL installation I ran this pyopencl example: http://wiki.tiker.net/PyOpenCL/Examples/MatrixMultiply
Unfortunately the performance is very bad: only 2 GFLOP/s. Surely the laptop can do better, so I printed the vendor information. It's "GenuineIntel"; apparently the kernel is run not on the GPU but on the CPU. How can I change that?
It seems like pyopencl doesn't find the GPU.
for dev in ctx.devices:
    print(dev.vendor)
This returns only "GenuineIntel".
The context is created with:
import pyopencl as cl
ctx = cl.create_some_context()
UPDATE:
This seems to be a duplicate of: ERROR: pyopencl: creating context for specific device
There are two issues here.
First, you should specify the GPU as the device to execute the kernel on. Replace:
ctx = cl.create_some_context()
with:
platforms = cl.get_platforms()
# Index 0 is not necessarily the NVIDIA platform; check platform names if needed.
gpus = platforms[0].get_devices(device_type=cl.device_type.GPU)
ctx = cl.Context(devices=gpus)
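If that GPU list comes back empty, it helps to first enumerate everything pyopencl can see (a small diagnostic sketch):
import pyopencl as cl

# Print every platform and device the installed OpenCL ICDs expose.
for p in cl.get_platforms():
    print("Platform:", p.name, "|", p.vendor)
    for d in p.get_devices():
        print("  Device:", d.name, "| type:", cl.device_type.to_string(d.type))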
Second, you appear to have Optimus switchable graphics, so the NVIDIA card is actually left in standby and graphics work is handled by the integrated Intel side for power saving. You will need to activate the discrete GPU for your program by launching it with Bumblebee:
optirun python yourscript.py