Use GPU for PIL in Python on Mac (macOS Catalina) - python

I optimized my code to process images with Pillow. It already uses all available resources to run as fast as possible; the GPU is the only thing left that could make it faster. I can't find any solution besides CUDA, and that won't work on Catalina. Is there any way to use my GPU (NVIDIA GeForce GT 750M 2 GB / Intel Iris Pro 1536 MB) to make the process more efficient?
Thanks for your help!

Actually, there is no way to do that with Pillow. If you need better speed, you can use ImageMagick (Wand as the Python wrapper) or GraphicsMagick (pgmagick as the Python wrapper). If you need the GPU, ImageMagick offers some options to use it where possible (I am not sure about GraphicsMagick), but it is neither as efficient nor as complete as using CUDA or OpenCL directly. If you need better results and cross-platform support (NVIDIA and AMD; macOS, Linux, Windows...), I recommend looking at Vulkan.
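If you go the ImageMagick route, a minimal sketch of a resize through Wand might look like this (filenames and sizes are placeholders; whether ImageMagick actually dispatches to OpenCL depends on how it was built):

```python
# Minimal sketch: resizing with Wand (the ImageMagick Python wrapper).
# ImageMagick decides internally whether to use its OpenCL acceleration;
# the Python code is the same either way.
from wand.image import Image

with Image(filename='input.jpg') as img:
    img.resize(800, 600)             # width, height
    img.save(filename='output.jpg')
```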

Related

How to use qiskit GPU on Windows?

I want to calculate the unitary of a quantum circuit on the GPU because my CPU is already too busy.
But it seems qiskit-aer-gpu only supports Ubuntu. Is there any way to use it on Windows 10?
The qiskit-aer-gpu package provided is only available on Linux running on an x86_64 platform. If you want to make it run on Windows, you'll have to build the Aer code with GPU support from source. You can refer here for instructions.
Another way might be to use the Windows Subsystem for Linux (WSL) to install CUDA and use the already-built qiskit-aer-gpu package. I've never tried it, but it's worth a try: https://docs.nvidia.com/cuda/wsl-user-guide/index.html
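Once a GPU-enabled Aer build is available (from source or under WSL), selecting the GPU looks roughly like the sketch below; the exact option names depend on your Aer version, so treat this as an assumption to verify against the docs:

```python
# Sketch, assuming a recent qiskit-aer build with GPU support installed.
from qiskit import QuantumCircuit, transpile
from qiskit.providers.aer import AerSimulator

qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.save_unitary()                    # ask Aer to record the circuit's unitary

sim = AerSimulator(method='unitary', device='GPU')
result = sim.run(transpile(qc, sim)).result()
print(result.get_unitary(qc))
```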
Patrick

OpenCV 4.0 and AMD processor Python

Can I somehow use my AMD processor to speed up computations in my Python script? I'm doing object detection using OpenCV 4.0 with the cv2.dnn module.
Based on similar questions, I've tried cv2.UMat, but it doesn't speed up the computations, so I assume the script is still running on my poor CPU.
GPU info: Advanced Micro Devices, Inc. [AMD/ATI] Thames [Radeon HD 7500M/7600M Series]
Sorry, but AFAICT that's a fairly old GPU (pre-GCN architecture). Those were not really suited for GPGPU computations. It should still be possible to install OpenCL drivers, but I can't guarantee anything as I can't try it.
I assume you're on Linux. Here is an article on how to use older AMD GPUs with Linux. You would have to follow the steps in the "Running OpenCL on old cards" section, which involve installing Ubuntu 14.04 and the fglrx 15.12 drivers.
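If the OpenCL driver does end up working, this is roughly how to check that OpenCV sees it and to point cv2.dnn at it (the model files here are just placeholders):

```python
# Sketch: verify OpenCL is visible to OpenCV and request it for cv2.dnn.
import cv2

print(cv2.ocl.haveOpenCL())   # False means OpenCV cannot use the driver at all
cv2.ocl.setUseOpenCL(True)

net = cv2.dnn.readNetFromCaffe('deploy.prototxt', 'model.caffemodel')  # placeholder model
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL)  # silently falls back to CPU if unsupported
```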

Theano for GPU without use of CUDA or using a CUDA workaround

I have an Intel graphics card (Intel(R) HD Graphics 520) and am on Windows 10. As far as I know, I can't use CUDA unless I have an NVIDIA GPU. The purpose is to use Theano's GPU capabilities (for deep learning, which is why I need GPU power).
Is there a workaround that somehow allows me to use CUDA with my current GPU?
If not, is there another API that I can use with my current GPU for Theano (in Python 2.7)?
Or, as a last option, is there another language entirely, such as Java, with an API that allows GPU use?
Figuring this out would be very helpful, because even though I just started with deep learning, I will probably get to the point where I need GPU parallel processing power to get results without waiting days at a minimum.
In order:
No. You must have a supported NVIDIA GPU to use CUDA.
As pointed out in the comments, there is an alternative backend for Theano which uses OpenCL and which might work on your GPU (see the sketch below).
Intel supports OpenCL on your GPU, so any language bindings for the OpenCL APIs, or libraries with built-in OpenCL support, would be a possible solution in this case.
[This answer has been assembled from comments and added as a community wiki entry in order to get it off the unanswered queue for the CUDA tag].
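For the OpenCL backend mentioned above, device selection in Theano goes through THEANO_FLAGS; here is a minimal sketch, where the opencl0:0 platform/device index is an assumption you would adjust for your machine:

```python
# Sketch: run with the libgpuarray backend pointed at an OpenCL device, e.g.
#   THEANO_FLAGS='device=opencl0:0,floatX=float32' python script.py
# The 0:0 (platform:device) index is an assumption; check it with clinfo.
import theano
import theano.tensor as T

x = T.matrix('x')
f = theano.function([x], T.exp(x))   # compiled for whichever device the flags select
print(f([[0.0, 1.0], [2.0, 3.0]]))
```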

Fastest way to resize image

As in the title: what is the fastest way to resize an image? I'm using Python + OpenCV 2.11 (not OpenCV 3), and cv2.resize() seems very slow.
We can use CUDA with OpenCV 3 (http://www.coldvision.io/2015/12/22/image-resize-with-opencv-and-cuda/), but is it supported in OpenCV 2?
OpenCV 2 has a gpu module, but unfortunately there are no Python bindings for it.
CUDA programming also comes with a pretty big warm-up and code-complexity overhead of its own.
There is a SIMD fork of Pillow (Pillow-SIMD), which claims much better performance than ImageMagick or plain Pillow, but there are no comparisons to OpenCV. It may be worth checking how they compare.
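Pillow-SIMD installs under the same PIL namespace, so plain Pillow code is all that's needed to try it; a small sketch with placeholder filenames:

```python
# Sketch: resize with Pillow. After `pip install pillow-simd` the same code
# runs on the SIMD-accelerated fork, no changes required.
from PIL import Image

with Image.open('input.jpg') as im:
    small = im.resize((640, 480), Image.LANCZOS)  # LANCZOS is high quality; BILINEAR is faster
    small.save('output.jpg')
```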

Why is Theano (much) slower on Windows than on Linux?

I implemented a recursive autoencoder with Theano and tested it on both Linux and Windows. It took ~3 hours and 2.3 GB of memory on Linux, but ~9 hours and 0.5 GB of memory on Windows. config.allow_gc=True in both cases.
It could be a Python issue, as discussed in the thread: Why is python so much slower on windows?
Is there any specific setting in Theano that could slow things down on Windows as well?
Thanks,
Ya
It could be that they use different BLAS libraries. From memory, the autoencoder bottleneck is the matrix product, which calls BLAS. Different BLAS implementations can have up to a 10x speed difference.
So check whether you used the same BLAS on both systems. I would recommend installing Python via the EPD/Canopy or Anaconda Python distributions. Their non-free versions link against a good BLAS, and Theano reuses it. The non-free version is free for academic use.
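A quick way to check whether the two machines really use the same BLAS is to print what NumPy and Theano were linked against; a small sketch (the config attribute path can vary slightly between Theano versions):

```python
# Sketch: compare the BLAS setup on both machines.
import numpy
import theano

numpy.show_config()                # prints the BLAS/LAPACK NumPy was built against
print(theano.config.blas.ldflags)  # the flags Theano uses to link BLAS (may be empty)
```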
