OpenCV 4.0 and AMD processor - Python

Can I somehow use my AMD processor to speed up computations in my Python script? I'm doing object detection using OpenCV 4.0 with the cv2.dnn module.
Based on similar questions I've tried to use cv2.UMat, but it doesn't speed up the computations, so I assume the script still runs on my poor CPU.
GPU info: Advanced Micro Devices, Inc. [AMD/ATI] Thames [Radeon HD 7500M/7600M Series]

Sorry, but AFAICT that's a fairly old GPU (pre-GCN architecture). Those were not really suited for GPGPU computation. It should still be possible to install OpenCL drivers, but I cannot guarantee anything as I can't try it.
I assume you're on Linux. Here is an article on how to use older AMD GPUs with Linux. You would have to follow the steps in the "Running OpenCL on old cards" section, which involve installing Ubuntu 14.04 and the fglrx 15.12 drivers.
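If you do get OpenCL working, you can ask OpenCV's DNN module to target it. A minimal sketch, assuming OpenCV was built with OpenCL support; the model and image paths are placeholders:

```python
import cv2

# Check whether an OpenCL runtime is visible to OpenCV at all.
print(cv2.ocl.haveOpenCL())
cv2.ocl.setUseOpenCL(True)

# "deploy.prototxt", "model.caffemodel" and "input.jpg" are placeholders.
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "model.caffemodel")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL)

img = cv2.imread("input.jpg")
blob = cv2.dnn.blobFromImage(img, size=(300, 300))
net.setInput(blob)
detections = net.forward()
```

Note that setPreferableTarget is only a hint: if the OpenCL target can't actually be used, OpenCV silently falls back to the CPU, which would also explain seeing no speedup from cv2.UMat.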

Related

Use GPU for PIL in Python on Mac (macOS Catalina)

I optimized my code to process images with Pillow. It already uses all available resources to get as fast as possible; only the GPU would make it faster. I can't find any solution besides CUDA, and that won't work on Catalina. Is there any way to use my GPU (NVIDIA GeForce GT 750M 2 GB / Intel Iris Pro 1536 MB) to make the process more efficient?
Thanks for your help!
Actually, there is no way to use Pillow to do that! If you need better speed, you can use ImageMagick (Wand as a Python wrapper) or GraphicsMagick (pgmagick as a Python wrapper). If you need to use the GPU, ImageMagick gives some options to use it where possible (I am not sure about GM), but it is neither as efficient nor as complete as using CUDA or OpenCL directly. I recommend Vulkan if you need better results and cross-platform support (NVIDIA and AMD; macOS, Linux, Windows...).
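A minimal sketch of the Wand route mentioned above; the file names and target size are placeholders, and whether the GPU is actually used depends on ImageMagick having been built with OpenCL support:

```python
from wand.image import Image

# Resize an image through ImageMagick; if ImageMagick was built with
# OpenCL support, some operations may be offloaded to the GPU.
with Image(filename="input.jpg") as img:
    img.resize(800, 600)
    img.save(filename="output.jpg")
```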

Possible to virtualize NVIDIA GeForce GTX 1070 Graphics Card for Distributed Tensorflow?

I am running Windows 10 on Intel Core i7-8700 CPU with 16 GB RAM, 1 TB HDD and dedicated NVIDIA GeForce GTX 1070 graphics card.
I plan to launch 3 Ubuntu instances hosted by my Windows 10 PC.
The Ubuntu instances will be running distributed TensorFlow (tensorflow-gpu) code that will use the GPU for training a neural network. (I have already tried the setup directly on Windows, but it failed.)
Q. Can my NVIDIA GPU be virtualized among those virtual machines or not?
If YES, are there any further configurations required to make this happen?
If NOT, are there any suggestions for building such an experimental environment for distributed TensorFlow?
N.B.
I have read this post saying VMs cannot pass through the host GPU, specifically on Windows for CUDA. But is there any more recent information available, ideally from NVIDIA's side?
Can anyone share a how-to for (possibly) virtualizing the GPU inside an ESXi setup on Windows? Several people here say it's possible and has been done, although it's not officially supported by NVIDIA.
Alternatively, has anybody successfully implemented this suggested solution for GPU pass-through on a Debian-based system?
Thanks.
I would consider @jdehesa's answer, as for now there seems to be no way to virtualize the GPU on Windows for TensorFlow. Thanks to @jdehesa.
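If the goal is just an experimental distributed TensorFlow environment, one workaround (my assumption, not part of the answer above) is to skip the VMs entirely and run several worker processes on the host, sharing the single GPU by capping per-process memory. A minimal TF 1.x sketch with placeholder ports:

```python
import tensorflow as tf

# Three workers on one machine instead of three VMs; ports are placeholders.
cluster = tf.train.ClusterSpec({
    "worker": ["localhost:2222", "localhost:2223", "localhost:2224"]
})

# Cap GPU memory so the three worker processes can share the one GTX 1070.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3)
config = tf.ConfigProto(gpu_options=gpu_options)

# Each process runs this with its own task_index (0, 1 or 2).
server = tf.train.Server(cluster, job_name="worker", task_index=0, config=config)
server.join()
```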

Python: is there a module to measure CPU usage over time

I have a Python program which uses the multiprocessing module. I am trying to check how much memory and CPU it uses over time.
For checking memory I am using memory_profiler and it works perfectly fine. It gives me exactly what I want: a graph of memory usage over time.
Is there any module I could try to check the CPU usage in a similar fashion?
The psutil library is able to give system information (CPU / memory usage):
psutil is a module providing an interface for retrieving information on running processes and system utilization (CPU, memory) in a portable way by using Python, implementing many functionalities offered by tools like ps, top and Windows task manager. It currently supports Linux, Windows, OSX, Sun Solaris, FreeBSD, OpenBSD and NetBSD, both 32-bit and 64-bit architectures, with Python versions from 2.6 to 3.5 (users of Python 2.4 and 2.5 may use the 2.1.3 version).
https://pypi.python.org/pypi/psutil
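A minimal sketch of sampling CPU usage over time with psutil; the sampling interval and duration are arbitrary choices:

```python
import psutil

# Sample this process's CPU usage once per second for a minute;
# pass a PID to psutil.Process() to watch another process instead.
proc = psutil.Process()
samples = [proc.cpu_percent(interval=1.0) for _ in range(60)]
print(samples)
```

The resulting list can then be plotted over time, much like memory_profiler plots its memory samples.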

Theano for GPU without use of CUDA or using a CUDA workaround

I have an Intel graphics card (Intel(R) HD Graphics 520) and am on Windows 10. As far as I know, I can't use CUDA unless I have an NVIDIA GPU. The purpose is to use Theano's GPU capabilities (for deep learning, which is why I need GPU power).
Is there a workaround that somehow allows me to use CUDA with my current GPU?
If not, is there another API that I can use with my current GPU for Theano (in Python 2.7)?
Or, as a last option, is there another language entirely, such as Java, with an API that allows for GPU use?
Figuring this out would be very helpful, because even though I've just started with deep learning, I will probably get to the point where I need GPU parallel processing power to get results without waiting days at a minimum.
In order:
No. You must have a supported NVIDIA GPU to use CUDA.
As pointed out in the comments, there is an alternative backend for Theano which uses OpenCL and which might work on your GPU.
Intel supports OpenCL on your GPU, so any language bindings for the OpenCL APIs, or libraries with built-in OpenCL support, would be a possible solution in this case.
[This answer has been assembled from comments and added as a community wiki entry in order to get it off the unanswered queue for the CUDA tag].
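A minimal sketch of pointing Theano at the OpenCL backend, assuming the experimental libgpuarray backend is installed (OpenCL support there was incomplete, especially on Intel GPUs, so this may or may not work):

```python
# Select the OpenCL device via environment flags, e.g.:
#   THEANO_FLAGS="device=opencl0:0,floatX=float32" python script.py
import numpy as np
import theano
import theano.tensor as T

x = T.matrix("x")
f = theano.function([x], T.exp(x))  # compiled for whatever device Theano selected
print(f(np.random.rand(4, 4).astype("float32")))
```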

Installing PyPy from source on low RAM devices

I have a little bit of a wreck of a computer: 7+ years old, Intel Celeron 430 @ 1.78 GHz, 448 MB of RAM, Lord only knows what motherboard graphics chip, etc., running Lubuntu 14.04 LTS 32-bit. I want to install PyPy, but due to my Linux version I need to build it from source, and their install guide basically suggests that I need much more RAM than I have...
I have tons of hard drive space, which could be used as a pagefile. Is there any way to give the build enough (virtual) RAM by making it use a page file, either up to ~1.6 GB using the low-RAM tweak in their guide, or 2-3 GB to do it without the tweak?
Also, don't worry about the speed of the process, as long as it doesn't exceed much more than 24 hours to build it...
With only ~400 MB of RAM and a big pagefile, it's very likely to never finish in "just" one day (but I don't actually know). You need to build on some other machine and then copy the binary over, or take the binary we provide and then add the exact same versions of all the missing libraries...
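For the swap-file part of the question, creating one on Linux is straightforward (the size is a placeholder; this only makes the build possible in principle, not fast):

```
sudo dd if=/dev/zero of=/swapfile bs=1M count=3072   # 3 GB swap file
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```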
