Is there any way to train numpy neural networks faster? - python

I implemented a Neural Network class using only python and numpy, and I want to do some experiments with it. The problem is that it takes so long to train. My computer does not have a high-end GPU nor a wonderful CPU, so I thought about some sort of 'cloud training'.
I know libraries such as TensorFlow or PyTorch use backends to train neural networks faster, and I was wondering if something similar could be achieved with numpy. Is there a way to run numpy in the cloud?
Even something slow that doesn't use GPUs would be fine for me. I tried loading my files into Google Colab, but it didn't work so well: it stopped running due to inactivity after some time.
Is there any nice solution out there?
Thanks for reading it all!

Try using CuPy instead of NumPy. It runs on the GPU (and works well on a Colab GPU instance), and you should only need to make a few small modifications to your code.
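For a rough idea of what that swap can look like, here is a minimal sketch (the shapes and variable names are made up for illustration; CuPy mirrors most of the NumPy API, so often only the import changes):

    # Minimal sketch: swapping NumPy for CuPy in a simple forward pass.
    import cupy as cp  # instead of: import numpy as np

    # Toy data and weights, created directly on the GPU.
    x = cp.random.randn(64, 784).astype(cp.float32)
    W = cp.random.randn(784, 128).astype(cp.float32)
    b = cp.zeros(128, dtype=cp.float32)

    h = cp.maximum(x @ W + b, 0)  # ReLU layer, computed on the GPU
    h_cpu = cp.asnumpy(h)         # copy back to a NumPy array when needed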

Related

Running a program on CPU and GPU without using two scripts

I am working on solving a problem in Python using both classical machine learning and deep learning. The deep learning models are trained on the GPU, whereas the machine learning models run on the CPU. Since in my code the ML part comes after the DL part, it only executes once the DL part has finished. In theory, since they use different resources, they could run at the same time. Is there any way to do that? One naive way I can think of is to split the code into two scripts and run them separately, but I am looking for a more sophisticated approach.
Thanks
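One way to do this from a single script is to launch the two stages as separate processes, e.g. with the multiprocessing module. The sketch below is only an illustration, under the assumption that each stage can be wrapped in its own function (train_dl and train_ml are hypothetical placeholders for your code):

    # Sketch: run the GPU-bound DL stage and the CPU-bound ML stage concurrently
    # from one script. train_dl() and train_ml() are placeholders.
    from multiprocessing import Process

    def train_dl():
        ...  # deep learning training that uses the GPU

    def train_ml():
        ...  # classical ML training that uses the CPU

    if __name__ == "__main__":
        p_dl = Process(target=train_dl)
        p_ml = Process(target=train_ml)
        p_dl.start()
        p_ml.start()
        p_dl.join()
        p_ml.join()

This only makes sense if the ML stage does not depend on the DL stage's output, as described in the question.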

Utilizing hardware AI accelerators with PyTorch

I'm pretty new to Stack Overflow, and also to using PyTorch. I'm an AI and CS major, and I'm working on a project involving processing video with ML models. I'm not going to get into the details because I want any answers to this question to be generally accessible to others using PyTorch, but the situation is this: I'm currently using PyTorch with VapourSynth, accelerating both with CUDA, and I'm looking into purchasing an AI accelerator like this:
Amazon
Documentation on using these with TensorFlow is pretty easy to find, but I'm having trouble figuring out how to use one of these with PyTorch. Does anybody have experience with this? I'd simply like to be able to use this card to accelerate training a neural net.
It is correct that you would need to convert your code to run on XLA, but that only involves changing a few lines of code. Please refer to the README at https://github.com/pytorch/xla for references and guides. With a few modifications you can get a significant training speedup.
I think the experience of using PyTorch on a TPU will be less smooth than on an NVIDIA GPU. As far as I know, you have to use XLA to convert PyTorch models so that they can run on a TPU.
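For context, a training step converted to XLA typically looks something like the sketch below. Treat it as an illustration based on the pytorch/xla README rather than an exact recipe; model, loader, and loss_fn are assumed to exist already:

    # Sketch: moving a PyTorch training loop onto an XLA device such as a TPU.
    import torch
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()          # the XLA/TPU device
    model = model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        xm.optimizer_step(optimizer)  # XLA-aware replacement for optimizer.step()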

Tensorflow 2.7 GPU Memory not released

I am currently working on 1D convolutional neural networks for time series classification. Recently, I got CUDA working on my GeForce 3080 (which was a pain in itself). However, I noticed some weird behavior when using TensorFlow with CUDA: after training a model, the GPU memory is not released, even after deleting the variables and running garbage collection. I tried resetting the TF graph and closing the TF sessions, but the GPU memory stays allocated. This causes cross-validation to crash, and I have to restart my Python environment every time I want to make changes and retrain my model.
After a tedious search, I found that people were already struggling with this five years ago. However, I am now using TF 2.7 on Ubuntu 20.04.3. Some of my colleagues are using Windows and are not experiencing these problems; they do not seem to have any issues with models failing to retrain because of already-allocated memory.
I found the workaround that uses multiple processes, but I wasn't able to get it to work for my model with 10-fold cross-validation.
Since the issue has been open for more than five years and my colleagues are not having any problems, I was wondering if I am doing something wrong. I think the issue would very likely have been fixed after five years, which is why I suspect my code is the problem here.
Is there any solution or guide for TF 2.7 and GPU memory allocation?
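For what it's worth, the multi-process workaround mentioned above is usually structured roughly like the sketch below: each fold is trained in its own child process, so all GPU memory is freed when that process exits. This is only an illustration; run_fold is a hypothetical placeholder for the per-fold build/train/evaluate code:

    # Sketch: train each cross-validation fold in a separate child process so
    # TensorFlow's GPU memory is released when the process exits.
    import multiprocessing as mp

    def run_fold(fold_idx, queue):
        import tensorflow as tf  # import inside the child so TF initializes there
        score = ...              # build, train, and evaluate the model for this fold
        queue.put((fold_idx, score))

    if __name__ == "__main__":
        ctx = mp.get_context("spawn")  # "spawn" avoids inheriting CUDA state
        queue = ctx.Queue()
        results = []
        for fold_idx in range(10):
            p = ctx.Process(target=run_fold, args=(fold_idx, queue))
            p.start()
            p.join()                   # GPU memory is freed when the child exits
            results.append(queue.get())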

Dlib not using GPU on Google Colab

How do I force training on GPU?
Currently it's only using the CPU, even though dlib.DLIB_USE_CUDA returns True.
It also prints 1 when I run print(dlib.cuda.get_num_devices()).
Here's an attached image showing that nothing is running on the GPU even though I am running the code:
NOTE: The runtime was set to GPU.
Comment:
Apparently, from what I've tested, this wasn't a training error but rather a loading error. It takes a lot of time and RAM to load the ibug-300W files. Is there any way to load them faster?
In case someone else stumbles upon this issue on Google Colab (slow training time):
The way to load the data faster is to transfer the dataset directly onto the Colab VM's /content directory, because the transfer speed between Drive and Colab is slow.
PS: You need at least 14-15 GB of RAM to load the ibug-300W files.
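In code, that copy step might look like the sketch below (the Drive path is only an example; adjust it to wherever the dataset actually lives):

    # Sketch: copy the dataset from mounted Google Drive onto the Colab VM's
    # local disk (/content) so that training reads from fast local storage.
    import shutil
    from google.colab import drive

    drive.mount('/content/drive')
    # The source path is an assumption; change it to match your own Drive layout.
    shutil.copytree('/content/drive/MyDrive/ibug_300W', '/content/ibug_300W')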

How can I speed up the training of a network using my GPU?

I was wondering if there is a way to use my GPU to speed up the training of a network in PyBrain.
Unless PyBrain is designed for that, you probably can't.
You might want to try running your trainer under PyPy if you aren't already -- it's significantly faster than CPython for some workloads. Perhaps this is one of those workloads. :)
Check out Pylearn2: https://github.com/lisa-lab/pylearn2
It's a newer library and can run on GPUs with CUDA via cuda-convnet.
