Use trained Tensorflow Graphs in Matlab - python

I would like to train tensorflow models with the python API but use the trained graphs for inference in Matlab. I searched for possibilities to do this, but I can't seem to figure it out.
Does anybody have a good idea how to do this? Do I have to compile the model with bazel? Do I do it with tensorflow serving? Do I load the metagraph in a C++ function that I include in Matlab?
Please keep in mind that I'm an engineer and don't have extensive programming knowledge :)

In case someone lands here with a similar question, I'd like to suggest tensorflow.m - a Matlab package I am currently writing (available on GitHub).
Although still in development, simple functionality like importing a frozen graph and running an inference is already possible (see the examples) - I believe this is what you were looking for?
The advantage is that you don't need any expensive toolbox, nor a Python/TensorFlow installation on your machine. I'd be glad if the package is of use to anyone looking for a similar solution; even more so if you extend or implement something and open a PR.

Related

Utilizing hardware AI accelerators with PyTorch

I'm pretty new to Stack Overflow, and also to using PyTorch. I'm an AI and CS major, and I'm working on a project involving processing video with ML models. I'm not going to get into the details because I want any answers to this question to be generally accessible to others using PyTorch, but the issue is that I'm using PyTorch with VapourSynth at the moment, accelerating both with CUDA, and I'm looking into purchasing an AI accelerator like this:
Amazon
Documentation on using these with TensorFlow is pretty easy to find, but I'm having trouble figuring out how I can use one of these with PyTorch. Does anybody have experience with this? I'd simply like to be able to use this card to accelerate training a neural net.
It is correct that you would need to convert your code to run on XLA, but that only requires changing a few lines of code. Please refer to the README at https://github.com/pytorch/xla for references and guides. With a few modifications you can get a significant training speedup.
I think the experience of using PyTorch on a TPU would be less smooth than on an NVIDIA GPU. As far as I know, you have to use XLA to convert PyTorch models so that they can run on a TPU.

Deploying Pytorch only for prediction

I've trained my model locally and now I want to use it in my Kubernetes cluster. Unfortunately, all the Docker images for PyTorch are 5+ GB because they contain the scripts for training, which I won't need now. I've created my own image which is only 3.5 GB, but that's still huge. Is there a slim PyTorch version for predictions? If not, which parts of the package can I safely remove, and how?
No easy answer for Python version of PyTorch unfortunately (or at least none I’m aware of).
Python, in general, is not well-suited for Docker deployments, as it carries over all its dependencies (even if you don't need all of their functionality; imports are usually at the top of a file, which makes the removal you mention infeasible for a project of PyTorch's size and complexity).
There is a way out though...
torchscript
Given your trained model you can convert it to traced/scripted version (see here). After you manage that:
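That conversion step might look roughly like this; a minimal sketch, where `TinyNet` and the file name are placeholder stand-ins for your trained network, assuming a recent PyTorch install:

```python
import torch

# Placeholder model standing in for your actual trained network
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet().eval()
example_input = torch.randn(1, 4)

# Tracing records the operations executed on the example input
traced = torch.jit.trace(model, example_input)

# The saved archive can later be loaded without any Python code,
# e.g. from C++ via torch::jit::load("model.pt")
traced.save("model.pt")
```

For models with data-dependent control flow, `torch.jit.script` is the usual alternative to tracing.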
Inference in other languages
Write your inference code in another language, either Java or C++ (see here for more info).
I have only used C++, but you might get there easier with Java, I think.
Results
I managed to get PyTorch CPU inference down to roughly ~32MB; GPU would weigh more and be far more complex, and would probably need ~1GB for the cuDNN dependency alone.
C++ way
Please note the torchlambda project is not currently maintained and I'm the creator; hopefully it gives you some tips at least.
See:
Dockerfile for the image build
CMake used for building
Docs for more info about compilation options etc.
C++ inference code
Additional notes:
It also uses AWS SDKs and you would have to remove them from at least these files
You don't need static compilation - it helps reach the lowest image size I could come up with, but it isn't strictly necessary (skipping it costs an additional ~100MB or so)
Final
Try Java first, as its packaging is probably saner (although the final image would probably be a little bigger)
The C++ way is not tested against the newest PyTorch version and might be subject to change with basically any release
In general it takes A LOT of time and debugging, unfortunately.

Is it possible to run PyTorch in brython script?

I'd like to embed a simple PyTorch model in a webpage. Is this something accomplishable with brython? If not, is there another tool available that would allow for PyTorch scripts to be executed without a separate server hosting the code?
I am not sure about Brython, but I believe TorchServe can be used to accomplish your task.
Take a look at its documentation:
https://pytorch.org/serve/
EDIT: Based on Comments:
So I found this repo, which is like a substitute for TensorFlow.js except it works for PyTorch. It is called torch.js. It should allow your model to work in a program made with Node.js. Since this repo is not as official as TensorFlow.js, another thing I would suggest is to convert your PyTorch model to ONNX, then to TensorFlow, and then to TensorFlow.js. Then you will be able to accomplish your task. Some links about TensorFlow.js can be found here

Keras source codes: how to reach?

To be able to use Keras as a programming tool, sometimes one needs to see the source code of its methods. I know that each of the functions in Keras is implemented openly and is accessible to the public. But unfortunately it is not trivial to find the code on the web before you are experienced enough. For example, https://keras.io/ does not explain the easiest way to find the source for a specific method.
My question is: can someone please point me to the implementation of the softmax activation in Keras with the TensorFlow backend, or recommend a good way to get to it?
You can search the repository on github using the search bar. You'll find it in keras/activations.py, which invokes the same function from the keras backend. All the backends are at keras/backend, and the tensorflow backend specifically is at keras/backend/tensorflow_backend.py. In tensorflow, you can find the corresponding kernel definition at tensorflow/core/kernels/softmax_op.
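For reference, what those backend implementations ultimately compute is the standard numerically stable softmax. A minimal pure-Python sketch of the idea (not the actual Keras code):

```python
import math

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability,
    # which is what the real backend kernels also do
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Usage: probabilities sum to 1, larger logits get larger probabilities
probs = softmax([1.0, 2.0, 3.0])
```

The max-subtraction trick doesn't change the result (it cancels in the ratio) but prevents overflow for large logits.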
There is another way to get the source code which might be useful especially if you are not using the latest version (available on github), so I am adding it here
You can always find the Keras source code directly on your PC if you have the keras package installed. The directory it is installed in is: /python3.your_version/site-packages/keras
It looks like the Keras source code can be found in the Keras GitHub repository.
As opposed to PyTorch, whose documentation for each function has a direct link to the corresponding source code, in Keras the two seem to be disconnected.
One way to find the source for a specific component is to manually go through the folders in the above Git repository.
I did that and found it in the Keras Softmax Source Code.
There might be better ways of getting to this source code, but I am not aware of any.

Neural Network implementation using Pybrain

I am new to Python and I want to implement a simple neural network in Python, using PyCharm. I explored which libraries are used for this purpose and found that PyBrain can be used. I successfully installed it on my system and configured it with PyCharm.
Now I am searching for simple sample code for a neural network using the PyBrain library, but have not found a complete example. The PyBrain docs could also use some more explanation. Is there any way I can find detailed documentation of the functionality used in PyBrain?
As a beginner, did I follow the correct path? Maybe you will feel that this question should not be asked, but as a beginner I have to solve it. If anyone can help me in this regard, I would be thankful.
