I'd like to embed a simple PyTorch model in a webpage. Is this something accomplishable with brython? If not, is there another tool available that would allow for PyTorch scripts to be executed without a separate server hosting the code?
I am not sure about brython, but I believe torchserve can be used to accomplish your task.
Take a look at its documentation:
https://pytorch.org/serve/
EDIT (based on comments):
I found a repo called torch.js that is like a substitute for TensorFlow.js, except it works for PyTorch. It should allow your model to run in a program built with Node.js. Since this repo is not as official as TensorFlow.js, another option I would suggest is converting your PyTorch model to ONNX, then to TensorFlow, and then to TensorFlow.js. That should let you accomplish your task. Some links about TensorFlow.js can be found here.
I have been looking through the spacy documentation on training/fine-tuning spacy models or pipelines, however, after walking through the following guide https://spacy.io/usage/training I found that it all comes down to creating and configuring a single file config.cfg. However, I didn't find a python guide to follow. Hence, I would like to know if this is the new norm, and if it is encouraged to follow this route rather than coding in python. Secondly, if there is a true python guide, I would love to get some references.
Thanks in advance!
Yes, in spaCy v3 it is recommended you use the config file to control training, rather than writing Python code. Besides the main docs, there is an official walkthrough of the process here.
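To make that concrete: in spaCy v3 you typically generate a starter config with `python -m spacy init config config.cfg --lang en --pipeline ner` and then run `python -m spacy train config.cfg --output ./output --paths.train corpus/train.spacy --paths.dev corpus/dev.spacy`. Below is a rough sketch of a few sections you would see in such a file (the paths and pipeline are placeholders; a real generated config contains many more settings):

```ini
[paths]
train = "corpus/train.spacy"
dev = "corpus/dev.spacy"

[nlp]
lang = "en"
pipeline = ["tok2vec","ner"]

[training]
max_steps = 20000
```

Rather than editing training loops in Python, you adjust these sections and let the `spacy train` CLI drive the run.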
I'm pretty new to StackOverflow, but also to using PyTorch. I'm an AI and CS major, and I'm working on a project involving processing video with ML models. I'm not going to get into the details because I want any answers to this question to be generally accessible to others using PyTorch, but the issue is that I'm currently using PyTorch with VapourSynth, accelerating both with CUDA, and I'm looking into purchasing an AI accelerator like this:
Amazon
Documentation on using these with Tensorflow is pretty easy to find, but I'm having trouble trying to answer for myself how I can use one of these with PyTorch. Does anybody have experience with this? I'd simply like to be able to use this card to accelerate training a Neural Net.
It is correct that you would need to convert your code to run on XLA, but that involves changing only a few lines of code. Please refer to the README at https://github.com/pytorch/xla for references and guides. With a few modifications you can get a significant training speedup.
I think the experience of using PyTorch on a TPU would be less smooth than on an NVIDIA GPU. As far as I know, you have to use XLA to convert PyTorch models so they can run on a TPU.
I am following the documentation on the Hugging Face website, where they say that to fine-tune GPT-2 I should use the script
run_lm_finetuning.py for fine-tuning, and the script run_generation.py
for inference.
However, both scripts don't actually exist on GitHub anymore.
Does anybody know whether the documentation is outdated, or where to find those two scripts?
Thanks
It looks like they've been moved around a couple of times and the docs are indeed out of date; the current version can be found in run_language_modeling.py here: https://github.com/huggingface/transformers/tree/master/examples/language-modeling
Links have been moved around again, you can find everything related to language modeling here.
All the needed information, including the location of the old script, is in the README.
To be able to use Keras as a programming tool, one sometimes needs to see the source code of its methods. I know that each function in Keras is implemented openly and is accessible to the public, but unfortunately it is not trivial to find the code on the web until you are experienced enough. For example, https://keras.io/ does not explain the easiest way to find the source for a specific method.
My question here is: can someone please point me to the implementation of the softmax activation in Keras with the TensorFlow backend, or recommend a good way to get to it?
You can search the repository on github using the search bar. You'll find it in keras/activations.py, which invokes the same function from the keras backend. All the backends are at keras/backend, and the tensorflow backend specifically is at keras/backend/tensorflow_backend.py. In tensorflow, you can find the corresponding kernel definition at tensorflow/core/kernels/softmax_op.
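For reference, the function those files implement is simple; here is a pure-Python sketch of softmax (not the Keras code itself), using the same subtract-the-max trick that numerically stable implementations apply:

```python
import math

def softmax(xs):
    # Subtracting the max leaves the result unchanged (the factor cancels
    # in the ratio) but prevents math.exp from overflowing on large inputs.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# The outputs are positive and sum to 1, so they can be read as probabilities.
print(softmax([1.0, 2.0, 3.0]))
```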
There is another way to get the source code, which might be useful especially if you are not using the latest version (available on GitHub), so I am adding it here:
You can always find the Keras source code directly on your PC if you have installed the keras package. The directory it is installed in is: /python3.your_version/site-packages/keras
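That local-install route can also be automated with Python's `inspect` module, which works for any importable object. A sketch, shown here with a standard-library function so it runs anywhere; swap in `keras.activations.softmax` if Keras is installed:

```python
import inspect
import json  # stand-in module; replace with e.g. keras.activations if available

# getsourcefile returns the path of the file that defines the object,
# getsource returns the source text of the object itself.
print(inspect.getsourcefile(json.dumps))
print(inspect.getsource(json.dumps).splitlines()[0])
```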
It looks like the Keras source code can be found in Github for Keras.
Unlike PyTorch, whose documentation links each function directly to its corresponding source code, in Keras the two seem to be disconnected.
One way to find the source for a specific component is to manually go through the folders in the Git repository above.
I did that and found the implementation in Keras Softmax Source Code.
There might be better ways of getting to this source code, but I am not aware of any.
I would like to train tensorflow models with the python API but use the trained graphs for inference in Matlab. I searched for possibilities to do this, but I can't seem to figure it out.
Does anybody have a good idea how to do this? Do I have to compile the model with bazel? Do I do it with tensorflow serving? Do I load the metagraph in a C++ function that I include in Matlab?
Please keep in mind that I'm an engineer and don't have extensive programming knowledge :)
In case someone lands here with a similar question, I'd like to suggest tensorflow.m - a Matlab package I am currently writing (available on GitHub).
Although still in development, simple functionality like importing a frozen graph and running an inference is already possible (see the examples) - I believe this is what you were looking for?
The advantage is that you don't need an expensive toolbox or a Python/TensorFlow installation on your machine. I'd be glad if the package is of use to anyone looking for a similar solution; even more so if you extend or implement something and open a PR.