What is the purpose of the tf.contrib module in TensorFlow?

I'm curious about what tf.contrib is, and why the code would be included in TensorFlow, but not in the main repository.
Furthermore, looking at the example here (from the TensorFlow master branch), I want to find the source for tf.contrib.layers.sparse_column_with_hash_bucket.
These seem like useful routines, but I want to make sure they properly use queues, etc., for pre-fetching/pre-processing examples before I rely on them in a production setting.
It appears to be documented here, but that documentation is from the tflearn project, and tf.contrib.layers.sparse_column_with_hash_bucket doesn't seem to be in that repository either.

In general, tf.contrib contains contributed code. It is meant to contain features and contributions that eventually should get merged into core TensorFlow, but whose interfaces may still change, or which require some testing to see whether they can find broader acceptance.
The code in tf.contrib isn't supported by the TensorFlow team. It is included in the hope that it is helpful, but it might change or be removed at any time; there are no guarantees.
The source of tf.contrib.layers.sparse_column_with_hash_bucket can be found at
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column.py#L365
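For context, a minimal sketch of how such a column was typically used with the contrib-era estimators (TF 1.x only; the feature name "occupation" and the classifier below are purely illustrative):

    import tensorflow as tf  # TensorFlow 1.x, where tf.contrib is still available

    # Hash the string feature "occupation" into 1000 buckets instead of keeping a vocabulary.
    occupation = tf.contrib.layers.sparse_column_with_hash_bucket(
        "occupation", hash_bucket_size=1000)

    # Columns like this are then fed to the (equally contrib-era) estimators.
    classifier = tf.contrib.learn.LinearClassifier(feature_columns=[occupation])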

Related

Deploying PyTorch only for prediction

I've trained my model locally and now I want to use it in my Kubernetes cluster. Unfortunately, all the Docker images for PyTorch are 5+ GB because they contain the scripts for training, which I won't need now. I've created my own image which is only 3.5 GB, but that is still huge. Is there a slim PyTorch version for predictions? If not, which parts of the package can I safely remove, and how?
There is no easy answer for the Python version of PyTorch, unfortunately (or at least none I'm aware of).
Python, in general, is not well-suited for slim Docker deployments because it carries over all of its dependencies (even if you don't need all of their functionality; imports usually sit at the top of each file, which makes the removal you mention infeasible for a project of PyTorch's size and complexity).
There is a way out though...
TorchScript
Given your trained model, you can convert it to a traced/scripted version (see here).
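A rough sketch of that conversion step, using torchvision's resnet18 as a stand-in for your own trained model (the file name model.pt is arbitrary):

    import torch
    import torchvision  # only used here to provide an example model

    # Any trained nn.Module works; random weights are fine for this sketch.
    model = torchvision.models.resnet18().eval()

    # Trace the model with a representative input shape...
    example_input = torch.rand(1, 3, 224, 224)
    traced = torch.jit.trace(model, example_input)

    # ...or use torch.jit.script(model) if forward() contains data-dependent control flow.
    traced.save("model.pt")  # this artifact is what the C++/Java runtimes load

    # Optional sanity check from Python before porting the inference code:
    reloaded = torch.jit.load("model.pt")
    print(reloaded(example_input).shape)

After you manage that: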
Inference in other languages
Write your inference code in another language, either Java or C++ (see here for more info).
I have only used C++, but you might get there more easily with Java, I think.
Results
I managed to get a CPU-only PyTorch inference image down to roughly ~32 MB; a GPU image would weigh more and be far more complex, probably needing ~1 GB for the cuDNN dependency alone.
C++ way
Please note that the torchlambda project (of which I'm the creator) is not currently maintained, but hopefully it gives you some tips at least.
See:
Dockerfile for the image build
CMake used for building
Docs for more info about compilation options etc.
C++ inference code
Additional notes:
It also uses the AWS SDKs, and you would have to remove them from at least these files.
You don't need static compilation; it helps to reach the lowest image size I could come up with, but it is not strictly necessary (skipping it costs an additional ~100 MB or so).
Final
Try Java first, as its packaging is probably saner (although the final image would probably be a little bigger).
The C++ approach has not been tested against the newest PyTorch version and might change with basically any release.
In general it takes A LOT of time and debugging, unfortunately.

Scripts missing for GPT-2 fine-tuning and inference in the Hugging Face GitHub?

I am following the documentation on the Hugging Face website; it says that to fine-tune GPT-2 I should use the script run_lm_finetuning.py for fine-tuning, and the script run_generation.py for inference.
However, neither script actually exists on GitHub anymore.
Does anybody know whether the documentation is outdated, or where to find those two scripts?
Thanks
It looks like they've been moved around a couple of times and the docs are indeed out of date. The current version can be found in run_language_modeling.py here: https://github.com/huggingface/transformers/tree/master/examples/language-modeling
The links have been moved around again; you can find everything related to language modeling here.
All the needed information, including the location of the old script, is in the README.
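If you only need quick inference rather than the example script itself, here is a minimal sketch using the transformers pipeline API (the stock "gpt2" checkpoint is a stand-in; point it at your own fine-tuned model directory instead):

    from transformers import pipeline

    # "gpt2" is just the stock checkpoint; replace it with the path to your fine-tuned model.
    generator = pipeline("text-generation", model="gpt2")

    outputs = generator("The examples folder keeps moving, but", max_length=40, num_return_sequences=1)
    print(outputs[0]["generated_text"])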

Django: requirements.txt

So far I have only known requirements.txt entries like this: Django==2.0. Now I saw this style of writing: Django>=1.8,<2.1.99
Can you explain to me what it means?
requirements.txt is a file where one specifies dependencies. For example, your program here depends on Django (well, you probably do not want to implement Django yourself).
In case one only writes a custom application and does not plan to export it (for example as a library) to other programmers, one can pin the version of the library, for example Django==2.0.1. Then you can always assume (given pip manages to install the correct package) that your environment has the correct version, and thus that if you follow the corresponding documentation, no problems will (well, should) arise.
If you however implement a library, for example mygreatdjangolibrary, then you probably do not want to pin the version: it would mean that everybody who wants to use your library would have to install Django==2.0.1. Imagine that they want a feature that is only available in django-2.1; then, as long as they follow the dependencies strictly, they cannot do this, because your library requires 2.0.1. This is of course not manageable.
So typically in a library, one aims to give as much freedom as possible to the user of the library. It would be ideal if your library worked regardless of the Django version the user installed.
Unfortunately this would result in a lot of trouble for the library developer. Imagine having to take into account that a user can run anything from Django-1.1 up to django-2.1. Through the years, several features have been introduced that the library then cannot use, since the programmer should be conservative and take into account that these features might not exist in the Django version the user installed.
It becomes even worse since Django went through some refactoring: some features have later been removed, so we cannot simply program against django-1.1 and hope that everything works out.
So in that case, it makes sense to specify a range of versions we support. For example, we can read the documentation of django-2.0, look at the release notes to see if something relevant changed in django-2.1, and let tox run our tests against both versions. We can then specify a range like Django>=2.0,<2.1.99.
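To see concretely which versions such a specifier admits, here is a small sketch using the packaging library (which pip itself uses internally to parse these specifiers; pip install packaging if needed):

    from packaging.specifiers import SpecifierSet

    spec = SpecifierSet(">=1.8,<2.1.99")  # the specifier from the question

    for version in ["1.7", "1.8", "2.0.1", "2.1.7", "2.2"]:
        print(version, version in spec)
    # prints: 1.7 False, 1.8 True, 2.0.1 True, 2.1.7 True, 2.2 False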
This is also important if you depend on several libraries that each share a common requirement. Say for example you want to install a library liba and a library libb; both depend on Django, but the two have different ranges, for example:
liba:
Django>=1.10, <2.1
libb:
Django>=1.9, <1.11
This means that we can only install a Django version that is >=1.10 and <1.11.
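The overlap can also be computed programmatically; a small sketch with the same packaging library, using the hypothetical liba/libb requirements from above:

    from packaging.specifiers import SpecifierSet

    liba = SpecifierSet(">=1.10,<2.1")  # liba's Django requirement
    libb = SpecifierSet(">=1.9,<1.11")  # libb's Django requirement

    combined = liba & libb  # a Django version must satisfy both sets at once

    candidates = ["1.9", "1.10", "1.10.8", "1.11", "2.0"]
    print(list(combined.filter(candidates)))  # ['1.10', '1.10.8']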
The above easily gets even more complex, since liba and libb of course have versions as well, for example:
liba-0.1:
Django>=1.10, <2.1
liba-0.2:
Django>=1.11, <2.1
liba-0.3:
Django>=1.11, <2.2
libb-0.1:
Django>=1.5, <1.8
libb-0.2:
Django>=1.10, <2.0
So if we now want to install any liba and any libb, we need to find a version of liba and a version of libb that "allow" us to install a common Django version, and that is not trivial: for example, if we were to pick libb-0.1, then there is no version of liba that supports an "overlapping" Django version.
To the best of my knowledge, pip currently has no dependency resolution algorithm. It looks at the specification, each time aims to pick the most recent version that satisfies the constraints, and recursively installs the dependencies of these packages.
Therefore it is up to the user to make sure that (sub)dependencies do not conflict: if we were to specify liba libb==0.1, then pip would probably install Django-2.1, and only then find out that libb cannot work with it.
There are some dependency resolution programs, but the problem turns out to be quite hard (it is NP-hard, if I recall correctly). So for a given dependency tree, it can take years to find a valid configuration.

Keras source code: how to reach it?

To be able to use Keras as a programming tool, sometimes one needs to see the source code of its methods. I know that every function in Keras is implemented in the open and is accessible to the public. But unfortunately, it is not trivial to find the code on the web until you are experienced enough. For example, https://keras.io/ does not explain the easiest way to find the source of a specific method.
My question is: can someone please point me to the implementation of Keras's softmax activation with the TensorFlow backend, or recommend a good way to get to it?
You can search the repository on GitHub using the search bar. You'll find it in keras/activations.py, which invokes the same function from the Keras backend. All the backends are at keras/backend, and the TensorFlow backend specifically is at keras/backend/tensorflow_backend.py. In TensorFlow itself, you can find the corresponding kernel definition at tensorflow/core/kernels/softmax_op.
There is another way to get the source code which might be useful, especially if you are not using the latest version (the one available on GitHub), so I am adding it here.
You can always find the Keras source code directly on your PC if you have installed the keras package. It is installed under your environment's site-packages directory: /python3.your_version/site-packages/keras
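Instead of browsing those folders by hand, you can also ask Python itself where a function lives; a small sketch (the printed path will of course differ per installation):

    import inspect
    from keras import activations

    # File on disk that defines the function you are interested in
    print(inspect.getsourcefile(activations.softmax))

    # The implementation itself, printed to the console
    print(inspect.getsource(activations.softmax))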
It looks like the Keras source code can be found at Github for Keras.
As opposed to PyTorch, whose documentation for each function has a direct link to the corresponding source code, in Keras the two seem to be disconnected.
One way to find the source for a specific component is to manually go through the folders in the above Git repository.
I did that and found it in Keras Softmax Source Code.
There might be better ways of getting to this source code, but I am not aware of any.

Use trained Tensorflow Graphs in Matlab

I would like to train TensorFlow models with the Python API but use the trained graphs for inference in Matlab. I searched for ways to do this, but I can't seem to figure it out.
Does anybody have a good idea how to do this? Do I have to compile the model with Bazel? Do I do it with TensorFlow Serving? Do I load the metagraph in a C++ function that I include in Matlab?
Please keep in mind that I'm an engineer and don't have extensive programming knowledge :)
In case someone lands here with a similar question, I'd like to suggest tensorflow.m, a Matlab package I am currently writing (available on GitHub).
Although it is still in development, simple functionality like importing a frozen graph and running inference is already possible (see the examples) - I believe this is what you were looking for?
The advantage is that you don't need any expensive toolbox, nor a Python/TensorFlow installation on your machine. I'd be glad if the package can be of use to someone looking for a similar solution; even more so if you extend/implement something and open a PR.
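For completeness, the frozen graph that such a package imports is usually produced on the Python side. A rough sketch with the TF 1.x freezing utilities (the checkpoint paths and the "output" node name are placeholders for your own graph):

    import tensorflow as tf  # TensorFlow 1.x style API

    with tf.Session() as sess:
        # Restore the trained variables from your checkpoint (paths are placeholders).
        saver = tf.train.import_meta_graph("model.ckpt.meta")
        saver.restore(sess, "model.ckpt")

        # Bake the variable values into constants so the graph is self-contained.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_node_names=["output"])

        # Write the frozen GraphDef; this .pb file is what you would then load from Matlab.
        tf.train.write_graph(frozen, ".", "frozen_model.pb", as_text=False)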
