I have deep learning code for object detection. I ran the code on Google Colab and then exported the model to use it locally. Now, to run the model, I have to install the whole TensorFlow package again, which is quite heavy for my system.
I want to ask if there is a way to download and run only specific parts of the TensorFlow library.
I use TensorFlow in only two places in my code, yet I have to install the whole library for it.
This is where I load the model:
detect_fn = tf.saved_model.load(PATH_TO_SAVED_MODEL)
This is the second place where I use TensorFlow:
input_tensor = tf.convert_to_tensor(image_rgb)
These are the only two functions I need from the TensorFlow library, not the whole thing... Thanks in anticipation.
Though I'm not entirely sure about the library as a whole, there is a Lite version of TensorFlow (I guess they realised 430 MB is a bit much, too).
Information regarding this can be found here:
https://www.tensorflow.org/lite/
A guide (linked below) details how to pick and choose parts of the Lite library; although I haven't used it myself, I would expect some degree of compatibility between the two...
https://www.tensorflow.org/lite/guide/reduce_binary_size
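If your SavedModel can be converted to the .tflite format (the conversion itself still needs full TensorFlow, e.g. once in Colab), inference can then run locally with only the small tflite-runtime package instead of the full library. A minimal sketch, with a hypothetical model file name:

import numpy as np
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

interpreter = Interpreter(model_path="detect.tflite")  # hypothetical converted model
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape and dtype;
# replace with your preprocessed image.
image = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()
detections = interpreter.get_tensor(output_details[0]["index"])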
Related
I'm working on an ASR model, using transfer learning from the wav2vec 2.0 model.
Whenever I want to show or modify an audio file, I get this problem:
def prepare_dataset(batch):
    audio = batch["audio"]
    # batched output is "un-batched"
    batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    batch["input_length"] = len(batch["input_values"])
    with processor.as_target_processor():
        batch["labels"] = processor(batch["sentence"]).input_ids
    return batch
common_voice_train = common_voice_train.map(prepare_dataset, remove_columns=common_voice_train.column_names)
common_voice_test = common_voice_test.map(prepare_dataset, remove_columns=common_voice_test.column_names)
The errors:
RuntimeError: Backend "sox_io" is not one of available backends: ['soundfile'].
ImportError: To support decoding 'mp3' audio files, please install 'sox'.
These are my PyTorch and torchaudio versions:
import torch
import torchaudio
print(torch.__version__)
print(torchaudio.__version__)
1.13.1+cu117
0.13.1+cu117
I really need help fixing this problem; this is part of my junior project! )':
I've tried reinstalling PyTorch and installing different versions, but nothing worked. The code works fine in Colab, but it's impossible for me to train it there, so I have to use VS Code...
First, note that the second error message is not from torchaudio, and it is not accurate: TorchAudio does not depend on an external sox package.
TorchAudio provides limited I/O features on Windows, as libsox does not compile on Windows with VS2019. This situation is being worked on, but as of v0.13, Windows users need a workaround.
A simple way is to use another library like soundfile and convert the decoded NumPy ndarray into a PyTorch Tensor.
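A minimal sketch of that soundfile route (note that MP3 decoding with soundfile requires a recent libsndfile):

import soundfile as sf
import torch

def load_audio(path):
    # Decode with soundfile, then wrap the NumPy ndarray in a Tensor.
    # soundfile returns shape (frames, channels); transpose if you need
    # torchaudio's (channels, frames) convention.
    array, sample_rate = sf.read(path)
    return torch.from_numpy(array).float(), sample_rate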
Another way is to install FFmpeg and use torchaudio.io.StreamReader. You can write your own load function by following a tutorial like this one:
https://pytorch.org/audio/0.13.1/tutorials/streamreader_basic_tutorial.html#sphx-glr-tutorials-streamreader-basic-tutorial-py
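Roughly, such a load function could look like this (an untested sketch; it requires an FFmpeg build that torchaudio can find):

import torch
from torchaudio.io import StreamReader

def load_audio_ffmpeg(path):
    streamer = StreamReader(path)
    info = streamer.get_src_stream_info(streamer.default_audio_stream)
    streamer.add_basic_audio_stream(frames_per_chunk=4096)
    # stream() yields one tuple of chunks per configured output stream;
    # each audio chunk has shape (frames, channels).
    chunks = [chunk for (chunk,) in streamer.stream()]
    waveform = torch.cat(chunks).t()  # -> (channels, frames)
    return waveform, int(info.sample_rate)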
I am working on a project that requires me to use a pre-trained TensorFlow model (an MLP) and make predictions. But when creating the executable file with PyInstaller, its size exceeds 2 GB. I inspected it and found that TensorFlow alone takes up approximately 1.3 GB of that.
So my main question is whether I can reduce the size of this executable, for example by using an API for serving. However, every solution I have seen requires TensorFlow as a dependency, which would require TensorFlow hooks and make my executable larger.
The closest I found was this Stack Overflow question: Serve Tensorflow model without installing Tensorflow. However, this requires that I already know what my model is, and since it uses NumPy, I can't use CUDA acceleration in the future. Also, if there are other non-GPU optimizations in TensorFlow (besides basic multithreading in NumPy), I would miss out on them by using just NumPy.
So, is there a TensorFlow API that can just serve the model and keep my executable small? Some additional information: this is what my hook-tensorflow.py looks like:
from PyInstaller.utils.hooks import collect_all

def hook(hook_api):
    packages = [
        'tensorflow',
        'tensorflow_core',
        'keras',
        'astor',
    ]
    for package in packages:
        datas, binaries, hiddenimports = collect_all(package)
        hook_api.add_datas(datas)
        hook_api.add_binaries(binaries)
        hook_api.add_imports(*hiddenimports)
Is TensorFlow Lite what you are looking for?
'TensorFlow Lite is a mobile library for deploying models on mobile, microcontrollers and other edge devices'
https://www.tensorflow.org/lite
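One pattern that might help (a sketch, not tested against your model): do the TensorFlow-dependent conversion once on a development machine, and ship only the resulting .tflite file plus the small tflite-runtime package in the PyInstaller bundle, so the full TensorFlow hooks are no longer needed. The path below is a placeholder.

import tensorflow as tf

# One-time conversion on a machine with full TensorFlow installed.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
with open("model.tflite", "wb") as f:
    f.write(converter.convert())

At runtime, the executable would then load model.tflite with tflite_runtime.interpreter.Interpreter, as sketched in the first answer above.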
I have this weird requirement where I have two modules:
a chatbot (using Rasa)
an image classifier
I want to create a single Flask web service for both of them.
The issue is that Rasa uses TensorFlow 1.x, while my image classifier module is built with TensorFlow 2.x.
I know I cannot have both versions of TensorFlow in one environment (if there's a way, I am not aware of it).
As Rasa doesn't support TensorFlow 2.x yet, the one option I'm left with is downgrading TensorFlow for the image classifier.
Is there a way to tackle this issue without changing my existing code, or with minimal changes?
I am working on a letter recognition application for a robot. I used my home PC to train the model and wanted the recognition to run on an RPi Zero W with the already-trained model.
I got an HDF5 model. When I try to install TensorFlow on the RPi Zero, it throws a hash error; as far as I can tell, this is because TF is built for 64-bit machines. When I try to install TensorFlow Lite, the installation stalls and crashes.
For saving the model I use:
classifier.save('test2.h5')
These are the prediction lines:
import numpy as np
from tensorflow import keras as ks  # assumption: 'ks' is Keras imported under this alias

test_image = ks.preprocessing.image.load_img('image.jpg')
test_image = ks.preprocessing.image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)  # predict() expects a batch dimension
result = classifier.predict(test_image)
I also tried to compile the Python script via Nuitka, but as the RPi is ARM and Nuitka does not offer cross-compilation, this option fell through.
You can use the already available TFLite builds to solve your issue.
If that does not help, you can also build TFLite from source.
Please refer to the links below:
https://www.tensorflow.org/lite/guide/build_rpi
https://medium.com/@haraldfernengel/compiling-tensorflow-lite-for-a-raspberry-pi-786b1b98e646
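As a sketch of the TFLite route, the HDF5 model you already have could be converted once on the training PC (which has full TensorFlow), so that only the resulting .tflite file and the much smaller TFLite runtime need to run on the Pi Zero:

import tensorflow as tf

model = tf.keras.models.load_model('test2.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open('test2.tflite', 'wb') as f:
    f.write(converter.convert())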
What is the equivalent of tf.contrib.image.transform in TensorFlow 2.0? When I use the tf_upgrade_v2 conversion script, I get the error:
ERROR: Using member tf.contrib.image.transform in deprecated module tf.contrib. tf.contrib.image.transform cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
There is no direct equivalent. Some of the ops are implemented in TensorFlow Addons, though, which is only available on Linux, not Windows, for now.
Here is the link to TensorFlow Addons:
https://github.com/tensorflow/addons
Here is the link to one of their image ops, which is similar to tf.contrib.image.transform:
https://github.com/tensorflow/addons/blob/master/docs/tutorials/image_ops.ipynb
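For illustration, a minimal sketch of that op (the transform vector follows the same [a0, a1, a2, b0, b1, b2, c0, c1] convention as tf.contrib.image.transform):

import tensorflow as tf
import tensorflow_addons as tfa

image = tf.random.uniform([1, 64, 64, 3])  # dummy batch of one RGB image
# Output pixel (x, y) samples input pixel (a0*x + a1*y + a2, b0*x + b1*y + b2),
# so this shifts the image content 5 pixels right and 10 pixels down.
shifted = tfa.image.transform(image, [1.0, 0.0, -5.0, 0.0, 1.0, -10.0, 0.0, 0.0])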
Hope this helped.
As Amir said, this does not seem to be part of the new API.
Maybe you can look at the TensorFlow Graphics library. I have not found an equivalent of transform there for the moment, but I think such an implementation could fit in this library.