I'm super new to TensorFlow, and I'm following the tutorials on its webpage.
I already understand the code from the MNIST dataset tutorial, but I would like to save the model so I can load it afterwards and test it against my own image set.
I've tried many ways of saving it, but I keep failing.
I'm talking about this tutorial.
Any help will be appreciated!
Edit: Everywhere I look, I see a Session variable, but in this example I don't, and that confuses me...
Do you know how I can save the model from the tutorial and reuse it?
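Edit 2: To make it concrete, this is the kind of save/load flow I am trying to get working, assuming the tutorial's model is a tf.keras model (which would explain why no Session appears); the layer sizes and file names are just placeholders:

import tensorflow as tf

# A stand-in for the tutorial's model; the layer sizes here are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# ... model.fit(...) as in the tutorial ...

# Save architecture + weights in one file (no Session involved):
model.save('mnist_model.h5')

# Later, in another script, reload it and run it on my own images
# (expected shape (N, 28, 28) with values scaled to [0, 1]):
restored = tf.keras.models.load_model('mnist_model.h5')
# predictions = restored.predict(my_images)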
I'm interested in training a YOLOv5 model. Currently, I'm using Roboflow to annotate and export the data into YOLOv5 format. I'm also using Roboflow's Colab Notebook for YOLOv5.
However, I'm not familiar with many of the commands used in the Roboflow Colab Notebook. I found on here that there appears to be a much more "Pythonic" way of using and manipulating the YOLOv5 model, which I would be much more familiar with.
My questions regarding this are as follows:
1. Is there an online resource that can show me how to train YOLOv5 and extract results after importing the model from PyTorch with the "Pythonic" version (perhaps a snippet of code right here on Stack Overflow would help)? The official documentation that I could find (here) also uses the "non-Pythonic" method for the model.
2. Is there any important functionality I would lose if I were to switch to this "Pythonic" method of using YOLOv5?
3. I found nothing in the documentation that suggests otherwise, but would I need to export my data in a different format from Roboflow to be able to train the "Pythonic" model?
4. Similar to question 1, is there anywhere that can guide me on how to use the trained model on test images? Do I simply do prediction = model('my_image.jpg')? What if I want predictions on multiple images at once?
Any guidance would be appreciated. Thanks!
You can use the ultralytics GitHub repository to do what you want; if you want to understand the process, check out the train.py file. There isn't a straightforward explanation; you just have to learn it by yourself.
For the training: writing the code yourself would require a lot of ML knowledge; that's why train.py exists, and the same goes for test.py and export.py.
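If you still want a more "Pythonic" inference path, here is a minimal sketch using the torch.hub entry point of the ultralytics/yolov5 repo; 'best.pt' is assumed to be the weights file that train.py produced for your dataset:

import torch

# Load your trained weights through the repo's torch.hub interface.
# 'best.pt' is assumed to be the checkpoint written by train.py for your data.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

# Inference on a single image or a batch (file paths, PIL images or numpy arrays).
results = model(['my_image.jpg', 'another_image.jpg'])

results.print()                   # per-image summary in the console
boxes = results.pandas().xyxy[0]  # detections for the first image as a DataFrame
print(boxes)

Training itself still goes through train.py, so the hub interface mostly replaces the inference side rather than the training loop.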
I am trying to load the 300W_LP dataset in TensorFlow.
I downloaded and extracted the dataset manually at C:/datasets/the300w
Now when I try to load the dataset into TensorFlow using
the300w = tfds.load('the300w_lp', data_dir='C:/datasets/the300w', download=False)
it gives me the error:
Dataset the300w_lp: could not find data in C:\datasets\the300w. Please make sure to call dataset_builder.download_and_prepare(), or pass download=True to tfds.load() before trying to access the tf.data.Dataset object.
Please help. How do I load the dataset in TensorFlow?
Try to use plain old
dataset = tfds.load('the300w_lp')
It works fine for me. Maybe you somehow unzipped the dataset file incorrectly? If you have spare time, try the above code and see if it works.
A simple way to tackle this issue: run the above command in Google Colab, grab a portion of the dataset object, download it, and use it for your own purposes :)
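For reference, a minimal sketch of that Colab approach; the split slice and the 'image' feature key are assumptions you should check against the dataset's feature dictionary:

import tensorflow_datasets as tfds

# Let TFDS download and prepare the dataset itself instead of pointing it at a
# manually extracted folder; take only a small slice if you just want a sample.
ds = tfds.load('the300w_lp', split='train[:1%]')

for example in ds.take(3):
    image = example['image']  # feature keys depend on the dataset's feature dict
    print(image.shape)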
Long story short:
How do I prepare data for retraining the LSTM object detection implementation from the TensorFlow models master GitHub repository?
Long story:
Hi all,
I recently found an implementation of an LSTM object detection algorithm, based on this paper:
http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Mobile_Video_Object_CVPR_2018_paper.pdf
in the TensorFlow models master GitHub repository (https://github.com/tensorflow/models/tree/master/research/lstm_object_detection).
I would like to retrain this implementation on my own dataset to evaluate the LSTM improvement over other algorithms like SSD, but I keep struggling with how to prepare the data for training. I've tried the authors' config file and tried to prepare the data similarly to the Object Detection API, and I also tried to use the same procedure as inputs/seq_dataset_builder_test.py or inputs/tf_sequence_example_decoder_test.py. Sadly, the GitHub README does not provide any information. Someone else created an issue with a similar question on the GitHub repo (https://github.com/tensorflow/models/issues/5869), but the authors have not provided a helpful answer yet. I tried to contact the authors via email a month ago, but didn't get a response. I've also searched the internet but found no solution. Therefore I desperately write to you!
Is there anybody out there who can explain how to prepare the data for retraining and how to actually run it?
Thank you for reading, any help is really appreciated!
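Edit: for reference, this is the kind of tf.train.SequenceExample I have been trying to build for each clip; the feature keys below are guesses that still need to be checked against inputs/tf_sequence_example_decoder.py:

import tensorflow as tf

def make_sequence_example(encoded_frames, boxes_per_frame):
    # encoded_frames: list of JPEG-encoded bytes, one entry per frame
    # boxes_per_frame: per-frame list of boxes, each [ymin, xmin, ymax, xmax], normalized
    image_list = tf.train.FeatureList(feature=[
        tf.train.Feature(bytes_list=tf.train.BytesList(value=[frame]))
        for frame in encoded_frames
    ])
    ymin_list = tf.train.FeatureList(feature=[
        tf.train.Feature(float_list=tf.train.FloatList(value=[b[0] for b in boxes]))
        for boxes in boxes_per_frame
    ])
    # ... xmin / ymax / xmax feature lists built the same way ...
    return tf.train.SequenceExample(feature_lists=tf.train.FeatureLists(feature_list={
        'image/encoded': image_list,  # key name is a guess, verify against the decoder
        'bbox/ymin': ymin_list,       # key name is a guess, verify against the decoder
    }))

# Then serialize the clips into a TFRecord file:
# with tf.io.TFRecordWriter('clips.tfrecord') as writer:
#     writer.write(make_sequence_example(frames, boxes).SerializeToString())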
I'm a TensorFlow beginner, so excuse my question if it is stupid.
I checked some GitHub code that implements a CNN using MNIST data and TensorFlow.
The link is below:
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/convolutional_network.py
However, I need to save the model generated by this code, but I don't know how to do it. Since this code does not involve the use of sessions, how do I incorporate a session into it?
Would appreciate your response.
The linked code uses tf.estimator.Estimator to train the model. Its documentation describes how to save the model using export_savedmodel. A saved model can be imported by specifying its location through the model_dir argument of the tf.estimator.Estimator initialiser.
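For completeness, a minimal sketch of that export call; the feature name ('images') and the flat 784-pixel shape are assumptions based on how the linked example feeds MNIST, and model stands for the tf.estimator.Estimator built in that code:

import tensorflow as tf

# Serving input: feature name and shape must match what model_fn expects;
# 'images' with shape [None, 784] is an assumption taken from the linked example.
def serving_input_receiver_fn():
    inputs = {'images': tf.placeholder(tf.float32, shape=[None, 784])}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

# Export a SavedModel (TF1-style estimator API):
# export_dir = model.export_savedmodel('exported_model', serving_input_receiver_fn)

# To keep reusing the estimator itself, construct it with a fixed model_dir so its
# checkpoints are written there and restored automatically on the next run:
# model = tf.estimator.Estimator(model_fn, model_dir='./mnist_ckpt')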
I have a few thousand pictures that I want to use to train a model with TFLearn, but somehow I have problems getting the images into the right data format. I tried tflearn.data_utils.image_preloader(), but I'm getting a ValueError. So I tried to write my own image_loader.py file, but with so many pictures my RAM fills up.
Does anyone know a good tutorial, example, or anything else for writing a CNN with my own image set, along with details on how to preprocess the data for TFLearn?
The TensorFlow tutorial is a good place to start. Preprocessing is covered in this section.
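If you want to stay with TFLearn, here is a minimal sketch of image_preloader in 'folder' mode (the directory layout and image size are assumptions); the preloader is lazy, so images are read on the fly instead of filling RAM up front:

from tflearn.data_utils import image_preloader

# 'dataset/' is assumed to contain one subdirectory per class (mode='folder').
X, Y = image_preloader('dataset/',
                       image_shape=(64, 64),
                       mode='folder',
                       categorical_labels=True,
                       normalize=True)

# X and Y can then be passed straight to model.fit(X, Y, ...) in TFLearn.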