How to retrain model in graph (.pb)? - python

I have a model saved as a frozen graph (.pb file). The model is now inaccurate and I would like to improve it. I have pictures with additional data to train on, but I don't know if this is possible or how to do it. The result must be a new .pb graph modified with the new data.

It's a good question, and it would be nice if someone could explain how to do this. What I can add is that you would run into "catastrophic forgetting", so it likely wouldn't work out; you would have to train on all of your data again.
Anyway, I would also like to know this, especially for SSD, just for testing purposes.

The mozilla/DeepSpeech community has contributed a way to initialize training from a frozen graph (.pb). It does not restore optimizer parameters, so adjusting the learning rate is necessary.
You can find the code at:
https://github.com/mozilla/DeepSpeech/blob/master/DeepSpeech.py#L1562
Hope this helps!
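As a rough, hedged sketch of the first step (this is not the DeepSpeech code itself): in a frozen .pb the weights are stored as constants, so they can be read back out without any checkpoint and used to seed a fresh trainable graph. The file name and tensor names below are placeholders for your own graph, and the TF1 compatibility APIs are assumed:

```python
import tensorflow as tf

def load_frozen_weights(pb_path, tensor_names):
    # Parse the serialized GraphDef from the frozen .pb file.
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    # Import it into a fresh graph so tensors can be looked up by name.
    graph = tf.Graph()
    with graph.as_default():
        tf.compat.v1.import_graph_def(graph_def, name="")
    # Frozen weights are constants, so a plain session run reads their values.
    with tf.compat.v1.Session(graph=graph) as sess:
        return {name: sess.run(graph.get_tensor_by_name(name + ":0"))
                for name in tensor_names}

# Hypothetical usage: pull weight arrays out, then assign them to matching
# variables in a new trainable graph before continuing training.
# weights = load_frozen_weights("model.pb", ["conv1/kernel", "conv1/bias"])
```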

Related

Custom trained tensorflow model efficientdet is not detecting anything

I followed this tutorial by Gilbert Tanner
https://gilberttanner.com/blog/tensorflow-object-detection-with-tensorflow-2-creating-a-custom-model/
to make a custom trained Tensorflow object detection model. There were just a few incompatibilities I had to fix, but overall everything worked out. But now, when I try to test the trained model with the testing script mentioned in the tutorial (I copied it to my local machine as a script because I don't like working with the notebook; I just hope this is not causing the problem), it just doesn't work. There are no errors and no warnings; it simply doesn't detect anything in any picture.
The model was supposed to identify members of my family in old pictures. It was trained with roughly 300 pictures, which on average contained 5 family members each. In total, 17 different family members were labeled. The model trained until the learning rate dropped to 0, which took about 60 hours. I had to reduce the batch_size argument in the model's config file to 1 because it kept overflowing my memory otherwise.
I suppose the given information is not enough to solve the problem. However, this is my first time working with custom trained models, so I don't really know what information is needed. Feel free to ask for additional outputs or other information.
Thanks in advance to anyone who can help me.

Combine image and tabular data in pytorch (extending ResNet)

I am working on my master's thesis, and I want to extend the ResNet50 model to add tabular data. Does anyone have experience with a similar task? I use an iterative DataLoader, which may cause problems. In general, I would like to ask whether it's a good idea to create a network with mixed data types (image + tabular) and whether this is the right approach. Thanks in advance!
This might be an old question, but I find this blog post answers it very well:
Markus Rosenfelder's blog
In summary, it explains how to combine a CNN (like your ResNet50) and tabular input into one model with a combined output (using PyTorch and PyTorch Lightning, but the tutorial is so well done that you can easily adapt the technique to whatever you are using). The tutorial covers the whole process, so your problems with the DataLoader might be addressed as well.
Hope this helps!
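As a rough illustration of the two-branch idea from that post (not the post's actual code), a PyTorch module can run the image through a CNN branch and the tabular features through an MLP branch, then concatenate before the final head. The layer sizes here are illustrative placeholders, and a real ResNet50 backbone would replace the tiny CNN:

```python
import torch
import torch.nn as nn

class ImageTabularNet(nn.Module):
    def __init__(self, n_tabular, n_classes):
        super().__init__()
        # Image branch: a stand-in for a real backbone such as ResNet50.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())      # -> (batch, 8)
        # Tabular branch: a small MLP over the numeric features.
        self.tab = nn.Sequential(nn.Linear(n_tabular, 8), nn.ReLU())
        # Combined head over the concatenated features.
        self.head = nn.Linear(8 + 8, n_classes)

    def forward(self, image, tabular):
        fused = torch.cat([self.cnn(image), self.tab(tabular)], dim=1)
        return self.head(fused)
```

A DataLoader for this simply needs to yield `(image, tabular, label)` tuples so both branches receive their inputs in each batch.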

How to develop self-learning gradient boosting classifier

I have trained a Gradient Boosting classifier, but when I validated the model on completely new data, the results were poor because the data was totally different.
I have sample data from a production process, and my supervisor says it is normal for the errors in the production process to change rapidly (e.g. when there are new software upgrades). So she advised me to develop a self-learning algorithm from the one I have already trained.
When I googled for solutions, I found only general discussions of the topic, but no real instructions to get me to a solution.
Could anybody explain how to do this?
I am not sure whether this is possible with my GB classifier, but I tried several algorithms on the data and this one was the best.
Thank you.
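One hedged option, assuming the classifier is scikit-learn's GradientBoostingClassifier: it has no true online `partial_fit`, but with `warm_start=True` you can grow additional boosting stages when new production data arrives instead of refitting from scratch. The synthetic data below stands in for the original and new samples; note that the added stages are fit only against the data you pass in, so periodic full retraining is usually safer:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-ins for the original training data and new production data.
X_old, y_old = make_classification(n_samples=200, random_state=0)
X_new, y_new = make_classification(n_samples=100, random_state=1)

clf = GradientBoostingClassifier(n_estimators=100, warm_start=True)
clf.fit(X_old, y_old)            # initial training: 100 stages

clf.n_estimators += 50           # request 50 extra boosting stages
clf.fit(X_new, y_new)            # with warm_start, only the new stages are fit
```

The ensemble keeps its original 100 trees and appends 50 new ones trained on the fresh data, which approximates the "self-learning" update your supervisor suggested.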

Keras Online Learning problem in implementation

I want an LSTM to learn from newer data. It needs to update itself depending on the trend in the new data, and I wish to save this, say, in a file. Then I wish to load this pre-trained file into any other fresh X, Y, Z files where testing is done. So I wish to 're-fit' [update, NOT re-train] the model with new data such that the model parameters are just updated and not re-initialized. I understand this is online learning, but how do I implement it through Keras? Can someone please advise how to implement it successfully?
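A minimal sketch of this update-then-save workflow, assuming standard Keras calls (the shapes, file name, and random data below are illustrative, not from the question): calling `fit()` again on an already-compiled model updates the existing weights rather than re-initializing them, and saving in the native `.keras` format also preserves the optimizer state so a reloaded model continues updating where it left off:

```python
import numpy as np
from tensorflow import keras

# Build and train once on the initial data.
model = keras.Sequential([
    keras.layers.LSTM(8, input_shape=(5, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

X1, y1 = np.random.rand(32, 5, 1), np.random.rand(32, 1)
model.fit(X1, y1, epochs=1, verbose=0)
model.save("lstm_online.keras")          # weights + optimizer state

# Later, in another script: reload and re-fit on fresh data.
model = keras.models.load_model("lstm_online.keras")
X2, y2 = np.random.rand(32, 5, 1), np.random.rand(32, 1)
model.fit(X2, y2, epochs=1, verbose=0)   # updated, not re-initialized
```

Repeating the load/fit/save cycle as each batch of new data arrives gives an online-learning loop, though older trends will gradually be forgotten if they never reappear in the new data.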

Tensorflow estimator: predict without loading from checkpoint every time

I am using Estimator in TensorFlow (1.8) and Python 3.6 to build a neural network for my reinforcement learning project. I notice that every time you use estimator.predict(), TensorFlow loads the checkpoint under the model_dir. But this is extremely inefficient if you have to use this function multiple times for the same checkpoint; e.g. in reinforcement learning, I may need to predict the next action based on the current state, and the next state is realized only after you choose a specific action. So it's commonplace to call this function thousands of times.
So my question is: how can I call this function without loading the (same) checkpoint every time?
Thank you.
Well, I think I've just found a good answer to my own question. A good solution to this problem is to construct a tf.data.Dataset from a generator. The link is here.
The generator keeps your estimator.predict call open, so you won't need to keep loading the checkpoint. The only thing you need to do is change the yielded object in the FastPredict object (self.next_feature in this case) when necessary.
However, I need to mention: if your ultimate goal is to make the whole thing a service or something like that, you may need something like TensorFlow Serving, so I suggest you go that way directly. I wasted a lot of time in the process, so I hope this answer helps you save yours.
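The pattern described above can be sketched framework-independently (this is an illustration of the idea, not the linked code; the class and method names are placeholders): a generator yields whatever `self.next_feature` currently holds, so a single open `predict()` call can serve many requests while the checkpoint is loaded only once.

```python
class FastPredict:
    """Keeps one estimator.predict() call open across many requests."""

    def __init__(self, estimator, input_fn_builder):
        self.estimator = estimator
        self.input_fn_builder = input_fn_builder  # wraps the generator as an input_fn
        self.next_feature = None
        self.predictions = None

    def _generator(self):
        # Endlessly yield whatever feature was most recently requested.
        while True:
            yield self.next_feature

    def predict(self, feature):
        self.next_feature = feature
        if self.predictions is None:
            # First call: open predict() once; the checkpoint loads here only.
            input_fn = self.input_fn_builder(self._generator)
            self.predictions = self.estimator.predict(input_fn=input_fn)
        # Later calls just pull the next result from the open iterator.
        return next(self.predictions)
```

With a TF 1.x Estimator, `input_fn_builder` would construct a `tf.data.Dataset.from_generator(...)` input_fn around the generator; here it is left abstract so the pattern is clear.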
