TensorFlow: load model to check what's in it - Python

I am not very familiar with TensorFlow. I received a ".pb" file and was trying to see how its authors approached the problem.
import os
import tensorflow as tf

model_path = os.path.join(saved_path, "model", str(k + 1))
model = tf.saved_model.load(model_path)
print(model)
<tensorflow.python.saved_model.load.Loader._recreate_base_user_object.._UserObject object at 0x7f6d200ec748>
model.summary()
AttributeError                            Traceback (most recent call last)
in <module>
----> 1 model.summary()
AttributeError: '_UserObject' object has no attribute 'summary'
Is there any way I can check the summary of the model?
I was wondering how they approached the segmentation task, and it looks like they did object detection instead. That is why I want to check what is inside the model.pb file!
Thank you.
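A model loaded with tf.saved_model.load is a generic SavedModel object, not a Keras model, so it has no .summary(). You can still inspect which signatures it exports and what inputs and outputs they take; a minimal sketch, assuming the model was exported with the usual "serving_default" signature:

import tensorflow as tf

model = tf.saved_model.load(model_path)

# list the exported signatures, typically ["serving_default"]
print(list(model.signatures.keys()))

# describe the inputs and outputs of one signature
infer = model.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)

The saved_model_cli tool that ships with TensorFlow prints the same information from the command line: saved_model_cli show --dir <model_path> --all. The output tensor names and shapes will usually tell you whether the graph is doing segmentation or object detection.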

Related

How can I train my model using Python gensim?

I am trying to train my model, and when I run this code:
for epoch in range(max_epochs):
    model.train(tagged_data,
                total_examples=model.corpus_count,
                epochs=model.iter)
and the error that I am getting is the following
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-7-6ecb8a2d0ac7> in <module>
2 model.train(tagged_data,
3 total_examples=model.corpus_count,
----> 4 epochs=model.iter)
AttributeError: 'Doc2Vec' object has no attribute 'iter'
You're likely copying some outdated example code. For example:
- recent versions of Gensim don't have an .iter property on the Doc2Vec model
- it's almost always a bad idea to be calling train() multiple times in your own epochs loop - especially as a beginner just trying to get things working
So: don't copy whatever source you're copying. It's not only out-of-date, it's suggesting something (the train() calls in a loop) that was never a great idea.
Instead, base your work on better examples, like the intro tutorial in the Gensim docs:
https://radimrehurek.com/gensim/auto_examples/tutorials/run_doc2vec_lee.html
To resolve the problem, change model.iter to model.epochs. For example:
for epoch in range(max_epochs):
    model.train(tagged_data,
                total_examples=model.corpus_count,
                epochs=model.epochs)
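As the linked tutorial shows, the usual pattern is to build the vocabulary once and make a single train() call, letting Gensim manage the epochs internally. A minimal sketch, assuming tagged_data is a list of TaggedDocument objects as in the question:

from gensim.models.doc2vec import Doc2Vec

# epochs is set once on the model; no manual loop is needed
model = Doc2Vec(vector_size=50, min_count=2, epochs=40)
model.build_vocab(tagged_data)
model.train(tagged_data, total_examples=model.corpus_count, epochs=model.epochs)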

2nd time giving error during TensorFlow execution

import tensorflow as tf
print(tf.ones([10, 10]))
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-15-bc8c707e6655> in <module>
----> 1 print(tf.ones([10, 10]))
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'ones'
The first time it gives the proper output, but when executing the same command 5 minutes later it gives this error.
That error can occur due to changes between TensorFlow versions. As I understood from your comment, you're using TensorFlow 2.0, and to print tensor values you can use tf.print, like so:
import tensorflow as tf
tf.print(tf.ones([10, 10]))
Hope this answers your question.
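That said, the traceback itself says that an EagerTensor has no attribute ones, which means the name tf was rebound to a tensor somewhere between the two runs. A minimal sketch reproducing that failure mode (the rebinding line is a guess at what happened in the session):

import tensorflow as tf

print(tf.ones([10, 10]))  # works: tf is still the module

tf = tf.ones([10, 10])    # accidentally shadows the module with a tensor
print(tf.ones([10, 10]))  # AttributeError: 'EagerTensor' object has no attribute 'ones'

Re-running import tensorflow as tf restores the name.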

Tensorflow Keras Input layer does not add _keras_shape

According to the keras documentation, Input adds the _keras_shape attribute to the input tensor. However, as shown below, this is not the case.
import tensorflow as tf
s = tf.keras.layers.Input(shape=[2], dtype=tf.float32, name='s')
print(s._keras_shape)
Traceback (most recent call last):
File "<input>", line 3, in <module>
AttributeError: 'Tensor' object has no attribute '_keras_shape'
Have I misunderstood something, or is this a bug I should report?
The lack of this attribute makes further Keras functions go haywire:
q_s = q(s)
model = Model(inputs=s, outputs=q_s)
Traceback (most recent call last):
...
File "/home/reuben/.virtualenvs/tensorflow/lib/python3.5/site-packages/keras/engine/network.py", line 253, in <listcomp>
input_shapes=[x._keras_shape for x in self.inputs],
AttributeError: 'Tensor' object has no attribute '_keras_shape'
I'm using tensorflow version '1.11.0-rc2'
The input layer you get appears to be slightly different depending on whether you are importing from keras or whether you're importing it through tensorflow. The keras documentation you linked is based on importing layers from the keras library directly:
For example:
import tensorflow as tf
from keras.layers import Input
s = Input(shape=[2], dtype=tf.float32, name='s')
s._shape_val # None
s._keras_shape # (None, 2)
However importing through tensorflow appears to save the shape in the tensorflow attribute _shape_val instead:
import tensorflow as tf
s = tf.keras.layers.Input(shape=[2], dtype=tf.float32, name='s')
s._shape_val # TensorShape([Dimension(None), Dimension(2)])
s._keras_shape # Error
Your best bet is to just import the layer from keras directly. If you plan to continue using tf.keras instead of the main implementation of keras, you should refer to the tf.keras docs instead of keras.io.
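If all you need is the static shape in a way that works for both imports, keras.backend.int_shape reads it off the tensor regardless of which private attribute holds it; a small sketch, assuming the tf.keras import from the question:

import tensorflow as tf

s = tf.keras.layers.Input(shape=[2], dtype=tf.float32, name='s')
print(tf.keras.backend.int_shape(s))  # (None, 2)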
A comment on this answer notes that the tf.keras documentation does not mention _keras_shape; it only says: "The added Keras attribute is: _keras_history: Last layer applied to the tensor. The entire layer graph is retrievable from that layer, recursively." Another comment asks: when you say "makes further Keras functions go haywire", what do you mean?

Tensorflow estimator error in google colab

I am training a DNN in TensorFlow in a Google Colab environment. The code worked well until yesterday, but now, when I run the estimator training section of my code, it gives an error.
I don't know exactly what the reason is. Is Google Colab using an updated version of TensorFlow in which some functions are not compatible with older versions? I had no problem with the code before, and I didn't change it.
This problem seems to exist for other code too; for example, this sample code from Stanford ran without any error before:
https://colab.research.google.com/drive/1nG7Ga46jrWF5n7pHe0FK6anB0pLNgBVt
but now when you run the section :
estimator.train(input_fn=train_input_fn, steps=1000);
It gives the same error as mine:
TypeError                                 Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py in make_tensor_proto(values, dtype, shape, verify_shape)

TypeError: Expected binary or unicode string, got {'sent_symbol': <tf.Tensor 'random_shuffle_queue_DequeueMany:3' shape=(128,) dtype=int64>}

TypeError                                 Traceback (most recent call last)
<ipython-input-10-9dfe23a4bf62> in <module>()
----> 1 estimator.train(input_fn=train_input_fn, steps=1000);

TypeError: Failed to convert object of type <class 'dict'> to Tensor. Contents: {'sent_symbol': <tf.Tensor 'random_shuffle_queue_DequeueMany:3' shape=(128,) dtype=int64>}. Consider casting elements to a supported type.
The y argument of tf.estimator.inputs.pandas_input_fn expects a Pandas Series object.
To extract the target 'sent_symbol' as a Series from the DataFrame, call training_labels['sent_symbol'].
To fix the script, modify the code as follows:
# Training input on the whole training set with no limit on training epochs.
train_input_fn = tf.estimator.inputs.pandas_input_fn(
    training_examples, training_labels['sent_symbol'], num_epochs=None, shuffle=True)

# Prediction on the whole training set.
predict_train_input_fn = tf.estimator.inputs.pandas_input_fn(
    training_examples, training_labels['sent_symbol'], shuffle=False)

# Prediction on the test set.
predict_test_input_fn = tf.estimator.inputs.pandas_input_fn(
    validation_examples, validation_labels['sent_symbol'], shuffle=False)
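To see the distinction the error is complaining about, note what indexing a DataFrame with a single column name returns; a minimal sketch, where labels_df is a hypothetical stand-in for training_labels:

import pandas as pd

labels_df = pd.DataFrame({'sent_symbol': [0, 1, 1]})
print(type(labels_df))                 # DataFrame: rejected by pandas_input_fn's y
print(type(labels_df['sent_symbol']))  # Series: what y expects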

Does xgboost have feature_importances_?

I'm calling xgboost via its scikit-learn-style Python interface:
model = xgboost.XGBRegressor()
%time model.fit(trainX, trainY)
testY = model.predict(testX)
Some sklearn models tell you which importance they assign to features via the attribute feature_importances_. This doesn't seem to exist for the XGBRegressor:
model.feature_importances_
AttributeError Traceback (most recent call last)
<ipython-input-36-fbaa36f9f167> in <module>()
----> 1 model.feature_importances_
AttributeError: 'XGBRegressor' object has no attribute 'feature_importances_'
The weird thing is: For a collaborator of mine the attribute feature_importances_ is there! What could be the issue?
These are the versions I have:
In [2]: xgboost.__version__
Out[2]: '0.6'
In [4]: sklearn.__version__
Out[4]: '0.18.1'
... and the xgboost C++ library from github, commit ef8d92fc52c674c44b824949388e72175f72e4d1.
How did you install xgboost? Did you build the package after cloning it from github, as described in the doc?
http://xgboost.readthedocs.io/en/latest/build.html
As in this answer:
Feature Importance with XGBClassifier
There always seems to be a problem with the pip installation of xgboost. Building and installing it from source seems to help.
This worked for me:
model.get_booster().get_score(importance_type='weight')
Hope it helps.
This may be useful for you:
xgb.plot_importance(bst)
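Putting the two together, a minimal end-to-end sketch on synthetic data (which attribute is available depends on the installed xgboost version and how it was built):

import numpy as np
import xgboost

X = np.random.rand(100, 5)
y = np.random.rand(100)

model = xgboost.XGBRegressor()
model.fit(X, y)

# sklearn-style attribute, present on builds where the wrapper exposes it
print(model.feature_importances_)

# fallback that queries the underlying Booster directly
print(model.get_booster().get_score(importance_type='weight'))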
