tf.contrib.training.HParams error in tensorflow 2 - python

I am trying to use nmt-chatbot from https://github.com/daniel-kukiela/nmt-chatbot, but while training the model with custom data I get the error below. From what I found on Google, this is because the contrib module was removed in TensorFlow v2. Can anyone suggest a replacement for that part of the code?
return tf.contrib.training.HParams(
AttributeError: module 'tensorflow' has no attribute 'contrib'
def create_hparams(flags):
  """Create training hparams."""
  return tf.contrib.training.HParams(
      # Data
      src=flags.src,
      tgt=flags.tgt,
      train_prefix=flags.train_prefix,
      dev_prefix=flags.dev_prefix,
      test_prefix=flags.test_prefix,
      vocab_prefix=flags.vocab_prefix,
      embed_prefix=flags.embed_prefix,
      out_dir=flags.out_dir,

You can see the fate of all the contrib APIs here, and also here.
I don't see any direct API replacement in the documentation.
However, you can still use hparams via the TensorBoard plugins:
from tensorboard.plugins.hparams import api as hp
For more details you can check here.
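Since tf.contrib.training.HParams was essentially an attribute container for hyperparameter values, one pragmatic workaround (a sketch, not an official replacement; it only covers plain attribute access like hparams.src, not HParams methods such as add_hparam or to_json) is to return an argparse.Namespace instead:
import argparse

def create_hparams(flags):
  """Create training hparams as a plain namespace object."""
  # argparse.Namespace supports attribute access, which is how the
  # nmt code reads the old HParams object (hparams.src and so on).
  return argparse.Namespace(
      # Data
      src=flags.src,
      tgt=flags.tgt,
      train_prefix=flags.train_prefix,
      dev_prefix=flags.dev_prefix,
      test_prefix=flags.test_prefix,
      vocab_prefix=flags.vocab_prefix,
      embed_prefix=flags.embed_prefix,
      out_dir=flags.out_dir,
  )
If other parts of the code call HParams-specific methods, those call sites will need to be ported by hand as well.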

Related

DeepPavlov FAQ Bot is returning a 'collections.OrderedDict' object is not callable error

I'm trying to use Colab to build an FAQ bot with DeepPavlov. I modified a tutorial notebook from the DeepPavlov site; the only thing I changed was using my own sample dataset, yet I get the 'collections.OrderedDict' object is not callable error when calling
answer=model_config(["help"])
answer
The full code for this (separated into cells) is
!pip install -q deeppavlov
from deeppavlov import configs
from deeppavlov.core.common.file import read_json
from deeppavlov.core.commands.infer import build_model
from deeppavlov import configs, train_model
model_config = read_json(configs.faq.tfidf_logreg_en_faq)
model_config["dataset_reader"]["data_path"] = None
model_config["dataset_reader"]["data_url"] = "https://docs.google.com/spreadsheets/d/e/2PACX-1vSUFqHL9u_KkSCfw03bYCIuzfCzfOwXlvsQeTb8tMVaSDYohcHbfL8jNtV24AZiSLNnJJQs58dIsO8A/pub?gid=788315638&single=true&output=csv"
model_config
answer=model_config(["help"])
answer
Anyone know the fix to help my bot run with the sample dataset URL I provided in my code? I'm new to bots, DeepPavlov, and Colab, so I'm having a steep learning curve here.
Your code is missing the model training part: you are trying to call the config object instead of actually training a model and using it for prediction on your data.
However, this is not the only problem here. Firstly, you should change the data_path value to a string object, otherwise you will face problems there (you can try it yourself to check). Secondly, while running your code with my corrections I hit a CSV-parsing error; please check your CSV file again and make sure to get rid of any empty rows in it. After you do that, this code should work correctly.
model_config = read_json(configs.faq.tfidf_logreg_en_faq)
model_config["dataset_reader"]["data_path"] = ''
model_config["dataset_reader"]["data_url"] = "your-dataset-link"
faq = train_model(model_config)
answer = faq(["help"])
answer
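Once train_model finishes, the trained pipeline can also be rebuilt later from the same config without retraining (a sketch using the build_model import already present in the question; it assumes the trained components were saved by the train_model call above):
from deeppavlov.core.commands.infer import build_model

# Rebuild the already-trained FAQ pipeline from its config and query it.
faq = build_model(model_config)
answer = faq(["help"])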

Google AI Platform: Unexpected error when loading the model: 'str' object has no attribute 'decode' [Keras 2.3.1, TF 1.15]

I am trying to use the beta Google Custom Prediction Routine in Google's AI Platform to run a live version of my model.
I include predictor.py in my package, which contains a Predictor class as follows:
import os
import numpy as np
import pickle
import keras
from keras.models import load_model

class Predictor(object):
    """Interface for constructing custom predictors."""

    def __init__(self, model, preprocessor):
        self._model = model
        self._preprocessor = preprocessor

    def predict(self, instances, **kwargs):
        """Performs custom prediction.

        Instances are the decoded values from the request. They have already
        been deserialized from JSON.

        Args:
            instances: A list of prediction input instances.
            **kwargs: A dictionary of keyword args provided as additional
                fields on the predict request body.

        Returns:
            A list of outputs containing the prediction results. This list must
            be JSON serializable.
        """
        # pre-processing
        preprocessed_inputs = self._preprocessor.preprocess(instances[0])
        # predict
        outputs = self._model.predict(preprocessed_inputs)
        # post-processing
        outputs = np.array([np.fliplr(x) for x in outputs])
        return outputs.tolist()

    @classmethod
    def from_path(cls, model_dir):
        """Creates an instance of Predictor using the given path.

        Loading of the predictor should be done in this method.

        Args:
            model_dir: The local directory that contains the exported model
                file along with any additional files uploaded when creating the
                version resource.

        Returns:
            An instance implementing this Predictor class.
        """
        model_path = os.path.join(model_dir, 'keras.model')
        model = load_model(model_path, compile=False)
        preprocessor_path = os.path.join(model_dir, 'preprocess.pkl')
        with open(preprocessor_path, 'rb') as f:
            preprocessor = pickle.load(f)
        return cls(model, preprocessor)
The full error, Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: 'str' object has no attribute 'decode' (Error code: 0)", indicates that the issue is in this script, specifically when loading the model. However, I am able to successfully load the model locally in my notebook with the same code block from predictor.py:
from keras.models import load_model
model = load_model('keras.model', compile=False)
I have seen similar posts which suggest setting the h5py version to <3.0.0, but this hasn't helped. I can pin module versions for my custom prediction routine in a setup.py file:
from setuptools import setup

REQUIRED_PACKAGES = ['keras==2.3.1', 'h5py==2.10.0', 'opencv-python',
                     'pydicom', 'scikit-image']

setup(
    name='my_custom_code',
    install_requires=REQUIRED_PACKAGES,
    include_package_data=True,
    version='0.23',
    scripts=['predictor.py', 'preprocess.py'])
Unfortunately, I haven't found a good way to debug model deployment on Google's AI Platform, and the troubleshooting guide is unhelpful. Any pointers would be much appreciated. Thanks!
Edit 1:
The h5py module's version is wrong: it is 3.1.0, despite being set to 2.10.0 in setup.py. Anyone know why? I confirmed that the Keras version and other modules are set properly, however. I've tried 'h5py==2.9.0' and 'h5py<3.0.0' to no avail. More on including PyPI package dependencies here.
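For context, the commonly reported root cause of this error (a hedged note; it is not confirmed from the AI Platform logs themselves) is that h5py 3.x started returning HDF5 string attributes as str rather than bytes, so the .decode('utf8') calls inside Keras 2.3.1's model loader fail:
# Minimal illustration of the failure mode, assuming h5py 3.x behavior:
attr = '2.3.1'            # h5py >= 3.0 returns string attributes as str
attr.decode('utf8')       # AttributeError: 'str' object has no attribute 'decode'
# Under h5py 2.10.0 the same attribute comes back as bytes (b'2.3.1'),
# and .decode('utf8') succeeds, which is why pinning h5py < 3.0.0 helps.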
Edit 2:
So it turns out Google currently does not support this capability.
StackOverflow, enzed01
I have encountered the same problem using AI Platform with code that was running fine two months ago, when we last trained our models. Indeed, it is due to the dependency on h5py, which suddenly fails to load the h5 model.
After a while I was able to make it work with runtime 2.2 and Python version 3.7. I am also using the custom prediction routine, and my model was a simple 2-layer bidirectional LSTM serving classifications.
I had a notebook VM set up with TF == 2.1 and downgraded h5py to <3.0.0 with:
!pip uninstall -y h5py
!pip install 'h5py < 3.0.0'
My setup.py looks like this:
from setuptools import setup

REQUIRED_PACKAGES = ['tensorflow==2.1', 'h5py<3.0.0']

setup(
    name="my_package",
    version="0.1",
    include_package_data=True,
    scripts=["preprocess.py", "model_prediction.py"]
)
I added compile=False to my model load code. Without it, I ran into another problem with deployment, which gave the following error: Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: 'sample_weight_mode' (Error code: 0)"
The code change from the OP:
model = keras.models.load_model(
    os.path.join(model_dir, 'model.h5'), compile=False)
And this made the model deploy as before without a problem. I suspect compile=False might mean slower prediction serving, but I have not noticed anything so far.
Hope this helps anyone stuck and googling these issues!

'tensorflow' has no attribute 'to_int32'

I am trying to apply CTC loss to audio files, but I get the following error:
AttributeError: module 'tensorflow' has no attribute 'to_int32'
I'm running tf.__version__ 2.0.0.
I think it's the version I'm currently using, as the error is thrown inside the package itself, in 'tensorflow_backend.py'.
I have imported the packages as "tensorflow.keras.class_name" with the backend as K.
You can cast the tensor in TensorFlow 2 as follows:
tf.cast(my_tensor, tf.int32)
You can read the documentation of the method at https://www.tensorflow.org/api_docs/python/tf/cast
You can also see that to_int32 is deprecated and was used in TensorFlow 1:
https://www.tensorflow.org/api_docs/python/tf/compat/v1/to_int32
After you make the import, just write
tf.to_int32 = lambda x: tf.cast(x, tf.int32)
This reproduces the behavior of the removed tf.to_int32 everywhere in the code, so you don't have to manually edit TF1.0 code.
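Putting both suggestions together (a sketch; it assumes the failing code looks up tf.to_int32 on the top-level tensorflow module, which is where the TF1-era Keras backend finds it):
import tensorflow as tf

# Shim the removed TF1 helper so legacy code keeps working under TF2.
tf.to_int32 = lambda x: tf.cast(x, tf.int32)

x = tf.constant([1.7, 2.3])
y = tf.cast(x, tf.int32)         # the direct TF2 way: [1, 2]
z = tf.compat.v1.to_int32(x)     # the compat alias also still exists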

AttributeError: 'ResourceSummaryWriter' object has no attribute 'get_logdir'

I am trying to run some code written for TensorFlow v1.0 with the TensorFlow 2.1 package, so I have to rewrite some of it. I have been facing a problem with one line of the code:
LOG_DIR='./'
summary_writer = tf.summary.FileWriter(LOG_DIR)
Now I understand that in v2.0 tf.summary.FileWriter was replaced, so I wrote the new code instead:
summary_writer = tf.summary.create_file_writer(LOG_DIR)
but whenever i start to run
logdir = summary_writer.get_logdir()
It gives me an error of
AttributeError: 'ResourceSummaryWriter' object has no attribute 'get_logdir'
I searched around and found no solution. What can be the problem? Isn't it just supposed to return the LOG_DIR (which I have already set)?
Regards
I got the same error; after a long struggle with it, I just solved it.
I modified the core source "~tensorboard/plugins/projector/__init__.py":
I got rid of the lines using 'logdir' and passed the log file path to summary_writer directly.
(The original answer showed the edits as screenshots of ~tensorboard/plugins/projector/__init__.py and of myapp.py.)
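A less invasive alternative (a sketch under the assumption that the old code only used get_logdir() to recover the directory it was created with) is to keep the log directory in your own variable instead of patching TensorBoard's source:
import tensorflow as tf

LOG_DIR = './'
summary_writer = tf.summary.create_file_writer(LOG_DIR)

# TF2 writers do not expose get_logdir(); reuse the path you created
# the writer with wherever the old code called summary_writer.get_logdir().
logdir = LOG_DIR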

Getting an Error for BERT module when trying to access bert.variables

I'm trying to get BERT to do sentiment analysis from the code obtained from here: https://github.com/strongio/keras-bert
But when I try to build the model, I get an error saying,
'Module' object has no attribute 'variables'
This occurs specifically in the build function of the BertLayer class when I try to access self.bert.variables.
I tried a dir(self.bert) to get all the attributes of the object and it indeed did not have an attribute called variables. These are the attributes I obtained:
['__call__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_graph', '_impl', '_name', '_spec', '_tags', '_trainable', 'export', 'get_input_info_dict', 'get_output_info_dict', 'get_signature_names', 'variable_map']
I'm using TF version 1.13.0 with Python 3.5.
Installing the very latest versions of both tensorflow and tensorflow-hub fixed this issue.
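If upgrading is not an option, the attribute list above suggests a possible workaround (an untested sketch: older tensorflow_hub Module objects expose variable_map, a dict mapping names to variables, instead of .variables):
# Inside BertLayer.build, recover the variables from variable_map:
bert_variables = list(self.bert.variable_map.values())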
