Hello, I was trying to resize and rescale my dataset as shown below, but I encountered this error:
AttributeError: module 'keras.layers' has no attribute 'experimental'
resize_and_rescale = tf.keras.Sequential([
    layers.experimental.preprocessing.Resizing(IMAGE_SIZE, IMAGE_SIZE),
    layers.experimental.preprocessing.Rescaling(1.0 / 255)
])
Actually, I tried adding tf.keras in front of my layers lines and it worked:
resize_and_rescale = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.Resizing(IMAGE_SIZE, IMAGE_SIZE),
    tf.keras.layers.experimental.preprocessing.Rescaling(1.0 / 255)
])
Thank you!
Yes, if it worked after adding the tf.keras.layers prefix even though you had already imported Keras from TensorFlow earlier, then the problem is most likely in your import chain rather than in this block of code. Cross-check how you imported the libraries earlier and remove redundant imports like the ones you may have just added; this will also make your code cleaner. As a side note, since TensorFlow 2.6 these layers have graduated out of the experimental namespace, so you can use tf.keras.layers.Resizing and tf.keras.layers.Rescaling directly.
Related
I'm running one of the examples from the skimage documentation, and I can't find multiscale_basic_features (referenced there) as an attribute of the skimage.feature module, nor its current equivalent. Does anyone know what I can substitute for it in the following segment of their code?
features_func = partial(feature.multiscale_basic_features,
                        intensity=True, edges=False, texture=True,
                        sigma_min=sigma_min, sigma_max=sigma_max,
                        multichannel=True)
You probably need to upgrade your scikit-image version; that function was only added in 0.18.1. Also note that in scikit-image 0.19 and later, the multichannel argument used above has been deprecated in favor of channel_axis (use channel_axis=-1 for channels-last images).
I am trying to implement CTC loss for audio files, but I get the following error:
TensorFlow has no attribute 'to_int32'
I'm running tf.__version__ 2.0.0.
I think it's a version issue; as we can see, the error is thrown inside the package itself, in the tensorflow_backend.py code.
I have imported the packages as tensorflow.keras.class_name, with the backend imported as K. Below is the screenshot.
You can cast the tensor in TensorFlow 2 as follows:
tf.cast(my_tensor, tf.int32)
You can read the documentation of the method at https://www.tensorflow.org/api_docs/python/tf/cast
You can also see that to_int32 is deprecated; it was part of the TensorFlow 1 API:
https://www.tensorflow.org/api_docs/python/tf/compat/v1/to_int32
After you make the import, just write
tf.to_int32 = lambda x: tf.cast(x, tf.int32)
This restores the old behavior of tf.to_int32 everywhere in the code, so you don't have to manually edit TF 1.0 code.
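The same monkey-patch idiom can be sketched without TensorFlow installed; here tf is a hypothetical stand-in namespace, not the real library, and its cast() simply calls the target dtype as a constructor:

```python
from types import SimpleNamespace

# `tf` is a stand-in for the TensorFlow module: it only offers the
# TF2-style cast(), modeled here as calling the dtype on the value.
tf = SimpleNamespace(cast=lambda x, dtype: dtype(x))

# Recreate the removed TF1 helper as a thin alias over the TF2-style API,
# exactly as suggested above:
tf.to_int32 = lambda x: tf.cast(x, int)

print(tf.to_int32(3.7))  # legacy call sites keep working; prints 3
```

Because the shim is defined once right after the import, every legacy call site in the old code picks it up without edits.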
I get an AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.losses' has no attribute 'softmax_cross_entropy' error when using tf.losses.softmax_cross_entropy. Could someone help me?
tf.losses now points to tf.keras.losses. You can get identical behavior by using
tf.losses.categorical_crossentropy with from_logits set to True.
Sometimes we encounter this error, especially when running in online environments such as Jupyter notebooks. Instead of writing
tf.losses.softmax_cross_entropy
try
tf.keras.losses.CategoricalCrossentropy()
or the string identifier
loss = 'categorical_crossentropy'
(Note that 'softmax_cross_entropy' is not a valid Keras loss identifier.) You may also want to pass from_logits=True as an argument, which would look like
tf.keras.losses.CategoricalCrossentropy(from_logits=True)
while keeping metrics as something like
metrics=['accuracy']
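To see what from_logits=True does, here is a small worked example in plain Python with assumed toy numbers: CategoricalCrossentropy(from_logits=True) applies a softmax to the raw logits and then takes the negative log-likelihood of the true class.

```python
import math

# Toy 3-class example: raw logits for one sample, one-hot target (class 0).
logits = [2.0, 1.0, 0.1]
target = [1.0, 0.0, 0.0]

# Softmax over the logits (max-subtraction for numerical stability):
m = max(logits)
exps = [math.exp(z - m) for z in logits]
probs = [e / sum(exps) for e in exps]

# Negative log-likelihood of the true class -- what the loss returns:
loss = -sum(t * math.log(p) for t, p in zip(target, probs))
print(round(loss, 3))  # ~0.417
```

With from_logits=False (the default), the loss would instead expect probs-like inputs that already sum to 1 and skip the softmax step.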
It looks like they have the same parameters, and I can't find tf.contrib.slim.conv2d in the official TensorFlow documentation, which makes me really confused.
There is no difference.
import tensorflow as tf
print(tf.contrib.slim.conv2d is tf.contrib.layers.conv2d) # True
The reason they both exist is likely historical and for backwards compatibility: it probably existed in tf.contrib.slim first, then was moved to tf.contrib.layers. Removing it from tf.contrib.slim would have broken existing models, however, so I imagine the code was ported to tf.contrib.layers and there's a line in slim somewhere that creates an alias, something like
conv2d = tf.contrib.layers.conv2d
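The aliasing described above can be sketched in plain Python; the names below are stand-ins, since tf.contrib was removed entirely in TensorFlow 2.x:

```python
# Stand-in for the implementation living in tf.contrib.layers:
def conv2d(inputs, num_outputs):
    """Placeholder for the real convolution op."""
    return ("conv2d", inputs, num_outputs)

# What slim effectively does -- re-export the very same function object:
slim_conv2d = conv2d

# Both names point at one object, so the identity check returns True,
# just like `tf.contrib.slim.conv2d is tf.contrib.layers.conv2d` did:
print(slim_conv2d is conv2d)  # True
```

An alias like this is a reference, not a copy, which is why the `is` comparison in the answer above prints True.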
I am using an LSTM predictor for time-series prediction:
regressor = skflow.Estimator(model_fn=lstm_model(TIMESTEPS, RNN_LAYERS, DENSE_LAYERS))
validation_monitor = learn.monitors.ValidationMonitor(X['val'], y['val'],
                                                      every_n_steps=PRINT_STEPS,
                                                      early_stopping_rounds=1000)
regressor.fit(X['train'], y['train'], monitors=[validation_monitor])
But while calling regressor.fit, I get the error shown in the title. I need help with this.
I understand that your code imports lstm_model from the file lstm_predictor.py when initializing your estimator. If so, the problem is caused by the following line:
x_ = learn.ops.split_squeeze(1, time_steps, X)
As the README.md of that repo explains, the TensorFlow API has changed significantly. The function split_squeeze also seems to have been removed from the module tensorflow.contrib.learn.python.ops. This issue has been discussed in that repository, but no changes have been made to the repo in two years.
However, you can simply replace that function with tf.unstack. Change the line to:
x_ = tf.unstack(X, num=time_steps, axis=1)
With this I was able to get past the problem.
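For reference, tf.unstack(X, num=time_steps, axis=1) splits a [batch, time, features] tensor into time_steps tensors of shape [batch, features], which is what split_squeeze used to do. The same slicing can be sketched with plain nested lists and assumed toy data:

```python
# Toy [batch=2, time=3, features=2] structure:
X = [[[1, 2], [3, 4], [5, 6]],
     [[7, 8], [9, 10], [11, 12]]]
time_steps = 3

# Plain-Python equivalent of tf.unstack(X, num=time_steps, axis=1):
# one [batch, features] slice per time step.
slices = [[sample[t] for sample in X] for t in range(time_steps)]
print(slices[0])  # every batch item at time step 0: [[1, 2], [7, 8]]
```

Each slice is what the RNN consumes at one time step, which is why the model builds its input list this way.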