tensorflow attributeerror module has no attribute per_image_standardization - python

I'm trying to work through the TensorFlow CIFAR-10 tutorial on convolutional neural nets. The CNN itself builds fine (cifar10.py), but when I try to run cifar10_train.py I get the following error:
Traceback (most recent call last):
File "cifar10_train.py", line 115, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "cifar10_train.py", line 111, in main
train()
File "cifar10_train.py", line 58, in train
images, labels = cifar10.distorted_inputs()
File "/home/brennus/workspace/python/cifar/cifar10.py", line 141, in distorted_inputs
batch_size=FLAGS.batch_size)
File "/home/brennus/workspace/python/cifar/cifar10_input.py", line 177, in distorted_inputs
float_image = tf.image.per_image_standardization(distorted_image)
AttributeError: 'module' object has no attribute 'per_image_standardization'
According to https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/image.md, there is indeed a per_image_standardization function, but it looks like my TensorFlow doesn't have it. I'm not sure which version I have or how to check, but I built it from source from the repository, so I assumed it would be current.
I can't find anyone else who is having this problem so I'm stymied. Maybe I have to write my own?

I reinstalled tensorflow and solved the problem. Thanks, all!
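For anyone who hits this on an older build: earlier TensorFlow releases exposed this op as tf.image.per_image_whitening, and it was only renamed to per_image_standardization later, so checking the installed version (and, if you can't rebuild right away, falling back to the old name) is a reasonable first step. A minimal sketch, assuming one of the two names exists:
import tensorflow as tf

print(tf.__version__)  # older builds won't list per_image_standardization under tf.image

# Hedged compatibility shim: prefer the new name, fall back to the old one.
# distorted_image here is the tensor produced earlier in cifar10_input.py.
standardize = getattr(tf.image, 'per_image_standardization', None) or tf.image.per_image_whitening
float_image = standardize(distorted_image)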

Related

AttributeError: 'PerReplica' object has no attribute 'numpy'

I'm trying to train fastspeech2 from the TensorFlow TTS repo.
Training on a single GPU works fine, but multi-GPU training fails with AttributeError: 'PerReplica' object has no attribute 'numpy'.
The script I'm running is the official fastspeech2 training file linked here.
My command:
CUDA_VISIBLE_DEVICES=0,1,2,3 python examples/fastspeech2/train_fastspeech2.py \
--train-dir ./dump/train/ \
--dev-dir ./dump/valid/ \
--outdir ./examples/fastspeech2/exp/train.fastspeech2.v1/ \
--config ./examples/fastspeech2/conf/fastspeech2.v1.yaml \
--use-norm 1 \
--f0-stat ./dump/stats_f0.npy \
--energy-stat ./dump/stats_energy.npy \
--mixed_precision 1 \
--resume ""
The error output I get is mentioned below:
Traceback (most recent call last):
File "examples/fastspeech2/train_fastspeech2.py", line 421, in <module>
main()
File "examples/fastspeech2/train_fastspeech2.py", line 413, in main
resume=args.resume,
File "/home/mydir/.local/lib/python3.6/site-packages/tensorflow_tts/trainers/base_trainer.py", line 852, in fit
self.run()
File "/home/mydir/.local/lib/python3.6/site-packages/tensorflow_tts/trainers/base_trainer.py", line 101, in run
self._train_epoch()
File "/home/mydir/.local/lib/python3.6/site-packages/tensorflow_tts/trainers/base_trainer.py", line 127, in _train_epoch
self._check_eval_interval()
File "/home/mydir/.local/lib/python3.6/site-packages/tensorflow_tts/trainers/base_trainer.py", line 164, in _check_eval_interval
self._eval_epoch()
File "/home/mydir/.local/lib/python3.6/site-packages/tensorflow_tts/trainers/base_trainer.py", line 747, in _eval_epoch
self.generate_and_save_intermediate_result(batch)
File "examples/fastspeech2/train_fastspeech2.py", line 150, in generate_and_save_intermediate_result
utt_ids = batch["utt_ids"].numpy()
AttributeError: 'PerReplica' object has no attribute 'numpy'
Please help; I'm unable to understand why this error appears only in multi-GPU training.
I am currently working with that same repo and came across this error. Unfortunately I don't have a fix for it yet, but in the meantime I am using a workaround. The error is thrown when training attempts to evaluate the network, which happens every x iterations depending on what you set eval_interval_steps to in the file "./examples/fastspeech2/conf/fastspeech2.v1.yaml". If you increase this number to something greater than train_max_steps, the function that throws the error is never called.
The function that throws the error is generate_and_save_intermediate_result(batch), and from my understanding you can train without it.
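If you do want the intermediate results instead of skipping evaluation, another option (not from the repo, just a hedged sketch) is to unwrap the per-replica values with the distribution strategy before calling .numpy(). This assumes strategy is the tf.distribute strategy object used for the multi-GPU run:
import tensorflow as tf

def per_replica_to_numpy(strategy, value):
    # A PerReplica value holds one tensor per GPU; gather the local copies
    # and concatenate them before converting to numpy.
    local = strategy.experimental_local_results(value)
    return tf.concat(local, axis=0).numpy() if len(local) > 1 else local[0].numpy()

# e.g. inside generate_and_save_intermediate_result(batch):
# utt_ids = per_replica_to_numpy(strategy, batch["utt_ids"])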

Tensorflow - AttributeError: 'KeepAspectRatioResizer' object has no attribute 'per_channel_pad_value'

So I trained an object detection model and now I want to export the inference graph from my .ckpt files.
When I try to export it:
python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training3/model.ckpt-47816 --output_directory inference_graph
I get this:
Traceback (most recent call last):
File "export_inference_graph.py", line 147, in <module>
tf.app.run()
File "/home/ubuntu/anaconda3/envs/tensorflow1/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "export_inference_graph.py", line 143, in main
FLAGS.output_directory, input_shape)
File "/home/ubuntu/tensorflow1/models/research/object_detection/exporter.py", line 454, in export_inference_graph
is_training=False)
File "/home/ubuntu/anaconda3/envs/tensorflow1/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/builders/model_builder.py", line 101, in build
add_summaries)
File "/home/ubuntu/anaconda3/envs/tensorflow1/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/builders/model_builder.py", line 274, in _build_faster_rcnn_model
image_resizer_fn = image_resizer_builder.build(frcnn_config.image_resizer)
File "/home/ubuntu/anaconda3/envs/tensorflow1/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/builders/image_resizer_builder.py", line 83, in build
if keep_aspect_ratio_config.per_channel_pad_value:
AttributeError: 'KeepAspectRatioResizer' object has no attribute 'per_channel_pad_value'
Everyone else seems to have this working fine with no problems.
Could anyone please tell me what is going on here?
I know this is a few months later, but I just encountered this issue too!
It seems the image_resizer.proto is missing the per_channel_pad_value attribute.
Update the proto file to include the attribute, from here:
https://github.com/tensorflow/models/blob/master/research/object_detection/protos/image_resizer.proto
then recompile it and try again.
It should work this time.
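In case it helps, the protos under object_detection are normally regenerated from the models/research directory after editing, roughly like this (paths may differ on your setup), followed by reinstalling the package so the new field is actually picked up:
protoc object_detection/protos/*.proto --python_out=.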

Converting from Caffe to Tensorflow: TypeError: a bytes-like object is required, not 'str'

I am trying to convert the pre-trained Caffe model provided here into TensorFlow using this converter, which seems to be the most popular option out there. Nevertheless, when I run their script
$ python3 convert.py ../colorful-colorization/models/colorization_deploy_v2.prototxt --caffemodel ../colorful-colorization/models/colorization_release_v2.caffemodel --code-output-path ./out-code/mynet.py
I get the following error
Traceback (most recent call last):
File "convert.py", line 60, in <module>
main()
File "convert.py", line 56, in main
args.phase)
File "convert.py", line 27, in convert
transformer = TensorFlowTransformer(def_path, caffemodel_path, phase=phase)
File "/home/ebalda/Documents/Python/playground/caffe2tensorflow/kaffe/tensorflow/transformer.py", line 221, in __init__
self.load(def_path, data_path, phase)
File "/home/ebalda/Documents/Python/playground/caffe2tensorflow/kaffe/tensorflow/transformer.py", line 227, in load
graph = GraphBuilder(def_path, phase).build()
File "/home/ebalda/Documents/Python/playground/caffe2tensorflow/kaffe/graph.py", line 140, in __init__
self.load()
File "/home/ebalda/Documents/Python/playground/caffe2tensorflow/kaffe/graph.py", line 146, in load
text_format.Merge(def_file.read(), self.params)
File "/usr/local/lib/python3.6/dist-packages/google/protobuf/text_format.py", line 521, in Merge
text.split('\n'),
TypeError: a bytes-like object is required, not 'str'
After browsing around GitHub and Stack Overflow, I could not find anyone else with a similar issue. Does anyone know what the problem could be?
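For what it's worth, the traceback suggests the .prototxt is being read in binary mode, so text_format.Merge receives bytes while it expects a str on Python 3. A hedged sketch of the usual workaround in kaffe/graph.py (exact lines may differ; text_format here is google.protobuf.text_format, which the file already imports):
# decode the prototxt bytes before handing them to the protobuf text parser
with open(self.def_path, 'rb') as def_file:
    text_format.Merge(def_file.read().decode('utf-8'), self.params)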

Module object has no attribute leaky_relu

I am trying to run the code from here, which is an implementation of Generative Adversarial Networks using Keras in Python. I followed the instructions and installed all the requirements. Then I tried to run the DCGAN code. However, there seems to be a compatibility issue between the libraries. I am receiving the following message when I run the code:
AttributeError: 'module' object has no attribute 'leaky_relu'
File "main.py", line 176, in <module>
dcgan = DCGAN()
File "main.py", line 25, in __init__
self.discriminator = self.build_discriminator()
File "main.py", line 84, in build_discriminator
model.add(LeakyReLU(alpha=0.2))
File "/opt/libraries/anaconda2/lib/python2.7/site-packages/keras/models.py", line 492, in add
output_tensor = layer(self.outputs[0])
File "/opt/libraries/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 617, in __call__
output = self.call(inputs, **kwargs)
File "/opt/libraries/anaconda2/lib/python2.7/site-packages/keras/layers/advanced_activations.py", line 46, in call
return K.relu(inputs, alpha=self.alpha)
File "/opt/libraries/anaconda2/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2918, in relu
x = tf.nn.leaky_relu(x, alpha)
I am using Keras version 2.1.3 with TensorFlow version 1.2.1
and Theano version 1.0.1+40.g757b4d5.
Any idea why am I receiving that issue?
EDIT:
The error is located at line 84, in the build_discriminator function: model.add(LeakyReLU(alpha=0.2))
According to this answer, leaky_relu was added to TensorFlow in version 1.4, so you might want to check whether your TensorFlow installation is at least version 1.4.
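A quick way to confirm, nothing repo-specific:
import tensorflow as tf

print(tf.__version__)                 # 1.2.1 here; tf.nn.leaky_relu arrived in 1.4
print(hasattr(tf.nn, 'leaky_relu'))   # False on 1.2.x, True on 1.4+
Upgrading TensorFlow to at least 1.4 should then let Keras's LeakyReLU layer call tf.nn.leaky_relu.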

Error exporting inference graph (ValueError)

So I'm following sentdex's object detection tutorial and I have gotten to the step where you are supposed to export the inference graph. I'm using the "export_inference_graph.py" script from Tensorflow's object_detection folder.
The problem is that I'm getting this ValueError:
Traceback (most recent call last):
File "C:\Users\Zelcore-Dator\AppData\Local\Programs\Python\Python35\lib\site-packages\google\protobuf\internal\python_message.py", line 545, in _GetFieldByName
return message_descriptor.fields_by_name[field_name]
KeyError: 'layout_optimizer'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "export_inference_graph.py", line 119, in <module>
tf.app.run()
File "C:\Users\Zelcore-Dator\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "export_inference_graph.py", line 115, in main
FLAGS.output_directory, input_shape)
File "C:\Users\Zelcore-Dator\AppData\Local\Programs\Python\Python35\lib\site-packages\object_detection-0.1-py3.5.egg\object_detection\exporter.py", line 427, in export_inference_graph
input_shape, optimize_graph, output_collection_name)
File "C:\Users\Zelcore-Dator\AppData\Local\Programs\Python\Python35\lib\site-packages\object_detection-0.1-py3.5.egg\object_detection\exporter.py", line 391, in _export_inference_graph
initializer_nodes='')
File "C:\Users\Zelcore-Dator\AppData\Local\Programs\Python\Python35\lib\site-packages\object_detection-0.1-py3.5.egg\object_detection\exporter.py", line 72, in freeze_graph_with_def_protos
layout_optimizer=rewriter_config_pb2.RewriterConfig.ON)
File "C:\Users\Zelcore-Dator\AppData\Local\Programs\Python\Python35\lib\site-packages\google\protobuf\internal\python_message.py", line 484, in init
field = _GetFieldByName(message_descriptor, field_name)
File "C:\Users\Zelcore-Dator\AppData\Local\Programs\Python\Python35\lib\site-packages\google\protobuf\internal\python_message.py", line 548, in _GetFieldByName
(message_descriptor.name, field_name))
ValueError: Protocol message RewriterConfig has no "layout_optimizer" field.
I'm guessing that it has something to do with protobuf, but I've reinstalled it several times already with no success.
All help is appreciated.
Happened to me too; it didn't happen a few weeks ago.
Until the bug is fixed, you could use an earlier version that still works.
Replace line 72 in 'object_detection/exporter.py':
layout_optimizer=rewriter_config_pb2.RewriterConfig.ON)
with the old and working line:
optimize_tensor_layout=True)
I used:
rewrite_options = rewriter_config_pb2.RewriterConfig(optimize_tensor_layout=True)
but kept running into the same issue UNTIL I went and reran
python setup.py install
from my "research" folder. Then I was able to get everything to work.
Remove the optimize_tensor_layout=rewriter_config_pb2.RewriterConfig.ON argument: change line 71 in exporter.py from
rewrite_options = rewriter_config_pb2.RewriterConfig(optimize_tensor_layout=rewriter_config_pb2.RewriterConfig.ON)
to:
rewrite_options = rewriter_config_pb2.RewriterConfig()
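If you want to confirm whether your installed TensorFlow actually knows about that field before (or after) editing exporter.py, a quick diagnostic sketch:
from tensorflow.core.protobuf import rewriter_config_pb2

# List the fields the installed RewriterConfig message defines; if
# 'layout_optimizer' is missing, this TensorFlow build predates that field.
print([f.name for f in rewriter_config_pb2.RewriterConfig.DESCRIPTOR.fields])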
