import tensorflow as tf
print(tf.ones([10, 10]))
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-15-bc8c707e6655> in <module>
----> 1 print(tf.ones([10, 10]))
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'ones'
The first time I ran it, it gave the proper output, but when I executed the same command again five minutes later it gave this error.
That error can occur due to differences between TensorFlow versions. As I understood from your comment, you're using TensorFlow 2.0, and to print the tensor values you can use tf.print, like so:
import tensorflow as tf
tf.print(tf.ones([10, 10]))
Hope this answers your question
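As a side note, in TF 2.x eager mode you can also pull the values out as a NumPy array if you just want to inspect them; a small sketch, assuming eager execution is enabled (the default in 2.x):
import tensorflow as tf
ones = tf.ones([10, 10])
tf.print(ones)        # TensorFlow's own printer
print(ones.numpy())   # or convert to a NumPy array and use the normal print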
I'd seen previous errors importing from JAX from several years ago (https://github.com/google/jax/issues/372), but the post implied an update would fix it. I just installed JAX and am trying to get set up in a Jupyter notebook. Could you let me know what might be going wrong?
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Input In [1], in <cell line: 4>()
1 ########## JAX ON MNIST #####################
2 # Import some additional JAX and dataloader helpers
3 from jax.scipy.special import logsumexp
----> 4 from jax.experimental import optimizers
6 import torch
7 from torchvision import datasets, transforms
ImportError: cannot import name 'optimizers' from 'jax.experimental' (/Users/XXX/opt/anaconda3/lib/python3.9/site-packages/jax/experimental/__init__.py)
I saw that a similar error was reported in 2019 and that a version difference was supposed to fix it, but I did not know where to go from there.
According to the CHANGELOG
jax 0.3.16
Deprecations:
Removed jax.experimental.optimizers; it has long been a deprecated alias of jax.example_libraries.optimizers.
So it sounds like if you're using JAX version 0.3.16 or newer, you should do
from jax.example_libraries import optimizers
But as noted in the jax.example_libraries.optimizers documentation, this is not well-supported code and you'll probably have a better experience with something like Optax or JAXopt.
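For what it's worth, here is a minimal sketch of the renamed module in use, assuming you already have a params pytree and a loss function (both are placeholders below, not the original MNIST network):
import jax
import jax.numpy as jnp
from jax.example_libraries import optimizers

# placeholder loss/params; substitute the real MNIST network here
def loss(params, batch):
    inputs, targets = batch
    preds = inputs @ params["w"] + params["b"]
    return jnp.mean((preds - targets) ** 2)

params = {"w": jnp.zeros((784, 10)), "b": jnp.zeros(10)}

opt_init, opt_update, get_params = optimizers.adam(step_size=1e-3)
opt_state = opt_init(params)

def train_step(i, opt_state, batch):
    grads = jax.grad(loss)(get_params(opt_state), batch)
    return opt_update(i, grads, opt_state)   # returns the new optimizer state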
I am not very familiar with TensorFlow. I received a ".pb" file and was trying to see how its authors approached the problem.
model_path = os.path.join(saved_path,"model",str(k+1))
model = tf.saved_model.load(model_path)
print(model)
<tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject object at 0x7f6d200ec748>
model.summary()
AttributeError Traceback (most recent call last)
in
----> 1 model.summary()
AttributeError: '_UserObject' object has no attribute 'summary'
Is there any way I can check the summary of the model?
I was wondering how they approached what looks like a segmentation task, and it seems they did object detection instead. That is why I want to check what is inside the model.pb file!
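To make it concrete, this is the kind of inspection I am after; a rough sketch, assuming the SavedModel exposes a 'serving_default' signature (that key is just the common default and may be different here):
import tensorflow as tf
model = tf.saved_model.load(model_path)
print(list(model.signatures.keys()))         # e.g. ['serving_default']
fn = model.signatures['serving_default']
print(fn.structured_input_signature)         # input tensor specs
print(fn.structured_outputs)                 # output tensor specs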
Thank you.
I am training an object detection model using the ImageAI library in Google Colab.
I get the following error:
AttributeError: '_TfDeviceCaptureOp' object has no attribute '_set_device_from_string'
This is how the error appears:
Generating anchor boxes for training images and annotation...
Average IOU for 9 anchors: 0.98
Anchor Boxes generated.
Detection configuration saved in /content/drive/My Drive/ColabNotebooks/GoogleColabnotebooks/Malaria_object_detection/dataset/json/detection_config.json
Training on: ['infected', 'uninfected']
Training with Batch Size: 4
Number of Experiments: 200
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-14-b2bf9748be75> in <module>()
4 trainer.setDataDirectory(data_directory=data_path)
5 trainer.setTrainConfig(object_names_array=["infected","uninfected"], batch_size=4, num_experiments=200)
----> 6 trainer.trainModel()
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py in _apply_device_functions(self, op)
4396 # strings, since identity checks are faster than equality checks.
4397 if device_string is not prior_device_string:
-> 4398 op._set_device_from_string(device_string)
4399 prior_device_string = device_string
4400 op._device_code_locations = self._snapshot_device_function_stack_metadata()
AttributeError: '_TfDeviceCaptureOp' object has no attribute '_set_device_from_string'
I don't get the error when I run the same code on my laptop.
Following is my code:
from imageai.Detection.Custom import DetectionModelTrainer
trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
trainer.setDataDirectory(data_directory=data_path)
trainer.setTrainConfig(object_names_array=["infected","uninfected"], batch_size=4, num_experiments=200)
trainer.trainModel()
Can you please check whether you are running the same version of TensorFlow on your laptop and on Colab? By default, Colab loads TensorFlow 1.15.0.
import tensorflow as tf
print(tf.__version__)
Output:
1.15.0
You can install the required version of TensorFlow in Colab as in the example below:
!pip install tensorflow==2.0.0
tf.__version__
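A minimal sketch of the whole check in a Colab cell, assuming your laptop runs TensorFlow 1.13.1 (substitute whatever version your laptop actually prints, and restart the Colab runtime after installing):
!pip install tensorflow==1.13.1
import tensorflow as tf
print(tf.__version__)   # should now match the version on your laptop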
I was wondering why I get the below error when using numpy.int64 with numpy.timedelta64
ValueError: Could not convert object to NumPy timedelta
For example:
In [10]: np.timedelta64(np.int64(2),'D')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-10-06ccd44f066c> in <module>()
----> 1 np.timedelta64(np.int64(2),'D')
ValueError: Could not convert object to NumPy timedelta
The reason for my question: I am using scipy.stats.mode and passing the result into numpy.timedelta64. In the interim I use the int() cast below as a fix, but I am interested to find out why this behaviour occurs:
from scipy.stats import mode
import numpy as np
np.timedelta64(int(mode([2,2,3,4,1,1,5,5])[0][0]),'D')
Out[15]: numpy.timedelta64(1,'D')
Might it be a version issue?
numpy version = 1.10.4
python version = 2.7.11 (64 bit)
According to the keras documentation, Input adds the _keras_shape attribute to the input tensor. However, as shown below, this is not the case.
import tensorflow as tf
s = tf.keras.layers.Input(shape=[2], dtype=tf.float32, name='s')
print(s._keras_shape)
Traceback (most recent call last):
File "<input>", line 3, in <module>
AttributeError: 'Tensor' object has no attribute '_keras_shape'
Have I misunderstood something, or is this a bug I should report?
The lack of this attribute makes further Keras functions go haywire:
q_s = q(s)
model = Model(inputs=s, outputs=q_s)
Traceback (most recent call last):
...
File "/home/reuben/.virtualenvs/tensorflow/lib/python3.5/site-packages/keras/engine/network.py", line 253, in <listcomp>
input_shapes=[x._keras_shape for x in self.inputs],
AttributeError: 'Tensor' object has no attribute '_keras_shape'
I'm using tensorflow version '1.11.0-rc2'
The input layer you get appears to be slightly different depending on whether you are importing from keras or whether you're importing it through tensorflow. The keras documentation you linked is based on importing layers from the keras library directly:
For example:
import tensorflow as tf
from keras.layers import Input
s = Input(shape=[2], dtype=tf.float32, name='2')
s._shape_val # None
s._keras_shape # (None, 2)
However importing through tensorflow appears to save the shape in the tensorflow attribute _shape_val instead:
import tensorflow as tf
s = tf.keras.layers.Input(shape=[2], dtype=tf.float32, name='s')
s._shape_val # TensorShape([Dimension(None), Dimension(2)])
s._keras_shape # Error
Your best bet is to just import the layer from keras directly. If you plan to continue using tf.keras instead of the main implementation of keras, you should refer to the tf.keras docs instead of keras.io.
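If you only need the static shape rather than the private attribute, here is a small sketch using the public backend helper instead (int_shape exists in both keras and tf.keras, as far as I know):
import tensorflow as tf
from tensorflow.keras import backend as K

s = tf.keras.layers.Input(shape=[2], dtype=tf.float32, name='s')
print(K.int_shape(s))      # (None, 2)
print(s.shape.as_list())   # [None, 2]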
Documentation here does not mention _keras_shape.
"The added Keras attribute is: _keras_history: Last layer applied to the tensor. the entire layer graph is retrievable from that layer, recursively."
When you say "makes further Keras functions go haywire", what do you mean?