Getting AttributeError in Google Colab - python

I am training an object detection model using the ImageAI library in Google Colab.
I get the following error
AttributeError: '_TfDeviceCaptureOp' object has no attribute '_set_device_from_string'
This is the full output:
Generating anchor boxes for training images and annotation...
Average IOU for 9 anchors: 0.98
Anchor Boxes generated.
Detection configuration saved in /content/drive/My Drive/ColabNotebooks/GoogleColabnotebooks/Malaria_object_detection/dataset/json/detection_config.json
Training on: ['infected', 'uninfected']
Training with Batch Size: 4
Number of Experiments: 200
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-14-b2bf9748be75> in <module>()
4 trainer.setDataDirectory(data_directory=data_path)
5 trainer.setTrainConfig(object_names_array=["infected","uninfected"], batch_size=4, num_experiments=200)
----> 6 trainer.trainModel()
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py in _apply_device_functions(self, op)
4396 # strings, since identity checks are faster than equality checks.
4397 if device_string is not prior_device_string:
-> 4398 op._set_device_from_string(device_string)
4399 prior_device_string = device_string
4400 op._device_code_locations = self._snapshot_device_function_stack_metadata()
AttributeError: '_TfDeviceCaptureOp' object has no attribute '_set_device_from_string'
I don't get the error when I run the same code on my laptop.
Following is my code
from imageai.Detection.Custom import DetectionModelTrainer
trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
trainer.setDataDirectory(data_directory=data_path)
trainer.setTrainConfig(object_names_array=["infected","uninfected"], batch_size=4, num_experiments=200)
trainer.trainModel()

Can you please check by running the same version of TensorFlow on your laptop and in Colab? By default, Colab loads TensorFlow 1.15.0.
import tensorflow as tf
print(tf.__version__)
Output:
1.15.0
You can install the required version of TensorFlow in Colab using the example below:
!pip install tensorflow==2.0.0
tf.__version__
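Note that a freshly pip-installed TensorFlow version is usually only picked up after the Colab runtime has been restarted. A minimal sketch (the 1.13.1 below is a hypothetical placeholder; substitute whatever version runs your training on the laptop):
!pip install tensorflow==1.13.1  # hypothetical version, match the one that works locally
# Restart the runtime (Runtime > Restart runtime), then verify:
import tensorflow as tf
print(tf.__version__)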

Related

How to convert a Tensor to Eager tensor in Tensorflow 2.1.0?

I've been trying to convert a tensor of type:
tensorflow.python.framework.ops.Tensor
to an EagerTensor:
<class 'tensorflow.python.framework.ops.EagerTensor'>
I've been searching for a solution but couldn't find one. Any help would be appreciated.
Context:
I have obtained the tensor using the feature extraction method from a Keras Sequential model. The output was a tensor of the first mentioned type.
However, when I tried to convert it to NumPy using .numpy(), it failed with the following error:
'Tensor' object has no attribute 'numpy'
But then when I try creating a tensor using tf.constant and then using .numpy() to convert it, it works fine!
The only difference I found is that the types of tensors are different:
The tensor generated by the Keras Sequential model is of the first type mentioned above, whereas the second tensor that I created manually is of the second type (EagerTensor).
Writing one more answer, since the same error appears in a different scenario.
The error you are getting is due to a version issue, i.e. TensorFlow 2.1.0. I ran the code while skipping the first two cells (the ones that install tensorflow==2.1.0 and keras==2.3.1) and the error did not reappear.
Your issue vanishes in the latest TensorFlow version, 2.3.0. Run the program on the latest versions; in other words, do not reinstall tensorflow and keras, because Google Colab already has the latest stable versions pre-installed.
features.numpy()
Output -
array([[0. , 0.3728346, 0. , ..., 1.0103987, 0. ,
0.4194043]], dtype=float32)
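For illustration, here is a minimal sketch (using a made-up tiny Sequential model, not your actual one) of why the two tensor types behave differently: calling the model eagerly on concrete data returns an EagerTensor that supports .numpy(), while the model's symbolic output is a graph Tensor and does not.
import tensorflow as tf
import numpy as np

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(3,))])

# Eager call on concrete data -> EagerTensor, .numpy() works
features = model(np.ones((1, 3), dtype=np.float32))
print(type(features), features.numpy())

# Symbolic graph output -> calling .numpy() on it raises the AttributeError
print(type(model.output))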
I could have answered more precisely if you had shared reproducible code.
Below is a simple scenario where I have recreated your error. Here I am reading the path of an image file.
Code to recreate the error:
%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
def get_path(file_path):
    print("file_path: ", bytes.decode(file_path.numpy()), type(bytes.decode(file_path.numpy())))
    return file_path

train_dataset = tf.data.Dataset.list_files('/content/bird.png')
train_dataset = train_dataset.map(lambda x: (get_path(x)))

for one_element in train_dataset:
    print(one_element)
Output:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-8-2d5db8425f67> in <module>()
8
9 train_dataset = tf.data.Dataset.list_files('/content/bird.png')
---> 10 train_dataset = train_dataset.map(lambda x: (get_path(x)))
11
12 for one_element in train_dataset:
10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
256 except Exception as e: # pylint:disable=broad-except
257 if hasattr(e, 'ag_error_metadata'):
--> 258 raise e.ag_error_metadata.to_exception(e)
259 else:
260 raise
AttributeError: in user code:
<ipython-input-8-2d5db8425f67>:10 None *
train_dataset = train_dataset.map(lambda x: (get_path(x)))
<ipython-input-8-2d5db8425f67>:6 get_path *
print("file_path: ", bytes.decode(file_path.numpy()),type(bytes.decode(file_path.numpy())))
AttributeError: 'Tensor' object has no attribute 'numpy'
Below are the steps I implemented in the code to fix this error.
I wrapped the mapped function with tf.py_function(get_path, [x], [tf.string]). You can find more about tf.py_function in the TensorFlow documentation.
Now I can get the string part by using bytes.decode(file_path.numpy()) inside the map function.
Fixed Code:
%tensorflow_version 2.x
import tensorflow as tf
import numpy as np

def get_path(file_path):
    print("file_path: ", bytes.decode(file_path.numpy()), type(bytes.decode(file_path.numpy())))
    return file_path

train_dataset = tf.data.Dataset.list_files('/content/bird.jpg')
train_dataset = train_dataset.map(lambda x: tf.py_function(get_path, [x], [tf.string]))

for one_element in train_dataset:
    print(one_element)
Output:
file_path: /content/bird.jpg <class 'str'>
(<tf.Tensor: shape=(), dtype=string, numpy=b'/content/bird.jpg'>,)
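As an optional tweak (not part of the original fix), if you prefer each dataset element to be a plain string tensor rather than a one-element tuple, you can index the tf.py_function result inside the map:
# Optional: unwrap the single tf.string output of tf.py_function
train_dataset = train_dataset.map(
    lambda x: tf.py_function(get_path, [x], [tf.string])[0])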
Hope this answers your question.

Runtime error while executing code from google colab document for creating Deepfakes image animation

I'm getting a runtime error while executing code from the Google Colab document for creating Deepfakes image animation.
RuntimeError Traceback (most recent call last)
<ipython-input-5-dbd18151b569> in <module>()
1 from demo import load_checkpoints
2 generator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml',
----> 3 checkpoint_path='/content/gdrive/My Drive/first-order-motion-model/vox-cpk.pth.tar')
10 frames
/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py in _lazy_init()
188 raise AssertionError(
189 "libcudart functions unavailable. It looks like you have a broken build?")
--> 190 torch._C._cuda_init()
191 # Some of the queued calls may reentrantly call _lazy_init();
192 # we need to just return without initializing in that case.
RuntimeError: cuda runtime error (100) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:47
You may be running in a Colab environment that only has TPUs available and not GPUs, in which case you need to use XLA with PyTorch. You might find this notebook and repository very helpful if this is the case:
https://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/resnet18-training.ipynb
https://github.com/pytorch/xla
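Before going the XLA route, it is worth a quick check of whether the current runtime actually exposes a CUDA device; a minimal sketch:
import torch

# True on a GPU runtime (Runtime > Change runtime type > GPU), False otherwise
print(torch.cuda.is_available())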
1. Check the path, i.e. whether the file actually exists or not.
2. In your screenshot there is no drive mounted; try mounting it first (a quick check is sketched below).
3. If the first two are done and you are still facing the problem, check whether the file you are pointing to is the correct one.
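A minimal sketch of points 1 and 2 (the checkpoint path below is taken from your traceback; adjust it if your file lives elsewhere):
import os
from google.colab import drive

drive.mount('/content/gdrive')

checkpoint_path = '/content/gdrive/My Drive/first-order-motion-model/vox-cpk.pth.tar'
print(os.path.exists(checkpoint_path))  # should print True once Drive is mounted and the file exists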
Hope this is helpful :)

2nd time giving error, during TensorFlow execution

import tensorflow as tf
print(tf.ones([10, 10]))
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-15-bc8c707e6655> in <module>
----> 1 print(tf.ones([10, 10]))
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'ones'
It gave the proper output the first time, but when I executed the same command again about 5 minutes later, it gave this error.
That error can occur due to changes in the TensorFlow version... As I understood from your comment, you're using TensorFlow 2.0, and to print the tensor values you need to use tf.print, like so:
import tensorflow as tf
tf.print(tf.ones([10, 10]))
Hope this answers your question

Tensorflow estimator error in google colab

I am training a DNN with TensorFlow in a Google Colab environment. The code worked well until yesterday, but now when I run the estimator training section of my code, it gives an error.
I don't know exactly what the reason is. Is Google Colab using an updated version of TensorFlow in which some functions are not compatible with older versions? I had no problem with the code before, and I didn't change it.
It seems this problem exists for other code too; for example, this sample code from Stanford ran without any error before:
https://colab.research.google.com/drive/1nG7Ga46jrWF5n7pHe0FK6anB0pLNgBVt
but now when you run the section:
estimator.train(input_fn=train_input_fn, steps=1000);
It gives the same error as mine:
TypeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py in make_tensor_proto(values, dtype, shape, verify_shape)

TypeError: Expected binary or unicode string, got {'sent_symbol': <tf.Tensor 'random_shuffle_queue_DequeueMany:3' shape=(128,) dtype=int64>}

TypeError Traceback (most recent call last)
<ipython-input-10-9dfe23a4bf62> in <module>()
----> 1 estimator.train(input_fn=train_input_fn, steps=1000);

TypeError: Failed to convert object of type <class 'dict'> to Tensor. Contents: {'sent_symbol': <tf.Tensor 'random_shuffle_queue_DequeueMany:3' shape=(128,) dtype=int64>}. Consider casting elements to a supported type.
The y parameter of tf.estimator.inputs.pandas_input_fn expects a Pandas Series object as input.
To extract the target 'sent_symbol' from the DataFrame as a Series, call training_labels['sent_symbol'].
To fix this script, modify the code as follows:
# Training input on the whole training set with no limit on training epochs.
train_input_fn = tf.estimator.inputs.pandas_input_fn(
    training_examples, training_labels['sent_symbol'], num_epochs=None, shuffle=True)

# Prediction on the whole training set.
predict_train_input_fn = tf.estimator.inputs.pandas_input_fn(
    training_examples, training_labels['sent_symbol'], shuffle=False)

# Prediction on the test set.
predict_test_input_fn = tf.estimator.inputs.pandas_input_fn(
    validation_examples, validation_labels['sent_symbol'], shuffle=False)
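For context, a small illustration (with a made-up toy DataFrame) of why the column selection matters: single-bracket indexing returns a Series, which y accepts, whereas passing a DataFrame ends up as a dict of tensors and triggers the TypeError above.
import pandas as pd

labels = pd.DataFrame({'sent_symbol': [0, 1, 1]})
print(type(labels['sent_symbol']))    # pandas.core.series.Series -> accepted as y
print(type(labels[['sent_symbol']]))  # pandas.core.frame.DataFrame -> leads to the dict/Tensor error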

Does xgboost have feature_importances_?

I'm calling xgboost via its scikit-learn-style Python interface:
model = xgboost.XGBRegressor()
%time model.fit(trainX, trainY)
testY = model.predict(testX)
Some sklearn models tell you which importance they assign to features via the attribute feature_importances_. This doesn't seem to exist for XGBRegressor:
model.feature_importances_
AttributeError Traceback (most recent call last)
<ipython-input-36-fbaa36f9f167> in <module>()
----> 1 model.feature_importances_
AttributeError: 'XGBRegressor' object has no attribute 'feature_importances_'
The weird thing is: For a collaborator of mine the attribute feature_importances_ is there! What could be the issue?
These are the versions I have:
In [2]: xgboost.__version__
Out[2]: '0.6'
In [4]: sklearn.__version__
Out[4]: '0.18.1'
... and the xgboost C++ library from github, commit ef8d92fc52c674c44b824949388e72175f72e4d1.
How did you install xgboost? Did you build the package after cloning it from github, as described in the doc?
http://xgboost.readthedocs.io/en/latest/build.html
As in this answer:
Feature Importance with XGBClassifier
There always seems to be a problem with the pip installation of xgboost. Building and installing it from source seems to help.
This worked for me:
model.get_booster().get_score(importance_type='weight')
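For reference, a minimal sketch on toy data (made up here, and assuming a reasonably recent xgboost build in which both APIs exist) showing two ways to read importances:
import numpy as np
import xgboost

# Toy data, purely for illustration
X = np.random.rand(100, 5)
y = np.random.rand(100)

model = xgboost.XGBRegressor()
model.fit(X, y)

print(model.feature_importances_)                               # sklearn-style attribute
print(model.get_booster().get_score(importance_type='weight'))  # booster-level scores (f0, f1, ...)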
Hope it helps.
This may also be useful for you:
xgb.plot_importance(bst)
See the xgboost plot_importance documentation for details.
