I was following a tutorial about object detection, and it gave me this code:
from imageai.Detection import ObjectDetection
import os
execution_path = os.getcwd()
detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath(os.path.join(execution_path, "resnet50_coco_best_v2.1.0.h5"))
detector.loadModel()
detections = detector.detectObjectsFromImage(input_image=os.path.join(execution_path, "image.jpg"), output_image_path=os.path.join(execution_path, "imagenew.jpg"))
for eachObject in detections:
    print(eachObject["name"], " : ", eachObject["percentage_probability"])
The problem is, it kept giving me an error like this:
ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization'
I searched around, and I think it's to do with my TensorFlow version, but I never found a solution that worked.
You have to import BatchNormalization from tensorflow.keras.layers:
import tensorflow as tf
from tensorflow.keras.layers import BatchNormalization
Note that from tf.keras.layers import ... will not work: the from ... import syntax needs the full package name tensorflow, not the tf alias. Hope the documentation helps you further.
The error message should include the path to an __init__.py file. Open that file with your IDE and paste this line. It somehow worked for me.
from keras.layers.normalization.batch_normalization import BatchNormalization
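If you would rather not edit library files, a version-tolerant fallback import in your own code is another option. This is only a sketch of the idea, assuming one of the two locations exists in your installed Keras; note it does not change what ImageAI imports internally, so you may still need compatible tensorflow/keras versions.

# Try the new Keras location first, then fall back to the old one.
try:
    from keras.layers import BatchNormalization
except ImportError:
    from keras.layers.normalization.batch_normalization import BatchNormalization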
I am trying to run the following code in Kaggle:
from transformers import BertTokenizer
from keras.preprocessing.sequence import pad_sequences
bert_model_name = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(bert_model_name, do_lower_case=True)
and I am getting the following error:
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
Turn on the internet from the notebook settings; it worked for me.
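If the notebook must stay offline (for example in a competition that disallows internet access), another approach is to add the model files as a Kaggle dataset and load the tokenizer from that local directory. A minimal sketch, where the dataset path is a hypothetical example:

from transformers import BertTokenizer

# Hypothetical path to a Kaggle dataset containing the bert-base-uncased files.
local_model_dir = "/kaggle/input/bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(local_model_dir, do_lower_case=True)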
When I run this line in my code,
from keras.utils import plot_model
I get the following:
"ImportError: cannot import name 'plot_model' from 'keras.utils' (/usr/local/lib/python3.7/dist-packages/keras/utils/__init__.py)"
When I went to bed last night it was working. This morning it throws an error. What happened and what should I do? Thank you.
Any suggestion would be appreciated.
Try the import in the format below:
from keras.utils.vis_utils import plot_model
I had the same problem this week; with this import it works.
This worked for me, taken from the TensorFlow documentation:
import tensorflow as tf

tf.keras.utils.plot_model(
    model, to_file='model.png', show_shapes=False, show_dtype=False,
    show_layer_names=True, rankdir='TB', expand_nested=False, dpi=96
)
In recent versions of Keras, the plot_model function has been moved to vis_utils (visualization utilities) under the utils module. So you can get plot_model working using either of these imports:
from keras.utils.vis_utils import plot_model
or
from tensorflow.keras.utils import plot_model
from keras.utils.vis_utils import plot_model
plot_model was previously importable from keras.utils, but it can now be fetched from keras.utils.vis_utils.
Copy-paste the line above and it will work.
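For completeness, here is a minimal self-contained sketch of calling plot_model; it assumes a tf.keras installation with pydot and Graphviz available, and the toy model is only for illustration:

import tensorflow as tf
from tensorflow.keras.utils import plot_model

# Toy model purely for illustration.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1),
])
plot_model(model, to_file='model.png', show_shapes=True)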
I am trying to run the following code:
import os

from pm4py.algo.discovery.alpha import factory as alpha_miner
from pm4py.objects.log.importer.xes import factory as xes_importer
event_log = xes_importer.import_log(os.path.join("tests","input_data","running-example.xes"))
net, initial_marking, final_marking = alpha_miner.apply(event_log)
gviz = pn_vis_factory.apply(net, initial_marking, final_marking)
pn_vis_factory.view(gviz)
However, when I run the alpha miner, I get an error message that factory cannot be imported.
What could be the reason, or does anyone know a solution for this?
Many thanks for any answer.
from pm4py.algo.discovery.alpha import algorithm as alpha_miner
You can find all process discovery algorithms and their documentation at:
https://pm4py.fit.fraunhofer.de/documentation#discovery
Try this:
import os
# Alpha Miner
from pm4py.algo.discovery.alpha import algorithm as alpha_miner
# XES Reader
from pm4py.objects.log.importer.xes import importer as xes_importer
# Visualize
from pm4py.visualization.petri_net import visualizer as pn_visualizer
log = xes_importer.apply(os.path.join("tests","input_data","running-example.xes"))
net, initial_marking, final_marking = alpha_miner.apply(log)
gviz = pn_visualizer.apply(net, initial_marking, final_marking)
pn_visualizer.view(gviz)
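If you are running on a headless machine where view(gviz) cannot open a window, the visualizer module should also let you write the image to disk instead; a short sketch (the output filename is just an example):

# Save the rendered Petri net to a file instead of opening a viewer.
pn_visualizer.save(gviz, "alpha_petri_net.png")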
I am trying to use a keras application in pycharm. I start my script off with the following imports:
from keras_vggface.vggface import VGGFace
from keras_vggface.utils import preprocess_input
from keras_vggface.utils import decode_predictions
Upon running this block of code, I get this error:
ImportError: You need to first `import keras` in order to use `keras_applications`. For instance, you can do:
```
import keras
from keras_applications import vgg16
```
Or, preferably, this equivalent formulation:
```
from keras import applications
```
I have tried importing the appropriate Keras libraries as suggested, but the problem persists. I have also tried checking the json file to see if it contains the correct backend (it does).
How can I resolve this issue?
"edit for clarity"
My full imports go as follows:
from PIL import Image # for extracting image
from numpy import asarray
from numpy import expand_dims
from matplotlib import pyplot
from mtcnn.mtcnn import MTCNN # because i am too lazy to make one myself
import keras
from keras_applications import vgg16
from keras_vggface.vggface import VGGFace
from keras_vggface.utils import preprocess_input
from keras_vggface.utils import decode_predictions
Traceback:
Traceback (most recent call last):
File "C:/Users/###/PycharmProjects/##/#.py", line 17, in <module>
from keras_applications import vgg16
File "C:\Users\###\anaconda3\envs\tensor\lib\site-packages\keras_applications\vgg16.py", line 17, in <module>
backend = get_keras_submodule('backend')
File "C:\Users\###\anaconda3\envs\tensor\lib\site-packages\keras_applications\__init__.py", line 39, in get_keras_submodule
raise ImportError('You need to first `import keras` '
ImportError: You need to first `import keras` in order to use `keras_applications`. For instance, you can do:
```
import keras
from keras_applications import vgg16
```
Or, preferably, this equivalent formulation:
```
from keras import applications
```
Process finished with exit code 1
Are you planning to use the TensorFlow framework for executing the model? If it is TensorFlow, then I suggest using:
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16
Keras comes built into the latest TF framework, hence we don't have to do an explicit Keras import.
Otherwise, if you want to use Keras directly, I believe the code should be:
import keras
from keras.applications.vgg16 import VGG16
vggmodel = VGG16(weights='imagenet', include_top=True)
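For reference, here is a small end-to-end sketch of the tf.keras VGG16 route suggested above (the image filename is a hypothetical placeholder; keras_vggface is not used here):

import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = VGG16(weights='imagenet', include_top=True)
img = image.load_img('face.jpg', target_size=(224, 224))  # hypothetical input image
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])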
I am new to using Detectron2. I want to load a video from my local drive and then run detection on it with my trained model using Detectron2's VideoVisualizer.
I tried to find a tutorial about this, but it does not exist. Could you please tell me what to do?
Thank you
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# import some common libraries
import numpy as np
import os
import tqdm
import cv2
# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.video_visualizer import VideoVisualizer
from detectron2.utils.visualizer import ColorMode, Visualizer
from detectron2.data import MetadataCatalog
import time
video = cv2.VideoCapture('gdrive/My Drive/video.mp4')
width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.OUTPUT_DIR = 'gdrive/My Drive/mask_rcnn/'
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7 # set threshold for this model
predictor = DefaultPredictor(cfg)
v = VideoVisualizer(MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), ColorMode.IMAGE)
First, check the following tutorial (you can skip the training parts if you don't want to train on your own data).
https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5#scrollTo=Vk4gID50K03a
Then, look at the following code for running inference on video.
https://github.com/facebookresearch/detectron2/blob/master/demo/demo.py
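Based on that demo, a rough frame-by-frame sketch of the inference loop could look like the following. This is only an illustration, not the official demo code; the output path and codec are assumptions, and it continues from the video, width, height, predictor and v objects defined in the question.

import cv2

fps = video.get(cv2.CAP_PROP_FPS)
out = cv2.VideoWriter('gdrive/My Drive/video_out.mp4',
                      cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))

while True:
    ok, frame = video.read()      # OpenCV returns frames in BGR order
    if not ok:
        break
    outputs = predictor(frame)    # DefaultPredictor expects a BGR image by default
    # VideoVisualizer works on RGB frames, so flip the channel order.
    vis = v.draw_instance_predictions(frame[:, :, ::-1], outputs["instances"].to("cpu"))
    out.write(vis.get_image()[:, :, ::-1])  # convert back to BGR for VideoWriter

video.release()
out.release()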