I am using Kaggle Python and I am trying to edit images with OpenCV. I am simply trying to crop the image. I am able to do it with Matplotlib, but I want to use OpenCV. When I execute the code, it does not give me any message and it deletes all variables, as if the whole kernel were restarting. The variable img is not created and even variables created previously are deleted. Any ideas are very much appreciated.
import cv2
img = cv2.imread("/kaggle/input/global-wheat-detection/train/07479da31.jpg")
crop_img = img[715:834, 108:176]
cv2.imshow("cropped", crop_img)
cv2.waitKey(0)
You cannot use cv2.imshow in a Kaggle notebook. It requires a Qt backend, which the Kaggle notebook is not set up for; that is why your kernel is crashing. In addition, cv2.imshow opens a separate window, which the notebook environment is of course not set up for either. Therefore, you unfortunately cannot use OpenCV's windowing or interactive functionality in the notebook. Since Matplotlib is working for you, stick with that.
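For completeness, here is a minimal sketch of how to keep OpenCV for reading and cropping while letting Matplotlib handle the display (the path and crop coordinates are the ones from the question):
import cv2
import matplotlib.pyplot as plt
# Read and crop exactly as before; only the display step changes.
img = cv2.imread("/kaggle/input/global-wheat-detection/train/07479da31.jpg")
crop_img = img[715:834, 108:176]
# OpenCV stores images as BGR, so convert to RGB before handing the array to Matplotlib.
plt.imshow(cv2.cvtColor(crop_img, cv2.COLOR_BGR2RGB))
plt.axis("off")
plt.show()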
Related
I'm getting the following warnings in Colab after trying to use napari and I'm not too sure how to fix them:
Is napari compatible with Google Colab, or would it be better to use something else? I find it strange that the final warning states that the Qt platform has been found but could not be loaded. Any ideas?
When I try to run code that uses Keras together with NumPy, or Keras together with Matplotlib, in a single Jupyter notebook, I always get the message: The kernel appears to have died. It will restart automatically.
When I run the code in two different notebooks it works perfectly fine. I installed everything using Anaconda and I am on macOS. I would really appreciate an answer; everything else I've found and tried so far did not work. Thank you!
I am trying to train an SSD-based face detector from scratch in Caffe. I have a dataset of images where the bounding boxes for faces are stored in a CSV file. In all the tutorials and code I've come across so far, the convert_annoset tool is used to generate an lmdb file for object detection. However, in the latest Caffe version on GitHub (https://github.com/BVLC/caffe), this tool has been removed.
The two options I see to deal with this issue are:
Rewrite the convert_annoset tool using functions in the current Caffe library
Use other python packages (such as lmdb and OpenCV) to manually create lmdb files from the images and bounding box information
I've been trying to rewrite the code but I am unable to find certain classes and functions that were used in the original code such as AnnotatedDatum_AnnotationType and LabelMap in the current version.
Any suggestions for how to proceed with creating lmdb for the object detection problem?
EDIT:
I've just realized that AnnotatedData layer no longer exists in the master branch of Caffe. Does this mean that detection is not possible in this version? Do I have to use some older fork such as https://github.com/weiliu89/caffe for detection or is there any other option?
The original (BVLC) Caffe does not have the layers needed for object detection (MobileNet-SSD), so use the ssd branch of this fork: https://github.com/weiliu89/caffe.
That fork includes the convert_annoset tool, which can generate the lmdb for you.
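If you would rather go with the question's second option and build the lmdb yourself with the lmdb and OpenCV Python packages, a rough sketch is below. The CSV column names, the key layout and the JSON encoding of the boxes are illustrative assumptions; note that the SSD fork's AnnotatedData layer expects AnnotatedDatum protobufs, so for that layer convert_annoset remains the easier route.
import csv
import json
import cv2
import lmdb

def write_detection_lmdb(csv_path, out_path):
    # Hypothetical helper: one CSV row per box, with assumed columns image, xmin, ymin, xmax, ymax.
    env = lmdb.open(out_path, map_size=1 << 33)  # reserve a large map; lmdb grows lazily
    with env.begin(write=True) as txn:
        with open(csv_path, newline="") as f:
            for i, row in enumerate(csv.DictReader(f)):
                img = cv2.imread(row["image"])
                ok, buf = cv2.imencode(".jpg", img)  # re-encode so the lmdb is self-contained
                box = [float(row["xmin"]), float(row["ymin"]),
                       float(row["xmax"]), float(row["ymax"])]
                # Store the encoded image and its annotation under paired keys.
                txn.put(f"{i:08d}_img".encode(), buf.tobytes())
                txn.put(f"{i:08d}_ann".encode(), json.dumps(box).encode())
    env.close()
Images with several faces would need their CSV rows grouped into one record before writing, and the value encoding has to match whatever data layer you end up using.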
I have just started learning OpenCV in Python and mainly use Jupyter notebooks. The example I am following comes from this course: https://pythonprogramming.net/loading-images-python-opencv-tutorial/
I load an image using cv2.imread() and want to display it using cv2.imshow(). The image is shown successfully, but the program keeps running and cannot be interrupted.
May I know why?
Please check the following code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
img=cv2.imread('sample1.jpg',cv2.IMREAD_GRAYSCALE)
cv2.imshow('image',img)
cv2.waitKey(0)
cv2.destroyAllWindows() # This part of code keeps running.
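Since the notebook already imports Matplotlib with %matplotlib inline, one way to sidestep the blocked OpenCV window entirely is to display the image inline instead; a minimal sketch:
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
img = cv2.imread('sample1.jpg', cv2.IMREAD_GRAYSCALE)
# Show the image inline rather than in an OpenCV window, which needs its own GUI event loop.
plt.imshow(img, cmap='gray')
plt.axis('off')
plt.show()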
I am trying to run TensorFlow's Object Detection example (object_detection_tutorial.ipynb) as-is, but at the end of the complete Jupyter notebook execution I don't see the bounding boxes.
I have followed the instructions by running the following commands from the right paths:
protoc object_detection/protos/*.proto --python_out=.
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
The model builder test also runs successfully:
python object_detection/builders/model_builder_test.py
I have verified that the model downloaded successfully, and the images are in the right paths.
I don't see any errors in the console either. Can you please tell me what I am doing wrong?
The model currently listed in the master branch is unstable.
This is related to tensorflow/models issue 2773 and there is a fix for it.
In the "Variables" section of models/research/object_detection/object_detection_tutorial.ipynb you have to change the model.
Just replace the model name so that it looks like this:
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
and you will happily detect dogs and people and kites in no time :)
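For reference, the surrounding lines of that "Variables" cell look roughly like the snippet below (quoted from memory, so the exact wording may differ in your copy of the notebook); only MODEL_NAME needs to change:
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to the frozen detection graph, used for the actual detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'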