OpenCV Reflective Surface Problem (Pre-Process Text from Digital Screen) - python

I'm working on a machine learning application for reading data from fuel pumps. So far I've built a fairly robust YOLOv5 object detection model that can detect the regions I want quite accurately. But there is a problem: at certain times of the day there are reflections on the digital screen, and I'm unable to pre-process it with OpenCV so that I can extract the numbers from the display.
Check this Video to Understand (YOLOv5 Detection)
https://www.youtube.com/watch?v=3XjZ6Nw70j8
Minimal Reproducible Example
Cars come and go, and their reflections make it really difficult to differentiate between the regions of the digital-7 font that is used in these displays. You can check out the following repository to understand what I want as a result: https://github.com/arturaugusto/display_ocr
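For context, here is a minimal sketch of the kind of pre-processing I have in mind before OCR (CLAHE plus adaptive thresholding; preprocess_display is just my placeholder name, and the blockSize/C values are guesses that need tuning):

```python
import cv2

def preprocess_display(crop_bgr):
    # Equalize local contrast so reflections don't wash out the digits
    gray = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    gray = clahe.apply(gray)
    # Binarize relative to each pixel's neighborhood rather than a
    # global level; blockSize=31 and C=10 are guesses to tune
    return cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV, 31, 10)

# Usage: pass in the display region that YOLOv5 detected
# binary = preprocess_display(cv2.imread("display_crop.jpg"))
```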
Other Solutions I'm Open to:
Since this application is going to run 24/7, how should I deal with different times of day?
Perhaps create a database of HSV ranges to extract with at different times (see the sketch below).
Would a polarizing lens help in removing the reflections? (Input from any users who have previous experience deploying them would be appreciated.)
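To make the HSV-database idea concrete, here is a rough sketch of what I'm imagining (the time buckets and HSV ranges below are invented placeholders, not calibrated values):

```python
import cv2
import numpy as np
from datetime import datetime

# Placeholder lookup table: HSV ranges per time-of-day bucket.
# None of these numbers are calibrated; they only show the shape
# of the "database" I have in mind.
HSV_RANGES = {
    "morning":   ((0, 0, 120), (180, 60, 255)),
    "afternoon": ((0, 0, 160), (180, 80, 255)),
    "night":     ((0, 0, 60),  (180, 120, 255)),
}

def current_bucket():
    hour = datetime.now().hour
    if 5 <= hour < 12:
        return "morning"
    if 12 <= hour < 19:
        return "afternoon"
    return "night"

def extract_digit_mask(crop_bgr):
    # Keep only pixels inside the range chosen for the current hour
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
    lo, hi = HSV_RANGES[current_bucket()]
    return cv2.inRange(hsv, np.array(lo), np.array(hi))
```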
Edit: I added the correct video ...

Related

Project problem, looking for advice about image processing

I'm a senior in high school, and this year I have to do a project for my electronics class. I was hoping to get some advice from people with experience.
My idea is kind of complicated and involves a lot of different sensors, but it's not too crazy. The problem begins with possible image processing: I have a camera that needs to check for flashing light and send the video to a screen without the flashing frames (just skipping those frames, so the video is always one frame behind, but the viewer won't notice it).
The flashing light is supposed to be like at a party, or the kind of thing video games warn you about. The idea is to notice an extreme change in lighting and not show it on the screen.
My teacher is afraid that image processing, and video processing even more so, might be too complicated. I don't have any knowledge of it, and I have only a little background in Python and other languages. Do you think it is possible? Can anyone give me advice or a good video/tutorial to learn from?
Thank you in advance:)
Your problem is quite difficult, because it involves an unknown environment over a dynamic time range.
If you take as an axiom that your camera has, for example, a frame rate of 20 FPS, the chance of a large difference between frame f and the next frame f+1 is quite low,
UNLESS you have a huge color change due to a light flash.
So you can process the frames with an image similarity measure such as SSIM or PSNR:
https://www.pyimagesearch.com/2017/06/19/image-difference-with-opencv-and-python/
If the difference between frames exceeds a certain threshold that you have to define (you can also use a Kalman filter to dynamically readjust the threshold),
then it probably means that your flashing light is on.
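A minimal sketch of that idea, assuming scikit-image's SSIM implementation (the threshold value is an arbitrary starting point that you would tune, or adapt dynamically as mentioned above):

```python
import cv2
from skimage.metrics import structural_similarity as ssim

SSIM_THRESHOLD = 0.8  # arbitrary starting point; tune or adapt dynamically

cap = cv2.VideoCapture(0)  # placeholder source
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Low similarity between consecutive frames suggests a flash
    if ssim(prev_gray, gray) >= SSIM_THRESHOLD:
        cv2.imshow("output", frame)  # show only the "calm" frames
    # else: skip the frame (the one-frame delay described above)
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```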
Although it's a visual programming language (per se), Bonsai is great open-source software for doing what's in your description; Bonsai also supports applications that require combinations of different hardware (e.g. microcontrollers, cameras) and software components (e.g. Python).
To give a similar application as an example: I have set up a workflow where Bonsai captures images sent from a Basler camera and processes the input video frame by frame. When it detects a threshold change in pixel intensity within the cropped frame (which I cropped around a red LED), i.e. the red LED turns ON or OFF, it sends an output signal (5 volts) to an Arduino microcontroller. At the same time it saves the image frame as a PNG file and an AVI video file, along with a vector of True/False values (corresponding to the ON/OFF red LED frames) and the corresponding timestamps, saved as CSV files. Although this isn't identical to what you've described, I'm sure you can set up a similar Bonsai workflow to accomplish your goal.
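This isn't Bonsai itself, but the core of that workflow (thresholding mean pixel intensity inside a cropped ROI, frame by frame) looks roughly like this in plain Python/OpenCV; the ROI coordinates and threshold are placeholders:

```python
import cv2

X, Y, W, H = 100, 100, 40, 40   # placeholder ROI around the LED
LED_ON_THRESHOLD = 128          # placeholder intensity threshold

cap = cv2.VideoCapture(0)
states = []  # per-frame True/False, like the vector described above

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = cv2.cvtColor(frame[Y:Y+H, X:X+W], cv2.COLOR_BGR2GRAY)
    led_on = roi.mean() > LED_ON_THRESHOLD
    states.append(led_on)
    # this is where Bonsai would raise the Arduino pin and save
    # the frame/timestamp to disk

cap.release()
```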
Citation: https://www.frontiersin.org/articles/10.3389/fninf.2015.00007/full
Edit: I'm very familiar with Bonsai, so if you need help setting up a Bonsai workflow I'd be happy to help. I don't think there is direct messaging on Stack Overflow, but given that Stack Overflow doesn't list Bonsai as a programming language (because it's a visual programming language, or because it's not well known enough), feel free to reach out if you have any questions regarding Bonsai specifically (again, it's open-source software).

Obstacle avoidance for a drone - which approach should I take?

This is my first post here, so hello everyone.
I am working on a project that involves writing a program in C++ or Python that will detect obstacles, to be used with the AR.Drone 2.0. However, I don't know which approach I should take.
Initially, I was advised to use OpenCV and optical flow. I've found some videos and papers about it, and one approach is this: divide every frame from the AR.Drone's camera into 2 parts (left/right) or 4 (additionally top and bottom) and calculate the optical flow for each part. Then fly in the direction where the optical flow is smaller.
However I have some doubts about it:
1) Which method of optical flow calculation should I use? I know that OpenCV provides methods for calculating both dense and sparse optical flow. Which one should I choose for this application? Won't dense optical flow be too slow to meet real-time requirements? (A minimal sketch of the dense-flow balance strategy follows this list.)
2) I guess that when the UAV moves left-right or up-down, I'll get some "fake" vectors caused by the movement of the drone rather than by a looming obstacle. How do I prevent this?
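To make doubt 1) concrete, this is the kind of balance strategy I mean, sketched with OpenCV's Farneback dense flow (I don't know yet whether it's fast enough; the video source is a placeholder):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("drone_feed.mp4")  # placeholder source
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense Farneback flow; these parameter values are the ones
    # commonly used in the OpenCV examples
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    # Compare mean flow magnitude on each half and steer toward
    # the side with less apparent motion
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    steer = "left" if left < right else "right"
    prev_gray = gray
```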
Another solution I was told about is a method shown here (link to the paper in the description), and someone who implemented it (GitHub link); however, the author admitted that he "never got obstacle detection working properly on the drone".
Another option I was told about is attaching a RealSense camera to the drone and somehow extracting information about obstacles from it.
So, my question is: which path should I take? Or is there some other method that will work for the application I described and is relatively easy to implement?
Thanks in advance for every reply.
I'm not sure of the scope of your project, or whether it's academic or professional, but my recommendation would be to use object detection of a control image, with the camera facing directly forward on the drone. If the object is detected, you can estimate its distance from your drone based on its size. Since it is a control image, it has a constant physical size, and you should record how many pixels across it appears at various distances from your camera. This way you can build up a model. Once you know how far away the object is, you can determine whether it is an obstacle or not.
Once the detection becomes large enough, determine whether it is in the flight path, then move the drone so that the coordinates of the detection box are no longer in your flight path.
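A sketch of that size-to-distance model, using the standard pinhole relation; all the calibration numbers below are placeholders you'd measure yourself:

```python
# Pinhole relation: pixel_width = focal_px * real_width / distance
KNOWN_DISTANCE_M = 2.0     # distance at which you calibrated
KNOWN_PIXEL_WIDTH = 320.0  # measured width of the control image then
REAL_WIDTH_M = 0.5         # physical width of the control image

# Effective focal length in pixels, recovered from that one measurement
FOCAL_PX = KNOWN_PIXEL_WIDTH * KNOWN_DISTANCE_M / REAL_WIDTH_M

def estimate_distance(detected_pixel_width):
    # Distance to the control image given its current width in pixels
    return REAL_WIDTH_M * FOCAL_PX / detected_pixel_width

print(estimate_distance(160.0))  # half the calibrated width -> ~4.0 m
```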
For the detection, you can either use Google's object detection API, which comes with a number of solid detectors/classifiers, or, if you are looking to add a layer of depth to the project, you can train your own. PyImageSearch is a great place to start, and if you are feeling extra scientific you can dive right into TensorFlow.
Best of luck!
Try the open-source project https://github.com/generalized-intelligence/GAAS
It uses a stereo camera and SLAM to detect obstacles.
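Not GAAS itself, but the underlying idea (depth from stereo disparity, where large disparity means a close object) can be sketched with OpenCV's block matcher, assuming a rectified image pair:

```python
import cv2

# Assumes an already-rectified stereo pair; GAAS/SLAM handles
# calibration and rectification properly, this is only the bare idea
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # fixed-point, scaled by 16

# Large disparity = close object; flag anything nearer than ~40 px
obstacle_mask = disparity > 40 * 16
```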

Computer Vision: How to determine what percentage of an image contains a specific texture?

I am building an app to track the progress of deforestation. Over time I would like to take a satellite image of a location and see what percentage of that image contains forest.
I have tried Google's Vision API, but it does not have this functionality.
Is this something that can be done in OpenCV or must I do this from scratch with semantic segmentation or something similar?
From what I can see in the documentation, the API doesn't seem to offer any pattern/texture recognition. My suggestion is that you could try dominant color recognition: if your image data contains enough differentiable colors, I think you should be able to get an acceptable analysis.
PS: Having some experience with satellite imagery processing, I can add that the usual way to assess the status of land for plants, forest, and general crop development and health is through color analysis.
Nonetheless, satellite/drone images are mostly multispectral, and several infrared bands are used extensively, since biomass reflects the combination of visible and infrared electromagnetic bands very differently depending on season, health, and development status.
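As a rough illustration of that color-analysis approach, a minimal sketch (the HSV green range is a guess and would need tuning against your actual imagery):

```python
import cv2
import numpy as np

def forest_percentage(image_bgr):
    # Fraction of pixels falling inside a rough "vegetation green"
    # HSV range; the bounds are placeholders to tune on real tiles
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([35, 40, 40]),
                       np.array([85, 255, 255]))
    return 100.0 * cv2.countNonZero(mask) / mask.size

print(forest_percentage(cv2.imread("satellite_tile.jpg")))
```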
Have you tried looking at the satellite imagery Kaggle competitions? There are a lot of discussions as well as available scripts for tasks similar to yours:
Links: https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection
Example script: https://www.kaggle.com/arpandhatt/satellite-image-classification

Python: Recognize whether an image contains graphics/text or a photo

I want to write a script that converts unknown images (JPG, PNG, GIF, BMP, TIFF, etc.) to a specific resolution and format, as well as generating a thumbnail.
The problem is that a compression level that is totally fine for photos produces terrible results for, say, exports of presentations; so I want to vary the conversion settings based on the contents of the image.
Does anyone have experience doing that kind of thing in Python (or in shell scripts whose output is easily parseable)?
My ideas are:
increase the contrast and check whether the histogram is left with only single spikes (a sketch of this idea follows the list)
do a high-pass filter of the image and check... what, exactly?
do recognition of known letters (along the lines of face recognition)
The goal is that the recognition should be quite fast (approx. 10 images/second) and fairly easy to implement.
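To make the histogram idea concrete, something like this is what I have in mind (the spike threshold and bin sizes are guesses):

```python
import numpy as np
from PIL import Image

def looks_like_graphic(path, spike_fraction=0.5, top_k=16):
    # Heuristic: presentation exports concentrate most pixels in a
    # handful of flat colors; photos spread them much more broadly.
    img = Image.open(path).convert("RGB")
    img.thumbnail((256, 256))  # downsample for speed (10 img/s goal)
    pixels = np.asarray(img).reshape(-1, 3) // 8  # 32 bins per channel
    _, counts = np.unique(pixels, axis=0, return_counts=True)
    top = np.sort(counts)[-top_k:].sum()
    return top / counts.sum() >= spike_fraction
```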
This is a pretty tractable machine learning problem. I would research the MNIST dataset problems that teach you how to recognize handwritten characters; the process should be very similar. Check out this tutorial and see if you can modify it to recognize graphics vs. pictures. If your error rate ends up too high, you'll have to try more advanced machine learning techniques.
http://mxnet.io/tutorials/python/mnist.html

Algorithm for extracting background from image

Good day. I have a set of geotagged photos. I want to build a system that approximates the location of a query image based on how similar it is to the geotagged photos. I will be using Python and OpenCV to accomplish this task. However, the problem is that most of the geotagged photos have people in them (I'm only after the background scenery).
I found some face detection algorithms that I can use to detect people in photos. However, what I need is to detect each person's whole body in the images and leave only the background.
OpenCV has algorithms that can be used to remove the background (I was hoping to invert the output and keep the background instead). However, these are only applicable to videos (separating moving parts from a static scene). Can you recommend any solution to this problem (where to start, related studies, algorithms)? I appreciate any help. Thanks!
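For what it's worth, the closest built-in I've found so far is OpenCV's HOG pedestrian detector; something like this sketch is what I'm imagining (untuned, and it only boxes people rather than segmenting them precisely):

```python
import cv2
import numpy as np

# OpenCV's built-in HOG pedestrian detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def background_only(image_bgr):
    # Black out every detected person, keeping the scenery
    rects, _ = hog.detectMultiScale(image_bgr, winStride=(8, 8))
    mask = np.full(image_bgr.shape[:2], 255, dtype=np.uint8)
    for (x, y, w, h) in rects:
        mask[y:y+h, x:x+w] = 0
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
```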
