Bilinear interpolation vs. zooming out: which one is better? [closed] - python

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 12 months ago.
I started studying image processing a few weeks ago. I have been reading about image interpolation techniques and testing them in Python with the help of the cv2 library.
The problem is that I can't figure out why zooming out seems to produce a better image than the bilinear algorithm.
Maybe I'm missing something, but shouldn't the result of bilinear interpolation be better than zooming out?

You are facing a phenomenon known as aliasing, which occurs when you sample the image too sparsely, so that the high frequencies are not faithfully preserved. (This is a little technical; see https://en.wikipedia.org/wiki/Nyquist_frequency.)
The cure is to erase or lessen these high frequencies using a lowpass filter, such as averaging. In other words, you blur the image to remove the fine features that would otherwise be poorly sampled.

"Zooming out" is orthogonal to the means of interpolation. Those things aren't comparable. Your choice of words confuses the issue.
Your two pieces of code only differ in that the first one sums up the entire source area that would contribute to a destination pixel (equivalent to INTER_AREA in OpenCV), while the second code merely calculates a bilinear point sample for the source location (equivalent to INTER_LINEAR in OpenCV).
Signal processing theory dictates that one has to apply a low-pass filter before decimating, or else incur aliasing artefacts.
INTER_AREA's summing of source pixels constitutes a low-pass filter.
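A tiny NumPy experiment makes the aliasing effect concrete (1-D for clarity; the stripe pattern and the factor of 4 are made-up illustration values):

```python
import numpy as np

# A 1-D "image" with the highest-frequency stripe pattern possible:
# pixel values alternate 0, 255, 0, 255, ...
src = np.tile([0, 255], 64).astype(np.float64)  # length 128

factor = 4  # shrink by 4x

# Point sampling (roughly what plain bilinear does when samples land on
# integer positions): every 4th pixel hits the same phase of the stripes,
# so the stripes alias away into a constant black signal.
point = src[::factor]

# Area averaging (what INTER_AREA does): each output pixel is the mean of
# the 4 source pixels it covers, i.e. a box low-pass filter before decimating.
area = src.reshape(-1, factor).mean(axis=1)

print(point[:4])  # [0. 0. 0. 0.]
print(area[:4])   # [127.5 127.5 127.5 127.5]
```

The point-sampled result depends entirely on which phase of the stripes the sample grid happens to hit; the averaged result reports the true mean intensity, which is why the "zoomed out" image looks better.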

Identify chess piece from image in python [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I have images of the following type
What would be the easiest way of identifying what piece it is and if it is black or white? Do I need to use machine learning or is there an easier way?
It depends on your inputs. If all of your data looks like this (nice and clean, with identical contours, just background and color changes), you could probably use some kind of pixel + color matching and be good to go.
Keep in mind that deep learning and machine learning only approximate a function (a function of pixels in this case); if you can find that function without those methods (with a sensible amount of work), you always should.
And no, machine learning is not a silver bullet. Throwing an image into the black-box magic of a convolutional neural network and getting your results out is not the point.
Deep learning might just be overkill for recognizing a known set of images. Use template matching!
https://machinelearningmastery.com/using-opencv-python-and-template-matching-to-play-wheres-waldo/
You could do this using machine learning (plain or convolutional neural nets). This isn't that hard a problem, but you have to do the manual work of creating a proper dataset with lots of pictures.
For example, for each piece you need to create pictures with white/black square colors. And you have to cover different combinations: different chess piece sets vs. different board color schemes. To make the system more robust to color schemes you can try different color channels.
There are lots of open questions. Will the pictures you test always be in the same resolution? If not, you should also take that into consideration when creating the dataset.

Bounding boxes around text regions with different font size [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I am trying to implement some kind of text detection algorithm, and I want to separate the image into regions where each region contains a different font size.
As in this image, for example:
Is there any easy way to implement this using Python and/or OpenCV? If so, how?
I tried googling it but could not find anything useful.
Thanks.
This is an interesting question. There are a few steps you need to take in order to achieve your goal. I hope you are sufficiently informed about basic computer vision algorithms (knowledge of OpenCV functions helps) to understand the steps I am suggesting.
1. Group all the words together using a morphological dilation process.
2. Use the OpenCV findContours function to label all the blobs. This will give you the width and height information of each blob as well.
3. Here is the tricky part: now that you have data on each blob, try running a clustering algorithm on it with the location (x, y) and geometry (width, height) as your features.
4. Once you cluster them correctly, it's a matter of finding the leftmost, rightmost, topmost, and bottommost points to draw the bounding rect.
I hope this provides enough information to start your work. It is not detailed, but I think it's enough to guide you.

Position and orientation estimation by stereo images [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
I am trying to find out how to estimate position and orientation of an object using stereo camera images.
Assume a description of an object is given either by its edges and edge-relations, or points, or other features. The images are taken from above by a set of two cameras that have a fixed position relative to each other (like Microsoft Kinect gives you).
With Python's skimage toolkit I am able to recognise similarities in the picture, but I cannot tell whether the searched-for object is actually in the images; that is as far as I get.
What I want to do beyond that is segment the known object from the background and tell its position relative to the cameras (e.g. by comparing its calculated position with the position of a known mark on the floor or something similar, which I don't think will be too hard).
I know Python pretty well and have some experience in basic image processing. If this problem is already solved in OpenCV, I can work with that as well.
Thank you in advance for any help you give, either by naming keywords to improve my search, links, or by sharing your experience in this field!
To illustrate my problem: you have a bunch of Lego bricks of the same kind (shape + color) lying in a chaotic manner, e.g. overlapping completely, partially, or not at all, and with arbitrary orientation. The floor underneath is the same color as the bricks. The cameras look straight down. The task is to find as many bricks as possible and report their locations.

How to remove green from greenscreen video in Python? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I am doing a project where I am attempting to remove the green screen from a video in Python, but I do not know how to go about it.
Thank you!
Well, I presume you have some kind of RGB image for each frame of video, i.e. an N by M by 3 array.
(You could use OpenCV to read each frame.)
So it is a case of going through the image, locating all the green, and replacing it with what you want.
E.g. if the array is called arr, then for each pixel i you would check whether arr[i] == [0, 255, 0].
But due to the nature of film, you aren't going to have a perfectly uniform (0, 255, 0) green. There will be shadows and other slight variations. Perhaps it wasn't even (0, 255, 0) to start with.
So you are going to be looking at removing a range of colours: for each pixel we search within a range of colours and replace matches with your choice.
We now run the risk of identifying a colour for removal that we don't actually want removed... so how can we check for that?
We still probably won't get a perfect match around the edges (of the objects/people we want to keep in the image), so to make this less obvious, we might want to use a little bit of blur and so on and so forth.
Look at this video: https://www.youtube.com/watch?v=rIWoLCFvjME
Try to think about what logic code is required for each little step the user takes.
Also think about all the decisions the user makes that are purely subjective. Obviously these would be nigh-on impossible to automate reliably, so now we are talking about some kind of interactive application that allows the user to select different actions based on their subjective choices.
And we quickly see why green screen is often removed manually, frame by frame, using a powerful editing application like Photoshop, After Effects, etc.
OpenCV (http://opencv-python-tutroals.readthedocs.org/en/latest/) implements a lot of the algorithms for you... there is almost enough there to build your interactive greenscreen removal software.

Sliding a sliding window "intelligently"? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
In sliding window object detectors, is it possible to do object detection "intelligently"? For example, if a human is looking for a vehicle, they're not going to look into the sky for a car. But an object detector that uses a sliding window is going to slide the window across the entire image (including the sky) and run the object classifier on each window, resulting in a lot of wasted time. Are there any techniques out there to make sure it only looks in reasonable places?
Edit
I understand we'll have to look through everything at least once, but I wouldn't want to run a heavy complicated classifier on each window. A pre-classification classifier of sorts, perhaps?
Have you considered looking at saliency detection algorithms? They give you an indication of where in the image a human would most likely focus. A good example is a human in an open field: the sky would have low saliency, while the human would have high saliency.
Maybe put your image through a saliency detection algorithm first, then threshold the result and find regions to search instead of scanning the entire image.
A great algorithm for this is by Stas Goferman: Context-Aware Saliency Detection - http://webee.technion.ac.il/~ayellet/Ps/10-Saliency.pdf.
There is also code here to get you started: https://sites.google.com/a/jyunfan.co.cc/site/opensource-1/contextsaliency
Unfortunately it is in MATLAB, and from your tag you want to work in Python. However, there are many similarities between numpy/scipy and MATLAB, so hopefully that will help if you want to port any of the code.
Check it out!
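If porting the MATLAB code is too much, a lighter starting point is the spectral-residual saliency algorithm (Hou & Zhang, 2007), a different and much simpler method than Goferman's. Here is a pure-NumPy sketch; the toy image and the 3x3 smoothing window are illustrative choices:

```python
import numpy as np

def spectral_residual_saliency(img):
    # Spectral-residual saliency: regions whose log-amplitude spectrum
    # deviates from its smoothed version are considered salient.
    f = np.fft.fft2(img.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # 3x3 box average of the log spectrum, done via circular shifts.
    avg = sum(np.roll(np.roll(log_amp, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    residual = log_amp - avg
    # Back-transform the residual with the original phase.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()  # normalize to [0, 1]

# Toy image: flat background with one small bright square.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
sal = spectral_residual_saliency(img)

# Threshold the map to get candidate regions, and run the (expensive)
# sliding-window classifier only inside those regions.
candidates = sal > 0.5
```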
