Python imshow pixel size varies within plot

Dear stackoverflow community!
I need to plot a 2D-map in python using imshow. The command used is
plt.imshow(ux_map, interpolation='none', origin='lower', extent=[lonhg_all.min(), lonhg_all.max(), lathg_all.min(), lathg_all.max()])
The image is then saved as follows
plt.savefig('rdv_cr%s_cmlon%s_ux.png' % (2097, cmlon_ref))
and looks like this:
The problem is that when zooming into the plot one can notice that the pixels have different shapes (e.g. different widths). This is illustrated in the zoomed-in part below (taken from the top region of the first image):
Is there any reason for this behaviour? I input a rectangular grid for my data, so I suppose the problem does not lie with the data itself; instead it is probably related to rendering. I'd expect all pixels to be of equal shape, but as can be seen they differ in both width and height. By the way, this also occurs in matplotlib's interactive plot; however, when zooming in there, they suddenly become equally shaped.
I'm not sure whether https://github.com/matplotlib/matplotlib/issues/3057/ and the link therein are related, but I can try playing around with dpi values. In any case, if anybody knows why this happens, could you provide some background on why the computer cannot display the plot as intended using the commands above?
Thanks for your responses!

This is related to the way the image is mapped to the screen. To determine the color of a pixel in the screen, the corresponding color is sampled from the image. If the screen area and the image size do not match, either upsampling (image too small) or downsampling (image too large) occurs.
You observed a case of upsampling. For example, consider drawing a 4x4 image on a region of 6x6 pixels on the screen. Sometimes two screen pixels fall into an image pixel, and sometimes only one. Here, we observe an extreme case of differently sized pixels.
When you zoom in within the interactive view, this effect seems to disappear. That is because you suddenly map the image to a much larger number of screen pixels. If one image pixel is enlarged to, say, 10 screen pixels and another to 11, you hardly notice the difference. The effect is most apparent when the image size nearly matches the screen resolution.
A solution to work around this effect is to use interpolation, which may lead to an undesirable blurred look. To reduce the blur you can...
play with different interpolation functions; try for example 'kaiser',
or up-scale the image by a constant factor using nearest-neighbour interpolation, i.e. replace each pixel in the image by a block of pixels with the same color. Any blurring will then only affect the edges of each block (see the sketch below).
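For illustration, a minimal sketch of the second option, assuming ux_map is a 2D NumPy array (the random data and the factor of 10 are placeholders):

import numpy as np
import matplotlib.pyplot as plt

ux_map = np.random.rand(50, 100)  # placeholder for the real data

# Replace every data pixel by a 10x10 block of identical values
# (pure nearest-neighbour up-scaling, no color mixing)
factor = 10
ux_big = np.kron(ux_map, np.ones((factor, factor)))

# Any resampling blur at display/save time now only affects block edges
plt.imshow(ux_big, interpolation='none', origin='lower')
plt.savefig('ux_map_upscaled.png', dpi=200)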

Related

Interpolating colors inside a contour in opencv [python]

I am dealing with some images which contain tables, and there are one or two stickers on them. What I am trying to do is get rid of those stickers. Using color thresholding (in HSV) and contour detection I am able to create a mask for the stickers. Now I want the stickers to "dissolve" out of the image (I don't know the correct term for this) while keeping the table lines intact, so that my line detection, which I have to do after this cleaning, works well.
I tried OpenCV's inpaint, but it doesn't work well here because the sticker is too large.
See this example:
Part of the whole image where the sticker is (the inside contents are censored by me). It can be over horizontal lines, vertical lines, or both; basically, it's stuck somewhere on the table (maybe over some text too, but that can't be recovered anyway). The background won't necessarily be whitish; it can be pink/orange/other colors.
This is the thresholded image, creating a mask of the sticker. We can also get the contour of this if required.
This is the result of cv.inpaint() with radius 3.
What I want is to reconstruct those lines.
My solution
Now my approach is to interpolate the colors across the sticker contour to fill it up. For each pixel inside the contour, I will do a vertical interpolation and a horizontal interpolation (interpolating the boundary colors) and then fill that pixel with the average of both. I am hoping that this will at least preserve my vertical and horizontal lines (it might fail if the sticker is on a corner of the table). This should also keep the background smooth, since the background can have different colors.
Now my problem is how to implement this. What I have are the contours found using OpenCV's findContours(). I don't know how to get the colors on a contour's boundary or how to interpolate the in-between colors.
Any help is appreciated. Thanks in advance.
Due to confidentiality, I cannot share the whole image.
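Here is a rough sketch of the interpolation I have in mind, assuming I already have a boolean mask of the sticker (file names are placeholders, and it assumes the sticker does not touch the image border); I am not sure this is the best way to implement it:

import cv2
import numpy as np

img = cv2.imread('table.png')  # placeholder file name
mask = cv2.imread('sticker_mask.png', cv2.IMREAD_GRAYSCALE) > 0  # True inside the sticker

out = img.astype(np.float64)  # copy; boundary pixels stay untouched
ys, xs = np.where(mask)
for y, x in zip(ys, xs):
    # nearest non-masked pixels to the left/right on this row
    l = x
    while l > 0 and mask[y, l]:
        l -= 1
    r = x
    while r < mask.shape[1] - 1 and mask[y, r]:
        r += 1
    th = (x - l) / max(r - l, 1)
    horiz = (1 - th) * out[y, l] + th * out[y, r]
    # nearest non-masked pixels above/below in this column
    u = y
    while u > 0 and mask[u, x]:
        u -= 1
    d = y
    while d < mask.shape[0] - 1 and mask[d, x]:
        d += 1
    tv = (y - u) / max(d - u, 1)
    vert = (1 - tv) * out[u, x] + tv * out[d, x]
    # fill with the average of both interpolations
    out[y, x] = (horiz + vert) / 2

cv2.imwrite('filled.png', out.astype(np.uint8))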
EDIT
I tried the seam-carving method (implementation). Here are the results:
Vertical seaming
Horizontal seaming
It works well once I know which one to use, but I am not sure how well it will do when there are both horizontal and vertical lines.
PS: please don't suggest a solution that needs to find the lines first and then work on them, because there will be many lines in my whole image.
You could create synthetic example images to better explain your issue.
As I understand it, you can use Poisson image editing: take a patch of clean paper from the image and paste it over the sticker using Poisson blending with the mask you extracted.
Check this GitHub repo, for instance, for examples with code.
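OpenCV also ships Poisson blending directly as cv2.seamlessClone; a minimal sketch along these lines, where the clean-paper patch and all file names are placeholders:

import cv2
import numpy as np

table = cv2.imread('table.png')        # image with the sticker
patch = cv2.imread('clean_paper.png')  # clean paper, same size as table (hypothetical)
mask = cv2.imread('sticker_mask.png', cv2.IMREAD_GRAYSCALE)  # 0/255 sticker mask

# seamlessClone pastes the masked region of patch centered at this point,
# so use the center of the mask's bounding box to keep the paste in place
x, y, w, h = cv2.boundingRect(mask)
center = (x + w // 2, y + h // 2)

result = cv2.seamlessClone(patch, table, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite('cleaned.png', result)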

How do I split a shape with connected pixels into two parts in a binary image

My goal is to draw a rectangular border around the face by removing the neck area that is connected to the whole face area. All positive values here represent skin-color pixels. So far I have produced the filtered binary image below using OpenCV and Python. Code so far: skinid.py
Below is the test image.
Noise removal has also been applied to this binary image.
Up to this point, I followed the paper Face segmentation using skin-color map in videophone applications. For most of it, I used custom functions rather than built-in OpenCV functions because I kind of wanted to do it from scratch (although some erosion, opening and closing were used to tune it up).
I want to know a way to split the neck from the whole face area and remove it like this,
as I am quite new to the whole image processing area.
Perform a distance transform (built into OpenCV, or you could write it by hand; it's a pretty fun and easy one to write by applying the erode function iteratively and adding the result into another matrix each round, slow but conceptually easy). On the binary image you presented above (and I think this generalizes pretty well across mug shots), the highest value in the distance transform will be at the center of the face. That pixel is the center of your box, and its value after the distance transform gives you a pretty solid approximation of the face size, since it is the pixel distance from the center of the face to the horizontal edges of the face. Depending on what you are after, you may just be able to multiply that distance by, say, 1.5 (work out a standard face width-to-height ratio to pick your best multiplier), set that as your circle radius (or half the box side) and call it a day. Comment if you need anything clarified; I am pretty confident in this answer and would be happy to write up some quick code (in C++ OpenCV) if it would help.
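For what it's worth, a minimal Python sketch of this approach, assuming the binary mask is saved as a 0/255 image (the file name and the 1.5 multiplier are placeholder guesses):

import cv2
import numpy as np

binary = cv2.imread('skin_mask.png', cv2.IMREAD_GRAYSCALE)  # hypothetical mask file

# Distance of every foreground pixel to the nearest background pixel
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

# The maximum sits near the face center; its value approximates half the face width
_, max_val, _, max_loc = cv2.minMaxLoc(dist)
cx, cy = max_loc
half_w = max_val
half_h = 1.5 * max_val  # made-up width-to-height guess, tune as needed

vis = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
cv2.rectangle(vis, (int(cx - half_w), int(cy - half_h)),
              (int(cx + half_w), int(cy + half_h)), (0, 0, 255), 2)
cv2.imwrite('face_box.png', vis)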
Alternative idea: you could tweak your color filter a bit to reject darker areas. This will (at least in the image presented) create a nice separation between your face and neck due to the shadowing of the chin (you may have to dial back your dilate/closing op, though).

How to detect edge of object using OpenCV

I am trying to use OpenCV to measure the size of filament (the plastic material used for 3D printing). The idea is that I use an LED panel to illuminate the filament, take an image with a camera, preprocess the image, apply edge detection and calculate its size. Most filaments are made of a single colour, which is easy to preprocess and gives fine results.
The problem comes with transparent filament, where I am not able to get useful results. I would like to ask for a little help, or for someone to point me in the right direction. I have already tried cropping the image to a height a bit larger than the filament and a width of just a few pixels, and calculating the size from the number of pixels in those images, but this did not work very well. So now I am here, trying to do it with edge detection.
works well for filaments of a single colour
not working for transparent filament
The code below works just fine for common filaments; the problem is when I try to use it for transparent filament. I have tried adjusting the thresholds for the Canny function and different colour spaces, but I am not able to get good results.
Images that may help to understand:
https://imgur.com/gallery/CIv7fxY
import cv2 as cv

image = cv.imread("../images/img_fil_2.PNG")  # load image
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)  # convert image to grayscale
edges = cv.Canny(gray, 100, 200)  # detect edges of image
You can use the assumption that the images are taken under the same conditions.
Your main problem is that the reflections in the transparent filament are detected as edges. But, since the image is relatively simple, without any other edges, you can simply take the upper and the lower edge, and measure the distance between them.
A simple way of doing this is to take 2 vertical lines (e.g. image sides), find the edges that intersect the line (basically traverse a column in the image and find edge pixels), and connect the highest and the lowest points to form the edges of the filament. This also removes the curvature in the filament, which I assume is not needed for your application.
You might want to use 3 or 4 vertical lines, for robustness.
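A minimal sketch of this column-scan idea, reusing the question's Canny output (the sampled column positions are arbitrary choices):

import cv2 as cv
import numpy as np

image = cv.imread("../images/img_fil_2.PNG")
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
edges = cv.Canny(gray, 100, 200)

h, w = edges.shape
widths = []
for x in (10, w // 2, w - 10):  # a few sample columns, for robustness
    col = np.where(edges[:, x] > 0)[0]  # row indices of edge pixels in this column
    if len(col) >= 2:
        # highest and lowest edge pixels bound the filament in this column
        widths.append(col.max() - col.min())
print('filament width (px):', np.median(widths) if widths else 'no edges found')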

OpenCV: How to detect small differences in different images of the same object

I've been trying to detect whether a printed image has any defects (shape and color) when compared either to a proof of another printed image that has no defects, or to the digital version of the image, which also has no defects. I'm using OpenCV (cv2) and Python.
I first take a picture of the printed image. Then, I perform perspective transformation to get the picture of the printed image cropped sufficiently. I am then using Zernike moments, SSIM, and color histograms to compare the color and shape of the image. However, the resulting values vary too much and I am not able to create a threshold for a misprinted image.
I have also tried to subdivide the image into smaller sections and compare those. This is also not creating distinguishable values to determine if there is a misprint or not.
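For reference, this is roughly how I am doing the per-block comparison, sketched with scikit-image's SSIM (file names, block size and threshold are placeholder guesses):

import cv2
from skimage.metrics import structural_similarity as ssim

ref = cv2.imread('reference_print.png', cv2.IMREAD_GRAYSCALE)
test = cv2.imread('test_print.png', cv2.IMREAD_GRAYSCALE)  # perspective-corrected, same size

block = 64  # hypothetical tile size
h, w = ref.shape
for y in range(0, h - block + 1, block):
    for x in range(0, w - block + 1, block):
        # compare corresponding tiles of the two prints
        score = ssim(ref[y:y+block, x:x+block], test[y:y+block, x:x+block])
        if score < 0.9:  # made-up threshold; tune on known-good prints
            print(f'possible defect near ({x}, {y}), SSIM={score:.3f}')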
The differences in the print can be subtle or very apparent. Are there any other techniques that I can try? Thanks!
This is an example of a correctly printed image:
This is an example of an incorrectly printed image, it has too much blue ink on the right side:
This is another example of a correct print:
This is an example of a misprint when compared to the one above:

Render image as defined pixel squares HTML5 canvas?

I have the following 6x5 image:
Which appears as so blown up on the canvas:
I render it using more or less this code: this.context.putImageData(this.imageData, 0, 0), and I scale the canvas with CSS (canvas {width: 100%}).
It has the following rgb values:
51.65 41.59 60.74 159.44 137.91 165.41
147.29 71.01 52.93 73.80 115.80 93.45
77.16 112.66 98.07 70.43 78.91 107.27
107.39 122.85 60.67 103.91 144.37 124.05
138.59 123.77 140.51 107.25 52.10 138.80
Why can't I see the individual pixels as block squares in a defined grid? Is it something to do with how images are rendered?
Yes. This is related to how the images are rendered.
When you have an image with colors, you usually have the complete color information for every pixel, so a 100x100 image renders perfectly at a 100x100 pixel size.
But when you try to scale this image up, you have more pixels to fill but less information about how to fill them. So you resort to mathematical algorithms, usually known as interpolation.
Canvas will automatically interpolate the pixels to fill in gaps using nearest pixel information.
To get your desired effect of having single pixels, you need to scale up without any interpolation. To do this you can refer to this answer, or rather find several answers now that you know the appropriate terms to exactly describe your problem.
This CSS3 solution should work in Chrome 41+ for now:
canvas {
  image-rendering: pixelated;
}
