Matplotlib confused sub-pixel resolution - python

I want to plot a narrow peak that contains several thousand data points; additionally, I want to draw a vertical line marking the peak. I'm using Python 2.7 and Matplotlib 1.3.1 on Win7 (checked on two separate PCs).
For example, here is a Gaussian centred on sqrt(2) with its peak marked in red:
import matplotlib.pyplot as plt
from numpy import arange, exp, sqrt

plt.plot(-arange(-4,4,1e-5)+sqrt(2), exp(-arange(-4,4,1e-5)**2)*0.92, 'k',
         [sqrt(2),sqrt(2)], [0,1], 'r')
When the plot is wide enough you won't notice that anything is wrong, but as you make the plot narrower you will start to see that the red line sits to the right of the peak. By taking a screenshot and zooming in on the pixels you can see that the line is exactly one pixel off from where it should be. (The image above shows the peak enlarged, with and without the red line, to make it clear that the line has missed the peak.)
Am I being stupid or is this a bug? If it's a bug presumably only one of the two lines is wrong: black or red?
Worrying about single pixels might sound pedantic, but your eye is actually surprisingly good at doing sub-pixel interpolation and can easily spot the problem.

Related

Interpolating colors inside a contour in opencv [python]

I am dealing with some images which contain tables, and there are 1 or 2 stickers on them. What I am trying to do is get rid of those stickers. Using color thresholding (in HSV) and contour detection I am able to create a mask for those stickers. Now I want those stickers to "dissolve" away (I don't know the correct term for this) while keeping the table lines intact, so that my line detection (which I have to do after this cleaning) still works well.
I tried OpenCV's inpaint. But this doesn't work well here, because the stickers are quite large.
See this example:
Part of the whole image where the sticker is stuck (the inside contents are censored by me). It can be over horizontal lines, or vertical lines, or both. Basically, it's stuck somewhere on the table (maybe over some text too, but that can't be recovered anyway). The background won't necessarily be whitish; it can be pink/orange/other colors.
This is the thresholded image, creating a mask of the sticker. We can also get the contour of this if required.
This is the result of cv.inpaint() with radius 3.
What I want is to reconstruct those lines.
My solution
Now my approach is to interpolate the colors across the sticker contour to fill it in. For each pixel inside the contour, I will do a vertical interpolation and a horizontal interpolation (interpolation of the boundary colors) and then fill that pixel with the average of both. I am hoping that this will preserve my vertical and horizontal lines at least (it might fail if the sticker is on a corner of the table). This should also keep the background smooth; my background can have somewhat different colors.
Now my problem is how to implement this. What I have are the contours found with OpenCV's findContours(). I don't know how best to get the colors on the contour boundary and how to interpolate the in-between colors.
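What I have in mind is roughly something like the following (an untested sketch; img and mask are just placeholder names for my image and the sticker mask):

import numpy as np

def fill_by_interpolation(img, mask):
    # For every pixel inside the mask, find the nearest non-masked pixel to the
    # left/right and above/below, interpolate the boundary colors linearly in
    # each direction, and average the two results.
    out = img.astype(np.float64).copy()
    ys, xs = np.where(mask > 0)
    for y, x in zip(ys, xs):
        l, r = x, x
        while l > 0 and mask[y, l] > 0:
            l -= 1
        while r < mask.shape[1] - 1 and mask[y, r] > 0:
            r += 1
        t = (x - l) / float(max(r - l, 1))
        horiz = (1 - t) * out[y, l] + t * out[y, r]
        u, d = y, y
        while u > 0 and mask[u, x] > 0:
            u -= 1
        while d < mask.shape[0] - 1 and mask[d, x] > 0:
            d += 1
        s = (y - u) / float(max(d - u, 1))
        vert = (1 - s) * out[u, x] + s * out[d, x]
        out[y, x] = (horiz + vert) / 2.0
    return out.astype(np.uint8)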
Any help is appreciated. Thanks in advance.
Due to confidentiality, I cannot share the whole image.
EDIT
I tried the seam-carving method (implementation). Here are the results:
Vertical seaming
Horizontal seaming
It works well once I know which one to use, but I am not sure how well it will do when there are both horizontal and vertical lines.
PS. Please don't suggest a solution that needs to detect the lines first and then work from them, because there will be many lines in my whole image.
You can make synthetic example images to better explain your issue.
As I understand it, you can use Poisson image editing. Just take a piece of a clean-paper image and paste it in using Poisson blending and the mask you extracted.
Check this GitHub repo, for instance, for examples with code.
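For example, a minimal sketch using OpenCV's built-in Poisson blending (cv2.seamlessClone); the file names and the clean-paper patch are placeholders for illustration:

import cv2
import numpy as np

img = cv2.imread('table.png')                                # the table photo (placeholder name)
paper = cv2.imread('clean_paper.png')                        # clean-paper patch, same size as img
mask = cv2.imread('sticker_mask.png', cv2.IMREAD_GRAYSCALE)  # the sticker mask you extracted

# seamlessClone needs the centre of the region the patch is blended into;
# use the centre of the mask's bounding box.
ys, xs = np.where(mask > 0)
center = (int((xs.min() + xs.max()) // 2), int((ys.min() + ys.max()) // 2))

result = cv2.seamlessClone(paper, img, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite('result.png', result)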

python imshow pixel size varies within plot

Dear stackoverflow community!
I need to plot a 2D-map in python using imshow. The command used is
plt.imshow(ux_map, interpolation='none', origin='lower', extent=[lonhg_all.min(), lonhg_all.max(), lathg_all.min(), lathg_all.max()])
The image is then saved as follows
plt.savefig('rdv_cr%s_cmlon%s_ux.png' % (2097, cmlon_ref))
and looks like this:
The problem is that when zooming into the plot one can notice that the pixels have different shapes (e.g. different widths). This is illustrated in the zoomed part below (taken from the top region of the first image):
Is there any reason for this behaviour? I input a rectangular grid for my data, but the problem probably does not have to do with the data itself; instead it is likely something related to rendering. I'd expect all pixels to be of equal shape, but as can be seen they have different widths as well as heights. By the way, this also occurs in matplotlib's interactive plot; however, when zooming in there, they suddenly become equally shaped.
I'm not sure whether https://github.com/matplotlib/matplotlib/issues/3057/ and the link therein might be related, but I can try playing around with dpi values. In any case, if anybody knows why this happens, could that person provide some background on why the computer cannot display the plot as intended using the commands above?
Thanks for your responses!
This is related to the way the image is mapped to the screen. To determine the color of a pixel on the screen, the corresponding color is sampled from the image. If the screen area and the image size do not match, either upsampling (image too small) or downsampling (image too large) occurs.
You observed a case of upsampling. For example, consider drawing a 4x4 image on a region of 6x6 pixels on the screen. Some image pixels then cover two screen pixels and others only one, so you see an extreme case of differently sized pixels.
When you zoom in in the interactive view, this effect seems to disappear. That is because suddenly you map the image to a large number of pixels. If one image pixel is enlarged to, say, 10 screen pixels and another to 11, you hardly notice the difference. The effect is most apparent when the image size nearly matches the screen resolution.
A solution to work around this effect is to use interpolation, which may lead to an undesirable blurred look. To reduce the blur you can...
play with different interpolation functions. Try for example 'kaiser'
or up-scale the image by a constant factor using nearest-neighbour interpolation, i.e. replace each pixel in the image by a block of pixels with the same color. Then any blurring will only affect the edges of the blocks (a sketch of this is given below).
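A rough sketch of the up-scaling approach, using a synthetic array in place of ux_map and an arbitrary factor of 10:

import numpy as np
import matplotlib.pyplot as plt

ux_map = np.random.rand(50, 80)   # stand-in for the real 2D data

factor = 10
# np.kron replaces every value by a factor-by-factor block of identical values,
# i.e. nearest-neighbour up-scaling done on the data itself, so any later
# interpolation blur only affects the block edges.
ux_big = np.kron(ux_map, np.ones((factor, factor)))

plt.imshow(ux_big, interpolation='none', origin='lower')
plt.savefig('ux_map_upscaled.png', dpi=150)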

Python - How to plot 'boundary edge' onto a 2D plot

The program pastebinned below generates a plot that looks like:
Pastebin: http://pastebin.com/wNgAG6K9
Basically, the program solves an equation for AA, and plots the values provided AA>0 and AA=/=0. The data is plotted using pcolormesh from 3 arrays called x, y and z (lines 57 - 59).
What I want to do:
I would like to plot a line around the boundary where the solutions go from zero (black) to non-zero (yellow/green), see plot below. What is the most sensible way to go about this?
I.e. lines in red (done crudely in MS paint)
Further info: I need to be able to store the red dashed boundary values so that I can plot the same red dashed boundary on another 2D plot made from real/measured/non-theoretical data.
Feel free to ask for further information.
Without seeing your data, I would suggest first trying to work with matplotlib's internal algorithm to plot the contour line corresponding to the zero level. This is simple, but it might happen that the interpolation that is used for this doesn't look good enough (I don't know if it can find that sharp peak in the contour line). The proof of the pudding is in the eating:
plt.contour(x, y, z, [0], colors='r', linewidths=2, linestyles='dashed')
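If you also need to store the boundary coordinates for re-use on another plot, the return value of contour holds them; a small sketch with synthetic data standing in for your x, y, z:

import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-ins for the x, y, z arrays built in the pastebinned script.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
z = 1.0 - x**2 - y**2

cs = plt.contour(x, y, z, [0], colors='r', linewidths=2, linestyles='dashed')

# cs.allsegs[0] is a list with one (N, 2) array of (x, y) vertices per connected
# piece of the zero-level contour; keep these to overlay on the other 2D plot.
boundary_segments = cs.allsegs[0]
for seg in boundary_segments:
    print(seg.shape)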
If that doesn't suffice, you might have to resort to image processing methods to find the boundaries of your data (after turning it into binary).

Transparency issues with overlayed, densely plotted graphs

I'm generating basic matplotlib line plots of CPU utilisation per core for multi-core servers. The line plots for each successive series of data are overlaid on top of each other, so the graph for the first series to be plotted is often buried behind the others.
I've managed to improve the plot by progressively reducing the alpha for each series I'm plotting, but the problem is that often a later series is very 'busy'. The same line gets drawn repeatedly on the same pixels, so even with a low alpha it still obscures all the data behind it.
Ideally the colour and alpha for each line should be applied only once to each pixel, no matter how often that line actually goes through the pixel. What I'd like to do is something like drawing each series on a separate 'layer' and then applying the alpha to the whole layer in one go, so it doesn't matter how often a line is drawn on any given pixel. I hope that makes sense. Any ideas?
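There is no built-in "layer alpha" in matplotlib as far as I know, but one way to approximate the idea is to render each series fully opaque into its own RGBA image with a transparent background, scale that layer's alpha channel once, and composite the layers. A rough sketch with made-up data (axes decorations are switched off here for simplicity; in practice they would be drawn once on a base layer):

import numpy as np
import matplotlib
matplotlib.use('Agg')            # render off-screen
import matplotlib.pyplot as plt
from PIL import Image

series = [np.random.rand(2000) * 100 for _ in range(4)]   # made-up per-core CPU traces
layer_alpha = 0.4
composite = None

for i, data in enumerate(series):
    fig, ax = plt.subplots(figsize=(8, 3), dpi=100)
    fig.patch.set_alpha(0.0)     # transparent figure background
    ax.patch.set_alpha(0.0)      # transparent axes background
    ax.set_axis_off()            # decorations belong on a separate base layer
    ax.set_xlim(0, len(data))
    ax.set_ylim(0, 100)
    ax.plot(data, color='C%d' % i)   # draw the series fully opaque
    fig.canvas.draw()
    buf = np.asarray(fig.canvas.buffer_rgba()).copy()
    plt.close(fig)

    # Apply this layer's alpha once to everything it contains, no matter how
    # many times the line crossed any given pixel.
    buf[..., 3] = (buf[..., 3] * layer_alpha).astype(np.uint8)
    layer = Image.fromarray(buf)
    if composite is None:
        composite = Image.new('RGBA', layer.size, (255, 255, 255, 255))
    composite = Image.alpha_composite(composite, layer)

composite.save('layered_cpu_plot.png')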

Find 3d perspective on 2d image

I have an image like this:
I want to find a line perpendicular to the red line (I mean perpendicular to the track). How can I do this using OpenCV and Python? The problem is that the height of the camera is unknown and a visible angle of 90 degrees is not a real 90-degree angle. I have found here an option to use OpenCV's projectPoints() method, but it looks like it needs to know the position of the resulting point and to be passed some vectors. Can somebody explain how I can achieve this? Or is it even possible?
#Chiefir, you don't have enough data to get the perpendicular line you ask for.
I believe your best chance is to find some parallel line in the image, like those marks in the grass (right where the green eleven is).
Some methods look for parallels in the image automatically, assuming a world of perpendicular straight lines (like a city of roads and buildings), and recover a 3D pose. I don't think those work on your image.
There is very little information in this one image (almost none, in fact) to accomplish your goal, so every solution will necessarily be imprecise. If this is a frame of a video sequence, you can apply the method below to a sequence of frames around this one to improve its accuracy.
One way is to assume that
The height of the rails above the ground is small (compared to their distance from the camera).
The long edges of the "11" number cut in the grass are perpendicular to the red line.
You can then estimate the vanishing point V of the "11". Then, any line drawn from V to a point of your red line is, by construction, the image of a line on the ground plane orthogonal to the one represented by the red line.
You can improve the accuracy a little by using, instead of your (presumably) hand-drawn red line, a line joining the bottom points of the supports of the rails, since this would really lie on the ground.
If the poles supporting the railing were vertical (they aren't, as evidenced by the ones supporting the other rail higher in the image), you could compute their vanishing point P and then use it in place of V in the method above.
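To make the construction concrete, here is a sketch of estimating V as the intersection of the two edge lines and drawing the perpendicular through a point on the red line (all pixel coordinates below are invented for illustration; in practice you would pick them from the image):

import numpy as np
import cv2

def homogeneous_line(p, q):
    # Line through two image points, in homogeneous coordinates.
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

# Hand-picked endpoints of the two long edges of the "11" (invented values).
edge1 = homogeneous_line((420, 610), (455, 500))
edge2 = homogeneous_line((470, 615), (505, 505))

# The vanishing point V is the intersection of the two edge lines.
V = np.cross(edge1, edge2)
V = V[:2] / V[2]

# The image of the ground line perpendicular to the track at a point P on the
# red line is simply the line through V and P.
img = cv2.imread('track.png')             # placeholder file name
P = np.array([300.0, 700.0])              # a point on the red line (invented)
far = P + 2.0 * (P - V)                   # extend past P so the drawn segment is visible
cv2.line(img, (int(V[0]), int(V[1])), (int(far[0]), int(far[1])), (255, 0, 0), 2)
cv2.imwrite('track_with_perpendicular.png', img)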
