matplotlib equivalent for MATLAB's truesize() - python

I am new to matplotlib and python and would like to display an image so that 1 pixel of the image is actually represented by 1 pixel in the figure. In MATLAB, this is achieved with the command truesize(). How can I do this in Python?
I tried playing around with the imshow() arguments as well as set_dpi() and set_figwidth()/set_figheight(), but with no luck.
Thanks.

If you want to create images right down to the pixel level, why not use PIL in the first place? That way you wouldn't have to programmatically calculate your true drawing area by subtracting margins, labels and axis widths from the figure extent.
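For example, a minimal sketch of that approach (the array shape and filename are illustrative):
import numpy as np
from PIL import Image

img = (np.random.rand(60, 60, 3) * 255).astype(np.uint8)
Image.fromarray(img).save('truesize.png')  # written at exactly 60 x 60 pixels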

This hack does what I wanted to do, though it's still not perfect:
import matplotlib.pyplot as mplt

h = mplt.imshow(img, interpolation='nearest')
dpi = h.figure.get_dpi()
# img.shape is (rows, cols), i.e. (height, width)
h.figure.set_figwidth(img.shape[1] / dpi)
h.figure.set_figheight(img.shape[0] / dpi)
h.figure.canvas.resize(img.shape[1] + 1, img.shape[0] + 1)
h.axes.set_position([0, 0, 1, 1])  # make the axes fill the whole figure
h.axes.set_xlim(-1, img.shape[1])
h.axes.set_ylim(img.shape[0], -1)
It can be generalized to account for a margin around the axes holding the image.
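For instance, a hedged sketch of one such generalization, reusing h, img and dpi from the snippet above (margin here is a hypothetical fraction of the figure reserved on each side):
margin = 0.05  # hypothetical margin fraction
h.figure.set_figwidth(img.shape[1] / dpi / (1 - 2 * margin))
h.figure.set_figheight(img.shape[0] / dpi / (1 - 2 * margin))
h.axes.set_position([margin, margin, 1 - 2 * margin, 1 - 2 * margin])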

Related

OpenCV vs Labview Images Greyscale (U16) - Difference in values

I am trying to understand why LabView shows one set of values for an image, while OpenCV shows another set of values.
I have two U16 Grayscale PNG images that I am trying to combine vertically to create one continuous image. The majority of the pixels are near zero or low-valued, with the ROI having pixel values in the middle of the U16 range. In Python, this is achieved by reading the files using OpenCV, combining the images using numpy, and then using Matplotlib to display the values:
import cv2
import numpy
import matplotlib.pyplot as plt

image_one = cv2.imread(r"..\filename_one.png", cv2.IMREAD_UNCHANGED)
image_two = cv2.imread(r"..\filename_two.png", cv2.IMREAD_UNCHANGED)
combined_image = numpy.concatenate((image_one, image_two), axis=0)
plt.figure(figsize=(15, 15), dpi=18)
plt.imshow(combined_image, cmap="gray", vmin=0, vmax=65535)  # sliced to show the ROI
Dual Exposure Image
As seen above, this shows the two halves as having different dynamic ranges, resulting in different apparent exposures. To normalize the images, we can try rescaling each one so that both take advantage of the same dynamic range.
rescaled_one = ((image_one - image_one.min()) / (image_one.max() - image_one.min())) * 65535
rescaled_two = ((image_two - image_two.min()) / (image_two.max() - image_two.min())) * 65535
combined_rescaled = numpy.concatenate((rescaled_one, rescaled_two), axis=0)
plt.figure(figsize=(15, 15), dpi=18)
plt.imshow(combined_rescaled, cmap="gray", vmin=0, vmax=65535)  # sliced to show the ROI
Rescaled Image - Dual Exposure
This still shows the same issue with the images.
In LabView, to combine images vertically, I adapted a VI that was published to stitch images together horizontally:
https://forums.ni.com/t5/Example-Code/Stitch-Images-Together-in-LabVIEW-with-Vision-Development-Module/ta-p/3531092?profile.language=en
The Final VI Block Diagram looks as follows:
VI Block Diagram - Vertically Combine Images using IMAQ
The Output you see on the Front Panel:
Singular continuous Image - Front Panel
The dual-exposure issue appears to have disappeared, and the image now appears as a single continuous image. This didn't make any sense to me, so I plotted the results using Plotly as follows:
import plotly as plty
import plotly.subplots
import plotly.graph_objects as go

fig = plty.subplots.make_subplots(1, 1, horizontal_spacing=0.05)
fig.append_trace(go.Histogram(x=image_one.ravel(), name="cv2_top",
                              showlegend=True, nbinsx=13107), 1, 1)
fig.append_trace(go.Histogram(x=image_two.ravel(), name="cv2_bottom",
                              showlegend=True, nbinsx=13107), 1, 1)
fig.append_trace(go.Histogram(x=lv_joined[:1024, :].ravel(),
                              name="LabView_joined_top",
                              showlegend=True, nbinsx=13107), 1, 1)  # first image
fig.append_trace(go.Histogram(x=lv_joined[1024:, :].ravel(),
                              name="LabView_joined_bottom",
                              showlegend=True, nbinsx=13107), 1, 1)  # second image
fig.update_layout(height=800)
fig.show()
Histogram - Python vs LabView respective halves - Focus on Low pixels
Here it shows that the Second Image's pixel values have been "compressed" to match the distribution of the First Image. I don't understand why this is the case. Have I configured something wrong in LabView, or have I not considered something when reading in a file with OpenCV?
Original Images:
Please refer to the answer posted here: https://forums.ni.com/t5/LabVIEW/OpenCV-vs-Labview-Images-Greyscale-U16-Difference-in-values/td-p/4172150/highlight/false

Why is .imshow() producing images with more pixels than I have put in?

I have a numpy array A, having a shape of (60,60,3), and I am using:
plt.imshow(A,
           cmap=plt.cm.gist_yarg,
           interpolation='nearest',
           aspect='equal',
           shape=A.shape)
plt.savefig( 'myfig.png' )
When I examine the myfig.png file, I see that it is 1200x800 pixels (in color).
What's going on here? I was expecting a 60x60 RGB image.
matplotlib doesn't work directly with pixels, but rather with a figure size (in inches) and a resolution (dots per inch, dpi).
So, you need to explicitly give a figure size and dpi. For example, you could set your figure size to 1x1 inches, then set the dpi to 60, to get a 60x60 pixel image.
You also have to remove the whitespace around the plot area, which you can do with subplots_adjust:
import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(1,1))
A = np.random.rand(60,60,3)
plt.imshow(A,
           cmap=plt.cm.gist_yarg,
           interpolation='nearest',
           aspect='equal',
           shape=A.shape)
plt.subplots_adjust(left=0,right=1,bottom=0,top=1)
plt.savefig('myfig.png',dpi=60)
That creates this figure:
Which has a size of 60x60 pixels:
$ identify myfig.png
myfig.png PNG 60x60 60x60+0+0 8-bit sRGB 8.51KB 0.000u 0:00.000
You might also refer to this answer, which has lots of good information about figure sizes and resolutions.
Because plt.savefig renders a picture from your data and has a dpi (dots per inch) option, which is set to some default value. You can change the quality of your figure by doing plt.savefig('myfig.png', dpi=100)
Genuine Matplotlib Engine is highly abstract, while Renderers ...
As an initial remark, the matplotlib toolbox helps to create a lot of 2D / 3D plots, while also supporting the composition of smart overlays from pixmap-DataSET-s
( "pictures" could be imagined as a {3D[for RGB] | 4D[for RGBA] }-colourspace data in pixmap-DataSet-s with 2D-[x,y]-mapping of colours onto "2D-paper" )
So one can overlay / view / save pixmap-DataSET pictures via matplotlib methods.
How to make pixmap size settings then?
For "picture" objects, that are rather "special" to the unconstrained world of unlimited numerical precision and any-depth LevelOfDetail in matplotlib, there are a few settings, that come into account at the very end of the processing lifecycle - at the output-generation moment.
So,
matplotlib can instruct its (pre)selected Renderer to produce the final graphical output at a specific typographic density ... aka dpi = 60 dots-per-inch ... and also give an overall sizing ... figsize = ( 1, 1 ) ... which makes your 60 x 60 image exactly 60x60 pixels ( once you spend a bit more effort to force matplotlib to disable all edges and other layout-specific surroundings ).
The overlay composition may also use the .figimage() method, where one can additionally specify xo = x_PX_OFFSET and yo = y_PX_OFFSET details about where to start placing the picture data within a given figsize * dpi pixel-mapping area.
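A minimal sketch of this, assuming a 60 x 60 random pixmap (the array, offsets and filename are illustrative):
import numpy as np
import matplotlib.pyplot as plt

A = np.random.rand(60, 60, 3)
fig = plt.figure(figsize=(1, 1), dpi=60)  # a 60 x 60 pixel canvas
fig.figimage(A, xo=0, yo=0)               # place the pixmap at the lower-left corner
fig.savefig('myfig_figimage.png', dpi=60)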

matplotlib markers / mask on image pixels

So I have an image and a pixel mask for that image, where the mask is the same size as the image and contains values of 0 and 1. Where the mask is 0 I don't want to modify the image; where it is 1, I want to add a transparent color over that pixel of the image.
Basically I want to highlight certain segments of the image but still see what is underneath.
Now I have searched high and low but haven't found a simple way to do this. I used np.where with the mask to get the pixel locations of the 1's to use with the plot functions. I first tried scatter plots with a small marker size and no edge color (small scatter plot markers in matplotlib are always black), but the markers are not one image pixel in size; they seem to have an absolute size, so depending on the size of the figure the transparency is affected and weird patterns are created by the overlapping markers.
Just the regular pyplot plot function created the exact look I desired (where the coloring was smooth and invariant to figure size) but it also colored horizontal connections between disjoint segments in the mask (since it is drawing lines I guess), so I couldn't use that.
What worked the best was patches, which I came across in this question: (How to set a fixed/static size of circle marker on a scatter plot?). I found that rectangular patches with a width and height of 1 gave me the exact desired effect, where I could put a transparent color over certain pixels of the image. However, this produced a ton (tens of thousands) of rectangles for certain images, and so it was quite slow. Even when using a PatchCollection instead of calling add_patch every time, it was still slow.
Now I can probably just join adjacent rectangles to reduce the number of things needing to be drawn, but I was just wondering if there was an easier way to do this?
Thanks.
You can do a semitransparent overlay either using masked arrays or by setting the alpha values in an RGBA image. Here are both worked through (using the example of three semitransparent red squares placed over a circular pattern), and they give similar images (so I'll only show one):
from pylab import *
from numpy import ma
x = y = linspace(-6, 6, 100)
X, Y = meshgrid(x, y)
z3 = X*X + Y*Y # circular pattern
# first, do this with a masked array
figure()
# z4 = 3 diagonal square
# zm = a uniform image (ones), with a mask of squares (~z4)
z4 = np.repeat(np.repeat(eye(3, dtype=bool), 40, axis=0), 40, axis=1)
zm = ma.masked_where(~z4, ones((120,120)))
imshow(z3, cmap=cm.jet)
imshow(zm, cmap=cm.bwr, alpha=.3, vmin=0, vmax=1) #cm.bwr is an easy way to get red
# do this by changing alpha for each pixel
figure()
z5 = zeros((120, 120, 4), dtype=float)
z5[..., 0] = 1
z5[..., 3] = .4*z4.astype(float)
imshow(z3, cmap=cm.jet)
imshow(z5)
show()
I think both approaches can produce the same results for all cases, but:
1. the masked arrays can be a more direct approach if the mask or composition becomes complicated, and masking gives you more flexibility in drawing your overlay image since, for example, you can use colormaps rather than specifying the full RGBA for every pixel, but,
2. the masked array approach doesn't give full pixel-by-pixel control over the alpha value like RGBA does.
Here's a more complicated example using masked arrays:
z1 = cos(2*X)  # vertical bars (an alternative pattern would be z1 = sin(X*Y))
z2 = cos(5*(X+Y))
zm = ma.masked_where((z2 < .5) & (Y > 0), z1)
figure()
imshow(z3)
imshow(zm, cmap=cm.gray, alpha=.4, vmin=-2, vmax=2)
show()
It's a bit crazy, but here's what's going on: the primary image is a circular pattern that goes from blue to red (z3). Then there are vertical bars that faintly shade this (z1), but only in half of the figure and in narrow alternate diagonal bands on the other half (due to the mask).
Just to add on to what tom10 has posted, the masked arrays do work great with colormaps, but I also wrote a small function in the meantime that should work with any RGB color tuple.
import numpy as np

def overlayImage(im, mask, col, alpha):
    # expand the 2D boolean mask to cover all three colour channels
    maskRGB = np.tile(mask[..., np.newaxis], 3)
    untouched = (maskRGB == False) * im           # pixels outside the mask, unchanged
    overlayComponent = alpha * np.array(col) * maskRGB
    origImageComponent = (1 - alpha) * maskRGB * im
    return untouched + overlayComponent + origImageComponent
im is the RGB image
mask is a boolean mask of the image, such that mask.shape + (3,) == im.shape
col is just the 3-tuple RGB value you want to mask the image with
alpha is just the alpha value / transparency for the mask
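A hypothetical usage example (the image values here are assumed to be floats in [0, 1], so the colour tuple is scaled the same way):
im = np.random.rand(120, 120, 3)       # illustrative RGB image in [0, 1]
mask = np.zeros((120, 120), dtype=bool)
mask[40:80, 40:80] = True              # highlight a square region
highlighted = overlayImage(im, mask, (1.0, 0.0, 0.0), 0.4)  # red at 40% opacity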
I also needed a clear contour on my areas. Thus, you can easily add a contour plot on top: e.g., create a dummy numpy array and set a different value in each area of interest.
Here's an example built on top of tom10's answer with a different condition:
x = y = linspace(-6, 6, 100)
X, Y = meshgrid(x, y)
z3 = X*X + Y*Y  # circular pattern
# first, do this with a masked array
figure()
imshow(z3, cmap=cm.jet, extent=(-6, 6, -6, 6))
zm = ma.masked_where((z3 >= 0.7) & (z3 <= 1.5), ones(np.shape(z3)))
imshow(zm, cmap=cm.bwr, alpha=.4, vmin=0, vmax=1, extent=(-6, 6, -6, 6))  # cm.bwr is an easy way to get red
# Build a dummy array of 1s and 0s (you can play with different values to obtain different contours for different regions):
temp_vector = ones(np.shape(z3))
temp_vector[(z3 >= 0.7) & (z3 <= 1.5)] = 0.0
temp_vector[(z3 > 8.2)] = 2.0  # etc.
# Create the contour. I found only one contour necessary:
contour(X, Y, temp_vector, 1, colors=['r', 'g'])
show()
Which yields:

Python smooth curve

I have a set of very closely spaced coordinates. I am connecting those coordinates by drawing lines between them using Python's image.draw.line(). But the final curve obtained is not smooth, as the lines at the coordinates do not intersect properly. I also tried drawing arcs instead of lines, but image.draw.arc() would not take any float input for coordinates. Can anyone suggest another method to connect those points so that the final curve will be smooth?
Splines are the standard way to produce smooth curves connecting a set of points. See the Wikipedia.
In Python, you could use scipy.interpolate to compute a smooth curve:
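Here is a minimal sketch, assuming a parametric spline fit with splprep/splev (the sample coordinates are illustrative):
import numpy as np
from scipy import interpolate

x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1], dtype=float)

tck, u = interpolate.splprep([x, y], s=0)  # fit a parametric spline through the points
u_fine = np.linspace(0, 1, 200)            # densely sample the curve parameter
x_smooth, y_smooth = interpolate.splev(u_fine, tck)
# (x_smooth, y_smooth) can now be drawn as short line segments for a smooth look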
Pillow doesn't support many ways to draw a line. If you try to draw an arc, there is no option to choose the thickness!
scipy uses matplotlib to draw graphs. So if you draw lines directly with matplotlib, you can turn off the axes with the axis('off') command. For more detail, you may have a look at:
Matplotlib plots: removing axis, legends and white spaces
If you don't need axes at all, I would recommend using OpenCV instead of Pillow to handle images.
import numpy as np
import cv2

def draw_line(point_lists):
    width, height = 640, 480  # picture's size
    img = np.zeros((height, width, 3), np.uint8) + 255  # make the background white
    line_width = 1
    for line in point_lists:
        color = (123, 123, 123)  # change the color or make a color generator yourself
        pts = np.array(line, dtype=np.int32)
        cv2.polylines(img, [pts], False, color, thickness=line_width, lineType=cv2.LINE_AA)
    cv2.imshow("Art", img)
    cv2.waitKey(0)  # milliseconds, 0 means wait forever
lineType=cv2.LINE_AA will draw an antialiased line which is beautiful.

Mirror Image but wrong size

I am trying to input an image (image1), flip it horizontally, and then save it to a file (image2). This works, but not the way I want it to.
Currently this code gives me a flipped image, but it just shows the bottom-right quarter of the image, so it is the wrong size. Am I overwriting something somewhere? I just want the code to flip the image horizontally and show the whole picture flipped. Where did I go wrong?
Also, I cannot just use a mirror or reverse function; I need to write the algorithm myself.
I get the correct window size but the incorrect image size
def Flip(image1, image2):
    img = graphics.Image(graphics.Point(0, 0), image1)
    X, Y = img.getWidth(), img.getHeight()
    for y in range(Y):
        for x in range(X):
            r, g, b = img.getPixel(x, y)
            color = graphics.color_rgb(r, g, b)
            img.setPixel(X-x, y, color)
    win = graphics.GraphWin(img, img.getWidth(), img.getHeight())
    img.draw(win)
    img.save(image2)
I think your problem is in this line:
win = graphics.GraphWin(img, img.getWidth(), img.getHeight())
The first argument to the GraphWin constructor is supposed to be the title, but you are instead giving it an Image object. It makes me believe that maybe the width and height you are supplying are then being ignored. The default width and height for GraphWin is 200 x 200, so depending on the size of your image, that may be why only part of it is being drawn.
Try something like this:
win = graphics.GraphWin("Flipping an Image", img.getWidth(), img.getHeight())
Another problem is that your anchor point for the image is wrong. According to the docs, the anchor point is where the center of the image will be rendered (thus at 0,0 you are only seeing the bottom right quadrant of the picture). Here is a possible solution if you don't know what the size of the image is at the time of creation:
img = graphics.Image(graphics.Point(0, 0), image1)
img.move(img.getWidth() / 2, img.getHeight() / 2)
You are editing your source image. It would be better to create an image copy and set those pixels instead.
Create a new image for editing:
img_new = img.clone()  # note: a plain img_new = img would only alias the same object, not copy it
Assign the pixel values to that:
img_new.setPixel(X-x, y, color)
And draw that instead:
win = graphics.GraphWin(img_new, img_new.getWidth(), img_new.getHeight())
img_new.draw(win)
img_new.save(image2)
This will also check that your ranges are correct. If they are not, you will see both flipped and unflipped portions in the final image, showing which portions are outside of your ranges.
If you're not opposed to using an external library, I'd recommend the Python Imaging Library. In particular, the ImageOps module has a mirror function that should do exactly what you want.
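For instance, a short sketch of that approach (the filenames are illustrative):
from PIL import Image, ImageOps

img = Image.open('image1.png')    # hypothetical input filename
mirrored = ImageOps.mirror(img)   # flip horizontally (left to right)
mirrored.save('image2.png')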
