Render image as defined pixel squares on HTML5 canvas? - python

I have the following 6x5 image:
It appears blown up on the canvas like so:
I render it using more or less this code, this.context.putImageData(this.imageData, 0, 0), and scale the canvas with CSS (canvas {width: 100%}).
It has the following rgb values:
51.65 41.59 60.74 159.44 137.91 165.41
147.29 71.01 52.93 73.80 115.80 93.45
77.16 112.66 98.07 70.43 78.91 107.27
107.39 122.85 60.67 103.91 144.37 124.05
138.59 123.77 140.51 107.25 52.10 138.80
Why can't I see the individual pixels as blocky squares in a defined grid? Is it something to do with how images are rendered?

Yes. This is related to how the images are rendered.
When you have an image with colors, you usually have complete information about the color of every pixel, so a 100x100 image renders perfectly at a 100x100 pixel size.
But when you try to scale UP this image, you have more pixels available to fill but less information about how to fill them. So you resort to mathematical algorithms, usually known as interpolation.
Canvas will automatically interpolate the pixels to fill in gaps using nearest pixel information.
To get your desired effect of having single pixels, you need to scale up without any interpolation. To do this you can refer to this answer, or rather find several answers, now that you know the appropriate terms to describe your problem exactly.
This CSS3 solution should work in Chrome 41+ for now:
canvas {
image-rendering: pixelated;
}
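If you would rather bake the scaling into the pixel data on the Python side before handing it to the canvas, a nearest-neighbour upscale avoids interpolation entirely. A minimal sketch, assuming your image lives in a numpy array (the array contents and scale factor here are made up for illustration):

import numpy as np

# a hypothetical 5x6 grayscale image (rows x cols), values 0-255
img = np.random.randint(0, 256, size=(5, 6), dtype=np.uint8)

# nearest-neighbour upscale: each source pixel becomes a
# scale x scale block of identical values, so nothing is interpolated
scale = 20
big = np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

print(big.shape)  # (100, 120)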

Related

How to detect edge of object using OpenCV

I am trying to use OpenCV to measure the size of filament (the plastic material used for 3D printing). The idea is that I use an LED panel to illuminate the filament, then take an image with a camera, preprocess the image, apply edge detection and calculate its size. Most filaments are made of a single colour, which is easy to preprocess and gives fine results.
The problem comes with transparent filament, where I am not able to get useful results. I would like to ask for a little help, or for someone to push me in the right direction. I have already tried cropping the image to a height slightly larger than the filament and a width of just a few pixels, and calculating the size from the number of pixels in those crops, but this did not work very well. So now I am here, trying to do it with edge detection.
works well for filaments of a single colour
does not work for transparent filament
The code below works just fine for common filaments; the problem is when I try to use it for transparent filament. I have tried adjusting the thresholds for the Canny function and I have tried different colour spaces, but I am not able to get good results.
Images that may help to understand:
https://imgur.com/gallery/CIv7fxY
import cv2 as cv

image = cv.imread("../images/img_fil_2.PNG")  # load image
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)  # convert image to grayscale
edges = cv.Canny(gray, 100, 200)              # detect edges in the image
You can use the assumption that the images are taken under the same conditions.
Your main problem is that the reflections in the transparent filament are detected as edges. But, since the image is relatively simple, without any other edges, you can simply take the upper and the lower edge, and measure the distance between them.
A simple way of doing this is to take 2 vertical lines (e.g. image sides), find the edges that intersect the line (basically traverse a column in the image and find edge pixels), and connect the highest and the lowest points to form the edges of the filament. This also removes the curvature in the filament, which I assume is not needed for your application.
You might want to use 3 or 4 vertical lines, for robustness.
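A minimal sketch of that scan-line idea, building on the Canny output above (the column choices and the averaging are my own illustrative assumptions, not a tested recipe):

import cv2 as cv
import numpy as np

image = cv.imread("../images/img_fil_2.PNG")
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
edges = cv.Canny(gray, 100, 200)

def filament_width(edges, col):
    # indices of edge pixels in one vertical scan line
    rows = np.flatnonzero(edges[:, col])
    if rows.size < 2:
        return None  # no edge crosses this column
    # distance between the highest and the lowest edge pixel
    return rows[-1] - rows[0]

# sample a few columns and average, for robustness
cols = [0, edges.shape[1] // 2, edges.shape[1] - 1]
widths = [w for w in (filament_width(edges, c) for c in cols) if w is not None]
print(np.mean(widths) if widths else "no edges found")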

python imshow pixel size varies within plot

Dear stackoverflow community!
I need to plot a 2D-map in python using imshow. The command used is
plt.imshow(ux_map, interpolation='none', origin='lower', extent=[lonhg_all.min(), lonhg_all.max(), lathg_all.min(), lathg_all.max()])
The image is then saved as follows
plt.savefig('rdv_cr%s_cmlon%s_ux.png' % (2097, cmlon_ref))
and looks like this:
The problem is that when zooming into the plot one notices that the pixels have different shapes (e.g. different widths). This is illustrated in the zoomed part below (taken from the top region of the first image):
Is there any reason for this behaviour? I input a rectangular grid for my data, but I suppose the problem does not have to do with the data itself; it is probably something related to rendering. I'd expect all pixels to be of equal shape, but as can be seen they differ in both width and height. By the way, this also occurs in the interactive matplotlib plot. However, when zooming in there, they suddenly all become equally shaped.
I'm not sure whether
https://github.com/matplotlib/matplotlib/issues/3057/ and the link therein might be related, but I can try playing around with dpi values. In any case, if anybody knows why this happens, could that person provide some background on why the computer cannot display the plot as intended using the commands above?
Thanks for your responses!
This is related to the way the image is mapped to the screen. To determine the color of a pixel in the screen, the corresponding color is sampled from the image. If the screen area and the image size do not match, either upsampling (image too small) or downsampling (image too large) occurs.
You observed a case of upsampling. For example, consider drawing a 4x4 image on a region of 6x6 pixels on the screen. Sometimes two screen pixels sample from the same image pixel, and sometimes only one does. Here, we observe an extreme case of differently sized pixels.
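To see this concretely, here is a toy calculation (my own illustration, not from the question) of which source pixel each of 6 screen pixels samples when a 4-pixel-wide image is stretched to 6 screen pixels:

# map each of 6 screen pixels back to its source pixel in a 4-pixel image
src = [int(i * 4 / 6) for i in range(6)]
print(src)  # [0, 0, 1, 2, 2, 3] -- pixels 0 and 2 cover two screen pixels,
            # pixels 1 and 3 only one: "differently sized" pixels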
When you zoom in in the interactive view, this effect seems to disappear. That is because you suddenly map the image to a much larger number of pixels. If one image pixel is enlarged to, say, 10 screen pixels and another to 11, you hardly notice the difference. The effect is most apparent when the image size nearly matches the screen resolution.
A solution to work around this effect is to use interpolation, which may lead to an undesirably blurred look. To reduce the blur you can
play with different interpolation functions (try, for example, 'kaiser'),
or up-scale the image by a constant factor using nearest-neighbour interpolation (i.e. replace each pixel in the image by a block of pixels with the same color). Then any blurring will only affect the edges of the blocks.
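A minimal sketch of the second option (the data array is a random stand-in for ux_map, and the output file name is made up):

import numpy as np
import matplotlib.pyplot as plt

ux_map = np.random.rand(40, 60)  # stand-in for the real data

# blow each data cell up into a 10x10 block of identical values,
# so any interpolation blur only touches the block edges
factor = 10
ux_big = np.kron(ux_map, np.ones((factor, factor)))

plt.imshow(ux_big, interpolation='kaiser', origin='lower')
plt.savefig('ux_upscaled.png', dpi=150)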

Crop out complementary regions from an image

I have coordinate regions of an image and I am cropping out sub-images from those coordinates. Now I need the complementary regions of what was cut out, with respect to the original image. How do I go about this using Pillow?
If you crop a region you basically create a new, smaller image.
A complementary operation would be to fill the region with some value you consider invalid, or with zero, so that you still have an image of the original size. Technically you cannot remove a region from an image; you can only change or ignore it.
PIL.ImageDraw.Draw.rectangle(xy, fill=None, outline=None)
is something I found quickly. Maybe there is something better; just crawl through the reference.
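A minimal sketch of that fill-the-region idea (the box coordinates and file names are made up for illustration):

from PIL import Image, ImageDraw

im = Image.open("original.png")      # hypothetical file name
box = (50, 50, 200, 150)             # the region that was cropped out

region = im.crop(box)                # the crop, as before

# "complement": keep the original size, but blank out the cropped box
complement = im.copy()
draw = ImageDraw.Draw(complement)
draw.rectangle(box, fill=(0, 0, 0))  # assumes an RGB image; fill with
                                     # black or any sentinel value
complement.save("complement.png")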

PIL: overlaying images with different dimensions and aspect ratios

I've been attempting to overlay two images in Python so that their coordinates match: the top left and bottom right corners have the same coordinates and their aspect ratios are almost identical, bar a few pixels, although the images have different resolutions.
Using PIL I have been able to overlay the images, though after overlaying them the output image is square with the resolution of the background image, and the foreground image is re-sized incorrectly (as far as I can see). I must be doing something wrong.
from PIL import Image
#load images
background = Image.open('ndvi.png')
foreground = Image.open('out.png')
#resizing
foreground.thumbnail((643,597),Image.ANTIALIAS)
#overlay
background.paste(foreground, (0, 0), foreground)
#save
background.save("overlay.png")
#display
background.show()
When dropping the images into something horrible like PowerPoint, the image aspects are almost identical. I've included an example image: the one on the left is my by-hand overlay and the one on the right is the output from Python. At some point in the code the background is squashed vertically, which also affects the overlay. I'd like to be able to do this in Python and have it correctly look like the left-hand image.
A solution upfront.
Background image: width / height / ratio = 300 / 375 / 0.800
Foreground image: width / height / ratio = 400 / 464 / 0.862
Overlay
from PIL import Image

imbg = Image.open("bg.png")
imfg = Image.open("fg.png")

# force the foreground to the exact dimensions of the background,
# deliberately changing its aspect ratio
imbg_width, imbg_height = imbg.size
imfg_resized = imfg.resize((imbg_width, imbg_height), Image.LANCZOS)

# paste with the resized foreground itself as the mask (its alpha channel)
imbg.paste(imfg_resized, None, imfg_resized)
imbg.save("overlay.png")
Discussion
The most important pieces of information in your question were:
the aspect ratios of your foreground and background images are not equal, but similar
the top left and bottom right corners of both images need to be aligned in the end.
The conclusion from these points is: the aspect ratio of one of the images has to change. This can be achieved with the resize() method (not with thumbnail(), as explained below). To summarize, the goal simply is:
Resize the image with larger dimensions (foreground image) to the exact dimensions of the smaller background image. That is, do not necessarily maintain the aspect ratio of the foreground image.
That is what the code above is doing.
Two comments on your approach:
First of all, I recommend using the newest release of Pillow (Pillow is the continuation project of PIL, it is API-compatible). In the 2.7 release they have largely improved the image re-scaling quality. The documentation can be found at http://pillow.readthedocs.org/en/latest/reference.
Then, you obviously need to take control of how the aspect ratio of both images evolves throughout your program. thumbnail(), for instance, does not alter the aspect ratio of the image, even if your size tuple does not have the same aspect ratio as the original image. Quote from the thumbnail() docs:
This method modifies the image to contain a thumbnail version of itself, no larger than the given size. This method calculates an appropriate thumbnail size to preserve the aspect of the image.
So, I am not sure where you were going exactly with your (643,597) tuple and if you are possibly relying on the thumbnail to have this exact size afterwards.
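A quick way to see what thumbnail() actually does with such a tuple (a toy check of my own, using a made-up size with the same 0.862 ratio as the foreground image):

from PIL import Image

im = Image.new("RGB", (800, 928))  # same 0.862 ratio as the foreground
im.thumbnail((643, 597))           # request a box with a different ratio
print(im.size)                     # roughly (515, 597): shrunk to fit the
                                   # box, aspect ratio preserved

Note also that thumbnail() only ever shrinks; an image already smaller than the given box is left untouched.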

How to calculate Google Earth polygon size in pixels

OS X 10.7.5, Python 2.7, GE 7.1.2.2041
I have some .kml data that includes a moderately large number of polygons. Each polygon has an image associated with it. I want to use each image in a <GroundOverlay> mode with its associated polygon.
The raw images are all a bit larger than the polygons. I can easily resize the images with the Python Imaging Library (PIL), but the amount of over-sizing is not consistent across the set: some are only ~5% larger, while others are up to ~20% larger.
What I would like to do is find (or calculate) the approximate sizes of the polygons in pixels so that I can automate the resizing of their associated images with that data.
Any suggestions?
You could use the width and height of the polygon/rectangle in longitude and latitude coordinates. Then use the ratio of that rectangle to the image size and it should fit.
Edit: I should note that depending on where your images are going to show up you might need some special math for the dateline (-180 to 180) or prime-meridian (0 to 360).
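A minimal sketch of that ratio idea with PIL (the bounding-box values and file names are made up; note this simple degree ratio ignores the fact that a degree of longitude shrinks with latitude):

from PIL import Image

# hypothetical polygon bounding box, in degrees
west, east = -105.2, -104.6
south, north = 39.5, 40.1

img = Image.open("overlay_raw.png")  # hypothetical raw image
w, h = img.size

# aspect ratio of the geographic rectangle (width over height, in degrees)
geo_ratio = (east - west) / (north - south)

# keep the image height and adjust the width to match the polygon's ratio
new_w = int(round(h * geo_ratio))
img.resize((new_w, h), Image.LANCZOS).save("overlay_fit.png")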
