Why does PIL's draw polygon not accept a numpy array? - python

This code works as expected:
import numpy as np
from PIL import Image, ImageDraw
A = (
    (  2,   2),
    (  2, 302),
    (302, 302),
    (302,   2)
)
img = Image.new('L', (310, 310), 0)
ImageDraw.Draw(img).polygon(A, outline=1, fill=1)
mask = np.array(img)
print(mask)
However, if the A matrix is provided as numpy array:
A = np.array(
    [[  2,   2],
     [  2, 302],
     [302, 302],
     [302,   2]], dtype="int32"
)
it produces a completely wrong result. I also tried flattening the A array, but that does not help.
Am I missing something? Can I somehow stuff the numpy array directly into PIL?

If the call interface says to use a list of tuples or a list of interleaved numeric values,
it is best to pass exactly that: a list of tuples, or a sequence / list of interleaved values:
PIL.ImageDraw.ImageDraw.polygon( xy, fill = None, outline = None )
Draws a polygon.
The polygon outline consists of straight lines between the given coordinates, plus a straight line between the last and the first coordinate.
xy – Sequence of either 2-tuples like [(x, y), (x, y), ...] or numeric values like [x, y, x, y, ...].
Can I stuff ..
Using
>>> xy
array([[ 2,  3],
       [10,  3],
       [10,  0],
       [ 2,  0]])
>>> xy.flatten().tolist()
[2, 3, 10, 3, 10, 0, 2, 0]
>>>
will work and meets the documented call interface of PIL's ImageDraw.polygon().
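Putting it together with the question's own arrays, a minimal sketch (assuming nothing beyond the snippets above) converts the numpy array into one of the documented xy forms before drawing:
import numpy as np
from PIL import Image, ImageDraw

A = np.array([[2, 2], [2, 302], [302, 302], [302, 2]], dtype="int32")

img = Image.new('L', (310, 310), 0)
# Convert to one of the documented xy forms before drawing:
# the interleaved list [x, y, x, y, ...] ...
ImageDraw.Draw(img).polygon(A.flatten().tolist(), outline=1, fill=1)
# ... or, equivalently, a list of plain 2-tuples:
# ImageDraw.Draw(img).polygon([tuple(map(int, p)) for p in A], outline=1, fill=1)
mask = np.array(img)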

Related

Extract sub arrays based on kernel in numpy

I would like to know if there is an efficient method to get sub-arrays from a larger numpy array.
What I have is an application of np.where: I iterate 'manually' over x and y as offsets and apply np.where with a kernel to each properly-sized rectangle extracted from the larger array.
But is there a more direct approach in numpy's collection of methods?
import numpy as np
example = np.arange(20).reshape((5, 4))
# e.g. a cross kernel
a_kernel = np.asarray([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
np.where(a_kernel, example[1:4, 1:4], 0)
# returns
# array([[ 0,  6,  0],
#        [ 9, 10, 11],
#        [ 0, 14,  0]])
def arrays_from_kernel(a, a_kernel):
    width, height = a_kernel.shape
    y_max, x_max = a.shape
    return [np.where(a_kernel, a[y:(y + height), x:(x + width)], 0)
            for y in range(y_max - height + 1)
            for x in range(x_max - width + 1)]
sub_arrays = arrays_from_kernel(example, a_kernel)
This returns the arrays I need for further processing.
# [array([[0, 1, 0],
#         [4, 5, 6],
#         [0, 9, 0]]),
#  array([[ 0,  2,  0],
#         [ 5,  6,  7],
#         [ 0, 10,  0]]),
#  ...
#  array([[ 0,  9,  0],
#         [12, 13, 14],
#         [ 0, 17,  0]]),
#  array([[ 0, 10,  0],
#         [13, 14, 15],
#         [ 0, 18,  0]])]
The context: similar to a 2D convolution, I would like to apply a custom function to each of the subarrays (e.g. the product of squared numbers).
At the moment, you're manually advancing a sliding window over the data - stride tricks to the rescue! (And no, I didn't just make that up - there's actually a submodule called stride_tricks in numpy!) Instead of manually building windows into the data, and calling np.where() on them, if you had the windows in an array, you could call np.where() just once. Stride tricks allow you to create such an array without even having to copy the data.
Let me explain. Normal slices in numpy create views into the original data instead of copies. This is done by referring to the original data, but changing the strides used to access the data (i.e. how much to jump between two elements or two rows, and so on). Stride tricks allow you to modify those strides more freely than just slicing and reshaping does, so you can, e.g., iterate over the same data more than once, which is useful here.
Let me demonstrate:
import numpy as np
example = np.arange(20).reshape((5, 4))
a_kernel = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
def sliding_window(data, win_shape, **kwargs):
    assert data.ndim == len(win_shape)
    shape = tuple(dn - wn + 1 for dn, wn in zip(data.shape, win_shape)) + win_shape
    strides = data.strides * 2
    return np.lib.stride_tricks.as_strided(data, shape=shape, strides=strides, **kwargs)

def arrays_from_kernel(a, a_kernel):
    windows = sliding_window(a, a_kernel.shape)
    return np.where(a_kernel, windows, 0)

sub_arrays = arrays_from_kernel(example, a_kernel)
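As a quick check (a sketch of what this produces for the 5x4 example above), sub_arrays is now a single 4-D array holding one masked 3x3 window per valid position, matching the list produced by the manual version:
print(sub_arrays.shape)   # (3, 2, 3, 3): 3 x 2 window positions, each 3x3
print(sub_arrays[0, 0])
# [[0 1 0]
#  [4 5 6]
#  [0 9 0]]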
The scipy.ndimage module offers a number of filters -- one of which might meet your needs. If none of those filters do what you want, you could use ndimage.generic_filter
to call a custom function on each subarray. ndimage.generic_filter is not as fast as the other ndimage filters, however.
For example,
import numpy as np
example = np.arange(20).reshape((5, 4))
a_kernel = np.asarray([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
# def arrays_from_kernel(a, a_kernel):
#     width, height = a_kernel.shape
#     y_max, x_max = a.shape
#     return [np.where(a_kernel, a[y:(y + height), x:(x + width)], 0)
#             for y in range(y_max - height + 1)
#             for x in range(x_max - width + 1)]
# sub_arrays = arrays_from_kernel(example, a_kernel)
# for arr in sub_arrays:
#     print(arr)
#     print('-'*80)
import scipy.ndimage as ndimage
def func(x):
    # reject subarrays that extend beyond the border of the `example` array
    if not np.isnan(x).any():
        y = np.zeros_like(a_kernel, dtype=example.dtype)
        np.put(y, np.flatnonzero(a_kernel), x)
        print(y)
    # Instead of returning 0, you can perform your desired computation on the subarray here.
    # Note that you may not need the 2D array y; often, you only need the values in the 1D array x.
    return 0

result = ndimage.generic_filter(example, func, footprint=a_kernel, mode='constant', cval=np.nan)
For the particular problem of computing the product of squares for each subarray, you could convert the product into a sum by taking advantage of the fact that A * B = exp(log(A) + log(B)). This lets you express the computation as an ordinary convolution, and using ndimage.convolve can then improve performance a lot. The amount of improvement depends on the size of example:
import numpy as np
import scipy.ndimage as ndimage
import perfplot
a_kernel = np.asarray([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
def orig(example, a_kernel=a_kernel):
    def arrays_from_kernel(a, a_kernel):
        width, height = a_kernel.shape
        y_max, x_max = a.shape
        return [
            np.where(a_kernel, a[y : (y + height), x : (x + width)], 1)
            for y in range(y_max - height + 1)
            for x in range(x_max - width + 1)
        ]
    return [np.prod(x) ** 2 for x in arrays_from_kernel(example, a_kernel)]

def alt(example, a_kernel=a_kernel):
    logged = np.log(example)
    result = ndimage.convolve(logged, a_kernel, mode="constant", cval=0)[1:-1, 1:-1]
    return (np.exp(result) ** 2).ravel()

def make_example(N):
    return np.random.random(size=(N, N))

def check(A, B):
    return np.allclose(A, B)

perfplot.show(
    setup=make_example,
    kernels=[orig, alt],
    n_range=[2 ** k for k in range(2, 11)],
    logx=True,
    logy=True,
    xlabel="len(example)",
    equality_check=check,
)

Converting pygame surface buffer bytes into numpy array

Anybody know why this array I created from pygame's get_buffer method has the R, G, B values reversed? I want to create an array with the colour values in the same order I put them in - like [8, 16, 32, 0]. Have I done something wrong, or is this just the way pygame stores pixel data?
>>> import pygame
>>> import pygame.gfxdraw
>>> import numpy as np
>>> background_colour = (1, 1, 1)
>>> width, height = (256, 256)
>>> screen = pygame.Surface((width, height))
>>> pygame.draw.rect(screen, (8, 16, 32), (0, 0, 100, 100), 0)
<rect(0, 0, 100, 100)>
>>> s = screen.get_buffer()
>>> x = np.fromstring(s.raw, dtype='b').reshape(height, width, 4)
>>> x[0, 0]
array([32, 16, 8, 0], dtype=int8)
I tried this, but I lose one of the channel values:
>>> y = x[:, :, 3:0:-1]
>>> y[0, 0]
array([ 0, 8, 16], dtype=int8)
(I'm using numpy version 1.8.2 so I don't have np.flip).
I realised there is a much better way to do this. The pygame.surfarray module has various methods that actually create numpy arrays for you!
>>> x3 = pygame.surfarray.pixels3d(screen)
>>> x3.shape
(256, 256, 3)
>>> x3[0, 0]
array([ 8, 16, 32], dtype=uint8)
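If you do need to stay with the raw get_buffer() data, a small sketch (assuming the buffer really is laid out B, G, R, A, as the [32, 16, 8, 0] output above suggests) is to reorder the last axis with fancy indexing, which keeps all four channels and works on numpy 1.8.2 without np.flip:
>>> rgba = x[:, :, [2, 1, 0, 3]]   # B,G,R,A -> R,G,B,A
>>> rgba[0, 0]
array([ 8, 16, 32,  0], dtype=int8)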

How can I mirror a polygon using Python?

I have a set of images over which polygons are drawn. I have the points of those polygons and I draw these using Shapely and check whether certain points from an eye tracker fall into the polygons.
Now, some of those images are mirrored but I do not have the coordinates of the polygons drawn in them. How can I flip the polygons horizontally? Is there a way to do this with Shapely?
If you want to reflect a polygon with respect to a vertical axis, i.e. flip it horizontally, one option is to use the scale transformation (with a negative unit scaling factor) provided by shapely.affinity, or to use a custom transformation:
from shapely.affinity import scale
from shapely.ops import transform
from shapely.geometry import Polygon
def reflection(x0):
    return lambda x, y: (2*x0 - x, y)
P = Polygon([[0, 0], [1, 1], [1, 2], [0, 1]])
print(P)
#POLYGON ((0 0, 1 1, 1 2, 0 1, 0 0))
Q1 = scale(P, xfact = -1, origin = (1, 0))
Q2 = transform(reflection(1), P)
print(Q1)
#POLYGON ((2 0, 1 1, 1 2, 2 1, 2 0))
print(Q2)
#POLYGON ((2 0, 1 1, 1 2, 2 1, 2 0))
By multiplying by [[1, 0], [0, -1]], you can get the vertically flipped shape (I tested this in a Jupyter notebook).
import numpy as np
from shapely.geometry import Polygon

# (run in a Jupyter notebook, where display() is available)
pts = np.array([[153, 347],
                [161, 323],
                [179, 305],
                [195, 315],
                [184, 331],
                [177, 357]])
display(Polygon(pts))
display(Polygon(pts.dot([[1, 0], [0, -1]])))
And if you multiply by [[-1, 0], [0, 1]], you will get the horizontally flipped shape.
See linear transformations to understand why this works.
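Since the question is about polygons drawn over mirrored images, here is a minimal sketch (img_width is a hypothetical image width, not taken from the question) that combines the matrix flip with a shift back into the image frame; this is the same x -> width - x reflection described above:
import numpy as np
from shapely.geometry import Polygon

img_width = 400   # hypothetical image width

pts = np.array([[153, 347], [161, 323], [179, 305],
                [195, 315], [184, 331], [177, 357]])

# Negate x, then shift by the image width so the polygon lands back
# inside the image; equivalent to x -> img_width - x.
flipped = pts.dot([[-1, 0], [0, 1]]) + [img_width, 0]
mirrored = Polygon(flipped)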

Coding a circular filter in Python

I found a code snippet for making a circular filter using scipy and I'd like to understand how it works. I know there's a better one in skimage, but I'm interested in what's going on in this one.
import numpy as np
from scipy.ndimage.filters import generic_filter as gf

# Define physical shape of filter mask
def circular_filter(image_data, radius):
    kernel = np.zeros((2*radius+1, 2*radius+1))
    y, x = np.ogrid[-radius:radius+1, -radius:radius+1]
    mask = x**2 + y**2 <= radius**2
    kernel[mask] = 1
    filtered_image = gf(image_data, np.median, footprint=kernel)
    return filtered_image
But I'm not sure I understand perfectly what's going on. In particular, what exactly do the lines
y, x = np.ogrid[-radius:radius+1, -radius:radius+1]
mask = x**2 + y**2 <= radius**2
kernel[mask] = 1
do?
I posted this as an answer to one of my previous questions, but it wasn't replied to, so I'm posting it as a new question.
Looking at your code in detail:
kernel = np.zeros((2*radius+1, 2*radius+1))
y, x = np.ogrid[-radius:radius+1, -radius:radius+1]
mask = x**2 + y**2 <= radius**2
kernel[mask] = 1
The first line:
kernel = np.zeros((2*radius+1, 2*radius+1))
creates a 2-d array of zeros, with a center point and "radius" points on either side. For radius = 2, you would get:
# __r__ +1 __r__
[ 0, 0, 0, 0, 0, ] #\
[ 0, 0, 0, 0, 0, ] #_} r
[ 0, 0, 0, 0, 0, ] # +1
[ 0, 0, 0, 0, 0, ] #\
[ 0, 0, 0, 0, 0, ] #_} r
Next, you get two arrays from the open mesh grid created by numpy.ogrid. Mesh grids are a "trick" in numpy that involves storing a "parallel" array or matrix that holds the x or y coordinate of a particular cell at the location of that cell.
For example, a y-mesh grid might look like this:
[ 0, 0, 0, 0, 0, ]
[ 1, 1, 1, 1, 1, ]
[ 2, 2, 2, 2, 2, ]
[ 3, 3, 3, 3, 3, ]
[ 4, 4, 4, 4, 4, ]
And an x-mesh grid might look like this:
[ 0, 1, 2, 3, 4, ]
[ 0, 1, 2, 3, 4, ]
[ 0, 1, 2, 3, 4, ]
[ 0, 1, 2, 3, 4, ]
[ 0, 1, 2, 3, 4, ]
If you look at them, you'll realize that Y_grid[y][x] == y and X_grid[y][x] == x, which is so often useful that numpy has more than one function to support it. ;-)
An open mesh grid is similar to a closed one, except that it only has "one dimension." That is, instead of a pair of (for example) 5x5 arrays, you get a 1x5 array and a 5x1 array. That's what ogrid does - it returns two open grids. The values are from -radius to radius+1, according to python rules (meaning the radius+1 is left out):
y, x = np.ogrid[-radius:radius+1, -radius:radius+1]
So y is a 5x1 numpy array holding -2..2 (inclusive), and x is a 1x5 array holding -2..2 (inclusive). The next step is to build a boolean mask - that is, an array full of boolean values. As you know, when you operate on a numpy array, you get another numpy array, so combining the two arrays in an expression with a constant produces another array:
mask = x**2 + y**2 <= radius**2
The value of mask is going to be a 2-color bitmap, where one color is "True" and the other color is "False." The bitmap will describe a solid circle or disk. (Because of the <= relation. Remember that x and y contain -2..2, not 0..4.)
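For radius = 2 you can check this directly in an interpreter (a quick sketch of the open grids and the resulting boolean disk):
>>> import numpy as np
>>> radius = 2
>>> y, x = np.ogrid[-radius:radius+1, -radius:radius+1]
>>> y.shape, x.shape
((5, 1), (1, 5))
>>> (x**2 + y**2 <= radius**2).astype(int)
array([[0, 0, 1, 0, 0],
       [0, 1, 1, 1, 0],
       [1, 1, 1, 1, 1],
       [0, 1, 1, 1, 0],
       [0, 0, 1, 0, 0]])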
Finally, you use the boolean mask to index into the kernel array (of zeroes), setting the selected entries to one wherever the mask is "True":
kernel[mask] = 1
At this point, kernel looks like:
# __r__ +1 __r__
[ 0, 0, 1, 0, 0, ] #\
[ 0, 1, 1, 1, 0, ] #_} r
[ 1, 1, 1, 1, 1, ] # +1
[ 0, 1, 1, 1, 0, ] #\
[ 0, 0, 1, 0, 0, ] #_} r
I'm not familiar with SciPy, but I'll give it a shot and try to explain the basic concepts.
This entire function's purpose is to alter the original image by applying a filter. The filter could do a lot of things, from changing the contrast of the image to adding special effects.
Let's go through the different lines:
kernel = np.zeros((2*radius+1, 2*radius+1))
In this line, a small (2*radius+1) by (2*radius+1) array of zeros is created (hence the zeros function being used). This is not a copy of the image; it is the blank template for the filter's footprint, onto which the circular mask will be applied next.
y, x = np.ogrid[-radius:radius+1, -radius:radius+1]
This is creating what is known as a "meshgrid" or a multi-dimensional grid. This is to create the circular "mask". Just like how on a graph, x and y axes have evenly spaced scaling, the same is necessary here in the meshgrid.
The x and y variables in this case store evenly spaced values that serve as the axes' scaling.
mask = x**2 + y**2 <= radius**2
Here, a "mask" is being created. Notice how the x and y variables are used in a Pythagorean inequality (important to see that it's not just a circle but a disk), just like they would be in a mathematical sense. The mask is a boolean array that is True at exactly those positions of the kernel that lie inside a disk of the given radius.
kernel[mask] = 1
This is where the mask is applied to the kernel array created earlier: every kernel position covered by the disk is set to 1, while the rest of the kernel stays zero (as set in the first line). Notice how the dimensions of kernel and mask match; both are multi-dimensional. The result is a disk-shaped footprint.
filtered_image = gf(image_data, np.median, footprint = kernel)
This is the final part where everything is pieced together. There is the original data stored in image_data, and there is the kernel, the disk-shaped footprint built above that says which neighbours around each pixel should be looked at. Both are passed as parameters into the actual filter function gf (which stands for generic_filter); it slides the footprint over the image, hands the covered values to np.median, and the output is a new filtered image.
This is a core concept in image filtering, and if you want to learn more about it, I suggest starting out by learning basic signal processing concepts. Signal processing courses cover the mathematics of how these concepts work, usually in fairly abstract terms, because the same ideas apply to many different domains.
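As a quick, runnable illustration of the same pipeline (a sketch, not from the original post; it uses radius 1 so the footprint is the small plus-shaped disk):
import numpy as np
from scipy.ndimage import generic_filter

image = np.arange(25, dtype=float).reshape(5, 5)

# Build the radius-1 disk footprint exactly as in the question's code.
radius = 1
y, x = np.ogrid[-radius:radius+1, -radius:radius+1]
kernel = np.zeros((2*radius+1, 2*radius+1))
kernel[x**2 + y**2 <= radius**2] = 1   # [[0,1,0],[1,1,1],[0,1,0]]

# Median of the values under the footprint, centred at each pixel.
filtered = generic_filter(image, np.median, footprint=kernel)
print(filtered)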

Rotate small portion of an array by 90 degrees

I want to rotate an array, but not as a whole; only a small portion of it.
I have a 512x512 array (basically a Gaussian circle at the center (150, 150) with radius 200). Now I want to rotate only a small portion of the array (centered around (150, 150) with radius 100) by 90 degrees. Initially I used numpy's rot90, but it rotates the entire array, which is not what I want.
If you can describe the elements that you would like rotated using advanced indexing, then you should be able to perform the rotation using something like the following (assuming your array is called arr):
arr[rs:re,cs:ce] = np.rot90(np.copy(arr[rs:re,cs:ce]))
Here rs, re, cs, and ce would signify the row-start and row-end of a slice, and the column-start and column-end of a slice, respectively.
Here is an example of why the np.copy call is necessary (at least in numpy 1.3.0):
>>> import numpy as np
>>> m = np.array([[i]*4 for i in range(4)])
>>> m
array([[0, 0, 0, 0],
       [1, 1, 1, 1],
       [2, 2, 2, 2],
       [3, 3, 3, 3]])
>>> m[1:3,1:3] = np.rot90(m[1:3,1:3]) # rotate middle 2x2
>>> m
array([[0, 0, 0, 0],
       [1, 1, 2, 1],    # got 1, 2   expected 1, 2
       [2, 1, 1, 2],    #     1, 1            1, 2
       [3, 3, 3, 3]])
Here is some fuller code that does what F.J. has already explained:
import numpy as np
import scipy.misc

def circle(im, centre_x, centre_y, radius):
    grid_x, grid_y = np.mgrid[0:im.shape[0], 0:im.shape[1]]
    return (grid_x - centre_x)**2 + (grid_y - centre_y)**2 < radius**2

centre_x, centre_y, radius = 150, 200, 100
x_slice = slice(centre_x - radius, centre_x + radius)
y_slice = slice(centre_y - radius, centre_y + radius)

im = scipy.misc.imread('1_tree.jpg')
rotated_square = np.rot90(im[x_slice, y_slice].copy())
im[circle(im, centre_x, centre_y, radius)] = rotated_square[circle(rotated_square,
                                                                   radius, radius, radius)]
scipy.misc.imsave('sdffs.png', im)
