Python numpy array manipulation

I need to manipulate a numpy array.
My array has the following format:
x = [1280][720][4]
The array stores image data in the third dimension:
x[0][0] = [Red, Green, Blue, Alpha]
Now I need to manipulate my array into the following form:
x = [1280][720]
x[0][0] = (Red + Green + Blue) / 3
My current code is extremely slow and I want to use numpy array manipulation to speed it up:
newx = np.zeros((1280, 720))
for a in range(1280):
    for b in range(720):
        newx[a][b] = (x[a][b][0] + x[a][b][1] + x[a][b][2]) / 3
x = newx
Also, if possible, I need the code to work for variable array sizes.
Thanks a lot!

Use the numpy.mean function:
import numpy as np
n = 1280
m = 720
# Generate an n * m * 4 matrix with random values
x = np.round(np.random.rand(n, m, 4) * 10)
# Calculate the mean over the first 3 values along axis 2 (counting axes from 0)
xnew = np.mean(x[:, :, 0:3], axis=2)
x[:, :, 0:3] gives you the first 3 values in the 3rd dimension; see: numpy indexing.
axis=2 specifies along which axis of the matrix the mean value is calculated.
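Since the slice and the mean both use the array's own shape, this already handles variable array sizes. A quick sanity check (a minimal sketch with an arbitrary smaller array):
import numpy as np

x = np.random.rand(64, 48, 4)
xnew = np.mean(x[:, :, 0:3], axis=2)
print(xnew.shape)  # (64, 48)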

Slice the alpha channel out of the array, and then sum the array along the RGB axis and divide by 3:
x = x[:,:,:-1]
x_sum = x.sum(axis=2)
x_div = x_sum / float(3)
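For integer image data (e.g. uint8), ndarray.sum promotes small integer dtypes to at least the platform integer before the division, so overflow is not a concern. The same idea as a one-liner (a sketch, assuming x still includes its alpha channel):
x_mean = x[:, :, :3].sum(axis=2) / 3.0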

Related

Python make montage of images stored in a 4D numpy array

I have a stack of images stored in a 4D array, e.g. [0, 0, :, :] is the image at location (0, 0). I want to make a montage of the images, store it in a 2D array, do something with the images, and then transfer the montage back into a 4D array. How can I manage this with numpy? Following is a schematic of what I want to do; it is shown with a 3D array, but I think you can get the idea.
The first part of the operation can be carried out using np.block. You would need to convert to a non-array sequence type for the outer dimensions:
l = [list(x) for x in arr]
montage = np.block(l)
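A minimal sketch of that idea, assuming arr is a (2, 2) grid of (5, 5) single-channel images:
import numpy as np

arr = np.arange(2 * 2 * 5 * 5).reshape(2, 2, 5, 5)
l = [list(row) for row in arr]  # list of lists of 2-D blocks
montage = np.block(l)
print(montage.shape)  # (10, 10)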
Alternatively, you can just arrange your dimensions the way you like first, then reshape. The key is to remember that later dimensions get raveled together. So if you have an array with (A, B) elements, each of which is an (M, N) image, the result should be an (A * M, B * N) image. You want the original image pixels from each row to stay contiguous, but the rows to be concatenated. So transpose and reshape like this:
a, b, m, n = arr.shape
montage = arr.transpose(0, 2, 1, 3).reshape(a * m, b * n)
You can reshape back using the inverse operation fairly easily:
stack = montage.reshape(a, m, b, n).transpose(0, 2, 1, 3)
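A quick round-trip check of the two operations (a sketch with arbitrary small dimensions):
import numpy as np

arr = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
a, b, m, n = arr.shape
montage = arr.transpose(0, 2, 1, 3).reshape(a * m, b * n)
stack = montage.reshape(a, m, b, n).transpose(0, 2, 1, 3)
assert np.array_equal(arr, stack)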
This is actually the default behavior of np.reshape(): just calculate how wide/tall the collage will be, then call np.reshape; calling reshape again reverses it. Note that a plain reshape keeps ravel order, so the round trip is exact, but the intermediate array is not a spatial 2x2 grid of the images; for a true grid montage, use the transpose-based answer above.
import numpy as np
# placeholder data -- 4 images that are 5x5 RGB
image = np.arange(4 * 5 * 5 * 3).reshape(4, 5, 5, 3)
# flatten into one collage-sized array (ravel order, not a spatial grid)
collage = image.reshape(10, 10, 3)
# reshape back to the original stack
result = collage.reshape(4, 5, 5, 3)
assert np.array_equal(image, result)
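To see the difference concretely, compare the plain reshape with a transpose-based spatial 2x2 arrangement (a sketch reusing the placeholder data above):
flat = image.reshape(10, 10, 3)
spatial = image.reshape(2, 2, 5, 5, 3).transpose(0, 2, 1, 3, 4).reshape(10, 10, 3)
print(np.array_equal(flat, spatial))  # False: ravel order is not grid order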
Edit: I misunderstood the question. I assumed that the 4D array was a 1D list of NxMx3 RGB images. If, instead, it is a 2D grid of 2D (single-channel) images, I can't think of a clever way to do it with numpy operations. But it shouldn't be too slow to just use a Python for-loop.
(assuming row-major order)
# rows   = number of rows in the image grid
# cols   = number of cols in the image grid
# height = height of each image
# width  = width of each image
rows, cols, height, width = images.shape
collage = np.empty((rows * height, cols * width), dtype=images.dtype)
for i in range(rows):
    for j in range(cols):
        y = i * height
        x = j * width
        collage[y:y+height, x:x+width] = images[i, j]
Then to reverse it, just flip the assignment:
result = np.empty((rows, cols, height, width), dtype=collage.dtype)
for i in range(rows):
    for j in range(cols):
        y = i * height
        x = j * width
        result[i, j, :, :] = collage[y:y+height, x:x+width]
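For what it's worth, the transpose/reshape trick from the earlier answer also covers this grid case without loops (a sketch, assuming images has shape (rows, cols, height, width)):
collage = images.transpose(0, 2, 1, 3).reshape(rows * height, cols * width)
result = collage.reshape(rows, height, cols, width).transpose(0, 2, 1, 3)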

pairwise subtraction of arrays in python

I have two matrices, A of shape 512*3 and B of shape 1024*3
I want to calculate pairwise subtraction between their rows, so the result would be of shape 512*1024*3
(they are actually arrays of 3D point coordinates : x , y , z and I eventually want to find k nearest points from B to every point in A)
and I can't use for loops. Is there any Pythonic way to do this?
Thank you.
From the reference I linked in my previous comment:
http://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc
You are trying to do exactly what that reference describes. Just follow the example, as in:
import numpy as np
np.random.seed(123)
a = np.random.uniform(size=(8,3)) # or (512,3)
b = np.random.uniform(size=(16,3)) # or (1024,3)
diff = a[:, np.newaxis, :] - b[np.newaxis, :, :]  # shape (8, 16, 3), i.e. (len(a), len(b), 3)
dist = np.sqrt(np.sum(diff**2, axis=-1))          # shape (8, 16)
The difference:
diff = A[:, np.newaxis] - B[np.newaxis, :]
The closest k points in B for each point in A:
k = 5
dists = np.sum(np.square(A[:, np.newaxis] - B[np.newaxis, :]), axis=-1)
top_k = np.argpartition(dists, k, axis=1)[:, :k]
That top_k is not sorted by distance, though. You can sort it later or do instead:
top_k = np.argsort(dists, axis=1)[:, :k]
Which is less efficient but simpler.
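A middle ground, if you want the k nearest sorted without a full argsort, is to partition first and then sort only the k candidates in each row (a sketch using the dists from above):
idx = np.argpartition(dists, k, axis=1)[:, :k]         # k smallest, unsorted
row = np.arange(dists.shape[0])[:, None]
top_k = idx[row, np.argsort(dists[row, idx], axis=1)]  # now sorted by distance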

Softmax function of a numpy array by row

I am trying to apply a softmax function to a numpy array. But I am not getting the desired results. This is the code I have tried:
import numpy as np
x = np.array([[1001,1002],[3,4]])
softmax = np.exp(x - np.max(x)) / np.sum(np.exp(x - np.max(x)))
print(softmax)
I think the x - np.max(x) code is not subtracting the max of each row. The max needs to be subtracted from x to prevent very large numbers.
This is supposed to output:
np.array([
    [0.26894142, 0.73105858],
    [0.26894142, 0.73105858]])
But I am getting:
np.array([
    [0.26894142, 0.73105858],
    [0, 0]])
A convenient way to keep the axes that are consumed by "reduce" operations such as max or sum is the keepdims keyword:
mx = np.max(x, axis=-1, keepdims=True)
mx
# array([[1002],
#        [   4]])

x - mx
# array([[-1,  0],
#        [-1,  0]])

numerator = np.exp(x - mx)
denominator = np.sum(numerator, axis=-1, keepdims=True)
denominator
# array([[ 1.36787944],
#        [ 1.36787944]])

numerator / denominator
# array([[ 0.26894142,  0.73105858],
#        [ 0.26894142,  0.73105858]])
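Put together, the same idea as a small reusable helper (a sketch; axis=-1 makes it row-wise for 2-D input):
import numpy as np

def softmax(x, axis=-1):
    mx = np.max(x, axis=axis, keepdims=True)    # per-row max for stability
    e = np.exp(x - mx)
    return e / e.sum(axis=axis, keepdims=True)  # normalize along the same axis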
My 5-liner (which uses scipy's logsumexp for the tricky bits):
def softmax(a, axis=None):
    """
    Computes exp(a)/sumexp(a); relies on the scipy logsumexp implementation.

    :param a: ndarray/tensor
    :param axis: axis to sum over; default (None) sums over everything
    """
    from scipy.special import logsumexp
    lse = logsumexp(a, axis=axis)  # this reduces along axis
    if axis is not None:
        lse = np.expand_dims(lse, axis)  # restore that axis for subtraction
    return np.exp(a - lse)
You may have to use from scipy.misc import logsumexp if you have an older scipy version.
EDIT. As of version 1.2.0, scipy includes softmax as a special function:
https://scipy.github.io/devdocs/generated/scipy.special.softmax.html
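With that version, the row-wise result from the question is a one-liner:
from scipy.special import softmax
import numpy as np

softmax(np.array([[1001, 1002], [3, 4]]), axis=1)
# array([[0.26894142, 0.73105858],
#        [0.26894142, 0.73105858]])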
I wrote a very general softmax function operating over an arbitrary axis, including the tricky max subtraction bit. The function is below, and I wrote a blog post about it here.
def softmax(X, theta=1.0, axis=None):
    """
    Compute the softmax of each element along an axis of X.

    Parameters
    ----------
    X: ND-Array. Probably should be floats.
    theta (optional): float parameter, used as a multiplier
        prior to exponentiation. Default = 1.0
    axis (optional): axis to compute values along. Default is the
        first non-singleton axis.

    Returns an array the same size as X. The result will sum to 1
    along the specified axis.
    """
    # make X at least 2d
    y = np.atleast_2d(X)
    # find axis
    if axis is None:
        axis = next(j[0] for j in enumerate(y.shape) if j[1] > 1)
    # multiply y by the theta parameter
    y = y * float(theta)
    # subtract the max for numerical stability
    y = y - np.expand_dims(np.max(y, axis=axis), axis)
    # exponentiate y
    y = np.exp(y)
    # take the sum along the specified axis
    ax_sum = np.expand_dims(np.sum(y, axis=axis), axis)
    # finally: divide elementwise
    p = y / ax_sum
    # flatten if X was 1D
    if len(X.shape) == 1:
        p = p.flatten()
    return p
The x - np.max(x) code is not doing row-wise subtraction.
Let's do it step by step. First we make a 'maxes' array by tiling a copy of the row maxima into a column-duplicated matrix:
maxes = np.tile(np.max(x, 1), (2, 1)).T
This creates a 2x2 matrix in which each row's max is duplicated across that row (via tile). After this you can do:
x = np.exp(x - maxes) / np.sum(np.exp(x - maxes), axis=1, keepdims=True)
You should get your result with this. The axis=1 (with keepdims to keep the sums aligned row-wise) gives the row-wise softmax you mentioned in the heading of your question. Hope this helps.
How about this?
To take the max along the rows, just specify axis=1 and then convert the result to a column vector (actually still a 2D array) using np.newaxis/None.
In [40]: x
Out[40]:
array([[1001, 1002],
       [   3,    4]])

In [41]: z = x - np.max(x, axis=1)[:, np.newaxis]

In [42]: z
Out[42]:
array([[-1,  0],
       [-1,  0]])

In [44]: softmax = np.exp(z) / np.sum(np.exp(z), axis=1)[:, np.newaxis]

In [45]: softmax
Out[45]:
array([[ 0.26894142,  0.73105858],
       [ 0.26894142,  0.73105858]])
In the last step, when taking the sum, again specify axis=1 to sum along the rows.

Standard deviation from center of mass along Numpy array axis

I am trying to find a well-performing way to calculate the standard deviation from the center of mass/gravity along an axis of a Numpy array.
In formula this is (with A_i the values along the axis and i their indices):

    std = sqrt( sum_i(i^2 * A_i) / sum_i(A_i) - (sum_i(i * A_i) / sum_i(A_i))^2 )
The best I could come up with is this:
def weighted_com(A, axis, weights):
    average = np.average(A, axis=axis, weights=weights)
    return average * weights.sum() / A.sum(axis=axis).astype(float)

def weighted_std(A, axis):
    weights = np.arange(A.shape[axis])
    w1com2 = weighted_com(A, axis, weights)**2
    w2com1 = weighted_com(A, axis, weights**2)
    return np.sqrt(w2com1 - w1com2)
In weighted_com, I need to correct the normalization from sum of weights to sum of values (which is an ugly workaround, I guess). weighted_std is probably fine.
To avoid the XY problem, I am still asking for what I actually want (a better weighted_std), not for a better version of my weighted_com.
The .astype(float) is a safety measure as I'll apply this to histograms containing ints, which caused problems due to integer division when not in Python 3 or when from __future__ import division is not active.
You want to take the mean, variance and standard deviation of the vector [1, 2, 3, ..., n], where n is the dimension of the input matrix A along the axis of interest, with weights given by the matrix A itself.
For concreteness, say you want to consider these center-of-mass statistics along the vertical axis (axis=0); this is what corresponds to the formulas you wrote. For a fixed column j, you would do:
n = A.shape[0]
r = np.arange(1, n+1)
mu = np.average(r, weights=A[:,j])
var = np.average(r**2, weights=A[:,j]) - mu**2
std = np.sqrt(var)
In order to put all of the computations for the different columns together, you have to stack together a bunch of copies of r (one per column) to form a matrix (that I have called R in the code below). With a bit of care, you can make things work for both axis=0 and axis=1.
import numpy as np
def com_stats(A, axis=0):
    A = A.astype(float)  # if you are worried about int vs. float
    n = A.shape[axis]
    m = A.shape[(axis - 1) % 2]
    r = np.arange(1, n + 1)
    R = np.vstack([r] * m)
    if axis == 0:
        R = R.T
    mu = np.average(R, axis=axis, weights=A)
    var = np.average(R**2, axis=axis, weights=A) - mu**2
    std = np.sqrt(var)
    return mu, var, std
For example,
A = np.array([[1, 1, 0], [1, 2, 1], [1, 1, 1]])
print(A)
# [[1 1 0]
# [1 2 1]
# [1 1 1]]
print(com_stats(A))
# (array([ 2. , 2. , 2.5]), # centre-of-mass mean by column
# array([ 0.66666667, 0.5 , 0.25 ]), # centre-of-mass variance by column
# array([ 0.81649658, 0.70710678, 0.5 ])) # centre-of-mass std by column
EDIT:
One can avoid creating in-memory copies of r to build R by using numpy.lib.stride_tricks: swap the line
R = np.vstack([r] * m)
above with
from numpy.lib.stride_tricks import as_strided
R = as_strided(r, strides=(0, r.itemsize), shape=(m, n))
The resulting R is a (strided) ndarray whose underlying array is the same as r's — absolutely no copying of any values occurs.
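On reasonably recent numpy versions, np.broadcast_to gives the same zero-copy view without manual stride arithmetic (a sketch of the equivalent call):
R = np.broadcast_to(r, (m, n))  # read-only view; the broadcast axis has stride 0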
from numpy.lib.stride_tricks import as_strided
FMT = '''\
Shape: {}
Strides: {}
Position in memory: {}
Size in memory (bytes): {}
'''

def find_base_nbytes(obj):
    if obj.base is not None:
        return find_base_nbytes(obj.base)
    return obj.nbytes

def stats(obj):
    return FMT.format(obj.shape,
                      obj.strides,
                      obj.__array_interface__['data'][0],
                      find_base_nbytes(obj))

n = 10
m = 1000
r = np.arange(1, n + 1)
R = np.vstack([r] * m)
S = as_strided(r, strides=(0, r.itemsize), shape=(m, n))
print(stats(r))
print(stats(R))
print(stats(S))
Output:
Shape: (10,)
Strides: (8,)
Position in memory: 4299744576
Size in memory (bytes): 80

Shape: (1000, 10)
Strides: (80, 8)
Position in memory: 4304464384
Size in memory (bytes): 80000

Shape: (1000, 10)
Strides: (0, 8)
Position in memory: 4299744576
Size in memory (bytes): 80
Credit to this SO answer and this one for explanations on how to get the memory address and size of the underlying array of a strided ndarray.

iterating over numpy arrays

I am having a very difficult time vectorizing; I can't seem to think about math in that way yet. I have this right now:
#!/usr/bin/env python
import numpy as np
import math
grid = np.zeros((2,2))
aList = np.arange(1,5).reshape(2,2)
i,j = np.indices((2,2))
iArray = (i - aList[:,0:1])
jArray = (j - aList[:,1:2])
print(np.power(np.power(iArray, 2) + np.power(jArray, 2), .5))
My print out looks like this:
[[ 2.23606798 1.41421356]
[ 4.47213595 3.60555128]]
What I am trying to do is take a 2D array of pixel values, grid, and say how far each pixel is from a list of important pixels, aList.
# # #
# # #
* # *
An example is if the *s (0,2) and (2,2) are important pixels and I am currently at the # (2,0) pixel, my value for the # pixel would be:
[(0-2)^2 + (2-0)^2]^.5 + [(2-2)^2 + (0-2)^2]^.5
All grid does is hold pixel values, so I need to get the index of each pixel value to associate a distance with it. However, my aList array holds [x, y] coordinates, so that one is easy. I think right now I have two issues:
1. I am not getting the indices correctly
2. I am not looping over the coordinates in aList properly
With a little help from broadcasting, I get this, with data based on your last example:
import numpy as np

grid = np.zeros((3, 3))
aList = np.array([[2, 0], [2, 2]])

important_rows, important_cols = aList.T
rows, cols = np.indices(grid.shape)

dist = np.sqrt((important_rows - rows.ravel()[:, None])**2 +
               (important_cols - cols.ravel()[:, None])**2).sum(axis=-1)
dist = dist.reshape(grid.shape)

>>> dist
array([[ 4.82842712,  4.47213595,  4.82842712],
       [ 3.23606798,  2.82842712,  3.23606798],
       [ 2.        ,  2.        ,  2.        ]])
You can get more memory efficient by doing:
important_rows, important_cols = aList.T
rows, cols = np.meshgrid(np.arange(grid.shape[0]),
                         np.arange(grid.shape[1]),
                         sparse=True, indexing='ij')
dist2 = np.sqrt((rows[..., None] - important_rows)**2 +
                (cols[..., None] - important_cols)**2).sum(axis=-1)
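If scipy is available, the same result can also come from its pairwise-distance routine (a sketch using cdist; argwhere enumerates every pixel coordinate in row-major order, so the reshape lines up with the grid):
from scipy.spatial.distance import cdist

pixels = np.argwhere(np.ones(grid.shape, dtype=bool))  # all (row, col) pairs
dist3 = cdist(pixels, aList).sum(axis=1).reshape(grid.shape)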
My approach:
import numpy as np

n = 3
aList = np.zeros([n, n])
distance = np.zeros([n, n])
I, J = np.indices([n, n])

aList[2, 2] = 1; aList[0, 2] = 1  # important pixels
important = np.where(aList == 1)  # where the important pixels are

for i, j in zip(I[important], J[important]):  # this part could be improved...
    distance += np.sqrt((i - I)**2 + (j - J)**2)

print(distance)
The last for-loop could be improved, but if you only have a few important pixels, performance will be good...
Checking with:
import matplotlib.pyplot as plt

n = 500
...
aList[249+100, 349] = 1; aList[249-100, 349] = 1; aList[249, 50] = 1
...
plt.plot(I[important], J[important], 'rx', markersize=20)
plt.imshow(distance.T, origin='lower', cmap=plt.cm.gray)
plt.show()
The result is quite satisfying.
