Reshaping numpy array without using two for loops - python

I have two numpy arrays
import numpy as np
x = np.linspace(1e10, 1e12, num=50) # 50 values
y = np.linspace(1e5, 1e7, num=50) # 50 values
x.shape # output is (50,)
y.shape # output is (50,)
I would like to create a function which returns an array shaped (50,50) such that the first x value x0 is evaluated for all y values, etc.
The current function I am using is fairly complicated, so let's use an easier example. Let's say the function is
def func(x, y):
    return x**2 + y**2
How do I shape this to be a (50,50) array? At the moment, it will output 50 values. Would you use a for loop inside an array?
Something like:
np.array([[func(x,y) for i in x] for j in y])
but without using two for loops. This takes forever to run.
EDIT: It has been requested I share my "complicated" function. Here it goes:
There is a data vector which is a 1D numpy array of 4000 measurements. There is also a "normalized_matrix", which is shaped (4000,4000)---it is nothing special, just a matrix with entry values between 0 and 1, e.g. 0.5567878. These are the two "given" inputs.
My function returns the matrix multiplication product of transpose(datavector) * matrix * datavector, which is a single value.
Now, as you can see in the code, I have initialized two arrays, x and y, which pass through a series of "x parameters" and "y parameters". That is, what does func(x,y) return for value x1 and value y1, i.e. func(x1,y1)?
The shape of matrix1 is (50, 4000, 4000). The shape of matrix2 is (50, 4000, 4000). Ditto for total_matrix.
normalized_matrix is shape (4000,4000) and id_mat is shaped (4000,4000).
normalized_matrix
print normalized_matrix.shape #output (4000,4000)
data_vector = datarr
print datarr.shape #output (4000,)
def func(x, y):
    matrix1 = x[:, None, None] * normalized_matrix[None, :, :]
    matrix2 = y[:, None, None] * id_mat[None, :, :]
    total_matrix = matrix1 + matrix2
    # transpose(datavector) * matrix * datavector
    # by matrix multiplication, equals single value
    return np.array([ np.dot(datarr.T, np.dot(total_matrix, datarr)) ])
If I try to use np.meshgrid(), that is, if I try
x = np.linspace(1e10, 1e12, num=50) # 50 values
y = np.linspace(1e5, 1e7, num=50) # 50 values
X, Y = np.meshgrid(x,y)
z = func(X, Y)
I get the following value error: ValueError: operands could not be broadcast together with shapes (50,1,1,50) (1,4000,4000).

reshape in numpy has a different meaning. When you start with a (100,) array and change it to (5,20) or (10,10) 2d arrays, that is 'reshape'. There is a numpy function to do that.
You want to take two 1d arrays and use them to generate a 2d array from a function. This is like taking an outer product of the two, passing all combinations of their values through your function.
Some sort of double loop is one way of doing this, whether it is with an explicit loop or a list comprehension. But speeding this up depends on that function.
For the x**2 + y**2 example, it can be 'vectorized' quite easily:
In [40]: x=np.linspace(1e10,1e12,num=10)
In [45]: y=np.linspace(1e5,1e7,num=5)
In [46]: z = x[:,None]**2 + y[None,:]**2
In [47]: z.shape
Out[47]: (10, 5)
This takes advantage of numpy broadcasting. With the None, x is reshaped to (10,1) and y to (1,5), and the + takes an outer sum.
X,Y=np.meshgrid(x,y,indexing='ij') produces two (10,5) arrays that can be used the same way. Look at its doc for other parameters.
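For instance, a quick check that the meshgrid route gives the same answer as the broadcasting one (a sketch, reusing the x, y, z from above):
X, Y = np.meshgrid(x, y, indexing='ij')   # both (10, 5)
z2 = X**2 + Y**2
print(np.allclose(z, z2))                 # True: same values as the broadcast version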
So if your more complex function can be written in a way that takes 2d arrays like this, it is easy to 'vectorize'.
But if that function must take 2 scalars, and return another scalar, then you are stuck with some sort of double loop.
A list comprehension form of the double loop is:
np.array([[x1**2+y1**2 for y1 in y] for x1 in x])
Another is:
z = np.empty((10,5))
for i in range(10):
    for j in range(5):
        z[i,j] = x[i]**2 + y[j]**2
This double loop can be sped up somewhat by using np.vectorize. This takes a user defined function, and returns one that can take broadcastable arrays:
In [65]: vprod=np.vectorize(lambda x,y: x**2+y**2)
In [66]: vprod(x[:,None],y[None,:]).shape
Out[66]: (10, 5)
Tests that I've done in the past show that vectorize can improve on the list comprehension route by something like 20%, but the improvement is nothing like writing your function to work with 2d arrays in the first place.
By the way, this sort of 'vectorization' question has been asked many times on SO under the numpy tag. Beyond these broad examples, we can't help you without knowing more about that more complicated function. As long as it is a black box that takes scalars, the best we can help you with is np.vectorize. And you still need to understand broadcasting (with or without meshgrid's help).

I think there is a better way; it is right on the tip of my tongue. But as an interim measure:
You are operating on 1x2 windows of a meshgrid. You can use as_strided from numpy.lib.stride_tricks to rearrange the meshgrid into two-element windows, then apply your function to the resultant array. I like to use a generic n-d solution, sliding_window (http://www.johnvinyard.com/blog/?p=268) (not mine), to transform the array.
import numpy as np
a = np.array([1,2,3])
b = np.array([.1, .2, .3])
z= np.array(np.meshgrid(a,b))
def foo((x,y)):
    return x+y
>>> z.shape
(2, 3, 3)
>>> t = sliding_window(z, (2,1,1))
>>> t
array([[ 1. , 0.1],
[ 2. , 0.1],
[ 3. , 0.1],
[ 1. , 0.2],
[ 2. , 0.2],
[ 3. , 0.2],
[ 1. , 0.3],
[ 2. , 0.3],
[ 3. , 0.3]])
>>> v = np.apply_along_axis(foo, 1, t)
>>> v
array([ 1.1, 2.1, 3.1, 1.2, 2.2, 3.2, 1.3, 2.3, 3.3])
>>> v.reshape((len(a), len(b)))
array([[ 1.1, 2.1, 3.1],
[ 1.2, 2.2, 3.2],
[ 1.3, 2.3, 3.3]])
>>>
This should be somewhat faster.
You may need to modify your function's argument signature.
If the link to the johnvinyard.com blog breaks, I've posted the sliding_window implementation in other SO answers - https://stackoverflow.com/a/22749434/2823755
Search around and you'll find many other tricky as_strided solutions.

In response to your edited question:
normalized_matrix
print normalized_matrix.shape #output (4000,4000)
data_vector = datarr
print datarr.shape #output (4000,)
def func(x, y):
    matrix1 = x[:, None, None] * normalized_matrix[None, :, :]
    matrix2 = y[:, None, None] * id_mat[None, :, :]
    total_matrix = matrix1 + matrix2
    # transpose(datavector) * matrix * datavector
    # by matrix multiplication, equals single value
    # return np.array([ np.dot(datarr.T, np.dot(total_matrix, datarr))])
    return np.einsum('j,ijk,k->i', datarr, total_matrix, datarr)
Since datarr is shape (4000,), transpose does nothing. I believe you want the result of the 2 dots to be shape (50,). I'm suggesting using einsum. But it can be done with tensordot, or I think even np.dot(np.dot(total_matrix, datarr),datarr). Test the expression with smaller arrays, focusing on getting the shapes right.
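For example, a small sketch with stand-in arrays (shapes shrunk and names made up just for the check):
total_matrix = np.random.rand(3, 4, 4)   # stands in for the (50, 4000, 4000) stack
datarr = np.random.rand(4)               # stands in for the (4000,) data vector
a = np.einsum('j,ijk,k->i', datarr, total_matrix, datarr)
b = np.dot(np.dot(total_matrix, datarr), datarr)
print(a.shape)              # (3,)
print(np.allclose(a, b))    # True: the two expressions agree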
x = np.linspace(1e10, 1e12, num=50) # 50 values
y = np.linspace(1e5, 1e7, num=50) # 50 values
z = func(x,y)
# X, Y = np.meshgrid(x,y)
# z = func(X, Y)
X,Y is wrong. func takes x and y that are 1d. Notice how you expand the dimensions with [:, None, None]. Also, you aren't creating a 2d array from an outer combination of x and y. None of your arrays in func is (50,50) or (50,50,...). The higher dimensions are provided by normalized_matrix and id_mat.
When showing us the ValueError you should also indicate where in your code that occurred. Otherwise we have to guess, or recreate the code ourselves.
In fact when I run my edited func(X,Y), I get this error:
----> 2 matrix1 = x [:, None, None] * normalized_matrix[None, :, :]
3 matrix2 = y[:, None, None] * id_mat[None, :, :]
4 total_matrix = matrix1 + matrix2
5 # transpose(datavector) * matrix * datavector
ValueError: operands could not be broadcast together with shapes (50,1,1,50) (1,400,400)
See, the error occurs right at the start. normalized_matrix is expanded to (1,400,400) [I'm using smaller examples]. The (50,50) X is expanded to (50,1,1,50). x expands to (50,1,1), which broadcasts just fine.

To address the edit and the broadcasting error in the edit:
Inside your function you are adding dimensions to arrays to try to get them to broadcast.
matrix1 = x[:, None, None] * normalized_matrix[None, :, :]
This expression looks like you want to broadcast a 1d array with a 2d array.
The results of your meshgrid are two 2d arrays:
X,Y = np.meshgrid(x,y)
>>> X.shape, Y.shape
((50, 50), (50, 50))
>>>
When you try to use X in your broadcasting expression the dimensions don't line up; that is what causes the ValueError - refer to the General Broadcasting Rules:
>>> x1 = X[:, np.newaxis, np.newaxis]
>>> nm = normalized_matrix[np.newaxis, :, :]
>>> x1.shape
(50, 1, 1, 50)
>>> nm.shape
(1, 4000, 4000)
>>>

You're on the right track with your list comprehension, you just need to add in an extra level of iteration:
np.array([[func(i,j) for i in x] for j in y])

Related

np.piecewise generates incorrect values for integer array

I have a numpy piecewise function defined as
def function(x):
    return np.piecewise(x, [x <= 1, x > 1],
                        [lambda x: 1/2*np.sin((x-1)**2), lambda x: -1/2*np.sin((x-1)**2)])
I have no idea why this function is returning incorrect values for various x-values. In particular, running the following
X = np.array([0,2.1])
Y = np.array([0,2])
A = function(X)
B = function(Y)
will give A = array([ 0.42073549, -0.467808 ]), but B = array([0, 0]). Why is this happening?
I am expecting B = array([0.42073549, -0.468ish]).
Look at the types of your data.
X is an array of floats. But Y is an array of int.
And, quoting the documentation of piecewise:
The output is the same shape and type as x
So the output of piecewise, when called with Y (an array of shape (2,) and dtype int64), is forced to be an array of shape (2,) and dtype int64. And the closest int64 values to the intended float results (0.42073549 and about -0.42) are 0 and 0.
Just replace Y with np.array([0, 2.0]) (to force float type), or with np.array([0, 2], dtype=np.float64).
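For example, a quick check of the dtype fix (assuming the function defined above):
B = function(np.array([0, 2.0]))   # float dtype in, float dtype out
print(B)                           # array([ 0.42073549, -0.42073549]), no longer truncated to int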

Scipy optimize behaves differently with 1-d Matrix vs Vector input st. 1-d Matrix solution is wrong

I have experienced that feeding scipy.optimize a 1-d matrix (of shape (N,1)) gives different (wrong) results vs. giving it the same data in the form of vectors (the vectors are w and y in the MVE below).
import numpy as np
from scipy.optimize import minimize
X = np.array([[ 1.13042959, 0.45915372, 0.8007231 , -1.15704469, 0.42920652],
[ 0.14131009, 0.9257914 , 0.72182141, 0.86906652, -0.32328187],
[-1.40969139, 1.32624329, 0.49157981, 0.2632826 , 1.29010016],
[-0.87733399, -1.55999729, -0.73784827, 0.15161383, 0.11189782],
[-0.94649544, 0.10406324, 0.65316464, -1.37014083, -0.28934968]])
wtrue = np.array([3.14,2.78,-1,0, 1.6180])
y = X.dot(wtrue)
def cost_function(w, X, y):
    return np.mean(np.abs(y - X.dot(w)))
# %%
w0 = np.zeros(5)
output = minimize(cost_function, w0, args=(X, y), options={'disp':False, 'maxiter':128})
print('Vector Case:\n', output.x, '\n', output.fun)
# Reshaping w0 and y to (N,1) will 'break things'
w0 = np.zeros(5).reshape(-1,1)
y = y.reshape(-1,1) #This is the problem, only commenting this out will make below work
output = minimize(cost_function, w0, args=(X, y), options={'disp':False, 'maxiter':128})
print('1-d Matrix Case:\n', output.x, '\n', output.fun)
Gives
Vector Case:
[ 3.13999999e+00  2.77999996e+00 -9.99999940e-01  1.79002338e-08  1.61800001e+00]
1.7211226932545288e-08 // TRUE almost 0
1-d Matrix Case:
[-0.35218177 -0.50008129 0.34958755 -0.42210756 0.79680766]
3.3810648518841924 // WRONG nowhere close to true solution
Does anyone know why the solution using the 1-d matrix inputs come out 'wrong'?
I suspect that this is b/c somewhere along the way .minimize turns the parameter vector into an actual vector and then I know that (2,) + (2,1) gives a (2,2) matrix rather than a (2,) or a (2,1). This still strikes me as 'weird' and I would like to know if I'm missing some bigger point here.
In [300]: y
Out[300]: array([ 4.7197293 , 1.7725223 , 0.85632763, -6.17272225, -3.8040323 ])
In [301]: w0
Out[301]: array([0., 0., 0., 0., 0.])
In [302]: cost_function(w0,X,y)
Out[302]: 3.465066756332
Initially changing the shape of y doesn't change the cost:
In [306]: cost_function(w0,X,y.reshape(-1,1))
Out[306]: 3.4650667563320003
Now get a solution:
In [308]: output = optimize.minimize(cost_function, w0, args=(X, y), options={'disp':False, 'maxiter':128})
In [310]: output.x
Out[310]:
array([ 3.14000001e+00, 2.77999999e+00, -9.99999962e-01, -5.58139763e-08,
1.61799993e+00])
Evaluate the cost at the optimal x:
In [311]: cost_function(output.x,X,y)
Out[311]: 7.068144833866085e-08 # = output.fun
But with the reshaped y, the cost is different:
In [312]: cost_function(output.x,X,y.reshape(-1,1))
Out[312]: 4.377833258899681
The initial value x0 is flattened by the code (look at optimize.optimize._minimize_bfgs), so changing the shape of w0 doesn't matter. But the args arrays are passed to the cost function unchanged. So if changing the shape of y changes the cost calculation, it will change the optimization.
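To see the mismatch in isolation (a sketch, using the X and y from the question):
w = np.zeros(5)                       # minimize always passes a flat (5,) parameter vector
resid = y.reshape(-1, 1) - X.dot(w)   # (5,1) minus (5,) broadcasts to (5,5)
print(resid.shape)                    # (5, 5): the mean is now over 25 numbers, not 5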

Plotting a function of two variables that includes a vector inside

I have the following problem and this is the minimal example:
import numpy as np
def f1(x, y, scal):
    return np.exp(-(scal-x)/y)
def f2(x, y, vec):
    return np.sum(np.exp(-(vec-x)/y))
inputvec = np.arange(1,10,1)
x = np.arange(10)
y = np.arange(1,8,1)
X,Y = np.meshgrid(x,y)
Z1 = f1(X,Y,20)
print Z1
Z2 = f2(X,Y,inputvec)
I want to 3D plot the function f2, so this is why I try the meshgrid thing. The error is:
ValueError: operands could not be broadcast together with shapes (9,) (7,10)
It is even clear to me why this is the case: Python would, I think, like to do something like f1, so that Z1 can be a grid answer which you can plot. But what if I use a vector in my function and the very nasty sum operation?
Question: How can I change my function f2 to get around this problem, or is there a different way to (3D, Contour, etc..) plot f2 without going via the meshgrid way?
Thanks a lot!
It's not 100% clear from your question, but I think you want:
def f3(x, y, vec):
    return np.sum(np.exp(-(vec[:, None, None] - x[None]) / y[None]), 0)
Indexing with None (or equivalently, with np.newaxis) inserts a new dimension of size 1, so vec[:, None, None] has shape (9, 1, 1) and x[None] and y[None] have shapes (1, 7, 10). vec[:, None, None] - x[None] then gets broadcast out to shape (9, 7, 10). Finally, we compute the sum over just the first dimension using np.sum(..., 0) to yield a (7, 10) array, which you can then plot:
Z3 = f3(X, Y, inputvec)
plt.pcolormesh(X, Y, Z3)
Another useful tool that simplifies operations like this is np.ix_, which constructs an "open grid" from a set of input vectors that is suitable for broadcasting over:
v_, y_, x_ = np.ix_(inputvec, y, x) # here x and y are 1D
z = np.sum(np.exp(-(v_ - x_) / y_), 0)
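A quick shape check of the open grid (assuming the inputvec, x, y from the question):
print(v_.shape, y_.shape, x_.shape)   # (9, 1, 1) (1, 7, 1) (1, 1, 10)
print(z.shape)                        # (7, 10), the same shape as Z1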

Efficiently generating a Cauchy matrix from two Numpy arrays

A Cauchy matrix (Wikipedia article) is a matrix determined by two vectors (arrays of numbers). Given two vectors x and y, the Cauchy matrix C generated by them is defined entry-wise as
C[i][j] := 1/(x[i] - y[j])
Given two Numpy arrays x and y, what is an efficient way to generate a Cauchy matrix?
This is the most efficient way I found, using array broadcasting to take advantage of vectorization.
1.0 / (x.reshape((-1,1)) - y)
Edit: @HYRY and @shx2 have suggested that, instead of x.reshape((-1,1)), which makes a copy, you can use x[:,np.newaxis], which returns a view of the same array. @HYRY also suggests 1.0/np.subtract.outer(x,y), which is slightly slower for me but maybe more explicit.
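For what it's worth, the three forms give the same matrix (a quick sketch with small 1d arrays):
x = numpy.array([1, 2, 3, 4])
y = numpy.array([5, 6, 7])
C1 = 1.0 / (x.reshape((-1, 1)) - y)
C2 = 1.0 / (x[:, numpy.newaxis] - y)
C3 = 1.0 / numpy.subtract.outer(x, y)
print(numpy.allclose(C1, C2) and numpy.allclose(C2, C3))   # True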
Example:
>>> x = numpy.array([1,2,3,4]) #x
>>> y = numpy.array([5,6,7]) #y
>>>
>>> #transpose x, to nx1
... x = x.reshape((-1,1))
>>> x
array([[1],
[2],
[3],
[4]])
>>>
>>> #array of differences x[i] - y[j]
... #an nx1 array minus a 1xm array is an nxm array
... diff_matrix = x-y
>>> diff_matrix
array([[-4, -5, -6],
[-3, -4, -5],
[-2, -3, -4],
[-1, -2, -3]])
>>>
>>> #apply the multiplicative inverse to each entry
... cauchym = 1.0/diff_matrix
>>> cauchym
array([[-0.25 , -0.2 , -0.16666667],
[-0.33333333, -0.25 , -0.2 ],
[-0.5 , -0.33333333, -0.25 ],
[-1. , -0.5 , -0.33333333]])
I tried a few other methods, all of which were significantly slower.
This is the naive approach, using a nested list comprehension:
cauchym = numpy.array([[ 1.0/(x_i-y_j) for y_j in y] for x_i in x])
This one generates the matrix as a 1-dimensional array (saving the cost of nested Python lists) and reshapes it to a matrix afterward. It also moves the division to a single Numpy operation:
cauchym = 1.0/numpy.array([(x_i-y_j) for x_i in x for y_j in y]).reshape([len(x),len(y)])
Using numpy.repeat and numpy.tile (which respectively repeat each element of x and tile the whole of y). This way makes unnecessary copies:
lenx = len(x)
leny = len(y)
xm = numpy.repeat(x,leny) #the i'th row is s_i
ym = numpy.tile(y,lenx)
cauchym = (1.0/(xm-ym)).reshape([lenx,leny]);
I created a function; I hope it helps you understand it in a better way.
# Creating a function in order to form a cauchy matrix
def cauchy_matrix(arr1, arr2):
    """
    Enter two arrays in order to get a cauchy matrix. The input arrays should be 1-D.
    arr1 = First 1-D array
    arr2 = Second 1-D array
    It returns the cauchy matrix with shape m*n, where m is the size of arr1 and n is the size of arr2.
    """
    my_list = []
    try:
        for i in range(len(arr1)):
            for j in range(len(arr2)):
                z = 1/(arr1[i]-arr2[j])
                my_list.append(z)
        return np.array(my_list).reshape(arr1.shape[0], arr2.shape[0])
    except ZeroDivisionError:
        print("Check if both the arrays have '0' as one of their elements. One array can have a zero, but both arrays having '0' is not acceptable!")

Numpy meshgrid in 3D

Numpy's meshgrid is very useful for converting two vectors to a coordinate grid. What is the easiest way to extend this to three dimensions? So given three vectors x, y, and z, construct 3x3D arrays (instead of 2x2D arrays) which can be used as coordinates.
Numpy (as of 1.8 I think) now supports higher than 2D generation of position grids with meshgrid. One important addition which really helped me is the ability to choose the indexing order (either xy or ij for Cartesian or matrix indexing respectively), which I verified with the following example:
import numpy as np
x_ = np.linspace(0., 1., 10)
y_ = np.linspace(1., 2., 20)
z_ = np.linspace(3., 4., 30)
x, y, z = np.meshgrid(x_, y_, z_, indexing='ij')
assert np.all(x[:,0,0] == x_)
assert np.all(y[0,:,0] == y_)
assert np.all(z[0,0,:] == z_)
Here is the source code of meshgrid:
def meshgrid(x, y):
    """
    Return coordinate matrices from two coordinate vectors.

    Parameters
    ----------
    x, y : ndarray
        Two 1-D arrays representing the x and y coordinates of a grid.

    Returns
    -------
    X, Y : ndarray
        For vectors `x`, `y` with lengths ``Nx=len(x)`` and ``Ny=len(y)``,
        return `X`, `Y` where `X` and `Y` are ``(Ny, Nx)`` shaped arrays
        with the elements of `x` and `y` repeated to fill the matrix along
        the first dimension for `x`, the second for `y`.

    See Also
    --------
    index_tricks.mgrid : Construct a multi-dimensional "meshgrid"
                         using indexing notation.
    index_tricks.ogrid : Construct an open multi-dimensional "meshgrid"
                         using indexing notation.

    Examples
    --------
    >>> X, Y = np.meshgrid([1,2,3], [4,5,6,7])
    >>> X
    array([[1, 2, 3],
           [1, 2, 3],
           [1, 2, 3],
           [1, 2, 3]])
    >>> Y
    array([[4, 4, 4],
           [5, 5, 5],
           [6, 6, 6],
           [7, 7, 7]])

    `meshgrid` is very useful to evaluate functions on a grid.

    >>> x = np.arange(-5, 5, 0.1)
    >>> y = np.arange(-5, 5, 0.1)
    >>> xx, yy = np.meshgrid(x, y)
    >>> z = np.sin(xx**2+yy**2)/(xx**2+yy**2)
    """
    x = asarray(x)
    y = asarray(y)
    numRows, numCols = len(y), len(x)  # yes, reversed
    x = x.reshape(1, numCols)
    X = x.repeat(numRows, axis=0)
    y = y.reshape(numRows, 1)
    Y = y.repeat(numCols, axis=1)
    return X, Y
It is fairly simple to understand. I extended the pattern to an arbitrary number of dimensions. This code is by no means optimized (and not thoroughly error-checked either), but you get what you pay for. Hope it helps:
def meshgrid2(*arrs):
    arrs = tuple(reversed(arrs))  # edit
    lens = map(len, arrs)
    dim = len(arrs)
    sz = 1
    for s in lens:
        sz *= s
    ans = []
    for i, arr in enumerate(arrs):
        slc = [1]*dim
        slc[i] = lens[i]
        arr2 = asarray(arr).reshape(slc)
        for j, sz in enumerate(lens):
            if j != i:
                arr2 = arr2.repeat(sz, axis=j)
        ans.append(arr2)
    return tuple(ans)
Can you show us how you are using np.meshgrid? There is a very good chance that you really don't need meshgrid because numpy broadcasting can do the same thing without generating a repetitive array.
For example,
import numpy as np
x=np.arange(2)
y=np.arange(3)
[X,Y] = np.meshgrid(x,y)
S=X+Y
print(S.shape)
# (3, 2)
# Note that meshgrid associates y with the 0-axis, and x with the 1-axis.
print(S)
# [[0 1]
# [1 2]
# [2 3]]
s=np.empty((3,2))
print(s.shape)
# (3, 2)
# x.shape is (2,).
# y.shape is (3,).
# x's shape is broadcasted to (3,2)
# y varies along the 0-axis, so to get its shape broadcasted, we first upgrade it to
# have shape (3,1), using np.newaxis. Arrays of shape (3,1) can be broadcasted to
# arrays of shape (3,2).
s=x+y[:,np.newaxis]
print(s)
# [[0 1]
# [1 2]
# [2 3]]
The point is that S=X+Y can and should be replaced by s=x+y[:,np.newaxis] because
the latter does not require (possibly large) repetitive arrays to be formed. It also generalizes to higher dimensions (more axes) easily. You just add np.newaxis where needed to effect broadcasting as necessary.
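For example, the same idea with three axes (a small sketch):
x = np.arange(2)
y = np.arange(3)
z = np.arange(4)
s3 = x + y[:, np.newaxis] + z[:, np.newaxis, np.newaxis]
print(s3.shape)
# (4, 3, 2)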
See http://www.scipy.org/EricsBroadcastingDoc for more on numpy broadcasting.
I think what you want is
X, Y, Z = numpy.mgrid[-10:10:100j, -10:10:100j, -10:10:100j]
for example.
Here is a multidimensional version of meshgrid that I wrote:
def ndmesh(*args):
    args = map(np.asarray, args)
    return np.broadcast_arrays(*[x[(slice(None),)+(None,)*i] for i, x in enumerate(args)])
Note that the returned arrays are views of the original array data, so changing the original arrays will affect the coordinate arrays.
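A usage sketch (assuming the ndmesh definition above):
a, b = ndmesh(np.array([1, 2, 3]), np.array([10, 20]))
print(a.shape)   # (2, 3)
print(b.shape)   # (2, 3); the first argument varies along the last axis, like np.meshgrid's default 'xy' ordering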
Instead of writing a new function, numpy.ix_ should do what you want.
Here is an example from the documentation:
>>> ixgrid = np.ix_([0,1], [2,4])
>>> ixgrid
(array([[0],
[1]]), array([[2, 4]]))
>>> ixgrid[0].shape, ixgrid[1].shape
((2, 1), (1, 2))
You can achieve that by changing the order:
import numpy as np
xx = np.array([1,2,3,4])
yy = np.array([5,6,7])
zz = np.array([9,10])
y, z, x = np.meshgrid(yy, zz, xx)
