I have an issue using Python for matrix multiplication and reshape. For example, I have a column S of size (16,1) and a matrix H of size (4,4). I need to reshape S into (4,4) in order to multiply it with H, and then reshape the result back into (16,1). I did that in MATLAB as below:
clear all; clc; clear
H = randn(4,4,16) + 1j.*randn(4,4,16);
S = randn(16,1) + 1j.*randn(16,1);
for ij = 1 : 16
y(:,:,ij) = reshape(H(:,:,ij)*reshape(S,4,[]),[],1);
end
y = mean(y,3);
Coming to Python:
import numpy as np
H = np.random.randn(4,4,16) + 1j * np.random.randn(4,4,16)
S = np.random.randn(16,) + 1j * np.random.randn(16,)
y = np.zeros((4,4,16),dtype=complex)
for ij in range(16):
    y[:,:,ij] = np.reshape(H[:,:,ij] @ S.reshape(4,4), 16, 1)
But I get an error here that we can't reshape the matrix y of size 256 into 16x1.
Does anyone have an idea about how to solve this problem?
Simply do this:
S.shape = (4,4)
for ij in range(16):
    y[:,:,ij] = H[:,:,ij] @ S
S.shape = -1 # equivalent to 16
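For completeness, here is a minimal end-to-end sketch of the MATLAB snippet above (my own translation, not part of this answer); the order='F' arguments mimic MATLAB's column-major reshape, a point the next answer also raises:

import numpy as np

H = np.random.randn(4, 4, 16) + 1j * np.random.randn(4, 4, 16)
S = np.random.randn(16) + 1j * np.random.randn(16)

y = np.zeros((16, 1, 16), dtype=complex)
for ij in range(16):
    prod = H[:, :, ij] @ S.reshape((4, 4), order='F')   # H(:,:,ij)*reshape(S,4,[])
    y[:, :, ij] = prod.reshape((16, 1), order='F')      # reshape(..., [], 1)

y = y.mean(axis=2)   # mean(y, 3) in MATLAB; result has shape (16, 1)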
np.dot operates over the last and second-to-last axis of the two operands if they have two or more axes. You can move your axes around to use this.
Keep in mind that reshape(S, 4, 4) in Matlab is likely equivalent to S.reshape(4, 4).T in Python.
So given H of shape (4, 4, 16) and S of shape (16,), you can multiply each channel of H by a reshaped S using
np.moveaxis(np.dot(np.moveaxis(H, -1, 0), S.reshape(4, 4).T), 0, -1)
The inner moveaxis call makes H into (16, 4, 4) for easy multiplication. The outer one reverses the effect.
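As a quick sanity check (my addition, not part of the answer), the one-liner can be compared against the explicit per-channel loop:

import numpy as np

H = np.random.randn(4, 4, 16) + 1j * np.random.randn(4, 4, 16)
S = np.random.randn(16) + 1j * np.random.randn(16)

vectorized = np.moveaxis(np.dot(np.moveaxis(H, -1, 0), S.reshape(4, 4).T), 0, -1)

looped = np.empty_like(vectorized)
for ij in range(16):
    looped[:, :, ij] = H[:, :, ij] @ S.reshape(4, 4).T

assert np.allclose(vectorized, looped)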
Alternatively, you could use the fact that S will be transposed to write
np.transpose(np.dot(S.reshape(4, 4), np.transpose(H)))
There are two issues in your solution:
1) The reshape method takes the shape as a single tuple argument, not as multiple arguments.
2) The shape of your y array should be 16x1x16, not 4x4x16. In MATLAB there is no issue, since it automatically reshapes y as you update it.
The correct version would be the following:
import numpy as np
H = np.random.randn(4,4,16) + 1j * np.random.randn(4,4,16)
S = np.random.randn(16,) + 1j * np.random.randn(16,)
y = np.zeros((16,1,16),dtype=complex)
for ij in range(16):
    y[:,:,ij] = np.reshape(H[:,:,ij] @ S.reshape((4,4)), (16,1))
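If you want to drop the Python loop entirely, a hedged einsum sketch (my addition, not claimed by the answer above) produces the same (16, 1, 16) array in one call:

import numpy as np

H = np.random.randn(4, 4, 16) + 1j * np.random.randn(4, 4, 16)
S = np.random.randn(16) + 1j * np.random.randn(16)

y_loop = np.zeros((16, 1, 16), dtype=complex)
for ij in range(16):
    y_loop[:, :, ij] = np.reshape(H[:, :, ij] @ S.reshape((4, 4)), (16, 1))

# einsum does the per-channel matrix product; one reshape restores the layout
y_vec = np.einsum('ijk,jl->ilk', H, S.reshape(4, 4)).reshape(16, 1, 16)
assert np.allclose(y_loop, y_vec)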
I have two numpy arrays: one array x with shape (n, a0, a1, ...) and one array k with shape (n, b0, b1, ...). I would like to compute an array of exponentials such that the output has dimension (a0, a1, ..., b0, b1, ...) and
out[i0, i1, ..., j0, j1, ...] == prod(x[:, i0, i1, ...] ** k[:, j0, j1, ...])
If there is only one a_i and one b_j, broadcasting does the trick via
import numpy
x = numpy.random.rand(2, 31)
k = numpy.random.randint(1, 10, size=(2, 101))
out = numpy.prod(x[..., None]**k[:, None], axis=0)
If x has a few more dimensions, more Nones have to be added:
x = numpy.random.rand(2, 31, 32, 33)
k = numpy.random.randint(1, 10, size=(2, 101))
out = numpy.prod(x[..., None]**k[:, None, None, None], axis=0)
If instead k has a few more dimensions, the Nones have to be added at other places:
x = numpy.random.rand(2, 31)
k = numpy.random.randint(1, 10, size=(2, 51, 51))
out = numpy.prod(x[..., None, None]**k[:, None], axis=0)
How to make the computation of out generic with respect to the dimensionality of the input arrays?
Here's one approach: reshape the two arrays so that they broadcast against each other, then perform the power operation and a prod reduction along the first axis -
k0_shp = [k.shape[0]] + [1]*(x.ndim-1) + list(k.shape[1:])
x0_shp = list(x.shape) + [1]*(k.ndim-1)
out = (x.reshape(x0_shp) ** k.reshape(k0_shp)).prod(0)
Here's another way: reshape both inputs to 3D with one singleton dim each so that they broadcast against each other, perform the prod reduction to get a 2D array, then reshape back to the multi-dim output -
s = x.shape[1:] + k.shape[1:] # output shape
out = (x.reshape(x.shape[0],-1,1)**k.reshape(k.shape[0],1,-1)).prod(0).reshape(s)
It must be noted that reshaping merely creates a view into the input array and as such is virtually free both memory-wise and performance-wise.
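A small wrapper around the second variant (the name outer_power_prod is mine), checked against the explicit-None version from the question:

import numpy as np

def outer_power_prod(x, k):
    # reshape both inputs to 3D with one singleton dim each, reduce, reshape back
    s = x.shape[1:] + k.shape[1:]
    return (x.reshape(x.shape[0], -1, 1)
            ** k.reshape(k.shape[0], 1, -1)).prod(0).reshape(s)

x = np.random.rand(2, 31)
k = np.random.randint(1, 10, size=(2, 51, 51))
expected = np.prod(x[..., None, None] ** k[:, None], axis=0)
assert np.allclose(outer_power_prod(x, k), expected)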
Without fully understanding the math of what you're doing, it seems that you need a number of Nones that matches the number of dimensions of each of x and k.
Does something like this work?
out = numpy.prod(x[[...]+[None]*(k.ndim-1)]**k[[slice(None)]+[None]*(x.ndim-1)], axis=0)
Here are the slices separately so they're a bit easier to read:
x[ [...] + [None]*(k.ndim-1) ]
k[ [slice(None)] + [None]*(x.ndim-1) ]
Compatibility Note:
[...] seems to only be valid in Python 3.x. If you are using 2.7 (I haven't tested lower), substitute [Ellipsis] instead:
x[ [Ellipsis] + [None]*(k.ndim-1) ]
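Newer numpy versions warn about or reject indexing with a plain list of slices and Nones, so a defensive variant (my adaptation) spells the indices as tuples:

import numpy as np

x = np.random.rand(2, 31, 32)
k = np.random.randint(1, 10, size=(2, 51))

xi = x[(Ellipsis,) + (None,) * (k.ndim - 1)]     # shape (2, 31, 32, 1)
ki = k[(slice(None),) + (None,) * (x.ndim - 1)]  # shape (2, 1, 1, 51)
out = np.prod(xi ** ki, axis=0)
print(out.shape)   # (31, 32, 51)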
This is how I acquire my N-D data (func is IRL not vectorizable):
import numpy
import xarray
import itertools
xs = numpy.linspace(0, 10, 100)
ys = numpy.linspace(0, 0.1, 20)
zs = numpy.linspace(0, 5, 200)
def func(x, y, z):
return x * y / z
vals = list(itertools.product(xs, ys, zs))
result = [func(x, y, z) for x, y, z in vals]
I have a feeling that what I do can be simplified. I would like to put this in an xarray.DataArray without reshaping the data. However, this is how I do it now:
arr = np.array(result).reshape(len(xs), len(ys), len(zs))
da = xarray.DataArray(arr, coords=[('x', xs), ('y', ys), ('z', zs)])
This is a simple example, but usually I work with ~10D data that I obtain by mapping an itertools.product (in parallel).
My question: how can I do this without reshaping my data, using vals, and without taking the lengths of xs, ys, and zs?
In a similar way to what you would do with:
index = pandas.MultiIndex.from_tuples(vals, names=['x', 'y', 'z'])
df = pandas.DataFrame(result, columns=['result'], index=index)
EDIT:
This is how I solved it, inspired by the answer by @hpaulj, thanks!
import numpy
import xarray
import itertools
coords = dict(x=numpy.linspace(0, 10, 100),
              y=numpy.linspace(0, 0.1, 20),
              z=numpy.linspace(0, 5, 200))
def func(x, y, z):
return x * y / z
result = [func(x, y, z) for x, y, z in itertools.product(*coords.values())]
xarray.DataArray(numpy.reshape(result, [len(i) for i in coords.values()]), coords=coords)
EDIT 2
See this issue: https://github.com/pydata/xarray/issues/1914
Experienced numpy users tend to focus on removing iterative steps. Thus we've zoomed in on your result calculation, and view the reshape as something trivial. Hence the answers so far have focused on broadcasting and calculating your function.
But I'm beginning to suspect that what's really bothering you is that
reshape(len(xs), len(ys), len(zs))
could become unwieldy if you have 10 such dimensions, not just 3. It's not so much the calculation speed, but the effort required to type len(..) 10 times. Or maybe it's that the code will look ugly.
Anyway, here's a way of bypassing all that typing. The key is to collect the dimension arrays in a list:
In [495]: dims = [np.linspace(0,10,4), np.linspace(0,.1,3), np.linspace(0,5,5)]
In [496]: from itertools import product
In [497]: vals = list(product(*dims))
In [498]: len(vals)
Out[498]: 60
In [499]: result = [sum(ijk) for ijk in vals] # a simple func
Now just get the len's with a simple list comprehension:
In [501]: arr=np.array(result).reshape([len(i) for i in dims])
In [502]: arr.shape
Out[502]: (4, 3, 5)
Another possibility is to put the linspace parameters in lists right at the start.
In [504]: ldims=[4,3,5]
In [505]: ends=[10,.1,5]
In [506]: dims=[np.linspace(0,e,l) for e,l in zip(ends, ldims)]
In [507]: vals = list(product(*dims))
In [508]: result=[sum(ijk) for ijk in vals]
In [509]: arr=np.array(result).reshape(ldims)
reshape itself is not an expensive operation. Usually it creates a view, which is one of the fastest things you can do with an array.
@Divakar hinted at this kind of solution in his deleted answer, with *np.meshgrid(*A) as an alternative to your product(xs,ys).
By the way, my answer doesn't involve xarray either - because I don't have that package installed. I'm assuming that you know what you are doing when passing arr of that 3d shape to it, as opposed to the longer 1d array. Look at the tag numbers, 5k followers for numpy, 23 for xarray.
The xarray coords parameter could also be constructed from dims (with an additional list of names).
If this answer isn't to your liking, I'd suggest closing the question, and starting a new one with just the xarray tag. That way you won't attract the numerous numpy flies.
2nd EDIT: I had forgotten about einsum! If you can torture your function to fit, this will be even faster (1.5 ms on the timeit below):
result = np.einsum('i,j,k', xs, ys, 1.0 / zs)
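A quick check (mine) that the einsum form matches plain broadcasting for func(x, y, z) = x*y/z, using the shifted zs from the timing answer below to avoid dividing by zero:

import numpy as np

xs = np.linspace(0, 10, 100)
ys = np.linspace(0, 0.1, 20)
zs = np.linspace(0.1, 5, 200)

by_broadcast = xs[:, None, None] * ys[None, :, None] / zs[None, None, :]
by_einsum = np.einsum('i,j,k', xs, ys, 1.0 / zs)   # implicit output indices 'ijk'
assert np.allclose(by_broadcast, by_einsum)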
You need to reshape and broadcast to a common shape. As Balzola says, this will be VERY big if it's 10D with 100 points in each direction (10**20 elements). As hpaulj says, reshaping a numpy array is generally trivial, and it is here too, though the broadcasting does take some work - still much less than the itertools.product() method. For your example:
import numpy as np
xs = np.linspace(0, 10, 100)
ys = np.linspace(0, 0.1, 20)
zs = np.linspace(0.1, 5, 200)
xn, yn, zn = len(xs), len(ys), len(zs)
xs_b = np.broadcast_to(xs.reshape(xn, 1, 1), (xn, yn, zn))
ys_b = np.broadcast_to(ys.reshape(1, yn, 1), (xn, yn, zn))
zs_b = np.broadcast_to(zs.reshape(1, 1, zn), (xn, yn, zn))
result = xs_b * ys_b / zs_b
Using timeit as below, I get the numpy computation at 4 ms and the itertools method at 150 ms. I think the difference would be bigger for more dimensions.
import timeit
init = '''
import itertools
import numpy as np
def func(x, y, z):
return x * y / z
xs = np.linspace(0, 10, 100)
ys = np.linspace(0, 0.1, 20)
zs = np.linspace(0.1, 5, 200)
xn, yn, zn = len(xs), len(ys), len(zs)
'''
funcs = ['''
xs_b = np.broadcast_to(xs.reshape(xn, 1, 1), (xn, yn, zn))
ys_b = np.broadcast_to(ys.reshape(1, yn, 1), (xn, yn, zn))
zs_b = np.broadcast_to(zs.reshape(1, 1, zn), (xn, yn, zn))
result = xs_b * ys_b / zs_b
''','''
vals = list(itertools.product(xs, ys, zs))
result = [func(x, y, z) for x, y, z in vals]
''']
for f in funcs:
    print(timeit.timeit(f, setup=init, number=100))
EDIT PS. I altered your zs to prevent the numpy warning about dividing by zero, as this might have affected the timeit comparison.
I have two numpy arrays
import numpy as np
x = np.linspace(1e10, 1e12, num=50) # 50 values
y = np.linspace(1e5, 1e7, num=50) # 50 values
x.shape # output is (50,)
y.shape # output is (50,)
I would like to create a function which returns an array shaped (50,50) such that the first x value x0 is evaluated for all y values, etc.
The current function I am using is fairly complicated, so let's use an easier example. Let's say the function is
def func(x,y):
    return x**2 + y**2
How do I shape this to be a (50,50) array? At the moment, it will output 50 values. Would you use a for loop inside an array?
Something like:
np.array([[func(x,y) for i in x] for j in y])
but without using two for loops. This takes forever to run.
EDIT: It has been requested I share my "complicated" function. Here it goes:
There is a data vector which is a 1D numpy array of 4000 measurements. There is also a "normalized_matrix", which is shaped (4000,4000)---it is nothing special, just a matrix with entry values between 0 and 1, e.g. 0.5567878. These are the two "given" inputs.
My function returns the matrix multiplication product of transpose(datavector) * matrix * datavector, which is a single value.
Now, as you can see in the code, I have initialized two arrays, x and y, which pass through a series of "x parameters" and "y parameters". That is, what does func(x,y) return for value x1 and value y1, i.e. func(x1,y1)?
The shape of matrix1 is (50, 4000, 4000). The shape of matrix2 is (50, 4000, 4000). Ditto for total_matrix.
normalized_matrix is shape (4000,4000) and id_mat is shaped (4000,4000).
normalized_matrix
print normalized_matrix.shape #output (4000,4000)
data_vector = datarr
print datarr.shape #output (4000,)
def func(x, y):
    matrix1 = x[:, None, None] * normalized_matrix[None, :, :]
    matrix2 = y[:, None, None] * id_mat[None, :, :]
    total_matrix = matrix1 + matrix2
    # transpose(datavector) * matrix * datavector
    # by matrix multiplication, equals single value
    return np.array([np.dot(datarr.T, np.dot(total_matrix, datarr))])
If I try to use np.meshgrid(), that is, if I try
x = np.linspace(1e10, 1e12, num=50) # 50 values
y = np.linspace(1e5, 1e7, num=50) # 50 values
X, Y = np.meshgrid(x,y)
z = func(X, Y)
I get the following value error: ValueError: operands could not be broadcast together with shapes (50,1,1,50) (1,4000,4000).
reshape in numpy has a different meaning. When you start with a (100,) array and change it to a (5,20) or (10,10) 2d array, that is 'reshape'. There is a numpy function to do that.
You want to take 2 1d arrays and use those to generate a 2d array from a function. This is like taking an outer product of the 2, passing all combinations of their values through your function.
Some sort of double loop is one way of doing this, whether it is with an explicit loop, or list comprehension. But speeding this up depends on that function.
For the x**2+y**2 example, it can be 'vectorized' quite easily:
In [40]: x=np.linspace(1e10,1e12,num=10)
In [45]: y=np.linspace(1e5,1e7,num=5)
In [46]: z = x[:,None]**2 + y[None,:]**2
In [47]: z.shape
Out[47]: (10, 5)
This takes advantage of numpy broadcasting. With the None, x is reshaped to (10,1) and y to (1,5), and the + takes an outer sum.
X,Y=np.meshgrid(x,y,indexing='ij') produces two (10,5) arrays that can be used the same way. Look at its docs for other parameters.
So if your more complex function can be written in a way that takes 2d arrays like this, it is easy to 'vectorize'.
But if that function must take 2 scalars, and return another scalar, then you are stuck with some sort of double loop.
A list comprehension form of the double loop is:
np.array([[x1**2+y1**2 for y1 in y] for x1 in x])
Another is:
z=np.empty((10,5))
for i in range(10):
    for j in range(5):
        z[i,j] = x[i]**2 + y[j]**2
This double loop can be sped up somewhat by using np.vectorize. This takes a user defined function, and returns one that can take broadcastable arrays:
In [65]: vprod=np.vectorize(lambda x,y: x**2+y**2)
In [66]: vprod(x[:,None],y[None,:]).shape
Out[66]: (10, 5)
Tests that I've done in the past show that vectorize can improve on the list comprehension route by something like 20%, but the improvement is nothing like writing your function to work with 2d arrays in the first place.
By the way, this sort of 'vectorization' question has been asked many times on SO for numpy. Beyond these broad examples, we can't help you without knowing more about that more complicated function. As long as it is a black box that takes scalars, the best we can help you with is np.vectorize. And you still need to understand broadcasting (with or without meshgrid help).
I think there is a better way, it is right on the tip of my tongue, but as an interim measure:
You are operating on 1x2 windows of a meshgrid. You can use as_strided from numpy.lib.stride_tricks to rearrange the meshgrid into two-element windows, then apply your function to the resultant array. I like to use a generic nd solution, sliding_windows (http://www.johnvinyard.com/blog/?p=268) (Not mine) to transform the array.
import numpy as np
a = np.array([1,2,3])
b = np.array([.1, .2, .3])
z= np.array(np.meshgrid(a,b))
def foo(pair):            # Python 3: unpack inside the function instead of in the signature
    x, y = pair
    return x + y
>>> z.shape
(2, 3, 3)
>>> t = sliding_window(z, (2,1,1))
>>> t
array([[ 1. , 0.1],
[ 2. , 0.1],
[ 3. , 0.1],
[ 1. , 0.2],
[ 2. , 0.2],
[ 3. , 0.2],
[ 1. , 0.3],
[ 2. , 0.3],
[ 3. , 0.3]])
>>> v = np.apply_along_axis(foo, 1, t)
>>> v
array([ 1.1, 2.1, 3.1, 1.2, 2.2, 3.2, 1.3, 2.3, 3.3])
>>> v.reshape((len(a), len(b)))
array([[ 1.1, 2.1, 3.1],
[ 1.2, 2.2, 3.2],
[ 1.3, 2.3, 3.3]])
>>>
This should be somewhat faster.
You may need to modify your function's argument signature.
If the link to the johnvinyard.com blog breaks, I've posted the sliding_window implementation in other SO answers - https://stackoverflow.com/a/22749434/2823755
Search around and you'll find many other tricky as_strided solutions.
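If you would rather avoid stride tricks altogether, a rough alternative sketch (mine) builds the same (n*m, 2) table of pairs by stacking the meshgrid outputs:

import numpy as np

a = np.array([1, 2, 3])
b = np.array([0.1, 0.2, 0.3])

# stack the two coordinate grids along a new last axis, then flatten to pairs
pairs = np.stack(np.meshgrid(a, b), axis=-1).reshape(-1, 2)

def foo(pair):
    x, y = pair
    return x + y

v = np.apply_along_axis(foo, 1, pairs)
print(v.reshape(len(b), len(a)))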
In response to your edited question:
normalized_matrix
print normalized_matrix.shape #output (4000,4000)
data_vector = datarr
print datarr.shape #output (4000,)
def func(x, y):
    matrix1 = x[:, None, None] * normalized_matrix[None, :, :]
    matrix2 = y[:, None, None] * id_mat[None, :, :]
    total_matrix = matrix1 + matrix2
    # transpose(datavector) * matrix * datavector
    # by matrix multiplication, equals single value
    # return np.array([np.dot(datarr.T, np.dot(total_matrix, datarr))])
    return np.einsum('j,ijk,k->i', datarr, total_matrix, datarr)
Since datarr is shape (4000,), transpose does nothing. I believe you want the result of the 2 dots to be shape (50,). I'm suggesting using einsum. But it can be done with tensordot, or I think even np.dot(np.dot(total_matrix, datarr),datarr). Test the expression with smaller arrays, focusing on getting the shapes right.
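For example, a shape check with small stand-in arrays (sizes chosen arbitrarily), comparing the einsum expression against an explicit loop, might look like:

import numpy as np

rng = np.random.default_rng(0)
datarr = rng.random(6)                 # stand-in for the (4000,) data vector
total_matrix = rng.random((5, 6, 6))   # stand-in for the (50, 4000, 4000) stack

by_einsum = np.einsum('j,ijk,k->i', datarr, total_matrix, datarr)
by_loop = np.array([datarr @ total_matrix[i] @ datarr for i in range(5)])
assert np.allclose(by_einsum, by_loop)   # shape (5,) here, (50,) with the real arrays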
x = np.linspace(1e10, 1e12, num=50) # 50 values
y = np.linspace(1e5, 1e7, num=50) # 50 values
z = func(x,y)
# X, Y = np.meshgrid(x,y)
# z = func(X, Y)
X,Y is wrong. func takes x and y that are 1d. Notice how you expand the dimensions with [:, None, None]. Also you aren't creating a 2d array from an outer combination of x and y. None of your arrays in func is (50,50) or (50,50,...). The higher dimensions are provided by normalized_matrix and id_mat.
When showing us the ValueError you should also indicate where in your code that occurred. Otherwise we have to guess, or recreate the code ourselves.
In fact when I run my edited func(X,Y), I get this error:
----> 2 matrix1 = x [:, None, None] * normalized_matrix[None, :, :]
3 matrix2 = y[:, None, None] * id_mat[None, :, :]
4 total_matrix = matrix1 + matrix2
5 # transpose(datavector) * matrix * datavector
ValueError: operands could not be broadcast together with shapes (50,1,1,50) (1,400,400)
See, the error occurs right at the start. normalized_matrix is expanded to (1,400,400) [I'm using smaller examples]. The (50,50) X is expanded to (50,1,1,50). x expands to (50,1,1), which broadcasts just fine.
To address the edit and the broadcasting error in the edit:
Inside your function you are adding dimensions to arrays to try to get them to broadcast.
matrix1 = x [:, None, None] * normalized_matrix[None, :, :]
This expression looks like you want to broadcast a 1d array with a 2d array.
The results of your meshgrid are two 2d arrays:
X,Y = np.meshgrid(x,y)
>>> X.shape, Y.shape
((50, 50), (50, 50))
>>>
When you try to use X in your broadcasting expression, the dimensions don't line up; that is what causes the ValueError - refer to the General Broadcasting Rules:
>>> x1 = X[:, np.newaxis, np.newaxis]
>>> nm = normalized_matrix[np.newaxis, :, :]
>>> x1.shape
(50, 1, 1, 50)
>>> nm.shape
(1, 4000, 4000)
>>>
You're on the right track with your list comprehension, you just need to add in an extra level of iteration:
np.array([[func(i,j) for i in x] for j in y])
Numpy's meshgrid is very useful for converting two vectors to a coordinate grid. What is the easiest way to extend this to three dimensions? So given three vectors x, y, and z, construct 3x3D arrays (instead of 2x2D arrays) which can be used as coordinates.
Numpy (as of 1.8, I think) now supports higher than 2D generation of position grids with meshgrid. One important addition which really helped me is the ability to choose the indexing order (either xy or ij for Cartesian or matrix indexing, respectively), which I verified with the following example:
import numpy as np
x_ = np.linspace(0., 1., 10)
y_ = np.linspace(1., 2., 20)
z_ = np.linspace(3., 4., 30)
x, y, z = np.meshgrid(x_, y_, z_, indexing='ij')
assert np.all(x[:,0,0] == x_)
assert np.all(y[0,:,0] == y_)
assert np.all(z[0,0,:] == z_)
Here is the source code of meshgrid:
def meshgrid(x,y):
    """
    Return coordinate matrices from two coordinate vectors.

    Parameters
    ----------
    x, y : ndarray
        Two 1-D arrays representing the x and y coordinates of a grid.

    Returns
    -------
    X, Y : ndarray
        For vectors `x`, `y` with lengths ``Nx=len(x)`` and ``Ny=len(y)``,
        return `X`, `Y` where `X` and `Y` are ``(Ny, Nx)`` shaped arrays
        with the elements of `x` and y repeated to fill the matrix along
        the first dimension for `x`, the second for `y`.

    See Also
    --------
    index_tricks.mgrid : Construct a multi-dimensional "meshgrid"
                         using indexing notation.
    index_tricks.ogrid : Construct an open multi-dimensional "meshgrid"
                         using indexing notation.

    Examples
    --------
    >>> X, Y = np.meshgrid([1,2,3], [4,5,6,7])
    >>> X
    array([[1, 2, 3],
           [1, 2, 3],
           [1, 2, 3],
           [1, 2, 3]])
    >>> Y
    array([[4, 4, 4],
           [5, 5, 5],
           [6, 6, 6],
           [7, 7, 7]])

    `meshgrid` is very useful to evaluate functions on a grid.

    >>> x = np.arange(-5, 5, 0.1)
    >>> y = np.arange(-5, 5, 0.1)
    >>> xx, yy = np.meshgrid(x, y)
    >>> z = np.sin(xx**2+yy**2)/(xx**2+yy**2)

    """
    x = asarray(x)
    y = asarray(y)
    numRows, numCols = len(y), len(x)  # yes, reversed
    x = x.reshape(1,numCols)
    X = x.repeat(numRows, axis=0)
    y = y.reshape(numRows,1)
    Y = y.repeat(numCols, axis=1)
    return X, Y
It is fairly simple to understand. I extended the pattern to an arbitrary number of dimensions; the code is by no means optimized (and not thoroughly error-checked either), but you get what you pay for. Hope it helps:
def meshgrid2(*arrs):
    arrs = tuple(reversed(arrs))  # edit
    lens = list(map(len, arrs))   # list() so it can be indexed and re-iterated on Python 3
    dim = len(arrs)
    sz = 1
    for s in lens:
        sz *= s                   # total number of grid points (not used below)
    ans = []
    for i, arr in enumerate(arrs):
        slc = [1]*dim
        slc[i] = lens[i]
        arr2 = np.asarray(arr).reshape(slc)
        for j, sz in enumerate(lens):
            if j != i:
                arr2 = arr2.repeat(sz, axis=j)
        ans.append(arr2)
    return tuple(ans)
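A tentative usage sketch (mine; it assumes the Python 3 fixes above and import numpy as np). Because of the reversed() at the top, the grids come back in reverse argument order, each shaped (len(z), len(y), len(x)):

import numpy as np

x_ = np.linspace(0.0, 1.0, 4)
y_ = np.linspace(1.0, 2.0, 3)
z_ = np.linspace(3.0, 4.0, 2)

zz, yy, xx = meshgrid2(x_, y_, z_)
assert zz.shape == yy.shape == xx.shape == (len(z_), len(y_), len(x_))
assert np.all(xx[0, 0, :] == x_)
assert np.all(yy[0, :, 0] == y_)
assert np.all(zz[:, 0, 0] == z_)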
Can you show us how you are using np.meshgrid? There is a very good chance that you really don't need meshgrid because numpy broadcasting can do the same thing without generating a repetitive array.
For example,
import numpy as np
x=np.arange(2)
y=np.arange(3)
[X,Y] = np.meshgrid(x,y)
S=X+Y
print(S.shape)
# (3, 2)
# Note that meshgrid associates y with the 0-axis, and x with the 1-axis.
print(S)
# [[0 1]
# [1 2]
# [2 3]]
s=np.empty((3,2))
print(s.shape)
# (3, 2)
# x.shape is (2,).
# y.shape is (3,).
# x's shape is broadcasted to (3,2)
# y varies along the 0-axis, so to get its shape broadcasted, we first upgrade it to
# have shape (3,1), using np.newaxis. Arrays of shape (3,1) can be broadcasted to
# arrays of shape (3,2).
s=x+y[:,np.newaxis]
print(s)
# [[0 1]
# [1 2]
# [2 3]]
The point is that S=X+Y can and should be replaced by s=x+y[:,np.newaxis] because
the latter does not require (possibly large) repetitive arrays to be formed. It also generalizes to higher dimensions (more axes) easily. You just add np.newaxis where needed to effect broadcasting as necessary.
See http://www.scipy.org/EricsBroadcastingDoc for more on numpy broadcasting.
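For the three-vector case asked about here, the same idea looks like this (a minimal sketch of my own):

import numpy as np

x = np.arange(2)
y = np.arange(3)
z = np.arange(4)

# one np.newaxis per "other" axis; no meshgrid arrays are ever materialized
s3 = x[:, np.newaxis, np.newaxis] + y[np.newaxis, :, np.newaxis] + z[np.newaxis, np.newaxis, :]
print(s3.shape)   # (2, 3, 4)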
I think what you want is
X, Y, Z = numpy.mgrid[-10:10:100j, -10:10:100j, -10:10:100j]
for example.
Here is a multidimensional version of meshgrid that I wrote:
def ndmesh(*args):
    args = map(np.asarray, args)
    return np.broadcast_arrays(*[x[(slice(None),)+(None,)*i] for i, x in enumerate(args)])
Note that the returned arrays are views of the original array data, so changing the original arrays will affect the coordinate arrays.
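A quick shape check of ndmesh (my own example, assuming the function above and import numpy as np); note that broadcasting puts the first argument on the last axis:

import numpy as np

x = np.array([1, 2, 3, 4])
y = np.array([10, 20, 30])
z = np.array([100, 200])

X, Y, Z = ndmesh(x, y, z)
assert X.shape == Y.shape == Z.shape == (len(z), len(y), len(x))
assert np.all(X[0, 0, :] == x)
assert np.all(Y[0, :, 0] == y)
assert np.all(Z[:, 0, 0] == z)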
Instead of writing a new function, numpy.ix_ should do what you want.
Here is an example from the documentation:
>>> ixgrid = np.ix_([0,1], [2,4])
>>> ixgrid
(array([[0],
[1]]), array([[2, 4]]))
>>> ixgrid[0].shape, ixgrid[1].shape
((2, 1), (1, 2))
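For example (my own sketch), the open grids returned by np.ix_ broadcast directly in arithmetic, so a function can be evaluated over the full grid without building dense coordinate arrays:

import numpy as np

x = np.array([1, 2, 3])
y = np.array([10, 20])
z = np.array([100, 200, 300, 400])

xg, yg, zg = np.ix_(x, y, z)
print(xg.shape, yg.shape, zg.shape)   # (3, 1, 1) (1, 2, 1) (1, 1, 4)

grid_sum = xg + yg + zg               # broadcasts to the full grid
print(grid_sum.shape)                 # (3, 2, 4)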
You can achieve that by changing the order:
import numpy as np
xx = np.array([1,2,3,4])
yy = np.array([5,6,7])
zz = np.array([9,10])
y, z, x = np.meshgrid(yy, zz, xx)