I have two arrays of numbers and I want to compute a weighted average element-wise between these arrays and store it in a new array.
The solution I use for now is:
import numpy as np
array_1 = np.array([0, 1, 2, 3, 4])
array_2 = np.array([2, 3, 4, 5, 6])
weight_1 = 0.5
weight_2 = 0.5
array_3 = np.zeros(array_1.shape)
for i in range(len(array_1)):
    array_3[i] = np.average(a=[array_1[i], array_2[i]], weights=[weight_1, weight_2])
print(array_3)
>> [1. 2. 3. 4. 5.]
The problem is that it is not really efficient. How can I do this more efficiently?
Just use NumPy's vectorised operations. To do so, first convert your lists to arrays, then multiply each array by its respective weight and take the sum:
import numpy as np
array_1 = np.array([0,1,2,3,4])
array_2 = np.array([2,3,4,5,6])
weight_1 = 0.5
weight_2 = 0.5
array_3 = weight_1*array_1 + weight_2*array_2
# array([1., 2., 3., 4., 5.])
A direct NumPy solution using np.average would be the following, where axis=0 means the average is taken across the stacked rows (i.e. element-wise over the two arrays). np.vstack() simply stacks the two arrays vertically:
np.average(np.vstack((array_1, array_2)), axis=0, weights=[weight_1, weight_2])
As pointed out by @yatu, you can also pass a list of your arrays and specify the axis:
np.average([array_1, array_2], axis=0, weights=[weight_1, weight_2])
Timing comparison, inspired by the comments on @yatu's answer: as you can see, the list comprehension with zip is slightly faster here, but only because the arrays are small; for large arrays, the vectorised solutions will take over.
Devesh's method
%timeit result = [ item1 * weight_1 + item2 * weight_2 for item1, item2 in zip(array_1, array_2)]
# 25.5 µs ± 3.75 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.average([array_1, array_2], axis=0, weights=[weight_1, weight_2])
# 42.9 µs ± 2.94 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.average(np.vstack((array_1, array_2)), axis=0, weights=[weight_1, weight_2])
# 44.8 µs ± 4.98 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
You can zip both iterators and multiply each element by the corresponding weight:
array_1 = [0,1,2,3,4]
array_2 = [2,3,4,5,6]
weight_1 = 0.5
weight_2 = 0.5
#Zip both iterators and multiply weight with corresponding item
result = [ item1 * weight_1 + item2 * weight_2 for item1, item2 in zip(array_1, array_2)]
print(result)
The output will be
[1.0, 2.0, 3.0, 4.0, 5.0]
Given that you're using NumPy, you can easily vectorize this by doing:
array_1 = np.array([0,1,2,3,4])
array_2 = np.array([2,3,4,5,6])
weight_1 = 0.5
weight_2 = 0.5
array_1*weight_1 + array_2*weight_2
# array([1., 2., 3., 4., 5.])
Can this be generalised for multiple arrays and weights?
For a more generalizable answer, the best way is to use np.average, which accepts an array_like both for the arrays and the weights to be applied to each of these:
np.average([array_1, array_2], weights=[weight_1, weight_2], axis=0)
# array([1., 2., 3., 4., 5.])
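This generalizes directly to more than two arrays. A small sketch (my own illustration, with an arbitrary third array and weights that sum to 1):
array_a = np.array([0, 1, 2, 3, 4])
array_b = np.array([2, 3, 4, 5, 6])
array_c = np.array([4, 5, 6, 7, 8])
np.average([array_a, array_b, array_c], weights=[0.25, 0.25, 0.5], axis=0)
# array([2.5, 3.5, 4.5, 5.5, 6.5])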
Shouldn't the equation for the weighted average be:
(array_1*weight_1 + array_2*weight_2)/(weight_1 + weight_2)
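np.average handles that normalization internally (it divides by the sum of the weights), so the explicit division is only needed for the hand-written weighted sum; above, the weights already sum to 1, which is why the two agree. A quick check with unnormalized weights (my own illustration):
array_1 = np.array([0, 1, 2, 3, 4])
array_2 = np.array([2, 3, 4, 5, 6])
w1, w2 = 2, 2  # deliberately do not sum to 1
np.average([array_1, array_2], axis=0, weights=[w1, w2])
# array([1., 2., 3., 4., 5.])
(array_1 * w1 + array_2 * w2) / (w1 + w2)  # the manual version needs the division
# array([1., 2., 3., 4., 5.])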
Given two tensors A and B with the same number of dimensions (d >= 2) and shapes [A_{1},...,A_{d-2},A_{d-1},A_{d}] and [A_{1},...,A_{d-2},B_{d-1},B_{d}] (the shapes of the first d-2 dimensions are identical).
Is there a way to calculate the kronecker product over the last two dimensions?
The shape of my_kron(A, B) should be [A_{1},...,A_{d-2},A_{d-1}*B_{d-1},A_{d}*B_{d}].
For example with d=3,
A.shape=[2,3,3]
B.shape=[2,4,4]
C=my_kron(A,B)
C[0,...] should be the kronecker product of A[0,...] and B[0,...] and C[1,...] the kronecker product of A[1,...] and B[1,...].
For d=2 this is simply what the jnp.kron (or np.kron) function does.
For d=3 this can be achieved with jax.vmap:
jax.vmap(jnp.kron)(A, B)
But I was not able to find a solution for general (unknown) dimensions.
Any suggestions?
In numpy terms I think this is what you are doing:
In [104]: A = np.arange(2*3*3).reshape(2,3,3)
In [105]: B = np.arange(2*4*4).reshape(2,4,4)
In [106]: C = np.array([np.kron(a,b) for a,b in zip(A,B)])
In [107]: C.shape
Out[107]: (2, 12, 12)
That treats the initial dimension, the 2, as a batch. One obvious generalization is to reshape the arrays, collapsing the leading dimensions into a single batch dimension, e.g. reshape(-1, 3, 3), and then afterwards reshape C back to the desired number of dimensions, as in the sketch below.
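Here's a minimal sketch of that reshape-based generalization (my addition; kron_batched is a hypothetical helper, assuming the matrices live in the last two dimensions):
def kron_batched(A, B):
    # collapse all leading dimensions into a single batch axis
    batch_shape = A.shape[:-2]
    A2 = A.reshape(-1, *A.shape[-2:])
    B2 = B.reshape(-1, *B.shape[-2:])
    # kron each pair, then restore the leading dimensions
    C = np.array([np.kron(a, b) for a, b in zip(A2, B2)])
    return C.reshape(*batch_shape, *C.shape[-2:])

kron_batched(A, B).shape
# (2, 12, 12)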
np.kron does accept 3d (and higher) inputs, but it performs a kind of outer product over the shared first dimension of size 2:
In [108]: np.kron(A,B).shape
Out[108]: (4, 12, 12)
And viewing that leading dimension of 4 as (2,2), I can take the diagonal and get your C:
In [109]: np.allclose(np.kron(A,B)[[0,3]], C)
Out[109]: True
The full kron does more calculations than needed, but is still faster:
In [110]: timeit C = np.array([np.kron(a,b) for a,b in zip(A,B)])
108 µs ± 2.23 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [111]: timeit np.kron(A,B)[[0,3]]
76.4 µs ± 1.36 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
I'm sure it's possible to do your calculation in a more direct way, but doing that requires a better understanding of how kron works. A quick glance at the np.kron code suggests that it does an outer(A,B):
In [114]: np.outer(A,B).shape
Out[114]: (18, 32)
which has the same number of elements, but it then reshapes and concatenates to produce the kron layout.
But following a hunch, I found that this is equivalent to what you want:
In [123]: D = A[:,:,None,:,None]*B[:,None,:,None,:]
In [124]: np.allclose(D.reshape(2,12,12),C)
Out[124]: True
In [125]: timeit np.reshape(A[:,:,None,:,None]*B[:,None,:,None,:],(2,12,12))
14.3 µs ± 184 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
That is easily generalized to more leading dimensions.
def my_kron(A, B):
    # pairwise products arranged so that a reshape yields the kron block layout
    D = A[..., :, None, :, None] * B[..., None, :, None, :]
    ds = D.shape
    # merge the interleaved row axes and the interleaved column axes
    newshape = (*ds[:-4], ds[-4]*ds[-3], ds[-2]*ds[-1])
    return D.reshape(newshape)
In [137]: my_kron(A.reshape(1,2,1,3,3),B.reshape(1,2,1,4,4)).shape
Out[137]: (1, 2, 1, 12, 12)
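As a quick sanity check (my addition, not part of the original answer), my_kron agrees with the batched np.kron loop on a 4-d example:
A4 = np.random.rand(2, 2, 3, 3)
B4 = np.random.rand(2, 2, 4, 4)
ref = np.array([[np.kron(A4[i, j], B4[i, j]) for j in range(2)] for i in range(2)])
np.allclose(my_kron(A4, B4), ref)
# True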
I have the following 2 arrays:
X = array([37., 42., 31., 27., 37.])
Y = array([52., 57., 62., 68., 69.])
I can alternatively combine them into a single array:
XY = np.array((X, Y)).T
which produces
array([[37., 52.],
       [42., 57.],
       [31., 62.],
       [27., 68.],
       [37., 69.]])
I want to compute the sum of the distances between consecutive points, closing the loop from the last point back to the first.
E.g. I want to do this:
(
np.linalg.norm(np.array([37, 52]) - np.array([42, 57]))
+ np.linalg.norm(np.array([42, 57]) - np.array([31, 62]))
+ np.linalg.norm(np.array([31, 62]) - np.array([27, 68]))
+ np.linalg.norm(np.array([27, 68]) - np.array([37, 69]))
+ np.linalg.norm(np.array([37, 69]) - np.array([37, 52]))
)
which then produces 53.41509195750892
I have written a function that does so:
def distance(X, Y):
    T = 0
    oldx, oldy = X[-1], Y[-1]
    for x, y in zip(X, Y):
        T += np.linalg.norm(np.array([x, y]) - np.array([oldx, oldy]))
        oldx = x
        oldy = y
    return T
print(distance(X, Y))
which also produces 53.41509195750891
I'm interested in knowing if there is a more elegant/efficient way to do this with numpy array operations.
EDIT: I'm sorry, the original example function I gave was wrong; it should be correct now.
EDIT: Thanks everyone for the answers! Here is a benchmark with my array of size 50; it seems that Dani's answer is the fastest, even though Akshay's answer was faster for the size-5 array.
def distance_charel(X, Y):
    T = 0
    oldx, oldy = X[-1], Y[-1]
    for x, y in zip(X, Y):
        T += np.linalg.norm(np.array([x, y]) - np.array([oldx, oldy]))
        oldx = x
        oldy = y
    return T
def distance_dani(X, Y):
    XY = np.array((X, Y)).T
    diff = np.diff(XY, axis=0, prepend=XY[-1].reshape((1, -1)))
    ss = np.power(diff, 2).sum(axis=1)
    res = np.sqrt(ss).sum()
    return res
def distance_akshay(X, Y):
    XY = np.array((X, Y)).T
    pairwise = np.sqrt(np.sum(np.square(np.subtract(XY[:,None,:], XY[None,:,:])), axis=-1))
    total = np.sum(np.diag(pairwise, k=1)) + pairwise[0,-1]
    return total
def distance_gwang(X, Y):
    XY = np.array((X, Y)).T
    return sum([sum((p1 - p2) ** 2) ** .5 for p1, p2 in zip(XY, XY[1:])])
def distance_andy(X, Y):
    arr = np.array((X, Y)).T
    return np.linalg.norm(arr - np.roll(arr, -1, axis=0), axis=1).sum()
then
print(distance_charel(X, Y))
print(distance_dani(X, Y))
print(distance_akshay(X, Y))
print(distance_gwang(X, Y))  # I think it misses the distance between the last and first element
print(distance_andy(X, Y))
%timeit distance_charel(X, Y)
%timeit distance_dani(X, Y)
%timeit distance_akshay(X, Y)
%timeit distance_gwang(X, Y)
%timeit distance_andy(X, Y)
outputs
2586.769647563161
2586.76964756316
2586.7696475631597
2568.8811037431624
2586.7696475631597
2.49 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
29.9 µs ± 191 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
385 µs ± 12.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
1.09 ms ± 4.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
31.2 µs ± 133 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
EDIT: I accepted Dani's answer since I find his code the best for my situation (it uses numpy vector operations, is fairly readable, and is the fastest, by a small margin). Thanks to all of you for answering!
EDIT: I updated the benchmark, using 280 coordinates
You can do this completely vectorized, as a one-liner without ANY loops, using broadcasting -
First, (5,1,2) is broadcast against (1,5,2) -> (5,5,2)
Subtract with this broadcasting to get (5,5,2)
Then square each element of the (5,5,2)
Sum over the last axis to get (5,5)
Finally, take the square root!
Next, you can just take the shifted diagonal array, which holds the distances between consecutive points (1,2), (2,3), .... Sum that, and since you want to close the loop back to the first point, add the value at [0,-1].
#This gets all pairwise distances, calculated with broadcasting
pairwise = np.sqrt(np.sum(np.square(np.subtract(XY[:,None,:],XY[None,:,:])), axis=-1))
#This sums the first superdiagonal (consecutive pairs) instead of the main diagonal
total = np.sum(np.diag(pairwise,k=1))+pairwise[0,-1]
print(total)
53.41509195750892
Another way you can do this is the following, but the above approach will still be faster -
np.sum(np.sqrt(np.sum(np.square(np.diff(np.vstack([XY,XY[0]]), axis=0)), axis=-1)))
#The np.vstack adds the first coordinate into the array so that you can
#calculate the distance from the last to the first again
Benchmarks -
Akshay Sehgal - 19.9 µs ± 2.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Gwang - 21.5 µs ± 1.01 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
ombk - 60.4 µs ± 5.72 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Dani Mesejo - 16.4 µs ± 6.12 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Andy L - 17.6 µs ± 3.08 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
As expected, numpy vectorization always rules the day! Gj Dani!
You could compute the formula by hand in a vectorized fashion by using diff, power and sqrt:
import numpy as np
# setup
X = np.array([37., 42., 31., 27., 37.])
Y = np.array([52., 57., 62., 68., 69.])
XY = np.array((X, Y)).T
# find the differences, prepend the last value at the front
diff = np.diff(XY, axis=0, prepend=XY[-1].reshape((1, -1)))
# raise to the power of 2 and sum
ss = np.power(diff, 2).sum(axis=1)
# find the square root and sum
res = np.sqrt(ss).sum()
print(res)
Output
53.41509195750891
The first step:
# find the differences, prepend the last value at the front
diff = np.diff(XY, axis=0, prepend=XY[-1].reshape((1, -1)))
computes the differences between consecutive points, i.e. (x2 - x1) and (y2 - y1); the second step:
# raise to the power of 2 and sum
ss = np.power(diff, 2).sum(axis=1)
raises those values to the power of two, i.e. (x2 - x1)^2 and (y2 - y1)^2, and sums them; finally:
# find the square root and sum
res = np.sqrt(ss).sum()
As it says, it takes the square root and sums the results.
To understand it better let's look at a smaller example:
# setup
X = np.array([37., 42.])
Y = np.array([52., 57.])
XY = np.array((X, Y)).T
diff = np.diff(XY, axis=0)
# [[5. 5.]] (42 - 37) (57 - 52)
ss = np.power(diff, 2).sum(axis=1)
# [50.] 5^2 + 5^2
res = np.sqrt(ss).sum()
# 7.0710678118654755
You may use np.roll together with np.linalg.norm and sum
#arr = np.stack([X,Y], axis=1)
arr = np.array((X, Y)).T #as suggested in the comment
Out[50]:
array([[37., 52.],
[42., 57.],
[31., 62.],
[27., 68.],
[37., 69.]])
In [52]: np.linalg.norm(arr - np.roll(arr, -1, axis=0), axis=1).sum()
Out[52]: 53.41509195750892
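To see why this closes the loop automatically: np.roll(arr, -1, axis=0) shifts every row up by one and wraps the first row around to the end, so each point is paired with its successor (a small illustration, my addition):
import numpy as np
arr = np.array([[1, 2], [3, 4], [5, 6]])
np.roll(arr, -1, axis=0)
# array([[3, 4],
#        [5, 6],
#        [1, 2]])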
#preparation:
x = np.array([37., 42., 31., 27., 37.])
y = np.array([52., 57., 62., 68., 69.])
xy = np.array((x, y)).T
def euclidean_distance(p1, p2):
    return sum((p1 - p2) ** 2) ** .5
You can do it more elegantly using functional programming.
Here, you want to reduce over the list of pairs of successive elements in xy:
from functools import reduce
from operator import add
reduce(add, [euclidean_distance(p1, p2) for p1, p2 in zip(xy, xy[1:])])
## 36.41509195750892
reduce over a list [1, 2, 3, 4, ..., k]
by applying a dyadic function func(a, b) produces this:
func( ... func(func(func(func(1, 2), 3), 4), 5) ..., k).
@DaniMesejo pointed out that reduce(add, lst) is just sum(lst).
So it is even much simpler:
sum([euclidean_distance(p1, p2) for p1, p2 in zip(xy, xy[1:])])
The best trick here is actually zip(xy, xy[1:]), which creates from a list [1, 2, 3, 4, ..., k] the pairs [(1, 2), (2, 3), (3, 4), ..., (k-1, k)], as the small illustration below shows.
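For instance (a tiny illustration with plain numbers):
xy_small = [1, 2, 3, 4]
list(zip(xy_small, xy_small[1:]))
# [(1, 2), (2, 3), (3, 4)]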
from scipy.spatial.distance import euclidean
X = np.array([37., 42., 31., 27., 37.])
Y = np.array([52., 57., 62., 68., 69.])
XY = np.array((X, Y)).T
sum1 = euclidean(XY[0], XY[-1])
for i in range(len(XY)-1):
    sum1 += euclidean(XY[i], XY[i+1])
This should do it: start your sum with the hardest term (the wrap-around distance from last to first), then iterate over the easier consecutive pairs and add them all together.
As a check, euclidean(XY[0], XY[1]) = 7.0710678118654755, the same value that you provided.
In [2]: df = pd.DataFrame([[37., 42., 31., 27., 37.],
...: [52., 57., 62., 68., 69.]]).T.rename(columns={0:"X", 1:"y"})
...: df
Out[2]:
X y
0 37.0 52.0
1 42.0 57.0
2 31.0 62.0
3 27.0 68.0
4 37.0 69.0
In [3]: from scipy.spatial.distance import euclidean
...: np.sum([euclidean(df.iloc[i], df.iloc[i+1]) for i in range(len(df)-1)])
Out[3]: 36.41509195750892
I have a for loop that creates about 50 arrays, each of length 240. I'm trying to figure out the best way to calculate the element-wise median across these arrays. Essentially, I want to take the first element of each array created in the loop, put them into a list, and find the median; then do the same for the other 239 positions. Something like this is what I'm thinking of:
a = np.array([1,2,4,56,67,8,8,9])
b = np.array([-1,-3,5,6,-7,-6,-8,0])
c = np.array([1,2,3,4,5,6,7,8])
d = []
d.append(a[0])
d.append(b[0])
d.append(c[0])
d
Out[62]: [1, -1, 1]
np.median(d)
Out[65]: 1.0
np.median will take the median along whatever axis you want, so if you can get all your individual arrays into a single array, you can call np.median() and get all the medians at once:
a = np.array([1,2,4,56,67,8,8,9])
b = np.array([-1,-3,5,6,-7,-6,-8,0])
c = np.array([1,2,3,4,5,6,7,8])
d = np.stack([a, b, c])
np.median(d, axis = 0)
# array([1., 2., 4., 6., 5., 6., 7., 8.])
Of course, if you can make the 50x240 array directly without the loop, that's even better (see the sketch after the timings below).
The timing of letting NumPy do this vs a python loop is compelling:
l = [np.random.rand(240) for _ in range(50)]
def one(l):
    return np.array(list(map(np.median, zip(*l))))
def two(l):
    d = np.stack(l)
    return np.median(d, axis=0)
> %timeit one(l)
17 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
> %timeit two(l)
456 µs ± 39.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
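For example, if each iteration of your loop produces one length-240 array, you can fill a preallocated 2-D array instead of collecting a list (a sketch; compute_row is a hypothetical stand-in for whatever your loop body computes):
import numpy as np

def compute_row(i):
    # stand-in for the real computation that produces each length-240 array
    return np.random.rand(240)

rows = np.empty((50, 240))
for i in range(50):
    rows[i] = compute_row(i)

medians = np.median(rows, axis=0)  # shape (240,)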
You can do it this way:
medians = [np.median([a[i],b[i],c[i]]) for i in range(len(a))]
You can use zip with map to get an iterator:
map(np.median, zip(a, b, c))
If you want it as a numpy array, you can use:
np.array(list(map(np.median, zip(a, b, c))))
Output:
array([1., 2., 4., 6., 5., 6., 7., 8.])
This is a rather simple operation, but it is repeated millions of times in my actual code and, if possible, I'd like to improve its performance.
import numpy as np
# Initial data array
xx = np.random.uniform(0., 1., (3, 14, 1))
# Coefficients used to modify 'xx'
a, b, c = np.random.uniform(0., 1., 3)
# Operation on 'xx' to obtain the final array 'yy'
yy = xx[0] * a * b + xx[1] * b + xx[2] * c
The last line is the one I'd like to improve. Basically, each term in xx is multiplied by a factor (given by the a, b, c coefficients) and then all terms are added to give a final yy array with the shape (14, 1) vs the shape of the initial xx array (3, 14, 1).
Is it possible to do this via numpy broadcasting?
We could use broadcasted multiplication and then sum along the first axis for the first alternative. As the second, we could bring in matrix multiplication with np.dot, giving us two more approaches in total. Here are the timings for the sample provided in the question -
# Original one
In [81]: %timeit xx[0] * a * b + xx[1] * b + xx[2] * c
100000 loops, best of 3: 5.04 µs per loop
# Proposed alternative #1
In [82]: %timeit (xx *np.array([a*b,b,c])[:,None,None]).sum(0)
100000 loops, best of 3: 4.44 µs per loop
# Proposed alternative #2
In [83]: %timeit np.array([a*b,b,c]).dot(xx[...,0])[:,None]
1000000 loops, best of 3: 1.51 µs per loop
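To see why the matrix-multiplication version works, it helps to trace the shapes (my annotation, using the xx, a, b, c from the question's setup):
w = np.array([a*b, b, c])  # shape (3,)
m = xx[..., 0]             # shape (3, 14): drops the trailing length-1 axis
yy2 = w.dot(m)[:, None]    # (3,) . (3, 14) -> (14,), then back to (14, 1)
np.allclose(yy2, xx[0]*a*b + xx[1]*b + xx[2]*c)
# True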
This is similar to Divakar's answer. Swap the first and the third axis of xx and do dot product.
import numpy as np
# Initial data array
xx = np.random.uniform(0., 1., (3, 14, 1))
# Coefficients used to modify 'xx'
a, b, c = np.random.uniform(0., 1., 3)
def op():
    yy = xx[0] * a * b + xx[1] * b + xx[2] * c
    return yy
def tai():
    d = np.array([a*b, b, c])
    return np.swapaxes(np.swapaxes(xx, 0, 2).dot(d), 0, 1)
def Divakar():
    # improvement given by Divakar
    return np.array([a*b, b, c]).dot(xx.swapaxes(0, 1))
%timeit op()
7.21 µs ± 222 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit tai()
4.06 µs ± 140 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit Divakar()
3 µs ± 105 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Say I have two arrays,
import numpy as np
x = np.array([1, 2, 3, 4])
y = np.array([5, 6, 7, 8])
What's the fastest, most Pythonic way to get a new array, z, with a number of elements equal to x.size * y.size, in which the elements are the products of every pair of elements (x_i, y_j) from the two input arrays?
To rephrase, I'm looking for an array z in which z[k] is x[i] * y[j].
A simple but inefficient way to get this is as follows:
z = np.empty(x.size * y.size)
counter = 0
for i in x:
    for j in y:
        z[counter] = i * j
        counter += 1
Running the above code shows that z in this example is
In [3]: z
Out[3]:
array([ 5., 6., 7., 8., 10., 12., 14., 16., 15., 18., 21.,
24., 20., 24., 28., 32.])
Here's one way to do it:
import itertools
z = np.empty(x.size * y.size)
counter = 0
for i, j in itertools.product(x, y):
    z[counter] = i * j
    counter += 1
It'd be nice to get rid of that counter, though, as well as the for loop (but at least I got rid of one of the loops).
UPDATE
Being one-liners, the other provided answers are better than this one (according to my standards, which value brevity). The timing results below show that @BilalAkil's answer is faster than @TimLeathart's:
In [10]: %timeit np.array([x * j for j in y]).flatten()
The slowest run took 4.37 times longer than the fastest. This could mean that an intermediate result is being cached
10000 loops, best of 3: 24.2 µs per loop
In [11]: %timeit np.multiply.outer(x, y).flatten()
The slowest run took 5.59 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 10.5 µs per loop
Well I haven't much experience with numpy, but a quick search gave me this:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.outer.html
>>> np.multiply.outer([1, 2, 3], [4, 5, 6])
array([[ 4, 5, 6],
[ 8, 10, 12],
[12, 15, 18]])
You can then flatten that array to get the same output as you requested:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flatten.html
EDIT: @Divakar's answer showed us that ravel will do the same thing as flatten, except faster o.O So use that instead.
So in your case, it'd look like this:
>>> np.multiply.outer(x, y).ravel()
BONUS: You can go multi-dimensional with this!
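For instance (my own illustration), two 2-D inputs give a 4-D result whose shape is the concatenation of the two input shapes:
import numpy as np
x2 = np.arange(4).reshape(2, 2)
y2 = np.arange(6).reshape(2, 3)
np.multiply.outer(x2, y2).shape
# (2, 2, 2, 3)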
Two more approaches could be suggested here.
Using matrix-multiplication with np.dot:
np.dot(x[:,None],y[None]).ravel()
With np.einsum:
np.einsum('i,j->ij',x,y).ravel()
Runtime tests
In [31]: N = 10000
...: x = np.random.rand(N)
...: y = np.random.rand(N)
...:
In [32]: %timeit np.dot(x[:,None],y[None]).ravel()
1 loops, best of 3: 302 ms per loop
In [33]: %timeit np.einsum('i,j->ij',x,y).ravel()
1 loops, best of 3: 274 ms per loop
Same as @BilalAkil's answer but with ravel() instead of flatten() as a faster alternative -
In [34]: %timeit np.multiply.outer(x, y).ravel()
1 loops, best of 3: 211 ms per loop
@BilalAkil's answer:
In [35]: %timeit np.multiply.outer(x, y).flatten()
1 loops, best of 3: 451 ms per loop
@Tim Leathart's answer:
In [36]: %timeit np.array([y * a for a in x]).flatten()
1 loops, best of 3: 766 ms per loop
Here's a way to do it:
import numpy as np
x = np.array([1, 2, 3, 4])
y = np.array([5, 6, 7, 8])
z = np.array([y * a for a in x]).flatten()
I know I'm super late to the party here, but I thought I'd throw my hat into the ring for anyone reading this question in the future. Using the same metric as @Divakar, I added what I consider to be a much more intuitive solution to the list (the first code snippet measured):
import numpy as np
N = 10000
x = np.random.rand(N)
y = np.random.rand(N)
%timeit np.ravel(x[:,None] * y[None])
635 ms ± 19.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit np.outer(x, y).ravel()
640 ms ± 16 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit np.dot(x[:,None],y[None]).ravel()
853 ms ± 57.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit np.einsum('i,j->ij',x,y).ravel()
754 ms ± 19.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Based on the similarity in execution time, it seems likely that numpy.outer functions exactly the same way as my solution internally, although you should take observations like this with a hefty grain of salt.
The reason why I find it more intuitive is that, unlike all the other solutions, its syntax isn't strictly limited to multiplication. For example, np.ravel(x[:,None] / y[None]) will give you a / b for every a in x and b in y, as in the small illustration below.
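A small illustration of that (my addition):
x_small = np.array([1., 2.])
y_small = np.array([4., 8.])
np.ravel(x_small[:, None] / y_small[None])
# array([0.25 , 0.125, 0.5  , 0.25 ])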