Custom multiplication and numpy - python

Suppose I have my own multiplication between two Python objects a and b, let's call it my_multiplication(a, b).
How can I perform a matrix multiplication using numpy where my_multiplication is performed instead of the usual *? Is that even possible?
Addendum: Would I still benefit from numpy's speed then?

You can use np.vectorize on your function so that your custom multiplication works with all the usual numpy features such as broadcasting. Note, though, that np.vectorize is essentially a Python-level for loop under the hood, so it gives you numpy's convenience but not its compiled speed (which mostly answers the addendum: no).
import numpy as np

def my_multiplication(a, b):
    # your code that multiplies two numbers
    return c

v_my_multiplication = np.vectorize(my_multiplication)

# Now works on arrays instead of just two numbers, with numpy broadcasting;
# e.g. a (3,) first argument against a (2,1) second argument gives a (2,3) result:
v_my_multiplication([1, 2, 3], [[1], [6]])
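If what you ultimately want is a matrix product in which my_multiplication replaces the scalar * (with ordinary addition still doing the summing), one possible sketch is to broadcast the vectorized function over all (i, k, j) combinations and sum over the shared axis. my_matmul and the stand-in body of my_multiplication below are illustrative, not part of numpy:

import numpy as np

def my_multiplication(a, b):
    return a * b  # stand-in; substitute your own scalar operation

v_my_multiplication = np.vectorize(my_multiplication)

def my_matmul(A, B):
    # C[i, j] = sum over k of my_multiplication(A[i, k], B[k, j])
    return v_my_multiplication(A[:, :, None], B[None, :, :]).sum(axis=1)

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
print(np.allclose(my_matmul(A, B), A @ B))  # True for the stand-in above

Because np.vectorize loops in Python and this materializes an (m, k, n) intermediate, it will not approach the speed of a real matrix multiply.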

Try numpy.dot or x.dot(y); see the documentation here.
Example:
import numpy as np
x = np.arange(12).reshape((3,4))
y = np.arange(4)
print(x,"\n\n",y,"\n")
print (np.dot(x,y))
[[ 0  1  2  3]
 [ 4  5  6  7]
 [ 8  9 10 11]]

[0 1 2 3]

[14 38 62]
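On Python 3.5+ the same product can also be written with the @ operator, which for the 2-D by 1-D case above matches np.dot:

print(x @ y)
# [14 38 62]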

Related

jax segment_sum along array dimension

I am fairly new to jax and have the following problem:
I need to compute functions (sum/min/max, maybe more complex things later) across an array, grouped by an index. To solve this problem I found the jax.ops.segment_sum function. This works great for one array, but how can I generalize this approach to a batch of arrays? E.g.:
import jax
import jax.numpy as jnp
import numpy as np

indexes = jnp.array([[1, 0, 1], [0, 0, 1]])
batch_of_matrixes = jnp.array([
    np.arange(9).reshape((3, 3)),
    np.arange(9).reshape((3, 3)),
])

# The following works for one array but not multiple
jax.ops.segment_sum(
    data=batch_of_matrixes[0],
    segment_ids=indexes[0],
    num_segments=2)
# How can I get this to work with the full dataset along the 0 dimension?
# Intended Outcome:
[[[ 3  4  5]
  [ 6  8 10]]
 [[ 3  5  7]
  [ 6  7  8]]]
If there is a more general way to do this than the ops.segment_* family, please also let me know. Thanks in advance for help and suggestions!
JAX's vmap transformation is designed for exactly this kind of situation. In your case, you can use it like this:
@jax.vmap
def f(data, index):
    return jax.ops.segment_sum(data, index, num_segments=2)

print(f(batch_of_matrixes, indexes))
# [[[ 3  4  5]
#   [ 6  8 10]]
#  [[ 3  5  7]
#   [ 6  7  8]]]
For some more discussion of this, see JAX 101: Automatic Vectorization.
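The same vmap pattern extends to the other reductions in the segment_* family. A minimal sketch, assuming jax.ops.segment_max is available in your JAX version (it shares segment_sum's signature):

@jax.vmap
def batched_segment_max(data, index):
    # same batching trick, with a per-segment maximum instead of a sum
    return jax.ops.segment_max(data, index, num_segments=2)

print(batched_segment_max(batch_of_matrixes, indexes))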

Numpy.outer vs. broadcasting; are these equivalent and calculated the same way with similar expected performance?

This answer to performing outer addition with numpy discusses numpy ufuncs' outer method as well as numpy broadcasting; the examples there are summarized below.
In the case of addition, subtraction, multiplication and division, is the calculation the same under the hood, or will there be differences in performance, especially when the array size gets large?
Minimal, 2D example from the linked answer:
import numpy as np
a, b = np.arange(3), np.arange(5)
print(np.add.outer(a, b))
print(a[:, None] + b) # or a[:, np.newaxis] + b
both result in:
[[0 1 2 3 4]
 [1 2 3 4 5]
 [2 3 4 5 6]]
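One way to probe the performance part of the question is a quick micro-benchmark. A minimal sketch using timeit; the array size and timings are illustrative and machine-dependent:

import numpy as np
import timeit

a, b = np.arange(3000), np.arange(3000)

t_outer = timeit.timeit(lambda: np.add.outer(a, b), number=20)
t_bcast = timeit.timeit(lambda: a[:, None] + b, number=20)
print(f"np.add.outer: {t_outer:.3f}s   broadcasting: {t_bcast:.3f}s")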

Assign values from small matrix to specified places in larger matrix

I would like to know if there exists a similar way of doing this (shown below in Mathematica) in Python:
[Mathematica screenshot]
I have tried it in Python and it does not work. I have also tried it with numpy.put() and with two simple for loops. These two approaches work properly, but I find them very time-consuming with larger matrices (3000×3000 elements, for example).
The problem described in Python:
import numpy as np
a = np.arange(0, 25, 1).reshape(5, 5)
b = np.arange(100, 500, 100).reshape(2, 2)
p = np.array([0, 3])
a[p][:, p] = b
which leaves matrix a unchanged.
Perhaps you are looking for this:
a[p[...,None], p] = b
Array a after the above assignment looks like this:
[[100   1   2 200   4]
 [  5   6   7   8   9]
 [ 10  11  12  13  14]
 [300  16  17 400  19]
 [ 20  21  22  23  24]]
As documented in Integer Array Indexing, the two integer index arrays are broadcast together and iterated together, effectively indexing the locations a[0,0], a[0,3], a[3,0], and a[3,3]. The assignment then writes the corresponding element values from the right-hand side into those locations of a.
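An equivalent, arguably more readable, spelling uses np.ix_, which builds the same pair of broadcastable index arrays for you:

import numpy as np

a = np.arange(25).reshape(5, 5)
b = np.arange(100, 500, 100).reshape(2, 2)
p = np.array([0, 3])

a[np.ix_(p, p)] = b  # same effect as a[p[..., None], p] = b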

Do numpy 1D arrays follow row/column rules?

I have just started using numpy and I am getting confused about how to use arrays. I have seen several Stack Overflow answers on numpy arrays, but they all deal with how to get the desired result (I know how to do this; I just don't know why I need to do it this way). The consensus I've seen is that arrays are better than matrices because they are a more basic class and less restrictive. I understand you can transpose an array, which to me implies a distinction between a row and a column, but the multiplication rules all produce outputs different from what I expect.
Here is the test code I have written along with the outputs:
import numpy
a = numpy.array([1,2,3,4])
print(a)
>>> [1 2 3 4]
print(a.T) # Transpose
>>> [1 2 3 4] # No apparent effect
b = numpy.array([[1], [2], [3], [4]])
print(b)
>>> [[1]
     [2]
     [3]
     [4]] # Column (expected)
print(b.T)
>>> [[1 2 3 4]] # Row (expected; transpose seems to work here)
print((b.T).T)
>>> [[1]
     [2]
     [3]
     [4]] # Column (all of these are as expected,
          # unlike for declaring the array as a row vector)

# The following are element-wise multiplications of a
print(a*a)
>>> [ 1  4  9 16]
print(a * a.T) # Row*Column
>>> [ 1  4  9 16] # Inner product scalar result expected
print(a.T * a) # Column*Row
>>> [ 1  4  9 16] # Outer product matrix result expected
print(b*b)
>>> [[ 1]
     [ 4]
     [ 9]
     [16]] # Expected result: element-wise multiplication in a column
print(b * b.T) # Column * Row (outer product)
>>> [[ 1  2  3  4]
     [ 2  4  6  8]
     [ 3  6  9 12]
     [ 4  8 12 16]] # Expected matrix result
print(b.T * (b.T)) # Row * Row (doesn't make much sense, so I expected element-wise multiplication)
>>> [[ 1  4  9 16]]
print(b.T * (b.T).T) # Row * Column, inner product expected
>>> [[ 1  2  3  4]
     [ 2  4  6  8]
     [ 3  6  9 12]
     [ 4  8 12 16]] # Outer product result
I know that I can use numpy.inner() and numpy.outer() to achieve the effect (that is not a problem); I just want to know if I need to keep track of whether my vectors are rows or columns.
I also know that I can represent my vectors as 2D matrices, and then the multiplication works as expected. I'm trying to work out the best way to store my data so that when I look at my code it is clear what is going to happen; right now the maths just looks confusing and wrong.
I only need to use 1D and 2D tensors for my application.
I'll try annotating your code
a = numpy.array([1,2,3,4])
print(a)
>>> [1 2 3 4]
print(a.T) # Transpose
>>> [1 2 3 4] # No apparent effect
a.shape will show (4,). a.T.shape is the same: it kept the same number of dimensions and performed the only meaningful transpose, which is no change at all. Making it (4,1) would have added a dimension, and destroyed the a.T.T round trip.
b = numpy.array([[1], [2], [3], [4]])
print(b)
>>> [[1]
     [2]
     [3]
     [4]] # Column (expected)
print(b.T)
>>> [[1 2 3 4]] # Row (expected; transpose seems to work here)
b.shape is (4,1), b.T.shape is (1,4). Note the extra set of []. If you'd created a as a = numpy.array([[1,2,3,4]]) its shape too would have been (1,4).
The easy way to make b would be b = np.array([[1,2,3,4]]).T (or b = np.array([1,2,3,4])[:,None], or b = np.array([1,2,3,4]).reshape(-1,1)).
Compare this to MATLAB
octave:3> a = [1,2,3,4]
a =
   1   2   3   4
octave:4> size(a)
ans =
   1   4
octave:5> size(a.')
ans =
   4   1
Even without the extra [] it has initialized the matrix as 2d.
numpy has a matrix class that imitates MATLAB, dating from the time when MATLAB allowed only 2d arrays.
In [75]: m=np.matrix('1 2 3 4')
In [76]: m
Out[76]: matrix([[1, 2, 3, 4]])
In [77]: m.shape
Out[77]: (1, 4)
In [78]: m=np.matrix('1 2; 3 4')
In [79]: m
Out[79]:
matrix([[1, 2],
        [3, 4]])
I don't recommend using np.matrix unless it really adds something useful to your code.
Note that MATLAB talks of vectors, but they are really just matrices with only one non-unit dimension.
# The following are element-wise multiplications of a
print(a*a)
>>> [ 1  4  9 16]
print(a * a.T) # Row*Column
>>> [ 1  4  9 16] # Inner product scalar result expected
This behavior follows from a.T == a. As you noted, * produces element-by-element multiplication, equivalent to MATLAB's .* operator. np.dot(a,a) gives the dot (matrix) product of two arrays.
print(a.T * a) # Column*Row
>>> [ 1  4  9 16] # Outer product matrix result expected
No, it is still doing elementwise multiplication.
I'd use broadcasting, a[:,None]*a[None,:] to get the outer product. Octave added this in imitation of numpy; I don't know if MATLAB has it yet.
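For example, a quick check that broadcasting reproduces np.outer:

import numpy as np

a = np.array([1, 2, 3, 4])
print(np.array_equal(a[:, None] * a[None, :], np.outer(a, a)))  # True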
In the following * is always element by element multiplication. It's broadcasting that produces matrix/outer product results.
print(b*b)
>>> [[ 1]
     [ 4]
     [ 9]
     [16]] # Expected result: element-wise multiplication in a column
A (4,1) * (4,1) => (4,1). Same shapes all around.
print(b * b.T) # Column * Row (outer product)
>>> [[ 1  2  3  4]
     [ 2  4  6  8]
     [ 3  6  9 12]
     [ 4  8 12 16]] # Expected matrix result
Here (4,1)*(1,4) => (4,4). The two size-1 dimensions are replicated, so it becomes, effectively, a (4,4)*(4,4) element-wise product. How would you replicate this in MATLAB with .*?
print(b.T * (b.T)) # Row * Row (doesn't make much sense, so I expected element-wise multiplication)
>>> [[ 1  4  9 16]]
* is element-wise regardless of expectations. Think b' .* b' in MATLAB.
print(b.T * (b.T).T) # Row * Column, inner product expected
>>> [[ 1  2  3  4]
     [ 2  4  6  8]
     [ 3  6  9 12]
     [ 4  8 12 16]] # Outer product result
Again, * is element-wise; an inner product requires a summation in addition to multiplication. Here broadcasting again applies: (1,4)*(4,1) => (4,4).
np.dot(a,a), np.trace(b.T*b), and np.sum(b*b) all give 30 (np.dot(b,b) would raise an error, since a (4,1) is not aligned with a (4,1) for a matrix product).
When I worked in MATLAB I frequently checked the size, and created test matrices that would catch dimension mismatches (e.g. a 2x3 instead of a 2x2 matrix). I continue to do that in numpy.
The key things are:
numpy arrays may be 1d (or even 0d).
A (4,) array is not exactly the same as a (4,1) or a (1,4) one.
* is element-wise, always.
Broadcasting usually accounts for outer-like behavior.
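A short sketch pulling these points together, checking only shapes:

import numpy as np

a = np.array([1, 2, 3, 4])  # shape (4,): 1d, neither row nor column
b = a[:, None]              # shape (4, 1): an explicit column

print(a.T.shape)        # (4,)   - transposing a 1d array changes nothing
print((a * a).shape)    # (4,)   - * is element-wise, always
print((b * b.T).shape)  # (4, 4) - broadcasting produces the outer-like result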
"Transposing" is, from a numpy perspective, really only a meaningful concept for two-dimensional structures:
>>> import numpy
>>> arr = numpy.array([1,2,3,4])
>>> arr.shape
(4,)
>>> arr.transpose().shape
(4,)
So, if you want to transpose something, you'll have to make it two-dimensional:
>>> arr_2d = arr.reshape((4,1)) ## four rows, one column -> two-dimensional
>>> arr_2d.shape
(4, 1)
>>> arr_2d.transpose().shape
(1, 4)
Also, numpy.array(iterable, **kwargs) has a keyword argument ndmin which, when set to ndmin=2, prepends as many 1s to your shape as necessary:
>>> arr_ndmin = numpy.array([1,2,3,4],ndmin=2)
>>> arr_ndmin.shape
(1, 4)
Yes, they do.
Your question is already answered above. Though I assume you are a MATLAB user? If so, you may find this guide useful: Moving from MATLAB matrices to NumPy arrays

What's the difference between numpy.ndarray.T and numpy.ndarray.transpose() when self.ndim < 2?

The documentation for numpy.ndarray.T says:
ndarray.T — Same as self.transpose(), except that self is returned if self.ndim < 2.
Also, the documentation for ndarray.transpose(*axes) says:
For a 1-D array, this has no effect.
Doesn't this mean the same thing?
Here's a little demo snippet:
>>> import numpy as np
>>> print np.__version__
1.5.1rc1
>>> a = np.arange(7)
>>> print a, a.T, a.transpose()
[0 1 2 3 4 5 6] [0 1 2 3 4 5 6] [0 1 2 3 4 5 6]
Regardless of rank, the .T attribute and the .transpose() method are the same: both return the transpose of the array.
In the case of a rank-1 array, .T and .transpose() don't do anything; both simply return the array.
It looks like .T is just convenient notation, and that .transpose(*axes) is the more general method, intended to give more flexibility, since the axes can be specified. They are apparently not implemented in Python, so one would have to look into the C code to check this.
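To illustrate the extra flexibility .transpose(*axes) offers once there are more than two dimensions:

import numpy as np

x = np.arange(24).reshape(2, 3, 4)
print(x.T.shape)                   # (4, 3, 2): .T reverses all axes
print(x.transpose(2, 0, 1).shape)  # (4, 2, 3): axes in an explicit order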
