operating on two numpy arrays of different shapes - python

Suppose I have 2 numpy arrays as follows:
init = 100
a = np.append(init, np.zeros(5))
b = np.random.randn(5)
So a is of shape (6,) and b is of shape (5,). I would like to add these together (or perform some other operation, e.g. exponentiation) to obtain a new numpy array of shape (6,) whose first value is a's first value (100) unchanged, with the remaining values added elementwise (in this case the result will just look like 100 prepended to b, but that is because this is a toy example initialized with zeroes). Attempting to add them as-is will produce:
a+b
ValueError: operands could not be broadcast together with shapes (6,) (5,)
Is there a one-liner way to use broadcasting or newaxis here to trick numpy into treating them as compatible shapes?
The desired output:
array([ 100. , 1.93947328, 0.12075821, 1.65319123,
-0.29222052, -1.04465838])

You mean you want to do something like this?
np.append(a[0:1], a[1:] + b)
What do you want your desired output to be? The answer I've provided performs this broadcast add, excluding the first element of a.

Not a one-liner but two short lines:
c = a.copy()
c[1:] += b
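If a true one-liner is wanted, the same idea can be written with np.concatenate. A small sketch reusing the a and b from the question (np.random.seed(0) is added here only so the example is reproducible):

```python
import numpy as np

np.random.seed(0)                   # added for reproducibility
init = 100
a = np.append(init, np.zeros(5))    # shape (6,)
b = np.random.randn(5)              # shape (5,)

# keep a[0] untouched, add b to the remaining elements, all in one expression
c = np.concatenate(([a[0]], a[1:] + b))

print(c.shape)   # (6,)
print(c[0])      # 100.0
```

Since a's tail is all zeroes here, c[1:] simply equals b, matching the toy example in the question.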

Related

Why can these arrays not be subtracted from each other? [duplicate]

I'm having some trouble understanding the rules for array broadcasting in Numpy.
Obviously, if you perform element-wise multiplication on two arrays of the same dimensions and shape, everything is fine. Also, if you multiply a multi-dimensional array by a scalar it works. This I understand.
But if you have two N-dimensional arrays of different shapes, it's unclear to me exactly what the broadcasting rules are. This documentation/tutorial explains that: In order to broadcast, the size of the trailing axes for both arrays in an operation must either be the same size or one of them must be one.
Okay, so I assume by trailing axis they are referring to the N in an M x N array. So, that means if I attempt to multiply two 2D arrays (matrices) with an equal number of columns, it should work? Except it doesn't...
>>> from numpy import *
>>> A = array([[1,2],[3,4]])
>>> B = array([[2,3],[4,6],[6,9],[8,12]])
>>> print(A)
[[1 2]
[3 4]]
>>> print(B)
[[ 2 3]
[ 4 6]
[ 6 9]
[ 8 12]]
>>>
>>> A * B
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: shape mismatch: objects cannot be broadcast to a single shape
Since both A and B have two columns, I would have thought this would work. So, I'm probably misunderstanding something here about the term "trailing axis", and how it applies to N-dimensional arrays.
Can someone explain why my example doesn't work, and what is meant by "trailing axis"?
Well, the meaning of trailing axes is explained on the linked documentation page.
If you have two arrays with different numbers of dimensions, say one 1x2x3 and the other 2x3, then you compare only the trailing common dimensions, in this case 2x3. But if both your arrays are two-dimensional, then their corresponding sizes have to be either equal or one of them has to be 1. Dimensions along which an array has size 1 are called singular, and the array can be broadcast along them.
In your case you have a 2x2 and a 4x2, and 4 != 2 and neither 4 nor 2 equals 1, so this doesn't work.
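To make this concrete, here is a minimal sketch: the 2x2 and 4x2 shapes cannot broadcast, but a (2,)-shaped array broadcasts against either of them, because only the trailing axis has to line up:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])                    # shape (2, 2)
B = np.array([[2, 3], [4, 6], [6, 9], [8, 12]])   # shape (4, 2)

try:
    A * B          # (2, 2) vs (4, 2): leading axes 2 != 4 and neither is 1
except ValueError as e:
    print(e)

v = np.array([10, 100])   # shape (2,)
print(A * v)              # trailing axis matches: v is applied to each row
print((B * v).shape)      # (4, 2)
```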
From http://cs231n.github.io/python-numpy-tutorial/#numpy-broadcasting:
Broadcasting two arrays together follows these rules:
If the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.
The two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.
The arrays can be broadcast together if they are compatible in all dimensions.
After broadcasting, each array behaves as if it had shape equal to the elementwise maximum of shapes of the two input arrays.
In any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension.
If this explanation does not make sense, try reading the explanation from the documentation or this explanation.
We should consider two points about broadcasting: first, what is possible in principle; second, how much of that numpy actually does.
I know it might look a bit confusing, but I will make it clear with some examples.
Let's start from the zero level.
Suppose we have two arrays: the first (named A) has three dimensions and the second (named B) has five. numpy tries to match the last/trailing dimensions, so it does not care about the first two dimensions of B. It then compares those trailing dimensions with each other, and if and only if they are equal or one of them is 1, numpy says "O.K., you two match". If these conditions aren't satisfied, numpy says "sorry... it's not my job!".
You might argue that the comparison would be better if it could also handle dimensions that are divisible (4 and 2, or 9 and 3), replicating one array by a whole number (2 or 3 in those examples), and I agree that is conceivable. This is why I started my discussion by distinguishing what is possible in principle from what numpy is actually capable of.
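A short sketch of both points: trailing dimensions of arrays with different numbers of dimensions are matched from the right, and divisible (but unequal, non-1) sizes are not broadcast:

```python
import numpy as np

A = np.arange(6).reshape(1, 2, 3)   # shape (1, 2, 3)
B = np.ones((2, 3))                 # shape (2, 3)

# shapes are aligned from the right: (1, 2, 3) vs (-, 2, 3)
# the trailing dims 2 and 3 match, so this broadcasts
print((A + B).shape)   # (1, 2, 3)

# 4 and 2 are divisible, but numpy does NOT replicate to make them fit
try:
    np.ones(4) + np.ones(2)
except ValueError as e:
    print(e)
```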

How can a python function handle both numpy matrix and scalar?

There is a simple function, which intends to accept a scalar parameter, but also works for a numpy matrix. Why does the function fun work for a matrix?
>>> import numpy as np
>>> def fun(a):
...     return 1.0 / a
>>> b = 2
>>> c = np.mat([1,2,3])
>>> c
matrix([[1, 2, 3]])
>>> fun(b)
0.5
>>> fun(c)
matrix([[ 1. , 0.5 , 0.33333333]])
>>> v_fun = np.vectorize(fun)
>>> v_fun(b)
array(0.5)
>>> v_fun(c)
matrix([[ 1. , 0.5 , 0.33333333]])
It seems like fun is vectorized somehow, because the explicitly vectorized function v_fun behaves the same on matrix c. But they give different outputs on scalar b. Could anybody explain it? Thanks.
What happens in the case of fun is called broadcasting.
General Broadcasting Rules
When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when
they are equal, or
one of them is 1
If these conditions are not met, a ValueError: frames are not aligned exception is thrown, indicating that the arrays have incompatible shapes. The size of the resulting array is the maximum size along each dimension of the input arrays.
fun already works for both scalars and arrays, because elementwise division is defined for both (via their own methods). fun(b) does not involve numpy at all; that's just a Python operation.
np.vectorize is meant to take a function that only works with scalars and feed it elements from an array. In your example it first converts b into an array, np.array(b). For both c and this modified b, the result is an array of matching size. c is a 2d np.matrix, and the result is the same. Notice that v_fun(b) is of type array, not matrix.
This is not a good example of using np.vectorize, nor an example of broadcasting. np.vectorize is a rather 'simple minded' function and doesn't handle scalars in a special way.
1/c or even b/c works because c, being an array, 'knows' about division. Similarly, array multiplication and addition are defined: 1+c or 2*c.
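That point can be checked directly. A minimal sketch, using a plain ndarray rather than np.matrix: ordinary arithmetic on an array already applies elementwise, so no vectorization wrapper is needed:

```python
import numpy as np

def fun(a):
    return 1.0 / a

c = np.array([1.0, 2.0, 4.0])

print(fun(2))    # plain Python division, numpy not involved: 0.5
print(fun(c))    # elementwise division defined by the array: [1.   0.5  0.25]
print(1 + c)     # scalar broadcasts over the array: [2. 3. 5.]
print(2 * c)     # [2. 4. 8.]
```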
I'm tempted to mark this as a duplicate of
Python function that handles scalar or arrays

Partial dimensions in python

When I declare multidimensional arrays in python and print its shape using numpy as:
B=[[2,3,4]]
print(np.shape(B))
it gives the following output:
(1,3)
This is understandable as the inner bracket would represent the second dimension which has 3 components.
But when I run the following code:
B=[2,3,4]
print(np.shape(B))
It prints:
(3,)
How do I explain these partial dimensions to myself?
It means the second dimension exists but the number of elements in it is unknown. How does one infer from the array [2,3,4] that a second dimension exists? Shouldn't the shape just be (3)?
It's a problem of syntax. The shape is the one-element tuple containing 3, which has to be written (3,), since (3) on its own is interpreted as the integer 3.
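A quick way to see this syntax point in the interpreter:

```python
# parentheses alone do not make a tuple; the trailing comma does
print(type((3)))     # <class 'int'>   -- just a grouped expression
print(type((3,)))    # <class 'tuple'> -- one-element tuple
print((3,) == (3))   # False: a tuple is not equal to the integer 3
```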

About Numpy shape

I'm new to numpy & have a question about it :
according to docs.scipy.org, the "shape" attribute is "the dimensions of the array. For a matrix with n rows and m columns, shape will be (n,m)"
Suppose I am to create a simple array as below:
np.array([[0,2,4],[1,3,5]])
Using "shape", it returns (2,3) (i.e. the array has 2 rows & 3 columns)
However, for an array ([0,2,4]), shape returns (3,) (which means it has 3 rows according to the definition above)
I'm confused: the array ([0,2,4]) should have 3 columns, not 3 rows, so I expected it to return (,3) instead.
Can anyone help to clarify ? Thanks a lot.
This is just notation - in Python, tuples are distinguished from expression grouping (or order-of-operations stuff) by the use of commas - that is, (1,2,3) is a tuple and (2*x + 4) ** 5 contains an expression 2*x + 4. In order to keep single-element tuples distinct from single-element expressions, which would otherwise be ambiguous ((1,) vs (1) - which is the single-element tuple and which a simple expression that evaluates to 1?), we use a trailing comma to denote tuple-ness.
What you're getting is a single dimension response, since there's only one dimension to measure, packed into a tuple type.
Numpy supports not only 2-dimensional arrays but multi-dimensional arrays in general - 1-D, 2-D, 3-D ... n-D - and there is a shape format for each number of dimensions. len(array.shape) gives you the number of dimensions of the array. If the array is 1-D, there is no need to represent it as (m, n), and if the array is 3-D then (m, n) would not be sufficient to represent its dimensions.
So the output of array.shape will not always be in (m, n) format; it depends on the array itself, and you will get different outputs for different numbers of dimensions.
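The point that the shape tuple's length tracks the number of dimensions can be checked directly:

```python
import numpy as np

a1 = np.array([0, 2, 4])                  # 1-D
a2 = np.array([[0, 2, 4], [1, 3, 5]])     # 2-D
a3 = np.zeros((2, 3, 4))                  # 3-D

for arr in (a1, a2, a3):
    print(arr.shape, len(arr.shape))
# (3,) 1
# (2, 3) 2
# (2, 3, 4) 3
```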

Numpy matrix row stacking

I have 4 arrays (all the same length) which I am trying to stack together to create a new array, with each of the 4 arrays being a row.
My first thought was this:
B = -np.array([[x1[i]],[x2[j]],[y1[i]],[y2[j]]])
However the shape of that is (4,1,20).
To get the 2D output I expected I resorted to this:
B = -np.vstack((np.vstack((np.vstack(([x1[i]],[x2[j]])),[y1[i]])),[y2[j]]))
Where the shape is (4,20).
Is there a better way to do this? And why would the first method not work?
Edit
For clarity, the shapes of x1[i], x2[j], y1[i], y2[j] are all (20,).
The problem is with the extra brackets:
B = -np.array([[x1[i]],[x2[j]],[y1[i]],[y2[j]]]) # (4,1,20)
B = -np.array([x1[i],x2[j],y1[i],y2[j]]) # (4,20)
[x1[i]] is (1,20) in shape.
In [26]: np.array([np.ones((20,)),np.zeros((20,))]).shape
Out[26]: (2, 20)
vstack works, but np.array does just as well. It's concatenate that needs the extra brackets
In [27]: np.vstack([np.ones((20,)),np.zeros((20,))]).shape
Out[27]: (2, 20)
In [28]: np.concatenate([np.ones((20,)),np.zeros((20,))]).shape
Out[28]: (40,)
In [29]: np.concatenate([[np.ones((20,))],[np.zeros((20,))]]).shape
Out[29]: (2, 20)
vstack doesn't need the extra dimensions because it first passes the arrays through [atleast_2d(_m) for _m in tup]
np.vstack takes a sequence of arrays and stacks them one on top of the other, as long as they have compatible shapes. So in your case, a tuple of the one-dimensional arrays:
np.vstack((x1[i], x2[j], y1[i], y2[j]))
would do what you want. If this statement is part of a loop building many such 4x20 arrays, however, that may be a different matter.
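Sketching both options side by side with stand-in (20,)-shaped arrays (x1[i], x2[j], y1[i], y2[j] from the question are assumed to have that shape):

```python
import numpy as np

# stand-ins for x1[i], x2[j], y1[i], y2[j], each of shape (20,)
r1, r2, r3, r4 = (np.random.randn(20) for _ in range(4))

B1 = -np.array([r1, r2, r3, r4])    # no extra brackets: shape (4, 20)
B2 = -np.vstack((r1, r2, r3, r4))   # single vstack call: shape (4, 20)

print(B1.shape, B2.shape)
print(np.array_equal(B1, B2))   # True
```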
