Create 2-dimensional range - python

I have a column vector of start values X and a column vector of end values Z, and I want to build a matrix whose rows are linspaces of length n between X and Z. Is there a way to generate that directly, without iterating?
Say n=10, and in this simple example every entry of Z is just 20. Then the following code
import numpy as np

X = np.arange(0, 5, 1)
Y = np.empty((5, 10))
for idx in range(len(X)):
    Y[idx] = np.linspace(X[idx], 20, 10)
generates what I want, but it requires iteration. Is there any more performant solution, or one directly built in without all that do-it-yourself logic?
Here's the expected output for my test case:
Y
array([[ 0. , 2.22222222, 4.44444444, 6.66666667,
8.88888889, 11.11111111, 13.33333333, 15.55555556,
17.77777778, 20. ],
[ 1. , 3.11111111, 5.22222222, 7.33333333,
9.44444444, 11.55555556, 13.66666667, 15.77777778,
17.88888889, 20. ],
[ 2. , 4. , 6. , 8. ,
10. , 12. , 14. , 16. ,
18. , 20. ],
[ 3. , 4.88888889, 6.77777778, 8.66666667,
10.55555556, 12.44444444, 14.33333333, 16.22222222,
18.11111111, 20. ],
[ 4. , 5.77777778, 7.55555556, 9.33333333,
11.11111111, 12.88888889, 14.66666667, 16.44444444,
18.22222222, 20. ]])

That's what np.meshgrid is for. Edit: Nevermind, that's not what you wanted.
Here's what you want:
>>> X = np.arange(0, 5, 1)[:, None]
>>> Y = np.linspace(0, 1, 10)[None, :]
>>> X+Y*(20-X)
array([[ 0. , 2.22222222, 4.44444444, 6.66666667,
8.88888889, 11.11111111, 13.33333333, 15.55555556,
17.77777778, 20. ],
[ 1. , 3.11111111, 5.22222222, 7.33333333,
9.44444444, 11.55555556, 13.66666667, 15.77777778,
17.88888889, 20. ],
[ 2. , 4. , 6. , 8. ,
10. , 12. , 14. , 16. ,
18. , 20. ],
[ 3. , 4.88888889, 6.77777778, 8.66666667,
10.55555556, 12.44444444, 14.33333333, 16.22222222,
18.11111111, 20. ],
[ 4. , 5.77777778, 7.55555556, 9.33333333,
11.11111111, 12.88888889, 14.66666667, 16.44444444,
18.22222222, 20. ]])
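
If your NumPy is recent enough (1.16 or later, where np.linspace accepts array-like start/stop and an axis argument), you can likely skip the manual broadcasting altogether; a minimal sketch under that assumption:
import numpy as np

# Assumes NumPy >= 1.16: np.linspace broadcasts an array start against a
# scalar stop, and axis=1 puts the samples along the columns.
X = np.arange(0, 5, 1)
Y = np.linspace(X, 20, 10, axis=1)  # shape (5, 10), one linspace per row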

A list comprehension is at least faster, and sometimes easier to understand, than an explicit loop (also, on Python 2 you should almost always use xrange instead of range; on Python 3, range is already lazy):
matrix = np.array([np.linspace(x, 20, 10) for x in X])


matplotlib in python - how to extract data from contour lines

I want to get data from contours at different iso-line values. The question "matplotlib - extracting data from contour lines" gives an example:
import matplotlib.pyplot as plt
x = [1,2,3,4]
y = [1,2,3,4]
m = [[15,14,13,12],[14,12,10,8],[13,10,7,4],[12,8,4,0]]
cs = plt.contour(x,y,m, [9.5])
plt.show()
coord = cs.collections[0].get_paths()
That gets the coordinates of the line with value equal to 9.5.
Now I need to get the coordinates of multiple iso-lines from one contour, so I need to change the value to select different lines. But if I use a loop, Python has to construct the contour on every iteration. How can I construct the contour once and then pick out the different lines?
You can plot several contours at once with plt.contour by giving a list of the values you wish to contour. Then, you can access them all from the returned ContourSet using cs.allsegs or by using get_paths on each item in the cs.collections list.
For example:
import matplotlib.pyplot as plt
x = [1,2,3,4]
y = [1,2,3,4]
m = [[15,14,13,12],[14,12,10,8],[13,10,7,4],[12,8,4,0]]
cs = plt.contour(x,y,m, [9.5, 10.5, 11.5])
plt.show()
# Option 1: use allsegs
all_coords = cs.allsegs
print(all_coords)
# Option 2: use cs.collections[X].get_paths()
coords1 = cs.collections[0].get_paths()
coords2 = cs.collections[1].get_paths()
coords3 = cs.collections[2].get_paths()
print(coords1)
print(coords2)
print(coords3)
Where the printed coords are then:
Option 1 (allsegs):
[[array([[4. , 1.625 ],
[3.25 , 2. ],
[3. , 2.16666667],
[2.16666667, 3. ],
[2. , 3.25 ],
[1.625 , 4. ]])],
[array([[4. , 1.375 ],
[3. , 1.83333333],
[2.75 , 2. ],
[2. , 2.75 ],
[1.83333333, 3. ],
[1.375 , 4. ]])],
[array([[4. , 1.125],
[3. , 1.5 ],
[2.25 , 2. ],
[2. , 2.25 ],
[1.5 , 3. ],
[1.125, 4. ]])]]
Option 2 (get_paths()):
[Path(array([[4. , 1.625 ],
[3.25 , 2. ],
[3. , 2.16666667],
[2.16666667, 3. ],
[2. , 3.25 ],
[1.625 , 4. ]]), array([1, 2, 2, 2, 2, 2], dtype=uint8))]
[Path(array([[4. , 1.375 ],
[3. , 1.83333333],
[2.75 , 2. ],
[2. , 2.75 ],
[1.83333333, 3. ],
[1.375 , 4. ]]), array([1, 2, 2, 2, 2, 2], dtype=uint8))]
[Path(array([[4. , 1.125],
[3. , 1.5 ],
[2.25 , 2. ],
[2. , 2.25 ],
[1.5 , 3. ],
[1.125, 4. ]]), array([1, 2, 2, 2, 2, 2], dtype=uint8))]
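
If you need to know which level each set of coordinates belongs to, cs.levels and cs.allsegs line up one-to-one, so you can pair them directly; a small sketch, not part of the original answer:
# cs is the ContourSet returned above; each seg is an (N, 2) array of vertices.
for level, segs in zip(cs.levels, cs.allsegs):
    for seg in segs:
        print(level, seg[:, 0], seg[:, 1])  # level value, x coords, y coords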

Iterate over rows, and perform addition

So, here I have a numpy array, array([[-1.228, 0.709, 0. ], [ 0. , 2.836, 0. ], [ 1.228, 0.709, 0. ]]). My plan is to add a vector (say [1, 2, 3]) to all the rows of this array, and then append the result onto the end of it, i.e. append another three rows. I want to repeat the same process, say 5 times, so that the vector is only added to the last three rows, which were the result of the previous addition. Any suggestions?
Just use np.append along the first axis:
import numpy as np
a = np.array([[-1.228, 0.709, 0. ], [ 0. , 2.836, 0. ], [ 1.228, 0.709, 0. ]])
v = np.array([1, 2, 3])
new_a = np.append(a, a+v, axis=0)
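To repeat that step the five times the question asks for, one possibility (a sketch built on the same np.append call, not part of the original answer) is:
import numpy as np

a = np.array([[-1.228, 0.709, 0.], [0., 2.836, 0.], [1.228, 0.709, 0.]])
v = np.array([1, 2, 3])

# Each round adds the vector to the last three rows and appends the result.
for _ in range(5):
    a = np.append(a, a[-3:] + v, axis=0)
# a now has 3 + 5*3 = 18 rows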
For the addition part, just write something like a[0] + [1, 2, 3] (where a is your array); numpy will perform the addition element-wise as expected.
For appending, a = np.append(a, [line], axis=0) is what you're looking for, where line is the new row you want to add, for example the result of the previous sum.
The iteration can easily be repeated by selecting the last three rows with negative indexing: a[-1], a[-2] and a[-3] (or simply a[-3:]) are guaranteed to pick the last three rows.
If you really need to keep results within a single array, a better option is to create it at the beginning and perform operations you need on it.
arr = np.array([[-1.228, 0.709, 0. ], [ 0. , 2.836, 0. ], [ 1.228, 0.709, 0. ]])
vector = np.array([1,2,3])
N = 5
multiarr = np.tile(arr, (1,N))
>>> multiarr
array([[-1.228, 0.709, 0. , -1.228, 0.709, 0. , -1.228, 0.709, 0. , -1.228, 0.709, 0. , -1.228, 0.709, 0. ],
[ 0. , 2.836, 0. , 0. , 2.836, 0. , 0. , 2.836, 0. , 0. , 2.836, 0. , 0. , 2.836, 0. ],
[ 1.228, 0.709, 0. , 1.228, 0.709, 0. , 1.228, 0.709, 0. , 1.228, 0.709, 0. , 1.228, 0.709, 0. ]])
multivector = (vector * np.arange(N)[:, None]).ravel()
>>> multivector
array([ 0, 0, 0, 1, 2, 3, 2, 4, 6, 3, 6, 9, 4, 8, 12])
>>> multiarr + multivector
array([[-1.228, 0.709, 0. , -0.228, 2.709, 3. , 0.772, 4.709, 6. , 1.772, 6.709, 9. , 2.772, 8.709, 12. ],
[ 0. , 2.836, 0. , 1. , 4.836, 3. , 2. , 6.836, 6. , 3. , 8.836, 9. , 4. , 10.836, 12. ],
[ 1.228, 0.709, 0. , 2.228, 2.709, 3. , 3.228, 4.709, 6. , 4.228, 6.709, 9. , 5.228, 8.709, 12. ]])
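
If you would rather have the blocks stacked vertically, as the question's append loop produces, the same broadcasting idea works along a new leading axis; a sketch, not part of the original answer:
import numpy as np

arr = np.array([[-1.228, 0.709, 0.], [0., 2.836, 0.], [1.228, 0.709, 0.]])
vector = np.array([1, 2, 3])
N = 6  # original block plus 5 additions

# Block k is arr + k*vector; reshape the (N, 3, 3) stack into (N*3, 3) rows.
blocks = arr[None, :, :] + np.arange(N)[:, None, None] * vector
stacked = blocks.reshape(-1, arr.shape[1])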

How to multiply individual elements of numpy array of row ith with element of another numpy array of row ith?

The inventory example: I want to multiply a numpy array of shape [280, 2] (the cost of each of 280 items in USD and EUR) with a numpy array of shape [280, 3] (the stock of each item in 3 warehouses, one per column).
I have no problem doing the calculation with for loops, but I am trying to learn broadcasting and reshaping techniques, so I would appreciate a pointer in the right direction (or to the right methods).
Edit: Example
Array A
[[1.50 1.80]
[3 8 ]]
Array B
[[5 10 20]
[10 20 30]]
The result I require is
[[7.5 9 11.5 18 30 36]
 [30 80 60 160 90 240]]
Thanks
The description was a bit fuzzy, as was the example:
In [264]: A=np.array([[1.5,1.8],[3,8]]); B=np.array([[5,10,20],[10,20,30]])
In [265]: A.shape
Out[265]: (2, 2)
In [266]: B.shape
Out[266]: (2, 3)
Looks like you are trying to do a version of outer product, which can be done with broadcasting.
Let's try one combination:
In [267]: A[:,:,None]*B[:,None,:]
Out[267]:
array([[[ 7.5, 15. , 30. ],
[ 9. , 18. , 36. ]],
[[ 30. , 60. , 90. ],
[ 80. , 160. , 240. ]]])
The right numbers are there, but not the right order. Let's try again:
In [268]: A[:,None,:]*B[:,:,None]
Out[268]:
array([[[ 7.5, 9. ],
[ 15. , 18. ],
[ 30. , 36. ]],
[[ 30. , 80. ],
[ 60. , 160. ],
[ 90. , 240. ]]])
That's better - now just reshape:
In [269]: _.reshape(2,6)
Out[269]:
array([[ 7.5, 9. , 15. , 18. , 30. , 36. ],
[ 30. , 80. , 60. , 160. , 90. , 240. ]])
Out[268] is a partial transpose of Out[267]: _267.transpose(0, 2, 1).
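
The same product-then-reshape can also be written with np.einsum, which spells out which axes pair up; a sketch equivalent to In [268]/In [269], not part of the original answer:
import numpy as np

A = np.array([[1.5, 1.8], [3, 8]])
B = np.array([[5, 10, 20], [10, 20, 30]])

# out[i, j, k] = B[i, j] * A[i, k]; then flatten the last two axes per row.
out = np.einsum('ij,ik->ijk', B, A).reshape(A.shape[0], -1)
# array([[  7.5,   9. ,  15. ,  18. ,  30. ,  36. ],
#        [ 30. ,  80. ,  60. , 160. ,  90. , 240. ]])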

Explanation on Numpy Broadcasting Answer

I recently posted a question here which was answered exactly as I asked. However, I think I overestimated my ability to manipulate the answer further. I read the broadcasting doc, and followed a few links that led me way back to 2002 about numpy broadcasting.
I've used the second method of array creation using broadcasting:
N = 10
out = np.zeros((N**3,4),dtype=int)
out[:,:3] = (np.arange(N**3)[:,None]/[N**2,N,1])%N
which outputs:
[[0,0,0,0]
[0,0,1,0]
...
[0,1,0,0]
[0,1,1,0]
...
[9,9,8,0]
[9,9,9,0]]
but I do not understand from the docs how to manipulate that further. Ideally, I would like to be able to set the increment by which each individual column changes.
ex. Column A changes by 0.5 up to 2, column B changes by 0.2 up to 1, and column C changes by 1 up to 10.
[[0,0,0,0]
[0,0,1,0]
...
[0,0,9,0]
[0,0.2,0,0]
...
[0,0.8,9,0]
[0.5,0,0,0]
...
[1.5,0.8,9,0]]
Thanks for any help.
You can adjust your current code just a little bit to make it work.
>>> out = np.zeros((4*5*10,4))
>>> out[:,:3] = (np.arange(4*5*10)[:,None]//(5*10, 10, 1)*(0.5, 0.2, 1)%(2, 1, 10))
>>> out
array([[ 0. , 0. , 0. , 0. ],
[ 0. , 0. , 1. , 0. ],
[ 0. , 0. , 2. , 0. ],
...
[ 0. , 0. , 8. , 0. ],
[ 0. , 0. , 9. , 0. ],
[ 0. , 0.2, 0. , 0. ],
...
[ 0. , 0.8, 9. , 0. ],
[ 0.5, 0. , 0. , 0. ],
...
[ 1.5, 0.8, 9. , 0. ]])
The changes are:
No int dtype on the array, since we need it to hold floats in some columns. You could specify a float dtype if you want (or even something more complicated that only allows floats in the first two columns).
Rather than N**3 total values, figure out the number of distinct values for each column, and multiply them together to get our total size. This is used for both zeros and arange.
Use the floor division // operator in the first broadcast operation because we want integers at this point, but later we'll want floats.
The values to divide by are again based on the number of values for the later columns (e.g. for A,B,C numbers of values, divide by B*C, C, 1).
Add a new broadcast operation to multiply by various scale factors (how much each value increases at once).
Change the values in the broadcast mod % operation to match the bounds on each column.
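Putting those changes together in a parameterized form may make the pattern clearer; the names steps and stops below are my own, not from the original answer:
import numpy as np

steps = np.array([0.5, 0.2, 1.0])     # increment per column (A, B, C)
stops = np.array([2.0, 1.0, 10.0])    # exclusive upper bound per column
counts = (stops / steps).astype(int)  # distinct values per column: 4, 5, 10
total = counts.prod()                 # 4 * 5 * 10 = 200 rows

# Rows that pass before each column ticks once: B*C, C, 1.
divisors = np.array([counts[1] * counts[2], counts[2], 1])

out = np.zeros((total, 4))
out[:, :3] = (np.arange(total)[:, None] // divisors) * steps % stops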
This small example helps me understand what is going on:
In [123]: N=2
In [124]: np.arange(N**3)[:,None]/[N**2, N, 1]
Out[124]:
array([[ 0. , 0. , 0. ],
[ 0.25, 0.5 , 1. ],
[ 0.5 , 1. , 2. ],
[ 0.75, 1.5 , 3. ],
[ 1. , 2. , 4. ],
[ 1.25, 2.5 , 5. ],
[ 1.5 , 3. , 6. ],
[ 1.75, 3.5 , 7. ]])
So we generate a range of numbers (0 to 7) and divide them by 4, 2, and 1.
The rest of the calculation just changes each value element-wise, without further broadcasting.
Apply % N to each element:
In [126]: np.arange(N**3)[:,None]/[N**2, N, 1]%N
Out[126]:
array([[ 0. , 0. , 0. ],
[ 0.25, 0.5 , 1. ],
[ 0.5 , 1. , 0. ],
[ 0.75, 1.5 , 1. ],
[ 1. , 0. , 0. ],
[ 1.25, 0.5 , 1. ],
[ 1.5 , 1. , 0. ],
[ 1.75, 1.5 , 1. ]])
Assigning to an int array is the same as converting the floats to integers:
In [127]: (np.arange(N**3)[:,None]/[N**2, N, 1]%N).astype(int)
Out[127]:
array([[0, 0, 0],
[0, 0, 1],
[0, 1, 0],
[0, 1, 1],
[1, 0, 0],
[1, 0, 1],
[1, 1, 0],
[1, 1, 1]])

Numpy: Fastest way of computing diagonal for each row of a 2d array

Given a 2d Numpy array, I would like to compute the diagonal matrix for each row in the fastest way possible. I'm currently using a list comprehension, but I'm wondering if it can be vectorised somehow.
For example using the following M array:
M = np.random.rand(5, 3)
[[ 0.25891593 0.07299478 0.36586996]
[ 0.30851087 0.37131459 0.16274825]
[ 0.71061831 0.67718718 0.09562581]
[ 0.71588836 0.76772047 0.15476079]
[ 0.92985142 0.22263399 0.88027331]]
I would like to compute the following array:
np.array([np.diag(row) for row in M])
array([[[ 0.25891593, 0. , 0. ],
[ 0. , 0.07299478, 0. ],
[ 0. , 0. , 0.36586996]],
[[ 0.30851087, 0. , 0. ],
[ 0. , 0.37131459, 0. ],
[ 0. , 0. , 0.16274825]],
[[ 0.71061831, 0. , 0. ],
[ 0. , 0.67718718, 0. ],
[ 0. , 0. , 0.09562581]],
[[ 0.71588836, 0. , 0. ],
[ 0. , 0.76772047, 0. ],
[ 0. , 0. , 0.15476079]],
[[ 0.92985142, 0. , 0. ],
[ 0. , 0.22263399, 0. ],
[ 0. , 0. , 0.88027331]]])
Here's one way using element-wise multiplication of np.eye(3) (the 3x3 identity array) and a slightly re-shaped M:
>>> M = np.random.rand(5, 3)
>>> np.eye(3) * M[:,np.newaxis,:]
array([[[ 0.42527357, 0. , 0. ],
[ 0. , 0.17557419, 0. ],
[ 0. , 0. , 0.61920924]],
[[ 0.04991268, 0. , 0. ],
[ 0. , 0.74000307, 0. ],
[ 0. , 0. , 0.34541354]],
[[ 0.71464307, 0. , 0. ],
[ 0. , 0.11878955, 0. ],
[ 0. , 0. , 0.65411844]],
[[ 0.01699954, 0. , 0. ],
[ 0. , 0.39927673, 0. ],
[ 0. , 0. , 0.14378892]],
[[ 0.5209439 , 0. , 0. ],
[ 0. , 0.34520876, 0. ],
[ 0. , 0. , 0.53862677]]])
(By "re-shaped M" I mean that the rows of M are made to face out along the z-axis rather than across the y-axis, giving M the shape (5, 1, 3).)
Despite the good answer from @ajcr, a much faster alternative can be achieved with fancy indexing (tested in NumPy 1.9.0):
import numpy as np
def sol0(M):
    return np.eye(M.shape[1]) * M[:, np.newaxis, :]

def sol1(M):
    b = np.zeros((M.shape[0], M.shape[1], M.shape[1]))
    diag = np.arange(M.shape[1])
    b[:, diag, diag] = M
    return b
where the timing shows this is roughly 4-5x faster:
M = np.random.random((1000, 3))
%timeit sol0(M)
#10000 loops, best of 3: 111 µs per loop
%timeit sol1(M)
#10000 loops, best of 3: 23.8 µs per loop
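As a quick sanity check (my own addition, not part of the original answer), the two functions produce identical results:
M = np.random.random((1000, 3))
assert np.allclose(sol0(M), sol1(M))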
