What is the best (elegant and efficient) way in Theano to convert a vector of indices to a matrix of zeros and ones, in which every row is the one-of-N representation of an index?
v = t.ivector() # the vector of indices
n = t.scalar() # the width of the matrix
convert = <your code here>
f = theano.function(inputs=[v, n], outputs=convert)
Example:
n_val = 4
v_val = [1,0,3]
f(v_val, n_val) = [[0,1,0,0],[1,0,0,0],[0,0,0,1]]
I didn't compare the different options, but you can also do it like this. It doesn't require extra memory.
import numpy as np
import theano
n_val = 4
v_val = np.asarray([1,0,3])
idx = theano.tensor.lvector()
z = theano.tensor.zeros((idx.shape[0], n_val))
one_hot = theano.tensor.set_subtensor(z[theano.tensor.arange(idx.shape[0]), idx], 1)
f = theano.function([idx], one_hot)
print f(v_val)
[[ 0.  1.  0.  0.]
 [ 1.  0.  0.  0.]
 [ 0.  0.  0.  1.]]
It's as simple as:
convert = t.eye(n,n)[v]
There might still be a more efficient solution that doesn't require building the whole identity matrix, which could be problematic for large n and short v's.
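Plugged back into the question's skeleton, a minimal sketch (untested here) might look like this, assuming n is declared as an integer scalar so it can be used as the size of the identity matrix:
import theano
import theano.tensor as t

v = t.ivector()           # vector of indices
n = t.iscalar()           # width of the matrix (integer, so t.eye accepts it)
convert = t.eye(n, n)[v]  # row v[i] of the identity matrix is the one-hot vector for v[i]
f = theano.function([v, n], convert)
print(f([1, 0, 3], 4))
# [[ 0.  1.  0.  0.]
#  [ 1.  0.  0.  0.]
#  [ 0.  0.  0.  1.]]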
There's now a built-in function for this: theano.tensor.extra_ops.to_one_hot.
y = tensor.as_tensor([3,2,1])
fn = theano.function([], tensor.extra_ops.to_one_hot(y, 4))
print fn()
# [[ 0. 0. 0. 1.]
# [ 0. 0. 1. 0.]
# [ 0. 1. 0. 0.]]
How can I replace columns in a numpy array with a certain number, based on probability, if the array has shape (1, X, X)?
I found code to replace rows, but cannot figure out how to modify it so that it applies to columns.
grid_example = np.random.rand(1,5,5)
probs = np.random.random((1,5))
grid_example[probs < 0.25] = 0
grid_example
Thanks!
Use:
import numpy as np
rng = np.random.default_rng(42)
grid_example = rng.random((1, 5, 5))
probs = rng.random((1, 5))
grid_example[..., (probs < 0.25).flatten()] = 0
print(grid_example)
Output
[[[0. 0.43887844 0. 0. 0.09417735]
[0. 0.7611397 0. 0. 0.45038594]
[0. 0.92676499 0. 0. 0.4434142 ]
[0. 0.55458479 0. 0. 0.6316644 ]
[0. 0.35452597 0. 0. 0.7783835 ]]]
The notation [..., (probs < 0.25).flatten()] applies the boolean mask to the last axis. More details in the documentation.
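To see what the ellipsis does on a tiny made-up array (the names here are just for the demo), arr[..., mask] keeps all leading axes and selects only the positions along the last axis where the mask is True:
import numpy as np

arr = np.arange(12).reshape(1, 3, 4)          # shape (1, 3, 4)
mask = np.array([True, False, True, False])   # one flag per column
print(arr[..., mask])                         # selects columns 0 and 2 in every row
# [[[ 0  2]
#   [ 4  6]
#   [ 8 10]]]
arr[..., mask] = 0                            # or zero those columns in place, as in the answer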
I have the following code:
import numpy as np
epsilon = np.array([[0. , 0.00172667, 0.00071437, 0.00091779, 0.00154501],
[0.00128983, 0. , 0.00028139, 0.00215905, 0.00094862],
[0.00035811, 0.00018714, 0. , 0.00029365, 0.00036993],
[0.00035631, 0.00112175, 0.00022906, 0. , 0.00291149],
[0.00021527, 0.00017653, 0.00010341, 0.00104458, 0. ]])
Sii = np.array([19998169., 14998140., 9997923., 7798321., 2797958.])
n = len(Sii)
epsilonijSjj = np.zeros((n,n))
for i in range(n):
    for j in range(n):
        epsilonijSjj[i,j] = epsilon[i][j]*Sii[j]
print (epsilonijSjj)
How can I avoid the double for loop and write the code in a fast Pythonic way?
Thank you in advance
NumPy allows you to multiply two arrays directly.
So rather than defining a zero-filled array and populating it with the altered elements of the other array, you can simply create a copy of the other array and apply the multiplication directly, like so:
import numpy as np
epsilon = np.array([[0. , 0.00172667, 0.00071437, 0.00091779, 0.00154501],
[0.00128983, 0. , 0.00028139, 0.00215905, 0.00094862],
[0.00035811, 0.00018714, 0. , 0.00029365, 0.00036993],
[0.00035631, 0.00112175, 0.00022906, 0. , 0.00291149],
[0.00021527, 0.00017653, 0.00010341, 0.00104458, 0. ]])
Sii = np.array([19998169., 14998140., 9997923., 7798321., 2797958.])
epsilonijSjj = epsilon.copy()
epsilonijSjj *= Sii
print(epsilonijSjj)
Output:
[[    0.          25896.8383938    7142.21625351  7157.22103059  4322.87308958]
 [25794.23832127      0.           2813.31555297 16836.96495505  2654.19891796]
 [ 7161.54430059   2806.7519196       0.          2289.97696165  1035.04860294]
 [ 7125.54759639  16824.163545     2290.12424238     0.          8146.22673742]
 [ 4305.00584063   2647.6216542    1033.88521743  8145.97015018     0.        ]]
Or, just do this, which is faster because it skips the separate copy step:
import numpy as np
epsilon = np.array([[0. , 0.00172667, 0.00071437, 0.00091779, 0.00154501],
[0.00128983, 0. , 0.00028139, 0.00215905, 0.00094862],
[0.00035811, 0.00018714, 0. , 0.00029365, 0.00036993],
[0.00035631, 0.00112175, 0.00022906, 0. , 0.00291149],
[0.00021527, 0.00017653, 0.00010341, 0.00104458, 0. ]])
Sii = np.array([19998169., 14998140., 9997923., 7798321., 2797958.])
epsilonijSjj = epsilon * Sii
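The broadcasting here scales column j of epsilon by Sii[j], which is exactly the epsilon[i][j]*Sii[j] term from the loop. A quick sanity check, reusing the epsilon and Sii arrays defined above:
# column-wise scaling by broadcasting is the same as right-multiplying by diag(Sii)
print(np.allclose(epsilon * Sii, epsilon @ np.diag(Sii)))  # True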
I have an ndarray of eigenvalues and their multiplicities (for instance, np.array([(2.2, 2), (3, 3), (5, 1)])). I need to compute the Jordan matrix for these eigenvalues without using Python cycles and iterables (list comprehensions, for loops, etc.), only NumPy functions.
I decided to build the matrix in these steps:
1. Create the blocks using np.vectorize and np.eye with np.fill_diagonal.
2. Combine the blocks into one matrix using hstack and vstack.
But I've got two problems.
Here's a snippet of my block-creating code:
def eye(t):
    eye = np.eye(t[1].astype(int), k=1)
    return eye

def jordan_matrix(X: np.ndarray) -> np.ndarray:
    dim = np.sum(X[:, 1].astype(int))
    eyes = np.vectorize(eye, signature='(x)->(n,m)')(X)
    return eyes
And I'm getting the error ValueError: could not broadcast input array from shape (3,3) into shape (2,2).
I also need to create extra zero matrices to fill the space not used by the created blocks, but their sizes are variable and I can't figure out how to create them without using Python's for loop or its equivalents.
Am I on the right track? How can I get around these problems?
np.vectorize would basically loop under the hood. We can use NumPy functions for actual vectorization at the Python level. Here's one such way -
def blockwise_jordan(a):
    r = a[:,1].astype(int)
    v = np.repeat(a[:,0], r)
    out = np.diag(v)
    n = out.shape[1]
    fillvals = np.ones(n, dtype=out.dtype)
    fillvals[r[:-1].cumsum()-1] = 0
    out.flat[1::out.shape[1]+1] = fillvals
    return out
Sample run -
In [52]: X = np.array([(2.2, 2), (3, 3), (5, 1)])
In [53]: blockwise_jordan(X)
Out[53]:
array([[2.2, 1. , 0. , 0. , 0. , 0. ],
[0. , 2.2, 0. , 0. , 0. , 0. ],
[0. , 0. , 3. , 1. , 0. , 0. ],
[0. , 0. , 0. , 3. , 1. , 0. ],
[0. , 0. , 0. , 0. , 3. , 0. ],
[0. , 0. , 0. , 0. , 0. , 5. ]])
Optimization #1
We can replace the final three steps to perform the conditional assignment of 1s and 0s, like so -
out.flat[1::n+1] = 1
c = r[:-1].cumsum()-1
out[c,c+1] = 0
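Putting the optimization into the function, the whole thing might read like this (a sketch of the combined version, not benchmarked here):
def blockwise_jordan_opt(a):
    r = a[:, 1].astype(int)
    v = np.repeat(a[:, 0], r)
    out = np.diag(v)
    n = out.shape[1]
    out.flat[1::n + 1] = 1       # set the whole superdiagonal to 1 ...
    c = r[:-1].cumsum() - 1
    out[c, c + 1] = 0            # ... then zero the entries between Jordan blocks
    return out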
Here's my solution:
def jordan(a):
    e = a[:,0]                    # eigenvalues
    m = a[:,1].astype('int')      # multiplicities
    d = np.repeat(e, m)           # main diagonal
    ones = np.ones(d.size - 1)
    ones[np.cumsum(m)[:-1] - 1] = 0
    j = np.diag(d) + np.diag(ones, k=1)
    return j
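A quick check with the example array from the question:
X = np.array([(2.2, 2), (3, 3), (5, 1)])
print(jordan(X))  # produces the same 6x6 Jordan matrix as in the sample run above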
Edit: just realized that my solution is almost the same as Divakar's.
I am trying to compute the matrix which has the following equation.
S = (D^−1/2) * W * (D^−1/2)
where D is a diagonal matrix of this form:
array([[ 0.59484625, 0. , 0. , 0. ],
[ 0. , 0.58563893, 0. , 0. ],
[ 0. , 0. , 0.58280472, 0. ],
[ 0. , 0. , 0. , 0.58216725]])
and W:
array([[ 0. , 0.92311635, 0.94700586, 0.95599748],
[ 0.92311635, 0. , 0.997553 , 0.99501248],
[ 0.94700586, 0.997553 , 0. , 0.9995501 ],
[ 0.95599748, 0.99501248, 0.9995501 , 0. ]])
I tried to compute D^-1/2 with the NumPy functions linalg.matrix_power(D, -1/2) and numpy.power(D, -1/2). matrix_power raises TypeError: exponent must be an integer, and numpy.power raises RuntimeWarning: divide by zero encountered in power.
How can I compute the -1/2 power of a diagonal matrix? Please help.
If you can update D (like in your own answer), then simply update the items at its diagonal indices and then call np.dot:
>>> D[np.diag_indices(4)] = 1/ (D.diagonal()**0.5)
>>> np.dot(D, W).dot(D)
array([[ 0. , 0.32158153, 0.32830723, 0.33106193],
[ 0.32158153, 0. , 0.34047794, 0.33923936],
[ 0.32830723, 0.34047794, 0. , 0.33913717],
[ 0.33106193, 0.33923936, 0.33913717, 0. ]])
Or create a new zeros array and then fill its diagonal elements with 1/ (D.diagonal()**0.5):
>>> arr = np.zeros(D.shape)
>>> np.fill_diagonal(arr, 1/ (D.diagonal()**0.5))
>>> np.dot(arr, W).dot(arr)
array([[ 0. , 0.32158153, 0.32830723, 0.33106193],
[ 0.32158153, 0. , 0.34047794, 0.33923936],
[ 0.32830723, 0.34047794, 0. , 0.33913717],
[ 0.33106193, 0.33923936, 0.33913717, 0. ]])
I got the answer by working it out in mathematical terms, but would love to see any straightforward one-liners :)
import math

def compute_diagonal_to_negative_power():
    for i in range(4):
        for j in range(4):
            if i == j:
                element = D[i][j]
                numerator = 1
                denominator = math.sqrt(element)
                D[i][j] = numerator / denominator
    return D
diagonal_matrix = compute_diagonal_to_negative_power()
S = np.dot(diagonal_matrix, W).dot(diagonal_matrix)
print(S)
"""
[[ 0. 0.32158153 0.32830723 0.33106193]
[ 0.32158153 0. 0.34047794 0.33923936]
[ 0.32830723 0.34047794 0. 0.33913718]
[ 0.33106193 0.33923936 0.33913718 0. ]]
"""
Source: https://math.stackexchange.com/questions/340321/raising-a-square-matrix-to-a-negative-half-power
You can do the following:
numpy.power(D,-1/2, where=(D!=0))
And then you will avoid getting the warning:
RuntimeWarning: divide by zero encountered in power
NumPy will raise each element of the matrix to the power -1/2 (i.e., divide 1 by its square root) only where it is nonzero, so you no longer try to divide by zero.
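One caveat: with where= alone, the entries where the condition is False are left uninitialized, so it is safer to also pass an out array. A minimal sketch for the original S computation, assuming the D and W arrays from the question:
import numpy as np

# zero entries stay zero because of out=np.zeros_like(D); the rest get D**(-1/2)
D_inv_sqrt = np.power(D, -0.5, where=(D != 0), out=np.zeros_like(D))
S = D_inv_sqrt.dot(W).dot(D_inv_sqrt)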
In general we could have matrices of arbitrary sizes. For my application it is necessary to have a square matrix. Also, the dummy entries should have a specified value. I am wondering if there is anything built into numpy for this?
Or what is the easiest way of doing it?
EDIT:
The matrix X already exists and is not square. We want to pad it with the given dummy value to make it square. All the original values will stay the same.
Thanks a lot.
Building upon the answer by LucasB, here is a function which will pad an arbitrary matrix M with a given value val so that it becomes square:
import numpy

def squarify(M, val):
    (a, b) = M.shape
    if a > b:
        padding = ((0, 0), (0, a - b))
    else:
        padding = ((0, b - a), (0, 0))
    return numpy.pad(M, padding, mode='constant', constant_values=val)
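For example, padding a 2x3 matrix (a small made-up array) so it becomes 3x3:
>>> M = numpy.array([[1, 2, 3],
...                  [4, 5, 6]])
>>> squarify(M, 0)
array([[1, 2, 3],
       [4, 5, 6],
       [0, 0, 0]])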
Since Numpy 1.7, there's the numpy.pad function. Here's an example:
>>> x = np.random.rand(2,3)
>>> np.pad(x, ((0,1), (0,0)), mode='constant', constant_values=42)
array([[ 0.20687158, 0.21241617, 0.91913572],
[ 0.35815412, 0.08503839, 0.51852029],
[ 42. , 42. , 42. ]])
For a 2D numpy array m it’s straightforward to do this by creating a max(m.shape) x max(m.shape) array of ones p and multiplying this by the desired padding value, before setting the slice of p corresponding to m (i.e. p[0:m.shape[0], 0:m.shape[1]]) to be equal to m.
This leads to the following function, where the first line deals with the possibility that the input has only one dimension (i.e. is an array rather than a matrix):
import numpy as np
def pad_to_square(a, pad_value=0):
    m = a.reshape((a.shape[0], -1))
    padded = pad_value * np.ones(2 * [max(m.shape)], dtype=m.dtype)
    padded[0:m.shape[0], 0:m.shape[1]] = m
    return padded
So, for example:
>>> r1 = np.random.rand(3, 5)
>>> r1
array([[ 0.85950957, 0.92468279, 0.93643261, 0.82723889, 0.54501699],
[ 0.05921614, 0.94946809, 0.26500925, 0.02287463, 0.04511802],
[ 0.99647148, 0.6926722 , 0.70148198, 0.39861487, 0.86772468]])
>>> pad_to_square(r1, 3)
array([[ 0.85950957, 0.92468279, 0.93643261, 0.82723889, 0.54501699],
[ 0.05921614, 0.94946809, 0.26500925, 0.02287463, 0.04511802],
[ 0.99647148, 0.6926722 , 0.70148198, 0.39861487, 0.86772468],
[ 3. , 3. , 3. , 3. , 3. ],
[ 3. , 3. , 3. , 3. , 3. ]])
or
>>> r2=np.random.rand(4)
>>> r2
array([ 0.10307689, 0.83912888, 0.13105124, 0.09897586])
>>> pad_to_square(r2, 0)
array([[ 0.10307689, 0. , 0. , 0. ],
[ 0.83912888, 0. , 0. , 0. ],
[ 0.13105124, 0. , 0. , 0. ],
[ 0.09897586, 0. , 0. , 0. ]])
etc.