I want to create a random diagonal matrix with size n such that each element in the diagonal entries has 50% chance of being -1 and 50% chance of being 1. Is there any advice for this?
import numpy as np
diagonal_entries = np.random.randint(low = -1, high = 1, size = n)
D = np.diag(diagonal_entries)
However, the problem is that np.random.randint with these bounds can return 0 as well (and, since high is exclusive, it never returns 1). I only want -1 and 1, excluding 0.
You can use np.random.choice to sample such a vector of -1s and 1s directly:
import numpy as np
n = 100
vec = np.random.choice([-1, 1], n)
mat = np.diag(vec)
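If you later want unequal probabilities for -1 and 1, np.random.choice also accepts a p argument; a small sketch (the 0.3/0.7 split is purely illustrative):
vec = np.random.choice([-1, 1], n, p=[0.3, 0.7])  # 30% chance of -1, 70% chance of 1
mat = np.diag(vec)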
You can combine a few NumPy routines into a concise helper for this:
import numpy as np

def random_diagonal(n, proba_minus=0):
    diagonal = np.ones(n)
    diagonal[np.random.random(size=n) < proba_minus] = -1
    return np.diagflat(diagonal)
The random routine lets you set the probability of drawing "-1", and np.diagflat builds a diagonal matrix from its diagonal. Both operations are vectorized, but for large sizes keep in mind that a temporary array is allocated for the boolean mask.
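For the 50/50 case in the question, a call along these lines should work (a minimal sketch using the helper above):
D = random_diagonal(5, proba_minus=0.5)
print(D)  # diagonal entries are 1 or -1, each with roughly 50% probability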
What about something like this:
import numpy as np
diagonal_entries = np.random.randint(low = 0, high = 2, size = 4)
print(diagonal_entries)
# i*2-1 will map [0,1] -> [2*0-1 == -1, 2*1-1 == 1] == [-1,1]
modified = [i*2-1 for i in diagonal_entries]
D = np.diag(modified)
print(D)
I used the same function with a little modification of the result to suit your [-1,1] needs.
My second option would be: modified = [1 if i == 1 else -1 for i in diagonal_entries]
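Since diagonal_entries is already a NumPy array, the same mapping can be done without a Python loop; a small sketch:
modified = diagonal_entries * 2 - 1  # vectorized version of the mapping above
D = np.diag(modified)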
I am seeking to construct a matrix of which I will calculate the inverse, to be used in an implicit method for solving a nonlinear parabolic PDE. My current approach, for reasons that will become obvious, gives me a singular (non-invertible) matrix. For context, the real matrix will be 30 by 30, but in these examples I am using smaller matrices for testing purposes.
Say I want to create a large square sparse matrix. Using spdiags only allows you to input members of the main, lower and upper diagonals individually. So how do you make it so that each diagonal has one value for all its entries?
Example Code:
import numpy as np
from scipy.sparse import spdiags
from numpy.linalg import inv
updiag = -0.25
diag = 0.5
lowdiag = -0.25
Jdata = np.array([[diag], [lowdiag], [updiag]])
Diags = [0, -1, 1]
J = spdiags(Jdata, Diags, 3, 3).toarray()
print(J)
inverseJ = inv(J)
print(inverseJ)
This produces a 3 x 3 matrix, but only the first entry of each diagonal is filled in. I wondered about using np.fill_diagonal, but that requires an existing matrix and only handles the main diagonal. Am I misunderstanding something?
The first argument of spdiags is a matrix of values to be used as the diagonals. You can use it this way:
Jdata = np.array([3 * [diag], 3 * [lowdiag], 3 * [updiag]])
Diags = [0, -1, 1]
J = spdiags(Jdata, Diags, 3, 3).toarray()
print(J)
# [[ 0.5 -0.25 0. ]
# [-0.25 0.5 -0.25]
# [ 0. -0.25 0.5 ]]
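As an aside, scipy.sparse.diags accepts scalar diagonals and broadcasts them when a shape is given, which avoids building the repeated lists by hand; a minimal sketch:
from scipy.sparse import diags

# scalars are broadcast along each diagonal when shape is specified
J = diags([diag, lowdiag, updiag], [0, -1, 1], shape=(3, 3)).toarray()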
Let's say I have a 2D array named grid of size (N, N). The N x N array cells can each be 0 or 1. First I initialise all the cells to 0. Then I would like to iterate through the array and change 0's to 1's with a given probability p. What I mean is this:
p = 0.5
for i in range(N):
    for j in range(N):  # iterate over every cell
        pass  # grid[i][j] should have a p = 50% chance to change from 0 to 1
How could this be implemented?
If performance matters, and you don't mind a dependency on SciPy, there's
import scipy.stats as st
grid = st.bernoulli.rvs(0.5, size=(N, N))
or NumPy's
import numpy as np
grid = np.random.binomial(1, 0.5, size=(N, N))
Here is another approach. Sample uniform values in [0, 1) with the dimensions you want, then convert the entries that are less than p to 1 and the rest to 0, so each cell ends up 1 with probability p:
import numpy as np
p=0.3
N=5
(np.random.random_sample((N, N)) < p).astype(int)
First I create my array
myarray = np.random.random_integers(0,10, size=20)
Then, I want to set 20% of the elements in the array to 0 (or some other number). How should I do this? Apply a mask?
You can calculate the indices with np.random.choice, limiting the number of chosen indices to the percentage:
indices = np.random.choice(np.arange(myarray.size), replace=False,
                           size=int(myarray.size * 0.2))
myarray[indices] = 0
For others looking for the answer in the case of an nd-array, as proposed by user holi:
my_array = np.random.rand(8, 50)
indices = np.random.choice(my_array.shape[1] * my_array.shape[0], replace=False,
                           size=int(my_array.shape[1] * my_array.shape[0] * 0.2))
We multiply the dimensions to get an array of length dim1 * dim2, then we apply these indices to our array:
my_array[np.unravel_index(indices, my_array.shape)] = 0
The array is now masked.
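As a quick sanity check (purely illustrative), you can count how many entries ended up as zero:
print(np.count_nonzero(my_array == 0))  # should be roughly 0.2 * my_array.size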
Use np.random.permutation as a random index generator, and take the first 20% of the indices.
myarray = np.random.random_integers(0,10, size=20)
n = len(myarray)
random_idx = np.random.permutation(n)
frac = 20 # [%]
zero_idx = random_idx[:round(n*frac/100)]
myarray[zero_idx] = 0
If you want the 20% to be random:
import random

random_list = []
array_len = len(myarray)
while len(random_list) < (array_len / 5):
    random_int = random.randint(0, array_len - 1)
    if random_int not in random_list:
        random_list.append(random_int)
for position in random_list:
    myarray[position] = 0
This ensures you get exactly 20% of the values set to 0; the RNG rolling the same index several times cannot leave you with fewer than 20% of the values being 0.
Assume your input numpy array is A and p=0.2. The following are a couple of ways to achieve this.
Exact Masking
# build a flat mask with int(p * A.size) zeros and the rest ones, shuffle it, and apply it element-wise
ones = np.ones(A.size)
idx = int(min(p * A.size, A.size))
ones[:idx] = 0
A *= np.reshape(np.random.permutation(ones), A.shape)
Approximate Masking
This is commonly done in several denoising objectives, most notably the Masked Language Modeling in Transformers pre-training. Here is a more pythonic way of setting a certain proportion (say 20%) of elements to zero.
A *= np.random.binomial(size=A.shape, n=1, p=0.8)
Another alternative (note that this zeros roughly 50% of the elements, independent of p):
A *= np.random.randint(0, 2, A.shape)
I am trying to create a matrix of random numbers, but my solution is too long and looks ugly
random_matrix = [[random.random() for e in range(2)] for e in range(3)]
this looks ok, but in my implementation it is
weights_h = [[random.random() for e in range(len(inputs[0]))] for e in range(hiden_neurons)]
which is extremely unreadable and does not fit on one line.
You can drop the range(len()):
weights_h = [[random.random() for e in inputs[0]] for e in range(hiden_neurons)]
But really, you should probably use numpy.
In [9]: numpy.random.random((3, 3))
Out[9]:
array([[ 0.37052381, 0.03463207, 0.10669077],
[ 0.05862909, 0.8515325 , 0.79809676],
[ 0.43203632, 0.54633635, 0.09076408]])
Take a look at numpy.random.rand:
Docstring: rand(d0, d1, ..., dn)
Random values in a given shape.
Create an array of the given shape and populate it with random
samples from a uniform distribution over [0, 1).
>>> import numpy as np
>>> np.random.rand(2,3)
array([[ 0.22568268, 0.0053246 , 0.41282024],
[ 0.68824936, 0.68086462, 0.6854153 ]])
Use np.random.randint(), as np.random.random_integers() is deprecated:
random_matrix = np.random.randint(min_val,max_val,(<num_rows>,<num_cols>))
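For example (assuming you want integers below 10; note that the upper bound is exclusive):
import numpy as np
random_matrix = np.random.randint(0, 10, (3, 4))  # 3 x 4 matrix of integers from 0 to 9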
Looks like you are doing a Python implementation of the Coursera Machine Learning Neural Network exercise. Here's what I did for randInitializeWeights(L_in, L_out)
#get a random array of floats between 0 and 1 as Pavel mentioned
W = numpy.random.random((L_out, L_in +1))
#normalize so that it spans a range of twice epsilon
W = W * 2 * epsilon
#shift so that mean is at zero
W = W - epsilon
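Wrapped up as a function it might look like the following sketch; the epsilon default of 0.12 is an assumption here, not something fixed by the steps quoted above:
import numpy as np

def randInitializeWeights(L_in, L_out, epsilon=0.12):  # default epsilon is an assumption
    # uniform in [-epsilon, epsilon), shape (L_out, L_in + 1) to include the bias column
    return np.random.random((L_out, L_in + 1)) * 2 * epsilon - epsilon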
For creating an array of random numbers NumPy provides array creation using:
Real numbers
Integers
For creating an array of random real numbers there are two options:
random.rand (for a uniform distribution of the generated random numbers)
random.randn (for a normal distribution of the generated random numbers)
random.rand
import numpy as np
arr = np.random.rand(row_size, column_size)
random.randn
import numpy as np
arr = np.random.randn(row_size, column_size)
For creating an array of random integers:
import numpy as np
numpy.random.randint(low, high=None, size=None, dtype='l')
where
low = lowest (signed) integer to be drawn from the distribution
high (optional) = if provided, one above the largest (signed) integer to be drawn from the distribution
size (optional) = output shape, i.e. if the given shape is, e.g., (m, n, k), then m * n * k samples are drawn
dtype (optional) = desired dtype of the result
eg:
The following example produces a 5 x 5 array of random integers between 0 and 4, i.e. 25 integers in total (note that size takes a tuple with a comma, not a multiplication sign):
arr2 = np.random.randint(0, 5, size=(5, 5))
[[2 1 1 0 1]
 [3 2 1 4 3]
 [2 3 0 3 3]
 [1 3 1 0 0]
 [4 1 2 0 1]]
eg2:
The following example produces an array of 10 random integers, each either 0 or 1:
arr3= np.random.randint(2, size = 10)
[0 0 0 0 1 1 0 0 1 1]
First, create numpy array then convert it into matrix. See the code below:
import numpy
B = numpy.random.random((3, 4))  # it is an ndarray
C = numpy.matrix(B)              # it is a matrix
print(type(B))
print(type(C))
print(C)
x = np.int_(np.random.rand(10) * 10)
This gives random integers from 0 to 9. For 0 to 19, multiply by 20 instead.
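If integers are what you want, a more direct route (just a sketch) would be:
x = np.random.randint(0, 10, 10)  # 10 random integers from 0 to 9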
When you say "a matrix of random numbers", you can use numpy as Pavel (https://stackoverflow.com/a/15451997/6169225) mentioned above; in this case I'm assuming it is irrelevant to you what distribution these (pseudo)random numbers follow.
However, if you require a particular distribution (I imagine you are interested in the uniform distribution), numpy.random has very useful methods for you. For example, let's say you want a 3x2 matrix with pseudo-random uniform values bounded by [low, high). You can do this like so:
numpy.random.uniform(low,high,(3,2))
Note, you can replace uniform by any number of distributions supported by this library.
Further reading: https://docs.scipy.org/doc/numpy/reference/routines.random.html
A simple way of creating an array of random integers is:
matrix = np.random.randint(maxVal, size=(rows, columns))
The following outputs a 2 by 3 matrix of random integers from 0 to 9:
a = np.random.randint(10, size=(2,3))
import random

random_matrix = [[random.random() for j in range(columns)] for i in range(rows)]
for i in range(rows):
    print(random_matrix[i])
An answer using map:
import random
weights_h = list(map(lambda x: list(map(lambda y: random.random(), range(len(inputs[0])))), range(hiden_neurons)))
# This builds a square matrix: the while loop keeps adding rows until rows == cols.
# You can change the condition if you do not want a square matrix.
import random
import numpy as np

def random_matrix(R, cols):
    matrix = []
    rows = 0
    while rows < cols:
        N = random.sample(R, cols)  # draw cols distinct values from R
        matrix.append(N)
        rows = rows + 1
    return np.array(matrix)

print(random_matrix(range(10), 5))
# make sure you understand the function random.sample
numpy.random.rand(row, column) generates random numbers between 0 and 1 with the specified (m, n) shape. So create an (m, n) matrix with it, multiply the matrix by the width of the range (high - low), and add the low limit.
Analyzing: if rand produced 0, the result would sit exactly at the low limit, and values approaching 1 push the result toward the high limit. In other words, scaling and shifting the rand output this way spans the desired range.
import numpy as np
high = 10
low = 5
m,n = 2,2
a = (high - low)*np.random.rand(m,n) + low
Output:
a = array([[5.91580065, 8.1117106 ],
[6.30986984, 5.720437 ]])
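The same result can be obtained more directly with np.random.uniform, which takes the bounds as arguments:
a = np.random.uniform(low, high, size=(m, n))  # uniform on [low, high)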
Suppose I have a matrix of size 100x100 and I would like to compare each pixel to its direct neighbor (left, upper, right, lower) and then do some operations on the current matrix or a new one of the same size.
A sample code in Python/Numpy could look like the following:
(the comparison >0.5 has no meaning, I just want to give a working example for some operation while comparing the neighbors)
import numpy as np
my_matrix = np.random.rand(100,100)
new_matrix = np.zeros((100,100))
my_range = np.arange(1,99)
for i in my_range:
    for j in my_range:
        if my_matrix[i,j+1] > 0.5:
            new_matrix[i,j+1] = 1
        if my_matrix[i,j-1] > 0.5:
            new_matrix[i,j-1] = 1
        if my_matrix[i+1,j] > 0.5:
            new_matrix[i+1,j] = 1
        if my_matrix[i-1,j] > 0.5:
            new_matrix[i-1,j] = 1
        if my_matrix[i+1,j+1] > 0.5:
            new_matrix[i+1,j+1] = 1
        if my_matrix[i+1,j-1] > 0.5:
            new_matrix[i+1,j-1] = 1
        if my_matrix[i-1,j+1] > 0.5:
            new_matrix[i-1,j+1] = 1
This can get really nasty if I want to step into one neighboring cell and start from it to compare it to its neighbors ... Do you have some suggestions how this can be done in a more efficient manner? Is this even possible?
I'm not 100% sure what you're aiming for with your code, which ignoring indexing issues at boundaries is equivalent to
new_matrix = my_matrix > 0.5
but you can do more advanced versions of these calculations quickly with morphological operations:
import numpy as np
from scipy.ndimage import morphology
a = np.random.rand(5,5)
b = a > 0.5
element = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
result = morphology.binary_dilation(b, element) * 1
The way to keep this from "getting nasty" is: Encapsulate the neighbor-checking code in a function. Then you can just call it with the coordinates of the neighbor when necessary.
If you need to keep track of which pairs you've checked, so that you don't keep the same ones, use some sort of memoization on top of that.
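A minimal sketch of that idea (the function and variable names here are purely illustrative):
import numpy as np

my_matrix = np.random.rand(100, 100)
new_matrix = np.zeros_like(my_matrix)
checked_pairs = set()  # memoize which (cell, neighbour) pairs have already been compared

def check_neighbor(mat, out, i, j, di, dj):
    # compare cell (i, j) against its neighbour offset by (di, dj), at most once
    ni, nj = i + di, j + dj
    key = ((i, j), (ni, nj))
    if key in checked_pairs or not (0 <= ni < mat.shape[0] and 0 <= nj < mat.shape[1]):
        return
    checked_pairs.add(key)
    if mat[ni, nj] > 0.5:  # same placeholder comparison as in the question
        out[ni, nj] = 1

for i in range(my_matrix.shape[0]):
    for j in range(my_matrix.shape[1]):
        for di, dj in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
            check_neighbor(my_matrix, new_matrix, i, j, di, dj)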