Numpy is not swapping elements - python

I am trying to swap two rows of a 2D NumPy array, but only one of them ends up changed. Here is the code:
n = len(A)
perMatrix = np.zeros((n,n))
np.fill_diagonal(perMatrix, 1)
perMatrix = A
# swapping the row
print(perMatrix)
temp = perMatrix[switchIndex1]
print(temp)
# perMatrix[switchIndex1][0] = 14
perMatrix[switchIndex1], perMatrix[switchIndex2] = perMatrix[switchIndex2], perMatrix[switchIndex1]
print(perMatrix)
Here's what the code is outputting:

You could just add (on the line after perMatrix is created):
sigma = [switchIndex1, switchIndex2]
tau = [switchIndex2, switchIndex1]
perMatrix[sigma,:] = perMatrix[tau,:]
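The reason the tuple swap in the question changes only one row is that perMatrix[switchIndex1] is a view, not a copy: assigning the first row overwrites the data the second assignment then reads. The fancy-indexing line above works because the right-hand side is copied before assignment. A minimal sketch of that fix and an explicit-copy variant (assuming switchIndex1 and switchIndex2 are row indices):
import numpy as np

perMatrix = np.eye(4)
switchIndex1, switchIndex2 = 1, 3

# Option 1: fancy indexing copies the selected rows, so both rows are swapped
perMatrix[[switchIndex1, switchIndex2]] = perMatrix[[switchIndex2, switchIndex1]]

# Option 2: copy the rows explicitly before the tuple assignment
perMatrix[switchIndex1], perMatrix[switchIndex2] = (
    perMatrix[switchIndex2].copy(), perMatrix[switchIndex1].copy())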

Elements from list are overwritten in python

I've been struggling for some days trying to figure out why the items in my Python list get overwritten. Basically, I have to implement a function that rotates a matrix 8 times.
def rotate_ring(matrix, offset):
    dim = len(matrix[0])
    print(dim)
    last_element = matrix[offset][offset]
    for j in range(1 + offset, dim - offset):
        matrix[offset][j-1] = matrix[offset][j]
    matrix[offset][dim-1-offset] = matrix[1+offset][dim-1-offset]
    for i in range(1 + offset, dim - offset):
        matrix[i-1][dim-1-offset] = matrix[i][dim-1-offset]
    matrix[dim-1-offset][dim-1-offset] = matrix[dim-1-offset][dim-2-offset]
    for j in range(1+offset, dim-offset):
        matrix[dim-1-offset][dim-j] = matrix[dim-1-offset][dim-j-1]
    matrix[dim-1-offset][offset] = matrix[dim-2-offset][offset]
    for i in range(1+offset, dim-offset):
        matrix[dim-i][offset] = matrix[dim-i-1][offset]
    matrix[1+offset][offset] = last_element
    return matrix

def rotate_matrix(matrix):
    dim = len(matrix[0])
    for offset in range(0, int(dim/2)):
        matrix = rotate_ring(matrix, offset)
    return matrix
The functions above are the ones that rotate the matrix, and they work; I checked them. After implementing them, I wrote another function
def compass_filter(kernel):
    #result = np.zeros(gray.shape, dtype=np.float)
    results = []
    results.append(kernel)
    for i in range(0,7):
        kernel = rotate_matrix(kernel) #rotate the kernel
        print(kernel)
        results.append(kernel) #appending to results
    return results
that iterates (because I always need 8 kernels) and appends the kernels to a list. The problem I have encountered is that each newly rotated kernel is printed correctly, but when I print the final list, all I see is the last kernel repeated 8 times. Does anybody know what the problem is? I have also tried creating another list inside the for loop just for the new element and then appending it to the list outside the loop. Thank you!
Please change the function compass_filter as follows:
def compass_filter(kernel):
    results = []
    results.append(kernel)
    for i in range(7):
        kernel_ = np.array([rotate_matrix(kernel)])
        print(kernel_)
        results.append(kernel_)
    return results
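The root cause is that rotate_matrix mutates the kernel in place and returns the same object, so the original code appends eight references to one and the same array. Wrapping the result in np.array(...) forces a copy (at the cost of an extra leading dimension). A minimal sketch that keeps the original shape, assuming the kernel is a NumPy array, copies explicitly instead:
def compass_filter(kernel):
    results = [np.copy(kernel)]          # snapshot of the starting kernel
    for i in range(7):
        kernel = rotate_matrix(kernel)   # rotates in place and returns the same object
        results.append(np.copy(kernel))  # store an independent copy of the current state
    return results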

Cluster sizes in 2D numpy array?

I have a question similar to one that was already answered before, with slight modifications:
I have a 2-D numpy array with values -1,0,1. I want to find the cluster sizes of elements with values 1 or -1 (separately). Do you have any suggestions on how to do this?
Thanks!
This is my solution:
import numpy as np
import copy

arr = np.array([[1,1,-1,0,1],[1,1,0,1,1],[0,1,0,1,0],[-1,-1,1,0,1]])
print(arr)
col, row = arr.shape
mask_ = np.ma.make_mask(np.ones((col,row)))
cluster_size = {}

def find_neighbor(arr, mask, col_index, row_index):
    index_holder = []
    col, row = arr.shape
    left = (col_index, row_index-1)
    right = (col_index, row_index+1)
    top = (col_index-1, row_index)
    bottom = (col_index+1, row_index)
    left_ = row_index-1 >= 0
    right_ = (row_index+1) < row
    top_ = (col_index-1) >= 0
    bottom_ = (col_index+1) < col
    #print(list(zip([left,right,top,bottom],[left_,right_,top_,bottom_])))
    for each in zip([left,right,top,bottom],[left_,right_,top_,bottom_]):
        if each[-1]:
            if arr[col_index,row_index]==arr[each[0][0],each[0][1]] and mask[each[0][0],each[0][1]]:
                mask[each[0][0],each[0][1]] = False
                index_holder.append(each[0])
    return mask, index_holder

for i in range(col):
    for j in range(row):
        if mask_[i,j] == False:
            pass
        else:
            value = arr[i,j]
            mask_[i,j] = False
            index_to_check = [(i,j)]
            kk = 1
            while len(index_to_check) != 0:
                index_to_check_deepcopy = copy.deepcopy(index_to_check)
                for each in index_to_check:
                    mask_, temp_index = find_neighbor(arr,mask_,each[0],each[1])
                    index_to_check = index_to_check + temp_index
                    # print("check",each,temp_index,index_to_check)
                    kk += len(temp_index)
                for each in index_to_check_deepcopy:
                    del(index_to_check[index_to_check.index(each)])
            if (value,kk) in cluster_size:
                cluster_size[(value,kk)] = cluster_size[(value,kk)] + 1
            else:
                cluster_size[(value,kk)] = 1
print(cluster_size)
cluster_size is a dictionary whose keys are two-element tuples (a, b): a is the value shared by the cluster (which is what you want to separate on) and b is the size of that cluster. The value stored under each key is the number of clusters with that value and size.
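For comparison, a much shorter route (a sketch, assuming SciPy is available) is scipy.ndimage.label, which finds the 4-connected components directly and lets you read off the size of each one:
import numpy as np
from scipy import ndimage

arr = np.array([[1,1,-1,0,1],[1,1,0,1,1],[0,1,0,1,0],[-1,-1,1,0,1]])
for value in (1, -1):
    labeled, num_clusters = ndimage.label(arr == value)   # 4-connected components
    sizes = np.bincount(labeled.ravel())[1:]              # label 0 is the background
    print(value, num_clusters, sizes)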
Are you trying to count the number of 1s or -1s in the 2D array?
If so, then it is fairly easy. Say
ar = np.array([1, -1, 1, 0, -1])
Then:
n_1 = sum([1 for each in ar if each == 1])
n_m1 = sum([1 for each in ar if each == -1])

Does NumPy have a function equivalent to Matlab's buffer?

I see there are array_split and split methods, but these are not very handy when you have to split an array whose length is not an integer multiple of the chunk size. Moreover, these methods' input is the number of slices rather than the slice size. I need something more like Matlab's buffer method, which is more suitable for signal processing.
For example, if I want to buffer a signal into chunks of size 60, I need to do np.vstack(np.hsplit(x.iloc[0:((len(x)//60)*60)], len(x)//60)), which is cumbersome.
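(For the non-overlapping case, the same chunking can be written more directly with a reshape; a minimal sketch, assuming x is a plain 1-D array rather than a pandas Series:)
import numpy as np

x = np.arange(150)                               # stand-in for the signal
chunks = x[:len(x)//60 * 60].reshape(-1, 60)     # one 60-sample chunk per row, tail discarded
print(chunks.shape)                              # (2, 60)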
I wrote the following routine to handle the use cases I needed, but I have not implemented/tested for "underlap".
Please feel free to make suggestions for improvement.
def buffer(X, n, p=0, opt=None):
    '''Mimic MATLAB routine to generate buffer array

    MATLAB docs here: https://se.mathworks.com/help/signal/ref/buffer.html

    Parameters
    ----------
    X : ndarray
        Signal array
    n : int
        Length of each data segment (frame length)
    p : int
        Number of values to overlap
    opt : str
        Initial condition options. The default sets the first `p` values to
        zero, while 'nodelay' begins filling the buffer immediately.

    Returns
    -------
    result : (n, m) ndarray
        Buffer array created from X
    '''
    import numpy as np

    if opt not in [None, 'nodelay']:
        raise ValueError('{} not implemented'.format(opt))

    i = 0
    first_iter = True
    while i < len(X):
        if first_iter:
            if opt == 'nodelay':
                # No zeros at array start
                result = X[:n]
                i = n
            else:
                # Start with `p` zeros
                result = np.hstack([np.zeros(p), X[:n-p]])
                i = n-p
            # Make 2D array and pivot
            result = np.expand_dims(result, axis=0).T
            first_iter = False
            continue

        # Create next column, add `p` results from last col if given
        col = X[i:i+(n-p)]
        if p != 0:
            col = np.hstack([result[:,-1][-p:], col])
        i += n-p

        # Append zeros if the last column is not length `n`
        if len(col) < n:
            col = np.hstack([col, np.zeros(n-len(col))])

        # Combine result with next column
        result = np.hstack([result, np.expand_dims(col, axis=0).T])

    return result
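As a quick sanity check of the routine above (a sketch; the expected values come from tracing the code with the default zero-padded start, matching MATLAB's behaviour):
import numpy as np

x = np.arange(1, 11)
print(buffer(x, 4, 1))
# Expected: columns overlap by one sample, the first column starts with a zero,
# and the last column is zero-padded:
# [[ 0.  3.  6.  9.]
#  [ 1.  4.  7. 10.]
#  [ 2.  5.  8.  0.]
#  [ 3.  6.  9.  0.]]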
def buffer(X = np.array([]), n = 1, p = 0):
    # buffers data vector X into length n column vectors with overlap p
    # excess data at the end of X is discarded
    n = int(n)                           # length of each data vector
    p = int(p)                           # overlap of data vectors, 0 <= p < n-1
    L = len(X)                           # length of data to be buffered
    m = int(np.floor((L-n)/(n-p)) + 1)   # number of sample vectors (no padding)
    data = np.zeros([n, m])              # initialize data matrix
    for startIndex, column in zip(range(0, L-n, n-p), range(0, m)):
        data[:, column] = X[startIndex:startIndex + n]   # fill in by column
    return data
This Keras function may be considered a Python equivalent of MATLAB's buffer().
See the sample code:
import numpy as np
S = np.arange(1,99) #A Demo Array
import tensorflow.keras.preprocessing as kp
list(kp.timeseries_dataset_from_array(S, targets = None,sequence_length=7,sequence_stride=7,batch_size=5))
Same as the other answer, but faster.
def buffer(X, n, p=0):
    '''
    Parameters
    ----------
    X : ndarray
        Signal array
    n : int
        Length of each data segment (frame length)
    p : int
        Number of values to overlap

    Returns
    -------
    result : (n, m) ndarray
        Buffer array created from X
    '''
    import numpy as np
    d = n - p
    m = len(X)//d
    if m * d != len(X):
        m = m + 1
    Xn = np.zeros(d*m)
    Xn[:len(X)] = X
    Xn = np.reshape(Xn,(m,d))
    Xne = np.concatenate((Xn, np.zeros((1,d))))
    Xn = np.concatenate((Xn, Xne[1:,0:p]), axis=1)
    return np.transpose(Xn[:-1])
ryanjdillon's answer rewritten for a significant performance improvement; it appends to a list instead of concatenating arrays, since the latter copies the array on every iteration and is much slower.
def buffer(x, n, p=0, opt=None):
    if opt not in ('nodelay', None):
        raise ValueError('{} not implemented'.format(opt))

    i = 0
    if opt == 'nodelay':
        # No zeros at array start
        result = x[:n]
        i = n
    else:
        # Start with `p` zeros
        result = np.hstack([np.zeros(p), x[:n-p]])
        i = n-p
    # Make 2D array, cast to list for .append()
    result = list(np.expand_dims(result, axis=0))

    while i < len(x):
        # Create next column, add `p` results from last col if given
        col = x[i:i+(n-p)]
        if p != 0:
            col = np.hstack([result[-1][-p:], col])

        # Append zeros if the last column is not length `n`
        if len(col) < n:
            col = np.hstack([col, np.zeros(n - len(col))])

        # Combine result with next column
        result.append(np.array(col))
        i += (n - p)

    return np.vstack(result).T
def buffer(X, n, p=0):
    '''
    Parameters:
    X: ndarray, signal array; input a long vector such as a raw speech wav
    n: int, frame length
    p: int, number of values to overlap
    -----------
    Returns:
    result : (n, m) ndarray, buffer array created from X
    '''
    import numpy as np
    d = n - p
    #print(d)
    m = len(X)//d
    c = n//d
    #print(c)
    if m * d != len(X):
        m = m + 1
    #print(m)
    Xn = np.zeros(d*m)
    Xn[:len(X)] = X
    Xn = np.reshape(Xn,(m,d))
    Xn_out = Xn
    for i in range(c-1):
        Xne = np.concatenate((Xn, np.zeros((i+1,d))))
        Xn_out = np.concatenate((Xn_out, Xne[i+1:,:]), axis=1)
        #print(Xn_out.shape)
    if n-d*c > 0:
        Xne = np.concatenate((Xn, np.zeros((c,d))))
        Xn_out = np.concatenate((Xn_out, Xne[c:,:n-p*c]), axis=1)
    return np.transpose(Xn_out)
Here is an improved version of Ali Khodabakhsh's sample code, which did not work in my case. Feel free to comment and use it.
Comparing the execution times of the proposed answers by running
from timeit import default_timer as timer

x = np.arange(1,200000)
start = timer()
y = buffer(x,60,20)
end = timer()
print(end-start)
the results are:
Andrzej May, 0.005595300000095449
OverLordGoldDragon, 0.06954789999986133
ryanjdillon, 2.427092700000003

Problems with the zip function: lists that seem not iterable

I'm having some trouble trying to use four lists with the zip function.
In particular, I'm getting the following error at line 36:
TypeError: zip argument #3 must support iteration
I've already read that this happens with non-iterable objects, but I'm using it on lists! And if I use zip on only the first two lists it works perfectly: I have problems only with the last two.
Does anyone have an idea how to solve this? Many thanks!
import numpy

#setting initial values
R = 330
C = 0.1
f_T = 1/(2*numpy.pi*R*C)
w_T = 2*numpy.pi*f_T
n = 10
T = 1
w = (2*numpy.pi)/T
t = numpy.linspace(-2, 2, 100)

#making the lists c_k, w_k, a_k, phi_k
c_karray = []
w_karray = []
A_karray = []
phi_karray = []

#populating the lists
for k in range(1, n, 2):
    c_k = 2/(k*numpy.pi)
    w_k = k*w
    A_k = 1/(numpy.sqrt(1+(w_k)**2))
    phi_k = numpy.arctan(-w_k)
    c_karray.append(c_k)
    w_karray.append(w_k)
    A_karray.append(A_k)
    phi_karray.append(phi_k)

#making the function w(t)
w = []

#doing the sum for each t and populate w(t)
for i in t:
    w_i = ([(A_k*c_k*numpy.sin(w_k*i+phi_k)) for c_k, w_k, A_k, phi_k in zip(c_karray, w_karray, A_k, phi_k)])
    w.append(sum(w_i))
You probably mistyped the last two arguments to zip. They should be A_karray and phi_karray, because A_k and phi_k are single values.
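With that fix, the comprehension reads (a sketch of just the corrected line):
w_i = [A_k*c_k*numpy.sin(w_k*i + phi_k)
       for c_k, w_k, A_k, phi_k in zip(c_karray, w_karray, A_karray, phi_karray)]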
My result for w is:
[-0.11741034896740517,
-0.099189027720991918,
-0.073206290274556718,
...
-0.089754003567358978,
-0.10828235682188027,
-0.1174103489674052]
HTH,
Germán.
I believe you want zip(c_karray, w_karray, A_karray, phi_karray). Additionally, you should build this once, not on each iteration of the for loop.
Furthermore, you are not really making use of numpy. Try this instead of your loops.
d = numpy.arange(1, n, 2)
c_karray = 2/(d*numpy.pi)
w_karray = d*w
A_karray = 1/(numpy.sqrt(1+(w_karray)**2))
phi_karray = numpy.arctan(-w_karray)
w = (A_karray*c_karray*numpy.sin(w_karray*t[:,None]+phi_karray)).sum(axis=-1)

Can a python list hold a multi-dimensional array as its element?

I am trying to do image processing using python.
I am trying to create a list which holds numpy.ndarrays.
My code looks like this:
def Minimum_Close(Shade_Corrected_Image, Size):
    uint32_Shade_Corrected_Image = pymorph.to_int32(Shade_Corrected_Image)
    Angles = []
    [Row, Column] = Shade_Corrected_Image.shape
    Angles = [i*15 for i in range(12)]
    Image_Close = [0 for x in range(len(Angles))]
    Image_Closing = numpy.zeros((Row, Column))
    for s in range(len(Angles)):
        Struct_Element = pymorph.seline(Size, Angles[s])
        Image_Closing = pymorph.close(uint32_Shade_Corrected_Image, Struct_Element)
        Image_Close[s] = Image_Closing
    Min_Close_Image = numpy.zeros(Shade_Corrected_Image.shape)
    temp_array = [][]
    Temp_Cell = numpy.zeros((Row, Column))
    for r in range(1, Row):
        for c in range(1, Column):
            for Cell in Image_Close:
                Temp_Cell = Image_Close[Cell]
                temp_array[Cell] = Temp_Cell[r][c]
            Min_Close_Image[r][c] = min(temp_array)
    Min_Close_Image = Min_Close_Image - Shade_Corrected_Image
    return Min_Close_Image
While running this code I'm getting error:
Temp_Cell = Image_Close[Cell]
TypeError: only integer arrays with one element can be converted to an index
How can I make a data structure which holds different multi-dimensional arrays and then traverse through it?
Making a list of arrays is not necessary when you're using numpy.
I suggest rewriting the whole function like this:
def Minimum_Close(shade_corrected_image, size):
    uint32_shade_corrected_image = pymorph.to_int32(shade_corrected_image)
    angles = np.arange(12) * 15
    def pymorph_op(angle):
        struct_element = pymorph.seline(size, angle)
        return pymorph.close(uint32_shade_corrected_image, struct_element)
    image_close = np.dstack([pymorph_op(a) for a in angles])
    min_close_image = np.min(image_close, axis=-1) - shade_corrected_image
    return min_close_image
I lower-cased the variable names so that they stop getting highlighted as class names.
What about:
for cnt, Cell in enumerate(Image_Close):
    Temp_Cell = Image_Close[cnt]
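To answer the question in the title directly: yes, a plain Python list can hold NumPy arrays of any shape, and stacking them lets you take the element-wise minimum in one call. A minimal sketch with made-up 2x2 arrays:
import numpy as np

image_close = [np.array([[3, 1], [4, 1]]),
               np.array([[5, 9], [2, 6]]),
               np.array([[5, 3], [5, 8]])]    # a list holding 2-D arrays

stacked = np.dstack(image_close)              # shape (2, 2, 3)
min_close_image = np.min(stacked, axis=-1)    # element-wise minimum across the list
print(min_close_image)                        # [[3 1] [2 1]]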
