How do I convert the MATLAB code
v = [1:n]
to PyTorch?
Writing a whole loop for that seems inefficient.
You can use the arange method from PyTorch directly. Note that arange excludes its end point, so to reproduce MATLAB's inclusive 1:n the upper bound has to be n + 1:
torch_v = torch.arange(1, n + 1)
Reference: https://pytorch.org/docs/master/torch.html?highlight=arange#torch.arange
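For instance (n = 5 is just an illustrative value), the call returns an integer tensor that includes the end value:
import torch
n = 5
torch_v = torch.arange(1, n + 1)
print(torch_v)  # tensor([1, 2, 3, 4, 5])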
You can form the sequence of consecutive numbers in NumPy (again, arange excludes the end point, hence n + 1):
import numpy as np
v = np.arange(1, n + 1)
If you want a torch tensor, you can convert the NumPy array like this:
torch_v = torch.from_numpy(v)
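A small end-to-end sketch of that conversion (illustrative n; note that from_numpy shares memory with the source array, so changes to one are visible in the other):
import numpy as np
import torch

n = 5
v = np.arange(1, n + 1)          # array([1, 2, 3, 4, 5])
torch_v = torch.from_numpy(v)    # tensor([1, 2, 3, 4, 5])
v[0] = 99
print(torch_v[0])                # tensor(99) -- the tensor sees the change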
I am trying to use a single line of code to make a matrix that is zero everywhere except for a custom value on the diagonal. I am able to do it with the code below, but I am wondering if I can do it using only np.eye?
import numpy as np
a = np.eye(4, 4, k=0)
np.fill_diagonal(a, 4)
print(a)
Try the identity matrix from the numpy module:
a = np.identity(10) * 4
import numpy as np
a = np.eye(4)*4
print(a)
I would avoid np.eye() altogether and just use np.fill_diagonal() on a zeroed matrix, if you are not using any of its features:
import numpy as np
def value_eye_fill(value, n):
    result = np.zeros((n, n))
    np.fill_diagonal(result, value)
    return result
That should be the fastest approach for larger inputs, within NumPy.
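If you want to verify that on your own machine, a rough benchmark sketch with timeit (the size and iteration count below are arbitrary, and the exact numbers will vary):
import timeit
import numpy as np

n = 2000
fill_time = timeit.timeit(
    "a = np.zeros((n, n)); np.fill_diagonal(a, 4)",
    globals={"np": np, "n": n}, number=20)
eye_time = timeit.timeit(
    "a = 4 * np.eye(n)",
    globals={"np": np, "n": n}, number=20)
print(fill_time, eye_time)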
Of course you can also use np.eye() and avoid np.fill_diagonal() by just multiplying the value by the result of np.eye():
import numpy as np
def value_eye_fill(value, n):
    return value * np.eye(n)
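A quick check that both helpers reproduce the matrix from the question (minimal sketch):
a = value_eye_fill(4, 4)
print(a)
# [[4. 0. 0. 0.]
#  [0. 4. 0. 0.]
#  [0. 0. 4. 0.]
#  [0. 0. 0. 4.]]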
I'm trying to calculate the difference between 2 images. I'm expecting an integer as my result, but I'm not getting what I expect.
from imageio import imread
#https://raw.githubusercontent.com/glennford49/sampleImages/main/cat1.png
#https://raw.githubusercontent.com/glennford49/sampleImages/main/cat2.png
img1="cat1.png" # 183X276
img2="cat2.png" # 183x276
numpyImg1=[]
numpyImg2=[]
img1=imread(img1)
img2=imread(img2)
numpyImg1.append(img1)
numpyImg2.append(img2)
diff = numpyImg1[0] - numpyImg2[0]
result = sum(abs(diff))
print("difference:",result)
prints:
# an array rather than a single integer
target:
difference: <int>
You are using Python's built-in sum function which only performs a summation along the first dimension of a NumPy array. This is the reason why you are getting a 2D array as the output instead of the single integer you expect. Please use numpy.sum on your result instead which will internally flatten a multi-dimensional NumPy array then sum over the results. In addition, you might as well use numpy.abs for the absolute computation too:
import numpy as np
result = np.sum(np.abs(diff))
Using numpy.sum means that you no longer need to reshape your array into a flattened representation prior to using the built-in sum function in your answer. For future development, always use NumPy methods on any arithmetic operations you want to perform on NumPy arrays. It prevents unexpected behaviour such as what you've just seen.
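To see the difference on a small array (a minimal illustration):
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])
print(sum(x))     # [5 7 9] -- the built-in sum only reduces the first axis
print(np.sum(x))  # 21      -- np.sum flattens and sums everything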
A (colored) image is a 3D matrix, so what you can do is convert those images to NumPy arrays using numpy.array(image) and then take the difference of the two arrays.
The result will then itself be a 3-dimensional array.
I believe the dimension of the NumPy array is not 1; you need to apply sum as many times as the array has dimensions to end up with a single value.
[1,2,3]
sum gives: 6
[[1,2,3],[1,2,3]]
sum gives: [2,4,6]
doing a second sum operation gives: 12 (a single value)
You may need to add one more "sum(result)" before printing the data (if the image is 2-dimensional), e.g.:
numpyImg2.append(img2)
diff = numpyImg1[0] - numpyImg2[0]
result = sum(abs(diff))
result = sum(result)  # repeat once per remaining dimension
print("difference:", result)
This is my answer for finding the difference of two images across the RGB channels.
If two identical images are subtracted, it prints:
difference per pixel: 0
from numpy import sum
from imageio import imread
#https://github.com/glennford49/sampleImages/blob/main/cat1.png
#https://github.com/glennford49/sampleImages/blob/main/cat2.png
img1="cat1.png"
img2="cat2.png"
numpyImg1=[]
numpyImg2=[]
img1=imread(img1)
img2=imread(img2)
numpyImg1.append(img1)
numpyImg2.append(img2)
diff = numpyImg1[0] - numpyImg2[0]
result = sum(diff/numpyImg1[0].size)
result = sum(abs(result.reshape(-1)))
print("difference per pixel:",result)
I have two numpy arrays, with just the 3-dimensional coordinates of two molecules.
I need to implement the following equation, and I'm having problems subtracting each coordinate of one array from the corresponding coordinate of the other and then squaring the result.
I have tried the following, but since I'm still learning I feel that I am making some major mistake. The simple code I use is:
a = [math.sqrt(1/3*((i[:,0]-j[:,0])**2) + ((i[:,1] - j[:,1])**2) + ((i[:,2]-j[:,2])**2)) for i, j in zip(coordenates_2, coordenates_1)]
It's NumPy, so you can do it easily, as in the following example:
import numpy as np
x1 = np.random.randn(3,3,3)
x2 = np.random.randn(3,3,3)
res = np.sqrt(np.mean(np.power(x1-x2,2)))
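Applied to two coordinate arrays of shape (N, 3) (the array names below are illustrative), the same idea gives either per-point distances or a single RMSD-style value:
import numpy as np

coords_1 = np.random.randn(10, 3)
coords_2 = np.random.randn(10, 3)

diff = coords_2 - coords_1
per_point = np.sqrt(np.sum(diff ** 2, axis=1))       # distance for each of the N points
rmsd = np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))   # one number for the whole set
print(per_point.shape, rmsd)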
I have encountered the following function in MATLAB that sequentially flips all of the dimensions in a matrix:
function X = flipall(X)
    for i = 1:ndims(X)
        X = flipdim(X,i);
    end
end
Where X has dimensions (M,N,P) = (24,24,100). How can I do this in Python, given that X is a NumPy array?
The equivalent to flipdim in MATLAB is flip in numpy. Be advised that this is only available as of version 1.12.0.
Therefore, it's simply:
import numpy as np
def flipall(X):
    Xcopy = X.copy()
    for i in range(X.ndim):
        Xcopy = np.flip(Xcopy, i)
    return Xcopy
As such, you'd simply call it like so:
Xflip = flipall(X)
However, if you know a priori that you have only three dimensions, you can hard code the operation by simply doing:
def flipall(X):
    return X[::-1, ::-1, ::-1]
This flips each dimension one right after the other.
If you don't have version 1.12.0 (thanks to user hpaulj), you can build the index from slice objects to do the same operation. Note that NumPy expects a tuple of slices here rather than a list:
import numpy as np
def flipall(X):
    return X[tuple(slice(None, None, -1) for _ in X.shape)]
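A quick sanity check that the three approaches in this answer agree (a minimal sketch on a small random array):
import numpy as np

X = np.random.randn(24, 24, 100)
a = flipall(X)                                        # slice-based version above
b = X[::-1, ::-1, ::-1]                               # hard-coded 3D version
c = np.flip(np.flip(np.flip(X, 0), 1), 2)             # np.flip on each axis in turn
print(np.array_equal(a, b) and np.array_equal(b, c))  # True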
I'm looking for dynamically growing vectors in Python, since I don't know their length in advance. In addition, I would like to calculate distances between these sparse vectors, preferably using the distance functions in scipy.spatial.distance (although any other suggestions are welcome). Any ideas how to do this? (Initially, it doesn't need to be efficient.)
Thanks a lot in advance!
You can use regular python lists (which are dynamic) as vectors. Trivial example follows.
from scipy.spatial.distance import sqeuclidean
a = [1,2,3]
b = [0,0,0]
print(sqeuclidean(a, b))  # 14
As per aganders3's suggestion, do note that you can also use numpy arrays if needed:
import numpy
a = numpy.array([1,2,3])
If the sparse part of your question is crucial, I'd use scipy for that - it has support for sparse matrices. You can define a 1xn matrix and use it as a vector. This works (the parameter is the size of the matrix, which is filled with zeros by default):
sqeuclidean(scipy.sparse.coo_matrix((1,3)),scipy.sparse.coo_matrix((1,3))) # 0
There are many kinds of sparse matrices, some of them dictionary-based. You can define a row sparse matrix from a list like this:
scipy.sparse.csr_matrix([1,2,3])
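If the distance functions don't accept sparse inputs directly (behaviour varies across scipy versions; they generally expect dense 1-D vectors), one workaround - a minimal sketch, not necessarily memory-efficient - is to convert to dense arrays first with .toarray():
import scipy.sparse
from scipy.spatial.distance import sqeuclidean

a = scipy.sparse.csr_matrix([1, 2, 3])
b = scipy.sparse.csr_matrix([0, 0, 0])
print(sqeuclidean(a.toarray().ravel(), b.toarray().ravel()))  # 14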
Here is how you can do it in numpy:
import numpy as np
a = np.array([1, 2, 3])
b = np.array([0, 0, 0])
c = np.sum(((a - b) ** 2)) # 14