Normalization of a vector using loops in Python

Write a function that normalizes a vector (finds the unit vector). A vector can be normalized by dividing each of its components by its magnitude. The input to this function will be a vector, i.e. a 1-dimensional list containing 3 integers.
In the solution I devised, I assumed a predefined list of 3 elements. But I would like to know how to solve the problem using loops instead. I tried working on the problem; this is my solution so far:
from math import sqrt

def vector_normalization(my_vector):
    result = 0
    for x in my_vector:
        result = result + (x ** 2)
    magnitude = sqrt(result)
    nx_vector = my_vector[0] / magnitude
    ny_vector = my_vector[1] / magnitude
    nz_vector = my_vector[2] / magnitude
    n_vector = [nx_vector, ny_vector, nz_vector]
    return n_vector
Now, after I calculate the magnitude with the for loop, my program only ever produces three elements in the output list. But I want every element of an arbitrary input list to be normalized. Please suggest how I can achieve this.
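A minimal sketch of the loop-based generalization (the answers below use the same idea): replace the three hard-coded divisions with a second loop that appends each normalized component to a new list, so the function works for any length.
from math import sqrt

def vector_normalization(my_vector):
    # sum of squares of all components
    result = 0
    for x in my_vector:
        result += x ** 2
    magnitude = sqrt(result)
    # divide every component by the magnitude, whatever the length
    n_vector = []
    for x in my_vector:
        n_vector.append(x / magnitude)
    return n_vector

print(vector_normalization([1, 2, 3]))
# [0.2672612419124244, 0.5345224838248488, 0.8017837257372732]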

Also, you can use higher-order functions in Python like map:
from math import sqrt

vec = [1, 2, 3]
magnitude = sqrt(sum(map(lambda x: x**2, vec)))
normalized_vec = list(map(lambda x: x/magnitude, vec))
So normalized_vec becomes:
[0.2672612419124244, 0.5345224838248488, 0.8017837257372732]
Or, using NumPy:
import numpy as np

arr = np.array([1, 2, 3])
arr_normalized = arr / np.sqrt(np.sum(arr**2))
arr_normalized results in:
array([ 0.26726124, 0.53452248, 0.80178373])
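If NumPy is already a dependency, np.linalg.norm computes the magnitude directly; a minimal equivalent sketch:
import numpy as np

arr = np.array([1, 2, 3])
arr_normalized = arr / np.linalg.norm(arr)  # same result as above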

Please try the following code:
vector = [1, 2, 4]
y = 0
for x in vector:
    y += x**2
y = y**0.5
unit_vector = []
for x in vector:
    unit_vector.append(x/y)
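For example, with the vector above (a quick check, not part of the original answer):
print(unit_vector)
# approximately [0.218, 0.436, 0.873]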
Hope this helps.

def vector_normalization(vec):
    result = 0
    for x in vec:
        result = result + (x**2)
    magnitude = (result)**0.5
    x = vec[0]/magnitude
    y = vec[1]/magnitude
    z = vec[2]/magnitude
    vec = [x, y, z]
    return vec

Related

Vectorization for computing variance of a vector split at different points

I have a 1-D array arr and I need to compute the variance of all possible contiguous subvectors that begin at position 0. It may be easier to understand with a for loop:
import numpy as np

np.random.seed(1)
arr = np.random.normal(size=100)
res = []
for i in range(1, arr.size + 1):
    subvector = arr[:i]
    var = np.var(subvector)
    res.append(var)
Is there any way to compute res without the for loop?
Yes: since var = sum_of_squares/N - mean**2, and mean = sum/N, you can use cumsum to get the cumulative sums:
cumsum = np.cumsum(arr)
cummean = cumsum/(np.arange(len(arr)) + 1)
sq = np.cumsum(arr**2)
# note: this is the population variance (ddof=0); correct the dof here if needed
cumvar = sq/(np.arange(len(arr))+1) - cummean**2
np.allclose(res, cumvar)
# True
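The comment about the degrees of freedom refers to the fact that cumvar is the population variance (ddof=0), matching np.var's default. A minimal sketch of the rescaling that recovers the sample variance (ddof=1); the n = 1 entry is undefined:
n = np.arange(1, len(arr) + 1)
cumvar_ddof1 = np.full(len(arr), np.nan)
cumvar_ddof1[1:] = cumvar[1:] * n[1:] / (n[1:] - 1)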
With pandas, you could use expanding:
import pandas as pd
pd.Series(arr).expanding().var(ddof=0).values
NB: one advantage is that you can use var's parameters (ddof defaults to 1 in pandas, hence ddof=0 above to match np.var), and of course you can call many other methods in the same way.

Numpy only computation of mathematical expression involving a nested sum of functions over the same array

I need help computing a mathematical expression using only NumPy operations. The expression I want to compute is (reconstructed from the loop below): s = sum over i and i' of the product over s of f(x[i, s], x[i', s]),
where x is an (N, S) array and f is a NumPy function (one that works with broadcastable arrays, e.g. np.maximum, np.sum, np.prod, ...). If that is of importance, in my case f is a symmetric function.
So far my code looks like this:
s = 0
for xp in x:  # Loop over N...
    s += np.sum(np.prod(f(xp, x), axis=1))
It still has a loop over N that I'd like to get rid of.
Typically N is "large" (around 30k) but S is small (less than 20), so if anyone can find a trick to loop only over S, this would still be a major improvement.
I believe the problem would be easy by N-plicating the array, but one of size (32768, 32768, 20) requires 150 GB of RAM that I don't have. However, (32768, 32768) fits in memory, though I would appreciate a solution that does not allocate such an array.
Maybe a use of np.einsum with well-chosen arrays is possible?
Thanks for your replies. If any information is missing, let me know!
Have a nice day!
Edit 1:
The forms of f I'm interested in include (for now): f(x, y) = |x - y|, f(x, y) = |x - y|^2, f(x, y) = 2 - max(x, y).
Your loop is already quite efficient. Some possible approaches are:
Method 1 (looping over S):
import numpy as np

def f(x, y):
    return np.abs(x - y)

N = 200
S = 20
x_data = np.random.rand(N, S)  # (i, s)
y_data = np.random.rand(N, S)  # (i', s)

product = f(np.broadcast_to(x_data[:, 0][..., None], (N, N)),
            np.broadcast_to(y_data[:, 0][..., None], (N, N)).T)
for i in range(1, S):
    product *= f(np.broadcast_to(x_data[:, i][..., None], (N, N)),
                 np.broadcast_to(y_data[:, i][..., None], (N, N)).T)
total = np.sum(product)
Method 2 (dispatching over S blocks):
import numpy as np

def f(x, y):
    x1 = np.broadcast_to(x[:, None, ...], (x.shape[0], y.shape[0], x.shape[1]))
    y1 = np.broadcast_to(y[None, ...], (x.shape[0], y.shape[0], x.shape[1]))
    return np.abs(x1 - y1)

def f1(x1, y1):
    return np.abs(x1 - y1)

N = 5000
S = 20
x_data = np.random.rand(N, S)  # (i, s)
y_data = np.random.rand(N, S)  # (i', s)

def fun_new(x_data1, y_data1):
    s = 0
    pp = np.split(x_data1, S, axis=0)
    for xp in pp:
        s += np.sum(np.prod(f(xp, y_data1), axis=2))
    return s

def fun_op(x_data1, y_data1):
    s = 0
    for xp in x_data1:  # loop over N...
        s += np.sum(np.prod(f1(xp, y_data1), axis=1))
    return s

fun_new(x_data, y_data)
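As a quick check (a minimal sketch; it assumes N is divisible by S, which np.split requires, and reuses the definitions above), both versions should agree up to floating-point rounding:
print(np.isclose(fun_new(x_data, y_data), fun_op(x_data, y_data)))
# True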

Creating a matrix for an AR model, but np.arange returns a NoneType even though the matrix is correct. How do I convert it to an array? Below is my code so far

I have a function to generate the x and y matrices for an AR model in order to calculate the coefficients with the least-squares method. When I print the rows selected with np.arange, the x matrix is printed correctly, but when I convert the result to an array it is not correct. Please help me generate the array version of the matrix correctly. Thank you!
import numpy as np

# example lists
x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10, 12, 14]

def matrix(x, na):  # na is the order of the AR model
    X = np.array(x)
    N = len(X)
    p = na
    for n in range(p, N):
        u = X[np.arange((n-1), (n-p-1), -1)]
        matrix = print(u)
    array = np.array(matrix)  # not correct
    # need to get the negative versions of u,
    # but u isn't an array so I wasn't able to multiply by -1
    # matrix_y
    y = X[na:]
    return matrix, array
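One likely cause (a minimal sketch, not an answer from the original thread): print() returns None, so matrix never holds the row. Collecting each u in a list and converting once at the end avoids the problem; names follow the question:
import numpy as np

def matrix(x, na):  # na is the order of the AR model
    X = np.array(x)
    N = len(X)
    p = na
    rows = []
    for n in range(p, N):
        u = X[np.arange(n - 1, n - p - 1, -1)]
        rows.append(-u)            # negative version of u
    regressors = np.array(rows)    # 2-D array of shape (N - p, p)
    y = X[na:]                     # target vector
    return regressors, y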

Python two arrays, get all points within radius

I have two arrays, let's say x and y, that contain a few thousand data points.
Plotting a scatter plot gives a beautiful representation of them. Now I'd like to select all points within a certain radius, for example r = 10.
I tried this, but it does not work, as it's not a grid.
x = [1,2,4,5,7,8,....]
y = [-1,4,8,-1,11,17,....]
RAdeccircle = x**2+y**2
r = 10
regstars = np.where(RAdeccircle < r**2)
This is not the same as an nxn array, and RAdeccircle = x**2+y**2 does not seem to work as it does not try all permutations.
You can only apply ** element-wise to a NumPy array. In your case you are using lists, and ** on a list raises an error, so you first need to convert the lists to NumPy arrays using np.array():
import numpy as np

x = np.array([1, 2, 4, 5, 7, 8])
y = np.array([-1, 4, 8, -1, 11, 17])
RAdeccircle = x**2 + y**2
print(RAdeccircle)
r = 10
regstars = np.where(RAdeccircle < r**2)
print(regstars)
>>> [ 2 20 80 26 170 353]
>>> (array([0, 1, 2, 3], dtype=int64),)
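np.where returns the indices of the matching points, so the coordinates themselves can then be pulled out by indexing (a short follow-up sketch, not part of the original answer):
x_in, y_in = x[regstars], y[regstars]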

Element-wise maximum of two sparse matrices

Is there an easy/build-in way to get the element-wise maximum of two (or ideally more) sparse matrices? I.e. a sparse equivalent of np.maximum.
This did the trick:
def maximum(A, B):
    BisBigger = A - B
    BisBigger.data = np.where(BisBigger.data < 0, 1, 0)
    return A - A.multiply(BisBigger) + B.multiply(BisBigger)
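A quick way to sanity-check it (a minimal sketch; the imports and random test matrices are assumptions, not part of the original answer):
import numpy as np
from scipy import sparse

A = sparse.random(50, 50, density=0.1, format='csr')
B = sparse.random(50, 50, density=0.1, format='csr')
print(np.allclose(maximum(A, B).toarray(), np.maximum(A.toarray(), B.toarray())))
# True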
No, there's no built-in way to do this in scipy.sparse. The easy solution is
np.maximum(X.A, Y.A)
but this is obviously going to be very memory-intensive when the matrices have large dimensions and it might crash your machine. A memory-efficient (but by no means fast) solution is
from scipy.sparse import coo_matrix

# convert to COO, if necessary
X = X.tocoo()
Y = Y.tocoo()

Xdict = dict(((i, j), v) for i, j, v in zip(X.row, X.col, X.data))
Ydict = dict(((i, j), v) for i, j, v in zip(Y.row, Y.col, Y.data))

keys = list(set(Xdict.keys()).union(Ydict.keys()))

XmaxY = [max(Xdict.get((i, j), 0), Ydict.get((i, j), 0)) for i, j in keys]
XmaxY = coo_matrix((XmaxY, list(zip(*keys))))
Note that this uses pure Python instead of vectorized idioms. You can try shaving some of the running time off by vectorizing parts of it.
Here's another memory-efficient solution that should be a bit quicker than larsmans'. It's based on finding the set of unique indices for the nonzero elements in the two arrays using code from Jaime's excellent answer here.
import numpy as np
from scipy import sparse

def sparsemax(X, Y):
    # the indices of all non-zero elements in both arrays
    idx = np.hstack((X.nonzero(), Y.nonzero()))
    # find the set of unique non-zero indices
    idx = tuple(unique_rows(idx.T).T)
    # take the element-wise max over only these indices
    X[idx] = np.maximum(X[idx].A, Y[idx].A)
    return X

def unique_rows(a):
    void_type = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
    b = np.ascontiguousarray(a).view(void_type)
    idx = np.unique(b, return_index=True)[1]
    return a[idx]
Testing:
def setup(n=1000, fmt='csr'):
    return sparse.rand(n, n, format=fmt), sparse.rand(n, n, format=fmt)

X, Y = setup()
Z = sparsemax(X, Y)
print(np.all(Z.A == np.maximum(X.A, Y.A)))
# True

%%timeit X, Y = setup()
sparsemax(X, Y)
# 100 loops, best of 3: 4.92 ms per loop
The latest scipy (0.13.0) defines element-wise boolean comparisons for sparse matrices. So:
BisBigger = B>A
A - A.multiply(BisBigger) + B.multiply(BisBigger)
np.maximum does not (yet) work because it uses np.where, which is still trying to get the truth value of an array.
Curiously B>A returns a boolean dtype, while B>=A is float64.
Here is a function that returns a sparse matrix that is the element-wise maximum of two sparse matrices. It implements the answer by hpaulj:
def sparse_max(A, B):
    """
    Return the element-wise maximum of sparse matrices `A` and `B`.
    """
    AgtB = (A > B).astype(int)
    M = AgtB.multiply(A - B) + B
    return M
Testing:
import numpy as np
from scipy import sparse

A = sparse.csr_matrix(np.random.randint(-9, 10, 25).reshape((5, 5)))
B = sparse.csr_matrix(np.random.randint(-9, 10, 25).reshape((5, 5)))
M = sparse_max(A, B)
M2 = sparse_max(B, A)
# Test symmetry:
print((M.A == M2.A).all())
# Test that M is larger or equal to A and B, element-wise:
print((M.A >= A.A).all())
print((M.A >= B.A).all())
from scipy import sparse
from numpy import array
I = array([0,3,1,0])
J = array([0,3,1,2])
V = array([4,5,7,9])
A = sparse.coo_matrix((V,(I,J)),shape=(4,4))
A.data.max()
9
If you haven't already, you should try out IPython. You could have saved yourself time by making your sparse matrix A and then simply typing A. followed by Tab; this prints a list of the methods you can call on A. From this you would see that A.data gives you the non-zero entries as an array, and hence you just want the maximum of this.
