I'm computing thousands of gradients and would like to vectorize the computations in Python. The context is SVM and the loss function is Hinge Loss. Y is Mx1, X is MxN and w is Nx1.
L(w) = (lam/2) * ||w||^2 + (1/m) * Sum_{i=1..m} max(0, 1 - y[i]*X[i]*w)
The gradient of this is
grad = lam*w + (1/m) * Sum_{i=1..m} { -y[i]*X[i].T if y[i]*X[i]*w < 1, else 0 }
Instead of looping through each element of the sum and evaluating the max function, is it possible to vectorize this? I want to use something like np.where, as in the following:
grad = np.where(y*X.dot(w) < 1, -X.T.dot(y), 0)
This does not work because, where the condition is true, -X.T.dot(y) has the wrong dimensions.
Edit: here is a list-comprehension version; I would like to know if there's a cleaner or more optimal way:
def grad(X, y, w, lam):
    # cache y[i]*X[i].dot(w); each row of X.dot(w) is multiplied by a single element of y
    yXw = y * X.dot(w)
    # cache y[i]*X[i]; note each row of X is multiplied by a single element of y
    yX = X * y[:, np.newaxis]
    # average the max-function terms over the samples; axis=0 keeps the N gradient components
    return lam*w + np.mean([-yX[i] if yXw[i] < 1 else np.zeros(X.shape[1])
                            for i in range(len(y))], axis=0)
You have per-sample rows A[i] = -y[i]*X[i] and margins B[i] = y[i]*X[i]*w, and you want to return an array C such that C[i] = A[i] if B[i] < 1 and 0 otherwise. Consequently, all you need to do is
C := A * sign(max(0, 1-B)) # surprisingly similar to the original hinge loss, right? :)
since
if B < 1 then 1-B > 0, thus max(0, 1-B) > 0 and sign(max(0, 1-B)) == 1
if B >= 1 then 1-B <= 0, thus max(0, 1-B) = 0 and sign(max(0, 1-B)) == 0
so in your code it will be something like
B = (y*X.dot(w)).ravel()   # margins y[i]*X[i].dot(w), shape (M,)
A = -X*y[:,np.newaxis]     # rows -y[i]*X[i], shape (M, N)
C = A * np.sign(np.maximum(0, 1-B))[:, np.newaxis]
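Putting the pieces together, a fully vectorized gradient could look like this (a sketch; it assumes y and w are 1-D arrays, so ravel them first if they are Mx1 and Nx1 column vectors):
import numpy as np

def grad_vectorized(X, y, w, lam):
    B = y * X.dot(w)                      # margins y[i]*X[i].dot(w), shape (M,)
    A = -X * y[:, np.newaxis]             # rows -y[i]*X[i], shape (M, N)
    gate = np.sign(np.maximum(0, 1 - B))  # 1 where the margin is violated, else 0
    return lam * w + (A * gate[:, np.newaxis]).mean(axis=0)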
I'm trying to implement a loss function of the following form:
l_r = -2/(V*(V-1)) * Sum_{j=1..C} Sum_{k != l} ( M[j][k]·M[j][l] / (||M[j][k]|| * ||M[j][l]||) )
For context, M is a 576 x 2 x 2048 tensor, with C = 576 and V = 2.
Currently my code is highly inefficient and causes infeasible runtimes per epoch. I'm using a triple-nested for loop, which is probably why, but I'm not sure how to write an operation that computes the cosine similarity between each and every channel of the matrix without it.
Here's my naïve attempt at this:
M = self.kernel
norm_M = tf.norm(M, ord=2, axis=2)
norm_X = tf.norm(X, ord=2, axis=1)

# Compute reunion loss
sum = 0.0
for j in tf.range(C):
    for k in tf.range(V):
        for l in tf.range(V):
            if k == l:
                continue
            A = tf.tensordot(M[j][l], M[j][k], 1)
            B = norm_M[j][l] * norm_M[j][k]
            sum += A / B
l_r = -2/(self.V*(self.V-1)) * sum
We need to create a mask that filters out the l == k values, so that they don't enter the calculation.
import numpy as np
import tensorflow as tf

M = np.random.randint(0, 10, size=(5,8,7), dtype=np.int32).astype(np.float32)
V = M.shape[1]
norm_M = tf.linalg.norm(M, ord=2, axis=2)
mask = 1 - tf.eye(M.shape[1])  # filter out the diagonal (k == l) elements
A = tf.matmul(M, M, transpose_b=True)  # all pairwise dot products per channel
B = tf.matmul(norm_M[..., None], norm_M[..., None], transpose_b=True)  # outer products of the norms
C = (A/B)*mask
l_r = -2/(V*(V-1)) * tf.reduce_sum(C)
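As a quick sanity check (a sketch, assuming eager mode), the masked vectorized sum should match the triple loop from the question:
loop_sum = 0.0
for j in range(M.shape[0]):
    for k in range(M.shape[1]):
        for l in range(M.shape[1]):
            if k == l:
                continue
            num = np.dot(M[j][l], M[j][k])
            den = np.linalg.norm(M[j][l]) * np.linalg.norm(M[j][k])
            loop_sum += num / den
print(np.isclose(loop_sum, tf.reduce_sum(C).numpy()))  # expect: True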
I'm using the cvxpy library to perform portfolio optimization.
However, instead of using only the Markowitz covariance model, I would like to introduce new variables: y_i is a binary variable that equals 1 if asset i is included in the portfolio and 0 otherwise; m is the maximum number of assets I want to include in the portfolio; r is the return I want to achieve.
The Markowitz model, with constraint on the return is the following:
import numpy as np
import pandas as pd
from cvxpy import *
# assets names
tickers = ["AAA", "BBB", "CCC", "DDD", "EEE", "FFF"]
# return matrix
ret = pd.DataFrame(np.random.rand(1,6), columns = tickers)
# Variance-Covariance matrix
covm = pd.DataFrame(np.random.rand(6,6), columns = tickers, index = tickers)
# problem setting
x = Variable(len(tickers)) # xi variables
er = np.asarray(ret.T) * x # expected return
min_ret = 0.2 # minimum return
risk = quad_form(x, np.asmatrix(covm)) # risk
prob = Problem(Minimize(risk), # problem setting function
[sum(x) == 1, er >= min_ret, x >= 0])
prob.solve()
The solution of this problem gives out a percentage to invest in each asset. But what if I want to invest in only a limited number of assets m?
In order to do that I need to introduce the y_i variables and make sure that their sum equals m (called k in the code below).
Hence, it should be something like this:
x = Variable(n)
er = np.asarray(ret.T) * x
risk = quad_form(x, np.asmatrix(covm))
y = Variable(n, boolean=True) #adding boolean variables
prob = Problem(Minimize(risk), [sum(x) == 1, er >= min_ret, x >= 0, sum(y) == k, sum(x) <= sum(y)])
prob.solve()
print(x.value)
print(y.value)
Unfortunately, this last chunk of code doesn't produce any result. Do you know why? Is there another method to solve this problem?
In short, you have to link the variables x and y.
In case of long only constraints:
eps = 1e-5
[-1 + eps <= x - y, x - y <= 0]
This will set y to 1 if x > 0 and y to 0 if x == 0.
To make it work properly and not be bothered by assets that are just marginally above 0, you should also introduce a buy-in threshold:
[x - y >= buy_in_threshold - 1]
Note that this is a mixed-integer problem.
The ECOS_BB solver can deal with it if the problem remains small. Otherwise, you will need a commercial-grade optimizer.
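Putting it all together, a sketch of the full mixed-integer model (random data; k and buy_in_threshold are hypothetical values, and ECOS_BB must be requested explicitly in recent cvxpy versions):
import numpy as np
import cvxpy as cp

n = 6                     # number of assets
k = 3                     # hypothetical maximum number of assets to hold
min_ret = 0.2             # minimum expected return
buy_in_threshold = 0.05   # hypothetical minimum position size
ret = np.random.rand(n)   # expected returns
S = np.random.rand(n, n)
covm = S @ S.T            # random positive semidefinite covariance matrix

x = cp.Variable(n)
y = cp.Variable(n, boolean=True)
risk = cp.quad_form(x, covm)
prob = cp.Problem(cp.Minimize(risk),
                  [cp.sum(x) == 1,
                   ret @ x >= min_ret,
                   x >= 0,
                   cp.sum(y) <= k,
                   x - y <= 0,                      # x[i] > 0 forces y[i] = 1
                   x - y >= buy_in_threshold - 1])  # y[i] = 1 forces x[i] >= threshold
prob.solve(solver=cp.ECOS_BB)
print(x.value, y.value)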
I have a piece of code like this:
a = Y[0]; b = Z[0]
print(a, b)
loss = 0
for i in range(len(a)):
    k = len(a)-i
    loss += (2**(k-1))*np.abs(a[i]-b[i])
print(loss)
where Y and Z have dimensions 250 x 10 and each row is a 10-bit binary value. For example, print(a, b) prints: [1 0 0 0 0 0 0 0 1 0] [0 0 0 1 1 1 1 1 0 0]
Now I want to apply the two-line computation inside the for loop to each pair of corresponding rows of Y and Z. But I don't want to do something like this:
for j in range(Y.shape[0]):
    a = Y[j]; b = Z[j]
    loss = 0
    for i in range(len(a)):
        k = len(a)-i
        loss += (2**(k-1))*np.abs(a[i]-b[i])
    print(loss)
I am essentially trying to build a custom loss function in Keras/TensorFlow, and a for loop like that doesn't scale to large tensor operations. How do I do it with some sort of batched matrix operation instead of for loops?
You could do this:
factor = 2**np.arange(Y.shape[1])[::-1]
loss = np.sum(factor * np.abs(Y-Z), axis=-1)
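Since the end goal is a Keras/TensorFlow custom loss, the same weighting carries over to tensor ops. A minimal sketch (the name bit_weighted_loss is hypothetical; it assumes the bits live on the last axis):
import tensorflow as tf

def bit_weighted_loss(y_true, y_pred):
    n = tf.shape(y_true)[-1]
    # weights 2^(n-1), ..., 2^0, matching factor in the numpy version
    factor = tf.cast(2 ** tf.range(n - 1, -1, -1), y_true.dtype)
    return tf.reduce_sum(factor * tf.abs(y_true - y_pred), axis=-1)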
If only the inner loop needs to be vectorized with numpy:
import numpy as np

for j in range(Y.shape[0]):
    a = Y[j]; b = Z[j]
    loss = 0
    """
    for i in range(len(a)):
        k = len(a)-i
        loss += (2**(k-1))*np.abs(a[i]-b[i])
    """
    k = np.arange(len(a), 0, -1)
    loss = np.sum(np.multiply(2**(k-1), np.abs(a-b)))
    print(loss)
EDIT
To remove the explicit outer loop as well, use the following approach (note that np.apply_along_axis still calls the Python function once per row under the hood, so this is mainly a readability win):
import numpy as np

# This function computes the loss for one concatenated row pair
def get_loss(row, sz):
    k = np.arange(sz, 0, -1)
    return np.sum(np.multiply(2**(k-1), np.abs(row[:sz]-row[sz:])))

# Sample input matrices
A = np.random.random((5, 10))
B = np.random.random((5, 10))

# Concatenate the input matrices
AB = np.concatenate((A, B), axis=1)

# apply the function on each row pair
result = np.apply_along_axis(get_loss, 1, AB, A.shape[1])

# result is a 1D array of the losses
print(result.shape)
I am trying to implement the DRD (distance reciprocal distortion) metric.
I already managed to calculate NUBN with numpy operations, so that part is fast, but I can't find a way to escape slow Python looping for the DRD part. Here is my current calculation of DRD:
import math
import numpy as np

def drd(im, im_gt):
    height, width = im.shape
    W = np.array([[1/math.sqrt(x**2+y**2) if x != 0 or y != 0 else 0
                   for x in range(-2, 3)] for y in range(-2, 3)])
    W /= W.sum()
    drd = 0
    s = []
    for y, x in zip(*np.where(im_gt != im)):
        if x > 1 and y > 1 and x + 2 < width and y + 2 < height:
            s.append(im_gt[y-2:y+3, x-2:x+3] == im_gt[y, x])
        else:
            for yy in range(y-2, y+3):
                for xx in range(x-2, x+3):
                    if xx > 1 and yy > 1 and xx < width - 1 and yy < height - 1:
                        drd += abs(im_gt[yy, xx] - im[y, x]) * W[yy-y+2, xx-x+2]
    return drd + np.sum(s * W)

drd(np.random.choice([0, 1], size=(100, 100)), np.random.choice([0, 1], size=(100, 100)))
Can anyone think of a faster way to do this? (My timings were taken on 1000x1000 inputs.)
The first step in speeding things up with numpy is to break up your sequence of operations into something that can be applied to an entire array. Let's start with an easy one: removing the comprehensions in the computation of W:
W = np.hypot(np.arange(-2, 3), np.arange(-2, 3)[:, None])
np.reciprocal(W, where=W.astype(bool), out=W)
W /= W.sum()
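A quick check (a sketch) that this reproduces the comprehension-built W from the question:
import math
import numpy as np

W_loop = np.array([[1/math.sqrt(x**2 + y**2) if x != 0 or y != 0 else 0
                    for x in range(-2, 3)] for y in range(-2, 3)])
W_loop /= W_loop.sum()
print(np.allclose(W, W_loop))  # expect: True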
The next thing (which is hinted at above with where=W.astype(bool)) is to use masking where appropriate to apply a condition to an entire array. Your algorithm is as follows:
For each location that does not match between im and im_gt, compute the sum of the elements of W centered on that location where they do not match.
You can compute this with a convolution with W. Locations where im == im_gt are simply discarded. Locations where im_gt == 1 need to be flipped by subtracting from W.sum(), since you need to sum the zeros, not the ones for those elements. Convolution is implemented in scipy.signal.convolve2d. You get the same edge effects by using mode='same' and adjusting the edge pixels carefully. You can cheat and get the edge sums by convolving with an array of ones:
import numpy as np
from scipy.signal import convolve2d

# Compute this once outside the function
W = np.hypot(np.arange(-2, 3), np.arange(-2, 3)[:, None])
np.reciprocal(W, where=W.astype(bool), out=W)
W /= W.sum()

def drd(im, im_gt):
    m0 = im != im_gt
    m1 = im_gt == 0
    m2 = im_gt == 1
    s1 = convolve2d(m1, W, mode='same')[m0 & m1].sum()
    s2 = convolve2d(m2, W, mode='same')[m0 & m2].sum()
    return s1 + s2
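As a quick usage check, mirroring the 1000x1000 setup from the question (a sketch; timings will depend on your machine):
rng = np.random.default_rng(0)
im = rng.integers(0, 2, size=(1000, 1000))
im_gt = rng.integers(0, 2, size=(1000, 1000))
print(drd(im, im_gt))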
I have a list of lists m which I need to modify.
I need the sum of each row to be greater than A and the sum of each column to be less than B.
I have something like this
from random import randint

x = 5  # or another number, not relevant
rows = len(m)
cols = len(m[0])

for r in range(rows):
    while sum(m[r]) < A:
        c = randint(0, cols-1)
        m[r][c] += x

for c in range(cols):
    cant = sum(m[r][c] for r in range(rows))
    while cant > B:
        r = randint(0, rows-1)
        if m[r][c] >= x:  # I don't want negatives
            m[r][c] -= x
            cant -= x
My problem is that I need to satisfy both conditions, and this way, after the second loop, I can't be sure the first condition is still met.
Any suggestions on how to satisfy both conditions, with good performance? I could definitely consider using numpy.
Edit (an example)
# input
m = [[0,0,0],
     [0,0,0]]
A = 20
B = 25

# one desired output (since it chooses random positions)
m = [[10,0,15],
     [15,0,5]]
I should add:
this is for generating the random initial population of a genetic algorithm. The restrictions make each individual a feasible solution, and I need to run this about 80 times to get different possible solutions.
Something like this should do the trick:
import numpy
from scipy.optimize import linprog

A = 10
B = 20
m = 2
n = m * m

# the coefficients of a linear function to minimize.
# setting this to all ones minimizes the sum of all variable
# values in the matrix, which solves the problem, but see below.
c = numpy.ones(n)

# the constraint matrix.
# This is matrix-multiplied with the current solution candidate
# to form the left hand side of a set of normalized
# linear inequality constraint equations, i.e.
#
# x_0 * A_ub[0][0] + x_1 * A_ub[0][1] <= b_0
# x_0 * A_ub[1][0] + x_1 * A_ub[1][1] <= b_1
# ...
A_ub = numpy.zeros((2 * m, n))

# row sums. Since the <= inequality is a fixed component,
# we just multiply everything by (-1), i.e. we demand that
# the negative sums are smaller than the negative limit -A.
#
# Assign row ranges all at once, because numpy can do this.
for r in range(0, m):
    A_ub[r][r * m:(r + 1) * m] = -1

# We want the sum of the x in each (flattened)
# column to be smaller than B
#
# The manual stepping for the column sums in row-major encoding
# is a little bit annoying here.
for r in range(0, m):
    for j in range(0, m):
        A_ub[r + m][r + m * j] = 1

# the actual upper limits for the normalized inequalities.
b_ub = [-A] * m + [B] * m

# hand the linear program to scipy
solution = linprog(c, A_ub=A_ub, b_ub=b_ub)

# bring the solution into the desired matrix form
print(numpy.reshape(solution.x, (m, m)))
Caveats
I use <=, not < as indicated in your question, because that's what scipy's linprog supports.
This minimizes the total sum of all values in the target vector.
For your use case, you probably want to minimize the distance to the original sample instead, which the linear program cannot handle, since neither the squared error nor the absolute difference can be expressed as a linear combination (which is what c stands for). For that, you will probably need to go to scipy's full minimize().
Still, this should give you a rough idea.
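For completeness, a sketch of that minimize() route, reusing A_ub, b_ub, n, and m from above; m0 is a hypothetical original sample (flattened) to stay close to:
from scipy.optimize import minimize

m0 = numpy.zeros(n)  # hypothetical original sample, flattened
b = numpy.array(b_ub)
# 'ineq' means fun(v) >= 0, i.e. A_ub @ v <= b_ub
cons = {'type': 'ineq', 'fun': lambda v: b - A_ub.dot(v)}
result = minimize(lambda v: numpy.sum((v - m0) ** 2),
                  x0=numpy.full(n, float(A)),  # a feasible starting point
                  bounds=[(0, None)] * n,
                  constraints=cons)
print(numpy.reshape(result.x, (m, m)))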
A NumPy solution:
import numpy as np

val = B / len(m)              # column sums <= B
assert val * len(m[0]) >= A   # row sums >= A

# create a float array shaped like m, filled with val
arr = np.full_like(m, val, dtype=float)
I chose to ignore the original content of m - it's all zero in your example anyway.
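Applied to the example from the question (note the column sums land exactly on B, so this satisfies <=, not the strict <):
m = [[0, 0, 0],
     [0, 0, 0]]
A, B = 20, 25
val = B / len(m)                 # 12.5
arr = np.full_like(m, val, dtype=float)
print(arr.sum(axis=1))           # row sums: [37.5 37.5] >= A
print(arr.sum(axis=0))           # column sums: [25. 25. 25.]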
from random import randint

m = [[0,0,0],
     [0,0,0]]
A = 20
B = 25
x = 1  # or another number, not relevant

def runner(list1, a1, b1, x1):
    list1_backup = [row[:] for row in list1]  # copy the rows so the retry starts fresh
    rows = len(list1)
    cols = len(list1[0])
    for r in range(rows):
        while sum(list1[r]) <= a1:
            c = randint(0, cols-1)
            list1[r][c] += x1
    for c in range(cols):
        cant = sum(list1[r][c] for r in range(rows))
        while cant >= b1:
            r = randint(0, rows-1)
            if list1[r][c] >= x1:  # I don't want negatives
                list1[r][c] -= x1
                cant -= x1
    good_a_int = 0
    for r in range(rows):
        test1 = sum(list1[r]) > a1
        good_a_int += 0 if test1 else 1
    if good_a_int == 0:
        return list1
    else:
        return runner(list1=list1_backup, a1=a1, b1=b1, x1=x1)

m2 = runner(m, A, B, x)
for row in m2:
    print(','.join("{:>3}".format(v) for v in row))