Numerical computation of softmax cross entropy gradient - python

I implemented the softmax() function, softmax_crossentropy() and the derivative of softmax cross entropy: grad_softmax_crossentropy(). Now I wanted to verify the gradient of the softmax cross entropy function numerically. I tried to do this with the finite difference method, but the approximation returns only zeros. Here is my code with some random data:
import numpy as np
batch_size = 3
classes = 10
# random preactivations
a = np.random.randint(1,100,(batch_size,classes))
# random labels
y = np.random.randint(0,np.size(a,axis=1),(batch_size,1))
def softmax(a):
    epowa = np.exp(a-np.max(a,axis=1,keepdims=True))
    return epowa/np.sum(epowa,axis=1,keepdims=True)

print(softmax(a))

def softmax_crossentropy(a, y):
    y_one_hot = np.eye(classes)[y[:,0]]
    return -np.sum(y_one_hot*np.log(softmax(a)),axis=1)

print(softmax_crossentropy(a, y))

def grad_softmax_crossentropy(a, y):
    y_one_hot = np.eye(classes)[y[:,0]]
    return softmax(a) - y_one_hot

print(grad_softmax_crossentropy(a, y))
# Finite difference approach to compute grad_softmax_crossentropy()
eps = 1e-5
print((softmax_crossentropy(a+eps,y)-softmax_crossentropy(a,y))/eps)
What did I do wrong?

Here's how you could do it. I think you're referring to the gradient wrt the activations indicated by y's indicator matrix.
First, I instantiate a as a float array so that individual entries can be changed:
a = np.random.randint(1,100,(batch_size,classes)).astype("float")
Then,
np.diag(grad_softmax_crossentropy(a, y)[:, y.flatten()])
array([ -1.00000000e+00, -1.00000000e+00, -4.28339542e-04])
But also
b = a.copy()
for i, o in zip(y.max(axis=1), range(y.shape[0])):
    b[o, i] += eps
(softmax_crossentropy(b,y)-softmax_crossentropy(a,y))/eps
[ -1.00000000e+00 -1.00000000e+00 -4.28125536e-04]
So basically you have to perturb a single entry a_ij, not the entirety of a. Adding the same eps to every preactivation leaves softmax unchanged (softmax is invariant to adding a constant to all logits), which is why your finite difference comes out as exactly zero.
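If you want to check the whole analytic gradient at once, here is a minimal sketch of a finite-difference loop, assuming the functions defined above; it uses fresh small random preactivations a_test so that log(softmax) stays numerically well-behaved:
a_test = np.random.randn(batch_size, classes)
num_grad = np.zeros_like(a_test)
for r in range(a_test.shape[0]):
    for c in range(a_test.shape[1]):
        a_plus = a_test.copy()
        a_plus[r, c] += eps  # perturb a single entry only
        # perturbing a[r, c] only changes the loss of sample r
        num_grad[r, c] = (softmax_crossentropy(a_plus, y)[r]
                          - softmax_crossentropy(a_test, y)[r]) / eps
print(np.allclose(num_grad, grad_softmax_crossentropy(a_test, y), atol=1e-4))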


Multiclass logistic regression from scratch

I'm trying to implement multiclass logistic regression from scratch. The dataset is MNIST.
I built some functions such as hypothesis, sigmoid, cost function, cost function derivative, and gradient descent. My code is below.
I'm struggling with:
All images are labeled with the digit they represent, so there are a total of 10 classes.
Inside the gradient descent function, I need to loop through each class, but I do not know how to apply it using the One-vs-All method.
In other words, what I need to do is:
How to filter each class inside gradient descent.
After that, how to build a function to predict the test set.
Here is my code.
import numpy as np
import pandas as pd
# Only training data set
# the test data will be load later.
url='https://drive.google.com/file/d/1-MO8oCfq4KU361QeeL4DdafVBhZePUNT/view?usp=sharing'
url='https://drive.google.com/uc?id=' + url.split('/')[-2]
df = pd.read_csv(url,header = None)
X = df.values[:, 0:-1]
y = df.values[:, -1]
m = np.size(X, 0)
y = np.array(y).reshape(m, 1)
X = np.c_[ np.ones(m), X ] # Bias
def hypothesis(X, thetas):
    return sigmoid( X.dot(thetas)) #- 0.0000001

def sigmoid(z):
    return 1/(1+np.exp(-z))

def losscost(X, y, m, thetas):
    h = hypothesis(X, thetas)
    return -(1/m) * ( y.dot(np.log(h)) + (1-y).dot(np.log(1-h)) )

def derivativelosscost(X, y, m, thetas):
    h = hypothesis(X, thetas)
    return (h-y).dot(X)/m

def descendinggradient(X, y, m, epoch, alpha, thetas):
    n = np.size(X, 1)
    J_historico = []
    for i in range(epoch):
        for j in range(0,10): # 10 classes
            # How to filter each class inside here (inside this def descendinggradient)?
            # 2 lines below are wrong.
            #thetas = thetas - alpha * derivativelosscost(X, y, m, thetas)
            #J_historico = J_historico + [losscost(X, y, m, thetas)]
    return [thetas, J_historico]
alpha = 0.01
epoch = 50
(thetas, J_historico) = descendinggradient(X, y, m, epoch, alpha)
# After that, how to build a function to predict the test set.
Let me explain this problem step-by-step:
First, since your code doesn't provide the actual data or a link to it, I've created a random dataset, followed by the same commands you used to create X and y:
batch_size = 20
num_classes = 10
rng = np.random.default_rng(seed=42)
df = pd.DataFrame(
    4 * rng.random((batch_size, num_classes + 1)) - 2,  # Create Random Array Between -2, 2
    columns=['X0','X1','X2','X3','X4','X5','X6','X7','X8', 'X9','Y']
)
X = df.values[:, 0:-1]
y = df.values[:, -1]
m = np.size(X, 0)
y = np.array(y).reshape(m, 1)
X = np.c_[ np.ones(m), X ] # Bias
Next, let's take a look at your hypothesis function. If we just run hypothesis and look at the first sample, we get a vector with 10 entries (one per class). I also needed to provide initial thetas for this case:
thetas = rng.random((X.shape[1],num_classes))
h = hypothesis(X, thetas)
print(h[0])
>>>[0.89701729 0.90050806 0.98358408 0.81786334 0.96636732 0.97819512
0.89118488 0.87238045 0.70612173 0.30256924]
Basically the function calculates a "probability"[1] for each class.
At this point we get to the first issue in your code. The sigmoid function returns "probabilities" that are not "connected" to each other. To set those "probabilities" in relation to one another we need another function: SOFTMAX. You will find plenty of implementations of this function. In short: it rescales the sigmoid outputs so that the sum over all class "probabilities" equals 1.
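A minimal sketch of such a softmax, applied row-wise to a (batch, classes) matrix of scores (my own version, not taken from your code):
def softmax(z):
    # subtract the row-wise max for numerical stability
    e = np.exp(z - np.max(z, axis=1, keepdims=True))
    # normalize so every row sums to 1
    return e / np.sum(e, axis=1, keepdims=True)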
So for your second question, "how to build a function to predict the test set", we only need to take the argmax to determine the class:
h = hypothesis(X, thetas)
p = softmax(h) # needs to be implemented
prediction = np.argmax(p, axis=1)
print(prediction)
>>>[2 5 5 8 3 5 2 1 3 5 2 3 8 3 3 9 5 1 1 8]
Now that we know how to predict a class, we also need to know where to set up the training. We want to do this directly after the softmax function, but instead of using the argmax to determine the winning class, we use the cost function and its derivative. The problem in your code: you used the cross-entropy loss for a binary problem. The binary problem also doesn't need the softmax function, because the sigmoid already relates the two binary classes to each other. Since we are not interested in the value of the multiclass cross-entropy loss itself, only in its derivative, we can compute the derivative directly.
The conversion from binary cross-entropy to multiclass is somewhat unintuitive at first sight. I recommend reading a bit about it before implementing. After that you basically reuse your line:
thetas = thetas - alpha * derivativelosscost(X, y, m, thetas)
for updating the thetas (a sketch of the full multiclass update is below).
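A minimal sketch of that multiclass update, assuming integer class labels y in {0, ..., 9} (as in MNIST), the softmax sketched above, and the X, m, alpha and epoch from your code; the derivative of the softmax cross-entropy loss with respect to the logits is simply p - Y_onehot:
Y_onehot = np.eye(10)[y.astype(int).flatten()]   # (m, 10) one-hot labels
thetas = np.zeros((X.shape[1], 10))              # one column of weights per class
for i in range(epoch):
    p = softmax(X.dot(thetas))                   # (m, 10) class probabilities
    grad = X.T.dot(p - Y_onehot) / m             # gradient of the cross-entropy loss
    thetas = thetas - alpha * grad
prediction = np.argmax(softmax(X.dot(thetas)), axis=1)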
[1] These are not actual probabilities, but that is a completely different topic.

Principal Component Analysis (PCA) in Python numpy using the Snapshot method

I am trying to implement PCA using numpy to mimic the results from sklearn's decomposition.PCA.
As input I am using N flattened images, each of fixed size M = 128x192 (the image dimensions), joined horizontally into a single matrix D of dimensions MxN.
I am aiming to use the snapshot method, as other implementations (see here and here) crash my build while computing np.cov, since the covariance matrix C = D(D^T) would be of size MxM.
The snapshot method first computes C_acute = (D^T)D, then computes the (acute) eigenvectors and values of this NxN matrix. This gives eigenvectors that are (D^T)v, and eigenvalues that are the same.
To retrieve the eigenvectors v from the (acute) eigenvectors, we simply do v = (1/eigenvalue) * (D(v_acute)).
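For reference, here is a minimal sketch of that recovery step on a small random D (just an illustration of the math, not my actual implementation below):
import numpy as np
rng = np.random.default_rng(0)
M, N = 6, 4
D = rng.standard_normal((M, N))          # columns are (centered) flattened images
C_acute = D.T @ D                         # N x N instead of M x M
vals, V_acute = np.linalg.eigh(C_acute)   # symmetric matrix, ascending eigenvalues
V = D @ V_acute / vals                    # recover v = (1/lambda) * D v_acute, column-wise
# each column of V is an eigenvector of D @ D.T (up to scaling)
print(np.allclose((D @ D.T) @ V[:, -1], vals[-1] * V[:, -1]))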
Here is the reference implementation I am using adapted from this SO post (which is known to work):
class TemplatePCA:
    def __init__(self, n_components=None):
        self.n_components = n_components

    def fit_transform(self, X):
        X -= np.mean(X, axis = 0)
        R = np.cov(X, rowvar=False)
        # calculate eigenvectors & eigenvalues of the covariance matrix
        evals, evecs = np.linalg.eig(R)
        # sort eigenvalue in decreasing order
        idx = np.argsort(evals)[::-1]
        evecs = evecs[:,idx]
        # sort eigenvectors according to same index
        evals = evals[idx]
        # select the first n eigenvectors (n is desired dimension
        # of rescaled data array, or dims_rescaled_data)
        evecs = evecs[:, :self.n_components]
        # carry out the transformation on the data using eigenvectors
        # and return the re-scaled data
        return -1 * np.dot(X, evecs) #
Here is the implementation I have so far.
class MyPCA:
    def __init__(self, n_components=None):
        self.n_components = n_components

    def fit_transform(self, X):
        X -= np.mean(X, axis = 0)
        D = X.T
        M, N = D.shape
        D_T = X # D.T == (X.T).T == X
        C_acute = np.dot(D_T, D)
        eigen_values, eigen_vectors_acute = np.linalg.eig(C_acute)
        eigen_vectors = []
        for i in range(eigen_vectors_acute.shape[0]): # for each eigenvector
            v = np.dot(D, eigen_vectors_acute[i]) / eigen_values[i]
            eigen_vectors.append(v)
        eigen_vectors = np.array(eigen_vectors)
        # sort eigenvalues and eigenvectors in decreasing order
        idx = np.argsort(eigen_values)[::-1]
        eigen_vectors = eigen_vectors[:,idx]
        eigen_values = eigen_values[idx]
        # select the first n_components eigenvectors
        eigen_vectors = eigen_vectors[:, :self.n_components]
        # carry out the transformation on the data using eigenvectors
        # return the re-scaled data (projection)
        return np.dot(C_acute, eigen_vectors)
The reference text I am using notes that:
The eigenvector is now (D^T)v, so to do face detection we first multiply our test image vector by (D^T) before projecting onto the eigenimages.
I am not sure whether it is possible to retrieve exactly the same principal components (i.e. eigenvectors) using this method. It would even seem impossible to get the same eigenvectors back, since eigen_vectors_acute has size (4, 6) (meaning there are only 4 vectors), compared to the other method where it is (6, 6) (there are 6).
Running both on an input:
x = np.array([
    [0.387,123, 789,256, 4878, 5.42],
    [0.723,9.78,1.90,1234, 12104,5.25],
    [1,123, 67.98,7.91,12756,5.52],
    [1.524,1.34,23.456,1.23,6787,3.94],
])
# These two are the same
print(sklearn.decomposition.PCA(n_components=3).fit_transform(x))
print(TemplatePCA(n_components=3).fit_transform(x))
# This one is different
print(MyPCA(n_components=3).fit_transform(x))
Output:
[[ 4282.20163145 147.84415964 -267.73483211]
[-3025.62452358 683.58580386 67.76941319]
[-3599.15380006 -569.33984612 -148.62757658]
[ 2342.57669218 -262.09011737 348.5929955 ]]
[[-4282.20163145 -147.84415964 267.73483211]
[ 3025.62452358 -683.58580386 -67.76941319]
[ 3599.15380006 569.33984612 148.62757658]
[-2342.57669218 262.09011737 -348.5929955 ]]
[[ 3.35535639e+15, -5.70493660e+17, -8.57482740e+17],
[-2.45510474e+15, 4.17428591e+17, 6.27417685e+17],
[-2.82475918e+15, 4.80278997e+17, 7.21885236e+17],
[ 1.92450753e+15, -3.27213928e+17, -4.91820181e+17]]

How to vectorize indexing and computation when indexed tensors are different dimensions?

I'm trying to vectorize the following for-loop in Pytorch. I'd be happy with just vectorizing the inner for-loop, but doing the whole batch would also be awesome.
# B: the batch size
# N: the number of training examples
# dim: the dimension of each feature vector
# K: the number of discrete labels. each vector has a single label
# delta: margin for hinge loss
batch_data = torch.tensor(...) # Tensor of shape [B x N x d]
batch_labels = torch.tensor(...) # Tensor of shape [B x N x 1], each element is one of K labels (ints)
batch_losses = [] # Ultimately should be [B x 1]
batch_centroids = [] # Ultimately should be [B x K_i x dim]
for i in range(B):
    data = batch_data[i]       # [N x dim] examples for this batch element
    labels = batch_labels[i]   # [N x 1] labels for this batch element
    centroids = [] # Keep track of the means for each class.
    classes = torch.unique(labels) # Get the unique labels for the classes.
    # NOTE: The number of classes K for each item in the batch might actually
    # be different. This may complicate batch-level operations.
    total_loss = 0
    # For each class independently. This is the part I want to vectorize.
    for cl in classes:
        # Take the subset of training examples with that label.
        subset = data[torch.where(labels == cl)]
        # Find the centroid of that subset.
        centroid = subset.mean(dim=0)
        centroids.append(centroid)
        # Get the distance between each point in the subset and the centroid.
        dists = subset - centroid
        norm = torch.linalg.norm(dists, dim=1)
        # The loss is the mean of the hinge loss across the subset.
        margin = norm - delta
        hinge = torch.clamp(margin, min=0.0) ** 2
        total_loss += hinge.mean()
    # Keep track of everything. If it's too hard to keep track of centroids, that's also OK.
    loss = total_loss.mean()
    batch_losses.append(loss)
    batch_centroids.append(centroids)
I've been scratching my head on how to deal with the irregularly sized tensors. The number of classes in each batch K_i is different, and the size of each subset is different.
It turns out it actually is possible to vectorize across ragged arrays. I'll use numpy, but the code should be directly translatable to torch. The key technique is to:
Sort by ragged array membership
Perform an accumulation
Find boundary indices, compute adjacent differences
For a single (non-batch) input of an n x d matrix X and an n-length array label, the following returns the k x d centroids and n-length distances to respective centroids:
import numpy as np

def inverse_permutation(ix):
    # the inverse of the argsort permutation is simply its argsort
    return np.argsort(ix)

def vcentroids(X, label):
    """
    Vectorized version of centroids.
    """
    # order points by cluster label
    ix = np.argsort(label)
    label = label[ix]
    Xz = X[ix]
    # compute pos where pos[i]:pos[i+1] is span of cluster i
    d = np.diff(label, prepend=0) # binary mask where labels change
    pos = np.flatnonzero(d) # indices where labels change
    pos = np.repeat(pos, d[pos]) # repeat for 0-length clusters
    pos = np.append(np.insert(pos, 0, 0), len(X))
    Xz = np.concatenate((np.zeros_like(Xz[0:1]), Xz), axis=0)
    Xsums = np.cumsum(Xz, axis=0)
    Xsums = np.diff(Xsums[pos], axis=0)
    counts = np.diff(pos)
    c = Xsums / np.maximum(counts, 1)[:, np.newaxis]
    repeated_centroids = np.repeat(c, counts, axis=0)
    aligned_centroids = repeated_centroids[inverse_permutation(ix)]
    dist = np.sum((X - aligned_centroids) ** 2, axis=1)
    return c, dist
Batching requires only a little special handling. For an input B x n x d array batch_X, with B x n batch labels batch_labels, create unique labels for each batch:
batch_k = batch_labels.max(axis=1) + 1
batch_k[1:] = batch_k[:-1]
batch_k[0] = 0
base = np.cumsum(batch_k)
batch_labels += base[:, None]  # ndarrays have no .expand_dims method; broadcast the offsets instead
So now each batch element has a unique contiguous range of labels. I.e., the first batch element will have n labels in some range [0, k0) where k0 = batch_k[0], the second will have range [k0, k0 + k1) where k1 = batch_k[1], etc.
Then just flatten the B x n x d input to B*n x d and call the same vectorized method. Your loss function can then be derived from the final distances using the same position-array based reduction technique.
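A minimal sketch of that flattening step, under the assumptions above (numpy arrays batch_X of shape B x n x d, the offset batch_labels of shape B x n, the vcentroids from above, and the margin delta from the question):
B, n, d = batch_X.shape
flat_X = batch_X.reshape(B * n, d)
flat_labels = batch_labels.reshape(B * n)     # labels are already unique per batch element
centroids, sq_dists = vcentroids(flat_X, flat_labels)
# per-point hinge terms; the per-class and per-batch means can then be taken with
# the same sort / accumulate / boundary-diff reduction used inside vcentroids
hinge = np.maximum(np.sqrt(sq_dists) - delta, 0.0) ** 2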
For a detailed explanation of how the vectorization works, see my blog post.
You can vectorize the whole thing if you use a one-hot encoding for your classes and a pairwise distance trick for your norms:
import torch
B = 32
N = 1000
dim = 50
K = 25
batch_data = torch.randn((B, N, dim))
batch_labels = torch.randint(0, K, size=(B, N))
batch_one_hot = torch.nn.functional.one_hot(batch_labels)
centroids = torch.matmul(
    batch_one_hot.transpose(-1, 1).type(batch_data.dtype),
    batch_data
) / batch_one_hot.sum(1)[..., None]
norms = torch.linalg.norm(batch_data[:, :, None] - centroids[:, None], axis=-1)
# Compute the rest of your loss
# ...
A couple things to watch out for:
You'll get a divide by zero for any batches that have a missing class. You can handle this by first computing the class sums (with matmul) and the counts (summing the one-hot tensor along axis 1) separately. Then mask the sums where count == 0 and divide the rest by their class counts (a sketch follows after this list).
If you have a large number of classes, this will cause memory problems because the one-hot tensor will be too big. In that case, the answer from #VF1 probably makes more sense.
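A minimal sketch of that masking idea, reusing batch_one_hot and batch_data from the snippet above (my own variant, not the answer's exact code):
class_sums = torch.matmul(
    batch_one_hot.transpose(-1, 1).type(batch_data.dtype),
    batch_data
)                                                                 # [B x K x dim] per-class sums
counts = batch_one_hot.sum(1)[..., None].type(batch_data.dtype)   # [B x K x 1] per-class counts
centroids = class_sums / counts.clamp(min=1)                      # avoids 0/0 for classes missing from a batch
present = counts.squeeze(-1) > 0                                  # [B x K] mask of classes actually present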

Why does my sigmoid function return values not in the interval ]0,1[?

I am implementing logistic regression in Python with numpy. I have generated the following data set:
# class 0:
# covariance matrix and mean
cov0 = np.array([[5,-4],[-4,4]])
mean0 = np.array([2.,3])
# number of data points
m0 = 1000
# class 1
# covariance matrix
cov1 = np.array([[5,-3],[-3,3]])
mean1 = np.array([1.,1])
# number of data points
m1 = 1000
# generate m gaussian distributed data points with
# mean and cov.
r0 = np.random.multivariate_normal(mean0, cov0, m0)
r1 = np.random.multivariate_normal(mean1, cov1, m1)
X = np.concatenate((r0,r1))
Now I have implemented the sigmoid function with the aid of the following methods:
def logistic_function(x):
    """ Applies the logistic function to x, element-wise. """
    return 1.0 / (1 + np.exp(-x))

def logistic_hypothesis(theta):
    return lambda x : logistic_function(np.dot(generateNewX(x), theta.T))

def generateNewX(x):
    x = np.insert(x, 0, 1, axis=1)
    return x
After applying logistic regression, I found out that the best thetas are:
best_thetas = [-0.9673200946417307, -1.955812236119612, -5.060885703369424]
However, when I apply the logistic function with these thetas, then the output is numbers that are not inside the interval [0,1]
Example:
data = logistic_hypothesis(np.asarray(best_thetas))(X)
print(data)
This gives the following result:
[2.67871968e-11 3.19858822e-09 3.77845881e-09 ... 5.61325410e-03
2.19767618e-01 6.23288747e-01]
Can someone help me understand what has gone wrong with my implementation? I cannot understand why I am getting such big values. Isn't the sigmoid function supposed to only give results in the [0,1] interval?
It does, it's just in scientific notation.
'e' Exponent notation. Prints the number in scientific notation using
the letter ‘e’ to indicate the exponent.
>>> a = [2.67871968e-11, 3.19858822e-09, 3.77845881e-09, 5.61325410e-03]
>>> [0 <= i <= 1 for i in a]
[True, True, True, True]
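If you want to check the whole output array rather than a few hand-picked values, a minimal sketch (assuming the data array computed in the question):
import numpy as np
print(np.all((data > 0) & (data < 1)))  # True: every sigmoid output lies strictly between 0 and 1
print(data.min(), data.max())           # the smallest values only look odd because of scientific notation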

Softmax and Hinge functions in Python

I'm trying to implement the hinge loss function in Python and have run into some confusion.
Some sources that I have read (for example, "Regression Analysis in Python" by Luca Massoron) state that hinge is sometimes called the softmax function.
But to me that is strange, because hinge is a loss, hinge(y) = max(0, 1 - t*y) for a target t in {-1, +1}, while softmax is just an exponential normalization, softmax(x)_j = exp(x_j) / sum_k exp(x_k).
I made that function in Python (for Softmax) this way:
def softmax(x):
    e_x = np.exp(x - np.max(x))
    return e_x/e_x.sum(axis=0)
Have two questions:
Can I use that softmax function like an equivalent to hinge function?
If not, how can hinge be implemented in Python?
Thanks.
Can I use that softmax function like an equivalent to hinge function?
no - they are not equivalent.
a hinge function is a loss function and does not provide well-calibrated probabilities, whereas softmax is a mapping function (one that maps a set of scores into a distribution that sums to one).
If not, how can hinge be implemented in Python?
the following snippet captures the essence of the hinge loss function:
import numpy as np
import matplotlib.pyplot as plt
xmin, xmax = -1, 2
xx = np.linspace(xmin, xmax, 100)
plt.plot(xx, np.where(xx < 1, 1 - xx, 0), label="Hinge loss")
plt.legend()
plt.show()
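If you want the loss value itself rather than just its plot, here is a minimal sketch of my own, assuming binary labels y in {-1, +1} and raw decision scores:
import numpy as np
def hinge_loss(scores, y):
    # mean of max(0, 1 - y * score) over the samples
    return np.mean(np.maximum(0.0, 1.0 - y * scores))
scores = np.array([2.3, -0.7, 0.4, -1.5])
y = np.array([1, -1, 1, -1])
print(hinge_loss(scores, y))  # 0.225: the second and third samples fall inside the margin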
you can also implement softmax functions in pure python :)
import numpy as np
import math as math

def sofyMax(data):
    # pure python
    # math:: $rezult(powe,sumColumn) = \dfrac{powe(data)}{sumColumn(powe(data))}$
    def powe(data):
        outp = [[] for _ in range(len(data))]
        for column in range(len(data[0])):
            r = 0
            for row in data:
                outp[r] += [math.exp(row[column])]
                r += 1
        return outp
    def sumColumn(data):
        outps = []
        for column in range(len(data[0])):
            total = 0
            for row in data:
                total += row[column]
            outps += [total]
        return outps
    def rezult(data, sumcolumn):
        outp = [[] for _ in range(len(data))]
        l = 0
        for row in data:
            for c, s in zip(row, sumcolumn):
                outp[l] += [c/s]
            l += 1
        return outp
    et1 = powe(data)
    et2 = sumColumn(et1)
    return rezult(et1, et2)
data = np.random.randn(10,5)
# compare against the numpy version; use allclose rather than == for floating point values
print(np.allclose(np.exp(data)/np.sum(np.exp(data),axis=0), np.array(sofyMax(data))))
