I am trying to find the MSE for a given Phi, with output y and calculated weights w. While computing (y - w^T * Phi), the w^T * Phi part raises a ValueError. I know this is a dimension error, but I've tried to change the shapes and it isn't working for me.
I've tried transpose (but it doesn't seem to transpose anything; the array stays as it is) and reshape.
import numpy as np

X = [1, 2, 3]
d = 3
Phi = np.polynomial.polynomial.polyvander(X, d)
y = [2, 3, 4]

def train_model(Phi, y):
    pht = np.matrix.transpose(Phi)
    u = np.matmul(pht, Phi)
    q = np.linalg.inv(u)
    s = np.matmul(q, pht)
    w = np.matmul(s, y)
    return w

w = train_model(Phi, y)
def evaluate_model(Phi, y, w):
    sum = 0
    wt = np.matrix.transpose(w)
    for i in range(0, len(y)):
        g = np.matmul(wt, Phi[:, i])
        k = y[i] - g
        l = k ** 2
        sum += l
    avg = sum / len(y)
    return avg
Edit:
The error I get is
ValueError: shapes (4,) and (53,) not aligned: 4 (dim 0) != 53 (dim 0)
It looks like your indexing is wrong; try

g = np.matmul(wt, Phi[i, :])
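Here is a minimal sketch of why that works, assuming Phi comes from polyvander as in the question (rows index samples, columns index polynomial degrees), and using np.linalg.lstsq in place of the explicit normal equations:

import numpy as np

X = [1, 2, 3]
d = 3
Phi = np.polynomial.polynomial.polyvander(X, d)  # shape (3, 4): one row per sample
y = np.array([2, 3, 4])

# least-squares weights, shape (4,)
w = np.linalg.lstsq(Phi, y, rcond=None)[0]

# rows of Phi are samples, so index the first axis
residuals = y - Phi @ w          # same as y[i] - np.matmul(w, Phi[i, :]) per sample
mse = np.mean(residuals ** 2)
print(mse)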
I am new to programming in general; however, I am trying really hard for a project to randomly choose some outcomes depending on the probability of each outcome happening, for lotteries that I have generated, and I would like to use a loop to get random numbers each time.
This is my code:
import numpy as np

p = np.arange(0.01, 1, 0.001, dtype=float)

alpha = 0.5
alpha = float(alpha)
alpha = np.zeros((1, len(p))) + alpha

def w(alpha, p):
    return np.exp(-(-np.log(p))**alpha)

w = w(alpha, p)

def P(w):
    return np.exp(np.log2(w))

prob_win = P(w)
prob_lose = 1 - prob_win

E = 10
E = float(E)
E = np.zeros((1, len(p))) + E

b = 0
b = float(b)
b = np.zeros((1, len(p))) + b

def A(E, b, prob_win):
    return (E - b * (1 - prob_win)) / prob_win

a = A(E, b, prob_win)
a = a.squeeze()

prob_array = (prob_win, prob_lose)
prob_matrix = np.vstack(prob_array).T.squeeze()
outcomes_array = (a, b)
outcomes_matrix = np.vstack(outcomes_array).T
outcome_pairs = np.vsplit(outcomes_matrix, len(p))
outcome_pairs = np.array(outcome_pairs).astype(float)  # np.float is deprecated; use float
prob_pairs = np.vsplit(prob_matrix, len(p))
prob_pairs = np.array(prob_pairs)
nominalized_prob_pairs = [pair / np.sum(pair)
                          for pair in np.vsplit(prob_pairs, len(p))]
The code works fine, but I would like to use a loop or something similar for the next line of code, since I want to get 5 realizations for each row/pair of probabilities. When I use size=5 I just get a really long list, and I do not know which values still belong to which pairs, unlike when size=1:
realisations = np.concatenate([np.random.choice(outcome_pairs[i].ravel(),
                               size=1, p=nominalized_prob_pairs[i].ravel())
                               for i in range(len(outcome_pairs))])
Or, if I use size=5 as below, how can I match the realizations to the initial probabilities? Do I need to cut the array after every 5th element and then store the values in a matrix with 5 columns and a new row for every 5th element of the initial array? If yes, how could I do this?
realisations = np.concatenate([np.random.choice(outcome_pairs[i].ravel(),
                               size=5, p=nominalized_prob_pairs[i].ravel())
                               for i in range(len(outcome_pairs))])
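One hedged way to keep the draws grouped, as a sketch assuming the outcome_pairs and nominalized_prob_pairs arrays built above: draw the 5 realizations pair by pair and stack them, so row i of the result matches pair i.

realisations = np.vstack([
    np.random.choice(outcome_pairs[i].ravel(),
                     size=5, p=nominalized_prob_pairs[i].ravel())
    for i in range(len(outcome_pairs))
])
# realisations has shape (len(p), 5): row i holds the 5 draws for pair i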
What are you trying to produce exactly? Be more concise.
Here is some clean starter code with which you can produce linear data.
import numpy as np

def generate_data(n_samples, variance):
    # generate 2D data
    X = np.random.random((n_samples, 1))
    # add a column of ones to ease the calculus
    X = np.concatenate((np.ones((n_samples, 1)), X), axis=1)
    # generate two random coefficients
    W = np.random.random((2, 1))
    # construct targets from our data and weights
    y = X @ W
    # add some noise to our data
    y += np.random.normal(0, variance, (n_samples, 1))
    return X, y, W

if __name__ == "__main__":
    X, Y, W = generate_data(10, 0.5)
    # check each value of x, for example
    for x in X:
        print(x, end=' --> ')
        if x[1] <= 0.4:
            print('prob <= 0.4')
        else:
            print('prob > 0.4')
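Note: X @ W is Python's matrix-multiplication operator (equivalent to np.matmul(X, W)), so the targets y come out with shape (n_samples, 1).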
I'm having some difficulty implementing a negative log likelihood function in Python.
My negative log likelihood function is given as:

$$\mathcal{L}(\theta) = \sum_{i=1}^{M} \left( -y_i x_i \theta + e^{x_i \theta} + \log(y_i!) \right)$$

This is my implementation, but I keep getting the error:
ValueError: shapes (31,1) and (2458,1) not aligned: 1 (dim 1) != 2458 (dim 0)
def negative_loglikelihood(X, y, theta):
    J = np.sum(-y @ X @ theta) + np.sum(np.exp(X @ theta)) + np.sum(np.log(y))
    return J
X is a dataframe of size (2458, 31), y is a dataframe of size (2458, 1), and theta is a dataframe of size (31, 1).
I cannot figure out what I am missing. Is my implementation incorrect somehow? Any help would be much appreciated. Thanks.
You cannot use matrix multiplication here; what you want is to multiply elements with the same index together, i.e., element-wise multiplication. The correct operator for this purpose is *.
Moreover, you must transpose theta so numpy can broadcast the dimension with size 1 to 2458 (and likewise for y: 1 is broadcast to 31).
x = np.random.rand(2458, 31)
y = np.random.rand(2458, 1)
theta = np.random.rand(31, 1)

def negative_loglikelihood(x, y, theta):
    J = np.sum(-y * x * theta.T) + np.sum(np.exp(x * theta.T)) + np.sum(np.log(y))
    return J

negative_loglikelihood(x, y, theta)
>>> 88707.699
EDIT: your formula includes a y! inside the logarithm; you should update your code to match.
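For that log(y!) term, one hedged sketch (assuming y holds non-negative integer counts) uses scipy.special.gammaln, since log(y!) = gammaln(y + 1):

import numpy as np
from scipy.special import gammaln

def log_y_factorial(y):
    # log(y!) computed stably as log(Gamma(y + 1))
    return gammaln(np.asarray(y) + 1)

print(log_y_factorial([0, 1, 2, 5]))  # [0. 0. 0.6931... 4.7874...]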
If you look at your equation, the term y_i x_i θ is summed over i = 1 to M, so you should apply the same index i to both y and x; otherwise, apply a separate function to each.
I'm trying to reshape an array with some .csv values, and it's giving me an error on multiple lines. I have tried to find some examples on Stack Overflow, but I wasn't able to figure out what the actual problem is. I get this error every time, whenever I try to use np.zeros(), np.ones(), or np.array().
I have exam data that consists of EXAM-1, EXAM-2, and an admission decision. Are my x, y, and theta not the same size?
def sigmoid(z):
    new_val = 1 / (1 + np.exp(-z))
    return new_val

def h(theta, X):
    return sigmoid(np.dot(X, theta))  # ------ValueError

def compute_logistic_cost(theta, X, y):
    m = len(y)
    J = (1/m) * np.sum((-y * np.log(h(theta, X))) - ((1 - y) * np.log(1 - h(theta, X))))  # -----ValueError
    eps = 1e-12
    hypothesis[hypothesis < eps] = eps
    eps = 1.0 - 1e-12
    hypothesis[hypothesis > eps] = eps
    return J

X = np.ones((3, 1))  # ------ValueError; if I put 100 instead of 1 it works
X[1:, :] = X.T

theta = np.zeros((3, 1))
print(compute_logistic_cost(theta, X, y))  # ------ValueError

theta = np.array([[1.0],
                  [1.0],
                  [1.0]])
print(compute_logistic_cost(theta, X, y))

theta = np.array([[0.1],
                  [0.1],
                  [0.1]])
print(compute_logistic_cost(theta, X, y))
The following is the error message; please help me understand it. ValueError: shapes (3,100) and (3,1) not aligned: 100 (dim 1) != 3 (dim 0)
It looks like this is a mathematical failure: taking the dot product of the two matrices fails due to matrix incompatibility. See here for the documentation describing this ValueError. Specifically:
Raises:
ValueError
If the last dimension of a is not the same size as the second-to-last dimension of b.
Matrix math is hard. It looks like you just need the transpose of the smaller matrix for the dot product to execute successfully. I do not know enough to tell you whether the rest of the script is correct, but this will at least clear the error so you can continue.
Hope that helps.
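A minimal sketch of the shape mismatch, using the shapes from the error message (the variable names are illustrative, not from the original script):

import numpy as np

X = np.ones((3, 100))
theta = np.zeros((3, 1))

# np.dot(X, theta) raises:
# ValueError: shapes (3,100) and (3,1) not aligned: 100 (dim 1) != 3 (dim 0)

# transposing so the inner dimensions match clears the error:
z = np.dot(theta.T, X)  # (1, 3) @ (3, 100) -> (1, 100)
print(z.shape)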
I'm trying to change the rows of an array with new values in a for loop, but I cannot get it to work.
The problem is related to the propagation of a wave packet in quantum physics.
I've tried using the numpy.dot() function, but that doesn't work, and I tried making a simpler for loop, which does work.
import numpy as np

sig = 10**(-8)
x0 = 50*10**(-9)
L = 200*10**(-9)
N = 400
Nx = 1000

x = np.linspace(x0, L, N)
expsig = np.exp(-((1/2)*(x - x0)**2)/(sig**2))
expimg = np.exp(1j*(x - x0))
Phi = (1/(np.pi**(1/4)*np.sqrt(sig))*expsig*expimg)

Boxfunc = np.zeros(shape=(N, Nx))
for i in range(0, N):
    SINnpi = np.sin(((i*np.pi)/L)*x)
    Boxfunc[i, :] = np.sqrt(2/L)*SINnpi
    Y = Boxfunc[i, :]*Phi
I expect the output to be a 400x1000 array with newly calculated values from the multiplication of Phi and Boxfunc.
I just get the error message "could not broadcast input array from shape (400) into shape (1000)" when I get to the Boxfunc assignment in the for loop.
There is a problem with the array x; it should be x = np.linspace(x0, L, Nx), and then your code works.
Alternatively, you can define Boxfunc = np.zeros(shape=(Nx, N)). The problem comes from the shape mismatch between x and Boxfunc.
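A minimal sketch of the first fix, reusing the constants from the question:

import numpy as np

sig = 10**(-8)
x0 = 50*10**(-9)
L = 200*10**(-9)
N = 400
Nx = 1000

x = np.linspace(x0, L, Nx)  # len(x) == Nx, so each row assignment now fits
Boxfunc = np.zeros(shape=(N, Nx))
for i in range(0, N):
    Boxfunc[i, :] = np.sqrt(2/L)*np.sin(((i*np.pi)/L)*x)
print(Boxfunc.shape)  # (400, 1000)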
I have been trying to use fmin_cg to minimize the cost function for logistic regression. This is how I call fmin_cg:

xopt = fmin_cg(costFn, fprime=grad, x0=initial_theta,
               args=(X, y, m), maxiter=400, disp=True, full_output=True)

Here is my costFn:
def costFn(theta, X, y, m):
    h = sigmoid(X.dot(theta))
    J = 0
    J = 1 / m * np.sum((-(y * np.log(h))) - ((1 - y) * np.log(1 - h)))
    return J.flatten()
Here is my grad:
def grad(theta, X, y, m):
    h = sigmoid(X.dot(theta))
    J = 1 / m * np.sum((-(y * np.log(h))) - ((1 - y) * np.log(1 - h)))
    gg = 1 / m * (X.T.dot(h - y))
    return gg.flatten()
It seems to be throwing this error:
/Users/sugethakch/miniconda2/lib/python2.7/site-packages/scipy/optimize/linesearch.pyc in phi(s)
85 def phi(s):
86 fc[0] += 1
---> 87 return f(xk + s*pk, *args)
88
89 def derphi(s):
ValueError: operands could not be broadcast together with shapes (3,) (300,)
I know it's something to do with my dimensions, but I can't seem to figure it out.
I am a noob, so I might be making an obvious mistake.
I have read this link:
fmin_cg: Desired error not necessarily achieved due to precision loss
But it somehow doesn't seem to work for me.
Any help?
Updated sizes for X, y, m, and theta:
(100, 3) ----> X
(100, 1) ----> y
100 ----> m
(3, 1) ----> theta
This is how I initialize X, y, and m:
data = pd.read_csv('ex2data1.txt', sep=",", header=None)
data.columns = ['x1', 'x2', 'y']
x1 = data.iloc[:, 0].values[:, None]
x2 = data.iloc[:, 1].values[:, None]
y = data.iloc[:, 2].values[:, None]
# join x1 and x2 to make one array of X
X = np.concatenate((x1, x2), axis=1)
m, n = X.shape
ex2data1.txt:
34.62365962451697,78.0246928153624,0
30.28671076822607,43.89499752400101,0
35.84740876993872,72.90219802708364,0
.....
If it helps, I am trying to re-code one of the homework assignments for Coursera's ML course by Andrew Ng in Python.
Finally, I figured out what the problem in my initial program was.
My 'y' was (100, 1) and fmin_cg expects (100,). Once I flattened my 'y' it no longer threw the initial error, but the optimization still wasn't working.
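A minimal sketch of that fix, assuming y was built with [:, None] as in the question:

y = y.ravel()  # (100, 1) -> (100,), the 1-d shape fmin_cg expects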
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 0.693147
Iterations: 0
Function evaluations: 43
Gradient evaluations: 41
This was the same as what I achieved without optimization.
I figured out that the way to optimize this was to use the 'Nelder-Mead' method. I followed this answer: scipy is not optimizing and returns "Desired error not necessarily achieved due to precision loss"
Result = op.minimize(fun=costFn,
                     x0=initial_theta,
                     args=(X, y, m),
                     method='Nelder-Mead',
                     options={'disp': True})  # ,
                     # jac=grad)
This method doesn't need a Jacobian (it is derivative-free). I got the results I was looking for:
Optimization terminated successfully.
Current function value: 0.203498
Iterations: 157
Function evaluations: 287
Well, since I don't know exactly how you're initializing m, X, y, and theta, I had to make some assumptions. Hopefully my answer is relevant:
import numpy as np
from scipy.optimize import fmin_cg
from scipy.special import expit

def costFn(theta, X, y, m):
    # expit is the same as sigmoid, but faster
    h = expit(X.dot(theta))
    # instead of 1/m, I take the mean
    J = np.mean((-(y * np.log(h))) - ((1 - y) * np.log(1 - h)))
    return J  # should be a scalar

def grad(theta, X, y, m):
    h = expit(X.dot(theta))
    J = np.mean((-(y * np.log(h))) - ((1 - y) * np.log(1 - h)))
    gg = X.T.dot(h - y)
    return gg.flatten()

# initialize matrices
X = np.random.randn(100, 3)
y = np.random.randn(100,)  # this apparently needs to be a 1-d vector
m = np.ones((3,))  # not using m; used np.mean for a weighted sum (see ali_m's comment)
theta = np.ones((3, 1))

xopt = fmin_cg(costFn, fprime=grad, x0=theta, args=(X, y, m), maxiter=400, disp=True, full_output=True)
While the code runs, I don't know enough about your problem to know if this is what you're looking for. But hopefully this can help you understand the problem better. One way to check your answer is to call fmin_cg with fprime=None and see how the answers compare.
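For instance, a hedged sketch of that check, reusing the definitions above (with fprime=None, fmin_cg approximates the gradient numerically):

xopt_check = fmin_cg(costFn, fprime=None, x0=theta, args=(X, y, m),
                     maxiter=400, disp=True, full_output=True)
# compare xopt_check[0] with xopt[0] from the analytic-gradient run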