matplotlib get bitmap from a scatter plot - python

I have the coordinates of some points that I need to plot and then convert the plot to a black & white bitmap:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from PIL import Image
plt.scatter(x,y)
plt.tight_layout()
fig1 = plt.gcf()
plt.show()
type(fig1)
matplotlib.figure.Figure
How do I get from this figure a black & white bitmap as a numpy array, similar to this one:
side = 5
image = np.random.choice([0, 1], size=side*side, p=[.1, .9])
image = image.reshape(side,side)
image = np.expand_dims(image, axis=-1)
print("image.shape: ",image.shape)
plt.imshow(image, cmap=plt.get_cmap('gray'))
image.shape: (5, 5, 1)
print(image.reshape(side,side))
[[1 1 1 0 1]
[1 1 1 1 1]
[1 0 1 1 0]
[1 1 1 1 0]
[1 1 1 1 1]]
Update 1
I also need to get the resulting bitmap as a numpy array. How can I get it?
If I use the solution given by Zephyr:
fig, ax = plt.subplots(figsize = (5,5))
ax.hist2d(x, y, cmap = 'Greys', cmin = 0, cmax = 1)
plt.show()
I get an image different from the scatter plot, and they should be similar:
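For completeness, if the goal is just to turn the already-drawn figure into an array, the figure canvas itself can be rasterized. This is only a minimal sketch, assuming the default Agg backend; it yields the pixel bitmap of the whole figure, not a per-point grid:
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(2, 2), dpi=50)   # small figure -> small bitmap
ax.scatter(x, y)
ax.set_axis_off()
fig.canvas.draw()                                # render the figure
rgba = np.asarray(fig.canvas.buffer_rgba())      # (H, W, 4) uint8 array
gray = rgba[..., :3].mean(axis=-1)               # drop alpha, average RGB
bw = (gray > 127).astype(np.uint8)               # threshold to a 0/1 bitmap
print(bw.shape)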

You can create a grid and use it to define a map where the grid cells closest to the points are marked (set to black here). My try with random data, range 0 to 1:
import matplotlib.pyplot as plt
import numpy as np
n_points = 10
# create random coordinates
x, y = np.random.rand(n_points,2).T
fig, ax = plt.subplots()
ax.scatter(x,y)
ax.set_xlim([0,1])
ax.set_ylim([0,1])
ax.set_aspect(1.0)
# create a grid
grid_points = 10
grid_x = np.linspace(0,1,grid_points)
grid_y = grid_x.copy()
# initiate array of ones (white)
image = np.ones([grid_points, grid_points])
for xp, yp in zip(x, y):
    # select the closest point in the grid
    index_x = np.argmin(np.abs(xp - grid_x))
    index_y = np.argmin(np.abs(yp - grid_y))
    # set it to black
    image[index_x, index_y] = 0
# you need to transpose it so x is represented
# by the columns and y by the rows
fig, ax = plt.subplots()
ax.imshow(
    image.T,
    origin='lower',
    cmap=plt.get_cmap('gray'))
Note that snapping to the closest grid point is not always accurate. It gets better with a finer grid.
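If it helps, the snapping loop above can also be written without the explicit Python loop (just a sketch reusing the same x, y, grid_x, grid_y and image as above):
index_x = np.argmin(np.abs(x[:, None] - grid_x[None, :]), axis=1)
index_y = np.argmin(np.abs(y[:, None] - grid_y[None, :]), axis=1)
image[index_x, index_y] = 0  # mark the closest grid cell of every point as black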

First of all, I generate N random points within the range (x_min, x_max) and (y_min, y_max):
np.random.seed(42)
N = 10
x_min = 0
x_max = 40
y_min = -20
y_max = 20
x = np.random.uniform(x_min, x_max, N)
y = np.random.uniform(y_min, y_max, N)
Then I prepare:
a grid (the bitmap) of dimension (size, size)
two vectors x_grid and y_grid which sample (x_min, x_max) and (y_min, y_max) with size + 1 points, giving size intervals: one interval for each grid cell
size = 10
grid = np.zeros((size, size))
x_grid = np.linspace(x_min, x_max, size + 1)
y_grid = np.linspace(y_min, y_max, size + 1)
Then I loop over the grid cells; in each iteration I check whether at least one of the (x, y) points falls within the limits of that cell. If so, I set the corresponding value of grid to 1:
for i in range(size):
    for j in range(size):
        for x_i, y_i in zip(x, y):
            if (x_grid[i] < x_i <= x_grid[i + 1]) and (y_grid[j] < y_i <= y_grid[j + 1]):
                grid[i, j] = 1
                break
Resulting numpy matrix:
[[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0. 0. 0. 1. 0.]
[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
Complete Code
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(42)
N = 10
x_min = 0
x_max = 40
y_min = -20
y_max = 20
x = np.random.uniform(x_min, x_max, N)
y = np.random.uniform(y_min, y_max, N)
size = 10
grid = np.zeros((size, size))
x_grid = np.linspace(x_min, x_max, size + 1)
y_grid = np.linspace(y_min, y_max, size + 1)
for i in range(size):
    for j in range(size):
        for x_i, y_i in zip(x, y):
            if (x_grid[i] < x_i <= x_grid[i + 1]) and (y_grid[j] < y_i <= y_grid[j + 1]):
                grid[i, j] = 1
                break
fig, ax = plt.subplots(1, 2, figsize = (10, 5))
ax[0].scatter(x, y)
ax[0].set_xlim(x_min, x_max)
ax[0].set_ylim(y_min, y_max)
ax[0].grid()
ax[0].set_xticks(x_grid)
ax[0].set_yticks(y_grid)
ax[1].imshow(grid.T, cmap = 'Greys', extent = (x_min, x_max, y_min, y_max))
ax[1].invert_yaxis()
plt.show()
NOTE
Pay attention to the fact that in ax.imshow you need to transpose the matrix (grid.T) and then invert the y axis in order to compare the ax.imshow plot with the ax.scatter plot.
If you want the grid matrix to match the ax.imshow orientation, you need to rotate it counterclockwise by 90°:
grid = np.rot90(grid, k=1, axes=(0, 1))
Rotated grid, which corresponds to the above plot:
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 1. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]
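As a side note, the same kind of occupancy grid can be built without the triple loop by binning the points with np.histogram2d (a sketch using the same x, y, x_grid and y_grid defined above; the only difference is the edge-inclusion convention at the bin boundaries):
counts, _, _ = np.histogram2d(x, y, bins=[x_grid, y_grid])
grid = (counts > 0).astype(float)  # 1 where a cell contains at least one point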

Related

choosing 5 random numbers / coordinates on a grid in python

I need help making this so I can convert a string into a number in a list. I have done it as shown below, but doing it this way I would have to write a dictionary with 100 entries, which I do not want to do. The code is just to show what I have found already. As you can see, it would take 100 entries if I did it this way.
x1 = [0,0,0,0,0,0,0,0,0,0]
x2 = [0,0,0,0,0,0,0,0,0,0]
x3 = [0,0,0,0,0,0,0,0,0,0]
x4 = [0,0,0,0,0,0,0,0,0,0]
x5 = [0,0,0,0,0,0,0,0,0,0]
x6 = [0,0,0,0,0,0,0,0,0,0]
x7 = [0,0,0,0,0,0,0,0,0,0]
x8 = [0,0,0,0,0,0,0,0,0,0]
x9 = [0,0,0,0,0,0,0,0,0,0]
x10 = [0,0,0,0,0,0,0,0,0,0]
my_dict_grid = {
'x2[3]' : x2[3]
}
x = 'x2[3]'
print(my_dict_grid[x])
If you have multiple arrays you are managing all at once, create a multi-dimensional array:
x = [
[0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0]
]
In that case, you can just index by row then column:
x[2][3]
Based on your comment, you want to randomly change values in the array. In that case, the approach above is not at all what you want. You want to pick two random numbers and index into x with them to change the values:
import random
for _ in range(5):
    updated = False
    while not updated:
        i = random.randrange(10)
        j = random.randrange(10)
        if x[i][j] == 0:
            x[i][j] = 1
            updated = True
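A variant that avoids the rejection loop (just a sketch): sample five distinct flat indices out of the 100 cells and map each one to a row and a column.
import random

positions = random.sample(range(100), 5)  # 5 distinct cells out of 10*10
for p in positions:
    x[p // 10][p % 10] = 1                # row = p // 10, column = p % 10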
Original answer to the initial question:
(this is here more as an interesting thing, not as a viable approach)
Okay. Assuming that you have to do it the way you have described, you can generate a dictionary with all of the string keys:
my_dict_grid = {
    f"x{i + 1}[{j}]": arr[j]
    for i, arr in enumerate([x1, x2, x3, x4, x5, x6, x7, x8, x9, x10])
    for j in range(10)
}
However, I have to stress that this is not a good idea.
Three different ways to solve this with one-liners, depending on the output you want:
my_list = [[ 0 for _ in range(10)] for _ in range(10)]
my_dict = {"x"+str(i+1):[ 0 for _ in range(10)] for i in range(10)}
my_dict2 = {"x"+str(i//10+1)+"["+str(i%10)+"]": 0 for i in range(100)}
print(my_list) #[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0,...
print(my_dict) #{'x10': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'x9': [0,...
print(my_dict2)#{'x4[3]': 0, 'x1[9]': 0, 'x6[6]': 0, 'x2[8]': 0,...
A more mathematical, but very practical way to do this with numpy:
import numpy as np
grid_shape = [10, 10] # define 10 x 10 grid
num_ones = 5
cells = np.zeros(grid_shape[0]*grid_shape[1]) # define 10*10 = 100 cells as flat array
cells[0:num_ones] = 1 # Set the first 5 entries to 1
np.random.shuffle(cells) # Shuffle the entries, such that the 1's are at random position
grid = cells.reshape(grid_shape) # shape the grid into the desired shape
Running the code above will, for example, result in grid =
[[0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]]
Note that by changing grid_shape you can resize your grid, and by changing num_ones you adapt the number of ones in your grid. Also, it is guaranteed that there will always be exactly num_ones ones in your grid (given that num_ones is smaller than or equal to the number of elements in the grid).
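An equivalent sketch that skips the shuffle by drawing num_ones distinct flat indices directly (same grid_shape and num_ones as above):
flat_idx = np.random.choice(grid_shape[0] * grid_shape[1], size=num_ones, replace=False)
grid = np.zeros(grid_shape)
grid.flat[flat_idx] = 1  # exactly num_ones ones at random positions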

Is there a way to insert multiple elements to different locations in a ndarray all at once?

I'm using numpy's ndarray, and I'm wondering whether there is a way to insert multiple elements at different locations all at once.
For example, I have an image, and I want to pad the image with 0s. This is what I currently have:
def zero_padding(self):
    padded = self.copy()
    padded.img = np.insert(self.img, 0, 0, axis=0)
    padded.img = np.insert(padded.img, padded.img.shape[0], 0, axis=0)
    padded.img = np.insert(padded.img, 0, 0, axis=1)
    padded.img = np.insert(padded.img, padded.img.shape[1], 0, axis=1)
    return padded
where padded is an instance of the image.
Sure, you can use the fancy indexing technique of NumPy as follows:
import numpy as np
if __name__ == '__main__':
    A = np.zeros((5, 5))
    A[[1, 2], [0, 3]] = 1
    print(A)
Output:
[[0. 0. 0. 0. 0.]
[1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0.]
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]]
Cheers
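For the specific zero-padding case in the question, np.pad performs all four insertions in one call (a small sketch, assuming self.img is a 2-D array like the one below):
import numpy as np

img = np.arange(9).reshape(3, 3)
padded = np.pad(img, pad_width=1, mode='constant', constant_values=0)
print(padded.shape)  # (5, 5): one row/column of zeros added on every side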

Sinkhorn algorithm for optimal transport

I'm trying to code the Sinkhorn algorithm; in particular, I want to see whether I can compute the optimal transport between two measures as the strength of the entropic regularization converges to 0.
For example, let's transport the uniform measure $U$ over $[0;1]$ onto the uniform measure $V$ over $[1;2]$.
The optimal coupling for the quadratic cost is $(x, x+1)_{\#} U$.
Let's discretize $[0;1]$, the measure $U$, $[1;2]$ and the measure $V$. Using Sinkhorn I'm supposed to get a measure whose support is contained in the graph of the line $y = x+1$. But it isn't, so I'm trying to find the problem. I'm going to show my code and my result; maybe someone can help me.
import numpy as np
import math
from mpl_toolkits import mplot3d
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import matplotlib.colors as colors
#Parameters
N = 10 # number of points in the discretization of [0,1]
stop = 10**-3
Niter = 10**3
def Sinkhorn(C, mu, nu, lamb):
    # lamb: strength of the entropic regularization
    # Initialization
    a1 = np.zeros(N)
    b1 = np.zeros(N)
    a2 = np.ones(N)
    b2 = np.ones(N)
    Iter = 0
    GammaB = np.exp(-lamb*C)
    # Sinkhorn iterations
    while (np.linalg.norm(a2) > stop and np.linalg.norm(b2) > stop and np.linalg.norm(a2) < 1/stop and np.linalg.norm(b2) < 1/stop and Iter < Niter and np.linalg.norm(a1-a2) + np.linalg.norm(b1-b2) > stop):
        a1 = a2
        b1 = b2
        a2 = mu/(np.dot(GammaB, b1))
        b2 = nu/(np.dot(GammaB.T, a2))
        Iter += 1
    # Compute gamma_star
    Gamma = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            Gamma[i][j] = a2[i]*b2[j]*GammaB[i][j]
    Gamma /= Gamma.sum()
    return Gamma
## Test: transport uniform([0;1]) onto uniform([1;2])
S = np.linspace(0, 1, N, False) # discretization of [0,1]
T = np.linspace(1, 2, N, False) # discretization of [1,2]
# Discretization of uniform([0;1])
U01 = np.ones(N)
Mass = np.sum(U01)
U01 = U01/Mass
# Discretization uniform([1;2])
U12 = np.ones(N)
Mass = np.sum(U12)
U12 = U12/Mass
# Cost function
X,Y = np.meshgrid(S,T)
C = (X-Y)**2 #Matrix of c[i,j]=(xi-yj)²
def plot_Sinkhorn_U01_U12():
    # plot optimal measure and convergence
    fig = plt.figure()
    for i in range(4):
        ax = fig.add_subplot(2, 2, i+1, projection='3d')
        Gamma_star = Sinkhorn(C, U01, U12, 1/10**i)
        ax.scatter(X, Y, Gamma_star, cmap='viridis', linewidth=0.5)
        plt.title("Gamma_bar({}) between uniform([0,1]) and uniform([1,2])".format(1/10**i))
    plt.show()
    plt.figure()
    for i in range(4):
        plt.subplot(2, 2, i+1)
        Gamma_star = Sinkhorn(C, U01, U12, 1/10**i)
        plt.imshow(Gamma_star, interpolation='none')
        plt.title("Gamma_bar({}) between uniform([0,1]) and uniform([1,2])".format(1/10**i))
    plt.show()
    return

# The transport map between U01 and U12 is x -> x + 1, so the support of gamma^* is
# contained in the graph of the function x -> (x, x+1), i.e. the line y = x + 1
plot_Sinkhorn_U01_U12()
And this is what I get.
As advised, this is the output of my code when I use 1/lamb.
It's much better but still not correct. Here is Gamma_star(125):
[[0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.09 0.01]
[0. 0. 0. 0. 0. 0. 0. 0. 0.01 0.01]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.01]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.01]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.01]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.01]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.01]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.01]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.01]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.01]]
We can see that the support of the measure Gamma_star is not contained in the line $y = x+1$.
Thanks and regards.
It's not the final answer but we're getting closer.
As advised, I relaxed my while condition. For example, with the single condition
while (Iter < Niter):
This is what I get:
Here is the matrix I got for Gamma_star(125):
[[0.08 0.02 0. 0. 0. 0. 0. 0. 0. 0. ]
[0.02 0.06 0.02 0. 0. 0. 0. 0. 0. 0. ]
[0. 0.02 0.06 0.02 0. 0. 0. 0. 0. 0. ]
[0. 0. 0.02 0.06 0.02 0. 0. 0. 0. 0. ]
[0. 0. 0. 0.02 0.06 0.02 0. 0. 0. 0. ]
[0. 0. 0. 0. 0.02 0.06 0.02 0. 0. 0. ]
[0. 0. 0. 0. 0. 0.02 0.06 0.02 0. 0. ]
[0. 0. 0. 0. 0. 0. 0.02 0.06 0.02 0. ]
[0. 0. 0. 0. 0. 0. 0. 0.02 0.06 0.02]
[0. 0. 0. 0. 0. 0. 0. 0. 0.02 0.08]]
It's closer to my expectation, which is: $\text{Gamma\_star}(i,j) = 0$ for $j \ne i$.
The new code is :
import numpy as np
import math
from mpl_toolkits import mplot3d
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import matplotlib.colors as colors
#Parameters
N = 10 # number of points in the discretization of [0,1]
Niter = 10**5
def Sinkhorn(C, mu, nu, lamb):
    # lamb: strength of the entropic regularization
    # Initialization
    a1 = np.zeros(N)
    b1 = np.zeros(N)
    a2 = np.ones(N)
    b2 = np.ones(N)
    Iter = 0
    GammaB = np.exp(-lamb*C)
    # Sinkhorn iterations
    while (Iter < Niter):
        a1 = a2
        b1 = b2
        a2 = mu/(np.dot(GammaB, b1))
        b2 = nu/(np.dot(GammaB.T, a2))
        Iter += 1
    # Compute gamma_star
    Gamma = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            Gamma[i][j] = a2[i]*b2[j]*GammaB[i][j]
    Gamma /= Gamma.sum()
    return Gamma
## Test: transport uniform([0;1]) onto uniform([1;2])
S = np.linspace(0, 1, N, False) # discretization of [0,1]
T = np.linspace(1, 2, N, False) # discretization of [1,2]
# Discretization of uniform([0;1])
U01 = np.ones(N)
Mass = np.sum(U01)
U01 = U01/Mass
# Discretization uniform([1;2])
U12 = np.ones(N)
Mass = np.sum(U12)
U12 = U12/Mass
# Cost function
X,Y = np.meshgrid(S,T)
C = (X-Y)**2 #Matrix of c[i,j]=(xi-yj)²
def plot_Sinkhorn_U01_U12():
    # plot optimal measure and convergence
    fig = plt.figure()
    for i in range(4):
        ax = fig.add_subplot(2, 2, i+1, projection='3d')
        Gamma_star = Sinkhorn(C, U01, U12, 5**i)
        ax.scatter(X, Y, Gamma_star, cmap='viridis', linewidth=0.5)
        plt.title("Gamma_bar({}) between uniform([0,1]) and uniform([1,2])".format(5**i))
    plt.show()
    plt.figure()
    for i in range(4):
        plt.subplot(2, 2, i+1)
        Gamma_star = Sinkhorn(C, U01, U12, 5**i)
        plt.imshow(Gamma_star, interpolation='none')
        plt.title("Gamma_bar({}) between uniform([0,1]) and uniform([1,2])".format(5**i))
    plt.show()
    return

# The transport map between U01 and U12 is x -> x + 1, so the support of gamma^* is
# contained in the graph of the function x -> (x, x+1), i.e. the line y = x + 1
plot_Sinkhorn_U01_U12()
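A small side note (just a sketch, not a change to the algorithm): the double loop that builds Gamma at the end of Sinkhorn is an outer-product scaling of GammaB and can be written with broadcasting:
# equivalent to the nested loops above, using the same a2, b2 and GammaB
Gamma = a2[:, None] * GammaB * b2[None, :]
Gamma /= Gamma.sum()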

Sparse vectors for training data

I have training data like this:
x_train = np.random.randint(100, size=(1000, 25))
where each row is a sample and thus we have 1000 samples.
Now I need the training data to be such that each sample/row has at most 3 non-zero elements out of 25.
Can you all please suggest how I can implement that? Thanks!
I am assuming that you want to turn a majority of your data into zeros, except that 0 to 3 non-zero elements are retained (randomly) for each row. If this is the case, a possible way to do this is as follows.
Code
import numpy as np
max_ = 3
nrows = 1000
ncols = 25
np.random.seed(7)
X = np.zeros((nrows,ncols))
data = np.random.randint(100, size=(nrows, ncols))
# maximum number of non-zeros to be generated for each row
vmax = np.random.randint(low=0, high=4, size=(nrows,))
for i in range(nrows):
    if vmax[i] > 0:
        # column indices at which to place the non-zeros
        col = np.random.randint(low=0, high=ncols, size=(1, vmax[i]))
        # set the non-zero elements
        X[i][col] = data[i][col]
print(X)
Output
[[ 0. 68. 25. ... 0. 0. 0.]
[ 0. 0. 0. ... 0. 0. 0.]
[ 0. 0. 0. ... 0. 0. 0.]
...
[ 0. 0. 0. ... 0. 0. 0.]
[88. 0. 0. ... 0. 0. 0.]
[ 0. 0. 0. ... 0. 0. 0.]]
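A small caveat with the code above: since np.random.randint can draw the same column twice, a row may end up with fewer than vmax[i] non-zeros. If exactly vmax[i] distinct positions are wanted, np.random.choice with replace=False can be used instead (a sketch with the same variables):
for i in range(nrows):
    if vmax[i] > 0:
        col = np.random.choice(ncols, size=vmax[i], replace=False)  # distinct columns
        X[i][col] = data[i][col]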

Error when Coding Perceptron: ValueError: shapes (124,124) and (1,10) not aligned: 124 (dim 1) != 1 (dim 0)

I'm trying to code a multi-layer perceptron, but something seems to go wrong when I import data from csv files using the genfromtxt function from the numpy library.
from numpy import genfromtxt
dfX = genfromtxt('C:/Users/m15x/Desktop/UFABC/PDPD/inputX(editado_bits).csv', delimiter=',')
dfy = genfromtxt('C:/Users/m15x/Desktop/UFABC/PDPD/inputY(editado_bits).csv', delimiter=',')
X = dfX
y = dfy
print(X)
print(y)
# Whole Class with additions:
class Neural_Network(object):
    def __init__(self):
        # Define Hyperparameters
        self.inputLayerSize = 26
        self.outputLayerSize = 1
        self.hiddenLayerSize = 10
        # Weights (parameters)
        self.W1 = np.random.randn(self.inputLayerSize, self.hiddenLayerSize)
        self.W2 = np.random.randn(self.hiddenLayerSize, self.outputLayerSize)
And my X (124, 26) and y (124,) are the following arrays, respectively:
[[ 1. 0. 1. ..., 1. 0. 0.]
[ 0. 1. 1. ..., 1. 0. 0.]
[ 0. 1. 1. ..., 1. 0. 0.]
...,
[ 0. 1. 1. ..., 1. 0. 0.]
[ 1. 0. 1. ..., 1. 0. 0.]
[ 1. 0. 1. ..., 1. 0. 0.]]
[ 0. 0. 1. 0. 1. 0. 1. 1. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0.
0. 0. 1. 1. 0. 0. 0. 1. 0. 0. 1. 0. 0. 0. 1. 1. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 0. 0.
0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 0. 0. 1. 0. 0. 0. 0. 0.
0. 1. 0. 1. 0. 1. 0. 0. 1. 1. 0. 0. 0. 1. 0. 1. 0. 1.
1. 1. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 1. 0. 1. 0.
1. 0. 0. 0. 0. 1. 0. 1. 0. 1. 0. 1. 0. 0. 0. 0.]
And I get the following error:
Traceback (most recent call last):
File "C:/Users/m15x/PycharmProjects/Deep Learning/MLP_tinnitus_1.py", line 141, in <module>
T.train(X,y)
File "C:/Users/m15x/PycharmProjects/Deep Learning/MLP_tinnitus_1.py", line 134, in train
args=(X, y), options=options, callback=self.callbackF)
File "C:\Users\m15x\Anaconda3\lib\site-packages\scipy\optimize\_minimize.py", line 444, in minimize
return _minimize_bfgs(fun, x0, args, jac, callback, **options)
File "C:\Users\m15x\Anaconda3\lib\site-packages\scipy\optimize\optimize.py", line 913, in _minimize_bfgs
gfk = myfprime(x0)
File "C:\Users\m15x\Anaconda3\lib\site-packages\scipy\optimize\optimize.py", line 292, in function_wrapper
return function(*(wrapper_args + args))
File "C:\Users\m15x\Anaconda3\lib\site-packages\scipy\optimize\optimize.py", line 71, in derivative
self(x, *args)
File "C:\Users\m15x\Anaconda3\lib\site-packages\scipy\optimize\optimize.py", line 63, in _call_
fg = self.fun(x, *args)
File "C:/Users/m15x/PycharmProjects/Deep Learning/MLP_tinnitus_1.py", line 119, in costFunctionWrapper
grad = self.N.computeGradients(X, y)
File "C:/Users/m15x/PycharmProjects/Deep Learning/MLP_tinnitus_1.py", line 76, in computeGradients
dJdW1, dJdW2 = self.costFunctionPrime(X, y)
File "C:/Users/m15x/PycharmProjects/Deep Learning/MLP_tinnitus_1.py", line 56, in costFunctionPrime
delta2 = np.dot(delta3, self.W2.T) * self.sigmoidPrime(self.z2)
ValueError: shapes (124,124) and (1,10) not aligned: 124 (dim 1) != 1 (dim 0)
This error appears when I try to train the network with the data above.
def train(self, X, y):
    # Make an internal variable for the callback function:
    self.X = X
    self.y = y
    # Make empty list to store costs:
    self.J = []
    params0 = self.N.getParams()
    options = {'maxiter': 10000, 'disp': True}
    _res = optimize.minimize(self.costFunctionWrapper, params0, jac=True, method='BFGS',
                             args=(X, y), options=options, callback=self.callbackF)
    self.N.setParams(_res.x)
    self.optimizationResults = _res
I know the shapes of my X and y arrays don't fit, but I don't know of a usable function I can apply to reshape the data for the variable y, which is fed by the (124, 1) shaped csv file ('C:/Users/m15x/Desktop/UFABC/PDPD/inputY(editado_bits).csv'), while my X variable is fed by the (124, 26) shaped csv file ('C:/Users/m15x/Desktop/UFABC/PDPD/inputX(editado_bits).csv').
It seems the data imported with the genfromtxt function is not in an appropriate shape.
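One likely cause (an assumption, since the full cost-function code isn't shown): genfromtxt returns y as a 1-D array of shape (124,), so an expression like yHat - y broadcasts a (124, 1) column against a (124,) row and produces a (124, 124) matrix, which matches the shapes in the error. Reshaping y into a column vector right after loading often fixes this kind of mismatch:
y = dfy.reshape(-1, 1)  # shape (124,) -> (124, 1), so yHat - y stays (124, 1)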
