ValueError: 'logits' and 'labels' must have the same shape - python

I am working on my first neural network, and I'm stuck on one error. Here is the code:
import pandas as pd
from sklearn.model_selection import train_test_split
df = pd.read_csv('iris.csv')
X = pd.get_dummies(df.drop(['variety'], axis=1))
y = df['variety'].apply(lambda x: 0 if x=='Setosa' else (1 if x=='Versicolor' else 2))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2)
print(y_train.head())
from keras.models import Sequential, load_model
from keras.layers import Dense
from sklearn.metrics import accuracy_score
model = Sequential()
model.add(Dense(units=8, activation='relu', input_dim=len(X_train.columns)))
model.add(Dense(units=3, activation='sigmoid'))
model.add(flatten())
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics='accuracy')
model.fit(X_train, y_train, epochs=50, batch_size=1)
I am working from a TensorFlow tutorial and using https://www.kaggle.com/datasets/arshid/iris-flower-dataset as the training dataset. I used the code from the tutorial but changed it to fit my dataset. Still, I get the ValueError. Any help?
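A likely fix (a sketch of my own, not from the original thread): the tutorial this is based on does binary classification, but the iris labels take three values (0, 1, 2). binary_crossentropy expects labels shaped like a single sigmoid output, hence the shape mismatch. Using a softmax output with sparse_categorical_crossentropy, and removing the undefined flatten() call, should resolve it:
model = Sequential()
model.add(Dense(units=8, activation='relu', input_dim=len(X_train.columns)))
# three output units with softmax, one per iris class
model.add(Dense(units=3, activation='softmax'))
# sparse_categorical_crossentropy accepts integer labels (0, 1, 2) directly
model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=1)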


Neural network to verify amex check digit

import pandas as pd
import numpy as np
import tensorflow as tf
data = pd.read_csv("Amex.csv")
data.head()
X = data.iloc[:, :-1].values
Y = data.iloc[:, -1].values
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=1234)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.fit_transform(x_test)
ann = tf.keras.models.Sequential()
ann.add(tf.keras.layers.Dense(units=1000, activation='sigmoid'))
ann.add(tf.keras.layers.Dense(units=1280, activation='sigmoid'))
ann.add(tf.keras.layers.Dense(units=10, activation='softmax'))
ann.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
ann.fit(x_train, y_train, batch_size=32, epochs=200)
print(ann.predict(sc.transform([[3,7,9,8,8,1,4,4,7,0,4,5,2,6]])))
I have trained the model to an accuracy of 0.9994. The expected answer is 1, but instead I get an array of numbers.
Output:
[[8.7985291e-06 2.5825528e-04 2.8821041e-03 1.0145088e-04 1.5824498e-04 8.1912667e-06 1.9685100e-03 9.9447292e-01 6.3032545e-05 7.8425743e-05]]
Thanks @Dr. Snoopy for the answer and @AlphaTK for confirming that the issue got resolved. Adding this comment as an answer for the community's benefit.
This is just an array of probabilities output by the model; an argmax should be applied to obtain a class index.
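For example, a minimal sketch of that suggestion:
import numpy as np
probs = ann.predict(sc.transform([[3,7,9,8,8,1,4,4,7,0,4,5,2,6]]))
# argmax over the class axis converts the probability vector into a class index
print(np.argmax(probs, axis=1))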

ValueError: Input 0 of layer "sequential_1" is incompatible with the layer: expected shape=(None, 614, 8), found shape=(None, 8)

This is my first time creating an AI, and I keep getting this error; I don't know what it means or how to fix it.
I am running this in Google Colab (Python).
Code:
import pandas as pd
dataset = pd.read_csv('cancer.csv')
x = dataset.drop(columns=["diagnosis(1=m, 0=b)"])
y = dataset["diagnosis(1=m, 0=b)"]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
import tensorflow as tf
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(256, input_shape=x_train.shape, activation='sigmoid'))
model.add(tf.keras.layers.Dense(256, activation='sigmoid'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1000)
The error occurs on the last line, where I try to train the model.
Any help would be appreciated, thanks.
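A likely fix (a sketch of my own; there is no answer in the thread): input_shape should describe a single sample, not the whole training set. x_train.shape includes the row count, here apparently (614, 8), so the layer ends up expecting each sample to be a 614x8 matrix. Passing only the per-sample feature shape should resolve it:
model.add(tf.keras.layers.Dense(
    256,
    # one sample has x_train.shape[1] features; the batch dimension stays implicit
    input_shape=(x_train.shape[1],),
    activation='sigmoid'))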

How to get rid of the "UserWarning: The initializer GlorotUniform is unseeded" message?

I have the following code from:
Bias-Variance Decomposition for Model Assessment
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from mlxtend.evaluate import bias_variance_decomp
from mlxtend.data import boston_housing_data
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
np.random.seed(16)
tf.random.set_seed(16)
X, y = boston_housing_data()
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=0.3,
                                                    random_state=123,
                                                    shuffle=True)
model = Sequential()
model.add(Dense(2048, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='linear'))
optimizer = tf.keras.optimizers.Adam()
model.compile(loss='mean_squared_error', optimizer=optimizer)
model.fit(X_train, y_train, epochs=100, batch_size=32, verbose=0)
mean_squared_error(model.predict(X_test), y_test)
avg_expected_loss, avg_bias, avg_var = bias_variance_decomp(
    model, X_train, y_train, X_test, y_test,
    loss='mse',
    num_rounds=100,
    random_seed=16,
    epochs=100,
    batch_size=32,
    verbose=0)
print('Average expected loss: %.3f' % avg_expected_loss)
print('Average bias: %.3f' % avg_bias)
print('Average variance: %.3f' % avg_var)
The Code works. However, it produces an annoying warning:
UserWarning: The initializer GlorotUniform is unseeded and being called multiple times, which will return identical values each time (even if the initializer is unseeded). Please update your code to provide a seed to the initializer, or avoid using the same initializer instance more than once.
warnings.warn(
What changes need to be made to the code in order to get rid of the warning?
As the warning message says, a seed needs to be provided to the initializer. Simply change the code to:
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from mlxtend.evaluate import bias_variance_decomp
from mlxtend.data import boston_housing_data
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from tensorflow.keras import initializers
np.random.seed(16)
tf.random.set_seed(16)
X, y = boston_housing_data()
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=0.3,
                                                    random_state=123,
                                                    shuffle=True)
model = Sequential()
model.add(Dense(2048, activation='relu',
                kernel_initializer=initializers.GlorotUniform(seed=0)))
model.add(Dense(512, activation='relu',
                kernel_initializer=initializers.GlorotUniform(seed=0)))
model.add(Dense(32, activation='relu',
                kernel_initializer=initializers.GlorotUniform(seed=0)))
model.add(Dense(1, activation='linear',
                kernel_initializer=initializers.GlorotUniform(seed=0)))
optimizer = tf.keras.optimizers.Adam()
model.compile(loss='mean_squared_error', optimizer=optimizer)
model.fit(X_train, y_train, epochs=100, batch_size=32, verbose=0)
mean_squared_error(model.predict(X_test), y_test)
avg_expected_loss, avg_bias, avg_var = bias_variance_decomp(
    model, X_train, y_train, X_test, y_test,
    loss='mse',
    num_rounds=10,
    random_seed=16,
    epochs=10,
    batch_size=32,
    verbose=0)
print('Average expected loss: %.3f' % avg_expected_loss)
print('Average bias: %.3f' % avg_bias)
print('Average variance: %.3f' % avg_var)
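As a side note (my own addition, not part of the original answer): on TensorFlow 2.7 and later, a single call seeds Python's random module, NumPy, and TensorFlow at once, which also helps keep weight initialization reproducible:
import tensorflow as tf
# seeds Python's random module, NumPy, and TensorFlow in one call (TF >= 2.7)
tf.keras.utils.set_random_seed(16)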

How to split dataset into (X_train, y_train), (X_test, y_test)?

The training and validation datasets I am using are shared here for the sake of reproducibility.
The validation_dataset.csv is the ground truth of training_dataset.csv.
What I am doing below is feeding the datasets into a simple CNN layer that extracts the useful features of the images and feeds them as a 1-D sequence into the LSTM network for classification.
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.layers.convolutional import Conv1D
from keras.layers import LSTM
from keras.layers.convolutional import MaxPooling1D
from keras.layers import TimeDistributed
from keras.layers import Dropout
from keras import optimizers
from keras.callbacks import EarlyStopping
import pandas as pd
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from numpy import genfromtxt
df_train = genfromtxt('data/train/training_dataset.csv', delimiter=',')
df_validation = genfromtxt('data/validation/validation_dataset.csv', delimiter=',')
#train,test = train_test_split(df_train, test_size=0.20, random_state=0)
df_train = df_train[..., None]
df_validation = df_validation[..., None]
batch_size=8
epochs=5
model = Sequential()
model.add(Conv1D(filters=5, kernel_size=3, activation='relu', padding='same'))
model.add(MaxPooling1D(pool_size=2))
#model.add(TimeDistributed(Flatten()))
model.add(LSTM(50, return_sequences=True, recurrent_dropout=0.2))
model.add(Dropout(0.2))
model.add(LSTM(10))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
adam = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0)
model.compile(optimizer="rmsprop", loss='mse', metrics=['accuracy'])
callbacks = [EarlyStopping('val_loss', patience=3)]
model.fit(df_train, df_validation, batch_size=batch_size)
print(model.summary())
scores = model.evaluate(df_train, df_validation, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
I want to split the training and validation datasets into (X_train, y_train), (X_test, y_test) so that I can use both for training and testing. I tried the split function of the scikit-learn library, train,test = train_test_split(df_train, test_size=0.20, random_state=0), but it gives me the following error once model.fit() is invoked.
ValueError: Data cardinality is ambiguous:
x sizes: 14384
y sizes: 3596
Please provide data which shares the same first dimension.
How can we split the dataset into (X_train, y_train), (X_test, y_test) sharing the same dimension?
One way is to have X and Y sets. Here, I assume the column name for Y is 'target'.
target = df_train['target']
df_train = df_train.drop(columns=['target'])
X_train, X_test, y_train, y_test = train_test_split(df_train, target, test_size=0.20, random_state=0)
Update: it seems that I initially misunderstood your problem, and "validation_dataset.csv" is your label data. I apologize for not reading correctly.
In this case, you do not need a "target" variable, as that is what df_validation would be. Therefore, I think the following may work:
X_train, X_test, y_train, y_test = train_test_split(df_train, df_validation, test_size=0.20, random_state=0)
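With that split in place, training and evaluation could look something like this (a sketch, assuming df_validation holds the labels as described):
X_train, X_test, y_train, y_test = train_test_split(
    df_train, df_validation, test_size=0.20, random_state=0)
# train on the 80% split, validating on the held-out 20%
model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs,
          validation_data=(X_test, y_test), callbacks=callbacks)
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))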

Data reshape with Keras and Jupyter

I'm trying to train a baseline ANN model for binary classification with Keras (TensorFlow backend) and Jupyter notebooks.
The code is the following:
array=df6.values
X= array[:,0:384]
Y = array[:,385]
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
seed = 7
np.random.seed(seed)
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
def create_baseline():
    model = Sequential()
    model.add(Dense(60, input_dim=60, kernel_initializer='normal', activation='relu'))
    model.add(Dense(10, kernel_initializer='normal', activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
estimator = KerasClassifier(build_fn=create_baseline, epochs=100, batch_size=5, verbose=0)
kfold = StratifiedKFold(n_splits=2, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X, encoded_Y, cv=kfold)
print("Baseline: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
Finally the error is the following:
ValueError: Error when checking input: expected dense_5_input to have shape (None, 60) but got array with shape (8, 384)
My dataset also has 18 rows and 385 columns.
I would like to know how to reshape the data correctly so the results are estimated properly. Thank you so much!
input_dim = 384
This argument refers to the number of feature columns in your input X. Your X has 384 columns, but the first Dense layer declares input_dim=60, which is why Keras expects input of shape (None, 60).
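For instance, a minimal sketch of that fix (I have also changed the output layer to a single sigmoid unit, which is what binary_crossentropy expects for binary classification; that change is mine, not from the original answer):
def create_baseline():
    model = Sequential()
    # input_dim must match the number of feature columns in X (384 here, not 60)
    model.add(Dense(60, input_dim=384, kernel_initializer='normal', activation='relu'))
    # a single sigmoid unit for binary classification
    model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model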
