Parallel computing in Spyder - python

I am not able to get my code to run when using n_jobs = -1 (on the last line).
I keep getting the same message:
"BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable."
The code works with n_jobs = 1, but I need all processors, as the code will otherwise take very long to execute.
I have tried using if __name__ == '__main__':, but I am not sure how to use it and cannot get the code to run.
I have tried for ages but to no avail. Any help is highly appreciated. Here is the relevant code:
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense
def build_classifier():
    classifier = Sequential()
    classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
    classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
    classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
    classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
    return classifier

classifier = KerasClassifier(build_fn = build_classifier, batch_size = 10, epochs = 100)
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10, n_jobs = -1)
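
For reference, here is a minimal sketch of how the if __name__ == '__main__': guard is usually applied in this situation: the model-building function stays at module level, and only the part that spawns worker processes (the cross_val_score call with n_jobs = -1) goes under the guard. The random placeholder data and the consistent use of tensorflow.keras imports are assumptions made so the sketch stands on its own, not part of the original code.

import numpy as np
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import cross_val_score

def build_classifier():
    # Same architecture as above, kept at module level so worker processes
    # can import the module and re-create the model by reference.
    classifier = Sequential()
    classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
    classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
    classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
    classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
    return classifier

if __name__ == '__main__':
    # Placeholder data so the sketch runs on its own; replace with your X_train, y_train.
    X_train = np.random.rand(100, 11)
    y_train = np.random.randint(0, 2, size = 100)

    classifier = KerasClassifier(build_fn = build_classifier, batch_size = 10, epochs = 100)
    # The call that spawns worker processes must sit under the guard, otherwise
    # each worker re-executes it when it imports the module.
    accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10, n_jobs = -1)
    print(accuracies.mean())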


How to extract weights of the best ANN resulting from GridSearchCV?

After running hyperparameter tuning with GridSearchCV using the code below:
## Tuning the ANN
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense
def build_regressor(hidden_nodes, hidden_layers, optimizer):
    regressor = Sequential()
    regressor.add(Dense(units = hidden_nodes, kernel_initializer = 'uniform', activation = 'relu', input_dim = 7))
    for layer_size in range(hidden_layers):
        regressor.add(Dense(hidden_nodes, kernel_initializer = 'uniform', activation = 'relu'))
    regressor.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'linear'))
    regressor.compile(optimizer = optimizer, loss = 'mse', metrics = ['mse'])
    return regressor
regressor = KerasRegressor(build_fn = build_regressor, epochs = 100)
# Create a dictionary of tuning parameters
parameters = {'hidden_nodes': list(range(2,101)), 'hidden_layers': [1,2,3], 'batch_size': [25,32], 'optimizer' : ['adam', 'nadam','RMSprop', 'adamax']}
grid_search = GridSearchCV(estimator = regressor, param_grid = parameters, scoring = 'neg_mean_squared_error', cv = 10, n_jobs = 4)
grid_search = grid_search.fit(X_train, y_train)
best_parameters = grid_search.best_params_
best_score = grid_search.best_score_
best_model = grid_search.best_estimator_
Do we have any way to extract the weights of the best model from GridSearchCV?
Thank you so much in advance,
As you want the model weights saved in a CSV file, you can do the following:
import numpy as np

# Kernel weights of the first (input) layer of the best model
weight = best_model.layers[0].get_weights()[0]
np.savetxt('weight.csv', weight, fmt='%s', delimiter=',')
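
If you want every layer rather than just the first, a small extension of the same idea (a sketch, not tested against the original data; with the keras scikit-learn wrapper the underlying network may be reachable as best_model.model, which the getattr call below tries to account for):

import numpy as np

# Unwrap the KerasRegressor if best_model is the wrapper rather than the Keras model.
keras_model = getattr(best_model, 'model', best_model)

for i, layer in enumerate(keras_model.layers):
    params = layer.get_weights()   # [kernel, bias] for Dense layers
    if not params:                 # skip layers without trainable weights (e.g. Dropout)
        continue
    kernel, bias = params
    np.savetxt('layer_%d_kernel.csv' % i, kernel, fmt='%s', delimiter=',')
    np.savetxt('layer_%d_bias.csv' % i, bias, fmt='%s', delimiter=',')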

Pipeline implementation with PCA, sklearn and TensorFlow

It gives another error:
"The first argument to Layer.call must always be passed."
I cannot solve the problem: input_dim cannot be set as a constant, because PCA and SelectKBest cut down the number of input features.
And if you can also help with getting the results out of the pipeline, I will be very grateful.
Here is a link to the data: https://1drv.ms/u/s!AlHgQsqCKEIPiIxzdyWE0BfBHNocTQ?e=cxuSuo
def modelReg(inpt, opt = 'adam', kInitializer = 'glorot_uniform', dropout = 0.05):
    model = Sequential()
    model.add(Dense(1024, activation='relu', input_dim = inpt, kernel_initializer=kInitializer))
    model.add(Dense(1024, activation='relu', kernel_initializer=kInitializer))
    model.add(Dense(512, activation='relu', kernel_initializer=kInitializer))
    model.add(layers.Dropout(dropout))
    model.add(Dense(1, activation='sigmoid', kernel_initializer=kInitializer))
    model.compile(loss='mse', optimizer=opt, metrics=["mse", "mae"])
    return model
features = []
features.append(('pca', PCA(n_components=10)))
features.append(('select_best', SelectKBest(k=10)))
feature_union = FeatureUnion(features)
regressor = KerasRegressor(build_fn = modelReg(inpt), epochs = 3, batch_size = 500, verbose = 1)
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('feature_union', feature_union))
estimators.append(('regressor', regressor))
model = Pipeline(estimators)
model.fit(allData.drop(['VancouverH'], axis = 1), allData['VancouverH'])
In KerasRegressor, arguments for the model-building function are passed as keyword arguments of KerasRegressor itself:
kearsEstimator = ('kR', KerasRegressor(createModel, inpt = trainDataX.shape[1],
                                       epochs = 5, batch_size = 180, verbose = 1))
like this, and not by calling the function yourself:
kearsEstimator = ('kR', KerasRegressor(createModel(inpt),
                                       epochs = 5, batch_size = 180, verbose = 1))
Then the pipeline is passed to the grid search, and the parameter names for the grid are written with the step-name prefix:
estimators = []
estimators.append((kearsEstimator))
param_grid = {
    'kR__optimizer': ['adam']  # 'RMSprop', 'Adam', 'Adamax', 'sgd'
}
grid = GridSearchCV(Pipeline(estimators), param_grid, cv = 5)
grid.fit(trainDataX, trainDataY)
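
Since the question also asks about getting results out of the pipeline, here is a short example of inspecting the fitted grid (the attributes are standard scikit-learn ones; testDataX is a hypothetical hold-out set that is not part of the original code):

# Best hyperparameters and the corresponding cross-validated score
print(grid.best_params_)
print(grid.best_score_)

# The best pipeline (scaler + feature union + Keras model), refitted on the
# full training data, can be used directly for prediction.
# testDataX is a hypothetical hold-out set.
predictions = grid.best_estimator_.predict(testDataX)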

Using Keras and TensorFlow-GPU v2.0 to implement K-fold cross validation

Hi, so I'm learning about k-fold cross validation. This first snippet of code is the building of a simple ANN:
def buildModel():
    # Fitting classifier to the Training set
    # Create your classifier here
    model = Sequential()
    model.add(Dense(units = 6, input_dim = X.shape[1], activation = 'relu'))
    model.add(Dense(units = 6, activation = 'relu'))
    model.add(Dense(units = 1, activation = 'sigmoid'))
    model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
    return model
I then used cross_val_score from sklearn to run the ANN.
Keras is also running on my GPU.
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
model = KerasClassifier(build_fn = buildModel, batch_size = 10, epochs =100)
accuracies = cross_val_score(estimator = model, X = X_train, y = y_train, cv = 10, n_jobs = -1)
But if I put n_jobs = -1 to try and use all cores, I get an error (PS: I have 11 features):
Blas GEMM launch failed : a.shape=(10, 11), b.shape=(11, 6), m=10, n=6, k=11
[[node dense_1/MatMul (defined at C:\Users\Brandon Cardillo\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\framework\ops.py:1751) ]]
[Op:__inference_keras_scratch_graph_1030]
Function call stack:
keras_scratch_graph
PS: I am also running on Jupyter Notebook.
Any help is very much appreciated.
Thank you.
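
One commonly reported cause of "Blas GEMM launch failed" in this kind of setup is that n_jobs = -1 makes joblib start several worker processes, each of which tries to initialise the same GPU at once. That is only an assumption about what is happening here, not a confirmed diagnosis, but if it applies, a sketch of letting TensorFlow allocate GPU memory on demand (run before any model is built) looks like this:

import tensorflow as tf

# Allocate GPU memory on demand instead of reserving it all at start-up.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)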

ValueError: The model is not configured to compute accuracy

When using this code, which I got from a tutorial, I get an error saying that the model is not configured to compute accuracy and that I should pass accuracy. The weird part is that I am already passing metrics = ['accuracy'].
I've searched a lot, and all the code I have seen works fine except mine.
# Evaluating the ANN
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from tensorflow.python.keras.models import Sequential #Used to initialize the NN
from tensorflow.python.keras.layers import Dense #Used to create the layers in the ANN
def build_classifier():
    classifier = Sequential()
    classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
    classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
    classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
    classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
    return classifier
# Needs to be revised from the evaluating video in the course if needed
classifier = KerasClassifier(build_fn = build_classifier, batch_size = 10, nb_epoch = 100)
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10, n_jobs = -1)
I expect the output to be the accuracies vector; instead I got:
ValueError: The model is not configured to compute accuracy. You should pass metrics=["accuracy"] to the model.compile() method.
Changing the parameter from metrics=['accuracy'] to metrics=['acc'] works for me.
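That is, inside build_classifier() the compile call becomes:

classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['acc'])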
Regards,
Joseph

How to fine tune the network automatically in Keras?

How can I tune the network automatically instead of adjusting the number of hidden layers and epochs manually every time? (Using Keras)
from keras.models import Sequential
from keras.layers import Dense
import numpy
seed = 9
numpy.random.seed(seed)
from pandas import read_csv
filename = 'BBCN.csv'
dataframe = read_csv(filename)
array = dataframe.values
x = array[:,0 : 11]
y = array[:, 11]
model = Sequential()
model.add(Dense(11, input_dim=11, kernel_initializer = 'uniform', activation = 'relu'))
model.add(Dense(8, kernel_initializer = 'uniform', activation = 'relu'))
model.add(Dense(8, kernel_initializer = 'uniform', activation = 'relu'))
model.add(Dense(1, kernel_initializer = 'uniform', activation = 'sigmoid'))
model.compile(loss='binary_crossentropy', optimizer ='adam', metrics = ['accuracy'])
model.fit(x, y, epochs = 50, batch_size = 10)
scores = model.evaluate(x,y)
print("%s, %.2f%%" % (model.metrics_names[1], scores[1]*100))
The result I need is to show the training process and the accuracy as a percentage.
Thanks a lot!
You could start with a simple loop over some hyperparameters, train with each setting for a few epochs, and then compare the results.
You can also look into grid search, which is a more systematic approach: you set up a function that creates a model, and combine it with the set of hyperparameters you want to try out and an array of values for each. For more details and boilerplate code I recommend this tutorial.
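
A minimal sketch of that grid-search approach for the same 11-feature binary-classification setup as in the question; the hyperparameter names and value ranges below are arbitrary examples for illustration, not recommendations:

from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense

def create_model(hidden_layers = 1, hidden_nodes = 8):
    # Depth and width of the network are controlled by the hyperparameters.
    model = Sequential()
    model.add(Dense(hidden_nodes, input_dim = 11, kernel_initializer = 'uniform', activation = 'relu'))
    for _ in range(hidden_layers - 1):
        model.add(Dense(hidden_nodes, kernel_initializer = 'uniform', activation = 'relu'))
    model.add(Dense(1, kernel_initializer = 'uniform', activation = 'sigmoid'))
    model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
    return model

model = KerasClassifier(build_fn = create_model, batch_size = 10, verbose = 0)
param_grid = {
    'hidden_layers': [1, 2, 3],   # example values only
    'hidden_nodes': [8, 16],
    'epochs': [50, 100],
}
grid = GridSearchCV(estimator = model, param_grid = param_grid, cv = 3)
grid_result = grid.fit(x, y)      # x and y as loaded from BBCN.csv above
print("Best: %.4f using %s" % (grid_result.best_score_, grid_result.best_params_))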
