Issues integrating Keras with Spearmint - python

When I am using Spearmint to optimize the hyperparameters for Keras models, the first job runs fine, but from the second job onwards it always throws the following error:
<type 'exceptions.TypeError'>, TypeError('An update must have the same type as the original shared variable (shared_var=<TensorType(float32, matrix)>, shared_var.type=TensorType(float32, matrix), update_val=Elemwise{add,no_inplace}.0, update_val.type=TensorType(float64, matrix)).', 'If the difference is related to the broadcast pattern, you can call the tensor.unbroadcast(var, axis_to_unbroadcast[, ...]) function to remove broadcastable dimensions.'), <traceback object at 0x18a5c5710>)
I am using the following code to load pre-created numpy arrays of the train and test data. The params are passed in by the Spearmint optimization script. The same set of parameters works fine when run without Spearmint.
def load_train_data(arg_type, params=None):
    X_train1 = pickle.load(open(arg_type + "_train1", "rb"))
    X_train2 = pickle.load(open(arg_type + "_train2", "rb"))
    Y_train = pickle.load(open(arg_type + "_train_labels", "rb"))
    model = combined_model(X_train1, X_train2, Y_train, params)
    X_test1 = pickle.load(open(arg_type + "_test1", "rb"))
    X_test2 = pickle.load(open(arg_type + "_test2", "rb"))
    Y_test = pickle.load(open(arg_type + "_test_labels", "rb"))
    loss = model.evaluate({'input1': X_test1, 'input2': X_test2, 'output': Y_test}, batch_size=450)
    return loss

The variables I was setting through Spearmint had to be explicitly converted to basic Python datatypes using float() and int(). Spearmint passes values as numpy types (e.g. numpy.float64), which explains the float64 update to the float32 shared variable in the error above. Converting them solved the issue.
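For illustration, a minimal sketch of that conversion, assuming the standard Spearmint main(job_id, params) entry point and a params dict of one-element numpy arrays; the parameter names and the "speech" arg_type here are hypothetical:
import pickle

def sanitize_params(params):
    # Cast numpy scalars (e.g. numpy.float64) to plain Python types so that
    # Theano's float32 shared variables receive updates of a consistent type.
    return {
        'learning_rate': float(params['learning_rate'][0]),
        'n_hidden': int(params['n_hidden'][0]),
    }

def main(job_id, params):
    # Spearmint calls this once per job with the sampled hyperparameters.
    return load_train_data("speech", sanitize_params(params))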

Related

Keras Model Multi Input - TypeError: ('Keyword argument not understood:', 'input')

I am trying to build a CNN that receives multiple inputs, and I am attempting the following:
input = keras.Input()
classifier = keras.Model(inputs=input,output=classifier)
When I run the code, though, I receive the following error for line 6:
TypeError: ('Keyword argument not understood:', 'input').
A hint would be much appreciated, thank you!
Some parameters of your code are not specified, so I have copied your example with some placeholder numbers that you can change back.
import keras

input_dim_1 = 10
input1 = keras.layers.Input(shape=(input_dim_1, 1))
cnn_classifier_1 = keras.layers.Conv1D(64, 5, activation='sigmoid')(input1)
cnn_classifier_1 = keras.layers.Dropout(0.5)(cnn_classifier_1)
cnn_classifier_1 = keras.layers.Conv1D(48, 5, activation='sigmoid')(cnn_classifier_1)
cnn_classifier_1 = keras.models.Model(inputs=input1, outputs=cnn_classifier_1)
Some things to note:
- The imports of your layers were not right. You need to import the layers/models you want from the right places; you can check my code against yours to see this.
- With the functional API of Keras you do not need to specify the input shape as you have done in the first Conv1D layer; this is handled automatically.
- You need to correctly specify the keywords in Model, specifically inputs and outputs. Different versions of Keras use input/output or inputs/outputs as the keywords for the Model call.
Hey, it's simple: use the following code:
classifier = keras.Model(input, classifier)
instead of calling
classifier = keras.Model(inputs = input, output = classifier)
The issue seems to come from the keyword names changing in the latest versions of the Keras implementation.

Is there any way in Tensorflow 2.0 to pass a tensor as a parameter of a Matlab function while running the Matlab engine?

It is such a long title, but hopefully I will be able to explain myself properly in a few sentences:
I am trying to minimize a given score function using Tensorflow, inspired by what was published in Minimize a function of one variable in Tensorflow. The value of this score function is obtained by calling a Matlab script, which needs to be provided with only one parameter (related to the input variable, a tensor).
To do so I am using the beta version of Tensorflow 2.0, which includes a feature known as eager execution that allows access to the contents of each tensor without needing to run any session whatsoever.
Here is a sketch of my code:
import tensorflow as tf
import numpy as np
import matlab.engine

eng = matlab.engine.start_matlab()

def costFunction():
    z = tf.add(x, y).numpy()
    # Other parameters (Python lists) are passed to the Matlab script
    # alongside this one; they are omitted for the sake of simplicity.
    H = np.asarray(eng.matlabfunction(matlab.double(z.tolist()), ...))
    h = tf.convert_to_tensor(...)  # Retrieve the elements of H that I actually aim to maximize
    return h

x = tf.Variable(initial_value=tf.zeros([6, N], tf.float64), trainable=True)
opt = tf.optimizers.Adam(learning_rate=1e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-8)

iters = 1000
for i in range(iters):
    train = opt.minimize(costFunction, var_list=[x])
    if i % 100 == 0:
        print("Iteration {}, loss: {}".format(i + 1, costFunction()))
Sadly, this solution still does not work, as I receive the following error message as output:
ValueError: No gradients provided for any variable: ['Variable:0'].
After an exhaustive search, I think this problem is related to this old post (TensorFlow: 'ValueError: No gradients provided for any variable'), which was solved by performing the cost function's operations directly on the tensors. However, I have no option but to invoke this matlabfunction and use its output as the output of my cost function.
Do you have any ideas about how to overcome this?
Many thanks in advance, and may you all have a nice week!
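One workaround worth trying, sketched below under stated assumptions: since TensorFlow cannot differentiate through the Matlab call, wrap the external evaluation in tf.custom_gradient and supply finite-difference gradients yourself. This is only a sketch, not a definitive implementation; black_box() is a hypothetical stand-in for eng.matlabfunction(...), and the central-difference loop calls it twice per tensor element, which can be slow for large inputs.
import numpy as np
import tensorflow as tf

def black_box(z):
    # Hypothetical stand-in for the Matlab call: any opaque numpy
    # computation that returns a scalar score.
    return float(np.sum(np.sin(z)))

@tf.custom_gradient
def external_score(x):
    x_np = x.numpy()  # requires eager execution, as in TF 2.0
    y = black_box(x_np)

    def grad(dy):
        # Central finite differences: perturb each element of x in turn.
        eps = 1e-4
        g = np.zeros_like(x_np)
        it = np.nditer(x_np, flags=['multi_index'])
        while not it.finished:
            idx = it.multi_index
            x_plus = x_np.copy()
            x_plus[idx] += eps
            x_minus = x_np.copy()
            x_minus[idx] -= eps
            g[idx] = (black_box(x_plus) - black_box(x_minus)) / (2 * eps)
            it.iternext()
        return dy * tf.convert_to_tensor(g, dtype=x.dtype)

    return tf.convert_to_tensor(y, dtype=x.dtype), grad
The cost function would then return -external_score(tf.add(x, y)) (negated, since the aim is to maximize), so the tape differentiates the add while the wrapper supplies the numeric gradients of the external call.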

TypeError when using tf.keras.layers.Reshape

When building a model in Keras, I run into this error:
TypeError: Expected int32, got 8.0 of type 'float' instead.
The error occurs when initially building the model (as opposed to during execution), more specifically on the last line of this snippet:
d_dense1 = Dense(
    ((IMAGE_SIZE/4)**2) * (n if vanilla_architecture else 3*n),
    input_shape=(h,),
    activation="relu",
    name=name_prefix + "dense1"
)(d_in)
d_reshape1 = tf.keras.layers.Reshape(
    (IMAGE_SIZE/4, IMAGE_SIZE/4, (n if vanilla_architecture else 3*n)),
    name=name_prefix + "reshape1"
)(d_dense1)
Side note: I am using tf.keras.layers.Dense, IMAGE_SIZE is an integer, vanilla_architecture is a boolean, and n is an integer
Obviously the dense layer will pass along a tensor of floats because, well, it's a machine learning operation. The issue seems to be that Reshape requires a tensor of integers. I read the documentation but there is no information there.
Here are some things I've tried:
- using tf.reshape: same issue
- using numpy reshape: just plain doesn't work
- reading example code, like line 54 of this: they seem to be doing the same thing as me, but theirs works
The weird part is that it works just fine when eager execution is enabled. I don't want to enable eager execution, though, because I want to use TensorBoard.
The solution was to use integer division in this line:
(IMAGE_SIZE//4, IMAGE_SIZE//4, (n if vanilla_architecture else 3*n)),
The type error was not due to the tensor; it occurred because IMAGE_SIZE/4 returned a float, and Reshape expects integer dimensions.
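For reference, / in Python 3 is true division and always returns a float, even for integer operands, while // is floor division:
IMAGE_SIZE = 32
print(IMAGE_SIZE / 4)        # 8.0 -- true division yields a float
print(IMAGE_SIZE // 4)       # 8   -- floor division yields an int for int operands
print((IMAGE_SIZE // 4)**2)  # 64  -- safe to use as a layer dimension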

How to use different preprocessing functions in python3 using a for loop?

I am currently looking at scikit-learn's preprocessing functions.
I want to know if I can loop over a pre-defined list of preprocessing functions so that I don't have to write out the full setup code for each one.
E.g. code for one function:
T = preprocessing.MinMaxScaler()
X_train = T.fit_transform(X_train)
X_test = T.transform(X_test)
My attempt to loop over a pre-defined list so as to use different pre-processing functions:
pre_proc = ['Normalizer', 'MaxAbsScaler', 'MinMaxScaler', 'KernelCenterer', 'StandardScaler']
for proc in pre_proc:
    T = 'preprocessing.' + proc + '()'
    X_train = T.fit_transform(X_train)
    X_test = T.transform(X_test)
Currently this is yielding the following, which is not surprising:
--> 37 X_train = T.fit_transform(X_train)
38 X_test = T.transform(X_test)
39 for i in np.arange(startpt_c,endpt_c, step_c):
AttributeError: 'str' object has no attribute 'fit_transform'
I think I need to turn the string into the correct object type so that I can call the method on it, i.e., have it recognised as a function.
Is there a way I can do this that satisfies my objective of using a loop?
Setup: Windows 8, 64-bit machine running Python 3 via Jupyter notebook in Azure ML Studio.
The problem lies in this line of your code:
pre_proc = ['Normalizer','MaxAbsScaler','MinMaxScaler','KernelCenterer', ...
What you are doing here is creating a list, pre_proc, that is just a list of strings; Python has no idea that you actually meant them to be functions. So when you build T = 'preprocessing.' + proc + '()', T is still just a string, and a string has no method called fit_transform, hence the error. Instead of strings, use the actual classes, i.e., don't put them in quotes. Use them like so:
pre_proc = [preprocessing.Normalizer, preprocessing.MaxAbsScaler, preprocessing.MinMaxScaler, preprocessing.KernelCenterer, preprocessing.StandardScaler]
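Putting it together, the loop then instantiates each class before fitting. A minimal sketch (note that, as in the original code, reassigning X_train chains each transform onto the output of the previous one; start from fresh copies if the transforms should be compared independently):
from sklearn import preprocessing

pre_proc = [preprocessing.Normalizer, preprocessing.MaxAbsScaler,
            preprocessing.MinMaxScaler, preprocessing.KernelCenterer,
            preprocessing.StandardScaler]

for proc in pre_proc:
    T = proc()  # instantiate the transformer class
    X_train = T.fit_transform(X_train)
    X_test = T.transform(X_test)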

Different predictions on multiple runs of the same algorithm, scikit neural network

Since an MLP can implement any function, I have the following code, with which I am trying to implement the AND function. But I find that on running the program multiple times, I end up getting different predicted values. Why is this happening? Also, how does one determine which type of activation function to use at each layer?
from sknn.mlp import Regressor, Layer, Classifier
import numpy as np

X_train = np.array([[0,0],[0,1],[1,0],[1,1]])
y_train = np.array([0,0,0,1])
nn = Classifier(layers=[Layer("Softmax", units=2), Layer("Linear", units=2)],
                learning_rate=0.001, n_iter=25)
nn.fit(X_train, y_train)
X_example = np.array([[0,0],[0,1],[1,0],[1,1]])
y_example = nn.predict(X_example)
print(y_example)
- The different values obtained on every run are because your weights are randomly initialized.
- Activation functions have different properties. You can either use your experience to decide which is best for your situation, or you can read about how they work (https://stats.stackexchange.com/questions/115258/comprehensive-list-of-activation-functions-in-neural-networks-with-pros-cons).
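If you want repeatable runs, one common approach is to fix the random seed before constructing the network. A minimal sketch, under the assumption (not confirmed here) that sknn draws its initial weights from NumPy's global random state:
import numpy as np
from sknn.mlp import Classifier, Layer

np.random.seed(42)  # assumption: sknn's weight initialization uses NumPy's global RNG
nn = Classifier(layers=[Layer("Softmax", units=2), Layer("Linear", units=2)],
                learning_rate=0.001, n_iter=25)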
