I'm new to Keras. I built an autoencoder and trained it on part of the Diabetes dataset. Then I use a Keras ModelCheckpoint to save the weights so that I can load them later and perform some operations on the encoded data vectors (computing the mean of the encoded data to extract a class representation).
The problem
When I load the weights and then compute the encoded data, I get different results each time I run the code. After training the autoencoder, I commented out the compile and fit statements so that the training process is not repeated on every run.
Here is the code:
from keras.callbacks import ModelCheckpoint, TensorBoard
from keras.layers import Input, Dense
from keras.models import Model

checkpointer = ModelCheckpoint(filepath="weights.best.h5",
                               verbose=0,
                               save_best_only=True,
                               save_weights_only=True)
tensorboard = TensorBoard(log_dir='/tmp/autoencoder',
                          histogram_freq=0,
                          write_graph=True,
                          write_images=True)

# Encoder/decoder architecture:
input_enc = Input(shape=(input_size,))
hidden_1 = Dense(hidden_size1, activation='relu')(input_enc)
hidden_11 = Dense(hidden_size2, activation='relu')(hidden_1)
code = Dense(code_size, activation='relu')(hidden_11)
hidden_22 = Dense(hidden_size2, activation='relu')(code)
hidden_2 = Dense(hidden_size1, activation='relu')(hidden_22)
output_enc = Dense(input_size, activation='tanh')(hidden_2)

autoencoder_yes = Model(input_enc, output_enc)
autoencoder_yes.compile(optimizer='adam',
                        loss='mean_squared_error',
                        metrics=['accuracy'])
history_yes = autoencoder_yes.fit(df_noyau_norm_y, df_noyau_norm_y,
                                  epochs=200,
                                  batch_size=batch_size,
                                  shuffle=True,
                                  validation_data=(df_test_norm_y, df_test_norm_y),
                                  verbose=1,
                                  callbacks=[checkpointer, tensorboard]).history

autoencoder_yes.save_weights("weights.best.h5")
autoencoder_yes.load_weights("weights.best.h5")

# Encoder sub-model that outputs the code layer:
encoder_yes = Model(inputs=input_enc, outputs=code)
encoded_input = Input(shape=(code_size, ))  # (unused)
encoded_data_yes = encoder_yes.predict(df_noyau_norm_y)
print(encoded_data_yes.tolist())

# Mean of the 7412 encoded samples:
X_YES = sum(encoded_data_yes) / 7412
print(X_YES)
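On later runs I keep the compile/fit lines commented out and execute only the part below. A minimal sketch of that reload-only path, assuming the architecture definition above is re-run unchanged; note that save_weights has to stay commented out as well, since calling it on a freshly initialized model would overwrite the checkpoint:

# Rebuild the same architecture, then restore the trained weights:
autoencoder_yes = Model(input_enc, output_enc)
autoencoder_yes.load_weights("weights.best.h5")

encoder_yes = Model(inputs=input_enc, outputs=code)
encoded_data_yes = encoder_yes.predict(df_noyau_norm_y)

# Mean of the encoded samples (equivalent to sum(...) / 7412):
X_YES = encoded_data_yes.mean(axis=0)
print(X_YES)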
Can anybody help me find out the reason and how to resolve this issue?
Thanks
I am working on a multitask problem and I want to define the appropriate train/test generators. So far I have been working on a classification and a regression task separately, so for the classification task I would write e.g.:
train_generator=img_gen.flow_from_dataframe(dataframe=train_dataset,x_col="file_loc",y_col="expr",target_size=(96, 96),batch_size=203,class_mode="raw")
test_generator=img_gen.flow_from_dataframe(dataframe=test_dataset_va,x_col="file_loc",y_col="expr",target_size=(96, 96),batch_size=93,shuffle=False,class_mode="raw")
and for the regression task:
train_generator=img_gen.flow_from_dataframe(dataframe=train_dataset,x_col="file_loc",y_col=["valence","arousal"],target_size=(96, 96),batch_size=203,class_mode="raw")
test_generator=img_gen.flow_from_dataframe(dataframe=test_dataset_va,x_col="file_loc",y_col=["valence","arousal"],target_size=(96, 96),batch_size=93,shuffle=False,class_mode="raw")
My data looks like below:
file_loc expr valence arousal
0 /content/train_set/images/0.jpg 1 0.785714 -0.055556
1 /content/train_set/images/100000.jpg 1 0.784476 -0.137627
I tried writing the train generator for the multitask case like this:
train_generator = img_gen.flow_from_dataframe(dataframe=train_dataset, x_col="file_loc",
                                              y_col=["expr", "valence", "arousal"],
                                              target_size=(96, 96), batch_size=203, class_mode="raw")
but it produces an error, so I am sure it is not the right way. Any ideas?
resnet = tf.keras.applications.ResNet50(
    include_top=False,
    weights='imagenet',
    input_shape=(96, 96, 3),
    pooling="avg"
)

for layer in resnet.layers:
    layer.trainable = True

inputs = Input(shape=(96, 96, 3), name='main_input')
main_branch = resnet(inputs)
main_branch = Flatten()(main_branch)
# fully connected, few units
expr_branch = Dense(8, activation='softmax', name='expr_output')(main_branch)
va_branch = Dense(2, name='va_output')(main_branch)

model = Model(inputs=inputs,
              outputs=[expr_branch, va_branch])
plot_model(model)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
              loss={'expr_output': 'sparse_categorical_crossentropy',
                    'va_output': 'mean_squared_error'},
              metrics={'expr_output': 'accuracy',
                       'va_output': tf.keras.metrics.MeanSquaredError()})

history = model.fit_generator(
    train_generator,
    epochs=2,
    steps_per_epoch=STEP_SIZE_TRAIN_resnet,
    validation_data=test_generator,
    validation_steps=STEP_SIZE_TEST_resnet,
    max_queue_size=1,
    shuffle=True,
    verbose=1
)
When I put class_mode="raw" the error is:
raw classmode
and when I put class_mode="multi_output" it says:
multi_output classmode
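In case it helps, the workaround I am experimenting with (an untested sketch, assuming class_mode="raw" with y_col=["expr", "valence", "arousal"] yields a (batch, 3) label array) is to keep the raw generator and split the labels into the two named heads myself:

def split_labels(base_generator):
    # Yield (images, {output_name: labels}) pairs for the two heads:
    for x, y in base_generator:
        yield x, {"expr_output": y[:, 0], "va_output": y[:, 1:3]}

history = model.fit_generator(
    split_labels(train_generator),
    epochs=2,
    steps_per_epoch=STEP_SIZE_TRAIN_resnet,
    validation_data=split_labels(test_generator),
    validation_steps=STEP_SIZE_TEST_resnet,
    verbose=1
)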
I'm trying to use a SMILES2vec model from DeepChem to reproduce the regression results from the original paper (https://arxiv.org/pdf/1712.02034.pdf). To this end, rather than using the model directly from DeepChem, I put the model together with TensorFlow's Sequential API, using the architecture that feeds the embeddings into a 1D convolution and two LSTMs. I don't get any errors, but I've used the coefficient of determination as my error metric and it comes out negative. This happened regardless of whether I tried Bidirectional() on the LSTMs or switched from MSE to MAE loss, and I'm still not sure what to do. The dataset I'm training on is FreeSolv from DeepChem.
!pip install --pre deepchem
!pip install rdkit-pypi
!pip install tensorflow-addons

import itertools
import numpy as np
import deepchem as dc
import tensorflow as tf
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint

tasks, dataset, transformers = dc.molnet.load_freesolv()
train_dataset, valid_dataset, test_dataset = dataset
smiles_list = [x for x in itertools.chain(train_dataset.ids, valid_dataset.ids, test_dataset.ids)]
charset = set("".join(list(smiles_list)) + "!E")
char_to_int = dict((c, i) for i, c in enumerate(charset))
int_to_char = dict((i, c) for i, c in enumerate(charset))
embed = max([len(smile) for smile in smiles_list]) + 5

## Converts SMILES strings to embeddings or vectors
def vectorize(smiles):
    one_hot = np.zeros((smiles.shape[0], embed, len(charset)), dtype=np.int8)
    for i, smile in enumerate(smiles):
        # encode the start char
        one_hot[i, 0, char_to_int["!"]] = 1
        # encode the rest of the chars
        for j, c in enumerate(smile):
            one_hot[i, j + 1, char_to_int[c]] = 1
        # encode the end char
        one_hot[i, len(smile) + 1:, char_to_int["E"]] = 1
    return one_hot[:, 0:-1, :], one_hot[:, 1:, :]

def get_lr_metric(optimizer):
    def lr(y_true, y_pred):
        return optimizer.lr
    return lr

# Prepare features for SMILES2vec
X_train, _ = vectorize(train_dataset.ids)
X_valid, _ = vectorize(valid_dataset.ids)
X_test, _ = vectorize(test_dataset.ids)
Y_train = train_dataset.y
Y_valid = valid_dataset.y
Y_test = test_dataset.y
vocab_size = len(charset)

## Build model
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(vocab_size, 50, input_length=embed - 1))
model.add(tf.keras.layers.Conv1D(192, 10, activation='relu'))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.LSTM(224, return_sequences=True))
model.add(tf.keras.layers.LSTM(384, return_sequences=True))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(100, activation='relu'))
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Dense(1, activation='linear'))

## Coefficient of determination metric
def coeff_determination(y_true, y_pred):
    from tensorflow.keras import backend as K
    SS_res = K.sum(K.square(y_true - y_pred))
    SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
    return 1 - SS_res / (SS_tot + K.epsilon())

optimizer = tf.keras.optimizers.RMSprop()
lr_metric = get_lr_metric(optimizer)
model.compile(loss="mae", optimizer=optimizer,
              metrics=[tf.keras.metrics.RootMeanSquaredError(), coeff_determination, lr_metric])

callbacks_list = [
    ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=1e-3,
                      verbose=1, mode='auto', cooldown=0),
    ModelCheckpoint(filepath="weights.best.hdf5", monitor='val_loss',
                    save_best_only=True, verbose=1, mode='auto')
]

history = model.fit(x=np.argmax(X_train, axis=2), y=Y_train,
                    batch_size=32,
                    epochs=50,
                    validation_data=(np.argmax(X_valid, axis=2), Y_valid),
                    callbacks=callbacks_list)
If it helps at all, I'm running this in a Google Colab notebook: https://colab.research.google.com/drive/1pJ25THeefBWUpe73cL_1LNnq45Pd95XZ?usp=sharing
As to why I'm not using the DeepChem implementation of SMILES2vec: I wanted to (1) get more hands-on experience building models with TensorFlow, and (2) I struggled to get the DeepChem implementation running just from DeepChem's documentation (https://deepchem.readthedocs.io/en/latest/api_reference/models.html). However, I want to focus on why my coefficient of determination is reaching negative scores. Additionally, I've been using this notebook for reference, which uses a 'proof of concept' implementation of SMILES2vec where the LSTMs are replaced with 1D conv layers (https://github.com/Abdulk084/Smiles2vec/blob/master/smiles2vec.ipynb).
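One sanity check I still want to run (a short sketch, assuming the trained model and vectorized arrays above): Keras evaluates coeff_determination batch-by-batch against each batch's own mean and then averages the results, so it can come out negative even when the whole-dataset R² is reasonable. Scoring the full test set at once avoids that artifact:

from sklearn.metrics import r2_score

test_preds = model.predict(np.argmax(X_test, axis=2))
print("Full test-set R^2:", r2_score(Y_test, test_preds))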
I am trying to create a dataset for audio recognition with a simple Keras sequential model.
This is the function I am using to create the model:
def dnn_model(input_shape, output_shape):
    model = keras.Sequential()
    model.add(keras.Input(input_shape))
    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation="relu"))
    model.add(layers.Dense(output_shape, activation="softmax"))
    model.compile(optimizer='adam',
                  # from_logits=False since the last layer already applies softmax
                  loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
                  metrics=['acc'])
    model.summary()
    return model
And I am generating my training data with this generator function:
def generator(x_dirs, y_dirs, hmm, sampling_rate, parameters):
    window_size_samples = tools.sec_to_samples(parameters['window_size'], sampling_rate)
    window_size_samples = 2**tools.next_pow2(window_size_samples)
    hop_size_samples = tools.sec_to_samples(parameters['hop_size'], sampling_rate)
    for i in range(len(x_dirs)):
        features = fe.compute_features_with_context(x_dirs[i], **parameters)
        praat = tools.praat_file_to_target(y_dirs[i],
                                           sampling_rate,
                                           window_size_samples,
                                           hop_size_samples,
                                           hmm)
        yield features, praat
The variables x_dirs and y_dirs contain lists of paths to labels and audio files. In total I have 8623 files to train my model. This is how I train it:
def train_model(model, model_dir, x_dirs, y_dirs, hmm, sampling_rate, parameters, steps_per_epoch=10, epochs=10):
    model.fit((generator(x_dirs, y_dirs, hmm, sampling_rate, parameters)),
              epochs=epochs,
              batch_size=steps_per_epoch)
    return model
My problem now is that if I pass all 8623 files, it will use all 8623 files to train the model in the first epoch and then complain after the first epoch that it needs steps_per_epoch * epochs batches to train the model.
I tested this with only 10 of the 8623 files using a sliced list, but then TensorFlow complains that 100 batches are needed.
So how do I make my generator yield data so that it works best? I always thought that steps_per_epoch just limits the data received per epoch.
The fit function is going to exhaust your generator; that is to say, once it has yielded all your 8623 batches, it won't be able to yield batches anymore.
You can solve the issue like this:
def generator(x_dirs, y_dirs, hmm, sampling_rate, parameters, epochs=1):
    for epoch in range(epochs):  # or: while True:
        window_size_samples = tools.sec_to_samples(parameters['window_size'], sampling_rate)
        window_size_samples = 2**tools.next_pow2(window_size_samples)
        hop_size_samples = tools.sec_to_samples(parameters['hop_size'], sampling_rate)
        for i in range(len(x_dirs)):
            features = fe.compute_features_with_context(x_dirs[i], **parameters)
            praat = tools.praat_file_to_target(y_dirs[i],
                                               sampling_rate,
                                               window_size_samples,
                                               hop_size_samples,
                                               hmm)
            yield features, praat
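Alternatively, you can wrap the generator in a tf.data.Dataset and let repeat() handle re-iteration indefinitely. A sketch, assuming TF ≥ 2.4 for output_signature; the shapes and dtypes below are assumptions, so adjust them to what compute_features_with_context and praat_file_to_target actually return:

import tensorflow as tf

dataset = tf.data.Dataset.from_generator(
    lambda: generator(x_dirs, y_dirs, hmm, sampling_rate, parameters),
    output_signature=(
        tf.TensorSpec(shape=(None, None), dtype=tf.float32),  # features per file (assumed)
        tf.TensorSpec(shape=(None, None), dtype=tf.float32),  # targets per file (assumed)
    ),
).repeat()  # restarts the generator, so it never runs out of data

# steps_per_epoch now defines what counts as one epoch:
model.fit(dataset, epochs=10, steps_per_epoch=10)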
I am currently trying to build a model to classify whether the outcome of a given football match will be above or below 2.5 goals, based on the home team, away team & game league, using a tf.keras.Sequential model in TensorFlow 2.0RC.
The problem I am encountering is that my softmax results converge on [0.5, 0.5] when using the model.predict method. What makes this odd is that my validation & test accuracy and losses are about 0.94 & 0.12 respectively after 1000 epochs of training; otherwise I would have put this down to an overfitting problem. I am aware that 1000 epochs is extremely likely to overfit; however, I want to understand why my accuracy increases until about 800 epochs in. My loss flattens at about 300 epochs.
I have tried to alter the number of layers, number of units in each layer, the activation functions, optimizers and loss functions, number of epochs and learning rates, but can only seem to increase the losses.
The results still seem to converge toward [0.5,0.5] regardless.
The full code can be viewed at https://github.com/AhmUgEk/tensorflow_football_predictions, but below is an extract showing model composition.
# Create Keras Sequential model:
model = keras.Sequential()
model.add(feature_layer)  # Input processing layer.
model.add(Dense(units=32, activation='relu'))  # Hidden layer 1.
model.add(Dropout(rate=0.4))
model.add(BatchNormalization())
model.add(Dense(units=32, activation='relu'))  # Hidden layer 2.
model.add(Dropout(rate=0.4))
model.add(BatchNormalization())
model.add(Dense(units=2, activation='softmax'))  # Output layer.

# Compile the model:
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    loss=keras.losses.MeanSquaredLogarithmicError(),
    metrics=['accuracy']
)
# Fit the model to the training dataset and validate against the
# validation dataset between epochs:
model.fit(
    train_dataset,
    validation_data=val_dataset,
    epochs=1000,
    callbacks=[tensorboard_callback]
)
I would expect to receive a result of, for example, [0.282, 0.718] for an input of:
model.predict_classes([np.array(['E0'], dtype='object'),
                       np.array(['Liverpool'], dtype='object'),
                       np.array(['Newcastle'], dtype='object')])[0]
but, as per the above, I receive a result of, say, [0.5, 0.5].
Am I missing something obvious here?
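One thing I plan to try in the meantime (a sketch, assuming the integer 0/1 labels described above) is swapping the regression-style MSLE loss for a classification loss, since MSLE paired with a softmax output is an unusual choice for a two-class problem:

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.0001),
    loss=keras.losses.SparseCategoricalCrossentropy(),
    metrics=['accuracy']
)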
I made some minor changes to the model. Now I am not getting exactly [0.5, 0.5].
Result:
[[0.61482537 0.3851746 ]
[0.5121426 0.48785746]
[0.48058605 0.51941395]
[0.48913187 0.51086813]
[0.45480043 0.5451996 ]
[0.48933673 0.5106633 ]
[0.43431875 0.5656812 ]
[0.55314165 0.4468583 ]
[0.5365097 0.4634903 ]
[0.54371756 0.45628244]]
Implementation:
import datetime
import os
import numpy as np
import pandas as pd
import tensorflow as tf
from gpu_limiter import limit_gpu
from pipe_functions import csv_to_df, dataframe_to_dataset
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras.layers import BatchNormalization, Dense, DenseFeatures, Dropout, Input
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint
import tensorflow.keras.backend as K
from tensorflow.data import Dataset
# Test GPU availability and instantiate memory growth limitation if True:
if tf.test.is_gpu_available():
    print('GPU Available\n')
    limit_gpu()
else:
    print('Running on CPU')
df = csv_to_df("./csv_files")
# Format & organise imported data, making the "Date" column the new index:
df['Date'] = pd.to_datetime(df['Date'])
df = df[['Date', 'Div', 'HomeTeam', 'AwayTeam', 'FTHG', 'FTAG']].dropna().set_index('Date').sort_index()
df['Over_2.5'] = (df['FTHG'] + df['FTAG'] > 2.5).astype(int)
df = df.drop(['FTHG', 'FTAG'], axis=1)
# Split data into training, validation and testing data:
# Note: random_state variable set to ensure reproducibility.
train, test = train_test_split(df, test_size=0.05, random_state=42)
train, val = train_test_split(train, test_size=0.05, random_state=42)
# print(df['Over_2.5'].value_counts()) # Check that data is balanced.
# Create datasets from train, val & test dataframes:
target_col = 'Over_2.5'
batch_size = 32
def df_to_dataset(features: np.ndarray, labels: np.ndarray, shuffle=True, batch_size=8) -> Dataset:
    ds = Dataset.from_tensor_slices(({"feature": features}, {"target": labels}))
    if shuffle:
        ds = ds.shuffle(buffer_size=len(features))
    ds = ds.batch(batch_size)
    return ds

def get_feature_transform() -> DenseFeatures:
    # Format features into feature columns to ensure data is in the correct format for feeding into the model:
    feature_cols = []
    for column in filter(lambda x: x != target_col, df.columns):
        feature_cols.append(tf.feature_column.embedding_column(
            tf.feature_column.categorical_column_with_vocabulary_list(
                key=column, vocabulary_list=df[column].unique()), dimension=5))
    return DenseFeatures(feature_cols)
# Transforms all features into dense tensors.
feature_transform = get_feature_transform()
train_features = feature_transform(dict(train)).numpy()
val_features = feature_transform(dict(val)).numpy()
test_features = feature_transform(dict(test)).numpy()
train_dataset = df_to_dataset(train_features, train[target_col].values, shuffle=True, batch_size=batch_size)
val_dataset = df_to_dataset(val_features, val[target_col].values, shuffle=True, batch_size=batch_size)  # Shuffling not required for validation data.
test_dataset = df_to_dataset(test_features, test[target_col].values, shuffle=True, batch_size=batch_size)  # Shuffling not required for test data.
# Create Keras Functional API:
# Create a feature layer from the feature columns, to be placed at the input layer of the model:
def build_model(input_shape: tuple) -> keras.Model:
    input_layer = keras.Input(shape=input_shape, name='feature')
    model = Dense(units=1028, activation='relu', kernel_initializer='normal', name='dense0')(input_layer)  # Hidden layer 1.
    model = BatchNormalization(name='bc0')(model)
    model = Dense(units=1028, activation='relu', kernel_initializer='normal', name='dense1')(model)  # Hidden layer 2.
    model = Dropout(rate=0.1)(model)
    model = BatchNormalization(name='bc1')(model)
    model = Dense(units=100, activation='relu', kernel_initializer='normal', name='dense2')(model)  # Hidden layer 3.
    model = Dropout(rate=0.25)(model)
    model = BatchNormalization(name='bc2')(model)
    model = Dense(units=50, activation='relu', kernel_initializer='normal', name='dense3')(model)  # Hidden layer 4.
    model = Dropout(rate=0.4)(model)
    model = BatchNormalization(name='bc3')(model)
    output_layer = Dense(units=2, activation='softmax', kernel_initializer='normal', name='target')(model)  # Output layer.

    model = keras.Model(inputs=input_layer, outputs=output_layer, name='better-than-chance')

    # Compile the model:
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=0.001),
        loss='mse',
        metrics=['accuracy']
    )
    return model
# # Create a TensorBoard log file (time appended) directory for every run of the model:
# directory = ".\\logs\\" + str(datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
# os.mkdir(directory)
# # Create a TensorBoard callback to log a record of model performance for every 1 epoch:
# tensorboard_callback = TensorBoard(log_dir=directory, histogram_freq=1, write_graph=True, write_images=True)
# Run "tensorboard --logdir .\logs" in anaconda prompt to review & compare logged results.
# Note: Make sure that the correct environment is activated before running.
model = build_model((train_features.shape[1],))
model.summary()
# checkpoint = ModelCheckpoint('model-{epoch:03d}.h5', verbose=1, monitor='val_loss',save_best_only=True, mode='auto')
# Fit the model to the training dataset and validate against the validation dataset between epochs:
model.fit(
    train_dataset,
    validation_data=val_dataset,
    epochs=10)
# callbacks=[checkpoint]
# Saves and reloads model.
# model.save("./model.h5")
# model_from_saved = keras.models.load_model("./model.h5")
# Evaluate model accuracy against test dataset:
# scores, accuracy = model.evaluate(train_dataset)
# print('Accuracy:', accuracy)
##############
## OPTIONAL ##
##############
# DEBUGGING
# inp = model.input # input placeholder
# outputs = [layer.output for layer in model.layers] # all layer outputs
# functors = [K.function([inp], [out]) for out in outputs] # evaluation functions
# # Testing
# layer_outs = [func([test_features]) for func in functors]
# print(layer_outs)
# Form a prediction based on inputs:
prediction = model.predict({"feature": test_features[:10]})
print(prediction)
One thing you can do is to try some ensemble learning methods like RandomForest and XGBoost and compare the results.
You should also try adding other key performance indicators (KPIs) to your data and then try to fit the model.
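For example, a quick baseline comparison with scikit-learn (a minimal sketch, assuming the dense train_features/test_features arrays and target column from the code above):

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(train_features, train[target_col].values)
print("RandomForest test accuracy:", rf.score(test_features, test[target_col].values))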
I am trying to use gcloud ml-engine with TensorFlow; more precisely, I would like to use an already trained Keras model.
I managed to do this with a scikit-learn model, but it is not the same here...
First I train a simple model with Keras:
import numpy as np
from tensorflow import keras

# Creating the dataset
X = np.random.random((500, 9))
y = (np.random.random(500) > 0.5).astype(int)

# Splitting
idx_train, idx_test = np.arange(400), np.arange(400, 500)
X_train, X_test = X[idx_train], X[idx_test]
y_train, y_test = y[idx_train], y[idx_test]

def define_model():
    input1 = keras.layers.Input(shape=(9,), name="values")
    hidden = keras.layers.Dense(50, activation='relu', name="hidden")(input1)
    preds = keras.layers.Dense(1, activation='sigmoid', name="labels")(hidden)
    model = keras.models.Model(inputs=input1,
                               outputs=preds)
    model.compile(loss='binary_crossentropy',
                  optimizer='adam',
                  metrics=["accuracy"])
    model.summary()
    return model

model = define_model()
model.fit(X_train, y_train,
          batch_size=10,
          epochs=10, validation_split=0.2)
I read that I need a SavedModel to use it in ml-engine, as described here: https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models
It seems I have to transform it into an estimator:
model.save("./model_trained_test.h5")
estimator_model = keras.estimator.model_to_estimator(keras_model_path="./model_trained_test.h5")
I managed to make predictions with this estimator:
def input_function(features, labels=None, shuffle=False):
    input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"values": features},
        y=labels,
        shuffle=shuffle
    )
    return input_fn

score = estimator_model.evaluate(input_function(X_test, labels=y_test.reshape(-1, 1)))
In order to export it to a SavedModel, I need a serving_input_receiver_fn. I did not find an example of my situation on the internet, even though it seemed simple to me, so I tried this function and then saved the model in the "here_are_estimators" folder:
feature_spec = {'values': tf.FixedLenFeature(9, dtype=tf.float32)}

def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.string,
                                           shape=[None],
                                           name='input_tensors')
    receiver_tensors = {'examples': serialized_tf_example}
    features = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

estimator_model.export_savedmodel("./here_are_estimators",
                                  serving_input_receiver_fn=serving_input_receiver_fn)
My input.json looks like this:
{"examples":[{"values":[[0.2,0.3,0.4,0.5,0.9,1.5,1.6,7.3,1.5]]}]}
I uploaded the content of the generated folder, a variables folder and a saved_model.pb file, to GCS in the directory DEPLOYMENT_SOURCE.
When I try to run a local prediction with gcloud with this command:
gcloud ml-engine local predict --model-dir $DEPLOYMENT_SOURCE --json-instances="input.json" --verbosity debug --framework tensorflow
I get this error:
cloud.ml.prediction.prediction_utils.PredictionError: Failed to run the provided model: Exception during running the graph: Cannot feed value of shape (1, 1) for Tensor 'input_tensors:0', which has shape '(?,)' (Error code: 2)
I guess something is wrong with my input.json or the serving_input_receiver_fn, or both, but I can't find out what. If someone can tell me what is wrong, it would be much appreciated :)
You shouldn't be trying to parse tf.Example since you are sending JSON. Try this for the export:
def serving_input_receiver_fn():
    inputs = {"values": tf.placeholder(dtype=tf.float32,
                                       shape=[None, 9],
                                       name='input_tensors')}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

estimator_model.export_savedmodel("./here_are_estimators",
                                  serving_input_receiver_fn=serving_input_receiver_fn)
The input should look like:
{"values":[0.2,0.3,0.4,0.5,0.9,1.5,1.6,7.3,1.5]}
There's also a more concise "shorthand":
[0.2,0.3,0.4,0.5,0.9,1.5,1.6,7.3,1.5]
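With that export, --json-instances expects one JSON instance per line of the file, so a usage sketch (reusing the paths and command from the question) would be an input.json containing:

{"values": [0.2, 0.3, 0.4, 0.5, 0.9, 1.5, 1.6, 7.3, 1.5]}

followed by the same local prediction call:

gcloud ml-engine local predict --model-dir $DEPLOYMENT_SOURCE --json-instances=input.json --framework tensorflow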