I need help figuring this out. I'm not sure what went wrong, but the error persists. I've looked around but can't find a similar issue.
import matplotlib.pyplot as plt
from PIL import Image
import os
import numpy as np
from skimage import io
from keras.preprocessing.image import ImageDataGenerator
from matplotlib import cm
from mpl_toolkits.axes_grid1 import ImageGrid
import math
%matplotlib inline
import keras
import tensorflow as tf
from keras.models import Model
batch_size=32
datagen_args = dict(rotation_range=20,
                    width_shift_range=0.2,
                    height_shift_range=0.2,
                    rescale=1./255)
datagen = ImageDataGenerator(**datagen_args)
train_datagenerator = datagen.flow_from_directory('/content/drive/MyDrive/cats_dogs_small/train',
                                                  target_size=(128,128), batch_size=batch_size,
                                                  interpolation="lanczos", shuffle=True)
valid_datagenerator = datagen.flow_from_directory('/content/drive/MyDrive/cats_dogs_small/validation',
                                                  target_size=(128,128), batch_size=batch_size,
                                                  interpolation="lanczos", shuffle=True)
epochs = 25
hist = Model.fit_generator(train_datagenerator,
                           steps_per_epoch=math.ceil(train_datagenerator.samples//batch_size),
                           epochs=epochs, validation_data=valid_datagenerator, validation_steps=math.ceil(valid_datagenerator.samples//batch_size), verbose=1, workers=8)
The error message is as follows:
TypeError Traceback (most recent call last)
<ipython-input-69-178574fd407f> in <module>()
2 hist = Model.fit_generator(train_datagenerator,
3 steps_per_epoch= math.ceil(train_datagenerator.samples//batch_size),
----> 4 epochs=epochs, validation_data=valid_datagenerator, validation_steps=math.ceil(valid_datagenerator.samples//batch_size),verbose = 1, workers=8)
TypeError: fit_generator() missing 1 required positional argument: 'generator'
fit_generator is deprecated; recent versions of Keras accept generators directly in model.fit. Also note that you wrote Model.fit_generator, calling the method on the Model class itself rather than on a model instance, which is why the generator argument is reported as missing. Define and compile a model first, then call model.fit on that instance.
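For example, here is a minimal sketch of what that could look like with the generators above; the small CNN architecture is only an assumption for illustration, not the asker's actual model:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Illustrative model only; replace with your own architecture.
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(2, activation='softmax')  # flow_from_directory defaults to class_mode='categorical'
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Recent Keras versions accept generators directly in model.fit.
hist = model.fit(train_datagenerator,
                 steps_per_epoch=math.ceil(train_datagenerator.samples / batch_size),
                 epochs=epochs,
                 validation_data=valid_datagenerator,
                 validation_steps=math.ceil(valid_datagenerator.samples / batch_size),
                 verbose=1)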
I realized I had not defined the model. I added the layers, compiled, tried again, and it works this time. I'm new to this and still have lots to learn!
I am trying to import all 60,000 images (50,000 for training and 10,000 for testing) from a directory (whose location is known) in Python, for image classification using TensorFlow. I want to import all the images and split them into training and test sets.
path = /home/user/mydirectory
This is the code I tried:
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
from keras.utils import np_utils
from PIL import Image
import glob
image_list = []
for filename in glob.glob(r'/home/user/mydirectory*.gif'):  # assuming GIF images
    im = Image.open(filename)
    image_list.append(im)
(x_train, y_train)=image_list()
(x_test, y_test)=image_list()
However, the error is TypeError: 'list' object is not callable...
In your code you are calling the list as if it were a function, which is what throws the error.
I think the code below can help you get the two subsets of your data.
import numpy as np
import random
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
from keras.utils import np_utils
from PIL import Image
import glob
image_list = []
for filename in glob.glob(r'/home/user/mydirectory*.gif'):  # assuming GIF images
    im = Image.open(filename)
    image_list.append(im)

random.shuffle(image_list)  # shuffles the list in place

n = len(image_list)
train_data_len = int(n * 0.83)            # roughly 50k of the 60k images
train_data = image_list[:train_data_len]  # images up to the 50k index
test_data = image_list[train_data_len:]   # the remaining images
There are other ways to process the data; I found this an easy one to begin with.
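As a follow-up note, Keras ultimately needs NumPy arrays rather than PIL Image objects, so one possible (assumed) conversion step, using an illustrative 128x128 target size, would be:
target_size = (128, 128)  # assumed input size; adjust to your model
x_train = np.stack([np.asarray(img.convert('RGB').resize(target_size)) for img in train_data]) / 255.0
x_test = np.stack([np.asarray(img.convert('RGB').resize(target_size)) for img in test_data]) / 255.0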
In the last two lines you should have
... = image_list
instead of
... = image_list()
since image_list is a list, not a callable function.
You are calling the list as image_list() in the last two lines. You are also trying to split the data into inputs and labels, but no category has been attached to image_list.
You can create one folder per category; for example, if you have two classes, dog and cat, you can lay out the training directory like:
home/usr/mydirectory/train/cat
home/usr/mydirectory/train/dog
and then use ImageDataGenerator:
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
training_set = train_datagen.flow_from_directory(
    'home/usr/mydirectory/train/',
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')
Similarly, for the test data, you can create a test directory and store the images with the category as the folder name.
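For example (assuming a home/usr/mydirectory/test/ folder laid out the same way, with one sub-folder per class), the test set can be loaded like this:
test_datagen = ImageDataGenerator(rescale=1./255)  # no augmentation for the test data
test_set = test_datagen.flow_from_directory(
    'home/usr/mydirectory/test/',
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')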
I'm trying to run a voice recognition code from GitHub HERE that analyzes voice. There is an example in final_results_gender_test.ipynb that illustrates the steps for both training and inference. So I copied and adjusted the inference part and came up with the following code, which uses the trained model for inference only. But I'm not sure why I get this error complaining that This LabelEncoder instance is not fitted yet.
How can I fix the problem? I'm only doing inference, so why do I need the fit?
Traceback (most recent call last):
File "C:\Users\myname\Documents\Speech-Emotion-Analyzer-master\audio.py", line 53, in <module>
livepredictions = (lb.inverse_transform((liveabc)))
File "C:\Users\myname\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\preprocessing\label.py", line 272, in inverse_transform
check_is_fitted(self, 'classes_')
File "C:\Users\myname\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\utils\validation.py", line 914, in check_is_fitted
raise NotFittedError(msg % {'name': type(estimator).__name__})
sklearn.exceptions.NotFittedError: This LabelEncoder instance is not fitted yet. Call 'fit' with appropriate arguments before using this method.
Here is my copied/adjusted code from the notebook:
import os
from keras import regularizers
import keras
from keras.callbacks import ModelCheckpoint
from keras.layers import Conv1D, MaxPooling1D, AveragePooling1D, Dense, Embedding, Input, Flatten, Dropout, Activation, LSTM
from keras.models import Model, Sequential, model_from_json
from keras.preprocessing import sequence
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.text import Tokenizer
from keras.utils import to_categorical
import librosa
import librosa.display
from matplotlib.pyplot import specgram
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import LabelEncoder
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
opt = keras.optimizers.rmsprop(lr=0.00001, decay=1e-6)
lb = LabelEncoder()
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("saved_models/Emotion_Voice_Detection_Model.h5")
print("Loaded model from disk")
X, sample_rate = librosa.load('h04.wav', res_type='kaiser_fast', duration=2.5, sr=22050*2, offset=0.5)
sample_rate = np.array(sample_rate)
mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=13), axis=0)
featurelive = mfccs
livedf2 = featurelive
livedf2 = pd.DataFrame(data=livedf2)
livedf2 = livedf2.stack().to_frame().T
twodim = np.expand_dims(livedf2, axis=2)
livepreds = loaded_model.predict(twodim, batch_size=32, verbose=1)
livepreds1 = livepreds.argmax(axis=1)
liveabc = livepreds1.astype(int).flatten()
livepredictions = (lb.inverse_transform((liveabc)))
print(livepredictions)
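The error happens because lb = LabelEncoder() creates a new, empty encoder, and inverse_transform can only map integers back to label names after the encoder has been fitted on the label set. One way to handle this at inference time (an assumption, since the original notebook's encoder handling is not shown, and the file name below is hypothetical) is to save the encoder fitted during training and reload it:
import joblib

# During training (in the original notebook), after fitting lb on the emotion labels:
# joblib.dump(lb, 'label_encoder.joblib')

# At inference time, load the already-fitted encoder instead of creating an empty one:
lb = joblib.load('label_encoder.joblib')  # hypothetical file name
livepredictions = lb.inverse_transform(liveabc)
print(livepredictions)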
I've been following a tutorial I found online about speech analysis with deep learning, and it keeps giving me the NameError. I'm quite new to Python, so I'm not sure how to define it, but train_test_split is the standard function for splitting the data, and it is imported.
Here is the code:
import numpy as np
import pandas as pd
import os
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
from tqdm import tqdm
print(os.listdir("../input"))
from keras import Sequential
from keras import optimizers
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential,Model
from keras.layers import LSTM, Dense, Bidirectional, Input,Dropout,BatchNormalization,CuDNNLSTM, GRU, CuDNNGRU, Embedding, GlobalMaxPooling1D, GlobalAveragePooling1D, Flatten
from keras import backend as K
from keras.engine.topology import Layer
from keras import initializers, regularizers, constraints
from sklearn.model_selection import KFold, cross_val_score, train_test_split
train = pd.read_json('C:/Users/User/Downloads/dont-call-me-turkey/train.json')
display(train.shape)
train.head()
train_train, train_val = train_test_split(train, random_state = 42)
xtrain = [k for k in train_train['audio_embedding']]
ytrain = train_train['is_turkey'].values
xval = [k for k in train_val['audio_embedding']]
yval = train_val['is_turkey'].values
It gave this error:
NameError Traceback (most recent call last)
<ipython-input-19-1e07851e6519> in <module>
----> 1 train_train, train_val = train_test_split(train, random_state = 42)
2 xtrain = [k for k in train_train['audio_embedding']]
3 ytrain = train_train['is_turkey'].values
4 xval = [k for k in train_val['audio_embedding']]
5 yval = train_val['is_turkey'].values
NameError: name 'train_test_split' is not defined
Probably you haven't installed scikit-learn:
pip install scikit-learn
If you have already done that, make sure the cell containing the imports was actually run, and use the current import path:
from sklearn.model_selection import train_test_split
(The old sklearn.cross_validation module has been removed from recent scikit-learn releases.)
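For completeness, a minimal sanity check (with toy data, not the asker's dataset) that the import resolves and splits as expected:
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy frame standing in for the real train.json data.
df = pd.DataFrame({'audio_embedding': [[0.1], [0.2], [0.3], [0.4]],
                   'is_turkey': [0, 1, 0, 1]})
df_train, df_val = train_test_split(df, random_state=42)
print(df_train.shape, df_val.shape)  # (3, 2) (1, 2) with the default 25% validation split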
I have the code below. I would like to see how the weights and biases change during training, ideally in TensorBoard. Could someone show me how to do this?
from time import time
import numpy as np
import matplotlib.pyplot as plt
import keras
import tensorflow as tf
from keras.callbacks import TensorBoard
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
x = scaler.fit_transform(np.array([[1965.0], [1980.0]])).reshape(-1,1)
y = scaler.fit_transform(np.array([[320.0], [345.0]])).reshape(-1,1)
tensorboard = TensorBoard(log_dir='logs/{}'.format(time()), write_grads=True)
model = keras.Sequential([keras.layers.Dense(1, activation='linear')])
model.compile(optimizer='sgd',
              loss="mean_squared_error")
model.fit(x=x, y=y, epochs=1000, callbacks=[tensorboard])
yHat = model.predict(x)
Based on the Keras documentation, all you may need to do is run this on the command line:
tensorboard --logdir=logs
Note that the --logdir setting should point to the root of your log directory.
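To actually get weight and bias histograms into TensorBoard's Histograms and Distributions tabs, the callback also needs histogram_freq set. A minimal sketch, assuming tf.keras (older standalone Keras required validation data for histogram logging, and write_grads has since been removed):
from time import time
from tensorflow.keras.callbacks import TensorBoard

# histogram_freq=1 writes histograms of every layer's weights and biases once per epoch.
tensorboard = TensorBoard(log_dir='logs/{}'.format(time()), histogram_freq=1)
model.fit(x=x, y=y, epochs=1000, callbacks=[tensorboard])
After training, launch tensorboard --logdir=logs as above and open the Histograms tab.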
I've been attempting to fit this data with a linear regression, following a tutorial on bigdataexaminer. Everything was working fine up to this point. I imported LinearRegression from sklearn and printed the number of coefficients just fine. This was the code before I attempted to grab the coefficients from the console.
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import sklearn
from sklearn.datasets import load_boston
from sklearn.linear_model import LinearRegression
boston = load_boston()
bos = pd.DataFrame(boston.data)
bos.columns = boston.feature_names
bos['PRICE'] = boston.target
X = bos.drop('PRICE', axis = 1)
lm = LinearRegression()
After I had all this set up I ran the following command, and it returned the proper output:
In [68]: print('Number of coefficients:', len(lm.coef_)
Number of coefficients: 13
However, if I now try to print this same line again, or use lm.coef_ at all, it tells me coef_ isn't an attribute of LinearRegression, right after I had just used it successfully, and I didn't touch any of the code before trying it again.
In [70]: print('Number of coefficients:', len(lm.coef_))
Traceback (most recent call last):
File "<ipython-input-70-5ad192630df3>", line 1, in <module>
print('Number of coefficients:', len(lm.coef_))
AttributeError: 'LinearRegression' object has no attribute 'coef_'
The coef_ attribute is created when the fit() method is called. Before that, it will be undefined:
>>> import numpy as np
>>> import pandas as pd
>>> from sklearn.datasets import load_boston
>>> from sklearn.linear_model import LinearRegression
>>> boston = load_boston()
>>> lm = LinearRegression()
>>> lm.coef_
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-22-975676802622> in <module>()
7
8 lm = LinearRegression()
----> 9 lm.coef_
AttributeError: 'LinearRegression' object has no attribute 'coef_'
If we call fit(), the coefficients will be defined:
>>> lm.fit(boston.data, boston.target)
>>> lm.coef_
array([ -1.07170557e-01, 4.63952195e-02, 2.08602395e-02,
2.68856140e+00, -1.77957587e+01, 3.80475246e+00,
7.51061703e-04, -1.47575880e+00, 3.05655038e-01,
-1.23293463e-02, -9.53463555e-01, 9.39251272e-03,
-5.25466633e-01])
My guess is that somehow you forgot to call fit() when you ran the problematic line.
I also ran into the same problem with linear regression: the object has no attribute 'coef_'. Only a slight change in the code is needed:
linreg = LinearRegression()
linreg.fit(X, y)  # fit the linear model to the data before accessing its attributes
print(linreg.intercept_)
print(linreg.coef_)
I hope this helps. Thanks!