module 'keras.layers.normalization' has no attribute 'BatchNormalizationBase' - python

When I run this code:
import os
from autokeras import StructuredDataClassifier
import stellargraph as sg
from stellargraph.mapper import FullBatchNodeGenerator
from tensorflow.keras import layers, optimizers, losses, metrics, Model
from sklearn import preprocessing, model_selection
from IPython.display import display, HTML
import matplotlib.pyplot as plt
%matplotlib inline
I get this error:
AttributeError: module 'keras.layers.normalization' has no attribute 'BatchNormalizationBase'
Note that this code has run many times before without any problems.

In my case, which had a structure like this, I added these two lines to the __init__.py file:
from keras.layers.normalization.layer_normalization import *
from keras.layers.normalization.batch_normalization import *
and the problem was fixed.

Restart the runtime, then reinstall the keras library.
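For example, in a notebook environment such as Colab (a minimal sketch; the exact commands are an assumption about your setup):
!pip uninstall -y keras
!pip install keras
# restart the runtime afterwards so the fresh install is picked up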

Related

Failed to import Adida from statsforecast.models in Python

I was trying to replicate this code for statistical forecasting in Python, and I came across the issue of not being able to load the 'adida' model from the statsforecast library.
Here is the link for reference: https://towardsdatascience.com/time-series-forecasting-with-statistical-models-f08dcd1d24d1
import random
from itertools import product
from IPython.display import display, Markdown
from multiprocessing import cpu_count
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from nixtlats.data.datasets.m4 import M4, M4Info
from statsforecast import StatsForecast
from statsforecast.models import (
    adida,
    croston_classic,
    croston_sba,
    croston_optimized,
    historic_average,
    imapa,
    naive,
    random_walk_with_drift,
    seasonal_exponential_smoothing,
    seasonal_naive,
    seasonal_window_average,
    ses,
    tsb,
    window_average,
)
Attached is the error message. Can you please have a look at this and let me know why there is an issue importing these?
I did some research and figured out the issue is probably with the version; try installing this specific version of statsforecast:
pip install statsforecast==0.6.0
Try loading these models after that; hopefully this should work.
As of v1.0.0 of StatsForecast, the API changed to be more like sklearn, using classes instead of functions. You can find an example of the new syntax here: https://nixtla.github.io/statsforecast/examples/IntermittentData.html.
The new code would be
from statsforecast import StatsForecast
from statsforecast.models import ADIDA, IMAPA
model = StatsForecast(df=Y_train_df,  # your data
                      models=[ADIDA(), IMAPA()],
                      freq=freq,  # frequency of your data
                      n_jobs=-1)
If you want to use the old syntax, setting the version as suggested should work.
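Once the object is built, forecasts are produced with the forecast method; a minimal sketch (the 12-step horizon is just an example):
forecasts = model.forecast(h=12)  # DataFrame with one column of predictions per model
print(forecasts.head())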
If you have updated the package, use ADIDA and it will work.
The model list with the new package looks like this:
from statsforecast.models import ADIDA, IMAPA, SimpleExponentialSmoothing, TSB, WindowAverage
models = [
    ADIDA(),
    IMAPA(),
    SimpleExponentialSmoothing(0.1),
    TSB(0.3, 0.2),
    WindowAverage(6),
]

cannot import name 'TimeSeriesGenerator' from 'keras.preprocessing.sequence'

I'm new to keras and trying to work with it; however, I have a problem with the imports.
I can import all the following packages:
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Nadam
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TerminateOnNaN
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
but when I try to import the time series generator I get an error:
from keras.preprocessing.sequence import TimeSeriesGenerator
>>> ImportError: cannot import name 'TimeSeriesGenerator' from 'keras.preprocessing.sequence' (C:\path\myuser\anaconda3\envs\keras1\lib\site-packages\keras\preprocessing\sequence.py)
This still happens after I have created a new environment and installed tensorflow first; nothing changes and I keep getting this error.
What am I missing? How can I solve it and use the time series generator?
You misspelled the import; it should be TimeseriesGenerator (lowercase s).
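For example, a minimal sketch of the corrected import in use (the toy data is just an illustration):
import numpy as np
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator

data = np.arange(10).reshape(-1, 1)  # ten consecutive values
# each sample is a window of 3 values; the target is the value that follows it
gen = TimeseriesGenerator(data, data, length=3, batch_size=1)
x, y = gen[0]  # x has shape (1, 3, 1), y is [[3]]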

Why can't I import a function in Python using the "as" syntax?

Why does this work,
from sklearn.metrics import mean_squared_error
but not the other?
import sklearn.metrics.mean_squared_error as mse
This gives
ModuleNotFoundError: No module named 'sklearn.metrics.mean_squared_error'
My guess is that this is not possible because mean_squared_error is a function?
You cannot import sklearn.metrics.mean_squared_error because it is not a module but a function, yes. The as part is completely independent of this. So you can, for example, write from sklearn.metrics import mean_squared_error as mse.
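A minimal sketch of the working alias (the sample values are just an illustration):
from sklearn.metrics import mean_squared_error as mse

# "import x.y as z" requires x.y to be a module; "from x import y as z" works for any name
print(mse([1, 2], [1, 3]))  # 0.5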

MLPRegressor problem with attribute loss_curve_

I want to plot the loss_curve by using the following code:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
def plotCurves(Xtrain, ytrain, Xval, yval):
    solver = ["lbfgs", "sgd", "adam"]
    for i in solver:
        mlp = MLPRegressor(activation='relu', max_iter=1000, solver=i)
        mlp.fit(Xtrain, ytrain)
        pred = mlp.predict(Xval)
        print(mlp.score(Xval, yval))
        pd.DataFrame(mlp.loss_curve_).plot()
However, when I run my code the following error appears:
'MLPRegressor' object has no attribute 'loss_curve_'
even though the Anaconda IDE (version 1.9.7) suggests this attribute while I am coding.
What can I try to solve this?
Only the stochastic solvers will expose a loss_curve_ attribute on the estimator after fit, so in your first iteration it fails with the lbfgs solver. You can verify this with the following:
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPRegressor

X, y = make_classification(n_samples=5)

solver = ["lbfgs", "sgd", "adam"]

for i in solver:
    mlp = MLPRegressor(activation='relu', solver=i)
    mlp.fit(X, y)
    print(hasattr(mlp, "loss_curve_"))
False
True
True
If you want to access this attribute, you'll want to stick with either the adam or sgd solver.
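Applied to the plotCurves function from the question, the guard would look like this (a sketch reusing the question's variable names):
for i in solver:
    mlp = MLPRegressor(activation='relu', max_iter=1000, solver=i)
    mlp.fit(Xtrain, ytrain)
    print(mlp.score(Xval, yval))
    if hasattr(mlp, "loss_curve_"):  # lbfgs never exposes loss_curve_
        pd.DataFrame(mlp.loss_curve_).plot()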

Unhealthy deployment

I deployed a keras model (in Python) to Azure ML. The deployment ends up in the Unhealthy state. What does that mean?
I deploy my model with this code:
image_config = ContainerImage.image_configuration(execution_script='script.py',
                                                  runtime='python',
                                                  conda_file='config_conda.yml')
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
                                               memory_gb=1,
                                               description='')
service = Webservice.deploy_from_model(workspace=ws,
                                       name=model_name,
                                       deployment_config=aciconfig,
                                       models=[model],
                                       image_config=image_config)
service.wait_for_deployment(show_output=True)
In the config_conda.yml file, what is the difference between the "pip" section and the "dependencies" section ?
I use the following packages in my script.py:
import pandas as pd
import numpy as np
import string
#scikit-learn
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import LabelEncoder
import nltk
from nltk.corpus import stopwords
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer
from nltk.stem.porter import PorterStemmer
# Word2vec
import gensim
# Keras
from tensorflow import keras
from keras import metrics
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Embedding
from keras.layers import LSTM
from keras.layers import GlobalMaxPool1D
from keras import utils
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import multiprocessing
See https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where?tabs=azcli#understanding-service-state for an explanation of the service states.
Also see the troubleshooting steps.
"dependencies" will be installed with conda install x whereas things listed under "pip" will be installed with pip install x. Try to use the conda version whenever possible as it uses precompiled binaries that are less likely to cause issues.
In my experience, the endpoint ends up in Unhealthy state when there is something wrong with either the scoring script or the .yml file (script.py and config_conda.yml, in your case). You can use this command to see the logs and this normally tells you the issue:
print(service.get_logs())
Another debugging method is to try to deploy the service as a LocalWebservice first:
from azureml.core import Environment, Model
from azureml.core.model import InferenceConfig
from azureml.core.webservice import LocalWebservice

myenv = Environment.from_conda_specification(name="myenv", file_path="config_conda.yml")
inference_config = InferenceConfig(entry_script="script.py", environment=myenv)
deployment_config = LocalWebservice.deploy_configuration(port=6789)
local_service = Model.deploy(ws, "local-test", [model], inference_config, deployment_config)
local_service.wait_for_deployment(show_output=True)
In this way, all steps of the Docker container building process (incl. packages installation) are printed out. You can then delete the local service (local_service.delete()) when you're done.
By the way, you can also deploy ACI web services using Model.deploy instead of Webservice.deploy_from_model (see https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice(class)?view=azure-ml-py). This is usually faster, as far as I've seen.
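A minimal sketch of that variant for ACI, reusing the names defined above (the configuration values are just examples):
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

inference_config = InferenceConfig(entry_script="script.py", environment=myenv)
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
service = Model.deploy(ws, model_name, [model], inference_config, aci_config)
service.wait_for_deployment(show_output=True)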
