I am training an FB Prophet model and I am wondering about the output.
Here is my code:
from prophet import Prophet

model_multivariate = Prophet(changepoint_prior_scale=0.5, seasonality_prior_scale=0.01, holidays=holi)

# add all external regressors without standardization
extra_regressors = [
    "wk-1_nvf_fcst", "wk-2_nvf_fcst", "wk-3_nvf_fcst",
    "wk-1_vr_fcst", "wk-2_vr_fcst", "wk-3_vr_fcst",
    "year", "ib_units_trend",
    "autocorr_ib_units_lag8", "autocorr_ib_units_lag7", "autocorr_ib_units_lag2",
    "month_cos", "week_of_year_cos", "day_of_year_cos",
    "autocorr_order_count_lag7",
]
for name in extra_regressors:
    model_multivariate.add_regressor(name, standardize=False)

model_multivariate.fit(train)
When executing this cell I get a number of log messages in the output (the error quoted below among them). This looks weird to me, because before adding some of these additional regressors I didn't get these messages; the cell simply returned "Out[300]: <prophet.forecaster.Prophet at 0x2...>".
So does anyone know what this means?
Also, I am wondering what this error message means: "16:03:40 - cmdstanpy - ERROR - Chain 1 error: error during processing Unknown error. Optimization terminated abnormally. Falling back to Newton".
The model prediction and everything else work; I just want to make sure that every additional regressor actually gets used by the model, as these messages were not there before. A quick check along those lines is sketched below.
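A minimal sketch of such a check, assuming prophet >= 1.0 (which provides regressor_coefficients in prophet.utilities):

from prophet.utilities import regressor_coefficients

# every name passed to add_regressor should show up here
print(list(model_multivariate.extra_regressors.keys()))

# one row per extra regressor with its fitted coefficient
print(regressor_coefficients(model_multivariate))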
Thanks!
When I try to run a cell in Colab I get a BrokenProcessPool error, which I believe is due to the SequentialFeatureSelector. The error occurs at the last line, sfs.fit(X, Y). I imported joblib and I am still having issues.
from sklearn.ensemble import RandomForestClassifier
# SFS here is mlxtend's SequentialFeatureSelector, judging by the k_features/forward arguments
from mlxtend.feature_selection import SequentialFeatureSelector as SFS

nfeatures = len(X.columns)
clf = RandomForestClassifier(n_estimators=5)
# clf = LGBMClassifier(n_estimators=20, num_leaves=3)

# forward selection up to num_wrapper features (num_wrapper is defined elsewhere)
sfs = SFS(clf, k_features=num_wrapper, forward=True, verbose=0, scoring=fdr, cv=3, n_jobs=-1)
# backward elimination down to a single feature; note this overwrites the selector above
sfs = SFS(clf, k_features=1, forward=False, verbose=0, scoring=fdr, cv=3, n_jobs=-1)
sfs.fit(X, Y)
I tried running the cell and expected sfs.fit(X, Y) to work, but I get a BrokenProcessPool error. I have tried importing several packages, including joblib, and I still don't get any output.
BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.
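The message itself points at the likely cause: with n_jobs=-1, joblib spawns worker processes and everything passed to them has to be picklable, and an object defined interactively in the notebook (for example the custom fdr scorer) can fail to serialize. A sketch of two possible workarounds, reusing the names from the snippet above (fdr_metric is a hypothetical stand-in for the real metric function):

# Workaround 1: avoid process-based parallelism while debugging
sfs = SFS(clf, k_features=1, forward=False, verbose=0, scoring=fdr, cv=3, n_jobs=1)
sfs.fit(X, Y)

# Workaround 2: make the scorer a plain, picklable top-level function wrapped
# with sklearn's make_scorer before handing it to SFS
from sklearn.metrics import make_scorer

def fdr_metric(y_true, y_pred):
    # placeholder body; replace with the real false-discovery-rate computation
    return 0.0

fdr = make_scorer(fdr_metric, greater_is_better=False)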
I'm following this tutorial to perform time series classification using Transformers with Keras and TensorFlow. I'm using Windows 10 and the PyDev Eclipse plugin. Unfortunately, my program stops and the console output is completely blank every time I run the following code:
import numpy as np
from tensorflow import keras

n_classes = len(np.unique(y_train))
input_shape = np.array(x_trainScaled).shape[0:]
# build_model is defined in the tutorial being followed
model = build_model(n_classes, input_shape, head_size=256, num_heads=4, ff_dim=4,
                    num_transformer_blocks=4, mlp_units=[128], mlp_dropout=0.4, dropout=0.25)
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              metrics=["sparse_categorical_accuracy"])
print(model.summary())
callbacks = [keras.callbacks.EarlyStopping(patience=100, restore_best_weights=True)]
model.fit(x_trainScaled, y_train, validation_split=0.2, epochs=200, batch_size=64, callbacks=callbacks)
pathToModel = 'my/path/to/model/'
model.save(pathToModel)
Even previous warnings or print statements are completely erased and I have no idea what's going on. If I comment the model.fit(...) statement out, the program terminates and crashes with an error message resulting from a model.predict(...) call.
Any help is highly appreciated.
The solution was to convert the input data and labels to NumPy arrays first. Calling the fit function as follows:
model.fit(np.array(x_trainScaled), np.array(y_train), validation_split=0.2, epochs=200, batch_size=64, callbacks=callbacks)
worked perfectly fine for me, as opposed to:
model.fit(x_trainScaled, y_train, validation_split=0.2, epochs=200, batch_size=64, callbacks=callbacks)
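The same conversion can also be done once, before training, so that every later call (fit, predict, evaluate) sees NumPy arrays; a minimal sketch, assuming x_trainScaled is a nested list of equally shaped samples:

x_trainScaled = np.asarray(x_trainScaled, dtype="float32")
y_train = np.asarray(y_train)
model.fit(x_trainScaled, y_train, validation_split=0.2, epochs=200, batch_size=64, callbacks=callbacks)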
I am trying to fine-tune a pre-trained GPT-2 model. When applying the respective tokenizer, I originally got the error message:
Using pad_token, but it is not set yet.
Thus, I changed my code to:
GPT2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
GPT2_tokenizer.pad_token = GPT2_tokenizer.eos_token
When calling the trainer.train() later, I end up with the following error:
AssertionError: Cannot handle batch sizes > 1 if no padding token is defined.
Since I specifically defined the pad_token above, I expect these errors (or rather my fix of the original error and this new error) to be related, although I could be wrong. Is it a known problem that eos_token and pad_token somehow interfere? Is there an easy workaround?
Thanks a lot!
I've been running into a similar problem, producing the same error message you were receiving. I can't be sure if your problem and my problem were caused by the same issue, since I can't see your full stack trace, but I'll post my solution in case it can help you or someone else who comes along.
You were totally correct to fix the first issue you described with your tokenizer by setting its pad token with the code provided. However, I also had to set the pad_token_id of my model's configuration to get my GPT2 model to function properly. I did this in the following way:
# model_name and device are assumed to be defined elsewhere
from transformers import GPT2Config, GPT2Tokenizer, GPT2ForSequenceClassification

# instantiate the configuration for your model, this can be imported from transformers
configuration = GPT2Config()
# set up your tokenizer, just like you described, and set the pad token
GPT2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
GPT2_tokenizer.pad_token = GPT2_tokenizer.eos_token
# instantiate the model (from_pretrained is a classmethod, so it is called on the class itself)
model = GPT2ForSequenceClassification.from_pretrained(model_name).to(device)
# set the pad token of the model's configuration
model.config.pad_token_id = model.config.eos_token_id
I suppose this is because the tokenizer and the model function separately, and both need knowledge of the ID being used for the pad token. I can't tell if this will fix your problem (since this post is 6 months old, it may not matter anyway), but hopefully my answer may be able to help someone else.
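A slightly shorter variant of the same idea (a sketch; it relies on from_pretrained forwarding extra keyword arguments to the model's config) is to pass the pad token id directly when loading the model:

model = GPT2ForSequenceClassification.from_pretrained(
    model_name, pad_token_id=GPT2_tokenizer.eos_token_id
).to(device)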
First of all, I tried to perform dimensionality reduction on my n_samples x 53 data using scikit-learn's Kernel PCA with a precomputed kernel. The code worked without any issues when I first tried it with 50 samples. However, when I increased the number of samples to 100, I suddenly got the following message.
Process finished with exit code -1073740940 (0xC0000374)
Here's the detail of what I want to do:
I want to obtain the optimal value of the kernel function hyperparameter for my Kernel PCA function, defined as follows.
from sklearn.decomposition import KernelPCA as drm  # KernelPCA lives directly in sklearn.decomposition
from somewhere import costfunction
from somewhere_else import customkernel

def kpcafun(w, X):
    # X is the sample matrix, w is the kernel hyperparameter
    n_princomp = 2
    drmodel = drm(n_components=n_princomp, kernel='precomputed')
    k_matrix = customkernel(X, X, w)  # precomputed n_samples x n_samples kernel matrix
    transformed_x = drmodel.fit_transform(k_matrix)
    cost = costfunction(transformed_x)
    return cost
To optimize the hyperparameters, I used the following code.
from scipy.optimize import minimize
# assume that wstart and optimbound are already defined
# note the trailing comma: args must be a tuple, and (X) is just X
res = minimize(kpcafun, wstart, method='L-BFGS-B', bounds=optimbound, args=(X,))
The strange thing is that when I debugged the first 10 iterations of the optimization process, nothing unusual happened; all variable values looked normal. But when I turned off the breakpoints and let the program continue, the message appeared without any error notification.
Does anyone know what might be wrong with my code? Or does anyone have tips for resolving a problem like this?
Thanks
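Exit code 0xC0000374 is the Windows heap-corruption status, so the crash most likely happens in native code rather than in Python itself. One way to get at least a Python-level traceback when the interpreter dies hard is to enable faulthandler before starting the optimization; a minimal sketch:

import faulthandler
faulthandler.enable()  # dump a traceback for every thread if the process crashes hard

from scipy.optimize import minimize
res = minimize(kpcafun, wstart, method='L-BFGS-B', bounds=optimbound, args=(X,))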
How can I solve this kind of error when running frozen_graph.py? I passed the parameters for generating a frozen graph file from a checkpoint, but while executing it I get the following error:
AssertionError: Openpose/concat_stage7 is not in graph
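This assertion means that the output node name handed to the freeze script ("Openpose/concat_stage7") does not exist in the graph loaded from the checkpoint, so the name has to be spelled exactly as it appears there. A minimal sketch for listing the operation names in the checkpoint's graph (TF 1.x style; the meta-graph path is a placeholder) to find the correct output node:

import tensorflow as tf

# load the graph structure from the checkpoint's .meta file (path is a placeholder)
saver = tf.train.import_meta_graph("model.ckpt.meta")
graph = tf.get_default_graph()

# print every operation name and search the output for the real final node,
# e.g. something ending in "concat_stage7"
for op in graph.get_operations():
    print(op.name)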