How to clear all metrics in the Python prometheus client - python

I am trying to clear all metric instances, and I am using .clear() to clear all labels and values.
from prometheus_client import CollectorRegistry, Gauge

registry = CollectorRegistry()
metric_instance1 = Gauge(name=metric.name, labelnames=metric.label_names, registry=registry)
metric_instance1.clear()
And this is working for me.
But the problem is when I have a metric without labels:
metric_instance2 = Gauge(name=metric.name, registry=registry)
metric_instance2.clear()
I get an error:
AttributeError: 'Gauge' object has no attribute '_lock'
And this is because a metric created without labelnames never gets self._metrics and self._lock.
So my question is - what is the way to clear metrics without labels?
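One possible workaround, sketched below (not from the original post; the metric name and help string are placeholders): a label-less Gauge has no child map to clear, so you can either reset its value in place with set(), or unregister the collector from the registry and recreate it for a truly fresh metric.

from prometheus_client import CollectorRegistry, Gauge

registry = CollectorRegistry()
# Gauge also takes a required documentation (help) string.
metric_instance2 = Gauge('my_metric', 'example gauge', registry=registry)
metric_instance2.set(42)

# Option 1: reset the value in place.
metric_instance2.set(0)

# Option 2: unregister and recreate to drop the metric entirely.
registry.unregister(metric_instance2)
metric_instance2 = Gauge('my_metric', 'example gauge', registry=registry)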

Related

'ThetaModelResults' object has no attribute 'get_prediction'

I am trying to create a theta model to forecast time series data, using the statsmodels library.
The code is:
import numpy as np
from statsmodels.tsa.forecasting.theta import ThetaModel

theta = ThetaModel(d, period=12)
res_theta = theta.fit()
predictions_test = np.round(np.exp(res_theta.forecast(extra_periods, theta=2).values))
This gives me predictions for extra_periods (set to 12) steps ahead.
However, I would also like to look at predictions made over the training data, and I use the code:
predictions_train = res_theta.get_prediction()
Which results in:
AttributeError: 'ThetaModelResults' object has no attribute 'get_prediction'
Would anyone know how to get predictions for the training set using the theta model in Python's statsmodels package? I have been searching the statsmodels docs and cannot find anything that solves my issue.
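One possible workaround, sketched below (not from the original post; it assumes d is a pandas Series, and the helper name and min_obs cutoff are illustrative): since ThetaModelResults exposes forecast() but no get_prediction(), you can approximate training-set predictions by refitting on an expanding window and collecting one-step-ahead forecasts. This refits the model at every step, so it can be slow on long series.

import pandas as pd
from statsmodels.tsa.forecasting.theta import ThetaModel

def one_step_train_predictions(series, period=12, min_obs=24):
    # Refit on series[:t] and forecast one step ahead, for each t.
    preds = []
    for t in range(min_obs, len(series)):
        res = ThetaModel(series.iloc[:t], period=period).fit()
        preds.append(res.forecast(1, theta=2).iloc[0])
    return pd.Series(preds, index=series.index[min_obs:])

predictions_train = one_step_train_predictions(d)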

t-SNE: Sklearn AttributeError: 'NoneType' object has no attribute 'split'

Any help on the following error? I'm running both PCA and t-SNE; PCA seems to run well, but whenever I run t-SNE I hit the error below. My code for t-SNE is:
import pandas as pd
from sklearn.manifold import TSNE

def T_SNE(X, Label, Component=2, title=""):
    tsne = TSNE(n_components=Component)
    tsne_result = tsne.fit_transform(X)
    tsne_result_df = pd.DataFrame({'T_SNE_1': tsne_result[:, 0],
                                   'T_SNE_2': tsne_result[:, 1],
                                   'label': Label})
    # Pad the axis limits by 10% on each side.
    lim = (tsne_result.min() - 0.1 * tsne_result.min(),
           tsne_result.max() + 0.1 * tsne_result.max())
    # PLOT is a plotting helper defined elsewhere in the poster's code.
    PLOT(TITLE=title, Product="T_SNE", Label=Label, Data=tsne_result_df, lim=lim)
    return tsne_result, tsne

result, tsne = T_SNE(X=X_Number, Label=Y_Number, Component=2, title="Digit_data")
The error is below:
AttributeError: 'NoneType' object has no attribute 'split'
I was facing the same issue; it seems the t-SNE code doesn't handle this case gracefully. Set your perplexity value beforehand: for the data I was working on, perplexity values below 120 worked fine, but beyond that I got this error. (I also set init='pca'; I'm not sure whether that makes a difference.) Typically perplexity is set anywhere between 5 and 50, as suggested in the original literature.
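A minimal sketch of that suggestion (the perplexity value of 30 is illustrative, not from the original answer):

from sklearn.manifold import TSNE

# Explicit perplexity in the commonly recommended 5-50 range, plus PCA init.
tsne = TSNE(n_components=2, perplexity=30, init='pca')
tsne_result = tsne.fit_transform(X)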

How to get Lime predictions vs Actual predictions in a dataframe?

I am working on a binary classification problem using a random forest, and I am using the LIME explainer to explain the predictions.
I used the code below to generate the LIME explanations:
import lime
import lime.lime_tabular

explainer = lime.lime_tabular.LimeTabularExplainer(ord_train_t.values,
                                                   discretize_continuous=True,
                                                   feature_names=feat_names,
                                                   mode="classification",
                                                   feature_selection="lasso_path",
                                                   class_names=rf_boruta.classes_,
                                                   categorical_names=output,
                                                   kernel_width=10, verbose=True)
i = 969
exp = explainer.explain_instance(ord_test_t.iloc[i, :], rf_boruta.predict_proba,
                                 distance_metric='euclidean', num_features=5)
I got output like the below:
Intercept 0.29625037124439896
Prediction_local [0.46168824]
Right: 0.6911888737552843
However, the above is only printed as a message on screen.
How can we get this info into a DataFrame?
LIME doesn't have a direct export-to-DataFrame capability, so the way to go appears to be appending the explanations to a list and then transforming that list into a DataFrame.
Yes, depending on how many predictions you have, this may take a lot of time, since the model has to explain every instance individually.
This is an example I found; the explain_instance call needs to be adjusted to your model's arguments, but it follows the same logic.
l = []
for n in range(X_test.shape[0]):  # one explanation per test instance
    exp = explainer.explain_instance(X_test.values[n], clf.predict_proba, num_features=10)
    a = exp.as_list()
    l.append(a)
df = pd.DataFrame(l)
If you need more than what as_list() provides, the explanation object carries more data. I ran an example to see what else explain_instance retrieves.
Instead of appending just the as_list() output, you can append the other values you need to it.
a = exp.as_list()
a.append(exp.intercept[1])
l.append(a)
Using this approach you can get the intercept and the prediction_local; for the Right value I don't really know which attribute it would be, but I am certain the explanation object has it somewhere under another name.
Set a breakpoint in your code and explore the explanation object; there may be other info you want to save as well.
Lime Github: Issue ref 213
To see the intercept and prediction_local of your explanation you can use exp.intercept and exp.local_pred. See this blog for details.
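Putting the pieces together, a minimal sketch (the column names are mine, and indexing exp.intercept with 1 assumes the positive class of a binary classifier):

import pandas as pd

rows = []
for n in range(X_test.shape[0]):
    exp = explainer.explain_instance(X_test.values[n], clf.predict_proba, num_features=10)
    rows.append({
        'intercept': exp.intercept[1],          # local model intercept
        'prediction_local': exp.local_pred[0],  # local surrogate prediction
        'features': exp.as_list(),              # (feature, weight) pairs
    })
df = pd.DataFrame(rows)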

"AssertionError: Cannot handle batch sizes > 1 if no padding token is > defined" and pad_token = eos_token

I am trying to fine-tune a pre-trained GPT-2 model. When applying the respective tokenizer, I originally got the error message:
Using pad_token, but it is not set yet.
Thus, I changed my code to:
from transformers import GPT2Tokenizer

GPT2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
GPT2_tokenizer.pad_token = GPT2_tokenizer.eos_token
When calling trainer.train() later, I end up with the following error:
AssertionError: Cannot handle batch sizes > 1 if no padding token is defined.
Since I specifically defined the pad_token above, I expect these errors (or rather my fix of the original error and this new error) to be related, although I could be wrong. Is this a known problem where eos_token and pad_token somehow interfere? Is there an easy workaround?
Thanks a lot!
I've been running into a similar problem, producing the same error message you received. I can't be sure your problem and mine share the same cause, since I can't see your full stack trace, but I'll post my solution in case it helps you or someone else who comes along.
You were totally correct to fix the first issue you described with your tokenizer by setting its pad token with the code provided. However, I also had to set the pad_token_id of my model's configuration to get my GPT2 model to function properly. I did this in the following way:
import torch
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

# set up your tokenizer, just like you described, and set the pad token
GPT2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
GPT2_tokenizer.pad_token = GPT2_tokenizer.eos_token

# load the pretrained model; model_name (e.g. "gpt2") and device are
# defined elsewhere
model = GPT2ForSequenceClassification.from_pretrained(model_name).to(device)

# set the pad token of the model's configuration
model.config.pad_token_id = model.config.eos_token_id
I suppose this is because the tokenizer and the model function separately, and both need to know the ID being used for the pad token. I can't tell whether this will fix your problem (since this post is 6 months old, it may not matter anyway), but hopefully my answer can help someone else.

AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.losses' has no attribute 'softmax_cross_entropy'

I get an AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.losses' has no attribute 'softmax_cross_entropy' when using tf.losses.softmax_cross_entropy. Could someone help me?
tf.losses now points to tf.keras.losses. You can get identical behavior by using
tf.losses.categorical_crossentropy with from_logits set to True.
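A minimal sketch of that drop-in replacement (y_true and y_pred_logits are placeholder tensor names):

import tensorflow as tf

# y_pred_logits holds raw, unnormalized model outputs
loss = tf.losses.categorical_crossentropy(y_true, y_pred_logits, from_logits=True)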
Sometimes we encounter this error, especially when running on online binders like Jupyter notebooks. Instead of writing
tf.losses.softmax_cross_entropy
try
loss = 'softmax_cross_entropy'
or either of the below:
tf.keras.losses.CategoricalCrossentropy()
loss = 'categorical_crossentropy'
You may also want to pass from_logits=True as an argument, which would look like
tf.keras.losses.CategoricalCrossentropy(from_logits=True)
while keeping the metrics as something like
metrics=['accuracy']
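For context, a minimal sketch of where these arguments land in a compile call (the optimizer choice and the model variable are placeholders, not from the answers):

import tensorflow as tf

# model is an already-built tf.keras.Model whose final layer outputs raw logits
model.compile(optimizer='adam',
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])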
