Python SKLearn fit Value Error Input - python
I'm trying to fit and transform some data with a preprocessing pipeline, to use later as input to a classifier, but it always gives me an error and I don't understand why.
Can somebody please help me?
##stores the function Pipeline with parameters decided above
inputPipe = getPreProcPipe(normIn=normIn, pca=pca, pcaN=pcaN, whiten=whiten)
print inputPipe
print
#print devData[classTrainFeatures].values.astype('float32')
print devData[classTrainFeatures].shape
print type(devData[classTrainFeatures].values)
##fit pipeline to inputs features and types
inputPipe.fit(devData[classTrainFeatures].values.astype('float32'))
##transform inputs X
X_class = inputPipe.transform(devData[classTrainFeatures].values.astype('float32'))
## Output Y, i.e., 0 or 1 as it is the target
Y_class = devData['gen_target'].values.astype('int')
#print Y_class
Output:
Pipeline(memory=None,
steps=[('pca', PCA(copy=True, iterated_power='auto', n_components=None, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)), ('normPCA', StandardScaler(copy=True, with_mean=True, with_std=True))])
(32583, 2)
<type 'numpy.ndarray'>
Error at the end of the code:
ValueError: Input contains NaN, infinity or a value too large for dtype('float32').
You have to check the data you use (not the code) for NaN (not-a-number) values; numpy has the function np.isnan() for this (https://docs.scipy.org/doc/numpy/reference/generated/numpy.isnan.html), see also "How to get the indices list of all NaN value in numpy array?".
Also check for infinite values with np.isinf().
This Kaggle kernel has example code for filling NaNs and Infs in datasets that are then used in classifiers: https://www.kaggle.com/mknorps/titanic-with-decision-trees , and see https://datascience.stackexchange.com/questions/25924/difference-between-interpolate-and-fillna-in-pandas?rq=1 for interpolate().
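A minimal sketch of those checks, reusing the devData and classTrainFeatures names from the question (the .astype('float32') cast matches the code above):
import numpy as np

arr = devData[classTrainFeatures].values.astype('float32')
print(np.isnan(arr).any())         # True if any NaN is present
print(np.isinf(arr).any())         # True if any +/-Inf is present
print(np.argwhere(np.isnan(arr)))  # row/column indices of the NaN entries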
Dropping the rows that contain NaNs or Infs can be done like this:
import numpy as np

bad_rows = (~np.isfinite(devData[classTrainFeatures])).any(axis=1)  # rows with at least one NaN or +/-Inf
devData = devData[~bad_rows].copy()                                 # drop those rows
devData = devData.reset_index(drop=True)                            # reset the index of the dataframe
(build a boolean mask of the rows containing NaN or Inf, drop those rows using the mask, reset the index of the dataframe)
I see 3 possibilities for this kind of error:
You may have Infs in your data. In that case you may need to remove those samples. To find the Infs, try df.index[np.isinf(df).any(1)].
You may have NaNs in your data. Check it using df.index[np.isnan(df).any(1)]. In that case you may replace the NaNs with the mean value of the column by doing df.fillna(df.mean()).dropna(axis=1, how='all').
Finally, and most probably, you have a constant or almost constant feature that, once it gets normalized and divided by its standard deviation, gives you NaNs or Infs. In that case you should drop that feature using VarianceThreshold (see the sketch after this list).
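A minimal sketch of that third case, using sklearn's VarianceThreshold on a small made-up feature matrix; the threshold value here is only illustrative:
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.array([[1.0, 0.5, 3.0],
              [1.0, 0.7, 2.0],
              [1.0, 0.6, 1.0]])              # first column is constant

selector = VarianceThreshold(threshold=1e-8)  # drop (near-)constant columns
X_reduced = selector.fit_transform(X)         # shape (3, 2): the constant column is removed
print(selector.get_support())                 # [False  True  True]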
Related
How to deal with really small (order of -322) floating values in pandas dataframe?
I have a pandas dataframe with feature values that are really, really small, of the order of 1e-322. I am trying to standardize the features but getting ValueError: Input contains NaN, infinity or a value too large for dtype('float64'). A few values from the dataframe are as follows:
3.962406e-321 3.310240e-322 3.962406e-321 3.310240e-322 3.962406e-321 3.310240e-322 3.962406e-321 3.310240e-322 3.962406e-321 3.310240e-322
I am assuming that I am dealing with a value underflow problem. How can I deal with this problem? This is for Python 3.6 and a pandas dataframe.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
The values in the dataframe should be standardized as needed, but I am getting an error due to value underflow.
Multiply them. You're right: your values are too small for Pandas to handle as ordinary floats. The smallest positive normal np.float64 value is ~2.22e-308. You can handle somewhat smaller values by using more obscure types like np.longdouble, but these have their limits too and can be system-dependent. As some of the comments point out, most plausible use cases don't require values this small. But if yours does, one simple way to get around the float boundaries is to multiply all of your values by a consistent integer that brings them within the acceptable float range (perhaps by 10^320). You're not losing any information, just dropping a long sequence of zeroes. Note: this only works if you're not simultaneously storing numbers too huge to multiply without breaking the float limits in the other direction. But this seems unlikely.
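A minimal sketch of that rescaling, using the subnormal values quoted in the question and an illustrative factor of 1e320 (the subnormal representation has already cost some precision, so the results are approximate):
import numpy as np

tiny = np.array([3.962406e-321, 3.310240e-322, 3.962406e-321])
scaled = tiny * 1e320              # roughly 0.396, 0.033, 0.396: normal float64 territory
print(np.isfinite(scaled).all())   # True, so sklearn's finiteness check will pass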
Store the log of the number, and reverse with exp when needed later. If you then need to shift the values, the shift becomes additive (instead of multiplicative). Working in log-space helps you avoid underflow to machine zero, though you still have issues to deal with when operating on the log values, e.g. the log of a sum is not the sum of the logs.
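A minimal sketch of the log-space idea, assuming the raw values are strictly positive (the shift factor here is illustrative):
import numpy as np

values = np.array([3.962406e-321, 3.310240e-322])
log_values = np.log(values)             # about -737.8 and -740.2: easily representable
shifted = log_values + np.log(1e300)    # a multiplicative rescale becomes an additive shift
print(np.exp(shifted))                  # back to ordinary magnitudes (~4e-21 and ~3e-22)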
You should try normalizing your data to bring it within a reasonable scale of values. Here is sample code:
from sklearn import preprocessing

x = df.values  # returns a numpy array
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html
You are receiving NaN because the numbers go outside the scale you can handle.
EDIT 1: Your error says that your dataset contains NaN values and cannot be converted to float64 type. Are you sure there are no empty values? If so, try to drop those values using DataFrame.dropna().
Python Sklearn MinMaxScaler ValueError: Input contains infinity or a value too large for dtype('float64') [duplicate]
I am using sklearn and having a problem with the affinity propagation. I have built an input matrix and I keep getting the following error:
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
I have run
np.isnan(mat.any())     # and gets False
np.isfinite(mat.all())  # and gets True
I tried using mat[np.isfinite(mat) == True] = 0 to remove the infinite values but this did not work either. What can I do to get rid of the infinite values in my matrix, so that I can use the affinity propagation algorithm? I am using anaconda and python 2.7.9.
This might happen inside scikit, and it depends on what you're doing. I recommend reading the documentation for the functions you're using; you might be using one which depends, e.g., on your matrix being positive definite and not fulfilling that criterion.
EDIT: How could I miss that:
np.isnan(mat.any())     # and gets False
np.isfinite(mat.all())  # and gets True
is obviously wrong. Right would be:
np.any(np.isnan(mat))
and
np.all(np.isfinite(mat))
You want to check whether any of the elements are NaN, not whether the return value of the any() function is a number.
I got the same error message when using sklearn with pandas. My solution is to reset the index of my dataframe df before running any sklearn code:
df = df.reset_index()
I encountered this issue many times when I removed some entries in my df, such as
df = df[df.label=='desired_one']
This is my function (based on this) to clean the dataset of NaN, Inf, and missing cells (for skewed datasets):
import pandas as pd
import numpy as np

def clean_dataset(df):
    assert isinstance(df, pd.DataFrame), "df needs to be a pd.DataFrame"
    df.dropna(inplace=True)
    indices_to_keep = ~df.isin([np.nan, np.inf, -np.inf]).any(axis=1)
    return df[indices_to_keep].astype(np.float64)
In most cases, getting rid of infinite and null values solves this problem.
Get rid of infinite values:
df.replace([np.inf, -np.inf], np.nan, inplace=True)
Get rid of null values in whatever way you like: a specific value such as 999, the mean, or your own function to impute missing values, e.g.
df.fillna(999, inplace=True)
This is the check on which it fails: https://github.com/scikit-learn/scikit-learn/blob/0.17.X/sklearn/utils/validation.py#L51
Which says:
def _assert_all_finite(X):
    """Like assert_all_finite, but only for ndarray."""
    X = np.asanyarray(X)
    # First try an O(n) time, O(1) space solution for the common case that
    # everything is finite; fall back to O(n) space np.isfinite to prevent
    # false positives from overflow in sum method.
    if (X.dtype.char in np.typecodes['AllFloat'] and not np.isfinite(X.sum())
            and not np.isfinite(X).all()):
        raise ValueError("Input contains NaN, infinity"
                         " or a value too large for %r." % X.dtype)
So make sure that you have no NaN values in your input, that all of those values are actually float values, and that none of them is Inf either.
The dimensions of my input array were skewed because my input CSV had empty fields.
With this version of Python 3:
/opt/anaconda3/bin/python --version
Python 3.6.0 :: Anaconda 4.3.0 (64-bit)
Looking at the details of the error, I found the lines of code causing the failure:
/opt/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py in _assert_all_finite(X)
     56                 and not np.isfinite(X).all()):
     57             raise ValueError("Input contains NaN, infinity"
---> 58                              " or a value too large for %r." % X.dtype)
     59
     60
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
From this, I was able to extract the correct way to test what was going on with my data, using the same test that fails according to the error message:
np.isfinite(X)
Then with a quick and dirty loop, I was able to find that my data indeed contains NaNs:
print(p[:,0].shape)
index = 0
for i in p[:,0]:
    if not np.isfinite(i):
        print(index, i)
    index += 1

(367340,)
4454 nan
6940 nan
10868 nan
12753 nan
14855 nan
15678 nan
24954 nan
30251 nan
31108 nan
51455 nan
59055 nan
...
Now all I have to do is remove the values at these indexes.
None of the answers here worked for me. This is what worked:
Test_y = np.nan_to_num(Test_y)
It replaces the infinity values with large finite values and the NaN values with zeros.
I had the same error, and in my case X and y were dataframes, so I had to convert them to numpy arrays first:
X = X.values.astype(float)
y = y.values.astype(float)
Edit: the originally suggested X.as_matrix() is deprecated (as is np.float), hence .values with the plain float type here.
The problem seems to occur in the DecisionTreeClassifier input check. Try:
X_train = X_train.replace((np.inf, -np.inf, np.nan), 0).reset_index(drop=True)
I had the error after trying to select a subset of rows:
df = df.reindex(index=my_index)
It turned out that my_index contained values that were not present in df.index, so the reindex function inserted some new rows and filled them with NaN.
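A small sketch of one way to avoid that, intersecting the requested labels with the existing index before reindexing; the dataframe and labels here are made up for illustration:
import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0, 3.0]}, index=[0, 1, 2])
my_index = pd.Index([1, 2, 5])                 # label 5 does not exist in df.index
safe_index = my_index.intersection(df.index)   # keeps only 1 and 2
df = df.reindex(index=safe_index)              # no all-NaN rows are inserted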
Remove all infinite values (and replace with min or max for that column):
import numpy as np

# generate example matrix
matrix = np.random.rand(5,5)
matrix[0,:] = np.inf
matrix[2,:] = -np.inf

>>> matrix
array([[       inf,        inf,        inf,        inf,        inf],
       [0.87362809, 0.28321499, 0.7427659 , 0.37570528, 0.35783064],
       [      -inf,       -inf,       -inf,       -inf,       -inf],
       [0.72877665, 0.06580068, 0.95222639, 0.00833664, 0.68779902],
       [0.90272002, 0.37357483, 0.92952479, 0.072105  , 0.20837798]])

# find min and max values for each column, ignoring nan, -inf, and inf
mins = [np.nanmin(matrix[:, i][matrix[:, i] != -np.inf]) for i in range(matrix.shape[1])]
maxs = [np.nanmax(matrix[:, i][matrix[:, i] != np.inf]) for i in range(matrix.shape[1])]

# go through matrix one column at a time and replace + and -infinity
# with the max or min for that column
for i in range(matrix.shape[1]):
    matrix[:, i][matrix[:, i] == -np.inf] = mins[i]
    matrix[:, i][matrix[:, i] == np.inf] = maxs[i]

>>> matrix
array([[0.90272002, 0.37357483, 0.95222639, 0.37570528, 0.68779902],
       [0.87362809, 0.28321499, 0.7427659 , 0.37570528, 0.35783064],
       [0.72877665, 0.06580068, 0.7427659 , 0.00833664, 0.20837798],
       [0.72877665, 0.06580068, 0.95222639, 0.00833664, 0.68779902],
       [0.90272002, 0.37357483, 0.92952479, 0.072105  , 0.20837798]])
I found that after calling pct_change on a new column, NaN existed in one of the rows. I removed the NaN rows with the following code:
df = df.replace([np.inf, -np.inf], np.nan)
df = df.dropna()
df = df.reset_index()
I got the same error. It worked with df.fillna(-99999, inplace=True) before doing any replacement, substitution, etc.
I would like to propose a solution for numpy that worked well for me. The snippet
import numpy as np
from numpy import inf

inputArray[inputArray == inf] = np.finfo(np.float64).max
substitutes all positive-infinity values of a numpy array with the maximum float64 number.
Puff!! In my case the problem was about NaN values. You can list the columns that have NaN with this function:
your_data.isnull().sum()
and then you can fill these NaN values in your dataset. Here is the code to "Replace NaN with zero and infinity with large finite numbers" (from numpy.nan_to_num):
your_data[:] = np.nan_to_num(your_data)
In my case the problem was that many scikit functions return numpy arrays, which are devoid of pandas index. So there was an index mismatch when I used those numpy arrays to build new DataFrames and then I tried to mix them with the original data.
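A minimal sketch of what that mismatch looks like and how to avoid it; the column names and values here are made up for illustration:
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({'a': [1.0, 2.0, 4.0]}, index=[10, 11, 12])  # non-default index
scaled = StandardScaler().fit_transform(df[['a']])             # plain numpy array, no index

# re-attach the original index before mixing the result with the original data;
# otherwise alignment happens against a fresh 0..n-1 index and produces NaNs
df['a_scaled'] = pd.Series(scaled.ravel(), index=df.index)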
dataset = dataset.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)
This worked for me.
I had the same issue, in my case the answer was simply that I had a cell in my CSV with no value ("x,y,z,,"). Putting a default value in fixed it for me.
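A small sketch of that fix at load time; the file name and default value are just placeholders:
import pandas as pd

df = pd.read_csv("data.csv")  # a row like "x,y,z,," shows up with NaN in the empty fields
df = df.fillna(0)             # put a default value into the empty cells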
Using isneginf may help: http://docs.scipy.org/doc/numpy/reference/generated/numpy.isneginf.html#numpy.isneginf
x[numpy.isneginf(x)] = 0  # 0 is the value you want to replace with
Note: this solution only applies if you consciously want to keep NaN entries in your dataset.
This error happened to me when I was using some of the scikit-learn functionality (in my case: GridSearchCV). Under the hood I was using an xgboost XGBClassifier, which handles NaN data gracefully. However, GridSearchCV was using the sklearn.utils.validation module, which enforces the absence of missing data in the input by calling the _assert_all_finite function. This was ultimately causing the error:
ValueError: Input contains NaN, infinity or a value too large for dtype('float64')
Sidenote: _assert_all_finite accepts an allow_nan argument which, if set to True, would avoid the issue. However, the scikit-learn API does not give us control over this argument.
Solution: my solution was to use mock.patch to silence the _assert_all_finite function so that it does not raise the ValueError. Here is a snippet:
from unittest import mock
import sklearn

with mock.patch("sklearn.utils.validation._assert_all_finite"):
    ...  # your code that raises ValueError
This replaces _assert_all_finite with a dummy mock function, so it won't get executed. Please note that patching is not a recommended practice and might result in unpredictable behaviour!
EDIT: this Pull Request should resolve the issue (though the fix had not been released as of Jan 2022).
If you're running an estimator, it could be that your learning rate is too high. I passed in the wrong array to a grid search by accident and ended up training with a learning rate of 500, which I could see causing issues with the training process. Basically it's not necessarily only your inputs that have to all be valid, but the intermediate data as well.
After a long time of dealing with this problem, I realized that it happens because the training and testing splits contain columns whose values are the same for all rows. Some calculations in some algorithms may then lead to infinite results. If your data is such that nearby rows are more likely to be similar, then shuffling the data can help (see the sketch below). This is a bug with scikit; I'm using version 0.23.2.
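A minimal sketch of a shuffled split with sklearn's train_test_split (it shuffles by default; the arrays here are placeholders):
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 5)        # placeholder feature matrix
y = np.random.randint(0, 2, 100)  # placeholder binary target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=42)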
If you happen to use the "kc_house_data.csv" dataset (which some commenters and many data-science newcomers seem to use, because it's presented in lots of popular course material), the data is faulty and is the true source of the error.
To fix it, as of 2022:
Delete the last (empty) line in the csv file.
There are two lines that contain one empty data value "x,x,,x,x". To fix them, don't delete the comma; instead add a random integer value like 2000, so the line looks like "x,x,2000,x,x".
Don't forget to save and reload the file in your project.
All the other answers are helpful and correct, but not in this case: if you use kc_house_data.csv you need to fix the data in the file, nothing else will help; the empty data field will shift the other data around randomly and generate weird bugs that are hard to trace back to the source!
In my case the algorithm required the data to be between 0 and 1, noninclusive. My quite brutal solution was to add a small random number to all desired values:
y_train = pd.DataFrame(y_train).applymap(lambda x: x + np.random.rand()/100000.0)["col_name"]
y_train[y_train >= 1] = 0.999999
where y_train is in the range [0, 1]. This is definitely not suitable for all cases, as you are messing with your input data, but it can be a solution if you have sparse data and only need a quick forecast.
Try mat.sum(). If the sum of your data is infinity (greater than the maximum float value, which is about 3.4e+38 for float32 and 1.8e+308 for float64) you will get that error.
See the _assert_all_finite function in validation.py from the scikit-learn source code:
if is_float and np.isfinite(X.sum()):
    pass
elif is_float:
    msg_err = "Input contains {} or a value too large for {!r}."
    if (allow_nan and np.isinf(X).any() or
            not allow_nan and not np.isfinite(X).all()):
        type_err = 'infinity' if allow_nan else 'NaN, infinity'
        # print(X.sum())
        raise ValueError(msg_err.format(type_err, X.dtype))
SciKitLearn tree returning error
I'm trying to build a decision tree with SciKitLearn, and it tells me: Input contains NaN, infinity or a value too large for dtype('float64').
Running .isnull().any() on the input data returns False for every column. There are four input columns of type float64; the data in them is properly formatted to two decimal places, no crazy values. What might the culprit be and how can I fix it?
y = df["CutoffValue"]
X = df_new
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, y)
Fixed it! In this case, "input" in the error refers to the LABELED data, the y! I dropped the nulls for that column, and all is OK.
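A minimal sketch of that fix, reusing the names from the question above and assuming df and df_new share the same index:
from sklearn import tree

mask = df["CutoffValue"].notnull()  # rows where the label is present
y = df["CutoffValue"][mask]
X = df_new.loc[mask]

clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, y)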
How to impute each categorical column in numpy array
There are good solutions for imputing a pandas dataframe. But since I am working mainly with numpy arrays, I have to create a new pandas DataFrame object, impute, and then convert back to a numpy array as follows:
nomDF = pd.DataFrame(x_nominal)  # convert np.array to pd.DataFrame
nomDF = nomDF.apply(lambda x: x.fillna(x.value_counts().index[0]))  # replace NaN with the most frequent value in each column
x_nominal = nomDF.values  # convert back from pd.DataFrame to np.array
Is there a way to impute directly in the numpy array?
We could use Scipy's mode to get the most frequent value in each column. The leftover work would be to get the NaN indices and replace those in the input array with the mode values by indexing. So the implementation would look something like this:
from scipy.stats import mode

R, C = np.where(np.isnan(x_nominal))
vals = mode(x_nominal, axis=0)[0].ravel()
x_nominal[R, C] = vals[C]
Please note that with pandas' value_counts we would be choosing the highest value in case of ties (many categories/elements with the same highest count), while with Scipy's mode it would be the lowest one in such tie cases.
If you are dealing with such a mixed dtype of strings and NaNs, I would suggest a few modifications, keeping the last step unchanged, to make it work:
x_nominal_U3 = x_nominal.astype('U3')
R, C = np.where(x_nominal_U3 == 'nan')
vals = mode(x_nominal_U3, axis=0)[0].ravel()
This throws a warning for the mode calculation: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored. But since we actually want to ignore NaNs for that mode calculation, we should be okay there.
Output of sklearn.ensemble.RandomForestClassifier includes NaN values
I am using sklearn.ensemble.RandomForestClassifier to analyze data and I was puzzled to see NaN values in the prediction without any NaN in the training set or in the testing set.
print preds_y[preds_y.isnull().any(axis=1)].shape
print train_y[train_y.isnull().any(axis=1)].shape
print train_features[train_features.isnull().any(axis=1)].shape
print test_features[train_features.isnull().any(axis=1)].shape

> (4830, 1)
> (0, 1)
> (0, 22)
> (0, 22)
These NaN values are causing the call to sklearn.metrics.classification_report to fail with the following error:
> ValueError: Mix of label input types (string and number)
Right now I'm mostly interested in understanding why the random forest is spitting out NaNs. As soon as I figure that out, I can filter the results accordingly and see how well the method is performing. Thanks in advance for your input. (I'm sorry if this has been asked before. I searched for it, but all the results I found concerned NaNs in the training data, which is not my issue at all.)
EDIT 1: Just to be clear, there are many valid predictions in the output:
print preds_y[~preds_y.isnull().any(axis=1)].shape
print train_y[~train_y.isnull().any(axis=1)].shape

> (11760, 1)
> (39749, 1)
EDIT 2: As I wrote in a comment below, the original data has numeric and categorical columns. All the categorical columns are converted to numeric using pandas.get_dummies() before calling fit(). I convert the results back to a pandas.DataFrame and reconstruct the original categorical columns for readability. The two pandas.Series -- predicted and actual values -- that I am feeding classification_report() have only one type (category). It seems that the NaNs in the predictions arise when the random forest predicts 0 for every dummy binary column corresponding to the original categorical column. I was not expecting this to happen so often -- it seems that 30% of my entries go unclassified -- but I'm not sure there is anything further to add on this issue.
You can first remove all the NaN by replacing them with zeros. See this link. Maybe use df.fillna(0); then you should be fine, I suppose.