How to convert a DataFrame to a 1D array? - python

First of all, apologies. I am very new to pandas, scikit-learn, and Python, so I am sure I am doing something silly. Let me give a little background.
I am trying to run KNeighborsClassifier from scikit-learn (Python).
Here is my strategy:
#Reading the Training set
data = pd.read_csv('Path_TO_File\\Train_Set.csv', sep=',') # reading CSV File
X = data[['Attribute 1','Attribute 2']]
y = data['Target_Column'] # the output is a Dataframe of single column with many rows
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X,y)
Next, I try to read the test data:
test = pd.read_csv('PATH_TO_FILE\\Test.csv', sep=',')
t = test[['Attribute 1','Attribute 2']]
pred = neigh.predict(t)
actual = test['Target_Column']
Next, I try to check the accuracy with the following call, which throws an error:
accuracy=neigh.score(actual,pred)
ERROR: ValueError: could not convert string to float: N
I checked both actual and pred; they have the following types and content:
actual
Out[161]:
Target_Column
0 Y
1 N
:
[614 rows x 1 columns]
pred
Out[162]:
array(['Y', 'N', .....'N'], dtype=object)
N.B : pred has 614 values.
I thought that if I converted the "actual" variable to a 1D array I might be able to execute the function; however, I was not successful.
I think I need to do the following two things, but was not able to (even after googling):
1) Convert actual into a 1-dimensional array.
2) Transpose the 1-dimensional array, since pred has 614 values.
Please let me know how to correct the function.
Thanks in advance!
Raj

Thanks Vivek and Thornhale!
Indeed, I was doing two things wrong:
1) As you both pointed out, I should have been using 1/0 instead of Y/N.
2) I was passing the wrong parameters to score. It should be
accuracy=neigh.score(t, actual), where t is the test feature set and
actual holds the test labels.
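For anyone landing here later, a minimal sketch of the corrected flow (the file paths and column names are the placeholders from the question, not verified against the original data):
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier

# Sketch of the corrected flow; paths and column names are the
# placeholders from the question above.
data = pd.read_csv('Train_Set.csv')
X = data[['Attribute 1', 'Attribute 2']]
y = data['Target_Column']

neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X, y)

test = pd.read_csv('Test.csv')
t = test[['Attribute 1', 'Attribute 2']]
actual = test['Target_Column']

# score() expects (features, true_labels), not (true_labels, predictions)
accuracy = neigh.score(t, actual)
(For what it's worth, current scikit-learn versions accept string labels like 'Y'/'N' directly, so converting to 1/0 mainly helps with other metrics and tooling.)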

You could convert your Series (which is what you get when you do test[COLUMN_NAME]) into an array like so:
actual = np.array(test['Target_Column'])
To then reshape a NumPy array, you would employ this command:
actual.reshape(1, 614) # <- could be the other way around as well, i.e. (614, 1)
Your main issue, though, is that your Series needs to be boolean (as in 0/1).
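As a side note, newer pandas offers equivalent one-liners for the same conversion (a sketch, assuming the test DataFrame from the question):
import numpy as np

actual = test['Target_Column'].to_numpy()   # flat 1-D array, shape (614,)
actual = np.ravel(test[['Target_Column']])  # also flattens a one-column DataFrame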

Related

How do I make my data 1-dimensional? - neural network

When I pass the output of the code below through some other functions (sigmoid/weight functions, etc.), I get an error saying my data 'must be one dimensional'.
The data comes from a CSV that is 329 x 31. I have split it because I need the first column as my 'y' value; the remaining 30 columns and all their rows will be my 'X'. How do I go about making this 1-dimensional for my functions?
Is the section of code where I process my data even the issue, or could it come from a later function call? I'm new to Python, so I'm not sure what could be causing this; I was wondering whether I converted my data into an array correctly.
df = pd.read_csv('data.csv', header=None)
#splitting dataframe into 70/30 split
trainingdata = df.sample(frac=0.7)
testingdata = df.drop(trainingdata.index)
#splitting very first column to 'y' value
y = trainingdata.loc[:,0]
#splitting rest of columns to 'X' value
X = trainingdata.loc[:,1:]
#printing shape for testing
print(X.shape, y.shape)
If I understand your question correctly, you can flatten the array using flatten(), or use reshape(); for more information, read the NumPy documentation. Note that y here is a pandas Series, which has no flatten() method, so convert it to a NumPy array first:
y = y.to_numpy().flatten()
print(y.ndim)
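A small self-contained sketch of the difference (the values here are made up):
import pandas as pd

y = pd.Series([1, 0, 1])      # a Series, like y in the question
arr = y.to_numpy()            # already 1-D: shape (3,)
arr2 = arr.reshape(-1)        # reshape(-1) also yields a flat array
print(arr.ndim, arr2.ndim)    # prints: 1 1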

Assignment with both fillna() and loc() apparently not working

I've searched around for answers, but I cannot find any.
My goal: I'm trying to fill some missing values in a DataFrame, using supervised learning to decide how to fill it.
My code looks like this: NOTE - THIS FIRST PART IS NOT IMPORTANT, IT IS JUST TO GIVE CONTEXT
train_df = df[df['my_column'].notna()] #I need to train the model without using the missing data
train_x = train_df[['lat','long']] #Lat and Long are the inputs
train_y = train_df[['my_column']] #My_column is the output
clf = neighbors.KNeighborsClassifier(2)
clf.fit(train_x,train_y) #clf is the classifier; here we train it
df_x = df[['lat','long']] #I need this part to do the prediction
prediction = clf.predict(df_x) #clf.predict() returns an array
series_pred = pd.Series(prediction) #now the array is a series
print(series_pred.shape) #RETURNS (2381,)
print(series_pred.isna().sum()) #RETURNS 0
So far, so good. I have my 2381 predictions (I need only a few of them) and there is no NaN value inside (why would there be a NaN value in the predictions? I just wanted to be sure, as I don't understand my error)
Here I try to assign the predictions to my Dataframe:
#test_1
df.loc[df['my_column'].isna(), 'my_column'] = series_pred #I assign the predictions using .loc()
#test_2
df['my_column'] = df['my_column'].fillna(series_pred) #Double check: I assign the predictions using .fillna()
print(df['my_column'].shape) #RETURNS (2381,)
print(df['my_column'].isna().sum()) #RETURNS 6
As you can see, it didn't work: there are still 6 missing values. I then randomly tried a slightly different approach:
#test_3
df[['my_column']] = df[['my_column']].fillna(series_pred) #Will it work?
print(df[['my_column']].shape) #RETURNS (2381, 1)
print(df[['my_column']].isna().sum()) #RETURNS 6
That did not work either. I decided to try one last thing: checking the fillna() result even before assigning it back to the original df:
In[42]:
print(df['my_column'].fillna(series_pred).isna().sum()) #extreme test
Out[42]:
6
So... where is my very very stupid mistake? Thanks a lot
EDIT 1
To show a little bit of the data,
In[1]:
df.head()
Out[1]:
     my_column  lat  long
id
9df  Wil         51     5
4f3  Fabio       47     9
x32  Fabio       47     8
z6f  Fabio       47     9
a6f  Giovanni    47     7
Also, I've added info at the beginning of the question
@Ben.T or @Dan should post their own answers; they deserve to be accepted as the correct one.
Following their hints, I would say that there are two solutions:
Solution 1 (Best): Use loc()
The problem
The problem with the current code is that df.loc[df['my_column'].isna(), 'my_column'] expects to receive X values, where X is the number of missing values. My variable prediction actually holds predictions for both the missing and the non-missing rows.
The solution
pred_df = df[df['my_column'].isna()] #For the prediction, use a Dataframe with only the missing values. Problem solved
df_x = pred_df[['lat','long']]
prediction = clf.predict(df_x)
df.loc[df['my_column'].isna(), 'my_column'] = prediction
Solution 2: Use fillna()
The problem
The problem with the current code is that df['my_column'].fillna(series_pred) requires the index of series_pred to match the index of df, which is impossible in this situation unless df has a simple index like [0, 1, 2, 3, 4, ...]
The solution
Reset the index of the df at the very beginning of the code.
Why this is not the best
The cleanest way is to compute predictions only where you need them. That is easy to do with loc(), and I do not know how you would achieve it with fillna(), because you would need to preserve the index through the classification.
Edit: series_pred.index = df['my_column'].isna().index (thanks @Dan)
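Putting Solution 2 together, a sketch (assuming the clf, df, and column names defined in the question): the key step is giving the prediction Series the same index as df, so fillna() can align rows.
prediction = clf.predict(df[['lat', 'long']])
series_pred = pd.Series(prediction, index=df.index)  # align with df's index
df['my_column'] = df['my_column'].fillna(series_pred)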

ValueError: could not convert string to float <ipython-input-5-1a15d1ec0505> in <module>()

I am trying to implement KNN, but when I transform X_train and X_test it gives an error. I am new to this; any help would be appreciated.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
It gives the error "could not convert string to float".
What should I do?
The first thing I do when I see an error like this is to check my inputs. If you're getting a "could not convert string to float" error, you probably have a non-numeric string somewhere in your inputs (because it looks like this KNN function only takes numbers as input).
I'm assuming X_train and X_test are dataframes--try running the following for each column in your dataframe:
for i in range(len(X_train.columns)):
    try:
        [float(j) for j in X_train[X_train.columns[i]]]
        print(X_train.columns[i], ' is all-numeric')
    except ValueError:
        pass
Whichever column doesn't print when you run this is the one you need to look at and see if you can clean up the non-numeric entries in that column.
Edit: if you have a column of only non-numeric strings (for example, "Iris-setosa", "Iris-versicolor", etc.), you will have to convert them into numbers or dummy-variable columns for the purposes of the KNN function.
Edit 2: Whoooops. I wrote bad code. Fixed it.
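In the same spirit, one common way to deal with the offending columns once you have found them (a sketch, not the asker's code) is to one-hot encode them before scaling:
import pandas as pd

X_train = pd.get_dummies(X_train)
X_test = pd.get_dummies(X_test)
# keep the train and test columns aligned after encoding
X_test = X_test.reindex(columns=X_train.columns, fill_value=0)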

Output of sklearn.ensemble.RandomForestClassifier includes NaN values

I am using sklearn.ensemble.RandomForestClassifier to analyze data, and I was puzzled to see NaN values in the predictions despite there being no NaNs in the training or testing set.
print preds_y[preds_y.isnull().any(axis=1)].shape
print train_y[train_y.isnull().any(axis=1)].shape
print train_features[train_features.isnull().any(axis=1)].shape
print test_features[test_features.isnull().any(axis=1)].shape
> (4830, 1)
> (0, 1)
> (0, 22)
> (0, 22)
These NaN values are causing the call to sklearn.metrics.classification_report to fail with the following error:
> ValueError: Mix of label input types (string and number)
Right now I'm mostly interested in understanding why the random forest is spitting out NaNs. As soon as I figure that out, I can filter the results accordingly and see how well the method is performing.
Thanks in advance for your input.
(I'm sorry if this has been asked before. I searched for it but all the results I found concerned NaNs in the training data, which is not my issue at all.)
EDIT 1: Just to be clear, there are many valid predictions in the output:
print preds_y[~preds_y.isnull().any(axis=1)].shape
print train_y[~train_y.isnull().any(axis=1)].shape
> (11760, 1)
> (39749, 1)
EDIT 2:
As I wrote in a comment below, the original data has numeric and categorical columns. All the categorical columns are converted to numeric using pandas.get_dummies() before calling fit(). I convert the results back to a pandas.DataFrame and reconstruct the original categorical columns for readability. The two pandas.Series I am feeding to classification_report() -- the predicted and actual values -- have only one dtype (category).
It seems that the NaNs in the predictions arise if the random forest predicts 0 for every dummy binary column corresponding to the original categorical column. I was not expecting this to happen so often -- it seems that 30% of my entries go unclassified -- but I'm not sure there is anything further to add on this issue.
You can first remove all NaNs by replacing them with zeros.
See this link.
Maybe use df.fillna(0); then you should be fine, I suppose.
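To illustrate the asker's own diagnosis from EDIT 2, a small sketch with hypothetical dummy columns: reconstructing a categorical column from predicted dummies yields NaN whenever every dummy in a row is 0.
import pandas as pd

preds = pd.DataFrame({'color_red':  [1, 0, 0],
                      'color_blue': [0, 1, 0]})  # third row: all dummies are 0
# keep the label only where some dummy fired; otherwise NaN
label = preds.idxmax(axis=1).where(preds.sum(axis=1) > 0)
print(label)  # the third entry is NaN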

mean() of column in pandas DataFrame returning inf: how can I solve this?

I'm trying to implement some machine learning algorithms, but I'm having some difficulty putting the data together.
In the example below, I load an example dataset from UCI, remove rows with missing data (thanks to the help from a previous question), and now I would like to normalize the data.
For many datasets, I just used:
valores = (valores - valores.mean()) / (valores.std())
But for this particular dataset the approach above doesn't work. The problem is that the mean function returns inf, perhaps due to a precision issue. See the example below:
bcw = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data', header=None)
for col in bcw.columns:
    if bcw[col].dtype != 'int64':
        print "Removing possible '?' in column %s..." % col
        bcw = bcw[bcw[col] != '?']
valores = bcw.iloc[:,1:10]
# mean returns inf
print valores.iloc[:,5].mean()
My question is how to deal with this. It seems that I need to change the type of this column, but I don't know how to do it.
I'm not so familiar with pandas, but if you convert to a NumPy array it works; try:
np.asarray(valores.iloc[:,5], dtype=float).mean()
NaN values should not matter when computing the mean of a pandas.Series, and precision is also irrelevant here. The only explanation I can think of is that one of the values in valores is equal to infinity.
You could exclude any infinite values when computing the mean like this:
import numpy as np
is_inf = valores.iloc[:, 5] == np.inf
valores.iloc[:, 5][~is_inf].mean()
If the elements of the pandas Series are strings, you get inf as the mean result. In this specific case you can simply convert the Series elements to float and then calculate the mean; there is no need to use NumPy.
Example:
valores.iloc[:,5].astype(float).mean()
I had the same problem with a column of dtype 'O' (object) whose max value was 9999. Have you tried the convert_objects method with the convert_numeric=True parameter? It fixed the problem for me (note that convert_objects has since been deprecated in favor of pd.to_numeric).
For me, the reason was an overflow: my original data was in float16, and calling .mean() on it would return inf. After converting the data to float32 (e.g. via .astype("float32")), .mean() worked as expected.
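Summing up the answers above, a sketch of the two usual fixes in modern pandas (which one applies depends on your data):
import pandas as pd

# 1) object/string column: coerce to numeric first ('?' and the like become NaN)
col = pd.to_numeric(valores.iloc[:, 5], errors='coerce')
print(col.mean())

# 2) low-precision floats: upcast before aggregating to avoid float16 overflow
print(valores.iloc[:, 5].astype('float32').mean())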
