This is my code:
for col in df:
    if col.startswith('event'):
        df[col].fillna(0, inplace=True)
        df[col] = df[col].map(lambda x: re.sub(r"\D", "", str(x)))
I have eleven event columns, event_0 to event_10.
When I fill NaN with this code, it fills all NaN cells under all event columns with 0, except event_0, the first column of that selection, which also contains NaN and is left unchanged.
I made these columns from the 'events' column with the following code:
event_seperator = lambda x: pd.Series([i for i in str(x).strip().split('\n')]).add_prefix('event_')
df_events = df['events'].apply(event_seperator)
df = pd.concat([df.drop(columns=['events']), df_events], axis=1)
Please tell me what is wrong. You can see the dataframe before the change in the picture.
I don't know why this happened, since I made all those columns the same way.
Your data suggests this is precisely what has not been done.
You have a few options depending on what you are trying to achieve.
1. Convert all non-numeric values to 0
Use pd.to_numeric with errors='coerce':
df[col] = pd.to_numeric(df[col], errors='coerce').fillna(0)
2. Replace either string ('nan') or null (NaN) values with 0
Use pd.Series.replace followed by the previous method:
df[col] = df[col].replace('nan', np.nan).fillna(0)
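For context on why fillna missed those cells: the splitting code applies str(x) to every row, so a missing 'events' value most likely became the literal string 'nan' in event_0, which fillna does not treat as null. A minimal sketch of the difference, with made-up data:
import numpy as np
import pandas as pd

s = pd.Series([str(np.nan), np.nan])  # a 'nan' string vs. a real NaN
print(s.fillna(0))
# 0    nan   <- the string 'nan' survives fillna
# 1      0   <- only the true NaN is replaced
print(pd.to_numeric(s, errors='coerce').fillna(0))
# 0    0.0   <- both are caught after coercion
# 1    0.0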
Beginner question incoming.
I have a dataframe derived from an Excel file with a column that I will call "input".
In this column are floats (e.g. 7.4, 8.1, 2.2, ...). However, there are also some wrong values, such as strings (which are easy to filter out) and, what I find difficult, single instances of "." or "..".
I would like to clean the column to generate only numeric float values.
I have used this approach for other columns, but cannot do so here because if I get rid of the "." instances, my floats will be messed up:
for col in [col for col in new_df.columns if col.startswith("input")]:
    new_df[col] = new_df[col].str.replace(r',| |\-|\^|\+|#|j|0|.', '', regex=True)
    new_df[col] = pd.to_numeric(new_df[col], errors='raise')
I have also tried the following, but it then replaces every value in the column with None:
for index, row in new_df.iterrows():
    col_input = row['input']
    if re.match(r'^-?\d+(?:.\d+)$', str(col_input)) is None:
        new_df["input"] = None
How do I get rid of the dots?
Thanks!
You can simply use pandas.to_numeric and pass errors='coerce', without the loop:
from io import StringIO
import pandas as pd
s = """input
7.4
8.1
2.2
foo
foo.bar
baz/foo"""
df = pd.read_csv(StringIO(s))
df['input'] = pd.to_numeric(df['input'], errors='coerce')
# Output:
print(df)
   input
0    7.4
1    8.1
2    2.2
3    NaN
4    NaN
5    NaN
df.dropna(inplace=True)
print(df)
   input
0    7.4
1    8.1
2    2.2
If you need to clean up multiple mixed columns, use:
cols = ['input', ...]  # put the names of the columns concerned here
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
df.dropna(subset=cols, inplace=True)
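For the question's specific "." and ".." entries, the same coercion turns them into NaN, so no extra regex handling is needed:
import pandas as pd

s = pd.Series(['7.4', '8.1', '2.2', '.', '..'])
print(pd.to_numeric(s, errors='coerce'))
# 0    7.4
# 1    8.1
# 2    2.2
# 3    NaN
# 4    NaN
# dtype: float64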
My application saves an indeterminate number of values in different columns. As a result, I have a dataframe with a known set of columns at the beginning, but from a particular column onward (one that I know) there is an uncertain number of columns holding the same kind of data.
Example:
known1  known2  know3  unknow1  unknow2  unknow3  ...
1       3       3      data     data2    data3
The result I would like to get should be something like this:
known1  known2  know3  all_unknow
1       3       3      data,data2,data3
How can I do this when I don't know the number of unknown columns? What I do know is that this will occur (in this example) from the 4th column.
IIUC, use filter to select the columns by keyword:
cols = list(df.filter(like='unknow'))
# ['unknow1', 'unknow2', 'unknow3']
df['all_unknow'] = df[cols].apply(','.join, axis=1)
df = df.drop(columns=cols)
or take all columns from the 4th one:
cols = df.columns[3:]
df['all_unknow'] = df[cols].apply(','.join, axis=1)
df = df.drop(columns=cols)
output:
   known1  known2  know3        all_unknow
0       1       3      3  data,data2,data3
Alternatively, in a single step with iloc:
df['all_unknown'] = df.iloc[:, 3:].apply(','.join, axis=1)
if you also want to drop all columns after the 4th:
cols = df.columns[3:-1]
df = df.drop(cols, axis=1)
the -1 is to avoid dropping the new column
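One caveat for both approaches: ','.join raises a TypeError if any of the unknown columns hold non-string values (e.g. numbers); casting first avoids that:
df['all_unknown'] = df.iloc[:, 3:].astype(str).apply(','.join, axis=1)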
I have a pandas dataframe and want to create a new column.
This new column would be 1 if all columns in the row have a value (are not NaN),
and 0 if any one of the columns in the row is NaN.
Does anyone have guidance on how to go about this?
I have used the below to count the instances of 'not NaN' in the row, which could possibly be used in an if statement, or is there a simpler way?
code_count.apply(lambda x: x.count(), axis=1)
code_count['count_languages'] = code_count.apply(lambda x: x.count(), axis=1)
Use DataFrame.notna to test for non-missing values and DataFrame.all to test whether all values per row are True, then convert the boolean mask to 1/0 with Series.view:
code_count['count_languages'] = code_count.notna().all(axis=1).view('i1')
Or Series.astype:
code_count['count_languages'] = code_count.notna().all(axis=1).astype('int')
Or numpy.where:
code_count['count_languages'] = np.where(code_count.notna().all(axis=1), 1, 0)
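A minimal runnable sketch with made-up data (the column names here are placeholders, not from the question):
import numpy as np
import pandas as pd

code_count = pd.DataFrame({'python': [1.0, np.nan, 3.0],
                           'java':   [4.0, 5.0, np.nan]})
code_count['count_languages'] = code_count.notna().all(axis=1).astype('int')
print(code_count)
#    python  java  count_languages
# 0     1.0   4.0                1
# 1     NaN   5.0                0
# 2     3.0   NaN                0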
In my code the df.fillna() method is not working, while the df.dropna() method is. I don't want to drop the column, though. What can I do to make fillna() work?
def preprocess_df(df):
    for col in df.columns:  # go through all of the columns
        if col != "target":  # normalize all ... except for the target itself!
            df[col] = df[col].pct_change()  # pct change "normalizes" the different currencies (each crypto coin has vastly diff values; we're really more interested in the other coin's movements)
            # df.dropna(inplace=True)  # remove the nas created by pct_change
            df.fillna(method="ffill", inplace=True)
            print(df)
            break
            df[col] = preprocessing.scale(df[col].values)  # scale between 0 and 1.
You were almost there:
df = df.fillna(method="ffill")
You have to assign the result back to df, and drop inplace=True: with inplace=True the call returns None, so assigning that back would wipe out your dataframe.
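Either form works on its own, just not both at once:
df.fillna(method="ffill", inplace=True)  # mutates df in place, returns None
df = df.fillna(method="ffill")           # returns a new DataFrame to assign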
What am I missing? fillna doesn't fill NaN values:
# filling a multi-column df with values...
df.fillna(method='ffill', inplace=True)
df.fillna(method='bfill', inplace=True)
# just for kicks
df = df.fillna(method='ffill')
df = df.fillna(method='bfill')
# returns True
print(df.isnull().values.any())
I verified it: I actually see NaN values in some of the first cells.
Edit
So I'm trying to write it myself:
def bfill(df):
    for column in df:
        for cell in df[column]:
            if cell is not None:
                tmpValue = cell
                break
        for cell in df[column]:
            if cell is not None:
                break
            cell = tmpValue
However, it doesn't work... Isn't the cell passed by reference?
ffill fills rows with values from the previous row if they weren't NaN; bfill fills rows with values from the NEXT row if they weren't NaN. In both cases, if you have NaNs on the first and/or last row, they won't get filled. Try doing both, one after the other. If any columns have entirely NaN values then you will need to fill again with axis=1 (although I get a NotImplementedError when I try to do this with inplace=True on Python 3.6, which is super annoying, pandas!).
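A minimal sketch of that boundary behaviour:
import numpy as np
import pandas as pd

s = pd.Series([np.nan, 1.0, np.nan, 2.0, np.nan])
print(s.fillna(method='ffill').tolist())  # [nan, 1.0, 1.0, 2.0, 2.0] - leading NaN stays
print(s.fillna(method='bfill').tolist())  # [1.0, 1.0, 2.0, 2.0, nan] - trailing NaN stays
print(s.fillna(method='ffill').fillna(method='bfill').tolist())  # [1.0, 1.0, 1.0, 2.0, 2.0]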
So, I don't know why, but taking the fillna outside the function fixed it.
Original:
def doWork(df):
    ...
    df = df.fillna(method='ffill')
    df = df.fillna(method='bfill')

def main():
    ...
    doWork(df)
    print(df.head(5))  # shows NaN
Solution:
def doWork(df):
    ...

def main():
    ...
    doWork(df)
    df = df.fillna(method='ffill')
    df = df.fillna(method='bfill')
    print(df.head(5))  # no NaN
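The likely explanation: df = df.fillna(...) inside doWork only rebinds the local name df; the caller's DataFrame is never touched unless the function returns the new object (or uses inplace=True). A minimal sketch of the returning variant, with made-up data:
import numpy as np
import pandas as pd

def doWork(df):
    df = df.fillna(method='ffill')  # rebinds the local name only
    df = df.fillna(method='bfill')
    return df                       # hand the new object back to the caller

def main():
    df = pd.DataFrame({'a': [np.nan, 1.0, np.nan]})
    df = doWork(df)                 # reassign with the returned frame
    print(df.head(5))               # no NaN

main()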