Drop columns with more than 70% zeros - python

I would like to know if there is a command that drops columns that have more than 70% zeros (or X% zeros). For NaN I use:
df = df.loc[:, df.isnull().mean() < .7]
Thank you!

Just change df.isnull().mean() to (df==0).mean():
df = df.loc[:, (df==0).mean() < .7]
Here's a demo:
df
Out:
0 1 2 3 4
0 1 1 1 1 0
1 1 0 0 0 1
2 0 1 1 0 0
3 1 0 0 1 0
4 1 1 1 1 1
5 1 0 0 0 0
6 0 1 0 0 0
7 0 1 1 0 0
8 1 0 0 1 0
9 0 0 0 1 0
(df==0).mean()
Out:
0 0.4
1 0.5
2 0.6
3 0.5
4 0.8
dtype: float64
df.loc[:, (df==0).mean() < .7]
Out:
0 1 2 3
0 1 1 1 1
1 1 0 0 0
2 0 1 1 0
3 1 0 0 1
4 1 1 1 1
5 1 0 0 0
6 0 1 0 0
7 0 1 1 0
8 1 0 0 1
9 0 0 0 1
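For reference, here is a self-contained version of the idea with the threshold pulled out as a variable (a sketch; the random demo data is made up):
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.randint(0, 2, size=(10, 5)))

threshold = 0.7
# keep only the columns whose fraction of zeros is below the threshold
df = df.loc[:, (df == 0).mean() < threshold]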

Related

Is there any way to convert the columns in a Pandas DataFrame using its mirror image DataFrame structure

The df I have is:
0 1 2
0 0 0 0
1 0 0 1
2 0 1 0
3 0 1 1
4 1 0 0
5 1 0 1
6 1 1 0
7 1 1 1
I want to obtain a DataFrame with the columns reversed (a mirror image):
0 1 2
0 0 0 0
1 1 0 0
2 0 1 0
3 1 1 0
4 0 0 1
5 1 0 1
6 0 1 1
7 1 1 1
Is there any way to do that?
You can check:
df[:] = df.iloc[:,::-1]
df
Out[959]:
0 1 2
0 0 0 0
1 1 0 0
2 0 1 0
3 1 1 0
4 0 0 1
5 1 0 1
6 0 1 1
7 1 1 1
Here is a slightly more verbose, but likely more efficient, solution, as it doesn't rewrite the data; it only renames and reorders the columns:
cols = df.columns
df.columns = df.columns[::-1]
df = df.loc[:,cols]
Or a shorter variant:
df = df.iloc[:,::-1].set_axis(df.columns, axis=1)
Output:
0 1 2
0 0 0 0
1 1 0 0
2 0 1 0
3 1 1 0
4 0 0 1
5 1 0 1
6 0 1 1
7 1 1 1
There are other ways, but here's one solution:
df[df.columns] = df[reversed(df.columns)]
Output:
0 1 2
0 0 0 0
1 1 0 0
2 0 1 0
3 1 1 0
4 0 0 1
5 1 0 1
6 0 1 1
7 1 1 1
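As a self-contained sketch tying these together (the frame is the one from the question), you can verify that the shorter variant produces the mirrored result:
import pandas as pd

df = pd.DataFrame({0: [0, 0, 0, 0, 1, 1, 1, 1],
                   1: [0, 0, 1, 1, 0, 0, 1, 1],
                   2: [0, 1, 0, 1, 0, 1, 0, 1]})

# reverse the column order, then restore the original labels
mirrored = df.iloc[:, ::-1].set_axis(df.columns, axis=1)
print(mirrored)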

Creating week flags from DOW

I have a dataframe:
DOW
0 0
1 1
2 2
3 3
4 4
5 5
6 6
This corresponds to the day of the week. Now I want to create this DataFrame:
DOW MON_FLAG TUE_FLAG WED_FLAG THUR_FLAG FRI_FLAG SAT_FLAG
0 0 0 0 0 0 0 0
1 1 1 0 0 0 0 0
2 2 0 1 0 0 0 0
3 3 0 0 1 0 0 0
4 4 0 0 0 1 0 0
5 5 0 0 0 0 1 0
6 6 0 0 0 0 0 1
7 0 0 0 0 0 0 0
8 1 1 0 0 0 0 0
Depending on the DOW column: for example, if it's 1 then MON_FLAG will be 1, if it's 2 then TUE_FLAG will be 1, and so on. I have kept Sunday as 0, which is why all the flag columns are zero in that case.
Use get_dummies and rename the columns with a dictionary:
d = {0: 'SUN_FLAG', 1: 'MON_FLAG', 2: 'TUE_FLAG', 3: 'WED_FLAG',
     4: 'THUR_FLAG', 5: 'FRI_FLAG', 6: 'SAT_FLAG'}
df = df.join(pd.get_dummies(df['DOW']).rename(columns=d))
print (df)
DOW SUN_FLAG MON_FLAG TUE_FLAG WED_FLAG THUR_FLAG FRI_FLAG SAT_FLAG
0 0 1 0 0 0 0 0 0
1 1 0 1 0 0 0 0 0
2 2 0 0 1 0 0 0 0
3 3 0 0 0 1 0 0 0
4 4 0 0 0 0 1 0 0
5 5 0 0 0 0 0 1 0
6 6 0 0 0 0 0 0 1
7 0 1 0 0 0 0 0 0
8 1 0 1 0 0 0 0 0
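A self-contained version of the same idea (a sketch; the reindex call is an addition that guarantees all seven day columns exist even if some DOW value never occurs in the data):
import pandas as pd

df = pd.DataFrame({'DOW': [0, 1, 2, 3, 4, 5, 6, 0, 1]})

d = {0: 'SUN_FLAG', 1: 'MON_FLAG', 2: 'TUE_FLAG', 3: 'WED_FLAG',
     4: 'THUR_FLAG', 5: 'FRI_FLAG', 6: 'SAT_FLAG'}

# one-hot encode DOW as 0/1 ints, force columns 0-6 to exist, then rename
flags = (pd.get_dummies(df['DOW']).astype(int)
           .reindex(columns=range(7), fill_value=0)
           .rename(columns=d))
df = df.join(flags)
If, as in the desired output, Sunday should not get a column of its own, drop it afterwards with df.drop(columns=['SUN_FLAG']).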

Count how many cells are between the last value in the dataframe and the end of the row

I'm using the pandas library in Python.
I have a data frame:
0 1 2 3 4
0 0 0 0 1 0
1 0 0 0 0 1
2 0 0 1 0 0
3 1 0 0 0 0
4 0 0 1 0 0
5 0 1 0 0 0
6 1 0 0 1 1
Is it possible to create a new column that counts the zero cells between the last value above zero and the end of the row? Example data frame below:
0 1 2 3 4 Value
0 0 0 0 1 0 1
1 0 0 0 0 1 0
2 0 0 1 0 0 2
3 1 0 0 0 0 4
4 0 0 1 0 0 2
5 0 1 0 0 0 3
6 1 0 0 1 1 0
Using argmax:
df['Value'] = df.apply(lambda x: (x.iloc[::-1] == 1).argmax(), axis=1)
Or using np.where (assuming numpy is imported as np):
df['Value'] = np.where(df.iloc[:, ::-1] == 1, True, False).argmax(1)
0 1 2 3 4 Value
0 0 0 0 1 0 1
1 0 0 0 0 1 0
2 0 0 1 0 0 2
3 1 0 0 0 0 4
4 0 0 1 0 0 2
5 0 1 0 0 0 3
6 1 0 0 1 1 0
Use:
df['new'] = df.iloc[:, ::-1].cumsum(axis=1).eq(0).sum(axis=1)
print (df)
0 1 2 3 4 new
0 0 0 0 1 0 1
1 0 0 0 0 1 0
2 0 0 1 0 0 2
3 1 0 0 0 0 4
4 0 0 1 0 0 2
5 0 1 0 0 0 3
6 1 0 0 1 1 0
Details:
First change the order of the columns with DataFrame.iloc and slicing:
print (df.iloc[:, ::-1])
4 3 2 1 0
0 0 1 0 0 0
1 1 0 0 0 0
2 0 0 1 0 0
3 0 0 0 0 1
4 0 0 1 0 0
5 0 0 0 1 0
6 1 1 0 0 1
Then take the cumulative sum per row with DataFrame.cumsum:
print (df.iloc[:, ::-1].cumsum(axis=1))
4 3 2 1 0
0 0 1 1 1 1
1 1 1 1 1 1
2 0 0 1 1 1
3 0 0 0 0 1
4 0 0 1 1 1
5 0 0 0 1 1
6 1 2 2 2 3
Then compare with 0 by DataFrame.eq:
print (df.iloc[:, ::-1].cumsum(axis=1).eq(0))
4 3 2 1 0
0 True False False False False
1 False False False False False
2 True True False False False
3 True True True True False
4 True True False False False
5 True True True False False
6 False False False False False
And finally count the True values per row with sum:
print (df.iloc[:, ::-1].cumsum(axis=1).eq(0).sum(axis=1))
0 1
1 0
2 2
3 4
4 2
5 3
6 0
dtype: int64
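One caveat: the argmax-based variants return 0 for a row with no nonzero value at all, which is indistinguishable from a row ending in 1, while the cumsum approach counts the full row width. A minimal sketch that handles the edge case explicitly (the sample frame is made up):
import numpy as np
import pandas as pd

df = pd.DataFrame([[0, 0, 0, 1, 0],
                   [0, 0, 0, 0, 1],
                   [0, 0, 0, 0, 0]])  # last row has no nonzero value

rev = (df.iloc[:, ::-1] != 0).to_numpy()
trailing = rev.argmax(axis=1)              # trailing zeros, counted from the right
trailing[~rev.any(axis=1)] = df.shape[1]   # all-zero rows: count the whole row
df['Value'] = trailing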

Pandas read data without header or index

Here is the .csv file:
0 0 1 1 1 0 1 1 0 1 1 1 1
0 1 1 0 1 0 1 1 0 1 0 0 1
0 0 1 1 0 0 1 1 1 0 1 1 1
0 1 1 1 1 1 1 1 1 1 1 1 2
0 1 1 1 0 1 1 1 1 1 1 1 1
0 0 0 1 1 1 0 1 0 0 0 1 1
0 0 0 0 1 1 0 0 1 0 1 0 2
0 1 1 0 1 1 1 1 0 1 1 1 1
0 0 1 0 0 0 0 0 0 1 1 0 1
0 1 1 1 0 1 1 0 0 0 0 1 1
where the first column should contain indices like (0, 1, 2, 3, 4, ...), but for some reason it is all zeros. Is there any way to fix this when reading the csv file with pandas.read_csv?
I use:
df = pd.read_csv(file, delimiter='\t', header=None, names=[1,2,3,4,5,6,7,8,9,10,11,12])
and get something like:
1 2 3 4 5 6 7 8 9 10 11 12
0 0 1 1 1 0 1 1 0 1 1 1 1
0 1 1 0 1 0 1 1 0 1 0 0 1
0 0 1 1 0 0 1 1 1 0 1 1 1
0 1 1 1 1 1 1 1 1 1 1 1 2
0 1 1 1 0 1 1 1 1 1 1 1 1
0 0 0 1 1 1 0 1 0 0 0 1 1
0 0 0 0 1 1 0 0 1 0 1 0 2
0 1 1 0 1 1 1 1 0 1 1 1 1
0 0 1 0 0 0 0 0 0 1 1 0 1
0 1 1 1 0 1 1 0 0 0 0 1 1
and it's nearly what I need, but the first column (the indices) is still all zeros. Can pandas, for example, ignore this first column of zeros and automatically generate new indices, to get this:
0 1 2 3 4 5 6 7 8 9 10 11 12
0 0 1 0 1 1 0 0 0 1 1 1 0 1
1 0 1 0 1 1 0 0 0 1 1 1 1 2
2 0 1 1 1 0 0 1 1 1 1 1 1 2
You might want index_col=False:
df = pd.read_csv(file, delimiter='\t',
                 header=None, index_col=False)
From the Docs,
If you have a malformed file with delimiters at the end of each line,
you might consider index_col=False to force pandas to not use the
first column as the index
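A minimal, self-contained reproduction of that behaviour (a sketch using an inline string in place of the file):
import io
import pandas as pd

data = "0\t0\t1\t1\n0\t1\t1\t0\n"

# index_col=False keeps the first column as data
# and generates a fresh RangeIndex instead
df = pd.read_csv(io.StringIO(data), delimiter='\t',
                 header=None, index_col=False)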
Why fuss over read_csv? Use np.loadtxt:
pd.DataFrame(np.loadtxt(file, dtype=int))
0 1 2 3 4 5 6 7 8 9 10 11 12
0 0 0 1 1 1 0 1 1 0 1 1 1 1
1 0 1 1 0 1 0 1 1 0 1 0 0 1
2 0 0 1 1 0 0 1 1 1 0 1 1 1
3 0 1 1 1 1 1 1 1 1 1 1 1 2
4 0 1 1 1 0 1 1 1 1 1 1 1 1
5 0 0 0 1 1 1 0 1 0 0 0 1 1
6 0 0 0 0 1 1 0 0 1 0 1 0 2
7 0 1 1 0 1 1 1 1 0 1 1 1 1
8 0 0 1 0 0 0 0 0 0 1 1 0 1
9 0 1 1 1 0 1 1 0 0 0 0 1 1
The default delimiter is whitespace, and no headers/indexes are read in by default. Column types are also not inferred, since the dtype is specified to be int. All in all, this is a very succinct and powerful alternative.
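The same inline-string pattern works here as well, if you want to try it without a file (a sketch):
import io
import numpy as np
import pandas as pd

data = "0 0 1 1\n0 1 1 0\n"

# loadtxt splits on whitespace by default and returns a plain array,
# so the resulting DataFrame gets a fresh RangeIndex
df = pd.DataFrame(np.loadtxt(io.StringIO(data), dtype=int))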

Convert a list of values to a time series in python

I want to convert the following data:
jan_1 jan_15 feb_1 feb_15 mar_1 mar_15 apr_1 apr_15 may_1 may_15 jun_1 jun_15 jul_1 jul_15 aug_1 aug_15 sep_1 sep_15 oct_1 oct_15 nov_1 nov_15 dec_1 dec_15
0 0 0 0 0 1 1 2 2 2 2 2 2 3 3 3 3 3 0 0 0 0 0 0
into an array of length 365, where each value is repeated until the next date, e.g. 0 is repeated from January 1 to January 15...
I could do something like numpy.repeat, but that is not date-aware, so it would not take into account that fewer than 15 days pass between feb_15 and mar_1.
Is there a pythonic solution for this?
You can use resample:
# add a last value for Dec 31, copying the last column of df
df['dec_31'] = df.iloc[:, -1]
# convert the column names to datetime - see http://strftime.org/
df.columns = pd.to_datetime(df.columns, format='%b_%d')
# transpose and resample by day
df1 = df.T.resample('d').ffill()
df1.columns = ['col']
print (df1)
col
1900-01-01 0
1900-01-02 0
1900-01-03 0
1900-01-04 0
1900-01-05 0
1900-01-06 0
1900-01-07 0
1900-01-08 0
1900-01-09 0
1900-01-10 0
1900-01-11 0
1900-01-12 0
1900-01-13 0
1900-01-14 0
1900-01-15 0
1900-01-16 0
1900-01-17 0
1900-01-18 0
1900-01-19 0
1900-01-20 0
1900-01-21 0
1900-01-22 0
1900-01-23 0
1900-01-24 0
1900-01-25 0
1900-01-26 0
1900-01-27 0
1900-01-28 0
1900-01-29 0
1900-01-30 0
..
1900-12-02 0
1900-12-03 0
1900-12-04 0
1900-12-05 0
1900-12-06 0
1900-12-07 0
1900-12-08 0
1900-12-09 0
1900-12-10 0
1900-12-11 0
1900-12-12 0
1900-12-13 0
1900-12-14 0
1900-12-15 0
1900-12-16 0
1900-12-17 0
1900-12-18 0
1900-12-19 0
1900-12-20 0
1900-12-21 0
1900-12-22 0
1900-12-23 0
1900-12-24 0
1900-12-25 0
1900-12-26 0
1900-12-27 0
1900-12-28 0
1900-12-29 0
1900-12-30 0
1900-12-31 0
[365 rows x 1 columns]
# if you need a Series
print (df1.col)
1900-01-01 0
1900-01-02 0
1900-01-03 0
1900-01-04 0
1900-01-05 0
1900-01-06 0
1900-01-07 0
1900-01-08 0
1900-01-09 0
1900-01-10 0
1900-01-11 0
1900-01-12 0
1900-01-13 0
1900-01-14 0
1900-01-15 0
1900-01-16 0
1900-01-17 0
1900-01-18 0
1900-01-19 0
1900-01-20 0
1900-01-21 0
1900-01-22 0
1900-01-23 0
1900-01-24 0
1900-01-25 0
1900-01-26 0
1900-01-27 0
1900-01-28 0
1900-01-29 0
1900-01-30 0
..
1900-12-02 0
1900-12-03 0
1900-12-04 0
1900-12-05 0
1900-12-06 0
1900-12-07 0
1900-12-08 0
1900-12-09 0
1900-12-10 0
1900-12-11 0
1900-12-12 0
1900-12-13 0
1900-12-14 0
1900-12-15 0
1900-12-16 0
1900-12-17 0
1900-12-18 0
1900-12-19 0
1900-12-20 0
1900-12-21 0
1900-12-22 0
1900-12-23 0
1900-12-24 0
1900-12-25 0
1900-12-26 0
1900-12-27 0
1900-12-28 0
1900-12-29 0
1900-12-30 0
1900-12-31 0
Freq: D, Name: col, dtype: int64
# transpose and convert to a numpy array
print (df1.T.values)
[[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
3 3 3 3 3 3 3 3 3 3 3 3 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]
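Putting the whole approach together as a self-contained sketch (the values are the ones from the question; %b_%d parses into the default year 1900, which has 365 days):
import pandas as pd

cols = ['jan_1', 'jan_15', 'feb_1', 'feb_15', 'mar_1', 'mar_15',
        'apr_1', 'apr_15', 'may_1', 'may_15', 'jun_1', 'jun_15',
        'jul_1', 'jul_15', 'aug_1', 'aug_15', 'sep_1', 'sep_15',
        'oct_1', 'oct_15', 'nov_1', 'nov_15', 'dec_1', 'dec_15']
vals = [0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2,
        2, 3, 3, 3, 3, 3, 0, 0, 0, 0, 0, 0]
df = pd.DataFrame([vals], columns=cols)

df['dec_31'] = df.iloc[:, -1]                     # extend to the end of the year
df.columns = pd.to_datetime(df.columns, format='%b_%d')
daily = df.T.resample('D').ffill()                # forward-fill between the dates
arr = daily.iloc[:, 0].to_numpy()                 # the length-365 array
assert len(arr) == 365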
IIUC you can do it this way:
In [194]: %paste
# transpose DF, rename columns
x = df.T.reset_index().rename(columns={'index':'date', 0:'val'})
# parse dates
x['date'] = pd.to_datetime(x['date'], format='%b_%d')
# group by month and resample('1D') within each group
result = (x.groupby(x['date'].dt.month)
.apply(lambda x: x.set_index('date').resample('1D').ffill()))
# rename index names
result.index.names = ['month','date']
## -- End pasted text --
In [212]: result
Out[212]:
val
month date
1 1900-01-01 0
1900-01-02 0
1900-01-03 0
1900-01-04 0
1900-01-05 0
1900-01-06 0
1900-01-07 0
1900-01-08 0
1900-01-09 0
1900-01-10 0
1900-01-11 0
1900-01-12 0
1900-01-13 0
1900-01-14 0
1900-01-15 0
2 1900-02-01 0
1900-02-02 0
1900-02-03 0
1900-02-04 0
1900-02-05 0
1900-02-06 0
1900-02-07 0
1900-02-08 0
1900-02-09 0
1900-02-10 0
1900-02-11 0
1900-02-12 0
1900-02-13 0
1900-02-14 0
1900-02-15 0
... ...
11 1900-11-01 0
1900-11-02 0
1900-11-03 0
1900-11-04 0
1900-11-05 0
1900-11-06 0
1900-11-07 0
1900-11-08 0
1900-11-09 0
1900-11-10 0
1900-11-11 0
1900-11-12 0
1900-11-13 0
1900-11-14 0
1900-11-15 0
12 1900-12-01 0
1900-12-02 0
1900-12-03 0
1900-12-04 0
1900-12-05 0
1900-12-06 0
1900-12-07 0
1900-12-08 0
1900-12-09 0
1900-12-10 0
1900-12-11 0
1900-12-12 0
1900-12-13 0
1900-12-14 0
1900-12-15 0
[180 rows x 1 columns]
Or using reset_index():
In [213]: result.reset_index().head(20)
Out[213]:
month date val
0 1 1900-01-01 0
1 1 1900-01-02 0
2 1 1900-01-03 0
3 1 1900-01-04 0
4 1 1900-01-05 0
5 1 1900-01-06 0
6 1 1900-01-07 0
7 1 1900-01-08 0
8 1 1900-01-09 0
9 1 1900-01-10 0
10 1 1900-01-11 0
11 1 1900-01-12 0
12 1 1900-01-13 0
13 1 1900-01-14 0
14 1 1900-01-15 0
15 2 1900-02-01 0
16 2 1900-02-02 0
17 2 1900-02-03 0
18 2 1900-02-04 0
19 2 1900-02-05 0
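Note that this variant only fills in the 1st through the 15th of each month, hence the 180 rows. If you need the values back as a flat array, a small sketch assuming the result frame from above:
# drop the helper 'month' level and extract the values
vals = result.reset_index(level='month', drop=True)['val'].to_numpy()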
