Empty cells in dataframe after using explode() - python

So I'm new to pandas and this is my first notebook. I needed to join some columns of my dataframe and after that, I wanted to separate the values so it would be better to visualize them.
To join the columns I used df['Q7'] = df[['Q7_Part_1', 'Q7_Part_2', 'Q7_Part_3', 'Q7_Part_4', 'Q7_Part_5','Q7_Part_6','Q7_OTHER']].apply(lambda x : '_'.join(x.dropna().astype(str)), axis=1) and it worked well, but I still needed to separate the values. For that I used explode() like df.Q7 = df.Q7.str.split('_').explode('Q7'), and that gave me some empty cells in the dataframe like:
[screenshot: dataframe]
When I try to visualize the values, they just come up empty, like:
[screenshot: sum of empty cells]
What could I do to keep these empty cells out of the viz?
Edit 1: By the way, they do not appear as null or NaN cells when I do df.isnull().sum() or df.isna().sum()

c = ['Q7_Part_1', 'Q7_Part_2', 'Q7_Part_3', 'Q7_Part_4', \
'Q7_Part_5','Q7_Part_6','Q7_OTHER']
df['Q7'] = df[c].apply(lambda x : '_'.join(x.astype(str)), axis=1)
I am not able to replicate your issue, but my best guess is that if you do the above, the dimensions will remain intact and you will get the string 'nan' instead of empty strings.
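Separately, if the goal is just to keep those blanks out of the counts, one option (a sketch with made-up data, not taken from the answer above, assuming the empty cells are empty strings left over from joining rows where every Q7 part was NaN) is to replace them with NA and drop them before counting:

import pandas as pd

# Rows where every Q7_Part_* column was NaN join to an empty string,
# which then explodes into an empty cell.
q7 = pd.Series(['Python_SQL', 'R', ''])
exploded = q7.str.split('_').explode()

# Replace empty strings with NA and drop them before counting for the viz.
counts = exploded.replace('', pd.NA).dropna().value_counts()
print(counts)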

Related

Python - Calculating ranks related to a dataframe column that includes blank cells

I have a Pandas dataframe and want to produce an extra column that holds the ranks of an original column. However, that column has empty cells, and the ranks for those empty cells should be empty as well.
When I use
df['RRanked'] = df['R'].rank(ascending=1,na_option='keep')
it still produces a rank for the empty cells. In this case, the empty cell will get the highest rank.
How to produce empty ranks for those empty cells?
Thx!
I would coerce the column to numeric; then you can use rank with na_option='keep', which will not rank NaNs.
r = pd.to_numeric(df.R, errors='coerce')
rnk = r.rank(na_option='keep')
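For instance, assuming the blanks are empty strings, a small sketch putting those two lines together and assigning the ranks back:

import pandas as pd

# Toy frame where the blank cells are empty strings rather than real NaN.
df = pd.DataFrame({'R': [3.2, '', 1.5, '', 2.8]})

# Coercing turns the blanks into NaN, so rank(na_option='keep') leaves them unranked.
r = pd.to_numeric(df['R'], errors='coerce')
df['RRanked'] = r.rank(ascending=True, na_option='keep')
print(df)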
Well, I solved it in a not so "clean" way. I managed to replace all those cells with NaN. Then I used the kind answer by Yefet: df['R'].apply(lambda x: pd.NA if x in ["NaN"] else x).rank(ascending=1). Afterwards, I just replaced the NaNs in the ranks with "". That works.

Replacing values in multiple columns with Pandas based on conditions

I have a very large dataframe where I only want to change the values in a small, contiguous subset of columns. Basically, in those columns the values are either integers or null. All I want is to replace the 0s and nulls with 'No' and everything else with 'Yes', only in those columns.
In R, this can be done basically with a one liner:
df <- df %>%
  mutate_at(vars(MCI:BNP), ~factor(case_when(. > 0 ~ 'Yes',
                                             TRUE ~ 'No')))
But we're working in Python and I can't quite figure out the equivalent using Pandas. I've been messing around with loc and iloc, which work fine when changing a single column, but I must be missing something when it comes to modifying multiple columns. The answers I've found on other Stack Overflow questions have mostly been about changing the value in a single column based on some set of conditions.
col1 = df.columns.get_loc("MCI")
col2 = df.columns.get_loc("BNP")
df.iloc[:,col1:col2]
Will get me the columns I want, but trying to call loc doesn't work with multidimensional keys. I even tried it with the columns as a list instead of by integer index, by creating an extra variable
binary_var = ['MCI','PVD','CVA','DEMENTIA','CPD','RD','PUD','MLD','DWOC','DWC','HoP','RND','MALIGNANCY','SLD','MST','HIV','AKF',
'ARMD','ASPHY','DEP','DWLK','DRUGA','DUOULC','FALL','FECAL','FLDELEX','FRAIL','GASTRICULC','GASTROULC','GLAU','HYPERKAL',
'HYPTEN','HYPOKAL','HYPOTHYR','HYPOXE','IMMUNOS','ISCHRT','LIPIDMETA','LOSWIGT','LOWBAK','MALNUT','OSTEO','PARKIN',
'PNEUM','RF','SEIZ','SD','TUML','UI','VI','MENTAL','FUROSEMIDE','METOPROLOL','ASPIRIN','OMEPRAZOLE','LISINOPRIL','DIGOXIN',
'ALDOSTERONE_ANTAGONIST','ACE_INHIBITOR','ANGIOTENSIN_RECEPTOR_BLOCKERS','BETA_BLOCKERSDIURETICHoP','BUN','CREATININE',
'SODIUM','POTASSIUM','HEMOGLOBIN','WBC_COUNT','CHLORIDE','ALBUMIN','TROPONIN','BNP']
df.loc[df[binary_var] == 0, binary_var]
But then it just can't find the index for those column names at all. I think Pandas also has problems converting columns that were originally integers into No/Yes. I don't need to do this in place; I'm probably just missing something simple that pandas has built in.
In a very pseudo-code description, all I really want is this:
if (df.iloc[:, col1:col2] == 0 || df.iloc[:, col1:col2].isnull())
    df ONLY in that subset of columns = 'No'
else
    df ONLY in that subset of columns = 'Yes'
Use:
df.loc[:, 'MCI':'BNP'] = np.where(df.loc[:, 'MCI':'BNP'] > 0, 'Yes', 'No')
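As a self-contained illustration of that one-liner (a sketch on a toy frame; 'MCI', 'PVD', and 'BNP' stand in for the real columns, np.where needs numpy imported, and newer pandas may warn about writing strings into numeric columns):

import numpy as np
import pandas as pd

# Small frame standing in for the real data; 'MCI':'BNP' bounds the slice of
# columns to convert, as in the question.
df = pd.DataFrame({'ID':  [1, 2, 3],
                   'MCI': [0, 2, np.nan],
                   'PVD': [1, np.nan, 0],
                   'BNP': [np.nan, 3, 1]})

# NaN > 0 evaluates to False, so both 0 and null become 'No'.
df.loc[:, 'MCI':'BNP'] = np.where(df.loc[:, 'MCI':'BNP'] > 0, 'Yes', 'No')
print(df)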

Remove all rows containing a piece of a string in multiple columns in pandas

I have a very large dataframe with many columns. I want to check all the columns and remove any row containing any instance of the string 'MU'. Some columns have 'MU#1' or 'MU#2', and they will sometimes switch places (e.g. 'MU#1' might be in column 1 at index 0 and 'MU#2' in column 1 at index 1). Initially, I tried removing them with this, but it becomes far too cumbersome if I try to do it for both strings above:
df_slice = df[(df.phase_2 != 'MU#1') & (df.phase_3 != 'MU#1') & (df.phase_1 != 'MU#1') & (df.phase_4 != 'MU#1') ]
This may work, but I have to repeat this slice a few times with other dataframes and I imagine there is a much simpler route. I also have more columns than what is shown above, but that is just a snippet.
Simply put, all columns need to be checked for 'MU' and the rows with 'MU' need to be removed. Thanks!
You could also try .str.contains() and apply it to the dataframe. This avoids hardcoding the columns, just in case:
df[df.apply(lambda x: (~x.str.contains('MU', case=True, regex=True)))].dropna()
or
df[~df.stack().str.contains('MU').any(level=0)]
How it works
Option 1
When used in df.apply(), x stands for each column of the dataframe in turn.
x.str.contains('MU', case=True, regex=True) flags the values in that column that contain 'MU' (case sensitive, interpreted as a regular expression).
~ reverses the mask, hence you end up with rows that do not have 'MU'.
The resulting dataframe returns NaN where the condition is not met; .dropna() then eliminates the rows with NaN.
Option 2
df.stack()  # stacks the dataframe into a Series indexed by (row, column)
df.stack().str.contains('MU')  # boolean mask selecting values containing the string 'MU'
df.stack().str.contains('MU').any(level=0)  # collapses the mask to one boolean per original row index
~df.stack().str.contains('MU').any(level=0)  # reverses the selection, keeping only rows without the string 'MU'
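As a runnable sketch of Option 2 with made-up data (note that Series.any(level=0) has since been removed from pandas, so the equivalent groupby(level=0).any() is used here instead):

import pandas as pd

# Toy frame; 'MU#1'/'MU#2' can show up in any of the phase columns.
df = pd.DataFrame({'phase_1': ['MU#1', 'A', 'B'],
                   'phase_2': ['C', 'MU#2', 'D'],
                   'phase_3': ['E', 'F', 'G']})

# Stack into one long Series, flag values containing 'MU', then collapse the
# flags back to one boolean per original row and keep the rows without 'MU'.
mask = df.stack().str.contains('MU').groupby(level=0).any()
print(df[~mask])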
What we do with all():
df = df[df[['phase_1','phase_2','phase_3','phase_4']].ne('MU#1').all(1)]
Update
df = df[(~df[['phase_1','phase_2','phase_3','phase_4']].isin(['MU#1','MU#2'])).all(1)]
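For illustration, a small sketch of that isin() approach on made-up data:

import pandas as pd

df = pd.DataFrame({'phase_1': ['MU#1', 'A', 'B'],
                   'phase_2': ['C', 'MU#2', 'D'],
                   'phase_3': ['E', 'F', 'G'],
                   'phase_4': ['H', 'I', 'J']})

cols = ['phase_1', 'phase_2', 'phase_3', 'phase_4']

# Keep only rows where none of the listed columns hold 'MU#1' or 'MU#2'.
df = df[(~df[cols].isin(['MU#1', 'MU#2'])).all(axis=1)]
print(df)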
This works fine for me.
df[~df.stack().str.contains('Any String').any(level=0)]
Even when searching for a specific string in the dataframe:
df[df.stack().str.contains('Any String').any(level=0)]
Thanks.

Is there a function that can remove multiple rows based on multiple specific column values in a pandas dataframe?

I have a particular Pandas dataframe that has multiple different string categories in a particular column, 'A'. I want to create a new dataframe containing only the rows that fall into 7 specific categories out of the roughly 15 in column A.
I know that I can individually remove/add categories using:
df1 = df[df.Category != 'a']
but I also tried using a list to try and do it in a single line, like such:
df1 = df[df.Category = ['x','y','z']]
but that gave me a syntax error. Is there any way to perform this function?
try:
df1 = df[df.Category.isin(['x','y','z'])]
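For example, a minimal sketch on made-up data:

import pandas as pd

# Hypothetical frame with a Category column holding several string categories.
df = pd.DataFrame({'Category': ['x', 'a', 'y', 'b', 'z'],
                   'Value':    [1, 2, 3, 4, 5]})

# Keep only the rows whose Category is one of the wanted values.
df1 = df[df.Category.isin(['x', 'y', 'z'])]
print(df1)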

Why does using the split command delete all data from the dataframe?

From an itertuples loop, I read in some row values, converted them into Series data, changed the dtype to string with astype, and used concat to add them to dff, as shown below.
In [24]: dff
Out[24]:
    SRD                     Aspectno
0  9450           [9450.01, 9450.02]
1  9880  [9880.01, 9880.02, 9880.03]
When I apply the following command, it strips out all the data. I have used the split command before; it may have something to do with the square brackets, but using str.strip or str(0) also removes all the data.
In [25]: splitdff = dff['Aspectno'].str.split(',', expand=True)

In [26]: splitdff
Out[26]:
     0
0  NaN
1  NaN
What am I doing wrong?
Also, after converting the data read from the rows, how do I get the data in row 0 to be shifted to the left, i.e. [9450.01, 9450.02] shifted over to the left by one column?
It looks like you're trying to split a list on a comma, but that's a method intended for strings. Try this to break the values out into their own columns:
import pandas as pd
...
dff['Aspectno'].apply(pd.Series)
It will give you a DataFrame with the entries in columns. The lists are different lengths, so there will be a number of columns equal to the length of the longest list. If you know that length you could do this:
dff[['col1','col2','col3']] = dff['Aspectno'].apply(pd.Series)
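For illustration, a minimal sketch with made-up data showing how apply(pd.Series) expands the list column (the resulting column names are just the defaults 0, 1, 2):

import pandas as pd

dff = pd.DataFrame({'SRD': [9450, 9880],
                    'Aspectno': [[9450.01, 9450.02],
                                 [9880.01, 9880.02, 9880.03]]})

# Each list element becomes its own column; shorter lists are padded with NaN.
expanded = dff['Aspectno'].apply(pd.Series)
print(expanded)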
The code dff['Aspectno'] selects the series Aspectno, whose values are lists like [9450.01, 9450.02], and splitting on the character ',' doesn't do anything, as the values are lists rather than strings containing commas.
