I have the following dataframe:
>>> data = pd.DataFrame({'Name': ['CTA15', 'CTA15', 'AC007', 'AC007', 'AC007'],
'ID': [22, 22, 2, 2, 2],
'Sample':['PE12', 'PL14', 'AE29', 'AE04', 'PE03'],
'count_col' : [2, 2, 3, 3, 3]})
>>> data
ID Name Sample count_col
0 22 CTA15 PE12 2
1 22 CTA15 PL14 2
2 2 AC007 AE29 3
3 2 AC007 AE04 3
4 2 AC007 PE03 3
I need to rearrange my data frame as follows:
Name Sample count_col
CTA15 PE12 2
PL14
AC007 AE04 3
AE29
PE03
What I tried is:
pd.pivot_table(All_variants_REL,index=["Name",'Sample'],
values=['Count'],aggfunc={'Name':np.size})
But it is not showing an accurate count in the count column.
Any help would be great.
It seems you need mask + astype with a boolean mask created by duplicated:
Note: I cast to str because otherwise the count column ends up with mixed values (strings with ints), and some pandas functions can break on that.
Note also that this solution works only if the values in the Name column are sorted (or at least grouped together); if they are not, see the sketch after the output below.
cols = ['Name','count_col']
df[cols] = df[cols].astype(str).mask(df.duplicated(['Name']), '')
print (df)
Name ID Sample count_col
0 CTA15 22 PE12 2
1 22 PL14
2 AC007 2 AE29 3
3 2 AE04
4 2 PE03
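If the Name values are not already grouped together, one possibility (a sketch, not part of the original answer) is a stable sort before masking:
df = df.sort_values('Name', kind='mergesort')  # mergesort is stable, so it keeps the original order within each Name
df[cols] = df[cols].astype(str).mask(df.duplicated(['Name']), '')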
If you need NaN instead, simply omit the astype(str) and the '' replacement value, but then the last column's values are converted to float (because NaN is a float):
cols = ['Name','count_col']
df[cols] = df[cols].mask(df.duplicated(['Name']))
print (df)
Name ID Sample count_col
0 CTA15 22 PE12 2.0
1 NaN 22 PL14 NaN
2 AC007 2 AE29 3.0
3 NaN 2 AE04 NaN
4 NaN 2 PE03 NaN
If you want the Sample values collected into lists, it is possible to use:
cols = ['Name','count_col', 'ID']
df = df.groupby(cols)['Sample'].apply(list).reset_index()
print (df)
Name count_col ID Sample
0 AC007 3 2 [AE29, AE04, PE03]
1 CTA15 2 22 [PE12, PL14]
Why not simply set a MultiIndex? That way all columns still show, which matters if you have many more columns than in the example DataFrame.
>>> data = pd.DataFrame({'Name': ['CTA15', 'CTA15', 'AC007', 'AC007', 'AC007'],
'ID': [22, 22, 2, 2, 2],
'Sample':['PE12', 'PL14', 'AE29', 'AE04', 'PE03'],
'count_col' : [2, 2, 3, 3, 3]})
(Side note: I wouldn't recommend having a column with the name count as it is a DataFrame method and will cause issues down the road. For example, data.count does not return a Series as we might expect.)
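A quick illustration of that pitfall, on a tiny made-up frame (the name tmp is only for this example):
>>> tmp = pd.DataFrame({'count': [1, 2]})
>>> tmp.count
<bound method DataFrame.count of ...>
>>> tmp['count']
0    1
1    2
Name: count, dtype: int64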
>>> data
ID Name Sample count_col
0 22 CTA15 PE12 2
1 22 CTA15 PL14 2
2 2 AC007 AE29 3
3 2 AC007 AE04 3
4 2 AC007 PE03 3
Set the MultiIndex, which will serve as a solution for an arbitrarily large DataFrame.
>>> data.set_index(['Name', 'Sample'])
ID count_col
Name Sample
CTA15 PE12 22 2
PL14 22 2
AC007 AE29 2 3
AE04 2 3
PE03 2 3
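If count_col were not already present and you wanted to derive it (which is what the asker's pivot_table attempt seems to be after), a groupby transform is one option (a sketch, not part of this answer):
>>> data['count_col'] = data.groupby('Name')['Sample'].transform('size')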
Related
I tried to create a data frame df using the code below:
import numpy as np
import pandas as pd
index = [0,1,2,3,4,5]
s = pd.Series([1,2,3,4,5,6],index= index)
t = pd.Series([2,4,6,8,10,12],index= index)
df = pd.DataFrame(s,columns = ["MUL1"])
df["MUL2"] =t
print(df)
MUL1 MUL2
0 1 2
1 2 4
2 3 6
3 4 8
4 5 10
5 6 12
While trying to create the same data frame using the syntax below, I am getting a weird output.
df = pd.DataFrame([s,t],columns = ["MUL1","MUL2"])
print(df)
MUL1 MUL2
0 NaN NaN
1 NaN NaN
Please explain why NaN is being displayed in the dataframe when both Series are non-empty, and why only two rows are displayed and not the rest.
Also, please provide the correct way to create the same data frame as above using the columns argument of the pandas DataFrame constructor.
One of the correct ways would be to stack the array data from the input list holding those series into columns -
In [161]: pd.DataFrame(np.c_[s,t],columns = ["MUL1","MUL2"])
Out[161]:
MUL1 MUL2
0 1 2
1 2 4
2 3 6
3 4 8
4 5 10
5 6 12
Behind the scenes, the stacking creates a 2D array, which is then converted to a dataframe. Here's what the stacked array looks like -
In [162]: np.c_[s,t]
Out[162]:
array([[ 1, 2],
[ 2, 4],
[ 3, 6],
[ 4, 8],
[ 5, 10],
[ 6, 12]])
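As an aside, np.c_ with 1D inputs is just column stacking, so an equivalent (arguably more readable) spelling would be:
pd.DataFrame(np.column_stack([s, t]), columns=["MUL1", "MUL2"])  # same result as np.c_[s, t]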
If you remove the columns argument, you get:
df = pd.DataFrame([s,t])
print (df)
0 1 2 3 4 5
0 1 2 3 4 5 6
1 2 4 6 8 10 12
If you then define columns, any column that does not already exist (here MUL2) becomes a NaN column:
df = pd.DataFrame([s,t], columns=[0,'MUL2'])
print (df)
0 MUL2
0 1.0 NaN
1 2.0 NaN
It is better to use a dictionary:
df = pd.DataFrame({'MUL1':s,'MUL2':t})
print (df)
MUL1 MUL2
0 1 2
1 2 4
2 3 6
3 4 8
4 5 10
5 6 12
And if you need to change the column order, add the columns parameter:
df = pd.DataFrame({'MUL1':s,'MUL2':t}, columns=['MUL2','MUL1'])
print (df)
MUL2 MUL1
0 2 1
1 4 2
2 6 3
3 8 4
4 10 5
5 12 6
More information is in the DataFrame documentation.
Another solution uses concat, so the DataFrame constructor is not necessary:
df = pd.concat([s,t], axis=1, keys=['MUL1','MUL2'])
print (df)
MUL1 MUL2
0 1 2
1 2 4
2 3 6
3 4 8
4 5 10
5 6 12
A pandas.DataFrame takes in the parameter data that can be of type ndarray, iterable, dict, or dataframe.
If you pass in a list it will assume each member is a row. Example:
a = [1,2,3]
b = [2,4,6]
df = pd.DataFrame([a, b], columns = ["Col1","Col2", "Col3"])
# output 1:
Col1 Col2 Col3
0 1 2 3
1 2 4 6
You are getting NaN because, when you pass a list of Series, each Series becomes a row whose columns come from the Series index (0 through 5); the requested columns MUL1 and MUL2 do not exist there, so they are filled with NaN.
To get the shape you want, first transpose the data:
data = np.array([a, b]).transpose()
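From there, the transposed array can be passed straight to the constructor (a sketch reusing the Col1/Col2 names from above):
df = pd.DataFrame(data, columns=["Col1", "Col2"])
#    Col1  Col2
# 0     1     2
# 1     2     4
# 2     3     6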
import pandas as pd
a = [1,2,3]
b = [2,4,6]
df = pd.DataFrame(dict(Col1=a, Col2=b))
Output:
Col1 Col2
0 1 2
1 2 4
2 3 6
I have data in the following format: Table 1
This data is loaded into a pandas dataframe, with the date column as the index. How can I rearrange it so that the names become the column headings (they must be unique) and the values line up with the right dates?
So it would look something like this:
Table 2
Consider the following toy DataFrame:
>>> df = pd.DataFrame({'x': [1,2,3,4], 'y':['0 a','2 a','3 b','0 b']})
>>> df
x y
0 1 0 a
1 2 2 a
2 3 3 b
3 4 0 b
Start by processing each row into a Series:
>>> new_columns = df['y'].apply(lambda x: pd.Series(dict([reversed(x.split())])))
>>> new_columns
a b
0 0 NaN
1 2 NaN
2 NaN 3
3 NaN 0
Alternatively, new columns can be generated using pivot (the effect is the same):
>>> new_columns = df['y'].str.split(n=1, expand=True).pivot(columns=1, values=0)
Finally, concatenate the original and the new DataFrame objects:
>>> df = pd.concat([df, new_columns], axis=1)
>>> df
x y a b
0 1 0 a 0 NaN
1 2 2 a 2 NaN
2 3 3 b NaN 3
3 4 0 b NaN 0
Drop any columns that you don't require:
>>> df.drop(['y'], axis=1)
x a b
0 1 0 NaN
1 2 2 NaN
2 3 NaN 3
3 4 NaN 0
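Note that drop returns a new DataFrame rather than modifying df in place, so assign the result back if you want to keep it:
>>> df = df.drop(['y'], axis=1)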
You will need to split out the column’s values, then rename your dataframe’s columns, and then you can pivot() the dataframe. I have added the steps below:
df = df[0].str.split(' ', expand=True)    # assumes you only have the one column
df.columns = ['col_name', 'values']       # use whatever naming convention you like
df.pivot(columns='col_name', values='values')
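As a concrete sketch, using a small made-up single-column frame of 'name value' strings (the data and names here are only illustrative):
df = pd.DataFrame({0: ['foo 1', 'bar 2', 'foo 3']})
df = df[0].str.split(' ', expand=True)
df.columns = ['col_name', 'values']
print(df.pivot(columns='col_name', values='values'))
# col_name  bar  foo
# 0         NaN    1
# 1           2  NaN
# 2         NaN    3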
Please let me know if this helps.
df_new = pd.DataFrame(
    {
        'person_id': [1, 1, 3, 3, 5, 5],
        'obs_date': ['12/31/2007', 'NA-NA-NA NA:NA:NA', 'NA-NA-NA NA:NA:NA', '11/25/2009', '10/15/2019', 'NA-NA-NA NA:NA:NA']
    })
The frame mixes real dates with 'NA-NA-NA NA:NA:NA' placeholder strings.
What I would like to do is replace/fill the NA-type rows with actual date values from the same group, for which I tried the below:
m1 = df_new['obs_date'].str.contains(r'^\d')
df_new['obs_date'] = df_new.groupby((m1).cumsum())['obs_date'].transform('first')
But this gives an unexpected output: for the first row of person_id = 3, the value should have been 11/25/2009 from that group, but instead it comes from the first group (person_id = 1).
How can I get the expected output, where each person's placeholder rows are filled with that person's actual date?
Any elegant and efficient solution would be helpful, as I am dealing with more than a million records.
First use to_datetime with errors='coerce' to convert the non-datetimes to missing values, then use GroupBy.transform with 'first' to fill the column with each group's first non-missing value:
df_new['obs_date'] = pd.to_datetime(df_new['obs_date'], format='%m/%d/%Y', errors='coerce')
df_new['obs_date'] = df_new.groupby('person_id')['obs_date'].transform('first')
#alternative - minimal value per group
#df_new['obs_date'] = df_new.groupby('person_id')['obs_date'].transform('min')
print (df_new)
person_id obs_date
0 1 2007-12-31
1 1 2007-12-31
2 3 2009-11-25
3 3 2009-11-25
4 5 2019-10-15
5 5 2019-10-15
Another idea is to use DataFrame.sort_values, so the real dates come before the missing values within each group, and then GroupBy.ffill:
df_new['obs_date'] = pd.to_datetime(df_new['obs_date'], format='%m/%d/%Y', errors='coerce')
df_new['obs_date'] = (df_new.sort_values(['person_id','obs_date'])
.groupby('person_id')['obs_date']
.ffill())
print (df_new)
person_id obs_date
0 1 2007-12-31
1 1 2007-12-31
2 3 2009-11-25
3 3 2009-11-25
4 5 2019-10-15
5 5 2019-10-15
You can use pd.to_datetime(..., errors='coerce') to turn the non-date values into NaT, and then ffill and bfill within each group after a groupby:
df_new['obs_date'] = (df_new.assign(obs_date=pd.to_datetime(df_new['obs_date'], errors='coerce'))
                            .groupby('person_id')['obs_date']
                            .apply(lambda x: x.ffill().bfill()))
print(df_new)
person_id obs_date
0 1 2007-12-31
1 1 2007-12-31
2 3 2009-11-25
3 3 2009-11-25
4 5 2019-10-15
5 5 2019-10-15
df_new= df_new.join(df_new.groupby('person_id')["obs_date"].min(),
on='person_id',
rsuffix="_clean")
Output:
person_id obs_date obs_date_clean
0 1 12/31/2007 12/31/2007
1 1 NA-NA-NA NA:NA:NA 12/31/2007
2 3 NA-NA-NA NA:NA:NA 11/25/2009
3 3 11/25/2009 11/25/2009
4 5 10/15/2019 10/15/2019
5 5 NA-NA-NA NA:NA:NA 10/15/2019
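If you then want obs_date_clean as an actual datetime rather than the original strings, a follow-up conversion should do it (assuming the %m/%d/%Y format used in the question):
df_new['obs_date_clean'] = pd.to_datetime(df_new['obs_date_clean'], format='%m/%d/%Y')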
I have about 50 DataFrames in a list that have a form like this, where the particular dates included in each DataFrame are not necessarily the same.
>>> print(df1)
Unnamed: 0 df1_name
0 2004/04/27 2.2700
1 2004/04/28 2.2800
2 2004/04/29 2.2800
3 2004/04/30 2.2800
4 2004/05/04 2.2900
5 2004/05/05 2.3000
6 2004/05/06 2.3200
7 2004/05/07 2.3500
8 2004/05/10 2.3200
9 2004/05/11 2.3400
10 2004/05/12 2.3700
Now, I want to merge these 50 DataFrames together on the date column (unnamed first column in each DataFrame), and include all dates that are present in any of the DataFrames. Should a DataFrame not have a value for that date, it can just be NaN.
So a minimal example:
>>> print(sample1)
Unnamed: 0 sample_1
0 2004/04/27 1
1 2004/04/28 2
2 2004/04/29 3
3 2004/04/30 4
>>> print(sample2)
Unnamed: 0 sample_2
0 2004/04/28 5
1 2004/04/29 6
2 2004/05/01 7
3 2004/05/03 8
Then after the merge
>>> print(merged_df)
Unnamed: 0 sample_1 sample_2
0 2004/04/27 1 NaN
1 2004/04/28 2 5
2 2004/04/29 3 6
3 2004/04/30 4 NaN
....
Is there an easy way to make use of the merge or join functions of Pandas to accomplish this? I have gotten awfully stuck trying to determine how to combine the dates like this.
All you need to do is pd.concat all your sample dataframes. But you have to set up a couple of things first: set the index of each one to the column you want to merge on, and make sure that column is a date column. Below is an example of how to do it.
One liner
pd.concat([s.set_index('Unnamed: 0') for s in [sample1, sample2]], axis=1).rename_axis('Unnamed: 0').reset_index()
Unnamed: 0 sample_1 sample_2
0 2004/04/27 1.0 NaN
1 2004/04/28 2.0 5.0
2 2004/04/29 3.0 6.0
3 2004/04/30 4.0 NaN
4 2004/05/01 NaN 7.0
5 2004/05/03 NaN 8.0
I think this is more understandable
sample1 = pd.DataFrame([
['2004/04/27', 1],
['2004/04/28', 2],
['2004/04/29', 3],
['2004/04/30', 4],
], columns=['Unnamed: 0', 'sample_1'])
sample2 = pd.DataFrame([
['2004/04/28', 5],
['2004/04/29', 6],
['2004/05/01', 7],
['2004/05/03', 8],
], columns=['Unnamed: 0', 'sample_2'])
list_of_samples = [sample1, sample2]
for i, sample in enumerate(list_of_samples):
    s = list_of_samples[i].copy()
    cols = s.columns.tolist()
    cols[0] = 'Date'
    s.columns = cols
    s.Date = pd.to_datetime(s.Date)
    s.set_index('Date', inplace=True)
    list_of_samples[i] = s
pd.concat(list_of_samples, axis=1)
sample_1 sample_2
Date
2004-04-27 1.0 NaN
2004-04-28 2.0 5.0
2004-04-29 3.0 6.0
2004-04-30 4.0 NaN
2004-05-01 NaN 7.0
2004-05-03 NaN 8.0
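Since the question mentions merge/join specifically, an equivalent approach (a sketch over the same two sample frames; it scales to the full list of 50) is to outer-merge pairwise with functools.reduce:
from functools import reduce

# repeatedly merge on the shared date column, keeping dates present in either frame
merged_df = reduce(
    lambda left, right: pd.merge(left, right, on='Unnamed: 0', how='outer'),
    [sample1, sample2])
print(merged_df.sort_values('Unnamed: 0'))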
I have two dataframes in Pandas. The columns are named the same and they have the same dimensions, but they have different (and missing) values.
I would like to merge based on one key column and take the max or non-missing data for each equivalent row.
import pandas as pd
import numpy as np
import datetime  # needed for the datetime/timedelta calls below

df1 = pd.DataFrame({'key':[1,3,5,7], 'a':[np.NaN, 0, 5, 1], 'b':[datetime.datetime.today() - datetime.timedelta(days=x) for x in range(0,4)]})
df1
a b key
0 NaN 2014-08-01 10:37:23.828683 1
1 0 2014-07-31 10:37:23.828726 3
2 5 2014-07-30 10:37:23.828736 5
3 1 2014-07-29 10:37:23.828744 7
df2 = pd.DataFrame({'key':[1,3,5,7], 'a':[2, 0, np.NaN, 3], 'b':[datetime.datetime.today() - datetime.timedelta(days=x) for x in range(2,6)]})
df2.loc[2,'b'] = np.NaN  # .loc replaces the old .ix indexer, which has been removed from pandas
df2
a b key
0 2 2014-07-30 10:38:13.857203 1
1 0 2014-07-29 10:38:13.857253 3
2 NaN NaT 5
3 3 2014-07-27 10:38:13.857272 7
The end result would look like:
df_together
a b key
0 2 2014-07-30 10:38:13.857203 1
1 0 2014-07-29 10:38:13.857253 3
2 5 2014-07-30 10:37:23.828736 5
3 3 2014-07-27 10:38:13.857272 7
I hope my example covers all cases. If both dataframes have NaN (or NaT) values, then the result should also have NaN (or NaT) values. Try as I might, I can't get the pd.merge function to give what I want.
Often it is easiest in these circumstances to do:
df_together = pd.concat([df1, df2]).groupby('key').max()
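If you want key back as a regular column afterwards, add reset_index (a sketch); note that the groupby max skips NaN/NaT, which is how missing entries get filled from whichever frame has a value:
df_together = pd.concat([df1, df2]).groupby('key').max().reset_index()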