Quickly Groupby Large DataFrame in Python
I have a DataFrame of 100M rows and 8 columns. I'm trying to group it by 5 string columns and perform the following calculations as fast as possible.
df.groupby(['A','B','C','D','E'])['F'].transform('median')
df.groupby(['A','B','C','D','E']).agg({'F':'count', 'G':['mean','median','std'], 'H':['mean','std']})
I assume doing it with NumPy arrays would be fastest, but I don't even know where to begin, since it takes a couple of minutes just to convert a column to a NumPy array.
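One approach worth noting here (a hedged sketch, not an answer from the thread): casting the five string key columns to pandas categorical dtype usually speeds up large groupbys considerably, because grouping then operates on integer codes rather than Python strings. Column names A-H and the name df are taken from the question.

import pandas as pd

# Cast the string keys to categoricals once up front; grouping then
# hashes integer codes instead of strings.
keys = ['A', 'B', 'C', 'D', 'E']
for col in keys:
    df[col] = df[col].astype('category')

# observed=True restricts the result to key combinations that actually
# occur, instead of the cartesian product of all category levels.
df['F_median'] = df.groupby(keys, observed=True)['F'].transform('median')
stats = df.groupby(keys, observed=True).agg(
    {'F': 'count', 'G': ['mean', 'median', 'std'], 'H': ['mean', 'std']})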
Related
Convert a Numpy array into a Pandas DataFrame
I have a Pandas DataFrame (dataset, 889x4) and a NumPy ndarray (targets_one_hot, 889x29), which I want to concatenate. Therefore, I want to convert targets_one_hot into a Pandas DataFrame. To do so, I looked at several suggestions. However, these suggestions are for smaller arrays, where it is okay to write out the individual columns. For 29 columns, this seems inefficient. Can anyone tell me an efficient way to turn this NumPy array into a Pandas DataFrame?
We can wrap a NumPy array in a pandas DataFrame by passing it as the first parameter. Then we can make use of pd.concat(..) [pandas-doc] to concatenate the original dataset and the DataFrame of targets_one_hot into a new DataFrame. Since we here concatenate "horizontally" (side by side, column-wise), we need to set the axis parameter to axis=1:
pd.concat((dataset, pd.DataFrame(targets_one_hot)), axis=1)
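For completeness, a minimal runnable sketch of the answer above, using random stand-ins for dataset and targets_one_hot (the column names 'a'-'d' are made up). One caveat worth hedging on: pd.concat aligns on the index, so if dataset carries a non-default index the one-liner can produce NaN-padded rows; resetting the index first avoids that.

import numpy as np
import pandas as pd

# Stand-ins with the shapes from the question (889x4 and 889x29).
dataset = pd.DataFrame(np.random.rand(889, 4), columns=['a', 'b', 'c', 'd'])
targets_one_hot = np.random.randint(0, 2, size=(889, 29))

# Reset the index so both frames align on 0..888 before concatenating
# column-wise (axis=1).
combined = pd.concat(
    (dataset.reset_index(drop=True), pd.DataFrame(targets_one_hot)), axis=1)
assert combined.shape == (889, 33)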
Merge multiple int columns/rows into one numpy array (pandas dataframe)
I have a pandas dataframe with a few columns and rows. I want to merge the columns into one and then merge the rows based on id and date into one. Currently I am doing so by:

df['matrix'] = df[['col1','col2','col3','col4','col5','col6','col7','col8','col9','col10','col11','col12','col13','col14','col15','col16','col17','col18','col19','col20','col21','col22','col23','col24','col25','col26','col27','col28','col29','col30','col31','col32','col33','col34','col35','col36','col37','col38','col39','col40','col41','col42','col43','col44','col45','col46','col47','col48']].values.tolist()
df = df.groupby(['id','date'])['matrix'].apply(list).reset_index(name='matrix')

This gives me the matrix in the form of a list. Later I convert it into a numpy.ndarray using:

df['matrix'] = df['matrix'].apply(np.array)

This is a small segment of my dataset for reference:

id,date,col0,col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15,col16,col17,col18,col19,col20,col21,col22,col23,col24,col25,col26,col27,col28,col29,col30,col31,col32,col33,col34,col35,col36,col37,col38,col39,col40,col41,col42,col43,col44,col45,col46,col47,col48
16,2014-06-22,0,0,0,10,0,0,0,0,0,0,0,0,0,0,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
16,2014-06-22,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
16,2014-06-22,2,0,0,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,9,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
16,2014-06-22,3,0,0,0,0,0,0,0,0,0,0,0,10,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0
16,2014-06-22,4,0,0,0,0,0,0,0,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,22,0,0,0,0

Though the above code works fine for small datasets, it sometimes crashes for larger ones -- specifically the df['matrix'].apply(np.array) statement. Is there a way to perform the merging that directly gives me a numpy.array? That would save a lot of time.
No need to merge the columns first. Split the DataFrame using groupby and then flatten the result:

matrix = df.set_index(['id','date']).groupby(['id','date']).apply(lambda x: x.values.flatten())
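A minimal runnable sketch of that answer, with a cut-down stand-in for the sample data above (two value columns instead of 49):

import numpy as np
import pandas as pd

# Tiny stand-in for the asker's data.
df = pd.DataFrame({
    'id':   [16, 16, 17],
    'date': ['2014-06-22', '2014-06-22', '2014-06-23'],
    'col0': [0, 1, 2],
    'col1': [10, 0, 5],
})

# Each group's value columns are flattened into a single 1-D numpy
# array, skipping the intermediate list-of-lists entirely.
matrix = (df.set_index(['id', 'date'])
            .groupby(['id', 'date'])
            .apply(lambda x: x.values.flatten()))
print(matrix.loc[(16, '2014-06-22')])   # -> [ 0 10  1  0]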
Empty values in pandas -- most memory-efficient way to filter out empty values for some columns but keep empty values for one column?
Using Python, I have a large file (millions of rows) that I am reading in with Pandas using pd.read_csv. My goal is to minimize the amount of memory I use as much as possible. Of the roughly 15 columns in the file, I only want to keep 6. Of those 6 columns, I have different needs for the empty rows: for 5 of the columns, I'd like to filter out / ignore all of the empty rows, but for 1 of the columns, I need to keep only the empty rows. What is the most memory-efficient way to do this? I guess I have two problems. First, looking at the documentation for Pandas read_csv, it's not clear to me whether there is a way to filter out empty rows. Is there a set of parameters for read_csv -- or some other method -- that I can use to filter out empty rows? Second, is it possible to filter out empty rows for only some columns while keeping all of the empty rows for one of my columns?
I would advise you to use dask.dataframe. The syntax is pandas-like, but it handles chunking and optimal memory management for you. Only when you need the result in memory should you translate the dataframe back to pandas, at which point you will of course need sufficient memory to hold the result.

import dask.dataframe as dd

df = dd.read_csv('file.csv')
# filtering and manipulation logic
df = df.loc[....., ....]
# compute & return to pandas
df_pandas = df.compute()
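As a hedged illustration of what the "filtering and manipulation logic" might look like (the column names 'a'-'f' are hypothetical, since the question doesn't give them): load only the 6 needed columns with usecols, drop rows that are empty in five of them, then keep only the rows where the sixth is empty.

import dask.dataframe as dd

keep = ['a', 'b', 'c', 'd', 'e', 'f']       # hypothetical column names
df = dd.read_csv('file.csv', usecols=keep)  # read only the 6 needed columns

# Drop rows with an empty value in any of the first five columns,
# then keep only the rows where column 'f' IS empty.
df = df.dropna(subset=['a', 'b', 'c', 'd', 'e'])
df = df[df['f'].isna()]

df_pandas = df.compute()                    # materialize in memory at the end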
pandas: Select one-row data frame instead of series [duplicate]
I have a huge dataframe, and I index it like so: df.ix[<integer>]. Depending on the index, sometimes this will have only one row of values. Pandas automatically converts this to a Series, which, quite frankly, is annoying because I can't operate on it the same way I can on a df. How do I either 1) stop pandas from converting and keep it as a dataframe, or 2) easily convert the resulting series back to a dataframe? pd.DataFrame(df.ix[<integer>]) does not work because it doesn't keep the original columns: it treats the <integer> as the column, and the columns as indices. Much appreciated.
You can do df.ix[[n]] to get a one-row dataframe of row n.
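A side note for current pandas (this answer predates the removal of .ix in pandas 1.0): the same list-indexing trick works with .iloc (positional) or .loc (label-based).

# df.ix is gone in modern pandas; a list selector with .iloc/.loc
# keeps the result a one-row DataFrame instead of a Series.
row_series = df.iloc[n]     # returns a Series
row_frame = df.iloc[[n]]    # returns a 1-row DataFrame, columns preserved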