Groupby and reduce pandas dataframes with numpy arrays as entries - python

I have a pandas.DataFrame with the following structure:
>>> data
a b values
1 0 [1, 2, 3, 4]
2 0 [3, 4, 5, 6]
1 1 [1, 3, 7, 9]
2 1 [2, 4, 6, 8]
('values' entries have the type numpy.array). What I want to do is group the data by column 'a' and then concatenate the arrays in 'values'.
My goal is to end up with the following:
>>> data
a values
1 [1, 2, 3, 4, 1, 3, 7, 9]
2 [3, 4, 5, 6, 2, 4, 6, 8]
Note that the order of the values does not matter. How do I achieve this? I thought about something like
>>> grps = data.groupby(['a'])
>>> grps['values'].agg(np.concatenate)
but this fails with a KeyError. I'm sure there is an idiomatic pandas way to achieve this - but how?
Thanks.

Similar to John Galt's answer, you can group and then apply np.hstack:
In [278]: df.groupby('a')['values'].apply(np.hstack)
Out[278]:
a
1 [1, 2, 3, 4, 1, 3, 7, 9]
2 [3, 4, 5, 6, 2, 4, 6, 8]
Name: values, dtype: object
To get back your frame, you'll need pd.Series.to_frame and reset_index:
In [311]: df.groupby('a')['values'].apply(np.hstack).to_frame().reset_index()
Out[311]:
a values
0 1 [1, 2, 3, 4, 1, 3, 7, 9]
1 2 [3, 4, 5, 6, 2, 4, 6, 8]
Performance
df_test = pd.concat([df] * 10000) # setup
%timeit df_test.groupby('a')['values'].apply(np.hstack) # mine
1 loop, best of 3: 219 ms per loop
%timeit df_test.groupby('a')['values'].sum() # John's
1 loop, best of 3: 4.44 s per loop
sum is very inefficient for lists, and it does not work when the 'values' entries are np.arrays: adding equal-length arrays sums them element-wise instead of concatenating them.
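For completeness, here is a runnable sketch of the apply(np.hstack) approach; the frame below is a reconstruction of the question's data, with 'values' holding np.array entries:

```python
import numpy as np
import pandas as pd

# Reconstruction of the question's frame: 'values' holds np.array entries
df = pd.DataFrame({
    'a': [1, 2, 1, 2],
    'b': [0, 0, 1, 1],
    'values': [np.array([1, 2, 3, 4]), np.array([3, 4, 5, 6]),
               np.array([1, 3, 7, 9]), np.array([2, 4, 6, 8])],
})

# apply hands each group's Series of arrays to np.hstack,
# which concatenates them into one flat array per group
out = df.groupby('a')['values'].apply(np.hstack).to_frame().reset_index()
print(out)
```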

You can use sum to join lists.
In [640]: data.groupby('a')['values'].sum()
Out[640]:
a
1 [1, 2, 3, 4, 1, 3, 7, 9]
2 [3, 4, 5, 6, 2, 4, 6, 8]
Name: values, dtype: object
Or,
In [653]: data.groupby('a', as_index=False).agg({'values': 'sum'})
Out[653]:
a values
0 1 [1, 2, 3, 4, 1, 3, 7, 9]
1 2 [3, 4, 5, 6, 2, 4, 6, 8]
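A self-contained sketch of this approach, assuming the entries in 'values' are plain Python lists (sum relies on list + list concatenating):

```python
import pandas as pd

data = pd.DataFrame({
    'a': [1, 2, 1, 2],
    'values': [[1, 2, 3, 4], [3, 4, 5, 6], [1, 3, 7, 9], [2, 4, 6, 8]],
})

# sum repeatedly applies list + list, concatenating each group's lists
out = data.groupby('a', as_index=False).agg({'values': 'sum'})
print(out)
```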

R sequence function in Python

pandas version: 1.2
I am trying to take a pandas dataframe column and recreate the logic of R's sequence function, which would be
ss = sequence(df$los)
Which produces for the first two records
[1] 1 2 3 4 5 1 2 3 4 5
Example dataframe:
df = pd.DataFrame([('test', 5), ('t2', 5), ('t3', 2), ('t4', 6)],
columns=['first', 'los'])
df
first los
0 test 5
1 t2 5
2 t3 2
3 t4 6
So the first row is sequenced 1-5, the second row is sequenced 1-5, the third row is sequenced 1-2, etc. In R this becomes one sequenced list. I would like the same in Python.
What I have been able to do is:
ss = df['los']
ss.apply(lambda x: np.arange(1, x + 1))
18 [1, 2, 3, 4, 5]
90 [1, 2, 3, 4, 5]
105 [1,2]
106 [1, 2, 3, 4, 5, 6]
Which is close but then I need to combine it into a single pd.Series so that it should be:
[1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 1, 2, 3, 4, 5, 6]
Use explode():
df.los.apply(lambda x: np.arange(1, x+1)).explode().tolist()
Output:
[1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 1, 2, 3, 4, 5, 6]
Note - you can skip the ss assignment step, and use np.arange to streamline a bit.
You can just use concatenate:
np.concatenate([np.arange(x)+1 for x in df['los']])
Output:
array([1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 1, 2, 3, 4, 5, 6])
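If performance matters for long columns, here is a fully vectorized sketch (an addition, not from the original answers): build one global arange and subtract each row's starting offset, so no per-row Python loop is needed:

```python
import numpy as np

los = np.array([5, 5, 2, 6])  # the 'los' column from the example

# offsets[i] is the position where row i's run starts in the flat output
offsets = los.cumsum() - los

# global 0..N-1 counter, minus each run's start, shifted to begin at 1
seq = np.arange(los.sum()) - np.repeat(offsets, los) + 1
print(seq)
```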

Combining data contained in several lists

I am working on a personal project in Python 3.6. I used pandas to import the data from an Excel file into a dataframe and then extracted the data into several lists.
Now, I will give an example to illustrate exactly what I am trying to achieve.
So I have, let's say, 3 input lists a, b and c (I inserted the index row and some additional whitespace so it is easier to follow):
0 1 2 3 4 5 6
a=[1, 5, 6, [10,12,13], 1, [5,3] ,7]
b=[3, [1,2], 3, [5,6], [1,3], [5,6], 9]
c=[1, 0 , 4, [1,2], 2 , 8 , 9]
I am trying to combine the data in order to get all the combinations whenever one of the lists holds a sub-list with multiple elements at that index. So the output needs to be like this:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
a=[1, 5, 5, 6, 10,10,10, 10, 12, 12, 12, 12, 13, 13, 13, 13, 1, 1, 5, 5, 3, 3, 7]
b=[3, 1, 2, 3, 5, 5, 6, 6, 5, 5, 6, 6, 5, 5, 6, 6, 1, 3, 5, 6, 5, 6, 9]
c=[1, 0, 0, 4, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 2, 2, 8, 8, 8, 8, 9]
To make this more clear:
From the original lists if we look at index 1 elements:
a[1]=5, b[1]=[1,2], c[1]=0. These got transformed to the following values on the 1 and 2 index positions: a[1:3]=[ 5, 5 ]; b[1:3]=[1, 2]; c[1:3]=[ 0, 0]
This needs to be applied also to index 3, 4, and 5 in the original input lists in order to obtain something similar to the example output above.
I want to be able to generalize this to more lists (a, b, c, ..., n). I have managed this for two lists, but only in an inelegant, decidedly un-pythonic way. Also, I don't think the code I wrote can be generalized to more lists.
I am looking for some help, at least some pointers to some reading material that can help me achieve what I presented above.
Thank you!
You could do something like this. It looks at each column, works out the combinations, then outputs the lists:
import pandas as pd
import numpy

a = [1, 5, 6, [10, 12, 13], 1, [5, 3], 7]
b = [3, [1, 2], 3, [5, 6], [1, 3], [5, 6], 9]
c = [1, 0, 4, [1, 2], 2, 8, 9]

df = pd.DataFrame([a, b, c])
final_df = pd.DataFrame()
i = 0
for col in df.columns:
    temp_df = pd.DataFrame(df[col])
    get_combo = []
    for idx, row in temp_df.iterrows():
        get_combo.append([row[i]])
    combo_list = [list(x) for x in
                  numpy.array(numpy.meshgrid(*get_combo)).T.reshape(-1, len(get_combo))]
    temp_df_alpha = pd.DataFrame(combo_list).T
    i += 1
    if len(final_df) == 0:
        final_df = temp_df_alpha
    else:
        final_df = pd.concat([final_df, temp_df_alpha], axis=1, sort=False)

for idx, row in final_df.iterrows():
    print(row.tolist())
Output:
[1, 5, 5, 6, 10, 10, 12, 12, 13, 13, 10, 10, 12, 12, 13, 13, 1, 1, 5, 5, 3, 3, 7]
[3, 1, 2, 3, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 1, 3, 5, 6, 5, 6, 9]
[1, 0, 0, 4, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 8, 8, 8, 8, 9]
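One way to generalize to any number of lists is itertools.product, taking the Cartesian product row by row. A sketch (the expand helper below is hypothetical, not from the original post):

```python
import itertools

def expand(*lists):
    """Expand parallel lists, taking the Cartesian product at every
    index where an element is itself a list."""
    columns = [[] for _ in lists]
    for row in zip(*lists):
        # wrap scalars so itertools.product treats every cell uniformly
        cells = [x if isinstance(x, list) else [x] for x in row]
        for combo in itertools.product(*cells):
            for out, value in zip(columns, combo):
                out.append(value)
    return columns

a = [1, 5, 6, [10, 12, 13], 1, [5, 3], 7]
b = [3, [1, 2], 3, [5, 6], [1, 3], [5, 6], 9]
c = [1, 0, 4, [1, 2], 2, 8, 9]
aa, bb, cc = expand(a, b, c)
```

This yields exactly the ordering shown in the desired output above, and works unchanged for any number of input lists.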

How do I find the maximum value in an array within a dataframe column?

I have a dataframe (df) that looks like this:
a b
loc.1 [1, 2, 3, 4, 7, 5, 6]
loc.2 [3, 4, 3, 7, 7, 8, 6]
loc.3 [1, 4, 3, 1, 7, 8, 6]
...
I want to find the maximum of the array in column b and append it to the original data frame. My thought was something like this:
for line in df:
    split = map(float, b.split(','))
    count_max = max(split)
    print count
Ideal output should be:
a b max_val
loc.1 [1, 2, 3, 4, 7, 5, 6] 7
loc.2 [3, 4, 3, 7, 7, 8, 6] 8
loc.3 [1, 4, 3, 1, 7, 8, 6] 8
...
But this does not work, as I cannot use b.split as it is not defined...
If you are working with lists without NaNs, the best option is max in a list comprehension or map:
a['max'] = [max(x) for x in a['b']]
a['max'] = list(map(max, a['b']))
Pure pandas solution:
a['max'] = pd.DataFrame(a['b'].values.tolist()).max(axis=1)
Sample:
array = {'loc.1': np.array([1, 2, 3, 4, 7, 5, 6]),
         'loc.2': np.array([3, 4, 3, 7, 7, 8, 6]),
         'loc.3': np.array([1, 4, 3, 1, 7, 8, 6])}
L = [(k, v) for k, v in array.items()]
a = pd.DataFrame(L, columns=['a','b']).set_index('a')
a['max'] = [max(x) for x in a['b']]
print(a)
b max
a
loc.1 [1, 2, 3, 4, 7, 5, 6] 7
loc.2 [3, 4, 3, 7, 7, 8, 6] 8
loc.3 [1, 4, 3, 1, 7, 8, 6] 8
EDIT:
You can also get max in list comprehension:
L = [(k, v, max(v)) for k, v in array.items()]
a = pd.DataFrame(L, columns=['a','b', 'max']).set_index('a')
print(a)
b max
a
loc.1 [1, 2, 3, 4, 7, 5, 6] 7
loc.2 [3, 4, 3, 7, 7, 8, 6] 8
loc.3 [1, 4, 3, 1, 7, 8, 6] 8
Try this:
df["max_val"] = df["b"].apply(max)
You can use numpy arrays for a vectorised calculation:
df = pd.DataFrame({'a': ['loc.1', 'loc.2', 'loc.3'],
                   'b': [[1, 2, 3, 4, 7, 5, 6],
                         [3, 4, 3, 7, 7, 8, 6],
                         [1, 4, 3, 1, 7, 8, 6]]})
df['maxval'] = np.array(df['b'].values.tolist()).max(axis=1)
print(df)
# a b maxval
# 0 loc.1 [1, 2, 3, 4, 7, 5, 6] 7
# 1 loc.2 [3, 4, 3, 7, 7, 8, 6] 8
# 2 loc.3 [1, 4, 3, 1, 7, 8, 6] 8
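One caveat on the vectorized version (an added note, not from the original answers): np.array(df['b'].values.tolist()) only forms a proper 2-D array when every list has the same length. For ragged lists, the apply route still works:

```python
import pandas as pd

df = pd.DataFrame({'a': ['loc.1', 'loc.2'],
                   'b': [[1, 2, 9], [3, 4]]})  # ragged lengths

# max is applied row by row, so each list's length can differ
df['max_val'] = df['b'].apply(max)
print(df)
```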

How to join 2 columns in numpy when they are lists of lists?

Dataframe is:
date ids_x ids_y
0 2011-04-23 [0, 1, 2, 10, 11, 12, 13] []
1 2011-04-24 [0, 1, 2, 10, 11, 12, 13] [12,4]
2 2011-04-25 [0, 1, 2, 3, 4, 1, 12] []
3 2011-04-26 [0, 1, 2, 3, 4, 5, 6] [4,5,6]
The convenient, but slow, way is to use:
df['ids'] = df['ids_x'] + df['ids_y']
I want to achieve this with numpy; for now it is very slow (4 seconds). As pandas uses numpy under the hood, I think I should use numpy directly, without pandas, to reduce the overhead.
I tried column_stack, but the output is:
a = np.array([[1,2,3],[4,5,6]])
b = np.array([[9,8,7],[6,5,4,6,7,8]])
np.column_stack((a,b))
[out]: array([[1, 2, 3, [9, 8, 7]], [4, 5, 6, [6, 5, 4, 6, 7, 8]]], dtype=object)
The problem with np.column_stack is that the rows of b do not have equal lengths (hence the object dtype).
You can do this with np.concatenate (or as #John Galt said in comments np.append); e.g.:
In [43]: [np.concatenate((i, j)) for i, j in zip(a, b)]
Out[43]: [array([1, 2, 3, 9, 8, 7]), array([4, 5, 6, 6, 5, 4, 6, 7, 8])]
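Back at the DataFrame level, one plausible speedup (a sketch, not benchmarked here) is to pull both columns out as raw object arrays and concatenate with a plain list comprehension, skipping pandas' per-row alignment that the + operator goes through:

```python
import pandas as pd

# Small reconstruction of the question's frame
df = pd.DataFrame({'ids_x': [[0, 1, 2], [0, 1]],
                   'ids_y': [[], [12, 4]]})

# zip over the raw object arrays; list + list concatenates each pair
x = df['ids_x'].to_numpy()
y = df['ids_y'].to_numpy()
df['ids'] = [i + j for i, j in zip(x, y)]
```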

Add numpy array as column to Pandas data frame

I have a Pandas data frame object of shape (X,Y) that looks like this:
[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]
and a scipy sparse matrix (CSC) of shape (X,Z) that looks something like this
[[0, 1, 0],
[0, 0, 1],
[1, 0, 0]]
How can I add the content from the matrix to the data frame in a new named column such that the data frame will end up like this:
[[1, 2, 3, [0, 1, 0]],
[4, 5, 6, [0, 0, 1]],
[7, 8, 9, [1, 0, 0]]]
Notice the data frame now has shape (X, Y+1) and rows from the matrix are elements in the data frame.
import numpy as np
import pandas as pd
import scipy.sparse as sparse
df = pd.DataFrame(np.arange(1,10).reshape(3,3))
arr = sparse.coo_matrix(([1,1,1], ([0,1,2], [1,2,0])), shape=(3,3))
df['newcol'] = arr.toarray().tolist()
print(df)
yields
0 1 2 newcol
0 1 2 3 [0, 1, 0]
1 4 5 6 [0, 0, 1]
2 7 8 9 [1, 0, 0]
Consider using a higher dimensional datastructure (a Panel), rather than storing an array in your column:
In [11]: p = pd.Panel({'df': df, 'csc': csc})
In [12]: p.df
Out[12]:
0 1 2
0 1 2 3
1 4 5 6
2 7 8 9
In [13]: p.csc
Out[13]:
0 1 2
0 0 1 0
1 0 0 1
2 1 0 0
Look at cross-sections etc, etc, etc.
In [14]: p.xs(0)
Out[14]:
csc df
0 0 1
1 1 2
2 0 3
See the docs for more on Panels. (Note that Panel was deprecated in pandas 0.20.0 and removed in 1.0.0; the docs now recommend a MultiIndex or xarray for higher-dimensional data.)
df = pd.DataFrame(np.arange(1,10).reshape(3,3))
df['newcol'] = pd.Series(your_2d_numpy_array.tolist())
(The .tolist() is needed because pd.Series rejects a 2-D array; as a list of lists, each row becomes one cell.)
You can add and retrieve a numpy array from dataframe using this:
import numpy as np
import pandas as pd
df = pd.DataFrame({'b': range(10)})  # target dataframe
a = np.random.normal(size=(10, 2))   # numpy array
df['a'] = a.tolist()                 # save array
np.array(df['a'].tolist())           # retrieve array
This builds on the previous answer that confused me because of the sparse part, and it works well for a non-sparse numpy array.
Here is another example:
import numpy as np
import pandas as pd

# This just creates a list of tuples, and each element of the tuple is an array
a = [(np.random.randint(1, 10, 10), np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))
     for i in range(0, 10)]

# A pandas DataFrame will allocate each array contained as a tuple element
# as a column
df = pd.DataFrame(data=a, columns=['random_num', 'sequential_num'])
The secret in general is to allocate the data in the form a = [(array_11, array_12, ..., array_1n), ..., (array_m1, array_m2, ..., array_mn)] and the pandas DataFrame will order the data into n columns of arrays. Of course, lists of arrays could be used instead of tuples, in which case the form would be:
a = [[array_11, array_12, ..., array_1n], ..., [array_m1, array_m2, ..., array_mn]]
This is the output if you print(df) from the code above:
random_num sequential_num
0 [7, 9, 2, 2, 5, 3, 5, 3, 1, 4] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
1 [8, 7, 9, 8, 1, 2, 2, 6, 6, 3] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
2 [3, 4, 1, 2, 2, 1, 4, 2, 6, 1] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
3 [3, 1, 1, 1, 6, 2, 8, 6, 7, 9] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
4 [4, 2, 8, 5, 4, 1, 2, 2, 3, 3] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
5 [3, 2, 7, 4, 1, 5, 1, 4, 6, 3] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
6 [5, 7, 3, 9, 7, 8, 4, 1, 3, 1] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
7 [7, 4, 7, 6, 2, 6, 3, 2, 5, 6] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
8 [3, 1, 6, 3, 2, 1, 5, 2, 2, 9] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
9 [7, 2, 3, 9, 5, 5, 8, 6, 9, 8] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Another variation of the example above:
b = [(i, "text", [14, 5], np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))
     for i in range(0, 10)]
df = pd.DataFrame(data=b, columns=['Number', 'Text', '2Elemnt_array', '10Element_array'])
Output of df:
Number Text 2Elemnt_array 10Element_array
0 0 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
1 1 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
2 2 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
3 3 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
4 4 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
5 5 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
6 6 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
7 7 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
8 8 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
9 9 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
If you want to add other columns of arrays, then:
df['3Element_array'] = [[1, 2, 3] for _ in range(10)]
The final output of df will be:
Number Text 2Elemnt_array 10Element_array 3Element_array
0 0 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] [1, 2, 3]
1 1 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] [1, 2, 3]
2 2 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] [1, 2, 3]
3 3 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] [1, 2, 3]
4 4 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] [1, 2, 3]
5 5 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] [1, 2, 3]
6 6 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] [1, 2, 3]
7 7 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] [1, 2, 3]
8 8 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] [1, 2, 3]
9 9 text [14, 5] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] [1, 2, 3]
