How to iterate over Pandas Series generated from groupby().size() - python

How do you iterate over a Pandas Series generated from a .groupby('...').size() command and get both the group name and the count?
As an example if I have:
foo
-1     7
 0    85
 1    14
 2     5
how can I loop over them so that in each iteration I would have -1 & 7, 0 & 85, 1 & 14 and 2 & 5 in variables?
I tried the enumerate option but it doesn't quite work. Example:
for i, row in enumerate(df.groupby(['foo']).size()):
    print(i, row)
it doesn't return -1, 0, 1, and 2 for i but rather 0, 1, 2, 3.

Update:
Given a pandas Series:
s = pd.Series([1,2,3,4], index=['a', 'b', 'c', 'd'])
s
#a 1
#b 2
#c 3
#d 4
#dtype: int64
You can loop over it directly, which yields one value from the Series in each iteration:
for i in s:
    print(i)
1
2
3
4
If you want to access the index at the same time, you can use either the items or the iteritems method; both lazily yield (index, value) pairs. (Note that iteritems was deprecated in pandas 1.5 and removed in 2.0, so prefer items on recent versions.)
for i, v in s.items():
    print('index: ', i, 'value: ', v)
#index: a value: 1
#index: b value: 2
#index: c value: 3
#index: d value: 4
for i, v in s.iteritems():
    print('index: ', i, 'value: ', v)
#index: a value: 1
#index: b value: 2
#index: c value: 3
#index: d value: 4
Old Answer:
You can call iteritems() method on the Series:
for i, row in df.groupby('a').size().iteritems():
    print(i, row)
# 12 4
# 14 2
According to doc:
Series.iteritems()
Lazily iterate over (index, value) tuples
Note: This is not the same data as in the question, just a demo.
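Applied to the question's own example, the same pattern yields the group name and its count together. A minimal sketch, assuming a DataFrame whose foo column produces the counts shown in the question:
import pandas as pd

# hypothetical data matching the question's group sizes
df = pd.DataFrame({'foo': [-1] * 7 + [0] * 85 + [1] * 14 + [2] * 5})
for name, count in df.groupby('foo').size().items():
    print(name, count)
# -1 7
# 0 85
# 1 14
# 2 5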

To expand upon Psidom's answer, there are three useful ways to unpack data from a pd.Series. Using the same Series as Psidom:
s = pd.Series([1,2,3,4], index=['a', 'b', 'c', 'd'])
A direct loop over s yields the value of each row.
A loop over s.iteritems() or s.items() yields the (index, value) pair of each row.
Using enumerate() on s.iteritems() yields a nested tuple of the form (rownum, (index, value)).
The last way is useful in case your index contains information other than the row number itself (e.g. a time series where the index is time); a sketch of that case follows the example below.
s = pd.Series([1,2,3,4], index=['a', 'b', 'c', 'd'])
for rownum, (indx, val) in enumerate(s.iteritems()):
    print('row number: ', rownum, 'index: ', indx, 'value: ', val)
will output:
row number: 0 index: a value: 1
row number: 1 index: b value: 2
row number: 2 index: c value: 3
row number: 3 index: d value: 4
You can read more on unpacking nested tuples here.
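To make the time-series case mentioned above concrete, here is a small sketch (the dates are invented for illustration): enumerate() supplies the positional row number while items() supplies the timestamp index and the value:
import pandas as pd

ts = pd.Series([10, 20, 30],
               index=pd.date_range('2021-01-01', periods=3, freq='D'))
for rownum, (t, val) in enumerate(ts.items()):
    # rownum is the positional counter; t is the Timestamp from the index
    print('row number: ', rownum, 'index: ', t.date(), 'value: ', val)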

Related

How to get the max of each row's elements in Python?

I have a data frame that contains a single column, Positive Dispatch:
index  Positive Dispatch
0      a,c
1      b
2      a,b
Each keyword has its own value:
a,b,c = 12,22,11
I want to create a new column that contains the max for each row; for example, the first row contains a and c, and of those two a has the bigger value, 12, and so on:
Positive Dispatch  Max
a,c                 12
b                   22
a,b                 22
My attempt:
import pandas as pd

dic1 = {
    'a': [12, 0, 22],
    'b': [0, 13, 22],
    'c': [12, 0, 0],   # there can be N number of columns here, for example
}                      # 'd': [11, 22, 333]
a, b, c = 12, 22, 11   # d would have its own value, for example d = 33
df = pd.DataFrame(dic1)
df['Positive Dispatch'] = df.gt(0).dot(df.columns + ',').str[:-1]  # creating the Positive Dispatch column
print(df['Positive Dispatch'].max(axis=1))
But this gives the error:
ValueError: No axis named 1 for object type <class 'pandas.core.series.Series'>
IIUC:
Create a dict, then calculate the max according to the dictionary's keys and values using split() + max() + map():
d = {'a': a, 'b': b, 'c': c}
df['Max'] = df['Positive Dispatch'].str.split(',').map(lambda x: max([d.get(y) for y in x]))
# for more columns use applymap() in place of map(); the logic stays the same
OR
If you have more columns, like 'Negative Dispatch', then use:
d = {'a': a, 'b': b, 'c': c, 'd': d}
df[['Max', 'Min']] = df[['Positive Dispatch', 'Negative Dispatch']].applymap(lambda x: max([d.get(y) for y in x.split(',')]))
Sample dataframe used:
dic1 = {
    'a': [12, 0, 22],
    'b': [0, 13, 22],
    'c': [12, 0, 0],   # there can be N number of columns here, for example
    'd': [11, 22, 333]}
a, b, c, d = 12, 22, 11, 33
df = pd.DataFrame(dic1)
df['Positive Dispatch'] = df.gt(0).dot(df.columns + ',').str[:-1]
df['Negative Dispatch'] = [['a,d'], ['c,b,a'], ['d,c']]
df['Negative Dispatch'] = df['Negative Dispatch'].str.join(',')
output:
    a   b   c  Positive Dispatch  Max
0  12   0  12                a,c   12
1   0  13   0                  b   22
2  22  22   0                a,b   22
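A fully vectorized alternative (a sketch under the same sample data and the same d mapping, not part of the original answer) avoids the string split entirely: weight a boolean mask of the keyword columns by each keyword's value and take the row-wise max:
import pandas as pd

dic1 = {'a': [12, 0, 22], 'b': [0, 13, 22], 'c': [12, 0, 0]}
d = {'a': 12, 'b': 22, 'c': 11}
df = pd.DataFrame(dic1)

# True where the keyword is positive in that row; multiplying by the keyword
# values turns each True into that value and each False into 0.
mask = df[list(d)].gt(0)
df['Max'] = (mask * pd.Series(d)).max(axis=1)
print(df['Max'].tolist())   # [12, 22, 22]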

Take the difference of all elements of a series with the previous ones in python pandas

I have a dataframe with sorted values labeled by ids, and I want to take the difference between the value of the first element of each id and the values of the last elements of all previous ids. The code below does what I want:
import pandas as pd

a = 'a'; b = 'b'; c = 'c'
df = pd.DataFrame(data=[*zip([a, a, a, b, b, c, a], [1, 2, 3, 5, 6, 7, 8])],
                  columns=['id', 'value'])
print(df)

# # take the last value for a particular id
# last_value_for_id = df.loc[df.id.shift(-1) != df.id, :]
# print(last_value_for_id)

current_id = ''; prev_values = {}; diffs = {}
for t in df.itertuples(index=False):
    prev_values[t.id] = t.value
    if current_id != t.id:
        current_id = t.id
    else:
        continue
    for k, v in prev_values.items():
        if k == current_id:
            continue
        diffs[(k, current_id)] = t.value - v
print(pd.DataFrame(data=diffs.values(), columns=['diff'], index=diffs.keys()))
prints:
  id  value
0  a      1
1  a      2
2  a      3
3  b      5
4  b      6
5  c      7
6  a      8
     diff
a b     2
  c     4
b c     1
  a     2
c a     1
I want to do this in a vectorized manner, however. I have found a way of getting the series of last elements, as in:
# take the last value for a particular id
last_value_for_id = df.loc[df.id.shift(-1) != df.id, :]
print(last_value_for_id)
which gives me:
  id  value
2  a      3
4  b      6
5  c      7
but I can't find a way of using this to take the diffs in a vectorized manner.
Depending on how many ids you have, this works with a few thousand:
import numpy as np

# enumerate ids; should be careful
ids = [a, b, c]
num_ids = len(ids)
# compute the first and last value per id
f = df.groupby('id').value.agg(['first', 'last'])
# lower-triangle mask
mask = np.array([[i >= j for j in range(num_ids)] for i in range(num_ids)])
# compute diff of first and last, then mask
diff = np.where(mask, None, f['first'].values[None, :] - f['last'].values[:, None])
diff = pd.DataFrame(diff, index=ids, columns=ids)
# stack
diff.stack()
output:
a  b    2
   c    4
b  c    1
dtype: object
Edit for updated data:
For the updated data, the approach is similar, provided we can create the f table:
# create blocks of consecutive ids
blocks = df['id'].ne(df['id'].shift()).cumsum()
# group by block
groups = df.groupby(blocks)
# first and last value per block
df['fv'] = groups.value.transform('first')
df['lv'] = groups.value.transform('last')
# the above f and ids
# note the column name change
f = df[['id', 'fv', 'lv']].drop_duplicates()
ids = f['id'].values
num_ids = len(ids)
Output:
a  b    2
   c    4
   a    5
b  c    1
   a    2
c  a    1
dtype: object
If you want to go further and drop the index (a,a), well, I'm so lazy :D.
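The consecutive-block trick in the first line is worth seeing in isolation; a minimal sketch with the question's data:
import pandas as pd

df = pd.DataFrame({'id': ['a', 'a', 'a', 'b', 'b', 'c', 'a'],
                   'value': [1, 2, 3, 5, 6, 7, 8]})
# A new block starts wherever id differs from the previous row;
# cumsum turns those change points into consecutive block labels.
blocks = df['id'].ne(df['id'].shift()).cumsum()
print(blocks.tolist())   # [1, 1, 1, 2, 2, 3, 4]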
My method
s = df.groupby(df.id.shift().ne(df.id).cumsum()).agg({'id': 'first', 'value': ['min', 'max']})
s.columns = s.columns.droplevel(0)
t = s['min'].values[:, None] - s['max'].values
t = t.astype(float)
Below is all reshaping, to match your output:
t[np.triu_indices(t.shape[1], 0)] = np.nan
newdf = pd.DataFrame(t, index=s['first'], columns=s['first'])
newdf.values[newdf.index.values[:, None] == newdf.index.values] = np.nan
newdf = newdf.T.stack()
newdf
Out[933]:
first  first
a      b        2.0
       c        4.0
b      c        1.0
       a        2.0
c      a        1.0
dtype: float64

pandas data frame sort

I have a pandas dataframe like this which I try to sort by the column 'dist'. The sorted dataframe should start with E or F, as below. I use sort_values, but it is not working for me. The function computes distances from the 'Start' location to a list of locations ['C', 'B', 'D', 'E', 'A', 'F'] and is then supposed to sort the dataframe in ascending order by the 'dist' column.
Could someone advise me on why the sorting is not working?
import numpy as np
import pandas as pd
from math import hypot

locations = {'Start': (20, 5), 'A': (10, 3), 'B': (5, 3), 'C': (5, 7), 'D': (10, 7), 'E': (14, 4), 'F': (14, 6)}
loc_list
Out[194]: ['C', 'B', 'D', 'E', 'A', 'F']
def closest_locations(from_loc_point, to_loc_list):
    lresults = list()
    for list_index in range(len(to_loc_list)):
        dist = hypot(locations[from_loc_point[0]][0] - locations[to_loc_list[list_index]][0],
                     locations[from_loc_point[0]][1] - locations[to_loc_list[list_index]][1])  # distance
        lista_dist = [from_loc_point[0], to_loc_list[list_index], dist]
        lresults.append(lista_dist[:])
    RESULTS = pd.DataFrame(np.array(lresults))
    RESULTS.columns = ['from', 'to', 'dist']
    RESULTS.sort_values(['dist'], ascending=[True], inplace=True)
    RESULTS.index = range(len(RESULTS))
    return RESULTS
closest_locations(['Start'], loc_list)
Out[189]:
    from to                dist
0  Start  D   10.19803902718557
1  Start  A   10.19803902718557
2  Start  C  15.132745950421555
3  Start  B  15.132745950421555
4  Start  E    6.08276253029822
5  Start  F    6.08276253029822
closest_two_loc.dtypes
Out[247]:
from    object
to      object
dist    object
dtype: object
Is this what you want?
import numpy as np

locations = {'Start': (20, 5), 'A': (10, 3), 'B': (5, 3), 'C': (5, 7), 'D': (10, 7), 'E': (14, 4), 'F': (14, 6)}
df = pd.DataFrame.from_dict(locations, orient='index').rename(columns={0: 'x', 1: 'y'})
# pd.np was removed in pandas 2.0, so use numpy directly
df['dist'] = df.apply(lambda row: np.sqrt((row['x'] - df.loc['Start', 'x'])**2 + (row['y'] - df.loc['Start', 'y'])**2), axis=1)
df.drop(['Start']).sort_values(by='dist')
    x  y       dist
E  14  4   6.082763
F  14  6   6.082763
A  10  3  10.198039
D  10  7  10.198039
C   5  7  15.132746
B   5  3  15.132746
or if you want to wrap it in a function
def dist_from(df, col):
    df['dist'] = df.apply(lambda row: np.sqrt((row['x'] - df.loc[col, 'x'])**2 + (row['y'] - df.loc[col, 'y'])**2), axis=1)
    df['from'] = col
    df = df.drop([col]).sort_values(by='dist')
    df.index.name = 'to'
    return df.reset_index().loc[:, ['from', 'to', 'dist']]
You need to convert the values in the "dist" column to float:
df = closest_locations(['Start'], loc_list)
df.dist = list(map(lambda x: float(x), df.dist)) # convert each value to float
print(df.sort_values('dist')) # now it will sort properly
Output:
    from to       dist
4  Start  E   6.082763
5  Start  F   6.082763
0  Start  D  10.198039
1  Start  A  10.198039
2  Start  C  15.132746
3  Start  B  15.132746
Edit: As mentioned by @jezrael in the comments, the following is a more direct method:
df.dist = df.dist.astype(float)
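The root cause is worth spelling out: np.array(lresults) promotes the mixed list of strings and floats to an all-string array, so 'dist' ends up with object dtype and sorts lexicographically ('10...' < '15...' < '6...'). Here is a sketch of a rewrite that keeps 'dist' numeric from the start, so no cast is needed at all (a hypothetical variant, using the same locations dict):
import pandas as pd
from math import hypot

locations = {'Start': (20, 5), 'A': (10, 3), 'B': (5, 3), 'C': (5, 7),
             'D': (10, 7), 'E': (14, 4), 'F': (14, 6)}

def closest_locations(from_loc, to_locs):
    x0, y0 = locations[from_loc]
    rows = [(from_loc, to, hypot(x0 - locations[to][0], y0 - locations[to][1]))
            for to in to_locs]
    # building from a list of tuples lets pandas infer a float 'dist' column
    return (pd.DataFrame(rows, columns=['from', 'to', 'dist'])
              .sort_values('dist', ignore_index=True))

print(closest_locations('Start', ['C', 'B', 'D', 'E', 'A', 'F']))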

Pandas assign label based on index value

I have a dataframe with an index and multiple columns. Secondly, I have a few lists containing index values sampled according to certain criteria. Now I want to create columns with labels based on whether or not the index of a given row is present in a specified list.
Now there are two situations where I am using it:
1) To create a column and give labels based on one list:
df['1_name'] = df.index.map(lambda ix: 'A' if ix in idx_1_model else 'B')
2) To create a column and give labels based on multiple lists:
def assignLabelsToSplit(ix_, random_m, random_y, model_m, model_y):
    if (ix_ in random_m) or (ix_ in model_m):
        return 'A'
    if (ix_ in random_y) or (ix_ in model_y):
        return 'B'
    else:
        return 'not_assigned'

df['2_name'] = df.index.map(lambda ix: assignLabelsToSplit(ix, idx_2_random_m, idx_2_random_y, idx_2_model_m, idx_2_model_y))
This works, but it is quite slow. Each call takes about 3 minutes, and considering I have to execute the functions multiple times, it needs to be faster.
Thank you for any suggestions.
I think you need a double numpy.where with Index.isin:
df['2_name'] = np.where(df.index.isin(random_m + model_m), 'A',
                        np.where(df.index.isin(random_y + model_y), 'B', 'not_assigned'))
Sample:
np.random.seed(100)
df = pd.DataFrame(np.random.randint(10, size=(10,1)), columns=['A'])
#print (df)
random_m = [0,1]
random_y = [2,3]
model_m = [7,4]
model_y = [5,6]
print (type(random_m))
<class 'list'>
print (random_m + model_m)
[0, 1, 7, 4]
print (random_y + model_y)
[2, 3, 5, 6]
df['2_name'] = np.where(df.index.isin(random_m + model_m), 'A',
                        np.where(df.index.isin(random_y + model_y), 'B', 'not_assigned'))
print (df)
   A        2_name
0  8             A
1  8             A
2  3             B
3  7             B
4  7             A
5  0             B
6  4             B
7  2             A
8  5  not_assigned
9  2  not_assigned
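If you ever need more than two labels, nested np.where calls get unwieldy; numpy.select expresses the same logic as a flat list of conditions. A sketch using the same sample data as above:
import numpy as np

conditions = [df.index.isin(random_m + model_m),
              df.index.isin(random_y + model_y)]
# conditions are checked in order; rows matching none get the default
df['2_name'] = np.select(conditions, ['A', 'B'], default='not_assigned')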

Count frequency of values in pandas DataFrame column

I want to count the number of times each value appears in the dataframe.
Here is my dataframe - df:
status
1 N
2 N
3 C
4 N
5 S
6 N
7 N
8 S
9 N
10 N
11 N
12 S
13 N
14 C
15 N
16 N
17 N
18 N
19 S
20 N
I want a dictionary of counts:
e.g. counts = {'N': 14, 'C': 2, 'S': 4}
I have tried df['status']['N'], but it gives a KeyError, and also df['status'].value_counts, but to no avail.
You can use value_counts and to_dict:
print(df['status'].value_counts())
N    14
S     4
C     2
Name: status, dtype: int64
counts = df['status'].value_counts().to_dict()
print(counts)
{'S': 4, 'C': 2, 'N': 14}
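If you want relative frequencies instead of raw counts, value_counts accepts normalize=True; a quick sketch with the same df as above:
print(df['status'].value_counts(normalize=True).to_dict())
{'N': 0.7, 'S': 0.2, 'C': 0.1}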
An alternative one liner using underdog Counter:
In [3]: from collections import Counter
In [4]: dict(Counter(df.status))
Out[4]: {'C': 2, 'N': 14, 'S': 4}
You can try it this way:
df.stack().value_counts().to_dict()
Can you convert df into a list?
If so:
a = ['a', 'a', 'a', 'b', 'b', 'c']
c = dict()
for i in set(a):
    c[i] = a.count(i)
Using a dict comprehension:
c = {i: a.count(i) for i in set(a)}
See my response in this thread for a pandas DataFrame output:
count the frequency that a value occurs in a dataframe column
For dictionary output, you can modify it as follows:
def column_list_dict(x):
    column_list_df = []
    for col_name in x.columns:
        y = col_name, len(x[col_name].unique())
        column_list_df.append(y)
    return dict(column_list_df)
