I am trying to write a small Python application that creates a CSV file containing data for a recipe system.
Imagine the following structure of Excel data:
Manufacturer Product Data 1 Data 2 Data 3
Test 1 Product 1 1 2 3
Test 1 Product 2 4 5 6
Test 2 Product 1 1 2 3
Test 3 Product 1 1 2 3
Test 3 Product 1 4 5 6
Test 3 Product 1 7 8 9
When merged, I would like the data to be displayed in the following format:
Test 1 Product 1 1 2 3 0 0 0 0 0 0
Test 1 Product 2 4 5 6 0 0 0 0 0 0
Test 2 Product 1 1 2 3 0 0 0 0 0 0
Test 3 Product 1 1 2 3 4 5 6 7 8 9
Any help would be gratefully received. So far I can read the data into a pandas DataFrame and convert it to CSV.
Regards
Lee
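For reference, the sample above can be rebuilt as a DataFrame like so (a minimal sketch; the column names are taken from the header row shown above):
import pandas as pd

# Reconstruction of the sample data from the question.
df = pd.DataFrame({
    'Manufacturer': ['Test 1', 'Test 1', 'Test 2', 'Test 3', 'Test 3', 'Test 3'],
    'Product': ['Product 1', 'Product 2', 'Product 1', 'Product 1', 'Product 1', 'Product 1'],
    'Data 1': [1, 4, 1, 1, 4, 7],
    'Data 2': [2, 5, 2, 2, 5, 8],
    'Data 3': [3, 6, 3, 3, 6, 9],
})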
Use melt, groupby, pd.Series, and unstack:
(df.melt(['Manufacturer', 'Product'])
   .groupby(['Manufacturer', 'Product'])['value']
   .apply(lambda x: pd.Series(x.tolist()))
   .unstack(fill_value=0)
   .reset_index())
Output:
Manufacturer Product 0 1 2 3 4 5 6 7 8
0 Test 1 Product 1 1 2 3 0 0 0 0 0 0
1 Test 1 Product 2 4 5 6 0 0 0 0 0 0
2 Test 2 Product 1 1 2 3 0 0 0 0 0 0
3 Test 3 Product 1 1 4 7 2 5 8 3 6 9
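Note that melt stacks all of Data 1 first, then Data 2, then Data 3, which is why Test 3 comes out column-wise (1 4 7 2 5 8 3 6 9) rather than in the row-wise order requested. A sketch of one way to restore row-wise order, assuming pandas >= 1.1 for ignore_index: keep the original row labels through the melt and stable-sort on them before grouping.
(df.melt(['Manufacturer', 'Product'], ignore_index=False)
   .sort_index(kind='mergesort')  # stable sort regroups the values row by row
   .groupby(['Manufacturer', 'Product'])['value']
   .apply(lambda x: pd.Series(x.tolist()))
   .unstack(fill_value=0)
   .reset_index())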
With groupby
df.groupby(['Manufacturer','Product']).agg(tuple).sum(1).apply(pd.Series).fillna(0)
Out[85]:
0 1 2 3 4 5 6 7 8
Manufacturer Product
Test 1 Product 1 1.0 2.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0
Product 2 4.0 5.0 6.0 0.0 0.0 0.0 0.0 0.0 0.0
Test 2 Product 1 1.0 2.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0
Test 3 Product 1 1.0 4.0 7.0 2.0 5.0 8.0 3.0 6.0 9.0
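This output is column-wise too (each column is aggregated to a tuple before the tuples are concatenated). A row-wise sketch, assuming the data columns are named Data 1 to Data 3 as in the question:
(df.groupby(['Manufacturer', 'Product'])
   .apply(lambda g: pd.Series(g[['Data 1', 'Data 2', 'Data 3']].values.ravel()))
   .fillna(0))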
cols = ['Manufacturer', 'Product']
d = df.set_index(cols + [df.groupby(cols).cumcount()]).unstack(fill_value=0)
d
Gets me
Data 1 Data 2 Data 3
0 1 2 0 1 2 0 1 2
Manufacturer Product
Test 1 Product 1 1 0 0 2 0 0 3 0 0
Product 2 4 0 0 5 0 0 6 0 0
Test 2 Product 1 1 0 0 2 0 0 3 0 0
Test 3 Product 1 1 4 7 2 5 8 3 6 9
Followed up with:
d.sort_index(axis=1, level=1).pipe(lambda d: d.set_axis(range(d.shape[1]), axis=1).reset_index())
Manufacturer Product 0 1 2 3 4 5 6 7 8
0 Test 1 Product 1 1 2 3 0 0 0 0 0 0
1 Test 1 Product 2 4 5 6 0 0 0 0 0 0
2 Test 2 Product 1 1 2 3 0 0 0 0 0 0
3 Test 3 Product 1 1 2 3 4 5 6 7 8 9
Or
cols = ['Manufacturer', 'Product']
pd.Series({
    n: d.values.ravel() for n, d in df.set_index(cols).groupby(cols)
}).apply(pd.Series).fillna(0, downcast='infer').rename_axis(cols).reset_index()
Manufacturer Product 0 1 2 3 4 5 6 7 8
0 Test 1 Product 1 1 2 3 0 0 0 0 0 0
1 Test 1 Product 2 4 5 6 0 0 0 0 0 0
2 Test 2 Product 1 1 2 3 0 0 0 0 0 0
3 Test 3 Product 1 1 2 3 4 5 6 7 8 9
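Note that .values.ravel() flattens each group's rows left to right, top to bottom, which is why this variant reproduces the 1 2 3 4 5 6 7 8 9 order requested in the question.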
With defaultdict and itertools.count
from itertools import count
from collections import defaultdict
c = defaultdict(count)
pd.Series({
    (m, p, next(c[(m, p)])): v
    for _, m, p, *V in df.itertuples()
    for v in V
}).unstack(fill_value=0)
0 1 2 3 4 5 6 7 8
Test 1 Product 1 1 2 3 0 0 0 0 0 0
Product 2 4 5 6 0 0 0 0 0 0
Test 2 Product 1 1 2 3 0 0 0 0 0 0
Test 3 Product 1 1 2 3 4 5 6 7 8 9
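The defaultdict(count) pairing gives every (Manufacturer, Product) key its own independent counter, so repeated values for the same pair are numbered 0, 1, 2, and so on. A minimal demonstration:
from itertools import count
from collections import defaultdict

c = defaultdict(count)
print(next(c['x']), next(c['x']), next(c['y']))  # 0 1 0 - each key counts separately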
Related task:
I have a df where I compute some ratios grouped by date and id. I want to fill column c with NaN if the sum of a and b is 0. Any help would be awesome!
df
date id a b c
0 2001-09-06 1 3 1 1
1 2001-09-07 1 3 1 1
2 2001-09-08 1 4 0 1
3 2001-09-09 2 6 0 1
4 2001-09-10 2 0 0 2
5 2001-09-11 1 0 0 2
6 2001-09-12 2 1 1 2
7 2001-09-13 2 0 0 2
8 2001-09-14 1 0 0 2
Try this:
df['new_c'] = df.c.where(df[['a', 'b']].sum(axis=1).ne(0))
Out[75]:
date id a b c new_c
0 2001-09-06 1 3 1 1 1.0
1 2001-09-07 1 3 1 1 1.0
2 2001-09-08 1 4 0 1 1.0
3 2001-09-09 2 6 0 1 1.0
4 2001-09-10 2 0 0 2 NaN
5 2001-09-11 1 0 0 2 NaN
6 2001-09-12 2 1 1 2 2.0
7 2001-09-13 2 0 0 2 NaN
8 2001-09-14 1 0 0 2 NaN
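If the zero test is meant to apply to group totals rather than per row (the question mentions grouping by date and id, which the line above ignores), a sketch with groupby/transform; treat the grouping semantics as an assumption:
# Assumption: the a + b == 0 test applies per (date, id) group, not per row.
group_sum = df.groupby(['date', 'id'])[['a', 'b']].transform('sum').sum(axis=1)
df['new_c'] = df.c.where(group_sum.ne(0))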
It is better to build a new DataFrame with the same shape, and then do the following:
import numpy as np

new_df = df.copy()  # same shape and data as df
for i, line in df.iterrows():
    if line['a'] + line['b'] == 0:
        new_df.loc[i, 'c'] = np.nan
I have a df
a b c d
1 0 1 2 4
2 0 1 3 5
3 0 2 1 7
4 1 3 2 5
Within groups, grouped by 'a' and 'b', I want all possible permutations of 'c':
a b c d
1 0 1 2 4
0 1 3 5
0 2 1 7
2 0 1 3 5
0 1 2 4
0 2 1 7
3 1 3 2 5
...
...
I tried:
s = pd.Series({x: list(it.permutations(y)) for x, y in df.groupby(['a', 'b']).c})
0 1 [(3,2),(2,3)]
2 [(1,)]
1 3 [(2,)]
explode() alone does not do what I need, since I need all combinations of groups within subgroups.
For example, in this case there are 2 different ways to combine rows 1 and 2. If row 2 had 2 different permutations, there would be 2*2 = 4 ways.
Does anybody have an idea?
Fix your code with groupby and explode:
import itertools

s = pd.Series({x: list(itertools.permutations(y)) for x, y in df.groupby('a').b}).explode().explode().reset_index()
index 0
0 0 1
1 0 2
2 0 3
3 0 1
4 0 3
5 0 2
6 0 2
7 0 1
8 0 3
9 0 2
10 0 3
11 0 1
12 0 3
13 0 1
14 0 2
15 0 3
16 0 2
17 0 1
18 1 1
19 1 2
20 1 2
21 1 1
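The explode above lists each group's permutations independently. To enumerate every combination of per-group permutations (the 2*2 = 4 case described in the question), one sketch is to take the Cartesian product of the per-group permutation lists, permuting row labels within each group and reassembling the frame:
import itertools as it

# One list of row-label permutations per (a, b) group.
perms = {k: list(it.permutations(g.index)) for k, g in df.groupby(['a', 'b'])}

# Each combo picks one permutation per group; chaining them gives one
# complete within-group reordering of the original frame.
orderings = [
    df.loc[list(it.chain.from_iterable(combo))]
    for combo in it.product(*perms.values())
]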
I have a very large DataFrame where each element is populated with a 1-5 integer, or else 0 if there is no data for that element. I would like to create two adjusted copies of it:
train will be a copy where a random 20% of non-zero elements per row are set to 0
test will be a copy where all but these same 20% of elements are set to 0
Here is a sample:
ORIGINAL
0 1 2 3 4 5 6 7 8 9
0 3 0 1 1 3 5 3 5 4 2
1 4 2 3 2 3 3 4 4 1 2
2 2 4 2 5 4 4 0 0 4 2
TRAIN
0 1 2 3 4 5 6 7 8 9
0 3 0 0 1 3 5 3 5 4 2
1 4 2 3 0 3 3 4 4 0 2
2 2 4 2 5 4 4 0 0 4 0
TEST
0 1 2 3 4 5 6 7 8 9
0 0 0 1 0 0 0 0 0 0 0
1 0 0 0 2 0 0 0 0 1 0
2 0 0 0 0 0 0 0 0 0 2
Here is my current brute-force algorithm that gets the job done, but is far too slow:
train, test = original.copy(), original.copy()

for i in range(original.shape[0]):
    print("{} / {}".format(i + 1, original.shape[0]))
    row = original.iloc[i]  # Select row
    nonZeroIndices = np.where(row > 0)[0]  # Find all non-zero indices
    numTest = int(len(nonZeroIndices) * 0.2)  # Calculate 20% of this amount
    rand = np.random.choice(nonZeroIndices, numTest, replace=False)  # Select a random 20% of non-zero indices

    for j in range(original.shape[1]):
        if j in rand:
            train.iloc[i, j] = 0
        else:
            test.iloc[i, j] = 0
Is there a quicker way to achieve this using Pandas or Numpy?
One approach would be:
def make_train_test(df):
    train, test = df.copy(), df.copy()
    for i, row in df.iterrows():
        non_zero = np.where(row > 0)[0]
        num_test = int(len(non_zero) * 0.2)
        rand = np.random.choice(non_zero, num_test, replace=False)
        row_train = train.iloc[i, :]
        row_test = test.iloc[i, :]
        row_train[rand] = 0
        row_test[~row_test.index.isin(rand)] = 0
    return train, test
In my testing, this runs in about 4.85 ms, your original solution in about 9.07 ms, and andrew_reece's (otherwise elegant) solution in 15.6 ms.
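Much of the remaining cost in these loops is per-element .iloc indexing. A sketch that keeps the row loop but does the assignments on a plain NumPy array (the function and parameter names here are my own):
import numpy as np
import pandas as pd

def make_train_test_np(df, frac=0.2, seed=None):
    rng = np.random.default_rng(seed)
    a = df.to_numpy()
    train, test = a.copy(), a.copy()
    for i in range(a.shape[0]):
        nz = np.flatnonzero(a[i])  # positions of non-zero entries in this row
        picked = rng.choice(nz, int(len(nz) * frac), replace=False)
        train[i, picked] = 0  # zero the held-out fraction in train
        mask = np.ones(a.shape[1], dtype=bool)
        mask[picked] = False
        test[i, mask] = 0  # zero everything else in test
    return (pd.DataFrame(train, index=df.index, columns=df.columns),
            pd.DataFrame(test, index=df.index, columns=df.columns))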
First, create the 20% subset of non-zero values with sample():
subset = df.apply(lambda x: x[x.ne(0)].sample(frac=.2, random_state=42), axis=1)
subset
1 2 5 8
0 NaN 1.0 NaN 4.0
1 2.0 NaN NaN 1.0
2 4.0 NaN 4.0 NaN
Now train and test can be set by multiplying subset against the original df, and either using 1s or 0s as fill_value:
train = df.apply(lambda x: x.multiply(subset.iloc[x.name].isnull(), fill_value=1), axis=1)
train
0 1 2 3 4 5 6 7 8 9
0 3 0 0 1 3 5 3 5 0 2
1 4 0 3 2 3 3 4 4 0 2
2 2 0 2 5 4 0 0 0 4 2
test = df.apply(lambda x: x.multiply(subset.iloc[x.name].notnull(), fill_value=0), axis=1)
test
0 1 2 3 4 5 6 7 8 9
0 0 0 1 0 0 0 0 0 4 0
1 0 2 0 0 0 0 0 0 1 0
2 0 4 0 0 0 4 0 0 0 0
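Since every non-zero cell lands in exactly one of train/test, a quick sanity check is that they add back up to the original:
# Each value goes to exactly one of train/test, so the sum restores df.
assert train.add(test).eq(df).all().all()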
Data:
df
0 1 2 3 4 5 6 7 8 9
0 3 0 1 1 3 5 3 5 4 2
1 4 2 3 2 3 3 4 4 1 2
2 2 4 2 5 4 4 0 0 4 2
I have the following dataset in a pandas DataFrame:
group_id sub_group_id
0 0
0 1
1 0
2 0
2 1
2 2
3 0
3 0
But I want to combine those group ids and form a consolidated group id:
group_id sub_group_id consolidated_group_id
0 0 0
0 1 1
1 0 2
2 0 3
2 1 4
2 2 5
3 0 6
3 0 6
Is there any generic or mathematical way to do it?
cols = ['group_id', 'sub_group_id']
df.assign(
    consolidated_group_id=pd.factorize(
        pd.Series(list(zip(*df[cols].values.T.tolist())))
    )[0]
)
group_id sub_group_id consolidated_group_id
0 0 0 0
1 0 1 1
2 1 0 2
3 2 0 3
4 2 1 4
5 2 2 5
6 3 0 6
7 3 0 6
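A shorter route to the same first-appearance labels is groupby with sort=False and ngroup():
cols = ['group_id', 'sub_group_id']
df['consolidated_group_id'] = df.groupby(cols, sort=False).ngroup()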
You need to convert the values to tuples and then use factorize:
df['consolidated_group_id'] = pd.factorize(df.apply(tuple,axis=1))[0]
print (df)
group_id sub_group_id consolidated_group_id
0 0 0 0
1 0 1 1
2 1 0 2
3 2 0 3
4 2 1 4
5 2 2 5
6 3 0 6
7 3 0 6
The NumPy solutions modify this answer slightly: the ordering is reversed with [::-1], and [0] selects the inverse array returned by numpy.unique:
a = df.values

def unique_return_inverse_2D(a):  # a is a 2D array
    a1D = a.dot(np.append((a.max(0) + 1)[:0:-1].cumprod()[::-1], 1))
    return np.unique(a1D, return_inverse=1)[::-1][0]

def unique_return_inverse_2D_viewbased(a):  # a is a 2D array
    a = np.ascontiguousarray(a)
    void_dt = np.dtype((np.void, a.dtype.itemsize * np.prod(a.shape[1:])))
    return np.unique(a.view(void_dt).ravel(), return_inverse=1)[::-1][0]
df['consolidated_group_id'] = unique_return_inverse_2D(a)
df['consolidated_group_id1'] = unique_return_inverse_2D_viewbased(a)
print (df)
group_id sub_group_id consolidated_group_id consolidated_group_id1
0 0 0 0 0
1 0 1 1 1
2 1 0 2 2
3 2 0 3 3
4 2 1 4 4
5 2 2 5 5
6 3 0 6 6
7 3 0 6 6
I have the following short dataframe:
A B C
1 1 3
2 1 3
3 2 3
4 2 3
5 0 0
I want the output to look like this:
A B C
1 1 3
2 1 3
3 0 0
4 0 0
5 0 0
1 1 3
2 1 3
3 2 3
4 2 3
5 0 0
Use pd.MultiIndex.from_product with the unique As and Bs, then reindex:
cols = list('AB')
mux = pd.MultiIndex.from_product([df.A.unique(), df.B.unique()], names=cols)
df.set_index(cols).reindex(mux, fill_value=0).reset_index()
A B C
0 1 1 3
1 1 2 0
2 1 0 0
3 2 1 3
4 2 2 0
5 2 0 0
6 3 1 0
7 3 2 3
8 3 0 0
9 4 1 0
10 4 2 3
11 4 0 0
12 5 1 0
13 5 2 0
14 5 0 0