Pandas: create new rows from each existing row - python

I have a short DataFrame and I want to create new rows from the existing rows.
What my code does now is multiply every column of each row by a single random number between 3 and 5:
import pandas as pd
import random
data = {'Price': [59, 98, 79],
        'Stock': [53, 60, 60],
        'Delivery': [11, 7, 6]}
df = pd.DataFrame(data)
for row in range(df.shape[0]):
    new_row = round(df.loc[row] * random.randint(3, 5))
    new_row.name = 'new row'
    df = df.append([new_row])
print (df)
         Price  Stock  Delivery
0           59     53        11
1           98     60         7
2           79     60         6
new row    295    265        55
new row    294    180        21
new row    316    240        24
Is it possible to multiply each row by different random numbers? For example:
the 1st row's 3 cells multiplied by (random) [3, 4, 5],
the 2nd row's 3 cells multiplied by (random) [4, 4, 3], etc.?
Thank you.

Change random.randint to NumPy's np.random.choice in your for loop:
np.random.choice(range(3, 6), 3)  # range(3, 6) yields 3, 4, 5
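A sketch of what that could look like inside the questioner's loop (keeping the original append-based approach; range(3, 6) so that 5 is a possible factor):
import numpy as np

for row in range(df.shape[0]):
    new_row = round(df.loc[row] * np.random.choice(range(3, 6), 3))  # one random factor per cell
    new_row.name = 'new row'
    df = df.append([new_row])  # note: .append() is deprecated/removed in newer pandas; pd.concat is preferred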

Use np.random.randint(3, 6, size=3). Actually, you can do it all at once:
df * np.random.randint(3,6, size=df.shape)

You may also generate the multiplication coefficients with the same shape as df independently, and then concat the element-wise product df * mul with the original df:
N.B. This method avoids the notoriously slow .append(). Benchmark: 10,000 rows finished almost instantly with this method, while .append() took 40 seconds!
import numpy as np
np.random.seed(111) # reproducibility
mul = np.random.randint(3, 6, df.shape) # 6 not inclusive
df_new = pd.concat([df, df * mul], axis=0).reset_index(drop=True)
Output:
print(df_new)
   Price  Stock  Delivery
0     59     53        11
1     98     60         7
2     79     60         6
3    177    159        33
4    294    300        28
5    395    300        30
print(mul)  # check the coefficients
[[3 3 3]
 [3 5 4]
 [5 5 5]]


pandas: replace all values of a column with values that increment by n, starting at 0

Say I have a dataframe like the one below, which I have read in from a file (note: *.ene is a txt file):
df = pd.read_fwf('filename.ene')
TS DENSITY STATS
1
2
3
1
2
3
I would like to only change the TS column. I wish to replace all the column values of 'TS' with the values from range(0,751,125). The desired output should look like so:
TS DENSITY STATS
0
125
250
500
625
750
I'm a bit lost and would like some insight regarding the code to do such a thing in a general format.
I used a for loop to store the values into a list:
K = (6 * 125) + 1
m = []
for i in range(0, K, 125):
    m.append(i)
I thought to use .replace like so:
df['TS']=df['TS'].replace(old_value, m, inplace=True)
but was not sure what to put in place of old_value to select all the values of the 'TS' column or if this would even work as a method.
It's pretty straightforward: if you're replacing all of the values in the column, you just need to do
df['TS'] = m
Example:
import pandas as pd
data = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
df = pd.DataFrame(data, index=[0, 1, 2], columns=['a', 'b', 'c'])
print(df)
#     a   b   c
# 0  10  20  30
# 1  40  50  60
# 2  70  80  90
df['a'] = [1,2,3]
print(df)
#    a   b   c
# 0  1  20  30
# 1  2  50  60
# 2  3  80  90
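Applied to the original TS question, a minimal sketch (assuming the file reads into df as shown, and that the generated list has at least as many values as df has rows):
import pandas as pd

df = pd.read_fwf('filename.ene')   # as in the question
m = list(range(0, 751, 125))       # [0, 125, 250, 375, 500, 625, 750]
df['TS'] = m[:len(df)]             # direct assignment; lengths must match, hence the slice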

Replicate the tuple values to create random dataset in python [duplicate]

I have a pandas DataFrame with 100,000 rows and want to split it into 100 sections with 1000 rows in each of them.
How do I draw a random sample of certain size (e.g. 50 rows) of just one of the 100 sections? The df is already ordered such that the first 1000 rows are from the first section, next 1000 rows from another, and so on.
You can use the sample method*:
In [11]: df = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7, 8]], columns=["A", "B"])
In [12]: df.sample(2)
Out[12]:
   A  B
0  1  2
2  5  6
In [13]: df.sample(2)
Out[13]:
   A  B
3  7  8
0  1  2
*On one of the section DataFrames.
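For instance, a minimal sketch of sampling 50 rows from just the first 1000-row section (df_full is a hypothetical name for the full 100,000-row frame):
section_0 = df_full.iloc[0:1000]   # rows of the first section, given the ordering described
sample_0 = section_0.sample(50)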
Note: If you have a larger sample size than the size of the DataFrame, this will raise an error unless you sample with replacement.
In [14]: df.sample(5)
ValueError: Cannot take a larger sample than population when 'replace=False'
In [15]: df.sample(5, replace=True)
Out[15]:
   A  B
0  1  2
1  3  4
2  5  6
3  7  8
1  3  4
One solution is to use the choice function from numpy.
Say you want 50 entries out of 1000; you can use:
import numpy as np
chosen_idx = np.random.choice(1000, replace=False, size=50)
df_trimmed = df.iloc[chosen_idx]
This is of course not considering your block structure. If you want a 50 item sample from block i for example, you can do:
import numpy as np
block_start_idx = 1000 * i
chosen_idx = np.random.choice(1000, replace=False, size=50)
df_trimmed_from_block_i = df.iloc[block_start_idx + chosen_idx]
You could add a "section" column to your data then perform a groupby and sample:
import numpy as np
import pandas as pd
df = pd.DataFrame(
    {"x": np.arange(1_000 * 100), "section": np.repeat(np.arange(100), 1_000)}
)
# >>> df
#            x  section
# 0          0        0
# 1          1        0
# 2          2        0
# 3          3        0
# 4          4        0
# ...      ...      ...
# 99995  99995       99
# 99996  99996       99
# 99997  99997       99
# 99998  99998       99
# 99999  99999       99
#
# [100000 rows x 2 columns]
sample = df.groupby("section").sample(50)
# >>> sample
#            x  section
# 907      907        0
# 494      494        0
# 775      775        0
# 20        20        0
# 230      230        0
# ...      ...      ...
# 99740  99740       99
# 99272  99272       99
# 99863  99863       99
# 99198  99198       99
# 99555  99555       99
#
# [5000 rows x 2 columns]
Add .query("section == 42") (or whatever section you need) if you are interested in only a particular section.
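A minimal sketch of that variant (42 is just an example section id):
# restrict to a single section, then sample 50 rows from it
one_section_sample = df.query("section == 42").sample(50)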
Note this requires pandas 1.1.0, see the docs here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.sample.html
For older versions, see the answer by @msh5678.
Thank you, Jeff, but I received an error:
AttributeError: Cannot access callable attribute 'sample' of 'DataFrameGroupBy' objects, try using the 'apply' method
So instead of sample = df.groupby("section").sample(50), I suggest the command below:
df.groupby('section').apply(lambda grp: grp.sample(50))
This is a nice place for recursion.
import numpy as np

def main2():
    rows = 8  # say you have 8 rows; for real data use the number of rows in your frame
    rands = []
    for i in range(rows):
        gen = fun(rands)
        rands.append(gen)
    print(rands)  # now range through the random values

def fun(rands):
    gen = np.random.randint(0, 8)  # draw again until the index has not been used yet
    if gen in rands:
        a = fun(rands)
        return a
    else:
        return gen

if __name__ == "__main__":
    main2()
output: [6, 0, 7, 1, 3, 5, 4, 2]

Multiply entire column by a random number and store it as new column

I have a column with 100 rows and I want to generate multiple columns (say 100) from this column. The new columns should be generated by multiplying the first column by a random value. Is there a way to do it using Python? I have tried it in Excel, but that is a tedious task, as for every column I have to multiply the column by a randomly generated number (RANDBETWEEN(a,b)).
Let's assume you have a column of numeric data:
import numpy as np
import pandas as pd
import random
# random.randint(a, b) will choose a random integer between a and b
# this will create a column that is 96 elements long
col = [random.randint(0, 500) for i in range(96)]
Now, let's create more columns by leveraging a numpy.array which supports scalar multiplication of vectors:
arr = np.array(col)
# our dataframe has one column in it
df = pd.DataFrame(arr, columns=['x'])
a, b = 100, 5000 # set what interval to select random numbers from
Now, you can loop through to add in new columns
num_cols = 99
for i in range(num_cols):  # or however many columns you want to add
    df[i] = df.x * random.randint(a, b)
df.head()
     x        0        1       2        3        4        5        6  ...      92      93      94       95       96       97      98       99
0   68   257040   214268  107576   266152   229568   309468   319668  ...   74460   25024   85952   320620   331840   175712   87788   254864
1  286  1081080   901186  452452  1119404   965536  1301586  1344486  ...  313170  105248  361504  1348490  1395680   739024  369226  1071928
2  421  1591380  1326571  666022  1647794  1421296  1915971  1979121  ...  460995  154928  532144  1985015  2054480  1087864  543511  1577908
3   13    49140    40963   20566    50882    43888    59163    61113  ...   14235    4784   16432    61295    63440    33592   16783    48724
4  344  1300320  1083944  544208  1346416  1161344  1565544  1617144  ...  376680  126592  434816  1621960  1678720   888896  444104  1289312
[5 rows x 101 columns]
You can use NumPy broadcasting (with reshape) to multiply the column by random numbers:
a, b = 10 ,20
df = pd.DataFrame({'col':np.random.randint(0,500, 100)})
df['col'].values * np.random.randint(a, b, 100).reshape(-1,1)
To get the result in a DataFrame:
pd.DataFrame(df['col'].values * np.random.randint(a, b, 100).reshape(-1,1))
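Note that broadcasting a (100,) array against a (100, 1) array yields a (100, 100) result in which each row is the original column scaled by one random factor; a sketch of transposing it so that each new column is a scaled copy of the original column:
import numpy as np
import pandas as pd

a, b = 10, 20
df = pd.DataFrame({'col': np.random.randint(0, 500, 100)})

result = df['col'].values * np.random.randint(a, b, 100).reshape(-1, 1)  # shape (100, 100)
df_new = pd.DataFrame(result.T)  # column i is df['col'] multiplied by one random factor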

Sample rows of pandas dataframe in proportion to counts in a column

I have a large pandas dataframe with about 10,000,000 rows. Each one represents a feature vector. The feature vectors come in natural groups and the group label is in a column called group_id. I would like to randomly sample 10% say of the rows but in proportion to the numbers of each group_id.
For example, if the group_id's are A, B, A, C, A, B then I would like half of my sampled rows to have group_id A, two sixths to have group_id B and one sixth to have group_id C.
I can see the pandas function sample but I am not sure how to use it to achieve this goal.
You can use groupby and sample
sample_df = df.groupby('group_id').apply(lambda x: x.sample(frac=0.1))
The following samples a total of N rows, where each group appears in its original proportion rounded to the nearest integer, then shuffles and resets the index.
Using:
df = pd.DataFrame(dict(
    A=[1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 4, 4, 4, 4, 4],
    B=range(20)
))
Short and sweet:
df.sample(n=N, weights='A', random_state=1).reset_index(drop=True)
Long version
df.groupby('A', group_keys=False).apply(lambda x: x.sample(int(np.rint(N*len(x)/len(df))))).sample(frac=1).reset_index(drop=True)
I was looking for a similar solution. The code provided by @Vaishali works absolutely fine. What @Abdou is trying to do also makes sense when we want to extract samples from each group based on their proportion of the full data.
# original: 10% from each group
sample_df = df.groupby('group_id').apply(lambda x: x.sample(frac=0.1))
# modified: sample size based on the group's proportion of the full data
n = df.shape[0]
sample_df = df.groupby('group_id').apply(lambda x: x.sample(frac=len(x)/n))
This is not as simple as just grouping and using .sample. You need to actually get the fractions first. Since you said that you are looking to grab 10% of the total number of rows in different proportions, you will need to calculate how much each group will have to take out from the main dataframe. For instance, if we use the division you mentioned in the question, then group A will end up with 1/20 as its fraction of the total number of rows, group B will get 1/30 and group C ends up with 1/60. You can put these fractions in a dictionary and then use .groupby and pd.concat to concatenate the sampled rows from each group into a dataframe. You will be using the n parameter from the .sample method instead of the frac parameter.
fracs = {'A': 1/20, 'B': 1/30, 'C': 1/60}
N = len(df)
pd.concat(dff.sample(n=int(fracs.get(i)*N)) for i,dff in df.groupby('group_id'))
Edit:
This is to highlight the importance of fulfilling the requirement that group_id A should have half of the sampled rows, group_id B two sixths of the sampled rows and group_id C one sixth of the sampled rows, regardless of the original group sizes.
Starting with equal portions: each group starts with 40 rows
df1 = pd.DataFrame({'group_id': ['A', 'B', 'C'] * 40,
                    'vals': np.random.randn(120)})
N = len(df1)
fracs = {'A': 1/20, 'B': 1/30, 'C': 1/60}
print(pd.concat(dff.sample(n=int(fracs.get(i) * N)) for i,dff in df1.groupby('group_id')))
#      group_id      vals
#  12         A -0.175109
#  51         A -1.936231
#  81         A  2.057427
# 111         A  0.851301
# 114         A  0.669910
#  60         A  1.226954
#  73         B -0.166516
#  82         B  0.662789
#  94         B -0.863640
#  31         B  0.188097
# 101         C  1.802802
#  53         C  0.696984
print(df1.groupby('group_id').apply(lambda x: x.sample(frac=0.1)))
# group_id vals
# group_id
# A 24 A 0.161328
# 21 A -1.399320
# 30 A -0.115725
# 114 A 0.669910
# B 34 B -0.348558
# 7 B -0.855432
# 106 B -1.163899
# 79 B 0.532049
# C 65 C -2.836438
# 95 C 1.701192
# 80 C -0.421549
# 74 C -1.089400
First solution: 6 rows for group A (1/2 of the sampled rows), 4 rows for group B (one third of the sampled rows) and 2 rows for group C (one sixth of the sampled rows).
Second solution: 4 rows for each group (each one third of the sampled rows)
Working with differently sized groups: 40 for A, 60 for B and 20 for C
df2 = pd.DataFrame({'group_id': np.repeat(['A', 'B', 'C'], (40, 60, 20)),
                    'vals': np.random.randn(120)})
N = len(df2)
print(pd.concat(dff.sample(n=int(fracs.get(i) * N)) for i,dff in df2.groupby('group_id')))
#      group_id      vals
#  29         A  0.306738
#  35         A  1.785479
#  21         A -0.119405
#   4         A  2.579824
#   5         A  1.138887
#  11         A  0.566093
#  80         B  1.207676
#  41         B -0.577513
#  44         B  0.286967
#  77         B  0.402427
# 103         C -1.760442
# 114         C  0.717776
print(df2.groupby('group_id').apply(lambda x: x.sample(frac=0.1)))
# group_id vals
# group_id
# A 4 A 2.579824
# 32 A 0.451882
# 5 A 1.138887
# 17 A -0.614331
# B 47 B -0.308123
# 52 B -1.504321
# 42 B -0.547335
# 84 B -1.398953
# 61 B 1.679014
# 66 B 0.546688
# C 105 C 0.988320
# 107 C 0.698790
First solution: consistent
Second solution: Now group B has taken 6 of the sampled rows when it's supposed to only take 4.
Working with another set of differently sized groups: 60 for A, 40 for B and 20 for C
df3 = pd.DataFrame({'group_id': np.repeat(['A', 'B', 'C'], (60, 40, 20)),
                    'vals': np.random.randn(120)})
N = len(df3)
print(pd.concat(dff.sample(n=int(fracs.get(i) * N)) for i,dff in df3.groupby('group_id')))
#      group_id      vals
#  48         A  1.214525
#  19         A -0.237562
#   0         A  3.385037
#  11         A  1.948405
#   8         A  0.696629
#  39         A -0.422851
#  62         B  1.669020
#  94         B  0.037814
#  67         B  0.627173
#  93         B  0.696366
# 104         C  0.616140
# 113         C  0.577033
print(df3.groupby('group_id').apply(lambda x: x.sample(frac=0.1)))
# group_id vals
# group_id
# A 4 A 0.284448
# 11 A 1.948405
# 8 A 0.696629
# 0 A 3.385037
# 31 A 0.579405
# 24 A -0.309709
# B 70 B -0.480442
# 69 B -0.317613
# 96 B -0.930522
# 80 B -1.184937
# C 101 C 0.420421
# 106 C 0.058900
This is the only time the second solution offered some consistency (out of sheer luck, I might add).
I hope this proves useful.

Pandas DataFrame with Function: Columns Varying

Given the following DataFrame:
import pandas as pd
import numpy as np
d = pd.DataFrame({'Label': ['a', 'a', 'b', 'b'], 'Count1': [10, 20, 30, 40], 'Count2': [20, 45, 10, 35],
                  'Count3': [40, 30, np.nan, 22], 'Nobs1': [30, 30, 70, 70], 'Nobs2': [65, 65, 45, 45],
                  'Nobs3': [70, 70, 22, 32]})
d
  Label  Count1  Count2  Count3  Nobs1  Nobs2  Nobs3
0     a      10      20    40.0     30     65     70
1     a      20      45    30.0     30     65     70
2     b      30      10     NaN     70     45     22
3     b      40      35    22.0     70     45     32
I would like to apply the z test for proportions on each combination of column groups (1 and 2, 1 and 3, 2 and 3) per row. By column group, I mean, for example, "Count1" and "Nobs1".
For example, one such test would be:
from statsmodels.stats.proportion import proportions_ztest

count = np.array([10, 20])  # from the first row of Count1 and Count2, respectively
nobs = np.array([30, 65])   # from the first row of Nobs1 and Nobs2, respectively
pv = proportions_ztest(count=count, nobs=nobs, value=0, alternative='two-sided')[1]  # returns just the p-value, which is of interest
pv
0.80265091465415639
I would want the result (pv) to go into a new column (first row) called "p_1_2" or something logical that corresponds to its respective columns.
In summary, here are the challenges I'm facing:
1. How to apply this per row.
2. ...for each paired combination, mentioned above.
3. ...where the column names and the number of pairs of "Count" and "Nobs" columns may vary (assuming that there will always be a "Nobs" column for each "Count" column).
Related to 3: For example, I might have a column called "18-24" and another called "18-24_Nobs".
Thanks in advance!
To 1) and 2), for one test; additional tests can be coded similarly or within an additional loop:
for i, row in d.iterrows():
    d.loc[i, 'test'] = proportions_ztest(count=row['Count1':'Count2'].values,
                                         nobs=row['Nobs1':'Nobs2'].values,
                                         value=0, alternative='two-sided')[1]
For 3), it should be possible to handle these cases with pure Python inside the loop.
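For example, a sketch of looping over every pair of Count/Nobs columns (this assumes the "Count<k>"/"Nobs<k>" naming from the example; the pairing logic would need adjusting for names like "18-24"/"18-24_Nobs"):
from itertools import combinations
from statsmodels.stats.proportion import proportions_ztest

count_cols = [c for c in d.columns if c.startswith('Count')]  # assumes Count*/Nobs* naming

for c1, c2 in combinations(count_cols, 2):
    n1, n2 = c1.replace('Count', 'Nobs'), c2.replace('Count', 'Nobs')
    out_col = 'p_' + c1[-1] + '_' + c2[-1]  # e.g. "p_1_2"
    for i, row in d.iterrows():
        # NaN counts (e.g. Count3 in row 2) will propagate to NaN p-values
        d.loc[i, out_col] = proportions_ztest(count=row[[c1, c2]].values,
                                              nobs=row[[n1, n2]].values,
                                              value=0, alternative='two-sided')[1]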
