Dividing one dataframe by another in Python using pandas with float values

I have two separate data frames named df1 and df2 as shown below:
df1:
Scaffold Position Ref_Allele_Count Alt_Allele_Count Coverage_Depth Alt_Allele_Frequency
0 1 11 7 51 58 0.879310
1 1 16 20 95 115 0.826087
2 2 9 9 33 42 0.785714
3 2 12 86 51 137 0.372263
4 2 67 41 98 139 0.705036
5 3 8 0 0 0 0.000000
6 4 99 32 26 58 0.448276
7 4 101 100 24 124 0.193548
8 4 115 69 26 95 0.273684
9 5 6 40 57 97 0.587629
10 5 19 53 87 140 0.621429
df2:
Scaffold Position Ref_Allele_Count Alt_Allele_Count Coverage_Depth Alt_Allele_Frequency
0 1 11 7 64 71 0.901408
1 1 16 10 90 100 0.900000
2 2 9 79 86 165 0.521212
3 2 12 12 73 85 0.858824
4 2 67 54 96 150 0.640000
5 3 8 0 0 0 0.000000
6 4 99 86 28 114 0.245614
7 4 101 32 25 57 0.438596
8 4 115 97 16 113 0.141593
9 5 6 86 43 129 0.333333
10 5 19 59 27 86 0.313953
I have already found the sum values for df1 and df2 in the Allele_Count and Coverage_Depth columns, but now I need to divide the resulting Alt_Allele_Count by the resulting Coverage_Depth to find the total allele frequency (AF). I tried dividing the two variables and got this error message:
TypeError: float() argument must be a string or a number, not 'DataFrame'
when I tried to convert them to floats, and this table of NaNs when I left them as dataframes:
Alt_Allele_Count Coverage_Depth
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 NaN NaN
4 NaN NaN
5 NaN NaN
6 NaN NaN
7 NaN NaN
8 NaN NaN
9 NaN NaN
10 NaN NaN
My code so far:
import csv
import pandas as pd
import numpy as np
df1 = pd.read_csv('C:/Users/Tom/Python_CW/file_pairA_1.csv')
df2 = pd.read_csv('C:/Users/Tom/Python_CW/file_pairA_2.csv')
print(df1)
print(df2)
Ref_Allele_Count = (df1[['Ref_Allele_Count']] + df2[['Ref_Allele_Count']])
print(Ref_Allele_Count)
Alt_Allele_Count = (df1[['Alt_Allele_Count']] + df2[['Alt_Allele_Count']])
print(Alt_Allele_Count)
Coverage_Depth = (df1[['Coverage_Depth']] + df2[['Coverage_Depth']]).astype(float)
print(Coverage_Depth)
AF = Alt_Allele_Count / Coverage_Depth
print(AF)

The error stems from the difference between a pandas Series and a DataFrame. A Series is a 1-dimensional structure like a single column, while a DataFrame is a 2-D object like a table. Two Series combine element by element along the row index, but arithmetic between DataFrames also aligns on column labels: your two sums have different column names (Alt_Allele_Count and Coverage_Depth), so dividing one by the other produces a frame where every cell is NaN, which is exactly the table you posted.
Taking slices of a dataframe can either result in a series or dataframe object depending on how you do it:
df['column_name'] -> Series
df[['column_name', 'column_2']] -> Dataframe
So in the line:
Ref_Allele_Count = (df1[['Ref_Allele_Count']] + df2[['Ref_Allele_Count']])
df1[['Ref_Allele_Count']] is a single-column dataframe rather than a series.
Ref_Allele_Count = (df1['Ref_Allele_Count'] + df2['Ref_Allele_Count'])
Should return the correct result here. Same goes for the rest of the columns you're adding together.
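To see why the double-bracket version produced the all-NaN table, here is a minimal sketch (with made-up numbers, not your CSV data) showing that DataFrame arithmetic aligns on column labels while Series arithmetic aligns only on the row index:
import pandas as pd
a = pd.DataFrame({'Alt_Allele_Count': [58, 95]})
b = pd.DataFrame({'Coverage_Depth': [115, 137]})
# DataFrame / DataFrame: the column labels don't match, so every cell is NaN
print(a / b)
#    Alt_Allele_Count  Coverage_Depth
# 0               NaN             NaN
# 1               NaN             NaN
# Series / Series: values line up by row index, so you get real numbers back
print(a['Alt_Allele_Count'] / b['Coverage_Depth'])
# 0    0.504348
# 1    0.693431
# dtype: float64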

This can be fixed by using only one set of brackets '[]' when referring to a column in a pandas df, rather than two.
import csv
import pandas as pd
import numpy as np
df1 = pd.read_csv('C:/Users/Tom/Python_CW/file_pairA_1.csv')
df2 = pd.read_csv('C:/Users/Tom/Python_CW/file_pairA_2.csv')
print(df1)
print(df2)
# note that I changed your double brackets ([["col_name"]]) to single (["col_name"])
# this results in pd.Series objects instead of pd.DataFrame objects
Ref_Allele_Count = (df1['Ref_Allele_Count'] + df2['Ref_Allele_Count'])
print(Ref_Allele_Count)
Alt_Allele_Count = (df1['Alt_Allele_Count'] + df2['Alt_Allele_Count'])
print(Alt_Allele_Count)
Coverage_Depth = (df1['Coverage_Depth'] + df2['Coverage_Depth']).astype(float)
print(Coverage_Depth)
AF = Alt_Allele_Count / Coverage_Depth
print(AF)
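If you would rather keep the double-bracket (DataFrame) sums, a couple of alternatives are sketched below: pull the single column out as a Series before dividing, or divide the underlying NumPy arrays so that labels are ignored entirely.
Alt_Allele_Count = df1[['Alt_Allele_Count']] + df2[['Alt_Allele_Count']]
Coverage_Depth = df1[['Coverage_Depth']] + df2[['Coverage_Depth']]
# select the single column as a Series so the division aligns on the row index
AF = Alt_Allele_Count['Alt_Allele_Count'] / Coverage_Depth['Coverage_Depth']
# or bypass label alignment and divide the raw arrays
AF_array = Alt_Allele_Count.to_numpy() / Coverage_Depth.to_numpy()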

Related

Pandas conditional lookup based on columns from a different dataframe

I have searched but found no answers for my problem. My first dataframe looks like:
df1
Item Value
1 23
2 3
3 45
4 65
5 17
6 6
7 18
… …
500 78
501 98
and the second lookup table looks like
df2
L1 H1 L2 H2 L3 H3 L4 H4 L5 H5 Name
1 3 5 6 11 78 86 88 90 90 A
4 4 7 10 79 85 91 99 110 120 B
89 89 91 109 0 0 0 0 0 0 C
...
What I am trying to do is to get Name from df2 to df1 when Item in df1 falls between the Low (L) and High (H) columns. Something (which does not work) like:
df1[Name]=np.where((df1['Item']>=df2['L1'] & df1['Item']<=df2['H1'])|
(df1['Item']>=df2['L2'] & df1['Item']<=df2['H2']) |
(df1['Item']>=df2['L3'] & df1['Item']<=df2['H3']) |
(df1['Item']>=df2['L4'] & df1['Item']<=df2['H4']) |
(df1['Item']>=df2['L5'] & df1['Item']<=df2['H5']) |
(df1['Item']>=df2['L6'] & df1['Item']<=df2['H6']), df2['Name'], "Other")
So that the result would be like:
Item Value Name
1 23 A
2 3 A
3 45 A
4 65 B
5 17 A
6 6 A
7 18 A
… … …
500 78 K
501 98 Other
If you have any guidance for my problem to share, I would much appreciate it! Thank you in advance!
Try:
Transform df2 using wide_to_long
Create lists of numbers from "L" to "H" for each row using apply and range
explode to have one value in each row
map each "Item" in df1 using a dict created from ranges with the structure {value: name}
ranges = pd.wide_to_long(df2, ["L","H"], i="Name", j="Subset")
ranges["values"] = ranges.apply(lambda x: list(range(x["L"], x["H"]+1)), axis=1)
ranges = ranges.explode("values").reset_index()
df1["Name"] = df1["Item"].map(dict(zip(ranges["values"], ranges["Name"])))
>>> df1
Item Value Name
0 1 23 A
1 2 3 A
2 3 45 A
3 4 65 B
4 5 17 A
5 6 6 A
6 7 18 B
7 500 78 NaN
8 501 98 NaN
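Note that Items falling in no range come back as NaN rather than "Other" as in your desired output; if you want that label, a small follow-up step (assuming the map call above has already run) would be:
df1["Name"] = df1["Name"].fillna("Other")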
A faster option (tests can prove or debunk that) would be to use conditional_join from pyjanitor (conditional_join uses binary search under the hood):
#pip install pyjanitor
import pandas as pd
import janitor
temp = (pd.wide_to_long(df2,
                        stubnames=['L', 'H'],
                        i='Name',
                        j='Num')
          .reset_index('Name')
        )
# the `Num` index is sorted already
(df1.conditional_join(
        temp,
        # left column, right column, join operator
        ('Item', 'L', '>='),
        ('Item', 'H', '<='),
        how='left')
    .loc[:, ['Item', 'Value', 'Name']]
)
Item Value Name
0 1 23 A
1 2 3 A
2 3 45 A
3 4 65 B
4 5 17 A
5 6 6 A
6 7 18 B
7 500 78 NaN
8 501 98 NaN

Pandas Python highest 2 rows of every 3 and tabling the results

Suppose I have the following dataframe:
. Column1 Column2
0 25 1
1 89 2
2 59 3
3 78 10
4 99 20
5 38 30
6 89 100
7 57 200
8 87 300
I'm not sure if what I want to do is possible or not. I want to compare every three rows of Column1, take the highest 2 of those three rows, and assign the corresponding 2 Column2 values to a new column. It does not matter whether the values in Column3 are joined or arranged, because I know every 2 rows of Column3 belong to every 3 rows of Column1.
. Column1 Column2 Column3
0 25 1 2
1 89 2 3
2 59 3
3 78 10 20
4 99 20 10
5 38 30
6 89 100 100
7 57 200 300
8 87 300
You can use np.arange with np.repeat to create a grouping array which groups every 3 rows.
Then use GroupBy.nlargest to take the 2 largest Column1 values per group, extract their row indices with pd.Index.get_level_values, and assign the matching Column2 values to Column3; pandas handles the index alignment.
n_grps = len(df) // 3  # one group label per 3 rows
g = np.repeat(np.arange(n_grps), 3)
idx = df.groupby(g)['Column1'].nlargest(2).index.get_level_values(1)
vals = df.loc[idx, 'Column2']
vals
# 1 2
# 2 3
# 4 20
# 3 10
# 6 100
# 8 300
# Name: Column2, dtype: int64
df['Column3'] = vals
df
Column1 Column2 Column3
0 25 1 NaN
1 89 2 2.0
2 59 3 3.0
3 78 10 10.0
4 99 20 20.0
5 38 30 NaN
6 89 100 100.0
7 57 200 NaN
8 87 300 300.0
To get output like you mentioned in the question (values sorted within each group and the NaN pushed to the last row), you have to perform this additional step.
df['Column3'] = df.groupby(g)['Column3'].apply(lambda x:x.sort_values()).values
Column1 Column2 Column3
0 25 1 2.0
1 89 2 3.0
2 59 3 NaN
3 78 10 10.0
4 99 20 20.0
5 38 30 NaN
6 89 100 100.0
7 57 200 300.0
8 87 300 NaN

Finding minimum value of a column between two entries in another column

I have two columns in a data frame containing more than 1000 rows. Column A can take values X,Y,None. Column B contains random numbers from 50 to 100.
Every time there is a non-'None' occurrence in Column A, it is considered occurrence4. So the previous non-None occurrence in Column A will be occurrence3, the one before that occurrence2, and the one before that occurrence1. I want to find the minimum value of Column B between occurrence4 and occurrence3 and check whether it is greater than the minimum value of Column B between occurrence2 and occurrence1. The results can be stored in a new column in the data frame as "YES" or "NO".
SAMPLE INPUT
ROWNUM A B
1 None 68
2 None 83
3 X 51
4 None 66
5 None 90
6 Y 81
7 None 81
8 None 100
9 None 83
10 None 78
11 X 68
12 None 53
13 None 83
14 Y 68
15 None 94
16 None 50
17 None 71
18 None 71
19 None 52
20 None 67
21 None 82
22 X 76
23 None 66
24 None 92
For example, I need to find the minimum value of Column B between ROWNUM 14 and ROWNUM 11 and check if it is GREATER THAN the minimum value of Column B between ROWNUM 6 and ROWNUM 3. Next, I need to find the minimum value between ROWNUM 22 AND ROWNUM 14 and check if it is GREATER THAN the minimum value between ROWNUM 11 and ROWNUM 6, and so on.
EDIT:
In the sample data, we start our calculation from row 14, since that is where we have the fourth non-None occurrence of Column A. The minimum value between row 14 and row 11 is 53. The minimum value between row 6 and row 3 is 51. Since 53 > 51, the minimum value of Column B between occurrence 4 and occurrence 3 is GREATER THAN the minimum value of Column B between occurrence 2 and occurrence 1. So, the output at row 14 would be "YES" or 1.
Next, at row 22, the minimum value between row 22 and row 14 is 50. The minimum value between row 11 and 6 is 68. Since 50 < 68, it means minimum between occurrence 4 and occurrence 3 is NOT GREATER THAN minimum between occurrence 2 and occurrence 1. So, output at row 22 would be "NO" or 0.
I have the following code.
import numpy as np
import pandas as pd
df = pd.DataFrame([[0, 0]]*100, columns=list('AB'), index=range(1, 101))
df.loc[[3, 6, 11, 14, 22, 26, 38, 51, 64, 69, 78, 90, 98], 'A'] = 1
df['B'] = np.random.randint(50, 100, size=len(df))
df['result'] = df.index[df['A'] != 0].to_series().rolling(4).apply(
lambda x: df.loc[x[2]:x[3], 'B'].min() > df.loc[x[0]:x[1], 'B'].min(), raw=True)
print(df)
This code works when column A has inputs [0,1]. But I need a code where column A could contain [None, X, Y]. Also, this code produces output as [0,1]. I need output as [YES, NO] instead.
I read your sample data as follows:
df = pd.read_fwf('input.txt', widths=[7, 6, 3], na_values=['None'])
Note na_values=['None'], which ensures that the string None in the input is read as NaN.
This way the DataFrame is:
ROWNUM A B
0 1 NaN 68
1 2 NaN 83
2 3 X 51
3 4 NaN 66
4 5 NaN 90
5 6 Y 81
6 7 NaN 81
7 8 NaN 100
8 9 NaN 83
9 10 NaN 78
10 11 X 68
11 12 NaN 53
12 13 NaN 83
13 14 Y 69
14 15 NaN 94
15 16 NaN 50
16 17 NaN 71
17 18 NaN 71
18 19 NaN 52
19 20 NaN 67
20 21 NaN 82
21 22 X 76
22 23 NaN 66
23 24 NaN 92
The code to do your task is:
res = df.index[df.A.notnull()].to_series().rolling(4).apply(
lambda x: df.loc[x[2]:x[3], 'B'].min() > df.loc[x[0]:x[1], 'B'].min(), raw=True)\
.dropna().map(lambda x: 'YES' if x > 0 else 'NO').rename('Result')
df = df.join(res)
df.Result.fillna('', inplace=True)
As you can see, it is in part a slight change of your code, with some
additions.
The result is:
ROWNUM A B Result
0 1 NaN 68
1 2 NaN 83
2 3 X 51
3 4 NaN 66
4 5 NaN 90
5 6 Y 81
6 7 NaN 81
7 8 NaN 100
8 9 NaN 83
9 10 NaN 78
10 11 X 68
11 12 NaN 53
12 13 NaN 83
13 14 Y 69 YES
14 15 NaN 94
15 16 NaN 50
16 17 NaN 71
17 18 NaN 71
18 19 NaN 52
19 20 NaN 67
20 21 NaN 82
21 22 X 76 NO
22 23 NaN 66
23 24 NaN 92
The advantages of my solution over the other are that:
the content is either YES or NO, just as you want,
this content shows up only for non-null values in the A column,
"ignoring" the first 3, which don't have enough "predecessors".
Here's my approach:
def is_incr(x):
return x[:2].min() > x[2:].min()
# replace with s = df['A'] == 'None' if needed
s = df['A'].isna()
df['new_col'] = df.loc[s, 'B'].rolling(4).apply(is_incr)
Output:
ROWNUM A B new_col
0 1 NaN 68 NaN
1 2 NaN 83 NaN
2 3 X 51 NaN
3 4 NaN 66 NaN
4 5 NaN 90 1.0
5 6 Y 81 NaN
6 7 NaN 81 0.0
7 8 NaN 100 0.0
8 9 NaN 83 0.0
9 10 NaN 78 1.0
10 11 X 68 NaN
11 12 NaN 53 1.0
12 13 NaN 83 1.0
13 14 Y 68 NaN
14 15 NaN 94 0.0
15 16 NaN 50 1.0
16 17 NaN 71 1.0
17 18 NaN 71 0.0
18 19 NaN 52 0.0
19 20 NaN 67 1.0
20 21 NaN 82 0.0
21 22 X 76 NaN
22 23 NaN 66 0.0
23 24 NaN 92 1.0
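If you need the YES/NO labels from the question instead of 1.0/0.0, one small follow-up sketch (assuming new_col was built as above) is to map the flags, leaving the NaN rows untouched:
df['new_col'] = df['new_col'].map({1.0: 'YES', 0.0: 'NO'})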

Python Pandas add rows based on missing sequential values in a timeseries

I'm new to Python and struggling to manipulate data with the pandas library. I have a pandas DataFrame like this:
Year Value
0 91 1
1 93 4
2 94 7
3 95 10
4 98 13
And I want to fill in the missing years by creating rows with empty values, like this:
Year Value
0 91 1
1 92 0
2 93 4
3 94 7
4 95 10
5 96 0
6 97 0
7 98 13
How do i do that in Python?
(I wanna do that so I can plot Values without skipping years)
I would create a new dataframe that has Year as an index and includes the entire year range that you need to cover. Then you can simply set the values across the two dataframes, and the index will make sure that the correct rows are matched (I've had to use fillna to set the missing years to zero; by default they will be set to NaN):
df = pd.DataFrame({'Year':[91,93,94,95,98],'Value':[1,4,7,10,13]})
df.index = df.Year
df2 = pd.DataFrame({'Year':range(91,99), 'Value':0})
df2.index = df2.Year
df2.Value = df.Value
df2= df2.fillna(0)
df2
Value Year
Year
91 1 91
92 0 92
93 4 93
94 7 94
95 10 95
96 0 96
97 0 97
98 13 98
Finally you can use reset_index if you don't want Year as your index:
df2.drop('Year', axis=1).reset_index()
Year Value
0 91 1
1 92 0
2 93 4
3 94 7
4 95 10
5 96 0
6 97 0
7 98 13
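An alternative sketch of the same idea, using reindex so you don't need to build df2 by hand (the year span 91-98 is taken from the sample data):
import pandas as pd
df = pd.DataFrame({'Year': [91, 93, 94, 95, 98], 'Value': [1, 4, 7, 10, 13]})
# reindex over the full span of years, filling the missing years with 0
df2 = (df.set_index('Year')
         .reindex(range(91, 99), fill_value=0)
         .reset_index())
print(df2)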

Python Pandas Dataframe fill NaN values

I am trying to fill NaN values in a dataframe with values coming from a standard normal distribution.
This is currently my code:
sqlStatement = "select * from sn.clustering_normalized_dataset"
df = psql.frame_query(sqlStatement, cnx)
data=df.pivot("user","phrase","tfw")
dfrand = pd.DataFrame(data=np.random.randn(data.shape[0],data.shape[1]))
data[np.isnan(data)] = dfrand[np.isnan(data)]
After pivoting, the dataframe 'data' looks like this:
phrase aaron abbas abdul abe able abroad abu abuse \
user
14233664 NaN NaN NaN NaN NaN NaN NaN NaN
52602716 NaN NaN NaN NaN NaN NaN NaN NaN
123456789 NaN NaN NaN NaN NaN NaN NaN NaN
500158258 NaN NaN NaN NaN NaN NaN NaN NaN
517187571 0.4 NaN NaN 0.142857 1 0.4 0.181818 NaN
However, I need each NaN value to be replaced with a new random value. So I created a new df consisting of only random values (dfrand) and then tried to swap the missing numbers (NaN) with the values from dfrand corresponding to the indices of the NaNs. Unfortunately it doesn't work:
Although the expression
np.isnan(data)
returns a dataframe consisting of True and False values, the expression
dfrand[np.isnan(data)]
returns only NaN values, so the overall trick doesn't work.
Any ideas what the issue is?
Three thousand columns is not so many. How many rows do you have? You could always make a random dataframe of the same size and do a logical replacement (the size of your dataframe will dictate whether this is feasible or not).
if you know the size of your dataframe:
import pandas as pd
import numpy as np
# create random dummy dataframe
dfrand = pd.DataFrame(data=np.random.randn(rows,cols))
# import "real" dataframe
data = pd.read_csv(etc.) # or however you choose to read it in
# replace nans
data[np.isnan(data)] = dfrand[np.isnan(data)]
if you do not know the size of your dataframe, just shuffle things around
import pandas as pd
import numpy as np
# import "real" dataframe
data = pd.read_csv(etc.) # or however you choose to read it in
# create random dummy dataframe
dfrand = pd.DataFrame(data=np.random.randn(data.shape[0],data.shape[1]))
# replace nans
data[np.isnan(data)] = dfrand[np.isnan(data)]
EDIT
Per "users" last comment:
"dfrand[np.isnan(data)] returns NaN only."
Right! And that is exactly what you wanted. In my solution I have: data[np.isnan(data)] = dfrand[np.isnan(data)]. Translated, this means: take the randomly-generated value from dfrand that corresponds to the NaN-location within "data" and insert it in "data" where "data" is NaN. An example will help:
a = pd.DataFrame(data=np.random.randint(0,100,(10,3)))
a[0][5] = np.nan
In [32]: a
Out[33]:
0 1 2
0 2 26 28
1 14 79 82
2 89 32 59
3 65 47 31
4 29 59 15
5 NaN 58 90
6 15 66 60
7 10 19 96
8 90 26 92
9 0 19 23
# define randomly-generated dataframe, much like what you are doing, and replace NaN's
b = pd.DataFrame(data=np.random.randint(0,100,(10,3)))
In [39]: b
Out[39]:
0 1 2
0 92 21 55
1 65 53 89
2 54 98 97
3 48 87 79
4 98 38 62
5 46 16 30
6 95 39 70
7 90 59 9
8 14 85 37
9 48 29 46
a[np.isnan(a)] = b[np.isnan(a)]
In [38]: a
Out[38]:
0 1 2
0 2 26 28
1 14 79 82
2 89 32 59
3 65 47 31
4 29 59 15
5 46 58 90
6 15 66 60
7 10 19 96
8 90 26 92
9 0 19 23
As you can see, all NaNs in a have been replaced with the randomly-generated values in b based on a's NaN-value indices.
you could try something like this, assuming you are dealing with one series:
ser = data['column_with_nulls_to_replace']
index = ser[ser.isnull()].index
df = pd.DataFrame(np.random.randn(len(index)), index=index, columns=['column_with_nulls_to_replace'])
ser.update(df)
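Another compact sketch for the single-column case (assuming data is your pivoted frame and 'column_with_nulls_to_replace' is the column in question): select the NaN rows with a boolean mask and assign exactly that many random draws.
import numpy as np
mask = data['column_with_nulls_to_replace'].isnull()
data.loc[mask, 'column_with_nulls_to_replace'] = np.random.randn(mask.sum())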
