Dataframe calculation - python

I want to do the following calculation, and the outcome has to go into a new column called "calculation trap":
test["calculation trap"] = ((0.000164 + 0.000415)/2)
So the outcome of this formula has to be 0.0002895.
I tried the following code to do the calculation for the whole column, but I got the outcome shown below:
test["calculation trap"] = ((test["calculation"][0:]+test["calculation"][1:])/2).reset_index(drop=True)
     Temp  calculation  calculation trap
0   90.01     0.000164               NaN
1   91.03     0.000415          0.000415
2   95.06     0.001315          0.001315
3  100.07     0.002896          0.002896
4  103.50          NaN               NaN

Use Series.shift with -1:
test["calculation trap"] = ((test["calculation"].shift(-1)+test["calculation"])/2)
print (test)
     Temp  calculation  calculation trap
0   90.01     0.000164          0.000290
1   91.03     0.000415          0.000865
2   95.06     0.001315          0.002106
3  100.07     0.002896               NaN
4  103.50          NaN               NaN
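For context, the original attempt does not work because pandas aligns Series on their index labels before adding: test["calculation"][0:] and test["calculation"][1:] keep their original labels, so each value is paired with itself rather than with its neighbour, and (x + x)/2 just gives x back. A minimal demonstration with the same values:
import pandas as pd

s = pd.Series([0.000164, 0.000415, 0.001315, 0.002896, float("nan")])
# s[0:] and s[1:] keep their original index labels, so the addition
# pairs label 1 with label 1, label 2 with label 2, and so on.
print((s[0:] + s[1:]) / 2)
# 0         NaN
# 1    0.000415
# 2    0.001315
# 3    0.002896
# 4         NaN
# dtype: float64
shift(-1) avoids this by moving each value onto the previous label before the addition.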

Related

How to multiply different columns in different dataframes using Pandas

I have 2 dataframes that I want to multiply: multiple columns from dataframe 1 by one column in dataframe 2.
raw_material_LCI = dataframe1[["climate change", "ozone depletion",
                               "ionising radiation, hh", "photochemical ozone formation, hh",
                               "particulate matter", "human toxicity, non-cancer",
                               "human toxicity, cancer", "acidification",
                               "eutrophication, freshwater", "eutrophication, marine",
                               "eutrophication, terrestrial", "ecotoxicity, freshwater",
                               "land use", "resource use, fossils", "resource use, minerals and metals",
                               "water scarcity"]] * dataframe2["mass_frac"]
The above code returns a dataframe where all the values are NaN, even though all the listed columns contain numeric values.
I decided to try multiplying dataframe1 by just a single value to see if it worked, e.g. the example below:
raw_material_LCI = dataframe1[["climate change", "ozone depletion",
                               "ionising radiation, hh", "photochemical ozone formation, hh",
                               "particulate matter", "human toxicity, non-cancer",
                               "human toxicity, cancer", "acidification",
                               "eutrophication, freshwater", "eutrophication, marine",
                               "eutrophication, terrestrial", "ecotoxicity, freshwater",
                               "land use", "resource use, fossils", "resource use, minerals and metals",
                               "water scarcity"]] * 0.7
The example with the single value returns a dataframe with numbers, so it works. Does anyone know why the multiplication in the first instance does not work? I have looked at multiple articles on multiplying columns across dataframes in pandas and cannot find a solution.
You have to align both row and column indexes when you multiply two DataFrames, and align the row index when you multiply a DataFrame by a Series:
>>> df
A B C D E
0 0.787081 0.350508 0.058542 0.492340 0.489379
1 0.512436 0.501375 0.108115 0.960808 0.841969
2 0.055247 0.305830 0.976043 0.016188 0.006424
3 0.303570 0.914876 0.157100 0.767454 0.340381
4 0.446077 0.595001 0.307799 0.115410 0.568281
5 0.226516 0.636902 0.086790 0.079260 0.402414
6 0.451920 0.526025 0.012470 0.931610 0.267155
7 0.472778 0.137005 0.227569 0.941355 0.584782
8 0.944396 0.769115 0.497214 0.531419 0.570797
9 0.788023 0.310288 0.336480 0.585466 0.432246
>>> sr
0 0.920878
1 0.445332
2 0.894407
3 0.613317
4 0.242270
5 0.299121
6 0.843052
7 0.279014
8 0.526778
9 0.249538
dtype: float64
So this produces NaN values, because the Series index (0-9) is aligned against the DataFrame columns (A-E) and no labels match:
>>> df * sr
    A   B   C   D   E   0   1   2   3   4   5   6   7   8   9
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
...
9 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
but using mul along the index axis works as expected:
>>> df.mul(sr, axis=0) # but not df.mul(sr) (same as df*sr)
A B C D E
0 0.724805 0.322775 0.053910 0.453385 0.450658
1 0.228204 0.223279 0.048147 0.427878 0.374956
2 0.049413 0.273536 0.872980 0.014479 0.005745
3 0.186185 0.561109 0.096352 0.470693 0.208762
4 0.108071 0.144151 0.074571 0.027961 0.137678
5 0.067756 0.190511 0.025961 0.023708 0.120371
6 0.380992 0.443466 0.010513 0.785396 0.225226
7 0.131912 0.038226 0.063495 0.262651 0.163162
8 0.497487 0.405153 0.261921 0.279940 0.300683
9 0.196642 0.077429 0.083965 0.146096 0.107862
If your Series and DataFrame do not have the same index, you get a partial result:
>>> df.mul(sr.iloc[:5], axis=0)
A B C D E
0 0.724805 0.322775 0.053910 0.453385 0.450658
1 0.228204 0.223279 0.048147 0.427878 0.374956
2 0.049413 0.273536 0.872980 0.014479 0.005745
3 0.186185 0.561109 0.096352 0.470693 0.208762
4 0.108071 0.144151 0.074571 0.027961 0.137678
5 NaN NaN NaN NaN NaN
6 NaN NaN NaN NaN NaN
7 NaN NaN NaN NaN NaN
8 NaN NaN NaN NaN NaN
9 NaN NaN NaN NaN NaN
>>> df.mul(sr.iloc[5:], axis=0)
A B C D E
0 NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN
5 0.067756 0.190511 0.025961 0.023708 0.120371
6 0.380992 0.443466 0.010513 0.785396 0.225226
7 0.131912 0.038226 0.063495 0.262651 0.163162
8 0.497487 0.405153 0.261921 0.279940 0.300683
9 0.196642 0.077429 0.083965 0.146096 0.107862
Take care that both instances share the same index.
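Applied to the question above, a minimal sketch (assuming dataframe2["mass_frac"] shares its row index with dataframe1; impact_cols is just a name used here for brevity):
impact_cols = ["climate change", "ozone depletion"]  # ... plus the other impact columns
# mul(..., axis=0) aligns mass_frac on the row index instead of on the
# column labels, which is what the plain * operator did wrong here.
raw_material_LCI = dataframe1[impact_cols].mul(dataframe2["mass_frac"], axis=0)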

Pandas dataframe merge row by addition

I want to create a dataframe from census data. I want to calculate the number of people that filed a tax return for each specific earnings group.
For now, I wrote this:
census_df = pd.read_csv('../zip code data/19zpallagi.csv')
sub_census_df = census_df[['zipcode', 'agi_stub', 'N02650', 'A02650', 'ELDERLY', 'A07180']].copy()
num_of_returns = ['Number_of_returns_1_25000', 'Number_of_returns_25000_50000', 'Number_of_returns_50000_75000',
                  'Number_of_returns_75000_100000', 'Number_of_returns_100000_200000', 'Number_of_returns_200000_more']
for i, column_name in zip(range(1, 7), num_of_returns):
    sub_census_df[column_name] = sub_census_df[sub_census_df['agi_stub'] == i]['N02650']
I have 6 groups attached to each specific zip code. I want to get one row per zip code, with the number of returns for each group appearing just once as a column. I already tried changing NaNs to 0 and using groupby('zipcode').sum(), but the sum for zip code 0 comes out around 50 million, where it seems that only around 800k should exist.
Here is the dataframe that I currently get:
zipcode agi_stub N02650 A02650 ELDERLY A07180 Number_of_returns_1_25000 Number_of_returns_25000_50000 Number_of_returns_50000_75000 Number_of_returns_75000_100000 Number_of_returns_100000_200000 Number_of_returns_200000_more Amount_1_25000 Amount_25000_50000 Amount_50000_75000 Amount_75000_100000 Amount_100000_200000 Amount_200000_more
0 0 1 778140.0 10311099.0 144610.0 2076.0 778140.0 NaN NaN NaN NaN NaN 10311099.0 NaN NaN NaN NaN NaN
1 0 2 525940.0 19145621.0 113810.0 17784.0 NaN 525940.0 NaN NaN NaN NaN NaN 19145621.0 NaN NaN NaN NaN
2 0 3 285700.0 17690402.0 82410.0 9521.0 NaN NaN 285700.0 NaN NaN NaN NaN NaN 17690402.0 NaN NaN NaN
3 0 4 179070.0 15670456.0 57970.0 8072.0 NaN NaN NaN 179070.0 NaN NaN NaN NaN NaN 15670456.0 NaN NaN
4 0 5 257010.0 35286228.0 85030.0 14872.0 NaN NaN NaN NaN 257010.0 NaN NaN NaN NaN NaN 35286228.0 NaN
And here is what I want to get:
zipcode Number_of_returns_1_25000 Number_of_returns_25000_50000 Number_of_returns_50000_75000 Number_of_returns_75000_100000 Number_of_returns_100000_200000 Number_of_returns_200000_more
0 0 778140.0 525940.0 285700.0 179070.0 257010.0 850.0
Here is one way to do it, using groupby and summing the desired columns:
num_of_returns = ['Number_of_returns_1_25000', 'Number_of_returns_25000_50000', 'Number_of_returns_50000_75000',
                  'Number_of_returns_75000_100000', 'Number_of_returns_100000_200000', 'Number_of_returns_200000_more']
df.groupby('zipcode', as_index=False)[num_of_returns].sum()
zipcode Number_of_returns_1_25000 Number_of_returns_25000_50000 Number_of_returns_50000_75000 Number_of_returns_75000_100000 Number_of_returns_100000_200000 Number_of_returns_200000_more
0 0 778140.0 525940.0 285700.0 179070.0 257010.0 0.0
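If you also want the Amount_* columns visible in the question's dataframe, the same pattern extends naturally (a sketch, assuming the column names shown in the question):
amounts = ['Amount_1_25000', 'Amount_25000_50000', 'Amount_50000_75000',
           'Amount_75000_100000', 'Amount_100000_200000', 'Amount_200000_more']
# sum both groups of columns per zip code in one pass
df.groupby('zipcode', as_index=False)[num_of_returns + amounts].sum()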
This question needs more information to give a proper answer. For example, you leave out what certain columns in your dataframe mean:
- `N1: Number of returns`
- `agi_stub: Size of adjusted gross income`
According to the IRS, "Size of adjusted gross income" (agi_stub) has the following levels:
0 = 'No AGI Stub'
1 = 'Under $1'
2 = '$1 under $10,000'
3 = '$10,000 under $25,000'
4 = '$25,000 under $50,000'
5 = '$50,000 under $75,000'
6 = '$75,000 under $100,000'
7 = '$100,000 under $200,000'
8 = '$200,000 under $500,000'
9 = '$500,000 under $1,000,000'
10 = '$1,000,000 or more'
I got the above from https://www.irs.gov/pub/irs-soi/16incmdocguide.doc
With this information, I think what you want to find is the number of people who filed a tax return for each of the income levels of agi_stub. If that is what you mean, then this can be achieved by:
import pandas as pd
data = pd.read_csv("./data/19zpallagi.csv")
## select only the desired columns
data = data[['zipcode', 'agi_stub', 'N1']]
## solution to your problem?
df = data.pivot_table(
    index='zipcode',
    values='N1',
    columns='agi_stub',
    aggfunc=['sum']
)
## bit of cleaning up.
PREFIX = 'agi_stub_level_'
df.columns = [PREFIX + level for level in df.columns.get_level_values(1).astype(str)]
Here's the output.
In [77]: df
Out[77]:
agi_stub_level_1 agi_stub_level_2 ... agi_stub_level_5 agi_stub_level_6
zipcode ...
0 50061850.0 37566510.0 ... 21938920.0 8859370.0
1001 2550.0 2230.0 ... 1420.0 230.0
1002 2850.0 1830.0 ... 1840.0 990.0
1005 650.0 570.0 ... 450.0 60.0
1007 1980.0 1530.0 ... 1830.0 460.0
... ... ... ... ... ...
99827 470.0 360.0 ... 170.0 40.0
99833 550.0 380.0 ... 290.0 80.0
99835 1250.0 1130.0 ... 730.0 190.0
99901 1960.0 1520.0 ... 1030.0 290.0
99999 868450.0 644160.0 ... 319880.0 142960.0
[27595 rows x 6 columns]
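If you want zipcode back as a regular column rather than the index (as in the expected output above), a follow-up reset works:
df = df.reset_index()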

How to get min values of a column by rolling over another column?

GROUP_NAV_DATE GROUP_REH_VALUE target
0 2018/11/29 1 1.06
1 2018/11/30 1.0029 1.063074
2 2018/12/3 1.03 1.0918
3 2018/12/4 1.032 1.09392
4 2018/12/5 1.0313 1.093178
5 2020/12/6 1.034 1.09604
6 2020/12/8 1.062 1.12572
7 2020/12/9 1.07 1.1342
8 2020/12/10 1 1.06
9 2020/12/11 0.99 1.0494
10 2020/12/12 0.96 1.0176
11 2020/12/13 1.062 1.12572
Goal
Create a first_date column whose values come from GROUP_NAV_DATE. For each row, first_date is the date of the first later row whose GROUP_REH_VALUE is greater than the current row's target.
For example, for GROUP_REH_VALUE=1 at index 0, the first match is 2020/12/8. For index 9, the first match is 2020/12/13, not 2020/12/8.
Note: for each row, the target value is 1.06*GROUP_REH_VALUE.
Expected output:
GROUP_NAV_DATE GROUP_REH_VALUE target first_date
0 2018/11/29 1 1.06 2020/12/8
1 2018/11/30 1.0029 1.063074 2020/12/9
2 2018/12/3 1.03 1.0918 NA
3 2018/12/4 1.032 1.09392 NA
4 2018/12/5 1.0313 1.093178 NA
5 2020/12/6 1.034 1.09604 NA
6 2020/12/8 1.062 1.12572 NA
7 2020/12/9 1.07 1.1342 NA
8 2020/12/10 1 1.06 2020/12/13
9 2020/12/11 0.99 1.0494 2020/12/13
10 2020/12/12 0.96 1.0176 2020/12/13
11 2020/12/13 1.062 1.12572 NA
I tried rolling and idxmin, but since the condition depends on another column, I could not get the answer.
You can use expanding, but this code works only because:
- There is a direct relation between GROUP_REH_VALUE and target (target = 1.06*GROUP_REH_VALUE), so the target column is not needed.
- You have a numeric index: expanding checks that the returned value is numeric, so you would get TypeError: must be real number, not str if GROUP_NAV_DATE were the index.
import numpy as np

def f(sr):
    # rows in the window whose value exceeds the current row's target
    m = sr.iloc[-1]*1.06 < sr
    # on the reversed window, last_valid_index is the earliest matching row
    return sr[m].last_valid_index() if sum(m) else np.nan

# Need to reverse the dataframe because you are looking forward.
idx = df.loc[::-1, 'GROUP_REH_VALUE'].expanding().apply(f).dropna()
# Set dates
df.loc[idx.index, 'first_time'] = df.loc[idx, 'GROUP_NAV_DATE'].tolist()
Output:
>>> df
GROUP_NAV_DATE GROUP_REH_VALUE target first_time
0 2018/11/29 1.0000 1.060000 2020/12/8
1 2018/11/30 1.0029 1.063074 2020/12/9
2 2018/12/3 1.0300 1.091800 NaN
3 2018/12/4 1.0320 1.093920 NaN
4 2018/12/5 1.0313 1.093178 NaN
5 2020/12/6 1.0340 1.096040 NaN
6 2020/12/8 1.0620 1.125720 NaN
7 2020/12/9 1.0700 1.134200 NaN
8 2020/12/10 1.0000 1.060000 2020/12/13
9 2020/12/11 0.9900 1.049400 2020/12/13
10 2020/12/12 0.9600 1.017600 2020/12/13
11 2020/12/13 1.0620 1.125720 NaN
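If the reversed expanding trick feels opaque, an explicit loop (not from the original answer, and O(n^2) just like the expanding version) produces the same column:
import numpy as np

vals = df['GROUP_REH_VALUE'].to_numpy()
dates = df['GROUP_NAV_DATE'].to_numpy()

first_date = []
for i, v in enumerate(vals):
    # look forward for the first value strictly above this row's target
    later = np.nonzero(vals[i + 1:] > v * 1.06)[0]
    first_date.append(dates[i + 1 + later[0]] if len(later) else np.nan)

df['first_date'] = first_date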

How to remove specific things from data in Python

I have a data like this:
draft_round
0 1st round
1 3rd round
2 1st round
3 16th round
4 2nd round
... ...
4680 1st round
4681 NaN
4682 2nd round
4683 2nd round
4684 1947 BAA Draf
As you can see, the rows combine words and numbers. The important thing for me is to get the numbers out of these rows. For example, I want to get the number "1" from a row containing "1st round" and "16" from a row containing "16th round". In other words, I want the result to be as follows:
draft_round
0 1
1 3
2 1
3 16
4 2
... ...
4680 1
4681 NaN
4682 2
4683 2
4684 1947 BAA Draf
I hope I was able to explain my problem, thanks in advance.
You can try .str.replace:
df["draft_round"] = df["draft_round"].str.replace(
r"(\d+).*round", r"\1", regex=True
)
print(df)
Prints:
draft_round
0 1
1 3
2 1
3 16
4 2
4680 1
4681 NaN
4682 2
4683 2
4684 1947 BAA Draf
Or try str.split:
df['draft_round'] = df['draft_round'].str.split(pat='[a-z]', expand=True)[0]
Note that this splits on any lowercase letter, so a row like "1947 BAA Draf" becomes "1947 BAA D" rather than staying unchanged.
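An alternative not shown in the answers above is str.extract, which only touches rows that actually match the "<n>th round" pattern and leaves everything else (including "1947 BAA Draf" and NaN) untouched; a sketch:
# extract the leading digits only where the value looks like "<n>st/nd/rd/th round"
extracted = df["draft_round"].str.extract(r"^(\d+)(?:st|nd|rd|th) round$", expand=False)
df["draft_round"] = extracted.fillna(df["draft_round"])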

Pandas : How to calculate PCT Change for all columns dynamically?

I got the following pandas df by using the command below. How can I get the pct change for all the columns dynamically (AAL, AAN, ... 100 more)?
price['AABA_PCT_CHG'] = price.AABA.pct_change()
AABA AAL AAN AABA_PCT_CHG
0 16.120001 9.635592 18.836105 NaN
1 16.400000 8.363149 23.105881 0.017370
2 16.680000 8.460282 24.892321 0.017073
3 17.700001 8.829385 28.275263 0.061151
4 16.549999 8.839100 27.705627 -0.064972
5 15.040000 8.654548 27.754738 -0.091239
Apply pct_change on the whole dataframe:
In [424]: price.pct_change().add_suffix('_PCT_CHG')
Out[424]:
AABA_PCT_CHG AAL_PCT_CHG AAN_PCT_CHG
0 NaN NaN NaN
1 0.017370 -0.132057 0.226680
2 0.017073 0.011614 0.077315
3 0.061151 0.043628 0.135903
4 -0.064972 0.001100 -0.020146
5 -0.091239 -0.020879 0.001773
In [425]: price.join(price.pct_change().add_suffix('_PCT_CHG'))
Out[425]:
AABA AAL AAN AABA_PCT_CHG AAL_PCT_CHG AAN_PCT_CHG
0 16.120001 9.635592 18.836105 NaN NaN NaN
1 16.400000 8.363149 23.105881 0.017370 -0.132057 0.226680
2 16.680000 8.460282 24.892321 0.017073 0.011614 0.077315
3 17.700001 8.829385 28.275263 0.061151 0.043628 0.135903
4 16.549999 8.839100 27.705627 -0.064972 0.001100 -0.020146
5 15.040000 8.654548 27.754738 -0.091239 -0.020879 0.001773
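One caveat (an assumption based on the question's frame, which already holds a derived AABA_PCT_CHG column): calling pct_change on the whole frame would also compute the change of that change. A sketch that drops the derived columns first, relying on the _PCT_CHG suffix convention:
# keep only the raw price columns before computing the changes
raw = price.drop(columns=[c for c in price.columns if c.endswith('_PCT_CHG')])
price = raw.join(raw.pct_change().add_suffix('_PCT_CHG'))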
