How to extract a column from a pandas pivot_table?

I have the following pandas pivot_table:
print table
Year
1980.0 11.38
1981.0 35.68
1982.0 28.88
1983.0 16.80
1984.0 50.35
1985.0 53.95
1986.0 37.08
1987.0 21.70
1988.0 47.21
1989.0 73.45
1990.0 49.37
1991.0 32.23
1992.0 76.14
1993.0 45.99
1994.0 79.22
1995.0 88.11
1996.0 199.15
1997.0 201.07
1998.0 256.33
1999.0 251.12
2000.0 201.63
2001.0 331.49
2002.0 394.97
2003.0 357.61
2004.0 418.85
2005.0 459.41
2006.0 520.52
2007.0 610.44
2008.0 678.49
2009.0 667.39
2010.0 600.36
2011.0 515.93
2012.0 363.30
2013.0 367.98
2014.0 337.10
2015.0 264.26
dtype: float64
How do I extract the first column of this pivot_table? If I just do table[:,0], it gives me ValueError: Can only tuple-index with a MultiIndex. What can I do to extract the first column of the table?

Simply call reset_index(). Below is a reproducible example that then uses loc to slice the column:
import numpy as np
import pandas as pd
np.random.seed(44)
# RANDOM DATA WITH US CLASS I RAILROADS
df = pd.DataFrame({'Name': ['UP', 'BNSF', 'CSX', 'KCS', 'NSF', 'CN', 'CP']*5,
                   'Other_Sales': np.random.randn(35),
                   'Year': list(range(2007, 2014))*5})
table = df.pivot_table('Other_Sales', columns='Name',
                       index='Year', aggfunc='sum')
print(table)
# Name BNSF CN CP CSX KCS NSF UP
# Year
# 2007 NaN NaN NaN NaN NaN NaN -1.785934
# 2008 1.605111 NaN NaN NaN NaN NaN NaN
# 2009 NaN NaN NaN 1.800014 NaN NaN NaN
# 2010 NaN NaN NaN NaN -2.577264 NaN NaN
# 2011 NaN NaN NaN NaN NaN 0.899372 NaN
# 2012 NaN -3.988874 NaN NaN NaN NaN NaN
# 2013 NaN NaN 1.725111 NaN NaN NaN NaN
table = df.pivot_table('Other_Sales', columns='Name',
                       index='Year', aggfunc='sum').sum(axis=1).reset_index()
print(table)
# Year 0
# 0 2007 -1.785934
# 1 2008 1.605111
# 2 2009 1.800014
# 3 2010 -2.577264
# 4 2011 0.899372
# 5 2012 -3.988874
# 6 2013 1.725111
print(table.loc[:,0])
# 0 -1.785934
# 1 1.605111
# 2 1.800014
# 3 -2.577264
# 4 0.899372
# 5 -3.988874
# 6 1.725111
# Name: 0, dtype: float64
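Note that the table printed in the question is a Series (a single column of values with Year as the index), so its "first column" is really the index. A couple of direct alternatives, sketched under that assumption:
# Sketch, assuming `table` is the Year-indexed Series from the question.
years = table.index.to_numpy()             # the Year labels
values = table.to_numpy()                  # the lone column of values
# Or convert to a two-column frame and slice by position:
first_col = table.reset_index().iloc[:, 0]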

Related

How to solve ValueError: DataFrame constructor not properly called

I have tried to put the CSV data into a pandas DataFrame but I am getting the error "DataFrame constructor not properly called!". I have uploaded the CSV file to GitHub.
file = "https://raw.githubusercontent.com/gambler2020/Data_Analysis/master/Economy/WEO_Data.csv"
with open("WEO_Data.csv", encoding='utf-16') as f:
    contents = f.read()
df = pd.DataFrame(contents)
ValueError: DataFrame constructor not properly called!
How do I solve this error?
The constructor fails because pd.DataFrame expects tabular data (a dict, list, ndarray, or another DataFrame), not a raw string. You should just use pd.read_csv to do the parsing:
df = pd.read_csv("WEO_Data.csv")
If you plan to amend the data, you can do so after reading it into a pandas DataFrame.
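For example, a hypothetical amendment after loading (the encoding comes from the question's open() call; the cleanup steps are purely illustrative):
df = pd.read_csv("WEO_Data.csv", encoding='utf-16')
# Illustrative cleanup: strip whitespace from column names and
# drop rows that are entirely empty.
df.columns = df.columns.str.strip()
df = df.dropna(how='all')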
Use read_csv with the encoding='utf-16' parameter:
file = "https://raw.githubusercontent.com/gambler2020/Data_Analysis/master/Economy/WEO_Data.csv"
df = pd.read_csv(file, encoding='utf-16')
print(df.head())
Country 1980 1981 1982 1983 1984 1985 1986 \
0 Afghanistan NaN NaN NaN NaN NaN NaN NaN
1 Albania 1.946 2.229 2.296 2.319 2.290 2.339 2.587
2 Algeria 42.346 44.372 44.780 47.529 51.513 61.132 61.535
3 Andorra NaN NaN NaN NaN NaN NaN NaN
4 Angola 6.639 6.214 6.214 6.476 6.864 8.457 7.918
1987 1988 1989 1990 1991 1992 1993 1994 1995 \
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 2.566 2.530 2.779 2.221 1.333 0.843 1.461 2.361 2.882
2 63.300 51.664 52.558 61.892 46.670 49.217 50.963 42.426 42.066
3 NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 9.050 9.818 11.421 12.571 12.186 9.395 6.819 4.965 6.197
1996 1997 1998 1999 2000 2001 2002 2003 2004 \
0 NaN NaN NaN NaN NaN NaN 4.367 4.553 5.146
1 3.200 2.259 2.560 3.209 3.483 3.928 4.348 5.611 7.185
2 46.941 48.178 48.188 48.845 54.749 54.745 56.761 67.864 85.332
3 NaN NaN NaN NaN 1.429 1.547 1.758 2.362 2.896
4 7.994 9.388 7.958 7.526 11.166 10.930 15.286 17.813 23.552
...

Copying a column from one pandas dataframe to another

I have the following code, where I try to copy the EXPIRATION column from the recent dataframe to the EXPIRATION column in the destination dataframe:
recent = pd.read_excel(r'Y:\Attachments' + '\\' + '962021.xlsx')
print('HERE\n',recent)
print('HERE2\n', recent['EXPIRATION'])
destination= pd.read_excel(r'Y:\Attachments' + '\\' + 'Book1.xlsx')
print('HERE3\n', destination)
destination['EXPIRATION']= recent['EXPIRATION']
print('HERE4\n', destination)
The problem is that destination has fewer rows than recent, so some of the lower rows in the EXPIRATION column from recent do not end up in the destination dataframe. I want all the EXPIRATION values from recent to be in the destination dataframe, even if all the other values are NaN.
Example Output:
HERE
Unnamed: 0 IGNORE DATE_TRADE DIRECTION EXPIRATION NAME OPTION_TYPE PRICE QUANTITY STRATEGY STRIKE TIME_TRADE TYPE UNDERLYING
0 0 21 6/9/2021 B 08/06/2021 BNP FP E C 12 12 CONDORI 12 9:23:40 ETF NASDAQ
1 1 22 6/9/2021 B 16/06/2021 BNP FP E P 12 12 GOLD/SILVER 12 10:9:19 ETF NASDAQ
2 2 23 6/9/2021 B 16/06/2021 TEST P 12 12 CONDORI 21 10:32:12 EQT TEST
3 3 24 6/9/2021 B 22/06/2021 TEST P 12 12 GOLD/SILVER 12 10:35:5 EQT NASDAQ
4 4 0 6/9/2021 B 26/06/2021 TEST P 12 12 GOLD/SILVER 12 10:37:11 ETF FTSE100
HERE2
0 08/06/2021
1 16/06/2021
2 16/06/2021
3 22/06/2021
4 26/06/2021
Name: EXPIRATION, dtype: object
HERE3
Unnamed: 0 IGNORE DATE_TRADE DIRECTION EXPIRATION NAME OPTION_TYPE PRICE QUANTITY STRATEGY STRIKE TIME_TRADE TYPE UNDERLYING
0 NaN NaN NaN NaN 2 NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3 NaN NaN NaN NaN NaN NaN NaN NaN NaN
HERE4
Unnamed: 0 IGNORE DATE_TRADE DIRECTION EXPIRATION NAME OPTION_TYPE PRICE QUANTITY STRATEGY STRIKE TIME_TRADE TYPE UNDERLYING
0 NaN NaN NaN NaN 08/06/2021 NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN 16/06/2021 NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 16/06/2021 NaN NaN NaN NaN NaN NaN NaN NaN NaN
Joining is generally the best approach, but I see that you have no id column apart from the native pandas index, and destination holds nothing but NaNs apart from placeholder values in EXPIRATION, so if you are sure that ordering is not a problem you can simply concatenate along the columns axis; the shorter frame is padded with NaN rows:
>>> destination = pd.concat([destination.drop(columns='EXPIRATION'), recent['EXPIRATION']], axis=1)
Unnamed: 0 IGNORE DATE_TRADE DIRECTION EXPIRATION ...
0 NaN NaN NaN NaN 08/06/2021 ...
1 NaN NaN NaN NaN 16/06/2021 ...
2 NaN NaN NaN NaN 16/06/2021 ...
3 NaN NaN NaN NaN 22/06/2021 ...
4 NaN NaN NaN NaN 26/06/2021 ...
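An index-based alternative, sketched under the assumption that both frames use the default RangeIndex: extend destination to recent's length, then assign the column. This keeps destination's original column order.
# Sketch: pad destination with NaN rows up to recent's length,
# then overwrite EXPIRATION with recent's values.
destination = destination.reindex(range(len(recent)))
destination['EXPIRATION'] = recent['EXPIRATION'].to_numpy()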

Grouping by year through months: pivot table

I need to group values by Year, from my dataset:
Date Freq Year Month
0 2020-03-19 32 2020 3
1 2020-03-25 31 2020 3
2 2020-03-23 28 2020 3
3 2020-03-04 26 2020 3
4 2020-08-04 26 2020 8
... ... ... ... ...
2516 2011-09-02 1 2011 9
2517 2013-04-25 1 2013 4
2518 2020-09-02 1 2020 9
2519 2013-09-03 1 2013 9
2520 2015-01-01 1 2015 1
The table below was produced as follows:
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
df['Year'] = df['Date'].dt.year
df['Month'] = df['Date'].dt.month
try_this=pd.pivot_table(df, values = 'Freq', index=['Date','Year'], columns = 'Month')
Month 1 2 3 4 5 6 7 8 9 10 11 12
Date Year
2010-03-04 2010 NaN NaN 1.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
2010-03-07 2010 NaN NaN 1.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
2010-07-31 2010 NaN NaN NaN NaN NaN NaN 1.0 NaN NaN NaN NaN NaN
2010-10-07 2010 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1.0 NaN NaN
2010-12-20 2010 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 1.0
... ... ... ... ... ... ... ... ... ... ... ... ... ...
2020-12-05 2020 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 15.0
2020-12-06 2020 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 10.0
2020-12-08 2020 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 18.0
2020-12-09 2020 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 4.0
2020-12-10 2020 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 14.0
I am trying to get something like this:
Year 1 2 3 4 5 6 7 8 9 10 11 12
2020 ... 61.0
2019 ...
2018 ...
...
i.e. a table that groups the frequency by year across the months.
What I tried (code above) does not give me this output.
I would appreciate help figuring it out.
References:
Plot through time setting specific filtering
How to pivot a dataframe in Pandas?
Have you tried using aggfunc in pivot_table:
df = df[['Year', 'Month', 'Freq']]
df = df.pivot_table(values=['Freq'], columns=['Month'], index=['Year'], aggfunc='sum')
print(df)
Freq
Month 1 3 4 8 9
Year
2011 NaN NaN NaN NaN 1.0
2013 NaN NaN 1.0 NaN 1.0
2015 1.0 NaN NaN NaN NaN
2020 NaN 117.0 NaN 26.0 1.0
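The same result can be obtained with groupby plus unstack; a minimal sketch on the original df, assuming the Year, Month, and Freq columns shown in the question:
# Group by Year and Month, sum Freq, then pivot months into columns.
out = df.groupby(['Year', 'Month'])['Freq'].sum().unstack(fill_value=0)
print(out)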

How to convert messy html table to pandas dataframe

I'm trying to scrape SEC 10-Q and 10-K filings. Though I'm able to extract the tables, the CSV output is a bit messy. Is there any way that I can merge the columns with similar header names with pandas? Or any libraries that can help me export SEC filing data tables as csv?
[user#server sec_parser]$ /usr/bin/python3 /home/user/work_files/sec_parser/parser.py --file 10-Q-cmcsa-3312017x10q.htm
0 1 2 3 4 5 6 7 8 9 10 11
0 (in millions) NaN 2017 2017 2017 NaN 2016 2016 2016 NaN NaN NaN
1 Revenue NaN $ 20463 NaN NaN $ 18790 NaN NaN 8.9 %
2 Costs and Expenses: NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 Programming and production NaN 6074 6074 NaN NaN 5431 5431 NaN NaN 11.8 NaN
4 Other operating and administrative NaN 5827 5827 NaN NaN 5526 5526 NaN NaN 5.4 NaN
5 Advertising, marketing and promotion NaN 1530 1530 NaN NaN 1466 1466 NaN NaN 4.4 NaN
6 Depreciation NaN 1915 1915 NaN NaN 1785 1785 NaN NaN 7.3 NaN
7 Amortization NaN 587 587 NaN NaN 493 493 NaN NaN 19.0 NaN
8 Operating income NaN 4530 4530 NaN NaN 4089 4089 NaN NaN 10.8 NaN
9 Other income (expense) items, net NaN (625 (625 ) NaN (554 (554 ) NaN 13.0 NaN
10 Income before income taxes NaN 3905 3905 NaN NaN 3535 3535 NaN NaN 10.4 NaN
11 Income tax expense NaN (1,258 (1,258 ) NaN (1,311 (1,311 ) NaN (4.1 )
12 Net income NaN 2647 2647 NaN NaN 2224 2224 NaN NaN 19.0 NaN
13 Net (income) loss attributable to noncontrolli... NaN (81 (81 ) NaN (90 (90 ) NaN (10.2 )
14 Net income attributable to Comcast Corporation NaN $ 2566 NaN NaN $ 2134 NaN NaN 20.2 %
The sample table that I'm trying to convert to CSV is at https://edgartable.netlify.app/.
Here's my code
import os
import sys
import argparse
from bs4 import BeautifulSoup
import pandas as pd

args = argparse.ArgumentParser()
args.add_argument('--file', type=str)
args.add_argument('--list', type=str)
opts = args.parse_args()

def parse_file(file):
    soup = BeautifulSoup(open(file, 'r'), 'html.parser')
    for div in soup.find_all('div'):
        if 'Consolidated Operating Results' not in str(div.find('font')):
            continue
        table = div.find('table')
        dataset = pd.read_html(str(table), skiprows=3)
        print(dataset[0])
        for i, data in enumerate(dataset):
            data.to_csv(f'test{i}.csv', sep='|', index=False, header=False)

def main():
    parse_file(opts.file)

if __name__ == "__main__":
    main()
Try this:
import pandas as pd
df = pd.read_html('https://edgartable.netlify.app/')
df = df[0]
df.to_csv('test.csv')
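As for merging columns with similar headers, a hedged sketch, assuming the duplicated columns carry identical values as in the parsed output above: treat duplicate columns as duplicate rows of the transpose, drop them, then discard all-NaN columns.
# Collapse columns whose contents are exact duplicates, then drop
# columns that contain nothing but NaN.
df = df.T.drop_duplicates().T
df = df.dropna(axis=1, how='all')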

How to filter dataframe on two columns and output cumulative sum

I am an early beginner.
I have the following dataframe (df1) with transaction dates as the index and columns for account #, transaction quantity, and ticker.
Account Quantity Symbol/CUSIP
Trade Date
2020-03-31 1 NaN 990156937
2020-03-31 2 0.020 IIAXX
2020-03-24 1 NaN 990156937
2020-03-20 1 650.000 DOC
2020-03-23 1 NaN 990156937
... ... ... ...
2017-11-24 2 55.000 QQQ
2018-01-01 1 10.000 AMZN
2018-01-01 1 250.000 HOS
2017-09-13 1 229.051 VFINX
2017-09-21 1 1.118 VFINX
[266 rows x 3 columns]
I would like to populate a 2nd dataframe (df2) which shows the total quantity on every day between the min and max of the index of df1, grouped by account and by ticker. Below is an empty dataframe of what I am looking to do:
df2 = Total Quantity by ticker and account #, on every single day between min and max of df1
990156937 IIAXX DOC AER NaN ATVI H VCSH GOOGL VOO VG \
2020-03-31 3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2020-03-30 3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2020-03-29 3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
Thus, for each day between the min and max of the transaction dates in df1, I need to calculate the cumulative sum of all transactions on that date or earlier, grouped by account and ticker.
How could I accomplish this? Thanks in advance.
I suggest the following:
import pandas as pd
import numpy as np
# first I reproduce a similar dataframe
df = pd.DataFrame({"date": pd.date_range("2017-1-1", periods=3).repeat(6),
"account": [1, 1, 3, 1, 2, 3, 2,2, 1, 1, 2, 3, 1, 2, 3, 2,2,1],
"quantity": [123, 0.020, np.NaN, 650, 345, np.NaN, 345, 456, 121, 243, 445, 453, 987, np.NaN, 76, 143, 87, 19],
"symbol": ['990156937', '990156937', '990156937', 'DOC', 'AER', 'ATVI', 'AER', 'ATVI', 'IIAXX',
'990156937', '990156937', '990156937', 'DOC', 'AER', 'ATVI', 'AER', 'ATVI', 'IIAXX']})
This is what it looks like:
date account quantity symbol
0 2017-01-01 1 123.00 990156937
1 2017-01-01 1 0.02 990156937
2 2017-01-01 3 NaN 990156937
3 2017-01-01 1 650.00 DOC
4 2017-01-01 2 345.00 AER
You want to go to a wide format using unstack:
# You groupby date, account and symbol and sum the quantities
df = df.groupby(["date", "account", "symbol"]).agg({"quantity":"sum"})
df_wide = df.unstack()
# Finally groupby account to get the cumulative sum per account across dates
# Fill na with 0 to get cumulative sum right
df_wide = df_wide.fillna(0)
df_wide = df_wide.groupby(df_wide.index.get_level_values("account")).cumsum()
You get the result:
quantity
990156937 AER ATVI DOC IIAXX
date account
2017-01-01 1 123.02 0.0 0.0 650.0 0.0
2 0.00 345.0 0.0 0.0 0.0
3 0.00 0.0 0.0 0.0 0.0
2017-01-02 1 366.02 0.0 0.0 650.0 121.0
2 445.00 690.0 456.0 0.0 0.0
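To get a row for every calendar day between the min and max, as the question asks, one sketch under the setup above: reindex df_wide onto a full day-by-account MultiIndex and forward-fill the running totals within each account.
# Build the full (date, account) grid, reindex, and carry the last
# known cumulative totals forward within each account.
dates = df_wide.index.get_level_values("date")
all_days = pd.date_range(dates.min(), dates.max(), freq="D")
accounts = df_wide.index.get_level_values("account").unique()
full_idx = pd.MultiIndex.from_product([all_days, accounts],
                                      names=["date", "account"])
df_daily = (df_wide.reindex(full_idx)
                   .groupby(level="account").ffill()
                   .fillna(0))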
