Accessing indexes in a list - python

I am using tabula-py to extract a table from a pdf document like this:
rows = tabula.read_pdf('bank_statement.pdf', pandas_options={"header":[0, 1, 2, 3, 4, 5]}, pages='all', stream=True, lattice=True)
rows
This gives an output like so:
[ 0
0 Customer Statement\rxxxxxxx\rP...
1 Print Date: April 12, 2020Address: 41 BAALE ST...
2 Period: January 1, 2020 - April 12, 2020Openin...,
0
0 Customer Statement\xxxxxxxx\rP...
1 Print Date: April 12, 2020Address: 41 gg ST...,
0 1 2 3 4 5 \
0 03-Jan-2020 0 03-Jan-2020 NaN 50,000.00 52,064.00
1 10-Jan-2020 0 10-Jan-2020 25,000.00 NaN 27,064.00
2 10-Jan-2020 0 10-Jan-2020 25.00 NaN 27,039.00
3 10-Jan-2020 0 10-Jan-2020 1.25 NaN 27,037.75
4 20-Jan-2020 999921... 20-Jan-2020 10,000.00 NaN 17,037.75
5 23-Jan-2020 999984... 23-Jan-2020 4,050.00 NaN 12,987.75
6 23-Jan-2020 0 23-Jan-2020 1,000.00 NaN 11,987.75
7 24-Jan-2020 0 24-Jan-2020 2,000.00 NaN 9,987.75
8 24-Jan-2020 0 24-Jan-2020 NaN 30,000.00 39,987.75
6
0 TRANSFER BETWEEN\rCUSTOMERS Via GG from\r...
1 NS Instant Payment Outward\r000013200110121...
2 COMMISSION\r0000132001101218050000326...\rNIP ...
3 VALUE ADDED TAX VAT ON NIP\rTRANSFER FOR 00001
4 CASH WITHDRAWAL FROM\rOTHER ATM -210674- -4420...
5 POS/WEB PURCHASE\rTRANSACTION -845061-\r-80405...
6 Airtime Purchase MBANKING-\r101CT0000000001551...
7 Airtime Purchase MBANKING-\r101CT0000000001552...
8 TRANSFER BETWEEN\rCUSTOMERS\r00001520012412113... ,
What I want from this pdf starts from index 2. So I run
rows[2]
And I get a dataframe that looks like this:
Now, I want the indexes from 2 through the last index, so I did:
rows[2:]
But I am getting a list, not the expected DataFrame:
[ 0 1 2 3 4 5 \
0 03-Jan-2020 0 03-Jan-2020 NaN 50,000.00 52,064.00
1 10-Jan-2020 0 10-Jan-2020 25,000.00 NaN 27,064.00
2 10-Jan-2020 0 10-Jan-2020 25.00 NaN 27,039.00
3 10-Jan-2020 0 10-Jan-2020 1.25 NaN 27,037.75
4 20-Jan-2020 999921... 20-Jan-2020 10,000.00 NaN 17,037.75
5 23-Jan-2020 999984... 23-Jan-2020 4,050.00 NaN 12,987.75
6 23-Jan-2020 0 23-Jan-2020 1,000.00 NaN 11,987.75
7 24-Jan-2020 0 24-Jan-2020 2,000.00 NaN 9,987.75
8 24-Jan-2020 0 24-Jan-2020 NaN 30,000.00 39,987.75
6
0 TRANSFER BETWEEN\rCUSTOMERS Via gg from\r...
1 bi Instant Payment Outward\r000013200110121...
2 COMMISSION\r0000132001101218050000326...\rNIP ...
3 VALUE ADDED TAX VAT ON NIP\rTRANSFER FOR 00001
4 CASH WITHDRAWAL FROM\rOTHER ATM -210674- -4420...
5 POS/WEB PURCHASE\rTRANSACTION -845061-\r-80405...
Please, how do I solve this? I need a DataFrame for the tables from index 2 onwards.

You are getting this behaviour because rows is a list and slicing a list produces another list. When you access an element at a specific index, you get the object at that index; in this case, a DataFrame object.
The pandas library ships with a concat function that can combine multiple DataFrame objects into one -- I believe this is what you want to do -- such that you have:
import pandas as pd
df_combo = pd.concat([rows[2], rows[3], rows[4], rows[5] ...])
Even better:
df_combo = pd.concat(rows[2:])
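If the individual tables carry overlapping row labels, you may also want ignore_index=True so the combined frame gets a fresh 0..n-1 index:
df_combo = pd.concat(rows[2:], ignore_index=True)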

Take a look at https://medium.com/analytics-vidhya/how-to-extract-multiple-tables-from-a-pdf-through-python-and-tabula-py-6f642a9ee673
The best way to go about what you're trying to achieve is to read the tables with the output returned as JSON, then loop through the JSON objects to build your lists.
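A minimal sketch of that idea, assuming tabula-py's output_format='json' option (each table then comes back as a dict whose 'data' key holds rows of cells, each cell carrying a 'text' field):
import tabula

tables = tabula.read_pdf('bank_statement.pdf', pages='all', stream=True,
                         output_format='json')  # assumption: JSON output instead of DataFrames
for table in tables[2:]:  # skip the header blocks, as in the question
    for row in table['data']:
        print([cell['text'] for cell in row])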

Related

Concatenating the values of column and putting back to same row again

Customer Material ID Bill Quantity
0 1 64578 100
1 2 64579 58
2 3 64580 36
3 4 64581 45
4 5 64582 145
We have to concatenate the Material ID at index 0 with the Material ID at index 1 and put the result into the index-0 record,
similarly for 1,2 and 3,4.
The result should contain only the concatenated records.
Just shift the data and combine the columns.
df.assign(new_ID=df["Material ID"] + df.shift(-1)["Material ID"])
Customer Material ID Bill Quantity new_ID
0 0 64578 100 NaN 129157.0
1 1 64579 58 NaN 129159.0
2 2 64580 36 NaN 129161.0
3 3 64581 45 NaN 129163.0
4 4 64582 145 NaN NaN
If you need to concatenate it as a str type, then the following would work.
df["Material ID"] = df["Material ID"].astype(str)
df.assign(new_ID=df["Material ID"] + df.shift(-1)["Material ID"])
Customer Material ID Bill Quantity new_ID
0 0 64578 100 NaN 6457864579
1 1 64579 58 NaN 6457964580
2 2 64580 36 NaN 6458064581
3 3 64581 45 NaN 6458164582
4 4 64582 145 NaN NaN
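If, as the question says, only the concatenated records should remain, a possible follow-up (a sketch that assumes the pairs are (0,1), (2,3), ...) is to keep every other row and drop the unpaired tail:
out = df.assign(new_ID=df["Material ID"] + df.shift(-1)["Material ID"])
out = out.iloc[::2].dropna(subset=["new_ID"])  # rows 0, 2, ... that actually received a pair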

how to use LOCF in this time series data for pandas in python

If I have data as given below, I need to fill in the observations for an ID using the values from when that ID last appeared:
ID OpenDate ObsDate Amount ClosedDate Output
1 10-12-1990 15-08-1991 20 15-08-1992 2
3 10-12-1993 15-12-1993 25 15-08-1994 1
5 10-12-1995 25-11-1997 0 18-08-1998 1
1 NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN
The expected output should have the rows for IDs 1 and 3 filled out with the previous values of 1 and 3, i.e.
ID OpenDate ObsDate Amount ClosedDate Output
1 10-12-1990 15-08-1991 20 15-08-1992 2
3 10-12-1993 15-12-1993 25 15-08-1994 1
5 10-12-1995 25-11-1997 0 18-08-1998 1
1 10-12-1990 15-08-1991 20 15-08-1992 2
3 10-12-1993 15-12-1993 25 15-08-1994 1
Consider these to be DataFrames, the inputs needed for the Python code.
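A minimal LOCF (last observation carried forward) sketch, assuming the table above is a DataFrame df with the blanks stored as NaN: forward-fill every column within each ID group.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 3, 5, 1, 3],
    "OpenDate": ["10-12-1990", "10-12-1993", "10-12-1995", np.nan, np.nan],
    "ObsDate": ["15-08-1991", "15-12-1993", "25-11-1997", np.nan, np.nan],
    "Amount": [20, 25, 0, np.nan, np.nan],
    "ClosedDate": ["15-08-1992", "15-08-1994", "18-08-1998", np.nan, np.nan],
    "Output": [2, 1, 1, np.nan, np.nan],
})

# LOCF: within each ID, carry the last observed values forward into the blank rows.
cols = df.columns.difference(["ID"])
df[cols] = df.groupby("ID")[cols].ffill()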

Faster way to construct a multiindex of dates in Pandas

I have a Pandas dataframe, df. Here are the first five rows:
Id StartDate EndDate
0 0 2015-08-11 2018-07-13
1 1 2014-02-15 2016-01-25
2 2 2014-12-20 NaT
3 3 2015-01-09 2015-01-14
4 4 2014-07-20 NaT
I want to construct a new dataframe, df2. df2 should have a row for each month between StartDate and EndDate, inclusive, for each Id in df. For example, since the first row of df has StartDate in August 2015 and EndDate in July 2018, df2 should have rows corresponding to August 2015, September 2015, ..., July 2018. If an Id in df has no EndDate, we will take it to be June 2019.
I would like df2 to use a multiindex with the first level being the corresponding Id in df, the second level being the year, and the third level being the month. For example, if the above five rows were all of df, then df2 should look like:
Id Year Month
0 2015 8
9
10
11
12
2016 1
2
3
4
5
6
7
8
9
10
11
12
2017 1
2
3
4
5
6
7
8
9
10
11
12
2018 1
... ... ...
4 2017 1
2
3
4
5
6
7
8
9
10
11
12
2018 1
2
3
4
5
6
7
8
9
10
11
12
2019 1
2
3
4
5
6
The following code does the trick, but takes about 20 seconds on my decent laptop for 10k Ids. Can I be more efficient somehow?
import numpy as np
import pandas as pd

def build_multiindex_for_id_(id_, enroll_month, enroll_year, cancel_month, cancel_year):
    # Given id_ and start/end dates,
    # returns 2d array to be converted to multiindex.
    # Each row of returned array represents a month/year
    # between enroll date and cancel date inclusive.
    year = enroll_year
    month = enroll_month
    multiindex_array = [[], [], []]
    while (month != cancel_month) or (year != cancel_year):
        multiindex_array[0].append(id_)
        multiindex_array[1].append(year)
        multiindex_array[2].append(month)
        month += 1
        if month == 13:
            month = 1
            year += 1
    multiindex_array[0].append(id_)
    multiindex_array[1].append(year)
    multiindex_array[2].append(month)
    return np.array(multiindex_array)

# Begin by constructing the array for the first id.
array_for_multiindex = build_multiindex_for_id_(0, 8, 2015, 7, 2018)

# Append the rest of the multiindices for the remaining ids.
for _, row in df.loc[1:].fillna(pd.to_datetime('2019-06-30')).iterrows():
    current_id_array = build_multiindex_for_id_(
        row['Id'],
        row['StartDate'].month,
        row['StartDate'].year,
        row['EndDate'].month,
        row['EndDate'].year)
    array_for_multiindex = np.append(array_for_multiindex, current_id_array, axis=1)

df2_index = pd.MultiIndex.from_arrays(array_for_multiindex).rename(['Id', 'Year', 'Month'])
pd.DataFrame(index=df2_index)
Here's my approach after some trial and error:
(df.melt(id_vars='Id')
   .fillna(pd.to_datetime('June 2019'))
   .set_index('value')
   .groupby('Id').apply(lambda x: x.asfreq('M').ffill())
   .reset_index('value')
   .assign(year=lambda x: x['value'].dt.year,
           month=lambda x: x['value'].dt.month)
   .set_index(['year', 'month'], append=True)
)
Output:
value Id variable
Id year month
0 2015 8 2015-08-31 NaN NaN
9 2015-09-30 NaN NaN
10 2015-10-31 NaN NaN
11 2015-11-30 NaN NaN
12 2015-12-31 NaN NaN
2016 1 2016-01-31 NaN NaN
2 2016-02-29 NaN NaN
3 2016-03-31 NaN NaN
4 2016-04-30 NaN NaN
5 2016-05-31 NaN NaN
6 2016-06-30 NaN NaN
7 2016-07-31 NaN NaN
8 2016-08-31 NaN NaN
9 2016-09-30 NaN NaN
10 2016-10-31 NaN NaN
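For what it's worth, a possibly faster sketch (an assumption, not taken from the answer above) builds the index directly from pd.period_range, one range per row, and avoids the per-month Python loop:

import pandas as pd

end = df['EndDate'].fillna(pd.Timestamp('2019-06-30'))
tuples = [(id_, p.year, p.month)
          for id_, start, stop in zip(df['Id'], df['StartDate'], end)
          for p in pd.period_range(start, stop, freq='M')]
df2_index = pd.MultiIndex.from_tuples(tuples, names=['Id', 'Year', 'Month'])
df2 = pd.DataFrame(index=df2_index)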

Perform multiple operations in a single groupby call with pandas?

I'd like to produce a summary dataframe after grouping by date. I want a column that shows the mean of a given column as-is, and another that shows the mean of that same column after filtering to values greater than 0. I figured out how to do this (below), but it requires two separate groupby calls, renaming the columns, and then joining the results back together. I feel like this should be possible in a single call. I tried using eval for this, but kept getting an error telling me to use apply because eval can't be used on a groupby object.
Code which gets me what I want but doesn't seem very efficient:
import pandas as pd

# Sample data
data = pd.DataFrame(
    {"year": [2013, 2013, 2013, 2014, 2014, 2014],
     "month": [1, 2, 3, 1, 2, 3],
     "day": [1, 1, 1, 1, 1, 1],
     "delay": [0, -4, 50, -60, 9, 10]})

subset = (data
          .groupby(['year', 'month', 'day'])['delay']
          .mean()
          .reset_index()
          .rename(columns={'delay': 'avg_delay'})
          )

subset_1 = (data[data.delay > 0]
            .groupby(['year', 'month', 'day'])['delay']
            .mean()
            .reset_index()
            .rename(columns={'delay': 'avg_delay_pos'})
            )

combined = pd.merge(subset, subset_1, how='left', on=['year', 'month', 'day'])
combined
year month day avg_delay avg_delay_pos
0 2013 1 1 0 NaN
1 2013 2 1 -4 NaN
2 2013 3 1 50 50.0
3 2014 1 1 -60 NaN
4 2014 2 1 9 9.0
5 2014 3 1 10 10.0
IIUC, you could use the following code:
>>> data['avg_delay'] = data.pop('delay')
>>> data['avg_delay_pos'] = data.loc[data['avg_delay'].gt(0), 'avg_delay']
>>> data
day month year avg_delay avg_delay_pos
0 1 1 2013 0 NaN
1 1 2 2013 -4 NaN
2 1 3 2013 50 50.0
3 1 1 2014 -60 NaN
4 1 2 2014 9 9.0
5 1 3 2014 10 10.0
>>>
Explanation:
I first pop the delay column and assign it back under the new name avg_delay, so I am effectively renaming delay to avg_delay.
Then I create a new column called avg_delay_pos, which uses loc to select only the values greater than zero. Because the index is preserved, the rows whose delay is greater than zero receive the corresponding avg_delay value, while the others are never assigned and therefore end up as NaN, as you expected.
This solution is specific to your problem, but you can do it with a single groupby call. To get "avg_delay_pos", you just have to mask out the negative (and zero) values first.
df['delay_pos'] = df['delay'].where(df['delay'] > 0)
(df.filter(like='delay')
   .groupby(pd.to_datetime(df[['year', 'month', 'day']]))
   .mean()
   .add_prefix('avg_'))
avg_delay avg_delay_pos
2013-01-01 0 NaN
2013-02-01 -4 NaN
2013-03-01 50 50.0
2014-01-01 -60 NaN
2014-02-01 9 9.0
2014-03-01 10 10.0
Breakdown
where is used to mask values that are not positive.
df['delay_pos'] = df['delay'].where(df['delay'] > 0)
# df['delay'].where(df['delay'] > 0)
0 NaN
1 NaN
2 50.0
3 NaN
4 9.0
5 10.0
Name: delay, dtype: float64
Next, extract the delay columns we want to group on,
df.filter(like='delay')
delay delay_pos
0 0 NaN
1 -4 NaN
2 50 50.0
3 -60 NaN
4 9 9.0
5 10 10.0
Then perform a groupby on the date,
_.groupby(pd.to_datetime(df[['year', 'month', 'day']])).mean()
delay delay_pos
2013-01-01 0 NaN
2013-02-01 -4 NaN
2013-03-01 50 50.0
2014-01-01 -60 NaN
2014-02-01 9 9.0
2014-03-01 10 10.0
Here pd.to_datetime is used to convert the year/month/day columns into a single datetime column; it's more efficient to group on a single column than on several.
pd.to_datetime(df[['year', 'month', 'day']])
0 2013-01-01
1 2013-02-01
2 2013-03-01
3 2014-01-01
4 2014-02-01
5 2014-03-01
dtype: datetime64[ns]
The final .add_prefix('avg_') adds the prefix "avg_" to the result.
An alternative way to do this if you want separate year/month/day columns would be
df['delay_pos'] = df['delay'].where(df['delay'] > 0)
df.groupby(['year', 'month', 'day']).mean().add_prefix('avg_').reset_index()
year month day avg_delay avg_delay_pos
0 2013 1 1 0 NaN
1 2013 2 1 -4 NaN
2 2013 3 1 50 50.0
3 2014 1 1 -60 NaN
4 2014 2 1 9 9.0
5 2014 3 1 10 10.0
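Another single-call option (a sketch, not part of the original answers, and assuming pandas >= 0.25 named aggregation) computes both means inside one agg:

(data.groupby(['year', 'month', 'day'])['delay']
     .agg(avg_delay='mean',
          avg_delay_pos=lambda s: s[s > 0].mean())
     .reset_index())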

Reindexing after a pivot in pandas

Consider the following dataset:
After running the code:
convert_dummy1 = convert_dummy.pivot(index='Product_Code', columns='Month', values='Sales').reset_index()
The data is in the right form, but my index column is named 'Month', and I cannot seem to remove this at all. I have tried code such as the following, but it does nothing.
del convert_dummy1.index.name
I can save the dataset to a csv, delete the ID column, and then read the csv - but there must be a more efficient way.
Dataset after reset_index():
convert_dummy1
Month Product_Code 0 1 2 3 4
0 10133.9 0 0 0 0 0
1 10146.9 120 80 60 0 100
convert_dummy1.index = pd.RangeIndex(len(convert_dummy1.index))
del convert_dummy1.columns.name
convert_dummy1
Product_Code 0 1 2 3 4
0 10133.9 0 0 0 0 0
1 10146.9 120 80 60 0 100
Since you pivot with columns="Month", each column in the output corresponds to a month. If you decide to reset the index after the pivot, you should check the column names with convert_dummy1.columns.values, which in your case should return:
array(['Product_Code', 1, 2, 3, 4, 5], dtype=object)
while convert_dummy1.columns.names should return:
FrozenList(['Month'])
So to rename Month, use the rename_axis function:
convert_dummy1.rename_axis('index',axis=1)
Output:
index Product_Code 1 2 3 4 5
0 10133 NaN NaN NaN NaN 0.0
1 10234 NaN 0.0 NaN NaN NaN
2 10245 0.0 NaN NaN NaN NaN
3 10345 NaN NaN NaN 0.0 NaN
4 10987 NaN NaN 1.0 NaN NaN
If you wish to reproduce it, this is my code:
df1=pd.DataFrame({'Product_Code':[10133,10245,10234,10987,10345], 'Month': [1,2,3,4,5], 'Sales': [0,0,0,1,0]})
df2=df1.pivot_table(index='Product_Code', columns='Month', values='Sales').reset_index().rename_axis('index',axis=1)
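If the goal is simply to remove the leftover 'Month' label rather than rename it, a small sketch using standard pandas would be:

convert_dummy1.columns.name = None            # drop the columns-axis name in place
# or, equivalently:
convert_dummy1 = convert_dummy1.rename_axis(None, axis=1)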
