I am trying to update Table 1 (Level I, Level II, and Level III) using pandas iloc or loc with the dataset referenced below. I am open to a better way than loc and iloc if there are suggestions.
Table 1
Example 1
If I want the table to update with new information for the 1102 selection for Pay Grade 13 and Level III, I would use the following df.loc code:
jobseries = '1102'
result = df.loc[('3',jobseries),'13']
print (result)
14.0
Example 2: This works too.
jobseries = '1102'
result = df.loc[('3',jobseries),'13'].sum()
print (result)
14
However, the challenge is when I need to select multiple indexes or multiple columns.
MULTIPLE ROWS
Now, if I want to update Table 1's Total for all of Level I, instead of doing some type of df.isin, I need to do the following:
Example 3:
total = df.loc[('1',jobseries),'07'] + df.loc[('1',jobseries),'09'] + and so on...
print (total)
32
This works, but I believe it will eventually throw RuntimeWarning: invalid value encountered in long_scalars, so it's not the best way to do this. Any recommendations?
MULTIPLE COLUMNS
Now, if I want to update Table 1's # certs for Level I, Level II, and Level III at any given grade level, I can't figure out the code. I've tried the following, but it's throwing a KeyError. I've tried multiple ways of doing this and still cannot figure it out:
Example 4:
jobseries = '1102'
result = df.loc[('1','2','3',jobseries),'All']
print (result)
KeyError: "None of [[('1', '2', '3', '1102')]] are in the [index]"
This is strange because when I check my index, the KeyError confuses me:
df.index:
MultiIndex(levels=[['1', '2', '3', 'All'], ['', '0301', '0341', '0342', '0343', '0501', '0560', '0810', '0850', '1101', '1102', '1105', '1106', '1109', '1145', '1146', '1170', '1410']],
labels=[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3], [2, 3, 4, 6, 7, 9, 10, 11, 12, 13, 16, 17, 2, 8, 9, 10, 11, 1, 3, 4, 5, 9, 10, 11, 14, 15, 16, 0]],
names=['Level', 'JobSeries'])
I've also tried df.xs:
Example 5:
jobseries = '1102'
result = df.xs(jobseries, level=1)
print (result)
01 07 08 09 11 12 13 14 15 All
Level
1 1.0 0.0 0.0 9.0 8.0 9.0 6.0 0.0 0.0 15
2 0.0 0.0 0.0 4.0 6.0 12.0 6.0 1.0 0.0 13
3 1.0 0.0 0.0 0.0 1.0 11.0 14.0 9.0 3.0 14
CHANGES IN ROWS OR COLUMNS
The other challenge is that if the dataset changes and the index or rows change, df.loc and df.iloc will throw a KeyError. Is there any way around this?
df:
01 07 08 09 11 12 13 14 15 All
Level JobSeries
1 0341 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1
0342 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1
0343 0.0 0.0 0.0 0.0 0.0 2.0 0.0 0.0 0.0 2
0560 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 1
0810 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1
1101 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1
1102 1.0 0.0 0.0 9.0 8.0 9.0 6.0 0.0 0.0 15
1105 0.0 7.0 3.0 5.0 0.0 0.0 0.0 0.0 0.0 9
1106 0.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2
1109 0.0 0.0 0.0 0.0 2.0 0.0 0.0 0.0 0.0 2
1170 0.0 0.0 0.0 0.0 1.0 2.0 0.0 0.0 0.0 3
1410 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 1
2 0341 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 1
0850 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 1
1101 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 2
1102 0.0 0.0 0.0 4.0 6.0 12.0 6.0 1.0 0.0 13
1105 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1
3 0301 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1
0342 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1
0343 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1
0501 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 1
1101 0.0 0.0 0.0 0.0 0.0 0.0 2.0 1.0 0.0 2
1102 1.0 0.0 0.0 0.0 1.0 11.0 14.0 9.0 3.0 14
1105 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1
1145 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 1
1146 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1
1170 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 2
All 2.0 8.0 4.0 11.0 11.0 14.0 15.0 9.0 4.0 17
Reference:
pd.loc: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html
pd.xs: https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.xs.html
pd.iloc: https://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-integer
I'm not totally clear on the ask, but would
df.groupby(df.index).count()['13'] or df.groupby(df.index).sum()['13'] for a column, or
df.groupby(['Level', 'JobSeries']).sum().loc[('1', '0341')] for a row
accomplish what you're looking for? Note the quotes: your index and column labels are strings, per the df.index output above. The level argument in groupby is designed to deal with multi-index problems.
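For the multiple-row and multiple-column cases in Examples 3 and 4, here is a minimal sketch, assuming the string row and column labels shown in df.index above:
import pandas as pd

jobseries = '1102'
idx = pd.IndexSlice

# Example 3: sum across all grade columns for Level I in one call,
# instead of adding df.loc cells one by one
total = df.loc[('1', jobseries), '01':'15'].sum()

# Example 4: pass a list inside the tuple to select several Levels at once;
# ('1', '2', '3', jobseries) is parsed as a single 4-part key, hence the KeyError
result = df.loc[idx[['1', '2', '3'], jobseries], 'All']

# Changing datasets: test membership before indexing to avoid a KeyError
if ('1', jobseries) in df.index:
    total = df.loc[('1', jobseries), '01':'15'].sum()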
Related
I have a dataframe which looks like the one below:
df:
Review_Text Noun Thumbups
Would be nice to be able to import files from ... [My, Tracks, app, phone, Google, Drive, import... 1.0
No Offline Maps! It used to have offline maps ... [Offline, Maps, menu, option, video, exchange,... 18.0
Great application. Designed with very well tho... [application, application] 16.0
Great App. Nice and simple but accurate. Wish ... [Great, App, Nice, Exported] 0.0
Save For Offline - This does not work. The rou... [Save, Offline, route, filesystem] 12.0
Since latest update app will not run. Subscrip... [update, app, Subscription, March, application] 9.0
Great app. Love it! And all the things it does... [Great, app, Thank, work] 1.0
I have paid for subscription but keeps telling... [subscription, trial, period] 0.0
Error: The route cannot be save for no locatio... [Error, route, i, GPS] 0.0
When try to restore my tracks it says "unable ... [try, file, locally-1] 0.0
Was a good app but since the update it only re... [app, update, metre] 2.0
Based on the 'Noun' column values, I want to create other columns. For example, all values of the Noun column from the first row become columns, and those columns contain the value of the 'Thumbups' column. If a column name is already present in the dataframe, then it adds the 'Thumbups' value to the existing value of that column.
I was trying to implement by using pivot_table :
pd.pivot_table(latest_review,columns='Noun',values='Thumbups')
But got following error:
TypeError: unhashable type: 'list'
Could anyone help me in fixing the issue?
Use Series.str.join with Series.str.get_dummies to create dummies, then multiply by the Thumbups column with DataFrame.mul:
df1 = df['Noun'].str.join('|').str.get_dummies().mul(df['Thumbups'], axis=0)
print (df1)
App Drive Error Exported GPS Google Great Maps March My Nice \
0 0.0 10.0 0.0 0.0 0.0 10.0 0.0 0.0 0.0 10.0 0.0
1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 180.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 90.0 0.0 0.0
6 0.0 0.0 0.0 0.0 0.0 0.0 10.0 0.0 0.0 0.0 0.0
7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
10 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
Offline Save Subscription Thank Tracks app application exchange \
0 0.0 0.0 0.0 0.0 10.0 10.0 0.0 0.0
1 180.0 0.0 0.0 0.0 0.0 0.0 0.0 180.0
2 0.0 0.0 0.0 0.0 0.0 0.0 160.0 0.0
3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4 120.0 120.0 0.0 0.0 0.0 0.0 0.0 0.0
5 0.0 0.0 90.0 0.0 0.0 90.0 90.0 0.0
6 0.0 0.0 0.0 10.0 0.0 10.0 0.0 0.0
7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
10 NaN NaN NaN NaN NaN NaN NaN NaN
file filesystem i import locally-1 menu metre option period \
0 0.0 0.0 0.0 10.0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0 0.0 180.0 0.0 180.0 0.0
2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4 0.0 120.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
10 NaN NaN NaN NaN NaN NaN NaN NaN NaN
phone route subscription trial try update video work
0 10.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0 0.0 0.0 180.0 0.0
2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4 0.0 120.0 0.0 0.0 0.0 0.0 0.0 0.0
5 0.0 0.0 0.0 0.0 0.0 90.0 0.0 0.0
6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 10.0
7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
10 NaN NaN NaN NaN NaN NaN NaN NaN
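To see why this works: str.join('|') first collapses each list into one delimited string, which str.get_dummies then splits back into indicator columns; a small sketch on a toy frame (the toy data is made up for illustration):
import pandas as pd

toy = pd.DataFrame({'Noun': [['app', 'Maps'], ['app']], 'Thumbups': [2.0, 3.0]})
print(toy['Noun'].str.join('|'))
# 0    app|Maps
# 1         app
print(toy['Noun'].str.join('|').str.get_dummies().mul(toy['Thumbups'], axis=0))
#    Maps  app
# 0   2.0  2.0
# 1   0.0  3.0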
An alternative is to unpack the Noun lists into one row per noun and then pivot:
rows = []
# unpack the Noun column's list values and store them in the rows list
_ = df.apply(lambda row: [rows.append([row['Review_Text'], row['Thumbups'], nn])
                          for nn in row.Noun], axis=1)
# create a new dataframe with the unpacked values
# (the column order must match the order appended above)
df_new = pd.DataFrame(rows, columns=['Review_Text', 'Thumbups', 'Noun'])
# now do the pivot operation on df_new
pivot_df = df_new.pivot(index='Review_Text', columns='Noun')
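Note that df.pivot raises a ValueError when a review repeats the same noun (e.g. [application, application] above); a pivot_table with a sum aggregation handles duplicates and also matches the "adds into the existing value" requirement. A sketch:
pivot_df = df_new.pivot_table(index='Review_Text', columns='Noun',
                              values='Thumbups', aggfunc='sum', fill_value=0)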
My data file (daily rainfall data) has the format
df = Year Month Day01 Day02 Day03 ..............Day31
1970 1 0 0 20 3.5
1970 2 0 0 20 3.5
1970 3 0 0 20 3.5
... . . . .. ...
... . . . .. ...
and I want to read the above data into date format:
df = date (year-month-day)
Please help.
You can find the data here: https://docs.google.com/spreadsheets/d/1sPRiRDYmWyTuuhks3CDWXj0eNcddsJopUNfjEAlSI-w/edit?usp=sharing
I assume that you already have a dataframe with the following format:
YEAR MN DRF01 DRF02 DRF03 DRF04 DRF05 DRF06 DRF07 DRF08 DRF09 DRF10 DRF11 DRF12 DRF13 DRF14 DRF15 DRF16 DRF17 DRF18 DRF19 DRF20 DRF21 DRF22 DRF23 DRF24 DRF25 DRF26 DRF27 DRF28 DRF29 DRF30 DRF31
1971 1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 22.0 0.0 0.0 4.6
1971 2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 NaN NaN NaN
1971 3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1971 4 0.0 0.0 0.0 0.0 0.0 0.0 25.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 8.0 0.0 0.0 0.0 0.0 2.0 0.0 0.0 8.6 0.0 0.0 0.0 7.4 24.0 0.0 NaN
1971 5 3.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 40.0 0.0 0.0 106.0 0.0 2.0 0.0 0.0 0.0 0.0 3.0 6.6 0.0 0.0 22.6 26.8 12.8
You can get what you want by stacking the columns. The ideal dataframe for stacking should only contain the columns you want to stack, with the remaining columns moved to the index:
result = df.rename(columns={'MN': 'MONTH'}) \
.set_index(['YEAR', 'MONTH']) \
.rename_axis('DAY', axis=1) \
.stack() \
.to_frame('RAINFALL') \
.reset_index()
result['DAY'] = result['DAY'].str[-2:].astype('int')
result['DATE'] = pd.to_datetime(result[['YEAR', 'MONTH', 'DAY']])
Result:
YEAR MONTH DAY RAINFALL DATE
1971 1 1 0.0 1971-01-01
1971 1 2 0.0 1971-01-02
1971 1 3 0.0 1971-01-03
1971 1 4 0.0 1971-01-04
1971 1 5 0.0 1971-01-05
Using df.melt might be even more straightforward:
import pandas as pd
df = pd.DataFrame({'Year': {0: 1910, 1: 1910, 2: 1911},
'Month': {0:1, 1:1, 2:2},
'Day 1': {0: 1, 1: 3, 2: 5},
'Day 2': {0: 2, 1: 4, 2: 6}})
print(df)
day_columns = [i for i in df.columns if 'Day' in i]
df = pd.melt(df,id_vars=['Year','Month'],value_vars=day_columns,var_name='Day',value_name='Rain')
df['Day'] = df['Day'].str.replace('Day ','')
df['Date'] = pd.to_datetime(df[['Year', 'Month', 'Day']])
print(df)
I have a dataframe whose columns are not in sequence. If I use len(df.columns), my data has 3586 columns. How do I re-order the columns into sequence?
ID V1 V10 V100 V1000 V1001 V1002 ... V990 V991 V992 V993 V994
A 1 9.0 2.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
B 1 1.2 0.1 3.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0
C 2 8.6 8.0 2.0 0.0 0.0 0.0 2.0 0.0 0.0 0.0 0.0
D 3 0.0 2.0 0.0 0.0 0.0 0.0 3.0 0.0 0.0 0.0 0.0
E 4 7.8 6.6 3.0 0.0 0.0 0.0 4.0 0.0 0.0 0.0 0.0
I used df = df.reindex(sorted(df.columns), axis=1) (based on this question: Re-ordering columns in pandas dataframe based on column name), but it is still not working.
Thank you.
First get all columns that do not match the pattern V + number by filtering with str.contains, then sort the remaining values obtained with Index.difference, join the two lists, and pass them to DataFrame.reindex. This puts all non-matching columns in the first positions, followed by the sorted V + number columns:
L1 = df.columns[~df.columns.str.contains(r'^V\d+$')].tolist()
L2 = sorted(df.columns.difference(L1), key=lambda x: float(x[1:]))
df = df.reindex(L1 + L2, axis=1)
print (df)
ID V1 V10 V100 V990 V991 V992 V993 V994 V1000 V1001 V1002
A 1 9.0 2.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
B 1 1.2 0.1 3.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
C 2 8.6 8.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
D 3 0.0 2.0 0.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
E 4 7.8 6.6 3.0 4.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
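If the third-party natsort package is available, the custom sort key can be replaced with a natural sort; a sketch, reusing the same L1 as above:
from natsort import natsorted

L2 = natsorted(df.columns.difference(L1))
df = df.reindex(L1 + L2, axis=1)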
I have a Pandas DataFrame which tells me the monthly sales of items in shops.
df.head():
ID month sold
0 150983 0 1.0
1 56520 0 13.0
2 56520 1 7.0
3 56520 2 13.0
4 56520 3 8.0
I want to remove all IDs where there were no sales last month, i.e. month == 33 & sold == 0. Doing the following:
unwanted_df = df[((df['month'] == 33) & (df['sold'] == 0.0))]
I get only 46 rows, which is far too few. But never mind, I would like to have the data in a different format anyway. A pivoted version of the above table is just what I want:
pivoted_df = df.pivot(index='month', columns = 'ID', values = 'sold').fillna(0)
pivoted_df.head()
ID 0 2 3 5 6 7 8 10 11 12 ... 214182 214185 214187 214190 214191 214192 214193 214195 214197 214199
month
0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0
1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
Question: how do I remove the columns whose value is 0 in the last row of pivoted_df?
You can do this with one line:
pivoted_df = pivoted_df.drop(pivoted_df.columns[pivoted_df.iloc[-1, :] == 0], axis=1)
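Equivalently, a Boolean mask over the last row keeps only the non-zero columns; a sketch:
pivoted_df = pivoted_df.loc[:, pivoted_df.iloc[-1] != 0]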
I want to remove all IDs where there were no sales last month
You can first calculate the IDs satisfying your condition:
id_selected = df.loc[(df['month'] == 33) & (df['sold'] == 0), 'ID']
Then filter these from your dataframe via a Boolean mask:
df = df[~df['ID'].isin(id_selected)]
Finally, use pd.pivot_table with your filtered dataframe.
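A sketch of that final step, where fill_value=0 replaces the separate fillna(0) used earlier:
pivoted_df = df.pivot_table(index='month', columns='ID', values='sold', fill_value=0)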
I have the following numpy matrix:
0 1 2 3 4 5 6 7 8 9
0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 5.0 0.0 9.0 0.0 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0 0.0 0.0 2.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 5.0 0.0
4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
5 0.0 0.0 7.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0
6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
7 5.0 0.0 0.0 0.0 0.0 0.0 0.0 6.0 0.0 0.0
8 2.0 0.0 0.0 0.0 3.0 0.0 6.0 0.0 8.0 0.0
9 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
10 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
I want to calculate the average of the non-zero values of every row and column separately, so my result should be something like this:
average_rows = [1.0,7.0,2.0,5.0,0.0,4.0,0.0,5.5,4.75,1.0,0.0]
average_cols = [3.5,1.0,4.33333,0.0,4.33333,0.0,4.0,6.0,6.5,0.0]
I can't figure out how to iterate over them, and I keep getting TypeError: unhashable type.
Also, I'm not sure iterating is the best solution. I also tried something like R[:,i] to grab each column and sum it using sum(R[:,i]), but I keep getting the same error.
It is better to use a 2-D np.array instead of np.matrix.
import numpy as np
data = np.array([[1, 2, 0], [0, 0, 1], [0, 2, 4]], dtype='float')
data[data == 0] = np.nan
# replace all zeroes with `nan`'s to skip them
# [[ 1. 2. nan]
# [ nan nan 1.]
# [ nan 2. 4.]]
np.nanmean(data, axis=0)
# array([ 1. , 2. , 2.5])
np.nanmean(data, axis=1)
# array([ 1.5, 1. , 3. ])
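One caveat, given that the expected output above uses 0.0 for all-zero rows: np.nanmean returns nan (and emits a RuntimeWarning) for a slice that is entirely nan. A sketch that suppresses the warning and restores 0.0 afterwards:
import warnings

with warnings.catch_warnings():
    # an all-zero row/column becomes all-nan, which nanmean warns about
    warnings.simplefilter('ignore', category=RuntimeWarning)
    average_rows = np.nan_to_num(np.nanmean(data, axis=1))  # nan -> 0.0
    average_cols = np.nan_to_num(np.nanmean(data, axis=0))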