Pandas MultiIndex: subtract based on matching only two index levels - python

Say I have a Pandas multi-index data frame with 3 indices:
import pandas as pd
import numpy as np
arrays = [['UK', 'UK', 'US', 'FR'], ['Firm1', 'Firm1', 'Firm2', 'Firm1'], ['Andy', 'Peter', 'Peter', 'Andy']]
idx = pd.MultiIndex.from_arrays(arrays, names = ('Country', 'Firm', 'Responsible'))
df_3idx = pd.DataFrame(np.random.randn(4,3), index = idx)
df_3idx
                                   0         1         2
Country Firm  Responsible
UK      Firm1 Andy          0.237655  2.049636  0.480805
              Peter         1.135344  0.745616 -0.577377
US      Firm2 Peter         0.034786 -0.278936  0.877142
FR      Firm1 Andy          0.048224  1.763329 -1.597279
Furthermore, I have another pd.DataFrame consisting of the unique combinations of the first two index levels from the data above:
arrays = [['UK', 'US', 'FR'], ['Firm1', 'Firm2', 'Firm1']]
idx = pd.MultiIndex.from_arrays(arrays, names = ('Country', 'Firm'))
df_2idx = pd.DataFrame(np.random.randn(3,1), index = idx)
df_2idx
                       0
Country Firm
UK      Firm1  -0.103828
US      Firm2   0.096192
FR      Firm1  -0.686631
I want to subtract from each value in df_3idx the corresponding value in df_2idx. For instance, from every value in the first two rows I want to subtract -0.103828, since the first two index levels of both dataframes match there.
Does anybody know how to do this? I figured I could simply unstack the first dataframe and then subtract, but I am getting an error message.
df_3idx.unstack('Responsible').sub(df_2idx, axis=0)
ValueError: cannot join with no overlapping index names
Unstacking might in any case not be the preferable solution, as my data is very big and unstacking could take a lot of time.
I would appreciate any help. Many thanks in advance!

There is a related question that is not focused on MultiIndex; however, the approach doesn't change: the sub method will align on the matching index levels. Use pd.DataFrame.sub with the parameter axis=0:
df_3idx.sub(df_2idx[0], axis=0)
                                   0         1         2
Country Firm  Responsible
FR      Firm1 Andy          0.027800  3.316148  0.804833
UK      Firm1 Andy         -2.009797 -1.830799 -0.417737
              Peter        -1.174544  0.644006 -1.150073
US      Firm2 Peter        -2.211121 -3.825443 -4.391965
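If unstacking is too expensive, another option is to align the two-level values onto the three-level index yourself. A minimal sketch, using the frames defined above (the intermediate name vals is just for illustration):
vals = df_2idx[0].reindex(df_3idx.index.droplevel('Responsible'))
vals.index = df_3idx.index          # restore the full 3-level index
result = df_3idx.sub(vals, axis=0)  # same numbers as df_3idx.sub(df_2idx[0], axis=0)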

Related

Update an existing column in one dataframe based on the value of a column in another dataframe

I have two csv files as my raw data to read into different dataframes. One is called 'employee' and the other is called 'origin'. Since I cannot upload the files here, I have hardcoded the data into the dataframes below. The task I'm trying to solve is to update the 'Eligible' column in employee_details with 'Yes' or 'No' based on the value of the 'Country' column in origin_details: if Country equals 'UK', put 'Yes' in the Eligible column for that Personal_ID; otherwise put 'No'.
import pandas as pd
import numpy as np
employee = {
    'Personal_ID': ['1000123', '1100258', '1104682', '1020943'],
    'Name': ['Tom', 'Joseph', 'Krish', 'John'],
    'Age': ['40', '35', '43', '51'],
    'Eligible': ' '}
origin = {
    'Personal_ID': ['1000123', '1100258', '1104682', '1020943', '1573482', '1739526'],
    'Country': ['UK', 'USA', 'FRA', 'SWE', 'UK', 'AU']}
employee_details = pd.DataFrame(employee)
origin_details = pd.DataFrame(origin)
employee_details['Eligible'] = np.where((origin_details['Country']) == 'UK', 'Yes', 'No')
print(employee_details)
print(origin_details)
The above code produces the following error message:
ValueError: Length of values (6) does not match length of index (4)
However, I am expecting to see the below as my output.
Personal_ID Name Age Eligible
0 1000123 Tom 40 Yes
1 1100258 Joseph 35 No
2 1104682 Krish 43 No
3 1020943 John 51 No
I also don't want to delete anything from my dataframes just to match the sizes in the ValueError message, because I may need the extra Personal_IDs in origin_details later. Alternatively, I could keep all the existing Personal_IDs in the raw data (employee_details, origin_details), create a new dataframe that extracts only the records with matching Personal_IDs, and apply the np.where() condition there.
Please advise! Any help is appreciated, thank you!
You can merge the two dataframes on Personal_ID and then use np.where.
Merge with how='outer' to keep all Personal_IDs:
df_merge = pd.merge(employee_details, origin_details, on='Personal_ID', how='outer')
df_merge['Eligible'] = np.where(df_merge['Country']=='UK', 'Yes', 'No')
Personal_ID Name Age Eligible Country
0 1000123 Tom 40 Yes UK
1 1100258 Joseph 35 No USA
2 1104682 Krish 43 No FRA
3 1020943 John 51 No SWE
4 1573482 NaN NaN Yes UK
5 1739526 NaN NaN No AU
If you don't want to keep all Personal_IDs, you can merge with how='inner' and you won't see the NaNs:
df_merge = pd.merge(employee_details, origin_details, on='Personal_ID', how='inner')
df_merge['Eligible'] = np.where(df_merge['Country']=='UK', 'Yes', 'No')
Personal_ID Name Age Eligible Country
0 1000123 Tom 40 Yes UK
1 1100258 Joseph 35 No USA
2 1104682 Krish 43 No FRA
3 1020943 John 51 No SWE
You are using a pandas Series inside a NumPy function, np.where(origin_details['Country'] == 'UK'), which returns a plain array of length 6 that cannot be aligned with the four rows of employee_details. I believe this is the problem.
Try:
employee_details['Eligible'] = origin_details['Country'].apply(lambda x: 'Yes' if x == 'UK' else 'No')
This assignment aligns on the index, so the two extra origin rows are dropped. It is generally easier and faster to stay within pandas when analyzing dataframes instead of converting them back to NumPy arrays.
First, about the exception: you are lucky you got it. Had both tables been the same length, your code would have run, but there is an assumption in it that I don't think you considered: the IDs may not be in the same order, or, as in the example, one table may hold more IDs than the other. With equal lengths but a different order, you would have gotten incorrect Eligible values in every row. The correct way to do this is as follows.
First, join the tables into one on Personal_ID, using a left join so you don't lose employees that have no origin info:
combine_df = pd.merge(employee_details, origin_details, on='Personal_ID', how='left')
use the apply function to fill the new column
combine_df['Eligible'] = combine_df['Country'].apply(lambda x:'Yes' if x=='UK' else 'No')
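For completeness, a hedged alternative sketch that avoids merging altogether, assuming Personal_ID is unique in origin_details: map each employee's Personal_ID to its Country, then test that Series. employee_details keeps its original four rows throughout.
country = employee_details['Personal_ID'].map(
    origin_details.set_index('Personal_ID')['Country'])
employee_details['Eligible'] = np.where(country == 'UK', 'Yes', 'No')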

Split a column in Python pandas

I'm sorry if I can't explain the issue properly, since I don't really understand it that much myself. I'm starting to learn Python, and to practice I try to redo projects from my day-to-day job in Python. Right now I'm stuck on a project and would like some help or guidance. I have a dataframe that looks like this:
Index Country Name IDs
0 USA John PERSID|12345
SSO|John123
STARTDATE|20210101
WAVE|WAVE39
--------------------------------------------
1 UK Jane PERSID|25478
SSO|Jane123
STARTDATE|20210101
WAVE|WAVE40
(I apologize for not creating a proper table in this post; the separator of the IDs is a |.) But you get the idea: every person has 4 IDs, and they are all in the same "cell" of the dataframe, each ID separated from its value by a pipe. I need to split those IDs from their values and put them in separate columns, so that I get something like this:
index  Country  Name  PERSID  SSO      STARTDATE  WAVE
0      USA      John  12345   John123  20210101   WAVE39
1      UK       Jane  25478   Jane123  20210101   WAVE40
Now, adding to the complexity of the table itself, there are other issues: for example, the order of the IDs won't be the same for everyone, and some people will be missing some of the IDs.
I honestly have no idea where to begin. The first thing I thought of trying was to split the IDs column by spaces and then split the result of that by pipes, to create a dictionary, convert it to a dataframe, and then join it to my original dataframe using the index.
But as I said, my knowledge of Python is quite pathetic, so that failed catastrophically. I only got to the first step of that plan with Client_ids = df.IDs.str.split(), which returns a series with the IDs separated from each other, like ['PERSID|12345', 'SSO|John123', 'STARTDATE|20210101', 'WAVE|Wave39'], but I can't find a way to split it again because I keep getting an error saying that the list object doesn't have attribute 'split'.
How should I approach this? What alternatives do I have?
Thank you in advance for any help or recommendation
You have a few options to consider; here's how I would do it.
Split the values in IDs on \n and |, build a dictionary from each consecutive key/value pair of the split, then join the result back to the dataframe and drop the IDs and temp columns.
import pandas as pd
df = pd.DataFrame([
["USA", "John","""PERSID|12345
SSO|John123
STARTDATE|20210101
WAVE|WAVE39"""],
["UK", "Jane", """PERSID|25478
SSO|Jane123
STARTDATE|20210101
WAVE|WAVE40"""],
["CA", "Jill", """PERSID|12345
STARTDATE|20210201
WAVE|WAVE41"""]], columns=['Country', 'Name', 'IDs'])
# split on newlines and pipes, then zip alternating tokens into {key: value}
df['temp'] = df['IDs'].str.split('\n|\|').apply(lambda x: {k: v for k, v in zip(x[::2], x[1::2])})
# expand the dicts into columns and join back on the index, then tidy up
df = df.join(pd.DataFrame(df['temp'].values.tolist(), df.index))
df = df.drop(columns=['IDs', 'temp'])
print (df)
With this approach, it does not matter if a row is missing some of the IDs; it will sort itself out.
The output of this will be:
Original DataFrame:
Country Name IDs
0 USA John PERSID|12345
SSO|John123
STARTDATE|20210101
WAVE|WAVE39
1 UK Jane PERSID|25478
SSO|Jane123
STARTDATE|20210101
WAVE|WAVE40
2 CA Jill PERSID|12345
STARTDATE|20210201
WAVE|WAVE41
Updated DataFrame:
Country Name PERSID SSO STARTDATE WAVE
0 USA John 12345 John123 20210101 WAVE39
1 UK Jane 25478 Jane123 20210101 WAVE40
2 CA Jill 12345 NaN 20210201 WAVE41
Note that Jill did not have a SSO value. It set the value to NaN by default.
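If the per-row dictionary building above proves slow on a large frame, here is a hedged alternative sketch built on str.extractall; it assumes every line inside IDs is exactly one key|value pair, with no pipes in the values:
# pull out every key|value pair, one row per pair
pairs = df['IDs'].str.extractall(r'(?P<key>[^|\n]+)\|(?P<val>[^\n]+)')
# pivot the pairs into one column per key; missing keys become NaN
wide = pairs.reset_index().pivot(index='level_0', columns='key', values='val')
df.drop(columns='IDs').join(wide)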
First generate your dataframe
df1 = pd.DataFrame([["USA", "John","""PERSID|12345
SSO|John123
STARTDATE|20210101
WAVE|WAVE39"""],
["UK", "Jane", """
PERSID|25478
SSO|Jane123
STARTDATE|20210101
WAVE|WAVE40"""]], columns=['Country', 'Name', 'IDs'])
Then split the last column with a lambda, building one dict per row:
df2 = pd.DataFrame(list(df1.apply(lambda r: {p: q for p, q in [x.split("|") for x in r.IDs.split()]}, axis=1).values))
Lastly concat the dataframes together.
df = pd.concat([df1, df2], axis=1)
Quick solution
remove_word = ["PERSID", "SSO", "STARTDATE", "WAVE"]
for i, col in enumerate(remove_word):
    df[col] = df.IDs.str.replace('|'.join(remove_word), '', regex=True).str.split("|").str[i+1]
Use regex named capture groups with pandas.Series.str.extract:
def ng(x):
    return f'(?:{x}\|(?P<{x}>[^\n]+))?\n?'

fields = ['PERSID', 'SSO', 'STARTDATE', 'WAVE']
pat = ''.join(map(ng, fields))

df.drop('IDs', axis=1).join(df['IDs'].str.extract(pat))
Country Name PERSID SSO STARTDATE WAVE
0 USA John 12345 John123 20210101 WAVE39
1 UK Jane 25478 Jane123 20210101 WAVE40
2 CA Jill 12345 NaN 20210201 WAVE41
Setup
Credit to @JoeFerndz for the sample df.
NOTE: this sample has missing values in some 'IDs'.
df = pd.DataFrame([
["USA", "John","""PERSID|12345
SSO|John123
STARTDATE|20210101
WAVE|WAVE39"""],
["UK", "Jane", """PERSID|25478
SSO|Jane123
STARTDATE|20210101
WAVE|WAVE40"""],
["CA", "Jill", """PERSID|12345
STARTDATE|20210201
WAVE|WAVE41"""]], columns=['Country', 'Name', 'IDs'])
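A brief note on why this copes with missing IDs: each ng(x) fragment wraps its field in an optional non-capturing group, e.g. ng('SSO') expands to (?:SSO\|(?P<SSO>[^\n]+))?\n?, so a row without an SSO line simply leaves that group unmatched and str.extract fills the column with NaN.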

How do I transpose every nth term in a list to a column from Pandas import?

I imported some data, and it's all in one column:
Jane
M
52,000
NYC
Mike
M
38,000
LA
and so on; I have 1000 lines.
How do I get it to be:
Name Sex salary City
Jane M 52,000 NYC
Mike M 38,000 LA
So every 5 lines make it a column I guess.
Thanks
Simply reshape.
a = df['column_name'].to_numpy().reshape(-1, 4)
Notice that 4 above means 4 columns, which looks like what you have after your edit. Before, it looked like you had 5 columns. Just adapt it to whatever you have.
To make it a DataFrame
pd.DataFrame(a, columns=['Name', 'Sex', 'salary', 'City'])
Since your question is not well defined and changed a lot after your edit, I hope the above can at least serve as a lead on what you need.
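A self-contained sketch of the reshape approach above; the column name and sample values are assumptions standing in for the imported file:
import pandas as pd

df = pd.DataFrame({'column_name': ['Jane', 'M', '52,000', 'NYC',
                                   'Mike', 'M', '38,000', 'LA']})
a = df['column_name'].to_numpy().reshape(-1, 4)  # 4 fields per record
out = pd.DataFrame(a, columns=['Name', 'Sex', 'salary', 'City'])
print(out)
#    Name Sex  salary City
# 0  Jane   M  52,000  NYC
# 1  Mike   M  38,000   LA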
You need to grab every subseries of 4 elements (iloc[]), transpose it (.T) and concat the pieces.
data = pd.concat([df.iloc[s:s+4].reset_index(drop=True).T for s in range(0, len(df), 4)]).reset_index(drop=True)
data.columns = ['Name', 'Sex', 'Salary', 'City']
data['Sex'] = data['Sex'].str[0]
Output:
   Name Sex  Salary City
0  Jane   M  52,000  NYC
1  Mike   M  38,000   LA

How to transform pandas JSON column into dataframe?

I have a .csv file with a mix of columns, some of which contain entries in (nested) JSON syntax. I want to extract the relevant data from these columns to obtain a more data-rich dataframe for further analysis. I've checked this tutorial on Kaggle, but I failed to obtain the desired result.
In order to better explain my problem I've prepared a dummy version of a database below.
raw = {"team":["Team_1","Team_2"],
"who":[[{"name":"Andy", "age":22},{"name":"Rick", "age":30}],[{"name":"Oli", "age":19},{"name":"Joe", "age":21}]]}
df = pd.DataFrame(raw)
I'd like to generate the following columns (or equivalent):
team name_1 name_2 age_1 age_2
Team_1 Andy Rick 22 30
Team_2 Oli Joe 19 21
I've tried the following.
Code 1:
test_norm = json_normalize(data=df)
AttributeError: 'str' object has no attribute 'values'
Code 2:
test_norm = json_normalize(data=df, record_path='who')
TypeError: string indices must be integers
Code 3:
test_norm = json_normalize(data=df, record_path='who', meta=[team])
TypeError: string indices must be integers
Is there any way to do this effectively? I've looked for a solution in other Stack Overflow topics and I cannot find a working one with json_normalize.
I also had trouble using json_normalize on the lists of dicts contained in the who column. My workaround was to reformat each row into a dict with unique keys (name_1, age_1, name_2, etc.) for each team member's name/age. After that, creating a dataframe with your desired structure was trivial.
Here are my steps. Beginning with your example:
raw = {"team":["Team_1","Team_2"],
"who":[[{"name":"Andy", "age":22},{"name":"Rick", "age":30}],[{"name":"Oli", "age":19},{"name":"Joe", "age":21}]]}
df = pd.DataFrame(raw)
df
team who
0 Team_1 [{'name': 'Andy', 'age': 22}, {'name': 'Rick',...
1 Team_2 [{'name': 'Oli', 'age': 19}, {'name': 'Joe', '...
Write a method to reformat a list as a Dict and apply to each row in the who column:
def reformat(x):
    res = {}
    for i, item in enumerate(x):
        res['name_' + str(i+1)] = item['name']
        res['age_' + str(i+1)] = item['age']
    return res
df['who'] = df['who'].apply(lambda x: reformat(x))
df
team who
0 Team_1 {'name_1': 'Andy', 'age_1': 22, 'name_2': 'Ric...
1 Team_2 {'name_1': 'Oli', 'age_1': 19, 'name_2': 'Joe'...
Use json_normalize on the who column. Then ensure the columns of the normalized dataframe appear in the desired order:
import pandas as pd
from pandas.io.json import json_normalize
n = json_normalize(data = df['who'], meta=['team'])
n = n.reindex(sorted(n.columns, reverse=True, key=len), axis=1)
n
name_1 name_2 age_1 age_2
0 Andy Rick 22 30
1 Oli Joe 19 21
Join the dataframe created by json_normalize back to the original df, and drop the who column:
df = df.join(n).drop('who', axis=1)
df
team name_1 name_2 age_1 age_2
0 Team_1 Andy Rick 22 30
1 Team_2 Oli Joe 19 21
If your real .csv file has too many rows, my solution may be a bit too expensive, seeing as it iterates over each row and then over each entry inside the list contained in each row. If (hopefully) this isn't the case, my approach should be good enough.
One option would be to unpack the dictionary yourself. Like so:
from pandas.io.json import json_normalize
raw = {"team":["Team_1","Team_2"],
"who":[[{"name":"Andy", "age":22},{"name":"Rick", "age":30}],[{"name":"Oli", "age":19},{"name":"Joe", "age":21}]]}
# add the corresponding team to the dictionary containing the person information
for idx, list_of_people in enumerate(raw['who']):
    for person in list_of_people:
        person['team'] = raw['team'][idx]
# flatten the dictionary
list_of_dicts = [dct for list_of_people in raw['who'] for dct in list_of_people]
# normalize to dataframe
json_normalize(list_of_dicts)
# due to unpacking of dict, this results in the same as doing
pd.DataFrame(list_of_dicts)
This output is a little different: one row per person. That long format is often more convenient for further analysis.
Output:
age name team
22 Andy Team_1
30 Rick Team_1
19 Oli Team_2
21 Joe Team_2
You can iterate through each element in raw['who'] separately, but if you do, the resulting dataframe will have the two team members in separate rows.
Example:
json_normalize(raw['who'][0])
Output:
age name
22 Andy
30 Rick
You can flatten these out into a single row and then concatenate all the rows to get your final output.
def flatten(df_temp):
    df_temp.index = df_temp.index.astype(str)
    flattened_df = df_temp.unstack().to_frame().sort_index(level=1).T
    flattened_df.columns = flattened_df.columns.map('_'.join)
    return flattened_df
df = pd.concat([flatten(pd.DataFrame(json_normalize(x))) for x in raw['who']])
df['team'] = raw['team']
Output:
age_0 name_0 age_1 name_1 team
22 Andy 30 Rick Team_1
19 Oli 21 Joe Team_2
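On recent pandas (1.0+), a shorter hedged sketch reaches the same long format with explode plus the top-level pd.json_normalize, assuming df is the frame built from raw above:
long = df.explode('who').reset_index(drop=True)   # one row per person
out = long.drop(columns='who').join(pd.json_normalize(long['who'].tolist()))
#      team  name  age
# 0  Team_1  Andy   22
# 1  Team_1  Rick   30
# 2  Team_2   Oli   19
# 3  Team_2   Joe   21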

How to fill pandas dataframe columns with random dictionary values

I'm new to pandas and I would like to play with random text data. I am trying to add two new columns to a DataFrame df, each filled by a key (newcol1) and value (newcol2) randomly selected from a dictionary.
countries = {'Africa':'Ghana','Europe':'France','Europe':'Greece','Asia':'Vietnam','Europe':'Lithuania'}
My df already has 2 columns and I'd like something like this:
   Year Approved Continent    Country
0  2016      Yes    Africa      Ghana
1  2016      Yes    Europe  Lithuania
2  2017       No    Europe     Greece
I can certainly use a for or while loop to fill df['Continent'] and df['Country'], but I sense .apply() and np.random.choice may provide a simpler, more pandorable solution.
Yep, you're right. You can use np.random.choice with map:
df
   Year Approved
0  2016      Yes
1  2016      Yes
2  2017       No
df['Continent'] = np.random.choice(list(countries), len(df))
df['Country'] = df['Continent'].map(countries)
df
   Year Approved Continent    Country
0  2016      Yes    Africa      Ghana
1  2016      Yes      Asia    Vietnam
2  2017       No    Europe  Lithuania
You choose len(df) keys at random from the country key list, then use the countries dictionary as a mapper to find the country equivalent of each picked key.
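A self-contained sketch of this answer; the Year/Approved values are assumptions, and note that Python keeps only the last of the duplicate 'Europe' keys, so the dictionary effectively maps Europe to Lithuania:
import numpy as np
import pandas as pd

countries = {'Africa': 'Ghana', 'Europe': 'Lithuania', 'Asia': 'Vietnam'}
df = pd.DataFrame({'Year': [2016, 2016, 2017],
                   'Approved': ['Yes', 'Yes', 'No']})  # assumed starting frame
df['Continent'] = np.random.choice(list(countries), len(df))
df['Country'] = df['Continent'].map(countries)
print(df)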
You could also try using DataFrame.sample():
df.join(
    pd.DataFrame(list(countries.items()), columns=["continent", "country"])
      .sample(len(df), replace=True)
      .reset_index(drop=True)
)
Which can be made faster if your continent-country map is already a dataframe.
If you're on Python 3.6 or newer, another method would be to use random.choices():
from random import choices

df.join(
    pd.DataFrame(choices([*countries.items()], k=len(df)), columns=["continent", "country"])
)
random.choices() is similar to numpy.random.choice() except that you can pass a list of key-value tuple pairs whereas numpy.random.choice() only accepts 1-D arrays.
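A tiny sketch of that difference, with assumed sample pairs:
from random import choices
pairs = [('Africa', 'Ghana'), ('Asia', 'Vietnam')]
choices(pairs, k=3)  # fine: returns 3 (continent, country) tuples
# np.random.choice(pairs) would raise ValueError: a must be 1-dimensional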
