I have a list like this:
x = ['Las Vegas', 'San Francisco', 'Dallas']
And a dataframe that looks a bit like this:
import pandas as pd
data = [['Las Vegas (Clark County)', 25], ['New York', 23],
        ['Dallas', 27]]
df = pd.DataFrame(data, columns = ['City', 'Value'])
I want to replace the city value "Las Vegas (Clark County)" in the DF with "Las Vegas". My dataframe contains multiple cities with different names that need to be changed. I know I could write a regex to just strip off the parenthesized part, but I was wondering if there is a more clever, generic way.
Use Series.str.extract with the list values joined by | (regex OR), then replace non-matched values with the originals using Series.fillna:
df['City'] = df['City'].str.extract(f'({"|".join(x)})', expand=False).fillna(df['City'])
print (df)
City Value
0 Las Vegas 25
1 New York 23
2 Dallas 27
Another idea is to use Series.str.contains in a loop, but it will be slow for a large DataFrame with many values in the list:
for val in x:
    df.loc[df['City'].str.contains(val), 'City'] = val
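Here is a self-contained version of the extract approach. The re.escape call is an extra safeguard not in the answer above, for the case where a city name contains regex metacharacters such as parentheses:

```python
import re

import pandas as pd

x = ['Las Vegas', 'San Francisco', 'Dallas']
df = pd.DataFrame({'City': ['Las Vegas (Clark County)', 'New York', 'Dallas'],
                   'Value': [25, 23, 27]})

# Escape each list value so names containing regex metacharacters are safe,
# then join them into a single alternation pattern.
pattern = '(' + '|'.join(re.escape(v) for v in x) + ')'

# Non-matching rows come back as NaN and are restored from the original column.
df['City'] = df['City'].str.extract(pattern, expand=False).fillna(df['City'])
```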
Say I have a Pandas multi-index data frame with 3 indices:
import pandas as pd
import numpy as np
arrays = [['UK', 'UK', 'US', 'FR'], ['Firm1', 'Firm1', 'Firm2', 'Firm1'], ['Andy', 'Peter', 'Peter', 'Andy']]
idx = pd.MultiIndex.from_arrays(arrays, names = ('Country', 'Firm', 'Responsible'))
df_3idx = pd.DataFrame(np.random.randn(4,3), index = idx)
df_3idx
0 1 2
Country Firm Responsible
UK Firm1 Andy 0.237655 2.049636 0.480805
Peter 1.135344 0.745616 -0.577377
US Firm2 Peter 0.034786 -0.278936 0.877142
FR Firm1 Andy 0.048224 1.763329 -1.597279
I furthermore have another pd.DataFrame consisting of the unique combinations of multi-index levels 1 and 2 from the above data:
arrays = [['UK', 'US', 'FR'], ['Firm1', 'Firm2', 'Firm1']]
idx = pd.MultiIndex.from_arrays(arrays, names = ('Country', 'Firm'))
df_2idx = pd.DataFrame(np.random.randn(3,1), index = idx)
df_2idx
0
Country Firm
UK Firm1 -0.103828
US Firm2 0.096192
FR Firm1 -0.686631
I want to subtract from each value in df_3idx the corresponding value in df_2idx. For instance, I want to subtract -0.103828 from every value of the first two rows, as index levels 1 and 2 of both dataframes match.
Does anybody know how to do this? I figured I could simply unstack the first dataframe and then subtract, but I am getting an error message.
df_3idx.unstack('Responsible').sub(df_2idx, axis=0)
ValueError: cannot join with no overlapping index names
Unstacking might anyway not be a preferable solution as my data is very big and unstacking might take a lot of time.
I would appreciate any help. Many thanks in advance!
There is a related question not focused on MultiIndex, but the answer doesn't really depend on that: the sub method will align on the matching index levels. Use pd.DataFrame.sub with the parameter axis=0:
df_3idx.sub(df_2idx[0], axis=0)
0 1 2
Country Firm Responsible
FR Firm1 Andy 0.027800 3.316148 0.804833
UK Firm1 Andy -2.009797 -1.830799 -0.417737
Peter -1.174544 0.644006 -1.150073
US Firm2 Peter -2.211121 -3.825443 -4.391965
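The alignment can be verified with deterministic values instead of random ones (a sketch; the ones/constants below are illustrative, not from the question):

```python
import numpy as np
import pandas as pd

idx3 = pd.MultiIndex.from_arrays(
    [['UK', 'UK', 'US', 'FR'], ['Firm1', 'Firm1', 'Firm2', 'Firm1'],
     ['Andy', 'Peter', 'Peter', 'Andy']],
    names=('Country', 'Firm', 'Responsible'))
df_3idx = pd.DataFrame(np.ones((4, 3)), index=idx3)

idx2 = pd.MultiIndex.from_arrays(
    [['UK', 'US', 'FR'], ['Firm1', 'Firm2', 'Firm1']],
    names=('Country', 'Firm'))
df_2idx = pd.DataFrame([[0.5], [0.25], [1.0]], index=idx2)

# sub aligns df_2idx[0] on the shared 'Country'/'Firm' levels,
# broadcasting over the 'Responsible' level and over the columns.
result = df_3idx.sub(df_2idx[0], axis=0)
```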
I have a dataframe values like this:
name foreign_name acronym alias
United States États-Unis USA USA
I want to merge all those four columns in a row into one single columns 'names', so I do:
merge = lambda x: '|'.join([a for a in x.unique() if a])
df['names'] = df[['name', 'foreign_name', 'acronym', 'alias',]].apply(merge, axis=1)
The problem with this code is that, it doesn't remove the duplicate 'USA', instead it gets:
names = 'United States|États-Unis|USA|USA'
Where am I wrong?
Aggregate each row to a set to eliminate duplicates, turn the set into a list, then apply str.join('|') to concatenate the strings with a | separator:
df['names'] = df.agg(set, axis=1).map(list).str.join('|')
MCVE:
import pandas as pd
import numpy as np
d= {'name': {0: 'United States'},
'foreign_name': {0: 'États-Unis'},
'acronym': {0: 'USA'},
'alias': {0: 'USA'}}
df = pd.DataFrame(d)
merge = lambda x: '|'.join([a for a in x.unique() if a])
df['names'] = df[['name', 'foreign_name', 'acronym', 'alias',]].apply(merge, axis=1)
print(df)
Output:
name foreign_name acronym alias names
0 United States États-Unis USA USA United States|États-Unis|USA
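For completeness, here is the set-based aggregation run against the same MCVE. Note that Python sets are unordered, so the order of the pieces in the joined string may vary between runs:

```python
import pandas as pd

d = {'name': {0: 'United States'},
     'foreign_name': {0: 'États-Unis'},
     'acronym': {0: 'USA'},
     'alias': {0: 'USA'}}
df = pd.DataFrame(d)

# set() drops the duplicate 'USA'; the resulting element order is arbitrary.
df['names'] = df.agg(set, axis=1).map(list).str.join('|')
```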
You just need to tell it to operate along the row axis with axis=1:
df.apply(lambda r: "|".join(r.unique()), axis=1)
output
United States|États-Unis|USA
dtype: object
I have a dataframe which looks like:
df =
|Name Nationality Family etc.....
0|John Born in Spain. Wife
1|nan But live in England son
2|nan nan daughter
Some columns only have one row, but others have answers spread over a few rows. How could I merge the rows onto each other so it would look something like the below:
df =
|Name Nationality Family etc....
0|John Born in Spain. But live in England Wife Son Daughter
Perhaps this will do it for you:
import pandas as pd
import numpy as np

# your dataframe
df = pd.DataFrame(
    {'Name': ['John', np.nan, np.nan],
     'Nationality': ['Born in Spain.', 'But live in England', np.nan],
     'Family': ['Wife', 'son', 'daughter']})
def squeeze_df(df):
    new_df = {}
    for col in df.columns:
        # str.cat skips NaN entries, joining only the non-null strings
        new_df[col] = [df[col].str.cat(sep=' ')]
    return pd.DataFrame(new_df)
squeeze_df(df)
# >> out:
# Name Nationality Family
# 0 John Born in Spain. But live in England Wife son daughter
I made the assumption that you only need to do this for one single person (i.e. squeezing/joining the rows of the dataframe into a single row). Also, what does "etc...." mean? For example, will you have integer or floating point values in the dataframe?
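If the frame held several people rather than one, the same idea extends with groupby. This is a sketch under the assumption that a non-null Name marks the start of each person's block (the forward-fill step and the 'Mary' rows are illustrative, not from the question):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'Name': ['John', np.nan, np.nan, 'Mary', np.nan],
     'Nationality': ['Born in Spain.', 'But live in England', np.nan,
                     'Born in France.', np.nan],
     'Family': ['Wife', 'son', 'daughter', 'Husband', np.nan]})

# Forward-fill names so each row knows which person it belongs to,
# then join the non-null strings per person with str.cat.
df['Name'] = df['Name'].ffill()
out = df.groupby('Name', sort=False).agg(lambda s: s.str.cat(sep=' '))
```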
I have a dataframe df with two of the columns being 'city' and 'zip_code':
df = pd.DataFrame({'city': ['Cambridge','Washington','Miami','Cambridge','Miami',
'Washington'], 'zip_code': ['12345','67891','23457','','','']})
As shown above, a particular city has its zip code in one of the rows, but the zip_code is missing for the same city in some other rows. I want to fill those missing values based on the zip_code value of that city in another row. Basically, wherever there is a missing zip_code, it checks the zip_code for that city in other rows and, if found, fills in the value. If not found, it fills 'NA'.
How do I accomplish this task using pandas?
You can go for:
import numpy as np
df['zip_code'] = df.replace(r'', np.nan).groupby('city')['zip_code'].fillna(method='ffill').fillna(method='bfill')
>>> df
city zip_code
0 Cambridge 12345
1 Washington 67891
2 Miami 23457
3 Cambridge 12345
4 Miami 23457
5 Washington 67891
You can check the string length using str.len. For the rows with an empty zip_code, filter the main df to the rows with valid zip codes, set the index to 'city', and call map on the 'city' column, which performs the lookup and fills those values:
In [255]:
df.loc[df['zip_code'].str.len() == 0, 'zip_code'] = df['city'].map(df[df['zip_code'].str.len() == 5].set_index('city')['zip_code'])
df
Out[255]:
city zip_code
0 Cambridge 12345
1 Washington 67891
2 Miami 23457
3 Cambridge 12345
4 Miami 23457
5 Washington 67891
If your real data has lots of repeating values then you'll need to additionally call drop_duplicates first:
df.loc[df['zip_code'].str.len() == 0, 'zip_code'] = df['city'].map(df[df['zip_code'].str.len() == 5].drop_duplicates(subset='city').set_index('city')['zip_code'])
The reason you need to do this is that map will raise an error if there are duplicate index entries.
My suggestion would be to first create a dictionary that maps from the city to the zip code. You can build this dictionary from the one DataFrame, and then use it to fill in all missing zip code values.
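A minimal sketch of that suggestion, assuming (as in the question) that missing zip codes are empty strings:

```python
import pandas as pd

df = pd.DataFrame({'city': ['Cambridge', 'Washington', 'Miami', 'Cambridge',
                            'Miami', 'Washington'],
                   'zip_code': ['12345', '67891', '23457', '', '', '']})

# Build a city -> zip_code dict from the rows that do have a zip code.
mapping = (df.loc[df['zip_code'] != '', ['city', 'zip_code']]
             .drop_duplicates('city')
             .set_index('city')['zip_code']
             .to_dict())

# Fill the empty cells from the dict; cities with no known zip become 'NA'.
missing = df['zip_code'] == ''
df.loc[missing, 'zip_code'] = df.loc[missing, 'city'].map(mapping).fillna('NA')
```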
I have the following data frame:
population GDP
country
United Kingdom 4.5m 10m
Spain 3m 8m
France 2m 6m
I also have the following information in a 2-column dataframe (happy for this to be made into another data structure if that would be more beneficial, as the plan is for it to be stored in a VARS file):
county code
Spain es
France fr
United Kingdom uk
The 'mapping' data structure will be stored in a random order, as countries will be added/removed at random times.
What is the best way to re-index the data frame to its country code from its country name?
Is there a smart solution that would also work on other columns so for example if a data frame was indexed on date but one column was df['county'] then you could change df['country'] to its country code? Finally is there a third option that would add an additional column that was either country/code which selected the right code based on a country name in another column?
I think you can use Series.map, but it works only with a Series, so you need Index.to_series first. Finally, use rename_axis (new in pandas 0.18.0):
df1.index = df1.index.to_series().map(df2.set_index('county').code)
df1 = df1.rename_axis('county')
#pandas below 0.18.0
#df1.index.name = 'county'
print (df1)
population GDP
county
uk 4.5m 10m
es 3m 8m
fr 2m 6m
It is the same as mapping with a dict:
d = df2.set_index('county').code.to_dict()
print (d)
{'France': 'fr', 'Spain': 'es', 'United Kingdom': 'uk'}
df1.index = df1.index.to_series().map(d)
df1 = df1.rename_axis('county')
#pandas below 0.18.0
#df1.index.name = 'county'
print (df1)
population GDP
county
uk 4.5m 10m
es 3m 8m
fr 2m 6m
EDIT:
Another solution with Index.map, so to_series can be omitted:
d = df2.set_index('county').code.to_dict()
print (d)
{'France': 'fr', 'Spain': 'es', 'United Kingdom': 'uk'}
df1.index = df1.index.map(d.get)
df1 = df1.rename_axis('county')
#pandas below 0.18.0
#df1.index.name = 'county'
print (df1)
population GDP
county
uk 4.5m 10m
es 3m 8m
fr 2m 6m
Here are some brief ways to approach your 3 questions. More details below:
1) How to change index based on mapping in separate df
Use df_with_mapping.to_dict("split") to create a dictionary, then use a dict comprehension to change it into {"old1":"new1",...,"oldn":"newn"} form, then use df.index = df.base_column.map(dictionary) to get the changed index.
2) How to change index if the new column is in the same df:
df.index = df["column_you_want"]
3) Creating a new column by mapping on a old column:
df["new_column"] = df["old_column"].map({"old1":"new1",...,"oldn":"newn"})
1) Mapping for the current index exists in separate dataframe but you don't have the mapped column in the dataframe yet
This is essentially the same as question 2 with the additional step of creating a dictionary for the mapping you want.
#creating the mapping dictionary in the form of current index : future index
df2 = pd.DataFrame([["es"],["fr"]],index = ["spain","france"])
interm_dict = df2.to_dict("split") #Creates a dictionary split into column labels, index labels and data
mapping_dict = {country:data[0] for country,data in zip(interm_dict["index"],interm_dict['data'])}
#We only want the first column of the data and the index, so we build a new dict with a dict comprehension and zip
df["country"] = df.index #Create a new column if u want to save the index
df.index = pd.Series(df.index).map(mapping_dict) #change the index
df.index.name = "" #Blanks out index name
df = df.drop("county code",1) #Drops the county code column to avoid duplicate columns
Before:
county code language
spain es spanish
france fr french
After:
language country
es spanish spain
fr french france
2) Changing the current index to one of the columns already in the dataframe
df = pd.DataFrame([["es","spanish"],["fr","french"]], columns = ["county code","language"], index = ["spain", "french"])
df["country"] = df.index #if you want to save the original index
df.index = df["county code"] #The only step you actually need
df.index.name = "" #if you want a blank index name
df = df.drop("county code",1) #if you dont want the duplicate column
Before:
county code language
spain es spanish
french fr french
After:
language country
es spanish spain
fr french french
3) Creating an additional column based on another column
This is again essentially the same as step 2 except we create an additional column instead of assigning .index to the created series.
df = pd.DataFrame([["es","spanish"],["fr","french"]], columns = ["county code","language"], index = ["spain", "france"])
df["city"] = df["county code"].map({"es":"barcelona","fr":"paris"})
Before:
county code language
spain es spanish
france fr french
After:
county code language city
spain es spanish barcelona
france fr french paris