In my dataframe, there are several countries with numbers and/or parentheses in their names.
I want to remove the parentheses and numbers from these country names.
For example :
'Bolivia (Plurinational State of)' should be 'Bolivia',
'Switzerland17' should be 'Switzerland'.
Here is my code, but it doesn't seem to work:
import numpy as np
import pandas as pd

def func():
    energy = pd.ExcelFile('Energy Indicators.xls').parse('Energy')
    energy = energy.iloc[16:243][['Environmental Indicators: Energy', 'Unnamed: 3', 'Unnamed: 4', 'Unnamed: 5']].copy()
    energy.columns = ['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']
    o = "..."
    n = np.NaN
    energy = energy.replace('...', np.nan)
    energy['Energy Supply'] = energy['Energy Supply'] * 1000000
    old = ["Republic of Korea", "United States of America",
           "United Kingdom of " + "Great Britain and Northern Ireland",
           "China, Hong " + "Kong Special Administrative Region"]
    new = ["South Korea", "United States", "United Kingdom", "Hong Kong"]
    for i in range(0, 4):
        energy = energy.replace(old[i], new[i])
    # I'm trying to remove it here =====>
    p = "("
    for j in range(16, 243):
        if p in energy.iloc[j]['Country']:
            country = ""
            for c in energy.iloc[j]['Country']:
                while(c!=p & !c.isnumeric()):
                    country = c + country
            energy = energy.replace(energy.iloc[j]['Country'], country)
    return energy
Here is the .xls file I'm working on: https://drive.google.com/file/d/0B80lepon1RrYeDRNQVFWYVVENHM/view?usp=sharing
Use str.extract:
energy['Country'] = energy['Country'].str.extract('(^[a-zA-Z]+)', expand=False)
For example, with a toy dataframe:

df

                            country
0  Bolivia (Plurinational State of)
1                     Switzerland17

df['country'] = df['country'].str.extract('(^[a-zA-Z]+)', expand=False)
df

       country
0      Bolivia
1  Switzerland
To handle country names that contain spaces (very common), a small improvement to the regex is enough.
df

                            country
0  Bolivia (Plurinational State of)
1                     Switzerland17
2             West Indies (foo bar)

df['country'] = df['country'].str.extract(r'(^[a-zA-Z\s]+)', expand=False).str.strip()
df

       country
0      Bolivia
1  Switzerland
2  West Indies
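If you'd rather strip the unwanted parts than extract the wanted ones, str.replace works as well; here is a sketch against the asker's energy frame, assuming the column is named 'Country':

# Remove any parenthesized suffix and any digits, then trim whitespace
energy['Country'] = (energy['Country']
                     .str.replace(r'\(.*\)', '', regex=True)  # drop "(...)" groups
                     .str.replace(r'\d+', '', regex=True)     # drop stray digits
                     .str.strip())

This also keeps hyphens and apostrophes (e.g. 'Guinea-Bissau'), which the [a-zA-Z\s] character class above would cut off.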
I have a pandas dataset like below:
import pandas as pd

data = {'id': ['001', '002', '003'],
        'address': ["William J. Clare\n290 Valley Dr.\nCasper, WY 82604\nUSA, United States",
                    "1180 Shelard Tower\nMinneapolis, MN 55426\nUSA, United States",
                    "William N. Barnard\n145 S. Durbin\nCasper, WY 82601\nUSA, United States"]
        }
df = pd.DataFrame(data)
print(df)
I need to split the address column on its \n delimiters and create new columns: name, address line 1, city, state, zipcode, and country, like below:
id  Name                addressline1        City         State  Zipcode  Country
1   William J. Clare    290 Valley Dr.      Casper       WY     82604    United States
2   null                1180 Shelard Tower  Minneapolis  MN     55426    United States
3   William N. Barnard  145 S. Durbin       Casper       WY     82601    United States
I am learning Python and have been working on this since morning. Any help will be greatly appreciated.
Thanks,
Right now, pandas is returning a table with two columns. If you look at the value in the second column, the essential pieces of information are separated by commas. Therefore, if you saved your dataframe to df, you can do the following:
df['address_and_city'] = df['address'].apply(lambda x: x.split(',')[0])
df['state_and_postal'] = df['address'].apply(lambda x: x.split(',')[1])
df['country'] = df['address'].apply(lambda x: x.split(',')[2])
Now you have three additional columns in your dataframe; the last one already contains the full country information. From the first two columns you created, you can extract the information you need in a similar way:
df['address_first_line'] = df['address_and_city'].apply(lambda x: ' '.join(x.split('\n')[:-1]))
df['city'] = df['address_and_city'].apply(lambda x: x.split('\n')[-1])
df['state'] = df['state_and_postal'].apply(lambda x: x.split(' ')[1])
df['postal'] = df['state_and_postal'].apply(lambda x: x.split(' ')[2].split('\n')[0])
Now you should have all the columns you need. You can remove the excess columns with:
df.drop(columns=['address','address_and_city','state_and_postal'], inplace=True)
Of course, it can all be done faster and with fewer lines of code, but I think this is the clearest way of doing it, which I hope you will find useful. If you don't understand what I did there, check the documentation for the split and join string methods, and for the apply method, native to pandas.
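For what it's worth, the same result can be had in one vectorized pass with str.extract and named capture groups. This is a sketch reusing the df built in the question; it assumes every address follows exactly the four-part pattern in the sample (optional name line, street, 'City, ST ZIP', then 'USA, Country'):

pattern = (r'^(?:(?P<Name>[^\n]+)\n)?'    # optional first line: person name
           r'(?P<addressline1>[^\n]+)\n'  # street address
           r'(?P<City>[^,]+), '           # city, up to the comma
           r'(?P<State>[A-Z]{2}) '        # two-letter state code
           r'(?P<Zipcode>\d{5})\n'        # 5-digit ZIP
           r'USA, (?P<Country>.+)$')      # country after 'USA, '
parsed = df['address'].str.extract(pattern)
df = pd.concat([df[['id']], parsed], axis=1)

Named groups become column names directly, and the row without a name line simply gets NaN in the Name column.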
I would like to add the regional information to the main table, which contains entity and account columns, so that each row in the main table is duplicated for every region, just like the Append tool in Alteryx.
Is there a way to do this operation with Pandas in Python?
Thanks!
Unfortunately, no built-in method exists, as you need to build the Cartesian product of those DataFrames; check this fancy explanation of merging DataFrames in pandas.
But for your specific problem, try this:
import pandas as pd
import numpy as np

df1 = pd.DataFrame(columns=['Entity', 'Account'])
df1.Entity = ['Entity1', 'Entity1']
df1.Account = ['Sales', 'Cost']

df2 = pd.DataFrame(columns=['Region'])
df2.Region = ['North America', 'Europa', 'Asia']

def cartesian_product_simplified(left, right):
    # Pair every row index of `left` with every row index of `right`,
    # then stack the corresponding values side by side.
    la, lb = len(left), len(right)
    ia2, ib2 = np.broadcast_arrays(*np.ogrid[:la, :lb])
    return pd.DataFrame(
        np.column_stack([left.values[ia2.ravel()], right.values[ib2.ravel()]]))

resultdf = cartesian_product_simplified(df1, df2)
print(resultdf)
output:
         0      1              2
0  Entity1  Sales  North America
1  Entity1  Sales         Europa
2  Entity1  Sales           Asia
3  Entity1   Cost  North America
4  Entity1   Cost         Europa
5  Entity1   Cost           Asia
as expected.
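Side note: pandas 1.2 added a built-in cross join, so on a recent version the helper above can be replaced with one line that also keeps the column names:

# Requires pandas >= 1.2: pairs every row of df1 with every row of df2
resultdf = df1.merge(df2, how='cross')
print(resultdf)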
By the way, please provide the DataFrame as code next time, not as a screenshot or a link. It helps save time (please check how to ask).
I have a dataframe where I have set countries as the index. However, there are also groups of countries, such as Sub-Saharan Africa (IDA & IBRD countries) or Middle East & North Africa (IDA & IBRD countries). I want to delete those so that only individual countries remain.
Example input dataframe, where the indexes are:
Antigua and Barbuda
Angola
Arab World
Wanted output dataframe:
Antigua and Barbuda
Angola
My idea was to use pycountry; however, it does nothing:
countr = list(pycountry.countries)
for idx in df.index:
    if idx in countr:
        continue
    else:
        df.drop(index=idx)
Check your list of country names:
countries_list = []
for country in pycountry.countries:
    countries_list.append(country.name)
Checking output:
>>> print(countries_list[0:5])
['Aruba', 'Afghanistan', 'Angola', 'Anguilla', 'Åland Islands']
You can do this if you want to get a list of the countries that appear in your countries list:
import pycountry
import pandas as pd

country = {'Country Name': ['Antigua and Barbuda', 'USSR', 'United States', 'Germany', 'The Moon']}
df = pd.DataFrame(data=country)

countries_list = []
for country in pycountry.countries:
    countries_list.append(country.name)

new_df = []
for i in df['Country Name']:
    if i in countries_list:
        new_df.append(i)
Checking output:
>>> print(new_df)
['Antigua and Barbuda', 'United States', 'Germany']
Otherwise, for your specific code, try this:
Assuming you have your data in a dataframe 'df':
import pandas as pd
import pycountry

countries_list = []
for country in pycountry.countries:
    countries_list.append(country.name)

for idx in df.index:
    if idx in countries_list:
        continue
    else:
        # drop() returns a new frame by default; use inplace=True
        # (or assign the result back), otherwise the row is never removed
        df.drop(index=idx, inplace=True)
Let me know if this works for you.
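As a side note, the row-by-row drop can be replaced by a single vectorized filter built from the same countries_list, which is also much faster on large frames:

# Keep only the index labels that are known country names
df = df[df.index.isin(countries_list)]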
I am doing a triple for loop on a dataframe with almost 70 thousand entries. How do I optimize it?
My ultimate goal is to create a new column that has the country of a seismic event. I have latitude, longitude and 'place' (e.g. '17km N of North Nenana, Alaska') columns. I tried to reverse geocode, but with 68,488 entries there is no free service that lets me do that, and as a student I cannot afford a paid one.
So I am using a dataframe with a list of countries and a dataframe with a list of states to compare to USGS['place']'s values. To do that, I ultimately settled on using 3 for loops.
As you can imagine, it takes a long time, and I was hoping there is a way to speed things up. I am using Python, but I use R as well; the for loops just run better in Python.
I'll take any better options.
USGS = pd.DataFrame(data={'latitude': [64.7385, 61.116], 'longitude': [-149.136, -138.655], 'place': ['17km N of North Nenana, Alaska', '74km WNW of Haines Junction, Canada'], 'country': [None, None]})
states = pd.DataFrame(data = {'state':['AK', 'AL'], 'name':['Alaska', 'Alabama']})
countries = pd.DataFrame(data = {'country':['Afghanistan', 'Canada']})
for head in states:
    for state in states[head]:
        for p in USGS['place']:
            if state in p:
                USGS['country'] = USGS['country'].map({p: 'United States'})
# I have not finished the code for the countries dataframe
You do have options for geocoding. MapQuest offers a free tier of 15,000 calls per month. You can also look at using geopy, which is what I use.
import pandas as pd
import geopy
from geopy.geocoders import Nominatim

USGS_df = pd.DataFrame(data={'latitude': [64.7385, 61.116],
                             'longitude': [-149.136, -138.655],
                             'place': ['17km N of North Nenana, Alaska', '74km WNW of Haines Junction, Canada'],
                             'country': [None, None]})

geopy.geocoders.options.default_user_agent = "locations-application"
geolocator = Nominatim(timeout=10)

for i, row in USGS_df.iterrows():
    try:
        lat = row['latitude']
        lon = row['longitude']
        location = geolocator.reverse('%s, %s' % (lat, lon))
        country = location.raw['address']['country']
        print('Found: ' + location.address)
        USGS_df.loc[i, 'country'] = country
    except Exception:
        print('Location not identified: %s, %s' % (lat, lon))
Input:
print(USGS_df)

   latitude  longitude                                place country
0   64.7385   -149.136       17km N of North Nenana, Alaska    None
1   61.1160   -138.655  74km WNW of Haines Junction, Canada    None
Output:
print(USGS_df)

   latitude  longitude                                place country
0   64.7385   -149.136       17km N of North Nenana, Alaska     USA
1   61.1160   -138.655  74km WNW of Haines Junction, Canada  Canada
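One caveat at the asker's scale (~68,000 rows): Nominatim's usage policy allows roughly one request per second, so a bulk run has to be throttled. geopy ships a RateLimiter wrapper for exactly this; a minimal sketch reusing the geolocator above:

from geopy.extra.rate_limiter import RateLimiter

# Enforce at least one second between consecutive reverse-geocoding calls
reverse = RateLimiter(geolocator.reverse, min_delay_seconds=1)
location = reverse('%s, %s' % (64.7385, -149.136))

At one call per second the full dataset still takes around 19 hours, so caching results or deduplicating by unique place strings is worth considering.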
My code looks like:
import pandas as pd
df = pd.read_excel("Energy Indicators.xls", header=None, footer=None)
c_df = df.copy()
c_df = c_df.iloc[18:245, 2:]
c_df = c_df.rename(columns={2: 'Country', 3: 'Energy Supply', 4:'Energy Supply per Capita', 5:'% Renewable'})
c_df['Energy Supply'] = c_df['Energy Supply'].apply(lambda x: x*1000000)
print(c_df)
c_df = c_df.loc[c_df['Country'] == ('Korea, Rep.')] = 'South Korea'
When I run it, I get the error "'str' object has no attribute 'loc'". It seems to be telling me that I can't use loc on the dataframe. All I want to do is replace the value, so if there is an easier way, I am all ears.
Just do
c_df.loc[c_df['Country'] == 'Korea, Rep.'] = 'South Korea'
instead of
c_df = c_df.loc[c_df['Country'] == 'Korea, Rep.'] = 'South Korea'
In a chained assignment like a = b = value, Python assigns value to each target from left to right, so your version first rebinds c_df to the string 'South Korea'; evaluating c_df.loc for the second target then fails with "'str' object has no attribute 'loc'".
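A quick illustration of what went wrong:

# Multiple-target assignment binds targets left to right:
x = y = 'South Korea'
# is equivalent to:
#   x = 'South Korea'
#   y = 'South Korea'
# In the failing line, the first target rebinds c_df to the plain string,
# so evaluating c_df.loc for the second target raises
# AttributeError: 'str' object has no attribute 'loc'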
I would suggest using df.replace with a nested dictionary, where the outer key is the column name:
df = df.replace({'c_df': {'Korea, Rep.': 'South Korea'}})
The code above replaces Korea, Rep. with South Korea only in the column named c_df (in your frame, that column would be 'Country'). Take a look at the df.replace documentation, which explains the nested-dictionary syntax used above:
Nested dictionaries, e.g., {‘a’: {‘b’: nan}}, are read as follows: look in column ‘a’ for the value ‘b’ and replace it with nan. You can nest regular expressions as well. Note that column names (the top-level dictionary keys in a nested dictionary) cannot be regular expressions.
Example:

# Original dataframe:
>>> df
          c_df whatever
0  Korea, Rep.     abcd
1            x     abcd
2  Korea, Rep.     abcd
3            y     abcd

# After df.replace:
>>> df
          c_df whatever
0  South Korea     abcd
1            x     abcd
2  South Korea     abcd
3            y     abcd
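For completeness, here is a self-contained version of that example you can run as-is (the column name c_df is just illustrative):

import pandas as pd

# Toy frame matching the printout above
df = pd.DataFrame({'c_df': ['Korea, Rep.', 'x', 'Korea, Rep.', 'y'],
                   'whatever': ['abcd'] * 4})

# Nested dict: look in column 'c_df' for 'Korea, Rep.' and replace it
df = df.replace({'c_df': {'Korea, Rep.': 'South Korea'}})
print(df)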