Iterating through a list of identical elements - python

I have the following function, which prints the states and their associated counties from the census DataFrame:
def answer():
    census_df.set_index(['STNAME', 'CTYNAME'])
    for name, state, cname in zip(census_df['STNAME'], census_df['STATE'], census_df['CTYNAME']):
        print(name, state, cname)
Alabama 1 Tallapoosa County
Alabama 1 Tuscaloosa County
Alabama 1 Walker County
Alabama 1 Washington County
Alabama 1 Wilcox County
Alabama 1 Winston County
Alaska 2 Alaska
Alaska 2 Aleutians East Borough
Alaska 2 Aleutians West Census Area
Alaska 2 Anchorage Municipality
Alaska 2 Bethel Census Area
Alaska 2 Bristol Bay Borough
Alaska 2 Denali Borough
Alaska 2 Dillingham Census Area
Alaska 2 Fairbanks North Star Borough
I would like to know the state with the most counties in it. I can iterate through each state like this:
counter = 0
counter2 = 0
for name, state, cname in zip(census_df['STNAME'], census_df['STATE'], census_df['CTYNAME']):
    if state == 1:
        counter += 1
    if state == 2:
        counter2 += 1
print(counter)
print(counter2)
and so on. I can iterate over the range of state numbers (rng = range(1, 56)), but creating 56 counters is a nightmare. Is there an easier way of doing this?

Pandas allows us to do such operations without loops/iterating:
In [21]: df.STNAME.value_counts()
Out[21]:
Alaska 9
Alabama 6
Name: STNAME, dtype: int64
In [24]: df.STNAME.value_counts().head(1)
Out[24]:
Alaska 9
Name: STNAME, dtype: int64
or
In [18]: df.groupby('STNAME')['CTYNAME'].count()
Out[18]:
STNAME
Alabama 6
Alaska 9
Name: CTYNAME, dtype: int64
In [19]: df.groupby('STNAME')['CTYNAME'].count().idxmax()
Out[19]: 'Alaska'
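Both one-liners are easy to check on a reproducible toy frame; a minimal sketch (the toy df stands in for census_df, which isn't shown in full above):

import pandas as pd

df = pd.DataFrame({'STNAME': ['Alabama'] * 6 + ['Alaska'] * 9,
                   'CTYNAME': ['County %d' % i for i in range(15)]})

# both approaches agree on the state with the most counties
print(df.STNAME.value_counts().idxmax())                 # Alaska
print(df.groupby('STNAME')['CTYNAME'].count().idxmax())  # Alaska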

Related

Swap df1 column with df2 column, based on value

Goal: swap out df_hsa.stateabbr with df_state.state, based on 'df_state.abbr`.
Is there such a function, where I mention source, destination, and based-on dataframe columns?
Do I need to order both DataFrames similarly?
df_hsa:
hsa stateabbr county
0 259 AL Butler
1 177 AL Calhoun
2 177 AL Cleburne
3 172 AL Chambers
4 172 AL Randolph
df_state:
abbr state
0 AL Alabama
1 AK Alaska
2 AZ Arizona
3 AR Arkansas
4 CA California
Desired Output:
df_hsa with state column instead of stateabbr.
hsa state county
0 259 Alabama Butler
1 177 Alabama Calhoun
2 177 Alabama Cleburne
3 172 Alabama Chambers
4 172 Alabama Randolph
You can simply join after setting the index to "stateabbr":
df_hsa.set_index("stateabbr").join(df_state.set_index("abbr"))
output:
hsa county state
AL 259 Butler Alabama
AL 177 Calhoun Alabama
AL 177 Cleburne Alabama
AL 172 Chambers Alabama
AL 172 Randolph Alabama
If you also want the original index, you can add .set_index(df_hsa.index) at the end of the line.
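For reference, here is a runnable sketch built from the sample frames in the question; it also shows pd.merge, which is interchangeable with join here and keeps the default 0..n-1 index without any set_index calls:

import pandas as pd

df_hsa = pd.DataFrame({'hsa': [259, 177, 177, 172, 172],
                       'stateabbr': ['AL'] * 5,
                       'county': ['Butler', 'Calhoun', 'Cleburne', 'Chambers', 'Randolph']})
df_state = pd.DataFrame({'abbr': ['AL', 'AK', 'AZ', 'AR', 'CA'],
                         'state': ['Alabama', 'Alaska', 'Arizona', 'Arkansas', 'California']})

# merge on the abbreviation, then drop both key columns
out = (df_hsa.merge(df_state, left_on='stateabbr', right_on='abbr')
             .drop(columns=['stateabbr', 'abbr']))
print(out[['hsa', 'state', 'county']])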

DataFrame from Dictionary with variable length keys

So for this assignment I managed to create a dictionary where the keys are state names (e.g. Alabama, Alaska, Arizona) and the values are lists of regions for each state. The problem is that the lists of regions have different lengths: each state can have a different number of regions associated with it.
Example (the dictionary is called states below):
states = {'Alabama': ['Auburn',
                      'Florence',
                      'Jacksonville',
                      'Livingston',
                      'Montevallo',
                      'Troy',
                      'Tuscaloosa',
                      'Tuskegee'],
          'Alaska': ['Fairbanks'],
          'Arizona': ['Flagstaff', 'Tempe', 'Tucson']}
How can I unload this into a pandas Dataframe? What I want is basically 2 columns - "State", "Region". Something similar to what you would get if you would do a "GroupBy" on state for the regions.
If you are on pandas 0.25+, you can use explode:
pd.Series(states).explode()
Output:
Alabama Auburn
Alabama Florence
Alabama Jacksonville
Alabama Livingston
Alabama Montevallo
Alabama Troy
Alabama Tuscaloosa
Alabama Tuskegee
Alaska Fairbanks
Arizona Flagstaff
Arizona Tempe
Arizona Tucson
dtype: object
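If you want the two named columns from the question rather than an indexed Series, you can rename the index and reset it (states is the dictionary above; still pandas 0.25+):

df = (pd.Series(states)
        .explode()
        .rename_axis('State')
        .reset_index(name='Region'))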
You can also use concat, which works on most pandas versions:
pd.concat(pd.DataFrame({'state':k, 'Region':v}) for k,v in states.items())
Output:
state Region
0 Alabama Auburn
1 Alabama Florence
2 Alabama Jacksonville
3 Alabama Livingston
4 Alabama Montevallo
5 Alabama Troy
6 Alabama Tuscaloosa
7 Alabama Tuskegee
0 Alaska Fairbanks
0 Arizona Flagstaff
1 Arizona Tempe
2 Arizona Tucson
You can also do this by flattening the dictionary into parallel lists, although that is a slightly longer approach. For example:
Example = {'Alabama': ['Auburn', 'Florence', 'Jacksonville', 'Livingston', 'Montevallo', 'Troy', 'Tuscaloosa', 'Tuskegee'],
           'Alaska': ['Fairbanks'],
           'Arizona': ['Flagstaff', 'Tempe', 'Tucson']}
new_list_of_keys = []
new_list_of_values = []
keys = list(Example.keys())
values = list(Example.values())
for i in range(len(keys)):
    for j in range(len(values[i])):
        new_list_of_values.append(values[i][j])
        new_list_of_keys.append(keys[i])
df = pd.DataFrame(zip(new_list_of_keys, new_list_of_values), columns=['State', 'Region'])
This will give output as:
State Region
0 Alabama Auburn
1 Alabama Florence
2 Alabama Jacksonville
3 Alabama Livingston
4 Alabama Montevallo
5 Alabama Troy
6 Alabama Tuscaloosa
7 Alabama Tuskegee
8 Alaska Fairbanks
9 Arizona Flagstaff
10 Arizona Tempe
11 Arizona Tucson
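The same flattening can also be written as a single list comprehension, which avoids the four bookkeeping lists:

df = pd.DataFrame([(state, region)
                   for state, regions in Example.items()
                   for region in regions],
                  columns=['State', 'Region'])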

Extract string from column following a specific pattern

Please forgive my pandas newbie question, but I have a column of U.S. towns and states, such as the truncated version shown below (for some strange reason the column is named 'Alabama[edit]', which is associated with the first town values, 0-7, in the column):
0 Auburn (Auburn University)[1]
1 Florence (University of North Alabama)
2 Jacksonville (Jacksonville State University)[2]
3 Livingston (University of West Alabama)[2]
4 Montevallo (University of Montevallo)[2]
5 Troy (Troy University)[2]
6 Tuscaloosa (University of Alabama, Stillman Co...
7 Tuskegee (Tuskegee University)[5]
8 Alaska[edit]
9 Fairbanks (University of Alaska Fairbanks)[2]
10 Arizona[edit]
11 Flagstaff (Northern Arizona University)[6]
12 Tempe (Arizona State University)
13 Tucson (University of Arizona)
14 Arkansas[edit]
15 Arkadelphia (Henderson State University, Ouach...
16 Conway (Central Baptist College, Hendrix Colle...
17 Fayetteville (University of Arkansas)[7]
18 Jonesboro (Arkansas State University)[8]
19 Magnolia (Southern Arkansas University)[2]
20 Monticello (University of Arkansas at Monticel...
21 Russellville (Arkansas Tech University)[2]
22 Searcy (Harding University)[5]
23 California[edit]
The towns that are in each state are below each state name, e.g. Fairbanks (column value 9) is a town in the state of Alaska.
What I want to do is to split up the town names based on the state names so that I have two columns 'State' and 'RegionName' where each state name is associated with each town name, like so:
RegionName State
0 Auburn (Auburn University)[1] Alabama
1 Florence (University of North Alabama) Alabama
2 Jacksonville (Jacksonville State University)[2] Alabama
3 Livingston (University of West Alabama)[2] Alabama
4 Montevallo (University of Montevallo)[2] Alabama
5 Troy (Troy University)[2] Alabama
6 Tuscaloosa (University of Alabama, Stillman Co... Alabama
7 Tuskegee (Tuskegee University)[5] Alabama
8 Fairbanks (University of Alaska Fairbanks)[2] Alaska
9 Flagstaff (Northern Arizona University)[6] Arizona
10 Tempe (Arizona State University) Arizona
11 Tucson (University of Arizona) Arizona
12 Arkadelphia (Henderson State University, Ouach... Arkansas
. . .etc.
I know that each state name is followed by a string '[edit]', which I assume I can use to do the split and assignment of the town names. But I don't know how to do this.
Also, I know that there's a lot of other data cleaning I need to do, such as removing the strings within parentheses and within the brackets '[]'. That can be done later...the important part is splitting up the states and towns and assigning each town to its proper U.S. state. Any advice would be most appreciated.
Without much context or access to your data, I'd suggest something along these lines. First, modify the code that reads your data:
df = pd.read_csv(..., header=None, names=['RegionName'])
# header=None reads the first row as data rather than as column names
Now, extract the state name using str.extract; this only extracts names that are followed by the substring "[edit]". You can then forward-fill the resulting NaN values using ffill.
df['State'] = df['RegionName'].str.extract(
    r'(?P<State>.*)(?=\s*\[edit\])'
).ffill()
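Once the State column is filled, the '[edit]' header rows themselves can be dropped; a short follow-up, assuming no real town name contains the substring '[edit]':

df = df[~df['RegionName'].str.contains(r'\[edit\]')].reset_index(drop=True)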

pandas: if intersection then update dataframe

I have two dataframes:
countries:
Country or Area Name ISO-2 ISO-3
0 Afghanistan AF AFG
1 Philippines PH PHL
2 Albania AL ALB
3 Norway NO NOR
4 American Samoa AS ASM
contracts:
Country Name Jurisdiction Signature year
0 Yemen KY;NO;CA;NO 1999.0
1 Yemen BM;TC;YE 2007.0
2 Congo, CD;CD 2015.0
3 Philippines PH 2009.0
4 Philippines PH;PH 2007.0
5 Philippines PH 2001.0
6 Philippines PH;PH 1997.0
7 Bolivia, Plurinational State of BO;BO 2006.0
I want to:
check whether the column Jurisdiction in contracts contains at least one two-letter code from the countries ISO-2 column.
I have tried numerous ways of testing whether there is an intersection, but none of them works. My last try was:
i1 = pd.Index(contracts['Jurisdiction of Incorporation'].str.split(';'))
i2 = pd.Index(countries['ISO-2'])
print i1, i2
i1.intersection(i2)
Which gives me TypeError: unhashable type: 'list'
If at least one of the codes is present, I want to update the contracts dataframe with a new column containing boolean values:
contracts['new column'] = np.where("piece of code that will actually work", 1, 0)
So the desired output would be
Country Name Jurisdiction Signature year new column
0 Yemen KY;NO;CA;NO 1999.0 1
1 Yemen BM;TC;YE 2007.0 0
2 Congo, CD;CD 2015.0 0
3 Philippines PH 2009.0 1
4 Philippines PH;PH 2007.0 1
5 Philippines PH 2001.0 1
6 Philippines PH;PH 1997.0 1
7 Bolivia, Plurinational State of BO;BO 2006.0 0
How can I achieve this?
A bit of a mouthful, but try this:
occurring_iso_2_codes = set(countries['ISO-2'])
contracts['new column'] = contracts.Jurisdiction.apply(
    lambda s: int(bool(set(s.split(';')).intersection(occurring_iso_2_codes))))
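On pandas 0.25+ you could also vectorize the check with explode and isin instead of apply; a sketch under that version assumption:

codes = set(countries['ISO-2'])
contracts['new column'] = (contracts['Jurisdiction']
                           .str.split(';')
                           .explode()
                           .isin(codes)
                           .groupby(level=0)  # regroup the exploded codes by original row
                           .any()
                           .astype(int))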

Pandas DataFrames - Adding Fields Based on Column Titles

I have a pandas dataframe with some information in the column titles that I want to add to each row. The dataframe looks like:
print working_df
Retail Sales of Electricity : Arkansas : Industrial : Annual \
Year
0 16709.19272
1 16847.75502
2 16993.92202
3 16774.69902
4 14710.29400
Retail Sales of Electricity : Arizona : Residential : Annual \
Year
0 33138.47860
1 32922.97001
2 33079.07402
3 32448.13802
4 32846.84298
[8 rows x 701 columns]
How can I pull two variables out of the column name (the state, e.g. Arizona, and the sector, e.g. Industrial or Residential) and put them as values in two new columns, respectively?
I would like the result to have fields that look like:
Year State Sector Sales
0 Arizona Residential 33138.47860
1 Arizona Residential 32922.97001
2 Arizona Residential 33079.07402
3 Arizona Residential 32448.13802
4 Arizona Residential 32846.84298
0 Arkansas Industrial 16709.19272
1 Arkansas Industrial 16847.75502
2 Arkansas Industrial 16993.92202
3 Arkansas Industrial 16774.69902
4 Arkansas Industrial 14710.29400
I think I'd do something like
d2 = df.unstack().reset_index()
d2 = d2.rename(columns={0: "Sales"})
parts = d2.pop("level_0").str.split(":")
d2["State"] = [p[1].strip() for p in parts]
d2["Sector"] = [p[2].strip() for p in parts]
which produces
>>> d2
Year Sales State Sector
0 0 16709.19272 Arkansas Industrial
1 1 16847.75502 Arkansas Industrial
2 2 16993.92202 Arkansas Industrial
3 3 16774.69902 Arkansas Industrial
4 4 14710.29400 Arkansas Industrial
5 0 33138.47860 Arizona Residential
6 1 32922.97001 Arizona Residential
7 2 33079.07402 Arizona Residential
8 3 32448.13802 Arizona Residential
9 4 32846.84298 Arizona Residential
[10 rows x 4 columns]
You could be a little fancier and do something with str.extract -- str.extract(r".*?:\s*(?P<State>.*?)\s*:\s*(?P<Sector>.*?)\s*:.*"), maybe -- but I don't think it's really worth it.
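For completeness, the str.extract variant might look like this, starting from a fresh d2 (same unstack as above):

d2 = df.unstack().reset_index().rename(columns={0: "Sales"})
parts = d2.pop("level_0").str.extract(
    r".*?:\s*(?P<State>.*?)\s*:\s*(?P<Sector>.*?)\s*:.*")
d2 = d2.join(parts)  # the named groups become the State and Sector columns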
