Understanding lambda functions in pandas (Python)

I'm trying to solve a problem for a Python course and found that someone has implemented a solution to the same problem on GitHub. I'm just trying to understand that solution.
I have a pandas dataframe called Top15 with 15 countries, and one of its columns is 'HighRenew'. This column stores the % of renewable energy used in each country. My task is to convert the values in the 'HighRenew' column into a boolean datatype.
If the value for a particular country is higher than the median renewable-energy percentage across all 15 countries, I should encode it as 1, otherwise as 0. The 'HighRenew' column, sliced out of the dataframe as a Series, is copied below.
Country
China True
United States False
Japan False
United Kingdom False
Russian Federation True
Canada True
Germany True
India False
France True
South Korea False
Italy True
Spain True
Iran False
Australia False
Brazil True
Name: HighRenew, dtype: bool
The GitHub solution is implemented in 3 steps, of which I understand the first 2 but not the last one, where a lambda function is used. Can someone explain how this lambda function works?
median_value = Top15['% Renewable'].median()
Top15['HighRenew'] = Top15['% Renewable']>=median_value
Top15['HighRenew'] = Top15['HighRenew'].apply(lambda x:1 if x else 0)

lambda represents an anonymous (i.e. unnamed) function. If it is used with pd.Series.apply, each element of the series is fed into the lambda function. The result will be another pd.Series with each element run through the lambda.
apply + lambda is just a thinly veiled loop. You should prefer vectorised functionality where possible; @jezrael offers such a vectorised solution.
The equivalent in regular Python is below, given a list lst. Each element of lst is passed through the lambda function and collected into a list.
list(map(lambda x: 1 if x else 0, lst))
It is a Pythonic idiom to test for "truthy" values using if x rather than if x == True; see this answer for more information on what is considered True.
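As a concrete, self-contained sketch with made-up values (not the actual Top15 data), both the plain-Python map idiom and pd.Series.apply push each element through the same lambda:
import pandas as pd

lst = [True, False, True]

# plain Python: every element of lst goes through the lambda
print(list(map(lambda x: 1 if x else 0, lst)))   # [1, 0, 1]

# the same idea with pd.Series.apply
s = pd.Series(lst, index=['China', 'United States', 'Japan'], name='HighRenew')
print(s.apply(lambda x: 1 if x else 0))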

I think apply is a loop under the hood; it's better to use the vectorized astype, which converts True to 1 and False to 0:
Top15['HighRenew'] = (Top15['% Renewable']>=median_value).astype(int)
lambda x: 1 if x else 0
is an anonymous (lambda) function with a condition: if x is truthy it returns 1, otherwise 0.
For more information about lambda functions, check this answer.
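A minimal illustration of that conversion on a toy boolean series (made-up values, not the original Top15 frame):
import pandas as pd

s = pd.Series([True, False, True])
print(s.astype(int).tolist())   # [1, 0, 1]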

Instead of using workarounds or lambdas, just use pandas' built-in functionality meant for this problem. The approach is called masking: in essence, we compare against a Series (a column of a DataFrame) to get boolean values:
import pandas as pd
import numpy as np

foo = [{
    'Country': 'Germany',
    'Percent Renew': 100
}, {
    'Country': 'Australia',
    'Percent Renew': 75
}, {
    'Country': 'China',
    'Percent Renew': 25
}, {
    'Country': 'USA',
    'Percent Renew': 5
}]
df = pd.DataFrame(foo, index=pd.RangeIndex(0, len(foo)))
df
df
#| Country   | Percent Renew |
#| Germany   | 100           |
#| Australia | 75            |
#| China     | 25            |
#| USA       | 5             |
np.mean(df['Percent Renew'])
# 51.25
df['Better Than Average'] = df['Percent Renew'] > np.mean(df['Percent Renew'])
#| Country   | Percent Renew | Better Than Average |
#| Germany   | 100           | True                |
#| Australia | 75            | True                |
#| China     | 25            | False               |
#| USA       | 5             | False               |
The specific reason I propose this over the other solutions is that masking can be used for a host of other purposes as well. I won't get into them here, but once you learn that pandas supports this kind of functionality, it becomes a lot easier to perform other data manipulations in pandas.
EDIT: I read "needing a boolean datatype" as needing True/False rather than the encoded version 1 and 0; in the latter case, the astype that was proposed will convert the booleans to integer values. For masking purposes, though, the True/False values are needed for slicing.
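As a small follow-up sketch, reusing the toy df built above (not part of the original answer), the boolean column can be used directly as a mask to slice rows:
# keep only the rows whose 'Percent Renew' beats the average
above_avg = df[df['Better Than Average']]
print(above_avg[['Country', 'Percent Renew']])   # the Germany and Australia rows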

Related

Apply user-defined functions over a python datatable (not pandas dataframe)?

Datatable is popular for R, but it also has a Python version. However, I don't see anything in the docs for applying a user-defined function over a datatable.
Here's a toy example (in pandas) where a user function is applied over a dataframe to look for po-box addresses:
df = pd.DataFrame({'customer': [101, 102, 103],
                   'address': ['12 main st', '32 8th st, 7th fl', 'po box 123']})
customer | address
---------|------------------
101      | 12 main st
102      | 32 8th st, 7th fl
103      | po box 123
# User-defined function:
import re

def is_pobox(s):
    rslt = re.search(r'^p(ost)?\.? *o(ffice)?\.? *box *\d+', s)
    if rslt:
        return True
    else:
        return False
# Using .apply() for this example:
df['is_pobox'] = df.apply(lambda x: is_pobox(x['address']), axis = 1)
# Expected Output:
customer | address           | is_pobox
---------|-------------------|---------
101      | 12 main st        | False
102      | 32 8th st, 7th fl | False
103      | po box 123        | True
Is there a way to do this .apply operation in datatable? Would be nice, because datatable seems to be quite a bit faster than pandas for most operations.

How to fill pandas dataframe columns with random dictionary values

I'm new to pandas and I would like to play with random text data. I am trying to add 2 new columns to a DataFrame df, each of which would be filled with a key (newcol1) and a value (newcol2) randomly selected from a dictionary.
countries = {'Africa':'Ghana','Europe':'France','Europe':'Greece','Asia':'Vietnam','Europe':'Lithuania'}
My df already has 2 columns and I'd like something like this:
   Year Approved Continent    Country
0  2016      Yes    Africa      Ghana
1  2016      Yes    Europe  Lithuania
2  2017       No    Europe     Greece
I can certainly use a for or while loop to fill df['Continent'] and df['Country'] but I sense .apply() and np.random.choice may provide a simpler more pandorable solution for that.
Yep, you're right. You can use np.random.choice with map:
df
   Year Approved
0  2016      Yes
1  2016      Yes
2  2017       No
df['Continent'] = np.random.choice(list(countries), len(df))
df['Country'] = df['Continent'].map(countries)
df
   Year Approved Continent    Country
0  2016      Yes    Africa      Ghana
1  2016      Yes      Asia    Vietnam
2  2017       No    Europe  Lithuania
You choose len(df) keys at random from the country key list, then use the country dictionary as a mapper to find the country equivalents of the previously picked keys.
You could also try using DataFrame.sample():
df.join(
    pd.DataFrame(list(countries.items()), columns=["continent", "country"])
      .sample(len(df), replace=True)
      .reset_index(drop=True)
)
Which can be made faster if your continent-country map is already a dataframe.
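For instance, a rough sketch of that idea, where countries_df is a hypothetical name for the pre-built mapping:
# build the continent/country mapping once as a DataFrame
countries_df = pd.DataFrame(list(countries.items()), columns=["continent", "country"])

# each subsequent call then only needs to sample and join
df.join(countries_df.sample(len(df), replace=True).reset_index(drop=True))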
If you're on Python 3.6+, another method would be to use random.choices():
from random import choices

df.join(
    pd.DataFrame(choices([*countries.items()], k=len(df)), columns=["continent", "country"])
)
random.choices() is similar to numpy.random.choice() except that you can pass a list of key-value tuple pairs whereas numpy.random.choice() only accepts 1-D arrays.

Pandas groupby, get ratio of boolean variable [duplicate]

I have a column of sites: ['Canada', 'USA', 'China' ....]
Each site occurs many times in the SITE column and next to each instance is a true or false value.
INDEX | VALUE | SITE
0 | True | Canada
1 | False | Canada
2 | True | USA
3 | True | USA
And it goes on.
Goal 1: I want to find, for each site, what percent of the VALUE column is True.
Goal 2: I want to return a list of sites where % True in the VALUE column is greater than 10%.
How do I use groupby to achieve this? I only know how to use groupby to find the mean for each site which won't help me here.
Something like this:
In [13]: g = df.groupby('SITE')['VALUE'].mean()
In [14]: g[g > 0.1]
Out[14]:
SITE
Canada 0.5
USA 1.0
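If an actual Python list of site names is needed for the second goal, the filtered index can be converted; with the sample data above this gives:
In [15]: g[g > 0.1].index.tolist()
Out[15]: ['Canada', 'USA']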

Combining similar rows in Stata / python

I am doing some data prep for graph analysis and my data looks as follows.
country1 country2 pair volume
USA CHN USA_CHN 10
CHN USA CHN_USA 5
AFG ALB AFG_ALB 2
ALB AFG ALB_AFG 5
I would like to combine them such that
country1 country2 pair volume
USA CHN USA_CHN 15
AFG ALB AFG_ALB 7
Is there a simple way for me to do so in Stata or Python? I've tried making a duplicate dataframe, renaming 'pair' as country2_country1, then merging them and dropping duplicate volumes, but it's a hairy way of going about things; I was wondering if there is a better way.
If it helps to know, my data format is for a directed graph, and I am converting it to undirected.
Your key must consist of sets of two countries, so that they compare equal regardless of order. In Python/Pandas, this can be accomplished as follows.
import pandas as pd
import io

# load in your data
s = """
country1 country2 pair volume
USA CHN USA_CHN 10
CHN USA CHN_USA 5
AFG ALB AFG_ALB 2
ALB AFG ALB_AFG 5
"""
data = pd.read_table(io.StringIO(s), sep=r'\s+')

# create your key (using frozenset instead of set, since frozenset is hashable)
key = data[['country1', 'country2']].apply(frozenset, 1)

# group by the key and aggregate the numeric volume column using sum()
print(data.groupby(key)[['volume']].sum())
This results in
volume
(CHN, USA) 15
(AFG, ALB) 7
which isn't exactly what you wanted, but you should be able to get it into the right shape from here.
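One way to finish the reshaping, as a sketch (assuming an alphabetical ordering of the two countries in each pair is acceptable for the output columns):
summed = data.groupby(key)['volume'].sum()
result = pd.DataFrame([sorted(k) for k in summed.index],
                      columns=['country1', 'country2'])
result['volume'] = summed.values
print(result)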
Here is a solution that takes advantage of pandas' automatic alignment of indexes.
df1 = df.set_index(['country1'])
df2 = df.set_index(['country2'])
df1['volume'] += df2['volume']
df1.reset_index().query('country1 > country2')
country1 country2 pair volume
0 USA CHN USA_CHN 15
3 ALB AFG ALB_AFG 7
Here is a solution based on @Jean-François Fabre's comment.
split_sorted = df.pair.str.split('_').map(sorted)
df_switch = pd.concat([split_sorted.str[0],
                       split_sorted.str[1],
                       df['volume']], axis=1, keys=['country1', 'country2', 'volume'])
df_switch.groupby(['country1', 'country2'], as_index=False, sort=False).sum()
output
country1 country2 volume
0 CHN USA 15
1 AFG ALB 7
In Stata you can just lean on the fact that alphabetical ordering gives a distinct signature to each pair.
clear
input str3 (country1 country2) volume
USA CHN 10
CHN USA 5
AFG ALB 2
ALB AFG 5
end
gen first = cond(country1 < country2, country1, country2)
gen second = cond(country1 < country2, country2, country1)
collapse (sum) volume, by(first second)
list
+-------------------------+
| first second volume |
|-------------------------|
1. | AFG ALB 7 |
2. | CHN USA 15 |
+-------------------------+
You can merge back with the original dataset if wished.
Documented and discussed here
NB: Presenting a clear data example is helpful. Presenting it as the code to input the data is even more helpful.
Note: As Nick Cox comments below, this solution gets a bit crazy when the number of countries is large. (With 200 countries, you need to accurately store a 200-bit number)
Here's a neat way to do it using pure Stata.
I effectively convert the countries into binary "flags", making something like the following mapping:
AFG 0001
ALB 0010
CHN 0100
USA 1000
This is achieved by numbering each country as normal, then calculating 2^(country_number). When we then add these binary numbers, the result is a combination of the two "flags". For example,
AFG + CHN = 0101
CHN + AFG = 0101
Notice that it now doesn't make any difference which order the countries come in!
So we can now happily add the flags and collapse by the result, summing over volume as we go.
Here's the complete code (heavily commented so it looks much longer than it is!)
// Convert country names into numbers, storing the resulting
// name/number mapping in a label called "countries"
encode country1, generate(from_country) label(countries)
// Do it again for the other country, using the existing
// mappings where the countries already exist, and adding to the
// existing mapping where they don't
encode country2, generate(to_country) label(countries)
// Add these numbers as if they were binary flags
// Thus CHN (3) + USA (4) becomes:
// 010 +
// 100
// ---
// 110
// This makes adding strings commutative and unique. This means that
// the new variable doesn't care which way round the countries are
// nor can it get confused by pairs of countries adding up to the same
// number.
generate bilateral = 2^from_country + 2^to_country
// The rest is easy. Collapse by the new summed variable
// taking (arbitrarily) the lowest of the from_countries
// and the highest of the to_countries
collapse (sum) volume (min) from_country (max) to_country, by(bilateral)
// Tell Stata that these new min and max countries still have the same
// label:
label values from_country "countries"
label values to_country "countries"

Pandas: Delete rows of a DataFrame if total count of a particular column occurs only 1 time

I'm looking to delete rows of a DataFrame whose value in a particular column occurs only 1 time.
Example of raw table (values are arbitrary for illustrative purposes):
print df
Country Series Value
0 Bolivia Population 123
1 Kenya Population 1234
2 Ukraine Population 12345
3 US Population 123456
5 Bolivia GDP 23456
6 Kenya GDP 234567
7 Ukraine GDP 2345678
8 US GDP 23456789
9 Bolivia #McDonalds 3456
10 Kenya #Schools 3455
11 Ukraine #Cars 3456
12 US #Tshirts 3456789
Intended outcome:
print df
Country Series Value
0 Bolivia Population 123
1 Kenya Population 1234
2 Ukraine Population 12345
3 US Population 123456
5 Bolivia GDP 23456
6 Kenya GDP 234567
7 Ukraine GDP 2345678
8 US GDP 23456789
I know that df.Series.value_counts()>1 will identify which values of df.Series occur more than 1 time, and that the result will look something like the following:
Population    True
GDP           True
#McDonalds    False
#Schools      False
#Cars         False
#Tshirts      False
I want to write something like the following so that my new DataFrame drops the rows whose df.Series values occur only 1 time, but this doesn't work:
df.drop(df.Series.value_counts()==1,axis=1,inplace=True)
You can do this by creating a boolean list/array using either a list comprehension or pandas' string methods.
The list comprehension approach is:
vc = df['Series'].value_counts()
u = [i not in set(vc[vc==1].index) for i in df['Series']]
df = df[u]
The other approach is to use the str.contains method to check whether the values of the Series column contain a given string or match a given regular expression (used in this case as we are using multiple strings):
vc = df['Series'].value_counts()
pat = r'|'.join(vc[vc==1].index) #Regular expression
df = df[~df['Series'].str.contains(pat)] #Tilde is to negate boolean
Using this regular-expression approach is a bit more hackish and may require some extra processing (character escaping, etc.) on pat in case the strings you want to filter out contain regex metacharacters (which requires some basic regex knowledge). However, it's worth noting that this approach is about 4x faster than the list comprehension approach (tested on the data provided in the question).
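If the strings may contain regex metacharacters, re.escape can be applied while building the pattern; a small, hedged tweak of the snippet above:
import re
pat = r'|'.join(re.escape(s) for s in vc[vc == 1].index)
df = df[~df['Series'].str.contains(pat)]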
As a side note, I recommend avoiding the word Series as a column name, since that's the name of a pandas object.
This is an old question, but the current answer is slow for moderately large dataframes. A much faster and more "dataframe" way is to add a value-count column and filter on the count.
Create the dataset:
df = pd.DataFrame({'Country': 'Bolivia Kenya Ukraine US Bolivia Kenya Ukraine US Bolivia Kenya Ukraine US'.split(),
                   'Series': 'Pop Pop Pop Pop GDP GDP GDP GDP McDonalds Schools Cars Tshirts'.split()})
Drop rows where the count for the column ('Series' in this case) is 1:
# Group values for Series and add 'cnt' column with count
df['cnt'] = df.groupby(['Series'])['Country'].transform('count')
# Drop indexes for count value == 1, and dropping 'cnt' column
df.drop(df[df.cnt==1].index)[['Country','Series']]
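Equivalently, the transform result can be used directly as a boolean filter, which avoids creating and then dropping the helper column; a minor variation on the approach above:
df[df.groupby('Series')['Country'].transform('count') > 1]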
