I am dealing with a dataset that uses ".." as a placeholder for null values. These null values span across all of my columns. My dataset looks as follows:
Country Code  Year  GDP growth (%)  GDP (constant)
AFG           2010  3.5             ..
AFG           2011  ..              2345
AFG           2012  1.4             3372
ALB           2010  ..              4567
ALB           2011  ..              5678
ALB           2012  4.2             ..
DZA           2010  2.0             4321
DZA           2011  ..              5432
DZA           2012  3.8             6543
I want to remove the rows containing missing data from my dataset; however, my solutions so far are not very clean.
I have tried:
df_GDP_1[df_GDP_1.str.contains("..")==False]
which I had hoped would deal with all columns at once, but this returns an error.
Otherwise I have tried:
df_GDP_1[df_GDP_1.col1 != '..' | df_GDP_1.col2 != '..']
However, this solution requires me to rename the columns to remove spaces and then reverse that afterwards, which seems unnecessarily long for the task at hand.
Any ideas which enable me to perform this in a cleaner manner would be greatly appreciated!
Use a combination of the pandas.DataFrame.eq and pandas.DataFrame.any functions:
.eq("..") marks every cell that equals the placeholder,
.any(axis=1) looks for a match across the columns, and
the negation ~ omits the records that have one.
In [269]: df[~df.eq("..").any(axis=1)]
Out[269]:
Country Code Year GDP growth (%) GDP (constant)
2 AFG 2012 1.4 3372
6 DZA 2010 2.0 4321
8 DZA 2012 3.8 6543
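One follow-up worth noting (my addition, not part of the answer): after the filter, the surviving values are still strings, because the ".." placeholder forced the columns to object dtype. A minimal sketch, assuming the column names from the example:
import pandas as pd

df = df[~df.eq("..").any(axis=1)]
# The ".." placeholder made these columns object dtype; convert them back to numbers
df["GDP growth (%)"] = pd.to_numeric(df["GDP growth (%)"])
df["GDP (constant)"] = pd.to_numeric(df["GDP (constant)"])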
This is a typical case with World Bank data. Here's a simple way to deal with it:
# This is just for reproducing your example dataset (fields are tab-separated)
your_example = """Country Code Year GDP growth (%) GDP (constant)
AFG 2010 3.5 ..
AFG 2011 .. 2345
AFG 2012 1.4 3372
ALB 2010 .. 4567
ALB 2011 .. 5678
ALB 2012 4.2 ..
DZA 2010 2.0 4321
DZA 2011 .. 5432
DZA 2012 3.8 6543"""
your_example = your_example.split("\n")
your_example = pd.DataFrame(
[row.split("\t") for row in your_example[1:]], columns=your_example[0].split("\t")
)
# You just have to do this:
your_example = your_example.replace({"..": None})
your_example = your_example.dropna()
print("DF after dropping rows with ..", your_example)
>>> Country Code Year GDP growth (%) GDP (constant)
>>> 2 AFG 2012 1.4 3372
>>> 6 DZA 2010 2.0 4321
>>> 8 DZA 2012 3.8 6543
I'm just replacing the ".." with None, since you said ".." represents a null. Then I drop those rows with the DataFrame's dropna() method, which is what you wanted to achieve.
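As a side note (not part of the original answer): if the data come from a CSV file, read_csv can map the placeholder to NaN at load time, so the replace step isn't needed at all. A sketch, where "gdp.csv" is a hypothetical file name:
import pandas as pd

# na_values tells read_csv to treat ".." as NaN while parsing
df = pd.read_csv("gdp.csv", na_values="..").dropna()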
Following your original approach (you were almost there!) you could use:
df_GDP_1 = df_GDP_1[(df_GDP_1['GDP growth (%)'] != '..') & (df_GDP_1['GDP (constant)'] != '..')]
Column names with spaces have to go in [ ] brackets instead of dot notation. Also, you want to keep rows where neither column has the .. marker, so use & rather than |. Each condition needs its own ( ) brackets.
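If you'd rather not spell out each column, the same comparison generalizes to the whole frame at once; a sketch of the idea:
# Compare every cell against ".." and keep rows where all columns pass
df_GDP_1 = df_GDP_1[(df_GDP_1 != '..').all(axis=1)]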
My dataset looks as follows:
Country  Year  Value
Ireland  2010  9
Ireland  2011  11
Ireland  2012  14
Ireland  2013  17
Ireland  2014  20
France   2011  15
France   2012  19
France   2013  21
France   2014  28
Germany  2008  17
Germany  2009  20
Germany  2010  19
Germany  2011  24
Germany  2012  27
Germany  2013  32
My goal is to create a new dataset which tells me the % increase from the first year of available data for a given country, compared to the most recent, which would look roughly as follows:
Country  % increase
Ireland  122
France   87
Germany  88
In essence, for each country in my dataset I need my code to locate the smallest and largest year, then take the corresponding values in the Value column and calculate the % increase.
I can do this manually, but I have a lot of countries in my dataset and am looking for a more elegant way to do it. I am trying to troubleshoot my code for this, but am not having much luck as of yet.
My code looks as follows at present:
df_1["Min_value"] = df.loc[df["Year"].min(),"Value"].iloc[0]
df_1["Max_value"] = df.loc[df["Year"].max(),"Value"].iloc[0]
df_1["% increase"] = ((df_1["Max_value"]-df_1["Min_value"])/df_1["Min_value"])*100
This returns an error:
AttributeError: 'numpy.float64' object has no attribute 'iloc'
In addition, I cannot figure out a way to have the code run individually for each country in my dataset, which is another challenge I am not entirely sure how to address.
Could I potentially go down the route of defining a particular function which could then be applied to each country?
You can group by Country and aggregate min/max for both Year and Value, then calculate the percentage change from the min to the max of Value.
pct_df = df.groupby(['Country']).agg(['min', 'max'])['Value'] \
    .apply(lambda x: x.pct_change().round(2) * 100, axis=1) \
    .drop('min', axis=1).rename(columns={'max': '% increase'}).reset_index()
print(pct_df)
The output:
Country % increase
0 France 87.0
1 Germany 88.0
2 Ireland 122.0
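Note that this takes the min and max of Value itself, which matches the expected output here only because each country's values grow over time. If the value at the earliest year might not be the minimum, a sketch like the following compares by year instead (sample data rebuilt from the question):
import pandas as pd

df = pd.DataFrame({
    "Country": ["Ireland"] * 5 + ["France"] * 4 + ["Germany"] * 6,
    "Year": [2010, 2011, 2012, 2013, 2014,
             2011, 2012, 2013, 2014,
             2008, 2009, 2010, 2011, 2012, 2013],
    "Value": [9, 11, 14, 17, 20,
              15, 19, 21, 28,
              17, 20, 19, 24, 27, 32],
})

# Sort by Year so first()/last() per group give the earliest/latest value,
# then compute the percentage increase between them
g = df.sort_values("Year").groupby("Country")["Value"]
pct = ((g.last() - g.first()) / g.first() * 100).round()
print(pct.rename("% increase").reset_index())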
[dataset snapshot image]
Please help. I have a dataset with columns Country, Gas, and years from 2019 back to 1991 (see the snapshot above). I want to sum all the values of a country, column-wise. For example, for Afghanistan the 2019 value should come to 56.4 (28.79 + 6.23 + 16.37 + 5.01). Now I want the result calculated for every year. I have used the code below to get the 2019 data.
df.groupby(by='Country')['2019'].sum()
This is the output of that code:
Country
---------------------
Afghanistan 56.40
Albania 17.31
Algeria 558.67
Andorra 1.18
Angola 256.10
...
Venezuela 588.72
Vietnam 868.40
Yemen 50.05
Zambia 182.08
Zimbabwe 235.06
I have grouped the data country-wise and summed the 2019 column values, but how should I add the values of the other years in a single line of code? I can write the code shown here to show multiple columns, but it would be a tedious task to write out each column name:
df.groupby(by='Country')[['2019','2018','2017']].sum()
If you don't specify the columns, it will sum all the numeric columns.
df.groupby(by='Country').sum()
2019 2018 ...
Country
Afghanistan 56.40 32.4 ...
Albania 17.31 12.5 ...
Algeria 558.67 241.5 ...
Andorra 1.18 1.5 ...
Angola 256.10 32.1 ...
... ... ...
Venezuela 588.72 247.3 ...
Vietnam 868.40 323.5 ...
Yemen 50.05 55.7 ...
Zambia 182.08 23.4 ...
Zimbabwe 235.06 199.4 ...
Do a reset_index() to turn the Country index back into a regular column:
df.groupby(by='Country').sum().reset_index()
Country 2019 2018 ...
Afghanistan 56.40 32.4 ...
Albania 17.31 12.5 ...
Algeria 558.67 241.5 ...
Andorra 1.18 1.5 ...
Angola 256.10 32.1 ...
... ... ...
Venezuela 588.72 247.3 ...
Vietnam 868.40 323.5 ...
Yemen 50.05 55.7 ...
Zambia 182.08 23.4 ...
Zimbabwe 235.06 199.4 ...
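One caveat to the sum() calls above (my note, not part of the original answer): recent pandas versions no longer silently drop non-numeric columns during aggregation, so it can be safer to say so explicitly:
# numeric_only=True restricts the aggregation to numeric columns,
# skipping text columns such as Gas
df.groupby(by='Country').sum(numeric_only=True).reset_index()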
You can select the column keys in your dataframe starting from the 2019 column through to the last column key in this way:
df.groupby(by='Country')[df.keys()[2:]].sum()
The df.keys() method returns all the dataframe's column keys as an Index, which you can then slice from the position of the 2019 key (which is 2) through to the end of the column keys.
Suppose you want to select the columns from 2016 through 1992:
df.groupby(by='Country')[df.keys()[5:-1]].sum()
You just need to slice the column keys with the correct index positions.
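If positional slicing feels brittle (it breaks whenever the column order changes), a hedged alternative is to pick the year columns by name instead, assuming the year labels are strings of digits:
# Select every column whose label is all digits, i.e. the year columns
year_cols = [c for c in df.columns if str(c).isdigit()]
df.groupby(by='Country')[year_cols].sum()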
I work in R, where this operation would be easy in the tidyverse; however, I'm having trouble figuring out how to do it in Python and Pandas.
Let's say we're using the gapminder dataset
data_url = 'https://raw.githubusercontent.com/resbaz/r-novice-gapminder-files/master/data/gapminder-FiveYearData.csv'
gapminder = pd.read_csv(data_url)
and let's say that I want to filter out from the dataset all year values that are equal to 1952 and 1957. I would think that something like this would work, but it doesn't:
vector = [1952, 1957]
gapminder.query("year isin(vector)")
I realize here that I've made my vector as what is really a list. When I try to pass those two year values into an array as vector = pd.array(1952, 1957), that doesn't work either.
In R, for instance, you would have to do something simple like
vector = c(1952, 1957)
gapminder %>% filter(year %in% vector)
#or
gapminder %>% filter(year %in% c(1952, 1957))
So really this is a two part question: first, how can I create a vector of many values (if I were pulling these values from another dataset, I believe that I could just use pd.to_numpy) and then how do I then remove all rows based on that vector of observations from a dataframe?
I've looked at a lot of different variations for using query like here, for instance, https://www.geeksforgeeks.org/python-filtering-data-with-pandas-query-method/, but this has been surprisingly hard to find.
Update: I found that this isn't working if I pull a vector from another dataset (or even from the same dataset); for instance:
vector = (1952, 1957)
#how to take a dataframe and make a vector
#how to make a vector
gapminder.vec = gapminder\
.query('year == [1952, 1958]')\
[['country']]\
.to_numpy()
gap_sum = gapminder.query("year != @gapminder.vec")
gap_sum
I receive the following error:
Thanks much!
James
You can use in or even == inside the query string like so:
# gapminder.query("year == @vector") returns the same result
print(gapminder.query("year in @vector"))
country year pop continent lifeExp gdpPercap
0 Afghanistan 1952 8425333.0 Asia 28.801 779.445314
1 Afghanistan 1957 9240934.0 Asia 30.332 820.853030
12 Albania 1952 1282697.0 Europe 55.230 1601.056136
13 Albania 1957 1476505.0 Europe 59.280 1942.284244
24 Algeria 1952 9279525.0 Africa 43.077 2449.008185
... ... ... ... ... ... ...
1669 Yemen Rep. 1957 5498090.0 Asia 33.970 804.830455
1680 Zambia 1952 2672000.0 Africa 42.038 1147.388831
1681 Zambia 1957 3016000.0 Africa 44.077 1311.956766
1692 Zimbabwe 1952 3080907.0 Africa 48.451 406.884115
1693 Zimbabwe 1957 3646340.0 Africa 50.469 518.764268
The @ symbol tells the query string to look for a variable named vector outside of the context of the dataframe.
There are a couple of issues with the updated component of your question that I'll address:
1. The direct issue you're receiving is because you're using double square brackets to select a column. Double square brackets force the selection to be returned as a 2d table (i.e. a dataframe containing a single column) instead of just the column itself. To resolve this, simply get rid of the double brackets. The to_numpy is also not necessary.
2. In your gap_sum variable, you're checking where the values in "year" are not in your gapminder.vec, which is a pd.Series (an array, in more generic terms) of country names, so the two don't really make sense to compare.
3. Don't use . notation to create variables in Python. You're not making a new variable, but attaching a new attribute to an existing object. Instead use underscores, as is common practice in Python (e.g. gapminder_vec instead of gapminder.vec).
# countries that have years that are either 1952 or 1958
# will contain duplicate country names
gapminder_vec = gapminder.query('year == [1952, 1958]')['country']
# This won't actually filter anything- because `gapminder_vec` is
# a bunch of country names. Not years.
gapminder.query("year not in #gapminder_vec")
Also to perform a filter rather than a subset:
vec = (1952, 1958)
# returns a subset containing the rows who have a year in `vec`
subset_with_years_in_vec = gapminder.query('year in @vec')
# return subset containing rows who DO NOT have a year in `vec`
subset_without_years_in_vec = gapminder.query('year not in @vec')
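For completeness: if the updated part of the question meant to exclude the countries that appear in those years (rather than the years themselves), a sketch of the corrected chain would be:
# Single brackets give a Series of country names (no to_numpy needed)
countries_vec = gapminder.query('year == [1952, 1958]')['country']
# Drop every row whose country appears in that Series
gap_sum = gapminder.query('country not in @countries_vec')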
To filter out years 1952 and 1957 you can use:
print(gapminder.loc[~(gapminder.year.isin([1952, 1957]))])
Prints:
country year pop continent lifeExp gdpPercap
2 Afghanistan 1962 1.026708e+07 Asia 31.99700 853.100710
3 Afghanistan 1967 1.153797e+07 Asia 34.02000 836.197138
4 Afghanistan 1972 1.307946e+07 Asia 36.08800 739.981106
5 Afghanistan 1977 1.488037e+07 Asia 38.43800 786.113360
6 Afghanistan 1982 1.288182e+07 Asia 39.85400 978.011439
7 Afghanistan 1987 1.386796e+07 Asia 40.82200 852.395945
8 Afghanistan 1992 1.631792e+07 Asia 41.67400 649.341395
9 Afghanistan 1997 2.222742e+07 Asia 41.76300 635.341351
10 Afghanistan 2002 2.526840e+07 Asia 42.12900 726.734055
11 Afghanistan 2007 3.188992e+07 Asia 43.82800 974.580338
14 Albania 1962 1.728137e+06 Europe 64.82000 2312.888958
15 Albania 1967 1.984060e+06 Europe 66.22000 2760.196931
16 Albania 1972 2.263554e+06 Europe 67.69000 3313.422188
17 Albania 1977 2.509048e+06 Europe 68.93000 3533.003910
...
So I'm a beginner at Python and I have a dataframe with Country, avgTemp and year.
What I want to do is calculate new rows for each country where the year is increased by 20 and avgTemp is multiplied by a variable called tempChange. I don't want to remove the previous values though; I just want to append the new values.
This is how the dataframe looks (screenshot omitted; copyable values below):
Preferably I would also want to create a loop that runs the code a certain number of times
Super grateful for any help!
If you need to copy the values from the dataframe as an example you can have it here:
Country avgTemp year
0 Afghanistan 14.481583 2012
1 Africa 24.725917 2012
2 Albania 13.768250 2012
3 Algeria 23.954833 2012
4 American Samoa 27.201417 2012
243 rows × 3 columns
If you want to repeat the rows, I'd create a new dataframe, perform any operation in the new dataframe (add 20 years, multiply the temperature by a constant or an array, etc.) and then use concat() to append it to the original dataframe:
import pandas as pd
tempChange=1.15
data = {'Country':['Afghanistan','Africa','Albania','Algeria','American Samoa'],'avgTemp':[14,24,13,23,27],'Year':[2012,2012,2012,2012,2012]}
df = pd.DataFrame(data)
df_2 = df.copy()
df_2['avgTemp'] = df['avgTemp']*tempChange
df_2['Year'] = df['Year']+20
df = pd.concat([df, df_2])  # ignore_index=True if you wish to not repeat the index values
print(df)
Output:
Country avgTemp Year
0 Afghanistan 14.00 2012
1 Africa 24.00 2012
2 Albania 13.00 2012
3 Algeria 23.00 2012
4 American Samoa 27.00 2012
0 Afghanistan 16.10 2032
1 Africa 27.60 2032
2 Albania 14.95 2032
3 Algeria 26.45 2032
4 American Samoa 31.05 2032
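Since the question also asks for a loop that runs the projection a certain number of times, here is a sketch that starts again from the original df = pd.DataFrame(data) above, with n_steps as an assumed parameter:
n_steps = 3  # assumed number of 20-year projections to append
frames = [df]
for _ in range(n_steps):
    step = frames[-1].copy()
    step['avgTemp'] = step['avgTemp'] * tempChange  # warm each block further
    step['Year'] = step['Year'] + 20                # and shift it 20 years on
    frames.append(step)
df = pd.concat(frames, ignore_index=True)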
Where df is your dataframe name:
df['newYear'] = df['year'] + 20
df['newTemp'] = df['avgTemp'] * tempChange
This will add new columns to your df with the logic above. I'm not sure if I understood your logic correctly, so the math may need some work.
I believe that what you're looking for is
dfName['newYear'] = dfName.apply(lambda x: x['year'] + 20, axis=1)
dfName['tempDiff'] = dfName.apply(lambda x: x['avgTemp'] * tempChange, axis=1)
This is how you apply to each row.
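Note that the same result can be had with plain column arithmetic, which is usually faster than row-wise apply:
# Vectorized equivalents of the two apply calls above
dfName['newYear'] = dfName['year'] + 20
dfName['tempDiff'] = dfName['avgTemp'] * tempChange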
I have 3 dataframes each with the same columns (years) and same indexes (countries).
Now I want to merge these 3 dataframes. But since all have the same columns, it is appending those.
So I'd like to keep the country index and add a subindex for each dataframe, because they all represent different numbers for each year.
#dataframe 1
#CO2:
2005 2010 2015 2020
country
Afghanistan 169405 210161 259855 319447
Albania 762 940 1154 1408
Algeria 158336 215865 294768 400126
#dataframe 2
#Arrivals + Departures:
2005 2010 2015 2020
country
Afghanistan 977896 1326120 1794547 2414943
Albania 103132 154219 224308 319440
Algeria 3775374 5307448 7389427 10159656
#data frame 3
#Travel distance in km:
2005 2010 2015 2020
country
Afghanistan 9330447004 12529259781 16776152792 22337458954
Albania 63159063 82810491 107799357 139543748
Algeria 12254674181 17776784271 25782632480 37150057977
The result should be something like:
2005 2010 2015 2020
country
Afghanistan co2 169405 210161 259855 319447
flights 977896 1326120 1794547 2414943
traveldistance 9330447004 12529259781 16776152792 22337458954
Albania ....
How can I do this?
NOTE: The years are an input so these are not fixed. They could just be 2005,2010 for example.
Thanks in advance.
I have tried to solve the problem using concat and groupby on your dataset; hope it helps.
First, concat the 3 dfs:
l = [df, df2, df3]
f = pd.concat(l, keys=['CO2', 'Flights', 'traveldistance'], axis=0).reset_index().rename(columns={'level_0': 'Category'})
Then use groupby to get the values:
result_df = f.groupby(['country', 'Category'])[f.columns[2:]].first()
Hope it helps and solves your problem.
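An alternative sketch (my addition, not part of the answer above) keeps the hierarchy without the groupby step, by swapping the levels of the MultiIndex that concat creates:
import pandas as pd

# concat with keys yields a (Category, country) MultiIndex; swapping the
# levels and sorting gives the (country, category) layout from the question
f = pd.concat([df, df2, df3], keys=['co2', 'flights', 'traveldistance'])
result = f.swaplevel(0, 1).sort_index()
print(result)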