I'm trying to create a new column in the dataframe called volume. The DF already contains other columns such as market. What I want to do is group by price and company, take the count of each group, and put it in a new column called volume. Here's what I have:
df['volume'] = df.groupby(['price', 'company']).transform('count')
This does create a new column, but it keeps all of the original rows. I don't need all the rows: grouping the example below gives 4 groups, yet after the transform I still have all 7 rows, just with the new column added.
market company price volume
LA EK 206.0 2
LA SQ 206.0 1
LA EK 206.0 2
LA EK 36.0 3
LA EK 36.0 3
LA SQ 36.0 1
LA EK 36.0 3
I'd like to drop the duplicated rows. Is there a groupby query that will show only rows like these:
market company price volume
LA EK 206.0 2
LA SQ 206.0 1
LA SQ 36.0 1
LA EK 36.0 3
Simply drop_duplicates with the columns ['market', 'company', 'price']:
>>> df.drop_duplicates(['market', 'company', 'price'])
market company price volume
0 LA EK 206.0 2
1 LA SQ 206.0 1
3 LA EK 36.0 3
5 LA SQ 36.0 1
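Since the question asks for a groupby-based query: as a side note, the same result can be produced in one step without the intermediate transform (a sketch, using groupby size rather than the original transform):
# one row per (market, company, price) group, with the group size as 'volume'
out = (df.groupby(['market', 'company', 'price'], sort=False)
         .size().reset_index(name='volume'))
Here sort=False keeps the groups in their order of first appearance, matching the output above.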
Your data contains duplicates, probably because you are only including a subset of the columns. You need something besides price in your data (e.g. two different days could close at the same price, but you wouldn't want to aggregate the volume across the two).
Assuming the price is unique for a given timestamp, market, and company, first sort on your timestamp column if you have one (not required if there is only one price per company and market), then aggregate:
df = pd.DataFrame({
    'company': ['EK', 'SQ', 'EK', 'EK', 'EK', 'SQ', 'EK'],
    'date': ['2018-08-13'] * 3 + ['2018-08-14'] * 4,
    'market': ['LA'] * 7,
    'price': [206] * 3 + [36] * 4})
>>> (df.groupby(['market', 'date', 'company'])['price']
...      .agg(price='last', volume='count')
...      .reset_index())
market date company price volume
0 LA 2018-08-13 EK 206 2
1 LA 2018-08-13 SQ 206 1
2 LA 2018-08-14 EK 36 3
3 LA 2018-08-14 SQ 36 1
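The agg(price='last', volume='count') form is pandas named aggregation (available since pandas 0.25); the older dict-based renaming syntax (.agg({'price': 'last', 'volume': 'count'})) was removed in pandas 1.0.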
Related
I have two different Excel files which I read using pd.read_excel. The first Excel file is a kind of master file with a lot of columns; I'm showing only the relevant ones:
df1
Company Name Excel Company ID
0 cleverbridge AG IQ109133656
1 BT España, Compañía de Servicios Globales de T... IQ3806173
2 Technoserv Group IQ40333012
3 Blue Media S.A. IQ50008102
4 zeb.rolfes.schierenbeck.associates gmbh IQ30413992
and the second Excel file is basically an output file which looks like this:
df2
company_id found_keywords no_of_url company_name
0 IQ137156215 insurance 15 Zühlke Technology Group AG
1 IQ3806173 insurance 15 BT España, Compañía de Servicios Globales de T...
2 IQ40333012 insurance 4 Technoserv Group
3 IQ51614192 insurance 15 Octo Telematics S.p.A.
I want this output file (df2) to also include the company_id and company_name rows from df1 that are not already part of df2. Something like this:
df2
company_id found_keywords no_of_url company_name
0 IQ137156215 insurance 15 Zühlke Technology Group AG
1 IQ3806173 insurance 15 BT España, Compañía de Servicios Globales de T...
2 IQ40333012 insurance 4 Technoserv Group
3 IQ51614192 insurance 15 Octo Telematics S.p.A.
4 IQ30413992 NaN NaN zeb.rolfes.schierenbeck.associates gmbh
I tried several ways of achieving this using pd.merge as well as np.where. I even tried reindexing based on columns, but nothing worked out. What exactly do I need to do so that it works as expected? Please help me out. Thanks!
EDIT:
using pd.merge
df2.merge(df1, left_on='company_id', right_on='Excel Company ID', how='outer')
which gave an output with [220 rows X 31 columns]
Your expected output is unclear. If you use pd.merge with how='outer' and indicator=True, you will have:
df1 = df1.rename(columns={'Company Name': 'company_name', 'Excel Company ID': 'company_id'})
out = df2.merge(df1, on=['company_id', 'company_name'], how='outer', indicator=True)
Output:
>>> out
company_id found_keywords no_of_url company_name _merge
0 IQ137156215 insurance 15.0 Zühlke Technology Group AG left_only
1 IQ3806173 insurance 15.0 BT España, Compañía de Servicios Globales de T... both
2 IQ40333012 insurance 4.0 Technoserv Group both
3 IQ51614192 insurance 15.0 Octo Telematics S.p.A. left_only
4 IQ109133656 NaN NaN cleverbridge AG right_only
5 IQ50008102 NaN NaN Blue Media S.A. right_only
6 IQ30413992 NaN NaN zeb.rolfes.schierenbeck.associates gmbh right_only
Check the last column _merge. If you have right_only, it means the company_id and company_name are not found in df2.
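If you then want the expected df2 from the question (the original rows plus the ones missing from df2), a short follow-up sketch: keep only the right_only rows and append them:
# rows present only in df1, appended to df2 without the helper column
missing = out.loc[out['_merge'] == 'right_only'].drop(columns='_merge')
df2 = pd.concat([df2, missing], ignore_index=True)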
I have been trying to work with a dataset that includes quantities for two types of sales (0 and 1) for different cities across different dates. Some dates, however, include both type 1 and type 0 sales. How can I merge the type 1 and type 0 sales for the same date and the same id? The dataset has over 40k rows and I have no idea where to start; I was thinking about writing an if loop, but I don't know how. It can be in Python or R.
Essentially, I have a table that looks like this:
Date        City       Type  Quantity
2020-01-01  Rio        1     10
2020-01-01  Rio        0     16
2020-03-01  Rio        0     23
2020-03-01  Rio        1     27
2020-05-01  Rio        1     29
2020-08-01  Rio        0     36
2020-01-01  Sao Paulo  0     50
2020-01-01  Sao Paulo  1     62
2020-03-01  Sao Paulo  0     30
2020-04-01  Sao Paulo  1     32
2020-05-01  Sao Paulo  0     65
2020-05-01  Sao Paulo  1     155
I want to combine, for example, Rio's quantities for both types 1 and 0 on 2020-01-01, as well as on 2020-03-01, and do the same for Sao Paulo and all the other cities. I want to aggregate the type 1 and type 0 quantities while still preserving the date and city columns.
Try something like this:
import pandas as pd

df = pd.read_csv('your_file_name.csv')
df.pivot_table(values='Quantity', index=['Date', 'City'], aggfunc='sum').reset_index()
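For a quick sanity check without a CSV file, here is the same call on the first few rows of the sample table (the inline construction is mine, not part of the original answer):
import pandas as pd

df = pd.DataFrame({'Date': ['2020-01-01', '2020-01-01', '2020-03-01', '2020-03-01'],
                   'City': ['Rio'] * 4,
                   'Type': [1, 0, 0, 1],
                   'Quantity': [10, 16, 23, 27]})
# sums types 0 and 1 per (Date, City): 10+16 -> 26, 23+27 -> 50
print(df.pivot_table(values='Quantity', index=['Date', 'City'], aggfunc='sum').reset_index())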
You can use the pandas groupby and agg functions to do this operation. Here is some example code:
import pandas as pd

df = pd.DataFrame({'date': ['3/10/2000', '3/11/2000', '3/12/2000', '3/10/2000'],
                   'id': [0, 1, 0, 0], 'sale_type': [0, 0, 0, 1], 'amount': [2, 3, 4, 2]})
df['date'] = pd.to_datetime(df['date'])
df.groupby(['date', 'id']).agg({'amount': 'sum'})

               amount
date       id
2000-03-10 0        4
2000-03-11 1        3
2000-03-12 0        4
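If you want date and id back as regular columns instead of an index, either chain .reset_index() or pass as_index=False:
# same aggregation, keeping 'date' and 'id' as ordinary columns
df.groupby(['date', 'id'], as_index=False).agg({'amount': 'sum'})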
My version of code:
# -*- coding: utf-8 -*-
import pandas as pd

# generate a sample dataframe
df = pd.DataFrame([['10-01-2020', 311100, 'ABADIA', 'MG', 'MINAS', 'IVERMECTIONA', 0, 68],
                   ['10-01-2020', 311100, 'ABADIA', 'MG', 'MINAS', 'IVERMECTIONA', 1, 120]],
                  columns=['date', 'code1', 'code2', 'code3', 'code4', 'code5',
                           'type_of_sales', 'count_sales'])
# print the content of the dataframe
print(df)

# group by the columns we want in the result set and sum the additive column
df = df.groupby(['date', 'code1', 'code2', 'code3', 'code4', 'code5']).agg({'count_sales': ['sum']})
# flatten the MultiIndex column header created by the list aggregation
df = df.droplevel(axis=1, level=0).reset_index()
# rename the aggregated column back to its original name
df = df.rename(columns={'sum': 'count_sales'})
print(df)
I have a dataframe with the population of regions, and I want to populate a column of another dataframe with the same distribution.
The first dataframe looks like this:
Municipio Population Population5000
0 Lisboa 3184984 1291
1 Porto 2597191 1053
2 Braga 924351 375
3 Setúbal 880765 357
4 Aveiro 814456 330
5 Faro 569714 231
6 Leiria 560484 227
7 Coimbra 541166 219
8 Santarém 454947 184
9 Viseu 378784 154
10 Viana do Castelo 252952 103
11 Vila Real 214490 87
12 Castelo Branco 196989 80
13 Évora 174490 71
14 Guarda 167359 68
15 Beja 158702 64
16 Bragança 140385 57
17 Portalegre 120585 49
18 Total 12332794 5000
Basically, the second dataframe has 5000 rows, and I want to create a column whose values correspond to the Municipios from the first df.
My problem is that I don't know how to populate the column with the same occurrence distribution as in the first dataframe.
The final result would be something like this:
Municipio
0 Porto
1 Porto
2 Lisboa
3 Évora
4 Lisboa
5 Aveiro
...
4995 Viseu
4996 Lisboa
4997 Porto
4998 Guarda
4999 Beja
Can someone help me?
I would use a simple comprehension to build a list of 5000 elements, repeating each town name as many times as its Population5000 value, and optionally shuffle it if you want a random order:
import random

# skip the last row ('Total') and repeat each name Population5000 times
lst = [m for m, n in df.loc[:len(df)-2, ['Municipio', 'Population5000']].to_numpy()
       for i in range(n)]
random.shuffle(lst)
result = pd.Series(1, index=lst, name='Municipio')
Initialized with random.seed(0), it gives:
Setúbal 1
Santarém 1
Lisboa 1
Setúbal 1
Aveiro 1
..
Santarém 1
Porto 1
Lisboa 1
Faro 1
Aveiro 1
Name: Municipio, Length: 5000, dtype: int64
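As a side note (not part of the original answer), pandas can also do the repetition without a Python-level loop via Series.repeat; a minimal sketch, assuming the 'Total' row is the last one:
# drop the 'Total' row, repeat each name by its count, then shuffle
counts = df.iloc[:-1]
result = (counts['Municipio']
          .repeat(counts['Population5000'])
          .sample(frac=1, random_state=0)
          .reset_index(drop=True))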
You could just do a simple map:
map = dict(zip(DF1['Population5000'], DF1['Municipio']))
DF2['Municipio'] = DF2['Population5000'].map(map)
or change 'Population5000' in the map call to whatever the column containing your population values is called in DF2:
map = dict(zip(municipios['Population5000'], municipios['Municipio']))
df['Municipio'] = municipios['Population5000'].map(map)
I tried this as suggested by Amen_90, but the Municipio column in the second dataframe only gets populated with one instance of each Municipio, whereas I wanted it to have the same value counts as the Population5000 column in my first dataframe.
df["Municipio"].value_counts()
Beja 1
Aveiro 1
Bragança 1
Vila Real 1
Porto 1
Santarém 1
Coimbra 1
Guarda 1
Leiria 1
Castelo Branco 1
Viseu 1
Total 1
Faro 1
Portalegre 1
Braga 1
Évora 1
Setúbal 1
Viana do Castelo 1
Lisboa 1
Name: Municipio, dtype: int64
I have two dataframes of differing index lengths that look like this:
df_1:
State Month Total Time ... N columns
AL 4 1000
AL 5 500
.
.
.
VA 11 750
VA 12 1500
df_2:
State Month ... N columns
AL 4
AL 5
.
.
.
VA 11
VA 12
I would like to add a Total Time column to df_2 that uses the values from the Total Time column of df_1 if the State and Month value are the same between dataframes. Ultimately, I would end up with:
df_2:
State Month Total Time ... N columns
AL 4 1000
AL 5 500
.
.
.
VA 11 750
VA 12 1500
I want to do this conditionally since the index lengths are not the same. I have tried this so far:
def f(row):
    if (row['State'] == row['State']) & (row['Month'] == row['Month']):
        val = x for x in df_1["Total Time"]
        return val

df_2['Total Time'] = df_2.apply(f, axis=1)
This did not work. What method would you use to accomplish this?
Any help is appreciated!
You can do this:
Consider my sample dataframes:
In [2327]: df_1
Out[2327]:
State Month Total Time
0 AL 2 1000
1 AB 4 500
2 BC 1 600
In [2328]: df_2
Out[2328]:
State Month
0 AL 2
1 AB 5
In [2329]: df_2 = pd.merge(df_2, df_1, on=['State', 'Month'], how='left')
In [2330]: df_2
Out[2330]:
State Month Total Time
0 AL 2 1000.0
1 AB 5 NaN
As mentioned in the other answer, pd.merge() is how you would join two dataframes and bring over a column. The issue is that merging solely on 'State' and 'Month' results in every combination becoming a new row (every AL-4 row in df_1 would join with every other AL-4 row in df_2).
With your example, there'd be
df_1
State Month Total Time df_1 col...
0 AL 4 1000 6
1 AL 4 500 7
2 VA 12 750 8
3 VA 12 1500 9
df_2
State Month df_2 col...
0 AL 4 1
1 AL 4 2
2 VA 12 3
3 VA 12 4
df_1_cols_to_use = ['State', 'Month', 'Total Time']
# note the selection of the column to use from df_1. We only want the column
# we're merging on, plus the column(s) we want to bring in, in this case 'Total Time'
new_df = pd.merge(df_2, df_1[df_1_cols_to_use], on=['State', 'Month'], how='left')
new_df:
State Month df_2 col... Total Time
0 AL 4 1 1000
1 AL 4 1 500
2 AL 4 2 1000
3 AL 4 2 500
4 VA 12 3 750
5 VA 12 3 1500
6 VA 12 4 750
7 VA 12 4 1500
You mention these have differing index lengths. Based on the parameters of the question, it's not possible to determine which value of Total Time should match up with a given row in df_2. If there are three AL-4 entries in df_2, do they each get 1000, 500, or some combination? That information would be needed. Without it, this is the best guess at getting all the possibilities.
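If each (State, Month) pair in df_2 should instead receive a single value, one option (a sketch, assuming the first occurrence in df_1 is the one you want) is to de-duplicate df_1 before merging:
# collapse df_1 to one row per (State, Month) before the merge,
# which avoids the row multiplication shown above
dedup = df_1[df_1_cols_to_use].drop_duplicates(subset=['State', 'Month'])
new_df = pd.merge(df_2, dedup, on=['State', 'Month'], how='left')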
I have two dataframes (with strings) that I am trying to compare to each other. One has a list of areas; the other has a list of areas with longitude and latitude info as well. I am struggling to write code that does the following:
1) Check if the string in df1 matches (or partially matches) an area name in df2; if so, merge and carry over the longitude and latitude columns.
2) If a df1 row does not match anything in df2, the new columns will have NaN or zero.
Code:
import pandas as pd
df1 = pd.read_csv('Dubai Communities1.csv')
df1.head()
CNAME_E1
0 Abu Hail
1 Al Asbaq
2 Al Aweer First
3 Al Aweer Second
4 Al Bada
df2 = pd.read_csv('Dubai Communities2.csv')
df2.head()
COMM_NUM CNAME_E2 Latitude Longitude
0 315 UMM HURAIR 55.3237 25.2364
1 917 AL MARMOOM 55.4518 24.9756
2 624 WARSAN 55.4034 25.1424
3 123 AL MUTEENA 55.3228 25.2739
4 813 AL ROWAIYAH 55.3981 25.1053
The output after search and join will look like this:
CName_E1 CName_E3 Latitude Longitude
0 Area1 Area1 22 7.25
1 Area2 Area2 38 71.83
2 Area3 NaN NaN NaN
3 Area4 Area4 35 8.05
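A minimal sketch of steps 1) and 2), assuming a "partial match" can be approximated by a case-insensitive comparison of the normalized names (true fuzzy matching, e.g. with difflib.get_close_matches, would need extra logic); a left join leaves NaN where nothing matches:
import pandas as pd

df1 = pd.read_csv('Dubai Communities1.csv')
df2 = pd.read_csv('Dubai Communities2.csv')

# normalize the names so 'Abu Hail' and 'ABU HAIL' compare equal
df1['key'] = df1['CNAME_E1'].str.strip().str.upper()
df2['key'] = df2['CNAME_E2'].str.strip().str.upper()

# left join: df1 rows with no counterpart in df2 keep NaN in the new columns
out = (df1.merge(df2[['key', 'CNAME_E2', 'Latitude', 'Longitude']],
                 on='key', how='left')
          .drop(columns='key'))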