Python transform data long to wide

I'm looking to transform some data in Python.
Originally, in column 1 there are various identifiers (A to E in this example) associated with towns in column 2. There is a separate row for each identifier and town association. There can be any number of identifier to town associations.
I'd like to end up with ONE row per identifier and with all the associated towns going horizontally separated by commas.
I tried using long-to-wide reshaping but am having difficulty doing the above; I'd appreciate any suggestions.
Thank you

One way to do it is using groupby. For example, you can group by Column 1 and apply a function that returns the unique values for each group (i.e. each code), joined into a comma-separated string.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'col1': 'A A A A B B C C C D E E E E E'.split(' '),
    'col2': ['Accrington', 'Acle', 'Suffolk', 'Hampshire', 'Lincolnshire',
             'Derbyshire', 'Aldershot', 'Alford', 'Cumbria', 'Hampshire', 'Bath',
             'Alston', 'Greater Manchester', 'Northumberland', 'Cumbria'],
})

def get_towns(town_list):
    # join the sorted unique towns of each group into one comma-separated string
    return ', '.join(np.unique(town_list))

df.groupby('col1')['col2'].apply(get_towns)
And the result is:
col1
A Accrington, Acle, Hampshire, Suffolk
B Derbyshire, Lincolnshire
C Aldershot, Alford, Cumbria
D Hampshire
E Alston, Bath, Cumbria, Greater Manchester, Nor...
Name: col2, dtype: object
Note: the last row also contains Cumbria, unlike your expected results, since this value also appears with code E. I guess that was a typo in your question.

Another option is to use .groupby with aggregate because conceptually, this is not a pivoting operation but, well, an aggregation (concatenation) of values. This solution is quite similar to Luca Clissa's answer, but it uses the pandas api instead of numpy.
>>> df.groupby("col1").col2.agg(list)
col1
A [Accrington, Acle, Suffolk, Hampshire]
B [Lincolnshire, Derbyshire]
C [Aldershot, Alford, Cumbria]
D [Hampshire]
E [Bath, Alston, Greater Manchester, Northumberl...
Name: col2, dtype: object
That gives you cells of lists; if you need strings, add a .str.join(", "):
>>> df.groupby("col1").col2.agg(list).str.join(", ")
col1
A Accrington, Acle, Suffolk, Hampshire
B Lincolnshire, Derbyshire
C Aldershot, Alford, Cumbria
D Hampshire
E Bath, Alston, Greater Manchester, Northumberla...
Name: col2, dtype: object
If you want col1 as a normal column instead of an index, add a .reset_index() at the end.
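For example, a quick sketch of the full chain with the index reset (using the same df as above):
>>> df.groupby("col1").col2.agg(list).str.join(", ").reset_index()
This returns a regular two-column DataFrame with col1 and the comma-separated towns in col2.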

Related

Pandas group by either column [duplicate]

This question already has an answer here:
Group a pandas dataframe by one column OR another one
(1 answer)
Closed 7 months ago.
I want to find the groups (rather than the grouping variable) in a pandas groupby. Here is an example:
Name Col1 Col2 Col3
John 1 A C
Sam 1 B C
Mike 1 B D
Kate 2 E G
Fred 3 E H
Liz 3 F H
Jane 4 X Y
Henry 4 Z T
If I group them using Col1 and (Col2 or Col3), the corresponding groups will be
output = [['John', 'Sam', 'Mike'], ['Kate'], ['Fred', 'Liz'], ['Jane'], ['Henry']]
because a group consists of people having the same Col1 values, as well as either the same Col2 or the same Col3 value.
I was able to get what I want by creating a graph and finding connected components. Grouping by Col1 first, then finding connected components is another idea. However, I believe there must be a simpler way.
I would also like to do this in a more general case, such as grouping by Col1 and Col2 and (Col3 or Col4) and (Col5 or Col6).
I've had a look around, and this question is effectively a duplicate of this post:
Group a pandas dataframe by one column OR another one. So I cannot, not remotely, take credit for the following solution, but let me just show how you can adjust the impressive answer provided there by @AmiTavory to suit your specific needs:
import pandas as pd
import networkx as nx
import itertools

# the example data from the question
df = pd.DataFrame({
    'Name': ['John', 'Sam', 'Mike', 'Kate', 'Fred', 'Liz', 'Jane', 'Henry'],
    'Col1': [1, 1, 1, 2, 3, 3, 4, 4],
    'Col2': ['A', 'B', 'B', 'E', 'E', 'F', 'X', 'Z'],
    'Col3': ['C', 'C', 'D', 'G', 'H', 'H', 'Y', 'T'],
})

# connect two people if they share Col1 and either Col2 or Col3
G = nx.Graph()
G.add_nodes_from(df.Name)
G.add_edges_from(
    [(r1[1]['Name'], r2[1]['Name'])
     for (r1, r2) in itertools.product(df.iterrows(), df.iterrows())
     if r1[1].Name < r2[1].Name and
        (r1[1]['Col1'] == r2[1]['Col1'] and
         (r1[1]['Col2'] == r2[1]['Col2'] or r1[1]['Col3'] == r2[1]['Col3']))]
)

# label each name with the connected component it belongs to
df['group'] = df['Name'].map(
    dict(itertools.chain.from_iterable([[(ee, i) for ee in e]
                                        for (i, e) in enumerate(nx.connected_components(G))])))

# finally, we only need to add this to get the list with nested lists
# containing the names.
output = df.groupby('group')['Name'].apply(list).values.tolist()
output
# [['John', 'Sam', 'Mike'], ['Kate'], ['Fred', 'Liz'], ['Jane'], ['Henry']]
In order to achieve other combinations of and/or, you will just have to rewrite this bit:
(r1[1]['Col1'] == r2[1]['Col1'] and
(r1[1]['Col2'] == r2[1]['Col2'] or r1[1]['Col3'] == r2[1]['Col3']))
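For instance, the more general case mentioned in the question, Col1 and Col2 and (Col3 or Col4), would become (a sketch, assuming a Col4 column exists in df):
(r1[1]['Col1'] == r2[1]['Col1'] and
 r1[1]['Col2'] == r2[1]['Col2'] and
 (r1[1]['Col3'] == r2[1]['Col3'] or r1[1]['Col4'] == r2[1]['Col4']))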
From what I understood of your question, you want a list of the indexes of the individuals in each group of your grouped data; for that you will need the groups attribute.
So first, let's grab the group names and indexes:
df.groupby('col1').groups
This returns a dict whose keys are the names of each group in the column used and whose values are the DataFrame indexes of the rows in that group, which is what you want.
Using these group values, try the following comprehension:
[v.to_list() for v in df.groupby('col1').groups.values()]
That will return the wanted output independently of which column you group by.
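For example, with the df from the first answer above, this returns the row indexes per group:
[v.to_list() for v in df.groupby('col1').groups.values()]
# [[0, 1, 2, 3], [4, 5], [6, 7, 8], [9], [10, 11, 12, 13, 14]]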

How to exclude elements contained in another column - Pyspark DataFrame

Imagine you have a pyspark data frame df with three columns: A, B, C. I want to take the rows in the data frame where the value of B does not exist in C.
Example:
A B C
a 1 2
b 2 4
c 3 6
d 4 8
would return
A B C
a 1 2
c 3 6
What I tried
df.filter(~df.B.isin(df.C))
I also tried making the values of B into a list, but that takes a significant amount of time.
The problem is how you're using isin. For better or worse, isin can't actually handle another PySpark Column object as an input; it needs an actual collection. So one thing you could do is convert your column to a list:
col_values = df.select("C").rdd.flatMap(lambda x: x).collect()
df.filter(~df.B.isin(col_values))
Performance-wise, though, this is obviously not ideal, as your master node is now in charge of manipulating the entire contents of the single column you've just loaded into memory. You could use a left anti join to get the result you need without having to collect anything into a list or lose the efficiency of Spark's distributed computing:
df0 = df[["C"]].withColumnRenamed("C", "B")
df.join(df0, "B", "leftanti").show()
Thanks to Emma in the comments for her contribution.
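Putting it together, a minimal end-to-end sketch of the anti-join approach on the question's example data (assuming a local SparkSession):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("a", 1, 2), ("b", 2, 4), ("c", 3, 6), ("d", 4, 8)],
    ["A", "B", "C"],
)

# keep the rows of df whose B value never appears in column C
df0 = df.select("C").withColumnRenamed("C", "B")
df.join(df0, "B", "leftanti").show()
# rows a and c remain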

Group by specific token in Pandas Dataframe

So I have my dataframe, which is formatted as below.
Sentiments.head()
Sentiment Tweet
0 0 [corona, updat, govern, vow, pay, wage, staff,...
1 0 [open, today, til, PM, takeaway, beer, need, s...
2 0 [that, call, corona, viru, coronaviru, london,...
3 1 [that, th, person, know, bought, corona, dog, ...
4 1 [hhmmm, colodia, drifu, nigeria, believ, coron...
I need to group the tweets using the tokens 'govern' and 'Johnson'. I have tried the code below:
grouped_df = Sentiments.groupby('Tweet')
grouped_df.get_group('govern')
However I get an error
TypeError: unhashable type: 'list'
Both of the columns are added from lists, so is it possible to group by specific tokens or do I need to change the datatypes?
Thanks in advance!
return the dataframe rows which contain the words 'govern' and 'Johnson'
This can be done using set arithmetic in the following way; consider this simple example of getting records where both bb and cc are present:
import pandas as pd

def has_bb_cc(x):
    # True if both 'bb' and 'cc' are present in the token list x
    return set(['bb', 'cc']).issubset(x)

df = pd.DataFrame({'col1': ['a', 'b', 'c'],
                   'col2': [['aa', 'bb', 'cc'], ['bb', 'cc', 'dd'], ['cc', 'dd', 'ee']]})
bb_cc_df = df[df.col2.apply(has_bb_cc)]
print(bb_cc_df)
output:
col1 col2
0 a [aa, bb, cc]
1 b [bb, cc, dd]
Explanation: I define a function that checks whether bb and cc are present using set arithmetic, then apply it to the column of lists, getting a pandas.Series of True/False values which I then use to extract records from df.
As a side note, I would call this filtering rather than grouping.
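Applied to the question's data, the same idea would look roughly like this (a sketch, assuming the Tweet column holds lists of tokens as shown in Sentiments.head()):
mask = Sentiments['Tweet'].apply(lambda tokens: {'govern', 'Johnson'}.issubset(tokens))
govern_johnson_df = Sentiments[mask]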

Python Return all Columns [duplicate]

Using Python Pandas I am trying to find the Country & Place with the maximum value.
This returns the maximum value:
data.groupby(['Country','Place'])['Value'].max()
But how do I get the corresponding Country and Place name?
Assuming df has a unique index, this gives the row with the maximum value:
In [34]: df.loc[df['Value'].idxmax()]
Out[34]:
Country US
Place Kansas
Value 894
Name: 7
Note that idxmax returns index labels. So if the DataFrame has duplicates in the index, the label may not uniquely identify the row, so df.loc may return more than one row.
Therefore, if df does not have a unique index, you must make the index unique before proceeding as above. Depending on the DataFrame, sometimes you can use stack or set_index to make the index unique. Or, you can simply reset the index (so the rows become renumbered, starting at 0):
df = df.reset_index()
df[df['Value']==df['Value'].max()]
This will return the entire row with the max value.
I think the easiest way to return a row with the maximum value is by getting its index. argmax() can be used to return the index of the row with the largest value.
index = df.Value.argmax()
Now the index could be used to get the features for that particular row:
df.iloc[df.Value.argmax(), 0:2]
The country and place are the index of the series; if you don't need the index, you can set as_index=False:
df.groupby(['country','place'], as_index=False)['value'].max()
Edit:
It seems that you want the place with the max value for every country; the following code will do what you want:
df.groupby("country").apply(lambda g: g.loc[g['value'].idxmax()])
Use the index attribute of DataFrame. Note that I don't type all the rows in the example.
In [14]: df = data.groupby(['Country','Place'])['Value'].max()
In [15]: df.index
Out[15]:
MultiIndex
[Spain Manchester, UK London , US Mchigan , NewYork ]
In [16]: df.index[0]
Out[16]: ('Spain', 'Manchester')
In [17]: df.index[1]
Out[17]: ('UK', 'London')
You can also get the value by that index:
In [21]: for index in df.index:
   ....:     print index, df[index]
   ....:
('Spain', 'Manchester') 512
('UK', 'London') 778
('US', 'Mchigan') 854
('US', 'NewYork') 562
Edit:
Sorry for misunderstanding what you want; try the following:
In [52]: s=data.max()
In [53]: print '%s, %s, %s' % (s['Country'], s['Place'], s['Value'])
US, NewYork, 854
In order to print the Country and Place with maximum value, use the following line of code.
print(df[['Country', 'Place']][df.Value == df.Value.max()])
You can use:
print(df[df['Value']==df['Value'].max()])
Using DataFrame.nlargest.
The dedicated method for this is nlargest, which uses algorithm.SelectNFrame under the hood and is a performant way of doing sort_values().head(n). Given this example DataFrame:
x y a b
0 1 2 a x
1 2 4 b x
2 3 6 c y
3 4 1 a z
4 5 2 b z
5 6 3 c z
df.nlargest(1, 'y')
x y a b
2 3 6 c y
import pandas
df is the data frame you create.
Use the command:
df1=df[['Country','Place']][df.Value == df['Value'].max()]
This will display the country and place whose value is maximum.
My solution for finding maximum values in columns (note that .ix was removed from pandas; use .loc instead):
df.loc[df.idxmax()]
and likewise for the minimum:
df.loc[df.idxmin()]
I'd recommend using nlargest for better performance and shorter code.
import pandas
df[col_name].value_counts().nlargest(n=1)
I encountered a similar error while trying to import data using pandas. The first column of my dataset had spaces before the start of the words. I removed the spaces and it worked like a charm!

Returning date that corresponds with maximum value in pandas dataframe [duplicate]

How can I find the row for which the value of a specific column is maximal?
df.max() will give me the maximal value for each column, I don't know how to get the corresponding row.
Use the pandas idxmax function. It's straightforward:
>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
A B C
0 1.232853 -1.979459 -0.573626
1 0.140767 0.394940 1.068890
2 0.742023 1.343977 -0.579745
3 2.125299 -0.649328 -0.211692
4 -0.187253 1.908618 -1.862934
>>> df['A'].idxmax()
3
>>> df['B'].idxmax()
4
>>> df['C'].idxmax()
1
Alternatively you could also use numpy.argmax, such as numpy.argmax(df['A']) -- it provides the same thing, and appears at least as fast as idxmax in cursory observations.
idxmax() returns index labels, not integers.
Example: if you have string values as your index labels, like rows 'a' through 'e', you might want to know that the max occurs in row 4 (not row 'd').
If you want the integer position of that label within the Index, you have to get it manually (which can be tricky now that duplicate row labels are allowed).
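For example (a sketch; with duplicate labels, get_loc may return a boolean mask or slice rather than a single integer):
pos = df.index.get_loc(df['A'].idxmax())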
HISTORICAL NOTES:
idxmax() used to be called argmax() prior to 0.11
argmax was deprecated prior to 1.0.0 and removed entirely in 1.0.0
Back in pandas 0.16, argmax existed and performed the same function (though it appeared to run more slowly than idxmax).
argmax function returned the integer position within the index of the row location of the maximum element.
pandas moved to using row labels instead of integer indices. Positional integer indices used to be very common, more common than labels, especially in applications where duplicate row labels are common.
For example, consider this toy DataFrame with a duplicate row label:
In [19]: dfrm
Out[19]:
A B C
a 0.143693 0.653810 0.586007
b 0.623582 0.312903 0.919076
c 0.165438 0.889809 0.000967
d 0.308245 0.787776 0.571195
e 0.870068 0.935626 0.606911
f 0.037602 0.855193 0.728495
g 0.605366 0.338105 0.696460
h 0.000000 0.090814 0.963927
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
In [20]: dfrm['A'].idxmax()
Out[20]: 'i'
In [21]: dfrm.loc[dfrm['A'].idxmax()] # .ix instead of .loc in older versions of pandas
Out[21]:
A B C
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
So here a naive use of idxmax is not sufficient, whereas the old form of argmax would correctly provide the positional location of the max row (in this case, position 9).
This is exactly one of those nasty kinds of bug-prone behaviors in dynamically typed languages that makes this sort of thing so unfortunate, and worth beating a dead horse over. If you are writing systems code and your system suddenly gets used on some data sets that are not cleaned properly before being joined, it's very easy to end up with duplicate row labels, especially string labels like a CUSIP or SEDOL identifier for financial assets. You can't easily use the type system to help you out, and you may not be able to enforce uniqueness on the index without running into unexpectedly missing data.
So you're left with hoping that your unit tests covered everything (they didn't, or more likely no one wrote any tests) -- otherwise (most likely) you're just left waiting to see if you happen to smack into this error at runtime, in which case you probably have to go drop many hours worth of work from the database you were outputting results to, bang your head against the wall in IPython trying to manually reproduce the problem, finally figuring out that it's because idxmax can only report the label of the max row, and then being disappointed that no standard function automatically gets the positions of the max row for you, writing a buggy implementation yourself, editing the code, and praying you don't run into the problem again.
You might also try idxmax:
In [5]: df = pandas.DataFrame(np.random.randn(10,3),columns=['A','B','C'])
In [6]: df
Out[6]:
A B C
0 2.001289 0.482561 1.579985
1 -0.991646 -0.387835 1.320236
2 0.143826 -1.096889 1.486508
3 -0.193056 -0.499020 1.536540
4 -2.083647 -3.074591 0.175772
5 -0.186138 -1.949731 0.287432
6 -0.480790 -1.771560 -0.930234
7 0.227383 -0.278253 2.102004
8 -0.002592 1.434192 -1.624915
9 0.404911 -2.167599 -0.452900
In [7]: df.idxmax()
Out[7]:
A 0
B 8
C 7
e.g.
In [8]: df.loc[df['A'].idxmax()]
Out[8]:
A 2.001289
B 0.482561
C 1.579985
Both of the above answers would only return one index if there are multiple rows that take the maximum value. If you want all the rows, there does not seem to be a dedicated function.
But it is not hard to do. Below is an example for Series; the same can be done for DataFrame:
In [1]: from pandas import Series, DataFrame
In [2]: s=Series([2,4,4,3],index=['a','b','c','d'])
In [3]: s.idxmax()
Out[3]: 'b'
In [4]: s[s==s.max()]
Out[4]:
b 4
c 4
dtype: int64
df.iloc[df['columnX'].argmax()]
argmax() provides the positional index corresponding to the max value of columnX; iloc can then be used to get that row of the DataFrame df.
A more compact and readable solution using query() is like this:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(5, 3), columns=['A', 'B', 'C'])
print(df)

# find the row with the maximum value of A
df.query('A == A.max()')
It also returns a DataFrame instead of Series, which would be handy for some use cases.
Very simple: we have df as below and we want to print a row with max value in C:
A B C
x 1 4
y 2 10
z 5 9
In:
df.loc[df['C'] == df['C'].max()] # condition check
Out:
A B C
y 2 10
If you want the entire row instead of just the id, you can use df.nlargest and pass in how many 'top' rows you want, as well as which column(s) you want them for.
df.nlargest(2,['A'])
will give you the rows corresponding to the top 2 values of A.
Use df.nsmallest for min values.
The direct ".argmax()" solution does not work for me.
The previous example provided by @ely
>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
A B C
0 1.232853 -1.979459 -0.573626
1 0.140767 0.394940 1.068890
2 0.742023 1.343977 -0.579745
3 2.125299 -0.649328 -0.211692
4 -0.187253 1.908618 -1.862934
>>> df['A'].argmax()
3
>>> df['B'].argmax()
4
>>> df['C'].argmax()
1
returns the following message:
FutureWarning: 'argmax' is deprecated, use 'idxmax' instead. The behavior of 'argmax'
will be corrected to return the positional maximum in the future.
Use 'series.values.argmax' to get the position of the maximum now.
So my solution is:
df['A'].values.argmax()
mx.iloc[0].idxmax()
This one line of code will find the maximum value in a row of the dataframe; here mx is the dataframe and iloc[0] indicates the 0th row.
Considering this dataframe
[In]: df = pd.DataFrame(np.random.randn(4,3),columns=['A','B','C'])
[Out]:
A B C
0 -0.253233 0.226313 1.223688
1 0.472606 1.017674 1.520032
2 1.454875 1.066637 0.381890
3 -0.054181 0.234305 -0.557915
Assuming one wants to know the rows where column "C" is max, the following will do the work:
[In]: df[df['C']==df['C'].max()]
[Out]:
A B C
1 0.472606 1.017674 1.520032
The idxmax of the DataFrame returns the label index of the row with the maximum value, and the behavior of argmax depends on the version of pandas (right now it returns a warning). If you want to use the positional index, you can do the following:
max_row = df['A'].values.argmax()
or
import numpy as np
max_row = np.argmax(df['A'].values)
Note that np.argmax(df['A']) behaves the same as df['A'].argmax().
Use:
data.iloc[data['A'].idxmax()]
data['A'].idxmax() - finds the label of the row with the max value
data.iloc[] - returns the row (this assumes a default integer index; with a non-default index, use data.loc instead)
If there are ties in the maximum values, then idxmax returns the index of only the first max value. For example, in the following DataFrame:
A B C
0 1 0 1
1 0 0 1
2 0 0 0
3 0 1 1
4 1 0 0
idxmax returns
A 0
B 3
C 0
dtype: int64
Now, if we want all indices corresponding to max values, then we could use max + eq to create a boolean DataFrame, then use it on df.index to filter out indexes:
out = df.eq(df.max()).apply(lambda x: df.index[x].tolist())
Output:
A [0, 4]
B [3]
C [0, 1, 3]
dtype: object
What worked for me is:
df[df['colX'] == df['colX'].max()]
You then get the row(s) in your df with the maximum value of colX.
Then, if you just want the index, you can add .index at the end of the query.
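For example, a quick sketch of the index variant:
df[df['colX'] == df['colX'].max()].index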
