Pandas DataFrame: selecting a single value from each row - python

I want to access an element inside a pandas DataFrame. My df looks like this:

index  A        B
0      3, 2, 1  5, 6, 7
1      3, 2, 1  5, 6, 7
2      3, 2, 1  5, 6, 7

For example, I want to print the second value of A for every index; the problem is that I don't know how to select them.
Output should be
(2,2,2)

Assuming "3, 2, 1" is a list, you can do this with :
df.A.apply(lambda x: x[1])
if this is a string, you can do this with :
df.A.apply(lambda x: x.split(", ")[1])
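For reference, a minimal runnable sketch of both variants, using the sample data from the question:
import pandas as pd

# List-valued column, as in the question's layout
df = pd.DataFrame({'A': [[3, 2, 1]] * 3, 'B': [[5, 6, 7]] * 3})
print(df.A.apply(lambda x: x[1]).tolist())   # [2, 2, 2]

# String-valued column
df = pd.DataFrame({'A': ['3, 2, 1'] * 3, 'B': ['5, 6, 7'] * 3})
print(df.A.apply(lambda x: x.split(', ')[1]).tolist())   # ['2', '2', '2']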

If the entries in A are a non-string iterable (e.g., a list or tuple), you can use pandas string indexing:
df['A'].str[1]
Full example:
>>> import pandas as pd
>>> a = (3, 2, 1)
>>> df = pd.DataFrame([[a], [a], [a]], columns=['A'])
>>> df
A
0 (3, 2, 1)
1 (3, 2, 1)
2 (3, 2, 1)
>>> df['A'].str[1]
0 2
1 2
2 2
Name: A, dtype: int64
If the entries are strings, you can use pandas string methods to split them into a list and then apply the same approach as above:
>>> import pandas as pd
>>> a = '3,2,1'
>>> df = pd.DataFrame([[a], [a], [a]], columns=['A'])
>>> df
A
0 3,2,1
1 3,2,1
2 3,2,1
>>> df['A'].str.split(',').str[1]
0 2
1 2
2 2
Name: A, dtype: object

If column A contains string values:
import pandas as pd

data = {
    "A": ["3, 2, 1", "3, 2, 1", "3, 2, 1"],
    "B": ["5, 6, 7", "5, 6, 7", "5, 6, 7"],
}
df = pd.DataFrame(data)
output = df["A"].apply(lambda x: x.split(",")[1].strip()).to_list()
print(output)
Result:
['2', '2', '2']
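The same result is possible without apply, using the vectorized string methods from the previous answer; a small sketch reusing the df just defined:
# Vectorized variant: split on the comma, take the second piece, strip spaces
output = df["A"].str.split(",").str[1].str.strip().to_list()
print(output)   # ['2', '2', '2']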


Lists become pd.Series, then again lists with one dimension more

I have another problem with pandas; I will never get the hang of this library.
First, this is - I think - how zip() is supposed to work with lists:
import numpy as np
import pandas as pd

a = [1, 2]
b = [3, 4]
print(type(a))
print(type(b))
vv = zip([1, 2], [3, 4])
for i, v in enumerate(vv):
    print(f"{i}: {v}")
with output:
<class 'list'>
<class 'list'>
0: (1, 3)
1: (2, 4)
Problem: I create a dataframe with list elements (in the actual code the lists come from grouping ops and I cannot change them; basically they contain all the values of a dataframe grouped by a column).
# create dataframe
values = [{'x': list( (1, 2, 3) ), 'y': list( (4, 5, 6))}]
df = pd.DataFrame.from_dict(values)
print(df)
x y
0 [1, 2, 3] [4, 5, 6]
However, the lists are now pd.Series:
print(type(df["x"]))
<class 'pandas.core.series.Series'>
If I do this:
col1 = df["x"].tolist()
col2 = df["y"].tolist()
print(f"col1 is of type {type(col1)}, with length {len(col1)}, first el is {col1[0]} of type {type(col1[0])}")
col1 is of type <class 'list'>, with length 1, first el is [1, 2, 3] of type <class 'list'>
Basically, tolist() returned a list of lists (why?):
Indeed:
print("ZIP AND ITER")
vv = zip(col1, col2)
for v in zip(col1, col2):
print(v)
ZIP AND ITER
([1, 2, 3], [4, 5, 6])
I only need to compute this:
# this fails because x (y) is a list
# df['s'] = [np.sqrt(x**2 + y**2) for x, y in zip(df["x"], df["y"])]
I could use df["x"][0], but that seems not very elegant.
Question:
How am I supposed to compute sqrt(x^2 + y^2) when x and y are in the two columns df["x"] and df["y"]?
This should calculate df['s']
df['s'] = df.apply(lambda row: [np.sqrt(x**2 + y**2) for x, y in zip(row["x"], row["y"])], axis=1)
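A minimal runnable sketch of this row-wise approach, assuming the single-row frame from the question:
import numpy as np
import pandas as pd

values = [{'x': [1, 2, 3], 'y': [4, 5, 6]}]
df = pd.DataFrame.from_dict(values)

# apply with axis=1 hands each row to the lambda; the two list cells
# are zipped element by element.
df['s'] = df.apply(
    lambda row: [np.sqrt(x**2 + y**2) for x, y in zip(row['x'], row['y'])],
    axis=1,
)
print(df['s'][0])   # [4.123..., 5.385..., 6.708...]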
Basically, tolist() returned a list of lists (why?):
Because your dataframe has only 1 row, with two columns, and both columns contain a list as their value. So returning that column as a list of its values yields a list with one element (the list that is the value).
I think you wanted to create the dataframe like this:
values = {'x': list( (1, 2, 3) ), 'y': list( (4, 5, 6) )}
which yields:
   x  y
0  1  4
1  2  5
2  3  6
whereas your original code
values = [{'x': list( (1, 2, 3) ), 'y': list( (4, 5, 6) )}]
df = pd.DataFrame.from_dict(values)
print(df)  # yields
           x          y
0  [1, 2, 3]  [4, 5, 6]
An elegant way to compute sqrt(x^2 + y^2) is to convert the dataframe as follows:
new_df = df.iloc[0,:].apply(pd.Series).T.reset_index(drop=True)
This yields the following output:
x y
0 1 4
1 2 5
2 3 6
Now compute sqrt(x^2 + y^2):
np.sqrt(new_df['x']**2 + new_df['y']**2)
This yields:
0 4.123106
1 5.385165
2 6.708204
dtype: float64
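If reshaping is acceptable, a sketch of one more route, assuming pandas >= 1.3 for multi-column explode (np.hypot is simply another spelling of sqrt(x**2 + y**2)):
import numpy as np
import pandas as pd

values = [{'x': [1, 2, 3], 'y': [4, 5, 6]}]
df = pd.DataFrame.from_dict(values)

# explode unpacks both list columns in parallel (pandas >= 1.3);
# the exploded cells are objects, so cast before the arithmetic.
flat = df.explode(['x', 'y']).astype(float).reset_index(drop=True)
print(np.hypot(flat['x'], flat['y']))
# 0    4.123106
# 1    5.385165
# 2    6.708204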

In pandas, filter for duplicate values appearing in 1 of 2 different columns, for list of certain values only

zed = pd.DataFrame(data={
    'date': ['2022-03-01', '2022-03-02', '2022-03-03', '2022-03-04', '2022-03-05'],
    'a': [1, 5, 7, 3, 4],
    'b': [3, 4, 9, 12, 5],
})
How can this dataframe be filtered to keep the earliest row (earliest == lowest date) for each of the 3 values 1, 5, 4 appearing in either column a or column b? In this example, the rows with dates '2022-03-01' and '2022-03-02' would be kept, as they are the lowest dates where each of the 3 values appears.
We have tried zed[zed.isin({'a': [1, 5, 4], 'b': [1, 5, 4]}).any(1)].sort_values(by=['date']) but this returns the incorrect result as it returns 3 rows.
Without reshaping your dataframe, you can use:
idx = max([zed[['a', 'b']].eq(i).sum(axis=1).idxmax() for i in [1, 5, 4]])
out = zed.loc[:idx]
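Here, for each target value i, eq(i).sum(axis=1).idxmax() returns the first row label where i occurs (assuming each target value actually appears); taking the max of these positions and slicing with loc[:idx] keeps every row up to the last of those first hits. Note this works here because the earliest matches sit contiguously at the top of the frame.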
Output:
>>> out
date a b
0 2022-03-01 1 3
1 2022-03-02 5 4
You can reshape with DataFrame.stack, which makes it possible to filter by the list and remove duplicates:
s = zed.set_index('date')[['a','b']].stack()
idx = s[s.isin([1, 5, 4])].drop_duplicates().index.remove_unused_levels().levels[0]
print (idx)
Index(['2022-03-01', '2022-03-02'], dtype='object', name='date')
out = zed[zed['date'].isin(idx)]
print (out)
date a b
0 2022-03-01 1 3
1 2022-03-02 5 4
Or filter the first index value matching each condition, get the unique values, and select rows with DataFrame.loc:
L = [1, 5, 4]
idx = pd.unique([y for x in L for y in zed[zed[['a', 'b']].eq(x).any(axis=1)].index[:1]])
df = zed.loc[idx]
print (df)
date a b
0 2022-03-01 1 3
1 2022-03-02 5 4
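For completeness, a sketch of the same filtering via melt rather than stack (this variant is an assumption, not one of the answers above):
import pandas as pd

zed = pd.DataFrame({
    'date': ['2022-03-01', '2022-03-02', '2022-03-03', '2022-03-04', '2022-03-05'],
    'a': [1, 5, 7, 3, 4],
    'b': [3, 4, 9, 12, 5],
})

# One row per (date, value); keep the targets, then take the earliest
# date per value via drop_duplicates after sorting.
long = zed.melt(id_vars='date', value_vars=['a', 'b'], value_name='val')
first = (long[long['val'].isin([1, 5, 4])]
         .sort_values('date')
         .drop_duplicates('val'))
out = zed[zed['date'].isin(first['date'])]
print(out)
#          date  a  b
# 0  2022-03-01  1  3
# 1  2022-03-02  5  4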

Proper way to do this in pandas without using for loop

The question is: I would like to avoid iterrows here.
From my dataframe I want to create a new column "unique": the first time a given pair of "a" and "b" column values appears, I give it a value "uniqueN", and every other occurrence of that exact "a", "b" pair gets the same value "uniqueN".
In this case
"1", "3" (the first row) from "a" and "b" is the first unique pair, so I give that the value "unique1", and the seventh row will also have the same value which is "unique1" as it is also "1", "3".
"2", "2" (the second row) is the next unique "a", "b" pair so I give them "unique2" and the eight row also has "2", "2" so that will also have "unique2".
"3", "1" (third row) is the next unique, so "unique3", no more rows in the df is "3", "1" so that value wont repeat.
and so on
I have a working code that uses loops but this is not the pandas way, can anyone suggest how I can do this using pandas functions?
Expected Output (my code works, but it's not using pandas methods)
a b unique
0 1 3 unique1
1 2 2 unique2
2 3 1 unique3
3 4 2 unique4
4 3 3 unique5
5 4 2 unique4
6 1 3 unique1
7 2 2 unique2
Code
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4, 3, 4, 1, 2], 'b': [3, 2, 1, 2, 3, 2, 3, 2]})
c = 1
seen = {}
for i, j in df.iterrows():
    j = tuple(j)
    if j not in seen:
        seen[j] = 'unique' + str(c)
        c += 1
for key, value in seen.items():
    df.loc[(df.a == key[0]) & (df.b == key[1]), 'unique'] = value
Let's use groupby ngroup with sort=False to ensure values are enumerated in order of appearance, add 1 so group numbers start at one, then convert to string with astype so we can add the prefix unique to the number:
df['unique'] = 'unique' + \
    df.groupby(['a', 'b'], sort=False).ngroup().add(1).astype(str)
Or with map and format instead of converting and concatenating:
df['unique'] = (
    df.groupby(['a', 'b'], sort=False).ngroup()
      .add(1)
      .map('unique{}'.format)
)
df:
a b unique
0 1 3 unique1
1 2 2 unique2
2 3 1 unique3
3 4 2 unique4
4 3 3 unique5
5 4 2 unique4
6 1 3 unique1
7 2 2 unique2
Setup:
import pandas as pd

df = pd.DataFrame({
    'a': [1, 2, 3, 4, 3, 4, 1, 2], 'b': [3, 2, 1, 2, 3, 2, 3, 2]
})
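A similar numbering can be sketched with pd.factorize, which also codes values in order of first appearance; this variant is an assumption, not part of the answer above:
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4, 3, 4, 1, 2], 'b': [3, 2, 1, 2, 3, 2, 3, 2]})

# One hashable key per row, then factorize: codes are 0-based group
# numbers in order of first appearance.
pairs = df[['a', 'b']].apply(tuple, axis=1)
codes, _ = pd.factorize(pairs)
df['unique'] = ['unique' + str(c + 1) for c in codes]
print(df)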
I came up with a slightly different solution. I'll add this for posterity, but the groupby answer is superior.
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4, 3, 4, 1, 2], 'b': [3, 2, 1, 2, 3, 2, 3, 2]})
print(df)
# Keep only the first occurrence of each (a, b) pair; copy to avoid
# a SettingWithCopyWarning on the assignment below.
df1 = df[~df.duplicated()].copy()
print(df1)
# Label each unique pair with the index of its first occurrence.
df1['unique'] = df1.index
print(df1)
# Merging on a and b broadcasts those labels back to every row.
df2 = df.merge(df1, how='left')
print(df2)
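Note that the merged unique column holds the integer index of each pair's first occurrence rather than the uniqueN strings; a small sketch on top of the code above to renumber df1 before merging:
# Renumber unique pairs 1..N in order of first appearance before merging
df1['unique'] = ['unique' + str(i + 1) for i in range(len(df1))]
df2 = df.merge(df1, how='left')
print(df2)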

groupby count of values not equal to other col value pandas

I'm aiming to produce a groupby count of values, but only considering rows where Item and Item2 are different. The following achieves this, but drops rows if no values are different. If one or more values are present but identical between Item and Item2, I'm hoping to return 0.
import pandas as pd

df = pd.DataFrame({
    'Time':  [1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 4, 4, 4],
    'Item':  ['A','A','A','A','A','A','A','B','B','B','B','B','B','B','A','B','B','B'],
    'Item2': ['B','A','A','A','B','B','B','A','A','B','A','B','B','B','A','B','A','A'],
    'Value': [5, 6, 6, 5, 5, 6, 5, 6, 3, 1, 4, 6, 7, 4, 5, 1, 2, 3],
})
df1 = df[df['Item'] != df['Item2']].groupby(['Time']).size().reset_index(name='count')
Intended Output:
Time count
0 1 4
1 2 3
2 3 0
3 4 2
Edit 2:
df = pd.DataFrame({
    'Time':  ['1','1','1','1','1','1','1','2','2','2','2','2','2','2','3','4','4','4'],
    'Item':  ['A','A','A','A','A','A','A','B','B','B','B','B','B','B','A','B','B','B'],
    'Item2': ['B','A','A','A','B','B','B','A','A','B','A','B','B','B','A','B','A','A'],
    'Value': [2, 6, 6, 5, 3, 3, 4, 6, 5, 1, 4, 6, 7, 4, 5, 1, 2, 3],
})
df1 = (df.assign(new=df['Item'] != df['Item2'])
         .groupby('Time')['new']
         .mean()
         .reset_index(name='avg')
       )
Intended Output:
Time avg
0 1 3.0
1 2 5.0
2 3 0.0
3 4 2.5
The idea is not to filter, but to count the True values per group with sum (True counts as 1); here the Series df['Time'] is passed to groupby:
df1 = (df['Item'] != df['Item2']).groupby(df['Time']).sum().reset_index(name='count')
print (df1)
Time count
0 1 4
1 2 3
2 3 0
3 4 2
Another similar solution is to create a new helper column and aggregate it:
df1 = (df.assign(new=df['Item'] != df['Item2'])
         .groupby('Time')['new']
         .sum()
         .reset_index(name='count'))
EDIT: You can replace non-matching values with missing values using Series.where and then replace the missing values with fillna:
df1 = (df.assign(new=df['Value'].where(df['Item'] != df['Item2']))
         .groupby('Time')['new']
         .mean()
         .fillna(0)
         .reset_index(name='avg')
       )
print (df1)
Time avg
0 1 3.0
1 2 5.0
2 3 0.0
3 4 2.5
An alternative is to use Series.reindex with the unique values of the original Time column:
df1 = (df[df['Item'] != df['Item2']]
         .groupby(['Time'])['Value']
         .mean()
         .reindex(df['Time'].unique(), fill_value=0)
         .reset_index(name='avg'))
Have a look at pivot tables in pandas:
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'Time':  [1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3],
    'Item':  ['A','A','A','A','A','A','A','B','B','B','B','B','B','B','A'],
    'Item2': ['B','A','A','A','B','B','B','A','A','B','A','B','B','B','A'],
    'Value': [5, 6, 6, 5, 5, 6, 5, 6, 3, 1, 4, 6, 7, 4, 5],
})
# This keeps just the rows where the two items differ
df2 = df[df['Item'] != df['Item2']]
# Then count the remaining rows for each time
pd.pivot_table(df2, index='Time', aggfunc='count')
This gives you the table:
Item Item2 Value
Time
1 4 4 4
2 3 3 3
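Note that the pivot drops Time groups where no rows differ (there is no Time 3 row above); a sketch that reuses the reindex idea from the earlier answer to restore them as zeros, building on df2 from the snippet above:
# Reindex with every original Time value so empty groups show up as 0
counts = (pd.pivot_table(df2, index='Time', aggfunc='count')
            .reindex(df['Time'].unique(), fill_value=0))
print(counts)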

Pandas - add a row at the end of a for loop iteration

So I have a for loop that gets a series of values and makes some tests:
list = [1, 2, 3, 4, 5, 6]
df = pd.DataFrame(columns=['columnX', 'columnY', 'columnZ'])
for value in list:
    if value > 3:
        df['columnX'] = "A"
    else:
        df['columnX'] = "B"
        df['columnZ'] = "Another value only to be filled in this condition"
    df['columnY'] = value - 1
How can I do this and keep all the values in a single row for each loop iteration, no matter the outcome of the if? Can I keep some columns empty?
I mean something like the following process:
[create empty row] -> [process] -> [fill column X] -> [process] -> [fill column Y if true] ...
Like:
[index columnX columnY columnZ]
[0 A 0 NULL ]
[1 A 1 NULL ]
[2 B 2 "..." ]
[3 B 3 "..." ]
[4 B 4 "..." ]
I am not sure I understand exactly, but I think this may be a solution:
import pandas as pd

list = [1, 2, 3, 4, 5, 6]
d = {'columnX': [], 'columnY': []}
for value in list:
    if value > 3:
        d['columnX'].append("A")
    else:
        d['columnX'].append("B")
    d['columnY'].append(value - 1)
df = pd.DataFrame(d)
For the second question, just add another condition:
list = [1, 2, 3, 4, 5, 6]
d = {'columnX': [], 'columnY': [], 'columnZ': []}
for value in list:
    if value > 3:
        d['columnX'].append("A")
    else:
        d['columnX'].append("B")
    if condition:  # placeholder for whatever should fill columnZ
        d['columnZ'].append(xxx)
    else:
        d['columnZ'].append(None)
    d['columnY'].append(value - 1)  # every list must stay the same length
df = pd.DataFrame(d)
According to the example you have given, I have changed your code a bit to achieve the result you shared:
list = [1, 2, 3, 4, 5, 6]
df = pd.DataFrame(columns=['columnX', 'columnY', 'columnZ'])
for index, value in enumerate(list):
    temp = []
    if value > 3:
        # df['columnX'] = "A"
        temp.append("A")
        temp.append(None)
    else:
        # df['columnX'] = "B"
        temp.append("B")
        temp.append("Another value")  # or you can add any conditions
    # df['columnY'] = value - 1
    temp.append(value - 1)
    df.loc[index] = temp
print(df)
This produces the result:
columnX columnY columnZ
0 B Another value 0.0
1 B Another value 1.0
2 B Another value 2.0
3 A None 3.0
4 A None 4.0
5 A None 5.0
df.index is printed as: Int64Index([0, 1, 2, 3, 4, 5], dtype='int64')
You may just prepare/initialize your DataFrame with an index sized to the input list, then get the power of the np.where routine:
In [111]: lst = [1, 2, 3, 4, 5, 6]
...: df = pd.DataFrame(columns=['columnX','columnY', 'columnZ'], index=range(len(lst)))
In [112]: int_arr = np.array(lst)
In [113]: df['columnX'] = np.where(int_arr > 3, 'A', 'B')
In [114]: df['columnZ'] = np.where(int_arr > 3, df['columnZ'], '...')
In [115]: df['columnY'] = int_arr - 1
In [116]: df
Out[116]:
columnX columnY columnZ
0 B 0 ...
1 B 1 ...
2 B 2 ...
3 A 3 NaN
4 A 4 NaN
5 A 5 NaN
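The same np.where idea can build the frame in a single constructor call rather than column by column; a minimal sketch mirroring the session above (the None/'...' placeholders are the ones used there):
import numpy as np
import pandas as pd

lst = [1, 2, 3, 4, 5, 6]
arr = np.array(lst)
df = pd.DataFrame({
    'columnX': np.where(arr > 3, 'A', 'B'),
    'columnY': arr - 1,
    # None where the condition holds, '...' otherwise, as in Out[116]
    'columnZ': np.where(arr > 3, None, '...'),
})
print(df)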
