Why can't you modify an attribute of a row directly in pandas - python

Let's say I have a dataframe A with an attribute called 'score'.
I can modify the 'score' attribute of the second row by doing:
tmp = A.loc[2]
tmp.score = some_new_value
A.loc[2] = tmp
But I can't do it like this:
A.loc[2].score = some_new_value
Why?

Your case will be hard to reproduce, because pandas does not guarantee whether a chained-indexing operation returns a view or a copy of the dataframe.
When you access a "cell" of the dataframe by
A.loc[2].score
you are actually performing two steps: first .loc and then .score (which is essentially chained indexing). The Pandas documentation has a nice post about it here.
The simplest way to prevent this is by consistently using .loc or .iloc to access the rows/columns you need and reassigning the value. Therefore, I would recommend always using either
A.loc[2, "score"] = some_new_value
or
A.at[2, "score"] = some_new_value
This kind of indexing + setting will be translated "under the hood" to:
A.loc.__setitem__((2, 'score'), some_new_value) # modifies A directly
instead of an unreliable chain of __getitem__ and __setitem__.

Let's show an example:
import pandas as pd
dict_ = {'score': [1,2,3,4,5,6], 'other':'a'}
A = pd.DataFrame(dict_)
A
Dataframe:
score other
0 1 a
1 2 a
2 3 a
3 4 a
4 5 a
5 6 a
Now you can do the following, and the values are actually saved:
A.loc[2,'score'] = 'Heyyyy'
A
Dataframe:
score other
0 1 a
1 2 a
2 Heyyyy a
3 4 a
4 5 a
5 6 a
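For contrast, a minimal sketch of why the chained version fails (illustrative only: whether .loc[2] returns a view or a copy depends on dtypes and memory layout, so treat the exact behavior as an assumption):

tmp = A.loc[2]            # returns a Series; with mixed dtypes this is a copy of row 2
tmp['score'] = 'nope'     # modifies only the temporary copy
print(A.loc[2, 'score'])  # still 'Heyyyy' - A itself is unchanged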

Related

Obtain a view of a DataFrame using the loc method

I am trying to obtain a view of a pandas dataframe using the loc method, but it does not work as expected when I modify the original DataFrame.
I want to extract a row/slice of a DataFrame using the loc method so that when a modification is done to the DataFrame, the slice reflects the change.
Let's have a look at this example:
import pandas as pd
import numpy as np
df = pd.DataFrame({'ID':np.arange(0,5,2), 'a':np.arange(3), 'b':np.arange(3)}).set_index('ID')
df
a b
ID
0 0 0
2 1 1
4 2 2
Now I create a slice using loc:
slice1 = df.loc[[2],]
slice1
a b
ID
2 1 1
Then I modify the original DataFrame:
df.loc[2, 'b'] = 9
df
a b
ID
0 0 0
2 1 9
4 2 2
But unfortunately our slice does not reflect this modification as I would be expecting for a view:
slice1
a b
ID
2 1 1
My expectation:
a b
ID
2 1 9
I found an ugly fix using a mix of iloc and loc but I hope there is a nicer way to obtain the result I am expecting.
Thank you for your help.
Disclaimer: This is not an answer.
I tried testing how overwriting values behaves in chained assignment vs .loc, referring to the pandas documentation link that was shared by @Quang Hoang above.
This is what I tried:
dfmi = pd.DataFrame([list('abcd'),
                     list('efgh'),
                     list('ijkl'),
                     list('mnop')],
                    columns=pd.MultiIndex.from_product([['one', 'two'],
                                                        ['first', 'second']]))
df1 = dfmi['one']['second']
df2 = dfmi.loc[:, ('one', 'second')]
Output of both df1 and df2:
0 b
1 f
2 j
3 n
Iteration 1:
value = ['z', 'x', 'c', 'v']
dfmi['one']['second'] = value
Output df1:
0 z
1 x
2 c
3 v
Iteration 2:
value = ['z', 'x', 'c', 'v']
dfmi.loc[:, ('one', 'second')] = value
Output df2:
0 z
1 x
2 c
3 v
Assigning the new values changes the data in both cases.
The documentation says:
Quote 1: 'method 2 (.loc) is much preferred over method 1 (chained [])'
Quote 2:
'Outside of simple cases, it’s very hard to predict whether "getitem" (used by chained option) will return a view or a copy (it depends on the memory layout of the array, about which pandas makes no guarantees), and therefore whether the "setitem" (used by .loc) will modify dfmi or a temporary object that gets thrown out immediately afterward.'
I am not able to understand the explanation above. If the value in dfmi can change (as in my case) but may not change (as in Benoit's case), which way should I use to obtain the result? Not sure if I am missing a point here.
Looking for help
The reason the slice doesn't reflect the changes you made in the original dataframe is because you created the slice first.
When you create a slice, you create a "copy" of a slice of the data. You're not directly linking the two.
The short answer here is that you have two options: 1) change the original df first, then create the slice, or 2) don't slice; just do your operations on the original df using .loc or .iloc.
The memory addresses of your dataframe and your slice are different, so changes in the dataframe won't be reflected in the slice.
The answer is to change the value in the dataframe first and then slice it, as sketched below.
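A minimal sketch of that approach, reusing the example data from the question:

df.loc[2, 'b'] = 9       # change the original dataframe first
slice1 = df.loc[[2], :]  # then take the slice: it now shows b == 9
print(slice1)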

Creating a new column creates a copy of the dataframe

I would like to check the value of the row above and see if it is the same as the current row. I found a great answer here: df['match'] = df.col1.eq(df.col1.shift()), where col1 is the column you are comparing.
However, when I tried it, I received a SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. warning. My col1 is a string. I know you can suppress warnings, but how would I check the row above while making sure that I am not creating a copy of the dataframe? Even with the warning I do get my desired output, but I was curious whether a better way exists.
import pandas as pd

data = {'col1': ['a','a','a','b','b','c','c','c','d','d'],
        'week': [1,1,1,1,1,2,2,2,2,2]}
df = pd.DataFrame(data, columns=['col1','week'])

df['check_condition'] = 1
while sum(df.check_condition) != 0:
    for week in df.week:
        wk = df.loc[df.week == week]
        wk['match'] = wk.col1.eq(wk.col1.shift())  # <-- where the warning occurs
        # fix the repetitive value...which I have not done yet
        # for now just exit out of the while loop
        df.loc[df.week == week, 'check_condition'] = 0
You can't ignore a pandas SettingWithCopyWarning!
It's 100% telling you that your code is not going to work as intended, if at all. Stop, investigate and fix it. (It's not an ignorable warning you can filter out, like a pandas FutureWarning nagging about deprecation.)
Multiple issues with your code:
You're trying to iterate over a dataframe (but not with groupby()), taking slices of it (the sub-dataframe wk, which is indeed a copy of a slice)...
then assigning to the (nonexistent) new column wk['match']. This is bad; you shouldn't do this. (You could initialize df['match'] = np.nan, but it would still be wrong to assign to the copy in wk)...
SettingWithCopyWarning is triggered when you try to assign to wk['match']. It's telling you that wk is a copy of a slice from dataframe df, not df itself. Hence the message: A value is trying to be set on a copy of a slice from a DataFrame. That assignment would be thrown away every time wk gets overwritten by your loop, so even if you could force it to work on wk, it would be wrong. That's why SettingWithCopyWarning is a code smell: you shouldn't be making a copy of a slice of df in the first place.
Later on, you also try to assign to column df['check_condition'] while iterating over the df, that's also bad.
Solution:
df['check_condition'] = df['col1'].eq(df['col1'].shift()).astype(int)
df
col1 week check_condition
0 a 1 0
1 a 1 1
2 a 1 1
3 b 1 0
4 b 1 1
5 c 2 0
6 c 2 1
7 c 2 1
8 d 2 0
9 d 2 1
More generally, for more complicated code where you want to iterate over each group of a dataframe according to some grouping criteria, you'd use groupby() and split-apply-combine instead (a sketch follows this list):
you're grouping by wk.col1.eq(wk.col1.shift()), i.e. rows where col1 value doesn't change from the preceding row
and you want to set check_condition to 0 on those rows
and 1 on rows where col1 value did change from the preceding row
But in this simpler case you can skip groupby() and do a direct assignment.
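Still, for reference, the groupby() version might look like this (a sketch; on this data it produces the same check_condition as the direct assignment):

# compare each row to the previous row within its own week group
df['check_condition'] = (
    df.groupby('week')['col1']
      .transform(lambda s: s.eq(s.shift()))
      .astype(int)
)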

Filtering data in Pandas returns error 'method' object is not iterable

I have a dataset as follows:
I want to filter the rows where the count value equals 1.
index count
1 4
2 5
3 1
4 1
This is my code:
booleans = []
for number in df1.count:
    if number == 1:
        booleans.append(True)
    else:
        booleans.append(False)
but it has this error:
'method' object is not iterable
I also tried this:
df[df.count==1]
but I had the following error:
KeyError: False
any suggestion?
In your code the problem is with this part: df1.count. pandas has a method count(), which counts the number of non-NA/null observations across the given axis.
So in your code, df1.count returns the bound method itself, something like this:
<bound method DataFrame.count of index count
0 1 4
1 2 5
2 3 1
3 4 1>
Instead, you can use df[df['count']=='1'] to get what you were looking for.
import pandas as pd

data = {"index": ['1','2','3','4'],
        "count": ['4','5','1','1']}
df = pd.DataFrame(data)

indexes = df[df['count'] == '1']
print(indexes)
Output
index count
2 3 1
3 4 1
count is also a method of the pandas DataFrame.
When you write df.count, Python resolves it to the DataFrame's count() method, not to your column that happens to have the same name. Doing df["count"] instead solves your issue.
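A quick way to see the difference for yourself (a sketch, using the df from the answer above):

print(type(df.count))     # <class 'method'> - the DataFrame.count method
print(type(df['count']))  # <class 'pandas.core.series.Series'> - the column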
The standard way to do this is to do the following:
Solution 1
df1[df1["count"]=='1']
Solution 2
However, if you really do want to get a list of booleans you might want to use lambdas:
booleans = list(df1['count'].apply(lambda x:x=='1').values)
You can then use this list to get the result you want like so:
df1[booleans]
This is basically the same thing as solution 1.

Why is there an extra index when using apply in Pandas

When I use apply with a user-defined function in Pandas, it looks like an additional index is created. How can I get rid of it? Here is my code:
import numpy as np
import pandas as pd

def fnc(group):
    x = group.C.values
    out = x[np.where(x < 0)]
    return pd.DataFrame(out)

data = pd.DataFrame({'A': np.random.randint(1, 3, 10),
                     'B': 3,
                     'C': np.random.normal(0, 1, 10)})

data.groupby(by=['A', 'B']).apply(fnc).reset_index()
There is this weird level_2 index created. Is there a way to avoid creating it when running my function?
A B level_2 0
0 1 3 0 -1.054134802
1 1 3 1 -0.691996447
2 2 3 0 -1.068693768
3 2 3 1 -0.080342046
4 2 3 2 -0.181869799
As such, there is no way to avoid level_2 appearing. This is because the result of your grouping is a dataframe with several items in it: pandas is cool enough to understand that you wish to broadcast these items across the grouped keys, yet it takes the index of the returned dataframe as an additional level to guarantee coherent output data. So explicitly dropping that level (level=-1) at the end of your processing is expected.
If you want to avoid resetting that extra index but still do some post-processing, another way is to call transform instead of apply, and have fnc return the entire group vector with np.nan in place of the results to exclude. Then your dataframe will not have an extra level, but you'll need to call dropna() afterwards.
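Sketches of both options, building on the code above (the column name neg_C is illustrative, not from the original question):

# option 1: explicitly drop the automatically added innermost level
res = data.groupby(by=['A', 'B']).apply(fnc)
res = res.reset_index(level=-1, drop=True).reset_index()

# option 2: transform keeps the original shape, with NaN marking excluded rows
data['neg_C'] = data.groupby(by=['A', 'B'])['C'].transform(lambda x: x.where(x < 0))
data = data.dropna(subset=['neg_C'])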

How can I use the apply() function for a single column?

I have a pandas dataframe with two columns. I need to change the values of the first column without affecting the second one and get back the whole dataframe with just first column values changed. How can I do that using apply() in pandas?
Given a sample dataframe df as:
a b
0 1 2
1 2 3
2 3 4
3 4 5
what you want is:
df['a'] = df['a'].apply(lambda x: x + 1)
that returns:
a b
0 2 2
1 3 3
2 4 4
3 5 5
For a single column it is better to use map(), like this:
df = pd.DataFrame([{'a': 15, 'b': 15, 'c': 5}, {'a': 20, 'b': 10, 'c': 7}, {'a': 25, 'b': 30, 'c': 9}])
a b c
0 15 15 5
1 20 10 7
2 25 30 9
df['a'] = df['a'].map(lambda a: a / 2.)
a b c
0 7.5 15 5
1 10.0 10 7
2 12.5 30 9
Given the following dataframe df and the function complex_function,
import pandas as pd

def complex_function(x, y=0):
    if x > 5 and x > y:
        return 1
    else:
        return 2

df = pd.DataFrame(data={'col1': [1, 4, 6, 2, 7], 'col2': [6, 7, 1, 2, 8]})
col1 col2
0 1 6
1 4 7
2 6 1
3 2 2
4 7 8
there are several solutions to use apply() on only one column. In the following I will explain them in detail.
I. Simple solution
The straightforward solution is the one from @Fabio Lamanna:
df['col1'] = df['col1'].apply(complex_function)
Output:
col1 col2
0 2 6
1 2 7
2 1 1
3 2 2
4 1 8
Only the first column is modified; the second column is unchanged. The solution is beautiful: it is just one line of code, and it reads almost like English: "Take 'col1' and apply the function complex_function to it."
However, if you need data from another column, e.g. 'col2', it won't work. If you want to pass the values of 'col2' to variable y of the complex_function, you need something else.
II. Solution using the whole dataframe
Alternatively, you could use the whole dataframe as described in this SO post or this one:
df['col1'] = df.apply(lambda x: complex_function(x['col1']), axis=1)
or if you prefer (like me) a solution without a lambda function:
def apply_complex_function(x):
    return complex_function(x['col1'])

df['col1'] = df.apply(apply_complex_function, axis=1)
There is a lot going on in this solution that needs to be explained. The apply() function works on pd.Series and pd.DataFrame. But you cannot use df['col1'] = df.apply(complex_function).loc[:, 'col1'], because it would throw a ValueError.
Hence, you need to tell it which column to use. To complicate things, apply() only accepts callables. To solve this, you define a (lambda) function with the column x['col1'] as argument; i.e., we wrap the column information in another function.
Unfortunately, the default value of the axis parameter is zero (axis=0), which means it will try executing column-wise and not row-wise. This wasn't a problem in the first solution, because we gave apply() a pd.Series. But now the input is a dataframe and we must be explicit (axis=1). (I marvel how often I forget this.)
Whether you prefer the version with the lambda function or without is subjective. In my opinion the line of code is complicated enough to read even without a lambda function thrown in. You only need the (lambda) function as a wrapper. It is just boilerplate code. A reader should not be bothered with it.
Now, you can modify this solution easily to take the second column into account:
def apply_complex_function(x):
    return complex_function(x['col1'], x['col2'])

df['col1'] = df.apply(apply_complex_function, axis=1)
Output:
col1 col2
0 2 6
1 2 7
2 1 1
3 2 2
4 2 8
At index 4 the value has changed from 1 to 2, because the first condition 7 > 5 is true but the second condition 7 > 8 is false.
Note that you only needed to change the first line of code (i.e. the function) and not the second line.
Side note
Never put the column information into your function.
def bad_idea(x):
    return x['col1'] ** 2
By doing this, you make a general function dependent on a column name! This is a bad idea, because the next time you want to use this function, you cannot. Worse: Maybe you rename a column in a different dataframe just to make it work with your existing function. (Been there, done that. It is a slippery slope!)
III. Alternative solutions without using apply()
Although the OP specifically asked for a solution with apply(), alternative solutions were suggested. For example, the answer of @George Petrov suggested to use map(); the answer of @Thibaut Dubernet proposed assign().
I fully agree that apply() is seldom the best solution, because apply() is not vectorized. It is an element-wise operation with expensive function calling and overhead from pd.Series.
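A rough way to see that overhead for yourself (a sketch; exact numbers vary by machine and pandas version):

import time
import numpy as np
import pandas as pd

s = pd.Series(np.random.rand(1_000_000))

t0 = time.perf_counter(); s.apply(lambda v: v + 1); t1 = time.perf_counter()
t2 = time.perf_counter(); s + 1; t3 = time.perf_counter()
print(f"apply: {t1 - t0:.3f}s   vectorized: {t3 - t2:.3f}s")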
One reason to use apply() is that you want to use an existing function and performance is not an issue. Or your function is so complex that no vectorized version exists.
Another reason to use apply() is in combination with groupby(). Please note that DataFrame.apply() and GroupBy.apply() are different functions.
So it does make sense to consider some alternatives:
map() only works on pd.Series, but accepts dict and pd.Series as input. Using map() with a function is almost interchangeable with using apply(). It can be faster than apply(). See this SO post for more details.
df['col1'] = df['col1'].map(complex_function)
applymap() is almost identical for dataframes. It does not support pd.Series and it will always return a dataframe. However, it can be faster. The documentation states: "In the current implementation applymap calls func twice on the first column/row to decide whether it can take a fast or slow code path.". But if performance really counts you should seek an alternative route.
df['col1'] = df.applymap(complex_function).loc[:, 'col1']
assign() is not a feasible replacement for apply(). It has a similar behaviour in only the most basic use cases. It does not work with the complex_function. You still need apply() as you can see in the example below. The main use case for assign() is method chaining, because it gives back the dataframe without changing the original dataframe.
df['col1'] = df.assign(col1=df.col1.apply(complex_function))
Annex: How to speed up apply()?
I only mention it here because it was suggested by other answers, e.g. @durjoy. The list is not exhaustive:
Do not use apply(). This is no joke. For most numeric operations, a vectorized method exists in pandas. If/else blocks can often be refactored with a combination of boolean indexing and .loc. My example complex_function could be refactored in this way (see the sketch after this list).
Refactor to Cython. If you have a complex equation and the parameters of the equation are in your dataframe, this might be a good idea. Check out the official pandas user guide for more information.
Use the raw=True parameter. Theoretically, this should improve the performance of apply() if you are just applying a NumPy reduction function, because the overhead of pd.Series is removed. Of course, your function has to accept an ndarray, so you have to refactor it to NumPy. By doing this, you can get a huge performance boost.
Use 3rd party packages. The first thing you should try is Numba. I do not know swifter, mentioned by @durjoy; and probably many other packages are worth mentioning here.
Try/Fail/Repeat. As mentioned above, map() and applymap() can be faster - depending on the use case. Just time the different versions and choose the fastest. This approach is the most tedious one with the least performance increase.
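Here is the sketch referred to in the first point: a vectorized refactoring of complex_function using numpy.where (illustrative; it assumes the df defined earlier in this answer):

import numpy as np

# complex_function(x, y) returns 1 if x > 5 and x > y, else 2 - vectorized:
df['col1'] = np.where((df['col1'] > 5) & (df['col1'] > df['col2']), 1, 2)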
You don't need a function at all. You can work on a whole column directly.
Example data:
>>> df = pd.DataFrame({'a': [100, 1000], 'b': [200, 2000], 'c': [300, 3000]})
>>> df
a b c
0 100 200 300
1 1000 2000 3000
Half all the values in column a:
>>> df.a = df.a / 2
>>> df
a b c
0 50 200 300
1 500 2000 3000
Although the given responses are correct, they modify the initial data frame, which is not always desirable (and, given the OP asked for examples "using apply", they might have wanted a version that returns a new data frame, as apply does).
This is possible using assign: it is valid to assign to existing columns, as the documentation states (emphasis is mine):
Assign new columns to a DataFrame.
Returns a new object with all original columns in addition to new ones. Existing columns that are re-assigned will be overwritten.
In short:
In [1]: import pandas as pd
In [2]: df = pd.DataFrame([{'a': 15, 'b': 15, 'c': 5}, {'a': 20, 'b': 10, 'c': 7}, {'a': 25, 'b': 30, 'c': 9}])
In [3]: df.assign(a=lambda df: df.a / 2)
Out[3]:
a b c
0 7.5 15 5
1 10.0 10 7
2 12.5 30 9
In [4]: df
Out[4]:
a b c
0 15 15 5
1 20 10 7
2 25 30 9
Note that the function will be passed the whole dataframe, not only the column you want to modify, so you will need to make sure you select the right column in your lambda.
If you are really concerned about the execution speed of your apply function and you have a huge dataset to work on, you could use swifter for faster execution. Here is an example of swifter on a pandas dataframe:
import pandas as pd
import swifter

def fnc(m):
    return m * 3 + 4

df = pd.DataFrame({"m": [1,2,3,4,5,6], "c": [1,1,1,1,1,1], "x": [5,3,6,2,6,1]})

# apply a self-created function to a single column in pandas
df["y"] = df.m.swifter.apply(fnc)
This will enable all your CPU cores to compute the result, hence it will be much faster than a normal apply. Try it and let me know if it becomes useful for you.
Let me try a complex computation using datetime, taking nulls and empty strings into account. Here I subtract 30 years from a datetime column, using apply() together with a lambda, and convert the datetime format. The clause if x != '' else x takes care of all empty strings and nulls.
import datetime

df['Date'] = df['Date'].fillna('')
df['Date'] = df['Date'].apply(
    lambda x: (datetime.datetime.strptime(str(x), '%m/%d/%Y')
               - datetime.timedelta(days=30*365)).strftime('%Y%m%d')
    if x != '' else x)
Make a copy of your dataframe first if you need to modify a column
Many answers here suggest modifying some column and assigning the new values to the old column. It is common to get the SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. warning. This happens when your dataframe was created from another dataframe but is not a proper copy.
To silence this warning, make a copy and assign back.
df = df.copy()
df['a'] = df['a'].apply('add', other=1)
apply() only needs the name of the function
You can invoke a function by simply passing its name to apply() (no need for lambda). If your function needs additional arguments, you can pass them either as keyword arguments or pass the positional arguments as args=. For example, suppose you have file paths in your dataframe and you need to read files in these paths.
def read_data(path, sep=',', usecols=[0]):
    return pd.read_csv(path, sep=sep, usecols=usecols)

df = pd.DataFrame({'paths': ['../x/yz.txt', '../u/vw.txt']})

df['paths'].apply(read_data)                           # you don't need lambda
df['paths'].apply(read_data, args=(',', [0, 1]))       # pass the positional arguments to args=
df['paths'].apply(read_data, sep=',', usecols=[0, 1])  # pass as keyword arguments
Don't apply a function, call the appropriate method directly
It's almost never ideal to apply a custom function on a column via apply(). Because apply() is syntactic sugar for a Python loop with pandas overhead, it's often slower than calling the same function in a list comprehension, never mind calling optimized pandas methods. Almost all numeric operators can be applied directly on the column, and there are corresponding methods for all of them.
# add 1 to every element in column `a`
df['a'] += 1
# for every row, subtract column `a` value from column `b` value
df['c'] = df['b'] - df['a']
If you want to apply a function that has if-else blocks, then you should probably be using numpy.where() or numpy.select() instead. It is much, much faster. If you have anything larger than 10k rows of data, you'll notice the difference right away.
For example, if you have a custom function similar to func() below, then instead of applying it on the column, you could operate directly on the columns and return values using numpy.select().
def func(row):
    if row == 'a':
        return 1
    elif row == 'b':
        return 2
    else:
        return -999
# instead of applying a `func` to each row of a column, use `numpy.select` as below
import numpy as np
conditions = [df['col'] == 'a', df['col'] == 'b']
choices = [1, 2]
df['new'] = np.select(conditions, choices, default=-999)
As you can see, numpy.select() has very minimal syntax difference from an if-else ladder; you only need to separate the conditions and the choices into separate lists. For other options, check out this answer.
