Why does a DF sometimes automatically update, other times does NOT? - python

So, I'm unclear why doing certain operations on a DF updates it right away, but other times it does not update it unless you re-use the old name or use a new df variable name.
Doesn't this make it really confusing where the last 'real' change is?

First of all, a df behaves like a list in Python in the sense that assignment does not copy it: if you bind it to a second name and change it through that name, the original df changes too. To answer your question, you need to know that some ways of updating a df actually write to a copy of that df (pandas flags this with a warning letting you know you may be writing to a copy), so the original data might not change. The most reliable way to change data using pandas is either:
df.at[a_cell] = some_data
or:
df.loc[some_rows, some_columns] = some_data
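For example (the df and its columns are invented for illustration), chained indexing may write to a temporary copy, while a single .loc or .at call writes to the original:
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})

# Chained assignment: the first [] may return a copy, so the write
# can silently go nowhere (pandas emits SettingWithCopyWarning)
# df[df['a'] > 1]['b'] = 0

# Reliable: one .loc call selects and assigns on df itself
df.loc[df['a'] > 1, 'b'] = 0

# Equally reliable for a single cell
df.at[0, 'b'] = 99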

Related

When should I worry about using copy() with a pandas DataFrame?

I'm more of an R user and have recently been "switching" to Python. So that means I'm way more used to the R way of dealing with things. In Python, the whole concept of mutability and passing by assignment is kind of hard to grasp at first.
I can easily understand the issues that mutability may lead to when using lists or dictionaries. However, when using pandas DataFrames, I find that mutability is especially difficult to understand.
For example: let's say I have a DataFrame (df) with some raw data. I want to use a function that receives df as a parameter and outputs a modified version of that df, but keeping the original df. If I wrote the function, maybe I can inspect it and be assured that it makes a copy of the input before applying any manipulation. However, if it's a function I don't know (let's say, from some package), should I always pass my input df as df.copy()?
In my case, I'm trying to write some custom function that transforms a df using a WoE encoder. The data parameter is a DataFrame with feature columns and a label column. It kinda looks like this:
import category_encoders

def my_function(data, var_list, label_column):
    # var_list = cols to be encoded
    encoder = category_encoders.WOEEncoder(cols=var_list)
    fit_encoder = encoder.fit(
        X=data[var_list],
        y=data[label_column]
    )
    new_data = fit_encoder.transform(
        data[var_list]
    )
    new_data[label_column] = data[label_column]
    return new_data
So should I be passing data[var_list].copy() instead of data[var_list]? Should I assume that every function that receives a df will modify it in place, or will it return a different object? I mean, how can I be sure that fit_encoder.transform won't modify data itself? I also learned that pandas sometimes produces views and sometimes not, depending on the operation you apply to whatever subset of the df. So I feel like there's too much uncertainty surrounding operations on DataFrames.
The exercise at https://www.statology.org/pandas-copy-dataframe/ shows that if you don't use .copy() when manipulating a subset of your dataframe, you could change values in your original dataframe as well. This is not what you want, so you should use .copy() when passing your dataframe to your function.
The example on the link above really illustrates this concept well (and no, I'm not affiliated with their site lol, I was just searching for this answer myself).
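A minimal sketch of the idea (the function and data are invented for illustration):
import pandas as pd

def add_ratio(data):
    # mutates its argument: the caller's dataframe changes too
    data['x'] = data['x'] / data['x'].max()
    return data

df = pd.DataFrame({'x': [1.0, 2.0, 4.0]})

out = add_ratio(df.copy())   # pass a copy: df stays untouched
print(df['x'].tolist())      # [1.0, 2.0, 4.0]

out = add_ratio(df)          # pass df itself: df is modified
print(df['x'].tolist())      # [0.25, 0.5, 1.0]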

Python variable, in a Jupyter notebook, modified despite no further action to it

I'm not going to share the direct code since I feel the length of it would bog down anyone's analysis of my main question (though I can share it if absolutely need be). It's best I summarize it.
To put it generally, in a previous cell I established a pandas dataframe - let's say we call it original_df here. In the next cell, I initialize a new variable temp_df = original_df so that I may manipulate the same data while keeping the original dataframe intact. At no point in the code that follows do I ever assign anything to, or even use, original_df in any way, yet when I'm done manipulating temp_df as much as I need to... subsequently checking original_df shows that it has changed in all the same ways that temp_df did.
Any ideas why this would happen? Some type of environment issue, perhaps? Re-running each of the code cells that created original_df is rather inconvenient, and putting all of the code in one cell would negate the whole point of using a Jupyter Notebook (given the visuals along the way). I find it incredibly bizarre that this is happening at all, but is there any way to explicitly force a "freeze" on the original_df variable while its temp_df copy is being manipulated?
You could try to copy the dataframe with the library copy:
import copy
temp_df = copy.copy(original_df)
temp_df = original_df only creates a new reference to original_df.
You need to call copy() on your original dataframe: temp_df = original_df.copy() to create a new dataframe.
More info here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.copy.html
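A minimal demonstration of the difference:
import pandas as pd

original_df = pd.DataFrame({'x': [1, 2, 3]})

temp_df = original_df            # no copy: both names point to one object
temp_df.loc[0, 'x'] = 99
print(original_df.loc[0, 'x'])   # 99 -- the "original" changed too

temp_df = original_df.copy()     # deep copy by default
temp_df.loc[1, 'x'] = -1
print(original_df.loc[1, 'x'])   # 2 -- original untouched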

Loop Interpolate and then update values

I have a dataset containing both timeseries and cross-sectional data. There are missing values in some columns that I want to handle through linear interpolation.
I tried the code below, but a warning about caveats appeared. The code still worked, but I'm worried that it might stop working at some point. Is there a better way to do this?
for i in merged_df.country_code.unique():
    merged_df[merged_df.country_code == i].interpolate(inplace=True)
The warning is below:
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation:
https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
The problem, as indicated in the doc, is that merged_df[merged_df.country_code == i] is a slice of your merged_df. Once you chain it with an inplace operation, pandas cannot guarantee whether the operation works on the original dataframe or on a copy of it. It's safer to do a copy and reassign with loc:
for i in merged_df.country_code.unique():
    mask = merged_df.country_code == i
    merged_df.loc[mask] = merged_df.loc[mask].interpolate()
This is, IMHO, one of the reasons why inplace=True is not a good practice.
That said, in this case, you can bypass the for loop with a groupby:
merged_df = merged_df.groupby('country_code').interpolate()
or:
merged_df = merged_df.groupby('country_code').apply(lambda x: x.interpolate())
The problem is that pandas can't guarantee whether the object you're assigning the new data to is a temporary copy or the original object. While it will probably work, it's better to use
merged_df.loc[merged_df["country_code"] == i] = merged_df.loc[merged_df["country_code"] == i].interpolate()
as this guarantees that you write to the original object.
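A self-contained version of the masked-.loc fix, using invented toy data (a single numeric column with gaps per country):
import numpy as np
import pandas as pd

merged_df = pd.DataFrame({
    'country_code': ['US', 'US', 'US', 'FR', 'FR', 'FR'],
    'gdp': [1.0, np.nan, 3.0, 10.0, np.nan, 30.0],
})

for i in merged_df.country_code.unique():
    mask = merged_df.country_code == i
    # interpolate within each country and write back through .loc
    merged_df.loc[mask, 'gdp'] = merged_df.loc[mask, 'gdp'].interpolate()

print(merged_df.gdp.tolist())  # [1.0, 2.0, 3.0, 10.0, 20.0, 30.0]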

Dataframe .where() method returns None [duplicate]

In the pandas library many times there is an option to change the object inplace such as with the following statement...
df.dropna(axis='index', how='all', inplace=True)
I am curious what is being returned as well as how the object is handled when inplace=True is passed vs. when inplace=False.
Are all operations modifying self when inplace=True? And when inplace=False, is a new object created immediately, such as new_df = self, and then new_df returned?
If you are trying to close a question where someone should use inplace=True and hasn't, consider replace() method not working on Pandas DataFrame instead.
When inplace=True is passed, the data is modified in place (the method returns nothing), so you'd use:
df.an_operation(inplace=True)
When inplace=False is passed (this is the default value, so it isn't necessary), the method performs the operation and returns a copy of the object, so you'd use:
df = df.an_operation(inplace=False)
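For example, with dropna (the None return value is the giveaway):
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan]})

out = df.dropna()              # returns a new DataFrame; df is unchanged
print(out.shape, df.shape)     # (1, 1) (2, 1)

ret = df.dropna(inplace=True)  # mutates df and returns None
print(ret, df.shape)           # None (1, 1)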
In pandas, is inplace = True considered harmful, or not?
TLDR; Yes, yes it is.
inplace, contrary to what the name implies, often does not prevent copies from being created, and (almost) never offers any performance benefits
inplace does not work with method chaining
inplace can lead to SettingWithCopyWarning if used on a DataFrame column, and may prevent the operation from going through, leading to hard-to-debug errors in code
The pain points above are common pitfalls for beginners, so removing this option will simplify the API.
I don't advise setting this parameter as it serves little purpose. See this GitHub issue which proposes the inplace argument be deprecated api-wide.
It is a common misconception that using inplace=True will lead to more efficient or optimized code. In reality, there are absolutely no performance benefits to using inplace=True. Both the in-place and out-of-place versions create a copy of the data anyway, with the in-place version automatically assigning the copy back.
inplace=True is a common pitfall for beginners. For example, it can trigger the SettingWithCopyWarning:
df = pd.DataFrame({'a': [3, 2, 1], 'b': ['x', 'y', 'z']})
df2 = df[df['a'] > 1]
df2['b'].replace({'x': 'abc'}, inplace=True)
# SettingWithCopyWarning:
# A value is trying to be set on a copy of a slice from a DataFrame
Calling a function on a DataFrame column with inplace=True may or may not work. This is especially true when chained indexing is involved.
As if the problems described above aren't enough, inplace=True also hinders method chaining. Contrast the working of
result = df.some_function1().reset_index().some_function2()
As opposed to
temp = df.some_function1()
temp.reset_index(inplace=True)
result = temp.some_function2()
The former lends itself to better code organization and readability.
Another supporting claim is that the API for set_axis was recently changed such that inplace default value was switched from True to False. See GH27600. Great job devs!
The way I use it is
# Have to assign back to dataframe (because it is a new copy)
df = df.some_operation(inplace=False)
Or
# No need to assign back to dataframe (because it is on the same copy)
df.some_operation(inplace=True)
CONCLUSION:
if inplace is False
Assign to a new variable;
else
No need to assign
The inplace parameter:
df.dropna(axis='index', how='all', inplace=True)
in Pandas and in general means:
1. Pandas creates a copy of the original data
2. ... does some computation on it
3. ... assigns the results to the original data.
4. ... deletes the copy.
As you can read further below, there can still be good reason to use this parameter, i.e. in-place operations, but we should avoid it if we can, as it generates more issues:
1. Your code will be harder to debug (this is the possible problem SettingWithCopyWarning exists to warn you about)
2. It conflicts with method chaining
So is there ever a case when we should use it?
Definitely yes. If we use pandas or any tool for handling huge datasets, we can easily face a situation where some big data consumes our entire memory.
To avoid this unwanted effect, we can use techniques like method chaining:
# assumes: import numpy as np, and a `wine` DataFrame loaded earlier
(
    wine.rename(columns={"color_intensity": "ci"})
    .assign(color_filter=lambda x: np.where((x.hue > 1) & (x.ci > 7), 1, 0))
    .query("alcohol > 14 and color_filter == 1")
    .sort_values("alcohol", ascending=False)
    .reset_index(drop=True)
    .loc[:, ["alcohol", "ci", "hue"]]
)
which makes our code more compact (though harder to interpret and debug too) and consumes less memory, as the chained methods work on each other's return values, resulting in only one copy of the input data. We can see clearly that we will end up with 2 x the original data's memory consumption after this operation (the input plus the result).
Or we can use the inplace parameter: peak memory consumption during the operation is still 2 x the original data, but afterwards it drops back to 1 x the original data, which, as anyone who has ever worked with huge datasets knows, can be a big benefit.
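A rough way to observe the footprint difference yourself (the data and numbers are illustrative):
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(1_000_000, 4), columns=list('abcd'))
print(df.memory_usage().sum())  # ~32,000,000 bytes of float64 data

# Out-of-place: if both names stay alive, 2 x the data stays in memory
df2 = df.sort_values('a')

# In-place: the sort still copies internally (similar peak usage),
# but afterwards only one copy remains bound to df
df.sort_values('a', inplace=True)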
Final conclusion:
Avoid using the inplace parameter unless you are working with huge data, and be aware of its possible issues if you do use it.
Save it to the same variable
data["column01"].where(data["column01"]< 5, inplace=True)
Save it to a separate variable
data["column02"] = data["column01"].where(data["column1"]< 5)
But, you can always overwrite the variable
data["column01"] = data["column01"].where(data["column1"]< 5)
FYI: by default, inplace = False
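A quick illustration of the out-of-place form (data invented): where() keeps the values that satisfy the condition and masks the rest with NaN.
import pandas as pd

data = pd.DataFrame({'column01': [1, 4, 7, 2, 9]})
data['column02'] = data['column01'].where(data['column01'] < 5)
print(data['column02'].tolist())  # [1.0, 4.0, nan, 2.0, nan]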
When trying to make changes to a Pandas dataframe using a function, we use 'inplace=True' if we want to commit the changes to the dataframe.
Therefore, the first line in the following code changes the name of the first column in 'df' to 'Grades'. We need to call the dataframe if we want to see the resulting dataframe.
df.rename(columns={0: 'Grades'}, inplace=True)
df
We use 'inplace=False' (this is also the default value) when we don't want to commit the changes but just print the resulting dataframe. So, in effect, a copy of the original dataframe with the committed changes is printed without altering the original dataframe.
Just to be more clear, the following codes do the same thing:
#Code 1
df.rename(columns={0: 'Grades'}, inplace=True)
#Code 2
df = df.rename(columns={0: 'Grades'}, inplace=False)
Yes, in pandas many functions have the parameter inplace, but by default it is set to False.
So, when you do df.dropna(axis='index', how='all', inplace=False), pandas understands that you do not want to change the original DataFrame, and therefore it creates a new copy for you with the required changes.
But when you change the inplace parameter to True, it is equivalent to saying explicitly: I do not want a new copy
of the DataFrame; instead, make the changes on the given DataFrame.
This forces pandas not to create a new DataFrame.
But you can also avoid using the inplace parameter by reassigning the result to the original DataFrame:
df = df.dropna(axis='index', how='all')
inplace=True is used depending on whether you want to make changes to the original df or not.
df.drop_duplicates()
will only return a new dataframe with the duplicates dropped, without making any changes to df.
df.drop_duplicates(inplace=True)
will drop values and make the changes directly to df.
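A quick check of the difference:
import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2]})

out = df.drop_duplicates()        # new DataFrame; df is unchanged
print(len(df), len(out))          # 3 2

df.drop_duplicates(inplace=True)  # mutates df and returns None
print(len(df))                    # 2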
Hope this helps.:)
inplace=True makes the function impure. It changes the original dataframe and returns None. In that case, you break the DSL chain.
Because most dataframe functions return a new dataframe, you can use the DSL conveniently. Like
df.sort_values().rename().to_csv()
A function call with inplace=True returns None, and the DSL chain is broken. For example,
df.sort_values(inplace=True).rename().to_csv()
will throw AttributeError: 'NoneType' object has no attribute 'rename'
Something similar happens with Python's built-in sort and sorted: lst.sort() returns None and sorted(lst) returns a new list.
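The analogy in plain Python:
lst = [3, 1, 2]
print(sorted(lst))  # [1, 2, 3] -- new list; lst is unchanged
print(lst.sort())   # None -- sorts lst in place, like inplace=True
print(lst)          # [1, 2, 3]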
Generally, do not use inplace=True unless you have a specific reason to do so. When you find yourself writing reassignment code like df = df.sort_values(), try attaching the function call to the DSL chain instead, e.g.
df = pd.read_csv().sort_values()...
Speaking from my experience with pandas, I would like to answer.
The inplace=True argument means that the changes are committed to the dataframe itself.
E.g.
df.dropna(axis='index', how='all', inplace=True)
changes the same dataframe (pandas finds the rows where all entries are NaN and drops them).
If we try
df.dropna(axis='index', how='all')
pandas returns a new dataframe with the changes we made but does not modify the original dataframe 'df'.
If you don't use inplace=True, or you use inplace=False, you basically get back a copy.
So for instance:
testdf.sort_values(inplace=True, by='volume', ascending=False)
will alter the structure with the data sorted in descending order.
then:
testdf2 = testdf.sort_values( by='volume', ascending=True)
will make testdf2 a copy. The values will all be the same, but the sort will be reversed and you will have an independent object.
then given another column, say LongMA and you do:
testdf2.LongMA = testdf2.LongMA - 1
the LongMA column in testdf will have the original values and testdf2 will have the decremented values.
It is important to keep track of the difference as the chain of calculations grows and the copies of dataframes have their own lifecycle.
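A runnable sketch of the above (the column values are invented):
import pandas as pd

testdf = pd.DataFrame({'volume': [5, 1, 3], 'LongMA': [10, 20, 30]})

testdf.sort_values(by='volume', ascending=False, inplace=True)  # alters testdf

testdf2 = testdf.sort_values(by='volume', ascending=True)       # independent copy

testdf2['LongMA'] = testdf2['LongMA'] - 1                       # only testdf2 changes
print(testdf['LongMA'].tolist())   # [10, 30, 20] -- original values
print(testdf2['LongMA'].tolist())  # [19, 29, 9]  -- decremented values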

adding row to pandas dataframe from series

Let df be a pandas.DataFrame object. Let se be a pandas.Series object.
The columns of df are the indices of se.
I wish to add a new row to df from se and set the index as 555.
The command I use is df.loc[555]=se.
It works, it seems? But I get the "A value is trying to be set on a copy of a slice from a DataFrame" error/warning.
I get it, I've read the documentation.
Two questions though:
Should I really care about the warning?
What is the recommended way to go about this so that the warning doesn't pop up?
Thanks.
Should I really care about the warning?
It depends. In your example you are first referring to a subset of the data (df.loc[555]) and then setting values on this subset. Almost always pandas makes a copy of the original data, and setting values on the copy will not modify the original DataFrame, hence the warning.
In some cases pandas will make a view of the original data (e.g. if all columns have the same dtype); setting values there will work as expected.
If all columns in your dataframe have the same dtype (e.g. all floats) and you are using iloc on a single existing index, then you are getting a view and the warning can be ignored. If you are setting on a non-existing index, then you are setting with enlargement; this is also the expected behaviour and the warning can likewise be ignored.
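For the case in the question, setting with enlargement looks like this (data invented):
import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0], 'b': [3.0, 4.0]})
se = pd.Series({'a': 9.0, 'b': 9.0})

# 555 is not an existing label, so this is setting-with-enlargement:
# a new row is appended, which is the expected behaviour
df.loc[555] = se
print(df.shape)  # (3, 2)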
What is the recommended way to go about this so that the warning doesn't pop up?
Your use of df.loc given the info you provided seems alright. You have several alternatives to avoid the warning:
First, update your version of pandas. The situation with these false positive warnings has been improving with each version; I don't get any in 0.15.1.
Second, If you are sure that what you are doing is the intended behaviour then you can just silence the warning globally with:
pd.set_option('chained_assignment', None)
Finally, in some cases you can set the is_copy property of your resulting object effectively disabling the checks on this object, for example:
df_temp = df.loc[555]
df_temp.is_copy = False
Note that this last option can only be used on existing indexes, on new indexes this raises a KeyError.
