When I use apply with a user-defined function in pandas, it looks like Python is creating an additional index level. How can I get rid of it? Here is my code:
import numpy as np
import pandas as pd

def fnc(group):
    x = group.C.values
    out = x[np.where(x < 0)]  # keep only the negative values
    return pd.DataFrame(out)

data = pd.DataFrame({'A': np.random.randint(1, 3, 10),
                     'B': 3,
                     'C': np.random.normal(0, 1, 10)})
data.groupby(by=['A', 'B']).apply(fnc).reset_index()
There is this weird level_2 index created. Is there a way to avoid creating it when running my function?
A B level_2 0
0 1 3 0 -1.054134802
1 1 3 1 -0.691996447
2 2 3 0 -1.068693768
3 2 3 1 -0.080342046
4 2 3 2 -0.181869799
As such, there is no way to avoid level_2 appearing. This is because the result of your function is a dataframe with several rows in it: pandas understands that you want to broadcast these rows across the grouped keys, but it keeps the index of the returned dataframe as an additional level to guarantee coherent output. So explicitly dropping level=-1 at the end of your processing is expected.
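For example, a minimal sketch of that cleanup on the data above (the first reset_index drops the per-group row index, the second turns A and B back into columns):
data.groupby(by=['A', 'B']).apply(fnc).reset_index(level=-1, drop=True).reset_index()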
If you want to avoid resetting that extra index but still do some post-processing, another way would be to call transform instead of apply, and have fnc return the entire group vector with np.nan in place of the results to exclude. Then your dataframe will not have an extra level, but you'll need to call dropna() afterwards.
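A minimal sketch of that alternative, assuming the same data as above (C_neg is a hypothetical column name):
data['C_neg'] = data.groupby(by=['A', 'B'])['C'].transform(lambda x: x.where(x < 0))  # NaN where C >= 0
data.dropna(subset=['C_neg'])  # drop the excluded rows; no extra index level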
I've got a pandas dataframe, and I'm trying to fill a new column with, for each row, the maximum of two consecutive values from another column. I'm trying to build a loop to do this and save on computation, as I realise I could probably do it with more lines of code.
for x in jac_input.index:
    jac_output['Max Load'][x] = jac_input[['load'][x], ['load'][x+1]].max()
However, I keep getting this error during the comparison
IndexError: list index out of range
Any ideas as to where I'm going wrong here? Any help would be appreciated!
Many things are wrong with your current code.
When you do ['abc'][x], x can only take the value 0 and this will return 'abc', as you are indexing a one-element list. Not at all what you expect it to do (I imagine you meant to index the Series).
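You can see the pitfall in isolation (using the 'load' label from your code):
['load'][0]   # -> 'load', indexing a one-element list
['load'][1]   # -> IndexError: list index out of range, as in your traceback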
For your code to be valid, you should do something like:
import pandas as pd

jac_input = pd.DataFrame({'load': [1, 0, 3, 2, 5, 4]})
for x in jac_input.index:
    print(jac_input['load'].loc[x:x+1].max())  # max of the current and next value
output:
1
3
3
5
5
4
Also, when assigning, if you use jac_output['Max Load'][x] = ... you will likely encounter a SettingWithCopyWarning. You should rather use loc: jac_output.loc[x, 'Max Load'] = ...
But you do not need all that; use vectorized code instead!
You can perform rolling on the reversed Series (rolling windows look backwards, so reversing makes each window cover the current and the next value):
jac_output['Max Load'] = jac_input['load'][::-1].rolling(2, min_periods=1).max()[::-1]
Or using concat:
jac_output['Max Load'] = pd.concat([jac_input['load'], jac_input['load'].shift(-1)], axis=1).max(axis=1)  # pairwise max of each value and the next
output (without assignment):
0 1.0
1 3.0
2 3.0
3 5.0
4 5.0
5 4.0
dtype: float64
Let's say I have a dataframe A with a column called 'score'.
I can modify the 'score' attribute of the second row by doing:
tmp = A.loc[2]
tmp.score = some_new_value
A.loc[2] = tmp
But I can't do it like this:
A.loc[2].score = some_new_value
Why ?
It will be hard to reproduce your case, because Pandas does not guarantee, when using chained indexing, whether the operation will return a view or a copy of the dataframe.
When you access a "cell" of the dataframe by
A.loc[2].score
you are actually performing two steps: first .loc and then .score (which is essentially chained indexing). The pandas documentation has a nice section about this, "Returning a view versus a copy", in its indexing user guide.
The simplest way to prevent this is by consistently using .loc or .iloc to access the rows/columns you need and reassigning the value. Therefore, I would recommend always using either
A.loc[2, "score"] = some_new_value
or
A.at[2, "score"] = some_new_value
This kind of indexing + setting will be translated "under the hood" to:
A.loc.__setitem__((2, 'score'), some_new_value) # modifies A directly
instead of an unreliable chain of __getitem__ and __setitem__.
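For contrast, the chained version expands (conceptually) to:
A.loc.__getitem__(2).__setattr__('score', some_new_value)  # may act on a temporary copy
If that intermediate __getitem__ returns a copy of the row, the assignment never reaches A.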
Let's show an example:
import pandas as pd

dict_ = {'score': [1, 2, 3, 4, 5, 6], 'other': 'a'}
A = pd.DataFrame(dict_)
A
Dataframe:
score other
0 1 a
1 2 a
2 3 a
3 4 a
4 5 a
5 6 a
Now you can do the following, and the values are actually saved:
A.loc[2,'score'] = 'Heyyyy'
A
Dataframe:
score other
0 1 a
1 2 a
2 Heyyyy a
3 4 a
4 5 a
5 6 a
I have a very simple for loop problem and I haven't found a solution in any of the similar questions on Stack. I want to use a for loop to create values in a pandas dataframe. I want the values to be strings that contain a numerical index. I can make the correct value print, but I can't make this value get saved in the dataframe. I'm new to python.
# reproducible example
import pandas as pd
df1 = pd.DataFrame({'x':range(5)})
# for loop to add a row with an index
for i in range(5):
    print("data_{i}.txt".format(i=i)) # this prints the value that I want
    df1['file'] = "data_{i}.txt".format(i=i)
This loop prints the exact value that I want to put into the 'file' column of df1, but when I look at df1, every row only has the last value of the index.
x file
0 0 data_4.txt
1 1 data_4.txt
2 2 data_4.txt
3 3 data_4.txt
4 4 data_4.txt
I have tried using enumerate, but can't find a solution with this. I assume everyone will yell at me for posting a duplicate question, but I have not found anything that works and if someone points me to a solution that solves this problem, I'll happily remove this question.
There are better ways to create a DataFrame, but to answer your question:
Replace the last line in your code:
df1['file'] = "data_{i}.txt".format(i=i)
with:
df1.loc[i, 'file'] = "data_{0}.txt".format(i)
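In full, the corrected loop would look like this (a sketch, reusing the df1 from your example):
import pandas as pd

df1 = pd.DataFrame({'x': range(5)})
for i in range(5):
    df1.loc[i, 'file'] = "data_{0}.txt".format(i)  # writes one row per iteration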
For more information, read about the .loc here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html
On the same page, you can read about accessors like .at and .iloc as well.
You can use a list comprehension:
df1['file'] = ["data_{i}.txt".format(i=i) for i in range(5)]
print(df1)
Prints:
x file
0 0 data_0.txt
1 1 data_1.txt
2 2 data_2.txt
3 3 data_3.txt
4 4 data_4.txt
OR at the creation of the DataFrame:
df1 = pd.DataFrame({'x':range(5), 'file': ["data_{i}.txt".format(i=i) for i in range(5)]})
print(df1)
OR:
df1 = pd.DataFrame([{'x':i, 'file': "data_{i}.txt".format(i=i)} for i in range(5)])
print(df1)
I've found success with the .at method:
for i in range(5):
    print("data_{i}.txt".format(i=i)) # this prints the value that I want
    df1.at[i, 'file'] = "data_{i}.txt".format(i=i)
Returns:
x file
0 0 data_0.txt
1 1 data_1.txt
2 2 data_2.txt
3 3 data_3.txt
4 4 data_4.txt
When you assign a scalar to a dataframe column the way you do, using df['colname'] = 'val', it assigns the value across all rows.
That is why you are seeing only the last value.
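To see the broadcasting in isolation (a minimal illustration with the df1 above):
df1['file'] = "data_4.txt"  # a single scalar is broadcast to all 5 rows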
Change your code to:
import pandas as pd

df1 = pd.DataFrame({'x': range(5)})
to_assign = []
for i in range(5):
    print("data_{i}.txt".format(i=i)) # this prints the value that I want
    to_assign.append("data_{i}.txt".format(i=i))
# outside of the loop - only once - assign to all dataframe rows
df1['file'] = to_assign
As a thought, pandas has a great API for performing these types of operations without for loops.
You should start practicing it.
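For instance, a vectorized way to build the same column (a sketch based on the x column of df1):
df1['file'] = 'data_' + df1['x'].astype(str) + '.txt'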
I would like to check the value of the row above and see if it is the same as the current row. I found a great answer here: df['match'] = df.col1.eq(df.col1.shift()), such that col1 is what you are comparing.
However, when I tried it, I received a SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. warning. My col1 is a string. I know you can suppress warnings, but how would I check the same row above and make sure that I am not creating a copy of the dataframe? Even with the warning I do get my desired output, but I was curious if there exists a better way.
import pandas as pd

data = {'col1': ['a','a','a','b','b','c','c','c','d','d'],
        'week': [1,1,1,1,1,2,2,2,2,2]}
df = pd.DataFrame(data, columns=['col1','week'])
df['check_condition'] = 1

while sum(df.check_condition) != 0:
    for week in df.week:
        wk = df.loc[df.week == week]
        wk['match'] = wk.col1.eq(wk.col1.shift()) # <-- where the warning occurs
        # fix the repetitive value...which I have not done yet
        # for now just exit out of the while loop
        df.loc[df.week == week, 'check_condition'] = 0
You can't ignore a pandas SettingWithCopyWarning!
It's 100% telling you that your code is not going to work as intended, if at all. Stop, investigate and fix it. (It's not an ignorable thing you can filter out, like a pandas FutureWarning nagging about deprecation.)
Multiple issues with your code:
You're trying to iterate over a dataframe (but not with groupby()), take slices of it (the sub-dataframe wk, which is indeed a copy of a slice)...
then assign to the (nonexistent) new column wk['match']. This is bad; you shouldn't do this. (You could initialize df['match'] = np.nan, but it'd still be wrong to assign to the copy in wk)...
SettingWithCopyWarning is triggered when you try to assign to wk['match']. It's telling you wk is a copy of a slice from dataframe df, not df itself. Hence the message: A value is trying to be set on a copy of a slice from a DataFrame. That assignment would just get thrown away each time wk is overwritten by your loop, so even if you could force it to work on wk, it would be wrong. That's why SettingWithCopyWarning is a code smell: you shouldn't be making a copy of a slice of df in the first place.
Later on, you also try to assign to the column df['check_condition'] while iterating over df; that's also bad.
Solution:
df['check_condition'] = df['col1'].eq(df['col1'].shift()).astype(int)
df
col1 week check_condition
0 a 1 0
1 a 1 1
2 a 1 1
3 b 1 0
4 b 1 1
5 c 2 0
6 c 2 1
7 c 2 1
8 d 2 0
9 d 2 1
More generally, for more complicated code where you want to iterate over each group of a dataframe according to some grouping criteria, you'd use groupby() and split-apply-combine instead; a sketch follows below. Here:
- you're grouping rows by wk.col1.eq(wk.col1.shift()), i.e. whether the col1 value is unchanged from the preceding row,
- you want to set check_condition to 1 on the rows where it is unchanged,
- and 0 on the rows where the col1 value did change from the preceding row.
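A sketch of that pattern, with a hypothetical helper process_week mirroring your per-week slicing (note the explicit .copy(), which avoids the warning):
def process_week(wk):
    wk = wk.copy()  # work on an explicit copy, not a slice of df
    wk['match'] = wk['col1'].eq(wk['col1'].shift())
    return wk

df.groupby('week', group_keys=False).apply(process_week)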
But in this simpler case you can skip groupby() and do a direct assignment.
I couldn't find anything on SO about this. What I'm trying to do is generate 4 new columns on my existing dataframe by applying a separate function that takes 4 specific columns as inputs and returns 4 output columns that are not the 4 initial ones. However, the function requires me to slice the dataframe by conditions before usage. I have been using for loops and appending, but it is extremely slow. I was hoping there was a way to do a MapReduce-esque operation that would take my DataFrame, do a groupby, and apply a function I wrote separately.
The function has multiple outputs, so just imagine a function like this:
def func(a, b, c, d):
    return f(a), g(b), h(c), i(d)
where f,g,h,i are different functions performed on the inputs. I am trying to do something like:
import pandas as pd
df = pd.DataFrame({'a': range(10),
                   'b': range(10),
                   'c': range(10),
                   'd': range(10),
                   'e': [0,0,0,0,0,1,1,1,1,1]})
df.groupby('e').apply(lambda df['x1'], df['x2'], df['x3'], df['x4'] =
                      func(df['a'], df['b'], df['c'], df['d']))
Wondering if this is possible. If there are other functions in the library or more efficient ways to go about this, please do advise. Thanks.
EDIT: Here's a sample output
a b c d e f g h i
--------------------------
0 0 0 0 0 f1 g1 h1 i1
1 1 1 1 1 f2 g2 h2 i2
... and so on
The reason why I'd like this orientation of operations is due to the function's operations being reliant on structures within the data (hence the groupby) before performing the function. Previously, I obtained the unique values and iterated over them while slicing the dataframe up, before appending it to a new dataframe. Runs in quadratic time.
You could do something like this:
def f(data):
    data['a2'] = data['a']*2 # or whatever function/calculation you want
    data['b2'] = data['b']*3 # etc etc
    # e.g. data['g'] = g(data['b'])
    return data

df.groupby('e').apply(f)
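A hedged note: depending on your pandas version, apply here may add e as an extra index level to the result; passing group_keys=False keeps the original row index:
df.groupby('e', group_keys=False).apply(f)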