Python Pandas make calculation in single cell - python

I have a TYPE column
and a VOLUME column
What I'm looking to do is first check if the TYPE column == 'var1'.
If so, I would like to make a calculation in the VOLUME column.
So far I have something like this:
data.loc[data['TYPE'] == 'var1', ['VOLUME']] * 2
data.loc[data['TYPE'] == 'var1', ['VOLUME']] * 4
This seems to set the entire column that meets the condition to the last variable. So I end up with just two values.
Out:
4
4
4
4
8
8
8
Another option:
data['VOLUME'] = data.loc[data['TYPE'] == 'var1', ['VOLUME']] * 2
This works for the first condition but shows NaN for the second condition.
Then when I run:
data['VOLUME'] = data.loc[data['TYPE'] == 'var2', ['VOLUME']] * 4
The whole column shows as NaN.

Consider a simple example which demonstrates what is happening.
df = pd.DataFrame({'A': [1, 2, 3]})
df
A
0 1
1 2
2 3
Now, only values below 2 in column "A" are to be modified. So, try something like
df.loc[df.A < 2, 'A'] * 2
0 2
Name: A, dtype: int64
This series only has 1 row at index 0. If you try assigning this back, the implicit assumption is that the other index values are to be reset to NaN.
df.assign(A=df.loc[df.A < 2, 'A'] * 2)
A
0 2.0
1 NaN
2 NaN
What we want to do is modify only the rows we're interested in. This is best done with the in-place arithmetic assignment operator *=:
df.loc[df.A < 2, 'A'] *= 2
In your case, it is
data.loc[data['TYPE'] == 'var1', 'VOLUME'] *= 2
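A minimal sketch (with made-up TYPE/VOLUME data, not from the question) applying both of the question's conditions this way:
import pandas as pd

data = pd.DataFrame({'TYPE': ['var1', 'var1', 'var2', 'var2', 'var1'],
                     'VOLUME': [1, 2, 3, 4, 5]})

# Each *= only touches the rows selected by its mask; the rest stay as they were
data.loc[data['TYPE'] == 'var1', 'VOLUME'] *= 2
data.loc[data['TYPE'] == 'var2', 'VOLUME'] *= 4

print(data)
#    TYPE  VOLUME
# 0  var1       2
# 1  var1       4
# 2  var2      12
# 3  var2      16
# 4  var1      10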

You are really close. The problem is in how you are storing the result. This should work:
data.loc[data['TYPE'] == 'var1', ['VOLUME']] = data['VOLUME'] * 2

You can use *= with loc:
In [11]: df = pd.DataFrame([[1], [2]], columns=["A"])
In [12]: df
Out[12]:
A
0 1
1 2
In [13]: df.loc[df.A == 1, "A"] *= 3
In [14]: df
Out[14]:
A
0 3
1 2

Related

How to conditionally change pandas DataFrame values into f-strings?

I have a pandas DataFrame whose values I want to conditionally change into strings without looping over every value.
Example input:
In [1]: df = pd.DataFrame(data = [[1,2], [4,5]], columns = ['a', 'b'])
Out[2]:
a b
0 1 2
1 4 5
This is my best attempt, which doesn't work properly:
df['a'] = np.where(df['a'] < 3, f'string-{df["a"]}', df['a'])
In [1]: df
Out[2]:
a b
0 string0 1\n1 4\nName: a, dtype: int64 2
1 4 5
Desired output:
Out[2]:
a b
0 string-1 2
1 4 5
I am using np.where() since looping is not feasible due to the size of the actual DataFrame. The actual f-string I am using is also more complex and has two variables that include column names, but the problem is the same.
Are there other ways to conditionally change pandas values into f-strings without looping over each value?
You can use .map() together with an f-string, as follows:
df['a'] = df['a'].map(lambda x: f'string-{x}' if x < 3 else x)
Alternatively, you can also use .loc together with string concatenation, as follows:
df.loc[df['a'] < 3, 'a'] = 'string-' + df['a'].astype(str)
# OR
df['a'] = np.where(df['a'] < 3, 'string-' + df['a'].astype(str), df['a'])
Result:
print(df)
a b
0 string-1 2
1 4 5
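If the real f-string pulls from more than one column, one option (a sketch, assuming the second variable comes from column 'b') is to build the strings vectorised and keep them only where the condition holds, using Series.where:
import pandas as pd

df = pd.DataFrame(data=[[1, 2], [4, 5]], columns=['a', 'b'])

# Build the formatted strings from both columns, then keep them only where a < 3;
# Series.where keeps the original value wherever the condition is True.
strings = 'string-' + df['a'].astype(str) + '-' + df['b'].astype(str)
df['a'] = df['a'].where(df['a'] >= 3, strings)

print(df)
#             a  b
# 0  string-1-2  2
# 1           4  5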

Get indices of columns satisfying multiple conditions in new column with pandas

With the following dataframe as an example :
df = pd.DataFrame({'Sample':['X', 'Y', 'Z'], 'Base':[2, 10, 3], 'A':[0,5,100], 'C':[0,10,7]})
I would like to add a new column called df["indices"] with the indices of columns df["A"] and/or df["C"] provided they satisfy 2 conditions:
Must be greater than 5
df["A"]/df["Base"] or df["C"]/df["Base"] must be greater than or equal to 1
The resulting dataframe would be:
df = pd.DataFrame({'Sample':['X', 'Y', 'Z'], 'Base':[2, 10, 3], 'A':[0,5,100], 'C':[0,10,7], 'indices': ['','C','A,C']})
I can get True or False values for my first condition with df[['A','C']] > 5 but I cannot get it to work with my condition 2 which is based on another column in my dataframe. Getting the indices where I get True in a new column is yet another story. I imagine something with apply and get_loc or index but I cannot get it to work no matter how I try.
Let's create a boolean mask satisfying the two given conditions, then use DataFrame.dot on this mask to get the indices:
m = df[['A', 'C']].gt(5) & df[['A', 'C']].div(df['Base'], axis=0).ge(1)
df['indices'] = m.dot(m.columns + ',').str.rstrip(',')
Sample Base A C indices
0 X 2 0 0
1 Y 10 5 10 C
2 Z 3 100 7 A,C
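For copy-paste, a self-contained version of this approach using the question's data:
import pandas as pd

df = pd.DataFrame({'Sample': ['X', 'Y', 'Z'],
                   'Base': [2, 10, 3],
                   'A': [0, 5, 100],
                   'C': [0, 10, 7]})

# Boolean mask: value > 5 AND value / Base >= 1, evaluated per column
m = df[['A', 'C']].gt(5) & df[['A', 'C']].div(df['Base'], axis=0).ge(1)

# dot concatenates the column names wherever the mask is True
df['indices'] = m.dot(m.columns + ',').str.rstrip(',')
print(df)
#   Sample  Base    A   C indices
# 0      X     2    0   0
# 1      Y    10    5  10       C
# 2      Z     3  100   7     A,C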
You can use df.loc to assign values back to the column when any number of conditions are met. A simple approach would be to have 3 of these, each with your desired conditions. You could also probably chain together np.where to achieve the same thing if you wanted.
import pandas as pd
df = pd.DataFrame({'Sample':['X', 'Y', 'Z'],
'Base':[2, 10, 3],
'A':[0,5,100],
'C':[0,10,7]})
df.loc[(df['A'] / df['Base'] >= 1) & (df['C'] / df['Base'] >= 1), 'indices'] = 'A,C'
df.loc[(df['A'] / df['Base'] >= 1) & (df['C'] / df['Base'] < 1), 'indices'] = 'A'
df.loc[(df['A'] / df['Base'] < 1) & (df['C'] / df['Base'] >= 1), 'indices'] = 'C'
Output
Sample Base A C indices
0 X 2 0 0 NaN
1 Y 10 5 10 C
2 Z 3 100 7 A,C
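For what it's worth, the same branching can also be written in a single np.select call (a sketch that mirrors this answer's conditions, i.e. only the ratio check):
import numpy as np
import pandas as pd

df = pd.DataFrame({'Sample': ['X', 'Y', 'Z'],
                   'Base': [2, 10, 3],
                   'A': [0, 5, 100],
                   'C': [0, 10, 7]})

a_ok = df['A'] / df['Base'] >= 1
c_ok = df['C'] / df['Base'] >= 1

# np.select picks the first matching label per row; '' is the fallback
df['indices'] = np.select([a_ok & c_ok, a_ok, c_ok], ['A,C', 'A', 'C'], default='')
print(df)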

Reassigning Entries in a Column of Pandas DataFrame

My goal is to conditionally index a data frame and change the values in a column for these indexes.
I intend to look through column 'A' to find entries equal to 'a' and update their column 'B' with the word 'okay'.
import numpy as np
import pandas as pd

group = ['a']
df = pd.DataFrame({"A": ['a', 'b', 'a', 'a', 'c'], "B": [np.nan, np.nan, np.nan, np.nan, np.nan]})
>>>df
A B
0 a NaN
1 b NaN
2 a NaN
3 a NaN
4 c NaN
df[df['A'].apply(lambda x: x in group)]['B'].fillna('okay', inplace=True)
This gives me the following error:
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self._update_inplace(new_data)
Following the documentation (what I understood of it) I tried the following instead:
df[df['A'].apply(lambda x: x in group)].loc[:,'B'].fillna('okay', inplace=True)
I can't figure out why the reassignment of NaN to 'okay' is not occurring in place, and how this can be rectified.
Thank you.
Try this with a lambda.
First solution:
>>> df
A B
0 a NaN
1 b NaN
2 a NaN
3 a NaN
4 c NaN
Using a lambda with map or apply:
>>> df["B"] = df["A"].map(lambda x: "okay" if "a" in x else "NaN")
OR# df["B"] = df["A"].map(lambda x: "okay" if "a" in x else np.nan)
OR# df['B'] = df['A'].apply(lambda x: 'okay' if x == 'a' else np.nan)
>>> df
A B
0 a okay
1 b NaN
2 a okay
3 a okay
4 c NaN
Second solution:
>>> df
A B
0 a NaN
1 b NaN
2 a NaN
3 a NaN
4 c NaN
Another, fancier way: create a dictionary and apply it across the column with the map function:
>>> frame = {'a': "okay"}
>>> df['B'] = df['A'].map(frame)
>>> df
A B
0 a okay
1 b NaN
2 a okay
3 a okay
4 c NaN
Third solution:
This has already been posted by @d_kennetz, but just to keep everything together: you can also handle both columns (A & B) in one shot:
>>> df.loc[df.A == 'a', 'B'] = "okay"
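A quick check of that one-liner on the question's frame (a sketch, assuming NaN placeholders in B):
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": ['a', 'b', 'a', 'a', 'c'],
                   "B": [np.nan] * 5})

# Assign only where A equals 'a'; the other rows keep their NaN
df.loc[df.A == 'a', 'B'] = "okay"
print(df)
#    A     B
# 0  a  okay
# 1  b   NaN
# 2  a  okay
# 3  a  okay
# 4  c   NaN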
If I understand this correctly, you simply want to replace the value in a column for those rows matching a given condition (i.e. where column A belongs to a certain group, here with a single value 'a'). The following should do the trick:
import pandas as pd
group = ['a']
df = pd.DataFrame({"A": ['a','b','a','a','c'], "B": [None,None,None,None,None]})
print(df)
df.loc[df['A'].isin(group),'B'] = 'okay'
print(df)
What we're doing here is we're using the .loc filter, which just returns a view on the existing dataframe.
The first argument (df['A'].isin(group)) filters on the rows matching the given criterion. Notice you could use the equality operator (==) here, but not the in operator, so you have to use .isin() instead.
Second argument selects only the 'B' column.
Then you just assign the desired value (which is a constant).
Here's the output:
A B
0 a None
1 b None
2 a None
3 a None
4 c None
A B
0 a okay
1 b None
2 a okay
3 a okay
4 c None
If you wanted to do fancier stuff, you might do the following:
import pandas as pd
group = ['a', 'b']
df = pd.DataFrame({"A": ['a','b','a','a','c'], "B": [None,None,None,None,None]})
df.loc[df['A'].isin(group),'B'] = "okay, it was " + df['A']+df['A']
print(df)
Which gives you:
A B
0 a okay, it was aa
1 b okay, it was bb
2 a okay, it was aa
3 a okay, it was aa
4 c None

Pandas: setting an element of a new column to a list (iterable) raises ValueError: setting an array element with a sequence

I want to, at the same time, create a new column in a pandas dataframe and set its first value to a list.
I want to transform this dataframe
df = pd.DataFrame.from_dict({'a':[1,2],'b':[3,4]})
a b
0 1 3
1 2 4
into this one
a b c
0 1 3 [2,3]
1 2 4 NaN
I tried:
df.loc[0, 'c'] = [2,3]
df.loc[0, 'c'] = np.array([2,3])
df.loc[0, 'c'] = [[2,3]]
df.at[0,'c'] = [2,3]
df.at[0,'d'] = [[2,3]]
It does not work.
How should I proceed?
If the first element of a series is a list, then the series must be of type object (not the most efficient for numerical computations). This should work, however.
df = df.assign(c=None)
df.loc[0, 'c'] = [2, 3]
>>> df
a b c
0 1 3 [2, 3]
1 2 4 None
If you really need the remaining values of column c to be NaNs instead of None, use this:
df.loc[1:, 'c'] = np.nan
The problem seems to have something to do with the type of the c column. If you convert it to type 'object', you can use iat, loc or set_value to set a cell as a list.
df2 = (
    df.assign(c=np.nan)
      .assign(c=lambda x: x.c.astype(object))
)
df2.set_value(0, 'c', [2, 3])
Out[86]:
a b c
0 1 3 [2, 3]
1 2 4 NaN
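Note that set_value was deprecated and has since been removed from pandas. On recent versions, a sketch of the same idea uses .at after making the column object dtype:
import numpy as np
import pandas as pd

df = pd.DataFrame.from_dict({'a': [1, 2], 'b': [3, 4]})

# Make the new column object dtype first, so a single cell can hold a list
df['c'] = np.nan
df['c'] = df['c'].astype(object)
df.at[0, 'c'] = [2, 3]

print(df)
#    a  b       c
# 0  1  3  [2, 3]
# 1  2  4     NaN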

Python Compare rows in two columns and write a result conditionally

I've been searching for quite a while, not getting anywhere close to what I wanted to do...
I have a pandas dataframe in which I want to compare the value of column A to B and write a 1 or 0 in a new column if A and B are equal.
I could write an ugly for loop, but I know this is not very pythonic.
I'm pretty sure there is a way to do this with apply() but I'm not getting anywhere.
I'd like to be able to compare columns that contain integers as well as columns containing strings.
Thanks in advance for your help.
If df is a Pandas DataFrame, then
df['newcol'] = (df['A'] == df['B']).astype('int')
For example,
In [20]: df = pd.DataFrame({'A': [1,2,'foo'], 'B': [1,99,'foo']})
In [21]: df
Out[21]:
A B
0 1 1
1 2 99
2 foo foo
In [22]: df['newcol'] = (df['A'] == df['B']).astype('int')
In [23]: df
Out[23]:
A B newcol
0 1 1 1
1 2 99 0
2 foo foo 1
df['A'] == df['B'] returns a boolean Series:
In [24]: df['A'] == df['B']
Out[24]:
0 True
1 False
2 True
dtype: bool
astype('int') converts the True/False values to integers -- 0 for False and 1 for True.
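If you ever want labels other than 0 and 1, np.where does the same comparison with arbitrary fill values (a small sketch, not part of the original answer):
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 'foo'], 'B': [1, 99, 'foo']})

# 1 where the columns match, 0 otherwise -- swap in any labels you like
df['newcol'] = np.where(df['A'] == df['B'], 1, 0)
print(df)
#      A    B  newcol
# 0    1    1       1
# 1    2   99       0
# 2  foo  foo       1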
