df = pd.DataFrame([["a", "d"], ["", ""], ["", "3"]],
                  columns=["a", "b"])
df

   a  b
0  a  d
1
2     3
I'm looking to do a vectorized string concatenation with an if statement like this:
df["c"] = df["a"] + "()" + df["b"] if df["a"].item != "" else ""
But it doesn't work: df["a"] is a whole Series, so the Python-level if/else can't branch per row (and .item() only works on a single-element Series). Is it possible to do this without an apply or lambda that goes through each row? In a vectorized operation pandas works on whole columns at a time, which makes it faster...
Desired output:

df

   a  b     c
0  a  d  a()d
1
2     3
Try this, using np.where():

import numpy as np

df = pd.DataFrame([["a", "d"], ["", ""], ["", "3"]],
                  columns=["a", "b"])
df["c"] = np.where(df["a"] != "", df["a"] + "()" + df["b"], "")
print(df)
output:

   a  b     c
0  a  d  a()d
1
2     3
IIUC you could use mask to concatenate both columns, separated by some string using str.cat, whenever a condition holds:

df['c'] = df.a.mask(df.a.ne(''), df.a.str.cat(df.b, sep='()'))
print(df)

   a  b     c
0  a  d  a()d
1
2     3
Since nobody has mentioned it yet, you can also use the apply method:
df['c'] = df.apply(lambda r: r['a']+'()'+r['b'] if r['a']!='' else '', axis=1)
If anyone checks performances please comment below :)
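For anyone curious about the relative performance, here is a small self-timed sketch. The frame size is an illustrative assumption and absolute timings vary by machine, but np.where is typically much faster than a row-wise apply:

```python
import time

import numpy as np
import pandas as pd

# a larger, hypothetical frame so timing differences are visible
df = pd.DataFrame({"a": ["a", ""] * 10_000, "b": ["d", "3"] * 10_000})

start = time.perf_counter()
c_where = np.where(df["a"] != "", df["a"] + "()" + df["b"], "")
t_where = time.perf_counter() - start

start = time.perf_counter()
c_apply = df.apply(lambda r: r["a"] + "()" + r["b"] if r["a"] != "" else "", axis=1)
t_apply = time.perf_counter() - start

# both approaches produce identical results
assert (c_where == c_apply.to_numpy()).all()
print(f"np.where: {t_where:.4f}s, apply: {t_apply:.4f}s")
```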
For each row of a dataframe I want to repeat the row n times, inside an iterrows loop, into a new dataframe. Basically I'm doing this:

df = pd.DataFrame(
    [
        ("abcd", "abcd", "abcd")  # create your data here, be consistent in the types.
    ],
    ["A", "B", "C"]  # add your column names here
)
n_times = 2
for index, row in df.iterrows():
    new_df = row.loc[row.index.repeat(n_times)]
new_df
and I get the following output:
0 abcd
0 abcd
1 abcd
1 abcd
2 abcd
2 abcd
Name: C, dtype: object
while it should be:

      A     B     C
0  abcd  abcd  abcd
1  abcd  abcd  abcd
How should I proceed to get the desired output?
The df.T attribute in Pandas is used to transpose a DataFrame. Transposing a DataFrame means to flip its rows and columns, so that the rows become columns and the columns become rows.
I don't think you defined your df the right way: the second positional argument of the DataFrame constructor is the index, not the columns.

df = pd.DataFrame(data=[["abcd", "abcd", "abcd"]],
                  columns=["A", "B", "C"])
n_times = 2
new_df = pd.concat([df] * n_times, axis=0, ignore_index=True)

Is that how it should look?
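There is also a loop-free idiom for repeating each row n times: Index.repeat builds the repeated row labels and .loc materializes them. A sketch on a one-row frame built with explicit columns=:

```python
import pandas as pd

df = pd.DataFrame(data=[["abcd", "abcd", "abcd"]],
                  columns=["A", "B", "C"])
n_times = 2

# repeat each row label n_times, select those rows, then renumber the index
new_df = df.loc[df.index.repeat(n_times)].reset_index(drop=True)
print(new_df)
#       A     B     C
# 0  abcd  abcd  abcd
# 1  abcd  abcd  abcd
```

This also scales to frames with many distinct rows, where the pd.concat approach would duplicate the whole frame rather than each row in place.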
Given a DataFrame, how can I add a new level to the columns based on an iterable given by the user? In other words, how do I append a new level?
The question How to simply add a column level to a pandas dataframe shows how to add a new level given a single value, so it doesn't cover this case.
Here is the expected behaviour:
>>> df = pd.DataFrame(0, columns=["A", "B"], index=range(2))
>>> df
   A  B
0  0  0
1  0  0
>>> append_level(df, ["C", "D"])
   A  B
   C  D
0  0  0
1  0  0

The solution should also work with MultiIndex columns, so

>>> append_level(append_level(df, ["C", "D"]), ["E", "F"])
   A  B
   C  D
   E  F
0  0  0
1  0  0
If the columns are not a MultiIndex, you can just do:

df.columns = pd.MultiIndex.from_arrays([df.columns.tolist(), ['C', 'D']])

If they are a MultiIndex:

if isinstance(df.columns, pd.MultiIndex):
    df.columns = pd.MultiIndex.from_arrays([*df.columns.levels, ['E', 'F']])

MultiIndex.levels gives a FrozenList of the level values, which you unpack to form the list of arrays passed to from_arrays. (Note that .levels holds the unique, sorted values of each level rather than the per-column values, so this only lines up in simple cases like this one; get_level_values is the general tool.)
def append_level(df, new_level):
    new_df = df.copy()
    new_df.columns = pd.MultiIndex.from_tuples(zip(*zip(*df.columns), new_level))
    return new_df

(Caveat: when the columns are a flat Index, zip(*df.columns) iterates the characters of each label, so this trick only works for single-character column names; with MultiIndex columns it is fine.)
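Here is a self-contained sketch of a more general append_level variant, using get_level_values so it handles flat and MultiIndex columns alike, including multi-character column names (the zip-based version above relies on iterating the column labels themselves):

```python
import pandas as pd

def append_level(df, new_level):
    # collect the existing levels one by one; works whether df.columns
    # is a flat Index (nlevels == 1) or a MultiIndex
    arrays = [df.columns.get_level_values(i) for i in range(df.columns.nlevels)]
    new_df = df.copy()
    new_df.columns = pd.MultiIndex.from_arrays(arrays + [list(new_level)])
    return new_df

df = pd.DataFrame(0, columns=["A", "B"], index=range(2))
once = append_level(df, ["C", "D"])
twice = append_level(once, ["E", "F"])
print(once.columns.tolist())   # [('A', 'C'), ('B', 'D')]
print(twice.columns.tolist())  # [('A', 'C', 'E'), ('B', 'D', 'F')]
```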
Given the following data:
data = pd.DataFrame(
    {
        "A": ["a", "a", "b", "b"],
        "B": ["x", "y", "p", "q"],
        "C": ["one", "two", "one", "two"],
    }
)
which looks as:

   A  B    C
0  a  x  one
1  a  y  two
2  b  p  one
3  b  q  two
I would like to create the following:
data_out = pd.DataFrame(
    {
        "A": ["a", "b"],
        "one": ["x", "p"],
        "two": ["y", "q"],
    }
)

which looks as:

   A one two
0  a   x   y
1  b   p   q
I'm aware that I could do something along the lines of:
d_piv = pd.pivot_table(
    data,
    index=["A"],
    columns=["C"],
    values=["B"],
    aggfunc=lambda x: x,
).reset_index()

which gives:

   A    B
C      one two
0  a     x   y
1  b     p   q
from which the columns could be cleaned up, but I'm wondering how I'd go about solving this using melt and unstack?
I have tried:

print(data.set_index("C", append=True).unstack())

which gives:

       A         B
C    one  two  one  two
0      a  NaN    x  NaN
1    NaN    a  NaN    y
2      b  NaN    p  NaN
3    NaN    b  NaN    q
The NaN values aren't wanted here, so I could instead try:

data.index = [0, 0, 1, 1]
data.set_index(["A", "C"], append=True).unstack(-1).reset_index(level=-1)

which gives:

   A   B
C    one two
0  a   x   y
1  b   p   q

So that's closer, but it still feels as though there are some unnecessary bits, particularly hard-coding the index like that.
Edit

The solution

df.set_index('A').pivot(columns='C', values='B').reset_index().rename_axis(None, axis=1)

is good, but I am wondering whether unstack can be used here instead of pivot?
First, set the A column as the index, then use df.pivot. To get the exact output we have to reset the index and rename the columns axis.

(df.set_index("A").pivot(columns="C", values="B")
   .reset_index()
   .rename_axis(None, axis=1))

   A one two
0  a   x   y
1  b   p   q

Using df.unstack:

df.set_index(["A", "C"])["B"].unstack().reset_index().rename_axis(None, axis=1)

   A one two
0  a   x   y
1  b   p   q
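One caveat with both pivot and unstack: they raise if an (A, C) pair is duplicated. A common workaround, sketched here on a hypothetical frame with one repeated pair, is to add a groupby/cumcount counter to the index first:

```python
import pandas as pd

data = pd.DataFrame(
    {
        "A": ["a", "a", "b", "b", "a"],
        "B": ["x", "y", "p", "q", "z"],
        "C": ["one", "two", "one", "two", "one"],  # ("a", "one") appears twice
    }
)

# number the repeats within each (A, C) group so the index to unstack is unique
counter = data.groupby(["A", "C"]).cumcount()
out = (data.set_index(["A", "C", counter])["B"]
       .unstack("C")
       .reset_index(level=1, drop=True)  # drop the helper counter level
       .reset_index()
       .rename_axis(None, axis=1))
print(out)
```

Repeated pairs come out as extra rows for the same A value, with NaN where a group has fewer repeats than another.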
My dataset currently has 1 column with different opportunity types. I have another column with a dummy variable as to whether or not the opportunity is a first time client or not.
import pandas as pd
df = pd.DataFrame(
    {"col_opptype": ["a", "b", "c", "d"],
     "col_first": [1, 0, 1, 0]}
)
I would like to create new categories within col_opptype based on col_first, where only one opportunity type (i.e. a) is split out by its corresponding col_first value. I.e.,

col_opptype = {a_first, a_notfirst, b, c, d}
col_first = {1, 0}

where:

a_first is when col_opptype = a and col_first = 1
a_notfirst is when col_opptype = a and col_first = 0
desired output:

  col_opptype  col_first
0     a_first          1
1           b          0
2           c          1
3  a_notfirst          0
I am working on Python and am a relatively new user so I hope the above makes sense. Thank you!
This should solve your problem :)

Please add your code attempt and at least an example dataframe definition to your next question, so we do not have to invent examples to help you. An exact example of what the final result should look like would also have been great :)

Edit: I adjusted the code to your changed question.
import pandas as pd

df = pd.DataFrame(
    {"col_opptype": ["a", "b", "c", "d"],
     "col_first": [1, 0, 1, 0]}
)

def is_first_opptype(opptype: str, wanted_type: str, first: int):
    if first and opptype == wanted_type:
        return opptype + "_first"
    elif not first and opptype == wanted_type:
        return opptype + "_notfirst"
    else:
        return opptype

df["col_opptype"] = df.apply(lambda x: is_first_opptype(x["col_opptype"], "a",
                                                        x["col_first"]), axis=1)
print(df)
output:

  col_opptype  col_first
0     a_first          1
1           b          0
2           c          1
3           d          0
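The same mapping can also be done without apply, for example with np.select; a vectorized sketch of the function above:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"col_opptype": ["a", "b", "c", "d"],
     "col_first": [1, 0, 1, 0]}
)

is_a = df["col_opptype"] == "a"
# conditions are checked in order; the default keeps the value unchanged
df["col_opptype"] = np.select(
    [is_a & (df["col_first"] == 1), is_a & (df["col_first"] == 0)],
    [df["col_opptype"] + "_first", df["col_opptype"] + "_notfirst"],
    default=df["col_opptype"],
)
print(df)
```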
Note: my question isn't this one, but something a little more subtle.
Say I have a dataframe that looks like this
df =

   A  B  C
0  3  3  1
1  2  1  9
df[["A", "B", "D"]] will raise a KeyError.
Is there a pandas way to make df[["A", "B", "D"]] behave like df[["A", "B"]]? (I.e., just select the columns that exist.)
One solution might be
good_columns = list(set(df.columns).intersection(["A", "B", "D"]))
mydf = df[good_columns]
But this has two problems:
It's clunky and inelegant.
The ordering of mydf.columns could be ["A", "B"] or ["B", "A"].
You can use filter, which will just ignore any keys that are not present:

df.filter(["A", "B", "D"])

   A  B
0  3  3
1  2  1
You can use a conditional list comprehension:

>>> target_cols = ['A', 'B', 'D']
>>> df[[c for c in target_cols if c in df]]
   A  B
0  3  3
1  2  1
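Another option is Index.intersection, which keeps only the requested columns that actually exist. A sketch; with the default sort=False, recent pandas versions keep the frame's own column order, but that detail is worth checking on your version:

```python
import pandas as pd

df = pd.DataFrame({"A": [3, 2], "B": [3, 1], "C": [1, 9]})

# intersect the frame's columns with the wanted labels; missing ones drop out
cols = df.columns.intersection(["A", "B", "D"])
print(df[cols])
```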