Imagine I have the following DataFrames on Pandas:
In [7]: A = pd.DataFrame([['foo'],['bar'],['quz'],['baz']], columns=['key'])
In [8]: A['value'] = 'None'
In [9]: A
Out[9]:
key value
0 foo None
1 bar None
2 quz None
3 baz None
In [10]: B = pd.DataFrame([['foo',5],['bar',6],['quz',7]], columns=['key','value'])
In [11]: B
Out[11]:
key value
0 foo 5
1 bar 6
2 quz 7
In [12]: pd.merge(A,B, on='key', how='outer')
Out[12]:
key value_x value_y
0 foo None 5
1 bar None 6
2 quz None 7
3 baz None NaN
But what I want is (avoiding the repeat column basically):
key value
0 foo 5
1 bar 6
2 quz 7
3 baz NaN
I suppose I can take the output and drop the _x value and rename the _y, but that seems like overkill. In SQL this would be trivial.
EDIT:
John has recommended using:
In [1]: A.set_index('key', inplace=True)
A.update(B.set_index('key'), join='left', overwrite=True)
A.reset_index(inplace=True)
This works and does what I asked for.
In the example you are merging two dataframes that share a column name: one contains strings ('None'), the other integers. Pandas doesn't know which column's values you want to keep and which should be replaced, so it creates a column for each.
You can use update instead
In [10]: A.update(B, join='left', overwrite=True)
In [11]: A
Out[11]:
key value
0 foo 5
1 bar 6
2 quz 7
3 baz NaN
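If you would rather stay with merge, one way to avoid the duplicate column entirely (a small sketch, using A and B as defined in the question and assuming A's value column holds only placeholders) is to merge just A's key column:
pd.merge(A[['key']], B, on='key', how='left')
#    key  value
# 0  foo    5.0
# 1  bar    6.0
# 2  quz    7.0
# 3  baz    NaN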
Another solution would be to just state the values that you want for the given column:
In [15]: A.loc[B.index, 'value'] = B.value
In [16]: A
Out[16]:
key value
0 foo 5
1 bar 6
2 quz 7
3 baz NaN
Personally I prefer the second solution because I know exactly what is happening, but the first is probably closer to what you are looking for in your question.
EDIT:
If the indices don't match, I'm not quite sure how to make this happen. Hence I would suggest making them match:
In [1]: A.set_index('key', inplace=True)
A.update(B.set_index('key'), join='left', overwrite=True)
A.reset_index(inplace=True)
It may be that there is a better way to do this, but I don't believe pandas has a way to perform this operation outright.
The second solution can also be used with the updated index:
In [24]: A.set_index('key', inplace=True)
A.loc[B.key, 'value'] = B.value.tolist()
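Yet another idiom that avoids the duplicate columns altogether is a keyed lookup with map (a hedged sketch, using A and B as defined in the question; keys missing from B simply become NaN):
A['value'] = A['key'].map(B.set_index('key')['value'])
#    key  value
# 0  foo    5.0
# 1  bar    6.0
# 2  quz    7.0
# 3  baz    NaN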
What's the difference between:
pandas df.loc[:,('col_a','col_b')]
and
df.loc[:,['col_a','col_b']]
The link below doesn't mention the latter, though it works. Do both pull a view? Does the first pull a view and the second pull a copy? Love learning Pandas.
http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
Thanks
If your DataFrame has a simple column index, then there is no difference.
For example,
In [8]: df = pd.DataFrame(np.arange(12).reshape(4,3), columns=list('ABC'))
In [9]: df.loc[:, ['A','B']]
Out[9]:
A B
0 0 1
1 3 4
2 6 7
3 9 10
In [10]: df.loc[:, ('A','B')]
Out[10]:
A B
0 0 1
1 3 4
2 6 7
3 9 10
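A quick way to convince yourself that the two spellings agree here (a small check on the df just built):
df.loc[:, ('A','B')].equals(df.loc[:, ['A','B']])   # True: both select the same two columns here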
But if the DataFrame has a MultiIndex, there can be a big difference:
df = pd.DataFrame(np.random.randint(10, size=(5,4)),
                  columns=pd.MultiIndex.from_arrays([['foo']*2+['bar']*2,
                                                     list('ABAB')]),
                  index=pd.MultiIndex.from_arrays([['baz']*2+['qux']*3,
                                                   list('CDCDC')]))
# foo bar
# A B A B
# baz C 7 9 9 9
# D 7 5 5 4
# qux C 5 0 5 1
# D 1 7 7 4
# C 6 4 3 5
In [27]: df.loc[:, ('foo','B')]
Out[27]:
baz C 9
D 5
qux C 0
D 7
C 4
Name: (foo, B), dtype: int64
In [28]: df.loc[:, ['foo','B']]
KeyError: 'MultiIndex Slicing requires the index to be fully lexsorted tuple len (1), lexsort depth (0)'
The KeyError is saying that the MultiIndex has to be lexsorted. If we do that, then we still get a different result:
In [29]: df.sort_index(axis=1).loc[:, ('foo','B')]
Out[29]:
baz C 9
D 5
qux C 0
D 7
C 4
Name: (foo, B), dtype: int64
In [30]: df.sort_index(axis=1).loc[:, ['foo','B']]
Out[30]:
foo
A B
baz C 7 9
D 7 5
qux C 5 0
D 1 7
C 6 4
Why is that? df.sort_index(axis=1).loc[:, ('foo','B')] is selecting the column where the first column level equals foo and the second column level is B.
In contrast, df.sort_index(axis=1).loc[:, ['foo','B']] is selecting the columns where the first column level is either foo or B. With respect to the first column level, there are no B columns, but there are two foo columns.
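If you do want just that one ('foo', 'B') column but as a one-column DataFrame rather than a Series, one option (a small sketch on the example frame above) is to wrap the full tuple in a list:
df.sort_index(axis=1).loc[:, [('foo', 'B')]]   # one-column DataFrame keyed by the full tuple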
I think the operating principle with Pandas is that if you use df.loc[...] as
an expression, you should assume df.loc may be returning a copy or a view. The Pandas docs do not specify any rules about which you should expect.
However, if you make an assignment of the form
df.loc[...] = value
then you can trust Pandas to alter df itself.
The reason why the documentation warns about the distinction between views and copies is so that you are aware of the pitfall of using chained assignments of the form
df.loc[...][...] = value
Here, Pandas evaluates df.loc[...] first, which may be a view or a copy. Now if it is a copy, then
df.loc[...][...] = value
is altering a copy of some portion of df, and thus has no effect on df itself. To add insult to injury, the effect on the copy is lost as well: since there are no references to the copy, there is no way to access it after the assignment statement completes, and (at least in CPython) it is soon garbage collected.
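As a small illustration of that pitfall (a hedged sketch with made-up data; the exact warning you see depends on your pandas version):
import pandas as pd
df = pd.DataFrame({'A': [1, -2, 3], 'B': [10, 20, 30]})
df[df['A'] > 0]['B'] = 0        # chained assignment: may act on a copy, df is typically left unchanged
df.loc[df['A'] > 0, 'B'] = 0    # single .loc assignment: reliably alters df itself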
I do not know of a practical fool-proof a priori way to determine if df.loc[...] is going to return a view or a copy.
However, there are some rules of thumb which may help guide your intuition (but note that we are talking about implementation details here, so there is no guarantee that Pandas needs to behave this way in the future):
If the resultant NDFrame can not be expressed as a basic slice of the
underlying NumPy array, then it probably will be a copy. Thus, a selection of arbitrary rows or columns will lead to a copy. A selection of sequential rows and/or sequential columns (which may be expressed as a slice) may return a view.
If the resultant NDFrame has columns of different dtypes, then df.loc
will again probably return a copy.
However, there is an easy way to determine a posteriori whether x = df.loc[...] is a view: simply see if changing a value in x affects df. If it does, x is a view; if not, x is a copy.
Let's say I have a DataFrame that looks like this:
a b c d e f g
1 2 3 4 5 6 7
4 3 7 1 6 9 4
8 9 0 2 4 2 1
How would I go about deleting every column besides a and b?
This would result in:
a b
1 2
4 3
8 9
I would like a way to delete these using a simple line of code that says, delete all columns besides a and b, because let's say hypothetically I have 1000 columns of data.
Thank you.
In [48]: df.drop(df.columns.difference(['a','b']), axis=1, inplace=True)
In [49]: df
Out[49]:
a b
0 1 2
1 4 3
2 8 9
or:
In [55]: df = df.loc[:, df.columns.intersection(['a','b'])]
In [56]: df
Out[56]:
a b
0 1 2
1 4 3
2 8 9
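One practical reason to reach for intersection/difference rather than a plain df[['a','b']] is that they do not raise if a requested column is absent (a small sketch, with 'z' as a hypothetical column that does not exist in df):
df.loc[:, df.columns.intersection(['a', 'z'])]   # returns just column 'a'
# df[['a', 'z']]                                 # would raise a KeyError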
P.S. Please be aware that the most idiomatic Pandas way to do that was already proposed by @Wen:
df = df[['a','b']]
or
df = df.loc[:, ['a','b']]
Another option to add to the mix. I prefer this approach for readability.
df = df.filter(['a', 'b'])
Where the first positional argument is items=[]
Bonus
You can also use a like argument or regex to filter.
Helpful if you have a set of columns like ['a_1','a_2','b_1','b_2']
You can do
df = df.filter(like='b_')
and end up with ['b_1','b_2']
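The regex form mentioned above might look like this (a small sketch with the same hypothetical column names):
df = df.filter(regex='^b_')   # also ends up with ['b_1','b_2']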
Pandas documentation for filter.
There are multiple solutions:
df = df[['a','b']] #1
df = df[list('ab')] #2
df = df.loc[:,df.columns.isin(['a','b'])] #3
df = pd.DataFrame(data=df.eval('a,b').T,columns=['a','b']) #4 (PS: I do not recommend this method, but it is still a way to achieve this)
Hey, what you are looking for is:
df = df[["a","b"]]
You will receive a dataframe which only contains the columns a and b.
If you are keeping more columns than you are dropping, put a "~" before the .isin statement to select every column except the ones you list:
df = df.loc[:, ~df.columns.isin(['a','b'])]
If you have more than two columns that you want to drop, let's say 20 or 30, you can still just list the columns you want to keep and drop the difference. Make sure that you also specify the axis value.
keep_list = ["a","b"]
df = df.drop(df.columns.difference(keep_list), axis=1)
I'm using Pandas in a Jupyter notebook. I have a dataframe, result_df, containing a column _text. I'm trying to filter out rows satisfying a certain condition (specifically, ones where the number of words in result_df['_text'] is 0).
When I start, I have this:
len(result_df)
and I get back:
49708
Then I do this:
result_df[result_df['_text'].apply(textstat.lexicon_count) != 0]
In the notebook, I see a huge dataframe with this at the bottom:
49701 rows × 5 columns
However, when I run:
len(result_df)
I get back:
49708
So now I'm very confused: it looks like I've removed 7 rows but the len function disagrees...
Any clarification would be awesome!
Thanks!
Overwriting will help. Use this line of code:
result_df = result_df[result_df['_text'].apply(textstat.lexicon_count) != 0]
len(result_df)
What you have done is simply computed a new, filtered data frame using boolean indexing and displayed it; the original data frame was not changed. As an example:
In [108]: df
Out[108]:
colx coly name
0 1 5 foo
1 2 6 foo
2 3 7 bar
3 4 8 bar
In [109]: len(df)
Out[109]: 4
Now, index to find all rows with colx > 3:
In [110]: df[df['colx'] > 3]
Out[110]:
colx coly name
3 4 8 bar
In [111]: len(df[df['colx'] > 3])
Out[111]: 1
However, if you print out the original df:
In [112]: df
Out[112]:
colx coly name
0 1 5 foo
1 2 6 foo
2 3 7 bar
3 4 8 bar
If you want the name to refer to the filtered data frame, you need to explicitly reassign it:
result_df = result_df[result_df['_text'].apply(textstat.lexicon_count) != 0]
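If you specifically want to modify result_df in place rather than rebind the name, one alternative (a hedged sketch, reusing textstat.lexicon_count from the question) is to drop the offending rows:
mask = result_df['_text'].apply(textstat.lexicon_count) == 0
result_df.drop(result_df[mask].index, inplace=True)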
There appears to be a quirk with the pandas merge function. It considers NaN values to be equal, and will merge NaNs with other NaNs:
>>> foo = DataFrame([
['a',1,2],
['b',4,5],
['c',7,8],
[np.NaN,10,11]
], columns=['id','x','y'])
>>> bar = DataFrame([
['a',3],
['c',9],
[np.NaN,12]
], columns=['id','z'])
>>> pd.merge(foo, bar, how='left', on='id')
Out[428]:
id x y z
0 a 1 2 3
1 b 4 5 NaN
2 c 7 8 9
3 NaN 10 11 12
[4 rows x 4 columns]
This is unlike any RDBMS I've seen; normally missing values are treated agnostically and are not joined together as if they were equal. This is especially problematic for datasets with sparse data (every NaN will be merged to every other NaN, resulting in a huge DataFrame!).
Is there a way to ignore missing values during a merge without first slicing them out?
You could exclude values from bar (and indeed foo if you wanted) where id is null during the merge. Not sure it's what you're after, though, as they are sliced out.
(I've assumed from your left join that you're interested in retaining all of foo, but only want to merge the parts of bar that match and are not null.)
foo.merge(bar[pd.notnull(bar.id)], how='left', on='id')
Out[11]:
id x y z
0 a 1 2 3
1 b 4 5 NaN
2 c 7 8 9
3 NaN 10 11 NaN
If you want to preserve the NaNs from both tables without slicing them out, you could use the outer join method as follows:
pd.merge(foo, bar.dropna(subset=['id']), how='outer', on='id')
It basically returns the union of foo and bar
If you do not need the NaN rows in either the left or the right DataFrame, use:
pd.merge(foo.dropna(subset=['id']), bar.dropna(subset=['id']), how='left', on='id')
Otherwise, if you do need the NaN rows in the left DataFrame, use:
pd.merge(foo, bar.dropna(subset=['id']), how='left', on='id')
Another approach, which also keeps all rows if performing an outer join:
foo['id'] = foo.id.fillna('missing')
pd.merge(foo, bar, how='left', on='id')
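If you go the sentinel route from the last snippet, you may also want to restore the NaNs afterwards (a sketch; the sentinel value is assumed not to occur among the real ids):
sentinel = 'missing'
merged = pd.merge(foo.fillna({'id': sentinel}), bar, how='left', on='id')
merged['id'] = merged['id'].replace(sentinel, np.nan)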
How do I access the corresponding groupby dataframe in a groupby object by the key?
With the following groupby:
rand = np.random.RandomState(1)
df = pd.DataFrame({'A': ['foo', 'bar'] * 3,
'B': rand.randn(6),
'C': rand.randint(0, 20, 6)})
gb = df.groupby(['A'])
I can iterate through it to get the keys and groups:
In [11]: for k, gp in gb:
    ...:     print('key=' + str(k))
    ...:     print(gp)
key=bar
A B C
1 bar -0.611756 18
3 bar -1.072969 10
5 bar -2.301539 18
key=foo
A B C
0 foo 1.624345 5
2 foo -0.528172 11
4 foo 0.865408 14
I would like to be able to access a group by its key:
In [12]: gb['foo']
Out[12]:
A B C
0 foo 1.624345 5
2 foo -0.528172 11
4 foo 0.865408 14
But when I try doing that with gb[('foo',)] I get this weird pandas.core.groupby.DataFrameGroupBy object thing which doesn't seem to have any methods that correspond to the DataFrame I want.
The best I could think of is:
In [13]: def gb_df_key(gb, key, orig_df):
    ...:     ix = gb.indices[key]
    ...:     return orig_df.iloc[ix]
    ...: gb_df_key(gb, 'foo', df)
Out[13]:
A B C
0 foo 1.624345 5
2 foo -0.528172 11
4 foo 0.865408 14
but this is kind of nasty, considering how nice pandas usually is at these things.
What's the built-in way of doing this?
You can use the get_group method:
In [21]: gb.get_group('foo')
Out[21]:
A B C
0 foo 1.624345 5
2 foo -0.528172 11
4 foo 0.865408 14
Note: This doesn't require creating an intermediary dictionary / copy of every subdataframe for every group, so will be much more memory-efficient than creating the naive dictionary with dict(iter(gb)). This is because it uses data-structures already available in the groupby object.
You can select different columns using the groupby slicing:
In [22]: gb[["A", "B"]].get_group("foo")
Out[22]:
A B
0 foo 1.624345
2 foo -0.528172
4 foo 0.865408
In [23]: gb["C"].get_group("foo")
Out[23]:
0 5
2 11
4 14
Name: C, dtype: int64
Wes McKinney (pandas' author) in Python for Data Analysis provides the following recipe:
groups = dict(list(gb))
which returns a dictionary whose keys are your group labels and whose values are DataFrames, i.e.
groups['foo']
will yield what you are looking for:
A B C
0 foo 1.624345 5
2 foo -0.528172 11
4 foo 0.865408 14
Rather than
gb.get_group('foo')
I prefer using gb.groups
df.loc[gb.groups['foo']]
Because this way you can choose multiple columns as well. For example:
df.loc[gb.groups['foo'],('A','B')]
gb = df.groupby(['A'])
gb_groups = gb.groups
If you are only interested in selected groups, inspect gb_groups.keys() and put the desired keys into the following key_list:
gb_groups.keys()
key_list = [key1, key2, key3 and so on...]
for key, values in gb_groups.items():
    if key in key_list:
        print(df.loc[values], "\n")
I was looking for a way to sample a few members of the GroupBy object - had to address the posted question to get this done.
import random
# create groupby object based on some_key column
grouped = df.groupby('some_key')
# pick N group keys at random
sampled_df_i = random.sample(list(grouped.indices), N)
# grab the groups
df_list = [grouped.get_group(df_i) for df_i in sampled_df_i]
# optionally - turn it all back into a single dataframe object
sampled_df = pd.concat(df_list, axis=0, join='outer')
df.groupby('A').get_group('foo')
is equivalent to:
df[df['A'] == 'foo']
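For completeness, with a multi-column groupby the key passed to get_group is a tuple (a small sketch on the example df above):
gb2 = df.groupby(['A', 'C'])
gb2.get_group(('foo', 5))   # the row where A == 'foo' and C == 5 in the example data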