Fastest way to copy columns from one DataFrame to another using pandas? - python

I have a large DataFrame (1 million+ records) that I use to store the core of my data (like a database), and a smaller DataFrame (1 to 2,000 records) from which I copy a few of the columns into the large one at each time step of my program, which can run for several thousand time steps. Both DataFrames are indexed the same way, by an id column.
The code I'm using is:
df_large.loc[new_ids, core_cols] = df_small.loc[new_ids, core_cols]
Where core_cols is a list of about 10 fields that I'm copying over and new_ids are the ids from the small DataFrame. This code works fine, but it is the slowest part of my code by an order of magnitude. I just wanted to know if there is a faster way to merge the data of the two DataFrames together.
I tried merging the data each time with the merge function, but the process took far too long, which is why I have gone to creating a larger DataFrame that I update in order to improve the speed.

There is nothing inherently slow about using .loc to set with an alignable frame, though it does go through a bit of code to cover a lot of cases, so it's probably not ideal to have in a tight loop. FYI, this example is slightly different from the second example.
In [1]: import numpy as np
In [2]: import pandas as pd
In [3]: from pandas import DataFrame
In [4]: df = DataFrame(1.,index=list('abcdefghij'),columns=[0,1,2])
In [5]: df
Out[5]:
0 1 2
a 1 1 1
b 1 1 1
c 1 1 1
d 1 1 1
e 1 1 1
f 1 1 1
g 1 1 1
h 1 1 1
i 1 1 1
j 1 1 1
[10 rows x 3 columns]
In [6]: df2 = DataFrame(0,index=list('afg'),columns=[1,2])
In [7]: df2
Out[7]:
1 2
a 0 0
f 0 0
g 0 0
[3 rows x 2 columns]
In [8]: df.loc[df2.index,df2.columns] = df2
In [9]: df
Out[9]:
0 1 2
a 1 0 0
b 1 1 1
c 1 1 1
d 1 1 1
e 1 1 1
f 1 0 0
g 1 0 0
h 1 1 1
i 1 1 1
j 1 1 1
[10 rows x 3 columns]
Here's an alternative. It may or may not fit your data pattern. If the updates (your small frame) are pretty much independent this would work (in other words, you are not updating the big frame, then picking out a new sub-frame, then updating, etc. - if that is your pattern, then using .loc is about right).
Instead of updating the big frame, update the small frame with the columns from the big frame, e.g.:
In [10]: df = DataFrame(1.,index=list('abcdefghij'),columns=[0,1,2])
In [11]: df2 = DataFrame(0,index=list('afg'),columns=[1,2])
In [12]: needed_columns = df.columns.difference(df2.columns)
In [13]: df2[needed_columns] = df.reindex(index=df2.index,columns=needed_columns)
In [14]: df2
Out[14]:
1 2 0
a 0 0 1
f 0 0 1
g 0 0 1
[3 rows x 3 columns]
In [15]: df3 = DataFrame(0,index=list('cji'),columns=[1,2])
In [16]: needed_columns = df.columns.difference(df3.columns)
In [17]: df3[needed_columns] = df.reindex(index=df3.index,columns=needed_columns)
In [18]: df3
Out[18]:
1 2 0
c 0 0 1
j 0 0 1
i 0 0 1
[3 rows x 3 columns]
And concat everything together when you want (they are kept in a list in the meantime; or, per my comments below, these sub-frames could be moved to external storage when created, then read back before this concatenating step).
In [19]: pd.concat([df.reindex(index=df.index.difference(df2.index).difference(df3.index)), df2, df3]).reindex_like(df)
Out[19]:
0 1 2
a 1 0 0
b 1 1 1
c 1 0 0
d 1 1 1
e 1 1 1
f 1 0 0
g 1 0 0
h 1 1 1
i 1 0 0
j 1 0 0
[10 rows x 3 columns]
The beauty of this pattern is that it is easily extended to using an actual database (or, much better, an HDFStore) to store the 'database', then creating/updating sub-frames as needed, and writing out to a new store when finished.
I use this pattern all of the time, though with Panels actually.
perform a computation on a sub-set of the data and write each to a separate file
then at the end read them all in and concat (in memory), and write out a gigantic new file. The concat step could be done all at once in memory, or if truly a large task, then can be done iteratively.
I am able to use multiple processes to perform my computations AND write each individual Panel to a separate file, as they are all completely independent. The only dependent part is the concat.
This is essentially a map-reduce pattern.
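For illustration, here is a minimal sketch of that pattern, assuming an HDFStore (which requires the PyTables package); the chunking, keys, file name and compute_update function are made-up placeholders, not part of the original answer:
import numpy as np
import pandas as pd

# dummy 'big' frame split into independent sub-frames (stand-ins for the real work units)
big = pd.DataFrame(np.random.randn(1000, 3), columns=list('abc'))
chunks = np.array_split(big, 10)

def compute_update(chunk):
    # placeholder for the per-chunk computation
    return chunk * 2

# each sub-frame is computed and written out independently (parallelizable)
with pd.HDFStore('results.h5', mode='w') as store:
    for i, chunk in enumerate(chunks):
        store.put('chunk_{}'.format(i), compute_update(chunk))

# the only dependent step: read everything back and concat (all at once, or iteratively)
with pd.HDFStore('results.h5', mode='r') as store:
    combined = pd.concat([store[key] for key in store.keys()])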

Quickly: Copy columns a and b from the old df into a new df.
df1 = df[['a', 'b']]
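One caveat worth adding (my note, not part of the original answer): if you intend to modify df1 independently of df, an explicit copy makes that intent clear and avoids chained-assignment warnings in older pandas versions:
# explicit copy: changes to df1 are not tied to df in any way
df1 = df[['a', 'b']].copy()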

I've had to copy between large dataframes a fair bit. I'm using dataframes with realtime market data, which may not be what pandas is designed for, but this is my experience.
On my PC, copying a single datapoint with .at takes 15µs, with the df size making a negligible difference. .loc takes a minimum of 550µs and increases as the df gets larger: 3100µs to copy a single point from one 100000x2 df to another. .ix seems to be just barely faster than .loc.
For a single datapoint .at is very fast and is not impacted by the size of the dataframe, but it cannot handle ranges, so loops are required, and as such the time scaling is linear. .loc and .ix, on the other hand, are (relatively) very slow for single datapoints, but they can handle ranges and scale up better than linearly. However, unlike .at, they slow down significantly with respect to dataframe size.
Therefore when I'm frequently copying small ranges between large dataframes, I tend to use .at with a for loop, and otherwise I use .ix with a range.
for new_id in new_ids:
    for core_col in core_cols:
        df_large.at[new_id, core_col] = df_small.at[new_id, core_col]
Of course, to do it properly I'd go with Jeff's solution above, but it's nice to have options.
Caveats of .at: it doesn't work with ranges, and it doesn't work if the dtype is datetime (and maybe others).
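To check these trade-offs on your own data, a rough benchmarking sketch (the frame sizes, column names and repeat count here are arbitrary assumptions) could look like this:
import timeit
import numpy as np
import pandas as pd

core_cols = ['c0', 'c1', 'c2']
df_large = pd.DataFrame(np.random.randn(100000, 3), columns=core_cols)
df_small = pd.DataFrame(np.random.randn(50, 3), columns=core_cols,
                        index=np.random.choice(100000, 50, replace=False))
new_ids = df_small.index

def with_loc():
    # range-based copy in one shot
    df_large.loc[new_ids, core_cols] = df_small.loc[new_ids, core_cols]

def with_at():
    # scalar-by-scalar copy
    for new_id in new_ids:
        for core_col in core_cols:
            df_large.at[new_id, core_col] = df_small.at[new_id, core_col]

print('loc:', timeit.timeit(with_loc, number=100))
print('at :', timeit.timeit(with_at, number=100))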

Related

How to efficiently filter pandas dataframe

I have this huge dataset (100M rows) of consumer transactions that looks as follows:
df = pd.DataFrame({'id':[1, 1, 2, 2, 3],'brand':['a','b','a','a','c'], 'date': ['01-01-2020', '01-02-2020', '01-05-2019', '01-06-2019', '01-12-2018']})
For each row (each transaction), I would like to check if the same person (same "id") bought something in the past for a different brand. The resulting dataset should look like this:
id brand date check
0 1 a 01-01-2020 0
1 1 b 01-02-2020 1
2 2 a 01-05-2019 0
3 2 a 01-06-2019 0
4 3 c 01-12-2018 0
Now, my solution was:
def past_transaction(row):
    x = df[(df['id'] == row['id']) & (df['brand'] != row['brand']) & (df['date'] < row['date'])]
    if x.shape[0] > 0:
        return 1
    else:
        return 0

df['check'] = df.apply(past_transaction, axis=1)
This works well, but the performance is abysmal. Is there a more efficient way to do this (with or without Pandas)? Thanks!
I would personally use two boolean masks:
First, check whether the id is duplicated.
Second, check for those rows where the (id, brand) pair is not duplicated.
import numpy as np
s = df.duplicated(subset=['id'],keep='first')
s1 = ~df.duplicated(subset=['id','brand'],keep=False)
df['check'] = np.where(s & s1,1,0)
id brand date check
0 1 a 01-01-2020 0
1 1 b 01-02-2020 1
2 2 a 01-05-2019 0
3 2 a 01-06-2019 0
4 3 c 01-12-2018 0
A) Use pandas built-in functions
The first step would be to utilize pandas built-in functions instead of making your own:
df['check'] = np.logical_and(df.id.duplicated(), ~df[['id','brand']].duplicated())
It will make your code faster already!
B) Take advantage of hardware
Opt in to using all of the cores on your machine, if your RAM permits. You can use modin.pandas or any similar alternative. I recommend this because it requires minimal changes and can provide a significant speed-up depending on your machine's configuration.
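For example, the switch is usually just the import (a sketch, assuming modin and one of its engines, e.g. Ray or Dask, are installed; transactions.csv is a hypothetical file):
# pip install "modin[ray]"   (or modin[dask])
import modin.pandas as pd    # drop-in replacement for `import pandas as pd`

df = pd.read_csv('transactions.csv')  # hypothetical file; the rest of your pandas-style code stays the same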
C) Big Data Frameworks
If it is a big data problem, you should already be using dask or Spark dataframes, which are meant to handle Big Data, as pandas isn't meant to handle such large volumes of data.
Some things I found effective while dealing with a similar problem.

Pivot table to "tidy" data frame in Pandas

I have an array of numbers (I think the format makes it a pivot table) that I want to turn into a "tidy" data frame. For example, I start with variable 1 down the left, variable 2 across the top, and the value of interest in the middle, something like this:
X Y
A 1 2
B 3 4
I want to turn that into a tidy data frame like this:
V1 V2 value
A X 1
A Y 2
B X 3
B Y 4
The row and column order don't matter to me, so the following is totally acceptable:
value V1 V2
2 A Y
4 B Y
3 B X
1 A X
For my first go at this, which was able to get me the correct final answer, I looped over the rows and columns. This was terribly slow, and I suspected that some machinery in Pandas would make it go faster.
It seems that melt is close to the magic I seek, but it doesn't get me all the way there. That first array turns into this:
V2 value
0 X 1
1 X 2
2 Y 3
3 Y 4
It gets rid of my V1 variable!
Nothing is special about melt, so I will be happy to read answers that use other approaches, particularly if melt is not much faster than my nested loops and another solution is. Nonetheless, how can I go from that array to the kind of tidy data frame I want as the output?
Example dataframe:
df = pd.DataFrame({"X":[1,3], "Y":[2,4]},index=["A","B"])
Use DataFrame.rename_axis and DataFrame.reset_index, and then DataFrame.melt. If you want to order the columns, we can use DataFrame.reindex.
new_df = (df.rename_axis(index='V1')
            .reset_index()
            .melt('V1', var_name='V2')
            .reindex(columns=['value', 'V1', 'V2']))
print(new_df)
Another approach DataFrame.stack:
new_df = (df.stack()
            .rename_axis(index=['V1', 'V2'])
            .rename('value')
            .reset_index()
            .reindex(columns=['value', 'V1', 'V2']))
print(new_df)
value V1 V2
0 1 A X
1 3 B X
2 2 A Y
3 4 B Y
For setting the names there is another alternative, as @Scott Boston mentions in the comments.
Melt is a good approach, but it doesn't seem to play nicely with identifying the results by index. You can reset the index first to move it to its own column, then use that column as the id col.
test = pd.DataFrame([[1,2],[3,4]], columns=['X', 'Y'], index=['A', 'B'])
X Y
A 1 2
B 3 4
test = test.reset_index()
index X Y
0 A 1 2
1 B 3 4
test.melt('index',['X', 'Y'], 'prev cols')
index prev cols value
0 A X 1
1 B X 3
2 A Y 2
3 B Y 4
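As a side note (worth checking against your pandas version), on pandas 1.1+ melt can keep the index directly via ignore_index=False, which avoids the reset-the-index-first step; a small sketch:
import pandas as pd

test = pd.DataFrame([[1, 2], [3, 4]], columns=['X', 'Y'], index=['A', 'B'])

# keep the original index while melting (pandas >= 1.1), then turn it into the V1 column
tidy = (test.melt(var_name='V2', value_name='value', ignore_index=False)
            .rename_axis('V1')
            .reset_index())
print(tidy)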

Is it possible to select a view from a pandas dataframe for a subset of columns without slicing?

I'm working with fairly large datasets that are close to my available memory. I want to select a subset of columns based on column names and then save this data. I don't think I can use regular slicing, as in :2 notation, so I need to select based on label or location. But it seems the only way to do this produces a copy, increasing memory usage considerably whenever I want to save a subset of the data. Is it possible to select a view without using slices? Or is there some creative way to use slices that can allow me to select arbitrarily located columns?
Consider the following:
import pandas as pd
df = pd.DataFrame([[1, 2, 1], [3, 4, 1]], columns=list('abc'))
# you can get a view using :2 slicing
my_slice = df.iloc[:, :2]
my_slice.iloc[0, 0] = 100
df
a b c
0 100 2 1
1 3 4 1
my_slice
a b
0 100 2
1 3 4
This returns a view and hence doesn't copy, but I had to index by slicing.
Now I try alternatives.
my_slice = df.iloc[:, [0, 1]]
my_slice.iloc[0, 0] = 99
my_slice
a b
0 99 2
1 3 4
df
a b c
0 100 2 1
1 3 4 1
Or
my_slice = df.loc[:, ['a', 'b']]
my_slice.iloc[0, 0] = 55
my_slice
a b
0 55 2
1 3 4
df
a b c
0 100 2 1
1 3 4 1
Thus, the last two attempts returned a copy. Again, this is just a simple example. In reality, I have many more columns and the location of the subset of columns I want to save may not be amenable to slicing. This post is related, as it discusses selecting columns from dataframes, but it doesn't focus on being able to select views.
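One way to check whether a given selection actually shares memory with the original is numpy's shares_memory. This is only a diagnostic sketch: it works cleanly here because every column has the same dtype, and the results may differ across pandas versions (especially with Copy-on-Write enabled):
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 2, 1], [3, 4, 1]], columns=list('abc'))

slice_view = df.iloc[:, :2]       # slice-based selection
list_select = df.iloc[:, [0, 1]]  # list-based selection

# True means the selection still points at df's underlying buffer, False means it copied
print(np.shares_memory(df.to_numpy(), slice_view.to_numpy()))
print(np.shares_memory(df.to_numpy(), list_select.to_numpy()))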

Data cleaning: Remove 0 value from my dataset having a header and index_col

I have a dataset, shown below as a picture of an Excel file.
What I would like to do is three things.
Step 1: AA to CC are index columns; however, I'm happy to keep them in the dataset for future use.
Step 2: Count the 0 values in each row.
Step 3: If 0 accounts for more than 20% of the row, which means more than 2 zeroes in this case because DD to MM is 10 columns, remove the row.
So I used a rather crude way to achieve the above three steps.
df = pd.read_csv("dataset.csv", header=None)
df_bool = (df == "0")
print(df_bool.sum(axis=1))
then I got an expected result showing below.
0 0
1 0
2 1
3 0
4 1
5 8
6 1
7 0
So I removed row #5, as indicated below.
df2 = df.drop([5], axis=0)
print(df2)
This works, even though it is not elegant and is a rather crude way to go about it.
However, if I import my dataset with header=0, this approach does not work at all.
df = pd.read_csv("dataset.csv", header=0)
0 0
1 0
2 0
3 0
4 0
5 0
6 0
7 0
Why does this happen?
Also, if I wanted to write this with loop, count and drop functions, what would the code look like?
You can just continue using boolean indexing:
First we calculate number of columns and number of zeroes per row:
n_columns = len(df.columns) # or df.shape[1]
zeroes = (df == "0").sum(axis=1)
We then select only rows that have less than 20 % zeroes.
proportion_zeroes = zeroes / n_columns
max_20 = proportion_zeroes < 0.20
df[max_20] # This will contain only rows that have less than 20 % zeroes
One liner:
df[((df == "0").sum(axis=1) / len(df.columns)) < 0.2]
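A quick end-to-end check of that one-liner on a tiny dummy frame (my own made-up miniature of the data; the comparison is against the string "0" because the question's data was read with header=None, which typically leaves such mixed columns as strings):
import pandas as pd

# three index-like columns plus ten data columns, mimicking AA-CC and DD-MM
data = [['a1', 'b1', 'c1'] + ['0', '1', '2', '0', '1', '1', '1', '1', '1', '1'],  # 2 zeroes -> kept
        ['a2', 'b2', 'c2'] + ['0'] * 8 + ['1', '1']]                              # 8 zeroes -> dropped
df = pd.DataFrame(data)

kept = df[((df == "0").sum(axis=1) / len(df.columns)) < 0.2]
print(kept)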
It would have been great if you could have posted how the dataframe looks in pandas rather than a picture of an Excel file. However, constructing a dummy df:
df = pd.DataFrame({'index1': ['a','b','c'], 'index2': ['b','g','f'], 'index3': ['w','q','z'],
                   'Col1': [0,1,0], 'Col2': [1,1,0], 'Col3': [1,1,1], 'Col4': [2,2,0]})
Step 1: assigning the index can be done using the .set_index() method, as per below:
df.set_index(['index1','index2','index3'],inplace=True)
Instead of doing everything manually when it comes to filtering out, you can use the result you got from df_bool.sum(axis=1) in the filtering of the dataframe, as per below:
df.loc[(df==0).sum(axis=1) / (df.shape[1])>0.6]
index1 index2 index3 Col1 Col2 Col3 Col4
c f z 0 0 1 0
And using that you can drop those rows; assuming the 20% threshold, you would use
df = df.loc[(df==0).sum(axis=1) / (df.shape[1])<0.2]
When it comes to the header issue, it's a bit difficult to answer without seeing what the file or dataframe looks like.

Change values over specific indexes in DataFrame

Let's say I have a Series of flags in a DataFrame:
a=pd.DataFrame({'flag':[0,1,0,0,1]})
and I want to change the values of the flags which are in a specific indexes:
lind=[0,1,3]
This is a simple solution:
from functools import partial

def chnflg(series, ind):
    if series.ix[ind] == 0:
        series.ix[ind] = 1
    else:
        series.ix[ind] = 0

map(partial(chnflg, a), lind)
It works fine, but there are two issues: the first is that it makes the changes in place, while I would like a new Series/DataFrame. This is not a big deal after all.
The second point is that it does not seems pythonic enough. Is it possible to do better?
An easier way to describe your function is as x -> 1 - x; this will be more efficient than apply/map.
In [11]: 1 - a.iloc[lind]
Out[11]:
flag
0 1
1 0
3 1
Note: I like to use iloc here as it's less ambiguous.
If you wanted to assign these in place, then do the explicit assignment:
In [12]: a.iloc[lind] = 1 - a.iloc[lind]
In [13]: a
Out[13]:
flag
0 1
1 0
2 0
3 1
4 1
You could create a dict that flips the values and call map; this returns a series, so you can create a new dataframe and leave the original intact:
In [6]:
temp = {0: 1, 1: 0}
pd.DataFrame(a.loc[lind, 'flag'].map(temp))
Out[6]:
flag
0 1
1 0
3 1
