how to make pandas row processing faster? - python

I have 600 CSV files, and each file contains around 1500 rows of data. I have to run a function on every row of data. I have defined the function:
def query_prepare(data):
    """function goes here"""
    """here the input data is a list holding a single row of the dataframe"""
The above function performs operations like strip() and replace() based on conditions. It takes every single row of data as a list, e.g.
data = ['apple$*7', 'orange ', 'bananna', '-']
This is what my initial dataframe looks like:
           a        b         c  d
0   apple$*7   orange   bananna  -
1  apple()*7  flower]  *bananna  -
I checked the function on one row of data: processing it takes around 0.04 s, so running it on one CSV file containing 1500 rows takes almost 1500 * 0.04 s (about 60 s). I have tried some of these methods:
# normal built-in apply function
t = time.time()
a = df.apply(lambda x: query_prepare(x.to_list()),axis=1)
print('time taken',time.time()-t)
# time taken 52.519816637039185
# with swifter
t = time.time()
a = df.swifter.allow_dask_on_strings().apply(lambda x: query_prepare(x.to_list()),axis=1)
print('time taken',time.time()-t)
# time taken 160.31028127670288
# with pandarallel
pandarallel.initialize()
t = time.time()
a = df.parallel_apply(lambda x: query_prepare(x.to_list()),axis=1)
print('time taken',time.time()-t)
# time taken 55.000578
I have already done everything I can to my query_prepare function to reduce the time, so there is no way to change or modify it further. Any other suggestions?
P.S. By the way, I'm running it on Google Colab.
EDIT: If we have 1500 rows of data, could we split them into 15 chunks and then apply the function to each chunk? Could we decrease the time by a factor of 15 by doing something like this? (I'm sorry, I'm not sure whether it's possible or not; please point me in the right direction.)

For example you could roughly do the following:
def sanitize_column(s: pd.Series):
    return s.str.strip().str.strip('1234567890()*[]')
then you could do:
df.apply(sanitize_column, axis=0)
with:
df = pd.DataFrame({'a': ['apple7', 'apple()*7'], 'b': [" asd ", ']asds89']})
this will give
       a     b
0  apple   asd
1  apple  asds
This should be faster than your solution. For proper benchmarking, we'd need your full solution.
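To give a rough sense of the difference, here is a minimal, self-contained benchmark sketch. Since the real query_prepare isn't shown, the row-wise branch just does a comparable strip per cell as a stand-in, and the 1500-row frame is faked from the example values above:
import time
import pandas as pd

# hypothetical stand-in for one CSV file: 1500 rows like the example above
df = pd.DataFrame({'a': ['apple$*7'] * 1500,
                   'b': ['orange '] * 1500,
                   'c': ['bananna'] * 1500,
                   'd': ['-'] * 1500})

def sanitize_column(s: pd.Series):
    return s.str.strip().str.strip('1234567890()*[]')

# row-wise: one Python-level call per row
t = time.time()
_ = df.apply(lambda x: [str(v).strip().strip('1234567890()*[]') for v in x.to_list()], axis=1)
print('row-wise apply :', time.time() - t)

# column-wise: four vectorized .str calls in total
t = time.time()
_ = df.apply(sanitize_column, axis=0)
print('column-wise    :', time.time() - t)
The column-wise version does all the string work inside pandas' vectorized .str methods, so the per-row Python overhead disappears; how much that helps your real query_prepare depends on how much of it can be expressed column-wise.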


Does anyone have a better way of efficiently creating a DataFrame from 60,000 txt files with keys in one column and values in the second?

Disclaimer!! This is my first post ever, so sorry if I don't meet certain standards of the community.
I use Python 3, Jupyter Notebooks, and pandas.
I used the KMC k-mer counter to count the k-mers of 60,000 DNA sequences in a reasonable amount of time. I want to use these k-mer counts as input to ML algorithms as part of a Bag of Words model.
A file containing k-mer counts looks like the example below, and I have 60K such files:
AAAAAC    2
AAAAAG    6
AAAAAT    2
AAAACC    4
AAAACG    2
AAAACT    3
AAAAGA    5
I want to create a single DataFrame from all 60K files, with one row per DNA sequence holding its k-mer counts, in this form:
[Image: the target DataFrame shape]
A first approach was successful: I managed to import 100 sequences (100 txt files) in 58 seconds using this code:
import time
import pandas as pd

k = 6                                    # assumed: k-mer length from the '6mer' folder name
countsPath = r'D:\DataSet\MULTI\bow\6mer'
df = pd.DataFrame()                      # accumulator DataFrame

start = time.time()
for i in range(0, 60000):
    sample = pd.read_fwf(countsPath + r'\kmers-' + str(k) + '-seqNb-' + str(i) + '.txt',
                         sep=" ", header=None).T
    new_header = sample.iloc[0]              # grab the first row for the header
    sample = sample[1:]                      # take the data less the header row
    sample.columns = new_header              # set the header row as the df header
    df = df.append(sample, ignore_index=True)  # append sample to the df dataset
end = time.time()

# total time taken
print(f"Runtime of the program is {end - start} secs")
# display(sample)
display(df)
However, this was very slow: it took 59 secs for 100 files, so the full dataset would take roughly 600 times as long.
I tried Dask DataFrames/Bags to accelerate the process because they read dictionary-like data, but I couldn't append each file as a row. The resulting Dask DataFrame is as follows:
0          AAAAA   18
1          AAAAC   16
2          AAAAG   13
...
1023   TTTTT   14
0          AAAAA   5
1          AAAAC   4
...
1023   TTTTT   9
0          AAAAA   18
1          AAAAC   16
2          AAAAG   13
3          AAAAT   12
4          AAACA   11
So the files are being inserted in a single column.
Does anyone have a better way of efficiently creating a DataFrame from 60K txt files?
Love the disclaimer. I have a similar one - this is the first time I'm trying to answer a question. But I'm pretty certain I got this...and so will you:
dict_name = dict(zip(df['column_name'], df['the_other_column_name']))
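Since the answer only gives the key line, here is a minimal sketch of how it could be applied to the 60K files, assuming the same folder layout and file naming as in the question; the fillna(0) default for k-mers missing from a file is an assumption:
import pandas as pd

k = 6
countsPath = r'D:\DataSet\MULTI\bow\6mer'    # same layout as in the question

rows = []
for i in range(0, 60000):
    path = countsPath + r'\kmers-' + str(k) + '-seqNb-' + str(i) + '.txt'
    sample = pd.read_fwf(path, header=None, names=['kmer', 'count'])
    # one dict per sequence: {kmer: count, ...}
    rows.append(dict(zip(sample['kmer'], sample['count'])))

# build the DataFrame once at the end; repeated df.append() re-copies the growing frame
df = pd.DataFrame(rows).fillna(0)
Building the frame once from a list of dicts avoids copying the whole accumulated DataFrame on every iteration, which is usually where most of the 59 seconds goes.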

Efficient way to loop through GroupBy DataFrame

Since my last post lacked information:
Example of my df (the important columns):
deviceID: unique ID for the vehicle. Vehicles send data every X minutes.
mileage: the distance moved since the last message (in km).
position_timestamp_measure: Unix timestamp of the time the record was created.
deviceID  mileage  position_timestamp_measure
54672     10       1600696079
43423     20       1600696079
42342      3       1600701501
54672      3       1600702102
43423      2       1600702701
My goal is to validate the mileage by comparing it to the max speed of the vehicle (which is 80 km/h), calculating the speed of the vehicle from the timestamp and the mileage. The result should then be written back into the original dataset.
What I've done so far is the following:
df_ori['dataIndex'] = df_ori.index
df = df_ori.groupby('device_id')
# create new col and set all values to false
df_ori['valid'] = 0

for group_name, group in df:
    # sort group by time
    group = group.sort_values(by='position_timestamp_measure')
    group = group.reset_index()
    # since I can't validate the first point in the group, I set it to valid
    df_ori.loc[df_ori.index == group.dataIndex.values[0], 'validPosition'] = 1
    # iterate through each record in the group
    for i in range(1, len(group)):
        timeGoneSec = abs(group.position_timestamp_measure.values[i] - group.position_timestamp_measure.values[i-1])
        timeHours = (timeGoneSec / 60) / 60
        # calculate speed and mark the record as valid if it is below the max speed
        if (group.mileage.values[i] / timeHours) < maxSpeedKMH:
            df_ori.loc[df_ori.index == group.dataIndex.values[i], 'validPosition'] = 1

df_ori.validPosition.value_counts()
It definitely works the way I want it to; however, it performs very poorly. The df contains nearly 700k rows (already cleaned). I am still a beginner and can't figure out a better solution. I would really appreciate any help.
If I got it right, no for-loops are needed here. Here is what I've transformed your code into:
df_ori['dataIndex'] = df_ori.index
df = df_ori.groupby('device_id')
# create new col and set all values to false
df_ori['valid'] = 0
df_ori = df_ori.sort_values(['position_timestamp_measure'])

# Subtract the preceding value from the current value within each group
df_ori['timeGoneSec'] = \
    df_ori.groupby('device_id')['position_timestamp_measure'].transform('diff')

# The operation above produces NaN for the first value in each group;
# fill 'valid' with 1 for those rows, as in the original code
df_ori.loc[df_ori['timeGoneSec'].isna(), 'valid'] = 1

df_ori['timeHours'] = df_ori['timeGoneSec'] / 3600   # 60*60 = 3600
df_ori['flag'] = (df_ori['mileage'] / df_ori['timeHours']) <= maxSpeedKMH
df_ori.loc[df_ori['flag'], 'valid'] = 1

# Remove helper columns
df_ori = df_ori.drop(columns=['flag', 'timeHours', 'timeGoneSec'])
The basic idea is to use vectorized operations as much as possible and to avoid for loops, i.e. row-by-row iteration, which can be insanely slow.
Since I can't get the context of your code, please double check the logic and make sure it works as desired.
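For illustration, here is a minimal, self-contained sketch of the same vectorized logic run on the small sample table from the question; maxSpeedKMH = 80 and the column names are taken from the question, and it is meant as a sanity check rather than the final implementation:
import pandas as pd

maxSpeedKMH = 80

df_ori = pd.DataFrame({
    'device_id': [54672, 43423, 42342, 54672, 43423],
    'mileage': [10, 20, 3, 3, 2],
    'position_timestamp_measure': [1600696079, 1600696079, 1600701501, 1600702102, 1600702701],
})

df_ori = df_ori.sort_values('position_timestamp_measure')
df_ori['valid'] = 0

# seconds since the previous message of the same vehicle
df_ori['timeGoneSec'] = df_ori.groupby('device_id')['position_timestamp_measure'].diff()

# the first message per vehicle cannot be validated -> mark it as valid
df_ori.loc[df_ori['timeGoneSec'].isna(), 'valid'] = 1

# implied speed in km/h; mark the row as valid when it is below the maximum
speed = df_ori['mileage'] / (df_ori['timeGoneSec'] / 3600)
df_ori.loc[speed <= maxSpeedKMH, 'valid'] = 1

print(df_ori.drop(columns='timeGoneSec'))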

Slicing my data frame is returning unexpected results

I have 13 CSV files that contain billing information in an unusual format. Multiple readings are recorded every 30 minutes of the day. Five days are recorded beside each other (columns). Then the next five days are recorded under it. To make things more complicated, the day of the week, date, and billing day is shown over the first recording of KVAR each day.
The image below shows a small example. However, imagine that KW, KVAR, and KVA repeat 3 more times before continuing some 50 rows later.
My goal was to create a simple Python script that would turn the data into a data frame with the columns: DATE, TIME, KW, KVAR, KVA, and DAY.
The problem is that my script returns NaN for the KW, KVAR, and KVA data after the first five days (which coincides with a new iteration of the for loop). What is weird to me is that when I try to print out the same ranges I get the data that I expect.
My code is below. I have included comments to help further explain things. I also have an example of sample output of my function.
def make_df(df):
    # starting values
    output = pd.DataFrame(columns=["DATE", "TIME", "KW", "KVAR", "KVA", "DAY"])
    time = df1.loc[3:50, 0]
    val_start = 3
    val_end = 51
    date_val = [0, 2]
    day_type = [1, 2]
    # There are 7 row movements that need to take place.
    for row_move in range(1, 8):
        day = [1, 2, 3]
        date_val[1] = 2
        day_type[1] = 2
        # There are 5 column movements that take place.
        # The basic idea is that I cycle through the five days, grab their data in a temporary dataframe,
        # and then append that dataframe onto the output dataframe.
        for col_move in range(1, 6):
            temp_df = pd.DataFrame(columns=["DATE", "TIME", "KW", "KVAR", "KVA", "DAY"])
            temp_df['TIME'] = time
            # These are the 3 values that stop working after the first column change.
            # I get the values that I expect for the first 5 days.
            temp_df['KW'] = df.iloc[val_start:val_end, day[0]]
            temp_df['KVAR'] = df.iloc[val_start:val_end, day[1]]
            temp_df['KVA'] = df.iloc[val_start:val_end, day[2]]
            # These 2 values work perfectly for the entire data set.
            temp_df['DAY'] = df.iloc[day_type[0], day_type[1]]
            temp_df["DATE"] = df.iloc[date_val[0], date_val[1]]
            # troubleshooting
            print(df.iloc[val_start:val_end, day[0]])
            print(temp_df)
            output = output.append(temp_df)
            # Increase the values for each iteration of the column loop.
            # Seems to work perfectly when I print the data.
            day = [x + 3 for x in day]
            date_val[1] = date_val[1] + 3
            day_type[1] = day_type[1] + 3
        # Increase the values for each iteration of the row loop.
        # Seems to work perfectly when I print the data.
        date_val[0] = date_val[0] + 55
        day_type[0] = day_type[0] + 55
        val_start = val_start + 55
        val_end = val_end + 55
    return output

test = make_df(df1)
Below is some sample output. It shows where the data starts to break down after the fifth day (or first instance of the column shift in the for loop). What am I doing wrong?
This could be pd.DataFrame.append requiring matched row indices for numerical values:
import pandas as pd
import numpy as np

output = pd.DataFrame(np.random.rand(5, 2), columns=['a', 'b'])  # fake data
output['c'] = list('abcde')  # add a column of non-numerical entries

tmp = pd.DataFrame(columns=['a', 'b', 'c'])
tmp['a'] = output.iloc[0:2, 2]
tmp['b'] = output.iloc[3:5, 2]  # index 3:5 does not align with tmp's index 0:1 -> generates NaN
tmp['c'] = output.iloc[0:2, 2]
output = output.append(tmp)
(initial response)
What does df1 look like? Does df.iloc[val_start:val_end, day[0]] have any issue past the fifth day? The code doesn't show how you read from the CSV files, or df1 itself.
My guess: if val_start:val_end gives out-of-range indices on the sixth day, or df1 happens to be malformed past the fifth day, df.iloc[val_start:val_end, day[0]] will return an empty Series object and possibly make its way into temp_df. .iloc does not report out-of-range row slices, though similar column indices would trigger an IndexError.
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.rand(5,3), columns=['a','b','c'], index=np.arange(5)) # fake data
df.iloc[0:2, 1] # returns the subset
df.iloc[100:102, 1] # returns: Series([], Name: b, dtype: float64)
A little off topic, but I would recommend preprocessing the CSV files rather than dealing with this indexing in a pandas DataFrame, as the original format is quite complex. Slice the data by date and later use pd.melt or groupby to shape it into the format you like. Alternatively, try a MultiIndex if you stick with pandas I/O.
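Not the poster's exact fix, just a minimal sketch of one way to dodge the alignment NaNs if the slices themselves are correct: convert the slices to plain arrays (or reset their index) before putting them into the temporary frame, so pandas assigns them positionally instead of aligning on row labels:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(10, 3), columns=['a', 'b', 'c'])  # fake data

# build the temporary frame from plain arrays in one go; .to_numpy()
# discards the row labels, so differently-labelled slices line up by position
tmp = pd.DataFrame({
    'x': df.iloc[0:2, 0].to_numpy(),
    'y': df.iloc[5:7, 1].to_numpy(),   # would become all NaN if assigned as a labelled Series
})
print(tmp)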

add computed column to a csv file

I hope this isn't a classic beginner question; however, I have read and spent days trying to save my CSV data without success.
I have a function that takes an input parameter that I supply manually. The function generates 3 columns that I save in a CSV file. When I run the function with other inputs and try to save the new data to the right of the previously computed columns, pandas instead stacks the new columns below the old ones, repeating the headers.
I'm using the following code to save my data:
data.to_csv('/Users/Computer/Desktop/Examples anaconda/data_new.csv', sep=',',mode='a')
and the result is:
dot lake mock
1 42 11.914558
2 41 42.446977
3 40 89.188668
dot lake mock
1 42 226.266513
2 41 317.768887
dot lake mock
3 42 560.171830
4 41 555.005333
What I want is:
dot lake mock mock mock
0 42 11.914558 226.266513 560.171830
1 41 42.446977 317.768887 555.005533
2 40 89.188668
UPDATE:
My DataFrame was generated using a function like this:
First I opened a csv file:
df1=pd.read_csv('current_state.csv')
def my_function(df1, photos, coords=['X', 'Y']):
    Hzs = t.copy()
    shifts = np.floor(Hzs / t_step).astype(np.int)
    ms = np.zeros(shifts.size)
    delta_inv = np.arange(N + 1)
    dot = delta_inv[N:0:-1]
    lake = np.arange(1, N + 1)
    for i, shift in enumerate(shifts):
        diffs = df1[coords] - df1[coords].shift(-shift)
        sqdist = np.square(diffs).sum(axis=1)
        ms[i] = sqdist.sum()
    mock = np.divide(ms, dot)
    msds = pd.DataFrame({'dot': dot, 'lake': lake, 'mock': mock})
    return msds
data = my_function(df1, photos, coords=['X', 'Y'])
print(data)
data.to_csv('/Users/Computer/Desktop/Examples anaconda/data_new.csv', sep=',', mode='a')
I looked for several days for a way to write a CSV file containing several computed columns right next to the previous ones (despite some unpleasant comments from a few people!), and I finally found how to do it. If someone needs something similar:
First I save my data using to_csv:
data.to_csv('/Users/Computer/Desktop/Examples/data_new.csv', sep=',',mode='a', index=False)
After the file has been generated with the headers, I drop the index that I don't need, and on each later call of the function I just run the following at the end:
b = data
a = pd.read_csv('data_new.csv')
c = pd.concat([a, b], axis=1, ignore_index=True)
c.to_csv('/Users/Computer/Desktop/Examples/data_new.csv', sep=',', index=False)
As a result I got the desired CSV file, and it is possible to call the function as many times as you want!
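An alternative worth sketching, assuming all the inputs can be run in one script: collect each result in a list, concatenate once along the columns, and write the CSV a single time, which avoids re-reading the growing file on every call. The photo_inputs list and the output file name below are hypothetical; my_function and its signature come from the question:
import pandas as pd

results = []
for photos in photo_inputs:                  # photo_inputs: hypothetical list of the inputs you currently supply by hand
    msds = my_function(df1, photos, coords=['X', 'Y'])
    results.append(msds)

# one concat along the columns instead of appending to the CSV on every call
combined = pd.concat(results, axis=1)
combined.to_csv('/Users/Computer/Desktop/Examples/data_new_combined.csv', sep=',', index=False)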

Optimizing Python code - overhead due to pandas.core.series.Series.__getitem__

I have a pandas data object - data - that is stored as a Series of Series. The outer Series is indexed on ID1 and each inner Series on ID2.
ID1 ID2
1 10259 0.063979
14166 0.120145
14167 0.177417
14244 0.277926
14245 0.436048
15021 0.624367
15260 0.770925
15433 0.918439
15763 1.000000
...
1453 812690 0.752274
813000 0.755041
813209 0.756425
814045 0.778434
814474 0.910647
814475 1.000000
Length: 19726, dtype: float64
I have a function that uses values from this object for further data processing. Here is the function:
# Function
def getData(ID1, randomDraw):
    dataID2 = data[ID1]
    value = dataID2.index[np.searchsorted(dataID2, randomDraw, side='left').iloc[0]]
    return value
I use np.vectorize to apply this function on a DataFrame - dataFrame - that has about 22 million rows.
dataFrame['ID2'] = np.vectorize(getData)(dataFrame['ID1'], dataFrame['RAND'])
where ID1 and RAND are columns with values that are feeding into the function.
The code takes about 6 hours to process everything. A similar implementation in Java takes only about 6 minutes to get through 22 million rows of data.
On running a profiler on my program I find that the most expensive call is the indexing into data and the second most expensive is searchsorted.
Function Name: pandas.core.series.Series.__getitem__
Elapsed inclusive time percentage: 54.44
Function Name: numpy.core.fromnumeric.searchsorted
Elapsed inclusive time percentage: 25.49
Using data.loc[ID1] to get the data makes the program even slower. How can I make this faster? I understand that Python cannot achieve the same efficiency as Java, but 6 hours compared to 6 minutes seems like too big a difference. Maybe I should be using a different data structure or different functions? I am using Python 2.7 and the PTVS IDE.
Adding a minimal working example:
import numpy as np
import pandas as pd

np.random.seed(0)

# Creating a dummy data object - Series within Series
alt = pd.Series(np.array([0.25, 0.50, 0.75, 1.00]), index=np.arange(1, 5))
data = pd.Series([alt] * 1500, index=np.arange(1, 1501))

# Creating dataFrame
nRows = 200000
d = {'ID1': np.random.randint(1500, size=nRows) + 1,
     'RAND': np.random.uniform(low=0.0, high=1.0, size=nRows)}
dataFrame = pd.DataFrame(d)

# Function
def getData(ID1, randomDraw):
    dataID2 = data[ID1]
    value = dataID2.index[np.searchsorted(dataID2, randomDraw, side='left').iloc[0]]
    return value

dataFrame['ID2'] = np.vectorize(getData)(dataFrame['ID1'], dataFrame['RAND'])
You may get better performance with this code:
>>> def getData(ts):
... dataID2 = data[ts.name]
... i = np.searchsorted(dataID2.values, ts.values, side='left')
... return dataID2.index[i]
...
>>> dataFrame['ID2'] = dataFrame.groupby('ID1')['RAND'].transform(getData)
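On the minimal working example above, the grouped line simply replaces the np.vectorize call. A rough timing sketch, assuming data, dataFrame, and the grouped getData from this answer (the one taking a Series, not the original two-argument version) are already defined:
import time

t = time.time()
dataFrame['ID2'] = dataFrame.groupby('ID1')['RAND'].transform(getData)
print('grouped transform took', time.time() - t, 'seconds')
The speedup comes from calling searchsorted once per ID1 group on a whole array of random draws, instead of re-fetching the inner Series and doing a scalar search separately for each of the 22 million rows.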
