So, I've got two dataframes: one with 54k rows and 1 column, and another with 139k rows and 3 columns. I need to check whether the values of a column from the first dataframe lie between the values of two columns in the second dataframe, and if they do, I need to replace that particular value in the first dataframe with the corresponding string value from the second dataframe.
I tried doing it with simple for loops and if/else statements, but the number of iterations is huge and my cell is taking forever to run. I've attached some snippets below; if there is a better way to rewrite that particular part of the code, it would be a great help. Thanks in advance.
First DataFrame:
ip_address_to_clean
IP_Address_clean
0 815237196
1 1577685417
2 979279225
3 3250268602
4 2103448748
... ...
54208 4145673247
54209 1344187002
54210 3156712153
54211 1947493810
54212 2872038579
54213 rows × 1 columns
Second DataFrame:
ip_boundaries_file
country lower_bound_ip_address_clean upper_bound_ip_address_clean
0 Australia 16777216 16777471
1 China 16777472 16777727
2 China 16777728 16778239
3 Australia 16778240 16779263
4 China 16779264 16781311
... ... ... ...
138841 Hong Kong 3758092288 3758093311
138842 India 3758093312 3758094335
138843 China 3758095360 3758095871
138844 Singapore 3758095872 3758096127
138845 Australia 3758096128 3758096383
138846 rows × 3 columns
Code I've written:
ip_address_to_clean_copy = ip_address_to_clean.copy()
o_ip = ip_address_to_clean['IP_Address_clean'].values
l_b = ip_boundaries_file['lower_bound_ip_address_clean'].values
for i in range(len(o_ip)):
    for j in range(len(l_b)):
        if (ip_address_to_clean['IP_Address_clean'][i] > ip_boundaries_file['lower_bound_ip_address_clean'][j]) and (ip_address_to_clean['IP_Address_clean'][i] < ip_boundaries_file['upper_bound_ip_address_clean'][j]):
            ip_address_to_clean_copy['IP_Address_clean'][i] = ip_boundaries_file['country'][j]
            #print(ip_address_to_clean_copy['IP_Address_clean'][i])
            #print(i)
This works (I tested it on small tables).
replacement1 = [None]*3758096384
replacement2 = []
for _, row in ip_boundaries_file.iterrows():
    a, b, c = row['lower_bound_ip_address_clean'], row['upper_bound_ip_address_clean'], row['country']
    replacement1[a+1:b] = [len(replacement2)]*(b-a-1)
    replacement2.append(c)
ip_address_to_clean_copy['IP_Address_clean'] = ip_address_to_clean_copy['IP_Address_clean'].apply(
    lambda x: replacement2[replacement1[x]] if (x < len(replacement1) and replacement1[x] is not None) else x)
I tweaked the lambda function to keep the original ip if it's not in the replacement table.
Notes:
Compared to my comment, I added the replacement2 table to hold the actual strings, and put the indexes in replacement1 to make it more memory efficient.
This is based on one of the methods to sort a list in O(n) when you know the contained values are bounded.
Example:
Inputs:
ip_address_to_clean = pd.DataFrame([10, 33, 2, 179, 2345, 123], columns=['IP_Address_clean'])
ip_boundaries_file = pd.DataFrame([['China', 1, 12],
                                   ['Australia', 20, 40],
                                   ['China', 2000, 3000],
                                   ['France', 100, 150]],
                                  columns=['country', 'lower_bound_ip_address_clean',
                                           'upper_bound_ip_address_clean'])
Output:
ip_address_to_clean_copy
# Out[13]:
# IP_Address_clean
# 0 China
# 1 Australia
# 2 China
# 3 179
# 4 China
# 5 France
As I mentioned in another comment, here's another script that performs a binary (dichotomy) search on the 2nd DataFrame; it runs in O(n log(p)), which is slower than the script above, but consumes far less memory!
def replace(n, df):
    if len(df) == 0:
        return n
    i = len(df)//2
    if df.iloc[i]['lower_bound_ip_address_clean'] < n < df.iloc[i]['upper_bound_ip_address_clean']:
        return df.iloc[i]['country']
    elif len(df) == 1:
        return n
    else:
        if n <= df.iloc[i]['lower_bound_ip_address_clean']:
            # search the rows before i
            return replace(n, df.iloc[:i])
        else:
            # search the rows after i
            return replace(n, df.iloc[i+1:])
ip_address_to_clean_copy['IP_Address_clean'] = ip_address_to_clean['IP_Address_clean'].apply(lambda x: replace(x,ip_boundaries_file))
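If the boundary rows are sorted by lower bound and the ranges don't overlap (as they appear to be in the sample data), the same binary search can also be done in a vectorized way with numpy.searchsorted. A minimal sketch under those assumptions:
import numpy as np
import pandas as pd

# Assumes ip_boundaries_file is sorted by lower bound and the ranges do not overlap
bounds = ip_boundaries_file.sort_values('lower_bound_ip_address_clean')
lower = bounds['lower_bound_ip_address_clean'].values
upper = bounds['upper_bound_ip_address_clean'].values
countries = bounds['country'].values

ips = ip_address_to_clean['IP_Address_clean'].values
idx = np.searchsorted(lower, ips, side='left') - 1     # last row with lower_bound < ip
valid = (idx >= 0) & (ips < upper[idx.clip(min=0)])    # strict bounds, matching the loops above

result = pd.Series(ips, index=ip_address_to_clean.index, dtype=object)
result[valid] = countries[idx[valid]]
ip_address_to_clean_copy['IP_Address_clean'] = result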
Overview: I am working with pandas dataframes of census information; while they only have two columns, they are several hundred thousand rows in length. One column is a census block ID number and the other is a 'place' value, which is unique to the city in which that census block ID resides.
Example Data:
BLOCKID PLACEFP
0 60014001001000 53000
1 60014001001001 53000
...
5844 60014099004021 53000
5845 60014100001000
5846 60014100001001
5847 60014100001002 53000
Problem: As shown above, there are several place values that are blank, even though they have a census block ID in their corresponding row. What I found was that in several instances, a census block ID that is missing a place value is located within the same city as the surrounding blocks that do have one, especially when the bookend place values are the same. As shown above with index 5844 through 5847, those blocks are located within the same general area as the surrounding blocks, but just seem to be missing the place value.
Goal: I want to be able to go through this dataframe, find these instances and fill in the missing place value, based on the place value before the missing value and the place value that immediately follows.
Current State & Obstacle: I wrote a loop that goes through the dataframe to correct these issues, shown below.
current_state_blockid_df = pandas.DataFrame(
    {'BLOCKID': [60014099004021, 60014100001000, 60014100001001, 60014100001002,
                 60014301012019, 60014301013000, 60014301013001, 60014301013002,
                 60014301013003, 60014301013004, 60014301013005, 60014301013006],
     'PLACEFP': [53000, '', '', 53000, 11964, '', '', '', '', '', '', 11964]})
for i in current_state_blockid_df.index:
    if current_state_blockid_df.loc[i, 'PLACEFP'] == '':
        # Get value before the blank
        prior_place_fp = current_state_blockid_df.loc[i - 1, 'PLACEFP']
        next_place_fp = ''
        _n = 1
        # Find the end of the blank section
        while next_place_fp == '':
            next_place_fp = current_state_blockid_df.loc[i + _n, 'PLACEFP']
            if next_place_fp == '':
                _n += 1
        # if the blanks could likely be in the same city, assign them the city's place value
        if prior_place_fp == next_place_fp:
            for _i in range(_n):
                current_state_blockid_df.loc[i + _i, 'PLACEFP'] = prior_place_fp
However, as expected, it is very slow when dealing with hundreds of thousands of rows of data. I have considered using something like ThreadPoolExecutor to split up the work, but I haven't quite figured out the logic I'd use to get that done. One possibility to speed it up slightly is to eliminate the check for the end of the gap and instead just fill it in with whatever the previous place value was before the blanks. While that may end up being my go-to, there's still a chance it's too slow, and ideally I'd like it to fill in only when the before and after values match, eliminating the possibility of a block being mistakenly assigned. If someone has another suggestion as to how this could be achieved quickly, it would be very much appreciated.
You can use shift to help speed up the process. However, this doesn't solve for cases where there are multiple blanks in a row.
df['PLACEFP_PRIOR'] = df['PLACEFP'].shift(1)
df['PLACEFP_SUBS'] = df['PLACEFP'].shift(-1)
criteria1 = df['PLACEFP'].isnull()   # if the blanks are empty strings, use df['PLACEFP'] == '' instead
criteria2 = df['PLACEFP_PRIOR'] == df['PLACEFP_SUBS']
df.loc[criteria1 & criteria2, 'PLACEFP'] = df.loc[criteria1 & criteria2, 'PLACEFP_PRIOR']
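If you also want to handle runs of consecutive blanks, one sketch (assuming the blanks are empty strings, as in the sample data) is to compare a forward-fill against a backward-fill and only fill where the two agree:
import numpy as np

placefp = df['PLACEFP'].replace('', np.nan)   # treat empty strings as missing
filled_forward = placefp.ffill()              # value before each blank run
filled_backward = placefp.bfill()             # value after each blank run

# Fill a blank only when the value before the run matches the value after it
agree = placefp.isnull() & (filled_forward == filled_backward)
df.loc[agree, 'PLACEFP'] = filled_forward[agree]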
If you end up needing to iterate over the dataframe, use df.itertuples. Each row comes back as a namedtuple, so you can access the column values via dot notation (row.column_name) and the row label via row.Index.
for row in df.itertuples():
    # logic goes here, e.g. row.Index, row.BLOCKID, row.PLACEFP
Using your dataframe as defined:
def fix_df(current_state_blockid_df):
    df_with_blanks = current_state_blockid_df[current_state_blockid_df['PLACEFP'] == '']
    df_no_blanks = current_state_blockid_df[current_state_blockid_df['PLACEFP'] != '']

    # Group consecutive blank indexes into sections keyed by their first index
    sections = {}
    last_i = 0
    grouping = []
    for i in df_with_blanks.index:
        if i - 1 == last_i:
            grouping.append(i)
            last_i = i
        else:
            last_i = i
            if len(grouping) > 0:
                sections[min(grouping)] = {'indexes': grouping}
                grouping = []
            grouping.append(i)
    if len(grouping) > 0:
        sections[min(grouping)] = {'indexes': grouping}

    # Take the place value from the row just before each blank section
    for i in sections.keys():
        sections[i]['place'] = current_state_blockid_df.loc[i-1, 'PLACEFP']

    l = []
    for i in sections:
        for x in sections[i]['indexes']:
            l.append(sections[i]['place'])
    df_with_blanks['PLACEFP'] = l

    final_df = pandas.concat([df_with_blanks, df_no_blanks]).sort_index(axis=0)
    return final_df

df = fix_df(current_state_blockid_df)
print(df)
Output:
BLOCKID PLACEFP
0 60014099004021 53000
1 60014100001000 53000
2 60014100001001 53000
3 60014100001002 53000
4 60014301012019 11964
5 60014301013000 11964
6 60014301013001 11964
7 60014301013002 11964
8 60014301013003 11964
9 60014301013004 11964
10 60014301013005 11964
11 60014301013006 11964
I have the following code, which reads a CSV file and then analyzes it. One patient can have more than one illness, and I need to find how many times each illness is seen across all patients. But the query given here
raw_data[(raw_data['Finding Labels'].str.contains(ctr)) & (raw_data['Patient ID'] == i)].size
is so slow that it takes more than 15 mins. Is there a way to make the query faster?
raw_data = pd.read_csv(r'C:\Users\omer.kurular\Desktop\Data_Entry_2017.csv')
data = ["Cardiomegaly", "Emphysema", "Effusion", "No Finding", "Hernia", "Infiltration", "Mass", "Nodule", "Atelectasis", "Pneumothorax", "Pleural_Thickening", "Pneumonia", "Fibrosis", "Edema", "Consolidation"]
illnesses = pd.DataFrame({"Finding_Label": [],
                          "Count_of_Patientes_Having": [],
                          "Count_of_Times_Being_Shown_In_An_Image": []})
ids = raw_data["Patient ID"].drop_duplicates()
index = 0
for ctr in data[:1]:
    illnesses.at[index, "Finding_Label"] = ctr
    illnesses.at[index, "Count_of_Times_Being_Shown_In_An_Image"] = raw_data[raw_data["Finding Labels"].str.contains(ctr)].size / 12
    for i in ids:
        illnesses.at[index, "Count_of_Patientes_Having"] = raw_data[(raw_data['Finding Labels'].str.contains(ctr)) & (raw_data['Patient ID'] == i)].size
    index = index + 1
Part of dataframes:
Raw_data
Finding Labels - Patient ID
IllnessA|IllnessB - 1
Illness A - 2
From what I read I understand that ctr stands for the name of a disease.
When you are doing this query:
raw_data[(raw_data['Finding Labels'].str.contains(ctr)) & (raw_data['Patient ID'] == i)].size
You are filtering not only for the rows which have the disease, but also for those which have a specific patient ID. If you have a lot of patients, you will need to run this query a lot of times. A simpler way would be to not filter on the patient ID and instead take the count of all the rows which have the disease.
This would be:
raw_data[raw_data['Finding Labels'].str.contains(ctr)].size
And in this case, since you want the number of rows, len is what you are looking for instead of size (size is the number of cells in the dataframe).
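For instance, on a hypothetical 3-row, 2-column frame:
import pandas as pd

demo = pd.DataFrame({'Finding Labels': ['Hernia', 'Mass', 'Hernia'],
                     'Patient ID': [1, 2, 3]})
print(len(demo))   # 3 -> number of rows
print(demo.size)   # 6 -> number of cells (rows * columns)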
Finally another source of error in your current code was the fact that you were not keeping the count for every patient id. You needed to increment illnesses.at[index, "Count_of_Patientes_Having"] not set it to a new value each time.
The code would be something like (for the last few lines), assuming you want to keep the disease name and the index separate:
for index, ctr in enumerate(data[:1]):
    illnesses.at[index, "Finding_Label"] = ctr
    illnesses.at[index, "Count_of_Times_Being_Shown_In_An_Image"] = len(raw_data[raw_data["Finding Labels"].str.contains(ctr)]) / 12
    illnesses.at[index, "Count_of_Patientes_Having"] = len(raw_data[raw_data['Finding Labels'].str.contains(ctr)])
I took the liberty of using enumerate for a more pythonic way of handling indexes. I also don't really know what "Count_of_Times_Being_Shown_In_An_Image" is, but I assumed you had had the same confusion between size and len.
Likely the reason your code is slow is that you are growing a data frame row by row inside a loop, which can involve repeated in-memory copying. This is usually a sign of general-purpose Python rather than pandas programming, which ideally handles data in blockwise, vectorized operations.
Consider a cross join of your data (assuming a reasonable data size) with the list of illnesses, lining up Finding Labels with each illness in the same row so the rows can be filtered to those where the longer string contains the shorter item. Then run a couple of groupby() calls to return the count and the distinct count by patient.
# CROSS JOIN LIST WITH MAIN DATA FRAME (ALL ROWS MATCHED)
raw_data = (raw_data.assign(key=1)
                    .merge(pd.DataFrame({'ills': ills, 'key': 1}), on='key')
                    .drop(columns=['key'])
            )
# SUBSET BY ILLNESS CONTAINED IN LONGER STRING
raw_data = raw_data[raw_data.apply(lambda x: x['ills'] in x['Finding Labels'], axis=1)]
# CALCULATE GROUP BY count AND distinct count
def count_distinct(grp):
    return (grp.groupby('Patient ID').size()).size

illnesses = pd.DataFrame({'Count_of_Times_Being_Shown_In_An_Image': raw_data.groupby('ills').size(),
                          'Count_of_Patients_Having': raw_data.groupby('ills').apply(count_distinct)})
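As a side note, the distinct count by patient could also be written with nunique on the same cross-joined frame, which should be equivalent to the count_distinct helper above:
raw_data.groupby('ills')['Patient ID'].nunique()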
To demonstrate, consider the example below with random, seeded input data and output.
Input Data (attempting to mirror original data)
import numpy as np
import pandas as pd
alpha = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'
data_tools = ['sas', 'stata', 'spss', 'python', 'r', 'julia']
ills = ["Cardiomegaly", "Emphysema", "Effusion", "No Finding", "Hernia",
"Infiltration", "Mass", "Nodule", "Atelectasis", "Pneumothorax",
"Pleural_Thickening", "Pneumonia", "Fibrosis", "Edema", "Consolidation"]
np.random.seed(542019)
raw_data = pd.DataFrame({'Patient ID': np.random.choice(data_tools, 25),
                         'Finding Labels': np.core.defchararray.add(
                             np.core.defchararray.add(
                                 np.array([''.join(np.random.choice(list(alpha), 3)) for _ in range(25)]),
                                 np.random.choice(ills, 25).astype('str')),
                             np.array([''.join(np.random.choice(list(alpha), 3)) for _ in range(25)]))
                         })
print(raw_data.head(10))
# Patient ID Finding Labels
# 0 r xPNPneumothoraxXYm
# 1 python ScSInfiltration9Ud
# 2 stata tJhInfiltrationJtG
# 3 r thLPneumoniaWdr
# 4 stata thYAtelectasis6iW
# 5 sas 2WLPneumonia1if
# 6 julia OPEConsolidationKq0
# 7 sas UFFCardiomegaly7wZ
# 8 stata 9NQHerniaMl4
# 9 python NB8HerniapWK
Output (after running above process)
print(illnesses)
# Count_of_Times_Being_Shown_In_An_Image Count_of_Patients_Having
# ills
# Atelectasis 3 1
# Cardiomegaly 2 1
# Consolidation 1 1
# Effusion 1 1
# Emphysema 1 1
# Fibrosis 2 2
# Hernia 4 3
# Infiltration 2 2
# Mass 1 1
# Nodule 2 2
# Pleural_Thickening 1 1
# Pneumonia 3 3
# Pneumothorax 2 2
I am working on an assignment for the Coursera Introduction to Data Science course. I have a dataframe with 'Country' as the index and 'Rank' as one of the columns. When I try to reduce the dataframe to include only the rows with countries ranked 1-15, the following works but excludes Iran, which is ranked 13.
df.set_index('Country', inplace=True)
df.loc['Iran', 'Rank'] = 13  # I did this in case there was some sort of corruption in the original data
df_top15 = df.where(df.Rank < 16).dropna().copy()
return df_top15
When I try
df_top15 = df.where(df.Rank == 12).dropna().copy()
I get the row for Spain.
But when I try
df_top15 = df.where(df.Rank == 13).dropna().copy()
I just get the column headers, no row for Iran.
I also tried
df.Rank == 13
and got a series with False for all countries but Iran, which was True.
Any idea what could be causing this?
Your code works fine:
df = pd.DataFrame([['Italy', 5],
                   ['Iran', 13],
                   ['Tinbuktu', 20]],
                  columns=['Country', 'Rank'])
res = df.where(df.Rank < 16).dropna()
print(res)
Country Rank
0 Italy 5.0
1 Iran 13.0
However, I dislike this method because, by masking, the dtype of your Rank series becomes float due to the initial conversion of some values to NaN.
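For example, with the small frame above, a quick check of the dtype shows the conversion:
print(df['Rank'].dtype)                      # int64
print(df.where(df.Rank < 16)['Rank'].dtype)  # float64, because the masked row becomes NaN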
A better idea, in my opinion, is to use query or loc. Using either method obviates the need for dropna:
res = df.query('Rank < 16')
res = df.loc[df['Rank'] < 16]
print(res)
Country Rank
0 Italy 5
1 Iran 13
I need to use a DataFrame as a lookup table on columns that are not part of the index. For example (this is a simple one just to illustrate):
import pandas as pd
westcoast = pd.DataFrame([['Washington','Olympia'],['Oregon','Salem'],
                          ['California','Sacramento']],
                         columns=['state','capital'])
print(westcoast)
state capital
0 Washington Olympia
1 Oregon Salem
2 California Sacramento
It's easy to lookup and get a Series as an output:
westcoast[westcoast.state=='Oregon'].capital
1 Salem
Name: capital, dtype: object
but I want to obtain the string 'Salem':
westcoast[westcoast.state=='Oregon'].capital.values[0]
'Salem'
and the .values[0] seems somewhat clunky... is there a better way?
(FWIW: my real data has maybe 50 rows at most, but lots of columns, so if I do set an index column, no matter what column I choose, there will be a lookup operation like this that is not based on an index, and the relatively small number of rows means that I don't care if it's O(n) lookup.)
Yes, you can use Series.item if the lookup will always return exactly one element from the Series:
westcoast.loc[westcoast.state=='Oregon', 'capital'].item()
The exceptional cases can also be handled, i.e. if the lookup returns nothing, or returns one or more values and you need only the first item:
s = westcoast.loc[westcoast.state=='Oregon', 'capital']
s = np.nan if s.empty else s.iat[0]
print (s) #Salem
s = westcoast.loc[westcoast.state=='New York', 'capital']
s = np.nan if s.empty else s.iat[0]
print (s)
nan
A more general solution handles the exceptions, because there are 3 possible output scenarios:
westcoast = pd.DataFrame([['Washington','Olympia'],['Oregon','Salem'],
                          ['California','Sacramento'],['Oregon','Portland']],
                         columns=['state','capital'])
print (westcoast)
state capital
0 Washington Olympia
1 Oregon Salem
2 California Sacramento
3 Oregon Portland
s = westcoast.loc[westcoast.state=='Oregon', 'capital']
# if no value returned
if s.empty:
    s = 'no match'
# if only one value returned
elif len(s) == 1:
    s = s.item()
else:
    # if multiple values returned, return a list of values
    s = s.tolist()
print (s)
['Salem', 'Portland']
It is possible to create a lookup function:
def look_up(a):
    s = westcoast.loc[westcoast.state==a, 'capital']
    # for no match
    if s.empty:
        return np.nan
    # for match only one value
    elif len(s) == 1:
        return s.item()
    else:
        # for return multiple values
        return s.tolist()
print (look_up('Oregon'))
['Salem', 'Portland']
print (look_up('California'))
Sacramento
print (look_up('New York'))
nan
If you are going to do frequent lookups of this sort, then it pays to make state the index:
state_capitals = westcoast.set_index('state')['capital']
print(state_capitals['Oregon'])
# Salem
With an index, each lookup is O(1) on average, whereas westcoast['state']=='Oregon' requires O(n) comparisons. Of course, building the index is also O(n), so you would need to do many lookups for this to pay off.
At the same time, once you have state_capitals the syntax is simple and dict-like. That might be reason enough to build state_capitals.
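For states that are not in the index, Series.get behaves much like dict.get; a small sketch using the westcoast frame from above:
import numpy as np

state_capitals = westcoast.set_index('state')['capital']
print(state_capitals.get('Washington'))        # Olympia
print(state_capitals.get('New York', np.nan))  # nan, the default for a missing state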
I have two columns, and I need to take specific string information from each column and create a new column with new strings based on it.
In the column "Name" I have well names; I need to look at the last 4 characters of each well name, and if they contain "H", then put "HZ" in a new column.
I need to do the same thing if the column "WELLTYPE" contains specific words.
Using the data analysis program Spotfire, I can do this all in one simple expression (see below).
case
When right([UWI],4)~="H" Then "HZ"
When [WELLTYPE]~="Horizontal" Then "HZ"
When [WELLTYPE]~="Deviated" Then "D"
When [WELLTYPE]~="Multilateral" Then "ML"
else "V"
End
What would be the best way to do this in Python pandas?
Is there a simple, clean way to do this all at once, like in the Spotfire expression above?
Here is the data table with the two columns and my hoped-for outcome column (it did not copy very well into this); I also provide the code for the table below.
Name WELLTYPE What I Want
0 HH-001HST2 Oil Horizontal HZ
1 HH-001HST Oil_Horizontal HZ
2 HB-002H Oil HZ
3 HB-002 Water_Deviated D
4 HB-002 Oil_Multilateral ML
5 HB-004 Oil V
6 HB-005 Source V
7 BB-007 Water V
Here is the code to create the dataframe
# Dataframe with hopeful outcome
raw_data = {'Name': ['HH-001HST2', 'HH-001HST', 'HB-002H', 'HB-002', 'HB-002', 'HB-004', 'HB-005', 'BB-007'],
            'WELLTYPE': ['Oil Horizontal', 'Oil_Horizontal', 'Oil', 'Water_Deviated', 'Oil_Multilateral', 'Oil', 'Source', 'Water'],
            'What I Want': ['HZ', 'HZ', 'HZ', 'D', 'ML', 'V', 'V', 'V']}
df = pd.DataFrame(raw_data, columns=['Name', 'WELLTYPE', 'What I Want'])
df
Nested 'where' variant:
df['What I Want'] = np.where(df.Name.str[-4:].str.contains('H'), 'HZ',
                    np.where(df.WELLTYPE.str.contains('Horizontal'), 'HZ',
                    np.where(df.WELLTYPE.str.contains('Deviated'), 'D',
                    np.where(df.WELLTYPE.str.contains('Multilateral'), 'ML',
                             'V'))))
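np.select is arguably a closer analogue of the Spotfire case expression, taking one list of conditions and one list of choices plus a default; a sketch over the same conditions:
import numpy as np

conditions = [df.Name.str[-4:].str.contains('H'),
              df.WELLTYPE.str.contains('Horizontal'),
              df.WELLTYPE.str.contains('Deviated'),
              df.WELLTYPE.str.contains('Multilateral')]
choices = ['HZ', 'HZ', 'D', 'ML']
df['What I Want'] = np.select(conditions, choices, default='V')   # first matching condition wins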
Using apply by row:
def criteria(row):
    # str.find returns -1 when not found, so test for >= 0
    if row.Name[-4:].find('H') >= 0:
        return 'HZ'
    elif row.WELLTYPE.find('Horizontal') >= 0:
        return 'HZ'
    elif row.WELLTYPE.find('Deviated') >= 0:
        return 'D'
    elif row.WELLTYPE.find('Multilateral') >= 0:
        return 'ML'
    else:
        return 'V'

df['want'] = df.apply(criteria, axis=1)
This feels more natural to me. Obviously subjective
from_name = df.Name.str[-4:].str.contains('H').map({True: 'HZ'})
regex = '(Horizontal|Deviated|Multilateral)'
m = dict(Horizontal='HZ', Deviated='D', Multilateral='ML')
from_well = df.WELLTYPE.str.extract(regex, expand=False).map(m)
df['What I Want'] = from_name.fillna(from_well).fillna('V')
print(df)
Name WELLTYPE What I Want
0 HH-001HST2 Oil Horizontal HZ
1 HH-001HST Oil_Horizontal HZ
2 HB-002H Oil HZ
3 HB-002 Water_Deviated D
4 HB-002 Oil_Multilateral ML
5 HB-004 Oil V
6 HB-005 Source V
7 BB-007 Water V