How to drop duplicates ignoring one column - python

I have a DataFrame with multiple columns, and the last column is a timestamp which I want pandas to ignore. I've used drop_duplicates(subset=...) but it does not work, as it returns literally the same DataFrame.
This is what the DataFrame looks like:
    id     name  features            timestamp
1   34233  Bob   athletics           04-06-2022
2   23423  John  mathematics         03-06-2022
3   34233  Bob   english_literature  06-06-2022
4   23423  John  mathematics         10-06-2022
..  ...    ...   ...                 ...
And these are the data types from df.dtypes:
id           int64
name         object
features     object
timestamp    object
Lastly, this is the piece of code I used:
df.drop_duplicates(subset=df.columns.tolist().remove("timestamp"), keep="first").reset_index(drop=True)
The idea is to keep track of changes, based on the timestamp, IF there are changes to the other columns. For instance, I don't want to keep row 4 because nothing has changed for John; however, I want to keep both Bob rows, as his features changed from athletics to english_literature. Does that make sense?
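For reference, here is a minimal sketch of that intended behavior, using hypothetical data mirroring the table above:
import pandas as pd

df = pd.DataFrame({
    "id": [34233, 23423, 34233, 23423],
    "name": ["Bob", "John", "Bob", "John"],
    "features": ["athletics", "mathematics", "english_literature", "mathematics"],
    "timestamp": ["04-06-2022", "03-06-2022", "06-06-2022", "10-06-2022"],
})

# deduplicate on every column except timestamp: the second John row drops,
# both Bob rows survive because features changed
subset = [c for c in df.columns if c != "timestamp"]
print(df.drop_duplicates(subset=subset, keep="first").reset_index(drop=True))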
EDIT:
This is the full code:
"""
db_data contains 10 records
new_data contains 12 records but I know only 5 are needed based on the logic I want to implement
"""
db_data = pd.read_sql("SELECT * FROM subscribed", engine)
new_data = pd.read_csv("new_data.csv")
# Checking columns match
# This prints "matching"
if db_data.columns == new_data.columns: print("matching")
df = pd.concat([db_data, new_data], axis=1)
consider = [x for x in df.columns if x != "timestamp"]
df = df.drop_duplicates(subset=consider).reset_index(drop=True)
# This outputs 22 but should have printed 15
print(len(df))
TEST:
I've done a test, but it has puzzled me even more. I created a separate table in the db, loaded the csv file new_data.csv into it, and then used read_sql to get it back into a DataFrame. Surprisingly, this works. However, I do not want to take this unnecessary extra step, and I am puzzled as to why it works. I've checked the data types and they match.
db_data = pd.read_sql("SELECT * FROM subscribed", engine)
new_data = pd.read_sql("SELECT * FROM test", engine)
# Checking columns match
# This still prints "matching"
if db_data.columns == new_data.columns: print("matching")
df = pd.concat([db_data, new_data], axis=1)
consider = [x for x in df.columns if x != "timestamp"]
df = df.drop_duplicates(subset=consider).reset_index(drop=True)
# This prints the right output... in other words, it worked.
print(len(df))

The remove method of a list returns None, so you are effectively passing subset=None and drop_duplicates compares all columns, including timestamp. That's why the returned DataFrame is identical. You can do as follows:
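A quick demonstration of the pitfall:
cols = df.columns.tolist()
print(cols.remove("timestamp"))  # prints None: remove() mutates cols in place
print(cols)                      # 'timestamp' is gone from cols itself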
Create the list of columns for the subset: col_subset = df.columns.tolist()
Remove timestamp: col_subset.remove('timestamp')
Use the col_subset list in the drop_duplicates() function: df.drop_duplicates(subset=col_subset, keep="first").reset_index(drop=True)

Try this:
consider = [x for x in df.columns if x != "timestamp"]
df.drop_duplicates(subset=consider).reset_index(drop=True)
(You don't need tolist() and keep="first" here)

If I understood you correctly, this code would do it:
df.drop_duplicates(subset='features', keep ='first').reset_index()

Related

Compare entire rows for equality if some condition is satisfied

Let's say I have the following data of a match in a CSV file:
name,match1,match2,match3
Alice,2,4,3
Bob,2,3,4
Charlie,1,0,4
I'm writing a Python program. Somewhere in my program I have scores collected for a match stored in a list, say x = [1,0,4]. I have found where in the data these scores exist using pandas, and I can print "found" or "not found". However, I want my code to print out which name these scores correspond to. In this case the program should output "Charlie", since Charlie has all these values [1,0,4]. How can I do that?
I will have a large set of data, so I must be able to tell which name corresponds to the numbers I pass to the program.
Yes, here's how to compare entire rows in a dataframe:
df[(df == x).all(axis=1)].index # where x is the pd.Series we're comparing to
Also, it makes life easiest if you directly set name as the index column when you read in the CSV.
import pandas as pd
from io import StringIO
df = """\
name,match1,match2,match3
Alice,2,4,3
Bob,2,3,4
Charlie,1,0,4"""
df = pd.read_csv(StringIO(df), index_col='name')
x = pd.Series({'match1':1, 'match2':0, 'match3':4})
Now you can see that doing df == x, or equivalently df.eq(x), is not quite what you want, because it does an element-wise compare and returns a True/False value per cell. So you need to aggregate each row with .all(axis=1), which finds rows where all comparison results were True...
df.eq(x).all(axis=1)
df[ (df == x).all(axis=1) ]
# match1 match2 match3
# name
# Charlie 1 0 4
...and then finally since you only want the name of such rows:
df[ (df == x).all(axis=1) ].index
# Index(['Charlie'], dtype='object', name='name')
df[ (df == x).all(axis=1) ].index.tolist()
# ['Charlie']
which is what you wanted. (I only added the spaces inside the expression for clarity).
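If you also need the question's "found / not found" behavior, a small wrapper might look like this (find_name is a hypothetical helper, and it assumes the scores list follows the column order):
def find_name(df, scores):
    # compare each column against the corresponding list element, row by row
    matches = df[(df == scores).all(axis=1)].index.tolist()
    return matches if matches else "not found"

print(find_name(df, [1, 0, 4]))  # ['Charlie']
print(find_name(df, [9, 9, 9]))  # not found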
You need to use DataFrame.loc which would work like this:
print(df.loc[(df.match1 == 1) & (df.match2 == 0) & (df.match3 == 4), 'name'])
Maybe try something like this:
import pandas as pd
import numpy as np
# Makes sample data
match1 = np.array([2,2,1])
match2 = np.array([4,4,0])
match3 = np.array([3,3,4])
name = np.array(['Alice','Bob','Charlie'])
df = pd.DataFrame({'name': name, 'match1': match1, 'match2': match2, 'match3': match3})
df
# example of the list you want to get the data from
x=[1,0,4]
#x=[2,4,3]
# should return the name Charlie as well as the index (based on the values in the list x)
df['name'].loc[(df['match1'] == x[0]) & (df['match2'] == x[1]) & (df['match3'] ==x[2])]
# Makes a new dataframe out of the above
mydf = pd.DataFrame(df['name'].loc[(df['match1'] == x[0]) & (df['match2'] == x[1]) & (df['match3'] ==x[2])])
# Loop that prints out the name based on the index of mydf
# (if there is more than one matching name, it will print all of them; if there is only one, it will print only that)
for i in range(len(mydf)):
    print(mydf['name'].iloc[i])
You can use this. Here data is your DataFrame (change the name according to your DataFrame), and considering [1,0,4] is int type:
data = data[(data['match1'] == 1) & (data['match2'] == 0) & (data['match3'] == 4)].index
print(data[0])
If data is object type then use this:
data = data[(data['match1'] == "1") & (data['match2'] == "0") & (data['match3'] == "4")].index
print(data[0])

Python remove everything after specific string and loop through all rows in multiple columns in a dataframe

I have a file full of URL paths like below spanning across 4 columns in a dataframe that I am trying to clean:
Path1 = ["https://contentspace.global.xxx.com/teams/Australia/WA/Documents/Forms/AllItems.aspx?\
RootFolder=%2Fteams%2FAustralia%2FWA%2FDocuments%2FIn%20Scope&FolderCTID\
=0x012000EDE8B08D50FC3741A5206CD23377AB75&View=%7B287FFF9E%2DD60C%2D4401%2D9ECD%2DC402524F1D4A%7D"]
I want to remove everything after a specific string which I defined it as "string1" and I would like to loop through all 4 columns in the dataframe defined as "df_MasterData":
string1 = "&FolderCTID"
import pandas as pd
df_MasterData = pd.read_excel(FN_MasterData)
cols = ['Column_A', 'Column_B', 'Column_C', 'Column_D']
for i in cols:
    # Objective: replace "&FolderCTID", delete all string after
    string1 = "&FolderCTID"
    # Method 1
    df_MasterData[i] = df_MasterData[i].str.split(string1).str[0]
    # Method 2
    df_MasterData[i] = df_MasterData[i].str.split(string1).str[1].str.strip()
    # Method 3
    df_MasterData[i] = df_MasterData[i].str.split(string1)[:-1]
I searched Google and found similar solutions, but none of them work.
Can any guru shed some light on this? Any assistance is appreciated.
Added below are a few example rows in columns A and B for these URLs:
Column_A = ['https://contentspace.global.xxx.com/teams/Australia/NSW/Documents/Forms/AllItems.aspx?\
RootFolder=%2Fteams%2FAustralia%2FNSW%2FDocuments%2FIn%20Scope%2FA%20I%20TOPPER%20GROUP&FolderCTID=\
0x01200016BC4CE0C21A6645950C100F37A60ABD&View=%7B64F44840%2D04FE%2D4341%2D9FAC%2D902BB54E7F10%7D',\
'https://contentspace.global.xxx.com/teams/Australia/Victoria/Documents/Forms/AllItems.aspx?RootFolder\
=%2Fteams%2FAustralia%2FVictoria%2FDocuments%2FIn%20Scope&FolderCTID=0x0120006984C27BA03D394D9E2E95FB\
893593F9&View=%7B3276A351%2D18C1%2D4D32%2DADFF%2D54158B504FCC%7D']
Column_B = ['https://contentspace.global.xxx.com/teams/Australia/WA/Documents/Forms/AllItems.aspx?\
RootFolder=%2Fteams%2FAustralia%2FWA%2FDocuments%2FIn%20Scope&FolderCTID=0x012000EDE8B08D50FC3741A5\
206CD23377AB75&View=%7B287FFF9E%2DD60C%2D4401%2D9ECD%2DC402524F1D4A%7D',\
'https://contentspace.global.xxx.com/teams/Australia/QLD/Documents/Forms/AllItems.aspx?RootFolder=%\
2Fteams%2FAustralia%2FQLD%2FDocuments%2FIn%20Scope%2FAACO%20GROUP&FolderCTID=0x012000E689A6C1960E8\
648A90E6EC3BD899B1A&View=%7B6176AC45%2DC34C%2D4F7C%2D9027%2DDAEAD1391BFC%7D']
This is how I would do it:
first, declare a variable with your target columns.
Then use stack() and str.split to get your target output.
Finally, unstack and reapply the output to your original df.
cols_to_slice = ['ColumnA','ColumnB','ColumnC','ColumnD']
string1 = "&FolderCTID"
df[cols_to_slice].stack().str.split(string1, expand=True)[0].unstack(1)
If you want to replace these columns in your target df, then simply do:
df[cols_to_slice] = df[cols_to_slice].stack().str.split(string1, expand=True)[0].unstack(1)
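For comparison, here is a sketch of the same cleanup done column by column; it is essentially Method 1 from the question, applied to all four columns at once, and likewise keeps the part of each URL before &FolderCTID:
df[cols_to_slice] = df[cols_to_slice].apply(lambda col: col.str.split(string1).str[0])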
You could first get the index of the string using str.find:
indexes = len(string1) + df_MasterData[i].str.find(string1)
# this selects the position just after string1;
# if you don't want string1 itself in the result, drop the len(string1) part:
indexes = df_MasterData[i].str.find(string1)
Now slice each string up to its index (str slicing does not accept a per-row stop, so use a comprehension):
df_MasterData[i] = [s[:idx] for s, idx in zip(df_MasterData[i], indexes)]

How can I improve the speed of pandas rows operations?

I have a large .csv file that has 11,000,000 rows and 3 columns: id, magh, mixid2.
What I have to do is select the rows with the same id and then check whether these rows have the same mixid2; if they do, I remove the rows, and if they don't, I initialize a class with the information from the selected rows.
This is my code:
obs=obs.set_index('id')
obs=obs.sort_index()
#dropping elements with only one mixid2 and filling S
ID=obs.index.unique()
S=[]
good_bye_list = []
for i in tqdm(ID):
    app = obs.loc[i]
    if len(np.unique(app['mixid2'])) != 1:
        # fill the class list
        S.append(star(app['magh'].values, app['mixid2'].values, z_in))
    else:
        # drop
        good_bye_list.append(i)
obs=obs.drop(good_bye_list)
The .csv file is very large so it takes 40 min to compute everything.
How can I improve the speed??
Thank you for the help.
This is the .csv file:
id,mixid2,magh
3447001203296326,557,14.25
3447001203296326,573,14.25
3447001203296326,525,14.25
3447001203296326,541,14.25
3447001203296330,540,15.33199977874756
3447001203296330,573,15.33199977874756
3447001203296333,172,17.476999282836914
3447001203296333,140,17.476999282836914
3447001203296333,188,17.476999282836914
3447001203296333,156,17.476999282836914
3447001203296334,566,15.626999855041506
3447001203296334,534,15.626999855041506
3447001203296334,550,15.626999855041506
3447001203296338,623,14.800999641418455
3447001203296338,639,14.800999641418455
3447001203296338,607,14.800999641418455
3447001203296344,521,12.8149995803833
3447001203296344,537,12.8149995803833
3447001203296344,553,12.8149995803833
3447001203296345,620,12.809000015258787
3447001203296345,543,12.809000015258787
3447001203296345,636,12.809000015258787
3447001203296347,558,12.315999984741213
3447001203296347,542,12.315999984741213
3447001203296347,526,12.315999984741213
3447001203296352,615,12.11299991607666
3447001203296352,631,12.11299991607666
3447001203296352,599,12.11299991607666
3447001203296360,540,16.926000595092773
3447001203296360,556,16.926000595092773
3447001203296360,572,16.926000595092773
3447001203296360,524,16.926000595092773
3447001203296367,490,15.80799961090088
3447001203296367,474,15.80799961090088
3447001203296367,458,15.80799961090088
3447001203296369,639,15.175000190734865
3447001203296369,591,15.175000190734865
3447001203296369,623,15.175000190734865
3447001203296369,607,15.175000190734865
3447001203296371,460,14.975000381469727
3447001203296373,582,14.532999992370605
3447001203296373,614,14.532999992370605
3447001203296373,598,14.532999992370605
3447001203296374,184,14.659000396728516
3447001203296374,203,14.659000396728516
3447001203296374,152,14.659000396728516
3447001203296374,136,14.659000396728516
3447001203296374,168,14.659000396728516
3447001203296375,592,14.723999977111815
3447001203296375,608,14.723999977111815
3447001203296375,624,14.723999977111815
3447001203296375,92,14.723999977111815
3447001203296375,76,14.723999977111815
3447001203296375,108,14.723999977111815
3447001203296375,576,14.723999977111815
3447001203296376,132,14.0649995803833
3447001203296376,164,14.0649995803833
3447001203296376,180,14.0649995803833
3447001203296376,148,14.0649995803833
3447001203296377,168,13.810999870300293
3447001203296377,152,13.810999870300293
3447001203296377,136,13.810999870300293
3447001203296377,184,13.810999870300293
3447001203296378,171,13.161999702453613
3447001203296378,187,13.161999702453613
3447001203296378,155,13.161999702453613
3447001203296378,139,13.161999702453613
3447001203296380,565,13.017999649047852
3447001203296380,517,13.017999649047852
3447001203296380,549,13.017999649047852
3447001203296380,533,13.017999649047852
3447001203296383,621,13.079999923706055
3447001203296383,589,13.079999923706055
3447001203296383,605,13.079999923706055
3447001203296384,541,12.732000350952148
3447001203296384,557,12.732000350952148
3447001203296384,525,12.732000350952148
3447001203296385,462,12.784000396728516
3447001203296386,626,12.663999557495115
3447001203296386,610,12.663999557495115
3447001203296386,577,12.663999557495115
3447001203296389,207,12.416000366210938
3447001203296389,255,12.416000366210938
3447001203296389,223,12.416000366210938
3447001203296389,239,12.416000366210938
3447001203296390,607,12.20199966430664
3447001203296390,591,12.20199966430664
3447001203296397,582,16.635000228881836
3447001203296397,598,16.635000228881836
3447001203296397,614,16.635000228881836
3447001203296399,630,17.229999542236328
3447001203296404,598,15.970000267028807
3447001203296404,631,15.970000267028807
3447001203296404,582,15.970000267028807
3447001203296408,540,16.08799934387207
3447001203296408,556,16.08799934387207
3447001203296408,524,16.08799934387207
3447001203296408,572,16.08799934387207
3447001203296409,632,15.84000015258789
3447001203296409,616,15.84000015258789
Hello and welcome to StackOverflow.
In pandas the rule of thumb is that raw loops are almost always slower than the dedicated functions. To apply a function to a sub-DataFrame of rows that fulfill certain criteria, you can use groupby.
In your case the function is a bit... unpythonic, as the instantiation of S is a side effect, and deleting rows you are currently iterating over is dangerous. With a dictionary, for example, you should never do this. That said, you can create a function like this:
def my_func(df):
    if df['mixid2'].nunique() == 1:
        return None
    else:
        S.append(df['mixid2'])
        return df
and apply it to your DataFrame via
S = []
obs.groupby('id').apply(my_func)
This iterates over all sub-DataFrames with the same id and drops them if there is exactly one unique value in mixid2. Otherwise it appends the values to the list S.
The resulting DataFrame is 3 rows shorter:
                                   id  mixid2       magh
id
3447001203296326 0   3447001203296326     557  14.250000
                 1   3447001203296326     573  14.250000
...                               ...     ...        ...
3447001203296409 98  3447001203296409     632  15.840000
                 99  3447001203296409     616  15.840000

[97 rows x 3 columns]
and S contains 28 elements, which you could pass into the star constructor just as you did.
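If only the filtering is needed (without building S), a fully vectorized sketch avoids the Python-level loop entirely; it assumes id has been turned back into a regular column:
obs = obs.reset_index()  # make id a column again, since the question set it as the index
keep = obs.groupby('id')['mixid2'].transform('nunique') > 1
obs = obs[keep]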
I guess you want to group by mixid2 and exclude all the elements where mixid2 appears only once, using set_index. To get the original shape, we use reset_index after the filtering.
df = obs.set_index('mixid2').loc[~obs.groupby('mixid2').count().id.eq(1)].reset_index()
df.shape
(44, 3)
I'm not entirely sure if I understood you correctly. But what you can do is first remove duplicates in your dataframe and then use the groupby function to get all the remaining data points with the same id:
# dropping all duplicates based on id and mixid2
df.drop_duplicates(["id", "mixid2"], inplace=True)
# then iterate over all groups:
for index, grp in df.groupby(["id"]):
    pass  # do stuff here with the grp
Normally it is a good idea to rely on pandas internal functions, since they are mostly optimised quite well.
new_df = obs.groupby(['id','mixid2'], as_index=False).agg('count')
new_df = new_df[new_df['magh'] > 1]
then pass new_df to your function.

Parsing JSON in Pandas

I need to extract the following JSON:
{"PhysicalDisks":[{"Status":"SMART Passed","Name":"/dev/sda"}]}
{"PhysicalDisks":[{"Status":"SMART Passed","Name":"/dev/sda"},{"Status":"SMART Passed","Name":"/dev/sdb"}]}
{"PhysicalDisks":[{"Status":"SMART Passed","Name":"/dev/sda"},{"Status":"SMART Passed","Name":"/dev/sdb"}]}
{"PhysicalDisks":[{"Name":"disk0","Status":"Passed"},{"Name":"disk1","Status":"Passed"}]}
{"PhysicalDisks":[{"Name":"disk0","Status":"Failed"},{"Name":"disk1","Status":"not supported"}]}
{"PhysicalDisks":[{"Name":"disk0","Status":"Passed"}]}
Name: raw_results, dtype: object
Into separate columns. I don't know how many disks per result there might be in future. What would be the best way here?
I tried the following:
d = raw_res['raw_results'].map(json.loads).apply(pd.Series).add_prefix('raw_results.')
Gives me: [output screenshot omitted]
Example output might be something like: [screenshot omitted]
A better way would be to add each disk check as an additional row into the dataframe, with the same checkid as the row it was extracted from. So for 3 disks in the results it will generate 3 rows, 1 per disk.
UPDATE
This code
# This works
import numpy as np

dfs = []
def json_to_df(row, json_col):
    json_df = pd.read_json(row[json_col])
    dfs.append(json_df.assign(**row.drop(json_col)))

df['raw_results'].replace("{}", np.nan, inplace=True)
df = df.dropna()
df.apply(json_to_df, axis=1, json_col='raw_results')
df = pd.concat(dfs)
df.head()
Adds an extra row for each disk (sda, sdb etc.)
So now I would need to split this column into 2: Status and Name.
df1 = df["PhysicalDisks"].apply(pd.Series)
df_final = pd.concat([df, df1], axis = 1).drop('PhysicalDisks', axis = 1)
df_final.head()
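As an alternative sketch, pd.json_normalize can flatten the PhysicalDisks list directly, one row per disk; raw_res and the checkid idea are taken from the question, while the exact layout here is an assumption:
import json
import pandas as pd

records = raw_res['raw_results'].map(json.loads)
disks = pd.concat(
    (pd.json_normalize(rec, record_path='PhysicalDisks').assign(checkid=idx)
     for idx, rec in records.items()),
    ignore_index=True,
)
disks.head()  # columns: Status, Name, checkid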

pandas - drop row with list of values, if contains from list

I have a huge set of data, something like 100k lines, and I am trying to drop a row from a dataframe if the row, which contains a list, contains a value from another dataframe. Here's a small example.
has = [['#a'], ['#b'], ['#c, #d, #e, #f'], ['#g']]
use = [1,2,3,5]
z = ['#d','#a']
df = pd.DataFrame({'user': use, 'tweet': has})
df2 = pd.DataFrame({'z': z})
tweet user
0 [#a] 1
1 [#b] 2
2 [#c, #d, #e, #f] 3
3 [#g] 5
z
0 #d
1 #a
The desired outcome would be
tweet user
0 [#b] 2
1 [#g] 5
Things I've tried:
# this seems to work for dropping #a but not #d
for a in range(df.tweet.size):
    for search in df2.z:
        if search in df.loc[a].tweet:
            df.drop(a)
#this works for my small scale example but throws an error on my big data
df['tweet'] = df.tweet.apply(', '.join)
test = df[~df.tweet.str.contains('|'.join(df2['z'].astype(str)))]
#the error being "unterminated character set at position 1343770"
#i went to check what was on that line and it returned this
basket.iloc[1343770]
user_id 17060480
tweet [#IfTheyWereBlackOrBrownPeople, #WTF]
Name: 4612505, dtype: object
Any help would be greatly appreciated.
Is ['#c, #d, #e, #f'] one string, or a list like this: ['#c', '#d', '#e', '#f']?
has = [['#a'], ['#b'], ['#c', '#d', '#e', '#f'], ['#g']]
use = [1,2,3,5]
z = ['#d','#a']
df = pd.DataFrame({'user': use, 'tweet': has})
df2 = pd.DataFrame({'z': z})
A simple solution would be:
screen = set(df2.z.tolist())
to_delete = list()  # this will speed things up, doing only 1 delete
for id, row in df.iterrows():
    if set(row.tweet).intersection(screen):
        to_delete.append(id)
df.drop(to_delete, inplace=True)
Speed comparison (for 10,000 rows):
st = time.time()
screen = set(df2.z.tolist())
to_delete = list()
for id, row in df.iterrows():
    if set(row.tweet).intersection(screen):
        to_delete.append(id)
df.drop(to_delete, inplace=True)
print(time.time()-st)
2.142000198364258
st = time.time()
for a in df.tweet.index:
    for search in df2.z:
        if search in df.loc[a].tweet:
            df.drop(a, inplace=True)
            break
print(time.time()-st)
43.99799990653992
For me, your code works if I make several adjustments.
First, you're missing the last line when using range(df.tweet.size); either increase it or (more robustly, if you don't have a plain increasing index) use df.tweet.index.
Second, you never apply your dropping; use inplace=True for that.
Third, you have #d inside a single string: '#c, #d, #e, #f' is not a list, and you have to change it to a list for this to work.
So if you change that, the following code works fine:
has = [['#a'], ['#b'], ['#c', '#d', '#e', '#f'], ['#g']]
use = [1,2,3,5]
z = ['#d','#a']
df = pd.DataFrame({'user': use, 'tweet': has})
df2 = pd.DataFrame({'z': z})
for a in df.tweet.index:
    for search in df2.z:
        if search in df.loc[a].tweet:
            df.drop(a, inplace=True)
            break  # so if we already dropped it we no longer look whether we should drop this line
This will provide the desired result. Be aware that it is potentially not optimal due to the missing vectorization.
EDIT:
You can turn each string into a proper list with the following:
from itertools import chain
df.tweet = df.tweet.apply(lambda l: [s.strip() for s in chain(*(e.split(",") for e in l))])
This applies a function to each line (assuming each line contains a list with one or more elements): split each element (a string) by comma, strip the surrounding whitespace (so ' #d' becomes '#d'), and flatten all the pieces of one line into a single list.
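For example, on a single row:
from itertools import chain
row = ['#c, #d, #e, #f']
print([s.strip() for s in chain(*(e.split(",") for e in row))])
# ['#c', '#d', '#e', '#f']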
EDIT2:
Yes, this is not really performant, but it basically does what was asked. Keep that in mind and, after having it working, try to improve your code (fewer for-iterations, tricks like collecting the indices and then dropping all of them at once).
