How to remove 'giambja01' from DataFrame? - python

I have a Data Science-related project. I need to remove a certain name from my DataFrame. Here is what I attempted:
delete_row_1 = batsal[batsal["playerID"]=='giambja01'].index
remaining_players = batsal.drop(delete_row_1)
To test whether this worked I wrote this and got False:
'giambja01' in remaining_players['playerID']
False
It seems to have worked, and yet when I run the following code I get this:
remaining_players['playerID']
10836 giambja01
13287 heltoto01
2446 berkmla01
11336 gonzalu01
8271 drewjd01
25101 pujolal01
17276 lawtoma02
82 abreubo01
5395 catalfr01
10852 giambje01
22174 nevinph01
20635 mientdo01
6275 coninje01
11545 gracema01
20173 mclemma01
23005 ordonma01
24596 pierrju01
22418 nixontr01
5903 clarkto02
30281 sweenmi01
20688 millake01
18086 loducpa01
11810 grievbe01
3145 boonebr01
29869 stewash01
33183 whitero02
32039 vidrojo01
Name: playerID, dtype: object
I am attaching a sample DataFrame:
batsal = pd.DataFrame({'playerID':['giambja01' , 'damonjo01' , 'saenzol01'],'Sex':['M','M','M']})
Please let me know what I did wrong.

The issue is that drop expects index labels rather than the value itself, and it can target either rows or columns. You have to designate the index of the rows you would like to remove with the index= keyword (and name any columns the data should be removed from with columns=). You should try:
df.drop(index=delete_row_1)
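For instance, with the sample DataFrame from the question (dropping the 'Sex' column below is only there to illustrate the columns= keyword):
import pandas as pd

batsal = pd.DataFrame({'playerID': ['giambja01', 'damonjo01', 'saenzol01'],
                       'Sex': ['M', 'M', 'M']})

delete_row_1 = batsal[batsal['playerID'] == 'giambja01'].index  # index labels of the matching rows

remaining_players = batsal.drop(index=delete_row_1)  # drops those rows
without_sex = batsal.drop(columns='Sex')             # drops a column instead
print(remaining_players)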

Try this, specifying the index:
remaining_players = batsal.drop(index=delete_row_1)
Find the documentation for DataFrame.drop here.
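As a side note on the False check in the question: the in operator on a Series tests the index labels, not the values, so a value-level check could look like this (a small sketch using the sample DataFrame):
remaining_players = batsal.drop(index=delete_row_1)

# 'in' checks a Series' index, so test the values instead
print('giambja01' in remaining_players['playerID'].values)   # False once the rows are gone
print(remaining_players['playerID'].eq('giambja01').any())   # the same check via a boolean mask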

Related

Pandas slices row and adds it to the first columns

I'm trying to display all the columns of a CSV file. This is the file info (screenshot attached).
And this is the code I'm using:
pd.options.display.max_colwidth = None
pd.options.display.max_columns = None
excel1 = pd.read_csv('CO-Chats1.csv', sep=';')
But when I read it, I get this.
Case Owner Resolved Date/Time Case Origin Case Number Status \
0 Reinaldo Franco 10/16/2021, 3:54 PM Chat 20546561 Resolved
1 Catalina Sanchez 10/16/2021, 5:38 AM Chat 5625033 Resolved
Subject
0 General Support
1 Support for payment
Not sure what causes the \ or why the remaining columns get appended underneath the first ones.
You should try to use display() instead of print() to see the output.
excel1 = pd.read_csv('CO-Chats1.csv', sep=';')
display(excel1)
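If this is running in a plain terminal where display() is not available (an assumption about your environment), another option is to stop pandas from wrapping the wide frame, which is what the \ marker indicates:
import pandas as pd

pd.options.display.max_columns = None
pd.options.display.width = None   # in a terminal, None lets pandas auto-detect the width instead of wrapping
excel1 = pd.read_csv('CO-Chats1.csv', sep=';')
print(excel1.to_string(index=False))   # renders every column on a single line per row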

Is pandas and numpy any good for manipulation of non numeric data?

I've been going in circles for days now, and I've run out of steam. Doesn't help that I'm new to python / numpy / pandas etc.
I started with numpy which led me to pandas, because of a GIS function that delivers a numpy array of data. That is my starting point. I'm trying to get to an endpoint being a small enriched dataset, in an excel spreadsheet.
But it seems like going down a rabbit hole trying to extract that data, and then manipulate it with the numpy toolsets. The delivered data is one dimensional, but each row contains 8 fields. A simple conversion to pandas and then to ndarray, magically makes it all good. Except that I lose headers in the process, and it just snowballs from there.
I've had to re-evaluate my understanding, based on some feedback on another post, and that's fine. But I'm just going in circles. Example after example seems to use predominantly numerical data, and I'm starting to get the feeling that's where its strength lies. Trying to use it for what I'd call a more non-mathematical/non-numerical purpose looks like barking up the wrong tree.
Any advice?
Addendum
The data I extract from the GIS system is names, dates, other textual data. I then have another csv file that I need to use as a lookup, so that I can enrich the source with more textual information which finally gets published to excel.
SAMPLE DATA - SOURCE
WorkCode Status WorkName StartDate EndDate siteType Supplier
0 AT-W34319 None Second building 2020-05-04 2020-05-31 Type A Acem 1
1 AT-W67713 None Left of the red office tower 2019-02-11 2020-08-28 Type B Quester Q
2 AT-W68713 None 12 main street 2019-05-23 2020-11-03 Class 1 Type B Dettlim Group
3 AT-W70105 None city central 2019-03-07 2021-08-06 Other Hans Int
4 AT-W73855 None top floor 2019-05-06 2020-10-28 Type a None
SAMPLE DATA - CSV
["Id", "Version","Utility/Principal","Principal Contractor Contact"]
XM-N33463,7.1,"A Contracting company", "555-12345"
XM-N33211,2.1,"Contractor #b", "555-12345"
XM-N33225,1.3,"That other contractor", "555-12345"
XM-N58755,1.0,"v Contracting", "555-12345"
XM-N58755,2.3,"dsContracting", "555-12345"
XM-222222,2.3,"dsContracting", "555-12345"
BM-O33343,2.1,"dsContracting", "555-12345"
def SMAN():
    ####################################################################################################################
    # Exporting the results of the analysis...
    ####################################################################################################################
    """
    Approach is as follows:
    1) Get the source data
    2) Get the CSV lookup data loaded into memory - it'll be faster
    3) Iterate through the source data, looking for matches in the CSV data
    4) Add an extra couple of columns onto the source data, and populate it with the (matching) lookup data.
    5) Export the now enhanced data to excel.
    """
    arcpy.env.workspace = workspace + filenameGDB
    input = "ApprovedActivityByLocalBoard"
    exportFile = arcpy.da.FeatureClassToNumPyArray(input, ['WorkCode', 'Status', 'WorkName', 'PSN2', 'StartDate', 'EndDate', 'siteType', 'Supplier'])
    # we have our data, but it's (9893,) instead of [9893 rows x 8 columns]
    pdExportFile = pandas.DataFrame(exportFile)
    LBW = pdExportFile.to_numpy()
    del exportFile
    del pdExportFile
    # Now we have [9893 rows x 8 columns] - but we've lost the headers
    col_list = ["WorkCode", "Version", "Principal", "Contact"]
    allPermits = pandas.read_csv("lookup.csv", usecols=col_list)
    # Now we have the CSV file loaded ... and only the important parts - should be fast.
    # Shape: (94523, 4)
    # will have to find a way to improve this...
    # CSV file has got more than WorkCode, because there are different versions (as different records)
    # Only want the last one.
    # each record must now be "enhanced" with matching record from the CSV file.
    finalReport = []  # we are expecting this to be [9893 rows x 12 columns] at the end
    counter = -1
    for eachWorksite in LBW[:5]:  # let's just work with 5 records right now...
        counter += 1
        # eachWorksite = list(eachWorksite)  # eachWorksite is a tuple - so need to convert it
        # # but if we change it to a list, we lose the headers!
        certID = LBW[counter][0]  # get the ID to use for lookup matching
        # Search the CSV data
        permitsFound = allPermits[allPermits['Id'] == certID]
        permitsFound = permitsFound.to_numpy()
        if numpy.shape(permitsFound)[0] > 1:
            print("Too many hits!")  # got to deal with that CSV Version field.
            exit()
        else:
            # now "enrich" the record/row by adding on the fields from the lookup
            # so a row goes from 8 fields to 12 fields
            newline = numpy.append(eachWorksite, permitsFound)
            # and this enhanced record/row must become the new normal
            # but I cannot change the original, so it must go into a new container
            finalReport = numpy.append(finalReport, newline, axis=0)
    # now I should have a new container, of "enriched" data
    # which has gone from [9893 rows x 8 columns] to [9893 rows x 12 columns]
    # Some of the columns of course, could be empty.
    # Now let's dump the results to an Excel file and make it accessible for everyone else.
    df = pandas.DataFrame(finalReport)
    filepath = 'finalreport.csv'
    df.to_csv(filepath, index=False)
    # Somewhere I was getting Error("Cannot convert {0!r} to Excel".format(value))
    # Now I get
    filepath = 'finalReport.xlsx'
    df.to_excel(filepath, index=False)
I have eventually answered my own question, and this is how:
Yes, for my situation, pandas worked just fine, even beautifully for
manipulating non numerical data. I just had to learn some basics.
The biggest learning was to understand the pandas DataFrame as an object that has to be manipulated remotely by various functions/tools. Just because I "print" the DataFrame doesn't mean it's just text. (Thanks juanpa.arrivillaga for pointing out my erroneous assumptions in Why can I not reproduce a nd array manually?)
I also had to wrap my mind around the concept of indexes and columns, and how they could be altered/manipulated/ etc. And then, how to use them to maximum effect.
Once those fundamentals had been sorted, the rest followed naturally, and my code reduced to a couple of nice elegant functions.
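For anyone following the same path, here is a rough sketch of that kind of pandas-only enrichment (column names are taken from the samples above; joining WorkCode against the CSV's Id, and keeping only the latest Version per Id, are assumptions based on the loop in the question):
import pandas as pd

# source data - in the real script this comes from
# pandas.DataFrame(arcpy.da.FeatureClassToNumPyArray(...)), which keeps the field names as headers
source = pd.DataFrame({
    'WorkCode': ['XM-N33463', 'AT-W34319'],
    'WorkName': ['Second building', 'Left of the red office tower'],
})

# lookup data: the CSV holds several Versions per Id, keep only the newest record
lookup = pd.read_csv('lookup.csv',
                     usecols=['Id', 'Version', 'Utility/Principal', 'Principal Contractor Contact'])
lookup = lookup.sort_values('Version').drop_duplicates('Id', keep='last')

# enrich: a left join keeps every source row and appends the lookup columns in one step,
# replacing the per-row loop and numpy.append calls
enriched = source.merge(lookup, left_on='WorkCode', right_on='Id', how='left')

# publish
enriched.to_excel('finalReport.xlsx', index=False)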
Cheers

Potentially faulty or weird behavior for pandas.series.isin

I have 2 tables in my database (visits, events).
visits has a primary key visit_id,
events_and_pages has a column visit_id which is sort of a foreign key of visits. (An events row can belong to 0 to 1 visit)
What I want to do: filter out of the events table all the visit_id values that don't belong to the visits table. Simple task.
I have the data for each of those tables stored in pandas.DataFrame, respectively df_visits and df_events
I do the following operation :
len(set(df_visits.visit_id) - set(df_events.visit_id))
I get a result of 1670, which is consistent with what I should expect.
But when I do
filter_real_v = df_events.visit_id.isin(set(visits.visit_id))
filter_real_v.value_counts() # I get only True values
filter_real_v = df_events.visit_id.isin(visits.visit_id)
filter_real_v.value_counts() # I get only True values
Even weirder, when I use
pd.DataFrame(df_events.visit_id).isin(real_visits).visit_id.value_counts() #I get all False values except 8 that are True
pd.DataFrame(df_events.visit_id).isin(set(real_visits)).visit_id.value_counts() #I get all True values
What is going on here? And how can I define a filter for which visit_id exists in events but not in visits?
Please find in this link, the df_events and df_visits csv files to reproduce this error (comma separated index,visit_id)
EDIT : Add snippet for minimal reproducible code:
Download the files in the link and put them in a file_path_events & file_path_visits of your choosing
Execute the code below:
import pandas as pd
events = pd.read_csv("df_events.csv")
events.set_index('index',inplace=True)
visits = pd.read_csv("df_visits.csv")
visits.set_index('index',inplace=True)
correct_delta = len(set(visits.visit_id) - set(events.visit_id))
print(correct_delta) #1670
filter_real_v = events.visit_id.isin(set(visits.visit_id))
bad_delta = filter_real_v.value_counts()
print(bad_delta[True]) #702680
Best regards
Everything is behaving correctly; you're just misinterpreting the set operation "-"
len(set(df_visits.visit_id) - set(df_events.visit_id))
Will return the values of df_visits.visit_id not in df_events.visit_id. Note: If values of df_events.visit_id are not in df_visits.visit_id they will not be represented here. This is how sets work.
For example:
set([1,2,3,9]) - set([9,10,11])
Output:
{1, 2, 3}
Notice how 10 and 11 do not show up in the result; in fact, nothing from the second set ever will. Set difference only takes the second set's values away from the first set.
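If you wanted the values missing in the other direction, or in either direction, you could flip the operands or use the symmetric difference:
set([9, 10, 11]) - set([1, 2, 3, 9])    # {10, 11} - only in the second set
set([1, 2, 3, 9]) ^ set([9, 10, 11])    # {1, 2, 3, 10, 11} - in exactly one of the two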
With isin() you are effectively doing:
visits['visit_id'].isin(df_events['visit_id'].values).value_counts()
True 56071
False 1670
# Note 1670 is the exact same you got in your set operation
and not:
df_events['visit_id'].isin(visits['visit_id'].values).value_counts()
True 702680
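To build the filter the question actually asks for (visit_ids that exist in events but not in visits), you can negate the isin mask; a minimal sketch using the variable names from the EDIT snippet:
# events rows whose visit_id never appears in visits
orphan_mask = ~events['visit_id'].isin(visits['visit_id'])
orphan_events = events[orphan_mask]
print(orphan_mask.value_counts())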

Multiprocessing group apply python

I have two DataFrames grouped by city: one whose rows are to be processed group by group, and another whose groups are looked up against during that processing.
test = pd.DataFrame({'Address1':['123 Cheese Way','234 Cookie Place','345 Pizza Drive','456 Pretzel Junction'],'city':['X','U','X','U']})
test2 = pd.DataFrame({'Address1':['123 chese wy','234 kookie Pl','345 Pizzza DR','456 Pretzel Junktion'],'city':['X','U','Z','Y'] , 'ID' : ['1','3','4','8']})
gr1 = test.groupby('city')
gr2 = test2.groupby('city')
Currently I am applying my function to every row of the group,
gr1.apply(lambda x: custom_func(x.Address1, gr2.get_group(x.name)))
However I don't know how to do multiprocessing on this. Please advise.
EDIT: I tried to use dask, but I can't pass the entire data frame to my function in dask, as there is a limitation with its apply function. And I tried to use dask apply on my gr1 (group), but since I am setting an index in my custom function, dask throws an error saying "Too many indexers".
Here with Dask, this gives me an error - 'Pandas' object has no attribute 'city'
ddf1 = dd.from_pandas(test, 2)
ddf2 = dd.from_pandas(test2, 2)
dgr1 = ddf1.groupby('city')
dgr2 = ddf2.groupby('city')
meta = pd.DataFrame(columns=['Address1', 'score', 'idx','source_index'])
ddf1.map_partitions(custom_func, x.Address1, dgr2.get_group(x.city).Address1,meta=meta).compute()
I provide an alternative solution to using dask here,
import pandas as pd
from multiprocessing import Pool
test = pd.DataFrame({'Address1':['123 Cheese Way','234 Cookie Place','345 Pizza Drive','456 Pretzel Junction'],'city':['X','U','X','U']})
test2 = pd.DataFrame({'Address1':['123 chese wy','234 kookie Pl','345 Pizzza DR','456 Pretzel Junktion'],'city':['X','U','Z','Y'] , 'ID' : ['1','3','4','8']})
test=test.assign(dataset = 'test')
test2=test2.assign(dataset = 'test2')
newdf=pd.concat([test2,test],keys = ['test2','test'])
gpd=newdf.groupby('city')
def my_func(mygrp):
    test_data = mygrp.loc['test']
    test2_data = mygrp.loc['test2']
    # do something specific
    # if needed print something
    return {'Address': test2_data.Address1.values[0], 'ID': test2_data.ID.values[0]}  # return some other stuff
mypool=Pool(processes=2)
ret_list=mypool.imap(my_func,(group for name, group in gpd))
pd.DataFrame(ret_list)
returns something like
ID address
0 3 234 kookie Pl
1 1 123 chese wy
2 8 456 Pretzel Junktion
3 4 345 Pizzza DR
PS: In the OP's question two similar datasets are compared in a specialized function; the solution here uses pandas.concat. One could also imagine a pd.merge, depending on the problem.
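A rough sketch of what that merge-based alternative could look like (the join on city is an assumption; scoring/filtering the candidate pairs is left to the specialized function):
import pandas as pd

test = pd.DataFrame({'Address1': ['123 Cheese Way', '234 Cookie Place', '345 Pizza Drive', '456 Pretzel Junction'],
                     'city': ['X', 'U', 'X', 'U']})
test2 = pd.DataFrame({'Address1': ['123 chese wy', '234 kookie Pl', '345 Pizzza DR', '456 Pretzel Junktion'],
                      'city': ['X', 'U', 'Z', 'Y'], 'ID': ['1', '3', '4', '8']})

# pair up every test row with every test2 row from the same city,
# then score/filter the pairs inside one flat DataFrame instead of a per-group apply
pairs = test.merge(test2, on='city', how='inner', suffixes=('_left', '_right'))
print(pairs)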

Setting pandas conditions for columns by row, Python 2.7

(I suck at titling these questions...)
So I've gotten 90% of the way through a very laborious learning process with pandas, but I have one thing left to figure out. Let me show an example (actual original is a comma-delimited CSV that has many more rows):
Name Price Rating URL Notes1 Notes2 Notes3
Foo $450 9 a.com/x NaN NaN NaN
Bar $99 5 see over www.b.com Hilarious Nifty
John $551 2 www.c.com Pretty NaN NaN
Jane $999 8 See Over in Notes Funky http://www.d.com Groovy
The URL column can say many different things, but they all include "see over," and do not indicate with consistency which column to the right includes the site.
I would like to do a few things here: first, move websites from any Notes column to URL; second, collapse all Notes columns into one column with a new line between them. So this (NaNs removed, because pandas makes me do that in order to use them in df.loc):
Name Price Rating URL Notes1
Foo $450 9 a.com/x
Bar $99 5 www.b.com Hilarious
Nifty
John $551 2 www.c.com Pretty
Jane $999 8 http://www.d.com Funky
Groovy
I got partway there by doing this:
df['URL'] = df['URL'].fillna('')
df['Notes1'] = df['Notes1'].fillna('')
df['Notes2'] = df['Notes2'].fillna('')
df['Notes3'] = df['Notes3'].fillna('')
to_move = df['URL'].str.lower().str.contains('see over')
df.loc[to_move, 'URL'] = df['Notes1']
What I don't know is how to find the Notes column with either www or .com. If I, for example, try to use my above method as a condition, e.g.:
if df['Notes1'].str.lower().str.contains('www'):
df.loc[to_move, 'URL'] = df['Notes1']
I get back "ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()". But adding .any() or .all() has the obvious flaw that they don't give me what I'm looking for: with any, e.g., every line that meets the to_move requirement in URL will get whatever's in Notes1. I need the check to occur row by row. For similar reasons, I can't even get started collapsing the Notes columns (and I don't know how to check for non-null empty string cells either, a problem I created at this point).
Where it stands, I know I also have to move in Notes2 to Notes1, Notes3 to Notes2, and '' to Notes3 when the first condition is satisfied, because I don't want the leftover URLs in the Notes columns. I'm sure pandas has easier routes than what I'm doing, because it's pandas, and when I try to do anything with pandas, I find out that it can be done in one line instead of my 20...
(PS, I don't care if the empty columns Notes2 and Notes3 are left over, b/c I'm not using them in my CSV import in the next step, though I can always learn more than I need)
UPDATE: So I figured out a crummy verbose solution using my non-pandas python logic one step at a time. I came up with this (same first five lines above, minus the df.loc line):
url_in1 = df['Notes1'].str.contains('\.com')
url_in2 = df['Notes2'].str.contains('\.com')
to_move = df['URL'].str.lower().str.contains('see-over')
to_move1 = to_move & url_in1
to_move2 = to_move & url_in2
df.loc[to_move1, 'URL'] = df.loc[url_in1, 'Notes1']
df.loc[url_in1, 'Notes1'] = df['Notes2']
df.loc[url_in1, 'Notes2'] = ''
df.loc[to_move2, 'URL'] = df.loc[url_in2, 'Notes2']
df.loc[url_in2, 'Notes2'] = ''
(Lines moved around and to_move repeated in actual code) I know there has to be a more efficient method... This also doesn't collapse in the Notes columns, but that should be easy using the same method, except that I still don't know a good way to find the empty strings.
I'm still learning pandas, so some parts of this code may not be so elegant, but the general idea is: get all Notes columns, find all URLs in there, combine them with the URL column, and then concat the remaining notes into the Notes1 column:
import pandas as pd
import numpy as np
import pandas.core.strings as strings
# Just to get first notnull occurence
def geturl(s):
    try:
        return next(e for e in s if not pd.isnull(e))
    except:
        return np.NaN
df = pd.read_csv("d:/temp/data2.txt")
dfnotes = df[[e for e in df.columns if 'Notes' in e]]
# Notes1 Notes2 Notes3
# 0 NaN NaN NaN
# 1 www.b.com Hilarious Nifty
# 2 Pretty NaN NaN
# 3 Funky http://www.d.com Groovy
dfurls = dfnotes.apply(lambda x: x.str.contains('\.com'), axis=1)
dfurls = dfurls.fillna(False).astype(bool)
# Notes1 Notes2 Notes3
# 0 False False False
# 1 True False False
# 2 False False False
# 3 False True False
turl = dfnotes[dfurls].apply(geturl, axis=1)
df['URL'] = np.where(turl.isnull(), df['URL'], turl)
df['Notes1'] = dfnotes[~dfurls].apply(lambda x: strings.str_cat(x[~x.isnull()], sep=' '), axis=1)
del df['Notes2']
del df['Notes3']
df
# Name Price Rating URL Notes1
# 0 Foo $450 9 a.com/x
# 1 Bar $99 5 www.b.com Hilarious Nifty
# 2 John $551 2 www.c.com Pretty
# 3 Jane $999 8 http://www.d.com Funky Groovy
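Note that pandas.core.strings is a private module, so the str_cat import may break on newer pandas versions; the same concatenation using only the public API (a sketch reusing the dfnotes and dfurls variables above) would be:
# join the remaining (non-URL, non-null) notes with a space
df['Notes1'] = dfnotes[~dfurls].apply(lambda x: ' '.join(x.dropna().astype(str)), axis=1)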
