My ultimate goal is to check whether column names in df2 appear in a list of values extracted from df1.
I have a list of names and a function that checks if those names exist as column names in df1. However, this worked in plain Python and doesn't work in PySpark. The error I'm getting: AttributeError: 'DataFrame' object has no attribute 'values'.
How can I change my function so that it iterates over the column names? Or is there a way to compare my list values to df2's column names directly (against the full dataframe, i.e. without making a new dataframe with just the column names)?
# Function to check matching values
def checkIfDomainsExists(data, listOfValues):
    '''List of elements'''
    entityDomainList = Entity.select("DomainName").rdd.flatMap(lambda x: x).collect()
    # entityDomainList
    '''Check if given elements exist in data'''
    results_true = {}
    results_false = {}
    # Iterate over list of domains one by one
    for elem in listOfValues:
        # Check if the element exists among the dataframe's column names
        if elem in data.columns:
            results_true[elem] = True
        else:
            results_false[elem] = False
    # Return dictionary of values and their flag
    # Only return TRUE values
    return results_true

# Get TRUE matched column values
results_true = checkIfDomainsExists(psv, entityDomainList)
results_true
You don't need to write a function just to filter the values.
You can do this in the following ways:
import pyspark.sql.functions as f

df = spark.createDataFrame([(1, 'LeaseStatus'), (2, 'IncludeLeaseInIPM'), (5, 'NonExistantDomain')], ("id", "entity"))
domainList = ['LeaseRecoveryType', 'LeaseStatus', 'IncludeLeaseInIPM', 'LeaseAccountType', 'ClassofUse', 'LeaseType']

df.withColumn('Exists', df.entity.isin(domainList)).filter(f.col('Exists') == 'true').show()
+---+-----------------+------+
| id| entity|Exists|
+---+-----------------+------+
| 1| LeaseStatus| true|
| 2|IncludeLeaseInIPM| true|
+---+-----------------+------+
# Or you can filter directly, without adding an additional column
df.filter(f.col('entity').isin(domainList)).select('entity').collect()
[Row(entity='LeaseStatus'), Row(entity='IncludeLeaseInIPM')]
Hope it helps.
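If the goal is really to compare the list against df2's column names rather than against row values, you can skip the loop entirely and test membership against the columns list. A minimal sketch, assuming entityDomainList has already been collected and psv is the target DataFrame as in the question:

# Assumes entityDomainList and psv exist as in the question
results_true = {name: True for name in entityDomainList if name in psv.columns}

# Or, using set intersection to get just the matching names
matching_columns = set(entityDomainList) & set(psv.columns)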
Related
I have a dataframe with several columns, some of which have names that match the keys in a dictionary. I want to append the value of the dictionary item to the non-null values of the column whose name matches the key. Hopefully that isn't too confusing.
example:
import pandas as pd

realms = {}
realms['email'] = '<email>'
realms['android'] = '<androidID>'

df = pd.DataFrame()
df['email'] = ['foo#gmail.com', '', 'foo#yahoo.com']
df['android'] = [1234567, None, 55533321]
How could I append '<email>' to 'foo#gmail.com' and 'foo#yahoo.com' without also appending to the empty string or null value?
I'm trying to do this without using iteritems(), as I have about 200,000 records to apply this logic to.
Expected output would be like 'foo#gmail.com<email>', '', 'foo#yahoo.com<email>'
for column in df.columns:
    df[column] = df[column].astype(str) + realms[column]
>>> df
email android
0 foo#gmail.com<email> 1234567<androidID>
1 foo#yahoo.com<email> 55533321<androidID>
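Note that this appends realms[column] to every row, including empty strings; to skip empty and null values as the question asks, one option (a sketch, not part of the original answer) is to mask those rows first:

for column in df.columns:
    # only touch rows that are neither null nor empty strings
    mask = df[column].notna() & df[column].astype(str).ne('')
    df.loc[mask, column] = df.loc[mask, column].astype(str) + realms[column]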
I have a pandas dataframe that contains values in the format below. How can I filter the dataframe to the rows where 'd6d4e77e-b8ec-467a-ba06-1c6079aa2d82' appears in the list stored in the PathDSC column?
I tried:

def get_projects_belongs_to_root_project(project_df, root_project_id):
    filter_project_df = project_df.query("root_project_id in PathDSC")

It didn't work; I got an empty dataframe.
Assuming the values of the PathDSC column are lists of strings, you can check row-wise whether each list contains the wanted value and build a boolean mask using Series.apply. Then select only those rows using boolean indexing.
def get_projects_belongs_to_root_project(project_df, root_project_id):
    mask = project_df['PathDSC'].apply(lambda lst: root_project_id in lst)
    filter_project_df = project_df[mask]
    # ...
root_project_id = 'd6d4e77e-b8ec-467a-ba06-1c6079aa2d82'
df = df[df['PathDSC'].str.contains(root_project_id)]
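A minimal, self-contained example of the apply-based approach (the DataFrame contents here are made up for illustration):

import pandas as pd

df = pd.DataFrame({
    'project': ['p1', 'p2'],
    'PathDSC': [['d6d4e77e-b8ec-467a-ba06-1c6079aa2d82', 'abc'], ['xyz']],
})

root_project_id = 'd6d4e77e-b8ec-467a-ba06-1c6079aa2d82'
mask = df['PathDSC'].apply(lambda lst: root_project_id in lst)
print(df[mask])  # keeps only the row whose PathDSC list contains the id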
I have the following dataframe:
df = pd.DataFrame(columns=['Name'])
df['Name'] = ['Aadam','adam','AdAm','adammm','Adam.','Bethh','beth.','beht','Beeth','Beth']
I want to clean the column in order to achieve the following:
df['Name Corrected'] = ['adam','adam','adam','adam','adam','beth','beth','beth','beth','beth']
df
Cleaned names are based on the following reference table:
ref = pd.DataFrame(columns=['Cleaned Names'])
ref['Cleaned Names'] = ['adam','beth']
I am aware of fuzzy matching but I'm not sure if that's the most efficient way of solving the problem.
You can try:
lst = ['adam', 'beth']
out = pd.concat([df['Name'].str.contains(x, case=False).map({True: x}) for x in lst], axis=1)
df['Name corrected'] = out.bfill(axis=1).iloc[:, 0]
# Finally:
df['Name corrected'] = df['Name corrected'].ffill()
# but in certain conditions ffill() can give you wrong values

Explanation:

lst = ['adam', 'beth']
# create a list of words
out = pd.concat([df['Name'].str.contains(x, case=False).map({True: x}) for x in lst], axis=1)
# Check whether the 'Name' column contains each word in the list, one at a time.
# str.contains gives a boolean Series; map({True: x}) turns True into the word itself
# and False into NaN. Concatenating the resulting Series on axis=1 produces a
# DataFrame with one column per word.
df['Name corrected'] = out.bfill(axis=1).iloc[:, 0]
# Backward-fill values along axis=1 and take the first column
# Finally:
df['Name corrected'] = df['Name corrected'].ffill()
# Forward-fill the remaining missing values
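Putting it together on the question's data (a sketch; note that the final ffill() step relies on misspelled names such as 'beht' and 'Beeth' appearing after a name that did match):

import pandas as pd

df = pd.DataFrame({'Name': ['Aadam', 'adam', 'AdAm', 'adammm', 'Adam.',
                            'Bethh', 'beth.', 'beht', 'Beeth', 'Beth']})
lst = ['adam', 'beth']
out = pd.concat([df['Name'].str.contains(x, case=False).map({True: x}) for x in lst], axis=1)
df['Name corrected'] = out.bfill(axis=1).iloc[:, 0].ffill()
print(df['Name corrected'].tolist())
# ['adam', 'adam', 'adam', 'adam', 'adam', 'beth', 'beth', 'beth', 'beth', 'beth']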
I have the following problem with a dataframe in Python:
I have a dataframe with an ID column (which is not the index) and other columns.
Now I want to write code that gives back a new dataframe with all rows that have the same value in columnx as the row with the requested item ID. It should also contain all columns of the dataframe df.
def subset(itemID):
    columnxValue = df[df['ID'] == itemID]['columnx']
    subset = df[df['columnx'] == columnxValue]
    return subset
If I do it like this I always get the error "Can only compare identically-labeled Series objects".
I changed the question to be more clear.
You can use .loc as follows:
def subset(itemID):
    columnValueRequest = df.loc[df['ID'] == itemID, 'columnx'].iloc[0]
    subset1 = df[df['columnx'] == columnValueRequest]
    return subset1
As you want a single value rather than a Series for the variable columnValueRequest, you have to further use .iloc[0] to get the (first) value.
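A quick usage sketch with made-up data (the ID and columnx values are assumptions for illustration):

import pandas as pd

df = pd.DataFrame({'ID': [1, 2, 3, 4],
                   'columnx': ['a', 'b', 'a', 'c']})

def subset(itemID):
    columnValueRequest = df.loc[df['ID'] == itemID, 'columnx'].iloc[0]
    return df[df['columnx'] == columnValueRequest]

print(subset(1))  # returns the rows with ID 1 and 3, since both have columnx == 'a'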
Do you mean something like this?
You give the itemID as an argument to the subset function. It then finds the row where the ID column equals itemID, takes the value of columnx from that row, and returns all rows whose columnx matches that value.
def subset(itemID):
    columnValueRequest = df[df['ID'] == itemID]['columnx'].iloc[0]
    subset = df[df['columnx'] == columnValueRequest]
    return subset
I have a pandas dataframe grouped by certain columns. Now I want to insert the mean of the numeric values of four adjacent columns into a new column. This is what I did:
df = pd.read_csv(filename)
# in this line I extract a unique ID from the filename
id = re.search(r'(\w\w\w)', filename).group(1)
Files look like this:
col1 | col2 | col3
-----------------------
str1a | str1b | float1
My idea was now the following:
# get the numeric values
df2 = pd.DataFrame(df.groupby(['col1', 'col2']).mean()['col3']).T
# insert the id into a new column
df2.insert(0, 'ID', id)
Now loop over all values:

for j in range(len(df2.values)):
    for k in df['col1'].unique():
        df2.insert(j+5, (k, 'mean'), df2.values[j])

df2.to_excel('text.xlsx')
But I get the following error, referring to the line with df2.insert:
TypeError: not all arguments converted during string formatting
and
if not allow_duplicates and item in self.items:
    # Should this be a different kind of error??
    raise ValueError('cannot insert %s, already exists' % item)
I am not sure what string formatting refers to here, since I have only numerical values being passed around.
The final output should have all values from col3 in a single row (indexed by id) and every fifth column should be the inserted mean value of the four preceding values.
If I had to work with files like yours, I would write a function to convert them before building the DataFrame, something like this:
import pandas

# 'file' is assumed to be an already-open file object
data = []
for lineInFile in file.read().splitlines():
    lineInFile_splited = lineInFile.split('|')
    if len(lineInFile_splited) > 1:  # keep only data rows, skip the '-----' separator
        data.append([value.strip() for value in lineInFile_splited])

df = pandas.DataFrame(data, columns=['col1', 'col2', 'col3'])
Hope it helps!
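A possibly simpler route (an assumption about the file layout, not part of the original answer) is to let pandas parse the pipe-delimited file directly and skip the dashed separator line:

import pandas as pd

# Assumes the file starts with a 'col1 | col2 | col3' header followed by a '-----' line
df = pd.read_csv(filename, sep='|', skiprows=[1], skipinitialspace=True)
df.columns = [c.strip() for c in df.columns]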