How to use list comprehension variable names in PySpark dataframes - Python

I am trying to build a list comprehension that has an iteration built into it. However, I have not been able to get this to work. What am I doing wrong?
Here is a trivial representation of what I am trying to do.
dataframe columns = ["code_number_1", "code_number_2", "code_number_3", "code_number_4", "code_number_5", "code_number_6", "code_number_7", "code_number_8", ...]
cols = [0,3,4]
result = df.select([code_number_{f"{x}" for x in cols])
Addendum:
my ultimate goal is to do something like this:
col_buckets = ["code_1", "code_2", "code_3"]
amt_buckets = ["code_1_amt", "code_2_amt", "code_3_amt" ]
result = df.withColumn("max_amt_{col_index}", max(df.select(max(**amt_buckets**) for col_indices of amt_buckets if ***any of col indices of col_buckets*** =='01')))

[code_number_{f"{x}" for x in cols] is not valid list comprehension syntax.
Instead, try ["code_number_"+str(x) for x in cols], which generates the list of column names ['code_number_0', 'code_number_3', 'code_number_4'].
.select accepts strings or Column objects as arguments and selects the matching fields from the dataframe.
Example:
from pyspark.sql.functions import col

df = spark.createDataFrame([("a","b","c","d","e")],["code_number_0","code_number_1","code_number_2","code_number_3","code_number_4"])
cols = [0,3,4]
#passing strings to select
result = df.select(["code_number_"+str(x) for x in cols])
#or passing columns to select (no .show() here, so result stays a DataFrame)
result = df.select([col("code_number_"+str(x)) for x in cols])
result.show()
#+-------------+-------------+-------------+
#|code_number_0|code_number_3|code_number_4|
#+-------------+-------------+-------------+
#| a| d| e|
#+-------------+-------------+-------------+
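The addendum is only pseudocode, but one reading of it is: for rows where any of the code columns equals '01', take the row-wise maximum of the amount columns. A minimal sketch under that assumption (df, col_buckets and amt_buckets as in the addendum; when and greatest are standard pyspark.sql functions):
from pyspark.sql import functions as F

col_buckets = ["code_1", "code_2", "code_3"]
amt_buckets = ["code_1_amt", "code_2_amt", "code_3_amt"]

#build a condition that is True when any of the code columns equals '01'
any_is_01 = None
for c in col_buckets:
    cond = (F.col(c) == "01")
    any_is_01 = cond if any_is_01 is None else (any_is_01 | cond)

#row-wise maximum of the amount columns, only where the condition holds
result = df.withColumn("max_amt", F.when(any_is_01, F.greatest(*[F.col(c) for c in amt_buckets])))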

Related

pySpark check if column exists based on list

My ultimate goal is to compare column names in df2 against a list of values extracted from df1.
I have a list of names and a function that checks whether those names exist as column names in df1. However, this worked in Python and doesn't work in PySpark. The error I'm getting: AttributeError: 'DataFrame' object has no attribute 'values'.
How can I change my function so that it iterates over the column names? Or is there a way to compare my list values to the df2's column names (the full dataframe; ie. no need to make a new dataframe with just the column names)?
#Function to check matching values
def checkIfDomainsExists(data, listOfValues):
    '''List of elements'''
    entityDomainList = Entity.select("DomainName").rdd.flatMap(lambda x: x).collect()
    #entityDomainList
    '''Check if given elements exist in data'''
    results_true = {}
    results_false = {}
    #Iterate over list of domains one by one
    for elem in listOfValues:
        #Check if the element exists in dataframe values
        if elem in data.columns:
            results_true[elem] = True
        else:
            results_false[elem] = False
    #Return dictionary of values and their flag
    #Only return TRUE values
    return results_true

# Get TRUE matched column values
results_true = checkIfDomainsExists(psv, entityDomainList)
results_true
You don't need to write a function just to filter the values.
You can do this in the following ways:
import pyspark.sql.functions as f

df = spark.createDataFrame([(1, 'LeaseStatus'), (2, 'IncludeLeaseInIPM'), (5, 'NonExistantDomain')], ("id", "entity"))
domainList=['LeaseRecoveryType','LeaseStatus','IncludeLeaseInIPM','LeaseAccountType', 'ClassofUse','LeaseType']
df.withColumn('Exists', df.entity.isin(domainList)).filter(f.col('Exists')=='true').show()
+---+-----------------+------+
| id| entity|Exists|
+---+-----------------+------+
| 1| LeaseStatus| true|
| 2|IncludeLeaseInIPM| true|
+---+-----------------+------+
#or you can filter directly without adding an additional column
df.filter(f.col('entity').isin(domainList)).select('entity').collect()
[Row(entity='LeaseStatus'), Row(entity='IncludeLeaseInIPM')]
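If what you actually need is to compare the extracted list against df2's column names rather than against row values, you can skip Spark for that step entirely, since df.columns is a plain Python list. A minimal sketch, reusing entityDomainList from the question and a hypothetical df2:
#names from the extracted list that exist as columns of df2
existing = [name for name in entityDomainList if name in df2.columns]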
Hope it helps.

Apply a function to all cells in Spark DataFrame

I'm trying to convert some Pandas code to Spark for scaling. myfunc is a wrapper to a complex API that takes a string and returns a new string (meaning I can't use vectorized functions).
def myfunc(ds):
    for attribute, value in ds.items():
        value = api_function(attribute, value)
        ds[attribute] = value
    return ds

df = df.apply(myfunc, axis='columns')
myfunc takes a DataSeries, breaks it up into individual cells, calls the API for each cell, and builds a new DataSeries with the same column names. This effectively modifies all cells in the DataFrame.
I'm new to Spark and I want to translate this logic using pyspark. I've converted my pandas DataFrame to Spark:
spark = SparkSession.builder.appName('My app').getOrCreate()
spark_schema = StructType([StructField(c, StringType(), True) for c in df.columns])
spark_df = spark.createDataFrame(df, schema=spark_schema)
This is where I get lost. Do I need a UDF, a pandas_udf? How do I iterate across all cells and return a new string for each using myfunc? spark_df.foreach() doesn't return anything and it doesn't have a map() function.
I can modify myfunc from DataSeries -> DataSeries to string -> string if necessary.
Option 1: Use a UDF on One Column at a Time
The simplest approach would be to rewrite your function to take a string as an argument (so that it is string -> string) and use a UDF. There's a nice example here. This works on one column at a time. So, if your DataFrame has a reasonable number of columns, you can apply the UDF to each column one at a time:
from pyspark.sql.functions import col
new_df = df.select(udf(col("col1")), udf(col("col2")), ...)
Example
df = sc.parallelize([[1, 4], [2,5], [3,6]]).toDF(["col1", "col2"])
df.show()
+----+----+
|col1|col2|
+----+----+
| 1| 4|
| 2| 5|
| 3| 6|
+----+----+
def plus1_udf(x):
    return x + 1
plus1 = spark.udf.register("plus1", plus1_udf)
new_df = df.select(plus1(col("col1")), plus1(col("col2")))
new_df.show()
+-----------+-----------+
|plus1(col1)|plus1(col2)|
+-----------+-----------+
| 2| 5|
| 3| 6|
| 4| 7|
+-----------+-----------+
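Note that select renames the output columns to plus1(col1) and plus1(col2); if you want to keep the original names, alias each transformed column (a small variation on the example above):
from pyspark.sql.functions import col

new_df = df.select(*[plus1(col(c)).alias(c) for c in df.columns])
new_df.show()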
Option 2: Map the entire DataFrame at once
map is available for Scala DataFrames, but, at the moment, not in PySpark.
The lower-level RDD API does have a map function in PySpark. So, if you have too many columns to transform one at a time, you could operate on every single cell in the DataFrame like this:
def map_fn(row):
    return [api_function(column, x) for (column, x) in row.asDict().items()]

column_names = df.columns
new_df = df.rdd.map(map_fn).toDF(column_names)
Example
df = sc.parallelize([[1, 4], [2,5], [3,6]]).toDF(["col1", "col2"])
def map_fn(row):
    return [value + 1 for (_, value) in row.asDict().items()]
columns = df.columns
new_df = df.rdd.map(map_fn).toDF(columns)
new_df.show()
+----+----+
|col1|col2|
+----+----+
| 2| 5|
| 3| 6|
| 4| 7|
+----+----+
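One caveat about map_fn: it relies on the iteration order of row.asDict(), which follows the Row's field order. If you prefer to be explicit, you can index the row by column name so the output order matches the captured column list (a sketch under the same setup as the example):
def map_fn(row):
    #build the output in the exact order of the captured column list
    return [row[c] + 1 for c in columns]

new_df = df.rdd.map(map_fn).toDF(columns)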
Context
The documentation of foreach only gives the example of printing, but we can verify by looking at the code that it indeed does not return anything.
You can read about pandas_udf in this post, but it seems that it is most suited to vectorized functions, which, as you pointed out, you can't use because of api_function.
The solution is:
from pyspark.sql.functions import udf, lit
from pyspark.sql.types import StringType

# func is the (column name, value) -> string wrapper around api_function
udf_func = udf(func, StringType())
for col_name in spark_df.columns:
    spark_df = spark_df.withColumn(col_name, udf_func(lit(col_name), col_name))
return spark_df.toPandas()  # this return sits inside a helper that converts back to pandas
There are 3 key insights that helped me figure this out:
If you use withColumn with the name of an existing column (col_name), Spark "overwrites"/shadows the original column. This essentially gives the appearance of editing the column directly as if it were mutable.
By looping across the original columns and reusing the same DataFrame variable spark_df, I use the same principle to simulate a mutable DataFrame, creating a chain of column-wise transformations, each time "overwriting" a column (per insight 1; see the unrolled example below).
Spark UDFs expect all parameters to be Column types, which means it attempts to resolve column values for each parameter. Because api_function's first parameter is a literal value that will be the same for all rows in the vector, you must use the lit() function. Simply passing col_name to the function will attempt to extract the column values for that column. As far as I could tell, passing col_name is equivalent to passing col(col_name).
Assuming 3 columns 'a', 'b' and 'c', unrolling this concept would look like this:
spark_df = spark_df.withColumn('a', udf_func(lit('a'), 'a')) \
                   .withColumn('b', udf_func(lit('b'), 'b')) \
                   .withColumn('c', udf_func(lit('c'), 'c'))
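An equivalent formulation, if you would rather avoid a long chain of withColumn calls, is a single select that applies the UDF to every column at once and keeps the original names via alias (same assumptions about func and spark_df as above):
from pyspark.sql.functions import udf, lit, col
from pyspark.sql.types import StringType

udf_func = udf(func, StringType())
spark_df = spark_df.select(*[udf_func(lit(c), col(c)).alias(c) for c in spark_df.columns])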

PySpark: list column names based on characters in values

In PySpark, I am trying to clean a dataset. Some of the columns have unwanted characters (=" ") in their values. I read the dataset as a DataFrame and I already created a User Defined Function which can remove the characters successfully, but now I am struggling to write a script which can identify on which columns I need to apply the UserDefinedFunction. I only use the last row of the dataset, assuming the columns always contain similar entries.
DataFrame (df):
id value1 value2 value3
="100010" 10 20 ="30"
In Python, the following works:
columns_to_fix = []
for col in df:
    value = df[col][0]
    if type(value) == str and value.startswith('='):
        columns_to_fix.append(col)
I tried the following in PySpark, but this returns all the column names:
columns_to_fix = []
for x in df.columns:
    if df[x].like('%="'):
        columns_to_fix.append(x)
Desired output:
columns_to_fix: ['id', 'value3']
Once I have the column names in a list, I can use a for loop to fix the entries in the columns. I am very new to PySpark, so my apologies if this is too basic a question. Thank you so much in advance for your advice!
"I only use the last row of the dataset, assuming the columns always contains similar entries." Under that assumption, you could collect a single row and test if the character you are looking for is in there.
Also, note that you do not need a udf to replace = in your columns, you can use regexp_replace. A working example is given below, hope this helps!
import pyspark.sql.functions as F
df = spark.createDataFrame([['=123','456','789'], ['=456','789','123']], ['a', 'b','c'])
df.show()
# +----+---+---+
# | a| b| c|
# +----+---+---+
# |=123|456|789|
# |=456|789|123|
# +----+---+---+
# list all columns with '=' in it.
row = df.limit(1).collect()[0].asDict()
columns_to_replace = [i for i,j in row.items() if '=' in j]
for col in columns_to_replace:
    df = df.withColumn(col, F.regexp_replace(col, '=', ''))
df.show()
# +---+---+---+
# | a| b| c|
# +---+---+---+
# |123|456|789|
# |456|789|123|
# +---+---+---+
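One thing to watch for: the membership test '=' in j assumes every value in the collected row is a string. If some columns are numeric or can be null, guard the check first (a small defensive variant of the detection step above):
row = df.limit(1).collect()[0].asDict()
columns_to_replace = [c for c, v in row.items() if isinstance(v, str) and '=' in v]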

How to convert a list of arrays to a Spark dataframe

Suppose I have a list:
x = [[1,10],[2,14],[3,17]]
I want to convert x to a Spark dataframe with two columns id (1,2,3) and value (10,14,17).
How could I do that?
Thanks
x = [[1,10],[2,14],[3,17]]
df = sc.parallelize(x).toDF(['ID','VALUE'])
df.show()
Alternatively, you can create it directly using the SparkSession:
x = [[1,10],[2,14],[3,17]]
df = spark.createDataFrame(data=x, schema = ["id","value"])
df.printSchema()
df.show()
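createDataFrame will infer long types for both columns here; if you want to pin the types explicitly, pass a StructType schema instead of the list of names (a minimal sketch):
from pyspark.sql.types import StructType, StructField, IntegerType

schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("value", IntegerType(), True),
])
df = spark.createDataFrame(x, schema=schema)
df.printSchema()
df.show()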

python pandas selecting columns from a dataframe via a list of column names

I have a dataframe with a lot of columns in it. Now I want to select only certain columns. I have saved all the names of the columns that I want to select into a Python list and now I want to filter my dataframe according to this list.
I've been trying to do:
df_new = df[[list]]
where list includes all the column names that I want to select.
However I get the error:
TypeError: unhashable type: 'list'
Any help on this one?
You can remove one []:
df_new = df[list]
It is also better to use a name other than list, e.g. L:
df_new = df[L]
Your code for building the list looks like it works; I only tried to simplify it:
L = []
for x in df.columns:
    if "_" not in x[-3:]:
        L.append(x)
print(L)
List comprehension:
print([x for x in df.columns if "_" not in x[-3:]])
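As an alternative to df[L], pandas also has DataFrame.filter, which takes the list of wanted names directly and silently ignores any that are missing, unlike df[L], which raises a KeyError (a short sketch, assuming L is the list built above):
df_new = df.filter(items=L)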
