PySpark Self Join without alias - python

I have a DF that I want to left_outer join with itself, but I would like to do it with the pyspark api rather than aliases.
So it is something like:
df = ...
df2 = df
df.join(df2, [df['SomeCol'] == df2['SomeOtherCol']], how='left_outer')
Interestingly, this is incorrect. When I run it I get this warning:
WARN Column: Constructing trivially true equals predicate, 'CAMPAIGN_ID#62L = CAMPAIGN_ID#62L'. Perhaps you need to use aliases.
Is there a way to do this without using an alias? Or a clean way with aliases? Aliases really make the code a lot messier compared to using the pyspark api directly.

The cleanest way of using aliases is as follows.
Given the following DataFrame:
df.show()
+---+----+---+
| ID|NAME|AGE|
+---+----+---+
| 1|John| 50|
| 2|Anna| 32|
| 3|Josh| 41|
| 4|Paul| 98|
+---+----+---+
In the following example, I simply append "2" to each of the column names so that each column has a unique name after the join.
df2 = df.select([functions.col(c).alias(c + "2") for c in df.columns])
df = df.join(df2, on = df['NAME'] == df2['NAME2'], how='left_outer')
df.show()
+---+----+---+---+-----+----+
| ID|NAME|AGE|ID2|NAME2|AGE2|
+---+----+---+---+-----+----+
| 1|John| 50| 1| John| 50|
| 2|Anna| 32| 2| Anna| 32|
| 3|Josh| 41| 3| Josh| 41|
| 4|Paul| 98| 4| Paul| 98|
+---+----+---+---+-----+----+
If I simply did df.join(df).select("NAME"), pyspark would not know which column I want to select, as they both have the exact same name. This leads to errors like the following:
AnalysisException: Reference 'NAME' is ambiguous, could be: NAME, NAME.
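For completeness, here is a minimal sketch of the alias-based variant the question asks about (assuming the same df as above); the column names stay identical, and disambiguation happens through the "a." and "b." prefixes:
from pyspark.sql import functions as F

# Give the two sides of the self join different names.
a = df.alias("a")
b = df.alias("b")

joined = a.join(b, F.col("a.NAME") == F.col("b.NAME"), how="left_outer")
# Columns are now addressed through the alias prefixes.
joined.select("a.ID", "a.NAME", "b.AGE").show()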

Related

In a pyspark dataframe, when I rename a column, the previous name can still be used for filtering. Bug or feature?

I work on Databricks with a PySpark dataframe containing string-type columns. I use .withColumnRenamed() to rename one of them. Later in the process I use a .filter() to select rows that contain a certain substring.
I accidentally used the old column name, and it still ran the filter and produced the 'correct' results as if I had used the new column name.
My problem is: is this a bug or a feature?
I reproduced the problem in a simple situation:
_test = sqlContext.createDataFrame([("abcd","efgh"), ("kalp","quarto"), ("aceg","egik")], [ 'x1', 'x2'])
_test.show()
+----+------+
| x1| x2|
+----+------+
|abcd| efgh|
|kalp|quarto|
|aceg| egik|
+----+------+
_test2 = _test.withColumnRenamed('x1', 'new')
_test2.filter("x1 == 'aceg'").show()
+----+----+
| new| x2|
+----+----+
|aceg|egik|
+----+----+
_test2.filter("substring(x1,1,2) == 'ka'").show()
+----+------+
| new| x2|
+----+------+
|kalp|quarto|
+----+------+
I would have expected an error in the filter commands, as the column x1 does not exist anymore in _test2. The weird thing is that the output shows the new name ('new').
Another example:
_test2.filter("substring(x1,1,1) == 'a'").show()
gives
+----+----+
| new| x2|
+----+----+
|abcd|efgh|
|aceg|egik|
+----+----+
and _test2.filter("substring(x1,1,1) == 'a'").filter(F.col('x1') == 'abcd').show() gives
+----+----+
| new| x2|
+----+----+
|abcd|efgh|
+----+----+
However _test2.select(['x1', 'x2']).show() will throw an error that 'x1' does not exist.
This is a known issue in Spark. The community decided not to fix it. See the related jira for more information.
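If you want the stricter behaviour, one workaround (an assumption on my part, not an official fix) is to rebuild the DataFrame from its RDD and schema so the old column lineage is discarded; referencing the old name then fails as expected:
# Rebuild the DataFrame; the analyzer then only knows 'new' and 'x2'.
_test3 = sqlContext.createDataFrame(_test2.rdd, _test2.schema)
_test3.filter("x1 == 'aceg'").show()   # now raises an AnalysisException: cannot resolve 'x1'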

How to find how many TRUE or FALSE are in the VALUE column

I have a PySpark Dataframe with a column of strings. I already checked whether those values are numeric or not. But now I want to find how many TRUE values are in the Value column.
values = [('25q36',),('75647',),('13864',),('8758K',),('07645',)]
df = sqlContext.createDataFrame(values,['ID',])
df.show()
+-----+
| ID|
+-----+
|25q36|
|75647|
|13864|
|8758K|
|07645|
+-----+
I applied the following:
from pyspark.sql import functions as F
df.select(
    "ID",
    F.col("ID").cast("int").isNotNull().alias("Value ")
).show()
+-----+------+
| ID|Value |
+-----+------+
|25q36| false|
|75647| true|
|13864| true|
|8758K| false|
|07645| true|
+-----+------+
But now I want to know how many TRUE or FALSE values are in that column.
Good night.
Try something like this:
df.groupBy('Value').count().show()
This should also get the job done, if you convert to pandas first:
df.toPandas()['Value'].value_counts()
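If you would rather get both counts in a single row, here is a minimal sketch using conditional aggregation (it rebuilds the boolean column from the df shown above):
from pyspark.sql import functions as F

bool_df = df.select(
    "ID",
    F.col("ID").cast("int").isNotNull().alias("Value")
)
# count() ignores nulls, and when() without otherwise() yields null
# when the condition is false, so each count only sees matching rows.
bool_df.agg(
    F.count(F.when(F.col("Value"), True)).alias("num_true"),
    F.count(F.when(~F.col("Value"), True)).alias("num_false")
).show()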

Use spark function result as input of another function

In my Spark application I have a dataframe with information like:
+------------------+---------------+
| labels | labels_values |
+------------------+---------------+
| ['l1','l2','l3'] | 000 |
| ['l3','l4','l5'] | 100 |
+------------------+---------------+
What I am trying to achieve is to create, given a label name as input, a single_label_value column that takes the value for that label from the labels_values column.
For example, for label='l3' I would like to retrieve this output:
+------------------+---------------+--------------------+
| labels | labels_values | single_label_value |
+------------------+---------------+--------------------+
| ['l1','l2','l3'] | 000 | 0 |
| ['l3','l4','l5'] | 100 | 1 |
+------------------+---------------+--------------------+
Here's what I am attempting to use:
selected_label = 'l3'
label_position = F.array_position(my_df.labels, selected_label)
my_df = my_df.withColumn(
    "single_label_value",
    F.substring(my_df.labels_values, label_position, 1)
)
But I am getting an error because the substring function does not like the label_position argument (it expects a plain integer rather than a Column).
Is there any way to combine these function outputs without writing a udf?
Hope this will work for you.
from pyspark.sql import SparkSession
from pyspark.sql.functions import *

spark = SparkSession.builder.getOrCreate()
mydata = [[['l1', 'l2', 'l3'], '000'], [['l3', 'l4', 'l5'], '100']]
df = spark.createDataFrame(mydata, schema=["lebels", "lebel_values"])
selected_label = 'l3'
df2 = df.select(
    "*",
    (array_position(df.lebels, selected_label) - 1).alias("pos_val"))
df2.createOrReplaceTempView("temp_table")
df3 = spark.sql("select *, substring(lebel_values, pos_val, 1) as val_pos from temp_table")
df3.show()
+------------+------------+-------+-------+
| lebels|lebel_values|pos_val|val_pos|
+------------+------------+-------+-------+
|[l1, l2, l3]| 000| 2| 0|
|[l3, l4, l5]| 100| 0| 1|
+------------+------------+-------+-------+
This gives the position of the value (1-based). If you want the exact 0-based index, subtract 1 from this value.
Edited answer -> Worked with a temp view. Still looking for a solution using the withColumn option. I hope it will help you for now.
Edit 2 -> Answer using the dataframe API.
df2 = df.select(
    "*",
    (array_position(df.lebels, selected_label) - 1).astype("int").alias("pos_val")
)
df3 = df2.withColumn("asked_col", expr("substring(lebel_values, pos_val, 1)"))
df3.show()
Try maybe:
import pyspark.sql.functions as f
from pyspark.sql.functions import *
selected_label = 'l3'
df = df.withColumn(
    'single_label_value',
    f.substring(f.col('labels_values'),
                array_position(f.col('labels'), lit(selected_label)) - 1,
                1))
df.show()
(for spark version >=2.4)
I think lit() was the function you were missing - you can use it to pass constant values to spark dataframes.
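With the original column names (labels and labels_values), here is a compact sketch that keeps everything in one expr call; since both array_position and substring are 1-based in Spark SQL, no offset is needed (Spark >= 2.4 assumed for array_position):
from pyspark.sql import functions as F

selected_label = 'l3'
my_df = my_df.withColumn(
    "single_label_value",
    # Look up the label's 1-based position, then take that character
    # from the labels_values string.
    F.expr("substring(labels_values, array_position(labels, '{}'), 1)".format(selected_label))
)
my_df.show()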

pyspark sql query equivalent functions

I'm just starting to dive into Pyspark.
There's a dataset which contains some values I'll demonstrate below in order to ask about the query I'm not able to create.
This is a sample of the actual dataset, which contains roughly 20K rows. I'm reading this CSV file in the pyspark shell as a data frame and trying to convert some basic SQL queries on this data to get hands-on practice. Below is one such query I'm not able to write:
1. Which country has the least number of Government Type entries (4th column)?
There are other queries I've created myself that I can do in SQL, but I'm stuck on this one. If I get an idea for this, it'll be fairly easy to address the other ones.
This is the only line I could come up with after much debugging:
df.filter(df.Government=='Democratic').select('Country').show()
I'm not sure how to approach this problem statement. Any ideas?
Here is how you can do it:
from pyspark.sql import Row
from pyspark.sql import functions as f

Demography = Row("City", "Country", "Population", "Government")
demo1 = Demography("a","AD",1.2,"Democratic")
demo2 = Demography("b","AD",1.2,"Democratic")
demo3 = Demography("c","AD",1.2,"Democratic")
demo4 = Demography("m","XX",1.2,"Democratic")
demo5 = Demography("n","XX",1.2,"Democratic")
demo6 = Demography("o","XX",1.2,"Democratic")
demo7 = Demography("q","XX",1.2,"Democratic")
demographic_data = [demo1,demo2,demo3,demo4,demo5,demo6,demo7]
demographic_data_df = spark.createDataFrame(demographic_data)
demographic_data_df.show(10)
+----+-------+----------+----------+
|City|Country|Population|Government|
+----+-------+----------+----------+
| a| AD| 1.2|Democratic|
| b| AD| 1.2|Democratic|
| c| AD| 1.2|Democratic|
| m| XX| 1.2|Democratic|
| n| XX| 1.2|Democratic|
| o| XX| 1.2|Democratic|
| q| XX| 1.2|Democratic|
+----+-------+----------+----------+
new = demographic_data_df.groupBy('Country').count().select('Country', f.col('count').alias('n'))
max = new.agg(f.max('n').alias('n'))
new.join(max, on="n", how="inner").show()
+---+-------+
| n|Country|
+---+-------+
| 4| XX|
+---+-------+
If you want the country with the least number instead, replace f.max with f.min in the aggregation. The other option is to register the dataframe as a temporary table and run normal SQL queries. To register it as a temporary table you can do the following (in newer Spark versions, prefer createOrReplaceTempView):
demographic_data_df.registerTempTable("demographic_data_table")
Hope that helps
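For reference, a minimal sketch of the SQL route (registering the same dataframe under the table name used above); ordering by the count ascending returns the country with the least number of rows:
demographic_data_df.createOrReplaceTempView("demographic_data_table")
spark.sql("""
    SELECT Country, COUNT(*) AS n
    FROM demographic_data_table
    GROUP BY Country
    ORDER BY n ASC
    LIMIT 1
""").show()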

How to modify a column for a join in spark dataframe when the join key are given as a list?

I have been trying to join two dataframes using a list of join keys, and I want to add the ability to join on a subset of the keys when one of the key values is null.
I have been trying to join two dataframes df_1 and df_2.
data1 = [[1, '2018-07-31', 215, 'a'],
         [2, '2018-07-30', None, 'b'],
         [3, '2017-10-28', 201, 'c']]
df_1 = sqlCtx.createDataFrame(
    data1, ['application_number', 'application_dt', 'account_id', 'var1'])
and
data2 = [[1, '2018-07-31', 215, 'aaa'],
         [2, '2018-07-30', None, 'bbb'],
         [3, '2017-10-28', 201, 'ccc']]
df_2 = sqlCtx.createDataFrame(
    data2, ['application_number', 'application_dt', 'account_id', 'var2'])
The code I use to join is this:
key_a = ['application_number','application_dt','account_id']
new = df_1.join(df_2,key_a,'left')
The output for the same is:
+------------------+--------------+----------+----+----+
|application_number|application_dt|account_id|var1|var2|
+------------------+--------------+----------+----+----+
| 1| 2018-07-31| 215| a| aaa|
| 3| 2017-10-28| 201| c| ccc|
| 2| 2018-07-30| null| b|null|
+------------------+--------------+----------+----+----+
My concern here is that, in the case where account_id is null, the join should still work by comparing the other 2 keys.
The required output should be like this:
+------------------+--------------+----------+----+----+
|application_number|application_dt|account_id|var1|var2|
+------------------+--------------+----------+----+----+
| 1| 2018-07-31| 215| a| aaa|
| 3| 2017-10-28| 201| c| ccc|
| 2| 2018-07-30| null| b| bbb|
+------------------+--------------+----------+----+----+
I have found a similar approach to do so by using the statement:
join_elem = "df_1.application_number ==
df_2.application_number|df_1.application_dt ==
df_2.application_dt|F.coalesce(df_1.account_id,F.lit(0)) ==
F.coalesce(df_2.account_id,F.lit(0))".split("|")
join_elem_column = [eval(x) for x in join_elem]
But the design considerations do not allow me to use a full join expression, and I am stuck with using the list of column names as the join key.
I have been trying to find a way to accommodate this coalesce logic into that list itself, but have not had any success so far.
I would call this solution a workaround.
The issue here is that we have a Null value for one of the keys in the DataFrame, and the OP wants the rest of the key columns to be used instead. Why not assign an arbitrary value to this Null and then apply the join? Effectively this is the same as joining on the remaining two keys.
from pyspark.sql.functions import col, when

# Let's replace Null with an arbitrary value that has
# little chance of occurring in the dataset, e.g. -100000.
df_1 = df_1.withColumn('account_id', when(col('account_id').isNull(), -100000).otherwise(col('account_id')))
df_2 = df_2.withColumn('account_id', when(col('account_id').isNull(), -100000).otherwise(col('account_id')))
# Do a FULL join.
df = df_1.join(df_2, ['application_number', 'application_dt', 'account_id'], 'full')
# Replace the arbitrary value back with Null.
df = df.withColumn('account_id', when(col('account_id') == -100000, None).otherwise(col('account_id')))
df.show()
+------------------+--------------+----------+----+----+
|application_number|application_dt|account_id|var1|var2|
+------------------+--------------+----------+----+----+
| 1| 2018-07-31| 215| a| aaa|
| 2| 2018-07-30| null| b| bbb|
| 3| 2017-10-28| 201| c| ccc|
+------------------+--------------+----------+----+----+
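If building the join condition from the key list is acceptable, here is a sketch using Column.eqNullSafe (null-safe equality, available since Spark 2.3); it mirrors the sentinel-value workaround above, treating null account_ids on both sides as equal, without rewriting the data:
# One null-safe equality per key in the original key list.
cond = [df_1[k].eqNullSafe(df_2[k]) for k in key_a]
new = df_1.join(df_2, cond, 'left').select(df_1['*'], df_2['var2'])
new.show()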
