How do I expand a dataframe based on column values? I intend to go from this dataframe:
+---------+----------+----------+
|DEVICE_ID| MIN_DATE| MAX_DATE|
+---------+----------+----------+
| 1|2019-08-29|2019-08-31|
| 2|2019-08-27|2019-09-02|
+---------+----------+----------+
To one that looks like this:
+---------+----------+
|DEVICE_ID| DATE|
+---------+----------+
| 1|2019-08-29|
| 1|2019-08-30|
| 1|2019-08-31|
| 2|2019-08-27|
| 2|2019-08-28|
| 2|2019-08-29|
| 2|2019-08-30|
| 2|2019-08-31|
| 2|2019-09-01|
| 2|2019-09-02|
+---------+----------+
Any help would be much appreciated.
from datetime import timedelta
from pyspark.sql.functions import udf
import pyspark.sql.functions as F

# Create a sample data row.
df = sqlContext.sql("""
    select 'dev1' as device_id,
           to_date('2020-01-06') as start,
           to_date('2020-01-09') as end""")

# Define a UDF that returns a comma-separated list of dates
# between start and end (inclusive).
@udf
def datelist(start, end):
    return ",".join(str(start + timedelta(days=x)) for x in range(0, 1 + (end - start).days))

# Explode the list of dates into rows.
df.select("device_id",
          F.explode(F.split(datelist(df["start"], df["end"]), ",")).alias("date")
         ).show(10, False)
Consider the simple DataFrame:
from pyspark import SparkContext
import pyspark
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from pyspark.sql.window import Window
from pyspark.sql.types import *
from pyspark.sql.functions import pandas_udf, PandasUDFType
spark = SparkSession.builder.appName('Trial').getOrCreate()
simpleData = (("2000-04-17", "144", 1), \
("2000-07-06", "015", 1), \
("2001-01-23", "015", -1), \
("2001-01-18", "144", -1), \
("2001-04-17", "198", 1), \
("2001-04-18", "036", -1), \
("2001-04-19", "012", -1), \
("2001-04-19", "188", 1), \
("2001-04-25", "188", 1),\
("2001-04-27", "015", 1) \
)
columns= ["dates", "id", "eps"]
df = spark.createDataFrame(data = simpleData, schema = columns)
df.printSchema()
df.show(truncate=False)
Out:
root
|-- dates: string (nullable = true)
|-- id: string (nullable = true)
|-- eps: long (nullable = true)
+----------+---+---+
|dates |id |eps|
+----------+---+---+
|2000-04-17|144|1 |
|2000-07-06|015|1 |
|2001-01-23|015|-1 |
|2001-01-18|144|-1 |
|2001-04-17|198|1 |
|2001-04-18|036|-1 |
|2001-04-19|012|-1 |
|2001-04-19|188|1 |
|2001-04-25|188|1 |
|2001-04-27|015|1 |
+----------+---+---+
I would like to sum the values in the eps column over a rolling window, keeping only the last value for any given ID in the id column. For example, defining a window of 5 rows and assuming we are on 2001-04-17, I want to sum only the last eps value for each unique ID. In those 5 rows there are only 3 different IDs, so the sum must be over 3 elements: -1 for ID 144 (fourth row), -1 for ID 015 (third row) and 1 for ID 198 (fifth row), for a total of -1.
In my mind, within the rolling window I should do something like F.sum(groupBy('id').agg(F.last('eps'))), which of course is not possible inside a rolling window.
I obtained the desired result using a UDF.
import pandas as pd

@pandas_udf(IntegerType(), PandasUDFType.GROUPED_AGG)
def fun_sum(id, eps):
    df = pd.DataFrame()
    df['id'] = id
    df['eps'] = eps
    value = df.groupby('id').last().sum()
    return value
And then:
w = Window.orderBy('dates').rowsBetween(-5,0)
df = df.withColumn('sum', fun_sum(F.col('id'), F.col('eps')).over(w))
The problem is that my dataset contains more than 8 million rows, and performing this task with this UDF takes about 2 hours.
I was wondering whether there is a way to achieve the same result with built-in PySpark functions, avoiding a UDF, or at least whether there is a way to improve the performance of my UDF.
For completeness, the desired output should be:
+----------+---+---+----+
|dates |id |eps|sum |
+----------+---+---+----+
|2000-04-17|144|1 |1 |
|2000-07-06|015|1 |2 |
|2001-01-23|015|-1 |0 |
|2001-01-18|144|-1 |-2 |
|2001-04-17|198|1 |-1 |
|2001-04-18|036|-1 |-2 |
|2001-04-19|012|-1 |-3 |
|2001-04-19|188|1 |-1 |
|2001-04-25|188|1 |0 |
|2001-04-27|015|1 |0 |
+----------+---+---+----+
EDIT: the result must also be achievable using a .rangeBetween() window.
In case you haven't figured it out yet, here's one way of achieving it.
Assuming that df is defined and initialised as in your question.
Import the required functions and classes:
from pyspark.sql.functions import row_number, col
from pyspark.sql.window import Window
Create the necessary WindowSpec:
window_spec = (
Window
# Partition by 'id'.
.partitionBy(df.id)
# Order by 'dates', latest dates first.
.orderBy(df.dates.desc())
)
Create a DataFrame with partitioned data:
partitioned_df = (
df
# Use the window function 'row_number()' to populate a new column
# containing a sequential number starting at 1 within a window partition.
.withColumn('row', row_number().over(window_spec))
# Only select the first entry in each partition (i.e. the latest date).
.where(col('row') == 1)
)
Just in case you want to double-check the data:
partitioned_df.show()
# +----------+---+---+---+
# | dates| id|eps|row|
# +----------+---+---+---+
# |2001-04-19|012| -1| 1|
# |2001-04-25|188| 1| 1|
# |2001-04-27|015| 1| 1|
# |2001-04-17|198| 1| 1|
# |2001-01-18|144| -1| 1|
# |2001-04-18|036| -1| 1|
# +----------+---+---+---+
Group and aggregate the data:
sum_rows = (
partitioned_df
# Aggregate the data.
.groupBy()
# Sum all rows in 'eps' column.
.sum('eps')
# Get all records as a list of Rows.
.collect()
)
Get the result:
print(f"sum eps: {sum_rows[0][0]})
# sum eps: 0
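For reference, the same steps can be chained into a single expression; this is just a compact sketch of the pipeline above, using only the pieces already defined:

sum_eps = (
    df
    .withColumn('row', row_number().over(window_spec))
    .where(col('row') == 1)
    .groupBy()
    .sum('eps')
    .collect()[0][0]
)
print(f"sum eps: {sum_eps}")
# sum eps: 0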
For each set of coordinates in a pyspark dataframe, I need to find the closest set of coordinates in another dataframe.
I have one pyspark dataframe with coordinate data like so (dataframe a):
+------------------+-------------------+
| latitude_deg| longitude_deg|
+------------------+-------------------+
| 40.07080078125| -74.93360137939453|
| 38.704022| -101.473911|
| 59.94919968| -151.695999146|
| 34.86479949951172| -86.77030181884766|
| 35.6087| -91.254898|
| 34.9428028| -97.8180194|
And another like so (dataframe b; only a few rows are shown for clarity):
+-----+------------------+-------------------+
|ident| latitude_deg| longitude_deg|
+-----+------------------+-------------------+
| 00A| 30.07080078125| -24.93360137939453|
| 00AA| 56.704022| -120.473911|
| 00AK| 18.94919968| -109.695999146|
| 00AL| 76.86479949951172| -67.77030181884766|
| 00AR| 10.6087| -87.254898|
| 00AS| 23.9428028| -10.8180194|
Is it possible to somehow merge the dataframes so that, for each row in dataframe a, the result has the closest ident from dataframe b:
+------------------+-------------------+-------------+
| latitude_deg| longitude_deg|closest_ident|
+------------------+-------------------+-------------+
| 40.07080078125| -74.93360137939453| 12A|
| 38.704022| -101.473911| 14BC|
| 59.94919968| -151.695999146| 278A|
| 34.86479949951172| -86.77030181884766| 56GH|
| 35.6087| -91.254898| 09HJ|
| 34.9428028| -97.8180194| 09BV|
What I have tried so far:
I have defined a pyspark UDF to calculate the haversine distance between two pairs of coordinates.
udf_get_distance = F.udf(get_distance)
It works like this:
df = (df.withColumn("ABS_DISTANCE", udf_get_distance(
df.latitude_deg_a, df.longitude_deg_a,
df.latitude_deg_b, df.longitude_deg_b,)
))
I'd appreciate any kind of help. Thanks so much.
You need to do a crossJoin first, something like this:
joined_df=source_df1.crossJoin(source_df2)
Then you can call your UDF as you mentioned, generate a row number ordered by distance, and keep the closest match:
from pyspark.sql.functions import row_number
from pyspark.sql.window import Window

rwindow = Window.partitionBy("latitude_deg_a", "longitude_deg_a").orderBy("ABS_DISTANCE")
udf_result_df = (joined_df
    .withColumn("ABS_DISTANCE", udf_get_distance(
        joined_df.latitude_deg_a, joined_df.longitude_deg_a,
        joined_df.latitude_deg_b, joined_df.longitude_deg_b))
    .withColumn("rownum", row_number().over(rwindow))
    .filter("rownum = 1"))
Note: add a return type to your UDF.
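For illustration, a minimal haversine get_distance with an explicit return type might look like the sketch below; your actual get_distance may differ, this is only an assumed implementation:

from math import radians, sin, cos, asin, sqrt
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType

def get_distance(lat_a, lon_a, lat_b, lon_b):
    # Haversine distance in kilometres between two (lat, lon) points.
    lat_a, lon_a, lat_b, lon_b = map(radians, [lat_a, lon_a, lat_b, lon_b])
    d_lat = lat_b - lat_a
    d_lon = lon_b - lon_a
    a = sin(d_lat / 2) ** 2 + cos(lat_a) * cos(lat_b) * sin(d_lon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

udf_get_distance = F.udf(get_distance, DoubleType())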
I am trying to update some rows of dataframe ,below is my code.
dfs_ids1 = dfs_ids1.withColumn("arrival_dt", F.when(F.col("arrival_dt")=='1960-01-01', lit(None)) )
Basically, I want to update all the rows where arrival_dt is 1960-01-01 with null and leave rest of the rows unchanged.
You need to understand the filter and when functions.
If you only want to fetch the matching rows and ignore the others, try this:
from pyspark.sql.functions import *
dfs_ids1 = dfs_ids1.filter(col("arrival_dt") == "1960-01-01")
If you want to keep the matching value and update the remaining rows with a custom value or another column:
dfs_ids1 = dfs_ids1.withColumn(
    "arrival_dt",
    when(col("arrival_dt") == "1960-01-01", col("arrival_dt")).otherwise(lit(None)))
# Or
dfs_ids1 = dfs_ids1.withColumn(
    "arrival_dt",
    when(col("arrival_dt") == "1960-01-01", col("arrival_dt")))
# Sample example
# Input df
+------+-------+-----+
| name| city|state|
+------+-------+-----+
| manoj|gwalior| mp|
| kumar| delhi|delhi|
|dhakad|chennai| tn|
+------+-------+-----+
from pyspark.sql.functions import *
opOneDf=df.withColumn("name",when(col("city")=="delhi",col("city")).otherwise(lit(None)))
opOneDf.show()
# Sample output
+-----+-------+-----+
| name| city|state|
+-----+-------+-----+
| null|gwalior| mp|
|delhi| delhi|delhi|
| null|chennai| tn|
+-----+-------+-----+
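Applying the same when/otherwise pattern to the original question (set arrival_dt to null where it equals '1960-01-01' and keep it unchanged otherwise), a sketch would be:

from pyspark.sql import functions as F

dfs_ids1 = dfs_ids1.withColumn(
    "arrival_dt",
    F.when(F.col("arrival_dt") == "1960-01-01", F.lit(None))
     .otherwise(F.col("arrival_dt")))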
This question already has answers here: How can we JOIN two Spark SQL dataframes using a SQL-esque "LIKE" criterion? (2 answers)
I have 2 dataframes named brand_name and poi_name.
Dataframe 1 (brand_name):
+-------------+
|brand_stop[0]|
+-------------+
|TOASTMASTERS |
|USBORNE |
|ARBONNE |
|USBORNE |
|ARBONNE |
|ACADEMY |
|ARBONNE |
|USBORNE |
|USBORNE |
|PILLAR |
+-------------+
Dataframe 2 (poi_name):
+---------------------------------------+
|Name |
+---------------------------------------+
|TOASTMASTERS DISTRICT 48 |
|USBORNE BOOKS AND MORE |
|ARBONNE |
|USBORNE BOOKS AT HOME |
|ARBONNE |
|ACADEMY, LTD. |
|ARBONNE |
|USBORNE BOOKS AT HOME |
|USBORNE BOOKS & MORE |
|PILLAR TO POST HOME INSPECTION SERVICES|
+---------------------------------------+
I want to check whether the strings in the brand_stop column of dataframe 1 are present in the Name column of dataframe 2. The matching should be done row-wise, and if there is a successful match, that particular record should be stored in a new column.
I have tried filtering the dataframe using a join:
from pyspark.sql.functions import udf, col
from pyspark.sql.types import BooleanType
contains = udf(lambda s, q: q in s, BooleanType())
like_with_python_udf = (poi_names.join(brand_names1)
.where(contains(col("Name"), col("brand_stop[0]")))
.select(col("Name")))
like_with_python_udf.show()
But this shows an error
"AnalysisException: u'Detected cartesian product for INNER join between logical plans"
I am new to PySpark. Please help me with this.
Thank you
The Scala code will look like this:
val d1 = Array(("TOASTMASTERS"),("USBORNE"),("ARBONNE"),("USBORNE"),("ARBONNE"),("ACADEMY"),("ARBONNE"),("USBORNE"),("USBORNE"),("PILLAR"))
val rdd1 = sc.parallelize(d1)
val df1 = rdd1.toDF("brand_stop")
val d2 = Array(("TOASTMASTERS DISTRICT 48"),("USBORNE BOOKS AND MORE"),("ARBONNE"),("USBORNE BOOKS AT HOME"),("ARBONNE"),("ACADEMY, LTD."),("ARBONNE"),("USBORNE BOOKS AT HOME"),("USBORNE BOOKS & MORE"),("PILLAR TO POST HOME INSPECTION SERVICES"))
val rdd2 =sc.parallelize(d2)
val df2 = rdd2.toDF("names")
def matchFunc(s1: String, s2: String): Boolean = {
  if (s2.contains(s1)) true
  else false
}
val contains = udf(matchFunc _)
val like_with_python_udf = (df1.join(df2).where(contains(col("brand_stop"), col("names"))).select(col("brand_stop"), col("names")))
like_with_python_udf.show()
The Python code:
from pyspark.sql import Row
from pyspark.sql.functions import udf, col
from pyspark.sql.types import BooleanType
schema1 = Row("brand_stop")
schema2 = Row("names")
df1 = sc.parallelize([
schema1("TOASTMASTERS"),
schema1("USBORNE"),
schema1("ARBONNE")
]).toDF()
df2 = sc.parallelize([
schema2("TOASTMASTERS DISTRICT 48"),
schema2("USBORNE BOOKS AND MORE"),
schema2("ARBONNE"),
schema2("ACADEMY, LTD."),
schema2("PILLAR TO POST HOME INSPECTION SERVICES")
]).toDF()
# Check whether the first argument (brand_stop) is contained in the second (names).
contains = udf(lambda s, q: s in q, BooleanType())
like_with_python_udf = (df1.join(df2)
.where(contains(col("brand_stop"), col("names")))
.select(col("brand_stop"), col("names")))
like_with_python_udf.show()
I am getting the output:
+------------+
| brand_stop|
+------------+
|TOASTMASTERS|
| USBORNE|
| ARBONNE|
+------------+
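Note that df1.join(df2) without a join condition is itself a cartesian product, so depending on your Spark version and configuration you may hit the same "Detected cartesian product" error. In that case (assuming Spark 2.x and a SparkSession named spark) you can either enable cross joins or make the cross join explicit:

spark.conf.set("spark.sql.crossJoin.enabled", "true")
# or, equivalently:
like_with_cross_join = (df1.crossJoin(df2)
    .where(contains(col("brand_stop"), col("names")))
    .select(col("brand_stop"), col("names")))
like_with_cross_join.show()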
The matching should be done row wise
In that case you have to add some form of indices and join
from pyspark.sql.types import *
def index(df):
    schema = StructType(df.schema.fields + [StructField("_idx", LongType())])
    rdd = df.rdd.zipWithIndex().map(lambda x: x[0] + (x[1], ))
    return rdd.toDF(schema)
brand_name = spark.createDataFrame(["TOASTMASTERS", "USBORNE"], "string").toDF("brand_stop")
poi_name = spark.createDataFrame(["TOASTMASTERS DISTRICT 48", "USBORNE BOOKS AND MORE"], "string").toDF("poi_name")
index(brand_name).join(index(poi_name), ["_idx"]).selectExpr("*", "poi_name rlike brand_stop").show()
# +----+------------+--------------------+-------------------------+
# |_idx| brand_stop| poi_name|poi_name RLIKE brand_stop|
# +----+------------+--------------------+-------------------------+
# | 0|TOASTMASTERS|TOASTMASTERS DIST...| true|
# | 1| USBORNE|USBORNE BOOKS AND...| true|
# +----+------------+--------------------+-------------------------+
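If you want the match stored as a named column (as the original question asks) rather than shown under an auto-generated header, a small follow-up sketch using the same pieces (the column name is_match is just chosen for illustration):

import pyspark.sql.functions as F

result = (index(brand_name)
    .join(index(poi_name), ["_idx"])
    .withColumn("is_match", F.expr("poi_name rlike brand_stop"))
    .drop("_idx"))
result.show()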
I have a Spark dataframe for which I want to get some statistics:
stats_df = df.describe(['mycol'])
stats_df.show()
+-------+------------------+
|summary| mycol|
+-------+------------------+
| count| 300|
| mean| 2243|
| stddev| 319.419860456123|
| min| 1400|
| max| 3100|
+-------+------------------+
How do I extract the min and max values of mycol using the corresponding rows of the summary column? And how would I do it by numeric index?
You can easily assign a variable from a filter and select on that dataframe. Note that you need a column expression rather than a plain string comparison:
from pyspark.sql.functions import col
x = stats_df.where(col('summary') == 'min').select('mycol')
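To get the value itself rather than a one-row DataFrame, collect it; note that describe() returns its statistics as strings, so cast if you need a number (a short sketch):

min_val = float(stats_df.where(col('summary') == 'min').select('mycol').first()[0])
max_val = float(stats_df.where(col('summary') == 'max').select('mycol').first()[0])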
OK, let's consider the following example:
df = sqlContext.range(1, 1000).toDF('mycol')
df.describe().show()
# +-------+-----------------+
# |summary| mycol|
# +-------+-----------------+
# | count| 999|
# | mean| 500.0|
# | stddev|288.5307609250702|
# | min| 1|
# | max| 999|
# +-------+-----------------+
If you want to access the row for stddev, for example, you just need to convert the result to an RDD, collect it, and turn it into a dictionary, as follows:
stats = dict(df.describe().rdd.map(lambda r: (r.summary, r.mycol)).collect())
print(stats['stddev'])
# 288.5307609250702
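The same dictionary gives you min and max directly, and if you only need those two statistics you can also skip describe() and use the standard aggregate functions (a sketch):

print(stats['min'], stats['max'])
# 1 999

import pyspark.sql.functions as F
row = df.agg(F.min('mycol').alias('mn'), F.max('mycol').alias('mx')).first()
print(row.mn, row.mx)
# 1 999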