Conditional calculation with two datasets - PySpark

Imagine you have two datasets df and df2 like the following:
df:
ID Size Condition
1 2 1
2 3 0
3 5 0
4 7 1
df2:
aux_ID Scalar
1 2
3 2
I want to get an output where, if the Condition in df is 1, we multiply Size by the Scalar from df2 (matching ID to aux_ID) and then return df with the changed values.
I would like to do this as efficiently as possible, perhaps avoiding the join if that's possible.
output_df:
ID Size Condition
1 4 1
2 3 0
3 5 0
4 7 1

Not sure why you would want to avoid joins in the first place; they can be efficient in their own right.
With that said, this can easily be done by joining the two datasets and building a case-when expression against the condition.
Data Preparation
from io import StringIO

import pandas as pd
import pyspark.sql.functions as F

df1 = pd.read_csv(StringIO("""ID,Size,Condition
1,2,1
2,3,0
3,5,0
4,7,1
"""), delimiter=',')

df2 = pd.read_csv(StringIO("""aux_ID,Scalar
1,2
3,2
"""), delimiter=',')

# `sql` here is the SparkSession used throughout this answer
sparkDF1 = sql.createDataFrame(df1)
sparkDF2 = sql.createDataFrame(df2)

sparkDF1.show()
+---+----+---------+
| ID|Size|Condition|
+---+----+---------+
| 1| 2| 1|
| 2| 3| 0|
| 3| 5| 0|
| 4| 7| 1|
+---+----+---------+
sparkDF2.show()
+------+------+
|aux_ID|Scalar|
+------+------+
| 1| 2|
| 3| 2|
+------+------+
Case When
finalDF = sparkDF1.join(sparkDF2,
                        sparkDF1['ID'] == sparkDF2['aux_ID'],
                        'left')\
                  .select(sparkDF1['*'], sparkDF2['Scalar'], sparkDF2['aux_ID'])\
                  .withColumn('Size_Changed',
                              F.when((F.col('Condition') == 1) & (F.col('aux_ID').isNotNull()),
                                     F.col('Size') * F.col('Scalar'))
                               .otherwise(F.col('Size')))
finalDF.show()
+---+----+---------+------+------+------------+
| ID|Size|Condition|Scalar|aux_ID|Size_Changed|
+---+----+---------+------+------+------------+
| 1| 2| 1| 2| 1| 4|
| 3| 5| 0| 2| 3| 5|
| 2| 3| 0| null| null| 3|
| 4| 7| 1| null| null| 7|
+---+----+---------+------+------+------------+
You can drop the unnecessary columns; I kept them here for illustration.
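Since the question mentions wanting to avoid the join: if df2 is small enough to collect to the driver, one alternative is to turn it into a literal map and look the scalar up per row. This is only a sketch under that small-df2 assumption, not part of the original answer, and a broadcast join would usually perform similarly.
from itertools import chain
import pyspark.sql.functions as F

# Assumption: df2 (sparkDF2) is small enough to collect to the driver
scalar_map = F.create_map(*chain.from_iterable(
    (F.lit(row['aux_ID']), F.lit(row['Scalar'])) for row in sparkDF2.collect()
))

output_df = sparkDF1.withColumn(
    'Size',
    F.when((F.col('Condition') == 1) & scalar_map[F.col('ID')].isNotNull(),
           F.col('Size') * scalar_map[F.col('ID')])
     .otherwise(F.col('Size'))
)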

Related

Split rows in train test based on user id PySpark

I have a PySpark dataframe containing multiple rows for each user:
userId | action | time
1 | buy | 8 AM
1 | buy | 9 AM
1 | sell | 2 PM
1 | sell | 3 PM
2 | sell | 10 AM
2 | buy | 11 AM
2 | sell | 2 PM
2 | sell | 3 PM
My goal is to split this dataset into a training and a test set in such a way that, for each userId, N% of the rows are in the training set and the remaining (100-N)% are in the test set. For example, given N=75%, the training set will be
userId | action | time
1 | buy | 8 AM
1 | buy | 9 AM
1 | sell | 2 PM
2 | sell | 10 AM
2 | buy | 11 AM
2 | sell | 2 PM
and the test set will be
userId | action | time
1 | sell | 3 PM
2 | sell | 3 PM
Any suggestions? Rows are ordered by the time column, and I don't think Spark's randomSplit can help, as I cannot stratify the split on specific columns.
We had a similar requirement and solved it in the following way:
data = [
(1, "buy"),
(1, "buy"),
(1, "sell"),
(1, "sell"),
(2, "sell"),
(2, "buy"),
(2, "sell"),
(2, "sell"),
]
df = spark.createDataFrame(data, ["userId", "action"])
Use window functionality to create serial row numbers, and also compute the count of records for each userId. This will be helpful for computing the percentage of records to filter.
from pyspark.sql.window import Window
from pyspark.sql.functions import col, row_number
window = Window.partitionBy(df["userId"]).orderBy(df["userId"])
df_count = df.groupBy("userId").count().withColumnRenamed("userId", "userId_grp")
df = df.join(df_count, col("userId") == col("userId_grp"), "left").drop("userId_grp")
df = df.select("userId", "action", "count", row_number().over(window).alias("row_number"))
df.show()
+------+------+-----+----------+
|userId|action|count|row_number|
+------+------+-----+----------+
| 1| buy| 4| 1|
| 1| buy| 4| 2|
| 1| sell| 4| 3|
| 1| sell| 4| 4|
| 2| sell| 4| 1|
| 2| buy| 4| 2|
| 2| sell| 4| 3|
| 2| sell| 4| 4|
+------+------+-----+----------+
Filter training records by required percentage:
n = 75
df_train = df.filter(col("row_number") <= col("count") * n / 100)
df_train.show()
+------+------+-----+----------+
|userId|action|count|row_number|
+------+------+-----+----------+
| 1| buy| 4| 1|
| 1| buy| 4| 2|
| 1| sell| 4| 3|
| 2| sell| 4| 1|
| 2| buy| 4| 2|
| 2| sell| 4| 3|
+------+------+-----+----------+
And the remaining records go to the test set:
df_test = df.alias("df").join(
    df_train.alias("tr"),
    (col("df.userId") == col("tr.userId")) & (col("df.row_number") == col("tr.row_number")),
    "leftanti"
)
df_test.show()
+------+------+-----+----------+
|userId|action|count|row_number|
+------+------+-----+----------+
| 1| sell| 4| 4|
| 2| sell| 4| 4|
+------+------+-----+----------+
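A more compact variant (a sketch, not part of the original answer) is percent_rank over a window ordered by the time column from the question; it avoids the extra count join, assuming the time column sorts chronologically:
from pyspark.sql import Window
from pyspark.sql.functions import col, percent_rank

# Assumption: the DataFrame has the "time" column from the question and it sorts chronologically
w = Window.partitionBy("userId").orderBy("time")
ranked = df.withColumn("pct", percent_rank().over(w))

df_train = ranked.filter(col("pct") <= n / 100).drop("pct")
df_test = ranked.filter(col("pct") > n / 100).drop("pct")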
You can use ntile:
from pyspark.sql.functions import expr, col

ds = ds.withColumn("tile", expr("ntile(4) over (partition by id order by id)"))
The dataset where tile=4 is your test set, and tile<4 is your train set:
test = ds.filter(col("tile") == 4)
train = ds.filter(col("tile") < 4)
test.show()
+---+------+----+----+
| id|action|time|tile|
+---+------+----+----+
| 1| sell|3 PM| 4|
| 2| sell|3 PM| 4|
+---+------+----+----+
train.show()
+---+------+-----+----+
| id|action| time|tile|
+---+------+-----+----+
| 1| buy| 8 AM| 1|
| 1| buy| 9 AM| 2|
| 1| sell| 2 PM| 3|
| 2| sell|10 AM| 1|
| 2| buy|11 AM| 2|
| 2| sell| 2 PM| 3|
+---+------+-----+----+
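One caveat: partitioning and ordering by the same id column leaves the order of rows inside each partition effectively arbitrary, so if the last rows in time should land in the last tile, order by the time column instead. A sketch, assuming time is stored in a chronologically sortable form:
from pyspark.sql.functions import expr, col

# Assumption: "time" sorts chronologically (the "8 AM" strings from the question would not)
ds = ds.withColumn("tile", expr("ntile(4) over (partition by id order by time)"))
test = ds.filter(col("tile") == 4)
train = ds.filter(col("tile") < 4)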
Good luck!

how to group by only specific features using pyspark

I have this data frame
+---------+------+-----+-------------+-----+
| LCLid|KWH/hh|Acorn|Acorn_grouped|Month|
+---------+------+-----+-------------+-----+
|MAC000002| 0.0| 0| 0| 10|
|MAC000002| 0.0| 0| 0| 10|
|MAC000002| 0.0| 0| 0| 10|
I want to group by LCLid and Month and aggregate only the consumption (KWH/hh), in such a way that I get
+---------+-----+------------------+----------+------------------+
| LCLid|Month| sum(KWH/hh)|Acorn |Acorn_grouped |
+---------+-----+------------------+----------+------------------+
|MAC000003| 10| 904.9270009999999| 0 | 0 |
|MAC000022| 2|1672.5559999999978| 1 | 0 |
|MAC000019| 4| 368.4720001000007| 1 | 1 |
|MAC000022| 9|449.07699989999975| 0 | 1 |
|MAC000024| 8| 481.7160003000004| 1 | 0 |
but what I could do is use this code
dataset=dataset.groupBy("LCLid","Month").sum()
which gave me this result
+---------+-----+------------------+----------+------------------+----------+
| LCLid|Month| sum(KWH/hh)|sum(Acorn)|sum(Acorn_grouped)|sum(Month)|
+---------+-----+------------------+----------+------------------+----------+
|MAC000003| 10| 904.9270009999999| 2978| 2978| 29780|
|MAC000022| 2|1672.5559999999978| 12090| 4030| 8060|
|MAC000019| 4| 368.4720001000007| 20174| 2882| 11528|
|MAC000022| 9|449.07699989999975| 8646| 2882| 25938|
The problem is that the sum was also calculated on Acorn and Acorn_grouped.
Do you have any idea how I could apply the aggregation only to KWH/hh?
It depends on how you want to handle the other two columns. If you don't want to sum them and just want any value from each, you can do
import pyspark.sql.functions as F

dataset = dataset.groupBy("LCLid", "Month").agg(
    F.sum("KWH/hh"),
    F.first("Acorn").alias("Acorn"),
    F.first("Acorn_grouped").alias("Acorn_grouped")
)
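Alternatively, if Acorn and Acorn_grouped are constant within each (LCLid, Month) group (an assumption based on the expected output), you can simply make them part of the grouping key; a small sketch:
import pyspark.sql.functions as F

# Assumption: Acorn and Acorn_grouped are constant within each (LCLid, Month) group
dataset = dataset.groupBy("LCLid", "Month", "Acorn", "Acorn_grouped")\
    .agg(F.sum("KWH/hh").alias("sum_kwh_hh"))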

Search the rest columns of pyspark dataframe for values in column1

Suppose there is a pyspark dataframe of the form:
id col1 col2 col3 col4
------------------------
as1 4 10 4 6
as2 6 3 6 1
as3 6 0 2 1
as4 8 8 6 1
as5 9 6 6 9
Is there a way to search columns 2-4 of the PySpark dataframe for the values in col1 and return the (id row name, column name) pairs?
For instance:
In col1, 4 is found in (as1, col3)
In col1, 6 is found in (as2, col3), (as1, col4), (as4, col3), (as5, col3)
In col1, 8 is found in (as4,col2)
In col1, 9 is found in (as5,col4)
Hint: Assume that col1 will be a set {4, 6, 8, 9}, i.e. its values are unique.
Yes, you can leverage the Spark SQL .isin operator.
Let's first create the DataFrame in your example
Part 1 - Creating the DataFrame
from pyspark.sql.types import StructType, StructField, IntegerType

cSchema = StructType([StructField("id", IntegerType()),
                      StructField("col1", IntegerType()),
                      StructField("col2", IntegerType()),
                      StructField("col3", IntegerType()),
                      StructField("col4", IntegerType())])

test_data = [[1, 4, 10, 4, 6], [2, 6, 3, 6, 1], [3, 6, 0, 2, 1], [4, 8, 8, 6, 1], [5, 9, 6, 6, 9]]
df = spark.createDataFrame(test_data, schema=cSchema)
df.show()
+---+----+----+----+----+
| id|col1|col2|col3|col4|
+---+----+----+----+----+
| 1| 4| 10| 4| 6|
| 2| 6| 3| 6| 1|
| 3| 6| 0| 2| 1|
| 4| 8| 8| 6| 1|
| 5| 9| 6| 6| 9|
+---+----+----+----+----+
Part 2 - Function to Search for Matching Values
isin: A boolean expression that is evaluated to true if the value of this expression is contained by the evaluated values of the arguments.
http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html
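As a minimal standalone illustration of isin (the list of values here is just an example):
# Keep only the rows whose col3 value is in the given list
df.filter(df["col3"].isin([4, 6])).show()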
def search(col1, col3):
    col1_list = df.select(col1).rdd\
        .map(lambda x: x[0]).collect()
    search_results = df[df[col3].isin(col1_list)]
    return search_results

search_results = search("col1", "col3")
search_results.show()
+---+----+----+----+----+
| id|col1|col2|col3|col4|
+---+----+----+----+----+
| 1| 4| 10| 4| 6|
| 2| 6| 3| 6| 1|
| 4| 8| 8| 6| 1|
| 5| 9| 6| 6| 9|
+---+----+----+----+----+
This should guide you in the right direction. You can select just the id column, or whatever you are attempting to return, and the function can easily be changed to take more columns to search through. Hope this helps!
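If you need the exact (id, column name) pairs the question asks for rather than whole rows, one possible sketch (not part of the original answer) is to unpivot col2-col4 with stack and keep only the values that also appear in col1:
from pyspark.sql import functions as F

# Unpivot col2-col4 into (col_name, value) pairs, keeping the id
unpivoted = df.select(
    "id",
    F.expr("stack(3, 'col2', col2, 'col3', col3, 'col4', col4) as (col_name, value)")
)

# Distinct values of col1 to search for
col1_vals = df.select(F.col("col1").alias("value")).distinct()

# Rows whose value appears in col1, reported as (value, id, column name)
unpivoted.join(col1_vals, on="value", how="inner").select("value", "id", "col_name").show()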
# create the schema using a StructField list
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

cSchema = StructType([StructField("id", StringType()),
                      StructField("col1", IntegerType()),
                      StructField("col2", IntegerType()),
                      StructField("col3", IntegerType()),
                      StructField("col4", IntegerType())])

test_data = [['as1', 4, 10, 4, 6],
             ['as2', 6, 3, 6, 1],
             ['as3', 6, 0, 2, 1],
             ['as4', 8, 8, 6, 1],
             ['as5', 9, 6, 6, 9]]

# create pyspark dataframe
df = spark.createDataFrame(test_data, schema=cSchema)
df.show()

# obtain the distinct items for col1
distinct_list = [i.col1 for i in df.select("col1").distinct().collect()]

# the remaining columns
col_list = ['id', 'col2', 'col3', 'col4']

# search the remaining columns for the values found in col1
def search(distinct_list):
    for i in distinct_list:
        print(str(i) + ' found in: ')
        # for col in df.columns:
        for col in col_list:
            df_search = df.select(*col_list) \
                          .filter(df[str(col)] == str(i))
            if len(df_search.head(1)) > 0:
                df_search.show()

search(distinct_list)
Find the full example code on GitHub.
Output:
+---+----+----+----+----+
| id|col1|col2|col3|col4|
+---+----+----+----+----+
|as1| 4| 10| 4| 6|
|as2| 6| 3| 6| 1|
|as3| 6| 0| 2| 1|
|as4| 8| 8| 6| 1|
|as5| 9| 6| 6| 9|
+---+----+----+----+----+
6 found in:
+---+----+----+----+
| id|col2|col3|col4|
+---+----+----+----+
|as5| 6| 6| 9|
+---+----+----+----+
+---+----+----+----+
| id|col2|col3|col4|
+---+----+----+----+
|as2| 3| 6| 1|
|as4| 8| 6| 1|
|as5| 6| 6| 9|
+---+----+----+----+
+---+----+----+----+
| id|col2|col3|col4|
+---+----+----+----+
|as1| 10| 4| 6|
+---+----+----+----+
9 found in:
+---+----+----+----+
| id|col2|col3|col4|
+---+----+----+----+
|as5| 6| 6| 9|
+---+----+----+----+
4 found in:
+---+----+----+----+
| id|col2|col3|col4|
+---+----+----+----+
|as1| 10| 4| 6|
+---+----+----+----+
8 found in:
+---+----+----+----+
| id|col2|col3|col4|
+---+----+----+----+
|as4| 8| 6| 1|
+---+----+----+----+

Combine two rows in Pyspark if a condition is met

I have a PySpark data table that looks like the following
shouldMerge | number
true | 1
true | 1
true | 2
false | 3
false | 1
I want to combine all of the rows with shouldMerge as true and add up the numbers.
so the final output would look like
shouldMerge | number
true | 4
false | 3
false | 1
How can I select all the ones with shouldMerge == true, add up the numbers, and generate a new row in PySpark?
Edit: an alternate, slightly more complicated scenario closer to what I'm trying to solve, where we only aggregate rows with positive mergeIds:
mergeId | number
1 | 1
2 | 1
1 | 2
-1 | 3
-1 | 1
mergeId | number
1 | 3
2 | 1
-1 | 3
-1 | 1
IIUC, you want to do a groupBy but only on the positive mergeIds.
One way is to filter your DataFrame for the positive ids, group, aggregate, and union this back with the negative ids (similar to #shanmuga's answer).
Another way would be to use when to dynamically create a grouping key. If the mergeId is positive, use the mergeId to group. Otherwise, use a monotonically_increasing_id to ensure that the row does not get aggregated.
Here is an example:
import pyspark.sql.functions as f

df.withColumn("uid", f.monotonically_increasing_id())\
    .groupBy(
        f.when(
            f.col("mergeId") > 0,
            f.col("mergeId")
        ).otherwise(f.col("uid")).alias("mergeKey"),
        f.col("mergeId")
    )\
    .agg(f.sum("number").alias("number"))\
    .drop("mergeKey")\
    .show()
#+-------+------+
#|mergeId|number|
#+-------+------+
#| -1| 1.0|
#| 1| 3.0|
#| 2| 1.0|
#| -1| 3.0|
#+-------+------+
This can easily be generalized by changing the when condition (in this case it's f.col("mergeId") > 0) to match your specific requirements.
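For example, if rows should only be merged for an explicit whitelist of ids (a hypothetical requirement), the grouping key could be built like this:
# Hypothetical: only merge rows whose mergeId is in a whitelist
merge_ids = [1, 2]
merge_key = f.when(f.col("mergeId").isin(merge_ids), f.col("mergeId"))\
             .otherwise(f.col("uid"))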
Explanation:
First we create a temporary column uid which is a unique ID for each row. Next, we call groupBy and if the mergeId is positive use the mergeId to group. Otherwise we use the uid as the mergeKey. I also passed in the mergeId as a second group by column as a way to keep that column for the output.
To demonstrate what is going on, take a look at the intermediate result:
df.withColumn("uid", f.monotonically_increasing_id())\
.withColumn(
"mergeKey",
f.when(
f.col("mergeId") > 0,
f.col("mergeId")
).otherwise(f.col("uid")).alias("mergeKey")
)\
.show()
#+-------+------+-----------+-----------+
#|mergeId|number| uid| mergeKey|
#+-------+------+-----------+-----------+
#| 1| 1| 0| 1|
#| 2| 1| 8589934592| 2|
#| 1| 2|17179869184| 1|
#| -1| 3|25769803776|25769803776|
#| -1| 1|25769803777|25769803777|
#+-------+------+-----------+-----------+
As you can see, the mergeKey remains the unique value for the negative mergeIds.
From this intermediate step, the desired result is just a trivial group by and sum, followed by dropping the mergeKey column.
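Spelled out, that last step looks roughly like this, where df_keyed is a hypothetical name for the intermediate DataFrame shown above:
# df_keyed: hypothetical name for the intermediate result with uid and mergeKey columns
result = df_keyed.groupBy("mergeKey", "mergeId")\
    .agg(f.sum("number").alias("number"))\
    .drop("mergeKey")
result.show()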
You will have to filter out only the rows where shouldMerge is true and aggregate them, then union this with all the remaining rows.
import pyspark.sql.functions as functions
df = sqlContext.createDataFrame([
(True, 1),
(True, 1),
(True, 2),
(False, 3),
(False, 1),
], ("shouldMerge", "number"))
false_df = df.filter("shouldMerge = false")
true_df = df.filter("shouldMerge = true")

result = true_df.groupBy("shouldMerge")\
    .agg(functions.sum("number").alias("number"))\
    .unionAll(false_df)

df = sqlContext.createDataFrame([
    (1, 1),
    (2, 1),
    (1, 2),
    (-1, 3),
    (-1, 1),
], ("mergeId", "number"))

merge_condition = df["mergeId"] > -1
remaining = ~merge_condition
groupby_field = "mergeId"

false_df = df.filter(remaining)
true_df = df.filter(merge_condition)

result = true_df.groupBy(groupby_field)\
    .agg(functions.sum("number").alias("number"))\
    .unionAll(false_df)
result.show()
The first problem posted by the OP.
# Create the DataFrame
valuesCol = [(True,1),(True,1),(True,2),(False,3),(False,1)]
df = sqlContext.createDataFrame(valuesCol,['shouldMerge','number'])
df.show()
+-----------+------+
|shouldMerge|number|
+-----------+------+
| true| 1|
| true| 1|
| true| 2|
| false| 3|
| false| 1|
+-----------+------+
# Packages to be imported
from pyspark.sql.window import Window
from pyspark.sql.functions import when, col, lag
# Register the dataframe as a view
df.registerTempTable('table_view')
df=sqlContext.sql(
'select shouldMerge, number, sum(number) over (partition by shouldMerge) as sum_number from table_view'
)
df = df.withColumn('number',when(col('shouldMerge')==True,col('sum_number')).otherwise(col('number')))
df.show()
+-----------+------+----------+
|shouldMerge|number|sum_number|
+-----------+------+----------+
| true| 4| 4|
| true| 4| 4|
| true| 4| 4|
| false| 3| 4|
| false| 1| 4|
+-----------+------+----------+
df = df.drop('sum_number')
my_window = Window.partitionBy().orderBy('shouldMerge')
df = df.withColumn('shouldMerge_lag', lag(col('shouldMerge'),1).over(my_window))
df.show()
+-----------+------+---------------+
|shouldMerge|number|shouldMerge_lag|
+-----------+------+---------------+
| false| 3| null|
| false| 1| false|
| true| 4| false|
| true| 4| true|
| true| 4| true|
+-----------+------+---------------+
df = df.where(~((col('shouldMerge')==True) & (col('shouldMerge_lag')==True))).drop('shouldMerge_lag')
df.show()
+-----------+------+
|shouldMerge|number|
+-----------+------+
| false| 3|
| false| 1|
| true| 4|
+-----------+------+
For the second problem posted by the OP
# Create the DataFrame
valuesCol = [(1,2),(1,1),(2,1),(1,2),(-1,3),(-1,1)]
df = sqlContext.createDataFrame(valuesCol,['mergeId','number'])
df.show()
+-------+------+
|mergeId|number|
+-------+------+
| 1| 2|
| 1| 1|
| 2| 1|
| 1| 2|
| -1| 3|
| -1| 1|
+-------+------+
# Packages to be imported
from pyspark.sql.window import Window
from pyspark.sql.functions import when, col, lag
# Register the dataframe as a view
df.registerTempTable('table_view')
df=sqlContext.sql(
'select mergeId, number, sum(number) over (partition by mergeId) as sum_number from table_view'
)
df = df.withColumn('number',when(col('mergeId') > 0,col('sum_number')).otherwise(col('number')))
df.show()
+-------+------+----------+
|mergeId|number|sum_number|
+-------+------+----------+
| 1| 5| 5|
| 1| 5| 5|
| 1| 5| 5|
| 2| 1| 1|
| -1| 3| 4|
| -1| 1| 4|
+-------+------+----------+
df = df.drop('sum_number')
my_window = Window.partitionBy('mergeId').orderBy('mergeId')
df = df.withColumn('mergeId_lag', lag(col('mergeId'),1).over(my_window))
df.show()
+-------+------+-----------+
|mergeId|number|mergeId_lag|
+-------+------+-----------+
| 1| 5| null|
| 1| 5| 1|
| 1| 5| 1|
| 2| 1| null|
| -1| 3| null|
| -1| 1| -1|
+-------+------+-----------+
df = df.where(~((col('mergeId') > 0) & (col('mergeId_lag').isNotNull()))).drop('mergeId_lag')
df.show()
+-------+------+
|mergeId|number|
+-------+------+
| 1| 5|
| 2| 1|
| -1| 3|
| -1| 1|
+-------+------+
Documentation: lag() - Returns the value that is offset rows before the current row.

Partition pyspark dataframe based on the change in column value

I have a dataframe in PySpark.
Say it has some columns a, b, c...
I want to group the data into groups as the value of a column changes. Say
A B
1 x
1 y
0 x
0 y
0 x
1 y
1 x
1 y
There will be 3 groups: (1x,1y), (0x,0y,0x), (1y,1x,1y), and the corresponding row data.
If I understand correctly you want to create a distinct group every time column A changes values.
First we'll create a monotonically increasing id to keep the row order as it is:
import pyspark.sql.functions as psf
df = sc.parallelize([[1,'x'],[1,'y'],[0,'x'],[0,'y'],[0,'x'],[1,'y'],[1,'x'],[1,'y']])\
.toDF(['A', 'B'])\
.withColumn("rn", psf.monotonically_increasing_id())
df.show()
+---+---+----------+
| A| B| rn|
+---+---+----------+
| 1| x| 0|
| 1| y| 1|
| 0| x| 2|
| 0| y| 3|
| 0| x|8589934592|
| 1| y|8589934593|
| 1| x|8589934594|
| 1| y|8589934595|
+---+---+----------+
Now we'll use a window function to create a column that contains 1 every time column A changes:
from pyspark.sql import Window
w = Window.orderBy('rn')
df = df.withColumn("changed", (df.A != psf.lag('A', 1, 0).over(w)).cast('int'))
+---+---+----------+-------+
| A| B| rn|changed|
+---+---+----------+-------+
| 1| x| 0| 1|
| 1| y| 1| 0|
| 0| x| 2| 1|
| 0| y| 3| 0|
| 0| x|8589934592| 0|
| 1| y|8589934593| 1|
| 1| x|8589934594| 0|
| 1| y|8589934595| 0|
+---+---+----------+-------+
Finally we'll use another window function to allocate different numbers to each group:
df = df.withColumn("group_id", psf.sum("changed").over(w)).drop("rn").drop("changed")
+---+---+--------+
| A| B|group_id|
+---+---+--------+
| 1| x| 1|
| 1| y| 1|
| 0| x| 2|
| 0| y| 2|
| 0| x| 2|
| 1| y| 3|
| 1| x| 3|
| 1| y| 3|
+---+---+--------+
Now you can build your groups.
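For example, to materialize each group's rows (a minimal sketch continuing from the DataFrame above):
# Collect the rows belonging to each group
df.groupBy("group_id")\
    .agg(psf.collect_list(psf.struct("A", "B")).alias("rows"))\
    .show(truncate=False)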
