I'm writing some code to return a unique ID for each event that comes in within a given version. The value can repeat in a future version, as the version prefix will change. I have the version information, but I'm struggling to generate the uid. I found some code that seems to produce what I need (found here) and have adapted it for my purposes, but I'm facing an issue.
I have the information I need as a dataframe, but when I run the code it returns the same value for every row. I suspect the issue stems from how I am using the used set from the example: it isn't being persisted properly, which is why the same value comes back each time.
Can anyone provide a hint on where to look? I can't work out how to persist the state so the value changes for each row. Side note: I can't use Pandas, so I can't use its udf functionality, and the uuid module is no good because the requirement is to keep the ID short enough for easy human typing when searching. I've posted the code below.
import itertools
import string

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

@udf(returnType=StringType())
def uid_generator(id_column):
    valid_chars = set(string.ascii_lowercase + string.digits) - set('lio01')
    used = set()
    unique_id_generator = itertools.combinations(valid_chars, 6)
    uid = "".join(next(unique_id_generator)).upper()
    while uid in used:
        uid = "".join(next(unique_id_generator))
    return uid
    used.add(uid)

#uuid_udf = udf(uuid_generator,)

df2 = df_uid_register_input.withColumn('uid', uid_generator(df_uid_register_input.record))
The output shows the same uid value repeated for every row.
In the function definition you have the argument id_column, but you never use it in the function body. It also seems that you haven't tried to use the version column.
What may be easier for you is not to aim for true uniqueness, but to use one of the hash functions. In theory they don't guarantee unique results, but in practice it's just ridiculously unlikely that you would get the same hash for different inputs.
from pyspark.sql import functions as F

df = spark.createDataFrame(
    [(1, 1, 2),
     (2, 1, 2),
     (3, 1, 2)],
    ['record', 'job_id', 'version'])

df = df.select(
    '*',
    F.sha1(F.concat_ws('_', 'record', 'version')).alias('uid1'),
    F.sha2(F.concat_ws('_', 'record', 'version'), 0).alias('uid2'),
    F.md5(F.concat_ws('_', 'record', 'version')).alias('uid3'),
)
df.show()
# +------+------+-------+--------------------+--------------------+--------------------+
# |record|job_id|version| uid1| uid2| uid3|
# +------+------+-------+--------------------+--------------------+--------------------+
# | 1| 1| 2|486cbd63f94d703d2...|0c79023f435b2e9e6...|ab35e84a215f0f711...|
# | 2| 1| 2|f5d7b663eea5f2e69...|48fccc7ee00b72959...|5229803558d4b7895...|
# | 3| 1| 2|982bde375462792cb...|ad9a5c5fb1bc135d8...|dfe3a334fc99f298a...|
# +------+------+-------+--------------------+--------------------+--------------------+
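If the IDs also need to stay short enough to type by hand (as required in the question), a hedged variant of the same idea is to keep only a prefix of the hash and put the version in front of it. This is only a sketch: truncating a hash raises the (still small) chance of collisions, so check for duplicates if that matters.
df = df.withColumn(
    'short_uid',
    F.concat_ws(
        '-',
        F.col('version'),
        # first 8 hex characters of the sha2 hash; adjust the length to taste
        F.substring(F.sha2(F.concat_ws('_', 'record', 'version'), 256), 1, 8),
    ),
)
df.show(truncate=False)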
Related
Given a PySpark DataFrame is it possible to obtain a list of source columns that are being referenced by the DataFrame?
Perhaps a more concrete example might help explain what I'm after. Say I have a DataFrame defined as:
import pyspark.sql.functions as func
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
source_df = spark.createDataFrame(
    [("pru", 23, "finance"), ("paul", 26, "HR"), ("noel", 20, "HR")],
    ["name", "age", "department"],
)
source_df.createOrReplaceTempView("people")
sqlDF = spark.sql("SELECT name, age, department FROM people")
df = sqlDF.groupBy("department").agg(func.max("age").alias("max_age"))
df.show()
which returns:
+----------+--------+
|department|max_age |
+----------+--------+
| finance| 23|
| HR| 26|
+----------+--------+
The columns that are referenced by df are [department, age]. Is it possible to get that list of referenced columns programmatically?
Thanks to Capturing the result of explain() in pyspark I know I can extract the plan as a string:
df._sc._jvm.PythonSQLUtils.explainString(df._jdf.queryExecution(), "formatted")
which returns:
== Physical Plan ==
AdaptiveSparkPlan (6)
+- HashAggregate (5)
   +- Exchange (4)
      +- HashAggregate (3)
         +- Project (2)
            +- Scan ExistingRDD (1)
(1) Scan ExistingRDD
Output [3]: [name#0, age#1L, department#2]
Arguments: [name#0, age#1L, department#2], MapPartitionsRDD[4] at applySchemaToPythonRDD at NativeMethodAccessorImpl.java:0, ExistingRDD, UnknownPartitioning(0)
(2) Project
Output [2]: [age#1L, department#2]
Input [3]: [name#0, age#1L, department#2]
(3) HashAggregate
Input [2]: [age#1L, department#2]
Keys [1]: [department#2]
Functions [1]: [partial_max(age#1L)]
Aggregate Attributes [1]: [max#22L]
Results [2]: [department#2, max#23L]
(4) Exchange
Input [2]: [department#2, max#23L]
Arguments: hashpartitioning(department#2, 200), ENSURE_REQUIREMENTS, [plan_id=60]
(5) HashAggregate
Input [2]: [department#2, max#23L]
Keys [1]: [department#2]
Functions [1]: [max(age#1L)]
Aggregate Attributes [1]: [max(age#1L)#12L]
Results [2]: [department#2, max(age#1L)#12L AS max_age#13L]
(6) AdaptiveSparkPlan
Output [2]: [department#2, max_age#13L]
Arguments: isFinalPlan=false
which is useful, however it's not what I need: I need a list of the referenced columns. Is this possible?
Perhaps another way of asking the question is... is there a way to obtain the explain plan as an object that I can iterate over/explore?
UPDATE: Thanks to the reply from @matt-andruff I have gotten this:
df._jdf.queryExecution().executedPlan().treeString().split("+-")[-2]
which returns:
' Project [age#1L, department#2]\n '
from which I guess I could parse out the information I'm after, but this is a far from elegant way to do it and is particularly error-prone.
What I'm really after is a failsafe, reliable, API-supported way to get this information. I'm starting to think it isn't possible.
There is an object for that; unfortunately it's a Java object and it isn't translated to PySpark.
You can still access it with Spark constructs:
>>> df._jdf.queryExecution().executedPlan().apply(0).output().apply(0).toString()
u'department#1621'
>>> df._jdf.queryExecution().executedPlan().apply(0).output().apply(1).toString()
u'max_age#1632L'
You could loop through both of the above apply calls to get the information you are looking for, with something like:
plan = df._jdf.queryExecution().executedPlan()
steps = [ plan.apply(i) for i in range(1,100) if not isinstance(plan.apply(i), type(None)) ]
iterator = steps[0].inputSet().iterator()
>>> iterator.next().toString()
u'department#1621'
>>> iterator.next().toString()
u'max#1642L'
steps = [ plan.apply(i) for i in range(1,100) if not isinstance(plan.apply(i), type(None)) ]
projections = [ (steps[0].p(i).toJSON().encode('ascii','ignore')) for i in range(1,100) if not( isinstance(steps[0].p(i), type(None) )) and steps[0].p(i).nodeName().encode('ascii','ignore') == 'Project' ]
rdd = spark.sparkContext.parallelize(projections)
df2 = spark.read.json(rdd)
>>> df2.show(1,False)
+-----+------------------------------------------+----+------------+------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----+
|child|class |name|num-children|output|outputOrdering|outputPartitioning|projectList |rdd |
+-----+------------------------------------------+----+------------+------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----+
|0 |org.apache.spark.sql.execution.ProjectExec|null|1 |null |null |null |[[[org.apache.spark.sql.catalyst.expressions.AttributeReference, long, [1620, 4ad48da6-03cf-45d4-9b35-76ac246fadac, org.apache.spark.sql.catalyst.expressions.ExprId], age, true, 0, [people]]], [[org.apache.spark.sql.catalyst.expressions.AttributeReference, string, [1621, 4ad48da6-03cf-45d4-9b35-76ac246fadac, org.apache.spark.sql.catalyst.expressions.ExprId], department, true, 0, [people]]]]|null|
+-----+------------------------------------------+----+------------+------+--------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----+
df2.select(func.explode(func.col('projectList'))).select(func.col('col')[0]["name"]).show(100, False)
+-----------+
|col[0].name|
+-----------+
|age |
|department |
+-----------+
The range(1, 100) is a bit of a hack, but apparently size doesn't work; I'm sure that with more time I could refine the range hack.
You can then use the JSON to pull the information programmatically.
I have something that, while not being an answer to my original question (see Matt Andruff's answer for that), could still be useful here. It's a way to get all the source columns referenced by a pyspark.sql.column.Column.
Simple repro:
from pyspark.sql import functions as f, SparkSession
SparkSession.builder.getOrCreate()
col = f.concat(f.col("A"), f.col("B"))
type(col)
col._jc.expr().references().toList().toString()
returns:
<class 'pyspark.sql.column.Column'>
"List('A, 'B)"
It's definitely not perfect: it still requires you to parse the column names out of the string that is returned, but at least the information I'm after is available. There might be more methods on the object returned from references() that make the parsing easier, but if there are, I haven't found them!
Here is a function I wrote to do the parsing
def parse_references(references: str):
    return sorted(
        "".join(
            references.replace("'", "")
            .replace("List(", "")
            .replace(")", "")
            .split()
        ).split(",")
    )

assert parse_references("List('A, 'B)") == ["A", "B"]
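An alternative sketch that avoids the string parsing entirely, under the assumption that the AttributeSet returned by references() exposes the usual Scala iterator of Attribute objects (the same internal, version-dependent JVM API used in the answer above):
def referenced_column_names(column):
    # Walk the JVM-side AttributeSet and collect the attribute names directly.
    names = []
    it = column._jc.expr().references().iterator()
    while it.hasNext():
        names.append(it.next().name())
    return sorted(names)

assert referenced_column_names(f.concat(f.col("A"), f.col("B"))) == ["A", "B"]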
PySpark is not really designed for such lower-level tricks; they call for Scala, which Spark is developed in and which therefore exposes everything there is.
The step where you access QueryExecution is the main entry point to the machinery of Spark SQL's query execution engine.
The issue is that py4j (which is used as the bridge between the JVM and Python environments) makes it of little use on PySpark's side.
You can use the following if you need to access the final query plan (just before it's converted into RDDs):
df._jdf.queryExecution().executedPlan().prettyJson()
Review the QueryExecution API.
QueryExecutionListener
You should really consider Scala if you want to intercept whatever you need about your queries, and QueryExecutionListener seems a fairly viable starting point.
There is more, but it's all on the Scala side :)
What I'm really after is a failsafe, reliable, API-supported way to get this information. I'm starting to think it isn't possible.
I'm not surprised, since you're throwing away the best possible answer: Scala. I'd recommend using it for a PoC to see what you can get, and only then (if you have to) look for a Python solution (which I think is doable, yet highly error-prone).
You can try the code below; it will give you the list of columns and their data types in the DataFrame.
for field in df.schema.fields:
    print(field.name + " , " + str(field.dataType))
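As a small aside (not part of the answer above): if only the names are needed, df.columns already returns them as a plain Python list, and the same schema fields can be turned into a name-to-type mapping:
col_types = {field.name: field.dataType for field in df.schema.fields}
print(df.columns)   # just the column names
print(col_types)    # column names mapped to their data types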
I have a source dataframe with some records, and I want to perform an operation on each row of this dataframe. For this purpose I used the rdd.map function. However, looking at the logs recorded using accumulators, it looks like the mapped function was called multiple times for some rows. As per the documentation, it should be called ONLY once.
I tried replicating the issue in a small script and noticed the same behavior. This script is shown below:
import os
import sys

os.environ['SPARK_HOME'] = "/usr/lib/spark/"
sys.path.append("/usr/lib/spark/python/")

from pyspark.sql import *
from pyspark.accumulators import AccumulatorParam

class StringAccumulatorParam(AccumulatorParam):
    def zero(self, initialValue=""):
        return ""

    def addInPlace(self, s1, s2):
        return s1.strip() + " " + s2.strip()

def mapped_func(row, logging_acc):
    logging_acc += "Started map"
    logging_acc += str(row)
    return "test"

if __name__ == "__main__":
    spark_session = SparkSession.builder.enableHiveSupport().appName("rest-api").getOrCreate()
    sc = spark_session.sparkContext
    df = spark_session.sql("select col1, col2, col3, col4, col5, col6 from proj1_db.dw_table where col3='P1'")
    df.show()

    logging_acc = sc.accumulator("", StringAccumulatorParam())
    result_rdd = df.rdd.map(lambda row: Row(row, mapped_func(row, logging_acc)))
    result_rdd.toDF().show()
    print "logs: " + str(logging_acc.value)
Below is the relevant piece of output:
+----+----+----+----+----+----+
|col1|col2|col3|col4|col5|col6|
+----+----+----+----+----+----+
| 1| 1| P1| 2| 10| 20|
| 3| 1| P1| 1| 25| 25|
+----+----+----+----+----+----+
+--------------------+----+
| _1| _2|
+--------------------+----+
|[1, 1, P1, 2, 10,...|test|
|[3, 1, P1, 1, 25,...|test|
+--------------------+----+
logs: Started map Row(col1=1, col2=1, col3=u'P1', col4=2, col5=10, col6=20) Started map Row(col1=1, col2=1, col3=u'P1', col4=2, col5=10, col6=20) Started map Row(col1=3, col2=1, col3=u'P1', col4=1, col5=25, col6=25)
The first table is the source dataframe and the second is the resultant dataframe created after the map function call.
As seen, the function is called twice for the first row. Can anyone please help me understand what is happening and how we can make sure the mapped function is called ONLY once per row?
As per the documentation, it should be called once ONLY.
That's really not the case. Any transformation can be executed an arbitrary number of times (typically in case of failures or to support secondary logic), and the documentation says explicitly that:
For accumulator updates performed inside actions only, Spark guarantees that each task’s update to the accumulator will only be applied once
So, implicitly, accumulators used inside transformations (like map) can be updated multiple times per task.
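If you only care about the counter itself, a minimal sketch (reusing sc and df from the script above) is to update the accumulator inside an action such as foreach, where the once-per-task guarantee quoted above applies:
rows_seen = sc.accumulator(0)
df.rdd.foreach(lambda row: rows_seen.add(1))  # foreach is an action, not a transformation
print("rows seen: " + str(rows_seen.value))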
In your case the multiple executions happen because you don't provide a schema when you convert the RDD to a DataFrame. In that case Spark will perform another scan of the data just to infer the schema, i.e.
spark.createDataFrame(result_rdd, schema)
That, however, will only address this particular issue; the general point about transformation and accumulator behavior still stands.
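For illustration, a sketch of what providing the schema looks like, using a simplified mapped RDD (the field names and types here are illustrative; adjust them to your data). With an explicit schema, Spark does not rescan the data to infer one, so the mapping is not re-executed for that purpose:
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

mapped_rdd = df.rdd.map(lambda row: (row.col1, mapped_func(row, logging_acc)))
schema = StructType([
    StructField("col1", IntegerType()),    # adjust types to match your data
    StructField("result", StringType()),
])
result_df = spark_session.createDataFrame(mapped_rdd, schema)
result_df.show()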
I'm trying to create a UDF that performs interpolation, but the function returns a Series (with an index) and throws an exception.
from pyspark.sql import functions as F
from pyspark.sql.types import FloatType
from pyspark.sql.window import Window

@F.pandas_udf(FloatType(), F.PandasUDFType.GROUPED_AGG)
def udf_interpolate(v):
    return v.interpolate('linear')
## Test data
df = spark.createDataFrame([
    ("charles", 1),
    ("charles", None),
    ("charles", 3),
], ["name", "value"])
window = Window.partitionBy('name').rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
df.withColumn('test_interp', udf_interpolate(df.value).over(window)).show()
The error message:
pyarrow.lib.ArrowInvalid: Could not convert 0 3.0
1 2.0
2 1.0
Name: _0, dtype: float64 with type Series: tried to convert to float32
I tried to force the conversion to float32, but the error persists. My initial idea is that it's because I'm returning a Series with multiple values where a single value is expected, but I don't know exactly how to solve this problem.
If I change my function to return, for example, v.mean(), it works fine.
Appreciate any help.
Thanks.
GROUPED_AGG requires the UDF to return a scalar. In your case it's better to use a GROUPED_MAP, since you are returning a Series and need to perform the calculation by group: essentially you pass a sub-dataframe for each name to the pandas_udf, transform it with the pandas API, and return the transformed dataframe:
@F.pandas_udf(df.schema, F.PandasUDFType.GROUPED_MAP)
def udf_interpolate(g):
    return g.assign(value=g.value.interpolate('linear'))

df.groupby('name').apply(udf_interpolate).show()
+-------+-----+
| name|value|
+-------+-----+
|charles| 1|
|charles| 2|
|charles| 3|
+-------+-----+
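On Spark 3.0+ the same fix can also be written with applyInPandas, the newer way of expressing grouped-map operations. A minimal sketch reusing the df from the question (the DDL schema string here is my own choice):
def interpolate_group(pdf):
    # pdf is the pandas DataFrame holding all rows for one 'name' group
    return pdf.assign(value=pdf.value.interpolate('linear'))

df.groupby('name').applyInPandas(interpolate_group, schema="name string, value double").show()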
I have a PySpark DataFrame and I have tried many examples showing how to create a new column based on operations with existing columns, but none of them seem to work.
So I have t̶w̶o̶ one questions:
1- Why doesn't this code work?
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
import pyspark.sql.functions as F
sc = SparkContext()
sqlContext = SQLContext(sc)
a = sqlContext.createDataFrame([(5, 5, 3)], ['A', 'B', 'C'])
a.withColumn('my_sum', F.sum(a[col] for col in a.columns)).show()
I get the error:
TypeError: Column is not iterable
EDIT: Answer 1
I found out how to make this work. I have to use the native Python sum function: a.withColumn('my_sum', sum(a[col] for col in a.columns)).show(). It works, but I have no idea why.
2- If there is a way to make this sum work, how can I write a udf function to do this (and add the result to a new column of a DataFrame)?
import numpy as np

def my_dif(row):
    d = np.diff(row)  # creates an array of differences element by element
    return d.mean()   # returns the mean of the array
I am using Python 3.6.1 and Spark 2.1.1.
Thank you!
from pyspark.sql.types import IntegerType

a = sqlContext.createDataFrame([(5, 5, 3)], ['A', 'B', 'C'])
a = a.withColumn('my_sum', F.UserDefinedFunction(lambda *args: sum(args), IntegerType())(*a.columns))
a.show()
+---+---+---+------+
| A| B| C|my_sum|
+---+---+---+------+
| 5| 5| 3| 13|
+---+---+---+------+
Your problem is in this part, for col in a.columns, because a Column is not iterable, so you must do:
a = a.withColumn('my_sum', a.A + a.B + a.C)
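If there are many columns, the same hard-coded sum can be generalized with a plain Python reduce over the column expressions; a sketch using the columns from the question:
from functools import reduce
import operator

cols_to_sum = ['A', 'B', 'C']  # list the columns explicitly, or filter a.columns
a = a.withColumn('my_sum', reduce(operator.add, [F.col(c) for c in cols_to_sum]))
a.show()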
I have this python code that runs locally in a pandas dataframe:
df_result = pd.DataFrame(df
                         .groupby('A')
                         .apply(lambda x: myFunction(zip(x.B, x.C), x.name)))
I would like to run this in PySpark, but I'm having trouble dealing with the pyspark.sql.group.GroupedData object.
I've tried the following:
sparkDF
.groupby('A')
.agg(myFunction(zip('B', 'C'), 'A'))
which returns
KeyError: 'A'
I presume because 'A' is no longer a column and I can't find the equivalent for x.name.
And then
sparkDF
.groupby('A')
.map(lambda row: Row(myFunction(zip('B', 'C'), 'A')))
.toDF()
but get the following error:
AttributeError: 'GroupedData' object has no attribute 'map'
Any suggestions would be really appreciated!
Since Spark 2.3 you can use pandas_udf. GROUPED_MAP takes Callable[[pandas.DataFrame], pandas.DataFrame] or in other words a function which maps from Pandas DataFrame of the same shape as the input, to the output DataFrame.
For example if data looks like this:
df = spark.createDataFrame(
    [("a", 1, 0), ("a", -1, 42), ("b", 3, -1), ("b", 10, -2)],
    ("key", "value1", "value2")
)
and you want to compute the average value of the pairwise min between value1 and value2, you have to define the output schema:
from pyspark.sql.types import *

schema = StructType([
    StructField("key", StringType()),
    StructField("avg_min", DoubleType())
])
pandas_udf:
import pandas as pd
from pyspark.sql.functions import pandas_udf
from pyspark.sql.functions import PandasUDFType

@pandas_udf(schema, functionType=PandasUDFType.GROUPED_MAP)
def g(df):
    result = pd.DataFrame(df.groupby(df.key).apply(
        lambda x: x.loc[:, ["value1", "value2"]].min(axis=1).mean()
    ))
    result.reset_index(inplace=True, drop=False)
    return result
and apply it:
df.groupby("key").apply(g).show()
+---+-------+
|key|avg_min|
+---+-------+
| b| -1.5|
| a| -0.5|
+---+-------+
Excluding schema definition and decorator, your current Pandas code can be applied as-is.
Since Spark 2.4.0 there is also GROUPED_AGG variant, which takes Callable[[pandas.Series, ...], T], where T is a primitive scalar:
import numpy as np

@pandas_udf(DoubleType(), functionType=PandasUDFType.GROUPED_AGG)
def f(x, y):
    return np.minimum(x, y).mean()
which can be used with the standard groupBy / agg construct:
df.groupBy("key").agg(f("value1", "value2").alias("avg_min")).show()
+---+-------+
|key|avg_min|
+---+-------+
| b| -1.5|
| a| -0.5|
+---+-------+
Please note that neither GROUPED_MAP nor GROUPED_AGG pandas_udf behave the same way as UserDefinedAggregateFunction or Aggregator; they are closer to groupByKey or window functions with an unbounded frame. Data is shuffled first, and only after that is the UDF applied.
For optimized execution you should implement Scala UserDefinedAggregateFunction and add Python wrapper.
See also User defined function to be applied to Window in PySpark?
What you are trying to do is write a UDAF (User Defined Aggregate Function), as opposed to a UDF (User Defined Function). UDAFs are functions that work on data grouped by a key. Specifically, they need to define how to merge multiple values in the group within a single partition, and then how to merge the results across partitions for each key. There is currently no way in Python to implement a UDAF; they can only be implemented in Scala.
But you can work around it in Python. You can use collect_set to gather your grouped values and then use a regular UDF to do what you want with them. The only caveat is that collect_set only works on primitive values, so you will need to encode them down to a string.
from pyspark.sql.types import StringType
from pyspark.sql.functions import col, collect_list, concat_ws, udf

def myFunc(data_list):
    for val in data_list:
        b, c = val.split(',')
        # do something
    return <whatever>

myUdf = udf(myFunc, StringType())

df.withColumn('data', concat_ws(',', col('B'), col('C'))) \
  .groupBy('A').agg(collect_list('data').alias('data')) \
  .withColumn('data', myUdf('data'))
Use collect_set if you want deduping. Also, if you have lots of values for some of your keys, this will be slow, because all values for a key will need to be collected in a single partition somewhere on your cluster. If your end result is a value you build by combining the values per key in some way (for example summing them), it might be faster to implement it using the RDD aggregateByKey method, which lets you build an intermediate value for each key within a partition before shuffling data around.
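For what it's worth, a rough sketch of that aggregateByKey idea, under the assumption that the per-key result is simply the sum of column B (swap in your own zero value and merge functions):
pair_rdd = df.select('A', 'B').rdd.map(lambda row: (row.A, row.B))
sums = pair_rdd.aggregateByKey(
    0,                        # zero value for each key within a partition
    lambda acc, v: acc + v,   # fold a value into the partition-local accumulator
    lambda a, b: a + b,       # merge accumulators across partitions
)
sums.toDF(['A', 'sum_B']).show()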
EDIT: 11/21/2018
Since this answer was written, PySpark added support for UDAFs using Pandas. There are some nice performance improvements when using Pandas UDFs and UDAFs over straight Python functions with RDDs. Under the hood it vectorizes the columns (batches the values from multiple rows together to optimize processing and compression). Take a look here for a better explanation, or look at user6910411's answer below for an example.
I am going to extend the above answer. You can implement the same logic as pandas.groupby().apply in PySpark using @pandas_udf, which is a vectorized method and faster than a simple udf.
from pyspark.sql.functions import pandas_udf, PandasUDFType
import pandas as pd

df3 = spark.createDataFrame([('a', 1, 0), ('a', -1, 42), ('b', 3, -1),
                             ('b', 10, -2)], ('key', 'value1', 'value2'))

from pyspark.sql.types import *

schema = StructType([StructField('key', StringType()),
                     StructField('avg_value1', DoubleType()),
                     StructField('avg_value2', DoubleType()),
                     StructField('sum_avg', DoubleType()),
                     StructField('sub_avg', DoubleType())])

@pandas_udf(schema, functionType=PandasUDFType.GROUPED_MAP)
def g(df):
    gr = df['key'].iloc[0]
    x = df.value1.mean()
    y = df.value2.mean()
    w = df.value1.mean() + df.value2.mean()
    z = df.value1.mean() - df.value2.mean()
    return pd.DataFrame([[gr] + [x] + [y] + [w] + [z]])

df3.groupby('key').apply(g).show()
You will get the result below:
+---+----------+----------+-------+-------+
|key|avg_value1|avg_value2|sum_avg|sub_avg|
+---+----------+----------+-------+-------+
| b| 6.5| -1.5| 5.0| 8.0|
| a| 0.0| 21.0| 21.0| -21.0|
+---+----------+----------+-------+-------+
So you can do more calculations between other fields in the grouped data and add them to the dataframe in list format.
Another extension, new in PySpark version 3.0.0:
applyInPandas
import pandas as pd

df = spark.createDataFrame([(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)],
                           ("id", "v"))

def mean_func(key, pdf):
    # key is a tuple of one numpy.int64, which is the value
    # of 'id' for the current group
    return pd.DataFrame([key + (pdf.v.mean(),)])

df.groupby('id').applyInPandas(mean_func, schema="id long, v double").show()
results in:
+---+---+
| id| v|
+---+---+
| 1|1.5|
| 2|6.0|
+---+---+
for further details see: https://spark.apache.org/docs/3.2.0/api/python/reference/api/pyspark.sql.GroupedData.applyInPandas.html