I have to run a really heavy Python function as a UDF in Spark, and I want to cache some data inside the UDF. The case is similar to the one mentioned here.
I am aware that it is slow and wrong.
But the existing infrastructure is in Spark, and I don't want to set up new infrastructure and deal with data loading/parallelization/fail safety separately for this case.
This is what my Spark program looks like:
from mymodule import my_function # here is my function
from pyspark.sql.types import *
from pyspark.sql.functions import udf
from pyspark.sql.session import SparkSession
spark = SparkSession.builder.getOrCreate()
schema = StructType().add("input", "string")
df = spark.read.format("json").schema(schema).load("s3://input_path")
udf1 = udf(my_function, StructType().add("output", "string"))
df.withColumn("result", udf1(df.input)).write.json("s3://output_path/")
my_function internally calls a method of an object with a slow constructor.
Therefore I don't want the object to be initialized for every entry, and I am trying to cache it:
from my_slow_class import SlowClass
from cachetools import cached

@cached(cache={})
def get_cached_object():
    # this call is really slow, therefore I am trying
    # to cache it with cachetools
    return SlowClass()

def my_function(input):
    slow_object = get_cached_object()
    output = slow_object.call(input)
    return {'output': output}
mymodule and my_slow_class are installed as modules on each spark machine.
It seems to work. The constructor is called only a few times (10-20 times for 100k lines in the input dataframe), and that is what I want.
My concern is multithreading/multiprocessing inside the Spark executors, and whether the cached SlowClass instance is shared between many parallel my_function calls.
Can I rely on the fact that my_function is called once at a time inside the Python processes on the worker nodes? Does Spark use any multiprocessing/multithreading in the Python process that executes my UDF?
Spark forks the Python process to create individual workers; however, all processing in an individual worker process is sequential unless multithreading or multiprocessing is used explicitly by the UserDefinedFunction.
So as long as the state is used only for caching and slow_object.call is a pure function, you have nothing to worry about.
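As a hedged side note (not from the original answer), the same per-process caching can also be expressed with functools.lru_cache from the standard library on Python 3; each forked Python worker process keeps its own cached instance:
# Minimal sketch of per-worker caching without cachetools.
# SlowClass is assumed to be the slow-to-construct class from the question.
from functools import lru_cache
from my_slow_class import SlowClass

@lru_cache(maxsize=1)
def get_cached_object():
    # Constructed at most once per Python worker process; separate
    # worker processes each build (and keep) their own instance.
    return SlowClass()

def my_function(input):
    slow_object = get_cached_object()
    return {'output': slow_object.call(input)}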
PySpark uses cProfile and profiling works according to the docs for the RDD API, but there seems to be no way to get the profiler to print results after running a bunch of DataFrame API operations:
from pyspark import SparkContext, SQLContext
sc = SparkContext()
sqlContext = SQLContext(sc)
rdd = sc.parallelize([('a', 0), ('b', 1)])
df = sqlContext.createDataFrame(rdd)
rdd.count()          # this ACTUALLY gets profiled :)
sc.show_profiles()   # here is where the profiling prints out
sc.show_profiles()   # here prints nothing (no new profiling to show)

# in the DataFrame API
df.count()           # why does this NOT get profiled?!?
sc.show_profiles()   # prints nothing?!

# and again it works when converting to RDD, but not with the DataFrame
df.rdd.count()       # this ACTUALLY gets profiled :)
sc.show_profiles()   # here is where the profiling prints out

df.count()           # why does this NOT get profiled?!?
sc.show_profiles()   # prints nothing?!
That is the expected behavior.
Unlike the RDD API, which executes native Python logic, the DataFrame / SQL API is JVM native. Unless you invoke a Python udf* (including pandas_udf), no Python code is executed on the worker machines. All that is done on the Python side is simple API calls through the Py4j gateway.
Therefore no profiling information exists.
* Note that udfs seem to be excluded from the profiling as well.
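As a hedged sketch of the practical workaround (assuming spark.python.profile was enabled when the context was created, which the question's snippet does not show): force the work through Python, e.g. via df.rdd, if you want cProfile output for a DataFrame pipeline.
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

# Profiling has to be switched on before the context is created.
conf = SparkConf().set("spark.python.profile", "true")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

df = sqlContext.createDataFrame([('a', 0), ('b', 1)])

df.count()       # pure JVM execution: nothing for the Python profiler to record
sc.show_profiles()

df.rdd.count()   # pushes rows through Python workers, so cProfile has something to report
sc.show_profiles()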
I wrote a script in Python 2.7 that uses pyspark to convert CSV to Parquet and do other things.
When I ran my script on small data it worked well, but when I ran it on bigger data (250GB) it crashed with the following error: Total allocation exceeds 95.00% (960,285,889 bytes) of heap memory.
How can I solve this problem, and why is it happening?
Thanks!
part of code:
libraries imported:
import pyspark as ps
from pyspark.sql.types import (StructType, StructField, IntegerType,
                               DoubleType, StringType, TimestampType, LongType, FloatType)
from collections import OrderedDict
from sys import argv
using pyspark:
schema_table_name="schema_"+str(get_table_name())
print (schema_table_name)
schema_file= OrderedDict()
schema_list=[]
ddl_to_schema(data)
for i in schema_file:
    schema_list.append(StructField(i, schema_file[i]()))

schema=StructType(schema_list)
print schema

spark = ps.sql.SparkSession.builder.getOrCreate()

df = spark.read.option("delimiter", ",").format("csv").schema(schema).option("header", "false").load(argv[2])
df.write.parquet(argv[3])

# df.limit(1500).write.jdbc(url = url, table = get_table_name(), mode = "append", properties = properties)
# df = spark.read.jdbc(url = url, table = get_table_name(), properties = properties)
pq = spark.read.parquet(argv[3])
pq.show()
Just to clarify, schema_table_name is meant to save all the table names (that are in the DDL that fits the csv).
The function ddl_to_schema just takes a regular DDL and edits it into a DDL that parquet can work with.
It seems your driver is running out of memory.
By default the driver memory is set to 1GB. Since your program used 95% of it, the application ran out of memory.
You can try to change it until you reach the "sweet spot" for your needs; below I'm setting it to 2GB:
pyspark --driver-memory 2g
You can play with the executor memory too, although it doesn't seem to be the problem here (the default value for the executor is 4GB).
pyspark --driver-memory 2g --executor-memory 8g
The theory is that Spark actions can offload data to the driver, causing it to run out of memory if it is not properly sized. I can't tell for sure in your case, but it seems that the write is what is causing this.
You can take a look at the theory here (read about driver program and then check the actions):
https://spark.apache.org/docs/2.2.0/rdd-programming-guide.html#actions
If you are running a local script and aren't using spark-submit directly, you can do this:
import os
# Must be set before the SparkSession/SparkContext is created, and the value
# needs to end with "pyspark-shell" for the arguments to be picked up.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--driver-memory 2g pyspark-shell"
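A minimal, self-contained sketch of that approach combining the driver and executor flags (the sizes are just placeholder values that need tuning for your data):
import os

# Must run before any SparkContext / JVM has been started in this process.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--driver-memory 2g --executor-memory 8g pyspark-shell"

import pyspark as ps

spark = ps.sql.SparkSession.builder.getOrCreate()
print(spark.sparkContext.getConf().get("spark.driver.memory"))  # should report 2g if the flag was picked up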
Q: How to change SparkContext property spark.sql.pivotMaxValues in jupyter PySpark session
I made the following code change to increase spark.sql.pivotMaxValues. Sadly, it had no effect on the resulting error after restarting jupyter and running the code again.
from pyspark import SparkConf, SparkContext
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.linalg.distributed import RowMatrix
import numpy as np
try:
    #conf = SparkConf().setMaster('local').setAppName('autoencoder_recommender_wide_user_record_maker') # original
    #conf = SparkConf().setMaster('local').setAppName('autoencoder_recommender_wide_user_record_maker').set("spark.sql.pivotMaxValues", "99999")
    conf = SparkConf().setMaster('local').setAppName('autoencoder_recommender_wide_user_record_maker').set("spark.sql.pivotMaxValues", 99999)
    sc = SparkContext(conf=conf)
except:
    print("Variables sc and conf are now defined. Everything is OK and ready to run.")
<... (other code) ...>
df = sess.read.csv(in_filename, header=False, mode="DROPMALFORMED", schema=csv_schema)
ct = df.crosstab('username', 'itemname')
Spark error message that was thrown on my crosstab line of code:
IllegalArgumentException: "requirement failed: The number of distinct values for itemname, can't exceed 1e4. Currently 16467"
I suspect I'm not actually setting the config variable that I was trying to set, so what is a way to get that value actually set, programmatically if possible? Thanks.
References:
Finally, you may be interested to know that there is a maximum number
of values for the pivot column if none are specified. This is mainly
to catch mistakes and avoid OOM situations. The config key is
spark.sql.pivotMaxValues and its default is 10,000.
Source: https://databricks.com/blog/2016/02/09/reshaping-data-with-pivot-in-apache-spark.html
I would prefer to change the config variable upwards, since I have already written the crosstab code and it works great on smaller datasets. If it turns out there truly is no way to change this config variable, then my backup plans are, in order:
1. a relational right outer join to implement my own Spark crosstab with higher capacity than the one provided by databricks (roughly sketched below)
2. scipy dense vectors with handmade unique-combinations calculation code using dictionaries
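As a hedged illustration of the first backup plan (my own sketch, not from the original post; column names follow the question), the long-format counts that a hand-rolled crosstab would start from can be computed without pivot, so the spark.sql.pivotMaxValues limit never applies:
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local").getOrCreate()

# Toy stand-in for the CSV data in the question.
df = spark.createDataFrame(
    [("alice", "book"), ("alice", "book"), ("bob", "pen")],
    ["username", "itemname"],
)

# Long-format counts: one row per (username, itemname) pair.
# No pivot is involved, so the 10,000 distinct-value limit never applies.
counts = df.groupBy("username", "itemname").count()
counts.show()

# If a wide user-by-item matrix is really needed, it can be assembled from
# `counts` afterwards (e.g. per-user dictionaries, or a join against the
# distinct itemnames), at the cost of doing the reshaping yourself.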
kernel.json
This configuration file should be distributed together with jupyter:
~/.ipython/kernels/pyspark/kernel.json
It contains the SPARK configuration, including the variable PYSPARK_SUBMIT_ARGS, the list of arguments that will be used with the spark-submit script.
You can try adding --conf spark.sql.pivotMaxValues=99999 to this variable in the file mentioned above.
PS
There are also cases where people try to override this variable programmatically. You can give that a try too...
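A minimal sketch of that programmatic route (my own example, not from the original answer): spark.sql.pivotMaxValues is a SQL conf, so it can also be set at runtime on an existing session rather than only through SparkConf before the context starts.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local").getOrCreate()

# Raise the pivot limit on the live session; the value is passed as a string.
spark.conf.set("spark.sql.pivotMaxValues", "99999")
print(spark.conf.get("spark.sql.pivotMaxValues"))

# The crosstab from the question would then run against this session, e.g.:
# ct = df.crosstab('username', 'itemname')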
I use pyspark to do some data processing and leverage HiveContext for the window function.
In order to test the code, I use TestHiveContext, basically copying the implementation from pyspark source code:
https://spark.apache.org/docs/preview/api/python/_modules/pyspark/sql/context.html
@classmethod
def _createForTesting(cls, sparkContext):
    """(Internal use only) Create a new HiveContext for testing.

    All test code that touches HiveContext *must* go through this method. Otherwise,
    you may end up launching multiple derby instances and encounter with incredibly
    confusing error messages.
    """
    jsc = sparkContext._jsc.sc()
    jtestHive = sparkContext._jvm.org.apache.spark.sql.hive.test.TestHiveContext(jsc)
    return cls(sparkContext, jtestHive)
My tests then inherit the base class which can access the context.
This worked fine for a while. However, as I added more tests I started noticing intermittent out-of-memory issues. Now I can't run the test suite without a failure.
"java.lang.OutOfMemoryError: Java heap space"
I explicitly stop the spark context after each test is run, but that does not appear to kill the HiveContext. Thus, I believe it keeps creating new HiveContexts every time a new test is run and doesn't remove the old one, which results in the memory leak.
Any suggestions for how to teardown the base class such that it kills the HiveContext?
If you're happy to use a singleton to hold the Spark/Hive context in all your tests, you can do something like the following.
test_contexts.py:
_test_spark = None
_test_hive = None

def get_test_spark():
    global _test_spark
    if _test_spark is None:
        # Create spark context for tests.
        # Not really sure what's involved here for Python.
        _test_spark = ...
    return _test_spark

def get_test_hive():
    global _test_hive
    if _test_hive is None:
        sc = get_test_spark()
        jsc = sc._jsc.sc()
        _test_hive = sc._jvm.org.apache.spark.sql.hive.test.TestHiveContext(jsc)
    return _test_hive
And then you just import these functions in your tests.
my_test.py:
from test_contexts import get_test_spark, get_test_hive
def test_some_spark_thing():
    sc = get_test_spark()
    sqlContext = get_test_hive()
    # etc
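Since the original question was about teardown, one hedged addition (my own suggestion, not part of this answer): stop the shared context exactly once when the whole test run ends, for example with atexit, so individual tests reuse the singleton instead of creating and leaking fresh HiveContexts.
import atexit

import test_contexts

def _stop_shared_spark():
    # Stop the single shared SparkContext once, after the whole test session,
    # instead of per test (per-test stopping is what was leaking HiveContexts).
    if test_contexts._test_spark is not None:
        test_contexts._test_spark.stop()

atexit.register(_stop_shared_spark)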
I'm running a Spark Streaming task in a cluster using YARN. Each node in the cluster runs multiple spark workers. Before the streaming starts I want to execute a "setup" function on all workers on all nodes in the cluster.
The streaming task classifies incoming messages as spam or not spam, but before it can do that it needs to download the latest pre-trained models from HDFS to local disk, like this pseudo code example:
def fetch_models():
    if hadoop.version > local.version:
        hadoop.download()
I've seen the following examples here on SO:
sc.parallelize().map(fetch_models)
But in Spark 1.6 parallelize() requires some data to be used, like this shitty work-around I'm doing now:
sc.parallelize(range(1, 1000)).map(fetch_models)
Just to be fairly sure that the function is run on ALL workers I set the range to 1000. I also don't exactly know how many workers are in the cluster when running.
I've read the programming documentation and googled relentlessly but I can't seem to find any way to actually just distribute anything to all workers without any data.
After this initialization phase is done, the streaming task is as usual, operating on incoming data from Kafka.
The way I'm using the models is by running a function similar to this:
spark_partitions = config.get(ConfigKeys.SPARK_PARTITIONS)
stream.union(*create_kafka_streams())\
.repartition(spark_partitions)\
.foreachRDD(lambda rdd: rdd.foreachPartition(lambda partition: spam.on_partition(config, partition)))
Theoretically I could check whether or not the models are up to date in the on_partition function, though it would be really wasteful to do this on each batch. I'd like to do it before Spark starts retrieving batches from Kafka, since the downloading from HDFS can take a couple of minutes...
UPDATE:
To be clear: it's not a question of how to distribute the files or how to load them; it's about how to run an arbitrary method on all workers without operating on any data.
To clarify what loading the models currently means:
def on_partition(config, partition):
    if not MyClassifier.is_loaded():
        MyClassifier.load_models(config)

    handle_partition(config, partition)
While MyClassifier is something like this:
class MyClassifier:
    clf = None

    @staticmethod
    def is_loaded():
        return MyClassifier.clf is not None

    @staticmethod
    def load_models(config):
        MyClassifier.clf = load_from_file(config)
Static methods, since PySpark doesn't seem to be able to serialize classes with non-static methods (the state of the class is irrelevant in relation to other workers). Here we only have to call load_models() once, and on all future batches MyClassifier.clf will be set. This is something that should really not be done for each batch; it's a one-time thing. Same with downloading the files from HDFS using fetch_models().
If all you want is to distribute a file between worker machines, the simplest approach is to use the SparkFiles mechanism:
some_path = ... # local file, a file in DFS, an HTTP, HTTPS or FTP URI.
sc.addFile(some_path)
and retrieve it on the workers using SparkFiles.get and standard IO tools:
from pyspark import SparkFiles
with open(SparkFiles.get(some_path)) as fw:
    ...  # Do something
If you want to make sure that the model is actually loaded, the simplest approach is to load it on module import. Assuming config can be used to retrieve the model path:
model.py:
from pyspark import SparkFiles

config = ...

class MyClassifier:
    clf = None

    @staticmethod
    def is_loaded():
        return MyClassifier.clf is not None

    @staticmethod
    def load_models(config):
        path = SparkFiles.get(config.get("model_file"))
        MyClassifier.clf = load_from_file(path)

# Executed once per interpreter
MyClassifier.load_models(config)
main.py:
from pyspark import SparkContext
config = ...
sc = SparkContext("local", "foo")
# Executed before StreamingContext starts
sc.addFile(config.get("model_file"))
sc.addPyFile("model.py")
import model
ssc = ...
stream = ...
stream.map(model.MyClassifier.do_something).pprint()
ssc.start()
ssc.awaitTermination()
This is a typical use case for Spark's broadcast variables. Let's say fetch_models returns the models rather than saving them locally; you would do something like:
bc_models = sc.broadcast(fetch_models())
spark_partitions = config.get(ConfigKeys.SPARK_PARTITIONS)
stream.union(*create_kafka_streams())\
.repartition(spark_partitions)\
.foreachRDD(lambda rdd: rdd.foreachPartition(lambda partition: spam.on_partition(config, partition, bc_models.value)))
This does assume that your models fit in memory, on the driver and the executors.
You may be worried that broadcasting the models from the single driver to all the executors is inefficient, but this uses 'efficient broadcast algorithms' that can significantly outperform distributing through HDFS, according to this analysis.
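A minimal, self-contained sketch of the pattern (the model object and the classification logic are placeholders of my own, not from the original post):
from pyspark import SparkContext

sc = SparkContext("local", "broadcast_demo")

# Stand-in for fetch_models(): any picklable object works as a broadcast value.
models = {"spam_words": {"cheap", "pills", "winner"}}
bc_models = sc.broadcast(models)

def on_partition(partition):
    # bc_models.value is deserialized at most once per executor process.
    spam_words = bc_models.value["spam_words"]
    for message in partition:
        yield (message, any(word in spam_words for word in message.split()))

messages = sc.parallelize(["hello there", "buy cheap pills now"])
print(messages.mapPartitions(on_partition).collect())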