How to change SparkContext property spark.sql.pivotMaxValues in jupyter PySpark session - python

Q: How to change SparkContext property spark.sql.pivotMaxValues in jupyter PySpark session
I made the following code change to increase spark.sql.pivotMaxValues. Sadly, it had no effect on the resulting error, even after restarting Jupyter and running the code again.
from pyspark import SparkConf, SparkContext
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.linalg.distributed import RowMatrix
import numpy as np
try:
    #conf = SparkConf().setMaster('local').setAppName('autoencoder_recommender_wide_user_record_maker')  # original
    #conf = SparkConf().setMaster('local').setAppName('autoencoder_recommender_wide_user_record_maker').set("spark.sql.pivotMaxValues", "99999")
    conf = SparkConf().setMaster('local').setAppName('autoencoder_recommender_wide_user_record_maker').set("spark.sql.pivotMaxValues", 99999)
    sc = SparkContext(conf=conf)
except:
    print("Variables sc and conf are now defined. Everything is OK and ready to run.")
<... (other code) ...>
df = sess.read.csv(in_filename, header=False, mode="DROPMALFORMED", schema=csv_schema)
ct = df.crosstab('username', 'itemname')
Spark error message that was thrown on my crosstab line of code:
IllegalArgumentException: "requirement failed: The number of distinct values for itemname, can't exceed 1e4. Currently 16467"
I expect I'm not actually setting the config variable that I was trying to set, so what is a way to actually get that value set, programmatically if possible? Thanks.
References:
Finally, you may be interested to know that there is a maximum number
of values for the pivot column if none are specified. This is mainly
to catch mistakes and avoid OOM situations. The config key is
spark.sql.pivotMaxValues and its default is 10,000.
Source: https://databricks.com/blog/2016/02/09/reshaping-data-with-pivot-in-apache-spark.html
I would prefer to raise the config value, since I have already written the crosstab code and it works great on smaller datasets. If it turns out there truly is no way to change this config variable, then my backup plans are, in order (see the sketch after this list):
relational right outer join to implement my own Spark crosstab with higher capacity than was provided by databricks
scipy dense vectors with handmade unique-combinations calculation code using dictionaries
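A hedged, untested sketch of a hand-rolled crosstab that may side-step the limit: the Databricks note quoted above says the cap applies only when no pivot values are specified, so collecting the distinct itemname values yourself and passing them to pivot() explicitly avoids that check (df and the column names come from the question; missing combinations come back as null rather than 0):
# collect the pivot values up front so Spark never has to enforce spark.sql.pivotMaxValues
item_values = [row['itemname'] for row in df.select('itemname').distinct().collect()]
ct = df.groupBy('username').pivot('itemname', item_values).count()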

kernel.json
This configuration file is distributed together with Jupyter:
~/.ipython/kernels/pyspark/kernel.json
It contains the Spark configuration, including the variable PYSPARK_SUBMIT_ARGS - the list of arguments that will be passed to the spark-submit script.
You can try adding --conf spark.sql.pivotMaxValues=99999 to that variable in the file mentioned above.
PS
There are also cases where people try to override this variable programmatically. You can give that a try too (a rough sketch follows)...
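A minimal, untested sketch of that programmatic route: spark.sql.pivotMaxValues is a SQL runtime config, so it can also be set on the SparkSession itself, which is handy in Jupyter where a session may already be running (whether the crosstab code path honours it is a separate question):
from pyspark.sql import SparkSession
# .config() only takes effect if the session is created here, not if one already exists
spark = (SparkSession.builder
         .appName('autoencoder_recommender_wide_user_record_maker')
         .config('spark.sql.pivotMaxValues', '99999')
         .getOrCreate())
# for an already-running session (the usual case in Jupyter), set it at runtime instead
spark.conf.set('spark.sql.pivotMaxValues', '99999')
print(spark.conf.get('spark.sql.pivotMaxValues'))  # verify the value actually took effect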

Related

When would I use Spark Operator vs Spark Standalone in Iguazio?

I see in the services UI that I can create a Spark cluster. I also see that I can use the Spark operator runtime when executing a job. What is the use case for each and why would I choose one vs the other?
There are two ways of using Spark in Iguazio:
Create a standalone Spark cluster via the Iguazio UI (like you found on the services page). This is a persistent cluster that you can associate with multiple jobs, Jupyter notebooks, etc. This is a good choice for long running computations with a static pool of resources. An overview of the Spark service in Iguazio can be found here along with some ingestion examples.
When creating a JupyterLab instance in the UI, there is an option to associate it with an existing Spark cluster. This lets you use PySpark out of the box.
Create an ephemeral Spark cluster via the Spark Operator. This is a temporary cluster that only exists for the duration of the job. This is a good choice for shorter one-off jobs with a static or variable pool of resources. The Spark Operator runtime is usually the better option if you don't need a persistent Spark cluster. Some examples of using the Spark operator on Iguazio can be found here as well as below.
import mlrun
import os
# set up new spark function with spark operator
# command will use our spark code which needs to be located on our file system
# the name param can contain only lowercase letters (k8s naming convention)
sj = mlrun.new_function(kind='spark', command='spark_read_csv.py', name='sparkreadcsv')
# set spark driver config (gpu_type & gpus=<number_of_gpus> supported too)
sj.with_driver_limits(cpu="1300m")
sj.with_driver_requests(cpu=1, mem="512m")
# set spark executor config (gpu_type & gpus=<number_of_gpus> are supported too)
sj.with_executor_limits(cpu="1400m")
sj.with_executor_requests(cpu=1, mem="512m")
# adds fuse, daemon & iguazio's jars support
sj.with_igz_spark()
# set spark driver volume mount
# sj.function.with_driver_host_path_volume("/host/path", "/mount/path")
# set spark executor volume mount
# sj.function.with_executor_host_path_volume("/host/path", "/mount/path")
# args are also supported
sj.spec.args = ['-spark.eventLog.enabled','true']
# add python module
sj.spec.build.commands = ['pip install matplotlib']
# Number of executors
sj.spec.replicas = 2
# Rebuilds the image with MLRun - needed in order to support artifact logging etc.
sj.deploy()
# Run task while setting the artifact path on which our run artifacts (if any) will be saved
sj.run(artifact_path='/User')
Where the spark_read_csv.py file looks like:
from pyspark.sql import SparkSession
from mlrun import get_or_create_ctx
context = get_or_create_ctx("spark-function")
# build spark session
spark = SparkSession.builder.appName("Spark job").getOrCreate()
# read csv
df = spark.read.load('iris.csv', format="csv",
                     sep=",", header="true")
# sample for logging
df_to_log = df.describe().toPandas()
# log final report
context.log_dataset("df_sample",
                    df=df_to_log,
                    format="csv")
spark.stop()

How to read files in parallel in DataBricks?

Could someone tell me how to read files in parallel? I'm trying something like this:
def processFile(path):
    df = spark.read.json(path)
    return df.count()
paths = ["...", "..."]
distPaths = sc.parallelize(paths)
counts = distPaths.map(processFile).collect()
print(counts)
It fails with the following error:
PicklingError: Could not serialize object: Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
Is there any other way to optimize this?
In your particular case, you can just pass the whole paths array to DataFrameReader:
df = spark.read.json(paths)
...and reading its file elements will be parallelized by Spark.
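If you also need a per-file breakdown like in the original snippet, one option (a sketch, not part of the original answer) is to read everything in one go and tag each row with its source file via pyspark.sql.functions.input_file_name:
from pyspark.sql.functions import input_file_name

df = spark.read.json(paths)  # same list of paths as above
# tag each row with the file it came from, then count rows per file
counts = df.withColumn("source_file", input_file_name()).groupBy("source_file").count()
counts.show()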

Total allocation exceeds 95.00% (960,285,889 bytes) of heap memory - pyspark error

I wrote a script in Python 2.7 that uses PySpark to convert CSV to Parquet, among other things.
When I run my script on a small dataset it works well, but when I run it on a bigger dataset (250GB) it crashes with the following error: Total allocation exceeds 95.00% (960,285,889 bytes) of heap memory.
How can I solve this problem, and what is the reason it's happening?
Thanks!
part of code:
libraries imported:
import pyspark as ps
from pyspark.sql.types import StructType, StructField, IntegerType, \
    DoubleType, StringType, TimestampType, LongType, FloatType
from collections import OrderedDict
from sys import argv
using pyspark:
schema_table_name="schema_"+str(get_table_name())
print (schema_table_name)
schema_file= OrderedDict()
schema_list=[]
ddl_to_schema(data)
for i in schema_file:
    schema_list.append(StructField(i, schema_file[i]()))
schema=StructType(schema_list)
print schema
spark = ps.sql.SparkSession.builder.getOrCreate()
df = spark.read.option("delimiter", ",").format("csv").schema(schema).option("header", "false").load(argv[2])
df.write.parquet(argv[3])
# df.limit(1500).write.jdbc(url=url, table=get_table_name(), mode="append", properties=properties)
# df = spark.read.jdbc(url=url, table=get_table_name(), properties=properties)
pq = spark.read.parquet(argv[3])
pq.show()
Just to clarify, schema_table_name is meant to hold all the table names (that are in the DDL that fits the csv).
The function ddl_to_schema just takes a regular DDL and edits it into a DDL that Parquet can work with.
It seems your driver is running out of memory.
By default the driver memory is set to 1GB. Since your program used 95% of it, the application ran out of memory.
You can try increasing it until you reach the "sweet spot" for your needs; below I'm setting it to 2GB:
pyspark --driver-memory 2g
You can play with the executor memory too, although it doesn't seem to be the problem here (the default value for the executor is 4GB).
pyspark --driver-memory 2g --executor-memory 8g
The theory is that Spark actions can offload data to the driver, causing it to run out of memory if it isn't properly sized. I can't tell for sure in your case, but it seems that the write is what is causing this.
You can take a look at the theory here (read about driver program and then check the actions):
https://spark.apache.org/docs/2.2.0/rdd-programming-guide.html#actions
If you are running a local script and aren't using spark-submit directly, you can do this:
import os
os.environ["PYSPARK_SUBMIT_ARGS"] = "--driver-memory 2g"
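Note that, judging from the usage shown later on this page, PYSPARK_SUBMIT_ARGS typically needs to end with pyspark-shell and must be set before the SparkContext/SparkSession is created; a hedged sketch:
import os

# set before any SparkContext/SparkSession exists; the trailing 'pyspark-shell'
# tells PySpark how to launch the JVM when you are not going through spark-submit
os.environ["PYSPARK_SUBMIT_ARGS"] = "--driver-memory 2g pyspark-shell"

import pyspark as ps
spark = ps.sql.SparkSession.builder.getOrCreate()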

multi-processing with spark(PySpark) [duplicate]

This question already has an answer here:
How to run independent transformations in parallel using PySpark?
The usecase is the following:
I have a large dataframe, with a 'user_id' column in it (every user_id can appear in many rows). I have a list of users my_users which I need to analyse.
GroupBy, filter and aggregate could be a good idea, but the aggregation functions included in PySpark did not fit my needs. In this PySpark version, user-defined aggregation functions are still not fully supported, so I decided to leave that for now.
Instead, I simply iterate over the my_users list, filter each user out of the dataframe, and analyse it. To speed this procedure up, I decided to use a Python multiprocessing pool, with one task per user in my_users.
The function that does the analysis (and is passed to the pool) takes two arguments: the user_id, and a path to the main dataframe on which I perform all the computations (PARQUET format). In the method, I load the dataframe and work on it (a DataFrame can't be passed as an argument itself).
I get all sorts of weird errors, on some of the processes (different in each run), that look like:
PythonUtils does not exist in the JVM (when reading the 'parquet' dataframe)
KeyError: 'c' not found (also, when reading the 'parquet' dataframe. What is 'c' anyway??)
When I run it without any multiprocessing, everything runs smoothly, but slowly..
Any ideas where these errors are coming from?
I'll put some code sample just to make things clearer:
PYSPARK_SUBMIT_ARGS = '--driver-memory 4g --conf spark.driver.maxResultSize=3g --master local[*] pyspark-shell'  # if it's relevant
# ....
from multiprocessing import Pool  # missing import added for completeness

def users_worker(df_path, user_id):
    df = spark.read.parquet(df_path)  # The problem is here!
    ## the analysis of user_id in df is here

def users_worker_wrapper(args):
    users_worker(*args)

def analyse():
    # ...
    users_worker_args = [(df_path, user_id) for user_id in my_users]
    users_pool = Pool(processes=len(my_users))
    users_pool.map(users_worker_wrapper, users_worker_args)
    users_pool.close()
    users_pool.join()
Indeed, as #user6910411 commented, when I changed the Pool to a ThreadPool (from multiprocessing.pool), everything worked as expected and these errors were gone.
The root reasons for the errors themselves are also clear now, if you want me to share them, please comment below.
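For reference, a minimal sketch of that change, reusing the worker functions and arguments from the code above; only the pool type differs:
from multiprocessing.pool import ThreadPool

# threads share the driver's existing SparkContext instead of forking separate
# Python processes, which is what made the serialization/JVM errors disappear
users_pool = ThreadPool(processes=len(my_users))
users_pool.map(users_worker_wrapper, users_worker_args)
users_pool.close()
users_pool.join()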

How to debug a SQL/Python UDF in MonetDB

Using native Python code in SQL UDFs in MonetDB is really powerful. BUT, debugging such UDFs could benefit from more support. In particular, if I use the old-fashioned print('debugging info'), it disappears into a big black void.
create function dummy()
returns string
language python{
print('Entering the dummy UDF')
return 'hello';
};
How can I retrieve this information from the server or the MonetDB client?
I was debugging some Python UDF last week :)
Step 1: first make sure your Python code at least works in a Python interpreter.
Step 2: in a Python UDF, write your debugging info to a file, e.g.:
f = open('/tmp/debug.out', 'w')
f.write('my debugging info\n')
f.close()
This isn't ideal, but it works. Also, I used this to export the parameter values of my Python UDF. In this way, I can run the body of my Python UDF in a Python interpreter with the exact data I receive from MonetDB.
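A hedged sketch of that parameter-export trick, assuming a UDF input column named column (pickle is just one convenient format; any serialization works):
import pickle

# inside the UDF body: dump the input exactly as MonetDB hands it to the UDF
with open('/tmp/udf_input.pkl', 'wb') as f:
    pickle.dump(column, f)

# later, in a plain Python interpreter: reload it and step through the UDF body by hand
with open('/tmp/udf_input.pkl', 'rb') as f:
    column = pickle.load(f)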
In case someone is still interested in this problem.
There are two novel ways of debugging MonetDB's Python/UDFs.
1) Using the python client pymonetdb (https://github.com/gijzelaerr/pymonetdb).
You can install it through pip:
pip install pymonetdb
To use it, consider the following setup: a table that holds integers and a UDF that computes the mean absolute deviation of a given column.
CREATE TABLE integers(i INTEGER);
INSERT INTO integers VALUES (1), (3), (6), (8), (10);
CREATE OR REPLACE FUNCTION mean_deviation(column INTEGER)
RETURNS DOUBLE LANGUAGE PYTHON {
mean = 0.0
for i in range(0, len(column)):
    mean += column[i]
mean = mean / len(column)
distance = 0.0
for i in range(0, len(column)):
    distance += abs(column[i] - mean)
deviation = distance / len(column)
return deviation;
};
To debug your function from a terminal (i.e., with pdb), you just need to open a database connection using pymonetdb.connect(), get a cursor object from the connection, and call the cursor's debug() function, passing as parameters the SQL you want to examine and the name of the UDF you wish to debug.
import pymonetdb
conn = pymonetdb.connect(database='demo') #Open Database connection
c = conn.cursor()
sql = 'select mean_deviation(i) from integers;'
c.debug(sql, 'mean_deviation') #Console Debugging
There is also an optional sampling step that transfers only a uniform random sample of the data instead of the full input data set. If you wish to sample, pass the number of elements you want to get from the sampling step (e.g., c.debug(sql, 'mean_deviation', 10) if you want a subset of 10 elements).
2) Using a POC plugin for PyCharm called devudf, which you can install through the plugin page of PyCharm or directly from the JetBrains page: https://plugins.jetbrains.com/plugin/12063-devudf. It adds an option to the main menu called "UDF Development" and allows you to directly import and export UDFs between your database and PyCharm, so you can enjoy the IDE's debugging capabilities.
