How to: Pyspark dataframe persist usage and reading-back - python

I'm quite new to PySpark and I'm getting the following error: Py4JJavaError: An error occurred while calling o517.showString. I've read that it is due to a lack of memory: Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
I've also read that a workaround for this situation is to use df.persist() and then read the persisted df back, so I would like to know:
Given a for loop in which I do some .join operations, should I use the .persist() inside the loop or at the end of it? e.g.
for col in columns:
    df_AA = df_AA.join(df_B, df_AA[col] == 'some_value', 'outer').persist()
--> or <--
for col in columns:
    df_AA = df_AA.join(df_B, df_AA[col] == 'some_value', 'outer')
df_AA.persist()
Once I've done that, how should I read it back?
df_AA.unpersist()? sqlContext.read.some_thing(df_AA)?
I'm really new to this, so please, try to explain as best as you can.
I'm running on a local machine (8 GB RAM), using Jupyter notebooks (Anaconda); Windows 7; Java 8; Python 3.7.1; PySpark v2.4.3

Spark is a lazily evaluated framework, so none of the transformations (e.g. join) are executed until you call an action.
So go ahead with what you have done:
from pyspark import StorageLevel

for col in columns:
    df_AA = df_AA.join(df_B, df_AA[col] == 'some_value', 'outer')

df_AA.persist(StorageLevel.MEMORY_AND_DISK)
df_AA.show()
There are multiple persist (storage level) options available; choosing MEMORY_AND_DISK will spill the data that cannot be held in memory to disk.
GC errors can also be the result of too little driver memory being provided for the Spark application to run.
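If the GC errors persist, one hedged option is to give the driver more memory when the session is first created. This is only a sketch: it assumes no SparkSession/SparkContext exists yet (in local mode spark.driver.memory only takes effect before the driver JVM starts), and "4g" is an illustrative value for an 8 GB machine, not a tuned recommendation.
# a minimal sketch, assuming the session has not been created yet;
# "4g" is an illustrative value, not a tuned recommendation
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("persist-example")
         .config("spark.driver.memory", "4g")   # more room for the driver JVM
         .getOrCreate())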

Related

Possible overhead on dask computation over list of delayed objects

I have a ddf with lots of partitions
ddf = dd.read_parquet("./input-*", engine='fastparquet')
ddf
Dask DataFrame Structure:
datetime ndvi str utm_x utm_y fpath scl_value
npartitions=71
Dask Name: read-parquet, 71 tasks
In each partition I want to run a custom function
my_df_list = list()
for arg_key, arg_value in my_dict_of_args.items():
    ddf_item = ddf_sliced.map_partitions(myfunc,
                                         my_arg1=arg_key,
                                         my_arg2=arg_value,
                                         meta=my_meta)
    my_df_list.append(ddf_item)
Things start to get tricky there: I have found that the following command is too much for my PC, taking forever before the first item's computation even begins and eventually depleting all my RAM:
dask.compute(*my_df_list)
Example graph using 2 dfs instead of 71, dask.visualize(*my_df_list):
But it can easily handle the computation of each item, one by one:
my_df_list[0].compute()
...
my_df_list[71].compute()
Example graph using 2 dfs instead of 71, my_df_list[0].visualize():
I'm struggling to understand the difference, since to me it's the same iteration scheme.
If it is indeed overhead, I would be glad to learn some alternative flows that avoid calling .compute on each element manually.
EDIT 1
After posting the graph images I understand that dask.compute(*list) boosts parallelism in order to optimize the df readings. See the documentation section Avoid calling compute repeatedly.
Now I can see that the real problem is the initialization of the graph, and probably my code: even when loading 2 dfs instead of 71, my memory is depleted well before the real computation starts when using dask.compute(*list).
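One hedged compromise between the two extremes is to compute the list in small batches, so that only part of the graph is materialised at a time; the batch size below is an arbitrary placeholder, and my_df_list is assumed to be the same list built in the question.
# a minimal sketch, assuming my_df_list from the question; the batch size is
# an arbitrary placeholder, not a tuned value
import dask

results = []
batch_size = 8
for start in range(0, len(my_df_list), batch_size):
    batch = my_df_list[start:start + batch_size]
    # each call builds and runs a graph for only `batch_size` dataframes,
    # keeping more parallelism than item-by-item .compute() without
    # materialising all 71 graphs at once
    results.extend(dask.compute(*batch))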

How to read files in parallel in DataBricks?

Could someone tell me how to read files in parallel? I'm trying something like this:
def processFile(path):
    df = spark.read.json(path)
    return df.count()

paths = ["...", "..."]
distPaths = sc.parallelize(paths)
counts = distPaths.map(processFile).collect()
print(counts)
It fails with the following error:
PicklingError: Could not serialize object: Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
Is there any other way to optimize this?
In your particular case, you can just pass the whole paths array to DataFrameReader:
df = spark.read.json(paths)
...and Spark will parallelize the reading of its file elements.
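If per-file counts are still needed (as in the original processFile), one hedged option is to group by input_file_name(); this sketch only assumes that spark and paths from the question are available.
# a minimal sketch, assuming `spark` is an active SparkSession and `paths`
# is the same list of JSON paths from the question
from pyspark.sql import functions as F

df = spark.read.json(paths)                 # Spark reads the files in parallel
per_file_counts = (df.groupBy(F.input_file_name().alias("path"))
                     .count()
                     .collect())
print(per_file_counts)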

multi-processing with spark(PySpark) [duplicate]

This question already has an answer here:
How to run independent transformations in parallel using PySpark?
(1 answer)
Closed 5 years ago.
The usecase is the following:
I have a large dataframe, with a 'user_id' column in it (every user_id can appear in many rows). I have a list of users my_users which I need to analyse.
Groupby, filter and aggregate could be a good idea, but the available aggregation functions included in PySpark did not fit my needs. In this PySpark version, user-defined aggregation functions are still not fully supported, so I decided to leave that for now.
Instead, I simply iterate over the my_users list, filter each user in the dataframe, and analyse it. To optimize this procedure, I decided to use a Python multiprocessing pool, with one task per user in my_users.
The function that does the analysis (and is passed to the pool) takes two arguments: the user_id and a path to the main dataframe, on which I perform all the computations (Parquet format). In the method I load the dataframe and work on it (a DataFrame can't be passed as an argument itself).
I get all sorts of weird errors, on some of the processes (different in each run), that look like:
PythonUtils does not exist in the JVM (when reading the 'parquet' dataframe)
KeyError: 'c' not found (also, when reading the 'parquet' dataframe. What is 'c' anyway??)
When I run it without any multiprocessing, everything runs smoothly, but slowly...
Any ideas where these errors are coming from?
I'll put some code sample just to make things clearer:
from multiprocessing import Pool  # missing import added for completeness

PYSPARK_SUBMIT_ARGS = '--driver-memory 4g --conf spark.driver.maxResultSize=3g --master local[*] pyspark-shell'  # if it's relevant
# ....

def users_worker(df_path, user_id):
    df = spark.read.parquet(df_path)  # The problem is here!
    ## the analysis of user_id in df is here

def user_worker_wrapper(args):
    users_worker(*args)

def analyse():
    # ...
    users_worker_args = [(df_path, user_id) for user_id in my_users]
    users_pool = Pool(processes=len(my_users))
    users_pool.map(user_worker_wrapper, users_worker_args)
    users_pool.close()
    users_pool.join()
Indeed, as #user6910411 commented, when I changed the Pool to a ThreadPool (multiprocessing.pool.ThreadPool), everything worked as expected and these errors were gone.
The root reasons for the errors themselves are also clear now, if you want me to share them, please comment below.
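For reference, here is a minimal hedged sketch of that thread-based variant; it assumes spark, df_path and my_users from the question, and the pool size is an arbitrary choice.
# a minimal sketch, assuming `spark`, `df_path` and `my_users` exist as in
# the question; threads share the driver process, so the SparkSession can be
# used directly, unlike with separate worker processes
from multiprocessing.pool import ThreadPool

def users_worker(df_path, user_id):
    df = spark.read.parquet(df_path)
    # ... the analysis of user_id in df goes here ...

with ThreadPool(processes=8) as users_pool:   # pool size is an arbitrary choice
    users_pool.starmap(users_worker,
                       [(df_path, user_id) for user_id in my_users])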

RDD, PySpark, Why rdd.flatMap seems does not do any operation in CPU?

Here is my code:
In [10]: rdd = sc.mongoPairRDD("mongodb://localhost/stackoverflow.stack")
......
A lot of INFO
......
In [11]: newrdd = rdd.flatMap(f)
# No INFO
In [12]: newrdd.collect()
# A lot of INFO
When a function of the RDD is called, say flatMap, it seems the system doesn't run the code of the function. But when I call, say, collect(), the system runs it and collects all the data from memory or disk?
Am I right?
Yes you are! This is actually the expected behavior for Spark. There are transformations (e.g. map, flatMap, filter) and actions (e.g. count, collect, reduce, saveAsTextFile) that you can apply to an RDD.
As you noted, when you call a transformation, no computation happens; it just stacks the operation onto the RDD to build a kind of recipe for producing it. But as soon as you call an action, boom, the RDD is actually evaluated. This is what happens when you call collect.
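A minimal sketch of the same behaviour, assuming only an active SparkContext sc:
# a minimal sketch, assuming an active SparkContext `sc`
rdd = sc.parallelize([1, 2, 3, 4])

doubled = rdd.flatMap(lambda x: [x, x * 2])   # transformation: nothing runs yet
print(doubled.collect())                      # action: the job actually executes
# [1, 2, 2, 4, 3, 6, 4, 8]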

Memory leak in Pandas.groupby.apply()?

I'm currently using Pandas for a project with CSV source files of around 600 MB. During the analysis I read the CSV into a dataframe, group on some column and apply a simple function to the grouped dataframe. I noticed that I was going into swap memory during this process, so I carried out a basic test:
I first created a fairly large dataframe in the shell:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(3000000, 3),index=range(3000000),columns=['a', 'b', 'c'])
I defined a pointless function called do_nothing():
def do_nothing(group):
    return group
And ran the following command:
df = df.groupby('a').apply(do_nothing)
My system has 16gb of RAM and is running Debian (Mint). After creating the dataframe I was using ~600mb of RAM. As soon as the apply method began to execute, that value started to soar. It steadily climbed up to around 7gb(!) before finishing the command and settling back down to 5.4gb (while the shell was still active). The problem is, my work requires doing more than the 'do_nothing' method and as such while executing the real program, I cap my 16gb of RAM and start swapping, making the program unusable. Is this intended? I can't see why Pandas should need 7gb of RAM to effectively 'do_nothing', even if it has to store the grouped object.
Any ideas on what's causing this/how to fix it?
Cheers,
.P
Using 0.14.1, I don't think there is a memory leak (1/3 size of your frame).
In [79]: df = DataFrame(np.random.randn(100000,3))
In [77]: %memit -r 3 df.groupby(df.index).apply(lambda x: x)
maximum of 3: 1365.652344 MB per loop
In [78]: %memit -r 10 df.groupby(df.index).apply(lambda x: x)
maximum of 10: 1365.683594 MB per loop
Two general comments on how to approach a problem like this:
1) Use the cython-level functions if at all possible; they will be MUCH faster and will use much less memory. In other words, it is almost always worth it to decouple a groupby expression and avoid using a (Python) function (if possible; some things are just too complicated, but that's the point, you want to break things down). e.g.
Instead of:
df.groupby(...).apply(lambda x: x.sum() / x.mean())
It is MUCH better to do:
g = df.groupby(...)
g.sum() / g.mean()
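For instance, a hedged toy example of the decoupled form (the column names are illustrative, not from the question):
# a minimal sketch on toy data; 'key' and 'val' are illustrative column names
import numpy as np
import pandas as pd

df = pd.DataFrame({'key': np.random.randint(0, 10, 100000),
                   'val': np.random.randn(100000)})

g = df.groupby('key')['val']
result = g.sum() / g.mean()   # cython-level aggregations, no Python-level apply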
2) You can easily 'control' the groupby by doing your aggregation manually (additionally this will allow periodic output and garbage collection if needed).
import gc

results = []
for i, (g, grp) in enumerate(df.groupby(....)):
    if i % 500 == 0:
        print("checkpoint: %s" % i)
        gc.collect()
    results.append(func(g, grp))

# final result
pd.concat(results)
