spark-submit --master local[n] does not create multiple threads - Python

I write PySpark code to process some Spark SQL data.
Last month it worked perfectly when I ran spark-submit --master local[25]: in the top command I could see 25 Python threads.
However, nothing has changed, yet today spark-submit only creates one thread. I wonder what kind of things can cause such a problem.
This is on an Ubuntu server on AWS with 16 CPU cores. The Spark version is 2.2.1 and Python is 3.6.

Just found the problem: another user was running his own Spark task on the same instance and occupying the resources.
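For anyone debugging a similar case, one quick check (not part of the original post) is to print what the running context actually reports; a minimal sketch, with a placeholder app name:
from pyspark.sql import SparkSession

# Print the master URL and default parallelism the running job actually got,
# to confirm whether local[25] took effect ("parallelism-check" is a placeholder name).
spark = SparkSession.builder.appName("parallelism-check").getOrCreate()
sc = spark.sparkContext
print("master:", sc.master)
print("defaultParallelism:", sc.defaultParallelism)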

Related

Triggering spark-submit in local mode from a remote server

I have a PySpark application that I am currently running in local mode. Going forward it needs to be deployed in production, and the PySpark job needs to be triggered from the client's end.
What changes do I need to make on my end so that it can be triggered from the client's end?
I am using Spark 3.2 and Python 3.6.
Currently I execute this job by firing a spark-submit command from the same server where Spark is installed:
spark-submit --jars /app/some_path/lib/db.jar,/app/some_path/lib/thirdparthy.jar spark_job.py
1. First, I think I need to specify the jars in my SparkSession:
spark = SparkSession.builder.master("local[*]").appName('test1').config("spark.jars", "/app/some_path/lib/db.jar,/app/some_path/lib/thirdparthy.jar").getOrCreate()
This didn't work and errors out with "java.lang.ClassNotFoundException".
I think it is not able to find the jars, though it works fine when I pass them from spark-submit.
What is the right way of passing jars in the SparkSession? (See the sketch after the subprocess snippet below.)
2. I don't have a Spark cluster. It is more of an R&D project that processes a small dataset, so I don't think I need a cluster yet. How shall I run spark-submit from a remote server at the client's end?
Shall I change the master from local to the IP of the server where Spark is installed?
3. How can the client trigger this job? Shall I write a Python script that triggers this spark-submit via a subprocess and give it to the client, so that they can execute this Python file at a specific time from their workflow manager tool?
import subprocess

# "server-ip" is a placeholder for the Spark master URL
spark_submit_str = 'spark-submit --master "server-ip" spark_job.py'
process = subprocess.Popen(spark_submit_str, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                           universal_newlines=True, shell=True)
stdout, stderr = process.communicate()
if process.returncode != 0:
    print(stderr)
print(stdout)
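On question 1, one explanation that fits the ClassNotFoundException: jars given through config("spark.jars", ...) are only registered after the driver JVM is already running, while spark-submit --jars puts them on the classpath before the JVM starts. A minimal sketch of one common workaround, assuming the script is launched with plain python rather than spark-submit (the jar paths are the ones from the question):
import os

# Inject the jars exactly as spark-submit --jars would, before the SparkSession
# (and therefore the driver JVM) is created; the trailing "pyspark-shell" token is required.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--jars /app/some_path/lib/db.jar,/app/some_path/lib/thirdparthy.jar pyspark-shell"
)

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[*]")
         .appName("test1")
         .getOrCreate())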

Set deploy mode to cluster for PySpark from Jupyter

I've installed a Cloudera CDH cluster with Spark 2 on 7 hosts (2 masters, 4 workers and 1 edge node).
I installed a Jupyter server on the edge node and I want to set PySpark to run in cluster mode, so I run this in a notebook:
os.environ['PYSPARK_SUBMIT_ARGS']='--master yarn --deploy-mode=cluster pyspark-shell'
It gives me "Error: Cluster deploy mode is not applicable to Spark shells."
Can someone help me with this?
Thanks
The answer here is that you can't, because the configured Jupyter launches a pyspark shell session behind the scenes, which you can't run in cluster mode.
One solution I can think of for your problem is Livy + sparkmagic + Jupyter,
where Livy runs on YARN and serves job requests as REST calls,
and sparkmagic resides in Jupyter.
You can follow the link below for more info on this:
https://blog.chezo.uno/livy-jupyter-notebook-sparkmagic-powerful-easy-notebook-for-data-scientist-a8b72345ea2d
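For illustration only, here is a rough sketch of what such a REST submission to Livy can look like from Python; the host, file path and names are placeholders, and it assumes a Livy server is already up and configured for YARN:
import json
import requests

# 8998 is Livy's default port; "livy-host" is a placeholder
livy_url = "http://livy-host:8998/batches"

# Ask Livy to run a PySpark script that the cluster can reach (e.g. on HDFS)
payload = {"file": "hdfs:///path/to/spark_job.py", "name": "jupyter-submitted-job"}

resp = requests.post(livy_url, data=json.dumps(payload),
                     headers={"Content-Type": "application/json"})
print(resp.status_code, resp.json())  # the response carries the batch id and its state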
Major update: I have succeeded in deploying JupyterHub with CDH 5.13, and it works without any problems.
One thing to pay attention to is to install Python 3 as the default language; with Python 2, multiple jobs will fail because of incompatibilities with the Cloudera packages.

Spark Streaming and Kafka integration

I'm using Kafka and Spark Streaming for a project programmed in Python. I want to send data from a Kafka producer to my streaming program. It works smoothly when I execute the following command with the dependencies specified:
./spark-submit --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.1.0 ./kafkastreaming.py
Is there any way I can specify the dependencies and run the streaming code directly (i.e. without using spark-submit, or with spark-submit but without specifying the dependencies)?
I tried specifying the dependencies in spark-defaults.conf in the conf dir of Spark.
The specified dependencies were:
1. org.apache.spark:spark-streaming-kafka-0-8_2.11:2.1.0
2. org.apache.spark:spark-streaming-kafka-0-8-assembly:2.1.1
NOTE - I referred to the Spark Streaming guide using netcat from
https://spark.apache.org/docs/latest/streaming-programming-guide.html
and it worked without using the spark-submit command, hence I want to know if I can do the same with Kafka and Spark Streaming.
Put your additional dependencies into the "jars" folder of your Spark distribution, then stop and start Spark again. This way, dependencies will be resolved at runtime without adding any additional option on your command line.
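On the spark-defaults.conf attempt in the question: the Maven coordinates usually have to go under the spark.jars.packages key rather than stand on their own line. Alternatively, if the goal is to run the script with plain python like the netcat example, one common pattern is to inject --packages through PYSPARK_SUBMIT_ARGS before Spark starts; a rough sketch, where the ZooKeeper address, consumer group and topic name are placeholders:
import os

# Same effect as --packages on the spark-submit command line; the trailing
# "pyspark-shell" token is required when PySpark launches the JVM itself.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.1.0 pyspark-shell"
)

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(master="local[2]", appName="kafkastreaming")
ssc = StreamingContext(sc, 10)  # 10-second batches

# Receiver-based stream from the 0-8 connector: (zkQuorum, consumer group, {topic: partitions})
stream = KafkaUtils.createStream(ssc, "localhost:2181", "test-group", {"topic": 1})
stream.pprint()

ssc.start()
ssc.awaitTermination()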

How to submit a PySpark job to a remote cluster from a Windows client?

We are using a remote Spark cluster with YARN (on Hortonworks). Developers want to use Spyder to implement Spark applications on Windows. SSHing to the cluster and using an IPython notebook or Jupyter works well. Is there any other way to communicate with the Spark cluster from Windows?
Question 1: I got a headache with submitting a Spark job (written in Python) from a Windows machine which has no Spark installed. Could anyone help me out of this? Specifically, how do I phrase the command line to submit the job?
We can ssh to a YARN node in the cluster, in case that is relevant to a solution. The Windows client is also pingable from the cluster.
Question 2: What do we need to have on the client side, e.g. Spark libraries, if we want to debug in an environment like this?

Apache Spark Python to Scala translation

If I got it right, Apache YARN receives the Application Master and Node Manager as JAR files, which are executed as Java processes on the nodes of the YARN cluster.
When I write a Spark program using Python, does it get compiled into a JAR somehow?
If not, how is Spark able to execute Python logic on the YARN cluster nodes?
The PySpark driver program uses Py4J (http://py4j.sourceforge.net/) to launch a JVM and create a SparkContext. Spark RDD operations written in Python are mapped to operations on PythonRDD.
On the remote workers, PythonRDD launches sub-processes which run Python. The data and code are passed from the remote worker's JVM to its Python sub-process using pipes.
Therefore, your YARN nodes need to have Python installed for this to work.
The Python code is not compiled into a JAR, but is distributed around the cluster by Spark. To make this possible, user functions written in Python are pickled using the following code: https://github.com/apache/spark/blob/master/python/pyspark/cloudpickle.py
Source: https://cwiki.apache.org/confluence/display/SPARK/PySpark+Internals
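To make the pickling step concrete, here is a small illustration (not taken from the linked page) of how a Python closure can be serialized with the standalone cloudpickle library, the same module PySpark vendors as pyspark.cloudpickle, and restored elsewhere; this mirrors what happens to user functions before they are shipped to the Python worker processes:
import pickle
import cloudpickle  # pip install cloudpickle; PySpark bundles its own copy

factor = 3
# A closure capturing `factor`; the standard pickle module cannot serialize lambdas,
# but cloudpickle serializes the function code together with its captured environment.
payload = cloudpickle.dumps(lambda x: x * factor)

# On the "worker" side, the bytes are turned back into a callable and applied to data.
func = pickle.loads(payload)
print([func(i) for i in range(5)])  # [0, 3, 6, 9, 12]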
