Apache Spark ALS algorithm - Python

I want to run a movie recommendation app based on the ALS algorithm on Apache Spark using Python.
I'm using Spark 2.2.0 with Hadoop 2.7.
I have one master and 2 workers
When I run the app with this command:
spark-submit --master spark://192.168.190.132:7077 --total-executor-cores 8 --executor-memory 2g engine.py
I get an error saying the ratings.csv file doesn't exist (I checked the address and everything is correct).
error picture below
https://i.stack.imgur.com/dgK2Q.jpg
But when I use this command:
spark-submit app.py
it works, but fails after a while.
I'm not using HDFS; I load the dataset locally.
Do I need to copy the datasets to all worker nodes?

You need to upload the dataset to HDFS if you want to run on the Spark standalone cluster: the files must be reachable from every worker node (you can check the workers in the web UI). Use hdfs dfs -put to upload the dataset to HDFS.
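A minimal sketch of what the driver code could look like after the upload (the namenode address, HDFS path, and CSV column names below are assumptions, not taken from the question): ratings are read from HDFS so every executor can reach the file, no matter which worker it lands on.

from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("MovieRecommender").getOrCreate()

# Read from HDFS (uploaded earlier with: hdfs dfs -put ratings.csv /data/ratings.csv)
# rather than from a local path that only exists on the master.
ratings = (spark.read
           .option("header", "true")
           .option("inferSchema", "true")
           .csv("hdfs://192.168.190.132:9000/data/ratings.csv"))  # namenode host/port is an assumption

als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating",
          coldStartStrategy="drop")
model = als.fit(ratings)
model.recommendForAllUsers(10).show()

If you prefer not to use HDFS, the alternative is to copy ratings.csv to the same absolute path on every worker node and keep reading it as a local file.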

Related

Two separate images to run Spark in client mode using Kubernetes, Python with Apache Spark 3.2.0?

I deployed Apache Spark 3.2.0 using this script, run from the distribution folder, to build the Python image:
./bin/docker-image-tool.sh -r <repo> -t my-tag -p ./kubernetes/dockerfiles/spark/bindings/python/Dockerfile build
I can create a container under K8s using spark-submit just fine. My goal is to run spark-submit configured for client mode (vs. local mode), and I expect additional containers to be created for the executors.
Does the image I created allow for this, or do I need to create a second image (without the -p option) using docker-image-tool.sh and configure it within a different container?
It turns out that only one image is needed if you're running PySpark. In client mode, Spark spawns the executors and workers for you, and they start as soon as you issue the spark-submit command. A big improvement over Spark 2.4!
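For reference, a client-mode PySpark driver can be configured entirely from Python against that single image. Below is a minimal sketch, assuming a reachable Kubernetes API server, a "spark" namespace, and the image built above; the placeholder hosts and names are not real values.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("k8s://https://<k8s-apiserver-host>:6443")
         .appName("client-mode-example")
         .config("spark.kubernetes.container.image", "<repo>/spark-py:my-tag")
         .config("spark.kubernetes.namespace", "spark")
         .config("spark.executor.instances", "2")
         # In client mode the executor pods must be able to reach the driver,
         # typically via a headless service or a routable host name.
         .config("spark.driver.host", "<driver-host-or-headless-service>")
         .getOrCreate())

print(spark.range(1000).count())  # executor pods are created on demand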

How to run non-Spark code on a Databricks cluster?

I am able to pull data via Databricks Connect and run Spark jobs perfectly. My question is how to run non-Spark, native Python code on the remote cluster. I'm not sharing the code due to confidentiality.
When you're using Databricks Connect, your local machine is the driver of your Spark job, so non-Spark code will always be executed on your local machine. If you want to execute it remotely, you need to package it as a wheel/egg, or upload the Python files onto DBFS (for example, via databricks-cli) and execute your code as a Databricks job (for example, using the Runs Submit command of the Jobs REST API, or by creating a job with databricks-cli and using databricks jobs run-now to execute it).
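A minimal sketch of the Runs Submit call mentioned above, using the 2.0 Jobs REST API via the requests library; the workspace URL, token, cluster id, and DBFS path are placeholders.

import requests

resp = requests.post(
    "https://<your-workspace>.cloud.databricks.com/api/2.0/jobs/runs/submit",
    headers={"Authorization": "Bearer <personal-access-token>"},
    json={
        "run_name": "native-python-run",
        "existing_cluster_id": "<cluster-id>",
        # Plain Python script previously uploaded to DBFS, e.g. with databricks-cli:
        #   databricks fs cp my_script.py dbfs:/scripts/my_script.py
        "spark_python_task": {"python_file": "dbfs:/scripts/my_script.py"},
    },
)
print(resp.json())  # returns a run_id you can poll with /api/2.0/jobs/runs/get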

Spark Streaming and Kafka integration

I'm using Kafka and Spark Streaming for a project programmed in Python. I want to send data from a Kafka producer to my streaming program. It works smoothly when I execute the following command with the dependencies specified:
./spark-submit --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.1.0 ./kafkastreaming.py
Is there any way I can specify the dependencies and run the streaming code directly (i.e. without using spark-submit, or with spark-submit but without specifying the dependencies)?
I tried specifying the dependencies in spark-defaults.conf in the conf dir of Spark.
The specified dependencies were:
1. org.apache.spark:spark-streaming-kafka-0-8_2.11:2.1.0
2. org.apache.spark:spark-streaming-kafka-0-8-assembly:2.1.1
NOTE - I followed the Spark Streaming guide's netcat example from
https://spark.apache.org/docs/latest/streaming-programming-guide.html
and it worked without using the spark-submit command, hence I want to know if I can do the same with Kafka and Spark Streaming.
Place your additional dependency jars in the "jars" folder of your Spark distribution, then stop and start Spark again. This way, the dependencies will be resolved at runtime without adding any extra options on your command line.
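A minimal sketch of a streaming job that can then be launched directly with python kafkastreaming.py, assuming the Kafka 0.8 assembly jar has been dropped into the jars folder as described above; the broker address and topic name are placeholders.

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="KafkaStreamingDirect")
ssc = StreamingContext(sc, 5)  # 5-second batches

# Direct stream backed by the Kafka 0.8 integration shipped in the copied jar
stream = KafkaUtils.createDirectStream(ssc, ["my-topic"],
                                       {"metadata.broker.list": "localhost:9092"})
stream.map(lambda kv: kv[1]).pprint()  # messages arrive as (key, value) pairs

ssc.start()
ssc.awaitTermination()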

Execute a Hadoop job on a remote server and get its results from a Python web service

I have a Hadoop job packaged in a jar file that I can execute on a server from the command line, storing the results in the server's HDFS.
Now I need to create a web service in Python (Tornado) that must execute the Hadoop job and fetch the results to present them to the user. The web service is hosted on another server.
I googled a lot for how to call the job from outside the server in a Python script, but unfortunately found no answers.
Anyone have a solution for this?
Thanks
One option could be to install the Hadoop binaries on your web service server, using the same configuration as in your Hadoop cluster. You need that to be able to talk to the cluster; you don't need to launch any Hadoop daemons there. At least configure HADOOP_HOME, HADOOP_CONF_DIR, HADOOP_LIBS, and set the PATH environment variable properly.
You need the binaries because you will use them to submit the job, and the configuration files to tell the Hadoop client where the cluster is (the NameNode and the ResourceManager).
Then, in Python, you can execute the hadoop jar command using subprocess: https://docs.python.org/2/library/subprocess.html
You can configure the job to notify your server when it has finished, using a callback: https://hadoopi.wordpress.com/2013/09/18/hadoop-get-a-callback-on-mapreduce-job-completion/
And finally, you can read the results from HDFS using WebHDFS (the HDFS REST API) or a Python HDFS package like https://pypi.python.org/pypi/hdfs/
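Putting those steps together, a rough sketch from the Tornado host might look like this; the jar path, main class, HDFS paths, and namenode address are placeholders, not values from the question.

import subprocess
from hdfs import InsecureClient  # pip install hdfs

# 1. Submit the job with the locally configured Hadoop client binaries
subprocess.check_call([
    "hadoop", "jar", "/opt/jobs/my-job.jar", "com.example.MyJob",
    "/input/data", "/output/results",
])

# 2. Once the job (or its completion callback) reports success, read the output over WebHDFS
client = InsecureClient("http://<namenode-host>:50070", user="hadoop")
with client.read("/output/results/part-r-00000", encoding="utf-8") as reader:
    print(reader.read())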

Apache Spark Python to Scala translation

If I got it right, Apache YARN receives the Application Master and Node Manager as JAR files, and they are executed as Java processes on the nodes of the YARN cluster.
When I write a Spark program using Python, is it compiled into a JAR somehow?
If not, how is Spark able to execute Python logic on the YARN cluster nodes?
The PySpark driver program uses Py4J (http://py4j.sourceforge.net/) to launch a JVM and create a Spark Context. Spark RDD operations written in Python are mapped to operations on PythonRDD.
On the remote workers, PythonRDD launches sub-processes which run Python. Data and code are passed from the remote worker's JVM to its Python sub-process using pipes.
Therefore, it is necessary for your YARN nodes to have Python installed for this to work.
The Python code is not compiled to a JAR, but is distributed around the cluster using Spark. In order to make this possible, user functions written in Python are pickled using the following code: https://github.com/apache/spark/blob/master/python/pyspark/cloudpickle.py
Source: https://cwiki.apache.org/confluence/display/SPARK/PySpark+Internals
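A small illustration of that flow: the function below is pickled with cloudpickle on the driver, shipped to the executors, and run inside the Python worker sub-processes on the YARN nodes; no JAR is produced. The master setting assumes a YARN client configuration is available on the machine running the script.

from pyspark import SparkContext

sc = SparkContext(master="yarn", appName="pickling-demo")

def squared(x):
    # Executed in a Python process on a YARN node, not inside the JVM
    return x * x

print(sc.parallelize(range(10)).map(squared).collect())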
