I'm trying to connect to Hive using Python. I installed all of the required dependencies (sasl, thrift_sasl, etc.).
Here is how I try to connect:
configuration = {"hive.server2.authentication.kerberos.principal": "hive/_HOST@REALM_HOST", "hive.server2.authentication.kerberos.keytab": "/etc/security/keytabs/hive.service.keytab"}
connection = hive.Connection(configuration = configuration, host="host", port=port, auth="KERBEROS", kerberos_service_name = "hiveserver2")
But I get this error:
Minor code may provide more information (Cannot find KDC for realm "REALM_DOMAIN")
What am I missing? Does someone have an example of a PyHive connection using Kerberos?
Thank you for your help.
Thank you @Kishore.
Actually, in PySpark the code looks like this:
import pyspark
from pyspark import SparkContext
from pyspark.sql import Row
from pyspark import SparkConf
from pyspark.sql import HiveContext
from pyspark.sql import functions as F
import pyspark.sql.types as T
def connection(self):
    conf = pyspark.SparkConf()
    conf.setMaster('yarn-client')
    sc = pyspark.SparkContext(conf=conf)
    self.cursor = HiveContext(sc)
    self.cursor.setConf("hive.exec.dynamic.partition", "true")
    self.cursor.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
    self.cursor.setConf("hive.warehouse.subdir.inherit.perms", "true")
    self.cursor.setConf('spark.scheduler.mode', 'FAIR')
and you can run a query with:
rows = self.cursor.sql("SELECT someone FROM something")
for row in rows.collect():
    print row
I'm actually running the code via the command:
spark-submit --master yarn MyProgram.py
I guess you could basically run it as a plain Python script, with PySpark installed, like:
python MyProgram.py
but I haven't tried that, so I can't guarantee it works.
I don't know about PySpark, but I have been using the Scala code below and it has been working for the last year. You should be able to adapt this code to Python; replace the property values according to your Kerberos setup.
System.setProperty("hive.metastore.uris", "add hive.metastore.uris url");
System.setProperty("hive.metastore.sasl.enabled", "true")
System.setProperty("hive.metastore.kerberos.keytab.file", "add keytab")
System.setProperty("hive.security.authorization.enabled", "false")
System.setProperty("hive.metastore.kerberos.principal", "replace hive.metastore.kerberos.principal value")
System.setProperty("hive.metastore.execute.setugi", "true")
val hiveContext = new HiveContext(sparkContext)
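Since the question is about Python, a rough (untested) PySpark translation of those property settings might look like the sketch below. Every value here is a placeholder rather than something taken from the question, and whether these settings are honoured depends on your cluster's Kerberos and metastore configuration:
# Untested sketch: the same Hive/Kerberos properties as the Scala snippet above,
# set through the SparkSession builder. Replace all placeholder values with
# the ones for your cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("hive-kerberos-example") \
    .config("hive.metastore.uris", "thrift://your-metastore-host:9083") \
    .config("hive.metastore.sasl.enabled", "true") \
    .config("hive.metastore.kerberos.keytab.file", "/etc/security/keytabs/hive.service.keytab") \
    .config("hive.security.authorization.enabled", "false") \
    .config("hive.metastore.kerberos.principal", "hive/_HOST@YOUR_REALM") \
    .config("hive.metastore.execute.setugi", "true") \
    .enableHiveSupport() \
    .getOrCreate()

# Quick check that the Hive connection works.
spark.sql("SHOW DATABASES").show()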
Related
I have been trying to access tables in Hive using PySpark, and after reading a few other posts this is the way people recommend connecting to Hive. But it doesn't work. Then I realized I probably need to pass my username and password, but I can't figure out how to do it. So is there a way to pass the username and password when setting up the SparkSession, or what else could be the problem?
import sys
from pyspark import SparkContext, SparkConf, HiveContext
from pyspark.sql import SparkSession
if __name__ == "__main__":
    # create Spark context with Spark configuration
    spark = SparkSession.builder() \
        .appName("interfacing spark sql to hive metastore without configuration file") \
        .config("hive.metastore.uris", "thrift://my_server:10000") \
        .enableHiveSupport() \
        .getOrCreate()
    sc = spark.sparkContext
    df = sc.parallelize([(1, 2, 3, 'a b c'), (4, 5, 6, 'd e f'), (7, 8, 9, 'g h i')]).toDF(['col1', 'col2', 'col3', 'col4'])
    df.write.mode("overwrite").saveAsTable("test_spark")
Traceback
Exception in thread "main" org.apache.spark.SparkException: Application application_1575789516697_258641 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1122)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1168)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:780)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Spark connects to Hive directly. There is no need to pass a user name and password; just pass the hive-site.xml while submitting the Spark application.
Use the code below:
from pyspark.sql import SparkSession
sparkSession = SparkSession.builder.appName("ApplicationName").enableHiveSupport().getOrCreate()
While submitting your application, pass the hive-site.xml file, as:
spark-submit --files /<location>/hive-site.xml --py-files <List_of_Pyfiles>
Try adding the following to the config:
.config("spark.sql.warehouse.dir", your_warehouse_location)
Use this as a reference.
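Putting the two suggestions together, a minimal sketch might look like the following; the warehouse path is just a placeholder for your own location, and you would still submit the script with --files /<location>/hive-site.xml as shown above:
# Minimal sketch: Hive support enabled plus an explicit warehouse directory.
# The warehouse path is a placeholder; point it at your environment's location.
from pyspark.sql import SparkSession

sparkSession = SparkSession.builder \
    .appName("ApplicationName") \
    .config("spark.sql.warehouse.dir", "/user/hive/warehouse") \
    .enableHiveSupport() \
    .getOrCreate()

# Quick check that the Hive metastore is reachable.
sparkSession.sql("SHOW TABLES").show()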
I'm using PySpark to run some SQL queries on a Parquet file. I need to use multiple cores, but I didn't find any useful information on how to do so. Here's the code I'm using. As you can see, I set the number of cores to 3, but when I run the script I can see in htop that only one core is in use. How can I solve this?
from pyspark.sql import SparkSession
from pyspark.sql.types import *
spark = SparkSession \
.builder \
.appName("Python Spark SQL tests") \
.config("spark.executor.cores", 3) \
.getOrCreate()
# Check conf
for item in spark.sparkContext.getConf().getAll():
    print(item)
# Open file and create dataframe
filename = "gs://path/to/file.parquet"
df = spark.read.parquet(filename)
# Create table
df.createOrReplaceTempView("myTable")
# Query
sqlDF = spark.sql("SELECT * FROM myTable")
sqlDF.show()
From my understanding, you are using Spark in standalone/local mode (only on your machine, not on a cluster).
try:
from pyspark import sql
spark = (
sql.SparkSession.builder.master("local[*]")
.config("spark.executor.memory", "32g")
.config("spark.driver.memory", "32g")
...
.getOrCreate()
)
The * in local[*] means use all available cores. You can give a specific number instead, like local[3].
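As a quick sanity check (a small sketch, not part of the answer above), you can ask Spark how many cores it actually picked up after changing the master URL:
# Sketch: request 3 local cores and verify what Spark reports.
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("local[3]") \
    .appName("core-check") \
    .getOrCreate()

# In local mode, defaultParallelism reflects the number of cores,
# so this should print 3 along with the master URL local[3].
print(spark.sparkContext.defaultParallelism)
print(spark.sparkContext.master)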
I am having problems executing the example code from the mleap repository. I wish to run the code in a script instead of a jupyter notebook (which is how the example is run). My script is as follows:
##################################################################################
# start a local spark session
# https://spark.apache.org/docs/0.9.0/python-programming-guide.html
##################################################################################
from pyspark import SparkContext, SparkConf
conf = SparkConf()
#set app name
conf.set("spark.app.name", "train classifier")
#Run Spark locally with as many worker threads as logical cores on your machine (cores X threads).
conf.set("spark.master", "local[*]")
#number of cores to use for the driver process (only in cluster mode)
conf.set("spark.driver.cores", "1")
#Limit of total size of serialized results of all partitions for each Spark action (e.g. collect)
conf.set("spark.driver.maxResultSize", "1g")
#Amount of memory to use for the driver process
conf.set("spark.driver.memory", "1g")
#Amount of memory to use per executor process (e.g. 2g, 8g).
conf.set("spark.executor.memory", "2g")
#pass configuration to the spark context object along with code dependencies
sc = SparkContext(conf=conf)
from pyspark.sql.session import SparkSession
spark = SparkSession(sc)
##################################################################################
import mleap.pyspark
# Imports MLeap serialization functionality for PySpark
from mleap.pyspark.spark_support import SimpleSparkSerializer
# Import standard PySpark Transformers and packages
from pyspark.ml.feature import VectorAssembler, StandardScaler, OneHotEncoder, StringIndexer
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql import Row
# Create a test data frame
l = [('Alice', 1), ('Bob', 2)]
rdd = sc.parallelize(l)
Person = Row('name', 'age')
person = rdd.map(lambda r: Person(*r))
df2 = spark.createDataFrame(person)
df2.collect()
# Build a very simple pipeline using two transformers
string_indexer = StringIndexer(inputCol='name', outputCol='name_string_index')
feature_assembler = VectorAssembler(
inputCols=[string_indexer.getOutputCol()], outputCol="features")
feature_pipeline = [string_indexer, feature_assembler]
featurePipeline = Pipeline(stages=feature_pipeline)
featurePipeline.fit(df2)
featurePipeline.serializeToBundle("jar:file:/tmp/pyspark.example.zip")
On executing spark-submit script.py I get the following error:
17/09/18 13:26:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Traceback (most recent call last):
File "/Users/opringle/Documents/Repos/finn/Magellan/src/no_spark_predict.py", line 58, in <module>
featurePipeline.serializeToBundle("jar:file:/tmp/pyspark.example.zip")
AttributeError: 'Pipeline' object has no attribute 'serializeToBundle'
Any help would be much appreciated! I have installed mleap from PyPI.
See Here
It seems MLeap isn't ready for Spark 2.3 yet. If you happen to be running Spark 2.3, try downgrading to 2.2 and retry. Hopefully, that helps!
I have solved the issue by attaching the following package when running:
spark-submit --packages ml.combust.mleap:mleap-spark_2.11:0.8.1 script.py
It seems you didn't follow the steps correctly. Here, http://mleap-docs.combust.ml/getting-started/py-spark.html, it says that
Note: the import of mleap.pyspark needs to happen before any other PySpark libraries are imported.
Hence, try creating your SparkContext only after importing mleap.
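In other words, reorder the imports from the question roughly like this (just a sketch; the only point is that mleap.pyspark is imported before anything from PySpark is used):
# Import MLeap's PySpark support first, as the getting-started guide requires.
import mleap.pyspark
from mleap.pyspark.spark_support import SimpleSparkSerializer

# Only now import PySpark and create the SparkContext / SparkSession.
from pyspark import SparkContext, SparkConf
from pyspark.sql.session import SparkSession

conf = SparkConf().setAppName("train classifier").setMaster("local[*]")
sc = SparkContext(conf=conf)
spark = SparkSession(sc)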
I am running the docker image for snappydata v0.9. From inside that image, I can run queries against the database. However, I cannot do so from a second server on my machine.
I copied the Python files from SnappyData into my installed PySpark (editing snappysession to SnappySession in the imports) and, based on the answer to Unable to connect to snappydata store with spark-shell command, I wrote the following script (it is a bit of cargo-cult programming, as I was copying from the Python code in the docker image; suggestions to improve it are welcome):
import pyspark
from pyspark.context import SparkContext
from pyspark.sql import SparkSession, SQLContext
from pyspark.sql.snappy import SnappyContext
from pyspark.storagelevel import StorageLevel
SparkContext._ensure_initialized()
spark = SparkSession.builder.appName("test") \
.master("local[*]") \
.config("snappydata.store.locators", "localhost:10034") \
.getOrCreate()
spark.sql("SELECT col1, min(col2) from TABLE1")
However, I get a traceback with:
pyspark.sql.utils.AnalysisException: u'Table or view not found: TABLE1
I have verified with Wireshark that my program is communicating with the docker image (following the TCP stream shows the traceback message and a Scala traceback). My assumption is that the permissions in the SnappyData cluster are set wrong, but grepping through the logs and configuration did not show anything obvious.
How can I proceed?
-------- Edit 1 ------------
The new code that I am running (still getting the same error), incorporating the suggested change to the config and ensuring that I get a SnappySession, is:
from pyspark.sql.snappy import SnappySession
snappy = SnappySession.builder.appName("test") \
.master("local[*]") \
.config("spark.snappydata.connection", "localhost:1527") \
.getOrCreate()
snappy.sql("SELECT col1, min(col2) from TABLE1")
Can you change your config to the following?
.config("spark.snappydata.connection", "localhost:1527")
The 'snappydata.store.locators' property is no longer there in 0.9.
You can refer to the docs here: https://github.com/SnappyDataInc/snappydata/blob/master/docs/deployment.md#connectormode
Also, you need to create a SnappySession to access the Snappy managed Tables.
Something like this:
spark = SparkSession.builder.appName("test") \
.master("local[*]") \
.config("spark.snappydata.connection", "localhost:1527") \
.getOrCreate()
snappy = SnappySession(spark)
snappy.sql("SELECT col1, min(col2) from TABLE1")
I wrote a simple Flask app to pass some data to Spark. The script works in IPython Notebook, but not when I try to run it on its own server. I don't think that the Spark context is running within the script. How do I get Spark working in the following example?
from flask import Flask, request
from pyspark import SparkConf, SparkContext
app = Flask(__name__)
conf = SparkConf()
conf.setMaster("local")
conf.setAppName("SparkContext1")
conf.set("spark.executor.memory", "1g")
sc = SparkContext(conf=conf)
@app.route('/accessFunction', methods=['POST'])
def toyFunction():
    posted_data = sc.parallelize([request.get_data()])
    return str(posted_data.collect()[0])

if __name__ == '__main__':
    app.run(port=8080)
In IPython Notebook I don't define the SparkContext because it is automatically configured. I don't remember how I did this; I followed some blogs.
On the Linux server I have set the .py to always be running and installed the latest Spark by following up to step 5 of this guide.
Edit:
Following the advice from davidism, I have now resorted to simple programs of increasing complexity to localise the error.
Firstly, I created a .py file with just the script from the answer below (after appropriately adjusting the links):
import sys
try:
    sys.path.append("your/spark/home/python")
    from pyspark import context
    print("Successfully imported Spark Modules")
except ImportError as e:
    print("Can not import Spark Modules", e)
This returns "Successfully imported Spark Modules". However, the next .py file I made returns an exception:
from pyspark import SparkContext
sc = SparkContext('local')
rdd = sc.parallelize([0])
print rdd.count()
This returns the exception:
"Java gateway process exited before sending the driver its port number"
Searching around for similar problems, I found this page, but when I run that code nothing happens: no print on the console and no error messages. Similarly, this did not help either; I get the same Java gateway exception as above. I have also installed Anaconda, as I heard this may help unite Python and Java, but again no success...
Any suggestions about what to try next? I am at a loss.
Okay, so I'm going to answer my own question in the hope that someone out there won't suffer the same days of frustration! It turns out it was a combination of missing code and a bad setup.
Editing the code:
I did indeed need to initialise a Spark context by adding the following to the preamble of my code:
from pyspark import SparkContext
sc = SparkContext('local')
So the full code will be:
from pyspark import SparkContext
sc = SparkContext('local')
from flask import Flask, request
app = Flask(__name__)
@app.route('/whateverYouWant', methods=['POST'])  # can set first param to '/'
def toyFunction():
    posted_data = sc.parallelize([request.get_data()])
    return str(posted_data.collect()[0])

if __name__ == '__main__':
    app.run(port=8080)  # note set to 8080!
Editing the setup:
It is essential that the file (yourfilename.py) is in the correct directory, namely it must be saved to the folder /home/ubuntu/spark-1.5.0-bin-hadoop2.6.
Then issue the following command within the directory:
./bin/spark-submit yourfilename.py
which initiates the service at 10.0.0.XX:8080/accessFunction/.
Note that the port must be set to 8080 or 8081: Spark only allows the web UI on these ports by default, for the master and worker respectively.
You can test the service with a RESTful client or by opening a new terminal and sending POST requests with cURL:
curl --data "DATA YOU WANT TO POST" http://10.0.0.XX:8080/accessFunction/
I was able to fix this problem by adding the location of PySpark and py4j to the path in my flaskapp.wsgi file. Here's the full content:
import sys
sys.path.insert(0, '/var/www/html/flaskapp')
sys.path.insert(1, '/usr/local/spark-2.0.2-bin-hadoop2.7/python')
sys.path.insert(2, '/usr/local/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip')
from flaskapp import app as application
Modify your .py file as shown in the linked guide 'Using IPython Notebook with Spark', second point. Instead of sys.path.insert, use sys.path.append. Try inserting this snippet:
import sys
try:
    sys.path.append("your/spark/home/python")
    from pyspark import context
    print("Successfully imported Spark Modules")
except ImportError as e:
    print("Can not import Spark Modules", e)