Querying a JSON object in a DataFrame using PySpark - Python

I have a MySQL table with the following schema:
id-int
path-varchar
info-json {"name":"pat", "address":"NY, USA"....}
I used the JDBC driver to connect PySpark to MySQL. I can retrieve data from MySQL using
df = sqlContext.sql("select * from dbTable")
This query works fine. My question is, how can I query the "info" column? For example, the query below works fine in the MySQL shell and retrieves data, but it is not supported in PySpark (2+).
select id, info->"$.name" from dbTable where info->"$.name"='pat'

from pyspark.sql.functions import get_json_object
# select the "name" field out of the JSON column
res = df.select(get_json_object(df['info'], "$.name").alias('name'))
# filter rows where the "name" field equals 'pat'
res = df.filter(get_json_object(df['info'], "$.name") == 'pat')
There is already a function named get_json_object.
For your situation:
df = spark.read.jdbc(url='jdbc:mysql://localhost:3306', table='test.test_json',
                     properties={'user': 'hive', 'password': '123456'})
df.createOrReplaceTempView('test_json')
res = spark.sql("""
select col_json,get_json_object(col_json,'$.name') from test_json
""")
res.show()
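The WHERE part of the original MySQL query can be expressed the same way, since get_json_object also works inside a filter; a minimal sketch reusing the temp view registered above:
# sketch: equivalent of `where info->"$.name" = 'pat'` on the registered view
res = spark.sql("""
select get_json_object(col_json, '$.name') as name
from test_json
where get_json_object(col_json, '$.name') = 'pat'
""")
res.show()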
Spark SQL is very similar to Hive SQL; see
https://cwiki.apache.org/confluence/display/Hive/Home

Related

How do I get the schema of all the tables in Hive db using Python?

How do I get the schema of all the tables in a Hive DB using Python?
Can I use "SHOW TABLES" as the query, like in the following example?
with pyodbc.connect('DSN=Hive_Connection', autocommit=True) as conn:
    df = pd.read_sql('SHOW TABLES', conn)
Thank You (-:
I think the SHOW DATABASES command gives you the names of all databases, which are the Hive equivalent of schemas in other DBMSs.
So the combination of the two commands should allow you to infer the complete name of a table in Hive.
import pandas as pd
import pyodbc

cnxn = pyodbc.connect("DSN=Hive_Connection", autocommit=True)

my_databases = """SHOW DATABASES;"""
df_databases = pd.read_sql(my_databases, cnxn)
print(df_databases)

for ind in df_databases.index:
    my_database_tables = "SHOW TABLES IN " + df_databases["database_name"][ind]
    print(my_database_tables)
    # run the SHOW TABLES statement itself to list the tables of each database
    df_new = pd.read_sql(my_database_tables, cnxn)
    print(df_new)
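The question asks for the schema of the tables, not just their names; one possible extension (a sketch, not part of the original answer) is to run a DESCRIBE statement per table over the same connection:
# hypothetical example: fetch the column schema of a single table via DESCRIBE
schema_df = pd.read_sql("DESCRIBE my_database.my_table", cnxn)  # my_database/my_table are placeholders
print(schema_df)  # Hive returns col_name, data_type and comment columns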

DataFrame.to_sql equivalent of using "RETURNING id" with psycopg2?

When inserting data using psycopg2, I can retrieve the id of the inserted row using PostgreSQL's RETURNING clause:
import psycopg2

conn = my_connection_parameters()
curs = conn.cursor()
sql_insert_data_query = (
    """INSERT INTO public.data
           (created_by, comment)
       VALUES ( %(user)s, %(comment)s )
       RETURNING id;  -- the id is automatically managed by the database
    """
)
curs.execute(
    sql_insert_data_query,
    {
        "user": 'me',
        "comment": 'my comment'
    }
)
conn.commit()
data_id = curs.fetchone()[0]
and that's great because I need this id to write other data to, e.g., an associative table.
But when I have a large dictionary to write to PostgreSQL (whose keys are the column names), it's more convenient to rely on pandas' DataFrame.to_sql() method:
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('postgresql+psycopg2://', creator=my_connection_parameters)
df = pd.DataFrame(my_dict, index=[0])  # a "one-row" DataFrame, each column being created from the dict keys
df.to_sql(
    name='user_table',
    con=engine,
    schema='public',
    if_exists='append',
    index=False
)
but there is no direct way to retrieve the id PostgreSQL has created when this record was actually inserted.
Is there a nice and reliable workaround to get it?
Or should I stick with psycopg2 to write my large dictionary using a SQL query?
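No answer was posted for this one. One possible workaround, shown here only as a sketch under the assumption that the dictionary keys match the column names of public.user_table, is to build the INSERT from the dictionary yourself through SQLAlchemy so that RETURNING id is still available:
from sqlalchemy import create_engine, text

engine = create_engine('postgresql+psycopg2://', creator=my_connection_parameters)

# sketch: build a parameterized INSERT from the dict keys (assumes the keys are trusted column names)
cols = ', '.join(my_dict.keys())
placeholders = ', '.join(':{}'.format(k) for k in my_dict.keys())
stmt = text("INSERT INTO public.user_table ({}) VALUES ({}) RETURNING id".format(cols, placeholders))

with engine.begin() as connection:
    data_id = connection.execute(stmt, my_dict).scalar()
This keeps the convenience of building the statement from the dictionary while still getting the generated id back.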

Insert Into Hive Using Pyhive invoke an error

I am using pyhive to interact with Hive.
The SELECT statement works fine using the code below.
# Import hive module and connect
from pyhive import hive
conn = hive.Connection(host="HOST")
cur = conn.cursor()
# Import pandas
import pandas as pd
# Store the select query result in a dataframe
all_tables = pd.read_sql("SELECT * FROM table LIMIT 5", conn)
print(all_tables)
# Using the cursor
cur = conn.cursor()
cur.execute('SELECT * FROM table LIMIT 5')
print(cur.fetchall())
Up to this point there is no problem. The issue appears when I want to INSERT into Hive.
Let's say I want to execute this query: INSERT INTO table2 SELECT Col1, Col2 FROM table1;
I tried:
cur.execute('INSERT INTO table2 SELECT Col1, Col2 FROM table1')
I receive this error:
pyhive.exc.OperationalError: TExecuteStatementResp(status=TStatus(errorCode=1, errorMessage=u'Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask', sqlState=u'08S01', infoMessages=[u'*org.apache.hive.service.cli.HiveSQLException:Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask:28:27', u'org.apache.hive.service.cli.operation.Operation:toSQLException:Operation.java:388', u'org.apache.hive.service.cli.operation.SQLOperation:runQuery:SQLOperation.java:244', u'org.apache.hive.service.cli.operation.SQLOperation:runInternal:SQLOperation.java:279', u'org.apache.hive.service.cli.operation.Operation:run:Operation.java:324', u'org.apache.hive.service.cli.session.HiveSessionImpl:executeStatementInternal:HiveSessionImpl.java:499', u'org.apache.hive.service.cli.session.HiveSessionImpl:executeStatement:HiveSessionImpl.java:475', u'sun.reflect.GeneratedMethodAccessor81:invoke::-1', u'sun.reflect.DelegatingMethodAccessorImpl:invoke:DelegatingMethodAccessorImpl.java:43', u'java.lang.reflect.Method:invoke:Method.java:498', u'org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:78', u'org.apache.hive.service.cli.session.HiveSessionProxy:access$000:HiveSessionProxy.java:36', u'org.apache.hive.service.cli.session.HiveSessionProxy$1:run:HiveSessionProxy.java:63', u'java.security.AccessController:doPrivileged:AccessController.java:-2', u'javax.security.auth.Subject:doAs:Subject.java:422', u'org.apache.hadoop.security.UserGroupInformation:doAs:UserGroupInformation.java:1698', u'org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:59', u'com.sun.proxy.$Proxy33:executeStatement::-1', u'org.apache.hive.service.cli.CLIService:executeStatement:CLIService.java:270', u'org.apache.hive.service.cli.thrift.ThriftCLIService:ExecuteStatement:ThriftCLIService.java:507', u'org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement:getResult:TCLIService.java:1437', u'org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement:getResult:TCLIService.java:1422', u'org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39', u'org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39', u'org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56', u'org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:286', u'java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1149', u'java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:624', u'java.lang.Thread:run:Thread.java:748'], statusCode=3), operationHandle=None)
If I execute the same query directly in Hive, everything runs well.
Any thoughts?
NB: All my tables are external:
CREATE EXTERNAL TABLE IF NOT EXISTS table ( col1 String, col2 String) stored as orc LOCATION 's3://somewhere' tblproperties ("orc.compress"="SNAPPY");
The solution was to add the username to the connection line: conn = hive.Connection(host="HOST", username="USER")
From what I understand, Hive queries are divided into many types of operations (jobs). A simple query (i.e. SELECT * FROM table) just reads data from the Hive metastore, and no MapReduce job or temp tables are needed to perform it. But as soon as you switch to more complicated queries (i.e. using JOINs), you end up with the same error.
The full code looks like this:
# Import hive module and connect
from pyhive import hive
conn = hive.Connection(host="HOST", username="USER")
cur = conn.cursor()
query = "INSERT INTO table2 SELECT Col1, Col2 FROM table1"
cur.execute(query)
So maybe it needs a permission or something similar. I will look into this behavior further and update the answer.
I'm not sure how to insert a pandas DataFrame using pyhive, but if you have PySpark installed, one option is to convert it to a Spark DataFrame and use PySpark to do the insert.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
spark_df = spark.createDataFrame(pandas_df)
spark_df.write.mode('append').saveAsTable('database_name.table_name')
You can do the following using Spark:
from pyspark.sql import SparkSession
# create a Spark session with Hive support
spark = SparkSession.builder.enableHiveSupport().getOrCreate()
# convert the pandas data frame to a Spark data frame
spark_df = spark.createDataFrame(pandas_df)
# register the Spark data frame as a temp view
spark_df.createOrReplaceTempView("my_temp_table")
# execute the insert statement using Spark SQL
spark.sql("insert into hive_table select * from my_temp_table")
This will insert the data in your data frame into a Hive table.
Hope this helps.

How do I insert my Python dictionary into my SQL Server database table?

I have a dictionary with 3 keys which correspond to field names in a SQL Server table. The values of these keys come from an Excel file, and I store this dictionary in a DataFrame which I now need to insert into a SQL table. This can all be seen in the code below:
import pandas as pd
import pymssql

fp = "file path"
data = pd.read_excel(fp, sheet_name="CRM View")
row_date = data.loc[3, ]
row_sita = "ABZPD"
row_event = data.iloc[12, :]
df = pd.DataFrame({'date': row_date,
                   'sita': row_sita,
                   'event': row_event
                   }, index=None)
df = df[4:]
df = df.fillna("")
print(df)
My question is: how do I insert this dictionary into the SQL table now?
Also, as a side note, this code is part of a loop which needs to go through several Excel files one by one, insert the data into the dictionary, then into SQL, then clear the dictionary and start again with the next Excel file.
You could try something like this:
import MySQLdb

# connect
conn = MySQLdb.connect("127.0.0.1", "username", "password", "database_name")
x = conn.cursor()
# write (parameterized values avoid manual quoting)
x.execute('INSERT INTO my_table (row_date, sita, event) VALUES (%s, %s, %s)', (row_date, sita, event))
# close
conn.commit()
conn.close()
You might have to change it a little based on your SQL dialect and restrictions, but it should give you a good start anyway.
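Note that the question is about SQL Server via pymssql rather than MySQL; a hedged sketch of the equivalent parameterized insert with pymssql (server, credentials and table name are placeholders) would be:
import pymssql

# sketch: parameterized INSERT into SQL Server; connection values and table name are placeholders
conn = pymssql.connect(server="SERVER", user="USER", password="PASSWORD", database="DATABASE")
cur = conn.cursor()
cur.execute(
    "INSERT INTO my_table (date, sita, event) VALUES (%s, %s, %s)",
    (row_date, row_sita, row_event)
)
conn.commit()
conn.close()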
For the pandas DataFrame, you can use the built-in to_sql method to store it in the database. The following is the way to use it:
import urllib.parse
import sqlalchemy as sa

params = urllib.parse.quote_plus(
    "DRIVER={};SERVER={};DATABASE={};Trusted_Connection=yes;".format(
        "{SQL Server}", "<db_server_url>", "<db_name>"))
conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
engine = sa.create_engine(conn_str)
df.to_sql(<table_name>, engine, schema=<schema_name>, if_exists="append", index=False)
For this method you will need to install the sqlalchemy package:
pip install sqlalchemy
You will also need pyodbc and the SQL Server ODBC driver set up on the machine.
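For the side note about looping over several Excel files, a rough sketch of that outer loop (the folder path, table and schema names are placeholders; engine is the SQLAlchemy engine created above) could look like this:
import glob
import pandas as pd

# sketch: append the rows extracted from each Excel file to the SQL Server table
for fp in glob.glob("path/to/excel_files/*.xlsx"):  # placeholder folder
    data = pd.read_excel(fp, sheet_name="CRM View")
    row_date = data.loc[3, ]
    row_sita = "ABZPD"
    row_event = data.iloc[12, :]
    df = pd.DataFrame({'date': row_date, 'sita': row_sita, 'event': row_event}, index=None)
    df = df[4:].fillna("")
    df.to_sql("my_table", engine, schema="dbo", if_exists="append", index=False)  # table/schema are placeholders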

Python Teradata Connector - Column Names

Does anyone know how to retrieve the column names using Python's teradata library?
Here is a code sample:
import teradata

udaExec = teradata.UdaExec(appName="HelloWorld", version="1.0", logConsole=False)
session = udaExec.connect(method='odbc', system='system_name', authentication='LDAP',
                          username='username', password='$$tdwallet(tdprod)')
lst_results = []
for row in session.execute("select * from table_name"):
    print(row)
    lst_results.append(row)
The code above does not return the column names. Ultimately, I would like to put the query's results into a pandas DataFrame.
You do not need a for-loop:
import pandas as pd
df = pd.read_sql(query, session)  # query is your SQL string, session the teradata connection
will do the job. See my answer here:
Python PyTd teradata Query Into Pandas DataFrame
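If you do want to keep the explicit loop, the column names should also be recoverable from the DB-API cursor returned by session.execute; a sketch, assuming the teradata library follows the usual cursor.description convention:
import pandas as pd

# sketch: use the DB-API cursor description to recover column names
cursor = session.execute("select * from table_name")
column_names = [d[0] for d in cursor.description]  # first element of each description tuple is the column name
rows = [list(row) for row in cursor.fetchall()]
df = pd.DataFrame(rows, columns=column_names)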
