Hive UDF with Python

I'm new to python, pandas, and hive and would definitely appreciate some tips.
I have the python code below, which I would like to turn into a UDF in hive. Only instead of taking a csv as the input, doing the transformations and then exporting another csv, I would like to take a hive table as the input, and then export the results as a new hive table containing the transformed data.
Python Code:
import pandas as pd

data = pd.read_csv('Input.csv')
df = data.set_index(['Field1', 'Field2'])
# One-hot encode Field3, drop duplicate rows, then sum per (Field1, Field2) group
Dummies = pd.get_dummies(df['Field3']).reset_index()
df2 = Dummies.drop_duplicates()
df3 = df2.groupby(['Field1', 'Field2']).sum()
df3.to_csv('Output.csv')

You can use Hive's TRANSFORM clause to invoke a UDF written in Python. The detailed steps are outlined here and here.
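For example, here is a minimal sketch of how the pandas logic above could run inside a TRANSFORM script. The table name, column names, the output columns in the AS clause, and the assumption that pandas is installed on every worker node are all hypothetical placeholders.
#!/usr/bin/env python
# dummies_udf.py -- sketch of a streaming script for Hive's TRANSFORM clause.
# Hive pipes the selected columns to stdin as tab-separated rows and reads
# the transformed rows back from stdout.
#
# Hypothetical HiveQL that would invoke it (names are placeholders):
#   ADD FILE dummies_udf.py;
#   INSERT OVERWRITE TABLE output_table
#   SELECT TRANSFORM (field1, field2, field3)
#   USING 'python dummies_udf.py'
#   AS (field1, field2, dummy_a, dummy_b)
#   FROM input_table;
import sys
import pandas as pd

# Load the streamed rows into a DataFrame (column names are assumptions).
rows = [line.rstrip('\n').split('\t') for line in sys.stdin if line.strip()]
df = pd.DataFrame(rows, columns=['Field1', 'Field2', 'Field3'])

# Same transformation as the csv version: one-hot encode Field3,
# drop duplicates, then sum per (Field1, Field2).
dummies = pd.get_dummies(df.set_index(['Field1', 'Field2'])['Field3']).reset_index()
result = dummies.drop_duplicates().groupby(['Field1', 'Field2']).sum().reset_index()

# Emit tab-separated rows back to Hive.
for row in result.itertuples(index=False):
    print('\t'.join(str(v) for v in row))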

Related

Is it possible to manipulate a dataframe created through Pandas using SQL?

So I'm trying to create a python script that allows me to perform SQL manipulations on a dataframe (masterfile) I created using pandas. The dataframe draws its contents from the csv files found in a specific folder.
I was able to successfully create everything else, but I am having trouble with the SQL manipulation part. I am trying to use the dataframe as the "database" that my SQL query pulls data from, but I am getting an "AttributeError: 'DataFrame' object has no attribute 'cursor'" error.
I'm not really seeing a lot of examples for pandas.read_sql_query(), so I am having a difficult time understanding how to use my dataframe with it.
import os
import glob
import pandas
os.chdir("SOMECENSOREDDIRECTORY")
all_csv = [i for i in glob.glob('*.{}'.format('csv')) if i != 'Masterfile.csv']
edited_files = []
for i in all_csv:
    df = pandas.read_csv(i)
    df["file_name"] = i.split('.')[0]
    edited_files.append(df)
masterfile = pandas.concat(edited_files, sort=False)
print("Data fields are as shown below:")
print(masterfile.iloc[0])
sql_query = "SELECT Country, file_name as Year, Happiness_Score FROM masterfile WHERE Country = 'Switzerland'"
output = pandas.read_sql_query(sql_query, masterfile)
output.to_csv('data_pull')
I know this part is wrong, but this is the concept I am trying to get to work but don't know how:
output = pandas.read_sql_query(sql_query, masterfile)
I appreciate any help I can get! I am a self-taught python programmer by the way, so I might be missing some general rule or something. Thanks!
Edit: replaced "slice" with "manipulate" because I realized I didn't want to just slice it. Also fixed some alignment issues on my code block.
It is possible to slice a DataFrame created through Pandas. You can use the loc function of pandas to slice a DataFrame:
df.loc[rows, columns]
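For example, the SQL query from the question could be expressed with boolean indexing and loc roughly like this. The toy data below is made up purely so the snippet is runnable; the column names are taken from the question.
import pandas as pd

# Toy stand-in for the concatenated masterfile from the question.
masterfile = pd.DataFrame({
    'Country': ['Switzerland', 'Iceland', 'Switzerland'],
    'file_name': ['2015', '2015', '2016'],
    'Happiness_Score': [7.6, 7.5, 7.5],
})

# Equivalent of:
#   SELECT Country, file_name AS Year, Happiness_Score
#   FROM masterfile WHERE Country = 'Switzerland'
output = (
    masterfile.loc[masterfile['Country'] == 'Switzerland',
                   ['Country', 'file_name', 'Happiness_Score']]
    .rename(columns={'file_name': 'Year'})
)
output.to_csv('data_pull.csv', index=False)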

How do I execute this python code automatically in excel cells?

I need to extract the domain, for example (http://www.example.com/example-page, http://test.com/test-page), from a list of websites in an excel sheet, and modify that domain to give its url (example.com, test.com). I have got the code part figured out, but I still need to get these commands to work automatically on the cells of a column in the excel sheet.
here's_the_code
I think you should read the data in as a pandas DataFrame (pd.read_excel), turn your code into a function, then apply it to the dframe (df.apply). Then it is easy to save back to excel with df.to_excel().
Of course you will need pandas to be installed.
Something like:
import pandas as pd
dframe = pd.read_excel(io='' , sheet_name='')
dframe['domains'] = dframe['urls col name'].apply(your function)
dframe.to_excel('your path')
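A more concrete sketch of the same idea, assuming a hypothetical input file websites.xlsx with the URLs in a column named urls:
import pandas as pd
from urllib.parse import urlparse

def extract_domain(url):
    # Keep just the host part and strip a leading "www." if present.
    netloc = urlparse(url).netloc
    return netloc[4:] if netloc.startswith('www.') else netloc

# File, sheet and column names here are placeholders -- adjust to the real sheet.
dframe = pd.read_excel('websites.xlsx', sheet_name='Sheet1')
dframe['domains'] = dframe['urls'].apply(extract_domain)
dframe.to_excel('websites_with_domains.xlsx', index=False)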
Best

How to read an ORC file stored locally in Python Pandas?

Can I think of an ORC file as similar to a CSV file with column headings and row labels containing data? If so, can I somehow read it into a simple pandas dataframe? I am not that familiar with tools like Hadoop or Spark, but is it necessary to understand them just to see the contents of a local ORC file in Python?
The filename is someFile.snappy.orc
I can see online that spark.read.orc('someFile.snappy.orc') works, but even after import pyspark, it throws an error.
I haven't been able to find any great options; there are a few dead projects trying to wrap the Java reader. However, pyarrow does have an ORC reader that won't require you to use pyspark. It's a bit limited, but it works.
import pandas as pd
import pyarrow.orc as orc

# ORC is a binary format, so open the file in binary mode.
with open(filename, 'rb') as file:
    data = orc.ORCFile(file)
    df = data.read().to_pandas()
In case import pyarrow.orc as orc does not work (it did not work for me on Windows 10), you can read the file into a Spark DataFrame and then convert it to a pandas DataFrame:
import findspark

# findspark.init() must run before pyspark is imported so that pyspark is on the path.
findspark.init()
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df_spark = spark.read.orc('example.orc')
df_pandas = df_spark.toPandas()
Starting from Pandas 1.0.0, there is a built-in function for this in Pandas:
https://pandas.pydata.org/docs/reference/api/pandas.read_orc.html
import pandas as pd
import pyarrow.orc
df = pd.read_orc('/tmp/your_df.orc')
Be sure to read this warning about dependencies; this function might not work on Windows:
https://pandas.pydata.org/docs/getting_started/install.html#install-warn-orc
If you want to use read_orc(), it is highly recommended to install pyarrow using conda.
ORC, like AVRO and PARQUET, is a format specifically designed for massive storage. You can think of them "like a csv": they are all files containing data, with their own particular structure (different from a csv or a json, of course!).
Reading an ORC file with pyspark should be easy, as long as your environment grants Hive support.
Answering your question, I'm not sure that you will be able to read it in a local environment without Hive; I've never done it (you can do a quick test with the following snippet from the docs):
Loads ORC files, returning the result as a DataFrame.
Note: Currently ORC support is only available together with Hive support.
>>> df = spark.read.orc('python/test_support/sql/orc_partitioned')
Hive is a data warehouse system that allows you to query your data on HDFS (a distributed file system) through MapReduce, like a traditional relational database (creating SQL-like queries; it doesn't support 100% of the standard SQL features!).
Edit: Try the following to create a new Spark session. Not to be rude, but I suggest you follow one of the many PySpark tutorials in order to understand the basics of this "world". Everything will be much clearer.
import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Test').getOrCreate()
The easiest way is using pyorc:
import pyorc
import pandas as pd
with open(r"my_orc_file.orc", "rb") as orc_file:
reader = pyorc.Reader(orc_file)
orc_data = reader.read()
orc_schema = reader.schema
columns = list(orc_schema.fields)
df = pd.DataFrame(data=orc_data, columns=columns)
I did not want to submit a Spark job to read local ORC files or to pull in pandas. This worked for me.
import pyarrow.orc as orc
data_reader = orc.ORCFile("/path/to/orc/part_file.zstd.orc")
data = data_reader.read()
source = data.to_pydict()

Convert JSON to CSV using Azure Databricks

I'm new to Azure Databricks, so I am having a hard time importing JSON data and converting it to CSV using Azure Databricks, even after reading the documentation.
After converting JSON to CSV, I need to combine it with another CSV data that has a mutual column.
Any help would be really appreciated. Thank you
Are you looking to join on the mutual column? If so you can do something like this:
dfjson = spark.read.json("/path/to/json")
dfcsv = spark.read.csv("/path/to/csv")
dfCombined = dfjson.join(dfcsv, dfjson.mutualCol == dfcsv.mutualCol, joinType)
dfCombined.write.format(someFormat).save("/path/to/output")
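If the goal is just the JSON-to-CSV conversion itself before the join, a minimal sketch is below (the paths are placeholders, and it assumes the JSON records are flat, since CSV cannot hold nested columns):
# Read the JSON, then write it back out as CSV (paths are placeholders).
dfjson = spark.read.json("/mnt/raw/input.json")
dfjson.write.mode("overwrite").option("header", True).csv("/mnt/out/converted_csv")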

How to preprocess csv data for Spark 2.0 clustering?

I have a very simple csv file that looks like this:
time,is_boy,is_girl
135,1,0
136,0,1
137,0,1
I have this csv file sitting in a Hive table also, where all the values have been created as doubles in the table.
Behind the scenes, this table is actually enormous, and has an enormous number of rows, so I have chosen to use Spark 2 to solve this problem.
I would like to use this clustering library, with Python:
https://spark.apache.org/docs/2.2.0/ml-clustering.html
If anyone knows how to load this data, either directly from the csv or by using some Spark SQL magic, and preprocess it correctly, using Python, so that it can be passed into the kmeans fit() method and calculate a model, I would be very grateful. I also think it would be useful for others as I haven't found an example for csvs and for this library yet.
The fit method just takes a vector / DataFrame.
spark.read.csv and spark.sql both return a DataFrame.
However you want to preprocess your data, read over the DataFrame documentation before getting into the MLlib / KMeans examples.
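For the read-it-straight-from-the-csv route the question mentions, something like this should work (a sketch; the path is a placeholder and a SparkSession named spark is assumed to exist):
# Read the csv directly into a Spark DataFrame; header=True keeps the column
# names and inferSchema=True turns the values into doubles.
df = spark.read.csv("/path/to/simple.csv", header=True, inferSchema=True)
df.printSchema()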
So I guessed enough times and finally solved this. There were quite a few weird things I had to do to get it to work, so I feel it's worth sharing:
I created a simple csv like so:
time,is_boy,is_girl
123,1.0,0.0
132,1.0,0.0
135,0.0,1.0
139,0.0,1.0
140,1.0,0.0
Then I created a hive table, executing this query in hue:
CREATE EXTERNAL TABLE pollab02.experiment_raw(
`time` double,
`is_boy` double,
`is_girl` double)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde' with
serdeproperties( 'separatorChar' = ',' )
STORED AS TEXTFILE LOCATION "/user/me/hive/experiment"
TBLPROPERTIES ("skip.header.line.count"="1", "skip.footer.line.count"="0")
Then my pyspark script was as follows:
(I'm assuming a SparkSession has been created with the name "spark")
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans
raw_data = spark.sql("select * from dbname.experiment_raw")
#filter out rows of null values that were added for some reason
raw_data_filtered=raw_data.filter(raw_data.time>-1)
#convert rows of strings to doubles for kmeans:
data=raw_data_filtered.select([col(c).cast("double") for c in raw_data_filtered.columns])
cols = data.columns
#Merge data frame with column called features, that contains all data as a vector in each row
vectorAss = VectorAssembler(inputCols=cols, outputCol="features")
vdf=vectorAss.transform(data)
kmeans = KMeans(k=2, maxIter=10, seed=1)
model = kmeans.fit(vdf)
and the rest is history. I haven't followed best practices here. We could maybe drop some columns that we don't need from the vdf DataFrame to save space and improve performance, but this works.
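To actually look at the cluster assignments afterwards, something like this should work (a sketch building on the vdf and model variables above):
# Attach a "prediction" column with the cluster id for each row.
assigned = model.transform(vdf)
assigned.select("features", "prediction").show()
print(model.clusterCenters())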
