I have a dataframe and I'm doing this:
df = dataframe.withColumn("test", lit(0.4219759403))
I want to keep just the first four digits after the decimal point, without rounding.
When I cast to DecimalType with .cast(DataTypes.createDecimalType(20,4)),
or even use the round function, the number is rounded to 0.4220.
The only way that I found without rounding is applying the function format_number(), but this function gives me a string, and when I cast this string to DecimalType(20,4), the framework rounds the number again to 0.4220.
I need to convert this number to DecimalType(20,4) without rounding, and I expect to see 0.4219.
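For reference, here is a minimal sketch (assuming a SparkSession named spark) that reproduces the behaviour described: both the direct cast and round use half-up rounding and give 0.4220 instead of the truncated 0.4219.
from pyspark.sql import functions as F
from pyspark.sql.types import DecimalType

df = spark.range(1).withColumn("test", F.lit(0.4219759403))
# Both of these round half-up to four decimal places and show 0.4220:
df.select(
    F.col("test").cast(DecimalType(20, 4)).alias("cast_decimal"),
    F.round("test", 4).alias("round_4")
).show()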
If you have numbers with more than one digit before the decimal point, a fixed-length substring is not suitable. Instead, you can use a regex to always extract the first 4 decimal digits (if present).
You can do this using regexp_extract
df = dataframe.withColumn('rounded', F.regexp_extract(F.col('test'), r'\d+\.\d{0,4}', 0))
Example
import pyspark.sql.functions as F
dataframe = spark.createDataFrame([
(0.4219759403, ),
(0.4, ),
(1.0, ),
(0.5431293, ),
(123.769859, )
], ['test'])
df = dataframe.withColumn('rounded', F.regexp_extract(F.col('test'), r'\d+\.\d{0,4}', 0))
df.show()
+------------+--------+
|        test| rounded|
+------------+--------+
|0.4219759403|  0.4219|
|         0.4|     0.4|
|         1.0|     1.0|
|   0.5431293|  0.5431|
|  123.769859|123.7698|
+------------+--------+
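If the final column must be DecimalType(20,4) as in the question, you can then cast the extracted string; since at most four decimal digits remain, the cast has nothing left to round. A small follow-up sketch:
from pyspark.sql.types import DecimalType

df = df.withColumn('rounded', F.col('rounded').cast(DecimalType(20, 4)))
df.printSchema()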
Hi, welcome to Stack Overflow.
Please try to provide a reproducible example with the code you tried next time. In any case, this works for me:
import pyspark.sql.functions as F
from pyspark.sql.functions import lit
from pyspark.sql.types import DecimalType
df = spark.createDataFrame([
(1, "a"),
(2, "b"),
(3, "c"),
], ["ID", "Text"])
df = df.withColumn("test", lit(0.4219759403))
df = df.withColumn("test_string", F.substring(df["test"].cast("string"), 0, 6))
df = df.withColumn("test_string_decimaltype", df["test_string"].cast(DecimalType(20,4)))
df.show()
df.printSchema()
+---+----+------------+-----------+-----------------------+
| ID|Text|        test|test_string|test_string_decimaltype|
+---+----+------------+-----------+-----------------------+
|  1|   a|0.4219759403|     0.4219|                 0.4219|
|  2|   b|0.4219759403|     0.4219|                 0.4219|
|  3|   c|0.4219759403|     0.4219|                 0.4219|
+---+----+------------+-----------+-----------------------+
root
|-- ID: long (nullable = true)
|-- Text: string (nullable = true)
|-- test: double (nullable = false)
|-- test_string: string (nullable = false)
|-- test_string_decimaltype: decimal(20,4) (nullable = true)
Of course, if you want, you can overwrite the same column by always using "test"; I chose different names so you can see the steps.
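For example, the same steps collapsed onto the "test" column itself might look like this (a sketch; positions 1 through 6 cover "0.4219" only because this value has a single digit before the decimal point):
df = df.withColumn("test", F.substring(df["test"].cast("string"), 1, 6))
df = df.withColumn("test", df["test"].cast(DecimalType(20, 4)))
df.printSchema()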
Related
My question is: when converting from an RDD to a DataFrame in PySpark, does the schema depend on the first row?
data1 = [('A','abc',0.1,'',0.562),('B','def',0.15,0.5,0.123),('A','ghi',0.2,0.2,0.1345),('B','jkl','',0.1,0.642),('B','mno',0.1,0.1,'')]
>>> val1=sc.parallelize(data1).toDF()
>>> val1.show()
+---+---+----+---+------+
| _1| _2|  _3| _4|    _5|
+---+---+----+---+------+
|  A|abc| 0.1|   | 0.562| <------ Does it depend on the type of this row?
|  B|def|0.15|0.5| 0.123|
|  A|ghi| 0.2|0.2|0.1345|
|  B|jkl|null|0.1| 0.642|
|  B|mno| 0.1|0.1|  null|
+---+---+----+---+------+
>>> val1.printSchema()
root
|-- _1: string (nullable = true)
|-- _2: string (nullable = true)
|-- _3: double (nullable = true)
|-- _4: string (nullable = true)
|-- _5: double (nullable = true)
As you can see, column _4 should have been double, but it was inferred as string.
Any suggestions would be helpful.
Thanks!
@Prathik, I think you are right.
toDF() is a shorthand for spark.createDataFrame(rdd, schema, sampleRatio).
Here's the signature for createDataFrame:
def createDataFrame(self, data, schema=None, samplingRatio=None, verifySchema=True)
So by default, the parameters schema and samplingRatio are None.
According to the doc:
If schema inference is needed, samplingRatio is used to determined the ratio of
rows used for schema inference. The first row will be used if samplingRatio is None.
So by default, toDF() will use the first row to infer the data types, which gives StringType for column _4 but DoubleType for column _5.
Here you can't simply declare the schema with DoubleType for columns _4 and _5, since those columns contain empty strings.
But you can try setting sampleRatio to 0.3, as below:
data1 = [('A','abc',0.1,'',0.562),('B','def',0.15,0.5,0.123),('A','ghi',0.2,0.2,0.1345),('B','jkl','',0.1,0.642),('B','mno',0.1,0.1,'')]
val1=sc.parallelize(data1).toDF(sampleRatio=0.3)
val1.show()
val1.printSchema()
Sometimes the above code will throw an error if the sample happens to include one of the rows with an empty string:
Can not merge type <class 'pyspark.sql.types.DoubleType'> and <class 'pyspark.sql.types.StringType'>
but if you are patient and try a few more times (fewer than 10 for me), you may get something like this. You can see that both _4 and _5 are DoubleType, because this time the sample happened to miss the rows with empty strings when createDataFrame inferred the schema.
+---+---+----+----+------+
| _1| _2|  _3|  _4|    _5|
+---+---+----+----+------+
|  A|abc| 0.1|null| 0.562|
|  B|def|0.15| 0.5| 0.123|
|  A|ghi| 0.2| 0.2|0.1345|
|  B|jkl|null| 0.1| 0.642|
|  B|mno| 0.1| 0.1|  null|
+---+---+----+----+------+
root
|-- _1: string (nullable = true)
|-- _2: string (nullable = true)
|-- _3: double (nullable = true)
|-- _4: double (nullable = true)
|-- _5: double (nullable = true)
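Relying on a lucky sample is fragile. A more robust alternative (a sketch, not part of the original answer, assuming the usual spark session) is to replace the empty strings with None and pass an explicit schema, so no inference is needed at all:
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

schema = StructType([
    StructField("_1", StringType()),
    StructField("_2", StringType()),
    StructField("_3", DoubleType()),
    StructField("_4", DoubleType()),
    StructField("_5", DoubleType()),
])

# Empty strings cannot be stored in a double column, so turn them into None first.
cleaned = [tuple(None if v == '' else v for v in row) for row in data1]
val1 = spark.createDataFrame(cleaned, schema)
val1.printSchema()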
I have a dataframe with numbers in European format, which I imported as a string: comma as the decimal separator and dot as the thousands separator.
from pyspark.sql.functions import regexp_replace,col
from pyspark.sql.types import FloatType
df = spark.createDataFrame([('-1.269,75',)], ['revenue'])
df.show()
+---------+
|  revenue|
+---------+
|-1.269,75|
+---------+
df.printSchema()
root
|-- revenue: string (nullable = true)
Output desired:
df.show()
+--------+
| revenue|
+--------+
|-1269.75|
+--------+
df.printSchema()
root
|-- revenue: float (nullable = true)
I am using regexp_replace to first replace the dot with an empty string, then replace the comma with a dot, and finally cast to FloatType.
df = df.withColumn('revenue', regexp_replace(col('revenue'), ".", ""))
df = df.withColumn('revenue', regexp_replace(col('revenue'), ",", "."))
df = df.withColumn('revenue', df['revenue'].cast("float"))
But when I attempt the first replacement below, I get an empty string. Why? I was expecting -1269,75.
df = df.withColumn('revenue', regexp_replace(col('revenue'), ".", ""))
+-------+
|revenue|
+-------+
|       |
+-------+
You need to escape . to match it literally, as . is a special character that matches almost any character in regex:
df = df.withColumn('revenue', regexp_replace(col('revenue'), "\\.", ""))
I have a PySpark dataframe where one column is of string type, but the string represents a 2D array/list that needs to be exploded into rows. However, since it is not a struct/array type, it's not possible to use explode directly.
This can be seen in the example below:
a = [('Bob', 562,"Food", "[[29,June,2018],[12,May,2018]]"), ('Bob',880,"Food","[[01,June,2018]]"), ('Bob',380,'Household',"[[16,June,2018]]")]
df = spark.createDataFrame(a, ["Person", "Amount","Budget", "Date"])
df.printSchema()
Output:
root
|-- Person: string (nullable = true)
|-- Amount: long (nullable = true)
|-- Budget: string (nullable = true)
|-- Date: string (nullable = true)
The output I am looking for is seen below. I need to be able to convert the String to a Struct/Array so that I can explode:
+------+------+---------+---+-----+-----+
|Person|Amount|Budget   |Day|Month|Year |
+------+------+---------+---+-----+-----+
|Bob   |562   |Food     |29 |June |2018 |
|Bob   |562   |Food     |12 |May  |2018 |
|Bob   |880   |Food     |01 |June |2018 |
|Bob   |380   |Household|16 |June |2018 |
+------+------+---------+---+-----+-----+
Start with removing the outer [[ and ]] and splitting the string on all ],[.
After splitting, the data will be in an array, making it possible to use the explode function. After that, what is left is simply formatting the data into the desired output using another split and getItem.
It can be done as follows:
from pyspark.sql import functions as F
df.withColumn('date_arr', F.split(F.regexp_replace('Date', r'\[\[|\]\]', ''), r'\],\['))\
    .withColumn('date_arr', F.explode('date_arr'))\
    .withColumn('date_arr', F.split('date_arr', ','))\
    .select('Person',
            'Amount',
            'Budget',
            F.col('date_arr').getItem(0).alias('Day'),
            F.col('date_arr').getItem(1).alias('Month'),
            F.col('date_arr').getItem(2).alias('Year'))
If you have a string in the same format you showed us, then you need to perform these tasks:
Split the column into as many array entries as there are dates, and explode them
Remove the [ and ]
Split each date on the , comma
Assign each member of the new array to its own column.
Written as code:
from pyspark.sql import functions as F
df.select("Person",
"Amount",
"Budget",
F.explode(F.split("Date", '],')).alias("date_array")
).select("Person",
"Amount",
"Budget",
F.split(F.translate(F.translate("date_array", '[', ''), ']', ''), ',').alias("date_array")
).select("Person",
"Amount",
"Budget",
F.col("date_array").getItem(0).alias("Day"),
F.col("date_array").getItem(1).alias("Month"),
F.col("date_array").getItem(2).alias("Year"),
).show()
+------+------+---------+---+-----+----+
|Person|Amount|   Budget|Day|Month|Year|
+------+------+---------+---+-----+----+
|   Bob|   562|     Food| 29| June|2018|
|   Bob|   562|     Food| 12|  May|2018|
|   Bob|   880|     Food| 01| June|2018|
|   Bob|   380|Household| 16| June|2018|
+------+------+---------+---+-----+----+
I'm using PySpark v1.6.1 and I want to create a dataframe using another one:
Convert a field that has a struct of three values in different columns
Convert the timestamp from string to datetime
Create more columns using that timestamp
Change the rest of the column names and types
Right now I am using .map(func), creating an RDD with a function that transforms one row of the original type and returns a row of the new one. But this creates an RDD, and I don't want that.
Is there a nicer way to do this?
from pyspark.sql.functions import unix_timestamp, col, to_date, struct
####
#sample data
####
df = sc.parallelize([[25, 'Prem', 'M', '12-21-2006 11:00:05','abc', '1'],
[20, 'Kate', 'F', '05-30-2007 10:05:00', 'asdf', '2'],
[40, 'Cheng', 'M', '12-30-2017 01:00:01', 'qwerty', '3']]).\
toDF(["age","name","sex","datetime_in_strFormat","initial_col_name","col_in_strFormat"])
#create 'struct' type column by combining first 3 columns of sample data - (this is built to answer query #1)
df = df.withColumn("struct_col", struct('age', 'name', 'sex')).\
drop('age', 'name', 'sex')
df.show()
df.printSchema()
####
#query 1
####
#Convert a field that has a struct of three values (i.e. 'struct_col') in different columns (i.e. 'name', 'age' & 'sex')
df = df.withColumn('name', col('struct_col.name')).\
withColumn('age', col('struct_col.age')).\
withColumn('sex', col('struct_col.sex')).\
drop('struct_col')
df.show()
df.printSchema()
####
#query 2
####
#Convert the timestamp from string (i.e. 'datetime_in_strFormat') to datetime (i.e. 'datetime_in_tsFormat')
df = df.withColumn('datetime_in_tsFormat',
unix_timestamp(col('datetime_in_strFormat'), 'MM-dd-yyyy hh:mm:ss').cast("timestamp"))
df.show()
df.printSchema()
####
#query 3
####
#create more columns using above timestamp (e.g. fetch date value from timestamp column)
df = df.withColumn('datetime_in_dateFormat', to_date(col('datetime_in_tsFormat')))
df.show()
####
#query 4.a
####
#Change column name (e.g. 'initial_col_name' is renamed to 'new_col_name)
df = df.withColumnRenamed('initial_col_name', 'new_col_name')
df.show()
####
#query 4.b
####
#Change column type (e.g. string type in 'col_in_strFormat' is coverted to double type in 'col_in_doubleFormat')
df = df.withColumn("col_in_doubleFormat", col('col_in_strFormat').cast("double"))
df.show()
df.printSchema()
Sample data:
+---------------------+----------------+----------------+------------+
|datetime_in_strFormat|initial_col_name|col_in_strFormat|  struct_col|
+---------------------+----------------+----------------+------------+
|  12-21-2006 11:00:05|             abc|               1| [25,Prem,M]|
|  05-30-2007 10:05:00|            asdf|               2| [20,Kate,F]|
|  12-30-2017 01:00:01|          qwerty|               3|[40,Cheng,M]|
+---------------------+----------------+----------------+------------+
root
|-- datetime_in_strFormat: string (nullable = true)
|-- initial_col_name: string (nullable = true)
|-- col_in_strFormat: string (nullable = true)
|-- struct_col: struct (nullable = false)
| |-- age: long (nullable = true)
| |-- name: string (nullable = true)
| |-- sex: string (nullable = true)
Final output data:
+---------------------+------------+----------------+-----+---+---+--------------------+----------------------+-------------------+
|datetime_in_strFormat|new_col_name|col_in_strFormat| name|age|sex|datetime_in_tsFormat|datetime_in_dateFormat|col_in_doubleFormat|
+---------------------+------------+----------------+-----+---+---+--------------------+----------------------+-------------------+
|  12-21-2006 11:00:05|         abc|               1| Prem| 25|  M| 2006-12-21 11:00:05|            2006-12-21|                1.0|
|  05-30-2007 10:05:00|        asdf|               2| Kate| 20|  F| 2007-05-30 10:05:00|            2007-05-30|                2.0|
|  12-30-2017 01:00:01|      qwerty|               3|Cheng| 40|  M| 2017-12-30 01:00:01|            2017-12-30|                3.0|
+---------------------+------------+----------------+-----+---+---+--------------------+----------------------+-------------------+
root
|-- datetime_in_strFormat: string (nullable = true)
|-- new_col_name: string (nullable = true)
|-- col_in_strFormat: string (nullable = true)
|-- name: string (nullable = true)
|-- age: long (nullable = true)
|-- sex: string (nullable = true)
|-- datetime_in_tsFormat: timestamp (nullable = true)
|-- datetime_in_dateFormat: date (nullable = true)
|-- col_in_doubleFormat: double (nullable = true)
The following is a sample of my Spark DataFrame with the printSchema below it:
+--------------------+---+------+------+--------------------+
|           device_id|age|gender| group|                apps|
+--------------------+---+------+------+--------------------+
|-9073325454084204615| 24|     M|M23-26|                null|
|-8965335561582270637| 28|     F|F27-28|[1.0,1.0,1.0,1.0,...|
|-8958861370644389191| 21|     M|  M22-|[4.0,0.0,0.0,0.0,...|
|-8956021912595401048| 21|     M|  M22-|                null|
|-8910497777165914301| 25|     F|F24-26|                null|
+--------------------+---+------+------+--------------------+
only showing top 5 rows
root
|-- device_id: long (nullable = true)
|-- age: integer (nullable = true)
|-- gender: string (nullable = true)
|-- group: string (nullable = true)
|-- apps: vector (nullable = true)
I'm trying to fill the nulls in the 'apps' column with np.zeros(19237). However, when I execute
df.fillna({'apps': np.zeros(19237)})
I get an error
Py4JJavaError: An error occurred while calling o562.fill.
: java.lang.IllegalArgumentException: Unsupported value type java.util.ArrayList
Or if I try
df.fillna({'apps': DenseVector(np.zeros(19237))})
I get an error
AttributeError: 'numpy.ndarray' object has no attribute '_get_object_id'
Any ideas?
DataFrameNaFunctions supports only a subset of native types (no UDTs), so you'll need a UDF here.
from pyspark.sql.functions import coalesce, col, udf
from pyspark.ml.linalg import Vectors, VectorUDT
def zeros(n):
    def zeros_():
        return Vectors.sparse(n, {})
    return udf(zeros_, VectorUDT())()
Example usage:
df = spark.createDataFrame(
[(1, Vectors.dense([1, 2, 3])), (2, None)],
("device_id", "apps"))
df.withColumn("apps", coalesce(col("apps"), zeros(3))).show()
+---------+-------------+
|device_id|         apps|
+---------+-------------+
|        1|[1.0,2.0,3.0]|
|        2|    (3,[],[])|
+---------+-------------+
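Applied to the original question, the same pattern fills the null apps entries with a 19237-dimensional zero vector (sparse here, which serves the same purpose as np.zeros):
df = df.withColumn("apps", coalesce(col("apps"), zeros(19237)))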