I'm trying to parse a fixed-width text file.
My text file looks like the following, and I need a row id, a date, a string, and an integer:
00101292017you1234
00201302017 me5678
I can read the text file to an RDD using sc.textFile(path).
I can call createDataFrame with a parsed RDD and a schema.
It's the parsing in between those two steps that I'm stuck on.
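For reference, a minimal sketch of that RDD route, with the slice positions taken from the sample rows above and assuming the file lives at /tmp/sample.txt (the same path used in the answer below):
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

# layout from the sample rows: id = chars 1-3, date = 4-11, string = 12-14, integer = 15-18
schema = StructType([
    StructField("id", StringType(), True),
    StructField("date", StringType(), True),
    StructField("string", StringType(), True),
    StructField("integer", IntegerType(), True),
])

# slice each fixed-width line into its fields
parsed = (sc.textFile("/tmp/sample.txt")
            .map(lambda line: (line[0:3], line[3:11], line[11:14], int(line[14:18]))))

df = spark.createDataFrame(parsed, schema)
df.show()
The answers below show a simpler way that avoids the manual RDD step.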
Spark's substr function can handle fixed-width columns, for example:
df = spark.read.text("/tmp/sample.txt")
df.select(
df.value.substr(1,3).alias('id'),
df.value.substr(4,8).alias('date'),
df.value.substr(12,3).alias('string'),
df.value.substr(15,4).cast('integer').alias('integer')
).show()
will result in:
+---+--------+------+-------+
| id| date|string|integer|
+---+--------+------+-------+
|001|01292017| you| 1234|
|002|01302017| me| 5678|
+---+--------+------+-------+
Having split the columns, you can reformat and use them as in a normal Spark DataFrame.
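For example, a minimal sketch of such reformatting, casting the date text with to_date (the 'MMddyyyy' pattern is an assumption based on the sample rows):
from pyspark.sql.functions import to_date, trim

parsed = df.select(
    df.value.substr(1, 3).alias('id'),
    to_date(df.value.substr(4, 8), 'MMddyyyy').alias('date'),   # assumes MMddyyyy dates
    trim(df.value.substr(12, 3)).alias('string'),               # strip the fixed-width padding
    df.value.substr(15, 4).cast('integer').alias('integer')
)
parsed.printSchema()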
Someone asked how to do it based on a schema. Based on the above responses, here is a simple example:
x= ''' 1 123121234 joe
2 234234234jill
3 345345345jane
4abcde12345jack'''
schema = [
    ("id", 1, 5),
    ("ssn", 6, 10),
    ("name", 16, 4)
]

with open("personfixed.csv", "w") as f:
    f.write(x)
df = spark.read.text("personfixed.csv")
df.show()
df2 = df
for colinfo in schema:
    df2 = df2.withColumn(colinfo[0], df2.value.substr(colinfo[1], colinfo[2]))
df2.show()
Here is the output:
+-------------------+
| value|
+-------------------+
| 1 123121234 joe|
| 2 234234234jill|
| 3 345345345jane|
| 4abcde12345jack|
+-------------------+
+-------------------+-----+----------+----+
| value| id| ssn|name|
+-------------------+-----+----------+----+
| 1 123121234 joe| 1| 123121234| joe|
| 2 234234234jill| 2| 234234234|jill|
| 3 345345345jane| 3| 345345345|jane|
| 4abcde12345jack| 4|abcde12345|jack|
+-------------------+-----+----------+----+
Here is a one-liner for you:
df = spark.read.text("/folder/file.txt")
df.select(*map(lambda x: trim(df.value.substr(col_idx[x]['idx'], col_idx[x]['len'])).alias(x), col_idx))
where col_idx is something like this:
col_idx = {'col1': {'idx': 1, 'len': 2}, 'col2': {'idx': 3, 'len': 1}}
It's practical when you have a lot of columns, and it's also more efficient to use a single select than multiple withColumn calls (see "The hidden cost of Spark withColumn").
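Putting it together, a small self-contained sketch; the column names and widths here just reuse the fixed-width sample at the top of the page for illustration:
from pyspark.sql.functions import trim

# hypothetical layout matching the 00101292017you1234 sample above
col_idx = {
    'id':     {'idx': 1,  'len': 3},
    'date':   {'idx': 4,  'len': 8},
    'string': {'idx': 12, 'len': 3},
    'number': {'idx': 15, 'len': 4},
}

df = spark.read.text("/tmp/sample.txt")
parsed = df.select(*[
    trim(df.value.substr(spec['idx'], spec['len'])).alias(name)
    for name, spec in col_idx.items()
])
parsed.show()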
df = spark.read.text("fixedwidth")
df.withColumn("id",df.value.substr(1,5)).withColumn("name",df.value.substr(6,11)).drop('value').show()
The result is:
+-----+------+
| id| name|
+-----+------+
|23465|ramasg|
|54334|hjsgfd|
|87687|dgftre|
|45365|ghfduh|
+-----+------+
I need to split the delimited (~) column values into new columns dynamically. The input is a dataframe and a column name list. We are trying to solve this using Spark dataframe functions. Please help.
Input:
|Raw_column_name|
|1~Ram~1000~US|
|2~john~2000~UK|
|3~Marry~7000~IND|
col_names=[id,names,sal,country]
output:
id | names | sal | country
1 | Ram | 1000 | US
2 | john | 2000 | UK
3 | Marry | 7000 | IND
We can use split() and then use the resulting array's elements to create columns.
from pyspark.sql import functions as func

data_sdf. \
    withColumn('raw_col_split_arr', func.split('raw_column_name', '~')). \
    select(func.col('raw_col_split_arr').getItem(0).alias('id'),
           func.col('raw_col_split_arr').getItem(1).alias('name'),
           func.col('raw_col_split_arr').getItem(2).alias('sal'),
           func.col('raw_col_split_arr').getItem(3).alias('country')
           ). \
    show()
# +---+-----+----+-------+
# | id| name| sal|country|
# +---+-----+----+-------+
# | 1| Ram|1000| US|
# | 2| john|2000| UK|
# | 3|Marry|7000| IND|
# +---+-----+----+-------+
In case the use case is extended to a dynamic list of columns:
col_names = ['id', 'names', 'sal', 'country']
data_sdf. \
    withColumn('raw_col_split_arr', func.split('raw_column_name', '~')). \
    select(*[func.col('raw_col_split_arr').getItem(i).alias(k) for i, k in enumerate(col_names)]). \
    show()
# +---+-----+----+-------+
# | id|names| sal|country|
# +---+-----+----+-------+
# | 1| Ram|1000| US|
# | 2| john|2000| UK|
# | 3|Marry|7000| IND|
# +---+-----+----+-------+
Another option is the from_csv() function. The only thing that needs to be defined is the schema, which has the added advantage that the data can be parsed to the correct types automatically:
from pyspark.sql.functions import from_csv, col

df = spark.createDataFrame([('1~Ram~1000~US',), ('2~john~2000~UK',), ('3~Marry~7000~IND',)], ["Raw_column_name"])
df.show()
col_names = ['id', 'names', 'sal', 'country']
schema = ','.join([f'{name} string' for name in col_names])
# if custom type conversion is needed
# schema = "id int, names string, sal string, country string"
options = {'sep': '~'}
df2 = (df
       .select(from_csv(col('Raw_column_name'), schema, options).alias('cols'))
       .select(col('cols.*'))
       )
df2.printSchema()
df2.show()
I have a PySpark DataFrame as below with 7 columns, of which 6 are array columns and one is an array<array> column.
Sample data is as below:
+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+-------------------+--------------------+-------------------+------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------+
|customer_id |equipment_id |type |language |country |lang_cnt_str |model_num |
+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+-------------------+--------------------+-------------------+------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------+
|[18e644bb-4342-4c22-ab9b-a90fda50ad69, 70f0b998-3e4e-422d-b863-1f5f455c4883, 54a99992-5403-4946-b059-f71ec7ef2cca]|[1407c4a9-b075-4837-bada-690da10717cd, fc4632f3-302b-43cb-9245-ede2d1ac590f, 1407c4a9-b075-4837-bada-690da10717cd]|[comm, comm, vspec]|[cs, en-GB, pt-PT] |[[CZ], [PT], [PT]] |[(language = 'cs' AND country IS IN ('CZ')), (language = 'en-GB' AND country IS IN ('PT')), (language = 'pt-PT' AND country IS IN ('PT'))]|[1618832612617, 1618832612858, 1618832614027]|
+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+-------------------+--------------------+-------------------+------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------+
I want to split and map every element of all columns. Below is the expected output.
+---------------------------------------+---------------------------------------+-------+-----------+------------+--------------------------------------------------+-------------------+
|customer_id |equipment_id |type |language |country |lang_cnt_str |model_num |
+---------------------------------------+---------------------------------------+-------+-----------+------------+--------------------------------------------------+-------------------+
|18e644bb-4342-4c22-ab9b-a90fda50ad69 |1407c4a9-b075-4837-bada-690da10717cd |comm |cs |[CZ] |(language = 'cs' AND country IS IN ('CZ')) |1618832612617 |
|70f0b998-3e4e-422d-b863-1f5f455c4883 |fc4632f3-302b-43cb-9245-ede2d1ac590f |comm |en-GB |[PT] |(language = 'en-GB' AND country IS IN ('PT')) |1618832612858 |
|54a99992-5403-4946-b059-f71ec7ef2cca |1407c4a9-b075-4837-bada-690da10717cd |vspec |pt-PT |[PT] |(language = 'pt-PT' AND country IS IN ('PT')) |1618832614027 |
+---------------------------------------+---------------------------------------+-------+-----------+------------+--------------------------------------------------+-------------------+
How can we achieve this in PySpark? Can someone please help me? Thanks in advance!
We exchanged a couple of comments above, and I think there's nothing special about the array(array(string)) column. So I'm posting this answer to show the solution from How to explode multiple columns of a dataframe in pyspark:
from pyspark.sql import functions as f

df = spark.createDataFrame([
    (['1', '2', '3'], [['1'], ['2'], ['3']])
], ['col1', 'col2'])

df = (df
      .withColumn('zipped', f.arrays_zip(f.col('col1'), f.col('col2')))
      .withColumn('unzipped', f.explode(f.col('zipped')))
      .select(f.col('unzipped.col1'),
              f.col('unzipped.col2')
              )
      )
df.show()
The input is:
+---------+---------------+
| col1| col2|
+---------+---------------+
|[1, 2, 3]|[[1], [2], [3]]|
+---------+---------------+
And the output is:
+----+----+
|col1|col2|
+----+----+
| 1| [1]|
| 2| [2]|
| 3| [3]|
+----+----+
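The same pattern extends to the seven columns from the question. A sketch, assuming the original DataFrame is named df, the column names match the show() output above, and a Spark version where arrays_zip names the struct fields after the source columns (as in the example):
from pyspark.sql import functions as f

cols = ['customer_id', 'equipment_id', 'type', 'language',
        'country', 'lang_cnt_str', 'model_num']

exploded = (df
            # zip the parallel arrays element-wise into one array of structs
            .withColumn('zipped', f.arrays_zip(*[f.col(c) for c in cols]))
            # one output row per array position
            .withColumn('unzipped', f.explode(f.col('zipped')))
            # pull each field back out under its original column name
            .select(*[f.col('unzipped.' + c).alias(c) for c in cols]))

exploded.show(truncate=False)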
I am very new to PySpark but familiar with pandas.
I have a PySpark DataFrame:
from pyspark.sql import SparkSession

# instantiate Spark
spark = SparkSession.builder.getOrCreate()

# make some test data
columns = ['id', 'dogs', 'cats']
vals = [
    (1, 2, 0),
    (2, 0, 1)
]

# create DataFrame
df = spark.createDataFrame(vals, columns)
I want to add a new row (4, 5, 7) so that it will output:
df.show()
+---+----+----+
| id|dogs|cats|
+---+----+----+
| 1| 2| 0|
| 2| 0| 1|
| 4| 5| 7|
+---+----+----+
As thebluephantom has already said, union is the way to go. I'm just answering your question to give you a PySpark example:
from pyspark.sql import SparkSession

# if not already created automatically, instantiate the SparkSession
spark = SparkSession.builder.getOrCreate()
columns = ['id', 'dogs', 'cats']
vals = [(1, 2, 0), (2, 0, 1)]
df = spark.createDataFrame(vals, columns)
newRow = spark.createDataFrame([(4,5,7)], columns)
appended = df.union(newRow)
appended.show()
Please also have a look at the Databricks FAQ: https://kb.databricks.com/data/append-a-row-to-rdd-or-dataframe.html
From something I did, using union, showing a partial block of code - you of course need to adapt it to your own situation:
val dummySchema = StructType(
  StructField("phrase", StringType, true) :: Nil)
var dfPostsNGrams2 = spark.createDataFrame(sc.emptyRDD[Row], dummySchema)

for (i <- i_grams_Cols) {
  val nameCol = col({i})
  dfPostsNGrams2 = dfPostsNGrams2.union(dfPostsNGrams.select(explode({nameCol}).as("phrase")).toDF)
}
union of DF with itself is the way to go.
To append a row to a DataFrame, you can also use the collect() method. collect() converts the DataFrame to a Python list; you can append data directly to that list and then convert the list back to a DataFrame.
My Spark DataFrame, called df, looks like:
+---+----+------+
| id|name|gender|
+---+----+------+
| 1| A| M|
| 2| B| F|
| 3| C| M|
+---+----+------+
Convert this DataFrame to a list using collect():
collect_df = df.collect()
print(collect_df)
[Row(id=1, name='A', gender='M'),
Row(id=2, name='B', gender='F'),
Row(id=3, name='C', gender='M')]
Append the new row to this list:
collect_df.append({"id" : 5, "name" : "E", "gender" : "F"})
print(collect_df)
[Row(id=1, name='A', gender='M'),
Row(id=2, name='B', gender='F'),
Row(id=3, name='C', gender='M'),
{'id': 5, 'name': 'E', 'gender': 'F'}]
Convert this list back to a DataFrame:
added_row_df = spark.createDataFrame(collect_df)
added_row_df.show()
+---+----+------+
| id|name|gender|
+---+----+------+
| 1| A| M|
| 2| B| F|
| 3| C| M|
| 5| E| F|
+---+----+------+
Another alternative would be to utilize the partitioned Parquet format, and add an extra Parquet file for each DataFrame you want to append. This way you can create (hundreds, thousands, millions of) Parquet files, and Spark will just read them all as a union when you read the directory later.
This example uses pyarrow.
Note that I also show how to write a single Parquet file (example.parquet) that isn't partitioned, if you already know where you want to put it.
import pyarrow as pa
import pyarrow.parquet as pq
import pandas as pd

headers = ['A', 'B', 'C']

row1 = ['a1', 'b1', 'c1']
row2 = ['a2', 'b2', 'c2']

df1 = pd.DataFrame([row1], columns=headers)
df2 = pd.DataFrame([row2], columns=headers)
df3 = df1.append(df2, ignore_index=True)  # on pandas >= 2.0 use pd.concat([df1, df2], ignore_index=True)

table = pa.Table.from_pandas(df3)

pq.write_table(table, 'example.parquet', flavor='spark')
pq.write_to_dataset(table, root_path="test_part_file", partition_cols=['B', 'C'], flavor='spark')
# Adding a new partition (B=b3/C=c3)
row3 = ['a3', 'b3', 'c3']
df4 = pd.DataFrame([row3], columns=headers)
table2 = pa.Table.from_pandas(df4)
pq.write_to_dataset(table2, root_path="test_part_file", partition_cols=['B', 'C'], flavor='spark')
# Add another parquet file to the B=b2/C=c2 partition
# Note this does not overwrite existing partitions, it just appends a new .parquet file.
# If files already exist, then you will get a union result of the two (or multiple) files when you read the partition
row5 = ['a5', 'b2', 'c2']
df5 = pd.DataFrame([row5], columns=headers)
table3 = pa.Table.from_pandas(df5)
pq.write_to_dataset(table3, root_path="test_part_file", partition_cols=['B', 'C'], flavor='spark')
Reading the output afterwards
from pyspark.sql import SparkSession
spark = (SparkSession
         .builder
         .appName("testing parquet read")
         .getOrCreate())
df_spark = spark.read.parquet('test_part_file')
df_spark.show(25, False)
You should see something like this
+---+---+---+
|A |B |C |
+---+---+---+
|a5 |b2 |c2 |
|a2 |b2 |c2 |
|a1 |b1 |c1 |
|a3 |b3 |c3 |
+---+---+---+
If you run the same thing end to end again, you should see duplicates like this (since all of the previous Parquet files are still there, Spark unions them):
+---+---+---+
|A |B |C |
+---+---+---+
|a2 |b2 |c2 |
|a5 |b2 |c2 |
|a5 |b2 |c2 |
|a2 |b2 |c2 |
|a1 |b1 |c1 |
|a1 |b1 |c1 |
|a3 |b3 |c3 |
|a3 |b3 |c3 |
+---+---+---+
I have a large dataset of which I would like to drop columns that contain null values and return a new dataframe. How can I do that?
The following only drops a single column or rows containing null.
df.where(col("dt_mvmt").isNull())   # doesn't work because I don't have all the column names, or there are thousands of columns
df.filter(df.dt_mvmt.isNotNull())   # same reason as above
df.na.drop()                        # drops rows that contain null, instead of columns that contain null
For example
a | b | c
1 | | 0
2 | 2 | 3
In the above case it should drop the whole column b, because one of its values is empty.
Here is one possible approach for dropping all columns that have NULL values. (The code for counting NULL values per column is adapted from another answer.)
import pandas as pd
import pyspark.sql.functions as F

# Sample data
df = pd.DataFrame({'x1': ['a', '1', '2'],
                   'x2': ['b', None, '2'],
                   'x3': ['c', '0', '3']})
df = sqlContext.createDataFrame(df)
df.show()
def drop_null_columns(df):
    """
    This function drops all columns which contain null values.
    :param df: A PySpark DataFrame
    """
    null_counts = df.select([F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df.columns]).collect()[0].asDict()
    to_drop = [k for k, v in null_counts.items() if v > 0]
    df = df.drop(*to_drop)
    return df
# Drops column x2, because it contains null values
drop_null_columns(df).show()
Before:
+---+----+---+
| x1| x2| x3|
+---+----+---+
| a| b| c|
| 1|null| 0|
| 2| 2| 3|
+---+----+---+
After:
+---+---+
| x1| x3|
+---+---+
| a| c|
| 1| 0|
| 2| 3|
+---+---+
Hope this helps!
If we need to keep only the rows that have at least one of the inspected columns not null, then use this. Execution time is very short.
from operator import or_
from functools import reduce
import pyspark.sql.functions as F

inspected = df.columns
df = df.where(reduce(or_, (F.col(c).isNotNull() for c in inspected), F.lit(False)))
I'm trying to transpose some columns of my table to rows.
I'm using Python and Spark 1.5.0. Here is my initial table:
+-----+-----+-----+-------+
| A |col_1|col_2|col_...|
+-----+-----+-----+-------+
| 1 | 0.0| 0.6| ... |
| 2 | 0.6| 0.7| ... |
| 3 | 0.5| 0.9| ... |
| ...| ...| ...| ... |
I would like to have something like this:
+-----+--------+-----------+
| A | col_id | col_value |
+-----+--------+-----------+
| 1 | col_1| 0.0|
| 1 | col_2| 0.6|
| ...| ...| ...|
| 2 | col_1| 0.6|
| 2 | col_2| 0.7|
| ...| ...| ...|
| 3 | col_1| 0.5|
| 3 | col_2| 0.9|
| ...| ...| ...|
Does someone know how I can do it? Thank you for your help.
Spark >= 3.4
You can use the built-in melt method. With Python:
df.melt(
    ids=["A"], values=["col_1", "col_2"],
    variableColumnName="key", valueColumnName="val"
)
With Scala:
df.melt(Array($"A"), Array($"col_1", $"col_2"), "key", "val")
Spark < 3.4
It is relatively simple to do with basic Spark SQL functions.
Python
from pyspark.sql.functions import array, col, explode, struct, lit
df = sc.parallelize([(1, 0.0, 0.6), (1, 0.6, 0.7)]).toDF(["A", "col_1", "col_2"])
def to_long(df, by):
    # Filter dtypes and split into column names and type description
    cols, dtypes = zip(*((c, t) for (c, t) in df.dtypes if c not in by))
    # Spark SQL supports only homogeneous columns
    assert len(set(dtypes)) == 1, "All columns have to be of the same type"

    # Create and explode an array of (column_name, column_value) structs
    kvs = explode(array([
        struct(lit(c).alias("key"), col(c).alias("val")) for c in cols
    ])).alias("kvs")

    return df.select(by + [kvs]).select(by + ["kvs.key", "kvs.val"])
to_long(df, ["A"])
Scala:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{array, col, explode, lit, struct}
val df = Seq((1, 0.0, 0.6), (1, 0.6, 0.7)).toDF("A", "col_1", "col_2")
def toLong(df: DataFrame, by: Seq[String]): DataFrame = {
  val (cols, types) = df.dtypes.filter{ case (c, _) => !by.contains(c)}.unzip
  require(types.distinct.size == 1, s"${types.distinct.toString}.length != 1")

  val kvs = explode(array(
    cols.map(c => struct(lit(c).alias("key"), col(c).alias("val"))): _*
  ))

  val byExprs = by.map(col(_))

  df
    .select(byExprs :+ kvs.alias("_kvs"): _*)
    .select(byExprs ++ Seq($"_kvs.key", $"_kvs.val"): _*)
}
toLong(df, Seq("A"))
One way to solve this with PySpark SQL is using the functions create_map and explode.
from pyspark.sql import functions as func
#Use `create_map` to create the map of columns with constant
df = df.withColumn('mapCol', \
                   func.create_map(func.lit('col_1'), df.col_1,
                                   func.lit('col_2'), df.col_2,
                                   func.lit('col_3'), df.col_3
                                   )
                   )
#Use explode function to explode the map
res = df.select('*',func.explode(df.mapCol).alias('col_id','col_value'))
res.show()
The Spark local linear algebra libraries are presently very weak, and they do not include basic operations such as the above.
There is a JIRA for fixing this for Spark 2.1 - but that will not help you today.
Something to consider: performing a transpose will likely require completely shuffling the data.
For now you will need to write RDD code directly. I have written transpose in Scala - but not in Python. Here is the Scala version:
def transpose(mat: DMatrix) = {
  val nCols = mat(0).length
  val matT = mat
    .flatten
    .zipWithIndex
    .groupBy {
      _._2 % nCols
    }
    .toSeq.sortBy {
      _._1
    }
    .map(_._2)
    .map(_.map(_._1))
    .toArray
  matT
}
So you can convert that to Python for your use. I do not have the bandwidth to write/test that at this particular moment; let me know if you are unable to do that conversion.
At the least, the following are readily converted to Python:
zipWithIndex --> enumerate() (Python equivalent - credit to @zero323)
map --> [someOperation(x) for x in ..]
groupBy --> itertools.groupby()
Here is an implementation of flatten, which does not have a direct Python equivalent:
def flatten(L):
    for item in L:
        try:
            for i in flatten(item):
                yield i
        except TypeError:
            yield item
So you should be able to put those together for a solution.
You could use the stack function:
For example:
df.selectExpr("stack(2, 'col_1', col_1, 'col_2', col_2) as (key, value)")
where:
2 is the number of columns to stack (col_1 and col_2)
'col_1' is a string for the key
col_1 is the column from which to take the values
If you have several columns, you could build the whole stack string by iterating over the column names and pass that to selectExpr, as sketched below.
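A minimal sketch of that dynamic construction, assuming the id column is called A as in the question and that the output columns should be named col_id and col_value:
# every column except the id column becomes part of the stack expression
value_cols = [c for c in df.columns if c != 'A']  # e.g. col_1, col_2, ...

stack_expr = "stack({}, {}) as (col_id, col_value)".format(
    len(value_cols),
    ", ".join("'{0}', {0}".format(c) for c in value_cols)
)

df.selectExpr('A', stack_expr).show()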
Use flatMap. Something like the below should work:
from pyspark.sql import Row

def rowExpander(row):
    rowDict = row.asDict()
    valA = rowDict.pop('A')
    for k in rowDict:
        yield Row(**{'A': valA, 'colID': k, 'colValue': row[k]})

newDf = sqlContext.createDataFrame(df.rdd.flatMap(rowExpander))
I took the Scala answer that @javadba wrote and created a Python version for transposing all columns in a DataFrame. This might be a bit different from what the OP was asking...
from itertools import chain
from pyspark.sql import DataFrame


def _sort_transpose_tuple(tup):
    x, y = tup
    return x, tuple(zip(*sorted(y, key=lambda v_k: v_k[1], reverse=False)))[0]


def transpose(X):
    """Transpose a PySpark DataFrame.

    Parameters
    ----------
    X : PySpark ``DataFrame``
        The ``DataFrame`` that should be transposed.
    """
    # validate
    if not isinstance(X, DataFrame):
        raise TypeError('X should be a DataFrame, not a %s'
                        % type(X))

    cols = X.columns
    n_features = len(cols)

    # Sorry for this unreadability...
    return X.rdd.flatMap( # make into an RDD
        lambda xs: chain(xs)).zipWithIndex().groupBy( # zip index
        lambda val_idx: val_idx[1] % n_features).sortBy( # group by index % n_features as key
        lambda grp_res: grp_res[0]).map( # sort by index % n_features key
        lambda grp_res: _sort_transpose_tuple(grp_res)).map( # maintain order
        lambda key_col: key_col[1]).toDF() # return to DF
For example:
>>> X = sc.parallelize([(1,2,3), (4,5,6), (7,8,9)]).toDF()
>>> X.show()
+---+---+---+
| _1| _2| _3|
+---+---+---+
| 1| 2| 3|
| 4| 5| 6|
| 7| 8| 9|
+---+---+---+
>>> transpose(X).show()
+---+---+---+
| _1| _2| _3|
+---+---+---+
| 1| 4| 7|
| 2| 5| 8|
| 3| 6| 9|
+---+---+---+
To transpose a DataFrame in PySpark, I use pivot over a temporarily created column, which I drop at the end of the operation.
Say we have a table like this. What we want to do is sum the user counts over each listed_days_bin value.
+------------------+-------------+
| listed_days_bin | users_count |
+------------------+-------------+
|1 | 5|
|0 | 2|
|0 | 1|
|1 | 3|
|1 | 4|
|2 | 5|
|2 | 7|
|2 | 2|
|1 | 1|
+------------------+-------------+
Create a new temp column 'pvt_value', aggregate over it, and pivot the results:
import pyspark.sql.functions as F

agg_df = df.withColumn('pvt_value', F.lit(1))\
           .groupby('pvt_value')\
           .pivot('listed_days_bin')\
           .agg(F.sum('users_count')).drop('pvt_value')
The new DataFrame should look like:
+----+---+---+
| 0 | 1 | 2 | # Columns
+----+---+---+
| 3| 13| 14| # Users over the bin
+----+---+---+
I found transposing in PySpark too complicated, so I just convert my DataFrame to pandas, use the transpose() method, and convert the DataFrame back to PySpark if required.
dfOutput = spark.createDataFrame(dfPySpark.toPandas().transpose())
dfOutput.display()