I have a PySpark dataframe with a text column.
I want to map the values using regex expressions:
df = df.withColumn('mapped_col', regexp_replace('mapped_col', '.*-RH', 'RH'))
df = df.withColumn('mapped_col', regexp_replace('mapped_col', '.*-FI', 'FI'))
I also want to map specific values according to a dictionary; I did the following (mapper comes from create_map()):
df = df.withColumn("mapped_col",mapper.getItem(F.col("action")))
Finally, the values that have not been mapped by the dictionary or the regex expressions should be set to null. I do not know how to do this part consistently with the other two.
Is it possible to have something like a dictionary of regex expressions so I can combine the two 'functions'?
{".*-RH": "RH", ".*-FI": "FI"}
Original Output Example
+-----------------------------+
|message |
+-----------------------------+
|GDF2009 |
|GDF2014 |
|ADS-set |
|ADS-set |
|XSQXQXQSDZADAA5454546a45a4-FI|
|dadaccpjpifjpsjfefspolamml-FI|
|dqdazdaapijiejoajojp565656-RH|
|kijipiadoa                   |
+-----------------------------+
Expected Output Example
+-----------------------------+----------+
|message                      |status    |
+-----------------------------+----------+
|GDF2009                      |GDF       |
|GDF2014                      |GDF       |
|ADS/set                      |ADS       |
|ADS-set                      |ADS       |
|XSQXQXQSDZADAA5454546a45a4-FI|FI        |
|dadaccpjpifjpsjfefspolamml-FI|FI        |
|dqdazdaapijiejoajojp565656-RH|RH        |
|kijipiadoa                   |null or ??|
+-----------------------------+----------+
So the first four lines are mapped with the dict, and the others are mapped using regex. Unmapped values are null (or ??).
Thank you,
You can achieve it using the contains function:
from pyspark.sql import functions as f
from pyspark.sql.types import StringType

df = spark.createDataFrame(
    ["GDF2009", "GDF2014", "ADS-set", "ADS-set", "XSQXQXQSDZADAA5454546a45a4-FI", "dadaccpjpifjpsjfefspolamml-FI",
     "dqdazdaapijiejoajojp565656-RH", "kijipiadoa"], StringType()).toDF("message")
df.show()

names = ("GDF", "ADS", "FI", "RH")

def c(col, names):
    # One when() per candidate substring; rows that do not contain it yield ""
    return [f.when(f.col(col).contains(i), i).otherwise("") for i in names]

df.select("message", f.concat_ws("", f.array_remove(f.array(*c("message", names)), "")).alias("status")).show()
Output:
+--------------------+
| message|
+--------------------+
| GDF2009|
| GDF2014|
| ADS-set|
| ADS-set|
|XSQXQXQSDZADAA545...|
|dadaccpjpifjpsjfe...|
|dqdazdaapijiejoaj...|
| kijipiadoa|
+--------------------+
+--------------------+------+
| message|status|
+--------------------+------+
| GDF2009| GDF|
| GDF2014| GDF|
| ADS-set| ADS|
| ADS-set| ADS|
|XSQXQXQSDZADAA545...| FI|
|dadaccpjpifjpsjfe...| FI|
|dqdazdaapijiejoaj...| RH|
| kijipiadoa| |
+--------------------+------+
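Note that unmatched rows end up as an empty string rather than null with this approach. To answer the dictionary-of-regexes part of the question directly, you can also chain when() conditions built from a pattern dictionary, so anything that matches no pattern stays null by default. A minimal sketch, where the patterns below are assumptions based on the examples in the question:

from pyspark.sql import functions as F

# Hypothetical pattern -> value mapping, based on the examples in the question
regex_mapper = {"^GDF.*": "GDF", "^ADS.*": "ADS", ".*-FI$": "FI", ".*-RH$": "RH"}

# Chain when() clauses; rows matching no pattern are left as null
status_col = None
for pattern, value in regex_mapper.items():
    cond = F.col("message").rlike(pattern)
    status_col = F.when(cond, value) if status_col is None else status_col.when(cond, value)

df.withColumn("status", status_col).show(truncate=False)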
I'm using the code below to collect some info:
df = (
    df
    .select(
        date_format(date_trunc('month', col("reference_date")), 'yyyy-MM-dd').alias("month"),
        col("id"),
        col("name"),
        col("item_type"),
        col("sub_group"),
        col("latitude"),
        col("longitude")
    )
)
My latitude and longitude are values with dots, like -30.130307 and -51.2060018, but I must replace the dot with a comma. I've tried both .replace() and .regexp_replace(), but neither of them is working. Could you guys help me please?
With the following dataframe as an example.
df.show()
+-------------------+-------------------+
| latitude| longitude|
+-------------------+-------------------+
| 85.70708380916193| -68.05674981929877|
| 57.074495803252404|-42.648691976080215|
| 2.944303748172473| -62.66186439333423|
| 119.76923402031701|-114.41179457810185|
|-138.52573939229234| 54.38429596238362|
+-------------------+-------------------+
You should be able to use pyspark.sql functions like the following:
from pyspark.sql import functions
df = df.withColumn("longitude", functions.regexp_replace('longitude',r'[.]',","))
df = df.withColumn("latitude", functions.regexp_replace('latitude',r'[.]',","))
df.show()
+-------------------+-------------------+
| latitude| longitude|
+-------------------+-------------------+
| 85,70708380916193| -68,05674981929877|
| 57,074495803252404|-42,648691976080215|
| 2,944303748172473| -62,66186439333423|
| 119,76923402031701|-114,41179457810185|
|-138,52573939229234| 54,38429596238362|
+-------------------+-------------------+
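If the replacement still seems to have no effect in your pipeline, it can help to cast the columns to string explicitly before replacing; translate is also a lighter alternative for a single-character swap. A minimal sketch, assuming the column names above:

from pyspark.sql import functions as F

# Cast to string first (in case the columns are numeric), then swap '.' for ','
for c in ("latitude", "longitude"):
    df = df.withColumn(c, F.translate(F.col(c).cast("string"), ".", ","))

df.show()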
I want to remove the words that appear in a secondary data frame from a main data frame.
This is the main data frame:
+----------+--------------------+
| event_dt| cust_text|
+----------+--------------------+
|2020-09-02|hi fine i want to go|
|2020-09-02|i need a line hold |
|2020-09-02|i have the 60 packs|
|2020-09-02|hello want you teach|
+----------+--------------------+
Below is the single-column secondary data frame. The words in the secondary data frame need to be removed from the main data frame's cust_text column wherever they occur. For example, 'want' will be removed from every row where it shows up (in this example it will be removed from the first and fourth rows).
+-------+
|column1|
+-------+
| want|
|because|
| need|
| hello|
| a|
| have|
| go|
+-------+
The event_dt column and each row will remain as is; only the secondary data frame's words are removed from the main data frame in the result, as shown below:
+----------+--------------------+
| event_dt| cust_text|
+----------+--------------------+
|2020-09-02|hi fine i to |
|2020-09-02|i line hold |
|2020-09-02|i the 60 packs |
|2020-09-02|you teach |
+----------+--------------------+
Help is appreciated!!
This should be a working solution for you: use array_except() to eliminate the unwanted strings. However, to do that we need a little bit of preparation.
Create the DataFrame Here
from pyspark.sql import functions as F
from pyspark.sql import types as T
df = spark.createDataFrame(
    [("2020-09-02", "hi fine i want to go"), ("2020-09-02", "i need a line hold"),
     ("2020-09-02", "i have the 60 packs"), ("2020-09-02", "hello want you teach")],
    ["col1", "col2"])
Make the column an array for future use
df = df.withColumn("col2", F.split("col2", " "))
df.show(truncate=False)
df_lookup = spark.createDataFrame(
    [(1, "want"), (1, "because"), (1, "need"), (1, "hello"), (1, "a"), (1, "give"), (1, "go")],
    ["col1", "col2"])
df_lookup.show()
Output
+----------+---------------------------+
|col1 |col2 |
+----------+---------------------------+
|2020-09-02|[hi, fine, i, want, to, go]|
|2020-09-02|[i, need, , a, line, hold] |
|2020-09-02|[i, have, the, , 60, packs]|
|2020-09-02|[hello, want, you, teach] |
+----------+---------------------------+
+----+-------+
|col1| col2|
+----+-------+
| 1| want|
| 1|because|
| 1| need|
| 1| hello|
| 1| a|
| 1| give|
| 1| go|
+----+-------+
Now, just groupBy the lookup dataframe and collect all the lookup values into a variable, as below:
df_lookup_var = df_lookup.groupBy("col1").agg(F.collect_set("col2").alias("col2")).collect()[0][1]
print(df_lookup_var)
x = ",".join(df_lookup_var)
print(x)
df = df.withColumn("filter_col", F.lit(x))
df = df.withColumn("filter_col", F.split("filter_col", ","))
df.show(truncate=False)
This does the trick
df = df.withColumn("ArrayColumn", F.array_except("col2", "filter_col"))
df.show(truncate = False)
+----------+---------------------------+-----------------------------------------+---------------------------+
|col1 |col2 |filter_col |ArrayColumn |
+----------+---------------------------+-----------------------------------------+---------------------------+
|2020-09-02|[hi, fine, i, want, to, go]|[need, want, a, because, hello, give, go]|[hi, fine, i, to] |
|2020-09-02|[i, need, , a, line, hold] |[need, want, a, because, hello, give, go]|[i, , line, hold] |
|2020-09-02|[i, have, the, , 60, packs]|[need, want, a, because, hello, give, go]|[i, have, the, , 60, packs]|
|2020-09-02|[hello, want, you, teach] |[need, want, a, because, hello, give, go]|[you, teach] |
+----------+---------------------------+-----------------------------------------+---------------------------+
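If you would rather avoid the join/split round trip through a string literal, the collected lookup words can also be turned directly into an array of literals. A minimal sketch under the same column names, starting from the original string column col2 (before the split step above); array_join and array_except require Spark 2.4+:

from pyspark.sql import functions as F

# Collect the lookup words once on the driver
stop_words = [r["col2"] for r in df_lookup.select("col2").distinct().collect()]

# Remove them from the split sentence and stitch it back together
df_clean = df.withColumn(
    "col2",
    F.array_join(F.array_except(F.split(F.col("col2"), " "),
                                F.array(*[F.lit(w) for w in stop_words])), " ")
)
df_clean.show(truncate=False)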
I am loading a csv file into postgresql using pyspark. I have a record in the input file which looks like below -
Id,dept,city,name,country,state
1234,ABC,dallas,markhenry\,USA,texas
When I load it into the postgresql database then it gets loaded like this which is not correct -
Id | dept| city | name | country | state
1234 | ABC | dallas | markhenry,USA | texas | null
correct output in postgresdb should be -
Id | dept| city | name | country | state
1234 | ABC | dallas | markhenry | USA | texas
I am reading the file like below -
input_df = spark.read.format("csv").option("quote", "\"").option("escape", "\"").option("header",
"true").load(filepath)
Is there a way I can modify my code to handle the backslash (\) coming in the data? Thanks in advance.
The purpose of the "quote" option is to specify a quote character, which wraps entire column values. Not sure if that is needed here, but you can use the regexp_replace function to remove specific characters (just select everything else as-is and modify the name column this way).
from pyspark.sql.functions import *
df = spark.read.option("inferSchema", "true").option("header", "true").csv(filepath)
df2 = df.select(col("Id"), col("dept"), col("city"), regexp_replace(col("name"), "\\\\", "").alias("name"), col("country"), col("state"))
df2.show(4, False)
Output:
+----+----+------+---------+-------+-----+
|Id |dept|city |name |country|state|
+----+----+------+---------+-------+-----+
|1234|ABC |dallas|markhenry|USA |texas|
+----+----+------+---------+-------+-----+
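If other fields could also contain stray backslashes, the same cleanup can be applied to every string column in a loop rather than naming each column. A minimal sketch, assuming the dataframe was read as above:

from pyspark.sql import functions as F
from pyspark.sql.types import StringType

# Strip literal backslashes from every string-typed column
for field in df.schema.fields:
    if isinstance(field.dataType, StringType):
        df = df.withColumn(field.name, F.regexp_replace(F.col(field.name), r"\\", ""))

df.show(truncate=False)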
In my Spark application I have a dataframe with information like:
+------------------+---------------+
| labels | labels_values |
+------------------+---------------+
| ['l1','l2','l3'] | 000 |
| ['l3','l4','l5'] | 100 |
+------------------+---------------+
What I am trying to achieve is to create, given a label name as input, a single_label_value column that takes the value for that label from the labels_values column.
For example, for label='l3' I would like to retrieve this output:
+------------------+---------------+--------------------+
| labels | labels_values | single_label_value |
+------------------+---------------+--------------------+
| ['l1','l2','l3'] | 000 | 0 |
| ['l3','l4','l5'] | 100 | 1 |
+------------------+---------------+--------------------+
Here's what I am attempting to use:
selected_label='l3'
label_position = F.array_position(my_df.labels, selected_label)
my_df= my_df.withColumn(
"single_label_value",
F.substring(my_df.labels_values, label_position, 1)
)
But I am getting an error because the substring function does not like the label_position argument.
Is there any way to combine these function outputs without writing a UDF?
Hope this works for you.
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
spark=SparkSession.builder.getOrCreate()
mydata=[[['l1','l2','l3'],'000'], [['l3','l4','l5'],'100']]
df = spark.createDataFrame(mydata,schema=["lebels","lebel_values"])
selected_label='l3'
df2=df.select(
"*",
(array_position(df.lebels,selected_label)-1).alias("pos_val"))
df2.createOrReplaceTempView("temp_table")
df3=spark.sql("select *,substring(lebel_values,pos_val,1) as val_pos from temp_table")
df3.show()
+------------+------------+-------+-------+
| lebels|lebel_values|pos_val|val_pos|
+------------+------------+-------+-------+
|[l1, l2, l3]| 000| 2| 0|
|[l3, l4, l5]| 100| 0| 1|
+------------+------------+-------+-------+
This gives the location of the value; if you want the exact index, you can subtract 1 from this value.
Edited answer -> worked with a temp view. Still looking for a solution using the withColumn option. I hope it will help you for now.
Edit 2 -> answer using the DataFrame API.
df2=df.select(
"*",
(array_position(df.lebels,selected_label)-1).astype("int").alias("pos_val")
)
df3=df2.withColumn("asked_col",expr("substring(lebel_values,pos_val,1)"))
df3.show()
Try maybe:
import pyspark.sql.functions as f
from pyspark.sql.functions import *
selected_label='l3'
df = df.withColumn('single_label_value',
                   f.substring(f.col('labels_values'),
                               array_position(f.col('labels'), lit(selected_label)) - 1, 1))
df.show()
(for spark version >=2.4)
I think lit() was the function you were missing - you can use it to pass constant values to spark dataframes.
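Since array_position and SQL substring are both 1-based, an expr-based variant also lines the positions up without any manual offset, and it sidesteps passing a Column where older PySpark versions of substring expect an integer. A minimal sketch, assuming the original column names labels and labels_values; rows where the label is absent (array_position returns 0) would still need separate handling:

import pyspark.sql.functions as F

selected_label = 'l3'

# Both array_position and substring count from 1, so the positions match directly
df = df.withColumn(
    "single_label_value",
    F.expr(f"substring(labels_values, array_position(labels, '{selected_label}'), 1)")
)
df.show()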
I have a Spark dataframe for which I want to get statistics:
stats_df = df.describe(['mycol'])
stats_df.show()
+-------+------------------+
|summary| mycol|
+-------+------------------+
| count| 300|
| mean| 2243|
| stddev| 319.419860456123|
| min| 1400|
| max| 3100|
+-------+------------------+
How do I extract the min and max values of mycol using the summary column? And how do I do it by numeric index?
You could easily assign a variable by filtering and selecting on that dataframe:
from pyspark.sql.functions import col
x = stats_df.where(col('summary') == 'min').select('mycol').collect()[0][0]
OK, let's consider the following example:
from pyspark.sql.functions import rand, randn
df = sqlContext.range(1, 1000).toDF('mycol')
df.describe().show()
# +-------+-----------------+
# |summary| mycol|
# +-------+-----------------+
# | count| 999|
# | mean| 500.0|
# | stddev|288.5307609250702|
# | min| 1|
# | max| 999|
# +-------+-----------------+
If you want to access the row concerning stddev, for example, you'll just need to convert it into an RDD, collect it, and convert it into a dictionary as follows:
stats = dict(df.describe().rdd.map(lambda r: (r.summary, r.mycol)).collect())
print(stats['stddev'])
# 288.5307609250702
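As a side note, if you only need min and max (as values of the original type rather than the strings describe() returns), aggregating directly can be simpler. A minimal sketch, assuming the column is named mycol:

from pyspark.sql import functions as F

# agg returns a single row; collect it and read the values by name
row = df.agg(F.min("mycol").alias("min"), F.max("mycol").alias("max")).collect()[0]
print(row["min"], row["max"])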