How to merge multiple columns into a single column? - python

I have a table with columns [col1, col2, col3 ... col9].
How can I merge the data from all of these columns into a single column, col, in Python?

from pyspark.sql.functions import concat
values = [('A','B','C','D'),('E','F','G','H'),('I','J','K','L')]
df = sqlContext.createDataFrame(values,['col1','col2','col3','col4'])
df.show()
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
|   A|   B|   C|   D|
|   E|   F|   G|   H|
|   I|   J|   K|   L|
+----+----+----+----+
req_column = ['col1','col2','col3','col4']
df = df.withColumn('concatenated_cols',concat(*req_column))
df.show()
+----+----+----+----+-----------------+
|col1|col2|col3|col4|concatenated_cols|
+----+----+----+----+-----------------+
|   A|   B|   C|   D|             ABCD|
|   E|   F|   G|   H|             EFGH|
|   I|   J|   K|   L|             IJKL|
+----+----+----+----+-----------------+

Using Spark SQL (register the DataFrame as a temporary view first):
df.createOrReplaceTempView("df")
new_df = sqlContext.sql("SELECT CONCAT(col1, col2, col3, col4) FROM df")
Using the DataFrame API instead of Spark SQL, you can use the concat function:
from pyspark.sql.functions import concat, col
new_df = df.withColumn('joined_column', concat(col('col1'), col('col2'), col('col3'), col('col4')))

In Spark (PySpark), DataFrames are immutable, so existing data cannot be edited in place; what you can do is create a new column. Please check the following link:
How do I add a new column to a Spark DataFrame (using PySpark)?
Using a UDF, you can combine all of those values in a row and return them as a single value (see the sketch after this list).
A few cautions: look out for the following data issues while combining:
Null values
Type mismatches
String encoding issues
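A minimal UDF sketch (not from the original answer; it assumes the col1..col4 columns from above, treats nulls as empty strings, and casts everything to str):
from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

req_column = ['col1', 'col2', 'col3', 'col4']

@udf(returnType=StringType())
def combine_row(*values):
    # replace nulls and cast everything to str so the result is never None
    return ''.join('' if v is None else str(v) for v in values)

df = df.withColumn('combined', combine_row(*[col(c) for c in req_column]))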

Reversing Group By in PySpark

I am not sure about the correctness of the question itself. The solutions I've found for SQL either do not work in Hive SQL or rely on recursion, which is prohibited.
Thus, I'd like to solve the problem in PySpark and need a solution, or at least ideas on how to tackle the problem.
I have an original table which looks like this:
+--------+----------+
|customer|nr_tickets|
+--------+----------+
|       A|         3|
|       B|         1|
|       C|         2|
+--------+----------+
This is how I want the table:
+--------+
|customer|
+--------+
|       A|
|       A|
|       A|
|       B|
|       C|
|       C|
+--------+
Do you have any suggestions?
Thank you very much in advance!
For Spark 2.4+, use array_repeat with explode.
from pyspark.sql import functions as F
df.selectExpr("""explode(array_repeat(customer,cast(nr_tickets as int))) as customer""").show()
#+--------+
#|customer|
#+--------+
#|       A|
#|       A|
#|       A|
#|       B|
#|       C|
#|       C|
#+--------+
You can make a new DataFrame by iterating over the rows (groups).
First, make a list of Rows having customer (Row(customer=a["customer"])) repeated nr_tickets times for that customer, using range(int(a["nr_tickets"])):
df_list + [Row(customer=a["customer"]) for _ in range(int(a["nr_tickets"]))]
You can store and append these in a list and later make a DataFrame from it:
df = spark.createDataFrame(df_list)
Overall,
from pyspark.sql import Row

df_list = []
for a in df.select(["customer", "nr_tickets"]).collect():
    df_list = df_list + [Row(customer=a["customer"]) for _ in range(int(a["nr_tickets"]))]

df = spark.createDataFrame(df_list)
df.show()
You can also do it with a list comprehension:
from pyspark.sql import Row
from functools import reduce  # Python 3

df_list = [
    [Row(customer=a["customer"])] * int(a["nr_tickets"])
    for a in df.select(["customer", "nr_tickets"]).collect()
]
df = spark.createDataFrame(reduce(lambda x, y: x + y, df_list))
df.show()
Produces:
+--------+
|customer|
+--------+
|       A|
|       A|
|       A|
|       B|
|       C|
|       C|
+--------+
In the meanwhile I have also found a solution myself:
for i in range(1, max_nr_of_tickets):
    table = table.filter(F.col('nr_tickets') >= 1).union(test)
    table = table.withColumn('nr_tickets', F.col('nr_tickets') - 1)
Explanation: the DataFrames "table" and "test" are identical at the beginning,
and "max_nr_of_tickets" is simply the highest value of "nr_tickets". It works.
I am only struggling with the format of the max number:
max_nr_of_tickets = df.select(F.max('nr_tickets')).collect()
I cannot use the result in the for loop's range, because collect() returns a list. So for now I enter the highest number manually.
Any ideas how I could get max_nr_of_tickets into a format that range() will accept?
Thanks
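One way to turn that collect() result into a plain int (a sketch, not part of the original post; collect() returns a list of Row objects, so you index into the first Row and its first field):
from pyspark.sql import functions as F

# collect() gives something like [Row(max(nr_tickets)=3)]
max_nr_of_tickets = int(df.select(F.max('nr_tickets')).collect()[0][0])

for i in range(1, max_nr_of_tickets):
    ...  # the loop from above now accepts the int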

How to map each i-th element of a dataframe to a key from another dataframe defined by ranges in PySpark

What I want to do
Transform the input DataFrame df0 into the desired output df2, based on the clustering defined in df1.
What I have
df0 = spark.createDataFrame(
    [('A',0.05),('B',0.01),('C',0.75),('D',1.05),('E',0.00),('F',0.95),('G',0.34),('H',0.13)],
    ("items","quotient")
)
df1 = spark.createDataFrame(
    [('C0',0.00,0.00),('C1',0.01,0.05),('C2',0.06,0.10),('C3',0.11,0.30),('C4',0.31,0.50),('C5',0.51,99.99)],
    ("cluster","from","to")
)
What I want
df2 = spark.createDataFrame(
    [('A',0.05,'C1'),('B',0.01,'C1'),('C',0.75,'C5'),('D',1.05,'C5'),('E',0.00,'C0'),('F',0.95,'C3'),('G',0.34,'C2'),('H',0.13,'C4')],
    ("items","quotient","cluster")
)
Notes
The coding environment is PySpark within Palantir.
The structure and content of DataFrame df1 can be adjusted for the sake of simplicity in the code: df1 is what tells which cluster the items from df0 should be linked to.
Thank you very much in advance for your time and feedback!
This is a simple left join problem.
df0.join(df1, df0['quotient'].between(df1['from'], df1['to']), "left") \
.select(*df0.columns, df1['cluster']).show()
+-----+--------+-------+
|items|quotient|cluster|
+-----+--------+-------+
|    A|    0.05|     C1|
|    B|    0.01|     C1|
|    C|    0.75|     C5|
|    D|    1.05|     C5|
|    E|     0.0|     C0|
|    F|    0.95|     C5|
|    G|    0.34|     C4|
|    H|    0.13|     C3|
+-----+--------+-------+

Pyspark groupBy DataFrame without aggregation or count

Is it possible to iterate through a PySpark groupBy DataFrame without an aggregation or count?
For example, code in pandas:
for i, d in df2:
    mycode ....
^^ if using pandas ^^
Is there a difference in how to iterate over a groupBy in PySpark, or do you have to use aggregation and count?
At best you can use .first or .last to get the respective values from the groupBy, but not everything in the way you can get it in pandas.
Example:
from pyspark.sql import functions as f
df.groupBy(df['some_col']).agg(f.first(df['col1']), f.first(df['col2'])).show()
Since there is a basic difference between the way the data is handled in pandas and in Spark, not all functionality can be used in the same way.
There are a few workarounds to get what you want, for example:
for the diamonds DataFrame:
+---+-----+-------+-----+-------+-----+-----+-----+----+----+----+
|_c0|carat|    cut|color|clarity|depth|table|price|   x|   y|   z|
+---+-----+-------+-----+-------+-----+-----+-----+----+----+----+
|  1| 0.23|  Ideal|    E|    SI2| 61.5| 55.0|  326|3.95|3.98|2.43|
|  2| 0.21|Premium|    E|    SI1| 59.8| 61.0|  326|3.89|3.84|2.31|
|  3| 0.23|   Good|    E|    VS1| 56.9| 65.0|  327|4.05|4.07|2.31|
|  4| 0.29|Premium|    I|    VS2| 62.4| 58.0|  334| 4.2|4.23|2.63|
|  5| 0.31|   Good|    J|    SI2| 63.3| 58.0|  335|4.34|4.35|2.75|
+---+-----+-------+-----+-------+-----+-----+-----+----+----+----+
You can use:
l = [x.cut for x in diamonds.select("cut").distinct().rdd.collect()]

def groups(df, i):
    import pyspark.sql.functions as f
    return df.filter(f.col("cut") == i)

# for multi grouping
def groups_multi(df, i):
    import pyspark.sql.functions as f
    return df.filter((f.col("cut") == i) & (f.col("color") == 'E'))  # use | for or

for i in l:
    groups(diamonds, i).show(2)
output:
+---+-----+-------+-----+-------+-----+-----+-----+----+----+----+
|_c0|carat|    cut|color|clarity|depth|table|price|   x|   y|   z|
+---+-----+-------+-----+-------+-----+-----+-----+----+----+----+
|  2| 0.21|Premium|    E|    SI1| 59.8| 61.0|  326|3.89|3.84|2.31|
|  4| 0.29|Premium|    I|    VS2| 62.4| 58.0|  334| 4.2|4.23|2.63|
+---+-----+-------+-----+-------+-----+-----+-----+----+----+----+
only showing top 2 rows
+---+-----+-----+-----+-------+-----+-----+-----+----+----+----+
|_c0|carat|  cut|color|clarity|depth|table|price|   x|   y|   z|
+---+-----+-----+-----+-------+-----+-----+-----+----+----+----+
|  1| 0.23|Ideal|    E|    SI2| 61.5| 55.0|  326|3.95|3.98|2.43|
| 12| 0.23|Ideal|    J|    VS1| 62.8| 56.0|  340|3.93| 3.9|2.46|
+---+-----+-----+-----+-------+-----+-----+-----+----+----+----+
...
In the groups function you can decide what kind of grouping you want for the data. It is a simple filter condition, but it gets you all the groups separately.
When we do a groupBy we end up with a RelationalGroupedDataset, which is a fancy name for a DataFrame that has a grouping specified but needs the user to specify an aggregation before it can be queried further.
When you try to call DataFrame methods on the grouped DataFrame, it throws an error:
AttributeError: 'GroupedData' object has no attribute 'show'
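A quick illustration of that behaviour (a minimal sketch, assuming a DataFrame df with a column some_col):
from pyspark.sql import functions as f

grouped = df.groupBy("some_col")              # GroupedData, not a DataFrame
# grouped.show()                              # raises: AttributeError: 'GroupedData' object has no attribute 'show'
grouped.agg(f.count("*").alias("n")).show()   # an aggregation turns it back into a DataFrame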
Yes: don't use groupBy, use distinct with select instead.
df.select("col1", "col2", ...).distinct()
Then you can do any number of things to iterate through your DataFrame (see the sketch after this list), i.e.:
1- Convert the PySpark DataFrame to pandas.
DataFrame.toPandas()
2- If your DataFrame is small, you can convert it to a list.
DataFrame.collect()
3- Apply a method with foreach(your_method).
DataFrame.foreach(your_method)
4- Convert to an RDD and use map with a lambda.
DataFrame.rdd.map(lambda x: your_method(x))
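A minimal sketch of option 2 combined with a per-group filter (assuming a small DataFrame df with grouping columns col1 and col2; process_group is a hypothetical placeholder):
# Collect the distinct key combinations -- only safe when the result is small.
keys = df.select("col1", "col2").distinct().collect()

# "Iterate the groups" by filtering the full DataFrame on each collected key.
for row in keys:
    group_df = df.filter((df["col1"] == row["col1"]) & (df["col2"] == row["col2"]))
    process_group(group_df)   # e.g. group_df.show()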

Pyspark: filter function error with .isNotNull() and 2 other conditions

I'm trying to filter my DataFrame in PySpark and I want to write my results to a parquet file, but I get an error every time because something is wrong with my isNotNull() condition. I have 3 conditions in the filter function, and if any one of them is true the resulting row should be written to the parquet file.
I tried different versions with OR / | and different versions with isNotNull(), but nothing helped.
This is one example I tried:
from pyspark.sql.functions import col
df.filter(
    (df['col1'] == 'attribute1') |
    (df['col1'] == 'attribute2') |
    (df.where(col("col2").isNotNull()))
).write.save("new_parquet.parquet")
This is the other example I tried, but in that example it ignores the rows with attribute1 or attribute2:
df.filter(
    (df['col1'] == 'attribute1') |
    (df['col1'] == 'attribute2') |
    (df['col2'].isNotNull())
).write.save("new_parquet.parquet")
This is the error message:
AttributeError: 'DataFrame' object has no attribute '_get_object_id'
I hope you can help me, I'm new to the topic. Thank you so much!
First off, about the col1 filter, you could do it using isin like this:
df['col1'].isin(['attribute1', 'attribute2'])
And then:
df.filter((df['col1'].isin(['attribute1', 'attribute2'])) | (df['col2'].isNotNull()))
AFAIK, dataframe.column.isNotNull() should work, but I don't have sample data to test it, sorry.
See the example below:
from pyspark.sql import functions as F
df = spark.createDataFrame([(3,'a'),(5,None),(9,'a'),(1,'b'),(7,None),(3,None)], ["id", "value"])
df.show()
The original DataFrame
+---+-----+
| id|value|
+---+-----+
|  3|    a|
|  5| null|
|  9|    a|
|  1|    b|
|  7| null|
|  3| null|
+---+-----+
Now we do the filter:
df = df.filter((df['id'] == 3) | (df['id'] == 9) | (~F.isnull('value')))
df.show()
+---+-----+
| id|value|
+---+-----+
|  3|    a|
|  9|    a|
|  1|    b|
|  3| null|
+---+-----+
So you see:
row(3, 'a') and row(3, null) are selected because of df['id'] == 3
row(9, 'a') is selected because of df['id'] == 9
row(1, 'b') is selected because of ~F.isnull('value'), while row(5, null) and row(7, null) are not selected by any condition.
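Applied back to the original question, a sketch of the corrected filter-and-write (assuming the column names and output path from the question; the key point is that the third condition must be a Column expression, not a nested df.where(...)):
df.filter(
    (df['col1'] == 'attribute1') |
    (df['col1'] == 'attribute2') |
    (df['col2'].isNotNull())          # a Column condition inside the filter
).write.mode("overwrite").parquet("new_parquet.parquet")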

Removing duplicate columns after a DF join in Spark

When you join two DataFrames with similar column names:
df = df1.join(df2, df1['id'] == df2['id'])
the join works fine, but you can't call the id column because it is ambiguous, and you would get the following exception:
pyspark.sql.utils.AnalysisException: "Reference 'id' is ambiguous,
could be: id#5691, id#5918.;"
This makes id not usable anymore...
The following function solves the problem:
def join(df1, df2, cond, how='left'):
    df = df1.join(df2, cond, how=how)
    repeated_columns = [c for c in df1.columns if c in df2.columns]
    for col in repeated_columns:
        df = df.drop(df2[col])
    return df
What I don't like about it is that I have to iterate over the column names and drop them one by one. This looks really clunky...
Do you know of any other solution that will either join and remove duplicates more elegantly, or delete multiple columns without iterating over each of them?
If the join columns in both DataFrames have the same names and you only need an equi-join, you can specify the join columns as a list, in which case the result will only keep one of the join columns:
df1.show()
+---+----+
| id|val1|
+---+----+
|  1|   2|
|  2|   3|
|  4|   4|
|  5|   5|
+---+----+
df2.show()
+---+----+
| id|val2|
+---+----+
|  1|   2|
|  1|   3|
|  2|   4|
|  3|   5|
+---+----+
df1.join(df2, ['id']).show()
+---+----+----+
| id|val1|val2|
+---+----+----+
|  1|   2|   2|
|  1|   2|   3|
|  2|   3|   4|
+---+----+----+
Otherwise you need to give the joined DataFrames aliases and refer to the duplicated columns by the alias later:
df1.alias("a").join(
    df2.alias("b"), df1['id'] == df2['id']
).select("a.id", "a.val1", "b.val2").show()
+---+----+----+
| id|val1|val2|
+---+----+----+
|  1|   2|   2|
|  1|   2|   3|
|  2|   3|   4|
+---+----+----+
With df.join(other, on, how), when on is a column name string or a list of column name strings, the returned DataFrame will prevent duplicate columns.
When on is a join expression, it will result in duplicate columns. You can use .drop(df.a) to drop the duplicate column. Example:
cond = [df.a == other.a, df.b == other.bb, df.c == other.ccc]
# result will have duplicate column a
result = df.join(other, cond, 'inner').drop(df.a)
Assuming 'a' is a DataFrame with column 'id' and 'b' is another DataFrame with column 'id', I use the following two methods to remove duplicates:
Method 1: Use a string join expression instead of a boolean expression. This automatically removes the duplicate column for you:
a.join(b, 'id')
Method 2: Rename the column before the join and drop it afterwards:
b = b.withColumnRenamed('id', 'b_id')
joinexpr = a['id'] == b['b_id']
a.join(b, joinexpr).drop('b_id')
The code below works with Spark 1.6.0 and above.
salespeople_df.show()
+---+------+-----+
|Num|  Name|Store|
+---+------+-----+
|  1| Henry|  100|
|  2| Karen|  100|
|  3|  Paul|  101|
|  4| Jimmy|  102|
|  5|Janice|  103|
+---+------+-----+
storeaddress_df.show()
+-----+--------------------+
|Store|             Address|
+-----+--------------------+
|  100|    64 E Illinos Ave|
|  101|         74 Grand Pl|
|  102|          2298 Hwy 7|
|  103|No address available|
+-----+--------------------+
Assuming -in this example- that the name of the shared column is the same:
joined=salespeople_df.join(storeaddress_df, ['Store'])
joined.orderBy('Num', ascending=True).show()
+-----+---+------+--------------------+
|Store|Num|  Name|             Address|
+-----+---+------+--------------------+
|  100|  1| Henry|    64 E Illinos Ave|
|  100|  2| Karen|    64 E Illinos Ave|
|  101|  3|  Paul|         74 Grand Pl|
|  102|  4| Jimmy|          2298 Hwy 7|
|  103|  5|Janice|No address available|
+-----+---+------+--------------------+
.join will prevent the duplication of the shared column.
Let's assume that you want to remove the column Num in this example; you can just use .drop('colname'):
joined=joined.drop('Num')
joined.show()
+-----+------+--------------------+
|Store|  Name|             Address|
+-----+------+--------------------+
|  103|Janice|No address available|
|  100| Henry|    64 E Illinos Ave|
|  100| Karen|    64 E Illinos Ave|
|  101|  Paul|         74 Grand Pl|
|  102| Jimmy|          2298 Hwy 7|
+-----+------+--------------------+
After I've joined multiple tables together, I run them through a simple function to drop duplicate columns in the DataFrame, walking from left to right. Alternatively, you could rename these columns too.
Where Names is a table with columns ['Id', 'Name', 'DateId', 'Description'] and Dates is a table with columns ['Id', 'Date', 'Description'], the columns Id and Description will be duplicated after the join.
Names = sparkSession.sql("SELECT * FROM Names")
Dates = sparkSession.sql("SELECT * FROM Dates")
NamesAndDates = Names.join(Dates, Names.DateId == Dates.Id, "inner")
NamesAndDates = dropDupeDfCols(NamesAndDates)
NamesAndDates.write.saveAsTable("...", format="parquet", mode="overwrite", path="...")
Where dropDupeDfCols is defined as:
def dropDupeDfCols(df):
    newcols = []
    dupcols = []

    for i in range(len(df.columns)):
        if df.columns[i] not in newcols:
            newcols.append(df.columns[i])
        else:
            dupcols.append(i)

    df = df.toDF(*[str(i) for i in range(len(df.columns))])
    for dupcol in dupcols:
        df = df.drop(str(dupcol))

    return df.toDF(*newcols)
The resulting data frame will contain columns ['Id', 'Name', 'DateId', 'Description', 'Date'].
In my case I had a DataFrame with multiple duplicate columns after joins, and I was trying to save that DataFrame in CSV format, but due to the duplicate columns I was getting an error. I followed the steps below to drop the duplicate columns. The code is in Scala.
1) Rename all the duplicate columns and make a new DataFrame
2) Make a separate list of all the renamed columns
3) Make a new DataFrame with all columns (including the renamed ones from step 1)
4) Drop all the renamed columns
private def removeDuplicateColumns(dataFrame: DataFrame): DataFrame = {
  var allColumns: mutable.MutableList[String] = mutable.MutableList()
  val dup_Columns: mutable.MutableList[String] = mutable.MutableList()
  dataFrame.columns.foreach((i: String) => {
    if (allColumns.contains(i)) {
      allColumns += "dup_" + i
      dup_Columns += "dup_" + i
    } else {
      allColumns += i
    }
    println(i)
  })
  val columnSeq = allColumns.toSeq
  val df = dataFrame.toDF(columnSeq: _*)
  val unDF = df.drop(dup_Columns: _*)
  unDF
}
To call the above function, use the code below and pass your DataFrame, which contains the duplicate columns:
val uniColDF = removeDuplicateColumns(df)
Here is a simple solution to remove the duplicate column:
final_result = df1.join(df2, df1['subjectid'] == df2['subjectid'], "left").drop(df1['subjectid'])
If you join on a list or string, duplicate columns are automatically removed.
This is a Scala solution; you could translate the same idea into any language:
// get a list of duplicate columns or use a list/seq
// of columns you would like to join on (note that this list
// should include columns for which you do not want duplicates)
val duplicateCols = df1.columns.intersect(df2.columns)
// no duplicate columns in resulting DF
df1.join(df2, duplicateCols.distinct.toSeq)
Spark SQL version of this answer:
df1.createOrReplaceTempView("t1")
df2.createOrReplaceTempView("t2")
spark.sql("select * from t1 inner join t2 using (id)").show()
# +---+----+----+
# | id|val1|val2|
# +---+----+----+
# |  1|   2|   2|
# |  1|   2|   3|
# |  2|   3|   4|
# +---+----+----+
This works for me when multiple columns are used in the join and I need to drop more than one column that is not of string type:
final_data = mdf1.alias("a").join(
    df3.alias("b"),
    (mdf1.unique_product_id == df3.unique_product_id) &
    (mdf1.year_week == df3.year_week),
    "left"
).select("a.*", "b.promotion_id")
Use a.* to select all columns from one table, and from the other table choose specific columns.
