I am new to PySpark dataframes and used to work with RDDs before. I have a dataframe like this:
date        path
2017-01-01  /A/B/C/D
2017-01-01  /X
2017-01-01  /X/Y
And I want to convert it to the following:
date        path
2017-01-01  /A/B
2017-01-01  /X
2017-01-01  /X/Y
Basically, I want to get rid of everything after the third /, including it. Previously, with RDDs, I had the following:
from urllib import quote_plus

path_levels = df['path'].split('/')
filtered_path_levels = []
for _level in range(min(df_size, 3)):
    # Take only the top 2 levels of path
    filtered_path_levels.append(quote_plus(path_levels[_level]))
df['path'] = '/'.join(map(str, filtered_path_levels))
Things are more complicated with PySpark, I would say. Here is what I have so far:
path_levels = split(results_df['path'], '/')
filtered_path_levels = []
for _level in range(size(df_size, 3)):
    # Take only the top 2 levels of path
    filtered_path_levels.append(quote_plus(path_levels[_level]))
df['path'] = '/'.join(map(str, filtered_path_levels))
which is giving me the following error:
ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions.
Any help regarding this would be much appreciated. Let me know if this needs more information/explanation.
Use udf:
from urllib import quote_plus  # Python 2; see the Python 3 note below
from pyspark.sql.functions import udf, lit

@udf
def quote_string_(path, size):
    if path:
        return "/".join(quote_plus(x) for x in path.split("/")[:size])

# size=3 keeps the leading empty split element plus the top 2 levels
df.withColumn("foo", quote_string_("path", lit(3)))
I resolved my problem using the following code:
from pyspark.sql.functions import split, col, lit, concat
split_col = split(df['path'], '/')
df = df.withColumn('l1_path', split_col.getItem(1))
df = df.withColumn('l2_path', split_col.getItem(2))
df = df.withColumn('path', concat(col('l1_path'), lit('/'), col('l2_path')))
df = df.drop('l1_path', 'l2_path')
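A more compact alternative (a sketch, assuming Spark 2.4+ for slice and array_join) also keeps paths with fewer than two levels intact, whereas the concat above yields null whenever l2_path is missing:

from pyspark.sql.functions import split, slice, array_join

# Keep the leading empty element plus the top two path levels, then rejoin
df = df.withColumn('path', array_join(slice(split(df['path'], '/'), 1, 3), '/'))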
I have a CSV file, but when I run my notebook the regex throws an error.
from pyspark.sql.functions import regexp_replace

path = "dbfs:/FileStore/df/test.csv"
dff = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .option("multiline", "true")
       .option("encoding", "UTF-8")
       .option("delimiter", "‡‡,‡‡")
       .csv(path))
dff.show(truncate=False)

dffs_headers = dff.dtypes
for i in dffs_headers:
    columnLabel = i[0]
    print(columnLabel)
    newColumnLabel = columnLabel.replace('‡‡', '').replace('‡‡', '')
    dff = dff.withColumn(newColumnLabel, regexp_replace(columnLabel, '^\\‡‡|\\‡‡$', '')).drop(newColumnLabel)
    if columnLabel != newColumnLabel:
        dff = dff.drop(columnLabel)
dff.show(truncate=False)
As a result I am getting this.
Can anyone improve this code? It would be a great help.
Expected output is:
|‡‡123456‡‡,‡‡Version2‡‡,‡‡All questions have been answered accurately and the guidance in the questionnaire was understood and followed‡‡,‡‡2010-12-16 00:01:48.020000000‡‡|
But I am getting:
‡‡Id‡‡,‡‡Version‡‡,‡‡Questionnaire‡‡,‡‡Date‡‡
The second column shows a truncated value.
You need to import the libraries you want to use before you can use them. Putting the line below in a cell before the regexp_replace call should fix the issue:
from pyspark.sql.functions import regexp_replace
This is the working answer:
from pyspark.sql.functions import regexp_replace

path = "dbfs:/FileStore/df/test.csv"
dff = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .option("multiline", "true")
       .option("encoding", "UTF-8")
       .option("delimiter", "‡‡,‡‡")
       .csv(path))

dffs_headers = dff.dtypes
for i in dffs_headers:
    columnLabel = i[0]
    # Strip the ‡‡ markers from both the header name and the values
    newColumnLabel = columnLabel.replace('‡‡', '').replace('‡‡', '')
    dff = dff.withColumn(newColumnLabel, regexp_replace(columnLabel, '^\\‡‡|\\‡‡$', ''))
    if columnLabel != newColumnLabel:
        dff = dff.drop(columnLabel)
dff.show(truncate=False)
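A more concise variant (a sketch, assuming every column is read as a string) builds all the cleaned columns in a single select instead of repeated withColumn/drop calls:

from pyspark.sql.functions import col, regexp_replace

cleaned = dff.select([
    regexp_replace(col(c), '^‡‡|‡‡$', '').alias(c.replace('‡‡', ''))
    for c in dff.columns
])
cleaned.show(truncate=False)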
I'm trying to add a new column Timezone to my pyspark dataframe:
from timezonefinder import TimezoneFinder
tf = TimezoneFinder()
df = df.withColumn("longitude",col("longitude").cast("float"))
df = df.withColumn("Latitude",col("Latitude").cast("float"))
df = df.withColumn("timezone",tf.timezone_at(lng=col("longitude"), lat=col("Latitude")))
I'm getting below error.
ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions.
The timezonefinder library is used to find a timezone by passing geocoordinates:
Latitude, longitude = 20.5061, 50.358
tf.timezone_at(lng=longitude, lat=Latitude)
# 'Asia/Riyadh'
You need to use a UDF to pass columns to Python functions:
import pyspark.sql.functions as F

@F.udf('string')
def tfUDF(lng, lat):
    from timezonefinder import TimezoneFinder
    tf = TimezoneFinder()
    return tf.timezone_at(lng=lng, lat=lat)

df = df.withColumn("longitude", F.col("longitude").cast("float"))
df = df.withColumn("Latitude", F.col("Latitude").cast("float"))
df = df.withColumn("timezone", tfUDF(F.col("longitude"), F.col("Latitude")))
df.show()
+--------+---------+-----------+
|Latitude|longitude| timezone|
+--------+---------+-----------+
| 20.5061| 50.358|Asia/Riyadh|
+--------+---------+-----------+
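For larger data, a pandas UDF is worth considering (a sketch, assuming Spark 3.0+ with PyArrow installed): it processes whole batches at once, so TimezoneFinder is constructed once per batch instead of being driven through Python row by row as above:

import pandas as pd
import pyspark.sql.functions as F

@F.pandas_udf('string')
def tzPandasUDF(lng: pd.Series, lat: pd.Series) -> pd.Series:
    from timezonefinder import TimezoneFinder
    tf = TimezoneFinder()  # built once per batch, not once per row
    return pd.Series([tf.timezone_at(lng=x, lat=y) for x, y in zip(lng, lat)])

df = df.withColumn("timezone", tzPandasUDF(F.col("longitude"), F.col("Latitude")))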
I am using Python 3 and pandas to create a script to:
Read unstructured xlsx data of varying column lengths
Total the "This", "Last" and "Diff" columns
Add a Total row under the Brand column
Dynamically bold the entire row that contains "Total"
On the last point, the challenge I have been struggling with is that the row index changes depending on the data being fed into the script. The code provided does not have a solution to this issue. I have tried every variation I can think of using style.applymap(bold), with and without variables.
Example of input: (screenshot)
Example of desired outcome: (screenshot)
Script:
import pandas as pd
import io
import sys
import warnings

def bold(val):
    return 'font-weight: bold'

excel_file = 'testfile1.xlsx'
df = pd.read_excel(excel_file)

product = df.loc[df['Brand'] == "widgit"]
product = product.append({'Brand': 'Total',
                          'This': product['This'].sum(),
                          'Last': product['Last'].sum(),
                          'Diff': product['Diff'].sum(),
                          '% Chg': product['This'].sum() / product['Last'].sum()},
                         ignore_index=True)
product = product.append({'Brand': ' '}, ignore_index=True)
product.fillna(' ', inplace=True)
Try something like this:

import numpy as np
import pandas as pd

def highlight_max(x):
    # Bold the cells in each column that match the value at row index 4
    return ['font-weight: bold' if v == x.loc[4] else '' for v in x]

df = pd.DataFrame(np.random.randn(5, 2))
df.style.apply(highlight_max)
output: (styled table screenshot)
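Since the question asks for the "Total" row to be found dynamically, here is a sketch that matches on the Brand value instead of a hardcoded index (assuming the product frame from the question):

def bold_total_row(row):
    # Bold every cell in a row whose Brand is 'Total', wherever it lands
    style = 'font-weight: bold' if row['Brand'] == 'Total' else ''
    return [style] * len(row)

product.style.apply(bold_total_row, axis=1)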
I need to polish a CSV dataset, but it seems the changes are not applied to the dataset itself.
CSV is in this format:
ID, TRACK_LINK
761607, https://mylink.com//track/...
This is my script:
import pandas as pd

df = pd.read_csv('./file.csv').fillna('')

# remove double // from TRACK_LINK
def polish_track_link(track_link):
    return track_link.replace("//track", "/track")

df['LINK'].apply(polish_track_link)
print(df)
this prints something like:
...
761607 https://mylink.com//track/...
note the //track
If I do print(df['LINK'].apply(polish_track_link)) I get:
...
761607, https://mylink.com/track/...
So the function polish_track_link works but it's not applied to the dataset. Any idea why?
You need to assign back:
df['TRACK_LINK'] = df['TRACK_LINK'].apply(polish_track_link)
But it is better to use the pandas functions str.replace or replace with regex=True to replace substrings:
df['TRACK_LINK'] = df['TRACK_LINK'].str.replace("//track", "/track")
Or:
df['TRACK_LINK'] = df['TRACK_LINK'].replace("//track", "/track", regex=True)
print(df)
ID TRACK_LINK
0 761607 https://mylink.com/track/
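Note that the default of str.replace changed in pandas 2.0 from regex=True to regex=False; the literal replacement here behaves the same either way, but it is safest to be explicit:

df['TRACK_LINK'] = df['TRACK_LINK'].str.replace("//track", "/track", regex=False)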
I have an instance of pyspark.sql.dataframe.DataFrame created using
dataframe = sqlContext.sql("select * from table").
One column is 'arrival_date' and contains a string.
How do I modify this column so as to only take the first 4 characters from it and throw away the rest?
How would I convert the type of this column from string to date?
In graphlab.SFrame, this would be:
dataframe['column_name'] = dataframe['column_name'].apply(lambda x: x[:4] )
and
dataframe['column_name'] = dataframe['column_name'].str_to_datetime()
As stated by Orions, you can't modify a column, but you can overwrite it. Also, you shouldn't need to create a user-defined function, as there is a built-in method for extracting substrings:
# substr is a Column method, so no extra import is needed; it is 1-based
df = df.withColumn("arrival_date", df['arrival_date'].substr(1, 4))
To convert it to date, you can use to_date, as Orions said:
from pyspark.sql.functions import to_date

df = df.withColumn("arrival_date", to_date(df['arrival_date'].substr(1, 4)))
However, if you need to specify the format, you should use unix_timestamp:
from pyspark.sql.functions import unix_timestamp

format = 'yyMM'
col = unix_timestamp(df['arrival_date'].substr(1, 4), format).cast('timestamp')
df = df.withColumn("arrival_date", col)
All this can be found in the pyspark documentation.
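Since Spark 2.2, to_date also accepts a format string directly, which avoids the unix_timestamp round trip (a sketch, assuming the same 'yyMM' format as above):

from pyspark.sql.functions import to_date

df = df.withColumn("arrival_date", to_date(df['arrival_date'].substr(1, 4), 'yyMM'))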
To extract the first 4 characters from the arrival_date (StringType) column, create a new_df using a UserDefinedFunction (you cannot modify the columns: they are immutable):
from pyspark.sql.functions import UserDefinedFunction, to_date
from pyspark.sql.types import StringType

old_df = spark.sql("SELECT * FROM table")
udf = UserDefinedFunction(lambda x: str(x)[:4], StringType())
new_df = old_df.select(*[udf(column).alias('arrival_date') if column == 'arrival_date' else column for column in old_df.columns])
And to convert the arrival_date (StringType) column into a DateType column, use the to_date function as shown below:
new_df = old_df.select(old_df.other_cols_if_any, to_date(old_df.arrival_date).alias('arrival_date'))
Sources:
https://stackoverflow.com/a/29257220/2873538
https://databricks.com/blog/2015/09/16/apache-spark-1-5-dataframe-api-highlights.html