I have the following problem. Suppose I have a DataFrame consisting of three columns (mock example below). Essentially, it holds three factors, A, B and C, with a value for each factor on every business day within a time range.
import pandas as pd
import numpy as np
index_d = pd.bdate_range(start='10/5/2022', end='10/27/2022')
index = np.repeat(index_d,3)
values = np.random.randn(3*len(index_d))
columns_v = len(index_d)*["A","B","C"]
df = pd.DataFrame()
df["x"] = np.asarray(index)
df["y"] = values
df["factor"] = np.asarray([columns_v]).T
I would like to plot the business-weekly averages of the three factors over time. A business week runs from Monday to Friday. However, in the example above I start and end mid-week, so the first weekly average consists only of the data points on the 5th, 6th and 7th of October, and similarly for the last week. Ideally, the output should have the form
import datetime as dt

dt1 = dt.datetime.strptime("20221007", "%Y%m%d").date()
dt2 = dt.datetime.strptime("20221014", "%Y%m%d").date()
dt3 = dt.datetime.strptime("20221021", "%Y%m%d").date()
dt4 = dt.datetime.strptime("20221027", "%Y%m%d").date()
d = np.repeat([dt1, dt2, dt3, dt4], 3)   # each week label appears once per factor
values = np.random.randn(len(d))
factors = 4*["A","B","C"]
df_output = pd.DataFrame()
df_output["time"] = d
df_output["values"] = values
df_output["factors"] = factors
I can then plot the weekly averages using seaborn as a lineplot with hue. Note that the time value for each weekly average is always the last business day in that week (a Friday, except for the last week, where it is a Thursday).
I was thinking of groupby. However, given that my real data is much larger and possibly contains NaNs, I'm not sure how to do it, in particular with regard to the random start/end points that need not fall on a Monday/Friday.
Try as follows:
res = df.groupby([pd.Grouper(key='x', freq='W-FRI'), df.factor])['y'].mean()\
        .reset_index(drop=False)
res = res.rename(columns={'x': 'time', 'factor': 'factors', 'y': 'values'})
res['time'] = res.time.map(pd.merge_asof(df.x, res.time, left_on='x',
                                         right_on='time', direction='forward')
                             .groupby('time').last()['x']).astype(str)
print(res)
time factors values
0 2022-10-07 A 0.171228
1 2022-10-07 B -0.250432
2 2022-10-07 C -0.126960
3 2022-10-14 A 0.455972
4 2022-10-14 B 0.582900
5 2022-10-14 C 0.104652
6 2022-10-21 A -0.526221
7 2022-10-21 B 0.371007
8 2022-10-21 C 0.012099
9 2022-10-27 A -0.123510
10 2022-10-27 B -0.566441
11 2022-10-27 C -0.652455
Plot data:
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme()
fig, ax = plt.subplots(figsize=(8,5))
ax = sns.lineplot(data=res, x='time', y='values', hue='factors')
sns.move_legend(ax, "upper left", bbox_to_anchor=(1, 1))
plt.show()
Result: a line plot of the weekly averages over time, one line per factor.
Explanation
First, apply df.groupby. Grouping by factor is of course easy; for the dates we can use pd.Grouper with the freq parameter set to W-FRI (weekly bins, each running up to and including its Friday), and then we take the mean of column y (NaN values will simply be ignored).
In the next step, let's use df.rename to rename the columns.
We are basically done now, except for the fact that pd.Grouper will use each Friday (even if it isn't present in the actual set). E.g.:
print(res.time.unique())
['2022-10-07T00:00:00.000000000' '2022-10-14T00:00:00.000000000'
'2022-10-21T00:00:00.000000000' '2022-10-28T00:00:00.000000000']
If you are OK with this, you can just start plotting (but see below). If you would rather get '2022-10-27' instead of '2022-10-28', we can combine Series.map applied to column time with pd.merge_asof, and perform another groupby to get last in column x. I.e. this will get us the closest match to each Friday within each week (so, in fact, just Friday in all cases except the last: 2022-10-27).
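To see what that remapping does in isolation, here is a tiny standalone sketch (mock dates, not tied to the data above):
import pandas as pd

# Match each actual day forward to its W-FRI label, then take the
# last actual day per label: 2022-10-28 maps back to 2022-10-27.
days = pd.DataFrame({'x': pd.to_datetime(['2022-10-25', '2022-10-26', '2022-10-27'])})
labels = pd.DataFrame({'time': pd.to_datetime(['2022-10-28'])})
m = pd.merge_asof(days, labels, left_on='x', right_on='time', direction='forward')
print(m.groupby('time')['x'].last())   # 2022-10-28 -> 2022-10-27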
In either scenario, before plotting, make sure to turn the datetime values into strings: res['time'] = res['time'].astype(str)!
You can add a column with the calendar week:
df['week'] = df.x.dt.isocalendar().week
Get a mask for all the Fridays, and for the last day:
last_of_week = (df.x.dt.isocalendar().day == 5).values
last_of_week[-1] = True
Get the actual dates:
last_days = df.x[last_of_week].unique()
Group by week and factor, take the mean:
res = df.groupby(['week', 'factor'])['y'].mean().reset_index()  # select y so the datetime column is excluded
Clean up:
res = res.drop('week', axis=1)
res['x'] = pd.Series(last_days).repeat(3).reset_index(drop=True)
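As in the first answer, the result can then be plotted with seaborn (a minimal sketch; column names follow the code above):
import seaborn as sns
import matplotlib.pyplot as plt

res['x'] = res['x'].astype(str)  # keep the x-axis categorical
ax = sns.lineplot(data=res, x='x', y='y', hue='factor')
plt.show()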
I am facing difficulties trying to customize the x-axis tick labels. I am plotting a graph of the count of laptops sold daily. What I want is for the x-axis to show all 7 days with the corresponding count values, including days where 0 laptops were sold. I would also like the x-axis (day) to be sorted in ascending order. How do I do that? Below is my code:
import pandas as pd
import plotly.graph_objs as go
import plotly.offline as pyo

count = [5,1,12,3,4]
day = pd.Series(['3','2','4','5','7'], dtype="category")
df = pd.DataFrame({'day':day, 'count':count})
trace1 = go.Bar(x=df['day'], y=df['count'], name='Day', text=df['count'], textposition='auto', marker_color='rgb(55, 83, 109)')
data = [trace1]
layout = go.Layout(title='Laptops sold daily', xaxis=dict(title='Day'), yaxis=dict(title='Count of Laptops'), hovermode='closest')
fig = go.Figure(data=data, layout=layout)
pyo.iplot(fig)
Merge your dataframe first with a series of all the days you potentially need. Days without sales will have a count of NaN, which causes the sales count column to become float. We finally replace the NaNs with 0 and convert back to int.
df = df.merge(pd.Series(range(1,8), name='day').astype(str), how='right').fillna(0).astype(int)
The resulting dataframe will automatically be sorted, as the right join preserves the order of the (right) keys, which you provide in ascending order.
Since you're using pyo.iplot(fig), I would like to show you how you can do it more easily using plotly as a plotting backend for pandas. The data munging is identical to the suggestion from Stef.
Code
# imports and settings
import pandas as pd
pd.options.plotting.backend = "plotly"
# data
count = [5,1,12,3,4]
day = pd.Series(['3','2','4','5','7'], dtype="category")
df = pd.DataFrame({'day':day, 'count':count})
df = df.merge(pd.Series(range(1,8), name='day').astype(str), how='right').fillna(0).astype(int)
# plotly
fig = df.plot(kind='bar', x = 'day', y = 'count', text='count')
fig.update_traces(marker_color='rgb(55, 83, 109)')
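If you run this as a plain script rather than in a notebook, render the figure explicitly:
fig.show()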
Another approach is to reindex with a dictionary mapping day numbers to day names, so that the x-axis is extended to all seven days and shows the name of each day.
df.set_index('day', inplace=True)
weekdaynum = {'1':'Mon','2':'Tue','3':'Wed','4':'Thu','5':'Fri','6':'Sat','7':'Sun'}
df = df.reindex(weekdaynum.keys(), fill_value=0)
df.reset_index(inplace=True)
df.day = df.day.map(weekdaynum)
day count
0 Mon 0
1 Tue 1
2 Wed 5
3 Thu 12
4 Fri 3
5 Sat 0
6 Sun 4
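The reshaped frame can then be fed back into the plotting code from the question, e.g. (a sketch reusing the same trace and layout):
import plotly.graph_objs as go
import plotly.offline as pyo

trace1 = go.Bar(x=df['day'], y=df['count'], name='Day', text=df['count'],
                textposition='auto', marker_color='rgb(55, 83, 109)')
layout = go.Layout(title='Laptops sold daily', xaxis=dict(title='Day'),
                   yaxis=dict(title='Count of Laptops'), hovermode='closest')
pyo.iplot(go.Figure(data=[trace1], layout=layout))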
I have a .txt file with three columns: time, ticker, price. The time is spaced in 15-second intervals. It looks like this once uploaded to a Jupyter notebook and put into a pandas DataFrame:
time ticker price
0 09:30:35 EV 33.860
1 00:00:00 AMG 60.430
2 09:30:35 AMG 60.750
3 00:00:00 BLK 455.350
4 09:30:35 BLK 451.514
... ... ... ...
502596 13:00:55 TLT 166.450
502597 13:00:55 VXX 47.150
502598 13:00:55 TSLA 529.800
502599 13:00:55 BIDU 103.500
502600 13:00:55 ON 12.700
# NOTE: the 00:00:00 rows hold each ticker's value at market open;
# they only appear alongside the first (09:30:35) timestamps.
I need to create a function that takes an input (a ticker) and then creates a bar chart that displays the data with 5-minute ticks (the data is every 20 seconds, so one tick for every 15 points in time).
So far I've thought about separating the "mm" part of hh:mm:ss into another column to get just the minutes, and then writing a for loop that looks something like this:
for num in df['mm']:
    if num % 5 == 0:
        print('tick')
then somehow appending the "tick" to the "time" column for every 5 minutes of data (I'm not sure how I would do this), then using the time column as the index and only using the rows whose index carries a "tick" (some kind of if statement). I'm not sure if this makes sense; I'm drawing a blank on this.
You should have a look at the built-in functions in pandas. In the following example I'm using a date + time format but it shouldn't be hard to convert one to the other.
Generate data
%matplotlib inline
import pandas as pd
import numpy as np
dates = pd.date_range(start="2020-04-01", periods=150, freq="20S")
df1 = pd.DataFrame({"date":dates,
"price":np.random.rand(len(dates))})
df2 = df1.copy()
df1["ticker"] = "a"
df2["ticker"] = "b"
df = pd.concat([df1,df2], ignore_index=True)
df = df.sample(frac=1).reset_index(drop=True)
Resample Timeseries every 5 minutes
Here you can try to see the output of
df1.set_index("date")\
.resample("5T")\
.first()\
.reset_index()
where we keep just the first element of each 5-minute bin (at minutes 00, 05, 10 and so on). In general, to do the same for every ticker we need a groupby:
out = df.groupby("ticker")\
.apply(lambda x: x.set_index("date")\
.resample("5T")\
.first()\
.reset_index())\
.reset_index(drop=True)
Plot function
def plot_tick(data, ticker):
ts = data[data["ticker"]==ticker].reset_index(drop=True)
ts.plot(x="date", y="price", kind="bar", title=ticker);
plot_tick(out, "a")
Then you can improve the plot or, eventually, try to use plotly.
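For instance, a rough plotly version of plot_tick might look like this (a sketch; plot_tick_px is a hypothetical name, and px.bar comes from plotly.express):
import plotly.express as px

def plot_tick_px(data, ticker):
    # same filtering as plot_tick, rendered as a plotly bar chart
    ts = data[data["ticker"] == ticker]
    fig = px.bar(ts, x="date", y="price", title=ticker)
    fig.show()

plot_tick_px(out, "a")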
I have 13 CSV files that contain billing information in an unusual format. Multiple readings are recorded every 30 minutes of the day. Five days are recorded beside each other (columns). Then the next five days are recorded below them. To make things more complicated, the day of the week, date, and billing day are shown above the first recording of KVAR each day.
The image below shows a small example. However, imagine that KW, KVAR, and KVA repeat 3 more times before continuing some 50 rows later.
My goal is to create a simple Python script that turns the data into a data frame with the columns DATE, TIME, KW, KVAR, KVA, and DAY.
The problem is that my script returns NaN for the KW, KVAR, and KVA data after the first five days (which coincides with a new iteration of the for loop). What is weird to me is that when I try to print out the same ranges, I get the data that I expect.
My code is below. I have included comments to help further explain things. I also have an example of sample output of my function.
def make_df(df):
#starting values
output = pd.DataFrame(columns=["DATE", "TIME", "KW", "KVAR", "KVA", "DAY"])
time = df1.loc[3:50,0]
val_start = 3
val_end = 51
date_val = [0,2]
day_type = [1,2]
# There are 7 row movements that need to take place.
for row_move in range(1,8):
day = [1,2,3]
date_val[1] = 2
day_type[1] = 2
# There are 5 column movements that take place.
# The basic idea is that I would cycle through the five days, grab their data in a temporary dataframe,
# and then append that dataframe onto the output dataframe
for col_move in range(1,6):
temp_df = pd.DataFrame(columns=["DATE", "TIME", "KW", "KVAR", "KVA", "DAY"])
temp_df['TIME'] = time
#These are the 3 values that stop working after the first column change
# I get the values that I expect for the first 5 days
temp_df['KW'] = df.iloc[val_start:val_end, day[0]]
temp_df['KVAR'] = df.iloc[val_start:val_end, day[1]]
temp_df['KVA'] = df.iloc[val_start:val_end, day[2]]
# These 2 values work perfectly for the entire data set
temp_df['DAY'] = df.iloc[day_type[0], day_type[1]]
temp_df["DATE"] = df.iloc[date_val[0], date_val[1]]
# trouble shooting
print(df.iloc[val_start:val_end, day[0]])
print(temp_df)
output = output.append(temp_df)
# increase values for each iteration of row loop.
# seems to work perfectly when I print the data
day = [x + 3 for x in day]
date_val[1] = date_val[1] + 3
day_type[1] = day_type[1] + 3
# increase values for each iteration of column loop
# seems to work perfectly when I print the data
date_val[0] = date_val[0] + 55
day_type[0] = day_type[0] + 55
val_start = val_start + 55
val_end = val_end + 55
return output
test = make_df(df1)
Below is some sample output. It shows where the data starts to break down after the fifth day (the first instance of the column shift in the for loop). What am I doing wrong?
Could be index alignment: when you assign a Series to a DataFrame column, pandas matches rows by index, and any rows whose indices don't match end up as NaN.
import pandas as pd
import numpy as np
output = pd.DataFrame(np.random.rand(5,2), columns=['a','b']) # fake data
output['c'] = list('abcde') # add a column of non-numerical entries
tmp = pd.DataFrame(columns=['a','b','c'])
tmp['a'] = output.iloc[0:2, 2]   # tmp now has index 0..1
tmp['b'] = output.iloc[3:5, 2]   # index 3..4 doesn't align -> generates NaN
tmp['c'] = output.iloc[0:2, 2]
pd.concat([output, tmp])
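If that is the culprit, a minimal fix (a sketch) is to drop the index before assigning, either via the raw values or via reset_index:
tmp['b'] = output.iloc[3:5, 2].values                  # an ndarray has no index to align
# or equivalently:
tmp['b'] = output.iloc[3:5, 2].reset_index(drop=True)  # realign to 0..n-1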
(initial response)
What does df1 look like? Does df.iloc[val_start:val_end, day[0]] have any issue past the fifth day? The code doesn't show how you read from the csv files, or df1 itself.
My guess: if val_start:val_end gives invalid indices on the sixth day, or df1 happens to be malformed past the fifth day, df.iloc[val_start:val_end, day[0]] will return an empty Series object and possibly make its way into temp_df. iloc does not report invalid row indices, though similar column indices would trigger an IndexError.
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.rand(5,3), columns=['a','b','c'], index=np.arange(5)) # fake data
df.iloc[0:2, 1] # returns the subset
df.iloc[100:102, 1] # returns: Series([], Name: b, dtype: float64)
A little off topic, but I would recommend preprocessing the csv files rather than dealing with this indexing in a pandas DataFrame, as the original format is rather complex. Slice the data by date and later use pd.melt or pd.groupby to shape it into the format you like. Alternatively, try a multi-index if you stick with pandas I/O.
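For instance, a rough sketch of the melt step (block and the column names are hypothetical here, since the real layout depends on the CSV):
import pandas as pd

# Hypothetical: `block` is one five-day slice with a TIME column plus
# per-day measurement columns such as KW_1, KVAR_1, KVA_1, ..., KVA_5.
long_df = pd.melt(block, id_vars=['TIME'], var_name='reading', value_name='value')
# Split 'KW_1' into the measure ('KW') and the day number ('1').
long_df[['measure', 'day']] = long_df['reading'].str.split('_', expand=True)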
I am working with hourly monitoring data which consists of incomplete time series, i.e. several hours during a year (or during several years) will be absent from my dataframe.
I would like to determine the data capture, i.e. the percentage of values present in a month, a season, or a year.
This works with the following code (written for monthly resampling as a demonstration); however, that piece of code seems somewhat inefficient, because I need to create a second hourly dataframe and resample two dataframes.
Is there a more elegant solution to this?
import numpy as np
import pandas as pd
# create dummy series
t1 = pd.date_range(start="1997-01-01 05:00", end="1997-04-25 17:00", freq="H")
t2 = pd.date_range(start="1997-06-11 15:00", end="1997-06-15 12:00", freq="H")
t3 = pd.date_range(start="1997-06-18 00:00", end="1997-08-22 23:00", freq="H")
df1 = pd.DataFrame(np.random.randn(len(t1)), index=t1)
df2 = pd.DataFrame(np.random.randn(len(t2)), index=t2)
df3 = pd.DataFrame(np.random.randn(len(t3)), index=t3)
df = pd.concat((df1, df2, df3))
# create time index with complete hourly coverage over entire years
tstart = "%i-01-01 00:00"%(df.index.year[0])
tend = "%i-12-31 23:00"%(df.index.year[-1])
tref = pd.date_range(start=tstart, end=tend, freq="H")
dfref = pd.DataFrame(np.zeros(len(tref)), index=tref)
# count number of values in reference dataframe and actual dataframe
# Example: monthly resampling
cntref = dfref.resample("MS").count()
cnt = df.resample("MS").count().reindex(cntref.index).fillna(0)
for i in range(len(cnt.index)):
    print(cnt.index[i], cnt.values[i], cntref.values[i], cnt.values[i] / cntref.values[i])
pandas' Timedelta will do the trick:
# Time delta between rows of the df
df['index'] = df.index
pindex = df['index'].shift(1)
delta = df['index'] - pindex
# Any delta > 1H means a missing data period
missing_delta = delta[delta > pd.Timedelta('1H')]
# Sum of missing data periods divided by total period
ratio_missing = missing_delta.sum() / (df.index[-1] - df.index[0])
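One refinement worth noting (my observation, not part of the original answer): each gap measured this way also contains the one regular 1H step that does exist, so subtracting it per gap gives a slightly tighter estimate:
# subtract the expected 1H step from each gap before summing
ratio_missing = (missing_delta - pd.Timedelta('1H')).sum() / (df.index[-1] - df.index[0])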
You can use a Grouper (pd.Grouper with a freq; older pandas called this pd.TimeGrouper).
# Create an hourly index spanning the range of your data.
idx = pd.date_range(pd.Timestamp(df.index[0].strftime('%Y-%m-%d %H:00')),
                    pd.Timestamp(df.index[-1].strftime('%Y-%m-%d %H:00')),
                    freq='H')

# Use the Grouper to calculate the fraction of observations from `df`
# that fall within the hourly time index.
(df.groupby(pd.Grouper(freq='M')).size() /
 pd.Series(idx).reindex(idx).groupby(pd.Grouper(freq='M')).size())
1997-01-31 1.000000
1997-02-28 1.000000
1997-03-31 1.000000
1997-04-30 0.825000
1997-05-31 0.000000
1997-06-30 0.563889
1997-07-31 1.000000
1997-08-31 1.000000
Freq: M, dtype: float64
As there have been no further suggestions, it appears as if the originally posted solution is most efficient.
Not sure about performance, but as a (very long) one-liner you can do this once you have created df. It at least has the benefit of not requiring a dummy dataframe, and it should work for any period of data input and resampling.
# numerator: hours that actually contain data, per month
# denominator: all hours in each month of the observed range
# (resample('H') inserts missing hours as NaN; ffill().fillna(1)
#  makes every hour non-NaN so count() returns the full hour count)
month_counts = (df.resample('H').mean().resample('M').count()
                / df.resample('H').ffill().fillna(1).resample('M').count())
Say I create a fully random DataFrame using the following:
import datetime
from datetime import timedelta
from random import randrange

from pandas.util import testing

def random_date(start, end):
    delta = end - start
    int_delta = (delta.days * 24 * 60 * 60) + delta.seconds
    random_second = randrange(int_delta)
    return start + timedelta(seconds=random_second)

def rand_dataframe():
    df = testing.makeDataFrame()
    df['date'] = [random_date(datetime.date(2014,3,18), datetime.date(2014,4,1)) for x in range(df.shape[0])]
    df.sort_values(by=['date'], inplace=True)
    return df

df = rand_dataframe()
which results in the dataframe shown at the bottom of this post. I would like to plot my columns A, B, C and D using the timeseries visualization features in seaborn so that I get something along these lines:
How can I approach this problem? From what I read on this notebook, the call should be:
sns.tsplot(df, time="time", unit="unit", condition="condition", value="value")
but this seems to require that the dataframe is represented in a different way, with the columns somehow encoding time, unit, condition and value, which is not my case. How can I convert my dataframe (shown below) into this format?
Here is my dataframe:
date A B C D
2014-03-18 1.223777 0.356887 1.201624 1.968612
2014-03-18 0.160730 1.888415 0.306334 0.203939
2014-03-18 -0.203101 -0.161298 2.426540 0.056791
2014-03-18 -1.350102 0.990093 0.495406 0.036215
2014-03-18 -1.862960 2.673009 -0.545336 -0.925385
2014-03-19 0.238281 0.468102 -0.150869 0.955069
2014-03-20 1.575317 0.811892 0.198165 1.117805
2014-03-20 0.822698 -0.398840 -1.277511 0.811691
2014-03-20 2.143201 -0.827853 -0.989221 1.088297
2014-03-20 0.299331 1.144311 -0.387854 0.209612
2014-03-20 1.284111 -0.470287 -0.172949 -0.792020
2014-03-22 1.031994 1.059394 0.037627 0.101246
2014-03-22 0.889149 0.724618 0.459405 1.023127
2014-03-23 -1.136320 -0.396265 -1.833737 1.478656
2014-03-23 -0.740400 -0.644395 -1.221330 0.321805
2014-03-23 -0.443021 -0.172013 0.020392 -2.368532
2014-03-23 1.063545 0.039607 1.673722 1.707222
2014-03-24 0.865192 -0.036810 -1.162648 0.947431
2014-03-24 -1.671451 0.979238 -0.701093 -1.204192
2014-03-26 -1.903534 -1.550349 0.267547 -0.585541
2014-03-27 2.515671 -0.271228 -1.993744 -0.671797
2014-03-27 1.728133 -0.423410 -0.620908 1.430503
2014-03-28 -1.446037 -0.229452 -0.996486 0.120554
2014-03-28 -0.664443 -0.665207 0.512771 0.066071
2014-03-29 -1.093379 -0.936449 -0.930999 0.389743
2014-03-29 1.205712 -0.356070 -0.595944 0.702238
2014-03-29 -1.069506 0.358093 1.217409 -2.286798
2014-03-29 2.441311 1.391739 -0.838139 0.226026
2014-03-31 1.471447 -0.987615 0.201999 1.228070
2014-03-31 -0.050524 0.539846 0.133359 -0.833252
In the end, what I am looking for is an overlay of plots (one per column), where each of them looks as follows (note that different confidence intervals get different alpha values):
I don't think tsplot is going to work with the data you have. The assumptions it makes about the input data are that you've sampled the same units at each timepoint (although you can have missing timepoints for some units).
For example, say you measured blood pressure from the same people every day for a month, and then you wanted to plot the average blood pressure by condition (where maybe the "condition" variable is the diet they are on). tsplot could do this, with a call that would look something like sns.tsplot(df, time="day", unit="person", condition="diet", value="blood_pressure")
That scenario is different from having large groups of people on different diets and, each day, randomly sampling some from each group and measuring their blood pressure. From the example you gave, it seems like your data are structured like this.
However, it's not that hard to come up with a mix of matplotlib and pandas that will do what I think you want:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Read in the data from the stackoverflow question
df = pd.read_clipboard().iloc[1:]
# Convert it to "long-form" or "tidy" representation
df = pd.melt(df, id_vars=["date"], var_name="condition")
# Plot the average value by condition and date
ax = df.groupby(["condition", "date"]).mean().unstack("condition").plot()
# Get a reference to the x-points corresponding to the dates, and to the colors
x = np.arange(len(df.date.unique()))
palette = sns.color_palette()
# Calculate the 25th and 75th percentiles of the data
# and plot a translucent band between them
for cond, cond_df in df.groupby("condition"):
low = cond_df.groupby("date").value.apply(np.percentile, 25)
high = cond_df.groupby("date").value.apply(np.percentile, 75)
ax.fill_between(x, low, high, alpha=.2, color=palette.pop(0))
This code produces an overlay of line plots, one per condition, with translucent bands between the 25th and 75th percentiles.
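As an aside, tsplot has since been removed from seaborn; on recent versions (0.12+) a similar figure can be drawn directly from the long-form frame with lineplot and a percentile-interval error band (a sketch):
import seaborn as sns

# errorbar=("pi", 50) shades the 25th-75th percentile band per condition
ax = sns.lineplot(data=df, x="date", y="value", hue="condition", errorbar=("pi", 50))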