Subtract values from two columns where the time is the same (pandas, python) - python

I have a pandas DataFrame with 3 columns of weather data: temperature, time and the name of the weather station.
It looks like this:
Time                       Station_name  Temperature
2022-05-12 22:09:35+00:00  station_a     18.3
2022-05-12 22:09:42+00:00  station_b     18.0
I would like to calculate, in a new column, the difference between station_a's and station_b's temperature at each matching minute. The timestamps are not exactly equal, but they agree at minute precision, and there is only one measurement every 10 minutes.
Is there a way to do this?

You can use a merge_asof on the two sub-dataframes:
df['Time'] = pd.to_datetime(df['Time'])
out = (pd
   .merge_asof(df[df['Station_name'].eq('station_a')],
               df[df['Station_name'].eq('station_b')],
               on='Time', direction='nearest',
               tolerance=pd.Timedelta('1min'),
               suffixes=('_a', '_b'))
   .set_index('Time')
   .eval('diff = Temperature_b - Temperature_a')
   ['diff']
)
output:
Time
2022-05-12 22:09:35+00:00 -0.3
Name: diff, dtype: float64
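Note that merge_asof requires both inputs to be sorted by the merge key, so if your rows are not already in time order, sort first:
df = df.sort_values('Time')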
You can also try to round the times, but it is more risky if one time gets rounded up and the other down:
df['Time'] = pd.to_datetime(df['Time'])
(df
 .assign(Time=df['Time'].dt.round('10min'))
 .pivot(index='Time', columns='Station_name', values='Temperature')
 .eval('diff = station_b - station_a')
)
output:
Station_name station_a station_b diff
Time
2022-05-12 22:10:00+00:00 18.3 18.0 -0.3

If you have this pandas DataFrame:
from datetime import datetime
import pandas as pd
data = [{"Time":datetime(2022,5,12,22,9,35), "Station_name":"station_a", "Temperature": 18.3},
{"Time":datetime(2022,5,12,22,9,42), "Station_name":"station_b", "Temperature": 18.0 },
{"Time":datetime(2022,5,12,22,10,35), "Station_name":"station_a", "Temperature": 17.3},
{"Time":datetime(2022,5,12,22,10,42), "Station_name":"station_b", "Temperature": 18.0 }]
df = pd.DataFrame(data)
you can combine two techniques:
truncate to minutes: see "Truncate `TimeStamp` column to hour precision in pandas `DataFrame`"
pivot tables / reshape: https://pandas.pydata.org/docs/user_guide/reshaping.html
# truncate to minutes
df["Time_trunc"] = df["Time"].values.astype('<M8[m]')

# set the index (in order to pivot) and pivot (unstack)
df = df.set_index(["Time_trunc", "Station_name"])
df_pivoted = df.unstack()

# flatten the MultiIndex columns (the Time columns come before the Temperature columns)
df_new = pd.DataFrame(df_pivoted.to_records())
df_new.columns = ["Time_trunc", "Time_station_a", "Time_station_b", "Temp_station_a", "Temp_station_b"]

# add the absolute difference of the temperatures
df_new["DiffAbs"] = abs(df_new["Temp_station_a"] - df_new["Temp_station_b"])
Resulting DataFrame (shown as an image in the original answer).
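As a side note, a more direct way to flatten the MultiIndex columns, as a sketch equivalent to the to_records round-trip above:
df_pivoted.columns = [f"{field}_{station}" for field, station in df_pivoted.columns]
df_new = df_pivoted.reset_index()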

You can use pandas.Series.diff.
For example:
df['Temperature_diff'] = df['Temperature'].diff()
Note that diff subtracts consecutive rows, so this only answers the question if the station_a and station_b rows strictly alternate in time order.
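A minimal sketch under that assumption, using the column names from the question:
df = df.sort_values('Time')
df['Temperature_diff'] = df['Temperature'].diff()
# keep the station_b rows, which now hold the b - a difference for each pair
result = df.loc[df['Station_name'].eq('station_b'), ['Time', 'Temperature_diff']]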

Related

Why do I get all dates similar while trying to fill them in a dataset?

I have a dataset with 800 rows and I want to create a new column with dates, where the date in each row increases by one day.
import datetime

date = datetime.datetime.strptime('5/11/2011', '%d/%m/%Y')
for x in range(800):
    df['Date'] = date + datetime.timedelta(days=x)
In each row the date is equal to '2014-01-12'; as I understand it, it fills as if x were always equal to 799.
Each time through the loop you are updating the ENTIRE Date column. You see the results of the 800th update at the end.
You could use a date range:
dr = pd.date_range('5/11/2011', periods=800, freq='D')
df = pd.DataFrame({'Date': dr})
print(df)
Date
0 2011-05-11
1 2011-05-12
2 2011-05-13
3 2011-05-14
4 2011-05-15
.. ...
795 2013-07-14
796 2013-07-15
797 2013-07-16
798 2013-07-17
799 2013-07-18
Or:
df['Date'] = dr
pandas is a nice tool that can repeat a calculation over many rows without a for-loop.
When you use df['Date'] = ... you assign the same value to all cells in the column.
You have to use df.loc[x, 'Date'] = ... to assign to a single cell.
Minimal working example (with only 10 rows).
import pandas as pd
import datetime

df = pd.DataFrame({'Date': [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]})
date = datetime.datetime.strptime('5/11/2011', '%d/%m/%Y')
for x in range(10):
    df.loc[x, 'Date'] = date + datetime.timedelta(days=x)
print(df)
But you could also use pd.date_range() for this.
Minimal working example (with only 10 rows).
import pandas as pd
import datetime
df = pd.DataFrame({'Date':[1,2,3,4,5,6,7,8,9,0]})
date = datetime.datetime.strptime('5/11/2011', '%d/%m/%Y')
df['Date'] = pd.date_range(date, periods=10)
print(df)

Resample a pandas dataframe with multiple variables

I have a dataframe in long format with data at a 15 min interval for several variables. If I apply the resample method to get the average daily value, I get a single average across all variables for each time interval, not a separate average for speed and for distance.
Does anyone know how to resample the dataframe and keep the 2 variables?
Note: The code below contains an EXAMPLE dataframe in long format, my real example loads data from csv and has different time intervals and frequencies for the variables, so I cannot simply resample the dataframe in wide format.
import pandas as pd
import numpy as np

dti = pd.date_range('2015-01-01', '2015-12-31', freq='15min')
df = pd.DataFrame(index=dti)

# average speed in miles per hour
df['speed'] = np.random.randint(low=0, high=60, size=len(df.index))
# distance in miles (speed * 0.25 hours)
df['distance'] = df['speed'] * 0.25

df.reset_index(inplace=True)
df2 = df.melt(id_vars='index')
df3 = df2.resample('d', on='index').mean()
IIUC (using the original wide df, before the reset_index/melt):
>>> df.groupby(df.index.date).mean()
speed distance
2015-01-01 29.562500 7.390625
2015-01-02 31.885417 7.971354
2015-01-03 30.895833 7.723958
2015-01-04 30.489583 7.622396
2015-01-05 28.500000 7.125000
... ... ...
2015-12-27 28.552083 7.138021
2015-12-28 29.437500 7.359375
2015-12-29 29.479167 7.369792
2015-12-30 28.864583 7.216146
2015-12-31 48.000000 12.000000
[365 rows x 2 columns]
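If you need to stay in long format, as with the real data, you can group by the variable before aggregating by day. A sketch, assuming the index/variable/value column names produced by the melt above:
df3 = (df2.groupby(['variable', pd.Grouper(key='index', freq='D')])['value']
          .mean()
          .unstack(0))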

finding first and last available days of a month in pandas

I have a pandas dataframe from 2007 to 2017. The data is like this:
date closing_price
2007-12-03 728.73
2007-12-04 728.83
2007-12-05 728.83
2007-12-07 728.93
2007-12-10 728.22
2007-12-11 728.50
2007-12-12 728.51
2007-12-13 728.65
2007-12-14 728.65
2007-12-17 728.70
2007-12-18 728.73
2007-12-19 728.73
2007-12-20 728.73
2007-12-21 728.52
2007-12-24 728.52
2007-12-26 728.90
2007-12-27 728.90
2007-12-28 728.91
2008-01-05 728.88
2008-01-08 728.86
2008-01-09 728.84
2008-01-10 728.85
2008-01-11 728.85
2008-01-15 728.86
2008-01-16 728.89
As you can see, some days are missing for each month. I want to take the first and last 'available' days of each month, and calculate the difference of their closing_price, and put the results in a new dataframe. For example for the first month, the days will be 2007-12-03 and 2007-12-28, and the closing prices would be 728.73 and 728.91, so the result would be 0.18. How can I do this?
You can group df by month and apply a function to do it. Notice the to_period, which converts the datetimes to periods of the desired frequency.
def calculate(x):
    start_closing_price = x.loc[x.index.min(), "closing_price"]
    end_closing_price = x.loc[x.index.max(), "closing_price"]
    return end_closing_price - start_closing_price

result = df.groupby(df["date"].dt.to_period("M")).apply(calculate)
# result
date
2007-12 0.18
2008-01 0.01
Freq: M, dtype: float64
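The same result without apply; a sketch that assumes the rows are already sorted by date within each month:
result = (df.groupby(df["date"].dt.to_period("M"))["closing_price"]
            .agg(lambda s: s.iloc[-1] - s.iloc[0]))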
First make sure they are datetime and sorted:
import pandas as pd
df['date'] = pd.to_datetime(df.date)
df = df.sort_values('date')
Groupby
gp = df.groupby([df.date.dt.year.rename('year'), df.date.dt.month.rename('month')])
gp.closing_price.last() - gp.closing_price.first()
#year month
#2007 12 0.18
#2008 1 0.01
#Name: closing_price, dtype: float64
or
gp = df.groupby(pd.Grouper(key='date', freq='1M'))
gp.last() - gp.first()
# closing_price
#date
#2007-12-31 0.18
#2008-01-31 0.01
Resample
gp = df.set_index('date').resample('1M')
gp.last() - gp.first()
# closing_price
#date
#2007-12-31 0.18
#2008-01-31 0.01
Problem: get the first or last date of an indexed dataframe.
Solution: resample the index and then extract the data.
lom = pd.Series(x.index, index=x.index).resample('M').last()
xlast = x[x.index.isin(lom)]  # x.resample('M').last() to get monthly freq
fom = pd.Series(x.index, index=x.index).resample('M').first()
xfirst = x[x.index.isin(fom)]

Subtracting values across grouped data frames in Pandas

I have a set of IDs and Timestamps, and want to calculate the "total time elapsed per ID" by taking the difference between the latest and earliest timestamps, grouped by ID.
Data
id timestamp
1 2018-02-01 03:00:00
1 2018-02-01 03:01:00
2 2018-02-02 10:03:00
2 2018-02-02 10:04:00
2 2018-02-02 11:05:00
Expected Result
(I want the delta converted to minutes)
id delta
1 1
2 62
I have a for loop, but it's very slow (10+ min for 1M+ rows). I was wondering if this was achievable via pandas functions?
# gb is a DataFrameGroupBy object, grouped by id
gb = df.groupby(['id'])

# create the resulting df
cycletime = pd.DataFrame(columns=['id', 'timeDeltaMin'])

def calculate_delta():
    for id, groupdf in gb:
        # timestamp rows for the current id
        time = groupdf.timestamp
        time_delta = time.max() - time.min()
        # convert the Timedelta object to minutes
        time_delta = time_delta / pd.Timedelta(minutes=1)
        # insert the result into the cycletime df
        cycletime.loc[-1] = [id, time_delta]
        cycletime.index += 1
Thinking of trying next:
- Multiprocessing
First ensure datetimes are OK:
df.timestamp = pd.to_datetime(df.timestamp)
Now find the number of minutes in the difference between the maximum and minimum for each id:
import numpy as np
>>> (df.timestamp.groupby(df.id).max() - df.timestamp.groupby(df.id).min()) / np.timedelta64(1, 'm')
id
1 1.0
2 62.0
Name: timestamp, dtype: float64
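Equivalently, in a single groupby; a sketch of the same max-minus-min logic:
(df.groupby('id')['timestamp']
   .agg(lambda s: (s.max() - s.min()) / pd.Timedelta(minutes=1)))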
You can sort by id and timestamp, then group by id, and then find the difference between the min and max timestamp per group.
df['timestamp'] = pd.to_datetime(df['timestamp'])
result = df.sort_values(['id']).groupby('id')['timestamp'].agg(['min', 'max'])
result['diff'] = (result['max']-result['min']) / np.timedelta64(1, 'm')
result.reset_index()[['id', 'diff']]
Output:
id diff
0 1 1.0
1 2 62.0
Another one:
import pandas as pd
import numpy as np
import datetime

ids = [1, 1, 2, 2, 2]
times = ['2018-02-01 03:00:00', '2018-02-01 03:01:00', '2018-02-02 10:03:00',
         '2018-02-02 10:04:00', '2018-02-02 11:05:00']
df = pd.DataFrame({'id': ids, 'timestamp': pd.to_datetime(pd.Series(times))})
df.set_index('id', inplace=True)
print(df.groupby(level=0).diff().groupby(level=0).sum()['timestamp'].dt.seconds / 60)

Multi-indexing - accessing the last time in every day

New to multiindexing in Pandas. I have data that looks like this
Date        Time      value
2014-01-14  12:00:04  .424
            12:01:12  .342
            12:01:19  .341
            ...
            12:05:49  .23
2014-05-12  ...
            1:02:42   .23
            ...
For now, I want to access the last time for every single date and store the value in some array. I've made a multiindex like this
df = pd.read_csv("df.csv", index_col=0)
df.index = pd.to_datetime(df.index, infer_datetime_format=True)
df.index = pd.MultiIndex.from_arrays([df.index.date, df.index.time], names=['Date', 'Time'])
df = df[~df.index.duplicated(keep='first')]
dates = df.index.get_level_values(0)
So I have dates saved as an array. I want to iterate through the dates but can't either get the syntax right or am accessing the values incorrectly. I've tried a for loop but can't get it to run (for date in dates) and can't do direct access either (df.loc[dates[i]] or something like that). Also the number of time variables in each date varies. Is there any way to fix this?
This sounds like a groupby/max operation. More specifically, you want to group by the Date and aggregate the Times by taking the max. Since aggregation can only be done over column values, we'll need to change the Time index level into a column (by using reset_index):
import pandas as pd
df = pd.DataFrame({'Date': ['2014-01-14', '2014-01-14', '2014-01-14', '2014-01-14', '2014-05-12', '2014-05-12'],
                   'Time': ['12:00:04', '12:01:12', '12:01:19', '12:05:49', '01:01:59', '01:02:42'],
                   'value': [0.424, 0.342, 0.341, 0.23, 0.0, 0.23]})
df['Date'] = pd.to_datetime(df['Date'])
df = df.set_index(['Date', 'Time'])
df = df.reset_index('Time', drop=False)
max_times = df.groupby(level=0)['Time'].max()
print(max_times)
yields
Date
2014-01-14 12:05:49
2014-05-12 01:02:42
Name: Time, dtype: object
If you wish to select the entire row, then you could use idxmax -- but there is a caveat. idxmax returns index labels. Therefore, the index must be unique for the labels to signify unique rows. Since the Date level is not by itself unique, to use idxmax we'll need to reset_index completely (to make an index of unique integers):
df = pd.DataFrame({'Date': ['2014-01-14', '2014-01-14', '2014-01-14', '2014-01-14', '2014-05-12', '2014-05-12'],
                   'Time': ['12:00:04', '12:01:12', '12:01:19', '12:05:49', '01:01:59', '01:02:42'],
                   'value': [0.424, 0.342, 0.341, 0.23, 0.0, 0.23]})
df['Date'] = pd.to_datetime(df['Date'])
df['Time'] = pd.to_timedelta(df['Time'])
df = df.set_index(['Date', 'Time'])
df = df.reset_index()
idx = df.groupby(['Date'])['Time'].idxmax()
print(df.loc[idx])
yields
Date Time value
3 2014-01-14 12:05:49 0.23
5 2014-05-12 01:02:42 0.23
I don't see a good way to do this while keeping the MultiIndex.
It is easier to perform the groupby operation before setting the MultiIndex.
Moreover, it is probably preferable to preserve the datetimes as one value instead of splitting them into two parts. Note that given a datetime/period-like Series, the .dt accessor gives you easy access to the date and the time as needed. Thus you can group by the date without making a separate Date column:
df = pd.DataFrame({'DateTime': ['2014-01-14 12:00:04', '2014-01-14 12:01:12', '2014-01-14 12:01:19',
                                '2014-01-14 12:05:49', '2014-05-12 01:01:59', '2014-05-12 01:02:42'],
                   'value': [0.424, 0.342, 0.341, 0.23, 0.0, 0.23]})
df['DateTime'] = pd.to_datetime(df['DateTime'])
# df = pd.read_csv('df.csv', parse_dates=[0])
idx = df.groupby(df['DateTime'].dt.date)['DateTime'].idxmax()
result = df.loc[idx]
print(result)
yields
DateTime value
3 2014-01-14 12:05:49 0.23
5 2014-05-12 01:02:42 0.23
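Alternatively, once the datetimes are a single column, resample can return the last value per calendar day directly; a sketch (the dropna removes the empty days between the two dates):
result = df.set_index('DateTime')['value'].resample('D').last().dropna()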
