How to eliminate leap years in a pandas DataFrame - Python

I have daily temperature data from 1901-1940. I want to exclude leap years i.e. remove any temperature data that falls on 2/29. My data is currently one long array. I am reshaping it so that every year is a row and every column is a day. I'm trying to remove the leap years with the last line of code here:
import pandas as pd
import requests
params = {"sid": "PHLthr", "sdate":"1900-12-31", "edate":"2020-12-31", "elems": [{"name": "maxt", "interval": "dly", "duration": "dly", "prec": 6}]}
baseurl = "http://data.rcc-acis.org/StnData"
#get the data
resp = requests.post(baseurl, json=params)
#package into the dataframe
df = pd.DataFrame(columns=['date', 'tmax'], data=resp.json()['data'])
#convert the date column to datetimes
df['date']=pd.to_datetime(df['date'])
#select years
mask = (df['date'] >= '1901-01-01') & (df['date'] <= '1940-12-31')
Baseline = df.loc[mask]
#get rid of leap years:
Baseline = Baseline.loc[~((Baseline['date'].dt.month == 2) & (Baseline['date'].dt.day == 29))]
but when I reshape the array I notice that there are 366 columns instead of 365, so I don't think I'm actually getting rid of the February 29th data. How can I completely eliminate any temperature data recorded on 2/29 throughout my data set? I only want 365 data points for each year.
daily = pd.DataFrame(data={'date': Baseline.date, 'tmax': Baseline.tmax})
daily['day'] = daily.date.dt.dayofyear
daily['year'] = daily.date.dt.year
daily.pivot(index='year', columns='day', values='tmax')

The source of your problem is that you used daily.date.dt.dayofyear.
Each day in a year, including Feb 29, has its own number.
To make things worse, Mar 1, for example, has dayofyear 61 in leap years but 60 in non-leap years.
One possible solution is to set the day column to a string
representation of month and day.
To provide proper sort in the pivoted table, the month part should come first.
So, after you convert the date column to datetime, create both
additional columns by running:
daily['year'] = daily.date.dt.year
daily['day'] = daily.date.dt.strftime('%m-%d')
Then you can filter out Feb 29 and generate the pivot table in one go:
result = daily[daily.day != '02-29'].pivot(index='year', columns='day',
                                           values='tmax')
For a limited source data sample (different from yours), I got:
day   02-27  02-28  03-01  03-02
year
2020     11     10     14     15
2021     11     21     22     24
An alternative
Create 3 additional columns:
daily['year'] = daily.date.dt.year
daily['month'] = daily.date.dt.strftime('%m')
daily['day'] = daily.date.dt.strftime('%d')
Note the string representation of month and day, to keep leading
zeroes.
Then filter out Feb 29 and generate the pivot table with a MultiIndex
on columns:
result = daily[(daily.month != '02') | (daily.day != '29')].pivot(
    index='year', columns=['month', 'day'], values='tmax')
This time the result is:
month    02        03
day      27  28    01  02
year
2020     11  10    14  15
2021     11  21    22  24

The easy way is to eliminate those items before building the array.
import requests
params = {"sid": "PHLthr", "sdate":"1900-12-31", "edate":"2020-12-31", "elems": [{"name": "maxt", "interval": "dly", "duration": "dly", "prec": 6}]}
baseurl = "http://data.rcc-acis.org/StnData"
#get the data
resp = requests.post(baseurl, json=params)
vals = resp.json()
#keep only rows whose date string is not Feb 29
rows = [row for row in vals['data'] if '02-29' not in row[0]]
print(rows)
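From there, the filtered rows can go straight into the DataFrame (a small sketch continuing the snippet above):
import pandas as pd
#build the frame from the filtered rows, so no Feb 29 records ever enter it
df = pd.DataFrame(rows, columns=['date', 'tmax'])
df['date'] = pd.to_datetime(df['date'])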

You get 366 columns because you used dayofyear, which numbers days according to the actual calendar (i.e. without skipping 29 Feb).
To see this:
>>> daily.iloc[1154:1157]
           date       tmax  day  year
1154 1904-02-28  38.000000   59  1904
1156 1904-03-01  39.000000   61  1904
1157 1904-03-02  37.000000   62  1904
Notice the day goes from 59 to 61 (the 60th day was 29 February 1904).
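As an aside, once the Feb 29 rows are dropped, a leap-safe day number can be built positionally instead of from the calendar (a sketch, assuming daily is already filtered as above):
#number the remaining days 1..365 within each year, independent of the calendar
daily = daily.sort_values('date')
daily['day'] = daily.groupby(daily.date.dt.year).cumcount() + 1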

Related

Using numpy, how do you calculate snowfall per month?

I have a data set with snowfall records per day for one year. Date variable is in YYYYMMDD form.
Date      Snow
20010101     0
20010102    10
20010103     5
20010104     3
20010105     0
...
20011231     0
The actual data is here
https://github.com/emily737373/emily737373/blob/master/COX_SNOW-1.csv
I want to calculate the number of days it snowed each month. I know how to do this with pandas, but for a school project I need to do it using only numpy. I cannot import datetime either; it must be done using only numpy.
The output should be in this form:
Month      # days snowed
January    13
February   19
March      20
...
December   15
My question is: how do I count the number of days it snowed (basically, when the Snow variable is not 0) without having to do it separately for each month?
I hope you can use some built-in packages, such as datetime, because it's useful when working with datetime objects.
import numpy as np
import datetime as dt

df = np.genfromtxt('test_files/COX_SNOW-1.csv', delimiter=',',
                   skip_header=1, dtype=str)
date = np.array([dt.datetime.strptime(d, "%Y%m%d").month for d in df[:, 0]])
snow = df[:, 1].copy().astype(np.int32)
has_snowed = snow > 0

for month in range(1, 13):
    month_str = dt.datetime(year=1, month=month, day=1).strftime('%B')
    how_much_snow = len(snow[has_snowed & (date == month)])
    print(month_str, ':', how_much_snow)
I loaded the data as str to guarantee that the Date column can be parsed as dates later on. That's also why we need to explicitly convert the snow column to int32; otherwise the > comparison won't work.
The output is as follows:
January : 13
February : 19
March : 20
April : 13
May : 8
June : 9
July : 2
August : 7
September : 9
October : 19
November : 16
December : 15
Let me know if this worked for you or if you have any further questions.
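If the datetime restriction is firm, a pure-numpy variant is to slice the month straight out of the YYYYMMDD strings (a sketch, assuming the same two-column CSV layout):
import numpy as np

data = np.genfromtxt('test_files/COX_SNOW-1.csv', delimiter=',',
                     skip_header=1, dtype=str)
#month is characters 4-6 of each YYYYMMDD date string
months = np.array([int(d[4:6]) for d in data[:, 0]])
snow = data[:, 1].astype(np.int32)

month_names = ['January', 'February', 'March', 'April', 'May', 'June', 'July',
               'August', 'September', 'October', 'November', 'December']
for m in range(1, 13):
    #count days in month m on which any snow fell
    print(month_names[m - 1], ':', np.count_nonzero((months == m) & (snow > 0)))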

How to calculate number of events per day using python?

I am having problems calculating/counting the number of events per day using python. I have a .txt file of earthquake data that I am using to do this. Here is what the file looks like:
2000 Jan 19 00 21 45 -118.815670 37.533170 3.870000 2.180000 383.270000
2000 Jan 11 16 16 46 -118.804500 37.551330 5.150000 2.430000 380.930000
2000 Jan 11 19 55 54 -118.821830 37.508830 0.600000 2.360000 378.080000
2000 Jan 11 05 33 02 -118.802000 37.554670 4.820000 2.530000 375.480000
2000 Jan 08 19 37 04 -118.815500 37.534670 3.900000 2.740000 373.650000
2000 Jan 09 19 34 27 -118.817670 37.529670 3.990000 3.170000 373.07000
Where column 0 is the year, 1 is the month, 2 is the day. There are no headers.
I want to calculate/count the number of events per day. Each line in the file (example: 2000 Jan 11) is an event. So, on January 11th, I would like to know how many times there was an event. In this case, on January 11th there were 3 events.
I've tried looking on Stack Overflow for some guidance and have found code that works for arrays such as:
a = [1, 1, 1, 0, 0, 0, 1]
which counts the occurrence of certain items in the array using code like:
unique, counts = numpy.unique(a, return_counts=True)
dict(zip(unique, counts))
I have not been able to find anything that helps me. Any help/advice would be appreciated.
groupby() is going to be your friend here. However, I would concatenate the Year, Month and Day so that you can use dataframe.groupby(["full_date"]).count().
Full solution
Setup DF
df = pd.DataFrame([[2000, "Jan", 19], [2000, "Jan", 20], [2000, "Jan", 19],
                   [2000, "Jan", 19]], columns=["Year", "Month", "Day"])
Convert datatypes to str for concatenation
df["Year"] = df["Year"].astype(str)
df["Day"] = df["Day"].astype(str)
Create 'full_date' column
df["full_date"] = df["Year"] + "-" + df["Month"] + "-" + df["Day"]
Count the number of events per date
df.groupby(["full_date"])["Day"].count()
Hope this helps/provides value :)
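As a variant, the question's whitespace-separated file can be parsed directly and grouped without building a concatenated date string (a sketch; the filename and column names here are hypothetical placeholders):
import pandas as pd

#eleven whitespace-separated fields per line, no header row
cols = ['Year', 'Month', 'Day', 'Hour', 'Min', 'Sec',
        'Lon', 'Lat', 'Depth', 'Mag', 'Dist']
df = pd.read_csv('earthquakes.txt', sep=r'\s+', header=None, names=cols)

#one row per event, so each group's size is the number of events that day
events_per_day = df.groupby(['Year', 'Month', 'Day']).size()
print(events_per_day)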

Is there some Python function like .to_period that could help me extract a fiscal year's week number based on a date?

Essentially, I want to apply some lambda function(?) to a column in my dataframe that contains dates. Originally, I used dt.week to extract the week number, but the calendar dates don't match up with the fiscal year I'm using (Apr 2019 - Mar 2020).
I have tried using pandas' function to_period('Q-MAR') but that seems to be a little bit off. I have been researching other ways but nothing seems to work properly.
Apr 1 2019 -> Week 1
Apr 3 2019 -> Week 1
Apr 30 2019 -> Week 5
May 1 2019 -> Week 5
May 15 2019 -> Week 6
Thank you for any advice or tips in advance!
You can create a DataFrame which contains the dates with a frequency of weeks:
date_rng = pd.date_range(start='2019-04-01', end='2020-03-31', freq='W')
df = pd.DataFrame(date_rng, columns=['date'])
You can then query df for the indices whose date is smaller than or equal to the query value:
df.index[df.date <= query_date][-1]
This will output the largest index whose date is smaller than or equal to the date you want to examine. I imagine you can pour this into a lambda yourself?
NOTE
This solution has limitations, the biggest one being you have to manually define the datetime dataframe.
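A vectorised alternative that avoids the manually defined dataframe is to count whole 7-day blocks since the fiscal start (a sketch under the assumed convention that week 1 begins on 1 April; it may not reproduce the asker's "Week 6" for May 15):
import pandas as pd

#hypothetical sample dates
df = pd.DataFrame({'date': pd.to_datetime(['2019-04-01', '2019-04-03',
                                           '2019-04-30', '2019-05-01'])})

#fiscal-year start: 1 April of the same year for Apr-Dec dates,
#1 April of the previous year for Jan-Mar dates
fy_year = df['date'].dt.year.where(df['date'].dt.month >= 4,
                                   df['date'].dt.year - 1)
fy_start = pd.to_datetime(fy_year.astype(str) + '-04-01')

#week number: whole 7-day blocks elapsed since the fiscal start, plus one
df['fiscal_week'] = (df['date'] - fy_start).dt.days // 7 + 1
print(df)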
I created a fiscal calendar in Spark that can later be turned into a function.
import numpy as np
import pandas as pd
from pyspark.sql.functions import col, explode, expr, weekofyear, year

beginDate = '2016-01-01'
endDate = '2021-12-31'
#create empty dataframe (assumes an active SparkSession named spark)
df = spark.createDataFrame([()])
#create one row per date in the given range
df1 = df.withColumn("date", explode(expr(f"sequence(to_date('{beginDate}'), to_date('{endDate}'), interval 1 day)")))
#get week and year
df1 = df1.withColumn('week', weekofyear(col("date"))).withColumn('year', year(col("date")))
#translate to pandas
df1 = df1.toPandas()
#get fiscal year
df1['financial_year'] = df1['date'].map(lambda x: x.year if x.month > 3 else x.year - 1)
df1['date'] = pd.to_datetime(df1['date'])
#get calendar quarter
df1['quarter_old'] = df1['date'].dt.quarter
#get fiscal quarter
df1['quarter'] = np.where(df1['financial_year'] < df1['year'], df1['quarter_old'] + 3, df1['quarter_old'])
df1['quarter'] = np.where(df1['financial_year'] == df1['year'], df1['quarter_old'] - 1, df1['quarter'])
#get fiscal week by shifting as per the offset from the usual calendar
df1["fiscal_week"] = df1.week.shift(91)
df1 = df1.loc[df1['date'] >= '2020-01-01']
print(df1)

Sum values in column 3 related to unique values in column 2 and 1

I'm working in Python and I have a Pandas DataFrame of Uber data from New York City. A part of the DataFrame looks like this:
Year  Week_Number  Total_Dispatched_Trips
2015           51                   1,109
2015            5                  54,380
2015           50                   8,989
2015           51                   1,025
2015           21                  10,195
2015           38                  51,957
2015           43                 266,465
2015           29                  66,139
2015           40                  74,321
2015           39                       3
2015           50                     854
As it is right now, the same week appears multiple times for each year. I want to sum the values for "Total_Dispatched_Trips" for every week for each year. I want each week to appear only once per year. (So week 51 can't appear multiple times for year 2015 etc.). How do I do this? My dataset is over 3k rows, so I would prefer not to do this manually.
Thanks in advance.
Okidoki, here it is, borrowing from Convert number strings with commas in pandas DataFrame to float:
import locale
from locale import atof

locale.setlocale(locale.LC_NUMERIC, '')
df['numeric_trip'] = pd.to_numeric(df.Total_Dispatched_Trips.apply(atof), errors='coerce')
df.groupby(['Year', 'Week_Number']).numeric_trip.sum()
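A locale-free variant is to strip the thousands separators directly (a sketch; the sample frame here is a hypothetical stand-in for the real data):
import pandas as pd

#hypothetical sample matching the question's layout
df = pd.DataFrame({'Year': [2015, 2015, 2015],
                   'Week_Number': [51, 51, 50],
                   'Total_Dispatched_Trips': ['1,109', '1,025', '8,989']})

#drop the commas, convert to int, then sum per (Year, Week_Number)
df['numeric_trip'] = (df['Total_Dispatched_Trips']
                      .str.replace(',', '', regex=False)
                      .astype(int))
print(df.groupby(['Year', 'Week_Number'])['numeric_trip'].sum())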

Average hourly week profile for a year excluding weekend days and holidays

With Pandas I have created a DataFrame from an imported .csv file (this file is generated through simulation). The DataFrame consists of half-hourly energy consumption data for a single year. I have already created a DatetimeIndex for the dates.
I would like to be able to reformat this data into average hourly week and weekend profile results. With the week profile excluding holidays.
DataFrame:
Date_Time         Equipment:Electricity:LGF  Equipment:Electricity:GF
01/01/2000 00:30  0.583979872                0.490327348
01/01/2000 01:00  0.583979872                0.490327348
01/01/2000 01:30  0.583979872                0.490327348
01/01/2000 02:00  0.583979872                0.490327348
I found an example (Getting the average of a certain hour on weekdays over several years in a pandas dataframe) that explains doing this for several years, but not explicitly for a week (without holidays) and weekend.
I realised that there are no resampling techniques in Pandas that do this directly; I used several aliases (http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases) for creating monthly and daily profiles.
I was thinking of using the business day frequency and create a new dateindex with working days and compare that to my DataFrame datetimeindex for every half hour. Then return values for working days and weekend days when true or false respectively to create a new dataset, but am not sure how to do this.
PS; I am just getting into Python and Pandas.
Dummy data (for future reference, more likely to get an answer if you post some in a copy-paste-able form)
import numpy as np
import pandas as pd

df = pd.DataFrame(data={'a': np.random.randn(1000)},
                  index=pd.date_range(start='2000-01-01', periods=1000, freq='30T'))
Here's an approach. First define a US (or modify as appropriate) business day offset with holidays, and generate a range covering your dates.
from pandas.tseries.holiday import USFederalHolidayCalendar
from pandas.tseries.offsets import CustomBusinessDay

bday_us = CustomBusinessDay(calendar=USFederalHolidayCalendar())
bday_over_df = pd.date_range(start=df.index.min().date(),
                             end=df.index.max().date(), freq=bday_us)
Then, develop your two grouping columns. An hour column is easy.
df['hour'] = df.index.hour
For weekday/weekend/holiday, define a function to group the data.
def group_day(date):
    if date.weekday() in [5, 6]:
        return 'weekend'
    elif date.date() in bday_over_df:
        return 'weekday'
    else:
        return 'holiday'

df['day_group'] = df.index.map(group_day)
Then, just group by the two columns as you wish.
In [140]: df.groupby(['day_group', 'hour']).sum()
Out[140]:
                        a
day_group hour
holiday   0      1.890621
          1     -0.029606
          2      0.255001
          3      2.837000
          4     -1.787479
          5      0.644113
          6      0.407966
          7     -1.798526
          8     -0.620614
          9     -0.567195
          10    -0.822207
          11    -2.675911
          12     0.940091
          13    -1.601885
          14     1.575595
          15     1.500558
          16    -2.512962
          17    -1.677603
          18     0.072809
          19    -1.406939
          20     2.474293
          21    -1.142061
          22    -0.059231
          23    -0.040455
weekday   0      9.192131
          1      2.759302
          2      8.379552
          3     -1.189508
          4      3.796635
          5      3.471802
...                   ...
          18    -5.217554
          19     3.294072
          20    -7.461023
          21     8.793223
          22     4.096128
          23    -0.198943
weekend   0     -2.774550
          1      0.461285
          2      1.522363
          3      4.312562
          4      0.793290
          5      2.078327
          6     -4.523184
          7     -0.051341
          8      0.887956
          9      2.112092
          10    -2.727364
          11     2.006966
          12     7.401570
          13    -1.958666
          14     1.139436
          15    -1.418326
          16    -2.353082
          17    -1.381131
          18    -0.568536
          19    -5.198472
          20    -3.405137
          21    -0.596813
          22     1.747980
          23    -6.341053

[72 rows x 1 columns]
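Since the question asks for an average profile rather than totals, the same grouping with mean() gives the hourly averages:
#average (rather than total) value for each day_group/hour combination
profile = df.groupby(['day_group', 'hour']).mean()
print(profile)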
