Problem description
I'd like to unstack or pivot a DataFrame, but it raises the numpy exception MemoryError: Unable to allocate 1.72 GiB for an array with shape (1844040704,) and data type bool. I have tried this with a DataFrame with a numerical index -> df.pivot() and with a MultiIndex -> df.unstack(). Both raise the same exception and I don't know a way around it. I don't feel like 175,200 rows is an exceptionally large dataset; I have previously used unstack on DataFrames with more than 5 million rows. The df will even become twice as large for the complete analysis!
I try to unstack with df_unstacked = df.unstack(level=0)
Additional info
Before pivoting / unstacking, I had to add a unique index with df['row_num'] = np.arange(len(df)), because the dataset contains (intentional) duplicate index entries. That's due to daylight saving time, where one day in October has 25 hours; the second hour is duplicated.
I work with Jupyterlab from a virtualenv with python 3.7.
Package versions:
pandas==1.1.2
numpy==1.19.2
jupyterlab==2.2.8
Example data
value
target_frame row_num year
2017-01-01 01:00:00 0 2016 10,3706
2017-01-01 01:15:00 1 2016 27,2456
2017-01-01 01:30:00 2 2016 20,4022
2017-01-01 01:45:00 3 2016 14,4911
2017-01-01 02:00:00 4 2016 14,2611
... ...
2017-12-31 23:45:00 175195 2020 30,7177
2017-01-01 00:00:00 175196 2020 21,4708
2017-01-01 00:15:00 175197 2020 44,9192
2017-01-01 00:30:00 175198 2020 37,8560
2017-01-01 00:45:00 175199 2020 30,9901
[175200 rows x 1 columns]
Desired result
The index will contain duplicates. For the record, I don't care whether it ends up as an index or a regular column.
value
year 2016 2017 ... 2020
target_frame
2017-01-01 01:00:00 10,3706 11 ... 32
2017-01-01 01:15:00 27,2456 12 ... 32
2017-01-01 01:30:00 20,4022 13 ... 541
2017-01-01 01:45:00 14,4911 51 ... 123
2017-01-01 02:00:00 14,2611 56 ... 12
... ...
2017-12-31 23:45:00 30,7177 12 ... 12
2017-01-01 00:00:00 21,4708 21 ... 12
2017-01-01 00:15:00 44,9192 21 ... 13
2017-01-01 00:30:00 37,8560 21 ... 11
2017-01-01 00:45:00 30,9901 12 ... 10
[35040 rows x 5 columns]
I will try to help by addressing the lack of memory and a way to deal with it.
Since the error shows pandas trying to allocate an array with almost 2 billion entries, and the problem is clearly memory-related, I will focus on that without going into the transformations themselves.
If you use something like df, df_pivoted, df_unstacked, etc., each transformation creates a new variable and multiplies your memory consumption, so it is important to free memory along the way, even if your data doesn't seem big enough to consume all of it.
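For example (purely illustrative, with names taken from the question), dropping a frame as soon as the next step has been produced lets Python reclaim the memory:
import gc

df_unstacked = df.unstack(level=0)  # one transformation step
del df                              # the previous frame is no longer needed
gc.collect()                        # release the memory before the next step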
One way to solve this problem is to work in "chunks" and save each transformation step to a file in order to clear the memory.
So the first step is to save the data to a file, with a simple dataframe.to_csv().
The second step is to make the transformations using parts of the data that fit in memory.
For this, the pandas.read_csv() function has an argument called chunksize that turns the returned object into a TextFileReader iterator.
That way, if you want to access the data, you need to iterate over it.
iterator = pandas.read_csv('file.csv', chunksize=32)
iterator.shape # will raise an error:
AttributeError: 'TextFileReader' object has no attribute 'shape'
The right way to do it:
for chunk in iterator:
    print(chunk.shape)
output:
(32, ncols)
That way, to deal with your problem, you can work with chunks and use the join/concat functions to put the pieces back together as you need the data.
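A rough sketch of that loop (the file name and the per-chunk work are placeholders, not from the question):
import pandas as pd

pieces = []
for chunk in pd.read_csv('file.csv', chunksize=100_000):
    # do the per-chunk transformation here; each chunk is a normal DataFrame
    pieces.append(chunk)

result = pd.concat(pieces)          # join the transformed chunks back together
result.to_csv('transformed.csv')    # persist this step so memory can be cleared again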
I think this might be a bug in pandas or numpy. There are different error messages with different pandas and numpy versions (Anaconda vs. pip). I coded the transformation myself and it runs in no time.
# Get the 2017 timestamps for the side_df
side_df = pd.DataFrame({'timestamp': next_df.loc[next_df['year'] == 2017]['target_frame']})

for year in next_df['year'].unique():
    side_df[year] = next_df.loc[next_df['year'] == year]['value']

display(side_df)
Results in:
timestamp 2016 2017 2018 2019 2020
8839 2017-01-01 01:00:00 10,3706 4,4184 14,7919 30,6942 31,0594
8840 2017-01-01 01:15:00 27,2456 23,7641 31,0019 40,2778 46,8350
8841 2017-01-01 01:30:00 20,4022 14,9732 23,8531 34,4941 41,3688
8842 2017-01-01 01:45:00 14,4911 9,4986 17,0181 28,8678 37,8213
8843 2017-01-01 02:00:00 14,2611 5,1241 14,0869 24,3203 34,4150
... ... ... ... ... ... ...
43874 2017-12-31 23:45:00 10,9256 15,2959 22,6000 40,1677 NaN
43875 2017-01-01 00:00:00 10,9706 4,8184 11,5150 30,9208 NaN
43876 2017-01-01 00:15:00 35,6275 25,8251 30,2893 41,5722 NaN
43877 2017-01-01 00:30:00 24,555 17,7821 24,2928 35,5510 NaN
43878 2017-01-01 00:45:00 5,61 11,7059 20,0477 31,2884 NaN
There are still some problems in the dataset (like the NaNs), but that has nothing to do with this question.
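For what it's worth, the reshape itself can also be kept small by replacing the global row_num with a per-group counter, so the unstacked index has about 35,040 rows instead of 175,200 times the number of years. This is only a sketch against the structure shown above, not tested on the original data:
import pandas as pd

df = df.reset_index()  # assuming columns 'target_frame', 'row_num', 'year', 'value'

# 0 for the first occurrence of a (timestamp, year) pair, 1 for the duplicated DST hour
dup = df.groupby(['target_frame', 'year']).cumcount()

wide = (df.set_index(['target_frame', dup.rename('dup'), 'year'])['value']
          .unstack('year'))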
Related
I have 7 columns of data, indexed by datetime (30-minute frequency), starting 2017-05-31 and ending 2018-05-25. I want to plot the mean over specific date ranges (seasons). I have been trying groupby, but I can't group by a specific range; I get wrong results if I do df.groupby(df.date.dt.month).mean().
A few lines from the dataset (date range is from 2017-05-31 to 2018-05-25)
50 51 56 58
date
2017-05-31 00:00:00 200.213542 276.929198 242.879051 NaN
2017-05-31 00:30:00 200.215478 276.928229 242.879051 NaN
2017-05-31 01:00:00 200.215478 276.925324 242.878083 NaN
2017-06-01 01:00:00 200.221288 276.944691 242.827729 NaN
2017-06-01 01:30:00 200.221288 276.944691 242.827729 NaN
2017-08-31 09:00:00 206.961886 283.374453 245.041349 184.358250
2017-08-31 09:30:00 206.966727 283.377358 245.042317 184.360187
2017-12-31 09:00:00 212.925877 287.198416 247.455413 187.175144
2017-12-31 09:30:00 212.926846 287.196480 247.465097 187.179987
2018-03-31 23:00:00 213.304498 286.933093 246.469647 186.887548
2018-03-31 23:30:00 213.308369 286.938902 246.468678 186.891422
2018-04-30 23:00:00 215.496812 288.342024 247.522230 188.104749
2018-04-30 23:30:00 215.497781 288.340086 247.520294 188.103780
I have created these variables (These are the ranges I need)
increment_rates_winter = df['2017-08-30'].mean() - df['2017-06-01'].mean()
increment_rates_spring = df['2017-11-30'].mean() - df['2017-09-01'].mean()
increment_rates_summer = df['2018-02-28'].mean() - df['2017-12-01'].mean()
increment_rates_fall = df['2018-05-24'].mean() - df['2018-03-01'].mean()
Concatenated them:
df_seasons = pd.concat([increment_rates_winter, increment_rates_spring, increment_rates_summer, increment_rates_fall], axis=1)
and after plotting, I got this:
However, I've been trying to get this:
df_seasons
Out[664]:
Winter Spring Summer Fall
50 6.697123 6.948447 -1.961549 7.662622
51 6.428329 4.760650 -2.188402 5.927087
52 5.580953 6.667529 1.136889 12.939295
53 6.406259 2.506279 -2.105125 6.964549
54 4.332826 3.678492 -2.574769 6.569398
56 2.222032 3.359607 -2.694863 5.348258
58 NaN 1.388535 -0.035889 4.213046
The seasons on the x-axis, with the means plotted for each column.
Winter = df['2017-06-01':'2017-08-30']
Spring = df['2017-09-01':'2017-11-30']
Summer = df['2017-12-01':'2018-02-28']
Fall = df['2018-03-01':'2018-05-30']
Thank you in advance!
We can get a specific date range in the following way; then you can define whatever range you want and take the mean.
import pandas as pd
df = pd.read_csv('test.csv')
df['date'] = pd.to_datetime(df['date'])
start_date = "2017-12-31 09:00:00"
end_date = "2018-04-30 23:00:00"
mask = (df['date'] > start_date) & (df['date'] <= end_date)
f_df = df.loc[mask]
This gives the output
date 50 ... 58
8 2017-12-31 09:30:00 212.926846 ... 187.179987 NaN
9 2018-03-31 23:00:00 213.304498 ... 186.887548 NaN
10 2018-03-31 23:30:00 213.308369 ... 186.891422 NaN
11 2018-04-30 23:00:00 215.496812 ... 188.104749 NaN
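Extending the same masking idea to the four season ranges listed in the question (a sketch, assuming df has the 'date' column as above):
import pandas as pd

seasons = {
    'Winter': ('2017-06-01', '2017-08-30'),
    'Spring': ('2017-09-01', '2017-11-30'),
    'Summer': ('2017-12-01', '2018-02-28'),
    'Fall':   ('2018-03-01', '2018-05-30'),
}

df_seasons = pd.DataFrame({
    name: df.loc[(df['date'] >= start) & (df['date'] <= end)].mean(numeric_only=True)
    for name, (start, end) in seasons.items()
})
df_seasons.T.plot()  # seasons on the x-axis, one line per data column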
Hope this helps
How about transposing it:
df_seasons.T.plot()
Output: (plot of the transposed DataFrame, with the seasons on the x-axis)
My dataset looks like this:
time Open
2017-01-01 00:00:00 1.219690
2017-01-01 01:00:00 1.688490
2017-01-01 02:00:00 1.015285
2017-01-01 03:00:00 1.357672
2017-01-01 04:00:00 1.293786
2017-01-01 05:00:00 1.040048
2017-01-01 06:00:00 1.225080
2017-01-01 07:00:00 1.145402
...., ....
2017-12-31 23:00:00 1.145402
I want to find the sum over a specified time range and save it to a new dataframe.
Let's say
I want to find the sum between 2017-01-01 22:00:00 and 2017-01-02 04:00:00, i.e. the 6 hours spanning two days. I want to sum the data in time ranges such as 10 PM to 4 AM the next day and put the result in a different data frame, for example df_timerange_sum. Note that the range crosses from one date into the next.
What did I do?
I used sum() on a time range like this: df[~df['time'].dt.hour.between(10, 4)].sum(), but it gives me the sum of the whole df, not of the time range I specified.
I also tried resample, but I cannot find a way to restrict it to a specific time range.
df['time'].dt.hour.between(10, 4) is always False because no number can be at least 10 and at most 4 at the same time. What you want is to mark between(4, 21) and then negate it to get the other hours.
Here's what I would do:
# mark rows between 4 AM and 10 PM;
# the data we want is where s == False, i.e. ~s
s = df['time'].dt.hour.between(4, 21)

# s.cumsum() labels each consecutive block of False rows,
# which is what we will group on
blocks = s.cumsum()

# again we only care about ~s
(df[~s].groupby(blocks[~s], as_index=False)   # we don't need the block labels as index
       .agg({'time': 'min', 'Open': 'sum'}))  # time: min -- start of each block; Open: sum -- sum of Open per block
Output for random data:
time Open
0 2017-01-01 00:00:00 1.282701
1 2017-01-01 22:00:00 2.766324
2 2017-01-02 22:00:00 2.838216
3 2017-01-03 22:00:00 4.151461
4 2017-01-04 22:00:00 2.151626
5 2017-01-05 22:00:00 2.525190
6 2017-01-06 22:00:00 0.798234
An alternative (in my opinion more straightforward) approach that accomplishes the same thing. There are definitely ways to reduce the code, but I am also relatively new to pandas.
df.set_index(['time'], inplace=True)  # make time the index col (not 100% necessary)

# new df that stores your desired output plus start and end times if you need them
df2 = pd.DataFrame(columns=['start_time', 'end_time', 'sum_Open'])
df2['start_time'] = df[df.index.hour == 22].index  # gets/stores all start datetimes
df2['end_time'] = df[df.index.hour == 4].index     # gets/stores all end datetimes

for i, row in df2.iterrows():
    # set_value was removed in recent pandas; .at does the same single-cell assignment
    df2.at[i, 'sum_Open'] = df[(df.index >= row['start_time']) & (df.index <= row['end_time'])]['Open'].sum()
You'd have to add an if statement or something to handle the last day, which ends at 11 PM and so has a start time but no matching end time.
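For instance (a small sketch, not tested against the original data), the start and end lists could be trimmed to matching pairs before building df2:
starts = df[df.index.hour == 22].index
ends = df[df.index.hour == 4].index

ends = ends[ends > starts[0]]    # drop a leading 4 AM that has no 10 PM before it
n = min(len(starts), len(ends))  # drop a trailing 10 PM that has no 4 AM after it
df2 = pd.DataFrame({'start_time': starts[:n], 'end_time': ends[:n]})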
I am trying to calculate degree hours based on hourly temperature values.
The data I am using has some missing days that I am trying to interpolate. Below is part of the data:
2012-06-27 19:00:00 24
2012-06-27 20:00:00 23
2012-06-27 21:00:00 23
2012-06-27 22:00:00 16
2012-06-27 23:00:00 15
2012-06-29 00:00:00 15
2012-06-29 01:00:00 16
2012-06-29 02:00:00 16
2012-06-29 03:00:00 16
2012-06-29 04:00:00 17
2012-06-29 05:00:00 17
2012-06-29 06:00:00 18
....
2014-12-14 20:00:00 1
2014-12-14 21:00:00 0
2014-12-14 22:00:00 -1
2014-12-14 23:00:00 8
The full code is:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
filename = 'Temperature12.xls'
df_temp = pd.read_excel(filename)
df_temp = df_temp.set_index('datetime')
ts_temp = df_temp['temp']
def inter_lin_nan(ts_temp, rule):
    ts_temp = ts_temp.resample(rule)
    mask = np.isnan(ts_temp)
    # interpolating missing values
    ts_temp[mask] = np.interp(np.flatnonzero(mask), np.flatnonzero(~mask), ts_temp[~mask])
    return ts_temp

ts_temp = inter_lin_nan(ts_temp, '1H')
print ts_temp['2014-06-28':'2014-06-29']

def HDH(Tcurr, Tref=15.0):
    if Tref >= Tcurr:
        return (Tref - Tcurr) / 24
    else:
        return 0

df_temp['H-Degreehours'] = df_temp.apply(lambda row: HDH(row['temp']), axis=1)
df_temp['CDD-CUMSUM'] = df_temp['C-Degreehours'].cumsum()
df_temp['HDD-CUMSUM'] = df_temp['H-Degreehours'].cumsum()
df_temp1 = df_temp['H-Degreehours'].resample('H', how=sum)
print df_temp1
Now I have two questions: while using the inter_lin_nan function, it does interpolate the data, but it also changes the next day's data, which ends up totally different from what is in the Excel file. Is this common, or have I missed something?
Second question: at the end of the code I am trying to add up the hourly degree-hour values, which is why I created another DataFrame, but when I print that DataFrame it still has NaN values as in the original data file. Could you please tell me why this is happening?
I may be missing something very obvious as I am new to Python.
Don't use numpy when pandas has its own version.
df = pd.read_csv(filepath, index_col='datetime', parse_dates=True)  # asfreq needs a DatetimeIndex
df = df.asfreq('1d')  # get a time series with an index timestamp for each day ('1H' for hourly data)
df['somelabel'] = df['somelabel'].interpolate(method='linear')  # interpolate NaN values only
Use asfreq to add the required frequency of timestamps to your time series, and interpolate() to fill in only the NaN values.
http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.Series.interpolate.html
http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.asfreq.html
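Applied to the temperature data from the question (a sketch, assuming the Excel sheet has the 'datetime' and 'temp' columns used in the original code), this might look like:
import pandas as pd

df_temp = pd.read_excel('Temperature12.xls', index_col='datetime', parse_dates=True)
df_temp = df_temp.asfreq('1H')                                  # make the hourly grid explicit; missing hours become NaN
df_temp['temp'] = df_temp['temp'].interpolate(method='linear')  # fill the gaps by linear interpolation

# vectorised heating degree-hours instead of apply(): max(Tref - T, 0) / 24
df_temp['H-Degreehours'] = (15.0 - df_temp['temp']).clip(lower=0) / 24
df_temp['HDD-CUMSUM'] = df_temp['H-Degreehours'].cumsum()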
I am trying to use pandas to resample vessel tracking data from seconds to minutes using how='first'. The DataFrame is called hg1s. The unique ID is called MMSI. The datetime index is TX_DTTM. Here is a data sample:
TX_DTTM MMSI LAT LON NS
2013-10-01 00:00:02 367542760 29.660550 -94.974195 15
2013-10-01 00:00:04 367542760 29.660550 -94.974195 15
2013-10-01 00:00:07 367451120 29.614161 -94.954459 0
2013-10-01 00:00:15 367542760 29.660210 -94.974069 15
2013-10-01 00:00:13 367542760 29.660210 -94.974069 15
The code to resample:
hg1s1min = hg1s.groupby('MMSI').resample('1Min', how='first')
And a data sample of the output:
hg1s1min[20000:20004]
MMSI TX_DTTM NS LAT LON
367448060 2013-10-21 00:42:00 NaN NaN NaN
2013-10-21 00:43:00 NaN NaN NaN
2013-10-21 00:44:00 NaN NaN NaN
2013-10-21 00:45:00 NaN NaN NaN
It's safe to assume that there are several data points within each minute, so I don't understand why this isn't picking up the first record with that method. I looked at this link: Pandas Downsampling Issue, because it seemed similar to my problem. I tried passing label='left' and label='right'; neither worked.
How do I return the first record in every minute for each MMSI?
As it turns out, the problem isn't with the method, but with my assumption about the data. The large data set is a month, or 44,640 minutes. While every record in my dataset has the relevant values, there isn't 100% overlap in time. In this case MMSI = 367448060 is present at 2013-10-17 23:24:31 and again at 2013-10-29 20:57:32; between those two data points there isn't any data to sample, resulting in NaN, which is correct.
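If those empty minutes are just noise in the output, one option (a sketch using the newer resample syntax, not part of the original code) is to drop rows where every value is NaN:
hg1s1min = (hg1s.groupby('MMSI')
                .resample('1Min')
                .first()              # equivalent of how='first' in older pandas
                .dropna(how='all'))   # discard minutes with no observations at all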
I have two columns; the time an event started and the duration of that event. Like so:
time, duration
1:22:51,41
1:56:29,36
2:02:06,12
2:32:37,38
2:34:51,24
3:24:07,31
3:28:47,59
3:31:19,32
3:42:52,37
3:57:04,58
4:21:55,23
4:40:28,17
4:52:39,51
4:54:48,26
5:17:06,46
6:08:12,1
6:21:34,12
6:22:48,24
7:04:22,1
7:06:28,46
7:19:12,51
7:19:19,4
7:22:27,27
7:32:25,53
I want to create a line chart that shows the number of concurrent events happening throughout the day. Renaming time to start_time and adding a new column that computes the end_time is easy enough (assuming that's the next step) -- what I'm not quite sure I understand is how, afterwards, I can resample this data so I can chart concurrents.
I imagine I want to wind up with something like (but bucketed by the minute):
time, events
1:30:00,1
2:00:00,2
2:30:00,1
3:00:00,1
3:30:00,2
First make it an actual time stamp:
df['time'] = pd.to_datetime('2014-03-14 ' + df['time'])
Now you can get the end times:
df['end_time'] = df['time'] + df['duration'] * pd.offsets.Minute(1)
A way to get the open events is to combine the start and end times, resample and cumsum:
In [11]: open = pd.concat([pd.Series(1, df.time),       # event starts: add 1
                           pd.Series(-1, df.end_time)   # event ends: subtract 1
                          ]).resample('30Min', how='sum').cumsum()
In [12]: open
Out[12]:
2014-03-14 01:00:00 1
2014-03-14 01:30:00 2
2014-03-14 02:00:00 1
2014-03-14 02:30:00 1
2014-03-14 03:00:00 2
2014-03-14 03:30:00 4
2014-03-14 04:00:00 2
2014-03-14 04:30:00 2
2014-03-14 05:00:00 2
2014-03-14 05:30:00 1
2014-03-14 06:00:00 2
2014-03-14 06:30:00 0
2014-03-14 07:00:00 3
2014-03-14 07:30:00 2
2014-03-14 08:00:00 0
Freq: 30T, dtype: int64
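With current pandas, where the how= keyword is gone and multiplying a Series by an offset is no longer supported, roughly the same approach might be written as (a sketch, not the original answer's code):
import pandas as pd

df['time'] = pd.to_datetime('2014-03-14 ' + df['time'])
df['end_time'] = df['time'] + pd.to_timedelta(df['duration'], unit='m')

open_events = (pd.concat([pd.Series(1, index=df['time']),          # +1 at each start
                          pd.Series(-1, index=df['end_time'])])    # -1 at each end
                 .resample('30Min').sum()
                 .cumsum())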
You could create a list of dictionary items with keys "time" and "events".
Obviously you need to handle the evaluation and manipulation of the time data types appropriately, but you could do something like this:
event_bucket = []
time_interval = (end_time - start_time) / num_of_buckets

for ii in range(num_of_buckets):
    event_bucket.append({"time": start_time + ii * time_interval, "events": 0})

for entry in time_entry:
    for bucket in event_bucket:
        if bucket["time"] >= entry["start_time"] and bucket["time"] <= entry["end_time"]:
            bucket["events"] += 1
If you make num_of_buckets larger, the graph becomes more precise.
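A possible pandas version of the same bucketing idea (a sketch, assuming 'start_time' and 'end_time' are datetime columns as described in the question):
import pandas as pd

buckets = pd.date_range(df['start_time'].min().floor('T'),
                        df['end_time'].max().ceil('T'),
                        freq='1T')  # one bucket per minute

concurrent = pd.Series(
    [((df['start_time'] <= t) & (df['end_time'] >= t)).sum() for t in buckets],
    index=buckets, name='events')

concurrent.plot()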