Pandas resample drops column - python

Pandas is dropping one of my columns when resampling, and I don't understand why. I've read that this can happen if the columns don't have a proper numerical type, but that isn't the case here:
import pandas
# movements is the target data frame with daily movements
movements = pandas.DataFrame(columns=['date', 'amount', 'cash'])
movements.set_index('date', inplace=True)
# df is a movement to add
df = pandas.DataFrame({'amount': 179,
                       'cash': 100.00},
                      index=[pandas.Timestamp('2015/12/31')])
print(df); print(df.info()); print()
# add df to movements and resample movements
movements = movements.append(df).resample('D').sum().fillna(0)
print(movements); print(movements.info())
results in:
amount cash
2015-12-31 179 100.0
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 1 entries, 2015-12-31 to 2015-12-31
Data columns (total 2 columns):
amount 1 non-null int64
cash 1 non-null float64
dtypes: float64(1), int64(1)
memory usage: 24.0 bytes
None
cash
2015-12-31 100.0
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 1 entries, 2015-12-31 to 2015-12-31
Freq: D
Data columns (total 1 columns):
cash 1 non-null float64
dtypes: float64(1)
memory usage: 16.0 bytes
None
I noticed that the drop happens only when cash is a float, i.e. if in the code above cash is set to 100 (int) rather than 100.00, then all columns are int and amount isn't dropped.
Any idea?

The problem is that when you created the movements DataFrame, the dtype of its columns was set to object.
If you set the column types upfront, or convert them to numeric types later, it works:
movements.append(df).apply(pandas.to_numeric).resample('D').sum().fillna(0)
Out[100]:
amount cash
2015-12-31 179 100.0
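Setting the types upfront means creating the empty frame with explicit dtypes, so appended rows keep their numeric types. A minimal sketch of that approach (the empty DatetimeIndex for date is an assumption about the intended layout):
import pandas
# create the empty columns with explicit numeric dtypes instead of object
movements = pandas.DataFrame({'amount': pandas.Series(dtype='int64'),
                              'cash': pandas.Series(dtype='float64')},
                             index=pandas.DatetimeIndex([], name='date'))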

Related

python pandas | replacing the date and time string with only time

price    quantity    high time
10.4     3           2021-11-08 14:26:00-05:00
The dataframe is named ddg, and the datatype for high time is datetime64[ns, America/New_York].
I want high time to be only 14:26:00 (getting rid of 2021-11-08 and -05:00), but I got an error when using the code below:
ddg['high_time'] = ddg['high_time'].dt.strftime('%H:%M')
I think it's because that's not the right column name:
# Your code
>>> ddg['high_time'].dt.strftime('%H:%M')
...
KeyError: 'high_time'
# With right column name
>>> ddg['high time'].dt.strftime('%H:%M')
0 14:26
Name: high time, dtype: object
# My dataframe:
>>> df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1 entries, 0 to 0
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 price 1 non-null float64
1 quantity 1 non-null int64
2 high time 1 non-null datetime64[ns, America/New_York]
dtypes: datetime64[ns, America/New_York](1), float64(1), int64(1)
memory usage: 152.0 bytes
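If you would rather keep the high_time name from your code, you could rename the column first (a small sketch):
# rename the column to the underscore form, then format the time as HH:MM
ddg = ddg.rename(columns={'high time': 'high_time'})
ddg['high_time'] = ddg['high_time'].dt.strftime('%H:%M')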

Group by of dataframe with average of a column

I am really new to Python; I started learning it just a week ago. I have a query and hope you can help me solve it. Thanks in advance!
I have data in below format.
Date Product Price Discount
1/1/2020 A 17,490 30
1/1/2020 B 34,990 21
1/1/2020 C 20,734 11
1/2/2020 A 16,884 26
1/2/2020 B 26,990 40
1/2/2020 C 17,936 10
1/3/2020 A 16,670 36
1/3/2020 B 12,990 13
1/3/2020 C 30,990 43
I want to take the average of the Discount column for each date and end up with just two columns, but it isn't working out:
Date AVG_Discount
1/1/2020 x %
1/2/2020 y %
1/3/2020 z %
What I have tried is below. As I said, I am a novice in Python, so the approach might be incorrect. I need guidance.
mean_col=df.groupby(df['time'])['discount'].mean()
df=df.set_index(['time'])
df['mean_col']=mean_col
df=df.reset_index()
df.groupby(df['time'])['discount'].mean() already returns a Series with time as the index.
All you need to do is call reset_index on the result:
grouped_df = df.groupby(df['time'])['discount'].mean().reset_index()
As Quang Hoang suggested in the comments, you can also pass as_index=False to groupby.
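For example, a sketch of the as_index=False variant:
# as_index=False keeps the grouping key as a regular column
grouped_df = df.groupby('time', as_index=False)['discount'].mean()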
Apparently, you have read your DataFrame from a text file,
e.g. a CSV, but with a separator other than a comma.
Run df.info() and I assume you got a result something like the one below:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 9 entries, 0 to 8
Data columns (total 4 columns):
Date 9 non-null object
Product 9 non-null object
Price 9 non-null object
Discount 9 non-null int64
dtypes: int64(1), object(3)
Note that the Date, Product and Price columns are of object type
(actually, strings). This remark is especially important in the case of the
Price column, because to compute a mean you need the source column
to be a number (not a string).
So first you should convert the Date and Price columns to proper types
(datetime and float). To do this, run:
df.Date = pd.to_datetime(df.Date)
df.Price = df.Price.str.replace(',', '.').astype(float)
Run df.info() again and now the result should be:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 9 entries, 0 to 8
Data columns (total 4 columns):
Date 9 non-null datetime64[ns]
Product 9 non-null object
Price 9 non-null float64
Discount 9 non-null int64
dtypes: datetime64[ns](1), float64(1), int64(1), object(1)
And now you can compute the mean discount, running:
df.groupby('Date').Discount.mean()
For your data I got:
Date
2020-01-01 20.666667
2020-01-02 25.333333
2020-01-03 30.666667
Name: Discount, dtype: float64
Note that your code sample contains the following errors:
- The argument of groupby is the column name (or a list of column names), so df between the parentheses is not needed.
- Instead of time you should write Date (you have no time column).
- Your Discount column is written starting with a capital D.
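Putting it together, a sketch that produces the exact two-column layout from the question:
# group by Date and average Discount, keeping Date as a regular column
result = df.groupby('Date', as_index=False)['Discount'].mean()
result = result.rename(columns={'Discount': 'AVG_Discount'})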

Division of two dataframes with group by of a column in Pandas

I have a dataframe df_F1:
df_F1.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2 entries, 0 to 1
Data columns (total 7 columns):
class_energy 2 non-null object
ACT_TIME_AERATEUR_1_F1 2 non-null float64
ACT_TIME_AERATEUR_1_F3 2 non-null float64
ACT_TIME_AERATEUR_1_F5 2 non-null float64
ACT_TIME_AERATEUR_1_F8 2 non-null float64
ACT_TIME_AERATEUR_1_F7 2 non-null float64
ACT_TIME_AERATEUR_1_F8 2 non-null float64
dtypes: float64(6), object(1)
memory usage: 128.0+ bytes
df_F1.head()
class_energy ACT_TIME_AERATEUR_1_F1 ACT_TIME_AERATEUR_1_F3 ACT_TIME_AERATEUR_1_F5
low 5.875550 431.000000 856.666667
medium 856.666667 856.666667 856.666667
I am trying to create a dataframe Ratio which contains, for each class_energy, the value of each ACT_TIME_AERATEUR_1_Fx divided by the sum over all of the ACT_TIME_AERATEUR_1_Fx columns.
For example:
ACT_TIME_AERATEUR_1_F1 ACT_TIME_AERATEUR_1_F3 ACT_TIME_AERATEUR_1_F5
low 5.875550/(5.875550 + 431.000000+856.666667) 431.000000/(5.875550+431.000000+856.666667) 856.666667/(5.875550+431.000000+856.666667)
medium 856.666667/(856.666667+856.666667+856.666667) 856.666667/(856.666667+856.666667+856.666667) 856.666667/(856.666667+856.666667+856.666667)
Any ideas to help me, please?
You could use DataFrame.divide to divide the required columns by their row-wise sum, as shown:
df.iloc[:,1:4] = df.iloc[:,1:4].divide(df.sum(axis=1), axis=0)
print(df)
class_energy ACT_TIME_AERATEUR_1_F1 ACT_TIME_AERATEUR_1_F3 \
0 low 0.004542 0.333194
1 medium 0.333333 0.333333
ACT_TIME_AERATEUR_1_F5
0 0.662264
1 0.333333
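Note that df.sum(axis=1) sums every numeric column in the row, which here matches the three columns shown in df.head(). If the frame has more ACT_TIME_AERATEUR_1_Fx columns, you may want to restrict the sum to the same slice (a sketch under that assumption):
# divide each selected column by the sum over the same slice only
cols = df.iloc[:, 1:4]
df.iloc[:, 1:4] = cols.divide(cols.sum(axis=1), axis=0)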

Plotting with GroupBy in Pandas/Python

Although it is straightforward and easy to plot groupby objects in pandas, I am wondering what the most pythonic (pandastic?) way to grab the unique groups from a groupby object is. For example:
I am working with atmospheric data and trying to plot diurnal trends over a period of several days or more. The following is the DataFrame containing many days worth of data where the timestamp is the index:
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 10909 entries, 2013-08-04 12:01:00 to 2013-08-13 17:43:00
Data columns (total 17 columns):
Date 10909 non-null values
Flags 10909 non-null values
Time 10909 non-null values
convt 10909 non-null values
hino 10909 non-null values
hinox 10909 non-null values
intt 10909 non-null values
no 10909 non-null values
nox 10909 non-null values
ozonf 10909 non-null values
pmtt 10909 non-null values
pmtv 10909 non-null values
pres 10909 non-null values
rctt 10909 non-null values
smplf 10909 non-null values
stamp 10909 non-null values
no2 10909 non-null values
dtypes: datetime64[ns](1), float64(11), int64(2), object(3)
To be able to average (and take other statistics) the data at every minute for several days, I group the dataframe:
data = no.groupby('Time')
I can then easily plot the mean NO concentration as well as quartiles:
ax = figure(figsize=(12,8)).add_subplot(111)
title('Diurnal Profile for NO, NO2, and NOx: East St. Louis Air Quality Study')
ylabel('Concentration [ppb]')
data.no.mean().plot(ax=ax, style='b', label='Mean')
data.no.apply(lambda x: percentile(x, 25)).plot(ax=ax, style='r', label='25%')
data.no.apply(lambda x: percentile(x, 75)).plot(ax=ax, style='r', label='75%')
The issue that fuels my question is that, in order to plot more interesting-looking things with fill_between(), it is necessary to know the x-axis information, per the documentation:
fill_between(x, y1, y2=0, where=None, interpolate=False, hold=None, **kwargs)
For the life of me, I cannot figure out the best way to accomplish this. I have tried:
Iterating over the groupby object and creating an array of the groups
Grabbing all of the unique Time entries from the original DataFrame
I can make these work, but I know there is a better way. Python is far too beautiful. Any ideas/hints?
UPDATES:
The statistics can be dumped into a new dataframe using unstack() such as
no_new = no.groupby('Time')['no'].describe().unstack()
no_new.info()
<class 'pandas.core.frame.DataFrame'>
Index: 1440 entries, 00:00 to 23:59
Data columns (total 8 columns):
count 1440 non-null values
mean 1440 non-null values
std 1440 non-null values
min 1440 non-null values
25% 1440 non-null values
50% 1440 non-null values
75% 1440 non-null values
max 1440 non-null values
dtypes: float64(8)
Although I should be able to plot with fill_between() using no_new.index, I receive a TypeError.
Current Plot code and TypeError:
ax = figure(figsize=(12,8)).add_subplot(111)
ax.plot(no_new['mean'])
ax.fill_between(no_new.index, no_new['mean'], no_new['75%'], alpha=.5, facecolor='green')
TypeError:
TypeError Traceback (most recent call last)
<ipython-input-6-47493de920f1> in <module>()
2 ax = figure(figsize=(12,8)).add_subplot(111)
3 ax.plot(no_new['mean'])
----> 4 ax.fill_between(no_new.index, no_new['mean'], no_new['75%'], alpha=.5, facecolor='green')
5 #title('Diurnal Profile for NO, NO2, and NOx: East St. Louis Air Quality Study')
6 #ylabel('Concentration [ppb]')
C:\Users\David\AppData\Local\Enthought\Canopy\User\lib\site-packages\matplotlib\axes.pyc in fill_between(self, x, y1, y2, where, interpolate, **kwargs)
6986
6987 # Convert the arrays so we can work with them
-> 6988 x = ma.masked_invalid(self.convert_xunits(x))
6989 y1 = ma.masked_invalid(self.convert_yunits(y1))
6990 y2 = ma.masked_invalid(self.convert_yunits(y2))
C:\Users\David\AppData\Local\Enthought\Canopy\User\lib\site-packages\numpy\ma\core.pyc in masked_invalid(a, copy)
2237 cls = type(a)
2238 else:
-> 2239 condition = ~(np.isfinite(a))
2240 cls = MaskedArray
2241 result = a.view(cls)
TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
Storing the groupby stats (mean/25/75) as columns in a new dataframe and then passing the new dataframe's index as the x parameter of plt.fill_between() works for me (tested with matplotlib 1.3.1). e.g.,
gdf = df.groupby('Time')[col].describe().unstack()
plt.fill_between(gdf.index, gdf['25%'], gdf['75%'], alpha=.5)
gdf.info() should look like this:
<class 'pandas.core.frame.DataFrame'>
Index: 12 entries, 00:00:00 to 22:00:00
Data columns (total 8 columns):
count 12 non-null float64
mean 12 non-null float64
std 12 non-null float64
min 12 non-null float64
25% 12 non-null float64
50% 12 non-null float64
75% 12 non-null float64
max 12 non-null float64
dtypes: float64(8)
Update: to address the TypeError: ufunc 'isfinite' not supported exception, it is necessary to first convert the Time column from a series of string objects in "HH:MM" format to a series of datetime.time objects, which can be done as follows:
df['Time'] = df.Time.map(lambda x: pd.datetools.parse(x).time())
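pd.datetools was removed in later pandas versions; assuming the Time values are "HH:MM" strings, a rough modern equivalent is:
# parse the strings as datetimes and keep only the time component
df['Time'] = pd.to_datetime(df['Time'], format='%H:%M').dt.time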

missing values using pandas.rolling_mean

I have lots of missing values when calculating rolling_mean with:
import datetime as dt
import pandas as pd
import pandas.io.data as web
stocklist = ['MSFT', 'BELG.BR']
# read historical prices for last 11 years
def get_px(stock, start):
    return web.get_data_yahoo(stock, start)['Adj Close']
today = dt.date.today()
start = str(dt.date(today.year-11, today.month, today.day))
px = pd.DataFrame({n: get_px(n, start) for n in stocklist})
px.ffill()
sma200 = pd.rolling_mean(px, 200)
I got the following result:
In [14]: px
Out[14]:
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 2836 entries, 2002-01-14 00:00:00 to 2013-01-11 00:00:00
Data columns:
BELG.BR 2270 non-null values
MSFT 2769 non-null values
dtypes: float64(2)
In [15]: sma200
Out[15]:
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 2836 entries, 2002-01-14 00:00:00 to 2013-01-11 00:00:00
Data columns:
BELG.BR 689 non-null values
MSFT 400 non-null values
dtypes: float64(2)
Any idea why most of the sma200 rolling_mean values are missing, and how to get the complete list?
px.ffill() returns a new DataFrame. To modify px itself, use inplace=True.
px.ffill(inplace=True)
sma200 = pd.rolling_mean(px, 200)
print(sma200)
yields
Data columns:
BELG.BR 2085 non-null values
MSFT 2635 non-null values
dtypes: float64(2)
If you print sma200, you will probably find lots of null or missing values. This is because the threshold on the number of non-null data points required is high by default for rolling_mean.
Try using
sma200 = pd.rolling_mean(px, 200, min_periods=2)
From the pandas docs:
min_periods: threshold of non-null data points to require (otherwise result is NA)
You could also try changing the size of the window if your dataset is missing many points.
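Note that pd.rolling_mean was removed in later pandas versions (and pandas.io.data moved to the separate pandas-datareader package); the modern equivalent uses the rolling window method:
# modern equivalent of pd.rolling_mean(px, 200, min_periods=2)
sma200 = px.rolling(window=200, min_periods=2).mean()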
