python/pandas: DataFrame inheritance and DataFrame update when 'inplace' is not possible

I am sorry, I am aware the title is somewhat fuzzy.
Context
I am using a DataFrame to keep track of files, because pandas DataFrame offers several relevant functions for all kinds of filtering a dict cannot do, with loc, pd.IndexSlice, .index, .columns, pd.MultiIndex...
This may not look like the best choice to expert developers (which I am not), but all these functions have been so handy that I have come to use a DataFrame for this.
And as the cherry on the cake, the __repr__ of a MultiIndex DataFrame is just perfect when I want to know what is inside my file list.
Quick introduction to Summary class, inheriting from DataFrame
Because my DataFrame, which I call 'Summary', has some specific functions, I would like to make it a class inheriting from the pandas DataFrame class.
It also has 'fixed' MultiIndexes, for both rows and columns.
Finally, because the Summary class is defined outside the Store class that actually manages file organization, Summary needs a function from Store to be able to retrieve the file organization.
Questions
The trouble with pd.DataFrame is that (AFAIK) you cannot append rows without creating a new DataFrame.
As Summary has a refresh function that recreates it by reading folder content, a refresh somehow 'resets' the Summary object.
To manage the Summary refresh, I came up with a first version of the code (not working) and finally a second one (working).
import pandas as pd
import numpy as np

# Dummy function
def summa(a, b):
    return a + b

# Does not work
class DatF1(pd.DataFrame):
    def __init__(self, meth, data=None):
        cmidx = pd.MultiIndex.from_arrays([['Index', 'Index'], ['First', 'Last']])
        rmidx = pd.MultiIndex(levels=[[], []], codes=[[], []],
                              names=['Component', 'Interval'])
        super().__init__(data=data, index=rmidx, columns=cmidx, dtype=np.datetime64)
        self.meth = meth

    def refresh(self):
        values = [[pd.Timestamp('2020/02/10 8:00'), pd.Timestamp('2020/02/10 8:00')],
                  [pd.Timestamp('2020/02/11 8:00'), pd.Timestamp('2020/02/12 8:00')]]
        rmidx = pd.MultiIndex.from_arrays([['Comp1', 'Comp1'], ['1h', '1W']],
                                          names=['Component', 'Interval'])
        self = pd.DataFrame(values, index=rmidx, columns=self.columns)
ex1 = DatF1(summa)

In [10]: ex1.meth(3,4)
Out[10]: 7

ex1.refresh()

In [11]: ex1
Out[11]: Empty DatF1
Columns: [(Index, First), (Index, Last)]
Index: []
After refresh(), ex1 is still empty. refresh has not worked correctly.
# Works
class DatF2(pd.DataFrame):
    def __init__(self, meth, data=None):
        cmidx = pd.MultiIndex.from_arrays([['Index', 'Index'], ['First', 'Last']])
        rmidx = pd.MultiIndex(levels=[[], []], codes=[[], []],
                              names=['Component', 'Interval'])
        super().__init__(data=data, index=rmidx, columns=cmidx, dtype=np.datetime64)
        self.meth = meth

    def refresh(self):
        values = [[pd.Timestamp('2020/02/10 8:00'), pd.Timestamp('2020/02/10 8:00')],
                  [pd.Timestamp('2020/02/11 8:00'), pd.Timestamp('2020/02/12 8:00')]]
        rmidx = pd.MultiIndex.from_arrays([['Comp1', 'Comp1'], ['1h', '1W']],
                                          names=['Component', 'Interval'])
        super().__init__(values, index=rmidx, columns=self.columns)
ex2 = DatF2(summa)

In [10]: ex2.meth(3,4)
Out[10]: 7

ex2.refresh()

In [11]: ex2
Out[11]:
                                 Index
                                 First                Last
Component Interval
Comp1     1h       2020-02-10 08:00:00 2020-02-10 08:00:00
          1W       2020-02-11 08:00:00 2020-02-12 08:00:00
This code works!
I have 2 questions:
Why is the 1st code not working? (I am sorry, this is maybe obvious, but I am completely ignorant of why it does not work.)
Is calling super().__init__ in my refresh method acceptable coding practice? (Or rephrased differently: is it acceptable to call super().__init__ in places other than the __init__ of my subclass?)
Thanks a lot for your help and advice. The world of class inheritance is quite new to me, and the fact that DataFrame content cannot be directly modified, so to say, seems to make it a step more difficult to handle.
Have a good day,
Best,
Error message when adding a new row
import pandas as pd
import numpy as np

# Dummy function
def new_rows():
    return [['Comp1', 'Comp1'], ['1h', '1W']]

# Does not work
class DatF1(pd.DataFrame):
    def __init__(self, meth, data=None):
        cmidx = pd.MultiIndex.from_arrays([['Index', 'Index'], ['First', 'Last']])
        rmidx = pd.MultiIndex(levels=[[], []], codes=[[], []],
                              names=['Component', 'Interval'])
        super().__init__(data=data, index=rmidx, columns=cmidx, dtype=np.datetime64)
        self.meth = meth

    def refresh(self):
        values = [[pd.Timestamp('2020/02/10 8:00'), pd.Timestamp('2020/02/10 8:00')],
                  [pd.Timestamp('2020/02/11 8:00'), pd.Timestamp('2020/02/12 8:00')]]
        rmidx = self.meth()
        self[rmidx] = values

ex1 = DatF1(new_rows)
ex1.refresh()
KeyError: "None of [MultiIndex([('Comp1', 'Comp1'),\n ( '1h', '1W')],\n names=['Component', 'Interval'])] are in the [index]"

Answers to your questions
why the 1st code is not working?
You are trying to call the constructor of the class you've inherited from and assign the result to self. Honestly, I don't know exactly what's happening in your case. I assumed this would produce an error, but you got an empty dataframe.
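One likely explanation, for what it's worth: assigning to self inside a method only rebinds the local name; the object the caller holds is never touched. A minimal sketch with a toy class (names are illustrative):

```python
class Box:
    def __init__(self):
        self.value = "old"

    def broken_refresh(self):
        # Rebinds the *local* name 'self' to a brand-new Box;
        # the caller's object is left unchanged.
        self = Box()
        self.value = "new"

b = Box()
b.broken_refresh()
print(b.value)  # "old"
```

This is exactly what happens with self = pd.DataFrame(...) in the first refresh: a new DataFrame is created and then lost when the method returns.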
is calling super().__init__ in my refresh method acceptable coding practice?
Maybe a legitimate use case exists for calling super().__init__ outside the __init__() method, but your case is not one of them. You have already inherited everything in your __init__(). Why use it again?
A better solution
The solution to your problem is unexpectedly simple, because you can append a row to a DataFrame in place with .loc:
df.loc['new_row'] = [value_1, value_2, ...]
Or in your case with a MultiIndex (see this SO post), using a full index tuple:
df.loc[('Comp1', '1h'), :] = [pd.Timestamp('2020/02/10 8:00'), pd.Timestamp('2020/02/10 8:00')]
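A runnable sketch of this in-place row append on a frame shaped like the question's (I've simplified the columns to a flat index; the labels are illustrative, and this relies on .loc setting-with-enlargement, which works on recent pandas):

```python
import pandas as pd

# One existing row with a (Component, Interval) MultiIndex
rmidx = pd.MultiIndex.from_tuples([('Comp1', '1h')],
                                  names=['Component', 'Interval'])
df = pd.DataFrame([[pd.Timestamp('2020-02-10 08:00'),
                    pd.Timestamp('2020-02-10 08:00')]],
                  index=rmidx, columns=['First', 'Last'])

# Append a row in place: assign to a full index tuple with .loc
df.loc[('Comp1', '1W'), :] = [pd.Timestamp('2020-02-11 08:00'),
                              pd.Timestamp('2020-02-12 08:00')]
print(len(df))
```

Because the object itself is mutated, this also works from inside a method of a DataFrame subclass, with no need to call super().__init__ again.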
Best practice
You should not inherit from pd.DataFrame. If you want to extend pandas, use the documented API.
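For example, the documented registration API lets you attach custom methods under a namespace without subclassing; a minimal sketch (the accessor name summary and its method are illustrative):

```python
import pandas as pd

@pd.api.extensions.register_dataframe_accessor("summary")
class SummaryAccessor:
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    def n_components(self):
        # Count distinct labels in the first index level
        return self._obj.index.get_level_values(0).nunique()

idx = pd.MultiIndex.from_tuples([('Comp1', '1h'), ('Comp1', '1W')],
                                names=['Component', 'Interval'])
df = pd.DataFrame({'First': [1, 2]}, index=idx)
print(df.summary.n_components())  # 1
```

Because the accessor wraps whatever DataFrame it is called on, it keeps working on the new frames that pandas operations return, which is exactly where a subclass tends to break.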

Related

Class method called in __init__ not giving same output as the same function used outside the class

I'm sure I'm missing something in how classes work here, but basically this is my class:
import pandas as pd
import numpy as np
import scipy.signal

# example DF with OHLC columns and 100 rows
gold = pd.DataFrame({'Open': [i for i in range(100)],
                     'Close': [i for i in range(100)],
                     'High': [i for i in range(100)],
                     'Low': [i for i in range(100)]})

class Backtest:
    def __init__(self, ticker, df):
        self.ticker = ticker
        self.df = df
        self.levels = pivot_points(self.df)

    def pivot_points(self, df, period=30):
        highs = scipy.signal.argrelmax(df.High.values, order=period)
        lows = scipy.signal.argrelmin(df.Low.values, order=period)
        return list(df.High[highs[0]]) + list(df.Low[lows[0]])

inst = Backtest('gold', gold)  # gold is a Pandas Dataframe with Open High Low Close columns and data
inst.levels  # This gives me the whole dataframe (inst.df) instead of the expected output of the pivot_points function (a list of integers)
The problem is that inst.levels returns the whole DataFrame instead of the return value of the function pivot_points (which is supposed to be a list of integers).
When I called the pivot_points function on the same DataFrame outside this class, I got the list I expected.
I expected to get the result of the pivot_points() function after assigning it to self.levels inside __init__, but instead I got the entire DataFrame.
You would have to address pivot_points() as self.pivot_points().
And there is no need to add period as an argument if you are not changing it; if you are, it's okay there.
I'm not sure if this helps, but here are some tips about your class:
class Backtest:
    def __init__(self, ticker, df):
        self.ticker = ticker
        self.df = df
        # no need to define an instance variable here, you can access the method directly
        # self.levels = pivot_points(self.df)

    def pivot_points(self):
        period = 30
        # period is a local variable of pivot_points, so I can access it directly
        print(f'period inside Backtest.pivot_points: {period}')
        # df is an instance variable and can be accessed in any method of Backtest after it is instantiated
        print(f'self.df inside Backtest.pivot_points(): {self.df}')
        # to get any values out of pivot_points we return some calculations
        return 1 + 1

    # if you do need an attribute like level, to access it via inst.level you can create a property
    @property
    def level(self):
        return self.pivot_points()

gold = 'some data'
inst = Backtest('gold', gold)  # gold is a Pandas Dataframe with Open High Low Close columns and data
print(f'inst.pivot_points() outside the class: {inst.pivot_points()}')
print(f'inst.level outside the class: {inst.level}')
This would be the result:
period inside Backtest.pivot_points: 30
self.df inside Backtest.pivot_points(): some data
inst.pivot_points() outside the class: 2
period inside Backtest.pivot_points: 30
self.df inside Backtest.pivot_points(): some data
inst.level outside the class: 2
Thanks to the commenter Henry Ecker I found that I had a function with the same name defined elsewhere in the file, whose output is the df. After changing that, my original code works as expected.

I added a row to a pandas Dataframe 3 times. However, only the last row is added

I wrote a function.
A row was appended to the Dataframe 3 times using append.
But the result contains only the last addition.
======
- Declaring the dataframe outside of the function first was an error, so I declared it inside the function.
- Later, I wrote the Dataframe outside of def AddDataframe(ymd, sb, vol):. Then I got an error. The error is below.
NameError: name 'Hisframe10' is not defined
import pandas as pd

def AddDataframe(ymd, sb, vol):
    data = {'yyyymmdd': [],
            'Sell': [],
            'Buy': [],
            'Volume': [],
            'JPX': [],
            'FutPrice': []}
    Hisframe8 = pd.DataFrame(data)
    print('')
    print('Hisframe8= ', Hisframe8)

    adddata = {'yyyymmdd': [ymd],
               'Sell': [sb],
               'Buy': ['Nan'],
               'Volume': [vol],
               'JPX': [-1],
               'FutPrice': [0.]}
    Hisframe10 = pd.DataFrame(adddata)
    return Hisframe8.append(Hisframe10)

AddDataframe('2019-05-03', 'sell', 123)
AddDataframe('2019-05-04', 'sell', 345)
AddDataframe('2019-05-05', 'sell', 456)
# Hisframe10  # err
======
I want the data frame to end up with all 3 added rows.
How should I do it?
https://imgur.com/i1lAB8M
You can make Hisframe8 global:
import pandas as pd

Hisframe8 = pd.DataFrame(columns=['yyyymmdd', 'Sell', 'Buy', 'Volume', 'JPX', 'FutPrice'])

def AddDataframe(ymd, sb, vol):
    global Hisframe8
    adddata = {'yyyymmdd': [ymd], 'Sell': [sb], 'Buy': ['Nan'],
               'Volume': [vol], 'JPX': [-1], 'FutPrice': [0.]}
    Hisframe10 = pd.DataFrame(adddata)
    Hisframe8 = Hisframe8.append(Hisframe10)

AddDataframe('2019-05-03', 'sell', 123)
AddDataframe('2019-05-04', 'sell', 345)
AddDataframe('2019-05-05', 'sell', 456)
print(Hisframe8)
It is not necessary to create an additional dataframe and append that to the first.
You can instead just append the dictionary to the existing df, as shown here: append dictionary to data frame.
Also, it may be better style to define the dataframe you are inserting into outside the function, since creating it is not the primary purpose of your function.
My suggestion:
import numpy as np
import pandas as pd

structure = {'date': [], 'Sell': [], 'Buy': [], 'Volume': [], 'JPX': [], 'FutPrice': []}
df = pd.DataFrame(structure)

def add_data(df, date, sb, vol):
    insert_dict = {'date': date, 'Sell': sb, 'Buy': np.nan,
                   'Volume': vol, 'JPX': -1, 'FutPrice': 0.}
    return df.append(insert_dict, ignore_index=True)

df_appended = add_data(df, '2019-01-01', 'sell', 456)
I hope this helps
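One caveat: DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so on current pandas the same pattern would use pd.concat instead (a sketch under that assumption):

```python
import numpy as np
import pandas as pd

structure = {'date': [], 'Sell': [], 'Buy': [], 'Volume': [], 'JPX': [], 'FutPrice': []}
df = pd.DataFrame(structure)

def add_data(df, date, sb, vol):
    # Build a one-row frame and concatenate it onto the existing one
    row = pd.DataFrame({'date': [date], 'Sell': [sb], 'Buy': [np.nan],
                        'Volume': [vol], 'JPX': [-1], 'FutPrice': [0.]})
    return pd.concat([df, row], ignore_index=True)

df = add_data(df, '2019-05-03', 'sell', 123)
df = add_data(df, '2019-05-04', 'sell', 345)
df = add_data(df, '2019-05-05', 'sell', 456)
print(len(df))  # 3
```

As in the answers above, the key point is to keep reassigning the returned frame; concat, like append, never mutates its inputs.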

Why does subclassing a DataFrame mutate the original object?

I am ignoring the warnings and trying to subclass a pandas DataFrame. My reasons for doing so are as follows:
I want to retain all the existing methods of DataFrame.
I want to set a few additional attributes at class instantiation, which will later be used to define additional methods that I can call on the subclass.
Here's a snippet:
class SubFrame(pd.DataFrame):
    def __init__(self, *args, **kwargs):
        freq = kwargs.pop('freq', None)
        ddof = kwargs.pop('ddof', None)
        super(SubFrame, self).__init__(*args, **kwargs)
        self.freq = freq
        self.ddof = ddof
        self.index.freq = pd.tseries.frequencies.to_offset(self.freq)

    @property
    def _constructor(self):
        return SubFrame
Here's a use example. Say I have the DataFrame
print(df)
col0 col1 col2
2014-07-31 0.28393 1.84587 -1.37899
2014-08-31 5.71914 2.19755 3.97959
2014-09-30 -3.16015 -7.47063 -1.40869
2014-10-31 5.08850 1.14998 2.43273
2014-11-30 1.89474 -1.08953 2.67830
where the index has no frequency
print(df.index)
DatetimeIndex(['2014-07-31', '2014-08-31', '2014-09-30', '2014-10-31',
'2014-11-30'],
dtype='datetime64[ns]', freq=None)
Using SubFrame allows me to specify that frequency in one step:
sf = SubFrame(df, freq='M')
print(sf.index)
DatetimeIndex(['2014-07-31', '2014-08-31', '2014-09-30', '2014-10-31',
'2014-11-30'],
dtype='datetime64[ns]', freq='M')
The issue is, this modifies df:
print(df.index.freq)
<MonthEnd>
What's going on here, and how can I avoid this?
Moreover, I profess to using copied code that I don't understand all that well. What is happening within __init__ above? Is it necessary to use args/kwargs with pop here? (Why can't I just specify params as usual?)
I'll add to the warnings. Not that I want to discourage you; I actually applaud your efforts.
However, this won't be the last of your questions as to what is going on.
That said, once you run:
super(SubFrame, self).__init__(*args, **kwargs)
self is a bona fide dataframe. You created it by passing another dataframe to the constructor.
Try this as an experiment:
d1 = pd.DataFrame(1, list('AB'), list('XY'))
d2 = pd.DataFrame(d1)
d2.index.name = 'IDX'

d1

     X  Y
IDX
A    1  1
B    1  1
So the observed behavior is consistent: when you construct one dataframe by passing another dataframe to the constructor, you end up pointing to the same objects.
To answer your question, subclassing isn't what allows the mutation of the original object; it's the way pandas constructs a dataframe from a passed dataframe.
Avoid this by instantiating with a copy
d2 = pd.DataFrame(d1.copy())
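A quick check that the copy severs the link (same toy frames as the experiment above):

```python
import pandas as pd

d1 = pd.DataFrame(1, list('AB'), list('XY'))
d2 = pd.DataFrame(d1.copy())
d2.index.name = 'IDX'
print(d1.index.name)  # None: the original is untouched
```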
What's going on in the __init__
You want to pass on all the args and kwargs to pd.DataFrame.__init__ with the exception of the specific kwargs that are intended for your subclass. In this case, freq and ddof. pop is a convenient way to grab the values and delete the key from kwargs before passing it on to pd.DataFrame.__init__
How I'd implement pipe
def add_freq(df, freq):
    df = df.copy()
    df.index.freq = pd.tseries.frequencies.to_offset(freq)
    return df

df = pd.DataFrame(dict(A=[1, 2]), pd.to_datetime(['2017-03-31', '2017-04-30']))
df.pipe(add_freq, 'M')

How to use Timer with a function that returns a dataframe?

I want to query an API every n minutes and parse the response into a dataframe. However, I am getting a TypeError when trying to do so:
TypeError: 'DataFrame' object is not callable
I have tried:
Running a function that returns a get request
Running a function that returns the above into a pandas dataframe
In #1, I get "'dict' object is not callable". In #2, I get the "'DataFrame' object is not callable" error (as shown above). Both work fine if I instead print the result of the functions, BUT I need to do computations on the result and hence require a dataframe to be returned.
It seems like I am missing something obvious. Can anyone please elucidate?
Reference:
from threading import Timer, Thread

def run_alert(time):
    t = Timer(time, print_query_results(*args))
    t.start()
EDIT #1:
The DataFrame object is the response from the API formatted into a 10x3 table:
import numpy as np
import pandas as pd
from pandas.io.json import json_normalize

    medium     source  pageviews
0   DIRECT  (not set)        xxx
1  ORGANIC     google        xxx
2  ORGANIC      yahoo        xxx
4  ORGANIC       bing        xxx
*        *          *          *
EDIT #2:
def print_query_results(ids, metrics, dimensions, filters, sort):
    # get results from request
    results = run_query(ids, metrics, dimensions, filters, sort)

    # convert json into dataframe
    cols = json_normalize(results['columnHeaders'])['name']
    rows = json_normalize(results, 'rows')
    cols_names = []
    for name in cols:
        cols_names.append(name.split(":")[1])
    df = pd.DataFrame(rows)
    df.columns = [cols_names]
    df.rename(columns={'pageviews': 'pageviews' + " " + strftime('%I:%M %p')}, inplace=True)
    df = df.convert_objects(convert_numeric=True)
    return df
Like I said in my comments (and there should really be appropriate tags and even import statements to show exactly what Timer/DataFrame are), your threading.Timer object wants something callable, so that when the time has elapsed it can spin up a thread running that chunk of computation.
When you pass Timer print_query_results(*args), the call print_query_results(*args) is evaluated by the interpreter before being passed to the Timer, so you get a dict or a DataFrame or whatever the function returns, not a function. One way to work around this would be:
t = Timer(time, lambda: print_query_results(*args))
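A self-contained sketch of the fix; the query function here is a stand-in that just builds a small DataFrame, since the real API call isn't shown:

```python
from threading import Timer

import pandas as pd

def query_results(n):
    # Stand-in for the real API call
    return pd.DataFrame({'pageviews': range(n)})

results = []

def run_alert(delay, n):
    # Pass a callable: the lambda defers the call until the timer fires
    t = Timer(delay, lambda: results.append(query_results(n)))
    t.start()
    return t

t = run_alert(0.1, 3)
t.join()  # wait for the timer thread to finish
print(len(results[0]))  # 3
```

The returned frame lands in results, where the main thread can pick it up for further computation.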

Propagate pandas series metadata through joins

I'd like to be able to attach metadata to the series of dataframes (specifically, the original filename), so that after joining two dataframes I can see metadata on where each of the series came from.
I see github issues regarding _metadata (here, here), including some relating to the current _metadata attribute (here), but nothing in the pandas docs.
So far I can modify the _metadata attribute to supposedly allow preservation of metadata, but get an AttributeError after the join.
df1 = pd.DataFrame(np.random.randint(0, 4, (6, 3)))
df2 = pd.DataFrame(np.random.randint(0, 4, (6, 3)))

df1._metadata.append('filename')
df1[df1.columns[0]]._metadata.append('filename')
for c in df1:
    df1[c].filename = 'fname1.csv'
    df2[c].filename = 'fname2.csv'

df1[0]._metadata     # ['name', 'filename']
df1[0].filename      # fname1.csv
df2[0].filename      # fname2.csv
df1[0][:3].filename  # fname1.csv

mgd = pd.merge(df1, df2, on=[0])
mgd['1_x']._metadata  # ['name', 'filename']
mgd['1_x'].filename   # raises AttributeError
Any way to preserve this?
Update: Epilogue
As discussed here, __finalize__ cannot keep track of Series that are members of a dataframe, only independent series. So for now I'll keep track of the Series-level metadata by maintaining a dictionary of metadata attached to the dataframes. My code looks like:
def cust_merge(d1, d2):
    "Custom merge function for 2 dicts"
    ...

def finalize_df(self, other, method=None, **kwargs):
    for name in self._metadata:
        if method == 'merge':
            lmeta = getattr(other.left, name, {})
            rmeta = getattr(other.right, name, {})
            newmeta = cust_merge(lmeta, rmeta)
            object.__setattr__(self, name, newmeta)
        else:
            object.__setattr__(self, name, getattr(other, name, None))
    return self

df1.filenames = {c: 'fname1.csv' for c in df1}
df2.filenames = {c: 'fname2.csv' for c in df2}
pd.DataFrame._metadata = ['filenames']
pd.DataFrame.__finalize__ = finalize_df
I think something like this will work (and if not, please file a bug report; while this is supported, it is a bit bleeding edge, i.e. it IS possible that the join methods don't call this all the time; that is a bit untested).
See this issue for a more detailed example/bug fix.
DataFrame._metadata = ['name', 'filename']

def __finalize__(self, other, method=None, **kwargs):
    """
    propagate metadata from other to self

    Parameters
    ----------
    other : the object from which to get the attributes that we are going
        to propagate
    method : optional, a passed method name ; possibly to take different
        types of propagation actions based on this
    """
    # you need to arbitrate when there are conflicts
    for name in self._metadata:
        object.__setattr__(self, name, getattr(other, name, None))
    return self

DataFrame.__finalize__ = __finalize__
So this replaces the default finalizer for DataFrame with your custom one. Where I have indicated, you need to put some code that can arbitrate between conflicts. This is the reason this is not done by default: e.g. frame1 has name 'foo' and frame2 has name 'bar'; what do you do when the method is __add__? What about another method? Let us know what you do and how it works out.
This is ONLY replacing the finalizer for DataFrame (and you can simply do the default action if you want), which is to propagate other to self; you can also not set anything except under special cases of method.
This method is meant to be overridden in sub-classes, which is why you are monkey patching here (rather than sub-classing, which is most of the time overkill).
