How to load a long panel dataset in Pandas?

I have a panel dataset in long format; that is, observations in the data are at the Panel_ID-Day level. I have, say, m Panel_IDs, and each Panel_ID has T(m) Day observations.
For instance, the data would look like this. I show an example with 2 panel IDs (1 and 2) but the data contains a lot of them. X is one variable of interest.
Panel_ID Day X
1 2-feb 5
1 3-feb 4.3
1 5-feb 3
2 2-feb 0
2 5-feb 0.5
2 8-feb 3.2
etc. Days are not necessarily the same across Panel_IDs and each Panel_ID has its own number of daily observations.
How can I load this dataset in Pandas so that Pandas recognizes its panel structure?
Many thanks!

Just load it normally, with read_csv() or whatever. I copied your data and used read_clipboard() myself.
Then, set the index:
df = df.set_index(['Panel_ID','Day'])
X
Panel_ID Day
1 2-feb 5.0
3-feb 4.3
5-feb 3.0
2 2-feb 0.0
5-feb 0.5
8-feb 3.2
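For completeness, a minimal hedged sketch of the whole load-and-index step, assuming the data sits in a CSV file (the filename panel_data.csv is made up for the example):
import pandas as pd

# Hypothetical file with the columns Panel_ID, Day and X from the question
df = pd.read_csv("panel_data.csv")

# Declare the panel structure by making (Panel_ID, Day) a MultiIndex
df = df.set_index(["Panel_ID", "Day"]).sort_index()

# Panel-style selections then come for free, e.g. all observations for Panel_ID 1
print(df.loc[1])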
If you want, you can stop there. If you want to convert the DataFrame to a Panel, that is easy once the df is indexed (note that Panel and to_panel() were deprecated and later removed in newer pandas versions, so the multi-indexed DataFrame is now the way to go):
pan = df.to_panel()
Honestly, I generally prefer to keep things as a multi-indexed DataFrame rather than add the complexity of the Panel structure, but you can do it either way. Note that even keeping it as a standard DataFrame, you can do lots of reshaping easily with things like stack(). For example, convert from narrow to wide with unstack():
df.unstack(level=1)
X
Day 2-feb 3-feb 5-feb 8-feb
Panel_ID
1 5 4.3 3.0 NaN
2 0 NaN 0.5 3.2
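And stack() reverses the reshape, so you can round-trip between the long and wide layouts; a quick sketch (the variable names wide and long_again are mine):
wide = df.unstack(level=1)
long_again = wide.stack()  # older pandas drops the NaN cells created by unstack; newer versions may need an explicit .dropna()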
Also see the pandas documentation on reshaping.

Related

Deleting entire rows of a dataset for outliers found in a single column

I am currently trying to remove the outlier values from my dataset, using the median absolute deviation method.
To do so, I followed the instructions given by @tanemaki in Detect and exclude outliers in Pandas data frame, which enable the deletion of entire rows that hold at least one outlier value.
In the post I linked, the same question was asked, but was not answered.
The problem is that I only want the outliers to be searched in a single column.
So, for example, my dataframe looks like:
Temperature Date
1 24.72 2.3
2 25.76 4.6
3 25.42 7.0
4 40.31 9.3
5 26.21 15.6
6 26.59 17.9
For example, there are two anomalies in the data:
The Temperature value in row [4]
The Date value in row [5]
So, what I want is for the outlier function to only 'notice' the anomaly in the Temperature column, and delete its corresponding row.
The outlier code I am using is:
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_excel(r'/home/.../myfile.xlsx')
df[pd.isnull(df)] = 0  # replace NaNs with 0 before computing z-scores
dfn = df[(np.abs(stats.zscore(df)) < 4).all(axis=1)]  # @tanemaki's approach
print(dfn)
And my resulting data frame currently looks like:
Temperature Date
1 24.72 2.3
2 25.76 4.6
3 25.42 7.0
6 26.59 17.9
In case I am not getting my message across, the desired output would be:
Temperature Date
1 24.72 2.3
2 25.76 4.6
3 25.42 7.0
5 26.21 15.6
6 26.59 17.9
Any pointers would be of great help. Thanks!
You can always limit the stats.zscore operation to only the Temperature column instead of the whole df. Like this maybe:
In [573]: dfn = df[(np.abs(stats.zscore(df['Temperature']))<4)]
In [574]: dfn
Out[574]:
Temperature Date
1 24.72 2.3
2 25.76 4.6
3 25.42 7.0
5 26.21 15.6
6 26.59 17.9
At the moment, you're calculating the zscores for the whole dataframe and then filtering the dataframe with those calculated scores; what you want to do is just apply the same idea to one column.
Instead of
dfn=df[(np.abs(stats.zscore(df))<4).all(axis=1)]
You want to have
df[np.abs(stats.zscore(df["Temperature"])) < 4]
As a side note, I found that I was unable to get your example results by comparing the zscores to 4; I had to switch it down to 2.
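For reference, a self-contained sketch of the single-column approach, rebuilt from the numbers in the question and using the threshold of 2 mentioned in the side note (the variable name threshold is mine):
import numpy as np
import pandas as pd
from scipy import stats

# Example frame copied from the question
df = pd.DataFrame(
    {"Temperature": [24.72, 25.76, 25.42, 40.31, 26.21, 26.59],
     "Date": [2.3, 4.6, 7.0, 9.3, 15.6, 17.9]},
    index=range(1, 7),
)

# z-scores for the Temperature column only; rows whose Temperature is an outlier are dropped
threshold = 2
dfn = df[np.abs(stats.zscore(df["Temperature"])) < threshold]
print(dfn)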

Plotting by Index with different labels

I am using pandas and matplotlib to generate some charts.
My DataFrame:
Journal Papers per year in journal
0 Information and Software Technology 4
1 2012 International Conference on Cyber Securit... 4
2 Journal of Network and Computer Applications 4
3 IEEE Security & Privacy 5
4 Computers & Security 11
My DataFrame is the result of a groupby on a larger dataframe. What I want now is a simple bar chart, which in theory works fine with df_groupby_time.plot(kind='bar'). However, the default plot is not what I'm after.
What I want are different colored bars, and a legend which states which color corresponds to which paper.
Playing around with relabeling hasn't gotten me anywhere so far. And I have no idea anymore on how to achieve what I want.
EDIT:
Resetting the index and plotting isn't what I want:
df_groupby_time.set_index("Journals").plot(kind='bar')
I found a solution, based on another SO question.
So, the dataframe needs to be transformed into a matrix where the values exist only on the main diagonal.
First, I save the Journal column for later in a variable.
new_cols = df["Journal"].values
Secondly, I wrote a function that takes a series (the Papers per year in journal column) and the previously saved new column labels as input parameters, and returns a dataframe where the values sit only on the main diagonal:
def values_into_main_diagonal(some_series, new_cols):
    """Puts the values of a series onto the main diagonal of a new df.
    some_series - any series given
    new_cols - the new column labels as list or numpy.ndarray"""
    x = [{i: some_series[i]} for i in range(len(some_series))]
    main_diag_df = pd.DataFrame(x)
    main_diag_df.columns = new_cols
    return main_diag_df
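A hedged example of the call, assuming the grouped frame is named df as above:
new_df = values_into_main_diagonal(df["Papers per year in journal"], new_cols)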
Thirdly, feeding the function the Papers per year in journal column and our saved new column names returns the following dataframe:
new_df:
1_journal 2_journal 3_journal 4_journal 5_journal
0 4 NaN NaN NaN NaN
1 NaN 4 NaN NaN NaN
2 NaN NaN 4 NaN NaN
3 NaN NaN NaN 5 NaN
4 NaN NaN NaN NaN 11
Finally, plotting the new_df via new_df.plot(kind='bar', stacked=True) gives me what I want: the journals in different colors in the legend and NOT on the axis.
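For reference, a compact sketch of the same idea rebuilt from the numbers in the question; np.diag plus where() is simply an alternative way to construct the diagonal frame that the helper function above produces (the names counts and diag_df are mine):
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Data shaped like the grouped dataframe in the question
df = pd.DataFrame({
    "Journal": [
        "Information and Software Technology",
        "2012 International Conference on Cyber Securit...",
        "Journal of Network and Computer Applications",
        "IEEE Security & Privacy",
        "Computers & Security",
    ],
    "Papers per year in journal": [4, 4, 4, 5, 11],
})

new_cols = df["Journal"].values
counts = df["Papers per year in journal"].to_numpy()

# Put each count on the main diagonal and turn the off-diagonal entries into NaN
diag_df = pd.DataFrame(np.diag(counts), columns=new_cols)
diag_df = diag_df.where(np.eye(len(df), dtype=bool))

# Stacked bars: each column gets its own colour and its own legend entry
diag_df.plot(kind="bar", stacked=True)
plt.tight_layout()
plt.show()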

Rolling standard deviation with Pandas, and NaNs

I have data that looks like this:
1472698113000000000 -28.84
1472698118000000000 -26.69
1472698163000000000 -27.65
1472698168000000000 -26.1
1472698238000000000 -27.33
1472698243000000000 -26.47
1472698248000000000 -25.24
1472698253000000000 -25.53
1472698283000000000 -27.3
...
This is a time series that grows. Each time it grows, I attempt to get the rolling standard deviation of the set, using pandas.rolling_std. Each time, the result includes NaNs, which I cannot use (I am trying to insert the result into InfluxDB, and it complains when it sees the NaNs.)
I've experimented with different window sizes. I am doing this on different series, of varying rates of growth and current sizes (some just a couple of measurements long, some hundreds or thousands).
Simply, I just want to have a rolling standard deviation in InfluxDB so that I can graph it and watch how the source data is changing over time, with respect to its mean. How can I overcome this NaN problem?
If you are doing something like
df.rolling(5).std()
and getting
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 NaN NaN
4 5.032395e+10 1.037386
5 5.345559e+10 0.633024
6 4.263215e+10 0.967352
7 3.510698e+10 0.822879
8 1.767767e+10 0.971972
You can strip away the NaNs by using .dropna().
df.rolling(5).std().dropna():
4 5.032395e+10 1.037386
5 5.345559e+10 0.633024
6 4.263215e+10 0.967352
7 3.510698e+10 0.822879
8 1.767767e+10 0.971972
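For reference, a small end-to-end sketch under the same assumptions: the epoch timestamps from the sample are used as the index, the window is 5, and the leading NaNs are dropped before writing anywhere (the names raw, s and rolling_sd are mine):
import pandas as pd

# First rows of the sample data from the question
raw = [
    (1472698113000000000, -28.84),
    (1472698118000000000, -26.69),
    (1472698163000000000, -27.65),
    (1472698168000000000, -26.10),
    (1472698238000000000, -27.33),
    (1472698243000000000, -26.47),
    (1472698248000000000, -25.24),
    (1472698253000000000, -25.53),
    (1472698283000000000, -27.30),
]
s = pd.Series(
    [v for _, v in raw],
    index=pd.to_datetime([t for t, _ in raw], unit="ns"),
)

# Rolling standard deviation (modern replacement for the old pd.rolling_std)
rolling_sd = s.rolling(5).std()

# The first window-1 entries are NaN by construction; drop them before inserting into InfluxDB
# (rolling(5, min_periods=2) would instead shrink the NaN run to just the first entry)
rolling_sd = rolling_sd.dropna()
print(rolling_sd)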

Grouping records with close DateTimes in Python pandas DataFrame

I have been spinning my wheels with this problem and was wondering if anyone has any insight on how best to approach it. I have a pandas DataFrame with a number of columns, including one datetime64[ns]. I would like to find some way to 'group' records together which have datetimes which are very close to one another. For example, I might be interested in grouping the following transactions together if they occur within two seconds of each other by assigning a common ID called Grouped ID:
Transaction ID Time Grouped ID
1 08:10:02 1
2 08:10:03 1
3 08:10:50
4 08:10:55
5 08:11:00 2
6 08:11:01 2
7 08:11:02 2
8 08:11:03 3
9 08:11:04 3
10 08:15:00
Note that I am not looking to have the time window expand ad infinitum if transactions continue to occur at quick intervals - once a full 2 second window has passed, a new window would begin with the next transaction (as shown in transactions 5 - 9). Additionally, I will ultimately be performing this analysis at the millisecond level (i.e. combining transactions within 50 ms), but I stuck with seconds for ease of presentation above.
Thanks very much for any insight you can offer!
The solution I suggest requires you to reindex your data with your Time data.
You can use a list of datetimes with the desired frequency, use searchsorted to find the nearest datetimes in your index, and then use it for slicing (as suggested in the questions python pandas dataframe slicing by date conditions and Python pandas, how to truncate DatetimeIndex and fill missing data only in certain interval).
I'm using pandas 0.14.1 and the DateOffset object (http://pandas.pydata.org/pandas-docs/dev/timeseries.html?highlight=dateoffset). I didn't check with datetime64, but I guess you can adapt the code. DateOffset goes down to the microsecond level.
Using the following code,
import pandas as pd
import pandas.tseries.offsets as pto
import numpy as np

# Create some test data
d_size = 15
df = pd.DataFrame({"value": np.arange(d_size)},
                  index=pd.date_range("2014/11/03", periods=d_size, freq=pto.Milli()))

# Define the periods that delimit the groups (ticks)
ticks = pd.date_range("2014/11/03", periods=d_size // 3, freq=5 * pto.Milli())

# Find the nearest index positions matching the ticks
index_ticks = np.unique(df.index.searchsorted(ticks))

# Make a dataframe that will hold the group ids
dgroups = pd.DataFrame(index=df.index, columns=['Group id'])

# Set the group ids (mini/maxi are integer positions, so slice with .iloc)
for i, (mini, maxi) in enumerate(zip(index_ticks[:-1], index_ticks[1:])):
    dgroups.iloc[mini:maxi] = i

# Update the original dataframe
df['Group id'] = dgroups['Group id']
I was able to obtain this kind of dataframe:
value Group id
2014-11-03 00:00:00 0 0
2014-11-03 00:00:00.001000 1 0
2014-11-03 00:00:00.002000 2 0
2014-11-03 00:00:00.003000 3 0
2014-11-03 00:00:00.004000 4 0
2014-11-03 00:00:00.005000 5 1
2014-11-03 00:00:00.006000 6 1
2014-11-03 00:00:00.007000 7 1
2014-11-03 00:00:00.008000 8 1
2014-11-03 00:00:00.009000 9 1
2014-11-03 00:00:00.010000 10 2
2014-11-03 00:00:00.011000 11 2
2014-11-03 00:00:00.012000 12 2
2014-11-03 00:00:00.013000 13 2
2014-11-03 00:00:00.014000 14 2
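Continuing from the df built above, the new column can be fed straight into groupby; for example, to get the row count and mean value per bucket (the aggregation is only an illustration):
print(df.groupby('Group id')['value'].agg(['count', 'mean']))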

Combining two columns from two dataframes; same indices but different lengths

Please be advised, I am a beginning programmer and a beginning python/pandas user. I'm a behavioral scientist and learning to use pandas to process and organize my data. As a result, some of this might seem completely obvious and it may seem like a question not worthy of the forum. Please have tolerance! To me, this is days of work, and I have indeed spent hours trying to figure out the answer to this question already. Thanks in advance for any help.
My data look like this. The "real" Actor and Recipient data are always 5-digit numbers, and the "Behavior" data are always letter codes. My problem is that I also use this format for special lines, denoted by markers like "date" or "s" in the Actor column. These markers indicate that the "Behavior" column holds this special type of data, and not actual Behavior data. So, I want to replace the markers in the Actor column with NaN values, and grab the special data from the behavior column to put in another column (in this example, the empty Activity column).
follow Activity Actor Behavior Recipient1
0 1 NaN date 2.1.3.2012 NaN
1 1 NaN s ss.hx NaN
2 1 NaN 50505 vo 51608
3 1 NaN 51608 vr 50505
4 1 NaN s ss.he NaN
So far, I have written some code in pandas to select out the "s" lines into a new dataframe:
def get_act_line(group):
    return group.ix[(group.Actor == 's')]

result = trimdata.groupby('follow').apply(get_act_line)
I've copied over the Behavior column in this dataframe to the Activity column, and replaced the Actor and Behavior values with NaN:
result.Activity = result.Behavior
result.Behavior = np.nan
result.Actor = np.nan
result.head()
So my new dataframe looks like this:
follow follow Activity Actor Behavior Recipient1
1 2 1 ss.hx NaN NaN NaN
34 1 hf.xa NaN NaN f.53702
74 1 hf.fe NaN NaN NaN
10 1287 10 ss.hf NaN NaN db
1335 10 fe NaN NaN db
What I would like to do now is to combine this dataframe with the original, replacing all of the values in these selected rows, but maintaining values for the other rows in the original dataframe.
This may seem like a simple question with an obvious solution, or perhaps I have gone about it all wrong to begin with!
I've worked through Wes McKinney's book, I've read the documentation on different types of merges, mapping, joining, transformations, concatenations, etc. I have browsed the forums and have not found an answer that helps me to figure this out. Your help will be very much appreciated.
One way you can do this (though there may be more optimal or elegant ways) is:
mask = (df['Actor'] == 's')
df['Activity'] = df[mask]['Behavior']
df.ix[mask, 'Actor'] = np.nan
where df is your original dataframe (the mask re-selects the 's' rows from scratch) and the last line blanks out the 's' markers in the Actor column. This should return (my column order is slightly different):
Activity Actor Behavior Recipient1 follow
0 NaN date 2013-04-01 00:00:00 NaN 1
1 ss.hx NaN ss.hx NaN 1
2 NaN 50505 vo 51608 1
3 NaN 51608 vr 50505 1
4 ss.he NaN ss.hx NaN 1
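For newer pandas versions, where .ix has been removed, the same idea works with .loc; a minimal sketch on a toy frame shaped like the question's data (column names copied from the post):
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "follow":     [1, 1, 1, 1, 1],
    "Activity":   [np.nan] * 5,
    "Actor":      ["date", "s", "50505", "51608", "s"],
    "Behavior":   ["2.1.3.2012", "ss.hx", "vo", "vr", "ss.he"],
    "Recipient1": [np.nan, np.nan, "51608", "50505", np.nan],
})

mask = df["Actor"] == "s"
df.loc[mask, "Activity"] = df.loc[mask, "Behavior"]  # move the special code into Activity
df.loc[mask, "Actor"] = np.nan                       # blank out the 's' marker
print(df)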
References:
Explanation of df.ix from another SO post.
