Consider the following MultiIndex pandas Series:
import pandas as pd
import numpy as np
val = np.array([ 0.4, -0.6, 0.6, 0.5, -0.4, 0.2, 0.6, 1.2, -0.4])
inds = [(-1000, 1921.6), (-1000, 1922.3), (-1000, 1923.0), (-500, 1921.6),
(-500, 1922.3), (-500, 1923.0), (-400, 1921.6), (-400, 1922.3),
(-400, 1923.0)]
names = ['pp_delay', 'wavenumber']
example = pd.Series(val)
example.index = pd.MultiIndex.from_tuples(inds, names=names)
example should now look like
pp_delay wavenumber
-1000 1921.6 0.4
1922.3 -0.6
1923.0 0.6
-500 1921.6 0.5
1922.3 -0.4
1923.0 0.2
-400 1921.6 0.6
1922.3 1.2
1923.0 -0.4
dtype: float64
I want to group example by pp_delay and select a range within each group using the wavenumber index and perform an operation on that subgroup. To clarify what I mean, I have a few examples.
Here is a position-based solution.
example.groupby(level="pp_delay").nth(list(range(1,3))).groupby(level="pp_delay").sum()
This gives:
pp_delay
-1000 0.0
-500 -0.2
-400 0.8
dtype: float64
Now the last two elements of each pp_delay group have been summed.
An alternative, more straightforward solution is to loop over the groups:
delays = example.index.levels[0]
res = np.zeros(delays.shape)
roi = slice(1922, 1924)
for i in range(3):
    res[i] = example[delays[i]][roi].sum()
res
gives
array([ 0. , -0.2, 0.8])
Anyhow, I don't like it much either, because it doesn't fit well with the usual pandas style.
What I would ideally want is something like:
example.groupby(level="pp_delay").loc[1922:1924].sum()
or maybe even something like
example[:, 1922:1924].sum()
But apparently pandas indexing doesn't work that way. Anybody got a better way?
Cheers
I'd skip the groupby
example.unstack(0).ix[1922:1924].sum()
pp_delay
-1000 0.0
-500 -0.2
-400 0.8
dtype: float64
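On newer pandas versions, where .ix has been removed, the same idea should work with .loc, or you can slice the wavenumber level directly; a minimal sketch (using the example Series from above, and assuming the index is sorted as shown):
# Label-based slice on the unstacked frame, equivalent to the .ix call above
example.unstack(0).loc[1922:1924].sum()
# Or keep the MultiIndex, slice the wavenumber level, then group by pp_delay
idx = pd.IndexSlice
example.loc[idx[:, 1922:1924]].groupby(level="pp_delay").sum()
Both should give the same 0.0 / -0.2 / 0.8 per pp_delay.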
I'm currently writing Python code that compares offensive and defensive stats in basketball, and I want to be able to create weights from the given stats. I have my stats saved in a dataframe by team, position, and other numerical stats. I want to be able to loop through each team and their respective positions and corresponding stats, e.g.:
['DAL', 'C', 0.0, 3.0, 0.5, 0.4, 0.5, 0.7, 6.4] vs ['BOS', 'C', 1.7, 6.0, 2.1, 0.1, 0.7, 1.9, 9.0]
So I would like to compare BOS vs DAL at the C position and compare points, rebounds, assists etc. If one is greater than the other then divide the greater by the lesser.
The best thing I have so far is to convert the dataframes to NumPy arrays and then loop through those, appending to a blank list:
df1 = df1.to_numpy()
df2 = df2.to_numpy()
df1_array = []
df2_array = []
for x in range(len(df1)):
    for a, h in zip(away, home):
        if df1[x][0] == a or df1[x][0] == h:
            df1_array.append(df1[x])
After I get the new arrays, I would then loop through them again to compare values; however, I feel like this is too rudimentary. What could be a more efficient or smarter way of executing this?
Use numpy.where to compare rows and return the truth value of ('team1' > 'team2') element-wise:
import pandas as pd
import numpy as np
# Creating the dataframe
team1 = ['DAL', 'C', 0.1, 3.0, 0.5, 0.4, 0.5, 0.7, 6.4]
team2 = ['BOS', 'C', 1.7, 6.0, 2.1, 0.1, 0.7, 1.9, 9.0]
df = pd.DataFrame(
{'team1':team1,
'team2':team2,
})
# Select the rows that contain numbers
df2 = df.iloc[2:].copy()
# Make the comparison: if team1 is larger than team2 then team1/team2, and vice versa
df2['result'] = np.where(df2['team1']>df2['team2'], \
df2['team1']/df2['team2'], \
df2['team2']/df2['team1'])
df['result'] = df2['result'].fillna(0)
This yields
team1 team2 result
0 DAL BOS NaN
1 C C NaN
2 0.1 1.7 17.0
3 3.0 6.0 2.0
4 0.5 2.1 4.2
5 0.4 0.1 4.0
6 0.5 0.7 1.4
7 0.7 1.9 2.714286
8 6.4 9.0 1.40625
Be careful with the 0.0 in DAL's first stat in your problem description, though; I changed it to 0.1, as it would otherwise give a ZeroDivisionError.
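If you eventually need this for whole frames rather than two single rows, one possible direction (a sketch only; the column names below are guesses, since the question only shows raw rows) is to align the two teams on position with merge and then apply the same np.where comparison to every stat column at once:
import pandas as pd
import numpy as np

# Hypothetical column names -- adjust to your real dataframe
cols = ['team', 'pos', 'stl', 'reb', 'ast', 'blk', 'tov', 'pf', 'pts']
away = pd.DataFrame([['DAL', 'C', 0.1, 3.0, 0.5, 0.4, 0.5, 0.7, 6.4]], columns=cols)
home = pd.DataFrame([['BOS', 'C', 1.7, 6.0, 2.1, 0.1, 0.7, 1.9, 9.0]], columns=cols)

# Align the two teams on position so matching rows line up
merged = away.merge(home, on='pos', suffixes=('_away', '_home'))
stats = [c for c in cols if c not in ('team', 'pos')]
a = merged[[s + '_away' for s in stats]].to_numpy(dtype=float)
h = merged[[s + '_home' for s in stats]].to_numpy(dtype=float)

# Greater divided by lesser, element-wise (same zero-division caveat as above)
ratios = np.where(a > h, a / h, h / a)
print(pd.DataFrame(ratios, columns=stats, index=merged['pos']))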
I have "reference population" (say, v=np.random.rand(100)) and I want to compute percentile ranks for a given set (say, np.array([0.3, 0.5, 0.7])).
It is easy to compute one by one:
def percentile_rank(x):
    return (v < x).sum() / len(v)
percentile_rank(0.4)
=> 0.4
(Actually, there is an out-of-the-box scipy.stats.percentileofscore, but it does not work on vectors.)
np.vectorize(percentile_rank)(np.array([0.3, 0.5, 0.7]))
=> [ 0.33 0.48 0.71]
This produces the expected results, but I have a feeling that there should be a built-in for this.
I can also cheat:
pd.concat([pd.Series([0.3, 0.5, 0.7]),pd.Series(v)],ignore_index=True).rank(pct=True).loc[0:2]
0 0.330097
1 0.485437
2 0.718447
This is bad on two counts:
I don't want the test data [0.3, 0.5, 0.7] to be a part of the ranking.
I don't want to waste time computing ranks for the reference population.
So, what is the idiomatic way to accomplish this?
Setup:
In [62]: v=np.random.rand(100)
In [63]: x=np.array([0.3, 0.4, 0.7])
Using Numpy broadcasting:
In [64]: (v<x[:,None]).mean(axis=1)
Out[64]: array([ 0.18, 0.28, 0.6 ])
Check:
In [67]: percentile_rank(0.3)
Out[67]: 0.17999999999999999
In [68]: percentile_rank(0.4)
Out[68]: 0.28000000000000003
In [69]: percentile_rank(0.7)
Out[69]: 0.59999999999999998
I think pd.cut can do that
s=pd.Series([-np.inf,0.3, 0.5, 0.7])
pd.cut(v,s,right=False).value_counts().cumsum()/len(v)
Out[702]:
[-inf, 0.3) 0.37
[0.3, 0.5) 0.54
[0.5, 0.7) 0.71
dtype: float64
Result from your function
np.vectorize(percentile_rank)(np.array([0.3, 0.5, 0.7]))
Out[696]: array([0.37, 0.54, 0.71])
You can use quantile:
np.random.seed(123)
v=np.random.rand(100)
s = pd.Series(v)
arr = np.array([0.3,0.5,0.7])
s.quantile(arr)
Output:
0.3 0.352177
0.5 0.506130
0.7 0.644875
dtype: float64
I know I am a little late to the party, but wanted to add that pandas has another way to get what you are after with Series.rank. Just use the pct=True option.
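As a rough sketch of that suggestion (note that, unlike the broadcasting answer above, rank works within the series itself, so the points you want ranks for would have to be part of the series):
import numpy as np
import pandas as pd

v = np.random.rand(100)

# Percentile rank of every element of v within v itself
pct = pd.Series(v).rank(pct=True)
print(pct.head())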
I would like to convert all the values in a pandas dataframe from strings to floats. My dataframe contains various NaN values (e.g. NaN, NA, None). For example,
import pandas as pd
import numpy as np
my_data = np.array([[0.5, 0.2, 0.1], ["NA", 0.45, 0.2], [0.9, 0.02, "N/A"]])
df = pd.DataFrame(my_data, dtype=str)
I have found here and here (among other places) that convert_objects might be the way to go. However, I get a message that it is deprecated (I am using Pandas 0.17.1) and should instead use to_numeric.
df2 = df.convert_objects(convert_numeric=True)
Output:
FutureWarning: convert_objects is deprecated. Use the data-type specific converters pd.to_datetime, pd.to_timedelta and pd.to_numeric.
But to_numeric doesn't seem to actually convert the strings.
df3 = pd.to_numeric(df, errors='coerce')
Output:
df2:
0 1 2
0 0.5 0.20 0.1
1 NaN 0.45 0.2
2 0.9 0.02 NaN
df2 dtypes:
0 float64
1 float64
2 float64
dtype: object
df3:
0 1 2
0 0.5 0.2 0.1
1 NA 0.45 0.2
2 0.9 0.02 N/A
df3 dtypes:
0 object
1 object
2 object
dtype: object
Should I use convert_objects and deal with the warning message, or is there a proper way to do what I want with to_numeric?
Strangely this works:
In [11]:
df.apply(lambda x: pd.to_numeric(x, errors='coerce'))
Out[11]:
0 1 2
0 0.5 0.20 0.1
1 NaN 0.45 0.2
2 0.9 0.02 NaN
It seems that it's not able to coerce the entire df for some reason, which is a little surprising.
If you hate typing (thanks to @Zero), then you can just use:
df.apply(pd.to_numeric, errors='coerce')
You can try replace and astype:
import pandas as pd
import numpy as np
my_data = np.array([[0.5, 0.2, 0.1], ["NA", 0.45, 0.2], [0.9, 0.02, "N/A"]])
df = pd.DataFrame(my_data, dtype=str)
print(df.replace({r'N': np.nan}, regex=True).astype(float))
0 1 2
0 0.5 0.20 0.1
1 NaN 0.45 0.2
2 0.9 0.02 NaN
Considering the following pandas dataframe:
import pandas as pd
import numpy as np
change = [0.475, 0.625, 0.1, 0.2, -0.1, -0.75, 0.1, -0.1, 0.2, -0.2]
position = [1.0, 1.0, np.nan, np.nan, np.nan, -1.0, np.nan, np.nan, np.nan, np.nan]
date = ['20150101', '20150102', '20150103', '20150104', '20150105', '20150106', '20150107', '20150108', '20150109', '20150110']
pd.DataFrame({'date': date, 'position': position, 'change': change})
Outputs
date      change  position
20150101   0.475       1
20150102   0.625       1
20150103   0.1        NaN
20150104   0.2        NaN
20150105  -0.1        NaN
20150106  -0.75       -1
20150107   0.1        NaN
20150108  -0.1        NaN
20150109   0.2        NaN
20150110  -0.2        NaN
I want to fillna with the following rules:
For rows whose "position" value is NaN, if the value of "change" has the same sign as the last non-null value of "position" (change * position > 0, such as 0.1 * 1 and 0.2 * 1 > 0), we fillna with that last non-null value.
For rows whose "position" value is NaN, if the value of "change" has the opposite sign to the last non-null value of "position" (change * position <= 0, such as -1 * 0.1), we fillna with 0.
Once one NaN is filled with 0, the following NaNs will be filled with 0 as well.
The following are the expected results from the sample data frame:
date change position
20150101 0.475 1
20150102 0.625 1
20150103 0.1 1
20150104 0.2 1
20150105 -0.1 0
20150106 -0.75 -1
20150107 0.1 0
20150108 -0.1 0
20150109 0.2 0
20150110 -0.2 0
EDIT:
The method I developed is the following:
while any(np.isnan(x['position'])):
    conditions = [(np.isnan(x['position'])) & (x['position'].shift(1) * x['change'] > 0),
                  (np.isnan(x['position'])) & (x['position'].shift(1) * x['change'] <= 0)]
    choices = [x['position'].shift(1), 0]
    x['position'] = np.select(conditions, choices, default=x['position'])
but as you can see, it is not very satisfying, and it is very slow if you have 80,000,000 rows of data.
Any suggestions? Thanks for the help!
I think your code is pretty solid; the main issue is that you are iterating through it more times than you need to. shift() only goes back one line at a time, but if you change that to fillna(method='ffill') then you essentially get an unlimited number of shifts, and only have to do this once instead of over multiple iterations (how many iterations will depend on your data).
conditions = [
    (np.isnan(x['position'])) & (x['position'].fillna(method='ffill') * x['change'] > 0),
    (np.isnan(x['position'])) & (x['position'].fillna(method='ffill') * x['change'] <= 0)]
But I believe you can go one step further and eliminate the while by adding another fillna at the end:
conditions = [
    (np.isnan(x['position'])) & (x['position'].fillna(method='ffill') * x['change'] > 0),
    (np.isnan(x['position'])) & (x['position'].fillna(method='ffill') * x['change'] <= 0)]
choices = [x['position'].shift(1), 0]
x['position'] = np.select(conditions, choices, default=x['position'])
x['position'] = x['position'].fillna(method='ffill')
On your sample data, the first change is about 2x faster than your code, and the second is about 4x. I get the same answers as you, but of course you'll want to verify this on the real data to be sure.
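For reference, here is a self-contained sketch of the second (no-while) variant applied to the sample frame from the question; on this data it reproduces the expected position column:
import numpy as np
import pandas as pd

change = [0.475, 0.625, 0.1, 0.2, -0.1, -0.75, 0.1, -0.1, 0.2, -0.2]
position = [1.0, 1.0, np.nan, np.nan, np.nan, -1.0, np.nan, np.nan, np.nan, np.nan]
date = ['20150101', '20150102', '20150103', '20150104', '20150105',
        '20150106', '20150107', '20150108', '20150109', '20150110']
x = pd.DataFrame({'date': date, 'position': position, 'change': change})

# Last known (forward-filled) position; .ffill() is equivalent to fillna(method='ffill')
ffilled = x['position'].ffill()
conditions = [
    np.isnan(x['position']) & (ffilled * x['change'] > 0),
    np.isnan(x['position']) & (ffilled * x['change'] <= 0)]
choices = [x['position'].shift(1), 0]
x['position'] = np.select(conditions, choices, default=x['position'])
x['position'] = x['position'].ffill()
print(x)  # position comes out as 1, 1, 1, 1, 0, -1, 0, 0, 0, 0 here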
I'm looking for a good way to store and use conditional probabilities in python.
I'm thinking of using a pandas dataframe. If the conditional probabilities of some X are P(X=A|P1=1, P2=1) = 0.2, P(X=A|P1=2, P2=1) = 0.9, etc., I would use the dataframe
A B
P1 P2
1 1 0.2 0.8
2 0.5 0.5
2 1 0.9 0.1
2 0.9 0.1
and given the marginal probabilities of P1 and P2 as Series
1 0.4
2 0.6
Name: P1
1 0.7
2 0.3
Name: P2
I would like to obtain the Series of marginal probabilities of X, i.e. the series
A    0.656
B    0.344
Name: X
I can get what I want by
X = sum(
    sum(
        X.xs(i, level="P1") * P1[i]
        for i in P1.index
    ).xs(j) * P2[j]
    for j in P2.index
)
X.name="X"
but this is not easily generalizable to more dependencies, the asymmetry between the first xs (with a level) and the second one (without) looks weird, and, as usual when working with pandas, I'm very sure that there is a better solution using its tricks and methods.
Is pandas a good tool for this, should I represent my data in another way, and what is the best way to do this calculation, which is essentially an indexed tensor product, in pandas?
One way to vectorize this is to access the values in Series P1 and P2 by indexing with an array of labels.
In [20]: df = X.reset_index()
In [21]: mP1 = P1[df.P1].values
In [22]: mP2 = P2[df.P2].values
In [23]: mP1
Out[23]: array([ 0.4, 0.4, 0.6, 0.6])
In [24]: mP2
Out[24]: array([ 0.7, 0.3, 0.7, 0.3])
In [25]: mp = mP1 * mP2
In [26]: mp
Out[26]: array([ 0.28, 0.12, 0.42, 0.18])
In [27]: X.mul(mp, axis=0)
Out[27]:
A B
P1 P2
1 1 0.056 0.224
2 0.060 0.060
2 1 0.378 0.042
2 0.162 0.018
In [28]: X.mul(mp, axis=0).sum()
Out[28]:
A 0.656
B 0.344
In [29]: sum(
sum(
X.xs(i, level="P1")*P1[i]
for i in P1.index
).xs(j)*P2[j]
for j in P2.index
)
Out[29]:
A 0.656
B 0.344
(Alternatively, access the values using the MultiIndex level values, without resetting the index, as follows.)
In [38]: P1[X.index.get_level_values("P1")].values
Out[38]: array([ 0.4, 0.4, 0.6, 0.6])
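Putting those pieces together, the whole marginal can be computed without reset_index, and the pattern also suggests how to handle more parents: one weight lookup per level, multiplied into a single weight vector (a sketch built from the steps above):
# One lookup per parent level, multiplied into a single weight vector
w = (P1[X.index.get_level_values("P1")].values
     * P2[X.index.get_level_values("P2")].values)
marginal = X.mul(w, axis=0).sum()
marginal.name = "X"
# marginal should be A 0.656, B 0.344, as in Out[28]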