How to update a matrix of probabilities - python

I am trying to find/figure out a function that can update probabilities.
Suppose there are three players and each of them gets a fruit out of a basket: ["apple", "orange", "banana"]
I store the probabilities of each player having each fruit in a matrix (like this table):
          apple   orange  banana
Player 1  0.3333  0.3333  0.3333
Player 2  0.3333  0.3333  0.3333
Player 3  0.3333  0.3333  0.3333
The table can be interpreted as the belief of someone (S) who doesn't know who has what. Each row and column sums to 1.0 because each player has exactly one of the fruits and each fruit belongs to exactly one player.
I want to update these probabilities based on some knowledge that S gains. Example information:
Player 1 did X. We know that Player 1 does X with 80% probability if he has an apple. With 50% if he has an orange. With 10% if he has a banana.
This can be written more concisely as [0.8, 0.5, 0.1] and let us call it reach_probability.
A fairly easy to comprehend example is:
probabilities = [
    [0.5, 0.5, 0.0],
    [0.0, 0.5, 0.5],
    [0.5, 0.0, 0.5],
]
# Player 1's
reach_probability = [1.0, 0.0, 1.0]
new_probabilities = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]
The above example can be fairly easily thought through.
Another example:
probabilities = [
    [0.25, 0.25, 0.50],
    [0.25, 0.50, 0.25],
    [0.50, 0.25, 0.25],
]
# Player 1's
reach_probability = [1.0, 0.5, 0.5]
new_probabilities = [
    [0.4, 0.2, 0.4],
    [0.2, 0.5, 0.3],
    [0.4, 0.3, 0.3],
]
In my use case running a simulation is not an option, and my probabilities matrix is big. I am not sure whether the only way to calculate this is with an iterative algorithm or whether there is a better way.
I looked at Bayesian updating and am not sure how to apply it in this case. Updating the matrix row by row and then spreading out the difference proportionally to the previous probabilities seems promising, but I haven't managed to make it work correctly. Maybe it isn't even possible that way.

Initial condition: p(apple) = p(orange) = p(banana) = 1/3.
Player 1 did X. We know that Player 1 does X with 80% probability if he has an apple. With 50% if he has an orange. With 10% if he has a banana.
p(X | apple) = 0.8
p(X | orange) = 0.5
p(X | banana) = 0.1
Since apple, orange, and banana are all equally likely at 1/3, we have p(X) = (1/3) * (0.8 + 0.5 + 0.1) = (1/3) * 1.4 ≈ 0.4667.
Recall Bayes' theorem: p(a | b) = p(b | a) * p(a) / p(b)
So p(apple | X) = p(X | apple) * p(apple) / p(X) = 0.8 * (1/3) / 0.4667 ≈ 57.14%,
similarly p(orange | X) = 0.5 * (1/3) / 0.4667 ≈ 35.71%,
and p(banana | X) = 0.1 * (1/3) / 0.4667 ≈ 7.14%.
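A minimal Python sketch of this single-row update (my own illustration, using the numbers derived above):
# Bayes update of a single row under a uniform prior.
prior = [1/3, 1/3, 1/3]                  # p(apple), p(orange), p(banana)
reach_probability = [0.8, 0.5, 0.1]      # p(X | fruit)
p_x = sum(p * r for p, r in zip(prior, reach_probability))            # ≈ 0.4667
posterior = [p * r / p_x for p, r in zip(prior, reach_probability)]   # ≈ [0.5714, 0.3571, 0.0714]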
Taking your example:
probabilities = [
    [0.25, 0.25, 0.50],
    [0.25, 0.50, 0.25],
    [0.50, 0.25, 0.25],
]
# Player 1's
reach_probability = [1.0, 0.5, 0.5]
new_probabilities = [
    [0.4, 0.2, 0.4],
    [0.2, 0.5, 0.3],
    [0.4, 0.3, 0.3],
]
p(X) = 0.25 * 1.0 + 0.25 * 0.5 + 0.5 * 0.5 = 0.625
p(a | X) = p(X | a) * p(a) / p(X) = 1.0 * 0.25 / 0.625 = 0.4
p(b | X) = p(X | b) * p(b) / p(X) = 0.5 * 0.25 / 0.625 = 0.2
p(c | X) = p(X | c) * p(c) / p(X) = 0.5 * 0.50 / 0.625 = 0.4
As desired. The other entries of each column can then be scaled to get a column sum of 1.0.
E.g. in column 1 we multiply the other entries by (1 - 0.4)/(1 - 0.25). This takes 0.25 -> 0.2 and 0.50 -> 0.40. Similarly for the other columns.
new_probabilities = [
    [0.4, 0.200, 0.4],
    [0.2, 0.533, 0.3],
    [0.4, 0.266, 0.3],
]
If Player 2 then does y with the same conditional probabilities, we get:
p(y) = 0.2 * 1.0 + 0.533 * 0.5 + 0.3 * 0.5 = 0.6165
p(a|y) = p(y|a) * p(a) / p(y) = 1.0 * 0.2 / 0.6165 = 0.3244
p(b|y) = p(y|b) * p(b) / p(y) = 0.5 * 0.533 / 0.6165 = 0.4323
p(c|y) = p(y|c) * p(c) / p(y) = 0.5 * 0.266 / 0.6165 = 0.2157
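A small numpy sketch of this scheme (my own illustration, not code from the question; the function name and arguments are placeholders). It Bayes-updates the acting player's row and then rescales the remaining entries of each column so every column still sums to 1:
import numpy as np

def update(probabilities, player, reach_probability):
    P = np.array(probabilities, dtype=float)
    r = np.asarray(reach_probability, dtype=float)
    # Bayes update of the acting player's row: p(fruit | X) is proportional to p(X | fruit) * p(fruit).
    row = P[player] * r
    row /= row.sum()                      # row.sum() is p(X)
    # Rescale the other entries of each column so the column sums stay at 1.
    others = np.delete(np.arange(P.shape[0]), player)
    for j in range(P.shape[1]):
        rest = P[others, j].sum()
        if rest > 0:
            P[others, j] *= (1 - row[j]) / rest
    P[player] = row
    return P

print(update([[0.25, 0.25, 0.50],
              [0.25, 0.50, 0.25],
              [0.50, 0.25, 0.25]], 0, [1.0, 0.5, 0.5]))
# [[0.4, 0.2, 0.4], [0.2, 0.533, 0.3], [0.4, 0.267, 0.3]] up to rounding, matching the matrix above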

Check this document:
Endgame Solving in Large Imperfect-Information Games
S. Ganzfried and T. Sandholm, in International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2015, pp. 37–45.

Here is how I would approach this. I have not worked through whether it has problems of its own, but it seems to behave correctly on your examples.
Assume each update is of the form "X,Y has probability p'". Mark element X,Y dirty with delta p - p', where p was the old probability. Now redistribute the delta proportionally to all unmarked elements in the row, then the column, marking each receiving element dirty with its own delta, and marking the first element clean. Continue until no dirty entry remains.
0.5 0.5 0.0
0.0 0.5 0.5
0.5 0.0 0.5
Belief: 2,1 has probability zero.
0.5 0.0* 0.0 update 2,1 and mark dirty
0.0 0.5 0.5 delta is 0.5
0.5 0.0 0.5
1.0* 0.0' 0.0 distribute 0.5 to row & col
0.0 1.0* 0.5 update as dirty, both deltas -0.5
0.5 0.0 0.5
1.0' 0.0' 0.0 distribute -0.5 to rows & cols
0.0 1.0' 0.0* update as dirty, both deltas 0.5
0.0* 0.0 0.5
1.0' 0.0' 0.0 distribute 0.5 to row & col
0.0 1.0' 0.0' update as dirty, delta is -0.5
0.0' 0.0 1.0*
1.0' 0.0' 0.0 distribute on row/col
0.0 1.0' 0.0' no new dirty elements, complete
0.0' 0.0 1.0'
In your first example:
1/3 1/3 1/3
1/3 1/3 1/3
1/3 1/3 1/3
Belief: 3,1 has probability 0
1/3 1/3 0* update 3,1 to zero, mark dirty
1/3 1/3 1/3 delta is 1/3
1/3 1/3 1/3
1/2* 1/2* 0' distribute 1/3 proportionally across row then col
1/3 1/3 1/2* delta is -1/6
1/3 1/3 1/2*
1/2' 1/2' 0' distribute -1/6 proportionally across row then col
1/4* 1/4* 1/2' delta is 1/12
1/4* 1/4* 1/2'
1/2' 1/2' 0' distribute proportionally to unmarked entries
1/4' 1/4' 1/2' no new dirty entries, terminate
1/4' 1/4' 1/2'
You can mark entries dirty by inserting them, with their associated deltas, into a queue and a hash set. Entries in both the queue and the hash set are dirty; entries in the hash set only are clean. Process the queue until it is empty.
I do not show an example where distribution is uneven, but the key is to distribute proportionally. Entries with 0 can never become non-zero except by a new belief.
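A rough Python sketch of this bookkeeping (my own reading of the procedure, with two assumptions made explicit: "proportionally" means proportional to the current values of the unmarked cells, and a delta that finds no unmarked cells left in its row or column is simply dropped):
from collections import deque

def apply_belief(P, r, c, p_new):
    """Set P[r][c] to p_new and propagate the change as described above."""
    n = len(P)
    P = [row[:] for row in P]                      # work on a copy
    marked = {(r, c)}                              # cells that may not change again
    queue = deque([((r, c), P[r][c] - p_new)])     # dirty cells and their deltas
    P[r][c] = p_new
    while queue:
        (i, j), delta = queue.popleft()
        # Distribute the delta over the unmarked cells of row i, then of column j.
        row_cells = [(i, k) for k in range(n) if (i, k) not in marked]
        col_cells = [(k, j) for k in range(n) if (k, j) not in marked]
        for cells in (row_cells, col_cells):
            total = sum(P[a][b] for a, b in cells)
            if total == 0:
                continue                           # nothing left to absorb the delta
            for a, b in cells:
                share = delta * P[a][b] / total
                P[a][b] += share
                marked.add((a, b))
                queue.append(((a, b), -share))     # the receiver becomes dirty
        # The popped cell stays marked but is now clean.
    return P

With 0-based indices, apply_belief(P, 0, 1, 0.0) on the first matrix and apply_belief(P, 0, 2, 0.0) on the all-1/3 matrix reproduce the two final matrices shown in the traces above.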

Unfortunately there’s no known nice solution.
The way that I would apply Bayesian reasoning is to store a likelihood
matrix instead of a probability matrix. (Actually I’d store
log-likelihoods to prevent underflow, but that’s an implementation
detail.) We can start with the matrix
     Apple  Orange  Banana
1    1      1       1
2    1      1       1
3    1      1       1
representing no knowledge. You could use the all-1/3 matrix instead, but
I’ve used 1 to emphasize that normalization is not required. To apply an
update like Player 1 doing X with conditional probabilities [0.8, 0.5,
0.1], we just multiply the row element-wise:
     Apple  Orange  Banana
1    0.8    0.5     0.1
2    1      1       1
3    1      1       1
If Player 1 does Y independently with the same conditional
probabilities, then we get
     Apple  Orange  Banana
1    0.64   0.25    0.01
2    1      1       1
3    1      1       1
Now, the rub is that these likelihoods don’t have a nice relationship to
probabilities of specific outcomes. All we know is that the probability
of a specific matching is proportional to the product of its matrix
entries. As a simple example, with a matrix like
     Apple  Orange  Banana
1    1      0       0
2    0      1       0
3    0      1       1
the entry for Player 3 having Orange is 1, yet this assignment has
probability 0 because both possibilities for completing the matching
have probability 0.
What we need is the permanent, which sums the likelihood of every matching, and the minor for each
which sums the likelihood of every matching, and the minor for each
matrix entry, which sums the likelihood of every matching that makes the
corresponding assignment. Unfortunately we don’t know a good exact
algorithm for computing the permanent, and experts are skeptical that
one exists (the problem is NP-hard, and actually #P-complete). The
known approximation employs sampling via Markov chains.
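For small matrices you can still make this concrete by brute force: keep a likelihood matrix, multiply a player's row element-wise for each observation, and enumerate all n! matchings to turn likelihoods into assignment probabilities. A sketch (the helper names are my own; this is only viable for small n, for exactly the hardness reason above):
import numpy as np
from itertools import permutations

def observe(L, player, reach_probability):
    """Multiply one player's row element-wise by p(observed action | fruit)."""
    L = L.copy()
    L[player] = L[player] * reach_probability
    return L

def assignment_probabilities(L):
    """Enumerate all n! matchings; each cell collects the total weight of the
    matchings that make that assignment, then everything is divided by the permanent."""
    n = L.shape[0]
    marginals = np.zeros_like(L, dtype=float)
    for perm in permutations(range(n)):            # perm[player] = fruit index
        w = np.prod([L[p, perm[p]] for p in range(n)])
        for p in range(n):
            marginals[p, perm[p]] += w
    return marginals / marginals[0].sum()          # row 0's total weight is the permanent

L = observe(np.ones((3, 3)), 0, [0.8, 0.5, 0.1])
print(assignment_probabilities(L))
# Row 0 comes out as approximately [0.5714, 0.3571, 0.0714], matching the Bayes computation earlier in this thread.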

Related

Filling dataframe with average of previous columns values

I have a dataframe with 5 columns that have missing values.
How do I fill the missing values with the average of the previous two columns' values?
Here is the sample code for the same.
import pandas as pd

coh0 = [0.5, 0.3, 0.1, 0.2, 0.2]
coh1 = [0.4, 0.3, 0.6, 0.5]
coh2 = [0.2, 0.2, 0.3]
coh3 = [0.8, 0.8]
coh4 = [0.5]
df = pd.DataFrame({'coh0': pd.Series(coh0), 'coh1': pd.Series(coh1), 'coh2': pd.Series(coh2), 'coh3': pd.Series(coh3), 'coh4': pd.Series(coh4)})
df
Here is the sample output
   coh0  coh1  coh2  coh3  coh4
0   0.5   0.4   0.2   0.8   0.5
1   0.3   0.3   0.2   0.8   NaN
2   0.1   0.6   0.3   NaN   NaN
3   0.2   0.5   NaN   NaN   NaN
4   0.2   NaN   NaN   NaN   NaN
Here is the desired result I am looking for.
The NaN value in each column should be replaced by the average of the previous two columns' values at the same position. However, the first NaN value in the second column should just take the last value of the first column.
The sample desired output would be like below.
For the exception you named, the first NaN, you can do
df.iloc[1, -1] = df.iloc[0, -1]
though it doesn't make a difference in this case as the mean of .2 and .8 is .5, anyway.
Either way, the rest is something like a rolling window calculation, except it has to be computed incrementally. Normally, you want to vectorize your operations and avoid iterating over the dataframe, but IMHO this is one of the rarer cases where it's actually appropriate to loop over the columns (cf. this excellent post), i.e.,
compute the row-wise (axis=1) mean of up to two columns left of the current one (df.iloc[:, max(0, i-2):i]),
and fill its NaN values from the resulting series.
for i in range(1, df.shape[1]):
    mean_df = df.iloc[:, max(0, i-2):i].mean(axis=1)
    df.iloc[:, i] = df.iloc[:, i].fillna(mean_df)
which results in
coh0 coh1 coh2 coh3 coh4
0 0.5 0.4 0.20 0.800 0.5000
1 0.3 0.3 0.20 0.800 0.5000
2 0.1 0.6 0.30 0.450 0.3750
3 0.2 0.5 0.35 0.425 0.3875
4 0.2 0.2 0.20 0.200 0.2000

Pandas efficiently add new column true/false if between two other columns

Using Pandas, how can I efficiently add a new column that is true/false if the value in one column (x) is between the values in two other columns (low and high)?
The np.select approach from here works perfectly, but I "feel" like there should be a one-liner way to do this.
Using Python 3.7
import numpy as np
import pandas as pd

fid = [0, 1, 2, 3, 4]
x = [0.18, 0.07, 0.11, 0.3, 0.33]
low = [0.1, 0.1, 0.1, 0.1, 0.1]
high = [0.2, 0.2, 0.2, 0.2, 0.2]
test = pd.DataFrame(data=zip(fid, x, low, high), columns=["fid", "x", "low", "high"])
conditions = [(test["x"] >= test["low"]) & (test["x"] <= test["high"])]
labels = ["True"]
test["between"] = np.select(conditions, labels, default="False")
display(test)
As mentioned by @Brebdan, you can use this builtin:
test["between"] = test["x"].between(test["low"], test["high"])
output:
fid x low high between
0 0 0.18 0.1 0.2 True
1 1 0.07 0.1 0.2 False
2 2 0.11 0.1 0.2 True
3 3 0.30 0.1 0.2 False
4 4 0.33 0.1 0.2 False

Averaging values with irregular time intervals

I have several pairs of arrays of measurements and the times at which the measurements were taken that I want to average. Unfortunately the times at which these measurements were taken aren't regular or the same for each pair.
My idea for averaging them is to create a new array with the value at each second then average these. It works but it seems a bit clumsy and means I have to create many unnecessarily long arrays.
Example Inputs
m1 = [0.4, 0.6, 0.2]
t1 = [0.0, 2.4, 5.2]
m2 = [1.0, 1.4, 1.0]
t2 = [0.0, 3.6, 4.8]
Generated Regular Arrays for values at each second
r1 = [0.4, 0.4, 0.4, 0.6, 0.6, 0.6, 0.2]
r2 = [1.0, 1.0, 1.0, 1.0, 1.4, 1.0]
Average values up to length of shortest array
a = [0.7, 0.7, 0.7, 0.8, 1.0, 0.8]
My attempt, given the list of measurement arrays (measurements) and the respective list of time interval arrays (times):
import numpy as np

def granulate(values, times):
    count = 0
    regular_values = []
    for index, x in enumerate(times):
        while count <= x:
            regular_values.append(values[index])
            count += 1
    return np.array(regular_values)
processed_measurements = [granulate(m, t) for m, t in zip(measurements, times)]
min_length = min(len(m) for m in processed_measurements )
processed_measurements = [m[:min_length] for m in processed_measurements]
average_measurement = np.mean(processed_measurements, axis=0)
Is there a better way to do it, ideally using numpy functions?
This will average to the closest second (assuming m1, t1, m2, t2 are numpy arrays):
time_series = np.arange(np.stack((t1, t2)).max())
np.mean([m1[abs(t1-time_series[:,None]).argmin(axis=1)], m2[abs(t2-time_series[:,None]).argmin(axis=1)]], axis=0)
If you want to floor times to each second (with possibility of generalizing to more arrays):
m = [m1, m2]
t = [t1, t2]
m_t=[]
time_series = np.arange(np.stack(t).max())
for i in range(len(t)):
    time_diff = time_series - t[i][:, None]
    m_t.append(m[i][np.where(time_diff > 0, time_diff, np.inf).argmin(axis=0)])
average = np.mean(m_t, axis=0)
output:
[0.7 0.7 0.7 0.8 1. 0.8]
You can do (a bit more numpy-ish solution):
import numpy as np
# oddly enough, numpy doesn't have its own ffill function:
def np_ffill(arr):
    mask = np.arange(len(arr))
    mask[np.isnan(arr)] = 0
    np.maximum.accumulate(mask, axis=0, out=mask)
    return arr[mask]
t1=np.ceil(t1).astype("int")
t2=np.ceil(t2).astype("int")
r1=np.empty(max(t1)+1)
r2=np.empty(max(t2)+1)
r1[:]=np.nan
r2[:]=np.nan
r1[t1]=m1
r2[t2]=m2
r1=np_ffill(r1)
r2=np_ffill(r2)
>>> print(r1,r2)
[0.4 0.4 0.4 0.6 0.6 0.6 0.2] [1. 1. 1. 1. 1.4 1. ]
#in order to get avg:
r3=np.vstack([r1[:len(r2)],r2[:len(r1)]]).mean(axis=0)
>>> print(r3)
[0.7 0.7 0.7 0.8 1. 0.8]
I see two possible solutions:
1. Create a 'bucket' for each time step, let's say 1 second, and insert all measurements that were taken at that time step +/- 1 second into the bucket. Then average all values in each bucket.
2. Interpolate every measurement row so that they have equal time steps, then average all measurements for every time step (see the sketch below).
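A minimal numpy sketch of the interpolation option (assuming the m1/t1 and m2/t2 arrays from the question; note that np.interp interpolates linearly between measurements, rather than holding the previous value as in the question's generated arrays):
import numpy as np

m1, t1 = np.array([0.4, 0.6, 0.2]), np.array([0.0, 2.4, 5.2])
m2, t2 = np.array([1.0, 1.4, 1.0]), np.array([0.0, 3.6, 4.8])

# Common 1-second grid covering the overlap of both series.
grid = np.arange(0.0, min(t1.max(), t2.max()) + 1.0)

# Resample each series onto the grid, then average across the series.
average = np.mean([np.interp(grid, t1, m1), np.interp(grid, t2, m2)], axis=0)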

Increase value of several rows based on condition fulfilling all rows

I have a pandas dataframe with three columns and want to multiply/increase the float numbers of each row by the same amount until the sum of all three cells in that row meets the criterion (equal to or greater than 0.9).
import pandas as pd

df = pd.DataFrame({'A': [0.03, 0.0, 0.4],
                   'B': [0.1234, 0.4, 0.333],
                   'C': [0.5, 0.4, 0.0333]})
Outcome:
The different cells in each row were multiplied so that the sum of all three cells of each row is 0.9. (The sums below are not exactly 0.9 because I approximated with simple multiplication; the actual outcome should reach 0.9.) It is important that cells which are 0 stay 0.
print (df)
A B C
0 0.0414 0.170292 0.690000
1 0.0000 0.452000 0.452000
2 0.4720 0.392940 0.039294
You can take the sum on axis=1, subtract it from 0.9, then divide by df.shape[1] and add the result back:
df.add((0.9-df.sum(axis=1))/df.shape[1],axis=0)
A B C
0 0.112200 0.205600 0.582200
1 0.033333 0.433333 0.433333
2 0.444567 0.377567 0.077867
You want to apply a scaling function along the rows:
def scale(xs, target=0.9):
    """Scale the features such that their sum equals the target."""
    xs_sum = xs.sum()
    if xs_sum < target:
        return xs * (target / xs_sum)
    else:
        return xs

df.apply(scale, axis=1)
For example:
df = pd.DataFrame({'A': [0.03, 0.0, 0.4],
                   'B': [0.1234, 0.4, 0.333],
                   'C': [0.5, 0.4, 0.0333]})
df.apply(scale, axis=1)
Should give:
A B C
0 0.041322 0.169972 0.688705
1 0.000000 0.450000 0.450000
2 0.469790 0.391100 0.039110
The rows of that dataframe all sum to 0.9:
df.apply(scale, axis=1).sum(axis=1)
0 0.9
1 0.9
2 0.9
dtype: float64

replacing a range of values with one value

I have a list that I'm adding to a pandas dataframe; it contains a range of decimal values.
I want to divide it into 3 ranges, where each range is represented by one value:
sents = []
for sent in sentis:
    if sent > 0:
        if sent < 0.40:
            sents.append('negative')
        if sent >= 0.40 and sent <= 0.60:
            sents.append('neutral')
        if sent > 0.60:
            sents.append('positive')
My question is whether there is a more efficient way to do this in pandas, as I'm trying to apply it to a much bigger list.
Thanks in advance.
You can use pd.cut to produce the results that are of type categorical and have the appropriate labels.
In order to fix the inclusion of .4 and .6 for the neutral category, I add and subtract the smallest float epsilon
import numpy as np
import pandas as pd

sentis = np.linspace(0, 1, 11)
eps = np.finfo(float).eps
pd.DataFrame(dict(
    Value=sentis,
    Sentiment=pd.cut(
        sentis, [-np.inf, .4 - eps, .6 + eps, np.inf],
        labels=['negative', 'neutral', 'positive']
    ),
))
Sentiment Value
0 negative 0.0
1 negative 0.1
2 negative 0.2
3 negative 0.3
4 neutral 0.4
5 neutral 0.5
6 neutral 0.6
7 positive 0.7
8 positive 0.8
9 positive 0.9
10 positive 1.0
List comprehension:
['negative' if x < 0.4 else 'positive' if x > 0.6 else 'neutral' for x in sentis]
