Specify interpolation times in boost::odeint - python

I have started using boost::odeint in my C++ code, and I think I'm missing a simple feature available in other integrators, namely Scipy's odeint.
scipy.odeint lets the user specify the times at which the state must be added to the output state history. scipy.odeint is a variable-timestep integrator whose one-liner call looks like this (the state is integrated from the initial condition X0 and interpolated/stored at the times specified in times):
X = scipy.odeint(dxdt,X0,times,atol = 1e-13,rtol = 1e-13)
where X is a matrix that has as many rows as there are elements in times.
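For reference, here is a minimal self-contained example of that call, with a trivial made-up right-hand side (the decay dynamics, initial condition and output times below are purely illustrative); it shows that the output has one row per requested time:

import numpy as np
from scipy.integrate import odeint

def dxdt(X, t):
    return -X                                   # simple linear decay, for illustration only

X0 = np.array([1.0])
times = np.array([0.0, 0.1, 0.5, 2.0, 10.0])    # unevenly spaced output times
X = odeint(dxdt, X0, times, atol=1e-13, rtol=1e-13)
print(X.shape)                                  # (5, 1): one row per element of times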
Basically, I am looking for a similar feature in boost::odeint in order to do two things:
Propagate a state from t0 to tf, but only retrieve the final value of the state. I could write an observer that stores the state only when the internal time satisfies t == tf, but this looks like a rather ugly hack. If I want to let the integrator choose the proper internal timestep to meet the tolerance values, storing intermediate states is an unnecessary burden.
Propagate a state from t0 to tf, but store the state at times specified in advance that are not necessarily evenly distributed, in a similar fashion to the scipy.odeint call above, while also letting the integrator choose the proper internal timestep.
The closest I've come to achieving that is the following:
size_t steps = integrate_adaptive( make_controlled< error_stepper_type >( 1.0e-10 , 1.0e-16 ) ,
dynamics , x , 0.0 , 10.0 , 1. , push_back_state_and_time( x_vec , times ));
The tolerances are met, but all the states are stored into x_vec by the observer, without letting me specify what the storage times should be.
How should I proceed?

It seems you are looking for the integrate_times function:
It lets you specify a range of exact times for which the observer will be invoked, adjusting the step size to reach every time step exactly, if necessary.
Especially for adaptive methods, this is quite useful as it computes the solution at the exact times you specified while still controlling the time step size to not exceed the error bounds.
So your current call could be modified to something like
auto stepper = make_controlled<error_stepper_type>( 1.0e-10 , 1.0e-16 );
// std::vector<time> times;
// std::vector<state> x_vec;
// time tstep;
auto tbegin = times.begin();
auto tend = times.end();
integrate_times(stepper, dynamics, x, tbegin, tend, tstep, push_back_state(x_vec));

Related

Python native gridworld implementation (no NumPy)

I've implemented the gridworld example from the book "Reinforcement Learning: An Introduction", second edition, by Richard S. Sutton and Andrew G. Barto, Chapter 4, sections 4.1 and 4.2, page 80.
Here is my implementation:
https://github.com/ozrentk/dynamic-programming-gridworld-playground
The original algorithm seems to have a bug, since the value function (mapping) is updated value by value, in place, in the source mapping structure. Why is that incorrect? It means that inside the loop over each s (of set S), within the same evaluation sweep, a later element (e.g. s_2 of set S) will be evaluated from values already updated in that sweep (e.g. for s_1 of set S), instead of from the values of the previous iteration. This problem is avoided here using a double-buffering technique: an additional buffer is used for the new values of set S. It also means that the program uses more memory because of that buffer.
I must admit that I'm not 100% sure if this is a bug, or if I misinterpreted the algorithm.
Generally, this is the code I'm using:
...
while True:
    delta = 0
    # NOTE: algorithm modified a bit, additional buffer new_values introduced
    # Barto & Sutton seem to have a bug in their algorithm (iterative estimation does not fit figure 4.1)
    # Instead of tracking one state value inside a loop, we track the entire state value function mapping
    # outside that loop. Also note that after that change the algorithm consumes more memory.
    new_values = [0] * states_n
    for s in non_terminal_states:
        # Evaluate state value under current policy
        next_states = get_next_states(s, policy[s], world_size)
        sum_items = [p * (reward + gamma * values[s_next]) for s_next, p in next_states.items()]
        new_values[s] = sum(sum_items)
        # Track delta
        delta = max(delta, abs(values[s] - new_values[s]))
    # (now we switch the state value function buffer, instead of switching single state values in the loop)
    values = new_values
    if use_policy_improvement:
        # Policy improvement is done inside improve_policy(); if the new policy is no better than the
        # old one, the returned is_policy_stable is True
        is_policy_stable, improved_policy = improve_policy()
        if is_policy_stable:
            print("Policy is stable.")
            break
        else:
            print("- Improving policy... ----------------")
            policy = improved_policy
            visualize_policy(policy, states, world_size)
    # In case we don't track policy improvement, we need to track delta for convergence
    if delta < theta:
        break
    # Track iteration count
    k += 1
...
Am I wrong or there is a problem with the policy evaluation part of the algorithm in the book?
The original algorithm is the "asynchronous" (in-place) version of policy evaluation, and your implementation using two buffers is the "synchronous" version. Both are correct.
The asynchronous version also converges to the optimal solution (you can find the proof in the book Parallel and Distributed Computation: Numerical Methods). And, as you may find in that book, it "usually converges faster".
I find that this link provides a good explanation.
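To make the distinction concrete, here is a minimal sketch contrasting the two sweeps; the helper backup(s, values) is hypothetical and stands in for the one-step Bellman backup used in the question:

def evaluate_in_place(values, non_terminal_states, backup):
    # "Asynchronous"/in-place sweep: states backed up later in the sweep
    # already see the values updated earlier in the same sweep.
    delta = 0.0
    for s in non_terminal_states:
        old = values[s]
        values[s] = backup(s, values)            # reads the partially updated table
        delta = max(delta, abs(old - values[s]))
    return delta

def evaluate_double_buffered(values, non_terminal_states, backup):
    # "Synchronous"/double-buffered sweep: every state is backed up from the
    # values of the previous sweep only, at the cost of a second buffer.
    new_values = list(values)
    delta = 0.0
    for s in non_terminal_states:
        new_values[s] = backup(s, values)        # reads only the old table
        delta = max(delta, abs(values[s] - new_values[s]))
    values[:] = new_values
    return delta

Here backup(s, values) would be sum(p * (reward + gamma * values[s_next]) for s_next, p in next_states.items()), as in the question, and either sweep is repeated until delta < theta.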

How to constrain optimization based on number of negative values of variable in pyomo

I'm working with a pyomo model (mostly written by someone else, to be updated by me) that optimizes electric vehicle charging (ie, how much power will a vehicle import or export at a given timestep). The optimization variable (u) is power, and the objective is to minimize total charging cost given the charging cost at each timestep.
I'm trying to write a new optimization function to limit the number of times that the model will allow each vehicle to export power (ie, to set u < 0). I've written a constraint called max_call_rule that counts the number of times u < 0, and constrains it to be less than a given value (max_calls) for each vehicle. (max_calls is a dictionary with a label for each vehicle paired with an integer value for the number of calls allowed.)
The code is very long, but I've put the core pieces below:
model.u = Var(model.t, model.v, domain=Integers, doc='Power used')
model.max_calls = Param(model.v, initialize = max_calls)
def max_call_rule(model, v):
    return len([x for x in [model.u[t, v] for t in model.t] if x < 0]) <= model.max_calls[v]
model.max_call_rule = Constraint(model.v, rule=max_call_rule, doc='Max call rule')
This approach doesn't work--I get the following error when I try to run the code.
ERROR: Rule failed when generating expression for constraint max_call_rule
with index 16: ValueError: Cannot create an InequalityExpression with more
than 3 terms.
ERROR: Constructing component 'max_call_rule' from data=None failed:
ValueError: Cannot create an InequalityExpression with more than 3 terms.
I'm new to working with pyomo and suspect that this error means that I'm trying to do something that fundamentally won't work with an optimization program. So--is there a better way for me to constrain the number of times that my variable u can be less than 0?
If what you're trying to do is minimize the number of times vehicles are exporting power, you can introduce a binary variable that allows/disallows vehicles discharging. You want this variable to be indexed over time and vehicles.
Note that if the rest of your model is LP (linear, without any integer variables), this will turn it into a MIP/MILP. There's a significant difference in terms of computational effort required to solve, and the types of solvers you can use. The larger the problems, the bigger the difference this will make. I'm not sure why u is currently set as Integers, that seems quite strange given it represents power.
model.allowed_to_discharge = Var(model.t, model.v, within=Boolean)

def enforce_vehicle_discharging_logic_rule(model, t, v):
    """
    When `allowed_to_discharge[t,v]` is 1,
    this constraint doesn't have any effect.
    When `allowed_to_discharge[t,v]` is 0, u[t,v] >= 0.
    Note that 1e9 is just a "big M", i.e. any big number
    that you're sure exceeds the maximum value of `model.u`.
    """
    return model.u[t,v] >= 0 - model.allowed_to_discharge[t,v] * 1e9

model.enforce_vehicle_discharging_logic = Constraint(
    model.t, model.v, rule=enforce_vehicle_discharging_logic_rule
)
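With that binary in place, a hard cap like the max_calls limit from the question could be written along these lines (a sketch; the constraint name is hypothetical, and it bounds the number of timesteps at which discharging is allowed, which in turn bounds the number of actual discharge events):

def max_discharge_events_rule(model, v):
    # allowed_to_discharge[t, v] == 1 marks a timestep at which vehicle v may export,
    # so summing it over t bounds the number of discharge events for that vehicle
    return sum(model.allowed_to_discharge[t, v] for t in model.t) <= model.max_calls[v]
model.max_discharge_events = Constraint(model.v, rule=max_discharge_events_rule)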
Now that you have the binary variable, you can count the events, and specifically you can assign a cost to such events and add it to your objective function (just in case, you can only have one objective function, so you're just adding a "component" to it, not adding a second objective function).
def objective_rule(model):
    return (
        ... # the same objective function as before
        + sum(model.allowed_to_discharge[t, v] for t in model.t for v in model.v) * model.cost_of_discharge_event
    )
model.objective = Objective(rule=objective_rule)
If what you add to your objective function is instead a cost associated with the total energy discharged by the vehicles (rather than with the number of discharge events), you want to introduce two separate variables for charging and discharging, both non-negative, and then define the net power (which right now you call u) as an Expression equal to the difference between charging and discharging.
You can then add a cost component to your objective function that is the sum of all the discharge power, and multiply it by the cost associated with it.
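A minimal sketch of that variant, assuming the same model and sets as above (the names net_power and cost_per_discharged_unit are hypothetical):

model.charge = Var(model.t, model.v, within=NonNegativeReals)
model.discharge = Var(model.t, model.v, within=NonNegativeReals)

def net_power_rule(model, t, v):
    # what the current model calls u: positive when importing, negative when exporting
    return model.charge[t, v] - model.discharge[t, v]
model.net_power = Expression(model.t, model.v, rule=net_power_rule)

def objective_rule(model):
    return (
        ... # the same objective function as before
        + sum(model.discharge[t, v] for t in model.t for v in model.v) * model.cost_per_discharged_unit
    )
model.objective = Objective(rule=objective_rule)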

How to implement a cost minimization objective function correctly in Gurobi?

Given transport costs, per single unit of delivery, for a supermarket from three distribution centers to ten separate stores.
Note: please look in the #Data section of my code to see the data that I'm not allowed to post in photo form. Also note that while my costs are a vector with 30 entries, each distribution centre can only access 10 of them: DC1 costs = entries 1-10, DC2 costs = entries 11-20, etc.
I want to minimize the transport cost subject to each of the ten stores demand (in units of delivery).
This can be done by inspection; the minimum cost is $150313. The problem is implementing the solution with Python and Gurobi and reproducing that result.
What I've tried is a somewhat sloppy model of the problem in Gurobi so far. I'm not sure how to correctly index and iterate through my sets that are required to produce a result.
This is my main problem: The objective function I define to minimize transport costs is not correct as I produce a non-answer.
The code "runs" though. If I change to maximization I just get an unbounded problem. So I feel like I am definitely not calling the correct data/iterations through sets into play.
My solution so far is quite small, so I feel like I can format it into the question and comment along the way.
from gurobipy import *
#Sets
Distro = ["DC0","DC1","DC2"]
Stores = ["S0", "S1", "S2", "S3", "S4", "S5", "S6", "S7", "S8", "S9"]
D = range(len(Distro))
S = range(len(Stores))
Here I define my sets of distribution centres and set of stores. I am not sure where or how to exactly define the D and S iteration variables to get a correct answer.
#Data
Demand = [10,16,11,8,8,18,11,20,13,12]
Costs = [1992,2666,977,1761,2933,1387,2307,1814,706,1162,
         2471,2023,3096,2103,712,2304,1440,2180,2925,2432,
         1642,2058,1533,1102,1970,908,1372,1317,1341,776]
Just a block of my relevant data. I am not sure if my cost data should be 3 separate sets, considering each distribution centre only has access to 10 costs and not 30, or whether there is a way to keep my costs as one set but make sure each centre can only access the costs relevant to itself.
m = Model("WonderMarket")
#Variables
X = {}
for d in D:
    for s in S:
        X[d,s] = m.addVar()
Declaring my objective variable. Again, I'm blindly iterating at this point to produce something that works. I've never programmed before. But I'm learning and putting as much thought into this question as possible.
#set objective
m.setObjective(quicksum(Costs[s] * X[d, s] * Demand[s] for d in D for s in S), GRB.MINIMIZE)
My objective function is attempting to multiply the cost of each delivery from a centre to a store by that store's demand, and then make the total as small as possible. I do not have a non-zero constraint yet. I will need one eventually?! But right now I have bigger fish to fry.
m.optimize()
I produce a model with 0 rows, 30 columns and 0 nonzero entries, which gives me a solution of 0. I need to set up my program so that I get the value that can be calculated easily by hand. I believe the issue is my general declaring of variables and low knowledge of iteration and general "what goes where" issues. A lot of thinking for just a study exercise!
Appreciate anyone who has read all the way through. Thank you for any tips or help in advance.
Your objective is 0 because you have not defined any constraints. By default all variables have a lower bound of 0, and hence minimizing an unconstrained problem puts all variables at this lower bound.
A few comments:
Unless you need the names for the distribution centers and stores, you could define them as follows:
D = 3
S = 10
Distro = range(D)
Stores = range(S)
You could define the costs as a 2-dimensional array, e.g.
Costs = [[1992,2666,977,1761,2933,1387,2307,1814,706,1162],
         [2471,2023,3096,2103,712,2304,1440,2180,2925,2432],
         [1642,2058,1533,1102,1970,908,1372,1317,1341,776]]
Then the cost of transportation from distribution center d to store s are stored in Costs[d][s].
You can add all variables at once and I assume you want them to be binary:
X = m.addVars(D, S, vtype=GRB.BINARY)
(or use Distro and Stores instead of D and S if you need to use the names).
Your definition of the objective function then becomes:
m.setObjective(quicksum(Costs[d][s] * X[d, s] * Demand[s] for d in Distro for s in Stores), GRB.MINIMIZE)
(This is all assuming that each store can only be delivered from one distribution center, but since your distribution centers do not have a maximal capacity this seems to be a fair assumption.)
You need constraints ensuring that the stores' demands are actually satisfied. For this it suffices to ensure that each store is being delivered from one distribution center, i.e., that for each s one X[d, s] is 1.
m.addConstrs(quicksum(X[d, s] for d in Distro) == 1 for s in Stores)
When I optimize this, I indeed get an optimal solution with value 150313.
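For reference, here is how those pieces might fit together as one minimal end-to-end script (a sketch, assuming a working Gurobi installation; the data is taken from the question):

from gurobipy import Model, quicksum, GRB

D, S = 3, 10
Distro, Stores = range(D), range(S)

Demand = [10,16,11,8,8,18,11,20,13,12]
Costs = [[1992,2666,977,1761,2933,1387,2307,1814,706,1162],
         [2471,2023,3096,2103,712,2304,1440,2180,2925,2432],
         [1642,2058,1533,1102,1970,908,1372,1317,1341,776]]

m = Model("WonderMarket")
X = m.addVars(D, S, vtype=GRB.BINARY)      # X[d, s] = 1 if store s is supplied by centre d

# total cost: per-unit cost times the store's demand, for the chosen centre
m.setObjective(quicksum(Costs[d][s] * X[d, s] * Demand[s]
                        for d in Distro for s in Stores), GRB.MINIMIZE)

# every store is supplied by exactly one distribution centre
m.addConstrs(quicksum(X[d, s] for d in Distro) == 1 for s in Stores)

m.optimize()
print(m.objVal)                            # expected: 150313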

Devising objective function for integer linear programming

I am working to devise an objective function for an integer linear programming model. The goal is to determine the copy number of two genes, as well as whether a gene conversion event has happened (where one copy is overwritten by the other, which looks like one was deleted although the net copy number has not changed).
The problem involves two data vectors, P_A and P_B. The vectors contain continuous values larger than zero that correspond to a measure of copy number made at each position. P_{A,i} is not necessarily the same spot across the gene as P_{B,i} is, because the positions are unique to each copy (and can be mapped to an absolute position in the genome).
Given this, my plan was to try and minimize the difference between my decision variables and the measured data across different genome windows, giving me different slices of the two data vectors that correspond to the same region.
Decision variables:
A_w = copy number of A in window in {0,1,2,3,4}
B_w = copy number of B in window in {0,1,2,3,4}
C_w = gene conversion in {-2,-1,0,1,2}
The goal then would be to minimize the difference between the left and right sides of the below equations:
A_w - C_w ~= mean(P_{A,W})
B_w + C_w ~= mean(P_{B,W})
Subject to a handful of constraints, such as 2 <= A_w + B_w <= 4.
But I am unsure how to formulate this into a function to minimize. I have two equations that are not really a function, and the decision variables have no coefficients.
I am also unsure of how to handle the negative values of C_w.
I am also unsure of how to bring the results back together; after I solve the LP in each window, I still need to merge the window results into one gene-wide call (and ideally identify which window(s) had non-zero values of C_w).
Create the LpProblem instance:
from pulp import LpProblem, LpMinimize

problem = LpProblem("Another LpProblem", LpMinimize)
Objective (per what you've vaguely described above):
problem += (mean(P_{A,W}) - (A_w - C_w)) + (mean(P_{B,W}) - (B_w + C_w))
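Note that with the signed differences as written, positive and negative errors can cancel. A standard way to minimize absolute deviations in an (I)LP is to introduce non-negative auxiliary variables bounded below by both signs of each difference. A minimal PuLP sketch for a single window, using hypothetical values for the window means:

from pulp import LpProblem, LpMinimize, LpVariable

mean_PA, mean_PB = 1.8, 1.2   # hypothetical mean(P_{A,W}) and mean(P_{B,W}) for one window

problem = LpProblem("copy_number_window", LpMinimize)
A_w = LpVariable("A_w", lowBound=0, upBound=4, cat="Integer")
B_w = LpVariable("B_w", lowBound=0, upBound=4, cat="Integer")
C_w = LpVariable("C_w", lowBound=-2, upBound=2, cat="Integer")   # negative values handled via bounds
dA = LpVariable("dA", lowBound=0)   # equals |mean(P_{A,W}) - (A_w - C_w)| at the optimum
dB = LpVariable("dB", lowBound=0)   # equals |mean(P_{B,W}) - (B_w + C_w)| at the optimum

problem += dA + dB                                # objective: total absolute deviation
problem += dA >= mean_PA - (A_w - C_w)
problem += dA >= (A_w - C_w) - mean_PA
problem += dB >= mean_PB - (B_w + C_w)
problem += dB >= (B_w + C_w) - mean_PB
problem += A_w + B_w >= 2                         # the 2 <= A_w + B_w <= 4 constraint
problem += A_w + B_w <= 4

problem.solve()
print(A_w.value(), B_w.value(), C_w.value())

Solving one such problem per window and recording the windows where C_w is non-zero is then one possible way to merge the window-level results into a gene-wide call.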
This is all I could tell from your really rather vague question. You'll need to be much more specific with what you mean by terms like "bring the results back together", or "handle the negative values in C_w". Add in your current code snippets and the errors you're getting for more details.

Python: sliding window of variable width

I'm writing a program in Python that's processing some data generated during experiments, and it needs to estimate the slope of the data. I've written a piece of code that does this quite nicely, but it's horribly slow (and I'm not very patient). Let me explain how this code works:
1) It grabs a small piece of data of size dx (starting with 3 datapoints)
2) It evaluates whether the difference (i.e. |y(x+dx)-y(x-dx)| ) is larger than a certain minimum value (40x std. dev. of noise)
3) If the difference is large enough, it will calculate the slope using OLS regression. If the difference is too small, it will increase dx and redo the loop with this new dx
4) This continues for all the datapoints
[See updated code further down]
For a datasize of about 100k measurements, this takes about 40 minutes, whereas the rest of the program (it does more processing than just this bit) takes about 10 seconds. I am certain there is a much more efficient way of doing these operations, could you guys please help me out?
Thanks
EDIT:
Ok, so I've solved the problem by using only binary searches, limiting the number of allowed steps to 200. I thank everyone for their input, and I selected the answer that helped me most.
FINAL UPDATED CODE:
def slope(self, data, time):
    (wave1, wave2) = wt.dwt(data, "db3")
    std = 2*np.std(wave2)
    e = std/0.05
    de = 5*std
    N = len(data)
    slopes = np.ones(shape=(N,))
    data2 = np.concatenate((-data[::-1]+2*data[0], data, -data[::-1]+2*data[N-1]))
    time2 = np.concatenate((-time[::-1]+2*time[0], time, -time[::-1]+2*time[N-1]))
    for n in xrange(N+1, 2*N):
        left = N+1
        right = 2*N
        for i in xrange(200):
            mid = int(0.5*(left+right))
            diff = np.abs(data2[n-mid+N]-data2[n+mid-N])
            if diff >= e:
                if diff < e + de:
                    break
                right = mid - 1
                continue
            left = mid + 1
        leftlim = n - mid + N
        rightlim = n + mid - N
        y = data2[leftlim:rightlim:int(0.05*(rightlim-leftlim)+1)]
        x = time2[leftlim:rightlim:int(0.05*(rightlim-leftlim)+1)]
        xavg = np.average(x)
        yavg = np.average(y)
        xlen = len(x)
        slopes[n-N] = (np.dot(x,y)-xavg*yavg*xlen)/(np.dot(x,x)-xavg*xavg*xlen)
    return np.array(slopes)
Your comments suggest that you need to find a better method to estimate i_{k+1} given i_k. With no knowledge of the values in data, the naive algorithm would be:
At each iteration for n, leave i at previous value, and see if the abs(data[start]-data[end]) value is less than e. If it is, leave i at its previous value, and find your new one by incrementing it by 1 as you do now. If it is greater, or equal, do a binary search on i to find the appropriate value. You can possibly do a binary search forwards, but finding a good candidate upper limit without knowledge of data can prove to be difficult. This algorithm won't perform worse than your current estimation method.
If you know that data is kind of smooth (no sudden jumps, and hence a smooth plot for all i values) and monotonically increasing, you can replace the binary search with a search backwards by decrementing its value by 1 instead.
How to optimize this will depend on some properties of your data, but here are some ideas:
Have you tried profiling the code? Using one of the Python profilers can give you some useful information about what's taking the most time. Often, a piece of code you've just written will have one biggest bottleneck, and it's not always obvious which piece it is; profiling lets you figure that out and attack the main bottleneck first.
Do you know what typical values of i are? If you have some idea, you can speed things up by starting with i greater than 0 (as @vhallac noted), or by increasing i by larger amounts: if you often see big values for i, increase i by 2 or 3 at a time; if the distribution of i values has a long tail, try doubling it each time; etc.
Do you need all the data when doing the least squares regression? If that function call is the bottleneck, you may be able to speed it up by using only some of the data in the range. Suppose, for instance, that at a particular point, you need i to be 200 to see a large enough (above-noise) change in the data. But you may not need all 400 points to get a good estimate of the slope — just using 10 or 20 points, evenly spaced in the start:end range, may be sufficient, and might speed up the code a lot.
I work with Python for similar analyses, and have a few suggestions to make. I didn't look at the details of your code, just to your problem statement:
1) It grabs a small piece of data of size dx (starting with 3
datapoints)
2) It evaluates whether the difference (i.e. |y(x+dx)-y(x-dx)| ) is
larger than a certain minimum value (40x std. dev. of noise)
3) If the difference is large enough, it will calculate the slope
using OLS regression. If the difference is too small, it will increase
dx and redo the loop with this new dx
4) This continues for all the datapoints
I think the more obvious reason for slow execution is the LOOPING nature of your code, when perhaps you could use the VECTORIZED (array-based operations) nature of Numpy.
For step 1, instead of taking pairs of points, you can compute data[3:] - data[:-3] directly and get all the differences in a single array operation;
For step 2, you can use the result from array-based tests like numpy.argwhere(data > threshold) instead of testing every element inside some loop;
Step 3 sounds conceptually wrong to me. You say that if the difference is too small, the code will increase dx. But if the difference is small, the resulting slope would be small because it IS actually small. Then getting a small value is the right result, and artificially increasing dx to get a "better" result might not be what you want. Well, it might actually be what you want, but you should consider this. I would suggest that you calculate the slope for a fixed dx across the whole data, and then take the resulting array of slopes to select your regions of interest (for example, using data_slope[numpy.argwhere(data_slope > minimum_slope)]).
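For illustration, here is a minimal sketch of that fixed-dx, array-based approach; the signal, noise level and threshold below are hypothetical:

import numpy as np

time = np.linspace(0.0, 10.0, 1000)                        # hypothetical measurement times
data = np.sin(time) + 0.01 * np.random.randn(time.size)    # hypothetical noisy signal

dx = 3                                                     # fixed half-window, in samples
diffs = data[2 * dx:] - data[:-2 * dx]                     # y(x+dx) - y(x-dx) for every centre point
dts = time[2 * dx:] - time[:-2 * dx]
data_slope = diffs / dts                                   # crude slope estimate at each centre point

minimum_slope = 0.5
interesting = np.argwhere(np.abs(data_slope) > minimum_slope).ravel()   # indices of steep regions
print(interesting[:10])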
Hope this helps!
