As I'm really struggling to get from R code to Python code, I would like to ask for some help. The code I want to use was provided to me on the Mathematics Stack Exchange:
https://math.stackexchange.com/questions/2205573/curve-fitting-on-dataset
I do understand what is going on, but I'm really having a hard time translating the R code, as I have never worked with R before. I have written the function that returns the sum of squares, but I'm stuck on how to use something similar to the optim function. I also don't like the guesswork involved in choosing the initial values; I would rather run and re-run some kind of optim function until I get the result I want, because my requirements for a near-perfect curve fit are very high.
def model(par, x):
    n = len(x)
    res = []
    for i in range(1, n):
        A0 = par[3] + (par[4]-par[1])*par[6] + (par[5]-par[2])*par[6]**2
        if (x[i] == par[6]):
            res[i] = A0 + par[1]*x[i] + par[2]*x[i]**2
        else:
            res[i] = par[3] + par[4]*x[i] + par[5]*x[i]**2
    return res
This is my model function...
def sum_squares(par, x, y):
    ss = sum((y-model(par,x))^2)
    return ss
And this is the sum of squares
But I have no idea on how to convert this:
#I found these initial values with a few minutes of guess and check.
par0 <- c(7,-1,-395,70,-2.3,10)
sol <- optim(par= par0, fn=sqerror, x=x, y=y)$par
To Python code...
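For reference, R's optim() uses the Nelder-Mead simplex method by default, and scipy.optimize.minimize exposes the same algorithm, so a near-literal translation is possible. A minimal sketch (assuming x and y are numpy arrays of the data and sum_squares is the corrected, 0-indexed version shown further down):

import numpy as np
from scipy.optimize import minimize

par0 = np.array([7, -1, -395, 70, -2.3, 10])   # same initial guess as in the R code
sol = minimize(sum_squares, par0, args=(x, y), method="Nelder-Mead")
fitted_par = sol.x                              # equivalent of optim(...)$par

To avoid hand-picking par0, this call can be repeated from random starting points, keeping the run with the smallest sol.fun.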
I wrote an open-source Python package (BSD license) that has a genetic-algorithm (Differential Evolution) front end to the scipy Levenberg-Marquardt solver; it works similarly to what you describe in your question. The GitHub URL is:
https://github.com/zunzun/pyeq3
It comes with a "user-defined function" example that's fairly easy to use:
https://github.com/zunzun/pyeq3/blob/master/Examples/Simple/FitUserDefinedFunction_2D.py
along with command-line, GUI, cluster, parallel, and web-based examples. You can install the package with "pip3 install pyeq3" to see if it might suit your needs.
Seems like I have been able to fix the problem.
def model(par, x):
    n = len(x)
    res = np.array([])
    for i in range(0, n):
        A0 = par[2] + (par[3]-par[0])*par[5] + (par[4]-par[1])*par[5]**2
        if (x[i] <= par[5]):
            res = np.append(res, A0 + par[0]*x[i] + par[1]*x[i]**2)
        else:
            res = np.append(res, par[2] + par[3]*x[i] + par[4]*x[i]**2)
    return res
def sum_squares(par, x, y):
    ss = sum((y-model(par,x))**2)
    print('Sum of squares = {0}'.format(ss))
    return ss
And then I used the functions as follows:
parameter = sy.array([0.0,-8.0,0.0018,0.0018,0,200])
res = least_squares(sum_squares, parameter, bounds=(-360,360), args=(x1,y1),verbose = 1)
The only problem is that it doesn't produce the results I'm looking for, mainly because my x values are in [0, 360] while the y values only vary by about 0.2, so it's a hard nut to crack for this function, and it produces this (poor) result:
[Plot of the resulting poor fit omitted.]
I think that the range of x values [0, 360] and y values (which you say vary by only ~0.2) is probably not the problem. Getting good initial values for the parameters is probably much more important.
In Python with numpy/scipy, you definitely do not want to loop over the values of x; do something more like:
def model(par, x):
    res = par[2] + par[3]*x + par[4]*x**2
    A0 = par[2] + (par[3]-par[0])*par[5] + (par[4]-par[1])*par[5]**2
    low = np.where(x <= par[5])
    res[low] = A0 + par[0]*x[low] + par[1]*x[low]**2
    return res
It's not clear to me that this form is really what you want: why should A0 (an x-independent offset added to part of the model) be so complicated and depend on all the other parameters?
More importantly, your sum_of_squares() function is not what least_squares() wants: you should return the residual array, not the sum of squares itself. So that should be:
def sum_of_squares(par, x, y):
    return (y - model(par, x))
But most importantly, there is a conceptual problem that is probably going to plague this model: your par[5] is meant to represent a breakpoint where the model changes form, and that is going to be very hard for these optimization routines to handle. They generally make a very small change to each parameter value in order to estimate the derivative of the residual array with respect to that parameter, which tells them how to update it. Because par[5] only enters through the comparison x <= par[5], a tiny change in its value usually moves no data point across the breakpoint, so the residuals do not change at all and the algorithm cannot determine how to adjust this parameter. With some of the scipy.optimize algorithms (notably leastsq) you can specify a scale for the relative change to make; with leastsq that option is called epsfcn. You may need to set it as high as 0.3 or 1.0 for fitting the breakpoint to work. Unfortunately it cannot be set per variable, only per fit. You might need to experiment with this and other options to least_squares or leastsq.
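To illustrate the step-size point above (a sketch only, not part of the original answer; model, parameter, x1 and y1 are the names used earlier in this thread):

from scipy.optimize import leastsq, least_squares

def residuals(par, x, y):
    # return the residual array, as discussed above
    return y - model(par, x)

# Option 1: leastsq with a (global) relative finite-difference step of ~0.3
fit = leastsq(residuals, parameter, args=(x1, y1), epsfcn=0.3)
best_par = fit[0]

# Option 2: least_squares, whose diff_step argument may also be given per parameter,
# so only the breakpoint par[5] gets the large relative step
res = least_squares(residuals, parameter, args=(x1, y1),
                    diff_step=[1e-3, 1e-3, 1e-3, 1e-3, 1e-3, 0.3])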
Related
I'm doing some data-science work and am calculating the log-likelihood of a Poisson distribution of arrival times.
import numpy as np
from scipy.special import factorial   # imports assumed; the original snippet did not show them

def LogLikelihood(arrival_times, _lambda):
    """Calculate the likelihood that _lambda predicts the arrival_times well."""
    ll = 0
    for t in arrival_times:
        ll += -(_lambda) + t*np.log(_lambda) - np.log(factorial(t))
    return ll
Mathematically, the expression implemented on the last line of the loop is the Poisson log-likelihood term −λ + t·log(λ) − log(t!), summed over all arrival times t.
Is there a more pythonic way to perform this sum? Perhaps in one line?
Seems perfectly Pythonic to me; but since numpy is already here, why not vectorize the whole thing?
return (
    -_lambda
    + arrival_times * np.log(_lambda)
    - np.log(np.vectorize(np.math.factorial)(arrival_times))
).sum()
If you have scipy available, use loggamma which is more robust than chaining log and factorial:
from scipy import special

def loglikeli(at, l):
    return (np.log(l)*at - l - special.loggamma(at+1)).sum()

### example
rng = np.random.default_rng()
at = rng.integers(1, 3, 10).cumsum()
l = rng.uniform(0, 1)

### check against OP's implementation
np.isclose(loglikeli(at, l), LogLikelihood(at, l))
# True
This looks ugly to me but it fits a single line:
def LogLikelihood(arrival_times, _lambda):
    return np.cumsum(list(map(lambda t: -(_lambda) + t*np.log(_lambda) - np.log(factorial(t)), arrival_times)))[-1]
I'm trying to find the global minimum of the function from the Hundred-dollar, Hundred-digit Challenge, problem #4, as an exercise in simulated annealing.
As the basis for my understanding and approach to writing the code, I am using the book Global Optimization Algorithms (third edition), which is freely available online.
Consequently, I've initially come up with the following code:
The noisy func:
def noisy_func(x, y):
    return (math.exp(math.sin(50*x)) +
            math.sin(60*math.exp(y)) +
            math.sin(70*math.sin(x)) +
            math.sin(math.sin(80*y)) -
            math.sin(10*(x + y)) +
            0.25*(math.pow(x, 2) +
                  math.pow(y, 2)))
The function used to mutate the values:
def mutate(X_Value, Y_Value):
    mutationResult_X = X_Value + randomNumForInput()
    mutationResult_Y = Y_Value + randomNumForInput()
    while mutationResult_X > 4 or mutationResult_X < -4:
        mutationResult_X = X_Value + randomNumForInput()
    while mutationResult_Y > 4 or mutationResult_Y < -4:
        mutationResult_Y = Y_Value + randomNumForInput()
    mutationResults = [mutationResult_X, mutationResult_Y]
    return mutationResults
randomNumForInput simply returns a random number between -4 and 4 (the interval limits for the search), so it is equivalent to random.uniform(-4, 4).
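For completeness, a minimal definition consistent with that description (the original helper is not shown in the question):

import random

def randomNumForInput():
    # Uniform draw from the search interval [-4, 4], as described above.
    return random.uniform(-4, 4)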
This is the central function of the program.
def simulated_annealing(f):
    """Performs simulated annealing to find a solution"""
    # Start by initializing the current state with the initial state
    # acquired by a random generation of a number and then using it
    # in the noisy func; also set the solution (best_state) as current_state
    # for a start
    pCurSelect = [randomNumForInput(), randomNumForInput()]
    current_state = f(pCurSelect[0], pCurSelect[1])
    best_state = current_state
    # Begin time monitoring, this will represent the
    # number of steps over time
    TimeStamp = 1
    # Init current temp via the func, using such values as to get the initial temp
    initial_temp = 100
    final_temp = .1
    alpha = 0.001
    num_of_steps = 1000000
    # calculates by how much the temperature should be tweaked
    # each iteration
    # suppose the number of steps is linear, we'll send in 100
    temp_Delta = calcTempDelta(initial_temp, final_temp, num_of_steps)
    # set current_temp via initial temp
    current_temp = getTemperature(initial_temp, temp_Delta)
    # max_iterations = 100
    # initial_temp = get_Temperature_Poly(TimeStamp)
    # current_temp > final_temp
    while current_temp > final_temp:
        # get a mutated value from the current value,
        # hence being a 'neighbour' value;
        # with it, acquire the neighbouring state
        # to the current state
        neighbour_values = mutate(pCurSelect[0], pCurSelect[1])
        neighbour_state = f(neighbour_values[0], neighbour_values[1])
        # calculate the difference between the newly mutated
        # neighbour state and the current state
        delta_E_Of_States = neighbour_state - current_state
        # Check if neighbour_state is the best state so far
        # if the new solution is better (lower), accept it
        if delta_E_Of_States <= 0:
            pCurSelect = neighbour_values
            current_state = neighbour_state
            if current_state < best_state:
                best_state = current_state
        # if the new solution is not better, accept it with a probability of e^(-cost/temp)
        else:
            if random.uniform(0, 1) < math.exp(-(delta_E_Of_States) / current_temp):
                pCurSelect = neighbour_values
                current_state = neighbour_state
        # Here, we'd decrement the temperature or increase the timestamp, normally
        """current_temp -= alpha"""
        # print("Run number: " + str(TimeStamp) + " current_state = " + str(current_state))
        # increment TimeStamp
        TimeStamp = TimeStamp + 1
        # calc temp for next iteration
        current_temp = getTemperature(current_temp, temp_Delta)
    # print("Iteration Count: " + str(TimeStamp))
    return best_state
alpha is not used in this implementation; instead, the temperature is decreased linearly using the following functions:
def calcTempDelta(T_Initial, T_Final, N):
    return ((T_Initial-T_Final)/N)

def getTemperature(T_old, T_new):
    return (T_old - T_new)
This is how I implemented the solution described on page 245 of the book. However, this implementation does not return the global minimum of the noisy function, but rather one of its nearby local minima.
The reasons I implemented the solution in this way are twofold:
It has been provided to me as a working example of linear temperature moderation, and thus a working template.
Although I have tried to understand the other forms of temperature moderation laid out on pages 248-249 of the book, it is not entirely clear to me how the variable "Ts" is calculated, and even after going through some of the sources the book cites, it still remains opaque to me. So I figured I would rather get this "simple" solution working correctly first, before attempting other approaches to temperature quenching (logarithmic, exponential, etc.).
Since then I have tried numerous ways to reach the global minimum of the noisy function through various iterations of the code, which would be too much to post here all at once. Among other things, I have tried to:
Decrease the randomly rolled number each iteration, so that the search happens within a smaller scope every time; this gave more consistent but still incorrect results.
Mutate by different increments, say between -1 and 1, etc. Same effect.
Rewrite mutate to examine the points neighbouring the current point via some step size: add/subtract that step size from the current point's x/y values, compute the difference between each newly generated point and the current point (the delta of E's, basically), and return the values of whichever candidate produced the smallest difference, i.e. its closest neighbour.
Reduce the interval limits over which the search occurs.
In these variants (the ones involving step sizes, reduced limits, and checking neighbours by quadrant), I used movements consisting of some constant alpha times the time_stamp.
These and other solutions I have attempted have not worked, either producing even less accurate results (albeit in some cases more consistent ones) or, in one case, not working at all.
So I must be missing something, whether it is the temperature moderation or the precise way (formula) in which I'm supposed to make the next step (mutate) in the algorithm.
I know it's a lot to take in and look at, but I'd appreciate any constructive criticism/help/advice you can provide.
If it would help to see code from the other solution attempts, I'll post it on request.
It is important that you keep track of what you are doing.
I have put a few important tips on frigidum
The alpha cooling generally works well; it makes sure you don't speed through the interesting sweet spot, where roughly 0.1 of the proposals are accepted.
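As a side note (not part of frigidum itself): "alpha" cooling multiplies the temperature by a constant factor each step instead of subtracting a fixed amount, so a much larger fraction of the run is spent at low temperatures, where the careful local exploration happens. A tiny comparison with made-up constants:

T_start, T_stop = 100.0, 0.1

linear_steps, T = 0, T_start
while T > T_stop:
    T -= 0.001            # fixed decrement, as in the question's linear schedule
    linear_steps += 1

geometric_steps, T = 0, T_start
while T > T_stop:
    T *= 0.999            # example alpha; the cooling slows down as T shrinks
    geometric_steps += 1

# The linear run spends under 1% of its steps below T = 1;
# the geometric run spends roughly a third of its steps there.
print(linear_steps, geometric_steps)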
Make sure your proposals are not too coarse. I have included an example where I only change x or y, but never both; the idea is that the annealing will take what's best, or take a tour, and let the scheme decide.
I use the package frigidum for the algorithm, but it is pretty much the same as your code. Also notice I have two proposals, a large change and a small change; combinations usually work well.
Finally, I noticed it hops around a lot. A small variation would be to restart from the best-so-far before you go into the last 5% of your cooling.
I use/install frigidum
!pip install frigidum
and I made a small change so the function takes a numpy array:
import math

def noisy_func(X):
    x, y = X
    return (math.exp(math.sin(50*x)) +
            math.sin(60*math.exp(y)) +
            math.sin(70*math.sin(x)) +
            math.sin(math.sin(80*y)) -
            math.sin(10*(x + y)) +
            0.25*(math.pow(x, 2) +
                  math.pow(y, 2)))
import frigidum
import numpy as np
import random

def random_start():
    return np.random.random(2) * 4

def random_small_step(x):
    if np.random.random() < .5:
        return np.clip(x + np.array([0, 0.02 * (random.random() - .5)]), -4, 4)
    else:
        return np.clip(x + np.array([0.02 * (random.random() - .5), 0]), -4, 4)

def random_big_step(x):
    if np.random.random() < .5:
        return np.clip(x + np.array([0, 0.5 * (random.random() - .5)]), -4, 4)
    else:
        return np.clip(x + np.array([0.5 * (random.random() - .5), 0]), -4, 4)

local_opt = frigidum.sa(random_start=random_start,
                        neighbours=[random_small_step, random_big_step],
                        objective_function=noisy_func,
                        T_start=10**2,
                        T_stop=0.00001,
                        repeats=10**4,
                        copy_state=frigidum.annealing.copy)
The output of the above was
---
Neighbour Statistics:
(proportion of proposals which got accepted *and* changed the objective function)
random_small_step : 0.451045
random_big_step : 0.268002
---
(Local) Minimum Objective Value Found:
-3.30669277
With the above code I sometimes get below -3, but I also noticed that it sometimes finds something around -2 and then gets stuck in the last phase.
So a small tweak would be to re-anneal the last phase of the annealing, starting from the best point found so far.
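A sketch of that idea as a plain loop (not using frigidum, so the return format is explicit; noisy_func, random_start and random_small_step are the functions defined above, and the temperatures are made-up values):

import math, random

def anneal(start, T_start, T_stop, alpha=0.999):
    xy = start
    best_xy, best_val = xy, noisy_func(xy)
    cur_val = best_val
    T = T_start
    while T > T_stop:
        cand = random_small_step(xy)
        cand_val = noisy_func(cand)
        # accept improvements always, worsenings with the usual Boltzmann probability
        if cand_val <= cur_val or random.random() < math.exp((cur_val - cand_val) / T):
            xy, cur_val = cand, cand_val
            if cur_val < best_val:
                best_xy, best_val = xy, cur_val
        T *= alpha
    return best_xy, best_val

best_xy, best_val = anneal(random_start(), T_start=100.0, T_stop=0.001)
# Second, colder pass that starts from the best point found so far.
best_xy, best_val = anneal(best_xy, T_start=0.5, T_stop=0.00001)
print(best_xy, best_val)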
Hope that helps, let me know if any questions.
"""Some simulations to predict the future portfolio value based on past distribution. x is
a numpy array that contains past returns.The interpolated_returns are the returns
generated from the cdf of the past returns to simulate future returns. The portfolio
starts with a value of 100. portfolio_value is filled up progressively as
the program goes through every loop. The value is multiplied by the returns in that
period and a dollar is removed."""
portfolio_final = []
for i in range(10000):
    portfolio_value = [100]
    rand_values = np.random.rand(600)
    interpolated_returns = np.interp(rand_values, cdf_values, x)
    interpolated_returns = np.add(interpolated_returns, 1)
    for j in range(1, len(interpolated_returns)+1):
        portfolio_value.append(interpolated_returns[j-1]*portfolio_value[j-1])
        portfolio_value[j] = portfolio_value[j]-1
    portfolio_final.append(portfolio_value[-1])
print(np.mean(portfolio_final))
I couldn't find a way to write this code using numpy. I was having a look at iterations using nditer but I was unable to move ahead with that.
I guess the easiest way to figure out how to vectorize this is to look at the equations that govern the evolution and see how your portfolio actually iterates, finding patterns that can be vectorized, instead of trying to vectorize the code you already have. You will notice that a cumulative product (cumprod) appears naturally in the iteration.
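To spell that out (an added note, using the notation of the code below): each step of the inner loop is the linear recurrence

    v_j = r_j \, v_{j-1} - 1, \qquad v_0 = 100,

where r_j is the j-th interpolated return plus one. Unrolling it gives the closed form

    v_n = R_n \Bigl( v_0 - \sum_{k=1}^{n} \frac{1}{R_k} \Bigr), \qquad R_k = \prod_{j=1}^{k} r_j,

which is exactly the np.cumprod / np.cumsum combination used in the vectorized block below.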
Nevertheless, you can find the semi-vectorized code below. I included your code as well so that you can compare the results. I also included a simple loop version of your code, which is much easier to read and translate into mathematical equations; if you share this code with somebody else, I would definitely use the simple loop option. If you want some fancy-pants vectorizing, you can use the vector version. In case you need to keep track of the single steps, you can also add an array to the simple loop option and append the pv at every step.
Hope that helps.
Edit: I have not tested anything for speed. That's something you can easily do yourself with timeit.
import numpy as np
from scipy.special import erf

# Prepare simple return model - normally distributed with mu & sigma = 0.01
x = np.linspace(-10, 10, 100)
cdf_values = 0.5*(1+erf((x-0.01)/(0.01*np.sqrt(2))))

# Prepare setup such that every code snippet uses the same number of steps
# and the same random numbers
nSteps = 600
nIterations = 1
rnd = np.random.rand(nSteps)

# Your code - gives the (supposedly) correct results
portfolio_final = []
for i in range(nIterations):
    portfolio_value = [100]
    rand_values = rnd
    interpolated_returns = np.interp(rand_values, cdf_values, x)
    interpolated_returns = np.add(interpolated_returns, 1)
    for j in range(1, len(interpolated_returns)+1):
        portfolio_value.append(interpolated_returns[j-1]*portfolio_value[j-1])
        portfolio_value[j] = portfolio_value[j]-1
    portfolio_final.append(portfolio_value[-1])
print(np.mean(portfolio_final))

# Using vectors
portfolio_final = []
for i in range(nIterations):
    portfolio_values = np.ones(nSteps)*100.0
    rcp = np.cumprod(np.interp(rnd, cdf_values, x) + 1)
    portfolio_values = rcp * (portfolio_values - np.cumsum(1.0/rcp))
    portfolio_final.append(portfolio_values[-1])
print(np.mean(portfolio_final))

# Simple loop
portfolio_final = []
for i in range(nIterations):
    pv = 100
    rets = np.interp(rnd, cdf_values, x) + 1
    for i in range(nSteps):
        pv = pv * rets[i] - 1
    portfolio_final.append(pv)
print(np.mean(portfolio_final))
Forget about np.nditer. It does not improve the speed of iterations. Only use it if you intend to go on and use the C version (via Cython).
I'm puzzled about that inner loop. What is it supposed to be doing that is special? Why the loop?
In tests with simulated values these 2 blocks of code produce the same thing:
interpolated_returns = np.add(interpolated_returns, 1)
for j in range(1, len(interpolated_returns)+1):
    portfolio_value.append(interpolated_returns[j-1]*portfolio[j-1])
    portfolio_value[j] = portfolio_value[j]-1

interpolated_returns = (interpolated_returns+1)*portfolio - 1
portfolio_value = portfolio_value + interpolated_returns.tolist()
I am assuming that interpolated_returns and portfolio are 1d arrays of the same length.
I have a large (>2000 equations) system of ODEs that I want to solve with scipy's odeint in Python.
I have three problems that I want to solve (maybe I will have to ask 3 different questions?).
For simplicity, I will explain them here with a toy model, but please keep in mind that my system is large.
Suppose I have the following system of ODE's:
dS/dt = -beta*S
dI/dt = beta*S - gamma*I
dR/dt = gamma*I
with beta = c*p*I
where c, p and gamma are parameters that I want to pass to odeint.
odeint is expecting a file like this:
def myODEs(y, t, params):
    c, p, gamma = params
    beta = c*p
    S = y[0]
    I = y[1]
    R = y[2]
    dydt = [-beta*S*I,
            beta*S*I - gamma*I,
            - gamma*I]
    return dydt
that then can be passed to odeint like this:
myoutput = odeint(myODEs, [1000, 1, 0], np.linspace(0, 100, 50), args = ([c,p,gamma], ))
I generated a text file in Mathematica, say myOdes.txt, where each line of the file corresponds to the RHS of my system of ODE's, so it looks like this
#myODEs.txt
-beta*S*I
beta*S*I - gamma*I
- gamma*I
My text file looks similar to what odeint is expecting, but I am not quite there yet.
I have three main problems:
How can I pass my text file so that odeint understands that this is the RHS of my system?
How can I define my variables in a smart way, that is, in a systematic way? Since there are >2000 of them, I cannot manually define them. Ideally I would define them in a separate file and read that as well.
How can I pass the parameters (there are a lot of them) as a text file too?
I read this question, which is close to my problems 1 and 2, and tried to copy it (I put values directly in for the parameters so that I didn't have to worry about point 3 above):
systemOfEquations = []
with open("myODEs.txt", "r") as fp:
    for line in fp:
        systemOfEquations.append(line)

def dX_dt(X, t):
    vals = dict(S=X[0], I=X[1], R=X[2], t=t)
    return [eq for eq in systemOfEquations]

out = odeint(dX_dt, [1000, 1, 0], np.linspace(0, 1, 5))
but I got the error:
odepack.error: Result from function call is not a proper array of floats.
ValueError: could not convert string to float: -((12*0.01/1000)*I*S),
Edit: I modified my code to:
systemOfEquations = []
with open("SIREquationsMathematica2.txt", "r") as fp:
    for line in fp:
        pattern = regex.compile(r'.+?\s+=\s+(.+?)$')
        expressionString = regex.search(pattern, line)
        systemOfEquations.append(sympy.sympify(expressionString))

def dX_dt(X, t):
    vals = dict(S=X[0], I=X[1], R=X[2], t=t)
    return [eq for eq in systemOfEquations]

out = odeint(dX_dt, [1000, 1, 0], np.linspace(0, 100, 50), )
and this works (I don't quite get what the first two lines of the for loop are doing). However, I would like to make the process of defining the variables more automatic, and I still don't know how to use this solution while passing the parameters in a text file. Along the same lines, how can I define parameters (that depend on the variables) inside the dX_dt function?
Thanks in advance!
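Regarding making the variable definitions automatic (question 2 above), here is a hypothetical sketch, assuming a file "variables.txt" with one variable name per line, in the same order as the components of X. Note that it also substitutes the dictionary into the sympy expressions, which the snippet above builds (vals) but never actually uses:

with open("variables.txt") as fp:
    var_names = [line.strip() for line in fp if line.strip()]

def dX_dt(X, t):
    vals = dict(zip(var_names, X))
    vals["t"] = t
    # Substitute numbers into each parsed sympy expression and return plain floats.
    return [float(eq.subs(vals)) for eq in systemOfEquations]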
This isn't a full answer, but rather some observations/questions that are too long for comments.
dX_dt is called many times by odeint with a 1d array y and a scalar time t; the extra parameters are supplied via the args tuple. y is generated by odeint and varies with each step, so dX_dt should be streamlined to run fast.
Usually an expression like [eq for eq in systemOfEquations] can be simplified to just systemOfEquations; the comprehension doesn't do anything meaningful. But there may be something about systemOfEquations that requires it.
I'd suggest you print out systemOfEquations (for this small 3-line case), both for your benefit and ours. You are using sympy to translate the strings from the file into equations, and we need to see what it produces.
Note that myODEs is a function, not a file. It may be imported from a module, which of course is a file.
The point of vals = dict(S=X[0], I=X[1], R=X[2], t=t) is to produce a dictionary that the sympy expressions can work with. A more direct (and I think faster) dX_dt function would look like:
def myODEs(y, t, params):
    c, p, gamma = params
    beta = c*p
    dydt = [-beta*y[0]*y[1],
            beta*y[0]*y[1] - gamma*y[1],
            - gamma*y[1]]
    return dydt
I suspect that a dX_dt that evaluates sympy-generated expressions will be a lot slower than a 'hardcoded' one like this.
I'm going to add the sympy tag because, as written, that is the key to translating your text file into a function that odeint can use.
I'd be inclined to put the equation variability in the parameters passed via args, rather than in a list of sympy expressions.
That is, replace:
dydt = [-beta*y[0]*y[1],
        beta*y[0]*y[1] - gamma*y[1],
        - gamma*y[1]]
with something like
arg12 = np.array([-beta, beta, 0])
arg1 = np.array([0, -gamma, -gamma])
arg0 = np.array([0, 0, 0])
dydt = arg12*y[0]*y[1] + arg1*y[1] + arg0*y[0]
Once this is right, the argxx definitions can be moved outside dX_dt and passed via args. Then dX_dt is just a simple, fast calculation.
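A sketch of what that refactoring could look like (the parameter values are made up; the coefficient arrays copy the ones above):

import numpy as np
from scipy.integrate import odeint

beta, gamma = 0.012, 0.5                     # example values only
arg12 = np.array([-beta, beta, 0])
arg1 = np.array([0, -gamma, -gamma])
arg0 = np.array([0, 0, 0])

def dX_dt(y, t, arg12, arg1, arg0):
    # Just array arithmetic; no sympy objects and no Python-level loop.
    return arg12*y[0]*y[1] + arg1*y[1] + arg0*y[0]

out = odeint(dX_dt, [1000.0, 1.0, 0.0], np.linspace(0, 100, 50),
             args=(arg12, arg1, arg0))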
This whole sympy approach may work fine, but I'm afraid that in practice it will be slow. But someone with more sympy experience may have other insights.
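For completeness, here is a hedged sketch (not tested on the real 2000-equation system) of how the text-file workflow could be kept while avoiding sympy evaluation inside the RHS: parse the lines once, then compile them with sympy.lambdify into plain numeric functions.

import numpy as np
import sympy as sp
from scipy.integrate import odeint

# Assumed file contents, one right-hand side per line, copied from myODEs.txt above.
lines = ["-beta*S*I", "beta*S*I - gamma*I", "- gamma*I"]

S, I, R, t, beta, gamma = sp.symbols("S I R t beta gamma")
local_dict = {"S": S, "I": I, "R": R, "t": t, "beta": beta, "gamma": gamma}

# Pass locals so that "I" is parsed as the variable I, not sympy's imaginary unit.
exprs = [sp.sympify(line, locals=local_dict) for line in lines]

# Compile each expression once; no sympy objects are evaluated inside the RHS after this.
funcs = [sp.lambdify((S, I, R, t, beta, gamma), e, "numpy") for e in exprs]

def dX_dt(X, time, beta_val, gamma_val):
    return [f(X[0], X[1], X[2], time, beta_val, gamma_val) for f in funcs]

# Example parameter values (c*p folded into beta here, just for illustration).
out = odeint(dX_dt, [1000.0, 1.0, 0.0], np.linspace(0, 100, 50), args=(0.012, 0.5))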
I am working on some very large-scale linear programming problems. (The matrices are currently roughly 1000x1000, and those are the 'mini' ones.)
I thought that I had the program running successfully, but I have realized that I am getting some very unintuitive answers. For example, say I maximize x+y+z subject to the constraints x+y<10 and y+z<5. I run this and get an optimal solution. Then I run the same problem with looser constraints: x+y<20 and y+z<5. Yet in the second run, my maximum decreases!
I have painstakingly gone through and assured myself that the constraints are loading correctly.
Does anyone know what the problem might be?
I found something in the documentation about lpx_check_kkt, which seems to tell you whether your solution is likely to be correct (high or low confidence), but I don't know how to use it.
I made an attempt and got the error message lpx_check_kkt not defined.
I am adding some code as an addendum in hopes that someone can find an error.
The result of this is that it claims an optimal solution has been found. And yet every time I raise an upper bound, it gets less optimal.
I have confirmed that my bounds are going up and not down.
size = 10000000+1
ia = intArray(size)
ja = intArray(size)
ar = doubleArray(size)

prob = glp_create_prob()
glp_set_prob_name(prob, "sample")
glp_set_obj_dir(prob, GLP_MAX)
glp_add_rows(prob, Num_constraints)

for x in range(Num_constraints):
    Variables.add_variables(Constraints_for_simplex)
    glp_set_row_name(prob, x+1, Variables.variers[x])
    glp_set_row_bnds(prob, x+1, GLP_UP, 0, Constraints_for_simplex[x][1])
    print 'we set the row_bnd for', x+1, ' to ', Constraints_for_simplex[x][1]

glp_add_cols(prob, len(All_Loops))
for x in range(len(All_Loops)):
    glp_set_col_name(prob, x+1, "".join(["x", str(x)]))
    glp_set_col_bnds(prob, x+1, GLP_LO, 0, 0)
    glp_set_obj_coef(prob, x+1, 1)

for x in range(1, len(All_Loops)+1):
    z = Constraints_for_simplex[0][0][x-1]
    ia[x] = 1; ja[x] = x; ar[x] = z

x = len(All_Loops)+1
while x < Num_constraints + len(All_Loops):
    for y in range(2, Num_constraints+1):
        z = Constraints_for_simplex[y-1][0][0]
        ia[x] = y; ja[x] = 1; ar[x] = z
        x += 1

x = Num_constraints + len(All_Loops)
while x < len(All_Loops)*(Num_constraints-1):
    for z in range(2, len(All_Loops)+1):
        for y in range(2, Num_constraints+1):
            if x < len(All_Loops)*Num_constraints+1:
                q = Constraints_for_simplex[y-1][0][z-1]
                ia[x] = y; ja[x] = z; ar[x] = q
                x += 1

glp_load_matrix(prob, len(All_Loops)*Num_constraints, ia, ja, ar)
glp_exact(prob, None)
Z = glp_get_obj_val(prob)
Start by solving your problematic instances with different solvers and checking the objective function value. If you can export your model to .mps format (I don't know how to do this with GLPK, sorry), you can upload the mps file to http://www.neos-server.org/neos/solvers/index.html and solve it with several different LP solvers.
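If your Python bindings expose the same names as the GLPK C API (the code above suggests they do), writing an MPS file may be as simple as the call below; glp_write_mps and GLP_MPS_FILE are the C API names, so treat this as an untested sketch and check what your wrapper actually provides:

# Export the problem built above to free-format MPS for use with other solvers
# (e.g. via the NEOS server mentioned above); returns 0 on success.
ret = glp_write_mps(prob, GLP_MPS_FILE, None, "sample.mps")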