I'm making a fast and simple equation solver in Python for a homework assignment, where for each eps_komp I need to find the value of sig_a between 0 and 700 that makes the equation as close to zero as possible. I've done this with two for loops: the first selects the eps_komp value and the second searches for the closest value of sig_a between 0 and 700. I defined the allowable error with "delta".
The logic is similar to the bisection method. This is the code:
eps_komp = [0.00012893048999999997,
0.018839115269999998,
0.01230539995,
0.022996934109999999,
-0.0037319012899999999,
0.023293921169999999,
0.0036927752099999997,
0.020621037629999998,
0.0063656587500000002,
0.020324050569999998]
import numpy as np

Rm = 700
sigma = np.linspace(0, Rm-0.01, Rm/0.01)
delta = 0.001
sig_a = []
for j in range(len(eps_komp)):
    eps_j = eps_komp[j]
    for i in range(len(sigma)):
        eps_j - sigma[i]/Emod - (sigma[i]/RO_K)**(1/RO_n) = diff
        if diff <= delta:
            sig_a.append(sigma[i])
The values of eps_komp shown are just the first 10; there are more of them, but I only included the first 10 as an example.
Now I keep getting this error:
SyntaxError: can't assign to operator
I know it has something to do with an incorrect index, but I just can't see the problem...
If anyone can help, it would mean a lot to me. Thanks.
Luka
To assign the result of the calculation to diff, the variable needs to be on the left side of the equals sign:
diff = eps_j - sigma[i]/Emod - (sigma[i]/RO_K)**(1/RO_n)
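With that fixed the loop runs, though a few more things are worth noting: the posted snippet never defines Emod, RO_K, or RO_n, newer NumPy versions require an integer sample count in np.linspace, and diff <= delta also accepts any large negative difference. A minimal runnable sketch with those points addressed, reusing eps_komp from the question; the three material constants here are placeholder values, not from the question:

import numpy as np

# Placeholder constants for illustration only -- the question does not give them
Emod = 210000.0   # assumed elastic modulus
RO_K = 1000.0     # assumed hardening coefficient
RO_n = 0.2        # assumed hardening exponent

Rm = 700
sigma = np.linspace(0, Rm - 0.01, int(Rm / 0.01))  # int() needed by newer NumPy
delta = 0.001
sig_a = []

for eps_j in eps_komp:
    for s in sigma:
        diff = eps_j - s/Emod - (s/RO_K)**(1/RO_n)
        if abs(diff) <= delta:   # abs() so the match is within delta on either side
            sig_a.append(s)
            break                # keep only the first sigma within tolerance

A vectorized alternative is np.argmin(np.abs(eps_j - sigma/Emod - (sigma/RO_K)**(1/RO_n))), which picks the closest sigma directly and avoids the inner loop.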
I am trying to run a simple LP Pyomo ConcreteModel with the Gurobi solver:
import pyomo.environ as pyo
from pyomo.opt import SolverFactory

model = pyo.ConcreteModel()

nb_years = 3
nb_mins = 2
step = 8760*1.5
delta = 10000

# Range of hours
model.h = pyo.RangeSet(0, 8760*nb_years-1)

# Individual minimums
model.min = pyo.RangeSet(0, nb_mins-1)
model.mins = pyo.Var(model.min, within=model.h, initialize=[i for i in model.min])

def maximal_step_between_mins_constraint_rule(model, min):
    next_min = min + 1 if min < nb_mins-1 else 0
    if next_min == 0:  # We need to take circularity into account
        return 8760*nb_years - model.mins[min] + model.mins[next_min] <= step + delta
    return model.mins[next_min] - model.mins[min] <= step + delta

def minimal_step_between_mins_constraint_rule(model, min):
    next_min = min + 1 if min < nb_mins-1 else 0
    if next_min == 0:  # We need to take circularity into account
        return 8760*nb_years - model.mins[min] + model.mins[next_min] >= step - delta
    return model.mins[next_min] - model.mins[min] >= step - delta

model.input_list = pyo.Param(model.h, initialize=my_input_list, within=pyo.Reals, mutable=False)

def objective_rule(model):
    return sum([model.input_list[model.mins[min]] for min in model.min])

model.maximal_step_between_mins_constraint = pyo.Constraint(model.min, rule=maximal_step_between_mins_constraint_rule)
model.minimal_step_between_mins_constraint = pyo.Constraint(model.min, rule=minimal_step_between_mins_constraint_rule)
model.objective = pyo.Objective(rule=objective_rule, sense=pyo.minimize)

opt = SolverFactory('gurobi')
results = opt.solve(model, options={'Presolve': 2})
Basically I am trying to find two hours in my input list (which looks like this), spanning 3 years of data, with constraints on the distance separating them, and where the sum of both values is minimized by the model.
I implemented my list as a parameter of fixed value; however, even though mutable is set to False, running my model produces this error:
ERROR: Rule failed when generating expression for Objective objective with
index None: RuntimeError: Error retrieving the value of an indexed item
input_list: index 0 is not a constant value. This is likely not what you
meant to do, as if you later change the fixed value of the object this
lookup will not change. If you understand the implications of using non-
constant values, you can get the current value of the object using the
value() function.
ERROR: Constructing component 'objective' from data=None failed: RuntimeError:
Error retrieving the value of an indexed item input_list: index 0 is not a
constant value. This is likely not what you meant to do, as if you later
change the fixed value of the object this lookup will not change. If you
understand the implications of using non-constant values, you can get the
current value of the object using the value() function.
Any idea why I get this error and how to fix it?
Obviously, changing the objective function to sum([pyo.value(model.input_list[model.mins[min]]) for min in model.min]) is not a solution to my problem.
I also tried not using Pyomo parameters (with something like sum([input_list[model.mins[min]] for min in model.min])), but Pyomo can't iterate over it and raises the following error:
ERROR: Constructing component 'objective' from data=None failed: TypeError:
list indices must be integers or slices, not _GeneralVarData
You have a couple of serious syntax and structure problems in your model. Not all of the elements are included in the code you provided, but you (minimally) need to fix these:
In this snippet, you are initializing the value of each variable to a list, which is invalid. Start with no variable initialization:
model.mins = pyo.Var(model.min, within=model.h, initialize=[i for i in model.min])
In this summation, you appear to be using a variable as the index for some data. This is an invalid construct: the value of the variable is unknown when the model is built. You need to reformulate:
return sum([model.input_list[model.mins[min]] for min in model.min])
My suggestion: Start with a very small chunk of your data and pprint() your model and read it carefully for quality before you attempt to solve.
model.pprint()
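To give a concrete idea of one possible reformulation (a sketch under assumptions, not necessarily the right model for your application): instead of indexing the data with a variable, introduce one binary per hour and make the objective a linear sum of data times binaries. The small data dictionary below is a hypothetical stand-in for my_input_list:

import pyomo.environ as pyo

# Hypothetical small data stand-in for my_input_list (not from the question)
my_input_list = {h: float(h % 7) for h in range(24)}
nb_mins = 2

model = pyo.ConcreteModel()
model.h = pyo.RangeSet(0, 23)
model.input_list = pyo.Param(model.h, initialize=my_input_list, within=pyo.Reals)

# One binary per hour: pick[h] == 1 means hour h is selected as a minimum
model.pick = pyo.Var(model.h, within=pyo.Binary)

# Select exactly nb_mins hours
model.count = pyo.Constraint(expr=sum(model.pick[h] for h in model.h) == nb_mins)

# The objective is now linear: the data are constants, pick is the variable
model.objective = pyo.Objective(
    expr=sum(model.input_list[h] * model.pick[h] for h in model.h),
    sense=pyo.minimize)

The min/max spacing constraints would then also need to be rewritten in terms of pick (for example, forbidding pairs of selected hours that violate the step/delta bounds), which is more work but keeps the whole model linear.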
I have been trying to get into Python optimization, and I have found that Pyomo is probably the way to go; I had some experience with Gurobi as a student, but of course that is no longer possible, so I have to look into the open-source options.
I basically want to solve a non-linear mixed-integer problem in which I will minimize a certain ratio. The problem itself is setting up a power purchase agreement (PPA) in a renewable energy scenario. Depending on the electricity generated, you will have to either buy or sell electricity according to the PPA.
The only starting data is the generation; the PPA is the main decision variable, but I will need others. "buy", "sell", "b1" and "b2" are unknown without the PPA value. These are the equations:
[Image: the equations that rule the problem, written by hand.]
Using pyomo, I was trying to set up the problem as:
# Dataframe with my Generation information:
January = Data['Full_Data'][(Data['Full_Data']['Month'] == 1) & (Data['Full_Data']['Year'] == 2011)]
Gen = January['Producible (MWh)']
Time = len(Gen)
M = 100

# Model variables and definition:
m = ConcreteModel()
m.IDX = range(Time)
m.PPA = Var(initialize=2.0, bounds=(1, 7))
m.compra = Var(m.IDX, bounds=(0, None))
m.venta = Var(m.IDX, bounds=(0, None))
m.b1 = Var(m.IDX, within=Binary)
m.b2 = Var(m.IDX, within=Binary)
And then, the constraint; only the first one, as I was already getting errors:
m.b1_rule = Constraint(
    expr = (((Gen[i] - PPA)/M for i in m.IDX) <= m.b1[i])
)
which gives me the error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-5d5f5584ebca> in <module>
1 m.b1_rule = Constraint(
----> 2 expr = (((Generacion[i] - PPA)/M for i in m.IDX) <= m.b1[i])
3 )
pyomo\core\expr\numvalue.pyx in pyomo.core.expr.numvalue.NumericValue.__ge__()
pyomo\core\expr\logical_expr.pyx in pyomo.core.expr.logical_expr._generate_relational_expression()
AttributeError: 'generator' object has no attribute 'is_expression_type'
I honestly have no idea what this means. I feel like this should be a simple problem, but I am struggling with the syntax. I basically have to apply a constraint to each individual data point from "Generation"; there is no sum involved. All constraints are 1-to-1 constraints set so that the physical energy requirements make sense.
How do I set up the constraints like this?
Thank you very much
You have a couple of things to fix. First, the error you are getting is because you have "extra parentheses" around an expression that Python is trying to convert to a generator. So, step 1 is to remove the outer parentheses, but that will not solve your issue.
You said you want to generate this constraint "for each" value of your index. Any time you want to generate copies of a constraint "for each", you will need to either make a constraint list and add to it with some kind of loop, or use a function-rule combination. There are examples of each in the Pyomo documentation and plenty on this site (I have posted a ton if you look at some of my posts). I would suggest the function-rule combo, and you should end up with something like:
def my_constr(m, i):
    return m.Gen[i] - m.PPA <= m.b1[i] * M

m.C1 = Constraint(m.IDX, rule=my_constr)
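For the rule above to reference m.Gen, the generation data has to be attached to the model; in the original snippet, Gen is a pandas Series living outside the model. A minimal self-contained sketch, with made-up generation numbers standing in for January['Producible (MWh)']:

import pyomo.environ as pyo

# Hypothetical generation data in place of the dataframe column
gen_data = [3.2, 5.1, 4.7, 6.0]
M = 100

m = pyo.ConcreteModel()
m.IDX = pyo.RangeSet(0, len(gen_data) - 1)
m.Gen = pyo.Param(m.IDX, initialize={i: v for i, v in enumerate(gen_data)})
m.PPA = pyo.Var(initialize=2.0, bounds=(1, 7))
m.b1 = pyo.Var(m.IDX, within=pyo.Binary)

# One big-M constraint per index: b1[i] is forced to 1 whenever Gen[i] exceeds PPA
def my_constr(m, i):
    return m.Gen[i] - m.PPA <= m.b1[i] * M

m.C1 = pyo.Constraint(m.IDX, rule=my_constr)

The rule function is called once per element of m.IDX, which gives exactly the "one constraint per data point" behaviour asked for.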
I'm trying to find the global minimum of the function from the hundred-digit, hundred-dollar challenge, question #4, as an exercise in simulated annealing.
As the basis of my understanding and approach to writing the code, I refer to the book Global Optimization Algorithms (version 3), which can be found for free online.
Consequently, I initially came up with the following code:
The noisy func:
import math

def noisy_func(x, y):
    return (math.exp(math.sin(50*x)) +
            math.sin(60*math.exp(y)) +
            math.sin(70*math.sin(x)) +
            math.sin(math.sin(80*y)) -
            math.sin(10*(x + y)) +
            0.25*(math.pow(x, 2) +
                  math.pow(y, 2)))
The function used to mutate the values:
def mutate(X_Value, Y_Value):
    mutationResult_X = X_Value + randomNumForInput()
    mutationResult_Y = Y_Value + randomNumForInput()
    while mutationResult_X > 4 or mutationResult_X < -4:
        mutationResult_X = X_Value + randomNumForInput()
    while mutationResult_Y > 4 or mutationResult_Y < -4:
        mutationResult_Y = Y_Value + randomNumForInput()
    mutationResults = [mutationResult_X, mutationResult_Y]
    return mutationResults
randomNumForInput simply returns a random number between -4 and 4 (the interval limits for the search). Hence it is equivalent to random.uniform(-4, 4).
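For completeness (this helper is described but not shown in the question), it would be something like:

import random

def randomNumForInput():
    # Uniform draw over the search interval [-4, 4]
    return random.uniform(-4, 4)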
This is the central function of the program.
def simulated_annealing(f):
    """Performs simulated annealing to find a solution"""
    # Start by initializing the current state with the initial state
    # acquired by a random generation of a number and then using it
    # in the noisy func; also set the solution (best_state) as current_state
    # for a start
    pCurSelect = [randomNumForInput(), randomNumForInput()]
    current_state = f(pCurSelect[0], pCurSelect[1])
    best_state = current_state

    # Begin time monitoring, this will represent the
    # number of steps over time
    TimeStamp = 1

    # Init current temp via the func, using such values as to get the initial temp
    initial_temp = 100
    final_temp = .1
    alpha = 0.001
    num_of_steps = 1000000

    # calculates by how much the temperature should be tweaked
    # each iteration
    # suppose the number of steps is linear, we'll send in 100
    temp_Delta = calcTempDelta(initial_temp, final_temp, num_of_steps)

    # set current_temp via initial temp
    current_temp = getTemperature(initial_temp, temp_Delta)

    # max_iterations = 100
    # initial_temp = get_Temperature_Poly(TimeStamp)
    # current_temp > final_temp
    while current_temp > final_temp:
        # get a mutated value from the current value,
        # hence being a 'neighbour' value;
        # with it, acquire the neighbouring state
        # to the current state
        neighbour_values = mutate(pCurSelect[0], pCurSelect[1])
        neighbour_state = f(neighbour_values[0], neighbour_values[1])

        # calculate the difference between the newly mutated
        # neighbour state and the current state
        delta_E_Of_States = neighbour_state - current_state

        # Check if neighbour_state is the best state so far:
        # if the new solution is better (lower), accept it
        if delta_E_Of_States <= 0:
            pCurSelect = neighbour_values
            current_state = neighbour_state
            if current_state < best_state:
                best_state = current_state
        # if the new solution is not better, accept it with a probability of e^(-cost/temp)
        else:
            if random.uniform(0, 1) < math.exp(-(delta_E_Of_States) / current_temp):
                pCurSelect = neighbour_values
                current_state = neighbour_state

        # Here, we'd decrement the temperature or increase the timestamp, normally
        """current_temp -= alpha"""
        # print("Run number: " + str(TimeStamp) + " current_state = " + str(current_state))

        # increment TimeStamp
        TimeStamp = TimeStamp + 1

        # calc temp for next iteration
        current_temp = getTemperature(current_temp, temp_Delta)

    # print("Iteration Count: " + str(TimeStamp))
    return best_state
alpha is not used in this implementation; instead, the temperature is moderated linearly using the following funcs:
def calcTempDelta(T_Initial, T_Final, N):
    return (T_Initial - T_Final) / N

def getTemperature(T_old, T_new):
    return (T_old - T_new)
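As a quick sanity check of this linear schedule with the values used above (a worked example, not from the question):

# With initial_temp=100, final_temp=0.1, num_of_steps=1000000:
temp_Delta = calcTempDelta(100, 0.1, 1000000)   # (100 - 0.1) / 1e6 = 9.99e-05
# getTemperature subtracts temp_Delta once per iteration, so the loop runs
# about (100 - 0.1) / 9.99e-05 = 1,000,000 times before current_temp <= final_temp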
This is how I implemented the solution described on page 245 of the book. However, this implementation does not return the global minimum of the noisy function, but rather one of its nearby local minima.
The reasons I implemented the solution this way are twofold:
It was provided to me as a working example of linear temperature moderation, and thus a working template.
Although I have tried to understand the other forms of temperature moderation laid out in the book on pages 248-249, it is not entirely clear to me how the variable "Ts" is calculated, and even after trying to look through some of the sources the book cites, it remains esoteric to me. So I figured I'd rather try to make this "simple" solution work correctly first, before attempting other approaches to temperature quenching (logarithmic, exponential, etc.).
Since then I have tried numerous ways to find the global minimum of the noisy func through various iterations of the code, which would be too much to post here all at once. I've tried different rewrites of this code:
Decreasing the randomly rolled number on each iteration, so as to search within a smaller scope every time; this has given more consistent but still incorrect results.
Mutating by different increments, say between -1 and 1, etc. Same effect.
Rewriting mutate to examine the points neighbouring the current point via some step size: adding/subtracting the step size from the current point's x/y values, checking the difference between each newly generated point and the current point (the delta of E's, basically), and returning the values that produced the lowest difference, i.e. the closest-proximity neighbour.
Reducing the interval limits over which the search occurs.
In the solutions involving step sizes, reduced limits, and checking neighbours by quadrant, I have used movements comprised of some constant alpha times the time_stamp.
These and other solutions I've attempted have not worked, either producing even less accurate results (albeit, in some cases, more consistent ones) or, in one case, not working at all.
Therefore I must be missing something, whether it's to do with the temperature moderation or the precise way (formula) by which I'm supposed to make the next step (mutate) in the algorithm.
I know it's a lot to take in and look at, but I'd appreciate any constructive criticism/help/advice you can provide.
If code from the other solution attempts would be of any help, I'll post it if asked.
It is important that you keep track of what you are doing.
I have put a few important tips on frigidum.
The alpha cooling generally works well; it makes sure you don't speed through the interesting sweet spot, where about 0.1 of the proposals are accepted.
Make sure your proposals are not too coarse. I have included an example where I only change x or y, but never both; the idea is that annealing will take what's best, or take a tour, and let the scheme decide.
I use the package frigidum for the algo, but it's pretty much the same as your code. Also notice I have 2 proposals, a large change and a small change; combinations usually work well.
Finally, I noticed it's hopping a lot. A small variation would be to pick the best-so-far before you go into the last 5% of your cooling.
I use/install frigidum
!pip install frigidum
And I made a small change to make use of numpy arrays:
import math

def noisy_func(X):
    x, y = X
    return (math.exp(math.sin(50*x)) +
            math.sin(60*math.exp(y)) +
            math.sin(70*math.sin(x)) +
            math.sin(math.sin(80*y)) -
            math.sin(10*(x + y)) +
            0.25*(math.pow(x, 2) +
                  math.pow(y, 2)))
import frigidum
import numpy as np
import random

def random_start():
    return np.random.random(2) * 4

def random_small_step(x):
    if np.random.random() < .5:
        return np.clip(x + np.array([0, 0.02 * (random.random() - .5)]), -4, 4)
    else:
        return np.clip(x + np.array([0.02 * (random.random() - .5), 0]), -4, 4)

def random_big_step(x):
    if np.random.random() < .5:
        return np.clip(x + np.array([0, 0.5 * (random.random() - .5)]), -4, 4)
    else:
        return np.clip(x + np.array([0.5 * (random.random() - .5), 0]), -4, 4)
local_opt = frigidum.sa(random_start=random_start,
                        neighbours=[random_small_step, random_big_step],
                        objective_function=noisy_func,
                        T_start=10**2,
                        T_stop=0.00001,
                        repeats=10**4,
                        copy_state=frigidum.annealing.copy)
The output of the above was
---
Neighbour Statistics:
(proportion of proposals which got accepted *and* changed the objective function)
random_small_step : 0.451045
random_big_step : 0.268002
---
(Local) Minimum Objective Value Found:
-3.30669277
With the above code I sometimes get below -3, but I also noticed that sometimes it finds something around -2 and then gets stuck in the last phase.
So a small tweak would be to re-anneal the last phase of the annealing, starting from the best found so far.
Hope that helps, let me know if any questions.
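As a sketch of that re-anneal idea applied to the question's own loop (not frigidum), one could jump back to the best point seen once the cooling enters its final stretch. best_point here is an assumed extra variable that would need to be updated alongside best_state whenever a new best is found:

# Inside the while-loop of simulated_annealing, after the accept/reject step:
if current_temp < 0.05 * initial_temp and current_state > best_state:
    # Restart the final phase of the cooling from the best point found so far
    pCurSelect = list(best_point)   # best_point: assumed tracker of the best coordinates
    current_state = best_state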
With the comments from the answer, I rewrote the code below (math.log1p(x) -> math.log(x)), which now should work and give a good approximation of the volatility.
I am trying to create a short code to calculate the implied volatility of a European Call option. I wrote the code below:
from scipy.stats import norm
import math
norm.cdf(1.96)
# c_p - Call(+1) or Put(-1) option
# P - Price of option
# S - Stock price
# E - Exercise (strike) price
# T - Time to expiration
# r - Risk-free rate
# C = S*N(d_1) - E*e^{-rT}*N(d_2)

def implied_volatility(Price, Stock, Exercise, Time, Rf):
    P = float(Price)
    S = float(Stock)
    E = float(Exercise)
    T = float(Time)
    r = float(Rf)
    sigma = 0.01
    print (P, S, E, T, r)
    while sigma < 1:
        d_1 = float(float((math.log(S/E)+(r+(sigma**2)/2)*T))/float((sigma*(math.sqrt(T)))))
        d_2 = float(float((math.log(S/E)+(r-(sigma**2)/2)*T))/float((sigma*(math.sqrt(T)))))
        P_implied = float(S*norm.cdf(d_1) - E*math.exp(-r*T)*norm.cdf(d_2))
        if P - P_implied < 0.001:
            return sigma
        sigma += 0.001
    return "could not find the right volatility"

print implied_volatility(15, 100, 100, 1, 0.05)
This yields a volatility of 0.595, when it should be somewhere around 0.3203. That is a huge difference...
I know this is not a fast method by any means; I just want to demonstrate how the principle works, but I am not able to calculate a good approximation.
For some reason, when I call the function it gives me a really bad approximation of the actual implied volatility, which I calculated using a Matlab program and the following webpage: Implied Volatility. Could anyone please help me figure out where I made the mistake?
There are two problems I see, neither of which is directly Python related:
You are using log1p(x), which is the natural logarithm of 1+x, while you actually want log(x), which is the natural logarithm of x (cf. Wikipedia).
An option price of 100 is way too high considering the other parameters. Try to calculate the implied volatility for a price of 10 - it should be about 0.18 both by your program and by the calculator you linked.
In Python 2, the result of 5 / 2 is 2: it uses floor division. To fix that, make every number a float. In your implied_volatility function, change P = Price to P = float(Price), S = Stock to S = float(Stock), etc.
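For reference (not part of either answer above), once the Black-Scholes pricing itself is correct, a root-finder can replace the 0.001-step linear scan. A minimal sketch using scipy.optimize.brentq:

from scipy.stats import norm
from scipy.optimize import brentq
import math

def bs_call_price(S, E, T, r, sigma):
    # Black-Scholes price of a European call
    d_1 = (math.log(S/E) + (r + sigma**2/2)*T) / (sigma*math.sqrt(T))
    d_2 = d_1 - sigma*math.sqrt(T)
    return S*norm.cdf(d_1) - E*math.exp(-r*T)*norm.cdf(d_2)

def implied_volatility(P, S, E, T, r):
    # Find the sigma in (1e-4, 5) whose model price matches the market price P
    return brentq(lambda sigma: bs_call_price(S, E, T, r, sigma) - P, 1e-4, 5.0)

print(implied_volatility(15, 100, 100, 1, 0.05))  # about 0.32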
Hey, so I am just working on some coding homework for my Python class using JES. Our assignment is to take a sound, add some white noise to the background, and add an echo as well. There are a few more specifics, but I believe I am fine with those. There are four different functions that we are making: a main, an echo function based on a user-defined length of time and number of echoes, a white noise generation function, and a function to merge the sounds.
Here is what I have so far; I haven't started the merging or the main yet.
# Put the following line at the top of your file. This will let
# you access the random module functions.
import random

# White noise generation function; requires a sound, to match the sound's length
def whiteNoiseGenerator(baseSound):
    noise = makeEmptySound(getLength(baseSound))
    index = 0
    for index in range(0, getLength(baseSound)):
        sample = random.randint(-500, 500)
        setSampleValueAt(noise, index, sample)
    return noise
def multipleEchoesGenerator(sound, delay, number):
    endSound = getLength(sound)
    newEndSound = endSound + (delay * number)
    len = 1 + int(newEndSound / getSamplingRate(sound))
    newSound = makeEmptySound(len)
    echoAmplitude = 1.0
    for echoCount in range(1, number):
        echoAmplitude = echoAmplitude * 0.60
        for posns1 in range(0, endSound):
            posns2 = posns1 + (delay * echoCount)
            values1 = getSampleValueAt(sound, posns1) * echoAmplitude
            values2 = getSampleValueAt(newSound, posns2)
            setSampleValueAt(newSound, posns2, values1 + values2)
    return newSound
I receive this error whenever I try to load it in.
The error was:
Inappropriate argument value (of correct type).
An error occurred attempting to pass an argument to a function.
Please check line 38 of C:\Users\insanity180\Desktop\Work\Winter Sophomore\CS 140\homework3\homework_3.py
That line of code is:
setSampleValueAt (newSound, posns2, values1 + values2)
Anyone have an idea what might be happening here? Any assistance would be great, since I am hoping to give myself plenty of time to finish coding this assignment. I have gotten a similar error before and it was usually a syntax error, but I don't see any such errors here.
The sound is made before I run this program and I defined delay and number as values 1 and 3 respectively.
Check the arguments to setSampleValueAt; your sample value must be out of bounds (it should be within -32768 to 32767). You need to do some kind of output clamping in your algorithm.
Another possibility (which indeed was the error, according to further input) is that your echo runs past the end of the new sound - that is, if your sample was 5 seconds long and the echo 0.5 seconds long, posns1 + delay lands beyond the length of the sample: the length of the new sound is not calculated correctly.
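A minimal sketch of both fixes, under two assumptions worth double-checking: that delay is measured in samples (if it is in seconds, multiply by getSamplingRate(sound) first), and that makeEmptySound takes a length in samples, as whiteNoiseGenerator above already assumes. The clamp helper and the range(1, number + 1) change (so that all number echoes get generated) are suggestions, not part of the original code:

def clamp(value):
    # Keep a sample within the 16-bit range JES expects
    return max(-32768, min(32767, int(value)))

def multipleEchoesGenerator(sound, delay, number):
    endSound = getLength(sound)
    # Allocate enough samples for the original length plus every echo tail
    newSound = makeEmptySound(endSound + delay * number)
    echoAmplitude = 1.0
    for echoCount in range(1, number + 1):
        echoAmplitude = echoAmplitude * 0.60
        for posns1 in range(0, endSound):
            posns2 = posns1 + (delay * echoCount)
            values1 = getSampleValueAt(sound, posns1) * echoAmplitude
            values2 = getSampleValueAt(newSound, posns2)
            setSampleValueAt(newSound, posns2, clamp(values1 + values2))
    return newSound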