I'm trying to solve a simple problem that involves calculating a square root, yet for some reason z3 either fails to solve it or throws an error like z3types.Z3Exception: model is not available:
from z3 import *
x = Int('x')
y = Int('y')
solve(x > 0, y > x, y ** 0.5 == x)
I also tried the explicit Solver interface:
from z3 import *
x = Int('x')
y = Int('y')
s = Solver()
s.add(x > 0)
s.add(y > x)
s.add(y ** 0.5 == x)
print(s.check())
print(s.model())
What am I doing wrong?
There are two problems here. One you can fix; the other, unfortunately, you cannot.
The first is the use of the constant 0.5. z3 tries to be helpful and coerces constants, but in this case it doesn't do what you think it should. If you try:
from z3 import *
x = Int('x')
y = Int('y')
s = Solver()
s.add(x > 0)
s.add(y > x)
s.add(y ** 0.5 == x)
print(s.sexpr())
It'll print:
(declare-fun x () Int)
(declare-fun y () Int)
(assert (> x 0))
(assert (> y x))
(assert (= (^ y 0) (to_real x)))
Note what happened to your constant 0.5. It became 0. z3 does this because it sees y is an integer, and thus coerces the constant 0.5 to an integer, which truncates the value to 0. Clearly, this isn't what you wanted. The proper way to code this is:
s.add(y ** Q(1,2) == x)
where Q(1,2) represents the real number 1/2, i.e., 0.5. In this case, z3 will coerce y to a real value.
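You can verify the coercion by dumping the new encoding (a quick check; the exact formatting of the s-expression may vary between z3 versions):
from z3 import *
x = Int('x')
y = Int('y')
s = Solver()
s.add(x > 0, y > x, y ** Q(1, 2) == x)
print(s.sexpr())  # the exponent is now 1/2, and y is coerced via to_real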
I should add that there's no deep or good reason why z3 behaves this way. It could have been more careful and coerced what you wrote correctly as well; alas, that's not what it does. So this isn't a bug; it's just how z3 works, something you have to keep in mind.
Having said that, if you try the above encoding, you'll notice that z3 still can't solve this problem: it'll print unknown. The reason is more technical, but suffice it to say that the exponentiation function is hard to reason about. It brings non-linear arithmetic into play, and mixed with Integers/Reals this leads to a theory that does not have a decision procedure.
What can you do? Well, it really depends on what your ultimate goal is. But a good starting point is to avoid exponentiation and cast everything in terms of multiplication instead:
s.add(y == x * x)
If you do this, you'll get:
sat
[x = 2, y = 4]
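For reference, here's the full program with the multiplication encoding, which reproduces the output above:
from z3 import *
x = Int('x')
y = Int('y')
s = Solver()
s.add(x > 0)
s.add(y > x)
s.add(y == x * x)  # y is a perfect square whose root is x
print(s.check())   # sat
print(s.model())   # e.g. [x = 2, y = 4]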
Whether this will work for all problems is hard to guess; different problems might require different encodings. But the bottom line is to avoid the exponentiation operator if you can, especially when mixing reals and integers: it creates really difficult problems for SMT solvers, and you're unlikely to get successful results when exponentiation is heavily used.
I am trying to find a solution to the following system where f and g are R^2 -> R^2 functions:
f(x1,x2) = (y1,y2)
g(y1,y2) = (x1,x2)
I tried solving it using scipy.optimize.fsolve as follows:
from scipy.optimize import fsolve

def eqm(vars):
    x1, x2, y1, y2 = vars
    eq1 = f([x1, x2])[0] - y1
    eq2 = f([x1, x2])[1] - y2
    eq3 = g([y1, y2])[0] - x1
    eq4 = g([y1, y2])[1] - x2
    return [eq1, eq2, eq3, eq4]

fsolve(eqm, x0=[1, 0.5, 1, 0.5])
Although it returns an output, the output does not seem to be correct: it does not satisfy the two conditions, and it varies a lot with the x0 specified. I am also getting a warning:
'The iteration is not making good progress, as measured by the improvement from the last ten iterations.' I do know for a fact that a unique solution exists, which I have obtained algebraically.
Not sure what is going on, and whether there is a simpler way of solving it, especially using just two equations instead of splitting them up into four. Something like:
def equations(vars):
    X, Y = vars
    eq1 = f(X) - Y
    eq2 = g(Y) - X
    return [eq1, eq2]

fsolve(equations, x0=[[1, 0.5], [1, 0.5]])
Suggestions for other modules, e.g. sympy, are also welcome!
First, I recommend working with numpy arrays, since manipulating these is simpler than manipulating lists.
I've slightly rewritten your code:
import numpy as np
import scipy.optimize as opt

def f(x):
    return x

def g(x):
    return x

def func(vars):
    input = np.array(vars)
    eq1 = f(input[:2]) - input[2:]
    eq2 = g(input[2:]) - input[:2]
    return np.concatenate([eq1, eq2])

root = opt.fsolve(func, [1, 1, 0., 1.2])
print(root)
print(func(root))  # should be close to zeros
What you have should work correctly, so I believe there is something wrong with the equations you're using. If you provide those, I can try to see what may be wrong.
This seems to be more of a problem of numerical mathematics than Python coding. Your functions may have "ugly" behavior around the solution, may be strongly non-linear or contain singularities. We cannot help further without seeing the functions. One thing you might try is to instead solve a system
g(f(x)) - x = 0
and simplify g(f(x)) as much as possible analytically. Then calculate y = f(x) after solving the equation.
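For illustration, a minimal sketch of that composed approach (f and g here are identity placeholders; substitute your real functions):
import numpy as np
from scipy.optimize import fsolve

def f(x):
    return x  # placeholder

def g(y):
    return y  # placeholder

def composed(x):
    # solve g(f(x)) - x = 0 for x only
    x = np.asarray(x)
    return g(f(x)) - x

x_root = fsolve(composed, x0=[1, 0.5])
y_root = f(x_root)  # recover y after solving for x
print(x_root, y_root)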
A function determines an integer y from a given integer x and float s as follows:
floor(x * s)
If x and y are known, how do I calculate s so that floor(x * s) is guaranteed to be exactly equal to y?
If I simply perform s = y / x, is there any chance that floor(x * s) won't be equal to y due to floating-point operations?
Yes, there is a chance it won't be equal. @Eric Postpischil offered a simple counterexample: y = 1 and x = 49.
(For discussion, let us limit x,y > 0.)
To find a scale factor s for a given x, y that often works, we need to invert y = floor(x * s) mathematically, accounting for the multiplication error (see ULP) and the floor truncation.
# Pseudo code
# floor(x*s) == y requires the true product x*s to sit inside [y, y+1),
# with a margin of half an ULP for the multiplication error:
e = ULP(x*s)
y + 0.5*e <= x*s
x*s < y + 1 - 0.5*e
# Estimate e
est = ULP((float)y)
s_lower = ((float)y + 0.5*est)/(float)x
s_upper = ((float)y + 1 - 0.5*est)/(float)x
A candidate s will lie in s_lower <= s < s_upper.
Perform the above with higher-precision routines. Then I recommend using the float closest to the midpoint of s_lower and s_upper.
Alternatively, an initial stab at s could use:
s_first_attempt = ((float)y + 0.5)/(float)x
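As a sanity check in Python, a minimal sketch of the midpoint idea (it targets the exact interval [y/x, (y+1)/x) and verifies the result before trusting it):
import math

def find_s(x, y):
    s = (y + 0.5) / x            # float nearest the midpoint of [y/x, (y+1)/x)
    if math.floor(x * s) == y:   # always verify; rounding can still interfere
        return s
    return None

print(find_s(49, 1))  # succeeds even for the counterexample pair x = 49, y = 1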
If we rephrase your question, you are wondering whether the equation y = floor( x * y/x ) holds for integers x and y, where y/x translates in Python into a 64-bit floating-point value, and the subsequent multiplication also produces a 64-bit floating-point value.
Python's 64-bit floats follow the IEEE-754 standard, which gives them 15-17 significant decimal digits of precision. To perform the division and multiplication, both x and y are converted into floats, and these operations might reduce the precision by up to 1 bit each (really a worst case), but they will certainly not increase it. As such, you can only expect 15-17 significant digits from this operation. This means that y values above 10^15 may exhibit rounding errors.
More practically, one example of this can be (and you can reuse this code for other examples):
import numpy as np
print("{:f}".format(np.floor(1.3 * (1.1e24 / 1.3))))
#> 1100000000000000008388608.000000
How can I make sympy.solve not return negative solutions?
This seems to be a different task than adding a constraint like positive=True to the symbol I'm solving for. While
import sympy
x = sympy.symbols("x")
print(sympy.solve(x**2-4, x))
x = sympy.symbols("x", positive=True)
print(sympy.solve(x**2-4, x))
prints
[-2, 2]
[2]
as expected - I still get a negative solve result for omega with
import sympy
omega, omega_0, gamma = sympy.symbols("omega, omega_0, gamma", real=True, positive=True)
zeta = 1/((omega_0**2 - omega**2)**2 + gamma**2*omega**2)
omega_R = sympy.solve(sympy.diff(zeta, omega), omega)
print(omega_R)
which returns
[-sqrt(2)*sqrt(-gamma**2 + 2*omega_0**2)/2, sqrt(2)*sqrt(-gamma**2 + 2*omega_0**2)/2]
even though -sqrt(2)*sqrt(-gamma**2 + 2*omega_0**2)/2 will never be positive for real and positive symbols omega_0 and gamma.
Alternatively, what's the best way to eliminate the negative solutions afterwards?
SymPy's assumptions system isn't smart enough to know that -sqrt(2)*sqrt(-gamma**2 + 2*omega_0**2)/2 cannot be positive given the real and positive assumptions on omega_0 and gamma (I opened an issue for it). To be on the safe side, SymPy only filters solutions if it knows they cannot satisfy the given assumptions. If the assumptions system gives None, meaning it doesn't know, it includes the solution anyway. For now your best bet is to just filter this solution manually.
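For example, one way to filter manually (a sketch; the sample values gamma=1, omega_0=1 are arbitrary choices satisfying the assumptions, and here the sign of each solution does not depend on the parameter values, so one sample point suffices):
import sympy

omega, omega_0, gamma = sympy.symbols("omega, omega_0, gamma", real=True, positive=True)
zeta = 1/((omega_0**2 - omega**2)**2 + gamma**2*omega**2)
omega_R = sympy.solve(sympy.diff(zeta, omega), omega)

# Substitute sample values consistent with the assumptions and keep the
# solutions that evaluate to a positive number.
positive = [s for s in omega_R if s.subs({gamma: 1, omega_0: 1}) > 0]
print(positive)  # [sqrt(2)*sqrt(-gamma**2 + 2*omega_0**2)/2]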
Background:
I am trying to implement a function that does inverse transform sampling. I use sympy to calculate the CDF and get its inverse function. While for some simple PDFs I get correct results, for a PDF whose CDF's inverse involves the Lambert W function, the results are wrong.
Example:
Consider following example CDF:
import sympy as sym
y = sym.Symbol('y')
cdf = (-y - 1) * sym.exp(-y) + 1 # derived from `pdf = x * sym.exp(-x)`
sym.plot(cdf, (y, -1, 5))
Now calculating inverse of this function:
x = sym.Symbol('x')
inverse = sym.solve(sym.Eq(x, cdf), y)
print(inverse)
Output:
[-LambertW((x - 1)*exp(-1)) - 1]
This, in fact, is only the left branch, for negative y, of the given CDF:
sym.plot(inverse[0], (x, -0.5, 1))
Question:
How can I get the right branch, for positive y, of the given CDF?
What I tried:
Specifying x and y to be only positive:
x = sym.Symbol('x', positive=True)
y = sym.Symbol('y', positive=True)
This doesn't have any effect, even for the first CDF plot.
Making CDF a Piecewise function:
cdf = sym.Piecewise((0, y < 0),
                    ((-y - 1) * sym.exp(-y) + 1, True))
Again, no effect. The strange thing here is that on another computer plotting this function gave a proper graph with zero for negative y's, but solving for the positive-y branch doesn't work anywhere. (Different versions? I also had to specify adaptive=False to sympy.plot to make it work there.)
Using sympy.solveset instead of sympy.solve:
This just gives a useless ConditionSet(y, Eq(x*exp(y) + y - exp(y) + 1, 0), Complexes(S.Reals x S.Reals, False)) as a result. Apparently, solveset still doesn't know how to deal with LambertW functions. From the docs:
When cases which are not solved or can only be solved incompletely, a ConditionSet is used and acts as an unevaluated solveset object. <...> There are still a few things solveset can't do, which the old solve can, such as solving non linear multivariate & LambertW type equations.
Is it a bug or am I missing something? Is there any workaround to get the desired result?
The inverse produced by sympy is almost correct. The problem lies in the fact that the LambertW function has multiple branches over the domain (-1/e, 0). By default, sympy uses the upper branch; however, for your problem you require the lower branch. The lower branch can be accessed by passing a second argument to LambertW with a value of -1.
inverse = -sym.LambertW((x - 1)*sym.exp(-1), -1) - 1
sym.plot(inverse, (x, 0, 0.999))
This gives the right branch of the inverse for positive y, matching the original CDF (plot omitted).
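As a quick numeric sanity check of the branch choice (a sketch; 0.5 is an arbitrary test point in (0, 1)):
import sympy as sym

x, y = sym.symbols('x y')
cdf = (-y - 1) * sym.exp(-y) + 1
inverse = -sym.LambertW((x - 1) * sym.exp(-1), -1) - 1

# cdf(inverse(x)) should reproduce x for x in (0, 1)
print(sym.N(cdf.subs(y, inverse.subs(x, 0.5))))  # approximately 0.5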
I want to solve a large number of bilinear ODE systems in Python. The derivative is this:
import numpy as np

def x_(x, t, growth, connections):
    return x * growth + np.dot(connections, x) * x
I am not interested in very accurate results but in the qualitative behavior, i.e. whether a component goes to zero or not.
Because I have to solve such a large quantity of high-dimensional systems, I want to use a step size as big as possible.
Due to the big step sizes it can happen that one component of the solution goes below zero. This should not be possible, since (because of the structure of the particular ODE) each component is bounded below by zero. Hence, to prevent wrong results, I would like to set each component to zero manually once it drops below.
Furthermore, in the systems that I want to solve it can happen that solutions blow up. I want to prevent this by setting an upper bound as well, i.e. if a value exceeds the bound it is set back to the value of the bound.
I hope the following pseudo-code makes my goal clear:
for t in range(0, tEnd, dt):
    $ compute x(t) using x(t-dt) $
    x(t) = np.minimum(np.maximum(x(t), 0), upperBound)
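For concreteness, here is a minimal runnable version of that loop (a sketch using a plain explicit Euler step instead of my RK4, just to show where the clamping goes):
import numpy as np

def integrate_clamped(x0, t0, t_end, dt, growth, connections, upper_bound):
    x = np.asarray(x0, dtype=float)
    for _ in np.arange(t0, t_end, dt):
        # one explicit Euler step of dx/dt = x*growth + (connections @ x)*x
        x = x + dt * (x * growth + np.dot(connections, x) * x)
        # clamp every component into [0, upper_bound]
        x = np.minimum(np.maximum(x, 0.0), upper_bound)
    return x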
I implemented this using a Runge-Kutta algorithm. Everything works fine; just the performance is bad. Therefore, I would prefer using a pre-implemented method like scipy.integrate.odeint.
However, with odeint I have no idea how to set such bounds. An option that I tried was to manipulate the ODE so that the derivative becomes 0 once x is above the bound, and (positive) one once x is below 0. In addition, to prevent too-high jumps within one time step, I also bounded the derivative:
def x_(x, t, growth, connections, bound):
    return (x > 0) * np.minimum((x < bound) *
                                (x * growth + np.dot(connections, x) * x),
                                bound) + (x < 0)
Though this solution (especially for the zero bound) is very ugly, it would be sufficient if it worked. Unfortunately, it does not. Using odeint
x = scipy.integrate.odeint(x_, x0, timesteps, param)
I very often get one of these two errors:
Repeated convergence failures (perhaps bad Jacobian or tolerances).
Excess work done on this call (perhaps wrong Dfun type).
They may be due to the discontinuities of my manipulated ODE. There are plenty of threads about these error messages on the internet, but they did not help me. E.g., increasing the number of allowed steps neither prevented this issue, nor is it a good solution for me, since I need to use big step sizes. Furthermore, passing the Jacobian did not help either.
Looking at the solutions, one can see that two types of strange behavior happen when the errors occur:
The solution blows up in one single time step to +-1e250 (that should be impossible, since dx/dt is bounded).
It first reaches the bound but goes down again (that should be impossible, because x is at the bound and therefore x_ is 0).
I would appreciate any hints on how to solve the issue - no matter whether it is help on
how to prevent the errors in odeint
how to manipulate the ODE properly or on
how to write a very fast ODE solver where I can directly implement my needs.
I thank you in advance!
Edit
I was asked for a minimal example:
import numpy as np
import random as rd
rd.seed()
import scipy.integrate

def simulate(simParam, dim=20, connectivity=.8, conRange=1, threshold=1E-3,
             constGrowing=None):
    """
    Creates the random system matrix and starts a simulation
    """
    x0 = np.zeros(dim, dtype='float') + 1
    connections = np.zeros(shape=(dim, dim), dtype='float')
    growth = np.zeros(dim, dtype='float') + \
        (constGrowing if constGrowing is not None else 0)
    for i in range(dim):
        for j in range(dim):
            if i != j:
                connections[i][j] = rd.uniform(-conRange, conRange)
    tend, step = simParam
    return RK4NumPy(x_, (growth, connections), x0, 0, tend, step)

def x_(x, t, growth, connections, bound):
    """
    Derivative of the ODE
    """
    return (x > 0) * np.minimum((x < bound) *
                                (x * growth + np.dot(connections, x) * x),
                                bound) + (x < 0)

def RK4NumPy(x_, param, x0, t0, tend, step, maxV=1.0E2, silent=True):
    """
    solving method
    """
    param = param + (maxV,)
    timesteps = np.arange(t0 + step, tend, step)
    return scipy.integrate.odeint(x_, x0, timesteps, param)

simulate((300, 0.5))
To see the solution, one would have to plot x. With the given parameters I very often get the above-mentioned error:
Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.