Making a Qobj out of a picos variable - python

I need to write a semidefinite program that minimizes the trace of an operator, say R, subject to the constraint that tr_A(R)^{T_B} >> 0. That is, R represents a 3-qubit quantum system, and tracing out the first qubit gives an operator on the remaining 2-qubit system. Taking the partial transpose of that reduced operator with respect to one of its qubits gives the partially transposed state of the 2-qubit subsystem, and it is this state that I want to constrain to be positive semidefinite.
I am using PICOS (to write the SDP) and qutip (to do the operations).
import picos as pic
from qutip import Qobj, ptrace, partial_transpose

P = pic.Problem()
Rho = P.add_variable('Rho', (n, n), 'hermitian')   # n = 8 for a 3-qubit system
P.add_constraint(pic.trace(Rho) == 1)
P.add_constraint(Rho >> 0)
RhoQOBJ = Qobj(Rho)                                 # this is where it fails
RhoABtr = ptrace(RhoQOBJ, [0, 1])                   # partial trace keeping the selected subsystems
RhoABqbj = partial_transpose(RhoABtr, [0], method='dense')
RhoAB = RhoABqbj.full()
Problem: I need to make Rho a Qobj so that qutip can understand it, but Rho above is only an instance of the PICOS Variable class. Does anyone have an idea how to do this?
I also looked at http://picos.zib.de/tuto.html#variables, but it only made things more confusing, since that function puts the instance into a dictionary and only gives you back a key.

You need to be able to output a numpy array or sparse matrix to convert to a Qobj. I could not find anything in the picos docs that discusses this option.
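If what you ultimately need is the numerical state rather than a symbolic constraint, a workaround is to extract the variable's value after solving and wrap that in a Qobj. This is only a minimal sketch, assuming the SDP has already been set up as above and that the PICOS variable exposes its numerical value via its value attribute:
import numpy as np

P.solve()                                              # solve the SDP first
rho_np = np.array(Rho.value)                           # numerical value of the PICOS variable
RhoQOBJ = Qobj(rho_np, dims=[[2, 2, 2], [2, 2, 2]])    # wrap as a 3-qubit density matrix
Note that this does not let you impose the partial-transpose constraint inside the SDP itself, since the qutip operations would then have to act on the symbolic variable rather than on numbers.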

I am seeing this post very late, but maybe I can help... I am not sure what the function Qobj() is doing; could you tell me more about it?
Otherwise, there is now a new partial_transpose() function in PICOS (version released today), which hopefully does what you need.
Best,
Guillaume.

Related

How do I perform a discrete time dependent update to my variables while using odeint in python?

I'm trying to simulate a system of ODEs. While doing so, I need to increase the current value of certain variables by some factor at specific time points while odeint runs.
I tried doing the following, but what I noticed is that the time values are floating point. This makes it difficult to specify an if-condition for adding a certain value to the inputs that are integrated further in the process.
Below is the problem case. Please help me out with this.
def myfunc(s, t):
    # whenever the time is an even day, increase the variable by 2
    if t % 2 == 0:
        addition = 2
    else:
        addition = 0
    dsdt = (2 * s + 8) + addition
    return dsdt
Problem: The incoming time (t) in the function is a floating point number. This prevents me from applying an if condition for specific discrete even values of 't'.
Detailed description:
(a) I define a timespan vector, Tspan = np.linspace(1,100,100), and an initial condition s0 = [3].
(b) When I run odeint(myfunc, s0, Tspan), I need to update the incoming 's' variable by some factor, but only at certain time points (say, t = 25, 50, 75).
(c) But when I place print(t) inside myfunc(s,t), I can see that the incoming 't' is a float.
(d) One important note is that myfunc is called more times than there are timesteps in the Tspan vector; this is why the runtime 't' takes floating point values.
(e) With that said, if I try something like "if ceil(t) % 25 == 0" (or round), the same integer is returned for the next 4 to 5 function calls (because several function calls happen between two subsequent time points). As a result, an if condition on the ceiled 't' performs the update on 's' for 4 to 5 subsequent function calls instead of once at the specific time point, and this should be avoided.
I hope my problem is clear. Please help me out if you can, in some way. Thanks folks!
All "professional" solvers use an internal adaptive step size. The step size for the output has no or minimal influence on this. Each method step uses multiple evaluations of the ODE function. Depending on the output sampling frequency, you can have multiple internal steps per output step, or multiple output steps get interpolated from the same internal step.
What you describe as desired mechanism is different from your example code. Your example code changes a parameter of the ODE. The description amounts to a state change. While the first can be done with deleterious but recoverable effects on the step size controller, the second requires an event-action mechanism with a state-changing action. Such is not implemented in any of the scipy solvers.
The best way is to solve for the segments between the changes, perform the state change at the end of each segment and restart the integration with the new state. Use array concatenation on the time and value segments to get the large solution function table.
t1 = np.linspace(0, 25, 25+1)
u10 = u0
u1 = odeint(fun, u10, t1)

t2 = t1 + 25   # or a more specific construction for non-equal interval lengths
u20 = 3*u1[-1] # or some other change of the last state u1[-1]
u2 = odeint(fun, u20, t2)

t3 = t2 + 25
u30 = u2[-1].copy()
u30[0] -= 5    # or what the actual state change was supposed to be
u3 = odeint(fun, u30, t3)

# combine the segments for plotting; gives vertical lines at the event locations
t = np.concatenate([t1, t2, t3])
u = np.concatenate([u1, u2, u3])
For more segments it is better to organize this in a loop, especially if the state change at the event locations follows a uniform procedure depending on a few parameters, as in the sketch below.
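A minimal sketch of such a loop, assuming hypothetical event times and a placeholder change_state function (fun and u0 stand for the ODE function and initial state from above):
import numpy as np
from scipy.integrate import odeint

def change_state(u):
    # placeholder event action; replace with the actual state change
    u = u.copy()
    u[0] -= 5
    return u

event_times = [0, 25, 50, 75, 100]   # hypothetical segment boundaries
ts, us = [], []
u_start = u0
for t_lo, t_hi in zip(event_times[:-1], event_times[1:]):
    t_seg = np.linspace(t_lo, t_hi, (t_hi - t_lo) + 1)
    u_seg = odeint(fun, u_start, t_seg)          # integrate one segment
    ts.append(t_seg)
    us.append(u_seg)
    u_start = change_state(u_seg[-1])            # apply the event action at the segment end
t = np.concatenate(ts)
u = np.concatenate(us)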

solve an equation (with max-statement) in python

I am seeking a solution for the following equation in Python.
345 - 0.25*t = 37.5 * x_a
with
t = max(0, 10-x_a)*(20-10) + max(0, 25-5*x_a)*(3-4) + max(0, 4-0.25*x_a)*(30-12.5)
x_a = ?
If there is more than one solution to the problem (I am not even sure whether this can happen from a mathematical point of view?), I want my code to return a positive(!) value for x_a that minimizes t.
With my previous knowledge of the basics of Python, Pandas and NumPy I actually have no clue how to tackle this problem. Can someone give me a hint?
For clarification: I inserted some exemplary numbers into the equation to make the problem easier to grasp. In my final code there might of course be different numbers for different scenarios; however, in every scenario x_a is the only unknown variable.
Update
I thought about the problem again and came up with the following solution, which yields the same result as the calculations done by Michał Mazur:
import itertools
from sympy import Eq, Symbol, solve
import numpy as np

x_a = Symbol('x_a')
possible_elements = np.array([10 - x_a, 25 - 5*x_a, 4 - 0.25*x_a])
assumptions = np.array(list(itertools.product([True, False], repeat=3)))

for assumption in assumptions:
    x_a = Symbol('x_a')
    elements = assumption.astype(int) * possible_elements
    t = elements[0]*(20 - 10) + elements[1]*(3 - 4) + elements[2]*(30 - 12.5)
    eqn = Eq(300 - 0.25*t, 40*x_a)
    solution = solve(eqn)
    if len(solution) > 2:
        print('Warning! the code may suppress possible solutions')
    if len(solution) == 1:
        solution = solution[0]
        if (((float(possible_elements[0].subs(x_a, solution))) > 0) == assumption[0]) &\
           (((float(possible_elements[1].subs(x_a, solution))) > 0) == assumption[1]) &\
           (((float(possible_elements[2].subs(x_a, solution))) > 0) == assumption[2]):
            print('solution:', solution)
Compared to the already suggested approach this may have a little advantage, as it does not rely on testing all possible values and can therefore handle very small as well as very big solutions without taking a lot of time (?). However, it is probably only useful as long as you don't have more complex functions for t (even having, for example, 5 max(...) statements and therefore (2^5 =) 32 scenarios to test seems quite cumbersome).
As far as I'm concerned, I just realized that my problem is even more complex than I thought. For my project the calculations needed to derive the value of t are pretty entangled and cannot be written as just one equation. However, it is still a function that only depends on x_a, so I am still hoping for a Python solution similar to the proposed Solver in Excel... or I will stick to the approach of simply testing all possible numbers.
If you are interested in a solution a bit different from the Python one, then I will give you a hand. Open up Excel with the Solver extension and plug in
the data you are interested in checking, as follows:
Into E2 you plug the formula I have just written; into E4 you plug:
=300-0,25*E2
Into F4 you plug:
=40*F2
Then you open up the Solver menu.
Into Set Objective you put the variable t, which you want to minimize.
Into Changing Variables you put a.
Into the Constraint menu you put the equality of the E4 and F4 cells.
You check "Make Unconstrained Variables Non-Negative", which will prevent your a variable from going below 0. The method of computing is strictly non-linear, so you leave that option as it is.
You click Solve. The computed value is presented on the screen.
The Python approach I can think of:
minimumval = 10100
minxa = 10000
eps = 0.01
for i in range(100000):
    k = i / 10000
    x_a = k
    t = max(0, 10 - x_a)*(20 - 10) + max(0, 25 - 5*x_a)*(3 - 4) + max(0, 4 - 0.25*x_a)*(30 - 12.5)
    val = abs(300 - 0.25*t - 40*x_a)
    if val < eps:
        if t < minimumval:
            minimumval = t
            minxa = x_a
It isn't a direct solution, in that it only controls the error you make in the equality to within eps. However, it gives a solution.
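If you prefer to stay in Python without brute force, one option (a sketch, not from the original answers) is to treat the equality as a root-finding problem in the single unknown x_a. The residual below is continuous and piecewise linear, so a bracketing solver such as scipy's brentq works once a sign change is bracketed; the bracket [0, 100] is an assumption about where the solution lies.
from scipy.optimize import brentq

def residual(x_a):
    # left-hand side minus right-hand side of the equality
    t = (max(0, 10 - x_a) * (20 - 10)
         + max(0, 25 - 5 * x_a) * (3 - 4)
         + max(0, 4 - 0.25 * x_a) * (30 - 12.5))
    return 300 - 0.25 * t - 40 * x_a

x_a = brentq(residual, 0, 100)   # root of the residual, i.e. the x_a solving the equality
print(x_a)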

Python Z3 solver not correctly reporting satisfiability for exponent constraints

I noticed that the Z3 Solver library for Python wasn't correctly reporting satisfiability for a problem involving exponents that I was working on. Specifically, it reported finding no solution in cases where I knew a valid one existed -- unless I added constraints that effectively "told it the answer".
I simplified the problem to isolate it. In the code below, I'm asking it to find q and m such that q^m == 100. With the constraint 0 <= q < 100 you have, of course, q=10, m=2. But with the code below, it reports finding no solution (raise Z3Exception("model is not available")):
import z3.z3 as z
slv = z.Solver()
m = z.Int('m')
q = z.Int('q')
slv.add(100 == (q ** m))
slv.add(q >= 0)
slv.add(q < 100)
slv.add(m >= 0)
slv.add(m <= 100)
slv.check()
However, if you replace slv.add(m <= 100) with slv.add(m <= 2) (or slv.add(m == 2)!), it has no problem finding the solution (q=10, m=2).
Am I using Z3 wrong somehow?
I thought it would only report unsatisfiability ("model is not available") if it had proved there was no solution, and would otherwise hang while searching for one. Is that wrong? I didn't expect to be in a position where it only finds the solution if you shrink the search space enough.
I haven't had this problem with any other operation besides exponentiation (e.g. addition, modulo, etc.).
You're misinterpreting what z3 is telling you. Change your line:
slv.check()
to:
print(slv.check())
print(slv.reason_unknown())
And you'll see it prints:
unknown
smt tactic failed to show goal to be sat/unsat (incomplete (theory arithmetic))
So z3 doesn't know whether your problem is sat or unsat, and therefore you cannot ask for a model. The reason for this is the power operator: it introduces non-linearity, and the theory of non-linear integer equations is undecidable in general. That is, z3's solver is incomplete for this problem. In practice, this means z3 will apply a bunch of heuristics and will hopefully solve the problem for you, but you can also get unknown, as you observed.
It's not surprising that adding extra constraints helps the solver find an answer: you're narrowing the search and those heuristics have an easier time. With different versions of z3 you can observe different behavior (i.e., in the future they might be able to solve this problem out of the box, or maybe the heuristics will get worse and helping it this way won't resolve the issue either). Such is the nature of automated theorem proving with undecidable theories.
Bottom line: Any call to check can return sat, unsat, or unknown. Your program should check for all three possibilities and interpret the output accordingly.
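A minimal sketch of that pattern, restating the constraints from the question and branching on all three outcomes before asking for a model:
import z3

slv = z3.Solver()
q, m = z3.Ints('q m')
slv.add(q ** m == 100, q >= 0, q < 100, m >= 0, m <= 100)

result = slv.check()
if result == z3.sat:
    print(slv.model())                       # a model exists, safe to ask for it
elif result == z3.unsat:
    print("no solution exists")
else:                                        # z3.unknown
    print("solver gave up:", slv.reason_unknown())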

Scipy minimize iterating past bounds

I am trying to minimize a function of 3 input variables using scipy. The function reads like so-
from scipy.optimize import minimize

def myfunc(x):
    a = x[0]
    b = x[1]
    c = x[2]
    n = f(a, b, c)
    return n

bound1 = (80, 100)
bound2 = (10, 20)
bound3 = (312, 740)
guess = [a0, b0, c0]
bds = (bound1, bound2, bound3)
result = minimize(myfunc, guess, method='L-BFGS-B', bounds=bds)
The function I am currently trying to run reaches a minimum at a=100, b=10, c=740, which is at the edge of the bounds.
The minimize function keeps trying to iterate past the end of bound 3 (it gets to a c value of 740.0000000149012 on its last iteration).
Is there any way to stop this from happening, i.e. to stop the iteration at the actual end of my bound?
This happens due to numerical differentiation, which is needed not only to infer the step direction and size, but also to reason about termination.
In general you can't do much without being very careful with whatever solver (and there are many backend solvers) is being used. The basic idea is to replace the automatic numerical differentiation with one provided by you: this one then respects those bounds, and it must be careful about the solver's internals, e.g. how it reasons about termination at this boundary.
Fix A:
Your problem should vanish automatically when using: Pull-request #10673, which touches your configuration: L-BFGS-B.
It seems this PR is not part of the current release, SciPy 1.4.1 (which came out about 2 months before the PR).
See also #6026, where a milestone of 1.5.0 is mentioned in regards to some changes including respecting bounds in num-diff.
For the above PR, you will need to install scipy from source, which is:
quite doable on Linux (and maybe OS X)
not something you should try on Windows!
trust me...
See the documentation if needed.
Fix B:
Apart from that, as you are doing unconstrained optimization with variable bounds (for which more solver backends are available, compared to constrained optimization), you might try another solver, trust-constr, which has explicit support for this; see #9098.
Be careful to note that you need to signal this explicitly when setting up the bounds, as in the sketch below!
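A minimal sketch of Fix B, assuming myfunc and guess from the question; the keep_feasible flag on scipy's Bounds object is the explicit signal that iterates must stay inside the bounds:
from scipy.optimize import minimize, Bounds

# lower/upper bounds taken from the question; keep_feasible asks the solver
# to keep every iterate within the box
bds = Bounds(lb=[80, 10, 312], ub=[100, 20, 740], keep_feasible=True)
result = minimize(myfunc, guess, method='trust-constr', bounds=bds)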

Theano matrix multiplication

I have a piece of code that is supposed to calculate a simple matrix product in Python (using Theano). The matrix that I intend to multiply with is a shared variable.
The example is the smallest example that demonstrates my problem.
I have made use of two helper functions: floatX converts its input to type theano.config.floatX, and init_weights generates a random matrix (of type floatX) of the given dimensions.
The last line causes the code to crash. In fact, it produces so much output on the command line that I can't even scroll to the top of it anymore.
So, can anyone tell me what I'm doing wrong?
import numpy
import theano
import theano.tensor as T

def floatX(x):
    return numpy.asarray(x, dtype=theano.config.floatX)

def init_weights(shape):
    return floatX(numpy.random.randn(*shape))

a = init_weights([3, 3])
b = theano.shared(value=a, name="b")
x = T.matrix()
y = T.dot(x, b)
f = theano.function([x], y)
This works for me, so my guess is that you have a problem with your BLAS installation. Make sure to use the Theano development version:
http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions
It has better defaults for some configurations. If that does not fix the problem, look at the error message. The main part comes after the code dump and the stack trace; that is normally the most useful part.
You can disable Theano's direct linking to BLAS with this Theano flag: blas.ldflags=
This can cause a slowdown, but it is a quick check to confirm that the problem is BLAS.
If you want more help, dump the error message to a text file, put it on the web, and link to it from here.
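One way to try the blas.ldflags= flag from Python, as a quick check (a sketch; the THEANO_FLAGS environment variable must be set before theano is imported):
import os
os.environ["THEANO_FLAGS"] = "blas.ldflags="   # disable direct BLAS linking for this run
import theano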
