Problem with C++ compilation of Expression() - Python

I am a beginner in FEniCS and I am trying to solve the Poisson equation with a boundary condition that is Perlin noise generated by opensimplex, a Python library.
I'm trying to define f, the boundary condition, with Expression().
I tried Expression('function(x[0],x[1],x[2])'), where function(x,y,z) = opensimplex.tmp.noise3d(x,y,z). However, since this opensimplex function cannot be handled by C++, I got a compilation error: Compilation failed!
Is there any way to overcome this error?

I had a similar problem when starting to work with transient flows in FEniCS.
Defining a subclass of UserExpression before defining your variational form should enable the compilation.
from dolfin import *
from opensimplex import OpenSimplex
parameters["reorder_dofs_serial"] = True
### (Here you add your domain generation and FunctionSpace definition)
tmp = OpenSimplex()  # the noise generator from the question
class MyExpression(UserExpression):
    def eval(self, values, x):
        # Evaluated in Python, so no C++ code generation is involved
        values[0] = tmp.noise3d(x[0], x[1], x[2])
    def value_shape(self):
        return ()
f = MyExpression(degree=2)
print(assemble(f*dx(domain=UnitCubeMesh(1, 1, 1))))  # 3-D mesh, since the noise needs x[0], x[1], x[2]
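Since f is meant to act as a boundary condition, here is a minimal sketch of how it could then be applied (V is a hypothetical FunctionSpace from your own setup):
# V is your FunctionSpace; "on_boundary" selects the whole boundary
bc = DirichletBC(V, f, "on_boundary")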
If this still doesn't enable compilation, please attach the relevant portions of your code, and we can try to work through them.
If you have a fixed dimension order (e.g. 2-D), you might also have to add this after reordering dofs:
parameters["form_compiler"]["quadrature_degree"]=2
Good luck!

Related

Why doesn't Python see the members of the QuantumCircuit class (qiskit)?

I'm trying to learn programming on quantum computers.
I have installed qiskit in VS Code (all the qiskit extensions available in the VS Code marketplace), plus the Python extensions ("Python" and "Python for VSCode" from the VS Code marketplace). I have set up my qiskit API key so that it works correctly.
When I run the example I get the error: "Instance of 'QuantumCircuit' has no 'h' member"
What should I do?
The code:
from qiskit import ClassicalRegister, QuantumRegister
from qiskit import QuantumCircuit, execute
q = QuantumRegister(2)
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.cx(q[0], q[1])
qc.measure(q, c)
job_sim = execute(qc, 'local_qasm_simulator')
sim_result = job_sim.result()
print(sim_result.get_counts(qc))
========================
The same error occurs after adding the comment # pylint: disable=no-member
The errors in question are coming from pylint, a linter, not from python itself. While pylint is pretty clever, some constructs (particularly those involving dynamically-added properties) are beyond its ability to understand. When you encounter situations like this, the best course of action is twofold:
Check the docs, code, etc. to make absolutely sure the code that you've written is right (i.e. verify that the linter result is a false positive)
Tell the linter that you know what you're doing and it should ignore the false positive
user2357112 took care of the first step in the comments above, demonstrating that the property gets dynamically set by another part of the library.
The second step can be accomplished for pylint by adding a comment after each of the offending lines, telling it to turn off that particular check for that particular line:
qc.h(q[0]) # pylint: disable=no-member
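If the warning appears on many lines, an alternative (this is standard pylint configuration, not specific to qiskit) is to tell pylint once, in your .pylintrc, that these members are added dynamically; the member names listed here are just the ones from the snippet above:
[TYPECHECK]
# Members set dynamically at runtime, which pylint's inference misses
generated-members=h,cx,measure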

How to remove a constraint in OR-Tools

Is there any way to remove a defined constraint from the solver without clearing the solver and recreating the constraints from scratch?
Suppose my problem is to maximize the sum of 3 variables subject to two constraints:
constraint 1: variable 2 should be between 8 and 10
constraint 2: variable 3 should be between 5 and 10
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver('SolveIntegerProblem',
                         pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING)
objective = solver.Objective()

variable = []
variable.append(solver.IntVar(0, 5, 'variable 0'))
variable.append(solver.IntVar(0, 10, 'variable 1'))
variable.append(solver.IntVar(0, 20, 'variable 2'))

objective.SetCoefficient(variable[0], 1)
objective.SetCoefficient(variable[1], 1)
objective.SetCoefficient(variable[2], 1)
objective.SetMaximization()

constraints = []
constraints.append(solver.Constraint(8, 10))
constraints[0].SetCoefficient(variable[1], 1)
constraints.append(solver.Constraint(5, 10))
constraints[1].SetCoefficient(variable[2], 1)
Now, on the second run of my code, I want to remove constraint number 2, but I cannot find any operation to do this; the only way seems to be to clear the solver and define the constraints from scratch.
In this pseudocode the number of constraints is small, but in my real code there are many constraints and I cannot redefine them all from scratch.
I know this question is quite old, but:
As far as I know, or-tools does not provide any interface that removes constraints or variables. From an engineering perspective, messing with the internal logic to remove them 'by hand' is dangerous.
I absolutely needed that feature for my tech stack and tried multiple Python linear programming libraries out there (really wrappers around clp/cbc), and I settled on or-tools despite that flaw for 2 main reasons: 1) it was the only library with the minimal feature support I required out of the box, and 2) at the time (~4-5 years ago) it was the only library using C bindings.
All the others interfaced in one form or another with the cbc command line, which is a ... horrible way to interface with Python. It is unscalable due to the overhead of writing and reading files on disk. Nasty nasty nasty. So if I remember correctly, only pylp and or-tools had C bindings, and again if I remember correctly, pylp was NOT Python 3 compatible (and has been in limbo ever since), so I settled on or-tools.
So to answer your question: to 'remove' variables or constraints with or-tools, I had to build my own Python wrapper around or-tools. To deactivate a variable or a constraint, I would set its coefficients to zero, free its bounds (set them to +/- infinity), and set its costs to zero, which effectively deactivates the constraint. In my wrapper, I kept a list of deactivated constraints/variables and recycled them instead of creating new ones (creating new ones was proven to lead to both increased runtimes and memory leaks, because C++ + Python is a nightmare in those areas). I heavily suspect that I get floating-point noise in the recycling, but it is stable enough in practice for my needs.
So in your code example, to rerun without creating a new model from scratch, you need to do:
(...)
constr1 = solver.Constraint(8, 10)
constraints.append(constr1)
constraints[0].SetCoefficient(variable[1], 1)
constr2 = solver.Constraint(5, 10)
constraints.append(constr2)
constraints[1].SetCoefficient(variable[2], 1)

# Deactivate constr2: free its bounds and zero out its coefficient.
constr2.SetBounds(-solver.infinity(), solver.infinity())
constr2.SetCoefficient(variable[2], 0)
# constr2 is now deactivated. If you want to add a new constraint, you can
# change the bounds on constr2 to recycle it and set new variable
# coefficients.
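To illustrate the recycling idea described above, here is a minimal sketch (the helper and the pool list are hypothetical, not part of or-tools):
def get_constraint(solver, pool, lb, ub):
    # Reuse a previously deactivated constraint if one is available,
    # otherwise ask the solver for a fresh one.
    if pool:
        constr = pool.pop()
        constr.SetBounds(lb, ub)
    else:
        constr = solver.Constraint(lb, ub)
    return constr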
That being said, python-mip was released very recently; it supports removing both variables and constraints, and it has C bindings.
Did you try the MPConstraint::Clear() method?
Declaration: https://github.com/google/or-tools/blob/9487eb85f4620f93abfed64899371be88d65c6ec/ortools/linear_solver/linear_solver.h#L865
Definition: https://github.com/google/or-tools/blob/9487eb85f4620f93abfed64899371be88d65c6ec/ortools/linear_solver/linear_solver.cc#L101
Concerning the Python SWIG wrapper, MPConstraint is exported as the Constraint object.
src: https://github.com/google/or-tools/blob/9487eb85f4620f93abfed64899371be88d65c6ec/ortools/linear_solver/python/linear_solver.i#L180
But the method Constraint::Clear() does not seem to be exposed:
https://github.com/google/or-tools/blob/9487eb85f4620f93abfed64899371be88d65c6ec/ortools/linear_solver/python/linear_solver.i#L270
You can try to patch the SWIG file and recompile with make python && make install_python.

Returning results of a Pyomo optimisation from a function

I have set up a problem with Pyomo that optimises the control strategy of a CHP unit, which is laid out (very roughly) in the following way:
class Problem:
    def OptiControl(self):
        # <Pyomo ConcreteModel formulation goes here>
        return model.obj.value()
The Problem class is there because there are several control strategy methods that I'm investigating, so I call these using (for example) b = problem1.OptiControl().
The problem is that if I try to return the value of the objective, my script gets stuck, and when I Ctrl+C out of it I get (' Signal', 2, 'recieved, but no process queued'). Also, if I call model.write() the script ends normally, but nothing is displayed in IPython. Trying to print model.obj.value() also doesn't work.
I assume this has something to do with the fact that I'm calling Pyomo in a function, because the model worked successfully before, but I don't know how to get around this.
EDIT: writing the values to a file also doesn't work. If it helps, this is the excerpt of my code where I solve the model:
opt = SolverFactory("glpk")  # Choose solver
solution = opt.solve(model)  # Solve model
model.write()
with open('ffs.txt', 'w') as f:
    f.write(str(model.obj.expr()))
    for t in model.P:
        f.write(str(model.f["CHP", t].value))
I ended up working around this problem by rewriting my model as an AbstractModel, writing my data to a .dat file, and reading that (clunky, but I need results now, so it had to be done). For some reason it works now and does everything I expect it to do.
It also turned out my problem was infeasible, which might have added to my problems; but at least with this method I could use results.write(), which is how I was able to find that out, and that wasn't possible before.
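For reference, a minimal sketch (assuming a GLPK setup like the one above) of checking the termination condition explicitly, which reports infeasibility without digging through the results.write() output:
from pyomo.environ import SolverFactory
from pyomo.opt import TerminationCondition

opt = SolverFactory("glpk")
results = opt.solve(model)  # `model` is your Pyomo model instance
if results.solver.termination_condition == TerminationCondition.infeasible:
    print("The model is infeasible")
else:
    print("Objective value:", model.obj.expr())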

What is the alternative to tf.Variable.ref() in TensorFlow version 0.12?

I'm trying to run open-source code for the A3C reinforcement learning algorithm, in order to learn A3C: A3C code
However, I got several errors, and I could fix all of them except one.
In the code, ref(), which is a member function of tf.Variable, is used (1, 2), but in the recent TensorFlow version 0.12rc that function seems to be deprecated.
So I don't know what the best way to replace it is (I don't understand exactly why the author used ref()). When I just changed it to the variable itself (for example, v.ref() to v), there was no error, but the reward does not change. It seems the agent cannot learn, and I guess it is because the variables are not properly updated.
Please advise me on the proper way to modify the code so that it works.
The new method tf.Variable.read_value() is the replacement for tf.Variable.ref() in TensorFlow 0.12 and later.
The use case for this method is slightly tricky to explain, and is motivated by some caching behavior that causes multiple uses of a remote variable on a different device to use a cached value. Let's say you have the following code:
with tf.device("/cpu:0"):
    v = tf.Variable([[1.]])

with tf.device("/gpu:0"):
    # The value of `v` will be captured at this point and cached until `m2`
    # is computed.
    m1 = tf.matmul(v, ...)

with tf.control_dependencies([m1]):
    # The assign happens (on the GPU) after `m1`, but before `m2` is computed.
    assign_op = v.assign([[2.]])

with tf.control_dependencies([assign_op]):
    with tf.device("/gpu:0"):
        # The initially read value of `v` (i.e. [[1.]]) will be used here,
        # even though `m2` is computed after the assign.
        m2 = tf.matmul(v, ...)

sess.run(m2)
You can use tf.Variable.read_value() to force TensorFlow to read the variable again later, and it will be subject to whatever control dependencies are in place. So if you wanted to see the result of the assign when computing m2, you'd modify the last block of the program as follows:
with tf.control_dependencies([assign_op]):
    with tf.device("/gpu:0"):
        # The `read_value()` call will cause TensorFlow to transfer the
        # new value of `v` from the CPU to the GPU before computing `m2`.
        m2 = tf.matmul(v.read_value(), ...)
(Note that, currently, if all of the ops were on the same device, you wouldn't need to use read_value(), because TensorFlow doesn't make a copy of the variable when it is used as the input to an op on the same device. This can cause a lot of confusion—for example when you enqueue a variable to a queue!—and it's one of the reasons that we're working on enhancing the memory model for variables.)

Theano matrix multiplication

I have a piece of code that is supposed to calculate a simple matrix product, in Python (using Theano). The matrix that I intend to multiply with is a shared variable.
The example below is the smallest one that demonstrates my problem.
I have made use of two helper functions: floatX converts its input to something of type theano.config.floatX, and init_weights generates a random matrix (of type floatX) of the given dimensions.
The last line causes the code to crash. In fact, it forces so much output onto the command line that I can't even scroll to the top of it anymore.
So, can anyone tell me what I'm doing wrong?
import numpy
import theano
import theano.tensor as T

def floatX(x):
    return numpy.asarray(x, dtype=theano.config.floatX)

def init_weights(shape):
    return floatX(numpy.random.randn(*shape))

a = init_weights([3, 3])
b = theano.shared(value=a, name="b")
x = T.matrix()
y = T.dot(x, b)
f = theano.function([x], y)
This works for me, so my guess is that you have a problem with your BLAS installation. Make sure to use the Theano development version:
http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions
It has better defaults for some configurations. If that does not fix the problem, look at the error message: the main part comes after the code dump, after the stack trace, and that is normally the most useful part.
You can disable Theano's direct linking to BLAS with the Theano flag blas.ldflags= (i.e. set it to the empty string).
This can cause a slowdown, but it is a quick check to confirm that the problem is BLAS.
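For example, one quick way to try this (a sketch assuming a standard Theano setup, where THEANO_FLAGS is read when theano is imported):
import os
os.environ["THEANO_FLAGS"] = "blas.ldflags="  # disable direct linking to BLAS
import theano  # the flag must be set before this import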
If you want more help, dump the error message to a text file, put it on the web, and link to it from here.
