I am looking for the way to create entanglement, superposition and interference. Can someone show me how to code these in Qiskit, and point me to the docs related to them?
Thanks
Here you go
from qiskit import * # importing qiskit
# instead of importing everything you could import things separately

def circ_superposition(number):
    # raising errors
    try:
        number = int(number)
    except (TypeError, ValueError):
        return "Number must be an integer"
    circ = QuantumCircuit(number, number) # creating the circuit
    for i in range(number):
        circ.h(i) # a Hadamard gate on each qubit puts it into superposition
    return circ
You might want to use the line below to see the circuit:
print(f'{circ_superposition(6)}') # add any integer in place of 6
and to create entanglement do the following:
n = 2 # replace n by any integer
entanglement = QuantumCircuit(n, n) # creating the circuit
# apply h gates to the qubits
for i in range(n):
    entanglement.h(i)
# apply a cx gate
entanglement.cx(control, target) # replace control and target by the control qubit and target qubit respectively
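For a concrete two-qubit example, here is a minimal sketch of the standard Bell-pair construction with measurement (the simulator backend is an assumption; it requires Qiskit Aer to be installed):
from qiskit import QuantumCircuit, execute, Aer

bell = QuantumCircuit(2, 2)
bell.h(0)      # put qubit 0 into superposition
bell.cx(0, 1)  # entangle qubit 0 (control) with qubit 1 (target)
bell.measure([0, 1], [0, 1])

# counts should be split roughly 50/50 between '00' and '11'
backend = Aer.get_backend('qasm_simulator')
print(execute(bell, backend, shots=1024).result().get_counts())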
For further assistance refer to:
Circuit Basics
and Getting Started with Qiskit.
Also consider visiting:
Superdense Coding and Quantum Teleportation for applications of quantum interference.
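As a minimal illustration of interference itself (same simulator assumption as above): two Hadamard gates in a row make the amplitudes recombine so that the paths to |1> cancel and the paths to |0> add, and the qubit returns to |0> every time.
from qiskit import QuantumCircuit, execute, Aer

interference = QuantumCircuit(1, 1)
interference.h(0)  # split into (|0> + |1>)/sqrt(2)
interference.h(0)  # amplitudes recombine: |1> paths cancel, |0> paths add
interference.measure(0, 0)

backend = Aer.get_backend('qasm_simulator')
print(execute(interference, backend, shots=1024).result().get_counts())  # {'0': 1024}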
I have been trying to get into python optimization, and I have found that pyomo is probably the way to go; I had some experience with GUROBI as a student, but of course that is no longer possible, so I have to look into the open source options.
I basically want to solve a non-linear mixed integer problem in which I will minimize a certain ratio. The problem itself is setting up a power purchase agreement (PPA) in a renewable energy scenario. Depending on the electricity generated, you will have to either buy or sell electricity according to the PPA.
The only starting data is the generation; the PPA is the main decision variable, but I will need others. "buy", "sell", "b1" and "b2" are unknown without the PPA value. These are the equations:
Equations that rule the problem (by hand).
Using pyomo, I was trying to set up the problem as:
# Dataframe with my generation information:
January = Data['Full_Data'][(Data['Full_Data']['Month'] == 1) & (Data['Full_Data']['Year'] == 2011)]
Gen = January['Producible (MWh)']
Time = len(Gen)
M = 100

# Model variables and definition:
m = ConcreteModel()
m.IDX = range(Time)
m.PPA = Var(initialize=2.0, bounds=(1, 7))
m.compra = Var(m.IDX, bounds=(0, None))  # "buy"
m.venta = Var(m.IDX, bounds=(0, None))   # "sell"
m.b1 = Var(m.IDX, within=Binary)
m.b2 = Var(m.IDX, within=Binary)
And then, the constraint; only the first one, as I was already getting errors:
m.b1_rule = Constraint(
    expr = (((Gen[i] - PPA)/M for i in m.IDX) <= m.b1[i])
)
which gives me the error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-5d5f5584ebca> in <module>
1 m.b1_rule = Constraint(
----> 2 expr = (((Generacion[i] - PPA)/M for i in m.IDX) <= m.b1[i])
3 )
pyomo\core\expr\numvalue.pyx in pyomo.core.expr.numvalue.NumericValue.__ge__()
pyomo\core\expr\logical_expr.pyx in pyomo.core.expr.logical_expr._generate_relational_expression()
AttributeError: 'generator' object has no attribute 'is_expression_type'
I honestly have no idea what this means. I feel like this should be a simple problem, but I am struggling with the syntax. I basically have to apply a constraint to each individual data point from "Generation"; there is no sum involved. All constraints are 1-to-1 constraints set so that the physical energy requirements make sense.
How do I set up the constraints like this?
Thank you very much
You have a couple of things to fix. First, the error you are getting is because you have extra parentheses around an expression that Python is trying to convert to a generator. So, step 1 is to remove the outer parentheses, but that will not solve your issue.
You said you want to generate this constraint "for each" value of your index. Any time you want copies of a constraint "for each" index, you need either to build a constraint list and add to it in some kind of loop, or to use a function-rule combination. There are examples of both in the pyomo documentation and plenty on this site (I have posted a ton if you look at some of my posts). I would suggest the function-rule combo, and you should end up with something like:
def my_constr(m, i):
    # assumes Gen has been added to the model (e.g. as a Param indexed by m.IDX)
    return m.Gen[i] - m.PPA <= m.b1[i] * M

m.C1 = Constraint(m.IDX, rule=my_constr)
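Putting it together, here is a minimal self-contained sketch of that pattern (the toy generation numbers and component names are illustrative, not from the original problem):
from pyomo.environ import ConcreteModel, Param, Var, Constraint, Binary, RangeSet

gen_data = {1: 3.2, 2: 5.1, 3: 1.8}  # toy generation values (MWh)
M = 100

m = ConcreteModel()
m.IDX = RangeSet(1, 3)
m.Gen = Param(m.IDX, initialize=gen_data)  # generation data as a model component
m.PPA = Var(initialize=2.0, bounds=(1, 7))
m.b1 = Var(m.IDX, within=Binary)

def b1_rule(m, i):
    # one copy of the big-M constraint per index value
    return m.Gen[i] - m.PPA <= m.b1[i] * M

m.C1 = Constraint(m.IDX, rule=b1_rule)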
My objective is to implement the following system (taken from a signal-processing schematic):
Now, X[n] could be any signal, any function; for simplicity, and for testing the program, we can take it to be the exponential function. As the signal goes through the subsystems it undergoes some transformations. For S1 the transformation is:
Y[n] = X[n/2] if n is even, 0 otherwise
and after S2 is:
and so far I have done this:
import numpy as np
import timeit

n = np.arange(-20, 20)
y = np.zeros(n.shape)
x = np.exp(n)

def S1(inputSign, x_values, finalSys=True):
    if finalSys == True:
        output = np.zeros(x_values.shape)
        for i in range(len(x_values)):
            if x_values[i] % 2 == 0:
                output[i] = inputSign(x_values[i] / 2)
            else:
                output[i] = 0
        return output
    else:
        return inputSign

def S2(inputSign, x_values, finalSys=True):
    if finalSys == True:
        output = np.zeros(x_values.shape)
        for i in range(len(x_values)):
            output[i] = inputSign(x_values) + 0.5*inputSign(x_values) + 0.25*inputSign(x_values)
        return output
    else:
        return inputSign + 0.5*?????
The functions are defined with a flag finalSys, the idea being that if the system is the last one, the function gives me an array of values. The objective is to "cascade" the functions (S2(S1, n, finalSys=True), maybe not the correct term) so as to get a final answer that depends only on the initial input signal but is transformed along the way by the subsystems. I could do it manually and get an expression for the resulting system, but I am doing this as a learning exercise, so please bear with me. My trouble has been how to alter the arguments of inputSign. As an example, S1(np.exp, n, finalSys=False)(2) should return the same as np.exp(1).
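To make that expected behaviour concrete, here is a minimal sketch of what finalSys=False could do for S1, assuming a closure approach (the name S1_lazy is illustrative):
import numpy as np

def S1_lazy(inputSign):
    # return a new function instead of an array:
    # (S1 x)[t] = x[t/2] for even t, 0 otherwise
    def transformed(t):
        return inputSign(t / 2) if t % 2 == 0 else 0.0
    return transformed

print(S1_lazy(np.exp)(2) == np.exp(1))  # True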
I have a list of tuples like this.
a = [(1,2),(1,3),(1,4),(2,5),(6,5),(7,8)]
In this list, 1 relates to 2, 2 relates to 5, and 5 relates to 6; therefore 1 relates to 6. Similarly, I need to find the relations between the other elements in the tuples. I need a function that takes two input values and outputs as follows:
input = (1,6) #output = True
input = (5,3) #output = True
input = (2,8) #output = False
I do not have knowledge of itertools or map functions. Can they be used to solve these types of problems?
And, for the sake of curiosity and interest, where can I find these types of questions to practise on, and where are these types of problems encountered in real-life situations?
This can be done easily by considering the tuples as edges in a graph. The question then reduces to checking whether there is a path between the two nodes.
There exist lots of nice libraries for this; see e.g. networkx:
import networkx as nx
a = [(1,2),(1,3),(1,4),(2,5),(6,5),(7,8)]
G = nx.Graph(a)
nx.has_path(G, 1, 6) # True
nx.has_path(G, 5, 3) # True
nx.has_path(G, 2, 8) # False
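If you will run many queries on the same list, you can also precompute the components once with nx.connected_components and reduce each query to a dictionary lookup (a small sketch reusing G from above):
comp_id = {v: i for i, comp in enumerate(nx.connected_components(G)) for v in comp}
comp_id[1] == comp_id[6] # True
comp_id[2] == comp_id[8] # False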
This answer nicely states your problem as a graph problem, where every time you need to run your algorithm you check for the existence of a path between your input vertices. The time complexity of every query then depends on the size, order, diameter, and degree of the underlying graph.
However, if you intend to run this algorithm many times with the same array a, it may be worth doing some preprocessing on the input graph to find the connected components (Wikipedia: connected components) first. In that case you can answer every query in constant time. Here is the code I suggest:
# NOTE : tested using python 3.6.1
# WARNING : no input sanitization
a = [(1,2),(1,3),(1,4),(2,5),(6,5),(7,8)]
n = 8 # order of the underlying graph

# prepare the graph as lists of neighbours for every vertex, i.e. adjacency lists
# (extra unused vertex '0', just to match the value range of the problem)
graph = [[] for i in range(n+1)]
for edge in a:
    graph[edge[0]].append(edge[1])
    graph[edge[1]].append(edge[0])
print( "graph : " + str(graph) )

# set of unprocessed vertices : contains all of them at the beginning
unprocessed_vertices = {i for i in range(1,n+1)}

# subroutine to discover the connected component of a vertex
def build_component():
    component = [] # current connected component
    curr_vertices = {unprocessed_vertices.pop()} # locally unprocessed vertices, initialized with one of the globally unprocessed vertices
    while len(curr_vertices) > 0:
        curr_vertex = curr_vertices.pop() # vertex to be processed
        # add unprocessed neighbours of the current vertex to the set of vertices to process
        for neighbour in graph[curr_vertex]:
            if neighbour in unprocessed_vertices:
                curr_vertices.add(neighbour)
                unprocessed_vertices.remove(neighbour)
        component.append(curr_vertex)
    return component

# main algorithm : graph traversal over multiple connected components
components = []
while len(unprocessed_vertices) > 0:
    components.append( build_component() )
print( "components : " + str(components) )

# assign a number to each component
component_numbers = [None] * (n+1)
curr_number = 1
for comp in components:
    for vertex in comp:
        component_numbers[vertex] = curr_number
    curr_number += 1
print( "component_numbers : " + str(component_numbers) )

# main functionality
def is_connected( pair ):
    return component_numbers[pair[0]] == component_numbers[pair[1]]

# run the main functionality on the inputs : every call now runs in constant time, regardless of the size of the graph
print( is_connected( (1,6) ) ) # True
print( is_connected( (5,3) ) ) # True
print( is_connected( (2,8) ) ) # False
I don't really know the most likely situations where this problem could be encountered, but I suppose it can have applications in some clustering tasks, or when you want to know whether it is possible to go from one place to another. If the edges of the graph represent dependencies between modules, this problem would tell you whether two parts depend on each other, so there may be potential applications in compiling or in the management of large projects. The underlying problem is the "connected components" problem, which is among the problems we know polynomial algorithms for.
It is generally very useful to model these kinds of problems with graphs, as these objects have a very simple structure, and most of the time we can reduce the original problem to a well-known problem on graphs.
I am using code I found and slightly modified for my purposes. The problem is, it is not doing exactly what I want, and I am stuck on what to change to fix it.
I am searching for all neighbouring polygons that share a common border (a line) with a given polygon, not just a point.
My goal: 135/12 is a neighbour of 319/2, 135/4 and 317, but not of 320/1.
What I get in my QGIS table after I run my script:
NEIGHBORS are the neighbouring polygons,
SUM is the number of neighbouring polygons.
The code I use also includes 320/1 as a neighbouring polygon. How do I fix that?
from qgis.utils import iface
from PyQt4.QtCore import QVariant

_NAME_FIELD = 'Nr'
_SUM_FIELD = 'calc'
_NEW_NEIGHBORS_FIELD = 'NEIGHBORS'
_NEW_SUM_FIELD = 'SUM'

layer = iface.activeLayer()
layer.startEditing()
layer.dataProvider().addAttributes(
        [QgsField(_NEW_NEIGHBORS_FIELD, QVariant.String),
         QgsField(_NEW_SUM_FIELD, QVariant.Int)])
layer.updateFields()

feature_dict = {f.id(): f for f in layer.getFeatures()}

index = QgsSpatialIndex()
for f in feature_dict.values():
    index.insertFeature(f)

for f in feature_dict.values():
    print 'Working on %s' % f[_NAME_FIELD]
    geom = f.geometry()
    intersecting_ids = index.intersects(geom.boundingBox())

    neighbors = []
    neighbors_sum = 0
    for intersecting_id in intersecting_ids:
        intersecting_f = feature_dict[intersecting_id]
        if (f != intersecting_f and
                not intersecting_f.geometry().disjoint(geom)):
            neighbors.append(intersecting_f[_NAME_FIELD])
            neighbors_sum += intersecting_f[_SUM_FIELD]
    f[_NEW_NEIGHBORS_FIELD] = ','.join(neighbors)
    f[_NEW_SUM_FIELD] = neighbors_sum
    layer.updateFeature(f)

layer.commitChanges()
print 'Processing complete.'
I have found somewhat of a workaround. Before running my script, I create a small buffer (for my purposes, 0.01 m was enough) around all joints. Then I use the Difference tool to remove the buffer areas from my main layer, thus removing the unwanted neighbouring polygons. With that, the code works fine.
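An alternative that stays in the script (a sketch against the same QGIS 2.x API as the code above; multi-part intersections are not handled) is to test that the shared geometry is actually a line, instead of only excluding disjoint geometries:
from qgis.core import QGis

# inside the intersecting_id loop, replace the disjoint() test with:
shared = intersecting_f.geometry().intersection(geom)
if f != intersecting_f and shared.type() == QGis.Line:
    neighbors.append(intersecting_f[_NAME_FIELD])
    neighbors_sum += intersecting_f[_SUM_FIELD]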
Really not long ago I had my first dumb question answered here, so... here I am again, with a hopefully less dumb and more interesting headscratcher. Keep in mind I am still taking my baby steps in scripting!
Here it is: I need to rig a feathered wing, and I already have all the feathers in place. I thought of mimicking another rig I animated recently, which had the feathers point-constrained to the arm and forearm, and orient-constrained to three other controllers on the arm: each feather was constrained to two of those controllers at a time, and the constraint's weights shifted as you went down the forearm towards the wrist, so that a feather exactly at mid-distance between the elbow and the wrist would be equally constrained by both controllers... you get the picture.
My reasoning was as follows: let's make a loop that iterates over every feather, gets its world position, finds the distance from that feather to each of the orient controllers (through Pythagoras), normalizes that, and feeds the values into the weight attribute of an orient constraint. I could even go the extra mile and pass the normalized distance through a sine function to get a nice easing into the feathers' silhouette.
My pseudo-code is ugly and broken, but it's a try. My issues are inlined.
Second try!
It works now, but only on the active object instead of the whole selection. What could be happening?
import maya.cmds as cmds

# find the world space position of the targets
base_pos = cmds.xform('base', q=1, ws=1, rp=1)
tip_pos = cmds.xform('tip', q=1, ws=1, rp=1)

def relative_dist_from_pos(pos, ref):
    # vector subtraction to get the relative position
    pos_from_ref = [m - n for m, n in zip(pos, ref)]
    # pythagoras to get the distance from the vector
    dist_from_ref = (pos_from_ref[0]**2 + pos_from_ref[1]**2 + pos_from_ref[2]**2)**.5
    return dist_from_ref

def weight_from_dist(dist_from_base, dist_to_tip):
    normalize_fac = 1/(dist_from_base + dist_to_tip)
    dist_from_base *= normalize_fac
    dist_to_tip *= normalize_fac
    return dist_from_base, dist_to_tip

sel = cmds.ls(selection=True)
for obj in sel:
    # find the world space position of the feather
    feather_pos = cmds.xform(obj, q=1, ws=1, rp=1)
    # call relative_dist_from_pos
    dist_from_base = relative_dist_from_pos(feather_pos, base_pos)
    dist_to_tip = relative_dist_from_pos(feather_pos, tip_pos)
    # normalize the distances (the returned values must be captured,
    # otherwise the weights are never actually normalized)
    dist_from_base, dist_to_tip = weight_from_dist(dist_from_base, dist_to_tip)
    # constrain the feather - weights are inverted
    # because the smaller the distance, the stronger the constraint
    cmds.orientConstraint('base', obj, w=dist_to_tip)
    cmds.orientConstraint('tip', obj, w=dist_from_base)
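An alternative would be a single constraint carrying both targets, with the weights set per target afterwards (untested sketch; the attribute names assume Maya's default <target>W<index> weight naming):
# one constraint with two targets; per-target weight attributes
w_base, w_tip = weight_from_dist(dist_from_base, dist_to_tip)
con = cmds.orientConstraint('base', 'tip', obj)[0]
cmds.setAttr(con + '.baseW0', w_tip)  # closer to base => stronger base weight
cmds.setAttr(con + '.tipW1', w_base)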
There you are. Any pointers are appreciated.
Have a good night,
Hadriscus