I'm trying to write code in Python to solve a pipe network problem using the Hardy Cross method. I'm almost done, but the results do not satisfy conservation of water at the junctions with outflow.
while True:
    a1 = -(kAB*GIVEN[0][3]**2 + kBE*GIVEN[1][3]**2 + kEI*GIVEN[2][3]**2 - kIH*GIVEN[3][3]**2 + kHA*GIVEN[4][3]**2) \
         / (2*(kAB*GIVEN[0][3] + kBE*GIVEN[1][3] + kEI*GIVEN[2][3] + kIH*GIVEN[3][3] + kHA*GIVEN[4][3]))
    a2 = -(kBC*GIVEN[5][3]**2 + kCF*GIVEN[6][3]**2 + kFE*GIVEN[7][3]**2 - kBE*GIVEN[1][3]**2) \
         / (2*(kBC*GIVEN[5][3] + kCF*GIVEN[6][3] + kFE*GIVEN[7][3] + kBE*GIVEN[1][3]))
    a3 = -(kCD*GIVEN[8][3]**2 + kDG*GIVEN[9][3]**2 + kGF*GIVEN[10][3]**2 - kCF*GIVEN[6][3]**2) \
         / (2*(kCD*GIVEN[8][3] + kDG*GIVEN[9][3] + kGF*GIVEN[10][3] + kCF*GIVEN[6][3]))
    a4 = -(-kFE*GIVEN[7][3]**2 - kGF*GIVEN[10][3]**2 - kGJ*GIVEN[11][3]**2 - kJI*GIVEN[12][3]**2 - kEI*GIVEN[2][3]**2) \
         / (2*(kFE*GIVEN[7][3] + kGF*GIVEN[10][3] + kGJ*GIVEN[11][3] + kJI*GIVEN[12][3] + kEI*GIVEN[2][3]))
    if abs(a1) < 0.001 and abs(a2) < 0.001 and abs(a3) < 0.001 and abs(a4) < 0.001:
        print("P_AB = ", GIVEN[0][3]*1000, "L/s")
        break
    else:
        # compute corrected flows
        # abs() is used to remove the negative sign when Qcorrected is used for the next iteration
        GIVEN[0][3] = abs(GIVEN[0][3] + a1)         # AB
        GIVEN[1][3] = abs(GIVEN[1][3] + a1 - a2)    # BE
        GIVEN[2][3] = abs(GIVEN[2][3] + a1 - a4)    # EI
        GIVEN[3][3] = abs(GIVEN[3][3] - a1)         # IH
        GIVEN[4][3] = abs(GIVEN[4][3] + a1)         # HA
        GIVEN[5][3] = abs(GIVEN[5][3] + a2)         # BC
        GIVEN[6][3] = abs(GIVEN[6][3] + a2 - a3)    # CF
        GIVEN[7][3] = abs(GIVEN[7][3] + a2 - a4)    # FE
        GIVEN[8][3] = abs(GIVEN[8][3] + a3)         # CD
        GIVEN[9][3] = abs(GIVEN[9][3] + a3)         # DG
        GIVEN[10][3] = abs(GIVEN[10][3] + a3 - a4)  # GF
        GIVEN[11][3] = abs(GIVEN[11][3] - a4)       # GJ
        GIVEN[12][3] = abs(GIVEN[12][3] - a4)       # JI
I tried to implement the Deutsch algorithm using Qiskit. The following is the code.
from qiskit import QuantumCircuit, Aer, execute

circ = QuantumCircuit(2, 2) # |q_1q_0>
circ.x(0)
circ.h([0,1])
# Oracle
circ.barrier()
circ.x(1)
circ.barrier()
circ.h(0)
circ.measure([0,1], [0,1])
backend_sim = Aer.get_backend('qasm_simulator')
job = execute(circ, backend_sim, shots=1024)
result = job.result()
counts = result.get_counts(circ)
print(counts)
I expected the first classical bit to be 0 (that is, that the function corresponding to that oracle is a constant function). But the output is the following.
{'11': 496, '01': 528}
Why does the output imply the function is a balanced one?
The Deutsch algorithm applies the X gate to the qubit used for the phase kickback trick, to prepare it in the |−⟩ state before applying the oracle. Your implementation applies it to the "data" qubit instead, so the combined effect of the algorithm (after the H gates cancel out) is just to prepare the data qubit in the |1⟩ state.
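For comparison, here is a minimal sketch of the corrected circuit, keeping the legacy Aer/execute style from the question. The only substantive change is that the initial X acts on qubit 1 (the ancilla) rather than qubit 0 (the data qubit):
from qiskit import QuantumCircuit, Aer, execute

circ = QuantumCircuit(2, 2)   # |q_1 q_0>, qubit 0 = data, qubit 1 = ancilla
circ.x(1)                     # prepare the ancilla, not the data qubit
circ.h([0, 1])                # ancilla is now |->, data is |+>
# Oracle for the constant function f(x) = 1
circ.barrier()
circ.x(1)
circ.barrier()
circ.h(0)
circ.measure([0, 1], [0, 1])

backend_sim = Aer.get_backend('qasm_simulator')
counts = execute(circ, backend_sim, shots=1024).result().get_counts()
print(counts)                 # classical bit 0 (rightmost in each key) should now always be 0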
I am trying to replicate a paper (Dreher et al. 2020, "Aid, China, and Growth: Evidence from a New Global Development Finance Dataset") in Python. The paper's calculations were performed in Stata. I managed to rebuild everything up to this point, but now I am stuck.
The data for this project is panel data, and I have to include year-specific effects and country-specific effects:
time_specific_effects = True, entity_effects = True / other_effects = data4.code
(because the variable data4.code is the same as the entity here).
The plan is to use 2SLS:
[Images in the original post: the regression equation and the IV strategy]
The Stata code is given by the author:
xtivreg2 growth_pc (l2.OFn_all = l3.IV_reserves_OFn_all_1_ln l3.IV_factor1_OFn_all_1_ln) l.population_ln time* if code!="CHN", fe first savefprefix(first) cluster(code) endog(l2.OFn_all)
I rebuilt all the lagged variables in Python using shift() and it worked:
data4["l3IV_reserves_OFn_all_1_ln"] = data4["IV_reserves_OFn_all_1_ln"].shift(3)
data4["l3IV_factor1_OFn_all_1_ln"] = data4["IV_factor1_OFn_all_1_ln"].shift(3)
So the setup is the same as it is for the author:
As far as I know, there is no Python library that performs 2SLS with fixed effects directly, so I thought I would just use linearmodels' PanelOLS (which is suited for panel data with fixed effects) to run the first stage and the second stage separately:
dependendFS = data4.l2OFn_all
exog2 = sm.tools.add_constant(data4[["l1population_ln", "l3IV_reserves_OFn_all_1_ln","l3IV_factor1_OFn_all_1_ln"]])
mod = lm.panel.PanelOLS(dependendFS, exog2, time_effects = True, entity_effects=True, drop_absorbed=True)
mod_new21c = mod.fit(cov_type='clustered', clusters = data4.code)
# Save the fitted values
fitted_c = mod_new21c.fitted_values
data4["fitted_values_c"] = fitted_c
dependentSS = data4.growth_pc
exog = sm.tools.add_constant(data4[["fitted_values_c", "l1population_ln"]])
mod = lm.panel.PanelOLS(dependentSS, exog, time_effects=True, entity_effects= True)
mod_new211c = mod.fit(cov_type='clustered', clusters = data4.code)
I tried several combinations of the fixed effects and of the covariance options, but so far none of them delivers the results I need. Here is my output for the second stage:
[Image in the original post: my results after the second stage]
and this is what they should look like:
[Image in the original post: the paper's results table; the dependent variable is growth p.c., SEs in brackets]
Where is my mistake? Do I have to adjust my data or the output of the first stage, since I am running the two 2SLS stages separately? Is there a mistake, or a better method of estimating 2SLS in Python?
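For reference, here is a hypothetical sketch of a one-step alternative using linearmodels' IV2SLS: the country fixed effects are absorbed by demeaning every variable within country (the "within" transformation), the year effects are added as dummies, and both stages are run at once. It assumes data4 has 'code' and 'year' columns, it is only an approximation of the exact xtivreg2 specification, and the variable names follow the ones above:
import pandas as pd
from linearmodels.iv import IV2SLS

cols = ["growth_pc", "l2OFn_all", "l1population_ln",
        "l3IV_reserves_OFn_all_1_ln", "l3IV_factor1_OFn_all_1_ln"]
df = data4.dropna(subset=cols).copy()

# Within transformation: subtract the country mean from every variable.
within = df[cols] - df.groupby("code")[cols].transform("mean")
year_dummies = pd.get_dummies(df["year"], prefix="t", drop_first=True).astype(float)

res = IV2SLS(
    dependent=within["growth_pc"],
    exog=pd.concat([within[["l1population_ln"]], year_dummies], axis=1),
    endog=within["l2OFn_all"],
    instruments=within[["l3IV_reserves_OFn_all_1_ln", "l3IV_factor1_OFn_all_1_ln"]],
).fit(cov_type="clustered", clusters=df["code"])
print(res)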
Working with some molecules and reactions, it seems that chiral centers in SMILES may not be found after applying reactions.
What I get after applying some reactions to a molecule is this SMILES: C[C](C)[C]1[CH+]/C=C(\\C)CC/C=C(\\C)CC1
which actually seems to have a chiral center at carbon 3, [C]. If I use Chem.FindMolChiralCenters(n, force=True, includeUnassigned=True), I get an empty list, which means no chiral center is found.
The thing is that if I add an H to that carbon 3 so it becomes [CH], it is recognized as a chiral center, but with unassigned type (R or S). I tried adding Hs using Chem.AddHs(mol) and then running Chem.FindMolChiralCenters() again, but didn't get any chiral centers.
I was wondering if there is a way to recognize this chiral center even without the Hs being added explicitly, and to set the proper chiral tag following some kind of rules.
After applying two 1,2-hydride shifts to my initial mol (Chem.MolFromSmiles('C/C1=C\\C[C@H]([C+](C)C)CC/C(C)=C/CC1')) I get the SMILES mentioned above. So, given that I had an initial chiral tag, I want to know if there is a way to recover chirality lost after reactions.
SMARTS used for the 1,2-hydride shift: [Ch:1]-[C+1:2]>>[C+1:1]-[Ch+0:2]
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles('C/C1=C\\C[C@H]([C+](C)C)CC/C(C)=C/CC1')
rxn = AllChem.ReactionFromSmarts('[Ch:1]-[C+1:2]>>[C+1:1]-[Ch+0:2]')
products = list()
for product in rxn.RunReactant(mol, 0):
    Chem.SanitizeMol(product[0])
    products.append(product[0])
print(Chem.MolToSmiles(products[0]))
After applying this reaction twice to the product created, I eventually get this SMILES.
Output:
'C[C](C)[C]1[CH+]/C=C(\\C)CC/C=C(\\C)CC1'
which is where there is supposed to be a chiral center, at carbon 3.
Any idea, or should I report it as a bug?
This is not a bug. I think you are not specifying that you want the isomeric SMILES in the MolToSmiles call. So when I try:
mol = Chem.MolFromSmiles('C/C1=C\\C[C@H]([C+](C)C)CC/C(C)=C/CC1')
rxn = AllChem.ReactionFromSmarts('[Ch:1]-[C+1:2]>>[C+1:1]-[Ch+0:2]')
products = list()
for product in rxn.RunReactant(mol, 0):
    Chem.SanitizeMol(product[0])
    products.append(product[0])
print(Chem.MolToSmiles(products[0]))
Chem.MolToSmiles(products[0])
I obtained exactly the same result as you:
'C[C](C)[CH+]1CC=C(C)CCC=C(C)CC1'
'CC1=CC[CH](CCC(C)=CCC1)=C(C)C'
but when you use this one:
Chem.MolToSmiles(products[0], True)
You can obtain this result:
'CC(C)=[C@H]1C/C=C(\\C)CC/C=C(\\C)CC1'
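As a side note, the positional True in MolToSmiles is its isomericSmiles argument, so an equivalent and perhaps clearer form of the same call is:
# Keyword form of the call above; isomericSmiles controls whether stereo
# information (chiral tags and double-bond geometry) is written to the SMILES.
print(Chem.MolToSmiles(products[0], isomericSmiles=True))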
I am working on code to solve for the optimum combination of diameter sizes for a number of pipelines. The objective function is to find the least sum of pressure drops in six pipelines.
Since I have 15 choices of discrete diameter sizes, [2,4,6,8,12,16,20,24,30,36,40,42,50,60,80], that can be used for any of the six pipelines in the system, the list of possible solutions is 15^6 = 11,390,625.
To solve the problem, I am using mixed-integer linear programming with the PuLP package. I am able to find the solution for combinations of identical diameters (e.g. [2,2,2,2,2,2] or [4,4,4,4,4,4]), but what I need is to go through all combinations (e.g. [2,4,2,2,4,2] or [4,2,4,2,4,2]) to find the minimum. I attempted to do this, but the process takes a very long time to go through all combinations. Is there a faster way to do this?
Note that I cannot calculate the pressure drop for each pipeline in isolation, as the choice of diameter affects the total pressure drop in the system. Therefore, at any time, I need to calculate the pressure drop of each combination in the system.
I also need to constrain the problem such that rate / pipeline cross-sectional area > 2.
Your help is much appreciated.
The first attempt for my code is the following:
from pulp import *
import random
import itertools
import numpy

rate = 5000
numberOfPipelines = 15

def pressure(diameter):
    diameterList = numpy.tile(diameter, numberOfPipelines)
    pressure = 0.0
    for pipeline in range(numberOfPipelines):
        pressure += rate/diameterList[pipeline]
    return pressure

diameterList = [2,4,6,8,12,16,20,24,30,36,40,42,50,60,80]
pipelineIds = range(0, numberOfPipelines)
pipelinePressures = {}
for diameter in diameterList:
    pressures = []
    for pipeline in range(numberOfPipelines):
        pressures.append(pressure(diameter))
    pressureList = dict(zip(pipelineIds, pressures))
    pipelinePressures[diameter] = pressureList
print('pipepressure', pipelinePressures)

prob = LpProblem("Warehouse Allocation", LpMinimize)
use_diameter = LpVariable.dicts("UseDiameter", diameterList, cat=LpBinary)
use_pipeline = LpVariable.dicts("UsePipeline", [(i,j) for i in pipelineIds for j in diameterList], cat=LpBinary)

## Objective function:
prob += lpSum(pipelinePressures[j][i] * use_pipeline[(i,j)] for i in pipelineIds for j in diameterList)

## Each pipeline must be connected to exactly one diameter:
for i in pipelineIds:
    prob += lpSum(use_pipeline[(i,j)] for j in diameterList) == 1

## A diameter is activated if at least one pipeline is assigned to it:
for j in diameterList:
    for i in pipelineIds:
        prob += use_diameter[j] >= lpSum(use_pipeline[(i,j)])

## run the solution
prob.solve()
print("Status:", LpStatus[prob.status])
for i in diameterList:
    if use_diameter[i].varValue > 0:  # the original compared against an undefined 'pressureTest'
        print("Diameter Size", i)
for v in prob.variables():
    print(v.name, "=", v.varValue)
This is what I did for the combination part, which took a really long time:
xList = numpy.array(list(itertools.product(diameterList, repeat=numberOfPipelines)))
print(len(xList))
for combination in xList:
    pressures = []
    for pipeline in range(numberOfPipelines):
        pressures.append(pressure(combination))
    pressureList = dict(zip(pipelineIds, pressures))
    pipelinePressures[tuple(combination)] = pressureList  # tuple() so the key is hashable
print('pipelinePressures', pipelinePressures)
I would iterate through all combinations; otherwise, I think you would run into memory problems trying to model ALL combinations in a MIP.
If you iterate through the problems, perhaps using the multiprocessing library to use all cores, it shouldn't take long. Just remember to keep information only about the best combination so far, and not to try to generate all combinations at once and then evaluate them.
If the problem gets bigger, you should consider dynamic programming algorithms, or use PuLP with column generation.
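For illustration, here is a rough sketch of that brute-force idea with multiprocessing, keeping only the best combination seen so far. The simple rate/diameter expression from the question stands in for the real system-wide pressure calculation, and the function names are made up:
import itertools
import multiprocessing

rate = 5000
diameterList = [2, 4, 6, 8, 12, 16, 20, 24, 30, 36, 40, 42, 50, 60, 80]
numberOfPipelines = 6  # the six pipelines described in the question

def total_pressure(combination):
    # Stand-in for the real system-wide pressure-drop calculation.
    return sum(rate / d for d in combination)

def evaluate(combination):
    return total_pressure(combination), combination

if __name__ == "__main__":
    combos = itertools.product(diameterList, repeat=numberOfPipelines)
    best = (float("inf"), None)
    with multiprocessing.Pool() as pool:
        # imap_unordered streams the results, so only the best combination is
        # held in memory instead of all 15**6 of them.
        for result in pool.imap_unordered(evaluate, combos, chunksize=10000):
            if result[0] < best[0]:
                best = result
    print("best total pressure drop:", best[0], "diameters:", best[1])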
We are trying to run a cluster analysis on a large amount of data. We are kind of new to Python and found out that an iterative function is far more efficient than a recursive one. Now we are trying to make that change, but it is much harder than we thought.
The code below is the heart of our clustering function, and it takes over 90 percent of the run time. Can you help us change it into an iterative one?
Some extra information: the taunach function collects the neighbours of a point, which will later form the clusters. The problem is that we have very many points.
def taunach(tau, delta, i, s, nach, anz):
    dis = tabelle[s].dist
    #delta=tau
    x = data[i]
    y = Skalarprodukt(data[tabelle[s].index] - x)
    a = tau - abs(dis)
    #LA.norm(data[tabelle[s].index]-x)
    if y < a*abs(a):
        nach.update({item.index for item in tabelle[tabelle[s].inner:tabelle[s].outer-1]})
        anz = anzahl(delta, i, tabelle[s].inner, anz)
        if dis > -1:
            b = dis - tau
            if y >= b*abs(b):  #*(1-0.001):
                nach, anz = taunach(tau, delta, i, tabelle[s].outer, nach, anz)
    else:
        if y < tau**2:
            nach.add(tabelle[s].index)
            if y < delta:
                anz += 1
        if tabelle[s].dist > -4:
            b = dis - tau
            if y >= b*abs(b):  #*(1-0.001)):
                nach, anz = taunach(tau, delta, i, tabelle[s].outer, nach, anz)
        if tabelle[s].dist > -1:
            if y <= (dis+tau)**2:
                nach, anz = taunach(tau, delta, i, tabelle[s].inner, nach, anz)
    return nach, anz
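One general way to remove the recursion is to keep the nodes that still need to be visited on an explicit stack: each recursive call becomes a stack.append(...), and the function body runs in a loop until the stack is empty. The following is only a sketch of that pattern applied to the structure above (it assumes tabelle, data, Skalarprodukt and anzahl behave as in the original); the nodes are visited in a different order, but the resulting nach set and anz count are the same, because which nodes get visited depends only on tau, delta, i and the node itself:
def taunach_iterativ(tau, delta, i, s, nach, anz):
    stack = [s]                       # nodes that still have to be examined
    while stack:
        s = stack.pop()
        dis = tabelle[s].dist
        x = data[i]
        y = Skalarprodukt(data[tabelle[s].index] - x)
        a = tau - abs(dis)
        if y < a*abs(a):
            nach.update({item.index for item in tabelle[tabelle[s].inner:tabelle[s].outer-1]})
            anz = anzahl(delta, i, tabelle[s].inner, anz)
            if dis > -1:
                b = dis - tau
                if y >= b*abs(b):
                    stack.append(tabelle[s].outer)   # was a recursive call
        else:
            if y < tau**2:
                nach.add(tabelle[s].index)
                if y < delta:
                    anz += 1
            if tabelle[s].dist > -4:
                b = dis - tau
                if y >= b*abs(b):
                    stack.append(tabelle[s].outer)   # was a recursive call
            if tabelle[s].dist > -1:
                if y <= (dis+tau)**2:
                    stack.append(tabelle[s].inner)   # was a recursive call
    return nach, anz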