Python NumPy array indexing

I am having a small difficulty with NumPy indexing. The script gives only the values indexed from the last array three times, when it is supposed to give values from three different arrays (F_fit in the script). I am sure it is a simple thing, but I haven't figured it out yet. The 3_phases.txt file contains these 3 lines:
1 -1 -1 -1 1 1
1 1 1 -1 1 1
1 1 -1 -1 -1 1
Here is the code:
import numpy as np
import matplotlib.pyplot as plt
D = 12.96
n = np.arange(1,7)
F0 = 1.0
x = np.linspace(0.001,4,2000)
Q = 2*np.pi*np.array([1/D, 2/D, 3/D, 4/D, 5/D, 6/D])
I = (11.159, 43.857, 26.302, 2.047, 0.513, 0.998)
phase = np.genfromtxt('3_phases.txt')
for row in phase:
    F = (np.sqrt(np.square(n)*I/sum(I)))*row
    d = sum(i*(np.sin(x*D/2+np.pi*j)/(x*D/2+np.pi*j)) for i,j in zip(F,n))
    e = sum(i*(np.sin(x*D/2-np.pi*j)/(x*D/2-np.pi*j)) for i,j in zip(F,n))
    f_0 = F0*(np.sin(x*D/2)/(x*D/2))
    F_cont = np.array(d) + np.array(e) + np.array(f_0)
    plt.plot(x,F_cont,'r')
    #plt.show()
plt.clf()
D2 = 12.3
I2 = (9.4, 38.6, 8.4, 3.25, 0, 0.37)
Q2 = 2*np.pi*np.array([1/D2, 2/D2, 3/D2, 4/D2, 5/D2, 6/D2])
n2 = np.arange(1,7)
for row in phase:
    F2 = (np.sqrt(np.square(n2)*I2/sum(I2)))*row
    plt.plot(Q2,F2,'o')
    #plt.show()
    F_data = F2
    Q_data = Q2
    I_data = np.around(2000*Q2/(4-0.001))
    I_data = np.array(map(int,I_data))
    F_fit = F_cont[I_data]
    print F_fit
    R2 = (1-(sum(np.square(F_data-F_fit))/sum(np.square(F_data-np.mean(F_data)))))
Any help would be appreciated.

You are redefining F_cont each time you go through your first loop. By the time you get to your second loop (with all the _2 values) you only have access to the F_cont from the last row.
To fix this, move your _2 definitions above your first loop and only do the loop once, then you'll have access to each F_cont and your printouts will be different.
The following code is identical to yours except for the rearrangement described above, as well as the fact that I implemented my comment from above (using n/D in your Q's).
import numpy as np
import matplotlib.pyplot as plt
D = 12.96
n = np.arange(1,7)
F0 = 1.0
x = np.linspace(0.001,4,2000)
Q = 2*np.pi*n/D
I = (11.159, 43.857, 26.302, 2.047, 0.513, 0.998)
phase = np.genfromtxt('3_phases.txt')
D2 = 12.3
I2 = (9.4, 38.6, 8.4, 3.25, 0, 0.37)
Q2 = 2*np.pi*n/D2
n2 = np.arange(1,7)
for row in phase:
    F = (np.sqrt(np.square(n)*I/sum(I)))*row
    d = sum(i*(np.sin(x*D/2+np.pi*j)/(x*D/2+np.pi*j)) for i,j in zip(F,n))
    e = sum(i*(np.sin(x*D/2-np.pi*j)/(x*D/2-np.pi*j)) for i,j in zip(F,n))
    f_0 = F0*(np.sin(x*D/2)/(x*D/2))
    F_cont = np.array(d) + np.array(e) + np.array(f_0)
    plt.plot(x,F_cont,'r')
    plt.clf()
    F2 = (np.sqrt(np.square(n2)*I2/sum(I2)))*row
    plt.plot(Q2,F2,'o')
    F_data = F2
    Q_data = Q2
    I_data = np.around(2000*Q2/(4-0.001))
    I_data = np.array(map(int,I_data))
    F_fit = F_cont[I_data]
    print F_fit
    R2 = (1-(sum(np.square(F_data-F_fit))/sum(np.square(F_data-np.mean(F_data)))))

F_fit is being calculated from I_data, which is in turn being calculated from Q2. Q2 is set outside the loop and doesn't depend on row; perhaps you meant I_data to be a function of F2 instead?
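If restructuring the loops isn't desired, a minimal alternative sketch (not from either answer, kept in the asker's Python 2 style) is to save each row's F_cont in the first loop and pair it with the matching row in the second. Note that I_data depends only on Q2, so it can be computed once:

F_cont_per_row = []
for row in phase:
    # ... compute F_cont exactly as before ...
    F_cont_per_row.append(F_cont)

I_data = np.around(2000*Q2/(4-0.001)).astype(int)
for row, F_cont in zip(phase, F_cont_per_row):
    F2 = (np.sqrt(np.square(n2)*I2/sum(I2)))*row
    F_fit = F_cont[I_data]
    print F_fit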

How to correctly plot the step response of a MIMO system with python control package

I need to plot the step responses of a MIMO system with the python control package.
I've tried so far using the function step_response, which however converts the system into a SISO system before computing the step response, so that only one set of outputs is computed.
I then tried using the function forced_response with different setups for the input (i.e. constant unity value, numpy array of ones, etc., just for the sake of trying).
I get different step responses, related to other outputs, but not all the responses (i.e. number of inputs x number of outputs).
Here is a minimal sample code that implements a simple 2nd-order model with 2 inputs and 4 outputs and dummy data. A plot of the responses I get is attached.
[plot: stepResponses]
In my test I first run the step_response function; yout turns out to be of size 4 x size_time (so only the first 4 outputs are excited).
Then I run the forced_response function, and youtForced still comes out of size 4 x size_time, instead of size 4 x size_time x 2 (or similar) as I expected (under the hypothesis that forced_response treats the system as MIMO).
Is there a way to get full control of the step response via the forced_response function (similarly to what the MATLAB step function does)?
Unfortunately the documentation is poor and there are very few practical examples about this.
Many thanks to anyone who can help.
from control import ss, step_response, forced_response
import numpy as np
import matplotlib.pyplot as plt
sz = 2
f1 = 1*2*np.pi
f2 = 1.5*2*np.pi
OM2 = [-f1**2, -f2**2]
ZI = [-2*f1*0.01, -2*f2*0.01]
A11 = np.zeros((sz, sz))
A12 = np.eye(sz)
A21 = np.diag(OM2)
A22 = np.diag(ZI)
A = np.vstack((np.concatenate((A11, A12), axis=1), np.concatenate((A21, A22), axis=1)))
B1 = np.zeros((sz, sz))
B2 = [[1e-6, 1e-7],[2e-6, 2e-7]]
B = np.vstack((B1, B2))
C1 = np.zeros((sz, sz*2))
C1[0] = [1e-4, 2*1e-4, 3*1e-4, 5*1e-5]
C1[1] = [2e-4, 3.5*1e-4, 1.5*1e-4, 2*1e-5]
C2 = np.zeros((sz*2, sz))
C = np.concatenate((C1.T, C2), axis=1)
D = np.zeros((sz*2, sz))
sys = ss(A, B, C, D)
tEnd = 1
time = np.arange(0, tEnd, 1e-3)
tout, youtStep = step_response(sys, T=time)
tout, youtForced, xout = forced_response(sys, T=time, U=1.0)
plt.figure()
for k, y in enumerate(youtStep):
    plt.subplot(4,1,k+1)
    plt.grid(True)
    plt.plot(tout, y, label='step')
    plt.plot(tout, youtForced[k], '--r', label='forced')
    if k == 0:
        plt.legend()
plt.xlabel('Time [s]')
OK, the step response is easily manageable via the function control.matlab.step, which actually allows selecting the individual inputs of the MIMO system, something I initially overlooked but which is well reported in the official documentation:
https://python-control.readthedocs.io/en/0.8.1/generated/control.matlab.step.html
Here's the output [MIMO step response output]
Luckily it was an easy fix :)
from control import ss
import control.matlab as ctl
import numpy as np
import matplotlib.pyplot as plt
sz = 2
f1 = 1*2*np.pi
f2 = 1.5*2*np.pi
OM2 = [-f1**2, -f2**2]
ZI = [-2*f1*0.01, -2*f2*0.01]
A11 = np.zeros((sz, sz))
A12 = np.eye(sz)
A21 = np.diag(OM2)
A22 = np.diag(ZI)
A = np.vstack((np.concatenate((A11, A12), axis=1), np.concatenate((A21, A22), axis=1)))
B1 = np.zeros((sz, sz))
B2 = [[1e-6, 1e-7],[2e-6, 2e-7]]
B = np.vstack((B1, B2))
C1 = np.zeros((sz, sz*2))
C1[0] = [1e-4, 2*1e-4, 3*1e-4, 5*1e-5]
C1[1] = [2e-4, 3.5*1e-4, 1.5*1e-4, 2*1e-5]
C2 = np.zeros((sz*2, sz))
C = np.concatenate((C1.T, C2), axis=1)
D = np.zeros((sz*2, sz))
sys = ss(A, B, C, D)
tEnd = 100
time = np.arange(0, tEnd, 1e-3)
yy1, tt1 = ctl.step(sys, T=time, input=0)
yy2, tt2 = ctl.step(sys, T=time, input=1)
plt.figure()
for k in range(0, len(yy1[1,:])):
    plt.subplot(4,1,k+1)
    plt.grid(True)
    plt.plot(tt1, yy1[:,k], label='input=0')
    plt.plot(tt2, yy2[:,k], label='input=1')
    if k == 0:
        plt.legend()
plt.xlabel('Time [s]')
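For completeness, the per-input responses can also be coaxed out of forced_response by driving one input channel at a time with a unit step. A sketch, assuming the (tout, yout, xout) return convention of python-control 0.8.x used in the question:

from control import forced_response
U = np.zeros((2, len(time)))  # one row per input channel
U[0, :] = 1.0                 # unit step on input 0 only
tout, yout0, xout = forced_response(sys, T=time, U=U)  # yout0 is 4 x len(time)
# repeat with U[0, :] = 0.0 and U[1, :] = 1.0 for input 1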

DataFrame with Monte Carlo simulation: calculating the next row from the previous one

I want to build up a DataFrame from scratch, where each value is calculated from the value in the row before (a barrier option). I know that I can use a Monte Carlo simulation to solve it, but it just won't work the way I want it to.
The formula is:
value in row before * np.exp((r-sigma**2/2)*T/TradingDays+sigma*np.sqrt(T/TradingDays)*z)
The first code I wrote just calculates the first column. I know that I need a second loop, but I can't really manage it.
The result should be that for each simulation it calculates a new value using the value before, for 500 days, meaning S_1 should run through to S_500, with a total of 1000 simulations. (I need to generate new columns based on the previous value, using the formula.)
Similar to this:
So for the 1st simulation 500 days, for the 2nd simulation 500 days, and so on...
import numpy as np
import pandas as pd
from scipy.stats import norm
import random as rd
import math
simulation = 0
S_0 = 42
T = 2
r = 0.02
sigma = 0.20
TradingDays = 500
df = pd.DataFrame()
for i in range(0, TradingDays):
    z = norm.ppf(rd.random())
    simulation = simulation + 1
    S_1 = S_0*np.exp((r-sigma**2/2)*T/TradingDays+sigma*np.sqrt(T/TradingDays)*z)
    df = df.append({
        'S_1': S_1,
        'S_0': S_0
    }, ignore_index=True)

df = df.round({'Z': 6,
               'S_T': 2})
df.index += 1
df.index.name = 'Simulation'
print(df)
I found another possible code here, and it does solve the problem, but just for one row; the next row is just the same calculation: Generate a Dataframe that follow a mathematical function for each column / row
If I just replace it with my formula I get the same problem.
Replacing:
exp(r - q * sqrt(sigma))*T+ (np.random.randn(nrows) * sqrt(deltaT)))
with:
exp((r-sigma**2/2)*T/nrows+sigma*np.sqrt(T/nrows)*z))
import numpy as np
import pandas as pd
from scipy.stats import norm
import random as rd
import math
S_0 = 42
T = 2
r = 0.02
sigma = 0.20
TradingDays = 50
Simulation = 100
df = pd.DataFrame({'s0': [S_0] * Simulation})
for i in range(1, TradingDays):
    z = norm.ppf(rd.random())
    df[f's{i}'] = df.iloc[:, -1] * np.exp((r-sigma**2/2)*T/TradingDays+sigma*np.sqrt(T/TradingDays)*z)
print(df)
I would rather work with the last code and solve the problem based on it.
How about just overwriting the value of S_0 with the new value of S_1 while you loop, and keeping all simulations in a list?
Like this:
import numpy as np
import pandas as pd
import random
from scipy.stats import norm
S_0 = 42
T = 2
r = 0.02
sigma = 0.20
trading_days = 50
output = []
for i in range(trading_days):
    z = norm.ppf(random.random())
    value = S_0*np.exp((r - sigma**2 / 2) * T / trading_days + sigma * np.sqrt(T/trading_days) * z)
    output.append(value)
    S_0 = value
df = pd.DataFrame({'simulation': output})
Perhaps I'm missing something, but I don't see the need for a second loop.
Also, this eliminates calling df.append() in a loop, which should be avoided. (See here)
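If all simulations are wanted as a table in one go, a vectorized numpy sketch (an alternative to the loops above, not part of the original answer) exploits the fact that each path is just a running product of i.i.d. log-normal factors:

import numpy as np
import pandas as pd

S_0, T, r, sigma = 42, 2, 0.02, 0.20
trading_days, n_simulations = 500, 1000

# One standard-normal draw per (day, simulation) cell
z = np.random.standard_normal((trading_days, n_simulations))
factors = np.exp((r - sigma**2 / 2) * T / trading_days
                 + sigma * np.sqrt(T / trading_days) * z)
# Cumulative product down each column turns the factors into price paths
paths = S_0 * factors.cumprod(axis=0)  # row i = day i+1, column j = simulation j
df = pd.DataFrame(paths)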
Solution based on the answer of bartaelterman, thank you very much!
import numpy as np
import pandas as pd
from scipy.stats import norm
import random as rd
import math
# Dividing the list into chunks, to later append it to the dataframe in the right order
def chunk_list(lst, chunk_size):
    for i in range(0, len(lst), chunk_size):
        yield lst[i:i + chunk_size]

def blackscholes():
    d1 = ((math.log(S_0/K)+(r+sigma**2/2)*T)/(sigma*np.sqrt(2)))
    d2 = ((math.log(S_0/K)+(r-sigma**2/2)*T)/(sigma*np.sqrt(2)))
    preis_call_option = S_0*norm.cdf(d1)-K*np.exp(-r*T)*norm.cdf(d2)
    return preis_call_option

K = 40
S_0 = 42
T = 2
r = 0.02
sigma = 0.2
U = 38
simulation = 10000
trading_days = 500
trading_days = trading_days - 1

# Creating 2 lists for the first and second loop
loop_simulation = []
loop_trading_days = []

# The first loop calculates the first column in a list
for j in range(0, simulation):
    print("Progressbar_1_2 {:2.2%}".format(j / simulation), end="\n\r")
    S_Tag_new = 0
    NORM_S_INV = norm.ppf(rd.random())
    S_Tag = S_0*np.exp((r-sigma**2/2)*T/trading_days+sigma*np.sqrt(T/trading_days)*NORM_S_INV)
    S_Tag_new = S_Tag
    loop_simulation.append(S_Tag)
    # The second loop calculates the rows for the columns in a list
    for i in range(0, trading_days):
        NORM_S_INV = norm.ppf(rd.random())
        S_Tag = S_Tag_new*np.exp((r-sigma**2/2)*T/trading_days+sigma*np.sqrt(T/trading_days)*NORM_S_INV)
        loop_trading_days.append(S_Tag)
        S_Tag_new = S_Tag

# The values from the second loop are divided into chunks of trading days per simulation
loop_trading_days_chunked = list(chunk_list(loop_trading_days, trading_days))
# First dataframe with just the first result from the first loop for each simulation
df1 = pd.DataFrame({'S_Tag 1': loop_simulation})
# Appending the chunked list from the second loop to a second dataframe
df2 = pd.DataFrame(loop_trading_days_chunked)
# Merging both dataframes into one
df3 = pd.concat([df1, df2], axis=1)

Why is my code so much slower in function form?

I wrote some code a while ago that processes spectra, using data from text files and performing calculations on them. I started with code that just does everything line by line without any functions, and despite being long, it finishes running in 2.11 seconds (according to %%timeit). Below is that original code, labeled as such.
However, I wanted to put my code into functions instead, to allow for easier readability and usage with different models in the future. Even though I'm using all the same steps as I did before (but this time inside my functions), it is so much slower. This code is also below. Now, I have to wait for about 15-20 minutes to get the same outputs. Why is it so much slower, and is there any way I can make it significantly faster but still use functions?
Original Code:
import re
import matplotlib.pyplot as plt
import numpy as np
import scipy.interpolate
filename = 'bpass_spectra.txt'
extinctionfile = 'ExtinctionLawPoints.txt' # from R_V = 4.0
pointslist = []
datalist = []
speclist = []
# Constants
Msun = 1.98892e30 # solar mass [kg]
h = 4.1357e-15 # Planck's constant [eV s]
c = float(3e8) # speed of light [m/s]
# Read spectra file
f = open(filename, 'r')
rawspectra = f.readlines()
met = re.findall('Z\s=\s(\d*\.\d+)', rawspectra[0])
del rawspectra[0]
for i in range(len(rawspectra)):
    newlist = rawspectra[i].split(' ')
    datalist.append(newlist)
# Read extinction curve data file
rawpoints = open(extinctionfile, 'r').readlines()
for i in range(len(rawpoints)):
    newlst = re.split('(?!\S)\s(?=\S)|(?!\S)\s+(?=\S)', rawpoints[i])
    pointslist.append(newlst)
pointslist = pointslist[3:]
lambdalist = [float(item[0]) for item in pointslist]
k_abslist = [float(item[4]) for item in pointslist]
xvallist = [(c*h)/(lamb*1e-6) for lamb in lambdalist]
k_interp = scipy.interpolate.interp1d(xvallist, k_abslist)
# Create new lists
Elist = [float(item[0]) for item in datalist]
speclambdalist = [h*c*1e9/E for E in Elist]
z1list = [float(item[1]) for item in datalist]
speclist.extend(z1list)
met = met[0]
klist = [None]*len(speclist)
Loutlist = [None]*len(speclist)
Tlist = [None]*len(speclist)
# Define parameters
b = 2.0
R = 1.0
z = 1.0
Mgas = 1.0 # mass of gas, input
Mhalo = 2e41 # mass of dark matter halo, known
if float(met) > 0.0052:
    DGRlist = [50.0*np.exp(-2.21)*float(met)]*len(speclist)
elif float(met) <= 0.0052:
    DGRlist = [((50.0*float(met))**3.15)*np.exp(-0.96)]*len(speclist)
for i in range(len(speclist)):
    if float(Elist[i]) <= 4.1357e-3: # frequencies <= 10^12 Hz
        klist[i] = 0.1*(float(Elist[i])/(1000.0*h))**b # extinction law [cm^2/g]
    elif float(Elist[i]) > 4.1357e-3: # frequencies > 10^12 Hz
        klist[i] = k_interp(Elist[i]) # interpolated function's value at Elist[i]
Mdustlist = [Mgas*DGR for DGR in DGRlist] # dust mass
Rhalo = 0.784*(0.27**2.0)*(0.7**(-2.0/3.0))*float(10.0/(1.0+z))*((Mhalo/(1e8*Msun))**(1.0/3.0))
Rdust = 0.018*Rhalo # [kpc]
for i in range(len(speclist)):
    Tlist[i] = 3*Mdustlist[i]*klist[i]/(4*np.pi*Rdust)
Linlist = [float(spectra)*R for spectra in speclist]
# Outgoing luminosity as function of wavelength
for i in range(len(Linlist)):
    Loutlist[i] = Linlist[i]*np.exp(-Tlist[i])
# Test the calculation
print "LIN ELEMENTS 0 AND 1000:", Linlist[0], Linlist[1000]
print "LOUT ELEMENTS 0 AND 1000:", Loutlist[0], Loutlist[1000]
New "function-ized" Code (much slower):
import re
import matplotlib.pyplot as plt
import numpy as np
import scipy.interpolate
# Required files and lists
filename = 'bpass_spectra.txt' # number of columns = 4
extinctionfile = 'ExtinctionLawPoints.txt' # R_V = 4.0
datalist = []
if filename == 'bpass_spectra.txt':
    filetype = 4
else:
    filetype = 1
if extinctionfile == 'ExtinctionLawPoints.txt':
    R_V = 4.0
else:
    R_V = 1.0 # to be determined
# Constants
M_sun = 1.98892e30 # solar mass [kg]
h = 4.1357e-15 # Planck's constant [eV s]
c = float(3e8) # speed of light [m/s]
# Inputs
beta = 2.0
R = 1.0
z = 1.0
M_gas = 1.0
M_halo = 2e41
# Read spectra file
f = open(filename, 'r')
rawlines = f.readlines()
met = re.findall('Z\s=\s(\d*\.\d+)', rawlines[0])
del rawlines[0]
for i in range(len(rawlines)):
    newlist = rawlines[i].split(' ')
    datalist.append(newlist)
# Read extinction curve data file
rawpoints = open(extinctionfile, 'r').readlines()
def interpolate(R_V, rawpoints, Elist, j):
    pointslist = []
    if R_V == 4.0:
        for i in range(len(rawpoints)):
            newlst = re.split('(?!\S)\s(?=\S)|(?!\S)\s+(?=\S)', rawpoints[i])
            pointslist.append(newlst)
    pointslist = pointslist[3:]
    lambdalist = [float(item[0]) for item in pointslist]
    k_abslist = [float(item[4]) for item in pointslist]
    xvallist = [(c*h)/(lamb*1e-6) for lamb in lambdalist]
    k_interp = scipy.interpolate.interp1d(xvallist, k_abslist)
    return k_interp(Elist[j])
# Dust extinction function
def dust(interpolate, filetype, datalist, beta, R, z, M_gas, M_halo, met):
    speclist = []
    if filetype == 4:
        metallicity = float(met[0])
        Elist = [float(item[0]) for item in datalist]
        speclambdalist = [h*c*1e9/E for E in Elist]
        met1list = [float(item[1]) for item in datalist]
        speclist.extend(met1list)
    klist, Tlist = [None]*len(speclist), [None]*len(speclist)
    if metallicity > 0.0052:
        DGRlist = [50.0*np.exp(-2.21)*metallicity]*len(speclist) # dust to gas ratio
    elif metallicity <= 0.0052:
        DGRlist = [((50.0*metallicity)**3.15)*np.exp(-0.96)]*len(speclist)
    for i in range(len(speclist)):
        if Elist[i] <= 4.1357e-3: # frequencies <= 10^12 Hz
            klist[i] = 0.1*(float(Elist[i])/(1000.0*h))**beta # extinction law [cm^2/g]
        elif Elist[i] > 4.1357e-3: # frequencies > 10^12 Hz
            klist[i] = interpolate(R_V, rawpoints, Elist, i) # interpolated function's value at Elist[i]
    Mdustlist = [M_gas*DGR for DGR in DGRlist] # dust mass
    R_halo = 0.784*(0.27**2.0)*(0.7**(-2.0/3.0))*float(10/(1+z))*((M_halo/(1e8*M_sun))**(1.0/3.0))
    R_dust = 0.018*R_halo # [kpc]
    # Optical depth calculation
    Tlist = [3*Mdustlist[i]*klist[i]/(4*np.pi*R_dust) for i in range(len(speclist))]
    # Ingoing and outgoing luminosities as functions of wavelength
    Linlist = [float(spectra)*R for spectra in speclist]
    Loutlist = [Linlist[i]*np.exp(-Tlist[i]) for i in range(len(speclist))]
    return speclambdalist, Linlist, Loutlist

print dust(interpolate, filetype, datalist, beta, R, z, M_gas, M_halo, met)
Even when I only have the function return Loutlist instead of the tuple of 3 lists, it's still extremely slow. Any ideas on why this is? Also, I'm going to want to return the tuple and then plot speclambdalist versus Linlist, and also plot speclambdalist versus Loutlist on the same plot. But I'm under the impression that each time I call dust(interpolate, filetype, datalist, beta, R, z, M_gas, M_halo, met)[i] where i = 0, 1, or 2 (I'll be doing this multiple times), it'll have to run the function again each time. Is there any way to bypass these extra runs to further increase speed? Thank you!
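(A sketch of what avoiding the repeated work might look like, using the names above; this is an assumption about the intent, not code from the original post. The slowdown comes from interpolate() re-parsing the extinction file and rebuilding the interp1d object for every element of Elist, so the interpolator can be built once, and the returned tuple can be unpacked from a single call so dust() runs only once.)

# Build the interpolator one time, outside any per-element work
pointslist = [re.split('(?!\S)\s(?=\S)|(?!\S)\s+(?=\S)', line) for line in rawpoints][3:]
xvallist = [(c*h)/(float(item[0])*1e-6) for item in pointslist]
k_abslist = [float(item[4]) for item in pointslist]
k_interp = scipy.interpolate.interp1d(xvallist, k_abslist)

# Adapter matching the interpolate(R_V, rawpoints, Elist, j) signature used by dust()
fast_interpolate = lambda R_V, rp, Elist, j: k_interp(Elist[j])

# Call dust() once and unpack, instead of indexing three separate calls
speclambdalist, Linlist, Loutlist = dust(fast_interpolate, filetype, datalist,
                                         beta, R, z, M_gas, M_halo, met)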

Plotting two scatter graphs in one figure

I have two numpy arrays and want to plot them in the same graph.
My current code is like this below:
X1 # numpy array
X2 # numpy array
X1Df = pd.DataFrame(columns=['x','y'])
k = 0
for i in X1: # copy first numpy array into a DataFrame
    temp = pd.DataFrame({'x': i[0],
                         'y': i[1]}, index=[k])
    k = k + 1
    X1Df = pd.concat([X1Df, temp])
X1Df.plot(kind='scatter', x='x', y='y')

X2Df = pd.DataFrame(columns=['x','y'])
k = 0
for i in X2: # copy second numpy array into a DataFrame
    temp = pd.DataFrame({'x': i[0],
                         'y': i[1]}, index=[k])
    k = k + 1
    X2Df = pd.concat([X2Df, temp])
X2Df.plot(kind='scatter', x='x', y='y')
#########
plt.show()
But it makes two separate graphs, and I guess my code copying from numpy to a DataFrame is awkward...
Is there any better solution?
Use the axes handle and the ax parameter in df.plot:
X1 = np.array([[1,2],[3,4]])
X2 = np.array([[2,3],[6,7]])

X1Df = pd.DataFrame(columns=['x','y'])
k = 0
for i in X1: # copy first numpy array into a DataFrame
    temp = pd.DataFrame({'x': i[0],
                         'y': i[1]}, index=[k])
    k = k + 1
    X1Df = pd.concat([X1Df, temp])
ax = X1Df.plot(kind='scatter', x='x', y='y')

X2Df = pd.DataFrame(columns=['x','y'])
k = 0
for i in X2: # copy second numpy array into a DataFrame
    temp = pd.DataFrame({'x': i[0],
                         'y': i[1]}, index=[k])
    k = k + 1
    X2Df = pd.concat([X2Df, temp])
X2Df.plot(kind='scatter', x='x', y='y', ax=ax)
#########
plt.show()
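Since the arrays already hold the x/y columns, the row-by-row copying can be skipped entirely; a simpler sketch (not part of the answer above):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

X1 = np.array([[1,2],[3,4]])
X2 = np.array([[2,3],[6,7]])

# Build each DataFrame in one call from the array columns
X1Df = pd.DataFrame(X1, columns=['x','y'])
X2Df = pd.DataFrame(X2, columns=['x','y'])
ax = X1Df.plot(kind='scatter', x='x', y='y')
X2Df.plot(kind='scatter', x='x', y='y', ax=ax)

# Or plot the arrays directly with matplotlib, no DataFrame needed
plt.scatter(X1[:,0], X1[:,1])
plt.scatter(X2[:,0], X2[:,1])
plt.show()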

Python's fsolve not working

I'm currently trying to find the intercept of 2 equations from my code (pasted below). I'm using fsolve and have used it successfully in one part, but I can't get it to work for the second.
Confusingly, it's not showing an error; if you paste this code into your notebook and run it you'll see 2 graphs. On the first graph there's a line at an angle which should be stopping at the eqm line.
The section which won't work is def q_Eqm(x_q). Thank you for your help.
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt
AC_LK = np.array([4.02232,1206.53,220.291])
AC_HK = np.array([4.0854,1348.77,219.976])
P_Tot = 1 # Bara
N_Size = 11 # 1001 = 0.1% accuracy for xA
xf = 0.7
q = 0.7
xA = np.linspace(0,1,N_Size)
yA = np.linspace(0.00,0.00,N_Size)
T = np.linspace(0.00,0.00,N_Size)
x = np.array([xA[0:N_Size],yA[0:N_Size],T[0:N_Size]]) # x[xA,yA,T]
F = np.empty((1))
def xA_T(N):
    xA_Ant = x[0,N]
    def P_Ant(T):
        PA = pow(10,AC_LK[0]-(AC_LK[1]/(T+AC_LK[2])))*xA_Ant
        PB = pow(10,AC_HK[0]-(AC_HK[1]/(T+AC_HK[2])))*(1-xA_Ant)
        F[0] = P_Tot - (PA + PB)
        return F
    TGuess = [100]
    T = opt.fsolve(P_Ant, TGuess)
    x[2,N] = T
    return x

for N in range(0, len(xA)):
    xA_T(N)
    x[1,N] = pow(10,AC_LK[0]-(AC_LK[1]/(x[2,N]+AC_LK[2])))*x[0,N]/P_Tot
q_int = ((-q*0)/(1-q)) + (xf/(1-q))
Eqm_Poly = np.polyfit(x[0,0:N_Size], x[1,0:N_Size], 6)
q_Poly = np.polyfit([xf,0], [xf,q_int], 1)
F = np.empty((1))
def q_Eqm(x_q):
    y_q = q_Poly[0]*x_q + q_Poly[1]
    eqm_y = (Eqm_Poly[0]*pow(x_q,6)+Eqm_Poly[1]*pow(x_q,5)+Eqm_Poly[2]*pow(x_q,4)+Eqm_Poly[3]*pow(x_q,3)+Eqm_Poly[4]*pow(x_q,2)+Eqm_Poly[5]*pow(x_q,1)+Eqm_Poly[6]*pow(x_q,0))
    F[0] = y_q - eqm_y
    return F
x_qGuess = [0]
x_q = opt.fsolve(q_Eqm,x_qGuess)
print(x,Eqm_Poly,x_q,q_int)
plt.plot(x[0,0:N_Size],x[1,0:N_Size],'k-',linewidth=1)
plt.plot([xf,xf],[0,xf],'b-',linewidth=1)
plt.plot([xf,x_q],[xf,(q_Poly[0]*x_q + q_Poly[1])],'r-',linewidth=1)
plt.legend(['Eqm','Feed'])
plt.xlabel('xA')
plt.ylabel('yA')
plt.xlim([0.00, 1])
plt.ylim([0.00, 1])
plt.savefig('x.png')
plt.savefig('x.eps')
plt.show()
plt.plot(x[0,0:N_Size],x[2,0:N_Size],'r--',linewidth=3)
plt.plot(x[1,0:N_Size],x[2,0:N_Size],'b--',linewidth=3)
plt.legend(['xA','yA'])
plt.xlabel('Mol Frac')
plt.ylabel('Temp degC')
plt.xlim([0, 1])
plt.savefig('Txy.png')
plt.savefig('Txy.eps')
plt.show()
The answer turns out to be relatively simple:
#F = np.empty((1)) # remove this
def q_Eqm(x_q):
    y_q = q_Poly[0]*x_q + q_Poly[1]
    eqm_y = (Eqm_Poly[0]*pow(x_q,6)+Eqm_Poly[1]*pow(x_q,5)+Eqm_Poly[2]*pow(x_q,4)+Eqm_Poly[3]*pow(x_q,3)+Eqm_Poly[4]*pow(x_q,2)+Eqm_Poly[5]*pow(x_q,1)+Eqm_Poly[6]*pow(x_q,0))
    return y_q - eqm_y
The original code defines a global F, which is modified in the function and then returned. So on each iteration the function returns different values, but they are always the same object. This seems to confuse fsolve (I guess it internally stores references to the results rather than the values). Removing this F and simply returning the result of the subtraction resolves the problem.
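A minimal standalone demonstration of the aliasing (independent of the original equations): every call returns the same array object, so an earlier result silently changes the next time F is written to.

import numpy as np

F = np.empty(1)

def f(x):
    F[0] = x - 3.0
    return F           # same object returned on every call

r1 = f(1.0)            # r1 is F, currently holding -2.0
r2 = f(5.0)            # F is overwritten; r1 now also reads 2.0
print(r1 is r2, r1[0], r2[0])  # True 2.0 2.0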
