I want to use theano.scan for a tensor3 calculation in my DNN. Think of this tensor3 as a 3D matrix: I want each layer of the matrix to go through the T.nnet.softmax(T.dot(v, W) + b) calculation, where W and b should stay fixed so I can update them later. The following is my code:
XS = T.tensor3("XS")
W = T.matrix(name="W")
b = T.vector(name="b")
results, updates = theano.scan(lambda XS: T.nnet.softmax(T.dot(XS, W) + b),
                               sequences=[XS],
                               outputs_info=None)
result = results
Mutiply = theano.function(inputs=[XS, W, b], outputs=[result])
#initialization of output of layer2
w_o = init_weights((1089, 10))
b_o = init_weights((10,))
myFile_data = h5py.File('/Users/XIN/Masterthesis/Torch_Script/mnist-cluttered/train_data.h5', 'r')
myFile_label= h5py.File('/Users/XIN/Masterthesis/Torch_Script/mnist-cluttered/train_target.h5', 'r')
data = myFile_data['data'][...]
label = myFile_label['target'][...]
data = data.reshape(100000, 10000)
trX = data
trY = label
X_s = downsampling(trX)
X_nine = load_data_train(trX, trX.shape[0], 9)
X_nine = X_nine.transpose((2, 0, 1))
p_x_nine = Mutiply(X_nine, w_o, b_o)[0]
But running it produces this error:
runfile('/Users/XIN/Masterthesis/keras/thean_scan_test.py', wdir='/Users/XIN/Masterthesis/keras')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/XIN/anaconda/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 699, in runfile
execfile(filename, namespace)
File "/Users/XIN/anaconda/lib/python2.7/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 81, in execfile
builtins.execfile(filename, *where)
File "/Users/XIN/Masterthesis/keras/thean_scan_test.py", line 115, in <module>
p_x_nine = Mutiply(X_nine, w_o, b_o)[0]
File "/Users/XIN/anaconda/lib/python2.7/site-packages/theano/compile/function_module.py", line 786, in __call__
allow_downcast=s.allow_downcast)
File "/Users/XIN/anaconda/lib/python2.7/site-packages/theano/tensor/type.py", line 86, in filter
'Expected an array-like object, but found a Variable: '
TypeError: ('Bad input argument to theano function with name "/Users/XIN/Masterthesis/keras/thean_scan_test.py:92" at index 1(0-based)', 'Expected an array-like object, but found a Variable: maybe you are trying to call a function on a (possibly shared) variable instead of a numeric array?')
I checked: X_nine is a NumPy array, but w_o and b_o are Theano (shared) variables.
So what should I do to modify this code?
I got the solution from my mentor; the code looks like the following:
XS = T.tensor3("XS")
w_o = init_weights((1089, 10))
b_o = init_weights((10,))
results, updates = theano.scan(lambda XS: T.nnet.softmax(T.dot(XS, w_o) + b_o),
                               sequences=[XS],
                               outputs_info=None)
result = results
Mutiply = theano.function(inputs=[XS], outputs=[result])
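This works because init_weights presumably returns Theano shared variables, which carry their own storage and get baked into the graph; only the NumPy array XS needs an input slot, so the call becomes Mutiply(X_nine). For what it's worth, the scan can likely be avoided entirely by flattening the layers, applying the row-wise softmax once, and reshaping back; a minimal sketch, assuming this per-row softmax is exactly what the scan computes for each layer:
XS = T.tensor3("XS")
# Collapse (layers, rows, 1089) into (layers*rows, 1089): T.nnet.softmax
# expects a matrix and normalizes each row independently.
flat = XS.reshape((XS.shape[0] * XS.shape[1], XS.shape[2]))
probs = T.nnet.softmax(T.dot(flat, w_o) + b_o)
# Restore the layer structure, with one softmax output row per input row.
result = probs.reshape((XS.shape[0], XS.shape[1], w_o.shape[1]))
Mutiply = theano.function(inputs=[XS], outputs=[result])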
When I tried to use word vectors built from Chinese text as features for sklearn, an error occurred.
The shape of x_train is (747,), each word vector has shape (1, 100), and the vectors' dtype is float64.
I guessed the data types might differ somewhere, but when I traversed all the data everything looked fine.
Here is the code:
import pandas as pd
from sklearn.model_selection import train_test_split,GridSearchCV
import SZ_function as sz
import gensim
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
def remove_stop_words(text):
    stop_words = sz.get_step_words('notebook/HIT.txt')
    text = text.split()
    word_list = ''
    for word in text:
        if word not in stop_words:
            word_list += word
            word_list += ' '
    return word_list
def pre_process(path):
    data = pd.read_excel(path)
    data['text'] = data['text'].apply(sz.remove_number_en)
    data['text'] = data['text'].apply(sz.cut_words)
    data['text'] = data['text'].apply(remove_stop_words)
    data = data.replace(to_replace='', value='None')
    data = data.replace(to_replace='None', value=np.nan).dropna()
    return data
def create_corpus(data):
    text = data['text']
    return [sentences.split() for sentences in text]

def word_vec(corpus):
    model = gensim.models.word2vec.Word2Vec(corpus)
    return model
def get_sent_vec(sent, model, size):
    vec = np.zeros(size).reshape((1, size))
    count = 0
    for word in sent[1:]:
        try:
            vec += model.wv[word].reshape((1, size))
            count += 1
        except:
            continue
    if count != 0:
        vec /= count
    return vec
if __name__ == '__main__':
    data = pre_process('datasets_demo.xlsx')
    corpus = create_corpus(data)
    model = word_vec(corpus)
    data['text'] = data['text'].apply(get_sent_vec, model=model, size=100)
    x_train, y_train, x_test, y_test = train_test_split(data['text'], data['label'])
    estimator = MultinomialNB()
    estimator.fit(x_train, y_train)
Here is the full traceback:
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\12996\AppData\Local\Temp\jieba.cache
Loading model cost 0.628 seconds.
Prefix dict has been built successfully.
TypeError: only size-1 arrays can be converted to Python scalars
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "E:\Anaconda3\envs\tensorflow-gpu\lib\site-packages\IPython\core\interactiveshell.py", line 3457, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-8366eff678ac>", line 1, in <module>
runfile('C:/Users/12996/Desktop/Tensorflow_/datasets_demo.py', wdir='C:/Users/12996/Desktop/Tensorflow_')
File "E:\pycharm\PyCharm 2022.1\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "E:\pycharm\PyCharm 2022.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/12996/Desktop/Tensorflow_/datasets_demo.py", line 66, in <module>
estimator.fit(x_train,y_train)
File "E:\Anaconda3\envs\tensorflow-gpu\lib\site-packages\sklearn\naive_bayes.py", line 663, in fit
X, y = self._check_X_y(X, y)
File "E:\Anaconda3\envs\tensorflow-gpu\lib\site-packages\sklearn\naive_bayes.py", line 523, in _check_X_y
return self._validate_data(X, y, accept_sparse="csr", reset=reset)
File "E:\Anaconda3\envs\tensorflow-gpu\lib\site-packages\sklearn\base.py", line 581, in _validate_data
X, y = check_X_y(X, y, **check_params)
File "E:\Anaconda3\envs\tensorflow-gpu\lib\site-packages\sklearn\utils\validation.py", line 976, in check_X_y
estimator=estimator,
File "E:\Anaconda3\envs\tensorflow-gpu\lib\site-packages\sklearn\utils\validation.py", line 746, in check_array
array = np.asarray(array, order=order, dtype=dtype)
File "E:\Anaconda3\envs\tensorflow-gpu\lib\site-packages\pandas\core\series.py", line 857, in __array__
return np.asarray(self._values, dtype)
ValueError: setting an array element with a sequence.
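For what it's worth, the traceback points at estimator.fit: after the apply, data['text'] is a pandas Series whose elements are (1, 100) arrays, and sklearn cannot coerce that object Series into a single 2D float matrix, hence "setting an array element with a sequence". A minimal sketch of one way around it, with labeled assumptions: np.vstack-ing the vectors and the corrected unpacking order of train_test_split are my additions, and GaussianNB stands in for MultinomialNB because the latter rejects the negative values word2vec features contain:
from sklearn.naive_bayes import GaussianNB

# Stack the per-sentence (1, 100) vectors into one (n_samples, 100) float matrix.
X = np.vstack(data['text'].values)
y = data['label'].values

# Note: train_test_split returns (X_train, X_test, y_train, y_test) in this order.
x_train, x_test, y_train, y_test = train_test_split(X, y)

estimator = GaussianNB()  # word2vec features can be negative, which MultinomialNB rejects
estimator.fit(x_train, y_train)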
I am using Python 3.7 and Numpy 1.18.5.
I tried to run the following code:
import numpy as np
def rot(direc, ang):
    c = np.cos(ang)
    s = np.sin(ang)
    if direc == 'x':
        R = np.block([[1, 0, 0], [0, c, -s], [0, s, c]])
    elif direc == 'y':
        R = np.block([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    elif direc == 'z':
        R = np.block([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    else:
        'Direction not recognized.'
    return R
def Hjac(fun, x, reg):
    class HJAC():
        def __init__(self):
            self.z = np.spacing(shape=(6, 1))
            self.A = np.spacing(shape=(6, 10))
    z = np.vectorize(fun)
    n = np.size(x)
    m = np.size(z)
    A = np.empty(shape=(m, n))
    h = n*np.spacing(1)
    ns = range(0, n)
    for k in ns:
        x1 = x
        x1[k] = x1[k] + h*1j
        A[:, k] = np.imag(z(x1, reg))/h
    HJacobi = HJAC()
    HJacobi.z = z
    HJacobi.A = A
    return HJacobi
def fun(x, reg):
    R = rot('x', reg[1])
    f = np.dot(R, x*reg)
    return f
x = np.random.rand(3,1)
reg = np.random.rand(3,1)
H = Hjac(fun,x,reg)
I am trying to obtain a jacobian of a function 'fun' with respect to a vector 'x'.
But I got the following error message:
runfile('C:/Users/2JY/Documents/Python/temp.py', wdir='C:/Users/2JY/Documents/Python')
C:/Users/2JY/Documents/Python/temp.py:32: ComplexWarning: Casting complex values to real discards the imaginary part
x1[k] = x1[k]+h*1j
Traceback (most recent call last):
File "<ipython-input-1-8907a67fa932>", line 1, in <module>
runfile('C:/Users/2JY/Documents/Python/temp.py', wdir='C:/Users/2JY/Documents/Python')
File "C:\Users\2JY\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)
File "C:\Users\2JY\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/2JY/Documents/Python/temp.py", line 50, in <module>
H = Hjac(fun,x,reg)
File "C:/Users/2JY/Documents/Python/temp.py", line 33, in Hjac
A[:,k] = np.imag(z(x1,reg))/h
File "C:\Users\2JY\Anaconda3\lib\site-packages\numpy\lib\function_base.py", line 2091, in __call__
return self._vectorize_call(func=func, args=vargs)
File "C:\Users\2JY\Anaconda3\lib\site-packages\numpy\lib\function_base.py", line 2161, in _vectorize_call
ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
File "C:\Users\2JY\Anaconda3\lib\site-packages\numpy\lib\function_base.py", line 2121, in _get_ufunc_and_otypes
outputs = func(*inputs)
File "C:/Users/2JY/Documents/Python/temp.py", line 43, in fun
R = rot('x',reg[1])
IndexError: invalid index to scalar variable.
I changed def fun(x,reg) to:
def fun(x, reg):
    ang1 = reg[1]
    R = rot('x', ang1)
    f = np.dot(R, x*reg)
    return f
and I got the following error message:
File "C:/Users/2JY/Documents/Python/temp.py", line 43, in fun
ang1 = reg[1]
IndexError: invalid index to scalar variable.
I can't understand why this error happens.
I would really appreciate any answer that solves this problem.
Thank you.
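For what it's worth, np.vectorize is the likely culprit: it broadcasts fun over the scalar elements of its array arguments, so inside fun the argument reg arrives as a single float, and reg[1] raises "invalid index to scalar variable". A minimal sketch of the complex-step Jacobian without np.vectorize (the plain loop and the complex copy of x are my choices; names follow the question, and the complex copy also avoids the ComplexWarning in the traceback):
def Hjac(fun, x, reg):
    n = x.size
    m = fun(x, reg).size           # probe the output size with an ordinary call
    A = np.empty((m, n))
    h = n * np.spacing(1)          # small step, as in the question
    for k in range(n):
        x1 = x.astype(complex)     # fresh complex copy, so x itself is untouched
        x1[k] += h * 1j
        A[:, k] = np.imag(fun(x1, reg)).ravel() / h
    return A

H = Hjac(fun, x, reg)              # fun now receives whole arrays, so reg[1] works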
I have a problem with time series analysis. I have a dataset with 5 features. The following is a subset of my input dataset:
date,price,year,day,totaltx
1/1/2016 0:00,434.46,2016,1,126762
1/2/2016 0:00,433.59,2016,2,147449
1/3/2016 0:00,430.36,2016,3,148661
1/4/2016 0:00,433.49,2016,4,185279
1/5/2016 0:00,432.25,2016,5,178723
1/6/2016 0:00,429.46,2016,6,184207
My endogenous data is the price column and my exogenous data is the totaltx column.
This is the code I am running, which produces an error:
import statsmodels.api as sm
import pandas as pd
import numpy as np
from numpy.linalg import LinAlgError
def arima(filteredData, coinOutput, window, horizon, trainLength):
    start_index = 0
    end_index = 0
    inputNumber = filteredData.shape[0]
    predictions = np.array([], dtype=np.float32)
    prices = np.array([], dtype=np.float32)
    # sliding on time series data with 1 day step
    while end_index < inputNumber - 1:
        end_index = start_index + trainLength
        trainFeatures = filteredData[start_index:end_index]["totaltx"]
        trainOutput = coinOutput[start_index:end_index]["price"]
        arima = sm.tsa.statespace.SARIMAX(endog=trainOutput.values, exog=trainFeatures.values, order=(window, 0, 0))
        arima_fit = arima.fit(disp=0)
        testdata = filteredData[end_index:end_index+1]["totaltx"]
        total_sample = end_index - start_index
        predicted = arima_fit.predict(start=total_sample, end=total_sample, exog=np.array(testdata.values).reshape(-1, 1))
        price = coinOutput[end_index:end_index + 1]["price"].values
        predictions = np.append(predictions, predicted)
        prices = np.append(prices, price)
        start_index = start_index + 1
    return predictions, prices
def processCoins(bitcoinPrice, window, horizon):
    output = bitcoinPrice[horizon:][["date", "day", "year", "price"]]
    return output

trainLength = 100
for window in [3, 5]:
    for horizon in [1, 2, 5, 7, 10]:
        bitcoinPrice = pd.read_csv("..\\prices.csv", sep=",")
        coinOutput = processCoins(bitcoinPrice, window, horizon)
        predictions, prices = arima(bitcoinPrice, coinOutput, window, horizon, trainLength)
In this code I am using a rolling-window regression technique: I train the ARIMA on start_index:end_index and predict the test data at end_index:end_index+1.
This is the error thrown by my code:
Traceback (most recent call last):
File "C:/PycharmProjects/coinLogPrediction/src/arima.py", line 115, in <module>
predictions, prices = arima(filteredBitcoinPrice, coinOutput, window, horizon, trainLength, outputFile)
File "C:/PycharmProjects/coinLogPrediction/src/arima.py", line 64, in arima
arima_fit = arima.fit(disp=0)
File "C:\AppData\Local\Continuum\Anaconda3\lib\site-packages\statsmodels\tsa\statespace\mlemodel.py", line 469, in fit
skip_hessian=True, **kwargs)
File "C:\AppData\Local\Continuum\Anaconda3\lib\site-packages\statsmodels\base\model.py", line 466, in fit
full_output=full_output)
File "C:\AppData\Local\Continuum\Anaconda3\lib\site-packages\statsmodels\base\optimizer.py", line 191, in _fit
hess=hessian)
File "C:\AppData\Local\Continuum\Anaconda3\lib\site-packages\statsmodels\base\optimizer.py", line 410, in _fit_lbfgs
**extra_kwargs)
File "C:\AppData\Local\Continuum\Anaconda3\lib\site-packages\scipy\optimize\lbfgsb.py", line 193, in fmin_l_bfgs_b
**opts)
File "C:\AppData\Local\Continuum\Anaconda3\lib\site-packages\scipy\optimize\lbfgsb.py", line 328, in _minimize_lbfgsb
f, g = func_and_grad(x)
File "C:\AppData\Local\Continuum\Anaconda3\lib\site-packages\scipy\optimize\lbfgsb.py", line 273, in func_and_grad
f = fun(x, *args)
File "C:\AppData\Local\Continuum\Anaconda3\lib\site-packages\scipy\optimize\optimize.py", line 292, in function_wrapper
return function(*(wrapper_args + args))
File "C:\AppData\Local\Continuum\Anaconda3\lib\site-packages\statsmodels\base\model.py", line 440, in f
return -self.loglike(params, *args) / nobs
File "C:\AppData\Local\Continuum\Anaconda3\lib\site-packages\statsmodels\tsa\statespace\mlemodel.py", line 646, in loglike
loglike = self.ssm.loglike(complex_step=complex_step, **kwargs)
File "C:\AppData\Local\Continuum\Anaconda3\lib\site-packages\statsmodels\tsa\statespace\kalman_filter.py", line 825, in loglike
kfilter = self._filter(**kwargs)
File "C:\AppData\Local\Continuum\Anaconda3\lib\site-packages\statsmodels\tsa\statespace\kalman_filter.py", line 747, in _filter
self._initialize_state(prefix=prefix, complex_step=complex_step)
File "C:\AppData\Local\Continuum\Anaconda3\lib\site-packages\statsmodels\tsa\statespace\representation.py", line 723, in _initialize_state
self._statespaces[prefix].initialize_stationary(complex_step)
File "_representation.pyx", line 1351, in statsmodels.tsa.statespace._representation.dStatespace.initialize_stationary
File "_tools.pyx", line 1151, in statsmodels.tsa.statespace._tools._dsolve_discrete_lyapunov
numpy.linalg.linalg.LinAlgError: LU decomposition error.
This looks like it might be a bug. In the meantime, you may be able to fix this by using a different initialization, like so:
arima = sm.tsa.statespace.SARIMAX(
    endog=trainOutput.values, exog=trainFeatures.values, order=(window, 0, 0),
    initialization='approximate_diffuse')
If you get a chance, please file a bug report at https://github.com/statsmodels/statsmodels/issues/new!
I had the same error.
Erroneous code:
mod = sm.tsa.SARIMAX(y, order=(0, 1, 0), seasonal_order=(1, 0, 0, 12))
res = mod.fit()
This gave me the error:
LinAlgError: Schur decomposition solver error
I was able to solve this error by passing the argument enforce_stationarity=False:
mod = sm.tsa.SARIMAX(y, order=(0, 1, 0), seasonal_order=(1, 0, 0, 12), enforce_stationarity=False)
res = mod.fit()
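Both workarounds appear to dodge the same step: with the default initialization, statsmodels solves a discrete Lyapunov equation for the stationary state covariance (the _dsolve_discrete_lyapunov frame in the traceback), and that solve is what fails when a window's AR fit wanders into a (near-)nonstationary region. Dropped into the question's arima() function, the first workaround would look like this sketch, keeping the question's variable names:
arima = sm.tsa.statespace.SARIMAX(
    endog=trainOutput.values,
    exog=trainFeatures.values,
    order=(window, 0, 0),
    initialization='approximate_diffuse')  # skip the stationary-covariance solve
arima_fit = arima.fit(disp=0)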
I am trying to make a 3D line chart with matplotlib in Python. The data set is taken from a measuring station with different sensors. I now have the code pretty much as I want it, but every time I try to run it, it shows this problem:
TypeError: Cannot cast array data from dtype('float64') to dtype('S32') according to the rule 'safe'
My code so far:
import csv
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
depth = {
    1: 11, 2: 35, 3: -1, 4: 11,  # position of graphs in cm
}
class SetXYZ:
    def __init__(self, datalist, NameNum):  # datalist: list with data, NameNum: number of variable
        self.NameNum = NameNum
        self.datalist = datalist
        self.Name = datalist[0][NameNum]
        self.length = datalist

    def X(self):  # Creates list with X variables
        Xlist = []
        for element in self.datalist:
            Xlist.append(element[0])
        Xlist.pop(0)
        return Xlist

    def Y(self):  # Creates list with Y variables
        Ylist = []
        for element in self.datalist:
            Ylist.append(element[self.NameNum])
        Ylist.pop(0)
        return Ylist

    def Z(self):  # list with Z variables
        Zlist = []
        for element in datalist:  # Z is the same for every point on one graph
            Zlist.append(depth[self.NameNum-1])
        Zlist.pop(0)
        return Zlist

def csv_to_list(filename):  # returns the file as a list
    with open(filename, 'rb') as data:
        data = csv.reader(data, delimiter=';')
        datalist = list(data)
    return datalist
filename = " " #Filename
datalist = csv_to_list(filename)
Graph1 = SetXYZ(datalist, 1) #creates the graphs
Graph2 = SetXYZ(datalist, 2)
#plots a graph, more or less to test
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(Graph1.X(), Graph1.Y(), Graph1.Z())
plt.show()
A file looks like this:
Time; sensor 1; sensor 2; sensor 3; sensor4
41940.6791666667;16;19.96;4.1;11.52
41940.67986;15.9;20.51;4.07;11.4
41940.67986;15.9;20.53;4.07;11.41
The full error looks like this:
Traceback (most recent call last):
File "C:\Anaconda\lib\site-packages\matplotlib\backends\backend_qt5.py", line 338, in resizeEvent
self.draw()
File "C:\Anaconda\lib\site-packages\matplotlib\backends\backend_qt5agg.py", line 148, in draw
FigureCanvasAgg.draw(self)
File "C:\Anaconda\lib\site-packages\matplotlib\backends\backend_agg.py", line 461, in draw
self.figure.draw(self.renderer)
File "C:\Anaconda\lib\site-packages\matplotlib\artist.py", line 59, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "C:\Anaconda\lib\site-packages\matplotlib\figure.py", line 1079, in draw
func(*args)
File "C:\Anaconda\lib\site-packages\mpl_toolkits\mplot3d\axes3d.py", line 254, in draw
for col in self.collections]
File "C:\Anaconda\lib\site-packages\mpl_toolkits\mplot3d\art3d.py", line 413, in do_3d_projection
vxs, vys, vzs, vis = proj3d.proj_transform_clip(xs, ys, zs, renderer.M)
File "C:\Anaconda\lib\site-packages\mpl_toolkits\mplot3d\proj3d.py", line 208, in proj_transform_clip
return proj_transform_vec_clip(vec, M)
File "C:\Anaconda\lib\site-packages\mpl_toolkits\mplot3d\proj3d.py", line 165, in proj_transform_vec_clip
vecw = np.dot(M, vec)
TypeError: Cannot cast array data from dtype('float64') to dtype('S32') according to the rule 'safe'
When it is finished there can be up to 30 sensors and several thousand time stamps.
Is there a workaround for this, or what am I doing wrong?
Thanks for your help!
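For what it's worth, csv.reader yields every field as a string, so Graph1.X() and Graph1.Y() hand lists of strings to ax.scatter, and the projection's np.dot then fails trying to mix float64 with string (dtype('S32')) data. A minimal sketch of one fix is to convert while building the lists; the float() casts and the [1:] header skip are my additions, the rest follows the question's class:
    def X(self):  # X values as floats rather than CSV strings
        return [float(element[0]) for element in self.datalist[1:]]

    def Y(self):  # Y values as floats
        return [float(element[self.NameNum]) for element in self.datalist[1:]]
With several thousand rows and 30 sensors this stays fast as well, since the conversion is a single pass per axis.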
I am trying to fit a predefined 2d gaussian function to some observed data with pymc. I keep running into errors and the last one I got was ValueError: setting an array element with a sequence. I understand what the error means, but I am not sure where the error is occurring in the code. My naive guess would be the random variables are being set to some array elements. Any suggestions would be much appreciated. Here is my code so far:
import pymc as mc
import numpy as np
import pyfits as pf
arr = pf.getdata('img.fits')
x=y=np.arange(0,71)
xx,yy=np.meshgrid(x,y)
err_map = pf.getdata('imgwht.fits')
def model((x, y), arr):
    amp = mc.Uniform('amp', lower=-1, upper=1, doc='Amplitude')
    x0 = mc.Uniform('x0', lower=21, upper=51, doc='xo')
    y0 = mc.Uniform('y0', lower=21, upper=51, doc='yo')
    sigx = mc.Uniform('sigx', lower=0.1, upper=10, doc='Sigma in X')
    sigy = mc.Uniform('sigy', lower=0.1, upper=10, doc='Sigma in Y')
    thta = mc.Uniform('theta', lower=0, upper=2*np.pi, doc='Rotation')
    os = mc.Uniform('c', lower=-1, upper=1, doc='Vertical offset')

    @mc.deterministic(plot=False, trace=False)
    def gaussian((x, y)=(xx, yy), amplitude=amp, xo=x0, yo=y0, sigma_x=sigx, sigma_y=sigy, theta=thta, offset=os):
        xo = float(xo)
        yo = float(yo)
        a = (mc.cos(theta)**2)/(2*sigma_x**2) + (mc.sin(theta)**2)/(2*sigma_y**2)
        b = -(mc.sin(2*theta))/(4*sigma_x**2) + (mc.sin(2*theta))/(4*sigma_y**2)
        c = (mc.sin(theta)**2)/(2*sigma_x**2) + (mc.cos(theta)**2)/(2*sigma_y**2)
        gauss = offset + amplitude*mc.exp(-1*(a*((x-xo)**2) + 2*b*(x-xo)*(y-yo) + c*((y-yo)**2)))
        return gauss

    flux = mc.Normal('flux', mu=gaussian, tau=err_map, value=arr, observed=True, doc='Observed Flux')
    return locals()
mdl = mc.MCMC(model((xx,yy),arr))
mdl.sample(iter=1e5,burn=9e4)
Full traceback:
File "model.py", line 31, in <module>
mdl = mc.MCMC(model((xx,yy),arr))
File "model.py", line 29, in model
flux = mc.Normal('flux',mu=gaussian,tau=err_map,value=arr,observed=True,doc='Observed Flux')
File "/usr/lib64/python2.7/site-packages/pymc/distributions.py", line 318, in __init__
**arg_dict_out)
File "/usr/lib64/python2.7/site-packages/pymc/PyMCObjects.py", line 761, in __init__
verbose=verbose)
File "/usr/lib64/python2.7/site-packages/pymc/Node.py", line 219, in __init__
Node.__init__(self, doc, name, parents, cache_depth, verbose=verbose)
File "/usr/lib64/python2.7/site-packages/pymc/Node.py", line 129, in __init__
self.parents = parents
File "/usr/lib64/python2.7/site-packages/pymc/Node.py", line 152, in _set_parents
self.gen_lazy_function()
File "/usr/lib64/python2.7/site-packages/pymc/PyMCObjects.py", line 810, in gen_lazy_function
self._logp.force_compute()
File "LazyFunction.pyx", line 257, in pymc.LazyFunction.LazyFunction.force_compute (pymc/LazyFunction.c:2409)
File "/usr/lib64/python2.7/site-packages/pymc/distributions.py", line 2977, in wrapper
return f(value, **kwds)
File "/usr/lib64/python2.7/site-packages/pymc/distributions.py", line 2168, in normal_like
return flib.normal(x, mu, tau)
ValueError: setting an array element with a sequence.
I've run into an issue like this before, but never had a chance to track it down to its source. The problem line in your code is the one for the observed Stochastic:
flux = mc.Normal('flux',mu=gaussian,tau=err_map,value=arr,observed=True,doc='Observed Flux')
I know a work-around that you can use, which is to check whether the mu variable is a pymc.Node, and only compute the likelihood if it is not:
@mc.observed
def flux(mu=gaussian, tau=err_map, value=arr):
    if isinstance(mu, mc.Node):
        return 0
    else:
        return mc.normal_like(value, mu, tau)
I think it would be worth filing a bug report in the PyMC github issue tracker if you have time.
The @mc.deterministic decorator returns a deterministic variable. To get the value of the variable, use its value attribute:
flux = mc.Normal('flux', mu=gaussian.value, tau=err_map, value=arr, observed=True, doc='Observed Flux')
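As a quick sanity check before building the likelihood (my addition, relying only on PyMC2 Deterministics exposing their current value through the value attribute): the deterministic's value should already be a plain array with the same shape as arr; if it comes back as an object array or a nested sequence, that mismatch is exactly what "setting an array element with a sequence" is complaining about.
print(type(gaussian.value))      # expect numpy.ndarray, not a nested sequence
print(np.shape(gaussian.value))  # expect (71, 71), matching the image
print(np.shape(arr))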