Python linear mixed model predict

How can I use the predict method
after fitting a model (http://nbviewer.ipython.org/urls/umich.box.com/shared/static/6tfc1e0q6jincsv5pgfa.ipynb)?
With the simple dietox example I get an error.
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("dietox.csv")
model = sm.MixedLM.from_formula("Weight ~ Time", data, groups=data["Pig"])
result = model.fit()
print(result.summary())
# this and other attempts don't work
result.predict(data.ix[1])
NotImplementedError Traceback (most recent call last)
<ipython-input-7-ba818b886233> in <module>()
----> 1 result.predict(data.ix[1])
C:\Anaconda\lib\site-packages\statsmodels\base\model.pyc in predict(self, exog, transform, *args, **kwargs)
747 exog = np.atleast_2d(exog) # needed in count model shape[1]
748
--> 749 return self.model.predict(self.params, exog, *args, **kwargs)
750
751
C:\Anaconda\lib\site-packages\statsmodels\base\model.pyc in predict(self, params, exog, *args, **kwargs)
175 This is a placeholder intended to be overwritten by individual models.
176 """
--> 177 raise NotImplementedError
178
179
NotImplementedError:
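Newer statsmodels versions implement predict on MixedLM results (fixed effects only), so upgrading may be enough. As a workaround on older versions, you can compute the fixed-effects prediction by hand from result.fe_params; a minimal sketch, using patsy to rebuild the design matrix from the same formula:
import numpy as np
import patsy

# Rebuild the fixed-effects design matrix for the rows to predict
# (here the whole frame) from the right-hand side of the formula.
exog = patsy.dmatrix("Time", data, return_type="dataframe")

# Population-average (marginal) prediction: X @ beta_hat.
# Random effects are ignored here.
fe_pred = np.dot(exog, result.fe_params)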

Related

RuntimeError: Cannot find callable custom in hubconf

---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [1], in <cell line: 3>()
1 import torch
----> 3 model = torch.hub.load('C:/Users/user/Desktop/***/model/', 'custom', path='runs/train/***/weights/best.pt', force_reload = True, source='local')
4 # Images
5 imgs = ['/kaggle/input/***/images/image-1.png'] # batch of images
File C:\ProgramData\Anaconda3\lib\site-packages\torch\hub.py:404, in load(repo_or_dir, model, source, force_reload, verbose, skip_validation, *args, **kwargs)
401 if source == 'github':
402 repo_or_dir = _get_cache_or_reload(repo_or_dir, force_reload, verbose, skip_validation)
--> 404 model = _load_local(repo_or_dir, model, *args, **kwargs)
405 return model
File C:\ProgramData\Anaconda3\lib\site-packages\torch\hub.py:432, in _load_local(hubconf_dir, model, *args, **kwargs)
429 hubconf_path = os.path.join(hubconf_dir, MODULE_HUBCONF)
430 hub_module = _import_module(MODULE_HUBCONF, hubconf_path)
--> 432 entry = _load_entry_from_hubconf(hub_module, model)
433 model = entry(*args, **kwargs)
435 sys.path.remove(hubconf_dir)
File C:\ProgramData\Anaconda3\lib\site-packages\torch\hub.py:240, in _load_entry_from_hubconf(m, model)
237 func = _load_attr_from_module(m, model)
239 if func is None or not callable(func):
--> 240 raise RuntimeError('Cannot find callable {} in hubconf'.format(model))
242 return func
RuntimeError: Cannot find callable custom in hubconf
I use PyTorch in my project.
I want to load a saved model (stored on wandb) using torch.hub.load(), but I get the entry-point error above and it doesn't work.
I have read the related documents and discussions, but I couldn't find a solution.
Please help.
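The error means that the hubconf.py found in the directory you passed does not define a callable named custom. With source='local', torch.hub.load imports hubconf.py from the first argument, so that argument must point at a directory whose hubconf.py defines the requested entry point. The custom entry point is defined by the YOLOv5 repository's hubconf.py (the runs/train/.../weights/best.pt layout suggests YOLOv5 training), so a sketch, with placeholder paths, would be:
import torch

# Point the first argument at a local clone of the YOLOv5 repo (it
# contains the hubconf.py that defines 'custom'), and pass your trained
# weights via `path`. Both paths below are placeholders.
model = torch.hub.load(
    'path/to/yolov5',                        # local YOLOv5 clone with hubconf.py
    'custom',                                # entry point defined in that hubconf.py
    path='runs/train/exp/weights/best.pt',   # your trained weights
    source='local',
    force_reload=True,
)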

How to load multiple matrices into a Chainer model using the iterator interface?

I have a question regarding Chainer's iterator interface, and how it interfaces with a Trainer, Updater and Model.
My data are graphs, and hence are of varying matrix shapes. I have concatenated the features matrix into one large dense matrix, adjacency matrices into one large sparse COO matrix, and the summation operator into one large sparse COO matrix. Because this is done with molecular data, I have an atom graph and a bond graph per sample. Hence, the input data is a six-tuple, which for the purposes of deep learning, I consider to be one big giant data point for training. (Until I get this working with this giant matrix, I am not planning on doing train/test splits just yet, to keep my code simple.)
xs = (atom_Fs, atom_As, atom_Ss, bond_Fs, bond_As, bond_Ss)
ts = data['target'].values
dataset = [(xs, ts)]
My model forward pass is as follows:
# model boilerplate above this comment
def forward(self, data):
    atom_feats, atom_adjs, atom_sums, bond_feats, bond_adjs, bond_sums = data
    atom_feats = self.atom_mp1(atom_feats, atom_adjs)
    atom_feats = self.atom_mp2(atom_feats, atom_adjs)
    atom_feats = self.atom_gather(atom_feats, atom_sums)
    bond_feats = self.atom_mp1(bond_feats, bond_adjs)
    bond_feats = self.atom_mp2(bond_feats, bond_adjs)
    bond_feats = self.atom_gather(bond_feats, bond_sums)
    feats = F.hstack([atom_feats, bond_feats])
    feats = F.tanh(self.dense1(feats))
    feats = F.tanh(self.dense2(feats))
    feats = self.dense3(feats)
    return feats
I then pass everything into a Trainer:
from chainer import iterators, training
from chainer.optimizers import SGD, Adam
iterator = iterators.SerialIterator(dataset, batch_size=1)
optimizer = Adam()
optimizer.setup(mpnn)
updater = training.updaters.StandardUpdater(iterator, optimizer)
max_epoch = 50
trainer = training.Trainer(updater, (max_epoch, 'epoch'))
trainer.run()
However, when I run the trainer, I get the following error:
Exception in main training loop: forward() takes 2 positional arguments but 3 were given
Traceback (most recent call last):
File "/home/ericmjl/anaconda/envs/mpnn/lib/python3.7/site-packages/chainer/training/trainer.py", line 315, in run
update()
File "/home/ericmjl/anaconda/envs/mpnn/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py", line 165, in update
self.update_core()
File "/home/ericmjl/anaconda/envs/mpnn/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py", line 177, in update_core
optimizer.update(loss_func, *in_arrays)
File "/home/ericmjl/anaconda/envs/mpnn/lib/python3.7/site-packages/chainer/optimizer.py", line 680, in update
loss = lossfun(*args, **kwds)
File "/home/ericmjl/anaconda/envs/mpnn/lib/python3.7/site-packages/chainer/link.py", line 242, in __call__
out = forward(*args, **kwargs)
Will finalize trainer extensions and updater before reraising the exception.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-45-ea26cece43b3> in <module>
9 max_epoch = 50
10 trainer = training.Trainer(updater, (max_epoch, 'epoch'))
---> 11 trainer.run()
~/anaconda/envs/mpnn/lib/python3.7/site-packages/chainer/training/trainer.py in run(self, show_loop_exception_msg)
327 f.write('Will finalize trainer extensions and updater before '
328 'reraising the exception.\n')
--> 329 six.reraise(*sys.exc_info())
330 finally:
331 for _, entry in extensions:
~/anaconda/envs/mpnn/lib/python3.7/site-packages/six.py in reraise(tp, value, tb)
691 if value.__traceback__ is not tb:
692 raise value.with_traceback(tb)
--> 693 raise value
694 finally:
695 value = None
~/anaconda/envs/mpnn/lib/python3.7/site-packages/chainer/training/trainer.py in run(self, show_loop_exception_msg)
313 self.observation = {}
314 with reporter.scope(self.observation):
--> 315 update()
316 for name, entry in extensions:
317 if entry.trigger(self):
~/anaconda/envs/mpnn/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py in update(self)
163
164 """
--> 165 self.update_core()
166 self.iteration += 1
167
~/anaconda/envs/mpnn/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py in update_core(self)
175
176 if isinstance(in_arrays, tuple):
--> 177 optimizer.update(loss_func, *in_arrays)
178 elif isinstance(in_arrays, dict):
179 optimizer.update(loss_func, **in_arrays)
~/anaconda/envs/mpnn/lib/python3.7/site-packages/chainer/optimizer.py in update(self, lossfun, *args, **kwds)
678 if lossfun is not None:
679 use_cleargrads = getattr(self, '_use_cleargrads', True)
--> 680 loss = lossfun(*args, **kwds)
681 if use_cleargrads:
682 self.target.cleargrads()
~/anaconda/envs/mpnn/lib/python3.7/site-packages/chainer/link.py in __call__(self, *args, **kwargs)
240 if forward is None:
241 forward = self.forward
--> 242 out = forward(*args, **kwargs)
243
244 # Call forward_postprocess hook
TypeError: forward() takes 2 positional arguments but 3 were given
This baffles me, because I have set up the dataset in the same way as the MNIST example, in which the input data are paired with the output data in a tuple. Because of the layers of abstraction in Chainer, I'm not quite sure how to debug this issue. Does anybody have any insight into this?
Are you using an mpnn model that takes only xs (or data) and outputs feats?
I think the problem is in the model, not the iterator or the dataset.
You need to prepare a model that takes both xs and ts as input arguments and returns the loss. For example:
import chainer
import chainer.functions as F

class GraphNodeClassifier(chainer.Chain):
    def __init__(self, mpnn):
        super(GraphNodeClassifier, self).__init__()
        with self.init_scope():
            self.mpnn = mpnn

    def forward(self, xs, ts):
        feat = self.mpnn(xs)
        # calculate the loss between `feat` and `ts` here, e.g. for regression:
        loss = F.mean_squared_error(feat, ts)
        return loss
and use this GraphNodeClassifier as the argument to the optimizer's setup method.
The MNIST example above works because it wraps the MLP model (which takes only x) in Chainer's built-in L.Classifier class, which accepts both x and t and computes the classification loss.
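For completeness, a minimal usage sketch with the trainer setup from the question (mpnn and dataset as defined above):
from chainer import iterators, training
from chainer.optimizers import Adam

model = GraphNodeClassifier(mpnn)  # forward(xs, ts) now returns the loss
iterator = iterators.SerialIterator(dataset, batch_size=1)
optimizer = Adam()
optimizer.setup(model)  # optimize the wrapper, not the bare mpnn

updater = training.updaters.StandardUpdater(iterator, optimizer)
trainer = training.Trainer(updater, (50, 'epoch'))
trainer.run()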

Keras inheriting built in layers?

I am new to Keras.
Keras's docs show how to make a custom layer where you have full control over the trainable weights.
My question is: how can one simply extend an existing layer?
For example, the BatchNormalization layer does not have an activation argument, although in practice one often adds an activation function right after batch normalization.
This attempt does not work:
class BatchNormalizationActivation(keras.layers.BatchNormalization):

    def __init__(self, bn_params={}, activation=keras.activations.relu, act_params={}):
        super(BatchNormalizationActivation, self).__init__(**bn_params)
        self.act = activation

    def call(x):
        x = super(BatchNormalizationActivation, self).call(x)
        return self.act(x, **act_params)
BatchNormalizationActivation()
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-182-6e4f0495a112> in <module>()
----> 1 BatchNormalizationActivation()
<ipython-input-181-2d5c8337234a> in __init__(self, bn_params, activation, act_params)
3
4 def __init__(self, bn_params={}, activation=keras.activations.relu, act_params={}):
----> 5 super(BatchNormalizationActivation, self).__init__(**bn_params)
6 self.act = activation
7
/usr/local/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/layers/normalization.py in __init__(self, axis, momentum, epsilon, center, scale, beta_initializer, gamma_initializer, moving_mean_initializer, moving_variance_initializer, beta_regularizer, gamma_regularizer, beta_constraint, gamma_constraint, **kwargs)
105 beta_constraint=constraints.get(beta_constraint),
106 gamma_constraint=constraints.get(gamma_constraint),
--> 107 **kwargs
108 )
109
/usr/local/lib/python3.6/site-packages/tensorflow/python/layers/normalization.py in __init__(self, axis, momentum, epsilon, center, scale, beta_initializer, gamma_initializer, moving_mean_initializer, moving_variance_initializer, beta_regularizer, gamma_regularizer, beta_constraint, gamma_constraint, renorm, renorm_clipping, renorm_momentum, fused, trainable, virtual_batch_size, adjustment, name, **kwargs)
144 **kwargs):
145 super(BatchNormalization, self).__init__(
--> 146 name=name, trainable=trainable, **kwargs)
147 if isinstance(axis, list):
148 self.axis = axis[:]
/usr/local/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/engine/base_layer.py in __init__(self, **kwargs)
147 super(Layer, self).__init__(
148 name=name, dtype=dtype, trainable=trainable,
--> 149 activity_regularizer=kwargs.get('activity_regularizer'))
150 self._uses_inputs_arg = True
151
/usr/local/lib/python3.6/site-packages/tensorflow/python/layers/base.py in __init__(self, trainable, name, dtype, activity_regularizer, **kwargs)
130 self._graph = None # Will be set at build time.
131 self._dtype = None if dtype is None else dtypes.as_dtype(dtype).name
--> 132 self._call_fn_args = estimator_util.fn_args(self.call)
133 self._compute_previous_mask = ('mask' in self._call_fn_args or
134 hasattr(self, 'compute_mask'))
/usr/local/lib/python3.6/site-packages/tensorflow/python/estimator/util.py in fn_args(fn)
60 args = tf_inspect.getfullargspec(fn).args
61 if _is_bounded_method(fn):
---> 62 args.remove('self')
63 return tuple(args)
64
ValueError: list.remove(x): x not in list
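The last traceback frame points at the actual problem: call is defined as call(x), without self, so when TensorFlow inspects the bound method's arguments it tries to remove 'self' from ['x'] and fails. A sketch of a corrected subclass, under the same Keras version as in the question; act_params is also stored on the instance, since the original call referenced a bare act_params name:
class BatchNormalizationActivation(keras.layers.BatchNormalization):

    def __init__(self, bn_params=None, activation=keras.activations.relu, act_params=None):
        super(BatchNormalizationActivation, self).__init__(**(bn_params or {}))
        self.act = activation
        self.act_params = act_params or {}

    def call(self, x, **kwargs):
        # Batch-normalize first, then apply the activation.
        x = super(BatchNormalizationActivation, self).call(x, **kwargs)
        return self.act(x, **self.act_params)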

ValueError when running MaskedAutoregressiveFlow example

I am trying to run the example for MaskedAutoregressiveFlow at https://www.tensorflow.org/api_docs/python/tf/contrib/distributions/bijectors/MaskedAutoregressiveFlow. It's a plain copy from the docs but I receive the following error. I've tried event_shape=[dims, 1] but that doesn't seem to help (different error). I'm not sure what to make of it.
Has anyone seen this as well?
import tensorflow as tf
import tensorflow.contrib.distributions as tfd
from tensorflow.contrib.distributions import bijectors as tfb

dims = 5

# A common choice for a normalizing flow is to use a Gaussian for the base
# distribution. (However, any continuous distribution would work.) E.g.,
maf = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.masked_autoregressive_default_template(
            hidden_layers=[512, 512])),
    event_shape=[dims])

x = maf.sample()  # Expensive; uses `tf.while_loop`, no Bijector caching.
maf.log_prob(x)   # Almost free; uses Bijector caching.
maf.log_prob(0.)  # Cheap; no `tf.while_loop` despite no Bijector caching.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-3b2fcb2af309> in <module>()
11
12
---> 13 x = maf.sample() # Expensive; uses `tf.while_loop`, no Bijector caching.
14 maf.log_prob(x) # Almost free; uses Bijector caching.
15 maf.log_prob(0.) # Cheap; no `tf.while_loop` despite no Bijector caching.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/distributions/distribution.py in sample(self, sample_shape, seed, name)
687 samples: a `Tensor` with prepended dimensions `sample_shape`.
688 """
--> 689 return self._call_sample_n(sample_shape, seed, name)
690
691 def _log_prob(self, value):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/distributions/transformed_distribution.py in _call_sample_n(self, sample_shape, seed, name, **kwargs)
411 # work, it is imperative that this is the last modification to the
412 # returned result.
--> 413 y = self.bijector.forward(x, **kwargs)
414 y = self._set_sample_static_shape(y, sample_shape)
415
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/distributions/bijector_impl.py in forward(self, x, name)
618 NotImplementedError: if `_forward` is not implemented.
619 """
--> 620 return self._call_forward(x, name)
621
622 def _inverse(self, y):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/distributions/bijector_impl.py in _call_forward(self, x, name, **kwargs)
599 if mapping.y is not None:
600 return mapping.y
--> 601 mapping = mapping.merge(y=self._forward(x, **kwargs))
602 self._cache(mapping)
603 return mapping.y
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/masked_autoregressive.py in _forward(self, x)
245 y0 = array_ops.zeros_like(x, name="y0")
246 # call the template once to ensure creation
--> 247 _ = self._shift_and_log_scale_fn(y0)
248 def _loop_body(index, y0):
249 """While-loop body for autoregression calculation."""
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/template.py in __call__(self, *args, **kwargs)
358 custom_getter=self._custom_getter) as vs:
359 self._variable_scope = vs
--> 360 result = self._call_func(args, kwargs)
361 return result
362
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/template.py in _call_func(self, args, kwargs)
300 trainable_at_start = len(
301 ops.get_collection(ops.GraphKeys.TRAINABLE_VARIABLES))
--> 302 result = self._func(*args, **kwargs)
303
304 if self._variables_created:
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/masked_autoregressive.py in _fn(x)
478 activation=activation,
479 *args,
--> 480 **kwargs)
481 x = masked_dense(
482 inputs=x,
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/masked_autoregressive.py in masked_dense(inputs, units, num_blocks, exclusive, kernel_initializer, reuse, name, *args, **kwargs)
386 *args,
387 **kwargs)
--> 388 return layer.apply(inputs)
389
390
/usr/local/lib/python3.6/dist-packages/tensorflow/python/layers/base.py in apply(self, inputs, *args, **kwargs)
807 Output tensor(s).
808 """
--> 809 return self.__call__(inputs, *args, **kwargs)
810
811 def _add_inbound_node(self,
/usr/local/lib/python3.6/dist-packages/tensorflow/python/layers/base.py in __call__(self, inputs, *args, **kwargs)
671
672 # Check input assumptions set before layer building, e.g. input rank.
--> 673 self._assert_input_compatibility(inputs)
674 if input_list and self._dtype is None:
675 try:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/layers/base.py in _assert_input_compatibility(self, inputs)
1195 ', found ndim=' + str(ndim) +
1196 '. Full shape received: ' +
-> 1197 str(x.get_shape().as_list()))
1198 # Check dtype.
1199 if spec.dtype is not None:
ValueError: Input 0 of layer dense_1 is incompatible with the layer: : expected min_ndim=2, found ndim=1. Full shape received: [5]
originally defined at:
File "<ipython-input-2-3b2fcb2af309>", line 9, in <module>
hidden_layers=[512, 512])),
File "/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/masked_autoregressive.py", line 499, in masked_autoregressive_default_template
"masked_autoregressive_default_template", _fn)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/template.py", line 152, in make_template
**kwargs)
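The final ValueError says the first masked dense layer received a rank-1 input of shape [5]: with no sample shape, maf.sample() yields a tensor whose only axis is the event axis. One workaround that may avoid the rank check (untested against the exact TF version in the question) is to draw samples with an explicit sample shape, so the network sees a rank-2 input:
# Sample with an explicit sample shape so the input to the masked dense
# layers has rank 2 ([sample_shape, event_shape]) instead of rank 1.
x = maf.sample([1])                 # shape [1, 5] rather than [5]
maf.log_prob(x)
maf.log_prob(tf.zeros([1, dims]))   # evaluate at an explicit rank-2 point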

PoissonZiGMLE : predict not implemented?

I successfully ran the zero-inflated Poisson model PoissonZiGMLE
(successfully = it seems to converge when I print the summary):
from statsmodels.miscmodels.count import PoissonZiGMLE

PZI = PoissonZiGMLE(df_zip['obs'], Xmat, offset=df_zip['offsetv'])
result = PZI.fit(maxiter=1000)
print(result.summary())
However, when I try:
result.predict(df_zip, offset=offsetv)
I get this error:
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 result.predict(df_zip, offset=offsetv)
/software/centos6/x86_64/canopy-1.7.4/Canopy_64bit/User/lib/python2.7/site-packages/statsmodels/base/model.py in predict(self, exog, transform, *args, **kwargs)
747 exog = np.atleast_2d(exog) # needed in count model shape[1]
748
--> 749 return self.model.predict(self.params, exog, *args, **kwargs)
750
751
/software/centos6/x86_64/canopy-1.7.4/Canopy_64bit/User/lib/python2.7/site-packages/statsmodels/base/model.py in predict(self, params, exog, *args, **kwargs)
175 This is a placeholder intended to be overwritten by individual models.
176 """
--> 177 raise NotImplementedError
178
179
NotImplementedError:
Before submitting an issue on GitHub, I was wondering if anyone has used PoissonZiGMLE and has any insight on how I can bypass the predict function if it is not implemented.
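Since the results object's predict just dispatches to the model's (unimplemented) predict, one way to bypass it is to compute the prediction manually from the fitted parameters. A rough sketch, assuming (as in statsmodels' PoissonZiGMLE source) that the leading elements of params are the Poisson regression coefficients and the last element is the zero-inflation parameter; verify the parameterization in PoissonZiGMLE.nloglikeobs before relying on this:
import numpy as np

# Poisson mean on the original scale, ignoring zero inflation:
# mu_i = exp(x_i' beta + offset_i)
beta = result.params[:-1]  # assumes the last element is the zero-inflation term
mu = np.exp(np.dot(Xmat, beta) + df_zip['offsetv'])

# If w is the zero-inflation probability implied by the last parameter,
# the marginal mean would be E[y] = (1 - w) * mu. Check the sign/link
# convention used in PoissonZiGMLE.nloglikeobs before using this:
w = 1.0 / (1.0 + np.exp(result.params[-1]))
ey = (1.0 - w) * mu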
