unexpected keyword argument within init - python

I'm trying to run a model in Python (not built by me) and I get an "unexpected keyword argument" error in __init__. It comes from running this code:
from seirsplus.models import *
import networkx
numNodes = 10000
baseGraph = networkx.barabasi_albert_graph(n=numNodes, m=9)
G_normal = custom_exponential_graph(baseGraph, scale=100)
# Social distancing interactions:
G_distancing = custom_exponential_graph(baseGraph, scale=10)
# Quarantine interactions:
G_quarantine = custom_exponential_graph(baseGraph, scale=5)
model = SEIRSNetworkModel(G=G_normal, beta=0.155, sigma=1/5.2, gamma=1/12.39, mu_I=0.0004, p=0.5,
                          Q=G_quarantine, beta_D=0.155, sigma_D=1/5.2, gamma_D=1/12.39, mu_D=0.0004,
                          theta_E=0.02, theta_I=0.02, phi_E=0.2, phi_I=0.2, psi_E=1.0, psi_I=1.0, q=0.5,
                          initI=10)
checkpoints = {'t': [20, 100], 'G': [G_distancing, G_normal], 'p': [0.1, 0.5], 'theta_E': [0.02, 0.02], 'theta_I': [0.02, 0.02], 'phi_E': [0.2, 0.2], 'phi_I': [0.2, 0.2]}
model.run(T=300, checkpoints=checkpoints)
model.figure_infections()
I've attached an image showing the highlighted part.
From what I understand, this has to do with the way the SEIRSNetworkModel class is constructed. I already forked the original repository:
https://github.com/ryansmcgee/seirsplus/wiki/SEIRSNetworkModel-class
but I don't know where to look for this constructor, or what to search for in order to fix the problem. This may be too simple, but I can't find my way.
I'd appreciate any help, as simple as possible please, since you can see I don't know how to navigate here.
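One quick way to narrow this down is to print the signature of the constructor your installed copy of the package actually exposes; any keyword in the call above that is not in that signature is what triggers the "unexpected keyword argument" error. A minimal sketch using only the standard library's inspect module:
import inspect
from seirsplus.models import SEIRSNetworkModel

# Show the parameters the installed SEIRSNetworkModel.__init__ really accepts;
# compare them against the keywords passed in the call above.
print(inspect.signature(SEIRSNetworkModel.__init__))
If one of the keywords used above is missing from that signature, the installed seirsplus is likely older than the version the wiki describes, and upgrading the package (or dropping the unsupported keywords) should resolve the error.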

Evaluating a formula with the variable changing each time

I am trying to evaluate an expression for 4 different values of one of my variables. I am trying to create a for loop using np.arange, since my variable is a float.
import numpy as np
for Mf in np.arange(0.8,0.01,1.5):
    Vinf=Mf*(gamma*R*tatm)**0.5
    print(Mf)
I want to evaluate the above expression for Mf = 0.8, 0.9, 1.2 and 1.5. I simply don't know how to do this, or whether using a for loop is even appropriate. Finally, I want to save the output Vinf in an array. How could I achieve all of this?
Edit:
OK, I got the above code working thanks to user gmds. I am now trying to use the list created for Mf_values in the expression for P0 in my code. I have tried it in the following way:
Mf_values=[0.8, 0.9, 1.2, 1.5]
Vinf_values=[Mf_value*(gamma*R*tatm)**0.5 for Mf_value in Mf_values]
print(Vinf_values)
P0=[(1+((gamma-1)/2)*(Mf_values**2)**(gamma/(gamma-1))]
print(P0)
T0=(1+((gamma-1)/2)*(Mf_values**2))*tatm
I want to use the 4 different Mf_values to solve the expressions for P0 and T0 and save the results in lists, in a similar fashion to Vinf_values. However, Python gives me the following error:
P0=[(1+((gamma-1)/2)*(Mf_values**2)**(gamma/(gamma-1))]
^
SyntaxError: invalid syntax
How do I solve this issue?
You can use a list comprehension:
Mf_values = [0.8, 0.9, 1.2, 1.5]
Vinf_values = [Mf_value * (gamma * R * tatm) ** 0.5 for Mf_value in Mf_values]
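The same pattern works for P0 and T0. The SyntaxError in your edit comes from an unbalanced parenthesis, and Mf_values**2 would fail anyway because ** cannot be applied to a whole list; square each element inside the comprehension instead. A sketch, with gamma, R and tatm set to placeholder values since the question doesn't give them:
gamma, R, tatm = 1.4, 287.0, 288.15  # placeholder values, not from the question

Mf_values = [0.8, 0.9, 1.2, 1.5]
Vinf_values = [Mf * (gamma * R * tatm) ** 0.5 for Mf in Mf_values]

# Parentheses balanced, exponent applied to each element:
P0 = [(1 + ((gamma - 1) / 2) * Mf ** 2) ** (gamma / (gamma - 1)) for Mf in Mf_values]
T0 = [(1 + ((gamma - 1) / 2) * Mf ** 2) * tatm for Mf in Mf_values]

print(Vinf_values)
print(P0)
print(T0)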
For this tiny example you don't need any numpy; you can do it in pure Python like this:
Vinfs = []
for Mf in [0.8, 0.9, 1.2, 1.5]:
    Vinf = Mf * (gamma * R * tatm) ** 0.5
    Vinfs.append(Vinf)
Vinfs = np.array(Vinfs)  # If you want `ndarray` as your output (needs import numpy as np)

Trying to do extremely basic (3 categorical variables) Bayesian inference with PYMC3 and NetworkX

I'm trying to understand this example of a Bayesian network. Figured I'd dumb it down even more such that it's only looking at three variables: D1, D2, and D3. Each is categorical, with their probability tables given at the top of the code below. I'd like to set D3 = 0 and then compute the posterior probabilities of D1 and D2, like a simpler version of what's done at the bottom of this page. I've tried to do this by playing with the code from the first source but have been unsuccessful and I don't understand the error messages.
Any assistance in this would be greatly appreciated - I've really been struggling to implement Bayesian inference. I've tried looking at the PYMC3 Categorical documentation but it's pretty bare-bones. And the example of inference I could find uses continuous variables and seems to be doing a different thing than what I'm trying to do. Or if it isn't, I'm not smart enough to make the connection and use whatever they're demonstrating to meet my needs.
I'm not sure if posting large sections of code is approved here, but I'm not sure how else to do this. Here is my code (a much shorter, simpler version of the code in the first source):
import networkx as nx
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pymc3 as pm
import theano
import theano.tensor as T
from theano.compile.ops import as_op
d1_prob = np.array([0.3,0.7]) # 2 choices
d2_prob = np.array([0.6,0.3,0.1]) # 3 choices
d3_prob = np.array([[[0.1, 0.9],   # (2x3)x2 choices
                     [0.3, 0.7],
                     [0.4, 0.6]],
                    [[0.6, 0.4],
                     [0.8, 0.2],
                     [0.9, 0.1]]])
BN = nx.DiGraph()
BN.add_node('D1', dtype='Discrete', prob=d1_prob)
BN.add_node('D2', dtype='Discrete', prob=d2_prob)
BN.add_node('D3', dtype='Discrete', prob = d3_prob, observe=np.array([0.]))
BN.add_edges_from([('D1', 'D3'), ('D2', 'D3')])
#print(BN.nodes(data=True))
#print(BN.pred['D3'])
def gpm(BN, node, num=0):
    return BN.node[BN.predecessors(node)[num]]['dist_obj']

with pm.Model() as mod2:
    BN.node['D1']['dist_obj'] = pm.Categorical('D1', p=BN.node['D1']['prob'])
    BN.node['D2']['dist_obj'] = pm.Categorical('D2', p=BN.node['D2']['prob'])
    BN.node['D3']['dist_obj'] = pm.Categorical('D3', p=BN.node['D3']['prob'][
        gpm(BN, 'D3', num=1),
        gpm(BN, 'D3', num=0)
    ], observed=BN.node['D3']['observe'])

with mod2:
    trace = pm.sample(10000)

pm.summary(trace, varnames=['D3'], start=1000)
pm.traceplot(trace[1000:], varnames=['D3'])
I can't help you with PyMC3, sorry. But maybe you just need the numbers.
Actually, I don't understand why you need an inference algorithm at all here.
The probability tables are fully specified and there is no missing data, so you can just apply Bayes' rule. Admittedly, I wouldn't want to do this with pencil and paper even for such a simple example, so I used SamIam, a Java-based GUI tool, to apply Bayes' rule for me.
When nothing is observed, the marginals look like this: [SamIam screenshot]
Interpreting your gpm() and observe() code, you observe D3 = 1. The CPT values then change to this: [SamIam screenshot]
(The stateX labels are arbitrary; SamIam just assigns default names. The row position in the CPT is what matters.)
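If you want the numbers without a GUI, the joint distribution here is small enough to enumerate directly. A minimal sketch in plain numpy, using the tables from the question and conditioning on D3 = 0 (the case the question asks about):
import numpy as np

# Probability tables copied from the question
d1_prob = np.array([0.3, 0.7])
d2_prob = np.array([0.6, 0.3, 0.1])
d3_prob = np.array([[[0.1, 0.9], [0.3, 0.7], [0.4, 0.6]],
                    [[0.6, 0.4], [0.8, 0.2], [0.9, 0.1]]])

# Joint P(D1=i, D2=j, D3=0) for every (i, j): axis 0 is D1, axis 1 is D2
joint = d1_prob[:, None] * d2_prob[None, :] * d3_prob[:, :, 0]

# Bayes' rule: normalize by P(D3=0), then marginalize out the other variable
posterior = joint / joint.sum()
print("P(D1 | D3=0):", posterior.sum(axis=1))
print("P(D2 | D3=0):", posterior.sum(axis=0))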

HMMLearn: Too Many Values to Unpack

I'm trying to use hmmlearn to get the most likely hidden state sequence from a Hidden Markov Model, given start probabilities, transition probabilities, and emission probabilities.
I have two hidden states and four possible emission values, so I'm doing this:
import numpy as np
from hmmlearn import hmm

num_states = 2
num_observations = 4
start_probs = np.array([0.2, 0.8])
trans_probs = np.array([[0.75, 0.25], [0.1, 0.9]])
emission_probs = np.array([[0.3, 0.2, 0.2, 0.3], [0.3, 0.3, 0.3, 0.1]])
model = hmm.MultinomialHMM(n_components=num_states)
model.startprob_ = start_probs
model.transmat_ = trans_probs
model.emissionprob_ = emission_probs
seq = np.array([[3, 3, 2, 2]]).T
model.fit(seq)
log_prob, state_seq = model.decode(seq)
My stack trace points to the decode call, which throws this error:
ValueError: too many values to unpack (expected 2)
I thought decode (looking at the docs) returns a log probability and the state sequence, so I'm confused.
Any idea?
Thanks!
The call model.fit(seq) requires seq to be a list of lists, which is how you correctly set it up.
However, model.decode(seq) requires seq to be only a list, not a list of lists. Thus,
model.fit([[3, 3, 2, 2]])
log_prob, state_seq = model.decode([3, 3, 2, 2])
should work without throwing an error.
See also here.
The error ValueError: too many values to unpack (expected 2) is thrown from a function called by a function called by a function... inside decode. So the error does not mean that decode returned the wrong number of objects; it comes from unpacking framelogprob.shape somewhere inside base.py. A more meaningful error message would make life easier here.
I had the same issue and it drove me crazy. Hope my post helps somebody.
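Since the two workarounds above hinge on how a particular release of hmmlearn expects its input to be shaped (my reading of the disagreement, not something the library documents), it is worth confirming which version is installed before choosing one:
import hmmlearn
print(hmmlearn.__version__)  # fit/decode input expectations have differed across releases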

Kolmogorov Smirnov Test in Spark (Python) not working?

I was doing a normality test in Python Spark MLlib and saw what I think is a bug.
Here is the setup: I have a dataset that is normalized (range -1 to 1).
When I do a histogram, I can clearly see that the data is NOT normal:
>>> prices_norm.histogram(10)
([-1.0, -0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8, 1.0],
[226, 269, 119, 95, 52, 26, 8, 2, 2, 5])
When I run the Kolmogorov-Smirnov test I get the following results:
>>> testResults = Statistics.kolmogorovSmirnovTest(prices_norm, "norm")
>>> print testResults
Kolmogorov-Smirnov test summary:
degrees of freedom = 0
statistic = 0.46231145770077375
pValue = 1.742039845709087E-11
Very strong presumption against null hypothesis: Sample follows theoretical distribution.
The Kolmogorov-Smirnov test defines the null hypothesis (H0) as: the data follows a specified distribution (http://www.itl.nist.gov/div898/handbook/eda/section3/eda35g.htm).
In this case the p-value is very low, so we should reject the null hypothesis. This makes sense, as the data is clearly not normal.
So why, then, does it say:
Sample follows theoretical distribution
Isn't this wrong? Shouldn't it say that the sample does NOT follow a theoretical distribution? Am I missing something?
This was driving me crazy, so I went to look at the source code directly:
git://git.apache.org/spark.git
spark/mllib/src/main/scala/org/apache/spark/mllib/stat/test/KolmogorovSmirnovTest.scala
The code is correct; the null hypothesis is set as:
object NullHypothesis extends Enumeration {
  type NullHypothesis = Value
  val OneSampleTwoSided = Value("Sample follows theoretical distribution")
}
The verbiage of the string message is just restating the null hypothesis:
Very strong presumption against null hypothesis: Sample follows theoretical distribution.
                                                 \_______________________________________/
                                                                   H0
Arguably the verbiage is confusing as it could be interpreted both ways. But it is indeed correct.
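For anyone who wants to reproduce the behaviour, here is a minimal sketch of the same call; it assumes an existing SparkContext sc, and the uniform data is illustrative rather than the asker's actual prices:
import numpy as np
from pyspark.mllib.stat import Statistics

# Clearly non-normal data on (-1, 1), similar in spirit to the question's histogram
data = sc.parallelize(np.random.uniform(-1.0, 1.0, 800).tolist())

result = Statistics.kolmogorovSmirnovTest(data, "norm")
print(result)  # a low p-value is a strong presumption AGAINST H0, despite the wording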

Matplotlib ``figure.titlesize`` rcParam

I'm trying to update my matplotlib rcParams. According to http://matplotlib.org/users/customizing.html, the figure.titlesize option should be present, but it is not. These are the figure-related items I can find in my rcParams:
u'figure.autolayout': False,
u'figure.dpi': 80.0,
u'figure.edgecolor': u'w',
u'figure.facecolor': u'0.75',
u'figure.figsize': [46.666666666666664, 35.0],
u'figure.frameon': True,
u'figure.max_open_warning': 20,
u'figure.subplot.bottom': 0.1,
u'figure.subplot.hspace': 0.2,
u'figure.subplot.left': 0.125,
u'figure.subplot.right': 0.9,
u'figure.subplot.top': 0.9,
u'figure.subplot.wspace': 0.2,
Has this feature been removed or is it somehow hidden?
I think this is a new option added between v1.4.3 and v1.5.x. For example, compare the code on GitHub for v1.4.3 with that for v1.5.x.
The documentation you linked to must be for v1.5.x, so maybe you could upgrade to v1.5.x if you need that option?
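A quick way to check is to print the installed version and list the figure-related rcParams it actually knows about; figure.titlesize should only show up on v1.5.x and later:
import matplotlib

print(matplotlib.__version__)

# List every figure-related rcParam supported by the installed version
for key in sorted(matplotlib.rcParams):
    if key.startswith('figure.'):
        print(key, '=', matplotlib.rcParams[key])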
