Reasoning for numpy random seed in this function?

I'm following a tutorial on the usage of Python in bioinformatics. In the tutorial a Mann-Whitney U test was performed via the function below.
numpy.random.seed was used in the first line after the imports but nowhere else. I was wondering what the purpose of this call is, since it seemingly doesn't affect the results.
def mannwhitney(descriptor, verbose=False):
    from numpy.random import seed
    from numpy.random import randn
    from scipy.stats import mannwhitneyu

    seed(1)

    selection = [descriptor, "Bioactivity_Class"]
    df = df_2class[selection]
    active = df[df.Bioactivity_Class == "active"]
    active = active[descriptor]

    selection = [descriptor, "Bioactivity_Class"]
    df = df_2class[selection]
    inactive = df[df.Bioactivity_Class == "inactive"]
    inactive = inactive[descriptor]

    stat, p = mannwhitneyu(active, inactive)

    # creating a result dataframe for easier interpretation
    alpha = 0.05
    if p > alpha:
        interpretation = "Same distribution (fail to reject H0)"
    else:
        interpretation = "Different distribution (reject H0)"

    results = pd.DataFrame({"Descriptor": descriptor, "Statistics": stat, "p": p,
                            "alpha": alpha, "Interpretation": interpretation},
                           index=[0])
    return results
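
For what it's worth, the seed (and the unused randn import) look like leftovers from a tutorial template: mannwhitneyu is a deterministic rank-based test, so for fixed inputs the seed should not change its output. A quick sketch with made-up numbers illustrates this:

import numpy as np
from scipy.stats import mannwhitneyu

# made-up example values, just to demonstrate determinism
active = np.array([1.2, 3.4, 2.2, 5.1])
inactive = np.array([0.9, 1.1, 2.0, 0.5])

np.random.seed(1)
print(mannwhitneyu(active, inactive))
np.random.seed(42)
print(mannwhitneyu(active, inactive))  # identical result: no randomness is consumed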

Related

How to save the set of dominated solutions while solving NSGA 2 in pymoo into a dataframe?

I am trying to solve a multiobjective optimization problem with 3 objectives and 2 decision variables using NSGA2. The pymoo code for the NSGA2 algorithm and the termination criterion is given below. My pop_size is 100 and n_offsprings is 100. The algorithm is iterated over 100 generations. I want to store all the decision-variable values considered in each generation, for all 100 generations, in a dataframe.
NSGA2 implementation in pymoo:

from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_sampling, get_crossover, get_mutation

algorithm = NSGA2(
    pop_size=20,
    n_offsprings=10,
    sampling=get_sampling("real_random"),
    crossover=get_crossover("real_sbx", prob=0.9, eta=15),
    mutation=get_mutation("real_pm", prob=0.01, eta=20),
    eliminate_duplicates=True
)

from pymoo.factory import get_termination
termination = get_termination("n_gen", 100)

from pymoo.optimize import minimize
res = minimize(MyProblem(),
               algorithm,
               termination,
               seed=1,
               save_history=True,
               verbose=True)
What I have tried (my reference: a Stack Overflow question):

import pandas as pd
df2 = pd.DataFrame(algorithm.pop)
df2.head(10)
The result from the above code is blank, and on running

print(df2)

I get

Empty DataFrame
Columns: []
Index: []
Glad you intend to use pymoo for your research. You have correctly enabled the save_history option, which means you can access the algorithm object from each generation.
To collect all solutions from the run, you can combine the offspring (algorithm.off) from each generation. Don't forget that Population objects contain Individual objects; with the get method you can extract X, F, or other values. See the code below.
import pandas as pd
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_sampling, get_crossover, get_mutation, ZDT1
from pymoo.factory import get_termination
from pymoo.model.population import Population
from pymoo.optimize import minimize
problem = ZDT1()

algorithm = NSGA2(
    pop_size=20,
    n_offsprings=10,
    sampling=get_sampling("real_random"),
    crossover=get_crossover("real_sbx", prob=0.9, eta=15),
    mutation=get_mutation("real_pm", prob=0.01, eta=20),
    eliminate_duplicates=True
)

termination = get_termination("n_gen", 10)

res = minimize(problem,
               algorithm,
               termination,
               seed=1,
               save_history=True,
               verbose=True)

all_pop = Population()
for algorithm in res.history:
    all_pop = Population.merge(all_pop, algorithm.off)

df = pd.DataFrame(all_pop.get("X"), columns=[f"X{i+1}" for i in range(problem.n_var)])
print(df)
Another way would be to use a callback and fill the data frame each generation, similar to what is shown here: https://pymoo.org/interface/callback.html
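
For reference, a minimal callback sketch along those lines, assuming the same pymoo.model API as above (the class name DesignLogger is just illustrative):

from pymoo.model.callback import Callback

class DesignLogger(Callback):
    # illustrative: records the offspring design variables each generation
    def __init__(self):
        super().__init__()
        self.data["X"] = []

    def notify(self, algorithm):
        if algorithm.off is not None:  # the very first generation may have no offspring yet
            self.data["X"].append(algorithm.off.get("X"))

Passing callback=DesignLogger() to minimize and stacking res.algorithm.callback.data["X"] with np.vstack should then give the rows for the data frame.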

Why does numpy.random.Generator.choice provide different (seeded) results with an explicitly given uniform distribution compared to the default uniform distribution?

Simple test code:

import numpy

pop = numpy.arange(20)
rng = numpy.random.default_rng(1)
rng.choice(pop, p=numpy.repeat(1/len(pop), len(pop)))  # yields 10
rng = numpy.random.default_rng(1)
rng.choice(pop)  # yields 9
The numpy documentation says:
"The probabilities associated with each entry in a. If not given the sample assumes a uniform distribution over all entries in a."
I don't know of any way to create a uniform distribution other than numpy.repeat(1/len(pop), len(pop)).
Is numpy using something else? Why?
If not, how does passing the distribution explicitly affect the seeded result?
Shouldn't the distribution and the seed be independent?
What am I missing here?
The distribution doesn't affect the seed. Details below:
I checked the source code: numpy/random/_generator.pyx#L669
If p is given, it will use rng.random to get a random value:

import numpy

pop = numpy.arange(20)
seed = 1
rng = numpy.random.default_rng(seed)

# rng.choice works like below
rand = rng.random()
p = numpy.repeat(1/len(pop), len(pop))
cdf = p.cumsum()
cdf /= cdf[-1]
uniform_samples = rand
idx = cdf.searchsorted(uniform_samples, side='right')
idx = numpy.array(idx, copy=False, dtype=numpy.int64)  # yields 10
print(idx)

# -----------------------
rng = numpy.random.default_rng(seed)
idx = rng.choice(pop, p=numpy.repeat(1/len(pop), len(pop)))  # same as above
print(idx)
If p is not given, it will use rng.integers to get a random value:

rng = numpy.random.default_rng(seed)
idx = rng.integers(0, pop.shape[0])  # yields 9
print(idx)

# -----------------------
rng = numpy.random.default_rng(seed)
idx = rng.choice(pop)  # same as above
print(idx)
You can play around with different seed values. I don't know exactly what happens inside rng.random and rng.integers, but you can see that they consume the underlying random stream differently. That's why you got different results.
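
To see the two code paths diverge directly, you can make each kind of draw from the same seed yourself:

import numpy

rng = numpy.random.default_rng(1)
print(rng.random())         # the draw used when p is given
rng = numpy.random.default_rng(1)
print(rng.integers(0, 20))  # the draw used when p is omitted
# same underlying bit stream, different transformations, different results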
A more idiomatic way of drawing uniform random numbers with numpy would be:

numpy.random.uniform(low=0.0, high=1.0, size=None)

or in your case

numpy.random.uniform(low=0.0, high=20.0, size=1)

Alternatively, you could simply do

rng = numpy.random.default_rng(1)
rng.uniform()*20
As for your question on why the two ways of calling rng.choice produce different outputs: my guess would be that they are executed slightly differently internally, so although you start from the same random state, by the time the actual draw happens you are at a different point in the random stream in the two calls, and therefore get different results.

How to calculate risk contribution of assets in Python

I'm trying to write a block of code that will allow me to identify the risk contribution of assets in a portfolio. The covariance matrix is a 6x6 pandas dataframe.
My code is as follows:
import numpy as np
import pandas as pd
weights = np.array([.1,.2,.05,.25,.1,.3])
data = pd.DataFrame(np.random.randn(1000,6),columns = 'a','b','c','d','e','f'])
covariance = data.cov()
portfolio_variance = (weights*covariance*weights.T)[0,0]
sigma = np.sqrt(portfolio_variance)
marginal_risk = covariance*weights.T
risk_contribution = np.multiply(marginal_risk, weights.T)/sigma
print(risk_contribution)
When I try to run the code I get a KeyError, and if I remove the [0,0] from portfolio_variance I get output that doesn't seem to make sense.
Can somebody point me to my error(s)?
Three problems with your code:
First, you need to open your list's square brackets on line 6:

data = pd.DataFrame(np.random.randn(1000,6), columns=['a','b','c','d','e','f'])

Second, you're using the two-dimensional indexing operator wrong: you can't say [0,0], you have to say [0][0].
And last, because you named the columns, you have to use them when indexing, so it's actually ['a'][0]:

portfolio_variance = (weights*covariance*weights.T)['a'][0]
Final working code:

import numpy as np
import pandas as pd

weights = np.array([.1, .2, .05, .25, .1, .3])
data = pd.DataFrame(np.random.randn(1000, 6), columns=['a', 'b', 'c', 'd', 'e', 'f'])
covariance = data.cov()
portfolio_variance = (weights*covariance*weights.T)['a'][0]
sigma = np.sqrt(portfolio_variance)
marginal_risk = covariance*weights.T
risk_contribution = np.multiply(marginal_risk, weights.T)/sigma
print(risk_contribution)
portfolio_variance = (weights*covariance*weights.T)

should instead use matrix multiplication:

portfolio_variance = weights @ covariance @ weights.T

This will give the portfolio variance, which should be a single number.
The same goes for the marginal risk; it should be

marginal_risk = covariance @ weights.T
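
Putting that together, a minimal sketch of the matrix-product version (same made-up random data as in the question):

import numpy as np
import pandas as pd

weights = np.array([.1, .2, .05, .25, .1, .3])
data = pd.DataFrame(np.random.randn(1000, 6), columns=['a', 'b', 'c', 'd', 'e', 'f'])
covariance = data.cov().to_numpy()

portfolio_variance = weights @ covariance @ weights  # a single number
sigma = np.sqrt(portfolio_variance)

marginal_risk = covariance @ weights                 # one value per asset
risk_contribution = weights * marginal_risk / sigma
print(risk_contribution)                             # contributions sum to sigma

(With 1-D arrays the .T is a no-op, so it is dropped here for clarity.)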

Single Component Metropolis-Hastings

So, let's say I have the following 2-dimensional target distribution that I would like to sample from (a mixture of bivariate normal distributions) -
import numba
import numpy as np
import scipy.stats as stats
import seaborn as sns
import pandas as pd
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
%matplotlib inline
def targ_dist(x):
    target = (stats.multivariate_normal.pdf(x, [0, 0], [[1, 0], [0, 1]])
              + stats.multivariate_normal.pdf(x, [-6, -6], [[1, 0.9], [0.9, 1]])
              + stats.multivariate_normal.pdf(x, [4, 4], [[1, -0.9], [-0.9, 1]])) / 3
    return target
and the following proposal distribution (a bivariate random walk) -
def T(x, y, sigma):
    return stats.multivariate_normal.pdf(y, x, [[sigma**2, 0], [0, sigma**2]])
The following is the Metropolis-Hastings code for updating the "entire" state in every iteration -

# Initialising
n_iter = 30000
# tuning parameter i.e. variance of proposal distribution
sigma = 2
# initial state
X = stats.uniform.rvs(loc=-5, scale=10, size=2, random_state=None)
# count number of acceptances
accept = 0
# store the samples
MHsamples = np.zeros((n_iter, 2))

# MH sampler
for t in range(n_iter):
    # proposals
    Y = X + stats.norm.rvs(0, sigma, 2)
    # accept or reject
    u = stats.uniform.rvs(loc=0, scale=1, size=1)
    # acceptance probability
    r = (targ_dist(Y)*T(Y, X, sigma))/(targ_dist(X)*T(X, Y, sigma))
    if u < r:
        X = Y
        accept += 1
    MHsamples[t] = X
However, I would like to update "per component" (i.e. component-wise updating) in every iteration. Is there a simple way of doing this?
Thank you for your help!
From the tone of your question I assume you are looking for performance improvements.
Monte Carlo algorithms are quite compute-intensive. You will get better results if you implement the algorithm at a lower level than an interpreted language like Python, e.g. by writing a C extension.
There are also implementations available for Python (PyStan, PyMC3).
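
That said, component-wise updating itself is straightforward to sketch. A minimal variant of the loop above, assuming the same symmetric random-walk proposal (so the T terms cancel in the acceptance ratio), could look like this:

# single-component Metropolis-Hastings: propose and accept/reject
# one coordinate at a time within each iteration
for t in range(n_iter):
    for i in range(2):
        Y = X.copy()
        Y[i] = X[i] + stats.norm.rvs(0, sigma)   # perturb component i only
        u = stats.uniform.rvs(loc=0, scale=1, size=1)
        r = targ_dist(Y) / targ_dist(X)          # symmetric proposal: T cancels
        if u < r:
            X = Y
            accept += 1
    MHsamples[t] = X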

perturbation analysis in pandas

For a given Series I want to change the value of each element around its current value and then calculate an arbitrary function (here std) as shown in the following code:
import pandas as pd
import numpy as np

a = pd.Series(np.random.randn(10))
perturb = {}
for item in range(2, len(a)):
    serturb = {}
    for ep in np.arange(-1, 1, 0.1):
        temp = a.ix[0:item]  # note: .ix is deprecated; a.iloc[0:item+1] is the modern equivalent here
        temp.iloc[-1] += ep
        serturb[ep] = temp.std()
    perturb[item] = pd.Series(serturb)
perturb = pd.DataFrame(perturb).T
The above code becomes too slow for a large amount of data. The above process, when applied to a DataFrame, would return a Panel. Is there an efficient way of doing this, since a lot of the calculations are being repeated?
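
One possible direction, sketched here rather than offered as a definitive answer: for std specifically, the repeated work can be avoided with prefix sums, since perturbing only the last element of each prefix changes the running sum and sum of squares in closed form. Assuming the same Series a and the loop's inclusive slicing:

import numpy as np
import pandas as pd

a = pd.Series(np.random.randn(10))
eps = np.arange(-1, 1, 0.1)
x = a.to_numpy()

items = np.arange(2, len(x))             # same prefix endpoints as the loop
n = (items + 1).astype(float)[:, None]   # prefix lengths
S = np.cumsum(x)[items, None]            # prefix sums
Q = np.cumsum(x**2)[items, None]         # prefix sums of squares

# shifting the last element of each prefix by ep updates S and Q in closed form
S_p = S + eps
Q_p = Q + 2 * x[items, None] * eps + eps**2

# sample std (ddof=1), matching Series.std() in the loop
std = np.sqrt((Q_p - S_p**2 / n) / (n - 1))
result = pd.DataFrame(std, index=items, columns=eps)

Each row of result should match the corresponding perturb[item] row from the loop, up to floating-point error.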
