I would like to normalize a vector such that the mean of the normalized vector would be a certain pre-defined value. For instance, I want the mean to be 0.1 in the following example:
import numpy as np
from sklearn.preprocessing import normalize
array = np.arange(1,11)
array_norm = normalize(array[:,np.newaxis], axis=0).ravel()
Of course, np.mean(array_norm) is 0.28 and not 0.1. Is there a way to do this in Python?
You could just multiply each element by mean_you_want / current_mean. If you multiply each element by a scalar, the mean will also be multiplied by that scalar. In your case, that would be 0.1/np.mean(array_norm)
array_norm *= 0.1/np.mean(array_norm)
This should do the trick.
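Putting the steps together, a minimal sketch (using a plain NumPy L2-normalization, which for a single column matches sklearn's normalize(array[:, np.newaxis], axis=0).ravel()):

```python
import numpy as np

array = np.arange(1, 11).astype(float)

# L2-normalize the vector (equivalent to sklearn's normalize
# for a single column).
array_norm = array / np.linalg.norm(array)

# Scaling by a constant scales the mean by the same constant,
# so this forces the mean to the target value.
target_mean = 0.1
array_norm *= target_mean / np.mean(array_norm)
print(np.mean(array_norm))  # ~0.1
```

Note that after this rescaling the vector is no longer unit-norm; you can fix either the norm or the mean, not both.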
import tensorflow as tf
from tensorflow.contrib import layers  # TF 1.x contrib API

input = [50, 10]
O1 = layers.fully_connected(input, 20, tf.sigmoid)
Why is my input wrong?
I am not sure I understand the question, but...
A sigmoid layer outputs an array with numbers between 0 and 1, but you can't really calculate what their standard deviation will be before feeding data through the network.
If you are talking about the matrix that contains the weight parameters, then this depends on how you initialize them. But after the training of the network, the deviation will not be the same as before the training.
EDIT:
OK, so you simply want to calculate the standard deviation of a matrix. In that case, use NumPy:
import numpy as np

a = np.array([[1, 2], [3, 4]])  # or your 50 by 50 matrix
np.std(a)
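np.std also takes axis and ddof arguments if you need per-axis or sample (rather than population) deviations, for example:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])

print(np.std(a))          # std over all elements (population, ddof=0)
print(np.std(a, axis=0))  # per-column std
print(np.std(a, ddof=1))  # sample std (Bessel's correction)
```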
import numpy as np

x = np.array([1, 3, 5, 7, 9])
y = np.array([6, 3, 9, 5, 4])
m, b = np.polyfit(x, y, 1)
How does the 1 (deg) work in this linear regression? I know it represents the degree of the fitted polynomial, but how does it actually work?
The degree parameter n determines the polynomial used for fitting: p(x) = p[0]*x**n + p[1]*x**(n-1) + ... + p[n]. The coefficients p are returned in descending powers, and the length of p is n + 1. This polynomial is then fitted to the data in a least-squares sense.
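To see that degree 1 is ordinary least-squares line fitting, you can compare polyfit's output with the line obtained directly from the design matrix; a small sketch:

```python
import numpy as np

x = np.array([1, 3, 5, 7, 9], dtype=float)
y = np.array([6, 3, 9, 5, 4], dtype=float)

# Degree 1 fits p(x) = p[0]*x + p[1] by least squares.
m, b = np.polyfit(x, y, 1)

# The same line via least squares on the design matrix [x, 1].
A = np.vstack([x, np.ones_like(x)]).T
m2, b2 = np.linalg.lstsq(A, y, rcond=None)[0]
print(m, b)  # matches (m2, b2)
```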
I want to know the correct method of calculating the SNR of a single image. I added some Gaussian noise to the original image and want to calculate the SNR.
You can create a function as:
import numpy as np

def signaltonoise(a, axis=0, ddof=0):
    """
    The signal-to-noise ratio of the input data.

    Returns the signal-to-noise ratio of `a`, here defined as the mean
    divided by the standard deviation.

    Parameters
    ----------
    a : array_like
        An array_like object containing the sample data.
    axis : int or None, optional
        Axis along which to operate. Default is 0. If None, compute over
        the whole array `a`.
    ddof : int, optional
        Degrees of freedom correction for standard deviation. Default is 0.

    Returns
    -------
    s2n : ndarray
        The mean to standard deviation ratio(s) along `axis`, or 0 where the
        standard deviation is 0.
    """
    a = np.asanyarray(a)
    m = a.mean(axis)
    sd = a.std(axis=axis, ddof=ddof)
    return np.where(sd == 0, 0, m / sd)
Source: https://github.com/scipy/scipy/blob/v0.16.0/scipy/stats/stats.py#L1963
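As a usage sketch (the "image" here is a synthetic array, purely for illustration): the mean-over-std definition above can be computed directly, and when the clean image is also available, a power-ratio SNR in dB is another common choice:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(50, 200, size=(64, 64))          # stand-in for an image
noisy = clean + rng.normal(0, 10, size=clean.shape)  # add Gaussian noise

# Mean-over-std SNR, as in the signaltonoise() function above (axis=None):
snr_ratio = noisy.mean() / noisy.std()

# With the clean image available, a power-ratio SNR in decibels:
noise = noisy - clean
snr_db = 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))
print(snr_ratio, snr_db)
```

Which definition is "correct" depends on the application; the mean/std form needs only the noisy image, while the dB form needs a noise-free reference.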
I want to fit an autoregressive model on some data stored in a dataframe, with 96 data points per day. The data is the value of solar irradiance in some region, and I know it has a 1-day seasonality. I want to obtain a simple linear model using scikit-learn's LinearRegression, and I want to specify which lagged data points to use: the last 10 data points, plus the data point with a lag of 97, which corresponds to the data point roughly 24 hours earlier. How can I specify the lagged coefficients I want to use? I don't want 97 coefficients, just 11 of them: the previous 10 data points and the data point 97 positions back.
Just build a dataset X with 11 columns [x(t-97), x(t-10), x(t-9), ..., x(t-1)]. The series of x(t) is then your target y.
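A minimal sketch of that construction, on a made-up series with 96-step seasonality (the series and its length are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical irradiance-like series: 30 days at 96 points per day.
rng = np.random.default_rng(0)
t = np.arange(96 * 30)
series = np.sin(2 * np.pi * t / 96) + 0.1 * rng.normal(size=t.size)

lags = list(range(1, 11)) + [97]  # the 10 previous points plus lag 97
max_lag = max(lags)

# One column per chosen lag; row i holds series[max_lag + i - k] for lag k.
X = np.column_stack([series[max_lag - k : len(series) - k] for k in lags])
y = series[max_lag:]  # target: the current value

model = LinearRegression().fit(X, y)
print(model.coef_.shape)  # one coefficient per chosen lag
```

This way the model only ever sees the 11 lags you picked, so it fits exactly 11 coefficients (plus an intercept) rather than 97.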
I would like to ask if it is possible to calculate the area under the curve for a fitted distribution?
The curve would look like this
I've seen some posts online about using trapz, but I'm not sure it will work for a curve like that. Please enlighten me, and thank you for the help!
If your distribution, f, is discretized on a set of points, x, that you know about, then you can use scipy.integrate.trapz or scipy.integrate.simps directly (pass f, x as arguments in that order). For a quick check (e.g. that your distribution is normalized), just sum the values of f and multiply by the grid spacing:
import numpy as np
from scipy.integrate import trapz, simps
x, dx = np.linspace(-100, 250, 50, retstep=True)
mean, sigma = 90, 20
f = np.exp(-((x-mean)/sigma)**2/2) / sigma / np.sqrt(2 * np.pi)
print('{:18.16f}'.format(np.sum(f)*dx))
print('{:18.16f}'.format(trapz(f, x)))
print('{:18.16f}'.format(simps(f, x)))
Output:
1.0000000000000002
0.9999999999999992
1.0000000000000016
First, you have to find a function from the graph. Then you can use integration in Python with SciPy.
It is just math, as Daniel Sanchez says.