I have a dataframe like this:
Category Frequency
1 30000
2 45000
3 32400
4 42200
5 56300
6 98200
How do I calculate the mean, median and skewness of the Categories?
I have tried the following:
df['cum_freq'] = [df["Category"]]*df["Frequnecy"]
mean = df['cum_freq'].mean()
median = df['cum_freq'].median()
skew = df['cum_freq'].skew()
If the total frequency is small enough to fit into memory, use repeat to generate the data and then you can easily call those methods.
s = df['Category'].repeat(df['Frequency']).reset_index(drop=True)
print(s.mean(), s.var(ddof=1), s.skew(), s.kurtosis())
# 4.13252219664584 3.045585008424625 -0.4512924988072343 -1.1923306818513022
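The question also asks for the median; once the data are expanded with repeat it is just another Series method, e.g.:
print(s.median())                 # median of the expanded data
print(s.quantile([0.25, 0.75]))   # other quantiles work the same way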
Otherwise, you will need more complicated algebra to calculate the moments, which can be done with the k-statistics. Some of the lower moments can be computed with other libraries like numpy or statsmodels, but for things like skewness and kurtosis this has to be done manually from the sums of the de-meaned values (calculated from the counts). Since these sums can overflow numpy's fixed-width integers, we use plain Python:
def moments_from_counts(vals, weights):
    """
    Returns tuple (mean, N-1 variance, skewness, kurtosis) from count data
    """
    vals = [float(x) for x in vals]
    weights = [float(x) for x in weights]
    n = sum(weights)
    mu = sum([x*y for x,y in zip(vals,weights)])/n
    S1 = sum([(x-mu)**1*y for x,y in zip(vals,weights)])
    S2 = sum([(x-mu)**2*y for x,y in zip(vals,weights)])
    S3 = sum([(x-mu)**3*y for x,y in zip(vals,weights)])
    S4 = sum([(x-mu)**4*y for x,y in zip(vals,weights)])
    k1 = S1/n
    k2 = (n*S2 - S1**2)/(n*(n-1))
    k3 = (2*S1**3 - 3*n*S1*S2 + n**2*S3)/(n*(n-1)*(n-2))
    k4 = (-6*S1**4 + 12*n*S1**2*S2 - 3*n*(n-1)*S2**2 - 4*n*(n+1)*S1*S3 + n**2*(n+1)*S4)/(n*(n-1)*(n-2)*(n-3))
    return mu, k2, k3/k2**1.5, k4/k2**2
moments_from_counts(df['Category'], df['Frequency'])
#(4.13252219664584, 3.045585008418879, -0.4512924988072345, -1.1923306818513018)
statsmodels has a nice class that takes care of lower moments, as well as the quantiles.
from statsmodels.stats.weightstats import DescrStatsW
d = DescrStatsW(df['Category'], weights=df['Frequency'])
d.mean
#4.13252219664584
d.var_ddof(1)
#3.045585008418879
The DescrStatsW class also gives you access to the underlying data as an array if you call d.asrepeats().
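For example (a small sketch using the same d as above), the weighted median and other quantiles are available through quantile, and asrepeats() reproduces the expanded data:
d.quantile([0.5])        # weighted median, returned as a pandas Series
rep = d.asrepeats()      # Category values repeated according to Frequency
print(rep.mean(), len(rep))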
For a project I have to predict the 2D profile of a certain function C(x,y) by sampling many "rows", i.e. C(x,0), where each row depends on a parameter alpha that is drawn from a uniform distribution on an interval decided a priori.
To set things up:
x = linspace(0,1,50)
y = linspace(0,1,50)
Once I have defined the function I want to predict through its 1D profiles, I want to use the least-squares method to find a numerical solution. Thus we first have to define a matrix M:
def matrix_of_profiles(x, y):   # x is an array, y is float64
    M = empty((0, 50))
    random_alpha = empty(1)
    riga = []
    for i in range(0, 5000):    # generate the 1D profiles of C(x,y)
        random_alpha = random.uniform(1, 3 + 1e-12)   # pick a different distribution?
        riga = C(x, y, random_alpha)
        M = r_[M, [riga]]
    return M
Then, the polynomial (which I found through some Taylor expansion):
def predicting_poly(x, shift):
    y = 1 + 4*x + (1 - shift)/(2*pi)
    return y
My final goal is to obtain a 50 x 50 matrix which should resemble the 2D profile I want to reconstruct. But now, if I run:
M = matrix_of_profiles(x,0)
t = linspace(0,1, 5000)
value_poly = predicting_poly(t,0)
value_poly = value_poly[:,newaxis]
col_1 = linalg.lstsq(M,value_poly, rcond=None)
plot(x,col_1[0])
then col_1 behaves as I wished (a sinusoid). Whereas if I go for:
def predicted_profile(x):
    t = linspace(0, 1, 5000)
    prediction = empty((50, 50))
    M = matrix_of_profiles(x, 0)
    for j in range(0, 50):
        shift = -1 + j/50
        value_poly = predicting_poly(t, shift)
        value_poly = value_poly[:, newaxis]
        predicted_value = linalg.lstsq(M, value_poly, rcond=None)
        predicted_value = predicted_value[0].reshape(50,)
        prediction[:, j] = predicted_value[0]
    return prediction
the columns of the new matrix prediction should behave similarly to what I previously defined as col_1, but they do not: each is now just a line and I do not understand why. Did I mess up in the last function?
I have this data:
puf = pd.DataFrame({'id': [1, 2, 3, 4, 5, 6, 7, 8],
                    'val': [850, 1889, 3289, 6083, 10349, 17860, 28180, 41236]})
The data seems to follow an exponential curve. Let's see the plot:
puf.plot('id','val')
I want to fit an exponential curve ($$ y = Ae^{Bx} $$, i.e. A times e to the power B·x) and add it as a column in Pandas. First I tried to log the values:
puf['log_val'] = np.log(puf['val'])
And then to use Numpy to fit the equation:
puf['fit'] = np.polyfit(puf['id'],puf['log_val'],1)
But I get an error:
ValueError: Length of values (2) does not match length of index (8)
My expected result is the fitted values as a new column in Pandas. I attach an image with the column of fitted values I want (in orange):
I'm stuck on this code and I'm not sure what I am doing wrong. How can I create a new column with my fitted values?
Note that you asked for an exponential model yet you have the results for a log-linear model.
Check out the work below:
For the log-linear model we are fitting E(log(Y)), i.e. minimizing the residuals log(y) - (log(b[0]) + b[1]*x):
from scipy.optimize import least_squares
least_squares(lambda b: np.log(puf['val']) - (np.log(b[0]) + b[1] * puf['id']),
              [1, 1])['x']
array([5.99531305e+02, 5.51106793e-01])
These are the values that Excel gives.
On the other hand, to fit an exponential curve the randomness is on Y and not on its logarithm, i.e. E(Y) = b[0]*exp(b[1]*x). Hence we have:
least_squares(lambda b: puf['val'] - b[0]*np.exp(b[1] * puf['id']), [0, 1])['x']
array([1.08047304e+03, 4.58116127e-01]) # correct results for exponential fit
Depending on your model choice, the values are a little different.
Better model? Since you have the same number of parameters, consider the one that gives you the lower deviance or the better out-of-sample prediction.
Note that the ideal exponential model is E(Y) = A'B'^X, which for comparison can be written as log(E(Y)) = A + XB, while the log-linear model will be E(log(Y)) = A + XB. Note the difference in where the expectation is taken.
From the two models we have:
Notice how the log-linear model overestimates at the higher values, while the exponential model overestimates at the lower values.
Code for image:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import least_squares

log_lin = least_squares(lambda b: np.log(puf['val']) - (np.log(b[0]) + b[1] * puf['id']),
                        [1, 1])['x']
expo = least_squares(lambda b: puf['val'] - b[0]*np.exp(b[1] * puf['id']), [0, 1])['x']

exp_fun = lambda x: expo[0] * np.exp(expo[1]*x)
log_lin_fun = lambda x: log_lin[0] * np.exp(log_lin[1]*x)

plt.plot(puf.id, puf.val, label='original')
plt.plot(puf.id, exp_fun(puf.id), label='exponential')
plt.plot(puf.id, log_lin_fun(puf.id), label='log-linear')
plt.legend()
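To make the "lower deviance" comparison concrete, a minimal sketch (reusing exp_fun and log_lin_fun from the code above) is to compare the residual sums of squares on the original scale:
rss_exp = np.sum((puf['val'] - exp_fun(puf['id']))**2)          # exponential fit
rss_log_lin = np.sum((puf['val'] - log_lin_fun(puf['id']))**2)  # log-linear fit
print(rss_exp, rss_log_lin)   # the smaller value is the better fit on this scale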
You're getting that error because np.polyfit(puf['id'], puf['log_val'], 1) returns just two values, array([0.55110679, 6.39614819]), which doesn't match the length of your dataframe.
This is what you want
y = a * exp(b*x)  ->  ln(y) = ln(a) + b*x
f = np.polyfit(df['id'], np.log(df['val']), 1)
where
a = np.exp(f[1]) -> 599.5313046712091
b = f[0] -> 0.5511067934637022
Giving
puf['fit'] = a * np.exp(b * puf['id'])
id val fit
0 1 850 1040.290193
1 2 1889 1805.082864
2 3 3289 3132.130026
3 4 6083 5434.785677
4 5 10349 9430.290286
5 6 17860 16363.179739
6 7 28180 28392.938399
7 8 41236 49266.644002
I'm trying to calculate the Fourier transform of three muon polarization signals, which are simply cosine functions multiplied by an exponential decay.
So, doing the Fourier transform, we are going to see broadened peaks centered at the corresponding frequency.
The problem is that I have already tried to do the Fourier transform, but I do not know if it's correct; furthermore, I'm trying to calculate the FWHM using the scipy.stats.moment function, using the 2-nd moment: is it correct?
Can you tell me if the code is correct?
I put here the three signals in .npy file and the code used for the Fourier analysis.
The signals are signal[0], signal[1] and signal[2], each an array of length 10.
Each signal[k] contains 10 polarization functions (1 for each applied magnetic field), which are signals of 400 points.
The corresponding files are signal_100, signal_110, signal_111, provided here:
https://github.com/JonathanFrassineti/UNDI-examples.
Ah, the frequencies range from 0 Hz to 40 MHz.
Thank you!
N = 400           # Number of signal points.
N1 = 40000000
T = 1./800.       # Sampling spacing.
xf = np.fft.rfftfreq(N1, T)

yf1 = FWHM1 = sigma1 = delta1 = bhar1 = np.zeros(fields, dtype=object)
yf2 = FWHM2 = sigma2 = delta2 = bhar2 = np.zeros(fields, dtype=object)
yf3 = FWHM3 = sigma3 = delta3 = bhar3 = np.zeros(fields, dtype=object)

for j in range(fields):
    # Fourier transform.
    yf1[j] = np.fft.rfft(signal[0][j])
    yf2[j] = np.fft.rfft(signal[1][j])
    yf3[j] = np.fft.rfft(signal[2][j])

    FWHM1[j] = moment(yf1[j], moment=2)
    FWHM2[j] = moment(yf2[j], moment=2)
    FWHM3[j] = moment(yf3[j], moment=2)

    sigma1[j] = np.sqrt(np.abs(FWHM3[j]))/2.355
    sigma2[j] = np.sqrt(np.abs(FWHM2[j]))/2.355
    sigma3[j] = np.sqrt(np.abs(FWHM3[j]))/2.355

    delta1[j] = sigma1[j]/gamma_Cu
    delta2[j] = sigma2[j]/gamma_Cu
    delta3[j] = sigma3[j]/gamma_Cu

    bhar1[j] = (((a*angtom)**3)/(1e-7*gamma_Cu*hbar))*delta1[j]
    bhar2[j] = (((a*angtom)**3)/(1e-7*gamma_Cu*hbar))*delta2[j]
    bhar3[j] = (((a*angtom)**3)/(1e-7*gamma_Cu*hbar))*delta3[j]
Currently I am working on a Python project with the same goal. I have a set of magnetic-field data B(x,y,z); I think the ideal would be to organize your data periodically per event and deduce fe (the sampling_rate).
f(A, t) = A*(cos(2*pi*fe*t) - sin(2*pi*fe*t))
B = [50, 50, 10, 3]   # where each entry is |B| per second
res = [f(a, time) for time, a in enumerate(B)]
fourier_transform = np.fft.fft(res)
frequency = np.fft.fftfreq(len(B))   # you can also use the fftfreq provided by scipy
Please star this project and contribute to the research resources:
RFSignalToolkit GitHub project
I have some probability density function:
T = 10000
tmin = 0
tmax = 10**20
t = np.linspace(tmin, tmax, T)
time = np.asarray(t) #this line may be redundant
for j in range(T):
    timedep_PD[j] = probdensity_func(x, time[j], initial_state)
I want to integrate it over two distinct regions of x. I tried the following to split the timedep_PD array into two spatial regions and then proceeded to integrate:
step = abs(xmin - xmax) / T
l1 = int(np.floor((abs(ab - xmin) * T) / abs(xmin - xmax)))
l2 = int(np.floor((abs(bd - ab) * T) / abs(xmin - xmax)))

# For spatial region 1
R1 = np.empty([l1])
R1 = x[:l1]
for i in range(T):
    Pd1[i] = Pd[i][:l1]

# For spatial region 2
Pd2 = np.empty([T, l2])
R2 = np.empty([l2])
R2 = x[l1:l1+l2]
for i in range(T):
    Pd2[i] = Pd[i][l1:l1+l2]

# Integrating over each spatial region
for i in range(T):
    P[0][i] = np.trapz(Pd1[i], R1)
    P[1][i] = np.trapz(Pd2[i], R2)
Is there an easier/more clear way to go about splitting up a probability density function into two spatial regions and then integrating within each spatial region at each time-step?
The loops can be eliminated by using vectorized operations instead. It's not clear whether Pd is a 2D NumPy array; if it's something else (e.g., a list of lists), it should be converted to a 2D NumPy array with np.array(...). After that you can do this:
Pd1 = Pd[:, :l1]
Pd2 = Pd[:, l1:l1+l2]
No need to loop over the time index; the slicing happens for all times at once (having : in place of an index means "all valid indices").
Similarly, np.trapz can integrate all time slices at once:
P1 = np.trapz(Pd1, R1, axis=1)
P2 = np.trapz(Pd2, R2, axis=1)
Each P1 and P2 is now a time series of integrals. The axis parameter determines along which axis Pd1 gets integrated - it's the second axis, i.e., space.
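As a self-contained sketch (the array shapes and split points below are made up, since the original x, Pd, l1 and l2 aren't shown):
import numpy as np

T, nx = 100, 500                      # hypothetical number of time steps and spatial points
x = np.linspace(0.0, 1.0, nx)         # spatial grid
Pd = np.random.rand(T, nx)            # stand-in for the (T, nx) density array
l1, l2 = 200, 300                     # hypothetical sizes of the two spatial regions

Pd1, R1 = Pd[:, :l1], x[:l1]
Pd2, R2 = Pd[:, l1:l1+l2], x[l1:l1+l2]

P1 = np.trapz(Pd1, R1, axis=1)        # integral over region 1 at every time step
P2 = np.trapz(Pd2, R2, axis=1)        # integral over region 2 at every time step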
So I'm running a KNN in order to create clusters. From each cluster, I would like to obtain the medoid of the cluster.
I'm employing a fractional distance metric in order to calculate distances:
$$ \delta_f(x, y) = \left( \sum_{i=1}^{d} |x^i - y^i|^f \right)^{1/f} $$
where d is the number of dimensions, the first data point's coordinates are x^i, the second data point's coordinates are y^i, and f is an arbitrary number between 0 and 1.
I would then calculate the medoid as:
$$ x_{\text{medoid}} = \operatorname*{arg\,min}_{y \in S} \sum_{x \in S} \delta(x, y) $$
where S is the set of data points, and δ is the absolute value of the distance metric used above.
I've looked online to no avail trying to find implementations of the medoid (even with other distance metrics), but most things were specifically about k-means or k-medoids, which [I think] is relatively different from what I want.
Essentially this boils down to me being unable to translate the math into effective programming. Any help or pointers in the right direction would be much appreciated! Here's a short list of what I have so far:
I have figured out how to calculate the fractional distance metric (the first equation) so I think I'm good there.
I know numpy has an argmin() function (documented here).
Extra points for increased efficiency without loss of accuracy (I'm trying not to brute-force by calculating every single fractional distance, because the number of point pairs might lead to factorial complexity...).
1. compute the pairwise distance matrix
2. compute the column or row sums
3. use argmin to find the medoid index
i.e. numpy.argmin(distMatrix.sum(axis=0)) or similar.
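A minimal sketch of those three steps (the point array, the value of f, and the use of scipy's pdist with a custom callable are my own assumptions, not part of the original answer):
import numpy as np
from scipy.spatial.distance import pdist, squareform

def fractional_dist(u, v, f=0.5):
    # fractional distance metric with 0 < f < 1
    return np.sum(np.abs(u - v)**f)**(1.0/f)

points = np.random.rand(100, 4)                         # hypothetical (n, d) data
D = squareform(pdist(points, metric=fractional_dist))   # pairwise distance matrix
medoid_idx = np.argmin(D.sum(axis=0))                   # column sums, then argmin
medoid = points[medoid_idx]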
So I've accepted the answer here, but I thought I'd provide my implementation if anyone else was trying to do something similar:
(1) This is the distance function:
def fractional(p_coord_array, q_coord_array):
    # f is an arbitrary value, but must be greater than zero and
    # less than one. In this case, I used 3/10. I took advantage
    # of the difference of cubes in this case, so that I wouldn't
    # encounter an overflow error.
    a = np.sum(np.array(p_coord_array, dtype=np.float64))
    b = np.sum(np.array(q_coord_array, dtype=np.float64))
    a2 = np.sum(np.power(p_coord_array, 2))
    ab = np.sum(p_coord_array) * np.sum(q_coord_array)
    b2 = np.sum(np.power(q_coord_array, 2))
    diffab = a - b
    suma2abb2 = a2 + ab + b2
    temp_dist = abs(diffab * suma2abb2)
    temp_dist = np.power(temp_dist, 1./10)
    dist = np.power(temp_dist, 10./3)
    return dist
(2) The medoid function (if the length of the dataset was less than 6000 [if greater than that, I ran into overflow errors... I'm still working on that bit to be perfectly honest...]):
def medoid(dataset):
    point = []
    w = len(dataset)
    if(len(dataset) < 6000):
        h = len(dataset)
        dist_matrix = [[0 for x in range(w)] for y in range(h)]
        list_combinations = [(counter_1, counter_2, data_1, data_2)
                             for counter_1, data_1 in enumerate(dataset)
                             for counter_2, data_2 in enumerate(dataset)
                             if counter_1 < counter_2]
        for counter_3, tuple in enumerate(list_combinations):
            temp_dist = fractional(tuple[2], tuple[3])
            dist_matrix[tuple[0]][tuple[1]] = abs(temp_dist)
            dist_matrix[tuple[1]][tuple[0]] = abs(temp_dist)
Any questions, feel free to comment!
If you don't mind using brute force this might help:
def calc_medoid(X, Y, f=2):
    n = len(X)
    m = len(Y)
    dist_mat = np.zeros((m, n))
    # compute distance matrix
    for j in range(n):
        center = X[j, :]
        for i in range(m):
            if i != j:
                dist_mat[i, j] = np.linalg.norm(Y[i, :] - center, ord=f)
    medoid_id = np.argmin(dist_mat.sum(axis=0))  # sum over y
    return medoid_id, X[medoid_id, :]
Here is an example of computing a medoid for a single cluster with Euclidean distance.
import numpy as np, pandas as pd, matplotlib.pyplot as plt

a, b, c, d = np.array([0, 1]), np.array([1, 3]), np.array([4, 2]), np.array([3, 1.5])
vCentroid = np.mean([a, b, c, d], axis=0)

def GetMedoid(vX):
    vMean = np.mean(vX, axis=0)                               # compute centroid
    return vX[np.argmin([sum((x - vMean)**2) for x in vX])]   # pick the point closest to the centroid

vMedoid = GetMedoid([a, b, c, d])

print(f'centroid = {vCentroid}')
print(f'medoid = {vMedoid}')

df = pd.DataFrame([a, b, c, d], columns=['x', 'y'])
ax = df.plot.scatter('x', 'y', grid=True, title='Centroid in 2D plane', s=100);
plt.plot(vCentroid[0], vCentroid[1], 'ro', ms=10);   # plot centroid as a red circle
plt.plot(vMedoid[0], vMedoid[1], 'rx', ms=20);       # plot medoid as a red cross
You can also use the following package to compute medoid for one or more clusters
!pip -q install scikit-learn-extra > log
from sklearn_extra.cluster import KMedoids
GetMedoid = lambda vX: KMedoids(n_clusters=1).fit(vX).cluster_centers_
GetMedoid([a, b, c, d])[0]
I would say that you just need to compute the median.
np.median(np.asarray(points), axis=0)
Your median is the point with the biggest centrality.
Note: if you are using distances different than Euclidean this doesn't hold.