Problem Fitting Residence Time Distribution Data - Python

I am trying to fit Residence Time Distribution (RTD) data. An RTD is typically a skewed distribution. I have built a simple script that takes this non-equally-spaced time data set from the RTD.
Data set:
timeArray = [0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 12.0, 14.0]
concArray = [0.0, 0.6, 1.4, 5.0, 8.0, 10.0, 8.0, 6.0, 4.0, 3.0, 2.2, 1.5, 0.6, 0.0]
To fit the data I have been using Python's curve_fit function
parameters, covariance = curve_fit(nCSTR, time, conc, p0=guess)
and different sets of models (e.g. CSTR, sine, Gauss) to fit the data. However, no success so far.
The RTD data that I have corresponds to a CSTR, and there is an equation that models this type of behavior very accurately:
# Generalized nCSTR model
y = (( (np.power(x/tau,n-1)) * np.power(n,n) ) / (tau * math.gamma(n)) ) * np.exp(-n*x/tau)
As a separate note: in the generalized nCSTR model I am using the gamma function instead of the (n-1)! factorial term, because of the complexity of dealing with decimal (non-integer) values of n in factorial terms.
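(For integer n the two agree, since gamma(n) = (n-1)!, while gamma is also defined for non-integer n; a quick check:)
import math
print(math.gamma(5), math.factorial(4))   # 24.0 24  -> gamma(n) == (n-1)! for integer n
print(math.gamma(4.3))                    # defined for non-integer n, unlike factorial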
This CSTR model should fit the data without problems, but for some reason it is not able to do so. The outcome after executing my code:
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

timeArray = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0]
concArray = [0.0, 0.6, 1.4, 2.6, 5.0, 6.5, 8.0, 9.0, 10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.5, 3.0, 2.5, 2.2, 1.8, 1.5, 1.2, 1.0, 0.8, 0.6, 0.5, 0.3, 0.1, 0.0]
#Recast time and conc into numpy arrays
time = np.asarray(timeArray)
conc = np.asarray(concArray)
plt.plot(time, conc, 'o')
def nCSTR(x, tau, n):
    y = (( (np.power(x/tau,n-1)) * np.power(n,n) ) / (tau * math.gamma(n)) ) * np.exp(-n*x/tau)
    return y
guess = [1, 12]
parameters, covariance = curve_fit(nCSTR, time, conc, p0=guess)
tau = parameters[0]
n = parameters[1]
y = np.arange(0.0, len(time), 1.0)
for i in range(len(timeArray)):
    y[i] = (( (np.power(time[i]/tau,n-1)) * np.power(n,n) ) / (tau * math.gamma(n)) ) * np.exp(-n*time[i]/tau)
plt.plot(time,y)
is this plot: Fitting Output (image)
I know I am missing something and any help will be much appreciated. The model has been well known for decades, so the problem should not be the equation itself. I generated some dummy data to confirm that the equation is written correctly, and the output was the same type of profile that I am looking for. So the equation is fine.
import numpy as np
import math
import matplotlib.pyplot as plt
t = np.arange(0.0, 10.5, 0.5)
tau = 2
n = 5
y = np.arange(0.0, len(t), 1.0)
for i in range(len(t)):
    y[i] = (( (np.power(t[i]/tau,n-1)) * np.power(n,n) ) / (tau * math.gamma(n)) ) * np.exp(-n*t[i]/tau)
print(y)
plt.plot(t,y)
CSTR profile with Dummy Data (image)
If anyone is interested in the theory behind it, I recommend reading about the Tanks-in-Series model (specifically the CSTR); Fogler has a great book on this topic.

I think that the main problem is that your model does not allow for an overall scale factor or that your data may not be normalized as you expect.
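For example, a minimal sketch of the normalization route (my illustration; the p0 guess here is arbitrary): convert the measured concentrations to the RTD function E(t) = C(t) / ∫C(t) dt, whose area is 1 by construction, so the unscaled nCSTR model has a chance to match it:
area = np.trapz(conc, time)                 # numerical integral of C(t) dt
E = conc / area                             # E(t) = C(t) / integral, unit area
parameters, covariance = curve_fit(nCSTR, time, E, p0=[5, 4])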
If you'll permit me to convert your curve-fitting program to use lmfit (I am a lead author), you might do:
import numpy as np
from scipy.special import gamma
import matplotlib.pyplot as plt
from lmfit import Model
timeArray = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0]
concArray = [0.0, 0.6, 1.4, 2.6, 5.0, 6.5, 8.0, 9.0, 10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.5, 3.0, 2.5, 2.2, 1.8, 1.5, 1.2, 1.0, 0.8, 0.6, 0.5, 0.3, 0.1, 0.0]
#Recast time and conc into numpy arrays
time = np.asarray(timeArray)
conc = np.asarray(concArray)
plt.plot(time, conc, 'o', label='data')
def nCSTR(x, scale, tau, n):
    """scaled CSTR model"""
    z = n*x/tau
    return scale * np.exp(-z) * z**(n-1) * (n/(tau*gamma(n)))
# create a Model for your model function
cmodel = Model(nCSTR)
# now create a set of Parameters for your model (note that parameters
# are named using your function arguments), and give initial values
params = cmodel.make_params(tau=3, scale=10, n=10)
# since you have `xxx**(n-1)`, setting a lower bound of 1 on `n`
# is wise, otherwise you would have to handle complex values
params['n'].min = 1
# now fit the model to your `conc` data with those parameters
# (and also passing in independent variables using `x`: the argument
# name from the signature of the model function)
result = cmodel.fit(conc, params, x=time)
# print out a report of the results
print(result.fit_report())
# you do not need to construct the best fit yourself, it is in `result`:
plt.plot(time, result.best_fit, label='fit')
plt.legend()
plt.show()
This will print out a report that includes statistics and uncertainties:
[[Model]]
Model(nCSTR)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 29
# data points = 29
# variables = 3
chi-square = 2.84348862
reduced chi-square = 0.10936495
Akaike info crit = -61.3456602
Bayesian info crit = -57.2437727
R-squared = 0.98989860
[[Variables]]
scale: 49.7615649 +/- 0.81616118 (1.64%) (init = 10)
tau: 5.06327482 +/- 0.05267918 (1.04%) (init = 3)
n: 4.33771512 +/- 0.14012112 (3.23%) (init = 10)
[[Correlations]] (unreported correlations are < 0.100)
C(scale, n) = -0.521
C(scale, tau) = 0.477
C(tau, n) = -0.406
and generate a plot of the data together with the best fit (image).
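Note that the fitted scale is roughly the area under your concentration data (np.trapz(conc, time)), which is consistent with the point above: the data are not normalized to unit area the way an RTD E(t) curve would be, and the scale parameter simply absorbs that factor.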

Related

Calculating the length of a polyline using a loop - Python

I need to calculate the length of this polyline with the following coordinates, formatted this exact way:
coords = [[1.0, 1.0], [1.5, 2.0], [2.2, 2.4], [3.0, 3.2], [4.0, 3.6], [4.5, 3.5], [4.8, 3.2], [5.2, 2.8], [5.6, 2.0],
[6.5, 1.2]]
using this distance formula: d = √((x2 − x1)² + (y2 − y1)²)
Our lab wants us to use a loop, and to be able to use the same code on other sets of coordinates. I am lost on where to start.
Something like this could work?
Define a function for the Euclidean distance:
import math
def distance(x1,x2,y1,y2):
    return math.sqrt((x2-x1)**2 + (y2-y1)**2)
Then, another function which receives coords as input and gives back the sum of the point-to-point Euclidean distances:
def compute_poly_line_length(coords):
    distances = []
    for i in range(len(coords)-1):
        current_line = coords[i]
        next_line = coords[i+1]
        distances.append(distance(current_line[0], next_line[0], current_line[1], next_line[1]))
    return sum(distances)
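A quick usage sketch with the coordinates from the question:
coords = [[1.0, 1.0], [1.5, 2.0], [2.2, 2.4], [3.0, 3.2], [4.0, 3.6], [4.5, 3.5], [4.8, 3.2], [5.2, 2.8], [5.6, 2.0],
          [6.5, 1.2]]
print(compute_poly_line_length(coords))   # total length of the polyline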

Stacked bar chart segments intersecting each other

I have the following code for a stacked bar chart:
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()   # fig is used later by fig.autofmt_xdate()

cols = ['Bug Prediction','Traceability','Security', 'Program Generation & Repair',
'Performance Prediction','Code Similarity & Clone Detection',
'Code Navigation & Understanding', 'Other_SE']
count_ANN = [2.0,0.0,1.0,0.0,0.0,3.0,5.0,1.0]
count_CNN = [1.0,0.0,5.0,0.0,1.0,4.0,4.0,0.0]
count_RNN = [1.0,0.0,3.0,1.0,0.0,4.0,7.0,2.0]
count_LSTM =[3.0,0.0,5.0,3.0,1.0,9.0,15.0,1.0]
count_GNN = [0.0,0.0,1.0,0.0,0.0,3.0,3.0,3.0]
count_AE = [0.0,0.0,1.0,3.0,0.0,6.0,11.0,0.0]
count_AM = [2.0,0.0,1.0,4.0,1.0,4.0,15.0,1.0]
count_other =[1.0,0.0,2.0,2.0,0.0,1.0,3.0,0.0]
b_RNN = list(np.add(count_ANN,count_CNN))
b_LSTM = list(np.add(np.add(count_ANN,count_CNN),count_RNN))
b_AE = list(np.add(np.add(np.add(count_ANN,count_CNN),count_RNN),count_AE))
b_GNN = list(np.add(b_AE,count_GNN))
b_others = list(np.add(b_GNN,count_other))
plt.bar(cols,count_ANN,0.4,label = "ANN")
plt.bar(cols,count_CNN,0.4,bottom=count_ANN,label = "CNN")
plt.bar(cols,count_RNN,0.4,bottom=b_RNN,label = "RNN")
plt.bar(cols,count_LSTM,0.4,bottom =b_LSTM, label = "LSTM")
plt.bar(cols,count_AE,0.4,bottom=b_AE,label = "Auto-Encoder")
plt.bar(cols,count_GNN,0.4,bottom=b_GNN,label = "GNN")
plt.bar(cols,count_other,0.4,bottom=b_others,label = "Others")
#ax.bar(cols, count)
plt.xticks(np.arange(len(cols))+0.1,cols)
fig.autofmt_xdate()
plt.legend()
plt.show()
The output of this is overlapping stacks, as in the following figure:
The specific problem is that b_AE is calculated incorrectly. (Also, there is a list called count_AM for which there is no label.)
The more general problem is that calculating all these values "by hand" is very error-prone and difficult to adapt when there are changes. It helps to write things in a loop.
The magic of numpy's broadcasting and vectorization lets you initialize bottom as a single zero and then use numpy addition to accumulate the counts.
To have a bit neater x-axis, you can put the individual words on separate lines. Also, plt.tight_layout() tries to make sure all text fits nicely into the plot.
import matplotlib.pyplot as plt
import numpy as np
cols = ['Bug Prediction', 'Traceability', 'Security', 'Program Generation & Repair',
'Performance Prediction', 'Code Similarity & Clone Detection',
'Code Navigation & Understanding', 'Other_SE']
count_ANN = [2.0, 0.0, 1.0, 0.0, 0.0, 3.0, 5.0, 1.0]
count_CNN = [1.0, 0.0, 5.0, 0.0, 1.0, 4.0, 4.0, 0.0]
count_RNN = [1.0, 0.0, 3.0, 1.0, 0.0, 4.0, 7.0, 2.0]
count_LSTM = [3.0, 0.0, 5.0, 3.0, 1.0, 9.0, 15.0, 1.0]
count_GNN = [0.0, 0.0, 1.0, 0.0, 0.0, 3.0, 3.0, 3.0]
count_AE = [0.0, 0.0, 1.0, 3.0, 0.0, 6.0, 11.0, 0.0]
count_AM = [2.0, 0.0, 1.0, 4.0, 1.0, 4.0, 15.0, 1.0]
count_other = [1.0, 0.0, 2.0, 2.0, 0.0, 1.0, 3.0, 0.0]
all_counts = [count_ANN, count_CNN, count_RNN, count_LSTM, count_GNN, count_AE, count_AM, count_other]
all_labels = ["ANN", "CNN", "RNN", "LSTM", "GNN", "Auto-Encoder", "AM", "Others"]
cols = ["\n".join(c.split(" ")) for c in cols]
cols = [c.replace("&\n", "& ") for c in cols]
bottom = 0
for count_i, label in zip(all_counts, all_labels):
    plt.bar(cols, count_i, 0.4, bottom=bottom, label=label)
    bottom += np.array(count_i)
# plt.xticks(np.arange(len(cols)) + 0.1, cols)
plt.tick_params(axis='x', labelrotation=45, length=0)
plt.legend()
plt.tight_layout()
plt.show()
PS: To have the bars in the same order as the legend, you could draw them starting from the top:
bottom = np.sum(all_counts, axis=0)
for count_i, label in zip(all_counts, all_labels):
    bottom -= np.array(count_i)
    plt.bar(cols, count_i, 0.4, bottom=bottom, label=label)

Fitting a Lognormal Distribution in Python using CURVE_FIT

I have a hypothetical y as a function of x and am trying to find/fit a lognormal distribution curve that best fits the data. I am using the curve_fit function and was able to fit a normal distribution, but the curve does not look optimal.
Below are the given y and x data points, where y = f(x).
y_axis = [0.00032425299473065838, 0.00063714106162861229, 0.00027009331177605913, 0.00096672396877715144, 0.002388766809835889, 0.0042233337680543182, 0.0053072824980722137, 0.0061291327849408699, 0.0064555344006149871, 0.0065601228278316746, 0.0052574034010282218, 0.0057924488798939255, 0.0048154093097913355, 0.0048619350036057446, 0.0048154093097913355, 0.0045114840997070331, 0.0034906838696562147, 0.0040069911024866456, 0.0027766995669134334, 0.0016595801819374015, 0.0012182145074882836, 0.00098231827111984341, 0.00098231827111984363, 0.0012863691645616997, 0.0012395921040321833, 0.00093554121059032721, 0.0012629806342969417, 0.0010057068013846018, 0.0006081017868837127, 0.00032743942370661445, 4.6777060529516312e-05, 7.0165590794274467e-05, 7.0165590794274467e-05, 4.6777060529516745e-05]
The y-axis values are probabilities of an event occurring in the x-axis time bins:
x_axis = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0, 21.0, 22.0, 23.0, 24.0, 25.0, 26.0, 27.0, 28.0, 29.0, 30.0, 31.0, 32.0, 33.0, 34.0]
I was able to get a better fit on my data using Excel and a lognormal approach. When I attempt to use a lognormal in Python, the fit does not work and I am doing something wrong.
Below is the code I have for fitting a normal distribution, which seems to be the only one that I can fit in Python (hard to believe):
# fitting distribution on top of Savitzky-Golay
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import scipy
import scipy.stats
import numpy as np
from scipy.stats import norm, gamma, lognorm, halflogistic, foldcauchy
from scipy.optimize import curve_fit
matplotlib.rcParams['figure.figsize'] = (16.0, 12.0)
matplotlib.style.use('ggplot')
# results from savgol
x_axis = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0, 21.0, 22.0, 23.0, 24.0, 25.0, 26.0, 27.0, 28.0, 29.0, 30.0, 31.0, 32.0, 33.0, 34.0]
y_axis = [0.00032425299473065838, 0.00063714106162861229, 0.00027009331177605913, 0.00096672396877715144, 0.002388766809835889, 0.0042233337680543182, 0.0053072824980722137, 0.0061291327849408699, 0.0064555344006149871, 0.0065601228278316746, 0.0052574034010282218, 0.0057924488798939255, 0.0048154093097913355, 0.0048619350036057446, 0.0048154093097913355, 0.0045114840997070331, 0.0034906838696562147, 0.0040069911024866456, 0.0027766995669134334, 0.0016595801819374015, 0.0012182145074882836, 0.00098231827111984341, 0.00098231827111984363, 0.0012863691645616997, 0.0012395921040321833, 0.00093554121059032721, 0.0012629806342969417, 0.0010057068013846018, 0.0006081017868837127, 0.00032743942370661445, 4.6777060529516312e-05, 7.0165590794274467e-05, 7.0165590794274467e-05, 4.6777060529516745e-05]
## y_axis values must be normalised
sum_ys = sum(y_axis)
# normalize to 1
y_axis = [_/sum_ys for _ in y_axis]
# def gamma_f(x, a, loc, scale):
# return gamma.pdf(x, a, loc, scale)
def norm_f(x, loc, scale):
    # print 'loc: ', loc, 'scale: ', scale, "\n"
    return norm.pdf(x, loc, scale)
fitting = norm_f
# param_bounds = ([-np.inf,0,-np.inf],[np.inf,2,np.inf])
result = curve_fit(fitting, x_axis, y_axis)
result_mod = result
# mod scale
# results_adj = [result_mod[0][0]*.75, result_mod[0][1]*.85]
plt.plot(x_axis, y_axis, 'ro')
plt.bar(x_axis, y_axis, 1, alpha=0.75)
plt.plot(x_axis, [fitting(_, *result[0]) for _ in x_axis], 'b-')
plt.axis([0,35,0,.1])
# convert back into probability
y_norm_fit = [fitting(_, *result[0]) for _ in x_axis]
y_fit = [_*sum_ys for _ in y_norm_fit]
print(list(y_fit))
plt.show()
I am trying to get answers to two questions:
Is this the best fit I will get from a normal distribution curve? How can I improve the fit?
Normal distribution result:
How can I fit a lognormal distribution to this data, or is there a better distribution that I can use?
I was playing around with the lognormal distribution curve, adjusting mu and sigma, and it looks like a better fit is possible. I don't understand what I am doing wrong to get similar results in Python.
Actually, a Gamma distribution might be a good fit, as @Glen_b proposed. I'm using the second definition, with \alpha and \beta.
NB: the trick I use for a quick fit is to compute the mean and variance; for a typical two-parameter distribution this is enough to recover the parameters (here \beta = mean/variance and \alpha = mean * \beta) and to get a quick idea of whether it is a good fit or not.
Code
import math
from scipy.misc import comb
import matplotlib.pyplot as plt
y_axis = [0.00032425299473065838, 0.00063714106162861229, 0.00027009331177605913, 0.00096672396877715144, 0.002388766809835889, 0.0042233337680543182, 0.0053072824980722137, 0.0061291327849408699, 0.0064555344006149871, 0.0065601228278316746, 0.0052574034010282218, 0.0057924488798939255, 0.0048154093097913355, 0.0048619350036057446, 0.0048154093097913355, 0.0045114840997070331, 0.0034906838696562147, 0.0040069911024866456, 0.0027766995669134334, 0.0016595801819374015, 0.0012182145074882836, 0.00098231827111984341, 0.00098231827111984363, 0.0012863691645616997, 0.0012395921040321833, 0.00093554121059032721, 0.0012629806342969417, 0.0010057068013846018, 0.0006081017868837127, 0.00032743942370661445, 4.6777060529516312e-05, 7.0165590794274467e-05, 7.0165590794274467e-05, 4.6777060529516745e-05]
x_axis = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0, 21.0, 22.0, 23.0, 24.0, 25.0, 26.0, 27.0, 28.0, 29.0, 30.0, 31.0, 32.0, 33.0, 34.0]
## y_axis values must be normalised
sum_ys = sum(y_axis)
# normalize to 1
y_axis = [_/sum_ys for _ in y_axis]
m = 0.0
for k in range(0, len(x_axis)):
    m += y_axis[k] * x_axis[k]
v = 0.0
for k in range(0, len(x_axis)):
    t = (x_axis[k] - m)
    v += y_axis[k] * t * t
print(m, v)
b = m/v
a = m * b
print(a, b)
z = []
for k in range(0, len(x_axis)):
    q = b**a * x_axis[k]**(a-1.0) * math.exp( - b*x_axis[k] ) / math.gamma(a)
    z.append(q)
plt.plot(x_axis, y_axis, 'ro')
plt.plot(x_axis, z, 'b*')
plt.axis([0, 35, 0, .1])
plt.show()
A discrete distribution might look better - your x values are all integers, after all. You have a distribution with variance about 3 times higher than the mean, and it is asymmetric - so most likely something like a Negative Binomial might work quite well. Here is a quick fit (again by matching moments: p = 1 - mean/variance and r = mean*(1 - p)/p).
r comes out a bit above 6, so you might want to move to a distribution with real-valued r - the Polya distribution.
Code
from scipy.misc import comb
import matplotlib.pyplot as plt
y_axis = [0.00032425299473065838, 0.00063714106162861229, 0.00027009331177605913, 0.00096672396877715144, 0.002388766809835889, 0.0042233337680543182, 0.0053072824980722137, 0.0061291327849408699, 0.0064555344006149871, 0.0065601228278316746, 0.0052574034010282218, 0.0057924488798939255, 0.0048154093097913355, 0.0048619350036057446, 0.0048154093097913355, 0.0045114840997070331, 0.0034906838696562147, 0.0040069911024866456, 0.0027766995669134334, 0.0016595801819374015, 0.0012182145074882836, 0.00098231827111984341, 0.00098231827111984363, 0.0012863691645616997, 0.0012395921040321833, 0.00093554121059032721, 0.0012629806342969417, 0.0010057068013846018, 0.0006081017868837127, 0.00032743942370661445, 4.6777060529516312e-05, 7.0165590794274467e-05, 7.0165590794274467e-05, 4.6777060529516745e-05]
x_axis = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0, 21.0, 22.0, 23.0, 24.0, 25.0, 26.0, 27.0, 28.0, 29.0, 30.0, 31.0, 32.0, 33.0, 34.0]
## y_axis values must be normalised
sum_ys = sum(y_axis)
# normalize to 1
y_axis = [_/sum_ys for _ in y_axis]
s = 1.0 # shift by 1 to have them all at 0
m = 0.0
for k in range(0, len(x_axis)):
    m += y_axis[k] * (x_axis[k] - s)
v = 0.0
for k in range(0, len(x_axis)):
    t = (x_axis[k] - s - m)
    v += y_axis[k] * t * t
print(m, v)
p = 1.0 - m/v
r = int(m*(1.0 - p) / p)
print(p, r)
z = []
for k in range(0, len(x_axis)):
    q = comb(k + r - 1, k) * (1.0 - p)**r * p**k
    z.append(q)
plt.plot(x_axis, y_axis, 'ro')
plt.plot(x_axis, z, 'b*')
plt.axis([0, 35, 0, .1])
plt.show()
Note that if a lognormal curve is correct and you take logs of both variables, you should have a quadratic relationship; even if that's not a suitable scale for a final model (because of variance effects -- if your variance is near constant on the original scale it will overweight the small values) it should at least give a good starting point for a nonlinear fit.
Indeed aside from the first two points this looks fairly good:
-- a quadratic fit to the solid points would describe that data quite well and should give suitable starting values if you then want to do a nonlinear fit.
(If error in x is at all possible, the lack of fit at the lowest x may be as much an issue with error in x as with error in y.)
Incidentally, that plot seems to hint that a gamma curve may fit a little better overall than a lognormal one (in particular if you don't want to reduce the impact of those first two points relative to points 4-6). A good initial fit for that can be had by regressing log(y) on x and log(x):
The scaled gamma density is g = c·x^(a-1)·exp(-b·x) ... taking logs, you get log(g) = log(c) + (a-1)·log(x) - b·x = b0 + b1·log(x) + b2·x ... so supplying log(x) and x to a linear regression routine will fit that. The same caveats about variance effects apply (so it might be best as a starting point for a nonlinear least squares fit if your relative error in y isn't nearly constant).
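A minimal sketch of that regression (my own illustration, reusing the normalized x_axis and y_axis lists from above):
import numpy as np

x = np.asarray(x_axis)
y = np.asarray(y_axis)                      # all positive here, so taking logs is safe
# design matrix with columns [1, log(x), x]; fit log(y) = b0 + b1*log(x) + b2*x
X = np.column_stack([np.ones_like(x), np.log(x), x])
b0, b1, b2 = np.linalg.lstsq(X, np.log(y), rcond=None)[0]
a_start = b1 + 1.0                          # shape: b1 = a - 1
b_start = -b2                               # rate:  b2 = -b
print(a_start, b_start)                     # starting values for a nonlinear gamma fit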
In Python, I explained a trick here for fitting a LogNormal very simply using the OpenTURNS library:
import numpy as np
import openturns as ot

# N is the total number of synthetic samples drawn from the binned probabilities
# (its value is not shown in this excerpt)
n_times = [int(y_axis[i] * N) for i in range(len(y_axis))]
S = np.repeat(x_axis, n_times)
sample = ot.Sample([[p] for p in S])
fitdist = ot.LogNormalFactory().buildAsLogNormal(sample)
That's it!
print(fitdist) will show you >>> LogNormal(muLog = 2.92142, sigmaLog = 0.305, gamma = -6.24996)
and the fitting seems good:
import matplotlib.pyplot as plt
plt.hist(S, density =True, color = 'grey', bins = 34, alpha = 0.5)
plt.scatter(x_axis, y_axis, color= 'red')
plt.plot(x_axis, fitdist.computePDF(ot.Sample([[p] for p in x_axis])), color = 'black')
plt.show()

H5PY - How to store many 2D arrays of different dimensions

I would like to organize my collected data (from computer simulations) into a hdf5 file using Python.
I measured positions and velocities [x,y,z,vx,vy,vz] of all atoms within a certain space region over many time steps. The number of atoms, of course, varies from time step to time step.
A minimal example could look as follows:
[
[ [x1,y1,z1,vx1,vy1,vz1], [x2,y2,z2,vx2,vy2,vz2] ],
[ [x1,y1,z1,vx1,vy1,vz1], [x2,y2,z2,vx2,vy2,vz2], [x3,y3,z3,vx3,vy3,vz3] ]
]
(2 time steps,
first time step: 2 atoms,
second time step: 3 atoms)
My idea was to create an hdf5 dataset within Python which stores all the information. At each time step it should store a 2D array of the positions/velocities of all atoms, i.e.
dataset[0] = [ [x1,y1,z1,vx1,vy1,vz1], [x2,y2,z2,vx2,vy2,vz2] ]
dataset[1] = [ [x1,y1,z1,vx1,vy1,vz1], [x2,y2,z2,vx2,vy2,vz2], [x3,y3,z3,vx3,vy3,vz3] ].
The idea is clear, I think. However, I struggle with the definition of the correct data type of the data set with varying array length.
My code looks like this:
import numpy as np
import h5py
file = h5py.File ('file.h5','w')
columnNo = 6
rowtype = np.dtype("%sfloat32" % columnNo)
dt = h5py.special_dtype( vlen=np.dtype(rowtype) )
dataset = file.create_dataset("dset", (2,), dtype=dt)
print dataset.value
testarray = np.array([[1.,2.,3.,2.,3.,4.],[1.,2.,3.,2.,3.,4.]])
print testarray
dataset[0] = testarray
print dataset[0]
This, however, does not work. When I run the script I get the error message "AttributeError: 'float' object has no attribute 'dtype'."
It seems that my defined dtype is wrong.
Does anybody see how it should be defined correctly?
Thanks very much,
Sven
The error in your case is buried, though it is clear it occurs when trying to assign the testarray to the dataset:
Traceback (most recent call last):
File "stack41465480.py", line 26, in <module>
dataset[0] = testarray
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/build/h5py-GhwtGD/h5py-2.6.0/h5py/_objects.c:2577)
...
File "h5py/_conv.pyx", line 712, in h5py._conv.ndarray2vlen (/build/h5py-GhwtGD/h5py-2.6.0/h5py/_conv.c:6171)
AttributeError: 'float' object has no attribute 'dtype'
I'm not skilled with the special_dtype and vlen, but I was able to write numpy structured arrays to h5py.
import numpy as np
import h5py
file = h5py.File ('file.h5','w')
columnNo = 6
# rowtype = np.dtype("%sfloat32" % columnNo)
rowtype = np.dtype([('f0', '<f4',(6,))])
dt = h5py.special_dtype( vlen=np.dtype(rowtype) )
print('rowtype',rowtype)
print('dt',dt)
dataset = file.create_dataset("dset", (2,), dtype=rowtype)
print('value')
print(dataset.value[0])
arr = np.ones((2,),dtype=rowtype)
print(repr(arr))
dataset[0] = arr[0]
print(dataset.value)
testarray = np.array([([1.,2.,3.,2.,3.,4.],),([2.,3.,4.,1.,2.,3.],)], dtype=rowtype)
print(repr(testarray))
dataset[1] = testarray[1]
print(dataset.value)
print(dataset.value['f0'])
producing
1316:~/mypy$ python3 stack41465480.py
rowtype [('f0', '<f4', (6,))]
dt object
value
([0.0, 0.0, 0.0, 0.0, 0.0, 0.0],)
array([([1.0, 1.0, 1.0, 1.0, 1.0, 1.0],), ([1.0, 1.0, 1.0, 1.0, 1.0, 1.0],)],
dtype=[('f0', '<f4', (6,))])
[([1.0, 1.0, 1.0, 1.0, 1.0, 1.0],) ([0.0, 0.0, 0.0, 0.0, 0.0, 0.0],)]
array([([1.0, 2.0, 3.0, 2.0, 3.0, 4.0],), ([2.0, 3.0, 4.0, 1.0, 2.0, 3.0],)],
dtype=[('f0', '<f4', (6,))])
[([1.0, 1.0, 1.0, 1.0, 1.0, 1.0],) ([2.0, 3.0, 4.0, 1.0, 2.0, 3.0],)]
[[ 1. 1. 1. 1. 1. 1.]
[ 2. 3. 4. 1. 2. 3.]]
Thanks for the quick answer. It helped a lot.
If I now simply change the data type of the data set to
dtype = dt,
I get what I would like to have.
Here, the Python code (for completeness):
import numpy as np
import h5py
file = h5py.File ('file.h5','w')
columnNo = 6
rowtype = np.dtype([('f0', '<f4',(6,))])
dt = h5py.special_dtype( vlen=np.dtype(rowtype) )
print('rowtype',rowtype)
print('dt',dt)
dataset = file.create_dataset("dset", (2,), dtype=dt)
# print('value')
# print(dataset.value[0])
arr = np.ones((3,),dtype=rowtype)
# print(repr(arr))
dataset[0] = arr
# print(dataset.value)
testarray = np.array([([1.,2.,3.,2.,3.,4.],),([2.,3.,4.,1.,2.,3.],)], dtype=rowtype)
# print(repr(testarray))
dataset[1] = testarray
print(dataset.value)
for i in range(2): print(dataset[i])
And the corresponding output reads:
('rowtype', dtype([('f0', '<f4', (6,))]))
('dt', dtype('O'))
[ array([([1.0, 1.0, 1.0, 1.0, 1.0, 1.0],),
([1.0, 1.0, 1.0, 1.0, 1.0, 1.0],), ([1.0, 1.0, 1.0, 1.0, 1.0, 1.0],)],
dtype=[('f0', '<f4', (6,))])
array([([1.0, 2.0, 3.0, 2.0, 3.0, 4.0],), ([2.0, 3.0, 4.0, 1.0, 2.0, 3.0],)],
dtype=[('f0', '<f4', (6,))])]
[([1.0, 1.0, 1.0, 1.0, 1.0, 1.0],) ([1.0, 1.0, 1.0, 1.0, 1.0, 1.0],)
([1.0, 1.0, 1.0, 1.0, 1.0, 1.0],)]
[([1.0, 2.0, 3.0, 2.0, 3.0, 4.0],) ([2.0, 3.0, 4.0, 1.0, 2.0, 3.0],)]
Just to get it right: The problem in my original code was a bad definition of my rowtype data structure, right?
Best,
Sven
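As an addendum, a different sketch that avoids vlen types entirely (only an alternative suggestion, with hypothetical random data): store each time step as its own dataset inside a group, since datasets in a group may have different shapes:
import numpy as np
import h5py

# hypothetical data: 2 atoms in the first time step, 3 in the second
steps = [np.random.rand(2, 6), np.random.rand(3, 6)]

with h5py.File('steps.h5', 'w') as f:
    grp = f.create_group('timesteps')
    for i, arr in enumerate(steps):
        grp.create_dataset(str(i), data=arr)   # each dataset keeps its own (n_atoms, 6) shape

with h5py.File('steps.h5', 'r') as f:
    print(f['timesteps/1'][:])                 # read back the second time step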

range-like function for floats

I wanted to use the built-in range function for floats, but apparently it doesn't work, and from some quick research I understood that there isn't a built-in option for that and that I'll need to code my own function for this. So I did:
def fltrange(mini, maxi, step):
    lst = []
    while mini < maxi:
        lst.append(mini)
        mini += step
    return lst
rang = fltrange(-20.0, 20.1, 0.1)
print(rang)
input()
but this is what I get:
result (image)
The step should be just 0.1000000..., but instead it's about (it sometimes changes) 0.100000000000001.
Thanks in advance.
Fun fact: 1/10 can't be exactly represented by floating point numbers. The closest you can get is 0.1000000000000000055511151231257827021181583404541015625. The rightmost digits usually get left out when you print them, but they're still there. This explains the accumulation of errors as you continually add more 0.1s to the sum.
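You can inspect the exact stored value yourself, for example with the decimal module:
from decimal import Decimal
print(Decimal(0.1))   # prints the full value quoted above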
You can eliminate some inaccuracy (but not all of it) by using a multiplication approach instead of a cumulative sum:
def fltrange(mini, maxi, step):
    lst = []
    width = maxi - mini
    num_steps = int(width/step)
    for i in range(num_steps):
        lst.append(mini + i*step)
    return lst
rang = fltrange(-20.0, 20.1, 0.1)
print(rang)
Result (newlines added by me for clarity):
[-20.0, -19.9, -19.8, -19.7, -19.6, -19.5, -19.4, -19.3, -19.2, -19.1,
-19.0, -18.9, -18.8, -18.7, -18.6, -18.5, -18.4, -18.3, -18.2, -18.1,
-18.0, -17.9, -17.8, -17.7, -17.6, -17.5, -17.4, -17.3, -17.2, -17.1,
-17.0, -16.9, -16.8, -16.7, -16.6, -16.5, -16.4, -16.3, -16.2, -16.1,
-16.0, -15.899999999999999, -15.8, -15.7, -15.6, -15.5, -15.399999999999999, -15.3, -15.2, -15.1, -15.0,
...
19.1, 19.200000000000003, 19.300000000000004, 19.400000000000006, 19.5, 19.6, 19.700000000000003, 19.800000000000004, 19.900000000000006, 20.0]
You can use numpy for it. There are a few functions for your needs.
import numpy as np # of course :)
linspace :
np.linspace(1, 10, num=200)
array([ 1. , 1.04522613, 1.09045226, 1.13567839,
1.18090452, 1.22613065, 1.27135678, 1.31658291,
...
9.68341709, 9.72864322, 9.77386935, 9.81909548,
9.86432161, 9.90954774, 9.95477387, 10. ])
arange :
np.arange(1., 10., 0.1)
array([ 1. , 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2. ,
2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3. , 3.1,
...
8.7, 8.8, 8.9, 9. , 9.1, 9.2, 9.3, 9.4, 9.5, 9.6, 9.7,
9.8, 9.9])
P.S. However, these return arrays rather than a lazy generator like range in Python 3 (xrange in Python 2.x).
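If you do want a lazy, range-like generator, here is a minimal sketch of my own (using the multiplication approach from the earlier answer to limit error build-up):
def frange(start, stop, step):
    # yields start, start + step, start + 2*step, ... while strictly below stop
    i = 0
    while start + i * step < stop:
        yield start + i * step
        i += 1

print(list(frange(0.0, 0.5, 0.1)))   # [0.0, 0.1, 0.2, 0.30000000000000004, 0.4]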
