plotting high precision data - python

I have an array which contains error values as a function of two different quantities (alpha and eigRange).
I fill my array like this:
for j in range(n):
    for i in range(alphaLen):
        alpha = alpha_list[i]
        c = train.eig(xt_, yt_, m-j, m, alpha, "cpu")
        costListTrain[j, i] = cost.err(xt_, xt_, yt_, c)

normedValues = costListTrain / np.max(costListTrain.ravel())
where
n = 20
alpha_list = [0.0001,0.0003,0.0008,0.001,0.003,0.006,0.01,0.03,0.05]
My costListTrain array contains some values that have very small differences, e.g.:
2.809458902485728 2.809458905776425 2.809458913576337 2.809459011062461
2.030326752376704 2.030329906064879 2.030337351188699 2.030428976282031
1.919840839066182 1.919846470077076 1.919859731440199 1.920021453630778
1.858436351617677 1.858444223016128 1.858462730482461 1.858687054377165
1.475871326997542 1.475901926855846 1.475973476249240 1.476822830933632
1.475775410801635 1.475806023102173 1.475877601316863 1.476727286424228
1.475774284270633 1.475804896751524 1.475876475382906 1.476726165223209
1.463578292548192 1.463611627166494 1.463689466240788 1.464609083309240
1.462859608038034 1.462893157900139 1.462971489632478 1.463896516033939
1.461912706143012 1.461954067956570 1.462047793798572 1.463079574605320
1.450581041157659 1.452770209885761 1.454835202839513 1.459676311335618
1.450581041157643 1.452770209885764 1.454835202839484 1.459676311335624
1.450581041157651 1.452770209885735 1.454835202839484 1.459676311335610
1.450581041157597 1.452770209885784 1.454835202839503 1.459676311335620
1.450581041157575 1.452770209885757 1.454835202839496 1.459676311335619
1.450581041157716 1.452770209885711 1.454835202839499 1.459676311335613
1.450581041157667 1.452770209885744 1.454835202839509 1.459676311335625
1.450581041157649 1.452770209885750 1.454835202839476 1.459676311335617
1.450581041157655 1.452770209885708 1.454835202839442 1.459676311335622
1.450581041157571 1.452770209885700 1.454835202839498 1.459676311335622
As you can see here, the values are very close together!
I am trying to plot this data so that the two quantities are on the x and y axes and the error value is represented by the dot color.
This is how I'm plotting my data:
alpha_list = np.log(alpha_list)
eigenvalues, alphaa = np.meshgrid(eigRange, alpha_list)
vMin = np.min(costListTrain)
vMax = np.max(costListTrain)
plt.scatter(x, y, s=70, c=normedValues, vmin=vMin, vmax=vMax, alpha=0.50)
But the result is not correct.
I tried to normalize my error values by dividing all of them by the max, but it didn't work!
The only way that I could make it work (which is incorrect) is to normalize my data in two different ways: one based on each column (i.e. factor 1 constant, factor 2 changing), and the other based on each row (factor 2 constant, factor 1 changing). But that doesn't really make sense, because I need a single plot showing the trade-off between the two quantities and their effect on the error values.
UPDATE
This is what I mean by the last paragraph.
Normalizing values based on the max of each row, which corresponds to the eigenvalues:
maxsEigBasedTrain = np.amax(costListTrain.T, 1)[:, np.newaxis]
maxsEigBasedTest = np.amax(costListTest.T, 1)[:, np.newaxis]
normEigCostTrain = costListTrain.T / maxsEigBasedTrain
normEigCostTest = costListTest.T / maxsEigBasedTest
Normalizing values based on the max of each column, which corresponds to the alphas:
maxsAlphaBasedTrain = np.amax(costListTrain, 1)[:, np.newaxis]
maxsAlphaBasedTest = np.amax(costListTest, 1)[:, np.newaxis]
normAlphaCostTrain = costListTrain / maxsAlphaBasedTrain
normAlphaCostTest = costListTest / maxsAlphaBasedTest
plot 1:
plot 2: where no. of eigenvalues = 10 and alpha changes (should correspond to column 10 of plot 1):
plot 3: where alpha = 0.0001 and the eigenvalues change (should correspond to the first row of plot 1):
But as you can see, the results are different from plot 1!
UPDATE:
Just to clarify things further, this is how I read my data:
import numpy as np
from sklearn import datasets
from sklearn.datasets.samples_generator import make_regression

rng = np.random.RandomState(0)
diabetes = datasets.load_diabetes()
X_diabetes, y_diabetes = diabetes.data, diabetes.target
X_diabetes = np.c_[np.ones(len(X_diabetes)), X_diabetes]
ind = np.arange(X_diabetes.shape[0])
rng.shuffle(ind)
#===============================================================================
# Split Data
#===============================================================================
import math
cross = math.ceil(0.7 * len(X_diabetes))
ind_train = ind[:cross]
X_train, y_train = X_diabetes[ind_train], y_diabetes[ind_train]
ind_val = ind[cross:]
X_val, y_val = X_diabetes[ind_val], y_diabetes[ind_val]
I also uploaded .csv files HERE
log.csv contains the original values before normalization for plot 1
normalizedLog.csv for plot 1
eigenConst.csv for plot 2
alphaConst.csv for plot 3

I think I found the answer. First of all, there was one problem in my code: I was expecting the "No. of eigenvalues" to correspond to rows, but in my for loop they fill the columns. The correct version is this:
for i in range(alphaLen):
    for j in range(n):
        alpha = alpha_list[i]
        c = train.eig(xt_, yt_, m-j, m, alpha, "cpu")
        costListTrain[i, j] = cost.err(xt_, xt_, yt_, c)
        costListTest[i, j] = cost.err(xt_, xv_, yv_, c)
After asking friends and colleagues I got this answer:
I would assume that imshow and the other plotting commands you might want to use default to equally sized intervals on the values you are plotting. If you can set that to logarithmic you should be fine. Ideally, equally "populated" bins would prove most effective, I guess.
For plotting I therefore just subtract the minimum value from the errors, then add a small number, and at the end take the log.
temp = costListTrain - costListTrain.min()
temp += 0.00000001
extent = [0, 20, alpha_list[0], alpha_list[-1]]
plt.imshow(np.log(temp), interpolation="nearest", cmap=plt.get_cmap('spectral'), extent=extent, origin="lower")
plt.colorbar()
and the result is:
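As an aside (not part of the original answer): Matplotlib can also do the logarithmic colour scaling for you via matplotlib.colors.LogNorm, so the data does not have to be transformed by hand. A minimal sketch on made-up data that mimics the shape of costListTrain:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

# Made-up stand-in for costListTrain: values that differ only very slightly.
rng = np.random.default_rng(0)
cost = 1.45 + 1e-6 * rng.random((9, 20))

# Shift so the smallest value maps to a small positive number, then let
# LogNorm stretch the tiny differences over the whole colormap.
shifted = cost - cost.min() + 1e-8
plt.imshow(shifted, interpolation="nearest", origin="lower",
           norm=LogNorm(vmin=shifted.min(), vmax=shifted.max()))
plt.colorbar()
plt.show()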

Related

normalization of values in python np array gone wrong?

I have a matrix of floats shaped (3000, 9).
Across one line there is one "simulation".
Across columns, for a fixed line, there are the contents of that "simulation".
I want, for each simulation, the first 8 columns to be normalized by the sum of the first 8 columns.
That is, the first column's entry (for a fixed line) should become what it was before, divided by the sum of the first 8 columns (for that same line).
A trivial task, but I go from a nice, correct (non-normalized) graph to something totally unphysical when plotting with plt.scatter.
The last column of each line is what we are going to use for the x-axis when plotting the first 8 columns (the y values).
So one line will represent 8 data points for one fixed value of x.
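In NumPy terms, the intended per-row normalization can be written in one vectorized step (a sketch of my own, not part of the original post; non_norm is the loaded array):
import numpy as np

non_norm = np.loadtxt("integration_results_3000samples_10_20_10_25_Wcm2_BenSimulationFromSlack.txt")
# Divide the first 8 columns of every row by that row's sum over those 8 columns.
norm_first8 = non_norm[:, :8] / non_norm[:, :8].sum(axis=1, keepdims=True)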
The non-normalized graph:
https://ibb.co/Msr8RVB
The normalized graph:
https://ibb.co/tJp7bZn
The datasets:
non-normalized: https://easyupload.io/oat9kq
My code:
import numpy as np
from matplotlib import pyplot as plt

non_norm = np.loadtxt("integration_results_3000samples_10_20_10_25_Wcm2_BenSimulationFromSlack.txt")

plt.figure()
for i in range(non_norm.shape[1]-1):
    plt.scatter(non_norm[:, -1], non_norm[:, i], label="c_{}".format(i+47))
plt.xscale("log")
plt.savefig("non-norm_Ben3000samples.pdf", bbox_inches='tight')

norm = np.empty((non_norm.shape[0], non_norm.shape[1]))
norm[:, -1] = non_norm[:, -1]
for i in range(norm.shape[1]-1):
    for j in range(norm.shape[0]):
        norm[j, i] = np.true_divide(non_norm[j, i], np.sum(non_norm[j, :-1]))

plt.figure()
for i in range(norm.shape[1]-1):
    plt.scatter(norm[:, -1], norm[:, i], label="c_{}".format(i+47))
plt.xscale("log")
plt.savefig("norm_Ben3000samples.pdf", bbox_inches='tight')
Do you see what went wrong?
Thank you
When you're normalising a row that has just one non-zero value and 7 zeroes, that value becomes 1 and the rest of the row stays 0. This is likely why your plot is messing up.
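A tiny illustration with made-up numbers (not from the original data):
import numpy as np

# Made-up row: one dominant value, seven zeroes, last column is the x value.
row = np.array([5.0, 0, 0, 0, 0, 0, 0, 0, 1e14])
normalized = row[:-1] / row[:-1].sum()
print(normalized)  # [1. 0. 0. 0. 0. 0. 0. 0.] -- the structure within the row is gone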
For example, the plot for the first column looks like this before and after normalization:

Pyplot - show x-axis labels according to y-axis value

I have a 1 min 20 s long video recording at 23.813 FPS. More precisely, I have 1923 frames in which I've been scanning for desired features. I've detected some specific behavior via a neural network, and using a chosen metric I calculated a value for each frame.
So now I have X-Y values to plot a graph:
X: time (each step of size 0.041993869 s)
Y: a value measured by the neural network
In the default state, the plot looks like this:
So, I've tried to limit the number of bins, in the hope that the bins would be spread over all my values. But they are not. As you can see, only the first fifteen x-values are rendered:
pyplot.locator_params(axis='x', nbins=15)
But neither one is the desired state. The desired state should render the labels only for those x-bins whose y-value is higher than e.g. 1.2. So it should look like this:
Is possible to achieve such result?
Code:
# draw plot
from pandas import read_csv
from matplotlib import pyplot
test_video_fps = 23.813
df = read_csv('/path/to/csv/file/file.csv', header=None)
df.columns = ['anomaly']
df['time'] = [round((i + 1) / test_video_fps, 2) for i in range(df.shape[0])]
axes = df.plot.bar(x='time', y='anomaly', rot='0')
# pyplot.locator_params(axis='x', nbins=15)
# axes.get_xaxis().set_visible(False)
fig = pyplot.gcf()
fig.set_size_inches(16, 10)
fig.savefig('/path/to/output/plot.png', dpi=100)
# pyplot.show()
Example:
Simple example with a subset of original data.
0.379799
0.383786
0.345488
0.433286
0.469474
0.431993
0.474253
0.418843
0.491070
0.447778
0.384890
0.410994
0.898229
1.872756
2.907009
3.691382
4.685749
4.599612
3.738768
8.043357
7.660785
2.311198
1.956096
2.877326
3.467511
3.896339
4.250552
6.485533
7.452986
7.103761
2.684189
2.516134
1.512196
1.435303
0.852047
0.842551
0.957888
0.983085
0.990608
1.046679
1.082040
1.119655
0.962391
1.263255
1.371034
1.652812
2.160451
2.646674
1.460051
1.163745
0.938030
0.862976
0.734119
0.567076
0.417270
Desired plot:
Your question has become a two-part problem, but it is interesting enough that I will answer both.
I will answer this in Matplotlib object oriented notation with numpy data rather than pandas. This will make things easier to explain, and can be easily generalized to pandas.
I will assume that you have the following two data arrays:
import numpy as np

dt = 0.041993869
x = np.arange(0.0, 15 * dt, dt)
y = np.array([1., 1.1, 1.3, 7.6, 2.4, 0.8, 0.7, 0.8, 1.0, 1.5, 10.0, 4.5, 3.2, 0.9, 0.7])
Part 1: Identifying the locations where you want labels
The data can be masked to get the locations of the peaks:
mask = y > 1.2
Consecutive peaks can be easily eliminated by computing the diff. A diff of a boolean mask will be True at the locations where the mask changes sense. You will then have to take every other element to get the locations where it goes from False to True. The following code will capture all the corner cases where you start with a peak or end in the middle of a peak:
d = np.flatnonzero(np.diff(mask))
if mask[d[0]]:  # First diff is end of peak: True to False
    d = np.concatenate(([0], d[1::2] + 1))
else:
    d = d[::2] + 1
d is now an array of indices into x and y representing the first element of each run of peaks. You can get the last element of each run by swapping the index expressions [1::2] and [::2] in the if-else statement and removing the + 1 in both cases.
The locations of the labels are now simply x[d].
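For completeness, a sketch of that last-element variant, derived from the description above rather than taken verbatim from it (it also closes a run that extends to the end of the data):
# Last element of each run of peaks: swap [1::2] / [::2] and drop the "+ 1".
d_end = np.flatnonzero(np.diff(mask))
if mask[d_end[0]]:   # series starts inside a peak
    d_end = d_end[::2]
else:
    d_end = d_end[1::2]
if mask[-1]:         # series ends inside a peak: close the final run
    d_end = np.concatenate((d_end, [len(mask) - 1]))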
Part 2: Locating and formatting the labels
For this part, you will need to access Matplotlib's object oriented API via the Axes object you are plotting on. You already have this in the pandas form, making the transfer easy. Here is a sample in raw Matplotlib:
import matplotlib.pyplot as plt

fig, axes = plt.subplots()
axes.plot(x, y)
Now use the ticker API to easily set the locations and labels. You actually set the locations directly (not with a Locator) since you have a very fixed list of ticks:
from matplotlib import ticker

axes.set_xticks(x[d])
axes.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:0.01g}s'))
For the sample data shown here, you get:

Sum of Gaussians by multiple regression

I have 16 Gaussian curves which I have to fit with one Gaussian curve. I was unable to implement the sum of Gaussians (multiple regression) in Python.
Here is the code I am using:
import matplotlib.pyplot as plt
import numpy as np

a = np.array([3750.0, -250.0, 6750.0, 2750.0, -2050.0, 6350.0, 1550.0, -4050.0, 5750.0, 150.0, -6250.0, 4950.0, -1450.0, -8650.0, 3950.0, -3250.0])
v1 = np.array([2.5470357695283954, 0.1937004980283323, 0.43831655553839766, 6.07645636407398, 0.6331239135554633, 0.969937308645575, 13.38133838752005, 1.3226417845166933, 1.5531178254607325, 27.599625693090765, 2.031000233294804, 1.635762971986014, 53.83073800155456, 2.0719664311822843, 0.0, 100.0])
x = []
s = []
v5 = 9.9e2
for j in range(0, len(a)):
    for i in range(-1500, 1500):
        v11 = a[j] + i
        x.append(v11)
        z = np.exp((-4*np.log(2)*((v11-a[j])/(v5))**2))*((4.5*np.log(2)/(np.pi))**0.5)
        s.append(z*v1[j])
plt.plot(x, s, '--r')
plt.stem(a, v1)
Which generates the following plot (with the problem circled):
Instead of the desired output:
The output of your code shows this overlapping because you are not summing the 16 gaussians; instead you are creating an array containing [x1_g1, x2_g1, ..., x3000_g1, x1_g2, ..., x3000_g16], and the same for s. It is a 1d array containing the 3000 x values of the first gaussian, then the 3000 x values of the second gaussian, and so on, but they are never added. Thus, the plot shows the 16 independent gaussians instead of the sum, which is the desired output.
In the actual code, the x values of each gaussian are different (going -1500 to +1500 around its center), which makes adding the 16 gaussians more complicated.
If we consider only the first 2 gaussians, for instance, centered at 3750 and -250, the values appended to x from the first gaussian go from 2250 to 5250 in steps of 1, together with their images in s, which are s(2250), ... Afterwards, the values of the second gaussian (x between -1750 and 1250) are appended (not added), which results in an x list like this:
x = [2250, 2251, <in steps of 1>, 5249, 5250, -1750, -1749, <in steps of 1>, 1250]
And s is a list where each position contains the image of the same position in x. Starting from this format, getting the final output, which is the sum of the gaussians, is difficult, because we would have to check for equivalent values of x and sum their contributions...
However, if instead we always evaluate the gaussians at the same positions (in the example, between -1750 and 5250 in steps of 1), we will have many more values stored, and most of them will be zero, but adding them will be straightforward.
Half-way vectorization
One option similar to the code in the question is the following:
a = np.array([3750.0, -250.0, 6750.0, 2750.0, -2050.0, 6350.0, 1550.0, -4050.0, 5750.0, 150.0, -6250.0, 4950.0, -1450.0, -8650.0, 3950.0, -3250.0])
v1 = np.array([2.5470357695283954, 0.1937004980283323, 0.43831655553839766, 6.07645636407398, 0.6331239135554633, 0.969937308645575, 13.38133838752005, 1.3226417845166933, 1.5531178254607325, 27.599625693090765, 2.031000233294804, 1.635762971986014, 53.83073800155456, 2.0719664311822843, 0.0, 100.0])
v5 = 9.9e2

xrange = np.arange(a.min()-1500, a.max()+1500)
# This generates an array between the minimum of a minus 1500 and the maximum of a
# plus 1500. This way, all the values in the old x list are contained in this array.
# Therefore, it becomes really easy to sum the contribution of each gaussian,
# because only an element-wise sum is needed.

s = np.zeros(len(xrange))
for j, aj in enumerate(a):
    z = np.exp((-4*np.log(2)*((xrange-aj)/(v5))**2))*((4.5*np.log(2)/(np.pi))**0.5)
    s += z*v1[j]

plt.plot(xrange, s, '--r')
plt.stem(a, v1)
The output plot is the same as for the completely vectorized solution.
Completely vectorized solution
One simple solution is to define a unique xrange for all 16 gaussians, then calculate s for each of them (on the same x values) and finally sum over the 16 gaussians:
a = np.array([3750.0, -250.0, 6750.0, 2750.0, -2050.0, 6350.0, 1550.0, -4050.0, 5750.0, 150.0, -6250.0, 4950.0, -1450.0, -8650.0, 3950.0, -3250.0])
v1 = np.array( [2.5470357695283954, 0.1937004980283323, 0.43831655553839766, 6.07645636407398, 0.6331239135554633, 0.969937308645575, 13.38133838752005, 1.3226417845166933, 1.5531178254607325, 27.599625693090765, 2.031000233294804, 1.635762971986014, 53.83073800155456, 2.0719664311822843, 0.0, 100.0])
v5 = 9.9e2
xrange = np.arange(a.min()-1500,a.max()+1500)
z = np.exp((-4*np.log(2)*((xrange-a.reshape((len(a),1)))/(v5))**2))*((4.5*np.log(2)/(np.pi))**0.5)
s = z*v1.reshape((len(a),1))
plt.plot(xrange,s.sum(axis=0),'--r')
plt.stem(a,v1)
Note that I have removed the 2 nested loops using numpy.
The loop over range(-1500, 1500) can be avoided by defining i = np.arange(-1500, 1500) instead of the for i in ... and leaving the rest of the code untouched (only the indentation has to be updated). That is because numpy operates element-wise over arrays.
The second loop is a bit trickier. The a and v1 arrays are reshaped to 2d arrays in order to generate a z with shape (16, len(xrange)). That is why combining an array xrange of length much larger than 16 with a does not raise any dimension-mismatch error: one is broadcast along the 1st dimension and the other along the 2nd.
The code above generates the following plot:
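As a quick illustration of that broadcasting (my own sketch with the same shapes, not part of the original answer):
import numpy as np

a = np.zeros(16)           # stand-in for the 16 gaussian centers, shape (16,)
xrange = np.arange(7000)   # stand-in for the evaluation grid, shape (7000,)

# Reshaping a to (16, 1) lets numpy broadcast it against xrange's shape (7000,):
diff = xrange - a.reshape((len(a), 1))
print(diff.shape)          # (16, 7000): one row per gaussian, one column per x value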
Groupby solution
There is also the option of working with the same code to generate x and s, and afterwards plotting every unique value of x (the same value of x can be found at x[i1], x[i2], x[i3]) against s[i1]+s[i2]+s[i3].
This can be done by adding the following code after the loops:
x, s = np.array(x), np.array(s)
ind = np.argsort(x)
x, s = x[ind], s[ind]
unique_x = np.unique(x)
catsums = []
for k in unique_x:
    catsums.append(np.sum(s[np.where(x == k)]))
plt.plot(unique_x, catsums, '--r')
plt.stem(a, v1)
This groupby can also be vectorized using numpy or pandas as it is explained in this other SO answer
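A sketch of one such vectorized group-by (my own illustration, assuming x and s are the flat arrays built by the original loops, and a and v1 are defined as above):
import numpy as np
import matplotlib.pyplot as plt

# np.unique with return_inverse maps every element of x to its group index;
# np.bincount then sums the s values that share the same x.
unique_x, inverse = np.unique(x, return_inverse=True)
catsums = np.bincount(inverse, weights=s)

plt.plot(unique_x, catsums, '--r')
plt.stem(a, v1)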

how to plot data with samples that die off?

I am benchmarking the effectiveness of a program that runs until it finds a solution, and I am trying to create charts that show how the program tends to go about finding it. The program sometimes takes 500 attempts and sometimes takes 2000; I can show that both runs steadily produce better and better answers until they hit the target. I have hundreds of runs to examine, so I would like to see how the average of all of the runs moves over time. However, numpy does not allow me to average data of different lengths. How can I get it to average just the data points that are available at each test number?
EX: trial1 = [33.4853, 32.3958, 30.2859, 33.2958, 30.1049, 29.3209]
trial2 = [45.2937, 44.2983, 42.2839, 42.1394, 41.2938, 39.2936, 38.1826, 36.2483, 39.2632, 37.1827, 35.9936, 32.4837, 31.5599, 29.3209]
BE = numpy.array([trial1, trial2])
BEave = numpy.average(BE, axis=0)
I would like to get back: BEave = [39.3895, 38.34705, 36.2849, 37.7176, 35.69935, 34.30725, 38.1826, 36.2483, 39.2632, 37.1827, 35.9936, 32.4837, 31.5599, 29.3209]
You may create a large array of NaNs and fill in each trial up to its respective number of values; the rest of each row stays NaN. Then take the mean along the vertical axis, ignoring the NaNs, using numpy.nanmean.
import numpy as np
import matplotlib.pyplot as plt

trial1 = [33.4853, 32.3958, 30.2859, 33.2958, 30.1049, 29.3209]
trial2 = [45.2937, 44.2983, 42.2839, 42.1394, 41.2938, 39.2936, 38.1826,
          36.2483, 39.2632, 37.1827, 35.9936, 32.4837, 31.5599, 29.3209]

m = np.max([len(trial1), len(trial2)])
# create array of nans
BE = np.ones((2, m)) * np.nan
# fill it with trials up to the number of trial values
BE[0, :len(trial1)] = trial1
BE[1, :len(trial2)] = trial2
# nanmean = take mean, ignore nans
BEave = np.nanmean(BE, axis=0)

plt.plot(trial1, label="trial1", color="mediumpurple")
plt.plot(trial2, label="trial2", color="violet")
plt.plot(BEave, color="crimson", label="avg")
plt.legend()
plt.show()
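Since the question mentions hundreds of runs, a sketch (my own generalization, not part of the original answer) of the same NaN-padding idea for an arbitrary list of trials:
import numpy as np

trials = [trial1, trial2]          # in practice: the full list of hundreds of runs
m = max(len(t) for t in trials)

BE = np.full((len(trials), m), np.nan)
for row, t in enumerate(trials):
    BE[row, :len(t)] = t

# Average over whatever runs are still alive at each attempt number.
BEave = np.nanmean(BE, axis=0)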

Plot sparsely populated 2d numpy array

From an iterative image pattern search with decreasing step size I have a 'quality' array. Due to the nature of the search pattern, the array is not fully filled. In the first iteration I go with step size 10, find the best spot, and then search a +-10 XY range around it to find the true best spot. So most of the array has only every 10th slot filled, plus a small 'best' region that is densely filled. Now I want to plot this array and would like the plot to be 'interpolated' where needed, using the data in every 10th slot. To do my search I initialize the array with a huge value; all my measurements are smaller, and later I use the np.argmin(q) function. That works fine for searching, but for plotting it is bad: the dynamic range of the plot is lost.
Here is an example from an older version of the code that does an exhaustive but unnecessarily long search:
And here is what I get with the optimized search:
Here is the piece of code that does the plots (q is the quality array to plot):
fig= plt.figure(1)
im= plt.imshow(q[::-1], cmap='rainbow', interpolation='none', extent=[-search_size,search_size,-search_size,search_size])
fig.savefig(pfn(img_fn), bbox_inches='tight')
The issue may point back to the initialization of the array. Since I do a minimum search, I initialize it like this:
q = np.empty(shape=(2*search_size, 2*search_size))
q.fill(+1e20)
q_min = 1e20
for xs in range(-search_size, +search_size, search_step):
    for ys in range(-search_size, +search_size, search_step):
        img_shift = np.zeros_like(img)
        img_shift[mom(ys):non(ys), mom(xs):non(xs)] = img[mom(-ys):non(-ys), mom(-xs):non(-xs)]
        d = np.absolute(img_shift - prev_img)[search_size:-search_size, search_size:-search_size]
        q[ys+search_size, xs+search_size] = np.sum(d)
        if q[ys+search_size, xs+search_size] < q_min:
            q_min = q[ys+search_size, xs+search_size]
        #print('1st iter try : %+3d %+3d %6.3f %6.3f' % (xs, ys, q[ys+search_size, xs+search_size], q_min))
idxmin = np.argmin(q)
dy, dx = np.unravel_index(idxmin, q.shape)
dx = dx - search_size
dy = dy - search_size
print('1st iter best : dx= %+3d dy= %+3d' % (dx, dy))
Is it possible to initialize the array, e.g. with NaN? Would that still allow the minimum search? And/or would it allow the plotter to jump across undefined entries?
So what's the best way to initialize / plot so that the search works and the plots look good?
Thanks,
Gert
Update for @Nix G-D
The averaging fails. I first tried code following the recommendation:
q_int = pd.DataFrame(q).interpolate(method='linear', axis=0).values
fig= plt.figure(1)
im= plt.imshow(q_int[::-1], cmap='rainbow', interpolation='none', extent=[-search_size,search_size,-search_size,search_size])
However, the 2D interpolation failed (at least as indicated by the plot). I tried to add code to perform both X and Y interpolation:
q_int = pd.DataFrame(q).interpolate(method='linear', axis=0).values
q_int = pd.DataFrame(q_intx).interpolate(method='linear', axis=1).values
fig= plt.figure(1)
im= plt.imshow(q_int[::-1], cmap='rainbow', interpolation='none', extent=[-search_size,search_size,-search_size,search_size])
But the results were still corrupted.
Best,
Gert
You can initialize the array with NaN easily:
shape = (2*search_size, 2*search_size)
q = np.full(shape, np.nan)
This can then be searched as normal. To find the minimum indices ignoring NaNs, you can use np.nanargmin()
In [12]: np.nanargmin([1,-1,4,float('nan')])
Out[12]: 1
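For the 2D q array in the question, the flat index returned by np.nanargmin can be converted back to (dy, dx) the same way as before (a sketch, assuming q and search_size are defined as in the question):
idxmin = np.nanargmin(q)                    # ignores the NaN placeholders
dy, dx = np.unravel_index(idxmin, q.shape)  # flat index -> row, column
dx = dx - search_size
dy = dy - search_size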
To get rid of these NaN values we can use pandas.DataFrame.interpolate():
import pandas as pd

q_interpolated = pd.DataFrame(q).interpolate(method='linear', axis=0).values
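A small end-to-end sketch of the whole idea on synthetic data (my own illustration, not part of the original answer; it interpolates along both axes, which the update above was attempting):
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

search_size = 20
q = np.full((2*search_size, 2*search_size), np.nan)

# Sparse fill: every 10th slot gets a (synthetic) quality value.
yy, xx = np.mgrid[0:2*search_size:10, 0:2*search_size:10]
q[yy, xx] = (yy - 25)**2 + (xx - 12)**2

# Interpolate down the columns, then across the rows, to fill the gaps for plotting.
# limit_direction='both' also fills the NaNs before the first / after the last sampled slot.
q_int = pd.DataFrame(q).interpolate(method='linear', axis=0, limit_direction='both').values
q_int = pd.DataFrame(q_int).interpolate(method='linear', axis=1, limit_direction='both').values

plt.imshow(q_int[::-1], cmap='rainbow', interpolation='none',
           extent=[-search_size, search_size, -search_size, search_size])
plt.colorbar()
plt.show()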
