When converting polar to cartesian, a slice of the pie is missing - python

I have done measurements in external software, working in the cylindrical coordinates R, phi and z. However, I select a single z to make a contour plot over, so I am left with coordinates in R and phi. To turn those into x and y, I make 2D arrays of x and y, with x equal to R * cos(phi) and y equal to R * sin(phi). Like this:
t_xray = np.zeros((Rbins, Phibins))
t_yray = np.zeros((Rbins, Phibins))
for i in range(0, Rbins):
    for j in range(0, Phibins):
        t_xray[i,j] = Rray[i] * np.cos(Phiray[j])
        t_yray[i,j] = Rray[i] * np.sin(Phiray[j])
with Rbins and Phibins being equal to the length of the arrays of R's and phi's. Seems like a legitimate way to get it done, right? Apparently not, as this is what my plot looks like:
[Plot with a slice of the pie missing.] The plot was made with:
plt.contourf(t_xray, t_yray, Doos_TG43, 1000, locator = ticker.LogLocator())
cbar = plt.colorbar(label = r'$\it{D}$ (cGy$\cdot$ h$^{-1}$)')
My first thought was that there was somehow a bigger leap between two angles that Python couldn't interpolate across, but when printing the array of phi's, you can see the leap between the first and last angle in the array is the same as between any two consecutive elements (counting k*2pi + phi as phi):
[0.03141593 0.09424778 0.15707963 0.21991149 0.28274334 0.34557519
0.40840704 0.4712389 0.53407075 0.5969026 0.65973446 0.72256631
0.78539816 0.84823002 0.91106187 0.97389372 1.03672558 1.09955743
1.16238928 1.22522113 1.28805299 1.35088484 1.41371669 1.47654855
1.5393804 1.60221225 1.66504411 1.72787596 1.79070781 1.85353967
1.91637152 1.97920337 2.04203522 2.10486708 2.16769893 2.23053078
2.29336264 2.35619449 2.41902634 2.4818582 2.54469005 2.6075219
2.67035376 2.73318561 2.79601746 2.85884931 2.92168117 2.98451302
3.04734487 3.11017673 3.17300858 3.23584043 3.29867229 3.36150414
3.42433599 3.48716785 3.5499997 3.61283155 3.6756634 3.73849526
3.80132711 3.86415896 3.92699082 3.98982267 4.05265452 4.11548638
4.17831823 4.24115008 4.30398194 4.36681379 4.42964564 4.49247749
4.55530935 4.6181412 4.68097305 4.74380491 4.80663676 4.86946861
4.93230047 4.99513232 5.05796417 5.12079603 5.18362788 5.24645973
5.30929158 5.37212344 5.43495529 5.49778714 5.560619 5.62345085
5.6862827 5.74911456 5.81194641 5.87477826 5.93761012 6.00044197
6.06327382 6.12610567 6.18893753 6.25176938]
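(A quick sanity check, added here for illustration, using the first, second and last printed values confirms that the wrap-around gap matches the regular spacing:)
import numpy as np
print(0.09424778 - 0.03141593)            # regular spacing, about 0.0628
print(0.03141593 + 2*np.pi - 6.25176938)  # gap from the last angle back to the first, also about 0.0628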
So it seems I am completely missing something here. Why is it as if a slice is cut out of the 'pie', despite everything I just mentioned?
To summarise, I tried to see whether the problem is something with the angles, but even that does not bring the slice back. I have no idea what causes a piece to suddenly go missing.
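One hedged guess at what is going on (not part of the original post): plt.contourf only fills cells between columns that are adjacent in the arrays, so it never connects the last phi column back to the first one; appending the first phi again at phi + 2*pi closes that wedge. A sketch, assuming Doos_TG43 has the same (Rbins, Phibins) shape as t_xray:
import numpy as np
import matplotlib.pyplot as plt

Phiray_closed = np.append(Phiray, Phiray[0] + 2*np.pi)                # repeat the first angle one turn later
Doos_closed = np.concatenate([Doos_TG43, Doos_TG43[:, :1]], axis=1)   # repeat its data column as well

# same grids as the double loop, built with meshgrid
R2d, Phi2d = np.meshgrid(Rray, Phiray_closed, indexing='ij')
t_xray = R2d * np.cos(Phi2d)
t_yray = R2d * np.sin(Phi2d)

plt.contourf(t_xray, t_yray, Doos_closed, 1000)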

Related

How can I use the scipy.interpolate.interp1d in python to plot 2 y curves instead of 1?

I'm pretty new to Python. I have a piece of code that reads some data from a file, creates several arrays and plots them with plt.plot. The arrays are s for the x-axis, and P_abs_i and P_abs_e for the y-axis. The code was working fine until I tried to plot smooth lines instead of the default ones.
I tried to use the scipy.interpolate.interp1d function to plot smooth lines. I used np.array to turn my arrays into numpy arrays, following the example in the interp1d guide. I then used interp1d to create a cubic interpolation curve and np.linspace to get evenly spaced samples. It worked for one of the lines (P_abs_e), so I then tried to copy the same process for the other line (P_abs_i), but I got the error message: "ValueError: x and y must have same first dimension, but have shapes (500,) and (1, 500)". Can somebody help? (The code is below; not sure if it's going to show properly since this is my first time posting.)
x_e = np.array(s)
y_e = np.array(P_abs_e)
cubic_interpolation_model_e = interp1d(x_e, y_e, kind = "cubic")
X_e=np.linspace(x_e.min(), x_e.max(), 500)
Y_e=cubic_interpolation_model_e(X_e)
plt.plot(X_e, Y_e, 'b', label = 'e')
x_i = np.array(s)
y_i = np.array(P_abs_i)
cubic_interpolation_model_i = interp1d(x_i, y_i, kind = "cubic")
X_i=np.linspace(x_i.min(), x_i.max(), 500)
Y_i=cubic_interpolation_model_i(X_i)
plt.plot(X_i, Y_i, 'g', label = 'He3')
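A hedged guess at the cause, since the post does not show how P_abs_i is built: if it was read in as a one-row nested list, np.array(P_abs_i) has shape (1, N), interp1d then returns a (1, 500) result, and plt.plot refuses to pair that with the (500,) X_i. Flattening the y data first would avoid this:
import numpy as np
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt

x_i = np.array(s)
y_i = np.ravel(P_abs_i)   # collapses a possible (1, N) shape down to (N,)
cubic_interpolation_model_i = interp1d(x_i, y_i, kind="cubic")
X_i = np.linspace(x_i.min(), x_i.max(), 500)
Y_i = cubic_interpolation_model_i(X_i)
plt.plot(X_i, Y_i, 'g', label='He3')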

Converting Matlab to Python, errors with arrays and for loops

I am pretty new to programming so be patient with me lol. I am trying to convert an example MATLAB code to Python but I am having trouble with arrays in for loops and keep getting index errors.
Here is the given MATLAB code:
clear all
close all
clc
m=100; %kg
k=1000; %N/m
c=25;
v0=0;
x0=0;
dt=0.0005;
F=1000; % N the mag of input force
f0=F/m;
w=2.5; %rad/sec input frequency
t=0:dt:10;
wn=(k/m)^0.5;% rad/sec natural frequency
ze=c/(2*(k*m)^0.5);
A=[0 1; -wn^2 -2*ze*wn];
X0=[x0;v0]; %intial conditions
for i=1:length(t)
    X(:,i)=X0;
    Finput=[0;f0*cos(w*t(i))];
    X0=X0+A*X0*dt+dt*Finput;
end
figure,plot(t,X(1,:));
title('Displacement vs tiem')
xlabel('time (second)')
ylabel('Displacement')
grid on
figure,plot(t,X(2,:),'r');
xlabel('time (second)')
ylabel('Velocity')
My code
import numpy as np
import matplotlib.pyplot as plt
#constants
k=1000
m=100
v0=0.0
x0=0.0
f=1000
c=25
f0 = f/m
wn = np.sqrt(k/m)
w = wn*2
ze =c/(2*(k*m)**0.5)
A = np.array([[0.0,1.0],[-wn**2,-2*ze*wn]])
X0= np.array([x0,v0])
dt = 0.01
t = np.arange(0, 2.5, dt) #get values between -10 and 10 with 0.01 step and set to y
for i in range (len(t)):
print(X0)
X0[:,i]=X0 #error
print(X0)
Finput = np.array([0.0,(f0*np.cos(w*dt*i))])
X0 = X0 + A*dt*X0+dt*Finput
plt.plot(t, X0[0,:])
plt.plot(t, X0[1,:])
plt.show()
I keep getting an "IndexError: too many indices for array" for the X0[:,i]=X0 part in my for loop and am struggling to figure out why.
Many thanks in advance for the help!
In the MATLAB code, X(:,i)=X0; assigns X0 to the i-th column of X. But your Python line X0[:,i]=X0 is assigning X0 to the i-th column of X0 itself.
The first time MATLAB runs the line
X(:,i)=X0;
it creates a new variable X whose i'th column is equal to X0. In your code i is 1 when this happens but if i were > 1, MATLAB would initialise columns 1...i-1 with zeroes. After the loop is done, the code plots the data from the matrix X.
You have mistakenly translated this as X0[:,i]=X0 in your Python code, which gives an error because you're trying to assign to X0 as if it were a two-dimensional array when it's only one-dimensional.
Python and numpy don't automatically create and grow arrays when you assign to a subarray in the way that MATLAB does. So in Python you need to create the array X before the loop, and then either resize it each time before you assign to the next column, or just initialise it with the right size when you create it. Since you know how big it's going to be, i.e. len(t) columns, do the latter; you can use np.zeros for this.
Also, in the Python code as you have posted it the line X0 = X0 + A*dt*X0+dt*Finput is outside the loop because the previous line has no indentation - Python should raise an IndentationError for this though. Conventionally you should use four spaces for each level of indentation.
After the loop in the Python code you want to plot the contents of X, not X0.
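Putting those points together, a minimal sketch of the corrected translation might look like this (using the constants from the MATLAB code; note that NumPy's * is elementwise, so the matrix-vector product needs @ or np.dot):
import numpy as np
import matplotlib.pyplot as plt

m, k, c = 100, 1000, 25
v0 = x0 = 0.0
F = 1000.0
f0 = F / m
w = 2.5                      # rad/sec input frequency
dt = 0.0005
t = np.arange(0, 10 + dt, dt)
wn = np.sqrt(k / m)          # natural frequency
ze = c / (2 * np.sqrt(k * m))

A = np.array([[0.0, 1.0], [-wn**2, -2*ze*wn]])
X0 = np.array([x0, v0])
X = np.zeros((2, len(t)))    # preallocate, like MATLAB's X

for i in range(len(t)):
    X[:, i] = X0             # store the current state in column i
    Finput = np.array([0.0, f0 * np.cos(w * t[i])])
    X0 = X0 + A @ X0 * dt + dt * Finput

plt.plot(t, X[0, :])         # displacement
plt.plot(t, X[1, :])         # velocity
plt.show()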

Plot sparsely populated 2d numpy array

From an iterative image pattern search with decreasing step size I have a 'quality' array. Due to the nature of the search pattern the array is not fully filled. In the first iteration I go with step size 10, find the best spot and then search a +-10 XY range around it to find the true best spot. So most of the array has only every 10th slot filled, and there is a small 'best' region that is densely filled. Now I want to plot this array and would like the plot to be 'interpolated' where needed, using the data in every 10th slot. To do my search I initialize the array with a huge value. All my measurements are smaller, and later I use the np.argmin(q) function. That works fine for searching, but for plotting it is bad: the dynamic range of the plot is lost.
Here is an example from an older version of the code that does an exhaustive but unnecessarily long search:
And here is what I get with the optimized search:
Here is the piece of code that does the plots. (q is the quality array to plot)
fig= plt.figure(1)
im= plt.imshow(q[::-1], cmap='rainbow', interpolation='none', extent=[-search_size,search_size,-search_size,search_size])
fig.savefig(pfn(img_fn), bbox_inches='tight')
The issue may point back to the initialization of the array. Again, as I do a minimum search, I do this:
q = np.empty(shape=(2*search_size,2*search_size))
q.fill(+1e20)
q_min = 1e20
for xs in range(-search_size,+search_size,search_step):
    for ys in range(-search_size,+search_size,search_step):
        img_shift = np.zeros_like(img)
        img_shift[mom(ys):non(ys), mom(xs):non(xs)] = img[mom(-ys):non(-ys), mom(-xs):non(-xs)]
        d = np.absolute(img_shift - prev_img)[search_size:-search_size,search_size:-search_size]
        q[ys+search_size,xs+search_size] = np.sum(d)
        if q[ys+search_size,xs+search_size] < q_min : q_min= q[ys+search_size,xs+search_size]
        #print '1st iter try : %+3d %+3d %6.3f %6.3f' % ( xs, ys, q[ys+search_size,xs+search_size], q_min)
idxmin = np.argmin(q)
dy,dx = np.unravel_index(idxmin, q.shape)
dx= dx-search_size
dy= dy-search_size
print '1st iter best : dx= %+3d dy= %+3d' % ( dx , dy )
Then follows another loop with search_step = 1.
Is it possible to initialize the array with NaN, for example? Would that still allow the minimum search? And/or would it allow the plotter to skip across undefined entries?
So what's the best way to initialize / plot so that the search works and the plots look good?
Thanks,
Gert
Update for @Nix G-D
The averaging fails. I first tried code following the recommendation.
q_int = pd.DataFrame(q).interpolate(method='linear', axis=0).values
fig= plt.figure(1)
im= plt.imshow(q_int[::-1], cmap='rainbow', interpolation='none', extent=[-search_size,search_size,-search_size,search_size])
However the 2D interpolation failed. (at least as indicated by the plot)
I tried to add code to perform X and Y interpolation.
q_int = pd.DataFrame(q).interpolate(method='linear', axis=0).values
q_int = pd.DataFrame(q_intx).interpolate(method='linear', axis=1).values
fig= plt.figure(1)
im= plt.imshow(q_int[::-1], cmap='rainbow', interpolation='none', extent=[-search_size,search_size,-search_size,search_size])
But results still were corrupted.
Best,
Gert
You can initialize the array with NaN easily:
shape = (2*search_size, 2*search_size)
q = np.full(shape, np.nan)
This can then be searched as normal. To find the minimum indices ignoring NaNs, you can use np.nanargmin()
In [12]: np.nanargmin([1,-1,4,float('nan')])
Out[12]: 1
To get rid of these NaN values we can use pandas.DataFrame.interpolate():
q_interpolated = pd.DataFrame(q).interpolate(method='linear', axis=0).values
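Putting these pieces together, a minimal sketch (with a made-up array size and dummy data, purely for illustration) could look like this; as the update above shows, whether a second interpolation pass along axis=1 is enough for the real search pattern still needs checking:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

search_size = 50                                          # hypothetical size
q = np.full((2*search_size, 2*search_size), np.nan)       # NaN marks unvisited slots

q[::10, ::10] = np.random.rand(10, 10)                    # dummy measurements for illustration

dy, dx = np.unravel_index(np.nanargmin(q), q.shape)       # minimum search that ignores NaN

q_plot = pd.DataFrame(q).interpolate(method='linear', axis=0).values  # fill gaps for plotting; a second pass with axis=1 may be needed

plt.imshow(q_plot[::-1], cmap='rainbow', interpolation='none')
plt.colorbar()
plt.show()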

ZeroDivisionError: float division by zero in a code for Surface plot

I have got this code to generate a surface plot. But it gives a zero division error. I am not able to figure out what is wrong. Thank you.
import pylab, csv
import numpy
from mayavi.mlab import *

def getData(fileName):
    try:
        data = csv.reader(open(fileName,'rb'))
    except:
        print 'File not found'
    else:
        data = [[float(row[0]), float(row[1]), float(row[2])] for row in data]
        x = [row[0] for row in data]
        y = [row[1] for row in data]
        z = [row[2] for row in data]
        return (x, y, z)

def plotData(fileName):
    xVals, yVals, zVals = getData(fileName)
    xVals = pylab.array(xVals)
    yVals = pylab.array(yVals)
    zVals = (pylab.array(zVals)*10**3)
    x, y = numpy.mgrid[-0.5:0.5:0.001, -0.5:0.5:0.001]
    s = surf(x, y, zVals)
    return s

plotData('data')
If I have understood the code correctly, there is a problem with zVals in mayavi.mlab.surf.
According to the documentation of the function, s is the elevation matrix, a 2D array, where indices along the first array axis represent x locations, and indices along the second array axis represent y locations. Your file reader seems to return a 1D vector instead of an array.
However, this may not be the most difficult problem. Your file seems to contain triplets of x, y, and z coordinates. You can use mayavi.mlab.surf only if your x and y coordinates in the file form a regular square grid. If this is the case, then you just have to recover that grid and form nice 2D arrays of all three parts. If the points are in the file in a known order, it is easy, otherwise it is rather tricky.
Maybe you would want to start with mayavi.mlab.points3d(xVals, yVals, zVals). That will give you an overall impression of your data. (Or if you already know more about your data, you might give us a hint by editing your question and adding more information!)
Just to give you an idea of probably slightly pythonic style of writing this, your code is rewritten (and surf replaced) in the following:
import mayavi.mlab as ml
import numpy

def plot_data(filename):
    data = numpy.loadtxt(filename)
    xvals = data[:,0]
    yvals = data[:,1]
    zvals = data[:,2] * 1000.
    return ml.points3d(xvals, yvals, zvals)

plot_data('data')
(Essential changes: the use of numpy.loadtxt, get rid of pylab namespace here, no import *, no CamelCase variable or function names. For more information, see PEP 8.)
If you only need to see the shape of the surface, and the data in the file is ordered row-by-row and with the same number of data points in each row (i.e. fixed number of columns), then you may use:
import mayavi.mlab as ml
import numpy
import matplotlib.pyplot as plt
# whatever you have as the number of points per row
columns = 13
data = numpy.loadtxt(filename)
# draw the data points in the XY plane to check that they really form a rectangular grid:
plt.plot(data[:,0], data[:,1])
# draw the surface
zvals = data[:,2].reshape(-1,columns)
ml.surf(zvals, warp_scale='auto')
As you can see, this code allows you to check that your values really are in the right kind of grid. It does not check that they are in the correct order, but at least you can see they form a nice grid. Also, you have to input the number of columns manually. The keyword warp_scale takes care of the surface scaling so that it should look reasonable.
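If the x and y values in the file do form such a grid and the rows come in a consistent order, the "recover the grid and form 2D arrays of all three parts" idea might look roughly like this (the column count and file name are placeholders):
import mayavi.mlab as ml
import numpy

columns = 13                           # hypothetical number of points per row
data = numpy.loadtxt('data')           # assumes whitespace-separated x, y, z columns

x2d = data[:, 0].reshape(-1, columns)  # reshape all three columns into 2D grids
y2d = data[:, 1].reshape(-1, columns)
z2d = data[:, 2].reshape(-1, columns) * 1000.

ml.surf(x2d, y2d, z2d)
ml.show()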

plot a huge amount of data points

I have encountered a strange problem: I store a huge number of data points from a nonlinear equation in 3 arrays (x, y and z) and then try to plot them in a 2D graph (a theta-phi plot, hence 2D).
I tried to eliminate points needed to be plotted by sampling points from every 20 data points, since the z-data is approximately periodic. I picked those points with z value just above zero to make sure I picked one point for every period.
The problem arises when I tried to do the above. I got only a very limited number of points on the graph, approximately 152 points, regardless of how I changed my initial number of data points (as long as it surpassed a certain number of course).
I suspect that it might be some command I use wrongly, or that the capacity of the array is smaller than I expected (seems unlikely). Could anyone help me find out where the problem is?
def drawstaticplot(m,n, d_n, n_o):
    counter=0
    for i in range(0,m):
        n=vector.rungekutta1(n, d_n)
        d_n=vector.rungekutta2(n, d_n, i)
        x1 = n[0]
        y1 = n[1]
        z1 = n[2]
        if i%20==0:
            xarray.append(x1)
            yarray.append(y1)
            zarray.append(z1)
    for j in range(0,(m/20)-20):
        if (((zarray[j]-n_o)>0) and ((zarray[j+1]-n_o)<0)):
            counter= counter +1
            print zarray[j]-n_o,counter
            plotthetaphi(xarray[j],yarray[j],zarray[j])

def plotthetaphi(x,y,z):
    phi= math.acos(z/math.sqrt(x**2+y**2+z**2))
    theta = math.acos(x/math.sqrt(x**2 + y**2))
    plot(theta, phi,'.',color='red')
Besides, I tried to apply the code in the following SO question to my code; I want a very similar result, except that my data points are not randomly generated.
Shiuan,
I am still investigating your problem, however a few notes:
Instead of looping and appending to an array you could do:
select every nth element:
# inside IPython console:
In [2]: a=np.arange(0,10)
In [3]: a[::2] # here we select every 2nd element.
Out[3]: array([0, 2, 4, 6, 8])
so instead of calculating Runge-Kutta on all elements of m:
new_m = m[::20] # select every 20th element of m.
now call your function like this:
def drawstaticplot(new_m,n, d_n, n_o):
    n=vector.rungekutta1(n, d_n)
    d_n=vector.rungekutta2(n, d_n, i)
    x1 = n[0]
    y1 = n[1]
    z1 = n[2]
    xarray.append(x1)
    yarray.append(y1)
    zarray.append(z1)
    ...
about appending, and iterating over large data sets:
append in general is slow, because it copies the whole array and then
stacks the new element. Instead, you already know the size of n, so you could do:
def drawstaticplot(new_m, n, d_n, n_o):
    # create the storage based on n;
    # notice I assumed that rungekutta returns n the size of new_m,
    # but you can change it.
    xarray, yarray, zarray = np.zeros(n.shape[0]), np.zeros(n.shape[0]), np.zeros(n.shape[0])
    for idx, item in enumerate(new_m): # notice the function enumerate, make it your friend!
        n = vector.rungekutta1(n, d_n)
        d_n = vector.rungekutta2(n, d_n, item)
        x1 = n[0]
        y1 = n[1]
        z1 = n[2]
        #if i%20==0: # we don't need to check for the 20th element, new_m is already filtered...
        xarray[idx] = n[0]
        yarray[idx] = n[1]
        zarray[idx] = n[2]
        # is the second loop necessary?
        if ((zarray[idx]-n_o) > 0) and ((zarray[idx+1]-n_o) < 0):
            print zarray[idx]-n_o, counter
            plotthetaphi(xarray[idx], yarray[idx], zarray[idx])
You can use the approach suggested here:
Efficiently create a density plot for high-density regions, points for sparse regions
e.g. a histogram where you have too many points, and individual points where the density is low.
Alternatively, you can use the rasterized flag in matplotlib, which speeds up rendering.
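For illustration only, with made-up theta/phi data, the two suggestions might look like this:
import numpy as np
import matplotlib.pyplot as plt

theta = np.random.rand(100000) * np.pi   # dummy data, stands in for the computed angles
phi = np.random.rand(100000) * np.pi

plt.figure()
plt.hist2d(theta, phi, bins=200)         # density plot for the crowded regions

plt.figure()
plt.plot(theta, phi, '.', markersize=1, rasterized=True)  # rasterized markers keep the figure light

plt.show()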
