How to generate points in 2D uniformly distributed on a rectangle? - python

I tried to generate 1000 points in 2D, uniformly distributed on the rectangle [-1,1] x [0,0.5], and then plot them, but I couldn't. I get this error:
TypeError: 'float' object cannot be interpreted as an integer
Here is the code I came up with:
import numpy as np
import matplotlib.pyplot as plt
vect = np.random.uniform(1000)
plt.plot(range(-1,1), range(0,0.5), vect)
plt.show()
I don't think I really understand how to do it. Should I use np.random.randn or np.random.rand? I would appreciate an explanation of what I did wrong (if possible line by line, so that I can understand better).
Thank you

First: xlist = np.random.uniform(low=-1, high=1, size=1000)
Second: ylist = np.random.uniform(low=0, high=0.5, size=1000)
Last: plt.scatter(xlist, ylist)
(Note that the number of samples goes in the size argument of np.random.uniform; passing 1000 as the first positional argument would set low instead and clash with the low keyword.)
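For reference, here is a complete version (a minimal sketch; the marker size is just a cosmetic choice). As for randn vs rand: np.random.rand draws uniformly from [0, 1) and np.random.randn draws from a standard normal distribution, so np.random.uniform is the natural choice here because it lets you set the bounds of the rectangle directly.

import numpy as np
import matplotlib.pyplot as plt

# 1000 points uniformly distributed on the rectangle [-1, 1] x [0, 0.5]
xlist = np.random.uniform(low=-1, high=1, size=1000)
ylist = np.random.uniform(low=0, high=0.5, size=1000)

plt.scatter(xlist, ylist, s=2)
plt.show()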

Related

2d interpolators of scipy for scattered data going crazy

I want to find the derivatives of some scattered data. I have tried two different methods:
1. projecting the scattered data onto a regular grid using scipy.interpolate.griddata, computing the gradients with numpy.gradient, and then projecting the values back to the scattered locations;
2. creating a CloughTocher2DInterpolator (I have the same issue with the others) and getting the gradients out of it.
The second one is an order of magnitude faster than the first, but unfortunately it also goes crazy quite quickly when the data get a bit complex. For instance, starting with this signal (called F, which is a simple sum of tanh step functions along x and y):
When I process F using the two methods, I get:
Method 1 gives a good approximation. Method 2 is also good, but I need to force the colormap limits because of some extreme values.
Now, if I add a small amount of noise (amplitude 0.1, while the signal has amplitudes between -3 and 3), the interpolator goes crazy and produces very large extreme values:
I don't know how to deal with this. I understand the interpolator won't like irregular functions or noise, but I was not expecting such a discrepancy. My first idea was to smooth the data first, but strangely I can't find any method that would help me with this. Another idea would be to make a 2D fit of F to try to remove the noise, but I'm out of ideas here too... any suggestions?
Here is the corresponding Python example (running on Python 3.6.9):
import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt
plt.interactive(True)
# scattered data
N = 200
coordu = np.random.rand(N**2,2)
Xu=coordu[:,0]
Yu=coordu[:,1]
noise = 0.                                   # noiseless case
noise = np.random.rand(Xu.shape[0])*0.1      # small noise; comment this line out for the clean signal
Zu=np.tanh((Xu-0.25)/0.01+(Yu-0.25)/0.001)+np.tanh((Xu-0.5)/0.01+(Yu-0.5)/0.001)+np.tanh((Xu-0.75)/0.001+(Yu-0.75)/0.001)+noise
plt.figure();plt.scatter(Xu,Yu,1,Zu)
plt.title('Data signal F')
#plt.savefig('signalF_noisy.png')
### get the gradient
# using griddata and np.gradient
Xs,Ys=np.meshgrid(np.linspace(0,1,N),np.linspace(0,1,N))
coords = np.array([Xs,Ys]).T
Zs = interpolate.griddata(coordu,Zu,coords)
nearest = interpolate.griddata(coordu,Zu,coords,method='nearest')
znan = np.isnan(Zs)
Zs[znan] = nearest[znan]
dZs = np.gradient(Zs,np.min(np.diff(Xs[0,:])))
dZus = interpolate.griddata(coords.reshape(N*N,2),dZs[0].reshape(N*N),coordu)
hist_dzus = np.histogram(dZus,100)
plt.figure();plt.scatter(Xu,Yu,1,dZus)
plt.colorbar()
plt.clim([0 ,10])
plt.title('dF/dx using griddata and np.gradients')
#plt.savefig('dxF_griddata_noisy.png')
# using interpolation method Clough
interp = interpolate.CloughTocher2DInterpolator(coordu,Zu)
dZuCT = interp.grad
hist_dzct = np.histogram(dZuCT[:,0,0],100)
plt.figure();plt.scatter(Xu,Yu,1,dZuCT[:,0,0])
plt.colorbar()
plt.clim([0 ,10])
plt.title('dF/dx using CloughTocher2DInterpolator')
#plt.savefig('dxF_CT2D_noisy.png')
# histograms
plt.figure()
plt.semilogy(hist_dzus[1][:-1],hist_dzus[0],'.-')
plt.semilogy(hist_dzct[1][:-1],hist_dzct[0],'.-')
plt.title('histogram of dF/dx')
plt.legend(('griddata','CloughTocher'))
#plt.savefig('dxF_hist_noisy.png')
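One possible direction for the "smooth the data first" idea mentioned in the question (this is not from the original post, just a sketch reusing Xu, Yu, Zu from the code above) is to fit a smoothing, rather than interpolating, bivariate spline to the scattered samples and ask it for the x-derivative directly; the smoothing factor s below is an arbitrary guess and would need tuning:

# Fit a smoothing bivariate spline to the noisy scattered data
# (s controls the fidelity/smoothness trade-off; larger s gives a smoother fit).
smooth = interpolate.SmoothBivariateSpline(Xu, Yu, Zu, kx=3, ky=3, s=0.01*len(Zu))
dZ_smooth = smooth(Xu, Yu, dx=1, grid=False)   # dF/dx evaluated at the scattered points

plt.figure(); plt.scatter(Xu, Yu, 1, dZ_smooth)
plt.colorbar()
plt.clim([0, 10])
plt.title('dF/dx using SmoothBivariateSpline')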

Bifurcation and Lyapunov exponent in Python

I am a relative beginner when it comes to Python, and I am currently trying to work out some Python code for a problem I have.
I am attempting to calculate the Lyapunov exponent of a bifurcation diagram I am supposed to be creating.
The map is x_(n+1) = a*sin(pi*x_n),
where a = 0.9 (for when I calculate the exponent).
This is the code I currently have set up; it builds up an array of values as it iterates.
import numpy as np
np.set_printoptions(threshold=np.nan)
import matplotlib.pyplot as plt
a = np.linspace(0,1)
xn = np.array([.001], dtype = float)
for i in range(0,10000):
    y = a*np.sin(np.pi*xn[i])
    xn = np.append(xn,y)
plt.plot(a,xn[-1])
However, when I plot xn, I just get a mad mess of dots instead of a bifurcation diagram. I was hoping I could get some guidance on moving towards the correct diagram, which I can hopefully use to get closer to my end goal.
Thanks for any help, I appreciate it!
I'm not exactly sure what you are trying to accomplish, and I don't know enough about bifurcations to really figure it out on my own, but I was able to get something that seems to work. The main caveat seems to be that if alpha starts at less than 0.158, it won't produce the right output.
import numpy as np
import matplotlib.pyplot as plt
x = [0.001]
a = np.linspace(0.2,1,100000)
for i in range(1,a.shape[0]):
    x.append(a[i]*np.sin(np.pi*x[i-1]))
fig = plt.figure(figsize=(8,4))
plt.scatter(a,x,s=0.1)
which produces the figure:
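If the goal is a conventional bifurcation diagram (iterate the map separately for each parameter value, discard the transient, plot the remaining iterates) together with the Lyapunov exponent at a = 0.9, a sketch along these lines may help; the transient and sample counts are arbitrary choices:

import numpy as np
import matplotlib.pyplot as plt

a_values = np.linspace(0.2, 1.0, 400)
n_transient, n_keep = 500, 100          # iterations to discard / to plot per value of a

for a in a_values:
    x = 0.001
    for _ in range(n_transient):        # let the orbit settle onto its attractor
        x = a*np.sin(np.pi*x)
    xs = []
    for _ in range(n_keep):             # record the long-run behaviour
        x = a*np.sin(np.pi*x)
        xs.append(x)
    plt.plot([a]*n_keep, xs, ',k')
plt.xlabel('a'); plt.ylabel('x')
plt.show()

# Lyapunov exponent at a = 0.9: average of log|f'(x_n)| along the orbit,
# where f(x) = a*sin(pi*x) and f'(x) = a*pi*cos(pi*x).
a, x, n = 0.9, 0.001, 10000
lyap = 0.0
for _ in range(n):
    lyap += np.log(abs(a*np.pi*np.cos(np.pi*x)))
    x = a*np.sin(np.pi*x)
print('Lyapunov exponent at a = 0.9:', lyap/n)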

Python 3D plot with different array sizes

I have been using Matlab for a few years; it is quite easy (in my opinion) and powerful when it comes to 3D plots such as surf, contour or contourf.
Doing the same in Python seems much less intuitive to me.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # needed for projection='3d' on older Matplotlib versions

t = np.arange(0,100,0.1)   # time domain
sp = np.arange(0,50,0.2)   # spatial domain
c = 0.5
u0 = np.exp(-(sp-5)**2)
u = np.empty((len(t),len(sp)))
for i in range(0,len(t)):
    u[i,:] = u0*(sp-c*t[i])
fig = plt.figure()
ax = fig.add_subplot(111,projection='3d')
ax.plot_surface(t,sp,u)
plt.show()
So, in Matlab it would be that easy I think.
What do I have to do in order to get a 3D-Plot (surface or whatever) with two arrays for the x and y dimensions with different sizes and a z-matrix giving a value to each grid point?
As this is a basic question, feel free to explain a bit more or just give me a link with an answer. Unfortunately, I do not really understand what is happening in the codes I read regarding this problem so far.
I don't think what you have written would work in matlab either (I may be wrong, I haven't used it in a while).
For plot_surface(X, Y, Z), the arrays X, Y and Z must all be 2D arrays of the same shape. So, just as you would do in Matlab:
T, SP = numpy.meshgrid(t, sp, indexing='ij')  # both shaped (len(t), len(sp)), matching u
ax.plot_surface(T, SP, u)
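Putting it together, a minimal self-contained sketch (the expression for u is only a placeholder, not the formula from the question):

import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0, 100, 0.1)                 # 1000 time samples
sp = np.arange(0, 50, 0.2)                 # 250 spatial samples
T, SP = np.meshgrid(t, sp, indexing='ij')  # both (1000, 250)
u = np.exp(-(SP - 5 - 0.5*T)**2)           # placeholder field on the grid

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(T, SP, u)
plt.show()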

Trying to plot a quadratic regression, getting multiple lines

I'm making a demonstration of different types of regression in numpy with IPython, and so far I've been able to plot a simple linear regression without difficulty. Now, when I go on to make a quadratic fit to my data and plot it, I don't get a quadratic curve but instead get many lines. Here's the code that generates the problem:
import numpy
from numpy import random
from matplotlib import pyplot as plt
import math
# Generate random data
X = random.random((100,1))
epsilon=random.randn(100,1)
f = 3+5*X+epsilon
# least squares system
A = numpy.array([numpy.ones((100,1)), X, X**2])
A = numpy.squeeze(A)
A = A.T
quadfit = numpy.linalg.solve(numpy.dot(A.transpose(),A),numpy.dot(A.transpose(),f))
# plot the data and the fitted parabola
qdbeta0,qdbeta1,qdbeta2 = quadfit[0][0],quadfit[1][0],quadfit[2][0]
plt.scatter(X,f)
plt.plot(X,qdbeta0+qdbeta1*X+qdbeta2*X**2)
plt.show()
What I get is this picture (zoomed in to show the problem):
You can see that rather than having a single parabola that fits the data, I have a huge number of individual lines doing something that I'm not sure of. Any help would be greatly appreciated.
Your X is ordered randomly, so it's not a good set of x values to use to draw one continuous line, because it has to double back on itself. You could sort it, I guess, but TBH I'd just make a new array of x coordinates and use those:
plt.scatter(X,f)
x = np.linspace(0, 1, 1000)
plt.plot(x,qdbeta0+qdbeta1*x+qdbeta2*x**2)
gives me
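The sorting alternative mentioned above is just as short; a small sketch reusing the names from the question (it keeps the question's numpy import rather than np):

order = numpy.argsort(X[:, 0])    # indices that sort the x values in increasing order
X_sorted = X[order]
plt.scatter(X, f)
plt.plot(X_sorted, qdbeta0 + qdbeta1*X_sorted + qdbeta2*X_sorted**2)
plt.show()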

How do I limit the interpolation region in the InterpolatedUnivariateSpline in Python when given non-uniform samples?

I'm trying to get a nice upsampler in Python when I have non-uniformly spaced inputs. Any suggestions would be helpful. I've tried a number of interpolation functions. Here's an example:
from scipy.interpolate import InterpolatedUnivariateSpline
from numpy import linspace, arange, append
from matplotlib.pyplot import plot, show
F=[0, 1000,1500,2000,2500,3000,3500,4000,4500,5000,5500,22050]
M=[0.,2.85,2.49,1.65,1.55,1.81,1.35,1.00,1.13,1.58,1.21,0.]
ff=linspace(F[0],F[1],10)
for i in arange(2, len(F)):
    ff=append(ff,linspace(F[i-1],F[i], 10))
aa=InterpolatedUnivariateSpline(x=F,y=M,k=2)
mm=aa(ff)
plot(F,M,'r-o'); plot(ff,mm,'bo'); show()
This is the plot I get:
I need to get interpolated values that don't go below 0. Note that the blue dots go below zero. The red line represents the original F vs. M data. If I use k=1 (piece-wise linear interp) then I get good values as shown here:
aa=InterpolatedUnivariateSpline(x=F,y=M,k=1)
mm=aa(ff); plot(F,M,'r-o');plot(ff,mm,'bo'); show()
The problem is that I need a "smooth" interpolation and not the piecewise-linear values. Does anyone know if the bbox argument in InterpolatedUnivariateSpline helps to fix that? I can't find any documentation on what bbox does. Is there another, easier way to accomplish this?
Thanks in advance for any help.
Positivity-preserving interpolation is hard (if it wasn't, there wouldn't be a bunch of papers written about it). Splines of low degree (2, 3) usually do pretty well in this regard, but your data has that large gap in it, and it happens to be at the end of the data range, which makes things worse.
One solution is to do interpolation in two steps: first upsample the data by piecewise linear interpolation, then interpolate new data with a smooth spline (I'll use cubic spline below, though quadratic also works).
The gap_size array records how large each gap is relative to the smallest one. In the subsequent loop, uniformly spaced points are inserted into the large gaps (those that are at least twice the size of the smallest one). The result is F_new, a nearly uniform, finer grid that still includes the original points. The corresponding M values are generated by a piecewise linear spline.
Subsequent cubic interpolation produces a smooth curve that stays positive.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import InterpolatedUnivariateSpline

F = [0, 1000,1500,2000,2500,3000,3500,4000,4500,5000,5500,22050]
M = [0.,2.85,2.49,1.65,1.55,1.81,1.35,1.00,1.13,1.58,1.21,0.]
gap_size = np.diff(F) // np.diff(F).min()
F_new = []
for i in range(len(F)-1):
    F_new.extend(np.linspace(F[i], F[i+1], gap_size[i], endpoint=False))
F_new.append(F[-1])
pl_spline = InterpolatedUnivariateSpline(F, M, k=1)     # piecewise linear
M_new = pl_spline(F_new)
smooth_spline = InterpolatedUnivariateSpline(F_new, M_new, k=3)
ff = np.linspace(F[0], F[-1], 100)
plt.plot(F, M, 'ro')
plt.plot(ff, smooth_spline(ff), 'b')
plt.show()
Of course, no tricks can hide the fact that we don't know what happens between 5500 and 22050 (Hz, I presume); the nearly linear part there is just a placeholder.
