I'm a beginner and I don't speak English very well, so sorry about that.
I'd like to draw the bifurcation diagram of the sequence:
x(n+1) = u·x(n)·(1 − x(n)) with x(0) = 0.7 and u between 0.7 and 4.
I am supposed to get something like this:
So, for each value of u, I'd like to compute the accumulation points of this sequence. That's why I'd like to code something that could display every point (u, x1001), (u, x1002), ..., (u, x1050) for each value of u.
I did this:
import matplotlib.pyplot as plt
import numpy as np

P=np.linspace(0.7,4,10000)
m=0.7
Y=[m]
l=np.linspace(1000,1050,51)
for u in P:
    X=[u]
    for n in range(1001):
        m=(u*m)*(1-m)
    break
    for l in range(1051):
        m=(u*m)*(1-m)
        Y.append(m)
plt.plot(X,Y)
plt.show()
And I get a blank graph.
This is the first thing I've ever tried to code and I don't know much Python yet, so I need help please.
There are a few issues in your code. Although this is essentially a code review problem, generating bifurcation diagrams is a problem of general interest, so here is a corrected version with the changes explained in comments.
import matplotlib.pyplot as plt
import numpy as np

P = np.linspace(0.7, 4, 10000)
m = 0.7
# Initialize your data containers identically
X = []
Y = []
# The line l=np.linspace(1000,1050,51) was never used, so I removed it.
for u in P:
    # Add one value to X instead of resetting it.
    X.append(u)
    # Start with a random value of m instead of remaining stuck
    # on a particular branch of the diagram
    m = np.random.random()
    for n in range(1001):
        m = (u*m)*(1-m)
    # The break was harmful here: it prevented completion of
    # the loop and collection of data in Y
    for l in range(1051):
        m = (u*m)*(1-m)
    # Collection of data in Y must be done once per value of u
    Y.append(m)
# Don't draw a line between successive data points; that renders
# the plot illegible. Use a small marker instead.
plt.plot(X, Y, ls='', marker=',')
plt.show()
Also, X is useless here as it contains a copy of P.
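For what it's worth, the remaining Python-level loop over u can also be removed, since numpy can iterate the map for all values of u at once. A minimal vectorized sketch of the same idea (my own variant, not part of the answer above):

import matplotlib.pyplot as plt
import numpy as np

P = np.linspace(0.7, 4, 10000)
m = np.random.random(P.size)   # one random start per value of u
for n in range(1000):          # discard the transient
    m = P * m * (1 - m)
for n in range(50):            # keep 50 iterates per value of u
    m = P * m * (1 - m)
    plt.plot(P, m, ls='', marker=',', color='k')
plt.show()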
To save the bifurcation diagram in PNG format, you can try this simple code.
# Bifurcation diagram of the logistic map
import math
from PIL import Image

imgx = 1000
imgy = 500
image = Image.new("RGB", (imgx, imgy))
xa = 2.9
xb = 4.0
maxit = 1000

for i in range(imgx):
    r = xa + (xb - xa) * float(i) / (imgx - 1)
    x = 0.5
    for j in range(maxit):
        x = r * x * (1 - x)
        if j > maxit / 2:
            image.putpixel((i, int(x * imgy)), (255, 255, 255))

image.save("Bifurcation.png", "PNG")
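If you prefer to stay with matplotlib instead of PIL, calling plt.savefig("Bifurcation.png") before plt.show() in the earlier script writes the figure to disk directly.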
I'm developing this for my math work, but I'm stuck. Here is the statement:
We denote by U and V two sequences of random floats, obtained using a random number generator in the interval [0,1].
If we combine the U and V sequences according to the transformation Z = m + σ·√(−2·ln U)·sin(2π·V), we obtain a variable Z that follows the normal law N(m, σ).
I want to obtain a normal distribution with mean m = 3 and standard deviation σ = 1.
So, I've generated my array of 30,000 Z floats with:

import random
from math import sqrt, log, sin, pi

def generateFloat(tr):
    values = []
    for i in range(tr):
        values.append(random.uniform(0, 1))
    return values

def generateZ(nb, m, o):
    U = generateFloat(nb)
    V = generateFloat(nb)
    tab = []
    for i in range(nb):
        Z = m + (o * sqrt(-2*log(U[i])) * sin(2*pi*V[i]))
        tab.append(Z)
    return tab
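For what it's worth, the same Box-Muller transform can be vectorized with numpy; a minimal sketch equivalent to the loops above (my own variant, not part of the original code):

import numpy as np

def generate_z(nb, m, o):
    # Box-Muller: two uniform samples combine into one normal sample.
    # np.random.random returns values in [0, 1); using 1 - random()
    # keeps U away from 0 so that log(U) stays finite.
    U = 1.0 - np.random.random(nb)
    V = np.random.random(nb)
    return m + o * np.sqrt(-2 * np.log(U)) * np.sin(2 * np.pi * V)

Z = generate_z(30000, 3, 1)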
I would like to run the chi-square test and do the 3D spectral test. Right now my chi-square test gives me weird values...
def khi2(x):
    S = 0
    E = len(x)/(max(x))
    print(max(x)-1)
    for i in range(max(x)):
        O = x[i]
        S += ((O-E)**2)/E
    return S
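For comparison, a binned goodness-of-fit chi-square is usually computed like the following sketch: bin the samples, compute the expected count per bin under N(m, σ), and sum the normalized squared deviations. The use of scipy.stats.norm here is my assumption, not part of the original code:

import numpy as np
from scipy.stats import norm

def chi2_stat(z, m=3.0, o=1.0, bins=20):
    observed, edges = np.histogram(z, bins=bins)  # observed counts per bin
    # expected counts per bin under the normal law N(m, o)
    probs = norm.cdf(edges[1:], m, o) - norm.cdf(edges[:-1], m, o)
    expected = len(z) * probs
    return np.sum((observed - expected)**2 / expected)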
And for my 3D spectral test I have this code:
from mpl_toolkits import mplot3d
from matplotlib import pyplot
import numpy as np

fig = pyplot.figure()
ax = pyplot.axes()
num_bars = 30000
x_pos = generateFloat(num_bars)
y_pos = generateFloat(num_bars)
ax.scatter(x_pos, y_pos)
pyplot.show()
I think, but I'm not sure, that the values I obtained look right on my graph... And this isn't in 3D, so I don't think it's a good graph.
I don't know if I'm doing this right. If someone can help me with anything, I'll take any help I can get :/
Thank you.
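As a note on the 3D part: a 3D spectral test is normally drawn from consecutive triples of the generator's output, and matplotlib needs projection='3d' to create a 3D axis. A minimal sketch of that idea (my assumption of what is intended):

from mpl_toolkits import mplot3d
from matplotlib import pyplot
import numpy as np

u = np.random.random(30000)
fig = pyplot.figure()
ax = fig.add_subplot(projection='3d')
# consecutive triples (u[i], u[i+1], u[i+2]); lattice planes betray a bad generator
ax.scatter(u[:-2], u[1:-1], u[2:], s=1)
pyplot.show()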
I'm trying to plot a velocity profile with the following code. The axes are plotted; however, no data points are plotted.
import pandas as pd
from matplotlib import pyplot as plt
n = 0.4
k = 53
d = 0.000264
r = 0.000132
p = 15000
u = (n/n+1)*(p*1/2*k)**(1/n)*(d**((n+1)/n) - r**((n+1)/n))
plt.plot(u)
Graph produced:
First off, note that (p*1/2*k) is a very confusing way to write a multiplication. In (almost all) programming languages, multiplications and divisions are done left to right, so (p*1/2*k) equals (p*k/2), while perhaps you meant (p/(2*k)).
When plotting a 2D graph, you have to think about what you want in the x direction and what in the y direction. As you only give an x, there is nothing to plot. Also, plot by default wants to draw lines, and for a line at least 2 xy pairs are needed. To draw only a point, plot accepts a third parameter, for example 'ro' to represent a red dot. Supposing u is meant to be the y value and you don't have an x, you could give it a zero:
plt.plot(0, u, 'ro')
Now, probably you want to draw a curve of u for different values of some x. As your equation contains neither an x nor a t, it is hard for me to know what you would like to see in the horizontal direction.
Let's suppose you want to show u as a function of d, and that d goes from 0.0 to 0.0005. Typically, with numpy you create a sequence of values for d, say split into 200 small intervals: d = np.linspace(0.0, 0.0005, 200). Then there is the magic of numpy: when you write u = f(d), numpy makes an array for u with as many entries as d has.
Example:
import numpy as np
from matplotlib import pyplot as plt
n = 0.4
k = 53
d = np.linspace(0.0, 0.0005, 200) # 0.000264
r = 0.000132
p = 15000
u = (n / n + 1) * (p * 1 / 2 * k) ** (1 / n) * (d ** ((n + 1) / n) - r ** ((n + 1) / n))
plt.plot(d, u)
plt.show()
I'm interested in solving,
\frac{\partial \phi}{\partial t} - D \nabla^2 \phi - \alpha \phi + \gamma \phi = 0
The following is working, but I have a few questions:
1. Is it possible to increase performance with FiPy? I feel like the nx, ny, nz bins are very small here, despite a long computation time. I don't understand why the arrays X, Y, and Z are so large.
2. Notice in the first frame, we are zoomed in. How can I force the extents to automatically be [0..nx, 0..ny, 0..nz] in all plots?
3. Data for the first frame is a sphere of points with value 1.0 surrounded by 0.0. Why does there appear to be a gradient? Is Mayavi interpolating? If so, how can I disable this?
Code:
from fipy import *
import mayavi.mlab as mlab
import numpy as np
import time

# Spatial parameters
nx = ny = nz = 30  # bins
dx = dy = dz = 1   # Must this be an integer?
L = nx * dx

# Diffusion and time step
D = 1.
dt = 10.0 * dx**2 / (2. * D)
steps = 4

# Initial value and radius of concentration
phi0 = 1.0
r = 3.0

# Rates
alpha = 1.0  # Source coefficient
gamma = .01  # Sink coefficient

mesh = Grid3D(nx=nx, ny=ny, nz=nz, dx=dx, dy=dy, dz=dz)
X, Y, Z = mesh.cellCenters  # These are large arrays
phi = CellVariable(mesh=mesh, name=r"$\phi$", value=0.)
src = phi * alpha    # Source term (zeroth order reaction)
degr = -gamma * phi  # Sink term (degradation)
eq = TransientTerm() == DiffusionTerm(D) + src + degr

# Initial concentration is a sphere located in the center of a bounded cube
phi.setValue(1.0, where=((X-nx/2)**2 + (Y-ny/2)**2 + (Z-nz/2)**2 < r**2))

# Solve
start_time = time.time()
results = [phi.getNumericValue().copy()]
for step in range(steps):
    eq.solve(var=phi, dt=dt)
    results.append(phi.getNumericValue().copy())
print('Time elapsed:', time.time() - start_time)

# Plot
for i, res in enumerate(results):
    fig = mlab.figure()
    res = res.reshape(nx, ny, nz)
    mlab.contour3d(res, opacity=.3, vmin=0, vmax=1, contours=100, transparent=True, extent=[0, 10, 0, 10, 0, 10])
    mlab.colorbar()
    mlab.savefig('diffusion3d_%i.png' % (i+1))
    mlab.close()
Time elapsed: 68.2 seconds
It's hard to tell from your question, but in the course of diagnosing things, I discovered that the LinearLUSolver scales very poorly as the dimension of the problem increases (see https://github.com/usnistgov/fipy/issues/474).
For this symmetric problem, PySparse should use the PCG solver and Trilinos should use GMRES. If you didn't install either of these, then you'll get the SciPy sparse solvers, which default to LU (I don't know why; something for us to look into), and things will be really slow in 3D. Try adding solver=LinearGMRESSolver() to your eq.solve(...) statement.
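For example (a one-line sketch; LinearGMRESSolver comes in with the from fipy import * wildcard already in your script):

eq.solve(var=phi, dt=dt, solver=LinearGMRESSolver())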
As far as the size of X, Y, and Z, you've declared a 30*30*30 cube of cells, so each of the cell center coordinate vectors will be 27000 elements long. Did you have a different expectation for cellCenters?
I suggest you subclass our MayaviDaemon class, or at least look at how it sets up the display in Mayavi. In short, we set a data_set_clipper to the desired bounds.
I don't know.
I have an application that requires a disk populated with 'n' points in a quasi-random fashion. I want the points to be somewhat random, but still have a more or less regular density over the disk.
My current method is to place a point, check if it's inside the disk, and then check if it is also far enough away from all other points already kept. My code is below:
import os
import random
import math

# ------------------------------------------------ #
# geometric constants
center_x = -1188.2
center_y = -576.9
center_z = -3638.3
disk_distance = 2.0*5465.6
disk_diam = 5465.6
# ------------------------------------------------ #

pts_per_disk = 256
closeness_criteria = 200.0
min_closeness_criteria = disk_diam/closeness_criteria
disk_center = [(center_x-disk_distance), center_y, center_z]
pts_in_disk = []
while len(pts_in_disk) < (pts_per_disk):
    potential_pt_x = disk_center[0]
    potential_pt_dy = random.uniform(-disk_diam/2.0, disk_diam/2.0)
    potential_pt_y = disk_center[1]+potential_pt_dy
    potential_pt_dz = random.uniform(-disk_diam/2.0, disk_diam/2.0)
    potential_pt_z = disk_center[2]+potential_pt_dz
    potential_pt_rad = math.sqrt((potential_pt_dy)**2+(potential_pt_dz)**2)
    if potential_pt_rad < (disk_diam/2.0):
        far_enough_away = True
        for pt in pts_in_disk:
            if math.sqrt((potential_pt_x - pt[0])**2+(potential_pt_y - pt[1])**2+(potential_pt_z - pt[2])**2) > min_closeness_criteria:
                pass
            else:
                far_enough_away = False
                break
        if far_enough_away:
            pts_in_disk.append([potential_pt_x, potential_pt_y, potential_pt_z])

outfile_name = "pt_locs_x_lo_"+str(pts_per_disk)+"_pts.txt"
outfile = open(outfile_name, 'w')
for pt in pts_in_disk:
    outfile.write(" ".join([("%.5f" % (pt[0]/1000.0)), ("%.5f" % (pt[1]/1000.0)), ("%.5f" % (pt[2]/1000.0))])+'\n')
outfile.close()
In order to get the most even point density, what I do is basically run this script iteratively from another script, reducing the 'closeness' criterion on each successive iteration. At some point, the script can no longer finish, and I just use the points from the last successful iteration.
So my question is rather broad: is there a better way to do this? My method is OK for now, but my gut says there is a better way to generate such a field of points.
An illustration of the output is graphed below, one with a high closeness criterion, and another with the 'lowest found' closeness criterion (what I want).
A simple solution based on Disk Point Picking from MathWorld:
import numpy as np
import matplotlib.pyplot as plt
n = 1000
r = np.random.uniform(low=0, high=1, size=n) # radius
theta = np.random.uniform(low=0, high=2*np.pi, size=n) # angle
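# sqrt(r) below compensates for the area element growing with radius,
# which keeps the point density uniform over the disk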
x = np.sqrt(r) * np.cos(theta)
y = np.sqrt(r) * np.sin(theta)
# for plotting circle line:
a = np.linspace(0, 2*np.pi, 500)
cx,cy = np.cos(a), np.sin(a)
fg, ax = plt.subplots(1, 1)
ax.plot(cx, cy,'-', alpha=.5) # draw unit circle line
ax.plot(x, y, '.') # plot random points
ax.axis('equal')
ax.grid(True)
fg.canvas.draw()
plt.show()
It gives:
Alternatively, you could also create a regular grid and distort it randomly:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as tri
n = 20
tt = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(tt, tt) # create unit square grid
s_x, s_y = xx.ravel(), yy.ravel()
ii = np.argwhere(s_x**2 + s_y**2 <= 1).ravel() # mask off unwanted points
x, y = s_x[ii], s_y[ii]
triang = tri.Triangulation(x, y) # create triangular grid
# distort the grid
g = .5 # distortion factor
rx = x + np.random.uniform(low=-g/n, high=g/n, size=x.shape)
ry = y + np.random.uniform(low=-g/n, high=g/n, size=y.shape)
rtri = tri.Triangulation(rx, ry, triang.triangles) # distorted grid
# for circle:
a = np.linspace(0, 2*np.pi, 500)
cx,cy = np.cos(a), np.sin(a)
fg, ax = plt.subplots(1, 1)
ax.plot(cx, cy,'k-', alpha=.2) # circle line
ax.triplot(triang, "g-", alpha=.4)
ax.triplot(rtri, 'b-', alpha=.5)
ax.axis('equal')
ax.grid(True)
fg.canvas.draw()
plt.show()
It gives:
The triangles are just there for visualization. The obvious disadvantage is that, depending on your choice of grid, there will be more or less large "holes" either in the middle or at the borders (as shown here), due to the grid discretization.
If you have a defined area like a disc (circle) within which you wish to generate random points, you are better off using the equation for a circle and limiting on the radius:
x^2 + y^2 = r^2 (0 < r < R)
or parametrized to two variables
cos(a) = x/r
sin(a) = y/r
sin^2(a) + cos^2(a) = 1
To generate something like the pseudo-random distribution with low density, you should take the following approach:
For randomly distributed ranges of r and a, choose n points.
This allows you to generate your distribution to roughly meet your density criteria.
To understand why this works, imagine your circle first divided into small rings of length dr, and then imagine your circle divided into pie slices of angle da. Your randomness now has equal probability over the whole boxed area around the circle. If you divide up the areas of allowed randomness throughout your circle, you will get a more even distribution around the overall circle and small random variation in the individual areas, giving you the pseudo-random look and feel you are after.
Now your job is just to generate n points for each given area. You will want to have n be dependent on r, as the area of each division changes as you move outward from the center. You can proportion this to the exact change in area each division brings:
For the (n-1)-th to n-th ring:
d(Area, n, n-1) = Area(n) - Area(n-1)
The area of any given ring is:
Area(n) = pi*(dr*n)^2 - pi*(dr*(n-1))^2
So the difference becomes:
d(Area, n, n-1) = [pi*(dr*n)^2 - pi*(dr*(n-1))^2] - [pi*(dr*(n-1))^2 - pi*(dr*(n-2))^2]
d(Area, n, n-1) = pi*[(dr*n)^2 - 2*(dr*(n-1))^2 + (dr*(n-2))^2]
You could expand on this to gain some insight into how much n should increase, but it may be faster to just guess at some percentage increase (30%) or so.
The example I have provided is a small subset, and decreasing da and dr will dramatically improve your results.
Here is some rough code for generating such points:

import random
import math

R = 10.
n_rings = 10.
n_angles = 10.
dr = R/n_rings
da = 2*math.pi/n_angles
base_points_per_division = 3
increase_per_level = 1.1
points = []
ring = 0
while ring < n_rings:
    angle = 0
    while angle < n_angles:
        for i in range(int(base_points_per_division)):
            ra = angle*da + da*random.random()
            rr = ring*dr + dr*random.random()
            x = rr*math.cos(ra)
            y = rr*math.sin(ra)
            points.append((x, y))
        angle += 1
    base_points_per_division = base_points_per_division*increase_per_level
    ring += 1
I tested it with the parameters:
n_rings = 20
n_angles = 20
base_points_per_division = .9
increase_per_level = 1.1
And got the following results:
It looks more dense than your provided image, but I imagine further tweaking of those variables could be beneficial.
You can add an additional part to scale the density properly by calculating the number of points per ring:
points_per_ring = density*math.pi*(dr**2)*(2*ring+1)
points_per_division = points_per_ring/n_angles
This will provide an even better scaled distribution.
density = .03
points = []
ring = 0
while ring < n_rings:
    angle = 0
    base_points_per_division = density*math.pi*(dr**2)*(2*ring+1)/n_angles
    while angle < n_angles:
        for i in range(int(base_points_per_division)):
            ra = angle*da + min(da, da*random.random())
            rr = ring*dr + dr*random.random()
            x = rr*math.cos(ra)
            y = rr*math.sin(ra)
            points.append((x, y))
        angle += 1
    ring += 1
Giving better results using the following parameters:
R = 1.
n_rings = 10.
n_angles = 10.
density = 10/(dr*da) # ~ ten points per unit area
With a graph...
And for fun, you can graph the divisions to see how well the result matches your distribution, and adjust.
Depending on how random the points need to be, it may be simple enough to just make a grid of points within the disk, and then displace each point by some small but random amount.
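A minimal sketch of that grid-plus-jitter idea (my own illustration; the jitter amplitude is a free parameter):

import numpy as np
import matplotlib.pyplot as plt

n = 25                     # grid points across the diameter
jitter = 0.5 * (2.0 / n)   # displace by up to half a grid spacing
t = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(t, t)
x, y = xx.ravel(), yy.ravel()
keep = x**2 + y**2 <= 1.0  # keep only points inside the unit disk
x, y = x[keep], y[keep]
x = x + np.random.uniform(-jitter, jitter, x.size)
y = y + np.random.uniform(-jitter, jitter, y.size)
plt.plot(x, y, '.')
plt.axis('equal')
plt.show()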
It may be that you want more randomness, but if you just want to fill your disc with an even-looking distribution of points that aren't on an obvious grid, you could try a spiral with a random phase.
import math
import random
import pylab

n = 300
alpha = math.pi * (3 - math.sqrt(5))  # the "golden angle"
phase = random.random() * 2 * math.pi
points = []
for k in range(n):
    theta = k * alpha + phase
    r = math.sqrt(float(k)/n)
    points.append((r * math.cos(theta), r * math.sin(theta)))
pylab.scatter(*zip(*points))
pylab.show()
Probability theory ensures that the rejection method is an appropriate way to generate uniformly distributed points within the disk D(0, r), centered at the origin and of radius r. Namely, one generates points within the square [-r, r] x [-r, r] until a point falls within the disk:

do {
    generate P in [-r,r]x[-r,r];
} while (P[0]**2 + P[1]**2 > r**2);
return P;
unif_rnd_disk is a generator function implementing this rejection method:

import matplotlib.pyplot as plt
import numpy as np
import itertools

def unif_rnd_disk(r=1.0):
    pt = np.zeros(2)
    while True:
        yield pt
        while True:
            pt = -r + 2*r*np.random.random(2)
            if pt[0]**2 + pt[1]**2 <= r**2:
                break

G = unif_rnd_disk()  # generator of points in the disk D(0, r=1)
X, Y = zip(*[pt for pt in itertools.islice(G, 1, 1000)])
plt.scatter(X, Y, color='r', s=3)
plt.axis('equal')
If we want to generate points in a disk centered at C(a,b), we have to apply a translation to the points in the disk D(0,r):
C=[2.0, -3.5]
plt.scatter(C[0]+np.array(X), C[1]+np.array(Y), color='r', s=3)
plt.axis('equal')
I want to create a simple animation of how badly the Forward Time Central Space (FTCS) scheme solves the flux conservation equation for a Gaussian velocity distribution ("Physics... yeah!"). I have written a small animation based on this tutorial; the code is attached below. I'm satisfied with it (given that I don't really know anything about matplotlib's animation package), but I cannot get the animation to run slowly enough that I can actually see something.
This boils down to me not understanding how to set the parameters in the animation.FuncAnimation call in the last line of the code. Could anybody explain and help?
import math
import numpy as np
import scipy as sci
import matplotlib.pyplot as plt
from matplotlib import animation

# generate velocity distribution
sigma = 1.
xZero = 0.
N = 101
x = np.linspace(-10, 10, N)
uZero = 1. / math.sqrt(2 * math.pi * (sigma**2)) * np.exp(-0.5*((x - xZero)/(2*sigma))**2)
v = 1
xStep = x[2]-x[1]
tStep = 0.1
alpha = v * tStep/xStep * 0.5

# include boundary conditions
u = np.hstack((0., uZero, 0.))
uNext = np.zeros(N + 2)

# solve with forward time central space and store each outer loop in data
# so it can be used in the animation
data = []
data.append(u[1:-1])
for n in range(0, 100):
    for i in range(1, N+1):
        uNext[i] = -alpha * u[i+1] + u[i] + alpha*u[i-1]
    u = uNext
    data.append(u[1:-1])
data = np.array(data)

# launch the animation
fig = plt.figure()
ax = plt.axes(xlim=(-10, 10), ylim=(-1, 1))
line, = ax.plot([], [], lw=2)

def init():
    line.set_data([], [])
    return line,

# get the data for animation from the data array
def animate(i):
    y = data[i]
    line.set_data(x, y)
    return line,

# the actual animation
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=200, interval=2000, blit=True)
plt.show()
Explore your data variable first. If I run your code, only data[0] and data[1] are different; from data[1] onwards, all entries (and thus all frames) are the same:

>>> np.allclose(data[1], data[100])
True
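The likely culprit: u = uNext binds both names to the same array, and data.append(u[1:-1]) stores a view into that array, so each later in-place update rewrites every stored frame. A minimal sketch of the fix is to copy in both places:

for n in range(0, 100):
    for i in range(1, N+1):
        uNext[i] = -alpha * u[i+1] + u[i] + alpha*u[i-1]
    u = uNext.copy()             # decouple u from uNext
    data.append(u[1:-1].copy())  # store a snapshot, not a view

With distinct frames stored, the interval parameter of FuncAnimation (milliseconds between frames) then controls how slowly the animation plays.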