Packing hard spheres in a box - python

I am trying to pack hard spheres in a unit cubic box, such that the spheres cannot overlap with each other. This is being done in Python.
I am given some packing fraction p, and the number of spheres in the system is N.
So, I say that the diameter of each sphere will be
d = (p*6/(math.pi*N))**(1/3).
My box has periodic boundary conditions, which means that there is a recurring image of my box in all directions. If a particle at the edge of the box has a portion sticking out beyond the wall, that portion reappears on the opposite side.
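(As a one-line illustration of that wrap-around, not part of the original question: positions and the box side L = 1.0 below are placeholder names.)
import numpy as np

L = 1.0                                            # unit box side
positions = np.random.uniform(-0.2, 1.2, (5, 3))   # some positions, a few poking outside the box
positions %= L                                     # wrap every coordinate back into [0, L)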
My attempt:
1. Create a numpy N-by-3 array box which holds the position vector of each particle [x, y, z]. The first particle is fine as it is.
2. The next particle in the array is checked against all the previous particles. If the distance between them is more than d, move on to the next particle.
3. If they overlap, randomly change the position vector of the particle in question. If the new position does not overlap with the previous particles, accept it.
4. Repeat steps 2-3 for the next particle.
I am trying to populate my box with these hard spheres, in the following manner:
import numpy as np

# example values so the snippet runs on its own
N = 100                                      # number of spheres (example)
p = 0.45                                     # packing fraction
L = 1.0                                      # box side length
diameter = (p * 6 / (np.pi * N))**(1 / 3)    # sphere diameter for packing fraction p
box = np.random.uniform(0, 1, (N, 3))        # initial random positions

for i in range(1, N):
    mybool = True
    print("particles in box: " + str(i))
    while mybool:  # if we place a bad particle, we need to change its position and restart the checking
        for j in range(0, i):
            displacement = box[j, :] - box[i, :]
            for k in range(3):
                if abs(displacement[k]) > L / 2:  # minimum-image convention for the periodic box
                    displacement[k] -= L * np.sign(displacement[k])
            distance = np.linalg.norm(displacement, 2)  # distance between the ith particle and the trailing j particles
            if distance < diameter:
                box[i, :] = np.random.uniform(0, 1, (1, 3))  # move the ith particle randomly, restart the check
                break
            if j == i - 1 and distance > diameter:
                mybool = False
                break
The problem with this code is that when p = 0.45 it takes a really, really long time to converge. Is there a better, more efficient method for solving this problem?

I think what you are looking for is either the hexagonal close-packed (HCP) lattice or the cubic close-packed one (CCP, also known as face-centered cubic, FCC). See e.g. Wikipedia on Close-packing of equal spheres.
Since your space has periodic conditions, I believe it doesn't matter which one you choose (HCP or CCP); they both achieve the same density of ~74.04%, which Gauss proved to be the highest density attainable by lattice packing.
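As a quick sanity check on that figure (my addition, nothing specific to your setup), the lattice-packing bound pi/(3*sqrt(2)) evaluates to:
import math

max_lattice_density = math.pi / (3 * math.sqrt(2))
print(max_lattice_density)   # 0.7404804896930611, i.e. ~74.04%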
Update:
For the follow-up question on how to generate such a lattice efficiently, let's take the HCP lattice as an example. First, create a bunch of (i, j, k) indices [(0,0,0), (1,0,0), (2,0,0), ..., (0,1,0), ...]. Then, get xyz coordinates from those indices and return a DataFrame with them:
import numpy as np
import pandas as pd

def hcp(n):
    dim = 3
    k, j, i = [v.flatten()
               for v in np.meshgrid(*([range(n)] * dim), indexing='ij')]
    df = pd.DataFrame({
        'x': 2 * i + (j + k) % 2,
        'y': np.sqrt(3) * (j + 1/3 * (k % 2)),
        'z': 2 * np.sqrt(6) / 3 * k,
    })
    return df
We can plot the result as scatter3d using plotly for interactive exploration:
import plotly.graph_objects as go

df = hcp(12)
fig = go.Figure(data=go.Scatter3d(
    x=df.x, y=df.y, z=df.z, mode='markers',
    marker=dict(size=df.x*0 + 30, symbol="circle", color=-df.z, opacity=1),
))
fig.show()
Note: plotly's scatter3d is not a very good rendering of spheres: the marker sizes are constant (so when you zoom in and out, the "spheres" will appear to change relative size), and there is no shading, limited z-ordering faithfulness, etc., but it's convenient to interact with the plot.
Resize and clip to the unit box:
Here we apply strict clipping (each sphere needs to be completely inside the unit box). Your "periodic boundary condition" is something you will need to address separately (see further below for ideas).
def hcp_unitbox(r):
    n = int(np.ceil(1 / (np.sqrt(3) * r)))
    df = hcp(n) * r
    df += r
    df = df[(df <= 1 - r).all(axis=1)]
    return df
With this, you find that a radius of 0.06 gives you 608 fully enclosed spheres:
hcp_unitbox(.06).shape # (608, 3)
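A quick check (mine, not part of the original answer) that no two of those 608 spheres overlap, assuming scipy is available:
import numpy as np
from scipy.spatial.distance import pdist

r = 0.06
df = hcp_unitbox(r)
print(np.isclose(pdist(df.values).min(), 2 * r))   # True: the nearest centers sit exactly one diameter apart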
Where you would go next:
You may dig deeper into the effect of your so-called "periodic boundary conditions", and perhaps play with some rotations (and small translations).
To do so, you may try to generate an HCP-lattice that is large enough that any rotation will still fully enclose your unit cube. For example:
r = 0.2 # example
n = int(np.ceil(2 / r))
df = hcp(n) * r - 1
Then rotate it (by any amount) and translate it (by up to 1 radius in any direction) as you wish for your research, and clip. The "periodic boundary conditions", as you call them, present a bit of an extra challenge, as the clipping becomes trickier. First, clip any sphere whose center is outside your box. Then select the spheres close enough to the boundaries (or even partition the regions of interest into overlapping regions along the walls of your cube) and check for collisions, as per your periodic boundary conditions, among the spheres that fall in each such region.
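One building block for that collision check is the minimum-image distance, i.e. the distance from a sphere to the nearest periodic copy of another sphere. A minimal sketch (mine), assuming positions live in a cubic box of side L:
import numpy as np

def min_image_distance(p, q, L=1.0):
    # distance between p and the nearest periodic image of q in a cubic box of side L
    d = p - q
    d -= L * np.round(d / L)      # fold each component into [-L/2, L/2)
    return np.linalg.norm(d)

# two spheres that look far apart but nearly touch across the boundary
print(min_image_distance(np.array([0.02, 0.5, 0.5]), np.array([0.98, 0.5, 0.5])))  # 0.04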

Related

Calculating the nearest neighbour in a 2d grid using multilevel solution

I have a problem where, in a grid of size x*y, I am given a single dot and need to find its nearest neighbour. In practice, I am trying to find the closest dot to the cursor in pygame that crosses a color distance threshold, calculated as follows:
sqrt(((rgb1[0]-rgb2[0])**2)+((rgb1[1]-rgb2[1])**2)+((rgb1[2]-rgb2[2])**2))
So far I have a function that calculates the different resolutions for the grid and reduces it by a factor of two while always maintaining the darkest pixel. It looks as follows:
from PIL import Image
from typing import Dict
import numpy as np

# we input a pillow image object and retrieve a dictionary with every grid version of the 3-dimensional array:
def calculate_resolutions(image: Image) -> Dict[int, np.ndarray]:
    resolutions = {}
    # we start with the highest resolution image, the size of which we initially divide by 1, then 2, then 4 etc.:
    divisor = 1
    # reduce the grid by 5 iterations
    resolution_iterations = 5
    for i in range(resolution_iterations):
        pixel_lookup = image.load()  # convert image to PixelValues object, which allows for pixel lookup via [x, y] index
        # calculate the resolution of the new grid, round upwards:
        resolution = (int((image.size[0] - 1) // divisor + 1), int((image.size[1] - 1) // divisor + 1))
        # generate 3d array with new grid resolution, fill in values that are darker than white:
        new_grid = np.full((resolution[0], resolution[1], 3), np.array([255, 255, 255]))
        for x in range(image.size[0]):
            for y in range(image.size[1]):
                if not x % divisor and not y % divisor:
                    darkest_pixel = (255, 255, 255)
                    x_range = divisor if x + divisor < image.size[0] else (0 if image.size[0] - x < 0 else image.size[0] - x)
                    y_range = divisor if y + divisor < image.size[1] else (0 if image.size[1] - y < 0 else image.size[1] - y)
                    for x_ in range(x, x + x_range):
                        for y_ in range(y, y + y_range):
                            if pixel_lookup[x_, y_][0] + pixel_lookup[x_, y_][1] + pixel_lookup[x_, y_][2] < darkest_pixel[0] + darkest_pixel[1] + darkest_pixel[2]:
                                darkest_pixel = pixel_lookup[x_, y_]
                    if darkest_pixel != (255, 255, 255):
                        new_grid[int(x / divisor)][int(y / divisor)] = np.array(darkest_pixel)
        resolutions[i] = new_grid
        divisor = divisor * 2
    return resolutions
This is the most performance-efficient solution I was able to come up with. If this function is run on a grid that continually changes, like a video with x fps, it will be very performance intensive. I also considered using a kd-tree algorithm that simply adds and removes any dots that happen to change on the grid, but when it comes to finding individual nearest neighbours on a static grid this solution has the potential to be more resource efficient. I am open to any suggestions on how this function could be improved in terms of performance.
Now I am in a position where, for example, I try to find the nearest neighbour of the current cursor position in a 100x100 grid. The resulting reduced grids are 50^2, 25^2, 13^2, and 7^2.
Suppose I am on an aggregation step where the relevant part of the grid consists of six large squares, the black one being the current cursor position and the orange dots being dots where the color distance threshold is crossed: I would not know which diagonally located closest neighbour to pick to search next. In this case, going one aggregation step down shows that the lower left would be the right choice. Depending on how many grid layers I have, this could result in a very large error in the nearest-neighbour search. Is there a good way to solve this problem? If multiple squares show they have a relevant location, do I have to search them all in the next step to be sure? And if that is the case, the further away I get, the more I would need math such as the Pythagorean theorem to check whether the two positive squares I find overlap in terms of distance and could potentially contain the closest neighbour, which would start to be performance intensive again if the function is called frequently. Would it still make sense to pursue this solution over a regular kd-tree? For now the grid size is still fairly small (~800x600), but if the grid gets larger the performance may start suffering again. Is there a good, scalable solution to this problem that could be applied here?
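(As a point of comparison for the kd-tree idea mentioned above, a minimal nearest-neighbour query with scipy's cKDTree; the dots array is a hypothetical stand-in for the pixels that already pass the color-distance threshold.)
import numpy as np
from scipy.spatial import cKDTree

dots = np.random.randint(0, 800, size=(500, 2))   # hypothetical (x, y) positions of qualifying dots
tree = cKDTree(dots)                               # built once; rebuild only when the dots change

cursor = (400, 300)
dist, idx = tree.query(cursor)                     # nearest qualifying dot to the cursor
print(dots[idx], dist)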

Inverse FFT returns negative values when it should not

I have several points (x, y, z coordinates) in a 3D box with associated masses. I want to draw a histogram of the mass density that is found in spheres of a given radius R.
I have written a code that, provided I did not make any errors (which I think I may have), works in the following way:
1. My "real" data is something huge, so I wrote a little code to generate non-overlapping points randomly, with arbitrary masses, in a box.
2. I compute a 3D histogram (weighted by mass) with a binning about 10 times smaller than the radius of my spheres.
3. I take the FFT of my histogram, compute the wave-modes (kx, ky and kz) and use them to multiply my histogram in Fourier space by the analytic expression of the 3D top-hat window (sphere filtering) function in Fourier space.
4. I inverse FFT my newly computed grid.
Drawing a 1D histogram of the values in each bin would then give me what I want.
My issue is the following: given what I do, there should not be any negative values in my inverted FFT grid (step 4), but I get some, with values much higher than the numerical error.
If I run my code on a small box (300x300x300 cm3, with the points separated by at least 1 cm) I do not get the issue. I do get it for 600x600x600 cm3, though.
If I set all the masses to 0, thus working on an empty grid, I do get back zeros without any issues.
I here give my code in a full block so that it is easily copied.
import numpy as np
import matplotlib.pyplot as plt
import random
from numba import njit
# 1. Generate a bunch of points with masses from 1 to 3 separated by a radius of 1 cm
radius = 1
rangeX = (0, 100)
rangeY = (0, 100)
rangeZ = (0, 100)
rangem = (1,3)
qty = 20000 # or however many points you want
# Generate a set of all points within 1 of the origin, to be used as offsets later
deltas = set()
for x in range(-radius, radius+1):
    for y in range(-radius, radius+1):
        for z in range(-radius, radius+1):
            if x*x + y*y + z*z <= radius*radius:
                deltas.add((x, y, z))
X = []
Y = []
Z = []
M = []
excluded = set()
for i in range(qty):
    x = random.randrange(*rangeX)
    y = random.randrange(*rangeY)
    z = random.randrange(*rangeZ)
    m = random.uniform(*rangem)
    if (x, y, z) in excluded: continue
    X.append(x)
    Y.append(y)
    Z.append(z)
    M.append(m)
    excluded.update((x+dx, y+dy, z+dz) for (dx, dy, dz) in deltas)
print("There is ",len(X)," points in the box")
# Compute the 3D histogram
a = np.vstack((X, Y, Z)).T
b = 200
H, edges = np.histogramdd(a, weights=M, bins = b)
# Compute the FFT of the grid
Fh = np.fft.fftn(H, axes=(-3,-2, -1))
# Compute the different wave-modes
kx = 2*np.pi*np.fft.fftfreq(len(edges[0][:-1]))*len(edges[0][:-1])/(np.amax(X)-np.amin(X))
ky = 2*np.pi*np.fft.fftfreq(len(edges[1][:-1]))*len(edges[1][:-1])/(np.amax(Y)-np.amin(Y))
kz = 2*np.pi*np.fft.fftfreq(len(edges[2][:-1]))*len(edges[2][:-1])/(np.amax(Z)-np.amin(Z))
# I create a matrix containing the values of the filter in each point of the grid in Fourier space
R = 5
Kh = np.empty((len(kx),len(ky),len(kz)))
@njit(parallel=True)
def func_njit(kx, ky, kz, Kh):
    for i in range(len(kx)):
        for j in range(len(ky)):
            for k in range(len(kz)):
                if np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2) != 0:
                    Kh[i][j][k] = (np.sin((np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2))*R)-(np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2))*R*np.cos((np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2))*R))*3/((np.sqrt(kx[i]**2+ky[j]**2+kz[k]**2))*R)**3
                else:
                    Kh[i][j][k] = 1
    return Kh
Kh = func_njit(kx, ky, kz, Kh)
# I multiply each point of my grid by the associated value of the filter (multiplication in Fourier space = convolution in real space)
Gh = np.multiply(Fh, Kh)
# I take the inverse FFT of my filtered grid. I take the real part to get back floats but there should only be zeros for the imaginary part.
Density = np.real(np.fft.ifftn(Gh,axes=(-3,-2, -1)))
# Here it shows if there are negative values the magnitude of the error
print(np.min(Density))
D = Density.flatten()
N = np.mean(D)
# I then compute the histogram I want
hist, bins = np.histogram(D/N, bins='auto', density=True)
bin_centers = (bins[1:]+bins[:-1])*0.5
plt.plot(bin_centers, hist)
plt.xlabel('rho/rhom')
plt.ylabel('P(rho)')
plt.show()
Do you know why I'm getting these negative values? Do you think there is a simpler way to proceed?
Sorry if this is a very long post, I tried to make it very clear and will edit it with your comments, thanks a lot!
-EDIT-
A follow-up question on this issue has been posted separately.
The filter you create in the frequency domain is only an approximation to the filter you want to create. The problem is that we are dealing with the DFT here, not the continuous-domain FT (with its infinite frequencies). The Fourier transform of a ball is indeed the function you describe, but this function has infinite extent -- it is not band-limited!
By sampling this function only within a window, you are effectively multiplying it with an ideal low-pass filter (the rectangle of the domain). This low-pass filter, in the spatial domain, has negative values. Therefore, the filter you create also has negative values in the spatial domain.
A slice through the origin of the inverse transform of Kh (after applying fftshift to move the origin to the middle of the image, for better display) shows some ringing, and that ringing leads to negative values.
One way to overcome this ringing is to apply a windowing function in the frequency domain. Another option is to generate a ball in the spatial domain, and compute its Fourier transform. This second option would be the simplest to achieve. Do remember that the kernel in the spatial domain must also have the origin at the top-left pixel to obtain a correct FFT.
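A sketch of that second option (mine, not part of the original answer); it reuses b, R, Fh and the point list X from the code above, and expresses the sphere radius in histogram cells:
# express the sphere radius in grid cells (each histogram bin is (max - min) / b wide)
cell = (np.amax(X) - np.amin(X)) / b
xi = np.arange(b) - b // 2
gx, gy, gz = np.meshgrid(xi, xi, xi, indexing='ij')
ball = (gx**2 + gy**2 + gz**2 <= (R / cell)**2).astype(float)
ball /= ball.sum()                                   # normalise so the filter averages rather than sums
Kh_spatial = np.fft.fftn(np.fft.ifftshift(ball))     # move the ball's centre to the top-left corner first
Gh = Fh * Kh_spatial                                 # then proceed with the inverse FFT as before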
A windowing function is typically applied in the spatial domain to avoid issues with the image border when computing the FFT. Here, I propose to apply such a window in the frequency domain to avoid similar issues when computing the IFFT. Note, however, that this will always further reduce the bandwidth of the kernel (the windowing function would work as a low-pass filter after all), and therefore yield a smoother transition of foreground to background in the spatial domain (i.e. the spatial domain kernel will not have as sharp a transition as you might like). The best known windowing functions are Hamming and Hann windows, but there are many others worth trying out.
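For the frequency-domain route, the windowing can be as simple as tapering Kh with a separable Hann window (again only an illustrative sketch of mine; the ifftshift aligns the window's peak with the zero-frequency bin):
wx, wy, wz = (np.hanning(len(k)) for k in (kx, ky, kz))
window = wx[:, None, None] * wy[None, :, None] * wz[None, None, :]
Kh_tapered = Kh * np.fft.ifftshift(window)    # peak of the window aligned with k = 0
Gh = Fh * Kh_tapered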
Unsolicited advice:
I simplified your code to compute Kh to the following:
kr = np.sqrt(kx[:,None,None]**2 + ky[None,:,None]**2 + kz[None,None,:]**2)
kr *= R
Kh = (np.sin(kr)-kr*np.cos(kr))*3/(kr)**3
Kh[0,0,0] = 1
I find this easier to read than the nested loops. It should also be significantly faster, and avoid the need for njit. Note that you were computing the same distance (what I call kr here) 5 times. Factoring out such computation is not only faster, but yields more readable code.
Just a guess:
Where do you get the idea that the imaginary part MUST be zero? Have you ever tried to take the absolute values (sqrt(re^2 + im^2)) and forget about the phase instead of just taking the real part? Just something that came to my mind.

Monte Carlo Method, Darts in overlapping area of two circles

I am trying to estimate the value of pi using a Monte Carlo simulation. I need to use two unit circles that are a user-input distance from the origin. I understand how this problem works with a single circle; I just don't understand how I am meant to use two circles. Here is what I have so far (this is the modified code I used for a previous problem that used one circle with radius 2):
import random
import math
import sys

def main():
    numDarts = int(sys.argv[1])
    distance = float(sys.argv[2])
    print(montePi(numDarts, distance))

def montePi(numDarts, distance):
    if distance >= 1:
        return 0
    inCircle = 0
    for I in range(numDarts):
        x = (2 * random.random()) - 2
        y = random.random()
        d = math.sqrt(x**2 + y**2)
        if d <= 2 and d >= -2:
            inCircle = inCircle + 1
    pi = inCircle / numDarts * 4
    return pi

main()
I need to change this code to work with two unit circles, but I do not understand how to use trigonometry to do this, or am I overthinking the problem? Either way, help will be appreciated as I continue trying to figure this out.
What I do know is that I need to change the x coordinate, as well as the equation that determines d (d = math.sqrt(x**2 + y**2)); I'm just not sure how.
These are my instructions-
Write a program called mcintersection.py that uses the Monte Carlo method to
estimate the area of this shape (and prints the result). Your program should take
two command-line parameters: distance and numDarts. The distance parameter
specifies how far away the circles are from the origin on the x-axis. So if distance
is 0, then both circles are centered on the origin, and completely overlap. If
distance is 0.5 then one circle is centered at (-0.5, 0) and the other at (0.5, 0). If
distance is 1 or greater, then the circles do not overlap at all! In that last case, your
program can simply output 0. The numDarts parameter should specify the number
of random points to pick in the Monte Carlo process.
In this case, the rectangle should be 2 units tall (with the top at y = 1 and the
bottom at y = -1). You could also safely make the rectangle 2 units wide, but this
will generally be much bigger than necessary. Instead, you should figure out
exactly how wide the shape is, based on the distance parameter. That way you can
use as skinny a rectangle as possible.
If I understand the problem correctly, you have two unit circles centered at (distance, 0) and (-distance, 0) (that is, one is slightly to the right of the origin and one is slightly to the left). You're trying to determine if a given point, (x, y) is within both circles.
The simplest approach might be to simply compute the distance between the point and the center of each of the circles. You've already done this in your previous code; just repeat the computation twice, once with the offset distance inverted, then combine the two tests with a logical and to see if your point is in both circles.
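That straightforward version might look like the following sketch (the function name is mine; it keeps the full 2-by-2 sampling rectangle rather than the skinnier one your assignment suggests):
def circle_intersection_area_simple(num_darts, distance):
    if distance >= 1:
        return 0
    in_both = 0
    for _ in range(num_darts):
        x = random.random() * 2 - 1                     # -1 to 1
        y = random.random() * 2 - 1                     # -1 to 1
        d_right = math.sqrt((x - distance)**2 + y**2)   # distance to centre (distance, 0)
        d_left = math.sqrt((x + distance)**2 + y**2)    # distance to centre (-distance, 0)
        if d_right <= 1 and d_left <= 1:                # inside both circles
            in_both += 1
    return 4 * in_both / num_darts                      # sampling rectangle area is 2 * 2 = 4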
But a more elegant solution would be to notice how your two circles intersect each other exactly on the y-axis. To the right of the axis, the left circle is completely contained within the right one. To the left of the y-axis, the right circle is entirely within the left circle. And since the shape is symmetrical, the two halves are of exactly equal size.
This means you can limit your darts to only hitting on one side of the axis, and then get away with just a single distance test:
def circle_intersection_area(num_darts, distance):
    if distance >= 1:
        return 0
    in_circle = 0
    width = 1 - distance                      # this is enough to cover half of the target
    for i in range(num_darts):
        x = random.random() * width           # random value from 0 to 1-distance
        y = random.random() * 2 - 1           # random value from -1 to 1
        d = math.sqrt((x + distance)**2 + y**2)  # distance from (-distance, 0)
        if d <= 1:
            in_circle += 1
    sample_area = width * 2
    target_area = sample_area * (in_circle / num_darts)
    return target_area * 2  # double, since we were only testing half the target
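As a sanity check on the estimate: the exact overlap area of two unit circles whose centres are 2*distance apart is 2*(acos(distance) - distance*sqrt(1 - distance**2)), so for example:
import math
import random

distance = 0.5
estimate = circle_intersection_area(100000, distance)
exact = 2 * (math.acos(distance) - distance * math.sqrt(1 - distance**2))
print(estimate, exact)   # both around 1.228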

Pixels and geometrical shapes - Python/PIL

I'm trying to build a basic heatmap based on points. Each point has a heat radius, and therefore is represented by a circle.
Problem is that the circle needs to be converted in a list of pixels colored based on the distance from the circle's center.
Finding it hard to find an optimal solution for many points, what I have for now is something similar to this:
for pixel in pixels:
    if (pixel.x - circle.x)**2 + (pixel.y - circle.y)**2 <= circle.radius**2:
        pixel.set_color(circle.color)
Edit:
data I have:
pixel at the center of the circle
circle radius (integer)
Any tips?
Instead of doing it pixel-by-pixel, use a higher level interface with anti-aliasing, like the aggdraw module and its ellipse(xy, pen, brush) function.
Loop over the number of color steps you want (let's say, radius/2) and use 255/number_of_steps*current_step as the alpha value for the fill color.
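A rough sketch of that stepped-alpha loop (mine); it uses PIL's ImageDraw on an RGBA image instead of aggdraw (so no anti-aliasing), and the centre, radius and colour are placeholder values:
from PIL import Image, ImageDraw

img = Image.new('RGBA', (200, 200), (255, 255, 255, 255))
draw = ImageDraw.Draw(img, 'RGBA')             # 'RGBA' mode so the fills are alpha-blended

cx, cy, radius = 100, 100, 60                  # hypothetical circle centre and heat radius
steps = radius // 2
for step in range(steps):
    r = int(radius * (steps - step) / steps)   # concentric circles, shrinking towards the centre
    alpha = int(255 / steps * (step + 1))      # the 255/number_of_steps*current_step rule from above
    draw.ellipse([cx - r, cy - r, cx + r, cy + r], fill=(255, 0, 0, alpha))

img.save('heat_point.png')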
For plotting it is usually recommended to use the matplotlib library (e.g. using imshow for heatmaps). Of course matplotlib also supports color gradients.
However, I don't really understand what you are trying to accomplish. If you just want to draw a bunch of colored circles then pretty much any graphics library will do (e.g. using the ellipse function in PIL).
It sounds like you want to color the pixel according to their distance from the center, but your own example code suggests that the color is constant?
If you are handling your pixels yourself and your goal is to increase performance, you can focus on the square [x - radius; x + radius] * [y - radius; y + radius], since the points of your circle live there. That will save you a lot of useless iterations, if of course you CAN focus on this region (i.e. your pixels are not just an array without an index per line and column).
You can even be sure that the pixels in the square [x - radius*sqrt(2)/2; x + radius*sqrt(2)/2] * [y - radius*sqrt(2)/2; y + radius*sqrt(2)/2] must be colored, by basic geometry (it is the largest square inscribed in the circle).
So you could do:
import math
half_sqrt = math.sqrt(2) / 2
x_max = x + half_sqrt
y_max = y + half_sqrt
for (i in range(x, x + radius + 1):
for (j in range(y, y + radius + 1):
if (x <= x_max and y <= y_max):
colorize_4_parts(i, j)
else:
pixel = get_pixel(i, j)
if (pixel.x - circle.x)**2 + (pixel.y - circle.y)**2 <= circle.radius:
# Apply same colors as above, could be a function
colorize_4_parts(i, j)
def colorize_4_parts(i, j):
# Hoping you have access to such a function get_pixel !
pixel_top_right = get_pixel(i, j)
pixel_top_right.set_color(circle.color)
pixel_top_left = get_pixel(2 * x - i, j)
pixel_top_leftt.set_color(circle.color)
pixel_bot_right = get_pixel(i, 2 * y - j)
pixel_bot_right.set_color(circle.color)
pixel_bot_left = get_pixel(2 * x - i, 2 * y - j)
pixel_bot_leftt.set_color(circle.color)
This is optimized to reduce costly computations to the minimum.
EDIT: the function has been updated to be more efficient again: I had forgotten that we have a double symmetry, horizontal and vertical, so we only need to compute the top-right quarter!
This is a very common operation, and here's how people do it...
summary: Represent the point density on a grid, smooth this using a 2D convolution if needed (this turns your points into circles), and plot the result as a heatmap using matplotlib.
In more detail: First, make a 2D grid for your heatmap, and add your data points to the grid, incrementing a cell by 1 each time a data point lands in it. Second, make another grid that represents the shape you want to give each point (usually people use a cylinder or a Gaussian or something like this). Third, convolve these two together, using, say, scipy.signal.convolve2d. Finally, use matplotlib's imshow function to plot the convolution, and this will be your heatmap.
If you can't use the tools suggested here you can find work-arounds, but the standard approach has advantages. For example, the convolution will deal well with cases where the circles overlap.
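A minimal sketch of that pipeline (the grid size, point data and kernel width below are arbitrary placeholders):
import numpy as np
from scipy.signal import convolve2d
import matplotlib.pyplot as plt

# 1. bin the points onto a grid
points = np.random.rand(200, 2) * 100                  # hypothetical (x, y) data
density, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=100, range=[[0, 100], [0, 100]])

# 2. build the per-point shape (here a gaussian with sigma = 3 cells)
r = np.arange(-10, 11)
gx, gy = np.meshgrid(r, r)
kernel = np.exp(-(gx**2 + gy**2) / (2 * 3.0**2))

# 3. convolve and plot
heat = convolve2d(density, kernel, mode='same')
plt.imshow(heat.T, origin='lower', cmap='hot')
plt.colorbar()
plt.show()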

How to 'zoom' in on a section of the Mandelbrot set?

I have created a Python file to generate a Mandelbrot set image. The original maths code was not mine, so I do not understand it - I only heavily modified it to make it about 250x faster (Threads rule!).
Anyway, I was wondering how I could modify the maths part of the code to make it render one specific bit. Here is the maths part:
for y in xrange(size[1]):
    coords = (uleft[0] + (x/size[0]) * (xwidth), uleft[1] - (y/size[1]) * (ywidth))
    z = complex(coords[0], coords[1])
    o = complex(0, 0)
    dotcolor = 0  # default, convergent
    for trials in xrange(n):
        if abs(o) <= 2.0:
            o = o**2 + z
        else:
            dotcolor = trials
            break  # diverged
    im.putpixel((x, y), dotcolor)
And the size definitions:
size1 = 500
size2 = 500
size = (size1, size2)
n = 64
box = ((-2, 1.25), (0.5, -1.25))
plus = size[1] + size[0]
uleft = box[0]
lright = box[1]
xwidth = lright[0] - uleft[0]
ywidth = uleft[1] - lright[1]
What do I need to modify to make it render a certain section of the set?
The line:
box=((-2,1.25),(0.5,-1.25))
is the bit that defines the area of coordinate space that is being rendered, so you just need to change this line. The first coordinate pair is the top-left of the area, the second is the bottom-right.
To get a new coordinate from the image should be quite straightforward. You've got two coordinate systems, your "image" system 100x100 pixels in size, origin at (0,0). And your "complex" plane coordinate system defined by "box". For X:
X_complex=X_complex_origin+(X_image/X_image_width)*X_complex_width
The key in understanding how to do this is to understand what the coords = line is doing:
coords = (uleft[0] + (x/size[0]) * (xwidth),uleft[1] - (y/size[1]) * (ywidth))
Effectively, the x and y values you are looping through, which correspond to the coordinates of the on-screen pixel, are being translated to the corresponding point on the complex plane being looked at. This means that the (0,0) screen coordinate will translate to the upper-left corner of the region being looked at, (-2, 1.25), and (1,0) will be the same but moved 1/500 of the distance between the -2 and 0.5 x-coordinates (assuming a 500-pixel-wide window).
That's exactly what that line is doing - I'll expand just the X-coordinate bit with more illustrative variable names to indicate this:
mandel_x = mandel_start_x + (screen_x / screen_width) * mandel_width
(The mandel_ variables refer to the coordinates on the complex plane, the screen_ variables refer to the on-screen coordinates of the pixel being plotted.)
If you then want to take a region of the screen to zoom into, you do exactly the same: take the screen coordinates of the upper-left and lower-right corners of the region, translate them to complex-plane coordinates, and make those the new uleft and lright variables. I.e. to zoom in on the box delimited by on-screen coordinates (x1,y1)..(x2,y2), use:
new_uleft = (uleft[0] + (x1/size[0]) * (xwidth), uleft[1] - (y1/size[1]) * (ywidth))
new_lright = (uleft[0] + (x2/size[0]) * (xwidth), uleft[1] - (y2/size[1]) * (ywidth))
(Obviously you'll need to recalculate the size, xwidth, ywidth and other dependent variables based on the new coordinates)
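For instance, with the original box and a 500x500 image, zooming into the hypothetical on-screen rectangle (100,100)..(300,300) works out as follows (true division assumed):
size = (500, 500)
uleft, lright = (-2, 1.25), (0.5, -1.25)
xwidth = lright[0] - uleft[0]            # 2.5
ywidth = uleft[1] - lright[1]            # 2.5

x1, y1, x2, y2 = 100, 100, 300, 300      # hypothetical zoom rectangle in screen pixels
new_uleft = (uleft[0] + (x1 / size[0]) * xwidth, uleft[1] - (y1 / size[1]) * ywidth)
new_lright = (uleft[0] + (x2 / size[0]) * xwidth, uleft[1] - (y2 / size[1]) * ywidth)
print(new_uleft, new_lright)             # (-1.5, 0.75) and (-0.5, -0.25)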
In case you're curious, the maths behind the Mandelbrot set isn't that complicated (just complex).
All it is doing is taking a particular coordinate, treating it as a complex number, and then repeatedly squaring it and adding the original number to it.
For some numbers, doing this will cause the result to diverge, constantly growing towards infinity as you repeat the process. For others, it will always stay below a certain level (e.g. obviously (0.0, 0.0) never gets any bigger under this process). The Mandelbrot set (the black region) consists of the coordinates which don't diverge. It's been shown that once the magnitude of the value exceeds 2 the sequence is guaranteed to diverge, which is why your code tests against 2.0.
Usually the regions that diverge get plotted with the number of iterations it takes for them to exceed this value (the trials variable in your code), which is what produces the coloured regions.
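Put as a tiny self-contained function, the iteration described above looks like this (a sketch mirroring your inner loop, not your threaded renderer):
def escape_time(c, max_iter=64):
    o = 0j
    for trials in range(max_iter):
        if abs(o) > 2.0:
            return trials        # diverged: this becomes the pixel's colour value
        o = o * o + c
    return 0                     # never exceeded 2.0: treated as inside the set

print(escape_time(complex(0, 0)))      # 0  (inside the set)
print(escape_time(complex(1, 1)))      # 2  (diverges after a couple of iterations)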
