Creating a spatial index for QGIS 2 spatial join (PyQGIS) - python

I've written a bit of code to do a simple spatial join in QGIS 2 and 2.2 (points that lie within a buffer to take attribute of the buffer). However, I'd like to employ a QgsSpatialIndex in order to speed things up a bit. Where can I go from here:
pointProvider = self.pointLayer.dataProvider()
rotateProvider = self.rotateBUFF.dataProvider()

all_point = pointProvider.getFeatures()
point_spIndex = QgsSpatialIndex()
for feat in all_point:
    point_spIndex.insertFeature(feat)

all_line = rotateProvider.getFeatures()
line_spIndex = QgsSpatialIndex()
for feat in all_line:
    line_spIndex.insertFeature(feat)

rotate_IDX = self.rotateBUFF.fieldNameIndex('bearing')
point_IDX = self.pointLayer.fieldNameIndex('bearing')

self.pointLayer.startEditing()
for rotatefeat in self.rotateBUFF.getFeatures():
    for pointfeat in self.pointLayer.getFeatures():
        if pointfeat.geometry().intersects(rotatefeat.geometry()):
            pointID = pointfeat.id()
            bearing = rotatefeat.attributes()[rotate_IDX]
            self.pointLayer.changeAttributeValue(pointID, point_IDX, bearing)
self.pointLayer.commitChanges()

To do this kind of spatial join, you can use the QgsSpatialIndex (http://www.qgis.org/api/classQgsSpatialIndex.html) intersects(QgsRectangle) function to get a list of candidate featureIDs or the nearestNeighbor (QgsPoint,n) function to get the list of the n nearest neighbours as featureIDs.
Since you only want the points that lie within the buffer, the intersects function seems most suitable. I have not tested if a degenerate bbox (point) can be used. If not, just make a very small bounding box around your point.
The intersects function returns all features that have a bounding box that intersects the given rectangle, so you will have to test these candidate features for a true intersection.
Your outer loop should be over the points (you want to add attribute values to each point from its containing buffer).
# If degenerate rectangles are allowed, delta could be 0;
# if not, choose a suitable, small value
delta = 0.1
# Loop through the points
for point in all_point:
    # Create a search rectangle
    # Assuming that all_point consists of QgsPoint
    searchRectangle = QgsRectangle(point.x() - delta, point.y() - delta,
                                   point.x() + delta, point.y() + delta)
    # Use the search rectangle to get candidate buffers from the buffer index
    candidateIDs = line_spIndex.intersects(searchRectangle)
    # Loop through the candidate buffers to find the first one that contains the point
    for candidateID in candidateIDs:
        candFeature = rotateProvider.getFeatures(QgsFeatureRequest(candidateID)).next()
        if candFeature.geometry().contains(point):
            # Do something useful with the point - buffer pair
            # No need to look further, so break
            break
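If it helps, here's a minimal sketch combining the index lookup above with the attribute write-back from your question (untested; it reuses the names defined above and assumes the QGIS 2.x Python API, where feature iterators expose .next() under Python 2):
self.pointLayer.startEditing()
for pointfeat in pointProvider.getFeatures():
    p = pointfeat.geometry().asPoint()
    # Small search rectangle around the point, as above
    searchRectangle = QgsRectangle(p.x() - delta, p.y() - delta,
                                   p.x() + delta, p.y() + delta)
    # Only the candidate buffers returned by the index get the exact test
    for candidateID in line_spIndex.intersects(searchRectangle):
        candFeature = rotateProvider.getFeatures(QgsFeatureRequest(candidateID)).next()
        if candFeature.geometry().contains(pointfeat.geometry()):
            bearing = candFeature.attributes()[rotate_IDX]
            self.pointLayer.changeAttributeValue(pointfeat.id(), point_IDX, bearing)
            break  # first containing buffer wins
self.pointLayer.commitChanges()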

Related

How can I determine which curve is closest to a given set of points?

I have several dataframes which each contain two columns of x and y values, so each row represents a point on a curve. The different dataframes then represent contours on a map. I have another series of data points (fewer in number), and I'd like to see which contour they are closest to on average.
I would like to establish the distance from each datapoint to each point on the curve, with sqrt(x^2+y^2) - sqrt(x_1^2 + y_1^2), and add these up for each point on the curve. The trouble is that there are several thousand points on the curve, and there are only a few dozen datapoints to assess, so I can't simply put these in columns next to each other.
I think I need to cycle through the datapoints, checking the squared distance between them and each point on the curve.
I don't know whether there is an easy function or module that can do this.
Thanks in advance!
Edit: Thanks for the comments. @Alexander: I've tried the vectorize function, as follows, with a sample dataset. I'm actually using contours which comprise several thousand datapoints, and the dataset to compare against is 100+ points, so I'd like to be able to automate as much as possible. I'm currently able to create a distance measurement from the first datapoint against my contour, but I would ideally like to cycle through j as well. When I try it, it comes up with an error:
import numpy as np
from numpy import vectorize
import pandas as pd
from pandas import DataFrame

df1 = {'X1': ['1', '2', '2', '3'], 'Y1': ['2', '5', '7', '9']}
df1 = DataFrame(df1, columns=['X1', 'Y1'])
df2 = {'X2': ['3', '5', '6'], 'Y2': ['10', '15', '16']}
df2 = DataFrame(df2, columns=['X2', 'Y2'])
df1 = df1.astype(float)
df2 = df2.astype(float)

Distance = pd.DataFrame()
i = range(0, len(df1))
j = range(0, len(df2))

def myfunc(x1, y1, x2, y2):
    return np.sqrt((x2 - x1)**2 + np.sqrt(y2 - y1)**2)

vfunc = np.vectorize(myfunc)
Distance['Distance of Datapoint j to Contour'] = vfunc(df1.iloc[i]['X1'], df1.iloc[i]['Y1'], df2.iloc[0]['X2'], df2.iloc[0]['Y2'])
Distance['Distance of Datapoint j to Contour'] = vfunc(df1.iloc[i]['X1'], df1.iloc[i]['Y1'], df2.iloc[1]['X2'], df2.iloc[1]['Y2'])
Distance
General idea
The "curve" is actually a polygon with a lot of points. There are definitely libraries that can calculate the distance between a polygon and a point. But generally it will be something like:
Calculate an "approximate distance" to the whole polygon, e.g. to the bounding box of the polygon (from the point to its 4 line segments), or to the center of the bounding box
Calculate distances to the lines of the polygon. If you have too many points, then as an extra step the "resolution" of the polygon might be reduced.
The smallest distance found is the distance from the point to the polygon.
Repeat for each point and each polygon.
Existing solutions
Some libraries already can do that:
shapely question, shapely Geo-Python docs
Using shapely in geopandas to calculate distance
scipy.spatial.distance: scipy can be used to calculate distance between arbitrary number of points
numpy.linalg.norm(point1-point2): some answers propose different ways to calculate distance using numpy. Some even show performance benchmarks
sklearn.neighbors: not really about curves and distances to them, but can be used if you want to check to which "area" a point most likely belongs
And you can always calculate distances yourself using D(x1, y1, x2, y2) = sqrt((x₂-x₁)² + (y₂-y₁)²) and search for the combination of points that gives the minimal distance
Example:
# get distance from points of 1 dataset to all the points of another dataset
from scipy.spatial import distance
d = distance.cdist(df1.to_numpy(), df2.to_numpy(), 'euclidean')
print(d)
# Results will be a matrix of all possible distances:
# [[ D(Point_df1_0, Point_df2_0), D(Point_df1_0, Point_df2_1), D(Point_df1_0, Point_df2_2)]
# [ D(Point_df1_1, Point_df2_0), D(Point_df1_1, Point_df2_1), D(Point_df1_1, Point_df2_2)]
# [ D(Point_df1_2, Point_df2_0), D(Point_df1_2, Point_df2_1), D(Point_df1_2, Point_df2_2)]
# [ D(Point_df1_3, Point_df2_0), D(Point_df1_3, Point_df2_1), D(Point_df1_3, Point_df2_2)]]
[[ 8.24621125 13.60147051 14.86606875]
[ 5.09901951 10.44030651 11.70469991]
[ 3.16227766 8.54400375 9.8488578 ]
[ 1. 6.32455532 7.61577311]]
What to do next is up to you. For example as a metric of "general distance between curves" you can:
Pick the smallest values in each row and each column (if you skip some columns/rows, then you might end up with a candidate that "matches only a part of the contour"), and calculate their median: np.median(np.hstack([np.amin(d, axis) for axis in range(len(d.shape))])).
Or you can calculate the median of:
all the distances: np.median(d)
of "smallest 2/3 of distances": np.median(d[d<np.percentile(d, 66, interpolation='higher')])
of the "smallest distances that cover at least each row and each column":
for min_value in np.sort(d, None):
    chosen_indices = d <= min_value
    if np.all(np.hstack([np.amax(chosen_indices, axis) for axis in range(len(chosen_indices.shape))])):
        break
similarity = np.median(d[chosen_indices])
Or maybe you can use a different type of distance from the beginning (e.g. "correlation distance" looks promising for your task)
Maybe use "Procrustes analysis, a similarity test for two data sets" together with distances.
Maybe you can use minkowski distance as a similarity metric.
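A quick sketch of the scipy helpers behind the last two suggestions (both require the two point sets to have the same length, so resampling your contours to a common size first is assumed here):
import numpy as np
from scipy.spatial import procrustes, distance

a = np.random.rand(50, 2)  # contour, resampled to 50 points
b = np.random.rand(50, 2)  # datapoints, resampled/padded to the same length

# Procrustes: disparity left over after the optimal translation/scaling/rotation
mtx1, mtx2, disparity = procrustes(a, b)
print(disparity)

# Minkowski distance between the flattened point sets (p=2 is Euclidean)
print(distance.minkowski(a.ravel(), b.ravel(), p=2))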
Alternative approach
An alternative approach would be to use some "geometry" library to compare the areas of concave hulls:
Build concave hulls for the contours and for the "candidate datapoints" (not easy, but possible: using shapely, using concaveman). But if you are sure that your contours are already ordered and without overlapping segments, then you can build polygons directly from those points without needing a concave hull.
Use "intersection area" minus "non-common area" as a metric of similarity (shapely can be used for that):
Non-common area is: union - intersection, or simply the "symmetric difference"
Final metric: intersection.area - symmetric_difference.area (a sketch follows the "Other ideas" list below)
This approach might be better than comparing distances in some situations, for example:
You want to prefer "fewer points covering the whole area" over "a huge amount of very close points that cover only half of the area"
It's a more obvious way to compare candidates with a different number of points
But it has its disadvantages too (just draw some examples on paper and experiment to find them)
Other ideas:
instead of using polygons or concave hull you can:
build a linear ring from your points and then use contour.buffer(some_distance). This way you ignore "internal area" of the contour and only compare contour itself (with tolerance of some_distance). Distance between centroids (or double of that) may be used as value for some_distance
You can build polygons/lines from segments using ops.polygonize
instead of using intersection.area - symmetric_difference.area you can:
Snap one object to another, and then compare snapped object to original
Before comparing real objects you can compare "simpler" versions of the objects to filter out obvious mismatches:
For example you can check if boundaries of objects intersect
Or you can simplify geometries before comparing them
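Here's a minimal sketch of the area-based metric, assuming shapely and contours that are already ordered rings (the fixed rectangles are just stand-ins for real contours):
from shapely.geometry import Polygon

contour_a = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])  # stand-in contour
contour_b = Polygon([(1, 1), (5, 1), (5, 4), (1, 4)])  # stand-in candidate hull

# buffer(0) is a common shapely trick to clean up self-intersections
contour_a = contour_a.buffer(0)
contour_b = contour_b.buffer(0)

intersection = contour_a.intersection(contour_b)
sym_diff = contour_a.symmetric_difference(contour_b)
print(intersection.area - sym_diff.area)  # the "final metric" from above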
For the distance, you need to change your formula to
from math import sqrt

def getDistance(x, y, x_i, y_i):
    # Note: exponentiation in Python is **, not ^ (bitwise XOR)
    return sqrt((x_i - x)**2 + (y_i - y)**2)
with (x,y) being your datapoint and (x_i, y_i) being a point from the curve.
Consider using NumPy for vectorization. Explicitly looping through your data points will most likely be less efficient; depending on your use case it might still be quick enough, but if you need to run this on a regular basis, vectorization will easily outpace the explicit loop. This could look something like this:
import numpy as np  # universal abbreviation for the module

datapoints = np.random.rand(3, 2)   # array with randomized entries of size 3x2 (imagine it as 3 sets of x- and y-values)
contour1 = np.random.rand(1000, 2)  # other than the size (which is 1000x2) no different from datapoints
contour2 = np.random.rand(1000, 2)
contour3 = np.random.rand(1000, 2)

def squareDistanceUnvectorized(datapoint, contour):
    retVal = 0.
    print("Using datapoint with values x:{}, y:{}".format(datapoint[0], datapoint[1]))
    lengthOfContour = np.size(contour, 0)  # this gets you the number of rows in the array
    for pointID in range(lengthOfContour):
        squaredXDiff = np.square(contour[pointID, 0] - datapoint[0])
        squaredYDiff = np.square(contour[pointID, 1] - datapoint[1])
        retVal += np.sqrt(squaredXDiff + squaredYDiff)
    retVal = retVal / lengthOfContour  # as we want the average, we divide the sum by the element count
    return retVal
if __name__ == "__main__":
    noOfDatapoints = np.size(datapoints, 0)
    contID = 0
    for currentDPID in range(noOfDatapoints):
        dist1 = squareDistanceUnvectorized(datapoints[currentDPID, :], contour1)
        dist2 = squareDistanceUnvectorized(datapoints[currentDPID, :], contour2)
        dist3 = squareDistanceUnvectorized(datapoints[currentDPID, :], contour3)
        # Pick the contour with the smallest average distance
        if dist1 < dist2 and dist1 < dist3:
            contID = 1
        elif dist2 < dist1 and dist2 < dist3:
            contID = 2
        elif dist3 < dist1 and dist3 < dist2:
            contID = 3
        else:
            contID = 0
        if contID == 0:
            print("Datapoint {} is in between two contours".format(currentDPID))
        else:
            print("Datapoint {} is closest to contour {}".format(currentDPID, contID))
Okay, now moving on to vector-land.
I have taken the liberty of adjusting this part to what I think your dataset looks like. Try it and let me know if it works.
import numpy as np
import pandas as pd
# Generate 1000 points (2-dim Vector) with random values between 0 and 1. Make them strings afterwards.
# This is the first contour
random2Ddata1 = np.random.rand(1000,2)
listOfX1 = [str(x) for x in random2Ddata1[:,0]]
listOfY1 = [str(y) for y in random2Ddata1[:,1]]
# Do the same for a second contour, except that we de-center this 255 units into the first dimension
random2Ddata2 = np.random.rand(1000,2)+[255,0]
listOfX2 = [str(x) for x in random2Ddata2[:,0]]
listOfY2 = [str(y) for y in random2Ddata2[:,1]]
# After this step, our 'contours' are basically two blobs of datapoints whose centers are approx. 255 units apart.
# Generate a set of 4 datapoints and make them a Pandas-DataFrame
datapoints = {'X': ['0.5', '0', '255.5', '0'], 'Y': ['0.5', '0', '0.5', '-254.5']}
datapoints = pd.DataFrame(datapoints, columns=['X', 'Y'])
# Do the same for the two contours
contour1 = {'Xf': listOfX1, 'Yf': listOfY1}
contour1 = pd.DataFrame(contour1, columns=['Xf', 'Yf'])
contour2 = {'Xf': listOfX2, 'Yf': listOfY2}
contour2 = pd.DataFrame(contour2, columns=['Xf', 'Yf'])
# We do now have 4 datapoints.
# - The first datapoint is basically where we expect the mean of the first contour to be.
# Contour 1 consists of 1000 points with x, y- values between 0 and 1
# - The second datapoint is at the origin. Its distances should be similar to the once of the first datapoint
# - The third datapoint would be the result of shifting the first datapoint 255 units into the positive first dimension
# - The fourth datapoint would be the result of shifting the first datapoint 255 units into the negative second dimension
# Transformation into numpy array
# First the x and y values of the data points
dpArray = ((datapoints.values).T).astype(float)
c1Array = ((contour1.values).T).astype(float)
c2Array = ((contour2.values).T).astype(float)
# This did the following:
# - Transform the datapoints and contours into numpy arrays
# - Transpose them afterwards so that if we want all x values, we can write var[0,:] instead of var[:,0].
# A personal preference, maybe
# - Convert all the values into floats.
# Now, we iterate through the contours. If you have a lot of them, putting them into a list beforehand would do the job
for contourid, contour in enumerate([c1Array, c2Array]):
    # Now for the datapoints
    for _index, _value in enumerate(dpArray[0, :]):
        # The next two lines do vectorization magic.
        # First, we square the difference between one dpArray entry and the contour x values.
        # You might notice that contour[0,:] returns a 1x1000 vector while dpArray[0,_index] is a single float value.
        # This works because dpArray[0,_index] is broadcast to fit the size of contour[0,:].
        dx = np.square(dpArray[0, _index] - contour[0, :])
        # The same happens for dpArray[1,_index] and contour[1,:]
        dy = np.square(dpArray[1, _index] - contour[1, :])
        # Now, we take (for one datapoint and one contour) the mean value and print it.
        # You could write it into an array or do basically anything with it that you can imagine
        distance = np.mean(np.sqrt(dx + dy))
        print("Mean distance between contour {} and datapoint {}: {}".format(contourid + 1, _index + 1, distance))
# But you want to be able to call this... so here we go, generating a function out of it!
def getDistanceFromDatapointsToListOfContoursFindBetterName(datapoints, listOfContourDataFrames):
    """ Takes a DataFrame with points and a list of different contours to return the average distance for each combination"""
    dpArray = ((datapoints.values).T).astype(float)
    listOfContours = []
    for item in listOfContourDataFrames:
        listOfContours.append(((item.values).T).astype(float))
    retVal = np.zeros((np.size(dpArray, 1), len(listOfContours)))
    for contourid, contour in enumerate(listOfContours):
        for _index, _value in enumerate(dpArray[0, :]):
            dx = np.square(dpArray[0, _index] - contour[0, :])
            dy = np.square(dpArray[1, _index] - contour[1, :])
            distance = np.mean(np.sqrt(dx + dy))
            print("Mean distance between contour {} and datapoint {}: {}".format(contourid + 1, _index + 1, distance))
            retVal[_index, contourid] = distance
    return retVal

# And just to see that it is, indeed, returning the same results, run it once
getDistanceFromDatapointsToListOfContoursFindBetterName(datapoints, [contour1, contour2])

Finding the intersection of two cylinders in 3D space using VTK in python

Using VTK in Python I wrote some code to create an actor for the objects I want, e.g. for a cylinder:
def cylinder_object(startPoint, endPoint, radius, my_color="DarkRed"):
    USER_MATRIX = True
    colors = vtk.vtkNamedColors()
    cylinderSource = vtk.vtkCylinderSource()
    cylinderSource.SetRadius(radius)
    cylinderSource.SetResolution(50)
    rng = vtk.vtkMinimalStandardRandomSequence()
    rng.SetSeed(8775070)  # for testing
    # Compute a basis
    normalizedX = [0] * 3
    normalizedY = [0] * 3
    normalizedZ = [0] * 3
    # The X axis is a vector from start to end
    vtk.vtkMath.Subtract(endPoint, startPoint, normalizedX)
    length = vtk.vtkMath.Norm(normalizedX)
    vtk.vtkMath.Normalize(normalizedX)
    # The Z axis is an arbitrary vector cross X
    arbitrary = [0] * 3
    for i in range(0, 3):
        rng.Next()
        arbitrary[i] = rng.GetRangeValue(-10, 10)
    vtk.vtkMath.Cross(normalizedX, arbitrary, normalizedZ)
    vtk.vtkMath.Normalize(normalizedZ)
    # The Y axis is Z cross X
    vtk.vtkMath.Cross(normalizedZ, normalizedX, normalizedY)
    matrix = vtk.vtkMatrix4x4()
    # Create the direction cosine matrix
    matrix.Identity()
    for i in range(0, 3):
        matrix.SetElement(i, 0, normalizedX[i])
        matrix.SetElement(i, 1, normalizedY[i])
        matrix.SetElement(i, 2, normalizedZ[i])
    # Apply the transforms
    transform = vtk.vtkTransform()
    transform.Translate(startPoint)    # translate to starting point
    transform.Concatenate(matrix)      # apply direction cosines
    transform.RotateZ(-90.0)           # align cylinder to x axis
    transform.Scale(1.0, length, 1.0)  # scale along the height vector
    transform.Translate(0, .5, 0)      # translate to start of cylinder
    # Transform the polydata
    transformPD = vtk.vtkTransformPolyDataFilter()
    transformPD.SetTransform(transform)
    transformPD.SetInputConnection(cylinderSource.GetOutputPort())
    # Create a mapper and actor for the cylinder
    mapper = vtk.vtkPolyDataMapper()
    actor = vtk.vtkActor()
    if USER_MATRIX:
        mapper.SetInputConnection(cylinderSource.GetOutputPort())
        actor.SetUserMatrix(transform.GetMatrix())
    else:
        mapper.SetInputConnection(transformPD.GetOutputPort())
    actor.SetMapper(mapper)
    actor.GetProperty().SetColor(colors.GetColor3d(my_color))
    return actor
This function returns an actor that I can render later using vtkRenderer.
Now what I want is to first find whether two given cylinder actors intersect or not, and second to find the intersection points.
Could I use vtkTriangleFilter on my cylinders and then vtkOBBTree with ray casting to find whether the intersection happens or not?
Here are two oriented cylinders that intersect:
First, you'll need to work on the vtkPolyData object (i.e. the geometry), not on the vtkActor. You'll probably need to use the vtkTransformPolyDataFilter output as your vtkPolyData (as you did in the else branch - example here) rather than calling SetUserMatrix.
You can use vtkBooleanOperationPolyDataFilter: an example can be found here (in C++, but I'm sure it can help) and here (in Python). If the resulting geometry is not empty, then the cylinders intersect.
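Here's a minimal sketch of that check (untested; cylinder1_pd and cylinder2_pd are hypothetical names standing in for the transformed vtkPolyData of your two cylinders):
tri1 = vtk.vtkTriangleFilter()
tri1.SetInputData(cylinder1_pd)  # the boolean filter wants triangle meshes
tri2 = vtk.vtkTriangleFilter()
tri2.SetInputData(cylinder2_pd)

booleanOp = vtk.vtkBooleanOperationPolyDataFilter()
booleanOp.SetOperationToIntersection()
booleanOp.SetInputConnection(0, tri1.GetOutputPort())
booleanOp.SetInputConnection(1, tri2.GetOutputPort())
booleanOp.Update()

if booleanOp.GetOutput().GetNumberOfPoints() > 0:
    print("The cylinders intersect")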
If it does not fit your needs, you can convert the cylinders from polydata to imagedata (image volume, voxels) using vtkImplicitModeller; then computing the intersection volume is easier and more accurate (you can use vtkImageLogic). You can also convert the intersection back to vtkPolyData using vtkFlyingEdges3D (a fast version of vtkMarchingCubes).
Edit: as discussed in the comments, execution time matters because there are many cylinders. You could try to optimize the process by computing the distance between the axes of each pair of cylinders to detect IF they intersect, and in case they do, compute the intersection as described in the first part of this answer. My idea is the following: compute the shortest distance between the axis segments (one method is described here; there's also C++ code for segment-to-segment distance, which is what you need). Compare that distance with the sum of the radii of the two cylinders, and if it's shorter, compute the intersection.
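As a sketch of that pre-filter (this is the standard closest-point-between-segments routine, e.g. from Ericson's Real-Time Collision Detection, written with numpy; the axis endpoints and radii are assumed inputs, not values from the question):
import numpy as np

def segment_distance(p1, q1, p2, q2, eps=1e-9):
    # Shortest distance between segments p1-q1 and p2-q2 (e.g. the cylinder
    # axes), all given as numpy arrays of shape (3,)
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a <= eps and e <= eps:  # both segments degenerate to points
        return np.linalg.norm(r)
    if a <= eps:
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    elif e <= eps:
        s, t = np.clip(-(d1 @ r) / a, 0.0, 1.0), 0.0
    else:
        b, c = d1 @ d2, d1 @ r
        denom = a * e - b * b
        s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
        t = (b * s + f) / e
        if t < 0.0:
            t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
        elif t > 1.0:
            t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

# Only run the expensive polydata intersection when the axes come close enough:
# if segment_distance(a_start, a_end, b_start, b_end) < radius_a + radius_b: ...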

Find irregular region in 4D numpy array of gridded data (lat/lon)

I have a large 4-dimensional dataset of Temperatures [time,pressure,lat,lon].
I need to find all grid points within a region defined by lat/lon indices and calculate an average over the region to leave me with a 2-dimensional array.
I know how to do this if my region is a rectangle (or square) but how can this be done with an irregular polygon?
Below is an image showing the regions I need to average together and the lat/lon grid the data is gridded to in the array
I believe this should solve your problem.
The code below generates all cells in a polygon defined by a list of vertices.
It "scans" the polygon row by row keeping track of the transition columns where you (re)-enter or exit the polygon.
def row(x, transitions):
    """ generator spitting all cells in a row given a list of transition (in/out) columns."""
    i = 1
    in_poly = True
    y = transitions[0]
    while i < len(transitions):
        if in_poly:
            while y < transitions[i]:
                yield (x, y)
                y += 1
            in_poly = False
        else:
            in_poly = True
            y = transitions[i]
        i += 1

def get_same_row_vert(i, vertices):
    """ find all vertex columns in the same row as vertices[i], and return next vertex index as well."""
    vert = []
    x = vertices[i][0]
    while i < len(vertices) and vertices[i][0] == x:
        vert.append(vertices[i][1])
        i += 1
    return vert, i

def update_transitions(old, new):
    """ update old transition columns for a row given new vertices.
    That is: merge both lists and remove duplicate values (2 transitions at the same column cancel each other)"""
    if old == []:
        return new
    if new == []:
        return old
    o0 = old[0]
    n0 = new[0]
    if o0 == n0:
        return update_transitions(old[1:], new[1:])
    if o0 < n0:
        return [o0] + update_transitions(old[1:], new)
    return [n0] + update_transitions(old, new[1:])

def polygon(vertices):
    """ generator spitting all cells in the polygon defined by given vertices."""
    vertices.sort()
    x = vertices[0][0]
    transitions, i = get_same_row_vert(0, vertices)
    while i < len(vertices):
        while x < vertices[i][0]:
            for cell in row(x, transitions):
                yield cell
            x += 1
        vert, i = get_same_row_vert(i, vertices)
        transitions = update_transitions(transitions, vert)

# define a "strange" polygon (hook shaped)
vertices = [(0,0),(0,3),(4,3),(4,0),(3,0),(3,2),(1,2),(1,1),(2,1),(2,0)]
for cell in polygon(vertices):
    print cell
    # or do whatever you need to do
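To get from these cells to the 2-dimensional average the question asks for, a small sketch (the array shape is a made-up stand-in for your Temperatures[time, pressure, lat, lon]):
import numpy as np

T = np.random.rand(4, 10, 90, 180)  # stand-in for Temperatures

mask = np.zeros(T.shape[-2:], dtype=bool)
for i, j in polygon(vertices):  # cells from the generator above
    mask[i, j] = True

# Boolean-index the two trailing axes and average over the selected grid
# points, leaving a [time, pressure] array
region_mean = T[..., mask].mean(axis=-1)
print(region_mean.shape)  # (4, 10)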
The general class of problems is called "point in polygon", and the (fairly) standard algorithm is based on drawing a test line through the point under consideration and counting the number of times it crosses the polygon boundary (it's really cool/weird that it works so simply, I think). This is a really good overview which includes implementation information.
For your problem in particular, since each of your regions is defined by a small number of square cells, I think a more brute-force approach might be better (see the sketch after this list). Perhaps something like:
For each region, form a list of all of the (lat/lon) squares which define it. Depending on how your regions are defined, this may be trivial, or annoying...
For each point you are examining, figure out which square it lives in. Since the squares are so well behaved, you can do this manually using opposite corners of each square, or using a method like numpy.digitize.
Test whether the square the point lives in is in one of the regions.
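A minimal sketch of those last two steps, assuming a hypothetical regular 1-degree grid:
import numpy as np

lat_edges = np.arange(-90.0, 91.0)   # bin edges of the grid
lon_edges = np.arange(-180.0, 181.0)

def cell_of(lat, lon):
    # np.digitize returns, for each value, the index of the bin it falls into
    return np.digitize(lat, lat_edges) - 1, np.digitize(lon, lon_edges) - 1

# A region represented as a set of (lat_index, lon_index) squares
region = {(120, 200), (120, 201), (121, 200)}

print(cell_of(30.5, 20.25) in region)  # True for this made-up region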
If you're still having trouble, please provide some more details about your problem (specifically, how your regions are defined) --- that will make it easier to offer advice.

Choosing correct distance from a list

This question is somewhat similar to this. I've gone a bit farther than the OP, though, and I'm in Python 2 (not sure what he was using).
I have a Python function that can determine the distance from a point inside a convex polygon to regularly-defined intervals along the polygon's perimeter. The problem is that it returns "extra" distances that I need to eliminate. (Please note--I suspect this will not work for rectangles yet. I'm not finished with it.)
First, the code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# t1.py
#
# Copyright 2015 FRED <fred#matthew24-25>
#
# THIS IS TESTING CODE ONLY. IT WILL BE MOVED INTO THE CORRECT MODULE
# UPON COMPLETION.
#
from __future__ import division

import math
import matplotlib.pyplot as plt

def Dist(center_point, Pairs, deg_Increment):
    # I want to set empty lists to store the values of m_lnsgmnt and b_lnsgmnts
    # for every iteration of the for loop.
    m_linesegments = []
    b_linesegments = []
    # Scream and die if Pairs[0] is the same as the last element of Pairs--i.e.
    # it has already been run once.
    #if Pairs[0] == Pairs[len(Pairs)-1]:
    ##print "The vertices contain duplicate points!"
    ## Creates a new list containing the original list plus the first element. I did this because, due
    ## to the way the for loop is set up, the last iteration of the loop subtracts the value of the
    ## last value of Pairs from the first value. I therefore duplicated the first value.
    #elif:
    new_Pairs = Pairs + [Pairs[0]]
    # This will calculate the slopes and y-intercepts of the linesegments of the polygon.
    for a in range(len(Pairs)):
        # This calculates the slope of each line segment and appends it to m_linesegments.
        m_lnsgmnt = (new_Pairs[a+1][1] - new_Pairs[a][1]) / (new_Pairs[a+1][0] - new_Pairs[a][0])
        m_linesegments.append(m_lnsgmnt)
        # This calculates the y-intercept of each line segment and appends it to b_linesegments.
        b_lnsgmnt = (Pairs[a][1]) - (m_lnsgmnt * Pairs[a][0])
        b_linesegments.append(b_lnsgmnt)
    # These are temporary testing codes.
    print "m_linesegments =", m_linesegments
    print "b_linesegments =", b_linesegments
    # I want to set empty lists to store the value of m_rys and b_rys for every
    # iteration of the for loop.
    m_rays = []
    b_rays = []
    # I need to set a range of degrees the intercepts will be calculated for.
    theta = range(0, 360, deg_Increment)
    # Temporary testing line.
    print "theta =", theta
    # Calculate the slope and y-intercepts of the rays radiating from the center_point.
    for b in range(len(theta)):
        m_rys = math.tan(math.radians(theta[b]))
        m_rays.append(m_rys)
        b_rys = center_point[1] - (m_rys * center_point[0])
        b_rays.append(b_rys)
    # Temporary testing lines.
    print "m_rays =", m_rays
    print "b_rays =", b_rays
    # Set empty matrix for Intercepts.
    Intercepts = []
    angle = []
    # Calculate the intersections of the rays with the line segments.
    for c in range((360//deg_Increment)):
        for d in range(len(Pairs)):
            # Calculate the x-coordinates and the y-coordinates of each
            # intersection
            x_Int = (b_rays[c] - b_linesegments[d]) / (m_linesegments[d] - m_rays[c])
            y_Int = ((m_linesegments[d] * x_Int) + b_linesegments[d])
            Intercepts.append((x_Int, y_Int))
            # Calculates the angle of the ray. Rounding is necessary to
            # compensate for binary-decimal errors.
            a_ngle = round(math.degrees(math.atan2((y_Int - center_point[1]), (x_Int - center_point[0]))))
            # Substitutes positive equivalent for every negative angle,
            # i.e. -270 degrees equals 90 degrees.
            if a_ngle < 0:
                a_ngle = a_ngle + 360
            # Selects the angles that correspond to theta
            if a_ngle == theta[c]:
                angle.append(a_ngle)
    print "INT1=", Intercepts
    print "angle=", angle
    dist = []
    # Calculates distance.
    for e in range(len(Intercepts) - 1):
        distA = math.sqrt(((Intercepts[e][0] - center_point[0])**2) + ((Intercepts[e][1] - center_point[1])**2))
        dist.append(distA)
    print "dist=", dist

if __name__ == "__main__":
    main()
Now, as to how it works:
The code takes 3 inputs: center_point (a point contained in the polygon, given in (x,y) coordinates), Pairs (the vertices of the polygon, also given in (x,y) coordinates), and deg_Increment (which defines how often to calculate the distance).
Let's assume that center_point = (4,5), Pairs = [(1, 4), (3, 8), (7, 2)], and deg_Increment = 20. This means that a polygon is created (sort of) whose vertices are Pairs, and center_point is a point contained inside the polygon.
Now rays are set to radiate from center_point every 20 degrees (which is deg_Increment). The intersection points of the rays with the perimeter of the polygon are determined, and the distance is calculated using the distance formula.
The only problem is that I'm getting too many distances. :( In my example above, the correct distances are
1.00000 0.85638 0.83712 0.92820 1.20455 2.07086 2.67949 2.29898 2.25083 2.50000 3.05227 2.22683 1.93669 1.91811 2.15767 2.85976 2.96279 1.40513
But my code is returning
dist= [2.5, 1.0, 6.000000000000001, 3.2523178818773006, 0.8563799085248148, 3.0522653889161626, 5.622391569468206, 0.8371216462519347, 2.226834844885431, 37.320508075688686, 0.9282032302755089, 1.9366857335569072, 7.8429970322236064, 1.2045483557883576, 1.9181147622136665, 3.753460385470896, 2.070863609380179, 2.157671808913309, 2.6794919243112276, 12.92820323027545, 2.85976265663383, 2.298981118867903, 2.962792920643178, 5.162096782237789, 2.250827351906659, 1.4051274947736863, 69.47032761621092, 2.4999999999999996, 1.0, 6.000000000000004, 3.2523178818773006, 0.8563799085248148, 3.0522653889161626, 5.622391569468206, 0.8371216462519347, 2.226834844885431, 37.32050807568848, 0.9282032302755087, 1.9366857335569074, 7.842997032223602, 1.2045483557883576, 1.9181147622136665, 3.7534603854708997, 2.0708636093801767, 2.1576718089133085, 2.679491924311227, 12.928203230275532, 2.85976265663383, 2.298981118867903, 2.9627929206431776, 5.162096782237789, 2.250827351906659, 1.4051274947736847]
If anyone can help me get only the correct distances, I'd greatly appreciate it.
Thanks!
And just for reference, here's what my example looks like with the correct distances only:
You're getting too many values in Intercepts because it's being appended to inside the second for-loop [for d in range(len(Pairs))].
You only want one value in Intercepts per step through the outer for-loop [for c in range((360//deg_Increment))], so the append to Intercepts needs to be in this loop.
I'm not sure what you're doing with the inner loop, but you seem to be calculating a separate intercept for each of the lines that make up the polygon sides. But you only want the one that you're going to hit "first" when going in that direction.
You'll have to add some code to figure out which of the 3 (in this case) sides of the polygon you're actually going to encounter first.
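A minimal sketch of that selection for a convex polygon (where the nearest intersection in the ray's forward direction is the true perimeter hit); nearest_hit is a hypothetical helper, not code from the question:
import math

def nearest_hit(center_point, theta_c, candidates):
    # Of all ray/side-line intersections computed for one ray, keep the
    # distance of the closest one that lies in the ray's forward direction
    best = None
    for x_Int, y_Int in candidates:
        a_ngle = round(math.degrees(math.atan2(y_Int - center_point[1],
                                               x_Int - center_point[0]))) % 360
        if a_ngle != theta_c:
            continue  # this intersection is behind the ray, not in front of it
        d = math.hypot(x_Int - center_point[0], y_Int - center_point[1])
        if best is None or d < best:
            best = d
    return best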

How to 'zoom' in on a section of the Mandelbrot set?

I have created a Python file to generate a Mandelbrot set image. The original maths code was not mine, so I do not understand it - I only heavily modified it to make it about 250x faster (Threads rule!).
Anyway, I was wondering how I could modify the maths part of the code to make it render one specific bit. Here is the maths part:
for y in xrange(size[1]):
    coords = (uleft[0] + (x/size[0]) * (xwidth), uleft[1] - (y/size[1]) * (ywidth))
    z = complex(coords[0], coords[1])
    o = complex(0, 0)
    dotcolor = 0  # default, convergent
    for trials in xrange(n):
        if abs(o) <= 2.0:
            o = o**2 + z
        else:
            dotcolor = trials
            break  # diverged
    im.putpixel((x, y), dotcolor)
And the size definitions:
size1 = 500
size2 = 500
n=64
box=((-2,1.25),(0.5,-1.25))
plus = size[1]+size[0]
uleft = box[0]
lright = box[1]
xwidth = lright[0] - uleft[0]
ywidth = uleft[1] - lright[1]
What do I need to modify to make it render a certain section of the set?
The line:
box=((-2,1.25),(0.5,-1.25))
is the bit that defines the area of coordinate space that is being rendered, so you just need to change this line. The first coordinate pair is the top-left of the area; the second is the bottom-right.
To get a new coordinate from the image should be quite straightforward. You've got two coordinate systems, your "image" system 100x100 pixels in size, origin at (0,0). And your "complex" plane coordinate system defined by "box". For X:
X_complex = X_complex_origin + (X_image / X_image_width) * X_complex_width
The key in understanding how to do this is to understand what the coords = line is doing:
coords = (uleft[0] + (x/size[0]) * (xwidth),uleft[1] - (y/size[1]) * (ywidth))
Effectively, the x and y values you are looping through which correspond to the coordinates of the on-screen pixel are being translated to the corresponding point on the complex plane being looked at. This means that (0,0) screen coordinate will translate to the upper left region being looked at (-2,1.25), and (1,0) will be the same, but moved 1/500 of the distance (assuming a 500 pixel width window) between the -2 and 0.5 x-coordinate.
That's exactly what that line is doing - I'll expand just the X-coordinate bit with more illustrative variable names to indicate this:
mandel_x = mandel_start_x + (screen_x / screen_width) * mandel_width
(The mandel_ variables refer to the coordinates on the complex plane, the screen_ variables refer to the on-screen coordinates of the pixel being plotted.)
If you then want to take a region of the screen to zoom into, you do exactly the same: take the screen coordinates of the upper-left and lower-right corners of the region, translate them to complex-plane coordinates, and make those the new uleft and lright variables. I.e. to zoom in on the box delimited by on-screen coordinates (x1,y1)..(x2,y2), use:
new_uleft = (uleft[0] + (x1/size[0]) * (xwidth), uleft[1] - (y1/size[1]) * (ywidth))
new_lright = (uleft[0] + (x2/size[0]) * (xwidth), uleft[1] - (y2/size[1]) * (ywidth))
(Obviously you'll need to recalculate size, xwidth, ywidth and the other dependent variables based on the new coordinates.)
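A quick sketch of that recalculation, assuming the 500x500 window and the original box from the question:
size = (500.0, 500.0)
uleft, lright = (-2.0, 1.25), (0.5, -1.25)
xwidth = lright[0] - uleft[0]
ywidth = uleft[1] - lright[1]

def zoom(x1, y1, x2, y2):
    # Translate an on-screen selection rectangle into a new complex-plane box
    new_uleft = (uleft[0] + (x1 / size[0]) * xwidth, uleft[1] - (y1 / size[1]) * ywidth)
    new_lright = (uleft[0] + (x2 / size[0]) * xwidth, uleft[1] - (y2 / size[1]) * ywidth)
    return new_uleft, new_lright

# e.g. zoom into the central 250x250-pixel square:
print(zoom(125, 125, 375, 375))  # ((-1.375, 0.625), (-0.125, -0.625))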
In case you're curious, the maths behind the Mandelbrot set isn't that complicated (just complex).
All it is doing is taking a particular coordinate, treating it as a complex number, and then repeatedly squaring it and adding the original number to it.
For some numbers, doing this will cause the result to diverge, constantly growing towards infinity as you repeat the process. For others, it will always stay below a certain level (e.g. obviously (0.0, 0.0) never gets any bigger under this process). The Mandelbrot set (the black region) is those coordinates which don't diverge. It's been shown that once the magnitude exceeds 2, the sequence is guaranteed to diverge, which is why your code tests against 2.0.
Usually the regions that diverge get plotted with the number of iterations of the process that it takes for them to exceed this value (the trials variable in your code), which is what produces the coloured regions.
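As a tiny sketch of that iteration in isolation (the same logic as the inner loop of your code, wrapped in a function):
def escape_time(c, n=64, radius=2.0):
    # Return 0 if c stays bounded for n trials (treated as in the set),
    # otherwise the trial count at which |z| first exceeded the escape radius
    z = complex(0, 0)
    for trials in range(n):
        if abs(z) > radius:
            return trials
        z = z**2 + c
    return 0

print(escape_time(complex(0, 0)))  # 0: never diverges
print(escape_time(complex(1, 1)))  # diverges after a couple of trials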
