For my project I need to measure the distance between two STL files. I wrote a script that reads the files and positions them relative to each other in the desired configuration. As the next step, I need to compute the distance from one object to the other. Is there a library function or script that can do this? I eventually want to define metrics such as interpenetration area and maximum negative distance, so first I need the distance between the objects, plus a way to detect mesh intersection and measure that distance. Here is a link showing the combination of the two objects whose distance I want to measure:
https://imgur.com/wgNaalh
Pyvista offers a really easy way of calculating just that:
import pyvista as pv
import numpy as np
mesh_1 = pv.read("path_to_mesh_1.stl")  # replace with the path to your first STL file
mesh_2 = pv.read("path_to_mesh_2.stl")  # replace with the path to your second STL file
closest_cells, closest_points = mesh_2.find_closest_cell(mesh_1.points, return_closest_point=True)
d_exact = np.linalg.norm(mesh_1.points - closest_points, axis=1)
print(f'mean distance is: {np.mean(d_exact)}')
For more methods and examples, have a look at:
https://docs.pyvista.org/examples/01-filter/distance-between-surfaces.html#using-pyvista-filter
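If you also need signed distances (for example to detect interpenetration), PyVista can compute an implicit distance field as well. Below is a minimal, untested sketch using compute_implicit_distance; the file paths are placeholders, and the sign convention depends on the surface normals, so check it on your own meshes:
import pyvista as pv
import numpy as np
# Sketch: signed distance from every point of mesh_1 to the surface of mesh_2.
# Negative values typically mark points lying inside the other surface,
# which can be used to detect and measure interpenetration.
mesh_1 = pv.read("path_to_mesh_1.stl")  # placeholder path
mesh_2 = pv.read("path_to_mesh_2.stl")  # placeholder path
mesh_1 = mesh_1.compute_implicit_distance(mesh_2)
signed = mesh_1["implicit_distance"]
print("maximum penetration depth:", signed.min())
print("points inside the other mesh:", np.count_nonzero(signed < 0))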
To calculate the distance between two meshes, one first needs to check whether the meshes intersect. If they do not, the resulting distance can be computed as the distance between the two closest points, one from each mesh (as in the picture below).
If the meshes do intersect, it is necessary to find the part of each mesh that lies inside the other mesh, and then find the two most distant points, one from each inner part. The distance between these points gives the maximum depth of the interpenetration. It can be returned with a negative sign to distinguish it from the distance between separated meshes.
In Python, one can use MeshLib library and findSignedDistance function from it as follows:
import meshlib.mrmeshpy as mr
mesh1 = mr.loadMesh("Cube.stl")
mesh2 = mr.loadMesh("Torus.stl")
z = mr.findSignedDistance(mesh1, mesh2)
print(z.signedDist)  # 0.3624192774295807
I'm trying to write code that calculates the distance between a set of points and a set of points / line segments / polygons.
The code below with the sample data works, but it takes forever to go through all the points (around an hour or so).
I am using shapely because it should also include distance between:
point - point
point - line segment
point - polygon
Line segments and polygons are not included in the code
Is it because I am using a for loop?
Is there a more efficient way of achieving this?
from timeit import default_timer as timer
import time
start = timer()
import numpy as np
import shapely
import progressbar
from shapely.geometry import Point
#Create 10k Random X and Y coordinates
x_coordinates=np.random.rand(10000)
y_coordinates=np.random.rand(10000)
#Create 40k Center points of Circles
Circles=np.ones((201*201,2),dtype=float)
linspace=np.linspace(-1, 1, num=201) #set distance between circles for sample data. Actual data are more randomly placed and changes from design to design
temp=0
#Make array of circles
for x in linspace:
    Circles[temp:temp+201,0]=x
    Circles[temp:temp+201,1]=linspace
    temp=temp+201
#Create empty array for saving the result
#result should record which circle each point belongs to
result=np.empty([10000, 2], dtype=object)
for x in progressbar.progressbar(range(10000)):
    defect = Point(x_coordinates[x],y_coordinates[x]) #go through 10000 points
    for j in range(201*201):
        if defect.distance(Point(Circles[j,:]))<0.005: #go through 40000 circles
            result[x]=Circles[j,:]
            break #break if match found
end = timer()
print(end - start)
I'm not familiar with numpy (or shapely) but based on your code you are looking for circles that are close to your points. Slightly confused about why you have circles at all based on the title of the question. (Is there any need for circles in your code as they seem to be only used as points anyway?)
distance between a set of points from a set of point
Do you need the distance for each point to each of your circles? What is it that you are specifically looking for?
Your algorithm is slow for two reasons:
Calculating the distance between two points. This is a straightforward calculation, but it involves taking a square root, which is slow compared to other operations.
Instead use the square of the distance. Use the formula for the distance but just skip the square root. Maybe this isn't too slow in shapely.
Comparing each point with possibly EVERY circle by calculating the distance. This is most likely the main cause of your code being slow.
If you don't need the distance from each point to all the circles then you need a fast way to find the circle you are looking for. You could consider ordering your circles by the x-coordinates to achieve this. Based on your code looking for circles very close (0.005) to your point, you could easily eliminate all circles whose x-coordinate is further than that from your point and completely skip calculating the distance between the two. (Then you could do the same for the y-coordinates.) This way you wouldn't need to look at all the circles but could jump out of that loop because you know the rest is going to be further away on that axis.
If you provide a better description of what exactly you are looking for then someone can probably provide an example of how you could implement it.
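In the meantime, here is a rough, untested sketch of the x-coordinate pruning idea described above, reusing the arrays from the question (x_coordinates, y_coordinates, Circles) and the 0.005 threshold. It sorts the circle centres by x, uses np.searchsorted to keep only candidates inside that x-window, and compares squared distances so no square roots are taken:
import numpy as np

r = 0.005
r2 = r * r                              # compare squared distances, no sqrt needed
order = np.argsort(Circles[:, 0])       # sort circle centres by x
circles_sorted = Circles[order]
xs = circles_sorted[:, 0]

result = np.full((len(x_coordinates), 2), np.nan)
for i, (px, py) in enumerate(zip(x_coordinates, y_coordinates)):
    lo = np.searchsorted(xs, px - r)    # only circles within +/- r in x
    hi = np.searchsorted(xs, px + r)
    cand = circles_sorted[lo:hi]
    if len(cand) == 0:
        continue
    d2 = (cand[:, 0] - px)**2 + (cand[:, 1] - py)**2
    j = np.argmin(d2)
    if d2[j] < r2:
        result[i] = cand[j]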
I generated a path between locations A and B with the constraint that it has to pass through, or close to, certain locations, so the route looks like A -> c1 -> c2 -> B, even though it is not the shortest path.
I used: for path in nx.all_shortest_paths(UG, source=l1_node_id, target=l2_node_id, weight='wgt'):
where 'wgt' is the distance of the edge / the driving speed on this road.
I generated a list of lists where each inner list is the node_id for example:
l_list = [
[n11,n12,n13,n14....]
[n21,n22,n23,n24....]
..
]
and on the map it looks like this (the markers are the beginning of each route, and each route is colored differently):
I want to merge them into one route, but as you can see there are some splits, like the green and the red, and some common sequences (which I can handle); the second problem is the beginning of the blue route / the end of the black one, which is unimportant.
I can't just remove the red route, because this is supposed to be a generic algorithm and I don't even know where this will happen again along a route.
I do have timestamps for each marker, but they only tell me that I was close to that area (these are locations of cellular antennas).
First, you are going to need to define what "almost parallel" means more precisely; more formally, you need to define a similarity function.
Choosing a similarity/distance function
There are plenty of ways to define a similarity function; here is one of them.
Resample
Assuming each node n_i has x and y coordinates (n_i_x, n_i_y),
you can resample the points along the x axis, so that the new points are sampled every 1 km.
Then, for each pair of routes, you can sum the differences along the y axis.
Use this distance to cluster routes.
Other ideas
Earth mover distance
Jaccard (~ % of common nodes)
Clustering
Once you have defined a similarity function, you can use a distance-based clustering algorithm; I recommend using sklearn's agglomerative clustering.
After the clustering is done, all you have left to do is to choose one route from each cluster.
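Here is a rough, untested sketch of that pipeline. It assumes each route is an (N, 2) NumPy array of (x, y) node coordinates and that routes can be resampled along x (a simplification that may not hold for arbitrary road geometry); the helper names and the distance_threshold value are placeholders, and older sklearn versions use affinity= instead of metric= for the precomputed option:
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def resample_route(route, x_grid):
    # sort by x so np.interp gets increasing sample points
    route = route[np.argsort(route[:, 0])]
    return np.interp(x_grid, route[:, 0], route[:, 1])

def route_distance(r1, r2, x_grid):
    # mean absolute difference in y after resampling on a common x grid
    return np.mean(np.abs(resample_route(r1, x_grid) - resample_route(r2, x_grid)))

def cluster_routes(routes, n_samples=100, distance_threshold=0.01):
    xs = np.concatenate([r[:, 0] for r in routes])
    x_grid = np.linspace(xs.min(), xs.max(), n_samples)
    n = len(routes)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = route_distance(routes[i], routes[j], x_grid)
    model = AgglomerativeClustering(n_clusters=None,
                                    distance_threshold=distance_threshold,
                                    metric="precomputed",
                                    linkage="average")
    return model.fit_predict(D)  # one label per route; keep one route per label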
I'm using griddata() to interpolate my (irregular) 2-dimensional depth measurements: x, y, depth. The method does a great job, but it interpolates over the entire grid wherever it can find two opposing points. I don't want that behaviour. I'd like the interpolation to stay within a certain radius of the existing measurements.
Is it possible to tell numpy/scipy: don't interpolate if you're too far from an existing measurement, and return a NODATA value instead? Something like: ideal = griddata(.., .., .., radius=5.0)
Edit - example:
In the image below, the black dots are the measurements. Shades of blue are the cells interpolated by numpy. The area marked in green is in fact part of the picture but is considered NODATA by numpy (because there are no points in between). The red areas are interpolated, but I want to get rid of them. Any ideas?
OK, cool. I don't think there is a built-in option for griddata() that does what you want, so you will need to write it yourself.
This comes down to calculating the distances between N input data points and M interpolation points. This is simple enough to do, but if you have a lot of points it can be slow at ~O(M*N). Here's an example that, for each interpolation point, calculates the distances to all N data points. If the number of data points within the radius is at least neighbors, it keeps the value; otherwise it writes the NODATA value.
neighbors is 4 because griddata() will use bilinear interpolation, which needs points bounding the interpolant in each dimension (2*2 = 4).
#invec - input points, Nx2 numpy array
#mvec - interpolation points, Mx2 numpy array
#just some random points for the example
import numpy as np

N = 100
invec = 10*np.random.random([N, 2])
M = 50
mvec = 10*np.random.random([M, 2])

# --- here you would put your griddata() call, returning interpolated_values
interpolated_values = np.zeros(M)

NODATA = np.nan
radius = 5.0
neighbors = 4
for m in range(M):
    data_in_radius = np.sqrt(np.sum((invec - mvec[m])**2, axis=1)) <= radius
    if np.sum(data_in_radius) < neighbors:
        interpolated_values[m] = NODATA
Edit:
Ok re-read and noticed the input is really 2D. Example modified.
Just as an additional comment, this could be greatly accelerated if you first build a coarse mapping from each point mvec[m] to a subset of the relevant data points.
The costliest step in the loop would change from
np.sqrt(np.sum( (invec - mvec[m])**2, axis=1))
to something like
np.sqrt(np.sum( (invec[subset[m]] - mvec[m])**2, axis=1))
There are plenty of ways to do this, for example using a quadtree, a hashing function, or a 2D index. But whether this gives a performance advantage depends on the application, how your data is structured, etc.
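For example, one way to build that subset mapping is a 2D spatial index such as scipy's cKDTree; here is a small sketch reusing invec, mvec, radius, neighbors, M and NODATA from the example above:
from scipy.spatial import cKDTree

tree = cKDTree(invec)
# subset[m] lists the indices of the data points within 'radius' of mvec[m]
subset = tree.query_ball_point(mvec, r=radius)
for m in range(M):
    if len(subset[m]) < neighbors:
        interpolated_values[m] = NODATA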
I'd like to interpolate some 3D finite-element stress field data from a bunch of known nodes at points where nodes don't exist. I realise that node stresses are already extrapolated from gauss points, but it is the best I can do with the data I have available. The image below gives a 2D representation. The red and pink points would represent locations where I'd like to interpolate the value.
Initially I thought I could find the smallest bounding box (hull) or simplex that contained the point of interest and no other known points. Visualising this in 2D, I realised that this might incorrectly ignore data from a nearby value. I was planning on using scipy's LinearNDInterpolator, but I notice there is some unexpected behaviour, and I'm worried it will exclude nearby points in the way I just described. Notice how the pink point would not take a reference from the green triangle but would ignore the point outside the orange triangle, although that point is probably more relevant.
As far as I can tell, the best way is to take the nearest surrounding nodes and interpolate by distance-weighted averaging. I'm not sure if there is something readily available or if it needs to be written. I'd imagine this is a fairly common problem, so I'd presume the wheel has already been invented...
Actually my final goal is to interpolate/regress values for a 3D line through the set of points.
You can try Inverse distance weighting. Here is an example in 1D (easily generalizable to 3D):
from pylab import *
# imaginary samples
xmax=10
Npoints=10
x=0.1*randint(0,10*xmax,Npoints)
y=sin(2*x)+x
plot(x,y,ls="",marker="x",color="red",label="samples",ms=9,mew=2)
# interpolation
x2=linspace(0,xmax,150) # new sampling
def weight(x,x0,p): # modify this function in 3D
    return 1/(((x-x0)**2)**(p/2)+0.00001) # 0.00001 to avoid infinity
y2=zeros_like(x2)
for p in range(1,4):
    for i in range(len(y2)):
        y2[i]=sum(y*weight(x,x2[i],p))/sum(weight(x,x2[i],p))
    plot(x2,y2,label="Interpolation p="+str(p))
legend(loc=2)
show()
Here is the result
As you can see, it's not really fantastic. The best results are, I think, for p=2, but it will be different in 3D. I have obtained better curves with a Gaussian weight, but have no theoretical background for such a choice.
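For reference, here is a minimal, untested sketch of how the same inverse-distance weighting generalises to scattered 3D data; points is an (N, 3) array of sample locations, values the known values, and query an (M, 3) array of locations to interpolate at (all names are made up for this sketch, and p and the small epsilon play the same role as above):
import numpy as np

def idw_3d(points, values, query, p=2, eps=1e-5):
    # pairwise distances between query points and samples, shape (M, N)
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=2)
    w = 1.0/(d**p + eps)                 # eps avoids division by zero
    return (w @ values)/w.sum(axis=1)    # weighted average per query point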
https://stackoverflow.com/a/36337428/2372254
The first answer here was helpful, but the 1-D example shows that the approach actually does some strange things with p=1 (wildly different from the data), and with p=3 we get some weird plateaus.
I took a look at Radial Basis Functions which are implemented in SciPy, and modified JPG's code as follows.
Modified Code
from pylab import *
from scipy.interpolate import Rbf, InterpolatedUnivariateSpline
# imaginary samples
xmax=10
Npoints=10
x=0.1*randint(0,10*xmax,Npoints)
Rbf requires sorted lists:
x.sort()
y=sin(2*x)+x
plot(x,y,ls="",marker="x",color="red",label="samples",ms=9,mew=2)
# interpolation
x2=linspace(0,xmax,150) # new sampling
def weight(x,x0,p): # modify this function in 3D
    return 1/(((x-x0)**2)**(p/2)+0.00001) # 0.00001 to avoid infinity
y2=zeros_like(x2)
for p in range(1,4):
    for i in range(len(y2)):
        y2[i]=sum(y*weight(x,x2[i],p))/sum(weight(x,x2[i],p))
    plot(x2,y2,label="Interpolation p="+str(p))
yrbf = Rbf(x, y)
fi = yrbf(x2)
plot(x2, fi, label="Radial Basis Function")
ius = InterpolatedUnivariateSpline(x, y)
yius = ius(x2)
plot(x2, yius, label="Univariate Spline")
legend(loc=2)
show()
The results are interesting and probably more suitable to my intended usage. The following figure was produced.
But the RBF implementation in SciPy (google for alternatives) has a major problem when points are repeated - not likely in a real scenario - and goes completely ballistic:
When smoothed (smooth=0.1 was used) it behaves normally again. This might point to some quirk in the implementation.
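For reference, the smoothing is just an extra keyword on the same call; a small sketch using the x, y and x2 from the modified code above (smooth=0.1 is the value mentioned above, not a recommended default):
yrbf_smooth = Rbf(x, y, smooth=0.1)  # smooth > 0 approximates instead of interpolating
plot(x2, yrbf_smooth(x2), label="Radial Basis Function, smooth=0.1")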
I am working with an algorithm that, on each iteration, needs to find which region of a Voronoi diagram a set of arbitrary coordinates belongs to; that is, which region each coordinate is located within. (We can assume that all coordinates will belong to a region, if that makes any difference.)
I don't have any code that works in Python yet, but the pseudocode looks something like this:
## we are in two dimensions and we have 0<x<1, 0<y<1.
for i in xrange(1000):
    XY = get_random_points_in_domain()
    XY_candidates = get_random_points_in_domain()
    vor = Voronoi(XY) # for instance scipy.spatial.Voronoi
    regions = get_regions_of_candidates(vor, XY_candidates) # this is the function i need
    ## use regions for something
I know that scipy's Delaunay has a function called find_simplex which will do pretty much what I want for simplices in a Delaunay triangulation, but I need the Voronoi diagram, and I wish to avoid constructing both.
Questions:
1. Is there a library of some sort that will let me do this easily?
2. If not, is there a good algorithm I could look at that will let me do this efficiently?
Update
Jamie's solution is exactly what I wanted. I'm a little embarrassed that I didn't think of it myself though ...
You don't need to actually calculate the Voronoi regions for this. By definition the Voronoi region around a point in your set is made up of all points that are closer to that point than to any other point in the set. So you only need to calculate distances and find nearest neighbors. Using scipy's cKDTree you could do:
import numpy as np
from scipy.spatial import cKDTree
n_voronoi, n_test = 100, 1000
voronoi_points = np.random.rand(n_voronoi, 2)
test_points = np.random.rand(n_test, 2)
voronoi_kdtree = cKDTree(voronoi_points)
test_point_dist, test_point_regions = voronoi_kdtree.query(test_points, k=1)
test_point_regions now holds an array of shape (n_test,) with the indices of the points in voronoi_points closest to each of your test_points.
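As a small usage note, the returned indices can be used directly to look up the matching seed points:
# Each test point's nearest Voronoi seed, shape (n_test, 2)
nearest_seeds = voronoi_points[test_point_regions]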