First of all, sorry if this is rather basic, but it is certainly not my field of expertise.
So, I'm working with protein surfaces and I have this cavity:
Protein cavity
It is part of a larger, watertight, triangular mesh (.ply format) that represents a protein surface.
What I want to do is find out whether this particular "sub-mesh" occurs in other proteins. However, I'm not looking for a perfect fit, but rather for similar "sub-meshes", since the only place I will find this exact shape is in the original protein.
I've been reading the docs for the Python modules trimesh and open3d. Trimesh does have a comparison module, but it doesn't seem to have the functionality I'm looking for. Open3d has a "compute point cloud distance" function that is recommended for measuring the difference between two point clouds or meshes.
However, since what I'm actually trying to measure is similarity, I would need a way to fit my cavity's "sub-mesh" onto the surface of the protein I'm analyzing, and then "score" how different or deformed the fitted sub-mesh is. Another way, I guess, would be to rotate and translate my sub-mesh to match as many vertices and faces as possible on the protein surface, and score that.
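To make that concrete, here is the rough kind of thing I have in mind, a sketch using open3d's ICP registration (the file names are placeholders, and I'm not sure ICP is even the right tool, since it only finds rigid alignments):

```python
import numpy as np
import open3d as o3d

# Placeholder file names -- swap in the real meshes.
cavity = o3d.io.read_triangle_mesh("cavity_submesh.ply")
protein = o3d.io.read_triangle_mesh("other_protein.ply")

# ICP works on point clouds, so sample points from both meshes.
source = cavity.sample_points_uniformly(number_of_points=5000)
target = protein.sample_points_uniformly(number_of_points=50000)

# Rigidly fit the cavity onto the other surface. ICP only converges locally,
# so in practice you would restart it from many initial transformations.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=2.0,  # in the mesh's units, e.g. angstroms
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

# fitness = fraction of cavity points that found a nearby match;
# inlier_rmse = how deformed/offset those matches are. Together they
# would give a crude similarity score between the two shapes.
print(result.fitness, result.inlier_rmse)
```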
Just a heads-up, I'm a biotechnologist, self-taught in Python and with extremely limited experience in anything 3D. At this point, anything helps, be it a paper, Python module or whatever knowledge you have that you think might be useful.
Thank you very much for any help you can provide with this!
Related
I'm building a UGV (unmanned ground vehicle) prototype. The goal is to perform the desired actions on targets placed inside a maze. From what I've seen on the Internet, navigating a labyrinth is usually done with a distance sensor; I'm asking here because I'd like to collect more ideas.
I want to navigate the labyrinth by analyzing the images from a 3D stereo camera. Is there a resource or proven method you can suggest for this? As a secondary problem, the vehicle must start in front of the entrance of the labyrinth, see the entrance and drive in, and then leave the labyrinth after it completes its operations inside.
I would be glad if you could suggest a source for this problem. :)
The problem description is a bit vague, but I'll try to highlight some general ideas.
A useful assumption is that the labyrinth is a 2D environment which you want to explore. You need to know, at every moment, which part of the map has been explored, which part of the map still needs exploring, and which part of the map is accessible at all (in other words, where the walls are).
An easy initial data structure to help with this is a simple matrix, where each cell represents a square in the real world. Each cell can be then labelled according to its state, starting in an unexplored state. Then you start moving, and exploring. Based on the distances reported by the camera, you can estimate the state of each cell. The exploration can be guided by something such as A* or Q-learning.
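A minimal sketch of that matrix (the cell size and the noise-free update rule are simplifying assumptions):

```python
import numpy as np

# Cell states for the map matrix.
UNEXPLORED, FREE, WALL = 0, 1, 2

# One cell per, say, 10 cm x 10 cm square of the real maze (size is arbitrary).
grid = np.full((100, 100), UNEXPLORED, dtype=np.uint8)

def mark_cell(row, col, hit):
    """Label one cell from a range reading: the beam either passed through
    the cell (free space) or stopped in it (wall). Noise is ignored here."""
    grid[row, col] = WALL if hit else FREE

# Cells that still need exploring, e.g. to pick the next goal for A*.
frontier = np.argwhere(grid == UNEXPLORED)
```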
Now, a rather subtle issue is that you will have to deal with uncertainty and noise. Sometimes you can ignore it, sometimes you can't. The finer the resolution you need, the bigger the issue becomes. A probabilistic framework is most likely the best solution.
There is an entire field of research around the so-called SLAM algorithms. SLAM stands for simultaneous localization and mapping. These algorithms build a map using input from various types of cameras or sensors, and while building the map they also solve the localization problem within it. They are usually designed for 3D environments and are more demanding than the simpler solution indicated above, but you can find ready-to-use implementations. For exploration, something like Q-learning still has to be used.
I'm working on a Python script whose goal is to detect whether a point lies off a row of points (GPS readings from an agricultural machine).
The input data are shapefiles, and I use the GeoPandas library for all the geoprocessing.
My first idea was to build a buffer around the two points neighbouring the considered point, and then check whether my point falls inside that buffer. But the results aren't good.
So I'm asking myself whether there is a mathematically smarter method, maybe with a scikit library... Can somebody help me?
Try ArcGIS.
Build two new attributes in ArcGIS holding each point's X and Y coordinates, then calculate the distance between the points you want.
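For reference, the calculation itself is just the Euclidean distance; with made-up coordinate arrays in NumPy:

```python
import numpy as np

# Made-up X/Y attribute arrays for a row of GPS points.
x = np.array([0.0, 1.0, 2.1, 3.0])
y = np.array([0.0, 0.1, 0.0, 0.9])

# Euclidean distance between consecutive points.
print(np.hypot(np.diff(x), np.diff(y)))
```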
The question is kinda vague, but my guess would be to fit an approximation/regression curve (numpy.polyfit of 2nd degree, I believe) and take the points with the largest distance from the curve, probably with a threshold relative to the overall fit loss.
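Something like this minimal sketch (the data and the 2x-RMS threshold are made up for illustration):

```python
import numpy as np

# Made-up row of points; the 4th one is off the row.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0.1, 0.9, 2.1, 6.0, 3.9, 5.1, 6.0])

coeffs = np.polyfit(x, y, 2)                    # 2nd-degree least-squares fit
residuals = np.abs(y - np.polyval(coeffs, x))   # distance from the fitted curve

# Flag points whose residual is large relative to the overall fit loss,
# here (arbitrarily) more than twice the RMS residual.
rms = np.sqrt(np.mean(residuals ** 2))
print(np.where(residuals > 2 * rms)[0])         # -> [3]
```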
I would like to represent a bunch of particles (~100k grains), for which I have the position (including rotation) and a discretized level-set function (i.e. the signed distance of every voxel from the surface). Due to the large sample, I'm searching for efficient solutions to visualize it.
I first went for vtk, using its Python interface, but I'm not really sure it's the best (and simplest) way to do it since, as far as I know, there is no direct implementation for getting an isosurface from a 3D data set. In the beginning I was thinking of using marching cubes, but then I would still have to apply a threshold, or interpolate, in order to find the voxels that lie on the surface and label them so they can be fed to the marching cubes algorithm.
Now I found mayavi, which has a Python function
mlab.pipeline.iso_surface()
However, I did not find much documentation on it and was wondering how it behaves in terms of performance.
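For reference, this is roughly the call I have in mind, tried here on a toy signed-distance field of a single sphere (I haven't measured performance at all):

```python
import numpy as np
from mayavi import mlab

# Toy stand-in for one grain: the signed-distance field of a sphere
# of radius 0.5, sampled on a 64^3 grid.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
sdf = np.sqrt(x ** 2 + y ** 2 + z ** 2) - 0.5

src = mlab.pipeline.scalar_field(sdf)
mlab.pipeline.iso_surface(src, contours=[0.0])  # zero level set = grain surface
mlab.show()
```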
Does someone have experience with these kinds of tools? Which would be the best solution (primarily in terms of efficiency and, secondarily, in terms of simplicity)? I do not know the vtk library, but if there is a huge difference in performance I can dig into it, even without the Python interface.
Lowest/Highest Combined Surface(s)
I'm looking for a methodology (and/or preferably a software approach) to create what I'm calling the Lowest (or highest) combined surface for a set of polygons.
So if our input was these two polygons, which partially overlap and definitely intersect, my lowest-combined output would be these three polygons.
Given a number of "surfaces" (3D polygons), I want to produce this kind of combined result automatically.
We've gone through a variety of approaches and the best solution we could come up with involved applying a point grid to each polygon and performing calculations to return the lowest sets of points at each grid location. The problem is that the original geometry is lost in this approach which doesn't give us a working solution.
Background
I'm looking at a variety of "surfaces" that can be represented by 3D faces (CAD speak) or polygons, and are usually distributed in a shapefile (.shp). When two surfaces interact, I'm interested in taking either the lowest or the highest combined surface. I'm able to do this in CAD by manually tracing out new polygons for the interaction zones, but once I get beyond a handful of surfaces this becomes too labor intensive.
The Current Approach
My current approach, which falls somewhere in the terrible category, is to generate a point cloud from each surface on a 1 m grid and then do a grid-cell-based comparison of the points.
I do this by using AutoCAD Civil 3D's surface generation tools to create a TIN from each polygon surface. This is then exported to a 1 m DEM file, which I believe is a gridded output format.
Next, each DEM file is brought into Global Mapper, where I generate a single point at the center of each "elevation grid cell". This data is then exported to a .csv file in which each point carries a variety of attributes, such as the name of the surface it came from and its altitude.
Once I have a set of CSV files, I run them through a Python script that exports the lowest point (and its associated attributes) at each grid cell. I do everything in UTM because the UTM grid is based on meters, which makes everything easier.
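For reference, the core of that script is just a grouped minimum; a minimal sketch of what it does (file and column names here are invented):

```python
import pandas as pd

# Each row of the CSVs is one grid point with UTM coordinates,
# an elevation, and the name of the surface it came from.
df = pd.concat(
    (pd.read_csv(f) for f in ["surface_a.csv", "surface_b.csv"]),
    ignore_index=True,
)

# Snap the UTM coordinates to the 1 m grid...
df["gx"] = df["easting"].round().astype(int)
df["gy"] = df["northing"].round().astype(int)

# ...then keep only the lowest point (with all its attributes) per cell.
lowest = df.loc[df.groupby(["gx", "gy"])["elevation"].idxmin()]
lowest.to_csv("lowest_combined.csv", index=False)
```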
Lastly, we bring the point file back into Global Mapper, coloring each point by the surface it started from.
There are a variety of issues with this approach: sometimes things don't line up perfectly, and there is a fair amount of cleanup I have to do.
Also, the edges end up jagged, which is to be expected since I've converted nice straight lines into a point cloud.
Alternatively, we came up with a similar approach in ArcGIS using the Surface Comparison tool; however, it had limitations similar to the ones we ran into with my approach.
What I'm looking for is a way to do this automatically with a variable number of inputs. I'm willing to use just about any tool to get this done, as it seems like it shouldn't be too difficult a process.
Software?
When I look at this problem from a programmer's point of view it looks rather straightforward, but I'm at a total loss as to how to proceed. I'm assuming Stack Overflow is the correct Stack Exchange site for this question, but if it belongs somewhere else I'm happy to move it.
I wasn't sure whether something like Mathematica (with which I have zero experience) could handle this situation, or whether there is some fancy 3D math library in Python that could chop polygons up by how they interact and then give me the lowest of the co-located polygons.
In any case I'm willing to try anything, so if you have an idea of what tools and/or libraries I can use to do this, please share! I have to assume that there is SOMETHING out there that can handle this type of 3D geometric processing.
Thanks
EDIT
Because the commenters seem confused: I am not asking for code. I am asking for methodologies, libraries, supporting tools, or even software packages that can perform these operations. I plan to write the software to do this myself; however, I am hoping I don't need to pull out my trig books and write all these operations by hand. I have to assume somebody out there has dealt with something similar before.
I am working on some code that needs to recognize some fairly basic geometry based on a cloud of nodes. I would be interested in detecting:
plates (simple bounded planes)
cylinders (two node loops)
half cylinders (arc+line+arc+line)
domes (n*loop+top node)
I tried searching for "geometry from node cloud" and "get geometry from nodes", but I can't find a nice reference. There is probably a whole field on this; can someone point me the way? I already started coding something, but I feel like I'm re-inventing the wheel...
A good start is to just get the convex hull of the nodes (the tightest-fitting polygon that can surround your node cloud), using either Graham's scan or QuickHull. Note that QuickHull is easier to code and probably faster, unless you are really unlucky. There is a pure Python implementation of QuickHull here, but I'm sure a quick Google search will show many other results.
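If you'd rather not implement it yourself, SciPy ships a Qhull-based implementation; a minimal sketch:

```python
import numpy as np
from scipy.spatial import ConvexHull  # Qhull under the hood; handles 2D and 3D

points = np.random.rand(100, 2)  # random toy node cloud
hull = ConvexHull(points)

print(hull.vertices)          # indices of the nodes on the hull
print(points[hull.vertices])  # hull polygon vertices (counterclockwise in 2D)
```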
Usually the convex hull is the starting point for most other shape-recognition algorithms. If your cloud can be described as a sequence of strokes, there are many algorithms and approaches:
Recognizing multistroke geometric shapes: an experimental evaluation
This may be even better: once you have the convex hull, break the polygon down into pairs of vertices and run this algorithm to match against training data by similarity:
Hierarchical shape recognition using polygon approximation and dynamic alignment
Both of these papers are fairly old, so you can use Google Scholar to see who cites them; that gives you a nice literature trail of attempts to solve this problem.
There is a multitude of different methods and approaches; this has been well studied in the literature. Which method you take really depends on the level of accuracy you hope to achieve, the number of shapes you want to recognize, and your input data set.
Either way, using a convex hull algorithm to produce polygons out of point clouds is the very first step, and usually the input to the more sophisticated algorithms.
EDIT:
I did not consider the 3D case; for that there is a lot of really interesting work in computer graphics, for instance the paper Efficient RANSAC for Point-Cloud Shape Detection.
Selections from the abstract:
We present an automatic algorithm to detect basic shapes in unorganized point clouds. The algorithm decomposes the point cloud into a concise, hybrid structure of inherent shapes and a set of remaining points. Each detected shape serves as a proxy for a set of corresponding points. Our method is based on random sampling and detects planes, spheres, cylinders, cones and tori...We demonstrate that the algorithm is robust even in the presence of many outliers and a high degree of noise...Moreover the algorithm is conceptually simple and easy to implement...
To complement Josiah's answer -- since you didn't say whether there is a single such object to be detected in your point cloud -- a good solution can be to use a (generalized) Hough transform.
The idea is that each point votes for a set of candidates in the parameter space of the shape you are considering. For instance, if you think the current object is a cylinder, you have a 7D parameter space consisting of the cylinder center (3D), direction (2D), height (1D) and radius (1D), and each point in your point cloud votes for all parameter combinations that agree with the observation of that point. Doing so allows you to find the parameters of the actual cylinder by taking the set of parameters with the highest number of votes.
Doing the same thing for planes, spheres, etc., will give you the best-matching shape.
A strength of this method is that it allows for multiple objects in the same point cloud.
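To make the voting concrete, here is a toy sketch for the simplest case, a single dominant plane (the Fibonacci normal sampling and the bin size are arbitrary choices; a real implementation would be more careful with the accumulator):

```python
import numpy as np

def hough_plane(points, n_normals=100, d_step=0.05):
    """Toy Hough voting for the single dominant plane n.p = d."""
    # Candidate normals: quasi-uniform directions via a Fibonacci sphere.
    i = np.arange(n_normals)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n_normals
    r = np.sqrt(1.0 - z ** 2)
    normals = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

    # Each point votes, for every candidate normal, for the offset bin
    # d = n . p it would imply.
    bins = np.round((points @ normals.T) / d_step).astype(int)

    # The (normal, offset) bin with the most votes wins.
    best_votes, best_plane = 0, None
    for j in range(n_normals):
        vals, counts = np.unique(bins[:, j], return_counts=True)
        k = counts.argmax()
        if counts[k] > best_votes:
            best_votes, best_plane = counts[k], (normals[j], vals[k] * d_step)
    return best_plane, best_votes

# Toy data: 200 noisy points on the plane z = 0.3 plus 50 random outliers.
rng = np.random.default_rng(0)
on_plane = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                            0.3 + rng.normal(0, 0.01, (200, 1))])
outliers = rng.uniform(-1, 1, (50, 3))
(normal, d), votes = hough_plane(np.vstack([on_plane, outliers]))
print(normal, d, votes)  # normal close to (0, 0, +/-1), |d| close to 0.3
```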