How to unwrap coordinates produced by LAMMPS in Python

I am using LAMMPS with periodic boundary conditions, so the simulation produces wrapped coordinates, in a box of x = [0, 91.24] by y = [0, 91.24]. I am looking for a way to unwrap the coordinates so that I get correct coordinates for calculating the MSD.
I have tried shifting the origin of the box and applying an offset to it, and I have seen online that you have to write your coordinates as (length of the box / coordinate) + equilibrium position.
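For reference, here is a minimal sketch of the usual unwrapping approach, assuming an orthogonal box and that the dump file also contains the per-atom image flags ix, iy; the array layout and the example values below are made up:

import numpy as np

LX, LY = 91.24, 91.24                 # box lengths from the question

def unwrap(wrapped, images, box_lengths):
    # Unwrapped coordinate = wrapped coordinate + image flag * box length
    return wrapped + images * np.asarray(box_lengths)

# Hypothetical frame: the second atom has crossed the +x boundary once
wrapped = np.array([[10.0,  5.0],
                    [ 2.0, 80.0]])
images  = np.array([[0, 0],
                    [1, 0]])
print(unwrap(wrapped, images, (LX, LY)))   # second atom: x = 2.0 + 91.24 = 93.24

The MSD is then computed from these unwrapped positions; without image flags you would instead have to detect boundary crossings between consecutive frames.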

Related

How to get all Edges and Faces (Triangles) in a mesh and their nodes (3D co-ordinates) in GMSH (Python API)?

I need the lines and triangles (and the coordinates corresponding to them) as lists using the Python API. How do I go about it?
I have tried these functions
gmsh.model.mesh.createEdges()
edgeTags, edgeNodes = gmsh.model.mesh.getAllEdges()
gmsh.model.mesh.createFaces()
faceTags, faceNodes = gmsh.model.mesh.getAllFaces(3)
And I am not sure how I can proceed to extract the coordinates from the output of these functions.
I did not find any way to get the coordinates in the tutorials either.
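A minimal sketch of one way to do it, assuming getNodes returns matching node tags and a flat coordinate array, and that getAllEdges/getAllFaces return flat arrays of node tags (two per edge, three per triangle); the file name is hypothetical and the exact return shapes may vary between gmsh versions:

import gmsh
import numpy as np

gmsh.initialize()
gmsh.open("model.msh")                                  # hypothetical mesh file

# Build a lookup from node tag to (x, y, z)
node_tags, node_coords, _ = gmsh.model.mesh.getNodes()
coords = np.array(node_coords).reshape(-1, 3)
xyz = {int(tag): coords[i] for i, tag in enumerate(node_tags)}

# Edges: the node tags come back flat, two per edge
gmsh.model.mesh.createEdges()
edge_tags, edge_nodes = gmsh.model.mesh.getAllEdges()
edges = [(xyz[int(a)], xyz[int(b)])
         for a, b in np.array(edge_nodes).reshape(-1, 2)]

# Triangular faces: three node tags per face
gmsh.model.mesh.createFaces()
face_tags, face_nodes = gmsh.model.mesh.getAllFaces(3)
triangles = [(xyz[int(a)], xyz[int(b)], xyz[int(c)])
             for a, b, c in np.array(face_nodes).reshape(-1, 3)]

gmsh.finalize()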

Find the polygon enclosing the given coordinates and find the coordinates of the polygon (Python OpenCV)

Example image used in program
I am trying to find the coordinates of a polygon in an image.
(Just like the flood fill algorithm: we are given a coordinate and we need to search the surrounding pixels for the boundary; if a boundary pixel is found, we append its coordinate to the list, otherwise we keep searching other pixels.) When all the pixels have been traversed, the program should stop and return the list of pixels.
Usually the color of the boundary is black and the image is a grayscale image of a building floor plan.
It seems that flood-fill will be good enough to completely fill a room, despite the extra annotations. After filling, extract the outer outline. Now you can detect the straight portions of the outline by checking the angle formed by three successive points. I would keep a spacing between them to avoid local inaccuracies.
You will find a sequence of line segments, possibly interrupted at corners. Optionally use line fitting to maximize accuracy, and recompute the corners by intersecting the segments. Also consider joining aligned segments that are interrupted by short excursions.
If the rooms are not well closed, flood filling can leak and you are a little stuck. Consider filling with a larger brush, though this can cause other problems.
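A rough sketch of the fill-then-trace idea with OpenCV, using approxPolyDP as a shortcut for the straight-segment detection described above; the file name, seed point and thresholds are assumptions:

import cv2
import numpy as np

img = cv2.imread("floorplan.png", cv2.IMREAD_GRAYSCALE)     # hypothetical input
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Flood-fill the room from a seed point known to lie inside it
seed = (120, 150)                                            # hypothetical seed
mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), np.uint8)
cv2.floodFill(binary, mask, seed, 128)
room = (binary == 128).astype(np.uint8) * 255

# Extract the outer outline of the filled region
contours, _ = cv2.findContours(room, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outline = max(contours, key=cv2.contourArea)

# Approximate the outline with straight segments to get the polygon corners
epsilon = 0.01 * cv2.arcLength(outline, True)
polygon = cv2.approxPolyDP(outline, epsilon, True).reshape(-1, 2)
print(polygon)                                               # (x, y) corner list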

What do negative coordinates mean with cv2.perspectiveTransform?

What do negative coordinates mean when I apply the function:
transformed_coordinates = cv2.perspectiveTransform(points, homography)
The documentation mentions nothing about this. Could someone please explain?
Negative coordinates are entirely normal. That means that the projected points from 3D space to 2D image space are out of bounds or defined outside of the image boundaries. It's not documented because it's implicit.
Now you are probably wondering why you're getting these. I have no idea where points came from, but I suspect that you are visualizing some point cloud in 3D space and the transform maps visible points from the point cloud to where the camera is located. Therefore, it's perfectly normal to have points that are outside the field of view of the camera be mapped to negative coordinates which tells you they simply cannot appear or be visualized when projected to image space.
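For illustration only (not from the answer above), a tiny sketch showing how points that land outside the destination image come out with negative coordinates; the homography here is just a made-up translation:

import cv2
import numpy as np

# A homography that shifts everything 100 px left and 50 px up
homography = np.array([[1, 0, -100],
                       [0, 1,  -50],
                       [0, 0,    1]], dtype=np.float64)

points = np.array([[[30.0, 20.0], [300.0, 200.0]]])   # shape (1, N, 2)
print(cv2.perspectiveTransform(points, homography))
# [[[-70. -30.]   <- outside the image: negative, but perfectly valid
#   [200. 150.]]]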

OpenCV find object's position with solvePnPRansac with not-corresponding points

I am trying to find object's position relative to camera position in real-world coordinates by tracking a known 2D LED pattern on the object.
I did camera calibration. I was able to successfully detect the LEDs in the pattern and find their exact coordinates in the image frame. These points, however, do not correspond 1-to-1 to the known coordinates in the pattern; they are in random order. The correspondence is important in functions like solvePnPRansac or findHomography, which would be my first choice to use.
How can I find the correspondence between these sets of points or maybe should I use some other function to calculate transformation just like solvePnPRansac does?
As you did not ask about the way to estimate the relative pose between your object and your camera, I will leave that topic aside and focus on how to find correspondences between each LED and its 2D projection.
In order to obtain a unique 1-to-1 correspondence set, the LED pattern you use should be unambiguous with respect to rotation. For example, you may use a regular NxN grid with the top-left cell containing an additional LED, or LEDs located on a circle with one extra LED underneath a single one, etc. Then, the method to find the correspondences depends on the pattern you chose.
In the case of the circle pattern, you could do the following (a small sketch follows the list):
Estimate the center of gravity of the points
Find the disambiguating point, which is the only one not lying on the circle, and define the closest of the other points as the first observed point
Order the remaining points by increasing angle with respect to the center of gravity (i.e. clock-wise order)
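A small sketch of those three steps, assuming the off-circle LED is the one whose distance to the centroid deviates most from the median radius (that heuristic and the function name are mine, not from the answer):

import numpy as np

def order_circle_pattern(points):
    # Returns the indices of the on-circle points in observation order
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)                      # center of gravity
    radii = np.linalg.norm(points - center, axis=1)

    # Disambiguating point: radius deviates most from the median
    extra = np.argmax(np.abs(radii - np.median(radii)))
    on_circle = np.delete(np.arange(len(points)), extra)

    # First observed point: the on-circle point closest to the extra LED
    dists = np.linalg.norm(points[on_circle] - points[extra], axis=1)
    first_pos = np.argmin(dists)

    # Order the rest by increasing angle around the centroid, starting at that point
    angles = np.arctan2(points[on_circle, 1] - center[1],
                        points[on_circle, 0] - center[0])
    angles = (angles - angles[first_pos]) % (2 * np.pi)
    return on_circle[np.argsort(angles)]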
In the case of the regular grid pattern, you could try the following (see the sketch after this list):
Find the four corners of the grid (those with min/max coordinates)
Estimate the homography which transforms these four corners to the corners of a regular NxN square (with orthogonal angles)
Transform the other points using this homography
Find the disambiguating point, which is the only one for which X - floor(X) and Y - floor(Y) are close to 0.5, and define the closest of the four initial corners as the first observed point
Order the remaining points by increasing angle with respect to the center of the grid and decreasing distance to the center of the grid
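A rough sketch of the homography step for the grid case, with a simplified corner pick (extremes of x+y and x-y) and without the 0.5-offset disambiguation step; cell indices are obtained by rounding the rectified coordinates:

import cv2
import numpy as np

def grid_correspondences(points, n):
    # Map detected grid points to integer (col, row) indices of an n x n grid
    points = np.asarray(points, dtype=np.float32)

    # Crude corner pick: top-left, top-right, bottom-right, bottom-left
    s, d = points.sum(axis=1), points[:, 0] - points[:, 1]
    corners = points[[np.argmin(s), np.argmax(d), np.argmax(s), np.argmin(d)]]

    # Homography taking the detected corners to a canonical (n-1) x (n-1) square
    target = np.array([[0, 0], [n - 1, 0], [n - 1, n - 1], [0, n - 1]],
                      dtype=np.float32)
    H = cv2.getPerspectiveTransform(corners, target)

    # Rectify all points and round to the nearest grid cell
    rectified = cv2.perspectiveTransform(points.reshape(1, -1, 2), H).reshape(-1, 2)
    return np.rint(rectified).astype(int)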
You could also study the algorithm used by the function findChessboardCorners (see calibinit.cpp in the calib3d module), which uses a similar approach to order the detected corners.

How can I find the area of an element in a meshed surface using Python

I am new to Python, so please help me with this. I have X, Y, Z coordinates (3D data points), say 1000 points, which form a surface in 3D space, and I have to find its total surface area.
This can be done by meshing the coordinates in X, Y, Z, then finding the area of each element and summing them up.
I have already meshed the coordinates in 3D space.
Now what I need is to find the area of each element. Is there any method in Python with which I can calculate the surface area?
I was advised to use the Gaussian quadrature method, but I did not understand how to use it in Python to get the area.
Can anyone help me find the area of the surface using Python?
Any help is appreciated.
You can use Gaussian quadrature to calculate the area, either by doing an area integral or by doing a contour integral around the perimeter of each element.
Maybe this will get you started:
http://www.physics.ohio-state.edu/~ntg/780/readings/hjorth-jensen_notes2009_07.pdf
I wouldn't wait for someone to hand you Python code. Better to get a shovel and start digging.
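As a starting point rather than a finished solution: for a surface meshed into flat triangles, the element area integral reduces to the cross-product formula (which is what one-point Gauss quadrature gives on a linear triangle), so you can simply sum the element areas. The vertex array and the triangle index list below are assumed to come from your existing meshing step:

import numpy as np

def surface_area(vertices, triangles):
    # vertices  : (N, 3) array of X, Y, Z coordinates
    # triangles : (M, 3) array of vertex indices, one row per element
    v = np.asarray(vertices, dtype=float)
    t = np.asarray(triangles, dtype=int)
    a, b, c = v[t[:, 0]], v[t[:, 1]], v[t[:, 2]]
    # Area of each flat triangle = half the norm of the cross product of two edges
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    return areas.sum()

# Tiny check: two triangles forming the unit square in the z = 0 plane
verts = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
tris = [[0, 1, 2], [0, 2, 3]]
print(surface_area(verts, tris))   # 1.0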
