PythonOCC: How can I get cursor coordinates in 3D space? - python

I need to get the coordinates of the cursor in the 3D space of the scene when I hover it over some surface. How can I do that?
I tried to use the ConvertWithProj function to get at least the ray and then find its intersection with the surface. The direction of the ray seems to be correct, but the point is not. As a result, the ray does not pass through the point the cursor is hovering over, but very far from it.
I have also tried to use the ConvertToGrid function, but I never managed to get it to work.
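Once you do have a trustworthy ray origin and direction (e.g. from V3d_View's ConvertWithProj), the pick point on a planar surface can be computed directly. A minimal sketch of the intersection math with numpy, independent of the OCCT API (all names here are illustrative, not part of pythonocc):

```python
import numpy as np

def ray_plane_intersection(ray_origin, ray_dir, plane_point, plane_normal):
    """Return the point where a ray hits a plane, or None if it misses."""
    ray_origin = np.asarray(ray_origin, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    plane_point = np.asarray(plane_point, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)

    denom = np.dot(ray_dir, plane_normal)
    if abs(denom) < 1e-12:   # ray is parallel to the plane
        return None
    t = np.dot(plane_point - ray_origin, plane_normal) / denom
    if t < 0:                # intersection lies behind the eye point
        return None
    return ray_origin + t * ray_dir

# Example: ray from (0, 0, 10) looking down -Z, hitting the plane z = 0
hit = ray_plane_intersection((0, 0, 10), (0, 0, -1), (0, 0, 0), (0, 0, 1))
print(hit)  # [0. 0. 0.]
```

If the ray direction looks right but the computed point is far off, a common cause is mixing coordinate systems: the origin returned by the conversion is a point on the projection plane in world coordinates, so both origin and direction must be in the same frame as the surface before intersecting. For non-planar shapes, OCCT's shape intersection classes would replace the plane test above.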

Related

How to unwrap coordinates produced by LAMMPS in python

Okay, so I am using LAMMPS to produce wrapped coordinates, since I am using periodic boundary conditions in the simulation, in a box of x = [0, 91.24] by y = [0, 91.24]. I was looking for how to unwrap the coordinates so that I get correct coordinates to calculate the MSD.
I have tried setting the origin of the box and applying an offset to it. I have seen online that you have to write your coordinates as (length of the box/coordinate) + equilibrium position.
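If the dump file includes the image flags (ix, iy), the standard unwrap is simply wrapped coordinate plus image flag times box length. A sketch under that assumption, using the box size from the question (function names are illustrative):

```python
import numpy as np

BOX = np.array([91.24, 91.24])  # box lengths in x and y (from the question)

def unwrap(wrapped_xy, image_flags):
    """Unwrap periodic coordinates using LAMMPS image flags (ix, iy).

    A particle with ix = 1 has crossed the +x boundary once, so one
    full box length is added back in x.
    """
    return np.asarray(wrapped_xy, dtype=float) + np.asarray(image_flags) * BOX

print(unwrap([0.5, 45.0], [1, 0]))  # one box length added in x
```

If image flags were not written to the dump, an alternative is to unwrap along the trajectory frame by frame: whenever a coordinate jumps by more than half the box length between consecutive frames, add or subtract one box length. Either way, the MSD should be computed on the unwrapped coordinates.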

What do negative coordinates mean with cv2.perspectiveTransform?

What do negative coordinates mean when I apply the function:
transformed_coordinates = cv2.perspectiveTransform(points, homography)
The documentation mentions nothing about this. Could someone please explain?
Negative coordinates are entirely normal. That means that the projected points from 3D space to 2D image space are out of bounds or defined outside of the image boundaries. It's not documented because it's implicit.
Now you are probably wondering why you're getting these. I have no idea where points came from, but I suspect that you are visualizing some point cloud in 3D space and the transform maps visible points from the point cloud to where the camera is located. Therefore, it's perfectly normal to have points that are outside the field of view of the camera be mapped to negative coordinates which tells you they simply cannot appear or be visualized when projected to image space.
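The effect is easy to reproduce with the math cv2.perspectiveTransform performs: multiply by the homography in homogeneous coordinates, then divide by the last component. A sketch with a toy homography (chosen here purely for illustration) that shifts points 50 px left, pushing them off-image:

```python
import numpy as np

# A homography that translates points 50 px in the -x direction.
H = np.array([[1.0, 0.0, -50.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography, as perspectiveTransform does."""
    pts = np.asarray(pts, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homo @ H.T
    return mapped[:, :2] / mapped[:, 2:3]            # perspective divide

print(apply_homography(H, [[10.0, 20.0]]))  # [[-40.  20.]] -> x is off-image
```

A point at x = 10 maps to x = -40: a perfectly valid result in the target plane's coordinate frame, just outside the region covered by the image.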

how to draw a graph based on GPS coordinates in pyqtgraph?

I have a data set which contains latitude and longitude (these are car racing data). I would like to draw a map in pyqtgraph based on these coordinates and then interact with it. The problem is I cannot find the proper way to draw the map with the current pyqtgraph API (or maybe I am missing something).
Does anybody know how I can draw the map with pyqtgraph?
I would start with Qt's primitives like QGraphicsPathItem or QGraphicsPolygonItem. If you are starting from a numpy array of coordinates, then you might find pg.arrayToQPath() useful as well.
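Before drawing, raw lat/lon usually needs projecting to planar coordinates, otherwise the track appears stretched, since a degree of longitude shrinks with latitude. For a race track the area is small, so a simple equirectangular approximation suffices; a sketch (function name and reference point are illustrative):

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def latlon_to_xy(lat, lon, lat0, lon0):
    """Project lat/lon (degrees) to local planar meters around (lat0, lon0).

    Equirectangular approximation: adequate when the covered area is
    tiny relative to the Earth, as with a race track.
    """
    x = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * EARTH_RADIUS_M
    return x, y

# One degree of latitude is roughly 111 km
x, y = latlon_to_xy(45.0, 7.0, 44.0, 7.0)
print(round(y))  # 111195
```

The resulting x/y arrays can then be fed to a pg.PlotDataItem, or converted to a QPainterPath with pg.arrayToQPath for use in a QGraphicsPathItem as suggested above.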

Snap 3D cursor to an opaque part of plane (blender)

I have a question regarding python scripting in Blender, and I'd really appreciate it if someone could give me at least some conceptual guidelines to how I could do this:
Basically I have around 100 planes (simple primitive planes), and each of them has its own material, and each material has its own transparency map applied to it.
I need a way to snap each of those planes' respective pivots to their opaque parts. That is, I want to tell Blender the following through Python: "hey, go over every one of these planes, and do the following for each - snap the 3D cursor to an opaque part of the plane (it doesn't matter where exactly, as long as it's inside an opaque part of the plane) and then snap the plane's pivot point to the 3D cursor".
Of course I don't expect anyone to write me a full algorithm for this, I am just asking for a little help and a push in the right direction, as I do have experience with python, but not with blender :/
Any help would be appreciated.
You can find documentation on blender's python api here.
Within Blender's image class you can access the pixel data at image.pixels as an array of floats, 4 floats per pixel (RGBA, I think). image.size[0] is the width in pixels; image.size[1] is the height.
Given bpy.data.objects['Plane'].bound_box is an [8][3] array of points defining the outer extremes of the plane, you can locate a point on the plane for the pixel location to get the target point for the origin. You will also find bpy.data.objects['Plane'].matrix_world useful to translate the object coordinates to global.
bpy.context.scene.cursor_location = Vector((x,y,z)) will move the cursor to where you want.
bpy.ops.object.origin_set(type='ORIGIN_CURSOR') will set the active objects origin to the cursor. Note that this works on the active object, so you will need to alter your selection as you go.
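The pixel-scanning part of the steps above can be sketched without bpy at all, since image.pixels is just a flat RGBA float list. A minimal sketch (the function name and threshold are my own, not Blender API):

```python
def first_opaque_pixel(pixels, width, height, threshold=0.5):
    """Find (px, py) of the first pixel whose alpha exceeds threshold.

    `pixels` is a flat RGBA float array, 4 floats per pixel, stored
    row by row, which is the layout bpy exposes via image.pixels.
    Returns None if every pixel is transparent.
    """
    for i in range(width * height):
        alpha = pixels[i * 4 + 3]
        if alpha > threshold:
            return i % width, i // width
    return None

# 2x2 image where only the last (top-right) pixel is opaque
px = [0, 0, 0, 0,   0, 0, 0, 0,
      0, 0, 0, 0,   1, 1, 1, 1]
print(first_opaque_pixel(px, 2, 2))  # (1, 1)
```

Inside Blender you would then map the fractional position (px / width, py / height) onto the plane's extent from bound_box, push it through matrix_world to get world coordinates, assign that to the cursor, and call origin_set as described above.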

How can I find the area of an element in a meshed surface using Python

I am new to Python, so please help me with this. I have X, Y, Z coordinates (3D data points), let's say 1000 points, which make up a surface in 3D space. I have to find the total surface area of it.
This can be done by meshing the coordinates in X, Y, Z and then finding the area of each element and summing up.
I have meshed the coordinates in 3D space.
Now what I need is to find the area of each element. Is there any method in Python where I can calculate the surface area?
It was suggested that I use the Gaussian quadrature method, but I didn't understand how to use it in Python to get the area.
Can anyone help me find the area of the surface using Python?
Any help is appreciated.
You can use Gaussian quadrature to calculate the area, either by doing an area integral or by doing a contour integral around the perimeter of each element.
Maybe this will get you started:
http://www.physics.ohio-state.edu/~ntg/780/readings/hjorth-jensen_notes2009_07.pdf
I wouldn't wait for someone to hand you Python code. Better to get a shovel and start digging.
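As a starting point: if the mesh is triangulated, full quadrature is not strictly needed, because each triangle's area is half the norm of the cross product of two of its edge vectors, and the total is just the sum. A minimal sketch with numpy (array names are illustrative):

```python
import numpy as np

def mesh_area(vertices, triangles):
    """Total area of a triangulated surface.

    vertices:  (N, 3) array of XYZ points
    triangles: (M, 3) array of vertex indices per triangle
    """
    v = np.asarray(vertices, dtype=float)
    tri = np.asarray(triangles)
    a = v[tri[:, 1]] - v[tri[:, 0]]   # first edge of each triangle
    b = v[tri[:, 2]] - v[tri[:, 0]]   # second edge of each triangle
    # Area of each triangle = 0.5 * |a x b|; sum over all elements
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()

# Unit square in the z = 0 plane, split into two triangles -> area 1.0
verts = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
tris = [[0, 1, 2], [0, 2, 3]]
print(mesh_area(verts, tris))  # 1.0
```

For quadrilateral or higher-order elements, Gaussian quadrature as in the linked notes becomes the natural tool; for plain triangles the cross-product formula above is exact.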
