Freely rotate instances in Abaqus - python

I would like to manually rotate and translate instances of my Abaqus model without entering values (see image). I don't have coordinates to translate the instances to; I merely want an estimate of the final shape of my model. Therefore I want to 'manually' shift and drag the instances with my computer mouse, but I cannot find the buttons for it. Does anyone know if such an option exists?
Thank you in advance.

Related

How can I create a triangle mesh in an .ifc file from scratch in python?

I am new to working with .ifc files in Python. What is the best way to create a triangle mesh in a new .ifc file when I have two arrays - one with vertices and one with faces - and how can I do this in Python with the ifcopenshell package?
I have searched the documentation endlessly and was not able to find it. I would be very thankful if someone could point me in the right direction.
I want a script similar to this one, but instead of creating a wall I just want to create a triangle surface: https://blenderbim.org/docs-python/ifcopenshell-python/code_examples.html#create-a-simple-model-from-scratch. However, I have not found the right "ifc_class" with the corresponding parameters for that.
If you're targeting IFC4 (and above) you can use IfcTriangulatedFaceSet, which is specifically designed for a triangulated surface such as this.
If you need to support IFC2x3 (which doesn't have the above entity), you probably want IfcShellBasedSurfaceModel - a more generalised entity that can represent the same geometry.
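For the IFC4 route, a minimal sketch with ifcopenshell might look like the following; the vertex and face arrays are made-up example data, and the 1-based CoordIndex is an IFC requirement:

import ifcopenshell

# A minimal sketch, assuming IFC4; the vertex/face data is hypothetical.
model = ifcopenshell.file(schema="IFC4")

vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(1, 2, 3)]  # CoordIndex is 1-based in IFC

point_list = model.create_entity("IfcCartesianPointList3D", CoordList=vertices)
face_set = model.create_entity(
    "IfcTriangulatedFaceSet",
    Coordinates=point_list,
    CoordIndex=faces,
    Closed=False,
)

Note that, as in the linked wall example, the face set still needs to be attached to a product through an IfcShapeRepresentation before viewers will display it.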

How to get all points that make up a certain shape in vtk

I am working on my graduation project, and one of the tasks requires me to draw a 3D shape (for example an ellipsoid using vtkSampleFunction) that represents, say, a heart. I need to change the color of certain areas of that shape and make color gradients using two colors, for example. How can this be achieved? All I could find is that it can't be done without having polydata (points), and I don't know how to access specific points on the outline of my shape. Any help would be appreciated.
I have tried many ways to access points on the outline of my shape, but I can't find any way to do it.
I am new to VTK, so please try to simplify any answer. Thank you.
If you are looking for a way to extract all the points inside a surface, you can use the vtkSelectEnclosedPoints filter. For example, if you want to find out which points in pointsPolydata lie inside the surface surfacePolydata, you can use the example below.
from vtk import vtkSelectEnclosedPoints

# Mark which points of pointsPolydata lie inside surfacePolydata
select = vtkSelectEnclosedPoints()
select.CheckSurfaceOn()     # verify that the surface is closed first
select.SetTolerance(0.001)
select.SetInputData(pointsPolydata)
select.SetSurfaceData(surfacePolydata)
select.Update()
output = select.GetOutput()
The output polydata will have a point-data array named "SelectedPoints", with 0 for points outside the surface and 1 for points inside the surface.
For more details, refer to the vtkSelectEnclosedPoints documentation.
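To act on that flag array afterwards, a short follow-up sketch (assuming the output variable from the filter above):

# Read the "SelectedPoints" flags produced by vtkSelectEnclosedPoints.
flags = output.GetPointData().GetArray("SelectedPoints")
inside_ids = [i for i in range(output.GetNumberOfPoints())
              if flags.GetValue(i) == 1]
print(len(inside_ids), "points lie inside the surface")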

Remove inside information when merging two 3D objects

Hi, I'm currently working on a project where we have to combine multiple 3D objects. For example, we place them next to each other, and sometimes they also intersect.
I'm looking for an algorithm/library or any idea that would reduce this new merged object to only its outside faces. (Our 3D objects are currently .stl files, but we are not bound to this format.)
We've tried combining these objects with numpy-stl, but it seems this library does not have any functionality that would help with this problem. We also tried the boolean merge from pymesh, but this takes a very long time with detailed objects.
We want to lose all information that is inside the object and only keep the information that is outside. For example, if you put this combined 3D object in water, we only want the faces that would be touched by the water.
We prefer Python, but any algorithm that could be implemented in Python would move us forward.
We appreciate every answer :)
LibIGL appears to have Python bindings. I would suggest thresholding the ambient occlusion of each facet - for example, deleting all facets with an occlusion value higher than 0.8.
https://libigl.github.io/libigl-python-bindings/igl_docs/#ambient_occlusion
The inputs to this function are the vertices, the facets indexing into the vertices, the positions of the facet centroids, and the normals of each facet. The output is the ambient occlusion of each facet: a value between 0 and 1, where 0 means the facet is fully visible and 1 means it is completely shadowed.
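A rough sketch of that idea with the libigl Python bindings; the file names and the 0.8 threshold are illustrative, and the small offset along the normals is a common trick to keep the occlusion rays from hitting the facet they start on:

import igl
import numpy as np

# Hypothetical input: the already-combined mesh.
v, f = igl.read_triangle_mesh("merged.stl")

# Per-facet centroids and normals, as the function expects.
centroids = v[f].mean(axis=1)
normals = igl.per_face_normals(v, f, np.array([0.0, 0.0, 1.0]))

# Offset the ray origins slightly so a facet does not occlude itself.
origins = centroids + 1e-4 * normals

# Ambient occlusion per facet, in [0, 1], sampled with 500 rays each.
ao = igl.ambient_occlusion(v, f, origins, normals, 500)

# Keep only facets that are reasonably visible from the outside.
f_outside = f[ao < 0.8]
igl.write_triangle_mesh("outside_only.stl", v, f_outside)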

How can I make camera commands in ParaView take effect simultaneously

Good evening,
I have a script that rotates the camera in ParaView. It looks like this:
from paraview.simple import GetActiveCamera, Render

camera = GetActiveCamera()
camera.Elevation(45)
camera.Roll(90)
Render()
The thing is, changing the order of the commands changes the final orientation as the camera rotates the second command starting from the already rotated position. Is there a way to make both commands take effect at the same time?
Thank you for any suggestions
Given a vtkCamera object, there is a method ApplyTransform which will allow you to apply a vtkTransform object to your camera.
vtkTransform objects have many more methods for transforms than the simple ones exposed in the vtkCamera interface. You can even use multiple transform objects to build up a transform system. If you have a transformation matrix for the camera already, you can pass it to the vtkTransform object with the SetMatrix method.
https://www.vtk.org/doc/nightly/html/classvtkTransform.html
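A sketch of that approach, assuming a recent ParaView where the vtkmodules packages are importable from the Python shell; the axes and angles below are illustrative stand-ins for your elevation and roll:

from paraview.simple import GetActiveCamera, Render
from vtkmodules.vtkCommonTransforms import vtkTransform

camera = GetActiveCamera()

# Compose both rotations into one matrix first, then apply it once,
# so neither rotation sees an intermediate camera state.
t = vtkTransform()
t.RotateX(45)  # stand-in for the elevation axis
t.RotateY(90)  # stand-in for the roll axis
camera.ApplyTransform(t)
Render()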
You cannot apply the two commands at the same time. Moreover, the two operations (Elevation and Roll) do not commute.
Indeed, you can see here:
https://www.paraview.org/Wiki/ParaView_and_Python
that Roll(angle) performs a rotation around the axis defined by the view direction and the origin of the dataset.
Since the view direction changes depending on whether Elevation has already been applied, so does the final result.

Creating own object detection for Euro pallets

I thought about tackling a new project in which I use the TensorFlow object detection API to detect Euro pallets (e.g. pic).
My ultimate goal is to know how far away I am from the pallet and what my position is relative to it. So I thought about first detecting the Euro pallet in an RGB feed from a Kinect camera and then using its depth capability to get the distance to the pallet.
But how do I go about the relative position of the pallet? I could create different classes, for example "front view, lying pallet", "side view, lying pallet", etc., but I think for that to be accurate I'd need quite a few pictures per class for it to be valid - maybe 200 for each class?
Since my guess is that no such labeled datasets exist yet, that's quite a pain to create by myself.
Another approach I could think of: if I label my pallets with segmentation instead of bounding boxes, maybe there is another way to find out my position relative to the pallet? I have never done semantic segmentation labeling myself - can anyone name any good programs I could use?
I'm hoping someone can help point me in the right direction. Any help would be appreciated.
Some ideas: assuming detection and segmentation with classifier(s) works, one could then try feature detection such as edges/lines to obtain clues about the pallet's orientation within the bounding box (see the sketch below).
Of course this will be tricky for simple feature detection because of very different surfaces (wood, dirt), backgrounds and lighting.
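A rough sketch of that edge/line idea with OpenCV; the file name is hypothetical and stands in for a crop of the detector's pallet bounding box:

import cv2
import numpy as np

# Hypothetical input: an image cropped to the detected pallet box.
crop = cv2.imread("pallet_crop.jpg")
gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough transform: the dominant line angles give a
# rough hint of the pallet's in-plane orientation.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=5)
if lines is not None:
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1))
              for x1, y1, x2, y2 in lines[:, 0]]
    print("dominant line angles:", np.round(angles, 1))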
Also, "markerless tracking" (a topic in augmented reality) and "bin picking" (actually applied in the automation industry) may be keywords for similar problems, although you are probably not starting with an unordered pile of pallets.
