I can't get RenderCollision to work, no matter what I try.
The documentation says:
RenderCollision (view_id: int, vtx_layout: VertexLayout, prg: ProgramHandle, render_state: RenderState, depth: int) -> None
Here's my (limited) understanding of what I should pass as parameters to this function:
view_id: can be set from 0 to 255, according to the doc. In my case, it is 0.
vtx_layout: the vertex layout used to store 3D lines.
prg: the program (shader) needed to draw 3D lines.
render_state: something I'm supposed to provide using ComputeRenderState (found it here).
depth: something related to the zDepth, I guess?
At this point, I feel I'm not far from using it properly, but I'm having a hard time figuring out the RenderState thing.
Anyone been there before?
RenderCollision is a debug function, so it won't "consume" a view_id of its own: you pass it the view_id of the current view and it will draw into that view.
vtx_layout and prg, as you guessed, handle the rendering of the debug lines (RenderCollision uses lines to draw the collision shapes).
It usually works this way:
Avoid clearing the view when drawing the debug info:
hg.SetViewClear(view_id, hg.CF_None, 0, 1.0, 0)
Set the rect of the current view (the same as your main rendering):
hg.SetViewRect(view_id, 0, 0, screen_size_x, screen_size_y)
Set the camera transformation matrix (the same as your current camera):
hg.SetViewTransform(view_id, view_matrix, projection_matrix)
This is the one you were probably looking for: BM_Opaque lets Harfang know where exactly in the rendering pipeline you want to draw:
render_state = hg.ComputeRenderState(hg.BM_Opaque, hg.DT_Disabled, hg.FC_Disabled)
The final instruction, which draws the collision shapes:
physics.RenderCollision(view_id, vtx_line_layout, line_shader, render_state, 0)
You will find a working example here; I hope it helps:
https://github.com/harfang3d/tutorials-hg2/blob/master/physics_overrides_matrix.py#L69
I'm trying to render a cube (default blender scene) with a camera facing it. I have added a spotlight at the same location as the camera. Spotlight direction also faces towards the cube.
When I render, location changes take effect for both the camera and the spotlight, but rotations don't. Scene context update is deprecated now; I have seen other update answers, but they don't seem to help.
I have done some workarounds and they seem to work, but this is not the correct way.
If I render the same set of commands twice (in a loop), I get the correct render.
If I run the script from Blender's Python console (only once), I get the correct render. But if the same code is run as a script inside Blender, the render is again wrong.
import numpy as np
import bpy

def look_at(obj_camera, point):
    loc_camera = obj_camera.matrix_world.to_translation()
    direction = point - loc_camera
    # point the object's -Z axis at the target, keeping Y up
    rot_quat = direction.to_track_quat('-Z', 'Y')
    obj_camera.rotation_euler = rot_quat.to_euler()
data_path = 'some folder'
locs = np.array([0.0, -10.0, 3.00000011e-06])  # assume a single location; in reality there is a big array of positions where the camera and spotlight need to be placed, then made to look towards the cube
obj_camera = bpy.data.objects["Camera"]
obj_other = bpy.data.objects["Cube"]
bpy.data.lights['Light'].type = 'SPOT'
obj_light = bpy.data.objects['Light']
loc = locs
i = 0
##### if I run the following lines two times, the correct render is obtained
obj_camera.location = loc
obj_light.location = obj_camera.location
look_at(obj_light, obj_other.matrix_world.to_translation())
look_at(obj_camera, obj_other.matrix_world.to_translation())
bpy.context.scene.render.filepath = data_path + 'image_{}.png'.format(i)
bpy.ops.render.render(write_still=True)
You might need to call bpy.context.view_layer.update() (bpy.context.scene.update() in versions older than Blender 2.8) after changing the camera orientation with obj_camera.rotation_euler = rot_quat.to_euler(). Also make sure that the layers that are going to be rendered are active when calling update() (see https://blender.stackexchange.com/questions/104958/object-locations-not-updating-before-render-python).
(A bit late ;-) but this was one of the rare questions I found for a related issue.)
I use OCC with Python to visualize the .igs and .stl formats. In the .stl file I have a mesh on my model, and I want to know which vertex on this mesh was clicked, or at least to get some kind of id. I see that the model I choose is automatically highlighted without any settings, so I guess there is a way to do this, but I couldn't find any information about it.
Okay, found it. In case someone else needs it:
display = self.occWidget._display
display.SetSelectionModeVertex() # This is the required function
display.register_select_callback(recognize_clicked)
where recognize_clicked is
def recognize_clicked(shp, *kwargs):
    """ This is the function called every time
    a face is clicked in the 3d view
    """
    for shape in shp:
        print("Face selected: ", shape)
Face selection - SetSelectionModeFace()
Vertex selection - SetSelectionModeVertex()
Edge selection - SetSelectionModeEdge()
Shape selection - SetSelectionModeShape()
Neutral (default) selection - SetSelectionModeNeutral()
Those are all the modes I've found in other examples. If you find more, please mention the resource in a comment.
Here's some background:
I'm working on a game engine and I've recently added a physics engine (PhysX in this case). The problem is that my transform class uses Euler angles for rotation while the physics engine's transform class uses quaternions, so I implemented a method to convert my transform class to the physics engine's transform and back. It's working well, but I've discovered a weird bug.
Behavior I get:
When the yaw (second element of the Euler vector) of the rotation goes above 90 degrees, the object no longer rotates on the y axis, and the pitch and roll start misbehaving (weird shaking, skipping from 0 to 180 and back a lot).
The debugging tools show that the rotation doesn't go above 91 degrees, but it does reach about 90.0003 max. I do convert the degrees to radians.
Example:
To show this bug I have a cube with a Python script rotating it:
from TOEngine import *

class rotate:
    direction = vec3(0, 10, 0)

    def Start(self):
        pass

    def Update(self, deltaTime):
        transform.Rotate(self.direction * deltaTime * 5)
The engine itself is written in C++, but I have a scripting system working through embedded Python. TOEngine is just my module, and the script itself simply rotates the cube every frame.
The cube starts at 0, 0, 0 rotation and rotates fine, but it stops at 90 degrees yaw and starts shaking.
This only happens when the physics system is enabled, so I know the bug must be in the method that transfers the rotation from Euler angles to a quaternion and back every frame using glm.
Here's the actual problematic code:
void RigidBody::SetTransform(Transform transform)
{
    glm::vec3 axis = transform.rotation;
    rigidbody->setGlobalPose(PxTransform(*(PxVec3*)&transform.position, *(PxQuat*)&glm::quat(glm::radians(transform.rotation)))); // Attention over here
}

Transform RigidBody::GetTransform()
{
    auto t = rigidbody->getGlobalPose();
    return Transform(*(glm::vec3*)&t.p, glm::degrees(glm::eulerAngles(*(glm::quat*)&t.q)), entity->transform.scale);
}
Please ignore the weird type punning; PxQuat has basically the same layout as glm::quat, and PxVec3 the same as glm::vec3. I expect this code to convert between the physics engine's transform class and my transform class by changing the rotation from Euler angles in degrees to a quaternion in radians (the hard part).
And inside the physics system:
void PreUpdate(float deltaTime) override { // Set physics simulation changes to the scene
    mScene->fetchResults(true);
    for (auto entity : Events::scene->entities)
        for (auto component : entity->components)
            if (component->GetName() == "RigidBody")
                entity->transform = ((RigidBody*)component)->GetTransform(); // This runs on the cube entity
}

void PostUpdate(float deltaTime) override { // Set scene changes to the physics simulation
    for (auto entity : Events::scene->entities)
        for (auto component : entity->components)
            if (component->GetName() == "RigidBody")
                ((RigidBody*)component)->SetTransform(entity->transform); // This runs on the cube entity
    mScene->simulate(deltaTime);
}
PreUpdate runs before the update every frame; PostUpdate runs after the update every frame. The Update method (shown in the script above), as the name suggests, runs in between PreUpdate and PostUpdate. The cube has a RigidBody component.
What I expect to get:
A rotating cube that doesn't stop rotating when it reaches 90 degrees yaw.
I know this one is a bit complex; I tried my best to explain the bug. I believe the problem is in converting the Euler angles to a quaternion.
Regarding the conversion from PxQuat to glm::quat: read the documentation on C-style casts (https://en.cppreference.com/w/cpp/language/explicit_cast) and on reinterpret_cast (https://en.cppreference.com/w/cpp/language/reinterpret_cast), and look for the undefined-behavior notes on the reinterpret_cast page. As far as I can tell, that C-style cast is not guaranteed to work, nor is it desirable. I am digressing at this point, but remember that you have two options for this conversion.
glm::quat glmQuat = GenerateQuat();
physx::PxQuat someQuat = *(physx::PxQuat*)(&glmQuat); //< (1)
physx::PxQuat someOtherQuat = ConvertGlmQuatToPxQuat(glmQuat); //< (2)
(1) This option has the potential to result in undefined behavior and, more importantly, you did not even save a copy: that statement is certain to invoke the copy constructor once.
(2) This option, thanks to return value optimization, also results in a single construction of physx::PxQuat.
So in effect, with option (1) you save no cost but risk undefined behavior; with option (2) the cost is the same but the code is standards compliant. Now back to the original point.
I would ordinarily do everything in my capacity to avoid Euler angles, as they are error prone and far more confusing than quaternions. That said, here is a simple test you can set up to check your quaternion conversion from Euler angles (keep PhysX out of it for the time being).
You need to generate the following methods.
glm::mat3 CreateRotationMatrix(glm::vec3 rotationDegrees);
glm::mat3 CreateRotationMatrix(glm::quat inputQuat);
glm::quat ConvertEulerAnglesToQuat(glm::vec3 rotationDegrees);
and then your pseudocode for the test looks like this:
for (auto angles : allPossibleAngleCombinations) {
    auto expectedRotationMatrix = CreateRotationMatrix(angles);
    auto convertedQuat = ConvertEulerAnglesToQuat(angles);
    auto actualRotationMatrix = CreateRotationMatrix(convertedQuat);
    ASSERT(expectedRotationMatrix, actualRotationMatrix);
}
Only if this test passes can you look at the next problem of converting to PxQuat. My guess is that this test will fail for you. One piece of advice: one of the input angles (which one depends on the convention) needs to be range limited. If you keep your yaw angle within (-90, 90) degrees, chances are your test will succeed. This is because multiple, non-unique combinations of Euler angles can produce the same rotation matrix.
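Here is a sketch of that test in plain Python, with no glm or PhysX involved. The helper names and the intrinsic X-Y-Z rotation order are assumptions; match the order to whatever convention your engine actually uses:

```python
import math

def quat_axis_angle(axis, angle):
    # unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    # Hamilton product, components ordered (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def euler_to_quat(x, y, z):
    # intrinsic X-Y-Z order (an assumption; adjust to your convention)
    qx = quat_axis_angle((1, 0, 0), x)
    qy = quat_axis_angle((0, 1, 0), y)
    qz = quat_axis_angle((0, 0, 1), z)
    return quat_mul(quat_mul(qx, qy), qz)

def mat_from_quat(q):
    # standard rotation matrix of a unit quaternion
    w, x, y, z = q
    return [[1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
            [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
            [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)]]

def mat_from_euler(x, y, z):
    # same X-Y-Z order, built directly from elementary rotation matrices
    cx, sx = math.cos(x), math.sin(x)
    cy, sy = math.cos(y), math.sin(y)
    cz, sz = math.cos(z), math.sin(z)
    rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]
    def mm(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return mm(mm(rx, ry), rz)

# round-trip check over a sweep of angles, including yaw well past 90 degrees
for deg in range(0, 360, 15):
    e = (0.3, math.radians(deg), 0.7)
    m_direct = mat_from_euler(*e)
    m_via_quat = mat_from_quat(euler_to_quat(*e))
    assert all(abs(m_direct[i][j] - m_via_quat[i][j]) < 1e-9
               for i in range(3) for j in range(3))
```

The key point the test isolates: comparing rotation matrices sidesteps the non-uniqueness of Euler angles, so the conversion can be validated even at yaw values where extracting angles back would flip conventions.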
I'm trying to implement an object tracker using OpenCV, and I'm new to Python. I'll call it from C# code via IronPython. What I want to do is pass a custom rectangle as a parameter to the tracker instead of selecting it with the mouse.
(Tracker code is the common example you can find on the internet)
Here is the problematic part:
This is how I create and set a rectangle:
initBB = cv2.rectangle(frame ,(154, 278),(173,183), (0, 255, 00),1)
This is Tracker's init method
tracker.init(frame, initBB)
and this is the error
SystemError: new style getargs format but argument is not a tuple
If I used the "normal" way, initBB would be set like this:
initBB = cv2.selectROI("Frame", frame, fromCenter=False,
                       showCrosshair=False)
I can't see which part I'm doing wrong: am I assigning the wrong type of object to initBB, or setting it in the wrong way?
Thanks! Have a nice day!
Your error comes from a misunderstanding of what cv2.rectangle does.
It doesn't return a rectangle as you imagine; it is actually a drawing function. It draws the rectangle on the image you pass as an argument and returns None.
A rectangle is just a tuple in Python with the following coordinates: (start_col, start_row, width, height). You can create it without using an OpenCV function.
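For instance, the two corner points from the cv2.rectangle call in the question can be converted into such a tuple without any OpenCV call at all (a minimal sketch; the coordinates are the ones from the question):

```python
# corner points as passed to cv2.rectangle in the question
p1, p2 = (154, 278), (173, 183)

# turn the two corners into the (x, y, w, h) tuple the tracker expects
x, y = min(p1[0], p2[0]), min(p1[1], p2[1])
w, h = abs(p2[0] - p1[0]), abs(p2[1] - p1[1])
initBB = (x, y, w, h)

# tracker.init(frame, initBB)  # initBB now has the same shape as selectROI's result
```

This is exactly the form cv2.selectROI returns, which is why the mouse-selection version works.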
I've been having this weird error after starting to try out "masking" (halfway through an activity given by a lecturer). The lecturer recommended that I create a new solution; however, all three solutions I made produced the same error.
http://puu.sh/1foxu <- Picture of the error
http://pastebin.com/GPsLTjdm <- Pastebin for the code (I used Pastebin because the Panda3D code is indentation-sensitive)
Thank you!
Try moving your box model before reparenting it to its bullet node.
self.world.attachRigidBody(np.node())
model = loader.loadModel('models/box.egg')
model.setPos(-0.5, -0.5, -0.5)  # <- Add this line
model.reparentTo(np)
Adjusting the model position is needed because Bullet shapes assume that the center of the model is at its (0,0,0) coordinate, but in most cases (0,0,0) is actually on the bounds of the model.
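To make that offset concrete, here is a minimal sketch of the computation. The unit-box bounds below are an assumption matching the -0.5 shift above; for a real model you would read its actual bounds (e.g. via the model's tight bounds) instead:

```python
# assumed bounds of a 1x1x1 box whose origin sits at one corner
bounds_min = (0.0, 0.0, 0.0)
bounds_max = (1.0, 1.0, 1.0)

# offset that puts the geometric center of the model on the node's origin
offset = tuple(-(lo + hi) / 2.0 for lo, hi in zip(bounds_min, bounds_max))

# model.setPos(*offset) would then reproduce the (-0.5, -0.5, -0.5) used above
```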
EDIT:
To solve your texture problem try:
model.setTexture(tex, 1)
...instead of...
model.setTexture(tex)
A fragment from the manual:
Normally, you simply pass 1 as the second parameter to setTexture().
Without this override, the texture that is assigned directly at the
Geom level will have precedence over the state change you make at the
model node, and the texture change won't be made.