I've been getting this odd error since I started trying out "masking" (about halfway through an activity set by a lecturer). The lecturer recommended that I create a new solution, but all three solutions I've made since produce the same error.
http://puu.sh/1foxu <- Picture of the error
http://pastebin.com/GPsLTjdm <- Pastebin for the code (used pastebin because Panda3D code is indentation-sensitive)
Thank you!
Try moving your box model before reparenting it to its bullet node.
self.world.attachRigidBody(np.node())
model = loader.loadModel('models/box.egg')
model.setPos(-0.5,-0.5,-0.5) # <- Add this line
model.reparentTo(np)
Adjusting the model position is needed because Bullet shapes assume that (0, 0, 0) is at the center of the model, but in most models (0, 0, 0) is actually at a corner of the model's bounds.
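To see where the -0.5 offset comes from for a unit cube, here is a small plain-Python sketch (independent of Panda3D; the bounds values are an assumption for a typical 1x1x1 box.egg whose origin sits at a corner):

```python
# Sketch: compute the position a model needs so its geometric center
# lands on the parent node's origin, given its axis-aligned bounds.
# Independent of Panda3D; the bounds of a unit box.egg are assumed.

def center_offset(bounds_min, bounds_max):
    """Offset that moves the center of the bounds onto (0, 0, 0)."""
    return tuple(-(lo + hi) / 2.0 for lo, hi in zip(bounds_min, bounds_max))

# A unit box modeled from (0, 0, 0) to (1, 1, 1):
offset = center_offset((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
print(offset)  # (-0.5, -0.5, -0.5) -> matches model.setPos(-0.5, -0.5, -0.5)
```

For a box modeled around its own center the offset comes out as (0, 0, 0), i.e. no repositioning would be needed.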
EDIT:
To solve your texture problem try:
model.setTexture(tex, 1)
...instead of...
model.setTexture(tex)
A fragment from the manual:
Normally, you simply pass 1 as the second parameter to setTexture().
Without this override, the texture that is assigned directly at the
Geom level will have precedence over the state change you make at the
model node, and the texture change won't be made.
I can't get RenderCollision to work, no matter how I try.
The documentation says:
RenderCollision (view_id: int, vtx_layout: VertexLayout, prg: ProgramHandle, render_state: RenderState, depth: int) -> None
Here's my (limited) understanding of what I should pass as parameters to this function:
view_id: can be set from 0 to 255, according to the doc. In my case, it is 0
vtx_layout: the vertex layout used to store the 3D lines
prg: the program (shader) needed to draw the 3D lines
render_state: something I'm supposed to provide using ComputeRenderState (found it here)
depth: something related to the z-depth, I guess?
At this point, I feel I'm not far from using it properly, but I'm having a hard time figuring out the RenderState part.
Anyone been there before?
RenderCollision is a debug function, so it won't "consume" a view_id of its own: you pass it a view_id and it draws into that view.
vtx_layout and prg, as you guessed, handle the rendering of the debug lines (RenderCollision draws the collision shapes as lines).
It usually works this way:
Avoid clearing the view when drawing the debug info
hg.SetViewClear(view_id, hg.CF_None, 0, 1.0, 0)
Set the rect of the current view (the same as your main rendering)
hg.SetViewRect(view_id, 0, 0, screen_size_x, screen_size_y)
Set the camera transformation matrix (the same as your current camera)
hg.SetViewTransform(view_id, view_matrix, projection_matrix)
This is the part you were probably looking for: BM_Opaque lets Harfang know where in the rendering pipeline you specifically want to draw:
render_state = hg.ComputeRenderState(hg.BM_Opaque, hg.DT_Disabled, hg.FC_Disabled)
Final instruction that will draw the collision shapes:
physics.RenderCollision(view_id, vtx_line_layout, line_shader, render_state, 0)
You will find a working example here, I hope it will help:
https://github.com/harfang3d/tutorials-hg2/blob/master/physics_overrides_matrix.py#L69
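The steps above can be collected into a single helper. This is only a sketch: the function and parameter names are mine, and hg/physics are assumed to be the imported harfang module and your physics world from the snippets above:

```python
# Sketch: one helper performing the debug-draw sequence described above.
# `hg` is the imported harfang module and `physics` your physics world;
# the remaining parameter names are assumptions mirroring the snippets.

def draw_collision_debug(hg, physics, view_id,
                         screen_size_x, screen_size_y,
                         view_matrix, projection_matrix,
                         vtx_line_layout, line_shader):
    # don't clear the view, so the lines draw over the main rendering
    hg.SetViewClear(view_id, hg.CF_None, 0, 1.0, 0)
    # same rect as the main rendering
    hg.SetViewRect(view_id, 0, 0, screen_size_x, screen_size_y)
    # same camera transformation as the main rendering
    hg.SetViewTransform(view_id, view_matrix, projection_matrix)
    # draw within the opaque pass
    render_state = hg.ComputeRenderState(hg.BM_Opaque, hg.DT_Disabled, hg.FC_Disabled)
    # draw the collision shapes as 3D lines
    physics.RenderCollision(view_id, vtx_line_layout, line_shader, render_state, 0)
```

Called once per frame after the main scene submission, it overlays the collision shapes onto the current view.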
I'm trying to render a cube (default blender scene) with a camera facing it. I have added a spotlight at the same location as the camera. Spotlight direction also faces towards the cube.
When I render, location changes take effect for both the camera and the spotlight, but rotations don't. The scene context update is deprecated now. I have seen other update answers, but they don't seem to help.
I have done some workarounds and they seem to work, but this is not the correct way.
If I render the same set of commands twice (in a loop), I get the correct render.
If I run the script from Blender's Python console (only once), I get the correct render. But if the same code is run as a script inside Blender, the render is again wrong.
import numpy as np
import bpy
def look_at(obj_camera, point):
    loc_camera = obj_camera.matrix_world.to_translation()
    direction = point - loc_camera
    # point the camera's -Z axis along `direction`, with Y up
    rot_quat = direction.to_track_quat('-Z', 'Y')
    obj_camera.rotation_euler = rot_quat.to_euler()
data_path='some folder'
locs=np.array([ 0.00000000e+00, -1.00000000e+01, 3.00000011e-06]) # Assume this; I actually have a big array of positions where the camera and spotlight need to be placed and then made to look towards the cube
obj_camera = bpy.data.objects["Camera"]
obj_other = bpy.data.objects["Cube"]
bpy.data.lights['Light'].type='SPOT'
obj_light=bpy.data.objects['Light']
loc=locs
i=0
##### If I run the following lines twice, the correct render is obtained.
obj_camera.location = loc
obj_light.location= obj_camera.location
look_at(obj_light, obj_other.matrix_world.to_translation())
look_at(obj_camera, obj_other.matrix_world.to_translation())
bpy.context.scene.render.filepath = data_path+'image_{}.png'.format(i)
bpy.ops.render.render(write_still = True)
You might need to call bpy.context.view_layer.update() (bpy.context.scene.update() in versions older than Blender 2.8) after changing the camera orientation with obj_camera.rotation_euler = rot_quat.to_euler(). Also make sure that the layers that are going to be rendered are active when calling update() (see https://blender.stackexchange.com/questions/104958/object-locations-not-updating-before-render-python).
(A bit late ;-) but this was one of the rare questions I found for a related issue.)
I use OCC with Python to visualize the .igs and .stl formats. In the .stl file I have a mesh on my model, and I want to know which vertex of this mesh was clicked, or at least to get some kind of id. I see that the model I choose is automatically highlighted without any settings, so I guess there is a way to do this, but I couldn't find any information about it.
Okay, found it. In case someone else will need it:
display = self.occWidget._display
display.SetSelectionModeVertex() # This is the required function
display.register_select_callback(recognize_clicked)
where recognize_clicked is
def recognize_clicked(shp, *kwargs):
    """Called every time a shape is clicked in the 3d view."""
    for shape in shp:
        print("Shape selected: ", shape)
Face selection - SetSelectionModeFace()
Vertex selection - SetSelectionModeVertex()
Edge selection - SetSelectionModeEdge()
Shape selection - SetSelectionModeShape()
Neutral (default) selection - SetSelectionModeNeutral()
Those are all the modes I've found in other examples. If you find more, please share the resource in a comment.
I'm trying to implement an object tracker using OpenCV, and I'm new to Python. I'll call it from C# code via IronPython. What I'm trying to do is set a custom rectangle as a parameter for the tracker, instead of selecting it with the mouse.
(Tracker code is the common example you can find on the internet)
Here is the problematic part:
This is how I set and create a rectangle
initBB = cv2.rectangle(frame ,(154, 278),(173,183), (0, 255, 00),1)
This is Tracker's init method
tracker.init(frame, initBB)
and this is the error
SystemError: new style getargs format but argument is not a tuple
If I wanted to use the "normal" way, initBB would be set like:
initBB = cv2.selectROI("Frame", frame, fromCenter=False,
showCrosshair=False)
I can't see which part I'm doing wrong. Am I setting the wrong type of object to initBB, or setting it in the wrong way?
Thanks! Have a nice day!
Your error comes from a misunderstanding of what cv2.rectangle does.
It doesn't return a rectangle as you imagine. It is actually a drawing function: it draws the rectangle in place on the image you pass as an argument, rather than returning a rectangle object.
A rectangle is just a tuple in Python with the following coordinates: (start_col, start_row, width, height). You can create it without using an OpenCV function.
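For example, the two corner points passed to cv2.rectangle above can be converted into the (x, y, width, height) tuple that tracker.init() expects with a small helper (a sketch; the helper name is mine):

```python
# Sketch: turn two opposite corner points, as passed to cv2.rectangle,
# into the (x, y, width, height) tuple a tracker's init() expects.
def corners_to_bbox(pt1, pt2):
    x = min(pt1[0], pt2[0])
    y = min(pt1[1], pt2[1])
    w = abs(pt2[0] - pt1[0])
    h = abs(pt2[1] - pt1[1])
    return (x, y, w, h)

# The corners from the question:
initBB = corners_to_bbox((154, 278), (173, 183))
print(initBB)  # (154, 183, 19, 95)
# then: tracker.init(frame, initBB)
```

Note that the y coordinate is the smaller of the two row values, since (154, 278) and (173, 183) are opposite corners, not top-left and bottom-right.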
I am working on real-time mapping of a model with user data obtained from a Kinect.
I am able to get access to the individual bones using bge.types.BL_ArmatureObject().channels,
which gives the list of bones, but I am not able to change the position of a bone. I tried using rotation_euler to give it some rotation, but it had no effect. Please tell me how to do it.
Maybe a little late, but for Blender >= 2.5 this should do the trick:
# Get the whole bge scene
scene = bge.logic.getCurrentScene()
# Helper vars for convenience
source = scene.objects
# Get the whole Armature
main_arm = source.get('NAME OF YOUR ARMATURE')
main_arm.channels['NAME OF THE BONE YOU WANT TO ROTATE'].joint_rotation = [x, y, z] # x, y, z = float rotation values (radians)
main_arm.update()
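The rotate-and-update step above can be wrapped in a small helper (a sketch; the function name is mine, and armature is assumed to be a bge BL_ArmatureObject obtained as shown above):

```python
# Sketch: rotate a named bone of a BGE armature object.
# `armature` is assumed to be a bge BL_ArmatureObject as obtained above;
# the helper name is mine.
def rotate_bone(armature, bone_name, x, y, z):
    # joint_rotation is an XYZ rotation in radians
    armature.channels[bone_name].joint_rotation = [x, y, z]
    # push the new pose to the armature
    armature.update()
```

Calling update() after setting joint_rotation is what actually applies the pose; setting joint_rotation alone has no visible effect.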
I also wrote this down in an extensive tutorial, starting here: http://www.warp1337.com/content/blender-robotics-part-1-introduction-and-modelling