Setting a custom ROI for an OpenCV bounding box in Python

I'm trying to implement an object tracker using OpenCV, and I'm new to Python. I'll call it from C# code via IronPython. What I want to do is pass a custom rectangle to the tracker as a parameter instead of selecting it with the mouse.
(The tracker code is the common example you can find on the internet.)
Here is the problematic part:
This is how I create and set the rectangle:
initBB = cv2.rectangle(frame, (154, 278), (173, 183), (0, 255, 0), 1)
This is the tracker's init method:
tracker.init(frame, initBB)
and this is the error:
SystemError: new style getargs format but argument is not a tuple
If I used the "normal" way, initBB would be set like this:
initBB = cv2.selectROI("Frame", frame, fromCenter=False,
showCrosshair=False)
I can't see which part I'm doing wrong: am I assigning the wrong type of object to initBB, or setting it in the wrong way?
Thanks! Have a nice day!

Your error comes from a misunderstanding of what cv2.rectangle does.
It doesn't return a rectangle the way you imagine. It is actually a drawing function: it draws the rectangle on the image you pass as an argument and returns the drawn-on image (None in older OpenCV versions), not a description of the rectangle.
The bounding box the tracker expects is just a Python tuple of the form (start_col, start_row, width, height). You can create it without using any OpenCV function.
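For example, a minimal sketch, assuming the KCF tracker from opencv-contrib and a frame already read from your video source (the (x, y, w, h) values are derived from the two corners (154, 278) and (173, 183) in your call; adjust them to your actual region):
# Bounding box as a plain (x, y, width, height) tuple: top-left corner plus size.
initBB = (154, 183, 19, 95)

tracker = cv2.TrackerKCF_create()   # any of the cv2 tracker types works the same way
ok = tracker.init(frame, initBB)
No drawing call is involved; cv2.rectangle is only useful afterwards, to visualize the box the tracker reports on each frame.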

Display Harfang 3D physics debug with RenderCollision?

I can't get RenderCollision to work, no matter what I try.
The documentation says:
RenderCollision (view_id: int, vtx_layout: VertexLayout, prg: ProgramHandle, render_state: RenderState, depth: int) -> None
Here's my (limited) understanding of what I should pass as parameters to this function:
view_id: can be set from 0 to 255, according to the doc; in my case it is 0
vtx_layout: the vertex layout used to store the 3D lines
prg (ProgramHandle): the program (shader) needed to draw the 3D lines
render_state (RenderState): something I'm supposed to provide using ComputeRenderState (found it here)
depth: something related to the z-depth, I guess?
At this point I feel I'm not far from using it properly, but I'm having a hard time figuring out the RenderState part.
Anyone been there before?
RenderCollision is a debug function, so it won't "consume" a view_id of its own: you pass it your view_id and it writes into that current view.
vtx_layout and prg, as you guessed, handle the rendering of the debug lines (RenderCollision uses lines to draw the collision shapes).
It usually works this way:
Avoid clearing the view when drawing the debug info:
hg.SetViewClear(view_id, hg.CF_None, 0, 1.0, 0)
Set the rect of the current view (the same as your main rendering):
hg.SetViewRect(view_id, 0, 0, screen_size_x, screen_size_y)
Set the camera transformation matrix (the same as your current camera):
hg.SetViewTransform(view_id, view_matrix, projection_matrix)
This is the part you were probably missing: BM_Opaque lets Harfang know where in the rendering pipeline you specifically want to draw:
render_state = hg.ComputeRenderState(hg.BM_Opaque, hg.DT_Disabled, hg.FC_Disabled)
The final instruction draws the collision shapes:
physics.RenderCollision(view_id, vtx_line_layout, line_shader, render_state, 0)
You will find a working example here; I hope it helps:
https://github.com/harfang3d/tutorials-hg2/blob/master/physics_overrides_matrix.py#L69
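Putting those calls together, a minimal sketch (view_id, screen_size_x/screen_size_y, view_matrix, projection_matrix, physics, and the vtx_line_layout/line_shader pair used for debug line drawing are assumed to come from your existing setup, as in the linked tutorial):
import harfang as hg

# Keep the main pass visible: don't clear the view before drawing the debug lines.
hg.SetViewClear(view_id, hg.CF_None, 0, 1.0, 0)
# Use the same rect and camera as the main rendering so the shapes line up.
hg.SetViewRect(view_id, 0, 0, screen_size_x, screen_size_y)
hg.SetViewTransform(view_id, view_matrix, projection_matrix)

# BM_Opaque places the debug draw at the opaque stage of the pipeline.
render_state = hg.ComputeRenderState(hg.BM_Opaque, hg.DT_Disabled, hg.FC_Disabled)
physics.RenderCollision(view_id, vtx_line_layout, line_shader, render_state, 0)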

How to select part.Surface in Abaqus using a Python script

How do I select surfaces on a part in Abaqus? I have tried:
tubePart.surface(faces=tubePart.faces[4:8], name='innerFaces')
but it keeps saying that the Part object has no attribute surface.
First, you should create a new surface by calling the Surface() method (not surface()), i.e.
tubePart.Surface(...)
Secondly, the keyword argument must be side1Faces instead of faces (thanks to agentp for the comment). Thus, the final piece of code should look like this:
tubePart.Surface(side1Faces=tubePart.faces[4:8], name='innerFaces')
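For context, a minimal sketch of how this is usually reached from the model database (the 'Model-1' and 'Tube' names are assumptions; substitute your own):
from abaqus import *
from abaqusConstants import *

tubePart = mdb.models['Model-1'].parts['Tube']   # hypothetical model and part names
# Surface() (capital S) is the constructor; side1Faces picks which side of the faces to include.
tubePart.Surface(side1Faces=tubePart.faces[4:8], name='innerFaces')
Note that selecting faces by index ([4:8]) is fragile if the geometry changes; tubePart.faces.findAt(...) with a point on each face is usually a more robust way to build the sequence.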

obj.rotate on a specific object in VPython

This is my little Python program using VPython.
I want to rotate a box, using the box's own axes and not the scene's.
So, for example, if it is already rotated to the right and I then want to get the "nose" down, I want to do that from the box's point of view...
Imagine I was a jet ;)
BTW: I'm on Python 3.
from visual import *

a = box(size=(5, 1, 3), axis=(1, 0, 0))

def tasten():
    "Looooopings"
    if scene.kb.keys:                               # any key pressed?
        druck = scene.kb.getkey()                   # read it from the buffer
        if druck == 'left':
            a.rotate(angle=-1/100, axis=(1, 0, 0))  # turn left
        if druck == 'right':
            a.rotate(angle=1/100, axis=(1, 0, 0))   # turn right
        if druck == 'up':
            a.rotate(angle=-1, axis=(0, 0, 1))      # nose down

while True:
    tasten()
I would recommend creating a box class that stores its own orientation, as martineau suggests. The class keeps vectors describing its orientation and exposes methods that rotate the box around those local axes rather than the scene's.
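A minimal sketch of that idea, assuming classic VPython's visual module (the Jet name and the roll/pitch split are my own illustration, and it relies on rotate() keeping axis and up in sync, which recent VPython versions do):
from visual import *

class Jet:
    """Box rotated around its own local axes instead of the scene's."""
    def __init__(self):
        self.body = box(size=(5, 1, 3), axis=(1, 0, 0), up=(0, 1, 0))

    def roll(self, angle):
        # spin around the box's own forward axis
        self.body.rotate(angle=angle, axis=self.body.axis)

    def pitch(self, angle):
        # nose up/down: rotate around the "wing" axis, perpendicular to forward and up
        wing = cross(self.body.axis, self.body.up)
        self.body.rotate(angle=angle, axis=wing)

jet = Jet()
jet.roll(-0.01)   # roll left a little
jet.pitch(-0.01)  # nose down, relative to the box's current orientation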

Rainbow box when running a program in Panda3D using VS2010

I've been having this weird error since I started trying out "masking" (halfway through an activity given by a lecturer). The lecturer recommended that I create a new solution, but after making three new solutions I still got the same error.
http://puu.sh/1foxu <- picture of the error
http://pastebin.com/GPsLTjdm <- pastebin of the code (I used pastebin because the Python code is indentation-sensitive)
Thank you!
Try moving your box model before reparenting it to its Bullet node:
self.world.attachRigidBody(np.node())
model = loader.loadModel('models/box.egg')
model.setPos(-0.5,-0.5,-0.5) # <- Add this line
model.reparentTo(np)
Adjusting the model position is needed because Bullet shapes assume that the center of the model is at its (0,0,0), but for most models (0,0,0) actually sits at the edge of the model's bounds.
EDIT:
To solve your texture problem try:
model.setTexture(tex, 1)
...instead of...
model.setTexture(tex)
A fragment from the manual:
Normally, you simply pass 1 as the second parameter to setTexture().
Without this override, the texture that is assigned directly at the
Geom level will have precedence over the state change you make at the
model node, and the texture change won't be made.
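For reference, a minimal sketch of how the pieces fit together inside a ShowBase app (the box size, model path and texture path are assumptions; adapt them to your assets, and self.world is your BulletWorld):
from panda3d.core import Vec3
from panda3d.bullet import BulletRigidBodyNode, BulletBoxShape

# render and loader are the ShowBase builtins.
shape = BulletBoxShape(Vec3(0.5, 0.5, 0.5))        # half-extents of a 1x1x1 box
node = BulletRigidBodyNode('Box')
node.addShape(shape)
np = render.attachNewNode(node)
self.world.attachRigidBody(np.node())

model = loader.loadModel('models/box.egg')
model.setPos(-0.5, -0.5, -0.5)                     # recenter: box.egg's origin is at a corner
model.reparentTo(np)

tex = loader.loadTexture('maps/envir-ground.jpg')  # hypothetical texture path
model.setTexture(tex, 1)                           # override so it wins over the Geom-level texture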

About the optional argument in Canvas in PyS60

In Python for S60 (PyS60), blit() is defined as:
blit(image [, target=(0,0), source=((0,0), image.size), mask=None, scale=0 ])
In the optional parameter source, what is the significance of image.size?
My guess is that blit() automatically uses image.size when you don't specify anything else, and thus blits the whole image from (0,0) to (width, height).
If you only want a smaller part of the image copied, you can use the source parameter to define a different rectangle to copy.
Think of it this way: in source=((0,0), image.size), the (0,0) is the top-left corner and image.size is the bottom-right corner; you blit whatever lies between those two points.
The same applies to target, by the way.
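A small sketch of that idea, assuming the appuifw Canvas and graphics Image classes from PyS60 (the image path is hypothetical):
import appuifw, graphics

canvas = appuifw.Canvas()
appuifw.app.body = canvas

img = graphics.Image.open(u'e:\\pic.png')          # hypothetical path

# Copy the whole image to the top-left corner; source defaults to ((0,0), img.size).
canvas.blit(img, target=(0, 0))

# Copy only the 50x50 sub-rectangle between the corners (10,10) and (60,60).
canvas.blit(img, target=(0, 0), source=((10, 10), (60, 60)))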
