I want to make CT labeling software for target detection, but I don't know how to implement the rectangle-drawing function.
The loaded image has three views. After labeling, each view should display a rectangle corresponding to that view, and when the rectangle is resized in one view, the rectangles in the other two views should update accordingly,
like Mimics.
I have tried vtkBorderWidget and vtkBoxWidget, but couldn't get them to work.
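For reference, a rough, untested sketch of how a vtkBorderWidget is typically wired up on one view; `interactor` is a placeholder for the render window interactor of one of the three views:

```python
import vtk

# Minimal sketch: "interactor" is assumed to be the
# vtkRenderWindowInteractor of one of the three views.
border = vtk.vtkBorderWidget()
border.SetInteractor(interactor)
border.CreateDefaultRepresentation()

def on_interaction(widget, event):
    rep = widget.GetBorderRepresentation()
    # Position/Position2 are in normalized viewport coordinates [0, 1]
    x0, y0 = rep.GetPosition()
    w, h = rep.GetPosition2()
    # ...convert these to image/world coordinates here and update the
    # rectangles shown in the other two views accordingly.

border.AddObserver("InteractionEvent", on_interaction)
border.On()
```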
I need a way to detect the size and position of the color blob representing the central object in a photo.
I have prepared an image that should explain what I am after. It was made in Photoshop, so it's just a handmade illustration. I need to simplify the object in order to remove reflections and small details, and then find the coordinates of its outer box so I can locate it in the image.
The object can have any color and will always stand out from the background. I am interested in the object that covers the central pixel of the image.
How can this be done in Python using OpenCV?
Thank you.
Original image:
Simplified color-blob image:
Needed box:
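One hedged way to approach this with OpenCV: blur the photo to suppress reflections and small details, flood-fill from the central pixel, then take the bounding box of the filled region. The file name, blur size and tolerances below are placeholders you would need to tune:

```python
import cv2
import numpy as np

# Rough sketch, not a definitive solution.
img = cv2.imread("photo.jpg")
h, w = img.shape[:2]

# Simplify the object: a strong blur suppresses reflections and small details
# (cv2.pyrMeanShiftFiltering is another option for flatter color regions).
smooth = cv2.GaussianBlur(img, (21, 21), 0)

# Flood fill from the central pixel into a mask (2 px larger than the image).
mask = np.zeros((h + 2, w + 2), np.uint8)
flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)
cv2.floodFill(smooth, mask, (w // 2, h // 2), (0, 0, 0),
              loDiff=(20, 20, 20), upDiff=(20, 20, 20), flags=flags)

# Bounding box of the filled blob (drop the 1 px mask border first).
x, y, bw, bh = cv2.boundingRect(mask[1:-1, 1:-1])
print(x, y, bw, bh)
```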
I want to fit a camera view to a mesh from a fixed point. See the attached image.
Example
So I need to adjust the camera rotation, focal length and frame width/height.
What is the best way to do this with Python?
What you're asking is relatively complex: you're adjusting multiple camera properties to frame an object.
I recommend you decompose the problem into parts and ignore focal length altogether. Simply transform the camera so it frames the object. You can then add a supplementary step that modifies the width and height of the camera to frame it tightly.
The gist of what you want to do is as follows:
get bounding box of object
get camera aspect ratio
get active viewport representation
get camera matrix based on object bounding box and corresponding camera aspect ratio mapped from active viewport
apply matrix to camera
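For the first two steps, a rough sketch using maya.cmds rather than the API mentioned below (the object and camera names are placeholders) might look like this:

```python
import maya.cmds as cmds

# Hedged sketch of the first two steps only; "pSphere1" and "camera1"
# are placeholder names.
# World-space bounding box: [xmin, ymin, zmin, xmax, ymax, zmax]
bbox = cmds.exactWorldBoundingBox("pSphere1")

# Aspect ratio of the render camera
aspect = cmds.camera("camera1", query=True, aspectRatio=True)

print(bbox, aspect)
```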
This will be much easier if you're familiar with the OpenMaya API. The OpenMayaUI.M3dView and the OpenMaya.MFnCamera classes should get you started.
https://help.autodesk.com/view/MAYAUL/2019/ENU/?guid=__py_ref_class_open_maya_u_i_1_1_m3d_view_html
https://help.autodesk.com/view/MAYAUL/2019/ENU/?guid=__py_ref_class_open_maya_1_1_m_fn_camera_html
If you're unfamiliar with the API, then dig through the MEL scripts to see how the FrameSelectedWithoutChildren runtime command (the F key in the viewport) works, and use that to automate the process.
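If that simpler route is enough, a minimal sketch that just drives Maya's own framing logic from Python (object and camera names are placeholders) could look like this:

```python
import maya.cmds as cmds
import maya.mel as mel

# Placeholder names; both calls below are standard Maya commands.
cmds.select("pSphere1", replace=True)

# Option 1: the runtime command bound to the F key
mel.eval("FrameSelectedWithoutChildren;")

# Option 2: frame the selection in a specific camera
cmds.viewFit("camera1", fitFactor=0.95)
```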
I have a question regarding screen regions or possibly mouse/cursor coordinates in x11vnc.
I am trying to create all possible mouse positions using an image within an image.
The primary image will always be 765 by 503 pixels. The secondary image is unknown until I figure out how to extract the secondary region in an easily reproducible manner. That is, I need some way to grab, somewhat accurately, the secondary image that I want, and then extract its x,y position relative to the primary image, so that the bottom-left corner of the secondary image is not at 0,0 but at the x,y values it has within the main image. This is only meant to help create accurate coordinates; actual results may differ.
I know the image will be there because I am using vncdotool to expect the image, and then perform an operation once the image is found.
Note: I am not sure what rexpect within vncdotool does.
Using x11vnc I have shared a single application, so all coordinates are needed; I just need to figure out a way to map image(s) to coordinates once vncdotool's expect finds the secondary image.
This GitHub repo solves the problem:
Using its simple GUI, you can draw the rectangle for the bounding box on the image and save an XML file with the x_min, y_min, x_max, y_max pixel coordinates, which can then be translated into mouse/cursor coordinates.
https://github.com/tzutalin/labelImg
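As a hedged example, labelImg writes Pascal VOC style XML, so pulling the box out and turning it into a cursor position could look roughly like this (the file name is a placeholder):

```python
import xml.etree.ElementTree as ET

# "region.xml" is a placeholder; labelImg's origin is the image's top-left corner.
root = ET.parse("region.xml").getroot()
box = root.find("object/bndbox")
x_min = int(box.find("xmin").text)
y_min = int(box.find("ymin").text)
x_max = int(box.find("xmax").text)
y_max = int(box.find("ymax").text)

# e.g. click the centre of the box in the 765x503 primary image
cx = (x_min + x_max) // 2
cy = (y_min + y_max) // 2
print(cx, cy)
```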
I'm implementing the boundary fill algorithm for polygons in Python. How do I get and set the color of a pixel?
I'm using the graphics.py file.
Zelle graphics provides methods to manipulate the pixels of images, as documented in the code:
The library also provides a very simple class for pixel-based image manipulation, Pixmap. A pixmap can be loaded from a file and displayed using an Image object. Both getPixel and setPixel methods are provided for manipulating the image.
But not for higher-level objects like polygons.
This answer to Get color of coordinate of figure drawn with Python Zelle graphics shows how to get the fill color of an object like a polygon located at a given (x, y) coordinate using the tkinter underpinnings of Zelle graphics. I doubt this technique can be used to set the color of a pixel of a polygon, however.
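For the image route mentioned above, a rough, untested sketch using the Image getPixel/setPixel methods (the file name and colors are placeholders) might look like this:

```python
from graphics import GraphWin, Image, Point, color_rgb

# "scan.gif" is a placeholder image file.
win = GraphWin("boundary fill", 400, 400)
img = Image(Point(200, 200), "scan.gif")
img.draw(win)

r, g, b = img.getPixel(10, 10)               # read a pixel as [r, g, b]
img.setPixel(10, 10, color_rgb(255, 0, 0))   # write a pixel

def boundary_fill(image, x, y, fill, boundary):
    """Iterative boundary fill; fill and boundary are (r, g, b) tuples."""
    stack = [(x, y)]
    while stack:
        px, py = stack.pop()
        if not (0 <= px < image.getWidth() and 0 <= py < image.getHeight()):
            continue
        pr, pg, pb = image.getPixel(px, py)
        if (pr, pg, pb) in (boundary, fill):
            continue
        image.setPixel(px, py, color_rgb(*fill))
        stack.extend([(px + 1, py), (px - 1, py), (px, py + 1), (px, py - 1)])
```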
I have an image that represents the elevation of some area, but the drone that captured it didn't necessarily fly in a straight line (although the image is always rectangular). I also have GPS coordinates recorded every 20 cm along the way.
How can I "bend" this rectangular image (curve/mosaic) so that it represents the curved path that the drone actually flew? (in Python)
I haven't managed to write any code, as I have no idea what this "warping" of the image is called. Please treat the attached image as the wanted end state, and normal horizontal letters as the start state.
There might be a better answer, but I think you could use the remapping functions of OpenCV for that.
The process would look like this:
From your data, derive your warping function. This will be a function that maps (x,y) pixel coordinates in your input image I to (x,y) pixel coordinates in your output image O
Compute the size needed for the output image to hold your whole warped image, and create it
Create two maps, mapx and mapy, which give the pixel coordinates in I for every pixel in O (that is, in a sense, the inverse of your warping function)
Apply OpenCV's remap function (which is better than applying your maps by hand because it interpolates when the output image is larger than the input)
Depending on your warping function, it might be very simple, or close to impossible to apply this technique.
You can find an example with a super simple warping function here: https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/remap/remap.html
More complex examples can be looked at in OpenCV doc and code when looking at distortion and rectification of camera images.
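As a hedged illustration of steps 2-4 with a toy warping function (a horizontal sine wave rather than your GPS-derived path; the file name is a placeholder):

```python
import cv2
import numpy as np

src = cv2.imread("strip.png")
h, w = src.shape[:2]

# Output canvas large enough to hold the bent strip
amp = 40                      # maximum vertical displacement in pixels
out_h, out_w = h + 2 * amp, w

# For every output pixel, the maps give the source pixel to sample
map_x, map_y = np.meshgrid(np.arange(out_w, dtype=np.float32),
                           np.arange(out_h, dtype=np.float32))
map_y = (map_y - amp - amp * np.sin(2 * np.pi * map_x / out_w)).astype(np.float32)

warped = cv2.remap(src, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                   borderMode=cv2.BORDER_CONSTANT)
cv2.imwrite("warped.png", warped)
```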