I have a Canon VB-H45 and I know how to access the camera and capture frames with cv2, but I would also like to control its pan/tilt rotation and zoom.
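One thing worth noting: cv2 only reads video frames; pan/tilt/zoom on a network PTZ camera like this is normally driven over the camera's own HTTP control interface (or ONVIF), separately from the video stream. A minimal sketch of building such a control request follows; the endpoint path and parameter names here are assumptions for illustration, so check the VB-H45's control protocol documentation for the real ones.

```python
# Sketch: drive PTZ over the camera's HTTP control interface, not cv2.
# The endpoint path and parameter names are HYPOTHETICAL placeholders.
from urllib.parse import urlencode

def build_ptz_url(host, pan=None, tilt=None, zoom=None,
                  endpoint="/-wvhttp-01-/control.cgi"):  # hypothetical path
    """Build a PTZ control URL; values are in camera units, not degrees."""
    params = {k: v for k, v in
              {"pan": pan, "tilt": tilt, "zoom": zoom}.items()
              if v is not None}
    return f"http://{host}{endpoint}?{urlencode(params)}"

url = build_ptz_url("192.168.0.90", pan=1500, zoom=200)
# You would then send it with e.g. requests.get(url, auth=(user, password))
```

You can keep grabbing frames with cv2 in parallel; the control requests and the video capture are independent connections.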
I want to work on a personal project that involves creating a 3D animated environment.
Is there a way to convert camera input into 3D models with Python code?
If so, how many cameras do I need?
Yes, you need stereo or trinocular cameras and OpenCV.
https://docs.opencv.org/master/dd/d53/tutorial_py_depthmap.html
Or, you can make a 3D scanner from one camera and photogrammetry software.
https://hackaday.com/2019/04/07/get-great-3d-scans-with-open-photogrammetry/
I want to fit a camera-view to a mesh from a fixed point. See attached image.
Example
So I need to adjust the camera rotation, focal length and frame width/height.
What is the best way to do this with Python?
What you're asking is a fairly involved operation: you're adjusting multiple camera properties at once to frame an object.
I recommend you decompose the problem into parts and ignore focal length altogether. Simply transform the camera so that it frames the object. You can then add a supplementary step that adjusts the camera's width and height to frame it tightly.
The gist of what you want to do is as follows:
get bounding box of object
get camera aspect ratio
get active viewport representation
get camera matrix based on object bounding box and corresponding camera aspect ratio mapped from active viewport
apply matrix to camera
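The geometry behind the "frame the object" step can be sketched independently of Maya: given the object's bounding box and the camera's field of view, the camera must sit at distance d = r / tan(fov/2) from the box centre to fit a bounding sphere of radius r. This is only the math core, with the Maya-specific parts (OpenMaya.MFnCamera, the viewport query) omitted:

```python
# Distance at which a camera with the given FOV fits an object's
# bounding sphere. Uses the box diagonal as a conservative radius.
import math

def fit_distance(bbox_min, bbox_max, fov_deg):
    """Camera-to-centre distance that fits the box's bounding sphere."""
    r = 0.5 * math.sqrt(sum((hi - lo) ** 2
                            for lo, hi in zip(bbox_min, bbox_max)))
    return r / math.tan(math.radians(fov_deg) / 2.0)

# Unit cube with a 90-degree FOV: radius is sqrt(3)/2 and tan(45) = 1,
# so the camera sits sqrt(3)/2 units from the cube's centre.
d = fit_distance((0, 0, 0), (1, 1, 1), 90.0)
```

In practice you would evaluate this with the smaller of the horizontal and vertical FOV (derived from the camera's aspect ratio) so the object fits in both directions, then build the camera matrix from the resulting position.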
This will be much easier if you're familiar with the OpenMaya API. The OpenMayaUI.M3dView and the OpenMaya.MFnCamera classes should get you started.
https://help.autodesk.com/view/MAYAUL/2019/ENU/?guid=__py_ref_class_open_maya_u_i_1_1_m3d_view_html
https://help.autodesk.com/view/MAYAUL/2019/ENU/?guid=__py_ref_class_open_maya_1_1_m_fn_camera_html
If you're unfamiliar with the API, then scour the MEL scripts to see how the FrameSelectedWithoutChildren runtime command (the F key in the viewport) works, and use that to automate the process.
I have multiple cameras that are closely located to each other, looking at the same scene.
I can calibrate all of them (at once; currently using the OpenCV algorithm).
What I now want to do is overlay them. For example:
Let one camera be a LIDAR depth camera, the second a grayscale camera, and the third an infrared camera. I want to overlay the grayscale scene, as an image, with the depth and infrared information placed on the correct pixels (similar to the depth/grayscale overlays that many 3D cameras provide).
The cameras have different fields of view and resolutions.
I appreciate any hint or comment :-)
Cheers.
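A sketch of the usual registration recipe, assuming your calibration gives you each camera's intrinsic matrix K plus the rotation R and translation t from the depth camera to the grayscale camera: unproject each depth pixel to a 3D point, transform it into the grayscale camera's frame, and reproject. The matrices and pixel values below are illustrative, not from a real rig:

```python
# Map a depth pixel into another camera's image via unproject/transform/
# reproject. K_depth/K_gray are 3x3 intrinsics; R, t are the relative pose.
import numpy as np

def register_depth_pixel(u, v, z, K_depth, K_gray, R, t):
    """Map depth pixel (u, v) with depth z into grayscale pixel coords."""
    p = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])  # 3D, depth frame
    q = R @ p + t                                           # 3D, gray frame
    uvw = K_gray @ q
    return uvw[:2] / uvw[2]                                 # perspective divide

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
# Sanity check: identical cameras with no relative motion map a pixel
# onto itself, regardless of depth.
print(register_depth_pixel(100, 80, 2.0, K, K, np.eye(3), np.zeros(3)))
```

Doing this for every depth pixel (vectorised over the whole image) gives you the depth channel resampled into the grayscale image, and the same recipe applies to the infrared camera; differing resolutions and fields of view are absorbed by the two K matrices.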
I am trying to create a custom VR headset that displays a live feed from a remote camera. For the view to be clear through the VR headset's lenses, I need to duplicate the camera image for both eyes and apply barrel distortion to each copy (see attached picture) to offset the distortion introduced by the lenses. Duplicating the image should be simple, but I do not know how to apply the distortion.
Most of the solutions I've found online are built in to some sort of game engine or VR SDK, but I don't want to use a game engine since I'm only processing a raw camera feed.
I am planning on using OpenCV to do this and I'm hoping to get at least 30fps at 1080p (hardware is an NVIDIA Jetson Nano with a CSI camera). What would be the best way to go about doing this?
Is there any predefined code for this, or do I have to write my own?
Also, I do not have the camera parameters; I only have images taken with a fisheye lens, and now I have to flatten them.
OpenCV provides a module for working with fisheye images: https://docs.opencv.org/3.4/db/d58/group__calib3d__fisheye.html
The page includes a tutorial with an example application.
Keep in mind that your task might be a bit hard to achieve since the problem is under-determined. If you have some cues in the image (such as straight lines), that might help. Otherwise, you should seek a way of getting more information about the lens. If it's a known lens type, you might find calibration info online. Also, some images might have the lens used to capture them in the EXIF data.