Create a projected raster from an array with Python GDAL

I've got a processed image array from a UAV and want to write it to a projected TIFF. I'm familiar with the array-to-TIFF writing process with Python GDAL, but I'm not sure how to project the result correctly.
I have the central GPS coordinate, the UAV height, and the pixel size of the image array, and the array is north-up. The original UAV image's metadata cannot be recognized by GDAL, so I have to extract those values and then rearrange them to project the array.
Many thanks!

This question is too vague. The process you need to look into is called "ortho-rectification". You should read about the process and then break it down into stages. Then, figure out the specific pieces you have and don't have.
Fundamentally, in order to create an ortho-image, you need a digital elevation model (DEM), intrinsic camera parameters, and extrinsic parameters (pose). You can find documentation on the algorithm online or in a standard Remote Sensing book.
Another option is if your camera provides Rational Polynomial Coefficients (RPCs), which I assume is not the case here.
Generic Amazon Search of Remote Sensing Books
https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=remote+sensing

Related

How to get homography matrix from gps information

I am working on a GPS-denied UAV localization project. I am reading the paper GPS-Denied UAV Localization using Pre-existing Satellite Imagery. In this paper, they try to align a UAV frame sequentially with a satellite map using a homography. The first homography is estimated using the GPS information stored in the first UAV frame. In the code, I don't see any information about this part. I am wondering if someone can explain it or point me to a reference that can help.

Is there a way to take a large group of 2D images and turn them into a 3D image?

I am currently working on a summer research project and we have generated 360 slices of a tumor. I now need to compile (if that's the right word) these images into one large 3D image. Is there a way to do this with either a Python module or an outside source? I would prefer free software if possible.
Perhaps via matplotlib, though it may require some preprocessing:
https://www.youtube.com/watch?v=5E5mVVsrwZw
In your case, the z axis (3rd dimension) would be given by your stack of images. Before proceeding, though, you may need to extract the shape of the object you want to reconstruct. For instance, any one of your 2D images presumably contains an RGB value per pixel, but if you want to plot something like the skull in the linked video, you would first need to extract the object's borders from each frame (2D shape) and then plot the series. In any case, the processing depends on how your information is encoded; perhaps it is sufficient to simply plot the series of images.
Some useful links I found:
https://www.researchgate.net/post/How_to_reconstruct_3D_images_from_two_or_four_2D_images
Python: 3D contour from a 2D image - pylab and contourf
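The stacking step itself is a one-liner with numpy; the slice files and their depth ordering are assumptions here, shown with stand-in data:

```python
import numpy as np

# Hypothetical: 360 grayscale slices of the tumor, ordered by depth.
# With real data you would load them from disk, e.g. with imageio:
#   slices = [imageio.imread(f"slice_{i:03d}.png") for i in range(360)]
slices = [np.random.rand(64, 64) for _ in range(360)]  # stand-in data

# Stack along a new first axis to form the volume: (z, y, x)
volume = np.stack(slices, axis=0)
print(volume.shape)  # (360, 64, 64)
```

The resulting volume can then be rendered, e.g. as an isosurface via `skimage.measure.marching_cubes` plus matplotlib, or browsed slice-by-slice; both are free.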

3D point cloud from continuous video stream of two (stereo) cameras

I have continuous videos taken from two cameras placed at the upper-right and upper-left corners of my car's windshield (please note that they are not fixed to each other, and I aligned them only approximately straight). Now I am trying to make a 3D point cloud out of that and have no idea how to do it. I searched the internet a lot and still couldn't find any useful info. Can you send me some links or hints on how I can make that work in Python?
You can try the stereo matching and point cloud generation implementation in the OpenCV library. Start with this short Python sample.
I suppose that you have two independent video streams that are not exactly synchronized. You will have to synchronize them first, because the linked sample expects two images, not videos. Extract images from the videos using OpenCV or ffmpeg and find an image pair that shares exactly the same timepoint (e.g. green appearing on a traffic light). Alternatively you can use the audio tracks for synchronization, see https://github.com/benkno/audio-offset-finder. Beware: synchronization based on a single frame pair or a short audio excerpt will probably work only for a few minutes before and after the synchronized timepoint.

Is there a method of aligning 2D arrays based on features much like Image Registration

I have two 2D arrays: one consists of reference data and the other of measured data. While the matrices have the same shape, the measured data will not be perfectly centered; that is, the sample may not have been perfectly aligned with the detector, and it could be rotated or translated. I would like to align the matrices based on their features, much like image registration. I'm hoping someone can point me in the direction of a Python package capable of this, or let me know whether OpenCV can do this for NumPy arrays with arbitrary values that do not fit the mold of a typical .png or .jpg file.
I have aligned images using OpenCV's image registration functions. I have attempted to convert my arrays to images using PIL, with the intent of using the image registration functions within OpenCV. If needed I can post my sample code, but at this point I want to know whether there is a package with functions capable of doing this.

Artificially incorporate non-rigid motions in Images for generating data using Python/Matlab

One of the main challenges in medical imaging is data acquisition. Different types of motion (rigid and non-rigid) are possible during acquisition (body movement, breathing, etc.).
Suppose I want to artificially generate different types of motion in an image (e.g. a 3D NIfTI MRI image).
The motions can be global rigid motions, elastic deformations, or B-spline-based local deformations. The input would be a 3D image and the output a newly generated dataset incorporating the desired motion.
I was wondering if there is any package or software available to do this, but didn't find any. With this kind of feature we could validate our registration methods or simulate different deformation models.
I would like some help generating such artificial data using Python or MATLAB for NIfTI/DICOM 3D images.
Within Python, there are a couple options. The first is using the pydicom module for I/O along with numpy to represent/process the layers. In order to use this, you may additionally have to use matplotlib, scipy/scikit-image, or Pillow to visualize the input and generated output.
However, there is also VTK, which comes with both a Python interface and a DICOM reader/writer. Using vtkpython will let you create a fairly simple application for viewing and interacting with the data. For generating the motion layers, I think numpy may still be the best option with this route.
This page has a good introduction to using both of these methods: https://pyscience.wordpress.com/2014/09/08/dicom-in-python-importing-medical-image-data-into-numpy-with-pydicom-and-vtk/
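For the elastic-deformation part specifically, a common numpy/scipy approach is to apply a smoothed random displacement field with scipy.ndimage.map_coordinates. A sketch for a single 2D slice; the same idea extends to 3D volumes (loaded e.g. with nibabel for NIfTI or pydicom for DICOM) by adding a third displacement field, and the alpha/sigma values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=30.0, sigma=4.0, seed=0):
    """Apply a random elastic deformation to a 2D array.

    alpha scales the displacement magnitude (pixels), sigma controls
    its smoothness. Random but reproducible via `seed`.
    """
    rng = np.random.default_rng(seed)
    # Smooth random displacement fields for the two axes
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    y, x = np.meshgrid(np.arange(image.shape[0]),
                       np.arange(image.shape[1]), indexing="ij")
    # Resample the image at the displaced coordinates
    return map_coordinates(image, [y + dy, x + dx],
                           order=1, mode="reflect")

img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0       # a simple square "anatomy"
deformed = elastic_deform(img)
```

For global rigid motion you can instead compose a rotation/translation with scipy.ndimage.affine_transform, and SimpleITK's BSplineTransform covers the B-spline case; both accept the same numpy volumes.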
