Calculating and Plotting 2nd moment of image - python

I am trying to plot the 2nd moments onto an image file (the image file is a numpy array of the brightness distribution). I have a rough understanding that the 2nd moment is sort of like the moment of inertia (Ixx, Iyy), which is a tensor, but I am not too sure how to calculate it or how it would translate into two intersecting lines with the centroid at their intersection. I tried using scipy.stats.mstats.moment, but I am unsure what to pass as axis if I just want two 2nd moments that intersect at the centroid.
Also, it returns an array, but I am not exactly sure what the values in the array signify or how they relate to what I am going to plot (because the scatter method in the plotting module takes at least 2 corresponding values in order to plot anything).
Thank you.
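A minimal sketch of how this could be done directly with NumPy, assuming img is the 2D brightness array (the array here is made up for illustration): compute the centroid from the first moments, the central second moments Ixx, Iyy, Ixy, and draw the principal axes of that 2x2 tensor through the centroid.

```python
import numpy as np
import matplotlib.pyplot as plt

# img: 2D array of brightness values (hypothetical example data)
img = np.random.rand(200, 300)

y, x = np.indices(img.shape)          # pixel coordinate grids (row, col)
total = img.sum()

# centroid (first moments)
x_c = (x * img).sum() / total
y_c = (y * img).sum() / total

# central second moments (analogous to the inertia tensor)
Ixx = ((x - x_c) ** 2 * img).sum() / total
Iyy = ((y - y_c) ** 2 * img).sum() / total
Ixy = ((x - x_c) * (y - y_c) * img).sum() / total

# eigenvectors of the 2x2 moment tensor give the two principal axes
cov = np.array([[Ixx, Ixy], [Ixy, Iyy]])
evals, evecs = np.linalg.eigh(cov)

plt.imshow(img, cmap='gray')
for val, vec in zip(evals, evecs.T):
    half = 2 * np.sqrt(val)           # axis half-length ~ 2 sigma
    plt.plot([x_c - half * vec[0], x_c + half * vec[0]],
             [y_c - half * vec[1], y_c + half * vec[1]])
plt.scatter([x_c], [y_c])
plt.show()
```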

Related

How to map a rotated square onto another array and edit the values in python

I have a 15000x7500 numpy array representing the surface of a planet, and the 4 corners of a satellite's field of view projected onto that surface. I have these 4 corners as the indices of the corners of the square I want to edit.
This field of view can be at any angle, and I want to be able to change all the values of the array within this square from 0 to 1 to see what part of the surface it sees.
I know how to do it with indices if the square has the same orientation as the columns and rows, just not if it is off-axis.
I've added some pictures from Excel to try to demonstrate what I mean:
I know there is a numpy function to find diagonals; however, this works by taking the main diagonal with an offset, which isn't what I'm looking for. Is there a different numpy or other command I can use to do this?
Thanks for any help :)
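One way this kind of thing is often handled is to rasterize the quadrilateral defined by the 4 corners, for example with skimage.draw.polygon. A rough sketch under that assumption (the corner coordinates below are made up):

```python
import numpy as np
from skimage.draw import polygon

surface = np.zeros((15000, 7500), dtype=np.uint8)

# hypothetical corner indices of the (possibly rotated) field of view
rows = np.array([1000, 1200, 1600, 1400])   # row index of each corner
cols = np.array([2000, 2400, 2200, 1800])   # column index of each corner

# all (row, col) pairs inside the quadrilateral, clipped to the array shape
rr, cc = polygon(rows, cols, shape=surface.shape)
surface[rr, cc] = 1
```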

Error with triangulation using cv.triangulatePoints()

I am trying to find the corresponding 3d point from two images using the OpenCV function triangulatePoints() in Python. It takes as input the two projection matrices of both cameras and 2 corresponding image point coordinates (I have all four of these inputs): cv.triangulatePoints(projMatr1, projMatr2, projPoints1, projPoints2).
However, I can't seem to figure out in which form the 2 image point coordinates should be. I've looked up the documentation, which says:
projPoints1 2xN array of feature points in the first image. It can be also a cell array of feature points {[x,y], ...} or two-channel array of size 1xNx2 or Nx1x2.
However I try to use these coordinates as input, I always get an error. Does anyone know how I should input these?
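For reference, in the Python bindings the point arguments are usually accepted as 2xN float arrays, one column per correspondence. A sketch with made-up projection matrices and points:

```python
import numpy as np
import cv2 as cv

# hypothetical 3x4 projection matrices for the two cameras
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])]).astype(np.float64)

# corresponding image points, shape (2, N): row 0 = x, row 1 = y
pts1 = np.array([[100.0, 150.0], [120.0, 160.0]]).T   # 2x2 for two points
pts2 = np.array([[ 98.0, 149.0], [118.0, 158.0]]).T

points4d = cv.triangulatePoints(P1, P2, pts1, pts2)    # 4xN homogeneous
points3d = (points4d[:3] / points4d[3]).T              # Nx3 Euclidean
print(points3d)
```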

Laplace interpolation between known values in a matrix

I'm working on a heatmap generation program which hopefully will fill in the colors based on value samples provided from a building layout (this is not GPS based).
If I have only a few known data points such as these in a large matrix of unknowns, how do I interpolate the values in between in Python?
0,0,0,0,1,0,0,0,0,0,5,0,0,0,0,9
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,2,0,0,0,0,0,0,0,0,8,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,8,0,0,0,0,0,0,0,6,0,0,0,0,0,0
0,0,0,0,0,3,0,0,0,0,0,0,0,0,7,0
I understand that bilinear won't do it, and Gaussian will bring all the peaks down to low values due to the sheer number of surrounding zeros. This is obviously a matrix handling proposition, and I don't need it to be Bezier-curve smooth; just close enough to serve as a graphic representation would be fine. My matrix will end up being about 1500×900 cells in size, with approximately 100 known points.
Once the values are interpolated, I have written code to convert it all to colors, no problem. It's just that right now I'm getting single colored pixels sprinkled over a black background.
Proposing a naive solution:
Step 1: interpolate and extrapolate existing data points onto surroundings.
This can be done using "wave propagation" type algorithm.
The known points "spread out" their values onto their surroundings until the whole grid is "flooded" with known values. At the end of this stage you have a number of intersecting "disks", and no zeroes left.
Step 2: smoothen the result (using bilinear filtering or some other filtering).
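A rough sketch of this idea: the "flooding" stage can be approximated by giving every empty cell the value of its nearest known sample (here via scipy.ndimage.distance_transform_edt with return_indices=True), and the smoothing stage by a Gaussian filter; the sample positions and the sigma are illustrative only.

```python
import numpy as np
from scipy import ndimage

grid = np.zeros((1500, 900))
# hypothetical known samples: grid[row, col] = measured value
grid[10, 40] = 5.0
grid[700, 300] = 9.0
grid[1200, 850] = 3.0

known = grid != 0    # assumes a real sample is never exactly 0

# Step 1: "flood" every empty cell with the value of its nearest known cell
dist, inds = ndimage.distance_transform_edt(~known, return_indices=True)
filled = grid[tuple(inds)]

# Step 2: smooth the piecewise-constant result
smoothed = ndimage.gaussian_filter(filled, sigma=20)
```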
If you are able to use SciPy, then interp2d does exactly what you want. A possible problem with it is that it seems to not extrapolate smoothly, according to this issue. This means that all values near the walls are going to be the same as their closest neighbour points. This can be solved by putting thermometers in all 4 corners :)
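A sketch of the interpolation itself using scipy.interpolate.griddata (a close relative of interp2d that accepts scattered samples directly); the sample points below are made up, and a nearest-neighbour fill is used outside the convex hull because linear interpolation does not extrapolate:

```python
import numpy as np
from scipy.interpolate import griddata

# hypothetical known samples: (row, col) positions and their values
points = np.array([[0, 4], [0, 10], [2, 3], [4, 1], [5, 14]])
values = np.array([1.0, 5.0, 2.0, 8.0, 7.0])

rows, cols = np.mgrid[0:6, 0:16]                     # full grid to fill
interp = griddata(points, values, (rows, cols), method='linear')

# linear interpolation is NaN outside the convex hull of the samples;
# fall back to nearest-neighbour values there
nearest = griddata(points, values, (rows, cols), method='nearest')
interp[np.isnan(interp)] = nearest[np.isnan(interp)]
```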

How to find neighbors in binary image with given horizontal and vertical distance (Python)

I have an image (or several hundred of them) that needs to be analyzed. The goal is to find all black spots close to each other.
For example, all black spots within a horizontal distance of 160 pixels and a vertical distance of 40 pixels.
For now I just look at each pixel, and if it is black I call a recursive method to find its neighbours (I can post the code too if you want).
It works, but it's very slow. At the moment the script takes about 3-4 minutes, depending on image size.
Is there some easy/fast way to accomplish this (ideally a scikit-image method)? I'm using Python.
edit: I tried skimage.measure.find_contours; now I have an array of arrays containing the contours of the black spots. Now I only need to find the contours in the neighbourhood of these contours.
When you get the coordinates of the different black spots, rather than computing all distances between all pairs of black pixels, you can use a cKDTree (in scipy.spatial, http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html#scipy.spatial.cKDTree). The exact method of cKDTree to use depends on your exact criterion (you can for example use cKDTree.query_ball_tree to know whether there exists a pair of points belonging to two different labels, with a maximal distance that you give).
KDTrees are a great method to reduce the complexity of problems based on neighboring points. If you want to use KDTrees, you'll need to rescale the coordinates so that you can use one of the classical norms to compute the distance between points.
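A sketch of that rescaling idea, assuming coords is an (N, 2) array of (row, col) positions of the black pixels (or of the spot centroids): dividing the vertical coordinates by 40 and the horizontal ones by 160 turns the anisotropic criterion into a unit-radius query.

```python
import numpy as np
from scipy.spatial import cKDTree

# hypothetical (row, col) coordinates of black pixels / spot centroids
coords = np.array([[10, 100], [30, 220], [400, 500]], dtype=float)

# rescale so that "within 40 px vertically and 160 px horizontally"
# becomes "within distance 1"
scaled = coords / np.array([40.0, 160.0])

tree = cKDTree(scaled)
pairs = tree.query_pairs(r=1.0)            # index pairs that are "close"
                                           # (p=np.inf would give a box
                                           # instead of an ellipse)

# or, with two separate groups of points, query one tree against another
other = cKDTree(scaled)
neighbours = tree.query_ball_tree(other, r=1.0)
```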
Disclaimer: I'm not proficient with the scikit image library at all, but I've tackled similar problems using MATLAB so I've searched for the equivalent methods in scikit, and I hope my findings below help you.
First you can use skimage.measure.label, which returns label_image, i.e. an image where all connected regions are labelled with the same number. I believe you should call this function with background=255, because from your description it seems that the background in your images is the white region (hence the value 255).
This is essentially an image where the background pixels are assigned the value 0 and the pixels that make up each (connected) spot are assigned the value of an integer label, so all the pixels of one spot will be labelled with the value 1, the pixels of another spot will be labelled with the value 2, and so on. Below I'll refer to "spots" and "labelled regions" interchangeably.
You can then call skimage.measure.regionprops, which takes as input the label_image obtained in the previous step. This function returns a list of RegionProperties (one for each labelled region), each of which summarizes the properties of that labelled region.
Depending on your definition of
The goal is to find all black spots close to each other.
there are different fields of the RegionProperties that you can use to help solve your problem:
bbox gives you the set of coordinates of the bounding box that contains that labelled region,
centroid gives you the coordinates of the centroid pixel of that labelled region,
local_centroid gives you the centroid relative to the bounding box bbox
(Note there are also area and bbox_area properties which you can use to decide whether to throw away very small spots that you might not be interested in, thus reducing computation time when it comes to comparing proximity of each pair of spots)
If you're looking for a coarse comparison, then comparing the centroid or local_centroid between each pair of labelled regions might be enough.
Otherwise you can use the bbox coordinates to measure the exact distance between the outer bounds of any two regions.
If you want to base the decision on the precise distance between the pixel(s) of each pair of regions that are closest to each other, then you'll likely have to use the coords property.
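A sketch of the label/regionprops route using the coarse centroid comparison, assuming img is a 2D uint8 image with white background (255) and black spots (0); the distance thresholds are the ones from the question:

```python
import numpy as np
from skimage import measure

# img: 2D uint8 image, white background (255), black spots (0)
img = np.full((500, 500), 255, dtype=np.uint8)
img[100:110, 100:120] = 0                      # hypothetical spot
img[130:140, 230:250] = 0                      # another spot

label_image = measure.label(img, background=255)
regions = measure.regionprops(label_image)

close_pairs = []
for i, a in enumerate(regions):
    for b in regions[i + 1:]:
        dr = abs(a.centroid[0] - b.centroid[0])   # vertical distance
        dc = abs(a.centroid[1] - b.centroid[1])   # horizontal distance
        if dr <= 40 and dc <= 160:
            close_pairs.append((a.label, b.label))
```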
If your input image is binary, you could separate your regions of interest as follows:
"grow" all the regions by the expected distance (actually half of it, as you grow from "both sides of the gap") with binary_dilation, where the structure is a kernel (e.g. rectangular: http://scikit-image.org/docs/dev/api/skimage.morphology.html#skimage.morphology.rectangle) of, let's say, 20x80pixels;
use the resulting mask as an input to skimage.measure.label to assign different values for different regions' pixels;
multiply your input image by the mask created above to zero dilated pixels.
Here are the results of the proposed method on your image with kernel = rectangle(5, 5):
Dilated binary image (output of step 1):
Labeled version of the above (output of step 2):
Multiplication results (output of step 3):
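A sketch of those three steps, assuming mask is a boolean array that is True on the black spots; the rectangle size follows the half-of-40x160 suggestion above and is illustrative:

```python
import numpy as np
from skimage import measure
from skimage.morphology import binary_dilation, rectangle

# mask: True where the image is black (hypothetical example)
img = np.full((500, 500), 255, dtype=np.uint8)
img[100:110, 100:120] = 0
mask = img == 0

# Step 1: grow each spot with a rectangular footprint
# (half of the 40x160 criterion, as suggested above)
grown = binary_dilation(mask, rectangle(20, 80))

# Step 2: spots whose grown versions touch end up with the same label
labels = measure.label(grown)

# Step 3: keep the labels only on the original (undilated) spot pixels
spot_labels = labels * mask
```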

how to merge images in intensity plot

I am doing a project about image processing, and I asked about how to solve a very large overdetermined system of linear equations here. Before I can figure out a better way to accomplish the task, I just split the image into four equal parts and solve the systems of equations separately. The result is shown in the attached file.
The image represents the surface height of a pictured object. You can think of the two axes as the x and y axis, and the z-axis is the axis coming out of the screen. I solved the very large systems of equations to get z(x,y), which is displayed in this intensity plot. I have the following questions:
The lower left part is not shown because when I solved the equations for that region, the intensity plot was affected by some extreme values. One or two pixels have an intensity (which represents the height) as high as 60, and because of the colourbar scaling, the rest of the image (whose heights range only from -15 to 9) appears largely the same colour. I am still figuring out why those one or two pixels give such abnormal results, but if I do get them, how can I eliminate/ignore them so the rest of the image can be seen properly?
I am using the imshow() in matplotlib. I also tried using a 3D plot, with the z-axis representing the surface height, but the result is not good. Are there any other visualization tools that can display the results in a nice way (preferably showing it in a 3D way) given that I have obtained z(x,y) for many pairs of (x,y)?
The four separate parts are clearly visible. Are there any ways to merge the separate parts together? First, I am thinking of sharing the central column and row. e.g. The top-left region spans from column=0 to 250, and the top-right region spans from column=250 to the right. In this case, values in col=250 will be calculated twice in total, and the values in each region will almost certainly differ from the other one slightly. How to reconcile the two slightly different values together to combine the different regions? Just taking the average of the two, do something related to curve fitting to merge the two regions, or what? Or should I stick to col=0 to 250, then col=251 to rightmost?
thanks
About point 2: you could try hill shading. See the matplotlib example and/or the novitsky blog
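A sketch of hill shading with matplotlib's LightSource, assuming z is the 2D array of surface heights z(x, y) (the surface below is synthetic):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LightSource

# z: hypothetical surface height array z(x, y)
y, x = np.mgrid[0:500, 0:500]
z = np.sin(x / 50.0) * np.cos(y / 70.0) * 5.0

ls = LightSource(azdeg=315, altdeg=45)       # light from the north-west
shaded = ls.shade(z, cmap=plt.cm.viridis, vert_exag=1, blend_mode='overlay')

plt.imshow(shaded)
plt.axis('off')
plt.show()
```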
