I'm in the process of identifying objects whose float value is greater than a certain threshold in a 2-D numpy array. I then need to determine the length of the major axis of each object and make sure that the object's major axis length satisfies a certain threshold in kilometers.
I am able to identify the objects I want in my 2-D numpy array using the scipy.ndimage.label function. I can then determine the length of each object's major axis using scikit-image's regionprops (skimage.measure.regionprops).
However, I am unsure what the units of each object's length are, since the 2-D numpy array by itself carries no coordinate information. The array is essentially a dataset that maps to a subdomain on the surface of the globe. Additionally, I have two other 2-D numpy arrays of the same shape as my data array: one contains the latitude of each grid point and the other the longitude. I believe I somehow need to use the lat/lon arrays to determine the length of an object's major axis in kilometers, but I have no idea how.
This is the code I have so far:
from scipy import ndimage
from skimage.measure import regionprops
import numpy as np
# 2-D numpy array with data.
data
# 2-D numpy arrays with latitude and longitude coordinates that are same grid as data array.
lat
lon
# Allow the label function to match objects diagonally.
struct = np.ones((3, 3), dtype=bool)
# Find objects in the data array.
labl, n_features = ndimage.label(data >= 35, structure=struct)
# Measure each labeled object, including its major axis length.
props = regionprops(labl)
# Loop through each object.
for p in props:
    # Object's major axis length, in pixels.
    length = p.major_axis_length
    # (some code to compute the major axis length in kilometers?)
    if length < 125:  # (125 is in km)
        pass  # (keep object)
Any help would be greatly appreciated. Thanks!
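One possible approach (a sketch, not from the original post; `haversine_km` and `axis_length_km` are illustrative names): use the region's centroid, orientation, and pixel-space major axis length to find the two endpoint grid cells, then measure the great-circle distance between them using the lat/lon arrays. This assumes skimage's convention that `orientation` is the angle between the row axis and the major axis, and it treats the axis as straight between its endpoints, which is reasonable for small objects.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two points in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = np.radians(lat1), np.radians(lat2)
    a = (np.sin((p2 - p1) / 2) ** 2
         + np.cos(p1) * np.cos(p2) * np.sin(np.radians(lon2 - lon1) / 2) ** 2)
    return 2 * r * np.arcsin(np.sqrt(a))

def axis_length_km(centroid, orientation, axis_pixels, lat, lon):
    """Approximate an object's major axis length in km: step half the
    pixel-space axis away from the centroid in both directions, then
    measure the great-circle distance between the two endpoint cells."""
    r0, c0 = centroid
    dr = np.cos(orientation) * axis_pixels / 2.0
    dc = np.sin(orientation) * axis_pixels / 2.0
    # Round to the nearest grid cell and clip so the endpoints stay in bounds.
    r1, r2 = np.clip(np.rint([r0 - dr, r0 + dr]).astype(int), 0, lat.shape[0] - 1)
    c1, c2 = np.clip(np.rint([c0 - dc, c0 + dc]).astype(int), 0, lat.shape[1] - 1)
    return haversine_km(lat[r1, c1], lon[r1, c1], lat[r2, c2], lon[r2, c2])
```

Inside the loop from the question, this would be called as `length_km = axis_length_km(p.centroid, p.orientation, p.major_axis_length, lat, lon)` and compared against the 125 km threshold.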
I am trying to plot a 3D numpy array (256 x 256 x 256) as essentially an array of points, which should be colored by their value. I keep getting something like this
If I multiply the data by 1000, I get this:
My data is a 3D np array composed of slices, and the slices look like this:
There are about 50 or so slices, so the 3D visualization should show a sphere. I can visualize the individual slices just fine.
My code is a modified version of this example: https://docs.pyvista.org/examples/02-plot/volume.html, using the technique given in this link to make a pyvista.UniformGrid object out of a 3D numpy array: https://docs.pyvista.org/examples/00-load/create-uniform-grid.html
def plot_3d_pyvista(self):
    import pyvista as pv
    values = self.reconstructed_source_3D
    # Create the spatial reference
    grid = pv.UniformGrid()
    # Set the grid dimensions: shape because we want to inject our values on
    # the POINT data
    grid.dimensions = values.shape
    # Edit the spatial reference
    #grid.origin = (1, 1, 1)  # The bottom left corner of the data set
    #grid.spacing = (1, 1, 1)  # These are the cell sizes along each axis
    # Add the data values to the point data
    grid.point_data["values"] = values.flatten(order="F")
    p = pv.Plotter()
    p.add_volume(grid)
    p.show()
If I use a sphere, e.g. a 3D numpy array of zeros with only the central spherical elements set to 1, I get something like this, which clearly shows a sphere but is way too transparent:
Alternatively, if anyone knows a way using a different package to plot a volume like this, I could use that. I just need a way to visualize what the 3D looks like.
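Washed-out volume renders are often an opacity-mapping issue rather than a data problem. One thing worth trying (a sketch; `to_uint8` is an illustrative helper, not part of the original code) is rescaling the volume onto the full 0-255 uint8 range before handing it to the plotter, which would also explain why multiplying the data by 1000 changed the picture:

```python
import numpy as np

def to_uint8(values):
    """Rescale a float volume onto 0-255 so the volume renderer's
    default opacity mapping has a full scalar range to work with."""
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    if span == 0:
        return np.zeros(v.shape, dtype=np.uint8)
    return np.rint((v - v.min()) / span * 255).astype(np.uint8)

# Synthetic stand-in for the reconstructed volume: a solid sphere.
z, y, x = np.mgrid[:64, :64, :64]
sphere = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2).astype(float)
vol8 = to_uint8(sphere)
```

The rescaled volume then replaces `values` in the function above (`grid.point_data["values"] = vol8.flatten(order="F")`). Passing one of pyvista's named opacity transfer functions, e.g. `p.add_volume(grid, opacity="sigmoid")`, can also help suppress the near-zero background that makes the render look transparent.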
Recently I was struggling to extract the pixel values of a 3D volume (a numpy array) at specific spatial coordinates taken from an STL object.
The STL object spatially overlaps the 3D volume, but the volume has no coordinate information, so I don't know how to pick the pixel values corresponding to the STL coordinates.
Any idea?
If the STL object is truly in the 3d volume's coordinate space, then you can simply use the STL's coordinates as indices to look up values in the 3d array. This lookup amounts to nearest-neighbor interpolation of the 3d image. For better-looking results you'd want to do linear (or even cubic) interpolation of the nearby pixels.
In most 3d imaging tasks, those coordinate spaces do not align. So there is a transform to go from world space to 3d volume space. But if all you have is a 3d numpy array, then there is no transformation information.
Update:
To index into the 3d volume, take the X, Y, Z coordinates of your point from the STL object and convert them into integer values I, J, K. Then look up the value in the numpy array using I, J, K as indices: np_array[K][J][I]. I think you have to reverse the order of the indices because of the array ordering numpy uses.
When you say the 3d array and the STL align in python, how are you showing that? The original DICOM or NIfTI files certainly have world-coordinate transformations in their metadata.
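The integer lookup described above can be sketched like this (`sample_volume` is an illustrative name; this assumes the STL points are already expressed in voxel coordinates):

```python
import numpy as np

def sample_volume(vol, points_xyz):
    """Nearest-neighbour lookup: round each (x, y, z) point to the closest
    voxel and index the (z, y, x)-ordered numpy array in reversed order."""
    pts = np.rint(np.asarray(points_xyz, dtype=float)).astype(int)
    # Clip so points just outside the volume don't raise an IndexError.
    for axis, size in enumerate(vol.shape[::-1]):  # sizes as (x, y, z)
        pts[:, axis] = np.clip(pts[:, axis], 0, size - 1)
    i, j, k = pts[:, 0], pts[:, 1], pts[:, 2]
    return vol[k, j, i]

vol = np.arange(27).reshape(3, 3, 3)  # indexed as vol[k, j, i]
print(sample_volume(vol, [[1.2, 0.4, 0.0], [2.0, 2.0, 2.0]]))  # [ 1 26]
```

For the smoother linear or cubic interpolation mentioned above, scipy.ndimage.map_coordinates does the same lookup with an `order` parameter controlling the spline degree.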
I have a boolean array with one connected component of True values, the border of which I would like to convert to a polygon, e.g. in shapely.
Assuming my array is img, I can get the border indices like this
import numpy as np
from skimage.morphology import binary_erosion
border_indices = np.transpose(np.nonzero(np.logical_xor(binary_erosion(img), img)))
but just feeding those into a shapely.geometry.Polygon object does not work because the points are not ordered along the boundary, but by increasing x and y values.
It may be possible to use alpha shapes to solve this (note that I'm not looking for the convex hull), but maybe someone can suggest a simpler way of getting to the bounding polygon, ideally directly operating on the original array.
It sounds like rasterio.features.shapes is what you are looking for. A simple example that should illustrate the procedure:
import rasterio.features
import shapely.geometry
import numpy as np
im = np.zeros([5, 5], dtype=np.uint8)
im[1:-1, 1:-1] = 1
shapes = rasterio.features.shapes(im)
shapes is a generator with pairs of (geometry, value). To get the geometry corresponding to where the value is equal to 1:
polygons = [shapely.geometry.Polygon(shape[0]["coordinates"][0]) for shape in shapes if shape[1] == 1]
This creates a list of shapely polygons corresponding to the areas in the array where the value is equal to 1.
print(polygons)
[<shapely.geometry.polygon.Polygon object at 0x7f64bf9ac9e8>]
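If adding rasterio as a dependency is undesirable, scikit-image (which the question already uses) can also produce an ordered boundary directly from the array with skimage.measure.find_contours; a minimal sketch on the same 5x5 example:

```python
import numpy as np
from skimage.measure import find_contours

im = np.zeros((5, 5))
im[1:-1, 1:-1] = 1

# Contours at the 0.5 level trace the border of the True region in order;
# each contour is an (N, 2) array of (row, col) points, closed on itself.
(contour,) = find_contours(im, 0.5)
assert np.array_equal(contour[0], contour[-1])  # ordered and closed
```

Because the points come back ordered, they can go straight into shapely, e.g. `shapely.geometry.Polygon(contour[:, ::-1])` (the column swap turns (row, col) into (x, y)). Note that find_contours interpolates between pixel centers, so the vertices sit half a pixel inside the rasterio version's cell-edge polygon.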
This must be possible but I am unsure as to how to approach it.
I have a geographical domain, with a set number of lat and lons. Using these, I am able to plot a simple Basemap of the domain:
fp_mhd = name.footprints('path/to/file')
domain_lon = fp_mhd.lon
domain_lat = fp_mhd.lat
### Construct Basemap ###
m = Basemap(resolution='c', projection='gall',
            llcrnrlat=np.min(domain_lat),
            urcrnrlat=np.max(domain_lat),
            llcrnrlon=np.min(domain_lon),
            urcrnrlon=np.max(domain_lon))
What I need is some way of distinguishing land from ocean within this domain, returning the result as a 2-D array, i.e. 1s for land and 0s for ocean (though the exact values don't matter). Each point of the array corresponds to a specific lat and lon, so if there were 100 lats and 200 lons, the array would contain 20000 1s and 0s. I wondered if there was some way to convert the Basemap object to such an array, but failed to achieve this. Is it possible?
Thanks in advance!
Have a look at the following function. You should be able to extract the mask, as it returns a numpy masked array; .mask then gives you a Boolean array.

mpl_toolkits.basemap.maskoceans(lonsin, latsin, datain, inlands=True,
                                resolution='l', grid=5)

Returns a masked array the same shape as datain with "wet" points masked.
Basemap documentation
Numpy Masked Array Documentation
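The step from the masked array to the 1/0 land array looks like this; a synthetic numpy masked array stands in for what maskoceans would return, so Basemap itself isn't needed to show the conversion:

```python
import numpy as np
import numpy.ma as ma

# Stand-in for the masked array maskoceans() returns: "wet" (ocean)
# points are masked, land points keep their data values.
data = np.arange(6.0).reshape(2, 3)
wet = np.array([[True, False, False],
                [False, True, False]])
masked = ma.masked_array(data, mask=wet)

# 1 for land, 0 for ocean, same shape as the domain grid.
land_sea = (~ma.getmaskarray(masked)).astype(int)
print(land_sea)
```

With real data the call would be masked = maskoceans(domain_lon, domain_lat, data) on the same grids used to build the Basemap above, and land_sea would have one entry per lat/lon point.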
I am having some problems with affine transformation coefficients while creating a new GeoTIFF file. What I am doing is ETL on a scientific dataset that results in a 2D ndarray, along with a pair of meshgrid ndarrays containing the lat and lon. Both the meshgrids and the dataset array have the same dimensions, 645 x 980. From what I understand, the GeoTIFF requires a list of affine coefficients when created from Python GDAL via the SetGeoTransform() method. The list has the form [xllcorner, xrotation, x_cellsize, yllcorner, yrotation, y_cellsize]. My approach is similar to what is outlined here: http://adventuresindevelopment.blogspot.com/2008/12/python-gdal-adding-geotiff-meta-data.html
This is where I run into problems. I calculate the xllcorner and yllcorner using the min() of the lat and lon meshgrid arrays respectively, and I manually calculate the x and y cell sizes with the formula (max - min) / dimension size, using the x-axis size of the lons meshgrid and the y-axis size of the lats meshgrid. When I then try to write out the array band via GetRasterBand().WriteArray(), I get this error message:
Traceback (most recent call last):
...
raise ValueError("array larger than output file, or offset off edge")
ValueError: array larger than output file, or offset off edge
Therefore I assume that I have composed my affine coefficients incorrectly, but given the data this makes no sense to me. I even made sure the Spatial Reference System was set to WGS84 before creating the affine coefficients. So my question is: how do I properly create the affine coefficients from lat/lon meshgrids and a data array that share common dimensions? I suspect my cell size calculation can't simply be lat/lon differences, but I am not sure.
This error is typically shown when the array shapes do not match. For instance, check what shape the raster expects:
band = src.GetRasterBand(1)
arr = band.ReadAsArray()
print(arr.shape) # (656L, 515L)
This will need to be the shape of the numpy array to be written:
assert other_array.shape == arr.shape
band.WriteArray(other_array)
And to raise the same ValueError, change the shape so it is longer in one dimension, e.g.:
band.WriteArray(other_array.T)
As for affine transformations, this is probably not raising any errors, as it is often just stored as data. GIS rasters typically register the world coordinate in the upper-left corner, and use a -dy value to count rows downwards. However, using a lower-left corner with +dy is usually fine by most software. It will just be upside down when comparing the array as a printed matrix versus mapped raster.
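For reference, GDAL's SetGeoTransform() expects the coefficient order (upper-left x, x cell size, row rotation, upper-left y, column rotation, y cell size), with the y cell size negative for the usual upper-left registration, which differs from the corner/rotation/cellsize ordering given in the question. A sketch of building it from regular lat/lon meshgrids (`geotransform_from_grids` is an illustrative name; it assumes the grids give cell centers, hence the half-cell shift to the outer corner):

```python
import numpy as np

def geotransform_from_grids(lats, lons):
    """GDAL-style geotransform (ulx, dx, 0, uly, 0, -dy) from regularly
    spaced meshgrids with rows varying in latitude, columns in longitude."""
    dx = (lons.max() - lons.min()) / (lons.shape[1] - 1)
    dy = (lats.max() - lats.min()) / (lats.shape[0] - 1)
    # Register the upper-left cell corner and count rows downward with -dy.
    return (lons.min() - dx / 2, dx, 0.0, lats.max() + dy / 2, 0.0, -dy)

lons, lats = np.meshgrid(np.linspace(0, 9, 10), np.linspace(30, 34, 5))
gt = geotransform_from_grids(lats, lons)
```

Note that with -dy the raster's row 0 is the northernmost row, so if the data array's latitudes increase with row index it would need np.flipud() before WriteArray(), matching the upside-down effect described above.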